Presented by: 
Hiroaki Kikuchi (Meiji University)
When: 
Monday, December 5, 2016 - 14:00 to 14:20
Venue: 
INI Seminar Room 1
Abstract: 

One of the main difficulties is to design and formalize realistic adversary models, taking into account the adversary's background knowledge and inference capabilities. Many privacy models currently exist in the literature, such as k-anonymity and its extensions (e.g., l-diversity), as well as differential privacy. However, these models are not directly comparable, and what appears to be the optimal anonymization method under one model is not necessarily the best one under another. To assess the privacy risks of publishing a particular anonymized dataset, it is therefore necessary to evaluate the risks of datasets anonymized from a common original dataset.
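
As a concrete illustration of the first of these models, the following minimal sketch checks whether a table satisfies k-anonymity over a chosen set of quasi-identifiers. The column names, toy records, and value of k are hypothetical and are not taken from the competition material.

from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records (the classical k-anonymity test)."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical toy data; the real competition data concern online retail payments.
records = [
    {"age": "30-39", "zip": "100-00", "amount": 120},
    {"age": "30-39", "zip": "100-00", "amount": 45},
    {"age": "40-49", "zip": "100-01", "amount": 300},
]
print(is_k_anonymous(records, ["age", "zip"], k=2))  # False: one group has only one record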

 

The main objective of the competition is precisely to investigate the strengths and limits of existing anonymization methods, from both theoretical and practical perspectives. More precisely, given a common dataset containing personal data and a history of online retail payments, participants attempt to anonymize it so that re-identification of its records is impossible while data utility is preserved. They are also encouraged to try to re-identify the datasets anonymized by the other participants. Using pre-defined utility functions and re-identification algorithms, the security and the utility of an anonymized dataset are evaluated automatically as the maximum re-identification probability and the mean average error between the anonymized and the original data, respectively. Throughout the competition, we aim to gain an in-depth understanding of how to quantify the privacy level provided by a particular anonymization method, as well as the achievable trade-off between privacy and utility of the resulting data. The outcomes of the meeting will greatly benefit the privacy community.
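
The sketch below illustrates how the two evaluation criteria mentioned above could be computed; the function names, the toy data, and the reading of "mean average error" as a mean absolute error over numerical attributes are illustrative assumptions, not the competition's official evaluation code.

import numpy as np

def reidentification_probability(true_ids, guessed_ids):
    """Fraction of anonymized records whose original record an attack
    re-identifies correctly; the security score is the maximum of this
    value over all submitted re-identification attempts."""
    return float(np.mean(np.asarray(true_ids) == np.asarray(guessed_ids)))

def utility_error(original, anonymized):
    """Mean absolute error between numerical attributes of the original
    and anonymized records (one reading of 'mean average error')."""
    original = np.asarray(original, dtype=float)
    anonymized = np.asarray(anonymized, dtype=float)
    return float(np.mean(np.abs(original - anonymized)))

# Hypothetical example: three attacks against a five-record table.
true_ids = [0, 1, 2, 3, 4]
attacks = [[0, 1, 2, 3, 4], [0, 1, 2, 4, 3], [4, 3, 2, 1, 0]]
security = max(reidentification_probability(true_ids, a) for a in attacks)
utility = utility_error([120, 45, 300], [100, 50, 310])
print(security, utility)  # worst-case re-identification rate and average perturbation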





Presentation material: