Current image anonymization techniques largely focus on localized pseudonymization: they modify identifiable features such as faces or full bodies and evaluate anonymity through metrics such as detection and re-identification rates. However, this approach often overlooks information present in the rest of the image after anonymization that can still compromise privacy, such as specific locations, objects/items, or unique attributes. Acknowledging the pivotal role of human judgment in anonymity, our study conducts a thorough analysis of perceptual anonymization, exploring its spectral nature and its critical implications for image privacy assessment, particularly in light of regulations such as the General Data Protection Regulation (GDPR). To facilitate this, we curated a dataset specifically tailored for assessing anonymized images. We introduce a learning-based metric, PerceptAnon, which is tuned to align with the human Perception of Anonymity. PerceptAnon evaluates both original-anonymized image pairs and anonymized images alone. Trained on human annotations, our metric encompasses both anonymized subjects and their contextual backgrounds, thus providing a comprehensive evaluation of privacy vulnerabilities. We envision this work as a milestone for understanding and assessing image anonymization, and as a foundation for future research.
Dataset
We introduce a new dataset tailored for studying anonymization.
Two human annotation setups:
HA1: annotators see only the anonymized image
HA2: annotators see the original-anonymized image pair
PerceptAnon Metric
A CNN-based metric trained on human annotation scores.
HA1: a single CNN taking the anonymized image as input
HA2: a Siamese network taking the original-anonymized image pair as input
Spearman's (ρ) and Kendall's (τ) correlations of traditional image assessment metrics and PerceptAnon with human annotations on our dataset splits. PerceptAnon consistently achieves the best correlation with human perception.
Impact of framing PerceptAnon as a classification vs. regression problem at varying levels of granularity.
Sample Grad-CAM visualizations using PerceptAnon, demonstrating its focus on potential privacy-compromising cues.
@inproceedings{patwari2024perceptanon,
title={PerceptAnon: Exploring the Human Perception of Image Anonymization Beyond Pseudonymization for GDPR},
author={Patwari, Kartik and Chuah, Chen-Nee and Lyu, Lingjuan and Sharma, Vivek},
booktitle={ICML},
year={2024}
}