PerceptAnon: Exploring the Human Perception of Image Anonymization Beyond Pseudonymization for GDPR

1University of California, Davis
2SonyAI
ICML 2024

*Equal Contribution

Work done during internship at SonyAI
[Figure: Motivation]

General Data Protection Regulation (GDPR): "the use of additional information can lead to identification of individuals".

Pseudonymization anonymizes only local regions, which can leave privacy-compromising cues elsewhere in the image.

PerceptAnon focuses on identifying and quantifying privacy-compromising cues in face/full-body pseudonymized images.

Abstract

Current image anonymization techniques, which largely focus on localized pseudonymization, typically modify identifiable features such as faces or full bodies and evaluate anonymity through metrics such as detection and re-identification rates. However, this approach often overlooks information present in the entire image post-anonymization that can compromise privacy, such as specific locations, objects/items, or unique attributes. Acknowledging the pivotal role of human judgment in anonymity, our study conducts a thorough analysis of perceptual anonymization, exploring its spectral nature and its critical implications for image privacy assessment, particularly in light of regulations such as the General Data Protection Regulation (GDPR). To facilitate this, we curated a dataset specifically tailored for assessing anonymized images. We introduce a learning-based metric, PerceptAnon, which is tuned to align with the human Perception of Anonymity. PerceptAnon evaluates both original-anonymized image pairs and solely anonymized images. Trained using human annotations, our metric encompasses both anonymized subjects and their contextual backgrounds, thus providing a comprehensive evaluation of privacy vulnerabilities. We envision this work as a milestone for understanding and assessing image anonymization, and a foundation for future research.

Motivation

Comparing traditional image assessment metrics and PerceptAnon against human anonymity assessments on original-anonymized image pairs.

[Figure: PerceptAnon comparison metrics]

Method Overview

[Figure: Dataset examples]

Dataset
We introduce a new dataset tailored for studying anonymization, with two human annotation setups (a data-loading sketch follows the list):
HA1: annotators see only the anonymized image
HA2: annotators see the original and anonymized images as a pair
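
Below is a minimal PyTorch sketch of how such annotations could be loaded for training, assuming a CSV file with orig_path, anon_path, and score columns; the file layout and column names are illustrative, not the released dataset format.

import csv
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class AnonAnnotations(Dataset):
    """Loads (image(s), human score) samples for the HA1 or HA2 setup."""
    def __init__(self, csv_path, setup="HA2", image_size=224):
        assert setup in {"HA1", "HA2"}
        self.setup = setup
        with open(csv_path) as f:
            self.rows = list(csv.DictReader(f))
        self.tf = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        anon = self.tf(Image.open(row["anon_path"]).convert("RGB"))
        score = torch.tensor(float(row["score"]))
        if self.setup == "HA1":
            return anon, score          # HA1: anonymized image only
        orig = self.tf(Image.open(row["orig_path"]).convert("RGB"))
        return orig, anon, score        # HA2: original-anonymized pair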

[Figure: PerceptAnon method]

PerceptAnon Metric
A CNN-based metric trained on human annotation scores (a model sketch follows the list).
HA1: a single CNN taking the anonymized image as input
HA2: a Siamese network taking the original-anonymized image pair as input
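
A minimal PyTorch sketch of the two variants, assuming an ImageNet-pretrained ResNet-50 backbone and a small scalar head; the backbone, head sizes, and regression objective are illustrative choices rather than the exact training configuration.

import torch
import torch.nn as nn
from torchvision import models

def resnet_features():
    # Assumed backbone: ResNet-50 with its classifier removed (2048-d features).
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = nn.Identity()
    return backbone

class PerceptAnonHA1(nn.Module):
    """Single CNN: anonymized image -> predicted anonymity score."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet_features()
        self.head = nn.Linear(2048, 1)

    def forward(self, anon):
        return self.head(self.backbone(anon)).squeeze(-1)

class PerceptAnonHA2(nn.Module):
    """Siamese CNN: original-anonymized pair -> predicted anonymity score."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet_features()   # shared weights for both branches
        self.head = nn.Sequential(
            nn.Linear(2 * 2048, 512), nn.ReLU(), nn.Linear(512, 1)
        )

    def forward(self, orig, anon):
        feats = torch.cat([self.backbone(orig), self.backbone(anon)], dim=-1)
        return self.head(feats).squeeze(-1)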

Results

Spearman's (ρ) and Kendall's (τ) correlations of traditional image assessment metrics and PerceptAnon with human annotations on our dataset splits. PerceptAnon consistently achieves the best correlation with human perception.

HA1

[Table: HA1 results]

HA2

[Table: HA2 results]
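
For reference, the rank correlations reported above can be computed with SciPy; the arrays below are placeholder values, not numbers from the paper.

import numpy as np
from scipy.stats import kendalltau, spearmanr

human_scores = np.array([3.0, 1.5, 4.0, 2.0, 5.0])        # placeholder annotations
metric_scores = np.array([0.62, 0.20, 0.71, 0.33, 0.90])  # placeholder metric outputs

rho, _ = spearmanr(metric_scores, human_scores)
tau, _ = kendalltau(metric_scores, human_scores)
print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")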

Impact of training the PerceptAnon metric as a classification vs. regression problem with varying levels of granularity.

[Figures: Kendall's and Spearman's correlation results]
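
A rough sketch of how the output head could be swapped between regression and classification at a chosen granularity; the score range, binning scheme, and class counts are assumptions made for illustration.

import torch
import torch.nn as nn

def make_head(in_dim=2048, mode="regression", num_classes=5):
    # Regression: predict a continuous score; classification: predict a score bin.
    if mode == "regression":
        return nn.Linear(in_dim, 1), nn.MSELoss()
    return nn.Linear(in_dim, num_classes), nn.CrossEntropyLoss()

def bin_scores(scores, num_classes=5, score_max=5.0):
    # Map continuous human scores (assumed in [0, score_max]) to discrete labels.
    bins = torch.clamp((scores / score_max) * num_classes, max=num_classes - 1)
    return bins.long()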

Sample Grad-CAM visualizations using PerceptAnon, demonstrating its focus on potential privacy-compromising cues.

[Figures: PerceptAnon Grad-CAM examples]
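
Heat maps like these can be produced with a standard Grad-CAM recipe; the sketch below hooks the last convolutional block of the assumed ResNet backbone from the model sketch earlier and is generic Grad-CAM, not the authors' exact visualization code.

import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer):
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    model.eval()
    score = model(image.unsqueeze(0))   # predicted anonymity score
    model.zero_grad()
    score.sum().backward()              # gradients of the score w.r.t. the feature maps
    h1.remove(); h2.remove()

    weights = grads["a"].mean(dim=(2, 3), keepdim=True)    # global-average-pool the gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))        # weighted sum of feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[1:], mode="bilinear")
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze()                # heat map in [0, 1], same size as the input

# Example (names refer to the model sketch above, not released code):
# model = PerceptAnonHA1()
# heatmap = grad_cam(model, anon_image, model.backbone.layer4)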

BibTeX

@inproceedings{patwari2024perceptanon,
  title={PerceptAnon: Exploring the Human Perception of Image Anonymization Beyond Pseudonymization for GDPR},
  author={Patwari, Kartik and Chuah, Chen-Nee and Lyu, Lingjuan and Sharma, Vivek},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2024}
}