In today’s digital age, generative artificial intelligence tools are increasingly integrated into everyday life. Individuals routinely share personal images, profile pictures, and identifying photos on various platforms, often at the risk of exposing private information.
To address these concerns, researchers at Purdue University—Vaneet Aggarwal, Dipesh Tamboli, and Vineet Punyamoorty—have developed an innovative system that manages photos both before and after they are uploaded to AI editing platforms.
This research is currently patent-pending and has been published in the peer-reviewed journal IEEE Transactions on Artificial Intelligence.
Aggarwal serves as a University Faculty Scholar and the Reilly Professor of Industrial Engineering, with additional roles in the Department of Computer Science and the Elmore Family School of Electrical and Computer Engineering.
Tamboli, who holds a doctoral degree, works alongside Punyamoorty, a doctoral candidate, in Aggarwal’s research group.
According to Tamboli, modern generative AI tools require users to upload full, untouched images for effective editing, yet existing privacy solutions often fall short. They may blur sensitive areas, rendering the images unusable for high-quality editing; local privacy tools struggle to replicate the realism of cloud-based generative models; and standard privacy techniques fail to safeguard the pixels that carry biometric information.
Aggarwal emphasizes the lack of control users have once an image is shared with a third-party AI model, as they can no longer manage its retention or distribution.
“Our system empowers users to mask sensitive sections of their photos, such as their faces, prior to sending them to an AI editing service,” explains Tamboli. “These areas are masked locally on the user’s device using a detailed outline. Once the image is edited by AI, our system reintegrates the sensitive areas back into the modified image, utilizing geometric alignment and blending.”
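The workflow Tamboli describes, masking locally, editing remotely, and reintegrating the sensitive pixels afterward, can be illustrated with a minimal, self-contained sketch. This is not the team’s patent-pending implementation: it models an image as a 2-D list of grayscale pixels, uses a simple brightness change as a stand-in for the cloud AI edit, and assumes the edit preserves geometry so reintegration is a direct copy (the actual system uses geometric alignment and blending).

```python
# Illustrative sketch of the mask -> remote edit -> reintegrate pipeline.
# All function names and the placeholder "edit" are assumptions, not the
# researchers' actual code.

def mask_region(img, box):
    """Zero out the sensitive box (x0, y0, x1, y1) locally before upload."""
    x0, y0, x1, y1 = box
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = 0
    return out

def simulated_remote_edit(img):
    """Stand-in for the cloud AI model; it only ever sees the masked image.
    Here the 'edit' is a simple brightness boost."""
    return [[min(255, p + 20) for p in row] for row in img]

def reintegrate(original, edited, box):
    """Copy the untouched sensitive pixels back into the edited image.
    (Assumes the edit preserved geometry, so a direct copy suffices.)"""
    x0, y0, x1, y1 = box
    out = [row[:] for row in edited]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = original[y][x]
    return out

img = [[100] * 8 for _ in range(8)]       # uniform toy image
box = (2, 2, 6, 6)                        # hypothetical face bounding box
masked = mask_region(img, box)            # only this ever leaves the device
edited = simulated_remote_edit(masked)
final = reintegrate(img, edited, box)     # edit kept, face pixels restored
```

Because only `masked` ever leaves the device, the remote service never observes the pixels inside `box`, yet the final image carries the remote edit everywhere else.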
Only the masked image is transmitted to the AI service, making this system a pioneering solution that combines privacy, high-quality edits, compatibility, and photorealism, asserts Aggarwal.
“It’s privacy by design. With our system, the AI platform never sees the face, yet the final edited image appears entirely natural,” Aggarwal adds.
Tests conducted by the researchers validated the system’s efficacy, showing that leading AI foundation models struggle to infer biometric attributes from blurred images compared with clear ones. The results revealed an 80% reduction in the models’ ability to identify traits such as eye color, facial hair, and age group, offering robust protection against identity leakage. The team is actively working on expanding the technology to safeguard more sensitive information, including medical records, identification documents, and other private data.
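The reported evaluation compares how well a model infers attributes from clear versus protected images. As a hedged sketch (the function names and the toy accuracy numbers below are illustrative, not the paper’s data), an “80% reduction” corresponds to a relative drop in attribute-inference accuracy:

```python
# Hypothetical metric sketch: relative reduction in a model's
# attribute-inference accuracy when images are protected.

def attribute_accuracy(predictions, labels):
    """Fraction of images whose inferred attribute matches ground truth."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def relative_reduction(clear_acc, protected_acc):
    """Share of the model's inference ability removed by the protection."""
    return (clear_acc - protected_acc) / clear_acc

# Toy numbers: a model correct 90% of the time on clear photos but only
# 18% of the time on protected photos shows an 80% relative reduction.
drop = relative_reduction(0.90, 0.18)
print(f"{drop:.0%}")
```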

