Meta Platforms’ artificial intelligence (AI) tools are specifically designed not to create or distribute nude images, according to a statement the company is expected to deliver before the Oireachtas media committee.
The social media giant, which owns Facebook, Instagram and WhatsApp, will tell the committee that sharing non-consensual intimate imagery is among its most serious policy violations.
“Our AI tools are specifically trained not to generate nude images or digitally undress individuals in photos. We have integrated robust safeguards to prevent the creation of these objectionable images,” Dualta Ó Broin, Meta Ireland’s director of public policy, will say.
Ó Broin will attend the joint Oireachtas media committee session alongside David Miles, Meta’s EMEA safety policy director, and representatives from Google and TikTok.
The meeting follows the European Commission’s launch of a formal investigation into X over its Grok chatbot’s “nudification” feature, which enabled users to digitally undress images of individuals, including minors, without their consent.
X had been invited to attend the committee meeting on two occasions but declined both, opting instead to submit written correspondence.
Alan Kelly, Labour TD and chairman of the Arts, Media, Communications, Culture and Sport committee, described X’s refusal to participate as “extremely disappointing.”
“It is truly disheartening that X has declined to attend, especially following a specific request from the Taoiseach for their presence,” he added.
On age verification, Susan Moss, TikTok’s head of public policy and government relations, is expected to say: “Despite our best efforts, there is no universally accepted method for reliably confirming a person’s age while also protecting their privacy.”
Moss plans to announce that TikTok will soon implement “enhanced technology” in Europe to aid its moderation teams in identifying and eliminating accounts belonging to individuals under the age of 13.
“We are currently the only major platform that publicly shares, quarterly, the number of suspected accounts owned by those under 13 that are removed,” her statement asserts.
She will also explain that various technological advancements are improving the company’s ability to swiftly remove harmful content, thus reducing the chances of the community encountering it.
By automating content takedowns, she will state, TikTok’s safety teams can focus on areas that require human expertise, such as processing appeals, consulting with external experts, and responding to rapidly evolving situations.
A TikTok spokeswoman confirmed the company’s remarks.
The committee meeting will examine how prominent online platforms are handling regulation, user safety and the protection of minors.
Topics such as content moderation, technological solutions, and human oversight in addressing harmful and illegal content will also be on the agenda.
At the session, Google will be represented by its government affairs and public policy manager, Ryan Meade, along with Chloe Setter, the child safety public policy manager. TikTok will also have its minor safety public policy lead, Richard Collard, present.
Meta has been approached for comment.