Meta fails to suppress spread of sexualized AI deepfake images of celebrities on Facebook
Meta removed more than a dozen fake, sexualized images of well-known female actors and athletes after a CBS News investigation found a high prevalence of AI-manipulated deepfake images on the company's Facebook platform.
Dozens of fake, highly sexualized images of actors Miranda Cosgrove, Jennette McCurdy, Ariana Grande and Scarlett Johansson and former tennis star Maria Sharapova were shared widely across multiple Facebook accounts, garnering hundreds of thousands of likes and many reshares on the platform.
"We have removed these images for violating our policies and will continue monitoring for other violating posts. This is a challenge across the industry, and we're continually working to improve our detection and enforcement technology," Meta spokesperson Logan told CBS News in an emailed statement on Friday.
An analysis of more than a dozen of these images by Reality Defender, a platform that works to detect AI-generated media, showed that many of the photos were deepfakes, with AI-generated bodies in underwear replacing the celebrities' bodies in otherwise authentic photographs. A few of the images were likely created using image-stitching tools that do not involve AI, according to Reality Defender's analysis.
"Nearly all deepfake pornography does not have the consent of the subject being deepfaked," Ben Colman, Reality Defender's co-founder and CEO, told CBS News on Sunday. "Such content is growing at a dizzying rate, especially as existing measures to stop such content are rarely enforced."
CBS News sought comment from Miranda Cosgrove, Jennette McCurdy, Ariana Grande and Maria Sharapova for this story. Johansson declined to comment, according to a representative for the actor.
Under Meta's Bullying and Harassment policy, the company prohibits "derogatory sexualized photoshop or drawings" on its platforms. The company also bans adult nudity, sexual activity and adult sexual exploitation, and its regulations are intended to block users from sharing or threatening to share nonconsensual intimate images. Meta has also begun applying "AI info" labels to clearly mark content that has been manipulated with AI.
But questions remain about how effectively the tech company polices such content. CBS News found dozens of AI-generated, sexualized images of Cosgrove and McCurdy still publicly available on Facebook, even after the widespread distribution of such content in violation of the company's terms.
One such deepfake image of Cosgrove, which was still up over the weekend, had been shared by an account with 2.8 million followers.
The two actors, both former child stars on the Nickelodeon show iCarly, which is owned by CBS News' parent company Paramount Global, were the most prevalent targets of deepfake content among the public figures whose images CBS News analyzed.
Meta's Oversight Board, a quasi-independent body made up of experts in human rights and freedom of speech that makes recommendations on content moderation across Meta's platforms, told CBS News in an emailed statement that the company's current regulations around sexualized deepfake content are insufficient.
The Oversight Board cited recommendations it has made to Meta over the past year, including urging the company to make its rules clearer by updating its prohibition on "derogatory sexualized photoshop" to specifically include the word "nonconsensual" and to cover other manipulation techniques such as AI.
The board also recommended that Meta fold its "derogatory sexualized photoshop" ban into its Adult Sexual Exploitation regulations, so that moderation of such content would be more stringent.
Asked Monday by CBS News about the board's recommendations, Meta pointed to guidance on its transparency website showing that the company is assessing the feasibility of three of the Oversight Board's four recommendations and implementing one of them. Meta also said it was unlikely to move its "derogatory sexualized photoshop" policy into its Adult Sexual Exploitation regulations.
In its statement, Meta noted that it is still considering ways to signal a lack of consent in AI-generated images. Meta also said it is considering reforms to its Adult Sexual Exploitation policies to "capture the spirit" of the board's recommendation.
"The Oversight Board has made clear that nonconsensual deepfake intimate images are a serious violation of privacy and personal dignity, disproportionately harming women and girls. These images are not merely a misuse of technology; they are a form of abuse that can have lasting consequences," Michael McConnell, a co-chair of the Oversight Board, told CBS News on Friday.
"The board is actively monitoring Meta's response and will continue to push for stronger safeguards, faster enforcement and greater accountability," McConnell said.
Meta is not the only social media company to face questions over widespread, sexualized deepfake content.
Last year, Elon Musk's platform X temporarily blocked searches related to Taylor Swift after AI-generated fake pornographic images in the singer's likeness circulated widely on the platform, garnering millions of views and impressions.
"Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content," the platform's safety team said in a post at the time.
A study published earlier this month by the U.K. government found that the number of deepfake images on social media is expanding rapidly, with the government projecting that 8 million deepfakes will be shared this year, up from 500,000 in 2023.