Elon Musk’s social media platform X, formerly known as Twitter, has blocked searches for content related to Taylor Swift. The move comes as X struggles to contain the widespread circulation of sexually explicit deepfake images of the American singer on the platform. As of Monday night, searches for “Taylor Swift” or “Taylor Swift AI” on X return a message stating, “Something went wrong. Try reloading.”

The deepfake images, some of which gained millions of views before removal, prompted X’s Safety Team to address the situation on January 26, emphasising a zero-tolerance policy toward non-consensual nudity (NCN). Despite these efforts, manipulated media featuring Swift, including various deepfakes, continued to circulate.

Among the disturbing images was one portraying Swift as obese and eating fast food, while another depicted her hugging former U.S. President Donald Trump in apparent support, wearing a hat bearing his slogan.

Microsoft, upon learning of the explicit deepfakes, initiated an investigation to determine if its services were involved in their creation. The White House expressed its concern early in the week, with Press Secretary Karine Jean-Pierre labelling the situation “alarming.” She urged social media companies to combat the spread of misinformation and non-consensual intimate imagery.

Experts have cautioned that deepfakes are frequently used to target female celebrities in sexually abusive ways. The issue is not unique to Taylor Swift: Indian actor Rashmika Mandanna, among others, has spoken out against the use of the technology to produce deepfake videos featuring her.

