While the concern around generative AI has so far mainly focused on the potential for misinformation as we head into the U.S. general election, the possible displacement of workers, and the disruption of the U.S. education system, there is another real and present danger — the use of AI to create deepfake, non-consensual pornography.
Last month, fake, sexually explicit photos of Taylor Swift were circulated on X, the platform formerly known as Twitter, and remained up for several hours before they were ultimately taken down. One of the posts on X garnered over 45 million views, according to The Verge. X later blocked search results for Swift’s name altogether in what the company’s head of business operations described as a “temporary action” for safety reasons.
Swift is far from the only person to be targeted, but her case is yet another reminder of how easy and cheap it has become for bad actors to take advantage of the advances in generative AI technology to create fake pornographic content without consent, while victims have few legal options.
Even the White House weighed in on the incident, calling on Congress to legislate, and urging social media companies to do more to prevent people from taking advantage of their platforms.
The term “deepfakes” refers to synthetic media, including photos, video and audio, that have been manipulated through the use of AI tools to show someone doing something they never actually did.
The word itself was coined in 2017 by a Reddit user with the profile name “Deepfake,” who posted fake pornography clips on the platform using face-swapping technology.
A 2019 report by Sensity AI, the company formerly known as Deeptrace, found that 96% of deepfake videos online were pornographic.
Meanwhile, 34 providers of synthetic non-consensual intimate imagery drew a combined 24 million unique visitors in September, according to Similarweb online traffic data cited by Graphika.
The FBI issued a public service announcement in June, saying it has noticed “an uptick in sextortion victims reporting the use of fake images or videos created from content posted on their social media sites or web postings, provided to the malicious…