The new face of school bullying is real. It's the body beneath the face that's fake.
Last week, administrators and parents at Beverly Vista Middle School in Beverly Hills were shocked by reports that fake photos placing real students' faces on artificially generated nude bodies had been circulating online. According to the Beverly Hills Unified School District, the images were created and shared by other students at Beverly Vista, the district's only school for grades six through eight. About 750 students are enrolled there, according to the latest count.
The district, which is investigating the matter, joins a growing number of educational institutions around the world grappling with fake images, video and audio. In Westfield, New Jersey; Seattle; Winnipeg; Almendralejo, Spain; and Rio de Janeiro, people using “deepfake” technology have seamlessly grafted real photos of female students onto artificially generated nude bodies. And in Texas, someone allegedly did the same thing to a teacher, grafting her head onto a woman in a pornographic video.
Beverly Hills Unified officials said they were prepared to impose the harshest disciplinary measures allowed by state law. “Any student found to be creating, publishing or possessing images of this type generated by artificial intelligence will face disciplinary action, including, but not limited to, a recommendation for expulsion,” they said in a statement sent to parents last week.
However, deterrence may be the only tool at their disposal.
Dozens of apps are available online to “undress” someone in a photo, simulating what the person would look like if they were naked when the photo was taken. The apps use AI-powered image mapping technology to remove pixels that represent clothing, and replace them with an approximate image of that person's naked body, said Rijul Gupta, founder and CEO of Deep Media in San Francisco.
Other tools let users “face swap” a targeted person's face onto another person's naked body, said Gupta, whose company specializes in detecting AI-generated content.
Versions of these programs have been available for years, but earlier versions were more expensive, harder to use and less realistic. Today's AI tools can clone lifelike images and churn out fakes in seconds, even on a smartphone.
“The ability to manipulate [images] has been democratized,” said Jason Crawforth, founder and CEO of Swear, whose technology verifies the authenticity of video and audio recordings.
“You needed 100 people to create something fake. Today you need one, and soon that person will be able to create 100 in the same amount of time,” he said. “We have moved from the information age to the disinformation age.”
AI tools “have escaped Pandora’s box,” said Seth Ruden of BioCatch, a company that specializes in detecting fraud through behavioral biometrics. “We're starting to see how much damage could be done here.”
If children have access to these tools, “it's not just a problem with deepfakes,” Ruden said. He added that the potential risks extend to creating images of victims “doing something very illegal and using that as a way to blackmail them for money or coerce them into a specific act.”
Reflecting the widespread availability of cheap, easy-to-use deepfake tools, the amount of non-consensual deepfake porn has been increasing. According to Wired, an independent researcher found that 113,000 deepfake porn videos were uploaded to the 35 most popular sites for such content in the first nine months of 2023. At that pace, the researcher found, more would be produced by the end of 2023 than in all previous years combined.
At Beverly Vista, the school's principal, Kelly Skon, met with nearly all students in all three grades Monday as part of her regularly scheduled “administrative conversations” to discuss a number of issues raised by the incident, she said in a memo to parents.
Among other things, Skon said, she asked students to “think about how you use social media and not be afraid to leave any situation that doesn’t align with your values,” and to “make sure your social media accounts are private and that you don’t have people you don’t know following your accounts.”
Another point she made to the students was that “there are Bulldog students who are hurting because of this event, and that is to be expected considering what happened,” Skon said in her memo. “We also see the courage and resilience of these students as they try to restore normalcy to their lives after this terrible act.”
What can be done to protect against deepfake nudes?
Federal and state officials have taken some steps to combat fraudulent use of artificial intelligence. According to the Associated Press, six states have banned non-consensual deepfake porn. In California and a few other states that do not have criminal laws specifically targeting deepfake porn, victims of this type of abuse can sue for damages.
The technology industry is also trying to come up with ways to combat malicious and fraudulent uses of AI. Deep Media has joined many of the world's largest AI and media companies in the Coalition for Content Provenance and Authenticity, which has developed standards for tagging images and audio to identify when they have been digitally manipulated.
Swear takes a different approach to the same problem, using blockchain to maintain immutable records of files in their original state. Comparing the current version of a file to its record on the blockchain will show if and exactly how the file has been changed, Crawforth said.
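The underlying idea is tamper evidence. The sketch below is a simplification, not Swear's implementation: it records a cryptographic fingerprint of a file when it is created and checks later copies against that record, so any change to the bytes breaks the match. A real system would anchor the fingerprints in a blockchain or other append-only ledger rather than in memory.

```python
# Minimal sketch of the tamper-evidence idea described above (not Swear's system):
# record a file's SHA-256 fingerprint at creation, then compare later copies to it.
import hashlib
import tempfile
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hash of the file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical stand-in for an immutable ledger entry.
ledger: dict[str, str] = {}

def register(path: Path) -> None:
    ledger[str(path)] = fingerprint(path)

def is_unaltered(path: Path) -> bool:
    """True only if the file's current bytes match the recorded fingerprint."""
    return ledger.get(str(path)) == fingerprint(path)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        photo = Path(tmp) / "original.jpg"
        photo.write_bytes(b"original image bytes")
        register(photo)                 # record the file in its original state
        print(is_unaltered(photo))      # True: the file matches its record
        photo.write_bytes(b"edited image bytes")
        print(is_unaltered(photo))      # False: any edit breaks the match
```

The check reveals that a file changed, not how; a production system layers provenance metadata and signed edit histories on top of this basic comparison.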
Those standards can help identify deepfake media files and potentially block them online. With the right mix of methods, the vast majority of deepfakes could be filtered out of a school or company network, Gupta said.
One challenge, however, is that many AI companies have released open-source versions of their software, enabling developers to create custom versions of generative AI applications. That's how the undressing apps emerged, for example, Gupta said. These developers can ignore the standards the industry develops, just as they can try to remove or circumvent tags that would identify their content as artificially generated.
Meanwhile, security experts warn that the photos and videos people upload to social networks every day provide a rich source of material that bullies, scammers and other bad actors can harvest. They don't need much to create a convincing fake, Crawforth said; he saw a demonstration of Microsoft technology that could convincingly reproduce someone's voice from just three seconds of online audio.
“There is no content that cannot be copied or manipulated,” he said.
The risk of victimization probably won't stop many, if any, teens from sharing photos and videos online. So the best form of protection for those who want to document their lives digitally may be “poison pill” technology that alters the metadata of the files they upload to social media, hiding them from image searches or online scraping.
“Poison pills are a great idea. That's something we're researching as well,” Gupta said. But to be effective, social media platforms, smartphone photo apps and other popular content-sharing tools would have to add poison pills automatically, because people can't be relied on to do it consistently themselves.
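What a poison pill actually does varies by vendor and is largely proprietary. As a rough, hypothetical illustration of the metadata-level changes described above, the sketch below uses the Pillow imaging library to copy only a photo's pixels into a new file, so the original EXIF block (camera model, GPS coordinates, timestamps) is not carried along when the image is shared. The filenames are placeholders, and real poison-pill tools go well beyond this.

```python
# Simplified, hypothetical illustration of altering a photo's metadata before
# sharing -- not any vendor's actual "poison pill" technology. Copying only the
# pixel data into a new image leaves the original EXIF metadata behind.
# Requires the Pillow library; the filenames are placeholders.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Save a copy of the image at `src` to `dst` without its original metadata."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)   # blank image, same size and mode
        clean.putdata(list(img.getdata()))      # copy pixels only, no EXIF block
        clean.save(dst)

if __name__ == "__main__":
    strip_metadata("vacation_photo.jpg", "vacation_photo_clean.jpg")
```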