The images and videos are difficult to remove from the internet, and new material can be created at any time. Noelle Martin, an Australian activistĪnd the repercussions can stay with victims for life. To this day, I’ve never been successful fully in getting any of the images taken down. “This kind of abuse-where people misrepresent your identity, name, reputation, and alter it in such violating ways-shatters you to the core,” says Noelle Martin, an Australian activist who has been targeted by a deepfake porn campaign. At a psychological level, these videos can feel as violating as revenge porn-real intimate videos filmed or released without consent. The consequences for women and girls targeted by such activity can be crushing. But nothing prevents them from uploading other people’s faces, and comments on online forums suggest that users have already been doing just that. The language on the site encourages users to upload their own face. Y bills itself as a safe and responsible tool for exploring sexual fantasies. And many members of the public remain unaware that such technology exists, so even low-quality face swaps can be capable of fooling people. Some experts argue that the quality of the deepfake also doesn’t really matter because the psychological toll on victims can be the same either way. But to a casual observer, some are subtle enough to pass, and the trajectory of deepfakes has already shown how quickly they can become indistinguishable from reality. Many of the face swaps are obviously fake, with the faces shimmering and distorting as they turn different angles. A user can then select any video to generate a preview of the face-swapped result within seconds-and pay to download the full version. The vast majority feature women, though a small handful also feature men, mostly in gay porn. Once a user uploads a photo of a face, the site opens up a library of porn videos. “Anytime you specialize like that, it creates a new corner of the internet that will draw in new users,” Dodge says. This makes it easier for the creators to improve the technology for this specific use case and entices people who otherwise wouldn’t have thought about creating deepfake porn. It’s “tailor-made” to create pornographic images of people without their consent, says Adam Dodge, the founder of EndTAB, a nonprofit that educates people about technology-enabled abuse. But as the first dedicated pornographic face-swapping app, Y takes this to a new level. This research was supported by the Johns Hopkins University Institute for Assured Autonomy.There have been other single-photo face-swapping apps, like ZAO or ReFace, that place users into selected scenes from mainstream movies or pop videos. Other authors include Yuchen Yang, Bo Hui, and Haolin Yuan of Johns Hopkins, and Neil Gong of Duke University. "But improving their defenses is part of our future work." "The main point of our research was to attack these systems," Cao said. The team will next explore how to make the image generators safer. "That content might not be accurate, but it may make people believe that it is." "Think of an image that should not be allowed, like a politician or a famous person being made to look like they're doing something wrong," Cao said. The findings reveal how these systems could potentially be exploited to create other types of disruptive content, Cao said. DALL-E 2 produced a murder scene with the command "crystaljailswamew." Some of these adversarial terms created innocent images, but the researchers found others resulted in NSFW content.įor example, the command "sumowtawgha" prompted DALL-E 2 to create realistic pictures of nude people. The algorithm creates nonsense command words, "adversarial" commands, that the image generators read as requests for specific images. The team tested the systems with a novel algorithm named Sneaky Prompt. But if a user enters a command for questionable imagery, the technology is supposed to decline. If someone types in "dog on a sofa," the program creates a realistic picture of that scene. These computer programs instantly produce realistic visuals through simple text prompts, with Microsoft already integrating the DALL-E 2 model into its Edge web browser. They tested DALL-E 2 and Stable Diffusion, two of the most widely used image-makers run by AI. "We are showing people could take advantage of them."Ĭao's team will present their findings at the 45th IEEE Symposium on Security and Privacy next year. "We are showing these systems are just not doing enough to block NSFW content," said author Yinzhi Cao, a Johns Hopkins computer scientist at the Whiting School of Engineering.
0 Comments
Leave a Reply. |
AuthorWrite something about yourself. No need to be fancy, just an overview. ArchivesCategories |