Uncensored AI art model poses ethical questions

A new open-source AI image generator capable of producing photorealistic images from any text prompt saw astonishingly rapid adoption in its first week of release. Art-generation services such as Artbreeder and Pixelz.ai are already using Stability AI’s Stable Diffusion, a high-fidelity model that can run on standard consumer hardware. But because the model is unfiltered, not every use has been entirely ethical.

The majority of use cases so far have been legitimate. NovelAI, for instance, has been experimenting with Stable Diffusion to produce art that can accompany the AI-assisted stories users create on its platform. Midjourney has launched a beta that taps Stable Diffusion for greater photorealism.

However, Stable Diffusion has also been put to less desirable uses. Several threads on the notorious online forum 4chan, where the model leaked early, are devoted to AI-generated artwork of nude celebrities and other forms of generated pornography.

Stability AI CEO Emad Mostaque described the model’s leak to 4chan as “unfortunate” and stressed that the company was working with “top ethicists and technologies” on safety and other mechanisms around responsible release. One of those mechanisms is the Safety Classifier, a tunable AI tool included in the overall Stable Diffusion software package that attempts to detect and block offensive or undesirable images.
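As a rough illustration, here is a minimal sketch of how a downstream service might apply an output filter of this kind, using the Hugging Face diffusers wrapper around Stable Diffusion rather than Stability AI’s own release. The checkpoint name, the prompt and the check on the pipeline’s nsfw_content_detected flags are assumptions about a typical setup, not the Safety Classifier’s actual code.

```python
# Minimal sketch (assumed setup): generate an image with Stable Diffusion via
# the Hugging Face `diffusers` library and consult its bundled safety filter.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint name; the weights must already be downloaded/accepted.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")  # assumes an NVIDIA GPU

result = pipe("a watercolor painting of a lighthouse at dawn")
image = result.images[0]

# The pipeline reports one flag per generated image; when a flag is True,
# the built-in checker has already replaced that output with a blank image.
if result.nsfw_content_detected and result.nsfw_content_detected[0]:
    print("Output was flagged and suppressed by the safety checker.")
else:
    image.save("lighthouse.png")
```

Because the filter is just another component running on the user’s own machine, anyone with a local copy can weaken or remove it, which is part of why critics see such safeguards as soft at best.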

Stable Diffusion is very much new territory. Other AI art-generating systems, such as OpenAI’s DALL-E 2, have implemented strict filters for sexual content. (The license for the open-source Stable Diffusion prohibits certain applications, such as exploiting minors, but the model itself is technically unrestricted.) Many of those systems also lack the ability to create art of public figures, unlike Stable Diffusion. The combination of those two capabilities could be risky, allowing bad actors to create pornographic “deepfakes” that, in the worst-case scenario, could perpetuate abuse or falsely implicate someone in a crime.

Unfortunately, women are by far the most likely to be affected by this. A 2019 study found that of the 90% to 95% of deepfakes that are non-consensual, about 90% depict women. Ravit Dotan, an AI ethicist at the University of California, Berkeley, believes this bodes poorly for the future of these AI systems.

Dotan told TechCrunch via email, “I worry about further repercussions of synthetic representations of unlawful stuff, namely that it would intensify the shown illicit actions.” “For instance, would synthetic child [exploitation] enhance the occurrence of genuine child [exploitation]? Will the number of pedophile assaults increase?”

Abhishek Gupta, director of research at the Montreal AI Ethics Institute, concurs. “We really need to consider the lifespan of the AI system, which includes post-deployment usage and monitoring, and how we might conceive of policies that may mitigate effects even in the worst-case scenario,” he said. “This is especially true when a potent capacity [such as Stable Diffusion] is released into the wild and might cause actual pain to people against whom such a system could be utilized, for example by making undesirable material in the victim’s picture.”

Consider a case from the past year: on a nurse’s advice, a father took photos of his young child’s swollen genital area and texted them to the nurse’s iPhone. The photo was automatically backed up to Google Photos and flagged by the company’s AI filters as child sexual abuse material, which led to the man’s account being deactivated and to an investigation by the San Francisco Police Department.

Experts like Dotan argue that if a genuine photograph could trip up such a detection system, there is no reason deepfakes created by a system like Stable Diffusion couldn’t as well, and at scale.

“Even when individuals have the finest intentions, the AI systems they design may be utilized in terrible ways that they cannot predict or avoid,” Dotan added. “I believe that developers and academics often undervalue this aspect.”

Whether fueled by AI or not, the ability to make deepfakes has existed for some time. A 2020 report by deepfake-detection company Sensity found that hundreds of explicit deepfake videos featuring female celebrities were being uploaded to the world’s largest pornography websites every month; it estimated the total number of deepfakes online at roughly 49,000, over 95% of which were pornographic. Since AI-powered face-swapping tools hit the mainstream several years ago, actresses such as Emma Watson, Natalie Portman, Billie Eilish and Taylor Swift have been targets of deepfakes, and some, including Kristen Bell, have spoken out against what they regard as sexual exploitation.

However, Stable Diffusion represents a newer generation of systems that can create highly convincing, if not flawless, fake images with minimal human input. Installation is also simple, requiring just a few configuration files and a graphics card costing a few hundred dollars at the high end. Work is now underway on even more efficient versions of the system that can run on an M1 MacBook.
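To give a sense of how low that barrier is, the snippet below sketches what a minimal local setup might look like, again via the Hugging Face diffusers wrapper. The device-selection logic, the memory-saving call and the checkpoint name are assumed details of one typical configuration, including the Apple Silicon (“mps”) path, and not official installation instructions.

```python
# Sketch of a local Stable Diffusion setup on consumer hardware (assumed
# configuration). Presumes `pip install torch diffusers transformers` has run
# and the model weights have already been downloaded from Hugging Face.
import torch
from diffusers import StableDiffusionPipeline

# Pick whatever accelerator is available: an NVIDIA GPU, an Apple Silicon
# MacBook via Metal ("mps"), or the CPU as a slow fallback.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4"  # illustrative checkpoint name
).to(device)
pipe.enable_attention_slicing()  # trims peak memory use on smaller GPUs

image = pipe("an isometric illustration of a tiny island village").images[0]
image.save("island.png")
```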

A deepfake of Kylie Kardashian uploaded on 4chan.

Sebastian Berns, a Ph.D. candidate in the AI group at Queen Mary University of London, believes that the automation and scalability of customized image generation are the most significant differences between Stable Diffusion and other systems, as well as the biggest problems. “The majority of dangerous pictures can already be created using traditional techniques, but it is labor-intensive and manual,” he said. “A model that can generate near-photorealistic film may open the door to targeted blackmail assaults.”

Berns is concerned that personal photos scraped from social media could be used to train Stable Diffusion or a similar model to create obscene or unlawful images. There is certainly precedent. After reporting on the rape of an eight-year-old Kashmiri child in 2018, Indian investigative journalist Rana Ayyub became the target of Indian nationalist trolls, some of whom created deepfake pornography with her face on another person’s body. The deepfake was shared by the head of the nationalist political party BJP, and the harassment Ayyub received as a result became so severe that the United Nations had to intervene.

Berns stated, “Stable Diffusion gives sufficient customisation to deliver automated threats to people to either pay or risk having bogus but possibly harmful film disseminated.” “People are already being extorted after their camera was remotely accessed. This infiltration stage may no longer be required.”

With Stable Diffusion already being used to generate pornography, some of it non-consensual, image hosts may be compelled to take action. TechCrunch reached out to one of the largest adult content platforms, OnlyFans, but had not received a response as of the time of writing. Patreon, which also permits adult content, has a policy against deepfakes and prohibits images that “repurpose celebrity likenesses and insert non-adult content in an adult setting.”

If the past is any indication, however, enforcement will likely be inconsistent, in part because few laws specifically protect against deepfake pornography. And even if the threat of legal action drives some sites hosting problematic AI-generated content offline, nothing prevents new ones from springing up.

“Creative and malicious users can abuse the capabilities [of Stable Diffusion] to generate subjectively objectionable content at scale, using minimal resources to run inference — which is less expensive than training the entire model — and then publish them on Reddit and 4chan to drive traffic and hack attention,” Gupta explained. “Much is at risk when such capabilities escape ‘into the wild,’ where constraints such as API rate limitations and safety controls on the kind of outputs delivered by the system no longer apply.”
