That Popular AI Portrait App Can ‘Undress’ People Without Their Consent
We may have opened Pandora’s box.
Anyone who has been online in the past few days has probably noticed a sudden uptick in the number of people posting self-portraits done in an anime style.
Their source, a photo-editing tool called Lensa AI, has christened them “Magic Avatars,” and they have taken social media by storm. Their virality has grown in tandem with that of ChatGPT, OpenAI’s next-generation AI chatbot.
It has been a big year for AI programs. Text-to-image generators, the most prominent examples of which are OpenAI’s DALL-E, Midjourney, and Stable Diffusion, have caused a disruption in the creative industry and left digital designers the world over worried they’ll soon be out of a job.
Futurism reports that a record label debuted an artificially intelligent rapper, then did away with it just as quickly as it was created. Futurism further reports that machine learning has even been used to generate full-on conversations with living celebrities, and dead ones.
You could say AI has been making serious strides lately. And lest we forget, a Google engineer was recently suspended after going public with his conviction that the company had built a sentient chatbot called LaMDA.
Experts have been working on the underlying technology for years, but recent advances, coupled with significant financial investment, have sparked a mad rush to market. The result is a proliferation of consumer products built on cutting-edge technology.
The only catch is that neither the products nor the consumers are ready.
Take those “Magic Avatars,” which don’t seem dangerous at first glance. After all, there is nothing wrong with letting software transform you into a colorful avatar. And unlike text-to-image generators, you’re restricted to the confines of the photos you already own.
Artists began raising concerns as soon as the “avatars” went viral, pointing out that Lensa provided no safeguards for the artists whose work may have been used to train the model. Worse, despite Lensa’s “zero nakedness” usage guideline, users found it surprisingly easy to generate nude images, both of themselves and of anyone else they had photos of.
“The ease with which you can create images of anyone you can imagine (or, at least, anyone you have a handful of photos of), is terrifying,” writes Haje Jan Kamps in TechCrunch.
Kamps tested the app’s capacity for pornographic output by feeding it crudely altered photographs of celebrity faces superimposed on naked bodies. The manipulated photographs easily bypassed all of the app’s purported safety features.
“Adding NSFW content into the mix, and we are careening into some pretty murky territory very quickly: your friends or some random person you met in a bar and exchanged Facebook friend status with may not have given consent to someone generating soft-core porn of them,” he said.
Horrible stuff, but it gets far worse than that. Despite Lensa’s claims to prevent users from creating child pornography, journalist Olivia Snow confirmed this awful possibility after uploading photographs of herself as a child to the app’s “Magic Avatars” feature. “I managed to piece together the minimum 10 photos required to run the app and waited to see how it transformed me from awkward six-year-old to fairy princess,” she wrote in Wired. “The results were horrifying.”
“What resulted were fully-nude photos of an adolescent and sometimes childlike face but a distinctly adult body,” she continued. “This set produced a kind of coyness: a bare back, tousled hair, an avatar with my childlike face holding a leaf between her naked adult’s breasts.”
The accounts from Kamps and Snow both highlight an aggravating fact about today’s AI technology: it often behaves in ways its designers didn’t anticipate and can circumvent the very safeguards put in place to prevent this. They also suggest the AI industry is moving faster than society, and even its own technology, can keep up with. In light of these findings, that is deeply concerning.
Lensa said in a statement to TechCrunch that users are solely accountable for any sexual content created with the app. Many in the industry share this view: bad actors will always exist, and they will always behave badly. Another prevalent argument is that a competent Photoshop user could create anything such apps produce just as easily. On this view, any explicit or pornographic images are “the result of intentional misconduct on the app.”
These points of view are not completely without merit. Neither, however, changes the reality that Lensa’s software, like many others of its ilk, makes it far easier for bad actors to do what they otherwise could. For the first time, anybody with access to the right algorithm can generate convincing fake nudes or high-quality renderings of child sexual abuse material.
The Pandora’s box metaphor is apt here, too. The Lensas of the world can try to lock down their technology all they want, but copycat algorithms that circumvent the protections will still be developed. It will happen. It is a certainty.
Since Lensa’s debacle, there has been rising awareness of the real and severe harm that premature adoption of AI technologies, image generators included, can inflict on actual people. Meanwhile, the sector seems to be embracing a full-steam-ahead stance, focused on racing rivals for venture capital investment rather than on ensuring the tools are adequately secure.
Keep in mind that nonconsensual porn is just one of many potential dangers here. Another key worry is the ease with which political disinformation could be produced. And what about automated text-generation tools? Teachers are scared out of their wits at that possibility.
While AI may make our lives easier, it is undoubtedly going to create many issues going forward. Where does it end, when it will soon be entirely possible to fake a politician’s entire speech?
And while it’s cool that we will probably be able to bring dead actors back for the big screen, will you be able to trust anything you see digitally anymore unless it’s directly in front of you? My guess is no.
Even an apparently harmless tool like “Magic Avatars” serves as a reminder that AI is still an experiment, even as it alters our reality in profound ways, and that its collateral damage is no longer a hypothetical risk. It is an undeniable fact.
Buckle up, folks.
Typos, corrections and/or news tips? Email us at Contact@TheMindUnleashed.com