Use of artificial intelligence to generate photorealistic images of people has come under criticism. Recently, a fake, AI-generated image of Pope Francis stepping out in a stylish white puffer jacket and bejewelled crucifix went viral on social media, racking up millions of views over the weekend – with many mistaking it for a real image. Experts fear the rapidly developing technology behind the image could soon undermine our ability to distinguish fake photos, which can be generated in seconds, from reality.
The AI program Midjourney was used to create the Pope image, which appears to have first been shared on a Reddit page dedicated to AI-generated art – before being reposted to Twitter, where a series of viral posts racked up millions of views while failing to disclose it wasn’t real.
Midjourney is an AI-based technology that creates images from simple text prompts. While Photoshopping fake images has been possible for years, no skill is required to use the tool, which takes just seconds to generate photorealistic fakes. Other similar programmes include OpenAI’s DALL-E and Stable Diffusion.
Henry Ajder, AI expert and presenter of the BBC radio series The Future Will be Synthesised, said this technology has developed with “lightning speed” within the last year, and shows no signs of slowing down.
Not only have the tools become “radically accessible” and easy for anyone to use, they have also become more sophisticated – generating images that look increasingly realistic.
“A lot of that image looks really good, especially at a first glance people might think the Pope is really adopting the Italian fashion sense.”
But Mr Ajder said there were still visible flaws that confirm the recent viral images as fakes, including the depiction of some body parts.
Recent advances in artificial intelligence have yielded warnings that the rapidly developing technology may result in “ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.
That’s according to an open letter signed by more than 1,000 AI experts, researchers and backers, which calls for an immediate pause on the creation of “giant” AIs for six months so that safety protocols can be developed to mitigate their dangers.
Image generators have raised serious ethical concerns around artistic ownership and copyright, with evidence that some AI programs have been trained on millions of online images without permission or payment, leading to class action lawsuits.
Tools have been developed to protect artistic works from being used by AI, such as Glaze, which uses a cloaking technique to prevent an image generator from accurately replicating the style of an artwork.