Humans have historically been better than computers at identifying faces in photos, but the advent of artificial intelligence-generated images is throwing a curveball at that advantage, according to a new study examining how people perceive fake images versus real ones.
Tech experts have been warning that hyperrealistic images generated by AI could fuel the proliferation of misinformation online and create cybersecurity issues. Last month, for example, panic spread after an AI-generated photo that apparently showed an explosion at the Pentagon went viral, causing a brief dip in the stock market.
Researchers in Australia examined how human brains perceive and differentiate realistic AI-generated photos, using both behavioral testing and neuroimaging experiments. They recruited 200 people from Amazon Mechanical Turk, a crowdsourcing website owned by Amazon, for the behavioral testing and 22 people from the University of Sydney for the neuroimaging tests.
"Throughout history, humans have been regarded as the benchmark for face detection. We have consistently outperformed computers in recognizing and classifying faces (although this is changing)," study author Mic Moshel told PsyPost.
"However, the emergence of AI has presented a significant challenge in reliably determining whether a face is artificially generated. Intrigued by this development, we sought to investigate how humans respond to hyperrealistic AI-generated faces, specifically exploring the ability to differentiate between real and fake," Moshel added.
The researchers generated the photos, which depicted both realistic and unrealistic faces, cars and bedrooms, with artificial neural networks called generative adversarial networks (GANs). Real photos used in the study were taken from training sets used for GANs.
For the neuroimaging test, researchers used electroencephalography, which measures electrical activity in the brain, while showing participants real and fake photos. Participants were able to identify unrealistic AI-generated photos as fake but had a harder time distinguishing realistic AI-generated photos from real ones.
"Our findings revealed that individuals can potentially recognize AI-generated faces given only a brief glance. Nevertheless, distinguishing genuine faces from AI-generated ones proves to be more challenging. Surprisingly, people frequently exhibit the tendency to mistakenly perceive AI-generated faces as more authentic than real faces," according to the study, which was published in Vision Research.
In the behavioral testing portion of the study, participants were shown photos in quick succession and asked to determine whether each image was real or fake based on their immediate visual impression.
Researchers found that participants’ brain activity could identify AI-generated photos of faces with 54% accuracy, while participants’ explicit verbal judgments were only 37% accurate.
"Through the examination of brain activity, we identified a discernible signal responsible for differentiating between real and AI-generated faces. However, the precise reason why this signal is not utilized to guide behavioural decision-making remains uncertain," the study reads.
Realistic but phony images depicting notable world and political leaders have already gone viral this year, including photos that showed former President Donald Trump getting arrested and Pope Francis wearing a white puffer jacket.
"It is becoming increasingly possible to rapidly and effortlessly generate realistic fake images, videos, writing, and multimedia that are practically indiscernible from real. This capacity is only going to become more widespread and has profound implications for cybersecurity, fake news, detection bypass, and social media," the researchers behind the study wrote.
Since the launch of ChatGPT last year, tech companies have been racing to build more powerful AI systems, but the same tech leaders behind the platforms are also warning AI must be regulated to prevent human "extinction."
Hundreds of tech leaders, such as OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, signed an open letter this week that warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Altman and Google’s Sundar Pichai have repeatedly called for AI regulation in recent weeks, noting that while the powerful technology has the potential to change the world for the better, its risks are something leaders must take seriously.
"In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there," Altman and other OpenAI executives said in a blog post last month.