AI is fueling the spread of misinformation. Here's how to avoid falling for it

Misinformation runs rampant on social media every day, and it's becoming harder to detect. Thanks to artificial intelligence-generated images and videos, you've probably scrolled past a photo depicting a celebrity or political figure that's completely fabricated and passed off as real.

The danger of AI-generated misinformation

AI art started as a fun trend, like using an app to turn photos of yourself into colorful avatars, or watching fan-made trailers reimagining a "Star Wars" film directed by Wes Anderson. But the more capable and "intelligent" AI becomes, the more dangerous it can be.

Not only are AI-generated images and videos proving to be a cybersecurity threat, but their rising popularity, combined with tools such as ChatGPT that can easily produce text indistinguishable from a real human's, will also make misinformation like fake news and conspiracy theories cheap, easy and fast to produce.

How well can humans detect AI content?

On average, people correctly distinguish high-quality AI-generated images from real ones only 61% of the time, according to a study conducted by Cornell University. The study also found that an individual's background, such as their gender, age and experience with AI-generated content, did not significantly affect their ability to tell the AI-generated images from real photographs.

While the use of this media to purposefully deceive social media users is troublesome in and of itself, it's even more troublesome in the context of an approaching election year, when fake news tends to peak online.

Fake news, misinformation and politics

Former President Donald Trump, who has announced he will be running in 2024, shared AI-generated content with his followers on social media: A manipulated video of CNN host Anderson Cooper that distorted Cooper's reaction to CNN's May town hall featuring Trump, created with an AI voice-cloning tool.

When President Joe Biden announced his reelection campaign, the Republican National Committee released a manipulated online ad, featuring AI-generated images of Biden, as well as boarded up storefronts in the U.S. and soldiers and armored military vehicles patrolling local streets as tattooed criminals create panic.

And while the RNC acknowledged its use of AI in the video's caption, other political ads have not offered such disclosure (See: Hillary Clinton has officially endorsed Governor Ron DeSantis for President in 2024.)

How AI is used in social media

AI's impact also reaches beyond a social media user's newsfeed. In May, a fake image of a Pentagon explosion went viral, directly impacting the U.S. stock market.

Lawmakers and politicians have sounded the alarm, with Rep. Yvette Clarke (D-NY) proposing a new bill that would require full disclosure on AI use in political ads, and Senate Majority Leader Chuck Schumer (D-NY) taking early steps toward legislation to regulate artificial intelligence technology back in April.

Resources such as NewsGuard — a journalism and technology site that rates the credibility of news and information websites and tracks online misinformation directly related to AI — also exist, but AI use is spreading faster than these tools can keep up with. According to a report by Europol, AI could create or edit up to 90% of content on the internet by 2026.

6 ways to spot AI images and deepfake videos

Since it appears that artificial intelligence is not just a trend but is here to stay, here are six tips you can use to spot AI images and deepfake videos so you don't get duped.

1. Look for odd details, specifically hands

AI images may look like nothing out of the ordinary at first glance, but the longer you look, the more likely you'll find odd or inaccurate depictions of body parts or background characters. Hands typically have extra or mangled fingers, extra body parts may appear out of nowhere, a person's eyes may look in opposite directions or facial features may be missing altogether, and bodily proportions may be off.

2. Watching a video? Look for unnatural eye and mouth movements, poor lip-syncing and unnatural skin tones

A common warning sign is jerky or inconsistent eye movement, and a lack of blinking. The figure's skin tone may seem unnatural or different from how they normally appear, so compare their appearance against a verified image or video.

3. Consider the context, then fact-check

If you see an image or video depicting an event that makes you raise your eyebrows, a simple Google search can usually reveal whether it actually happened: if the event were real, there would almost certainly be other documentation or reporting on it.

4. Does it look more like a painting than a photograph?

AI-generated images typically have inaccurate or unrealistic lighting, and can look almost "plasticky," with depictions of skin looking too airbrushed and smooth. If you look closely, they may even look more like a painting with brushstrokes than a clear photograph.

5. Try a reverse image search

Similar to simply fact checking, try a reverse image search and see if the photo's source comes up anywhere else, or if the image has been used on any other websites or news articles.
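If you want to automate this step for several suspect photos, the search URLs can be built programmatically. A minimal sketch in Python, assuming the image is publicly hosted at a URL and using commonly documented (but not guaranteed stable) URL patterns for Google Lens and TinEye:

```python
from urllib.parse import quote


def reverse_search_urls(image_url: str) -> dict:
    """Build reverse-image-search links for a publicly hosted image.

    These URL patterns are widely used but may change; open the
    resulting links in a browser to review the matches yourself.
    """
    encoded = quote(image_url, safe="")  # percent-encode the full URL
    return {
        "google": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "tineye": f"https://tineye.com/search?url={encoded}",
    }


# Example with a hypothetical image address:
for engine, url in reverse_search_urls(
    "https://example.com/suspicious-photo.jpg"
).items():
    print(engine, url)
```

If the same photo appears in older articles about an unrelated event, or nowhere at all outside one account, treat it with suspicion.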

6. Check for words and text used in the image

Another weakness of AI image generators is text within the image. While AI can often render individual letters properly, full words are almost always a jumbled and unreadable mess, so look at things like logos and street signs to check whether they're spelled correctly.