[Image: A magnifying glass over a light blue surface, symbolizing the care we should take when analyzing content online, the importance of media literacy, and the new skills AI-generated content requires of us.]

Trust But Verify—Identifying AI-Generated Content

Today, I did a test with my own child. For context, the child is 12 years old and has just finished 6th grade. I showed him two videos and asked him if he could tell me what they had in common.

Here are the videos in question:

Can you tell what those videos have in common? Because he could.

AI-Generated Content and Media Literacy

Like it or not, AI-generated content is becoming increasingly common, and it is not going to stop. It started with captions on social media, then some text here and there, and eventually moved to entire articles (remember the one we mentioned in our blog two weeks ago about the summer reading list that included fake books?), and now we are seeing more images, audio, and video being generated by AI.

And no, that is not a bad thing per se.

AI has opened the doors to many ideas for many people. A lot of content that would otherwise stay in their creators’ minds has been able to see the light of day thanks to how easy it is now to bring those ideas to life. Still, just as many good things can come from it, we should take a moment to reflect on how we can and should adapt to AI-generated content, as well as how to distinguish between misinformation, propaganda, and genuine content.

According to a survey conducted by Hookline& about written content:

  • 82.1% of people say they can sometimes spot AI-written content.
  • Among those aged 22–34, the rate rises to 88.4%.
  • Only 11.6% of young people said they never notice AI content.

Of course, these numbers change a lot when it comes to images or video. A recent study by iProov found that only 0.1% of participants could accurately identify all deepfake and real content (including images and videos) in a test where they were specifically primed to look for fakes. This suggests that in real-world scenarios where people are less aware, vulnerability to deepfakes is even higher.

How Can We Get Better at Identifying AI?

The key to navigating the flood of AI-generated content lies in media literacy—a skill that, much like critical thinking, must be taught and practiced. Yes, AI detection tools exist, but you and I know they are far from perfect and often lag behind the rapid advancements in generative AI. Instead, we must train ourselves (and younger generations) to recognize subtle clues that hint at artificial creation.

As more content is created with or enhanced by AI, the responsibility to pause, question, and verify falls on each of us. So, how can we get better at spotting AI-generated content? Here are a few practical starting points:

1. Pause and Analyze the Context

Before sharing or reacting to a piece of content, ask yourself: Where is this coming from? Who posted it? Does this seem too perfect or too emotionally charged? Does it make sense? Returning to the kangaroo video as an example: while I must admit I did not immediately identify it as AI-generated, I also did not think it was real, for a very simple reason: in most places, it is illegal to keep a kangaroo as a pet. And even setting the legality aside, who would actually train one to be an emotional support animal? Honestly, I couldn’t see that being possible (but if you know somebody who trains kangaroos as emotional support animals, please let me know).

AI-generated content often lacks context, makes no sense, or originates from a source that lacks credibility. A quick search can help identify these discrepancies.

2. Look for Visual and Auditory Artifacts

With images and video, look closely. AI-generated images and videos often exhibit small irregularities that betray their synthetic origins, such as odd distortions in hands, teeth, or backgrounds. Conversely, they may seem overly perfect: unnaturally smooth skin, hyper-symmetrical faces, or lighting that doesn’t match the environment. Keep an eye out for unnatural blinking and out-of-sync movements, as those can be signs of AI-generated content, too.

3. Cross-check with Trusted Sources

When in doubt, check multiple reputable outlets. If a story or image is truly groundbreaking or important, others will likely be reporting on it as well. If it exists only in one place—or if that place seems sketchy—that’s a red flag.

4. Use Tools (They’re Getting Better, Too)

There are emerging tools that can help identify AI-generated content. Platforms like Hive, Deepware Scanner, and Microsoft’s Video Authenticator are designed to help detect manipulated media. While they’re not perfect, they’re improving—and they can be helpful in combination with your own critical thinking.

5. Teach the Next Generation Early

Back to my kid for a second: I asked how he figured out the videos were fake. He pointed out how strangely the lady in the first video was blinking, how her mouth looked out of sync, and how her skin had an almost glowy appearance. For the kangaroo video, his response was more innocent—he said, “Kangaroos cannot hold boarding passes,” 🤣 but hey! He knew something was off!

Kids today are growing up with this technology, and many of them will become quite adept at navigating it. But we can’t assume that ability is automatic. Media literacy should be a regular part of conversations at home and in schools, just like teaching online safety or critical thinking.

It’s not about fearing AI. It’s about being aware.

AI-generated content isn’t inherently good or bad—it’s a tool, and there’s real value in it when used responsibly and transparently. It can democratize creativity, break down language barriers, and enhance learning, but it can also be used to manipulate, mislead, or simply overwhelm us with noise and nonsense.

Sharpening our observational skills is as important as demanding transparency (yes, those AI-generated content tags that some people dislike), and educating others is how we can continue to enjoy social media and AI’s creativity while minimizing its potential harm.

So, the next time you see a flawless image, a viral video, or a quote that feels a little too perfect, take a second look. Your awareness might be the difference between spreading truth and amplifying fiction.

Our digital future depends not just on the tools we use—but on how well we understand them.
