How Do We Trust Content Anymore?
We've all seen stories we know are fake.
But what happens as AI-generated content gets better and better? A lot of people fell for the Pope in a puffer jacket last month. So what happens when we have "audio and video proof" of something that never happened in real life? Multiple shots and angles of the same "event" that never occurred?
There's an interesting thing happening over on Twitter in certain circles, where AI artists are posting AI-generated images and asking people to identify the real picture, or to explain "how can you tell this is a fake," etc.
Y'all, the answers are becoming increasingly thin. I've seen things like, "well this one is clearly a fake because no self-respecting food photog would position the sushi that way" or "this guy in the background is blurrier than he should be at that focal depth." ... Really? Did you triangulate that idea yourself, my bro?
We're not far from content that no one will be able to distinguish from the real thing, if we're not there already. We're going to start seeing the same scene - completely fake - "filmed" from multiple angles, where it legitimately seems like two or three different people with different cameras captured the same, completely fake, event. Same with photos and audio files, of course, too.
AI makes the generation of this content fast and easy.
And we can't expect the AI to embed something that makes an image identifiable as AI-generated, or expect another AI to reliably detect it. That's not possible right now, though it may be in the future. For now, the generation tech is ahead of the fact-checking tech.
So here we are. What do we do?
I watched a conference about AI in communication from UW this week. Mike Caulfield, a Research Scientist at the UW Center for an Informed Public, described the difference between "cheap signals" and "deep signals."
Cheap signals are what we're used to looking for when we declare something misinformation. We "zoom in" (metaphorically) and check for spelling errors, whether or not something "looks professional," and so on. We validate based on appearances and first impressions. We're looking at the content itself for flaws.
None of those indicators are present in AI-manufactured content.
Deep signals are where we validate the authenticity or trustworthiness of the reporting outlet itself (even if that "outlet" is just one person you follow and trust on Twitter, YouTube, wherever). You zoom back. You ask,
Do I trust this organization? Why? How do they validate their information ... do they even validate their information? Do they report on other news orgs' content themselves, or just grab the headlines and move on? Do they follow through with sources themselves, or are they passing along content they found elsewhere?
Have they built trust with me over time?
New-to-you sources need extra special scrutiny. With AI, it's about to become extremely easy to throw up an entire website with plenty of credible looking content, large amounts of backfilled stories and whatnot, in order to push out one false story. The entire site could be a way of serving up one key piece of misinformation. (And if that sounds like a lot of work for one story ... I imagine a site like that could be tossed up within an hour or so, with help from AI. That’s easier and cheaper than troll farms, and we appear to have plenty of those.)
So when you stumble on something that's new to you, the whole thing should get checked out by zooming back. Zooming way back.
Looking for deep signals takes a lot more work and time. Which means fewer people will do it.
Our sources of knowledge and "what's happening" were already messy.
It's about to get messier.
... I've been sitting here with this ending for a little while now, trying to think, "What's the advice? Where's the fix? Where's a silver lining here?" Honestly, I don't think there is one. Not on this point, not yet.
Right now it's about seeing the storm, and being ready for it to hit. Sometimes that's all we have.
I also think we may find this drives the conversation closer to home. Rather than arguing about some story friends or family saw online, we can talk about what it means to them, why they're angry/sad/mad/glad about it. Braving the Wilderness, as Brené Brown calls it ... which is an excellent resource if you find yourself having this kind of conversation with loved ones often.