'The AI Dilemma': Unpacking Its Persuasive Techniques
You may have seen the AI Dilemma video circulating … It's the latest from the folks who brought you (wait for it) the Social Dilemma.
They discuss some of the very real issues around AI right now. They make strong points, many of which I’ve been talking about here in my little world, so you know I’m on board with these concerns.
Their presentation of this information, however, doesn’t trust you to draw your own conclusions.
They’re so focused on how big tech manipulates you through social and AI technology that they never stop to notice their own presentation is also designed to manipulate you.
Their language throughout (“the monster,” “the golem,” etc.) is all designed to freak you the fuck out. I’m not gonna lie, some of this stuff is freaky - I’m not debating that piece.
But personally, it’s much easier for me to respect a presentation that allows me to draw my own conclusions.
Here are a couple of specific points I want to highlight from the presentation, points that rely on your fear response to hold your attention … and aren’t entirely reasonable.
10% chance of what now?
One of their key (fear-inducing) points is the statistic that half of AI scientists believe there’s a 10% chance AI will cause human extinction. Then they show a photo of a crashed airplane and ask whether you’d still get on an airplane that had a 10% chance of crashing.
… They even draw a parallel to nuclear war, presenting it as a similar moment when we came together as a species and de-escalated a potential threat.
So, this 10% statistic came from a survey of 162 people (they initially contacted 4,271 people who had published papers at machine-learning conferences). Stretching it to “half” is … well, a stretch. (You can read more about this at Professor Melanie Mitchell’s substack here).
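For a rough sense of scale, here’s some quick back-of-the-envelope arithmetic on the numbers above (the framing is mine, not the video’s):

```python
# Quick arithmetic using the survey figures mentioned above.
contacted = 4271    # researchers invited (had published at ML conferences)
respondents = 162   # researchers who actually took the survey

response_rate = respondents / contacted
half_of_respondents = respondents / 2

print(f"Response rate: {response_rate:.1%}")                        # ~3.8%
print(f"'Half of AI scientists' = ~{half_of_respondents:.0f} people")  # ~81 people
```

In other words, roughly 81 self-selected respondents are being made to stand in for “half of AI scientists.”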
Debunking that number doesn’t change the fact that there IS a greater-than-zero chance that AI could fly off the handle and kill us all. Not Terminator-style, more like a system programmed to optimize its output figuring out that it can optimize better if we’re not around.
There are a handful of ways we might blow ourselves up, each with a greater-than-zero chance. This number (which may or may not be accurate) isn’t in the video to be helpful; it’s there to get your attention and keep you afraid.
Personally, I don’t think we can guess, or put a meaningful statistic on the chances. Every argument I’ve seen for why AI would suddenly destroy humanity assumes that AI will always strive to optimize its core programming, and the idea that it might suddenly grow morals is personification. Meanwhile, AI can teach itself things that no one programmed in directly, in ways even its developers can’t fully explain. That tells me that ALL theorizing about what an advanced AI would choose to do on its own is guesswork … really, we just have no idea. That percentage could be 0%; it could be 100%. They don’t need to show us a photo of a crashed airplane for that to be an intimidating concept.
They used isolating psychology at the end of the video to keep you afraid
At the end of the video, they say,
Leaving this room, there’s going to be this weird snapback effect that you are going to leave here and you’re going to talk to your friends and you’re going to read news articles and it’s going to be more about AI art and ChatGPT bots that said this or that, and you’re going to be like “what the hell was that presentation I went to even real, or is any of this even real?” […] so just be really kind with yourselves, that it’s going to feel almost like the rest of the world is gaslighting you and people will say at a cocktail party, like “you’re crazy, look at all this good stuff it does” […] so just really take some self-compassion.
I suspect - or at least I hope - this is coming from a genuinely good place.
But … dammit, dudes.
This is the same tactic used by groomers, cults, etc. Isolate your target. Make them feel like it’s a “them vs. us” vibe. Create an undercurrent of “the rest of the world is lying to you, don’t let them pull you away.”
If the “AI Dilemma” is real, if the concern is real (and, imo, it should be), then you should not need to manipulate people into believing your content.
Let the content speak for itself. There are plenty of excellent points made in this video that everyone should be aware of.
My hope in writing this is that, if you watch the video or when you reflect back on it, you can use these observations to notice - and therefore strip away - the manipulation tactics, so that you can see the concepts with a clear head.
The arrival of AI is scary and incredible. There are many, many ways that we might fuck it all up, or change the world for the better - or both. Honestly, probably both.
And once again, much of this will be outside of your direct influence, which can cause a tremendous amount of anxiety. So focus on the pieces that you do have control over:
Vote for tech-savvy, wise government officials.
Engage with content with a healthy dose of skepticism; it’s getting more and more difficult to know where it’s really coming from, and what’s real.
Be mindful of what technology gets your attention and time. You get to choose what tech you use, and you get to decide how you use it.
Talk to your kids about AI bots, like you would any other internet content. Consider whether or not you feel it’s healthy for your kids to have an AI bot friend. There’s not gonna be a one-size-fits-all answer there.
Be mindful of what data you drop into AI (no sensitive data).
Don’t panic; bring a towel.