Did You Know Squid Game Is Based on a Real Internment Camp? Me Neither. (It’s Not.)
This morning started like most of mine: cracked open a Monster, opened my laptop.
Then a friend dropped an Instagram reel into a group chat: “Whoa, did you know Squid Game was based on a real-life internment camp in South Korea?”
I tapped play. The reel was slick: old-timey AI photos of kids in numbered tracksuits, masked guards, brutalist buildings.
Felt real enough for 9 AM.
And my friend wasn’t questioning it. They believed it. So, yeah, I kinda did too.
Then I did the rarest of things: I googled it.
Turns out, nope. Squid Game wasn’t based on any real-life camp.
The reel was pure fiction, a mashup of AI vibes and confident storytelling. Even the Korea Times put out an article debunking the whole thing.
The photos? Probably Midjourney or something like it. They looked historical, but there was zero actual history behind them.
And that’s what got me. Not that the reel existed, but that it was so easy to believe. Especially because someone I trust already had.
We’ve Always Had Misinformation. But This Feels Different.
This kind of thing isn’t new. We’ve had fake photos, hoaxes, and wild conspiracies for ages.
But AI cranks it up. Now the fakes look better, spread faster, and feel way more real.
The danger isn’t just what’s false, it’s how real it feels.
And look, I’m not anti-AI. Far from it. I use Cursor every day, ChatGPT is helping me write this, and I honestly think these tools can unlock a ton of creativity and efficiency.
But they can also mess with our grip on reality in ways we’re not quite ready for.
What Happens When Reality Gets Automated?
What happens when a fake photo shows up in court? Or a politician’s cloned voice “confesses” something they never said? Or when millions believe something false before the truth even has a chance?
That’s the scary part. Not the tools, the speed. Truth’s slipping fast, and not just online. It’s crawling into our heads and messing with who we trust.
“Pics or it didn’t happen” is dead. Now it’s more like, “Pics? lol, nice render.”
Reality’s on fire.
So What Do We Do With This?
Honestly? No idea. But here’s what I’m trying:
- Pause. Even if it’s from someone you trust.
- Check who’s talking. A real source, or a clown?
- Stay curious, don’t turn into a jerk.
The reel I saw was mostly harmless. But what comes next? Who knows. If AI keeps blurring the lines, maybe all we can do is slow down, ask smarter questions, and keep our curiosity alive.
And if all else fails, well… I guess I’ll just keep writing blog posts about it.
Have you ever believed something at first glance, and later realized it wasn’t true? What made it believable? Was it the person who shared it? The way it was packaged?
It’s a weird world we live in, where even the words you’re reading now were shaped by AI. Oh, and by the way, since ChatGPT helped with this, maybe this whole post is fake too. Or maybe it’s just the beginning of a very convincing new reality.