Review: Shell Game

Artificial intelligence is not going anywhere. It is not the ultimate evil, and it is not the best thing to happen to humanity. It is a thing that we are using. I am of the mindset that we can only hope to rein in and regulate what has been happening. I say this because barreling into the unknown headfirst can sometimes cause a concussion - or worse.
I try not to fall into doomerism, because I just don’t think it’s productive. However, the fact remains: I am not particularly optimistic about the next three to five years of humans and AI. There are so many consequences that we are not considering, or that we don't know how to consider, because we are rushing into this.
I reiterate: it’s not AI as a thing. It is just that, a thing. It’s the human that’s the problem. The human hubris, the impulse to achieve for the sake of an unknown we don’t fully see. Sure, not everyone knows what they are doing - but there’s a reason we have a history of testing on animals before we get to human trials. As fraught as that question is, we can’t deny that there are things we figured out were dangerous because we didn’t try them on humans first.
Today’s podcast brought to light a lot of general feelings I’ve been having about artificial intelligence, and the Podcast Brunch Club discussion around it helped me reframe my thoughts. Originally I wrote eight pages about Shell Game by hand. Now I have a slightly more nuanced idea of what this podcast meant, and of what it could mean.
Shell Game Season One follows journalist Evan Ratliff's exploration of AI-based voice clones. In 2024 he popped his voice into a cloning service, attached it to ChatGPT, and got to experimenting. Customer service agents, unwitting scammers, and his friends were all mice in the laboratory. Can we create these digital clones and set them to do our dirty work, so we can read a recap of that meeting while we're on the beach? Is it possible, and is it effective?
As far as production goes, this is a good podcast to listen to. It is polished and professional. The voices are warm and clear, the style is straightforward, and it’s easy to listen to. The story follows a clear track and deftly leaves questions unsaid, and unanswered.
I have some things to chew on when it comes to the ethics of this podcast. Consent to be on the show was always given, but consent to be experimented on with an AI voice clone was not. I understand that this was not a medical question. Ratliff wasn’t coming at his friends with scalpels asking to chip their brains. The potential for harm was relatively low in the grand scheme of things. However, if I didn’t know there was a possibility of chatting with an AI clone, and the conversations were going the way some of these were - I would be furious. I would be hurt. I would also be creeped out. Maybe Ratliff has different understandings with his friends, and maybe my hesitation to jump into things is what’s giving me the “ick” here - but the fact remains that parts of this podcast feel gross in a way that could have been controlled.
Which points to another question. What are the ethics of AI, and where do we draw our lines for the sake of innovation? The ends don’t always justify the means. I’m hard-pressed to say whether this breach of ethics, this creeping feeling, was worth the questions this podcast has posed. I am not sure the podcast would have been as effective for me without the ick. What seems at first to be a question about an internet that forever circles back on itself eventually blends into a bigger one. What is the point of AI being used in this way?
In the final episode, Ratliff and his father create an AI that can answer questions about the elder Ratliff’s particular field of study. However, our host goes to the AI hoping to talk about childhood memories. The humanity of the AI is stripped by the lack of humanity given to it. This blends into every other issue of artificial intelligence that we are facing right now. There is a difference between generative and assistive AI. There is a difference between something feeling natural and something being replaced. What this journalist explores is just scratching the surface of these issues.
You can add humanity, as Ratliff himself does. You can feed the AI information to get it to fool your friends. However, what is the cost of this replication? What is the purpose of replacing human interaction, and what is its utility? I may very well write more about this separately, but having these questions scratched at by the podcast is good - so long as you have a place to put them. I had my Podcast Brunch Club and my friends. What happens when we put all of this into a chatbot? I’m not interested in that answer, really. It doesn’t feel like it would be a real one.
I think this podcast may be slightly behind the times on AI, but the questions it wants us to ask are right where we need to be - or maybe where we needed to be two years ago.
If you like this review, consider exploring my review of Bot Love.

If you want to support my reviews, send me a tip or donate to my ko-fi. This website is free to you, but not to me.