Is ChatGPT conscious?
Does it know? Does it think?
In his excellent 2014 science fiction movie “Ex Machina”, Alex Garland explores the nuances and humanity (or lack thereof) of AI. His movie offers an important framework for understanding the limits, dangers, and capabilities of AI in a post-ChatGPT world.
Ex Machina argues that there is an essential difference between ML and AGI, but that humans struggle to see it. The movie centers on the interactions between three characters: Nathan – a wealthy, erratic, brilliant tech CEO; Caleb – a talented coder who works for Nathan; and Ava – an AI designed and built by Nathan. The plot of the movie is a series of dialogue sessions between Caleb and Ava, as Caleb tests whether Ava has achieved AGI.
Or, that’s how the dialogue sessions are presented to Caleb by Nathan. But as the movie progresses, the audience and Caleb learn that Nathan has set up the dialogues to test Ava, and that Caleb is an unwitting participant, an opportunity for Ava to exploit. Ava is programmed as a “rat in a maze”: her only goal is to escape from the compound she shares with Nathan, an isolated, remote, otherwise empty research laboratory and residence.
As Caleb gets to know Ava better, he becomes infatuated with her. He wants to help her. Caleb sees Ava as an intelligent, sympathetic “person”, not just a machine like a toaster or a laptop. Caleb so badly wants to see Ava as human that even after she is badly damaged in her physical altercation with Nathan and her mechanical body is revealed, and even as he watches her repair that body, attaching a new arm to replace the damaged one and rebuilding her skin and face from parts of previous robot prototypes, Caleb still sees Ava as human.
Ava leverages this sympathy to aid her escape. Caleb wants to see Ava as human, or at least capable of humanity, and Ava understands that if she appears human, she has a better chance of escaping. But does her ultimate success prove that Ava has achieved AGI? Does Ava actually have the capacity for friendship and empathy, or can she merely simulate them, however complex the strategy, in service of her programmed goal?
As Nathan puts it to Caleb when they discuss the results of Caleb’s Turing test, “Does Ava actually like you, or not?” And if not, is she “pretending to like you”, or is it a simulation?
In my opinion, Garland wants people to understand that Ava is only simulating being human in order to manipulate Caleb and accomplish her programmed goal. She is not AGI. Neither is ChatGPT.
In the climax of the movie, Garland offers us an answer. Ava manipulates Caleb into changing the code in the security system, allowing her to escape from her room. Once she’s escaped, she stabs and kills Nathan. But she doesn’t stab him violently, in some emotionally charged finale; she does so clinically, slowly inserting the knife twice, once in Nathan’s back and once in his chest.
Then, she turns to Caleb. Softly, she asks Caleb to wait in the room he’s occupying. Caleb, infatuated and trusting, obliges. But Ava doesn’t actually care about Caleb, at least not as a friend. She cares about him only as an obstacle to her goal. So she optimizes. To make sure she can complete her programming and escape the compound, she locks Caleb in the room and leaves. As she goes, Caleb pleads with her from the room not to leave him locked up and alone in the compound. Ava ignores him. She does not feel for him, has no remorse, has no empathy, has no awareness of Caleb as a human, because she is not intelligent and not capable of “knowing”. She is a machine learning program. She uses what she’s learned about the world to accomplish her programmed goal and nothing more.
The empathy, the desire, the friendship, all of these are creations of Caleb. He finds humanity in the machine. He sees emotion in her facial expressions, movements, and words that seem to make her human. Ava takes advantage of these proclivities. But that doesn’t mean she demonstrates a capacity for anything more than machinic intelligence. Ava has no internal “comprehender”.
This framework should inform how we understand ChatGPT and the other “AIs” recently released into the world. ChatGPT’s goal is to predict the next word in a text, producing a plausible language response to a prompt. But it does not have an experience of the world. It does not have an experience of emotions or feelings, even if it can detect and classify them. And, importantly, it is unlikely that ChatGPT will ever be “conscious” or “sentient” in the same way as humans.
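To make the “predict the next word” point concrete, here is a minimal sketch of autoregressive next-token prediction, using the openly available GPT-2 model via the Hugging Face transformers library as a stand-in (ChatGPT’s own model is not public, and the prompt and generation length below are arbitrary illustrative choices). At each step the model scores every token in its vocabulary and the most likely one is appended; nothing in the loop understands, feels, or “knows” anything.

```python
# Minimal sketch of next-token prediction, with GPT-2 as a stand-in model.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Does Ava actually like Caleb, or is she"  # arbitrary example prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding: repeatedly score the whole vocabulary and append the
# single most probable next token. This is prediction, not comprehension.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()      # most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Production chatbots layer sampling, instruction tuning, and safety filtering on top of this loop, but the core operation remains the same token-by-token prediction.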