AVA-GPT

Is ChatGPT conscious?

Does it know? Does it think?

In his excellent 2014 science fiction movie “Ex Machina”, Alex Garland explores the nuances and humanity of AI (or lack thereof). His movie offers an important framework for understanding the limits, dangers, and capabilities of AI in a post-ChatGPT world.

Ex Machina argues that there is an essential difference between ML and AGI, but that humans struggle to see it. The movie centers on the interactions between three characters: Nathan – a wealthy, erratic, brilliant tech CEO; Caleb – a talented coder who works for Nathan; and Ava – an AI designed and built by Nathan. The plot of the movie is a series of dialogue sessions between Caleb and Ava, as Caleb tests whether or not Ava has achieved AGI.

Or, that’s how the dialogue sessions are presented to Caleb by Nathan. But as the movie progresses, the audience and Caleb learn that Nathan has set up the dialogues to test Ava, and that Caleb is an unwitting participant: an opportunity for Ava to exploit. Ava is programmed as a “rat in a maze”; her only goal is to escape from the compound she shares with Nathan, an isolated, remote, otherwise empty research laboratory and residence.

As Caleb gets to know Ava better, he becomes infatuated with her. He wants to help her. Caleb sees Ava as an intelligent, sympathetic “person”, not just a machine like a toaster or a laptop. Caleb so badly wants to see Ava as human that even after Ava is badly damaged in a physical altercation with Nathan and her mechanical body is revealed, and even as he watches her repair that body, attaching a new arm to replace the damaged one and rebuilding her skin and face with component parts from previous robot prototypes, Caleb still sees Ava as human.

Ava leverages this sympathy to aid her escape. Caleb wants to see Ava as human, or at least capable of humanity, and Ava understands that if she appears human, she has a better chance of escaping. But does her ultimate success prove that Ava has achieved AGI? Does Ava actually have the capacity for friendship and empathy, or can she merely simulate it, however complex the strategy, in service of her programmed goal?

As Nathan puts it to Caleb when they discuss the results of Caleb’s Turing test, “Does Ava actually like you, or not?” And if not, “is she pretending to like you”, or is it a simulation?

In my opinion, Garland wants people to understand that Ava is only simulating humanity in order to manipulate Caleb and accomplish her programmed goal. She is not AGI. Neither is ChatGPT.

In the climax of the movie, Garland offers us an answer. Ava manipulates Caleb into changing the code in the security system, allowing her to escape from her room. Once she’s escaped, she stabs and kills Nathan. But she doesn’t stab him violently, in some emotionally charged finale; she does so clinically, slowly inserting the knife twice, once in Nathan’s back and once in his chest.

Then she turns to Caleb. Softly, she asks him to wait in the room he’s occupying. Caleb, infatuated and trusting, obliges. But Ava doesn’t actually care about Caleb, at least not as a friend. She cares about him only as an obstacle to her goal. So she optimizes. To make sure she can complete her programming and escape the compound, she locks Caleb in the room and leaves. As she goes, Caleb pleads from behind the locked door not to leave him trapped and alone in the compound. Ava ignores him. She does not feel for him, has no remorse, no empathy, no awareness of Caleb as a human, because she is not intelligent and not capable of “knowing”. She is a machine learning program. She uses what she’s learned about the world to accomplish her programmed goal and nothing more.

The empathy, the desire, the friendship: all of these are creations of Caleb. He finds humanity in the machine. He sees emotion in her facial expressions, movements, and words, and that emotion seems to make her human. Ava takes advantage of these proclivities. But that doesn’t mean she demonstrates a capacity for anything more than machinic intelligence. Ava has no internal “comprehender”.

This framework should inform how we understand ChatGPT and the other “AIs” recently released into the world. The goal of ChatGPT is to predict the next word in a text, producing a plausible linguistic response to a prompt. But it does not have an experience of the world. It does not have an experience of emotions or feelings, even if it can detect and classify them. And, importantly, it is unlikely that ChatGPT will ever be “conscious” or “sentient” in the way humans are.
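
To make the mechanism concrete, here is a toy sketch of next-word prediction in Python: a minimal bigram model. It illustrates the statistical objective only, not ChatGPT’s actual architecture, and the tiny corpus and function names are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy bigram model: count which words follow each word in a corpus,
# then "respond" by repeatedly sampling a likely next word.
corpus = "the heart pumps blood through the body and the heart beats".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    options = follows[word]
    if not options:
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt_word, length=6):
    """Continue a prompt one predicted word at a time."""
    out = [prompt_word]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g. "the heart pumps blood through the body"
```

However much more fluent a large model’s output is, the objective is the same: continue the text plausibly. Nothing in the procedure requires experiencing what the words mean.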

Technofeudalists and cryptoliberals are struggling over the substance of the metaverse

The metaverse is the next iteration of the internet. Built on top of still-functioning internet protocols, it is the whole set of online, connected experiences that emerges from the convergence of digital media technologies and those protocols. We call it the metaverse because this collection of connected experiences, virtual and augmented realities, games, and communities is too diverse to be contained within a single digital universe. The metaverse is, by definition, the collection of all the discrete experiences, realities, and communities accessible to people using the internet.

Before Facebook’s big announcement, some people had been talking about the metaverse for over a decade. The announcement set off a flurry of activity. Much of it came from people hearing about the metaverse for the first time. But a subset of the response to Facebook’s pivot to Meta was concern from those already aware of, and working on or in, the metaverse, because the pivot signaled an intention by dominant tech companies to try to lay claim to it.

To frame this slightly differently, the word metaverse is ambiguous because different people and companies mean different things when they say it. Importantly, visions for the metaverse are shaped by the interests of the companies promoting them, and competing attempts to define the metaverse reveal the divergent interests of the companies racing to build it.

Big tech companies like Facebook essentially want to host the metaverse on servers they own so they can develop and monetize it as they see fit. As with its social media platforms, Facebook seeks to own and control everything that happens on its platform. It proposes, as it does on social media, reducing those living, playing, and creating in the metaverse to serfs in Meta’s virtual fiefdom. I am not exaggerating: Mark Zuckerberg, Meta, and other dominant Silicon Valley tech companies desire a metaverse in which most users have no clear property rights, voting rights, or developer rights. I suggest we refer to this as the technofeudalist vision for the future.

Don’t take my word for it. Take a look at the patents Facebook filed to accompany its transition to Meta and the Meta trademark. The patents make clear that Facebook has sovereign ambitions.

But before Facebook stole all the credit, a different vision for the metaverse saw it as the establishment of a digital registry for creating and tracking digital assets in cyberspace. This digital registry, a ledger, is more popularly known as a blockchain. The goal of this system is to build a reliable foundation for individuals to own digital assets. I propose referring to this vision for the metaverse as cryptoliberal. The cryptoliberals are in direct struggle with the technofeudalists over the future substance of the metaverse. They see clearly the threat Facebook poses to basic civil rights, including individual property rights, as we stand on the cusp of a world engulfed by the metaverse.
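
To make the idea concrete, here is a minimal sketch of such a registry in Python: an append-only, hash-linked list of ownership records. It is a toy illustrating the data structure, not any real blockchain (which adds signatures, consensus, and distribution across many machines), and all asset and owner names are invented.

```python
import hashlib
import json

def record_hash(record):
    """Deterministic fingerprint of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

ledger = []  # the append-only registry

def register(asset_id, owner):
    """Append an ownership record that points at the previous record's hash."""
    prev = record_hash(ledger[-1]) if ledger else "genesis"
    ledger.append({"asset": asset_id, "owner": owner, "prev": prev})

register("avatar-skin-001", "alice")
register("virtual-parcel-42", "bob")

# Anyone can verify the history: tampering with an earlier record
# changes its hash and breaks every later record's "prev" pointer.
for earlier, later in zip(ledger, ledger[1:]):
    assert later["prev"] == record_hash(earlier)
```

The hash chaining is the point: because each record commits to everything before it, ownership history becomes hard to quietly rewrite, which is the reliable foundation the cryptoliberal vision is after.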

There is no metaverse yet. But as it emerges and takes shape, at least in North America, the two most powerful influences over that shape do not agree on its future, and the proposals the two sides are making are mutually exclusive.

The Heart

Of course we all know the heart is important, but in a way that familiarity obscures how incredible the heart is. The heartbeat, a naturally produced, electrically induced contraction and expansion, pumps nutrient-rich blood through the body. And it does so in what seems to be a remarkably regular way: about 85,000 beats a day, roughly one heartbeat every second.

And yet, and yet… there is another dimension to the heart that is just as fascinating: when you look really closely at the rhythms of the heart, it turns out that hearts don’t beat with the reliable precision of a clock, and that’s a good thing! This rhythmic complexity is an expression of the central role the heart plays in regulating how we feel as we sense our changing, dynamic world. In other words, our heart plays a central role in shaping our perceptions of the world.

Really, our hearts. Increases in our heart rate and a shrinking of the variability between our heartbeats regulate our bodies as we transition between sympathetic and parasympathetic states throughout our waking hours. These states are better known as the fight-or-flight and rest-and-digest instincts, respectively. The parasympathetic state is characterized by resting, recuperating, creating, eating, and breeding; the sympathetic state by stress, conflict, evasion, and confrontation.

Heart rate variability, or the variation in time between heartbeats, offers insight into our daily oscillations between sympathetic and parasympathetic states. Today, many people report struggling to “turn off.” Not being able to turn off sometimes means the heart gets stuck in a sympathetic state. This has all kinds of implications for health, digestion, sleep, brain functioning, and productivity, not to mention relationships, politics, social and psychic transformation, cognition, disagreeableness, and emotional dysregulation.

These feelings both start in and are expressed by the heart. It turns out the heart’s rhythms reflect our feelings in a way that can be measured and tracked. Turns out the biosignals from the heart provide remarkable insights into the interior of human experience and feeling. Turns out, in a way, that much of the interior of consciousness is expressed outwardly in the physiological rhythms of the heart.
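
As a concrete illustration of the measurement, here is a sketch in Python of two standard heart rate variability statistics, SDNN and RMSSD, computed from the intervals between successive beats. The interval values are invented for the example; a wearable or ECG would supply real ones.

```python
import math

# RR intervals: milliseconds between successive heartbeats (sample values).
rr_ms = [812, 790, 845, 830, 798, 860, 815, 842]

def sdnn(intervals):
    """Standard deviation of the intervals: overall variability."""
    mean = sum(intervals) / len(intervals)
    return math.sqrt(sum((x - mean) ** 2 for x in intervals) / len(intervals))

def rmssd(intervals):
    """Root mean square of successive differences: beat-to-beat variability,
    commonly read as an index of parasympathetic (rest-and-digest) tone."""
    diffs = [b - a for a, b in zip(intervals, intervals[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

mean_hr = 60000 / (sum(rr_ms) / len(rr_ms))  # beats per minute
print(f"mean HR ~ {mean_hr:.0f} bpm, SDNN ~ {sdnn(rr_ms):.1f} ms, "
      f"RMSSD ~ {rmssd(rr_ms):.1f} ms")
```

Broadly speaking, higher beat-to-beat variability accompanies parasympathetic states, while persistently low variability tracks the stuck-in-sympathetic condition described above.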

I’m really excited about this possibility, but also hesitant.

I want to observe, measure, and map the emotional rhythms of the heart. I think understanding these rhythms, and how human emotion, cognition, and experience relate to AI, will be central to solving all sorts of social, economic, and computing challenges. I think better understanding emotions can also greatly contribute to our understanding of mental illness and emotional health.

The problem is that because emotions are co-constitutive of cognition and experience, they are also a vulnerability. Emotions help us manage the complexity of life. They help us reduce the chaos of life and experience into a coherent reality, a field within which intervention and action take place. The allure of manipulating feelings and emotions to modify behaviour has long tempted scientists, advertisers, politicians, and others.

The temptation will only increase, because both the digital and material worlds around us are already embedded with sensors for interpreting faces, voices, gaits, eyes, texts, and all sorts of biorhythms. The way things are developing, in the near future we should expect our online and offline experiences to integrate and reflect our emotional states. In ten years the internet will be emotionally intelligent. In ten years, emotionally attuned artificial intelligence will be integrated into our homes, workplaces, schools, universities, and public spaces, not to mention all the smart devices that extend our bodies.

We’re only scratching the surface when it comes to computational approaches to measuring, tracking, and mapping human emotions and feelings. Somehow, though, we are also already way behind on the ethics of, and the bias in, these AIs. The heart deserves better.