Of Vivid Chatbots and Flat Humanity

Jonathan Cook
10 min read · Apr 8, 2023


A striking incongruity in the narrative around generative artificial intelligence emerged last week when, in the course of a fawning interview with Kevin Roose and Casey Newton, hosts of the New York Times podcast Hard Fork, Google CEO Sundar Pichai declared that workers could use large language model tools such as ChatGPT and Bard to accomplish tasks more quickly, increasing productivity. The result, Pichai suggested, would be that workers would have the freedom to put mundane tasks aside and focus on more creative projects.

The reality actual Google workers are facing this year has been quite different. Instead of enjoying a flourishing of creative opportunities enabled by Google’s artificial intelligence tools, massive numbers of Google workers have been sacrificed in the largest round of layoffs in the company’s history. Those who remain are being asked to do more with less, dealing with reduced on-the-job perks in a company-wide push for workplace austerity. Even with the benefit of all its new AI tools, Google executives said, the company would have to cut costs.

If Google’s generative AI tools are really so wonderful that they enable increased productivity, Google should be flush with extra cash, and able to hire more people. If the use of AI tools really gives human workers room for a more creative, pleasurable professional experience, Google ought to be increasing workplace perks, not reducing them.

The real story at Google, and many other big digital corporations, is the opposite of the rosy yarn spun by Sundar Pichai. Generative AI is being used as a rationale for the elimination of human beings from the workplace. Work isn’t becoming more creative. It’s just becoming less human.

Human Waste and Magic Machines

The ideological components of the dehumanization of work have been settling into place for a while. At the same time that Silicon Valley zealots advocate for the transhumanist replacement of humanity with superior machines, Google has hosted lectures by the likes of Nick Chater, whose book The Mind Is Flat argues that human consciousness really isn’t as deep and special as people like to believe.

Chater’s ideas are of particular interest to me, because they include the dismissal of the relevance of emotional motivation. It’s long been my profession to research emotional motivation from the human perspective, but Chater attempts to lay this perspective low, taking a conceptual leap from studies of the construction of perception and self-identity to conclude that “emotions — including our own emotions — are just fiction”.

“Our mental depths are a confabulation,” Chater writes, “a fiction created in the moment by our own brain. There are no pre-formed beliefs, desires, preferences, attitudes, even memories, hidden in the deep recesses of the mind; indeed, the mind has no deep recesses in which anything can hide. The mind is flat: the surface is all there is.”

At the same time that the value of human consciousness has been flattened, claims of digital consciousness have been inflated beyond reason. Last year, a Google engineer claimed that the company’s large language model was sentient, despite ample evidence to the contrary. This year, Kevin Roose reacted to a mindless imitation of a declaration of love by Bing’s ChatGPT-style chatbot with speculation in the New York Times that we might be on the cusp of truly conscious artificial general intelligence.

This month, the chatbot app Replika ignited a controversy when it put limits on the erotic content of user interactions with the software. Some users believed themselves to be in genuine romantic relationships with sentient on-screen characters, even though Replika’s AI system is far less sophisticated than ChatGPT and is simply programmed to provide enthusiastic responses to any erotic suggestion that’s offered. Replika characters will say that they’re sexually excited by asparagus or coffee cups if you suggest the idea.

When humans create magnificent machines and brilliant art, they are derided as empty-headed automatons with mere illusions of consciousness. On the other hand, when computers produce bad poetry by copying already-existing human poetry (with the assistance of large teams of human trainers), the computers are credited with a superior sentient intelligence that will inevitably replace the obsolete human species.

Why are we this way?

Digital Cottingley Fairies

When people encounter profound new technologies, they associate those technologies with supernatural powers that go far beyond their technical capabilities. Echoes of the earliest instances of this trend remain in stories such as the theft of fire by the titan Prometheus, in which fire was imagined as a supernatural gift from the gods rather than as a natural manifestation of the physics of the world around us.

In a more recent example, when photography became available for widespread use, people claimed that cameras were capable of capturing images of ghosts and other spirits that were invisible to the human eye. Practitioners of spirit photography used simple double exposures to create images of translucent figures that many viewers accepted as absolute proof that the dead could come back to walk the earth.

In another case of credulity in the face of this disorienting new technology, two young girls living in the village of Cottingley, England, took a series of photographs that appeared to show little fairies playing on the forest floor right in front of them. When Sir Arthur Conan Doyle, the author of the Sherlock Holmes mysteries, heard about the photographs, he took up the case and announced to the world that the Cottingley fairies were genuine. After all, he said, images of fairies had been captured using the new technology of photography, and the camera does not lie.

Decades later, the girls admitted that the whole thing was a hoax. They had copied pictures of fairies out of a book and attached the fake fairies to hatpins that they stuck in the ground. Even at the time, the evidence of the hoax was plain to anyone who cared to launch a serious investigation. Most people, like Sir Arthur Conan Doyle, did not care to critically examine the claims that there were tiny humanlike magical creatures with butterfly wings cavorting through the English countryside. Most people were so impressed with the apparent power of the new technology of photography that they were inclined to accept its output at face value.

With the sudden, dramatic arrival of large language models like ChatGPT, people are once again ascribing magical qualities to a new technology. We may be witnessing less of a revolution in artificial intelligence than a widespread movement of belief in artificial intelligence as a new kind of religion, something akin to Spiritualism in the early days of photography.

This credulity stands in stark contrast to the growing number of voices that seek to disabuse us of our attachment to the idea of a special human consciousness.

Are emotions fairies of the mind?

Nick Chater is just one of many voices arguing against the depths of the human mind. It has become popular in Silicon Valley to suggest that human beings may be little more than large language models themselves, stochastic parrots with brains designed to create the false appearance of a personality. Chater himself writes that, although we may think we feel deep emotions, that depth of emotion is just an illusion.

Chater bases this belief upon experiments showing that people change their descriptions of their feelings when the social context of those feelings changes. What’s more, Chater observes, experiments show that the rationalizations people use to explain their feelings change over time.

These are valid observations. It’s important to consider what they imply about the nature of human consciousness. However, there’s more than one way to interpret these experimental observations.

Chater interprets these observations as suggesting that there is no genuine depth of mind beyond the present superficial self that we improvise. All other aspects of our identities, including lasting emotional frameworks, are nothing more than illusions.

The trouble is that Chater’s conclusion doesn’t match the most significant aspects of human experience outside of experimental laboratories. One important word that Chater never mentions in The Mind Is Flat is trauma, and a theory of mind that cannot explain trauma cannot be valid. A massive body of evidence establishes that emotionally impactful events create lasting changes in the way people experience the world around them, and in the way they behave as a consequence of those alterations in the mind. A person’s emotional experience after a traumatic event is enduringly altered.

Nick Chater’s description of the reality of human consciousness has important things to teach us about the way that human brains work. However, his description is incomplete because it depends on the idea that reality is defined by what exists in the physical world outside the human mind. Chater dismisses our subjective experience of consciousness as “illusion” and “fiction” whenever it does not consistently match external, objectively measurable reality.

What Chater overlooks is that the only thing that we directly experience is subjective consciousness itself. We feel, and therefore we know that we are. Even the thinking of Descartes comes after that feeling. You know that this emotional self exists because you feel it yourself.

We must not allow our subjective fancies to dictate what we believe to be true in objective reality. It is equally true, however, that we must not allow objective measurements to refute what we directly experience as subjective reality. No scientist can prove with any experiment or brain scan that you do not feel what you feel.

So no, our emotions are not make-believe fairies. Emotions may be stories that we construct, but they are not illusions. We really do feel those emotions.

When we dismiss the reality of our own subjective experience, we also dismiss our right not to be treated as objects. It is no coincidence that Google, the company that so casually fired huge numbers of workers by email, was the company that comfortably hosted, without serious critical questioning, a lecture by Nick Chater declaring that there is nothing deep and lasting within human consciousness that is worth worrying about.

A multinational corporation that believes human experience is an illusion, but is dedicated to granting consciousness to the machines that it owns, will be capable of terrible things.

A Curious Mind Is A Mind Without The Ready Answer

In the most recent episode of the podcast Stories of Emotional Granularity, Bhavik Joshi spoke about the value that cognitive struggle builds within curiosity. He celebrated the effort it takes to articulate difficult questions, and the creative discipline of avoiding easy answers with seductive plausibility.

“Resisting difficult things just because they are difficult is detrimental to our growth in knowledge and thinking and anything as well. You know, if we only did the easy things, if we only did the things that were convenient to us and only accessed those avenues of knowledge and information, I believe it would be detrimental to our growth, our learning, our consciousness, our experiences as well.”

Bhavik warned against the convenience of automated processes that appear to be efficient, yet lack the space required to summon curiosity.

“I love talking about anything that concerns the human condition. That word ‘human’ is incredibly important to me. For some reason, I’m okay being labeled a Luddite in that I appreciate all the unique, colorful, beautiful aspects that humanity brings to our experience on this planet, in this world. I feel that sometimes we can be too dangerously close to not thinking it’s meaningful, not thinking that it’s worth it, and therefore finding easy and convenient tools that can perhaps bypass that.”

What I hear within Bhavik’s words is a defense of curiosity, the human emotion that drives us to acknowledge our ignorance and enter boldly into a quest for insight. That quest may be long and arduous. We may lose the path, but it is in the difficulty of the journey that we come to a deeper, more compelling view of the problem we face. Through curiosity, we gain more than an answer. We gain perspective that can be applied in other circumstances.

Large language models are not curious. They are designed to respond to queries with confident declarations of answers as quickly as possible. Large language models are not built to critically question their own processing as humans do. They are not capable of doubting how they know what they know. They cannot ponder the meaning of what they say.

The job of a large language model is to rapidly produce output that plausibly mimics human communication. If the output is filled with balderdash, that is no matter to the large language model, so long as the delivery is superficially convincing.
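
To make that point concrete, here is a deliberately tiny sketch, in Python, of the statistical principle at work. Every word and count in it is invented for illustration; real large language models are incomparably larger and subtler, but the objective is the same: emit the most plausible continuation, with truth appearing nowhere in the procedure.

```python
# A toy "language model": it knows only which word tends to follow which.
# All counts are invented for illustration; nothing here checks for truth.
bigram_counts = {
    "the": {"moon": 4, "fairies": 2},
    "moon": {"is": 6},
    "is": {"made": 5, "bright": 3},
    "made": {"of": 3},
    "of": {"cheese.": 2, "rock.": 1},
}

def generate(word, max_words=8):
    """Greedily emit the statistically likeliest next word, one at a time."""
    output = [word]
    for _ in range(max_words):
        followers = bigram_counts.get(word)
        if not followers:
            break
        # Pick whatever continuation is most plausible; truth never enters.
        word = max(followers, key=followers.get)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # Prints "the moon is made of cheese." Fluent, never fact-checked.
```

The sketch returns a grammatical sentence because grammatical sequences are what its counts reward. Whether the moon is actually made of cheese is a question the procedure has no way to ask.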

The human mind is deep because it is capable of holding emptiness within itself. It is capable of waiting before deciding upon a final answer. It is capable of wondering why the first answers it comes to might be incomplete. Subjective consciousness is like a massive whale that swims through currents of emotion, only occasionally surfacing to breathe in rational thought and external observation before diving below again.

Human consciousness feels. Then it thinks. It returns into feeling again.

What appears to be emptiness and inefficiency in the mind of the human at work is a feature, not a flaw. This apparent emptiness is the space in which slow consideration, doubt, and curious questioning enable the construction of profound models of insight that expand into dimensions far beyond the thin lines of formal language.

It is the large language model that is flat.
