Should AI Cure Humanity Of Its Emotions?

What if human emotion isn’t a design flaw?

Jonathan Cook
10 min read · Aug 8, 2018

This week, I’m writing a series of articles about sentiment analysis, which is often referred to as Emotional AI. Engineers of this new kind of technology claim to be able to detect and analyze emotion using electronic sensors and machine learning. To date, media coverage of this emerging field has accepted Silicon Valley’s optimistic depiction of Emotional AI at face value. In this series, I’m attempting to balance that fawning coverage with critical questions, building toward the articulation of ways in which sentiment analysis can be employed to enhance the emotional connection between businesses and the human beings they serve.

A list of the other articles in this series can be found at the bottom of this article.

In the previous article in this series, I examined the exploitative, psychopathic side of Emotional AI. In this article, I want to explore a more subtle problem with sentiment analysis, one that comes under the guise of self-improvement.

There’s an ideology that’s taken root in Silicon Valley that regards our humanity as more of a burden than a source of pride. Humanity could do great things, adherents of transhumanism argue, if only we weren’t so… human. The transhumanist perspective gazes eagerly toward a future in which human life is improved, in some cases by a complete transition to mechanical life.

From this future perspective, the present condition of humanity looks weak, pathetic, broken and in need of repair. David Pearce, a prominent transhumanist, writes, “we’ll need to rewrite our bug-ridden genetic code.”

While some versions of transhumanism envision an enhanced emotional life enabled by integration with digital technology, many of the foot soldiers currently seeking to enhance human life with Emotional AI are moving in the other direction, using sentiment analysis tools to teach human beings how to repress their emotions.

The Rationalizers

Inspired by the anti-emotion strain of transhumanism, a growing number of tech companies are using sentiment analysis tools to identify when human beings are becoming too emotional for our own good. These Emotional AI programs target human emotion in the way that chemotherapies target metastatic tumors. Under these systems, after our emotions are detected by algorithmic scanners, connected digital services swoop in to rescue us from ourselves.

Planexta, a Ukrainian company that claims to be able to track 40 different emotions merely by taking EKG data from a person’s wrist, depicts emotion as a potential killer, in the deadly manifestation of “stress.” Planexta warns that “an intimate link between stress index and self-regulation was discovered, which lead to the identification of several factors that contribute to increased stress. As it turns out, the way we live is directly correlated with our stress index: stresses tend to compound on each other until eventually one deals the killing blow. So, how do we break this vicious cycle?” Not surprisingly, the company has the answer: its own product. Emotional AI processing of biometric data, Planexta says, will save us from our dangerous feelings.
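Planexta doesn’t publish how its stress index is actually computed. To give a rough sense of how tools in this space generally work, here is a minimal sketch that reads “stress” off heart rate variability, a common proxy: when the variability between heartbeats drops well below a personal baseline, the score rises. The function names, sample intervals, and baseline logic are invented for illustration; this is not Planexta’s method.

```python
import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive differences between inter-beat intervals (ms).
    In heart-rate-variability research, lower RMSSD is commonly read as higher stress."""
    diffs = np.diff(np.asarray(ibi_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def stress_score(ibi_ms, baseline_rmssd):
    """Crude 0-to-1 'stress index': how far current variability has fallen
    below a personal baseline. Entirely illustrative."""
    return float(np.clip(1.0 - rmssd(ibi_ms) / baseline_rmssd, 0.0, 1.0))

# Hypothetical inter-beat intervals (ms) captured from a wrist sensor
calm_ibis = [820, 790, 845, 810, 835, 800, 825]   # high variability
tense_ibis = [640, 652, 648, 655, 645, 650, 647]  # low variability, faster pulse

baseline = rmssd(calm_ibis)
print(stress_score(calm_ibis, baseline))   # ~0.0: near the wearer's baseline
print(stress_score(tense_ibis, baseline))  # ~0.8: the wearer is flagged as "stressed"
```

Note what a score like this does and doesn’t capture: it registers physiological arousal, not whether the wearer is anxious, elated, or simply exercising.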

AutoEmotive is planning to develop Emotional AI systems built into automobiles that will “detect human emotions such as anger or lack of attention, and then take control over or stop the vehicle, preventing accidents or acts of road rage.” No one wants to see road rage, of course, but most of the time, driver anger does not escalate into road rage. Often, drivers engage in perfectly safe behavior that could be read as anger by biometric scanners: singing along with loud music, for example. Would Wayne, Garth, and their buddies be forced to pull over if they were driving an AutoEmotive car, just because they were doing their Bohemian Rhapsody thing? Will James Corden have to cancel Carpool Karaoke?

Okay, maybe those in particular aren’t the most serious concerns we could have with AutoEmotive. The point is that, once again, an Emotional AI system is treating emotion as a problem for which technological control is the solution.
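AutoEmotive hasn’t disclosed what signals its system would read or how it would decide to intervene. To make the false-positive worry concrete, here is a toy sketch of the kind of threshold rule a biometric anger detector might apply; the sensor readings, field names, and thresholds are all hypothetical, not AutoEmotive’s design.

```python
from dataclasses import dataclass

@dataclass
class CabinReading:
    voice_db: float         # loudness of speech or singing picked up in the cabin
    heart_rate_bpm: float   # driver's pulse from a wearable or seat sensor
    grip_force: float       # pressure on the steering wheel, arbitrary units

def looks_angry(reading: CabinReading) -> bool:
    """Naive threshold rule of the kind a biometric 'anger detector' might use.
    Every threshold here is invented for illustration."""
    return (reading.voice_db > 85
            and reading.heart_rate_bpm > 100
            and reading.grip_force > 0.5)

# A driver singing along at full volume produces the same raw signals as rage.
karaoke = CabinReading(voice_db=92, heart_rate_bpm=108, grip_force=0.7)
road_rage = CabinReading(voice_db=95, heart_rate_bpm=120, grip_force=0.9)

print(looks_angry(karaoke))    # True: the car would intervene on a harmless singalong
print(looks_angry(road_rage))  # True: indistinguishable from the case above
```

A rule like this cannot tell a furious driver from one belting out a chorus at the top of their lungs, which is exactly the problem: physiological arousal is not the same thing as the emotion behind it.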

Sophie Kleber’s recent article in the Harvard Business Review celebrates another effort to design a product that could reduce the sway of human emotion. In this case, emotion is said to undermine professional success. She writes, “In 2009, Philips teamed up with a Dutch bank to develop the idea of a ‘rationalizer’ bracelet to stop traders from making irrational decisions by monitoring their stress levels, which it measures by monitoring the wearer’s pulse. Making traders aware of their heightened emotional states made them pause and think before making impulse decisions.”

Art Amador, co-founder of EquBot, an exchange-traded fund controlled by an artificial intelligence, concurs with the premise of the rationalizer bracelet, paraphrased in the New York Times as arguing that “Artificial intelligence has an edge over the natural kind because of the inherent emotional and psychological weaknesses that encumber human reasoning.”

Is Amador right? Is emotion a psychological weakness that encumbers human reasoning? It may be. A few days after EquBot’s debut last October, Amador’s partner Chida Khatua bragged that their AI was outperforming the human-traded market by 0.2%. That early success soon appeared to have been a fluke: by the end of the year, EquBot’s artificial intelligence was underperforming against humans by 2.0%. As of a couple of weeks ago, however, EquBot was outperforming its human competitors by 6.5%. The quality of EquBot’s work still needs more time to be fairly judged, but its machine learning process will only become more effective over time.

If emotion is merely a design flaw in human beings, then why bother having humans do the job at all? In stock trading, it seems that AI could just replace human workers with responsive algorithmic systems. That approach would certainly be more efficient than using rationalizer bracelets on human workers.

Financial trading is ultimately a mathematical game, though, like chess or Go, so it’s to be expected that a computer would be able to win it. Humans just can’t make computations as quickly as computers can. Other kinds of work, however, are more like charades or Apples to Apples than chess. In those kinds of work, the ability to feel emotion, and to understand it in others, is an asset rather than a liability.

Emotional Work

Most jobs have an essential emotional component, even if feelings aren’t officially in the job description. In the 1990s, business anthropologist Grant McCracken wrote Big Hair, a fantastic ethnography of hairdressers, explaining how they go far beyond the simple job of cutting and styling people’s hair. Hairdressers also act as a kind of counselor, having lengthy conversations with their clients as they work with combs and scissors. They listen, but they also pay attention to subtle cues over multiple encounters to learn about the emotional struggles their clients are going through. It’s in response to this thick interaction that hairdressers ultimately recommend particular hairstyles to their clients. They’re not just working on hair, after all. They’re going on an emotional journey, building a bond, and helping their clients manage their social identities.

Imagine a hairdresser managed by an Emotional AI system designed to keep their feelings in check. We wouldn’t want a person working with sharp instruments to get dangerously emotional, after all, would we? The result would be a merely functional relationship. People would come in, make requests of their hairdressers, and go out with a haircut. What wouldn’t happen, however, is any of the personal interaction that enables a hairdresser to truly get to know their clients over time. People would get haircuts, but they wouldn’t get new styles suited especially to them, given their emotional struggles at the time.

Most of us have two layers of work, as hairdressers do. We have the work that’s in the job description, but we also have bigger emotional responsibilities to the other people we interact with during work hours. If we just stick to the functional requirements, we don’t get the emotional labor done. We become empty suits, performing our jobs robotically.

Few people with that kind of emotionally sterile attitude at work will keep their jobs in the decades to come. A real robot will always outperform a person acting like a robot. For the sake of our professional survival, what we need more than ever is to find new ways to effectively bring more emotional power into our work. Yet, Emotional AI is often pushing us in the opposite direction, to show less of our humanity.

With its emotion-suppressing apps, artificial intelligence is getting to be like an abusive boyfriend, telling us that our emotions don’t matter, that they just lead us to make irrational mistakes, and that if only we could follow along with his logic, we’d see that what he’s trying to do is best for everyone.

An Evolutionary Success

Being irrational is not the same thing as being wrong. Emotion isn’t a mistake. It’s the manifestation of hundreds of millions of years of natural selection. Emotional complexity has been a remarkably successful adaptive strategy for human beings. “Emotions became the key to hominin survival,” explains Jonathan H. Turner, professor of sociology at the University of California, Riverside.

Over the hundreds of thousands of years that humans have been in existence, the people who most often survived to have children and pass down their genes have been those with rich emotional lives. Emotions helped our ancestors survive by building social bonds, defeating adversaries, and patiently caring for offspring throughout the exceptionally long period of human childhood. Emotions also helped us develop the imagination necessary to make technological innovations. The intellect alone, without playful metaphorical thinking, could never have brought our species to where we are today.

Emotion often hurts, but natural selection doesn’t guarantee a life without pain. It merely develops strategies that are likely to succeed. There are times when our emotions get the better of us, and prevent us from living fruitful lives. However, such cases are outnumbered by those in which a full emotional life is a beneficial trait.

In fact, emotion has proven to be such a successful strategy for humans that there is good reason to believe that highly intelligent beings may have difficulty surviving without it. Given what emotion has done for human beings, it makes sense for technology companies to begin considering the ways that they can turn Emotional AI around 180 degrees. Emotional AI shouldn’t be about the detection, manipulation, and suppression of human emotion by artificial intelligence. Instead, the greatest success for AI may come when we can engineer systems of emotional experience for our digital counterparts.

In order to be worth living in, the future needs to be more, rather than less, emotional. The only way for that to take place in an authentic way will be for us to grant true emotion, not just the mimicry of it, to the intelligent objects that we design. If we aren’t willing to live in such a world, we would be better off not developing artificial intelligence at all.

The Ultimate Consequence Of AI-Managed Emotion

It doesn’t take a lot of imagination to predict the likely consequences of a professional world in which artificial intelligence systems are deployed to manage the emotion out of our work. Science fiction has given us huge numbers of stories about societies in which mechanized management has taken control of emotional expression.

Doctor Who, the British science fiction TV show that’s been running for over 50 years, has had plenty of opportunity to dream up nightmare societies in which technology is allowed to triumph over free emotion. The classic example is that of the Cybermen, an army of cyborgs with human minds trapped in metal bodies, a torture designed by an engineer who decided, like some transhumanists today, that emotion is a plague of which humanity must be purged.

Another iteration of this theme came in a more recent Doctor Who episode, which features the discovery of a colony populated only by robots who display emoticons to whomever they meet. The story eventually reveals that these robots had been given the task of removing all negative emotion from the human beings they were designed to assist — and in order to accomplish this task, the robots killed every single person in the colony. The annihilation begins with the invention of sentiment analysis tools, so that the robots can tell when a person is ready to be killed: the instant they feel any negative emotion at all.

These stories are fiction, of course, but then, so are the stories told by companies selling Emotional AI technology. No reliably effective sentiment analysis tools exist yet, despite the yarns that Silicon Valley salesmen ask us to believe.

What we need is a more balanced collection of stories about Emotional AI, to expand the set of ideas beyond Silicon Valley hype. That’s why the next article in this series, available tomorrow, will review some of the mythological metaphors we can pull from the legacy of the past to begin to shape a more human practice of sentiment analysis.

The other articles in this series are:

Can AI Understand Your Emotions?

Emotional AI Is Not Ready For Prime Time

What Emotional AI Fails To Grasp About Emotion

It Isn’t Emotional AI. It’s Psychopathic AI

The Mythology of Emotional AI

The Missing Companion of Emotional AI — Our Humanity

