Facebook told us all that it understood the mistakes it had made. Facebook said it was going to earn our trust back. Facebook promised that it was finally going to respect users’ privacy. It had a new set of ethics guidelines in place.
Today, it has been revealed that Facebook has been recording the audio conversations of its users, sending those audio recordings to other companies, and using a combination of humans and artificial intelligence to transcribe those conversations into text records, all without permission from its users.
This kind of thing doesn’t happen by accident. Facebook broke its own ethics guidelines, and it did it on purpose.
Ethics, a Doily of Protest Against the Tech Titans
In the most recent episode of my podcast, I talked about how ethics guidelines for the development of artificial intelligence have been inadequate. My discussion focused on the issue of the purposeful design of addictive digital experiences. Here’s what I said:
There’s no sign that anything serious is being done to confront the dangerously addictive nature of digital technology. I read through the IEEE’s most recent version of Ethically Aligned Design, the most comprehensive ethics document for artificial intelligence, and in 263 pages, all that it has to say on the issue of addiction to digital media is that, in the future, “There will more than likely be issues similar to the kind of video-game addictions we see now.”
That’s it. Nothing more than that one sentence. Nothing about the many other problematic addictive digital technologies, beyond video games, that already exist. No guidelines for non-addictive design at all. There isn’t even a recommendation that addictive AI should be avoided. Even this one sentence is an improvement, though. The first edition of Ethically Aligned Design didn’t have a single thing to say on the subject at all.
I don’t mean to single out the IEEE for shaming on this subject. The organization is doing a great deal of good work, and it has a lot of ground to cover in an industry that’s changing rapidly.
Besides, the lack of adequate attention to the impact of digital addiction isn’t just a problem with the IEEE. IBM’s artificial intelligence ethics document, Everyday Ethics for Artificial Intelligence, doesn’t address addiction at all. Neither do the ethics guidelines from the European Union’s High-Level Expert Group on AI. Google’s principles on the ethical development of AI also fail to bring the subject up.
The problem is with the digital industry as a whole, made up of corporations that employ vast armies to develop and deploy new technologies, but invest almost nothing in comparison to develop and implement ethical guidelines to prevent their inventions from harming humanity. This widespread failure to implement ethical guidelines for dealing with addictive artificial intelligence is particularly unnerving when we consider the way that digital technology companies routinely violate their own ethical guidelines and promises of restraint. Rigorous, detailed ethics guidelines for technological development in business would be an improvement, but they wouldn’t be enough.
There’s nothing wrong with AI ethics guidelines, in the same way that there’s nothing wrong with holding up your hands to stop an out-of-control car from running you over. Like holding up your hands against that car, AI ethics guidelines are a perfectly understandable reflex reaction to the terrifying spectacle of tech scandal we see rushing toward us.
Ethics guidelines work in professions where people can be counted on to have sincere, trustworthy, positive motivations. Trustworthy motivations are not what the giant digital corporations have demonstrated in their long history of deception and abuse.
For professions that are rife with abuse, strict legal regulations are what’s called for. The bad news is that strong regulations would need to go through both Congress and the White House, and neither institution has shown any courage on these issues.
You Can’t Afford to Wait for Regulations
So what do you do?
Should you quit Facebook? Absolutely, but you knew that years ago, in your heart of hearts. Now is the time to follow through and leave the big toxic blue f behind.
Here’s the hard truth: You can’t just stop with Facebook. All of the big tech companies have been caught doing almost the same thing to their users, and they didn’t stop doing it until they were caught. Amazon and Apple have merely promised to temporarily pause their unauthorized audio surveillance of their customers.
The reason why Big Tech keeps on being so abusive is going to be even harder for you to hear: They’re doing it because they think that you won’t stop using their services no matter what they do. They know that you’re addicted to your digital technology.
What other group of people, besides digital tech users, gets abused over and over, but keeps coming back with their money, asking for more? Drug addicts.
Big tech stopped simply operating like computer services professionals a long time ago. What they’re doing now isn’t about word processors, spreadsheets, and email. They’ve turned into something more like virtual drug dealers.
They’ve got you hooked, and they’ve got you convinced that you can’t go on living without their product, delivered through the addictive interface of the smartphone. You know, deep down, that isn’t the truth.
It’s time to stop the abuse. If you have a scrap of dignity left, you’ll stand up right now, pick up your smartphone, turn it off, and never turn it on again.
Get rid of it. If you really have to make phone calls from on the road, get a flip phone. Use a laptop to do your work and get online. That’s all you had 15 years ago, and it’s all you really need now.
Like an abusive drug dealer, the big tech companies will tell you that you’ll be back, that you need the toxic garbage that they’re selling to you.
You’re better than that. Respect yourself.
It’s time to kill your smartphone.
This Human Business is a podcast about the movement to reform business culture by counterbalancing digital dominance with a reassertion of humanity. The next episode comes out next Monday, and will focus on the respectful use of emotional insight in commercial contexts.