Essays on our relationship with technology
Technology plays an unquestionably central role in our lives. It has brought us the power to do good, in medicine for example, and the power to destroy, in war. The documentary The Social Dilemma, featuring Tristan Harris of the Center for Humane Technology, has educated many about structures of technological overreach, and potential overreach is something I encountered firsthand recently.
I was treated to a test drive in a friend’s electric vehicle, a technology I have committed to now that my own vehicle is nearing the end of its life. My friend’s car is equipped with an autopilot feature. She showed me how easy it was to switch lanes: the vehicle began moving itself almost as soon as she signalled a lane change. Her 19-year-old daughter, she told me, trusts the autopilot unquestioningly, although I agreed with my friend that vigilance is still warranted.
Why is vigilance warranted? Well, a curious thing happened before the test drive, as we were about to get into the vehicle. The car’s software malfunctioned and did not extend the door handles, leaving us unable to enter. “It’s not supposed to do that,” the owner remarked. After we stood there puzzled for about 15 seconds, the door handles finally relented to instruction and we were able to enter.
It was a software glitch, we assumed, of the annoying and generally inconsequential type that many of us encounter on an almost daily basis. Why did my screen freeze? Why did that update not work? Why did my Wi-Fi not connect? These questions arise so frequently that they are commonly dismissed, once someone figures out how to fix the problem or the problem, for unknown reasons, fixes itself. Glitches are a fact of life in our technological world, and we live with them.
But can we live with glitches in all cases? The Toyota Corolla owners whose vehicles, about a dozen years ago, began suffering uncontrollable acceleration discovered, at the cost of some lives, that unintended consequences can arise for which there is no algorithmic fix. The programming glitch behind the Corolla’s malfunctioning acceleration was featured in The Atlantic (see xxxx), which disclosed that there are one hundred million lines of software code in the Corolla – a truly astounding number. The code in the vehicle, now xx years old, has been modified and adapted so many times that no one can possibly have a complete understanding of what every piece of it does. And when humans lose their grasp of a system’s behaviour, unintended consequences occur more frequently than we imagine. This is a point that computer scientist Hector Levesque makes compellingly in his Common Sense, the Turing Test, and the Quest for Real AI.
The point was driven home to me more recently, after my test drive, as I drove (under my own control) on a six-lane highway. On my right, I had just passed a vehicle I knew to be equipped with an autopilot (because of its brand) when, ahead of me in my lane, I saw a thin strip of metal that must have fallen off a truck or another vehicle.
Not wanting to risk a punctured tire at that speed, especially as mine are low-profile, I quickly judged that I had enough margin of safety to move slightly to the right to avoid the metal strip, which I did. My tire touched the lane dividing line in the process, although I was not in danger, as I had properly gauged myself a safe distance ahead of the autopiloted vehicle I had just passed.
Immediately I began to wonder how the other vehicle would have reacted had we been closer, thinking back to my recent experience of the test drive in the electric vehicle.
What if the situation had been different? What if, when I saw the metal strip ahead of me, I had been in the middle of three lanes, caught between two other vehicles – a human-piloted vehicle to my left and the autopiloted vehicle to my right? Would the driver on my left have also seen the metal strip and, appreciating its danger to me, moved over to create room for my safe passage? Helping each other out is what humans do, especially if the gesture carries no personal cost. But what would the autopilot do as it detected my rightward swerve? Would it anticipate my predicament with some sort of common sense, as the other driver might, or would it react only after I had begun to act? And if it reacted quickly, would it swerve onto the soft shoulder, or brake, or both – and with what consequences for its passengers and for those behind it in other vehicles?
I simply don’t know the answers to the questions in this scenario. Only the autopilot’s programmers would know – if they had in fact anticipated such a scenario and encoded instructions for it. And that, ultimately, is perhaps the real point about technology: we need reason to believe it will fulfill a humane mission, whether that mission is protecting drivers and passengers on highways, allowing us to communicate with each other, or creating the medicines that will save lives.
We have a habit of calling technology such as our phones and computers “smart” or “intelligent”. The thing is, though, that the machine has no innate intelligence of its own, beyond whatever intelligence its algorithms were humanly programmed to allow. True, some algorithms have the capacity to learn, and we call that machine learning, but that capacity, too, was programmed by humans. It was human ingenuity that created the machine in the first place. The fact is that the autopilot, or any other technology, is no smarter than its human programmers allowed it to be, and it also embeds any errors (as in the example of the Corolla) that those humans make.
In a static universe, error would not exist, but then life would be quite dull and predictably pointless. Human life is a process of trial and error, often messy, but error is reduced by the experience and learning that increase our capacity for creativity. Trial and error led to the development of the electric vehicle I test drove, at a time, a decade ago, when financiers and technologists said it couldn’t be done.
It is said that love is the great joy of life, and if that is so, then it is a joy beyond the reach of our machines and within the exclusive capacity of the human. It was for this reason, out of love for her daughter, that my friend agreed with me that caution is warranted with new technologies like the autopilot. It is altogether easy to signal a lane change and have the car do the work for you, but in those situations of unintended consequences – which are not as rare as we would like to think – nothing replaces the good old-fashioned common sense, born of human experience, that the daughter would do well to learn.