At first glance, it seems fairly obvious that if I am the customer of a product – such as the passenger of a self-driving car – said product should first and foremost prioritize me. After all, capitalism has taught me that my own benefit and convenience are good, not just for me but for the whole. Every advertisement tells me that my comfort and convenience matter.
But I can’t help but feel a little sick when I take this to an extreme. Does my convenience as a customer really come above that of others? Or, more crudely, does my life as a passenger come above the life of an innocent pedestrian?
At this point, I might say that maybe we’ll only have self-driving cars once all cars are self-driven and all roads are retrofitted for the technology, avoiding the predicament altogether. But if there’s one lesson the human experience has taught me, again and again, it’s that to every rule there is inevitably an exception; no system is perfect. The day will come when a child unknowingly jumps onto the street, and we’ll be left exactly where we started: having to encode this impossible predicament in the behavior of the car.
It seems, then, that this might be an unsolvable conundrum, since we seem to believe both things at once: yes, it is our right as customers to be protected; and no, our life does not matter any more or less than the life of a pedestrian.
Could it be, then, that certain technologies should not exist? That just because we can make them doesn’t mean we should? Given that the choices a computer makes are also choices made by the human who programmed it, no one in their right mind could choose one life over the other.
For the sake of argument, let’s pause for a moment and consider the exact same predicament without the self-driving component. What would I do if I were about to hit someone with my car? I know I would most certainly steer the car away to avoid hitting the pedestrian; heck, I'd do this even for an animal.
And even though this might be a partly selfish choice (swerving also spares me from a collision in which I might die), the act itself still seems morally correct. I chose to be in the car; the pedestrian did not. I have control over the car and can turn it in a different direction to avoid hitting the other person. Is this not what most of us would do?
Could it be that this human instinct is at the heart of our question? Is the car, after all, not an extension of me and my choice to travel from point A to B?
Fair to say, this makes for an unappealing product: a car that, in all its technological glory, will choose to kill its driver if need be. But if we take this idea of the car as an extension of the driver, the choice wouldn’t be the car’s but the driver's.
With this, we’ve come to what I believe is the crux of the matter: choice. If one chooses to be self-driven while reading Harry Potter and not paying much attention to the road, then that is a poor choice with potentially tragic consequences.
So, at last, I come to the conclusion that maybe to be self-driven is to enter into an agreement with the product, a contract, wherein one chooses either to drive the car or to be self-driven, knowing that the car might prioritize the innocent pedestrian.