“A robot may not harm a human being or, through inaction,
allow a human being to come to harm.”
Science fiction author Isaac Asimov penned this rule as the first of his Three Laws of Robotics in the 1940s. While both the concept and the actual potential of robots have changed vastly since Asimov’s time, the law above remains essential to today’s growing range of artificially intelligent devices. Thanks to extraordinary advances in science and technology, robotics is steadily moving out of the realm of science fiction. Though we must still live without Rosie, the robot maid from The Jetsons, we do have Roombas and other smart vacuums that clean our homes without supervision. On the larger commercial scale, the recent news-makers are autonomous vehicles.
But first, let’s slow down and cover an important term: autonomous. The “autonomous level” of a vehicle ranges from Level Zero to Level Five, a scale formalized in SAE International’s J3016 standard. Level Zero means the driver performs every task associated with operating the vehicle, including braking, steering, and accelerating; think of your standard golf cart or go-kart. Level Five means complete automation: no human involvement is necessary to operate the vehicle whatsoever. The four intermediate levels hand progressively more of the driving task over to the vehicle’s automation as the number increases.[1]
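For readers who like to see the taxonomy laid out, here is a minimal Python sketch of the scale. The level numbers follow the SAE J3016 standard, but the enum names, one-line summaries, and the `human_must_monitor` helper are illustrative paraphrases of my own, not language from the standard itself.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    NO_AUTOMATION = 0      # human does all braking, steering, accelerating
    DRIVER_ASSISTANCE = 1  # one assist feature, e.g., adaptive cruise control
    PARTIAL = 2            # combined steering/speed assists; human watches road
    CONDITIONAL = 3        # car drives itself; human must take over on request
    HIGH = 4               # no human needed within a limited operating domain
    FULL = 5               # no human involvement anywhere, in any conditions

def human_must_monitor(level: AutonomyLevel) -> bool:
    """Through Level 2, the human driver remains responsible at all times."""
    return level <= AutonomyLevel.PARTIAL
```

The rough dividing line is between Levels Two and Three: below it the automation merely assists a driver who must keep watching the road, and above it the vehicle itself takes on the driving task.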
Companies such as Tesla, Waymo (the Alphabet subsidiary that grew out of Google’s self-driving car project), and Uber are hard at work developing the perfect self-driving vehicle. The last of these, however, has run into a serious problem. An Uber self-driving vehicle caused the first documented autonomous-car fatality this past March. It was recently revealed that, though the computer system did detect the victim and her bicycle as they crossed the road, it classified the detection as a “false positive” and maintained its devastating course.[2] In the wake of this tragedy, the most obvious question is this: Why would a self-driving car even allow for false positives in its programming?
The answer is both selfish and nonnegotiable. Obviously, any reasonable person would want a self-driving vehicle to be as safe as possible, and the initial instinct is to have the car stop for anything and everything it senses in its path. But think back to the very first time you drove a vehicle. Most of us hovered a foot over the brake pedal and stomped it at every opportunity, turning a one-mile trip into an hour-long adventure of brake-slamming. An autonomous vehicle with no ability to dismiss false positives would drive the same way, which is hardly ideal for the passenger and owner of a futuristic car.
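A toy example makes the trade-off concrete. The sketch below is purely illustrative and assumes a single confidence threshold; the `Detection` class, the threshold value, and the `should_brake` function are hypothetical and bear no relation to Uber’s actual perception system, which is far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the classifier thinks it saw
    confidence: float  # 0.0 to 1.0

# Set too low, the car brakes for plastic bags and shadows;
# set too high, it can dismiss a real pedestrian as sensor noise.
CONFIDENCE_THRESHOLD = 0.8

def should_brake(detections: list[Detection]) -> bool:
    """Brake only for detections the system is confident are real obstacles."""
    return any(d.confidence >= CONFIDENCE_THRESHOLD for d in detections)

# A pedestrian detected at 0.6 confidence is treated as a "false positive"
# and ignored -- the failure mode described above.
print(should_brake([Detection("pedestrian", 0.6)]))  # False: no braking
```

Somewhere between the brake-slamming student driver and a system that waves off a pedestrian lies the threshold every developer must choose.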
However, it appears that Uber’s system dismissed too much, and worse yet, video footage shows that the worker test-driving the car was not even looking at the road.[3] That combination produced the first fatality and reopened the safety discussion around these vehicles. After all, autonomous vehicles are supposed to be safer than human drivers; that is a large part of the appeal.
With the small number of self-driving cars on the road, most of them still in the testing phase, self-driving litigation remains sparse, consisting primarily of trade-secret disputes and corporate-espionage claims. That could change as the number of autonomous vehicles rises and hidden flaws in the technology become more apparent.
So are self-driving vehicles actually safer than human drivers? How should insurance companies react: with higher premiums or lower ones? Will the number of accidents rise or fall? These questions sit high on the list of those needing answers before that future arrives.
The likely result, at least at first, is higher insurance costs. A self-driving car is still a car. On top of the normal risks posed by a multi-ton hunk of metal capable of incredible speeds, we are adding advanced circuitry, computer hardware and software, and everything else required to make a vehicle autonomous.[4] Though these systems will be built with safety in mind, there is always risk. With ransomware attacks and hacking on the rise, the imagination can run wild with fears that could become reality (e.g., weaponized cars).
With all of this in mind, hit the brakes on the Skynet-style thoughts. A fully self-driving vehicle is still fairly far down the road, so to speak, and time is on our side for solving these problems. Every new technology hits its snags, and its creators fix the issues as they appear. We just need to be patient and not rush the future; it will arrive in its own time.
[1] https://www.justice.org/what-we-do/enhance-practice-law/publications/trial-magazine/self-driving-cars-and-bumpy-road-ahead
[2] https://arstechnica.com/tech-policy/2018/05/report-software-bug-led-to-death-in-ubers-self-driving-crash/
[3] https://www.npr.org/sections/thetwo-way/2018/03/21/595941015/police-in-arizona-release-dashcam-video-of-fatal-crash-involving-self-driving-ca
[4] http://www.abajournal.com/magazine/article/selfdriving_liability_highly_automated_vehicle
Copyright © 2018 Kevin Peek