If anything about driverless cars can be considered an old riddle, it is this one: The car is driving itself down a residential street when a woman pushing a baby stroller suddenly enters a crosswalk. If the car cannot stop in time, should its computer opt to hit mother and child, or veer off to strike a tree, almost certainly killing its passengers?
That macabre scenario has been fodder for ethicists almost since the prospect that cars might drive themselves first appeared on the horizon. It also raises a second riddle: Regardless of the choice made by the car’s computer, who pays for the damages?
The car owner? The company that built it? The software developer?
Those questions are being debated nearly everywhere that lawyers and insurance brokers meet these days. While state governments and the courts ultimately will decide them, many have been addressed in a new study by one of the preeminent legal authorities on autonomous vehicles.
Bryant Walker Smith, a University of South Carolina law professor, expands on the widely held expectation that blame for a crash will shift from the at-fault driver to the automotive industry: the manufacturers and software developers who design and update car computers.
“To prove that an automated driving system performed unreasonably, an injured plaintiff would likely need to show either that a human driver would have done better or that another, actual or theoretical, automated driving system would have done better,” Smith said.
Volvo, one of many automakers eager to market an autonomous car, acknowledged that it expects liability will shift from the driver to the manufacturer.
“It is really not that strange,” Anders Karrberg, vice president of government affairs at Volvo Car Corp., told a House subcommittee earlier this month. “Carmakers should take liability for any system in the car. So we have declared that if there is a malfunction to the [autonomous driving] system when operating autonomously, we would take the product liability.”
Public officials and autonomous-car advocates are fond of pointing out that 94 percent of crashes are attributed to human error, a statistic that seems to imply that removing the human from behind the wheel might eliminate most crashes.
Not so.
While computer-driven cars are expected to reduce crashes dramatically, just how much is speculation, and nobody in the field thinks collisions will become a thing of the past. There are too many vagaries for any computer — or human driver — to deal with on the roads.
There also are imponderables: If freedom from having to drive makes people more willing to travel by car, an overall increase in miles traveled suggests there will be more crashes. On the other hand, driverless cars could eliminate two of the leading causes of traffic fatalities: drunken driving and speeding.
“Those of us who have been in the software world know that software has bugs, so there’s no perfect solution,” said Ash Hassib, senior vice president for Auto and Home Insurance at LexisNexis Risk Solutions, which provides statistical data to the insurance industry. “There is so much brainpower that goes on when driving a car, so it will take a long time to teach a machine all the possible scenarios that could take place. Eighty percent of the scenarios will be quick, but trying to get to the last 20 percent is going to take a very long time.”
In the most definitive public legal research to date on autonomous cars, Smith’s 77-page paper says:
● There will be a shift from driver liability to product liability, making the automotive industry the primary liability stakeholder.
● Manufacturers that imply their automated systems are at least as safe as a human driver may face misrepresentation suits when crashes contradict that claim.
● The argument that an automated system performed unreasonably will be central to personal-injury claims.
● A key question in litigation will be whether a human driver or a comparable automated system would have performed better than the automated system in question.
● Another key question: Could a reasonable change in a vehicle’s automated system have prevented the crash?
● There could be a higher standard for automated vehicles. Smith cites a hypothetical case in which two cars collide at an intersection. One of the cars ran a stop sign, but it might be argued that systems in the other car should have recognized that the first car was going so fast that it would not stop at the sign. So, should that car share blame for the crash?
● In the shift from driver liability to product liability, plaintiffs would likely pursue only significant injury claims and would prevail less often, but those who do prevail would receive higher damages. That’s largely because an unprecedented level of data on the cause of the crash will be stored in the vehicle’s computers, virtually replacing the post-crash investigation by a police officer who didn’t witness the incident.
“The standard for reasonable safety is always increasing, and automated driving is no exception,” Smith said. “The technologies that will amaze us in the next few years could seem laughably — or dangerously — anachronistic a decade later.”
To cover the cost of potential liability claims, automakers will have to factor that risk into a system’s initial sticker price (an estimate that later may prove dramatically inaccurate) or rely on future sales to cover past liability claims.
“As I emphasize in my paper, it’s incredibly difficult to accurately and precisely predict the actual liability costs attributable to a single automated vehicle over its lifetime,” Smith said.
The U.S. Transportation Department, in guidance issued to the industry last year, worried that sorting out the liability issue might delay the introduction of autonomous cars.