Self-driving cars will own the highways of the future, but with them come complex legal questions. Learn how this could affect personal injury law.
Self-driving cars can already operate legally in specific areas of California, Texas, Arizona, Washington, Pennsylvania, and Michigan.
When they’ll be on the road around the clock, driving in even the most treacherous conditions, is another question. Lior Ron, co-founder of the self-driving tech company Otto, predicts it will happen gradually, in small steps.
First, autonomy will become available in cities during low-traffic hours (1-5 AM), only in wide traffic lanes, and in places with few pedestrians. Then, as the technology grows more sophisticated, it will handle more difficult driving conditions.
Eventually, cars will drive themselves completely, at all hours of the day, without gas or brake pedals at all. But that time may still be decades away.
Regardless of how it pans out, self-driving technology will greatly reduce the number and severity of accidents. It won’t be perfect, but an automated vehicle, for example, will never drive drunk or high!
But Self-Driving Technology Raises Ethical Issues
Robots, even running the most sophisticated AI, can’t necessarily weigh ethical trade-offs the way a human mind can.
Here are a couple of examples of what I mean, along with the legal implications of self-driving vehicles.
Google already has a patent (from 2014) for the risk minimization strategy used by its vehicles. This strategy comes into question when Google’s vehicle is driving in the center lane of a 3-lane road. Imagine a small car on the right, and a large truck on the left.
To keep its passengers safer, Google’s technology moves its car closer to the smaller car on the right, minimizing the risk to Google’s own vehicle. But compile crash data over the course of a year, and you’ll find more accidents involving the smaller cars than the larger trucks. The smaller car unfairly becomes the more likely target.
Since the system can influence where accidents are likely to happen, it’s hardly fair to always put the smaller car at greater risk.
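To make the idea concrete, here is a toy sketch of that risk-minimization logic. This is not Google’s actual patented algorithm; the function name, the mass inputs, and the shift formula are all invented for illustration. It simply captures the behavior described above: drift away from the heavier neighbor, toward the lighter one.

```python
# Toy illustration (NOT Google's patented algorithm) of risk-minimization
# lane positioning: given vehicles in the adjacent lanes, shift laterally
# toward the smaller (lower-risk) one.

def lateral_offset(left_neighbor_mass: float, right_neighbor_mass: float,
                   max_shift_m: float = 0.5) -> float:
    """Return a lateral shift in meters.

    Negative = drift toward the left lane, positive = drift toward the
    right lane. The car moves away from the heavier vehicle, in
    proportion to the mass imbalance.
    """
    total = left_neighbor_mass + right_neighbor_mass
    if total == 0:
        return 0.0
    # Fraction of neighboring mass on the left; above 0.5 means the
    # left vehicle is heavier, so shift right (positive offset).
    left_share = left_neighbor_mass / total
    return (left_share - 0.5) * 2 * max_shift_m

# Large truck (~15,000 kg) on the left, small car (~1,200 kg) on the right:
offset = lateral_offset(15000, 1200)
# offset is positive: the car edges toward the smaller vehicle.
```

Notice how the unfairness in the article falls straight out of the math: the small car always ends up with less clearance, simply because it poses less risk to the self-driving car’s own passengers.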
What If a Crash Is Destined to Happen?
Another tricky legal scenario happens when a crash becomes inevitable. A self-driving vehicle will have to decide who to crash into and why.
For example, the car could crash into a wall and kill all its passengers. Or it can dodge the wall and kill several nearby pedestrians.
Maybe one group is made up of ex-convicts who appear to be well on their way to reformed lives.
Which would be the right choice?
And what if the crash involves only objects, not humans? Should the self-driving vehicle always aim for the least valuable car? Should it total itself rather than hit a human-driven vehicle and risk injuring the driver?
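One way to see why these questions are so hard is to write the dilemma down as code. The sketch below frames the inevitable crash as cost minimization; the options, injury counts, damage figures, and the injury weight are all hypothetical. The uncomfortable part is exactly the weight: someone has to decide how many dollars of property damage one injury is “worth,” and that is a value judgment, not an engineering fact.

```python
# Hypothetical sketch of the "inevitable crash" dilemma as a
# cost-minimization problem. All options and numbers are invented
# for illustration.

from dataclasses import dataclass

@dataclass
class CrashOption:
    name: str
    expected_injuries: int   # people likely to be hurt
    property_damage: float   # rough dollar estimate

def least_harm(options, injury_weight: float = 1_000_000.0):
    """Pick the option with the lowest weighted cost.

    injury_weight converts injuries into the same units as property
    damage -- the ethical judgment no engineer can settle alone.
    """
    return min(options,
               key=lambda o: o.expected_injuries * injury_weight
                             + o.property_damage)

choice = least_harm([
    CrashOption("hit the wall", expected_injuries=2, property_damage=30_000),
    CrashOption("swerve into pedestrians", expected_injuries=3,
                property_damage=5_000),
    CrashOption("sideswipe a parked car", expected_injuries=0,
                property_damage=15_000),
])
# With these invented numbers, the parked-car option wins.
```

Change the weight, or the estimates feeding it, and the “right” answer flips. That is precisely why courts and lawmakers, not just engineers, will end up shaping these decisions.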
Only Time, And the Legal System, Will Tell
Traffic accidents, injuries, and deaths will fall dramatically in the coming years. But clearly, challenging (and legally unclear) scenarios will still arise.
And like anything else, it will take time, testing by engineers, and legal action to determine the fairest route for all involved.