Reports Reveal Fatal Shortcomings in Self-Driving Cars

Nov 18, 2019

Companies like NVIDIA, Waymo, Uber, Lyft, and General Motors’ Cruise envision a future without the need for car ownership. Motivated in part by our aging population, Silicon Valley works every day toward a driverless-car revolution.

Today’s autonomous vehicles combine radar, laser-based lidar sensors, and high-definition cameras to interpret and navigate their surroundings. They constantly and automatically process this stream of sensor data, classifying every detected object against models trained on extensive sets of reference images.
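
For readers curious about what that processing loop looks like in code, here is a deliberately simplified Python sketch of a single perception-and-response step. Every name, label, and threshold below is an illustrative assumption of ours; no manufacturer’s actual software is anywhere near this simple.

```python
from dataclasses import dataclass

# Hypothetical labels a perception system might assign to a detected object.
LABELS = ("pedestrian", "bicycle", "vehicle", "other")

@dataclass
class Detection:
    """One fused observation built from radar, lidar, and camera data."""
    position_m: tuple    # (x, y) relative to the car, in meters
    speed_mps: float     # closing speed, in meters per second
    label: str           # the classifier's best guess from LABELS
    confidence: float    # classifier confidence, from 0.0 to 1.0

def plan_response(detection: Detection, min_confidence: float = 0.8) -> str:
    """Toy decision rule: brake for people confidently in the path,
    and slow down whenever the classifier is unsure what it sees."""
    if detection.confidence < min_confidence:
        return "slow_down"   # ambiguous object: err on the side of caution
    if detection.label in ("pedestrian", "bicycle"):
        return "brake"
    return "track"           # keep monitoring vehicles and other objects

# Example: a low-confidence object 30 meters ahead triggers caution.
print(plan_response(Detection((30.0, 0.0), 17.0, "other", 0.55)))  # slow_down
```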

However, critics are more concerned about what self-driving cars can’t do, arguing that Silicon Valley’s vision is unreasonably optimistic. Fueling this concern is a series of accidents involving driverless cars. While many of the drivers in these collisions were distracted, the general public struggles to imagine a time when we can trust our cars enough to feel comfortable taking a backseat.

What Is Wrong with Our Self-Driving Cars?

One disturbing accident occurred in Tempe, Arizona, in March 2018. An Uber test vehicle fatally struck a pedestrian while its supervising driver was using her phone. Last week, the National Transportation Safety Board’s investigation into this deadly collision revealed significant shortcomings in the vehicle’s automated driving technology.

The central issue these vehicles currently face is the inability to distinguish, as humans can, between the various objects and living things in the road. A deer, a bicyclist, a runner, and an elderly person all exhibit different behaviors and qualities of movement. Furthermore, a shadow, a pothole, or a reflection in a building or puddle can resemble a different object and confuse the automated system. While a human driver can account for these subtle differences, even the most advanced vehicles struggle to make such crucial distinctions and react accordingly.

In the Uber incident, the pedestrian was crossing the street outside of a crosswalk, walking her bike. The street was poorly lit, but the vehicle nonetheless detected her nearly six seconds before impact. According to the report, however, the system never correctly classified her as a pedestrian, instead labeling her variously as a vehicle, a bicycle, and an unknown “other” object. While developers have become more successful at engineering vehicles that detect hazards, programming the proper response is another issue altogether.
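
Coverage of the NTSB findings has noted a particularly troubling detail: each time the system changed its classification, it reportedly treated the object as brand new, discarding the motion history it needed to predict a path. The toy Python sketch below illustrates that failure mode in miniature; it is our own hypothetical illustration, not Uber’s code.

```python
# Toy illustration (not any real system's code): why re-labeling an object
# as "new" can destroy the motion history needed to predict its path.

class TrackedObject:
    def __init__(self, label: str, position: float):
        self.label = label
        self.history = [position]  # past positions, used to estimate velocity

    def update(self, label: str, position: float) -> None:
        if label != self.label:
            # Flawed policy: a changed label is treated as a brand-new
            # object, so the accumulated motion history is thrown away.
            self.history = []
            self.label = label
        self.history.append(position)

    def path_predictable(self) -> bool:
        # Estimating velocity requires at least two past positions.
        return len(self.history) >= 2

obj = TrackedObject("vehicle", 100.0)  # meters ahead of the car
obj.update("bicycle", 97.0)            # the label flips: history is lost
print(obj.path_predictable())          # False: no velocity, no trajectory
```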

Accounting for Real-World Situations

A single urban intersection produces thousands of possible scenarios and hazards, and engineers are struggling to design their vehicles in a way that accounts for all potential threats. Sally A. Applin, an anthropologist whose research focuses on the intersection between algorithms and human beings, says engineers may be “programming for what should be, not what actually is. There just seems to be a really naïve assumption about various rules—and that the world is going to be the way the rules are, not necessarily the way the world is.”

Uber’s 2018 safety report claims its software “considers how and where all actors and objects may move over the next ten seconds. Our self-driving vehicles will not operate in a vacuum.” In response to the incident in March, Uber spokeswoman Sarah Abboud expressed regret on behalf of the company and explained how their self-driving unit “has adopted critical program improvements to further prioritize safety.”
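
To give a sense of what “considering how and where objects may move over the next ten seconds” can mean in practice, here is a minimal forecasting sketch. The constant-velocity model and every parameter in it are our own simplifying assumptions; a production system would be vastly more sophisticated.

```python
# Minimal sketch of short-horizon motion forecasting under an assumed
# constant-velocity model; all numbers here are illustrative.

def forecast(position: float, velocity: float,
             horizon_s: float = 10.0, step_s: float = 1.0) -> list:
    """Project an object's future positions, one time step at a time."""
    steps = int(horizon_s / step_s)
    return [position + velocity * step_s * (i + 1) for i in range(steps)]

# A pedestrian 3 meters from the lane edge, walking toward it at 1.4 m/s,
# enters the lane (position <= 0) within the ten-second window.
path = forecast(position=3.0, velocity=-1.4)
print(any(p <= 0.0 for p in path))  # True: plan for a possible crossing
```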

Engineers, investors, and proponents of the autonomous-vehicle industry are concerned that these accidents may prompt tighter regulations and, in turn, obstruct further development. Many industry insiders have expressed shock over the Uber incident, believing the company’s developers failed to meet a basic expectation.

Going forward, the autonomous-vehicle community is advocating for extensive testing and precautions that properly account for real-world situations. Katina Michael, a professor in Arizona State University’s School for the Future of Innovation in Society and School of Computing, Informatics, and Decision Systems Engineering, believes, for example, that a more diverse group of experts must peer-review all software code.

Were You Injured in an Accident Involving an Autonomous Vehicle?

Our attorneys at Chaikin, Sherman, Cammarata & Siegel, P.C. have recovered more than $500 million for clients in Washington, D.C., Maryland, and Virginia. When you work with our team, you will benefit from nearly a century’s worth of combined legal experience. Our skills are routinely recognized by the community, various legal associations, and national lawyer-ranking services. If you were injured due to negligence or faulty technology, we will fight for compensation on your behalf.

Schedule a free consultation or call our firm directly at (202) 659-8600. We look forward to taking on your case.
