Uber's homebrew automation software blamed in deadly crash

The crash that killed a pedestrian in Arizona has been blamed on Uber's seemingly shoddy autonomous driving software, according to investigators.

NTSB investigators have confirmed that Uber's self-driving software is to blame in the death of a pedestrian in March. While the conclusion hardly comes as a surprise, it brings clarity to the controversial circumstances leading up to the incident. In particular, the findings vindicate sensor maker Velodyne and chipmaker Nvidia, both of which supply hardware to Uber and had publicly stated they would be surprised if any hardware failure were to blame. Nor was any Volvo technology implicated in the failure.

Although the cameras, radar, and lidar sensors all made actionable measurements about the pedestrian's whereabouts and trajectory, the software was unable to properly interpret those signals and take appropriate action.

System logs from Uber's prototype robotaxi indicate that the car's software struggled to classify 49-year-old Elaine Herzberg as it hurtled toward her at 40 mph, according to a preliminary report from the National Transportation Safety Board. The report indicates the vehicle observed the pedestrian six seconds before the collision, but had low confidence in what it was seeing. "The self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle," the report says.

In any artificial intelligence system, classification is the crucial first step in deciding how to react to data coming from cameras, sensors, and other inputs. A first-generation semi-autonomous Tesla Autopilot system infamously failed to correctly classify a white semi trailer against a bright sky in the accident that killed Joshua Brown, but the sensors and computational power available to that system -- as well as its stated purpose and intended use -- were entirely different from those of the Uber vehicle.
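To make that failure mode concrete, here is a minimal, hypothetical sketch -- in no way Uber's actual software -- of why a flickering classification can stall a planner. Many AV stacks predict a track's future path from its class (vehicles follow lanes, cyclists drift, and so on), so a label that keeps changing resets that prediction and can delay a braking decision. All names and thresholds below are illustrative assumptions:

```python
# Hypothetical sketch: a planner gate that only acts on a track once the
# same label has persisted with high confidence over several frames.
from dataclasses import dataclass

@dataclass
class Track:
    label: str        # e.g. "unknown", "vehicle", "bicycle"
    confidence: float  # classifier confidence in [0, 1]

def should_brake(history: list[Track], min_conf: float = 0.8) -> bool:
    """Brake only once a stable, confident classification emerges."""
    if len(history) < 3:
        return False
    recent = history[-3:]
    labels = {t.label for t in recent}
    stable = len(labels) == 1 and "unknown" not in labels
    confident = all(t.confidence >= min_conf for t in recent)
    return stable and confident

# The flickering sequence the NTSB report describes never satisfies the gate:
observations = [
    Track("unknown", 0.4),
    Track("vehicle", 0.5),
    Track("bicycle", 0.6),
]
print(should_brake(observations))  # False -- no stable, confident label
```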

Although Herzberg was inattentively (and illegally) crossing a multilane thoroughfare in poor lighting conditions, there's little doubt a typical human driver would have spotted her, and by all accounts most vehicle automation systems currently under development would have as well. In fact, investigators say the XC90's factory-equipped automatic emergency braking system detected Herzberg and concluded that emergency braking was necessary seconds before impact, but it was unable to intervene: Uber engineers had disconnected the system, fearing conflicts between the Volvo unit and their own software. In an effort to "reduce the potential for erratic vehicle behavior," the redundancy afforded by Volvo's humble auto-braking system was forgone.

While Uber's rationale is sound -- having two automation systems attempting to intervene at once could be problematic, to say the least -- the company chose not to use the Volvo system even as an indirect input or as an alarm for the safety driver. The NTSB says the Volvo system could have alerted the driver, but it was effectively left shouting into the wind.
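A middle-ground design, which the NTSB findings imply was available, would demote the factory AEB from an actuator to an advisory input: only one system ever touches the brakes, so the two cannot fight, yet the second system's warning still reaches the safety driver. The sketch below is a hypothetical illustration of that arbitration, not a description of either company's actual code:

```python
# Hypothetical arbitration: the prototype system alone commands the brakes,
# while the factory AEB's decision is routed to a driver alert instead of
# being discarded entirely.
def arbitrate(primary_brake_cmd: float, factory_aeb_wants_brake: bool) -> tuple[float, bool]:
    """Return (brake command, driver alert flag).

    The factory AEB never actuates; it can only raise an alert.
    """
    sound_alert = factory_aeb_wants_brake
    return primary_brake_cmd, sound_alert

# Example: the prototype software is unsure and commands no braking,
# but the factory AEB has concluded a collision is imminent.
brake, alert = arbitrate(primary_brake_cmd=0.0, factory_aeb_wants_brake=True)
print(brake, alert)  # 0.0 True -- no conflicting actuation, but the driver is warned
```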

That decision is one of many that have led observers to question the diligence and thoroughness of Uber's approach to safety. Some have also questioned the company's decision to run single-occupant test vehicles, whereas most AV companies in the early stages of testing employ two technicians per car.

Uber made the controversial choice in recent years to develop its own autonomous driving software, using Nvidia's hardware while eschewing the chipmaker's robust Nvidia Drive software (arguably one of the most advanced AV platforms available) in favor of a homebrew solution.

In a desperate race to catch up to Waymo -- Google's self-driving car division, now on the cusp of commercializing self-driving taxis -- Uber has found no shortage of controversy, including clashes with regulators in California, a not-at-fault collision that was nonetheless potentially avoidable, a videotaped incident of one of its cars blatantly failing to detect a red light, and most recently the deadly crash that led to a nationwide halt to testing.

What's more, statistics released by authorities in California indicate that in 2017, Uber prototypes managed to drive only 13 miles between "disengagements", which are either computer- or driver-initiated hand-offs from autonomous mode to manual control. By contrast, Waymo managed to average 5,700 miles between disengagements in 2017, the majority of which were driver-triggered out of an abundance of caution. In the final quarter of 2017, Waymo reported approximately 1 disengagement per 30,000 miles.
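The arithmetic behind those figures is straightforward: a disengagement rate is simply autonomous miles driven divided by the number of disengagements. The numbers in this quick check are illustrative, chosen only to reproduce the reported ratios, not taken from the California DMV filings themselves:

```python
# Back-of-the-envelope check of the reported disengagement rates.
def miles_per_disengagement(miles: float, disengagements: int) -> float:
    return miles / disengagements

# Illustrative inputs that yield the reported ratios:
print(miles_per_disengagement(13_000, 1_000))     # 13.0 mi (Uber, 2017)
print(miles_per_disengagement(5_700_000, 1_000))  # 5700.0 mi (Waymo, 2017 average)
```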