Foibles in Finding Fault

ABC-15, Phoenix, via Associated Press

A driverless car hit a woman who was walking her bicycle across a street at night. Numerous articles describe this incident, some less useful than others and some genuinely helpful to a better understanding. For example, in order of increasing technical detail:

Police In Arizona Release Dashcam Video Of Fatal Crash Involving Self-Driving Car

Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam

How a Self-Driving Uber Killed a Pedestrian in Arizona

Uber Self-Driving Car Fatality Reveals the Technology’s Blind Spots

I have tried to find articles that do not center on or advance irresponsible histrionics, egregious ideological bias, or insulting, simplistic thinking. Critical thinking matters, especially when people’s lives are at stake.

Given the infancy of driverless vehicle technology, how significant is this incident? All knowledgeable observers have been expecting, and dreading, that something like this would happen. Now that it has, how do we put this in a context that makes sense? That is helpful? That is at least somewhat productive for society? Certainly, snap judgments and scapegoating are the opposite. So let’s first consider a few facts.

According to the CDC, in 2015 (the latest year with official U.S. mortality data) the lower bound on the number of pedestrians killed by people-driven cars was 5,719. Another 17,008 people were killed by ‘unspecified’ means involving motor vehicles, so the real number is probably higher than 5,719. Another 8,313 vehicle occupants died in crashes; cars also killed 4,431 motorcyclists, 675 bicyclists, and 15 in the ‘other’ category. Of necessity, those are all lower bounds as well. The total number of motor vehicle deaths in the U.S. in 2015 was therefore 36,161. Further, an estimated 94% of traffic accidents are due to human error (i.e., insufficiently sound judgment).
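
That total is just the sum of the categories, which is easy to verify. A trivial tally (category labels paraphrased from the CDC figures quoted above):

```python
# 2015 U.S. motor-vehicle deaths, by the CDC categories quoted above
# (all lower bounds; labels paraphrased).
deaths = {
    "pedestrians":        5_719,
    "unspecified":       17_008,
    "vehicle occupants":  8_313,
    "motorcyclists":      4_431,
    "bicyclists":           675,
    "other":                 15,
}

print(sum(deaths.values()))  # 36161
```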

However, the number of driverless cars on the road is as yet statistically minuscule, and the number of miles they have driven under realistic conditions is even more so. Any direct comparison is therefore invalid until we have substantially more data (estimates put the mileage needed for a statistically significant comparison at hundreds of millions of miles or more) and better-discriminating measures, such as the number of fatal accidents per mile driven. The significance of this incident, for now, lies elsewhere.
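
A back-of-the-envelope sketch shows why “hundreds of millions of miles” is the right order of magnitude. It relies on two assumptions that are mine, not the CDC’s: roughly 3.1 trillion vehicle-miles traveled in the U.S. in 2015, and the crude “rule of three” (zero events observed over N trials gives a 95% upper confidence bound on the event rate of about 3/N):

```python
# Rough estimate: how many fatality-free autonomous miles would it take to
# claim, at ~95% confidence, a fatality rate no worse than human drivers'?
# Assumed input (not from the article): ~3.1 trillion U.S. vehicle-miles in 2015.

human_deaths_2015 = 36_161        # CDC total quoted above
vmt_2015 = 3.1e12                 # assumed U.S. vehicle-miles traveled, 2015

human_rate = human_deaths_2015 / vmt_2015          # fatalities per mile
print(f"human rate: ~{human_rate * 1e8:.2f} deaths per 100 million miles")

# Rule of three: zero fatalities in N miles -> 95% upper bound on rate ~ 3/N.
# Pushing that bound below the human rate requires N >= 3 / human_rate.
required_miles = 3 / human_rate
print(f"fatality-free miles required: ~{required_miles / 1e6:.0f} million")
```

And that is the easier case of merely matching the human rate with a spotless record; demonstrating a clear improvement, or tolerating any fatalities at all along the way, pushes the requirement higher still.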

In the immediate aftermath of this unfortunate and terrible accident, many of us share several questions: What might have gone wrong? Where should we look for correctable fault? With whom should we lay blame? Is attempting to lay blame on someone even useful, to anybody? I would argue that the answer to that last question, at least, is easy and should be obvious: no. The situation is complicated in several arenas (technical, political, psychological, and probably others), with many confounding factors. So, based on experience if nothing else, we know it is likely there are no easy or quick answers. Certainly, simplistic thinking is neither productive nor beneficial.

With regard to the political context around this particular event in this particular state: As so often has happened in the past couple of decades, we could again be looking at the sad, unnecessary consequences of conservative values enacted as irresponsible and reckless public policy: private profits tend to matter more than people’s lives. But that is an entirely separate, infuriating, and thoroughly expletive-laden subject, at least for me. Here, let us choose not to go there.

Could we also, or maybe instead, be looking at technological advance pursued at the knowing expense of public safety? Of people’s lives? The implication being that tech advance, and science more generally, is pursued by heartless elites who disregard public concerns? In short: no. This is an ignorant, flagrantly dishonest strawman. It is never, in the real world, that. The people who manufacture and push this false cultural meme on us are dishonest; they seek to steer an often unknowing society toward their own self-serving agenda, an agenda that cannot survive the light of rational, fair, ethical examination, hence their sly dishonesty. The real world is never this simple-minded, one-dimensional fiction. In the real world, scientists and technologists are not evil, are of necessity if nothing else the opposite of dishonest, and have neither time nor inclination for shady, conspiratorial ulterior motives. That story, however superficially enticing, is pure fiction. Deep down, under whatever emotions, ideological bent, and biased noise might be pummeling our conscious minds, I think we all know this.

With regard to sensors, the Wikipedia article on driverless cars sheds some light but, surprisingly, not much:

Typical sensors include lidar, stereo vision, GPS and IMU. Visual object recognition uses machine vision including neural networks.

The last two journalistic articles linked above are more helpful. In this context, “vision” can mean optical or infrared or both (or even some other wavelength range, such as radar). I would have been utterly shocked if these systems did not use both IR and optical sensors; it would surely be the height of both stupidity and irresponsibility if they did not. It is worth pointing out that engineers are not stupid, and rarely irresponsible, while corporate upper management sometimes is, and politicians can almost certainly be counted on to be, both stupid and the epitome of irresponsibility. Given the apparently poor regulation in this area, stupidity and irresponsibility are therefore real possibilities, despite the no doubt multiple layers of safety protocols that smart people, down at the technical levels inside the companies, have managed to put in place despite any upper-level idiots.

If this particular car was outfitted with both visible and IR sensors (lidar would necessarily be IR in this context), then the fault must lie 1) with the chosen sensor sensitivities, ranges, or fields of view, or some combination thereof; 2) within the AI decision assessment of the filtered and cleaned input signals; or 3) in some combination of both. That’s it; those are the options. Yes, it’s complicated.

(I’m assuming there was not a sensor hardware failure, which would be both a manufacturing/testing and reliability fault and a redundancy failure in the design. That is a different topic I won’t address here, though at this point it remains a possibility.)
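
To make those categories a little more concrete, here is a deliberately simplified, hypothetical sketch of the kind of sense-then-decide pipeline such a vehicle runs, with comments marking where each numbered fault class above would live. None of the names, thresholds, or logic here come from Uber’s actual software; it is purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float      # range to the tracked object
    closing_mps: float     # closing speed toward it
    confidence: float      # object-classifier confidence, 0..1

def sense(raw_lidar, raw_radar, raw_camera) -> list:
    """Fault class 1 lives here and upstream: sensor sensitivity, range, and
    field of view determine what ever shows up in the raw data at all, and the
    noise filtering / tracking in this step determines what survives as a
    Detection."""
    detections = []
    # ... clustering, tracking, and classification of the raw returns ...
    return detections

def decide(d: Detection) -> str:
    """Fault class 2: the decision logic applied to the cleaned inputs."""
    if d.confidence < 0.5:           # one plausible failure mode: a real
        return "ignore"              # pedestrian scored with low confidence
    tti = d.distance_m / max(d.closing_mps, 0.1)   # crude time-to-impact, s
    if tti < 2.0:
        return "emergency_brake"
    if tti < 6.0:
        return "slow_and_alert_safety_driver"
    return "continue"

# Fault class 3 is simply the two interacting: e.g. a marginal, flickering
# detection (class 1) feeding a confidence gate that drops intermittent
# tracks (class 2).
```

The only point of the sketch is that each numbered option corresponds to a distinct place in a pipeline like this where an investigation would have to look.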

Indeed, as we learn from the articles above, this vehicle used optical, IR, and radar sensors, as one would reasonably and correctly expect. If the production design had been recklessly limited to just optical wavelengths (I find it unthinkably unlikely that any significant U.S. company would be that self-destructively reckless, but suppose so anyway), then you must arguably add, and otherwise could still reasonably add, a fourth possibility: the fault lies foremost with a cost/safety trade-off decision or series of decisions that some fucking idiot in a position of overriding power might have made, undoubtedly (if this even happened) over the vehement objections of the design engineers and other technical experts. Trade-off decisions are ultimately subjective assessments, whether wisely made or not, and therefore a thick morass of difficulties and ambiguities. (As an aside, optical-only would be, and many other potential technical shortcomings could be, a direct consequence of allowing privatization of technological advances in the absence of regulation, in spite of all the obvious public risks. Ayn Rand was a fucktard and an awful human.)

But here’s the point I wish to make: either way, the woman who was struck and killed was NOT at fault, no matter how careless or distracted she may or may not have been in that terrible moment. Keep in mind that we can never really know her state of mind or level of distraction anyway; the available data are insufficient for attempting any such assessment, even if it could be useful (it is not). But even so, that judgment, however tempting for some people when emotions run high, is not relevant. It is also not relevant that she was not at a crosswalk; crosswalks exist primarily to counter (or, rather, partly contain) human driver error. Nor is it relevant that it was dark instead of broad daylight: a paucity of optical photons does not affect two of the three sensor wavelength ranges, and optical sensors that are extremely sensitive at low light levels are a well-established and inexpensive technology. It is not even relevant that the safety driver was provably distracted and then failed to react and take control. None of these are relevant if a fatal fault lies further up the precedence hierarchy.

I watched the video clip of the incident, compiled from on-board optical cameras, several times. It is crystal clear to me that this accident should never have happened, because first and foremost something is or went wrong (or is at least insufficiently comprehensive, and therefore still wrong) with the sensor design, or the sensor data analysis software (this is where noise filtering takes place), or the software decision module (the machine learning and AI part), or, and this I think is most likely, some combination of those. Even if every reasonable precaution had been allowed (and we all know that every precaution the engineers deemed necessary and reasonable very likely was not allowed, for a variety of reasons, some good, some bad), this is still where the fault lies that we should all care most about right now. It is a complex system operating in a complex, time-variable setting, and somewhere therein lies a problem, a bug, or an oversight that unambiguously takes precedence over whatever may or may not have been happening with that poor, tragically unlucky woman.

We don’t know yet what, exactly, happened. But we know where we should be looking. It behooves all of us, as a society, to allow unhindered gathering of all possibly relevant data around the event and to give the experts the access, resources, and time that, according to them, they need to hunt down the real cause and find ways to fix it. The technical problem, wherever it lies, is the only valid first priority. Setting up red herrings (“it’s her fault!”, “no, it’s Uber’s fault!”, “no, it’s capitalism’s fault!”, “no, it’s the idiot Arizona legislators’ fault for once again valuing corporate profit over human lives!”, etc.) only uselessly distracts from, and may even prevent, tracking down the technical problem.

Complex systems can behave in unexpected ways. Probably several things — perhaps even each one innocuous in isolation — had to combine for this accident to have happened. It is likely every single-fault failure mode was identified and mitigated by the design and test engineers. Those failure points are relatively straightforward to deal with, and, again, engineers are far from stupid. However, multiple chained events leading to unexpected, even unpredictable, behavior in a complex system can be extremely difficult and time-consuming to debug, both before and after the fact — especially in the presence of insufficient testing or design resources. This is every engineer’s absolute, hands down, most disturbing nightmare. It is THE thing that keeps engineers up at night.
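
A toy calculation, with entirely made-up numbers (nothing here comes from the investigation), shows why chained failures are so much harder to exhaust than single faults: any one specific chain of individually benign conditions is so rare that testing will almost never see it, yet there are so many possible chains that some chain eventually occurring is nearly guaranteed.

```python
from math import comb

# Purely illustrative numbers; none come from the actual incident.
n_conditions = 40        # distinct, individually benign operating conditions
p_per_hour = 1e-3        # chance any given condition is present in a given hour
fleet_hours = 5e5        # assumed cumulative fleet test hours

# How many distinct 3-condition chains would exhaustive testing have to cover?
triples = comb(n_conditions, 3)

# Expected occurrences of ONE specific 3-condition coincidence across the
# fleet, treating the conditions as independent (a simplification).
one_chain = fleet_hours * p_per_hour**3

# Expected chained coincidences of ANY kind (equal-probability assumption).
any_chain = triples * one_chain

print(f"possible 3-condition chains: {triples}")                  # 9880
print(f"expected hits of one specific chain: {one_chain:.4f}")    # 0.0005
print(f"expected chained coincidences overall: {any_chain:.1f}")  # ~4.9
```

Under those invented numbers, no single specific chain would plausibly show up in testing, yet about five chained coincidences of some kind would be expected in the field.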

Yet even if every conceivable precaution had been (allowed to be) taken, dependent multimode failures can and do still happen anyway. It is inherent in the very natures of technological advance and sophisticated systems.

Sometimes, nobody is to blame.

The best anybody can ever do, and therefore the most anybody can ever ask, is to implement every reasonably knowable precaution, perform every reasonably knowable relevant test, and iterate enough design-test-correct cycles to satisfy everyone’s most important misgivings, in order to ferret out all (you hope) of the important gotchas you undoubtedly didn’t or couldn’t, for whatever reasons, think of beforehand. Even then, surprising shit will happen. To further compound things, in the real world you can very rarely (as in: never) afford to fully implement all of this to everybody’s satisfaction, which just makes bad, unexpected events all the more likely.

Nobody wants it to happen, or even to be a possibility, but bad shit is very likely going to happen no matter what. You do your best — and, even in the real world, almost everybody in this line of work does — given the current context, available knowledge, and available resources, and you hope that you’ve managed to mitigate the severity of the consequences of the inevitable but unpredictable bad things to a sufficient extent that in the end you have navigated this unavoidable minefield without anybody getting hurt or killed.

That didn’t happen, this time. But nobody should be surprised. Nor should anybody be quick to pronounce (nor is anybody entitled to pronounce) ill-informed judgments.

This is the way it is. You learn from the problems that, despite all your efforts, hit you and your team; you fix them; you become wiser (if more saddened); and despite the paralyzing pit in your stomach you move on.

My dad was an aircraft flight test instrumentation engineer, so I grew up seeing him and his friends live this process, several times. As a kid I did not fully comprehend what had happened, and he had a tendency toward understatement, rather than emphasis, when it came to serious events. One of the incidents occupies a vividly chiseled volume in my brain. In a freak accident, a chase plane (a standard safety measure) was nicked on the wing by a test helicopter’s main rotor blade. As it happened, Dad was not on board the helicopter this time but was directing the chorus of test data from the ground. The helicopter pilot managed, somehow, to recover control, but the chase jet went down, hitting the ocean surface off the California coast (testing over the ocean being another standard safety precaution: never expose the public to even the slightest possible risk). Out of long experience, the seasoned test pilot habitually flew with his harness buckled but loosened. This habit saved his life. His copilot was a different story.

The stunned test engineers, as my dad later relayed to me, listened to this event unfold, from routine start to grisly finish, on the comms radio, while pitiless equipment monitored the pilots’ life-sign data. The test pilot and his copilot both survived the crash. The cockpit canopy had automatically released upon impact. As the plane sank, the pilot immediately unbuckled and escaped. The descent of the plane-turned-anchor was too rapid for him to have any hope of turning around and helping his copilot (he tried anyway). The apparent size of the crippled jet shrank with distance until it disappeared from sight, his friend and colleague methodically struggling in the dark, finger-numbing cold and the rapidly rising pressure. (Test pilots are a uniquely cool-headed lot in dire circumstances.) The well-liked twenty-five-year-old, ironically trapped by his snugly tightened safety harness, sank with his plane, fighting to release himself until he lost consciousness and died in the unforgiving abyss.

Imagine the pilot having to witness this. Imagine the test team looking up from their instruments, in dawning horror informed by knowing dread, to each other’s faces, one after another, hoping for some sign from the more experienced team leaders (in this case my dad) that, despite what their rational brains were telling them, things really would be okay, that nobody was really going to die. For any engineer, nothing is more inexpressibly awful, nor more dreaded, sitting there in the backs of their minds, nor more devastating when it one day happens, than to have to live through, and afterwards continue to live with, somebody getting killed. But it happens. It will happen. And sometimes, nobody is to blame.

