
Red Flags, Autonomous System Safety, and the Importance of Looking Back Before Looking Forward


Have we gone through the introduction of autonomous vehicles before? In other words, have we gone through the introduction of a new, potentially hazardous but wonderfully promising technology?


Of course we have. Many times. And we make many of the same mistakes each time.


When the first automobiles were introduced in the 1800s, mild legislative hysteria ensued. A flurry of ‘red flag’ traffic acts was passed in both the United Kingdom and the United States. Many of these acts required self-propelled locomotives (as they were called) to be operated by at least three people, to travel no faster than four miles per hour, and to be preceded by someone on foot carrying a red flag around 60 yards ahead.


The red flag is a historical symbol of impending danger. Perhaps the most famous red flag traffic act was one in Pennsylvania that would have required the entire vehicle to be disassembled, with all parts hidden in the bushes, should it encounter livestock (the bill was passed by legislators in both houses but was ultimately vetoed by the governor).


These acts seem humorous now, but to miss their key lessons, and those from other technological revolutions, would be ill-advised.


The first red flag lesson is that society instinctively hoists red flags in the absence of information.


We are seeing this now with autonomous vehicles. Why?


Perhaps it is because, without information, people tend to focus on specific narratives rather than on net benefit. Steven Overly’s article in the Washington Post discusses the reaction people will likely have to autonomous systems, noting that humans are not ‘rational when it comes to fear based decision making.’ Overly quotes Harvard University’s Calestous Juma, who writes in his book Innovation and Its Enemies: Why People Resist New Technologies about how consumers were wary of refrigerators when they were first introduced in the 1920s. The (somewhat remote) prospect of refrigerators catching fire weighed heavily on people’s minds, despite the obvious health benefits of storing food safely.


So what happened? Three things. First, the Agriculture Department publicly advocated the health benefits of refrigeration. Second, once refrigerators became ubiquitous as a result of those efforts, they also became safer, as manufacturers learnt from their mistakes. And finally, the consuming public became more experienced with refrigeration - and, to an extent, more knowledgeable.


We need to make a distinction here: more information doesn't equate to more knowledge. It is interesting that the Western nations pioneering autonomous vehicle technology are, for some reason, more pessimistic about AV safety than developing countries - with the US public perhaps the most pessimistic of all. In other words, the most pessimistic countries are those with (almost by definition) the most information about autonomous vehicles, since they are home to the startups and established car makers pioneering the technology.


So why is this the case?


The second red flag lesson is that consumers don’t trust experts.


Take the current issue of drunk driving. Autonomous vehicle proponents argue that autonomous vehicles will effectively eliminate crashes (and deaths) caused by drunk driving. And this makes theoretical sense: with 94 per cent of current vehicular crashes caused by human error, autonomous vehicles (which remove the ‘human’) should effectively eliminate these crashes. But the broader population is not so sure.


A 2015 Harris Poll found that 53% of United States drivers believe autonomous vehicles will reduce the prevalence of drunk driving. The same figure applies to distracted driving. This means that 47% of people can’t see the link between autonomous vehicles and fewer crashes caused by inebriated or distracted drivers. To be clear, 47% of the population ‘is not stupid,’ so the experts simply have not, or cannot, sell the safety message – yet.


And there is of course nuance. For example, ‘Level III’ autonomous driving is not really autonomous. A ‘Level III’ autonomous vehicle can do pretty much everything itself, but still needs a human driver to be there ‘just in case.’ Some argue that this will actually increase the prevalence of drunk driving - an inebriated occupant cannot serve as the ‘just in case’ fallback, yet may be tempted to let the car drive anyway. But of course, this won’t be an issue once we get truly autonomous vehicles that have ‘earned’ our trust.


The third red flag lesson is that governments (and the regulators they appoint) will control the deployment of new safety-critical technology.


Politicians are not scientists. They are a special subset of society: inherently conservative in their thought processes and inclined to demand red flags. They have their collective strengths and weaknesses, but what most of the voting public cannot empathize with is the responsibility they carry for virtually everything.


Perhaps today’s governments are more open-minded about autonomous vehicle technology, no doubt because they are hoping for commensurate economic benefits. Some are waiting for other governments to take the plunge and set a precedent they can follow. But there is also no doubt that some lawmakers are eyeing the tangible economic benefits their city will hopefully reap if it is among the first to deploy this technology.


The fourth red flag lesson is that we tend to incorrectly gauge the performance of new technology using perspectives of the old.


In the 1800s, the main safety concern with self-propelled locomotives focused on those outside the vehicle, so safety became a measure of whether the technology would induce panic in man, woman and beast alike. But we quickly learned that instead of looking outwards, automobile safety needed to look inwards - that is, to focus on occupant safety in the event of a crash. As it turns out, livestock and pedestrians could live quite easily in a self-propelled locomotive world. Irish author and scientist Mary Ward became the first recorded automobile fatality when she was ejected from the steam-powered vehicle her cousin built. And as vehicles became more popular, it became clear that drivers and passengers were more likely to be killed or injured than anyone else.


In the early 1900s, vehicles gained hydraulic brakes and safety glass. Crash tests started in the 1920s, and General Motors performed the first barrier test in 1934. So today, vehicle safety is largely about the people inside the vehicle - not outside it.


Why is there limited focus on those outside vehicles? Because we have human drivers. Drivers who are assumed to be trained, licensed and able to avoid hazards and people. But this is about to change.


There are many more red flag lessons to be learnt, but for now we will stop at four.


So where to from here? Perhaps the most relevant red flag lesson is the last. The first two lessons are largely societal, and can be addressed by better communication with the driving population. And because autonomous vehicles are yet to really hit the marketplace, we can assume that the car makers now investing in them (virtually every one of them) are yet to unleash their full marketing arsenal. Which they will. And we are seeing some governments at all levels leaning further forward than others, probably because they think this will make more financial sense, as mentioned above.


But we need to (much) better understand how we will create safe autonomous vehicles in a way that can be certified. Take the Tesla Model S that crashed into the side of a tractor trailer in Florida while in ‘Autopilot’ mode, killing its driver. This was an event many in the industry feared - the first public autonomous-vehicle-related fatality. The National Highway Traffic Safety Administration (NHTSA) report into the circumstances surrounding the accident determined that the driver had seven seconds to react to the tractor trailer in his path, but was clearly distracted. Drivers are required to remain attentive when Autopilot is enabled.


But isn’t Autopilot going to make drivers less attentive and cause more crashes?


Well, many people think the answer is no: the NHTSA report found a 40% reduction in crash rates for the Tesla Model S with Autopilot enabled (noting that some of the statistics autonomous vehicle makers have used in the past to demonstrate safety have attracted widespread criticism). Unfortunately, the data Tesla provided to NHTSA in this case has since been described as ‘bogus’, and we are still no clearer as to the net benefit or harm of Tesla’s Autopilot features.


How can this be? How can we have done as much as we have without being able to say definitively, one way or the other, whether a certain driving mode contributes to more accidents? Tesla is at least one of the parties that needs to answer this question.
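
Part of the answer lies in exposure data. A before-and-after crash-rate comparison is only as trustworthy as the mileage denominators behind it, and this is precisely where the Tesla figures were attacked. The short Python sketch below uses entirely hypothetical numbers (not Tesla’s or NHTSA’s actual data) to show how missing mileage records can inflate an apparent safety benefit:

# Illustrative sketch only: all numbers are hypothetical. The point is that
# a before/after crash-rate comparison stands or falls on its exposure
# (mileage) data, not just its crash counts.

def crash_rate(crashes, million_miles):
    """Crashes per million vehicle miles travelled."""
    return crashes / million_miles

before = crash_rate(crashes=1200, million_miles=900)   # feature disabled
after = crash_rate(crashes=850, million_miles=1100)    # feature enabled
print(f"apparent reduction: {100 * (1 - after / before):.0f}%")  # ~42%

# If part of the 'before' mileage is missing from the records, the 'before'
# rate is overstated and the apparent safety benefit inflates:
before_gappy = crash_rate(crashes=1200, million_miles=700)
print(f"with missing mileage: {100 * (1 - after / before_gappy):.0f}%")  # ~55%

In other words, until the denominators are audited, a headline percentage tells us very little either way.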


Going back to the unfortunate accident - what Tesla did afterwards is telling. Tesla updated their vehicles’ on-board software to better identify tractor trailers cutting across the driving path. And this is where we humans really need to change our perspective when it comes to autonomous vehicle safety.


A safe autonomous vehicle will be more like an iOS or Windows operating system - one that is constantly maintained from afar, just as Apple and Microsoft maintain theirs. We won’t be able to slap a sticker of certification on an autonomous vehicle as it rolls out the factory door. The manufacturer’s ongoing support system will be as much a part of safety as the braking system. Moving from one-time certification to ongoing safety demonstration will likely be the most challenging aspect of assuring autonomous vehicle reliability.
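
What might that ongoing support system look like? As a minimal sketch (every name and mechanism below is an illustrative assumption, not a description of any manufacturer’s actual system), consider a vehicle that refuses any over-the-air update it cannot authenticate:

import hashlib
import hmac

# Hypothetical sketch of 'ongoing support as part of safety'. The shared-key
# scheme and names are illustrative; a production system would use asymmetric
# signatures anchored in a hardware root of trust.

FLEET_KEY = b"illustration-only-shared-secret"

def sign_update(payload: bytes) -> str:
    # Manufacturer side: tag the update payload.
    return hmac.new(FLEET_KEY, payload, hashlib.sha256).hexdigest()

def verify_and_stage(payload: bytes, tag: str) -> bool:
    # Vehicle side: reject tampered or corrupted updates outright.
    expected = hmac.new(FLEET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False
    # ...stage the update, keep the old image for rollback, report home...
    return True

update = b"perception model v2: improved detection of crossing trailers"
tag = sign_update(update)
assert verify_and_stage(update, tag)             # authentic update accepted
assert not verify_and_stage(update + b"x", tag)  # altered update rejected

The design point is that the update channel itself becomes safety-critical: a certifier would need to assess not just the vehicle that left the factory, but the process that keeps changing it.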


And as we continue to experience autonomy, we must brace ourselves. The name of the driver killed in the Tesla crash was Joshua Brown. He has a family who mourn his loss. We cannot list the people who might only be alive today because of Tesla’s Autopilot. And we won’t be able to list those who will be alive in the future because of what Tesla learned from the crash.


But we know they exist, even if they don’t know it themselves. We need to be thinking of them when we decide which red flags to raise in the future.
