05/30/2018

Convenience at a Cost: Autonomous Vehicles are not (yet) what they seem

Leading automotive manufacturers and technology developers have made exceptional advancements in autonomous vehicle technology, but they have not yet achieved what has been promised to, or expected by, the consumer.

In flashy marketing campaigns and social media blasts, we have been sold a bill of goods that autonomous vehicles are not yet able to deliver. Believe it or not, autonomous vehicles cannot (yet) drop your kids off at school and drive you to work while you sleep. While some manufacturers and their technologies provide certain functionality to their consumers, these systems vary wildly in capability and design. What you get when you buy an autonomous vehicle may not be what you think.

Autonomous vehicles are classified by the National Highway Traffic Safety Administration (NHTSA) by ‘levels’ that equate to the amount of self-driving the vehicle will do for you[1]. Level 0 means that there is zero automation and that the driver is entirely responsible for all primary vehicle controls (braking, steering, throttle, etc.). One step up is Level 1, in which automation of specific tasks is possible, such as speed control comparable to cruise control. At the extreme end is Level 5, in which the vehicle itself performs all safety-critical driving functions with no human input whatsoever.

Present-day, commercially available autonomous vehicles have achieved Level 3 autonomy. This means that the driver can hand over full control to the autonomous system only under certain driving conditions, such as mundane highway driving, but not when more complex navigation is required, such as through a construction zone. However, even in those allowable situations, the driver must be ready to retake control of the vehicle at any given time.
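
For readers who want the taxonomy laid out concretely, the levels discussed above can be summarized as a simple lookup. The sketch below (in Python) is purely illustrative: the wording paraphrases the descriptions in this article rather than NHTSA's official language, and the rule of thumb simply restates the point that anything short of full automation keeps the human in the loop.

```python
# A minimal, illustrative mapping of the automation 'levels' described above.
# The wording paraphrases this article, not NHTSA's official definitions;
# see the policy statement cited at [1] for the authoritative language.

AUTOMATION_LEVELS = {
    0: "No automation: the driver performs all primary controls (braking, steering, throttle).",
    1: "Function-specific automation: individual tasks are automated, such as cruise-control-style speed control.",
    3: "Conditional self-driving: the vehicle drives itself in limited conditions (e.g. routine highway driving); the driver must be ready to retake control at any time.",
    5: "Full automation: the vehicle performs all safety-critical driving functions with no human input.",
}

def driver_must_stay_ready(level: int) -> bool:
    """Rule of thumb implied by the text: anything short of Level 5 keeps the human in the loop."""
    return level < 5

if __name__ == "__main__":
    for level, description in sorted(AUTOMATION_LEVELS.items()):
        status = "driver must stay ready" if driver_must_stay_ready(level) else "no human input required"
        print(f"Level {level} ({status}): {description}")
```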

The irony of requiring humans to operate self-driving vehicles aside, the reliance on driver intervention presents new challenges both to the drivers themselves and to those assessing the cause(s) of accidents when they occur.

Is driving, driving anymore?

Driving is a skill like any other in that it takes time to learn and master. Yet the battery of tasks involved in driving is changing because of the advancement of autonomous technologies, increasing the proportion of unsuitably skilled motorists on the roadway every time an autonomous vehicle rolls off the lot.

When driving, drivers are expected to perform navigational tasks such as speed and lane control, and to scan the roadway ahead for potential or emergent hazards. While doing so, drivers are also expected to have their feet in the foot well and hands on the steering wheel. By consistently applying these practices and postures, drivers become conditioned to a set of predictable and repeatable behaviours behind the wheel.

Consider the example of encountering a jaywalking pedestrian on the roadway. While driving a traditional (non-autonomous) vehicle, the driver is tasked with observing the pedestrian, assessing whether the pedestrian's path will cross that of the vehicle, determining whether evasive action (braking and/or steering) is required and then, if needed, taking such action to avoid a collision. This and related situations are common to the typical driver experience.

Now consider that the driver is approaching the same pedestrian but is instead behind the wheel of an autonomous vehicle in self-driving mode. How is the situation different?

For one, the driver is not driving. What is the consequence of this? If not actively involved in the task of driving, drivers may not be attentive to the roadway and potential hazards ahead. Their hands and feet may not be on the steering wheel and pedals. Think of what YOU would be doing in the car if it were taking care of the driving.

Under these conditions, ‘drivers’ of modern-day autonomous vehicles face an inherently different set of challenges in operating these vehicles safely, particularly when the vehicle indicates that the driver must retake control. Re-engaging in the task of driving can involve very different behaviours and require very different amounts of time, depending on what occupants were doing beforehand.

When in self-driving mode and approaching a roadway section where drivers must take back control of the vehicle, autonomous vehicles will provide signals and cues to the driver. For an effective hand-over to the human occupant, self-driving vehicles must provide warning “with sufficiently comfortable transition time”[1], yet that is an idealized scenario. In situations where the hazard detection technology (LIDAR, radar, sonar, etc.) is delayed in detecting, or entirely misses, roadway obstacles or challenging conditions, this ‘comfortable’ timeframe may morph into something much more urgent. Under pressure, humans generally perform poorly compared to when time to plan and execute a response is plentiful. In the well-known phenomenon of the ‘speed-accuracy trade-off’, the faster you try to perform a task, the less accurate you will be[2]. Stated differently, you are more prone to making errors when trying to perform tasks faster than you typically would.
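
One common way this trade-off is illustrated in the human performance literature is as an accuracy curve that rises toward a ceiling as more response time becomes available. The short sketch below captures that idea with invented parameter values; it is a toy illustration only, not a model taken from Wickelgren (1977) or fitted to driving data.

```python
# A toy illustration of the speed-accuracy trade-off discussed above.
# Accuracy is modelled as rising toward a ceiling once the available
# response time exceeds a minimum; all parameter values are invented
# for illustration and are not drawn from [2] or from driving data.

import math

def expected_accuracy(available_time_s: float,
                      ceiling: float = 0.95,     # best achievable accuracy (hypothetical)
                      rate_per_s: float = 2.0,   # how quickly accuracy builds with extra time (hypothetical)
                      min_time_s: float = 0.3):  # time below which an accurate response is not possible (hypothetical)
    """Expected accuracy of a response given the time available to prepare it."""
    if available_time_s <= min_time_s:
        return 0.0
    return ceiling * (1.0 - math.exp(-rate_per_s * (available_time_s - min_time_s)))

# A rushed takeover produces far less accurate responses than a 'comfortable' one.
for t in (0.5, 1.0, 2.0, 4.0):
    print(f"{t:>4.1f} s available -> expected accuracy ~ {expected_accuracy(t):.2f}")
```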

This problem is magnified when a driver is warned to re-engage at a time when they are distracted or not properly seated in the driver’s seat. The subsequent rush to get back to the ten-and-two position on the steering wheel with the feet properly on the pedals, all while having to reorient themselves to their surroundings and any roadway hazards, may be a tall order for some. Even when manually operating non-autonomous vehicles, drivers can miss the brake pedal in the panic of attempting to avoid a collision[3], and such errors in driver response can have catastrophic consequences. The likelihood of delayed responses, misplaced hands and feet, and poor decision-making only increases when retaking control of autonomous vehicles.

Case in point

The adverse interaction between drivers and their self-driving vehicles is best exemplified by a recent collision between a Tesla and a firetruck. Briefly, a Tesla reportedly operating in autonomous mode collided with a firetruck stopped on the shoulder of a highway[4]. On the surface, this incident may seem to have been caused by a technical malfunction or algorithmic glitch that failed to avoid an obvious roadway hazard. But it may not be that simple. Tesla states in its owner’s manual that “it is [the driver’s] responsibility to stay alert, drive safely, and be in control of the vehicle at all times”, and that drivers must “always watch the road in front… and be prepared to take corrective action at all times”[5]. These expectations of driver engagement stand in remarkable contrast to the expectations of self-driving vehicle capabilities. That being said, did the driver understand their responsibility or the capabilities of the vehicle? Had they been exposed to the warnings or other cues provided by the vehicle to prompt them to act? Did the driver understand the severity of the hazard? Did they try to re-engage and misfire under the pressure to do so? These and a host of other questions remain unanswered, yet they are critical to understanding the cause of this and related collisions.

Calling vehicles ‘autonomous’ yet requiring driver intervention is both ironic and dangerously misleading. To bridge the gap between expectations of vehicle capabilities and the reality of their shortcomings, we may be approaching a world in which drivers must carefully read their vehicle’s manual before ever driving it, or be formally trained in the use of the vehicle’s technology. Without training or education from those with expertise in the technology’s use, capabilities and limitations, the misconceptions formed by media and advertising will be the guiding force of our understanding of the technology at our fingertips, and perhaps our excuse for not doing more to avoid a collision threat.

Practice…we’re talking about practice?

As with any skill, we improve with practice. But how much practice is required before a driver can properly use autonomous driving technology? At present, autonomous vehicle operators are under no obligation to demonstrate minimum levels of competency with the technology—there are simply no mandatory tests or minimum thresholds to prove a driver’s familiarity or skill in its use. The effect is that some users of autonomous vehicles may be highly under-skilled, and yet the only barrier to purchasing an autonomous vehicle is a financial one.

This raises a vitally important point: the task of driving is changing, yet the testing of driver skill and competency has not. As technological advancement surges on, governments and agencies responsible for ensuring roadway safety and driver training should consider the effect that self-driving vehicles have on human factors and skill development.

Human performance research is full of anecdotes and hardcore science which collectively indicate that mastering a skill takes time and practice. In other words, the more you drive, the better you generally get at the task of driving. Drilling deeper into the performance gains that come with driving experience, what you find is that drivers become better able to monitor the roadway for potential hazards before they become unavoidable collision threats[6]. In other words, what a naïve and inexperienced driver would see as a benign event, or not see at all, an experienced driver would identify as a potential problem and take anticipatory action against. In the most traditional sense, practice and experience with driving help drivers avoid more accidents.

However, practice with self-driving vehicles paints an entirely different picture. Unlike with conventional vehicles, extended exposure to autonomous vehicles reinforces habits and behaviours that stand in remarkable contradiction to roadway safety. As drivers grow more confident in the system’s abilities and trust the technology more and more to perform the task of driving, they become increasingly prone to inattention and distraction, a tendency that only proliferates over time[7]. The end game is a host of occupants sleeping behind the wheel at just the moment the vehicle requires the driver to re-engage. A sobering thought.

Who (or what) caused the collision?

When vehicles are involved in collisions, investigators turn to human factors research to understand what drivers are expected to do when confronted with collision threats. The science behind expected (human) driver behaviour is well-established in various situations. However, the most established science assumes that the driver is, well, driving.

When an autonomous vehicle is involved in a collision, understanding what the human occupant should or could have done to avoid the collision depends on another host of variables not yet held to the same level of scientific scrutiny. This is because independent research has not been able to keep up with the pace of technological advancement or with the array of options and models of the technology being produced by manufacturers: every manufacturer is left to develop and validate its autonomous technology on its own. The research behind proprietary technology occurs behind closed doors, creating a siloed ecosystem of knowledge that may not be transferable across different makes and models of the technology.

To highlight the challenge of assessing expected driver behaviour, consider again the example of an autonomous vehicle in self-driving mode approaching a roadway hazard, and assume that the hazard is clearly visible and detectable to the human occupant. For a hazard visible to the human occupant, it may not be a stretch for them to assume that the advanced technology contained in their vehicle would ‘see’ the hazard as well. Wouldn’t you think that the technology was at least capable of doing what a human could do? If the self-driving vehicle has successfully avoided collisions with similar hazards in the past, wouldn’t you think that it would do so again? Confronted with this situation, drivers’ trust in their vehicle’s capability to respond to roadway obstacles may lead them to delay or forgo taking evasive action themselves.

When approaching a potential collision hazard in an autonomous vehicle, the question of what the driver should have done is no longer focused on when the driver was able to detect and respond to the roadway hazard directly. Rather, it is focused on when the driver recognized that their vehicle had not detected and responded to the roadway hazard as expected. This seemingly subtle distinction is critical to assessing the fault and cause of collisions. The driver’s behaviour in a critical situation is directly related to their expectation of the vehicle’s performance, likely influenced and reinforced by previous successful trips completed without human intervention. An absence of response by the human occupant may in fact be a manifestation of trust in the technology rather than of inattention or distraction.
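
The practical consequence of that distinction can be made concrete with a toy timeline. All of the numbers below (closing speed, hazard distance, time spent trusting the vehicle) are hypothetical and chosen only to illustrate the reasoning, but they show how waiting for the vehicle to act first shrinks the time left for the human to respond.

```python
# A toy timeline contrasting the two framings described above. All numbers
# are hypothetical and chosen only to illustrate the reasoning; they are not
# taken from any investigation or standard.

speed_mps = 27.0           # closing speed, roughly 100 km/h (hypothetical)
hazard_distance_m = 120.0  # distance at which the hazard becomes visible (hypothetical)

time_to_hazard_s = hazard_distance_m / speed_mps  # ~4.4 s in this example

# Framing 1: conventional vehicle -- the clock starts when the driver can see the hazard.
conventional_window_s = time_to_hazard_s

# Framing 2: self-driving mode -- the driver first waits to see whether the vehicle responds.
time_trusting_vehicle_s = 2.0  # hypothetical period spent expecting the system to act
autonomous_window_s = time_to_hazard_s - time_trusting_vehicle_s

print(f"Conventional driving: ~{conventional_window_s:.1f} s to detect and respond")
print(f"Self-driving mode:    ~{autonomous_window_s:.1f} s left once the driver realizes the vehicle is not responding")
```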

The legal and insurance industries tasked with assessing liability in collision events will no doubt be confronted with matters involving driverless vehicles and their occupants. The tests for whether a collision should have been avoided, and for who or what is at fault, will span technological factors, human factors, and their interactive effects. Even though some manufacturers aim to pass ‘responsibility’ to the occupants of their self-driving vehicles, both the costs and the benefits of the technology cannot be ignored.

Powering down

As most manufacturers and autonomous technology developers would have you believe, owning a fully autonomous vehicle would be the pinnacle of luxury—sleeping while commuting, socializing with passengers, or enjoying the scenery during long drives. But such convenience inevitably comes at a cost, one necessarily borne by today’s driver transitioning from a world of cars being driven to one of cars doing the driving.

The goal of autonomous vehicles is ultimately to remove humans from the equation and thereby make driving safer. But while we transition from no autonomy to full autonomy, drivers are being quietly, and in some cases unknowingly, asked to do very different things from what they are accustomed and trained to do. In this interim period, and as with any new technology, it is imperative to understand it first. At least for now, that is something you must do for yourself.

Dr. Adam Campbell, Ph.D., Senior Associate in Human Factors and Personal Injury

Fabian Erazo, B.Eng., M.A.Sc., Associate in Human Factors and Personal Injury

 

[1] National Highway Traffic Safety Administration (NHTSA) (2013). Preliminary Statement of Policy Concerning Automated Vehicles. Department of Transportation, Washington, D.C.

[2] Wickelgren, W. (1977). Speed-accuracy tradeoff and information processing dynamics. Acta Psychologica, 41, 67-85.

[3] Mazzae, E. N., Barickman, F. S., Forkenbrock, G., & Baldwin, G. H. S. (2003). NHTSA light vehicle antilock brake system research program task 5.2/5.3: Test track examination of drivers’ collision avoidance behaviour using conventional antilock brakes (Report No. HS-808 875). National Highway Traffic Safety Administration.

[4] http://losangeles.cbslocal.com/2018/01/22/new-safety-concerns-tesla-autopilot/amp/

[5] Tesla Model S Owner’s Manual (2017). Retrieved from: https://www.tesla.com/sites/default/files/model_s_owners_manual_north_america_en_us.pdf

[6] Underwood, G. (2007). Visual attention and the transition from novice to advanced driver. Ergonomics, 50(8), 1235-1249.

[7] Cunningham, M., & Regan, M. (2015). Autonomous Vehicles: Human Factors Issues and Future Research. Proceedings of the 2015 Australasian Road Safety Conference, Gold Coast, Australia.