Summary in seven points:
- Everything is becoming more automated.
- We tend to cede control to automation before it is actually ready to take over.
- In the process of ceding control, our skills degrade; in the first stages of this process, our past hands-on experience protects us from disasters, allowing over-reliance on automation to settle in.
- People with a lot of experience, especially of operating in the pre-automated world, currently act as a kind of safety buffer – but as generations shift over time, we are losing that buffer.
- We are approaching an automation tipping point; due to prematurely relying on automation in many domains, we now find ourselves needing to rely on automation because we no longer have the skills to do it manually.
- This premature forced shift to increased automation will result in systems that are brittle or unsafe. We have two main options, neither of which is particularly good: urgently improve automation or urgently attempt to regain human proficiency.
- The third option is using automation to improve human proficiency – it just needs a different kind of automation than we normally focus on.
Setting the stage
Everywhere you look, automation is taking over the world – sometimes in small, barely perceptible steps camouflaged as productivity improvements that are often welcomed as useful tools, other times as major attempts at automating larger pieces of the system.
Whenever either happens, by definition something that used to be done by humans is no longer done by them.
We rarely question this stealthy tidal wave of automation; we welcome our adaptive cruise control – it is very handy, after all – just as we welcome sophisticated Excel macros or debugging tools that make our lives easier.
Underneath it all, the macro-level shift to more automation is worth considering more carefully.
As we cede control to automation, our skills in what we no longer actively do degrade. This much is obvious; if we stop doing something, we become worse at it. Skills rust, knowledge is forgotten.
Losing skills is not necessarily problematic; most of us can no longer grow our own food, for example, and society has been doing quite fine with most people losing that skill. We don’t make our tools either, and every day we rely on an intricate network of services and specialists for our very survival.
Even during our lives, many of us have willingly lost some skills – maybe neat joined-up handwriting; maybe we are no longer good at driving manual transmission cars; or maybe we can’t navigate well anymore because we always rely on GPS.
The problem is we tend to cede control to automation prematurely; when it works ‘well enough’ most of the time, we rapidly become complacent and let it take care of things before it’s quite ready to take over.
This is why we hear of people following their GPS directions into the ditch, or worse.
Good enough is not good enough
The problem with automation is that when it seems good enough, it’s not.
The situation becomes dangerous when automation performs well enough, or even better than we would perform under normal circumstances — but fails under exceptional circumstances or other rare cases. Outliers might be rare individually, but collectively they are anything but.
That is precisely the situation where we find ourselves today in many domains; we are losing skills, and have lost skills, because we are prematurely trusting automation.
Worse, it is creating a feedback loop: as human skills degrade, the natural tendency is to push for more automation to eliminate the now ‘unreliable’ human element – let’s call it an automation tipping point, or a point of no return.
An important point is that this does not need to happen to everyone.
We will still have individuals who pride themselves on their ‘manual’ skills, but it is the average that matters.
When those exceptionally skilled individuals become fewer and fewer, the chances of having one at hand when we need one decline – and that increases systemic risk.
The aviation industry can again offer us lessons on what to do – and what not to do – when it comes to automation.
What’s happening in aviation
Increased automation can reduce pilot workload and has played a role in making flying safer, but combined with poor company policies it has also contributed to pilots’ deteriorating hand-flying skills. In the case of many younger pilots, especially in certain geographies, those skills may never be fully developed in the first place.
This works fine until something goes wrong with the automation – and stuff inevitably sometimes goes wrong with automation, with “what’s it doing now?” being one of the more common utterances in a cockpit – and the pilot is suddenly called upon to utilise skills he or she may no longer have.
There is ample research that shows this is a clear, present, and growing risk – and one that has even been recognized by the regulators, with the FAA now requiring upset prevention and recovery training and some sensible airlines encouraging, or at least allowing, more hand-flying.
When automation goes south, you are in better hands if you have an experienced pilot sitting up front, one who has decades of hand-flying experience and is comfortable using those skills.
Things can and have gone poorly when the skills one is supposed to fall back on do not exist.
Going, going, … gone?
The disappearance of these skills is tied to a fundamentally existential challenge; people grow old and retire, and if those in-depth hand-flying skills are not broadly and consistently transferred to the next generation, that deep skill base will disappear, leaving the industry much impoverished.
How do we then regain the skills?
Do we even try?
Can we even try?
Or do we double down on the automation path, hoping that by improving the machines, we won’t need those skills to begin with?
The aviation industry is doing a bit of both, and the jury is still out on whether automation will hit an irreversible tipping point.
On one hand, there is an increased, if uneven, realization that hand-flying skills are important.
Yet, on the other hand, much of the industry is doubling down on the automation path; single-pilot operations are actively planned (necessitating more automation), and numerous start-ups are testing fully autonomous planes of various sizes.
The eVTOL industry expects their fleets to be fully autonomous – they say eventually, but really they want it ASAP. For them, this line of thinking is out of business plan necessity, for their plans wouldn’t easily scale with pilots.
In some circles, there even seems to be a – mistaken, I might add – belief that moving to fully autonomous planes is somehow inevitable. It’s not, but that’s another story.
Meanwhile on the ground
On the ground, we are firmly ignoring the lessons from above.
We’re happily rolling out vehicles with Level 2½-to-3-ish automation and broadly don’t seem to mind people over-relying on it. Somewhere around 2½ is where your Autopilot-equipped Tesla sits – the marketing would have you believe it’s Level 3 or even Level 4, but if you read the fine print, the reality is different.
As a reminder, Level 3 automation is Conditional Automation, where a “Driver is a necessity, but is not required to monitor the environment. The driver must be ready to take control of the vehicle at all times with notice.”
This is hardly a novel argument, but I hereby join the group of people who have called for skipping Level 3 automation. Untrained humans are not good at sustained active monitoring, and no human is good at suddenly jumping in and taking over control in abnormal situations.
When – or rather, if – we move to Level 4 and to the full automation of Level 5, the human driver becomes increasingly superfluous; at Level 5, there is no expectation of needing the human under any conditions.
If we can reach Level 5 automation that is safer than humans under all circumstances, then we can say mission accomplished – we can safely transition away from humans. But even then we should be having the discussion of whether we really want to do that.
But that would mean getting to Level 4/5 directly from Level 2, and we are far from being able to do that.
What are the options?
This pattern of increasing automation and losing skills is by no means limited to aviation or road transport – we are experiencing it almost everywhere.
Ironically, the worsening ransomware attacks provide a good signal of just how much we rely on computers; take them away, and most organizations are crippled.
Where and how do we go from here though?
As foreshadowed above, we have two philosophically very different strategies forward. We can opt to:
- Rely on automation: do our best to push automation to become more competent ASAP, so that it would perform better than humans under all circumstances, or
- Rely on humans: do our best to maintain human skills, an approach that may need significant re-skilling or re-training or even initial training where necessary, and change policies to encourage retaining those skills.
I don’t think either option is very good.
The first approach has a number of problems, not least of which is the fact we don’t know if we can build automation that is better than humans under all circumstances, let alone do so quickly enough.
Going all-in on automation is also the least viable strategy because technologies will fail; it’s not a matter of if, it’s a matter of when. Adding safety systems does not fundamentally change this, for the safety systems can and do fail as well.
From a systems perspective, to rely on something we know is going to fail without having a Plan B is unacceptable.
And as a cherry on top of that failed cake are legal constructs, which still generally call for people to be accountable – people who may have been relegated to an exceedingly minor operational role in the overall system, but who are still being held accountable for all the outcomes. That situation is called using humans as the moral crumple zone, an exceptionally useful concept that everyone working on or with automation needs to familiarize themselves with.
The option of purely relying on humans, on the other hand, places a certain upper limit on possible performance.
We know from human factors research that humans will make mistakes, and there are situations where automation improves matters – with vehicles, for example, we know automatic emergency braking (AEB) systems save lives. No amount of driver training would negate the benefit the systems bring.
Clearly, there is a place for automation as well.
Luckily, there is a third alternative, one that can be the best of both worlds. We can:
3. Focus on collaboration: shift our attention to a different kind of automation, one that supports and builds up human capability and expertise rather than seeking to supplant it.
It probably goes without saying that I support Option 3.
What does Option 3 look like?
It looks like the head-up display (HUD) in aircraft. Previously, when landing in particularly bad visibility, a CAT III Autoland was the only possibility – as the name implies, Autoland systems land, well, automatically. While they require vigilant pilot monitoring, they don’t require – or indeed allow – hand-flying.
HUDs change that; they enhance situational awareness and allow pilots to hand-fly CAT III approaches.
HUDs are a prime example of advanced automation that allows humans to use and improve their skills instead of merely monitoring systems.
That’s the kind of automation we need: not automation that leaves humans falsely ‘in the loop’ merely managing exceptions, but automation that engages them in a way that allows them to retain agency, expertise, accountability, and responsibility for operations.
There are many principles we could draw from all this, but I think the three below are the most important:
- Don’t automate to eliminate human effort; automate to enhance the human effort.
- Ban Level 3 automation of vehicles and the equivalent level in all other systems.
- Be mindful of the moral crumple zone, where the human is held accountable for something they are no longer responsible for.