Right now, there’s plenty of excitement and concern alike about the rapid progress of AI.

We went through a period of anxiety a decade ago, with the first wave of AI job-apocalypse predictions [1]. Then it waned, and much of the world now has more trouble finding employees than it has problems with mass unemployment.

This year, particularly with the rapid-fire AI capability launches by OpenAI, the anxiety is back with a vengeance.

I sympathize with this sentiment, shared on Twitter a couple of days ago:

It’s hard even for people who spend a considerable amount of time keeping on top of things to keep on top of things now. It feels like a good example of the Future Shock that Alvin Toffler wrote about half a century ago [5].

In all our shock and awe, we’re missing a few key things, especially in the context of the LLM (Large Language Model) style of AI that is now getting all the attention. To understand their significance, we need to ask a few questions:

Could you explicitly describe every step and every nuance of your job?

If you struggle with that, you’re far from alone.

Much of what makes up people’s skills is tacit knowledge, and even things like muscle memory.

You can, over time, teach some of it, and learn the rest on the job, but you can’t really describe it.   

Which brings us to the first issue with AI: while there are systems that can do imitation learning, LLMs are not among them. LLMs need to be trained on data: massive amounts of explicit knowledge, although calling it ‘knowledge’ is generous, given that we know much of what they’ve been trained on is… well, not knowledge.

For humans, that tacit knowledge is invaluable for doing a job safely, effectively and efficiently, but it’s useless unless you also understand the context and have some reasoning ability.

The next problems with LLMs are that they have neither understanding nor causal reasoning ability. 

So, you have a model that has undergone probabilistic learning and operates in a theory-free manner. 

That can be fine for some tasks, but very rarely for entire jobs.
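
To make that a little more concrete, here is a deliberately toy sketch in Python of what ‘probabilistic, theory-free’ operation looks like. The bigram table and the generate function are hypothetical illustrations, not how any real LLM is implemented, but the core loop is the same idea: sample the next token from a learned distribution over what tends to follow what, with no model of why.

```python
import random

# Hypothetical 'learned' statistics: token -> {possible next token: probability}.
# In a real LLM this role is played by billions of parameters, not a lookup table.
BIGRAM_PROBS = {
    "the": {"valve": 0.4, "report": 0.35, "procedure": 0.25},
    "valve": {"is": 0.7, "was": 0.3},
    "is": {"open": 0.5, "closed": 0.5},
}

def generate(start: str, steps: int = 3) -> str:
    """Generate text by repeatedly sampling the next token from the learned
    distribution. Note what is missing: there is no model of the world and no
    notion of whether the valve *should* be open -- only co-occurrence statistics."""
    tokens = [start]
    for _ in range(steps):
        dist = BIGRAM_PROBS.get(tokens[-1])
        if not dist:
            break
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the valve is closed"
```

Scaled up enormously, that sampling loop is what gives LLMs their fluency; it is also why they remain probabilistic and theory-free in the sense described above.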

Even if you could describe your job, is that how it’s always done?

Work-as-imagined is how tasks, or entire jobs, are documented. Many professions, especially safety-critical ones, have reams of written guidance on everything workers need to do to complete their jobs effectively and safely.

Except the way the documents describe the work is rarely, if ever, how it’s done.

If you follow every rule, guide, policy and requirement to the letter, you’re likely to be a highly ineffective and possibly even dangerous actor.

Why? Because rules are inflexible and often written in isolation, with no attention paid to unintended consequences, while the world in which we operate is non-deterministic: complex, connected, and unpredictable.

From a limited perspective, it might appear that it’s the people who are the problem. As Kurt Vonnegut’s character in Player Piano said:

“If it weren’t for the people, the god-damn people, always getting tangled up in the machinery. If it weren’t for them, the world would be an engineer’s paradise.”

In reality it’s the people who make things work. As Sidney Dekker pointed out in his book The Safety Anarchist [2]:

Actual work process in any air traffic control center, or tower, or office, on construction site, or factory cannot be explained by the rules that govern it – however many of those rules we write. Work gets done because of people’s effective informal understandings, their interpretations, their innovations, and improvisations outside those rules.

This unpredictability of the world is why we still don’t have fully autonomous cars.

The effective, innate human capability to manage that unpredictability in a highly flexible way is why fully automating something as large as an entire job is incredibly difficult.

If you ever doubt the size of the gap between work-as-imagined and work-as-done, consider the concept of malicious compliance [3], or work-to-rule strikes. In most, if not all, industries, following the rules to the letter will essentially grind the system to a halt.

The smaller the gap between work-as-imagined and work-as-done, the more resilient the system is likely to be. Importantly, though, this does not mean work-as-done should somehow align more closely with the horrible documentation, but the reverse: the documentation should be brought closer to how the work is actually done.

Keeping AI at bay

What do we learn from this? Quoting Dekker again:

Only people can keep together the patchwork of imperfect technologies, production pressures, goal conflicts and resource constraints. Rules and procedures never can, and never will. Nor will tighter supervision or management of our work.

The context here was not AI, but it might as well have been.

As long as AI cannot be taught tacit knowledge the way humans can; as long as it doesn’t understand the system it’s working in; and as long as it can’t do causal reasoning, most jobs should be safe.

But there are some caveats:

  • Most, because there are exceptions.
  • Jobs, because many tasks are about to undergo a significant change, and that can have equally significant consequences downstream.
  • And should, because, as we’ve seen, just because something is not a good idea doesn’t mean nobody will do it.

As with most things, the situation warrants neither panic nor complacency.

It warrants mindfulness, situational awareness and some systems thinking.

References:

[1] Frey, Carl Benedikt and Osborne, Michael: The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford Martin School, 2013.
[2] Dekker, Sidney: The Safety Anarchist. Routledge, 2018.
[3] Wikipedia: Malicious compliance.
[4] Dekker, Sidney: Compliance Capitalism. Routledge, 2022.
[5] Toffler, Alvin: Future Shock. Random House, 1970.
