You may or may not know the concept of the moral crumple zone. It’s a term coined by Madeleine Clare Elish, PhD, some years ago, and it refers to a situation in which a human is held morally or legally responsible for the actions of a system, even though they have limited or no control over it.
In other words, it’s a situation where a human is being held accountable for actions they are not responsible for. In that capacity, they act as the moral crumple zone, absorbing the blame when something goes wrong, in the process protecting the system.
Moral crumple zones are rife today, and with thoughtless implementations of automation, we are creating more of them.
This is, of course, a deeply unfair situation. It rarely happens by design. Few people actually want to implement a moral crumple zone, but they can and do creep up on us in all kinds of places.
The Unintended Consequences of Efficiency
While moral crumple zones are rarely implemented intentionally, they often emerge as unintended consequences of pursuing efficiency and productivity gains. As organizations strive to optimize their processes and leverage new technologies, they may inadvertently create situations where individuals become disproportionately responsible for outcomes they cannot fully control.
This “smuggling in” of moral crumple zones occurs when new systems or technologies are introduced to improve efficiency. The full implications of these changes are rarely thoroughly considered, and accountability structures fail to adapt to the new reality of human-machine collaborations.
This is how things can happen, and do happen:
A company implements generative AI tools to enhance productivity in content creation, data analysis, or customer service. To address concerns about GenAI’s fallibility and hallucinations, the company insists on having a “human in the loop” and declares that employees are responsible for all AI-generated content.
As the AI tools demonstrate their ability to significantly increase output, productivity expectations rise accordingly.
Employees soon find themselves with insufficient time to thoroughly check all AI-generated content, despite being held accountable for it. And if we’re honest, they don’t really want to check it either; we know people will over-rely on pretty good automation to the point that their own skills start to degrade.
Given these natural tendencies, and under pressure to meet the new productivity expectations, workers begin to lean more and more heavily on the AI systems without adequate verification.
The result is an accountability mismatch: when errors do occur, as they inevitably will, employees are held responsible in line with the policy, even though they lacked the time and resources to prevent them. The on-the-ground reality never allowed them to do the necessary checking, a fact the formal rules simply ignore.
And so the human workers have become accidental moral crumple zones.
What can we do?
Unsurprisingly, this is not an altogether simple thing to resolve.
First, there needs to be a recognition of the potential for moral crumple zones when implementing new systems or technologies. This awareness should be coupled with realistic expectations that not just allow for but ensure meaningful human oversight and intervention, even if that eats into some of those productivity benefits.
Organizations must also modify their accountability frameworks so that responsibility is distributed more fairly among human operators, system designers, and the organizations themselves.
This approach should be complemented by continuous evaluation of the impact of new technologies on human roles, with adjustments made as necessary. Notably, I’m not talking about tracking the expected benefits here, but about all the impacts on roles and people.
Finally, there should be a prioritization of ethical design in the deployment of AI systems. This does not, by the way, just mean adopting an ‘ethical AI framework’.
By keeping in mind that we might unintentionally be introducing moral crumple zones, we can work towards creating more just and equitable systems.
This doesn’t mean giving up on AI either; we can, and should, use it to enhance human capabilities, but do so without unfairly burdening individuals with disproportionate moral and legal responsibilities.
The goal should be to ensure that the humans in our systems are respected and protected, rather than turned into unintended moral crumple zones.