Ah, humanity. Give us any shiny new thing, and we’ll find a way to turn it into a battlefield.
The “us versus them” framing comes to us very naturally, and we now have solid psychological research explaining the phenomenon.
It’s a highly damaging trait, and ideally that knowledge would help us steer away from it – not invent new divisions.
But no.
Instead, we’re letting ourselves be divided by our political standing; our geopolitical views; our stance on social and cultural issues; our religious beliefs or lack thereof; our preferred news sources and media outlets; our socioeconomic status; our pronouns; and so on.
And now we’re even divided by how we think about Generative AI.
It’s childish, immature, and disheartening.
The camps forming within GenAI reflect a broader societal issue – an unwillingness, or sometimes inability, to consider differing viewpoints.
We already have so many distinct groups forming in GenAI.
At one extreme end is the e/acc (effective accelerationism) crowd, who think we need to go towards AGI (Artificial General Intelligence, i.e. human-level or greater machine intelligence) as fast as we possibly can. These individuals emphasize the benefits of GenAI, foreseeing rapid advancements and economic growth driven by AI technologies.
Their enthusiasm is palpable, and I get it. I understand them. I don’t agree with them, but I understand them.
Then we have figures like Sam Altman, who to many comes across as exactly that extreme: someone pushing AI progress forward recklessly, with little regard for safety or unintended consequences.
His own view, of course, is that he is shipping thoughtful products early and often, and that this is a highly responsible approach because it lets society gradually adapt to new capabilities.
I understand Sam, too. I don’t agree that society is able to adapt at nearly the speed he thinks it is, but I understand his reasoning and where he’s coming from.
Then we have many, many more approaches on the more cautious side of things.
We had the infamous “pause” letter from The Future of Life Institute – I understand that view, too.
I signed the letter – not because I agreed with all of it or thought there was ever any chance of a pause happening, but because I wanted us to talk about those issues more.
I understand the competitive pressures companies are under – pressures that guaranteed nothing like a pause was ever going to happen.
I understand the intentions behind a number of different AI Ethics approaches, the people who feel strongly about them, and why they feel strongly about them.
I understand why people would call many of them little more than ethicswashing.
I understand the people who are worried about intellectual property and consider all GenAI models a form of theft.
I understand the people who have existential anxiety about machines taking over our very human tasks, roles, and jobs.
I understand the people who are anxious or angry about the environmental costs of GenAI; the energy use, the water use, the emissions of all the data centres needed to support the blooming development and use of GenAI models.
I understand the people who reject the very notion of humans ever creating any kind of machine “intelligence”.
I understand the people who are scared about this all or consider it an existential risk.
I understand the people who think everyone is going about this very irresponsibly.
And I understand the people who are beyond excited about the current capabilities and the promise of even greater future capabilities.
I don’t fully agree with any of them, but I understand them. That’s my superpower.
I also understand the very human desire to form those us-vs-them camps.
So, I understand the people who shout from the rooftops – or our social media equivalents – that they hate how someone is doing something; how organization Y is being evil; how company Z is the best; how approach K will never work; and so on. You’ve all seen it.
That doesn’t mean I’m not saddened, frustrated, and disappointed by those views.
The most disappointing of all? That people don’t even try to better understand the issues they have such strong opinions about.
Few people are meeting halfway and talking things out.
This division is not just an academic or technological issue – it has profound implications for society. We cannot afford to let another crucial topic devolve into a shouting match. Instead of turning our diversity of views into a Quentin Tarantino film, let’s transform them into a beautiful tree of approaches.
The stakes are too high.
Reminders from psychology 101:
- Understanding does not mean agreeing.
- You can hold two or more seemingly mutually exclusive thoughts in your mind.
- Ideas can argue without people arguing. You can even hate an idea while loving the person holding the idea.
- If everyone around you agrees with you, it’s not a sign that you’re right. It’s a sign that you need more and different people around you.
- You can change your mind. It’s not a sign of weakness; it’s strength.
Understanding is the foundation of thoughtful discourse and better outcomes.
When we refuse to consider other viewpoints, we hinder progress – any kind of progress. And sometimes progress takes a form that others would see as holding progress back.
The goal should not be to “win” the argument but to achieve results that benefit humanity. And no matter what camp you’re in, I’m sorry, but you do not have the only truth about how to benefit humanity.
You may have a view, and that’s great, but you need to learn to understand other views.
This involves honestly engaging with other viewpoints, seeking to understand the reasoning and concerns behind them.
We must create platforms for open and respectful discussions, encouraging collaboration between different camps; the more those can be face-to-face, the better.
Do we want to fight, or do we want results?
I vote for results.