This article was originally published on Kendra Vant’s substack Data Runs Deep at https://kendravant.substack.com/p/sami-makelainen-on-data-and-ai-in on December 12th, 2024. Kendra is a friend and a thoughtful advocate for all things data and AI; she’s running a mini-interview series and I had the honor of being the first cab off the rank in that series, so I thought I’d post the thoughts here too.
Sami, 2024 has been another busy year for data & AI. What’s one development / milestone / news story that really caught your eye?
There hasn’t been one single story that stood out, but the ongoing polarization in discussions about GenAI has been both fascinating and frustrating. On one side, there’s the “OMG, this is terrible” camp that views GenAI as inherently abhorrent, a terrible intellectual property rights violation, or even an existential threat; on the other, the e/acc crowd with their “we’ll soon have AGI that will solve everything” views. Both extremes, I feel, oversimplify reality, and both only seem to become more entrenched as time passes. This hinders nuanced discussion and prevents us from addressing the real, complex challenges and opportunities GenAI brings.
You’ve been working in and around data & AI for a while now. Many things have changed! But tell us about something that was true when you started out in this space and is still important today.
The adage “I think you’ll find it’s a bit more complicated than that” remains a constant truth. Simplifying complex matters for effective communication is important, but what I often see instead is over-simplification, which is dangerous and something I’m highly allergic to. Whether it’s understanding data, deploying AI, or making decisions, nuance is critical, and there’s still a lot of work to do in fostering that mindset across all industries. This is even more true in fields like AI Ethics, a space of incredible complexity with multiple possible approaches.
It’s been a heady couple of years with 2024 almost as frothy as 2023. What’s one common misconception about AI that you wish would go away?
The idea that AI is “coming for your jobs” or “doing” anything autonomously is a misconception I wish would disappear. AI is a tool — for the time being, it lacks agency or autonomy. Every impact, whether good or bad, results from human decisions and actions using these tools. Framing AI as having agency not only misrepresents its capabilities but also allows organizations to distance themselves from the consequences of their decisions, as if the technology were to blame. One could go so far as to say some of this commentary borders on unethical and is intellectually dishonest, using a nonexistent feature of the technology as an accountability sink.
The festive season is almost upon us, so many readers will have a bit of extra time to read / learn / reflect. Who do you follow to stay up to date with what’s changing in the world of data & AI?
I’d like to point out there’s no way to stay on top of everything, and trying to do that will only drive you mad. However, Ethan Mollick’s One Useful Thing is a great resource for pragmatic insights. And if you have eight or so hours a week to read, Zvi Mowshowitz’s Don’t Worry About the Vase newsletter is well worth it (and a great example of the superhuman productivity of some people). Both are excellent resources for cutting through some of the noise.
Leaning into your dystopian side for a moment, what’s your biggest fear for/with/from AI in 2025?
That the temptation to deploy generative AI systems in mission-critical applications without adequate safeguards will prove too great for some organizations to resist. Rushing into these deployments without robust procedures and processes in place to ensure safety could lead to catastrophic outcomes — both for the companies involved and for society at large.
I wrote the above paragraph choosing my words very carefully, and you might be alarmed that I don’t condemn nondeterministic systems outright as unfit for mission-critical applications. To be clear, I in no way advocate their use in such situations today, but I reserve the right to change my mind. After all, we humans are nondeterministic systems too, prone to a whole range of failure modes; in high-reliability organizations, we have built the procedures and processes with that in mind, which is why we are able to operate some (very few, but some) tightly coupled and complex systems relatively safely and reliably. My specific concern is that organizations seeking to deploy AI in such use cases will do away with that safety scaffolding, and in the process lose that ability.
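To make the “procedures and processes” point slightly more concrete, here is a toy sketch in Python of the kind of scaffolding I have in mind. The function names (call_model, passes_checks, escalate_to_human) are hypothetical stand-ins, not any real API: the idea is simply that deterministic validation wraps the nondeterministic step, retries are bounded, and the scaffolding, never the model, decides what happens when things go wrong.

```python
import random

# Hypothetical stand-ins, for illustration only; not a real API.
def call_model(request: str) -> str:
    """The nondeterministic step, e.g. a GenAI call (stubbed with randomness)."""
    return request.upper() if random.random() > 0.3 else ""

def passes_checks(candidate: str) -> bool:
    """Deterministic validation: schema, bounds, business rules."""
    return bool(candidate)

def escalate_to_human(request: str) -> str:
    """Guaranteed fallback path when validation keeps failing."""
    return f"ESCALATED FOR REVIEW: {request}"

MAX_ATTEMPTS = 3

def guarded_action(request: str) -> str:
    # Treat the model like a fallible operator: bounded retries,
    # deterministic checks, and escalation decided by the scaffolding,
    # never by the model itself.
    for _ in range(MAX_ATTEMPTS):
        candidate = call_model(request)
        if passes_checks(candidate):
            return candidate
    return escalate_to_human(request)

print(guarded_action("ship order 42"))
```

The sketch is deliberately trivial; the real work in high-reliability settings is in the checks and the escalation paths, not in the wrapper.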
And finally, channeling your inner optimist, what’s one thing you hope to see for/with/from AI in 2025?
A slowdown in capability improvements — or even a pause — on the GenAI front would likely be a good thing. The societal changes we’re witnessing, even setting AI aside, are outpacing our ability to absorb and manage them. Even if all capability development stopped today, we’d be busy for the next 15 years integrating what we already have into our organizations and societies and adapting to it. A slower pace would give us the chance to build the support structures needed to navigate these transitions effectively.
That is, admittedly, unrealistic. On the slightly more realistic front, I really wish we could stop conflating AI and GenAI. They are very different beasts, and we should not attempt to tackle both with one approach; this extends to team capabilities and governance structures alike. Do not mix the two.