
AI in 2026: A Few Predictions Among A Lot of Misconceptions


Practically everyone is wrong about how fast enterprise AI adoption will happen. Not because AI is unimpressive or incapable (it’s not), and not because it won’t matter (it will), but because large organizations move slowly. They always have.



AI adoption feels fast, but it isn’t

For three years, I have said that enterprise adoption of AI will be slower than expected, and so far, the evidence supports it. Sure, AI adoption inside large companies is happening faster than previous tech waves, but nowhere near as fast as people predicted in 2022. In fact, anyone who claimed that this would be an overnight transformation ignored how culture, incentives, and large organizations actually work.


As the saying goes, “when you’re a hammer, every problem looks like a nail.” For technologists, this means assuming that all enterprise problems can be solved with more technology. While that may often be true, in my experience, almost any enterprise problem is roughly eighty percent cultural and twenty percent technical. In practice, the cultural cost of changing how people work is far higher than the SaaS bill. With AI, even when we take into account chip production, computing power, and energy supply, the bottleneck at the very end of that journey is still just people.


That’s why I don’t spend much time thinking about short-term artificial general intelligence (AGI) timelines. Maybe it will happen. Maybe it won’t. We don’t even agree on what AGI actually means. There’s a non-zero chance that whatever we would have called AGI has already passed, and we simply moved the goalposts, much like we did when we first cleared the hurdle of the Turing test.


Here’s what’s actually important: even if the models never get better than what we have today (which they almost certainly will), we still have a decade of runway to figure out the meaningful applications of that technology. But the models are getting better, and we are discovering new use cases every day. We will for a long time.


That tells me that in the short and medium term, the limitation is not AI intelligence, but organizational imagination, incentives, and willingness to experiment.


Cultural cost is the real cost

To be clear, large organizations are slow-moving for a reason—mainly because they have a tremendous amount of value to protect. In a ten-person company, your ability to move quickly is proportional to how little there is to break. In a global enterprise, every change carries risk, which makes leaders careful but not incompetent.


With that in mind, one of the most under-discussed aspects of AI adoption is the role of cultural capital. Leaders can only ask people to change how they work so many times before they call it quits. Every systemic shift spends some of that capital, so the very real question for leaders a few years back became whether AI is worth spending it on, and how to do so deliberately.


That's the crux of the matter: Any new technology lives on practical exploration, but in large organizations, that process is bureaucratized. The time-honored tactic of “fuck around and find out” becomes “test and learn,” a center of excellence buried under layers of approval, distancing operators from the problems affecting them.


Yet one of the things that makes AI unique is that it lends itself incredibly well to small, low-risk experimentation. You don’t need to replatform your business. In fact, at the operator level, adoption is practically free and frictionless. You can start with individuals, with workflows, or with curiosity; many organizations simply don’t have that muscle.


So, understandably, AI adoption feels paradoxical: On the consumer side, usage has exploded and people are constantly experimenting, while within enterprises, especially at the senior level, there’s a surprising amount of disengagement. Many executives discuss AI on stage but spend very little time actually using it day-to-day.


What AI tells us about how we work

In its way, AI exposes what’s wrong with how we work. Its accessible nature highlights the potentially unnecessary layers of operation within an enterprise, and so does its “fuzzy” interface. When a machine can write an email, read it, and draft a report or summary about it, it forces this question: Why did you even need that email in the first place?


In a large organization, problem-solving tends to drift away from the problem itself; often, the first response to a challenge is to hire someone new to look at it. Layers get added to manage risk and complexity, but those layers create distance, with information passing through so many hands that it just becomes a game of telephone. Consultancies and agencies are no strangers to this phenomenon.


So, at its best, AI is both an opportunity to bring people closer to actual problem-solving and a tool to make it happen.


Looking toward 2026

If I had to prime you for 2026, I wouldn’t tell you to chase every new model release. I’d tell you to focus on culture, incentives, and proximity to real problems, especially if you operate in a large enterprise. 


For many, there’s a stigma around AI that implies using it is a way of cheating; try to remove that stigma (we have bracelets to remind employees to ask AI first). For others, there is serious job anxiety; a good dose of honesty will help here. There’s a very useful middle ground between alarmism, false reassurance, and silence, and it’ll benefit those who learn to stop treating uncertainty as a weakness.


Meanwhile, time will quietly punish those who confuse talking about AI with actually using it, so get started on the latter, even if only to build familiarity.


And with that, let’s see where this year takes us.
