Most leaders are trying to gauge the difficulty of AI adoption before they begin. But difficulty is relative, and when the thing in front of you has no precedent, the measurement becomes the obstacle. The better move is to start before you're ready.
In a recent interview on the Lex Fridman podcast, NVIDIA CEO Jensen Huang talked about naïveté as a superpower. His premise is simple: if you knew how hard it was going to be to build a company before you started, you probably wouldn't. So the best approach to building a company, or to any daunting challenge for that matter, is to ask yourself, "How hard can it be?"
This resonates with me at a core level. When I left the corporate world to start a consultancy fifteen years ago, I thought to myself, "How hard can it be?" When I decided to go back to grad school and write my first book, I asked the same question. And then a few years ago, when I signed up for a 50-mile ultramarathon, I literally thought to myself, "How hard can it be?" (Very hard, it turns out.)
Huang's version of this is essentially "ignorance is bliss," and there's real utility in that. But I think there's something else to consider as well. Because the problem isn't just that difficulty scares people off. It's that we try to measure difficulty before we begin, and that measurement is almost always based on a comparison to something else. Something we've done before. Something we've watched someone else struggle with. Something we've read about. We build a model of how hard this new thing will be based on how hard other things have been, and then we treat that model as if it's real.
But it's not. Difficulty is relative; it shifts depending on what you're comparing it to. Everything is hard or everything is easy, depending on the reference point you choose. And when the thing in front of you is genuinely new, when there is no valid comparison, the entire exercise of trying to gauge difficulty in advance becomes the obstacle itself.
The Measurement Trap
This is exactly where most leaders find themselves with AI right now. They know it matters and they know they need to move. But instead of moving, they're trying to figure out how hard it's going to be first. They're running readiness assessments. They're building committee structures. They're waiting for the right moment, the right vendor, the right model, the right level of certainty before they begin. They are, in other words, trying to measure the difficulty of something none of us have done before.
And that measurement makes things harder than they need to be. The belief that you need to understand the scope of the challenge before you're allowed to start is fundamentally flawed. You don't. You never did. No one who has built anything meaningful sat down first and accurately forecast how hard it would be. They simply started, and figured it out as they went.
I think about this often, both as someone who advises organizations on AI strategy and as a founder building an AI product from scratch. The temptation to try to see the whole path before you take the first step is constant. It feels responsible, even like leadership. But it's usually just a more sophisticated form of hesitation.
The leaders I work with who are making real progress with AI share a common trait. They are not fearless, and they are not naïve. They've simply decided that the cost of waiting to understand how hard it will be is higher than the cost of starting before they're ready. They have chosen to inhabit the discomfort of not knowing rather than the false comfort of planning indefinitely.
We are in a reality none of us have been in before. There is no precedent to compare against, no reliable model for how hard this will be, and no amount of preparation that will make the uncertainty go away. So push forward however you need to, and ask all the questions you must, but make sure at least one of them is this: How hard can it be?
