

Tim Hillegonds

It’s Time to Rethink How You Hire

AI has changed what capability looks like—but most hiring processes haven't caught up. Here's a practical framework for finding the people who can solve problems and build things in an era of intelligence on demand.

Earlier this week, I sat down with a colleague who's a writing professor at DePaul University, where I earned both my undergraduate and graduate degrees. She's more AI-forward than most people in her field—a rarity in academia these days—and she’s not interested in keeping AI away from her students. Instead, she’s interested in figuring out how to bring it in without losing the thing that makes writing worth teaching in the first place, by which I mean the hard, necessary work of thinking through an experience or a problem.

At one point in our conversation, she asked me this: If you were hiring someone today, what would you actually look for?

I've written about this before—about T-shaped individuals, the democratization of expertise, and the way AI is expanding what any one person can do. All of that still holds. But her question pushed me to think even more deeply about it.

My answer is that I'd throw out the traditional hiring rubric almost entirely—at least at first—and replace it with a two-part audition.

Part One: Solve a problem.

I'd hand a candidate a problem they don't already know how to solve and ask them to work through it using AI. When they came back a day or so later, I'd ask them to walk me through every choice they made along the way.

What I'm after isn't actually the solution to the problem; rather, it's the resourcefulness and critical thinking they use to get there. Can they break a problem down into something workable? Can they figure out which tools to use and why? Can they evaluate what the AI gives back and make a judgment call about whether it's actually good? And can they articulate their reasoning clearly enough that I can follow it?

That last part matters as much as—or more than—anything else. Explaining your process is a form of accountability. It tells me whether you actually understood what you were doing—or whether you just got lucky.

Part Two: Build something.

A little over a year ago, I attended a hackathon at MIT. In roughly 24 hours, a team of us identified a problem, designed a solution, and built a working product. It was one of the most clarifying experiences I've had in the last couple of years—not because of what we built, but because of how fast it happened and what it revealed about what becomes possible when you commit to figuring something out under pressure.

With platforms like Claude, Lovable, Replit, and a whole host of other AI-assisted tools, we're past the point where building requires technical training. Anyone willing to roll up their sleeves can produce a working prototype—a web app, a full marketing campaign, a functional tool—in a matter of hours. So I'd ask a candidate to build something. I don’t care what. But I'd be paying close attention to the choice.

The easy path might be a landing page with placeholder copy, or a basic competitive matrix pulled straight from a Google search—something that demonstrates familiarity with the tools but not much imagination about what they can do. The more ambitious path is a working app, or a campaign with real thinking (and visuals) behind it—something that requires them to push into territory they weren't sure they could navigate. The complexity of what they aim for tells me how they think about excellence—whether they're willing to take the shortcut when nobody's watching, and whether they believe that good enough is actually good enough.

There's also something that happens the first time you build something you genuinely didn't think you could build. A kind of confidence that's hard to manufacture any other way. (We sometimes call it the “holy sh#t moment” in AI.) I want to see if a candidate has found that feeling, or if they're willing to go looking for it.

Why everything else comes second.

I'm not saying the traditional hiring rubric doesn't matter at all. Domain expertise still matters. We're still in a world that requires a human in the loop—someone with enough taste, discernment, and judgment to look at an output and decide whether it's actually good. That's not going away—at least not yet.

But if a candidate can't pass the two-part audition, we never get there. AI isn't slowing down, and it’s certainly not going away. So the ability to use these tools—to access what I'd call intelligence on demand—is what's going to separate people going forward. If you can't solve a complex problem with AI, if you can't think critically about what comes back, if you can't put tools together quickly and with a high degree of fidelity in your output, then your domain expertise is sitting behind a wall you can't get over.

The audition is the filter. Everything else is secondary.

That's a pretty meaningful departure from how most organizations still hire. We screen for credentials, years of experience, narrow specializations. Those things aren't worthless, but they're no longer sufficient proxies for the capability that actually matters right now.

Stop hiring for what people already know. Start hiring for what they can figure out.
