
AI Strategy

Tim Hillegonds

Meta-Prompting and the Power of Context

Voice prompting increases the richness of what you give the model by default, and meta-prompting turns that richness into a structured brief the next chat can execute. Together, they shift prompting from a “better questions” game to a repeatable method for designing the conditions for better thinking.

A quote that I often reference in AI strategy workshops is by Dr. Lance B. Eliot, a Stanford Fellow and world-renowned AI expert. He says, “There will be generative AI in nearly all applications that you use or that you are reliant upon. The more that you can think like the machine, the greater the chances you have of successfully contending with the machine.”

What he's getting at is that if you understand some of the fundamental ways that AI systems work, or at least some of the fundamental ways of working with AI systems, you can be much more effective in what you're trying to do.

Take prompting, for instance. We’ve been trained—largely by search engines—to believe that to get the information we need, we must type a concise question and wait for a result. However, when it comes to large language models, even though ChatGPT’s input box literally says “Ask anything,” that search-bar mindset is exactly the wrong way to think about it.

Large language models don’t retrieve answers so much as they operate inside a context. And the quality of what they produce is constrained—sometimes severely—by how much context they’re given to work within. Most people dramatically under-prepare that context.

The "Star Intern" Analogy

Imagine a highly capable intern joining your organization tomorrow. They’re curious, fast, well-educated, and capable of doing anything you task them with. You wouldn't sit them down in your office, give them a single sentence of instruction on a Post-It note, and then expect them to do any sort of meaningful work.

Instead, you’d explain the business, the culture, the customers, the constraints. You’d talk about what success looks like, what matters, and what doesn’t matter at all. You’d orient them to the environment in which they’re expected to work and think, and you’d give them as much context as you possibly could. Then, you’d continue providing it as their work evolved.

Essentially, your LLM of choice is your AI intern. And your prompt is how you communicate with them. This means that, just like the intern in my example, your prompt shouldn't be a sentence or two. It really shouldn't even be a paragraph. It should be a lengthy and complete "onboarding" of information.

A good rule of thumb is that if your prompt isn’t at least 200 words, it’s likely not enough context to move the needle the way you're hoping it will.
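That rule of thumb is easy to automate. Here is a minimal sketch of a pre-send sanity check, assuming words are simply whitespace-separated tokens (a crude proxy for context, but enough to flag a search-bar-style one-liner):

```python
def is_substantial(prompt: str, minimum_words: int = 200) -> bool:
    """Rough heuristic: does this prompt carry enough context?

    Counts whitespace-separated words. Word count is a stand-in for
    context, not a guarantee of it, but it reliably catches the
    one-sentence prompts this article warns against.
    """
    return len(prompt.split()) >= minimum_words


# A terse, search-engine-style prompt fails the check:
is_substantial("Write me a marketing plan.")  # False
```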

Voice Prompting and Meta-Prompting

Two techniques have become increasingly important as models get better: voice prompting and meta-prompting.

When you move away from the constraint of the keyboard and start prompting by voice, your prompts inevitably get longer and richer without you trying. You speak in full sentences. You add qualifiers. You correct yourself. You pause, circle back, and refine, which is all much-needed context.

Output quality is often limited by what you failed to include, so voice prompting becomes a practical and reliable way to give the model a richer slice of your intent than typing typically allows. (Note: I haven't moved away from typing completely, but for most prompts, and even for some writing, voice is the way to go. Check out Wispr Flow.)

Prompts for Prompts

The second technique is meta-prompting.

Meta-prompting is often described as using one model to generate prompts for another. But for business use cases, it is more practical to think of it as using the model to help you build the context before you ask it to perform. It is a way of preventing the most common failure in all of AI work: asking for output before you’ve supplied the conditions that make good output possible.

It's worth repeating: you shouldn't ask an LLM for output before you've supplied the conditions that make good output possible.

In concrete terms, you don’t start by requesting the deliverable. You start by giving the model some general context, then asking the model to interview you so it can surface what you haven’t yet said—often the stakes, the constraints, the audience, the edge cases, and the operating assumptions. You answer those questions with as much specificity as you can muster so it understands the problem and the environment you're trying to solve it in. (Remember the intern?)

Next, when it has as much context as it needs, you ask the model to write the prompt you should have written in the first place—a prompt that includes all the context you just surfaced, structured in a way the next chat can actually use. You paste that prompt into a fresh thread, and you’re off and running right out of the gate.
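The workflow above is really a two-call protocol: an interview call, then a prompt-writing call. Here is a minimal sketch of that shape. The function names and prompt wording are my own illustration, not a vendor API; `complete` stands in for whatever sends a prompt to your LLM of choice and returns its reply, and `answer_fn` stands in for you answering the interview:

```python
def build_briefing_prompt(complete, initial_context, answer_fn):
    """Two-stage meta-prompting: interview first, then write the real prompt.

    complete:        callable(prompt: str) -> str, a thin wrapper around
                     your LLM provider's chat API (an assumption, not a
                     specific SDK).
    initial_context: the general context you'd give a new intern.
    answer_fn:       callable(questions: str) -> str, your answers to the
                     model's interview questions.
    """
    # Stage 1: ask the model to interview you instead of producing output.
    interview = complete(
        "Here is some general context about a task I need help with:\n"
        f"{initial_context}\n\n"
        "Before doing anything, ask me the questions you need answered to "
        "fully understand the stakes, constraints, audience, edge cases, "
        "and operating assumptions. List the questions only."
    )

    # You answer with as much specificity as you can muster.
    answers = answer_fn(interview)

    # Stage 2: ask the model to write the prompt you should have written
    # in the first place, bundling everything it just surfaced.
    return complete(
        "Using the context below, write a single, complete prompt that a "
        "fresh chat could execute without any other background.\n\n"
        f"Original context:\n{initial_context}\n\n"
        f"Interview questions:\n{interview}\n\n"
        f"My answers:\n{answers}"
    )
```

The return value is the prompt you paste into a fresh thread; keeping the two stages as separate calls is the point, since the second call can only be written once the first has surfaced what you left unsaid.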

The Strategic Shift

Once you start working this way, you’ll see that prompting is no longer about "asking questions." It is about giving a model the conditions it needs to do meaningful work. When you stop treating the prompt box like a search bar and start treating it like a briefing room, your experience becomes exponentially better.


Stay Ahead With Powerful Insights

Get exclusive insights, actionable strategies, and ideas delivered straight to your inbox.
