Prompting is the Window to AI Interactions
Prompting is the most fundamental way for humans to interact with generative AI tools. For most of us, prompting is the primary means of engaging with AI, which means hundreds of millions of people do this every day. Yet very few know how to prompt well enough to get the output they are looking for.
Prompting is an art. It is your ability to tell an AI tool or system exactly what you're looking for. The higher the quality and relevance of your prompt, the better the output will align with your needs.
We blame the AI when the output is not to our liking. Yet I can guarantee you that, more often than not, the output can be improved simply by improving the input, i.e., the prompt.
Having worked with AI systems and built AI products for over two years now, I have identified two game-changers when it comes to prompting. And I don't want you to simply take my word for it, or against it. One of my key principles at Work in Beta is that I will not tell you things I have not done or tried myself.
Let me first share the learnings, and then let's look into the follow-on research that validates them.
So let's dive in.
Learning 1: This Prompt Structure Really Works!
I get the best output from AI when my prompts are structured this way:
Role: Who is the AI acting to be?
Task: What exactly do you need done?
Context: What does the AI need to know about your situation?
Format: What should the output look like?
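As a rough sketch, the four-part structure can be encoded as a small template. The function and field values below are my own illustration, not from any particular library:

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a prompt using the Role / Task / Context / Format structure."""
    return (
        f"Role: You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {output_format}"
    )

# Hypothetical example values for illustration only.
prompt = build_prompt(
    role="a senior market research analyst",
    task="Summarize the competitive landscape of India's snacks market.",
    context="The report is for an investor new to the category.",
    output_format="A one-page brief with headings and bullet points.",
)
print(prompt)
```

Keeping the four parts as named arguments makes it hard to forget the two pieces most people skip: the role and the format.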
Generally, I have seen that most of us give AI tasks to do, and sometimes we give it some context as well. However, most of us lose out big time when we do not give AI a role to play and a format for the output. I made this mistake for the longest time: I would write vanilla prompts expecting AI to give me results attuned to what I wanted, and it would take multiple prompts over time to get a satisfactory response. Interestingly, this is how I discovered the value of role and format.
And this is why, in this newsletter, giving AI a role and specifying the output format are called out as individual tips.
If you have one or more good examples of the desired output, especially for complex tasks, I have found that one-shot prompting (which includes one example of what the desired output looks like) or few-shot prompting (which includes more than one example) can produce even higher-quality output.
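A minimal few-shot sketch, where the example input/output pairs below are invented purely for illustration:

```python
# Invented example pairs showing the pattern we want the model to imitate.
examples = [
    ("Summarize: The meeting ran long.", "Meeting exceeded scheduled time."),
    ("Summarize: Sales rose 4% in Q2.", "Q2 sales up 4%."),
]

def few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Prepend worked input/output pairs so the model imitates their style."""
    shots = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    # End with a bare "Output:" so the model completes it in the same pattern.
    return f"{shots}\n\nInput: {new_input}\nOutput:"

p = few_shot_prompt(examples, "Summarize: The launch was delayed by a week.")
print(p)
```

The trailing bare `Output:` is the key detail: it invites the model to continue the pattern rather than explain it.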
Learning 2: Ask AI to Ask You Questions First
This is a very critical element that has been a meta-move and an absolute game changer for me.
All you need to do is add to your prompt that you want the AI tool to ask you up to five clarifying questions before it answers, so it can better understand what you want. A lot of the time there are details in our heads that we never think to write down. By asking AI to surface what information it needs before generating output, you reduce your cognitive load while improving the result. The AI knows which details would change its response - you just need to give it permission to ask.
This technique works especially well for complex tasks where you might not know all the variables. The AI will ask about things like target audience, constraints, tone, use case, and success criteria - all the things that would have made its first draft better.
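A tiny helper makes this a habit rather than something you remember occasionally. The wording below is my own phrasing of the technique:

```python
def with_clarifying_questions(prompt: str, max_questions: int = 5) -> str:
    """Append an instruction telling the model to interview you first."""
    return (
        f"{prompt}\n\nBefore you start, ask me up to {max_questions} "
        "clarifying questions that would change your output. "
        "Wait for my answers before producing the final result."
    )

q = with_clarifying_questions(
    "Write a cold outreach email for digital marketing services."
)
print(q)
```

The "wait for my answers" clause matters; without it, some models ask the questions and then answer them themselves.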
The Proof: How I Tested My Hypothesis
I didn't just theorize about better prompts. I tested them over many months. To showcase the difference between a run-of-the-mill generic prompt and a specific prompt built on my four-step framework, I ran two experiments.
The Experiment
I created two sets of very different real-world tasks that anyone can relate to.
Task 1: Cold outreach email for digital marketing services
Task 2: Market research report on India's snacks market
For each task, I wrote a ‘generic prompt’ (the way most people naturally write prompts) and a ‘specific prompt’ (following the framework above). And, then I generated output from ChatGPT based on both.
Here is what ChatGPT created for Task 2:
After that, I had three other models (Claude, Gemini, and Grok) independently evaluate both generated reports without knowing which prompt led to which report (to avoid any chance of bias).
These evaluations were based on Readability, Structure, Depth of Response, Quality of Report and Sources.
The Results
| Judge / Parameter | Claude | Gemini | Grok |
|---|---|---|---|
| Readability | Report 1 | Report 1 | Report 1 |
| Structure | Report 2 | Report 2 | Report 2 |
| Depth of Response | Report 2 | Report 2 | Report 2 |
| Quality of Report | Report 2 | Report 2 | Report 2 |
| Sources | Report 2 | Report 2 | Report 2 |
| OVERALL | Report 2 | Report 2 | Report 2 |
All three models independently judged the output from the specific prompt (Report 2) to be dramatically superior overall, with readability the only parameter where the generic prompt's output won.
The evaluator models consistently rated Report 2 outputs as fundamentally different in quality - the kind of difference between something you would delete (Report 1) and something you would actually use (Report 2).
Here's what the models said about the market research report comparison:
Claude's verdict:
Report 2 is superior for rigorous analysis... Fewer sources, but far higher rigor. Report 1 has 'more citations' but weaker verification discipline. Report 2 treats data limitations as first-class information.
Gemini’s verdict:
Report 2 is significantly better... It relies on financial filings rather than news articles. This reduces the 'game of telephone' error in statistics. It solves the 'Apples vs. Oranges' problem by defining the market taxonomy first.
Grok’s verdict:
Report 2 demonstrates substantially higher research integrity, systematic thinking, and practical utility for decision-making. Report 1 is more readable and confident, but that confidence masks methodological weaknesses.
All three models independently identified the same pattern: the good prompt produced outputs that showed intellectual honesty, proper source discipline, and operational utility.
Why Better Prompts Produce Better Results
The difference isn't magic. It's mechanical.
Role: Activates Specialized Knowledge
AI models are trained on everything from Reddit threads to academic papers. When you give AI a specific role, you're telling it which subset of that training data to prioritize. "You are a senior engineer debugging with minimal changes" activates different response patterns than "You are a brand copywriter for premium skincare."
Format: Prevents Rewrites
AI naturally wants to generate "complete" responses, which often means over-explaining, over-formatting, and over-hedging. When you specify format upfront, you short-circuit this tendency. You get usable structure on the first try instead of the fifth.
Clarifying Questions: Outsources the Hard Part
The hardest part of prompting is knowing what information the model needs. By asking AI to surface that explicitly, you transfer the expertise requirement from you to the system. You don't need to be a prompt expert - you just need to answer the questions AI asks.
How I Further Tested The Methodology
Additionally, I put together a comprehensive list of 60 best-in-class resources to learn the ins and outs of Prompting and Prompting Engineering (including things I have used myself and some additional recommendations)
I passed the most reliable, definitive videos, documentation, and newsletters through NotebookLM to arrive at 21 tips for high-quality prompting.
I distilled these tips into 7 crisp pointers - 3 that you have already read and 4 provided as a bonus in the next section.
This entire effort was just to double-validate my core hypothesis. I hope it is enough to convince you of the veracity of what I am telling you.
Bonus Techniques: For When You Want to Level Up
Once you've mastered Role + Task + Context + Format + Clarifying Questions, here are a few advanced moves worth knowing:
Use Delimiters to Separate Instructions from Data. When your prompt includes both instructions and content to analyze (like a document or transcript), clear separators prevent the AI from confusing your instructions with the content to be analyzed.
Ask for Step-by-Step Thinking. For complex problems (math, logic, strategy), explicitly instruct the AI to think step-by-step before giving a final answer. This "Chain of Thought" prompting dramatically improves accuracy.
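The chain-of-thought instruction is just a reusable suffix. A minimal sketch, with an invented sample question:

```python
COT_SUFFIX = (
    "\n\nThink through the problem step by step, showing your reasoning, "
    "and only state the final answer at the end."
)

def with_step_by_step(prompt: str) -> str:
    """Append a chain-of-thought instruction to any prompt."""
    return prompt + COT_SUFFIX

c = with_step_by_step(
    "If revenue grew 4% each quarter for a year, what was the annual growth?"
)
print(c)
```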
Specify What "Good" Looks Like. Instead of just describing a task, define success criteria, which gives AI a clear target to optimize for.
Use Examples (Few-Shot Prompting). If you have a specific style or format in mind, show the AI 2-3 examples - you would be surprised how often examples are more effective than descriptions.
What Actually Matters: The 80/20 Rule
After all this research, here's what I'm confident about:
20% of prompting techniques give you 80% of the improvement:
Give AI a specific role (10 seconds of setup, massive output improvement)
Specify your format (prevents rewrites and clarifies expectations)
Ask AI to ask you questions (outsources the expertise requirement)
Everything else is optimization around the edges.
If you only remember three things from this post, remember those three.
How to Actually Apply This Tomorrow
Reading about better prompts doesn't help. Using them does. Here's how to start:
Tomorrow: Pick One Task
Choose something you're already doing with AI - summarizing a document, writing an email, drafting a proposal. Write your prompt the way you normally would, then add:
One sentence defining the role: "You are a [specific expertise]..."
One sentence specifying format: "Provide output as [structure]..."
See if the output improves. It probably will.
Next Week: Use the Clarifying Questions Technique
For your next complex task, end your prompt with:
Before you start, ask me up to 5 clarifying questions that would change your output.
Answer the questions, then let AI generate. Notice how much better the result is when AI has the right context.
Next Month: Build Your Template Library
Create 3-5 reusable prompt templates for your most common tasks:
Client emails
Meeting summaries
Research briefs
Content drafts
Strategy docs
Each template should have Role + Task + Format clearly defined. Just swap in the specific details each time you use it.
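One lightweight way to keep such a library is a plain dictionary of Role + Task + Format entries, with fresh context swapped in per use. The template names and wording below are hypothetical, not from the article:

```python
# Hypothetical template library; entries are illustrative placeholders.
TEMPLATES = {
    "meeting_summary": {
        "role": "an executive assistant who writes crisp meeting notes",
        "task": "Summarize the transcript into decisions and action items.",
        "format": "Two bullet lists headed 'Decisions' and 'Action Items'.",
    },
    "client_email": {
        "role": "an account manager with a warm, concise style",
        "task": "Draft a follow-up email to the client described in the context.",
        "format": "Under 150 words, with a clear next step in the final line.",
    },
}

def render(name: str, context: str) -> str:
    """Fill a stored Role + Task + Format template, swapping in fresh context."""
    t = TEMPLATES[name]
    return (
        f"Role: You are {t['role']}.\n"
        f"Task: {t['task']}\n"
        f"Context: {context}\n"
        f"Format: {t['format']}"
    )

email_prompt = render("client_email", "Client asked for revised pricing on Tuesday.")
print(email_prompt)
```

Each new task type becomes one more dictionary entry, so the library grows as your usage does.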
The Real Unlock: Systems Over Tips
Here's what I've learned after weeks of testing prompts:
Individual prompting tips are useful. But prompting systems are transformative.
A good prompt gets you a better output. A prompting system means every time you interact with AI, you're building on what worked before. You're not starting from scratch - you're refining a method.
That's why I built this research into a systematic framework that gives you a decision tree:
Am I clear on the role? If not, define it.
Am I clear on the format? If not, specify it.
Do I have all the context AI needs? If not, ask it to ask me.
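The decision tree can even be run mechanically before you hit send. The keyword checks below are crude heuristics of my own invention, meant only to illustrate the three questions:

```python
def prompt_checklist(prompt: str) -> dict[str, bool]:
    """Rough keyword heuristics for the three-question decision tree
    (illustrative only; real prompts may phrase these differently)."""
    text = prompt.lower()
    return {
        "has_role": "you are" in text,
        "has_format": "format" in text,
        "invites_questions": "clarifying question" in text,
    }

checks = prompt_checklist(
    "You are a data analyst. Summarize this CSV. "
    "Format: a table. Before starting, ask me up to 5 clarifying questions."
)
print(checks)
```

Any `False` in the result points you to the step of the decision tree you skipped.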
Final Thought: The Difference Between Knowing and Doing
I can tell you that better prompts work. I can show you the research. I can give you the framework.
But the actual improvement happens when you try it yourself.
So here's the assignment: Take your next AI task and apply Role + Format. That's it. Two additions to your prompt.
See what happens. My guess? You'll never go back to vague prompting again.

