I don’t usually hype myself up, but what I’m about to share has completely changed how I use AI.
If you’ve ever wondered why some prompts give you bland, generic answers while others feel like they were written just for you, you’re not alone. I used to struggle with this too.
I’m Tanzila — I studied Statistics, I mostly write about data and analytics, and I spend a lot of time experimenting with tools like ChatGPT and Google Bard. At first, I couldn’t figure out why my results were so inconsistent.
So, I decided to do the nerdy thing: I spent hundreds of hours testing prompts, reading research, and applying AI to my own projects in analytics and writing. What I discovered is that effective prompts share six building blocks.
This isn’t just theory. I’ve used this framework to write better blog posts, save time on data summaries, and even make technical explanations more engaging. And once you learn it, you’ll never have to “guess” again.
Here’s the formula.
The Six Building Blocks of a High-Quality Prompt
Not all prompts are created equal. These are the six components I’ve found most important, in order:
- Task
- Context
- Exemplars
- Persona
- Format
- Tone
Think of this as a checklist. You don’t always need all six — but the more you include, the more likely you’ll get a strong, tailored response.
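Before we break each one down, here's what all six look like stitched together. This is a rough sketch of my own, not an official template: it assumes the OpenAI Python SDK with an API key in your environment, and the model name is just a placeholder.

```python
from openai import OpenAI

# One variable per building block, assembled in the checklist's order.
task = "Summarize the quarterly sales figures below into three insights."
context = "The audience is a non-technical marketing team planning next quarter's budget."
exemplar = 'Phrase each insight like: "Revenue grew because X, so we should do Y."'
persona = "Act as a senior data analyst."
fmt = "Return a bulleted list, one insight per bullet."  # 'format' shadows a Python builtin
tone = "Keep the tone friendly but direct."

prompt = "\n".join([task, context, exemplar, persona, fmt, tone])

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Each variable is one building block. Swap the strings and the skeleton stays the same.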
Let’s break them down.
1. Task: Start With Action
The task is non-negotiable. Without it, the model doesn’t know what to do.
Always begin with an action verb: Generate, Write, Analyze, Summarize, Compare, Design.
Examples:
- “Generate a 3-month savings plan for me.”
- “Analyze this sales dataset and extract 3 insights.”
- “Write a blog intro that sounds approachable but professional.”
If your task is fuzzy, your answer will be too.
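To make that concrete, here's the difference as two strings (both invented for illustration):

```python
# Fuzzy: the model has to guess what "help" even means.
fuzzy_task = "Can you help me with my savings?"

# Specific: an action verb plus a concrete deliverable.
clear_task = "Generate a 3-month savings plan with weekly targets for a $1,500 goal."
```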
2. Context: Set the Scene
Context tells the model what matters. The challenge is striking the balance: enough detail to guide the output, but not so much that it gets cluttered.
I use three quick questions:
- Who’s the user?
- What does success look like?
- What constraints exist?
Example:
“I’m a beginner investor. I want to grow a small portfolio over 6 months, but I only have 2 hours a week to research. Generate a study plan.”
Compare that to simply saying: “Teach me investing.” The first gives a usable, customized answer. The second? Likely generic fluff.
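I sometimes turn those three questions into a tiny template so I can't skip them. A sketch, with a function and field names of my own invention:

```python
def add_context(task: str, who: str, success: str, constraints: str) -> str:
    """Prefix a bare task with answers to the three context questions."""
    return f"{who} {success} {constraints} {task}"

prompt = add_context(
    task="Generate a study plan.",
    who="I'm a beginner investor.",
    success="I want to grow a small portfolio over 6 months.",
    constraints="I only have 2 hours a week to research.",
)
print(prompt)
# I'm a beginner investor. I want to grow a small portfolio over 6 months.
# I only have 2 hours a week to research. Generate a study plan.
```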
3. Exemplars: Show, Don’t Just Tell
Exemplars (a fancy word for examples) drastically improve results.
Resume example:
“Rewrite this bullet: ‘Worked on reports.’ Use the format: I accomplished X by doing Y, which resulted in Z.”
Job description example:
“Here’s a LinkedIn job post. Mirror its structure but adjust for my role.”
Blogging example:
“Write my blog intro in the same tone as this paragraph I’ll paste below.”
LLMs are mimics at heart. Show them a structure, and they'll follow it.
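Here's the resume example wired up as a reusable prompt. The numbers in the exemplar are made up; the point is the X-Y-Z structure:

```python
# The exemplar shows the structure; the model mimics it.
exemplar = (
    "Example: 'I accomplished a 30% faster monthly close by automating "
    "the reporting pipeline, which resulted in 10 saved hours per cycle.'"
)

bullet = "Worked on reports."

prompt = (
    f"Rewrite this resume bullet: '{bullet}'\n"
    "Use the format: I accomplished X by doing Y, which resulted in Z.\n"
    f"{exemplar}"
)
print(prompt)
```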
4. Persona: Who Should ChatGPT “Be”?
This one is underrated. Giving the model a role or perspective changes the way it answers.
For job prep: “Act as a recruiter.”
For analysis: “Be a financial analyst with experience in forecasting.”
For creativity: “You’re Sherlock Holmes. Explain statistics as a detective story.”
Once, I asked it to explain regression analysis as if it were a stand-up comedian. The result was surprisingly clear — and hilarious.
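If you use the API instead of the chat window, the persona conventionally goes in the system message. A minimal sketch, again assuming the OpenAI Python SDK and a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The persona lives in the system message,
        {"role": "system",
         "content": "You are a financial analyst with experience in forecasting."},
        # and the task itself stays in the user message.
        {"role": "user",
         "content": "Analyze this quarter's revenue numbers and flag the three biggest risks."},
    ],
)
print(response.choices[0].message.content)
```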
5. Format: Decide the Shape of the Answer
Don’t just say “Summarize this.” Say how you want it delivered.
Options include:
- Bullet points
- Tables
- Step-by-step guides
- Paragraphs with headings
- Code blocks
Example:
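(A sketch of my own; the column names are illustrative.)

```python
# Same task, but the shape of the answer is spelled out.
prompt = (
    "Summarize the article below as a table with the columns "
    "'Claim', 'Evidence', and 'Takeaway'. Keep each cell under 15 words.\n\n"
    "<paste article here>"
)
print(prompt)
```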