Am I asking the right questions?
So you’re racing against a tight deadline. Stress is mounting. You’re juggling the limited resources you do have.
Enter your ever-reliable, omnipresent (as long as your internet isn't letting you down) AI assistant. But here's the catch: you're asking question after question and not getting the answer you need.
Sound familiar? You type out eight, nine, maybe even ten prompts for your million-dollar question, only to hit that dreaded "you've hit the free plan limit" message. Now you're stuck with GPT-4o's less capable sibling. Frustrating, right?
In moments like these, you might find yourself wishing you had that premium subscription—or maybe, just maybe, that you could have asked better questions in the first place.
Here’s the budget-friendly secret to fixing this dilemma: it’s not about upgrading your subscription. It’s about upgrading your prompting skills. Because one well-crafted question can save you from 20 misfired ones.
Welcome to prompt engineering 101: a simple ChatGPT prompt guide to help your GPT work smarter, not harder.
Setting the stage aka contextual prompting
ChatGPT may seem like a superhero at times, but unfortunately mind-reading isn't one of its abilities (well, not yet anyway).
Think of ChatGPT as a stranger you're getting to know. The more relevant information you provide, the better its responses will be. At the end of the day, you don't just want "a coffee" from your local barista. You want an iced oat-milk latte, half sugar, no foam, extra shot of espresso.
Bad prompt: Write an advertisement for a coffee shop
Why it’s bad: This is way too vague. ChatGPT doesn’t know anything about your target audience, the product you’re selling, the platform you’re advertising on, or the tone you’re going for.
Good prompt: Write a playful Instagram ad, under 50 words, for a specialty coffee shop targeting busy young professionals. Highlight our new iced oat-milk latte and include a first-visit discount code.
Why it works: Providing detailed context allows ChatGPT to tailor its response to your specific needs, ensuring the output is accurate and relevant.
Pro Tip: Before you write your prompt, jot down a quick checklist of the key context elements the GPT needs to know. This extra effort can take your results from simplistic to specific.
Here's a brief checklist to get you started:
- Audience: who are you talking to?
- Product: what exactly are you offering?
- Platform: where will this appear?
- Tone: how should it sound?
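To make this concrete, here's a minimal Python sketch of contextual prompting. The helper name and context fields are illustrative, not part of any official API; the idea is simply to force yourself to fill in the checklist before the task reaches the model.

```python
def build_contextual_prompt(task, **context):
    """Prefix a task with labelled context elements (audience, product,
    platform, tone, ...) so the model isn't left guessing."""
    lines = [f"{key.replace('_', ' ').title()}: {value}"
             for key, value in context.items()]
    return "\n".join(lines) + f"\n\nTask: {task}"

prompt = build_contextual_prompt(
    "Write an advertisement for a coffee shop.",
    audience="busy young professionals",
    product="iced oat-milk latte",
    platform="Instagram",
    tone="playful",
)
```

The resulting string can be pasted into ChatGPT or sent through any chat API as a single user message.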
Play the part aka role-based prompting
Role-based prompting takes contextual prompting up a notch. This technique is especially useful when you want to fine-tune your tone and perspective to match your target audience.
Bad prompt: Write a relatable and funny caption for a budgeting app that makes saving money feel easy.
Why it’s bad: The prompt lacks context about the target audience. Without clearly defining the role or audience, the AI generates a generic output that may not hit the mark.
Good prompt: You are a Gen Z social media manager for a budgeting app. Write a relatable, funny Instagram caption that makes saving money feel easy, using the kind of casual, meme-literate language your audience actually speaks.
Why it works: It provides clear context, defines the audience and tone, and customizes the output to resonate with the target demographic.
Defining the role for your GPT ensures it speaks directly to your audience—Gen Z, Millennials, Boomers, Gen X, Gen Alpha—you name it, GPT delivers. Just make sure you give it the right hat to wear.
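If you're working through a chat API rather than the ChatGPT interface, the natural home for a role is the system message, which shapes every subsequent reply. A minimal sketch, with an illustrative persona:

```python
def role_messages(persona, task):
    """Put the role in a system message so it colours every reply in the chat,
    then send the actual task as the user message."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = role_messages(
    "You are a Gen Z social media manager for a budgeting app. "
    "You write in casual, meme-literate language.",
    "Write a relatable and funny caption that makes saving money feel easy.",
)
```

This message list is the standard input shape for chat-style APIs, so the same pattern works across providers.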
Show me your working aka chain of thought prompting
The benefits of thinking out loud aren't unique to humans. When you prompt the LLM to explore ideas and break down its reasoning step by step, you can uncover insights that a more direct approach would have missed.
This method aims to improve LLM performance by encouraging the model to explain its reasoning sequentially. With reasoning models like OpenAI's o1, however, the technique is becoming less necessary, since these models already perform multi-step reasoning on their own.
Bad prompt: How can I improve the user experience on my website?
Why it’s bad: The bad prompt is too vague and lacks direction, leaving the LLM to interpret what “improve the user experience” entails.
Good prompt: Think step by step. First, evaluate the most common user experience issues on e-commerce websites. Then, suggest a fix for each issue. Finally, personalize your recommendations for a small online bookstore.
Why it works: By explicitly guiding the LLM to evaluate, suggest, and personalize, it ensures a comprehensive and detailed response that covers all critical aspects of improving user experience. If there's one key takeaway from this ChatGPT prompt guide, it's this: clarity and structure in your prompts are the foundation for unlocking smarter AI interactions.
Pro tip: Phrases such as "walk me through," "think step by step," or "work through the answer in clear sequential steps" encourage your model to put a little extra effort into exploring the problem and may prompt more accurate responses.
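If you find yourself typing the same nudge repeatedly, it's easy to wrap it in a tiny helper. A minimal sketch (the wording of the instruction is just one reasonable variant):

```python
STEP_BY_STEP = ("Think step by step: show your reasoning in clear, "
                "sequential steps before giving a final answer.")

def chain_of_thought(task):
    """Append an explicit reasoning instruction to any task."""
    return f"{task}\n\n{STEP_BY_STEP}"

prompt = chain_of_thought(
    "How can I improve the user experience on my e-commerce site?")
```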
Baby steps aka least to most prompting
Least to most prompting is like chain-of-thought’s older, more responsible sibling. Remember as a kid, when you first dipped your toes into the shallow end of the pool and slowly worked your way to the deep end? It’s either that, or someone shoves you straight into the deep—and let’s be real, that usually ends in panic rather than progress.
This technique works in a similar fashion by decomposing the problem. Each step is a building block to the next, gradually guiding the model toward more logical and well-thought-out answers.
Let's say you're hoping to launch a product and need a marketing slogan.
Bad prompt: Write a marketing slogan for a new eco-friendly water bottle.
Why it’s bad: It overwhelms the GPT by skipping structured steps, leading to generic outputs instead of well-thought-out results.
Good prompt: Let's build up to a slogan step by step. First, list the key benefits of an eco-friendly water bottle. Next, identify the emotions those benefits evoke in buyers. Finally, combine the strongest benefit and emotion into a short marketing slogan.
Why it works: It breaks the task into manageable steps, guiding the GPT to produce detailed and logical responses.
By dividing the problem into small digestible sub-problems, least-to-most prompting ensures clarity, precision, and impactful results.
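The sub-problem chain can also be automated: ask each question in turn and feed every answer back as context for the next. A sketch, assuming `ask_model` is any callable that maps a prompt string to a response string (for example, a thin wrapper around a chat API):

```python
def least_to_most(ask_model, subproblems):
    """Solve a task as a chain of sub-problems: each answer is appended
    to the running context before the next sub-problem is asked."""
    context = ""
    answer = ""
    for step in subproblems:
        prompt = (context + "\n\n" + step) if context else step
        answer = ask_model(prompt)
        context += f"\nQ: {step}\nA: {answer}"
    return answer

slogan_steps = [
    "List the key benefits of an eco-friendly water bottle.",
    "Which emotions do those benefits evoke in buyers?",
    "Combine the strongest benefit and emotion into a short marketing slogan.",
]
```

Calling `least_to_most(ask_model, slogan_steps)` returns the final slogan, with the earlier answers baked into the prompt that produced it.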
Lead by example aka N-shot prompting
This ChatGPT prompt guide wouldn’t be complete without emphasizing the power of examples. If you want to stray away from the LLM falling back on its favorite buzzwords (“essentially” and “tailored,” anyone?) or using overplayed metaphors (we’ve had enough of those crystal ball references, thanks), you can provide more examples.
The more examples you give, the better it can mimic the tone, style, and structure you’re aiming for. N stands for the number of examples you provide. More examples = better results.
Bad prompt: Write a product description for our project management software.
Why it’s bad: Without context, ChatGPT is left to guess, resulting in generic and less effective outputs.
Good prompt: Here are two product descriptions we love: [example 1] and [example 2]. Using the same tone and structure, write a product description for our project management software, highlighting real-time collaboration and automated reporting.
Why it works: This prompt provides examples for reference, specifies the desired tone and style, and clearly highlights the features to emphasize.
Pro tip: Notice how the last prompt integrated contextual prompting at the end? Combining methods like contextual priming with N-shot prompting can make your requests even more effective!
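For API users, the idiomatic way to supply N examples is as alternating user/assistant message pairs rather than one long prompt. A minimal sketch (the example descriptions are invented for illustration):

```python
def n_shot_messages(examples, task):
    """Turn (input, output) example pairs into user/assistant message
    pairs, followed by the real task, for a chat-style API."""
    messages = []
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": task})
    return messages

examples = [
    ("Describe our note-taking app.", "Capture ideas the moment they strike."),
    ("Describe our calendar app.", "Your week, finally under control."),
]
messages = n_shot_messages(
    examples,
    "Describe our project management software, highlighting "
    "real-time collaboration.",
)
```

Two examples makes this 2-shot; add more pairs to increase N.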
Fire and ice aka temperature prompting
Temperature prompting is the GPT version of a vibe check. Do you want a conservative, precise response or a highly creative but unpredictable one? It's up to you to set the tone, and all you need to do is control the weather. The higher the temperature, the more creative (and more random) the response.
Strictly speaking, temperature is a model setting rather than part of the prompt: developers using the OpenAI API can set the temperature parameter directly (it ranges from 0 to 2, with a default of 1). The ChatGPT interface has no temperature dial, but you can approximate the effect by stating your desired level in brackets at the end of your prompt, for example "(temperature: 0.2, keep it precise)", and the model will adjust its style accordingly.
Pro tip: A lower temperature is favoured for research-centric and explanatory work, while a higher temperature shines in storytelling and creative copy.
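For API users, here's a sketch that maps task types to temperatures and builds the request parameters. The task labels and preset values are illustrative choices, not official recommendations:

```python
# Illustrative temperature presets; the OpenAI chat API accepts 0-2 (default 1).
TASK_TEMPERATURES = {
    "research": 0.2,
    "explanation": 0.3,
    "marketing": 0.9,
    "storytelling": 1.2,
}

def request_params(prompt, task_type):
    """Build the keyword arguments for a chat-completion request,
    falling back to the API default temperature for unknown task types."""
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": TASK_TEMPERATURES.get(task_type, 1.0),
    }
```

With the official `openai` Python SDK, these parameters could be passed straight to `client.chat.completions.create(**params)`.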
Summary table

| Technique | What it does | When to use it |
| --- | --- | --- |
| Contextual prompting | Supplies background: audience, product, platform, tone | Any task where the output must fit a specific brief |
| Role-based prompting | Assigns the model a persona to write from | Matching tone and perspective to a target audience |
| Chain-of-thought | Asks the model to reason step by step | Complex analysis or problem solving |
| Least-to-most | Breaks the task into ordered sub-problems | Multi-stage tasks, from slogans to strategies |
| N-shot prompting | Provides examples for the model to imitate | Matching an existing tone, style, or format |
| Temperature prompting | Controls how conservative or creative the output is | Precision work (low) vs. creative work (high) |
Conclusion
The next time you’re racing against the clock and need some solid prompting techniques, don’t forget to give these methods a try. There are plenty of other techniques out there, so feel free to experiment with the ones that best fit your requirements.
If you found this ChatGPT prompt guide helpful, you'll surely appreciate the exciting AI projects we're working on. Visit our website to explore our innovative AI solutions—because Gen AI is at the heart of everything we offer at Netscribes.