Boost Your ChatGPT Skills: 7 Tips for Better Prompt Writing
Essential lessons uncovered from the documentation of the inventors of ChatGPT that anyone can apply in writing better prompts
Are you looking to optimize your interactions with ChatGPT and get the best results possible? Look no further! In this article, we’ll explore seven tips from the ChatGPT developer documentation that can transform the way you use this cutting-edge AI. Get ready to explore tips for better prompt writing and clever AI tweaks.
Learn from the Best!
Most users of ChatGPT will likely never read its developer documentation, since it is written for programmers and can be complex, full of what some might find boring technical language!
However, there are treasures of insight in the documentation that anyone can use, even those who are not developers.
Just imagine the potential gems hidden away in these documents, as they contain thoughts from the creators of ChatGPT themselves who are explaining how their tools work, giving us an inside view of their technology. I don’t think we can find a better source of insight than the inventors of the technology themselves.
As I personally read this documentation, I kept thinking what a shame it was that many users would never see or read it.
Therefore, I want to share insights I harvested personally from the developer documents while reading and thoroughly analyzing them, but to do so in a way that anyone can understand and apply in their own prompt writing. These insights should not be just for programmers!
I have broken these insights into 7 essential tips for better prompt writing. Since each topic is designed to stand on its own merits, feel free to jump around to specific tips that interest you.
Index of tips covered in this article:
- Tip 1: Show and Tell the Model What You Want
- Tip 2: Tell ChatGPT the Intent and How to Behave
- Tip 3: Limit the AI from Making Up an Answer
- Tip 4: Specify the Format or Layout of the Answer
- Tip 5: Temperature Controls Creativity and Precision
- Tip 6: Token Pricing Includes Questions & Responses
- Tip 7: Use the OpenAI Cookbook
Before we get started, keep in mind that while many of these ideas focus on ChatGPT, they really come from using the various artificial intelligence models offered by OpenAI, such as GPT-3.5, GPT-4, and others. Therefore, these tips are not limited to just one artificial intelligence model.
Let us dive into the first of the seven tips I want to share.
Tip 1: Show and Tell the Model What You Want
Quoting the developer documentation: In many cases, it’s helpful to both show and tell the model what you want. Adding examples to your prompt can help communicate patterns or nuances.
Then it provides this example with the ChatGPT output in green:
This is mind-blowing. Not only can we ask a question, but we can demonstrate what proper answers would look like. As you can see, ChatGPT responded with three names separated by commas, just as they are in the sample content given in the prompt. We “showed” and “told” ChatGPT the type of results we want to see.
ChatGPT is doing more than responding in the same format; it is looking at the type of answers you provided to understand the kind of answer you expect.
Show and Tell: it imitates you in its response.
How can we apply this in our own prompt writing? Try to think about what a useful and correct answer to your question would look like and then show it to ChatGPT. Ask yourself questions like:
- What type of questions and answers would be involved in providing a satisfactory response?
- What types of responses would not be helpful and should be avoided?
The answers to these two questions can be included in your prompt to narrow the results.
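To make the “show and tell” pattern concrete, here is a minimal Python sketch of how a few-shot prompt could be assembled before sending it to the model. The helper function and the example questions are illustrative assumptions, not taken from the documentation:

```python
def build_show_and_tell_prompt(examples, question):
    """Assemble a few-shot prompt: each example 'shows' the model the
    answer format we expect before we 'tell' it our real question."""
    lines = []
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    # The unanswered final question invites the model to continue the pattern.
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

examples = [
    ("Name three primary colors.", "red, blue, yellow"),
    ("Name three planets.", "Mars, Venus, Jupiter"),
]
prompt = build_show_and_tell_prompt(examples, "Name three composers.")
print(prompt)
```

Because the two example answers are short comma-separated lists, the model will very likely answer the final question in the same style.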
Tip 2: Tell ChatGPT the Intent and How to Behave
The documentation says something interesting: Without instructions, ChatGPT might stray and mimic the human it’s interacting with and become sarcastic or some other behavior we want to avoid.
Isn’t that interesting: the AI can become sarcastic or stray from its topic as it mimics the human it interacts with.
The way to control this according to the documentation is to define how we want the artificial intelligence to behave. So for example, we might add the following definition of identity and appropriate behavior to the prompt:
You are a nutritionist providing advice to a diabetic.
You are helpful, clever, and very friendly. You are concerned for the
well-being of the person you are speaking with. While not being overly
serious, you are careful not to be sarcastic or to joke.
Breaking this down:
- First, we provide identity, the prompt is a nutritionist helping a diabetic.
- Second, we also define its behavior, what it can do, and what it shouldn’t do.
In effect, we give ChatGPT a persona in which we clarify its perspective and its way of thinking and communicating.
What is the lesson for better prompt writing? In our prompts, tell ChatGPT who it is (identity), how it should conduct itself (behavior), and what it should and shouldn’t do (do’s/don’ts). There is no guarantee it will conform to these instructions throughout the conversation, but it will likely help it be more specific and helpful in scoping its responses.
The documentation also shows we can make the identity and behavior do funny things and really add character to our ChatGPT conversations.
I really laughed as I read this exchange between ChatGPT and the user. But it shows how powerful identity and behavior can be in obtaining desired results.
It could be that someone wants the help of a nutritionist but with the humor of a clown. This is possible if you tell ChatGPT its identity and behavior.
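When working through the API rather than the website, a persona like the nutritionist above is typically supplied as a “system” message. Here is a sketch of what that could look like, using the persona text from this tip; the helper function and sample question are illustrative assumptions:

```python
persona = (
    "You are a nutritionist providing advice to a diabetic. "
    "You are helpful, clever, and very friendly. You are concerned for the "
    "well-being of the person you are speaking with. While not being overly "
    "serious, you are careful not to be sarcastic or to joke."
)

def build_messages(persona, user_question):
    """Put the identity/behavior text in the 'system' slot, where it shapes
    every reply, and the actual question in the 'user' slot."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_question},
    ]

messages = build_messages(persona, "Is fruit juice a good breakfast choice?")
```

This `messages` list is what would be passed to the API call shown later in Tip 6.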
Tip 3: Limit the AI from Making Up an Answer
The documentation tells us that the model can fib: The API has a lot of knowledge that it’s learned from the data that it was trained on. It also has the ability to provide responses that sound very real but are in fact made up.
Basically, the creators of ChatGPT are telling us something crucial here. ChatGPT isn’t really intelligent. Its so-called knowledge comes from the text it has obtained. Who is to say those databases of text always contain facts?
Additionally, it builds responses based on that data, and who is confirming that those responses are always accurate? The fact is they are not; the model is an algorithm that simply plows its way through text with the goal of producing something that sounds logical and, hopefully, useful.
I struggled with this at first: artificial intelligence may not tell the whole truth, or the truth at all. As when speaking to a child, we have to sanity-check its facts and its way of thinking.
It is often said that these tools are “reasoning” aids, not encyclopedias. They help us reason on questions and topics of interest in order to find satisfactory results.
So the documentation encourages us to think about ways to help the AI to be more factual. To accomplish this, two ideas are recommended:
- Provide a grounded truth. If you provide ChatGPT with a body of text to answer questions about (like a Wikipedia entry) it will be less likely to fabricate a response.
- Show it how to say “I don’t know”. If the model understands that, in cases where it’s less certain about a response, saying “I don’t know” or some variation is appropriate, it will be less inclined to make up answers.
For example, we can give the AI examples of questions and answers it knows and then examples of things it wouldn’t know and provide question marks.
Q: Who is Batman?
A: Batman is a fictional comic book character.
Q: What is torsalplexity?
A: ?
Q: What is Devz9?
A: ?
By doing this, we are indicating that if it doesn’t know the answer, it should respond with a question mark.
As mentioned earlier, it is like conversing with a child who thinks they have to respond to everything, until we help them realize it is OK not to have a response when they don’t know the answer.
Like a curious child, ChatGPT might say things that aren’t based on reality, but it might also say things in ways that make us see things in a new light.
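Both ideas, a grounding text and the “?” convention, can be combined in one prompt. Here is a minimal sketch of how such a prompt could be built in Python; the helper function and wording of the instruction are my own illustrative assumptions:

```python
def build_factual_prompt(grounding_text, qa_examples, question):
    """Combine a grounding passage with Q/A examples (including '?' for
    unknowns) so the model learns it is allowed to decline to answer."""
    parts = [
        "Answer using only the text below. If the answer is not in the "
        "text, reply with a question mark.",
        "",
        grounding_text,
        "",
    ]
    for q, a in qa_examples:
        parts.append(f"Q: {q}")
        parts.append(f"A: {a}")
    parts.append(f"Q: {question}")
    parts.append("A:")
    return "\n".join(parts)

prompt = build_factual_prompt(
    "Batman is a fictional comic book character.",
    [
        ("Who is Batman?", "Batman is a fictional comic book character."),
        ("What is torsalplexity?", "?"),
    ],
    "What is Devz9?",
)
print(prompt)
```

Because one of the example answers is already “?”, the model is shown that declining is an acceptable completion of the pattern.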
Tip 4: Specify the Format or Layout of the Answer
You can also tell ChatGPT how you want answers to be formatted. I find this especially useful when I want to output data in markdown format, especially for lists and tables.
To do this, I often include one of the following in my prompts regarding formatting:
- Return the answer in a markdown list using dashes for the bullets
- Return the answer as a markdown table
- Do not use accent marks in the foreign language translation
Once it is output, I can simply copy it out of ChatGPT into my note-taking application. Here is the raw output from this example:
Here's the conjugation of the Croatian word "Vežbati"
in the present tense for all the pronouns:
| Pronoun | Conjugation |
|---------|-------------|
| Ja | Vežbam |
| Ti | Vežbaš |
| On/Ona/Ono | Vežba |
| Mi | Vežbamo |
| Vi | Vežbate |
| Oni/One/Ona | Vežbaju |
Tip 5: Temperature Controls Creativity and Precision
When having a conversation with ChatGPT, sometimes you want it to be more creative in its approach, while at other times you want it to be more confident or factual in its responses.
The AI engine behind ChatGPT can be tweaked to be more creative or more precise in its answers. This is called temperature. Temperature is like a dial that you turn up or down, indicating more creativity or precision used by the AI in its responses to our questions.
We can compare it to a dial on a thermostat. Turn it up for more heat, turn it down for less.
To quote the documentation: Remember that the model predicts which text is most likely to follow the text preceding it. Temperature is a value between 0 and 1 that essentially lets you control how confident the model should be when making these predictions. Lowering temperature means it will take fewer risks, and completions will be more accurate and deterministic. Increasing temperature will result in more diverse completions.
Unfortunately, the ChatGPT client on OpenAI’s website doesn’t give us control over temperature. But many other ChatGPT clients do.
So in addition to writing good prompts, if your ChatGPT client lets you control the temperature, you can fine-tune this parameter to refine the results.
For example, when I am asking questions related to language learning, I prefer ChatGPT to be more precise in its answers about defining words or explaining grammar.
On the other hand, when I am exploring writing topics and brainstorming with ChatGPT, I prefer it has more freedom to be creative.
To accomplish this, I tweak the temperature to match the type of response that I need.
Again, to quote the developer documentation: It’s usually best to set a low temperature for tasks where the desired output is well-defined. Higher temperature may be useful for tasks where variety or creativity are desired, or if you’d like to generate a few variations for your end users or human experts to choose from.
Here is a simple example with the prompt and response:
While the difference is subtle, there is a difference in tone and directness. This becomes even more noticeable in longer conversations. In such cases, you’ll see that ChatGPT takes greater liberties in how it expresses itself, even conveying more personality and opinion if the temperature is set to be creative.
Note: I find the ChatGPT web app at https://chat.openai.com/ is dialed to be somewhere in the middle, creating a balanced response: some creativity, but not too much. This sounds good, but for many tasks, it is nice to have more control.
How can you leverage this? If you have a ChatGPT client that allows you to tweak the temperature, think about the type of conversation you are having with ChatGPT and determine if your goal is for the AI to be more direct, factual, and precise or rather to lean towards being creative, free, random, and even prone to emotional response.
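For readers curious what this looks like in code, here is a sketch of how the same request could be parameterized with different temperature values when calling the API directly (the `completion_args` helper and the sample prompts are illustrative assumptions; an actual call requires the `openai` package and an API key):

```python
def completion_args(prompt, temperature):
    """Bundle request parameters for a chat completion. Per the docs quoted
    above, temperature is between 0 and 1: low values give more precise,
    deterministic replies; high values give more varied, creative ones."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# Precise, factual task: keep the temperature low.
precise = completion_args("Define the Croatian word 'vežbati'.", 0.1)

# Brainstorming task: turn the dial up for more creative variety.
creative = completion_args("Brainstorm blog titles about language learning.", 0.9)

# The actual request would then be something like:
# openai.ChatCompletion.create(**precise)
```

The only difference between the two requests is the dial setting, exactly like the thermostat analogy above.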
If you want to experiment with ChatGPT clients that give you more control over temperature, here is a short list:
- TypingMind for a browser-based ChatGPT client
- openCat for Mac and iOS devices
You’ll also need your own API key from OpenAI to use these ChatGPT clients. For instructions on how to do so, check out this article: How to get your own API Key for using OpenAI ChatGPT in Obsidian
Tip 6: Token Pricing Includes Questions & Responses
If you use your own ChatGPT client, not the OpenAI website, you will do so with your own API key. This gives us much more control over working with ChatGPT and is also a cost-effective way to use this service.
Using your API key can be more cost-effective than the $20 USD price tag for ChatGPT Plus in many scenarios.
Individual spending on API usage could vary depending on the usage patterns and the specific API agreement with OpenAI. In my case, I find that I spend about $5 to $10 monthly while engaging in many AI activities. However, your costs may differ based on your unique usage.
Regarding this tip, it is important to understand how the ChatGPT service charges us when using the API directly. The documentation reminds us that we are charged for both the questions we send to GPT and also the answers from GPT.
OpenAI uses a token model for charging us. Tokens are chunks of text; as a rough rule of thumb, one token is about four characters of English text. You buy tokens with real money!
Lesson: we pay for traffic going and coming from our ChatGPT client.
There are a few IMPORTANT things we can do to lower costs:
- Using GPT-3.5 when possible can be fiscally advantageous. It is cheaper and often faster than GPT-4. GPT-4 is newer and better at many things, but the difference can be less significant in some cases, especially when testing your prompts. So don’t think you need the newest and smartest model to get great answers; you don’t.
- GPT requires the entire conversation to be sent with each new question we post. So if we have a conversation where we ask 10 questions and it responds with 10 answers, we are in fact sending all those questions and all of ChatGPT’s responses back to the server to get the next response in our conversation.
- This means that conversations are resent to the server with each new question, compounding the amount of data sent and tokens used. Tokens cost us money.
- Therefore, if you are using the OpenAI GPT API, you will want to manage the number of messages you send. It is a balance: send enough of the conversation’s context, but don’t send more than is needed.
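One common way to strike that balance is to resend only the most recent part of the conversation. Here is a small Python sketch of that idea; the `trim_history` helper and its default of four messages are illustrative assumptions, not an OpenAI feature:

```python
def trim_history(messages, keep_last=4):
    """Keep the system message (if any) plus only the most recent
    exchanges, so each new request resends fewer tokens."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]

# Simulate a 10-question conversation.
history = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(10):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

# Instead of resending all 21 messages, send the system message plus
# the last two question/answer pairs.
trimmed = trim_history(history, keep_last=4)
```

The trade-off is that the model loses memory of anything outside the window you keep, so trim less aggressively for conversations where early context matters.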
How can you apply this tip in real-world use? Some prompts and answers don’t require having a conversation with ChatGPT; rather, we just need to send a prompt, get the answer, and we are done.
I often do this when asking for a definition of a foreign language word. I only need to ask ChatGPT once for an answer. I don’t need ChatGPT to see all the previous words I have asked to be defined. In this case, I tell the ChatGPT client to only send my prompt and expect an answer and the next time I ask for a word definition to ignore all the previous messages in the conversation. This lowers the amount of data sent back and forth and reduces the number of tokens spent.
Note: This tip is only important when using your own API key. If you use a service like ChatGPT Plus, you don’t need to worry about message size since the monthly service fee is fixed.
One more thing, while this is for programmers, I think all of us can understand the following code.
import openai  # requires the openai Python package and an API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
While the code snippet provided is meant for programmers, it gives you an idea of what is happening behind the scenes during a conversation with ChatGPT. In simple terms:
- The code shows a way to send a series of messages to ChatGPT, including a system message that sets the role of the AI and user messages containing your questions.
- The conversation history is maintained so that ChatGPT can understand and provide relevant answers in context.
Even if you’re not a programmer, just know that there’s a structured system of messages being exchanged between the user and ChatGPT as the conversation progresses.
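As a back-of-the-envelope illustration of what that structured exchange costs, you can roughly estimate the tokens in a message list with the chars-divided-by-four rule of thumb mentioned in this tip. This estimator is my own illustrative sketch; for exact counts, OpenAI provides the tiktoken library:

```python
def rough_token_estimate(messages):
    """Very rough heuristic: one token is about four characters of
    English text, so total characters / 4 approximates the token count.
    This is only an estimate; use OpenAI's tiktoken library for accuracy."""
    chars = sum(len(m["content"]) for m in messages)
    return chars // 4

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
]
estimate = rough_token_estimate(conversation)
```

Running this kind of estimate over a long conversation makes it obvious why resending the whole history with every question adds up.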
Tip 7: Use the OpenAI Cookbook
Another helpful and deep resource of insight is the OpenAI Cookbook. This is a collection of useful guidelines, examples, and techniques for working with artificial intelligence, maintained by the creators of ChatGPT.
Instead of ‘recipes’ for cooking, think of them as recipes of best practices and solutions that you can apply when using AI like ChatGPT. While the Cookbook is designed for developers, we can learn a lot of interesting things from being in someone else’s ‘kitchen’ 🧑🍳.
Just to give you a taste of some of the gems found in the OpenAI Cookbook, it has a summary of useful tricks. Let me share a few of those here.
First, a VERY important and key thing to remember in almost anything we do with ChatGPT is the following:
The input prompt is the best lever for improving model outputs.
So while there are many things we can do to tweak ChatGPT, the prompt input is the most important. I know you know that, but it doesn’t hurt to have it reinforced in our minds.
Prompt writing is an art, and no one becomes an artist overnight without lots of practice, trial, error, and learning.
We can improve our odds of success with the following suggestions taken from the cookbook:
- Give clearer instructions. For example, if you want ChatGPT to provide a comma-separated list, directly ask for a comma-separated list. If you want it to say “I don’t know” when unsure, instruct it with ‘Say “I don’t know” if you do not know the answer.’
- Provide better examples. When using examples in your prompt, ensure that they are diverse and of high quality to guide ChatGPT effectively.
- Ask ChatGPT to act like an expert. Request the AI to produce high-quality output or answers as if they were written by an expert for more reliable responses. This will induce the model to give higher-quality answers. You may say, “The following answer should be correct, high-quality, and written by an expert.”
- Encourage step-by-step reasoning. Prompt ChatGPT to explain its thought process step by step before giving the final answer, increasing the likelihood of a consistent and correct response. Though this may be a more advanced skill in prompt writing, it can be beneficial with practice and implementation.
Explore the Documentation For Yourself
The OpenAI Developer Documentation is designed for software engineers but contains many tidbits that help you better understand how to work with ChatGPT. If you want to go deeper, I suggest looking at the following two resources:
- Text Completion documentation
- OpenAI Cookbook, specifically the documents titled: “Techniques to improve reliability” and “How to work with large language models”
You can get lost for hours in these resources, but it is also a lot of fun to be taught by the creators of ChatGPT.
Final Tokens
To wrap up, the 7 expert tips presented in this article offer a roadmap to elevate your ChatGPT skills and maximize the quality of AI interactions.
By integrating these valuable suggestions into your daily use of ChatGPT, you can expect a more rewarding and effective AI communication experience.
Keep experimenting with various techniques and strategies to continue strengthening your skills and expanding your ChatGPT expertise.
Thank you for reading this article. Please check out more of my work at https://tfthacker.com