Prompt Engineering: Different Types of Prompts


Introduction #

When AI was first introduced, it was marketed as a game-changing technology, capable of replacing workers and changing the way we work. Initially, I was reluctant to add AI into my workflow as I believed that hallucination was a major concern. I am definitely not one of the early adopters of this way of working.

After seeing how some of my colleagues and friends have integrated AI into their workflows, I think it is definitely the way forward.

As a result, I started to take a look into prompt engineering and how to prompt effectively to make full use of the AI model’s capabilities.

I am starting a series where I learn more about different prompts and how to improve my prompting skills.

In this blog post, I will primarily focus on different prompting methods and when to use them for maximum effectiveness.

Zero-Shot Prompting #

Zero-Shot prompting is a technique where the AI model is prompted without any prior examples or training data. This type of prompting can be adapted to many different use cases without requiring additional training data.

When to Use #

This type of prompting works best when the task fulfills the following criteria:

  1. The task is straightforward and clear
  2. There are no past attempts at this task
  3. The task is exploration oriented
  4. Results can be flexible

Example #

This is an example of a zero-shot prompt.

Classify the following text as active or passive:
Text: The cat stretched out and sat on the mat
Classification:

When doing zero-shot prompting, we do not provide any examples to the LLM, but it still understands what it needs to do.
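If you are calling a model programmatically, a zero-shot prompt is simply the task sent on its own, with no examples attached. Below is a minimal sketch using the OpenAI Python SDK; the model name is a placeholder and the setup assumes an API key in the environment, so adapt it to whichever model and provider you actually use.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The zero-shot prompt: just the task, with no examples attached.
prompt = (
    "Classify the following text as active or passive:\n"
    "Text: The cat stretched out and sat on the mat\n"
    "Classification:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)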

Limitations #

The results from the large language model depend on the model’s interpretation of our prompt wording. If the task is ambiguous, the results may not be what we expect.

For domain-specific tasks which require any of the following:

  1. Specialized reasoning
  2. Multi-step logic

zero-shot prompting may leave the model unsure of the desired process and output.

Few-Shot Prompting #

Few-Shot prompting is a technique where the AI model is given a few examples of the task within the prompt, before the actual question.

When to Use #

This type of prompting works best when the task fulfills the following criteria:

  1. The task has patterns.
  2. You need a very specific format (e.g. JSON output).
  3. Past attempts have been inconsistent.
  4. Ideal examples are readily available.

Example #

This is an example of a few-shot prompt.

Classify the following text as active or passive:
Text: The cat chased the mouse
Classification: active

Classify the following text as active or passive:
Text: A meal is being cooked by me
Classification: passive

Classify the following text as active or passive:
Text: The cat stretched out and sat on the mat
Classification:

In this example, we provided the LLM with two different examples of what we want before asking the actual question at the end.

By providing it with some examples of what we want, we are teaching the AI model what it needs to do.

The more difficult the task is, the more examples we can provide to ensure that the AI model understands what we want from it.
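Because the examples are the part you will tweak most often, it can help to keep them as data and assemble the few-shot prompt programmatically. The sketch below is just an illustration of that idea in Python, reusing the active/passive examples from above; the function name is made up for this post.

# Example pairs we want the model to imitate (taken from the prompt above).
EXAMPLES = [
    ("The cat chased the mouse", "active"),
    ("A meal is being cooked by me", "passive"),
]

def build_few_shot_prompt(text: str) -> str:
    """Assemble a few-shot prompt from the example pairs plus the new input."""
    parts = []
    for example_text, label in EXAMPLES:
        parts.append(
            "Classify the following text as active or passive:\n"
            f"Text: {example_text}\n"
            f"Classification: {label}\n"
        )
    # The actual question goes last, with the classification left blank.
    parts.append(
        "Classify the following text as active or passive:\n"
        f"Text: {text}\n"
        "Classification:"
    )
    return "\n".join(parts)

print(build_few_shot_prompt("The cat stretched out and sat on the mat"))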

Limitations #

Few-shot prompting may not work when there are multiple reasoning steps involved in solving the problem. Examples include arithmetic problems, where the AI model needs to perform multi-step computations before arriving at the final answer. For such tasks, chain of thought prompting may be better suited.

For cases where we do not yet know what we want, we cannot provide good examples of it. In such cases, we should revert to zero-shot prompting.

Chain of Thought Prompting #

Chain of thought prompting is a method of prompting where the model breaks down the task into small intermediate reasoning steps.

When to Use #

Chain of thought prompting is useful when the task requires multi-step reasoning, and the AI model needs to break down the task into smaller steps to arrive at the final answer.

Example #

Q: Find the sum of odd numbers in the list [1, 2, 3, 4, 5, 6]
Solve this question by solving each of these steps individually:
1. Identify all the odd numbers in this list.
2. Sum all of the odd numbers which are identified in step 1.

This is an example of chain of thought prompting: we ask the AI to work out the solution step by step instead of just returning the answer directly.

Q: Find the sum of odd numbers in the list [1, 2, 3, 4, 5, 6]
Go through each step one by one and show all your work.

A simple way of doing this without explicitly listing the steps is to literally ask the model to solve the problem step by step and show its work.
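With either phrasing, the model’s answer might look something like the following (the exact wording will vary between runs):

1. The odd numbers in the list are 1, 3 and 5.
2. Their sum is 1 + 3 + 5 = 9.
Answer: 9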

Limitations #

Chain of thought prompting depends very much on the size of the model. If the context window is too small, the model may not be able to generate a coherent chain of thought as some context from the initial chain may be lost while generating the next step.

The cost of computation and the token cost of generating a chain of thought can be high, especially for large models as the model has to generate each of the steps, resulting in slower responses.

This way of prompting is overkill for simple tasks where the answer is straightforward.

Metaprompting #

Metaprompting is a way to enhance the quality of outputs from language models by using the language model itself for prompt optimizations.

There can be multiple iterations to improve the prompt. During each iteration, the model can be asked to give feedback on the quality of the current prompt and then generate a new prompt based on that feedback.

When to Use #

  1. Generating better prompts to solve a specific problem.
  2. Improving existing prompts for clarity, structure and effectiveness.

Example #

Write a prompt that generates a blog post about the current state of clean energy.

The prompt here asks the model to write a prompt that the model itself can then use to generate a blog post about the current state of clean energy.

Write a comprehensive and engaging blog post about the current state of clean energy in 2025.
Begin with an overview of global trends in renewable energy adoption, including solar, wind, hydro, and emerging technologies like green hydrogen and fusion.
Discuss major recent breakthroughs, government policies, and corporate investments driving the transition away from fossil fuels.
Highlight both the progress made and the challenges that remain, such as energy storage, grid modernization, and equitable access.
Use a balanced, informative tone suitable for a general audience interested in sustainability.
End with a forward-looking conclusion about what to expect in the next five years.

After the first iteration, we might get something like the above (this one actually came from ChatGPT). We can further validate it by:

  1. Passing the prompt to ChatGPT to see the results.
  2. Asking the model to evaluate how good the prompt is.

By repeating this cycle, we can systematically improve the prompt until the results are satisfactory.
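As a rough sketch, this cycle can even be automated with a small loop that asks the model to critique the current prompt and then rewrite it. This is only an illustration of the idea: the helper ask_model is a stand-in for whichever chat API you use, and the number of rounds and the wording of the critique instructions are arbitrary choices.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_model(prompt: str) -> str:
    """Stand-in helper that sends a single user message to a chat model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

task = "a blog post about the current state of clean energy"
candidate = ask_model(f"Write a prompt that generates {task}.")

for _ in range(3):  # arbitrary number of refinement rounds
    feedback = ask_model(
        "Give feedback on the clarity, structure and effectiveness of this prompt:\n" + candidate
    )
    candidate = ask_model(
        "Rewrite the prompt below so that it addresses the feedback.\n"
        f"Prompt:\n{candidate}\n\nFeedback:\n{feedback}\n"
        "Return only the improved prompt."
    )

print(candidate)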

Limitations #

Metaprompting assumes that the model has a good understanding of the specific task that is being addressed. If the problem is something that the model is not familiar with, the results it generates may not be as desirable.

Tree of thoughts #

Tree of thoughts is an extension of chain of thought prompting. This type of prompting encourages the model to maintain a tree of thoughts (hence the name) to be used as intermediate steps to solve a problem.

It enables the model to self-evaluate its own progress towards solving a problem. This can be combined with search algorithms to explore the thoughts systematically and find the best final output.
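To make the search part concrete, here is a very simplified sketch of a breadth-first style search over thoughts. The helpers propose_thoughts and score_thought are hypothetical placeholders for LLM calls that suggest candidate next steps and self-evaluate how promising a partial solution is; a real tree of thoughts implementation is considerably more involved.

def propose_thoughts(problem: str, path: list[str]) -> list[str]:
    """Hypothetical LLM call: propose a few candidate next reasoning steps."""
    raise NotImplementedError("Replace with a call to your chosen model.")

def score_thought(problem: str, path: list[str]) -> float:
    """Hypothetical LLM call: rate how promising this partial solution is (0 to 1)."""
    raise NotImplementedError("Replace with a call to your chosen model.")

def tree_of_thoughts(problem: str, depth: int = 3, beam_width: int = 2) -> list[str]:
    """Search over reasoning paths, keeping only the most promising few at each level."""
    frontier = [[]]  # each entry is a partial chain of thoughts
    for _ in range(depth):
        candidates = []
        for path in frontier:
            for thought in propose_thoughts(problem, path):
                candidates.append(path + [thought])
        # Self-evaluation step: keep only the best-scoring paths (the "beam").
        candidates.sort(key=lambda p: score_thought(problem, p), reverse=True)
        frontier = candidates[:beam_width]
    return frontier[0] if frontier else []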

When to Use #

Tree of thought prompting is similar to chain of thought prompting: it is useful when the task requires multi-step reasoning and you are unsure of the best way to approach the problem. It can also be used when the model is not familiar with the problem.

Example #

Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is <Insert prompt here>

This is an example of a tree of thought prompt.

Limitations #

Similar to chain of thought prompting, tree of thought prompting can be time-consuming and may not always lead to the best solution.

It is even more time-consuming in this case as you have to decide which part of the solution tree to explore next. There is also backtracking if we eventually decide to return to a previous step and explore it further.

Conclusion #

These are some of the different prompt types that can be used to improve the performance of AI models. These prompt types do not have to be used independently of each other. For example, we can use chain of thought prompting together with few-shot prompting by providing the model with examples of the kind of chain of thought that we want.
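For instance, a few-shot chain of thought prompt for the earlier arithmetic task might look something like this, where the worked example is supplied by us and the model is expected to imitate the same step-by-step format:

Q: Find the sum of odd numbers in the list [2, 3, 5, 8]
A: The odd numbers are 3 and 5. Their sum is 3 + 5 = 8. The answer is 8.

Q: Find the sum of odd numbers in the list [1, 2, 3, 4, 5, 6]
A: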

The list stated here is not exhaustive, but it covers the most common prompt types. For more information you can refer to the resources below.

In the next blog post, I will write more about the different tips and tricks that can be used to improve our prompting skills to get the best results.