
Prompt Engineering: Prompting Guide


Introduction #

In this section of the Prompt Engineering blog post series, I will go through various tips and tricks to improve the effectiveness of our prompts.

The tricks in this blog post do not exist in isolation from each other. Feel free to combine multiple tricks to improve your prompts even further.

These tricks are split into multiple categories:

  1. Frameworks: To help structure your prompt before it is sent to the AI.
  2. Ways to prompt: Words and phrasing to use within the prompt to improve its effectiveness.
  3. Tools: Different tools that you can use to improve your prompts.

This list is not exhaustive and there are many more tricks that you can use to improve your prompts.

There may be a follow-up blog post in the future to cover even more tricks for improving prompts.

Framework #

In this section, we will go through a framework that you can use to structure your prompt. It helps the model understand the context of the prompt and the other information it needs.

Include the following information in your prompt:

  1. Role: The role you want the AI to play
  2. Context: Background information for the task
  3. Task: Clearly state what needs to be done
  4. Format: Define how the output should be structured
  5. Parameters: Set constraints and special requirements

This provides the model with sufficient context.

Example #

Let us go through this with a basic example prompt and look at how to improve it.

Basic Prompt: Write a new blog post about AI Prompt Engineering

This is a basic prompt. It lacks the information the AI needs to produce a good result:

  1. No target audience or personalization
  2. Vague requirements
  3. Generic topic

Because of this, the AI is likely to generate low-quality results that may not be relevant to the user's needs.

To improve the prompt, we can make use of the prompting formula above.

Role: You are a tech blogger specializing in AI Prompt engineering.
Context: Interest in AI Prompting is increasing rapidly and you want to capitalize on this trend.
Task: Write a blog post to share with your audience members who want to learn more about AI Prompting and how to get effective results from AI.
Format:
  The blog post should highlight the following:
  1. The importance of good prompting skills
  2. The different methods of prompting
  3. Examples of a basic prompt and how to improve it
Parameters:
  1. Format the blog post with a compelling title and a catchy introduction that grabs the reader's attention.
  2. Keep the blog post within 1000 words
  3. Optimize it for SEO and readability

Using the formula given above, there are far more details provided, and the AI has more information to work with.
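
As a rough illustration, here is a small Python sketch that assembles the five parts of the formula into a single prompt string. The helper name and layout are purely illustrative; any way of combining the parts works just as well.

# A minimal sketch (hypothetical helper) that assembles the
# Role/Context/Task/Format/Parameters formula into one prompt string.
def build_prompt(role: str, context: str, task: str, output_format: str, parameters: str) -> str:
    """Combine the five parts of the prompting formula into one prompt."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format:\n{output_format}\n"
        f"Parameters:\n{parameters}"
    )

prompt = build_prompt(
    role="You are a tech blogger specializing in AI Prompt engineering.",
    context="Interest in AI Prompting is increasing rapidly and you want to capitalize on this trend.",
    task="Write a blog post for readers who want to learn how to get effective results from AI.",
    output_format="1. The importance of good prompting skills\n"
                  "2. The different methods of prompting\n"
                  "3. Examples of a basic prompt and how to improve it",
    parameters="1. Compelling title and catchy introduction\n"
               "2. Keep it within 1000 words\n"
               "3. Optimize for SEO and readability",
)
print(prompt)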

Ways to Prompt #

Think Harder #

The quality of the model's response often improves when it is asked to think hard about the problem.

Let me follow this up with an explanation of why this works, using GPT-5 as an example.

(Diagram: GPT-5 architecture. Your input goes to a router, which sends it to either a small model or a deep reasoning model.)

In the new GPT-5 system from OpenAI, before the input reaches any model, a router decides whether the prompt will be handled by the smaller, more efficient model or by the larger deep reasoning model.

By adding "Think long and hard about this before giving a reply" to the prompt, you nudge the router to send it to the larger deep reasoning model instead of the smaller one. This will hopefully result in a more accurate and detailed response.
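
Below is a minimal sketch of this in practice, assuming the OpenAI Python SDK; the model name and the question are assumptions, so swap in whatever you actually use. The only important part is the extra sentence appended to the prompt.

# A minimal sketch, assuming the OpenAI Python SDK; the model name is an
# assumption, so adjust it to whichever model you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What are the trade-offs between SQL and NoSQL databases for a small startup?"
prompt = question + "\n\nThink long and hard about this before giving a reply."

response = client.chat.completions.create(
    model="gpt-5",  # assumption: adjust to the model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)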

Partial Input Completion (Putting words in their mouth) #

When you provide partial content, the model will complete what it thinks is the rest of that content as its response.

For example, if you want the reply to be a single word, you can add "One Word Answer:" at the end of the prompt, so the model continues from there when it responds.

(Diagram: the LLM inference process, from user prompt through prompt analysis, pattern matching, word prediction, and response generation to the final answer.)

Large language models work like a buffed-up autocomplete. Given the words that come before (i.e. your prompt), they try to complete what should come after. By partially completing the model's answer, you can nudge the reply in the direction you want.
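
Here is a tiny sketch of the idea; the question is just an example, and you can send the resulting prompt to any model.

# A tiny sketch of partial input completion: the prompt ends with the start of
# the answer we want, so the model simply continues it.
question = "Which planet in the solar system has the most known moons?"
prompt = f"{question}\nOne Word Answer:"
# Send `prompt` to your model of choice; the reply should continue straight
# after the colon with a single word such as "Saturn".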

Include Completion Strategy #

Instead of just giving a basic prompt to the model, you can include a completion strategy at the end of the prompt.

Create a blog post about how to create optimized prompts for large language models.

The above shows a basic prompt. To improve it, we can give the model the different sections that it should include.

Create a blog post about how to create optimized prompts for large language models.
Include the following parts:
1. Introduction
2. Frameworks for prompting
3. Prompt Wording
4. Tools for optimizing prompts
5. Conclusion
6. Useful resources for the user to follow

By laying out the reply format in the prompt, we make the model consider the structure we want before it generates the response.

Enclosing the context with XML Tags #

When a prompt involves multiple components (e.g. context, instructions, examples, formatting), using XML tags can help improve the quality of the model's output.

It helps the model understand each part of our prompt more accurately, resulting in better answers.

If the information is hierarchical, we can also nest the tags to further improve the model's performance, as in the sketch below.
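
Below is a sketch of a prompt with each component wrapped in XML tags, including a nested example block. The tag names and the classification task are only illustrative; the point is that each part of the prompt is clearly delimited.

# A sketch of a prompt with each component wrapped in XML tags, including a
# nested <example> block. The tag names are only illustrative.
prompt = """
<context>
You are reviewing customer feedback for an online bookstore.
</context>
<instructions>
Classify the sentiment of the feedback as positive, negative, or neutral.
</instructions>
<examples>
  <example>
    <feedback>Delivery was quick and the book arrived in perfect condition.</feedback>
    <sentiment>positive</sentiment>
  </example>
</examples>
<feedback>
The cover was bent and customer support never replied to my email.
</feedback>
"""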

Role Prompting #

Role prompting assigns a role to the model to guide the accuracy and style of its responses.

Giving the model a role shapes how it processes the prompt and generates the response.

If the problem you are giving it is a mathematics problem, assigning the model the role of a mathematician will improve the accuracy of its response.
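
As a rough sketch, the role can be supplied as a system message. This assumes the OpenAI Python SDK; the model name, role text, and question are only examples.

# A minimal sketch of role prompting via a system message, assuming the OpenAI
# Python SDK; the model name, role text, and question are only examples.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",  # assumption: use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a careful mathematician. Show your working step by step."},
        {"role": "user", "content": "A train travels 180 km in 2.5 hours. What is its average speed in km/h?"},
    ],
)
print(response.choices[0].message.content)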

Including examples #

For problems where we expect a particular output format (or want the model to follow a specified format), providing a clear example of what is expected will improve the quality of the model's output.

For example, if we want the model to generate a list of 5 fruits, we can provide an example of what the output should look like:

[
  "Apple",
  "Banana",
  "Orange",
  "Mango",
  "Pineapple"
]

By providing the model with an example of the expected format, we are essentially teaching it to follow our example and generate a response in the same format.

The more examples we provide, the better the model will be at following them and generating a response in the same format.
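
As a small sketch, the fruit list above can be embedded directly in the prompt so the model copies its format; the task and wording are only illustrative.

# A sketch of embedding the example output in the prompt so the model copies
# its format. The task and example are only illustrative.
example_output = """[
  "Apple",
  "Banana",
  "Orange",
  "Mango",
  "Pineapple"
]"""

prompt = (
    "List 5 vegetables.\n"
    "Reply with a JSON array only, in exactly the same format as this example:\n"
    + example_output
)
# Send `prompt` to your model; the reply should be a JSON array of 5 vegetables.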

Multimodal Prompting #

Instead of prompting with only words or only images, we can prompt the model with both (if it supports multiple forms of input).

This allows the model to consider both textual and visual information when generating a response.

For example, if we want the model to generate an image, the quality of the generation will be higher if we provide both a textual description of what we want and a reference image in the style we want.

For example, if we want the model to generate a blue-themed icon, we can ask in the text for a logo with a blue background and a blue colour scheme.

By providing the model with an image we want it to reference, it will be able to generate a response closer to what we want.
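
As a rough sketch, text and a reference image can be sent together in one message. This assumes the OpenAI Python SDK and a vision-capable model; the model name and image URL are placeholders for your own.

# A sketch of a multimodal prompt, assuming the OpenAI Python SDK and a
# vision-capable model. The model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable model works here
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this icon's blue colour scheme and style so I can reuse it in an image generation prompt."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/reference-icon.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)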

Tools #

In this section, we will go through some tools that we can use to improve our prompts.

GPT 5 Prompt Optimizer #

OpenAI released a prompt optimizer for GPT-5.

We can type in a prompt that we want to optimize and the tool will help us improve it. It applies the various optimizations mentioned in the GPT-5 Prompting Guide, probably through the use of metaprompting.

Things to Consider #

Large language models may not be the perfect way to solve all of your problems. Due to their probabilistic nature, take extra care when asking for:

  1. Factual information, as the model can hallucinate.
  2. Math and logic problems, as they may not be solved correctly.

Conclusion #

In this blog post, we went through the different methods we can use to improve the effectiveness of our AI prompts.

These methods can be used in conjunction with each other for even better results.

Experimentation is key to finding the best prompts for your specific use case. Hopefully, you will find these methods useful in your own AI prompt engineering journey. Please let me know if there are any other methods that have worked for you.