
Effective Prompt Engineering

Published at 08:00 AM (23 min read)


Introduction

As I sit here on a somewhat cold British morning, with the country shimmying into Autumn, I’ve just watched the Microsoft Wave 2 announcement and am writing this blog post ready for this afternoon. It is clear that Microsoft is sharpening its focus on Copilot as it continues its journey to integrate Generative AI into the modern workday through the tools we interact with hour after hour. Yet I still think there is value in understanding how to make the most of Large Language Models as consumers.

If you’re anything like me and regularly look through job adverts to see the state of play in our sector, then you might have come across the role of ‘Prompt Engineer’ appearing in our humble capital of London. I’m not entirely sure that prompt engineering constitutes its own job role, but I do believe that practical prompt engineering and understanding how to interact efficiently with language models is swiftly becoming a soft skill worth having.

To make sure I explain what ‘Prompt Engineering’ is adequately, let me say this: it is typically the practice of crafting precise and structured inputs to optimise the quality of outputs generated by AI models, such as those based on machine learning or natural language processing (NLP). As AI increasingly integrates into various industries, from content creation to customer service, prompt engineering is becoming a genuine soft skill for ensuring relevant, accurate, and valuable responses from AI systems.

With that in mind, this blog will go over some of the AI services out there, some of the data security considerations to bear in mind when using your organisation’s data within prompts, and then effective prompt engineering techniques and why they work.

AI Services

In the changing landscape of artificial intelligence, several significant providers offer robust AI services tailored to diverse business needs. Understanding the technical nuances of these services is crucial for selecting the right platform for your applications. Below is an overview of some leading AI service providers and their offerings:

OpenAI

Services Offered:

Technical Features:

Google Gemini

Services Offered:

Technical Features:

Microsoft 365 Copilot

Services Offered:

Technical Features:

Amazon Web Services (AWS) AI

Services Offered:

Technical Features:

IBM Watson

Services Offered:

Technical Features:

Anthropic’s Claude

Services Offered:

Technical Features:

GROK

Services Offered:

Technical Features:

Data Security

Data security is critical when integrating AI services into your organisation’s workflows. Handling sensitive information requires a comprehensive understanding of how different AI platforms manage data privacy, encryption, access controls, and compliance with regulatory standards. Below is a more in-depth look at data security considerations for leading AI service providers:

OpenAI Security

Data Usage and Privacy:

Security Measures:

Google Gemini Security

Data Usage and Privacy:

Security Measures:

Microsoft 365 Copilot Security

Data Usage and Privacy:

Security Measures:

Azure OpenAI Security

Data Usage and Privacy:

Security Measures:

Amazon Web Services (AWS) AI Security

Data Usage and Privacy:

Security Measures:

IBM Watson Security

Data Usage and Privacy:

Security Measures:

Anthropic’s Claude Security

Data Usage and Privacy:

Security Measures:

GROK Security

Data Usage and Privacy:

Security Measures:

Data Security Rankings

Data security is critical when leveraging AI services, especially for enterprise users who want to use these capabilities to be more productive in the workplace, as it encompasses data privacy, encryption, access control, and compliance with regulatory standards. Each AI service provider offers a unique set of security features and data-handling policies. Below is my ranking of the providers, favouring those that respect data sovereignty and ensure proprietary information is not used to train their public models.

  1. Azure OpenAI

Why It Ranks Highest:

  2. AWS AI

Why It Ranks Highly:

  3. Microsoft 365 Copilot

Why It Ranks Well:

  4. Anthropic’s Claude

Why It Ranks Favourably:

  5. IBM Watson

Why It Ranks Respectably:

  6. OpenAI

Why It Ranks Moderately:

  7. Google Gemini

Why It Ranks Lower:

  8. GROK

Why It Ranks Lowest:

Summary of Rankings

  1. Azure OpenAI
  2. AWS AI
  3. Microsoft 365 Copilot
  4. Anthropic’s Claude
  5. IBM Watson
  6. OpenAI
  7. Google Gemini
  8. GROK

Key Considerations for Data Governance and Security:

When selecting an AI service provider, it is crucial to thoroughly assess these factors in the context of your organisation’s specific needs and regulatory obligations. Ensuring that the chosen platform aligns with your data protection requirements will safeguard sensitive information and maintain trust in your AI-driven initiatives.

Prompt Engineering

Prompt engineering is the practice of designing and structuring input prompts to guide AI models toward producing high-quality, relevant, contextual outputs. It means crafting instructions that direct the AI and improve the model’s ability to understand complex queries.

Definition:

Prompt engineering involves crafting well-structured inputs (prompts) to maximise the quality and relevance of AI-generated outputs. Users can guide AI systems to generate precise and contextually relevant responses through refined and detailed instructions.

Importance:

The effectiveness of AI largely depends on how clearly and precisely it is instructed through prompts. A well-designed prompt can enhance AI’s capability to understand and generate high-quality responses, while a poorly designed prompt can lead to irrelevant or incorrect outputs. Mastering prompt engineering is, therefore, crucial to improving AI interactions and achieving desired results.

The Context Window

The context window in ChatGPT refers to the amount of information or text that the model can consider when generating a response. It is essentially a memory buffer that includes all the input the model uses to produce its output in a given session. This input typically consists of three key components: the system message, the conversation history, and the current user prompt.

Context Window Diagram

How the Context Window Works

The model doesn’t have unlimited memory; it can only consider a limited number of tokens (roughly words or word fragments) in its context window. For GPT-4, this token limit varies depending on the specific version used: the standard GPT-4 model can process up to 8,000 tokens, while some versions can handle up to 32,000. When the conversation history plus the user prompt exceeds that limit, older messages are truncated, i.e. dropped from the context window, which means the model might “forget” earlier parts of a long conversation.
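As a rough illustration of how truncation can work, the sketch below drops the oldest messages from a conversation until what remains fits a token budget. It assumes the `tiktoken` tokeniser library is available; the 8,000-token budget and the simple “oldest first” strategy are illustrative choices, not how any particular vendor implements it.

```python
import tiktoken

# Illustrative only: a simple "drop the oldest messages first" truncation strategy.
enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by GPT-4-era models

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

def fit_to_window(system_message: str, history: list[str], user_prompt: str,
                  token_limit: int = 8000) -> list[str]:
    """Drop the oldest history entries until everything fits within the token limit."""
    used = count_tokens(system_message) + count_tokens(user_prompt)
    kept: list[str] = []
    # Walk the history from newest to oldest, keeping whatever still fits.
    for message in reversed(history):
        cost = count_tokens(message)
        if used + cost > token_limit:
            break  # older messages beyond this point are "forgotten"
        kept.insert(0, message)
        used += cost
    return [system_message, *kept, user_prompt]
```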

Impact of the Context Window

By considering past exchanges and instructions, the model can generate more contextually appropriate and personalised responses, making the interaction feel more natural. The limitation is that when the conversation grows too long and earlier parts are dropped from the window, the model may lose track of important information discussed earlier, leading to less coherent or relevant responses in extended interactions.

The context window is a dynamic memory buffer that includes the system message, conversation history, and user prompt. It allows the model to generate context-aware responses but is limited by its token capacity.

The Basics of Effective Prompt Engineering

Several foundational strategies guide prompt engineering. Users can enhance the performance of AI models by focusing on simplicity, specificity, and context.

Start Simple

When interacting with AI, begin with clear, concise prompts and iterate based on the results. Avoid overcomplicating the initial prompt; start with simple instructions and gradually add complexity as needed. This approach allows you to see how the AI responds and adjust accordingly.

Be Specific

Specificity is key to minimising irrelevant or generalised outputs. Include detailed instructions, such as the desired tone, format, and key points you want the AI to address. Being specific helps the AI generate the most helpful response.

Context Matters

Providing the AI with adequate context is essential for ensuring that its responses are relevant and accurate. The more context you provide, the better the AI will understand the situation or problem, allowing for more insightful responses.
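To make the difference concrete, here is a minimal before-and-after comparison; both prompts are my own illustrations of the principle rather than examples taken from a specific tool.

```python
# A vague prompt: the model has to guess the audience, length, tone, and format.
vague_prompt = "Write about prompt engineering."

# A specific prompt: task, audience, tone, format, and length are all stated up front.
specific_prompt = (
    "Write a 200-word introduction to prompt engineering for IT managers "
    "who are new to generative AI. Use a professional but approachable tone, "
    "and finish with three bullet-point takeaways."
)
```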

Key Techniques to Improve Prompts

Refining prompts through specific techniques can significantly enhance the quality of AI outputs.

  1. Iterative Prompting - Refine responses by asking follow-up questions or requesting clarifications. This process allows for dynamic, continuous improvement of the prompt based on received outputs.
  2. Chain-of-Thought Prompting - Break complex queries into smaller, manageable parts to guide the AI step-by-step. This helps the AI approach multi-step problems more systematically; a short sketch follows this list.
  3. Question Refinement Pattern - Ask the AI to suggest better questions for unclear queries, improving the prompt’s precision. This meta-cognitive approach encourages the AI to assist in refining the prompt itself.
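Here is a minimal sketch of chain-of-thought prompting, assuming the v1-style OpenAI Python SDK; the model name and the prompt wording are illustrative assumptions rather than a prescribed recipe.

```python
from openai import OpenAI  # assumes the v1-style OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chain-of-thought prompting: ask the model to reason step by step
# before committing to a final answer.
prompt = (
    "A subscription costs £12 per month with a 15% discount if paid annually up front.\n"
    "Work through this step by step:\n"
    "1. Calculate the undiscounted annual cost.\n"
    "2. Apply the 15% discount.\n"
    "3. State the final annual price on its own line, prefixed with 'Answer:'."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; substitute whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```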

Advanced Techniques

As users become more experienced, employing advanced prompt engineering techniques can lead to more precise outputs.

  1. Persona Setting - Define a specific role or perspective (e.g., “As an SEO expert…”) to tailor the AI’s responses. This technique allows the AI to respond from a specific point of view, making responses more contextually appropriate.
  2. Template Filling - Use placeholders to structure prompts for dynamic content creation. This technique is beneficial in automated emails or product descriptions. It provides flexibility, allowing for quick variations of similar content (see the sketch after this list).
  3. Prompt Re-framing - Subtly rephrase prompts to explore nuances and get varied responses. Prompt re-framing encourages creativity and helps users view problems from multiple angles.
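The sketch below combines persona setting (as a system-style instruction) with template filling; the product name, audience, and placeholder values are hypothetical and only illustrate the shape of the technique.

```python
# Persona setting: a system-style instruction that fixes the model's point of view.
PERSONA = "You are an experienced B2B email copywriter."

# Template filling: a reusable prompt with placeholders for the dynamic parts.
EMAIL_TEMPLATE = (
    "Write a short product update email.\n"
    "Product: {product}\n"
    "Audience: {audience}\n"
    "Key benefit to emphasise: {benefit}\n"
    "Tone: {tone}\n"
    "End with a one-line call to action."
)

# Fill the placeholders for one specific campaign (all values are hypothetical).
prompt = EMAIL_TEMPLATE.format(
    product="Contoso Copilot add-in",
    audience="IT administrators",
    benefit="fewer manual support tickets",
    tone="friendly but concise",
)
```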

Common Pitfalls and How to Avoid Them

Prompt engineering requires attention to detail. Here are some common pitfalls and strategies for avoiding them:

Vague Prompts

Lack of clarity leads to poor outputs. Be precise about tasks and expected outcomes. Define your expectations to guide the AI more effectively.

Lack of Examples

Without specific examples, AI responses can be generic. Include examples to guide the AI. Providing examples helps the AI understand the expected output style and structure.
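For instance, a handful of worked examples (often called few-shot prompting) shows the model the exact style and structure you expect. The status updates below are invented purely for illustration.

```python
# Few-shot prompting: show the model worked examples so it copies the desired style.
prompt = (
    "Rewrite each status update as a single, plain-English sentence for a weekly report.\n\n"
    "Update: 'Deployed hotfix 4.2.1 to prod, CPU alerts cleared.'\n"
    "Report line: Released hotfix 4.2.1 to production, which resolved the CPU alerts.\n\n"
    "Update: 'Migrated 3 mailboxes, 2 failed on permissions.'\n"
    "Report line: Migrated three mailboxes; two failed due to permissions and will be retried.\n\n"
    "Update: 'Renewed TLS certs on the edge nodes.'\n"
    "Report line:"
)
```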

Misaligned Persona

Ensure your persona matches the intended audience for more accurate, resonant responses. A mismatch can lead to irrelevant or off-tone results.

Effective Prompting Patterns

Certain patterns of prompting can be particularly effective in specific scenarios:

  1. Forecasting Pattern - Feed raw data for AI to make predictions. Ensure you give adequate background data.
  2. Cognitive Verifier Pattern - Ask the AI to list factors to consider before answering a complex query, ensuring thoughtful responses (illustrated in the sketch after this list).
  3. Interactive Role-Playing - Engage in back-and-forth interactions, such as collaborative storytelling, for creative applications. This is particularly useful for creative writing, marketing, or simulations.
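As an illustration of the cognitive verifier pattern, the prompt below asks the model to surface the relevant factors before it answers; the scenario and wording are my own rather than a quoted example.

```python
# Cognitive verifier pattern: have the model enumerate the factors it should
# weigh up before it commits to an answer.
prompt = (
    "I need to decide whether to roll out Microsoft 365 Copilot to the whole organisation "
    "or to a pilot group first.\n"
    "Before answering, list the factors you would need to consider "
    "(licensing cost, data governance, user readiness, and anything else relevant), "
    "then give a recommendation based on those factors."
)
```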

Improving Response Quality

Maximising the quality of responses requires refining the prompt with additional elements:

Task Definition

Always clearly define the task you want the AI to perform, including the expected response format (e.g., blog post, list, summary). A well-defined task ensures the AI understands the objective and produces a more appropriate result.

Tone and Style

Specify the desired tone and style of the output, such as formal, conversational, or technical. This ensures that the AI generates a response that aligns with your expectations and the needs of your audience.

Format Specifications

If you need the output in a specific format (e.g., bullet points, structured paragraphs, headings), include these instructions in your prompt. Precise format specifications can significantly improve the clarity and usability of the AI-generated response.
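Putting the three elements together, a fully specified prompt might look like the sketch below; the topic, word count, and wording are illustrative assumptions rather than examples from the post.

```python
prompt = (
    # Task definition: what to produce.
    "Write a 300-word summary of our AI acceptable-use policy for the staff intranet.\n"
    # Tone and style.
    "Tone: plain English, reassuring, no legal jargon.\n"
    # Format specifications.
    "Format: a one-sentence introduction, five bullet points, and a closing sentence "
    "telling staff where to ask questions."
)
```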

Case Study: Prompt Engineering in Action

To illustrate the effectiveness of prompt engineering, let’s explore a real-world scenario.

- Scenario: Generating a personalised email for a marketing campaign.
- Prompt: “Create an email to promote our new AI product.”
- Context: “Targeting tech enthusiasts familiar with AI.”
- Persona: “Tech-savvy marketer.”
- Format: “Email format with catchy subject, intro, body, and CTA.”
- Example: Highlight the product’s unique features and benefits (Slash Co).

With these elements in place, the AI will generate a personalised marketing email focusing on the product’s strengths for the target audience.
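Assembled into a single request, the case study’s components might look like the following sketch, again assuming the v1-style OpenAI Python SDK; the model name is an illustrative placeholder.

```python
from openai import OpenAI

client = OpenAI()

# The persona goes in the system message; the task, context, format, and example
# guidance go in the user prompt.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a tech-savvy marketer."},
        {
            "role": "user",
            "content": (
                "Create an email to promote our new AI product.\n"
                "Audience: tech enthusiasts who are already familiar with AI.\n"
                "Format: catchy subject line, short intro, body, and a clear call to action.\n"
                "Highlight the product's unique features and benefits."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```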

Practical Tips for Continuous Improvement

Prompt engineering is an evolving skill. To ensure continuous improvement:

Experimentation

Prompt engineering is an iterative process. Regular experimentation and adjustment of prompts based on the AI’s responses can improve outcomes over time. You can fine-tune the process and achieve better results by experimenting with different prompt styles and techniques.

Model Updates

Always use the latest AI models to stay ahead, as they leverage the most recent training data and are better equipped to handle more complex queries and generate higher-quality responses.

Collaborative Learning

AI can assist in refining its outputs. Using its feedback, you can progressively improve prompts, turning AI interactions into a collaborative learning process. Encourage the AI to self-improve by asking it to reflect on and refine its responses.
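One practical way to do this is to feed the model’s first draft back to it with a critique-and-revise request. The follow-up wording below is a minimal sketch of my own, not a prescribed formula.

```python
# Self-refinement follow-up: ask the model to critique and then rewrite its own draft.
follow_up_prompt = (
    "Review your previous answer. List three specific weaknesses, covering clarity, "
    "tone, and structure, then rewrite the answer so that each weakness is addressed."
)
```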