Prompt engineering is the process of crafting and refining prompts to improve the performance of generative AI models. It involves providing specific inputs to tools like ChatGPT, Midjourney, or Gemini, guiding the AI to deliver more accurate and contextually relevant outputs.
When an AI model doesn't produce the desired response, prompt engineering allows us to iterate and adjust the prompt to optimize the output. This method is particularly useful for overcoming limitations of generative models, such as logical errors or insufficient context in responses.
A prompt is the input or instruction given to an AI model to generate a response. Prompts can be simple (a question) or complex (detailed instructions with context, tone, style, and format specifications). The quality of the AI's response depends directly on how clear, detailed, and structured the prompt is.
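To make the contrast concrete, here is a minimal sketch in Python; both prompt strings are invented for this illustration:

```python
# A simple prompt: just a question.
simple_prompt = "What is prompt engineering?"

# A structured prompt: context, tone, style, and format are all specified.
structured_prompt = (
    "You are a technical writer for a developer blog. "    # context
    "Explain prompt engineering to junior engineers "      # task
    "in a friendly, plain-spoken tone, "                   # tone and style
    "as three bullet points of one sentence each."         # format
)
```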
Zero-shot prompting is the most basic form of prompting. It simply shows the Large Language Model (LLM) a prompt without examples or demonstrations and asks it to generate a response. You've already seen this technique in the Basics docs, such as Giving Instructions and Assigning Roles.
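A minimal zero-shot sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name and the review text are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Zero-shot: the prompt contains instructions only, no worked examples.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            "Classify the sentiment of this review as positive, negative, "
            "or neutral: 'The battery died after two days.'"
        ),
    }],
)
print(response.choices[0].message.content)
```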
Few-shot prompting provides the model with example pairs of problems and their correct solutions. These examples help the model better understand the context and improve its responses.
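A few-shot sketch of the same classification task, under the same assumptions; the three worked example pairs are invented for the illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Few-shot: example problem/solution pairs precede the new input.
few_shot_prompt = """Classify the sentiment of each review.

Review: "Arrived quickly and works perfectly." -> positive
Review: "The screen cracked on day one." -> negative
Review: "It does what the box says, nothing more." -> neutral
Review: "The battery died after two days." ->"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # expected: "negative"
```

Note how the examples implicitly fix the output format: the model tends to answer with a single lowercase word, matching the pattern it was shown.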
Chain of Draft (CoD) addresses the cost of lengthy reasoning traces by introducing a more efficient approach to LLM reasoning. Inspired by human problem-solving patterns, where we typically jot down only essential information, CoD demonstrates that effective reasoning doesn't require lengthy explanations.
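A rough sketch of the idea, again assuming the OpenAI Python SDK; the drafting instruction is a paraphrase of the CoD prompt and the arithmetic question is invented:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# CoD swaps verbose chain-of-thought for terse per-step "drafts".
cod_instruction = (
    "Think step by step, but keep only a minimal draft of each thinking "
    "step, five words at most. Give the final answer after '####'."
)
question = "A jug holds 4 liters. How many jugs fill a 20-liter tank?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": cod_instruction},
        {"role": "user", "content": question},
    ],
)
# A typical reply is a terse draft such as "20 / 4 = 5" followed by "#### 5".
print(response.choices[0].message.content)
```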
Ensembling is based on using multiple prompts, or multiple responses to the same prompt, to tackle a problem, and then aggregating those responses into a final output.
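One common instance is self-consistency: sample several answers to the same prompt at a nonzero temperature and keep the majority vote. A minimal sketch, under the same SDK assumptions:

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Is 7919 a prime number? Answer with exactly one word: yes or no."

# Sample the same prompt several times at a nonzero temperature,
# then keep the most common answer (majority vote).
answers = []
for _ in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0.8,
        messages=[{"role": "user", "content": question}],
    )
    answers.append(resp.choices[0].message.content.strip().lower())

final_answer = Counter(answers).most_common(1)[0][0]
print(final_answer)  # the majority vote across the five samples
```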
A common challenge is ensuring that LLM responses are both accurate and reliable. One powerful approach is self-criticism: prompting LLMs to critique their own outputs, a technique that has shown great success in helping models refine and improve their responses.
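A minimal generate-critique-revise loop under the same assumptions; ask() is a hypothetical helper introduced only to keep the sketch short:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    # Hypothetical one-shot helper around the chat API.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# 1. Generate a first draft.
draft = ask("Explain in two sentences why the sky is blue.")

# 2. Ask the model to critique its own draft.
critique = ask(f"Critique this explanation for factual errors and vagueness:\n\n{draft}")

# 3. Ask for a revision that addresses the critique.
print(ask(
    "Rewrite the explanation so it addresses the critique.\n\n"
    f"Explanation:\n{draft}\n\nCritique:\n{critique}"
))
```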
Decomposition is a powerful approach that breaks complex problems down into simpler, more manageable sub-tasks. The technique is inspired by a fundamental human problem-solving strategy and has shown remarkable success in enhancing AI performance without requiring larger models.
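A rough sketch in the least-to-most style: have the model plan sub-questions, answer each in order while accumulating context, then aggregate. The hypothetical ask() helper from the self-criticism sketch is repeated here so the snippet stands alone:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    # Hypothetical one-shot helper around the chat API.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

problem = ("How much does fuel cost for a 450 km trip if the car uses "
           "6 L per 100 km and fuel costs $1.80 per liter?")

# 1. Decompose: have the model list the sub-questions, one per line.
plan = ask(f"List the sub-questions needed to answer: '{problem}' "
           "One per line. Do not answer them.")

# 2. Solve each sub-question in order, feeding earlier answers back in.
context = ""
for sub_q in filter(str.strip, plan.splitlines()):
    answer = ask(f"{context}\nAnswer concisely: {sub_q}")
    context += f"\nQ: {sub_q}\nA: {answer}"

# 3. Aggregate the solved sub-tasks into a final answer.
print(ask(f"Given these solved sub-questions:{context}\n\n"
          f"Answer the original question: {problem}"))
```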