AI at NETC: Prompt Engineering

A guide for Faculty and Students about using AI.

What is Prompt Engineering?

Prompt engineering is the process of crafting and refining prompts to improve the performance of generative AI models. It involves providing specific inputs to tools like ChatGPT, Midjourney, or Gemini, guiding the AI to deliver more accurate and contextually relevant outputs.

When an AI model doesn't produce the desired response, prompt engineering allows us to iterate and adjust the prompt to optimize the output. This method is particularly useful for overcoming limitations of generative models, such as logical errors or insufficient context in responses.

A prompt is the input or instruction given to an AI model to generate a response. Prompts can be simple (a question) or complex (detailed instructions with context, tone, style, and format specifications). The quality of the AI's response depends directly on how clear, detailed, and structured the prompt is.
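The difference between a simple and a detailed prompt can be seen side by side. Below is a minimal sketch; the topic and wording are made-up examples:

```python
# A simple prompt: just a question, no constraints.
simple_prompt = "Explain photosynthesis."

# A detailed prompt: adds role, length, style, and format specifications.
detailed_prompt = (
    "You are a biology tutor for first-year college students.\n"  # role/tone
    "Explain photosynthesis in under 150 words, "                 # task + length
    "using one everyday analogy, "                                # style
    "and end with a two-sentence summary."                        # format
)
```

The detailed prompt constrains role, length, style, and format, leaving the model far less room to guess what you want.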

Best Practices

Helpful Tips

Review each item for more details on how to write generative AI prompts.

Parts of a Prompt

Review each item below for the 5 key parts of a prompt.

  • The Directive - The main instruction in the prompt. It tells the AI exactly what task it should perform. Without a clear directive, the AI may provide a generic or irrelevant response.
  • Examples - When the task is more complex, providing Examples can help guide the AI in producing more accurate responses. This technique is especially useful in few-shot and one-shot prompting, where a model is given one or more examples of what you expect in output.
  • Role (Persona) - Assigning a Role to the AI, also known as a persona, helps frame the response in a specific way. By telling the AI to act as an expert, a professional, or a specific character, you can guide the tone, style, and content of the response.
  • Output Formatting - Sometimes, it's important to specify the format in which you want the AI to present its output. Output Formatting ensures that the response follows a particular structure—whether it's a list, a table, or a paragraph. Specifying the format can help prevent misunderstandings and reduce the need for additional post-processing.
  • Additional Information - Sometimes called context, though we discourage that term because it is overloaded with other meanings in the prompting space. Additional Information provides the background details the AI needs to generate a relevant response. Including it ensures that the AI has a comprehensive understanding of the task and the necessary data to complete it.
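
The five parts above can be assembled into a single prompt. Here is a sketch; all field values are hypothetical examples:

```python
# Assemble a prompt from the five key parts. Each value below is a
# made-up example for a resume-rewriting task.
parts = {
    "role": "You are an experienced technical recruiter.",
    "directive": "Rewrite the resume bullet below to emphasize measurable impact.",
    "additional_information": "The candidate is applying for a data-analyst role.",
    "examples": 'Example: "Did reports" -> "Automated weekly reports, saving 5 hours/week."',
    "output_formatting": "Return a single rewritten bullet, no commentary.",
}

# Join the parts, one per line, into the final prompt text.
prompt = "\n".join(parts.values())
print(prompt)
```

Not every prompt needs all five parts; the directive is the only one that is essential.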

Advanced Prompt Writing

Each method below covers a family of related prompting techniques.

Zero-Shot Prompting Techniques

This is the most basic form of prompting. It simply shows the Large Language Model (LLM) a prompt without examples or demonstrations and asks it to generate a response. You've already seen these techniques in the Basics docs, like Giving Instructions and Assigning Roles.
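
A zero-shot prompt states the task directly and includes no worked examples. A minimal sketch, using a hypothetical sentiment-classification task:

```python
def zero_shot_prompt(text: str) -> str:
    """Build a zero-shot sentiment-classification prompt (no examples given)."""
    return (
        "Classify the sentiment of the following review as "
        "positive, negative, or neutral.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

print(zero_shot_prompt("The battery lasts all day and the screen is gorgeous."))
```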

Few-Shot Prompting Techniques

This provides the model with example pairs of problems and their correct solutions. These examples help the model better understand the context and improve its response generation.
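
A few-shot prompt prepends a handful of (problem, solution) pairs before the new case. A sketch, reusing the hypothetical sentiment task:

```python
# Two worked example pairs shown to the model before the real input.
EXAMPLES = [
    ("The food was cold and the service slow.", "negative"),
    ("Absolutely loved the atmosphere!", "positive"),
]

def few_shot_prompt(text: str) -> str:
    """Build a few-shot prompt: example pairs first, then the new case."""
    shots = "\n".join(
        f"Review: {review}\nSentiment: {label}" for review, label in EXAMPLES
    )
    return f"{shots}\nReview: {text}\nSentiment:"
```

The trailing "Sentiment:" cues the model to complete the pattern established by the examples.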

Thought Generation Techniques

Chain of Draft (CoD) addresses the verbosity of step-by-step reasoning by introducing a more efficient approach. Inspired by human problem-solving patterns, where we typically jot down only essential information, CoD demonstrates that effective reasoning doesn't require lengthy explanations.
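
A CoD prompt asks the model to reason in terse intermediate "drafts" rather than full sentences. A sketch; the per-step word limit is an illustrative parameter, not a fixed part of the technique:

```python
def chain_of_draft_prompt(question: str, max_words_per_step: int = 5) -> str:
    """Build a Chain of Draft prompt: terse scratch-pad steps, then an answer."""
    return (
        "Think step by step, but keep each step to at most "
        f"{max_words_per_step} words, like quick scratch-pad notes.\n"
        "Give the final answer after '####'.\n"
        f"Question: {question}"
    )
```

The '####' separator makes the final answer easy to extract from the model's output.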

Ensembling Techniques

This technique is based on using multiple prompts to tackle the problem and then aggregating these responses into a final output.
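
One common form of ensembling asks the same question with several prompt phrasings and aggregates the answers by majority vote. A sketch; the template wordings are made up, and the responses below are placeholders standing in for real model calls:

```python
from collections import Counter

def build_variants(question: str) -> list[str]:
    """Phrase the same question several different ways."""
    templates = [
        "Answer concisely: {q}",
        "Think step by step, then answer: {q}",
        "As a subject-matter expert, answer: {q}",
    ]
    return [t.format(q=question) for t in templates]

def aggregate(answers: list[str]) -> str:
    """Majority vote over the individual answers."""
    return Counter(answers).most_common(1)[0][0]

variants = build_variants("What is 17 * 3?")
mock_answers = ["51", "51", "50"]   # stand-ins for LLM outputs
print(aggregate(mock_answers))      # majority answer: "51"
```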

Self-Criticism Techniques

A common challenge is ensuring that LLM responses are both accurate and reliable. One powerful approach is prompting LLMs to critique their own outputs, a technique that has shown great success in helping models refine and improve their responses.
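
A self-criticism loop can be expressed as two prompt templates: one that asks for a critique of a draft, and one that asks for a revision incorporating that critique. A sketch; the wording is hypothetical, and the model call itself is left out:

```python
def critique_prompt(task: str, draft: str) -> str:
    """Ask the model to find problems in its own draft answer."""
    return (
        f"Task: {task}\n"
        f"Draft answer: {draft}\n"
        "List any factual errors, gaps, or unclear points in the draft."
    )

def revise_prompt(task: str, draft: str, critique: str) -> str:
    """Ask the model to rewrite the draft, addressing the critique."""
    return (
        f"Task: {task}\n"
        f"Draft answer: {draft}\n"
        f"Critique: {critique}\n"
        "Rewrite the answer, fixing every issue raised in the critique."
    )
```

In practice you would send the critique prompt to the model, feed its output into the revise prompt, and optionally repeat until the critique finds nothing to fix.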

Decomposition Techniques

This is a powerful approach that breaks down complex problems into simpler, more manageable sub-tasks. This technique is inspired by a fundamental human problem-solving strategy and has shown remarkable success in enhancing AI performance without requiring larger models.
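
Decomposition can be sketched as a two-stage prompting pattern: first ask the model to plan sub-tasks, then solve each one with the results so far. The templates below are hypothetical examples of this pattern:

```python
def plan_prompt(problem: str) -> str:
    """Stage 1: ask the model only to break the problem into sub-tasks."""
    return (
        f"Problem: {problem}\n"
        "Break this problem into a numbered list of smaller sub-tasks. "
        "Do not solve them yet."
    )

def solve_prompt(problem: str, subtask: str, notes: str) -> str:
    """Stage 2: solve one sub-task, given the results gathered so far."""
    return (
        f"Overall problem: {problem}\n"
        f"Results so far: {notes}\n"
        f"Now solve this sub-task: {subtask}"
    )
```

Solving sub-tasks one at a time keeps each individual prompt simple, which is where the performance gains of decomposition come from.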