AI at NETC: Few-Shot Prompting

A guide for Faculty and Students about using AI.

Self-Ask

Self-Ask improves LLM reasoning by breaking a complex question into sub-questions and answering them step by step. It enhances tasks such as customer support, legal analysis, research, and creative writing by prompting follow-up questions, and it can integrate with external resources such as search engines for more accurate responses.

To use Self-Ask, prepare a one- or few-shot prompt that demonstrates the process: show how a complex question is broken down into simpler sub-questions, along with the correct answer to each.

Example:

Question: {A complex question}

Are follow up questions needed here: Yes.

Follow up: {Sub-question 1} Intermediate answer: {Correct answer to sub-question 1}

Follow up: {Sub-question 2} Intermediate answer: {Correct answer to sub-question 2}

So the final answer is: {Correct answer to the complex question}

Question: {Your prompt with a complex question}

Are follow up questions needed here:
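The template above can be assembled programmatically. Below is a minimal sketch in Python; the function name and the worked exemplar are illustrative, and the resulting string would be sent to whatever LLM client you use:

```python
def build_self_ask_prompt(exemplar: str, question: str) -> str:
    """Combine a worked Self-Ask exemplar with a new complex question.

    The exemplar should already contain the 'Follow up:' /
    'Intermediate answer:' decomposition shown in the template above.
    """
    return (
        f"{exemplar}\n\n"
        f"Question: {question}\n"
        "Are follow up questions needed here:"
    )

# An illustrative exemplar following the template (simple arithmetic,
# so the intermediate answers are verifiably correct).
exemplar = (
    "Question: Is the sum of 3 and 4 greater than the product of 2 and 3?\n"
    "Are follow up questions needed here: Yes.\n"
    "Follow up: What is 3 + 4? Intermediate answer: 7\n"
    "Follow up: What is 2 * 3? Intermediate answer: 6\n"
    "So the final answer is: Yes"
)

prompt = build_self_ask_prompt(exemplar, "Which is larger, 17 * 3 or 12 * 4?")
print(prompt)
```

The model completes the prompt from "Are follow up questions needed here:" onward, mimicking the decomposition shown in the exemplar.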


Self Generated In-Context Learning (SG-ICL)

A technique for getting few-shot examples directly from the model you're querying. It's intuitive, easy to use, and fast, and it comes in handy when you don't have a dataset of exemplars available.

SG-ICL is a two-step process:

  1. Self-Generation Step: The model generates exemplars closely related to the specific task at hand, improving input-demonstration correlation.
  2. Inference Step: The generated samples are used as exemplars. The model then predicts the class for the test input based on these generated samples, which are tailored to the task, leading to better performance than relying on external examples.
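The two steps can be sketched as follows. This is a minimal illustration, not a reference implementation: the model is abstracted as a caller-supplied `generate` function, and the prompt wording is an assumption.

```python
from typing import Callable, List


def sg_icl(generate: Callable[[str], str], task_description: str,
           test_input: str, labels: List[str], n_exemplars: int = 4) -> str:
    """Self-Generated In-Context Learning in two steps."""
    # Step 1: Self-generation -- ask the model to invent a labeled
    # exemplar for the task, cycling through the possible labels.
    exemplars = []
    for i in range(n_exemplars):
        label = labels[i % len(labels)]
        sample = generate(
            f"{task_description}\n"
            f"Write one example input whose label is '{label}'."
        )
        exemplars.append(f"{sample}\nLabel: {label}")

    # Step 2: Inference -- prepend the generated exemplars to the test
    # input and ask the model to predict its label.
    prompt = (
        task_description + "\n\n" + "\n\n".join(exemplars)
        + f"\n\n{test_input}\nLabel:"
    )
    return generate(prompt)
```

Because the exemplars come from the same model and the same task description, they tend to match the test input's style more closely than externally sourced examples.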

Chain-of-Dictionary (CoD)

Incorporates external multilingual dictionaries into the translation process. This method enriches the translation prompt with explicit lexical cues, thereby bridging gaps in the model's internal knowledge.

How it works:

  1. Standard Translation Prompt: The prompt begins with a simple translation instruction.
  2. Multilingual Dictionary Chain: Before the translation task, CoD introduces a dictionary-based lexical hint in multiple languages.
  3. The LLM incorporates the multilingual dictionary hints while generating the translation, leading to more accurate and natural results.
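The shape of a CoD-enhanced prompt can be sketched as plain string assembly. The chain wording below is illustrative, not the exact format from the original method:

```python
def build_cod_prompt(source_sentence: str, src_lang: str, tgt_lang: str,
                     dictionary_chain: dict) -> str:
    """Prepend a multilingual dictionary chain to a translation prompt.

    dictionary_chain maps each key source word to its translations in
    several languages, e.g. {"bank": {"French": "banque"}}.
    """
    hints = []
    for word, translations in dictionary_chain.items():
        # Chain the word through its translations: 'X' (L1) means 'Y' (L2)...
        chain = " means ".join(
            [f"'{word}' ({src_lang})"]
            + [f"'{t}' ({lang})" for lang, t in translations.items()]
        )
        hints.append(chain + ".")
    hint_block = "\n".join(hints)
    return (
        f"{hint_block}\n\n"
        f"Translate the following text from {src_lang} into {tgt_lang}:\n"
        f"{source_sentence}"
    )
```

The dictionary hints sit before the translation instruction, so the model reads the lexical cues first and can resolve rare or ambiguous words before translating.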

To use CoD, follow these steps:

  1. The LLM identifies key content words (nouns, adjectives, etc.) in the source sentence using a keyword-extraction prompt.
  2. These key words are translated into several languages using off-the-shelf translation models (e.g., NLLB).
  3. The multiple translations are formatted into a chained structure. This chain is then prepended to the translation prompt, providing cross-lingual cues.
  4. The LLM processes the enhanced prompt, incorporating the chained lexical hints to generate a more accurate translation—particularly for rare or ambiguous words.
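The four steps above can be wired together as in the sketch below. The component names (`llm`, `word_translator`) and prompt wording are assumptions; in practice `word_translator` would wrap an off-the-shelf translation model such as NLLB:

```python
from typing import Callable, List


def chain_of_dictionary(
    llm: Callable[[str], str],
    word_translator: Callable[[str, str], str],
    sentence: str, src_lang: str, tgt_lang: str,
    aux_langs: List[str],
) -> str:
    """Run the four CoD steps with pluggable components."""
    # Step 1: ask the LLM to identify key content words.
    keywords = llm(
        f"Extract the key content words (nouns, adjectives) from: {sentence}\n"
        "Return them comma-separated."
    ).split(", ")

    # Steps 2-3: translate each keyword into the auxiliary languages
    # and chain the results into cross-lingual lexical hints.
    hints = []
    for word in keywords:
        links = ", ".join(
            f"{lang}: {word_translator(word, lang)}" for lang in aux_langs
        )
        hints.append(f"{word} -> {links}")

    # Step 4: prepend the chain to the translation prompt and translate.
    prompt = (
        "\n".join(hints)
        + f"\n\nTranslate from {src_lang} to {tgt_lang}: {sentence}"
    )
    return llm(prompt)
```

Swapping in a different keyword extractor or translation backend only requires changing the two callables, not the chaining logic.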