Self-Ask
Improves LLM reasoning by breaking a complex question into sub-questions and answering them step by step. This enhances tasks like customer support, legal analysis, research, and creative writing by making follow-up questions explicit, and it can integrate with external resources such as search engines to ground the intermediate answers.
To use Self-Ask, prepare a one- or few-shot prompt that demonstrates how to answer such questions: each example shows a complex question broken down into simpler sub-questions, with the correct answer to each sub-question and to the overall question.
Example:
Question: {A complex question}
Are follow up questions needed here: Yes.
Follow up: {Sub-question 1}
Intermediate answer: {Correct answer to sub-question 1}
Follow up: {Sub-question 2}
Intermediate answer: {Correct answer to sub-question 2}
So the final answer is: {Correct answer to the complex question}

Question: {Your prompt with a complex question}
Are follow up questions needed here:
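The template above can be wired into a simple prompt builder. This is a minimal sketch: the one-shot example used here is the worked example popularized by the Self-Ask paper, and `build_self_ask_prompt` is a hypothetical helper name, not part of any library.

```python
# Minimal Self-Ask prompt builder. The one-shot example below follows the
# template above; swap in examples closer to your own domain.

ONE_SHOT_EXAMPLE = """\
Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins"""


def build_self_ask_prompt(question: str, example: str = ONE_SHOT_EXAMPLE) -> str:
    """Append the new complex question to the worked example, stopping at the
    exact point where the model is expected to take over."""
    return f"{example}\n\nQuestion: {question}\nAre follow up questions needed here:"
```

Send the returned string to your model; it should continue in the same format, emitting its own follow-up questions, intermediate answers, and a final answer.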
Self Generated In-Context Learning (SG-ICL)
A technique for obtaining few-shot examples directly from the model you are querying. It is intuitive, easy to use, and fast, and it comes in handy when you don't have a dataset of exemplars available.
SG-ICL is a two-step process:
1. Ask the model to generate its own example inputs with correct outputs for the task at hand (the self-generated demonstrations).
2. Prepend those generated examples to the prompt as few-shot context and ask the model to answer the actual query.
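The two steps (first generating demonstrations from the model itself, then reusing them as few-shot context) can be sketched as follows. This is an assumption-laden sketch: `generate` is a placeholder for whatever completion function you use, and the prompt wording is illustrative, not the paper's exact template.

```python
def sg_icl_answer(generate, task: str, query: str, n_demos: int = 3) -> str:
    """Two-step SG-ICL: (1) self-generate demonstrations, (2) answer with them.

    `generate` is any callable mapping a prompt string to a completion string.
    """
    # Step 1: ask the model to produce its own input/label examples.
    demos = [
        generate(f"Task: {task}\nWrite one example input and its correct label.")
        for _ in range(n_demos)
    ]
    # Step 2: prepend the generated demos as few-shot context for the real query.
    context = "\n\n".join(demos)
    return generate(f"Task: {task}\n\n{context}\n\nInput: {query}\nLabel:")
```

In practice you may also want to deduplicate or filter the generated demonstrations before step 2, since the model can produce repetitive or low-quality examples.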
Chain-of-Dictionary (CoD)
Incorporates external multilingual dictionaries into the translation process. This method enriches the translation prompt with explicit lexical cues, thereby bridging gaps in the model's internal knowledge.
How it works: for each key word in the source sentence, the prompt is augmented with a chain of that word's translations in several languages, drawn from external multilingual dictionaries, before the model is asked to translate.
To use CoD, follow these steps:
1. Identify the key words in the source sentence.
2. Look up each key word in multilingual dictionaries and form a chain of its translations (in the target language and, optionally, one or two auxiliary languages).
3. Prepend these chained dictionary hints to the translation prompt and ask the model to translate the full sentence.
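A minimal prompt-construction sketch, assuming the dictionary lookups have already been done and are available as a plain mapping. The chained `"X" means "Y" in Z` phrasing and the helper name `build_cod_prompt` are illustrative assumptions, not a fixed format.

```python
def build_cod_prompt(sentence: str, chains: dict,
                     src: str = "English", tgt: str = "French") -> str:
    """Build a Chain-of-Dictionary translation prompt.

    `chains` maps each key source word to an ordered {language: translation}
    dict obtained from external multilingual dictionaries.
    """
    hint_lines = []
    for word, translations in chains.items():
        # Chain each word's translations across languages, e.g.
        # "budget" means "budget" in French means "Haushalt" in German.
        chain = f'"{word}"' + "".join(
            f' means "{t}" in {lang}' for lang, t in translations.items()
        )
        hint_lines.append(chain + ".")
    hints = "\n".join(hint_lines)
    return (
        f"{hints}\n\n"
        f"Using the dictionary hints above, translate the following "
        f"{src} sentence into {tgt}:\n{sentence}"
    )
```

Usage: `build_cod_prompt("The committee approved the budget.", {"budget": {"French": "budget", "German": "Haushalt"}})` yields a prompt whose first line is the dictionary chain and whose last line is the sentence to translate.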