Prompting emerged alongside large pre-trained language models, built on the idea of asking a model to perform tasks via text input instead of fine-tuning it for each task.
GPT-2 showed surprising zero-shot abilities, such as generating summaries when prompted with cues like 'TL;DR' that it had seen in its training data.
GPT-3 demonstrated few-shot learning: given a well-crafted prompt containing a handful of examples, it could perform new tasks without any weight updates, showing that language models can follow prompts effectively.
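The idea can be sketched as a prompt that demonstrates the task with a few input/output pairs before the new query. This is a minimal illustration of prompt construction only; the task, labels, and helper name are assumptions, and the model call itself is omitted.

```python
# Few-shot prompting sketch: the task is demonstrated with examples
# inside the prompt; the model's weights never change.

def build_few_shot_prompt(examples, query):
    """Format (input, output) example pairs followed by the new query."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    # End with the unanswered query so the model completes the label.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A bland, forgettable film.")
```

The resulting string would be sent to a completion endpoint; the model is expected to continue the pattern and emit a sentiment label.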
Prompt engineering gained prominence in 2020, emphasizing the importance of phrasing inputs effectively for quality outputs.
The Chain-of-Thought (CoT) prompting technique improved model performance by eliciting step-by-step reasoning, raising accuracy on complex tasks such as multi-step arithmetic.
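A CoT prompt differs from a plain few-shot prompt in that the in-prompt example shows intermediate reasoning before the answer. The sketch below is illustrative only: the worked example and function name are assumptions, and no model is actually called.

```python
# Chain-of-Thought sketch: the demonstration includes reasoning steps,
# encouraging the model to reason step by step before answering.

COT_EXAMPLE = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A: They started with 23 apples. After using 20, 23 - 20 = 3 remained. "
    "Buying 6 more gives 3 + 6 = 9. The answer is 9.\n"
)

def build_cot_prompt(question):
    """Prepend a worked example so the model imitates the reasoning style."""
    return f"{COT_EXAMPLE}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "If I have 5 pens and buy 7 more, how many pens do I have?"
)
```

Because the demonstration walks through the arithmetic explicitly, the model tends to produce a similar reasoning trace for the new question rather than jumping straight to an answer.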
ReAct, introduced in 2022, combined reasoning with action in prompts, letting models interleave reasoning traces with actions such as tool calls and feed the resulting observations back into the prompt.
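The interleaving of Thought, Action, and Observation steps can be sketched as a loop. Everything here is a simplification for illustration: a stub function stands in for the model, a tiny lookup table stands in for a real tool, and all names are hypothetical.

```python
# ReAct-style loop sketch: the "model" (a stub) alternates Thought /
# Action steps; each tool result is appended as an Observation, and the
# growing transcript is fed back in until a final answer appears.

TOOLS = {"lookup": {"capital of France": "Paris"}.get}  # toy tool

def fake_model(transcript):
    """Stub model: emits one Action, then a Final answer. Hypothetical."""
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: lookup[capital of France]"
    return "Final: Paris"

def react_loop(question, max_steps=3):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_model(transcript)
        transcript += step + "\n"
        if step.startswith("Final:"):
            return step.removeprefix("Final: ").strip()
        # Parse "Action: tool[arg]" and append the tool's observation.
        if "Action:" in step:
            action = step.split("Action:")[1].strip()
            tool, arg = action.split("[", 1)
            result = TOOLS[tool.strip()](arg.rstrip("]"))
            transcript += f"Observation: {result}\n"
    return None

answer = react_loop("What is the capital of France?")
```

In a real system the stub would be a language-model call and the tools would be search engines, calculators, or APIs; the loop structure is the essence of the pattern.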
By 2024, prompting focused on AI agents that string together prompts and actions to accomplish goals autonomously, which in turn spurred interest in meta-prompt design.
Automatic Prompt Engineering tools emerged to search for high-performing prompts algorithmically, rather than relying on manual trial and error across tasks.
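At its simplest, such a search scores candidate prompt templates on a small labeled set and keeps the best one. The sketch below replaces the scoring model with a keyword stub; the templates, dataset, and function names are all assumptions for illustration.

```python
# Automatic prompt search sketch: evaluate each candidate template on a
# tiny labeled dataset and keep the highest-scoring one. A keyword stub
# stands in for the model call.

def stub_classify(prompt):
    """Stand-in for a model call: 'classifies' by keyword matching."""
    return "positive" if "delight" in prompt else "negative"

def score(template, dataset):
    """Fraction of examples the stub labels correctly under this template."""
    correct = sum(
        stub_classify(template.format(text=text)) == label
        for text, label in dataset
    )
    return correct / len(dataset)

candidates = [
    "Classify the sentiment: {text}",
    "Is the following review positive or negative? {text}",
]
dataset = [
    ("The movie was a delight.", "positive"),
    ("Dull and lifeless.", "negative"),
]
best = max(candidates, key=lambda t: score(t, dataset))
```

Real systems replace the stub with actual model calls and the exhaustive loop with smarter search (e.g. having a model propose and mutate candidates), but the evaluate-and-select core is the same.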
Strategies like Tree-of-Thought and Graph-of-Thought enabled non-linear reasoning in language models, advancing prompt engineering beyond linear sequences.
This evolution of prompt engineering made models steadily more capable, with a growing emphasis on instruction-following and automated prompt optimization.