Zero-shot prompting is the most direct way of interacting with a Large Language Model: the prompt states the task plainly, without supplying worked examples or additional context.
Tasks that are commonly handled zero-shot include creative writing, question answering, summarization, classification, language translation, coding, and explanation, as the sketch below illustrates for classification.
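To make this concrete, the following is a minimal sketch of a zero-shot classification prompt sent through the OpenAI Python SDK; the model name and the example review are illustrative assumptions, and any chat-capable model and client would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt states the task directly -- no labelled examples are provided.
prompt = (
    "Classify the sentiment of the following review as positive, negative, "
    "or neutral. Reply with the label only.\n\n"
    "Review: The battery lasts all day, but the screen scratches easily."
)

# Model name is an assumption; substitute whichever chat model you use.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # e.g. "neutral"
```

Because no examples are embedded in the prompt, switching to a different task (say, summarization or translation) only requires changing the instruction text, which is what makes zero-shot prompting convenient for quick prototyping.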
Its advantages include flexibility across tasks, time and resource efficiency (no examples to curate), and fast prototyping and exploration. Its drawbacks include strong dependence on prompt quality, reliance on the model's pre-trained knowledge alone, and weaker performance on complex or multi-step tasks.