Language Models are Few-Shot Learners
Brown et al. • 2020
LLM · In-Context Learning · GPT-3
Summary
This paper introduced GPT-3 and the concept of in-context learning. It demonstrated that scaling language models allows them to perform tasks given only a few examples in the prompt, without gradient updates or fine-tuning.
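The mechanism described above can be sketched by assembling task demonstrations directly into the model's input; the English-to-French translation format below mirrors the few-shot prompts shown in the paper, though the helper function and variable names here are illustrative, not from the paper itself:

```python
# Few-shot prompting sketch: demonstrations are placed in the prompt and the
# model is expected to complete the pattern, with no parameter updates.
examples = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
    ("plush giraffe", "girafe peluche"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate a task description, demonstrations, and the query to complete."""
    lines = ["Translate English to French:"]
    lines += [f"{en} => {fr}" for en, fr in examples]
    lines.append(f"{query} =>")
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "cheese")
print(prompt)
```

The key point the paper makes is that conditioning on such a prompt is the entire "learning" step: the same frozen model weights handle translation, arithmetic, or trivia depending only on the demonstrations supplied.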
Why It Matters
- Defined the prompt engineering paradigm
- Validated scaling laws empirically
- Shifted focus toward general-purpose models
