Embedding Labs

Language Models are Few-Shot Learners

Brown et al., 2020

LLM · In-Context Learning · GPT-3

Abstract

This paper introduced GPT-3, a 175-billion-parameter autoregressive language model, and the concept of in-context learning. It demonstrated that scaling up language models allows them to perform new tasks given only a few demonstrations in the prompt, with no gradient updates or fine-tuning.
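The core idea is that the "training" examples live entirely in the prompt: the model conditions on a few demonstration pairs and completes the pattern for a new query, with its weights untouched. A minimal sketch of assembling such a few-shot prompt (the translation task, the `=>` separator, and the helper name are illustrative choices, not the paper's exact format):

```python
# Sketch of few-shot in-context learning: demonstrations are concatenated
# into the prompt, and the model is asked to complete the final line.
# No parameters are updated at any point.

def build_few_shot_prompt(task_description, examples, query):
    """Assemble a GPT-3-style few-shot prompt from demonstration pairs."""
    lines = [task_description]
    for source, target in examples:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")  # the model completes after the arrow
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
print(prompt)
```

Zero-shot and one-shot settings are the same construction with zero or one demonstration pair; the paper's key finding is that accuracy improves with both model scale and the number of in-context examples.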

Why It Matters

  • Defined the prompt engineering paradigm
  • Validated scaling laws empirically
  • Shifted focus toward general-purpose models
