Scaling Laws for Neural Language Models

Kaplan et al. (OpenAI), 2020

Scaling · Training · Fundamentals

Abstract

The paper that gave the field its roadmap. Kaplan et al. showed that language model performance scales predictably with compute, data, and parameter count, following power-law relationships. These scaling laws became the theoretical underpinning of every frontier model training run, enabling labs to predict performance before committing billions of dollars of compute.
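Each law takes the form L(x) = (x_c / x)^α, where x is non-embedding parameter count, dataset tokens, or compute. The minimal sketch below evaluates all three using the approximate fitted constants reported in the paper (rounded); the helper name and the sample inputs are illustrative, not from the paper.

    # Sketch of the single-variable power laws L(x) = (x_c / x)^alpha.
    # Constants are the approximate fitted values Kaplan et al. report
    # (rounded); the function name and sample inputs are illustrative.

    def scaling_loss(x: float, x_c: float, alpha: float) -> float:
        """Predicted cross-entropy loss (nats/token) under a power law."""
        return (x_c / x) ** alpha

    # Loss vs. non-embedding parameters N, dataset tokens D,
    # and minimal compute C (in PF-days), per the paper's fits.
    print(scaling_loss(1e9,  x_c=8.8e13, alpha=0.076))  # L(N=1e9)  ~ 2.4
    print(scaling_loss(1e11, x_c=5.4e13, alpha=0.095))  # L(D=1e11) ~ 1.8
    print(scaling_loss(1e3,  x_c=3.1e8,  alpha=0.050))  # L(C=1e3)  ~ 1.9

The small exponents are the practical point: loss falls smoothly but slowly, so each constant reduction in loss demands a multiplicative increase in scale, which is what makes extrapolating from cheap runs to expensive ones feasible.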

Why It Matters

  • Provided the theoretical basis for the frontier model scaling race
  • Showed performance is predictable from compute, data, and parameter count
  • Enabled rational planning of multi-billion-dollar training runs
