Embedding Labs

Llama 2: Open Foundation and Fine-Tuned Chat Models

Touvron et al. (Meta), 2023

Open Source · LLM · Fine-tuning

Abstract

Meta releases Llama 2, a collection of pretrained and fine-tuned large language models ranging from 7B to 70B parameters. The paper details the training methodology, including Ghost Attention for multi-turn consistency and extensive safety testing, and establishes patterns for open-weights model development.

Why It Matters

  • Broadened access to capable open-weights models
  • Detailed RLHF and safety methodology
  • Reference architecture for open-source development
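To make the chat models' interface concrete: the Llama 2 chat variants expect prompts in a specific template built from `[INST]` and `<<SYS>>` markers. The helper below is a minimal sketch of a single-turn prompt builder following that convention; exact special-token (BOS/EOS) handling is left to the tokenizer and may differ across implementations.

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 2 chat prompt.

    Sketch of the [INST] / <<SYS>> template used by the fine-tuned
    chat models; BOS/EOS tokens are assumed to be added by the
    tokenizer, not here.
    """
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"


prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Summarize the Llama 2 paper in one sentence.",
)
```

For multi-turn conversations, each prior user/assistant exchange is wrapped in its own `[INST] ... [/INST]` pair, with the system block appearing only in the first turn.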
