Llama 2: Open Foundation and Fine-Tuned Chat Models
Touvron et al. (Meta) • 2023
Open Source · LLM · Fine-tuning
Summary
Meta's Llama 2 release comprises a family of pretrained and fine-tuned language models ranging from 7B to 70B parameters. The paper details the training methodology, including RLHF, Ghost Attention for multi-turn consistency, and extensive safety testing, establishing reference patterns for open-weights model development.
Why It Matters
- Broadened access to capable open-weights models
- Detailed RLHF and safety methodology
- Reference architecture for open-source development
