
LoRA: Low-Rank Adaptation of Large Language Models

Hu et al., 2021

Fine-tuning · Efficiency · PEFT

Abstract

LoRA freezes the pre-trained model weights and injects trainable rank-decomposition matrices into each layer. Instead of updating a full weight matrix W, only a low-rank correction ΔW = BA is learned, where the rank r is much smaller than the matrix dimensions. This drastically reduces the number of trainable parameters, making fine-tuning of large models feasible on limited hardware.
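A minimal sketch of the idea in PyTorch (the class name LoRALinear and the rank and alpha parameters are illustrative choices, not taken from the paper):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer augmented with a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the low-rank factors A and B are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        d_out, d_in = base.out_features, base.in_features
        # delta_W = B @ A, with rank << min(d_in, d_out).
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)   # small random init
        self.B = nn.Parameter(torch.zeros(d_out, rank))          # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original frozen output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```

In the paper's setup, such adapters are attached to the attention projection matrices (e.g. the query and value projections); after training, the update BA (scaled by alpha/r) can be merged back into the frozen weight, so inference incurs no additional latency.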

Why It Matters

  • Made fine-tuning practical on constrained hardware
  • Standard approach for adapters and model personalization
  • Major reduction in storage and compute requirements
