LoRA: Low-Rank Adaptation of Large Language Models
Hu et al. • 2021
Fine-tuning · Efficiency · PEFT
Abstract
LoRA freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture. Because only these low-rank matrices are updated, the number of trainable parameters drops by orders of magnitude (up to roughly 10,000x for GPT-3 175B), making fine-tuning of large models feasible on limited hardware.
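The mechanism is simple enough to sketch. Below is a minimal PyTorch illustration of the paper's parametrization h = W₀x + (α/r)·BAx, with B initialized to zero and A to a small Gaussian so training starts exactly at the pre-trained model. The class name `LoRALinear` and the hyperparameter values are hypothetical, chosen for illustration; this is not the authors' reference code (the official implementation lives at https://github.com/microsoft/LoRA).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: h = W0 x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pre-trained weights
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        # A: Gaussian init, B: zeros -> BA = 0 at the start of training
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Example: wrap a 768-dim attention projection; only A and B receive gradients
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
y = layer(torch.randn(4, 768))
```

At deployment, the product BA can be merged into the frozen weight, so inference incurs no extra latency relative to the base model.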
Why It Matters
- Made fine-tuning practical on constrained hardware
- Standard approach for adapters and model personalization
- Major reduction in storage and compute requirements (see the parameter-count sketch after this list)
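To make the storage claim concrete, here is the parameter arithmetic for adapting a single weight matrix. The hidden size d = 12288 matches GPT-3 175B, but r = 8 is an assumed rank for illustration; the actual savings depend on which projections are adapted and the rank chosen.

```python
# Trainable parameters for one d-by-k projection matrix.
# Full fine-tuning updates every entry of W; LoRA updates only
# A (r x k) and B (d x r), i.e. r * (d + k) parameters.
d, k, r = 12288, 12288, 8

full_ft = d * k           # 150,994,944 trainable params
lora = r * (d + k)        # 196,608 trainable params

print(f"full fine-tuning: {full_ft:,} params")
print(f"LoRA, r={r}:      {lora:,} params")
print(f"reduction:        {full_ft / lora:.0f}x")   # 768x for this matrix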
