LoRA: Low-Rank Adaptation of Large Language Models
Hu et al. • 2021
Fine-tuning · Efficiency · PEFT
Summary
LoRA freezes the pre-trained model weights and injects trainable low-rank decomposition matrices into each Transformer layer. Because only these small matrices are updated, the number of trainable parameters drops dramatically (by up to 10,000× for GPT-3 175B compared with full fine-tuning), making adaptation of large models feasible on limited hardware without adding inference latency.
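The core idea can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the dimensions and variable names are made up, though the initialization (A Gaussian, B zero) and the alpha/r scaling follow the paper's description. At initialization the low-rank update BA is zero, so the adapted layer starts out identical to the frozen one.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 8            # layer dimensions and LoRA rank (r << min(d, k)); illustrative values
alpha = 16                      # LoRA scaling hyperparameter

W = rng.normal(size=(d, k))     # frozen pre-trained weight (never updated)
A = rng.normal(size=(r, k)) * 0.01  # trainable down-projection, Gaussian init
B = np.zeros((d, r))            # trainable up-projection, zero init => update starts at 0

def lora_forward(x):
    # h = W x + (alpha / r) * B A x ; only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(k,))
# With B zeroed, the adapted layer reproduces the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# Parameter count: the low-rank pair is far smaller than the full matrix.
full_params = d * k             # 4096
lora_params = r * (d + k)       # 1024
print(f"trainable: {lora_params} vs full fine-tune: {full_params}")
```

After training, the update can be merged as W + (alpha/r) · BA, which is why LoRA adds no inference latency; swapping adapters only means swapping the small A and B matrices.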
Why It Matters
- Made fine-tuning practical on constrained hardware
- Standard approach for adapters and model personalization
- Major reduction in storage and compute requirements
