LoRA: Revolutionizing Large Language Model Adaptation without Fine-Tuning
Exploiting the low-rank nature of weight updates during fine-tuning results in orders of magnitude reduction in learnable parameters
Ever since the introduction of BERT in 2018, fine-tuning has been the standard approach for adapting large language models (LLMs) to downstream tasks. This changed with the introduction of LoRA (Hu et al., 2021), which showed for the first time that the weight update matrix during fine-tuning can be drastically simplified using low-rank factorization, often wi…
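The core idea can be sketched in a few lines of NumPy. This is a minimal illustration, not the reference implementation: a frozen weight matrix `W` is augmented with a low-rank update `B @ A`, where only the small factors `A` and `B` are trainable. The dimensions and initialization scale below are illustrative assumptions (the LoRA paper initializes `B` to zero so the update starts as a no-op).

```python
import numpy as np

# Illustrative dimensions: a 768x768 weight with rank-8 update.
d, k, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # pretrained weight, frozen
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # zero-initialized, so B @ A = 0 at start

def lora_forward(x):
    # Equivalent to x @ (W + B @ A).T, but keeps the update factorized.
    return x @ W.T + (x @ A.T) @ B.T

x = rng.standard_normal((1, k))
assert np.allclose(lora_forward(x), x @ (W + B @ A).T)

# Parameter count: full update vs. low-rank factors.
print(d * k, "vs", r * (d + k))  # 589824 vs 12288, roughly 48x fewer
```

The savings scale with the rank: a full update needs d·k trainable parameters, while the factorized form needs only r·(d + k), which for small r is orders of magnitude fewer.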