Unveiling the LoRA Assumption: Why Fine-Tuning May Lead to Wrong Outputs
LoRA, a popular method for fine-tuning large AI models, assumes that a single low-rank update can be applied uniformly across a model's weights. While this works well for modifying surface attributes such as tone or format, it struggles to incorporate complex factual knowledge, which can lead to unstable or incorrect outputs.
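To make the uniformity assumption concrete, here is a minimal numpy sketch of the update rule LoRA uses. The dimensions, variable names (W, A, B), and scaling factor are illustrative assumptions, not values from any particular model: instead of updating the full weight matrix W, LoRA learns a low-rank correction B @ A that is added identically for every input.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 8, 2  # illustrative sizes; rank r is much smaller than d_in, d_out

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight (stays fixed)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized
alpha = 4.0                            # scaling hyperparameter

def base_forward(x):
    return W @ x

def lora_forward(x):
    # The same low-rank correction (alpha / r) * B @ A is added for every
    # input x -- this is the uniformity assumption discussed above.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(base_forward(x), lora_forward(x))
```

The zero-initialization of B means fine-tuning starts from the unmodified model; whatever the adapter then learns is a single rank-r correction shared across all inputs, which is why broad stylistic shifts fit more naturally than input-specific factual edits.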