Quoting Devansh:

Fine-tuning advanced LLMs isn’t knowledge injection — it’s destructive overwriting. Neurons in trained language models aren’t blank slates; they’re densely interconnected and already encode crucial, nuanced information. When you fine-tune, you risk erasing valuable existing patterns, leading to unexpected and problematic downstream effects. Instead, use modular methods like retrieval-augmented generation, adapters, or prompt-engineering — these techniques inject new information without damaging the underlying model’s carefully built ecosystem.

Source: codinginterviewsmadesimple.substack.com
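
The adapter route mentioned in the quote is straightforward to try. Below is a minimal sketch using LoRA via the Hugging Face PEFT library; the model name and hyperparameters are illustrative assumptions, not something from the quoted piece. The point it demonstrates is the one Devansh makes: the pretrained weights stay frozen, and only small add-on matrices are trained, so nothing the base model already encodes gets overwritten.

```python
# Minimal adapter sketch (LoRA via Hugging Face PEFT), assuming GPT-2 as the base model.
# The base network's weights are frozen; training touches only the small adapter matrices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative choice of base model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# Prints how small the trainable fraction is (well under 1% of all parameters);
# fine-tuning this model updates only the adapters, never the pretrained weights.
```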