When Life gives you LLMs, make LLM-ADE: Large Language Models with Adaptive Data Engineering

April 19, 2024 · arXiv:2404.13028

Authors

Stephen Choi, William Gazeley

Abstract

This paper presents the LLM-ADE framework, a novel methodology for continued pre-training of large language models (LLMs) that addresses the challenges of catastrophic forgetting and double descent. LLM-ADE employs dynamic architectural adjustments, including selective block freezing and expansion, tailored to specific datasets.

This strategy enhances model adaptability to new data while preserving previously acquired knowledge. We demonstrate LLM-ADE's effectiveness on the TinyLlama model across various general knowledge benchmarks, showing significant performance improvements without the drawbacks of traditional continuous training methods.

This approach promises a more versatile and robust way to keep LLMs current and efficient in real-world applications.
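The abstract's core mechanism, selective block freezing and expansion, can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration over a generic stack of transformer blocks; the layer indices, the copy-based block expansion, and all class and function names are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal sketch of selective block freezing and block expansion, the two
# architectural adjustments described in the abstract. Illustrative only:
# which blocks to freeze or duplicate, and how new blocks are initialized,
# are assumptions, not the LLM-ADE reference implementation.
import copy
import torch.nn as nn

class TinyTransformer(nn.Module):
    """Toy stand-in for a decoder-only LLM: a stack of transformer blocks."""
    def __init__(self, d_model=64, n_heads=4, n_layers=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

def freeze_blocks(model, indices):
    """Freeze the chosen blocks so continued pre-training cannot
    overwrite the knowledge they already encode."""
    for i in indices:
        for p in model.blocks[i].parameters():
            p.requires_grad = False

def expand_blocks(model, after_index):
    """Insert a copy of an existing block after `after_index`, adding
    trainable capacity for the new data distribution."""
    new_block = copy.deepcopy(model.blocks[after_index])
    blocks = list(model.blocks)
    blocks.insert(after_index + 1, new_block)
    model.blocks = nn.ModuleList(blocks)
    return model

model = TinyTransformer()
freeze_blocks(model, indices=[0, 1])   # preserve early layers
expand_blocks(model, after_index=3)    # add capacity near the top
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{len(model.blocks)} blocks, {trainable} trainable parameters")
```

Only the unfrozen and newly inserted blocks receive gradient updates during continued pre-training, which is one plausible way to reconcile adaptability to new data with preservation of prior knowledge.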
