ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Memory Bank Compression for Continual Adaptation of Large Language Models

📝 Summary:
Memory-augmented continual learning for LLMs suffers from ever-growing memory banks. MBC compresses these banks via codebook optimization and an online resetting mechanism, combined with Key-Value Low-Rank Adaptation. It reduces the memory bank to 0.3% of its original size while maintaining high accuracy.
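The codebook step can be sketched as plain k-means vector quantization: many memory vectors are replaced by a handful of shared codes plus per-entry assignments. This is an illustrative stand-in only; the function names and sizes below are invented, and the paper's Key-Value Low-Rank Adaptation and online resetting mechanism are not reproduced here.

```python
import numpy as np

def compress_memory_bank(bank, codebook_size, n_iters=10, seed=0):
    """Compress a memory bank of vectors into a small codebook via
    k-means-style vector quantization (sketch, not the paper's method)."""
    rng = np.random.default_rng(seed)
    # Initialise the codebook with random entries drawn from the bank.
    codebook = bank[rng.choice(len(bank), codebook_size, replace=False)]
    for _ in range(n_iters):
        # Assign each memory vector to its nearest code.
        dists = np.linalg.norm(bank[:, None, :] - codebook[None, :, :], axis=-1)
        codes = dists.argmin(axis=1)
        # Move each code to the centroid of its assigned vectors.
        for k in range(codebook_size):
            members = bank[codes == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, codes

# A 1000-entry bank compressed to 8 shared codes (0.8% of the vectors).
bank = np.random.default_rng(1).normal(size=(1000, 16))
codebook, codes = compress_memory_bank(bank, codebook_size=8)
print(codebook.shape)  # (8, 16)
```

Storing only the codebook plus integer assignments is what makes sub-percent bank sizes plausible; the reconstruction error is what the paper's learned components would be trained to absorb.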

🔹 Publication Date: Published on Jan 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.00756
• PDF: https://arxiv.org/pdf/2601.00756
• GitHub: https://github.com/Thomkat/MBC

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#LLMs #ContinualLearning #MemoryCompression #MachineLearning #DeepLearning
Dynamic Long Context Reasoning over Compressed Memory via End-to-End Reinforcement Learning

📝 Summary:
This paper introduces a cognitively inspired framework for long-context LLM reasoning. It uses chunk-wise memory compression and selective recall, optimized via end-to-end reinforcement learning, to improve accuracy and efficiency on contexts of up to 1.75M tokens.
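The two mechanisms named in the summary can be sketched in a few lines: compress the long context chunk-by-chunk into one memory vector each, then recall only the chunks most relevant to a query. All names and sizes below are invented for illustration; mean pooling and cosine similarity stand in for the learned compressor and the RL-trained recall policy.

```python
import numpy as np

def compress_chunks(token_embs, chunk_size):
    """Compress a long context chunk-wise into one memory vector per chunk.
    Mean pooling is a stand-in for the paper's learned compressor."""
    n = len(token_embs) // chunk_size
    return token_embs[:n * chunk_size].reshape(n, chunk_size, -1).mean(axis=1)

def selective_recall(query, chunk_mems, top_k):
    """Recall the top-k compressed chunks most similar to the query.
    (The paper optimizes this selection end-to-end with RL; here it is
    a fixed cosine-similarity heuristic.)"""
    sims = chunk_mems @ query / (
        np.linalg.norm(chunk_mems, axis=1) * np.linalg.norm(query) + 1e-9)
    return np.argsort(-sims)[:top_k]

rng = np.random.default_rng(0)
ctx = rng.normal(size=(4096, 32))            # long-context token embeddings
mems = compress_chunks(ctx, chunk_size=64)   # 64 compressed chunk memories
picked = selective_recall(rng.normal(size=32), mems, top_k=4)
print(mems.shape, picked.shape)  # (64, 32) (4,)
```

Only the recalled chunks would be fed back into the model, which is how the approach keeps per-step cost bounded even as the raw context grows toward millions of tokens.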

🔹 Publication Date: Published on Feb 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.08382
• PDF: https://arxiv.org/pdf/2602.08382

==================================

#LLM #ReinforcementLearning #LongContext #MemoryCompression #AIResearch