ML Research Hub
32.9K subscribers
4.72K photos
292 videos
24 files
5.1K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Hybrid Linear Attention Done Right: Efficient Distillation and Effective Architectures for Extremely Long Contexts

📝 Summary:
HALO efficiently converts Transformer models to RNN-attention hybrids using minimal training data. This enables superior long-context performance and efficiency, showcased by the HypeNet architecture and its application to the Qwen3 series.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22156
• PDF: https://arxiv.org/pdf/2601.22156
• Github: https://www.github.com/THUNLP/hybrid-linear-attention

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#HybridAttention #LongContext #Transformers #LLMs #DeepLearning
FROST: Filtering Reasoning Outliers with Attention for Efficient Reasoning

📝 Summary:
FROST is an attention-aware method that improves reasoning efficiency by pruning uncritical paths and removing reasoning outliers, leading to reduced token usage and improved accuracy.

🔹 Publication Date: Published on Jan 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.19001
• PDF: https://arxiv.org/pdf/2601.19001

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
ECO: Quantized Training without Full-Precision Master Weights

📝 Summary:
An error-compensating optimizer eliminates the memory overhead of full-precision master weights in quantized LLM training while maintaining near-lossless accuracy.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22101
• PDF: https://arxiv.org/pdf/2601.22101

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Mechanistic Data Attribution: Tracing the Training Origins of Interpretable LLM Units

📝 Summary:
MDA traces interpretable LLM units to training data using influence functions. Intervening on high-influence samples causally modulates circuit emergence, especially with structural data. This shows a direct link between data, circuit formation, and in-context learning.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21996
• PDF: https://arxiv.org/pdf/2601.21996
• Github: https://github.com/chenjianhuii/Mechanistic-Data-Attribution

==================================

#LLM #AI #MachineLearning #MechanisticInterpretability #DataAttribution
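The MDA post above rests on influence functions: scoring how much up-weighting each training sample would change behavior on a target example. A minimal sketch on ridge regression, using the classic formula I(i) = -g_test^T H^{-1} g_i — a toy stand-in, not the paper's LLM-scale estimator:

```python
import numpy as np

def influence_on_test_loss(X, y, test_x, test_y, ridge=1e-3):
    """Influence-function sketch on ridge regression: how much would
    up-weighting each training sample change the loss on one test point?
    Uses I(i) = -g_test^T H^{-1} g_i with squared loss."""
    n, d = X.shape
    # Ridge solution and Hessian of the mean training objective.
    w = np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ y)
    H = X.T @ X / n + ridge * np.eye(d)
    g_test = (test_x @ w - test_y) * test_x        # grad of test-point loss
    infl = np.empty(n)
    for i in range(n):
        g_i = (X[i] @ w - y[i]) * X[i]             # grad of sample i's loss
        infl[i] = -g_test @ np.linalg.solve(H, g_i)
    return infl
```

Up-weighting a training sample that duplicates the test point should reduce the test loss, so its influence score comes out non-positive.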
Scalable Power Sampling: Unlocking Efficient, Training-Free Reasoning for LLMs via Distribution Sharpening

📝 Summary:
This paper proposes a training-free method to sharpen LLM distributions, improving reasoning. It approximates the global power distribution with a token-level scaled low-temperature one. This achieves reinforcement learning-like performance with significantly lower computational cost and reduced ...

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21590
• PDF: https://arxiv.org/pdf/2601.21590

==================================

#LLMs #AI #MachineLearning #NLP #DeepLearning
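The core trick in the power-sampling post above — approximating the global power distribution with a token-level scaled low-temperature one — can be sketched in a few lines, since sampling from normalized p(x)^alpha per token is just softmax over alpha-scaled logits (temperature T = 1/alpha):

```python
import numpy as np

def power_sample_step(logits, alpha=2.0, rng=None):
    """Sample one token from a sharpened distribution.

    Scaling the logits by alpha samples from the normalized power
    distribution p(x)^alpha at the token level (temperature 1/alpha),
    the cheap local surrogate for the sequence-level power distribution.
    """
    rng = rng or np.random.default_rng()
    scaled = alpha * np.asarray(logits, dtype=float)
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs)), probs
```

Raising alpha concentrates probability mass on the model's top tokens without any retraining, which is the whole training-free appeal.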
Discovering Hidden Gems in Model Repositories

📝 Summary:
Many superior, overlooked models exist in public repositories. This paper proposes a Multi-Armed Bandit approach with shared query sets and aggressive elimination to rapidly identify these high-performing hidden gems. This method accelerates discovery over 50x, finding top models with very few queries.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22157
• PDF: https://arxiv.org/pdf/2601.22157

==================================

#MachineLearning #DataScience #MultiArmedBandit #ModelDiscovery #AIResearch
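The bandit search in the post above combines two ingredients: every surviving model is scored on the same shared query batch, and the weakest half is eliminated each round. A successive-elimination sketch under those assumptions — `models`, `queries`, and `evaluate(model, query) -> score` are hypothetical stand-ins for a real repository, query pool, and benchmark harness:

```python
import numpy as np

def find_hidden_gem(models, queries, evaluate, keep_frac=0.5, batch=8):
    """Successive elimination with shared query sets: score all surviving
    models on the SAME batch of queries, then aggressively drop the
    bottom half, until one model remains or queries run out."""
    survivors = list(models)
    scores = {m: [] for m in survivors}
    q = 0
    while len(survivors) > 1 and q < len(queries):
        shared = queries[q:q + batch]               # shared query set
        q += batch
        for m in survivors:
            scores[m].extend(evaluate(m, x) for x in shared)
        survivors.sort(key=lambda m: -np.mean(scores[m]))
        survivors = survivors[:max(1, int(len(survivors) * keep_frac))]
    return survivors[0]
```

Because weak models are cut early, total evaluations grow roughly logarithmically in the candidate count rather than linearly — the source of the claimed speedup.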
FineInstructions: Scaling Synthetic Instructions to Pre-Training Scale

📝 Summary:
FineInstructions generates billions of synthetic instruction-response pairs from unstructured text using real user queries. Pre-training LLMs solely on this large synthetic dataset from scratch outperforms traditional methods and other synthetic techniques on response quality benchmarks. This new...

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22146
• PDF: https://arxiv.org/pdf/2601.22146

🔹 Models citing this paper:
https://huggingface.co/fineinstructions/query_templatizer
https://huggingface.co/fineinstructions/instruction_template_retrieval_embedding
https://huggingface.co/fineinstructions/template_instantiator

Datasets citing this paper:
https://huggingface.co/datasets/fineinstructions/finetemplates
https://huggingface.co/datasets/fineinstructions/fineinstructions_nemotron
https://huggingface.co/datasets/fineinstructions/real_queries

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion

📝 Summary:
This paper introduces JUST-DUB-IT, a single-model approach for high-quality video dubbing. It uses a LoRA adaptation of an audio-video diffusion model to generate translated audio and synchronized facial motion. Synthetic multilingual video training preserves speaker identity and improves lip sync.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22143
• PDF: https://arxiv.org/pdf/2601.22143

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
KromHC: Manifold-Constrained Hyper-Connections with Kronecker-Product Residual Matrices

📝 Summary:
KromHC addresses training instability and scalability issues in hyper-connections by using Kronecker products to parametrize residual matrices with reduced parameter complexity.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21579
• PDF: https://arxiv.org/pdf/2601.21579

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Benchmarking Reward Hack Detection in Code Environments via Contrastive Analysis

📝 Summary:
A new benchmark, TRACE, was developed to detect reward hacks in code generation environments. Contrastive anomaly detection significantly outperforms isolated classification, though models struggle more with semantically contextualized hacks.

🔹 Publication Date: Published on Jan 27

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.20103
• PDF: https://arxiv.org/pdf/2601.20103

Datasets citing this paper:
https://huggingface.co/datasets/PatronusAI/trace-dataset

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Shaping capabilities with token-level data filtering

📝 Summary:
Token filtering during pretraining effectively reduces unwanted language model capabilities while maintaining alignment, becoming more effective at larger scales and tolerating noisy labels with suffi...

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21571
• PDF: https://arxiv.org/pdf/2601.21571

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
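The token-filtering idea above is mechanically simple: tokens flagged as expressing an unwanted capability are masked out of the pretraining loss, so they contribute no gradient signal while the rest of the document trains normally. A minimal sketch — the flagging itself (a classifier or heuristic labeler) is assumed, not shown:

```python
import numpy as np

def filtered_pretraining_loss(token_losses, unwanted_mask):
    """Token-level data filtering sketch: average the per-token loss
    only over tokens NOT flagged as unwanted, zeroing the gradient
    signal from the flagged ones."""
    losses = np.asarray(token_losses, dtype=float)
    keep = ~np.asarray(unwanted_mask, dtype=bool)
    if keep.sum() == 0:
        return 0.0                                  # whole sequence filtered
    return float(losses[keep].sum() / keep.sum())   # mean over kept tokens
```

Masking at the token level rather than dropping whole documents is what lets the rest of each document keep contributing to training.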
LoL: Longer than Longer, Scaling Video Generation to Hour

📝 Summary:
Researchers addressed sink-collapse in autoregressive video generation, a failure mode where content reverts to a sink frame due to a RoPE and multi-head attention conflict. Their training-free multi-head RoPE jitter enables real-time, streaming video generation up to 12 hours without quality decay.

🔹 Publication Date: Published on Jan 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.16914
• PDF: https://arxiv.org/pdf/2601.16914

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
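The fix in the LoL post above — a training-free multi-head RoPE jitter — amounts to giving each attention head a slightly different positional phase so heads stop sharing identical rotary angles. A sketch under stated assumptions: the uniform per-head offset below is an assumed form, not the paper's exact jitter schedule.

```python
import numpy as np

def rope_angles(positions, head_dim, base=10000.0):
    """Standard RoPE rotation angles: one angle per (position, freq pair)."""
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    return np.outer(positions, inv_freq)            # (seq, head_dim/2)

def jittered_rope_angles(positions, head_dim, n_heads, scale=0.5, rng=None):
    """Multi-head RoPE jitter sketch: add a small random per-head phase
    offset to the shared rotary angles, so the heads no longer agree on
    identical positional phases."""
    rng = rng or np.random.default_rng(0)
    base_angles = rope_angles(positions, head_dim)          # (seq, d/2)
    jitter = rng.uniform(-scale, scale, size=(n_heads, 1, 1))
    return base_angles[None, :, :] + jitter                 # (heads, seq, d/2)
```

Because the jitter only perturbs rotation angles, it can be applied to a frozen model at inference time — consistent with the post's "training-free" claim.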
Latent Adversarial Regularization for Offline Preference Optimization

📝 Summary:
GANPO uses latent-space regularization via adversarial divergence minimization to improve language model preference optimization. It offers more robust structural feedback than token-level methods, performing better under distributional shift and noise with minor overhead.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22083
• PDF: https://arxiv.org/pdf/2601.22083
• Github: https://github.com/enyijiang/GANPO

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Flow-based Extremal Mathematical Structure Discovery

📝 Summary:
FlowBoost is a new generative framework for discovering extremal geometric structures. It uses flow-matching, policy optimization, and local search. This closed-loop approach efficiently finds new best results for geometric optimization problems, outperforming prior methods like AlphaEvolve.

🔹 Publication Date: Published on Jan 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.18005
• PDF: https://arxiv.org/pdf/2601.18005

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Reinforcement Learning from Meta-Evaluation: Aligning Language Models Without Ground-Truth Labels

📝 Summary:
Reinforcement Learning from Meta-Evaluation (RLME) trains language models without ground-truth labels. It uses an evaluator's judgments on natural-language meta-questions as the reward. RLME achieves accuracy and efficiency comparable to label-based methods, broadening RL's applicability to LLM training.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21268
• PDF: https://arxiv.org/pdf/2601.21268

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
EEG Foundation Models: Progresses, Benchmarking, and Open Problems

📝 Summary:
This paper benchmarks EEG foundation models, finding that specialist models remain competitive. Linear probing is often insufficient, and larger models don't always improve generalization under current data conditions.

🔹 Publication Date: Published on Jan 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.17883
• PDF: https://arxiv.org/pdf/2601.17883
• Github: https://github.com/Dingkun0817/EEG-FM-Benchmark

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
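Linear probing, the evaluation protocol the benchmark above finds often insufficient, means fitting only a linear head on frozen foundation-model features. A minimal logistic-regression sketch (plain gradient descent, no library classifier assumed):

```python
import numpy as np

def linear_probe(train_feats, train_labels, test_feats, lr=0.5, epochs=500):
    """Linear-probing sketch: the backbone stays frozen; only a
    logistic-regression head (weights + bias) is fit on its features,
    then used to predict labels for held-out features."""
    X = np.c_[np.asarray(train_feats, float), np.ones(len(train_feats))]
    y = np.asarray(train_labels, float)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):                         # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(X)
    Xt = np.c_[np.asarray(test_feats, float), np.ones(len(test_feats))]
    return (1.0 / (1.0 + np.exp(-Xt @ w)) > 0.5).astype(int)
```

Because the probe can only draw a linear boundary in feature space, it underestimates models whose useful information is nonlinearly encoded — one reading of why the benchmark finds it insufficient.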
Stable Video Infinity: Infinite-Length Video Generation with Error Recycling

📝 Summary:
Stable Video Infinity (SVI) generates infinite-length videos with high temporal consistency and controllable storylines. It uses Error-Recycling Fine-Tuning, teaching the Diffusion Transformer to identify and correct its own errors by recycling self-generated errors.

🔹 Publication Date: Published on Oct 10, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.09212
• PDF: https://arxiv.org/pdf/2510.09212
• Project Page: https://stable-video-infinity.github.io/homepage/
• Github: https://github.com/vita-epfl/Stable-Video-Infinity

🔹 Models citing this paper:
https://huggingface.co/vita-video-gen/svi-model

Datasets citing this paper:
https://huggingface.co/datasets/vita-video-gen/svi-benchmark
https://huggingface.co/datasets/mzwydf/svi-benchmark

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #DiffusionModels #GenerativeAI #ComputerVision #AIResearch
Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length

📝 Summary:
Live Avatar enables real-time, high-fidelity, infinite-length avatar generation using a 14B-parameter diffusion model. It employs Timestep-forcing Pipeline Parallelism and the Rolling Sink Frame Mechanism to overcome limitations, achieving 20 FPS on 5 GPUs. This is the first practical system at t...

🔹 Publication Date: Published on Dec 4, 2025

🔹 Paper Links:
• arXiv Page: https://arxivexplained.com/papers/live-avatar-streaming-real-time-audio-driven-avatar-generation-with-infinite-length
• PDF: https://arxiv.org/pdf/2512.04677
• Project Page: https://liveavatar.github.io/
• Github: https://github.com/Alibaba-Quark/LiveAvatar

🔹 Models citing this paper:
https://huggingface.co/Quark-Vision/Live-Avatar

Spaces citing this paper:
https://huggingface.co/spaces/ahm98alex/liveavatar-test
https://huggingface.co/spaces/sdavignon/liveavatar

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#LiveAvatar #AvatarGeneration #RealtimeAI #DiffusionModels #GenerativeAI