ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
SoundWeaver: Semantic Warm-Starting for Text-to-Audio Diffusion Serving

📝 Summary:
SoundWeaver accelerates text-to-audio diffusion generation by caching semantically similar audio and dynamically skipping function evaluations, achieving significant latency reduction with minimal quality loss.

🔹 Publication Date: Mar 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07865
• PDF: https://arxiv.org/pdf/2603.07865
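
The warm-starting idea can be sketched as follows. This is a toy illustration of semantic caching, not the paper's implementation: the cosine threshold, the 4-dim latent, and the placeholder `denoise` step are all assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

class SemanticCache:
    """Toy cache mapping prompt embeddings to intermediate diffusion latents."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, latent, resume_step)

    def lookup(self, emb):
        # Return a cached latent (and the step to resume from) if similar enough.
        best = max(self.entries, key=lambda e: cosine(emb, e[0]), default=None)
        if best is not None and cosine(emb, best[0]) >= self.threshold:
            return best[1].copy(), best[2]
        return None, 0

    def store(self, emb, latent, step):
        self.entries.append((emb, latent.copy(), step))

def generate(cache, emb, total_steps=50, denoise=lambda z: 0.9 * z):
    latent, start = cache.lookup(emb)
    if latent is None:                      # cache miss: start from pure noise
        latent, start = np.random.randn(4), 0
    for step in range(start, total_steps):
        latent = denoise(latent)
        if step == total_steps // 2:        # stash a mid-trajectory latent
            cache.store(emb, latent, step + 1)
    return latent, total_steps - start      # (result, denoising steps executed)
```

A second request with a semantically similar prompt resumes mid-trajectory and runs roughly half the denoising steps, which is where the latency reduction comes from.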

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
DreamVideo-Omni: Omni-Motion Controlled Multi-Subject Video Customization with Latent Identity Reinforcement Learning

📝 Summary:
DreamVideo-Omni is a unified framework for video synthesis that enables precise multi-subject identity control and multi-granularity motion manipulation through a two-stage training approach.

🔹 Publication Date: Mar 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.12257
• PDF: https://arxiv.org/pdf/2603.12257
• Project Page: https://dreamvideo-omni.github.io/

==================================

Examining Reasoning LLMs-as-Judges in Non-Verifiable LLM Post-Training

📝 Summary:
Research examines the effectiveness of reasoning versus non-reasoning large language model judges in reinforcement learning-based alignment, finding that reasoning judges help prevent reward hacking.

🔹 Publication Date: Mar 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.12246
• PDF: https://arxiv.org/pdf/2603.12246

==================================

One Model, Many Budgets: Elastic Latent Interfaces for Diffusion Transformers

📝 Summary:
Elastic Latent Interface Transformer (ELIT) decouples compute from image resolution in diffusion transformers by introducing learnable latent tokens that adaptively prioritize important regions, enabling a single model to serve many compute budgets.

🔹 Publication Date: Mar 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.12245
• PDF: https://arxiv.org/pdf/2603.12245
• Project Page: https://snap-research.github.io/elit/
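
The decoupling can be sketched with a Perceiver-style cross-attention read: a fixed set of latent tokens attends over however many patch tokens the image resolution produces, so downstream compute depends only on the latent count. A minimal NumPy sketch (the dimensions and single-head attention are assumptions, not ELIT's actual architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attend(latents, patches):
    """A fixed set of latent tokens reads from a variable number of patch tokens."""
    scores = latents @ patches.T / np.sqrt(latents.shape[1])
    return softmax(scores) @ patches  # shape (n_latents, d), whatever n_patches is

rng = np.random.default_rng(0)
d, n_latents = 16, 8
latents = rng.standard_normal((n_latents, d))
low = cross_attend(latents, rng.standard_normal((64, d)))     # low-res: 64 patches
high = cross_attend(latents, rng.standard_normal((1024, d)))  # high-res: 1024 patches
# The transformer body downstream sees 8 tokens in both cases.
```

Varying the number of latent tokens then gives one model many compute budgets.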

==================================

Multi-Task Reinforcement Learning for Enhanced Multimodal LLM-as-a-Judge

📝 Summary:
A Multi-Task Reinforcement Learning framework improves multimodal large language models' judgment consistency and generalization across diverse visual tasks.

🔹 Publication Date: Mar 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11665
• PDF: https://arxiv.org/pdf/2603.11665

==================================

Attention Sinks Are Provably Necessary in Softmax Transformers: Evidence from Trigger-Conditional Tasks

📝 Summary:
Softmax self-attention models exhibit attention sinks, where probability mass concentrates on fixed positions due to normalization constraints, while ReLU attention avoids this behavior.

🔹 Publication Date: Mar 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11487
• PDF: https://arxiv.org/pdf/2603.11487
• Github: https://github.com/YuvMilo/sinks-are-provably-necessary
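
The core constraint is easy to see numerically: softmax attention weights must sum to 1 even when no key matches the query, so the mass has to land somewhere, whereas ReLU attention can output all zeros. A toy single-query demo (the vectors are made up for illustration, not from the paper):

```python
import numpy as np

def softmax_attn(q, K):
    s = q @ K.T
    e = np.exp(s - s.max())
    return e / e.sum()              # weights must sum to 1, no matter what

def relu_attn(q, K):
    return np.maximum(q @ K.T, 0)   # no sum-to-one constraint

# A query that matches none of the keys: every attention score is negative.
q = np.array([1.0, 0.0])
K = np.array([[-1.0, 0.0], [-1.0, 0.5], [-1.0, -0.5]])
w_soft = softmax_attn(q, K)  # uniform mass: it has to go somewhere
w_relu = relu_attn(q, K)     # all zeros: "attend to nothing" is expressible
```

Under softmax, that stranded mass tends to collect on fixed positions, which is the sink behavior the paper analyzes in trigger-conditional tasks.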

==================================

Understanding by Reconstruction: Reversing the Software Development Process for LLM Pretraining

📝 Summary:
Large language models trained on reconstructed agent trajectories from multi-agent simulations show improved performance in long-context understanding, coding proficiency, and agentic capabilities.

🔹 Publication Date: Mar 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11103
• PDF: https://arxiv.org/pdf/2603.11103

==================================

IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse

📝 Summary:
IndexCache reduces sparse attention computation in large language models by reusing top-k token selections across layers, achieving significant speedups with minimal quality loss.

🔹 Publication Date: Mar 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.12201
• PDF: https://arxiv.org/pdf/2603.12201
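
The reuse pattern can be sketched as follows: recompute the top-k key indices only at periodic "anchor" layers and reuse the cached indices in between, so the expensive selection runs a fraction of the time. A toy single-query sketch (the reuse interval, shapes, and single-head attention are assumptions, not the paper's exact scheme):

```python
import numpy as np

def topk_indices(q, K, k):
    return np.argsort(q @ K.T)[-k:]          # indices of the k highest scores

def sparse_attention_stack(qs, Ks, k, reuse_every=4):
    """Recompute the top-k token selection only every `reuse_every` layers."""
    outputs, idx, n_selections = [], None, 0
    for layer, (q, K) in enumerate(zip(qs, Ks)):
        if layer % reuse_every == 0:
            idx = topk_indices(q, K, k)      # fresh selection at anchor layers
            n_selections += 1
        sub = K[idx]                         # reuse cached indices elsewhere
        w = np.exp(q @ sub.T)
        w /= w.sum()
        outputs.append(w @ sub)
    return outputs, n_selections

rng = np.random.default_rng(1)
qs = [rng.standard_normal(8) for _ in range(12)]
Ks = [rng.standard_normal((32, 8)) for _ in range(12)]
outs, n_sel = sparse_attention_stack(qs, Ks, k=4)  # 12 layers, only 3 selections
```

With `reuse_every=4`, the top-k search runs in 3 of 12 layers instead of all 12, which is the source of the speedup.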

==================================

EVATok: Adaptive Length Video Tokenization for Efficient Visual Autoregressive Generation

📝 Summary:
EVATok is a framework for efficient video tokenization that adapts token assignment based on video content, improving reconstruction quality and generation efficiency through learned routers and adaptive token budgets.

🔹 Publication Date: Mar 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.12267
• PDF: https://arxiv.org/pdf/2603.12267
• Project Page: https://silentview.github.io/EVATok/
• Github: https://github.com/HKU-MMLab/EVATok

🔹 Models citing this paper:
https://huggingface.co/YuuTennYi/EVATok
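
The adaptive-length idea can be sketched with a toy router that assigns a token budget from content complexity. EVATok's router is learned; the hand-written motion heuristic, thresholds, and budget tiers below are placeholders for illustration only:

```python
import numpy as np

def route_token_budget(frames, budgets=(64, 128, 256)):
    """Toy router: assign more tokens to clips with more temporal change."""
    motion = float(np.abs(np.diff(frames, axis=0)).mean())
    if motion < 0.1:
        return budgets[0]   # near-static clip: few tokens suffice
    if motion < 0.5:
        return budgets[1]
    return budgets[2]       # highly dynamic clip: full budget

static = np.ones((8, 16, 16))                            # 8 identical frames
dynamic = np.random.default_rng(0).random((8, 16, 16))   # rapidly changing frames
```

Static clips compress into short token sequences while dynamic clips keep a larger budget, which is how adaptive assignment buys both reconstruction quality and generation efficiency.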

==================================

DIVE: Scaling Diversity in Agentic Task Synthesis for Generalizable Tool Use

📝 Summary:
Training Qwen3-8B on DIVE data improves performance across out-of-distribution benchmarks, with diversity scaling outperforming quantity scaling even with less data.

🔹 Publication Date: Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11076
• PDF: https://arxiv.org/pdf/2603.11076
• Project Page: https://sheep333c.github.io/DIVE/
• Github: https://github.com/sheep333c/DIVE

==================================
