ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Just-in-Time: Training-Free Spatial Acceleration for Diffusion Transformers

📝 Summary:
Diffusion Transformers face high computational costs during iterative sampling, which this work addresses by introducing a spatial-domain acceleration framework that uses sparse anchor tokens and dete...

🔹 Publication Date: Published on Mar 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10744
• PDF: https://arxiv.org/pdf/2603.10744

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR

📝 Summary:
A contrastive learning mechanism integrated into policy optimization enhances LLM reasoning by regularizing correct reasoning paths and reducing hallucinations.

🔹 Publication Date: Published on Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10101
• PDF: https://arxiv.org/pdf/2603.10101
• Github: https://github.com/Qwen-Applications/CLIPO
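The mechanism sketched in the summary — contrastively regularizing correct reasoning paths during policy optimization — can be illustrated with a generic InfoNCE-style loss over reasoning-trace embeddings. The function name, cosine similarity, and temperature below are illustrative assumptions, not the exact CLIPO objective:

```python
import numpy as np

def contrastive_reasoning_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss: pull an anchor reasoning embedding toward
    embeddings of correct paths and away from incorrect ones.
    Generic sketch only, not the exact CLIPO objective."""
    def cos(a, b):
        return (a @ b.T) / (np.linalg.norm(a, axis=-1, keepdims=True)
                            * np.linalg.norm(b, axis=-1, keepdims=True).T)
    a = anchor[None, :]
    pos = np.exp(cos(a, positives) / tau).sum()
    neg = np.exp(cos(a, negatives) / tau).sum()
    return -np.log(pos / (pos + neg))
```

Minimizing this term pulls the anchor toward correct-path embeddings and away from incorrect ones; how CLIPO combines such a term with the policy-gradient objective is detailed in the paper itself.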

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Code-Space Response Oracles: Generating Interpretable Multi-Agent Policies with Large Language Models

📝 Summary:
Code-Space Response Oracles replace traditional neural network policies with human-readable code generated by large language models, enabling interpretable and explainable multi-agent reinforcement le...

🔹 Publication Date: Published on Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10098
• PDF: https://arxiv.org/pdf/2603.10098

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Bootstrapping Exploration with Group-Level Natural Language Feedback in Reinforcement Learning

📝 Summary:
Language feedback is leveraged in reinforcement learning to improve exploration efficiency and sample utilization through grouped critique aggregation and joint generation-refinement optimization.

🔹 Publication Date: Published on Mar 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.04597
• PDF: https://arxiv.org/pdf/2603.04597
• Github: https://github.com/LuckyyySTA/GOLF

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation

📝 Summary:
Researchers developed RbtAct, a method that uses rebuttal responses to improve the actionability of AI-generated peer-review feedback by training a language model to produce specific, implementable co...

🔹 Publication Date: Published on Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.09723
• PDF: https://arxiv.org/pdf/2603.09723
• Github: https://github.com/formula12/RbtAct

Datasets citing this paper:
https://huggingface.co/datasets/shwu22/RMR-75K

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
V2M-Zero: Zero-Pair Time-Aligned Video-to-Music Generation

📝 Summary:
V2M-Zero enables video-to-music generation with improved temporal alignment by using modality-specific event curves derived from pretrained encoders, achieving superior audio quality and synchronizati...

🔹 Publication Date: Published on Mar 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11042
• PDF: https://arxiv.org/pdf/2603.11042
• Project Page: https://genjib.github.io/v2m_zero/

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Can Large Language Models Keep Up? Benchmarking Online Adaptation to Continual Knowledge Streams

📝 Summary:
OAKS is a new benchmark to test how LLMs adapt to real-time, evolving information streams. Current models struggle significantly, showing delays and distraction in tracking dynamic knowledge.

🔹 Publication Date: Published on Mar 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07392
• PDF: https://arxiv.org/pdf/2603.07392
• Github: https://github.com/kaistAI/OAKS

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
CodePercept: Code-Grounded Visual STEM Perception for MLLMs

📝 Summary:
MLLMs struggle with STEM visual reasoning due to perceptual limitations rather than reasoning deficiencies, and enhancing perception through a code-as-perception paradigm improves performance.

🔹 Publication Date: Published on Mar 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10757
• PDF: https://arxiv.org/pdf/2603.10757
• Github: https://github.com/TongkunGuan/Qwen-CodePercept

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
MA-EgoQA: Question Answering over Egocentric Videos from Multiple Embodied Agents

📝 Summary:
Multi-agent systems require understanding multiple long-horizon egocentric videos simultaneously, necessitating new benchmarks and models for system-level comprehension.

🔹 Publication Date: Published on Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.09827
• PDF: https://arxiv.org/pdf/2603.09827
• Project Page: https://ma-egoqa.github.io/
• Github: https://github.com/KangsanKim07/MA-EgoQA

Datasets citing this paper:
https://huggingface.co/datasets/KangsanKim71/MA-EgoQA

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Any to Full: Prompting Depth Anything for Depth Completion in One Stage

📝 Summary:
A novel one-stage depth completion framework that uses scale-prompting adaptation of pretrained monocular depth estimation models to handle varying depth sparsity and irregular distributions more effi...

🔹 Publication Date: Published on Mar 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.05711
• PDF: https://arxiv.org/pdf/2603.05711

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
LLM2Vec-Gen: Generative Embeddings from Large Language Models

📝 Summary:
LLM2Vec-Gen introduces a self-supervised method for text embedding that represents model responses through trainable special tokens, achieving superior performance on MTEB while reducing harmful conte...

🔹 Publication Date: Published on Mar 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10913
• PDF: https://arxiv.org/pdf/2603.10913
• Project Page: https://mcgill-nlp.github.io/llm2vec-gen/
• Github: https://github.com/McGill-NLP/llm2vec-gen

🔹 Models citing this paper:
https://huggingface.co/McGill-NLP/LLM2Vec-Gen-Qwen3-06B
https://huggingface.co/McGill-NLP/LLM2Vec-Gen-Qwen3-17B
https://huggingface.co/McGill-NLP/LLM2Vec-Gen-Qwen3-4B

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
TADA: A Generative Framework for Speech Modeling via Text-Acoustic Dual Alignment

📝 Summary:
A novel tokenization scheme synchronizes acoustic features with text tokens in TTS systems, enabling unified modeling and reduced hallucinations through flow matching and text-only guidance.

🔹 Publication Date: Published on Feb 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.23068
• PDF: https://arxiv.org/pdf/2602.23068
• Project Page: https://www.hume.ai/blog/opensource-tada
• Github: https://github.com/HumeAI/tada

🔹 Models citing this paper:
https://huggingface.co/HumeAI/tada-1b
https://huggingface.co/HumeAI/tada-3b-ml
https://huggingface.co/HumeAI/tada-codec

Spaces citing this paper:
https://huggingface.co/spaces/HumeAI/tada
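The flow-matching ingredient named in the summary follows a standard recipe: interpolate between a source and a target sample at a random time, then regress the model's predicted velocity toward the constant displacement. The sketch below is that generic recipe, not TADA's implementation, and its text-acoustic conditioning is omitted:

```python
import numpy as np

def flow_matching_loss(model, x0, x1, rng):
    """Conditional flow matching: sample t, form x_t = (1-t)*x0 + t*x1,
    and regress the model's velocity toward the target (x1 - x0).
    Generic sketch; TADA's conditioning on text tokens is omitted."""
    t = rng.uniform(size=(x0.shape[0], 1))   # per-sample random time
    xt = (1 - t) * x0 + t * x1               # linear interpolant
    v_target = x1 - x0                       # constant velocity target
    v_pred = model(xt, t)
    return np.mean((v_pred - v_target) ** 2)
```

A model that exactly predicts the displacement field drives this loss to zero, which is the sanity check used below.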

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Flash-KMeans: Fast and Memory-Efficient Exact K-Means

📝 Summary:
Flash-KMeans enables efficient online k-means clustering on GPUs through novel kernel-level optimizations that eliminate I/O bottlenecks and atomic write contention.

🔹 Publication Date: Published on Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.09229
• PDF: https://arxiv.org/pdf/2603.09229
• Github: https://github.com/svg-project/flash-kmeans
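For context, here is the plain exact assignment step that kernel-level k-means work like this optimizes: chunked nearest-centroid search using the expansion ||x − c||² = ||x||² − 2x·c + ||c||², so the full (N, K, D) difference tensor is never materialized. This is a NumPy baseline sketch, not the paper's fused GPU kernels:

```python
import numpy as np

def kmeans_assign(X, C, chunk=1024):
    """Exact nearest-centroid assignment, processed in chunks.
    Since ||x||^2 is constant per row, argmin over centroids only needs
    ||c||^2 - 2 x.c, avoiding the (N, K, D) difference tensor."""
    c_sq = (C ** 2).sum(1)                  # (K,) squared centroid norms
    labels = np.empty(len(X), dtype=np.int64)
    for i in range(0, len(X), chunk):
        xb = X[i:i + chunk]
        d = c_sq - 2.0 * xb @ C.T           # partial squared distances
        labels[i:i + chunk] = d.argmin(1)
    return labels
```

The chunking bounds peak memory at O(chunk × K); Flash-KMeans goes further by fusing this computation into GPU kernels so the distance matrix never round-trips through global memory.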

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
ReMix: Reinforcement routing for mixtures of LoRAs in LLM finetuning

📝 Summary:
Researchers address imbalance in routing weights of Mixture-of-LoRAs models by proposing Reinforcement Routing (ReMix), which uses non-learnable weights and reinforcement learning techniques to improv...

🔹 Publication Date: Published on Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10160
• PDF: https://arxiv.org/pdf/2603.10160
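The post does not spell out ReMix's non-learnable routing scheme, so the sketch below is one plausible instantiation only: weight each LoRA by the softmax of cosine similarity between the hidden state and a fixed per-adapter anchor vector. The anchor mechanism and all names here are hypothetical:

```python
import numpy as np

def route_weights(h, anchors, tau=1.0):
    """Non-learnable routing for a mixture of LoRAs: softmax over cosine
    similarities between hidden state h and fixed per-adapter anchors.
    Illustrative guess; the paper's concrete scheme may differ."""
    sims = anchors @ h / (np.linalg.norm(anchors, axis=1)
                          * np.linalg.norm(h) + 1e-8)
    z = np.exp((sims - sims.max()) / tau)   # stable softmax
    return z / z.sum()
```

Because the weights carry no gradients of their own, any adaptation of routing behavior would have to come from elsewhere — which is where the summary's reinforcement learning component presumably enters.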

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Prism-Δ: Differential Subspace Steering for Prompt Highlighting in Large Language Models

📝 Summary:
PRISM-Δ extracts discriminative steering directions by decomposing cross-covariance differences, uses softplus weights for attention heads, and extends to value representations for improved long-conte...

🔹 Publication Date: Published on Mar 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10705
• PDF: https://arxiv.org/pdf/2603.10705
• Project Page: https://yuyaoge.github.io/PRISM-DELTA/
• Github: https://github.com/YuyaoGe/PRISM-DELTA
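The "softplus weights for attention heads" mentioned in the summary can be sketched as a nonnegative per-head gate on added steering directions: softplus keeps each head's contribution from flipping sign while still allowing it to be damped toward zero. The gating site and shapes below are assumptions, not the paper's exact formulation:

```python
import numpy as np

def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, x)

def steer_heads(head_outputs, directions, theta, alpha=1.0):
    """Add a steering direction to each attention head's output, scaled
    by a nonnegative softplus(theta_h) gate. Sketch only; where the gate
    is applied in PRISM-Delta is an assumption."""
    w = softplus(theta)                     # (H,) nonnegative head gates
    return head_outputs + alpha * w[:, None] * directions
```

With a very negative `theta_h` the gate vanishes and the head is left untouched, which gives a smooth way to switch individual heads out of the steering set.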

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
In-Context Reinforcement Learning for Tool Use in Large Language Models

📝 Summary:
In-Context Reinforcement Learning (ICRL) is an RL-only framework for LLMs to use external tools, eliminating costly supervised fine-tuning. It teaches tool use through in-context examples during training, gradually reducing them. ICRL proves to be a scalable, data-efficient, and state-of-the-art ap...

🔹 Publication Date: Published on Mar 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.08068
• PDF: https://arxiv.org/pdf/2603.08068
• Github: https://github.com/applese233/ICRL
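The "gradually reducing them" part of the summary amounts to annealing the number of in-context tool-use demonstrations over training, so the policy comes to rely on its own learned behavior. A minimal sketch assuming a linear schedule (the paper's exact annealing scheme may differ):

```python
def num_incontext_examples(step, total_steps, k_start=4, k_end=0):
    """Linearly anneal the number of in-context tool-use demonstrations
    from k_start at step 0 down to k_end by total_steps. Hypothetical
    schedule illustrating the idea; ICRL's actual scheme may differ."""
    frac = min(step / max(total_steps, 1), 1.0)  # training progress in [0, 1]
    return round(k_start + frac * (k_end - k_start))
```

At each rollout, this count determines how many worked tool-use examples are prepended to the prompt before the RL objective is applied.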

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Hindsight Credit Assignment for Long-Horizon LLM Agents

📝 Summary:
HCAPO improves credit assignment in long-horizon LLM agents by using hindsight reasoning to refine Q-values and a multi-scale advantage mechanism. It significantly outperforms state-of-the-art methods, boosting success rates on benchmarks like WebShop and ALFWorld. This enhances exploration and c...

🔹 Publication Date: Published on Mar 7

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.08754
• PDF: https://arxiv.org/pdf/2603.08754

==================================


#LLMAgents #ReinforcementLearning #AI #MachineLearning #HindsightReasoning
UniCom: Unified Multimodal Modeling via Compressed Continuous Semantic Representations

📝 Summary:
UniCom unifies multimodal understanding and generation via compressed continuous semantic representations. It resolves issues with discrete tokenizers and unstable continuous modeling by efficiently reducing channel dimensions. This yields state-of-the-art generation, superior controllability, an...

🔹 Publication Date: Published on Mar 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10702
• PDF: https://arxiv.org/pdf/2603.10702
• Project Page: https://miazhao7708.github.io/UniComPage/

==================================


#MultimodalAI #GenerativeAI #DeepLearning #AIResearch #SemanticRepresentations
Lost in Backpropagation: The LM Head is a Gradient Bottleneck

📝 Summary:
The softmax bottleneck in neural LMs is not just an expressivity issue but a critical optimization problem: the rank-D output layer suppresses 95-99% of the gradient norm, leading to suboptimal updates and inefficient training. This motivates new LM head designs.

🔹 Publication Date: Published on Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10145
• PDF: https://arxiv.org/pdf/2603.10145
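The claimed mechanism can be demonstrated numerically: with cross-entropy loss, the gradient that reaches the hidden state is Wᵀ(p − y), confined to the rank-D row space of the LM head, and its norm can be far smaller than the logit gradient's. The toy below uses arbitrary scales to illustrate the effect; the 95-99% figure is the paper's measurement on real LMs, not reproduced here:

```python
import numpy as np

# Rank-D LM head with D << V: compare the norm of the full logit gradient
# to the norm of the gradient that actually reaches the hidden state.
rng = np.random.default_rng(0)
V, D = 50_000, 64                        # vocab size >> hidden size
W = rng.normal(0.0, V ** -0.5, (V, D))   # LM head weight, rank at most D

h = rng.normal(size=D)                   # hidden state
logits = W @ h
p = np.exp(logits - logits.max())
p /= p.sum()                             # softmax probabilities
y = np.zeros(V)
y[int(rng.integers(V))] = 1.0            # one-hot target token

g_logits = p - y                         # CE gradient at the logits
g_hidden = W.T @ g_logits                # what reaches h: a rank-D mixture
ratio = np.linalg.norm(g_hidden) / np.linalg.norm(g_logits)
print(f"norm ratio: {ratio:.4f}")        # far below 1 in this toy setup
```

The magnitude of the suppression depends on initialization scale and the shape of p, but the structural point stands: only a D-dimensional projection of the V-dimensional gradient can ever influence the hidden state.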

==================================


#LLM #DeepLearning #Optimization #NeuralNetworks #GradientBottleneck
StyleVLA: Driving Style-Aware Vision Language Action Model for Autonomous Driving

📝 Summary:
StyleVLA is a physics-informed VLA model that generates diverse, style-aware, and kinematically plausible driving trajectories. It uses a hybrid loss and a large dataset, outperforming proprietary models like Gemini-3-Pro on specialized driving tasks.

🔹 Publication Date: Published on Mar 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.09482
• PDF: https://arxiv.org/pdf/2603.09482

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#AutonomousDriving #VLA #AI #DeepLearning #Robotics
Causal Concept Graphs in LLM Latent Space for Stepwise Reasoning

📝 Summary:
Causal Concept Graphs identify causal relationships between concepts in LLMs using sparse autoencoders and differentiable structure learning. This method significantly improves causal fidelity for multi-step reasoning over prior techniques, yielding sparse and stable graphs.

🔹 Publication Date: Published on Mar 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10377
• PDF: https://arxiv.org/pdf/2603.10377
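The "differentiable structure learning" ingredient typically relies on a smooth acyclicity penalty that is zero iff the learned concept graph is a DAG, so it can be added to a gradient-based objective. A sketch using the NOTEARS-style polynomial form, which is an assumption here; the post does not state the paper's exact estimator:

```python
import numpy as np

def acyclicity_penalty(W):
    """NOTEARS-style differentiable acyclicity score for a weighted
    adjacency matrix W: zero iff the graph has no directed cycles.
    Uses the polynomial form h(W) = tr((I + W*W/d)^d) - d."""
    d = W.shape[0]
    M = np.eye(d) + (W * W) / d             # elementwise square keeps h >= 0
    return np.trace(np.linalg.matrix_power(M, d)) - d
```

During structure learning, this penalty is weighted into the loss (often via an augmented Lagrangian) so that the recovered concept-to-concept graph is driven toward a DAG while edge weights are fit to the sparse-autoencoder activations.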

==================================


#CausalAI #LLMs #MachineLearning #GraphLearning #ExplainableAI