ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Everything in Its Place: Benchmarking Spatial Intelligence of Text-to-Image Models

📝 Summary:
Text-to-image models struggle with complex spatial reasoning due to sparse prompts. This paper introduces SpatialGenEval, a new benchmark with dense prompts, showing that models perform poorly on higher-order spatial tasks. A new dataset, SpatialT2I, helps fine-tune models for significant performance gains.

🔹 Publication Date: Published on Jan 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.20354
• PDF: https://arxiv.org/pdf/2601.20354
• Github: https://github.com/AMAP-ML/SpatialGenEval

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#TextToImage #SpatialReasoning #GenerativeAI #ComputerVision #AIResearch
MetricAnything: Scaling Metric Depth Pretraining with Noisy Heterogeneous Sources

📝 Summary:
MetricAnything introduces a scalable pretraining framework for metric depth using Sparse Metric Prompts to handle diverse, noisy 3D data. It shows clear scaling trends and achieves state-of-the-art performance across various depth estimation and spatial intelligence tasks.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22054
• PDF: https://arxiv.org/pdf/2601.22054
• Project Page: https://metric-anything.github.io/metric-anything-io/
• Github: https://github.com/metric-anything/metric-anything


#MetricDepth #ComputerVision #MachineLearning #DeepLearning #3DVision
PLANING: A Loosely Coupled Triangle-Gaussian Framework for Streaming 3D Reconstruction

📝 Summary:
PLANING is an efficient streaming 3D reconstruction framework. It combines explicit geometric primitives and neural Gaussians with decoupled optimization, achieving both high-quality rendering and accurate geometry. It outperforms prior methods in quality and speed.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22046
• PDF: https://arxiv.org/pdf/2601.22046


#3DReconstruction #ComputerVision #NeuralNetworks #StreamingTech #ComputerGraphics
BMAM: Brain-inspired Multi-Agent Memory Framework

📝 Summary:
BMAM presents a brain-inspired multi-agent memory architecture that decomposes memory into specialized subsystems to address long-term reasoning challenges in language-model-based agents.

🔹 Publication Date: Published on Jan 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.20465
• PDF: https://arxiv.org/pdf/2601.20465
• Github: https://github.com/innovation64/BMAM


#AI #DataScience #MachineLearning #HuggingFace #Research
Spotlighting Task-Relevant Features: Object-Centric Representations for Better Generalization in Robotic Manipulation

📝 Summary:
Slot-based object-centric representations outperform global and dense feature representations in robotic manipulation tasks by providing better generalization under visual distribution shifts.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21416
• PDF: https://arxiv.org/pdf/2601.21416


#AI #DataScience #MachineLearning #HuggingFace #Research
STORM: Slot-based Task-aware Object-centric Representation for robotic Manipulation

📝 Summary:
STORM enhances robotic manipulation by adapting visual foundation models with semantic-aware slots through multi-phase training. This approach improves object discovery, generalization to distractors, and robotic control performance.

🔹 Publication Date: Published on Jan 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.20381
• PDF: https://arxiv.org/pdf/2601.20381


#Robotics #AI #ComputerVision #RoboticManipulation #DeepLearning
AgentLongBench: A Controllable Long Benchmark For Long-Contexts Agents via Environment Rollouts

📝 Summary:
AgentLongBench evaluates LLM agents via dynamic environment rollouts. It finds that agents struggle more with high-density tool responses than with memory fragmentation in long conversations, an effect driven by the number of tokens needed to resolve queries.

🔹 Publication Date: Published on Jan 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.20730
• PDF: https://arxiv.org/pdf/2601.20730
• Github: https://github.com/euReKa025/AgentLongBench

Datasets citing this paper:
https://huggingface.co/datasets/ign1s/AgentLongBench


#LLMAgents #LongContext #AIResearch #NLP #Benchmarking
One-step Latent-free Image Generation with Pixel Mean Flows

📝 Summary:
Pixel MeanFlow (pMF) proposes a one-step, latent-free image generation method. It separates the network's output space from its loss space: predictions target an image manifold while the loss is defined in velocity space. pMF achieves strong ImageNet results at 256x256 and 512x512 resolutions.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22158
• PDF: https://arxiv.org/pdf/2601.22158


#ImageGeneration #DeepLearning #ComputerVision #GenerativeAI #AIResearch
Hybrid Linear Attention Done Right: Efficient Distillation and Effective Architectures for Extremely Long Contexts

📝 Summary:
HALO efficiently converts Transformer models to RNN-attention hybrids using minimal training data. This enables superior long-context performance and efficiency, showcased by the HypeNet architecture and its application to the Qwen3 series.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22156
• PDF: https://arxiv.org/pdf/2601.22156
• Github: https://www.github.com/THUNLP/hybrid-linear-attention


#HybridAttention #LongContext #Transformers #LLMs #DeepLearning
FROST: Filtering Reasoning Outliers with Attention for Efficient Reasoning

📝 Summary:
FROST is an attention-aware method that improves reasoning efficiency by pruning uncritical paths and removing reasoning outliers, leading to reduced token usage and improved accuracy.

🔹 Publication Date: Published on Jan 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.19001
• PDF: https://arxiv.org/pdf/2601.19001


#AI #DataScience #MachineLearning #HuggingFace #Research
ECO: Quantized Training without Full-Precision Master Weights

📝 Summary:
The error-compensating optimizer (ECO) eliminates the memory overhead of full-precision master weights in quantized LLM training while maintaining near-lossless accuracy.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22101
• PDF: https://arxiv.org/pdf/2601.22101


#AI #DataScience #MachineLearning #HuggingFace #Research
Mechanistic Data Attribution: Tracing the Training Origins of Interpretable LLM Units

📝 Summary:
MDA traces interpretable LLM units to training data using influence functions. Intervening on high-influence samples causally modulates circuit emergence, especially with structural data. This shows a direct link between data, circuit formation, and in-context learning.
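
A toy sketch of the influence-function idea behind this kind of attribution (not the paper's implementation; the least-squares setup and damping term are illustrative): the influence of training point i on a test loss is approximated as -grad_test^T H^{-1} grad_i.

```python
import numpy as np

def influence_scores(X, y, w, x_test, y_test, damping=1e-3):
    """Influence-function sketch for damped least squares: score of
    training point i on the test loss ~ -grad_test^T H^{-1} grad_i."""
    n, d = X.shape
    H = X.T @ X / n + damping * np.eye(d)      # damped loss Hessian
    g_test = (x_test @ w - y_test) * x_test    # test-loss gradient
    H_inv_g = np.linalg.solve(H, g_test)       # H^{-1} grad_test
    resid = X @ w - y                          # per-point residuals
    return -(X * resid[:, None]) @ H_inv_g     # one score per train point

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)
w = np.linalg.solve(X.T @ X, X.T @ y)          # least-squares fit
scores = influence_scores(X, y, w, X[0], y[0])
```

High-magnitude scores mark the training points whose removal would most change the test loss, which is the "intervene on high-influence samples" step in spirit.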

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21996
• PDF: https://arxiv.org/pdf/2601.21996
• Github: https://github.com/chenjianhuii/Mechanistic-Data-Attribution


#LLM #AI #MachineLearning #MechanisticInterpretability #DataAttribution
Scalable Power Sampling: Unlocking Efficient, Training-Free Reasoning for LLMs via Distribution Sharpening

📝 Summary:
This paper proposes a training-free method to sharpen LLM distributions, improving reasoning. It approximates the global power distribution with a token-level scaled low-temperature one, achieving reinforcement-learning-like performance at significantly lower computational cost.
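
A minimal numpy sketch of the token-level sharpening idea, assuming the power distribution p^alpha is realized as a low-temperature softmax (temperature 1/alpha); function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def sharpened_token_probs(logits, alpha=2.0):
    """Approximate the power distribution p^alpha at the token level:
    softmax(alpha * logits) is exactly p^alpha renormalized."""
    scaled = logits * alpha            # temperature 1/alpha
    scaled -= scaled.max()             # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])
base = np.exp(logits - logits.max()); base /= base.sum()
sharp = sharpened_token_probs(logits, alpha=2.0)
# The sharpened distribution concentrates more mass on high-probability tokens.
```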

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21590
• PDF: https://arxiv.org/pdf/2601.21590


#LLMs #AI #MachineLearning #NLP #DeepLearning
Discovering Hidden Gems in Model Repositories

📝 Summary:
Many superior, overlooked models exist in public repositories. This paper proposes a Multi-Armed Bandit approach with shared query sets and aggressive elimination to rapidly identify these high-performing hidden gems. The method accelerates discovery over 50x, finding top models with very few queries.
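
A hedged sketch of the bandit-style aggressive-elimination loop (successive halving over shared query batches); the toy accuracies and noise model are assumptions for illustration, not the paper's protocol.

```python
import random

def successive_halving(models, evaluate, budget_per_round=20, keep_frac=0.5):
    """Evaluate all surviving models on a shared query batch each round,
    then eliminate the bottom fraction until one model remains."""
    survivors = list(models)
    while len(survivors) > 1:
        queries = list(range(budget_per_round))  # shared query set
        scores = {m: sum(evaluate(m, q) for q in queries) for m in survivors}
        survivors.sort(key=lambda m: scores[m], reverse=True)
        survivors = survivors[:max(1, int(len(survivors) * keep_frac))]
    return survivors[0]

# Toy setup: each "model" has a hidden accuracy; evaluation is noisy.
random.seed(0)
accs = {f"model_{i}": i / 16 for i in range(16)}
best = successive_halving(accs, lambda m, q: accs[m] + random.gauss(0.0, 0.01))
```

Because weak models are dropped after only a few shared queries, total evaluations grow far slower than exhaustively benchmarking every model.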

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22157
• PDF: https://arxiv.org/pdf/2601.22157


#MachineLearning #DataScience #MultiArmedBandit #ModelDiscovery #AIResearch
FineInstructions: Scaling Synthetic Instructions to Pre-Training Scale

📝 Summary:
FineInstructions generates billions of synthetic instruction-response pairs from unstructured text using real user queries. Pre-training LLMs from scratch solely on this large synthetic dataset outperforms traditional methods and other synthetic techniques on response quality benchmarks.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22146
• PDF: https://arxiv.org/pdf/2601.22146

🔹 Models citing this paper:
https://huggingface.co/fineinstructions/query_templatizer
https://huggingface.co/fineinstructions/instruction_template_retrieval_embedding
https://huggingface.co/fineinstructions/template_instantiator

Datasets citing this paper:
https://huggingface.co/datasets/fineinstructions/finetemplates
https://huggingface.co/datasets/fineinstructions/fineinstructions_nemotron
https://huggingface.co/datasets/fineinstructions/real_queries


#AI #DataScience #MachineLearning #HuggingFace #Research
JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion

📝 Summary:
This paper introduces JUST-DUB-IT, a single-model approach for high-quality video dubbing. It uses a LoRA adaptation of an audio-video diffusion model to generate translated audio and synchronized facial motion. Synthetic multilingual video training preserves speaker identity and improves lip synchronization.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22143
• PDF: https://arxiv.org/pdf/2601.22143


#AI #DataScience #MachineLearning #HuggingFace #Research
KromHC: Manifold-Constrained Hyper-Connections with Kronecker-Product Residual Matrices

📝 Summary:
KromHC addresses training instability and scalability issues in hyper-connections by using Kronecker products to parametrize residual matrices with reduced parameter complexity.

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21579
• PDF: https://arxiv.org/pdf/2601.21579


#AI #DataScience #MachineLearning #HuggingFace #Research
Benchmarking Reward Hack Detection in Code Environments via Contrastive Analysis

📝 Summary:
A new benchmark, TRACE, was developed to detect reward hacks in code generation environments. Contrastive anomaly detection significantly outperforms isolated classification, though models struggle more with semantically contextualized hacks.

🔹 Publication Date: Published on Jan 27

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.20103
• PDF: https://arxiv.org/pdf/2601.20103

Datasets citing this paper:
https://huggingface.co/datasets/PatronusAI/trace-dataset


#AI #DataScience #MachineLearning #HuggingFace #Research
Shaping capabilities with token-level data filtering

📝 Summary:
Token filtering during pretraining effectively reduces unwanted language-model capabilities while maintaining alignment; it becomes more effective at larger scales and tolerates noisy labels.
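
A minimal sketch of what token-level loss filtering could look like, assuming filtered tokens are simply masked out of the pretraining objective (the masking scheme and names are illustrative, not the paper's exact recipe).

```python
import numpy as np

def masked_nll(token_logprobs, filter_mask):
    """Token-level filtering sketch: zero out the loss contribution of
    tokens flagged as carrying an unwanted capability, so the model is
    never trained to predict them."""
    keep = 1.0 - filter_mask
    denom = max(keep.sum(), 1.0)               # average over kept tokens
    return -(token_logprobs * keep).sum() / denom

logprobs = np.log(np.array([0.5, 0.25, 0.1, 0.4]))
mask = np.array([0.0, 0.0, 1.0, 0.0])          # third token filtered out
loss = masked_nll(logprobs, mask)
```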

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.21571
• PDF: https://arxiv.org/pdf/2601.21571


#AI #DataScience #MachineLearning #HuggingFace #Research
LoL: Longer than Longer, Scaling Video Generation to Hour

📝 Summary:
Researchers addressed sink-collapse in autoregressive video generation, a failure mode where content reverts to a sink frame due to a RoPE and multi-head attention conflict. Their training-free multi-head RoPE jitter enables real-time, streaming video generation up to 12 hours without quality decay.
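
A speculative numpy sketch of what per-head RoPE jitter might look like: each attention head sees rotation angles offset by a small random phase, breaking the head-synchronized behavior the summary blames for sink-collapse. The jitter shape and scale here are assumptions, not the paper's specification.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Standard RoPE rotation angles for each (position, frequency) pair."""
    freqs = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions, freqs)          # (seq, dim/2)

def jittered_rope_angles(positions, dim, n_heads, scale=0.02, seed=0):
    """Hypothetical training-free jitter: add a small per-head phase
    offset to the shared RoPE angles."""
    rng = np.random.default_rng(seed)
    base_angles = rope_angles(positions, dim)
    jitter = rng.normal(0.0, scale, size=(n_heads, 1, 1))
    return base_angles[None, :, :] + jitter    # (heads, seq, dim/2)

angles = jittered_rope_angles(np.arange(8), dim=16, n_heads=4)
```

Being a pure offset on the angles, such a tweak needs no retraining, which matches the "training-free" claim.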

🔹 Publication Date: Published on Jan 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.16914
• PDF: https://arxiv.org/pdf/2601.16914


#AI #DataScience #MachineLearning #HuggingFace #Research