ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Stable Velocity: A Variance Perspective on Flow Matching

📝 Summary:
Stable Velocity tackles high-variance training in flow matching by identifying low-variance regimes. It introduces StableVM and VA-REPA for more efficient training, and StableVS for over 2x faster sampling. This improves both training and inference without compromising quality.
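For context, flow matching regresses a network onto a velocity target along a noise-to-data path; when noise and data samples are paired independently, that target has high variance, which is the regime the paper analyzes. A minimal NumPy sketch of the effect (the linear path and shifted-Gaussian "data" are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# independently paired noise and "data" samples (toy shifted Gaussian)
x0 = rng.standard_normal((4096, 8))        # noise
x1 = rng.standard_normal((4096, 8)) + 3.0  # "data"

def velocity_target(x0, x1, t):
    # linear path x_t = (1 - t) * x0 + t * x1 has constant velocity x1 - x0
    x_t = (1.0 - t) * x0 + t * x1
    return x_t, x1 - x0

_, v = velocity_target(x0, x1, t=0.5)
# the regression target varies widely across random pairings even though
# its mean is fixed: this is the high-variance training signal
print(round(float(v.var()), 2))
```

The per-element variance here is about 2 despite the mean displacement being constant, illustrating why variance-aware training (StableVM) and sampling (StableVS) can help.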

🔹 Publication Date: Published on Feb 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.05435
• PDF: https://arxiv.org/pdf/2602.05435
• Project Page: https://linydthu.github.io/StableVelocity/
• Github: https://github.com/linYDTHU/StableVelocity

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#FlowMatching #GenerativeAI #MachineLearning #DeepLearning #VarianceReduction
TodoEvolve: Learning to Architect Agent Planning Systems

📝 Summary:
TodoEvolve enables autonomous synthesis and revision of task-specific planning architectures through a modular design space and multi-objective reinforcement learning optimization.

🔹 Publication Date: Published on Feb 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.07839
• PDF: https://arxiv.org/pdf/2602.07839
• Github: https://github.com/EcthelionLiu/TodoEvolve

🔹 Models citing this paper:
https://huggingface.co/EcthelionLiu/Todo-14B

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Steer2Adapt: Dynamically Composing Steering Vectors Elicits Efficient Adaptation of LLMs

📝 Summary:
STEER2ADAPT adapts LLMs by composing steering vectors from reusable semantic prior subspaces. This lightweight framework dynamically combines basis vectors, offering efficient and flexible adaptation for complex tasks without learning new vectors. It achieves an average performance improvement of...
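The core composition step can be sketched as a coefficient-weighted sum over a fixed basis of steering directions added to a hidden state. A minimal sketch with random stand-in vectors (in the paper the basis would come from semantic prior subspaces, and the coefficients would be chosen per task):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 4
# reusable basis of steering directions (random stand-ins here)
basis = rng.standard_normal((k, d))
basis /= np.linalg.norm(basis, axis=1, keepdims=True)

def steer(hidden, coeffs):
    # adaptation = add a task-dependent combination of fixed basis vectors;
    # no new vectors are learned, only the mixing coefficients change
    return hidden + coeffs @ basis

h = rng.standard_normal(d)
h_out = steer(h, np.array([0.5, 0.0, -0.3, 0.1]))
```

With all-zero coefficients the hidden state passes through unchanged, which is what makes the scheme lightweight: adaptation lives entirely in a few scalars.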

🔹 Publication Date: Published on Feb 7

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.07276
• PDF: https://arxiv.org/pdf/2602.07276

==================================

#LLM #AI #MachineLearning #ModelAdaptation #SteeringVectors
From Directions to Regions: Decomposing Activations in Language Models via Local Geometry

📝 Summary:
A Mixture of Factor Analyzers (MFA) models language model activations via local Gaussian regions, capturing complex nonlinear structure. MFA outperforms baselines on localization and steering, positioning local geometry as a promising unit for concept discovery and control.

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02464
• PDF: https://arxiv.org/pdf/2602.02464
• Github: https://github.com/ordavid-s/decomposing-activations-local-geometry

==================================

#LLM #AIResearch #Interpretability #NeuralNetworks #MachineLearning
TreeCUA: Efficiently Scaling GUI Automation with Tree-Structured Verifiable Evolution

📝 Summary:
TreeCUA scales GUI automation by organizing CUA exploration trajectories into tree structures. It uses multi-agent collaboration, adaptive exploration, and verification to improve GUI planning. This approach achieves better efficiency, generalization, and enhances planning capabilities.

🔹 Publication Date: Published on Feb 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.09662
• PDF: https://arxiv.org/pdf/2602.09662
• Github: https://github.com/UITron-hub/TreeCUA

==================================

#GUIAutomation #AI #SoftwareAutomation #RPA #Planning
Rethinking Global Text Conditioning in Diffusion Transformers

📝 Summary:
On its own, the pooled text embedding used for conventional global conditioning in diffusion transformers offers little benefit. When repurposed as training-free guidance for controllable generation, however, it significantly improves performance across text-to-image, video, and image-editing tasks.
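The training-free guidance idea resembles classifier-free guidance: extrapolate the model's output from a baseline branch toward the branch that uses the pooled embedding. A hedged sketch (the exact conditioning branches and weight schedule are assumptions, not the paper's):

```python
import numpy as np

def modulation_guidance(out_base, out_pooled, w):
    # extrapolate from the baseline prediction toward the prediction
    # conditioned on the global pooled text embedding, with weight w;
    # w = 0 recovers the baseline, w = 1 the pooled-conditioned output
    return out_base + w * (out_pooled - out_base)

base = np.zeros(4)     # stand-in model output without pooled conditioning
pooled = np.ones(4)    # stand-in output with pooled conditioning
guided = modulation_guidance(base, pooled, w=2.0)
```

No retraining is involved: both branches use frozen weights, and the guidance weight is a pure inference-time knob.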

🔹 Publication Date: Published on Feb 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.09268
• PDF: https://arxiv.org/pdf/2602.09268
• Github: https://github.com/quickjkee/modulation-guidance

==================================

#DiffusionModels #GenerativeAI #AIResearch #ComputerVision #MachineLearning
Stop the Flip-Flop: Context-Preserving Verification for Fast Revocable Diffusion Decoding

📝 Summary:
COVER stops flip-flop oscillations in parallel diffusion decoding with cache override verification. It performs leave-one-out verification and stable drafting in one pass, preserving context via KV cache override. This greatly reduces revisions for faster, quality-preserving decoding.
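The verification step can be pictured as a generic draft-and-verify loop: keep the longest draft prefix the verifier agrees with, so earlier positions are never rewritten back and forth. This is a simplified sketch of that accept-prefix idea, not COVER's leave-one-out, KV-cache-override procedure:

```python
def accept_verified_prefix(draft, verified):
    # keep the longest prefix of the parallel draft that verification
    # confirms; only tokens after the first disagreement are revised,
    # which avoids repeated flip-flop rewrites of earlier positions
    n = 0
    for d, v in zip(draft, verified):
        if d != v:
            break
        n += 1
    return draft[:n]

accepted = accept_verified_prefix([7, 3, 9, 4], [7, 3, 1, 4])
```

Here the draft's first two tokens survive and decoding resumes from position 2, rather than oscillating over the whole block.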

🔹 Publication Date: Published on Feb 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06161
• PDF: https://arxiv.org/pdf/2602.06161

==================================

#DiffusionModels #GenerativeAI #DeepLearning #Decoding #ContextPreservation
LLMs Encode Their Failures: Predicting Success from Pre-Generation Activations

📝 Summary:
LLMs encode their likelihood of success in pre-generation activations. Probes can predict performance on math and coding tasks, outperforming surface features. This allows efficient inference routing across models, reducing costs by up to 70% while improving performance.
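A probe of this kind is typically just a linear classifier on activations, and routing then escalates only the prompts it flags as likely failures. A minimal NumPy sketch on synthetic "activations" (the data, probe, and model names are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "pre-generation activations": success is linearly decodable
n, d = 2000, 16
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# linear logistic probe trained with plain gradient descent
w = np.zeros(d)
for _ in range(500):
    w -= 0.5 * X.T @ (sigmoid(X @ w) - y) / n

def route(acts, threshold=0.5):
    # if the probe predicts success, keep the cheap model; otherwise
    # escalate the prompt to a stronger, costlier model
    return "cheap_model" if sigmoid(acts @ w) >= threshold else "strong_model"

acc = float(((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean())
```

The cost saving comes from the router: most prompts stay on the cheap model, and only probe-flagged hard cases pay for the large one.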

🔹 Publication Date: Published on Feb 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.09924
• PDF: https://arxiv.org/pdf/2602.09924
• Github: https://github.com/KabakaWilliam/llms_know_difficulty

==================================

#LLMs #AIResearch #MachineLearning #PerformancePrediction #CostEfficiency
Learning on the Manifold: Unlocking Standard Diffusion Transformers with Representation Encoders

📝 Summary:
Standard diffusion transformers fail on representation encoders due to geometric interference. Our RJF method uses Riemannian flow matching to guide generation along the manifold, enabling standard DiT architectures to converge effectively without width scaling.
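The key geometric operation in manifold-constrained flow matching is projecting a Euclidean velocity onto the manifold's tangent space. A sketch for the unit sphere as a stand-in manifold (RJF's actual representation geometry and metric may differ):

```python
import numpy as np

def tangent_project(x, v):
    # project a Euclidean velocity v onto the tangent space of the unit
    # sphere at point x, so the generative flow stays on the manifold
    x = x / np.linalg.norm(x)
    return v - (v @ x) * x

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
v = rng.standard_normal(8)
vt = tangent_project(x, v)
```

The projected velocity is orthogonal to the surface normal at x, which is what keeps integration steps from drifting off the representation manifold.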

🔹 Publication Date: Published on Feb 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.10099
• PDF: https://arxiv.org/pdf/2602.10099
• Github: https://github.com/amandpkr/RJF

==================================

#DiffusionModels #MachineLearning #GenerativeAI #ManifoldLearning #AIResearch
Learning to Continually Learn via Meta-learning Agentic Memory Designs

📝 Summary:
ALMA uses meta-learning to automatically discover adaptable memory designs for agentic systems, enabling continual learning without human engineering. Its learned designs outperform state-of-the-art human-crafted methods across diverse domains.

🔹 Publication Date: Published on Feb 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.07755
• PDF: https://arxiv.org/pdf/2602.07755
• Project Page: https://yimingxiong.me/alma
• Github: https://github.com/zksha/alma

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
TokenTrim: Inference-Time Token Pruning for Autoregressive Long Video Generation

📝 Summary:
Autoregressive video generation suffers from temporal drift due to error accumulation in latent conditioning tokens, which is addressed by identifying and removing unstable tokens during inference to...
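One way to identify "unstable" conditioning tokens is to score each token by how much its latent drifts across generation steps and drop the worst offenders. A hedged sketch of that idea (the drift score and keep ratio are illustrative assumptions, not the paper's criterion):

```python
import numpy as np

def prune_unstable_tokens(history, keep_ratio=0.75):
    # history: (steps, tokens, dim) latent conditioning tokens over time.
    # score each token by its mean drift between consecutive steps and
    # keep only the most stable fraction at inference time
    drift = np.linalg.norm(np.diff(history, axis=0), axis=-1).mean(axis=0)
    k = max(1, int(round(len(drift) * keep_ratio)))
    return np.sort(np.argsort(drift)[:k])

rng = np.random.default_rng(0)
hist = rng.standard_normal((5, 8, 4)) * 0.01
hist[:, 3] += np.linspace(0, 10, 5)[:, None]   # token 3 drifts badly
kept = prune_unstable_tokens(hist, keep_ratio=0.5)
```

Because pruning happens purely at inference, no retraining is needed: the drifting token is simply excluded from future conditioning.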

🔹 Publication Date: Published on Jan 30

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.00268
• PDF: https://arxiv.org/pdf/2602.00268
• Project Page: https://arielshaulov.github.io/TokenTrim/
• Github: https://github.com/arielshaulov/TokenTrim

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
C-ΔΘ: Circuit-Restricted Weight Arithmetic for Selective Refusal

📝 Summary:
Offline selective refusal in large language models is achieved through circuit-restricted weight updates that eliminate runtime intervention costs while maintaining performance.
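The weight-arithmetic idea can be sketched as applying a behavior delta only on a mask over the identified circuit's parameters, leaving everything else untouched. The mask and delta below are hypothetical stand-ins, not the paper's circuit:

```python
import numpy as np

def circuit_restricted_update(weights, delta, circuit_mask):
    # apply the refusal weight delta only where the circuit mask is 1,
    # baking the behavior into the weights offline: no runtime hooks
    return weights + circuit_mask * delta

W = np.ones((4, 4))                      # stand-in weight matrix
dW = np.full((4, 4), 0.5)                # stand-in refusal delta
mask = np.zeros((4, 4))
mask[0] = 1.0                            # hypothetical "refusal circuit" row
W_new = circuit_restricted_update(W, dW, mask)
```

Restricting the edit to the circuit is what preserves general performance: parameters outside the mask are bitwise identical before and after.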

🔹 Publication Date: Published on Feb 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.04521
• PDF: https://arxiv.org/pdf/2602.04521

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Bridging Academia and Industry: A Comprehensive Benchmark for Attributed Graph Clustering

📝 Summary:
PyAGC presents a production-ready benchmark and library for attributed graph clustering that addresses limitations of current research through scalable, memory-efficient implementations and comprehens...

🔹 Publication Date: Published on Feb 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.08519
• PDF: https://arxiv.org/pdf/2602.08519
• Project Page: https://pyagc.readthedocs.io
• Github: https://github.com/Cloudy1225/PyAGC

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
On the Optimal Reasoning Length for RL-Trained Language Models

📝 Summary:
Length control methods in reinforcement-learning-trained language models affect reasoning performance and computational efficiency, with optimal output lengths balancing these factors.

🔹 Publication Date: Published on Feb 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.09591
• PDF: https://arxiv.org/pdf/2602.09591

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
VISTA-Bench: Do Vision-Language Models Really Understand Visualized Text as Well as Pure Text?

📝 Summary:
VISTA-Bench evaluates vision-language models' ability to understand visualized text versus pure-text queries, revealing significant performance gaps and sensitivity to rendering variations.

🔹 Publication Date: Published on Feb 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.04802
• PDF: https://arxiv.org/pdf/2602.04802
• Github: https://github.com/QingAnLiu/VISTA-Bench

Datasets citing this paper:
https://huggingface.co/datasets/liuqa/VISTA-Bench

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
pySLAM: An Open-Source, Modular, and Extensible Framework for SLAM

📝 Summary:
pySLAM is an open-source framework supporting Visual SLAM with monocular, stereo, and RGB-D cameras, incorporating classical and modern features, loop closure methods, volumetric reconstruction, and d...

🔹 Publication Date: Published on Feb 17, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2502.11955
• PDF: https://arxiv.org/pdf/2502.11955
• Github: https://github.com/luigifreda/pyslam

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Large-Scale Terminal Agentic Trajectory Generation from Dockerized Environments

📝 Summary:
A scalable pipeline called TerminalTraj addresses challenges in creating high-quality terminal trajectories for training agentic models by filtering repositories, generating Docker-aligned task instan...

🔹 Publication Date: Published on Feb 1

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01244
• PDF: https://arxiv.org/pdf/2602.01244
• Github: https://github.com/multimodal-art-projection/TerminalTraj

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
LatentLens: Revealing Highly Interpretable Visual Tokens in LLMs

📝 Summary:
LatentLens enables interpretation of visual token representations in vision-language models by comparing them to contextualized textual representations, revealing that visual tokens are more interpret...

🔹 Publication Date: Published on Jan 31

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.00462
• PDF: https://arxiv.org/pdf/2602.00462
• Github: https://github.com/McGill-NLP/latentlens

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Surprisal-Guided Selection: Compute-Optimal Test-Time Strategies for Execution-Grounded Code Generation

📝 Summary:
Test-time training fails in verification-grounded tasks due to over-sharpening, while surprisal-guided selection improves performance by favoring diverse, low-confidence samples.
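Surprisal-guided selection can be sketched as ranking candidate programs by their average token surprisal (negative log-probability) and keeping the least-confident ones for execution-grounded verification. The scoring and tie-handling here are illustrative assumptions, not the paper's exact procedure:

```python
def mean_surprisal(logprobs):
    # average negative token log-probability: higher = less confident
    return -sum(logprobs) / len(logprobs)

def select_by_surprisal(candidates, k=2):
    # candidates: (program, token_logprobs) pairs; prefer samples the
    # model is least confident about, countering over-sharpened picks
    ranked = sorted(candidates, key=lambda c: mean_surprisal(c[1]), reverse=True)
    return [prog for prog, _ in ranked[:k]]

cands = [("a", [-0.1, -0.2]), ("b", [-2.0, -1.5]), ("c", [-0.5, -0.4])]
picked = select_by_surprisal(cands, k=2)
```

The selected candidates are then checked against the execution verifier, so low confidence costs nothing when the verifier catches a bad sample.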

🔹 Publication Date: Published on Feb 7

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.07670
• PDF: https://arxiv.org/pdf/2602.07670
• Project Page: https://jbarnes850.github.io/2026/02/02/surprisal-guided-selection/
• Github: https://jbarnes850.github.io/2026/02/02/surprisal-guided-selection/

🔹 Models citing this paper:
https://huggingface.co/Jarrodbarnes/KernelBench-RLVR-120b

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Effective Reasoning Chains Reduce Intrinsic Dimensionality

📝 Summary:
Effective chain-of-thought reasoning strategies reduce intrinsic dimensionality, leading to better generalization by requiring fewer model parameters to achieve given accuracy thresholds.
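One common way to estimate intrinsic dimensionality of representations is a PCA-style count of components needed to explain most of the variance. A sketch of that estimator (the paper's estimator and threshold may differ):

```python
import numpy as np

def intrinsic_dim(X, var_threshold=0.95):
    # PCA-style estimate: number of principal components needed to
    # explain var_threshold of the variance of the representations X
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    ratios = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(ratios, var_threshold) + 1)

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 3))
X_low = Z @ np.eye(3, 10)            # representations on a 3-dim subspace
X_full = rng.standard_normal((500, 10))  # isotropic, no low-dim structure
```

Under the paper's claim, representations along effective reasoning chains would behave more like `X_low` than `X_full`: the same threshold is reached with far fewer components.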

🔹 Publication Date: Published on Feb 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.09276
• PDF: https://arxiv.org/pdf/2602.09276

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
ContextBench: A Benchmark for Context Retrieval in Coding Agents

📝 Summary:
ContextBench evaluates context retrieval in coding agents through detailed process analysis, revealing that advanced agent designs provide limited improvements in context usage while highlighting gaps...

🔹 Publication Date: Published on Feb 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.05892
• PDF: https://arxiv.org/pdf/2602.05892
• Project Page: https://contextbench.github.io/
• Github: https://github.com/EuniAI/ContextBench

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research