ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
VisionTrim: Unified Vision Token Compression for Training-Free MLLM Acceleration

📝 Summary:
VisionTrim accelerates MLLMs by selecting dominant visual tokens and merging them with text guidance. This training-free framework improves efficiency without performance loss, addressing high computational costs from excessive visual data.
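
The select-then-merge recipe can be sketched in a few lines. This is only an illustration of the general idea (the scoring, `compress_tokens`, and the similarity-based merge rule are invented here), not VisionTrim's actual algorithm:

```python
import numpy as np

def compress_tokens(vis, text_scores, keep=4):
    """Keep the `keep` most text-relevant visual tokens and merge every
    remaining token into its most similar kept token by mean pooling.
    vis: (N, d) visual tokens; text_scores: (N,) relevance to the text."""
    order = np.argsort(text_scores)[::-1]
    kept_idx, drop_idx = order[:keep], order[keep:]
    kept = vis[kept_idx].copy()
    counts = np.ones(keep)
    for i in drop_idx:
        # Cosine similarity of the dropped token to each kept token.
        sims = kept @ vis[i] / (np.linalg.norm(kept, axis=1)
                                * np.linalg.norm(vis[i]) + 1e-8)
        j = int(np.argmax(sims))
        kept[j] = (kept[j] * counts[j] + vis[i]) / (counts[j] + 1)
        counts[j] += 1
    return kept

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))   # 16 visual tokens of dim 8
scores = rng.random(16)             # stand-in for text-guided relevance
out = compress_tokens(tokens, scores, keep=4)
print(out.shape)  # (4, 8) -- 4x fewer tokens, no training involved
```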

🔹 Publication Date: Published on Jan 30

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22674
• PDF: https://arxiv.org/pdf/2601.22674

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#MLLM #VisionTokenCompression #ModelAcceleration #DeepLearning #TrainingFree
On the Limits of Layer Pruning for Generative Reasoning in LLMs

📝 Summary:
Layer pruning degrades LLM performance on generative reasoning tasks, unlike classification, which recovers well under pruning. While finetuning helps, recovery on generative reasoning remains fundamentally limited, especially at higher pruning ratios.

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01997
• PDF: https://arxiv.org/pdf/2602.01997
• Github: https://github.com/safal312/on-the-limits-of-layer-pruning

==================================

#LLMs #ModelPruning #AIResearch #GenerativeAI #DeepLearning
Small Generalizable Prompt Predictive Models Can Steer Efficient RL Post-Training of Large Reasoning Models

📝 Summary:
Generalizable Predictive Prompt Selection (GPS) efficiently selects informative prompts for RL-enhanced language models using Bayesian inference and a lightweight generative model. This method significantly improves training efficiency, final performance, and test-time efficiency.
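
As a rough illustration of predictive prompt selection (the Beta-posterior scoring and `select_prompts` helper are invented for this sketch; GPS itself uses Bayesian inference with a learned lightweight generative model):

```python
import numpy as np

def select_prompts(successes, failures, k=2):
    """Pick the prompts whose estimated pass rate is closest to 0.5 --
    roughly the most informative for RL training, since prompts the model
    always solves or always fails yield little gradient signal."""
    post_mean = (successes + 1) / (successes + failures + 2)  # Beta(1,1) prior
    informativeness = -np.abs(post_mean - 0.5)
    return np.argsort(informativeness)[::-1][:k]

s = np.array([9, 5, 1, 0])   # observed successes per prompt
f = np.array([1, 5, 9, 10])  # observed failures per prompt
picked = select_prompts(s, f, k=2)
print(picked)  # prompts with ~50% pass rate rank first
```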

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01970
• PDF: https://arxiv.org/pdf/2602.01970

==================================

#LLM #ReinforcementLearning #PromptEngineering #AI #MachineLearning
Implicit neural representation of textures

📝 Summary:
This work designs new implicit neural representations for textures that operate continuously over UV coordinate space. Experiments show they achieve good image quality while balancing memory and rendering time, making them useful for real-time rendering and downstream tasks.
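
A texture INR is essentially a small network queried at continuous UV coordinates, so resolution is decoupled from storage. A minimal sketch with random (untrained) weights and Fourier features; a real texture INR fits these weights to an image, and this is not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
B  = rng.normal(size=(2, 16))          # random Fourier frequency matrix
W1 = rng.normal(size=(32, 64)) * 0.1   # tiny MLP weights (untrained)
W2 = rng.normal(size=(64, 3)) * 0.1

def texture(uv):
    """uv: (N, 2) coordinates in [0, 1]^2 -> (N, 3) RGB in (0, 1)."""
    feat = np.concatenate([np.sin(uv @ B), np.cos(uv @ B)], axis=-1)  # (N, 32)
    h = np.tanh(feat @ W1)
    return 1 / (1 + np.exp(-(h @ W2)))  # sigmoid keeps colors valid

# Continuous: query at any resolution without storing a pixel grid.
uv = np.stack(np.meshgrid(np.linspace(0, 1, 4),
                          np.linspace(0, 1, 4)), -1).reshape(-1, 2)
rgb = texture(uv)
print(rgb.shape)  # (16, 3)
```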

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02354
• PDF: https://arxiv.org/pdf/2602.02354
• Project Page: https://peterhuistyping.github.io/INR-Tex/
• Github: https://github.com/PeterHUistyping/INR-Tex

==================================

#ImplicitNeuralRepresentations #ComputerGraphics #DeepLearning #TextureModeling #RealTimeRendering
Why Steering Works: Toward a Unified View of Language Model Parameter Dynamics

📝 Summary:
This paper unifies LLM control methods as dynamic weight updates, revealing a consistent preference-utility trade-off. It introduces SPLIT, a new steering method that enhances preference while better preserving utility.

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02343
• PDF: https://arxiv.org/pdf/2602.02343
• Github: https://github.com/zjunlp/EasyEdit/blob/main/examples/SPLIT.md

==================================

#LLM #AI #MachineLearning #LLMSteering #DeepLearning
SLIME: Stabilized Likelihood Implicit Margin Enforcement for Preference Optimization

📝 Summary:
SLIME is a new objective for aligning large language models, addressing the 'unlearning' and 'formatting collapse' issues of prior methods. It maximizes preferred-response likelihood, stabilizes rejected-token probabilities, and uses dual-margin constraints, achieving superior performance and stable ...
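
The role of a margin in such objectives can be illustrated with a generic margin-based preference loss. This is NOT SLIME's actual objective (which the summary only outlines), just a sketch of how a margin shapes the chosen/rejected likelihood gap:

```python
import math

def margin_pref_loss(logp_chosen, logp_rejected, margin=1.0):
    """Generic margin-based preference loss (not SLIME's objective):
    push the chosen response's log-likelihood above the rejected one's
    by at least `margin`, via a logistic loss on the shortfall."""
    gap = logp_chosen - logp_rejected - margin
    return math.log(1 + math.exp(-gap))

# Chosen response already well above rejected -> near-zero loss.
print(round(margin_pref_loss(-5.0, -9.0), 4))  # 0.0486
# Preference violated -> large loss pushing the gap open.
print(round(margin_pref_loss(-9.0, -5.0), 4))
```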

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02383
• PDF: https://arxiv.org/pdf/2602.02383

==================================

#LLM #AIAlignment #MachineLearning #NLP #DeepLearning
TRIP-Bench: A Benchmark for Long-Horizon Interactive Agents in Real-World Scenarios

📝 Summary:
TRIP-Bench introduces a challenging long-horizon benchmark for evaluating LLM agents in complex, real-world travel planning. Existing models struggle significantly on this benchmark. To improve performance, the authors propose GTPO, an online reinforcement learning method that enhances constraint...

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01675
• PDF: https://arxiv.org/pdf/2602.01675

==================================

#LLMAgents #ReinforcementLearning #AI #NLP #Benchmarking
AI-Generated Image Detectors Overrely on Global Artifacts: Evidence from Inpainting Exchange

📝 Summary:
AI image detectors for inpainting overrely on global spectral shifts from VAEs, not local content. Inpainting Exchange (INP-X) reveals this weakness, dramatically reducing detector accuracy. This calls for content-aware detection methods.

🔹 Publication Date: Published on Jan 30

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.00192
• PDF: https://arxiv.org/pdf/2602.00192

==================================

#AI #ImageDetection #Inpainting #ComputerVision #DeepfakeDetection
Enhancing Multi-Image Understanding through Delimiter Token Scaling

📝 Summary:
Scaling delimiter-token hidden states in vision-language models reduces cross-image information leakage, improving multi-image reasoning. This enhances image distinction and performance on multi-image benchmarks. The method also aids multi-document understanding without extra training or inference...
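
The mechanism is simple to sketch: multiply the hidden states at delimiter positions by a scalar and leave everything else untouched. The token ids and scaling factor below are made up for illustration; the paper applies this inside a vision-language model's layers:

```python
import numpy as np

def scale_delimiters(hidden, token_ids, delim_id, s=1.5):
    """Scale hidden states at delimiter positions by s.
    hidden: (T, d) hidden states; token_ids: (T,) token ids."""
    out = hidden.copy()
    out[token_ids == delim_id] *= s
    return out

h = np.ones((6, 4))                  # toy hidden states
ids = np.array([7, 1, 1, 7, 2, 2])   # 7 = hypothetical image-delimiter id
out = scale_delimiters(h, ids, delim_id=7, s=2.0)
print(out[:, 0])  # [2. 1. 1. 2. 1. 1.] -- only delimiter rows scaled
```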

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01984
• PDF: https://arxiv.org/pdf/2602.01984
• Github: https://github.com/MYMY-young/DelimScaling

==================================

#VisionLanguageModels #MultiModalAI #TokenScaling #DeepLearning #AIResearch
PolySAE: Modeling Feature Interactions in Sparse Autoencoders via Polynomial Decoding

📝 Summary:
PolySAE enhances sparse autoencoders with polynomial decoding to model complex feature interactions and compositional structure. It improves probing F1 by 8% and captures relationships independent of feature co-occurrence while maintaining interpretability.
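
A polynomial decoder adds second-order feature-interaction terms to the usual linear SAE reconstruction, so the output can depend on pairs of active features rather than only their sum. A toy sketch; the parameterization here is illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 5, 8                             # latent features, model dimension
D1 = rng.normal(size=(k, d))            # linear decoder directions
D2 = rng.normal(size=(k, k, d)) * 0.1   # pairwise interaction directions

def poly_decode(z):
    """Linear SAE decoding plus second-order terms z_i * z_j, letting the
    reconstruction capture feature *interactions*, not just sums."""
    linear = z @ D1
    pair = np.einsum('i,j,ijd->d', z, z, D2)
    return linear + pair

z = np.array([1.0, 0.0, 0.5, 0.0, 0.0])   # sparse latent code
x_hat = poly_decode(z)
print(x_hat.shape)  # (8,)
```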

🔹 Publication Date: Published on Feb 1

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01322
• PDF: https://arxiv.org/pdf/2602.01322

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Mano: Restriking Manifold Optimization for LLM Training

📝 Summary:
This paper proposes Mano, a novel optimizer that combines manifold optimization with momentum projection onto tangent spaces, achieving superior performance over AdamW and Muon while reducing memory and...
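
Momentum projection onto a tangent space can be illustrated on the unit sphere: remove the momentum component along the current point so the update stays tangent to the manifold. This is a generic manifold-optimization step, not Mano's exact update rule:

```python
import numpy as np

def project_tangent(w, m):
    """Project momentum m onto the tangent space of the unit sphere at w,
    i.e. subtract the component of m along w."""
    w = w / np.linalg.norm(w)
    return m - (m @ w) * w

w = np.array([1.0, 0.0, 0.0])    # current point on the sphere
m = np.array([0.3, 0.5, -0.2])   # raw momentum
t = project_tangent(w, m)
print(t, float(t @ w))  # [ 0.   0.5 -0.2] 0.0 -- tangent: orthogonal to w
```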

🔹 Publication Date: Published on Jan 30

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.23000
• PDF: https://arxiv.org/pdf/2601.23000
• Github: https://github.com/xie-lab-ml/Mano-Restriking-Manifold-Optimization-for-LLM-Training

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Rethinking Selective Knowledge Distillation

📝 Summary:
This paper introduces student-entropy-guided position selection (SE-KD) for selective knowledge distillation in autoregressive language models. SE-KD improves accuracy and efficiency, and when extended, significantly reduces training time, memory, and storage compared to prior methods.
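
Student-entropy-guided selection can be sketched as: compute the student's per-position predictive entropy and distill only at the most uncertain positions. The fraction and helper names below are illustrative, not the paper's exact procedure:

```python
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def select_positions(student_probs, frac=0.5):
    """Pick the fraction of sequence positions where the student is most
    uncertain (highest entropy); the distillation loss is applied only
    there, skipping positions the student already gets right."""
    ent = entropy(student_probs)                  # (T,)
    k = max(1, int(frac * len(ent)))
    return np.argsort(ent)[::-1][:k]

T, V = 6, 4
rng = np.random.default_rng(1)
logits = rng.normal(size=(T, V))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
pos = select_positions(probs, frac=0.5)
print(sorted(pos.tolist()))  # the 3 highest-entropy positions
```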

🔹 Publication Date: Published on Feb 1

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01395
• PDF: https://arxiv.org/pdf/2602.01395
• Github: https://github.com/almogtavor/SE-KD3x

Spaces citing this paper:
https://huggingface.co/spaces/almogtavor/SE-KD3x

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Thinking with Comics: Enhancing Multimodal Reasoning through Structured Visual Storytelling

📝 Summary:
Thinking with Comics emerges as an effective visual reasoning approach that bridges images and videos by leveraging comic structures for improved multimodal reasoning efficiency and performance.

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02453
• PDF: https://arxiv.org/pdf/2602.02453
• Project Page: https://thinking-with-comics.github.io/
• Github: https://github.com/andongBlue/Think-with-Comics

==================================

#AI #MultimodalAI #VisualReasoning #Comics #ComputerVision
ParalESN: Enabling parallel information processing in Reservoir Computing

📝 Summary:
Parallel Echo State Network (ParalESN) addresses reservoir computing limitations by enabling parallel temporal processing through diagonal linear recurrence, while maintaining theoretical guarantees...
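
A diagonal linear recurrence has a closed form that makes every timestep independently computable, which is what enables parallelism. A sketch (not ParalESN's full reservoir, whose details the summary doesn't give) verifying that the sequential and closed-form computations agree:

```python
import numpy as np

def sequential_scan(lam, xs):
    """h_t = lam * h_{t-1} + x_t, computed one step at a time."""
    h = np.zeros_like(xs[0])
    out = []
    for x in xs:
        h = lam * h + x
        out.append(h)
    return np.stack(out)

def parallel_scan(lam, xs):
    """Same recurrence via its closed form h_t = sum_{k<=t} lam^(t-k) x_k.
    Because lam acts elementwise (a diagonal recurrence), every h_t can be
    computed independently -- the property that enables parallelism."""
    T = len(xs)
    exps = np.arange(T)[:, None, None] - np.arange(T)[None, :, None]  # t - k
    weights = (lam[None, None, :] ** exps) * np.tril(np.ones((T, T)))[:, :, None]
    return (weights * xs[None, :, :]).sum(axis=1)

rng = np.random.default_rng(0)
lam = rng.uniform(0.5, 0.9, size=3)   # diagonal recurrence coefficients
xs = rng.normal(size=(5, 3))          # 5 input steps, 3 reservoir units
print(np.allclose(sequential_scan(lam, xs), parallel_scan(lam, xs)))  # True
```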

🔹 Publication Date: Published on Jan 29

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22296
• PDF: https://arxiv.org/pdf/2601.22296

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Internal Flow Signatures for Self-Checking and Refinement in LLMs

📝 Summary:
Internal flow signatures analyze depthwise dynamics in large language models to enable self-checking and targeted refinement without modifying the base model.

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01897
• PDF: https://arxiv.org/pdf/2602.01897
• Github: https://github.com/EavnJeong/Internal-Flow-Signatures-for-Self-Checking-and-Refinement-in-LLMs

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Diagnosing the Reliability of LLM-as-a-Judge via Item Response Theory

📝 Summary:
A two-phase diagnostic framework based on Item Response Theory and the Graded Response Model is introduced to assess the reliability of LLM-as-a-Judge by examining intrinsic consistency and human alignment.
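
The Graded Response Model underlying the framework maps a latent trait to probabilities over ordered rating categories via differences of cumulative logistic curves. A minimal sketch with made-up parameter values:

```python
import math

def grm_probs(theta, a, thresholds):
    """Graded Response Model: P(rating >= k) = sigmoid(a * (theta - b_k));
    per-category probabilities are differences of adjacent cumulative curves.
    theta: latent trait; a: discrimination; thresholds: b_1 < b_2 < ..."""
    def cum(b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    cums = [1.0] + [cum(b) for b in thresholds] + [0.0]
    return [cums[k] - cums[k + 1] for k in range(len(cums) - 1)]

# An item with 4 ordered categories, rated by a judge with trait theta.
p = grm_probs(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.0])
print(round(sum(p), 6))  # 1.0 -- a valid distribution over the 4 categories
```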

🔹 Publication Date: Published on Jan 31

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.00521
• PDF: https://arxiv.org/pdf/2602.00521

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Cross-Lingual Stability of LLM Judges Under Controlled Generation: Evidence from Finno-Ugric Languages

📝 Summary:
Controlled cross-lingual evaluation reveals instability in LLM assessment methods when targeting morphologically rich languages, indicating unreliable zero-shot judge transfer for discourse-level tasks.

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02287
• PDF: https://arxiv.org/pdf/2602.02287
• Github: https://github.com/isaac-chung/

Datasets citing this paper:
https://huggingface.co/datasets/isaacchung/controlled-generated-convos-gpt-4.1-mini

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Fast Autoregressive Video Diffusion and World Models with Temporal Cache Compression and Sparse Attention

📝 Summary:
Autoregressive video diffusion models face efficiency challenges due to growing KV caches and redundant attention computations, which are addressed through the TempCache, AnnCA, and AnnSA techniques that ...
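
The bounded-cache idea can be sketched as keeping a recent window exactly and summarizing older entries. This toy `compress_kv` is only an illustration of temporal cache compression, not the paper's TempCache:

```python
import numpy as np

def compress_kv(cache, window=4, keep_summary=True):
    """Toy temporal KV-cache compression: keep the most recent `window`
    entries exactly and collapse older ones into one mean 'summary' row,
    so the cache stops growing with sequence length."""
    if len(cache) <= window:
        return cache
    old, recent = cache[:-window], cache[-window:]
    if keep_summary:
        return np.concatenate([old.mean(axis=0, keepdims=True), recent])
    return recent

cache = np.arange(20, dtype=float).reshape(10, 2)  # 10 cached key/value rows
small = compress_kv(cache, window=4)
print(small.shape)  # (5, 2): 1 summary row + 4 recent rows
```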

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01801
• PDF: https://arxiv.org/pdf/2602.01801
• Project Page: https://dvirsamuel.github.io/fast-auto-regressive-video/

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Generative Visual Code Mobile World Models

📝 Summary:
Visual world models for mobile GUI agents are improved through renderable code generation using vision-language models, achieving better performance with reduced model size compared to existing approaches.

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01576
• PDF: https://arxiv.org/pdf/2602.01576

🔹 Models citing this paper:
https://huggingface.co/trillionlabs/gWorld-8B
https://huggingface.co/trillionlabs/gWorld-32B

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Sparse Reward Subsystem in Large Language Models

📝 Summary:
Research identifies a sparse reward subsystem in LLM hidden states containing value neurons that represent internal state expectations and dopamine-like neurons encoding reward prediction errors.

🔹 Publication Date: Published on Feb 1

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.00986
• PDF: https://arxiv.org/pdf/2602.00986

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Hunt Instead of Wait: Evaluating Deep Data Research on Large Language Models

📝 Summary:
Agentic large language models require investigatory intelligence for autonomous data analysis, demonstrated through the Deep Data Research benchmark that evaluates their ability to extract insights from data.

🔹 Publication Date: Published on Feb 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02039
• PDF: https://arxiv.org/pdf/2602.02039
• Project Page: https://huggingface.co/spaces/thinkwee/DDR_Bench
• Github: https://github.com/thinkwee/DDR_Bench

Datasets citing this paper:
https://huggingface.co/datasets/thinkwee/DDRBench_10K

Spaces citing this paper:
https://huggingface.co/spaces/thinkwee/DDR_Bench

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research