ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
TingIS: Real-time Risk Event Discovery from Noisy Customer Incidents at Enterprise Scale

📝 Summary:
TingIS is an enterprise-grade incident discovery system that uses multi-stage event linking with LLMs, cascaded routing, and noise reduction to efficiently identify critical issues from high-volume, noisy customer incidents.

🔹 Publication Date: Published on Apr 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.21889
• PDF: https://arxiv.org/pdf/2604.21889

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
Trust but Verify: Introducing DAVinCI -- A Framework for Dual Attribution and Verification in Claim Inference for Language Models

📝 Summary:
DAVinCI is a dual attribution and verification framework that enhances the factual reliability and interpretability of large language models by attributing claims to internal components and external sources.

🔹 Publication Date: Published on Apr 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.21193
• PDF: https://arxiv.org/pdf/2604.21193
• Github: https://github.com/vr25/davinci

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Explainable Disentangled Representation Learning for Generalizable Authorship Attribution in the Era of Generative AI

📝 Summary:
A novel variational autoencoder framework with supervised contrastive learning and discriminative disentanglement achieves superior performance in authorship attribution and AI-generated text detection.

🔹 Publication Date: Published on Apr 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.21300
• PDF: https://arxiv.org/pdf/2604.21300
• Github: https://github.com/hieum98/avae

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Co-Evolving LLM Decision and Skill Bank Agents for Long-Horizon Tasks

📝 Summary:
A co-evolution framework enables large language models to discover, retain, and reuse structured skills across episodes in long-horizon interactive environments through a learnable skill bank.

🔹 Publication Date: Published on Apr 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.20987
• PDF: https://arxiv.org/pdf/2604.20987
• Project Page: https://wuxiyang1996.github.io/COSPLAY_page/

🔹 Models citing this paper:
https://huggingface.co/IntelligenceLab/COS-PLAY

Datasets citing this paper:
https://huggingface.co/datasets/IntelligenceLab/Cos-Play-Cold-Start

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Hybrid Policy Distillation for LLMs

📝 Summary:
Hybrid Policy Distillation combines forward and reverse KL divergence approaches to improve knowledge distillation stability and efficiency across different model sizes and tasks.
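The forward/reverse combination mentioned above can be sketched numerically. The mixing weight `alpha` and the plain discrete-distribution form below are illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions; eps guards against log(0)."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def hybrid_kl(teacher, student, alpha=0.5):
    """Convex mix of forward KL(teacher||student) and reverse KL(student||teacher)."""
    return alpha * kl(teacher, student) + (1 - alpha) * kl(student, teacher)

teacher = np.array([0.7, 0.2, 0.1])   # teacher token distribution
student = np.array([0.5, 0.3, 0.2])   # student token distribution
loss = hybrid_kl(teacher, student)    # positive unless the distributions match
```

Forward KL is mode-covering and reverse KL is mode-seeking; mixing the two is one plausible way to trade off stability against sharpness during distillation.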

🔹 Publication Date: Published on Apr 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.20244
• PDF: https://arxiv.org/pdf/2604.20244
• Github: https://github.com/zwhong714/Hybrid-Policy-Distillation

🔹 Models citing this paper:
https://huggingface.co/wh-zhu/Qwen2.5-7B-PSFT-RL-DAPO-90
https://huggingface.co/wh-zhu/qwen2.5-1.5B-longcot-reasoning-HPD

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
WebGen-R1: Incentivizing Large Language Models to Generate Functional and Aesthetic Websites with Reinforcement Learning

📝 Summary:
WebGen-R1 is a reinforcement learning framework enabling small language models to generate functional and aesthetically pleasing multi-page websites. It uses structured generation and a novel cascaded multimodal reward for structural integrity, functional feedback, and aesthetic supervision.

🔹 Publication Date: Published on Apr 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.20398
• PDF: https://arxiv.org/pdf/2604.20398

==================================


#ReinforcementLearning #LLMs #WebsiteGeneration #AI #WebDevelopment
EditCrafter: Tuning-free High-Resolution Image Editing via Pretrained Diffusion Model

📝 Summary:
EditCrafter enables high-resolution image editing using pretrained text-to-image diffusion models through tiled inversion and noise-damped manifold-constrained guidance, without requiring model tuning.

🔹 Publication Date: Published on Apr 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.10268
• PDF: https://arxiv.org/pdf/2604.10268
• Project Page: https://editcrafter.github.io/
• Github: https://github.com/EditCrafter/EditCrafter

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
PersonalAI: A Systematic Comparison of Knowledge Graph Storage and Retrieval Approaches for Personalized LLM agents

📝 Summary:
A knowledge graph-based external memory framework enhances language model personalization through dynamic semantic and temporal representations with diverse retrieval mechanisms.
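As a rough illustration of a timestamped external memory of the kind described, a minimal triple store might look like the sketch below. The class name, schema, and most-recent-wins retrieval policy are hypothetical, not the paper's API:

```python
class GraphMemory:
    """Tiny (subject, relation, object, time) store for personalization facts."""

    def __init__(self):
        self.triples = []

    def add(self, subj, rel, obj, t):
        # Store a timestamped fact about the user
        self.triples.append((subj, rel, obj, t))

    def query(self, subj, rel):
        # Temporal retrieval: return the most recent object for (subj, rel)
        hits = [(t, o) for s, r, o, t in self.triples if s == subj and r == rel]
        return max(hits)[1] if hits else None

mem = GraphMemory()
mem.add("user", "favorite_city", "Paris", t=1)
mem.add("user", "favorite_city", "Kyoto", t=5)   # newer fact supersedes the old one
```

The point of the temporal field is visible even in this toy: retrieval must pick the latest fact, not an arbitrary match.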

🔹 Publication Date: Published on Apr 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2506.17001
• PDF: https://arxiv.org/pdf/2506.17001

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Encoder-Free Human Motion Understanding via Structured Motion Descriptions

📝 Summary:
Structured Motion Description (SMD) converts human motion into natural language, enabling large language models (LLMs) to reason about it directly. This encoder-free method achieves state-of-the-art performance on motion question answering and captioning.

🔹 Publication Date: Published on Apr 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.21668
• PDF: https://arxiv.org/pdf/2604.21668
• Project Page: https://yaozhang182.github.io/motion-smd/

🔹 Models citing this paper:
https://huggingface.co/zyyy12138/motion-smd-lora

Datasets citing this paper:
https://huggingface.co/datasets/zyyy12138/motion-smd-data

==================================


#HumanMotionUnderstanding #LLMs #NLP #AI #DeepLearning
LLaTiSA: Towards Difficulty-Stratified Time Series Reasoning from Visual Perception to Semantics

📝 Summary:
A hierarchical time series reasoning dataset and model are introduced to improve LLM understanding of temporal data through visualized patterns and numerical tables.

🔹 Publication Date: Published on Apr 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.17295
• PDF: https://arxiv.org/pdf/2604.17295
• Github: https://github.com/RainingNovember/LLaTiSA

Datasets citing this paper:
https://huggingface.co/datasets/November-Rain/HiTSR

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Vista4D: Video Reshooting with 4D Point Clouds

📝 Summary:
Vista4D is a video reshooting framework that uses 4D point clouds to synthesize dynamic scenes from new camera viewpoints. It improves 4D consistency, camera control, and visual quality by overcoming depth estimation issues and preserving scene content.

🔹 Publication Date: Published on Apr 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.21915
• PDF: https://arxiv.org/pdf/2604.21915
• Project Page: https://eyeline-labs.github.io/Vista4D
• Github: https://github.com/Eyeline-Labs/Vista4D

🔹 Models citing this paper:
https://huggingface.co/Eyeline-Labs/Vista4D

Datasets citing this paper:
https://huggingface.co/datasets/Eyeline-Labs/Vista4D-Eval-Data

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Coevolving Representations in Joint Image-Feature Diffusion

📝 Summary:
CoReDi adapts the semantic representation space during diffusion training by learning a linear projection. This joint evolution improves convergence speed and sample quality in both VAE latent and pixel-space diffusion models, addressing limitations of fixed representation spaces.

🔹 Publication Date: Published on Apr 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.17492
• PDF: https://arxiv.org/pdf/2604.17492
• Github: https://github.com/zelaki/CoReDi

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
3D-VCD: Hallucination Mitigation in 3D-LLM Embodied Agents through Visual Contrastive Decoding

📝 Summary:
3D-VCD is a new inference-time framework that reduces hallucinations in 3D embodied agents. It constructs distorted 3D scene graphs and contrasts predictions to suppress ungrounded tokens. This improves reasoning on 3D benchmarks without retraining.
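The contrast step can be sketched at the logit level. The (1 + α) / −α form below follows common visual-contrastive-decoding practice and is an assumption about the paper's exact formulation:

```python
import numpy as np

def contrastive_logits(logits_true, logits_distorted, alpha=1.0):
    """Boost tokens grounded in the true scene; penalize tokens the
    distorted scene graph also predicts (contrastive decoding sketch)."""
    return (1 + alpha) * np.asarray(logits_true) - alpha * np.asarray(logits_distorted)

true_logits = np.array([2.0, 1.0, 0.5])   # conditioned on the real scene graph
distorted   = np.array([2.0, 0.0, 0.5])   # conditioned on a distorted scene graph
adjusted = contrastive_logits(true_logits, distorted)
# Token 1 is the only one whose score depends on the true scene, so it gains.
```

With α = 0 the adjustment vanishes and decoding falls back to the original logits, which is why this is purely an inference-time intervention.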

🔹 Publication Date: Published on Apr 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.08645
• PDF: https://arxiv.org/pdf/2604.08645
• Project Page: https://plan-lab.github.io/projects/3d-vcd

==================================


#3DLLM #EmbodiedAI #HallucinationMitigation #ComputerVision #AIResearch
Temporally Extended Mixture-of-Experts Models

📝 Summary:
Temporal extension of mixture-of-experts layers using the reinforcement learning options framework reduces expert switching rates while maintaining model accuracy.

🔹 Publication Date: Published on Apr 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.20156
• PDF: https://arxiv.org/pdf/2604.20156
• Project Page: https://princeton-polaris-lab.github.io/moe_webpage/
• Github: https://github.com/princeton-polaris-lab/rl_moe

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications

📝 Summary:
Mixture-of-Experts (MoE) models enhance large AI model efficiency and performance by dynamically selecting sub-models for diverse data. This survey details MoE design, algorithms, theory, and applications in various machine learning fields.

🔹 Publication Date: Published on Mar 10, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2503.07137
• PDF: https://arxiv.org/pdf/2503.07137
• Github: https://github.com/deepseek-ai/DeepEP

==================================


#MixtureOfExperts #MoE #AI #MachineLearning #DeepLearning
Self Attention vs Cross Attention by hand ✍️
Resize the matrices yourself 👉 https://byhand.ai/aMisxP

Two attention mechanisms, side by side. Both project X into queries; both compute scores S = Kᵀ × Q, softmax the scores into weights A, and form the output F = V × A. The only difference is the source of K and V.

Self attention uses X for everything. Q, K, and V all come from projecting X. Each X token attends to every other X token. The score matrix S is square — 128 × 128.

Cross attention uses X for queries and a second sequence E for keys and values. Each X token attends to every E token instead. The score matrix S is rectangular — 64 × 128.

Notice what's shared and what's not:

X is the same in both — same 36 × 128 input.

Q and K share the 16 dimension — that's what makes the dot product Kᵀ × Q valid in either case.

V's output dimension is independent of the 16 shared by Q and K; both examples here happen to use 12. The choice doesn't depend on which mechanism you're using; it depends on what output dimension your downstream layer expects.
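The two variants differ only in where K and V come from, so a single function covers both. A shape-level numpy sketch with random placeholder weights, using the dimensions from the walkthrough (d_k = 16, d_v = 12):

```python
import numpy as np

def attention(X_q, X_kv, d_k=16, d_v=12, seed=0):
    """Shape-level attention sketch.

    X_q:  (d_in, n_q)  -- queries are projected from here
    X_kv: (d_in, n_kv) -- keys and values are projected from here
    Self-attention: X_kv is X_q. Cross-attention: X_kv is a second sequence.
    """
    rng = np.random.default_rng(seed)
    W_q = rng.standard_normal((d_k, X_q.shape[0]))
    W_k = rng.standard_normal((d_k, X_kv.shape[0]))
    W_v = rng.standard_normal((d_v, X_kv.shape[0]))

    Q = W_q @ X_q                     # (d_k, n_q)
    K = W_k @ X_kv                    # (d_k, n_kv)
    V = W_v @ X_kv                    # (d_v, n_kv)

    S = K.T @ Q                       # scores: (n_kv, n_q)
    A = np.exp(S - S.max(axis=0, keepdims=True))
    A /= A.sum(axis=0, keepdims=True) # softmax over the key axis
    F = V @ A                         # output: (d_v, n_q)
    return S, F

X = np.random.default_rng(1).standard_normal((36, 128))  # 36-dim, 128 tokens
E = np.random.default_rng(2).standard_normal((36, 64))   # second sequence, 64 tokens

S_self, F_self = attention(X, X)     # S is square: 128 x 128
S_cross, F_cross = attention(X, E)   # S is rectangular: 64 x 128
```

Note that F has the same shape in both cases: the output always has one column per query token, regardless of how many key/value tokens attend.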

https://t.iss.one/CodeProgrammer
LLM Safety From Within: Detecting Harmful Content with Internal Representations

📝 Summary:
SIREN is a lightweight guard model that uses LLM internal layer features to detect harmful content, outperforming current models. It is more efficient, generalizes better, and requires significantly fewer parameters than existing guard models.
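At its simplest, a guard model over internal features reduces to a probe on hidden states. The sketch below trains a logistic probe on synthetic stand-ins for layer activations; the dimensions, labels, and probe choice are all illustrative, not SIREN's actual architecture:

```python
import numpy as np

def sigmoid(z):
    # Clip to keep exp() well-behaved for large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Synthetic stand-ins: real features would come from an LLM's internal layers
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 64))     # 200 prompts, 64-dim hidden features
w_true = rng.standard_normal(64)       # hidden "harmfulness" direction
y = (H @ w_true > 0).astype(float)     # synthetic harmful/benign labels

# Logistic-regression probe trained with plain gradient descent on log loss
w = np.zeros(64)
for _ in range(1000):
    p = sigmoid(H @ w)                 # predicted harm probability
    w -= 0.5 * H.T @ (p - y) / len(y)  # gradient step

acc = float(((sigmoid(H @ w) > 0.5) == y).mean())
```

A probe this small has a tiny parameter count relative to a full guard LLM, which is the efficiency argument the summary makes.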

🔹 Publication Date: Published on Apr 20

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.18519
• PDF: https://arxiv.org/pdf/2604.18519
• Github: https://github.com/CSSLab/SIREN

🔹 Models citing this paper:
https://huggingface.co/UofTCSSLab/SIREN-Qwen3-0.6B
https://huggingface.co/UofTCSSLab/SIREN-Qwen3-4B
https://huggingface.co/UofTCSSLab/SIREN-Llama-3.2-1B

==================================


#LLMSafety #AIethics #HarmfulContent #DeepLearning #NLP
dWorldEval: Scalable Robotic Policy Evaluation via Discrete Diffusion World Model

📝 Summary:
dWorldEval proposes a scalable robotics policy evaluation method using a discrete diffusion world model. It unifies diverse modalities into a token space, employing a transformer and progress token for success detection. This approach significantly outperforms prior methods, enabling large-scale policy evaluation.

🔹 Publication Date: Published on Apr 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.22152
• PDF: https://arxiv.org/pdf/2604.22152

==================================


#Robotics #DiffusionModels #WorldModels #AI #MachineLearning
AgentSearchBench: A Benchmark for AI Agent Search in the Wild

📝 Summary:
AgentSearchBench is a new benchmark for finding suitable AI agents using execution-grounded performance signals from nearly 10,000 real-world agents. It shows that description-based similarity is insufficient, and lightweight behavioral signals significantly improve agent ranking.

🔹 Publication Date: Published on Apr 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.22436
• PDF: https://arxiv.org/pdf/2604.22436

==================================


#AI #AIAgents #Benchmarking #AgentSearch #MachineLearning