ML Research Hub
32.3K subscribers
6.76K photos
478 videos
24 files
7.38K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
✨EditCrafter: Tuning-free High-Resolution Image Editing via Pretrained Diffusion Model

πŸ“ Summary:
EditCrafter enables high-resolution image editing with pretrained text-to-image diffusion models through tiled inversion and noise-damped manifold-constrained guidance, without requiring model tuning.

πŸ”Ή Publication Date: Published on Apr 11

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.10268
β€’ PDF: https://arxiv.org/pdf/2604.10268
β€’ Project Page: https://editcrafter.github.io/
β€’ Github: https://github.com/EditCrafter/EditCrafter

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
❀1
✨PersonalAI: A Systematic Comparison of Knowledge Graph Storage and Retrieval Approaches for Personalized LLM agents

πŸ“ Summary:
A knowledge graph-based external memory framework enhances language model personalization through dynamic semantic and temporal representations with diverse retrieval mechanisms.

πŸ”Ή Publication Date: Published on Apr 12

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2506.17001
β€’ PDF: https://arxiv.org/pdf/2506.17001

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
✨Encoder-Free Human Motion Understanding via Structured Motion Descriptions

πŸ“ Summary:
Structured Motion Description (SMD) converts human motion into natural language, enabling large language models (LLMs) to reason about it directly. This encoder-free method achieves state-of-the-art performance on motion question answering and captioning.

πŸ”Ή Publication Date: Published on Apr 23

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.21668
β€’ PDF: https://arxiv.org/pdf/2604.21668
β€’ Project Page: https://yaozhang182.github.io/motion-smd/
β€’ Github: https://yaozhang182.github.io/motion-smd/

πŸ”Ή Models citing this paper:
β€’ https://huggingface.co/zyyy12138/motion-smd-lora

✨ Datasets citing this paper:
β€’ https://huggingface.co/datasets/zyyy12138/motion-smd-data

==================================


#HumanMotionUnderstanding #LLMs #NLP #AI #DeepLearning
❀1
This media is not supported in your browser
VIEW IN TELEGRAM
✨LLaTiSA: Towards Difficulty-Stratified Time Series Reasoning from Visual Perception to Semantics

πŸ“ Summary:
A hierarchical time series reasoning dataset and model are introduced to improve LLM understanding of temporal data through visualized patterns and numerical tables.

πŸ”Ή Publication Date: Published on Apr 19

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.17295
β€’ PDF: https://arxiv.org/pdf/2604.17295
β€’ Github: https://github.com/RainingNovember/LLaTiSA

✨ Datasets citing this paper:
β€’ https://huggingface.co/datasets/November-Rain/HiTSR

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
❀1
Media is too big
VIEW IN TELEGRAM
✨Vista4D: Video Reshooting with 4D Point Clouds

πŸ“ Summary:
Vista4D is a video reshooting framework that uses 4D point clouds to synthesize dynamic scenes from new camera viewpoints. It improves 4D consistency, camera control, and visual quality by overcoming depth estimation issues and preserving scene content.

πŸ”Ή Publication Date: Published on Apr 23

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.21915
β€’ PDF: https://arxiv.org/pdf/2604.21915
β€’ Project Page: https://eyeline-labs.github.io/Vista4D
β€’ Github: https://github.com/Eyeline-Labs/Vista4D

πŸ”Ή Models citing this paper:
β€’ https://huggingface.co/Eyeline-Labs/Vista4D

✨ Datasets citing this paper:
β€’ https://huggingface.co/datasets/Eyeline-Labs/Vista4D-Eval-Data

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
❀1
✨Coevolving Representations in Joint Image-Feature Diffusion

πŸ“ Summary:
CoReDi adapts the semantic representation space during diffusion training by learning a linear projection. This joint evolution improves convergence speed and sample quality in both VAE latent and pixel-space diffusion models, addressing limitations of fixed representation spaces.

πŸ”Ή Publication Date: Published on Apr 19

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.17492
β€’ PDF: https://arxiv.org/pdf/2604.17492
β€’ Project Page: https://huggingface.co/papers?q=lightweight%20linear%20projection
β€’ Github: https://github.com/zelaki/CoReDi

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
✨3D-VCD: Hallucination Mitigation in 3D-LLM Embodied Agents through Visual Contrastive Decoding

πŸ“ Summary:
3D-VCD is a new inference-time framework that reduces hallucinations in 3D embodied agents. It constructs distorted 3D scene graphs and contrasts predictions to suppress ungrounded tokens. This improves reasoning on 3D benchmarks without retraining.
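The paper's exact formulation is not reproduced here, but contrastive decoding in general follows a simple recipe: score each candidate token under a grounded input and under a deliberately corrupted one, then amplify the difference so tokens the model would emit regardless of the scene are suppressed. A minimal sketch, with a toy 4-token vocabulary and made-up logits:

```python
import numpy as np

def contrastive_decode(logits_clean, logits_distorted, alpha=1.0):
    """Generic contrastive decoding step: amplify what the grounded
    (clean-scene) prediction supports relative to the distorted one,
    suppressing tokens the model would emit even without grounding."""
    return (1 + alpha) * logits_clean - alpha * logits_distorted

# Toy vocabulary of 4 tokens. Token 2 is ungrounded: the model scores it
# high even under the distorted scene, a signature of hallucination.
clean = np.array([1.0, 2.0, 3.0, 0.5])
distorted = np.array([0.0, 0.5, 2.9, 0.1])

print(np.argmax(clean))                                 # 2 (hallucinated token wins)
print(np.argmax(contrastive_decode(clean, distorted)))  # 1 (ungrounded token suppressed)
```

The `alpha` knob trades off suppression strength against fidelity to the original distribution; this runs at inference time only, which is why no retraining is needed.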

πŸ”Ή Publication Date: Published on Apr 9

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.08645
β€’ PDF: https://arxiv.org/pdf/2604.08645
β€’ Project Page: https://plan-lab.github.io/projects/3d-vcd

==================================


#3DLLM #EmbodiedAI #HallucinationMitigation #ComputerVision #AIResearch
✨Temporally Extended Mixture-of-Experts Models

πŸ“ Summary:
Temporal extension of mixture-of-experts layers using the reinforcement-learning options framework reduces expert switching rates while maintaining model accuracy.

πŸ”Ή Publication Date: Published on Apr 22

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.20156
β€’ PDF: https://arxiv.org/pdf/2604.20156
β€’ Project Page: https://princeton-polaris-lab.github.io/moe_webpage/
β€’ Github: https://github.com/princeton-polaris-lab/rl_moe

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
✨A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications

πŸ“ Summary:
Mixture-of-Experts (MoE) models enhance the efficiency and performance of large AI models by dynamically selecting sub-models for diverse data. This survey details MoE design, algorithms, theory, and applications across machine learning fields.
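The core MoE mechanic the survey covers — a gate scores experts per input, only the top-k run, and their outputs are mixed by renormalized gate weights — fits in a few lines. This is a minimal NumPy sketch of generic top-k routing, not any specific paper's router; the linear "experts" stand in for the MLPs used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, experts, gate_w, k=2):
    """Top-k MoE: route the input to its k highest-scoring experts
    and mix their outputs by softmax-renormalized gate weights."""
    scores = gate_w @ x                   # one gating score per expert
    top = np.argsort(scores)[-k:]         # indices of the k best experts
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                          # softmax over the selected experts only
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

d, n_experts = 8, 4
# Each "expert" here is a small linear map; in real models it is an MLP.
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: M @ x for M in mats]
gate_w = rng.normal(size=(n_experts, d))

y = moe_layer(rng.normal(size=d), experts, gate_w, k=2)
print(y.shape)  # (8,)
```

Because only k of the n experts execute per input, compute grows with k while parameter count grows with n — the efficiency/capacity trade-off the survey analyzes.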

πŸ”Ή Publication Date: Published on Mar 10, 2025

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2503.07137
β€’ PDF: https://arxiv.org/pdf/2503.07137
β€’ Github: https://github.com/deepseek-ai/DeepEP

==================================


#MixtureOfExperts #MoE #AI #MachineLearning #DeepLearning
❀1
This media is not supported in your browser
VIEW IN TELEGRAM
Self Attention vs Cross Attention by hand ✍️
Resize the matrices yourself πŸ‘‰ https://byhand.ai/aMisxP

Two attention mechanisms, side by side. Both project X into queries; both compute scores S = Kᵀ × Q, normalize them into attention weights A, and produce the output F = V × A. The only difference is the source of K and V.

Self attention uses X for everything. Q, K, and V all come from projecting X. Each X token attends to every other X token. The score matrix S is square β€” 128 Γ— 128.

Cross attention uses X for queries and a second sequence E for keys and values. Each X token attends to every E token instead. The score matrix S is rectangular β€” 64 Γ— 128.

Notice what's shared and what's not:

X is the same in both β€” same 36 Γ— 128 input.

Q and K share the 16 dimension β€” that's what makes the dot product Kα΅€ Γ— Q valid in either case.

V dimensions are independent: self-attention uses 12, cross-attention uses 12. The choice doesn't depend on which mechanism you're using; it depends on what output dimension your downstream layer expects.
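The shapes above can be checked end to end in a few lines of NumPy. A minimal sketch, not the byhand.ai worksheet: the projection matrices are random stand-ins for learned weights, tokens are stored as columns (so S = Kᵀ × Q works as written), and the usual 1/√d_k scaling is included even though the walkthrough omits it:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_k, d_v = 36, 16, 12   # dimensions from the walkthrough
n_x, n_e = 128, 64               # tokens in X and E (stored as columns)

X = rng.normal(size=(d_model, n_x))   # 36 x 128
E = rng.normal(size=(d_model, n_e))   # 36 x 64

# Projection weights (learned in practice; random here).
W_q = rng.normal(size=(d_k, d_model))
W_k = rng.normal(size=(d_k, d_model))
W_v = rng.normal(size=(d_v, d_model))

def softmax_cols(S):
    """Normalize each column so one query's weights over all keys sum to 1."""
    S = S - S.max(axis=0, keepdims=True)
    P = np.exp(S)
    return P / P.sum(axis=0, keepdims=True)

def attention(X_q, X_kv):
    """Self-attention when X_kv is X_q; cross-attention otherwise."""
    Q = W_q @ X_q                         # 16 x n_q
    K = W_k @ X_kv                        # 16 x n_kv
    V = W_v @ X_kv                        # 12 x n_kv
    S = K.T @ Q                           # n_kv x n_q — square or rectangular
    A = softmax_cols(S / np.sqrt(d_k))    # attention weights
    return V @ A                          # 12 x n_q

F_self = attention(X, X)    # S is 128 x 128 (square)
F_cross = attention(X, E)   # S is 64 x 128 (rectangular)
print(F_self.shape, F_cross.shape)   # (12, 128) (12, 128)
```

Note that both outputs have the same shape: the query sequence fixes the number of output columns, and d_v fixes the rows, regardless of where K and V came from.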

https://t.iss.one/CodeProgrammer
❀2
Follow the Machine Learning with Python channel on WhatsApp: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
✨LLM Safety From Within: Detecting Harmful Content with Internal Representations

πŸ“ Summary:
SIREN is a lightweight guard model that uses LLM internal layer features to detect harmful content, outperforming current models. It is more efficient, generalizes better, and requires significantly fewer parameters than existing guard models.
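SIREN's exact architecture is described in the paper; the underlying idea — a lightweight classifier reading a frozen LLM's internal activations instead of its text output — can be illustrated generically. A sketch under stated assumptions: the feature vectors below are synthetic stand-ins for mean-pooled hidden states from a middle layer (in practice obtained via something like `output_hidden_states=True`), and the probe is plain logistic regression trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for hidden states extracted from a frozen LLM and mean-pooled
# over the sequence: one feature vector per prompt (synthetic data here).
d, n = 64, 400
harmful = rng.normal(loc=0.5, size=(n // 2, d))
benign = rng.normal(loc=-0.5, size=(n // 2, d))
X = np.vstack([harmful, benign])
y = np.array([1] * (n // 2) + [0] * (n // 2))

# A lightweight linear probe: far fewer parameters than a full guard LLM.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # P(harmful) per prompt
    g = p - y                                # gradient of logistic loss
    w -= lr * (X.T @ g) / n
    b -= lr * g.mean()

acc = (((X @ w + b) > 0).astype(int) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

The probe has d + 1 parameters, which is the sense in which internal-representation guards can be dramatically cheaper than running a second full model.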

πŸ”Ή Publication Date: Published on Apr 20

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.18519
β€’ PDF: https://arxiv.org/pdf/2604.18519
β€’ Github: https://github.com/CSSLab/SIREN

πŸ”Ή Models citing this paper:
β€’ https://huggingface.co/UofTCSSLab/SIREN-Qwen3-0.6B
β€’ https://huggingface.co/UofTCSSLab/SIREN-Qwen3-4B
β€’ https://huggingface.co/UofTCSSLab/SIREN-Llama-3.2-1B

==================================


#LLMSafety #AIethics #HarmfulContent #DeepLearning #NLP
✨dWorldEval: Scalable Robotic Policy Evaluation via Discrete Diffusion World Model

πŸ“ Summary:
dWorldEval proposes a scalable robotics policy evaluation method using a discrete diffusion world model. It unifies diverse modalities into a token space, employing a transformer and progress token for success detection. This approach significantly outperforms prior methods, enabling large-scale ...

πŸ”Ή Publication Date: Published on Apr 24

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.22152
β€’ PDF: https://arxiv.org/pdf/2604.22152

==================================


#Robotics #DiffusionModels #WorldModels #AI #MachineLearning
✨AgentSearchBench: A Benchmark for AI Agent Search in the Wild

πŸ“ Summary:
AgentSearchBench is a new benchmark for finding suitable AI agents using execution-grounded performance signals from nearly 10,000 real-world agents. It shows that description-based similarity is insufficient, and lightweight behavioral signals significantly improve agent ranking.

πŸ”Ή Publication Date: Published on Apr 24

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.22436
β€’ PDF: https://arxiv.org/pdf/2604.22436

==================================


#AI #AIAgents #Benchmarking #AgentSearch #MachineLearning
✨Learning Evidence Highlighting for Frozen LLMs

πŸ“ Summary:
HiLight enhances long-context reasoning in large language models by training a lightweight emphasis actor to highlight key evidence without modifying the original input or solver, using reinforcement ...

πŸ”Ή Publication Date: Published on Apr 24

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.22565
β€’ PDF: https://arxiv.org/pdf/2604.22565

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
✨Agentic World Modeling: Foundations, Capabilities, Laws, and Beyond

πŸ“ Summary:
World models are categorized into three capability levels and four law regimes to better understand and develop predictive environment models for AI agents across diverse domains.

πŸ”Ή Publication Date: Published on Apr 24

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.22748
β€’ PDF: https://arxiv.org/pdf/2604.22748
β€’ Project Page: https://agentic-world-modeling.xyz/
β€’ Github: https://github.com/matrix-agent/awesome-agentic-world-modeling

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
✨AgriIR: A Scalable Framework for Domain-Specific Knowledge Retrieval

πŸ“ Summary:
AgriIR is a modular retrieval-augmented generation framework for agriculture. It uses configurable stages to provide accurate, trustworthy, and resource-efficient domain-specific information. This adaptable design promotes accessibility and accountability in AI for agriculture.

πŸ”Ή Publication Date: Published on Mar 17

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.16353
β€’ PDF: https://arxiv.org/pdf/2604.16353
β€’ Github: https://github.com/Shuvam-Banerji-Seal/AgriIR

==================================


#AI #Agriculture #RAG #KnowledgeRetrieval #NLP
✨DiffNR: Diffusion-Enhanced Neural Representation Optimization for Sparse-View 3D Tomographic Reconstruction

πŸ“ Summary:
DiffNR enhances sparse-view CT reconstruction with neural representations by employing SliceFixer, a single-step diffusion model. It corrects artifacts via pseudo-reference volumes, offering 3D supervision for better accuracy and efficient optimization, with a 3.99 dB PSNR gain.

πŸ”Ή Publication Date: Published on Apr 23

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.21518
β€’ PDF: https://arxiv.org/pdf/2604.21518
β€’ Project Page: https://ooonesevennn.github.io/DiffNR/
β€’ Github: https://github.com/ooonesevennn/DiffNR

==================================


#3DReconstruction #DiffusionModels #NeuralNetworks #CTReconstruction #DeepLearning
✨FlowAnchor: Stabilizing the Editing Signal for Inversion-Free Video Editing

πŸ“ Summary:
FlowAnchor stabilizes inversion-free video editing by addressing signal instability in high-dimensional latent spaces. It uses spatial-aware attention refinement and adaptive magnitude modulation to ensure precise localization and sufficient editing strength, leading to faithful and coherent video ...

πŸ”Ή Publication Date: Published on Apr 24

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.22586
β€’ PDF: https://arxiv.org/pdf/2604.22586

==================================


#VideoEditing #DeepLearning #ComputerVision #GenerativeAI #AIResearch
✨Contexts are Never Long Enough: Structured Reasoning for Scalable Question Answering over Long Document Sets

πŸ“ Summary:
SLIDERS tackles long-document QA by extracting information into a relational database and using SQL for structured reasoning. This avoids LLM context window issues and aggregation bottlenecks, significantly outperforming traditional methods on various benchmarks.
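The extract-then-query pattern the summary describes is easy to picture with the standard library. A hedged sketch, not the SLIDERS pipeline itself: the rows below are hypothetical facts an LLM extractor might pull from the documents, and the aggregation question then runs as SQL over all of them at once, with no context-window limit:

```python
import sqlite3

# Hypothetical facts extracted from a set of long documents:
# (document id, company, year, revenue in $M).
rows = [
    ("doc1", "Acme", 2022, 120.0),
    ("doc2", "Acme", 2023, 150.0),
    ("doc3", "Globex", 2023, 90.0),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revenue (doc TEXT, company TEXT, year INT, usd_m REAL)")
conn.executemany("INSERT INTO revenue VALUES (?, ?, ?, ?)", rows)

# An aggregation question ("total 2023 revenue across all companies?")
# becomes a SQL query instead of stuffing every document into the prompt.
total = conn.execute(
    "SELECT SUM(usd_m) FROM revenue WHERE year = 2023"
).fetchone()[0]
print(total)  # 240.0
```

Extraction cost scales linearly with the corpus, while the reasoning step is a database query — which is how the approach sidesteps both the context-window and aggregation bottlenecks.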

πŸ”Ή Publication Date: Published on Apr 24

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2604.22294
β€’ PDF: https://arxiv.org/pdf/2604.22294

==================================


#QuestionAnswering #NLP #AI #SQL #LongDocuments