ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
WebGen-R1: Incentivizing Large Language Models to Generate Functional and Aesthetic Websites with Reinforcement Learning

📝 Summary:
WebGen-R1 is a reinforcement learning framework enabling small language models to generate functional and aesthetically pleasing multi-page websites. It uses structured generation and a novel cascaded multimodal reward for structural integrity, functional feedback, and aesthetic supervision. WebG...
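The cascaded reward described above can be sketched as a gated pipeline: later stages only contribute if earlier ones pass. This is a minimal illustration under assumed stage names and equal weights, not the paper's actual reward implementation.

```python
def cascaded_reward(html_ok: bool, functional_score: float, aesthetic_score: float) -> float:
    """Toy cascaded reward: structural integrity gates the later stages.

    html_ok          -- whether the generated site parses / renders (structural gate)
    functional_score -- 0..1 score from functional feedback (e.g. link/interaction checks)
    aesthetic_score  -- 0..1 score from aesthetic supervision (e.g. a visual judge model)
    """
    if not html_ok:  # structural gate: broken structure yields zero reward
        return 0.0
    # remaining stages combined with illustrative 50/50 weights
    return 0.5 * functional_score + 0.5 * aesthetic_score
```

Gating structure first keeps the RL signal from rewarding pretty-but-broken pages.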

🔹 Publication Date: Published on Apr 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.20398
• PDF: https://arxiv.org/pdf/2604.20398

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#ReinforcementLearning #LLMs #WebsiteGeneration #AI #WebDevelopment

EditCrafter: Tuning-free High-Resolution Image Editing via Pretrained Diffusion Model

📝 Summary:
EditCrafter enables high-resolution image editing using pretrained text-to-image diffusion models through tiled inversion and noise-damped manifold-constrained guidance without requiring model tuning....
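Tiled processing of a high-resolution image, as mentioned in the summary, boils down to covering the canvas with overlapping windows. A minimal sketch of such a tiling schedule follows; the function name and parameters are illustrative, not from the paper's code.

```python
def split_into_tiles(h: int, w: int, tile: int, overlap: int):
    """Return (top, left) offsets of overlapping tile windows covering an h x w image."""
    stride = tile - overlap
    tops = list(range(0, max(h - tile, 0) + 1, stride))
    lefts = list(range(0, max(w - tile, 0) + 1, stride))
    # ensure the final tiles reach the right/bottom borders exactly
    if tops[-1] + tile < h:
        tops.append(h - tile)
    if lefts[-1] + tile < w:
        lefts.append(w - tile)
    return [(t, l) for t in tops for l in lefts]
```

Each window would be inverted and edited independently, with the overlap regions blended to hide seams.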

🔹 Publication Date: Published on Apr 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.10268
• PDF: https://arxiv.org/pdf/2604.10268
• Project Page: https://editcrafter.github.io/
• Github: https://github.com/EditCrafter/EditCrafter

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
PersonalAI: A Systematic Comparison of Knowledge Graph Storage and Retrieval Approaches for Personalized LLM agents

📝 Summary:
A knowledge graph-based external memory framework enhances language model personalization through dynamic semantic and temporal representations with diverse retrieval mechanisms.
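A graph memory with temporal retrieval can be illustrated with a toy triple store: facts are timestamped (subject, relation, object) entries, and retrieval favors recency. The class and method names are hypothetical, shown only to make the idea concrete.

```python
class GraphMemory:
    """Toy external memory holding (subject, relation, object, timestamp) triples."""

    def __init__(self):
        self.triples = []

    def add(self, subj: str, rel: str, obj: str, ts: int):
        """Store one timestamped fact about a user or entity."""
        self.triples.append((subj, rel, obj, ts))

    def retrieve(self, subj: str, k: int = 3):
        """Temporal retrieval: the k most recent facts about `subj`."""
        hits = [t for t in self.triples if t[0] == subj]
        return sorted(hits, key=lambda t: t[3], reverse=True)[:k]
```

Real systems would add semantic (embedding-based) retrieval alongside this recency ordering.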

🔹 Publication Date: Published on Apr 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2506.17001
• PDF: https://arxiv.org/pdf/2506.17001

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Encoder-Free Human Motion Understanding via Structured Motion Descriptions

📝 Summary:
Structured Motion Description (SMD) converts human motion into natural language, enabling large language models (LLMs) to reason about it directly. This encoder-free method achieves state-of-the-art performance on motion question answering and captioning.
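The core idea, mapping a numeric motion signal to text an LLM can consume, can be sketched with a single joint angle: threshold per-frame values into states, then collapse repeats into a phase description. The threshold and wording here are illustrative assumptions, not the paper's actual description format.

```python
def describe_joint(angles, flexed_below: float = 90.0) -> str:
    """Summarize a knee-angle trajectory (degrees per frame) as a coarse phrase.

    Frames below `flexed_below` are labelled "flexed", others "extended";
    consecutive duplicates are merged into an ordered phase sequence.
    """
    states = ["flexed" if a < flexed_below else "extended" for a in angles]
    phases = [s for i, s in enumerate(states) if i == 0 or s != states[i - 1]]
    return "the knee is " + ", then ".join(phases)
```

A full SMD-style pipeline would do this across many joints and attributes, yielding structured text the LLM reasons over without any motion encoder.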

🔹 Publication Date: Published on Apr 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.21668
• PDF: https://arxiv.org/pdf/2604.21668
• Project Page: https://yaozhang182.github.io/motion-smd/
• Github: https://yaozhang182.github.io/motion-smd/

🔹 Models citing this paper:
https://huggingface.co/zyyy12138/motion-smd-lora

🔹 Datasets citing this paper:
https://huggingface.co/datasets/zyyy12138/motion-smd-data

==================================


#HumanMotionUnderstanding #LLMs #NLP #AI #DeepLearning
LLaTiSA: Towards Difficulty-Stratified Time Series Reasoning from Visual Perception to Semantics

📝 Summary:
A hierarchical time series reasoning dataset and model are introduced to improve LLM understanding of temporal data through visualized patterns and numerical tables.
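Feeding a numerical table to an LLM, one of the two input modalities mentioned above, is just a matter of serializing the series as text. A minimal sketch, with an assumed Markdown-table layout:

```python
def series_to_table(timestamps, values) -> str:
    """Render a time series as a Markdown table suitable for an LLM prompt."""
    rows = ["| t | value |", "|---|-------|"]
    rows += [f"| {t} | {v:.2f} |" for t, v in zip(timestamps, values)]
    return "\n".join(rows)
```

The companion "visualized patterns" modality would render the same series as a chart image for a vision-language model.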

🔹 Publication Date: Published on Apr 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.17295
• PDF: https://arxiv.org/pdf/2604.17295
• Github: https://github.com/RainingNovember/LLaTiSA

🔹 Datasets citing this paper:
https://huggingface.co/datasets/November-Rain/HiTSR

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Vista4D: Video Reshooting with 4D Point Clouds

📝 Summary:
Vista4D is a video reshooting framework that uses 4D point clouds to synthesize dynamic scenes from new camera viewpoints. It improves 4D consistency, camera control, and visual quality by overcoming depth estimation issues and preserving scene content.
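Synthesizing a new viewpoint from a point cloud ultimately relies on standard pinhole reprojection: transform world points into the new camera frame, then perspective-divide. A minimal numpy sketch (generic camera math, not Vista4D's pipeline):

```python
import numpy as np

def project_points(points_xyz: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 world points into pixel coordinates for a pinhole camera.

    K -- 3x3 intrinsics; R, t -- world-to-camera rotation and translation.
    """
    cam = points_xyz @ R.T + t        # world frame -> camera frame
    uvw = cam @ K.T                   # apply intrinsics (homogeneous image coords)
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> (u, v) pixels
```

Repeating this per frame with time-varying point clouds is what makes the representation "4D"; the paper's contributions concern making those point clouds consistent, not the projection itself.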

🔹 Publication Date: Published on Apr 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.21915
• PDF: https://arxiv.org/pdf/2604.21915
• Project Page: https://eyeline-labs.github.io/Vista4D
• Github: https://github.com/Eyeline-Labs/Vista4D

🔹 Models citing this paper:
https://huggingface.co/Eyeline-Labs/Vista4D

🔹 Datasets citing this paper:
https://huggingface.co/datasets/Eyeline-Labs/Vista4D-Eval-Data

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
Coevolving Representations in Joint Image-Feature Diffusion

📝 Summary:
CoReDi adapts the semantic representation space during diffusion training by learning a linear projection. This joint evolution improves convergence speed and sample quality in both VAE latent and pixel-space diffusion models, addressing limitations of fixed representation spaces.
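The "learned linear projection" of a representation space can be pictured as fitting a matrix W that maps one feature space onto another; in the paper it is learned jointly with the diffusion model, but a closed-form least-squares fit shows the mechanism. This is an illustrative sketch only:

```python
import numpy as np

def fit_projection(feats: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Least-squares linear map W so that feats @ W approximates targets.

    Stands in for the lightweight learned projection that lets the
    semantic representation space evolve alongside diffusion training.
    """
    W, *_ = np.linalg.lstsq(feats, targets, rcond=None)
    return W
```

In training, W would instead be a trainable layer updated by the same gradients as the diffusion loss, so representation and model co-evolve.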

🔹 Publication Date: Published on Apr 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.17492
• PDF: https://arxiv.org/pdf/2604.17492
• Project Page: https://huggingface.co/papers?q=lightweight%20linear%20projection
• Github: https://github.com/zelaki/CoReDi

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
3D-VCD: Hallucination Mitigation in 3D-LLM Embodied Agents through Visual Contrastive Decoding

📝 Summary:
3D-VCD is a new inference-time framework that reduces hallucinations in 3D embodied agents. It constructs distorted 3D scene graphs and contrasts predictions to suppress ungrounded tokens. This improves reasoning on 3D benchmarks without retraining.
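Contrastive decoding of the kind described above combines logits from the clean input and a distorted one, boosting tokens that depend on the real scene and suppressing tokens the model would emit regardless (the ungrounded ones). A minimal sketch of the standard contrastive-logits formula, with an assumed weighting:

```python
import numpy as np

def contrastive_logits(logits_clean: np.ndarray, logits_distorted: np.ndarray,
                       alpha: float = 1.0) -> np.ndarray:
    """Contrast predictions from the clean vs. distorted 3D scene.

    Tokens scored highly even under a distorted scene graph are treated
    as ungrounded and pushed down; grounded tokens are amplified.
    """
    return (1 + alpha) * logits_clean - alpha * logits_distorted
```

Decoding then proceeds normally (argmax/sampling) over the contrasted logits, which is why no retraining is needed.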

🔹 Publication Date: Published on Apr 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.08645
• PDF: https://arxiv.org/pdf/2604.08645
• Project Page: https://plan-lab.github.io/projects/3d-vcd

==================================


#3DLLM #EmbodiedAI #HallucinationMitigation #ComputerVision #AIResearch
Temporally Extended Mixture-of-Experts Models

📝 Summary:
Temporal extension of mixture-of-experts layers using the reinforcement-learning options framework reduces expert switching rates while maintaining model accuracy.
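Option-style routing means an expert, once chosen, persists across timesteps until a termination condition fires, rather than being re-selected per token. A toy sketch with a hypothetical score-threshold termination rule (the paper's actual termination is learned):

```python
def route_with_persistence(gate_scores_seq, beta: float = 0.3):
    """Temporally extended routing: keep the current expert until it 'terminates'.

    gate_scores_seq -- per-timestep lists of gate scores, one per expert
    beta            -- termination threshold: re-route when the incumbent
                       expert's score drops below it (illustrative rule)
    """
    choices, current = [], None
    for scores in gate_scores_seq:
        best = max(range(len(scores)), key=scores.__getitem__)
        if current is None or scores[current] < beta:
            current = best  # option terminates: commit to the new best expert
        choices.append(current)
    return choices
```

Compared with per-step argmax routing, this trades a little per-step optimality for far fewer expert switches, which is the efficiency win the summary refers to.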

🔹 Publication Date: Published on Apr 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2604.20156
• PDF: https://arxiv.org/pdf/2604.20156
• Project Page: https://princeton-polaris-lab.github.io/moe_webpage/
• Github: https://github.com/princeton-polaris-lab/rl_moe

==================================


#AI #DataScience #MachineLearning #HuggingFace #Research
A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications

📝 Summary:
Mixture-of-Experts (MoE) models enhance large AI model efficiency and performance by dynamically selecting sub-models for diverse data. This survey details MoE design, algorithms, theory, and applications in various machine learning fields.
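The "dynamically selecting sub-models" idea is classic top-k gating: a gate scores every expert, only the k best run, and their outputs are combined with renormalized softmax weights. A minimal numpy sketch (illustrative, with experts as plain callables):

```python
import numpy as np

def moe_forward(x: np.ndarray, experts, gate_w: np.ndarray, k: int = 2) -> np.ndarray:
    """Top-k gated mixture-of-experts forward pass for a single input vector.

    experts -- list of callables, each mapping x to an output vector
    gate_w  -- gating weight matrix producing one score per expert
    """
    scores = x @ gate_w                           # gating logits, one per expert
    topk = np.argsort(scores)[-k:]                # indices of the k best experts
    w = np.exp(scores[topk] - scores[topk].max()) # stable softmax over the selected k
    w /= w.sum()
    return sum(wi * experts[i](x) for wi, i in zip(w, topk))
```

Because only k of the experts execute, compute grows with k rather than with the total expert count, which is the efficiency argument the survey develops.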

🔹 Publication Date: Published on Mar 10, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2503.07137
• PDF: https://arxiv.org/pdf/2503.07137
• Github: https://github.com/deepseek-ai/DeepEP

==================================


#MixtureOfExperts #MoE #AI #MachineLearning #DeepLearning