ML Research Hub
32.8K subscribers
4.38K photos
270 videos
23 files
4.74K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
πŸš€ Master Data Science & Programming!

Unlock your potential with this curated list of Telegram channels. Whether you need books, datasets, interview prep, or project ideas, we have the perfect resource for you. Join the community today!


πŸ”° Machine Learning with Python
Learn Machine Learning with hands-on Python tutorials, real-world code examples, and clear explanations for researchers and developers.
https://t.iss.one/CodeProgrammer

πŸ”– Machine Learning
Machine learning insights, practical tutorials, and clear explanations for beginners and aspiring data scientists. Follow the channel for models, algorithms, coding guides, and real-world ML applications.
https://t.iss.one/DataScienceM

🧠 Code With Python
This channel delivers clear, practical content for developers, covering Python, Django, and data structures and algorithms (DSA) – perfect for learning, coding, and mastering key programming skills.
https://t.iss.one/DataScience4

🎯 PyData Careers | Quiz
Python Data Science jobs, interview tips, and career insights for aspiring professionals.
https://t.iss.one/DataScienceQ

πŸ’Ύ Kaggle Data Hub
Your go-to hub for Kaggle datasets – explore, analyze, and leverage data for Machine Learning and Data Science projects.
https://t.iss.one/datasets1

πŸ§‘β€πŸŽ“ Udemy Coupons | Courses
The first Telegram channel to offer free Udemy coupons.
https://t.iss.one/DataScienceC

πŸ˜€ ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.
https://t.iss.one/DataScienceT

πŸ’¬ Data Science Chat
An active community group for discussing data challenges and networking with peers.
https://t.iss.one/DataScience9

🐍 Python Arab | Ψ¨Ψ§ΩŠΨ«ΩˆΩ† عربي
The largest Arabic-speaking group for Python developers to share knowledge and help one another.
https://t.iss.one/PythonArab

πŸ–Š Data Science Jupyter Notebooks
Explore the world of Data Science through Jupyter Notebooksβ€”insights, tutorials, and tools to boost your data journey. Code, analyze, and visualize smarter with every post.
https://t.iss.one/DataScienceN

πŸ“Ί Free Online Courses | Videos
Free online courses covering data science, machine learning, analytics, programming, and essential skills for learners.
https://t.iss.one/DataScienceV

πŸ“ˆ Data Analytics
Dive into the world of Data Analytics – uncover insights, explore trends, and master data-driven decision making.
https://t.iss.one/DataAnalyticsX

🎧 Learn Python Hub
Master Python with step-by-step courses – from basics to advanced projects and practical applications.
https://t.iss.one/Python53

⭐️ Research Papers
Professional Academic Writing & Simulation Services
https://t.iss.one/DataScienceY

━━━━━━━━━━━━━━━━━━
Admin: @HusseinSheikho
✨CauSight: Learning to Supersense for Visual Causal Discovery

πŸ“ Summary:
CauSight is a novel vision-language model for visual causal discovery, inferring cause-effect relations in images. It uses the VCG-32K dataset and Tree-of-Causal-Thought, significantly outperforming GPT-4.1 with a threefold performance boost.

πŸ”Ή Publication Date: Published on Dec 1

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.01827
β€’ PDF: https://arxiv.org/pdf/2512.01827
β€’ Github: https://github.com/OpenCausaLab/CauSight

πŸ”Ή Models citing this paper:
β€’ https://huggingface.co/OpenCausaLab/CauSight

✨ Datasets citing this paper:
β€’ https://huggingface.co/datasets/OpenCausaLab/VCG-32K

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#VisualCausalDiscovery #VisionLanguageModels #AI #DeepLearning #CausalInference
✨POLARIS: Projection-Orthogonal Least Squares for Robust and Adaptive Inversion in Diffusion Models

πŸ“ Summary:
POLARIS minimizes approximate noise errors in diffusion models during image inversion. It robustly treats the guidance scale as a step-wise variable, significantly improving image editing and restoration accuracy by reducing errors at each step.
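
πŸ’‘ A minimal sketch of the general idea (not the paper's code): at each inversion step, classifier-free guidance mixes conditional and unconditional noise predictions, and a per-step guidance scale can be chosen by a one-dimensional least-squares fit against a reference noise estimate. Function and array names below are placeholders.

```python
import numpy as np

def stepwise_guidance_scale(eps_cond, eps_uncond, eps_target):
    """Least-squares fit of the guidance scale w at a single step.

    Classifier-free guidance: eps = eps_uncond + w * (eps_cond - eps_uncond).
    Choosing w to minimize ||eps - eps_target||^2 projects the residual onto
    the guidance direction (illustrative; the paper's objective may differ).
    """
    d = (eps_cond - eps_uncond).ravel()        # guidance direction
    r = (eps_target - eps_uncond).ravel()      # residual to explain
    return float(d @ r) / (float(d @ d) + 1e-12)

# toy usage with random arrays standing in for noise predictions
rng = np.random.default_rng(0)
eps_u, eps_c, eps_t = (rng.standard_normal((4, 64, 64)) for _ in range(3))
print(stepwise_guidance_scale(eps_c, eps_u, eps_t))
```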

πŸ”Ή Publication Date: Published on Nov 29

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.00369
β€’ PDF: https://arxiv.org/pdf/2512.00369
β€’ Project Page: https://polaris-code-official.github.io/
β€’ Github: https://github.com/Chatonz/POLARIS

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#DiffusionModels #ImageProcessing #AI #MachineLearning #ComputerVision
✨Flow Straighter and Faster: Efficient One-Step Generative Modeling via MeanFlow on Rectified Trajectories

πŸ“ Summary:
Rectified MeanFlow enables efficient one-step generative modeling. It achieves this by modeling the mean velocity field on a single-step rectified trajectory with a truncation heuristic, improving both sample quality and training efficiency over prior methods.
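
πŸ’‘ A toy sketch of mean-velocity modeling on a straight (rectified) path, assuming a small MLP stand-in for the real architecture; the paper's truncation heuristic is not reproduced. On a straight path from x0 to x1 the velocity is constant, so the mean velocity is simply x1 - x0 and one network evaluation yields a one-step sample.

```python
import torch
import torch.nn as nn

class MeanVelocityNet(nn.Module):
    """Tiny MLP standing in for the real mean-velocity network."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def meanflow_loss(model, x0, x1):
    # On the straight path x_t = (1 - t) * x0 + t * x1 the velocity is constant,
    # so the mean velocity over any remaining interval is x1 - x0.
    t = torch.rand(x0.size(0), 1)
    xt = (1 - t) * x0 + t * x1
    return ((model(xt, t) - (x1 - x0)) ** 2).mean()

@torch.no_grad()
def sample_one_step(model, x0):
    # One-step generation: follow the predicted mean velocity over the full interval.
    return x0 + model(x0, torch.zeros(x0.size(0), 1))

model = MeanVelocityNet()
x0, x1 = torch.randn(16, 2), torch.randn(16, 2)   # x0: noise, x1: data (toy 2-D case)
loss = meanflow_loss(model, x0, x1)
sample = sample_one_step(model, torch.randn(16, 2))
```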

πŸ”Ή Publication Date: Published on Nov 28

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.23342
β€’ PDF: https://arxiv.org/pdf/2511.23342
β€’ Github: https://github.com/Xinxi-Zhang/Re-MeanFlow

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#GenerativeAI #MachineLearning #DeepLearning #AIResearch #MeanFlow
πŸ‘1
✨MEGConformer: Conformer-Based MEG Decoder for Robust Speech and Phoneme Classification

πŸ“ Summary:
Conformer-based decoders were adapted to MEG signals for Speech Detection and Phoneme Classification. With MEG-oriented augmentations and normalization, the resulting systems achieved high performance, surpassing the competition baselines and ranking in the top 10 for both tasks.
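
πŸ’‘ An illustrative example of the kind of MEG preprocessing the summary mentions: per-channel z-score normalization plus two simple augmentations (channel dropout and time masking). These are generic choices, not necessarily the paper's exact pipeline.

```python
import numpy as np

def zscore_per_channel(meg, eps=1e-8):
    """Normalize each MEG sensor channel to zero mean / unit variance over time.
    meg: array of shape (channels, time)."""
    mu = meg.mean(axis=1, keepdims=True)
    sd = meg.std(axis=1, keepdims=True)
    return (meg - mu) / (sd + eps)

def augment(meg, rng, p_channel_drop=0.1, max_time_mask=50):
    """Two simple MEG-style augmentations: random channel dropout and one time mask."""
    out = meg.copy()
    drop = rng.random(out.shape[0]) < p_channel_drop   # zero out a few sensors
    out[drop] = 0.0
    start = rng.integers(0, max(1, out.shape[1] - max_time_mask))
    out[:, start:start + max_time_mask] = 0.0          # mask a contiguous window
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((306, 1000))        # e.g. 306 sensors, 1000 time samples
x_aug = augment(zscore_per_channel(x), rng)
```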

πŸ”Ή Publication Date: Published on Dec 1

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.01443
β€’ PDF: https://arxiv.org/pdf/2512.01443
β€’ Github: https://github.com/neural2speech/libribrain-experiments

πŸ”Ή Models citing this paper:
β€’ https://huggingface.co/zuazo/megconformer-speech-detection
β€’ https://huggingface.co/zuazo/megconformer-phoneme-classification

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#MEGConformer #MEG #SpeechProcessing #Neuroscience #AI
✨Generative Video Motion Editing with 3D Point Tracks

πŸ“ Summary:
This paper presents a track-conditioned video-to-video framework for precise joint camera and object motion editing. It uses 3D point tracks to maintain spatiotemporal coherence and handle occlusions through explicit depth cues. This enables diverse motion edits.

πŸ”Ή Publication Date: Published on Dec 1

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02015
β€’ PDF: https://arxiv.org/pdf/2512.02015
β€’ Project Page: https://edit-by-track.github.io/

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#VideoEditing #GenerativeAI #ComputerVision #3DTracking #DeepLearning
❀1πŸ‘1
✨ORION: Teaching Language Models to Reason Efficiently in the Language of Thought

πŸ“ Summary:
ORION models compress reasoning into ultra-compact structured tokens, inspired by Mentalese. This reduces reasoning steps by 4-16x, cuts inference latency by 5x and training costs by 7-9x, while maintaining high accuracy.

πŸ”Ή Publication Date: Published on Nov 28

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.22891
β€’ PDF: https://arxiv.org/pdf/2511.22891

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#LLM #AI #AIReasoning #CognitiveAI #DeepLearning
✨A Hierarchical Framework for Humanoid Locomotion with Supernumerary Limbs

πŸ“ Summary:
A hierarchical control framework enables stable humanoid locomotion with supernumerary limbs. It combines learning-based gait with model-based limb balancing, improving stability and reducing the CoM trajectory Dynamic Time Warping distance by 47%. This decoupled design effectively mitigates dyna...
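
πŸ’‘ For reference, the metric cited above, Dynamic Time Warping (DTW) distance between center-of-mass trajectories, can be computed with a standard dynamic program; the toy trajectories below are illustrative only.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two trajectories a, b of shape (T, d),
    the kind of metric the summary cites for comparing CoM trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# toy usage: a reference CoM path vs. a perturbed one
t = np.linspace(0, 1, 100)[:, None]
ref = np.hstack([t, np.sin(2 * np.pi * t)])
per = ref + 0.05 * np.random.default_rng(0).standard_normal(ref.shape)
print(dtw_distance(ref, per))
```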

πŸ”Ή Publication Date: Published on Nov 25

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.00077
β€’ PDF: https://arxiv.org/pdf/2512.00077
β€’ Github: https://github.com/heyzbw/HuSLs

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#Robotics #HumanoidRobotics #Locomotion #ControlSystems #SupernumeraryLimbs
✨DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models

πŸ“ Summary:
DeepSeek-V3.2 introduces DeepSeek Sparse Attention and a scalable reinforcement learning framework. This allows it to achieve superior reasoning and agent performance, with its Speciale variant surpassing GPT-5 and matching Gemini-3.0-Pro in complex tasks.

πŸ”Ή Publication Date: Published on Dec 2

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02556
β€’ PDF: https://arxiv.org/pdf/2512.02556

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#LLM #AI #DeepLearning #ReinforcementLearning #GenerativeAI
✨Does Hearing Help Seeing? Investigating Audio-Video Joint Denoising for Video Generation

πŸ“ Summary:
This paper shows audio-video joint denoising significantly improves video generation quality. By using audio as a privileged signal, the AVFullDiT model regularizes video dynamics, leading to better video quality beyond just synchrony.
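
πŸ’‘ A toy sketch of the joint-denoising idea, assuming placeholder latent shapes and a small MLP in place of the AVFullDiT architecture: both modalities are corrupted at a shared timestep and denoised by one model, so the audio branch can regularize the video branch. The corruption and loss below are a toy objective, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    """Placeholder joint denoiser over concatenated video and audio latents."""
    def __init__(self, v_dim=64, a_dim=16, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(v_dim + a_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, v_dim + a_dim),
        )
        self.v_dim = v_dim

    def forward(self, v_lat, a_lat, t):
        out = self.net(torch.cat([v_lat, a_lat, t], dim=-1))
        return out[..., :self.v_dim], out[..., self.v_dim:]

def joint_denoise_loss(model, v0, a0):
    """Corrupt both modalities at a shared timestep and regress the injected noise jointly."""
    t = torch.rand(v0.size(0), 1)
    nv, na = torch.randn_like(v0), torch.randn_like(a0)
    v_t = (1 - t) * v0 + t * nv
    a_t = (1 - t) * a0 + t * na
    pv, pa = model(v_t, a_t, t)
    return ((pv - nv) ** 2).mean() + ((pa - na) ** 2).mean()

model = JointDenoiser()
video_lat, audio_lat = torch.randn(8, 64), torch.randn(8, 16)
loss = joint_denoise_loss(model, video_lat, audio_lat)
```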

πŸ”Ή Publication Date: Published on Dec 2

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02457
β€’ PDF: https://arxiv.org/pdf/2512.02457
β€’ Project Page: https://jianzongwu.github.io/projects/does-hearing-help-seeing/
β€’ Github: https://github.com/jianzongwu/Does-Hearing-Help-Seeing

✨ Datasets citing this paper:
β€’ https://huggingface.co/datasets/jianzongwu/ALT-Merge
β€’ https://huggingface.co/datasets/jianzongwu/VGGSound-T2AV

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#VideoGeneration #MultimodalAI #DeepLearning #ComputerVision #AIResearch
✨PAI-Bench: A Comprehensive Benchmark For Physical AI

πŸ“ Summary:
PAI-Bench is a new benchmark evaluating multi-modal LLMs and video generative models for physical AI perception and prediction. It reveals current models struggle with physical coherence, forecasting, and causal reasoning in real-world dynamics. This highlights significant gaps for future physica...

πŸ”Ή Publication Date: Published on Dec 1

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.01989
β€’ PDF: https://arxiv.org/pdf/2512.01989
β€’ Github: https://github.com/SHI-Labs/physical-ai-bench

✨ Spaces citing this paper:
β€’ https://huggingface.co/spaces/shi-labs/physical-ai-bench-leaderboard

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#PhysicalAI #LLMs #Benchmarking #GenerativeAI #ComputerVision
✨Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization

πŸ“ Summary:
Concise Chain-of-Thought steps, specifically minimal visual grounding, are most effective for achieving generalizable visual reasoning in vision-language models. Longer or visual CoT primarily accelerates training but does not improve final performance or generalization across tasks.

πŸ”Ή Publication Date: Published on Nov 27

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.22586
β€’ PDF: https://arxiv.org/pdf/2511.22586

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#ChainOfThought #VisionLanguageModels #VisualReasoning #AIGeneralization #DeepLearning
✨GUI Exploration Lab: Enhancing Screen Navigation in Agents via Multi-Turn Reinforcement Learning

πŸ“ Summary:
GUI Exploration Lab is a simulation environment for training GUI agents in screen navigation. It finds that supervised fine-tuning establishes the basics, single-turn reinforcement learning improves generalization, and multi-turn RL enhances exploration for superior navigation performance.

πŸ”Ή Publication Date: Published on Dec 2

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02423
β€’ PDF: https://arxiv.org/pdf/2512.02423

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#ReinforcementLearning #GUIAgents #AINavigation #MachineLearning #AIResearch
✨Benchmarking Scientific Understanding and Reasoning for Video Generation using VideoScience-Bench

πŸ“ Summary:
VideoScience-Bench introduces a new benchmark evaluating video models' scientific reasoning. It assesses their ability to generate phenomena consistent with undergraduate physics and chemistry, filling a critical gap. It is the first benchmark to evaluate video models as scientific reasoners.

πŸ”Ή Publication Date: Published on Dec 2

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02942
β€’ PDF: https://arxiv.org/pdf/2512.02942

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#VideoGeneration #AIResearch #ScientificReasoning #AIModels #Benchmarking
✨UnicEdit-10M: A Dataset and Benchmark Breaking the Scale-Quality Barrier via Unified Verification for Reasoning-Enriched Edits

πŸ“ Summary:
This paper tackles performance gaps in image editing models caused by data scarcity by introducing UnicEdit-10M, a 10M-scale, high-quality dataset built with a lightweight verification pipeline. It also proposes UnicBench, a new benchmark with novel metrics to diagnose reasoning limitations in models.

πŸ”Ή Publication Date: Published on Dec 1

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02790
β€’ PDF: https://arxiv.org/pdf/2512.02790

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#ImageEditing #AI #Dataset #Benchmark #ComputerVision
✨Guided Self-Evolving LLMs with Minimal Human Supervision

πŸ“ Summary:
R-Few enables stable LLM self-evolution using a guided Self-Play Challenger-Solver framework with minimal human input. It leverages human examples for synthetic data and a curriculum for training, consistently improving math and reasoning performance.

πŸ”Ή Publication Date: Published on Dec 2

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02472
β€’ PDF: https://arxiv.org/pdf/2512.02472

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#LLM #SelfEvolvingAI #MachineLearning #DeepLearning #AIResearch
✨DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation

πŸ“ Summary:
DualCamCtrl is a novel diffusion model for camera-controlled video generation. It employs a dual-branch framework and Semantic Guided Mutual Alignment to generate consistent RGB and depth, better disentangling appearance and geometry for accurate camera trajectories.

πŸ”Ή Publication Date: Published on Nov 28

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.23127
β€’ PDF: https://arxiv.org/pdf/2511.23127
β€’ Project Page: https://soyouthinkyoucantell.github.io/dualcamctrl-page/
β€’ Github: https://github.com/EnVision-Research/DualCamCtrl

πŸ”Ή Models citing this paper:
β€’ https://huggingface.co/FayeHongfeiZhang/DualCamCtrl

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#DiffusionModels #VideoGeneration #ComputerVision #GenerativeAI #DeepLearning
✨DiG-Flow: Discrepancy-Guided Flow Matching for Robust VLA Models

πŸ“ Summary:
DiG-Flow enhances VLA model robustness by using geometric regularization to align observation and action embeddings. It measures embedding discrepancy, applies residual updates, and consistently boosts performance on complex tasks and with limited data.
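
πŸ’‘ A rough sketch of a discrepancy-guided residual update, assuming a cosine-based discrepancy measure and a placeholder projection layer; the paper's actual geometric regularizer may differ.

```python
import torch
import torch.nn.functional as F

def discrepancy(obs_emb, act_emb):
    """Simple discrepancy between observation and action embeddings:
    1 - cosine similarity, averaged over the batch (illustrative only)."""
    return (1.0 - F.cosine_similarity(obs_emb, act_emb, dim=-1)).mean()

def residual_update(act_emb, obs_emb, proj, alpha=0.1):
    """Apply a small residual correction to the action embedding, scaled by the
    measured discrepancy. `proj` is a placeholder layer mapping observation
    features into the action-embedding space."""
    d = discrepancy(obs_emb, act_emb)
    return act_emb + alpha * d * proj(obs_emb)

# toy usage
proj = torch.nn.Linear(512, 512)
obs, act = torch.randn(8, 512), torch.randn(8, 512)
act_new = residual_update(act, obs, proj)
```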

πŸ”Ή Publication Date: Published on Dec 1

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.01715
β€’ PDF: https://arxiv.org/pdf/2512.01715
β€’ Project Page: https://beingbeyond.github.io/DiG-Flow/
β€’ Github: https://beingbeyond.github.io/DiG-Flow

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#VLAModels #RobustAI #FlowMatching #MachineLearning #DeepLearning
πŸ‘1
✨Glance: Accelerating Diffusion Models with 1 Sample

πŸ“ Summary:
Glance accelerates diffusion models with a phase-aware strategy using lightweight LoRA adapters. This method applies varying speedups across denoising stages, achieving up to 5x acceleration and strong generalization with minimal retraining on just 1 sample.
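
πŸ’‘ A hypothetical illustration of a phase-aware schedule: a different (made-up) LoRA adapter and step-skip factor is selected depending on how far the denoising trajectory has progressed. Adapter names and skip factors are placeholders, not the paper's configuration.

```python
# Phase table: (start, end, adapter name, step-skip factor); all values are illustrative.
PHASES = [
    (0.0, 0.3, "lora_early", 1),   # early steps: take every step
    (0.3, 0.7, "lora_mid",   2),   # middle: skip every other step
    (0.7, 1.0, "lora_late",  4),   # late: skip aggressively
]

def phase_config(progress):
    """progress in [0, 1): fraction of the denoising schedule completed."""
    for lo, hi, adapter, skip in PHASES:
        if lo <= progress < hi:
            return adapter, skip
    return PHASES[-1][2], PHASES[-1][3]

total_steps = 50
step = 0
while step < total_steps:
    adapter, skip = phase_config(step / total_steps)
    # model.set_adapter(adapter); model.denoise_step(...)   # placeholder calls
    step += skip
```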

πŸ”Ή Publication Date: Published on Dec 2

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02899
β€’ PDF: https://arxiv.org/pdf/2512.02899

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#DiffusionModels #ModelAcceleration #LoRA #DeepLearning #GenerativeAI
✨Video4Spatial: Towards Visuospatial Intelligence with Context-Guided Video Generation

πŸ“ Summary:
Video4Spatial uses video diffusion models with only visual data to perform complex spatial tasks like navigation and object grounding. It demonstrates strong spatial understanding, planning, and generalization, advancing visuospatial reasoning.

πŸ”Ή Publication Date: Published on Dec 2

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.03040
β€’ PDF: https://arxiv.org/pdf/2512.03040

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#Video4Spatial #VisuospatialAI #DiffusionModels #SpatialReasoning #ComputerVision
✨YingVideo-MV: Music-Driven Multi-Stage Video Generation

πŸ“ Summary:
YingVideo-MV is the first framework to generate high-quality, music-driven long performance videos with synchronized camera motion. It uses audio analysis, diffusion transformers, and a camera adapter, achieving precise music-motion-camera synchronization.

πŸ”Ή Publication Date: Published on Dec 2

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2512.02492
β€’ PDF: https://arxiv.org/pdf/2512.02492

==================================

For more data science resources:
βœ“ https://t.iss.one/DataScienceT

#VideoGeneration #MusicAI #GenerativeAI #DiffusionModels #ComputerVision