ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
BulletTime: Decoupled Control of Time and Camera Pose for Video Generation

📝 Summary:
This paper presents a video diffusion framework that decouples scene dynamics from camera pose. This enables precise 4D control over time and viewpoint for high-quality video generation, outperforming prior models in controllability.
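
A minimal sketch (not the authors' code) of the decoupling idea described above: scene time and camera pose are embedded by two independent branches, so either signal can be varied while the other is held fixed. All module names, dimensions, and input formats here are illustrative assumptions.

```python
# Illustrative sketch only -- not the BulletTime implementation.
# Two independent conditioning branches: one for scene time ("when"),
# one for camera pose ("from where"), enabling 4D control.
import torch
import torch.nn as nn

class DecoupledConditioner(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.pose_mlp = nn.Sequential(nn.Linear(12, dim), nn.SiLU(), nn.Linear(dim, dim))  # 3x4 extrinsics, flattened

    def forward(self, scene_time, cam_pose):
        # scene_time: (B, F, 1) normalized timestamps; cam_pose: (B, F, 12) per-frame extrinsics
        t_tok = self.time_mlp(scene_time)   # controls scene dynamics
        p_tok = self.pose_mlp(cam_pose)     # controls viewpoint
        return t_tok, p_tok                 # injected separately into the denoiser

cond = DecoupledConditioner()
t_tok, p_tok = cond(torch.rand(2, 8, 1), torch.rand(2, 8, 12))
# Freezing scene_time while sweeping cam_pose would give a "bullet time" orbit of a frozen moment.
print(t_tok.shape, p_tok.shape)
```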

🔹 Publication Date: Published on Dec 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05076
• PDF: https://arxiv.org/pdf/2512.05076

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #DiffusionModels #GenerativeAI #ComputerVision #AICameraControl
EgoLCD: Egocentric Video Generation with Long Context Diffusion

📝 Summary:
EgoLCD addresses content drift in long egocentric video generation by integrating sparse long-term memory and attention-based short-term memory with narrative prompting. It achieves state-of-the-art perceptual quality and temporal consistency, mitigating generative forgetting.
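
A minimal sketch (not the EgoLCD code) of the general memory recipe the summary describes: keep a sparse subsample of distant frame features as long-term memory, keep a dense recent window as short-term memory, and let the current chunk attend over both. Window sizes, strides, and dimensions are assumptions.

```python
# Illustrative sketch only -- not the EgoLCD implementation.
import torch
import torch.nn.functional as F

def build_memory(frame_feats, short_window=8, long_stride=16):
    """frame_feats: (T, D) features of all frames generated so far."""
    long_term = frame_feats[:-short_window:long_stride]    # sparse, distant context
    short_term = frame_feats[-short_window:]                # dense, recent context
    return torch.cat([long_term, short_term], dim=0)

def attend(query, memory):
    # Plain scaled dot-product attention of the current chunk over the memory.
    attn = F.softmax(query @ memory.T / memory.shape[-1] ** 0.5, dim=-1)
    return attn @ memory

history = torch.randn(200, 64)        # stand-in for features of 200 generated frames
mem = build_memory(history)           # far fewer tokens than 200, yet spans the whole video
out = attend(torch.randn(4, 64), mem) # context-aware features for the next chunk
print(mem.shape, out.shape)
```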

🔹 Publication Date: Published on Dec 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04515
• PDF: https://arxiv.org/pdf/2512.04515
• Project Page: https://aigeeksgroup.github.io/EgoLCD/
• Github: https://github.com/AIGeeksGroup/EgoLCD

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#AI #VideoGeneration #DiffusionModels #ComputerVision #EgocentricVision
Generative Action Tell-Tales: Assessing Human Motion in Synthesized Videos

📝 Summary:
A new metric evaluates human action in generated videos by using a learned latent space of real-world actions, fusing skeletal geometry and appearance features. It significantly improves temporal and visual correctness assessment, outperforming existing methods and correlating better with human p...
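
A minimal sketch (not the paper's metric) of the fusion-and-comparison idea: encode skeletal and appearance features, fuse them into one embedding, and score a generated clip by its similarity to a bank of embeddings from real actions. The feature dimensions and the nearest-neighbour scoring rule are illustrative assumptions.

```python
# Illustrative sketch only -- not the proposed metric.
import torch
import torch.nn.functional as F

def fuse(skeleton_feats, appearance_feats):
    # skeleton_feats: (T, Ds) pose-derived features; appearance_feats: (T, Da) visual features
    pooled = torch.cat([skeleton_feats.mean(0), appearance_feats.mean(0)])
    return F.normalize(pooled, dim=0)

def action_realism_score(gen_embedding, real_bank):
    # Higher similarity to the nearest real-action embedding = more plausible motion.
    return (real_bank @ gen_embedding).max().item()

real_bank = F.normalize(torch.randn(1000, 96), dim=1)   # stand-in for a learned latent space of real actions
gen = fuse(torch.randn(16, 32), torch.randn(16, 64))    # 32 + 64 = 96-dim fused embedding
print(action_realism_score(gen, real_bank))
```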

🔹 Publication Date: Published on Dec 1

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.01803
• PDF: https://arxiv.org/pdf/2512.01803
• Project Page: https://xthomasbu.github.io/video-gen-evals/

Datasets citing this paper:
https://huggingface.co/datasets/dghadiya/TAG-Bench-Video

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #HumanMotion #ComputerVision #AIMetrics #DeepLearning
Deep Forcing: Training-Free Long Video Generation with Deep Sink and Participative Compression

📝 Summary:
Deep Forcing is a training-free method that enhances real-time video diffusion for high-quality, long-duration generation. It uses Deep Sink for stable context and Participative Compression for efficient KV cache pruning, achieving over 12x extrapolation and improved consistency.
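
A minimal sketch (not the Deep Forcing code) of the flavour of sink-aware cache pruning: always retain the earliest "sink" entries for stability, and keep only the most-attended of the remaining entries so the KV cache stays bounded during long rollouts. The scoring rule and budget sizes are illustrative stand-ins for the paper's Deep Sink and Participative Compression.

```python
# Illustrative sketch only -- not the Deep Forcing implementation.
import torch

def prune_kv_cache(keys, values, attn_mass, n_sink=4, budget=64):
    """keys/values: (T, D); attn_mass: (T,) accumulated attention each entry received."""
    if keys.shape[0] <= budget:
        return keys, values, attn_mass
    sink = torch.arange(n_sink)                                   # always retained
    topk = attn_mass[n_sink:].topk(budget - n_sink).indices + n_sink  # most "participative" entries
    keep = torch.cat([sink, topk.sort().values])
    return keys[keep], values[keep], attn_mass[keep]

k, v = torch.randn(200, 64), torch.randn(200, 64)
k2, v2, m2 = prune_kv_cache(k, v, torch.rand(200))
print(k2.shape)  # cache stays bounded no matter how long generation runs
```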

🔹 Publication Date: Published on Dec 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05081
• PDF: https://arxiv.org/pdf/2512.05081
• Project Page: https://cvlab-kaist.github.io/DeepForcing/

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #DiffusionModels #TrainingFreeAI #DeepLearning #ComputerVision
Light-X: Generative 4D Video Rendering with Camera and Illumination Control

📝 Summary:
Light-X is a video generation framework for controllable rendering from monocular videos with joint viewpoint and illumination control. It disentangles geometry and lighting using synthetic data for robust training, outperforming prior methods in both aspects.
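
A minimal sketch (not Light-X itself) of joint but disentangled conditioning: separate camera and lighting codes are embedded independently and each is randomly dropped during training, classifier-free-guidance style, so viewpoint and illumination remain independently controllable. Input formats (flattened extrinsics, spherical-harmonic lighting) and dimensions are assumptions.

```python
# Illustrative sketch only -- not the Light-X implementation.
import torch
import torch.nn as nn

class CamLightConditioner(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.cam = nn.Linear(12, dim)    # flattened 3x4 camera extrinsics
        self.light = nn.Linear(9, dim)   # e.g. low-order spherical-harmonic lighting coefficients

    def forward(self, cam_pose, light_code, p_drop=0.1):
        c, l = self.cam(cam_pose), self.light(light_code)
        if self.training:  # drop each condition independently so neither entangles the other
            if torch.rand(()) < p_drop:
                c = torch.zeros_like(c)
            if torch.rand(()) < p_drop:
                l = torch.zeros_like(l)
        return c + l  # combined conditioning signal fed to the video backbone

cond = CamLightConditioner().train()
print(cond(torch.rand(2, 12), torch.rand(2, 9)).shape)
```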

🔹 Publication Date: Published on Dec 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05115
• PDF: https://arxiv.org/pdf/2512.05115
• Project Page: https://lightx-ai.github.io/
• Github: https://github.com/TQTQliu/Light-X

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #ComputerVision #AI #NeuralRendering #GenerativeAI
ProPhy: Progressive Physical Alignment for Dynamic World Simulation

📝 Summary:
ProPhy is a two-stage framework that enhances video generation by explicitly incorporating physics-aware conditioning and anisotropic generation. It uses a Mixture-of-Physics-Experts mechanism to extract fine-grained physical priors, improving physical consistency and realism in dynamic world simulation.
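
A minimal sketch (not ProPhy) of a mixture-of-experts layer in the spirit of the Mixture-of-Physics-Experts mechanism: a gating network weights several small expert MLPs, and the weighted mixture serves as a physics-aware conditioning signal. The experts, gating rule, and dimensions are illustrative assumptions.

```python
# Illustrative sketch only -- not the ProPhy implementation.
import torch
import torch.nn as nn

class MixtureOfPhysicsExperts(nn.Module):
    def __init__(self, dim=128, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
            for _ in range(n_experts)
        ])

    def forward(self, x):
        # x: (B, dim) scene features; returns a physics prior of the same shape.
        weights = self.gate(x).softmax(dim=-1)                   # (B, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, n_experts, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)

moe = MixtureOfPhysicsExperts()
print(moe(torch.randn(2, 128)).shape)
```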

🔹 Publication Date: Published on Dec 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05564
• PDF: https://arxiv.org/pdf/2512.05564

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #PhysicsAI #DynamicSimulation #DeepLearning #ComputerVision
UnityVideo: Unified Multi-Modal Multi-Task Learning for Enhancing World-Aware Video Generation

📝 Summary:
UnityVideo is a unified framework enhancing video generation by integrating multiple modalities and training paradigms. It uses dynamic noising and a modality switcher for comprehensive world understanding. This improves video quality, consistency, and zero-shot generalization to new data.
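
A minimal sketch (not UnityVideo) of the two named ingredients: a modality switcher that tags tokens with a learned modality embedding, and per-modality noise levels standing in for dynamic noising, before a shared backbone. The modality set, noising rule, and dimensions are assumptions.

```python
# Illustrative sketch only -- not the UnityVideo implementation.
import torch
import torch.nn as nn

class ModalitySwitcher(nn.Module):
    MODALITIES = ["rgb", "depth", "pose"]  # assumed set, for illustration only

    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(len(self.MODALITIES), dim)

    def forward(self, tokens, modality, noise_level):
        # tokens: (B, N, dim); noise_level in [0, 1], chosen per modality / task
        noisy = (1 - noise_level) * tokens + noise_level * torch.randn_like(tokens)
        tag = self.embed(torch.tensor(self.MODALITIES.index(modality)))
        return noisy + tag  # the shared backbone sees which modality each stream carries

sw = ModalitySwitcher()
rgb = sw(torch.randn(2, 16, 64), "rgb", noise_level=0.7)
depth = sw(torch.randn(2, 16, 64), "depth", noise_level=0.2)
print(rgb.shape, depth.shape)
```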

🔹 Publication Date: Published on Dec 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.07831
• PDF: https://arxiv.org/pdf/2512.07831
• Project Page: https://jackailab.github.io/Projects/UnityVideo/
• Github: https://github.com/dvlab-research/UnityVideo

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #MultimodalAI #GenerativeAI #DeepLearning #AIResearch
MIND-V: Hierarchical Video Generation for Long-Horizon Robotic Manipulation with RL-based Physical Alignment

📝 Summary:
MIND-V generates long-horizon, physically plausible robotic manipulation videos. This hierarchical framework uses semantic reasoning and an RL-based physical alignment strategy to synthesize robust, coherent actions, addressing data scarcity.
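
A minimal sketch (not MIND-V) of the general shape of RL-based physical alignment: sample candidate clips, score them with a physics-plausibility reward, and upweight the generator toward higher-reward samples, REINFORCE-style. The reward function here is a hypothetical stand-in.

```python
# Illustrative sketch only -- not the MIND-V implementation.
import torch

def physics_reward(clip):
    # Hypothetical stand-in: a real system would check contacts, penetration, object permanence, etc.
    return -clip.var().item()

def alignment_step(candidates, log_probs):
    # candidates: list of (T, D) clips; log_probs: (N,) generator log-likelihoods of each sample
    rewards = torch.tensor([physics_reward(c) for c in candidates])
    advantages = rewards - rewards.mean()            # baseline-subtracted
    return -(advantages * log_probs).mean()          # loss to backprop through the generator

candidates = [torch.randn(16, 32) for _ in range(4)]
log_probs = torch.randn(4, requires_grad=True)
loss = alignment_step(candidates, log_probs)
loss.backward()
print(loss.item())
```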

🔹 Publication Date: Published on Dec 7

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.06628
• PDF: https://arxiv.org/pdf/2512.06628
• Github: https://github.com/Richard-Zhang-AI/MIND-V

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#Robotics #VideoGeneration #ReinforcementLearning #AI #MachineLearning
OneStory: Coherent Multi-Shot Video Generation with Adaptive Memory

📝 Summary:
OneStory generates coherent multi-shot videos by modeling global cross-shot context. It uses a Frame Selection module and an Adaptive Conditioner for next-shot generation, leveraging pretrained models and a new dataset. This achieves state-of-the-art narrative coherence for long-form video storytelling.
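
A minimal sketch (not OneStory) of the frame-selection-plus-conditioning idea: pick the most relevant frames from earlier shots (here, by similarity to the next shot's prompt embedding) and fuse them with the prompt as conditioning for the next shot. The selection rule and fusion are illustrative assumptions.

```python
# Illustrative sketch only -- not the OneStory implementation.
import torch
import torch.nn.functional as F

def select_frames(past_frame_feats, next_prompt_emb, k=4):
    # past_frame_feats: (T, D) features of frames from earlier shots; next_prompt_emb: (D,)
    sims = F.normalize(past_frame_feats, dim=1) @ F.normalize(next_prompt_emb, dim=0)
    return past_frame_feats[sims.topk(k).indices]     # (k, D) cross-shot memory

def adaptive_condition(selected, next_prompt_emb):
    # Simple stand-in for an adaptive conditioner: fuse selected frames with the prompt.
    return torch.cat([selected, next_prompt_emb.unsqueeze(0)], dim=0)

past = torch.randn(120, 256)      # frames from shots generated so far
prompt = torch.randn(256)         # embedding of the next shot's description
cond = adaptive_condition(select_frames(past, prompt), prompt)
print(cond.shape)                 # conditioning tokens for generating the next shot
```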

🔹 Publication Date: Published on Dec 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.07802
• PDF: https://arxiv.org/pdf/2512.07802
• Project Page: https://zhaochongan.github.io/projects/OneStory/

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #AI #DeepLearning #ComputerVision #GenerativeAI
VideoSSM: Autoregressive Long Video Generation with Hybrid State-Space Memory

📝 Summary:
VideoSSM proposes a hybrid state-space memory model for long video generation. It unifies autoregressive diffusion with global state-space memory and local context to achieve state-of-the-art temporal consistency and motion stability. This enables scalable, interactive minute-scale video synthesis.
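
A minimal sketch (not VideoSSM) of the hybrid recipe the summary describes: a compact recurrent state carried across chunks stands in for the global state-space memory, while each new chunk also attends over a short local window. The recurrence, window size, and dimensions are illustrative assumptions.

```python
# Illustrative sketch only -- not the VideoSSM implementation.
import torch
import torch.nn.functional as F

def ssm_update(state, chunk, decay=0.9):
    # Very simplified linear state-space recurrence: an exponential-decay summary of the past.
    for frame in chunk:
        state = decay * state + (1 - decay) * frame
    return state

def generate_next_chunk(state, local_window, chunk_len=4, dim=64):
    query = torch.randn(chunk_len, dim)                      # stand-in for denoiser queries
    attn = F.softmax(query @ local_window.T / dim ** 0.5, dim=-1)
    return attn @ local_window + state                       # local context + global memory

dim, state, video = 64, torch.zeros(64), []
for _ in range(10):                                          # long generation = many chunks
    local = torch.stack(video[-8:]) if video else torch.zeros(1, dim)
    chunk = generate_next_chunk(state, local, dim=dim)
    video.extend(chunk)
    state = ssm_update(state, chunk)
print(len(video), state.shape)
```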

🔹 Publication Date: Published on Dec 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04519
• PDF: https://arxiv.org/pdf/2512.04519

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#VideoGeneration #GenerativeAI #DiffusionModels #StateSpaceModels #DeepLearning