✨SpaceTimePilot: Generative Rendering of Dynamic Scenes Across Space and Time
📝 Summary:
SpaceTimePilot is a video diffusion model for dynamic scene rendering, offering independent control over spatial viewpoint and temporal motion. It achieves precise space-time disentanglement via a time embedding, a temporal-warping training scheme, and a synthetic dataset.
🔹 Publication Date: Published on Dec 31, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.25075
• PDF: https://arxiv.org/pdf/2512.25075
• Project Page: https://zheninghuang.github.io/Space-Time-Pilot/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#VideoDiffusion #GenerativeAI #DynamicScenes #ComputerGraphics #DeepLearning
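The summary mentions conditioning on time via a time embedding. The paper's exact design isn't given here, but a common way to feed a scalar time into a diffusion model is a sinusoidal embedding; a minimal sketch (the dimension and frequency schedule are illustrative assumptions, not the paper's choices):

```python
import math
import numpy as np

def time_embedding(t: float, dim: int = 8) -> np.ndarray:
    """Sinusoidal embedding of a scalar time value, a standard way to
    condition a diffusion model on a timestep or temporal position."""
    half = dim // 2
    # Geometrically spaced frequencies from 1 down to 1/10000.
    freqs = np.exp(-math.log(10000.0) * np.arange(half) / half)
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

emb = time_embedding(0.5, dim=8)
print(emb.shape)  # (8,)
```

Such a vector would typically be projected and added to the network's hidden activations, separately from any camera-pose conditioning, which is what enables independent space and time control.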
✨MorphAny3D: Unleashing the Power of Structured Latent in 3D Morphing
📝 Summary:
MorphAny3D offers a training-free framework for high-quality 3D morphing, even across categories. It leverages Structured Latent representations with novel attention mechanisms (MCA and TFSA) for structural coherence and temporal consistency. This achieves state-of-the-art results and supports advance...
🔹 Publication Date: Published on Jan 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.00204
• PDF: https://arxiv.org/pdf/2601.00204
• Project Page: https://xiaokunsun.github.io/MorphAny3D.github.io
• Github: https://github.com/XiaokunSun/MorphAny3D
==================================
#3DMorphing #ComputerGraphics #DeepLearning #StructuredLatent #AIResearch
✨Muses: Designing, Composing, Generating Nonexistent Fantasy 3D Creatures without Training
📝 Summary:
Muses is a training-free method for generating fantasy 3D creatures. It leverages 3D skeletal structures and graph-constrained reasoning to coherently design, compose, and assemble diverse elements. This approach achieves state-of-the-art visual fidelity and alignment with text descriptions.
🔹 Publication Date: Published on Jan 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.03256
• PDF: https://arxiv.org/pdf/2601.03256
• Github: https://github.com/luhexiao/Muses
==================================
#3DGeneration #GenerativeAI #ComputerGraphics #AIArt #TrainingFreeAI
✨ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion
📝 Summary:
ActionMesh extends 3D diffusion models with a temporal axis to generate high-quality, rig-free animated 3D meshes. This 'temporal 3D diffusion' framework quickly creates topology-consistent animations from various inputs like video or text, achieving state-of-the-art results.
🔹 Publication Date: Published on Jan 22
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.16148
• PDF: https://remysabathier.github.io/actionmesh/actionmesh_2026.pdf
• Project Page: https://remysabathier.github.io/actionmesh/
• Github: https://github.com/facebookresearch/actionmesh
🔹 Models citing this paper:
• https://huggingface.co/facebook/ActionMesh
✨ Spaces citing this paper:
• https://huggingface.co/spaces/facebook/ActionMesh
==================================
#3DAnimation #DiffusionModels #ComputerGraphics #DeepLearning #3DModeling
✨Interp3D: Correspondence-aware Interpolation for Generative Textured 3D Morphing
📝 Summary:
Interp3D is a training-free framework for textured 3D morphing. It addresses the structural misalignment and texture blurring of existing methods by enforcing geometric consistency and texture alignment through generative priors and progressive alignment, outperforming prior approaches.
🔹 Publication Date: Published on Jan 20
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.14103
• PDF: https://arxiv.org/pdf/2601.14103
• Project Page: https://interp3d.github.io/
• Github: https://github.com/xiaolul2/Interp3D
==================================
#3DMorphing #GenerativeAI #ComputerGraphics #DeepLearning #AIResearch
✨PLANING: A Loosely Coupled Triangle-Gaussian Framework for Streaming 3D Reconstruction
📝 Summary:
PLANING is an efficient streaming 3D reconstruction framework. It combines explicit geometric primitives and neural Gaussians with decoupled optimization, achieving both high-quality rendering and accurate geometry. It outperforms prior methods in quality and speed.
🔹 Publication Date: Published on Jan 29
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.22046
• PDF: https://arxiv.org/pdf/2601.22046
==================================
#3DReconstruction #ComputerVision #NeuralNetworks #StreamingTech #ComputerGraphics
✨Implicit neural representation of textures
📝 Summary:
This work designs new texture implicit neural representations that operate continuously over UV coordinate space. Experiments show they achieve good image quality while balancing memory and rendering time, useful for real-time rendering and downstream tasks.
🔹 Publication Date: Published on Feb 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02354
• PDF: https://arxiv.org/pdf/2602.02354
• Project Page: https://peterhuistyping.github.io/INR-Tex/
• Github: https://github.com/PeterHUistyping/INR-Tex
==================================
#ImplicitNeuralRepresentations #ComputerGraphics #DeepLearning #TextureModeling #RealTimeRendering
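As a rough illustration of what a texture INR looks like, here is a minimal sketch: a small MLP with Fourier-feature encoding that maps continuous UV coordinates to RGB. The architecture, feature counts, and random (untrained) weights are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(uv: np.ndarray, n_freqs: int = 4) -> np.ndarray:
    """Encode continuous UV coordinates with sin/cos features so a small
    MLP can represent high-frequency texture detail."""
    freqs = (2.0 ** np.arange(n_freqs)) * np.pi
    angles = uv[..., None] * freqs                       # (..., 2, n_freqs)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(*uv.shape[:-1], -1)             # (..., 2 * 2 * n_freqs)

# Two-layer MLP with random weights, just to show the shapes involved.
in_dim = 2 * 2 * 4
W1 = rng.normal(size=(in_dim, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 3));      b2 = np.zeros(3)

def texture(uv: np.ndarray) -> np.ndarray:
    """Query the texture at continuous UV coordinates; returns RGB in (0, 1)."""
    h = np.tanh(fourier_features(uv) @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # sigmoid to (0, 1)

uv = rng.uniform(size=(5, 2))   # five continuous UV samples
print(texture(uv).shape)        # (5, 3)
```

The key trade-off the paper studies is exactly this one: network size and encoding control memory footprint and per-query rendering cost, while the continuous parameterization avoids fixed-resolution texture maps.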
✨FlowScene: Style-Consistent Indoor Scene Generation with Multimodal Graph Rectified Flow
📝 Summary:
FlowScene is a generative model that uses multimodal graph conditioning and rectified flow to create realistic, style-consistent indoor scenes. It offers fine-grained control over object shapes, textures, and relations, surpassing prior methods.
🔹 Publication Date: Published on Mar 20
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.19598
• PDF: https://arxiv.org/pdf/2603.19598
==================================
#GenerativeAI #3DSceneGeneration #MultimodalAI #DeepLearning #ComputerGraphics
✨F4Splat: Feed-Forward Predictive Densification for Feed-Forward 3D Gaussian Splatting
📝 Summary:
F4Splat introduces predictive densification for 3D Gaussian splatting, adaptively allocating Gaussians based on spatial complexity and view overlap. This reduces redundant Gaussians, leading to compact, high-quality 3D representations with significantly fewer Gaussians than prior feed-forward methods.
🔹 Publication Date: Published on Mar 22
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.21304
• PDF: https://arxiv.org/pdf/2603.21304
• Project Page: https://mlvlab.github.io/F4Splat/
• Github: https://github.com/mlvlab/F4Splat
==================================
#3DGaussianSplatting #ComputerGraphics #3DReconstruction #MachineLearning #NeuralRendering
✨WorldFlow3D: Flowing Through 3D Distributions for Unbounded World Generation
📝 Summary:
WorldFlow3D generates unbounded 3D worlds by modeling 3D data distributions as a flow matching problem. This latent-free approach achieves rapid convergence and high-quality generation with controllable geometric and texture properties. It outperforms existing methods on both real and synthetic s...
🔹 Publication Date: Published on Mar 31
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.29089
• PDF: https://arxiv.org/pdf/2603.29089
• Project Page: https://princeton-computational-imaging.github.io/WorldFlow3D/
==================================
#3DGeneration #GenerativeAI #FlowMatching #ComputerGraphics #AIResearch
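WorldFlow3D casts generation as flow matching. Independent of the paper's specifics, the core flow-matching (rectified-flow) recipe pairs a straight-line interpolant between noise and data with a constant velocity target; a minimal sketch, where the shapes and toy data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_pair(x0: np.ndarray, x1: np.ndarray, t: np.ndarray):
    """Straight-line interpolant between noise x0 and data x1, plus the
    constant velocity field x1 - x0 that a flow-matching model regresses."""
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    v_target = x1 - x0
    return xt, v_target

x0 = rng.normal(size=(4, 3))   # noise samples
x1 = rng.normal(size=(4, 3))   # stand-in for samples from the 3D data distribution
t = rng.uniform(size=4)        # random times in [0, 1]

xt, v = flow_matching_pair(x0, x1, t)
# Training would minimize mean((v_theta(xt, t) - v) ** 2) over such pairs;
# sampling then integrates the learned velocity field from t=0 to t=1.
print(xt.shape, v.shape)       # (4, 3) (4, 3)
```

Operating directly on 3D data ("latent-free", per the summary) means the flow is learned in the 3D representation space itself rather than in a compressed autoencoder latent.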
✨Hunyuan3D 2.1: From Images to High-Fidelity 3D Assets with Production-Ready PBR Material
📝 Summary:
This tutorial introduces Hunyuan3D 2.1, a system for generating high-fidelity, textured 3D assets to make AI content creation more accessible. It details the full workflow from data preparation to deployment, using Hunyuan3D-DiT for shape and Hunyuan3D-Paint for texture synthesis.
🔹 Publication Date: Published on Jun 18, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2506.15442
• PDF: https://arxiv.org/pdf/2506.15442
• Github: https://github.com/huggingface/huggingface.js
🔹 Models citing this paper:
• https://huggingface.co/tencent/Hunyuan3D-2.1
• https://huggingface.co/tencent/Hunyuan3D-Omni
• https://huggingface.co/tencent/HY3D-Bench
✨ Datasets citing this paper:
• https://huggingface.co/datasets/tencent/HY3D-Bench
✨ Spaces citing this paper:
• https://huggingface.co/spaces/duranponce/ai-default
• https://huggingface.co/spaces/AliothTalks/Hunyuan3D-2.1
• https://huggingface.co/spaces/joaojack/Hunyuan3D-2.1
==================================
#3DGeneration #AI #ComputerGraphics #ImageTo3D #PBRMaterials