✨Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting
📝 Summary:
Dolphin is a novel multimodal model for document image parsing. It uses an analyze-then-parse approach with heterogeneous anchor prompting, achieving state-of-the-art performance and superior efficiency.
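For intuition, here is a minimal Python sketch of the analyze-then-parse flow described above; the element types, prompt wording, and model call are illustrative assumptions, not Dolphin's actual interface.
```python
# Hedged sketch of a two-stage "analyze-then-parse" pipeline in the spirit of
# Dolphin. The model call is stubbed out; prompts and element types are
# illustrative assumptions, not Dolphin's actual prompts.

# Stage 2 uses a different ("heterogeneous") anchor prompt per element type.
ANCHOR_PROMPTS = {
    "text":    "Read the text in this region.",
    "table":   "Parse this table into HTML.",
    "formula": "Convert this formula to LaTeX.",
}

def vlm(image, prompt):
    """Stand-in for a vision-language model call (hypothetical)."""
    return {"prompt": prompt, "output": "..."}

def crop(image, box):
    return image  # placeholder: return the region defined by `box`

def analyze(page_image):
    """Stage 1: layout analysis -> reading-ordered element list (stubbed)."""
    # A real model would predict boxes and types; these are dummy values.
    return [
        {"type": "text",  "box": (0, 0, 100, 30)},
        {"type": "table", "box": (0, 40, 100, 90)},
    ]

def parse(page_image, elements):
    """Stage 2: parse each element with its type-specific anchor prompt."""
    results = []
    for el in elements:  # elements are independent, so this can run in parallel
        prompt = ANCHOR_PROMPTS.get(el["type"], ANCHOR_PROMPTS["text"])
        results.append(vlm(crop(page_image, el["box"]), prompt))
    return results

if __name__ == "__main__":
    page = "page.png"  # placeholder for an image
    print(parse(page, analyze(page)))
```
The "heterogeneous" part is the per-element-type prompt table: text, tables, and formulas each get their own parsing anchor.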
🔹 Publication Date: Published on May 20, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2505.14059
• PDF: https://arxiv.org/pdf/2505.14059
• Github: https://github.com/bytedance/dolphin
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#DocumentParsing #MultimodalAI #DeepLearning #ComputerVision #AI
✨SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning
📝 Summary:
SenseNova-MARS empowers Vision-Language Models with interleaved visual reasoning and dynamic tool use like search and cropping via reinforcement learning. It achieves state-of-the-art performance on complex visual tasks, outperforming proprietary models on new and existing benchmarks.
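A toy rollout loop illustrating the agentic pattern the summary describes (reason, call a tool, observe, repeat); the policy, tools, and reward below are stand-ins, not SenseNova-MARS code.
```python
# Hedged sketch of an agentic rollout of the kind trained with RL: the policy
# interleaves reasoning with tool calls (search, crop) until it answers.
import random

def policy(history):
    """Stand-in for the VLM policy; returns an action dict (hypothetical)."""
    return random.choice([
        {"tool": "search", "arg": "query"},
        {"tool": "crop",   "arg": (10, 10, 50, 50)},
        {"answer": "final answer"},
    ])

TOOLS = {
    "search": lambda q: f"search results for {q!r}",
    "crop":   lambda box: f"cropped region {box}",
}

def rollout(question, max_steps=8):
    history = [question]
    for _ in range(max_steps):
        action = policy(history)
        if "answer" in action:
            return history, action["answer"]
        # The tool observation is appended and conditions the next step.
        history.append(TOOLS[action["tool"]](action["arg"]))
    return history, None

def reward(answer, gold):
    # Outcome reward driving the RL update (e.g. a policy-gradient method).
    return 1.0 if answer == gold else 0.0

history, ans = rollout("What is in the image?")
print(len(history), ans, reward(ans, "gold"))
```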
🔹 Publication Date: Published on Dec 30, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.24330
• PDF: https://arxiv.org/pdf/2512.24330
• Github: https://github.com/OpenSenseNova/SenseNova-MARS
✨ Datasets citing this paper:
• https://huggingface.co/datasets/sensenova/SenseNova-MARS-Data
• https://huggingface.co/datasets/sensenova/HR-MMSearch
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#MultimodalAI #ReinforcementLearning #VisionLanguageModels #AgenticAI #ComputerVision
✨Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation
📝 Summary:
Avatar Forcing creates real-time interactive talking-head avatars. It uses diffusion forcing for low-latency reactions to user input, and label-free preference optimization that steers toward expressive, human-preferred motion, achieving a 6.8x speedup.
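A rough sketch of the diffusion-forcing scheduling idea, under the assumption of per-frame noise levels in a rolling window; the denoiser and shapes below are placeholders, not the paper's model.
```python
# Hedged sketch of diffusion forcing: each frame in a rolling window carries
# its own noise level, with old frames nearly clean and new frames noisier,
# so frames can be emitted with low latency while later ones still denoise.
import torch

def per_frame_noise_levels(window: int) -> torch.Tensor:
    # Monotone schedule: frame 0 is almost clean, the newest frame is pure noise.
    return torch.linspace(0.0, 1.0, window)

def denoise_step(frames: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    """Stand-in for one denoiser call conditioned on per-frame noise levels."""
    # Toy update: shrink each frame toward zero proportionally to its level.
    return frames * (1.0 - 0.1 * levels.view(-1, 1))

window, feat = 8, 16
frames = torch.randn(window, feat)           # latent frames, newest last
levels = per_frame_noise_levels(window)

frames = denoise_step(frames, levels)
# The oldest (lowest-noise) frame is emitted immediately; the window then
# shifts by one and a fresh pure-noise frame is appended at the end.
emitted, frames = frames[0], torch.cat([frames[1:], torch.randn(1, feat)])
print(emitted.shape, frames.shape)
```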
🔹 Publication Date: Published on Jan 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.00664
• PDF: https://arxiv.org/pdf/2601.00664
• Project Page: https://taekyungki.github.io/AvatarForcing/
• Github: https://github.com/TaekyungKi/AvatarForcing
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AvatarGeneration #RealTimeAI #GenerativeAI #ComputerVision #AIResearch
✨NeoVerse: Enhancing 4D World Model with in-the-wild Monocular Videos
📝 Summary:
NeoVerse is a 4D world model for reconstruction and video generation. It scales to in-the-wild monocular videos using pose-free feed-forward reconstruction and online degradation simulation, achieving state-of-the-art performance.
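An illustrative take on the online degradation simulation component: clean training frames are randomly degraded on the fly so the model generalizes to in-the-wild footage. The specific degradations below are assumptions, not NeoVerse's exact pipeline.
```python
# Hedged sketch of online degradation simulation: randomly degrade clean
# frames (noise, blur, downsampling) during training so a reconstruction
# model trained on curated data transfers to in-the-wild monocular video.
import torch
import torch.nn.functional as F

def degrade(frame: torch.Tensor) -> torch.Tensor:
    """frame: (C, H, W) in [0, 1]; apply a random chain of degradations."""
    if torch.rand(()) < 0.5:                       # sensor noise
        frame = frame + 0.05 * torch.randn_like(frame)
    if torch.rand(()) < 0.5:                       # defocus-style blur
        frame = F.avg_pool2d(frame.unsqueeze(0), 3, stride=1, padding=1)[0]
    if torch.rand(()) < 0.5:                       # resolution loss
        c, h, w = frame.shape
        small = F.interpolate(frame.unsqueeze(0), scale_factor=0.5, mode="bilinear")
        frame = F.interpolate(small, size=(h, w), mode="bilinear")[0]
    return frame.clamp(0.0, 1.0)

clean = torch.rand(3, 64, 64)
print(degrade(clean).shape)  # training pairs: (degraded input, clean target)
```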
🔹 Publication Date: Published on Jan 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.00393
• PDF: https://arxiv.org/pdf/2601.00393
• Project Page: https://neoverse-4d.github.io/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#4DWorldModel #VideoGeneration #ComputerVision #DeepLearning #AI
✨AdaGaR: Adaptive Gabor Representation for Dynamic Scene Reconstruction
📝 Summary:
AdaGaR reconstructs dynamic 3D scenes from monocular video. It introduces an Adaptive Gabor Representation for detail and stability, and Cubic Hermite Splines for temporal continuity. This method achieves state-of-the-art performance.
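The temporal-continuity piece is a standard construction; below is a minimal cubic Hermite spline evaluator (the keyframe values and tangents are made up for illustration, and AdaGaR's actual parameterization may differ).
```python
# Cubic Hermite interpolation on [0, 1]: each primitive attribute is stored
# at keyframes as (value, tangent) pairs and queried at arbitrary t with the
# standard Hermite basis, giving C1-continuous trajectories over time.
import numpy as np

def hermite(p0, m0, p1, m1, t):
    """Interpolate between endpoint values p0, p1 with tangents m0, m1."""
    t2, t3 = t * t, t * t * t
    h00 = 2 * t3 - 3 * t2 + 1
    h10 = t3 - 2 * t2 + t
    h01 = -2 * t3 + 3 * t2
    h11 = t3 - t2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Example: a primitive's x-position between two keyframes (made-up values).
p0, m0, p1, m1 = 0.0, 1.0, 2.0, 0.0
for t in np.linspace(0, 1, 5):
    print(round(float(hermite(p0, m0, p1, m1, t)), 3))
```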
🔹 Publication Date: Published on Jan 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.00796
• PDF: https://arxiv.org/pdf/2601.00796
• Project Page: https://jiewenchan.github.io/AdaGaR/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#3DReconstruction #ComputerVision #DynamicScenes #MonocularVideo #GaborRepresentation
✨OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions
📝 Summary:
OmniVCus introduces a system for feedforward multi-subject video customization with multimodal controls. It proposes a data pipeline, VideoCus-Factory, and a diffusion Transformer framework with novel embedding mechanisms. This enables more subjects and precise editing, significantly outperforming...
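The embedding mechanisms are not detailed in the summary; one plausible reading, sketched below purely as an assumption, is tagging each reference subject's tokens with a learned subject-index embedding before they enter the diffusion Transformer.
```python
# Hedged sketch of one possible multi-subject embedding mechanism: tokens
# from each reference subject get a learned per-subject index embedding,
# then all condition tokens are concatenated for the diffusion Transformer.
# Dimensions and the mixing scheme are illustrative assumptions.
import torch
import torch.nn as nn

class SubjectTagger(nn.Module):
    def __init__(self, dim: int, max_subjects: int = 8):
        super().__init__()
        self.subject_embed = nn.Embedding(max_subjects, dim)

    def forward(self, subject_tokens: list[torch.Tensor]) -> torch.Tensor:
        """subject_tokens: list of (n_i, dim) token sets, one per subject."""
        tagged = [
            toks + self.subject_embed.weight[i]       # broadcast over tokens
            for i, toks in enumerate(subject_tokens)
        ]
        return torch.cat(tagged, dim=0)               # (sum n_i, dim)

dim = 32
tagger = SubjectTagger(dim)
subjects = [torch.randn(5, dim), torch.randn(7, dim)]  # two reference subjects
cond = tagger(subjects)
print(cond.shape)  # (12, 32) -> fed to the DiT alongside video tokens
```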
🔹 Publication Date: Published on Jun 29, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2506.23361
• PDF: https://arxiv.org/pdf/2506.23361
• Project Page: https://caiyuanhao1998.github.io/project/OmniVCus/
• Github: https://github.com/caiyuanhao1998/Open-OmniVCus
🔹 Models citing this paper:
• https://huggingface.co/CaiYuanhao/OmniVCus
✨ Datasets citing this paper:
• https://huggingface.co/datasets/CaiYuanhao/OmniVCus
• https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Test
• https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Train
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#VideoGeneration #DiffusionModels #MultimodalAI #DeepLearning #ComputerVision
✨DreamID-V: Bridging the Image-to-Video Gap for High-Fidelity Face Swapping via Diffusion Transformer
📝 Summary:
DreamID-V is a novel video face swapping framework that uses diffusion transformers and curriculum learning. It achieves superior identity preservation and visual realism by bridging the image-to-video gap, outperforming existing methods and enhancing temporal consistency.
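A minimal sketch of what an image-to-video curriculum could look like, assuming clip length (and hence motion) grows over training; the schedule below is a guess, not DreamID-V's actual recipe.
```python
# Hedged sketch of a curriculum that bridges the image-to-video gap:
# training starts on near-static, image-like clips and gradually increases
# clip length as training progresses. Schedule and sampler are assumptions.
import random

def clip_length(step: int, total_steps: int, min_len=1, max_len=16) -> int:
    """Linear curriculum: 1-frame (image-like) early, full clips late."""
    frac = min(step / max(total_steps, 1), 1.0)
    return min_len + round(frac * (max_len - min_len))

def sample_clip(video_frames: list, length: int) -> list:
    start = random.randrange(0, len(video_frames) - length + 1)
    return video_frames[start:start + length]

video = list(range(64))                      # placeholder frame indices
for step in (0, 5000, 10000):
    L = clip_length(step, total_steps=10000)
    print(step, L, sample_clip(video, L))
```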
🔹 Publication Date: Published on Jan 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.01425
• PDF: https://arxiv.org/pdf/2601.01425
• Project Page: https://guoxu1233.github.io/DreamID-V/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#FaceSwapping #DiffusionModels #ComputerVision #GenerativeAI #VideoAI
✨DiffProxy: Multi-View Human Mesh Recovery via Diffusion-Generated Dense Proxies
📝 Summary:
DiffProxy generates multi-view consistent human proxies using diffusion models to improve human mesh recovery. This bridges synthetic training and real-world generalization, achieving state-of-the-art performance on real benchmarks.
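A stub-level sketch of the pipeline shape the summary implies: per-camera diffusion-generated dense proxies feed a multi-view mesh regressor. Every component below is a placeholder, not DiffProxy's implementation.
```python
# Hedged sketch: a diffusion model, conditioned on each camera, generates
# dense proxy maps (e.g. IUV-like correspondences) that are multi-view
# consistent; a mesh regressor consumes them. All components are stubs.
import numpy as np

def diffusion_proxy(person_latent, camera, hw=(64, 64)):
    """Stand-in for the diffusion generator: returns a dense 3-channel proxy."""
    rng = np.random.default_rng(abs(hash((person_latent, camera))) % 2**32)
    return rng.random((*hw, 3), dtype=np.float32)

def recover_mesh(proxies):
    """Stand-in for the multi-view mesh regressor (returns dummy vertices)."""
    fused = np.mean([p.mean() for p in proxies])
    return np.full((6890, 3), fused, dtype=np.float32)  # SMPL-sized output

cameras = ["cam0", "cam1", "cam2", "cam3"]
proxies = [diffusion_proxy("person_42", c) for c in cameras]
vertices = recover_mesh(proxies)
print(len(proxies), vertices.shape)
```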
🔹 Publication Date: Published on Jan 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.02267
• PDF: https://arxiv.org/pdf/2601.02267
• Project Page: https://wrk226.github.io/DiffProxy.html
• Github: https://github.com/wrk226/DiffProxy
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#HumanMeshRecovery #DiffusionModels #ComputerVision #DeepLearning #AI
✨Prithvi-Complementary Adaptive Fusion Encoder (CAFE): unlocking full potential for flood inundation mapping
📝 Summary:
Prithvi-CAFE improves flood mapping by integrating a pretrained Geo-Foundation Model encoder with a parallel CNN branch featuring attention modules. This hybrid approach captures both global context and critical local details, achieving state-of-the-art results on Sen1Floods11 and Floo...
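A hedged sketch of the hybrid encoder idea: frozen foundation-model features fused with a parallel CNN branch gated by channel attention. The module sizes and fusion choice below are illustrative, not the paper's exact architecture.
```python
# Hedged sketch: global context from a (frozen) foundation-model encoder is
# concatenated with a local-detail CNN branch, reweighted by channel
# attention, before a per-pixel flood segmentation head.
import torch
import torch.nn as nn

class CAFEBlock(nn.Module):
    def __init__(self, c_gfm: int, c_cnn: int):
        super().__init__()
        self.cnn = nn.Sequential(                     # local-detail branch
            nn.Conv2d(3, c_cnn, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_cnn, c_cnn, 3, padding=1), nn.ReLU(),
        )
        self.attn = nn.Sequential(                    # channel attention gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c_cnn, c_cnn, 1), nn.Sigmoid(),
        )
        self.head = nn.Conv2d(c_gfm + c_cnn, 1, 1)    # flood / no-flood logits

    def forward(self, image, gfm_feat):
        """image: (B,3,H,W); gfm_feat: (B,c_gfm,H,W) from the frozen encoder."""
        local = self.cnn(image)
        local = local * self.attn(local)              # reweight local channels
        return self.head(torch.cat([gfm_feat, local], dim=1))

B, H, W, c_gfm = 2, 64, 64, 16
block = CAFEBlock(c_gfm=c_gfm, c_cnn=8)
logits = block(torch.rand(B, 3, H, W), torch.rand(B, c_gfm, H, W))
print(logits.shape)  # (2, 1, 64, 64) per-pixel flood logits
```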
🔹 Publication Date: Published on Jan 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.02315
• PDF: https://arxiv.org/pdf/2601.02315
• Github: https://github.com/Sk-2103/Prithvi-CAFE
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#FloodMapping #DeepLearning #GeoAI #RemoteSensing #ComputerVision
✨ExposeAnyone: Personalized Audio-to-Expression Diffusion Models Are Robust Zero-Shot Face Forgery Detectors
📝 Summary:
ExposeAnyone is a self-supervised diffusion model for deepfake detection that personalizes to subjects and uses reconstruction errors to measure identity distance. It significantly outperforms prior methods on unseen manipulations, including Sora2 videos, and is robust to real-world corruptions.
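A minimal sketch of the detection rule the summary implies, with the personalized diffusion model stubbed out: reconstruction error serves as an identity distance, thresholded to flag forgeries.
```python
# Hedged sketch: a diffusion model personalized to a subject reconstructs
# their expression sequence; a large reconstruction error suggests the motion
# doesn't match the claimed identity, i.e. a likely forgery. The model,
# features, and threshold below are stand-ins.
import numpy as np

def personalized_reconstruct(expr_seq, subject_id):
    """Stand-in for the subject-personalized diffusion reconstruction."""
    rng = np.random.default_rng(abs(hash(subject_id)) % 2**32)
    return expr_seq + 0.01 * rng.standard_normal(expr_seq.shape)

def identity_distance(expr_seq, subject_id):
    recon = personalized_reconstruct(expr_seq, subject_id)
    return float(np.mean((expr_seq - recon) ** 2))

def is_forgery(expr_seq, subject_id, threshold=1e-3):
    return identity_distance(expr_seq, subject_id) > threshold

T, D = 100, 64                       # frames x expression features
real = np.random.randn(T, D)
print(is_forgery(real, "subject_A"))
```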
🔹 Publication Date: Published on Jan 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.02359
• PDF: https://arxiv.org/pdf/2601.02359
• Project Page: https://mapooon.github.io/ExposeAnyonePage/
✨ Datasets citing this paper:
• https://huggingface.co/datasets/mapooon/S2CFP
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#DeepfakeDetection #DiffusionModels #ComputerVision #AITechnology #ForgeryDetection