✨CATS-V2V: A Real-World Vehicle-to-Vehicle Cooperative Perception Dataset with Complex Adverse Traffic Scenarios
📝 Summary:
CATS-V2V is a new real-world dataset for vehicle-to-vehicle (V2V) cooperative perception, focusing on complex adverse traffic scenarios. It provides extensive synchronized sensor data, including LiDAR and camera streams, from two vehicles across diverse conditions, supporting autonomous driving research.
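The post doesn't spell out the dataset's file layout, but a core step for any two-vehicle dataset like this is pairing the ego and cooperative vehicles' frames by timestamp. A minimal sketch, assuming plain lists of per-frame timestamps in seconds (the tolerance and data layout are illustrative assumptions, not the dataset's actual schema):

```python
# Hypothetical sketch: nearest-timestamp pairing of sensor frames from two
# cooperating vehicles. Not CATS-V2V's actual format or tooling.
import bisect

def pair_frames(ego_stamps, coop_stamps, max_skew=0.05):
    """Match each ego-vehicle timestamp to the closest cooperative-vehicle
    timestamp, keeping pairs whose skew is below max_skew seconds."""
    coop_sorted = sorted(coop_stamps)
    pairs = []
    for t in ego_stamps:
        i = bisect.bisect_left(coop_sorted, t)
        # Candidates: the neighbor on each side of the insertion point.
        candidates = [c for c in (i - 1, i) if 0 <= c < len(coop_sorted)]
        best = min(candidates, key=lambda c: abs(coop_sorted[c] - t))
        if abs(coop_sorted[best] - t) <= max_skew:
            pairs.append((t, coop_sorted[best]))
    return pairs

# Example: two 10 Hz streams with a small clock offset.
ego = [0.0, 0.1, 0.2, 0.3]
coop = [0.02, 0.12, 0.21, 0.33]
print(pair_frames(ego, coop))  # [(0.0, 0.02), (0.1, 0.12), (0.2, 0.21), (0.3, 0.33)]
```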
🔹 Publication Date: Published on Nov 14
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.11168
• PDF: https://arxiv.org/pdf/2511.11168
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#V2V #AutonomousDriving #CooperativePerception #Dataset #ADAS
✨MiMo-Embodied: X-Embodied Foundation Model Technical Report
📝 Summary:
MiMo-Embodied is the first cross-embodied foundation model. It achieves state-of-the-art performance in both autonomous driving and embodied AI, demonstrating positive transfer through multi-stage learning and fine-tuning.
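The summary only names "multi-stage learning and fine-tuning", so the following is an illustrative sketch of a staged curriculum that gradually shifts the data mix from embodied-AI samples toward driving samples. The stage names, mixing ratios, and train_step helper are assumptions, not MiMo-Embodied's actual recipe:

```python
# Illustrative multi-stage curriculum sketch; not the paper's training setup.
import random

STAGES = [
    # (name, fraction of driving samples per batch, number of steps)
    ("embodied_pretrain", 0.0, 3),
    ("joint_supervised",  0.5, 3),
    ("driving_finetune",  0.9, 3),
]

def train_step(batch):
    # Placeholder for a real optimizer update.
    print(f"  update on batch: {batch}")

def sample_batch(driving_frac, batch_size=4):
    return ["drive" if random.random() < driving_frac else "embodied"
            for _ in range(batch_size)]

for name, frac, steps in STAGES:
    print(f"stage: {name}")
    for _ in range(steps):
        train_step(sample_batch(frac))
```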
🔹 Publication Date: Published on Nov 20
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.16518
• PDF: https://arxiv.org/pdf/2511.16518
• Github: https://github.com/XiaomiMiMo/MiMo-Embodied
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#FoundationModels #EmbodiedAI #AutonomousDriving #AI #Robotics
✨OpenREAD: Reinforced Open-Ended Reasoning for End-to-End Autonomous Driving with LLM-as-Critic
📝 Summary:
OpenREAD enhances autonomous driving via end-to-end reinforcement fine-tuning for both reasoning and planning. It uses an LLM critic to quantify open-ended reasoning, achieving state-of-the-art performance by addressing prior limitations.
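A minimal sketch of the LLM-as-critic idea described above: a judge model scores an open-ended reasoning trace, and that score is blended into the reward used for reinforcement fine-tuning. The critic stand-in, rubric, and weights are assumptions; OpenREAD's actual prompts, reward shaping, and RL algorithm may differ.

```python
# Hedged sketch of an LLM-critic reward; not OpenREAD's implementation.

def score_with_llm_critic(scene: str, reasoning: str, plan: str) -> float:
    """Hypothetical stand-in for a judge-model call. A real critic would be
    prompted with a rubric (causal correctness, relevance to the plan, safety)
    and its numeric score parsed from the response."""
    return 0.5  # placeholder score in [0, 1]

def reasoning_reward(sample, w_reason=0.7, w_plan=0.3):
    """Blend the critic's open-ended reasoning score with a closed-form
    planning term (here, negative L2 distance to the logged expert trajectory)."""
    r_reason = score_with_llm_critic(sample["scene"], sample["reasoning"], sample["plan"])
    r_plan = -sample["l2_to_expert"]
    return w_reason * r_reason + w_plan * r_plan

sample = {"scene": "wet road, lead car braking",
          "reasoning": "the gap is closing, so decelerate early",
          "plan": "reduce speed to 20 km/h",
          "l2_to_expert": 0.4}
print(reasoning_reward(sample))  # ≈ 0.23
```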
🔹 Publication Date: Published on Dec 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.01830
• PDF: https://arxiv.org/pdf/2512.01830
• Github: https://github.com/wyddmw/OpenREAD
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AutonomousDriving #LLMs #ReinforcementLearning #AI #Robotics
✨SimScale: Learning to Drive via Real-World Simulation at Scale
📝 Summary:
SimScale is a simulation framework that synthesizes diverse driving scenarios from real-world logs. Co-training with this data significantly improves the robustness and generalization of driving policies, and performance continues to scale with additional simulation data even without new real-world input.
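A minimal co-training sketch under the assumption implied above: training batches mix real log data and simulation-synthesized scenarios at a fixed ratio. The dataset placeholders and mixing ratio are illustrative, not SimScale's actual sampling strategy.

```python
# Illustrative real/sim co-training batch sampler; not SimScale's code.
import itertools
import random

def cotrain_batches(real_samples, sim_samples, sim_ratio=0.5, batch_size=8):
    """Yield batches where roughly sim_ratio of the items come from simulation."""
    real_it = itertools.cycle(real_samples)
    sim_it = itertools.cycle(sim_samples)
    while True:
        batch = [next(sim_it) if random.random() < sim_ratio else next(real_it)
                 for _ in range(batch_size)]
        yield batch

batches = cotrain_batches(["real_0", "real_1"], ["sim_0", "sim_1", "sim_2"])
print(next(batches))
```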
🔹 Publication Date: Published on Nov 28
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.23369
• PDF: https://arxiv.org/pdf/2511.23369
• Project Page: https://opendrivelab.com/SimScale
• Github: https://github.com/OpenDriveLab/SimScale
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AutonomousDriving #Simulation #AI #MachineLearning #Robotics
✨From Segments to Scenes: Temporal Understanding in Autonomous Driving via Vision-Language Model
📝 Summary:
The TAD benchmark is introduced to evaluate temporal understanding in autonomous driving, a setting where current VLMs perform poorly: even state-of-the-art models show substandard accuracy. Two training-free solutions, Scene-CoT and TCogMap, are proposed to improve temporal understanding.
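Scene-CoT is only named here, so as a rough, training-free illustration of the idea, the sketch below builds a chain-of-thought prompt over temporally ordered scene segments. The prompt wording and segment fields are assumptions, not the paper's actual method.

```python
# Hypothetical training-free, scene-level chain-of-thought prompt builder.

def build_scene_cot_prompt(segments, question):
    """segments: list of (start_s, end_s, caption) tuples in temporal order."""
    lines = ["You are analyzing a driving video split into ordered segments."]
    for i, (t0, t1, caption) in enumerate(segments):
        lines.append(f"Segment {i} ({t0:.1f}s-{t1:.1f}s): {caption}")
    lines.append("First summarize how the scene evolves across segments,")
    lines.append("then answer the question step by step.")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

prompt = build_scene_cot_prompt(
    [(0.0, 2.0, "ego approaches an intersection"),
     (2.0, 4.0, "a pedestrian steps onto the crosswalk")],
    "Should the ego vehicle yield, and why?")
print(prompt)
```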
🔹 Publication Date: Published on Dec 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05277
• PDF: https://arxiv.org/pdf/2512.05277
• Github: https://github.com/vbdi/tad_bench
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AutonomousDriving #VisionLanguageModels #ComputerVision #AIResearch #DeepLearning
✨DrivePI: Spatial-aware 4D MLLM for Unified Autonomous Driving Understanding, Perception, Prediction and Planning
📝 Summary:
DrivePI is a new spatial-aware 4D MLLM for autonomous driving, unifying understanding, 3D perception, prediction, and planning. It integrates point clouds, images, and language instructions, achieving state-of-the-art performance by outperforming existing VLA and specialized VA models.
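A schematic sketch of the unified-input idea: point-cloud tokens, image tokens, and language-instruction tokens are concatenated into a single sequence for a multimodal LLM that emits perception, prediction, and planning outputs. The encoders and token placeholders below are assumptions, not DrivePI's architecture.

```python
# Schematic multimodal token-sequence assembly; not DrivePI's model code.
from typing import List

def encode_point_cloud(points) -> List[str]:
    return [f"<pc_{i}>" for i in range(4)]    # placeholder geometry tokens

def encode_images(images) -> List[str]:
    return [f"<img_{i}>" for i in range(4)]   # placeholder patch tokens

def tokenize_text(instruction: str) -> List[str]:
    return instruction.split()

def build_4d_sequence(points, images, instruction):
    # The interleaving order is an assumption; the key point is one sequence
    # that covers geometry, appearance, and language before decoding a plan.
    return (["<bos>"] + encode_point_cloud(points) + encode_images(images)
            + tokenize_text(instruction) + ["<plan>"])

print(build_4d_sequence(None, None, "turn left at the next intersection"))
```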
🔹 Publication Date: Published on Dec 14
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.12799
• PDF: https://arxiv.org/pdf/2512.12799
• Github: https://github.com/happinesslz/DrivePI
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AutonomousDriving #MLLM #ComputerVision #DeepLearning #AI
✨Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future
📝 Summary:
Vision-Language-Action (VLA) models integrate visual, linguistic, and action capabilities for autonomous driving. They aim for interpretable, human-aligned policies and address limitations of prior systems. This paper characterizes VLA paradigms, datasets, and future challenges.
🔹 Publication Date: Published on Dec 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.16760
• PDF: https://arxiv.org/pdf/2512.16760
• Project Page: https://worldbench.github.io/vla4ad
• Github: https://github.com/worldbench/awesome-vla-for-ad
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#VLAModels #AutonomousDriving #AI #DeepLearning #Robotics
✨RadarGen: Automotive Radar Point Cloud Generation from Cameras
📝 Summary:
RadarGen synthesizes realistic automotive radar point clouds from camera images using diffusion models. It incorporates depth, semantic, and motion cues for physical plausibility, enabling scalable multimodal simulation and improving perception models.
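A minimal, DDPM-style reverse-diffusion sketch of the camera-conditioned radar point generation described above; the denoiser, conditioning features, noise schedule, and point layout are placeholders, not RadarGen's actual parameterization.

```python
# Generic conditional diffusion sampling loop; not RadarGen's implementation.
import numpy as np

T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, cond):
    """Placeholder epsilon-prediction network; a real model would condition on
    image-derived depth, semantic, and motion features."""
    return np.zeros_like(x_t)

def sample_radar_points(cond, n_points=128, dim=4, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_points, dim))   # assumed (x, y, z, velocity) layout
    for t in reversed(range(T)):
        eps = denoiser(x, t, cond)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

points = sample_radar_points(cond=None)
print(points.shape)  # (128, 4)
```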
🔹 Publication Date: Published on Dec 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.17897
• PDF: https://arxiv.org/pdf/2512.17897
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AutomotiveRadar #PointClouds #DiffusionModels #ComputerVision #AutonomousDriving
✨DrivingGen: A Comprehensive Benchmark for Generative Video World Models in Autonomous Driving
📝 Summary:
DrivingGen is the first comprehensive benchmark for generative driving world models, addressing prior evaluation gaps. It uses diverse datasets and new metrics to assess visual realism, trajectory plausibility, temporal coherence, and controllability. Benchmarking reveals trade-offs between visua...
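As a toy illustration of the kind of metric such a benchmark evaluates, the sketch below scores temporal coherence as the mean cosine similarity between adjacent per-frame feature embeddings. The feature extractor and scoring are assumptions; DrivingGen's actual metrics are more involved.

```python
# Toy temporal-coherence score over per-frame embeddings; not the benchmark's metric.
import numpy as np

def temporal_coherence(frame_features: np.ndarray) -> float:
    """frame_features: (num_frames, feat_dim) array of per-frame embeddings."""
    f = frame_features / np.linalg.norm(frame_features, axis=1, keepdims=True)
    sims = np.sum(f[1:] * f[:-1], axis=1)   # cosine similarity of adjacent frames
    return float(sims.mean())

feats = np.random.default_rng(0).standard_normal((16, 512))
print(round(temporal_coherence(feats), 3))
```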
🔹 Publication Date: Published on Jan 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.01528
• PDF: https://arxiv.org/pdf/2601.01528
• Project Page: https://drivinggen-bench.github.io/
• Github: https://github.com/youngzhou1999/DrivingGen
✨ Datasets citing this paper:
• https://huggingface.co/datasets/yangzhou99/DrivingGen
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AutonomousDriving #GenerativeAI #WorldModels #AIResearch #Benchmarking