✨TDM-R1: Reinforcing Few-Step Diffusion Models with Non-Differentiable Reward
📝 Summary:
TDM-R1 is a novel reinforcement learning method that enhances few-step generative models by incorporating non-differentiable real-world rewards. It overcomes limitations of existing RL approaches, achieving state-of-the-art performance with significantly fewer steps.
🔹 Publication Date: Published on Mar 8
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07700
• PDF: https://arxiv.org/pdf/2603.07700
• Project Page: https://luo-yihong.github.io/TDM-R1-Page/
• Github: https://github.com/Luo-Yihong/TDM-R1
🔹 Models citing this paper:
• https://huggingface.co/Luo-Yihong/TDM-R1
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#DiffusionModels #ReinforcementLearning #GenerativeAI #MachineLearning #DeepLearning
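The core obstacle TDM-R1 tackles — rewards you can score but cannot backpropagate through — is the classic policy-gradient setting. A minimal REINFORCE-style sketch of that idea (toy Gaussian policy and a made-up black-box reward; not the paper's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_reward(sample):
    # Stand-in for a non-differentiable real-world reward (e.g. a preference
    # scorer): it returns only a score, never a gradient.
    return -abs(sample - 3.0)

# Toy "policy": a Gaussian over a scalar, parameterised by its mean.
mu = 0.0
lr = 0.05
for step in range(200):
    samples = rng.normal(mu, 1.0, size=64)            # sample actions
    rewards = np.array([black_box_reward(s) for s in samples])
    advantages = rewards - rewards.mean()             # baseline for variance reduction
    # REINFORCE: grad of log N(s; mu, 1) w.r.t. mu is (s - mu)
    grad_mu = np.mean(advantages * (samples - mu))
    mu += lr * grad_mu

print(round(mu, 1))  # the mean drifts toward the reward-maximising value near 3.0
```

The policy improves using only sampled scores, which is what lets such methods work with rewards that have no gradient at all.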
✨Building AI Coding Agents for the Terminal: Scaffolding, Harness, Context Engineering, and Lessons Learned
📝 Summary:
OPENDEV is an open-source command-line AI coding agent for autonomous software engineering assistance. It uses specialized model routing, a dual-agent architecture, and efficient context management to provide robust, terminal-first assistance. This design prevents reasoning degradation and accumu...
🔹 Publication Date: Published on Mar 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.05344
• PDF: https://arxiv.org/pdf/2603.05344
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AICodingAgents #SoftwareDevelopment #CommandLine #OpenSource #ArtificialIntelligence
✨SlowBA: An efficiency backdoor attack towards VLM-based GUI agents
📝 Summary:
SlowBA is a novel backdoor attack targeting the response latency of VLM-based GUI agents. It induces excessively long reasoning chains using realistic pop-up window triggers, significantly increasing response length and latency while maintaining task accuracy. This reveals a new security vulnerab...
🔹 Publication Date: Published on Mar 9
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.08316
• PDF: https://arxiv.org/pdf/2603.08316
• Github: https://github.com/tu-tuing/SlowBA
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#BackdoorAttack #AISecurity #VLM #GUIagents #Cybersecurity
✨HydroShear: Hydroelastic Shear Simulation for Tactile Sim-to-Real Reinforcement Learning
📝 Summary:
HydroShear is a hydroelastic tactile simulator that improves sim-to-real policy transfer for contact-rich tasks by accurately modeling stick-slip and path-dependent forces. It enables zero-shot transfer of reinforcement learning policies with a 93% average success rate, significantly outperformin...
🔹 Publication Date: Published on Feb 28
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.00446
• PDF: https://arxiv.org/pdf/2603.00446
• Project Page: https://hydroshear.github.io/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#ReinforcementLearning #Robotics #Simulation #TactileSensing #Sim2Real
✨Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training
📝 Summary:
High-quality, difficulty-aware post-training data is key to financial LLM performance. This study introduces the ODA-Fin-SFT-318k and ODA-Fin-RL-12k datasets, which enable their model to surpass state-of-the-art financial LLMs.
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07223
• PDF: https://arxiv.org/pdf/2603.07223
• Project Page: https://opendataarena.github.io/
🔹 Models citing this paper:
• https://huggingface.co/OpenDataArena/ODA-Fin-SFT-8B
• https://huggingface.co/OpenDataArena/ODA-Fin-RL-8B
✨ Datasets citing this paper:
• https://huggingface.co/datasets/OpenDataArena/ODA-Fin-SFT-318k
• https://huggingface.co/datasets/OpenDataArena/ODA-Fin-RL-12k
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#FinancialLLM #AIinFinance #DataScience #NLP #DeepLearning
✨TAPFormer: Robust Arbitrary Point Tracking via Transient Asynchronous Fusion of Frames and Events
📝 Summary:
TAPFormer is a new transformer framework for robust arbitrary point tracking. It uses Transient Asynchronous Fusion to bridge low-rate frames and high-rate events, and Cross-modal Locally Weighted Fusion for adaptive attention. This method significantly outperforms existing trackers.
🔹 Publication Date: Published on Mar 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.04989
• PDF: https://arxiv.org/pdf/2603.04989
• Project Page: https://tapformer.github.io/
• Github: https://github.com/ljx1002/TAPFormer
🔹 Models citing this paper:
• https://huggingface.co/ljx1002/tapformer
✨ Datasets citing this paper:
• https://huggingface.co/datasets/ljx1002/tapformer
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#PointTracking #Transformers #ComputerVision #EventCameras #DeepLearning
✨MWM: Mobile World Models for Action-Conditioned Consistent Prediction
📝 Summary:
MWM improves action-conditioned rollout consistency for navigation world models. It uses a two-stage training approach and Inference-Consistent State Distillation to achieve robust, efficient planning with higher visual fidelity and success.
🔹 Publication Date: Published on Mar 8
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07799
• PDF: https://arxiv.org/pdf/2603.07799
• Project Page: https://aigeeksgroup.github.io/MWM
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨HY-WU (Part I): An Extensible Functional Neural Memory Framework and An Instantiation in Text-Guided Image Editing
📝 Summary:
Foundation models require adaptive architectures to handle evolving objectives and user needs, leading to the development of HY-WU, a memory-first framework that generates instance-specific weight upd...
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07236
• PDF: https://arxiv.org/pdf/2603.07236
• Project Page: https://tencent-hy-wu.github.io/
• Github: https://github.com/Tencent-Hunyuan/HY-WU
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Making LLMs Optimize Multi-Scenario CUDA Kernels Like Experts
📝 Summary:
This paper introduces CUDAMaster, a multi-agent, hardware-aware system for general-purpose automated GPU kernel optimization across diverse scenarios including ML and scientific computing. It achieves significant speedups, often matching or exceeding commercial libraries like cuBLAS.
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07169
• PDF: https://arxiv.org/pdf/2603.07169
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#LLMs #GPUOptimization #CUDA #HighPerformanceComputing #MachineLearning
✨NLE: Non-autoregressive LLM-based ASR by Transcript Editing
📝 Summary:
NLE is a non-autoregressive ASR system that uses a bidirectional LLM editor for conditional transcript editing, enabling parallel prediction. It achieves strong accuracy and a 27x speedup over AR baselines, making it suitable for real-time use.
🔹 Publication Date: Published on Mar 9
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.08397
• PDF: https://arxiv.org/pdf/2603.08397
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨PresentBench: A Fine-Grained Rubric-Based Benchmark for Slide Generation
📝 Summary:
Slides serve as a critical medium for conveying information in presentation-oriented scenarios such as academia, education, and business. Despite their importance, creating high-quality slide decks re...
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07244
• PDF: https://arxiv.org/pdf/2603.07244
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Agentic Planning with Reasoning for Image Styling via Offline RL
📝 Summary:
This paper presents an agentic offline reinforcement learning framework for complex image styling. It uses structured planning with chain-of-thought reasoning and a tool library to decompose editing tasks. This approach significantly improves performance over direct prompting, validated by human ...
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07148
• PDF: https://arxiv.org/pdf/2603.07148
✨ Datasets citing this paper:
• https://huggingface.co/datasets/subhojyoti1990/image-agent-styling
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Sparse-BitNet: 1.58-bit LLMs are Naturally Friendly to Semi-Structured Sparsity
📝 Summary:
Sparse-BitNet demonstrates that 1.58-bit quantization works better with N:M sparsity than full-precision models, achieving stable training and improved efficiency across different scales and regimes. ...
🔹 Publication Date: Published on Mar 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.05168
• PDF: https://arxiv.org/pdf/2603.05168
• Github: https://github.com/AAzdi/Sparse-BitNet
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
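The pairing Sparse-BitNet exploits — ternary 1.58-bit weights plus hardware-friendly N:M sparsity — can be illustrated with a toy NumPy sketch (a 2:4 pattern on a handful of weights; just the weight transforms, not the paper's training recipe):

```python
import numpy as np

def prune_2_4(w):
    # N:M semi-structured sparsity with N=2, M=4: in every group of 4
    # consecutive weights, zero out the 2 with the smallest magnitude.
    groups = w.reshape(-1, 4)
    idx = np.argsort(np.abs(groups), axis=1)[:, :2]  # 2 smallest per group
    mask = np.ones_like(groups)
    np.put_along_axis(mask, idx, 0.0, axis=1)
    return (groups * mask).reshape(w.shape)

def ternary_quantize(w):
    # 1.58-bit (ternary) quantization, BitNet-style: scale by the mean
    # absolute value, then round each weight into {-1, 0, +1}.
    scale = np.abs(w).mean()
    return np.clip(np.round(w / (scale + 1e-8)), -1, 1), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(8,))
w_sparse = prune_2_4(w)               # exactly 2 of every 4 weights zeroed
q, scale = ternary_quantize(w_sparse) # survivors become {-1, 0, +1} * scale
print(int((w_sparse == 0).sum()))     # -> 4
```

The paper's observation is that these two compressions compose gracefully: weights already snapped to three levels lose little extra information when half of each group is dropped.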
✨MedSteer: Counterfactual Endoscopic Synthesis via Training-Free Activation Steering
📝 Summary:
MedSteer is a training-free framework for generating counterfactual medical images. It steers diffusion model activations along pathology vectors to modify concepts while preserving underlying image structure. This method outperforms existing techniques in concept modification and significantly i...
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07066
• PDF: https://arxiv.org/pdf/2603.07066
• Github: https://github.com/phamtrongthang123/medsteer
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
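The steering idea behind MedSteer — nudging intermediate activations along a concept direction without any training — is easy to sketch. Below is a generic toy version (synthetic "activations" and a difference-of-means concept vector; the paper's pathology vectors and diffusion internals are not reproduced here):

```python
import numpy as np

# Hypothetical intermediate activations: rows are samples, columns are channels.
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(100, 16))
pathological = rng.normal(0.0, 1.0, size=(100, 16)) + 0.8  # shifted cluster

# A "pathology vector" as the normalized difference of class-mean activations.
v = pathological.mean(axis=0) - healthy.mean(axis=0)
v /= np.linalg.norm(v)

def steer(h, v, alpha):
    # Training-free steering: move the activation along the concept direction
    # while leaving components orthogonal to v untouched — which is what
    # preserves the rest of the image structure.
    return h + alpha * v

h = healthy[0]
h_steered = steer(h, v, alpha=3.0)
print(h_steered @ v > h @ v)  # -> True: higher score along the pathology axis
```

Because only one direction in activation space is modified, everything the model encodes orthogonally to the concept is untouched, which is the intuition behind structure preservation.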
✨Scaling Data Difficulty: Improving Coding Models via Reinforcement Learning on Fresh and Challenging Problems
📝 Summary:
A four-stage data processing framework with LLM-based difficulty filtering creates a high-quality code generation dataset that significantly improves model performance on challenging problems. AI-gene...
🔹 Publication Date: Published on Mar 8
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07779
• PDF: https://arxiv.org/pdf/2603.07779
• Project Page: https://github.com/ZongqianLi/MicroCoder/blob/main/README.md
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Breaking Training Bottlenecks: Effective and Stable Reinforcement Learning for Coding Models
📝 Summary:
MicroCoder-GRPO enhances code generation through improved policy optimization with innovations in truncation masking, temperature selection, and loss function adjustments, achieving superior performan...
🔹 Publication Date: Published on Mar 8
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07777
• PDF: https://arxiv.org/pdf/2603.07777
• Project Page: https://github.com/ZongqianLi/MicroCoder/blob/main/README.md
• Github: https://github.com/ZongqianLi/MicroCoder
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Retrieval-Augmented Generation for Predicting Cellular Responses to Gene Perturbation
📝 Summary:
PT-RAG framework improves prediction of cellular responses to genetic perturbations by using differentiable, cell-type-aware retrieval combined with generative modeling, outperforming existing methods...
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07233
• PDF: https://arxiv.org/pdf/2603.07233
• Github: https://github.com/difra100/PT-RAG_ICLR
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Training-free Latent Inter-Frame Pruning with Attention Recovery
📝 Summary:
LIPAR reduces video generation latency by skipping redundant latent patches. It uses Attention Recovery to maintain quality, boosting throughput by 1.45x without extra training.
🔹 Publication Date: Published on Mar 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.05811
• PDF: https://arxiv.org/pdf/2603.05811
• Project Page: https://dennismenn.github.io/lipar/
• Github: https://github.com/DennisMenn/lipar
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
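The pruning half of LIPAR — deciding which latent patches changed enough between frames to be worth recomputing — can be sketched in a few lines (toy latents and a made-up threshold; the paper's Attention Recovery step is not shown):

```python
import numpy as np

def select_active_patches(prev_latent, cur_latent, threshold=0.1):
    # Compare consecutive latent frames patch by patch; patches whose change
    # falls below the threshold are marked redundant, so the previous frame's
    # computation can be reused instead of re-running them.
    diff = np.abs(cur_latent - prev_latent).mean(axis=-1)  # per-patch mean change
    return diff > threshold                                # True = recompute

rng = np.random.default_rng(0)
prev = rng.normal(size=(64, 32))  # 64 latent patches, 32 channels each
cur = prev.copy()
cur[:8] += 1.0                    # only the first 8 patches actually change
active = select_active_patches(prev, cur)
print(int(active.sum()))  # -> 8  (the 56 static patches are pruned)
```

In mostly-static video, the active set is small, which is where the reported throughput gain comes from — the recovery step then compensates for the attention context the skipped patches would have provided.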
✨LiveWorld: Simulating Out-of-Sight Dynamics in Generative Video World Models
📝 Summary:
LiveWorld addresses the out-of-sight dynamics problem in video world models by introducing a persistent global state representation that maintains continuous evolution of dynamic entities beyond the o...
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07145
• PDF: https://arxiv.org/pdf/2603.07145
• Project Page: https://zichengduan.github.io/LiveWorld/index.html
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research