✨Adapting Web Agents with Synthetic Supervision
📝 Summary:
Web agents struggle to adapt to new websites due to limited data and poor synthetic data quality. SynthAgent is a framework that refines AI-generated tasks and collected trajectories to create high-quality synthetic supervision. This approach significantly improves web agent adaptation.
🔹 Publication Date: Published on Nov 8, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.06101
• PDF: https://arxiv.org/pdf/2511.06101
• Github: https://github.com/aiming-lab/SynthAgent
🔹 Models citing this paper:
• https://huggingface.co/ChilleD/SynthAgent-SFT-Qwen2.5-VL-7B
• https://huggingface.co/ChilleD/SynthAgent-SFT-UI-TARS-1.5-7B
• https://huggingface.co/ChilleD/SynthAgent
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#WebAgents #SyntheticData #MachineLearning #AIResearch #DeepLearning
✨Generated Reality: Human-centric World Simulation using Interactive Video Generation with Hand and Camera Control
📝 Summary:
This paper introduces a human-centric video world model for extended reality, using tracked head and hand poses for dexterous interaction. This system generates egocentric virtual environments, significantly improving user task performance and perceived control.
🔹 Publication Date: Published on Feb 20
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.18422
• PDF: https://arxiv.org/pdf/2602.18422
• Project Page: https://codeysun.github.io/generated-reality/
==================================
#ExtendedReality #VideoGeneration #HumanComputerInteraction #VirtualEnvironments #AIResearch
✨Learning Smooth Time-Varying Linear Policies with an Action Jacobian Penalty
📝 Summary:
This paper proposes using an action Jacobian penalty to remove unrealistic high-frequency signals from reinforcement learning policies without tuning. It introduces a Linear Policy Net architecture to reduce computational overhead, enabling faster convergence and efficient inference for learning ...
🔹 Publication Date: Published on Feb 20
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.18312
• PDF: https://arxiv.org/pdf/2602.18312
==================================
#ReinforcementLearning #MachineLearning #PolicyLearning #DeepLearning #AI
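The paper's exact penalty and its Linear Policy Net are behind the link; purely as a hedged illustration, a penalty on the finite-difference time derivative of the action sequence captures the general idea of suppressing high-frequency action content. The function name and `dt` below are invented for the example:

```python
import numpy as np

def action_jacobian_penalty(actions: np.ndarray, dt: float = 0.02) -> float:
    """Mean squared norm of the finite-difference da/dt of an action
    sequence; large when the policy emits high-frequency, jittery actions.
    actions has shape (T, action_dim)."""
    da_dt = np.diff(actions, axis=0) / dt            # (T-1, action_dim)
    return float(np.mean(np.sum(da_dt ** 2, axis=-1)))

t = np.linspace(0.0, 1.0, 51)                        # 0.02 s spacing
smooth = np.stack([t, 2.0 * t], axis=-1)             # slow ramp
jittery = np.stack([np.sign(np.sin(40.0 * t)), t], axis=-1)  # square wave
assert action_jacobian_penalty(smooth) < action_jacobian_penalty(jittery)
```

Added to an RL objective with a small coefficient, such a term trades a little reward for much smoother control signals.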
✨EgoPush: Learning End-to-End Egocentric Multi-Object Rearrangement for Mobile Robots
📝 Summary:
EgoPush allows mobile robots to rearrange multiple objects in cluttered spaces using a single egocentric camera. It uses an object-centric latent space and stage-decomposed rewards for long-horizon tasks, outperforming end-to-end baselines and demonstrating sim-to-real transfer.
🔹 Publication Date: Published on Feb 20
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.18071
• PDF: https://arxiv.org/pdf/2602.18071
• Project Page: https://ai4ce.github.io/EgoPush/
==================================
#Robotics #ComputerVision #AI #MachineLearning #RobotManipulation
✨Does Your Reasoning Model Implicitly Know When to Stop Thinking?
📝 Summary:
Large reasoning models implicitly know when to stop thinking, a capability obscured by current sampling. SAGE, a novel sampling paradigm, uncovers this efficient reasoning potential. Integrating SAGE into SAGE-RL boosts reasoning accuracy and efficiency on math benchmarks.
🔹 Publication Date: Published on Feb 9
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.08354
• PDF: https://arxiv.org/pdf/2602.08354
• Project Page: https://hzx122.github.io/sage-rl/
==================================
#AI #LLMs #Reasoning #MachineLearning #Efficiency
✨SARAH: Spatially Aware Real-time Agentic Humans
📝 Summary:
SARAH provides real-time, spatially-aware conversational motion for VR agents. It uses a causal transformer VAE and flow matching to generate natural full-body movement responsive to user position and audio, achieving state-of-the-art quality at 300+ FPS.
🔹 Publication Date: Published on Feb 20
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.18432
• PDF: https://arxiv.org/pdf/2602.18432
• Project Page: https://evonneng.github.io/sarah/
==================================
#VirtualReality #AI #GenerativeAI #HumanMotion #DeepLearning
✨VESPO: Variational Sequence-Level Soft Policy Optimization for Stable Off-Policy LLM Training
📝 Summary:
VESPO addresses LLM RL training instability by using a variational formulation with variance reduction. It provides a sequence-level correction without length normalization, ensuring stable training and consistent gains even with high policy staleness.
🔹 Publication Date: Published on Feb 11
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.10693
• PDF: https://arxiv.org/pdf/2602.10693
• Github: https://github.com/FloyedShen/VESPO
==================================
#LLM #ReinforcementLearning #DeepLearning #AI #NLP
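For context, the quantity such sequence-level methods correct is the plain importance ratio of a sampled sequence under the new versus the stale behavior policy, taken over the whole sequence with no 1/T length normalization. The snippet shows only that standard ratio, not VESPO's variational correction:

```python
import math

def sequence_importance_ratio(logp_new, logp_old):
    """exp(sum log pi_new - sum log pi_old) over all tokens of one
    sampled sequence; note there is no 1/T exponent (no length
    normalization), matching the sequence-level view."""
    return math.exp(sum(logp_new) - sum(logp_old))

# Per-token log-probs under the current policy and the stale sampler.
r = sequence_importance_ratio([-0.1, -0.2, -0.3], [-0.2, -0.2, -0.4])
assert abs(r - math.exp(0.2)) < 1e-12
```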
✨AudioX: Diffusion Transformer for Anything-to-Audio Generation
📝 Summary:
AudioX is a unified Diffusion Transformer for high-quality audio and music generation with natural language control. It processes diverse modalities using a novel multi-modal masked training strategy. This model outperforms specialized systems while offering remarkable versatility.
🔹 Publication Date: Published on Mar 13, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2503.10522
• PDF: https://arxiv.org/pdf/2503.10522
• Project Page: https://zeyuet.github.io/AudioX/
• Github: https://github.com/ZeyueT/AudioX
🔹 Models citing this paper:
• https://huggingface.co/HKUSTAudio/AudioX
• https://huggingface.co/HKUSTAudio/AudioX-MAF-MMDiT
• https://huggingface.co/Zeyue7/AudioX
✨ Datasets citing this paper:
• https://huggingface.co/datasets/HKUSTAudio/AudioX-IFcaps
✨ Spaces citing this paper:
• https://huggingface.co/spaces/Zeyue7/AudioX
• https://huggingface.co/spaces/Napawit/AudioX
• https://huggingface.co/spaces/ar93092/atai
==================================
#AudioGeneration #DiffusionModels #Transformers #AI #MultimodalAI
✨Selective Training for Large Vision Language Models via Visual Information Gain
📝 Summary:
This paper proposes Visual Information Gain (VIG) to quantify how much the visual input contributes to prediction uncertainty in Large Vision-Language Models. VIG enables selective training, improving visual grounding and reducing language bias with less supervision.
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17186
• PDF: https://arxiv.org/pdf/2602.17186
==================================
#LVLMs #SelectiveTraining #VisualInformationGain #ComputerVision #AIResearch
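The paper's precise definition of VIG is in the PDF; purely as an illustration of the kind of quantity involved, one can measure how much the image reduces the model's predictive entropy. Function names and numbers here are invented for the sketch:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def visual_information_gain(p_with_image, p_text_only):
    """Entropy-reduction proxy: H(text only) - H(text + image).
    Positive when the image makes the prediction more certain."""
    return entropy(p_text_only) - entropy(p_with_image)

# The image sharpens the answer distribution -> positive gain,
# marking this sample as visually informative for selective training.
assert visual_information_gain([0.9, 0.05, 0.05], [0.4, 0.3, 0.3]) > 0
```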
✨DeepVision-103K: A Visually Diverse, Broad-Coverage, and Verifiable Mathematical Dataset for Multimodal Reasoning
📝 Summary:
To address limitations in existing datasets, DeepVision-103K offers a comprehensive and visually diverse mathematical dataset for multimodal reasoning. It enhances model performance, visual perception, and reasoning in large multimodal models.
🔹 Publication Date: Published on Feb 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16742
• PDF: https://arxiv.org/pdf/2602.16742
• Github: https://github.com/SKYLENAGE-AI/DeepVision-103K
✨ Datasets citing this paper:
• https://huggingface.co/datasets/skylenage/DeepVision-103K
==================================
#MultimodalAI #ComputerVision #Datasets #AIResearch #DeepLearning
✨Mobile-Agent-v3: Foundamental Agents for GUI Automation
📝 Summary:
This paper introduces GUI-Owl and Mobile-Agent-v3, open-source GUI agent models and frameworks. Mobile-Agent-v3 achieves new state-of-the-art performance on GUI automation benchmarks such as AndroidWorld and OSWorld by building on GUI-Owl's innovations in environment infrastructure and agent capabilities.
🔹 Publication Date: Published on Aug 21, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2508.15144
• PDF: https://arxiv.org/pdf/2508.15144
• Github: https://github.com/X-PLUG/MobileAgent
🔹 Models citing this paper:
• https://huggingface.co/mPLUG/GUI-Owl-7B
• https://huggingface.co/mPLUG/GUI-Owl-32B
• https://huggingface.co/mPLUG/GUI-Owl-7B-Desktop-RL
==================================
#GUIAgent #Automation #AI #OpenSource #MachineLearning
✨VidEoMT: Your ViT is Secretly Also a Video Segmentation Model
📝 Summary:
VidEoMT is a video segmentation model that eliminates complex tracking modules by using a Vision Transformer encoder with query propagation and fusion. This enables efficient temporal modeling, achieving competitive accuracy and 5-10x faster processing speeds.
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17807
• PDF: https://arxiv.org/pdf/2602.17807
• Project Page: https://www.tue-mps.org/videomt/
• Github: https://github.com/tue-mps/videomt
==================================
#VideoSegmentation #VisionTransformers #ComputerVision #DeepLearning #AIResearch
✨Sink-Aware Pruning for Diffusion Language Models
📝 Summary:
Diffusion Language Models have high inference costs. This paper finds that their attention sinks are often unstable, unlike in autoregressive models. Sink-Aware Pruning identifies and removes these unstable sinks, improving efficiency and quality without retraining.
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17664
• PDF: https://arxiv.org/pdf/2602.17664
• Github: https://github.com/VILA-Lab/Sink-Aware-Pruning
==================================
#DiffusionModels #LanguageModels #ModelPruning #NLP #AIResearch
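The paper's sink-detection criterion is not reproduced here; as a hedged sketch of the idea, an "attention sink" can be flagged as a token that receives an outsized share of attention mass, and an *unstable* one as a token that is a sink at only some denoising steps. The thresholds and function name are illustrative:

```python
import numpy as np

def find_unstable_sinks(attn_per_step, sink_thresh=0.2, stability=0.8):
    """attn_per_step: (steps, tokens) share of attention mass each token
    receives at each denoising step.  A token is a sink at a step if its
    share exceeds sink_thresh; it is unstable if it is a sink at some
    steps but at fewer than `stability` of them."""
    is_sink = np.asarray(attn_per_step) > sink_thresh   # (steps, tokens)
    sink_rate = is_sink.mean(axis=0)                    # fraction of steps
    return np.where((sink_rate > 0) & (sink_rate < stability))[0]

attn = np.array([[0.50, 0.30, 0.10],   # token 0: sink at every step (stable)
                 [0.60, 0.05, 0.10],   # token 1: sink only intermittently
                 [0.55, 0.25, 0.10]])  # token 2: never a sink
assert list(find_unstable_sinks(attn)) == [1]
```

Pruning would then drop the attention contributions of the flagged tokens, with no retraining.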
✨PersonaLive! Expressive Portrait Image Animation for Live Streaming
📝 Summary:
PersonaLive enables real-time, expressive portrait animation for live streaming. It uses hybrid implicit signals, appearance distillation, and autoregressive streaming generation to achieve low-latency, stable results with up to 22x speedup.
🔹 Publication Date: Published on Dec 12, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.11253
• PDF: https://arxiv.org/pdf/2512.11253
• Github: https://github.com/GVCLab/PersonaLive
🔹 Models citing this paper:
• https://huggingface.co/huaichang/PersonaLive
✨ Spaces citing this paper:
• https://huggingface.co/spaces/seawolf2357/personalive
==================================
#PortraitAnimation #LiveStreaming #RealtimeAI #ComputerVision #GenerativeAI
✨Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers
📝 Summary:
This paper recasts decoding as an optimization problem on the probability simplex, balancing model scores against structural preferences. This view unifies existing methods and enables new decoders such as Best-of-K, improving accuracy on tasks like mathematical reasoning.
🔹 Publication Date: Published on Feb 20
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.18292
• PDF: https://arxiv.org/pdf/2602.18292
==================================
#DecodingStrategies #Optimization #LLMs #MathematicalReasoning #MachineLearning
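The Best-of-K sampler itself is new in the paper and not reproduced here, but the top-k and top-p (nucleus) truncations it reinterprets are standard and easy to sketch:

```python
import numpy as np

def top_k_filter(probs, k):
    """Keep the k most likely tokens, zero the rest, renormalize."""
    probs = np.asarray(probs, dtype=float)
    keep = np.argsort(probs)[-k:]
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

def top_p_filter(probs, p):
    """Nucleus sampling: keep the smallest high-probability set whose
    cumulative mass reaches p, then renormalize."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]                  # most likely first
    csum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(csum, p)) + 1       # nucleus size
    out = np.zeros_like(probs)
    out[order[:cutoff]] = probs[order[:cutoff]]
    return out / out.sum()

probs = [0.5, 0.3, 0.1, 0.1]
assert np.allclose(top_k_filter(probs, 2), [0.625, 0.375, 0.0, 0.0])
assert np.allclose(top_p_filter(probs, 0.75), [0.625, 0.375, 0.0, 0.0])
```

In the paper's framing, renormalized truncations like these arise as solutions of constrained optimizations over the simplex.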
✨4RC: 4D Reconstruction via Conditional Querying Anytime and Anywhere
📝 Summary:
4RC introduces a unified feed-forward framework for 4D reconstruction from monocular video. It learns holistic scene geometry and motion dynamics using a novel transformer-based 'encode-once, query-anywhere and anytime' approach. This method significantly outperforms prior 4D reconstruction techniques.
🔹 Publication Date: Published on Feb 10
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.10094
• PDF: https://arxiv.org/pdf/2602.10094
• Project Page: https://yihangluo.com/projects/4RC/
==================================
#4DReconstruction #ComputerVision #DeepLearning #NeuralNetworks #MonocularVideo
✨Spanning the Visual Analogy Space with a Weight Basis of LoRAs
📝 Summary:
LoRWeB improves visual analogy learning by dynamically composing a basis of LoRA modules. It uses an encoder to select and weigh multiple LoRAs at inference time, rather than a single fixed module. This achieves state-of-the-art performance and significantly better generalization for image manipulation.
🔹 Publication Date: Published on Feb 17
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.15727
• PDF: https://arxiv.org/pdf/2602.15727
• Project Page: https://research.nvidia.com/labs/par/lorweb/
• Github: https://github.com/NVlabs/LoRWeB
✨ Datasets citing this paper:
• https://huggingface.co/datasets/hilamanor/LoRWeB_evalset
==================================
#LoRA #VisualAnalogies #DeepLearning #AI #ComputerVision
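LoRWeB's encoder and training are behind the links; the core mechanism, mixing a basis of low-rank (LoRA-style) updates with per-input weights at inference time, can be sketched in a few lines. Dimensions, the `compose` helper, and the hand-picked weights are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_basis = 8, 2, 4

# A basis of low-rank updates; each delta_i = B_i @ A_i has rank <= r.
basis = [(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
         for _ in range(n_basis)]

def compose(weights, W0):
    """Weighted combination of the basis deltas applied to base weights W0.
    In LoRWeB the weights would come from a learned encoder conditioned
    on the analogy; here they are fixed numbers."""
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, basis))
    return W0 + delta

W = compose([0.7, 0.0, 0.3, 0.0], np.eye(d))
# Two active rank-r terms -> the update has rank at most 2 * r.
assert np.linalg.matrix_rank(W - np.eye(d)) <= 2 * r
```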
✨Adam Improves Muon: Adaptive Moment Estimation with Orthogonalized Momentum
📝 Summary:
NAMO and NAMO-D are new optimizers combining orthogonalized momentum with Adam-type noise adaptation. They show improved convergence and better performance on LLM pretraining than AdamW and Muon, with NAMO-D adding neuron-wise adaptation for further gains.
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17080
• PDF: https://arxiv.org/pdf/2602.17080
==================================
#MachineLearning #DeepLearning #LLM #Optimizers #Adam
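NAMO's exact update is in the paper; the orthogonalized-momentum ingredient it shares with Muon is typically implemented with a Newton-Schulz iteration that maps the momentum matrix to (approximately) its orthogonal polar factor. The plain cubic iteration below is a textbook version, not the tuned coefficients production optimizers use:

```python
import numpy as np

def newton_schulz_orthogonalize(M, steps=20):
    """Approximate the orthogonal polar factor UV^T of M via the cubic
    Newton-Schulz iteration X <- 1.5 X - 0.5 X X^T X.  Scaling by the
    Frobenius norm keeps all singular values <= 1, which guarantees
    convergence of the iteration."""
    X = M / (np.linalg.norm(M) + 1e-12)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

M = np.array([[2.0, 1.0],
              [0.0, 1.0]])                 # stand-in momentum matrix
Q = newton_schulz_orthogonalize(M)
assert np.allclose(Q @ Q.T, np.eye(2), atol=1e-6)   # Q is near-orthogonal
```

An Adam-style step would then rescale this orthogonalized direction with second-moment (noise) statistics, which is the combination the paper studies.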
✨Avey-B
📝 Summary:
This paper reformulates the Avey architecture for encoder-only tasks, introducing innovations such as decoupled parameterizations and neural compression. The new model consistently outperforms Transformer-based encoders on token classification and information retrieval, while also scaling more efficiently.
🔹 Publication Date: Published on Feb 17
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.15814
• PDF: https://arxiv.org/pdf/2602.15814
• Github: https://github.com/rimads/avey-b
🔹 Models citing this paper:
• https://huggingface.co/avey-ai/avey-b1-base-exp
• https://huggingface.co/avey-ai/avey-b1-large-exp
==================================
#AI #DataScience #MachineLearning #HuggingFace #Research
✨ReIn: Conversational Error Recovery with Reasoning Inception
📝 Summary:
Conversational agents with tool integration face challenges from user-induced errors, but a test-time intervention method called Reasoning Inception (ReIn) enables error recovery by injecting external...
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17022
• PDF: https://arxiv.org/pdf/2602.17022
==================================
#AI #DataScience #MachineLearning #HuggingFace #Research