✨Accent Vector: Controllable Accent Manipulation for Multilingual TTS Without Accented Data
📝 Summary:
Accent Vector enables controllable accent manipulation in multilingual TTS systems by fine-tuning on native speech from different languages and computing task vectors that capture accent characteristics.
🔹 Publication Date: Published on Mar 8
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07534
• PDF: https://arxiv.org/pdf/2603.07534
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
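The core mechanism described above is a weight-space task vector: subtract a base checkpoint from a fine-tuned one, then add a scaled copy back. A minimal NumPy sketch, where the parameter dict, the "accented" checkpoint, and the alpha scale are illustrative stand-ins rather than the paper's actual setup:

```python
import numpy as np

def task_vector(base, finetuned):
    # weight-space delta between a fine-tuned and a base checkpoint
    return {k: finetuned[k] - base[k] for k in base}

def apply_vector(base, vec, alpha=1.0):
    # steer the base model by adding a scaled task vector
    return {k: base[k] + alpha * vec[k] for k in base}

base = {"w": np.array([1.0, 2.0])}       # hypothetical base weights
accented = {"w": np.array([1.5, 1.0])}   # hypothetical accent-fine-tuned weights
vec = task_vector(base, accented)
steered = apply_vector(base, vec, alpha=0.5)  # alpha controls accent strength
```

Varying `alpha` is what makes the manipulation controllable: intermediate values interpolate between the native and accented checkpoints.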
✨The Curse and Blessing of Mean Bias in FP4-Quantized LLM Training
📝 Summary:
LLM anisotropy caused by rank-one mean bias in low-bit training can be stabilized through mean subtraction, recovering performance while enabling efficient hardware deployment.
🔹 Publication Date: Published on Mar 11
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10444
• PDF: https://arxiv.org/pdf/2603.10444
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
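The mean-subtraction idea can be illustrated with fake quantization: pull out the rank-one mean component, quantize the residual at low precision, and restore the mean in full precision. The symmetric 4-bit scheme below is assumed for illustration and is not the paper's exact FP4 format:

```python
import numpy as np

def fake_quant(x, bits=4):
    # symmetric fake quantization: round onto 2**(bits-1)-1 signed levels
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        return x.copy()
    return np.round(x / scale) * scale

def quant_with_mean_subtraction(x, bits=4):
    # remove the rank-one mean component before low-bit quantization,
    # then add it back in full precision
    mu = x.mean(axis=-1, keepdims=True)
    return fake_quant(x - mu, bits) + mu

x = np.array([[1.0, 2.0, 3.0]])
y = quant_with_mean_subtraction(x)
```

Centering shrinks the dynamic range the quantizer has to cover, which is why a large shared mean ("mean bias") is the curse, and subtracting it is the blessing.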
✨4DEquine: Disentangling Motion and Appearance for 4D Equine Reconstruction from Monocular Video
📝 Summary:
4DEquine is a new framework for 4D equine reconstruction from monocular video. It disentangles motion, modeled with spatio-temporal transformers, from appearance, modeled with 3D Gaussian avatars. Trained on synthetic data, it achieves state-of-the-art results on real-world datasets.
🔹 Publication Date: Published on Mar 10
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10125
• PDF: https://arxiv.org/pdf/2603.10125
• Project Page: https://luoxue-star.github.io/4DEquine_Project_Page/
• Github: https://github.com/luoxue-star/4DEquine
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#ComputerVision #4DReconstruction #DeepLearning #Equine #AI
✨Training Language Models via Neural Cellular Automata
📝 Summary:
This paper introduces the use of Neural Cellular Automata (NCA) to generate synthetic data for pre-pretraining language models, addressing limitations of natural-language data. The approach improves performance, accelerates convergence, and transfers to reasoning tasks, often outperforming extensive natural-language pre-training.
🔹 Publication Date: Published on Mar 9
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10055
• PDF: https://arxiv.org/pdf/2603.10055
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #LanguageModels #NeuralCellularAutomata #SyntheticData #NLP
✨XSkill: Continual Learning from Experience and Skills in Multimodal Agents
📝 Summary:
XSkill is a dual-stream framework for continual learning in multimodal agents. It extracts and retrieves knowledge from visual observations, consolidating experiences and skills. This improves tool use efficiency, reasoning, and zero-shot generalization.
🔹 Publication Date: Published on Mar 12
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.12056
• PDF: https://arxiv.org/pdf/2603.12056
• Project Page: https://xskill-agent.github.io/xskill_page/
• Github: https://github.com/XSkill-Agent/XSkill
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#ContinualLearning #MultimodalAI #AIagents #MachineLearning #Robotics
✨Neural Field Thermal Tomography: A Differentiable Physics Framework for Non-Destructive Evaluation
📝 Summary:
NeFTY is a new differentiable physics framework that reconstructs 3D material properties from temperature measurements. It uses continuous neural fields and hard constraints, overcoming prior limitations and accurately locating subsurface defects.
🔹 Publication Date: Published on Mar 11
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11045
• PDF: https://arxiv.org/pdf/2603.11045
• Project Page: https://cab-lab-princeton.github.io/nefty/
• Github: https://cab-lab-princeton.github.io/nefty/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#NeuralFields #DifferentiablePhysics #NDE #MaterialScience #DeepLearning
✨Causal Attribution of Coastal Water Clarity Degradation to Nickel Processing Expansion at the Indonesia Morowali Industrial Park, Sulawesi
📝 Summary:
Satellite data and causal analysis established that Indonesia Morowali Industrial Park expansion caused significant coastal water clarity degradation. This impact, linked to battery-grade nickel production, threatens marine biodiversity and coral reefs.
🔹 Publication Date: Published on Mar 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.07331
• PDF: https://arxiv.org/pdf/2603.07331
• Github: https://github.com/sandyherho/supplMorowaliOcean
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#EnvironmentalScience #CoastalDegradation #NickelMining #MarineConservation #Indonesia
✨A Mixed Diet Makes DINO An Omnivorous Vision Encoder
📝 Summary:
The Omnivorous Vision Encoder learns modality-agnostic features by aligning multi-modal scene inputs and distilling semantics from a frozen teacher model. This resolves poor cross-modal alignment in existing encoders, yielding consistent, powerful embeddings for various modalities.
🔹 Publication Date: Published on Feb 27
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.24181
• PDF: https://arxiv.org/pdf/2602.24181
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#MultimodalAI #ComputerVision #DeepLearning #SelfSupervisedLearning #AIResearch
✨Dr. SHAP-AV: Decoding Relative Modality Contributions via Shapley Attribution in Audio-Visual Speech Recognition
📝 Summary:
Dr. SHAP-AV uses Shapley values to analyze audio-visual speech recognition modality contributions. Findings show models shift toward visual under noise but maintain a persistent audio bias. This method serves as a key diagnostic tool for AVSR.
🔹 Publication Date: Published on Mar 12
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.12046
• PDF: https://arxiv.org/pdf/2603.12046
• Project Page: https://umbertocappellazzo.github.io/Dr-SHAP-AV/
• Github: https://github.com/umbertocappellazzo/Dr-SHAP-AV
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AVSR #ShapleyValues #ExplainableAI #MultimodalAI #SpeechRecognition
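With only two modalities, Shapley attribution has an exact closed form: each modality's value is its average marginal contribution over the two join orders. A small sketch where the `scores` values are invented accuracies for illustration, not results from the paper:

```python
def shapley_two_modalities(v):
    # exact Shapley values for a two-player (audio, video) game;
    # v maps modality subsets (frozensets) to a performance score
    E = frozenset()
    A = frozenset({"audio"})
    V = frozenset({"video"})
    AV = frozenset({"audio", "video"})
    phi_audio = 0.5 * (v[A] - v[E]) + 0.5 * (v[AV] - v[V])
    phi_video = 0.5 * (v[V] - v[E]) + 0.5 * (v[AV] - v[A])
    return phi_audio, phi_video

# illustrative (made-up) accuracies: a clean-speech AVSR model leaning on audio
scores = {frozenset(): 0.1, frozenset({"audio"}): 0.7,
          frozenset({"video"}): 0.3, frozenset({"audio", "video"}): 0.9}
phi_a, phi_v = shapley_two_modalities(scores)
```

By the efficiency property, `phi_a + phi_v` equals the full model's score minus the empty baseline, so the two attributions jointly account for all of the performance, which is what makes the "shift toward visual under noise" finding quantifiable.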
✨Simple Recipe Works: Vision-Language-Action Models are Natural Continual Learners with Reinforcement Learning
📝 Summary:
Contrary to established belief, simple sequential fine-tuning with low-rank adaptation is highly effective for continual reinforcement learning in large Vision-Language-Action models. It achieves excellent plasticity and avoids catastrophic forgetting, often outperforming complex methods.
🔹 Publication Date: Published on Mar 12
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11653
• PDF: https://arxiv.org/pdf/2603.11653
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#ReinforcementLearning #ContinualLearning #VLAmodels #AI #MachineLearning
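The "simple recipe" here boils down to a frozen pretrained weight plus a trainable low-rank delta per task. A minimal LoRA-style forward pass in NumPy; the dimensions, `alpha`, and rank are illustrative defaults, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 4, 6, 2, 16
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero at init

def forward(x):
    # LoRA: frozen weight plus a scaled low-rank update (alpha / r) * B @ A
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((1, d_in))
```

Because `B` starts at zero, the update is a no-op at initialization, and sequential fine-tuning only ever perturbs the model through this small subspace, which is one intuition for why catastrophic forgetting stays mild.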
✨HyPER-GAN: Hybrid Patch-Based Image-to-Image Translation for Real-Time Photorealism Enhancement
📝 Summary:
HyPER-GAN is a lightweight U-Net based model for real-time photorealism enhancement. Its hybrid training strategy, using real-world patches, improves visual realism, semantic consistency, and inference speed over state-of-the-art methods.
🔹 Publication Date: Published on Mar 11
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.10604
• PDF: https://arxiv.org/pdf/2603.10604
• Github: https://github.com/stefanos50/HyPER-GAN
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#GAN #ComputerVision #DeepLearning #ImageProcessing #Photorealism
✨PACED: Distillation at the Frontier of Student Competence
📝 Summary:
PACED optimizes distillation by focusing training on a student competence frontier using a Beta kernel weighting. Derived from gradient analysis, this avoids wasted compute at extremes, boosting distillation and self-distillation performance.
🔹 Publication Date: Published on Mar 11
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11178
• PDF: https://arxiv.org/pdf/2603.11178
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#KnowledgeDistillation #DeepLearning #ModelOptimization #AIResearch #ComputeEfficiency
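The Beta-kernel weighting can be sketched directly from the Beta density over the student's success probability: examples the student already solves (or cannot solve at all) get near-zero weight, while the competence frontier gets the most. The choice a = b = 2 below is an illustrative default, not the paper's tuned parameters:

```python
from math import gamma

def beta_kernel_weight(p, a=2.0, b=2.0):
    # Beta(a, b) density over the student's success probability p in (0, 1);
    # peaks at intermediate competence and downweights both extremes
    B = gamma(a) * gamma(b) / gamma(a + b)
    return p ** (a - 1) * (1 - p) ** (b - 1) / B

w_mid = beta_kernel_weight(0.5)    # frontier example: highest weight
w_easy = beta_kernel_weight(0.95)  # nearly solved: little training signal
w_hard = beta_kernel_weight(0.05)  # far beyond competence: wasted compute
```

Multiplying each example's distillation loss by such a weight concentrates gradient signal where the student can actually improve.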
✨SurvHTE-Bench: A Benchmark for Heterogeneous Treatment Effect Estimation in Survival Analysis
📝 Summary:
SurvHTE-Bench is the first comprehensive benchmark for estimating heterogeneous treatment effects with censored survival data. It offers synthetic, semi-synthetic, and real-world datasets for rigorous and reproducible evaluation of causal survival methods.
🔹 Publication Date: Published on Mar 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.05483
• PDF: https://arxiv.org/pdf/2603.05483
• Github: https://github.com/Shahriarnz14/SurvHTE-Bench
✨ Datasets citing this paper:
• https://huggingface.co/datasets/snoroozi/SurvHTE-Bench
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Meta-Reinforcement Learning with Self-Reflection for Agentic Search
📝 Summary:
MR-Search is a meta-reinforcement learning approach for agentic search that uses self-reflection. It conditions on past episodes to adapt search strategies and improve in-context exploration. This method shows strong generalization and significant performance gains across various benchmarks.
🔹 Publication Date: Published on Mar 11
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.11327
• PDF: https://arxiv.org/pdf/2603.11327
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨RubiCap: Rubric-Guided Reinforcement Learning for Dense Image Captioning
📝 Summary:
RubiCap introduces a reinforcement learning framework for dense image captioning, using LLM-generated rubrics to provide fine-grained reward signals. This method overcomes limitations of supervised learning and prior RL, achieving superior performance on benchmarks and improving vision-language models.
🔹 Publication Date: Published on Mar 10
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.09160
• PDF: https://arxiv.org/pdf/2603.09160
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Neural Thickets: Diverse Task Experts Are Dense Around Pretrained Weights
📝 Summary:
Large pretrained models have a high density of task-specific experts around their weights. This enables a simple post-training method of random sampling and ensembling to be competitive with complex optimization techniques.
🔹 Publication Date: Published on Mar 12
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.12228
• PDF: https://arxiv.org/pdf/2603.12228
• Project Page: https://thickets.mit.edu
• Github: https://github.com/sunrainyg/RandOpt
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
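The "sample and ensemble" recipe is simple enough to sketch: draw a few random perturbations of the pretrained weights and average their predictions. The toy linear model, `sigma`, and `k` below are placeholders for illustration, not the paper's method in full:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_experts(w0, sigma=0.01, k=8):
    # draw k random perturbations around the pretrained weights
    return [w0 + sigma * rng.standard_normal(w0.shape) for _ in range(k)]

def ensemble_predict(experts, x):
    # toy linear "model": average the sampled experts' predictions
    return np.mean([x @ w for w in experts], axis=0)

w0 = np.ones(3)                       # stand-in for pretrained weights
experts = sample_experts(w0)
pred = ensemble_predict(experts, np.array([1.0, 2.0, 3.0]))
```

The claim of the paper, as summarized, is that because task experts are dense around `w0`, even this optimization-free sampling is competitive with heavier post-training machinery.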
✨CREATE: Testing LLMs for Associative Creativity
📝 Summary:
CREATE is a new benchmark that evaluates LLMs' associative creativity by having them generate diverse and specific concept paths, scoring models on path specificity, diversity, and quantity. Strong models perform well but saturation is hard to achieve, and thinking models don't always improve performance.
🔹 Publication Date: Published on Mar 10
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.09970
• PDF: https://arxiv.org/pdf/2603.09970
• Project Page: https://manyawadhwa.github.io/projects/create/
• Github: https://github.com/ManyaWadhwa/CREATE
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨WaDi: Weight Direction-aware Distillation for One-step Image Synthesis
📝 Summary:
Diffusion model inference is slow. WaDi focuses on weight direction changes during distillation to accelerate models into efficient one-step generators. This achieves state-of-the-art quality with significantly fewer parameters and broad versatility.
🔹 Publication Date: Published on Mar 9
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.08258
• PDF: https://arxiv.org/pdf/2603.08258
• Github: https://github.com/gudaochangsheng/WaDi
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#DiffusionModels #ImageSynthesis #ModelAcceleration #DeepLearning #AIResearch
✨AutoFigure-Edit: Generating Editable Scientific Illustration
📝 Summary:
AutoFigure-Edit generates editable scientific illustrations from text and reference images. It improves editability, style control, and efficiency by combining long-context understanding and native SVG editing for high-quality, flexible refinement.
🔹 Publication Date: Published on Mar 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.06674
• PDF: https://arxiv.org/pdf/2603.06674
• Project Page: https://deepscientist.cc/
• Github: https://github.com/ResearAI/AutoFigure-Edit
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #ScientificIllustration #ImageGeneration #SVG #DeepLearning
✨Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning
📝 Summary:
Nemotron 3 Nano is an efficient Mixture-of-Experts hybrid Mamba-Transformer model. It achieves better accuracy and up to 3.3x higher inference throughput than similar models, while using fewer active parameters and supporting 1M token contexts for enhanced agentic reasoning.
🔹 Publication Date: Published on Dec 23, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.20848
• PDF: https://arxiv.org/pdf/2512.20848
• Github: https://github.com/NVIDIA-NeMo/Nemotron
🔹 Models citing this paper:
• https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16
• https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8
• https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16
✨ Spaces citing this paper:
• https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard
• https://huggingface.co/spaces/hadadxyz/ai
• https://huggingface.co/spaces/hadadxyz/blog
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#Nemotron3Nano #MixtureOfExperts #MambaTransformer #AgenticAI #LLM