✨DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos
📝 Summary:
DreamDojo is a foundation world model trained on 44k hours of egocentric human videos that enables efficient simulation of dexterous robotic tasks through continuous latent actions and real-time disti...
🔹 Publication Date: Published on Feb 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06949
• PDF: https://arxiv.org/pdf/2602.06949
• Project Page: https://dreamdojo-world.github.io/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Back to Basics: Revisiting Exploration in Reinforcement Learning for LLM Reasoning via Generative Probabilities
📝 Summary:
ProGRPO is a novel RL method for LLM reasoning that tackles entropy collapse. It dynamically re-weights rewards to equilibrate confidence across correct responses, enhancing generative diversity and exploration. ProGRPO significantly outperforms standard methods on reasoning benchmarks.
🔹 Publication Date: Published on Feb 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.05281
• PDF: https://arxiv.org/pdf/2602.05281
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#ReinforcementLearning #LLM #AI #GenerativeAI #MachineLearning
✨Exploring Knowledge Purification in Multi-Teacher Knowledge Distillation for LLMs
📝 Summary:
This paper introduces Knowledge Purification, consolidating multi-teacher LLM rationales to reduce conflicts and improve distillation efficiency. Methods improve model performance and reduce conflicts; router-based methods generalize robustly.
🔹 Publication Date: Published on Feb 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01064
• PDF: https://arxiv.org/pdf/2602.01064
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#LLM #KnowledgeDistillation #KnowledgePurification #AI #DeepLearning
✨Judging What We Cannot Solve: A Consequence-Based Approach for Oracle-Free Evaluation of Research-Level Math
📝 Summary:
Consequence-Based Utility evaluates math solutions by testing their value as in-context exemplars for related problems. This oracle-free approach outperforms reward models and LLM judges, improving ranking quality and correct-wrong separation of AI-generated solutions.
🔹 Publication Date: Published on Feb 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06291
• PDF: https://arxiv.org/pdf/2602.06291
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AIEvaluation #LLMEvaluation #MathAI #ArtificialIntelligence #MachineLearning
✨MSign: An Optimizer Preventing Training Instability in Large Language Models via Stable Rank Restoration
📝 Summary:
LLM training instability is linked to weight matrix stable rank decline and Jacobian alignment, causing gradient explosions. MSign is a new optimizer that restores stable rank via matrix sign operations, effectively preventing training failures with low computational overhead.
🔹 Publication Date: Published on Feb 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.01734
• PDF: https://arxiv.org/pdf/2602.01734
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Baichuan-M3: Modeling Clinical Inquiry for Reliable Medical Decision-Making
📝 Summary:
Baichuan-M3 is a medical LLM for clinical decision support. It uses proactive info gathering, long-horizon reasoning, and hallucination suppression. It outperforms GPT-5.2 on medical benchmarks in clinical inquiry and safety.
🔹 Publication Date: Published on Feb 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06570
• PDF: https://arxiv.org/pdf/2602.06570
• Github: https://github.com/baichuan-inc/Baichuan-M3-235B
🔹 Models citing this paper:
• https://huggingface.co/baichuan-inc/Baichuan-M3-235B
• https://huggingface.co/baichuan-inc/Baichuan-M3-235B-GPTQ-INT4
• https://huggingface.co/baichuan-inc/Baichuan-M3-235B-FP8
✨ Spaces citing this paper:
• https://huggingface.co/spaces/baichuan-inc/Baichuan-M3-Inquiry
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨SEMA: Simple yet Effective Learning for Multi-Turn Jailbreak Attacks
📝 Summary:
A novel framework called SEMA is introduced that effectively trains multi-turn attackers for large language models without relying on existing strategies or external data, achieving state-of-the-art a...
🔹 Publication Date: Published on Feb 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06854
• PDF: https://arxiv.org/pdf/2602.06854
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨RaBiT: Residual-Aware Binarization Training for Accurate and Efficient LLMs
📝 Summary:
Residual binarization framework RaBiT addresses feature co-adaptation in quantized LLMs through hierarchical path derivation and robust initialization, achieving superior accuracy-efficiency trade-off...
🔹 Publication Date: Published on Feb 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.05367
• PDF: https://arxiv.org/pdf/2602.05367
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨OdysseyArena: Benchmarking Large Language Models For Long-Horizon, Active and Inductive Interactions
📝 Summary:
OdysseyArena presents a new framework for evaluating large language models on long-horizon, inductive agent tasks that emphasize autonomous discovery of environmental transition laws.
🔹 Publication Date: Published on Feb 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.05843
• PDF: https://arxiv.org/pdf/2602.05843
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨SeeUPO: Sequence-Level Agentic-RL with Convergence Guarantees
📝 Summary:
SeeUPO is a critic-free reinforcement learning method that ensures convergence guarantees in multi-turn agent interactions by modeling sequential decision-making as multi-agent bandit problems and usi...
🔹 Publication Date: Published on Feb 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06554
• PDF: https://arxiv.org/pdf/2602.06554
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
Forwarded from Machine Learning
🚀 Machine Learning Workflow: Step-by-Step Breakdown
Understanding the ML pipeline is essential to building scalable, production-grade models.
👉 Initial Dataset
Start with raw data. Apply cleaning, curation, and drop irrelevant or redundant features.
Example: Drop constant features or remove columns with 90% missing values.
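A minimal pandas sketch of this cleaning step on a toy frame (column names and the 90% threshold are illustrative):

```python
import numpy as np
import pandas as pd

# Toy frame: one constant column, one fully missing column, one useful column.
df = pd.DataFrame({
    "const": [1, 1, 1, 1, 1],
    "sparse": [np.nan] * 5,
    "income": [30, 45, 52, 61, 38],
})

# Drop columns that are at least 90% missing.
df = df.loc[:, df.isna().mean() < 0.9]
# Drop constant columns: a single unique value carries no signal.
df = df.loc[:, df.nunique(dropna=False) > 1]
print(list(df.columns))  # only 'income' survives
```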
👉 Exploratory Data Analysis (EDA)
Use mean, median, standard deviation, correlation, and missing value checks.
Techniques like PCA and LDA help with dimensionality reduction.
Example: Use PCA to reduce 50 features down to 10 while retaining 95% variance.
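The PCA example above can be sketched with scikit-learn: passing a float to `n_components` keeps the smallest number of components explaining at least that fraction of variance (the synthetic 50-feature data below is illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 samples, 50 correlated features built from 10 latent factors.
latent = rng.normal(size=(200, 10))
mixing = rng.normal(size=(10, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# n_components=0.95 keeps enough components for >= 95% explained variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # roughly (200, 10) for this data
```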
👉 Input Variables
Structured table with features like ID, Age, Income, Loan Status, etc.
Ensure numeric encoding and feature engineering are complete before training.
👉 Processed Dataset
Split the data into training (70%) and testing (30%) sets.
Example: Stratified sampling ensures target distribution consistency.
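A minimal sketch of the stratified 70/30 split with scikit-learn; `stratify=y` preserves the class ratio in both partitions (the synthetic data is illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.2).astype(int)  # roughly 20% positive class

# stratify=y keeps the positive rate nearly identical across splits.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)
print(round(y_tr.mean(), 2), round(y_te.mean(), 2))
```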
👉 Learning Algorithms
Apply algorithms like SVM, Logistic Regression, KNN, Decision Trees, or Ensemble models like Random Forest and Gradient Boosting.
Example: Use Random Forest to capture non-linear interactions in tabular data.
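The Random Forest example can be sketched as follows (synthetic tabular data stands in for a real dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular classification task with non-linear structure.
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))  # held-out accuracy
```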
👉 Hyperparameter Optimization
Tune parameters using Grid Search or Random Search for better performance.
Example: Optimize max_depth and n_estimators in Gradient Boosting.
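The grid-search example above, sketched with `GridSearchCV` over the two named parameters (the grid values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Exhaustive search over a small grid of max_depth and n_estimators.
param_grid = {"max_depth": [2, 3], "n_estimators": [50, 100]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

`RandomizedSearchCV` has the same interface and is usually cheaper when the grid is large.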
👉 Feature Selection
Use model-based importance ranking (e.g., from Random Forest) to remove noisy or irrelevant features.
Example: Drop features with zero importance to reduce overfitting.
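A sketch of importance-based selection; since Random Forest importances are rarely exactly zero in practice, a small threshold (here, the mean importance, an illustrative choice) is used to drop the noisy columns:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 5 informative features plus 5 pure-noise columns.
X, y = make_classification(n_samples=400, n_features=10, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Keep only features whose importance exceeds the mean importance.
keep = clf.feature_importances_ > clf.feature_importances_.mean()
X_selected = X[:, keep]
print(keep.sum(), "features kept out of", X.shape[1])
```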
👉 Model Training and Validation
Use cross-validation to evaluate generalization. Train final model on full training set.
Example: 5-fold cross-validation for reliable performance metrics.
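The 5-fold cross-validation example, sketched with `cross_val_score` (synthetic data for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# cv=5 trains and scores on 5 train/validation folds.
scores = cross_val_score(clf, X, y, cv=5)
print(len(scores), round(scores.mean(), 2))

# After validation, the final model is refit on the full training set.
clf.fit(X, y)
```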
👉 Model Evaluation
Use task-specific metrics:
- Classification – MCC, Sensitivity, Specificity, Accuracy
- Regression – RMSE, R², MSE
Example: For imbalanced classes, prefer MCC over simple accuracy.
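The imbalanced-classes point above can be made concrete: a degenerate classifier that always predicts the majority class scores high accuracy but zero MCC.

```python
import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Heavily imbalanced ground truth: 95 negatives, 5 positives.
y_true = np.array([0] * 95 + [1] * 5)
# A degenerate "classifier" that always predicts the majority class.
y_pred = np.zeros(100, dtype=int)

print(accuracy_score(y_true, y_pred))     # 0.95 -- looks great
print(matthews_corrcoef(y_true, y_pred))  # 0.0 -- no predictive skill
```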
💡 This workflow ensures models are robust, interpretable, and ready for deployment in real-world applications.
https://t.iss.one/DataScienceM
✨Avoiding Premature Collapse: Adaptive Annealing for Entropy-Regularized Structural Inference
📝 Summary:
Optimal transport models face premature mode collapse and instability during annealing as standard cooling is too fast. EPH-ASC, an adaptive algorithm, solves this by enforcing a linear stability law, preventing gradient explosions and stabilizing training.
🔹 Publication Date: Published on Jan 30
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.23039
• PDF: https://arxiv.org/pdf/2601.23039
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨F-GRPO: Don't Let Your Policy Learn the Obvious and Forget the Rare
📝 Summary:
RLVR methods using group sampling suffer from bias toward likely trajectories and missed rare-correct ones; a difficulty-aware advantage scaling technique improves performance on benchmarks without in...
🔹 Publication Date: Published on Feb 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06717
• PDF: https://arxiv.org/pdf/2602.06717
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Seg-ReSearch: Segmentation with Interleaved Reasoning and External Search
📝 Summary:
Seg-ReSearch introduces a novel segmentation approach that combines interleaved reasoning with external search to overcome limitations of frozen MLLM knowledge, using hierarchical reward design for tr...
🔹 Publication Date: Published on Feb 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.04454
• PDF: https://arxiv.org/pdf/2602.04454
• Github: https://github.com/iSEE-Laboratory/Seg-ReSearch
✨ Datasets citing this paper:
• https://huggingface.co/datasets/iSEE-Laboratory/OK_VOS
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Vision Transformer Finetuning Benefits from Non-Smooth Components
📝 Summary:
Vision transformer components exhibit varying plasticity levels that correlate with finetuning performance, challenging the assumption that smoothness is always beneficial.
🔹 Publication Date: Published on Feb 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06883
• PDF: https://arxiv.org/pdf/2602.06883
• Github: https://github.com/ambroiseodt/vit-plasticity
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨AudioSAE: Towards Understanding of Audio-Processing Models with Sparse AutoEncoders
📝 Summary:
AudioSAE applies sparse autoencoders to Whisper and HuBERT models, extracting stable acoustic and semantic features. These features disentangle information, reduce false speech detections, and correlate with human EEG, demonstrating practical utility.
🔹 Publication Date: Published on Feb 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.05027
• PDF: https://arxiv.org/pdf/2602.05027
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AudioAI #SparseAutoencoders #MachineLearning #SpeechRecognition #Neuroscience
✨Group-Evolving Agents: Open-Ended Self-Improvement via Experience Sharing
📝 Summary:
Group-Evolving Agents (GEA) enable open-ended self-improvement by treating agent groups as evolutionary units, allowing efficient experience sharing. GEA significantly outperforms state-of-the-art self-evolving methods on coding benchmarks, demonstrating enhanced robustness and sustained progress.
🔹 Publication Date: Published on Feb 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.04837
• PDF: https://arxiv.org/pdf/2602.04837
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #EvolutionaryAI #MultiAgentSystems #OpenEndedLearning #MachineLearning
✨Urban Spatio-Temporal Foundation Models for Climate-Resilient Housing: Scaling Diffusion Transformers for Disaster Risk Prediction
📝 Summary:
This paper presents Skjold-DiT, a diffusion-transformer framework predicting building-level climate risks. It integrates urban data and transportation networks to generate accessibility layers for emergency response and intelligent vehicles. Experiments evaluate its prediction quality and cross-c...
🔹 Publication Date: Published on Feb 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06129
• PDF: https://arxiv.org/pdf/2602.06129
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#UrbanAI #ClimateResilience #DisasterRisk #DiffusionModels #SpatioTemporalAI
✨Canzona: A Unified, Asynchronous, and Load-Balanced Framework for Distributed Matrix-based Optimizers
📝 Summary:
Canzona presents a unified asynchronous framework that addresses the conflict between matrix-based optimizers and distributed tensor fragmentation in LLM training, improving efficiency and reducing la...
🔹 Publication Date: Published on Feb 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06079
• PDF: https://arxiv.org/pdf/2602.06079
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨QuantLRM: Quantization of Large Reasoning Models via Fine-Tuning Signals
📝 Summary:
QuantLRM improves Large Reasoning Model quantization by using weight update magnitudes from fine-tuning to estimate channel importance. It protects both smallest and largest updates, consistently outperforming traditional methods and applying even to non-fine-tuned models.
🔹 Publication Date: Published on Jan 31
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.02581
• PDF: https://arxiv.org/pdf/2602.02581
• Github: https://github.com/psunlpgroup/QuantLRM
🔹 Models citing this paper:
• https://huggingface.co/nanzhang/QuantLRM-R1-Qwen-32B-3-bit
• https://huggingface.co/nanzhang/QuantLRM-R1-Llama-70B-3-bit
• https://huggingface.co/nanzhang/QuantLRM-R1-Qwen3-8B-3-bit
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#Quantization #LargeLanguageModels #DeepLearning #AI #ModelCompression
✨Table-as-Search: Formulate Long-Horizon Agentic Information Seeking as Table Completion
📝 Summary:
Table-as-Search (TaS) reformulates information seeking as table completion to robustly manage long-horizon search states. By mapping queries to structured tables, TaS explicitly tracks progress and plans, significantly outperforming baselines in complex search tasks.
🔹 Publication Date: Published on Feb 6
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.06724
• PDF: https://arxiv.org/pdf/2602.06724
• Github: https://github.com/AIDC-AI/Marco-DeepResearch/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #InformationRetrieval #AgenticAI #TableCompletion #SearchAlgorithms