✨TactAlign: Human-to-Robot Policy Transfer via Tactile Alignment
📝 Summary:
TactAlign transfers human tactile demonstrations to robots with different embodiments. It aligns human and robot tactile signals into a shared latent space without paired data, improving policy transfer for contact-rich tasks and enabling zero-shot transfer.
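As a rough illustration of the unpaired-alignment idea, the sketch below matches the distributions of human and robot tactile embeddings with an MMD loss; the encoder sizes, sensor dimensions, and the choice of MMD (rather than the paper's actual objective) are assumptions.
```python
# Minimal sketch (not the paper's code): aligning unpaired human and robot
# tactile embeddings by matching their distributions in a shared latent space.
import torch
import torch.nn as nn

class TactileEncoder(nn.Module):
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
    def forward(self, x):
        return self.net(x)

def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy between two unpaired batches."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

human_enc = TactileEncoder(in_dim=32)   # e.g. wearable tactile-glove readings (assumed dim)
robot_enc = TactileEncoder(in_dim=48)   # e.g. robot fingertip sensor readings (assumed dim)
opt = torch.optim.Adam(list(human_enc.parameters()) + list(robot_enc.parameters()), lr=1e-3)

human_batch = torch.randn(256, 32)      # stand-ins for real demonstrations
robot_batch = torch.randn(256, 48)
for _ in range(100):
    loss = rbf_mmd(human_enc(human_batch), robot_enc(robot_batch))
    opt.zero_grad(); loss.backward(); opt.step()
```
A downstream policy trained on the shared latent can then, in principle, consume either embodiment's tactile stream.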
🔹 Publication Date: Published on Feb 14
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.13579
• PDF: https://arxiv.org/pdf/2602.13579
• Project Page: https://yswi.github.io/tactalign/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#Robotics #TactileRobotics #PolicyTransfer #HRI #AI
✨Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5
📝 Summary:
This report assesses frontier AI risks, updating granular scenarios for cyber offense, manipulation, deception, uncontrolled AI R&D, and self-replication. It also proposes robust mitigation strategies for secure deployment of advanced AI systems.
🔹 Publication Date: Published on Feb 16
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.14457
• PDF: https://arxiv.org/pdf/2602.14457
• Project Page: https://ai45lab.github.io/safeworkf1-page/
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning
📝 Summary:
SpargeAttention2 is a trainable sparse attention method that achieves high sparsity in diffusion models while maintaining generation quality through hybrid masking rules and distillation fine-tuning.
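A toy sketch of what a hybrid top-k + top-p attention mask can look like; how the two rules are combined (a union here) and the per-query-row granularity are assumptions, not the paper's exact scheme.
```python
# Minimal sketch of a hybrid top-k + top-p (nucleus) sparsity mask over attention logits.
import torch

def hybrid_sparse_mask(scores, k=8, p=0.9):
    """scores: (heads, q_len, k_len) raw attention logits."""
    probs = scores.softmax(dim=-1)
    # top-k rule: keep the k largest entries per query
    topk_idx = probs.topk(k, dim=-1).indices
    mask_k = torch.zeros_like(probs, dtype=torch.bool).scatter_(-1, topk_idx, True)
    # top-p rule: keep the smallest prefix whose cumulative mass reaches p
    sorted_p, sorted_idx = probs.sort(dim=-1, descending=True)
    keep_sorted = sorted_p.cumsum(dim=-1) - sorted_p < p
    mask_p = torch.zeros_like(probs, dtype=torch.bool).scatter_(-1, sorted_idx, keep_sorted)
    return mask_k | mask_p          # union of the two rules (an assumption)

scores = torch.randn(4, 16, 16)
mask = hybrid_sparse_mask(scores)
attn = scores.masked_fill(~mask, float("-inf")).softmax(dim=-1)
```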
🔹 Publication Date: Published on Feb 13
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.13515
• PDF: https://arxiv.org/pdf/2602.13515
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Mobile-Agent-v3.5: Multi-platform Fundamental GUI Agents
📝 Summary:
GUI-Owl-1.5 is a multi-platform GUI agent model, available in multiple sizes, that achieves superior performance across GUI automation, grounding, tool-calling, and memory tasks through innovations in its data pipeline.
🔹 Publication Date: Published on Feb 15
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16855
• PDF: https://arxiv.org/pdf/2602.16855
• Project Page: https://github.com/X-PLUG/MobileAgent/tree/main/Mobile-Agent-v3.5
• Github: https://github.com/X-PLUG/MobileAgent
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨DDiT: Dynamic Patch Scheduling for Efficient Diffusion Transformers
📝 Summary:
Dynamic tokenization improves diffusion transformer efficiency by adjusting patch sizes based on content complexity and denoising timestep, achieving significant speedup without quality loss.
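A rough sketch of dynamic patch scheduling: pick coarser patches when content is smooth or noise is high, finer patches otherwise. The complexity proxy and thresholds below are illustrative assumptions.
```python
# Minimal sketch: choose a patch size per denoising step from a crude
# content-complexity measure plus denoising progress.
import torch

def pick_patch_size(latent, t, T, sizes=(4, 2, 1)):
    """latent: (C, H, W); t: current denoising step; T: total steps."""
    gx = (latent[:, :, 1:] - latent[:, :, :-1]).abs().mean()
    gy = (latent[:, 1:, :] - latent[:, :-1, :]).abs().mean()
    complexity = min((gx + gy).item(), 1.0)   # crude local-gradient proxy
    progress = 1.0 - t / T                    # 0 early (high noise), 1 late
    score = 0.5 * progress + 0.5 * complexity
    if score < 0.33:
        return sizes[0]                       # coarse patches, fewer tokens
    if score < 0.66:
        return sizes[1]
    return sizes[2]                           # fine patches, more compute

latent = torch.randn(4, 32, 32)
for t in (999, 500, 10):
    print(t, pick_patch_size(latent, t, T=1000))
```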
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16968
• PDF: https://arxiv.org/pdf/2602.16968
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Computer-Using World Model
📝 Summary:
A world model for desktop software predicts UI state changes through textual description followed by visual synthesis, improving decision quality and execution robustness in computer-using tasks.
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17365
• PDF: https://arxiv.org/pdf/2602.17365
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨2Mamba2Furious: Linear in Complexity, Competitive in Accuracy
📝 Summary:
Researchers enhance linear attention by simplifying Mamba-2 and improving its architectural components to achieve near-softmax accuracy while maintaining memory efficiency for long sequences.
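For intuition, here is the generic linear-attention recurrence that yields O(L) cost: a running d_k x d_v state replaces the L x L attention matrix. The feature map is a common textbook choice, not the paper's exact architecture.
```python
# Minimal sketch of causal linear attention with a running state.
import torch

def linear_attention(q, k, v, eps=1e-6):
    """q, k: (L, d_k); v: (L, d_v). Single head, causal."""
    phi = lambda x: torch.nn.functional.elu(x) + 1.0   # positive feature map (assumed)
    q, k = phi(q), phi(k)
    state = torch.zeros(q.shape[-1], v.shape[-1])      # running sum of k_t v_t^T
    norm = torch.zeros(q.shape[-1])                    # running sum of k_t
    out = []
    for t in range(q.shape[0]):
        state = state + torch.outer(k[t], v[t])
        norm = norm + k[t]
        out.append(q[t] @ state / (q[t] @ norm + eps))
    return torch.stack(out)

y = linear_attention(torch.randn(128, 16), torch.randn(128, 16), torch.randn(128, 32))
```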
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17363
• PDF: https://arxiv.org/pdf/2602.17363
• Github: https://github.com/gmongaras/2Mamba2Furious
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨Discovering Multiagent Learning Algorithms with Large Language Models
📝 Summary:
AlphaEvolve, an evolutionary coding agent built on large language models, automatically discovers new multiagent learning algorithms for imperfect-information games by evolving regret-minimization and policy-learning update rules.
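To ground the algorithm family being evolved, here is plain regret matching in self-play on rock-paper-scissors; this is a textbook baseline of the kind such a search starts from, not a rule discovered in the paper.
```python
# Minimal sketch: regret matching in self-play on a zero-sum matrix game.
import numpy as np

def regret_matching_selfplay(payoff, iters=20000):
    """payoff[i, j]: row player's payoff; returns average strategies."""
    n, m = payoff.shape
    row_reg, col_reg = np.zeros(n), np.zeros(m)
    row_avg, col_avg = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        row = np.maximum(row_reg, 0)
        row = row / row.sum() if row.sum() else np.ones(n) / n
        col = np.maximum(col_reg, 0)
        col = col / col.sum() if col.sum() else np.ones(m) / m
        row_avg += row
        col_avg += col
        value = row @ payoff @ col
        row_reg += payoff @ col - value        # row counterfactual regrets
        col_reg += -(row @ payoff) + value     # column regrets (zero-sum)
    return row_avg / iters, col_avg / iters

rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
print(regret_matching_selfplay(rps)[0])   # approaches the uniform (1/3, 1/3, 1/3) equilibrium
```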
🔹 Publication Date: Published on Feb 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16928
• PDF: https://arxiv.org/pdf/2602.16928
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨PaperBench: Evaluating AI's Ability to Replicate AI Research
📝 Summary:
PaperBench evaluates AI agents' ability to replicate state-of-the-art AI research by decomposing replication tasks into graded sub-tasks, using both LLM-based and human judges to assess performance.
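A small sketch of how graded sub-tasks can roll up into a single score via weighted averages; the tree structure and weights below are illustrative, not the benchmark's actual rubric.
```python
# Minimal sketch of hierarchical rubric scoring: leaves get a 0-1 grade
# (from an LLM or human judge), internal nodes take weighted averages.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    weight: float = 1.0
    grade: float | None = None               # set on leaves by a judge
    children: list["Node"] = field(default_factory=list)

    def score(self) -> float:
        if not self.children:
            return self.grade or 0.0
        total = sum(c.weight for c in self.children)
        return sum(c.weight * c.score() for c in self.children) / total

rubric = Node("replicate paper", children=[
    Node("reproduce training run", 0.5, children=[
        Node("dataset prepared", 0.3, grade=1.0),
        Node("loss curve matches", 0.7, grade=0.4),
    ]),
    Node("reproduce key results table", 0.5, grade=0.0),
])
print(round(rubric.score(), 3))   # 0.29 for these illustrative grades
```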
🔹 Publication Date: Published on Apr 2, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2504.01848
• PDF: https://arxiv.org/pdf/2504.01848
• Github: https://github.com/openai/preparedness
✨ Datasets citing this paper:
• https://huggingface.co/datasets/josancamon/paperbench
• https://huggingface.co/datasets/ai-coscientist/researcher-ablation-bench
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨References Improve LLM Alignment in Non-Verifiable Domains
📝 Summary:
References improve LLM alignment in non-verifiable domains. Reference-guided LLM-evaluators act as soft verifiers, boosting judge accuracy and enabling self-improvement for post-training. This method outperforms SFT and reference-free techniques, achieving strong results.
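A minimal sketch of a reference-guided judge used as a soft verifier for best-of-n selection; `judge` is a hypothetical stand-in for an LLM call, and the 1-10 scale is an assumption.
```python
# Minimal sketch: score candidates against a reference and keep the best one
# as training signal for post-training.
def judge(question: str, candidate: str, reference: str) -> float:
    """Hypothetical LLM judge: return a 1-10 score of candidate vs. reference."""
    raise NotImplementedError("call your LLM judge here and parse a numeric score")

def select_with_reference(question, candidates, reference):
    """Best-of-n selection; the winner can feed preference pairs or SFT data."""
    scored = [(judge(question, c, reference), c) for c in candidates]
    return max(scored, key=lambda t: t[0])[1]
```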
🔹 Publication Date: Published on Feb 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16802
• PDF: https://arxiv.org/pdf/2602.16802
• Github: https://github.com/yale-nlp/RLRR
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨FRAPPE: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment
📝 Summary:
FRAPPE addresses limitations in world modeling for robotics by using parallel progressive expansion to improve representation alignment and reduce error accumulation in predictive models.
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17259
• PDF: https://arxiv.org/pdf/2602.17259
• Project Page: https://h-zhao1997.github.io/frappe/
• Github: https://github.com/OpenHelix-Team/frappe
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AI #DataScience #MachineLearning #HuggingFace #Research
✨"What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing
📝 Summary:
Intermediate feedback from in-car AI assistants improves user experience, trust, and perceived speed, reducing task load. Users prefer adaptive feedback, starting transparently and becoming less verbose as reliability increases.
🔹 Publication Date: Published on Feb 17
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.15569
• PDF: https://arxiv.org/pdf/2602.15569
• Github: https://github.com/johanneskirmayr/agentic_llm_feedback
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#LLM #AI #HCI #AutomotiveAI #UserExperience
✨World Models for Policy Refinement in StarCraft II
📝 Summary:
StarWM is the first world model for StarCraft II that predicts future observations under partial observability using a structured textual representation. It achieves strong offline prediction accuracy and, when integrated into a decision system, yields substantial win-rate improvements against SC2's built-in AI.
🔹 Publication Date: Published on Feb 16
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.14857
• PDF: https://arxiv.org/pdf/2602.14857
• Github: https://github.com/yxzzhang/StarWM
🔹 Models citing this paper:
• https://huggingface.co/yxzhang2024/StarWM
✨ Datasets citing this paper:
• https://huggingface.co/datasets/yxzhang2024/SC2-Dynamics-50K
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#WorldModels #StarCraftII #AI #ReinforcementLearning #DeepLearning
✨ArXiv-to-Model: A Practical Study of Scientific LM Training
📝 Summary:
This paper details training a 1.36B-parameter scientific language model from raw arXiv LaTeX sources with limited computational resources. It shows how preprocessing, tokenization, and infrastructure choices significantly affect training stability and data utilization, and it offers practical insights for researchers training models under similar resource constraints.
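For a flavor of the preprocessing decisions involved, a toy LaTeX-cleaning pass is sketched below; the specific rules (strip comments, drop figures, remove citation macros) are illustrative assumptions, not the paper's pipeline.
```python
# Minimal sketch of turning raw arXiv LaTeX into trainable text.
import re

def clean_latex(source: str) -> str:
    m = re.search(r"\\begin\{document\}(.*)\\end\{document\}", source, re.S)
    body = m.group(1) if m else source                          # keep only the body
    body = re.sub(r"(?<!\\)%.*", "", body)                      # strip comments
    body = re.sub(r"\\begin\{figure\*?\}.*?\\end\{figure\*?\}", "", body, flags=re.S)
    body = re.sub(r"\\(label|cite[pt]?|ref|eqref)\{[^}]*\}", "", body)  # drop refs/labels
    body = re.sub(r"\n{3,}", "\n\n", body)                      # collapse blank runs
    return body.strip()

sample = r"""
\documentclass{article}
\begin{document}
We train a model. % internal note
\begin{figure}\includegraphics{x.png}\end{figure}
Loss: $L = -\sum_i \log p(x_i)$ \cite{gpt}.
\end{document}
"""
print(clean_latex(sample))
```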
🔹 Publication Date: Published on Feb 19
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17288
• PDF: https://arxiv.org/pdf/2602.17288
• Project Page: https://kitefishai.com
• Github: https://github.com/kitefishai/KiteFish-A1-1.5B-Math
🔹 Models citing this paper:
• https://huggingface.co/KiteFishAI/KiteFish-A1-1.5B-Math
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#LLM #ScientificAI #MLOps #ModelTraining #NLP
✨StereoAdapter-2: Globally Structure-Consistent Underwater Stereo Depth Estimation
📝 Summary:
StereoAdapter-2 improves underwater stereo depth estimation by replacing ConvGRU with a ConvSS2D operator for efficient, long-range disparity propagation. It also introduces UW-StereoDepth-80K, a new large-scale synthetic dataset, and achieves state-of-the-art zero-shot performance on underwater stereo benchmarks.
🔹 Publication Date: Published on Feb 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16915
• PDF: https://arxiv.org/pdf/2602.16915
• Project Page: https://aigeeksgroup.github.io/StereoAdapter-2
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#UnderwaterAI #ComputerVision #DeepLearning #StereoVision #Dataset
✨GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning
📝 Summary:
GEPA is a prompt optimizer that uses natural language reflection to learn high-level rules from trial and error. It significantly outperforms RL methods like GRPO and MIPROv2, achieving better performance with up to 35x fewer rollouts.
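A bare-bones sketch of a reflect-and-revise prompt-evolution loop in this spirit; `call_llm` and `score` are hypothetical stubs to wire to your own model and metric, and the greedy acceptance rule is an assumption.
```python
# Minimal sketch: evaluate a prompt, reflect on failures in natural language,
# propose a revision, keep it only if it scores better.
import random

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client")

def score(prompt: str, examples) -> tuple[float, list[str]]:
    """Return (task accuracy, textual failure traces) for `prompt` on `examples`."""
    raise NotImplementedError("wire this to your task evaluation")

def evolve_prompt(seed_prompt, examples, generations=10):
    best = seed_prompt
    best_acc, failures = score(best, examples)
    for _ in range(generations):
        sampled = random.sample(failures, k=min(3, len(failures)))
        candidate = call_llm(
            "You are improving an instruction prompt.\n"
            f"Current prompt:\n{best}\n\nFailure traces:\n" + "\n---\n".join(sampled) +
            "\n\nReflect on what rule is missing, then output ONLY the revised prompt."
        ).strip()
        cand_acc, cand_failures = score(candidate, examples)
        if cand_acc > best_acc:               # greedy acceptance (an assumption)
            best, best_acc, failures = candidate, cand_acc, cand_failures
    return best
```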
🔹 Publication Date: Published on Jul 25, 2025
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2507.19457
• PDF: https://arxiv.org/pdf/2507.19457
• Project Page: https://gepa-ai.github.io/gepa/
• Github: https://github.com/gepa-ai/gepa
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#PromptEngineering #ReinforcementLearning #ArtificialIntelligence #MachineLearning #NLP
✨NeST: Neuron Selective Tuning for LLM Safety
📝 Summary:
NeST is a lightweight LLM safety framework that selectively adapts a small subset of safety-relevant neurons. It significantly reduces unsafe generations by 90.2% with minimal trainable parameters, outperforming full fine-tuning and LoRA in safety performance and efficiency.
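A minimal sketch of selective neuron tuning: score neurons on a small safety set, then mask gradients so only the top-scoring rows are updated. The gradient-magnitude criterion and the 5% fraction are assumptions, not the paper's exact selection rule.
```python
# Minimal sketch: tune only a small subset of "safety-relevant" neurons.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
safety_x, safety_y = torch.randn(128, 64), torch.randn(128, 64)   # stand-in safety data

# 1) importance = gradient magnitude of each first-layer neuron's weight row
nn.functional.mse_loss(model(safety_x), safety_y).backward()
w = model[0].weight                       # (256, 64): one row per neuron
importance = w.grad.abs().sum(dim=1)
top = importance.topk(int(0.05 * w.shape[0])).indices   # tune ~5% of neurons

# 2) freeze everything, then re-enable only the selected rows via a gradient mask
for p in model.parameters():
    p.requires_grad_(False)
w.requires_grad_(True)
row_mask = torch.zeros_like(w)
row_mask[top] = 1.0
w.register_hook(lambda g: g * row_mask)   # zero gradients of the frozen rows

opt = torch.optim.SGD([w], lr=1e-2)
for _ in range(50):                       # fine-tune on safety data only
    opt.zero_grad()
    nn.functional.mse_loss(model(safety_x), safety_y).backward()
    opt.step()
```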
🔹 Publication Date: Published on Feb 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16835
• PDF: https://arxiv.org/pdf/2602.16835
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#LLMSafety #LLM #AI #MachineLearning #DeepLearning
✨On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking
📝 Summary:
Two-layer neural networks solve modular addition by learning Fourier features through phase symmetry and frequency diversification, enabling robust computation via majority voting across frequencies to cancel noise. The training process, including grokking, is explained by a lottery ticket mechanism and competition between candidate features.
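The Fourier mechanism can be checked in a few lines: summing cos(2*pi*f*(a+b-c)/p) over several frequencies peaks exactly at c = (a+b) mod p, which is the "majority vote across frequencies" picture. The particular frequencies below are arbitrary, not ones a trained network found.
```python
# Minimal numerical check of the Fourier-feature solution to modular addition.
import numpy as np

p = 97
freqs = [3, 11, 25, 40]            # a handful of illustrative frequencies
a, b = 58, 73
c = np.arange(p)

# cos(2*pi*f*(a+b-c)/p) = cos(fa')cos(fc') + sin(fa')sin(fc') (product-to-sum),
# so it is computable from per-input Fourier features of a and b.
logits = sum(np.cos(2 * np.pi * f * (a + b - c) / p) for f in freqs)
print(int(np.argmax(logits)), (a + b) % p)   # both print 34
```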
🔹 Publication Date: Published on Feb 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16849
• PDF: https://arxiv.org/pdf/2602.16849
• Github: https://github.com/Y-Agent/modular-addition-feature-learning
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#NeuralNetworks #Grokking #FourierFeatures #LotteryTicket #MachineLearning
✨NESSiE: The Necessary Safety Benchmark -- Identifying Errors that should not Exist
📝 Summary:
NESSiE is a new safety benchmark revealing basic security vulnerabilities in large language models with simple tests. Even state-of-the-art models fail these necessary safety checks, showing a bias towards helpfulness over safety and underscoring deployment risks.
🔹 Publication Date: Published on Feb 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16756
• PDF: https://arxiv.org/pdf/2602.16756
• Project Page: https://huggingface.co/datasets/JByale/NESSiE
• Github: https://github.com/JohannesBertram/NESSiE
✨ Datasets citing this paper:
• https://huggingface.co/datasets/JByale/NESSiE
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#AISafety #LLM #Cybersecurity #AIethics #AIResearch
✨Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents
📝 Summary:
LLM agents must balance exploration costs against uncertainty in complex sequential tasks. The Calibrate-Then-Act (CTA) framework provides LLMs with explicit cost-uncertainty context, enabling more optimal reasoning and better decision-making strategies in tasks such as coding and information retrieval.
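A tiny illustration of the underlying cost-aware rule: explore only when the expected value of the information exceeds its cost. The Gaussian belief model and the numbers below are assumptions, not the paper's setup.
```python
# Minimal sketch: value-of-information check before paying an exploration cost.
import numpy as np

def expected_value_of_probe(mu, sigma, cost, n_samples=100_000):
    """Two candidate actions with independent Gaussian beliefs N(mu_i, sigma_i):
    compare committing now vs. probing (resolving uncertainty) first."""
    draws = np.random.default_rng(0).normal(mu, sigma, size=(n_samples, len(mu)))
    commit_now = mu[int(np.argmax(mu))]          # act on the current belief
    probe_then_act = draws.max(axis=1).mean()    # act after uncertainty is resolved
    return probe_then_act - commit_now - cost    # positive => exploring is worth it

mu, sigma = np.array([1.0, 0.9]), np.array([0.1, 0.8])
for cost in (0.05, 0.5):
    evi = expected_value_of_probe(mu, sigma, cost)
    print(f"cost={cost}: {'explore' if evi > 0 else 'act now'} (net gain {evi:+.3f})")
```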
🔹 Publication Date: Published on Feb 18
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16699
• PDF: https://arxiv.org/pdf/2602.16699
• Github: https://github.com/Wenwen-D/env-explorer
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#LLMAgents #AIResearch #MachineLearning #CostAwareAI #DecisionMaking
✨CrispEdit: Low-Curvature Projections for Scalable Non-Destructive LLM Editing
📝 Summary:
CrispEdit is a scalable second-order LLM editing algorithm. It preserves capabilities by projecting updates into low-curvature subspaces using efficient Kronecker-factored approximations. This achieves high edit success with minimal capability degradation.
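A small dense-matrix sketch of the core projection: keep only the components of an edit that lie along low-curvature directions of a curvature estimate. The paper uses Kronecker-factored approximations to make this scale; the dense eigendecomposition below is for illustration only.
```python
# Minimal sketch: project a weight-update vector onto the flattest directions
# of a curvature proxy so the edit disturbs existing capabilities as little as possible.
import numpy as np

rng = np.random.default_rng(0)
d = 64
A = rng.normal(size=(d, d))
curvature = A @ A.T / d                        # stand-in Fisher/GGN estimate for one block
delta = rng.normal(size=d)                     # raw edit update for this block

eigvals, eigvecs = np.linalg.eigh(curvature)   # eigenvalues in ascending order
k = 48                                         # keep the 48 flattest directions (assumed)
low_curv_basis = eigvecs[:, :k]
delta_proj = low_curv_basis @ (low_curv_basis.T @ delta)

# the projected edit moves (almost) only along flat directions
print(float(delta @ curvature @ delta), float(delta_proj @ curvature @ delta_proj))
```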
🔹 Publication Date: Published on Feb 17
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.15823
• PDF: https://arxiv.org/pdf/2602.15823
• Project Page: https://crispedit.github.io
• Github: https://github.com/zarifikram/CrispEdit
==================================
For more data science resources:
✓ https://t.iss.one/DataScienceT
#LLMEditing #LLMs #MachineLearning #AIResearch #DeepLearning