ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
2Mamba2Furious: Linear in Complexity, Competitive in Accuracy

📝 Summary:
Researchers enhance linear attention by simplifying Mamba-2 and improving its architectural components to achieve near-softmax accuracy while maintaining memory efficiency for long sequences.
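
The core appeal of linear attention can be sketched in a few lines: a causal pass keeps running sums of phi(k) v^T, so cost grows linearly in sequence length instead of quadratically. This is an illustrative sketch of the generic technique (with the common elu+1 feature map), not the paper's exact architecture:

```python
import math

def linear_attention(queries, keys, values):
    """Causal linear attention in O(N): one pass with running sums.

    Illustrative sketch only, not the paper's exact method. Uses the
    feature map phi(x) = elu(x) + 1, a common choice that keeps
    features positive.
    """
    def phi(v):
        return [x + 1 if x > 0 else math.exp(x) for x in v]  # elu(x) + 1

    d = len(keys[0])
    S = [[0.0] * d for _ in range(d)]   # running sum of phi(k) v^T
    z = [0.0] * d                       # running sum of phi(k)
    out = []
    for q, k, v in zip(queries, keys, values):
        fk = phi(k)
        for i in range(d):
            z[i] += fk[i]
            for j in range(d):
                S[i][j] += fk[i] * v[j]
        fq = phi(q)
        denom = sum(fq[i] * z[i] for i in range(d))
        out.append([sum(fq[i] * S[i][j] for i in range(d)) / denom
                    for j in range(d)])
    return out
```

Because the feature map factorizes the attention weights, this one-pass form is algebraically identical to the quadratic causal computation, while the state (S, z) stays constant-size regardless of sequence length.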

🔹 Publication Date: Published on Feb 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17363
• PDF: https://arxiv.org/pdf/2602.17363
• Github: https://github.com/gmongaras/2Mamba2Furious

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
Discovering Multiagent Learning Algorithms with Large Language Models

📝 Summary:
AlphaEvolve, an evolutionary coding agent using large language models, automatically discovers new multiagent learning algorithms for imperfect-information games by evolving regret minimization and po...
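
The regret-minimization loop such algorithms build on is simple enough to sketch. Below is a minimal regret-matching self-play for rock-paper-scissors (a toy imperfect-information-free stand-in, not AlphaEvolve's discovered algorithms): the average strategies converge toward the uniform Nash equilibrium.

```python
def regret_matching(regrets):
    """Turn cumulative regrets into a strategy (normalized positive part)."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    n = len(regrets)
    return [p / total for p in pos] if total > 0 else [1.0 / n] * n

def selfplay_rps(iters=20000):
    """Two regret-matching players in rock-paper-scissors self-play."""
    # payoff[a][b]: row player's utility; actions 0=rock, 1=paper, 2=scissors
    payoff = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
    regrets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # perturbed start
    avg = [[0.0] * 3, [0.0] * 3]
    for _ in range(iters):
        s = [regret_matching(regrets[0]), regret_matching(regrets[1])]
        for p in (0, 1):
            opp = s[1 - p]
            if p == 0:  # row player's utility for each action
                util = [sum(payoff[a][b] * opp[b] for b in range(3))
                        for a in range(3)]
            else:       # column player: zero-sum, so negate and transpose
                util = [sum(-payoff[b][a] * opp[b] for b in range(3))
                        for a in range(3)]
            ev = sum(s[p][a] * util[a] for a in range(3))
            for a in range(3):
                regrets[p][a] += util[a] - ev   # accumulate regret
                avg[p][a] += s[p][a]            # track average strategy
    return [[x / iters for x in row] for row in avg]
```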

🔹 Publication Date: Published on Feb 18

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16928
• PDF: https://arxiv.org/pdf/2602.16928

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
PaperBench: Evaluating AI's Ability to Replicate AI Research

📝 Summary:
PaperBench evaluates AI agents' ability to replicate state-of-the-art AI research by decomposing replication tasks into graded sub-tasks, using both LLM-based and human judges to assess performance. A...

🔹 Publication Date: Published on Apr 2, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2504.01848
• PDF: https://arxiv.org/pdf/2504.01848
• Github: https://github.com/openai/preparedness

Datasets citing this paper:
https://huggingface.co/datasets/josancamon/paperbench
https://huggingface.co/datasets/ai-coscientist/researcher-ablation-bench

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
References Improve LLM Alignment in Non-Verifiable Domains

📝 Summary:
References improve LLM alignment in non-verifiable domains. Reference-guided LLM-evaluators act as soft verifiers, boosting judge accuracy and enabling self-improvement for post-training. This method outperforms SFT and reference-free techniques, achieving strong results.
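
The "reference as soft verifier" interface can be illustrated with a deliberately crude stand-in: the paper uses LLM judges, but here a token-level Jaccard overlap plays the scorer's role, purely to show how a reference turns evaluation into candidate selection.

```python
def soft_verify(candidate, reference):
    """Score a candidate answer against a reference, in [0, 1].

    Toy stand-in for a reference-guided evaluator: token-level
    Jaccard overlap instead of an LLM judge.
    """
    c = set(candidate.lower().split())
    r = set(reference.lower().split())
    return len(c & r) / len(c | r) if c | r else 1.0

def pick_best(candidates, reference):
    """Reference-guided selection: keep the highest-scoring candidate."""
    return max(candidates, key=lambda c: soft_verify(c, reference))
```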

🔹 Publication Date: Published on Feb 18

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16802
• PDF: https://arxiv.org/pdf/2602.16802
• Github: https://github.com/yale-nlp/RLRR

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
FRAPPE: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment

📝 Summary:
FRAPPE addresses limitations in world modeling for robotics by using parallel progressive expansion to improve representation alignment and reduce error accumulation in predictive models.

🔹 Publication Date: Published on Feb 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17259
• PDF: https://arxiv.org/pdf/2602.17259
• Project Page: https://h-zhao1997.github.io/frappe/
• Github: https://github.com/OpenHelix-Team/frappe

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
"What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing

📝 Summary:
Intermediate feedback from in-car AI assistants improves user experience, trust, and perceived speed, reducing task load. Users prefer adaptive feedback, starting transparently and becoming less verbose as reliability increases.

🔹 Publication Date: Published on Feb 17

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.15569
• PDF: https://arxiv.org/pdf/2602.15569
• Github: https://github.com/johanneskirmayr/agentic_llm_feedback

==================================

#LLM #AI #HCI #AutomotiveAI #UserExperience
World Models for Policy Refinement in StarCraft II

📝 Summary:
StarWM is the first world model for StarCraft II that predicts future observations under partial observability from a structured textual representation. It achieves significant offline prediction accuracy and, integrated into a decision system, yields substantial win-rate improvements against SC2's ...

🔹 Publication Date: Published on Feb 16

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.14857
• PDF: https://arxiv.org/pdf/2602.14857
• Github: https://github.com/yxzzhang/StarWM

🔹 Models citing this paper:
https://huggingface.co/yxzhang2024/StarWM

Datasets citing this paper:
https://huggingface.co/datasets/yxzhang2024/SC2-Dynamics-50K

==================================

#WorldModels #StarCraftII #AI #ReinforcementLearning #DeepLearning
ArXiv-to-Model: A Practical Study of Scientific LM Training

📝 Summary:
This paper details training a 1.36B scientific language model from raw arXiv LaTeX sources with limited computational resources. It reveals how preprocessing, tokenization, and infrastructure significantly impact training stability and data utilization. The work provides practical insights for re...

🔹 Publication Date: Published on Feb 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17288
• PDF: https://arxiv.org/pdf/2602.17288
• Project Page: https://kitefishai.com
• Github: https://github.com/kitefishai/KiteFish-A1-1.5B-Math

🔹 Models citing this paper:
https://huggingface.co/KiteFishAI/KiteFish-A1-1.5B-Math

==================================

#LLM #ScientificAI #MLOps #ModelTraining #NLP
StereoAdapter-2: Globally Structure-Consistent Underwater Stereo Depth Estimation

📝 Summary:
StereoAdapter-2 improves underwater stereo depth estimation by replacing ConvGRU with a ConvSS2D operator for efficient, long-range disparity propagation. It also introduces UW-StereoDepth-80K, a new large-scale synthetic dataset. This approach achieves state-of-the-art zero-shot performance on u...

🔹 Publication Date: Published on Feb 18

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16915
• PDF: https://arxiv.org/pdf/2602.16915
• Project Page: https://aigeeksgroup.github.io/StereoAdapter-2

==================================

#UnderwaterAI #ComputerVision #DeepLearning #StereoVision #Dataset
GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

📝 Summary:
GEPA is a prompt optimizer that uses natural language reflection to learn high-level rules from trial and error. It significantly outperforms RL methods like GRPO and MIPROv2, achieving better performance with up to 35x fewer rollouts.
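
The outer loop of such an optimizer is a plain evolutionary search. The sketch below is generic: in GEPA the mutation step is LLM reflection on execution traces and the score comes from rollouts, whereas here `mutate` and `score` are arbitrary caller-supplied stand-ins.

```python
import random

def evolve(seed_prompt, mutate, score, generations=30, pop=4, rng=None):
    """Generic evolutionary search loop in the spirit of prompt optimizers.

    Toy sketch: `mutate` stands in for LLM reflection and `score` for
    rollout evaluation; keeps the best candidate each generation.
    """
    rng = rng or random.Random(0)
    best = seed_prompt
    for _ in range(generations):
        candidates = [best] + [mutate(best, rng) for _ in range(pop)]
        best = max(candidates, key=score)
    return best
```

With a cheap scorer this hill-climbs quickly; the paper's point is that reflective mutations make each candidate far more informative than a random RL rollout.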

🔹 Publication Date: Published on Jul 25, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2507.19457
• PDF: https://arxiv.org/pdf/2507.19457
• Project Page: https://gepa-ai.github.io/gepa/
• Github: https://github.com/gepa-ai/gepa

==================================

#PromptEngineering #ReinforcementLearning #ArtificialIntelligence #MachineLearning #NLP
NeST: Neuron Selective Tuning for LLM Safety

📝 Summary:
NeST is a lightweight LLM safety framework that selectively adapts a small subset of safety-relevant neurons. It significantly reduces unsafe generations by 90.2% with minimal trainable parameters, outperforming full fine-tuning and LoRA in safety performance and efficiency.
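
The selective-adaptation idea reduces to masking updates outside a chosen subset. A minimal sketch, with a flat parameter list and gradient magnitude as a hypothetical selection score (NeST's actual neuron scoring is safety-specific):

```python
def selective_update(params, grads, k, lr=0.1):
    """Update only the k parameters with largest |gradient|.

    Toy illustration of neuron-selective tuning: everything outside
    the selected subset stays frozen.
    """
    ranked = sorted(range(len(params)), key=lambda i: -abs(grads[i]))
    chosen = set(ranked[:k])
    return [p - lr * g if i in chosen else p
            for i, (p, g) in enumerate(zip(params, grads))]
```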

🔹 Publication Date: Published on Feb 18

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16835
• PDF: https://arxiv.org/pdf/2602.16835

==================================

#LLMSafety #LLM #AI #MachineLearning #DeepLearning
On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking

📝 Summary:
Two-layer neural networks solve modular addition by learning Fourier features through phase symmetry and frequency diversification. This enables robust computation via majority voting to cancel noise. The process, including grokking, is explained by a lottery ticket mechanism and competition betw...
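
The Fourier mechanism itself is easy to demonstrate directly: for each frequency k, cos(2*pi*k*(a+b-c)/p) peaks exactly when c = (a+b) mod p, so summing over several frequencies and taking the argmax recovers the answer, with the sum acting as the majority vote that cancels per-frequency noise. A hand-built sketch (no training, just the learned representation):

```python
import math

def modadd_fourier(a, b, p, freqs=(1, 2, 3)):
    """Compute (a + b) mod p via Fourier features."""
    def logit(c):
        # each frequency votes; all votes agree only at c = (a+b) mod p
        return sum(math.cos(2 * math.pi * k * (a + b - c) / p)
                   for k in freqs)
    return max(range(p), key=logit)
```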

🔹 Publication Date: Published on Feb 18

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16849
• PDF: https://arxiv.org/pdf/2602.16849
• Github: https://github.com/Y-Agent/modular-addition-feature-learning

==================================

#NeuralNetworks #Grokking #FourierFeatures #LotteryTicket #MachineLearning
NESSiE: The Necessary Safety Benchmark -- Identifying Errors that should not Exist

📝 Summary:
NESSiE is a new safety benchmark revealing basic security vulnerabilities in large language models with simple tests. Even state-of-the-art models fail these necessary safety checks, showing a bias towards helpfulness over safety and underscoring deployment risks.

🔹 Publication Date: Published on Feb 18

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16756
• PDF: https://arxiv.org/pdf/2602.16756
• Project Page: https://huggingface.co/datasets/JByale/NESSiE
• Github: https://github.com/JohannesBertram/NESSiE

Datasets citing this paper:
https://huggingface.co/datasets/JByale/NESSiE

==================================

#AISafety #LLM #Cybersecurity #AIethics #AIResearch
Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents

📝 Summary:
LLM agents must balance exploration costs against uncertainty in complex sequential tasks. The Calibrate-Then-Act (CTA) framework provides LLMs with explicit cost-uncertainty context, enabling more optimal reasoning. This leads to better decision-making strategies in tasks like coding and information r...
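
The underlying trade-off is classic value-of-information reasoning. A hedged sketch of the decision rule (not the paper's prompt-level framework): explore only when the expected value of perfect information about a risky option exceeds the cost of acquiring it.

```python
def should_explore(p_success, reward, safe_value, cost):
    """Explore iff the value of perfect information exceeds its cost.

    Toy decision rule: a risky option pays `reward` with probability
    `p_success` (else 0), versus a known `safe_value`.
    """
    ev_act_now = max(p_success * reward, safe_value)
    ev_informed = (p_success * max(reward, safe_value)
                   + (1 - p_success) * safe_value)
    voi = ev_informed - ev_act_now   # value of perfect information
    return voi > cost
```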

🔹 Publication Date: Published on Feb 18

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.16699
• PDF: https://arxiv.org/pdf/2602.16699
• Github: https://github.com/Wenwen-D/env-explorer

==================================

#LLMAgents #AIResearch #MachineLearning #CostAwareAI #DecisionMaking
CrispEdit: Low-Curvature Projections for Scalable Non-Destructive LLM Editing

📝 Summary:
CrispEdit is a scalable second-order LLM editing algorithm. It preserves capabilities by projecting updates into low-curvature subspaces using efficient Kronecker-factored approximations. This achieves high edit success with minimal capability degradation.
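
The projection idea can be shown with a diagonal curvature approximation as a stand-in for CrispEdit's Kronecker-factored one: treat each coordinate as its own direction and zero out the update where the loss is most sensitive, so existing capabilities are left untouched.

```python
def low_curvature_project(update, curvatures, threshold):
    """Keep only update components lying in low-curvature directions.

    Toy diagonal sketch: coordinates with curvature above `threshold`
    (where existing capabilities are sensitive) are zeroed before the
    edit is applied.
    """
    return [u if c <= threshold else 0.0
            for u, c in zip(update, curvatures)]
```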

🔹 Publication Date: Published on Feb 17

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.15823
• PDF: https://arxiv.org/pdf/2602.15823
• Project Page: https://crispedit.github.io
• Github: https://github.com/zarifikram/CrispEdit

==================================

#LLMEditing #LLMs #MachineLearning #AIResearch #DeepLearning
Modeling Distinct Human Interaction in Web Agents

📝 Summary:
This paper models distinct human intervention patterns in web agents to improve adaptability and collaboration. It identifies four interaction styles, training language models to predict user intervention with significantly improved accuracy. This approach leads to more useful and collaborative w...

🔹 Publication Date: Published on Feb 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.17588
• PDF: https://arxiv.org/pdf/2602.17588
• Project Page: https://cowcorpus.github.io/
• Github: https://github.com/oaishi/PlowPilot

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Hardware Co-Design Scaling Laws via Roofline Modelling for On-Device LLMs

📝 Summary:
A hardware-software co-design framework is proposed for on-device LLMs. It models training loss and uses roofline analysis to link accuracy and latency, speeding up architecture selection. This yields better performance on target hardware.
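
The roofline side of the analysis is a one-liner: attainable throughput is capped by either peak compute or memory bandwidth times arithmetic intensity. A simplified sketch of the model the paper couples with its training-loss predictions:

```python
def attainable_flops(intensity, peak_flops, mem_bandwidth):
    """Roofline model: performance capped by compute or memory.

    intensity is arithmetic intensity in FLOPs/byte. Below the ridge
    point (peak_flops / mem_bandwidth) a kernel is memory-bound;
    above it, compute-bound.
    """
    return min(peak_flops, mem_bandwidth * intensity)
```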

🔹 Publication Date: Published on Feb 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.10377
• PDF: https://arxiv.org/pdf/2602.10377

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis

📝 Summary:
Fish-Speech is a novel TTS framework using a Dual-AR architecture with GFSQ for efficient codebook processing and high-fidelity speech. It leverages LLMs for linguistic feature extraction, streamlining multilingual support by eliminating G2P. This significantly improves TTS for complex scenarios ...
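
Finite scalar quantization, the recipe GFSQ builds on (assuming the usual FSQ formulation; the grouping is the paper's addition), is compact enough to sketch: bound each dimension with tanh, snap it to a small grid, and combine per-dimension level indices into one mixed-radix codebook index.

```python
import math

def fsq(vector, levels):
    """Finite scalar quantization: round each bounded dim to a grid.

    Generic FSQ sketch, not Fish-Speech's exact GFSQ: tanh bounds each
    value to (-1, 1), which is snapped to one of L levels per dim; the
    indices combine into a single codebook index.
    """
    idxs = [round((math.tanh(x) + 1) / 2 * (L - 1))
            for x, L in zip(vector, levels)]
    code = 0
    for i, L in zip(idxs, levels):
        code = code * L + i   # mixed-radix codebook index
    return idxs, code
```

The implicit codebook size is the product of the per-dimension level counts (here 3 * 3 * 5 = 45), with no learned codebook to collapse.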

🔹 Publication Date: Published on Nov 2, 2024

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2411.01156
• PDF: https://arxiv.org/pdf/2411.01156
• Github: https://github.com/fishaudio/fish-speech

🔹 Models citing this paper:
https://huggingface.co/fishaudio/fish-speech-1.5
https://huggingface.co/fishaudio/fish-speech-1.4
https://huggingface.co/ModelsLab/fish-speech-1.5

Spaces citing this paper:
https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena
https://huggingface.co/spaces/mediverseai/mediverse.ai
https://huggingface.co/spaces/fishaudio/fish-agent

==================================

#TextToSpeech #LLM #SpeechSynthesis #Multilingual #AI
GPT-4 Technical Report

📝 Summary:
GPT-4 is a multimodal Transformer model accepting image and text inputs. It achieves human-level performance on professional and academic benchmarks through pre-training and post-training alignment. Its development prioritized predictable scaling.

🔹 Publication Date: Published on Mar 15, 2023

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2303.08774
• PDF: https://arxiv.org/pdf/2303.08774
• Github: https://github.com/openai/evals

🔹 Models citing this paper:
https://huggingface.co/openchat/openchat_3.5
https://huggingface.co/openchat/openchat-3.5-0106
https://huggingface.co/openchat/openchat-3.5-1210

Datasets citing this paper:
https://huggingface.co/datasets/m-a-p/CHC-Bench

Spaces citing this paper:
https://huggingface.co/spaces/ludwigstumpp/llm-leaderboard
https://huggingface.co/spaces/dingliyu/skillmix
https://huggingface.co/spaces/SSGHJKKNBVCXZWQ134578000JJBBBBNNNMKLL/AGI-Framework

==================================

#GPT4 #AI #LLM #MultimodalAI #DeepLearning