ML Research Hub
32.5K subscribers
6K photos
385 videos
24 files
6.49K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
ReCode: Unify Plan and Action for Universal Granularity Control

📝 Summary:
ReCode unifies planning and action in LLM agents via recursive code generation. It treats plans as abstract functions recursively decomposed into primitive actions, enabling dynamic decision granularity. This significantly improves performance and data efficiency.
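The recursive idea can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: `PRIMITIVES`, `PLAN_LIBRARY`, and `expand` are hypothetical names, and a real agent would generate sub-plans with an LLM rather than look them up in a table.

```python
# Toy sketch of recursive plan decomposition (names are hypothetical).
PRIMITIVES = {"click", "type", "scroll"}

# A static plan library standing in for LLM-generated sub-plans.
PLAN_LIBRARY = {
    "search(query)": ["focus_search_box", "enter_query(query)", "submit"],
    "focus_search_box": ["click"],
    "enter_query(query)": ["type"],
    "submit": ["click"],
}

def expand(step):
    """Recursively decompose an abstract step until only primitives remain."""
    if step in PRIMITIVES:
        return [step]                       # base case: executable action
    actions = []
    for sub in PLAN_LIBRARY.get(step, []):
        actions.extend(expand(sub))         # recurse into sub-plans
    return actions

print(expand("search(query)"))              # flattens to primitive actions
```

The point of the recursion is that the agent can stop expanding at any depth, which is what gives it control over decision granularity.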

🔹 Publication Date: Published on Oct 27

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.23564
• PDF: https://arxiv.org/pdf/2510.23564
• Github: https://github.com/FoundationAgents/ReCode

==================================

For more data science resources:
https://t.iss.one/DataScienceT

#LLMAgents #AI #CodeGeneration #Planning #GranularityControl
Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning

📝 Summary:
PaperCoder is a multi-agent LLM framework that automates converting machine learning papers into functional code repositories. It uses planning, analysis, and generation stages with specialized agents. Evaluations show it effectively creates high-quality implementations, outperforming strong base...
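The staged flow the summary describes could be sketched as below; the stage callables are stand-ins supplied by the caller, not PaperCoder's actual agents.

```python
# Illustrative three-stage pipeline (plan -> analyze -> generate).
# The stage callables are placeholders for the specialized LLM agents.
def run_pipeline(paper, plan, analyze, generate):
    blueprint = plan(paper)                         # stage 1: repository plan
    specs = [analyze(name) for name in blueprint]   # stage 2: per-file specs
    return {s["file"]: generate(s) for s in specs}  # stage 3: code per file

# Toy stand-ins to show the data flow.
repo = run_pipeline(
    "paper.pdf",
    plan=lambda p: ["model.py", "train.py"],
    analyze=lambda f: {"file": f, "spec": f"implement {f}"},
    generate=lambda s: f"# {s['spec']}",
)
print(repo)
```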

🔹 Publication Date: Published on Apr 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2504.17192
• PDF: https://arxiv.org/pdf/2504.17192
• Project Page: https://huggingface.co/papers/2504.15080
• Github: https://github.com/going-doer/Paper2Code

Datasets citing this paper:
https://huggingface.co/datasets/iaminju/paper2code

==================================



#CodeGeneration #MachineLearning #LLM #AI #Automation
DRIVE: Data Curation Best Practices for Reinforcement Learning with Verifiable Reward in Competitive Code Generation

📝 Summary:
This study develops a two-stage reinforcement learning method for competitive code generation. It uses tailored data curation and a hard-focus curriculum, achieving state-of-the-art performance on competitive programming benchmarks.
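A "hard-focus" curriculum can be sketched as filtering training problems by the policy's measured pass rate; the threshold and names below are assumptions for illustration, not the paper's settings.

```python
# Hypothetical sketch: stage two keeps only problems the current policy
# rarely solves (measured pass rate below a threshold).
def hard_focus_batch(problems, pass_rates, threshold=0.2):
    """Keep only problems whose pass rate is below `threshold`."""
    return [p for p in problems if pass_rates[p] < threshold]

pass_rates = {"two_sum": 0.9, "segment_tree": 0.15, "flow_network": 0.05}
print(hard_focus_batch(list(pass_rates), pass_rates))
```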

🔹 Publication Date: Published on Nov 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.06307
• PDF: https://arxiv.org/pdf/2511.06307

==================================


#ReinforcementLearning #CodeGeneration #DataCuration #MachineLearning #AIResearch
UI2Code^N: A Visual Language Model for Test-Time Scalable Interactive UI-to-Code Generation

📝 Summary:
UI2Code^N is a visual language model trained for interactive UI-to-code generation, editing, and polishing. It uses multi-turn feedback to achieve state-of-the-art performance among open-source models, comparable to leading closed-source solutions.
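A multi-turn feedback loop of this kind can be sketched as generate, score, and regenerate until the score stops improving. The function names here are assumptions, not the UI2Code^N API.

```python
# Illustrative multi-turn refinement loop: generate code, score it against
# the target UI, and feed the result back until improvement stops.
def refine(generate, score, target, max_turns=5):
    code = generate(target, feedback=None)
    best = score(code, target)
    for _ in range(max_turns):
        candidate = generate(target, feedback=f"score={best:.2f}")
        s = score(candidate, target)
        if s <= best:
            break                          # converged: stop iterating
        code, best = candidate, s
    return code, best

# Toy generator that recovers more of the target UI each turn.
target = "<button>OK</button>"
state = {"n": 4}
def toy_generate(t, feedback):
    state["n"] = min(len(t), state["n"] + 4)
    return t[:state["n"]]

code, best = refine(toy_generate, lambda c, t: len(c) / len(t), target)
print(best)
```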

🔹 Publication Date: Published on Nov 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.08195
• PDF: https://arxiv.org/pdf/2511.08195
• Project Page: https://zheny2751-dotcom.github.io/ui2code-n.github.io/
• Github: https://zheny2751-dotcom.github.io/ui2code-n.github.io/

🔹 Models citing this paper:
https://huggingface.co/zai-org/UI2Code_N

Spaces citing this paper:
https://huggingface.co/spaces/zai-org/UI2Code_N-demo-case

==================================


#UI2Code #VisualLanguageModels #CodeGeneration #AI #SoftwareEngineering
Code2Video: A Code-centric Paradigm for Educational Video Generation

📝 Summary:
Code2Video is a code-centric agent framework that generates educational videos via executable Python code. It uses three collaborative agents to improve coherence and interpretability, outperforming direct code generation by 40% and matching human-crafted tutorials.

🔹 Publication Date: Published on Oct 1

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.01174
• PDF: https://arxiv.org/pdf/2510.01174
• Project Page: https://showlab.github.io/Code2Video/
• Github: https://github.com/showlab/code2video

==================================


#AI #VideoGeneration #EducationalTech #CodeGeneration #DeepLearning
WizardCoder: Empowering Code Large Language Models with Evol-Instruct

📝 Summary:
WizardCoder is a Code LLM fine-tuned using Evol-Instruct for complex instructions. It significantly outperforms open-source and major closed LLMs on code generation benchmarks.
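The Evol-Instruct loop can be sketched as repeatedly rewriting a seed instruction into harder variants. The prompt text below is illustrative, not the paper's exact prompt, and `toy_llm` is a stand-in for a real model call.

```python
# Sketch of the Evol-Instruct idea: iteratively evolve a seed instruction.
EVOLVE_PROMPT = ("Rewrite the following programming task to be more "
                 "complex by adding one constraint:\n{task}")

def evol_instruct(seed, llm, rounds=3):
    """Return the seed plus `rounds` successively evolved rewrites."""
    pool, task = [seed], seed
    for _ in range(rounds):
        task = llm(EVOLVE_PROMPT.format(task=task))
        pool.append(task)
    return pool

# Toy "LLM" that just appends a constraint marker to the task.
toy_llm = lambda prompt: prompt.split("\n", 1)[1] + " [+constraint]"
pool = evol_instruct("Sort a list.", toy_llm)
print(pool[-1])
```

In the real pipeline the evolved pool becomes the fine-tuning dataset; the toy here only shows the accumulation of complexity.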

🔹 Publication Date: Published on Jun 14, 2023

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2306.08568
• PDF: https://arxiv.org/pdf/2306.08568
• Github: https://github.com/nlpxucan/WizardLM

🔹 Models citing this paper:
https://huggingface.co/WizardLMTeam/WizardCoder-Python-34B-V1.0
https://huggingface.co/WizardLMTeam/WizardCoder-15B-V1.0
https://huggingface.co/alpindale/WizardLM-2-8x22B

Datasets citing this paper:
https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_V2_196k
https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1
https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k

Spaces citing this paper:
https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard
https://huggingface.co/spaces/FallnAI/Quantize-HF-Models

==================================


#CodeLLM #LLM #AI #CodeGeneration #EvolInstruct
SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories

📝 Summary:
SWE-Bench++ is an automated framework that generates scalable, multilingual, repository-level coding tasks from live GitHub pull requests. It overcomes the limits of manual curation and static datasets, offering a benchmark to evaluate and improve code generation models across 11 languages.

🔹 Publication Date: Published on Dec 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.17419
• PDF: https://arxiv.org/pdf/2512.17419
• Project Page: https://research.turing.com/swebench
• Github: https://huggingface.co/papers?q=GitHub%20pull%20requests

==================================


#SoftwareEngineering #CodeGeneration #AIBenchmarking #MachineLearning #OpenSource
SecureCode v2.0: A Production-Grade Dataset for Training Security-Aware Code Generation Models

📝 Summary:
SecureCode v2.0 is a production-grade dataset of 1215 security-focused coding examples. It trains AI models to generate secure code by providing real-incident examples with vulnerable and secure implementations, attacks, defense, and operational security context across 11 languages, using a conve...

🔹 Publication Date: Published on Dec 20

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.18542
• PDF: https://arxiv.org/pdf/2512.18542
• Project Page: https://perfecxion.ai/
• Github: https://github.com/scthornton/securecode-v2

==================================


#Cybersecurity #CodeSecurity #AI #CodeGeneration #Dataset
Towards Automated Kernel Generation in the Era of LLMs

📝 Summary:
This survey explores how large language models and agent systems are automating kernel generation and optimization, a critical yet non-scalable process for AI systems. It provides a structured overview of existing approaches, datasets, and benchmarks, aiming to unify this fragmented field and out...

🔹 Publication Date: Published on Jan 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.15727
• PDF: https://arxiv.org/pdf/2601.15727
• Github: https://github.com/flagos-ai/awesome-LLM-driven-kernel-generation

==================================


#LLMs #KernelGeneration #AI #Automation #CodeGeneration
GoodVibe: Security-by-Vibe for LLM-Based Code Generation

📝 Summary:
GoodVibe secures LLM-generated code by precisely fine-tuning only a small subset of security-relevant neurons. This neuron-level framework greatly enhances code security and preserves utility with significantly fewer parameters and training costs than traditional methods.
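Neuron-level fine-tuning can be illustrated as applying gradient updates only to a chosen set of parameter indices while freezing the rest. This is a conceptual toy on plain lists; the indices, learning rate, and function name are assumptions, not the paper's method details.

```python
# Conceptual sketch: update only "security-relevant" coordinates,
# leaving every other parameter frozen.
def masked_update(params, grads, trainable_idx, lr=0.1):
    """Apply an SGD step only at the selected indices."""
    return [p - lr * g if i in trainable_idx else p
            for i, (p, g) in enumerate(zip(params, grads))]

updated = masked_update([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], {1})
print(updated)   # only index 1 moves
```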

🔹 Publication Date: Published on Feb 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.10778
• PDF: https://arxiv.org/pdf/2602.10778

==================================


#LLM #CodeGeneration #Cybersecurity #AIsecurity #MachineLearning
Code2Worlds: Empowering Coding LLMs for 4D World Generation

📝 Summary:
Code2Worlds empowers coding LLMs to generate 4D dynamic scenes by formulating the task as language-to-simulation code generation. It uses a dual-stream architecture and physics-aware closed-loop refinement to ensure physical fidelity. The system significantly outperforms baselines, uniquely generating realistic, ...

🔹 Publication Date: Published on Feb 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.11757
• PDF: https://arxiv.org/pdf/2602.11757
• Project Page: https://aigeeksgroup.github.io/Code2Worlds
• Github: https://aigeeksgroup.github.io/Code2Worlds

==================================


#LLM #CodeGeneration #4DGeneration #AISimulation #Research
Nanbeige4.1-3B: A Small General Model that Reasons, Aligns, and Acts

📝 Summary:
Nanbeige4.1-3B is a 3B-parameter model excelling in agentic behavior, code generation, and reasoning. It outperforms larger models through advanced reward modeling and training, demonstrating broad competence for a small language model.

🔹 Publication Date: Published on Feb 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.13367
• PDF: https://arxiv.org/pdf/2602.13367
• Project Page: https://huggingface.co/Nanbeige/Nanbeige4.1-3B

🔹 Models citing this paper:
https://huggingface.co/Nanbeige/Nanbeige4.1-3B

Spaces citing this paper:
https://huggingface.co/spaces/PioTio/AIMan

==================================


#LLM #AI #SmallLanguageModels #AgenticAI #CodeGeneration
TAROT: Test-driven and Capability-adaptive Curriculum Reinforcement Fine-tuning for Code Generation with Large Language Models

📝 Summary:
TAROT proposes a reinforcement fine-tuning method for code generation that uses a four-tier test suite and a capability-adaptive curriculum. This approach tailors curriculum progression to a model's skill, improving functional correctness and robustness.
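A capability-adaptive curriculum over a tiered test suite can be sketched as advancing one tier only when the model's pass rate on the current tier clears a threshold. The tier names and threshold below are illustrative assumptions.

```python
# Hypothetical four-tier curriculum: advance only when competent.
TIERS = ["basic", "edge_case", "robustness", "adversarial"]

def next_tier(current, pass_rate, threshold=0.8):
    """Advance one tier when the pass rate clears the bar; else stay."""
    i = TIERS.index(current)
    if pass_rate >= threshold and i < len(TIERS) - 1:
        return TIERS[i + 1]
    return current

print(next_tier("basic", 0.9))        # competent -> advances
print(next_tier("robustness", 0.5))   # struggling -> consolidates
```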

🔹 Publication Date: Published on Feb 17

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.15449
• PDF: https://arxiv.org/pdf/2602.15449
• Github: https://github.com/deep-diver/TAROT

==================================


#LLM #CodeGeneration #ReinforcementLearning #AI #MachineLearning
CL4SE: A Context Learning Benchmark For Software Engineering Tasks

📝 Summary:
CL4SE presents a benchmark for evaluating context learning in software engineering tasks, defining four SE-specific context types. It demonstrates an average 24.7% performance improvement for LLMs across tasks like code generation and review, establishing a standardized evaluation framework.

🔹 Publication Date: Published on Feb 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2602.23047
• PDF: https://arxiv.org/pdf/2602.23047
• Project Page: https://huggingface.co/papers?q=project-specific%20context
• Github: https://github.com/Tomsawyerhu/CodeCL

Datasets citing this paper:
https://huggingface.co/datasets/tomhu/codecl

==================================


#ContextLearning #SoftwareEngineering #LLMs #CodeGeneration #Benchmarks
V1: Unifying Generation and Self-Verification for Parallel Reasoners

📝 Summary:
V1 unifies generation and verification for complex reasoning tasks. It leverages models' superior ability in pairwise self-verification over independent scoring, improving performance and efficiency in code generation and math.
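Pairwise selection over candidate solutions can be sketched as a knockout pass where a comparator judges two candidates head-to-head. `prefer` below is a stand-in for the model's pairwise judgment, and the toy comparator is an assumption for illustration.

```python
# Sketch of pairwise self-verification: instead of scoring candidates
# independently, the running winner meets each challenger head-to-head.
def select_by_pairwise(candidates, prefer):
    """Single-pass knockout over the candidate list."""
    best = candidates[0]
    for challenger in candidates[1:]:
        if prefer(challenger, best):    # True if the challenger wins
            best = challenger
    return best

# Toy comparator: prefer the longer (e.g. more complete) solution.
winner = select_by_pairwise(["x = 1", "x = 1\nreturn x", "pass"],
                            lambda a, b: len(a) > len(b))
print(winner)
```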

🔹 Publication Date: Published on Mar 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2603.04304
• PDF: https://arxiv.org/pdf/2603.04304
• Project Page: https://harmandotpy.github.io/v1-verification/
• Github: https://github.com/HarmanDotpy/pairwise-self-verification

==================================


#AI #LLMs #MachineLearning #CodeGeneration #AIReasoning