Machine learning books and papers
22.8K subscribers
974 photos
54 videos
928 files
1.31K links
Admin: @Raminmousa
WhatsApp: +989333900804
ID: @Machine_learn
link: https://t.iss.one/Machine_learn
📄A Survey of Genetic Programming Applications in Modern Biological Research


📎 Study the paper


@Machine_learn
Discrete Mathematics and Applications

🔗 link

@Machine_learn
⭐️ Fast Think-on-Graph: Wider, Deeper and Faster Reasoning of Large Language Model on Knowledge Graph

🖥 Github: https://github.com/dosonleung/fasttog

📕 Paper: https://arxiv.org/abs/2501.14300v1


@Machine_learn
Foundations of Geometry. David Hilbert, Ph.D.

📚 Book


@Machine_learn
ChatGPT Cheat Sheet for Business (2025).pdf
8 MB
ChatGPT Cheat Sheet for Business - DataCamp

@Machine_learn
📃 Perspectives on Computational Enzyme Modeling: From Mechanisms to Design and Drug Development


📎 Study the paper


@Machine_learn
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation

We present JanusFlow, a powerful framework that unifies image understanding and generation in a single model. JanusFlow introduces a minimalist architecture that integrates autoregressive language models with rectified flow, a state-of-the-art method in generative modeling. Our key finding demonstrates that rectified flow can be straightforwardly trained within the large language model framework, eliminating the need for complex architectural modifications. To further improve the performance of our unified model, we adopt two key strategies: (i) decoupling the understanding and generation encoders, and (ii) aligning their representations during unified training. Extensive experiments show that JanusFlow achieves comparable or superior performance to specialized models in their respective domains, while significantly outperforming existing unified approaches across standard benchmarks. This work represents a step toward more efficient and versatile vision-language models.

Paper: https://arxiv.org/pdf/2411.07975v1.pdf

Code: https://github.com/deepseek-ai/janus

Datasets: GQA, MMBench, MM-Vet, SEED-Bench
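
The abstract's core objective is compact enough to sketch. Below is a minimal, self-contained toy of rectified-flow training, assuming PyTorch: a small network learns to predict the constant velocity of the straight path between noise and data. All names here (TinyVelocityNet, rectified_flow_loss) are illustrative assumptions, not the authors' code; see the Code link above for the real implementation.

# Toy rectified-flow training step (illustrative; not the JanusFlow code).
import torch
import torch.nn as nn

class TinyVelocityNet(nn.Module):
    """Stand-in for the LLM backbone JanusFlow uses as the velocity field."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim)
        )

    def forward(self, x_t, t):
        # Condition on the interpolation time t by simple concatenation.
        return self.net(torch.cat([x_t, t], dim=-1))

def rectified_flow_loss(model, x1):
    """Regress the constant velocity (x1 - x0) of the straight noise-to-data path."""
    x0 = torch.randn_like(x1)          # noise endpoint
    t = torch.rand(x1.size(0), 1)      # uniform time in [0, 1]
    x_t = (1 - t) * x0 + t * x1        # linear interpolation between endpoints
    return ((model(x_t, t) - (x1 - x0)) ** 2).mean()

model = TinyVelocityNet()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x1 = torch.randn(32, 64)               # pretend batch of image latents
opt.zero_grad()
loss = rectified_flow_loss(model, x1)
loss.backward()
opt.step()

Sampling then just integrates the learned velocity from t = 0 to t = 1, which is what lets the same language-model framework emit images without diffusion-specific machinery.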

@Machine_learn
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

A paper submitted by the #DeepSeek team has generated significant attention in the AI community.

This work addresses the enhancement of reasoning capabilities in Large Language Models (LLMs) through the application of reinforcement learning techniques. The authors introduce a novel framework, DeepSeek-R1, which aims to improve LLM reasoning abilities by incorporating incentives for logical reasoning processes within their training. This integration of reinforcement learning allows LLMs to go beyond basic linguistic processing, developing sophisticated reasoning methods that can boost performance across a wide array of complex applications.
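
For a concrete sense of the "incentive" in the title: R1's training reward is largely rule-based, combining an accuracy check on the final answer with a format check that the model wrapped its reasoning in explicit tags. Here is a toy sketch in Python; the exact tag layout and the 0.5 weighting are assumptions for illustration, not the paper's exact recipe.

# Toy rule-based reward in the spirit of DeepSeek-R1 (illustrative only).
import re

def format_reward(completion):
    # 1.0 if reasoning sits inside <think> tags followed by an <answer> block.
    pattern = r"^<think>.*?</think>\s*<answer>.*?</answer>\s*$"
    return 1.0 if re.match(pattern, completion, re.DOTALL) else 0.0

def accuracy_reward(completion, gold):
    # 1.0 if the extracted final answer matches the reference exactly.
    m = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    return 1.0 if m and m.group(1).strip() == gold.strip() else 0.0

def total_reward(completion, gold):
    # Scalar fed to the policy-gradient update (relative weight assumed).
    return accuracy_reward(completion, gold) + 0.5 * format_reward(completion)

print(total_reward("<think>2 + 2 = 4</think><answer>4</answer>", "4"))  # 1.5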

This approach has sparked extensive discussion across different communities, but it clearly opens up a whole new direction for research.

Paper: https://arxiv.org/abs/2501.12948

#nn #LLM

@Machine_learn
International AI Safety Report

📚 Report

@Machine_learn
🐋 DeepClaude


# Clone the DeepClaude repository and move into it
git clone https://github.com/getasterisk/deepclaude.git
cd deepclaude

▪️ Github
▪️ Docs

@Machine_learn
Last chance to take part in this project: until the end of tonight...!
@Raminmousa