A Survey of Genetic Programming Applications in Modern Biological Research
Study the paper
@Machine_learn
@Machine_learn
SmolVLM -
Model: https://huggingface.co/collections/HuggingFaceTB/smolvlm-256m-and-500m-6791fafc5bb0ab8acc960fb0
@Machine_learn
Perspectives on Computational Enzyme Modeling: From Mechanisms to Design and Drug Development
Study the paper
@Machine_learn
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
We present JanusFlow, a powerful framework that unifies image understanding and generation in a single model. JanusFlow introduces a minimalist architecture that integrates autoregressive language models with rectified flow, a state-of-the-art method in generative modeling. Our key finding demonstrates that rectified flow can be straightforwardly trained within the large language model framework, eliminating the need for complex architectural modifications. To further improve the performance of our unified model, we adopt two key strategies: (i) decoupling the understanding and generation encoders, and (ii) aligning their representations during unified training. Extensive experiments show that JanusFlow achieves comparable or superior performance to specialized models in their respective domains, while significantly outperforming existing unified approaches across standard benchmarks. This work represents a step toward more efficient and versatile vision-language models.
Paper: https://arxiv.org/pdf/2411.07975v1.pdf
Code: https://github.com/deepseek-ai/janus
Datasets: GQA MMBench MM-Vet SEED-Bench
@Machine_learn
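The abstract's key claim is that rectified flow trains with a simple regression objective: interpolate linearly between a noise sample and a data sample, and regress a velocity field onto the constant displacement between them. The sketch below illustrates just that objective on made-up 1-D data with a linear model — it is a toy illustration of the technique, not JanusFlow's code, and all names in it are hypothetical.

```python
# Toy rectified-flow sketch (NOT JanusFlow's implementation).
# Rectified flow learns a velocity field v(x_t, t) along straight paths
# x_t = (1 - t) * x0 + t * x1, with regression target v* = x1 - x0.
import numpy as np

rng = np.random.default_rng(0)

def sample_batch(n=256):
    x0 = rng.normal(0.0, 1.0, n)      # "noise" samples
    x1 = rng.normal(3.0, 0.1, n)      # made-up 1-D "data" near 3.0
    t = rng.uniform(0.0, 1.0, n)      # random interpolation times
    xt = (1 - t) * x0 + t * x1        # point on the straight path
    target = x1 - x0                  # constant velocity along that path
    return xt, t, target

# Tiny linear model v(x, t) = a*x + b*t + c, fit by stochastic gradient
# descent on the mean-squared velocity error.
a = b = c = 0.0
lr = 0.05
for _ in range(5000):
    xt, t, target = sample_batch()
    err = (a * xt + b * t + c) - target   # d(MSE)/d(pred), up to a factor 2
    a -= lr * np.mean(err * xt)
    b -= lr * np.mean(err * t)
    c -= lr * np.mean(err)

# Generation: integrate dx/dt = v(x, t) from a noise sample with Euler steps.
x = rng.normal()
for step in range(100):
    t = step / 100
    x += 0.01 * (a * x + b * t + c)
print(f"{x:.2f}")  # should land near the data mean (~3)
```

The point of the sketch is the simplicity the paper exploits: the whole generative objective is a regression loss, which is why it slots into a standard language-model training loop without architectural surgery.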
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
Paper submitted by #DeepSeek team has generated significant attention in the AI community.
This work addresses the enhancement of reasoning capabilities in Large Language Models (LLMs) through the application of reinforcement learning techniques. The authors introduce a novel framework, DeepSeek-R1, which aims to improve LLM reasoning abilities by incorporating incentives for logical reasoning processes within their training. This integration of reinforcement learning allows LLMs to go beyond basic linguistic processing, developing sophisticated reasoning methods that can boost performance across a wide array of complex applications.
This approach has sparked a lot of discussion across different communities, but it definitely opens up a whole new direction for research.
Paper: https://arxiv.org/abs/2501.12948
#nn #LLM
@Machine_learn
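The "incentives for logical reasoning" mentioned above are, at their core, rule-based rewards applied to sampled completions. Below is a minimal sketch of that idea — a reward that pays for a structured reasoning format plus a correct final answer. The tag names and weights are assumptions for illustration, not DeepSeek's actual implementation.

```python
# Minimal rule-based reward sketch for reasoning RL (hypothetical weights
# and tag format; NOT DeepSeek-R1's code). The model is asked to wrap its
# reasoning in <think> tags and its final answer in <answer> tags.
import re

def reward(completion: str, reference_answer: str) -> float:
    """Score one sampled completion: +0.5 for following the required
    format, +1.0 if the extracted answer matches the reference."""
    score = 0.0
    m = re.search(r"<think>.*?</think>\s*<answer>(.*?)</answer>",
                  completion, flags=re.DOTALL)
    if m:
        score += 0.5                              # format reward
        if m.group(1).strip() == reference_answer:
            score += 1.0                          # accuracy reward
    return score

good = "<think>2+2 is 4</think> <answer>4</answer>"
bad_format = "The answer is 4."
wrong = "<think>guessing</think> <answer>5</answer>"
print(reward(good, "4"), reward(bad_format, "4"), reward(wrong, "4"))
# → 1.5 0.0 0.5
```

A reinforcement-learning loop then samples many completions per question and updates the policy toward higher-reward ones — no human preference labels are needed when the reward can be checked by a rule like this.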
@Machine_learn
NLP_With_Transformers.pdf
8.2 MB
Natural Language Processing with Transformers: Building Language Applications with Hugging Face
#Book
@Machine_learn
DeepClaude
▪️ Github
▪️ Docs
@Machine_learn
git clone https://github.com/getasterisk/deepclaude.git
cd deepclaude
Last chance to take part in this project: the end of tonight...!
@Raminmousa