Machine learning books and papers
22.8K subscribers
974 photos
54 videos
928 files
1.31K links
Admin: @Raminmousa
WhatsApp: +989333900804
ID: @Machine_learn
link: https://t.iss.one/Machine_learn
Building Blocks for Theoretical Computer Science

🎓 Link

@Machine_learn
๐Ÿ‘1๐Ÿ”ฅ1
🌟 AlphaFold 3

🟡 Paper
🟡 Demo
🖥 GitHub


@Machine_learn
Forwarded from Github LLMs
🌟 LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models
🟡 Arxiv
🖥 GitHub

https://t.iss.one/deep_learning_proj
Friends, the output of this work will be 3 papers...!
New research papers and GitHub code

🟢 Motivo
🟡 Paper 🟡 Demo 🟡 GitHub
🟢 Video Seal
🟡 Paper 🟡 Demo 🟡 GitHub
🟢 Flow Matching
🟡 Paper 🟡 GitHub
🟢 Explore Theory-of-Mind
🟡 Paper 🟡 GitHub 🟡 Dataset
🟢 Large Concept Model (LCM)
🟡 Paper 🟡 GitHub
🟢 Dynamic Byte Latent Transformer
🟡 Paper 🟡 GitHub
🟢 Memory Layers
🟡 Paper 🟡 GitHub
🟢 EvalGym
🟡 Paper 🟡 GitHub
🟢 CLIP 1.2
🟡 Paper 🟡 GitHub 🟡 Dataset 🟡 Model

@Machine_learn
Forwarded from Papers
Greetings,
Our first LLM paper is at the submission stage. A fourth author can still be added. To participate, contact me via my ID.


ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs


Abstract
Objective: This paper introduces ExKG-LLM, a framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs), with the goal of improving the accuracy, completeness and usefulness of knowledge graphs in cognitive neuroscience.

Method: To address the limitations of existing knowledge-graph construction tools, especially in handling the complex hierarchical relationships of the cognitive neuroscience literature, the ExKG-LLM framework applies state-of-the-art LLMs to a large dataset of scientific papers and clinical reports to extract, optimize and integrate new entities and relationships into the CNKG, evaluating performance with metrics such as precision, recall and graph density.

Findings: ExKG-LLM achieved significant improvements, including a precision of 0.80 (up 6.67%), recall of 0.81 (up 15.71%) and F1 score of 0.805 (up 11.81%), while the numbers of edges and nodes increased by 21.13% and 31.92%, respectively. Graph density decreased slightly, reflecting a broader but more fragmented structure, and engagement rates increased by 20%, highlighting areas where stability needs improvement. From a complex-network perspective, the CNKG diameter grew from 13 to 15, showing that although the graph is larger, more steps are now required to reach additional nodes. Time complexity improved to O(n log n), but space complexity became less efficient, rising to O(n^2), indicating higher memory usage for managing the expanded graph.
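
A quick consistency check (not part of the paper) confirms that the reported F1 score follows from the stated precision and recall:

precision, recall = 0.80, 0.81
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
print(round(f1, 3))  # 0.805, matching the reported F1 score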
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb
@Raminmousa
@Machine_learn
https://t.iss.one/+SP9l58Ta_zZmYmY0
โค1๐Ÿ”ฅ1
Approaching (Almost) Any Machine Learning Problem.pdf
8 MB
Approaching (Almost) Any Machine Learning Problem
#Book
#ML

@Machine_learn
๐Ÿ‘2
GAN.pdf
794.1 KB
Text-to-Image Generation with GANs
#GANs
@Machine_learn
๐Ÿ‘1
Forwarded from Papers
Greetings,
Our first LLM paper is at the submission stage. A fourth author can still be added. To participate, contact me via my ID.


ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs (title and abstract as in the post above)
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb


Participation fee: 12 million.
@Raminmousa
@Machine_learn
https://t.iss.one/+SP9l58Ta_zZmYmY0
โค1๐Ÿ‘1
📌 Convex Optimization

Book

@Machine_learn
๐Ÿ‘2
This media is not supported in your browser
VIEW IN TELEGRAM
🌟 RLtools

🟢 TD3 - Pendulum, Racing Car, MuJoCo Ant-v4, Acrobot;
🟢 PPO - Pendulum, Racing Car, MuJoCo Ant-v4 (CPU), MuJoCo Ant-v4 (CUDA);
🟢 Multi-Agent PPO - Bottleneck;
🟢 SAC - Pendulum (CPU), Pendulum (CUDA), Acrobot.





# Clone and checkout
git clone https://github.com/rl-tools/example
cd example
git submodule update --init external/rl_tools

# Build and run
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
./my_pendulum





🟡 Arxiv
🟡 RLTools Design Studio
🟡 Demo
🟡 Zoo Experiment Tracking
🟡 Google Colab (Python Interface)
🖥 GitHub


@Machine_learn
04. CNN Transfer Learning.pdf
2.1 MB
📚 Transfer Learning for CNNs: Leveraging Pre-trained Models


Transfer learning is a machine learning technique where a pre-trained model is used as a starting point for a new task. In the context of convolutional neural networks (CNNs), this means using a CNN that has been trained on a large dataset for one task (e.g., ImageNet) as a foundation for a new task (e.g., classifying medical images).


🌍 Why Transfer Learning?


1. Reduced Training Time: Training a CNN from scratch on a large dataset can be computationally expensive and time-consuming. Transfer learning allows you to leverage the knowledge learned by the pre-trained model, reducing training time significantly.
2. Improved Performance: Pre-trained models have often been trained on massive datasets, allowing them to learn general-purpose features that can be useful for a wide range of tasks. Using these pre-trained models can improve the performance of your new task.
3. Smaller Datasets: Transfer learning can be particularly useful when you have a small dataset for your new task. By using a pre-trained model, you can augment your limited data with the knowledge learned from the larger dataset.


💸 How Transfer Learning Works (a minimal code sketch follows the steps below):


1. Choose a Pre-trained Model: Select a pre-trained CNN that is suitable for your task. Common choices include VGG16, ResNet, InceptionV3, and EfficientNet.
2. Freeze Layers: Typically, the earlier layers of a CNN learn general-purpose features, while the later layers learn more task-specific features. You can freeze the earlier layers of the pre-trained model to prevent them from being updated during training. This helps to preserve the learned features
3. Add New Layers: Add new layers, such as fully connected layers or convolutional layers, to the end of the pre-trained model. These layers will be trained on your new dataset to learn task-specific features.
4. Fine-tune: Train the new layers on your dataset while keeping the frozen layers fixed. This process is called fine-tuning.
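
A minimal sketch of these four steps, assuming TensorFlow/Keras with ImageNet weights, 224x224 RGB inputs, and a hypothetical 10-class target task (these specifics are illustrative, not from the text above):

# Transfer-learning sketch (TensorFlow/Keras assumed; dataset, input size
# and class count are placeholders).
import tensorflow as tf

# 1. Choose a pre-trained model (ImageNet weights, classification head removed)
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# 2. Freeze the pre-trained layers so their weights stay fixed
base.trainable = False

# 3. Add new task-specific layers on top of the frozen base
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes assumed
])

# 4. Fine-tune: only the newly added layers are updated during training
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # plug in your own data

To fine-tune deeper layers as well, unfreeze the last few blocks of the base model and re-compile with a lower learning rate.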


🔊 Common Transfer Learning Scenarios (a feature-extraction sketch follows the list):


1. Feature Extraction: Extract features from the pre-trained model and use them as input to a different model, such as a support vector machine (SVM) or a random forest.
2. Fine-tuning: Fine-tune the pre-trained model on your new dataset to adapt it to your specific task.
3. Hybrid Approach: Combine feature extraction and fine-tuning by extracting features from the pre-trained model and using them as input to a new model, while also fine-tuning some layers of the pre-trained model.
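
A sketch of scenario 1 (feature extraction), assuming TensorFlow/Keras for the frozen extractor and scikit-learn for the SVM; the random arrays below are stand-ins for a real dataset:

# Use a frozen CNN as a fixed feature extractor and train a classical
# classifier (an SVM here) on the pooled features.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

extractor = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))

X = np.random.rand(32, 224, 224, 3).astype("float32") * 255  # stand-in images
y = np.random.randint(0, 2, size=32)                          # stand-in labels

features = extractor.predict(
    tf.keras.applications.mobilenet_v2.preprocess_input(X))
clf = SVC(kernel="rbf").fit(features, y)
print("train accuracy:", clf.score(features, y))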


Transfer learning is a powerful technique that can significantly improve the performance and efficiency of CNNs, especially when working with limited datasets or time constraints.

🚀 Commonly Used Transfer Learning Methods (a loading sketch follows the list):

1๏ธโƒฃ. VGG16: A simple yet effective CNN architecture with multiple convolutional layers followed by max-pooling layers. It excels at image classification tasks.

2๏ธโƒฃ . MobileNet: Designed for mobile and embedded vision applications, MobileNet uses depthwise separable convolutions to reduce the number of parameters and computational cost.

3๏ธโƒฃ DenseNet: Connects each layer to every other layer, promoting feature reuse and improving information flow. It often achieves high accuracy with fewer parameters.

4๏ธโƒฃ Inception: Employs a combination of different sized convolutional filters in parallel, capturing features at multiple scales. It's known for its efficient use of computational resources.

5๏ธโƒฃ ResNet: Introduces residual connections, enabling the network to learn more complex features by allowing information to bypass layers. It addresses the vanishing gradient problem.

6๏ธโƒฃ EfficientNet: A family of models that systematically scale up network width, depth, and resolution using a compound scaling method. It achieves state-of-the-art accuracy with improved efficiency.

7๏ธโƒฃ NASNet: Leverages neural architecture search to automatically design efficient CNN architectures. It often outperforms manually designed models in terms of accuracy and efficiency.

@Machine_learn
๐Ÿ‘7
Large Language Models Course: Learn by Doing LLM Projects

🖥 GitHub: https://github.com/peremartra/Large-Language-Model-Notebooks-Course

📕 Paper: https://doi.org/10.31219/osf.io/qgxea

@Machine_learn
Python for Everybody Exploring Data Using Python 3

📓 Book

@Machine_learn
๐Ÿ‘3
KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation

Paper: https://arxiv.org/pdf/2409.13731v3.pdf

Code: https://github.com/openspg/kag

Dataset: 2WikiMultiHopQA

🔸@Machine_learn
Arcade Academy - Learn Python

📖 Book

@Machine_learn
📄 RNA Sequencing Data: Hitchhiker's Guide to Expression Analysis


📎 Study the paper


@Machine_learn
๐Ÿ‘2