Forwarded from Python | Machine Learning | Coding | R
SciPy.pdf
206.4 KB
Unlock the full power of SciPy with my comprehensive cheat sheet!
Master essential functions for:
Function optimization and solving equations
Linear algebra operations
ODE integration and statistical analysis
Signal processing and spatial data manipulation
Data clustering and distance computation, and much more! (See the quick sketch below.)
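A quick sketch of a few of these areas, assuming NumPy and SciPy are installed (the calls shown are standard SciPy functions, not excerpts from the cheat sheet itself):

# Optimization, linear algebra, ODE integration, and statistics in a few lines
import numpy as np
from scipy import optimize, linalg, integrate, stats

# Optimization: minimize f(x) = (x - 3)^2
res = optimize.minimize(lambda x: (x[0] - 3.0) ** 2, x0=[0.0])
print("minimum near x =", res.x)

# Linear algebra: solve Ax = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
print("solution of Ax = b:", linalg.solve(A, b))

# ODE integration: dy/dt = -2y, y(0) = 1, on t in [0, 5]
sol = integrate.solve_ivp(lambda t, y: -2.0 * y, t_span=(0.0, 5.0), y0=[1.0])
print("y(5) is approximately", sol.y[0, -1])

# Statistics: two-sample t-test on synthetic data
rng = np.random.default_rng(0)
t_stat, p_value = stats.ttest_ind(rng.normal(0, 1, 100), rng.normal(0.5, 1, 100))
print("t =", t_stat, "p =", p_value)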
BEST DATA SCIENCE CHANNELS ON TELEGRAM
#Python #SciPy #MachineLearning #DataScience #CheatSheet #ArtificialIntelligence #Optimization #LinearAlgebra #SignalProcessing #BigData
Mastering CNNs: From Kernels to Model Evaluation
If you're learning Computer Vision, understanding the Conv2D layer in Convolutional Neural Networks (#CNNs) is crucial. Let's break it down from basic to advanced.
1. What is Conv2D?
Conv2D is a 2D convolutional layer used in image processing. It takes an image as input and applies filters (also called kernels) to extract features.
2. What is a Kernel (or Filter)?
A kernel is a small matrix (like 3x3 or 5x5) that slides over the image and performs element-wise multiplication and summing.
A 3x3 kernel means the filter looks at 3x3 chunks of the image.
The kernel detects patterns like edges, textures, etc.
Example:
A vertical edge detection kernel might look like:
[-1, 0, 1]
[-1, 0, 1]
[-1, 0, 1]
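As a rough illustration (assuming NumPy and SciPy are available), here is that kernel applied to a tiny synthetic image with one vertical edge. scipy.ndimage.correlate performs the sliding multiply-and-sum described above, which is the cross-correlation that Conv2D layers actually compute:

import numpy as np
from scipy.ndimage import correlate

kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

# 6x6 image: dark left half (0), bright right half (1) -> one vertical edge
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Slide the kernel over the image, multiply element-wise, and sum
response = correlate(image, kernel, mode="constant")
print(response)  # large values mark the columns around the edge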
3. What Are Filters in Conv2D?
In CNNs, we don't use just one filter; we use multiple filters in a single Conv2D layer.
Each filter learns to detect a different feature (e.g., horizontal lines, curves, textures).
So if you have 32 filters in the Conv2D layer, you'll get 32 feature maps.
More Filters = More Features = More Learning Power
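A minimal sketch of this point, assuming TensorFlow/Keras is installed (the 64x64 grayscale input is just an example, not a requirement):

import tensorflow as tf

conv = tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")
images = tf.random.normal((1, 64, 64, 1))   # batch of one 64x64 grayscale image
feature_maps = conv(images)
print(feature_maps.shape)                   # (1, 62, 62, 32): 32 feature maps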
4. Kernel Size and Its Impact
Smaller kernels (e.g., 3x3) are most common; they capture fine details.
Larger kernels (e.g., 5x5 or 7x7) capture broader patterns, but increase computational cost.
Many CNNs stack multiple small kernels (like 3x3) to simulate a large receptive field while keeping complexity low.
5. Life Cycle of a CNN Model (From Data to Evaluation)
Let's visualize how a CNN model works from start to finish:
Step 1: Data Collection
Images are gathered and labeled (e.g., cat vs dog).
Step 2: Preprocessing
Resize images
Normalize pixel values
Data augmentation (flipping, rotation, etc.)
Step 3: Model Building (Conv2D layers)
Add Conv2D + Activation (ReLU)
Use Pooling layers (MaxPooling2D)
Add Dropout to prevent overfitting
Flatten and connect to Dense layers
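A minimal Step 3 sketch, assuming TensorFlow/Keras; the input size, layer widths, and cat-vs-dog framing are illustrative choices, not a prescribed architecture:

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),             # RGB input images
    layers.Conv2D(32, (3, 3), activation="relu"),  # Conv2D + ReLU
    layers.MaxPooling2D((2, 2)),                   # pooling
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),                          # dropout against overfitting
    layers.Flatten(),                              # flatten to a vector
    layers.Dense(128, activation="relu"),          # dense layer
    layers.Dense(1, activation="sigmoid"),         # cat-vs-dog probability
])
model.summary()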
Step 4: Training the Model
Feed data in batches
Use loss function (like cross-entropy)
Optimize using backpropagation + optimizer (like Adam)
Adjust weights over several epochs
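Continuing that sketch for Step 4; the random arrays below are hypothetical stand-ins for a real labeled dataset:

import numpy as np

x_train = np.random.rand(64, 128, 128, 3).astype("float32")  # dummy images
y_train = np.random.randint(0, 2, size=(64, 1))               # dummy labels

model.compile(optimizer="adam",                # Adam optimizer
              loss="binary_crossentropy",      # cross-entropy loss
              metrics=["accuracy"])
model.fit(x_train, y_train,
          batch_size=16,                       # data fed in batches
          epochs=3,                            # weights adjusted over epochs
          validation_split=0.25)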
Step 5: Evaluation
Test the model on unseen data
Use metrics like Accuracy, Precision, Recall, F1-Score
Visualize using confusion matrix
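And a Step 5 sketch continuing from the model trained above, assuming scikit-learn for the metrics (the test arrays are again dummy placeholders for real held-out data):

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

x_test = np.random.rand(16, 128, 128, 3).astype("float32")   # dummy test images
y_test = np.random.randint(0, 2, size=16)                    # dummy test labels

probs = model.predict(x_test)
preds = (probs > 0.5).astype(int).ravel()
print(classification_report(y_test, preds))   # precision, recall, F1-score
print(confusion_matrix(y_test, preds))        # confusion matrix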
Step 6: Deployment
Convert model to suitable format (e.g., ONNX, TensorFlow Lite)
Deploy on web, mobile, or edge devices
Summary
Conv2D uses filters (kernels) to extract image features.
More filters = better feature detection.
The CNN pipeline takes raw image data, learns features, and gives powerful predictions.
If this helped you, let me know! Or feel free to share your experience learning CNNs!
BEST DATA SCIENCE CHANNELS ON TELEGRAM
Forwarded from Python | Machine Learning | Coding | R
Dive deep into the world of Transformers with this comprehensive PyTorch implementation guide. Whether you're a seasoned ML engineer or just starting out, this resource breaks down the complexities of the Transformer model, inspired by the groundbreaking paper "Attention Is All You Need".
https://www.k-a.in/pyt-transformer.html
This guide offers a step-by-step walkthrough of the architecture. By following along, you'll gain a solid understanding of how Transformers work and how to implement them from scratch.
#MachineLearning #DeepLearning #PyTorch #Transformer #AI #NLP #AttentionIsAllYouNeed #Coding #DataScience #NeuralNetworks
How do transformers work? Learn it by hand.
Walkthrough
[1] Given
↳ Input features from the previous block (5 positions)
[2] Attention
↳ Feed all 5 features to a query-key attention module (QK) to obtain an attention weight matrix (A). I will skip the details of this module here and unpack it in a follow-up post.
[3] Attention Weighting
↳ Multiply the input features by the attention weight matrix to obtain attention-weighted features (Z). Note that there are still 5 positions.
↳ The effect is to combine features across positions (horizontally); in this case, X1 := X1 + X2, X2 := X2 + X3, and so on.
[4] FFN: First Layer
↳ Feed all 5 attention-weighted features into the first layer.
↳ Multiply these features by the weights and add the biases.
↳ The effect is to combine features across feature dimensions (vertically).
↳ The dimensionality of each feature is increased from 3 to 4.
↳ Note that each position is processed by the same weight matrix. This is what the term "position-wise" refers to.
↳ Note that the FFN is essentially a multi-layer perceptron.
[5] ReLU
↳ Negative values are set to zero by ReLU.
[6] FFN: Second Layer
↳ Feed all 5 features (now d=4) into the second layer.
↳ The dimensionality of each feature is decreased from 4 back to 3.
↳ The output is fed to the next block to repeat this process.
↳ Note that the next block has a completely separate set of parameters.
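A minimal NumPy sketch of the same walkthrough. The attention matrix A is assumed to be already computed; here it simply adds the next position, matching the X1 := X1 + X2 example above, and the weight values are random placeholders:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))        # [1] input features: 5 positions, d=3

# [2]-[3] attention weighting: A is 5x5; row i mixes features across positions
A = np.eye(5) + np.eye(5, k=1)     # X1 := X1 + X2, X2 := X2 + X3, ...
Z = A @ X                          # attention-weighted features, still 5 positions

# [4] FFN first layer: the same W1/b1 applied at every position ("position-wise")
W1 = rng.normal(size=(3, 4)); b1 = rng.normal(size=(4,))
H = Z @ W1 + b1                    # dimensionality 3 -> 4

# [5] ReLU: negative values set to zero
H = np.maximum(H, 0.0)

# [6] FFN second layer: dimensionality 4 -> 3, output goes to the next block
W2 = rng.normal(size=(4, 3)); b2 = rng.normal(size=(3,))
out = H @ W2 + b2
print(out.shape)                   # (5, 3)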
#ai #transformers #genai #learning
BEST DATA SCIENCE CHANNELS ON TELEGRAM
Forwarded from Python | Machine Learning | Coding | R
Carnegie Mellon University in the United States offers a free #datamining course, taught across 25 lectures, for anyone interested in the field.
Forwarded from Python | Machine Learning | Coding | R
This channel is for programmers, coders, and software engineers.
0️⃣ Python
1️⃣ Data Science
2️⃣ Machine Learning
3️⃣ Data Visualization
4️⃣ Artificial Intelligence
5️⃣ Data Analysis
6️⃣ Statistics
7️⃣ Deep Learning
8️⃣ Programming Languages
https://t.iss.one/addlist/8_rRW2scgfRhOTc0
https://t.iss.one/Codeprogrammer
Forwarded from Python | Machine Learning | Coding | R
Full PyTorch Implementation of Transformer-XL
If you're looking to understand and experiment with Transformer-XL using PyTorch, this resource provides a clean and complete implementation. Transformer-XL is a powerful model that extends the Transformer architecture with recurrence, enabling it to learn dependencies beyond fixed-length segments.
The implementation is ideal for researchers, students, and developers aiming to dive deeper into advanced language modeling techniques.
Explore the code and start building:
https://www.k-a.in/pyt-transformerXL.html
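For intuition, here is a tiny, hypothetical sketch of the core recurrence idea. It is not the linked implementation: it uses PyTorch's stock MultiheadAttention and omits Transformer-XL's relative positional encodings, keeping only the caching of the previous segment's hidden states as memory:

import torch
import torch.nn as nn

class TinyRecurrentSegmentLayer(nn.Module):
    def __init__(self, d_model=32, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x, memory=None):
        # Keys/values cover the cached previous segment plus the current one
        context = x if memory is None else torch.cat([memory, x], dim=1)
        out, _ = self.attn(x, context, context)
        # Current hidden states become the (detached) memory for the next segment
        return out, out.detach()

layer = TinyRecurrentSegmentLayer()
seg1 = torch.randn(1, 16, 32)          # segment 1: batch=1, 16 tokens, d_model=32
seg2 = torch.randn(1, 16, 32)          # segment 2
out1, mem = layer(seg1)                # no memory yet
out2, _ = layer(seg2, memory=mem)      # segment 2 attends to segment 1's states
print(out2.shape)                      # torch.Size([1, 16, 32])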
#TransformerXL #PyTorch #DeepLearning #NLP #LanguageModeling #AI #MachineLearning #OpenSource #ResearchTools
https://t.iss.one/CodeProgrammer
Forwarded from Python | Machine Learning | Coding | R
Top 100+ Questions "Google Data Science Interview".pdf
16.7 MB
Google is known for its rigorous data science interview process, which typically follows a hybrid format. Candidates are expected to demonstrate strong programming skills, solid knowledge in statistics and machine learning, and a keen ability to approach problems from a product-oriented perspective.
To succeed, one must be proficient in several critical areas: statistics and probability, SQL and Python programming, product sense, and case study-based analytics.
This curated list features over 100 of the most commonly asked and important questions in Google data science interviews. It serves as a comprehensive resource to help candidates prepare effectively and confidently for the challenge ahead.
#DataScience #GoogleInterview #InterviewPrep #MachineLearning #SQL #Statistics #ProductAnalytics #Python #CareerGrowth
https://t.iss.one/addlist/0f6vfFbEMdAwODBk
@CodeProgrammer Matplotlib.pdf
4.3 MB
The Complete Visual Guide for Data Enthusiasts
Matplotlib is a powerful Python library for data visualization, essential not only for acing job interviews but also for building a solid foundation in analytical thinking and data storytelling.
This step-by-step tutorial guide walks learners through everything from the basics to advanced techniques in Matplotlib. It also includes a curated collection of the most frequently asked Matplotlib-related interview questions, making it an ideal resource for both beginners and experienced professionals.
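For anyone starting from zero, a minimal example of the kind of plot the basics cover (illustrative, not taken from the guide; assumes Matplotlib and NumPy are installed):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.plot(x, np.cos(x), label="cos(x)")
plt.xlabel("x")
plt.ylabel("value")
plt.title("A first Matplotlib figure")
plt.legend()
plt.show()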
#Matplotlib #DataVisualization #Python #DataScience #InterviewPrep #Analytics #TechCareer #LearnToCode
https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Automate Dataset Labeling with Active Learning
A few years ago, training AI models required massive amounts of labeled data. Manually collecting and labeling this data was both time-consuming and expensive. But thankfully, we've come a long way since then, and now we have much more powerful tools and techniques to help us automate this labeling process. One of the most effective ways? Active learning.
In this article, we'll walk through the concept of active learning, how it works, and share a step-by-step implementation of how to automate dataset labeling for a text classification task using this method.
Read article: https://machinelearningmastery.com/automate-dataset-labeling-with-active-learning/
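For a feel of the mechanics, here is a minimal, hypothetical sketch of pool-based active learning with uncertainty (least-confidence) sampling. The synthetic dataset and logistic regression model are stand-ins, not the article's code:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)            # least-confident sampling
    query = [pool[i] for i in np.argsort(-uncertainty)[:10]]
    labeled.extend(query)                            # "label" the queried points
    pool = [i for i in pool if i not in query]
    print(f"round {round_}: labeled={len(labeled)}, acc={model.score(X, y):.3f}")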
https://t.iss.one/DataScienceM
Forwarded from Data Science Premium (Books & Courses)
Join our WhatsApp channel
Tell your friends
https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Forwarded from Python | Machine Learning | Coding | R
Machine Learning Notes (1).pdf
4.9 MB
Machine learning notes, with a real project and discussion.
#MachineLearning #AI #DataScience #MLAlgorithms #DeepLearning
https://t.iss.one/CodeProgrammer
Forwarded from Python | Machine Learning | Coding | R
Join our WhatsApp channel:
https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
How to Combine Pandas, NumPy, and Scikit-learn Seamlessly
Read Article: https://machinelearningmastery.com/how-to-combine-pandas-numpy-and-scikit-learn-seamlessly/
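A minimal sketch of what the combination looks like in practice (illustrative toy data, not the article's example): a pandas DataFrame goes into a scikit-learn Pipeline, and NumPy arrays come back out:

import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "age": [25, 32, 47, 51, 38, 29],
    "income": [30_000, 52_000, 88_000, 95_000, 61_000, 41_000],
    "bought": [0, 0, 1, 1, 1, 0],
})

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
pipe.fit(df[["age", "income"]], df["bought"])        # pandas in

new = pd.DataFrame({"age": [40], "income": [70_000]})
preds = pipe.predict(new)                            # NumPy array out
print(type(preds), preds)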
Join our WhatsApp channel:
https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
A new interactive sentiment visualization project has been developed, featuring a dynamic smiley face that reflects sentiment analysis results in real time. Using a natural language processing model, the system evaluates input text and adjusts the smiley face expression accordingly:
🙂 Positive sentiment
☹️ Negative sentiment
The visualization offers an intuitive and engaging way to observe sentiment dynamics as they happen.
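As a rough sketch of the idea (not the project's actual code; VADER from NLTK stands in for whatever NLP model the repository uses, and the text faces stand in for the animated smiley):

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

def face_for(text: str) -> str:
    # Compound score ranges from -1.0 (negative) to +1.0 (positive)
    score = analyzer.polarity_scores(text)["compound"]
    return ":)" if score >= 0 else ":("

print(face_for("I love this interactive demo!"))     # :)
print(face_for("This is frustrating and broken."))   # :(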
GitHub: https://lnkd.in/e_gk3hfe
Article: https://lnkd.in/e_baNJd2
#AI #SentimentAnalysis #DataVisualization #InteractiveDesign #NLP #MachineLearning #Python #GitHubProjects #TowardsDataScience
Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Forwarded from Python | Machine Learning | Coding | R
Python Cheat Sheet
Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
#AI #SentimentAnalysis #DataVisualization #pandas #Numpy #InteractiveDesign #NLP #MachineLearning #Python #GitHubProjects #TowardsDataScience
Forwarded from Python | Machine Learning | Coding | R
from SQL to pandas.pdf
1.3 MB
#DataScience #SQL #pandas #InterviewPrep #Python #DataAnalysis #CareerGrowth #TechTips #Analytics
AI Paper by Hand.pdf
29.1 MB
AI Paper by Hand
[1] What Matters in Transformers? Not All Attention is Needed
[2] Predicting from Strings: Language Model Embeddings for Bayesian Optimization
[3] MODEL SWARMS: Collaborative Search to Adapt LLM Experts via Swarm Intelligence
[4] THINKING LLMS: General Instruction Following with Thought Generation
[5] OpenVLA: An Open-Source Vision-Language-Action Model
[6] RT-1: Robotics Transformer for Real-World Control at Scale
[7] π0: A Vision-Language-Action Flow Model for General Robot Control
[8] RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval
[9] P-RAG: Progressive Retrieval Augmented Generation for Planning on Embodied Everyday Tasks
[10] RuAG: Learned-Rule-Augmented Generation for Large Language Models
[11] On the Surprising Effectiveness of Attention Transfer for Vision Transformers
[12] Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models
[13]-[14] Edify 3D: Scalable High-Quality 3D Asset Generation
[15] Byte Latent Transformer: Patches Scale Better Than Tokens
[16]-[18] DeepSeek-V3 (Part 1-3)
[19] Transformers without Normalization
Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A