Python | Machine Learning | Coding | R
64.5K subscribers
1.15K photos
73 videos
146 files
818 links
Help and ads: @hussein_sheikho

Discover powerful insights with Python, Machine Learning, Coding, and R: your essential toolkit for data-driven solutions and smart algorithms.

List of our channels:
https://t.iss.one/addlist/8_rRW2scgfRhOTc0

https://telega.io/?r=nikapsOH
Master Machine Learning in Just 20 Days (30.8 MB)
Title:
Master Machine Learning in Just 20 Days - Your Ultimate Guide! 🔥

Description:
Struggling to break into Data Science or ace ML interviews at top product-based companies?

This 20-day roadmap covers everything from ML basics to advanced topics such as hyperparameter tuning, deep learning, and deployment, with top resources and practice questions!

What’s Inside:

Supervised & Unsupervised Learning – Regression, Classification, Clustering
Deep Learning & Neural Networks – CNNs, RNNs, LSTMs
End-to-End ML Projects – Data Preprocessing, Feature Engineering, Deployment
Model Optimization – Hyperparameter Tuning, Ensemble Methods
Real-World ML Applications – NLP, AutoML, Scalable ML Systems

#MachineLearning #DeepLearning #DataScience #ArtificialIntelligence #MLEngineering #CareerGrowth #MLRoadmap

By: t.iss.one/HusseinSheikho

Forwarded from Python Courses
🚀 LunaProxy - The Most Cost-Effective Residential Proxy

Exclusive benefits for members of this group:
💥 Residential Proxy: as low as $0.77/GB. Use the discount code [lunapro30] when ordering and save 30% immediately.
✔️ Over 200 million pure IPs | No charge for invalid IPs | Success rate > 99.9%
💥 Unlimited Traffic Proxy: discounts of up to 72%, only $79/day.
✔️ Unlimited traffic | Unlimited concurrency | Bandwidth over 100 Gbps | Customized services | Save 90% of the cost when collecting AI/LLM data

Join the Luna Affiliate Program and earn a 10% commission, with no upper limit and withdrawals at any time.
👉 Take action now: https://www.lunaproxy.com/?ls=data&lk=?01
SciPy.pdf (206.4 KB)
Unlock the full power of SciPy with my comprehensive cheat sheet!
Master essential functions for:

Function optimization and solving equations

Linear algebra operations

ODE integration and statistical analysis

Signal processing and spatial data manipulation

Data clustering and distance computation

...and much more!
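As a quick taste of what the cheat sheet covers, here is a minimal sketch of two of these areas (optimization and linear algebra):

import numpy as np
from scipy import optimize, linalg

# Minimize a simple quadratic: f(x) = (x - 3)^2, minimum at x = 3
res = optimize.minimize(lambda x: (x[0] - 3) ** 2, x0=[0.0])
print(res.x)  # ~[3.]

# Solve the linear system A @ x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
print(linalg.solve(A, b))  # [2. 3.]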


#Python #SciPy #MachineLearning #DataScience #CheatSheet #ArtificialIntelligence #Optimization #LinearAlgebra #SignalProcessing #BigData



Mastering CNNs: From Kernels to Model Evaluation

If you're learning Computer Vision, understanding the Conv2D layer in Convolutional Neural Networks (#CNNs) is crucial. Let’s break it down from basic to advanced.

1. What is Conv2D?

Conv2D is a 2D convolutional layer used in image processing. It takes an image as input and applies filters (also called kernels) to extract features.

2. What is a Kernel (or Filter)?

A kernel is a small matrix (like 3x3 or 5x5) that slides over the image, multiplying element-wise and summing the results at each position.

A 3x3 kernel means the filter looks at 3x3 chunks of the image.

The kernel detects patterns like edges, textures, etc.


Example:
A vertical edge detection kernel might look like:

[-1, 0, 1]
[-1, 0, 1]
[-1, 0, 1]
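Applied to a toy image, this kernel fires exactly where intensity changes from left to right (a minimal sketch using NumPy and SciPy):

import numpy as np
from scipy.signal import correlate2d

# Toy 5x5 image: dark region (0) on the left, bright region (9) on the right
image = np.array([[0, 0, 0, 9, 9]] * 5)

kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

# correlate2d slides the kernel without flipping it, matching how CNN
# libraries implement "convolution"; large outputs mark the vertical edge
print(correlate2d(image, kernel, mode="valid"))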

3. What Are Filters in Conv2D?

In CNNs, we don’t use just one filter—we use multiple filters in a single Conv2D layer.

Each filter learns to detect a different feature (e.g., horizontal lines, curves, textures).

So if you have 32 filters in the Conv2D layer, you’ll get 32 feature maps.

More Filters = More Features = More Learning Power
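You can check the feature-map count directly (a minimal sketch, assuming TensorFlow/Keras):

import tensorflow as tf

x = tf.random.normal((1, 64, 64, 3))  # one 64x64 RGB image
conv = tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same")
print(conv(x).shape)  # (1, 64, 64, 32): one feature map per filter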

4. Kernel Size and Its Impact

Smaller kernels (e.g., 3x3) are most common; they capture fine details.

Larger kernels (e.g., 5x5 or 7x7) capture broader patterns, but increase computational cost.

Many CNNs stack multiple small kernels (like 3x3) to simulate a large receptive field while keeping complexity low.

5. Life Cycle of a CNN Model (From Data to Evaluation)

Let’s visualize how a CNN model works from start to finish:

Step 1: Data Collection

Images are gathered and labeled (e.g., cat vs dog).

Step 2: Preprocessing

Resize images

Normalize pixel values

Data augmentation (flipping, rotation, etc.)

Step 3: Model Building (Conv2D layers; see the sketch after Step 5)

Add Conv2D + Activation (ReLU)

Use Pooling layers (MaxPooling2D)

Add Dropout to prevent overfitting

Flatten and connect to Dense layers

Step 4: Training the Model

Feed data in batches

Use loss function (like cross-entropy)

Optimize using backpropagation + optimizer (like Adam)

Adjust weights over several epochs

Step 5: Evaluation

Test the model on unseen data

Use metrics like Accuracy, Precision, Recall, F1-Score

Visualize using confusion matrix
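Putting Steps 3-5 together, here is a minimal sketch (assuming TensorFlow/Keras, with random arrays standing in for a real labeled dataset):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Synthetic stand-in data: 200 RGB images (64x64), two classes (e.g., cat vs dog)
X = np.random.rand(200, 64, 64, 3).astype("float32")  # pixel values already in [0, 1]
y = np.random.randint(0, 2, size=(200,))

# Step 3: Conv2D + ReLU, pooling, dropout, flatten, dense
model = models.Sequential([
    layers.Input((64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Step 4: cross-entropy loss, Adam optimizer, a few epochs in batches
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X[:160], y[:160], batch_size=32, epochs=3)

# Step 5: evaluate on data the model never saw during training
loss, acc = model.evaluate(X[160:], y[160:])
print(f"test accuracy: {acc:.2f}")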

Step 6: Deployment

Convert model to suitable format (e.g., ONNX, TensorFlow Lite)

Deploy on web, mobile, or edge devices
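For the TensorFlow Lite route, conversion takes only a few lines (a minimal sketch; the stand-in model here is a placeholder for your trained CNN):

import tensorflow as tf

# Stand-in for a trained Keras model (use your own CNN here)
model = tf.keras.Sequential([tf.keras.layers.Input((4,)), tf.keras.layers.Dense(1)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # ready to ship to a mobile or edge device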

Summary

Conv2D uses filters (kernels) to extract image features.

More filters = better feature detection.

The CNN pipeline takes raw image data, learns features, and gives powerful predictions.

If this helped you, let me know! Or feel free to share your experience learning CNNs!

#DeepLearning #ComputerVision #CNNs #Conv2D #MachineLearning #AI #NeuralNetworks #DataScience #ModelTraining #ImageProcessing


🚀 Master the Transformer Architecture with PyTorch! 🧠

Dive deep into the world of Transformers with this comprehensive PyTorch implementation guide. Whether you're a seasoned ML engineer or just starting out, this resource breaks down the complexities of the Transformer model, inspired by the groundbreaking paper "Attention Is All You Need".

🔗 Check it out here:
https://www.k-a.in/pyt-transformer.html

This guide offers:

🌟 Detailed explanations of each component of the Transformer architecture.

🌟 Step-by-step code implementations in PyTorch.

🌟 Insights into the self-attention mechanism and positional encoding.

By following along, you'll gain a solid understanding of how Transformers work and how to implement them from scratch.
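The core of the architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k)·V. A minimal PyTorch sketch of just that piece (illustrative, not the guide's exact code):

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # softmax(Q @ K^T / sqrt(d_k)) @ V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ v

x = torch.randn(1, 10, 64)  # (batch, seq_len, d_model)
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)  # torch.Size([1, 10, 64])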

#MachineLearning #DeepLearning #PyTorch #Transformer #AI #NLP #AttentionIsAllYouNeed #Coding #DataScience #NeuralNetworks


Four of the best advanced university courses on NLP & LLMs to advance your skills:

1. Advanced NLP -- Carnegie Mellon University
Link: https://lnkd.in/ddEtMghr

2. Recent Advances on Foundation Models -- University of Waterloo
Link: https://lnkd.in/dbdpUV9v

3. Large Language Model Agents -- University of California, Berkeley
Link: https://lnkd.in/d-MdSM8Y

4. Advanced LLM Agents -- University of California, Berkeley
Link: https://lnkd.in/dvCD4HR4

#LLM #python #AI #Agents #RAG #NLP

🔴 Comprehensive course on "Data Mining"
🖥 Carnegie Mellon University, USA


👨🏻‍💻 Carnegie Mellon University in the United States offers a free #datamining course of 25 lectures to anyone interested in the field.

◀️ In this course, you will work with statistical concepts and model selection methods on the one hand, and implement these concepts in practice and present the results on the other.

◀️ The exercises combine theory, #coding, and practical work. 👇


🥵 Data Mining
⏯️ Course Homepage

This channel is for Programmers, Coders, and Software Engineers.

0️⃣ Python
1️⃣ Data Science
2️⃣ Machine Learning
3️⃣ Data Visualization
4️⃣ Artificial Intelligence
5️⃣ Data Analysis
6️⃣ Statistics
7️⃣ Deep Learning
8️⃣ Programming Languages

https://t.iss.one/addlist/8_rRW2scgfRhOTc0

https://t.iss.one/Codeprogrammer
Full PyTorch Implementation of Transformer-XL

If you're looking to understand and experiment with Transformer-XL using PyTorch, this resource provides a clean and complete implementation. Transformer-XL is a powerful model that extends the Transformer architecture with recurrence, enabling it to learn dependencies beyond fixed-length segments.

The implementation is ideal for researchers, students, and developers aiming to dive deeper into advanced language modeling techniques.
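To make the recurrence idea concrete, here is a conceptual PyTorch sketch (an illustration only, not the linked implementation; relative positional encodings and per-layer caching are omitted). The current segment's queries attend over the cached previous segment plus the current one, with gradients stopped at the cache:

import torch

def attend_with_memory(h, memory, W_q, W_k, W_v):
    # Keys/values span [cached segment; current segment]; queries come
    # from the current segment only. detach() keeps gradients out of the cache.
    context = torch.cat([memory.detach(), h], dim=1)
    q, k, v = h @ W_q, context @ W_k, context @ W_v
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ v

d = 16
h = torch.randn(1, 8, d)       # current segment
memory = torch.randn(1, 8, d)  # hidden states cached from the previous segment
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
print(attend_with_memory(h, memory, W_q, W_k, W_v).shape)  # torch.Size([1, 8, 16])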

Explore the code and start building:
https://www.k-a.in/pyt-transformerXL.html

#TransformerXL #PyTorch #DeepLearning #NLP #LanguageModeling #AI #MachineLearning #OpenSource #ResearchTools

https://t.iss.one/CodeProgrammer
LLM Engineer’s Handbook (2024)

🚀 Unlock the Future of AI with the LLM Engineer’s Handbook 🚀

Step into the world of Large Language Models (LLMs) with this comprehensive guide that takes you from foundational concepts to deploying advanced applications using LLMOps best practices. Whether you're an AI engineer, NLP professional, or LLM enthusiast, this book offers practical insights into designing, training, and deploying LLMs in real-world scenarios.

Why Choose the LLM Engineer’s Handbook?

Comprehensive Coverage: Learn about data engineering, supervised fine-tuning, and deployment strategies.

Hands-On Approach: Implement MLOps components through practical examples, including building an LLM-powered twin that's cost-effective, scalable, and modular.

Cutting-Edge Techniques: Explore inference optimization, preference alignment, and real-time data processing to apply LLMs effectively in your projects.

Real-World Applications: Move beyond isolated Jupyter notebooks and focus on building production-grade end-to-end LLM systems.


Limited-Time Offer

Originally priced at $55, the LLM Engineer’s Handbook is now available for just $25—a 55% discount! This special offer is available for a limited quantity, so act fast to secure your copy.

Who Should Read This Book?

This handbook is ideal for AI engineers, NLP professionals, and LLM engineers looking to deepen their understanding of LLMs. A basic knowledge of LLMs, Python, and AWS is recommended. Whether you're new to AI or seeking to enhance your skills, this book provides comprehensive guidance on implementing LLMs in real-world scenarios.

Don't miss this opportunity to advance your expertise in LLM engineering. Secure your discounted copy today and take the next step in your AI journey!

Buy book: https://www.patreon.com/DataScienceBooks/shop/llm-engineers-handbook-2024-1582908
Top 100+ Questions: "Google Data Science Interview".pdf (16.7 MB)
💯 Top 100+ Google Data Science Interview Questions

🌟 Essential Prep Guide for Aspiring Candidates

Google is known for its rigorous data science interview process, which typically follows a hybrid format. Candidates are expected to demonstrate strong programming skills, solid knowledge in statistics and machine learning, and a keen ability to approach problems from a product-oriented perspective.

To succeed, one must be proficient in several critical areas: statistics and probability, SQL and Python programming, product sense, and case study-based analytics.

This curated list features over 100 of the most commonly asked and important questions in Google data science interviews. It serves as a comprehensive resource to help candidates prepare effectively and confidently for the challenge ahead.

#DataScience #GoogleInterview #InterviewPrep #MachineLearning #SQL #Statistics #ProductAnalytics #Python #CareerGrowth


https://t.iss.one/addlist/0f6vfFbEMdAwODBk
@CodeProgrammer Matplotlib.pdf (4.3 MB)
💯 Mastering Matplotlib in 20 Days

The Complete Visual Guide for Data Enthusiasts

Matplotlib is a powerful Python library for data visualization, essential not only for acing job interviews but also for building a solid foundation in analytical thinking and data storytelling.

This step-by-step tutorial guide walks learners through everything from the basics to advanced techniques in Matplotlib. It also includes a curated collection of the most frequently asked Matplotlib-related interview questions, making it an ideal resource for both beginners and experienced professionals.
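To get a feel for the basic workflow before the 20-day deep dive, here is a minimal plotting sketch:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)

fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
plt.show()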

#Matplotlib #DataVisualization #Python #DataScience #InterviewPrep #Analytics #TechCareer #LearnToCode

https://t.iss.one/addlist/0f6vfFbEMdAwODBk 🌟
“Introduction to Machine Learning” by Alex Smola and S.V.N. Vishwanathan

This foundational textbook offers a comprehensive and mathematically rigorous introduction to core concepts in machine learning. The book covers key topics including supervised and unsupervised learning, kernels, graphical models, optimization techniques, and large-scale learning. It balances theory and practical application, making it ideal for graduate students, researchers, and professionals aiming to deepen their understanding of machine learning fundamentals and algorithmic principles.

PDF:
https://alex.smola.org/drafts/thebook.pdf

#MachineLearning #AI #DataScience #MLAlgorithms #DeepLearning #MathForML #MLTheory #MLResearch #AlexSmola #SVNVishwanathan
Keep up with the latest developments in artificial intelligence and Python through our WhatsApp channel. The resources are diverse and highly valuable. We strive to make our WhatsApp channel the number one channel in the world of artificial intelligence.

Tell your friends
https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
K-Means Clustering Explained - for beginners

What is K-Means?
It’s an unsupervised machine learning algorithm that automatically groups your data into K similar clusters without labels. It finds hidden patterns using distance-based similarity.

Intuitive example:
You run a mall. Your data has:
› Age
› Annual Income
› Spending Score

K-Means can divide customers into:
⤷ Budget Shoppers
⤷ Mid-Range Customers
⤷ High-End Spenders

How it works:
① Choose the number of clusters K
② Randomly initialize K centroids
③ Assign each point to its nearest centroid
④ Move centroids to the mean of their assigned points
⑤ Repeat until the centroids stop moving (convergence)

Objective:
Minimize the total squared distance between data points and their cluster centroids:
J = Σ ‖xᵢ − μⱼ‖²
where xᵢ is a data point and μⱼ is the centroid of its assigned cluster.

How to pick K:
Use the Elbow Method
⤷ Plot K vs. total within-cluster variance
⤷ The “elbow” in the curve = ideal number of clusters
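A quick way to visualize this (a minimal sketch using scikit-learn and synthetic blob data; the numbers here are illustrative):

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 4 "true" clusters
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# Total within-cluster variance (inertia) for K = 1..9
inertias = [KMeans(n_clusters=k, random_state=0, n_init=10).fit(X).inertia_
            for k in range(1, 10)]

plt.plot(range(1, 10), inertias, marker="o")
plt.xlabel("K")
plt.ylabel("Within-cluster sum of squares")
plt.show()  # the curve should bend ("elbow") around K = 4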

Code Example (Scikit-Learn):

from sklearn.cluster import KMeans

# Six 2-D points forming two obvious groups (x ≈ 1 and x ≈ 10)
X = [[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]

model = KMeans(n_clusters=2, random_state=0, n_init=10)
model.fit(X)

print(model.labels_)           # cluster index assigned to each point
print(model.cluster_centers_)  # coordinates of the two centroids


Best Use Cases:
⤷ Customer segmentation
⤷ Image compression
⤷ Market analysis
⤷ Social network analysis

Limitations:
› Sensitive to outliers
› Requires you to predefine K
› Works best with spherical clusters

https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A 📱
Principal Component Analysis (PCA)
The Art of Reducing Dimensions Without Losing Insights

What Exactly Is PCA?
⤷ PCA is a mathematical technique used to transform a high-dimensional dataset into fewer dimensions while retaining as much variability (information) as possible.
⤷ Think of it as “compressing” data, similar to how we reduce the size of an image without losing too much detail.

Why Use PCA in Your Projects?
⤷ Simplify your data for easier analysis and modeling
⤷ Enhance machine learning models by reducing computational cost
⤷ Visualize multi-dimensional data in 2D or 3D for insights
⤷ Filter out noise and uncover hidden patterns in your data

The Power of Principal Components
⤷ The first principal component is the direction in which the data varies the most.
⤷ Each subsequent component captures the next highest share of variance and is orthogonal (uncorrelated) to the previous ones.
⤷ The challenge is selecting how many components to keep, based on the variance they explain.

Practical Example 1: Customer Segmentation
Imagine you’re working on a project to segment customers for a marketing campaign, with data on spending habits, age, income, and location.
⤷ Using PCA, you can reduce these four variables to just two principal components that retain 90% of the variance.
⤷ These two components can then be fed to k-means clustering to identify distinct customer groups without dealing with the complexity of all the original variables.

The PCA Process - Step by Step
⤷ Step 1: Data Standardization
Ensure your data is on the same scale (e.g., mean = 0, variance = 1).
⤷ Step 2: Covariance Matrix
Calculate how the features are correlated.
⤷ Step 3: Eigen Decomposition
Compute the eigenvectors and eigenvalues to determine the principal components.
⤷ Step 4: Select Components
Choose the top-k components based on the explained variance ratio.
⤷ Step 5: Data Transformation
Project your data onto the new PCA space with fewer dimensions.
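A minimal end-to-end sketch of these steps with scikit-learn (random data standing in for the customer table):

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Random stand-in for a real dataset: 200 customers x 4 features
# (e.g., spending habits, age, income, location score)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))

X_std = StandardScaler().fit_transform(X)  # Step 1: standardization
pca = PCA(n_components=2)                  # Steps 2-4: covariance, eigen decomposition, selection
X_2d = pca.fit_transform(X_std)            # Step 5: project onto the top 2 components

print(X_2d.shape)                          # (200, 2)
print(pca.explained_variance_ratio_)       # share of variance each component retains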

When Not to Use PCA
⤷ PCA is not suitable when the dataset contains non-linear relationships or highly skewed data.
⤷ For non-linear data, consider t-SNE or autoencoders instead.

https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A 📱