Data Science & Machine Learning
Join this channel to learn data science, artificial intelligence, and machine learning through fun quizzes, interesting projects, and amazing free resources

For collaborations: @love_data
Python Basics for Data Science: Part-1

Variables & Data Types

In Python, variables are used to store data, and data types define what kind of data is stored. This is the first and most essential building block of your data science journey.

1️⃣ What is a Variable?
A variable is like a label for data stored in memory. You can assign any value to a variable and reuse it throughout your code.

Syntax:
x = 10  
name = "Riya"
is_active = True


2️⃣ Common Data Types in Python

int – Integers (whole numbers)
age = 25

float – Decimal numbers
height = 5.8

str – Text/String
city = "Mumbai"

bool – Boolean (True or False)
is_student = False

list – A collection of items
fruits = ["apple", "banana", "mango"]

tuple – Ordered, immutable collection
coordinates = (10.5, 20.3)

dict – Key-value pairs
student = {"name": "Riya", "score": 90}


3️⃣ Type Checking
You can check the type of any variable using type():
print(type(age))   # <class 'int'>
print(type(city))  # <class 'str'>


4️⃣ Type Conversion
Change data from one type to another:
num = "100"
converted = int(num)
print(type(converted)) # <class 'int'>
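Conversion fails when the text isn't numeric, so it's worth knowing the failure mode. A tiny sketch (the try/except pattern shown is just one common way to handle it):

price = "19.99"
print(float(price))   # 19.99

try:
    int("abc")        # not a number, so this raises ValueError
except ValueError:
    print("Cannot convert 'abc' to int")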


5️⃣ Why This Matters in Data Science
Data comes in various types. Understanding and managing types is critical for:
• Cleaning data
• Performing calculations
• Avoiding errors in analysis (see the sketch below)
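For example, numbers read from a file or from user input often arrive as strings, and math on them fails until you convert. A minimal illustration (the variable names are made up for demonstration):

marks = "90"
# marks + 5 would raise TypeError: can't add str and int
total = int(marks) + 5   # convert first, then calculate
print(total)             # 95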

Practice Task for You:
• Create 5 variables with different data types
• Use type() to print each one
• Convert a string to an integer and do basic math
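If you get stuck, here is one possible solution sketch (any values of your own work just as well):

age = 21
height = 5.6
name = "Asha"
is_enrolled = True
hobbies = ["chess", "coding"]

for value in [age, height, name, is_enrolled, hobbies]:
    print(type(value))   # prints each variable's type

score = "85"
print(int(score) + 15)   # string converted to int, then math: 100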

💬 Tap ❤️ for more!
𝗙𝗥𝗘𝗘 𝗢𝗻𝗹𝗶𝗻𝗲 𝗠𝗮𝘀𝘁𝗲𝗿𝗰𝗹𝗮𝘀𝘀 𝗕𝘆 𝗜𝗻𝗱𝘂𝘀𝘁𝗿𝘆 𝗘𝘅𝗽𝗲𝗿𝘁𝘀 😍

Roadmap to land your dream job in top product-based companies

𝗛𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀:-
- 90-Day Placement Plan
- Tech & Non-Tech Career Path
- Interview Preparation Tips
- Live Q&A

𝗥𝗲𝗴𝗶𝘀𝘁𝗲𝗿 𝗙𝗼𝗿 𝗙𝗥𝗘𝗘👇:- 

https://pdlink.in/3Ltb3CE

Date & Time:- 6th January 2026, 7 PM
Python Basics for Data Science: Part-2

Loops & Functions 🔁🧠

These two concepts are key to writing clean, efficient, and reusable code — especially when working with data.

1️⃣ Loops in Python
Loops help you repeat tasks like reading data, checking values, or processing items in a list.

For Loop
fruits = ["apple", "banana", "mango"]
for fruit in fruits:
    print(fruit)


While Loop
count = 1
while count <= 3:
    print("Loading...", count)
    count += 1


Loop with Condition
numbers = [10, 5, 20, 3]
for num in numbers:
    if num > 10:
        print(num, "is greater than 10")


2️⃣ Functions in Python
Functions let you group code into blocks you can reuse.

Basic Function
def greet(name):
    return f"Hello, {name}!"

print(greet("Riya"))


Function with Logic
def is_even(num):
    if num % 2 == 0:
        return True
    return False

print(is_even(4))  # Output: True


Function for Calculation
def square(x):
    return x * x

print(square(6))  # Output: 36


Why This Matters in Data Science
• Loops help you iterate over datasets
• Functions make your data-cleaning steps reusable
• Together they organize long analysis code into simple blocks (see the sketch below)
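For instance, a reusable cleaning function applied inside a loop. A small made-up sketch (the helper name and data are invented for illustration):

def clean_price(text):
    # strip the currency symbol and whitespace, then convert
    return float(text.replace("$", "").strip())

raw_prices = ["$10.50", " $3.99", "$7.25 "]
cleaned = []
for price in raw_prices:
    cleaned.append(clean_price(price))

print(cleaned)   # [10.5, 3.99, 7.25]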

🎯 Practice Task for You:
• Write a for loop to print numbers from 1 to 10
• Create a function that takes two numbers and returns their average
• Make a function that returns "Even" or "Odd" based on input
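One possible solution for all three tasks:

# Task 1: numbers 1 to 10
for i in range(1, 11):
    print(i)

# Task 2: average of two numbers
def average(a, b):
    return (a + b) / 2

print(average(10, 20))   # 15.0

# Task 3: "Even" or "Odd"
def even_or_odd(num):
    return "Even" if num % 2 == 0 else "Odd"

print(even_or_odd(7))    # Odd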

💬 Tap ❤️ for more!
𝗧𝗼𝗽 𝟱 𝗜𝗻-𝗗𝗲𝗺𝗮𝗻𝗱 𝗦𝗸𝗶𝗹𝗹𝘀 𝘁𝗼 𝗙𝗼𝗰𝘂𝘀 𝗼𝗻 𝗶𝗻 𝟮𝟬𝟮𝟲😍

Start learning industry-relevant data skills today at zero cost!

𝗗𝗮𝘁𝗮 𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀:- https://pdlink.in/497MMLw

𝗔𝗜 & 𝗠𝗟 :- https://pdlink.in/4bhetTu

𝗖𝗹𝗼𝘂𝗱 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴:- https://pdlink.in/3LoutZd

𝗖𝘆𝗯𝗲𝗿 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆:- https://pdlink.in/3N9VOyW

𝗢𝘁𝗵𝗲𝗿 𝗧𝗲𝗰𝗵 𝗖𝗼𝘂𝗿𝘀𝗲𝘀:- https://pdlink.in/4qgtrxU

🎓 Enroll Now & Get Certified
Python for Data Science: Part-3

NumPy & Pandas Basics 📊🐍
These two libraries form the foundation for handling and analyzing data in Python.

1️⃣ NumPy – Numerical Python
NumPy helps with fast numerical operations and array handling.

Importing NumPy
import numpy as np

Create Arrays
arr = np.array([1, 2, 3])
print(arr)

Array Operations
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(a + b) # [5 7 9]
print(a * 2) # [2 4 6]

Useful NumPy Functions
np.mean(a)           # Average
np.max(b)            # Max value
np.arange(0, 10, 2)  # [0 2 4 6 8]
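A few more standard calls worth knowing, continuing with the arrays above:

np.sum(a)                   # total of all elements (6)
np.std(a)                   # standard deviation
np.sqrt(b)                  # element-wise square root
np.arange(6).reshape(2, 3)  # reshape into 2 rows x 3 columns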

2️⃣ Pandas – Data Analysis Library
Pandas is used to work with data in table format (DataFrames).

Importing Pandas
import pandas as pd

Create a DataFrame
data = {
    "Name": ["Riya", "Aman"],
    "Age": [24, 30]
}
df = pd.DataFrame(data)
print(df)

Read CSV File
df = pd.read_csv("data.csv")

Basic DataFrame Operations
df.head()         # First 5 rows
df.info()         # Column types
df.describe()     # Stats summary
df["Age"].mean()  # Average age

Filter Rows
df[df["Age"] > 25]
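Building on filtering, two more everyday operations, deriving a column and grouping, in a self-contained sketch (the "City" column and all values are made up for illustration):

import pandas as pd

df = pd.DataFrame({
    "Name": ["Riya", "Aman", "John"],
    "Age": [24, 30, 27],
    "City": ["Mumbai", "Delhi", "Mumbai"]
})

df["Age_in_5"] = df["Age"] + 5            # derive a new column
print(df[df["Age"] > 25])                 # filter, as above
print(df.groupby("City")["Age"].mean())   # average age per city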

🎯 Why This Matters
• NumPy makes math faster and easier
• Pandas helps clean, explore, and transform data
• Essential for real-world data analysis

Practice Task:
• Create a NumPy array of 10 numbers
• Make a Pandas DataFrame with 2 columns (Name, Score)
• Filter all scores above 80
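One way to do it:

import numpy as np
import pandas as pd

arr = np.arange(1, 11)   # NumPy array of 10 numbers
df = pd.DataFrame({
    "Name": ["Riya", "Aman", "John", "Sara"],
    "Score": [85, 72, 91, 78]
})
print(df[df["Score"] > 80])   # scores above 80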

💬 Tap ❤️ for more
🎯 𝗡𝗲𝘄 𝘆𝗲𝗮𝗿, 𝗻𝗲𝘄 𝘀𝗸𝗶𝗹𝗹𝘀.

If you've been meaning to learn 𝗮𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜, this is your starting point.

Build a real RAG assistant from scratch.
Beginner-friendly. Completely self-paced.

𝟱𝟬,𝟬𝟬𝟬+ 𝗹𝗲𝗮𝗿𝗻𝗲𝗿𝘀 from 130+ countries already enrolled.

https://www.readytensor.ai/agentic-ai-essentials-cert/
Python for Data Science: Part-4

Data Visualization with Matplotlib, Seaborn & Plotly 📊📈

1️⃣ Matplotlib – Basic Plotting
Great for simple line, bar, and scatter plots.

Import and Line Plot
import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [10, 20, 25, 30]
plt.plot(x, y)
plt.title("Line Plot")
plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.show()

Bar Plot
names = ["A", "B", "C"]
scores = [80, 90, 70]
plt.bar(names, scores)
plt.title("Scores by Name")
plt.show()


2️⃣ Seaborn – Statistical Visualization
Built on top of Matplotlib, with nicer default styling.

Import and Plot
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "Name": ["Riya", "Aman", "John", "Sara"],
    "Score": [85, 92, 78, 88]
})

sns.barplot(x="Name", y="Score", data=df)
plt.show()

Other Seaborn Plots
sns.histplot(df["Score"])   # Histogram
sns.boxplot(x=df["Score"])  # Box plot
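sns.scatterplot is another standard call. A quick sketch reusing the df above (the "Hours" column and its values are invented for the example):

df["Hours"] = [5, 8, 3, 6]                      # made-up study hours
sns.scatterplot(x="Hours", y="Score", data=df)  # relationship between two columns
plt.show()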


3️⃣ Plotly – Interactive Graphs
Great for dashboards and interactivity.

Basic Line Plot
import plotly.express as px

df = pd.DataFrame({
    "x": [1, 2, 3],
    "y": [10, 20, 15]
})

fig = px.line(df, x="x", y="y", title="Interactive Line Plot")
fig.show()


🎯 Why Visualization Matters
• Helps spot patterns in data
• Makes insights clear and shareable
• Supports better decision-making

Practice Task:
• Create a line plot using matplotlib
• Use seaborn to plot a boxplot for scores
• Try any interactive chart using plotly

💬 Tap ❤️ for more
𝗙𝗥𝗘𝗘 𝗢𝗻𝗹𝗶𝗻𝗲 𝗠𝗮𝘀𝘁𝗲𝗿𝗰𝗹𝗮𝘀𝘀 𝗢𝗻 𝗟𝗮𝘁𝗲𝘀𝘁 𝗧𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝗶𝗲𝘀😍

- Data Science 
- AI/ML
- Data Analytics
- UI/UX
- Full-stack Development 

Get Job-Ready Guidance in Your Tech Journey

𝗥𝗲𝗴𝗶𝘀𝘁𝗲𝗿 𝗙𝗼𝗿 𝗙𝗥𝗘𝗘👇:- 

https://pdlink.in/4sw5Ev8

Date:- 11th January 2026
Python for Data Science: Part-5

📊 Descriptive Statistics & Probability Distributions

1️⃣ Descriptive Statistics with Pandas
Quick way to summarize datasets.

import pandas as pd

data = {"Marks": [85, 92, 78, 88, 90]}
df = pd.DataFrame(data)

print(df.describe())         # count, mean, std, min, max, etc.
print(df["Marks"].mean())    # Average
print(df["Marks"].median())  # Middle value
print(df["Marks"].mode())    # Most frequent value


2️⃣ Probability Basics
Probability measures the chance of an event occurring, on a scale from 0 to 1.

Tossing a fair coin:
prob_heads = 1 / 2
print(prob_heads)  # 0.5

Multiple outcomes example:

from itertools import product

outcomes = list(product(["H", "T"], repeat=2))
print(outcomes) # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]


3️⃣ Normal Distribution using NumPy & Seaborn

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

data = np.random.normal(loc=0, scale=1, size=1000)

sns.histplot(data, kde=True)
plt.title("Normal Distribution")
plt.show()


4️⃣ Other Distributions
• Binomial → pass/fail outcomes
• Poisson → rare event frequency
• Uniform → all outcomes equally likely

Binomial Example:

from scipy.stats import binom

# 10 trials, p = 0.5
print(binom.pmf(k=5, n=10, p=0.5)) # Probability of 5 successes
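For completeness, minimal Poisson and Uniform sketches (standard scipy.stats and NumPy calls; the rate and range are arbitrary examples):

from scipy.stats import poisson
import numpy as np

# Poisson: probability of exactly 3 events when the average rate is 2
print(poisson.pmf(k=3, mu=2))

# Uniform: 5 random values, each equally likely anywhere in [0, 1)
print(np.random.uniform(low=0, high=1, size=5))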


🎯 Why This Matters
• Descriptive stats help understand data quickly
• Distributions help model real-world situations
• Probability supports prediction and risk analysis

Practice Task:
• Generate a normal distribution
• Calculate mean, median, std
• Plot binomial probability of success
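One possible approach:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom

data = np.random.normal(loc=50, scale=10, size=1000)
print(data.mean(), np.median(data), data.std())   # mean, median, std

k = np.arange(0, 11)                    # 0 to 10 successes
plt.bar(k, binom.pmf(k, n=10, p=0.5))   # binomial probabilities
plt.title("Binomial PMF (n=10, p=0.5)")
plt.show()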

💬 Tap ❤️ for more
Data Science Resume Tips 📊💼

To land data science roles, your resume should highlight problem-solving, tools, and real insights.

1️⃣ Contact Info (Top)
• Name, email, GitHub, LinkedIn, portfolio/Kaggle
• Optional: location, phone

2️⃣ Summary (2–3 lines)
Brief overview showing your skills + value
“Data scientist with strong Python, ML & SQL skills. Built projects in healthcare & finance. Proven ability to turn data into insights.”

3️⃣ Skills Section
Group by type:
Languages: Python, R, SQL
Libraries: Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn
Tools: Jupyter, Git, Tableau, Power BI
ML/Stats: Regression, Classification, Clustering, A/B testing

4️⃣ Projects (Most Important)
List 3–4 impactful projects:
• Clear title
• Dataset used
• What you did (EDA, model, visualizations)
• Tools used
• GitHub + live dashboard (if any)

Example:
Loan Default Prediction – Used logistic regression + feature engineering on Kaggle dataset to predict defaults. 82% accuracy.
GitHub: [link]

5️⃣ Work Experience / Internships
Show how you used data to create value:
• “Built churn prediction model → reduced churn by 15%”
• “Automated Excel reports using Python, saving 6 hrs/week”

6️⃣ Education
• Degree or certifications
• Mention bootcamps, if relevant

7️⃣ Certifications (Optional)
• Google Data Analytics
• IBM Data Science
• Coursera/edX Machine Learning

💡 Tips:
• Show impact: “Increased accuracy by 10%”
• Use real datasets
• Keep layout clean and focused

💬 Tap ❤️ for more!
𝗛𝗶𝗴𝗵-𝗗𝗲𝗺𝗮𝗻𝗱 𝗖𝗲𝗿𝘁𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗖𝗼𝘂𝗿𝘀𝗲𝘀 𝗪𝗶𝘁𝗵 𝗣𝗹𝗮𝗰𝗲𝗺𝗲𝗻𝘁 𝗔𝘀𝘀𝗶𝘀𝘁𝗮𝗻𝗰𝗲😍

Learn from IIT faculty and industry experts.

IIT Roorkee DS & AI Program :- https://pdlink.in/4qHVFkI

IIT Patna AI & ML :- https://pdlink.in/4pBNxkV

IIM Mumbai DM & Analytics :- https://pdlink.in/4jvuHdE

IIM Rohtak Product Management:- https://pdlink.in/4aMtk8i

IIT Roorkee Agentic Systems:- https://pdlink.in/4aTKgdc

Upskill in today’s most in-demand tech domains and boost your career 🚀
GitHub Profile Tips for Data Scientists 🧠📊

Your GitHub = your portfolio. Make it show skills, tools, and thinking.

1️⃣ Profile README
• Who you are & what you work on
• Mention tools (Python, Pandas, SQL, Scikit-learn, Power BI)
• Add project links & contact info
Example:
“Aspiring Data Scientist skilled in Python, ML & visualization. Love solving business problems with data.”

2️⃣ Highlight 3–6 Strong Projects
Each repo must have:
• Clear README:
– What problem you solved
– Dataset used
– Key steps (EDA → Model → Results)
– Tools & libraries
• Jupyter notebooks (cleaned + explained)
• Charts & results with conclusions
Tip: Include PDF/report or dashboard screenshots

3️⃣ Project Ideas to Include
• Sales insights dashboard (Power BI or Tableau)
• ML model (churn, fraud, sentiment)
• NLP app (text summarizer, topic model)
• EDA project on Kaggle dataset
• SQL project with queries & joins

4️⃣ Show Real Workflows
• Use .py scripts + .ipynb notebooks
• Add data cleaning + preprocessing steps
• Track experiments (metrics, models tried)

5️⃣ Regular Commits
• Update notebooks
• Push improvements
• Show learning progress over time

📌 Practice Task:
Pick 1 project → Write full README → Push to GitHub today

💬 Tap ❤️ for more!
Data Science Mistakes Beginners Should Avoid ⚠️📉

1️⃣ Skipping the Basics
• Jumping into ML without Python, Stats, or Pandas
Build strong foundations in math, programming & EDA first

2️⃣ Not Understanding the Problem
• Applying models blindly
• Irrelevant features and metrics
Always clarify business goals before coding

3️⃣ Treating Data Cleaning as Optional
• Training on dirty/incomplete data
Spend time on preprocessing; it often takes up most of the real work

4️⃣ Using Complex Models Too Early
• Overfitting small datasets
• Ignoring simpler, interpretable models
Start with baseline models (Logistic Regression, Decision Trees)

5️⃣ No Evaluation Strategy
• Relying only on accuracy
Use metrics suited to the problem type (F1, AUC, MAE); a scikit-learn sketch appears at the end of this post

6️⃣ Not Visualizing Data
• Missed outliers and patterns
Use Seaborn, Matplotlib, Plotly for EDA

7️⃣ Poor Feature Engineering
• Feeding raw data into models
Create meaningful features that boost performance

8️⃣ Ignoring Domain Knowledge
• Features don’t align with real-world logic
Talk to stakeholders or do research before modeling

9️⃣ No Practice with Real Datasets
• Kaggle-only learning
Work with messy, real-world data (open data portals, APIs)

🔟 Not Documenting or Sharing Work
• No GitHub, no portfolio
Document notebooks, write blogs, push projects online
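To make point 5️⃣ concrete, a minimal sketch using scikit-learn's standard metric functions (the labels and scores are toy values):

from sklearn.metrics import f1_score, roc_auc_score, mean_absolute_error

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]              # hard class predictions
y_score = [0.2, 0.9, 0.4, 0.1, 0.8]   # predicted probabilities

print(f1_score(y_true, y_pred))        # balances precision and recall
print(roc_auc_score(y_true, y_score))  # ranking quality of probabilities
print(mean_absolute_error([3.0, 5.0], [2.5, 5.5]))   # regression error (0.5)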

💬 Tap ❤️ for more!
📊 𝗗𝗮𝘁𝗮 𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀 𝗙𝗥𝗘𝗘 𝗖𝗲𝗿𝘁𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗖𝗼𝘂𝗿𝘀𝗲😍

🚀Upgrade your skills with industry-relevant Data Analytics training at ZERO cost 

Beginner-friendly
Certificate on completion
High-demand skill in 2026

𝐋𝐢𝐧𝐤 👇:- 

https://pdlink.in/497MMLw

📌 100% FREE – Limited seats available!
Python Libraries & Tools You Should Know 🐍💼

Mastering the right Python libraries helps you work faster, smarter, and more effectively in any data role.

🔷 1️⃣ For Data Analytics 📊
Useful for cleaning, analyzing, and visualizing data
pandas – Handle and manipulate structured data (tables)
numpy – Fast numerical operations, arrays, math
matplotlib – Basic data visualizations (charts, plots)
seaborn – Statistical plots, easier visuals with pandas
openpyxl – Read/write Excel files
plotly – Interactive visualizations and dashboards

🔷 2️⃣ For Data Science 🧠
Used for statistics, experimentation, and storytelling
scipy – Scientific computing, probability, optimization
statsmodels – Statistical testing, linear models
sklearn – Preprocessing + classic ML algorithms
sqlalchemy – Work with databases using Python
Jupyter – Interactive notebooks for code, text, charts
dash – Create dashboard apps with Python

🔷 3️⃣ For Machine Learning 🤖
Build and train predictive and deep learning models
scikit-learn – Core ML: regression, classification, clustering
TensorFlow – Deep learning by Google
PyTorch – Deep learning by Meta, flexible and research-friendly
XGBoost – Popular for gradient boosting models
LightGBM – Fast boosting by Microsoft
Keras – High-level neural network API (runs on TensorFlow)

💡 Tip:
• Learn pandas + matplotlib + sklearn first
• Add ML/DL libraries based on your goals
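To see how the core trio fits together, a minimal sketch (synthetic data, default model settings):

import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({"hours": [1, 2, 3, 4, 5],
                   "score": [52, 58, 65, 70, 78]})

model = LinearRegression()
model.fit(df[["hours"]], df["score"])               # train on one feature
print(model.predict(pd.DataFrame({"hours": [6]})))  # predicted score for 6 hours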

💬 Tap ❤️ for more!