How to Apply for Jobs in European Countries or Abroad Without an Agent
https://t.iss.one/europe_russia_jobs/4
Today's Question:
Given a dataset in a CSV file, how would you read it into a Pandas DataFrame? And how would you handle missing values?
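A minimal sketch of an answer, assuming Pandas is installed; the file name data.csv and the imputation choices below are placeholders, since the right strategy depends on the dataset:

```python
import pandas as pd

# Read the CSV into a DataFrame ("data.csv" is a placeholder path)
df = pd.read_csv("data.csv")

# Count missing values per column to pick a strategy
print(df.isna().sum())

# Option 1: drop rows that contain any missing value
df_dropped = df.dropna()

# Option 2: impute - numeric columns with the median...
num_cols = df.select_dtypes(include="number").columns
df[num_cols] = df[num_cols].fillna(df[num_cols].median())

# ...and text/categorical columns with a sentinel value
cat_cols = df.select_dtypes(include="object").columns
df[cat_cols] = df[cat_cols].fillna("missing")
```

Dropping is safest when missing rows are rare; imputation keeps more data but can bias the analysis, so mention that trade-off in an interview.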
Learn Data Science in 2024
1. Apply Pareto's Law to Learn Just Enough
Pareto's Law states that "80% of consequences come from 20% of the causes".
This law should serve as a guiding framework for the volume of content you need to know to be proficient in data science.
Rookies often make the mistake of overspending their time on algorithms that are rarely applied in production. Advanced algorithms such as XLNet, Bayesian SVD++, and BiLSTMs are cool to learn.
But, in reality, you will rarely apply such algorithms in production (unless your job demands research and application of state-of-the-art algos).
For most ML applications in production - especially in the MVP phase - simple algorithms like logistic regression, K-Means, random forest, and XGBoost provide the biggest bang for the buck because they are simple to train, interpret, and productionize.
So, invest more time learning topics that provide immediate value now, not a year later.
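To make the point concrete, here is a minimal sketch of such a baseline, assuming scikit-learn is installed; the built-in breast cancer dataset just stands in for whatever labeled data you have:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small labeled dataset and hold out a test split
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A plain logistic regression: quick to train, easy to interpret and ship
model = LogisticRegression(max_iter=5000)  # raised max_iter so the solver converges
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```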
2. Find a Mentor
There's a Japanese proverb that says "Better than a thousand days of diligent study is one day with a great teacher." This proverb directly applies to learning data science quickly.
Mentors can teach you how to build a model in production and how to manage stakeholders - stuff that you don't often read about in courses and books.
So, find a mentor who can teach you practical knowledge in data science.
3. Deliberate Practice
If you are serious about excelling in data science, you have to put in the time to nurture your knowledge. That means spending less time watching mindless videos on TikTok and more time reading books and watching video lectures.
Join @datasciencefree for more
ENJOY LEARNING
Forwarded from Health Fitness & Diet Tips - Gym Motivation
6 rules for daily happiness:
1. Start each day with gratitude - write it down.
2. Let go of grudges; they weigh you down.
3. Pursue what excites you, not what's safe.
4. Spend time with people who lift you up.
5. Do something kind for others - daily.
6. Find joy in the little things; they add up.
Forwarded from Data Analysis Books | Python | SQL | Excel | Artificial Intelligence | Power BI | Tableau | AI Resources
Knowing the tools won't be enough to become a master of data analytics!
See if your soft skills are worthy of the rank of master:
1. Communication: Can you translate your findings into easily digestible insights for non-technical stakeholders?
2. Problem-Solving: Is your work focused on solving actual business problems, and are you able to pick the most efficient approach to solve them?
3. Stakeholder Management: Are you building strong relationships with your stakeholders, understanding their needs, and providing them with regular updates?
4. Continuous Learning: The data landscape is constantly changing. Are you keeping up with new tools and trends?
5. Product/Project Management: Are you aware of the life cycle of your data products? Do you have a structured approach to plan, prioritize, and track your work?
6. Business Acumen: Can you understand the language and needs of the business and put your data work into context?
7. Domain Knowledge: Do you know the processes, products, and challenges of your domain?
If you want to earn the rank of master in the data field, start working on your soft skills now.
What are your thoughts on the role of soft skills in the data space?
Three different learning styles in machine learning algorithms:
1. Supervised Learning
Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a given time.
A model is prepared through a training process in which it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.
Example problems are classification and regression.
Example algorithms include: Logistic Regression and the Back Propagation Neural Network.
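As a hedged illustration, here is a minimal supervised workflow in scikit-learn using MLPClassifier, a neural network trained with backpropagation; the built-in digits dataset stands in for your labeled data:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Labeled training data: every digit image comes with its known class
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Backpropagation training: the model predicts, is corrected when wrong,
# and repeats until it reaches an acceptable fit on the training data
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```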
2. Unsupervised Learning
Input data is not labeled and does not have a known result.
A model is prepared by deducing structures present in the input data. This may be to extract general rules. It may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity.
Example problems are clustering, dimensionality reduction and association rule learning.
Example algorithms include: the Apriori algorithm and K-Means.
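A short sketch of K-Means on unlabeled data, assuming scikit-learn; the three synthetic blobs below are invented purely for the demo:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: three 2-D blobs of points with no known result attached
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0.0, 5.0, 10.0)])

# K-Means organizes the points by similarity into k clusters
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster centers:\n", km.cluster_centers_)
```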
3. Semi-Supervised Learning
Input data is a mixture of labeled and unlabeled examples.
There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions.
Example problems are classification and regression.
Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabeled data.
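One concrete example of such an extension is scikit-learn's SelfTrainingClassifier, which wraps a supervised model and pseudo-labels the unlabeled points it is confident about. In this hedged sketch, hiding 90% of the digits labels is an arbitrary choice for illustration:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_digits(return_X_y=True)

# A mixture of labeled and unlabeled examples:
# scikit-learn marks unlabeled samples with the label -1
rng = np.random.default_rng(0)
y_semi = np.where(rng.random(len(y)) < 0.9, -1, y)  # hide ~90% of the labels

# The wrapped model learns structure from the unlabeled points
# by iteratively pseudo-labeling its most confident predictions
model = SelfTrainingClassifier(LogisticRegression(max_iter=5000))
model.fit(X, y_semi)
print("accuracy against all true labels:", model.score(X, y))
```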
Forwarded from Programming Resources | Python | Javascript | Artificial Intelligence Updates | Computer Science Courses | AI Books
Are you part of the Rat Race?
A student who got a 3.8 CGPA is unhappy because another student got a 4 CGPA.
The student with a 4 CGPA is unhappy because he/she is not placed in a Core Company.
The student placed in a Core Company is unhappy because a colleague earns more.
The person with the highest salary in the company is unhappy because he/she has no time at all to enjoy life with friends and family.
This is what happens when you get trapped in the infinite rat race. You are never happy. And you will never appreciate or be grateful for the life you have.
Come out of the Rat Race.
Art by: Steve Cutts
Some useful PYTHON libraries for data science
NumPy stands for Numerical Python. The most powerful feature of NumPy is the n-dimensional array. The library also contains basic linear algebra functions, Fourier transforms, advanced random number capabilities, and tools for integration with lower-level languages like Fortran, C, and C++.
SciPy stands for Scientific Python. SciPy is built on NumPy. It is one of the most useful libraries for a variety of high-level science and engineering modules like discrete Fourier transforms, linear algebra, optimization, and sparse matrices.
Matplotlib for plotting a vast variety of graphs, from histograms to line plots to heat maps. You can use the Pylab feature in the IPython notebook (ipython notebook --pylab=inline) to use these plotting features inline. If you omit the inline option, Pylab converts the IPython environment into one very similar to Matlab. You can also use LaTeX commands to add math to your plot.
Pandas for structured data operations and manipulations. It is extensively used for data munging and preparation. Pandas was added relatively recently to Python and has been instrumental in boosting Python's usage in the data science community.
Scikit Learn for machine learning. Built on NumPy, SciPy and matplotlib, this library contains a lot of efficient tools for machine learning and statistical modeling including classification, regression, clustering and dimensionality reduction.
Statsmodels for statistical modeling. Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests. An extensive list of descriptive statistics, statistical tests, plotting functions, and result statistics are available for different types of data and each estimator.
Seaborn for statistical data visualization. Seaborn is a library for making attractive and informative statistical graphics in Python. It is based on matplotlib. Seaborn aims to make visualization a central part of exploring and understanding data.
Bokeh for creating interactive plots, dashboards and data applications on modern web-browsers. It empowers the user to generate elegant and concise graphics in the style of D3.js. Moreover, it has the capability of high-performance interactivity over very large or streaming datasets.
Blaze for extending the capability of Numpy and Pandas to distributed and streaming datasets. It can be used to access data from a multitude of sources including Bcolz, MongoDB, SQLAlchemy, Apache Spark, PyTables, etc. Together with Bokeh, Blaze can act as a very powerful tool for creating effective visualizations and dashboards on huge chunks of data.
Scrapy for web crawling. It is a very useful framework for getting specific patterns of data. It has the capability to start at a website home url and then dig through web-pages within the website to gather information.
SymPy for symbolic computation. It has wide-ranging capabilities from basic symbolic arithmetic to calculus, algebra, discrete mathematics and quantum physics. Another useful feature is the capability of formatting the result of the computations as LaTeX code.
Requests for accessing the web. It works similarly to the standard Python library urllib2 but is much easier to code. You will find subtle differences from urllib2, but for beginners, Requests is more convenient.
Additional libraries, you might need:
os for Operating system and file operations
networkx and igraph for graph based data manipulations
regular expressions for finding patterns in text data
BeautifulSoup for scraping the web. It is less powerful than Scrapy, as it extracts information from just a single webpage per run.
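To show a few of these libraries working together, here is a tiny sketch, assuming NumPy, Pandas, and Matplotlib are installed; the noisy sine data is synthetic, invented just for the demo:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# NumPy: build a synthetic signal as n-dimensional arrays
x = np.linspace(0, 10, 100)
y = np.sin(x) + np.random.default_rng(1).normal(scale=0.3, size=x.size)

# Pandas: hold it in a DataFrame for structured manipulation
df = pd.DataFrame({"x": x, "y": y})
print(df.describe())

# Matplotlib: visualize it (Pandas plotting delegates to Matplotlib)
df.plot(x="x", y="y", kind="scatter", title="noisy sine")
plt.show()
```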
Here are 5 key Python libraries/concepts that are particularly important for data analysts:
1. Pandas: Pandas is a powerful library for data manipulation and analysis in Python. It provides data structures like DataFrames and Series that make it easy to work with structured data. Pandas offers functions for reading and writing data, cleaning and transforming data, and performing data analysis tasks like filtering, grouping, and aggregating.
2. NumPy: NumPy is a fundamental package for scientific computing in Python. It provides support for large, multi-dimensional arrays and matrices, along with a collection of mathematical functions to operate on these arrays efficiently. NumPy is often used in conjunction with Pandas for numerical computations and data manipulation.
3. Matplotlib and Seaborn: Matplotlib is a popular plotting library in Python that allows you to create a wide variety of static, interactive, and animated visualizations. Seaborn is built on top of Matplotlib and provides a higher-level interface for creating attractive and informative statistical graphics. These libraries are essential for data visualization in data analysis projects.
4. Scikit-learn: Scikit-learn is a machine learning library in Python that provides simple and efficient tools for data mining and data analysis tasks. It includes a wide range of algorithms for classification, regression, clustering, dimensionality reduction, and more. Scikit-learn also offers tools for model evaluation, hyperparameter tuning, and model selection.
5. Data Cleaning and Preprocessing: Data cleaning and preprocessing are crucial steps in any data analysis project. Python offers libraries like Pandas and NumPy for handling missing values, removing duplicates, standardizing data types, scaling numerical features, encoding categorical variables, and more. Understanding how to clean and preprocess data effectively is essential for accurate analysis and modeling.
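A brief sketch of those cleaning steps in Pandas; the tiny raw table below is hypothetical, and the median/"unknown" imputation choices are just examples:

```python
import pandas as pd

# A hypothetical raw table with typical problems:
# a duplicate row, a wrong dtype, and missing values
raw = pd.DataFrame({
    "age": ["25", "25", None, "40"],
    "city": ["Paris", "Paris", "Oslo", None],
})

clean = raw.drop_duplicates().copy()               # remove duplicate rows
clean["age"] = pd.to_numeric(clean["age"])         # standardize data types
clean["age"] = clean["age"].fillna(clean["age"].median())  # impute numerics
clean["city"] = clean["city"].fillna("unknown")    # impute categoricals
encoded = pd.get_dummies(clean, columns=["city"])  # encode the categorical column
print(encoded)
```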
By mastering these Python concepts and libraries, data analysts can efficiently manipulate and analyze data, create insightful visualizations, apply machine learning techniques, and derive valuable insights from their datasets.
Credits: https://t.iss.one/free4unow_backup
ENJOY LEARNING
7 Rules of Life:
- Let it go
- Ignore them
- Give it time
- Don't compare
- Stay calm
- It's on you
- Always smile
Forwarded from Jobs | Internships | Placement | Interviews
I am not sure if you are all aware, but there are many scammers on Telegram who may ask you to pay them 200 rs with a promise of 1250 back after some time. Never reply to these fraudsters. Never pay money to anyone on Telegram for the sake of doubling it or any similar scheme.
Be smart, stay safe.
9 hacks to boost your productivity:
1) Plan your day. Write everything on a physical paper.
2) Follow the 80/20 rule. 20% of your work will bring you 80% of the results.
3) Stop multitasking. Switching tasks significantly reduces your productivity.
... read more
This post is for beginners who have decided to learn data science. Becoming a data scientist is a journey (6 months to 1 year at least), not a 1-month thing where you do some courses and you are a data scientist. There are different fields in data science; you first have to get familiar with the basics, build a strong foundation, and do hands-on work to gain the abilities required in a full-time job. Then you can delve further into advanced implementations.
There are plenty of roadmaps and online content, both paid and free, that you can follow. In a nutshell, a few essential things that will be necessary - in no particular order - to at least get your data science journey started are below:
Basic statistics, linear algebra, calculus, and probability
Programming language (R or Python) - preferably Python if you may later want to move into a developer role instead of sticking to data science.
Machine Learning - All of the above will be used here to implement machine learning concepts.
Data Visualisation - this could be simple Excel, R/Python libraries, or tools like Tableau and Power BI.
This can be overwhelming, but again, it's just an indication of what lies ahead. The most important thing is to just START instead of contemplating the best way to go about it, since a lot of this can be learnt independently and in no particular order.
You can use the below Sources to prepare your own roadmap:
@free4unow_backup - some free courses from here
@datasciencefun - data science and machines learning resources
Data Science - https://365datascience.pxf.io/q4m66g
Python - https://bit.ly/45rlWZE
Kaggle - https://www.kaggle.com/learn