Pattern Recognition and Machine Learning [Information Science and Statistics]
Christopher M. Bishop
#python #machinelearning #statistics #information #ai #ml
Introduction to Machine Learning
by Alex Smola and S.V.N. Vishwanathan
Cambridge University Press
#numpy
NumPy
Smart use of ":" to extract the right shape
Sometimes you encounter a 3-D array of shape (N, T, D) while your function requires a shape of (N, D). In such cases, reshape() will do more harm than good, so you are left with one simple solution: loop over the time axis and slice (in the snippet below, process() is just a stand-in for your (N, D) function).
Example:
for t in range(T):                    # Python 3: use range (xrange was Python 2 only)
    x[:, t, :] = process(x[:, t, :])  # each slice has shape (N, D); process() is a placeholder
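For a fully self-contained version of the same pattern, here is a minimal sketch; the array sizes and the per-step operation (mean-centering) are made up purely for illustration:

import numpy as np

N, T, D = 4, 3, 5
x = np.random.randn(N, T, D)                  # a 3-D array of shape (N, T, D)

out = np.empty_like(x)
for t in range(T):
    step = x[:, t, :]                         # shape (N, D): every sample at time step t
    out[:, t, :] = step - step.mean(axis=0)   # any (N, D) -> (N, D) operation fits here

print(out.shape)                              # (4, 3, 5)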
To become a Machine Learning Engineer:
• Python
• numpy, pandas, matplotlib, Scikit-Learn
• TensorFlow or PyTorch
• Jupyter, Colab
• Analysis > Code
• 99%: Foundational algorithms
• 1%: Other algorithms
• Solve problems - this is key
• Teaching = 2 × Learning
• Have fun!
A LITTLE GUIDE TO HANDLING MISSING DATA
Does any feature have more than 5-10% of its values missing? Then you should treat it as missing data, i.e. a feature with a high absence rate.
How can you handle these missing values without losing important parts of your data?
Not a problem. Here are the key points to know:
✔️ Instances with missing values for all features should be eliminated.
✔️ Features with a high absence rate should either be eliminated or have their values filled in.
✔️ Missing values can be replaced using mean imputation or regression imputation (a small pandas sketch follows this list).
✔️ Be careful with mean imputation: it may introduce bias because it evens out all instances.
✔️ Regression imputation might cause your model to overfit.
✔️ Mean and regression imputation can't be applied to text features with missing values.
✔️ Text features with missing values can be eliminated if they are not needed in the data.
✔️ Important text features with missing values can be replaced with a new class or category labelled "uncategorized".
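As a quick illustration of the points above, here is a minimal pandas sketch of mean imputation and the "uncategorized" fallback for a text column; the DataFrame, column names, and values are made up for illustration:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":  [25, np.nan, 40, 31, np.nan],
    "city": ["Paris", None, "Lagos", None, "Tokyo"],
})

df = df.dropna(how="all")                        # drop instances missing every feature

df["age"] = df["age"].fillna(df["age"].mean())   # mean imputation (beware: it evens out instances)

df["city"] = df["city"].fillna("uncategorized")  # new category for an important text feature

print(df)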
Top 8 GitHub Repos to Learn Data Science and Python
1. All Algorithms Implemented in Python
By: The Algorithms
Stars: 135K
Forks: 35.3K
Repo: https://github.com/TheAlgorithms/Python
2. DataScienceResources
By: Jonathan Bower
Stars: 3K
Forks: 1.3K
Repo: https://github.com/jonathan-bower/DataScienceResources
3. Playground and Cheatsheet for Learning Python
By: Oleksii Trekhleb
Stars: 12.5K
Forks: 2K
Repo: https://github.com/trekhleb/learn-python
4. Learn Python 3
By: Jerry Pussinen
Stars: 4.8K
Forks: 1.4K
Repo: https://github.com/jerry-git/learn-python3
5. Awesome Data Science
By: Fatih Aktürk, Hüseyin Mert, Osman Ungur, Recep Erol
Stars: 18.4K
Forks: 5K
Repo: https://github.com/academic/awesome-datascience
6. data-scientist-roadmap
By: MrMimic
Stars: 5K
Forks: 1.5K
Repo: https://github.com/MrMimic/data-scientist-roadmap
7. Data Science Best Resources
By: Tirthajyoti Sarkar
Stars: 1.8K
Forks: 717
Repo: https://github.com/tirthajyoti/Data-science-best-resources/blob/master/README.md
8. Ds-cheatsheets
By: Favio André Vázquez
Stars: 10.4K
Forks: 3.1K
Repo: https://github.com/FavioVazquez/ds-cheatsheets
Deep Learning with PyTorch by Prof. Yann LeCun (pioneer of CNNs)
This course concerns the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
GitHub Link: https://atcold.github.io/pytorch-Deep-Learning/
YouTube Playlist: https://www.youtube.com/playlist?list=PLLHTzKZzVU9eaEyErdV26ikyolxOsz6mq
Course website: https://bit.ly/DLSP20-web
New data scientists: when you're learning, it's easy to get distracted by machine learning and deep learning terms like "XGBoost", "Neural Networks", "RNN", "LSTM", or advanced technologies like "Spark", "Julia", "Scala", "Go", etc.
Don't get bogged down trying to learn every new term & technology you come across.
Instead, focus on foundations.
- data wrangling
- visualizing
- exploring
- modeling
- understanding the results.
The best tools are often the basic ones. Build yourself up step by step; you'll advance much faster. Keep learning!
Which of the following tools can be used for data visualization?
Anonymous Quiz
Matplotlib: 21%
Tableau: 17%
Seaborn: 2%
All of the above: 61%
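All three can indeed be used for visualization (Tableau through its GUI, the other two from Python). As a quick sketch, here is a toy example with Matplotlib and Seaborn; the data values are made up:

import matplotlib.pyplot as plt
import seaborn as sns

values = [3, 7, 1, 9, 4]

plt.plot(values, marker="o")       # Matplotlib: a plain line plot
plt.title("Matplotlib line plot")
plt.show()

sns.barplot(x=list(range(len(values))), y=values)   # Seaborn: the same data as bars
plt.title("Seaborn bar plot")
plt.show()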
Data Analysis Interview Questions and Answers
1. How to create filters in Power BI?
Filters are an integral part of Power BI reports. They are used to slice and dice the data along the dimensions we want. Filters can be created in a couple of ways.
Using slicers: A slicer is a visual in the Visualizations pane. It can be added to the design view to filter our reports. When a slicer is added to the design view, it requires a field to be added to it. For example, a slicer can be added for the Country field, and the data can then be filtered by country.
Using the Filter pane: The Power BI team has added a filter pane to reports, a single space where we can add different fields as filters. These fields can be added depending on whether you want to filter only one visual (visual-level filter), all the visuals on the report page (page-level filters), or all the pages of the report (report-level filters).
2. How to sort data in Power BI?
Sorting is available in multiple places. The data view offers standard alphabetical sorting. Apart from that, there is the Sort by Column option, where one column can be sorted based on another column. Sorting is also available in visuals: the fields and measures present in a visual can be sorted in ascending or descending order.
3. How to convert a PDF to Excel?
Open the PDF document you want to convert to XLSX format in Acrobat DC.
Go to the right pane and click on the "Export PDF" option.
Choose Spreadsheet as the export format.
Select "Microsoft Excel Workbook."
Now click "Export."
Download the converted file or share it.
4. How to enable macros in Excel?
Click the File tab and then click "Options."
A dialog box will appear. In the "Excel Options" dialog box, click on "Trust Center" and then "Trust Center Settings."
Go to "Macro Settings" and select "Enable all macros."
Click OK to apply the macro settings.
--------------------
ENJOY LEARNING
While certificates have their place in proving your skills, completing a course just for the sake of the certificate is not going to help you at all. So whatever courses you take up, please make sure that you learn, practice, and actually acquire the skill.
Some helpful data science projects for beginners:
https://www.kaggle.com/c/house-prices-advanced-regression-techniques
https://www.kaggle.com/c/digit-recognizer
https://www.kaggle.com/c/titanic
Intermediate-level data science projects:
Black Friday Data: https://www.kaggle.com/sdolezel/black-friday
Human Activity Recognition Data: https://www.kaggle.com/uciml/human-activity-recognition-with-smartphones
Trip History Data: https://www.kaggle.com/pronto/cycle-share-dataset
Million Song Data: https://www.kaggle.com/c/msdchallenge
Census Income Data: https://www.kaggle.com/c/census-income/data
MovieLens Data: https://www.kaggle.com/grouplens/movielens-20m-dataset
Twitter Classification Data: https://www.kaggle.com/c/twitter-sentiment-analysis2
Text Mining: https://www.kaggle.com/kanncaa1/applying-text-mining
Three different learning styles in machine learning algorithms:
1. Supervised Learning
Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a given time.
A model is prepared through a training process in which it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.
Example problems are classification and regression.
Example algorithms include: Logistic Regression and the Back Propagation Neural Network.
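A minimal supervised-learning sketch using scikit-learn (assuming it is installed; the toy points and labels are made up for illustration):

from sklearn.linear_model import LogisticRegression

# Labelled training data: two features per example, binary labels
X = [[0.0, 1.2], [1.0, 0.8], [2.9, 3.1], [3.2, 2.7]]
y = [0, 0, 1, 1]

clf = LogisticRegression().fit(X, y)   # corrected against the known labels during training
print(clf.predict([[3.0, 3.0]]))       # predict the label of a new point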
2. Unsupervised Learning
Input data is not labeled and does not have a known result.
A model is prepared by deducing structures present in the input data. This may be to extract general rules. It may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity.
Example problems are clustering, dimensionality reduction and association rule learning.
Example algorithms include: the Apriori algorithm and K-Means.
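A matching unsupervised sketch with k-means clustering on the same kind of made-up points, this time with no labels provided:

from sklearn.cluster import KMeans

X = [[0.0, 1.2], [1.0, 0.8], [2.9, 3.1], [3.2, 2.7]]   # no labels, only structure

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # cluster assignments deduced from the data alone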
3. Semi-Supervised Learning
Input data is a mixture of labeled and unlabelled examples.
There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions.
Example problems are classification and regression.
Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabeled data.
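One concrete example of such an extension is scikit-learn's self-training wrapper, where unlabeled examples are marked with -1; a minimal sketch on made-up data:

from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# A mixture of labelled (0/1) and unlabelled (-1) examples
X = [[0.0, 1.2], [1.0, 0.8], [2.9, 3.1], [3.2, 2.7], [1.1, 1.0], [3.0, 2.9]]
y = [0, 0, 1, 1, -1, -1]

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(model.predict([[0.5, 1.0], [3.1, 3.0]]))   # predictions for new points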
Interview Q&As for ML Engineers
1. What are the various steps involved in a data analytics project?
The steps involved in a data analytics project are (a small scikit-learn sketch of the split and tuning steps follows the list):
Data collection
Data cleansing
Data pre-processing
EDA
Creation of train test and validation sets
Model creation
Hyperparameter tuning
Model deployment
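For the split and tuning steps above, a minimal scikit-learn sketch; the dataset, model, and parameter grid are arbitrary choices for illustration:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)

# Train/test split (a validation set can be carved out the same way)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Hyperparameter tuning with cross-validation on the training set only
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [50, 100], "max_depth": [None, 5]},
                    cv=3)
grid.fit(X_train, y_train)

print(grid.best_params_, grid.score(X_test, y_test))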
2. Explain Star Schema.
A star schema is a data warehousing design in which a central fact table is connected to multiple surrounding dimension tables, forming a star-like shape.
3. What is root cause analysis?
Root cause analysis is the process of tracing an event back to its origin and the factors that led to it. It is generally done when software malfunctions. In data science, root cause analysis helps businesses understand the reasons behind certain outcomes.
4. Define Confounding Variables.
A confounding variable is an external influence in an experiment. In simple terms, it distorts the apparent effect of an independent variable on a dependent variable. A variable should satisfy the conditions below to be a confounding variable:
It should be correlated with the independent variable.
It should be causally related to the dependent variable.
For example, if you are studying whether a lack of exercise has an effect on weight gain, then the lack of exercise is the independent variable and weight gain is the dependent variable. A confounding variable can be any other factor that affects weight gain: the amount of food consumed, weather conditions, etc.