50 Projects In 50 Days - HTML, CSS & JavaScript.zip.001
2 GB
50 Projects In 50 Days - HTML, CSS & JavaScript.zip.002
2 GB
50 Projects In 50 Days - HTML, CSS & JavaScript.zip.003
2 GB
50 Projects In 50 Days - HTML, CSS & JavaScript.zip.004
1.3 GB
Building_Chatbots_with_Python_Using_Natural_Language_Processing.pdf
5.2 MB
Building Chatbots with Python
Algorithms-Leetcode-Javascript
Webpack questions/answers you can use to prepare for interviews or test your knowledge.
Creator: Stepan V
Stars ⭐️: 178
Forks: 60
GitHub Repo: https://github.com/styopdev/webpack-interview-questions
Curated papers, articles, and blogs on data science & machine learning in production.
https://github.com/eugeneyan/applied-ml
john-c-shovic-raspberry-pi-iot-projects-prototyping-2021.epub
5.9 MB
Raspberry Pi IoT Projects
John C. Shovic, 2021
Managing Machine Learning Projects.pdf
9.4 MB
Managing Machine Learning Projects
Simon Thompson, 2022
Feature scaling is one of the most useful and necessary transformations to perform on a training dataset: with very few exceptions, ML algorithms do not perform well on datasets whose attributes have very different scales.
Let's talk about it 🧵
There are two very effective techniques for bringing all the attributes of a dataset onto the same scale:
▪️ Normalization
▪️ Standardization
The two techniques perform the same task, but in different ways, and each has its strengths and weaknesses.
Normalization (min-max scaling) is very simple: values are shifted and rescaled so they end up in the range 0 to 1.
This is achieved by subtracting the min value from each value and dividing the result by the difference between the max and min values.
In contrast, Standardization first subtracts the mean value (so that the values always have zero mean) and then divides the result by the standard deviation (so that the resulting distribution has unit variance).
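To make those two formulas concrete, here is a minimal NumPy sketch (the toy values are made up for illustration):

import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # toy feature values

# Normalization (min-max scaling): (x - min) / (max - min) -> values in [0, 1]
x_norm = (x - x.min()) / (x.max() - x.min())
print(x_norm)                      # [0.   0.25 0.5  0.75 1.  ]

# Standardization: (x - mean) / std -> zero mean, unit variance
x_std = (x - x.mean()) / x.std()
print(x_std.mean(), x_std.std())   # ~0.0 and 1.0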
More about them:
▪️ Standardization doesn't bound the data to the 0-1 range, which some algorithms require.
▪️ Standardization is much less affected by outliers.
▪️ Normalization is sensitive to outliers: a single very large value can squash all the other values into a narrow band such as 0.0-0.2. A small example follows below.
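A tiny made-up example of that outlier effect:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 1000.0])  # one extreme outlier

# Min-max scaling: the outlier becomes 1.0 and crams every other value near 0
x_norm = (x - x.min()) / (x.max() - x.min())
print(x_norm)  # ≈ [0, 0.001, 0.002, 0.003, 1.0]

# Standardization imposes no fixed output range, so values are not forced into [0, 1]
x_std = (x - x.mean()) / x.std()
print(x_std)   # ≈ [-0.50, -0.50, -0.50, -0.50, 2.00]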
Both techniques are implemented in the Scikit-learn Python library and are very easy to use. Check the Google Colab notebook below for a toy example of how each technique works.
https://colab.research.google.com/drive/1DsvTezhnwfS7bPAeHHHHLHzcZTvjBzLc?usp=sharing
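If you just want the short version, here is a minimal scikit-learn sketch (not the notebook's exact code; the toy matrix is made up):

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])  # two features on very different scales

print(MinMaxScaler().fit_transform(X))    # each column rescaled to [0, 1]
print(StandardScaler().fit_transform(X))  # each column gets zero mean and unit variance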
Check the spreadsheet below for another example, step by step, of how to normalize and standardize your data.
https://docs.google.com/spreadsheets/d/14GsqJxrulv2CBW_XyNUGoA-f9l-6iKuZLJMcc2_5tZM/edit?usp=drivesdk
The real benefit of feature scaling shows up when you train a model on a dataset with many features (e.g., m > 10) whose scales differ by orders of magnitude. For neural networks this preprocessing is key, and it also enables gradient descent to converge faster.
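One common pattern (a sketch with made-up random data, not from the notebook) is to put the scaler and a gradient-descent-based model into a single scikit-learn Pipeline, so the scaling fitted on the training data is reapplied automatically at prediction time:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * [1.0, 100.0, 10_000.0]     # feature scales spanning several orders of magnitude
y = X @ np.array([0.5, 0.01, 0.0001]) + rng.normal(size=200)

# StandardScaler in front of SGDRegressor helps gradient descent converge
model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000))
model.fit(X, y)
print(model.score(X, y))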