50 Projects In 50 Days - HTML, CSS & JavaScript.zip.001
2 GB
50 Projects In 50 Days - HTML, CSS & JavaScript.zip.002
2 GB
50 Projects In 50 Days - HTML, CSS & JavaScript.zip.003
2 GB
50 Projects In 50 Days - HTML, CSS & JavaScript.zip.004
1.3 GB
Building_Chatbots_with_Python_Using_Natural_Language_Processing.pdf
5.2 MB
Building Chatbots with Python
Algorithms-Leetcode-Javascript
Webpack questions/answers you can use to prepare for interviews or test your knowledge.
Creator: Stepan V
Stars ⭐️: 178
Forks: 60
GitHub Repo: https://github.com/styopdev/webpack-interview-questions
Curated papers, articles, and blogs on data science & machine learning in production.
https://github.com/eugeneyan/applied-ml
john-c-shovic-raspberry-pi-iot-projects-prototyping-2021.epub
5.9 MB
Raspberry Pi IoT Projects
John C. Shovic, 2021
Managing Machine Learning Projects .pdf
9.4 MB
Managing Machine Learning Projects
Simon Thompson, 2022
Feature scaling is one of the most useful and necessary transformations to apply to a training dataset, since, with very few exceptions, ML algorithms do not perform well on datasets whose attributes have very different scales.
Let's talk about it 🧵
There are 2 very effective techniques to transform all the attributes of a dataset to the same scale, which are:
▪️ Normalization
▪️ Standardization
The 2 techniques perform the same task, but in different ways. Moreover, each one has its strengths and weaknesses.
Normalization (min-max scaling) is very simple: values are shifted and rescaled so that they end up in the range 0 to 1.
This is achieved by subtracting the min value from each value and dividing the result by the difference between the max and min values.
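A quick sketch of that formula on toy values (made-up numbers, just for illustration):

# min-max normalization by hand (toy values)
import numpy as np
x = np.array([10.0, 20.0, 55.0, 100.0])
x_norm = (x - x.min()) / (x.max() - x.min())
print(x_norm)  # [0.    0.111 0.5   1.   ] -> everything now lies in [0, 1]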
In contrast, Standardization first subtracts the mean value (so that the values always have zero mean) and then divides the result by the standard deviation (so that the resulting distribution has unit variance).
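And the same idea for standardization, on the same toy values:

# standardization by hand (same toy values as above)
import numpy as np
x = np.array([10.0, 20.0, 55.0, 100.0])
x_std = (x - x.mean()) / x.std()
print(x_std)  # result has mean ~0 and standard deviation ~1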
More about them:
▪️Standardization doesn't bound the data to the 0-1 range, which can be a drawback for some algorithms.
▪️Standardization is much less affected by outliers.
▪️Normalization is sensitive to outliers: a single very large value can squash the other values into a narrow range such as 0.0-0.2 (see the small example below).
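A tiny made-up example of that squashing effect:

# one extreme outlier squashes the min-max scaled values (made-up data)
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
x_minmax = (x - x.min()) / (x.max() - x.min())
print(x_minmax)  # [0.     0.0101 0.0202 0.0303 1.    ] -> non-outliers crammed near 0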
Both techniques are implemented in the Scikit-learn Python library and are very easy to use. Check the Google Colab notebook below for a toy example showing how each technique works.
https://colab.research.google.com/drive/1DsvTezhnwfS7bPAeHHHHLHzcZTvjBzLc?usp=sharing
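For reference, a minimal sketch of the two Scikit-learn scalers (toy data, unrelated to the notebook above):

# MinMaxScaler = normalization, StandardScaler = standardization (toy data)
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0], [5.0], [10.0], [100.0]])   # one feature with very different magnitudes
X_minmax = MinMaxScaler().fit_transform(X)      # values now in [0, 1]
X_standard = StandardScaler().fit_transform(X)  # values now have zero mean and unit variance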
Check the spreadsheet below for another step-by-step example of how to normalize and standardize your data.
https://docs.google.com/spreadsheets/d/14GsqJxrulv2CBW_XyNUGoA-f9l-6iKuZLJMcc2_5tZM/edit?usp=drivesdk
The real benefit of feature scaling shows up when you train a model on a dataset with many features (e.g., m > 10) whose scales differ by orders of magnitude. For neural networks this preprocessing is key: it enables gradient descent to converge faster.