Many data scientists don't know how to push ML models to production. Here's the recipe 👇
Key Ingredients
🔹 Train / Test Datasets - Ensure the test set is representative of online data
🔹 Feature Engineering Pipeline - Generate features in real time
🔹 Model Object - A trained scikit-learn or TensorFlow model
🔹 Project Code Repo - Save the model project code to GitHub
🔹 API Framework - Use FastAPI or Flask to build a model API
🔹 Docker - Containerize the ML model API
🔹 Remote Server - Choose a cloud service, e.g. AWS SageMaker
🔹 Unit Tests - Test inputs & outputs of functions and APIs
🔹 Model Monitoring - Evidently AI, a simple open-source tool for ML monitoring
Procedure
Step 1 - Data Preparation & Feature Engineering
Don't promote a model just because it hits 90% accuracy on the training set. Judge it on the test set, and only if the test set is representative of the online data. Use a scikit-learn Pipeline to chain preprocessing steps such as null handling.
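A minimal sketch of such a preprocessing-plus-model Pipeline. The column names and the model choice here are illustrative assumptions, not from the post:

```python
# A scikit-learn Pipeline: imputation + scaling + model in one object, so the
# exact same preprocessing runs at training time and at serving time.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric = ["age", "income"]          # hypothetical feature names
categorical = ["country"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("prep", preprocess),
                  ("clf", LogisticRegression(max_iter=1000))])
# model.fit(X_train, y_train); model.score(X_test, y_test)
```

Keeping null handling inside the Pipeline is what prevents train/serve skew later on.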
Step 2 - Model Development
Train your model with frameworks like scikit-learn or TensorFlow. Push the model code, including the preprocessing, training, and validation scripts, to GitHub for reproducibility.
Step 3 - API Development & Containerization
Your model needs a "/predict" endpoint that receives a JSON object in the request and returns a JSON object with the model score in the response. You can use frameworks like FastAPI or Flask. Containerize this API so that it's agnostic to the server environment.
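A minimal sketch of such a "/predict" endpoint with FastAPI. The feature names and the model file path are hypothetical; the model is assumed to be the pickled Pipeline from Step 1:

```python
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical path to the trained Pipeline

class Features(BaseModel):
    age: float
    income: float
    country: str

@app.post("/predict")
def predict(features: Features):
    # Build a one-row DataFrame so the Pipeline sees the columns it expects.
    row = pd.DataFrame([features.dict()])          # .model_dump() on Pydantic v2
    score = float(model.predict_proba(row)[0, 1])  # probability of positive class
    return {"score": score}
```

Run it locally with `uvicorn main:app --reload` and POST a JSON body like `{"age": 34, "income": 52000, "country": "US"}` to /predict.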
Step 4 - Testing & Deployment
Write tests to validate the inputs & outputs of API functions to prevent errors. Then deploy the containerized API to a remote service like AWS SageMaker.
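A small pytest sketch for the API above, using FastAPI's built-in test client (the endpoint and payload fields are the hypothetical ones from Step 3, and the app is assumed to live in main.py):

```python
from fastapi.testclient import TestClient
from main import app  # assumes the API module is main.py

client = TestClient(app)

def test_predict_returns_score():
    payload = {"age": 34, "income": 52000.0, "country": "US"}
    response = client.post("/predict", json=payload)
    assert response.status_code == 200
    assert 0.0 <= response.json()["score"] <= 1.0

def test_predict_rejects_bad_input():
    # Missing fields / wrong types should fail pydantic validation.
    response = client.post("/predict", json={"age": "not-a-number"})
    assert response.status_code == 422
```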
Step 5 - Monitoring
Set up monitoring tools like Evidently AI, or use the built-in monitoring in AWS SageMaker. I use such tools to track performance metrics and data drift on online data.
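For the Evidently side, a hedged sketch using its Report API (as it exists around Evidently v0.4; the parquet paths are hypothetical, and both DataFrames are assumed to share the model's feature schema):

```python
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# reference = training-time data, current = recent online data (assumed paths).
reference = pd.read_parquet("train_features.parquet")
current = pd.read_parquet("online_features.parquet")

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("drift_report.html")  # inspect which features drifted
```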
Essential Python Libraries for Data Analytics 👇
Python Free Resources: https://t.iss.one/pythondevelopersindia
1. NumPy:
- Efficient numerical operations and array manipulation.
2. Pandas:
- Data manipulation and analysis with powerful data structures (DataFrame, Series).
3. Matplotlib:
- 2D plotting library for creating visualizations.
4. Scikit-learn:
- Machine learning toolkit for classification, regression, clustering, etc.
5. TensorFlow:
- Open-source machine learning framework for building and deploying ML models.
6. PyTorch:
- Deep learning library, particularly popular for neural network research.
7. Django:
- High-level web framework for building robust, scalable web applications.
8. Flask:
- Lightweight web framework for building smaller web applications and APIs.
9. Requests:
- HTTP library for making HTTP requests.
10. Beautiful Soup:
- Web scraping library for pulling data out of HTML and XML files.
As a beginner, you can start with the Pandas and NumPy libraries for data analysis. If you want to transition from Data Analyst to Data Scientist, then you can start applying ML libraries like Scikit-learn, TensorFlow, PyTorch, etc. in your data projects.
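If you are just starting out, here is a tiny NumPy + Pandas example of the kind of first analysis you would do (the data is made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city": ["Pune", "Delhi", "Pune", "Delhi"],
    "sales": [250.0, np.nan, 310.0, 190.0],
})
# Fill the missing value with the median, then summarize per city.
df["sales"] = df["sales"].fillna(df["sales"].median())
print(df.groupby("city")["sales"].agg(["mean", "sum"]))
```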
Share with credits: https://t.iss.one/sqlspecialist
Hope it helps :)
Complete Roadmap to learn Data Science
1. Foundational Knowledge
Mathematics and Statistics
- Linear Algebra: Understand vectors, matrices, and tensor operations.
- Calculus: Learn about derivatives, integrals, and optimization techniques.
- Probability: Study probability distributions, Bayes' theorem, and expected values.
- Statistics: Focus on descriptive statistics, hypothesis testing, regression, and statistical significance.
Programming
- Python: Start with basic syntax, data structures, and OOP concepts. Libraries to learn: NumPy, pandas, matplotlib, seaborn.
- R: Get familiar with basic syntax and data manipulation (optional but useful).
- SQL: Understand database querying, joins, aggregations, and subqueries.
2. Core Data Science Concepts
Data Wrangling and Preprocessing
- Cleaning and preparing data for analysis.
- Handling missing data, outliers, and inconsistencies.
- Feature engineering and selection.
Data Visualization
- Tools: Matplotlib, seaborn, Plotly.
- Concepts: Types of plots, storytelling with data, interactive visualizations.
Machine Learning
- Supervised Learning: Linear regression, logistic regression, decision trees, random forests, support vector machines, k-nearest neighbors.
- Unsupervised Learning: K-means clustering, hierarchical clustering, PCA.
- Advanced Techniques: Ensemble methods, gradient boosting (XGBoost, LightGBM), neural networks.
- Model Evaluation: Train-test split, cross-validation, confusion matrix, ROC-AUC (a short example follows this list).
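As a taste of model evaluation in practice, a minimal cross-validation sketch on a built-in scikit-learn dataset (the model and scoring choices are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
# 5-fold cross-validation gives a more honest estimate than a single split.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=5, scoring="roc_auc")
print(f"ROC-AUC per fold: {scores.round(3)}  mean: {scores.mean():.3f}")
```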
3. Advanced Topics
Deep Learning
- Frameworks: TensorFlow, Keras, PyTorch.
- Concepts: Neural networks, CNNs, RNNs, LSTMs, GANs.
Natural Language Processing (NLP)
- Basics: Text preprocessing, tokenization, stemming, lemmatization.
- Advanced: Sentiment analysis, topic modeling, word embeddings (Word2Vec, GloVe), transformers (BERT, GPT).
Big Data Technologies
- Frameworks: Hadoop, Spark.
- Databases: NoSQL databases (MongoDB, Cassandra).
4. Practical Experience
Projects
- Start with small datasets (Kaggle, UCI Machine Learning Repository).
- Progress to more complex projects involving real-world data.
- Work on end-to-end projects, from data collection to model deployment.
Competitions and Challenges
- Participate in Kaggle competitions.
- Engage in hackathons and coding challenges.
5. Soft Skills and Tools
Communication
- Learn to present findings clearly and concisely.
- Practice writing reports and creating dashboards (Tableau, Power BI).
Collaboration Tools
- Version Control: Git and GitHub.
- Project Management: JIRA, Trello.
6. Continuous Learning and Networking
Staying Updated
- Follow data science blogs, podcasts, and research papers.
- Join professional groups and forums (LinkedIn, Kaggle, Reddit, DataSimplifier).
7. Specialization
After gaining a broad understanding, you might want to specialize in areas such as:
- Data Engineering
- Business Analytics
- Computer Vision
- AI and Machine Learning Research
👨‍💻 Websites to Practice Python
Python Basics:
1. https://codingbat.com/python
2. https://www.hackerrank.com/
3. https://www.hackerearth.com/practice/
Practice Problem Sets:
4. https://projecteuler.net/archives
5. https://www.codeabbey.com/index/task_list
6. https://www.pythonchallenge.com/
Hey Guys,
The Average Salary of a Data Scientist is 14 LPA
Become a Certified Data Scientist in Top MNCs
We help you master the required skills.
Learn by doing, build Industry level projects
👩‍🎓 1500+ Students Placed
💼 7.2 LPA Avg. Package
💰 41 LPA Highest Package
🤝 450+ Hiring Partners
Apply for FREE:
https://tracking.acciojob.com/g/PUfdDxgHR
( Limited Slots )
If you know all these, you know most things in Generative AI 👇👇
https://t.iss.one/generativeai_gpt/266
Some essential concepts every data scientist should understand:
### 1. Statistics and Probability
- Purpose: Understanding data distributions and making inferences.
- Core Concepts: Descriptive statistics (mean, median, mode), inferential statistics, probability distributions (normal, binomial), hypothesis testing, p-values, confidence intervals.
### 2. Programming Languages
- Purpose: Implementing data analysis and machine learning algorithms.
- Popular Languages: Python, R.
- Libraries: NumPy, Pandas, Scikit-learn (Python), dplyr, ggplot2 (R).
### 3. Data Wrangling
- Purpose: Cleaning and transforming raw data into a usable format.
- Techniques: Handling missing values, data normalization, feature engineering, data aggregation (a small sketch follows).
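A small illustrative Pandas sketch of those wrangling steps; the column name and values are made up:

```python
import pandas as pd

df = pd.DataFrame({"income": [42000, None, 58000, 1_000_000, 47000]})
df["income"] = df["income"].fillna(df["income"].median())  # missing values
cap = df["income"].quantile(0.95)
df["income"] = df["income"].clip(upper=cap)                # tame extreme outliers
df["income_norm"] = (df["income"] - df["income"].min()) / (
    df["income"].max() - df["income"].min())               # min-max normalization
print(df)
```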
### 4. Exploratory Data Analysis (EDA)
- Purpose: Summarizing the main characteristics of a dataset, often using visual methods.
- Tools: Matplotlib, Seaborn (Python), ggplot2 (R).
- Techniques: Histograms, scatter plots, box plots, correlation matrices.
### 5. Machine Learning
- Purpose: Building models to make predictions or find patterns in data.
- Core Concepts: Supervised learning (regression, classification), unsupervised learning (clustering, dimensionality reduction), model evaluation (accuracy, precision, recall, F1 score).
- Algorithms: Linear regression, logistic regression, decision trees, random forests, support vector machines, k-means clustering, principal component analysis (PCA).
### 6. Deep Learning
- Purpose: Advanced machine learning techniques using neural networks.
- Core Concepts: Neural networks, backpropagation, activation functions, overfitting, dropout.
- Frameworks: TensorFlow, Keras, PyTorch.
### 7. Natural Language Processing (NLP)
- Purpose: Analyzing and modeling textual data.
- Core Concepts: Tokenization, stemming, lemmatization, TF-IDF, word embeddings (a TF-IDF sketch follows just below).
- Techniques: Sentiment analysis, topic modeling, named entity recognition (NER).
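To make TF-IDF concrete, a minimal scikit-learn sketch on a toy corpus (requires scikit-learn >= 1.0 for get_feature_names_out):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the model predicts customer churn",
    "customer churn hurts revenue",
    "the model is retrained weekly",
]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)  # sparse matrix: 3 docs x vocabulary
# Weights for the first document, keyed by term.
print(dict(zip(vectorizer.get_feature_names_out(), tfidf.toarray()[0].round(2))))
```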
### 8. Data Visualization
- Purpose: Communicating insights through graphical representations.
- Tools: Matplotlib, Seaborn, Plotly (Python), ggplot2, Shiny (R), Tableau.
- Techniques: Bar charts, line graphs, heatmaps, interactive dashboards.
### 9. Big Data Technologies
- Purpose: Handling and analyzing large volumes of data.
- Technologies: Hadoop, Spark.
- Core Concepts: Distributed computing, MapReduce, parallel processing.
### 10. Databases
- Purpose: Storing and retrieving data efficiently.
- Types: SQL databases (MySQL, PostgreSQL), NoSQL databases (MongoDB, Cassandra).
- Core Concepts: Querying, indexing, normalization, transactions.
### 11. Time Series Analysis
- Purpose: Analyzing data points collected or recorded at specific time intervals.
- Core Concepts: Trend analysis, seasonal decomposition, ARIMA models, exponential smoothing.
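As a quick illustration of the ARIMA models just mentioned, a hedged statsmodels sketch on synthetic data (statsmodels >= 0.12 API assumed):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = pd.Series(np.cumsum(rng.normal(0.5, 1.0, 120)))  # trending toy series

fit = ARIMA(series, order=(1, 1, 1)).fit()  # ARIMA(p=1, d=1, q=1)
print(fit.forecast(steps=5))                # next 5 predicted values
```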
### 12. Model Deployment and Productionization
- Purpose: Integrating machine learning models into production environments.
- Techniques: API development, containerization (Docker), model serving (Flask, FastAPI).
- Tools: MLflow, TensorFlow Serving, Kubernetes.
### 13. Data Ethics and Privacy
- Purpose: Ensuring ethical use and privacy of data.
- Core Concepts: Bias in data, ethical considerations, data anonymization, GDPR compliance.
### 14. Business Acumen
- Purpose: Aligning data science projects with business goals.
- Core Concepts: Understanding key performance indicators (KPIs), domain knowledge, stakeholder communication.
### 15. Collaboration and Version Control
- Purpose: Managing code changes and collaborative work.
- Tools: Git, GitHub, GitLab.
- Practices: Version control, code reviews, collaborative development.
Best Data Science & Machine Learning Resources: https://topmate.io/coding/914624
ENJOY LEARNING 👍👍
What is an Artificial Neural Network?

Artificial neural networks (ANNs) give machines the ability to process data in a way loosely inspired by the human brain and to make decisions or take actions based on that data. While machines still lack human-level imagination and reasoning, ANNs help them complete tasks and learn from the tasks they perform.

▶️ Content:
001 What is Deep Learning
002 Plan of Attack
003 The Neuron
004 The Activation Function
005 How do Neural Networks work
006 How do Neural Networks learn
007 Gradient Descent
008 Stochastic Gradient Descent
009 Backpropagation

👇 Deep Learning Complete Course
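To make the gradient-descent and backpropagation ideas above concrete, here is a minimal hand-rolled sketch of a single neuron (logistic regression) learning the AND function; it is illustrative NumPy, not code from the course:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)  # AND truth table

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))        # sigmoid activation
    grad_w = X.T @ (p - y) / len(y)     # gradient of cross-entropy loss w.r.t. w
    grad_b = (p - y).mean()             # ... and w.r.t. b
    w -= lr * grad_w                    # gradient-descent update
    b -= lr * grad_b

print(np.round(p, 2))  # predictions move toward the targets [0, 0, 0, 1]
```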
6 Tips for Building a Robust Machine Learning Model
1. Understand the problem thoroughly before jumping into the model.
↳ Taking time to understand the problem helps build a solution aligned with business needs and goals.
2. Focus on feature engineering to improve accuracy.
↳ Well-engineered features make a big difference in model performance. Collaborating with data engineers on clean and well-structured data can simplify feature engineering.
3. Start simple, test assumptions, and iterate.
↳ Begin with straightforward models to test ideas quickly. Iteration and experimentation will lead to stronger results.
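One way to "start simple" is to score a trivial baseline before any real model, so you know how much the model actually adds; a sketch on a built-in scikit-learn dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: always predict the majority class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print(f"baseline: {baseline.score(X_te, y_te):.2f}, "
      f"model: {model.score(X_te, y_te):.2f}")
```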
4. Keep track of versions for reproducibility.
↳ Documenting versions of data and code helps maintain consistency, making it easier to reproduce results.
5. Regularly validate your model with new data.
↳ Models should be updated and validated as new data becomes available to avoid performance degradation.
6. Always prioritize interpretability alongside accuracy.
↳ Building interpretable models helps stakeholders understand and trust your results, making insights more actionable.
Like if you need similar content 👍👍
Use this checklist to see if you're truly JOB-READY. The more items you complete, the closer you are to landing your dream data science job!
Check Your Skills with This Checklist!
Python:-
Master Python fundamentals
Understand Pandas for data manipulation
Learn data visualization with Matplotlib and Seaborn
Practice error handling and debugging
Statistics:-
Grasp probability theory
Know descriptive and inferential statistics
Learn statistical machine learning concepts
Exploratory Data Analysis (EDA):-
Perform data summarization
Work on data cleaning and transformation
Visualize data effectively
SQL:-
Understand the BIG 6 SQL statements
Practice joins and common table expressions (CTEs)
Use window functions
Learn to write stored procedures
Machine Learning:-
Master feature engineering
Understand regression and classification techniques
Learn clustering methods
Model Evaluation:-
Work with confusion matrices
Understand precision, recall, and F1-score
Practice cross-validation
Learn about overfitting and underfitting
Deep Learning:-
Get familiar with Convolutional Neural Networks (CNNs)
Understand transformers
Learn PyTorch or TensorFlow basics
Practice model training and optimization
Resume:-
Ensure your resume is ATS-friendly
Customize for the job description
Use the STAR method to highlight achievements
Include a link to your portfolio
AI-Enabled Mindset:-
Develop Googling skills
Use AI tools like ChatGPT or Bard for learning
Commit to continuous learning
Hone problem-solving abilities
Communication:-
Practice presenting insights clearly
Write professional emails
Manage stakeholder communication
Utilize project management tools
LinkedIn:-
Have a good profile picture and banner
Get 10+ endorsed skills
Collect at least 3 recommendations
Link your portfolio in your profile
Portfolio:-
Include 4+ business-related projects
Showcase one project per tool you know
Create an insights deck
Prepare a video presentation
Like if you need similar content 👍👍
List of Top 12 Coding Channels on WhatsApp:
1. Python Programming:
https://whatsapp.com/channel/0029VaiM08SDuMRaGKd9Wv0L
2. Coding Resources:
https://whatsapp.com/channel/0029VahiFZQ4o7qN54LTzB17
3. Coding Projects:
https://whatsapp.com/channel/0029VazkxJ62UPB7OQhBE502
4. Coding Interviews:
https://whatsapp.com/channel/0029VammZijATRSlLxywEC3X
5. Java Programming:
https://whatsapp.com/channel/0029VamdH5mHAdNMHMSBwg1s
6. Javascript:
https://whatsapp.com/channel/0029VavR9OxLtOjJTXrZNi32
7. Web Development:
https://whatsapp.com/channel/0029VaiSdWu4NVis9yNEE72z
8. Artificial Intelligence:
https://whatsapp.com/channel/0029VaoePz73bbV94yTh6V2E
9. Data Science:
https://whatsapp.com/channel/0029Va4QUHa6rsQjhITHK82y
10. Machine Learning:
https://whatsapp.com/channel/0029Va8v3eo1NCrQfGMseL2D
11. SQL:
https://whatsapp.com/channel/0029VanC5rODzgT6TiTGoa1v
12. GitHub:
https://whatsapp.com/channel/0029Vawixh9IXnlk7VfY6w43
ENJOY LEARNING 👍👍
Creating a one-month data analytics roadmap requires a focused approach to cover essential concepts and skills. Here's a structured plan along with free resources:
🗓️ Week 1: Foundation of Data Analytics
◾ Day 1-2: Basics of Data Analytics
Resource: Khan Academy's Introduction to Statistics
Focus Areas: Understand descriptive statistics, types of data, and data distributions.
◾ Day 3-4: Excel for Data Analysis
Resource: Microsoft Excel tutorials on YouTube or Excel Easy
Focus Areas: Learn essential Excel functions for data manipulation and analysis.
◾ Day 5-7: Introduction to Python for Data Analysis
Resource: Codecademy's Python course or Google's Python Class
Focus Areas: Basic Python syntax, data structures, and libraries like NumPy and Pandas.
🗓️ Week 2: Intermediate Data Analytics Skills
◾ Day 8-10: Data Visualization
Resource: Data Visualization with Matplotlib and Seaborn tutorials
Focus Areas: Creating effective charts and graphs to communicate insights.
◾ Day 11-12: Exploratory Data Analysis (EDA)
Resource: Towards Data Science articles on EDA techniques
Focus Areas: Techniques to summarize and explore datasets.
◾ Day 13-14: SQL Fundamentals
Resource: Mode Analytics SQL Tutorial or SQLZoo
Focus Areas: Writing SQL queries for data manipulation.
🗓️ Week 3: Advanced Techniques and Tools
◾ Day 15-17: Machine Learning Basics
Resource: Andrew Ng's Machine Learning course on Coursera
Focus Areas: Understand key ML concepts like supervised learning and evaluation metrics.
◾ Day 18-20: Data Cleaning and Preprocessing
Resource: Data Cleaning with Python by Packt
Focus Areas: Techniques to handle missing data, outliers, and normalization (a quick sketch follows below).
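For the Day 18-20 topics, a quick Pandas sketch of the IQR outlier rule on a made-up column:

```python
import pandas as pd

s = pd.Series([12, 14, 13, 15, 14, 98])  # 98 is the obvious outlier
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(s[(s < lo) | (s > hi)])            # flagged outliers
print(s.clip(lower=lo, upper=hi))        # winsorized (capped) series
```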
◾ Day 21-22: Introduction to Big Data
Resource: Big Data University's courses on Hadoop and Spark
Focus Areas: Basics of distributed computing and big data technologies.
🗓️ Week 4: Projects and Practice
◾ Day 23-25: Real-World Data Analytics Projects
Resource: Kaggle datasets and competitions
Focus Areas: Apply learned skills to solve practical problems.
◾ Day 26-28: Online Webinars and Community Engagement
Resource: Data Science meetups and webinars (Meetup.com, Eventbrite)
Focus Areas: Networking and learning from industry experts.
◾ Day 29-30: Portfolio Building and Review
Activity: Create a GitHub repository showcasing projects and code
Focus Areas: Present projects and skills effectively for job applications.
Additional Resources:
Books: "Python for Data Analysis" by Wes McKinney, "Data Science from Scratch" by Joel Grus.
Online Platforms: DataSimplifier, Kaggle, Towards Data Science
Tailor this roadmap to your learning pace and adjust the resources based on your preferences. Consistent practice and hands-on projects are crucial for mastering data analytics within a month. Good luck!
You don't need to spend several $1000s to learn Data Science.
Stanford University, Harvard University & Massachusetts Institute of Technology are providing free courses. 🔥
Here are 8 free courses that'll teach you better than the paid ones:
1. CS50's Introduction to Artificial Intelligence with Python (Harvard)
https://pll.harvard.edu/course/cs50s-introduction-artificial-intelligence-python
2. Data Science: Machine Learning (Harvard)
https://pll.harvard.edu/course/data-science-machine-learning
3. Artificial Intelligence (MIT)
https://lnkd.in/dG5BCPen
4. Introduction to Computational Thinking and Data Science (MIT)
https://lnkd.in/ddm5Ckk9
5. Machine Learning (MIT)
https://lnkd.in/dJEjStCw
6. Matrix Methods in Data Analysis, Signal Processing, and Machine Learning (MIT)
https://lnkd.in/dkpyt6qr
7. Statistical Learning (Stanford)
https://online.stanford.edu/courses/sohs-ystatslearning-statistical-learning
8. Mining Massive Data Sets (Stanford)
https://online.stanford.edu/courses/soe-ycs0007-mining-massive-data-sets
ENJOY LEARNING
"The Best Public Datasets for Machine Learning and Data Science" by Stacy Stanford
https://datasimplifier.com/best-data-analyst-projects-for-freshers/
https://toolbox.google.com/datasetsearch
https://www.kaggle.com/datasets
https://mlr.cs.umass.edu/ml/
https://www.visualdata.io/
https://guides.library.cmu.edu/machine-learning/datasets
https://www.data.gov/
https://nces.ed.gov/
https://www.ukdataservice.ac.uk/
https://datausa.io/
https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html
https://www.kaggle.com/xiuchengwang/python-dataset-download
https://www.quandl.com/
https://data.worldbank.org/
https://www.imf.org/en/Data
https://markets.ft.com/data/
https://trends.google.com/trends/?q=google&ctab=0&geo=all&date=all&sort=0
https://www.aeaweb.org/resources/data/us-macro-regional
https://xviewdataset.org/#dataset
https://labelme.csail.mit.edu/Release3.0/browserTools/php/dataset.php
https://image-net.org/
https://cocodataset.org/
https://visualgenome.org/
https://ai.googleblog.com/2016/09/introducing-open-images-dataset.html?m=1
https://vis-www.cs.umass.edu/lfw/
https://vision.stanford.edu/aditya86/ImageNetDogs/
https://web.mit.edu/torralba/www/indoor.html
https://www.cs.jhu.edu/~mdredze/datasets/sentiment/
https://ai.stanford.edu/~amaas/data/sentiment/
https://nlp.stanford.edu/sentiment/code.html
https://help.sentiment140.com/for-students/
https://www.kaggle.com/crowdflower/twitter-airline-sentiment
https://hotpotqa.github.io/
https://www.cs.cmu.edu/~./enron/
https://snap.stanford.edu/data/web-Amazon.html
https://aws.amazon.com/datasets/google-books-ngrams/
https://u.cs.biu.ac.il/~koppel/BlogCorpus.htm
https://code.google.com/archive/p/wiki-links/downloads
https://www.dt.fee.unicamp.br/~tiago/smsspamcollection/
https://www.yelp.com/dataset
https://t.iss.one/DataPortfolio/2
https://archive.ics.uci.edu/ml/datasets/Spambase
https://bdd-data.berkeley.edu/
https://apolloscape.auto/
https://archive.org/details/comma-dataset
https://www.cityscapes-dataset.com/
https://aplicaciones.cimat.mx/Personal/jbhayet/ccsad-dataset
https://www.vision.ee.ethz.ch/~timofter/traffic_signs/
https://cvrr.ucsd.edu/LISA/datasets.html
https://hci.iwr.uni-heidelberg.de/node/6132
https://www.lara.prd.fr/benchmarks/trafficlightsrecognition
https://computing.wpi.edu/dataset.html
https://mimic.physionet.org/
✅ Best Telegram channels to get free coding & data science resources
https://t.iss.one/addlist/4q2PYC0pH_VjZDk5
✅ Free Courses with Certificate:
https://t.iss.one/free4unow_backup
Top LLM Projects for every stage of Learning (1).pdf
13.9 MB
Top LLM Projects for Every Stage of Learning
ML Notes.pdf.pdf
6.2 MB
ML Notes
The Best LLMs Cheatsheet - Part 1.pdf
945.8 KB
The Best LLMs Cheatsheet - Part 1.pdf
DeepLearning Notes.pdf
19.1 MB
DeepLearning Notes
Matrix Theory and Linear Algebra, 2018.pdf
8.7 MB
Matrix Theory and Linear Algebra
Peter Selinger, 2018