Forwarded from Health Fitness & Diet Tips - Gym Motivation 💪
6 rules for daily happiness:
1. Start each day with gratitude; write it down.
2. Let go of grudges; they weigh you down.
3. Pursue what excites you, not what's safe.
4. Spend time with people who lift you up.
5. Do something kind for others, daily.
6. Find joy in the little things; they add up.
Forwarded from Data Analysis Books | Python | SQL | Excel | Artificial Intelligence | Power BI | Tableau | AI Resources
Knowing the tools won't be enough to become a master of data analytics!
See if your soft skills are worthy of the rank of master:
1. Communication: Can you translate your findings into easily digestible insights for non-technical stakeholders?
2. Problem-Solving: Is your work focused on solving actual business problems, and are you able to pick the most efficient approach to solve them?
3. Stakeholder Management: Are you building strong relationships with your stakeholders, understanding their needs, and providing them with regular updates?
4. Continuous Learning: The data landscape is constantly changing. Are you keeping up with new tools and trends?
5. Product/Project Management: Are you aware of the life cycle of your data products? Do you have a structured approach to plan, prioritize, and track your work?
6. Business Acumen: Can you understand the language and needs of the business and put your data work into context?
7. Domain Knowledge: Do you know the processes, products, and challenges of your domain?
If you want to earn the rank of master in the data field, start working on your soft skills now.
What are your thoughts on the role of soft skills in the data space?
Three different learning styles in machine learning algorithms:
1. Supervised Learning
Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a point in time.
A model is prepared through a training process in which it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.
Example problems are classification and regression.
Example algorithms include logistic regression and backpropagation neural networks.
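A minimal sketch of that train-predict-correct loop in Python, assuming scikit-learn is available (the synthetic dataset is purely illustrative):

```python
# Supervised learning: fit on labeled data, score on held-out labeled data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)           # training: predictions get corrected
print(model.score(X_test, y_test))    # accuracy on unseen labeled examples
```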
2. Unsupervised Learning
Input data is not labeled and does not have a known result.
A model is prepared by deducing structures present in the input data. This may be to extract general rules. It may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity.
Example problems are clustering, dimensionality reduction and association rule learning.
Example algorithms include the Apriori algorithm and k-means.
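A matching unsupervised sketch with k-means (the labels are discarded; the model deduces the grouping from the data's structure alone):

```python
# Unsupervised learning: no labels, organize points by similarity.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)  # labels discarded
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])    # cluster assignments inferred from structure
```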
3. Semi-Supervised Learning
Input data is a mixture of labeled and unlabeled examples.
There is a desired prediction problem, but the model must learn the structures that organize the data as well as make predictions.
Example problems are classification and regression.
Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabeled data.
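As one concrete example of such an extension (a sketch of one common approach, not the only one), scikit-learn's self-training wrapper pseudo-labels the unlabeled points, which are marked with -1:

```python
# Semi-supervised learning: a few labels, many unlabeled examples (-1).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()
y_partial[50:] = -1    # keep true labels for only the first 50 examples

clf = SelfTrainingClassifier(LogisticRegression())
clf.fit(X, y_partial)     # learns from the few labels plus data structure
print(clf.score(X, y))    # evaluate against the full set of true labels
```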
Forwarded from Programming Resources | Python | Javascript | Artificial Intelligence Updates | Computer Science Courses | AI Books
Are you part of the Rat Race?
A student with a 3.8 CGPA is unhappy because another student got a 4.0.
The student with the 4.0 CGPA is unhappy because they are not placed in a core company.
The student placed in a core company is unhappy because a colleague earns a higher salary.
The person with the highest salary in the company is unhappy because they have no time at all to enjoy life with friends and family.
This is what happens when you get trapped in the infinite rat race: you are never happy, and you never appreciate or feel grateful for the life you have.
Come out of the Rat Race.
Art by: Steve Cutts
Some useful PYTHON libraries for data science
NumPy stands for Numerical Python. Its most powerful feature is the n-dimensional array. The library also contains basic linear algebra functions, Fourier transforms, advanced random number capabilities, and tools for integration with low-level languages like Fortran, C, and C++.
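A quick taste of those features (a sketch; the values are arbitrary):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)                     # n-dimensional array
inv = np.linalg.inv(np.array([[2.0, 1.0],
                              [1.0, 3.0]]))        # basic linear algebra
spec = np.fft.fft(np.sin(np.linspace(0, 2 * np.pi, 8)))  # Fourier transform
noise = np.random.default_rng(0).normal(size=3)    # random number generation
print(a, inv, spec, noise, sep="\n")
```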
SciPy stands for Scientific Python. SciPy is built on NumPy and is one of the most useful libraries for a variety of high-level science and engineering modules, such as the discrete Fourier transform, linear algebra, optimization, and sparse matrices.
Matplotlib for plotting a vast variety of graphs, from histograms to line plots to heat maps. You can use the pylab feature in the IPython notebook (ipython notebook --pylab=inline) to use these plotting features inline. If you omit the inline option, pylab turns the IPython environment into one very similar to MATLAB. You can also use LaTeX commands to add math to your plots.
Pandas for structured data operations and manipulation. It is extensively used for data munging and preparation. Pandas was added relatively recently to the Python ecosystem and has been instrumental in boosting Python's usage in the data science community.
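A tiny munging sketch (the column names are made up):

```python
import pandas as pd

df = pd.DataFrame({"city": ["A", "A", "B", "B"],
                   "sales": [10, None, 5, 7]})
df["sales"] = df["sales"].fillna(0)         # preparation: fill missing values
print(df.groupby("city")["sales"].sum())    # manipulation: group and aggregate
```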
Scikit-learn for machine learning. Built on NumPy, SciPy, and matplotlib, this library contains many efficient tools for machine learning and statistical modeling, including classification, regression, clustering, and dimensionality reduction.
Statsmodels for statistical modeling. Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests. An extensive list of descriptive statistics, statistical tests, plotting functions, and result statistics are available for different types of data and each estimator.
Seaborn for statistical data visualization. Seaborn is a library for making attractive and informative statistical graphics in Python. It is based on matplotlib. Seaborn aims to make visualization a central part of exploring and understanding data.
Bokeh for creating interactive plots, dashboards and data applications on modern web-browsers. It empowers the user to generate elegant and concise graphics in the style of D3.js. Moreover, it has the capability of high-performance interactivity over very large or streaming datasets.
Blaze for extending the capability of Numpy and Pandas to distributed and streaming datasets. It can be used to access data from a multitude of sources including Bcolz, MongoDB, SQLAlchemy, Apache Spark, PyTables, etc. Together with Bokeh, Blaze can act as a very powerful tool for creating effective visualizations and dashboards on huge chunks of data.
Scrapy for web crawling. It is a very useful framework for getting specific patterns of data. It has the capability to start at a website home url and then dig through web-pages within the website to gather information.
SymPy for symbolic computation. It has wide-ranging capabilities from basic symbolic arithmetic to calculus, algebra, discrete mathematics and quantum physics. Another useful feature is the capability of formatting the result of the computations as LaTeX code.
Requests for accessing the web. It works similarly to the standard Python library urllib2 but is much easier to code. You will find subtle differences from urllib2, but for beginners, Requests is more convenient.
Additional libraries, you might need:
os for Operating system and file operations
networkx and igraph for graph based data manipulations
regular expressions for finding patterns in text data
BeautifulSoup for scraping the web. It is inferior to Scrapy in that it extracts information from just a single webpage per run.
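To illustrate the last two entries together, a sketch that fetches one page with Requests and parses it with BeautifulSoup (https://example.com is just a placeholder page):

```python
import requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4

resp = requests.get("https://example.com", timeout=10)  # fetch a single page
soup = BeautifulSoup(resp.text, "html.parser")          # parse its HTML
print(soup.title.string)                                # the page title
print([a.get("href") for a in soup.find_all("a")])      # links on the page
```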
Here are 5 key Python libraries/concepts that are particularly important for data analysts:
1. Pandas: Pandas is a powerful library for data manipulation and analysis in Python. It provides data structures like DataFrames and Series that make it easy to work with structured data. Pandas offers functions for reading and writing data, cleaning and transforming data, and performing data analysis tasks like filtering, grouping, and aggregating.
2. NumPy: NumPy is a fundamental package for scientific computing in Python. It provides support for large, multi-dimensional arrays and matrices, along with a collection of mathematical functions to operate on these arrays efficiently. NumPy is often used in conjunction with Pandas for numerical computations and data manipulation.
3. Matplotlib and Seaborn: Matplotlib is a popular plotting library in Python that allows you to create a wide variety of static, interactive, and animated visualizations. Seaborn is built on top of Matplotlib and provides a higher-level interface for creating attractive and informative statistical graphics. These libraries are essential for data visualization in data analysis projects.
4. Scikit-learn: Scikit-learn is a machine learning library in Python that provides simple and efficient tools for data mining and data analysis tasks. It includes a wide range of algorithms for classification, regression, clustering, dimensionality reduction, and more. Scikit-learn also offers tools for model evaluation, hyperparameter tuning, and model selection.
5. Data Cleaning and Preprocessing: Data cleaning and preprocessing are crucial steps in any data analysis project. Python offers libraries like Pandas and NumPy for handling missing values, removing duplicates, standardizing data types, scaling numerical features, encoding categorical variables, and more. Understanding how to clean and preprocess data effectively is essential for accurate analysis and modeling.
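A compact sketch of the cleaning and preprocessing steps from point 5, on a made-up DataFrame (the column names and values are illustrative only):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({"age": [25, None, 31, 25],
                   "plan": ["basic", "pro", "pro", "basic"]})

df = df.drop_duplicates()                                  # remove duplicates
df["age"] = df["age"].fillna(df["age"].median())           # handle missing values
df = pd.get_dummies(df, columns=["plan"])                  # encode categoricals
df[["age"]] = StandardScaler().fit_transform(df[["age"]])  # scale numerics
print(df)
```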
By mastering these Python concepts and libraries, data analysts can efficiently manipulate and analyze data, create insightful visualizations, apply machine learning techniques, and derive valuable insights from their datasets.
Credits: https://t.iss.one/free4unow_backup
ENJOY LEARNING 👍👍
7 Rules of Life:
- Let it go
- Ignore them
- Give it time
- Don't compare
- Stay calm
- It's on you
- Always smile
Forwarded from Jobs | Internships | Placement | Interviews
I am not sure if you are aware, but there are many scammers on Telegram who may ask you to pay them 200 rs with a promise of 1250 back after some time. Never reply to those fraudsters. Never pay anyone on Telegram money on the promise of doubling it or anything of the sort.
Be smart, stay safe ❤️
9 hacks to boost your productivity:
1) Plan your day. Write everything down on paper.
2) Follow the 80/20 rule. 20% of your work will bring you 80% of the results.
3) Stop multitasking. Switching tasks significantly reduces your productivity.
This post is for beginners who have decided to learn data science. I want to tell you that becoming a data scientist is a journey (at least 6 months to a year), not a one-month thing where you do some courses and you are a data scientist. There are different fields in data science; you first have to get familiar and strong in the basics, and do hands-on work to build the abilities required to function in a full-time job. Then you can delve further into advanced implementations.
There are plenty of roadmaps and online content, both paid and free, that you can follow. In a nutshell, a few essential things that will be necessary, in no particular order, to at least get your data science journey started are below:
Basic statistics, linear algebra, calculus, probability
Programming language (R or Python) - preferably Python if you want the option to move into a developer role later instead of sticking to data science.
Machine learning - all of the above will be used here to implement machine learning concepts.
Data visualisation - this could be simple Excel, R/Python libraries, or tools like Tableau and Power BI.
This can be overwhelming, but again, it's just an indication of what lies ahead. The most important thing is to just START instead of merely contemplating the best way to go about it, since a lot of these things can be learnt independently and in no particular order.
You can use the below Sources to prepare your own roadmap:
@free4unow_backup - some free courses from here
@datasciencefun - data science and machines learning resources
Data Science - https://365datascience.pxf.io/q4m66g
Python - https://bit.ly/45rlWZE
Kaggle - https://www.kaggle.com/learn
In a data science project, using multiple scalers can be beneficial when dealing with features that have different scales or distributions. Scaling is important in machine learning to ensure that all features contribute equally to the model training process and to prevent certain features from dominating others.
Here are some scenarios where using multiple scalers can be helpful in a data science project:
1. Standardization vs. Normalization: Standardization (scaling features to have a mean of 0 and a standard deviation of 1) and normalization (scaling features to a range between 0 and 1) are two common scaling techniques. Depending on the distribution of your data, you may choose to apply different scalers to different features.
2. RobustScaler vs. MinMaxScaler: RobustScaler is a good choice when dealing with outliers, as it scales the data based on percentiles rather than the mean and standard deviation. MinMaxScaler, on the other hand, scales the data to a specific range. Using both scalers can be beneficial when dealing with mixed types of data.
3. Feature engineering: In feature engineering, you may create new features that have different scales than the original features. In such cases, applying different scalers to different sets of features can help maintain consistency in the scaling process.
4. Pipeline flexibility: By using multiple scalers within a preprocessing pipeline, you can experiment with different scaling techniques and easily switch between them to see which one works best for your data.
5. Domain-specific considerations: Certain domains may require specific scaling techniques based on the nature of the data. For example, in image processing tasks, pixel values are often scaled differently than numerical features.
When using multiple scalers in a data science project, it's important to evaluate the impact of scaling on model performance through cross-validation or other evaluation methods. Experiment with different scaling techniques until you find the optimal approach for your specific dataset and machine learning model.
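A minimal sketch of points 1-4 with scikit-learn's ColumnTransformer, routing hypothetical columns to different scalers inside one preprocessing pipeline:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, RobustScaler

# Hypothetical data: "income" contains an outlier, "score" is well behaved.
df = pd.DataFrame({"income": [30_000, 42_000, 55_000, 1_000_000],
                   "score": [0.2, 0.5, 0.9, 0.7]})

pre = ColumnTransformer([
    ("robust", RobustScaler(), ["income"]),  # percentile-based, outlier-safe
    ("minmax", MinMaxScaler(), ["score"]),   # maps values into [0, 1]
])
print(pre.fit_transform(df))  # swap scalers here to compare via cross-validation
```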
Learn SQL easily with these 5 simple steps 👇👇
https://datasimplifier.com/how-long-does-it-take-to-learn-sql/
Harvard CS50 โ Free Computer Science Course (2023 Edition)
Here are the lectures included in this course:
Lecture 0 - Scratch
Lecture 1 - C
Lecture 2 - Arrays
Lecture 3 - Algorithms
Lecture 4 - Memory
Lecture 5 - Data Structures
Lecture 6 - Python
Lecture 7 - SQL
Lecture 8 - HTML, CSS, JavaScript
Lecture 9 - Flask
Lecture 10 - Emoji
Cybersecurity
https://www.freecodecamp.org/news/harvard-university-cs50-computer-science-course-2023/
Kaggle community for data science project discussion: @Kaggle_Group
4 websites to practice SQL
1. Dataford - https://www.dataford.io
2. Interview Query - https://www.interviewquery.com/questions
3. LeetCode - https://leetcode.com/
4. HackerRank - https://www.hackerrank.com/
#datascience
Things Introverts Hate Most:
- phone calls
- meaningless conversations
- unplanned visits
- noisy neighbours
- crowded places
- guests who stay after 8.30 pm
- last minute change of plans
- lack of common sense
Agreed?
Practice projects to consider:
1. Implement a basic search engine: Read a set of documents and build an index of keywords. Then, implement a search function that returns a list of documents that match the query.
2. Build a recommendation system: Read a set of user-item interactions and build a recommendation system that suggests items to users based on their past behavior.
3. Create a data analysis tool: Read a large dataset and implement a tool that performs various analyses, such as calculating summary statistics, visualizing distributions, and identifying patterns and correlations.
4. Implement a graph algorithm: Study a graph algorithm such as Dijkstra's shortest path algorithm, and implement it in Python. Then, test it on real-world graphs to see how it performs (see the sketch below).
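As a starting point for project 4, a minimal Dijkstra sketch over an adjacency-list graph (the toy graph is made up; swap in a real one to measure performance):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph = {node: [(neighbor, weight)]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                           # skip stale heap entries
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                   # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))                        # {'a': 0, 'b': 2, 'c': 3}
```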
Some tips to sharpen your analytical thinking:
1. Use the 80/20 Rule: Identify the 20% of activities that lead to 80% of your results.
2. Master learning with the Feynman Technique: Teach others, identify gaps, & simplify.
3. "You must not fool yourself; you are the easiest person to fool." -Richard Feynman