Here are some essential data science concepts from A to Z:
A - Algorithm: A set of rules or instructions used to solve a problem or perform a task in data science.
B - Big Data: Large and complex datasets that cannot be easily processed using traditional data processing applications.
C - Clustering: A technique used to group similar data points together based on certain characteristics.
D - Data Cleaning: The process of identifying and correcting errors or inconsistencies in a dataset.
E - Exploratory Data Analysis (EDA): The process of analyzing and visualizing data to understand its underlying patterns and relationships.
F - Feature Engineering: The process of creating new features or variables from existing data to improve model performance.
G - Gradient Descent: An optimization algorithm used to minimize the error of a model by adjusting its parameters.
H - Hypothesis Testing: A statistical technique used to test the validity of a hypothesis or claim based on sample data.
I - Imputation: The process of filling in missing values in a dataset using statistical methods.
J - Joint Probability: The probability of two or more events occurring together.
K - K-Means Clustering: A popular clustering algorithm that partitions data into K clusters based on similarity.
L - Linear Regression: A statistical method used to model the relationship between a dependent variable and one or more independent variables.
M - Machine Learning: A subset of artificial intelligence that uses algorithms to learn patterns and make predictions from data.
N - Normal Distribution: A symmetrical bell-shaped distribution that is commonly used in statistical analysis.
O - Outlier Detection: The process of identifying data points that differ significantly from the rest of the dataset so they can be investigated, corrected, or removed.
P - Precision and Recall: Evaluation metrics used to assess the performance of classification models.
Q - Quantitative Analysis: The process of analyzing numerical data to draw conclusions and make decisions.
R - Random Forest: An ensemble learning algorithm that builds multiple decision trees to improve prediction accuracy.
S - Support Vector Machine (SVM): A supervised learning algorithm used for classification and regression tasks.
T - Time Series Analysis: A statistical technique used to analyze and forecast time-dependent data.
U - Unsupervised Learning: A type of machine learning where the model learns patterns and relationships in data without labeled outputs.
V - Validation Set: A subset of data used to evaluate the performance of a model during training.
W - Web Scraping: The process of extracting data from websites for analysis and visualization.
X - XGBoost: An optimized gradient boosting algorithm that is widely used in machine learning competitions.
Y - Yield Curve Analysis: The study of the relationship between interest rates and the maturity of fixed-income securities.
Z - Z-Score: A standardized score that represents the number of standard deviations a data point is from the mean.
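To make the last entry concrete, here is a minimal NumPy sketch of computing z-scores for a small, made-up sample and using them to flag potential outliers (the values and the |z| > 2 cutoff are illustrative assumptions only):

```python
import numpy as np

# Hypothetical sample of exam scores (illustrative values only)
scores = np.array([62, 70, 71, 75, 80, 84, 90, 98])

mean = scores.mean()
std = scores.std(ddof=0)          # population standard deviation

# Z-score: how many standard deviations each point lies from the mean
z_scores = (scores - mean) / std
print(z_scores)

# A common rule of thumb flags |z| > 2 (or 3) as a potential outlier
print(scores[np.abs(z_scores) > 2])
```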
Credits: https://t.iss.one/free4unow_backup
Like if you need similar content.
WhatsApp is no longer a platform just for chat.
It's an educational goldmine.
If you aren't following any yet, you're sleeping on a goldmine of knowledge and community. WhatsApp channels are a great way to practice data science, build your own community, and find accountability partners.
I have curated a list of the best WhatsApp channels to learn coding & data science for FREE:
Free Courses with Certificate
👇👇
https://whatsapp.com/channel/0029Vamhzk5JENy1Zg9KmO2g
Jobs & Internship Opportunities
👇👇
https://whatsapp.com/channel/0029VaI5CV93AzNUiZ5Tt226
Python Free Books & Projects
👇👇
https://whatsapp.com/channel/0029VaiM08SDuMRaGKd9Wv0L
Java Free Resources
👇👇
https://whatsapp.com/channel/0029VamdH5mHAdNMHMSBwg1s
Coding Interviews
👇👇
https://whatsapp.com/channel/0029VammZijATRSlLxywEC3X
SQL For Data Analysis
👇👇
https://whatsapp.com/channel/0029VanC5rODzgT6TiTGoa1v
Power BI Resources
👇👇
https://whatsapp.com/channel/0029Vai1xKf1dAvuk6s1v22c
Programming Free Resources
👇👇
https://whatsapp.com/channel/0029VahiFZQ4o7qN54LTzB17
Data Science Projects
👇👇
https://whatsapp.com/channel/0029Va4QUHa6rsQjhITHK82y
Learn Data Science & Machine Learning
👇👇
https://whatsapp.com/channel/0029Va8v3eo1NCrQfGMseL2D
Coding Projects
👇👇
https://whatsapp.com/channel/0029VamhFMt7j6fx4bYsX908
Excel for Data Analyst
👇👇
https://whatsapp.com/channel/0029VaifY548qIzv0u1AHz3i
ENJOY LEARNING!
Core data science concepts you should know:
1. Statistics & Probability
Descriptive statistics: Mean, median, mode, standard deviation, variance
Inferential statistics: Hypothesis testing, confidence intervals, p-values, t-tests, ANOVA
Probability distributions: Normal, Binomial, Poisson, Uniform
Bayes' Theorem
Central Limit Theorem
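To make a couple of these ideas concrete, here is a hedged SciPy/NumPy sketch of a one-sample t-test and a quick Central Limit Theorem simulation; the simulated data, seed, and 0.05 threshold are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated sample that we pretend was measured in an experiment
sample = rng.normal(loc=101.5, scale=10, size=50)

# One-sample t-test: is the true mean different from 100?
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # reject H0 if p < chosen alpha (e.g. 0.05)

# Central Limit Theorem: means of many samples from a skewed
# (exponential) distribution are approximately normally distributed
sample_means = rng.exponential(scale=2, size=(10_000, 30)).mean(axis=1)
print(f"mean of sample means = {sample_means.mean():.3f}, std = {sample_means.std():.3f}")
```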
2. Data Wrangling & Cleaning
Handling missing values
Outlier detection and treatment
Data transformation (scaling, encoding, normalization)
Feature engineering
Dealing with imbalanced data
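A minimal pandas/scikit-learn sketch of a few of these steps (median imputation, one-hot encoding, and standard scaling) on a toy DataFrame invented purely for illustration:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Toy dataset with a missing value and a categorical column
df = pd.DataFrame({
    "age":    [25, 32, None, 41, 29],
    "city":   ["Delhi", "Mumbai", "Delhi", "Pune", "Mumbai"],
    "income": [30_000, 52_000, 45_000, 80_000, 39_000],
})

# 1. Handle missing values: impute age with the median
df["age"] = df["age"].fillna(df["age"].median())

# 2. Encode the categorical column as one-hot indicator variables
df = pd.get_dummies(df, columns=["city"])

# 3. Scale numeric features to zero mean and unit variance
df[["age", "income"]] = StandardScaler().fit_transform(df[["age", "income"]])

print(df.head())
```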
3. Exploratory Data Analysis (EDA)
Univariate, bivariate, and multivariate analysis
Correlation and covariance
Data visualization tools: Matplotlib, Seaborn, Plotly
Insights generation through visual storytelling
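A short EDA sketch with pandas, Seaborn, and Matplotlib; it assumes Seaborn's bundled "tips" example dataset is available (load_dataset fetches it or uses a cached copy):

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Seaborn ships a small example dataset called "tips"
tips = sns.load_dataset("tips")

# Univariate: summary statistics for the numeric columns
print(tips.describe())

# Bivariate: correlations between the numeric columns
print(tips.select_dtypes("number").corr())

# Visual: scatter plot of total bill vs. tip, split by time of day
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="time")
plt.title("Tip vs. total bill")
plt.show()
```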
4. Machine Learning Fundamentals
Supervised Learning: Linear regression, logistic regression, decision trees, SVM, k-NN
Unsupervised Learning: K-means, hierarchical clustering, PCA
Model evaluation: Accuracy, precision, recall, F1-score, ROC-AUC
Cross-validation and overfitting/underfitting
Bias-variance tradeoff
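A hedged scikit-learn sketch that ties several of these points together (logistic regression, a stratified train/test split, 5-fold cross-validation, and precision/recall/F1 reporting) on the bundled breast-cancer dataset; the hyperparameters are illustrative, not tuned:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=5000)

# 5-fold cross-validation on the training set to estimate generalization
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"CV accuracy: {cv_scores.mean():.3f}")

# Fit on the full training set, then report precision/recall/F1 on held-out data
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```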
5. Deep Learning (Basics)
Neural networks: Perceptron, MLP
Activation functions (ReLU, Sigmoid, Tanh)
Backpropagation
Gradient descent and learning rate
CNNs and RNNs (intro level)
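A tiny NumPy sketch, purely to make the vocabulary concrete: the forward pass of a one-hidden-layer MLP with ReLU and sigmoid activations, followed by a single gradient-descent step on a toy scalar function. Shapes, weights, and the learning rate are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# One-hidden-layer MLP: 3 inputs -> 4 hidden units -> 1 output
x = rng.normal(size=(1, 3))                  # a single input example
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

h = relu(x @ W1 + b1)                        # hidden layer with ReLU activation
y_hat = sigmoid(h @ W2 + b2)                 # output layer with sigmoid activation
print("prediction:", y_hat)

# One gradient-descent step on a toy function f(w) = (w - 3)^2
w, lr = 0.0, 0.1                             # learning rate is an arbitrary choice
grad = 2 * (w - 3)                           # derivative of f at w
w = w - lr * grad                            # move against the gradient
print("updated w:", w)
```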
6. Data Structures & Algorithms (DSA)
Arrays, lists, dictionaries, sets
Sorting and searching algorithms
Time and space complexity (Big-O notation)
Common problems: string manipulation, matrix operations, recursion
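A small sketch contrasting linear search (O(n)) with binary search on a sorted list (O(log n)), using Python's standard bisect module:

```python
from bisect import bisect_left

def linear_search(items, target):
    """O(n): check every element in turn."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): repeatedly halve the search interval (list must be sorted)."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = sorted([17, 3, 44, 8, 23, 91, 5])
print(linear_search(data, 23), binary_search(data, 23))   # same index, different cost
```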
7. SQL & Databases
SELECT, WHERE, GROUP BY, HAVING
JOINS (inner, left, right, full)
Subqueries and CTEs
Window functions
Indexing and normalization
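A self-contained sketch of a LEFT JOIN plus GROUP BY, run through Python's built-in sqlite3 module so no external database is needed; the tables and rows are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")    # throwaway in-memory database
cur = conn.cursor()

cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);

    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi'), (3, 'Meena');
    INSERT INTO orders VALUES (1, 1, 250.0), (2, 1, 120.0), (3, 2, 600.0);
""")

-- = None  # (ignore) --

# LEFT JOIN keeps customers with no orders; GROUP BY aggregates per customer
cur.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, COALESCE(SUM(o.amount), 0) AS total
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC;
""")
for row in cur.fetchall():
    print(row)
conn.close()
```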
8. Tools & Libraries
Python: pandas, NumPy, scikit-learn, TensorFlow, PyTorch
R: dplyr, ggplot2, caret
Jupyter Notebooks for experimentation
Git and GitHub for version control
9. A/B Testing & Experimentation
Control vs. treatment group
Hypothesis formulation
Significance level, p-value interpretation
Power analysis
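A hedged sketch of a two-sided, two-proportion z-test for an A/B experiment, computed by hand with NumPy/SciPy; the conversion counts are made-up numbers, and the significance level should be chosen before running the test:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical results: conversions out of visitors for control (A) and treatment (B)
conv_a, n_a = 120, 2400
conv_b, n_b = 150, 2380

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0: p_a == p_b

se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))                     # two-sided p-value

print(f"lift = {p_b - p_a:.4f}, z = {z:.2f}, p = {p_value:.4f}")
# Compare p_value with the significance level chosen before the test (e.g. 0.05)
```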
10. Business Acumen & Storytelling
Translating data insights into business value
Crafting narratives with data
Building dashboards (Power BI, Tableau)
Knowing KPIs and business metrics
React ❤️ for more
Learn Data Science for FREE (No Strings Attached)
No fancy courses, no conditions, just pure learning.
Here's how to become a Data Scientist for FREE:
1️⃣ Python Programming for Data Science – Harvard's CS50P
The best intro to Python for absolute beginners:
- Covers loops, data structures, and practical exercises.
- Designed to help you build foundational coding skills.
Link: https://cs50.harvard.edu/python/
https://t.iss.one/datasciencefun
2️⃣ Statistics & Probability – Khan Academy
Want to master probability, distributions, and hypothesis testing? This is where to start:
- Clear, beginner-friendly videos.
- Exercises to test your skills.
Link: https://www.khanacademy.org/math/statistics-probability
https://whatsapp.com/channel/0029Vat3Dc4KAwEcfFbNnZ3O
3️⃣ Linear Algebra for Data Science – 3Blue1Brown
- Learn about matrices, vectors, and transformations.
- Essential for machine learning models.
Link: https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9KzVk3AjplI5PYPxkUr
4️⃣ SQL Basics – Mode Analytics
SQL is the backbone of data manipulation. This tutorial covers:
- Writing queries, joins, and filtering data.
- Real-world datasets to practice.
Link: https://mode.com/sql-tutorial
https://whatsapp.com/channel/0029VanC5rODzgT6TiTGoa1v
5️⃣ Data Visualization – freeCodeCamp
Learn to create stunning visualizations using Python libraries:
- Covers Matplotlib, Seaborn, and Plotly.
- Step-by-step projects included.
Link: https://www.youtube.com/watch?v=JLzTJhC2DZg
https://whatsapp.com/channel/0029VaxaFzoEQIaujB31SO34
6️⃣ Machine Learning Basics – Google's Machine Learning Crash Course
An in-depth introduction to machine learning for beginners:
- Learn supervised and unsupervised learning.
- Hands-on coding with TensorFlow.
Link: https://developers.google.com/machine-learning/crash-course
7️⃣ Deep Learning – Fast.ai's Free Course
Fast.ai makes deep learning easy and accessible:
- Build neural networks with PyTorch.
- Learn by coding real projects.
Link: https://course.fast.ai/
8️⃣ Data Science Projects – Kaggle
- Compete in challenges to practice your skills.
- Great way to build your portfolio.
Link: https://www.kaggle.com/
Some important questions to crack a data science interview
Q. Describe how Gradient Boosting works.
A. Gradient boosting is a boosting technique in machine learning. It relies on the intuition that the best possible next model, when combined with the previous models, minimizes the overall prediction error: each new model is fit to the errors (the gradient of the loss) of the current ensemble, so if a small change in the prediction for a case causes no change in the error, the next target outcome for that case is zero. Gradient boosting produces a prediction model in the form of an ensemble of weak learners, typically shallow decision trees.
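To attach some code to the answer above, here is a short scikit-learn sketch of gradient boosting on a synthetic dataset; the dataset and hyperparameters are illustrative assumptions, not tuned values:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification problem, purely for demonstration
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each new shallow tree is fit to the errors of the current ensemble
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```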
Q. Describe the decision tree model.
A. Decision trees are a supervised machine learning method in which the data is repeatedly split according to conditions on the input features; the internal nodes hold those conditions and the leaves are the decisions or final outcomes. In other words, a decision tree is an algorithm that partitions the data into successively smaller subsets.
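A minimal scikit-learn sketch of a decision tree on the Iris dataset, printing the learned splits; the depth limit is an arbitrary choice to keep the printed tree small:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Each internal node is a condition on a feature, each leaf a predicted class
print(export_text(tree, feature_names=list(iris.feature_names)))
```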
Q. What is a neural network?
A. Neural networks are a set of algorithms, loosely modeled on the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. Also known as artificial neural networks, they are the building blocks of deep learning.
Q. Explain the Bias-Variance Tradeoff
A. The bias–variance tradeoff is the property of a model whereby the variance of the estimates across samples can be reduced by increasing the bias of the estimator. A model that is too simple has high bias and underfits; a model that is too flexible has high variance and overfits; good generalization requires balancing the two.
Q. Whatโs the difference between L1 and L2 regularization?
A. L1 regularization (Lasso) penalizes the sum of the absolute values of the coefficients, which tends to drive some coefficients exactly to zero and therefore performs implicit feature selection. L2 regularization (Ridge) penalizes the sum of the squared coefficients, which shrinks all coefficients smoothly toward zero without eliminating them. The related intuition is that minimizing an absolute (L1) error estimates the median of the data, while minimizing a squared (L2) error estimates the mean; both penalties help avoid overfitting.
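A quick sketch contrasting Lasso (L1) and Ridge (L2) on the same synthetic data: the L1 penalty typically drives some coefficients exactly to zero, while the L2 penalty only shrinks them. The dataset and alpha value are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty

# Count how many coefficients each penalty sets exactly to zero
print("L1 (Lasso) zero coefficients:", np.sum(lasso.coef_ == 0))
print("L2 (Ridge) zero coefficients:", np.sum(ridge.coef_ == 0))
```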
ENJOY LEARNING!
Some important questions to crack a data science interview (Part 2)
Q1. p-value?
Ans. The p-value is a measure of the probability that an observed difference could have occurred just by random chance. The lower the p-value, the greater the statistical significance of the observed difference. P-values can be used as an alternative to, or in addition to, pre-selected confidence levels in hypothesis testing.
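To illustrate "probability under random chance" directly, here is a small permutation-test sketch with NumPy; the two groups are simulated, so the exact p-value will vary with the seed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated metric for two groups (treatment has a slightly higher mean)
group_a = rng.normal(10.0, 2.0, size=100)
group_b = rng.normal(10.6, 2.0, size=100)
observed = group_b.mean() - group_a.mean()

# Shuffle the group labels many times and count how often chance alone
# produces a difference at least as large as the observed one
pooled = np.concatenate([group_a, group_b])
count = 0
for _ in range(10_000):
    rng.shuffle(pooled)
    diff = pooled[100:].mean() - pooled[:100].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / 10_000
print(f"observed difference = {observed:.3f}, p-value ~ {p_value:.4f}")
```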
Q2. Interpolation and extrapolation?
Ans. Interpolation is the process of estimating an unknown value that lies within the range of the known data points, whereas extrapolation is the process of estimating unknown values beyond the range of the given data points.
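A tiny NumPy sketch of the difference: np.interp estimates values inside the range of the known points, while extrapolation beyond that range needs a fitted model (here a simple straight-line fit). The data points are invented:

```python
import numpy as np

x_known = np.array([0.0, 1.0, 2.0, 3.0])
y_known = np.array([0.0, 2.0, 4.1, 5.9])     # roughly y = 2x

# Interpolation: estimate y at a point inside the known x range
print(np.interp(1.5, x_known, y_known))       # lies between 2.0 and 4.1

# Extrapolation: fit a line and evaluate it beyond the known x range
coeffs = np.polyfit(x_known, y_known, deg=1)
print(np.polyval(coeffs, 5.0))                # outside the observed x values
```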
Q3. Uniform distribution vs. normal distribution?
Ans. The normal distribution is bell-shaped, which means values near the center of the distribution are more likely to occur than values in the tails. The uniform distribution is rectangular-shaped, which means every value in its range is equally likely to occur.
Q4. Recommender systems?
Ans. A recommender system models the likes and dislikes of users. Its main objective is to recommend items that a particular user has a high chance of liking or needing, based on their previous behaviour such as purchases, ratings, or clicks. It acts like a personalized assistant that understands a user's preferences and helps them choose an item, by making use of the large amounts of interaction data generated every day.
Q5. JOIN clause in SQL?
Ans. The SQL JOIN clause is used to combine rows from two or more tables in a database, based on a related column between them.
Q6. Squared error and absolute error?
Ans. Mean squared error (MSE) and mean absolute error (MAE) are both used to evaluate a regression model's accuracy. The squared error is differentiable everywhere, while the absolute error is not (its derivative is undefined at 0), which makes the squared error more amenable to the techniques of mathematical optimization. The absolute error, on the other hand, is less sensitive to outliers.
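A short sketch with scikit-learn's metrics showing how a single badly wrong prediction (an outlier in the errors) inflates MSE far more than MAE; the numbers are made up:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = [10, 12, 11, 13, 12]
y_pred = [11, 12, 10, 13, 12]          # small errors everywhere
y_pred_outlier = [11, 12, 10, 13, 30]  # one prediction is badly wrong

for name, pred in [("clean", y_pred), ("with outlier", y_pred_outlier)]:
    mae = mean_absolute_error(y_true, pred)
    mse = mean_squared_error(y_true, pred)
    print(f"{name:12s}  MAE = {mae:.2f}  MSE = {mse:.2f}")
```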
ENJOY LEARNING!