Machine Learning
Machine learning insights, practical tutorials, and clear explanations for beginners and aspiring data scientists. Follow the channel for models, algorithms, coding guides, and real-world ML applications.

Admin: @HusseinSheikho
Logistic Regression Explained Simply

If you've just started learning Machine Learning, Logistic Regression is one of the most important and most misunderstood algorithms.

Here's everything you need to know 👇

1 ⇨ What is Logistic Regression?

It's a supervised ML algorithm used to predict probabilities and classify data into binary outcomes (like 0 or 1, Yes or No, Spam or Not Spam).

2 ⇨ How does it work?

It starts like Linear Regression, but instead of outputting continuous values, it passes the linear result through a sigmoid function that maps it to a value between 0 and 1 (sketched in code just below).

๐˜—๐˜ณ๐˜ฐ๐˜ฃ๐˜ข๐˜ฃ๐˜ช๐˜ญ๐˜ช๐˜ต๐˜บ = ๐Ÿ / (๐Ÿ + ๐žโป(๐ฐ๐ฑ + ๐›))

Here,
w = weights
x = inputs
b = bias
e = Euler's number (approx. 2.718)
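
Putting those pieces together, here is a minimal NumPy sketch of that formula; the weight, input, and bias values are invented purely for illustration:

import numpy as np

def sigmoid(z):
    # squashes any real number into the open interval (0, 1)
    return 1 / (1 + np.exp(-z))

w = np.array([0.8, -0.4])   # weights (toy values)
x = np.array([2.0, 1.5])    # inputs (toy values)
b = 0.1                     # bias (toy value)

z = np.dot(w, x) + b        # the linear part, exactly like Linear Regression
p = sigmoid(z)              # mapped to a probability between 0 and 1
print(round(z, 3), round(p, 3))   # 1.1  0.75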

3 ⇨ Why not Linear Regression?

Because Linear Regression predicts any number from −∞ to +∞, which doesn't make sense for a probability.
We need outputs between 0 and 1, and that's exactly where the sigmoid function helps.
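
To make that concrete: a raw linear score can land anywhere on the real line, but its sigmoid always lands strictly between 0 and 1. A tiny sketch with arbitrary scores:

import numpy as np

scores = np.array([-4.0, -1.0, 0.0, 2.5, 7.0])   # unbounded linear outputs
probs = 1 / (1 + np.exp(-scores))                # sigmoid keeps them in (0, 1)
print(probs.round(3))   # approx. [0.018 0.269 0.5 0.924 0.999]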

4 ⇨ Which loss function is used?

Binary Cross-Entropy

L = −(y log(p) + (1 − y) log(1 − p))
where y is the actual value (0 or 1) and p is the predicted probability.
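
Here is a small sketch of how that loss is computed over a batch of predictions; the labels and probabilities below are invented for illustration:

import numpy as np

def binary_cross_entropy(y, p):
    # average loss over the batch; clip p so log(0) never happens
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1, 0, 1, 0])           # actual labels
p = np.array([0.9, 0.2, 0.6, 0.4])   # predicted probabilities
print(round(binary_cross_entropy(y, p), 3))   # ~0.338

Confident, correct predictions (0.9 for a 1, 0.2 for a 0) contribute little loss; hesitant ones (0.6 and 0.4) pull the average up.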

5 ⇨ Applications in real life:

๐„๐ฆ๐š๐ข๐ฅ ๐’๐ฉ๐š๐ฆ ๐ƒ๐ž๐ญ๐ž๐œ๐ญ๐ข๐จ๐ง
๐ƒ๐ข๐ฌ๐ž๐š๐ฌ๐ž ๐๐ซ๐ž๐๐ข๐œ๐ญ๐ข๐จ๐ง
๐‚๐ฎ๐ฌ๐ญ๐จ๐ฆ๐ž๐ซ ๐‚๐ก๐ฎ๐ซ๐ง ๐๐ซ๐ž๐๐ข๐œ๐ญ๐ข๐จ๐ง
๐‚๐ฅ๐ข๐œ๐ค-๐“๐ก๐ซ๐จ๐ฎ๐ ๐ก ๐‘๐š๐ญ๐ž ๐๐ซ๐ž๐๐ข๐œ๐ญ๐ข๐จ๐ง
๐๐ข๐ง๐š๐ซ๐ฒ ๐ฌ๐ž๐ง๐ญ๐ข๐ฆ๐ž๐ง๐ญ ๐œ๐ฅ๐š๐ฌ๐ฌ๐ข๐Ÿ๐ข๐œ๐š๐ญ๐ข๐จ๐ง
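
For example, spam detection usually amounts to turning the text into numeric features and fitting Logistic Regression on top. A rough scikit-learn sketch; the tiny inline dataset is made up just so the snippet runs:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "meeting moved to 3pm",
         "free money click here", "lunch tomorrow?"]   # toy corpus
labels = [1, 0, 1, 0]                                  # 1 = spam, 0 = not spam

# bag-of-words features -> logistic regression classifier
spam_clf = make_pipeline(CountVectorizer(), LogisticRegression())
spam_clf.fit(texts, labels)

print(spam_clf.predict(["claim your free prize"]))        # likely [1]
print(spam_clf.predict_proba(["claim your free prize"]))  # [P(not spam), P(spam)]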

6 ⇨ Vs. Other Classifiers

It's fast, interpretable, and easy to implement, but unlike Decision Trees or SVMs it struggles with data that is not linearly separable.
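
You can see that trade-off on a dataset that isn't linearly separable, such as scikit-learn's two-moons toy data; in a quick sketch like the one below, the Decision Tree usually scores noticeably higher (exact numbers vary with the random seed):

from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# two interleaving half-circles: no straight line separates the classes
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (LogisticRegression(), DecisionTreeClassifier(random_state=0)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))
# Logistic Regression is limited to a linear decision boundary,
# so the tree typically wins on this shape of data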

7 ⇨ Can it handle multiple classes?

Yes, using One-vs-Rest (OvR) or Softmax in Multinomial Logistic Regression.
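
Both routes are available in scikit-learn: wrap the model in OneVsRestClassifier for explicit OvR, or (with recent scikit-learn versions) let LogisticRegression fit a multinomial softmax model directly on a multi-class target. A minimal sketch on the Iris dataset:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)   # 3 classes of iris flowers

# multinomial (softmax) logistic regression: one model, 3-way probabilities
softmax_clf = LogisticRegression(max_iter=1000).fit(X, y)

# One-vs-Rest: three separate binary logistic regressions, one per class
ovr_clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

print(softmax_clf.predict_proba(X[:1]).round(3))   # three probabilities summing to 1
print(ovr_clf.predict(X[:1]))                      # predicted class label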

8 ⇨ Example in Python

from sklearn.linear_model import LogisticRegression

model = LogisticRegression()    # binary classifier with the sigmoid under the hood
model.fit(X_train, y_train)     # learn the weights w and bias b from the training data
pred = model.predict(X_test)    # predicted class labels (0 or 1) for unseen samples
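
If you want the probabilities themselves rather than hard 0/1 labels (to rank examples or pick a custom decision threshold), predict_proba gives them; this continues from the same fitted model and X_test above:

probs = model.predict_proba(X_test)                  # shape (n_samples, 2): P(class 0), P(class 1)
positive_probs = probs[:, 1]                         # probability of the positive class
custom_pred = (positive_probs >= 0.3).astype(int)    # move the threshold if 0.5 isn't right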


#LogisticRegression #MachineLearning #MLAlgorithms #SupervisedLearning #BinaryClassification #SigmoidFunction #PythonML #ScikitLearn #MLForBeginners #DataScienceBasics #MLExplained #ClassificationModels #AIApplications #PredictiveModeling #MLRoadmap

✉️ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk

📱 Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A