ENTER VIP FOR FREE! ENTRY FREE FOR 24 HOURS!
LISA TRADER - the most successful trader of 2024. A week ago she finished a marathon in her VIP channel, turning $100 into $2,000 in just two weeks!
Entry to her channel normally costs $1,500 - free for 24 hours!
JOIN THE VIP CHANNEL NOW!
Instance segmentation vs. semantic segmentation using Ultralytics
Semantic segmentation classifies each pixel into a category (e.g., "car," "horse"), but doesn't distinguish between different objects of the same class.
Instance segmentation goes further by identifying and separating individual objects within the same category (e.g., horse 1 vs. horse 2).
Each type has its strengths. Semantic segmentation is more common in medical imaging because it focuses on pixel-wise classification without needing to distinguish individual object instances; its simplicity and adaptability also make it widely applicable across industries.
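To make the difference concrete, here is a minimal sketch of instance segmentation with tracking using the Ultralytics API; the checkpoint name is the standard pretrained segmentation model and the video path is a placeholder.

```python
# Minimal sketch: instance segmentation + tracking with Ultralytics YOLO.
# The video path is a placeholder; the checkpoint is the standard pretrained seg model.
from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")  # instance-segmentation variant

# Stream results frame by frame; each result carries per-instance masks and track IDs.
results = model.track(source="people.mp4", stream=True)

for result in results:
    if result.masks is not None:
        print(f"{len(result.masks)} instance masks, classes: {result.boxes.cls.tolist()}")
```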
https://docs.ultralytics.com/guides/instance-segmentation-and-tracking/
By: https://t.iss.one/DataScienceN
Forwarded from Python | Machine Learning | Coding | R
Football Field Analysis with Computer Vision ⚽
Highlighting the latest strides in football field analysis using computer vision, this post shares a single frame from our video that demonstrates how homography and keypoint detection combine to produce precise minimap overlays.
At the heart of this project lies the refinement of field keypoint extraction. Our experiments show a clear link between both the number and accuracy of detected keypoints and the overall quality of the minimap.
Enhanced keypoint precision leads to a more reliable homography transformation, resulting in a richer, more accurate tactical view.
For this work, we leveraged the championship-winning keypoint detection model from the SoccerNet Calibration Challenge (link below).
Implementing and evaluating this state-of-the-art solution has deepened our appreciation for keypoint-driven approaches in sports analytics.
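To make the homography step concrete, here is a minimal OpenCV sketch of the idea; the keypoint coordinates are placeholder values, since in the project they come from the detection model and the known pitch template.

```python
# Minimal sketch: mapping detected field keypoints onto a 2D minimap via homography.
# Keypoint coordinates below are placeholders; in practice they come from the
# keypoint detection model and the pitch template.
import cv2
import numpy as np

# Detected keypoints in the broadcast frame (pixel coordinates) - placeholder values.
frame_pts = np.array([[120, 340], [880, 310], [150, 660], [900, 640]], dtype=np.float32)
# Matching reference points on the minimap / pitch template (meters) - placeholder values.
map_pts = np.array([[0, 0], [105, 0], [0, 68], [105, 68]], dtype=np.float32)

# Estimate the homography; RANSAC makes it robust to a few inaccurate keypoints.
H, inliers = cv2.findHomography(frame_pts, map_pts, cv2.RANSAC, 5.0)

# Project an arbitrary frame point (e.g. a player's feet) onto the minimap.
player = np.array([[[500.0, 450.0]]], dtype=np.float32)
minimap_xy = cv2.perspectiveTransform(player, H)
print(minimap_xy.squeeze())
```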
https://lnkd.in/em94QDFE
By: https://t.iss.one/DataScienceN
#ObjectDetection #DeepLearning #Detectron2 #ComputerVision #AI #Football #SportsTech #MachineLearning #AIinSports #FutureOfFootball #SportsAnalytics #TechInnovation #SportsAI #AIinFootball #AIandSports #FootballAnalytics #python #ai #yolo
Forwarded from Python | Machine Learning | Coding | R
These channels are for Programmers, Coders, and Software Engineers.
1. Python
2. Data Science
3. Machine Learning
4. Data Visualization
5. Artificial Intelligence
6. Data Analysis
7. Statistics
8. Deep Learning
9. Programming Languages
https://t.iss.one/addlist/8_rRW2scgfRhOTc0
https://t.iss.one/Codeprogrammer
Introducing CoMotion, a project that detects and tracks detailed 3D poses of multiple people using a single monocular camera stream. This system maintains temporally coherent predictions in crowded scenes filled with difficult poses and occlusions, enabling online tracking through frames with high accuracy.
Key Features:
- Precise detection and tracking in crowded scenes
- Temporal coherence even with occlusions
- High accuracy in tracking multiple people over time
This project advances 3D human motion tracking by offering faster and more accurate tracking of multiple individuals compared to existing systems.
#AI #DeepLearning #3DTracking #ComputerVision #PoseEstimation
Trackers Library is Officially Released!
If you're working in computer vision and object tracking, this one's for you!
Trackers is a powerful open-source library with support for a wide range of detection models and tracking algorithms:
- Plug-and-play compatibility with detection models from Roboflow Inference, Hugging Face Transformers, Ultralytics, MMDetection, and more!
- Tracking algorithms supported: SORT, DeepSORT, and advanced trackers like StrongSORT, BoT-SORT, ByteTrack, and OC-SORT, with even more coming soon!
Released under the permissive Apache 2.0 license, free for everyone to use and contribute to.
Huge thanks to Piotr Skalski for co-developing this library, and to Raif Olson and Onuralp SEZER for their outstanding contributions!
Links:
GitHub
Docs
Quick-start notebooks for SORT and DeepSORT are linked below.
https://www.linkedin.com/posts/skalskip92_trackers-library-is-out-plugandplay-activity-7321128111503253504-3U6-?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEXwhVcBcv2n3wq8JzEai3TfWmKLRLTefYo
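As a rough idea of the intended workflow, here is a sketch pairing an Ultralytics detector with the library's SORT tracker; the SORTTracker class and its update() method are assumptions based on the announcement and quick-start material, so check the official docs for the exact API.

```python
# Rough sketch (assumed API): pairing an Ultralytics detector with trackers' SORT.
# SORTTracker and tracker.update() are assumptions based on the quick-start notebooks;
# verify against the official documentation before use.
import supervision as sv
from trackers import SORTTracker
from ultralytics import YOLO

model = YOLO("yolo11n.pt")      # any supported detector works (plug-and-play)
tracker = SORTTracker()         # assumed class name
annotator = sv.LabelAnnotator()

def callback(frame, _index):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update(detections)  # assumed to assign persistent tracker IDs
    labels = [f"#{tid}" for tid in detections.tracker_id]
    return annotator.annotate(frame.copy(), detections, labels=labels)

sv.process_video(source_path="input.mp4", target_path="output.mp4", callback=callback)
```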
#ComputerVision #ObjectTracking #OpenSource #DeepLearning #AI
By: https://t.iss.one/DataScienceN
Forwarded from ENG. Hussein Sheikho
Remote work opportunity
No qualifications or experience required; the company provides full training.
Flexible working hours.
Register, and you will be contacted to attend an introductory meeting about the job and the company.
https://forms.gle/hqUZXu7u4uLjEDPv8
Forwarded from Python Courses
Introducing Unidrone v1.0 - The Next Generation of Aerial Object Detection Models
We are excited to present Unidrone v1.0, a powerful collection of AI detection models based on YOLOv8, specially designed for object recognition in drone imagery.
What is Unidrone?
Unidrone is a smart fusion of two previous models: WALDO (optimized for nadir/overhead views) and NANO (designed for forward-looking angles). Now you no longer need to choose between them: Unidrone handles both angles with high accuracy!
These models accurately detect objects in drone images taken from altitudes of approximately 50 to 1,000 feet, regardless of camera angle.
Supported Object Classes:
0. Person (walking, biking, swimming, skiing, etc.)
1. Bike & motorcycle
2. Light vehicles (cars, vans, ambulances, etc.)
3. Trucks
4. Bus
5. Boat & floating objects
6. Construction vehicles (e.g., tractors, loaders)
Note: This version of Unidrone does not include military-related classes or smoke detection. It's built solely for civilian and safety-focused applications.
Use Cases:
- Disaster recovery operations
- Wildlife and protected area monitoring
- Occupancy analysis (e.g., parking lots)
- Infrastructure surveillance
- Search and rescue (SAR)
- Crowd counting
- Ground-risk mitigation for drones
The models are available in .pt format and can easily be exported to ONNX or TFLite. They also support visualization with Roboflow's Supervision library for clean, annotated outputs.
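As an illustrative sketch of that workflow, loading a .pt checkpoint with Ultralytics, exporting it to ONNX, and annotating a frame with Supervision could look like this; the checkpoint and image file names are placeholders, not the actual released names.

```python
# Illustrative sketch only: file and checkpoint names are placeholders.
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("unidrone_v1.pt")          # placeholder .pt checkpoint name
model.export(format="onnx")             # TFLite export works the same way: format="tflite"

image = cv2.imread("drone_frame.jpg")   # placeholder aerial image
result = model(image)[0]

detections = sv.Detections.from_ultralytics(result)
annotated = sv.BoxAnnotator().annotate(scene=image.copy(), detections=detections)
annotated = sv.LabelAnnotator().annotate(
    scene=annotated,
    detections=detections,
    labels=[model.names[int(c)] for c in detections.class_id],
)
cv2.imwrite("annotated.jpg", annotated)
```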
If you're a machine learning practitioner, you can:
Fine-tune the models on your own dataset
Optimize for fast inference on edge devices
Quantize and deploy on low-cost hardware
Use the models to auto-label your own data
If you're facing detection issues or want to contribute to future improvements, feel free to contact the developer:
[email protected]
Enjoy exploring the power of Unidrone v1.0!
https://huggingface.co/StephanST/unidrone
By: https://t.iss.one/DataScienceN
Retail Fashion Sales Data Analysis
Here's a fascinating project in the field of data analysis, focused on real-world fashion retail sales. The dataset contains 3,400 records of customer purchases, including item types, purchase amounts, customer ratings, and payment methods.
Project Goals:
- Understand customer purchasing behavior
- Identify the most popular products
- Analyze preferred payment methods
The dataset was first cleaned using Pandas to handle missing values, and then insightful visualizations were created with Matplotlib to reveal hidden patterns in the data.
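A minimal sketch of that cleaning-and-plotting workflow is below; the file name and column names (e.g., "Item Purchased", "Purchase Amount (USD)", "Review Rating") are assumptions for illustration, not the confirmed dataset schema.

```python
# Minimal sketch of the described workflow: clean with pandas, visualize with matplotlib.
# File name and column names are illustrative assumptions, not the actual dataset schema.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("fashion_retail_sales.csv")  # placeholder file name

# Handle missing values: drop rows missing the purchase amount, fill missing ratings.
df = df.dropna(subset=["Purchase Amount (USD)"])
df["Review Rating"] = df["Review Rating"].fillna(df["Review Rating"].median())

# Most popular products by number of purchases.
top_items = df["Item Purchased"].value_counts().head(10)
top_items.plot(kind="barh", title="Top 10 Purchased Items")
plt.xlabel("Number of purchases")
plt.tight_layout()
plt.show()
```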
Data source: https://lnkd.in/dbGbuhG7
Check out the full notebook here: https://lnkd.in/dhnJpk47
If you're interested in customer behavior analytics and working with real-world retail data, this project is a great source of insight!
By: https://t.iss.one/DataScienceN
The new HQ-SAM (High-Quality Segment Anything Model) has just been added to the Hugging Face Transformers library!
This is an enhanced version of the original SAM (Segment Anything Model) introduced by Meta in 2023. HQ-SAM significantly improves the segmentation of fine and detailed objects, while preserving all the powerful features of SAM, including prompt-based interaction, fast inference, and strong zero-shot performance. That means you can easily switch to HQ-SAM wherever you used SAM!
The improvements come from just a few additional learnable parameters. The authors collected a high-quality dataset with 44,000 fine-grained masks from various sources, and impressively trained the model in just 4 hours using 8 GPUs, all while keeping the core SAM weights frozen.
The newly introduced parameters include:
* A High-Quality Token
* A Global-Local Feature Fusion mechanism
This work was presented at NeurIPS 2023 and still holds state-of-the-art performance in zero-shot segmentation on the SGinW benchmark.
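As a rough sketch of how this could look in Transformers, the snippet below follows the pattern of the existing SamModel/SamProcessor API; the SamHQModel/SamHQProcessor class names and the checkpoint id are assumptions, so verify them against the linked documentation.

```python
# Rough sketch (assumed class and checkpoint names): prompt-based segmentation with HQ-SAM.
# SamHQModel / SamHQProcessor follow the pattern of the existing SamModel / SamProcessor API;
# verify the exact names and checkpoint id against the official documentation.
import torch
from PIL import Image
from transformers import SamHQModel, SamHQProcessor

checkpoint = "syscv-community/sam-hq-vit-base"   # assumed checkpoint id
model = SamHQModel.from_pretrained(checkpoint)
processor = SamHQProcessor.from_pretrained(checkpoint)

image = Image.open("example.jpg").convert("RGB")
input_points = [[[450, 600]]]  # one (x, y) point prompt, as with vanilla SAM

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process the predicted masks back to the original image resolution.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
)
print(masks[0].shape)
```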
Documentation: https://lnkd.in/e5iDT6Tf
Model Access: https://lnkd.in/ehS6ZUyv
Source Code: https://lnkd.in/eg5qiKC2
#ArtificialIntelligence #ComputerVision #Transformers #Segmentation #DeepLearning #PretrainedModels #ResearchAndDevelopment #AdvancedModels #ImageAnalysis #HQ_SAM #SegmentAnything #SAMmodel #ZeroShotSegmentation #NeurIPS2023 #AIresearch #FoundationModels #OpenSourceAI #SOTA
https://t.iss.one/DataScienceN
Forwarded from Python | Machine Learning | Coding | R
Your balance has been credited with $4,000; the owner of the channel wants to contact you!
Dear subscriber, thank you very much for supporting our channel. As a token of our gratitude, we would like to offer you free access to Lisa's investor channel, which you can use to start earning today:
t.iss.one/Lisainvestor
Be sure to take advantage of our gift. Admission is free, so don't miss the opportunity to change your life for the better.
You can follow the link:
https://t.iss.one/+0DQSCADFTUA3N2Qx