Homography and Keypoints for Football Analytics ⚽️📐
🚀 Highlighting the latest strides in football field analysis using computer vision, this post shares a single frame from our video that demonstrates how homography and keypoint detection combine to produce precise minimap overlays. 🧠🎯
🧩 At the heart of this project lies the refinement of field keypoint extraction. Our experiments show a clear link between both the number and accuracy of detected keypoints and the overall quality of the minimap. 🗺️
📊 Enhanced keypoint precision leads to a more reliable homography transformation, resulting in a richer, more accurate tactical view. ⚙️⚡
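🧪 To make the pipeline concrete, here is a minimal sketch of how detected field keypoints can drive the minimap projection with OpenCV; the frame keypoints, pitch-template coordinates, and player positions below are hypothetical placeholders, not outputs of our model:

import numpy as np
import cv2

# Hypothetical keypoints detected in the broadcast frame (pixel coordinates)
# and their known positions on a 105 x 68 m pitch template (minimap coordinates).
frame_pts = np.array([[512, 210], [880, 225], [300, 540], [950, 560]], dtype=np.float32)
pitch_pts = np.array([[0.0, 0.0], [52.5, 0.0], [0.0, 34.0], [52.5, 34.0]], dtype=np.float32)

# Robust homography estimation: with more keypoints, RANSAC can reject inaccurate ones,
# which is why keypoint count and precision directly affect minimap quality.
H, inlier_mask = cv2.findHomography(frame_pts, pitch_pts, cv2.RANSAC, 3.0)

# Project player positions (e.g. bounding-box foot points) onto the minimap.
players_frame = np.array([[[640, 400]], [[720, 480]]], dtype=np.float32)
players_pitch = cv2.perspectiveTransform(players_frame, H)
print(players_pitch.reshape(-1, 2))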
🏆 For this work, we leveraged the championship-winning keypoint detection model from the SoccerNet Calibration Challenge:
📈 Implementing and evaluating this state‑of‑the‑art solution has deepened our appreciation for keypoint‑driven approaches in sports analytics. 📹📌
🔗 https://lnkd.in/em94QDFE
📡 By: https://t.iss.one/DataScienceN
#ObjectDetection #DeepLearning #Detectron2 #ComputerVision #AI #Football #SportsTech #MachineLearning #AIinSports
#FutureOfFootball #SportsAnalytics #TechInnovation #SportsAI #AIinFootball #AIandSports #FootballAnalytics #Python #YOLO
Forwarded from Python | Machine Learning | Coding | R
This channel is for programmers, coders, and software engineers.
0️⃣ Python
1️⃣ Data Science
2️⃣ Machine Learning
3️⃣ Data Visualization
4️⃣ Artificial Intelligence
5️⃣ Data Analysis
6️⃣ Statistics
7️⃣ Deep Learning
8️⃣ Programming Languages
✅ https://t.iss.one/addlist/8_rRW2scgfRhOTc0
✅ https://t.iss.one/Codeprogrammer
Introducing CoMotion, a project that detects and tracks detailed 3D poses of multiple people using a single monocular camera stream. This system maintains temporally coherent predictions in crowded scenes filled with difficult poses and occlusions, enabling online tracking through frames with high accuracy.
🔍 Key Features:
- Precise detection and tracking in crowded scenes
- Temporal coherence even with occlusions
- High accuracy in tracking multiple people over time
This project advances 3D human motion tracking by offering faster and more accurate tracking of multiple individuals compared to existing systems.
#AI #DeepLearning #3DTracking #ComputerVision #PoseEstimation
🎯 Trackers Library is Officially Released! 🚀
If you're working in computer vision and object tracking, this one's for you!
💡 Trackers is a powerful open-source library with support for a wide range of detection models and tracking algorithms:
✅ Plug-and-play compatibility with detection models from:
Roboflow Inference, Hugging Face Transformers, Ultralytics, MMDetection, and more!
✅ Tracking algorithms supported:
SORT, DeepSORT, and advanced trackers like StrongSORT, BoT‑SORT, ByteTrack, OC‑SORT – with even more coming soon!
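🧪 For a concrete feel of the plug-and-play workflow, here is a minimal sketch pairing an Ultralytics detector with the library's SORT tracker via supervision; the SORTTracker import path and its update() method are assumptions on my side, so check the docs linked below for the exact API:

import supervision as sv
from trackers import SORTTracker   # assumed import path and class name
from ultralytics import YOLO

model = YOLO("yolo11n.pt")         # any supported detection model
tracker = SORTTracker()            # assumed tracker class
annotator = sv.BoxAnnotator()

def callback(frame, index):
    # Detect, convert to supervision Detections, then let the tracker assign IDs.
    detections = sv.Detections.from_ultralytics(model(frame)[0])
    detections = tracker.update(detections)   # assumed to populate detections.tracker_id
    return annotator.annotate(frame.copy(), detections)

sv.process_video(source_path="match.mp4", target_path="tracked.mp4", callback=callback)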
🧩 Released under the permissive Apache 2.0 license – free for everyone to use and contribute.
👏 Huge thanks to Piotr Skalski for co-developing this library, and to Raif Olson and Onuralp SEZER for their outstanding contributions!
📌 Links:
🔗 GitHub
🔗 Docs
📚 Quick-start notebooks for SORT and DeepSORT are linked 👇🏻
https://www.linkedin.com/posts/skalskip92_trackers-library-is-out-plugandplay-activity-7321128111503253504-3U6-?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEXwhVcBcv2n3wq8JzEai3TfWmKLRLTefYo
#ComputerVision #ObjectTracking #OpenSource #DeepLearning #AI
📡 By: https://t.iss.one/DataScienceN
Forwarded from ENG. Hussein Sheikho
Remote job opportunity 🧑‍💻
No qualifications or experience required; the company provides full training ✨
Flexible working hours ⏰
After registering, you will be contacted to attend an introductory meeting about the job and the company
https://forms.gle/hqUZXu7u4uLjEDPv8
Forwarded from Python Courses
🎉🚁 Introducing Unidrone v1.0 – The Next Generation of Aerial Object Detection Models 🚁🎉
We are excited to present Unidrone v1.0, a powerful collection of AI detection models based on YOLOv8, specially designed for object recognition in drone imagery.
🔍 What is Unidrone?
Unidrone is a smart fusion of two previous models: WALDO (optimized for nadir/overhead views) and NANO (designed for forward-looking angles). Now you no longer need to choose between them—Unidrone handles both angles with high accuracy!
📦 These models accurately detect objects in drone images taken from altitudes of approximately 50 to 1000 feet, regardless of camera angle.
🔍 Supported Object Classes:
0️⃣ Person (walking, biking, swimming, skiing, etc.)
1️⃣ Bike & motorcycle
2️⃣ Light vehicles (cars, vans, ambulances, etc.)
3️⃣ Trucks
4️⃣ Bus
5️⃣ Boat & floating objects
6️⃣ Construction vehicles (e.g., tractors, loaders)
🚫 Note: This version of Unidrone does not include military-related classes or smoke detection. It's built solely for civilian and safety-focused applications.
📌 Use Cases:
✅ Disaster recovery operations
✅ Wildlife and protected area monitoring
✅ Occupancy analysis (e.g., parking lots)
✅ Infrastructure surveillance
✅ Search and rescue (SAR)
✅ Crowd counting
✅ Ground-risk mitigation for drones
🛠️ The models are available in .pt format and can easily be exported to ONNX or TFLite. They also support visualization with Roboflow’s Supervision library for clean, annotated outputs.
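🧪 As a rough sketch of that workflow with the Ultralytics Python API (the checkpoint and image filenames below are placeholders; grab the actual .pt file from the Hugging Face repo linked at the end of this post):

from ultralytics import YOLO

# Load a Unidrone checkpoint (hypothetical filename).
model = YOLO("unidrone_yolov8.pt")

# Detect objects in a drone frame; class IDs follow the 0-6 scheme listed above.
results = model.predict("aerial_frame.jpg", imgsz=1280, conf=0.25)
results[0].show()

# Optional: export for edge deployment.
model.export(format="onnx")   # or format="tflite"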
🧠 If you're a machine learning practitioner, you can:
Fine-tune the models on your own dataset
Optimize for fast inference on edge devices
Quantize and deploy on low-cost hardware
Use the models to auto-label your own data
📨 If you're facing detection issues or want to contribute to future improvements, feel free to contact the developer:
[email protected]
Enjoy exploring the power of Unidrone v1.0!
💬https://huggingface.co/StephanST/unidrone
📡 By: https://t.iss.one/DataScienceN
🚀 Retail Fashion Sales Data Analysis
Here's a fascinating project in the field of data analysis, focused on real-world fashion retail sales. The dataset contains 3,400 records of customer purchases, including item types, purchase amounts, customer ratings, and payment methods.
🔍 Project Goals:
- Understand customer purchasing behavior
- Identify the most popular products
- Analyze preferred payment methods
📊 The dataset was first cleaned using Pandas to handle missing values, and then insightful visualizations were created with Matplotlib to reveal hidden patterns in the data.
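🧪 As an illustrative sketch of that cleaning-and-plotting step (the file name and column names below are assumptions about the dataset's schema, not taken from the notebook):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("fashion_retail_sales.csv")   # hypothetical filename

# Basic cleaning: drop rows with no purchase amount, fill missing ratings with the median.
df = df.dropna(subset=["Purchase Amount (USD)"])
df["Review Rating"] = df["Review Rating"].fillna(df["Review Rating"].median())

# Most popular items.
df["Item Purchased"].value_counts().head(10).plot(kind="barh", title="Top 10 items")
plt.tight_layout()
plt.show()

# Preferred payment methods.
df["Payment Method"].value_counts().plot(kind="pie", autopct="%1.0f%%", title="Payment methods")
plt.show()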
🔗 Data source: https://lnkd.in/dbGbuhG7
📓 Check out the full notebook here:
🔗 https://lnkd.in/dhnJpk47
If you're interested in customer behavior analytics and working with real-world retail data, this project is a great source of insight! 🌟
📡 By: https://t.iss.one/DataScienceN
🚀 The new HQ-SAM (High-Quality Segment Anything Model) has just been added to the Hugging Face Transformers library!
This is an enhanced version of the original SAM (Segment Anything Model) introduced by Meta in 2023. HQ-SAM significantly improves the segmentation of fine and detailed objects, while preserving all the powerful features of SAM — including prompt-based interaction, fast inference, and strong zero-shot performance. That means you can easily switch to HQ-SAM wherever you used SAM!
The improvements come from just a few additional learnable parameters. The authors collected a high-quality dataset with 44,000 fine-grained masks from various sources, and impressively trained the model in just 4 hours using 8 GPUs — all while keeping the core SAM weights frozen.
The newly introduced parameters include:
* A High-Quality Token
* A Global-Local Feature Fusion mechanism
This work was presented at NeurIPS 2023 and still holds state-of-the-art performance in zero-shot segmentation on the SGinW benchmark.
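🧪 To show how drop-in the switch can be, here is a sketch that follows the standard SAM prompting pattern in Transformers; the SamHQModel/SamHQProcessor class names and the checkpoint id are my assumptions, so verify them against the documentation linked below:

import torch
from PIL import Image
from transformers import SamHQModel, SamHQProcessor   # assumed class names

checkpoint = "syscv-community/sam-hq-vit-base"         # assumed checkpoint id
model = SamHQModel.from_pretrained(checkpoint)
processor = SamHQProcessor.from_pretrained(checkpoint)

image = Image.open("example.jpg").convert("RGB")
input_points = [[[450, 600]]]   # one (x, y) point prompt, exactly as with vanilla SAM

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Upscale the low-resolution masks back to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
)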
📄 Documentation: https://lnkd.in/e5iDT6Tf
🧠 Model Access: https://lnkd.in/ehS6ZUyv
💻 Source Code: https://lnkd.in/eg5qiKC2
#ArtificialIntelligence #ComputerVision #Transformers #Segmentation #DeepLearning #PretrainedModels #ResearchAndDevelopment #AdvancedModels #ImageAnalysis #HQ_SAM #SegmentAnything #SAMmodel #ZeroShotSegmentation #NeurIPS2023 #AIresearch #FoundationModels #OpenSourceAI #SOTA
🌟https://t.iss.one/DataScienceN
Forwarded from Python | Machine Learning | Coding | R
🎁 Your balance has been credited with $4,000; the owner of the channel wants to contact you!
Dear subscriber, thank you very much for supporting our channel. As a token of our gratitude, we would like to give you free access to Lisa's investor channel, where you can start earning today.
t.iss.one/Lisainvestor
Be sure to take advantage of our gift; admission is free. Don't miss this opportunity to change your life for the better.
You can follow the link :
https://t.iss.one/+0DQSCADFTUA3N2Qx
Follow me on LinkedIn for more projects and jobs
https://www.linkedin.com/in/hussein-sheikho-4a8187246
🔥Powerful Combo: Ultralytics YOLO11 + Sony Semicon | AITRIOS (Global) Platform + Raspberry Pi
We’ve recently updated our Sony IMX model export to fully support YOLO11n detection models! This means you can now seamlessly run YOLO11n models directly on Raspberry Pi AI Cameras powered by the Sony IMX500 sensor — making it even easier to develop advanced Edge AI applications. 💡
To test this new export workflow, I trained a model on the VisDrone dataset and exported it using the following command:
👉 yolo export model=<path_to_drone_model> format=imx data=VisDrone.yaml
🎥 The video below shows the result of this process!
🔍 Benchmark results for YOLO11n on IMX500:
✅ Inference Time: 62.50 ms
✅ mAP50-95 (B): 0.644
📌 Want to learn more about YOLO11 and Sony IMX500? Check it out here ➡️
https://docs.ultralytics.com/integrations/sony-imx500/
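🧪 The same export can also be done from Python; here is a short sketch with the Ultralytics API, assuming the IMX export takes the calibration dataset through the data argument as the CLI call above suggests:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")                         # or your VisDrone-trained weights
model.export(format="imx", data="VisDrone.yaml")   # produces an IMX500-ready package

# The exported package can then be loaded like any other Ultralytics model
# (the output directory name may differ on your setup).
imx_model = YOLO("yolo11n_imx_model")
results = imx_model.predict("drone_frame.jpg")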
#EdgeAI #YOLO11 #SonyIMX500 #AITRIOS #ObjectDetection #RaspberryPiAI #ComputerVision #DeepLearning #OnDeviceAI #ModelDeployment
🌟https://t.iss.one/DataScienceN