🎯 Keypoints and Minimap for Football Analysis ⚽️
Highlighting the latest strides in football field analysis using computer vision, this post shares a single frame from our video that demonstrates how homography and keypoint detection combine to produce precise minimap overlays. 🧠🎯
🧩 At the heart of this project lies the refinement of field keypoint extraction. Our experiments show a clear link between the number and accuracy of detected keypoints and the overall quality of the minimap. 🗺️
Enhanced keypoint precision leads to a more reliable homography transformation, resulting in a richer, more accurate tactical view. ⚙️⚡
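To make the idea concrete, here is a minimal sketch of a keypoint-to-minimap homography with OpenCV (an illustration, not the project's actual pipeline; the keypoint and pitch-template coordinates below are made-up placeholders):

import numpy as np
import cv2

# Hypothetical pitch keypoints detected in the broadcast frame (pixel coordinates).
frame_pts = np.array([[312, 180], [645, 172], [955, 330], [402, 520]], dtype=np.float32)

# The same landmarks on a 2D pitch template used as the minimap (e.g. 1050 x 680 px).
minimap_pts = np.array([[0, 0], [525, 0], [1050, 340], [160, 680]], dtype=np.float32)

# Estimate the frame -> minimap homography. With more (and more accurate) keypoints,
# RANSAC can reject outliers and the estimated transform becomes far more stable.
H, inliers = cv2.findHomography(frame_pts, minimap_pts, cv2.RANSAC, 5.0)

# Project player positions detected in the frame onto the minimap.
players_frame = np.array([[[500, 400]], [[800, 450]]], dtype=np.float32)
players_minimap = cv2.perspectiveTransform(players_frame, H)
print(players_minimap.reshape(-1, 2))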
For this work, we leveraged the championship-winning keypoint detection model from the SoccerNet Calibration Challenge:
Implementing and evaluating this state-of-the-art solution has deepened our appreciation for keypoint-driven approaches in sports analytics.
https://lnkd.in/em94QDFE
💡 By: https://t.iss.one/DataScienceN
#ObjectDetection #DeepLearning #Detectron2 #ComputerVision #AI #Football #SportsTech #MachineLearning #AIinSports #FutureOfFootball #SportsAnalytics
#TechInnovation #SportsAI #AIinFootball #AIandSports #FootballAnalytics #python #yolo
🔥 Powerful Combo: Ultralytics YOLO11 + Sony Semicon | AITRIOS (Global) Platform + Raspberry Pi
We've recently updated our Sony IMX model export to fully support YOLO11n detection models! This means you can now seamlessly run YOLO11n models directly on Raspberry Pi AI Cameras powered by the Sony IMX500 sensor, making it even easier to develop advanced Edge AI applications. 💡
To test this new export workflow, I trained a model on the VisDrone dataset and exported it using the following command:
yolo export model=<path_to_drone_model> format=imx data=VisDrone.yaml
🔥 The video below shows the result of this process!
Benchmark results for YOLO11n on IMX500:
✅ Inference Time: 62.50 ms
✅ mAP50-95 (B): 0.644
Want to learn more about YOLO11 and Sony IMX500? Check it out here ➡️
https://docs.ultralytics.com/integrations/sony-imx500/
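For reference, a minimal sketch of the equivalent export via the Ultralytics Python API (the checkpoint path is a placeholder, and argument support may vary by Ultralytics version):

from ultralytics import YOLO

# Load a YOLO11n checkpoint fine-tuned on VisDrone (placeholder path).
model = YOLO("path/to/drone_model.pt")

# Export to the Sony IMX500 format; mirrors the CLI command above.
# The dataset YAML is used for calibration during quantization.
export_path = model.export(format="imx", data="VisDrone.yaml")
print(export_path)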
#EdgeAI #YOLO11 #SonyIMX500 #AITRIOS #ObjectDetection #RaspberryPiAI #ComputerVision #DeepLearning #OnDeviceAI #ModelDeployment
By: https://t.iss.one/DataScienceN
#YOLOv8 #ComputerVision #ObjectDetection #Python #AI
Audience Analysis with YOLOv8: Counting People & Estimating Gender Ratios
This lesson demonstrates how to use the YOLOv8 model to perform a computer vision task: analyzing an image of a crowd to count the total number of people and estimate the ratio of men to women.
---
Step 1: Setup and Installation
First, we need to install the necessary libraries: ultralytics for the YOLOv8 model, opencv-python for image manipulation, and cvlib for a simple, pre-trained gender classification model.
#Setup #Installation
# Open your terminal or command prompt and run:
pip install ultralytics opencv-python cvlib tensorflow
---
Step 2: Loading Models and Image
We will load two models: the official YOLOv8 model pre-trained for object detection, and we'll use cvlib for gender detection. We also need to load the image we want to analyze. Make sure you have an image named crowd.jpg in the same directory.
#DataLoading #Model
import cv2
from ultralytics import YOLO
import cvlib as cv
import numpy as np
# Load the YOLOv8 model (pre-trained on COCO dataset)
model = YOLO('yolov8n.pt')
# Load the image
image_path = 'crowd.jpg' # Make sure this image exists
img = cv2.imread(image_path)
# Check if the image was loaded correctly
if img is None:
    print(f"Error: Could not load image from {image_path}")
else:
    print("Image and YOLOv8 model loaded successfully.")
---
Step 3: Person Detection with YOLOv8
Now, we'll run the YOLOv8 model on our image to detect all objects and then filter those results to keep only the ones identified as a 'person'.
#PersonDetection #Inference
# Run inference on the image
results = model(img)
# A list to store the bounding boxes of detected people
person_boxes = []
# Process the results
for result in results:
    boxes = result.boxes
    for box in boxes:
        # Get class id and check if it's a person (class 0 in COCO)
        if model.names[int(box.cls)] == 'person':
            # Get bounding box coordinates
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            person_boxes.append((x1, y1, x2, y2))
# Print the total number of people found
total_people = len(person_boxes)
print(f"Total people detected: {total_people}")
---
Step 4: Gender Classification
For each detected person, we will crop their bounding box from the image. Then, we'll use cvlib to detect a face within that crop and predict the gender. This is a multi-step pipeline.
#GenderClassification #CV
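A minimal sketch of how this step could look, assuming cvlib's detect_face and detect_gender helpers (the exact label strings returned may differ between cvlib versions):

men, women = 0, 0

# Loop over every detected person and try to classify their gender
for (x1, y1, x2, y2) in person_boxes:
    # Crop the person from the original image
    person_crop = img[y1:y2, x1:x2]
    if person_crop.size == 0:
        continue

    # Detect faces inside the person crop
    faces, _ = cv.detect_face(person_crop)
    if not faces:
        continue

    # Use the first detected face for gender prediction
    fx1, fy1, fx2, fy2 = faces[0]
    face_crop = np.copy(person_crop[fy1:fy2, fx1:fx2])
    if face_crop.size == 0:
        continue

    # cvlib returns candidate labels (assumed 'male'/'female') with confidence scores
    labels, confidences = cv.detect_gender(face_crop)
    if labels[np.argmax(confidences)] == 'male':
        men += 1
    else:
        women += 1

classified = men + women
print(f"People classified by gender: {classified} of {total_people}")
if classified:
    print(f"Estimated ratio - men: {men / classified:.0%}, women: {women / classified:.0%}")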