New Tutorial: Automatic Number Plate Recognition (ANPR) with YOLOv11 + GPT-4o-mini!
This hands-on tutorial shows you how to combine the real-time detection power of YOLOv11 with the language understanding of GPT-4o-mini to build a smart, high-accuracy ANPR system! Everything is covered step by step, from setup to smart prompt engineering.
Key Highlights:
- YOLOv11 + GPT-4o-mini = high-precision number plate recognition
- Real-time video processing in Google Colab
- Smart prompt engineering for enhanced OCR performance
A must-watch if you're into computer vision, deep learning, or OpenAI integrations!
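To make the detect-then-read pipeline concrete, here is a minimal sketch (not the notebook's exact code): a YOLOv11 plate detector crops each plate, and GPT-4o-mini reads the characters from the crop. The weights path and the prompt are illustrative assumptions.

```python
# Sketch only: YOLOv11 detection + GPT-4o-mini OCR. The plate-detector weights
# ("license_plate_yolo11n.pt") and the prompt are assumptions, not the tutorial's exact code.
import base64
import cv2
from openai import OpenAI
from ultralytics import YOLO

detector = YOLO("license_plate_yolo11n.pt")  # hypothetical fine-tuned plate detector
client = OpenAI()

frame = cv2.imread("car.jpg")
for x1, y1, x2, y2 in detector(frame)[0].boxes.xyxy.cpu().numpy().astype(int):
    crop = frame[y1:y2, x1:x2]  # cropped plate region
    _, buf = cv2.imencode(".jpg", crop)
    image_b64 = base64.b64encode(buf.tobytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Read the license plate in this image. Reply with the plate characters only."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)
```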
Colab Notebook
Watch on YouTube
#YOLOv11 #GPT4o #OpenAI #ANPR #OCR #ComputerVision #DeepLearning #AI #DataScience #Python #Ultralytics #MachineLearning #Colab #NumberPlateRecognition
By: https://t.iss.one/DataScienceN
Football Field Analysis: Keypoint Detection and Homography for Minimap Overlays
Highlighting the latest strides in football field analysis using computer vision, this post shares a single frame from our video that demonstrates how homography and keypoint detection combine to produce precise minimap overlays.
At the heart of this project lies the refinement of field keypoint extraction. Our experiments show a clear link between the number and accuracy of detected keypoints and the overall quality of the minimap.
Enhanced keypoint precision leads to a more reliable homography transformation, resulting in a richer, more accurate tactical view.
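For intuition, here is an illustrative sketch (not the project's code) of how detected field keypoints drive the minimap: correspondences between image pixels and known pitch-template coordinates yield a homography, which then projects any frame point onto the 2D pitch. The keypoint values below are made up.

```python
# Illustrative only: field keypoints (image px) matched to pitch-template coordinates (metres)
# give a homography; RANSAC tolerates a few mislocated keypoints, and more accurate
# keypoints produce a more stable H -- hence a better minimap.
import cv2
import numpy as np

image_pts = np.array([[512, 240], [1210, 260], [530, 660], [1190, 640]], dtype=np.float32)
pitch_pts = np.array([[0.0, 0.0], [52.5, 0.0], [0.0, 34.0], [52.5, 34.0]], dtype=np.float32)

H, inliers = cv2.findHomography(image_pts, pitch_pts, cv2.RANSAC, 3.0)

# Project a player's foot point from the frame onto the pitch template (the minimap).
player_px = np.array([[[800.0, 500.0]]], dtype=np.float32)
player_on_pitch = cv2.perspectiveTransform(player_px, H)
print(player_on_pitch)  # [[[x_m, y_m]]] in pitch coordinates
```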
For this work, we leveraged the championship-winning keypoint detection model from the SoccerNet Calibration Challenge:
Implementing and evaluating this state-of-the-art solution has deepened our appreciation for keypoint-driven approaches in sports analytics.
https://lnkd.in/em94QDFE
By: https://t.iss.one/DataScienceN
#ObjectDetection #DeepLearning #Detectron2 #ComputerVision #AI #Football #SportsTech #MachineLearning #AIinSports #FutureOfFootball #SportsAnalytics #TechInnovation #SportsAI #AIinFootball #AIandSports #FootballAnalytics #Python #YOLO
Introducing CoMotion, a project that detects and tracks detailed 3D poses of multiple people using a single monocular camera stream. This system maintains temporally coherent predictions in crowded scenes filled with difficult poses and occlusions, enabling online tracking through frames with high accuracy.
Key Features:
- Precise detection and tracking in crowded scenes
- Temporal coherence even with occlusions
- High accuracy in tracking multiple people over time
This project advances 3D human motion tracking by offering faster and more accurate tracking of multiple individuals compared to existing systems.
#AI #DeepLearning #3DTracking #ComputerVision #PoseEstimation
Trackers Library is Officially Released!
If you're working in computer vision and object tracking, this one's for you!
Trackers is a powerful open-source library with support for a wide range of detection models and tracking algorithms:
- Plug-and-play compatibility with detection models from Roboflow Inference, Hugging Face Transformers, Ultralytics, MMDetection, and more!
- Tracking algorithms supported: SORT, DeepSORT, and advanced trackers like StrongSORT, BoT-SORT, ByteTrack, OC-SORT, with even more coming soon!
Released under the permissive Apache 2.0 license, free for everyone to use and contribute.
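As a quick illustration of the plug-and-play idea, the sketch below feeds Ultralytics YOLO detections into SORT via the supervision package. The class and method names (SORTTracker, update) follow the quick-start material linked below, but treat them as assumptions and confirm against the docs.

```python
# Sketch of the plug-and-play flow: an Ultralytics detector feeds a SORT tracker frame by frame.
# SORTTracker / update() are taken from the quick-start material -- verify against the docs.
import supervision as sv
from trackers import SORTTracker
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
tracker = SORTTracker()
annotator = sv.LabelAnnotator()

def callback(frame, _index):
    detections = sv.Detections.from_ultralytics(model(frame)[0])
    detections = tracker.update(detections)  # assigns persistent tracker IDs
    labels = [str(tracker_id) for tracker_id in detections.tracker_id]
    return annotator.annotate(frame.copy(), detections, labels=labels)

sv.process_video(source_path="input.mp4", target_path="output.mp4", callback=callback)
```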
Huge thanks to Piotr Skalski for co-developing this library, and to Raif Olson and Onuralp SEZER for their outstanding contributions!
Links:
- GitHub
- Docs
- Quick-start notebooks for SORT and DeepSORT are linked below:
https://www.linkedin.com/posts/skalskip92_trackers-library-is-out-plugandplay-activity-7321128111503253504-3U6-?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEXwhVcBcv2n3wq8JzEai3TfWmKLRLTefYo
#ComputerVision #ObjectTracking #OpenSource #DeepLearning #AI
By: https://t.iss.one/DataScienceN
The new HQ-SAM (High-Quality Segment Anything Model) has just been added to the Hugging Face Transformers library!
This is an enhanced version of the original SAM (Segment Anything Model) introduced by Meta in 2023. HQ-SAM significantly improves the segmentation of fine and detailed objects, while preserving all the powerful features of SAM, including prompt-based interaction, fast inference, and strong zero-shot performance. That means you can easily switch to HQ-SAM wherever you used SAM!
The improvements come from just a few additional learnable parameters. The authors collected a high-quality dataset with 44,000 fine-grained masks from various sources, and impressively trained the model in just 4 hours using 8 GPUs, all while keeping the core SAM weights frozen.
The newly introduced parameters include:
* A High-Quality Token
* A Global-Local Feature Fusion mechanism
This work was presented at NeurIPS 2023 and still holds state-of-the-art performance in zero-shot segmentation on the SGinW benchmark.
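Since HQ-SAM is advertised as a drop-in replacement for SAM, a point-prompted call in Transformers would look roughly like the familiar SAM workflow below. The class and checkpoint names (SamHQModel, SamHQProcessor, "syscv-community/sam-hq-vit-base") are assumptions on my part; verify them against the linked documentation.

```python
# Rough sketch of a point-prompted call, mirroring the SAM workflow in Transformers.
# SamHQModel / SamHQProcessor and the checkpoint name are assumptions -- check the docs.
import torch
from PIL import Image
from transformers import SamHQModel, SamHQProcessor

model = SamHQModel.from_pretrained("syscv-community/sam-hq-vit-base")
processor = SamHQProcessor.from_pretrained("syscv-community/sam-hq-vit-base")

image = Image.open("example.jpg").convert("RGB")
input_points = [[[450, 600]]]  # one (x, y) prompt point, exactly as with the original SAM

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"], inputs["reshaped_input_sizes"]
)
print(masks[0].shape)  # (num_prompts, num_masks, height, width)
```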
Documentation: https://lnkd.in/e5iDT6Tf
Model Access: https://lnkd.in/ehS6ZUyv
Source Code: https://lnkd.in/eg5qiKC2
#ArtificialIntelligence #ComputerVision #Transformers #Segmentation #DeepLearning #PretrainedModels #ResearchAndDevelopment #AdvancedModels #ImageAnalysis #HQ_SAM #SegmentAnything #SAMmodel #ZeroShotSegmentation #NeurIPS2023 #AIresearch #FoundationModels #OpenSourceAI #SOTA
By: https://t.iss.one/DataScienceN
Powerful Combo: Ultralytics YOLO11 + Sony Semicon | AITRIOS (Global) Platform + Raspberry Pi
We've recently updated our Sony IMX model export to fully support YOLO11n detection models! This means you can now seamlessly run YOLO11n models directly on Raspberry Pi AI Cameras powered by the Sony IMX500 sensor, making it even easier to develop advanced Edge AI applications.
To test this new export workflow, I trained a model on the VisDrone dataset and exported it using the following command:
yolo export model=<path_to_drone_model> format=imx data=VisDrone.yaml
The video below shows the result of this process!
Benchmark results for YOLO11n on IMX500:
- Inference Time: 62.50 ms
- mAP50-95 (B): 0.644
Want to learn more about YOLO11 and Sony IMX500? Check it out here:
https://docs.ultralytics.com/integrations/sony-imx500/
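For completeness, the same export can presumably be run from the Ultralytics Python API; the short sketch below mirrors the CLI command above, with a placeholder model path (passing data for IMX export is my assumption based on the CLI form).

```python
# Python-API counterpart of the CLI export above (a sketch; the model path is a placeholder).
from ultralytics import YOLO

model = YOLO("path/to/visdrone_yolo11n.pt")       # YOLO11n fine-tuned on VisDrone
model.export(format="imx", data="VisDrone.yaml")  # quantize and package for the IMX500 sensor
```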
#EdgeAI #YOLO11 #SonyIMX500 #AITRIOS #ObjectDetection #RaspberryPiAI #ComputerVision #DeepLearning #OnDeviceAI #ModelDeployment
By: https://t.iss.one/DataScienceN
NVIDIA introduces GENMO, a unified generalist model for human motion that seamlessly combines motion estimation and generation within a single framework. GENMO supports conditioning on videos, 2D keypoints, text, music, and 3D keyframes, enabling highly versatile motion understanding and synthesis.
Currently, no official code release is available.
Review: https://t.ly/Q5T_Y
Paper: https://lnkd.in/ds36BY49
Project Page: https://lnkd.in/dAYHhuFU
#NVIDIA #GENMO #HumanMotion #DeepLearning #AI #ComputerVision #MotionGeneration #MachineLearning #MultimodalAI #3DReconstruction
Forwarded from Python | Machine Learning | Coding | R
LLM Interview Questions.pdf
71.2 KB
Top 50 LLM Interview Questions!
#LLM #AIInterviews #MachineLearning #DeepLearning #NLP #LLMInterviewPrep #ModelArchitectures #AITheory #TechInterviews #MLBasics #InterviewQuestions #LargeLanguageModels
Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Forwarded from Python | Machine Learning | Coding | R
10 GitHub repos to build a career in AI engineering:
(100% free step-by-step roadmap)
1. ML for Beginners by Microsoft
A 12-week project-based curriculum that teaches classical ML using Scikit-learn on real-world datasets.
Includes quizzes, lessons, and hands-on projects, with some videos.
GitHub repo → https://lnkd.in/dCxStbYv
2. AI for Beginners by Microsoft
This repo covers neural networks, NLP, CV, transformers, ethics & more. There are hands-on labs in PyTorch & TensorFlow using Jupyter.
Beginner-friendly, project-based, and full of real-world apps.
GitHub repo → https://lnkd.in/dwS5Jk9E
3. Neural Networks: Zero to Hero
Now that you've grasped the foundations of AI/ML, it's time to dive deeper.
This repo by Andrej Karpathy builds modern deep learning systems from scratch, including GPTs.
GitHub repo → https://lnkd.in/dXAQWucq
4. DL Paper Implementations
So far, you have learned the fundamentals of AI, ML, and DL. Now study how the best architectures work.
This repo covers well-documented PyTorch implementations of 60+ research papers on Transformers, GANs, Diffusion models, etc.
GitHub repo → https://lnkd.in/dTrtDrvs
5. Made With ML
Now it's time to learn how to go from notebooks to production.
Made With ML teaches you how to design, develop, deploy, and iterate on real-world ML systems using MLOps, CI/CD, and best practices.
GitHub repo → https://lnkd.in/dYyjjBGb
6. Hands-on LLMs
- You've built neural nets.
- You've explored GPTs and LLMs.
Now apply them. This is a visually rich repo that covers everything about LLMs, like tokenization, fine-tuning, RAG, etc.
GitHub repo → https://lnkd.in/dh2FwYFe
7. Advanced RAG Techniques
Hands-on LLMs will give you a good grasp of RAG systems. Now learn advanced RAG techniques.
This repo covers 30+ methods to make RAG systems faster, smarter, and more accurate, like HyDE, GraphRAG, etc.
GitHub repo → https://lnkd.in/dBKxtX-D
8. AI Agents for Beginners by Microsoft
After diving into LLMs and mastering RAG, learn how to build AI agents.
This hands-on course covers building AI agents using frameworks like AutoGen.
GitHub repo → https://lnkd.in/dbFeuznE
9. Agents Towards Production
The above course will teach what AI agents are. Next, learn how to ship them.
This is a practical playbook for building agents, covering memory, orchestration, deployment, security & more.
GitHub repo → https://lnkd.in/dcwmamSb
10. AI Engg. Hub
To truly master LLMs, RAG, and AI agents, you need projects.
This covers 70+ real-world examples, tutorials, and agent apps you can build, adapt, and ship.
GitHub repo → https://lnkd.in/geMYm3b6
#AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers
Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Trending Repository: Machine-Learning-Tutorials
Description: Machine learning and deep learning tutorials, articles, and other resources.
Repository URL: https://github.com/ujjwalkarn/Machine-Learning-Tutorials
Website: https://ujjwalkarn.github.io/Machine-Learning-Tutorials
Readme: https://github.com/ujjwalkarn/Machine-Learning-Tutorials#readme
Statistics:
- Stars: 16.6K
- Watchers: 797
- Forks: 3.9K
Programming Languages: Not available
Related Topics: #list #machine_learning #awesome #deep_neural_networks #deep_learning #neural_network #neural_networks #awesome_list #machinelearning #deeplearning #deep_learning_tutorial
==================================
By: https://t.iss.one/DataScienceN
Trending Repository: datascience
Description: This repository is a compilation of free resources for learning Data Science.
Repository URL: https://github.com/geekywrites/datascience
Website: https://twitter.com/geekywrites
Readme: https://github.com/geekywrites/datascience#readme
Statistics:
- Stars: 5.1K
- Watchers: 381
- Forks: 529
Programming Languages: Not available
Related Topics: #data_science #machine_learning #natural_language_processing #computer_vision #machine_learning_algorithms #artificial_intelligence #neural_networks #deeplearning #datascienceproject
==================================
By: https://t.iss.one/DataScienceN
Forwarded from Data Science Machine Learning Data Analysis
Detecting COVID-19 in X-ray images with Keras, TensorFlow, and Deep Learning
In this tutorial, you will learn how to automatically detect COVID-19 in a hand-created X-ray image dataset using Keras, TensorFlow, and Deep Learning. Like most people in the world right now, I'm genuinely concerned about COVID-19. I find myself constantly…
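As a rough idea of what such a classifier looks like, here is a minimal transfer-learning sketch in Keras (not the tutorial's exact code): it assumes a dataset/ folder with covid/ and normal/ subfolders and trains a small binary head on top of a frozen VGG16 backbone.

```python
# Minimal transfer-learning sketch in the spirit of the tutorial; the dataset layout
# (dataset/covid, dataset/normal) and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=(224, 224), batch_size=8, label_mode="binary")

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep ImageNet features frozen; train only the new head

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```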
#DeepLearning #KerasandTensorFlow #MedicalComputerVision #Tutorials