Introducing CoMotion, a system that detects and tracks detailed 3D poses of multiple people from a single monocular camera stream. It maintains temporally coherent predictions in crowded scenes full of difficult poses and occlusions, enabling accurate online tracking across frames.
Key Features:
- Precise detection and tracking in crowded scenes
- Temporal coherence even with occlusions
- High accuracy in tracking multiple people over time
This project advances 3D human motion tracking by offering faster and more accurate tracking of multiple individuals compared to existing systems.
#AI #DeepLearning #3DTracking #ComputerVision #PoseEstimation
Trackers Library is Officially Released!
If you're working in computer vision and object tracking, this one's for you!
Trackers is a powerful open-source library with support for a wide range of detection models and tracking algorithms:
- Plug-and-play compatibility with detection models from Roboflow Inference, Hugging Face Transformers, Ultralytics, MMDetection, and more!
- Tracking algorithms supported: SORT, DeepSORT, and advanced trackers like StrongSORT, BoT-SORT, ByteTrack, and OC-SORT, with even more coming soon!
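Before you open the quick-start notebooks linked below, here is a from-scratch sketch of the core idea behind SORT-style tracking-by-detection: greedily associating each frame's detections with existing tracks by IoU. Real SORT also predicts track motion with a Kalman filter and solves the assignment optimally (Hungarian algorithm), and the trackers library's actual API may differ, so treat every name here as illustrative.

```python
# Greedy IoU association, the heart of SORT-style tracking-by-detection.
# Real SORT adds Kalman motion prediction and Hungarian matching.
from dataclasses import dataclass
from itertools import count


def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)


@dataclass
class Track:
    track_id: int
    box: tuple
    misses: int = 0  # consecutive frames without a matched detection


class GreedyIoUTracker:
    def __init__(self, iou_threshold=0.3, max_misses=3):
        self.iou_threshold = iou_threshold
        self.max_misses = max_misses
        self.tracks = []
        self._next_id = count(1)

    def update(self, detections):
        """detections: list of (x1, y1, x2, y2) boxes for the current frame."""
        unmatched = list(detections)
        for track in self.tracks:
            best = max(unmatched, key=lambda d: iou(track.box, d), default=None)
            if best is not None and iou(track.box, best) >= self.iou_threshold:
                track.box, track.misses = best, 0
                unmatched.remove(best)
            else:
                track.misses += 1
        # Leftover detections start new tracks; stale tracks are dropped.
        self.tracks += [Track(next(self._next_id), d) for d in unmatched]
        self.tracks = [t for t in self.tracks if t.misses <= self.max_misses]
        return [(t.track_id, t.box) for t in self.tracks]


tracker = GreedyIoUTracker()
print(tracker.update([(0, 0, 10, 10)]))  # frame 1: new track gets ID 1
print(tracker.update([(1, 1, 11, 11)]))  # frame 2: same object keeps ID 1
```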
Released under the permissive Apache 2.0 license, free for everyone to use and contribute.
Huge thanks to Piotr Skalski for co-developing this library, and to Raif Olson and Onuralp SEZER for their outstanding contributions!
Links:
- GitHub
- Docs
- Quick-start notebooks for SORT and DeepSORT (linked below)
https://www.linkedin.com/posts/skalskip92_trackers-library-is-out-plugandplay-activity-7321128111503253504-3U6-?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEXwhVcBcv2n3wq8JzEai3TfWmKLRLTefYo
#ComputerVision #ObjectTracking #OpenSource #DeepLearning #AI
By: https://t.iss.one/DataScienceN
Forwarded from ENG. Hussein Sheikho
Job opportunity: remote work.
No qualifications or experience are required; the company provides full training.
Working hours are flexible.
Register, and you will then be contacted to attend an introductory meeting about the job and the company.
https://forms.gle/hqUZXu7u4uLjEDPv8
Forwarded from Python Courses
Introducing Unidrone v1.0: The Next Generation of Aerial Object Detection Models
We are excited to present Unidrone v1.0, a powerful collection of AI detection models based on YOLOv8, specially designed for object recognition in drone imagery.
What is Unidrone?
Unidrone is a smart fusion of two previous models: WALDO (optimized for nadir/overhead views) and NANO (designed for forward-looking angles). Now you no longer need to choose between them; Unidrone handles both angles with high accuracy!
These models accurately detect objects in drone images taken from altitudes of approximately 50 to 1,000 feet, regardless of camera angle.
Supported Object Classes:
0. Person (walking, biking, swimming, skiing, etc.)
1. Bike & motorcycle
2. Light vehicles (cars, vans, ambulances, etc.)
3. Trucks
4. Bus
5. Boat & floating objects
6. Construction vehicles (e.g., tractors, loaders)
Note: This version of Unidrone does not include military-related classes or smoke detection. It's built solely for civilian and safety-focused applications.
Use Cases:
- Disaster recovery operations
- Wildlife and protected area monitoring
- Occupancy analysis (e.g., parking lots)
- Infrastructure surveillance
- Search and rescue (SAR)
- Crowd counting
- Ground-risk mitigation for drones
The models are available in .pt format and can easily be exported to ONNX or TFLite. They also support visualization with Roboflow's Supervision library for clean, annotated outputs.
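As a minimal sketch of that workflow, assuming the checkpoints follow the standard Ultralytics YOLOv8 format (the filenames below are hypothetical; use the actual .pt files from the Hugging Face repo):

```python
# Hedged sketch: load a Unidrone checkpoint with the Ultralytics API,
# run detection on one aerial image, and export to ONNX.
from ultralytics import YOLO

model = YOLO("unidrone.pt")          # hypothetical path to a downloaded checkpoint
results = model("drone_frame.jpg")   # hypothetical test image

for box in results[0].boxes:         # class id, confidence, box corners
    print(int(box.cls), float(box.conf), box.xyxy.tolist())

model.export(format="onnx")          # or format="tflite" for edge devices
```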
If you're a machine learning practitioner, you can:
- Fine-tune the models on your own dataset
- Optimize for fast inference on edge devices
- Quantize and deploy on low-cost hardware
- Use the models to auto-label your own data
If you're facing detection issues or want to contribute to future improvements, feel free to contact the developer:
[email protected]
Enjoy exploring the power of Unidrone v1.0!
https://huggingface.co/StephanST/unidrone
By: https://t.iss.one/DataScienceN
Retail Fashion Sales Data Analysis
Here's a fascinating project in the field of data analysis, focused on real-world fashion retail sales. The dataset contains 3,400 records of customer purchases, including item types, purchase amounts, customer ratings, and payment methods.
Project Goals:
- Understand customer purchasing behavior
- Identify the most popular products
- Analyze preferred payment methods
The dataset was first cleaned using Pandas to handle missing values, and then insightful visualizations were created with Matplotlib to reveal hidden patterns in the data.
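Here is a hedged sketch of those cleaning and visualization steps. The file name and column names (purchase_amount, review_rating, item_purchased, payment_method) are assumptions; adjust them to the actual dataset schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("fashion_retail_sales.csv")  # hypothetical filename

# Handle missing values: drop rows with no purchase amount and
# fill missing ratings with the column median.
df = df.dropna(subset=["purchase_amount"])
df["review_rating"] = df["review_rating"].fillna(df["review_rating"].median())

# Most popular products.
df["item_purchased"].value_counts().head(10).plot(kind="barh", title="Top 10 items sold")
plt.tight_layout()
plt.show()

# Preferred payment methods.
df["payment_method"].value_counts().plot(kind="pie", autopct="%1.0f%%", title="Payment methods")
plt.ylabel("")
plt.show()
```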
Data source: https://lnkd.in/dbGbuhG7
Check out the full notebook here: https://lnkd.in/dhnJpk47
If you're interested in customer behavior analytics and working with real-world retail data, this project is a great source of insight!
By: https://t.iss.one/DataScienceN
Forwarded from Python | Machine Learning | Coding | R
This channel is for Programmers, Coders, and Software Engineers.
- Python
- Data Science
- Machine Learning
- Data Visualization
- Artificial Intelligence
- Data Analysis
- Statistics
- Deep Learning
- Programming Languages
https://t.iss.one/addlist/8_rRW2scgfRhOTc0
https://t.iss.one/Codeprogrammer
The new HQ-SAM (High-Quality Segment Anything Model) has just been added to the Hugging Face Transformers library!
This is an enhanced version of the original SAM (Segment Anything Model) introduced by Meta in 2023. HQ-SAM significantly improves the segmentation of fine and detailed objects while preserving all the powerful features of SAM, including prompt-based interaction, fast inference, and strong zero-shot performance. That means you can easily switch to HQ-SAM wherever you used SAM!
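Because HQ-SAM is billed as a drop-in replacement, the familiar SAM workflow in Transformers carries over. Below is a minimal sketch with the standard SAM checkpoint; swap in the HQ-SAM checkpoint and classes named in the linked docs. The image path and prompt coordinates are placeholders.

```python
# Minimal point-prompted segmentation with SAM in Transformers. Per the post,
# an HQ-SAM checkpoint can be swapped in wherever SAM is used; check the
# linked docs for the exact HQ-SAM class and checkpoint names.
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base")

image = Image.open("example.jpg").convert("RGB")   # placeholder image path
input_points = [[[450, 600]]]                      # one (x, y) point prompt

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale the predicted low-resolution masks to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape)  # (num_prompts, masks_per_prompt, height, width)
```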
The improvements come from just a few additional learnable parameters. The authors collected a high-quality dataset with 44,000 fine-grained masks from various sources, and impressively trained the model in just 4 hours using 8 GPUs, all while keeping the core SAM weights frozen.
The newly introduced parameters include:
* A High-Quality Token
* A Global-Local Feature Fusion mechanism
This work was presented at NeurIPS 2023 and still holds state-of-the-art performance in zero-shot segmentation on the SegInW (Segmentation in the Wild) benchmark.
Documentation: https://lnkd.in/e5iDT6Tf
Model Access: https://lnkd.in/ehS6ZUyv
Source Code: https://lnkd.in/eg5qiKC2
#ArtificialIntelligence #ComputerVision #Transformers #Segmentation #DeepLearning #PretrainedModels #ResearchAndDevelopment #AdvancedModels #ImageAnalysis #HQ_SAM #SegmentAnything #SAMmodel #ZeroShotSegmentation #NeurIPS2023 #AIresearch #FoundationModels #OpenSourceAI #SOTA
https://t.iss.one/DataScienceN
Forwarded from Python | Machine Learning | Coding | R
Your balance has been credited with $4,000; the owner of the channel wants to contact you!
Dear subscriber, thank you very much for supporting our channel. As a token of our gratitude, we would like to provide you with free access to Lisa's investor channel, with whose help you can start earning today.
t.iss.one/Lisainvestor
Be sure to take advantage of our gift, admission is free, don't miss the opportunity, change your life for the better.
You can follow the link:
https://t.iss.one/+0DQSCADFTUA3N2Qx
Follow me on LinkedIn for more projects and jobs
https://www.linkedin.com/in/hussein-sheikho-4a8187246
Forwarded from Python | Machine Learning | Coding | R
This channel is for Programmers, Coders, and Software Engineers.
- Python
- Data Science
- Machine Learning
- Data Visualization
- Artificial Intelligence
- Data Analysis
- Statistics
- Deep Learning
- Programming Languages
https://t.iss.one/addlist/8_rRW2scgfRhOTc0
https://t.iss.one/Codeprogrammer
Powerful Combo: Ultralytics YOLO11 + Sony Semicon | AITRIOS (Global) Platform + Raspberry Pi
We've recently updated our Sony IMX model export to fully support YOLO11n detection models! This means you can now seamlessly run YOLO11n models directly on Raspberry Pi AI Cameras powered by the Sony IMX500 sensor, making it even easier to develop advanced Edge AI applications.
To test this new export workflow, I trained a model on the VisDrone dataset and exported it using the following command:
yolo export model=<path_to_drone_model> format=imx data=VisDrone.yaml
The video below shows the result of this process!
Benchmark results for YOLO11n on IMX500:
- Inference Time: 62.50 ms
- mAP50-95 (B): 0.644
Want to learn more about YOLO11 and Sony IMX500? Check it out here:
https://docs.ultralytics.com/integrations/sony-imx500/
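For anyone scripting the same flow, here is a hedged Python-API equivalent of the CLI command above. The fine-tuned checkpoint name is hypothetical, and IMX export assumes the Sony IMX500 toolchain is set up as described in the Ultralytics integration docs.

```python
# Hedged Python-API equivalent of the CLI export above.
from ultralytics import YOLO

model = YOLO("yolo11n_visdrone.pt")               # hypothetical VisDrone-trained weights
model.export(format="imx", data="VisDrone.yaml")  # produces an IMX500-ready package
```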
#EdgeAI #YOLO11 #SonyIMX500 #AITRIOS #ObjectDetection #RaspberryPiAI #ComputerVision #DeepLearning #OnDeviceAI #ModelDeployment
https://t.iss.one/DataScienceN
PIA S5 Proxy solves all problems for AI developers.
Why top AI teams choose PIA S5 Proxy:
- SOCKS5 proxy: as low as $0.045/IP
  - Global high-quality IPs | No traffic limit / static IPs
  - High success rate >99.9% | Ultra-low latency | Stable anti-ban
  - Smart crawling API with support for seamless integration
- Unlimited-traffic proxy: only $79/day
  - Unlimited traffic | Unlimited concurrency | Bandwidth over 100 Gbps | Customization supported
  - Best for large-scale AI / LLM data collection
  - Save up to 90% on crawling costs
Exclusive for new users: enter the coupon code [AI Python] to enjoy a 10% discount!
Buy now: https://www.piaproxy.com/?co=piaproxy&ck=?ai
NVIDIA introduces GENMO, a unified generalist model for human motion that seamlessly combines motion estimation and generation within a single framework. GENMO supports conditioning on videos, 2D keypoints, text, music, and 3D keyframes, enabling highly versatile motion understanding and synthesis.
Currently, no official code release is available.
Review: https://t.ly/Q5T_Y
Paper: https://lnkd.in/ds36BY49
Project Page: https://lnkd.in/dAYHhuFU
#NVIDIA #GENMO #HumanMotion #DeepLearning #AI #ComputerVision #MotionGeneration #MachineLearning #MultimodalAI #3DReconstruction
Forwarded from Thomas
+$30,560 from $300 in a month of trading! We can teach you how to earn, FREE!
It was a challenge: a marathon from $300 to $30,000 in trading, together with Lisa!
What is the essence of earning? "Analyze and open a deal on the exchange, knowing where the currency rate will go. Lisa trades every day and posts signals on her channel for free."
- Start: $150
- Goal: $20,000
- Period: 1.5 months
Join and get started; there will be no second chance.
https://t.iss.one/+HjHm7mxR5xllNTY5