Topic: Python OpenCV – Part 4: Video Processing, Webcam Capture, and Real-Time Operations

---

1. Reading Video from File

import cv2

cap = cv2.VideoCapture('video.mp4')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow('Video Frame', frame)

    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
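
If playback speed matters, the fixed 25 ms delay can instead be derived from the file's own frame rate. A minimal sketch (the cv2.CAP_PROP_FPS query is standard OpenCV; the 25 ms fallback for files that report no FPS is an assumption):

import cv2

cap = cv2.VideoCapture('video.mp4')

# Query the frame rate stored in the file; some containers report 0.
fps = cap.get(cv2.CAP_PROP_FPS)
delay = int(1000 / fps) if fps > 0 else 25  # ms per frame; 25 ms fallback is an assumption

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow('Video Frame', frame)

    if cv2.waitKey(delay) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()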


---

2. Capturing Video from Webcam

cap = cv2.VideoCapture(0)  # 0 is usually the built-in webcam

while True:
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow('Webcam Feed', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()


---

3. Saving Video to File

fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    out.write(frame)  # write frame to output file
    cv2.imshow('Recording', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
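
A common pitfall: cv2.VideoWriter silently produces a broken or empty file if the frames you write do not match the (width, height) you passed in. If your camera does not deliver 640x480, query its actual resolution first. A minimal sketch of the adjusted setup, assuming the capture/write loop stays the same as above:

cap = cv2.VideoCapture(0)

# Ask the camera for its actual frame size instead of hard-coding 640x480
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (width, height))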


---

4. Real-Time Video Processing

• Example: Convert webcam feed to grayscale live.

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('Grayscale Webcam', gray)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()


---

Summary

• You can capture video from files or webcams easily with OpenCV.

• Real-time processing lets you modify frames on the fly (filters, detections).

• Saving video requires specifying codec, frame rate, and resolution.

---

Exercise

• Write a program that captures webcam video, applies Canny edge detection to each frame in real time, and saves the processed video to disk; exit when 'q' is pressed. One possible solution sketch is shown below.
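
One possible solution sketch, combining the webcam capture, real-time processing, and video-saving patterns from above (the Canny thresholds of 100/200, the 20.0 fps output rate, and the file name edges.avi are assumptions to tune for your setup):

import cv2

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('edges.avi', fourcc, 20.0, (width, height))

while True:
    ret, frame = cap.read()
    if not ret:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # assumed thresholds; tune for your scene

    # VideoWriter expects 3-channel frames by default, so convert the
    # single-channel edge map back to BGR before writing it.
    edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    out.write(edges_bgr)

    cv2.imshow('Canny Edges', edges)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()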

---

#Python #OpenCV #VideoProcessing #Webcam #RealTime

https://t.iss.one/DataScience4
# Open the video file
video_path = 'industrial_video.mp4'
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 inference on the frame
        results = model(frame)

        # A flag to check if fire was detected in the current frame
        fire_detected_in_frame = False

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Process detection results
        for r in results:
            for box in r.boxes:
                # Check if the detected class is 'fire'
                # model.names should map the class index to 'fire' in your custom model
                if model.names[int(box.cls[0])] == 'fire' and box.conf[0] > 0.5:
                    fire_detected_in_frame = True
                    break

        # If fire is detected and the alarm is not already on, trigger the alarm
        if fire_detected_in_frame and not alarm_on:
            alarm_on = True
            print("ALARM: Fire Detected!")
            # Run the alarm sound in a background thread so it does not block the video feed
            alarm_thread = threading.Thread(target=play_alarm)
            alarm_thread.start()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Fire Detection", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()

# Hashtags: #RealTimeDetection #VideoProcessing #OpenCV


---

#Step 4: Results and Discussion

After running the script, you will see a window playing the video. When the model detects an object it identifies as 'fire' with a confidence score above 50%, it will:
• Draw a colored box around the fire.
• Print "ALARM: Fire Detected!" to the console.
• Play the alarm.wav sound.

Discussion of Results:

• Model Performance: The accuracy of this system depends entirely on the quality of your custom-trained model (fire_model.pt). A model trained on a diverse dataset of industrial fires (different lighting, angles, sizes) will perform best.

• False Positives: The system might incorrectly identify orange/red lights, reflections, or welding sparks as fire. This is a common challenge; to reduce it, add more "negative" images (things that look like fire but aren't) to your training dataset.

• Thresholding: The confidence threshold (box.conf[0] > 0.5) is a critical parameter. A lower value increases the chance of detecting real fires but also increases false alarms; a higher value reduces false alarms but might miss smaller or less obvious fires. Tune this value for your specific environment.

• Real-World Implementation: For a real industrial facility, you would replace the video file with a live camera stream (cv2.VideoCapture(0) for a webcam) and integrate the alarm logic with a physical siren or a central monitoring system via an API or GPIO pins. A minimal sketch of these changes is shown below.
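
A minimal sketch of those two adjustments, assuming the same YOLOv8 inference and per-box check as in the code above (the CONFIDENCE_THRESHOLD value and the trigger_siren hook are illustrative placeholders, not part of the original project):

import cv2

# Tunable threshold instead of a hard-coded 0.5; lower = more sensitive, more false alarms
CONFIDENCE_THRESHOLD = 0.5  # assumption: tune per environment

def trigger_siren():
    # Hypothetical hook: call a monitoring-system API or drive a GPIO pin here.
    print("SIREN TRIGGERED")

# Live camera instead of a file: 0 for a local webcam,
# or an RTSP URL string for a networked IP camera.
cap = cv2.VideoCapture(0)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    # ... run the same YOLOv8 inference and per-box check as above,
    # comparing box.conf[0] against CONFIDENCE_THRESHOLD, and call
    # trigger_siren() when fire is detected ...

    cv2.imshow("Live Fire Monitoring", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()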

#ProjectComplete #AIforGood #IndustrialSafety

━━━━━━━━━━━━━━━
By: @DataScience4