I'm working on a Python script that controls an LED light based on object detection results from a subprocess

The goal is:

  • Turn the LED red when a face is detected.
  • Turn the LED another colour (turquoise) when only a person is
    detected.

Face detection should have priority over person detection. So if both a face and a person are detected, the LED should be red.

Issue:

I’m having trouble with the loop logic that processes the detection output. When both a face and a person are detected, the LED keeps switching between red and turquoise instead of staying red, and with the timestamp- and flag-based fixes I’ve tried, it sometimes holds on red even after the face is gone.


What I’ve Tried:

  1. Using Timestamps: I attempted to use timestamps to add a delay, but
    the LED sometimes stays on red even when no face is present.
  2. Using Flags: I tried using flags to manage the state, but I still
    can’t get the loop to prioritize face detection properly.
  3. Consulted ChatGPT: I sought assistance, but the suggested code
    didn’t resolve the issue and sometimes made it more complicated.

Question:

How can I modify my loop logic so that:

  1. The LED turns red and stays red when a face is detected, regardless
    of person detection.
  2. The LED turns turquoise only when a person is detected, and no face
    is detected.
  3. The LED doesn’t flicker or switch rapidly between colors when both
    detections are present.
  4. The LED returns to green when no detection occurs and eventually to
    blue after a period of inactivity.
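To make the intended behavior concrete, the four points above can be expressed as a single pure decision function driven by last-seen timestamps (a sketch only: the `hold` and `idle_timeout` parameter names are illustrative, and the caller would pass `time.time()` values initialized to the script's start time):

```python
# Color constants (RGB), matching the values used in my script
RED = (255, 0, 0)           # face detected (highest priority)
TURQUOISE = (64, 224, 208)  # person detected, no face
GREEN = (0, 255, 0)         # no recent detection
BLUE = (0, 0, 255)          # prolonged inactivity

def choose_color(last_face, last_person, now, hold=1.0, idle_timeout=10.0):
    """Pick the LED color from the timestamps of the last detections.

    last_face / last_person are time.time() values of the most recent
    face and person detections (initialize both to the start time).
    A detection keeps its color for `hold` seconds, which prevents
    flicker; after `idle_timeout` seconds with no detection at all,
    the color falls back from green to blue.
    """
    if now - last_face <= hold:
        return RED        # face always wins over person
    if now - last_person <= hold:
        return TURQUOISE  # person only
    if now - max(last_face, last_person) > idle_timeout:
        return BLUE       # long inactivity
    return GREEN          # detections stopped recently
```

Because the face check comes first and nothing later in the function can override it within the hold window, a person detection in the same frame cannot flip the color back to turquoise.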

I believe the issue lies in the way I’m handling the detection flags and in the loop that processes the subprocess output. Any guidance or suggestions on how to fix this would be greatly appreciated.

Additional Information:

  • Environment: Running on a Raspberry Pi with a camera module.
  • Dependencies: The set_light_color function controls the LED hardware.
  • Subprocess: The rpicam-hello command runs an external detection
    system that outputs detection results to stdout.
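One structural point about my script below: `for line in iter(process.stdout.readline, ''):` blocks, so any state logic after that inner loop only runs once the subprocess exits. For context, one common non-blocking pattern (a generic sketch, not the code I'm currently running) is a reader thread that pumps stdout into a queue the main loop can poll:

```python
import queue
import subprocess
import threading

def start_line_reader(cmd):
    """Launch cmd and stream its stdout lines into a queue without blocking.

    The main loop can then poll the queue with get_nowait() and still run
    its LED state logic on every iteration, even when no output arrives.
    """
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        text=True, bufsize=1,
    )
    lines = queue.Queue()

    def pump():
        # Runs in the background; blocking here does not stall the main loop
        for line in proc.stdout:
            lines.put(line.rstrip("\n"))
        lines.put(None)  # sentinel: subprocess has ended

    threading.Thread(target=pump, daemon=True).start()
    return proc, lines
```

The main loop would then drain the queue with `get_nowait()` each pass and fall through to the LED state handling even when no new lines have arrived.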

import os
import time
import subprocess
import threading
import numpy as np
import re
from trak_utils import (
    set_light_color,
    pan_goto,
    move_to_random_position,
    get_key_press,
    camera_lock,
    pan_max_left,
    pan_max_right,
    tilt_max_top,
    tilt_max_bottom,
)
import trak_overlay

# Define starting positions for the pan/tilt system
pan_start_x = 0
pan_start_y = 0

# Define camera resolution (width and height)
CAMERA_WIDTH = 320
CAMERA_HEIGHT = 200


# Command to run the external detection system
command = [
    "rpicam-hello",
    "-t", "0",
    "--post-process-file", "/home/kazber/rpicam-apps/assets/hailo_yolov5_personface.json",
    "--lores-width", "640",
    "--lores-height", "640",
    "-v", "2",
]

env = os.environ.copy()
env["PYTHONUNBUFFERED"] = "1"

# Flag to track human control mode
human_control_active = False

# Run the detection in a separate subprocess
process = subprocess.Popen(
    command,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
    bufsize=1,
    env=env,
)

def track_person(x, y, width, height):
    print(f"Tracking person at x={x}, y={y}, width={width}, height={height}")
    set_light_color(64, 224, 208)  # Turquoise for person detection

def track_face(x, y, width, height):
    print(f"Tracking face at x={x}, y={y}, width={width}, height={height}")
    set_light_color(255, 0, 0)  # Red for face detection

def track_motion_and_faces():
    """Main loop for tracking motion and faces with the new detection system."""
    global human_control_active, pan_cx, pan_cy

    # Initialize pan and tilt start positions
    pan_cx, pan_cy = pan_start_x, pan_start_y
    no_motion_time = time.time()
    blue_led_on = False
    in_scan_mode = False
    locked_on = False  # Indicates if we are locked onto a motion/face
    lock_timeout = 30  # Number of frames to wait before unlocking if no new detection

    frame_counter = 0  # Counter to keep track of frames without detection

    last_detection_time = 0
    face_detected = False
    person_detected = False

    while True:
        # Process output from the detection system
        # NOTE: iter(readline, '') blocks until the subprocess closes stdout,
        # so the state-handling code below this for-loop only runs afterwards
        for line in iter(process.stdout.readline, ''):
            cleaned_line = line.strip()
            print(f"Detection output: {cleaned_line}")

            # Check for object detection
            if "Object:" in cleaned_line:
                if "face" in cleaned_line:
                    face_detected = True
                    person_detected = False

                    last_detection_time = time.time()
                    match = re.search(
                        r'Object: face\[\d+\] \(\d+\.\d+\) @ (\d+),(\d+) (\d+)x(\d+)',
                        cleaned_line,
                    )
                    if match:
                        x, y, width, height = map(int, match.groups())
                        track_face(x, y, width, height)
                elif "person" in cleaned_line and "face" not in cleaned_line:
                    person_detected = True
                    face_detected = False

                    last_detection_time = time.time()
                    match = re.search(
                        r'Object: person\[\d+\] \(\d+\.\d+\) @ (\d+),(\d+) (\d+)x(\d+)',
                        cleaned_line,
                    )
                    if match:
                        x, y, width, height = map(int, match.groups())
                        track_person(x, y, width, height)

        # Check if either face or person were detected within the last second
        current_time = time.time()
        if current_time - last_detection_time <= 1:
            if face_detected:
                set_light_color(255, 0, 0)  # Red for face detection
                locked_on = True
                frame_counter = 0
            elif person_detected and not face_detected:
                set_light_color(64, 224, 208)  # Turquoise for person detection
                locked_on = False
                frame_counter = 0
        else:
            set_light_color(0, 255, 0)  # Green for no detection
            locked_on = False

        # Handle no detection case
        if not locked_on:
            if time.time() - no_motion_time > 10 and not blue_led_on:
                print("No motion detected for 10 seconds, setting LED to blue.")
                set_light_color(0, 0, 255)  # Set to blue for no motion
                blue_led_on = True
            elif blue_led_on and not in_scan_mode:
                print("Resuming scan mode.")
                in_scan_mode = True

        # Reset `locked_on` after a certain number of frames with no detection
        if locked_on:
            frame_counter += 1
            if frame_counter >= lock_timeout:
                print("Lock timeout reached, resetting lock.")
                locked_on = False
                frame_counter = 0

if __name__ == '__main__':
    try:
        track_motion_and_faces()
    except KeyboardInterrupt:
        print("\nExiting program")
        set_light_color(0, 0, 0)  # Turn off LEDs on exit
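As a side note on the parsing, the two near-identical regexes for face and person could be collapsed into one pattern (a sketch based on the `Object:` line format my regexes already assume):

```python
import re

# Matches lines like: "Object: face[0] (0.95) @ 12,34 56x78"
# (format taken from the regexes in the script above)
DETECTION_RE = re.compile(
    r'Object: (face|person)\[\d+\] \((\d+\.\d+)\) @ (\d+),(\d+) (\d+)x(\d+)'
)

def parse_detection(line):
    """Return (label, confidence, x, y, w, h), or None for other lines."""
    m = DETECTION_RE.search(line)
    if m is None:
        return None
    label, conf, x, y, w, h = m.groups()
    return label, float(conf), int(x), int(y), int(w), int(h)
```

This keeps the label as data instead of duplicating the match-and-track branch per class.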


