Verdin iMX8M Plus face blur using OpenCV

Hi, has anybody done face blur or face recognition using OpenCV on the Verdin iMX8M Plus board with Python?
I am facing a pipeline error and a "Lost Frame Start" error. I am using a mainline kernel with BSP 7.

Hi, I’m the Toradex AI Assistant.

I might make mistakes or occasionally misunderstand you. Our support team will step in if needed.


OpenCV Face Recognition/Blur on Verdin iMX8M Plus

I’d be glad to help! Could you let me know which OS and version you are using?

To guide you properly with your OpenCV face recognition/blur implementation and troubleshoot the pipeline and frame start errors, I need to understand your setup better. You can use the tdx-info command to get detailed information about your system - more details available at this link.

Available OpenCV Resources

For Torizon OS

Toradex provides a comprehensive OpenCV sample for computer vision applications(1). The sample demonstrates how to use OpenCV with Python on arm64v8-based SoMs such as the Verdin iMX8M Plus(1).

You can quickly test the sample without building by running(1):

# wget https://github.com/toradex/torizon-samples/raw/bookworm/opencv/docker-compose.yaml  
# docker-compose -f docker-compose.yaml up  

(1)

For Yocto BSP

OpenCV is also available for Yocto-based images(2). The documentation includes a face detection example using Haar Cascade classifiers(2).
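Once a detector returns face rectangles, the blur step itself is small. Below is a minimal, hypothetical sketch of blurring a detected region; it uses a plain NumPy box filter so it runs anywhere, whereas a real pipeline would simply call `cv2.GaussianBlur` on the ROI slice of the frame. The helper name `blur_roi` and the kernel radius are illustrative assumptions, not part of the Toradex sample:

```python
import numpy as np

def blur_roi(img, x, y, w, h, k=2):
    """Box-blur a rectangular region of a grayscale image in place.

    In a real OpenCV pipeline you would write:
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (25, 25), 0)
    This NumPy version only illustrates the ROI handling.
    """
    roi = img[y:y+h, x:x+w].astype(np.float32)
    # Replicate edges so the averaging window stays inside the padded ROI.
    padded = np.pad(roi, k, mode='edge')
    out = np.zeros_like(roi)
    # Sum all (2k+1)x(2k+1) shifted copies, then divide: a simple box filter.
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy:k + dy + h, k + dx:k + dx + w]
    img[y:y+h, x:x+w] = (out / (2 * k + 1) ** 2).astype(img.dtype)
    return img
```

With detections from `detectMultiScale`, the face-blur loop is then just `for (x, y, w, h) in faces: blur_roi(frame, x, y, w, h)`.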

Troubleshooting Pipeline and Frame Errors

Based on the community discussions, pipeline errors and frame start issues on Verdin iMX8M Plus can be related to:

  1. Video device resource management - Multiple open/close cycles can cause video devices to lock(3)
  2. CSI interface timing issues - Some modules may experience inconsistent behavior with camera interfaces(4)
  3. Power management - Chips may enter low power mode unexpectedly(4)

Once you provide your OS version and setup details, I can give you more specific guidance for implementing face recognition/blur and resolving your pipeline errors.

Best regards, Toradex AI Assistant.


Did you like my answer? :+1: :-1:

If I answered your question, please tick the “Solution” checkbox.

For follow-up questions to me, please use @ToradexAI in your post.


I’d be glad to help! Could you let me know which OS and version you are using?

To provide you with the most relevant Python script examples for your face recognition/blur project, I need to understand your platform setup better. You can use the tdx-info command to get detailed information about your system - more details available at this link.

Available Python OpenCV Examples

Torizon OS OpenCV Sample

For Torizon OS, there’s a complete OpenCV sample with Python that you can use as a starting point(1). The main example code loads and displays an image in grayscale(1):

import numpy as np  
import cv2  
  
# Load a color image in grayscale  
img = cv2.imread('106926-verdin-imx8mm-front-view.jpg', cv2.IMREAD_GRAYSCALE)  
cv2.imshow('image', img)  
cv2.waitKey(20000) # Time in milliseconds  
cv2.destroyAllWindows()  

(1)

Yocto BSP Python Examples

For Yocto BSP, there are Python examples for GPIO control and RTC access(2). Here’s a GPIO blinking LED example(2):

#!/usr/bin/env python  
  
import time  
import os.path  
import traceback  
  
GPIO_RESET    = False  # Whether GPIOs should be re-exported  
GPIO_PATH     = "/sys/class/gpio"  
GPIO_DIR_OUT  = "out"  
GPIO_VAL_HI   = "1"  
GPIO_VAL_LO   = "0"  
GPIO_CHAN_NUM = "146"  # GPIO1 on Apalis T30  
  
BLINK_PERIOD  = 500   # Blink period (milliseconds)  
BLINK_DUTY    = 0.25  # Blink duty cycle (fraction)  
  
def main():  
    try:   
        ### Initialize GPIO - optionally reset if already initialized  
  
        ## Note: GPIOs which are already used by drivers cannot be controlled from sysfs,   
        ## unless a driver explicitly exported that particular pin's GPIO.  
  
        # Open GPIO export & unexport files  
        exportFile = open(GPIO_PATH+'/export', 'w')  
        unexportFile = open(GPIO_PATH+'/unexport', 'w')  
  
        # Unexport GPIO if it exists and GPIO_RESET is enabled  
        exportExists = os.path.isdir(GPIO_PATH+'/gpio'+GPIO_CHAN_NUM)  
        if exportExists and GPIO_RESET:  
            unexportFile.write(GPIO_CHAN_NUM)  
            unexportFile.flush()  
              
        # Export GPIO  
        if not exportExists or GPIO_RESET:  
            exportFile.write(GPIO_CHAN_NUM)  
            exportFile.flush()  
  
        # Open GPIO direction file to set direction  
        directionFile = open(GPIO_PATH+'/gpio'+GPIO_CHAN_NUM+'/direction','w')  
              
        # Set GPIO direction to "out"  
        directionFile.write(GPIO_DIR_OUT)  
        directionFile.flush()  
              
        # Open GPIO value file to set value  
        valueFile = open(GPIO_PATH+'/gpio'+GPIO_CHAN_NUM+'/value','w')  
              
        # Loop indefinitely  
        while True:  
                  
            # Set GPIO value to HI  
            valueFile.write(GPIO_VAL_HI)  
            valueFile.flush()  
          
            # Sleep for blink on duration  
            time.sleep(BLINK_PERIOD*BLINK_DUTY/1000.0)  
          
            # Set GPIO value to LO  
            valueFile.write(GPIO_VAL_LO)  
            valueFile.flush()  
                  
            # Sleep for blink off duration  
            time.sleep(BLINK_PERIOD*(1.0-BLINK_DUTY)/1000.0)  
  
    except Exception:  
        print(traceback.format_exc())  
  
    return  
  
if __name__ == "__main__":  
    main()  

(2)

Getting Complete Source Code

To get the complete Torizon OpenCV sample project, you can clone the repository(1):

$ cd ~  
$ git clone -b bookworm https://github.com/toradex/torizon-samples.git  

(1)

The documentation mentions that OpenCV can use the cv2.VideoCapture function to capture frames from video inputs, such as external cameras connected over MIPI CSI or USB(1). This would be essential for your face recognition/blur implementation.
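When `cv2.VideoCapture` is opened with the `cv2.CAP_GSTREAMER` backend, it accepts a full GStreamer pipeline string. A small, hypothetical helper for composing such a string may keep the caps consistent; the default device path, format, and size here are assumptions and must be matched to what your sensor actually outputs (check with `v4l2-ctl --list-formats-ext`):

```python
def gst_pipeline(device="/dev/video0", width=800, height=600, fps=30, fmt="NV12"):
    """Compose a v4l2src-to-appsink pipeline string for cv2.CAP_GSTREAMER.

    The caps mirror the pipelines used elsewhere in this thread; all
    parameter defaults are assumptions, not verified camera settings.
    """
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,format={fmt},width={width},height={height},framerate={fps}/1 ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink"
    )
```

Typical use on the target would then be `cap = cv2.VideoCapture(gst_pipeline("/dev/video2"), cv2.CAP_GSTREAMER)`.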

Once you provide your OS version and platform details, I can give you more specific guidance for implementing face recognition with OpenCV and resolving your pipeline errors.

Best regards, Toradex AI Assistant.



Hey @ToradexAI @lucas_az.tx I tried the Torizon .cpp example on my board, but why do I get this error?

root@verdin-imx8mp-15460278:~# ./test-cpp
[ 1287.320910] imx-mipi-csis 32e40000.csi: SOT Error events: 1
[ 1287.320981] imx-mipi-csis 32e40000.csi: Lost Frame Start Error events: 1810
[ 1287.321037] imx-mipi-csis 32e40000.csi: ECC Error events: 1
[ WARN:0@0.451] global /usr/src/debug/opencv/4.6.0.imx/modules/videoio/src/cap_gstreamer.cpp (1405) open OpenCV | GStreamer warning: Cannot query video 1
[ 1] Failed to open device: No such file or directory, Try again…
[ 2] Failed to open device: No such file or directory, Try again…
[ 3] Failed to open device: No such file or directory, Try again…
[ 4] Failed to open device: No such file or directory, Try again…
[ 5] _OpenDevice(1036): FATAL: Failed to open device, errno=No such file or directory.

I am running a mainline kernel on Yocto BSP 7.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/objdetect.hpp>

#include <iostream>
#include <sstream>
#include <stdio.h>

using namespace std;
using namespace cv;

CascadeClassifier face_cascade;
string window_name = "Face Detection Demo";
String face_cascade_name = "/root/haarcascade_frontalface_default.xml";
const int BORDER = 8;   /* border between GUI elements and the edge of the image */

template <typename T> string toString(T t)
{
        ostringstream out;
        out << t;

        return out.str();
}

/*
 * Draw text into an image. Defaults to top-left-justified text,
 * but you can give negative x coords for right-justified text,
 * and/or negative y coords for bottom-justified text
 * Returns the bounding rect around the drawn text
*/
Rect drawString(Mat img, string text, Point coord, Scalar color,
                float fontScale = 0.6f, int thickness = 1, int fontFace = FONT_HERSHEY_COMPLEX)
{
        /* Get the text size & baseline */
        int baseline = 0;
        Size textSize = getTextSize(text, fontFace, fontScale, thickness, &baseline);
        baseline += thickness;

        /* Adjust the coords for left/right-justified or top/bottom-justified */
        if (coord.y >= 0) {
                /*
                 * Coordinates are for the top-left corner of the text
                 * from the top-left of the image, so move down by one row.
                 */
                coord.y += textSize.height;
        } else {
                /*
                 * Coordinates are for the bottom-left corner of the text
                 * from the bottom-left of the image, so come up from the bottom
                 */
                coord.y += img.rows - baseline + 1;
        }

        /* Become right-justified if desired */
        if (coord.x < 0) {
                coord.x += img.cols - textSize.width + 1;
        }

        /* Get the bounding box around the text */
        Rect boundingRect = Rect(coord.x, coord.y - textSize.height, textSize.width, baseline + textSize.height);

        /* Draw anti-aliased text */
        //putText(img, text, coord, fontFace, fontScale, color, thickness, CV_AA);
        putText(img, text, coord, fontFace, fontScale, color, thickness, cv::LINE_AA);


        /* Let the user know how big their text is, in case they want to arrange things */
        return boundingRect;
}

int main(int argc, const char** argv)
{
        VideoCapture capture;
        Mat frame;
        std::vector<Rect> faces;
        Mat frame_gray;


        if (!face_cascade.load( face_cascade_name ) ) {
                printf("--(!)Error loading training file: %s\n", face_cascade_name.c_str());
                return -1;
        };

        try {
                std::string pipeline =
                "v4l2src device=/dev/video0 io-mode=mmap ! "
                "video/x-raw,format=NV16,width=800,height=600,framerate=30/1 ! "
                "videoconvert ! video/x-raw,format=BGR ! appsink";

                capture.open(pipeline, cv::CAP_GSTREAMER);

        //      capture.open("v4l2:///dev/video0");
              capture.set(cv::CAP_PROP_FRAME_WIDTH, 800);
              capture.set(cv::CAP_PROP_FRAME_HEIGHT, 600);

        }
        catch (cv::Exception &e)
        {
                const char *err_msg = e.what();
                cout << "Exception caught: " << err_msg << endl;
        }

        if ( !capture.isOpened() ) {
                cout << "ERROR: Could not access the camera!" << endl;
                exit(1);
        }

        while(true) {
                capture >> frame;

                if (!frame.empty()) {


                        cvtColor(frame, frame_gray, cv::COLOR_BGR2GRAY);

                        equalizeHist(frame_gray, frame_gray);

                        face_cascade.detectMultiScale(frame_gray,faces,1.2,3,0,cv::Size(80, 80));


                        for (size_t i = 0; i < faces.size(); i++) {
                                cv::Point pt1(faces[i].x, faces[i].y);
                                cv::Point pt2(faces[i].x + faces[i].width,
                                              faces[i].y + faces[i].height);

                                cv::rectangle(frame, pt1, pt2,cv::Scalar(0, 255, 0),3);

                                Mat faceROI = frame_gray(faces[i]);
                        }
                        string stringToDisplay = "Number Of Faces: " + toString(faces.size());

                        drawString(frame, stringToDisplay, Point(BORDER, -BORDER - 2 - 50), CV_RGB(0, 0, 0));
                        drawString(frame, stringToDisplay, Point(BORDER + 1, -BORDER - 1 - 50), CV_RGB(0, 255, 0));

                        imshow(window_name, frame);
                } else {
                        printf(" --(!) No captured frame\n");
                }

                int c = waitKey(1);
                if ((char)c == 27) {
                        break;
                }
        }

        return 0;
}

Looking at your error messages, I can see several issues that are commonly reported with the Verdin iMX8M Plus and BSP 7 mainline kernel.

Error Analysis

The errors you’re experiencing are well-documented issues:

  1. MIPI-CSI errors: The “SOT Error”, “Lost Frame Start Error”, and “ECC Error” messages indicate problems with the MIPI-CSI interface(1)(2)
  2. Video device access: The “Failed to open device: No such file or directory” suggests /dev/video0 is not available or accessible(3)

Camera Interface Issues with BSP 7

Based on community reports, there are known issues with MIPI-CSI camera migration to Yocto Scarthgap (BSP 7)(2). Users have reported kernel panics and i2c communication problems when trying to use cameras with the newer kernel versions(2).

One user specifically mentioned that their camera worked with BSP 7.0.0 but failed with newer BSP 7.1.0 builds, requiring them to use the specific BSP 7.0.0 tag(2).

Troubleshooting Steps

1. Check Available Video Devices

First, verify what video devices are available:

ls /dev/video*
v4l2-ctl --list-devices

2. Verify Camera Detection

Check if your camera is properly detected:

dmesg | grep -i csi
dmesg | grep -i camera

3. Test with GStreamer Pipeline

Before using OpenCV, test the camera with a simple GStreamer pipeline:

gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! waylandsink

Note: The video device number may vary - check your v4l2-ctl --list-devices output(3).

OpenCV Example Reference

The OpenCV documentation provides a face detection example that uses this camera initialization(3):

capture.open("v4l2:///dev/video3");
capture.set(cv::CAP_PROP_FRAME_WIDTH, 640);
capture.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

(3)

Recommendations

  1. Use BSP 7.0.0: Based on community feedback, consider using the specific BSP 7.0.0 tag instead of newer 7.1.0 builds if you’re experiencing camera issues(2)

  2. Check video device number: Your code tries /dev/video0, but the correct device might be /dev/video2 or another number(3)

  3. Simplify the pipeline: Start with a basic v4l2src pipeline without format conversion:

std::string pipeline = "v4l2src device=/dev/video2 ! videoconvert ! appsink";
  4. Verify camera hardware: Ensure your camera module is properly connected and supported with your specific BSP version

The MIPI-CSI interface issues you’re seeing are known problems with certain BSP 7 versions, and the community has found that using specific kernel versions can resolve these issues(2).

Best regards, Toradex AI Assistant.



@ToradexAI Nope, it's the same. I edited some parts of the code, but got the same result.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/objdetect.hpp>

#include <iostream>
#include <sstream>
#include <stdio.h>

using namespace std;
using namespace cv; 

CascadeClassifier face_cascade;
string window_name = "Face Detection Demo";
String face_cascade_name = "/root/haarcascade_frontalface_alt2.xml";
const int BORDER = 8;   /* border between GUI elements and the edge of the image */

template <typename T> string toString(T t)
{
        ostringstream out;
        out << t;

        return out.str();

}

/*
 * Draw text into an image. Defaults to top-left-justified text,
 * but you can give negative x coords for right-justified text,
 * and/or negative y coords for bottom-justified text
 * Returns the bounding rect around the drawn text
*/
Rect drawString(Mat img, string text, Point coord, Scalar color,
                float fontScale = 0.6f, int thickness = 1, int fontFace = FONT_HERSHEY_COMPLEX)
{
        /* Get the text size & baseline */
        int baseline = 0;
        Size textSize = getTextSize(text, fontFace, fontScale, thickness, &baseline);
        baseline += thickness;

        /* Adjust the coords for left/right-justified or top/bottom-justified */
        if (coord.y >= 0) {
                /*
                 * Coordinates are for the top-left corner of the text
                 * from the top-left of the image, so move down by one row.
                 */
                coord.y += textSize.height;
        } else {
                /*
                 * Coordinates are for the bottom-left corner of the text
                 * from the bottom-left of the image, so come up from the bottom
                 */
                coord.y += img.rows - baseline + 1;
        }

        /* Become right-justified if desired */
        if (coord.x < 0) {
                coord.x += img.cols - textSize.width + 1;
        }

        /* Get the bounding box around the text */
        Rect boundingRect = Rect(coord.x, coord.y - textSize.height, textSize.width, baseline + textSize.height);

        /* Draw anti-aliased text */
        //putText(img, text, coord, fontFace, fontScale, color, thickness, CV_AA);
        putText(img, text, coord, fontFace, fontScale, color, thickness, cv::LINE_AA);


        /* Let the user know how big their text is, in case they want to arrange things */
        return boundingRect;
}

int main(int argc, const char** argv)
{
        VideoCapture capture;
        Mat frame;
        std::vector<Rect> faces;
        Mat frame_gray;
    
        if (!face_cascade.load( face_cascade_name ) ) { 
                printf("--(!)Error loading training file: haarcascade_frontalface_alt2.xml\n");
                return -1; 
        };

        try {
                std::string pipeline ="v4l2src device=/dev/video0  ! video/x-raw,format=NV12,width=800,height=600,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink";

                capture.open(pipeline,cv::CAP_GSTREAMER);
//              capture.open("v4l2:///dev/video0");
                capture.set(cv::CAP_PROP_FRAME_WIDTH, 800);
                capture.set(cv::CAP_PROP_FRAME_HEIGHT, 600);

        }
        catch (cv::Exception &e) 
        {
                const char *err_msg = e.what();
                cout << "Exception caught: " << err_msg << endl;
        }

        if ( !capture.isOpened() ) { 
                cout << "ERROR: Could not access the camera!" << endl;
                exit(1);
        }

        while(true) {
                capture >> frame;

                if (!frame.empty()) {
    
                        cvtColor(frame, frame_gray, cv::COLOR_BGR2GRAY);

                        equalizeHist(frame_gray, frame_gray);

                        face_cascade.detectMultiScale(frame_gray,faces,1.2,3,0,cv::Size(80, 80));


                        for (size_t i = 0; i < faces.size(); i++) {
                                cv::Point pt1(faces[i].x, faces[i].y);
                                cv::Point pt2(faces[i].x + faces[i].width,
                                              faces[i].y + faces[i].height);

                                cv::rectangle(frame, pt1, pt2,cv::Scalar(0, 255, 0),3);

                                Mat faceROI = frame_gray(faces[i]);
                        }
                        string stringToDisplay = "Number Of Faces: " + toString(faces.size());

                        drawString(frame, stringToDisplay, Point(BORDER, -BORDER - 2 - 50), CV_RGB(0, 0, 0));
                        drawString(frame, stringToDisplay, Point(BORDER + 1, -BORDER - 1 - 50), CV_RGB(0, 255, 0));

                        imshow(window_name, frame);
                } else {
                        printf(" --(!) No captured frame\n");
                }

                int c = waitKey(1);
                if ((char)c == 27) {
                        break;
                }
        }

        return 0;
}

Looking at your code, I can identify several issues that need to be addressed for proper camera access on your Verdin iMX8M Plus with BSP 7.

Code Issues

1. Constant Reference

Your posted code already uses the correct C++ form here:

capture.open(pipeline, cv::CAP_GSTREAMER);

Just take care not to slip into the Python convention (cv2.CAP_GSTREAMER) when porting Python examples: cv2 is the Python module name, while C++ code uses the cv namespace.

2. Camera Device Detection

Based on the Toradex documentation, you need to first identify the correct video device. The documentation shows how to check available video devices(1):

root@apalis-imx6:~# v4l2-ctl --list-devices

(1)

The documentation provides an example output showing different video devices available(1):

DISP3 BG ():
        /dev/video16
        /dev/video17

HD Pro Webcam C920 (usb-ci_hdrc.1-1.1.3):
        /dev/video3

Failed to open /dev/video0: Resource temporarily unavailable

(1)

This shows that /dev/video0 may not be available, which explains your “No such file or directory” error.
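Because the device node number can move around between boots and BSP versions, a script can probe for the first /dev/videoN node that exists instead of hardcoding video0. The sketch below covers the probing logic only; whether the node is actually a working capture device still needs v4l2-ctl or an open attempt. The helper name and the injectable `exists` parameter (used so the logic can be exercised without real device nodes) are assumptions:

```python
import os

def first_video_device(max_index=8, exists=os.path.exists):
    """Return the first /dev/videoN path that exists, or None.

    `exists` is injectable purely so the lookup logic can be tested
    without real /dev nodes; on the target the default os.path.exists
    is used.
    """
    for n in range(max_index):
        path = f"/dev/video{n}"
        if exists(path):
            return path
    return None
```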

3. Proper Camera Initialization

The OpenCV documentation for Toradex shows this camera initialization approach(1):

try {
    capture.open("v4l2:///dev/video3");
    capture.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    capture.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
}
catch (cv::Exception &e)
{
    const char *err_msg = e.what();
    cout << "Exception caught: " << err_msg << endl;
}

(1)

Troubleshooting Steps

Step 1: Check Available Video Devices

Run this command to see what video devices are actually available:

v4l2-ctl --list-devices

Step 2: Test Camera Access

Before running your OpenCV code, test if the camera works with GStreamer directly:

gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! waylandsink

(Replace /dev/video2 with the actual device from your v4l2-ctl output)

Step 3: Use Correct Video Device

Update your code to use the correct video device number found in Step 1.

Known Issues with BSP 7

Based on community reports, there are known camera interface issues with BSP 7, particularly with MIPI-CSI cameras(2). Users have reported that VPU encoding functionality was broken in BSP 7 but was fixed in version 7.3.0(2).

The community discussions show that some users had to revert to BSP 6 or use specific BSP 7.0.0 builds to get camera functionality working properly(3).

Recommended Next Steps

  1. Verify your video device: Use v4l2-ctl --list-devices to find the correct device
  2. Watch the namespace: use cv::CAP_GSTREAMER (C++), not cv2.CAP_GSTREAMER (Python)
  3. Test with simple pipeline: Try the basic v4l2src pipeline first
  4. Consider BSP version: If issues persist, you may need to use a different BSP 7 version or revert to BSP 6

The camera interface problems you’re experiencing are well-documented issues with certain BSP 7 versions, and the solution often involves using the correct video device and ensuring your BSP version supports your specific camera hardware properly.

Best regards, Toradex AI Assistant.

