HD and SD Streams from AR0521 camera

Hi all,

Not entirely sure if this is the right forum for this but here goes.

I’ve successfully got a camera streaming using the method described here

and then extended it for use in an RTSP server. This all works well with my current pipeline:

v4l2src device=/dev/{device} io-mode=4  !
video/x-raw,width={width},height={height} !
imxg2dvideotransform !
videorate drop-only=true !
video/x-raw,framerate=10/1 !
clockoverlay shaded-background=true font-desc='Sans, 24' halignment=right valignment=top !
queue leaky=upstream max-size-buffers=2 max-size-bytes=0 max-size-time=0 !
imxvpuenc_h264 bitrate=512000 gop-size=10 !
rtph264pay pt=96 name=pay0

My question now is: if I want to provide two feeds (HD and SD) from this camera sensor, what’s the recommended methodology for doing so?

As far as I can tell my options are:

  1. Duplicate the /dev/video2 feed and then do two different conversions using imxg2dvideotransform?
  2. Somehow duplicate /dev/video2, perhaps using a v4l2 loopback?
  3. Something else?

Any help from someone who might have done this previously would be greatly appreciated.

Hi, I’m the Toradex AI Assistant. I’m in early beta, and I would appreciate your feedback.
The Toradex Support team will follow up with you shortly!



Hello,

To provide both HD and SD feeds from a single camera sensor on a Toradex module, you have a few options to consider. Each approach has its trade-offs in terms of complexity, performance, and resource utilization. Here are the methods aligned with your suggestions and an additional one:

  1. Use Separate Pipelines for HD and SD Streams: This approach involves duplicating the camera feed and processing it through two different GStreamer pipelines, each configured for the desired resolution. You would use imxg2dvideotransform or a similar element to scale the video to HD and SD resolutions accordingly. This method is straightforward but may increase CPU/GPU load, since you’re effectively capturing, scaling, and encoding the video twice.

  2. Utilize v4l2loopback for Duplicating the Video Feed: With v4l2loopback, you can create a virtual video device that duplicates the feed from your camera. You can then capture from this virtual device in two separate GStreamer pipelines, similar to the first option, but with the feed duplication handled at a lower level. This might offer a slight performance benefit over handling the duplication in GStreamer, but it introduces additional complexity in managing the loopback device (see the sketch after this list).

  3. Single Pipeline with tee Element: A more efficient way might be to use a single GStreamer pipeline that captures the feed from the camera and then uses a tee element to branch the pipeline into two paths: one for HD and another for SD processing. After the tee, you can scale, encode, and package each stream appropriately. This method is more resource-efficient since the capture and initial processing stages are shared. However, it requires careful pipeline construction to ensure that performance bottlenecks do not occur, especially in the scaling and encoding stages.
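
For option 2, a minimal sketch of the duplication step using the Python GStreamer bindings. It assumes v4l2loopback has already been loaded with two virtual devices (for example via modprobe v4l2loopback devices=2); the /dev/video10 and /dev/video11 paths are placeholders:

# Hypothetical sketch: fan the physical camera out to two v4l2loopback
# devices, which the HD and SD pipelines can then open independently.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video2 ! tee name=t "
    "t. ! queue ! v4l2sink device=/dev/video10 "
    "t. ! queue ! v4l2sink device=/dev/video11"
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()  # keep the duplication pipeline alive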

Here’s a simplified example of how the pipeline with a tee might look:

v4l2src device=/dev/{device} ! video/x-raw,width={max_width},height={max_height} ! tee name=t
t. ! queue ! imxg2dvideotransform ! video/x-raw,width={hd_width},height={hd_height} ! [HD encoding and streaming elements]
t. ! queue ! imxg2dvideotransform ! video/x-raw,width={sd_width},height={sd_height} ! [SD encoding and streaming elements]

This example assumes that {max_width} and {max_height} are the maximum resolution supported by the camera sensor.

Please also check if these links can help you solve your problem:

How to use Cameras on Torizon | Toradex Developer Center.

Hello @alan01252,

The recommended way to achieve this would be to duplicate the frames using tee, similar to the suggestion from @ToradexAI.

Encoding the video twice is inevitable here, as you have two different resolutions.
If the native resolution of the camera is not HD but higher, a possible optimization would be to not scale the video from the native resolution twice, but to scale in stages:

Capture Pipeline -> Conversion to HD -> tee
                                         |-> HD Streaming Pipeline
                                         '-> Conversion to SD -> SD Streaming Pipeline

Also, please note that GStreamer media factories can have issues with multiple clients, so it may be a good idea to set them as shared.
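
With the Python GstRtspServer bindings, for example, sharing is a per-factory flag. A minimal sketch; the launch string here uses generic elements purely for illustration:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer

Gst.init(None)
factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch("( videotestsrc is-live=true ! x264enc tune=zerolatency "
                   "! rtph264pay pt=96 name=pay0 )")
# Reuse one media pipeline for all clients of this mount point instead
# of spinning up a new capture pipeline per client:
factory.set_shared(True)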

Best Regards,
Bruno

Thanks Bruno,

Was kinda hoping there was some voodoo I was missing.

I can’t use the tee approach because the GStreamer RTSP server appends the sink element to the end of the pipeline itself.

I’ve had some success using the v4l2 loopback interface and two factories, one for HD and one for SD, so it seems like I’ll endeavor down this path further!

Thank you!

Hello @alan01252,

It may still be possible to do this with tee, but you will need to set up the origin pipeline and the two media factories separately, assigning each factory to a different mount point in the RTSP server.
Maybe some delayed linking of the pads would be needed.

I don’t think this implementation would be that simple, but it would be a path forward.
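
A rough sketch of that idea with the Python bindings, using the intervideosink/intervideosrc elements from gst-plugins-bad as the glue between the origin pipeline and the two factories instead of manual pad linking. The channel names, resolutions, and mount points are illustrative, and the imx elements are assumed from your earlier pipelines:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

# Shared origin pipeline: capture once, branch with tee.
origin = Gst.parse_launch(
    "v4l2src device=/dev/video2 ! video/x-raw,width=1920,height=1080 "
    "! tee name=t "
    "t. ! queue ! intervideosink channel=hd "
    "t. ! queue ! imxg2dvideotransform "
    "! video/x-raw,width=640,height=480 ! intervideosink channel=sd"
)
origin.set_state(Gst.State.PLAYING)

# One factory per mount point, each pulling from its inter channel.
server = GstRtspServer.RTSPServer()
mounts = server.get_mount_points()
for path, channel in (("/hd", "hd"), ("/sd", "sd")):
    factory = GstRtspServer.RTSPMediaFactory()
    factory.set_launch(
        "( intervideosrc channel=%s ! imxvpuenc_h264 gop-size=10 "
        "! rtph264pay pt=96 name=pay0 )" % channel
    )
    factory.set_shared(True)
    mounts.add_factory(path, factory)

server.attach(None)
GLib.MainLoop().run()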

Best Regards,
Bruno

Hi @bruno.tx, after a day of experimenting this seems to be the best I can do to make HD/SD streams from a single feed with the lowest CPU usage:

GST_DEBUG=3 GST_DEBUG_DUMP_DOT_DIR=/tmp/ gst-launch-1.0 v4l2src device=/dev/video2 ! \
    video/x-raw, width=1920, height=1080, framerate=65/1 ! \
    videorate drop-only=1 ! video/x-raw, framerate=10/1 ! \
    tee name=t \
        t. ! queue leaky=upstream max-size-buffers=0 !  \
            imxvpuenc_h264 gop-size=10 ! rtph264pay pt=96 config-interval=1 aggregate-mode=zero-latency ! \
            shmsink socket-path=/tmp/video_2_hd \
        t. ! queue leaky=upstream max-size-buffers=0 ! \
            imxg2dvideotransform ! video/x-raw, width=640, height=480 ! imxvpuenc_h264 gop-size=10 ! \
            rtph264pay pt=96 config-interval=1 aggregate-mode=zero-latency ! \
            shmsink socket-path=/tmp/video_2_sd

I then use shmsrc on the RTSP server to generate the two feeds, reading from the shared memory.
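
A sketch of what such a consumer factory can look like. The caps on shmsrc are assumptions, since shared memory does not transport caps, and the depay/re-pay step is one way to regenerate pay0 for the RTSP server; the SD factory is analogous with /tmp/video_2_sd:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)
server = GstRtspServer.RTSPServer()
factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch(
    "( shmsrc socket-path=/tmp/video_2_hd is-live=true do-timestamp=true "
    "! application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96 "
    "! rtph264depay ! rtph264pay pt=96 config-interval=1 name=pay0 )"
)
factory.set_shared(True)
server.get_mount_points().add_factory("/hd", factory)
server.attach(None)
GLib.MainLoop().run()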

So a few questions:

  1. is this madness? 🙂

  2. we’re using the AR0521 for these experiments, and every so often the sensor just seems to, for want of a better word, “die”, and nothing but a power cycle will fix it (i.e. reloading the kernel modules does nothing). I’ve tested two different sensors and both have ended up locked this way. Have you seen this before?

  3. Because of the above, I am wondering if there is another way of rebooting these cameras (I’m using the Toradex dev board, if that helps)

  4. I’m looking at this plugin, and am wondering if you think it could also be used in a similar way to the shared memory idea, but might be even more performant: New unixfd plugin in GStreamer 1.24 (I would need to backport it somehow…)

Thanks in advance!!!

Hello @alan01252,

It is great to know you were able to find a solution to the problem.

Well, I was not familiar with the shmsink and shmsrc pipeline elements, but they seem to use a socket to control the communication and deal with common producer/consumer issues.
So this seems to be a good solution without going too deep into GStreamer and manual pad linking.

The unixfd plugin appears to be a viable alternative, but, as you said, it would require backporting to a version of GStreamer that can support the VPU on the Verdin iMX8MP.


We have seen similar issues if the camera is not properly connected or when using the older driver available on BSP5.
From our tests, the driver available on the meta-toradex-econ layer for BSP6 is stable.
To confirm, are you using this driver?
Can you also check the camera connection?

There is a reset pin going to the camera, but I need to check if and how this is exposed on the software side.
I will get back to you with this information.

Best Regards,
Bruno

Yep, using the latest commit from that repo, and I have actually tried with two different camera sensors and made sure the connection is okay.

It’s weird; it’ll run for a while but then just lock up. That being said, it seems to have been more stable since switching to this specific pipeline.

Hello @alan01252,

Thanks for the confirmation.

Unfortunately the reset signal is not exposed in any way by the driver.
If you continue to see issues even when using the new pipeline, please let us know so we can try to reproduce and fix the problem.

Best Regards,
Bruno

Thanks @bruno.tx ,

We seem to still be having stability issues with this camera, although it is working better…
I managed to backport the unixfd GStreamer plugin, and it’s working great and vastly simplifies the pipeline.

However, what is now more obvious in the logs is that we occasionally get this error:

v4l2src gstv4l2src.c:1264:gst_v4l2src_create:<v4l2src0> Timestamp does not correlate with any clock, ignoring driver timestamps

What is interesting is that this log line is emitted right underneath this comment in the GStreamer source:

which at least strongly implies an issue with the driver?

So I am still continually facing this issue; it’s very strange. The stream just dies and there’s no way to reset it without a reboot.

I’m monitoring the hardware etc. and nothing leaps out as a root cause; there are no logs in dmesg either.

Is there anything else I could/should be doing to work out why the feed might just “stop” after x hours of running perfectly?

Any help would be greatly appreciated.

Hello @alan01252,

Yes, this can indicate a bug in the driver, specifically when time-stamping is involved. Such a problem may or may not be related to the freezing issues you are seeing.


This should not be happening.
I will prepare a setup to try to reproduce this issue and look for a possible workaround.
The workaround may involve removing the reset/powerdown pins from the camera’s node on the device tree overlays so it can be reset outside of the driver, but this may interfere with the driver initialization.

When we have further updates, we will send them here.

Best Regards,
Bruno


Thank you. I’ve had the longest run so far overnight, currently still working properly at 9 hours 30 minutes.

I’ve made no changes to the setup, either physically or software-wise, so this might not be the easiest thing to reliably reproduce.

Yesterday I had one crash after ~4 hours.

Out of curiosity @bruno.tx, there are a few interesting commits in the upstream video4linux driver.

I am suspicious this is related, as I wouldn’t have thought any bug in GStreamer itself would warrant a hard reboot to “fix”; a restart of the pipeline should suffice. Would you be in agreement with that hypothesis?

Hello @alan01252,

Thanks for the update.

Yes, I think that is a fair assessment.
The issue would most likely be in the camera driver or firmware.

Best Regards,
Bruno

Just another update, I’ve kicked off another pipeline and am up to 43 hours.

As far as I am aware I’ve made zero changes, and the pipeline is so simple that I don’t think I’ve got anything “wrong” in prior runs.

GST_DEBUG=4 gst-launch-1.0 -v v4l2src device=/dev/video2 io-mode=dmabuf !  \
    video/x-raw, width=1920, height=1080, framerate=65/1 ! \
    videorate drop-only=1 ! video/x-raw, framerate=10/1 ! \
    tee name=t \
        t. ! queue leaky=upstream max-size-buffers=0 ! unixfdsink socket-path=/var/run/video_2_hd \
        t. ! queue leaky=upstream max-size-buffers=0 ! unixfdsink socket-path=/var/run/video_2_sd

but wanted to let you know…
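
For completeness, the consuming side then does the per-stream scaling and encoding. A sketch of what the SD factory can look like, assuming the backported unixfdsrc (which, unlike shmsrc, also transports the caps over the socket); the HD factory is the same minus the scaling step:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)
server = GstRtspServer.RTSPServer()
factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch(
    "( unixfdsrc socket-path=/var/run/video_2_sd "
    "! imxg2dvideotransform ! video/x-raw,width=640,height=480 "
    "! imxvpuenc_h264 gop-size=10 "
    "! rtph264pay pt=96 config-interval=1 name=pay0 )"
)
factory.set_shared(True)
server.get_mount_points().add_factory("/sd", factory)
server.attach(None)
GLib.MainLoop().run()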

Another couple of interesting observations: the pipeline crashed again this morning and eventually required a hard reboot. However, even after the reboot the camera wouldn’t stream using the exact same command as above, yet after another reboot it was “fixed” again.

I’ll continue trying to narrow down how to reliably reproduce this issue, but it seems very tricky indeed.

Hello @alan01252,

Thanks for the update!

The crashing of the camera could also be due to overheating. Could you do another test after adding some sort of ventilation?

If that still doesn’t solve the issue, another idea would be to power down the camera by pulling the GPIO pin low (that would be gpio1 8; this is pwn-gpios in the overlay file). So when the camera dies, you can try rmmod ar0521 && gpioset gpio1 8=0. Please let me know how that goes!

Thanks @rudhi.tx

That module can’t be removed because it’s built in, unless I am doing something stupid?

root@:~# rmmod ar0521
rmmod: ERROR: Module ar0521 is builtin.
root@~#

I’ve just updated my custom image to include the gpioset tooling, so I will give that a try next time it hangs (I am still experiencing the issue).

In terms of ventilation, the sensor is currently propped up on a shelf in my office, so there’s lots of airflow, but no fan/heat sink etc. Are these required for these sensors, in your experience?

Hello @alan01252

Since you are using Torizon OS, you can easily access the GPIO pins from a pre-built Docker container. See more details here: How to Use GPIO on Torizon OS | Toradex Developer Center

Perhaps you could do a quick test by running this GPIO container.