RPMsg Linux driver and delays

Hello,

I’m trying to transfer ADC values at about 10 kHz from the M4 to the Cortex-A using RPMsg.

On the Cortex-A side, I send and receive data through the ttyRPMSG device, which is created by the imx_rpmsg_tty kernel module.

The current algorithm sends a request from the Cortex-A to the M4 using write(rpmsgFileHandler, tx_buffer, sizeof(tx_buffer)).
When the M4 receives the request, it sends back the acquired ADC value from its RPMsg callback function (like the ping-pong example). The Cortex-A then receives it using read(rpmsgFileHandler, rx_buffer, sizeof(rx_buffer)).
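For reference, here is a minimal sketch of that request/response loop on the Cortex-A side (the node name /dev/ttyRPMSG, the request contents, and the 10-byte message size are assumptions from my setup); it timestamps each round trip so the delay can be measured:

```c
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define MSG_LEN 10 /* 10-byte request/response, as in my test */

int main(void)
{
    unsigned char tx_buffer[MSG_LEN] = "REQ_ADC0"; /* request format is made up */
    unsigned char rx_buffer[MSG_LEN];
    struct timespec t0, t1;

    /* Device node created by the imx_rpmsg_tty kernel module */
    int fd = open("/dev/ttyRPMSG", O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open /dev/ttyRPMSG");
        return 1;
    }

    for (int i = 0; i < 1000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);

        /* Request one ADC sample from the M4 ... */
        if (write(fd, tx_buffer, sizeof(tx_buffer)) != sizeof(tx_buffer))
            perror("write");

        /* ... and block until the M4's callback answers */
        ssize_t n = read(fd, rx_buffer, sizeof(rx_buffer));
        if (n < 0)
            perror("read");

        clock_gettime(CLOCK_MONOTONIC, &t1);
        long us = (t1.tv_sec - t0.tv_sec) * 1000000L +
                  (t1.tv_nsec - t0.tv_nsec) / 1000L;
        printf("round trip %d: %ld us (%zd bytes)\n", i, us, n);
    }

    close(fd);
    return 0;
}
```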

However, the problem is that there is often a big delay (up to 20 ms) when reading or writing 10 bytes from/to ttyRPMSG frequently in a loop, whereas reading/writing 10 bytes only once takes around 60 µs.
Measuring on the M4 side, sending and receiving, including the memory copy, take less than 60 µs.

  • Is a 20 ms delay expected when using RPMsg for such a small amount of data? (Unfortunately, I couldn’t find an RPMsg benchmark for i.MX modules.)
  • Is there any driver/example that uses the RPMsg APIs directly from user space, rather than going through a kernel module and then reading and writing a TTY (which could be the reason for the delay)?

Thanks in advance,

Best regards,
Majd

I wrote my own kernel driver to avoid this issue. It is a character driver instead of a TTY driver. In addition, for high-speed data transfer I set up a shared memory area in DDR3; RPMsg is only used to send signals in both directions.

The TTY is clearly the bottleneck…

I ran some benchmarks: I can transfer 1024 32-bit samples within 100 µs, so approximately 327 Mbit/s.

To be more efficient, I would suggest using FreeRTOS on the M4: you can multitask, and there are plenty of buffer APIs.

One piece of advice: just don’t use memcpy …
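To make the idea concrete, here is a rough sketch of the shared-memory layout I mean (the struct, the base address, and the sizes are illustrative placeholders, not my actual driver). The M4 fills a block in the reserved DDR3 region, and only a tiny "block ready" notification travels over RPMsg:

```c
#include <stdint.h>

/* Layout of the reserved DDR3 region, seen identically by the M4
 * firmware and by the Linux char driver that maps it to user space.
 * The base address and sizes below are made-up placeholders; the
 * region would be reserved e.g. via a reserved-memory device tree node. */
#define SHM_BASE      0x80000000u
#define SHM_NSAMPLES  1024u

struct adc_block {
    volatile uint32_t seq;     /* incremented by the M4 when a block is complete */
    volatile uint32_t count;   /* number of valid samples in this block */
    volatile uint32_t samples[SHM_NSAMPLES]; /* raw ADC data, written by the M4 only */
};

/* Both sides overlay the same struct on the fixed physical address. */
#define ADC_SHM ((struct adc_block *)SHM_BASE)

/* The RPMsg message itself stays tiny: it only signals which block
 * is ready; no sample data is copied through the RPMsg FIFOs. */
struct adc_notify {
    uint32_t seq;
};
```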

Hi @arnaud_infoteam,
Thanks a lot for the information. I’m totally new to this field and appreciate any simple comment.

It would be really nice if you could share this driver. How does the user-space application talk to the driver? Or is the driver part of the application?

I would also prefer to use FreeRTOS, but unfortunately RPMsg didn’t work on it for me: the debugger gets stuck at rpmsg_rtos_init(). So for now I went for the quickly working bare-metal solution.

Is it enough to use the MCIMX7D_M4_ddr.ld linker script to get a shared “data” memory in DDR (meaning that any variable created on the M4 would land in the shared region)?
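For illustration, this is roughly what I imagine (the section name and attribute are just an assumption on my side, not something I have working):

```c
#include <stdint.h>

/* Place only selected buffers in a dedicated output section; the
 * linker script (e.g. a modified MCIMX7D_M4_ddr.ld) would have to
 * map ".shared" to a fixed DDR address that the Linux side also
 * treats as reserved. Section name and size are assumptions. */
__attribute__((section(".shared"), aligned(32)))
volatile uint32_t adc_samples[1024];
```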

Thanks again and best regards,
Majd

Will Toradex provide an RPMsg character driver/example? Or is it possible to use an RPMsg character driver provided by another vendor?

Hi @majd.m,

Did you find out the root cause of the issue where the debugger gets stuck at rpmsg_rtos_init()? Please have a look here.

Hi,

You can enable the RPMSG_CHAR driver in the kernel config. For an example, please have a look here.
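For reference, user space talks to rpmsg_char roughly like this (a sketch; the channel name "rpmsg-openamp-demo-channel", the addresses, and the /dev/rpmsgN numbering depend on your M4 firmware):

```c
#include <fcntl.h>
#include <linux/rpmsg.h> /* struct rpmsg_endpoint_info, RPMSG_CREATE_EPT_IOCTL */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    /* The control device shows up once rpmsg_char is loaded */
    int ctrl = open("/dev/rpmsg_ctrl0", O_RDWR);
    if (ctrl < 0) {
        perror("open /dev/rpmsg_ctrl0");
        return 1;
    }

    /* Create an endpoint; the name and addresses must match the M4 side.
     * The values here are assumptions taken from the common demos. */
    struct rpmsg_endpoint_info ept = { 0 };
    strncpy(ept.name, "rpmsg-openamp-demo-channel", sizeof(ept.name) - 1);
    ept.src = 0x400; /* local address, example value */
    ept.dst = 0x0;   /* remote address, example value */
    if (ioctl(ctrl, RPMSG_CREATE_EPT_IOCTL, &ept) < 0) {
        perror("RPMSG_CREATE_EPT_IOCTL");
        return 1;
    }

    /* The kernel now creates a /dev/rpmsgN node for this endpoint */
    int ep = open("/dev/rpmsg0", O_RDWR);
    if (ep < 0) {
        perror("open /dev/rpmsg0");
        return 1;
    }

    char msg[] = "hello M4";
    write(ep, msg, sizeof(msg)); /* plain read()/write(), no TTY layer */

    close(ep);
    close(ctrl);
    return 0;
}
```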

Best regards,
Jaski

@majd.m: Just be aware that Linux is not a real-time OS; jitter has to be expected depending on the system load.

Hi @morandg,

Thanks for your input.
Best regards,
Jaski

Thanks @morandg, but I’m using a fully PREEMPT_RT-patched Linux and running nothing other than this real-time thread on the i.MX7. I would expect that the jitter shouldn’t be that high and that frequent.

If the RPMsg driver is not real-time capable in Linux, then I think the Cortex-M4 is of little use to us. The whole point of adding the Cortex-M4 is to bring real-time capability to our applications, which mainly run on the Cortex-A, so RPMsg is the key to benefiting from the M4.

Best regards,
Majd

Hello @jaski.tx,
Thanks. I tried adding CONFIG_RPMSG_CHAR=y to the kernel config,

but I get this error when I try to compile the kernel:

scripts/kconfig/conf  --silentoldconfig Kconfig

*** Error during update of the configuration.

After removing CONFIG_RPMSG_CHAR=y, the kernel compiles without problems.

Best regards,
Majd

Hi @majd.m,

Yes, the RT patch might help to decrease jitter. I’m not an expert on TTY drivers, but I suspect there might be buffering done in the TTY layer. As suggested, writing a bare char device driver would probably help bypass that buffering… Take my comment with care, as I don’t know the whole kernel codebase by heart :D!
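Before writing a new driver, one cheap thing to try is putting the TTY into raw mode so the line discipline does no canonical buffering or echo. A sketch, assuming the node is /dev/ttyRPMSG:

```c
#define _DEFAULT_SOURCE /* for cfmakeraw() on glibc */
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_rpmsg_tty_raw(const char *path)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) == 0) {
        cfmakeraw(&tio);     /* no canonical buffering, no echo, no signal chars */
        tio.c_cc[VMIN]  = 1; /* read() returns as soon as one byte arrives */
        tio.c_cc[VTIME] = 0; /* no inter-byte timer */
        tcsetattr(fd, TCSANOW, &tio);
    }
    return fd;
}
```

If the latency drops in raw mode, the line discipline was at least part of the culprit.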

FYI, I used to work on a Xilinx platform that has a basic rpmsg char device driver, which is a good starting point if you feel brave enough to write it yourself. Some minor changes might be required to make it work on newer kernels:

Hi @majd.m,

There was an error in the kernel module sources; you should apply the latest patches to the kernel.

Best regards,
Jaski

Hi @jaski.tx,

Thanks for the hint. After compiling the kernel with CONFIG_RPMSG_CHAR=m and loading the module with modprobe rpmsg_char, unfortunately I get no response and rpmsg_ctrl is not found in /dev.
Is there something else that needs to be considered for the rpmsg_ctrl device to be created?

Best regards,
Majd

Hi @majd.m,

Did you deploy the new kernel and the kernel modules too?

Hi @jaski.tx,
Sure. rpmsg_char.ko is in /lib/modules/<kernel-version>/kernel/drivers/rpmsg.
modprobe rpmsg_char gives no error, but it doesn’t create the rpmsg_ctrl device. (I also tried CONFIG_RPMSG_CHAR=y; the device does not exist either.)

Hi @majd.m,

I can reproduce the error. Could you try to apply the following patch and check if it works?

Thanks and best regards,
Jaski

Hi @jaski.tx,

Thanks for the test. It is actually not clear from the patch’s page which kernel version it targets. When applying the patch to the current version of linux-toradex, I get this error while compiling the kernel:
drivers/rpmsg/virtio_rpmsg_bus.c:1302:29: error: ‘virtio_rpmsg_release_device’ undeclared (first use in this function)

Best regards, Majd

Hi @majd.m,

Thanks for your input. I have created a ticket internally and we will get back to you soon.

Best regards,
Jaski

Hi @majd.m,

Could you update to BSP 3.0b2 and check whether the rpmsg_char device works there?

Best regards,
Jaski

Hi @jaski.tx,

I tried it today: I built a new Yocto image with BSP 3.0b2 including rpmsg_char and deployed the image and the module. Unfortunately, the same issue is still there.

Best regards, Majd.