RPMsg buffer size on Torizon OS

I developed an application on Verdin iMX8M-Plus that uses rpmsg to communicate between Torizon OS and FreeRTOS.
Everything works fine and I’m able to communicate.
In FreeRTOS I see that rpmsg uses buffers of 512 bytes, and if I look at the addresses of the messages sent and received, I see that they increment by 512 with every message.
So this is ok.
Moreover, I’m able to send messages with a payload of 496 bytes (each one requires a full 512-byte buffer) from FreeRTOS to Torizon OS, where I have a Python application that receives them properly. No issue here.

But if I write a message of 300 bytes from python to /dev/ttyRPMSG30 on Torizon OS, I see that this message is automatically split into two different messages: the first is 256 bytes long, and the other 44 bytes long.
On FreeRTOS side, I see both of them and I can merge them into the original message.
I did several tests, and I see that the message is always split into chunks of 256 bytes (even though the buffer is 512 bytes).
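
For reference, the Linux side of the test boils down to a single write like this (a sketch in C just for illustration; my real application is in Python, and the termios raw-mode setup is omitted):

/* Illustration only: write one 300-byte payload to the rpmsg tty device. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char payload[300];
	int fd = open("/dev/ttyRPMSG30", O_WRONLY | O_NOCTTY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(payload, 'A', sizeof(payload));
	/* On the FreeRTOS side this single write arrives as two rpmsg
	   messages: 256 bytes followed by 44 bytes. */
	if (write(fd, payload, sizeof(payload)) != (ssize_t)sizeof(payload))
		perror("write");

	close(fd);
	return 0;
}
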
Do you know why this happens?
Is this the Torizon OS rpmsg_tty driver?
Or python?
Or some rpmsg configuration?

Hi, I’m the Toradex AI Assistant. I’m in early beta, and I would appreciate your feedback.
The Toradex Support team will follow up with you shortly!


Hello,

The behavior you’re observing, where messages larger than 256 bytes are being split into multiple messages, is related to the rpmsg_tty driver’s handling of message sizes in the Linux kernel used by TorizonCore. The rpmsg framework, which facilitates communication between Linux (running on the A core) and FreeRTOS (running on the M core), has its own set of constraints and configurations that can affect how data is transmitted and received.

The rpmsg_lite implementation used in FreeRTOS is designed to work with the default buffer size of 512 bytes, as you’ve noted. However, the splitting of messages into 256-byte chunks when sending from Torizon OS to FreeRTOS suggests that there’s a limitation or configuration within the rpmsg_tty driver or the underlying rpmsg implementation that caps the maximum message size that can be sent in a single transaction to 256 bytes.

This behavior is not directly related to Python or any specific application-level code running on Torizon OS. Instead, it’s more likely tied to the way the rpmsg subsystem or the rpmsg_tty driver itself handles message segmentation and transmission. The rpmsg protocol requires that both the sender and receiver agree on the maximum message size. If the sender attempts to send a message larger than this agreed size, the message will be segmented into smaller chunks that fit within the maximum size limit.

To address this issue, you might consider the following approaches:

  1. Check the rpmsg_tty Driver Configuration: Review the source code of the rpmsg_tty driver in the Linux kernel used by TorizonCore to see if there are any configurable parameters related to maximum message size or segmentation behavior.

  2. Adjust Application Logic: Since you’re able to successfully reassemble the messages on the FreeRTOS side, you might continue using this approach as a workaround. Ensure your application logic on both sides can handle message segmentation and reassembly correctly (see the sketch after this list).

  3. Review rpmsg Documentation: Although direct modifications to the rpmsg or rpmsg_tty driver code are beyond the scope of this support, reviewing the documentation and source code for these components might provide further insights into their behavior and any potential configuration options.

  4. Kernel Configuration: It’s also worth checking if there are kernel configuration options related to rpmsg that might influence message size limits or segmentation behavior.
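
As an illustration of approach 2, a reassembly loop on the FreeRTOS side could look roughly like the sketch below. It assumes your own framing (the sender prefixes each logical message with a 2-byte little-endian length) and uses a hypothetical recv_chunk() placeholder for whatever rpmsg_lite receive call your application already uses:

#include <stdint.h>
#include <string.h>

/* Placeholder: copy the next received rpmsg payload into dst, return its length. */
extern uint32_t recv_chunk(uint8_t *dst, uint32_t max_len);

uint32_t recv_message(uint8_t *msg, uint32_t max_len)
{
	uint8_t chunk[512];
	uint32_t got = recv_chunk(chunk, sizeof(chunk));

	if (got < 2u)
		return 0u;

	/* Logical message length taken from the 2-byte prefix of the first chunk. */
	uint32_t total = (uint32_t)chunk[0] | ((uint32_t)chunk[1] << 8);
	uint32_t have = got - 2u;

	if (total > max_len)
		return 0u;
	if (have > total)
		have = total;
	memcpy(msg, chunk + 2, have);

	/* Keep reading until the 256-byte chunks add up to the full message. */
	while (have < total) {
		got = recv_chunk(chunk, sizeof(chunk));
		if (got == 0u)
			return 0u;
		if (have + got > total)
			got = total - have;
		memcpy(msg + have, chunk, got);
		have += got;
	}
	return total;
}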

Understanding the exact cause of this behavior requires a deeper dive into the `rpmsg` subsystem and the `rpmsg_tty` driver sources used by Torizon OS.

Please also check if these links can help you solve your problem:

Cortex-M RPMsg Guide | Toradex Developer Center.

Hello @vix,

As you imagined, this limitation is related to the imx_rpmsg_tty driver.
You can see here that RPMSG_MAX_SIZE is defined as 256 and this seems to be the code that does the message splitting.

To change this behavior, the driver needs to be modified.
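
For context, the splitting boils down to a loop of roughly this shape (a simplified sketch, not the exact NXP source; tty callback signatures vary between kernel versions):

#include <linux/kernel.h>
#include <linux/rpmsg.h>

#define RPMSG_MAX_SIZE	256	/* the limit discussed in this thread */

/* Simplified sketch of the chunking done for each tty write. */
static int rpmsg_tty_send_all(struct rpmsg_device *rpdev,
			      const unsigned char *buf, int total)
{
	int sent = 0;

	while (sent < total) {
		int chunk = min(total - sent, RPMSG_MAX_SIZE);
		int ret = rpmsg_send(rpdev->ept, (void *)(buf + sent), chunk);

		if (ret)
			return ret;
		sent += chunk;
	}

	/* A 300-byte write from user space thus leaves as a 256-byte
	   message followed by a 44-byte message. */
	return sent;
}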

Best Regards,
Bruno

Hi @bruno.tx
as far as I can see, there is a mismatch in the NXP rpmsg software between the Cortex-A side (maximum message size 256) and the Cortex-M side (maximum message size 496).
Toradex uses the NXP values without any change.
I think that NXP should fix it (and afterwards, Toradex can update it).
I reported this to NXP tech support and I’ve been waiting for their answer.


Hi @bruno.tx
I got an answer from NXP, and it’s the usual answer from their side:

NXP provides software as a reference starting point for customer implementation, and so on…

I wrote back that this is a kernel driver, not software for customer implementation, but you know NXP better than I do…

Long story short, I think there is no chance that NXP is going to update the driver.
But I think that Toradex can do this on its own fork, because this can bring a big benefit to Torizon OS users:

  • now RPMSG_MAX_SIZE is set to 256 here

  • but the comment here states that

    /* this needs to be less then (RPMSG_BUF_SIZE - sizeof(struct rpmsg_hdr)) */
    
  • RPMSG_BUF_SIZE is defined here as 512

  • sizeof(struct rpmsg_hdr) is 16 bytes

All the points above have been confirmed by NXP too (and they’re clear from the sources anyway), and they depend on the Cortex-A side only.
This means that a 512-byte rpmsg buffer can be used at its maximum throughput if the maximum message size is set to 496 bytes.
Setting it to 256 wastes resources, because the buffer size is still 512 (and this must be the same on the Cortex-M side), but only 256 bytes of it are used for every message.
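
Just to make the arithmetic explicit, this small standalone check reproduces the 496 figure (header layout copied from the upstream virtio rpmsg code, with the kernel-specific types simplified to plain integers):

#include <stdint.h>
#include <stdio.h>

/* Same field layout as the upstream struct rpmsg_hdr: 16 bytes in total. */
struct rpmsg_hdr {
	uint32_t src;		/* 4 bytes */
	uint32_t dst;		/* 4 bytes */
	uint32_t reserved;	/* 4 bytes */
	uint16_t len;		/* 2 bytes */
	uint16_t flags;		/* 2 bytes */
	uint8_t  data[];	/* payload starts here */
};

#define RPMSG_BUF_SIZE 512

int main(void)
{
	/* 512 - 16 = 496 bytes of payload per rpmsg buffer. */
	printf("max payload = %zu\n", RPMSG_BUF_SIZE - sizeof(struct rpmsg_hdr));
	return 0;
}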

I know I can change it and rebuild Torizon OS by myself, but from my perspective this is a bug (or at least a missed optimization), so I suggest/ask Toradex to consider changing RPMSG_MAX_SIZE to 496 (i.e., 512 - 16) in its own fork.
In this way, all Torizon OS customers could (potentially) benefit from higher rpmsg throughput with no unwanted side effects.

If I’m right, mainline/upstream Linux for i.MX doesn’t even have this kind of define to set the maximum message size (maybe because hard-coding it is not ideal, as pointed out above); instead, there is this function

static ssize_t virtio_rpmsg_get_mtu(struct rpmsg_endpoint *ept)
{
	struct rpmsg_device *rpdev = ept->rpdev;
	struct virtio_rpmsg_channel *vch = to_virtio_rpmsg_channel(rpdev);

	return vch->vrp->buf_size - sizeof(struct rpmsg_hdr);
}

which basically computes the same thing: buf_size - sizeof(struct rpmsg_hdr).
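
Just to illustrate, on a kernel new enough to export rpmsg_get_mtu(), a client driver could query this value instead of relying on a hard-coded define (a hypothetical minimal sketch; the channel name is an assumption and must match what the M core announces):

#include <linux/module.h>
#include <linux/rpmsg.h>

static int mtu_demo_probe(struct rpmsg_device *rpdev)
{
	/* Wraps virtio_rpmsg_get_mtu() shown above: buf_size - header. */
	ssize_t mtu = rpmsg_get_mtu(rpdev->ept);

	dev_info(&rpdev->dev, "rpmsg MTU: %zd bytes\n", mtu);	/* 496 with 512-byte buffers */
	return 0;
}

static void mtu_demo_remove(struct rpmsg_device *rpdev)
{
}

static const struct rpmsg_device_id mtu_demo_id_table[] = {
	{ .name = "rpmsg-mtu-demo" },	/* hypothetical channel name */
	{ },
};
MODULE_DEVICE_TABLE(rpmsg, mtu_demo_id_table);

static struct rpmsg_driver mtu_demo_driver = {
	.drv.name	= "rpmsg_mtu_demo",
	.id_table	= mtu_demo_id_table,
	.probe		= mtu_demo_probe,
	.remove		= mtu_demo_remove,
};
module_rpmsg_driver(mtu_demo_driver);

MODULE_LICENSE("GPL");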

Do you think this change in Torizon OS sources would be possible?
I would greatly appreciate this.


Dear @vix,

Thank you for your detailed explanation. This does look like a rather application-specific case/request, which means it might not be changed in the near future (if at all). I will, however, inform our team internally about it so that they can have a look and evaluate the situation.

Best regards
Collin

If you mean that some user applications don’t need all the space in the rpmsg buffer, then yes, this can be seen as “application specific”.
But I think that being able to use the full buffer (with no bad side effects) is useful for everyone.

Let me know the position of the team.


Hello @vix,
I reviewed this ticket and the documentation/code, and it seems the 256-byte message limit on the Linux side is arbitrary.

However, I’m coming to the realization that the imx-rpmsg-tty driver was developed by NXP with one goal: have the easy-to-use string echo demo running. It’s nice to be able to open a tty device and write strings directly from user space for demo purposes, but we need to consider that the tty subsystem adds a lot of overhead.

We also need to consider that this driver was never upstreamed, and it looks like there are no efforts to do so. We could change the buffer size on our fork, but then we would diverge from NXP’s downstream. Furthermore, we would have to carry this patch along in the future, and we actively work towards reducing the number of patches applied to our downstream to reduce the maintenance burden. Even if there are no bad side effects today, we cannot guarantee this in the future. NXP is free to change their downstream however they want, and if they do, some of these changes could break the functionality in subtle ways.

Consider a simple made-up example:
We change the buffer size on our fork and test everything. All good. Now NXP changes the implementation of the imx-rpmsg-tty driver, and they decide to make a copy of the data using the number 256 as a size, instead of using the #define. They test the changes on their side and everything works without issues. The code comes to us, and maybe we do some quick test using the demo functionality in user space (with strings smaller than 256 bytes) and we also see no problem. You get the new version and now your application is broken, but for everyone else it’s working. This is a recipe for a lot of wasted time trying to debug the issue.

I know this example seems silly, but it could happen exactly like I described, and there could be other similarly “inoffensive” changes that could go unnoticed and break stuff. Of course, these can happen even if we don’t have the patch on our fork, but by having it we have one more chance of bad stuff happening.

With all that said, have you considered using a more standard (and upstream) driver like rpmsg-char in your application? From what I saw, with rpmsg-char you wouldn’t have any limitation on the message size (apart from the rpmsg-bus limitation of 512 bytes per buffer) and you also wouldn’t have the tty overhead. You could send messages in binary form between the two cores, or you could also choose to send text strings if you like.

Please let me know what you think, and I’m also open to discussing the issue more.

Rafael

Hi @rafael.tx
I understand your position. It makes sense.

I’ve never heard about this driver.
Can you provide some links to doc or examples on the usage of rpmsg-char (both on M-side and Linux side)?

Thanks

Hello @vix,

Can you provide some links to doc or examples on the usage of rpmsg-char (both on M-side and Linux side)?

Unfortunately, I don’t know of any examples of this in the NXP documentation. However, its use shouldn’t be very different from the imx tty driver, at least on the Cortex-M side. I expect you would have to change the channel IDs/names so they match what’s being created on the Linux side (rpmsg tty creates a channel with a custom ID, and the matching ID is opened in the Cortex-M echo example).
On the Linux side, your application would then open the char driver’s control device, tell it to create an endpoint with an ioctl, and after that open the newly created char device. From that point on, you can just exchange data with the read and write functions.

I’m planning to work on an example for this, but I’m not sure when I’ll be able to get it ready.
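
In the meantime, here is a rough sketch of what the Linux-side flow could look like. The device node names (/dev/rpmsg_ctrl0 and /dev/rpmsg0), the channel name, and the endpoint address are assumptions and must match what your M-core firmware announces:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/rpmsg.h>

int main(void)
{
	struct rpmsg_endpoint_info info = { 0 };
	char buf[496];
	int ctrl, ept;
	ssize_t n;

	/* 1. Open the control device and ask the driver to create an endpoint. */
	ctrl = open("/dev/rpmsg_ctrl0", O_RDWR);
	if (ctrl < 0) {
		perror("open ctrl");
		return 1;
	}

	strncpy(info.name, "rpmsg-openamp-demo-channel", sizeof(info.name) - 1);
	info.src = RPMSG_ADDR_ANY;	/* let the kernel pick the local address */
	info.dst = 30;			/* assumption: M-core endpoint address */
	if (ioctl(ctrl, RPMSG_CREATE_EPT_IOCTL, &info) < 0) {
		perror("create endpoint");
		return 1;
	}

	/* 2. Open the newly created endpoint device and exchange raw data. */
	ept = open("/dev/rpmsg0", O_RDWR);
	if (ept < 0) {
		perror("open endpoint");
		return 1;
	}

	memset(buf, 0xAB, sizeof(buf));
	if (write(ept, buf, sizeof(buf)) < 0)	/* one rpmsg message, up to 496 bytes */
		perror("write");

	n = read(ept, buf, sizeof(buf));	/* blocks until the M core answers */
	if (n > 0)
		printf("received %zd bytes\n", n);

	close(ept);
	close(ctrl);
	return 0;
}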

Hi @rafael.tx
following your suggestion, I’ve just found this topic on the community.
It seems that @hfranco.tx has been able to get rpmsg_char working with a custom device tree.
But maybe that was the device tree that is now included in Torizon OS.
I hope I will have time to check it.