Crash on M4 with RPMsg and eCSPI

I have implemented an application running on the M4 core using RPMsg and eCSPI.
There are two FreeRTOS tasks: one cyclically queries slaves over eCSPI3 to get their data; the other cyclically transfers data over RPMsg from the M4 to the A7 and vice versa.
When both tasks run together, the M4 crashes after a few seconds. Each task on its own runs indefinitely (if I comment out the other).

So I don’t know why my application crashes when both tasks are active.
Is there a resource that is used by both RPMsg and eCSPI (SysTick, …)?

The tasks have different priorities. Both tasks also contain a delay, so the other one can do some work and there is enough time for the idle task. I verified this with GPIO pins and an oscilloscope, so I think the task scheduling is OK.

In my eCSPI task I also use two GPIO pins and a GPT. Maybe that causes the problem, but I commented out the GPIO and GPT code so that only eCSPI was active, and the problem still exists.

Hope you can help me. Thanks.

Dear @Kuzco

I’m afraid tracking down the problem would involve some serious debugging on my side, which I currently don’t have the resources for.
There is one wild guess you might verify, if your code is not located in TCML/TCMU: a few days ago I came across a bug report about an issue which can hit while synchronizing multiple tasks.

It would be a simple test to move your code to TCML/TCMU and verify whether the error persists.

Regards, Andy

Dear @andy.tx,
thanks for the hint.
I modified two things in my code:

  1. There is now only one FreeRTOS task (besides the idle task). In this task I send and receive over eCSPI and do some data exchange over RPMsg.
  2. In the linker file/scatter file I use TCM_L and TCM_U for code and OCRAM for data. My code is too big to fit into TCM alone, so I chose a mix.
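For reference, a code-in-TCM / data-in-OCRAM split can be expressed in a GNU ld script roughly as below. All region names, base addresses, and sizes here are illustrative placeholders only, not taken from this thread; the real values must come from your device’s reference manual or the BSP’s stock linker file.

```ld
/* Sketch only -- check addresses/sizes against your reference manual/BSP. */
MEMORY
{
  m_interrupts (RX) : ORIGIN = 0x1FFF8000, LENGTH = 0x00000240  /* TCML: vector table */
  m_text       (RX) : ORIGIN = 0x1FFF8240, LENGTH = 0x0000FDC0  /* rest of TCML + TCMU */
  m_data       (RW) : ORIGIN = 0x20200000, LENGTH = 0x00020000  /* OCRAM */
}

SECTIONS
{
  .interrupts : { KEEP(*(.isr_vector)) } > m_interrupts
  .text       : { *(.text*) *(.rodata*) } > m_text  /* code stays in TCM */
  .data       : { *(.data*) }            > m_data   /* data goes to OCRAM */
  .bss        : { *(.bss*) *(COMMON) }   > m_data
}
```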

If I choose the cyclic request time such that the request from the A7 arrives at the M4 while the RPMsg/eCSPI task is sleeping, the code works.
But if the request arrives while the RPMsg/eCSPI task is active, the M4 crashes.
I verified this with an oscilloscope.

Maybe an interrupt priority problem? But I can’t find an interrupt in the RPMsg library.

Kind regards,

Dear @Kuzco

Did I get you right that there is no problem on the WEC2013 side? I’m a bit confused that you mentioned a possible interrupt in the RPMsg library in this context. Maybe you were referring to the RPMsg implementation on the M4?

The RPMsg implementation uses the Messaging Unit (MU) hardware for signalling between the M4 and the A7.
Interrupts are used on both sides to process incoming messages.
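As a rough, host-runnable model of that handshake (the “register” and function names here are invented for illustration; the real MU registers and API differ):

```c
#include <stdio.h>

/* Toy model of the Messaging Unit signalling: one side writes a word
   into a "transmit register", which raises an interrupt on the other
   side; the receiver's handler drains the word. */
static unsigned mu_tx_reg;       /* stand-in for an MU transmit register */
static int      mu_irq_pending;  /* stand-in for the MU IRQ line */

static void mu_send(unsigned word)   /* e.g. the A7 side writes a message */
{
    mu_tx_reg = word;
    mu_irq_pending = 1;              /* raises the MU interrupt on the M4 */
}

static unsigned mu_irq_handler(void) /* e.g. the M4 side's ISR drains it */
{
    mu_irq_pending = 0;
    return mu_tx_reg;
}

int main(void)
{
    mu_send(0xBEEF);
    if (mu_irq_pending)
        printf("got 0x%X\n", mu_irq_handler());
    return 0;
}
```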

I recommend stripping down your test application to narrow down the possible cause, e.g. replace the eCSPI communication with a dummy wait loop, or, instead of an incoming RPMsg message, use another task which triggers your eCSPI thread.
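A host-side sketch of that second idea, with every name invented for the sketch: a plain trigger thread stands in for the incoming RPMsg message, and a counter increment stands in for the eCSPI transfer, so the trigger/handler interaction can be exercised without any of the real hardware in the loop.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static int triggers;  /* stand-in for "RPMsg message arrived" events */
static int handled;   /* stand-in for completed eCSPI transfers */

static void *trigger_task(void *arg)   /* replaces the RPMsg stimulus */
{
    (void)arg;
    for (int i = 0; i < 5; ++i) {
        pthread_mutex_lock(&lock);
        ++triggers;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *ecspi_task(void *arg)     /* dummy work instead of real SPI */
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (handled < 5) {
        while (triggers == handled)    /* wait for the next trigger */
            pthread_cond_wait(&cv, &lock);
        ++handled;                     /* "transfer" done */
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, trigger_task, NULL);
    pthread_create(&t2, NULL, ecspi_task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("handled=%d\n", handled);
    return 0;
}
```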

Regards, Andy

Dear @andy.tx,

you’re right, there is no problem on the WinCE side. On the WinCE side (A7) I request some data from the FreeRTOS side (M4). I was referring to the RPMsg implementation on the M4 side.

I think the M4 crashes if an interrupt from the MU occurs while it is sending or receiving over eCSPI.
But the priorities of MU_M4_IRQn (Messaging Unit interrupt) and eCSPI3_IRQn (eCSPI interrupt) are different.
As I said, the program runs fine if the interrupt occurs while the eCSPI task is not active.

I commented out just the part where SPI sends and receives, and my application worked.
I also commented out just the RPMsg part, and it worked as well.
Therefore I suspect that the combination of eCSPI and RPMsg is the problem,
but I don’t know what exactly goes wrong.

Best regards

I have probably solved the problem.
It seems that an interrupt from the MU crashes the M4 while eCSPI is transferring data.
So before sending over SPI I disable the MU interrupt with NVIC_DisableIRQ(BOARD_MU_IRQ_NUM); and after receiving the data over SPI I re-enable it with NVIC_EnableIRQ(BOARD_MU_IRQ_NUM);.
So far it works, but some long-term tests are still necessary.
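The guard pattern described above can be modelled off-target as follows. The real code would call the CMSIS functions NVIC_DisableIRQ()/NVIC_EnableIRQ() on the MU IRQ number; here a flag stands in for the NVIC enable bit, and the eCSPI transfer is a dummy function, so the structure can be checked on a host.

```c
#include <assert.h>
#include <stdio.h>

static int mu_irq_enabled = 1; /* stand-in for the NVIC enable bit */

static void disable_mu_irq(void) { mu_irq_enabled = 0; } /* models NVIC_DisableIRQ(BOARD_MU_IRQ_NUM) */
static void enable_mu_irq(void)  { mu_irq_enabled = 1; } /* models NVIC_EnableIRQ(BOARD_MU_IRQ_NUM) */

static int ecspi_transfer(int tx)
{
    /* The MU interrupt must stay masked for the whole transfer. */
    assert(!mu_irq_enabled);
    return tx + 1; /* dummy "received" word */
}

int main(void)
{
    disable_mu_irq();              /* mask MU IRQ before touching eCSPI */
    int rx = ecspi_transfer(41);
    enable_mu_irq();               /* unmask once the transfer is done */

    printf("rx=%d mu_irq_enabled=%d\n", rx, mu_irq_enabled);
    return 0;
}
```

Note that while the MU interrupt is masked, incoming RPMsg messages are only delayed, not lost, as long as they are processed once the interrupt is re-enabled.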

Dear @Kuzco

Thank you for posting this solution. If I can find time to debug this, I will have a look at the issue and create a real fix; I just can’t promise when this will happen.

Regards, Andy