LPI2C FIFO watermark value

Hello,

I have recently done some testing with the LPI2C I2C interface on a Toradex iMX8DX Colibri board, and I'm not sure how to interpret some register values from the Reference Manual (Rev. 0, 05/2020).

From the lpi2c-imx driver, if I read the PARAM register to get the FIFO size, I get 0x44. So I have a read and a write FIFO of 2^4 = 16 words (and I expect to be able to store up to 16 bytes in the RX FIFO). According to the iMX8 Reference Manual it should be 4 words, so I'm a bit surprised!
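
For reference, this is roughly how I decode the FIFO sizes from PARAM. It is only a sketch based on the decode done by the mainline i2c-imx-lpi2c driver, not the exact driver code; the register offset and field positions should be double-checked against the RM:

```c
#include <linux/io.h>
#include <linux/printk.h>

#define LPI2C_PARAM	0x04	/* FIFO size parameter register (offset as in i2c-imx-lpi2c.c) */

/*
 * Sketch only: PARAM[3:0] (MTXFIFO) and PARAM[11:8] (MRXFIFO) encode the
 * FIFO depths as powers of two, so a field value of 4 means 2^4 = 16 words.
 */
static void lpi2c_print_fifo_sizes(void __iomem *base)
{
	u32 param = readl(base + LPI2C_PARAM);
	unsigned int txfifosize = 1 << (param & 0x0f);
	unsigned int rxfifosize = 1 << ((param >> 8) & 0x0f);

	pr_info("lpi2c: TX FIFO %u words, RX FIFO %u words\n",
		txfifosize, rxfifosize);
}
```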

When I read a 9-byte frame, I would like to set RXWATER to 9 to trigger an interrupt at the end of the transfer and fully use the FIFO. However, if I set it to 9, no interrupt is triggered, as if 9 were invalid.

Previously it was set to 8, and that seemed to trigger interrupts properly.

My understanding is that the RXWATER value is the number of words/bytes to store in the RX FIFO before an interrupt is triggered.
Did I misunderstand something?
And how is it possible that the iMX8 reports a 16-word FIFO?

Kind regards

Hi @ykrons !

Thanks for your question.

We are going to check the NXP Reference Manual about this.

Also, have you tried asking this question in the NXP community forum?

And could you please share how you are carrying out your tests? Source code, hardware setup, and screenshots can be helpful :slight_smile:

Best regards,

Hi @henrique.tx,

I made a mistake. I was setting RXWATER to 9 when I wanted to receive 9 bytes, but I have to set it to 8 so that an interrupt is generated on the 9th byte.

"The Receive Data Flag is set whenever the number of words in the receive FIFO is greater than RXWATER."
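
In other words, to be interrupted once the Nth byte is in the FIFO, the watermark has to be N - 1. As a minimal sketch (assuming the RXWATER field sits in the upper half of MFCR, as in the mainline i2c-imx-lpi2c driver; the offset should be checked against the RM):

```c
#include <linux/io.h>

#define LPI2C_MFCR	0x58	/* master FIFO control register (offset as in i2c-imx-lpi2c.c) */

/*
 * Sketch: RDF asserts once the RX FIFO holds MORE than RXWATER words,
 * so an interrupt after the Nth received byte needs RXWATER = N - 1.
 * (TXWATER is simply left at 0 here.)
 */
static void lpi2c_expect_bytes(void __iomem *base, unsigned int nbytes)
{
	writel((nbytes - 1) << 16, base + LPI2C_MFCR);	/* RXWATER = N - 1 */
}
```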

That has fixed my broken driver, but I still see a weird behavior: in rare cases, it seems the last interrupt of a frame is not triggered. I will share the code and how I'm testing it.

Regards

Hi @ykrons,

Thanks for the update. Please remember to share the code and how you’re testing it if you’re still facing the second issue.

Best regards,

Hi @gclaudino.tx,

More details about my tests:

  • I'm using a modified version of the i2c driver (Read more than 256 bytes via i2c with iMX8X - #6 by ykrons) to be able to send I2C frames bigger than 256 bytes.
  • I have a custom carrier board connected, via a dedicated I2C bus, to a microcontroller which implements a home-made protocol. The same microcontroller with the same carrier board works with an iMX7.
  • I have a custom tool that reads a block of 5300 bytes of data over I2C.
  • The read is split into 3 commands that read chunks of 2048, 2048 and 1204 bytes.
  • The tool stresses the I2C bus by reading in a loop and stops the first time the data read doesn't match the expected data (a rough sketch of this loop is shown after the list).
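
The stress loop boils down to something like the sketch below. The bus path, slave address and the expected reference data are placeholders, and the command/framing side of the home-made protocol is omitted:

```c
/*
 * Rough sketch of the stress loop (bus path, slave address and the
 * expected reference data are placeholders; the command side of the
 * home-made protocol is omitted).
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

#define I2C_DEV		"/dev/i2c-17"	/* placeholder bus */
#define SLAVE_ADDR	0x50		/* placeholder address */
#define BLOCK_SIZE	5300

int main(void)
{
	static const size_t chunks[] = { 2048, 2048, 1204 };	/* 5300 bytes total */
	unsigned char buf[BLOCK_SIZE], ref[BLOCK_SIZE];
	unsigned long iter;
	int fd;

	fd = open(I2C_DEV, O_RDWR);
	if (fd < 0 || ioctl(fd, I2C_SLAVE, SLAVE_ADDR) < 0) {
		perror("i2c setup");
		return 1;
	}

	memset(ref, 0, sizeof(ref));	/* fill with the expected pattern (protocol specific) */

	for (iter = 0; ; iter++) {
		size_t off = 0;

		for (size_t i = 0; i < 3; i++) {
			if (read(fd, buf + off, chunks[i]) != (ssize_t)chunks[i]) {
				fprintf(stderr, "short read at iteration %lu\n", iter);
				return 1;
			}
			off += chunks[i];
		}
		if (memcmp(buf, ref, BLOCK_SIZE) != 0) {
			fprintf(stderr, "data mismatch at iteration %lu\n", iter);
			return 1;
		}
	}
}
```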

Communication typically ends like this:


where the STOP condition is not generated by the iMX8.

I have modified the driver a bit to set RXWATER to the size of the remaining data, rather than 0, at the end of the frame, as I suspect that no interrupt is generated for the last byte. However, I still sometimes get an error as if a byte had been lost, and I'm now suspecting it is not at the end of the frame.
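
The idea of the change, as a sketch (assuming a helper shaped like the mainline lpi2c_imx_set_rx_watermark(), with RXWATER in the upper half of MFCR; this is not my exact code):

```c
#include <linux/io.h>

#define LPI2C_MFCR	0x58	/* master FIFO control register (offset as in i2c-imx-lpi2c.c) */

/*
 * Sketch: while plenty of data remains, keep the watermark at half the
 * FIFO (8 on a 16-word FIFO, i.e. an interrupt every 9th byte); for the
 * tail of the frame program remaining - 1 so that RDF fires exactly when
 * the last expected byte lands in the FIFO, instead of dropping the
 * watermark to 0.
 */
static void lpi2c_update_rx_watermark(void __iomem *base,
				      unsigned int remaining,
				      unsigned int rxfifosize)
{
	unsigned int water;

	if (remaining > rxfifosize)
		water = rxfifosize / 2;
	else
		water = remaining ? remaining - 1 : 0;

	writel(water << 16, base + LPI2C_MFCR);	/* RXWATER field */
}
```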

Regards

Hi @gclaudino.tx,

I get a better view now with additional tests.
Checking the data read by the driver, it appears it sometimes skips one byte "somewhere" in the frame. The result is that the driver keeps waiting for another byte at the end of the frame, which it will never receive.

The detailed logs of the driver show that the driver is called each time the RXWATER level is reached (RXWATER is set to 8 as long as more than 16 bytes remain to transfer) and that it is able to read 9 bytes from the FIFO as expected.

I have noticed that sometimes the driver is able to read 16 and even 17 bytes (I suppose the 17th byte arrives while the other bytes in the FIFO are being processed). I guess the iMX8 is overloaded, which may cause the interrupt handler to run late and the FIFO to fill up (the typical interval between interrupts is around 250 us, but it is around 500 us when 16/17 bytes are read).
Unfortunately, it seems one byte is lost between this read and the next one.
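
For reference, the read path drains the FIFO roughly as in the sketch below (in the spirit of lpi2c_imx_read_rxfifo(), not the exact driver code). The loop keeps popping MRDR until the RXEMPTY bit reads back set, which is how a late interrupt can return 16 or even 17 bytes:

```c
#include <linux/bits.h>
#include <linux/io.h>

#define LPI2C_MRDR	0x70		/* master receive data register (offset as in i2c-imx-lpi2c.c) */
#define MRDR_RXEMPTY	BIT(14)		/* reads back set when the RX FIFO is empty */

/* Sketch: pop the RX FIFO until it reports empty, return the byte count. */
static unsigned int lpi2c_drain_rx_fifo(void __iomem *base, u8 *buf,
					unsigned int max)
{
	unsigned int count = 0;

	while (count < max) {
		u32 data = readl(base + LPI2C_MRDR);

		if (data & MRDR_RXEMPTY)
			break;
		buf[count++] = data & 0xff;
	}

	return count;
}
```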

Here is part of the driver's logs:
[75304.349475] i2c i2c-17: <lpi2c_imx_read_rxfifo> Bytes read : A0 00 40 18 88 04 88 04 FF
[75304.349930] i2c i2c-17: <lpi2c_imx_read_rxfifo> Bytes read : FF FF FF 00 00 00 00 00 00 FF FF FF FF A1 00 40
[75304.350150] i2c i2c-17: <lpi2c_imx_read_rxfifo> Bytes read : 88 04 88 04 FF FF FF FF 00

Byte 0x18 is missing between the last two blocks, but it is visible on a scope.

I expect the LPI2C block to stall the I2C bus while waiting for the FIFO to be emptied, and thus avoid any data loss, but that doesn't seem to be the case.

Here is the scope screenshot with the missing byte present:

Thanks for your comments on that issue.
I’m wondering:

  • Why are there these latency spikes? It is a minimal system with only Weston running.
  • Is there a way to keep the latency low?
  • Why is a byte lost when the FIFO is full?

Kind regards

Hi,

One additional point: the hardware sends a NACK on the last byte read, which means the lower layer has received all the expected bytes (the sum of all the queued READ commands) and so generates the NACK to signal the end of the transfer.
So I think all the bytes are properly received by the hardware, but for some unknown reason some bytes don't reach the FIFO or are maybe overwritten.

Regards

Hi @ykrons !

Sorry for the delay.

We will ask internally about your questions and come back as soon as we have something to share.

Since you are not using the default driver, we would need to understand what could be going wrong with the modification that you applied, and that can take time.

Also, as asked before, have you tried asking this question in the NXP community forum?

Just to be sure:

  • Is using the default driver not possible for your use case?
  • While using the default driver, do you also get the latency spikes, and is a byte also lost?

Best regards,

Hi @henrique.tx ,

I have posted a question on the NXP community forum. I'll keep you updated on the outcome.

Using the default driver is not an option, because we have to read blocks of data up to 2070 bytes, which is way above the 256-byte limitation of the default driver.
With both the default driver and the modified one, no data is lost with frames smaller than 256 bytes.

About the latency: both the default and the modified driver show some latency glitches. With some logs added to the original driver, I can see a mean value around 225 us with some glitches at 350-430 us.

Kind regards

Hi @ykrons !

Sorry for the delay.

Could you please share a minimal userspace I2C example so we can understand how you are using the I2C-related functions?

Best regards,

Hi @ykrons !

Do you have any news on this topic? Were you able to come up with a minimal I2C userspace example?

Best regards,

Hi @henrique.tx ,

Sorry, not yet.
I plan to get back to this topic in the coming days, probably next week.

Regards

Hi @ykrons !

Did you guys have time to perform more tests or come up with the minimal userspace I2C code so we can try to reproduce the behavior?

Best regards,

Hello @henrique.tx ,

Sorry, I don't have much bandwidth for this topic, so I was not able to build a test setup. We have implemented a higher-level temporary workaround with communication retries, but I hope to get back to this topic later.

Kind regards

Hi @ykrons !

Thanks for the feedback.

We will wait for your message to continue with this support.

Have a great day!

Best regards,