Verdin SPI Data Size on Cortex-M4

Hi,
I am trying to use ECSPI communication between the M-core and an FPGA, but I can't figure out how to change the data size from 32 bits to 8 bits. When using the CMSIS SPI example from the MCUXpresso SDK (cmsis_ecspi_int_loopback_transfer, adjusted to use pins instead of loopback), the control command:

DRIVER_MASTER_SPI.Control(ARM_SPI_MODE_MASTER | ARM_SPI_CPOL0_CPHA0 | ARM_SPI_MSB_LSB | ARM_SPI_SS_MASTER_SW | ARM_SPI_DATA_BITS(8), TRANSFER_BAUDRATE);

has absolutely no influence on the transfer. Most importantly, ARM_SPI_DATA_BITS(8) does nothing, but changing the slave-select mode or the speed changes nothing either. The transfer keeps operating correctly only when uint32_t values are sent.
I then decided to try without CMSIS and use the "ecspi_loopback" example from the driver_examples folder, but a first look at the ECSPI documentation shows that struct ecspi_transfer_t has uint32_t send/receive buffers.
I know that the physical FIFO is 64 × 32 bits, but is it really impossible to send/receive 8-bit elements (e.g. char)?

Hi

I'm not familiar with the MCUXpresso SDK, but the eCSPI transfer size is specified in the CONREG register, BURST_LENGTH field. In the documentation you referenced, that should be the burstLength field of struct ecspi_master_config_t.
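
For illustration, a register-level sketch of what that field does, assuming the i.MX ECSPI CONREG layout (BURST_LENGTH in bits 31:20, holding the number of bits minus one; the field macro names follow the usual NXP device-header pattern and may need adjusting):

#include "fsl_device_registers.h" /* provides ECSPI_Type, ECSPI1 and the field macros */

/* Hypothetical helper: program an n-bit burst, n = 1..4096.
   BURST_LENGTH = CONREG[31:20] and holds (n - 1). */
static void ecspi_set_burst_bits(ECSPI_Type *base, uint32_t n)
{
    base->CONREG = (base->CONREG & ~ECSPI_CONREG_BURST_LENGTH_MASK)
                 | ECSPI_CONREG_BURST_LENGTH(n - 1U);
}

/* e.g. ecspi_set_burst_bits(ECSPI1, 8); for 8-bit transfers */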

Hi Edward,
Thank you for your answer!
I will try changing the burst length, but I am still confused why txData and rxData are strictly of type uint32_t* (I would expect void*, with my limited C knowledge). The other functions in this documentation also always expect data as uint32_t*. With limited bandwidth, it is inefficient to widen 8-bit integers into 32-bit ints. The other solution is to concatenate every four 8-bit integers into one transfer package (see the sketch below), but I'm sure there is a more subtle solution than that. Maybe I'm just missing something.
(screenshot: ecspi_transfer_t API documentation)
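
For what it's worth, the concatenation workaround mentioned above could look like the following sketch (pack_bytes_msb_first is a hypothetical helper; it assumes a 32-bit burst shifts the most significant bit out first, so bytes keep their on-wire order):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: pack a byte array into uint32_t FIFO words, MSB-first.
   Trailing bytes of the last word are zero-padded. Returns words written. */
static size_t pack_bytes_msb_first(const uint8_t *src, size_t len, uint32_t *dst)
{
    size_t words = 0;
    for (size_t i = 0; i < len; i += 4) {
        uint32_t w = 0;
        for (size_t j = 0; j < 4 && (i + j) < len; j++) {
            w |= (uint32_t)src[i + j] << (8U * (3U - j));
        }
        dst[words++] = w;
    }
    return words;
}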

That's a question for the developers of this SW. The eCSPI data register is 32 bits wide, which is probably why they used uint32_t*.
The eCSPI HW specifies the total number of bits in a single transfer for the automatic chip-select function, which works for up to 4096 bits per transfer. For longer transfers you need to IOMUX your CS pad for GPIO function and toggle the CS pin high/low from software. With a GPIO CS you may fix BURST_LENGTH to 8/16/32 bits and fill the TX buffer 8/16/32 bits at a time, then toggle CS low/high after all your bytes are sent/received (see the sketch below).
I also wonder why there's a dataSize field (bytes) in addition to the burstLength field (bits). burstLength could be unlimited instead of capped at 4096, and dataSize could then be calculated from burstLength…
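
A sketch of the GPIO chip-select approach described above, assuming the fsl_gpio and fsl_ecspi drivers and a CS pad already IOMUXed as GPIO (GPIO1 pin 10 is only an example):

#include "fsl_ecspi.h"
#include "fsl_gpio.h"

#define CS_GPIO     GPIO1 /* example instance; use the pad you IOMUXed */
#define CS_GPIO_PIN 10U

/* Hypothetical wrapper: one software-framed transfer of arbitrary length.
   burstLength stays fixed at 8/16/32 bits; CS is held low across all words. */
static void spi_transfer_gpio_cs(ECSPI_Type *base, ecspi_transfer_t *xfer)
{
    GPIO_PinWrite(CS_GPIO, CS_GPIO_PIN, 0U);  /* assert CS (active low) */
    ECSPI_MasterTransferBlocking(base, xfer); /* clock all words out */
    GPIO_PinWrite(CS_GPIO, CS_GPIO_PIN, 1U);  /* deassert CS */
}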


Hi @swiss,

Thanks @Edward for your help!

@swiss could you please also post your question on the NXP Community? I think they can help you better with these questions about their SDK. Meanwhile, I'll check the documentation here and see if I can find something that might help you.

Best Regards,
Hiago.


@hfranco.tx I just saw your message while formulating this answer. I will post the question in the NXP Community; I'll leave my findings here as well.

@Edward
I think dataSize tells the transfer function (e.g. ECSPI_MasterTransferNonBlocking) the size of the array, since the data is passed as a uint32_t*; basically, it is just so the function knows where the array ends.
I experimented with different burst lengths and here are my observations:
When setting the burst length to 32 and sending uint32_t data where the first three numbers are 1002, 1023, 1024, the data is sent correctly (see the figure below).

But when the burst length is set to 8 with the same uint32_t data, only the last 8 bits of each number are sent (you can double-check in binary, but it is clear that 1023 is sent as 1111 1111 and 1024 as 0000 0000). I expected the whole uint32_t to be sent in four bursts, but in fact the upper bits just get lost. It is funny that the driver examples from the SDK work only because they always send numbers smaller than 256 packed inside a uint32_t; with a bigger number, they wouldn't work.
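
So to move a genuine byte stream with an 8-bit burst, each byte apparently has to be staged in its own uint32_t element, with only bits 7:0 going on the wire. A sketch under that assumption (fsl_ecspi API; it presumes ECSPI_MasterInit was already called with burstLength = 8, and that dataSize counts uint32_t elements as in the SDK loopback example):

#include "fsl_ecspi.h"

/* Sketch: send up to 64 bytes, one byte per FIFO word, burstLength = 8. */
static void spi_send_bytes_8bit(ECSPI_Type *base, const uint8_t *bytes, size_t len)
{
    static uint32_t txWords[64];
    static uint32_t rxWords[64];
    for (size_t i = 0; i < len; i++) {
        txWords[i] = bytes[i]; /* only bits 7:0 are shifted out per burst */
    }
    ecspi_transfer_t xfer = {
        .txData   = txWords,
        .rxData   = rxWords,
        .dataSize = len, /* number of uint32_t elements, i.e. bursts */
        .channel  = kECSPI_Channel0,
    };
    ECSPI_MasterTransferBlocking(base, &xfer);
}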

Afterwards I tried a burst length of 64, but the data was instead sent in 32-bit bursts as in the first figure. It looks like when the CS is software controlled, it won't send two uint32_t chunks in one burst; I probably have to use the GPIO CS method.

Edit: I was able to send 64-bit bursts, and I don't even think I changed anything in the code.
I would still love to understand what's going on a bit better.

Yes. burstLength tells the same thing, but it specifies the number of bits instead of the number of bytes. Never mind.

This is correct behavior. BURST_LENGTH is what specifies the transfer size.

Assuming the SW does nothing weird and your 64-1 setting really gets written to the BURST_LENGTH register field, the difference from the first figure should be that there is no CS pulse between two adjacent 32-bit bursts. It should be a single 64-bit transfer, with the CS edges marking the whole 64 bits. When the FIFO is used properly (the TX register loaded in time), the SCK clock should be uniform during the whole 64-bit transfer, with no gap in SCK between the two 32-bit halves.
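
As a concrete check of that 64-1 arithmetic, a configuration sketch (EXAMPLE_ECSPI_CLOCK_FREQ is a placeholder for the board's actual module clock):

#include "fsl_ecspi.h"

#define EXAMPLE_ECSPI_CLOCK_FREQ 24000000U /* placeholder; use your board's value */

static void spi_init_64bit_bursts(void)
{
    ecspi_master_config_t masterConfig;
    ECSPI_MasterGetDefaultConfig(&masterConfig);
    masterConfig.burstLength = 64; /* driver should program 64 - 1 = 63 into BURST_LENGTH */
    ECSPI_MasterInit(ECSPI1, &masterConfig, EXAMPLE_ECSPI_CLOCK_FREQ);
    /* Each burst now takes two uint32_t words from the TX FIFO; CS frames the
       full 64 bits and SCK runs gap-free as long as the FIFO is kept fed. */
}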


I expected the 32-bit data to be sent in four bursts, but if this is the correct behavior, then I think all my questions are answered.

Yes, I somehow made it work. Two 32-bit integers are now sent correctly in one 64-bit burst.
Thank you!