Disable ECSPI in a device tree overlay

Hi,
I am running a program on the Cortex-M4 of the Verdin iMX8M Mini that uses two SPI interfaces: ecspi1 and ecspi2. I have set this code up to boot automatically, but both interfaces freeze after Linux boots. My guess is that both SPIs must be disabled in the Linux device tree, so I created a device tree overlay and activated it with torizoncore-builder. I am completely new to device tree customization, so I simply added two nodes that set status = "disabled" to the overlay that was already enabled by default on TorizonCore (verdin-imx8mm_lt8912_overlay.dts) and renamed it verdin-imx8mm_lt8912_noSPI_overlay.dts:

// SPDX-License-Identifier: GPL-2.0-or-later OR MIT
/*
 * Copyright 2020-2021 Toradex
 */

// Verdin DSI to HDMI Adapter orderable at Toradex.

/dts-v1/;
/plugin/;

/ {
	compatible = "toradex,verdin-imx8mm";
};

&gpu {
	status = "okay";
};

&hdmi_lontium_lt8912 {
	status = "okay";

	port {
		lt8912_1_in: endpoint {
			remote-endpoint = <&mipi_dsi_bridge1_out>;
		};
	};
};

/* Verdin I2C_2_DSI */
&i2c2 {
	status = "okay";
};

&lcdif {
	status = "okay";
};

&mipi_dsi {
	#address-cells = <1>;
	#size-cells = <0>;
	status = "okay";

	port@1 {
		mipi_dsi_bridge1_out: endpoint {
			remote-endpoint = <&lt8912_1_in>;
			attach-bridge;
		};
	};
};

&pwm1 {
	status = "disabled";
};


/* Disable Verdin SPI */
&ecspi1 {
	status = "disabled";
};

&ecspi2 {
	status = "disabled";
};


You can see my adjustments at the end of the file. I built the new image, unpacked it, and deployed it to the board. The process seems to be successful, since overlays.txt now contains verdin-imx8mm_lt8912_noSPI_overlay.dtbo, but the SPI still freezes after Linux starts.

Am I disabling the SPIs correctly, or is there some other way to disable them? I couldn't find this information anywhere else.

Thanks in advance!

Hello @swiss,
Do you have serial access to the module? Have you checked whether the right overlay is being loaded by U-Boot?
You should see something along these lines right at the beginning of the boot process (this is from a different module):

Loading DeviceTree: imx7d-colibri-emmc-eval-v3.dtb
66357 bytes read in 15 ms (4.2 MiB/s)
80 bytes read in 10 ms (7.8 KiB/s)
Applying Overlay: colibri-imx7_lcd-vga_overlay.dtbo

Additionally, could you try to collect the kernel output of the failed boot? Does it print anything on the output?

Thank you,
Rafael Beims

Hi @rafael.tx,
The right overlay is being loaded. Here is the full output over UART during startup:

U-Boot SPL 2020.04-5.6.0+git.7d1febd4af77 (Jan 01 1970 - 00:00:00 +0000)
DDRINFO: start DRAM init
DDRINFO: DRAM rate 3000MTS
DDRINFO:ddrphy calibration done
DDRINFO: ddrmix config done
Normal Boot
Trying to boot from MMC1
NOTICE:  BL31: v2.2(release):toradex_imx_5.4.70_2.3.0-g835a8f67b2
NOTICE:  BL31: Built : 00:00:00, Jan  1 1970


U-Boot 2020.04-5.6.0+git.7d1febd4af77 (Jan 01 1970 - 00:00:00 +0000)

CPU:   i.MX8MMDL rev1.0 1600 MHz (running at 1200 MHz)
CPU:   Industrial temperature grade (-40C to 105C) at 31C
Reset cause: POR
DRAM:  1 GiB
MMC:   FSL_SDHC: 0, FSL_SDHC: 1
Loading Environment from MMC... OK
Fail to setup video link
In:    serial
Out:   serial
Err:   serial
Model: Toradex Verdin iMX8M Mini DualLite 1GB WB IT V1.1B, Serial# 06944123
Carrier: Toradex Dahlia V1.1C, Serial# 10952646

 BuildInfo:
  - ATF 835a8f6
  - U-Boot 2020.04-5.6.0+git.7d1febd4af77

flash target is MMC:0
Net:   eth0: ethernet@30be0000
Fastboot: Normal
Normal Boot
Hit any key to stop autoboot:  0
37864 bytes read in 29 ms (1.2 MiB/s)
## Starting auxiliary core stack = 0x20020000, pc = 0x1FFE030D...
switch to partitions #0, OK
mmc0(part 0) is current device
Scanning mmc 0:1...
Found U-Boot script /boot.scr
973 bytes read in 12 ms (79.1 KiB/s)
## Executing script at 47000000
5162 bytes read in 22 ms (228.5 KiB/s)
65018 bytes read in 27 ms (2.3 MiB/s)
53 bytes read in 23 ms (2 KiB/s)
Applying Overlay: verdin-imx8mm_lt8912_noSPI_overlay.dtbo
1738 bytes read in 31 ms (54.7 KiB/s)
12174168 bytes read in 287 ms (40.5 MiB/s)
Uncompressed size: 30591488 = 0x1D2CA00
9179071 bytes read in 222 ms (39.4 MiB/s)
## Flattened Device Tree blob at 44000000
   Booting using the fdt blob at 0x44000000
   Loading Device Tree to 000000007d6d9000, end 000000007d70bfff ... OK
Modify /vpu_g1@38300000:status disabled
Modify /vpu_g2@38310000:status disabled
Modify /vpu_h1@38320000:status disabled
Delete node /cpus/cpu@2
Delete node /cpus/cpu@3
Update node /thermal-zones/cpu-thermal/cooling-maps/map0, cooling-device prop
Update node /pmu, interrupt-affinity prop

Starting kernel ...

[    0.068678] No BMan portals available!
[    0.069187] No QMan portals available!
[    1.204940] imx_sec_dsim_drv 32e10000.mipi_dsi: Failed to attach bridge: 32e10000.mipi_dsi
[    1.213246] imx_sec_dsim_drv 32e10000.mipi_dsi: failed to bind sec dsim bridge: -517, retry 0
[    1.803929] rtc-ds1307 0-0032: hctosys: unable to read the hardware clock
[    2.814530] imx6q-pcie 33800000.pcie: failed to initialize host
[    2.820464] imx6q-pcie 33800000.pcie: unable to add pcie port.
Starting version 244.5+
[    6.592081] debugfs: Directory '30020000.sai' with parent 'imx8mm-wm8904' already present!
[    6.600540] debugfs: File 'Headphone Jack' in directory 'dapm' already present!
[    7.571804] mcp25xxfd spi2.0: CRC read error: computed: 6c2a received: ffff - data: be 00 04 ff ff ff ff
[    7.586108] mcp25xxfd spi2.0: CRC read of clock register resulted in a bad CRC mismatch - hw not found
[    7.598657] mcp25xxfd spi2.0: Probe failed, err=84

TorizonCore 5.6.0+build.13 verdin-imx8mm-06944123 ttymxc0

verdin-imx8mm-06944123 login:

Could you confirm the addresses of the SPI interfaces that you are using on the M4 side?
I ask because ecspi1 is disabled by default, while ecspi2 is enabled for the spidev debug device:
http://git.toradex.cn/cgit/linux-toradex.git/tree/arch/arm64/boot/dts/freescale/imx8mm-verdin-dahlia.dtsi?h=toradex_5.4-2.3.x-imx#n37

I can also see some errors when probing the mcp2518 CAN controller, which is connected via SPI, but on ecspi3. I think the best way to make sure is to check the addresses you are using and compare them with:
http://git.toradex.cn/cgit/linux-toradex.git/tree/arch/arm64/boot/dts/freescale/imx8mm.dtsi?h=toradex_5.4-2.3.x-imx#n844
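For reference, the ECSPI controller nodes in that file look roughly like this (a trimmed sketch from memory; the unit addresses follow the i.MX8M Mini memory map, and most properties are omitted here):

ecspi1: spi@30820000 {
	compatible = "fsl,imx8mm-ecspi", "fsl,imx51-ecspi";
	reg = <0x30820000 0x10000>;
	status = "disabled";
};

ecspi2: spi@30830000 {
	compatible = "fsl,imx8mm-ecspi", "fsl,imx51-ecspi";
	reg = <0x30830000 0x10000>;
	status = "disabled";
};

ecspi3: spi@30840000 {
	compatible = "fsl,imx8mm-ecspi", "fsl,imx51-ecspi";
	reg = <0x30840000 0x10000>;
	status = "disabled";
};

So ecspi1 is the controller at 0x30820000, ecspi2 is the one at 0x30830000, and the mcp2518 CAN controller sits on ecspi3 at 0x30840000.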

It could also help to enable one interface at a time (on the M4 side) and test each one separately to see if you can make it work.

If this is what you mean by addresses: I am using the predefined macros, so the addresses match the ones from your link. This is how they are defined in my code (I used the cmsis_ecspi_int_loopback_transfer example as a guide):

#define ECSPI1_TRANSFER_SIZE     8 
#define ECSPI1_TRANSFER_BAUDRATE 2000000U
#define ECSPI1_MASTER_BASEADDR   ECSPI1
#define ECSPI1_MASTER_CLK_FREQ                                                                 \
    (CLOCK_GetPllFreq(kCLOCK_SystemPll1Ctrl) / (CLOCK_GetRootPreDivider(kCLOCK_RootEcspi1)) / \
     (CLOCK_GetRootPostDivider(kCLOCK_RootEcspi1)))
#define ECSPI1_MASTER_TRANSFER_CHANNEL kECSPI_Channel0


#define ECSPI2_TRANSFER_SIZE     64  // ECSPI transfers 32-bit words, so a 64-bit burst sends two uint32 values at once
#define ECSPI2_TRANSFER_BAUDRATE 2000000U
#define ECSPI2_MASTER_BASEADDR   ECSPI2
#define ECSPI2_MASTER_CLK_FREQ                                                                 \
    (CLOCK_GetPllFreq(kCLOCK_SystemPll1Ctrl) / (CLOCK_GetRootPreDivider(kCLOCK_RootEcspi2)) / \
     (CLOCK_GetRootPostDivider(kCLOCK_RootEcspi2)))
#define ECSPI2_MASTER_TRANSFER_CHANNEL kECSPI_Channel0

And these are the macros from \devices\MIMX8MM6\MIMX8MM6_cm4.h together with the actual addresses:

/** Peripheral ECSPI1 base address */
#define ECSPI1_BASE                              (0x30820000u)
/** Peripheral ECSPI1 base pointer */
#define ECSPI1                                   ((ECSPI_Type *)ECSPI1_BASE)
/** Peripheral ECSPI2 base address */
#define ECSPI2_BASE                              (0x30830000u)
/** Peripheral ECSPI2 base pointer */
#define ECSPI2                                   ((ECSPI_Type *)ECSPI2_BASE)

You are right. When I test them separately, ecspi2 keeps working but ecspi1 halts when Linux starts, and I don't see why. Especially since ecspi2 should be the one that is enabled in Linux and causing the interference, not ecspi1.

Here is some of the initialization code I use, which might help you identify the problem:

    //ECSPI1 config and init
    CLOCK_SetRootMux(kCLOCK_RootEcspi1, kCLOCK_EcspiRootmuxSysPll1); /* Set ECSPI1 source to SYSTEM PLL1 800MHZ */
    CLOCK_SetRootDivider(kCLOCK_RootEcspi1, 2U, 5U);                 /* Set root clock to 800MHZ / 10 = 80MHZ */
    ECSPI_MasterGetDefaultConfig(&masterConfig1);
    masterConfig1.baudRate_Bps   = ECSPI1_TRANSFER_BAUDRATE;
    masterConfig1.channel = kECSPI_Channel0;
    masterConfig1.burstLength = 8;
    //SS software controlled as GPIO
    gpio_pin_config_t gpio_config1 = {kGPIO_DigitalOutput, 1, kGPIO_NoIntmode};
    GPIO_PinInit(ECSPI1_CS_MUX_GPIO, ECSPI1_CS_MUX_GPIO_PIN, &gpio_config1);
    ECSPI_MasterInit(ECSPI1_MASTER_BASEADDR, &masterConfig1, ECSPI1_MASTER_CLK_FREQ);

    ECSPI_MasterTransferCreateHandle(ECSPI1_MASTER_BASEADDR, &g_m_handle1, ECSPI1_MasterUserCallback, NULL);
    //ECSPI2 config and init    
    CLOCK_SetRootMux(kCLOCK_RootEcspi2, kCLOCK_EcspiRootmuxSysPll1); /* Set ECSPI2 source to SYSTEM PLL1 800MHZ */
    CLOCK_SetRootDivider(kCLOCK_RootEcspi2, 2U, 5U);                 /* Set root clock to 800MHZ / 10 = 80MHZ */
    ECSPI_MasterGetDefaultConfig(&masterConfig2);
    masterConfig2.baudRate_Bps   = ECSPI2_TRANSFER_BAUDRATE;
    masterConfig2.channel = kECSPI_Channel0;
    masterConfig2.burstLength = 64;

    //SS software controlled as GPIO
    gpio_pin_config_t gpio_config2 = {kGPIO_DigitalOutput, 1, kGPIO_NoIntmode};
    GPIO_PinInit(ECSPI2_SS0_GPIO, ECSPI2_SS0_GPIO_PIN, &gpio_config2);


    ECSPI_MasterInit(ECSPI2_MASTER_BASEADDR, &masterConfig2, ECSPI2_MASTER_CLK_FREQ);

    ECSPI_MasterTransferCreateHandle(ECSPI2_MASTER_BASEADDR, &g_m_handle2, ECSPI2_MasterUserCallback, NULL);

It is also interesting that the ECSPI1_MasterUserCallback function receives kStatus_Success even though the data is not transferred to the slave.

One more point regarding CLOCK_SetRootMux: I wasn't sure whether both interfaces can use the same clock source, so at first I tried kCLOCK_EcspiRootmuxSysPll1 for one and kCLOCK_EcspiRootmuxSysPll3 for the other, but my tests showed that both can in fact use the same clock.

Please tell me if you need any further information in order to understand the problem.
Thanks in advance.

EDIT:
One other thing that crossed my mind is that the ecspi1 pins are muxed for uart3 by default. I do the pin muxing on the M4 core to use them for SPI, but it might be that Linux tries to use them as uart3 and blocks them. I guess I should try disabling uart3 in the device tree.

UPDATE:
The problem was that uart3, not ecspi1, had to be disabled in the device tree, because uart3 is the default mux on these pins.
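
For anyone else hitting this, the extra fragment in the overlay is roughly the following (the &uart3 label is the one used in the i.MX8M Mini device tree; adjust it if your BSP names the node differently):

/* uart3 is the default function on the pins the M4 uses for ECSPI1,
 * so it has to be disabled on the Linux side */
&uart3 {
	status = "disabled";
};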