Using CAN bus on Verdin iMX8M Plus board

Verdin iMX8M Plus V1.1
Verdin Development Board V1.1
Linux verdin-imx8mp-14762705 5.15.77-6.1.0+git.349786b46e61 #1-TorizonCore SMP PREEMPT Wed Dec 28 09:58:45 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

I am trying to use the two CAN bus controllers. According to the documentation, CAN is already enabled on Torizon Verdin boards. It is also supposed to have can-utils installed, yet if I execute the ip link command, the shell doesn't know what that is. Am I missing something?

Thanks,
Steve

  1. Try installing can-utils inside the container.
  2. Invoke the container with the command prompt option and run the can-utils tools (see the sketch below).
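
Note that can-utils is a package rather than a single command: it installs tools such as candump and cansend, while the ip command comes from the iproute2 package. A minimal sketch of step 1, assuming a Debian-based Torizon container:

# Inside a running Debian-based container (the host OS itself ships no package manager):
apt-get update && apt-get install -y iproute2 can-utils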

bash: can-utils: command not found.
I also tried logging into the SoM via SSH as the torizon user. Same result.

Greetings @Evets,

Have you already taken a look at our article on using CAN in TorizonCore: How to Use CAN on TorizonCore | Toradex Developer Center

If yes, then what exact part of the process are you having issues with?
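
As a quick sanity check, you can also verify from the module's own shell (outside any container) that the kernel registered both CAN controllers; if they are enabled in the device tree, you should see something like:

# On the host shell of the SoM, no container needed:
$ ls /sys/class/net/ | grep can
can0
can1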

Best Regards,
Jeremias

Hi @jeremias.tx ,
Did you read the first thing I wrote? Everything is supposed to be enabled already on the Verdin Plus module, yet the command-line CAN tools associated with that don't seem to be there. This is what is mentioned in the introductory articles, so I don't know whether the device tree has it turned on or not.

Steve

Did you read the first thing I wrote?

Yes I did, you stated: “According to the documentation, CAN is already enabled on Torizon Verdin boards”. You didn't link to any specific article or document, so I'm not sure how I'm supposed to know what you are referring to here. Hence my clarifying with the article link in my response.

Furthermore, I followed the documentation using a Verdin i.MX8M Plus on the same software version as you and it works fine for me, as seen here:

# All below commands are being executed in the custom container as specified by the documentation.
# Here you can see the 2 can interfaces:
root@verdin-imx8mp-06849059:/# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
3: ethernet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:14:2d:68:82:23 brd ff:ff:ff:ff:ff:ff
4: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN mode DEFAULT group default qlen 10
    link/can
5: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN mode DEFAULT group default qlen 10
    link/can
6: mlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
    link/ether c2:09:f4:d5:85:4a brd ff:ff:ff:ff:ff:ff permaddr d8:c0:a6:cf:72:59
7: uap0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
    link/ether 3a:d4:da:a1:01:b2 brd ff:ff:ff:ff:ff:ff permaddr d8:c0:a6:cf:70:59
8: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:bc:97:d8:19 brd ff:ff:ff:ff:ff:ff
# Configuring and bringing up one of the CAN interfaces as instructed by the documentation seems to work:
root@verdin-imx8mp-06849059:/# ip link set can0 type can bitrate 500000
root@verdin-imx8mp-06849059:/# ip link set can0 up
root@verdin-imx8mp-06849059:/# ip link show can0
4: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 16 qdisc pfifo_fast state UP mode DEFAULT group default qlen 10
    link/can
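
For reference, once the interface is up you can exercise it with the can-utils tools; the identifier and payload below are arbitrary test values:

# Dump incoming frames on can0 in one terminal:
root@verdin-imx8mp-06849059:/# candump can0
# From a second terminal, send a test frame (ID 0x123, data bytes DE AD BE EF):
root@verdin-imx8mp-06849059:/# cansend can0 123#DEADBEEF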

As seen above, everything seems to be working as documented. Now according to your previous messages:

bash: can-utils: command not found.

It is also supposed to have can-utils installed, yet if I execute the ip link command, the shell doesn't know what that is.

It sounds like you are not installing and running these in a container as the documentation specifies.
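
For reference, a sketch of starting such a container, using the base image from the documentation's Dockerfile shown below (host networking and the NET_ADMIN capability are what let the container see and configure the SoM's CAN interfaces):

# Start an interactive shell with access to the host's network interfaces:
docker run --rm -it --network=host --cap-add="NET_ADMIN" torizon/arm64v8-debian-shell:2 bash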

Best Regards,
Jeremias

The documentation I referred to was what you pointed to.
Well, with Torizon version 6.1.x, if I SSH into the board from my PC (not using the terminal from VS Code), I can execute the ip link command and it does show I have two CAN controllers, can0 and can1. So from that point, I guess it is working on the board. This did not work in 5.7.

Now my next step, building a Docker image according to the same documentation, uses the command: “docker build -t can-torizon-sample .”. I copied what is on the page in the “Creating a Container for CAN Communication” section and named the file dockerfilecan.
So I tried running:
docker build -t dockerfilecan .
I get:
invalid argument “./dockerfilecan” for “-t, --tag” flag: invalid reference format

Looking up Docker commands, I came across building from standard input. Here is the syntax that worked for me:

docker build - < Dockerfile

Thanks
Steve

Well, with Torizon version 6.1.x, if I SSH into the board from my PC (not using the terminal from VS Code), I can execute the ip link command and it does show I have two CAN controllers, can0 and can1. So from that point, I guess it is working on the board. This did not work in 5.7.

Then what were you referring to in your original message? There you said you were on TorizonCore version 6.1.0 and claimed that the ip link command wasn't working. Now you bring up version 5.7 despite never mentioning it until now. You're providing very conflicting information.

Also, ip link does work on 5.7; I just checked.

I am not sure what it was expecting other than a name:tag on the commandline?

Well, according to your error message, you didn't execute “docker build -t dockerfilecan .”; you executed “docker build -t ./dockerfilecan .”. The -t flag is supposed to be an arbitrary string that essentially becomes the name of the new container image you are building. You passed a file path, ./dockerfilecan, which is not a proper format, as the error message says. Also, the value of -t doesn't need to be the name of your Dockerfile.
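
A valid invocation would look something like this (the tag can-torizon-sample is just an example, and since your file is named dockerfilecan rather than the default Dockerfile, you would also point Docker at it with -f):

$ docker build -t can-torizon-sample -f dockerfilecan .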

Best Regards,
Jeremias

I tried both ways, with and without the ./. Neither worked. The syntax above did work, though you have to be in the directory where the Dockerfile is; you can't attach a path to it.

It could be that I executed it using the terminal connection instead of a direct SSH connection. I also thought I was running 6.1, but it appears that is not the case. And 6.1 will no longer allow the tunnel connection, while the 5.7 version doesn't boot all the way. I have looked for an earlier version.

What is the best way to tell the Torizon version once it is installed?

Thanks,
Steve

I tried both ways, with and without the ./. Neither worked.

As demonstrated below, it works without the “./”:

$ docker build -t dockerfilecan .
Sending build context to Docker daemon   7.68kB
Step 1/4 : ARG IMAGE_ARCH=arm64v8
Step 2/4 : FROM torizon/$IMAGE_ARCH-debian-shell:2
 ---> f9788bd0f3a1
Step 3/4 : WORKDIR /home/torizon
 ---> Using cache
 ---> c2d854c1410a
Step 4/4 : RUN apt-get -y update && apt-get install -y     nano     python3     python3-pip     python3-setuptools     git     iproute2     can-utils     python3-can     && apt-get clean && apt-get autoremove && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 526309641c0a
Successfully built 526309641c0a
Successfully tagged dockerfilecan:latest

If I have the “./” then I get the same error you got previously:

$ docker build -t ./dockerfilecan .
invalid argument "./dockerfilecan" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.

you have to be in the directory where the Dockerfile is; you can't attach a path to it.

If you're running the docker build command from a different directory than the one where the Dockerfile actually is, you can specify a path with the --file flag, as documented here: docker build | Docker Docs
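
For example, something along these lines (the paths here are hypothetical):

$ docker build -t can-torizon-sample --file /home/torizon/project/dockerfilecan /home/torizon/project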

It could be I executed it using the terminal connection instead of a direct ssh connection.

Everything I demonstrated above works over both a serial connection and SSH; this shouldn't matter.

And 6.1 will no longer allow the tunnel connection

What do you mean by this? I’m able to access my board via SSH with no issues and I’m running 6.1.

5.7 version doesn’t boot all the way.

There is a known issue where the graphical containers are not working properly on 5.7.2, but this only results in no graphical output. If you connect via serial or SSH, you can see that the system boots and you can log in. Is this the issue you are referring to, or something else?

What is the best way to tell the Torizon version once it is installed?

There are multiple ways. You can look at the contents of /etc/issue:

$ cat /etc/issue
TorizonCore 6.1.0+build.1 \n \l

Or the contents of /etc/os-release:

$ cat /etc/os-release
ID=torizon
NAME="TorizonCore"
VERSION="6.1.0+build.1 (kirkstone)"
VERSION_ID=6.1.0-build.1
PRETTY_NAME="TorizonCore 6.1.0+build.1 (kirkstone)"
DISTRO_CODENAME="kirkstone"
BUILD_ID="1"
ANSI_COLOR="1;34"
VARIANT="Docker"

Or via uname -a as you did in your original message in this thread:

$ uname -a
Linux verdin-imx8mp-06849059 5.15.77-6.1.0+git.349786b46e61 #1-TorizonCore SMP PREEMPT Wed Dec 28 09:58:45 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

Best Regards,
Jeremias

It was a Windows firewall issue. Apparently the moses.exe app got updated or something, because the Windows firewall exception list contained several entries, some named moses.exe and some just moses (four .exe entries and two plain ones). I allowed them on ALL the network types, as well as Visual Studio Code, and now I can detect the board and connect for debugging.

Steve