i.MX8 XOpenDisplay(NULL) weston-vivante

This one baffles me. With a little tweaking to put all of the source in one directory, this works fine:

https://github.com/Immediate-Mode-UI/Nuklear/tree/master/demo/x11

The app runs in an ugly terminal window instead of taking over the screen, but it runs.

The main.c file has this exact line at startup in both this working demo and the x11_opengl2 example:

xw.dpy = XOpenDisplay(NULL);

Run the x11 demo this way:

docker run -e ACCEPT_FSL_EULA=1 -d --rm --name=fred \
             --net=host --cap-add CAP_SYS_TTY_CONFIG \
             -v /dev:/dev -v /tmp:/tmp -v /run/udev/:/run/udev/ \
             --device-cgroup-rule='c 4:* rmw' \
             --device-cgroup-rule='c 13:* rmw' \
             --device-cgroup-rule='c 199:* rmw' \
             --device-cgroup-rule='c 226:* rmw' \
             seasonedgeek/nuklear-x11-demo --developer weston-launch \
             --tty=/dev/tty7 --user=torizon


docker run -e ACCEPT_FSL_EULA=1 -d --rm --name=ethyl --user=torizon \
             -v /dev/dri:/dev/dri -v /dev/galcore:/dev/galcore \
             -v /tmp:/tmp \
             --device-cgroup-rule='c 199:* rmw' \
             --device-cgroup-rule='c 226:* rmw' \
             seasonedgeek/nuklear-x11-demo launch-x11-demo

The x11_opengl2 demo is from here, slightly tweaked so it looks for all of the source in the same directory:

https://github.com/Immediate-Mode-UI/Nuklear/tree/master/demo/x11_opengl2

Built and run the same way:

docker run -e ACCEPT_FSL_EULA=1 -d --rm --name=fred \
             --net=host --cap-add CAP_SYS_TTY_CONFIG \
             -v /dev:/dev -v /tmp:/tmp -v /run/udev/:/run/udev/ \
             --device-cgroup-rule='c 4:* rmw' \
             --device-cgroup-rule='c 13:* rmw' \
             --device-cgroup-rule='c 199:* rmw' \
             --device-cgroup-rule='c 226:* rmw' \
             seasonedgeek/nuklear-x11-opengl-demo --developer weston-launch \
             --tty=/dev/tty7 --user=torizon

docker run -e ACCEPT_FSL_EULA=1 -d --rm --name=ethyl --user=torizon \
             -v /dev/dri:/dev/dri -v /dev/galcore:/dev/galcore \
             -v /tmp:/tmp \
             --device-cgroup-rule='c 199:* rmw' \
             --device-cgroup-rule='c 226:* rmw' \
             seasonedgeek/nuklear-x11-opengl-demo launch-x11-opengl-demo

Fails to open display.

    win.dpy = XOpenDisplay(NULL);
    if (!win.dpy) die("Failed to open X display\n");
    {
        /* check glx version */
        int glx_major, glx_minor;
        if (!glXQueryVersion(win.dpy, &glx_major, &glx_minor))
            die("[X11]: Error: Failed to query OpenGL version\n");
        if ((glx_major == 1 && glx_minor < 3) || (glx_major < 1))
            die("[X11]: Error: Invalid GLX version!\n");
    }

It doesn’t even get to the checks after XOpenDisplay(). I’m building within the same VM where I build the working demo.

Everything links against aarch64 libraries, as it should:

root@verdin-imx8mp-06848973:/home/torizon# ldd demo
	linux-vdso.so.1 (0x0000ffff93af1000)
	libX11.so.6 => /usr/lib/aarch64-linux-gnu/libX11.so.6 (0x0000ffff93911000)
	libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff93866000)
	libGL.so.1 => /usr/lib/aarch64-linux-gnu/libGL.so.1 (0x0000ffff937ca000)
	libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff93654000)
	/lib/ld-linux-aarch64.so.1 (0x0000ffff93ac1000)
	libxcb.so.1 => /usr/lib/aarch64-linux-gnu/libxcb.so.1 (0x0000ffff9361c000)
	libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff93608000)
	libGAL.so => /usr/lib/aarch64-linux-gnu/libGAL.so (0x0000ffff93445000)
	libXdamage.so.1 => /usr/lib/aarch64-linux-gnu/libXdamage.so.1 (0x0000ffff93432000)
	libXfixes.so.3 => /usr/lib/aarch64-linux-gnu/libXfixes.so.3 (0x0000ffff9341c000)
	libXext.so.6 => /usr/lib/aarch64-linux-gnu/libXext.so.6 (0x0000ffff933f8000)
	libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff933c7000)
	libdrm.so.2 => /usr/lib/aarch64-linux-gnu/libdrm.so.2 (0x0000ffff933a4000)
	libXau.so.6 => /usr/lib/aarch64-linux-gnu/libXau.so.6 (0x0000ffff93390000)
	libXdmcp.so.6 => /usr/lib/aarch64-linux-gnu/libXdmcp.so.6 (0x0000ffff9337a000)
	libbsd.so.0 => /usr/lib/aarch64-linux-gnu/libbsd.so.0 (0x0000ffff93355000)
	libmd.so.0 => /usr/lib/aarch64-linux-gnu/libmd.so.0 (0x0000ffff93339000)

Does the Weston vivante container just not like version 2 of OpenGL even though it loads the libraries for it?

Greetings @seasoned_geek,

According to NXP documentation, the Vivante GPU and driver for the i.MX8M Plus support the following: OpenGL ES 1.1/2.0/3.0/3.1.

They only seem to claim support specifically for OpenGL ES; standard desktop OpenGL support is more of an open question.

All I can say is that while OpenGL ES is labeled as a subset of OpenGL, this isn’t the whole truth. If I recall correctly, ES 1.X isn’t a proper subset of any OpenGL version. ES 2.X shares some features with OpenGL 2.1 but still isn’t a “proper” subset.

Anyway, this is to say that even though NXP says the GPU supports ES 2.0, that doesn’t necessarily translate to proper desktop OpenGL 2.0 support.

In the repo where you are getting these demos I see various other OpenGL versions as well as a single opengles example. Have you tried any of these other references?

Best Regards,
Jeremias

Jeremias,

Not yet. Trying a lot of different packages looking for “path of least pain” both for development and for FDA approval.

Will try some others soon.

Let me know how your tests work out; it’s always good to hear which UI technologies work and which don’t, for future reference.

Sorry to take so long.

You can get X11 to run on the target despite all attempts to lock it down. It helps if one edits this file

sudo cat /var/sota/storage/docker-compose/docker-compose.yml

so that it looks like this:

services:
  weston:
    cap_add:
    - CAP_SYS_TTY_CONFIG
    container_name: App_weston
    device_cgroup_rules:
    - c 4:0 rmw
    - c 4:7 rmw
    - c 13:* rmw
    - c 199:* rmw
    - c 226:* rmw
    environment:
      ACCEPT_FSL_EULA: '1'
    image: torizon/weston-vivante@sha256:b39bd723e554a95522bd6774796d6af7cac7cf4960e1c142b9a6f45e62691f45
    network_mode: host
    volumes:
    - source: /tmp
      target: /tmp
      type: bind
    - source: /dev
      target: /dev
      type: bind
    - source: /run/udev
      target: /run/udev
      type: bind
version: '2.4'

Then cold boot first.

docker pull seasonedgeek/xclock-demo

docker run --rm -d -v /tmp:/tmp -v /var/run/dbus:/var/run/dbus -v /dev/galcore:/dev/galcore --device-cgroup-rule='c 199:* rmw' seasonedgeek/xclock-demo

There is also this demo container for NanoGUI:

docker pull seasonedgeek/base_build_container
docker run --rm -it -v /tmp:/tmp -v /var/run/dbus:/var/run/dbus -v /dev/galcore:/dev/galcore --device-cgroup-rule='c 199:* rmw' seasonedgeek/base_build_container /bin/sh

su torizon

export DISPLAY=:0

export XDG_RUNTIME_DIR=/tmp/1000-runtime-dir

example1

Sorry, image too big to upload

These are quite interesting results. Thank you for sharing them, this should be helpful to others as well.

Best Regards,
Jeremias