Tools for GUI test automation

Hi everyone interested in GUI test automation on Torizon,

do you have any ideas or experience to share about GUI test automation on Torizon (Wayland/Weston)?

On my side, I was exploring possibilities for something like a Python script (e.g. using the pytest framework) that simulates mouse and keyboard inputs, takes screenshots or reads the UI, and compares the result with the expected one to decide pass or fail.
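As a rough illustration of what I have in mind, here is a minimal pytest-style sketch. All three helper functions are placeholders I made up, not real APIs; they stand in for whatever tools end up doing the real work (ydotool, python-uinput, a Weston screenshot mechanism, ...):

```python
# Sketch of the intended pass/fail loop: inject input, capture the UI,
# compare against a stored reference frame.

def inject_text(text):
    """Placeholder: would drive a virtual keyboard to type `text`."""
    pass

def grab_screenshot():
    """Placeholder: would capture the current frame; returns canned pixels."""
    return [0, 0, 0]

def load_reference(name):
    """Placeholder: would load the stored 'golden' frame for this test."""
    return [0, 0, 0]

def test_hello_world_rendered():
    inject_text("hello world")
    assert grab_screenshot() == load_reference("hello_world")
```

With real implementations behind those helpers, pytest would collect and run `test_hello_world_rendered` like any other test and report pass/fail from the assertion.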

From a previous chat before creating this thread, the first attempt is ydotool (GitHub - ReimuNotMoe/ydotool: Generic command-line automation tool (no X!)), which has good references here and here, but I’m having issues running it on Torizon OS 6.8 with the version available in the Torizon Debian repo:

  1. Install and run the server in the Weston container, which runs with enough permissions to work with the mouse, keyboard, etc.:

$ docker exec -it torizon-weston-1 /bin/bash
root:/home/torizon# apt update && apt install ydotoold ydotool
root:/home/torizon# ydotoold
ydotoold: listening on socket /tmp/.ydotool_socket

  2. Open another terminal in the same container and try to simulate typing a string or other operations:

root:/home/torizon# ydotool type "hello world"
ydotool: notice: Using ydotoold backend

  3. ydotoold receives the request and sometimes actually shows “hello world” in the GUI, but crashes immediately:

ydotoold: listening on socket /tmp/.ydotool_socket
ydotoold: accepted client
terminate called after throwing an instance of 'std::system_error'
  what():  Unknown error -1629887088
Aborted (core dumped)

I am not sure if it is because of my config, because the ydotool build in the repo is very old (0.1.8-3 at the moment), or because of some other Torizon configuration.

Any feedback about general GUI test automation and/or ydotool on Torizon is welcome!
Cheers,
ldvp

Hi @ldvp

Glad to see you back on our community here.

I tried a quick setup similar to yours, but using Torizon 7.3.0. My Debian containers in this case are based on Debian 12, which I assume is newer than yours.

Unfortunately I don’t even get as far as you do; ydotoold errors out when trying to open uinput:

root@verdin-imx8mp-07106925:/home/torizon# ydotoold
ydotoold: listening on socket /tmp/.ydotool_socket
terminate called after throwing an instance of 'std::runtime_error'
  what():  failed to open uinput device
Aborted (core dumped)

I am getting the same version (0.1.8-3) of ydotool as you are, so it may be that a newer version would be helpful. Assuming this is the canonical page for it, there is a v1.0.4 release from 2023 which may be a better starting point.

Interestingly, it seems that development has gone quiet, with some plans to rewrite it in JavaScript. Not sure if there is a better home for this project.

Incidentally, I am able to run this from an Ubuntu container on my AMD64 desktop, and it has v0.1.8 as well. Maybe it is something specific to the Arm architecture?

Can you try and build a newer version and test that?

Drew

I didn’t know about ydotoold, but it seems to me that it’s not supported anymore.
@drew.tx Do you know about any other tools similar to this one to run automated UI tests (free of charge)?

Hi @vix

I agree the lack of activity in the repo makes it seem abandoned, unfortunately. I don’t know of any other tools but am asking around. I did find dotool with a quick Google search but cannot vouch for it.

Drew

Hi @drew.tx and @vix, thanks for your feedback on this topic.

What do you think about working directly with /dev/uinput?

I’ve just tried the python3-uinput library to do that directly from Python code; it is included in the Debian repos of Torizon OS 6.8.2 (bullseye/main, arm64, version 0.11.2-2.1+b3). I ran the hello world from the README, but my COG-based GUI is not receiving the typed input.

Do you observe the same result on your test setup?

Cheers,
ldvp

Hi @drew.tx and @vix, quick update about the hello world with python-uinput: it actually works. The only issue was the timing between the construction of the virtual input device and the emission of the events; with a 1-second wait between them it worked!

import uinput
import time

events = [uinput.KEY_A]

# Create the virtual input device
with uinput.Device(events) as device:
    time.sleep(1)  # Give the system time to recognize the device

    # Simulate key press and release
    device.emit(uinput.KEY_A, 1)  # Key down
    device.emit(uinput.KEY_A, 0)  # Key up

To me it seems a valid option on the input side; now, to close the loop, I am looking for a mechanism to acquire a screenshot of the UI and check whether the GUI is reacting as expected.
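For the comparison step itself, whatever capture mechanism ends up working, something as simple as a pixel mismatch ratio with a tolerance might be enough. A sketch, assuming frames arrive as flat, equally sized pixel sequences (the function names are my own, not from any library):

```python
def mismatch_ratio(expected, actual):
    """Fraction of positions where two equally sized frames differ."""
    if len(expected) != len(actual):
        raise ValueError("frame sizes differ")
    if not expected:
        return 0.0
    diffs = sum(1 for e, a in zip(expected, actual) if e != a)
    return diffs / len(expected)

def frames_match(expected, actual, tolerance=0.01):
    """Pass if at most `tolerance` of the pixels differ (default 1%)."""
    return mismatch_ratio(expected, actual) <= tolerance
```

A small tolerance leaves headroom for antialiasing or cursor-blink differences between the reference and the captured frame; an exact `==` comparison would be brittle for that.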

I see that wcap-capture is not available in Torizon, weston-screenshooter requires running Weston in debug mode, and I am not sure if /dev/fb0 is a valid option.

Do you have other ideas?

Cheers,
ldvp

That’s an annoying but interesting finding. I took a crack at the dotool that I linked earlier, and it requires golang, so it may be a bit tricky to get working. Ideally we could use a multi-stage Dockerfile to create a Weston-automation container. Maybe I’ll give that a shot today, as I think it will be easier than cobbling together Python code to do this testing.
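For the multi-stage idea, something along these lines might work. This is an untested sketch: `<dotool-repo-url>` is a placeholder, and I am assuming dotool builds with a plain `go build`; the real repository URL, build steps, and any C library dependencies should be checked against its README.

```dockerfile
# Untested sketch of a multi-stage build for a "Weston automation" image:
# build the Go tool in a full golang image, ship only the binary.

FROM golang:bookworm AS build
RUN git clone <dotool-repo-url> /src/dotool
WORKDIR /src/dotool
RUN go build -o /usr/local/bin/dotool .

FROM debian:bookworm-slim
COPY --from=build /usr/local/bin/dotool /usr/local/bin/dotool
```

The point of the two stages is that the Go toolchain never ends up in the runtime image, which keeps the automation container small.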

Drew

So far I’ve not been able to get dotool to work. It seems to crash looking for some specific file or device, but it’s not clear to me what is missing.

@ldvp regarding running in debug mode, assuming that can be configured in weston.ini, you can make that change fairly easily. Just map a volume to a local file at the container-side path /etc/xdg/weston/weston.ini containing the setting you need. More details here. Hopefully that will get you what you need for weston-screenshooter.
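For reference, the volume mapping described above would look something like this in a docker-compose service definition (a sketch; the service name and the actual contents of weston.ini depend on your setup):

```yaml
services:
  weston:
    # ... existing image/devices/environment for the Weston container ...
    volumes:
      # overlay a local weston.ini onto the container-side config path
      - ./weston.ini:/etc/xdg/weston/weston.ini
```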

Also, I’m still looking internally to see if any of my colleagues have suggestions for this.

Drew

Hi @drew.tx, thanks for the feedback and for checking internally.

I agree Weston in debug mode is an option; if I cannot find a good alternative I’ll probably go for it.

My only doubt is how much testing with Weston in debug mode differs from testing with Weston in normal mode; in general, debug mode may result in different behavior.

Until the next update, cheers,
ldvp