I am running a custom image based on TorizonCore 6.3 build 4. With this image I am including an application bundle via a docker-compose file and provisioning with a shared-data file in offline mode. See below for the provisioning section of my tcbuild.yaml file.
When I plug in my USB drive I can confirm that the offline_updates_source directory exists.
The update contains only an application update via docker-compose and has two directories: images and metadata. The images directory contains a .images file and my docker-compose.lock file. The metadata directory contains: director, docker, and image-repo.
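For context, this is roughly how I sanity-check that layout on the drive. This is just a generic shell sketch (the directory names are from my setup, and the demo builds a matching tree in a temp directory rather than touching a real mount point):

```shell
# Sketch: check the expected offline-update layout under a given directory.
# Directory names match my update; the mount point is illustrative.
check_layout() {
    dir="$1"
    for d in images metadata metadata/director metadata/docker metadata/image-repo; do
        if [ -d "$dir/$d" ]; then
            echo "ok: $d"
        else
            echo "missing: $d"
        fi
    done
}

# Demo: build a matching layout in a temp dir and check it.
demo=$(mktemp -d)
mkdir -p "$demo/images" "$demo/metadata/director" "$demo/metadata/docker" "$demo/metadata/image-repo"
check_layout "$demo"
rm -rf "$demo"
```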
However, it seems the update is never initiated and the client is stuck looking for connections even though the directory exists, and I am unsure why. Any help would be appreciated!
I believe the issue here is that your path "/media/'USB DISK'/update" contains a space. As defined, it would probably not be processed correctly by the update client, which would explain why the update is not being detected.
I would suggest avoiding spaces in the path if possible, as a space could cause further issues and complications going forward.
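To illustrate why the space is risky (a generic shell sketch, not the update client's actual parsing): an unquoted path containing a space is split into two separate arguments, so any component that handles the path without careful quoting sees the wrong location:

```shell
# Sketch: a path with a space breaks naive (unquoted) handling.
path="/media/USB DISK/update"

# Helper that just reports how many arguments it received.
count_args() { echo $#; }

count_args $path      # unquoted: split at the space -> 2 arguments
count_args "$path"    # quoted: one argument, as intended
```

Renaming the volume so the mount point has no space sidesteps the problem entirely.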
That seemed to fix the detection issue, but now I am running into another issue while the update is being applied. See the image below for the output of journalctl -f -u aktualizr*. Three errors stand out to me:
“Unable to read filesystem statistics: error code -1”
“curl error 60 (http code 0): SSL peer certificate or SSH remote key was not OK”
“Offline loading failed: Bad manifest type”
I am using Docker images that were built with buildx. I know that caused issues on the torizoncore-builder side of lockbox, but that seemed to be fixed on that end. Is this a similar issue here?
However, this fix did not make it into the 6.3.0 release that you are using, so you should move to a more recent image that includes it. Do note that our upcoming 6.4.0 release will also contain this fix.
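If it helps, you can confirm which image version is actually running on the device from its os-release data. A minimal sketch, parsing a sample file here instead of the device's real /etc/os-release (the sample VERSION string is only illustrative):

```shell
# Sketch: read the image version from an os-release style file.
# On the device this would be /etc/os-release; a sample file stands in here.
sample=$(mktemp)
cat > "$sample" <<'EOF'
ID=torizon
VERSION="6.3.0+build.4"
EOF

# Extract the value between the quotes of the VERSION= line.
version=$(grep '^VERSION=' "$sample" | cut -d'"' -f2)
echo "$version"
rm -f "$sample"
```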