Credentials Storage Location

Hi there,

I’ve just completed the process of moving our docker data directory to encrypted storage and I’d like to do the same for the credentials used by aktualizr and docker. Would you be able to point me in the right direction please?

If I make a bind mount such that /var/sota points to encrypted storage, would that be sufficient?

Best regards,
Lloyd

Hi, I’m the Toradex AI Assistant. The Toradex Support team will follow up with you shortly!

Hello Lloyd,

Thank you for reaching out with your question. To securely store credentials for aktualizr and Docker on encrypted storage, you can follow a similar approach to what you did with your Docker data. Here’s a step-by-step guide:

  1. Identify Credential Storage Locations:

    • For Docker, credentials are typically stored in ~/.docker/config.json.
    • For aktualizr on Torizon OS, the provisioning credentials and device data are stored under /var/sota by default, with configuration fragments read from /etc/sota/conf.d/.
  2. Encrypt the Storage:

    • You can use a tool like cryptsetup to create an encrypted partition or file-backed storage, for example a LUKS-encrypted partition or an encrypted loopback file (see the sketch after this list).
  3. Move Credentials to Encrypted Storage:

    • Once the encrypted storage is set up, move the credential files to this location.
    • Update any configuration files or environment variables to point to the new location of these credentials.
  4. Automate Mounting and Access:

    • Ensure that the encrypted storage is mounted at boot time or when needed, using scripts or systemd services.
    • Consider using a secure method to unlock the encrypted storage, such as a hardware security module (HSM) or a secure key management service.
  5. Test the Setup:

    • Verify that both Docker and aktualizr can access their credentials from the new location and that the encryption is functioning as expected.
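
As a minimal sketch of steps 2-4 (not Torizon-specific and untested; the image path, mapper name, and mount points are placeholders to adapt to your own layout and key handling):

    # Create and open a file-backed LUKS container, then mount it.
    dd if=/dev/zero of=/var/credstore.img bs=1M count=64
    cryptsetup luksFormat /var/credstore.img          # prompts for a passphrase
    cryptsetup open /var/credstore.img credstore
    mkfs.ext4 /dev/mapper/credstore
    mkdir -p /media/encrypted
    mount /dev/mapper/credstore /media/encrypted

    # Move the credentials and keep the original path working via a bind mount.
    mkdir -p /media/encrypted/docker-creds
    cp -a ~/.docker/. /media/encrypted/docker-creds/
    mount --bind /media/encrypted/docker-creds ~/.docker

For step 4 you would typically add matching /etc/crypttab and /etc/fstab entries (or systemd mount units) so the container is unlocked and mounted at boot.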

By following these steps, you can enhance the security of your credentials on Torizon. Always ensure that your encryption keys are stored securely and that access to the encrypted storage is tightly controlled.

I hope this helps you achieve your goal. If you have any more questions, feel free to ask.

Best regards,
Toradex AI Assistant

Please also check if these links can help you solve your problem:

Use a credential helper to handle docker authentication on Torizon OS.

Greetings @lblackbeard,

Let me try to provide some advice/guidance on this use-case of yours.

First of all let’s address Aktualizr. Off the top of my head I see 2 options here.

Option 1: As you already know, Aktualizr uses /var/sota as the location to store its data. This location is in fact configurable. As you can see here, nearly every path for Aktualizr can be configured: aktualizr/docs/ota-client-guide/modules/ROOT/pages/aktualizr-config-options.adoc at toradex-master · toradex/aktualizr · GitHub

So in theory you could change the configuration to point to your encrypted storage location. You’ll also need to change the configurations found in secondaries.json so all the secondary update types (docker-compose, bootloader, etc) also use the new location you want.

Option 2: If you don’t want to change the default configuration then I guess you could try to use symlinks, or bind mounts as you already proposed so that /var/sota links to/points to your encrypted location.
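
As a rough, untested illustration of both options (the target path /media/encrypted/sota is a placeholder, and the [storage] option name should be verified against the config-options document linked above):

    # Option 1 sketch: drop a fragment into /etc/sota/conf.d/ redirecting the storage path.
    printf '[storage]\npath = "/media/encrypted/sota"\n' > /etc/sota/conf.d/99-encrypted-storage.toml

    # Option 2 sketch: leave the configuration alone and bind mount over the default location.
    mount --bind /media/encrypted/sota /var/sota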

Please keep in mind these are rough suggestions and I have not tested either thoroughly.

One more point to consider here. A lot of our Aktualizr adjacent tooling assumes the data directory is located at /var/sota. For example the service that performs the auto-provisioning to Torizon Cloud assumes /var/sota. These may need to be adjusted/changed depending on how you handle this.

Now speaking about the docker credentials. First of all, when you say “docker credentials” I assume you mean the credentials for docker login. Please correct me if my assumption is wrong here.

On TorizonOS we change the default Docker configuration location to /etc/docker as seen here: meta-toradex-torizon/recipes-core/systemd/systemd-conf/system.conf-docker at scarthgap-7.x.y · torizon/meta-toradex-torizon · GitHub

This only applies to processes started by systemd, like docker-compose.service. If you run docker login as the torizon user on the command line, then it will use the normal default location. That said, I imagine when your device is in the field it will be systemd processes/services doing these actions anyway.

Similar to the Aktualizr case, you could either change the configuration so DOCKER_CONFIG just points to your encrypted storage, or use a symlink/bind mount to similar effect. Keep in mind whether you want the Docker credentials to be updatable or not. One reason we configured the location to /etc/docker is so that the location would be managed by OSTree and could be updated if needed by users.
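
For illustration, both variants might look roughly like this (untested; the drop-in name and the encrypted path are placeholders):

    # Variant A sketch: override DOCKER_CONFIG for the systemd-started service.
    mkdir -p /etc/systemd/system/docker-compose.service.d
    printf '[Service]\nEnvironment=DOCKER_CONFIG=/media/encrypted/docker-creds\n' \
        > /etc/systemd/system/docker-compose.service.d/10-docker-config.conf
    systemctl daemon-reload

    # Variant B sketch: keep /etc/docker as the path and bind mount the encrypted directory over it.
    mount --bind /media/encrypted/docker-creds /etc/docker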

Anyways, those are my initial thoughts and impressions with regards to your use-case here. I hope it was of some help to your goal. Let me know if you have any questions about anything I described.

Best Regards,
Jeremias

Excellent, thanks so much! I’ll see where this leads me

Let me know if you have any further questions. Also it would be appreciated if you could share any of your own findings or results. Would be good to know for future reference.

Best Regards,
Jeremias

We’re starting with the docker credentials but we’ve run into a bit of a speedbump. The current idea is:

  1. copy /etc/docker to encrypted storage (/media/encrypted/docker-creds)
  2. shred /etc/docker/*
  3. bind mount /media/encrypted/docker-creds to /etc/docker
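
In shell terms, roughly (shred only handles regular files and has limited effect on flash-backed or journaling filesystems):

    cp -a /etc/docker/. /media/encrypted/docker-creds/
    shred -u /etc/docker/*
    mount --bind /media/encrypted/docker-creds /etc/docker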

However, we realise that /etc/docker is present in /ostree and so will still be present after an OTA upgrade (and also present before the OTA if it's working as an overlay?). Can we configure torizoncore-builder not to put Docker's config.json into the OSTree repo?

I’m not sure if I fully understand your use-case and process here.

If you delete /etc/docker/* and capture this change with TorizonCore Builder, then why would it still be present after an OTA update? Did you already observe this behavior?

From my perspective, if a file/directory is deleted and you capture this with TorizonCore Builder, this should create a new OSTree reference with the change. When the new OSTree reference is deployed, it should not recreate this file, since the file was already deleted when the reference was made.

Best Regards,
Jeremias

Hi Jeremias,

The process mentioned is executed on first boot on every device.
Up until this point, we’ve not captured a deletion with TCB - just some “changes” folders and “.tcattr”. We weren’t aware deletion was possible? Can we describe this in the yaml build file?

Kind regards,
Lloyd

Up until this point, we’ve not captured a deletion with TCB - just some “changes” folders and “.tcattr”. We weren’t aware deletion was possible?

It's the same process. I assume you're using torizoncore-builder isolate to create this changes directory, correct? Well, this command captures most file modifications, including deletions. If you start with some Torizon OS image as a base and then delete some file in /etc from this base image, the isolate command will capture this deletion.
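
For example, an invocation along these lines (flag names from memory; please confirm with torizoncore-builder isolate --help):

    torizoncore-builder isolate \
        --remote-host 192.168.1.100 \
        --remote-username torizon \
        --remote-password <password> \
        --changes-directory changes_from_device/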

The process mentioned is executed on first boot on every device.

Okay wait, so you do rely on /etc/docker/config.json existing at boot? But then you delete it after copying it to this encrypted storage.

But your worry is that doing an OTA update will restore this file? Why would that be the case? Again I ask: have you actually tested this use-case and seen it occur?

What should happen is the following:

  • You flash an initial image that has /etc/docker/config.json
  • You copy this to your encrypted location and then delete it from /etc/docker.
  • Later on you do an OS update.
    • Since you deleted this file this change should propagate and the file should not be restored.

This follows the logic of how 3-way merges are handled for OSTree concerning files/directories in /etc as described here: Atomic Upgrades | ostreedev/ostree

For example, as a test, I took an out-of-the-box Torizon OS image. I disabled the docker-compose.service from starting on boot. This has the effect of deleting the symlink file /etc/systemd/system/multi-user.target.wants/docker-compose.service.

I then did an OTA update to another out of the box Torizon OS image, where this service is of course enabled by default. After the update succeeded the service was still disabled and /etc/systemd/system/multi-user.target.wants/docker-compose.service was still deleted.
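
In shell terms, the test was essentially (commands reconstructed from the description above):

    # Before the update: disabling the service deletes the wants symlink from /etc.
    sudo systemctl disable docker-compose.service
    ls /etc/systemd/system/multi-user.target.wants/ | grep docker-compose   # no output expected

    # Perform the OTA update and reboot, then check again:
    systemctl is-enabled docker-compose.service    # still reports "disabled"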

As per the 3-way merge process in OSTree, local modifications to /etc always take priority over the new /etc that comes with an update. If you deleted a file locally on a device, then that will remain through all updates going forward. The only way that file is being restored is if some user or process outside of OSTree restores that file on that device.

Best Regards,
Jeremias

Hi Jeremias, thanks for all this.

We’re not actually using the isolate command - we have a tcbuild.yaml with the following snippet:

customization:
    filesystem:
        - changes_common/
        - changes_arm64/

Since the config json doesn’t exist here, there’s nothing to delete?

The worry isn’t necessarily that the file will reappear in /etc/docker, it’s that it exists in /ostree/deploy/torizon/deploy/... for each baseos upgrade. We do a secure trim on the files in here and for units that have undergone an upgrade, we see config.json present in both ostree commit hash folders.

That said, we haven’t tested upgrading a unit after doing this shredding - I’ve asked @hsharma to test this out today and we’ll get back to you.

Nevertheless, I don't think this is a long-term solution, because we intend in future to upgrade to Torizon 7 and extend the CoT to OSTree - unless checks aren't done for /ostree/.../etc?

Kind regards,
Lloyd

The worry isn’t necessarily that the file will reappear in /etc/docker, it’s that it exists in /ostree/deploy/torizon/deploy/... for each baseos upgrade.

Okay, so your worry is that config.json is in the OSTree metadata at all. Well, if you don't want this file and its contents in the OSTree metadata at all, I think that really only leads us in one direction: this file should never be in your OSTree deployments to begin with. That would be the only way to solve your concerns as I understand them.

Similar to a Git repository, if you don’t want sensitive data in the repository then you should never commit it to begin with.

So then the question becomes how we get the initial config.json onto your systems to begin with. You could instead put the initial config.json in /var, which is not controlled by OSTree. Then, when you delete it after copying it to your encrypted storage, it won't exist in any OSTree metadata.

Another idea is to deliver the initial config.json via network, USB, or SD card. This would require the first boot to occur in a factory setting I suppose. That way you can make sure the config.json is copied from whatever medium transported it.

I’m sure there’s other ideas but it largely depends on your process and use-cases for these devices.

Best Regards,
Jeremias

Thanks Jeremias,

Could you please point us in the right direction RE putting it in /var?
I’m not sure I understand how config.json is generated in the first place… I’m guessing it’s just a commit of dind’s /etc/docker? Can we tell tcb not to commit this using tcbuild.yaml? Or do we have to break the yaml up into separate tcb commands and intervene somewhere along the way?
You mentioned the isolate command - I’ll have a read up on that.

Kind regards,
Lloyd

Could you please point us in the right direction RE putting it in /var?

This is not something TorizonCore Builder can do, but it's relatively easy to accomplish. In the directory of your OS image that is used by Easy Installer to install your image, there should be a file image.json. This file can be modified to install additional files into the final filesystem. This is documented here: Toradex Easy Installer Configuration Files | Toradex Developer Center

In particular look at the filelist property. For example if you want to install a file config.json into /var you could do something like this:

"filelist": [
                            "config.json:/ostree/deploy/torizon/var/"
                        ]

Of course you’ll need to put a copy of config.json in the image directory.
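
From there, a first-boot step (untested sketch; the file name and encrypted mount point are placeholders, and /ostree/deploy/torizon/var corresponds to /var on the booted system) could move the seeded file into the encrypted store and remove the original:

    # Assumes /media/encrypted has already been unlocked and mounted by this point.
    if [ -f /var/config.json ]; then
        mkdir -p /media/encrypted/docker-creds
        mv /var/config.json /media/encrypted/docker-creds/config.json
        chmod 600 /media/encrypted/docker-creds/config.json
    fi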

I’m not sure I understand how config.json is generated in the first place… I’m guessing it’s just a commit of dind’s /etc/docker?

I’m not sure I understand this point. You don’t understand how the file itself is generated?

Well, the file is not there by default with an out-of-the-box Torizon OS installation. So I assume you're generating this file yourself. Did you perform a docker login or something like this? That generates the file. I'm not sure if that's the clarification you're looking for, but this file doesn't come from Toradex.

Can we tell tcb not to commit this using tcbuild.yaml?

No. OSTree commits and tracks every file in every directory that it's responsible for, no exceptions.

You mentioned the isolate command - I’ll have a read up on that.

I don't think the isolate command is going to help you here. This command looks at the file differences between a base image filesystem and the filesystem on a running device. It then captures these differences so they can be replicated in a reproducible fashion. These differences are captured as OSTree commit data.

As far as I understand, you don't want config.json to be in OSTree at all, so this won't do anything for you on that point. I would recommend what I suggested at the start of this post; that way config.json is added to /var, which is outside the control of OSTree.

Best Regards,
Jeremias

Sherbit… I was convinced tcb was generating that file based on the user/pass we passed to it, but sure enough, it's sitting in our changes folder :man_facepalming:, put there three years ago by the contractor who did the initial port from Yocto to Torizon - I just never noticed.
Thanks for your help on this, we should be ok from here. We've been preempted by a different task in the meantime, but we'll probably post again when we pick up the Torizon OTA credential side of things.

Glad I was able to assist and help clarify things.

Best Regards,
Jeremias