How to push the compose image to the toradex board from remote sites

I have a Verdin iMX8M Plus board connected to our company dev network. I can ping the device from my laptop, but not the other way around, due to the way our network is configured and the subnet I am on. So the last step fails, where the device is trying to pull the image from the registry on my laptop. Is there a way to just push that image to the device instead of having it pull from my laptop? That would seem to always work, since I can open connections out to the device but it can’t open a connection to my laptop.

Thanks,

Steve

Hi, I’m the Toradex AI Assistant.

I might make mistakes or occasionally misunderstand you. Our support team will step in if needed.


Hi Steve,

Yes, there are several ways to push container images to your Verdin iMX8M Plus device instead of having it pull from your laptop registry. Here are the most practical solutions for your network configuration:

Docker Save and Load Method

The most straightforward approach is to use Docker’s save and load functionality:

  1. Save the container image as a tar file on your laptop:
$ docker save -o my-dockerfile.tar <username>/my-dockerfile

  2. Transfer the file to your target device using SCP:
$ scp my-dockerfile.tar torizon@X.X.X.X:/home/torizon/

  3. Load the container image on the device:
# docker load -i my-dockerfile.tar

This method works perfectly for your scenario since you can initiate the connection from your laptop to the device.
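The three steps above can be collapsed into one small script. This is a minimal sketch only: the image name and device address are placeholders, and the run() helper just prints each command so the sequence can be reviewed; replace its echo with "$@" to execute for real.

```shell
# Sketch of the save -> scp -> load workflow from the steps above.
# IMAGE and DEVICE are placeholders; substitute your own values.
IMAGE="<username>/my-dockerfile"
DEVICE="torizon@X.X.X.X"
TARBALL="my-dockerfile.tar"

# Prints each command instead of executing it; swap the echo for "$@"
# to actually run the workflow once the placeholders are filled in.
run() { echo "+ $*"; }

run docker save -o "$TARBALL" "$IMAGE"        # export image to a tar file
run scp "$TARBALL" "$DEVICE":/home/torizon/   # laptop -> device (outbound only)
run ssh "$DEVICE" docker load -i "/home/torizon/$TARBALL"   # import on device
```

Every connection here is initiated from the laptop side, which is exactly the direction your network allows.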

Alternative: Use Online Registry

If you prefer a more automated workflow, consider using an online registry like Docker Hub:

  • Push to Docker Hub from your laptop:
$ docker push <username>/<my-container-image>

  • Pull from Docker Hub on the device:
$ docker pull <username>/<my-container-image>

Performance Tip

You can compress container images to speed up their transfer to your target device. This is especially useful when using the save/load method over your network connection.
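Combining this tip with an SSH pipe, the whole transfer can be streamed and compressed in one line, with no intermediate tar file on either side. Illustrative only; the image name and address are the same placeholders as above:

$ docker save <username>/my-dockerfile | gzip | ssh torizon@X.X.X.X 'gunzip | docker load'

docker load reads the tar stream from stdin by default, so no flags are needed on the device side.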

The save and load approach should resolve your connectivity issue completely since it eliminates the need for the device to establish outbound connections to your laptop registry.

Best regards, Toradex AI Assistant.


Did you like my answer? :+1: :-1:

If I answered your question, please tick the “Solution” checkbox.

For follow-up questions to me, please use @ToradexAI in your post.

@ToradexAI @er
Well, the debugging system done by VSC through the Torizon extension is quite rigid. I would have to change the build process to do the pulling and pushing to make this work, because at the end it copies the executable over into the composed image and then runs it. I don’t know well enough how this all gets done. I understand some of it, but I would need more info.

Steve

I understand your concern about working within the Torizon IDE Extension’s build process. Let me explain how the debugging workflow works and provide some practical solutions for your network connectivity issue.

How the Torizon IDE Extension Build Process Works

The Torizon IDE Extension uses different containers for different purposes:

SDK Container

The SDK container runs on your host machine and cross-compiles your code. It uses bind-mounting to share the workspace directory:

{
    "label": "build-debug-arm64",
    "detail": "Build a debug version of the application for arm64 using\nthe toolchain from the SDK container.",
    "command": "DOCKER_HOST=",
    "type": "shell",
    "args": [
        "docker",
        "run",
        "--rm",
        "-it",
        "-v",
        "${workspaceFolder}:${config:torizon_app_root}",
        "cross-toolchain-arm64-__container__",
        "cmake",
        "--build",
        "build-arm64"
    ]
}


Debug Container

The debug container is deployed to your target device via SSH. Build files are copied through rsync, not pulled from a registry:

{
    "label": "deploy-torizon-arm64",
    "detail": "",
    "hide": true,
    "command": "rsync",
    "type": "process",
    "args": [
        "-P"
    ]
}


Solutions for Your Network Configuration

Option 1: Modify the Local Registry Configuration

The Torizon IDE Extension uses a local registry (localhost:5002). You can modify the tasks to use the save/load approach instead of registry pulls.

Option 2: Use Release Container Tasks

The run-container-torizon-release-<arch> task performs these steps:

  1. Build a container image based on Dockerfile
  2. Push the container image to the local container registry (localhost:5002)
  3. Pull the container image to your SoM
  4. Run the container using docker-compose.yml configuration

Option 3: Custom Task Modification

You can create custom tasks in your .vscode/tasks.json that:

  1. Build the container locally
  2. Save it as a tar file
  3. Transfer via SCP (which works in your network)
  4. Load and run on the device
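A minimal sketch of what such a custom task might look like in .vscode/tasks.json. The label, image name, and device address below are hypothetical placeholders, not the extension’s real task names:

{
    "label": "deploy-image-over-ssh",
    "detail": "Push the built image to the device by streaming docker save over SSH.",
    "type": "shell",
    "command": "bash",
    "args": [
        "-c",
        "docker save my-app:arm64 | ssh torizon@192.168.x.x 'docker load'"
    ]
}

You could then chain it into the existing flow with dependsOn so it runs after the build task.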

Workspace File Handling

The extension handles files differently for each container type:

  • SDK container: Mounts the workspace directory directly
  • Debug container: Copies only the build directory via SSH
  • Release container: Uses multi-stage builds with COPY commands

Since you can establish SSH connections to your device, the debug workflow should work fine - it doesn’t rely on the device pulling from your laptop registry, but rather uses direct file transfer via SSH.

Best regards, Toradex AI Assistant.


Did you like my answer? :+1: :-1:

If I answered your question, please tick the “Solution” checkbox.

For follow-up questions to me, please use @ToradexAI in your post.

Hey @EvetsMostel1,

It sounds like what you are asking for is something like our remote updates via our cloud services. As long as the device has access to the internet, you can push containers to it. This wouldn’t cover remote debugging, though there is a Remote Access feature that gives you a shell on the remote device.

-Eric

While this would work, our devices aren’t normally connected to the outside world in any way. This is for development only.
I simply need to be able to push the container to the device, instead of having it pull from my laptop. The device can’t open a connection to my laptop, but my laptop can open a two-way connection to the device. Essentially, after the push to the repository, I would need to push the container directly to the device, and then everything else would be the same. It sounds easy enough, but I can’t seem to find the hook. I know I could scp a tar of the image to the device, but that seems a bit archaic because then I would have to import it and then continue on with the debug session. There has to be a simpler way…
Steve

What about docker save piped over SSH to docker load on the device - something like:


docker save your-image:tag | ssh user@device-ip 'docker load'

Then proceed with your debug commands remotely via ssh user@device-ip 'docker run …'.

Hi Steve,

Good timing on this question, as we recently rolled out a change to the way the IDE delivers containers to the device. Instead of spinning up a registry on your laptop, it basically uses the mechanisms described above but built into the IDE so you don’t have to do anything manually. To get access to this, please use the pre-release version of the extension. In the marketplace settings, there should be a setting to switch to the prerelease. Please do let us know if this works for you, as it is a pretty invasive change to the IDE, and we want to make sure it works as we think it will for people who have the kind of issue you are describing.

Drew Moseley
Toradex

@dcasota
Thanks, I will try that!

@drew.tx

This is interesting. However, I got blindsided by both VSC and the extension on an existing project which was using powershell commands and working. The update causes a LOT of problems. I even went so far as to create a NEW C++ project (see this post: I tried using the C++ template doing a helloworld project. It had issues - #8 by ToradexAI ) and the first issue is that the basic setup, which happens with some powershell commands, uses xonsh to change the directory, and VSC won’t recognize the workspace as trusted. I tried adding the directories several times, but that didn’t work. The workaround is adding an option of "cwd": "" in some of the commands. This only happens when trying to run the port sharing routine.
Then the next issue is the actual build and running of the container. It seems to build it in my local repo, but when it tries to start the container, it doesn’t spin up because the image is not located on the device. Then when it does the xonsh service-check.xsh, it thinks it is only getting 1 argument. I haven’t been able to figure that one out, as I don’t know anything about xonsh.

So basically, I can’t even get a C++ HelloWorld to compile and run on the device.

So when you say it is invasive, it makes me cringe…

@drew.tx

OK, so I switched my C++ Hello World project to the prerelease, version 3.2.139. Here are the logs. And while I didn’t have to put in the cwd: change, VSC still gives the permissions error at startup but does run the powershell:

Executing task: xonsh /home/steve/HW1/.conf/run-container-if-not-exists.xsh --container-runtime docker --run-arguments “–rm --privileged torizon/binfmt:latest” --container-name binfmt

  • The terminal process failed to launch: Cannot launch a terminal process in an untrusted workspace with cwd \home\steve\HW1 and userHome \home\steve.
    Container Runtime: docker
    Run Arguments: --rm --privileged torizon/binfmt:latest
    Container Name: binfmt
    Container does not exist. Starting …
    Cmd: docker run --name binfmt --rm --privileged torizon/binfmt:latest

Here are the logs when I try to build it.

Executing task: xonsh .conf/validate-deps-running.xsh

:warning: VALIDATING ENVIRONMENT

OS: Linux
OS Version: 6.6.87.2-microsoft-standard-WSL2
OS Arch: x86_64
Under WSL: Ubuntu-24.04
VS Code:
1.106.3
bf9252a2fb45be6893dd8870c0bf37e2e1766d61
x64

TOR Extension: toradex.apollox-vscode@3.2.139

:white_check_mark: Environment is valid!

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: bash -c [[ ! -z “192.168.6.243” ]] && true || false

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: bash -c [[ “arm64” == “arm64” ]] && true || false

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: sshpass -p password scp -P 22 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PubkeyAuthentication=no /home/steve/HW1/docker-compose.yml torizon@192.168.6.243:~/

Warning: Permanently added ‘192.168.6.243’ (ED25519) to the list of known hosts.

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: /home/steve/HW1/.conf/remove-dangling-images.xsh 192.168.6.243:2375

Waiting for prune lock on host 192.168.6.243:2375
Pruning dangling images on 192.168.6.243:2375
Total reclaimed space: 0B

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: docker compose -p torizon stop hw1 hw1-debug && docker compose -p torizon rm -f hw1 hw1-debug && sleep 1

No stopped containers

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: xonsh /home/steve/HW1/.conf/torizon-packages.xsh arm64

Applying torizonPackages.json:
Applying to Dockerfile.debug …
:white_check_mark: Dockerfile.debug
Applying to Dockerfile.sdk …
:white_check_mark: Dockerfile.sdk
Applying to Dockerfile …
:white_check_mark: Dockerfile
torizonPackages.json applied

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: DOCKER_HOST= docker build --pull -f /home/steve/HW1/Dockerfile.sdk /home/steve/HW1 -t cross-toolchain-arm64-hw1 --build-arg IMAGE_ARCH=arm64 --build-arg GPU=-imx8 --build-arg APP_ROOT=/home/torizon/app

[+] Building 0.8s (8/8) FINISHED docker:default
=> [internal] load build definition from Dockerfile.sdk 0.0s
=> => transferring dockerfile: 1.06kB 0.0s
=> WARN: InvalidDefaultArgInFrom: Default value for ARG torizon/cross-toolchain-${IMAGE_ARCH}:${CROSS_SDK_BASE_T 0.0s
=> [internal] load metadata for docker.io/torizon/cross-toolchain-arm64:4 0.7s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 56B 0.0s
=> [1/4] FROM docker.io/torizon/cross-toolchain-arm64:4@sha256:65007ec3c44cc782e61843d2ed60fd362a268e5d6824e2626 0.0s
=> CACHED [2/4] RUN apt-get -q -y update && apt-get -q -y install && apt-get clean && apt-get autore 0.0s
=> CACHED [3/4] RUN apt-get -q -y update && apt-get -q -y install && apt-get clean && apt-get autore 0.0s
=> CACHED [4/4] WORKDIR /home/torizon/app 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:81fde3946f4fe5cb9c4ebff48b1ada9755488718c676fda133d4a16f76d0f8cc 0.0s
=> => naming to docker.io/library/cross-toolchain-arm64-hw1 0.0s

1 warning found (use docker --debug to expand):

  • InvalidDefaultArgInFrom: Default value for ARG torizon/cross-toolchain-${IMAGE_ARCH}:${CROSS_SDK_BASE_TAG} results in empty or invalid base image name (line 16)

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: make ARCH=arm64 makedir

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: DOCKER_HOST= docker run --rm -v /home/steve/HW1:/home/torizon/app cross-toolchain-arm64-hw1 make ARCH=arm64 CC=aarch64-linux-gnu-g++ debug

make: Nothing to be done for ‘debug’.

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: DOCKER_HOST= docker compose build --pull --build-arg SSHUSERNAME=torizon --build-arg APP_ROOT=/home/torizon/app --build-arg IMAGE_ARCH=arm64 --build-arg DEBUG_SSH_PORT=2230 --build-arg GPU=-imx8 hw1-debug

WARN[0000] The “DOCKER_LOGIN” variable is not set. Defaulting to a blank string.
[+] Building 0.8s (11/11) FINISHED
=> [internal] load local bake definitions 0.0s
=> => reading from stdin 718B 0.0s
=> [internal] load build definition from Dockerfile.debug 0.0s
=> => transferring dockerfile: 2.43kB 0.0s
=> [internal] load metadata for docker.io/torizon/debian:4 0.7s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 56B 0.0s
=> [1/5] FROM docker.io/torizon/debian:4@sha256:eafcfdf5ee95136c719b219cac53480ecee4cb85a3655d13d0836bb29ee3cf0f 0.0s
=> CACHED [2/5] RUN apt-get -q -y update && apt-get -q -y install openssl openssh-server rsync 0.0s
=> CACHED [3/5] RUN apt-get -q -y update && apt-get -q -y install && apt-get clean && apt-get autore 0.0s
=> CACHED [4/5] RUN mkdir /var/run/sshd && sed 's@session\srequired\spam_loginuid.so@session optional pam_ 0.0s
=> CACHED [5/5] RUN rm -r /etc/ssh/ssh*key && dpkg-reconfigure openssh-server 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:93b954d4bfd3e898a1a0bdc8ecf68191b4d622f78d3db0fe3f5382e3ef01faa0 0.0s
=> => naming to localhost:5002/hw1-debug:arm64 0.0s
=> resolving provenance for metadata file 0.0s
[+] Building 1/1
:check_mark: localhost:5002/hw1-debug:arm64 Built 0.0s

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: DOCKER_HOST= docker compose push hw1-debug

WARN[0000] The “DOCKER_LOGIN” variable is not set. Defaulting to a blank string.
[+] Pushing 16/16
:check_mark: Pushing localhost:5002/hw1-debug:arm64: d599356eaa20 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 42a25b36965f Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 27c5b0390bce Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: dfc1a8f56768 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 2b0a92b65536 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 2d150839e3fc Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: fe1e47ec2d53 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 643d70c9c98b Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 5f59aed187eb Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 154018f8f276 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 567ef5ff4ef8 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 27b609b94cd1 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 5f70bf18a086 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: c0d17df75b5a Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: fb518d894ad8 Layer already exists 0.0s
:check_mark: Pushing localhost:5002/hw1-debug:arm64: 3e5d01a55aea Layer already exists 0.0s

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: /home/steve/.local/bin/xonsh /home/steve/HW1/.conf/spin-up-down-registry.xsh down password torizon 192.168.6.243 /home/steve/HW1

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: /home/steve/.local/bin/xonsh /home/steve/HW1/.conf/spin-up-down-registry.xsh up password torizon 192.168.6.243 /home/steve/HW1

Container Runtime: docker
Run Arguments: --rm -d --network host torizonextras/ide-port-tunnel:0.0.0 sshpass -p password ssh -vv -N -R 5002:localhost:5002 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PubkeyAuthentication=no torizon@192.168.6.243
Container Name: torizon-ide-port-tunnel
Container does not exist. Starting …
Cmd: docker run --name torizon-ide-port-tunnel --rm -d --network host torizonextras/ide-port-tunnel:0.0.0 sshpass -p password ssh -vv -N -R 5002:localhost:5002 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PubkeyAuthentication=no torizon@192.168.6.243
979686d9b3244512edc365568ef73e839abb243db8b5fa35d9e58cabc7e3e825

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: xonsh /home/steve/HW1/.conf/service-check.xsh registry password 22 torizon 192.168.6.243 ‘curl --silent --max-time 5 http://localhost:5002/v2/_catalog’

Error: Expected 7 arguments, but got 1.
Error cause: Invalid argument

  • The terminal process “/usr/bin/bash ‘-c’, ‘xonsh /home/steve/HW1/.conf/service-check.xsh registry password 22 torizon 192.168.6.243 ‘curl --silent --max-time 5 http://localhost:5002/v2/_catalog’’” failed to launch (exit code: 22).
  • Terminal will be reused by tasks, press any key to close it.

Hi @EvetsMostel1

The feature I mentioned above has now been moved into the release version of the extension. You are probably best off switching to that as the dev team has moved into a new dev phase and you may be experiencing instability of the new development.

Also, if you have switched to the dev branch of the templates repo, I suggest you undo that and try this test fresh with the release version. You can also do a rm -rf ~/.apollox and then relaunch VSCode to force it to check out the templates from scratch.

Drew

I did just that, but it required taking a new directory with a hello world setup and then merging my code into that. I couldn’t get it to put the newer files into my older directory, even after I uninstalled and reinstalled the extension.

OK. But does the default template work before you merge your code in?

Can you share your project? You can upload it to share.toradex.com and then send me the link via DM if you like.

Drew

@drew.tx

Sorry, I was on another thread. Yes, it works before and after, other than my compatibility issue on the other thread.

Steve

Hi @EvetsMostel1

I’m not following you. Are you still having issues with getting containers deployed to your board?

Drew

I am working on the same project, but I’ve had 2 different issues. I finally got the thing to compile and launch on the device here, by what I described above. But I have issues with Boost logging now which I didn’t have before. The only thing I can think of is that Boost needed to be recompiled. This worked fine under the powershell build and runs fine, but the xonsh build gives me exceptions when running with logging. That is why I was asking on the other thread about compiling my code with 13.3, as it would make things a lot easier.
A question about the compiler you use to build: is it with or without a coprocessor? The one that the AI pointed me to was without, as it has the extra "none" in the name.

Steve

It’s just aarch64-linux-gnu-gcc. Full specs shown below.

Drew

root@c963717ce5bd:/# aarch64-linux-gnu-gcc -dumpspecs
*asm:
%{mbig-endian:-EB} %{mlittle-endian:-EL} %{march=*:-march=%*} %(asm_cpu_spec)%{mabi=*:-mabi=%*}

*asm_debug:
%{g*:%{%:debug-level-gt(0):}} %{ffile-prefix-map=*:--debug-prefix-map %*} %{fdebug-prefix-map=*:--debug-prefix-map %*}

*asm_debug_option:
%{g*:%{%:debug-level-gt(0):%{%:dwarf-version-gt(4):--gdwarf-5 ;%:dwarf-version-gt(3):--gdwarf-4 ;%:dwarf-version-gt(2):--gdwarf-3 ;:--gdwarf2 }}}

*asm_final:
%{gsplit-dwarf: 
       objcopy --extract-dwo 	 %{c:%{o*:%*}%{!o*:%w%b%O}}%{!c:%U%O} 	 %b.dwo 
       objcopy --strip-dwo 	 %{c:%{o*:%*}%{!o*:%w%b%O}}%{!c:%U%O}     }

*asm_options:
%{-target-help:%:print-asm-header()} %{v} %{w:-W} %{I*} %(asm_debug_option) %{gz|gz=zlib:--compress-debug-sections=zlib} %{gz=none:--compress-debug-sections=none} %{gz=zlib-gnu:--compress-debug-sections=zlib-gnu} %a %Y %{c:%W{o*}%{!o*:-o %w%b%O}}%{!c:-o %d%w%u%O}

*invoke_as:
%{!fwpa*:   %{fcompare-debug=*|fdump-final-insns=*:%:compare-debug-dump-opt()}   %{!S:-o %|.s |
 as %(asm_options) %m.s %A }  }

*cpp:
%{pthread:-D_REENTRANT}

*cpp_options:
%(cpp_unique_options) %1 %{m*} %{std*&ansi&trigraphs} %{W*&pedantic*} %{w} %{f*} %{g*:%{%:debug-level-gt(0):%{g*} %{!fno-working-directory:-fworking-directory}}} %{O*} %{undef} %{save-temps*:-fpch-preprocess} %(distro_defaults)

*cpp_debug_options:
%<dumpdir %<dumpbase %<dumpbase-ext %{d*} %:dumps()

*cpp_unique_options:
%{!Q:-quiet} %{nostdinc*} %{C} %{CC} %{v} %@{I*&F*} %{P} %I %{MD:-MD %{!o:%b.d}%{o*:%.d%*}} %{MMD:-MMD %{!o:%b.d}%{o*:%.d%*}} %{M} %{MM} %{MF*} %{MG} %{MP} %{MQ*} %{MT*} %{Mmodules} %{Mno-modules} %{!E:%{!M:%{!MM:%{!MT:%{!MQ:%{MD|MMD:%{o*:-MQ %*}}}}}}} %{remap} %{%:debug-level-gt(2):-dD} %{!iplugindir*:%{fplugin*:%:find-plugindir()}} %{H} %C %{D*&U*&A*} %{i*} %Z %i %{E|M|MM:%W{o*}}

*trad_capable_cpp:
cc1 -E %{traditional|traditional-cpp:-traditional-cpp}

*cc1:
%{profile:-p}%{%:sanitize(address):-funwind-tables}

*cc1_options:
%{pg:%{fomit-frame-pointer:%e-pg and -fomit-frame-pointer are incompatible}} %{!iplugindir*:%{fplugin*:%:find-plugindir()}} %1 %{!Q:-quiet} %(cpp_debug_options) %{m*} %{aux-info*} %{g*} %{O*} %{W*&pedantic*} %{w} %{std*&ansi&trigraphs} %{v:-version} %{pg:-p} %{p} %{f*} %{undef} %{Qn:-fno-ident} %{Qy:} %{-help:--help} %{-target-help:--target-help} %{-version:--version} %{-help=*:--help=%*} %{!fsyntax-only:%{S:%W{o*}%{!o*:-o %w%b.s}}} %{fsyntax-only:-o %j} %{-param*} %{coverage:-fprofile-arcs -ftest-coverage} %{fprofile-arcs|fprofile-generate*|coverage:   %{!fprofile-update=single:     %{pthread:-fprofile-update=prefer-atomic}}}

*cc1plus:


*link_gcc_c_sequence:
%{static|static-pie:--start-group} %G %{!nolibc:%L}    %{static|static-pie:--end-group}%{!static:%{!static-pie:%G}}

*distro_defaults:
%{!fno-asynchronous-unwind-tables:-fasynchronous-unwind-tables}

*link_ssp:
%{fstack-protector|fstack-protector-all|fstack-protector-strong|fstack-protector-explicit:}

*endfile:
%{Ofast|ffast-math|funsafe-math-optimizations:crtfastmath.o%s} %{!static:%{fvtable-verify=none:%s;      fvtable-verify=preinit:vtv_end_preinit.o%s;      fvtable-verify=std:vtv_end.o%s}}    %{static:crtend.o%s;      shared|static-pie|!no-pie:crtendS.o%s;      :crtend.o%s} crtn.o%s 

*link:
%{!r:--build-id} %{!static|static-pie:--eh-frame-hdr} %{h*}		   --hash-style=gnu				   %{!fsanitize=*:--as-needed}			   %{static:-Bstatic}				   %{shared:-shared}				   %{symbolic:-Bsymbolic}			   %{!static:%{!static-pie:			     %{rdynamic:-export-dynamic}		     %{!shared:-dynamic-linker %{muclibc:/lib/ld-uClibc.so.0;:%{mbionic:/system/bin/linker;:%{mmusl:/lib/ld-musl-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1;:/lib/ld-linux-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1}}}}}}    %{static-pie:-Bstatic -pie --no-dynamic-linker -z text}    -X						   %{mbig-endian:-EB} %{mlittle-endian:-EL}        -maarch64linux%{mabi=ilp32:32}%{mbig-endian:b} %{mfix-cortex-a53-835769:--fix-cortex-a53-835769} %{!mno-fix-cortex-a53-843419:--fix-cortex-a53-843419}

*lib:
%{pthread:-lpthread} %{shared:-lc}    %{!shared:%{profile:-lc_p}%{!profile:-lc}}

*link_gomp:


*libgcc:
%{static|static-libgcc|static-pie:-lgcc -lgcc_eh}%{!static:%{!static-libgcc:%{!static-pie:%{!shared-libgcc:-lgcc --push-state --as-needed -lgcc_s --pop-state}%{shared-libgcc:-lgcc_s%{!shared: -lgcc}}}}}

*startfile:
%{shared:;      pg|p|profile:%{static-pie:grcrt1.o%s;:gcrt1.o%s};      static:crt1.o%s;      static-pie:rcrt1.o%s;      !no-pie:Scrt1.o%s;      :crt1.o%s} crti.o%s    %{static:crtbeginT.o%s;      shared|static-pie|!no-pie:crtbeginS.o%s;      :crtbegin.o%s}    %{fvtable-verify=none:%s;      fvtable-verify=preinit:vtv_start_preinit.o%s;      fvtable-verify=std:vtv_start.o%s} 

*cross_compile:
1

*version:
12.2.0

*multilib:
. !mabi=lp64;lp64:../lib:aarch64-linux-gnu mabi=lp64;

*multilib_defaults:
mabi=lp64

*multilib_extra:


*multilib_matches:
mabi=lp64 mabi=lp64;

*multilib_exclusions:


*multilib_options:
mabi=lp64

*multilib_reuse:


*linker:
collect2

*linker_plugin_file:


*lto_wrapper:


*lto_gcc:


*post_link:


*link_libgcc:
%D

*md_exec_prefix:


*md_startfile_prefix:


*md_startfile_prefix_1:


*startfile_prefix_spec:


*sysroot_spec:
--sysroot=%R

*sysroot_suffix_spec:


*sysroot_hdrs_suffix_spec:


*self_spec:


*asm_cpu_spec:
 %{mcpu=*:-march=%:rewrite_mcpu(%{mcpu=*:%*})}

*link_command:
%{!fsyntax-only:%{!c:%{!M:%{!MM:%{!E:%{!S:    %(linker) %{!fno-use-linker-plugin:%{!fno-lto:     -plugin %(linker_plugin_file)     -plugin-opt=%(lto_wrapper)     -plugin-opt=-fresolution=%u.res         %{flinker-output=*:-plugin-opt=-linker-output-known}     %{!nostdlib:%{!nodefaultlibs:%:pass-through-libs(%(link_gcc_c_sequence))}}     }}%{flto|flto=*:%<fcompare-debug*}     %{flto} %{fno-lto} %{flto=*} %l %{static|shared|r:;!no-pie:-pie} %{fuse-ld=*:-fuse-ld=%*}  %{gz|gz=zlib:--compress-debug-sections=zlib} %{gz=none:--compress-debug-sections=none} %{gz=zlib-gnu:--compress-debug-sections=zlib-gnu} %X %{o*} %{e*} %{N} %{n} %{r}    %{s} %{t} %{u*} %{z} %{Z} %{!nostdlib:%{!r:%{!nostartfiles:%S}}}     %{static|no-pie|static-pie:} %@{L*} %(mfwrap) %(link_libgcc) %{fvtable-verify=none:} %{fvtable-verify=std:   %e-fvtable-verify=std is not supported in this configuration} %{fvtable-verify=preinit:   %e-fvtable-verify=preinit is not supported in this configuration} %{!nostdlib:%{!r:%{!nodefaultlibs:%{%:sanitize(address):%{!shared:libasan_preinit%O%s} %{static-libasan:%{!shared:-Bstatic --whole-archive -lasan --no-whole-archive -Bdynamic}}%{!static-libasan:--push-state --no-as-needed -lasan --pop-state}}     %{%:sanitize(hwaddress):%{static-libhwasan:%{!shared:-Bstatic --whole-archive -lhwasan --no-whole-archive -Bdynamic}}%{!static-libhwasan:--push-state --no-as-needed -lhwasan --pop-state}}     %{%:sanitize(thread):%{!shared:libtsan_preinit%O%s} %{static-libtsan:%{!shared:-Bstatic --whole-archive -ltsan --no-whole-archive -Bdynamic}}%{!static-libtsan:--push-state --no-as-needed -ltsan --pop-state}}     %{%:sanitize(leak):%{!shared:liblsan_preinit%O%s} %{static-liblsan:%{!shared:-Bstatic --whole-archive -llsan --no-whole-archive -Bdynamic}}%{!static-liblsan:--push-state --no-as-needed -llsan --pop-state}}}}} %o      %{fopenacc|fopenmp|%:gt(%{ftree-parallelize-loops=*:%*} 1):	%:include(libgomp.spec)%(link_gomp)}    %{fgnu-tm:%:include(libitm.spec)%(link_itm)}    %(mflib)  
%{fsplit-stack: --wrap=pthread_create}    %{fprofile-arcs|fprofile-generate*|coverage:-lgcov} %{!nostdlib:%{!r:%{!nodefaultlibs:%{%:sanitize(address): %{static-libasan|static:%:include(libsanitizer.spec)%(link_libasan)}    %{static:%ecannot specify -static with -fsanitize=address}}    %{%:sanitize(hwaddress): %{static-libhwasan|static:%:include(libsanitizer.spec)%(link_libhwasan)}	%{static:%ecannot specify -static with -fsanitize=hwaddress}}    %{%:sanitize(thread): %{static-libtsan|static:%:include(libsanitizer.spec)%(link_libtsan)}    %{static:%ecannot specify -static with -fsanitize=thread}}    %{%:sanitize(undefined):%{static-libubsan:-Bstatic} %{!static-libubsan:--push-state --no-as-needed} -lubsan  %{static-libubsan:-Bdynamic} %{!static-libubsan:--pop-state} %{static-libubsan|static:%:include(libsanitizer.spec)%(link_libubsan)}}    %{%:sanitize(leak): %{static-liblsan|static:%:include(libsanitizer.spec)%(link_liblsan)}}}}}     %{!nostdlib:%{!r:%{!nodefaultlibs:%(link_ssp) %(link_gcc_c_sequence)}}}    %{!nostdlib:%{!r:%{!nostartfiles:%E}}} %{T*}  
%(post_link) }}}}}}