Compiling new image (customized kernel)

Hi,
I’m using BSP 5.0. I want to create a new image with a device tree customized for our carrier board, and I’m following this guide. I followed all the instructions to create a custom image. However, I got the errors below while compiling the new image with bitbake. Why am I getting these errors?

bitbake custom-console-image
Loading cache: 100% |###########################################| Time: 0:00:03
Loaded 3914 entries from dependency cache.
Parsing recipes: 100% |#########################################| Time: 0:00:03
Parsing of 2755 .bb files complete (2746 cached, 9 parsed). 3921 targets, 247 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION           = "1.46.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "universal"
TARGET_SYS           = "arm-tdx-linux-gnueabi"
MACHINE              = "colibri-imx6"
DISTRO               = "tdx-xwayland"
DISTRO_VERSION       = "5.7.1-devel-20221116071610+build.0"
TUNE_FEATURES        = "arm armv7a vfp thumb neon callconvention-hard"
TARGET_FPU           = "hard"
meta-toradex-nxp     = "HEAD:ee63c90fde9fde0229bff9ac1c5cffe356fc4f41"
meta-freescale       = "HEAD:3cb29cff92568ea835ef070490f185349d712837"
meta-freescale-3rdparty = "HEAD:c52f64973cd4043a5e8be1c7e29bb9690eb4c3e5"
meta-toradex-tegra   = "HEAD:f5753af4a5b9d33f0f474b320a74c2e29a66ec39"
meta-toradex-bsp-common = "HEAD:029a663150449a5e71b84dd4000476754d525c8c"
meta-oe              
meta-filesystems     
meta-gnome           
meta-xfce            
meta-initramfs       
meta-networking      
meta-multimedia      
meta-python          = "HEAD:8ff12bfffcf0840d5518788a53d88d708ad3aae0"
meta-freescale-distro = "HEAD:5d882cdf079b3bde0bd9869ce3ca3db411acbf3b"
meta-toradex-demos   = "HEAD:ce3c1925df34b4d299b2dd1003ced41b9485ce41"
meta-qt5             = "HEAD:5ef3a0ffd3324937252790266e2b2e64d33ef34f"
meta-toradex-distro  = "HEAD:4cdeb8980411db8fc65655832a664bf80d93e534"
meta-poky            = "HEAD:7e0063a8546250c4c5b9454cfa89fff451a280ee"
meta                 = "HEAD:add860e1a69f848097bbc511137a62d5746e5019"
meta-customer        = "master:04d1f1240bbace8850638b5762996cabf145c16f"

Initialising tasks: 100% |######################################| Time: 0:00:04
Sstate summary: Wanted 214 Found 184 Missed 30 Current 1469 (85% match, 98% complete)
NOTE: Executing Tasks
ERROR: linux-toradex-5.4.193+gitAUTOINC+6e440a8fa4-r0 do_compile: oe_runmake failed
ERROR: linux-toradex-5.4.193+gitAUTOINC+6e440a8fa4-r0 do_compile: Execution of '/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/temp/run.do_compile.17827' failed with exit code 1
ERROR: Logfile of failure stored in: /home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/temp/log.do_compile.17827
Log data follows:
| DEBUG: Executing shell function do_compile
| NOTE: KBUILD_BUILD_TIMESTAMP: Thu Oct  6 12:58:20 UTC 2022
| NOTE: make -j 4 HOSTCC=gcc  -isystem/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/usr/include -O2 -pipe -L/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/usr/lib                         -L/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/lib                         -Wl,--enable-new-dtags                         -Wl,-rpath-link,/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/usr/lib                         -Wl,-rpath-link,/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/lib                         -Wl,-rpath,/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/usr/lib                         -Wl,-rpath,/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/lib                         -Wl,-O1 -Wl,--allow-shlib-undefined -Wl,--dynamic-linker=/home/oguzz/oe-core/build/tmp/sysroots-uninative/x86_64-linux/lib/ld-linux-x86-64.so.2 HOSTCPP=gcc  -E HOSTCXX=g++  -isystem/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/usr/include -O2 -pipe -L/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/usr/lib                         -L/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/lib                         -Wl,--enable-new-dtags                         -Wl,-rpath-link,/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/usr/lib                         -Wl,-rpath-link,/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/lib                         -Wl,-rpath,/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/usr/lib                         -Wl,-rpath,/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native/lib                         -Wl,-O1 -Wl,--allow-shlib-undefined -Wl,--dynamic-linker=/home/oguzz/oe-core/build/tmp/sysroots-uninative/x86_64-linux/lib/ld-linux-x86-64.so.2 zImage CC=arm-tdx-linux-gnueabi-gcc  -mno-thumb-interwork -marm -fuse-ld=bfd -fmacro-prefix-map=/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0=/usr/src/debug/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0                      -fdebug-prefix-map=/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0=/usr/src/debug/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0                      -fdebug-prefix-map=/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot=                      
-fdebug-prefix-map=/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/recipe-sysroot-native=  -fdebug-prefix-map=/home/oguzz/oe-core/build/tmp/work-shared/colibri-imx6/kernel-source=/usr/src/kernel   LD=arm-tdx-linux-gnueabi-ld.bfd   LOADADDR=0x11000000
| ***
| *** The source tree is not clean, please run 'make mrproper'
| *** in /home/oguzz/oe-core/build/tmp/work-shared/colibri-imx6/kernel-source
| ***
| /home/oguzz/oe-core/build/tmp/work-shared/colibri-imx6/kernel-source/Makefile:535: recipe for target 'outputmakefile' failed
| make[1]: *** [outputmakefile] Error 1
| /home/oguzz/oe-core/build/tmp/work-shared/colibri-imx6/kernel-source/Makefile:179: recipe for target 'sub-make' failed
| make: *** [sub-make] Error 2
| ERROR: oe_runmake failed
| WARNING: /home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/temp/run.do_compile.17827:1 exit 1 from 'exit 1'
| ERROR: Execution of '/home/oguzz/oe-core/build/tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/5.4.193+gitAUTOINC+6e440a8fa4-r0/temp/run.do_compile.17827' failed with exit code 1
ERROR: Task (/home/oguzz/oe-core/build/../layers/meta-toradex-nxp/recipes-kernel/linux/linux-toradex_5.4-2.3.x.bb:do_compile) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3869 tasks of which 3865 didn't need to be rerun and 1 failed.
NOTE: Writing buildhistory
NOTE: Writing buildhistory took: 4 seconds

Summary: 1 task failed:
  /home/oguzz/oe-core/build/../layers/meta-toradex-nxp/recipes-kernel/linux/linux-toradex_5.4-2.3.x.bb:do_compile
Summary: There were 2 ERROR messages shown, returning a non-zero exit code.

Because you aren’t running a clean build. You ran some part of this build and have partial output files lying around. It tells you to run

make mrproper

to clean things up.
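In this case the log even names the directory, so (with the paths from the output above):

cd /home/oguzz/oe-core/build/tmp/work-shared/colibri-imx6/kernel-source
make mrproper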

Don’t let anybody tell you that you can “just make a minor change and re-run,” because that never works. Even when it succeeds, what you end up with isn’t correct.

I use ~/yocto_work whenever I’m building. I create the following as ~/bin/nuke-yocto-build.

#!/bin/bash
# nuke-yocto-build: wipe all Yocto build output so the next build starts from scratch
set -e

cd ~/yocto_work
rm -rf sstate-cache/

cd build-torizon/
rm -rf cache/
mkdir cache
rm -rf tmp
mkdir tmp
rm -rf buildhistory
mkdir buildhistory
rm -rf deploy
mkdir deploy

Since Ubuntu adds ~/bin to the user PATH at login if the directory exists, I just have to set that file executable, and then before I try another build I run:

nuke-yocto-build
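Making it executable is the one-time step:

chmod +x ~/bin/nuke-yocto-build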

In a VM on my HP Z820 with 128GB of RAM, 20 physical cores, and a 6TB WD Black for speedy I/O, it takes about 3.5 hours to build from source.

IDE developers really need to steer clear of Yocto/Bitbake. This is not a universe where you can reliably do partial builds and get instant gratification. It’s more suited to old-timers who grew up submitting source compiles to a batch queue before lunch and waiting for operations to bring them a compiler listing some time prior to end of day.

There are no shortcuts with Yocto. You need a massive machine or a server farm to get clean build times down to a few hours. If you are using a quad-core i7, do not be surprised if your build takes over 24 hours.

Free advice: find something old like this on eBay or wherever you shop for used computers. At least 20 cores and over 100GB of RAM. Load it with fast spinning disks and a USB 3.x add-in card. Use it for your Yocto work.

The short answer to your question is that you tried to do a partial build and that just never works.

The longer answer is you tried to do a partial build because you didn’t want to wait over half a day to see the results. I understand. This is one situation where you really do have to throw hardware at it.

Hi @ogusis01!

Is it possible that you manually modified the kernel source instead of using Yocto to make those changes?

Searching for mrproper, I found this Stack Overflow thread: bash - how to use "make mrproper" command? i am working on Linux From Scratch - Stack Overflow.

So, mrproper is related to cleaning up the current Linux kernel configuration that you might have. Since your problem seems to be related to the Linux kernel’s configuration, I would try one of the following:

Running make mrproper

You should get into the Yocto devshell for the recipe you are building (linux-toradex, in your case) and manually run it:

$ bitbake -c devshell linux-toradex
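# run the next command inside the devshell that opens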
$ make mrproper

Trying some of the already available tasks for virtual/kernel

Here, virtual/kernel resolves to the linux-toradex recipe for downstream-based images in BSP 5 (set by the tdx-xwayland distro), which is your case.
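If you want to confirm that mapping yourself, one way (my own suggestion, not something from the BSP documentation) is to check which recipe ends up providing it:

$ bitbake -e virtual/kernel | grep '^PN='
PN="linux-toradex"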

By using bitbake -c listtasks virtual/kernel, we get several available tasks to run. Among them, we have the task do_kernel_configme. You can try to run it and then try to build again:

$ bitbake -c kernel_configme virtual/kernel
$ bitbake virtual/kernel

Cleaning virtual/kernel

Yocto has 3 levels of cleaning by default for its recipes:

$ bitbake -c listtasks virtual/kernel | grep clean
do_clean                              Removes all output files for a target
do_cleanall                           Removes all output files, shared state cache, and downloaded source files for a target
do_cleansstate                        Removes all output files and shared state cache for a target

You can start with do_clean and see how it goes. If the problem persists, try do_cleansstate and build the kernel again. Finally, you can try do_cleanall and build once more. I see do_cleanall as a kind of “last resort”.
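For reference, the escalation would look like this, rebuilding after each step:

$ bitbake -c clean virtual/kernel && bitbake virtual/kernel
$ bitbake -c cleansstate virtual/kernel && bitbake virtual/kernel
$ bitbake -c cleanall virtual/kernel && bitbake virtual/kernel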

Please, let us know if this helps you :slight_smile:


As a PS, I just wanted to comment on this part:

This is not my experience with Yocto. I make small changes and rebuild packages or complete images on a daily basis, and I can say that one of the following happens:

  • Things work
    • And I can verify it using Yocto/OpenEmbedded’s output and temporary files
  • Something goes wrong. This can happen because:
    • I did something wrong
    • I didn’t have enough knowledge/understanding of what/how I should have performed the modification.

When something goes wrong, I am usually able to rebuild as many times as I need without wiping out my Yocto cache. I can tweak, build, check the output, tweak again, clean some parts (when needed), and rebuild packages and images dozens of times a day without performing a Yocto-wide wipe-out.

Please note that I am not saying that I have never been frustrated enough by Yocto to burn it all down and perform a clean build. That would not be true at all. I have made mistakes that were faster to solve by deleting all the caches and starting from zero. What I am saying is that this is not the rule for me. Actually, it is far from being the rule.

Best regards,

This is never a good thing. Yocto/Bitbake is not good at determining what does and does not need to be rebuilt. The higher-level the change, the more likely you are to end up with a corrupted build.

Can you change a single .c file, without changing any compiler options or Yocto recipes, and do a partial build? Yes. But once you have to tweak a recipe, it is very unwise to trust a partial build.

That has been my experience. Then again, I work in the medical device world and we trust nothing.

Hi @seasoned_geek !

What do you mean by “higher level” change? Could you elaborate? Or maybe give an example?

If needed, there is always the possibility of checking what is going on by running the tasks and inspecting the temporary files. Although I understand that this is not scalable, doing so also helps you better understand Yocto’s mechanisms, which are very good to know well :slight_smile:
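For instance, using the work directory layout visible in the log above, every executed task leaves a run.do_<task> script and a log.do_<task> output behind that you can inspect:

$ ls tmp/work/colibri_imx6-tdx-linux-gnueabi/linux-toradex/*/temp/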

Best regards,

Hi @ogusis01 !

Do you have any updates regarding this topic?

Were you able to solve your issue?

Best regards,

Sorry,

“Higher level change”

You change a recipe or an override with a previous build left lying around. The “do I need to rebuild” logic is not good in Yocto/Bitbake. The same goes for changing a header file used by many C files: a standard make would pick it up, but you don’t get that far.

Never trust a partial build and your life will be happy.


If you have an Intel quad-core i7 with 32GB of RAM, it would be more efficient to invest in a good SSD to decrease compilation time. Low latency and high IOPS are more important for compilation.


Been there, tried that, doesn’t work. Everybody is gaga over SSDs, but SSDs have a fatal flaw for large Yocto-type builds.

It only seems fast with little things.

SSDs have a much slower sustained write speed than a good (WD Black) spinning disk once their internal write cache fills. A Yocto build from scratch, like other large builds that produce north of 10K temporary object files and include the same standard headers (think stdlib.h and such) thousands of times, blows past the end of that cache almost instantly.

32GB of RAM really isn’t enough. My HP Z820 has 120GB of RAM and 24 cores. When building, I use a 4TB WD Black spinning disk. With only 32GB of RAM, the bulk of the RAM is allocated/reserved for the multitude of compilations. With 120GB, the OS is free to use over 30GB for disk cache before the physical hardware caches come into play. The really common header files never get flushed from the read cache, and the 10K+ object files get to fly full speed to the disk.

It defies what the general public wants to believe, but a spinning disk is still faster for high-volume writes. Most midrange and mainframe computers don’t use SSDs for the system spooler or other high-volume write activities for this reason. Every one of the storage appliances has a rack of spinning disks for the high-volume stuff and SSDs for the longer-term or “reference only” files.

That was my experience. My gen-6 i7 with 32GB and a good SSD was slower than when using a WD Black spinning drive. The ancient HP Z820 simply blows it away.

When it comes to a Yocto build, one really does have to throw hardware at it. You just don’t have to throw “new” hardware at it. Quantity really matters.

Hi @seasoned_geek !

That’s an interesting piece of information.

In my case, I have one machine that I use for Yocto builds and other tasks, so an SSD makes sense for me. With a 10th-gen i7, 32GB of RAM, and an SSD, I am able to trigger a Yocto build and keep using the computer for (heavy) internet browsing, reference manuals with thousands of pages, and so on.

But, for those who can/need to have a separate server to run Yocto builds, the information you shared is really interesting.

If you could kindly share some Yocto build time measurements to back up your information, that would be great! I, at least, would like to see them :slight_smile:

E.g.: the time it takes to perform a clean build of the BSP 5.7.1 reference multimedia image (and/or the reference minimal image?) on an SSD and on a mechanical spinning disk would be a nice comparison.

Best regards,

I don’t do minimal builds. I was building client stuff using Torizon Core and our application. The Dell T5500 they provided had 12 cores and 40GB of RAM. The Ubuntu VM (Windows 10 Oracle VirtualBox host) was assigned 8 cores and, I think, 36GB of RAM. It took just over 3.5 hours. Yes, I could answer email and surf the web. I even worked on documentation using MS Word locally. Windows was booting and running from the SSD; the VM was out on the spinning disk.

Exported the VM to the HP Z820 and gave it something like 18 cores and 100GB of RAM. Again, a Windows 10 VirtualBox host. One run took 3.2 hours and the others under 3 hours. The only thing I can think of is that Windows must have been pulling down updates during the outlier. There wasn’t really anything “different” about that build; we were only making small tweaks at that point.

The gen-4 i7 has either 24 or 32GB of RAM (I’m not sitting in front of it now), running Ubuntu 20.04 natively. The SSD build took well over 5 hours. I changed development to use a directory tree on the spinning 4TB disk, with the OS still booting from the SSD, and it came in at just over 4 hours, though I’m working from memory. There was well over an hour’s difference in the build time. And that was just a generic Seagate or WD Blue, not a WD Black spinning disk. If you are using a Black or a Red, you can really get some I/O.

The problem with all Linux builds, Yocto or not, is the fact that you need to read the same include files a zillion times and write well north of 10K object files.

Some people stack a machine with RAM, have Linux create a RAM disk at boot, move all of the include files to said RAM disk, and monkey with the include path. If you don’t trust your disk cache to do the right thing, this bit of pain can work for you. It’s only a bit of pain to figure out the first time; after that, it just happens at boot for you.
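For the curious, the RAM-disk half of that might look like this; the mount point and size are my own placeholders, and the include-path half depends entirely on your toolchain:

# /etc/fstab -- create an 8GB RAM disk at every boot
tmpfs  /mnt/ramdisk  tmpfs  defaults,size=8G  0  0

# copy the headers over and let GCC search them first (CPATH acts like -I)
cp -a /usr/include /mnt/ramdisk/
export CPATH=/mnt/ramdisk/include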

Likewise, if you had 256GB of RAM, you could create a 120GB RAM disk to use as the “build drive” and theoretically remove the cost of disk writes for all of the object files.
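In Yocto terms, a minimal sketch of that would be pointing the build output at the RAM disk in conf/local.conf (the mount point is again a placeholder of mine):

# conf/local.conf -- keep all of bitbake's build output, object files included, in RAM
TMPDIR = "/mnt/ramdisk/yocto-tmp"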

Both of those involve more Linux knowledge than I care to have.

Buying machines with at least 20 cores that I can stack over 100GB of RAM into, and putting in a WD Black or WD Red spinning disk for the working disk, is a don’t-need-to-know-nothing approach I’m qualified to take.

What I haven’t tried, because I need Windows 10 on that Z, is another Z configured the same but running native Ubuntu, so it boots from SSD with a 4TB Black or Red for the work drive. That would give the build the entire machine without VirtualBox interference. I bet I could build in under 2 hours.

If someone has time on their hands and likes to collect machines, please run that experiment. You can find a Z820 (or higher) on eBay pretty reasonably priced. Be sure it has a minimum of 20 cores.

Do not RAID your build disk. That slows down writes.