

  • Use External Toolchain in Build root (from Part 1) to generate Rootfs/Linux - Part 2

For some months, we at WhileOne Techsoft Pvt. Ltd. have been helping our customer set up a system to validate the performance of their SoC platform. In this context, we had to bring up an aarch64 Linux-based target image to run on their proprietary SoC hardware platform.

Part 1 of this series explains how to build an External Toolchain with Buildroot. Part 2 (this article) explains how to build a target Linux image and rootfs in Buildroot using the External Toolchain we built in Part 1. Part 3 explains how to integrate the generated External Toolchain binaries into the target Linux image using Buildroot.

In the following steps we configure Buildroot to use the External Toolchain tarball (refer to Part 1) to build the kernel and rootfs images, and to copy the extracted tarball binaries to the target under the /usr directory. We have to modify the configuration and then rebuild with the clean option, which deletes all output folders and files. It is therefore very important to move the tarball to a location outside the ‘buildroot’ folder. If you have not yet done so, please do this before proceeding.

1. Start menuconfig:

   make menuconfig

We only modify a few options; the rest remain as configured earlier in Part 1. Target Options and Build Options stay the same, so there is no need to change them.

2. Modify the Toolchain options:
   a. Select ‘External Toolchain’.
   b. Set Toolchain origin to ‘Toolchain to be downloaded and installed’.
   c. Update the URL to ‘file:///path/to/sdk-tarball’.
   d. Set the External Toolchain GCC version to ‘11.x’ (we used GCC 11.x to cross-compile the external toolchain in Part 1).
   e. Set the External Toolchain kernel headers series to ‘5.4.x’ (the same setting we used for the tarball in Part 1).
   f. Set the External Toolchain C library to ‘glibc/eglibc’.
   g. Disable ‘Toolchain has RPC support’ (disable it if it was not selected earlier during tarball generation).
   h. Enable support for C++ and Fortran (these were enabled earlier in the tarball configuration).

Modify the System configuration options:
   a. Change the Init system from ‘None’ to ‘BusyBox’.
   b. Enable ‘Use symlinks to /usr for /bin, /sbin and /lib’.
   c. Change the default BusyBox shell from ‘None’ to ‘/bin/sh’.

7. Save and exit menuconfig.

8. Build with the new configuration settings:

   make clean all

9. Once the build succeeds, the resulting images can be found in the folder ‘output/images’.

We were able to generate rootfs and aarch64 Linux images by compiling with the External Toolchain built in Part 1: https://www.whileone.in/post/how-to-create-an-external-toolchain-in-buildroot-part-1

Check out Part 3 to learn how to integrate the External Toolchain binaries into the target Linux image: https://www.whileone.in/post/how-to-integrate-external-toolchain-generated-in-part-1-inside-the-target-linux-image-in-buildroot

A quote that has inspired me for a long time…
“Obstacles don’t have to stop you. If you run into a wall, don’t turn around and give up. Figure out how to climb it, go through it, or work around it.” — Michael Jordan
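The menuconfig choices described above correspond to Buildroot's BR2_* Kconfig symbols. As a sketch only — the symbol names below are taken from a 2022-era Buildroot tree and may differ in your version, and the tarball URL is the same placeholder the article uses — the configuration could be captured as a reusable fragment:

```shell
# Write the external-toolchain choices as a Buildroot config fragment.
# Symbol names are assumptions from Buildroot's Config.in; verify against
# your checkout before use.
cat > external-toolchain.fragment <<'EOF'
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_DOWNLOAD=y
BR2_TOOLCHAIN_EXTERNAL_URL="file:///path/to/sdk-tarball"
BR2_TOOLCHAIN_EXTERNAL_GCC_11=y
BR2_TOOLCHAIN_EXTERNAL_HEADERS_5_4=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
# BR2_TOOLCHAIN_EXTERNAL_INET_RPC is not set
BR2_TOOLCHAIN_EXTERNAL_CXX=y
BR2_TOOLCHAIN_EXTERNAL_FORTRAN=y
BR2_INIT_BUSYBOX=y
BR2_ROOTFS_MERGED_USR=y
BR2_SYSTEM_BIN_SH_BUSYBOX=y
EOF
# A fragment like this can be merged into an existing .config, e.g. with
# Buildroot's support/kconfig/merge_config.sh, instead of clicking through
# menuconfig each time.
grep -c '^BR2_' external-toolchain.fragment
```

Keeping the options in a fragment makes the Part 2 configuration reproducible across rebuilds, since `make clean` wipes the output tree but not files you keep alongside it.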

  • How to Create an External Toolchain in Buildroot - Part 1

Background:

For some months, we at WhileOne Techsoft Pvt. Ltd. have been helping our customer set up a system to validate the performance of their SoC platform. In this context, we had to bring up an aarch64 Linux-based target image to run on their proprietary SoC hardware platform.

Part 1 of this series explains how to build an External Toolchain with Buildroot. Part 2 explains how to build a target Linux image and rootfs in Buildroot using the External Toolchain built in Part 1. Part 3 explains how to integrate the generated External Toolchain binaries (built in Part 1) into the target Linux image using Buildroot.

Tools and development station:
- Buildroot (www.buildroot.org)
- An AWS aarch64 Ubuntu-based instance with a 50 GiB SSD and 8 GiB RAM (free-tier instances with drive space under 20 GiB will fail during the build process)
- An SSH client (PuTTY, or a WSL2 Ubuntu VM on Windows)

Why Buildroot?
Buildroot is a tool that automates the process of building a complete Linux system for an embedded target, using cross-compilation. It has a significantly gentler learning curve than Yocto, another popular tool. Yocto supports designing complex Linux systems using a plethora of recipes (customization scripts), whereas Buildroot targets quicker prototype designs, aimed mainly at embedded systems.

Challenges?
Buildroot does not support toolchains generated by Yocto, OpenEmbedded, or even a plain GCC build, so none of those toolchain binaries can be used directly as the external toolchain. However, Buildroot provides an alternative: we can create an external toolchain based on glibc, musl or uClibc-ng and cross-compile it with a choice of GCC 11.x or 12.x.

Prepare the remote system for the build:
First, we log in to the remote (aarch64) instance by IP address using PuTTY or any other SSH tool of your choice. I connected to the remote instance via SSH from WSL2 (Windows Subsystem for Linux) Ubuntu on Windows 11.
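One practical detail about the `ssh -i` logins used throughout this walkthrough: OpenSSH refuses private key files whose permissions are too open. A small sketch with hypothetical file names (your downloaded keypair will have a different name, and the actual `ssh` invocation is shown commented out since it needs your real instance address):

```shell
# Hypothetical stand-in for the AWS keypair copied into the WSL2 home dir.
KEY=./aws-keypair.pem
touch "$KEY"
chmod 400 "$KEY"        # owner read-only; ssh rejects world-readable keys
stat -c '%a' "$KEY"     # prints the octal mode
# With the real key and your instance's address:
#   ssh -i "$KEY" ubuntu@<instance-public-ip>
```

If the key is left with default permissions after copying it through Windows Explorer, the connection in step 4 below fails with an "UNPROTECTED PRIVATE KEY FILE" warning, so it is worth fixing up front.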
On successful login to the Ubuntu instance, we prepare it by installing the dependencies Buildroot needs to work properly.

1. From the AWS website, using your account, launch an AWS Ubuntu aarch64 instance (c6g.2xlarge) [around 8 GB RAM and a 50 GB SSD volume], generate a keypair, and store it on your local Windows 11 system. (Tutorials for launching instances can be found online or on the AWS website.)

2. Launch the WSL2 Ubuntu OS (installation of WSL2 is beyond the scope of this article):

   Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.10.16.3-microsoft-standard-WSL2 x86_64)
    * Documentation: https://help.ubuntu.com
    * Management:    https://landscape.canonical.com
    * Support:       https://ubuntu.com/advantage

3. To copy the keypair from the local system to Ubuntu WSL2, open the location “\\WSL$” in Windows Explorer. This displays a folder structure with Ubuntu as the primary (and only) folder. Navigate to “Ubuntu/home/” and copy the keypair to this location (another location can also be used, as convenient).

4. Run the command below to connect to the remote AWS instance:

   ssh -i <keypair.pem> ubuntu@<instance-ip>

5. If the connection is successful, an Ubuntu prompt is displayed:

   Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-1029-aws aarch64)
    * Documentation: https://help.ubuntu.com
    * Management:    https://landscape.canonical.com
    * Support:       https://ubuntu.com/advantage

   System information as of Fri Jul 1 13:55:38 UTC 2022
     System load:  0.0               Processes:             186
     Usage of /:   41.2% of 28.91GB  Users logged in:       0
     Memory usage: 7%                IPv4 address for ens5: 172.31.58.93
     Swap usage:   0%

    * Ubuntu Pro delivers the most comprehensive open source security and compliance features. https://ubuntu.com/aws/pro

   11 updates can be applied immediately. To see these additional updates run: apt list --upgradable

   Last login: Thu Jun 30 20:05:55 2022 from 17x.4x.3x.5x
   ubuntu@ip-17x-3x-5x-9x:~$

6. Verify the system details of the remote instance.

7. Run the commands below to install the required dependencies.
The package libncurses-dev is needed for menuconfig to work properly:

   sudo apt update
   sudo apt-get install build-essential libncurses-dev

8. Clone the Buildroot repository:

   git clone git://git.buildroot.net/buildroot

9. Prepare the build for the aarch64 platform; the command is given in readme.txt:

   vi buildroot/board/aarch64-efi/readme.txt

10. Execute the command to configure for aarch64:

   make aarch64_efi_defconfig

11. Execute the command to display the menuconfig UI:

   make menuconfig

12. The menuconfig UI is displayed.

13. Modify the Target Options (select the architecture).

14. Modify the Toolchain options: select ‘Buildroot toolchain’, C library ‘glibc’, kernel headers no newer than the version of the kernel that is built, GCC version 11.x, and enable Fortran support.

15. Modify the options for System configuration.

16. Modify the options for Target packages (disable the busybox option).

17. Modify the options in Filesystem Images.

18. Save and exit the "menuconfig" UI.

19. Run the command “make sdk” on the terminal.

20. The SDK is built successfully. The log after generation of the SDK is shown below:

   >>> Rendering the SDK relocatable
   PER_PACKAGE_DIR=/home/ubuntu/buildroot/output/per-package /home/ubuntu/buildroot/support/scripts/fix-rpath host
   PER_PACKAGE_DIR=/home/ubuntu/buildroot/output/per-package /home/ubuntu/buildroot/support/scripts/fix-rpath staging
   /usr/bin/install -m 755 /home/ubuntu/buildroot/support/misc/relocate-sdk.sh /home/ubuntu/buildroot/output/host/relocate-sdk.sh
   mkdir -p /home/ubuntu/buildroot/output/host/share/buildroot
   echo /home/ubuntu/buildroot/output/host > /home/ubuntu/buildroot/output/host/share/buildroot/sdk-location
   >>> Generating SDK tarball
   tar czf "/home/ubuntu/buildroot/output/images/aarch64-buildroot-linux-gnu_sdk-buildroot.tar.gz" \
       --owner=0 --group=0 --numeric-owner \
       --transform='s#^home/ubuntu/buildroot/output/host#aarch64-buildroot-linux-gnu_sdk-buildroot#' \
       -C / home/ubuntu/buildroot/output/host

21.
Verify that the tarball ‘aarch64-buildroot-linux-gnu_sdk-buildroot.tar.gz’ is present in the folder ‘buildroot/output/images’.

22. Move this tarball to a location in ‘home/ubuntu’ outside the ‘buildroot’ directory:

   mv output/images/aarch64-buildroot-linux-gnu_sdk-buildroot.tar.gz ~

In Part 2, we will use this tarball to generate a root file system and a kernel image by compiling the kernel with the newly generated External Toolchain.

Conclusion:
A quote that has inspired me for a long time…
“Obstacles don’t have to stop you. If you run into a wall, don’t turn around and give up. Figure out how to climb it, go through it, or work around it.” — Michael Jordan

Part 2: https://www.whileone.in/post/external-toolchain-in-build-root-from-part-1-to-generate-rootfs-linux-part-2
Part 3: https://www.whileone.in/post/how-to-integrate-external-toolchain-generated-in-part-1-inside-the-target-linux-image-in-buildroot
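The `--transform` flag in the SDK tarball command of step 20 is what rewrites the absolute host path into a clean top-level directory name inside the archive, which is what makes the SDK relocatable. A small standalone reproduction with throwaway paths (the directory and archive names here are made up for illustration):

```shell
# Mimic Buildroot's prefix rewrite on a tiny tree.
mkdir -p demo/output/host/bin
echo hello > demo/output/host/bin/tool

# The sed-style expression renames every member's leading path component.
tar czf sdk.tar.gz \
    --owner=0 --group=0 --numeric-owner \
    --transform='s#^demo/output/host#my-sdk#' \
    demo/output/host

# Members are stored under my-sdk/, not demo/output/host/.
tar tzf sdk.tar.gz
```

Because the stored paths no longer encode the build machine's directory layout, the extracted SDK can live anywhere; Buildroot's relocate-sdk.sh then patches the remaining embedded paths.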

  • ARM64 Benchmarking with DeathStarBench: A Porting Journey

Delivering Modernization, Benchmarking & Cost Efficiency

Migrating workloads from AMD64 to ARM64 allows organizations to harness the price-performance and scalability benefits of leading-edge ARM cloud platforms (Ampere, AWS Graviton, Oracle, etc.). We offer comprehensive containerized porting packages, including leading microservices benchmarks such as DeathStarBench (DSB), to accelerate your cloud adoption and benchmarking initiatives on next-generation ARM infrastructure. DSB serves as a prime example of these optimized workloads, enabling robust performance evaluation and seamless migration to ARM-based environments.

Key benefits for your organization:
- Unlock significant cloud cost savings with ARM’s high-efficiency compute
- Run production-ready, optimized containers and DSB benchmarks on ARM64
- Build technical credibility and customer trust through future-ready, open-source best practices

Our Comprehensive Porting Services

Applications & Database Containers
- MongoDB (Sharded/Replica), Redis Cluster, Memcached, Mcrouter
- Exporters (Prometheus-compatible), orchestration utilities
- Full Dockerfile, image, and chart refactoring for ARM64

DeathStarBench (DSB) Microservices Suite
- End-to-end ARM64 porting of all major DSB microservices (social network, media, graph, etc.)
- Validation of dependencies and orchestration scripts for ARM compatibility
- Helm chart adaptation and seamless CI/CD integration

Kubernetes & Benchmarking Enablement
- Refactoring of Helm charts and manifests for ARM64
- Secure deployment via RBAC, RoleBindings, and cluster automation
- Guidance on operationalizing DSB for robust ARM-native benchmarking

How We Deliver: DSB Porting Workflow

What We Port & Support
- MongoDB Sharded & Exporter (DSB-ready)
- Redis Cluster & Exporter (DSB-ready)
- Memcached & Exporter (DSB-ready)
- Mcrouter (DSB-ready)
- DeathStarBench Microservices (all major suites)
- Supporting K8s utilities (yq, kubectl, gosu, etc.)
All solutions are validated for ARM64 compatibility, performance, and reliability, both as standalone services and within the DeathStarBench framework.

Conclusion

This enables organizations to accelerate their modernization and benchmarking initiatives with confidence. Our deep expertise in ARM-native cloud, SaaS, and benchmarking ecosystems ensures that you benefit from:
- Full lifecycle delivery: from binary integration to Docker builds, Helm/K8s enablement, and DSB orchestration
- Transparent and auditable processes, aligned with your compliance and operational requirements
- Demonstrated success across cloud and SaaS environments, validated by references and production deployments
