The GNU Compiler Collection - Linux Embedded Systems

The GCC compiler, like the kernel, is designed for portability. Like all open source programs, GCC is available in source form, and you can compile the code to create your own compiler. Part of the compilation process of GCC involves configuring the project; during that step, you can configure GCC to produce code for a different target processor and thus become a cross-compiler.
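As an illustrative sketch, a cross-targeted GCC build is configured by passing a --target triplet to the configure script. The triplet and install prefix below are made up, and a real cross-GCC build also needs a matching binutils and several more options; this only shows the idea:

```
./configure --target=arm-linux-gnueabi --prefix=/opt/cross-arm
make
make install
```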

However, the compiler is only one part of the tool chain necessary to produce running code. You must also get a linker, a C standard library, and a debugger. These are separate, albeit related, projects in Linux. This separation is vexing for engineers used to tools from a certain company in Redmond, Washington, where the tools are monolithic in nature. But not to worry; when you’re working on an embedded project, this separation is an advantage, because the additional choice lets you select the right tool for your application.

The GCC compiler installed on your host machine is preconfigured to use the GNU C Standard Library, frequently called glibc. Most embedded projects instead use a smaller alternative called uClibc, which is discussed later.


The GNU Debugger (GDB) project deserves a special mention. It's the most commonly used debugger on Linux systems. Although it's frequently included in the tool chain, GDB is a separate, independent project. For embedded development, GDB is compiled so that it can debug code running on a different processor than the one hosting the debugger, much as GCC can cross-compile code. This sort of debugging adds another complication: the machine running the debugger is rarely the machine running the code to be debugged. Debugging code in this fashion is called remote debugging; it works by running the program to be debugged under a stub program that communicates with another host where the debugger is running.

The stub program in this case is gdbserver, which can communicate with a host running GDB over a serial or TCP connection. At only 100KB, give or take, gdbserver is small enough, in both size and runtime resource use, to run on even the most resource-constrained targets.
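A typical remote-debugging session looks like the following; the target address 192.168.1.50, port 2345, program name ./myapp, and cross-GDB name are all illustrative:

```
# On the target: run the program under gdbserver, listening on TCP port 2345
gdbserver :2345 ./myapp

# On the host: start a cross-built GDB on the same binary and attach
arm-linux-gnueabi-gdb ./myapp
(gdb) target remote 192.168.1.50:2345
```

From that point on, breakpoints, stepping, and inspection work as in a local GDB session, with gdbserver executing the program on the target.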


BusyBox is a multicall binary (more later on what this means) that provides many of the programs normally found on a Linux host. The implementations of the programs are designed to be small both in size and in the amount of memory they consume while running. In order to be as small as possible, the programs supply a subset of the functionality offered by their counterparts on a desktop system. BusyBox is highly configurable, with lots of knobs to turn to reduce the amount of space it requires; for example, you can leave out all the command-line help to reduce the size of the program.

As for the multicall binary concept, BusyBox is compiled as a single program. The root file system is populated with symlinks to the BusyBox executable; the name of the symlink controls what bit of functionality BusyBox runs. For example, you can do the following on an embedded system:
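The listing below is illustrative of what you'd see on a BusyBox-based target; the dates, sizes, and directory contents are made up:

```
$ ls -l /bin/ls
lrwxrwxrwx    1 root  root   7 Jan  1  2020 /bin/ls -> busybox

$ /bin/ls /
bin   dev   etc   lib   proc  sbin  usr
```

Here /bin/ls is a symlink to the BusyBox binary; when it runs, BusyBox sees ls as argv[0] and behaves as ls.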

BusyBox runs this argument through a switch statement, which then calls the function ls_main(), passing in all the parameters on the command line. BusyBox calls the programs it provides applets. BusyBox is a key component of most embedded systems; it's frequently used in conjunction with the uClibc project to create very small systems.


Nearly everything in an embedded Linux system is fair game for some sort of substitution, even things you take for granted. One such item that most engineers use frequently but never give much consideration to is the implementation of the standard C library. The C language contains about 30 keywords (depending on the implementation), and the balance of the language's functionality is supplied by the standard library. This bit of design genius means that C can be easily implemented on a new platform by creating a minimal compiler and using it to compile the standard library, producing something sufficient for application development. The separation between the core language and the library also means there can be several implementations. That fact inhibited the adoption of C for a while, because each compiler maker shipped a C standard library that differed from its competitors', meaning a complex project needed tweaking (or major rework) in order to be used with a different compiler.

In the case of Linux, several small library implementations exist, with uClibc being the most common. uClibc is smaller because it was written with size in mind and doesn’t have the platform support of glibc; it’s also missing some other features. Most of what’s been removed has no effect on an embedded system.


Open source software is designed to be distributed in source code form so that it can be compiled for the target platform. When target platforms were diverse, this made perfect sense, because there was no way for a binary to work on a wide range of targets. For example, one key part of the target system was the C library. Most open source software is written in C; when compiled, the binary attempts to use the C library on the target system. If the C library used for compilation wasn’t compatible with the library on the target system, the software wouldn’t run.

To make sure the software could be compiled on a wide range of systems, open source software developers found themselves doing the same sort of work, such as detecting the existence of a function or the length of a buffer in order to compile properly. For a project to be widely adopted, not only did it need to work, but users also needed to be able to compile it easily.

The Automake and Autoconf projects solve the problem of discovering the state of the target environment and creating make files that can build the project. Projects using Automake and Autoconf can be compiled on a wide range of targets with much less effort on the part of the software developer, meaning more time can be dedicated to improving the state of the software itself rather than working through build-related problems.
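As an illustrative sketch, a project's configure.ac file declares the checks Autoconf should perform at configure time; the project name and the particular checks below are invented:

```m4
# configure.ac (illustrative fragment)
AC_INIT([myproject], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CHECK_HEADERS([unistd.h])   # does <unistd.h> exist on this system?
AC_CHECK_FUNCS([strndup])      # does the C library provide strndup?
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

Running autoreconf turns this into a configure script; the results of each check become preprocessor defines the code can test, which is exactly the "detecting the existence of a function" work developers once did by hand.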

Packaging Systems

As Linux became more mainstream, distributions were developed. A distribution is a kernel and a group of programs for a root file system, and one of the things on the root file system is the C library. The increasing use of distributions means a user can compile open source software with the expectation that it will run on a computer with a certain target distribution.

Distributions added another layer with the concept of packages. A package is a layer of indirection on top of an open source project; it has the information about how to compile the software it contains, thereby producing a binary package. In addition to the binaries, the package contains dependency information, such as the version of the C library that's required, and, sometimes, the ability to run arbitrary scripts to properly install the package. Distributions typically build a group of packages as a set; you install a subset of those packages on your system. If you install additional packages, then as long as they come from the same set used to create the distribution, the dependencies are satisfied or can be satisfied by installing other packages from the set.

Several packaging systems are available. RPM (née Red Hat Package Manager, now RPM Package Manager) and deb (the packaging format used first by the Debian project and later by Ubuntu) are two of the more popular systems for desktops. Some embedded distributions use these packaging systems to create a distribution for embedded targets. In some cases it makes sense to use a packaging system; we cover what's available to embedded developers and when using one makes sense.


You hear a lot about patches when working with Linux. You may even make one or two yourself. A patch is nothing other than a file containing a unified diff: enough information to edit a file non-interactively. This information is frequently referred to as a change set, and the file to which the changes are applied is called the target file. A change set can specify lines to be changed in, removed from, or added to the target. Although patches are typically created for text files, a patch can be created for a binary file as well.
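A unified diff carries a few lines of surrounding context along with the changed lines; the file names and contents in this minimal example are made up:

```
--- old-something/hello.txt
+++ new-something/hello.txt
@@ -1,3 +1,3 @@
 first line
-secnod line
+second line
 third line
```

Lines starting with a space are context, a leading - marks a line to remove, and a leading + marks a line to add; the @@ header gives the line numbers each hunk applies to.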

A patch is just a data file. To update the target files, another program must be used, and that program is patch. Created by Larry Wall, of Perl fame, patch does the work of reading the patch file, locating the file to be edited, and applying the changes. patch is clever in that it can apply changes even if the file to be patched is a little different from the one used to create the patch. You can create patches using the diff program, like so:

diff -Naur old-something new-something > patch-file

However, many source code control systems generate a patch based on the current contents of your directory versus what’s stored in the source code control repository. No matter how you create your patch, applying it works the same:
patch < patch-file
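The whole round trip can be seen with throwaway files; the file names and contents here are illustrative:

```shell
# Make two versions of a file, generate a patch with diff, apply it with patch.
workdir=$(mktemp -d)
cd "$workdir"
printf 'one\ntwo\nthree\n' > old.txt
printf 'one\nTWO\nthree\n' > new.txt
diff -u old.txt new.txt > patch-file || true   # diff exits nonzero when files differ
cp old.txt target.txt
patch target.txt < patch-file
cat target.txt                                 # now matches new.txt
```

Note that diff's exit status of 1 simply means the files differ; it isn't an error.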

If the patch program can’t make the changes requested, it produces error messages and creates reject files so you can see where things went wrong.


Make is a core underpinning of open source software. It works by scanning a list of rules and building a dependency graph. Each rule contains a target, which is usually a file, and a list of dependencies. A dependency can be either another target or the name of a file. Make then scans the file system to determine which files are missing or out of date and figures out which targets to run, and in what order, to bring them up to date.

Make has been around since 1977 and has been rewritten several times. The version used on Linux systems is GNU Make.

Make combines a terse syntax with many preset defaults, such that the way a make file works is nearly magic. Consider this:
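A make file as small as the following sketch is the kind of example meant here:

```makefile
# Implicit rules supply the commands: make already knows how to build
# mybinary.o from mybinary.c and how to link mybinary from mybinary.o.
mybinary: mybinary.o
```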

This is sufficient to give make the proper instructions to compile the file mybinary.c into the executable mybinary. The defaults tell make how to compile and link a C file from the files in the make rule.

Using make for embedded development isn’t much different from using make for desktop development. One big difference is that when you’re doing embedded development, you need to tell make to invoke a cross-compiler or compile under emulation. In the previous example, it’s as simple as this:
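For instance, you can override the compiler variable on the command line; the cross-compiler name below is illustrative and depends on your toolchain:

```
make CC=arm-linux-gnueabi-gcc
```

Because the implicit rules invoke the compiler through the CC variable, this one override is enough to cross-compile the example.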

Although this is the simplest approach, it can probably be done more elegantly so that changing the compiler requires fewer changes to the make file. We dedicate time to creating make files so you can use a different compiler with minimal impact.

Using make along with your favorite editor is all you need to do embedded development. IDE tools like KDevelop and Eclipse scan the project to create a make file that is then executed to perform a build.

You may decide to use an IDE, but having a firm understanding of what’s happening under the covers is important when you’re working with other open source projects or creating automated builds.

