Over the last several years, Docker has taken the software industry by storm. Docker provides developers an “open platform for developing, shipping, and running applications”. Its major advantage is that it separates an application from the infrastructure it runs on by building the environment into a container. That container can then be executed on nearly any development platform, giving every developer an identical environment. In this post, we will explore Docker and how embedded software developers can use it to improve their development environments.
Docker's Use in Embedded Software Development
Developers can leverage Docker for many purposes, but there are two that are most interesting to embedded software developers.
First, developers can build a portable container with their build environment. This ensures that every developer is working with the same tools and development environment. A new developer can come on board and be up and running almost immediately once they have access to the source code and the associated Dockerfile used to build the Docker image. This can alleviate all those issues and discussions about software not building, missing libraries, incorrect paths, and so forth.
Second, developers can build a DevOps pipeline that leverages their container to automate builds, testing, analytics, and deployment. Automated DevOps is a powerful concept and valuable to any business that applies it successfully. Most pipeline development requires a virtual machine or container with the build and test environment installed. Developers can use Docker to create this environment and tools such as Jenkins and GitLab to build out their DevOps system.
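As a sketch of that idea, a GitLab CI pipeline can run its build job inside the team's own container image. The image name, build target, and artifact path below are assumptions for illustration:

```yaml
# .gitlab-ci.yml (sketch) -- run the firmware build inside our own Docker image
build:
  image: beningo/gcc-arm      # hypothetical image containing the cross toolchain
  stage: build
  script:
    - make all                # assumed build target in the repository's Makefile
  artifacts:
    paths:
      - build/firmware.elf    # assumed build output path
```

Because the job runs inside the same image developers use locally, the CI build and the desktop build behave identically.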
A Docker image is built from a Docker file, often named Dockerfile, which is a text file containing all of the commands necessary to build the image. For example, many Dockerfiles start with a FROM command that specifies an existing Docker image that the new image will be built upon. There are different options, such as:
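For example, to base the image on the most recent Ubuntu release:

```dockerfile
FROM ubuntu:latest
```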
Which uses the latest Ubuntu image as the base. Someone working with gcc might use something like:
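For instance, pinning the official gcc image to a particular tag (10.3 here is just an example version):

```dockerfile
FROM gcc:10.3
```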
Which uses a specific version of gcc. (One could also use FROM gcc:latest).
Another common command is WORKDIR. This specifies the working directory for any commands such as RUN, CMD, ADD, COPY, or ENTRYPOINT that follow it. For example, if I wanted to install the Arm gcc-arm-none-eabi version 10.3 compiler into the /home/dev directory, I might do something like the following:
# Set up a tools dev directory
WORKDIR /home/dev

# Get and install the Arm gcc compiler
RUN wget -qO- https://developer.arm.com/-/media/Files/downloads/gnu-rm/10.3-2021.10/gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2 | tar -xjf -
A Dockerfile will contain as many commands as necessary to set up the development environment. However, the file itself is neither the image nor the container. To get a container we can run, we first need to build the image.
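Putting the pieces together, a minimal Dockerfile for this environment might look like the following. This is a sketch: the base image, the installed packages, and the PATH entry are assumptions, and the extracted directory name is inferred from the archive name.

```dockerfile
# Sketch of a cross-compile environment (assumed Ubuntu base and /home/dev install dir)
FROM ubuntu:latest

# Tools needed to fetch, unpack, and use the toolchain
RUN apt-get update && apt-get install -y wget bzip2 make

# Set up a tools dev directory
WORKDIR /home/dev

# Get and install the Arm gcc compiler
RUN wget -qO- https://developer.arm.com/-/media/Files/downloads/gnu-rm/10.3-2021.10/gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2 | tar -xjf -

# Put the cross compiler on the PATH (directory name assumed from the archive name)
ENV PATH="/home/dev/gcc-arm-none-eabi-10.3-2021.10/bin:${PATH}"
```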
Building a Docker Container
Once the Dockerfile has all the commands necessary to build the image, the image can be built by issuing a build command like:
## Build the docker image
docker build -t beningo/gcc-arm .
In the above command, docker is invoked to build an image from the Dockerfile in the current directory (the trailing . specifies the build context). The -t is the tag parameter. In this case, I'm tagging the newly created image as beningo/gcc-arm. For example, if I run the command:

docker images

I should receive a list of all the Docker images that I have created:
Tags help us identify our images. As you can see, I have an image that I did not tag, and now it is not obvious what that image contains. Since an image can easily be 2 GB, forgotten images can quickly eat up a lot of hard drive space if someone isn't careful!
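To reclaim that space, untagged ("dangling") images can be removed. For example (these commands require a local Docker installation):

```
# Remove dangling (untagged) images
docker image prune

# Or remove a specific image by name or ID
docker rmi beningo/gcc-arm
```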
Running the Docker Image
Once the Docker image has been built, we want to run it. We can run commands in a container directly with the docker run command, or we can work inside the container interactively by adding -it, which gives us terminal access to the running container. A common command to run a Docker image looks like the following:
docker run --rm -it beningo/gcc-arm
The --rm flag tells Docker to remove the container (and its anonymous volumes) when we exit it. If I want access to the source code in my local directory, I can mount it into the container with a command like the following:
docker run --rm -it -v "$(pwd):/home/app" beningo/gcc-arm
If I run this command and then navigate to the /home/app folder inside the container, I can see that I have access to my code repository:
I can then exit the container by simply typing exit into the terminal. If I don't want to exit, I can go on to build my source code or perform whatever other task I have for the container.
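The same container also works non-interactively, which is what a CI job would do. For example, assuming the repository has a Makefile with an all target, and a local Docker installation:

```
# Run a one-shot build inside the container, then remove the container
docker run --rm -v "$(pwd):/home/app" -w /home/app beningo/gcc-arm make all
```

Here -w sets the working directory inside the container, so make runs against the mounted source tree.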
Docker is a useful tool for embedded teams looking to simplify their build environments and build out a DevOps process. At first glance, Docker can seem complicated and confusing, but it doesn't have to be that way. As we have seen in this post, Docker is conceptually simple and easy to get started with. Certain details can be tricky, but they can be worked through to provide more flexibility to developers and help them improve their processes.
Jacob Beningo is an embedded software consultant who works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer, and holds three degrees, including a Master of Engineering from the University of Michigan. Feel free to contact him at [email protected], at his website www.beningo.com, and sign-up for his monthly Embedded Bytes Newsletter.