Creating Containerized Applications

Creating Docker Images

Once you’ve mastered running containers, the next step is to deploy your own containerized applications. In this article, we will build a Docker image from a Dockerfile and cover the options for distributing it. If you haven’t read the previous Docker articles, please see Part 1 and Part 2.

Getting Started with Images – Prep Work

As a first step, create a working directory for all of the files used in creating the image. This is where the Dockerfile and any libraries or other applications needed in the image can be stored.

mkdir wc
cd wc

The Dockerfile is the configuration file for creating Docker images. For all of the options, the reference is located in Docker’s documentation. Next, create a file called run_http.py with these contents:

#!/usr/bin/env python3
# Serve the current directory over HTTP on port 80 (the port EXPOSEd in the Dockerfile)
from http.server import HTTPServer, SimpleHTTPRequestHandler
HTTPServer(("", 80), SimpleHTTPRequestHandler).serve_forever()

Dockerfile Options

Now that the working directory is created, let’s start with some Dockerfile instructions. There are many more options, along with plenty of blogs and YouTube videos discussing how to create images. The Dockerfile below simply builds an image that runs a Python web server. When creating an image, using ENV and COPY in the Dockerfile is extremely useful and, for the most part, required. These instructions set up the image environment (e.g. CLASSPATH, PATH or anything else the app needs to run) and ensure dependent libraries are in the correct path.

  • FROM
    • Declares a parent image to use as a base.
  • ENV
    • Sets environment variables the application requires. For example, the postgres image has several environment variables that control container startup. Exposing environment variables so users can change behavior or supply different startup options adds flexibility.
  • COPY / ADD
    • Copies or adds a file from the host into the image. Most commonly this is an initialization script to prepare the container’s environment. Alternatively, you can specify a persistent volume when running the container if the files are installed on the host.
    • Important note: Wherever the scripts, libraries or other dependencies are copied, make sure the application is configured to search the destination path.
  • EXPOSE
    • Documents the port the application inside the container listens on. This is typically used together with -p when running the container. For example, docker run -p 5448:5432 tells Docker to map port 5448 on the host to port 5432 in the container. When connecting to the service, you would connect to the host name on port 5448.
  • ENTRYPOINT
    • The command the container executes as its main process. If that process dies or is killed, the container stops as well.
FROM centos
LABEL maintainer="Your Friendly Maintainer"

EXPOSE 80

COPY run_http.py /
# Make sure the copied script is executable inside the image
RUN chmod +x /run_http.py
RUN yum install -y epel-release
RUN yum install -y python3 python3-pip
RUN python3 -m pip install pip --upgrade
# Set any environment variables the application needs; PATH is shown here as a placeholder
ENV PATH=${PATH}
ENTRYPOINT ["/run_http.py"]

Building and Distributing Images

With this file saved in the wc directory as Dockerfile, run the build command. Note that -t names the image and, optionally, tags it with a version (name:tag). Make use of this, since tags will make life much easier for upgrades and lifecycle management.

docker build -t py-web .
docker image tag py-web:latest py-web:1.0 # Add a version tag alongside latest
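
To sanity-check the image before distributing it, run it with -p to map a host port to the EXPOSEd port 80 (the host port 8080 and the container name below are just examples):

docker run -d --name py-web-test -p 8080:80 py-web:latest
curl http://localhost:8080/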

At this point, the developer has two choices: either push the image to a registry (a local registry or Docker Hub), or save the image to a file for distribution.

docker push <dockerhub-username>/py-web:latest # Pushing to Docker Hub
docker push 1.1.1.1:5000/py-web:latest # Pushing the image to a local registry
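
One note on pushing: an image can only be pushed under a name that includes the destination repository, so re-tag it first. The Docker Hub user name and registry address below are placeholders:

docker image tag py-web:latest <dockerhub-username>/py-web:latest
docker image tag py-web:latest 1.1.1.1:5000/py-web:latest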

If the image is saved to a file, users can run docker image load to load it locally. The drawback to this approach is usability. While the docker image load command is straightforward, users sometimes react negatively to it, since most companies use either Docker Hub or their own registry for distribution, which makes it very easy to run containers. The other issue is that with every release, the user has to download the tar file again and load it onto their system. Whenever possible, use Docker Hub or a private registry instead.
Here is an example of saving an image:

docker save py-web:latest > py-web.tar
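
On the destination system, load the tar file back into the local image store (the file name matches the save example above):

docker image load < py-web.tar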

Docker Containers Part 2 – Working with Images


If you haven’t installed Docker, please read Part 1 of this Docker series.

Managing container lifecycles is more involved than starting and stopping. In this second part of Docker Containers, we show how to administer images locally and on remote repositories. The syntax for maintaining images is the image subcommand. We will cover list/ls, inspect, search, pull and rm/prune in this article.

Working with Images

List Images

The first part of managing images is knowing which images are on the system, how much disk space they use and which versions they are. Listing images is done in one of two ways: either the long form docker image list or the more Linux/UNIX-friendly docker image ls.

docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
python       latest    e285995a3494   10 days ago   921MB
postgres     latest    75993dd36176   10 days ago   376MB
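
For a quick view of overall disk usage by images, containers and volumes, docker system df is also handy (the numbers will of course vary per system):

docker system df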

Inspecting images provides information such as environment variables, parent image, commands used during initialization, network information, volumes and much more. This data is vital when troubleshooting issues with container startup or creating new images. The following is only an excerpt – the actual command has about two pages of data.

docker inspect postgres
[
    {
        "RepoTags": [
            "postgres:latest"
        ],
        "Hostname": "81312c458473",
        "ExposedPorts": {
            "5432/tcp": {}
        },
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/14/bin",
            "PG_MAJOR=14",
            "PG_VERSION=14.5-1.pgdg110+1",
            "PGDATA=/var/lib/postgresql/data"
        ],
        "Cmd": [
            "/bin/sh",
            "-c",
            "#(nop) ",
            "CMD [\"postgres\"]"
.......
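
When only a single field is needed, the --format flag takes a Go template and prints just that value, which is much easier to script against. For example, to pull out the environment variables shown above (the template path assumes the standard image layout):

docker inspect --format '{{.Config.Env}}' postgres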

Search Repositories

Searching repositories can be accomplished either by going to Docker Hub or by searching from the command line with docker search <string>, so you never have to leave the shell. Here’s an example:

docker search postgres
NAME                        DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
postgres                    The PostgreSQL object-relational database sy…   11486   [OK]
bitnami/postgresql          Bitnami PostgreSQL Docker Image                 154                [OK]
circleci/postgres           The PostgreSQL object-relational database sy…   30
ubuntu/postgres             PostgreSQL is an open source object-relation…   19
bitnami/postgresql-repmgr                                                   18
rapidfort/postgresql        RapidFort optimized, hardened image for Post…   15
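
The results can also be narrowed down with --filter, for example to show only official images or images with a minimum number of stars:

docker search --filter is-official=true postgres
docker search --filter stars=100 postgres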

Pulling Images

In order to run containers, you will need to pull the image from a repository. This can be accomplished either with docker pull <image name> or with docker run <image name>, which will automatically pull the image if it doesn’t exist locally. By default, pull will get the latest version. Alternatively, you can specify a version by adding a colon, :, and a tag after the image name like this: docker pull <image>:4.2.0

docker pull postgres
Using default tag: latest
latest: Pulling from library/postgres
31b3f1ad4ce1: Pull complete
1d3679a4a1a1: Pull complete
667bd4154fa2: Pull complete
87267fb600a9: Pull complete
Digest: sha256:b0ee049a2e347f5ec8c64ad225c7edbc88510a9e34450f23c4079a489ce16268
Status: Downloaded newer image for postgres:latest
docker.io/library/postgres:latest
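
To pin a specific version instead of latest, include the tag (postgres:14 is used here only as an example of a published tag):

docker pull postgres:14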

Removing and Pruning Images

Unfortunately, Docker doesn’t automatically remove images, so disk utilization tends to grow fairly quickly if not managed. Docker has a few commands for cleanup: image prune, rm and rmi. As part of normal maintenance, prune should be run from cron every few weeks or once a month, depending on how active the system is; a sample cron entry is shown after the prune example below.

docker image prune – Deletes dangling (unused) images.
docker rm <container ID> – Removes a container from the system; a container must be removed before its image can be deleted.
docker rmi <image ID> – Removes the image.

docker image prune
WARNING! This will remove all dangling images.
Are you sure you want to continue? [y/N] y
<none>                               <none>      48e3a3f31a48   10 months ago   999MB
<none>                               <none>      89108dc97df7   10 months ago   1.37GB
<none>                               <none>      26e43fa5dd7c   11 months ago   998MB
<none>                               <none>      b98d351f790b   11 months ago   1.37GB
<none>                               <none>      334a4df3c05a   11 months ago   998MB
<none>                               <none>      17c5a57654e4   11 months ago   1.37GB
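
To automate this as suggested above, a crontab entry along these lines works; the schedule is only an example, -f skips the confirmation prompt, and the path to the docker binary may differ on your system:

# Remove dangling images at 03:00 on the first of every month
0 3 1 * * /usr/bin/docker image prune -f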

Please check out the other container articles here.

Docker Containers – Part 1 Installation

Containers in DevOps allow an application to run in any supported environment. An application running in a container can run on Windows and Linux hosts without any changes to the application. A container is a lightweight piece of software similar in nature to FreeBSD Jails or Linux Containers (LXC/LXD). However, a container isn’t a traditional operating system. Although containers can be configured to behave like an OS, that is not the design. Containers are highly configurable and able to run just about any application: middle-tier applications, web servers and, in some cases, databases. One of the more popular container platforms is Docker, which is the focus of this post, but there are many more.

Docker has a few different offerings. The two most common are Community Edition (CE) and Enterprise Edition (EE). CE is the free and unsupported version whereas EE is a paid model and bundled with support.

Before installing and configuring Docker, we need to understand some key terms.

Images are templates for running a container. For example, to build a middle-tier application server image, an installation of JBoss, a version of Java and an Oracle driver all need to be part of the image. You can see all of the images on a system by running the docker images command.

Containers are running instances of images. To see the containers, use the command docker ps -a.

Repositories are sets of images with different tags. This is similar to code repositories, where you check out different versions of code based upon a tag or version. Omitting the tag will pull the image tagged latest. With repos, you can create and share the repo with the world (which is the default behavior on Docker Hub) or you can keep the repo private.

Dockerfile is the configuration file used while creating a Docker image.

Installing Docker

Windows

Docker is easy to install no matter which OS is being used. On Windows, download the stable version of Docker Desktop from the Docker website. The installer works like any other: double-click it and follow the instructions. Note: .NET version 4.0.3 is required for Docker.

RHEL, CentOS or Fedora Linux

Prerequisites

First, install the following required packages, and then enable the Docker Yum repository as root or as a user with sudo privileges.

yum install -y yum-utils device-mapper-persistent-data lvm2

Next, add the Docker Yum repo:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Once these steps are complete, you can use yum to install Docker (the docker-ce package from the repository added above) and service to start the Docker daemon.

yum install -y docker-ce
service docker start

Ubuntu Linux

Install the prerequisite software, add the gpg key to the system and add the Docker repository. Adding the repository will allow us to use apt-get to install Docker.

apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Once the software is installed and the key added to the system, refresh the package index and use apt-get to install Docker Community Edition (docker-ce).

apt-get update
apt-get install -y docker-ce

Linux

No matter which version of Linux, by default only the root user is configured to run Docker commands. If other users need permissions, then create a docker group as root and add the users to the group. The usermod command with the -aG flags appends the user to the docker group without replacing the user’s other supplementary groups; the user will need to log out and back in for the change to take effect.

groupadd docker
usermod -aG docker dockeruser1
service docker restart
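
Once the service restarts and the user logs back in, a quick smoke test confirms that non-root access works (hello-world simply prints a confirmation message and exits):

docker run hello-world
docker ps -a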

After the installation, you can now use the docker command to list images and run containers. In the next post, we will create our own repository and make it publicly available.