Once you’ve mastered running containers, the next step is to build and deploy containerized applications. In this article, we will create a custom image, walk through common Dockerfile options, and look at ways to distribute the image. If you haven’t read the previous Docker articles, please see Part 1 and Part 2.
Getting Started with Images – Prep Work
As a first step, create a working directory for all of the files used in creating the image. This is where the Dockerfile and any libraries or other applications needed in the image can be stored.
```
mkdir wc
cd wc
```
The Dockerfile is the configuration file for creating Docker images. For all of the options, see the reference in Docker’s documentation. Next, create a file called run_http.py with these contents:

```
#!/bin/sh
python3 -m http.server 80
```
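Before building the image, it can help to sanity-check that Python’s built-in http.server module serves files the way the container will. The snippet below is a local test only, not part of the image; the file name and its contents are illustrative, and an ephemeral port is used so it does not require root:

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# Create a temporary directory with one file to serve.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "index.html"), "w") as f:
    f.write("hello from py-web")

# SimpleHTTPRequestHandler serves files from the current directory,
# just like `python3 -m http.server` does inside the container.
os.chdir(tmp)
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]  # ephemeral port chosen by the OS
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the file back over HTTP.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
print(body)  # hello from py-web
server.shutdown()
```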
Dockerfile Options
Now that the working directory is created, let’s start with some Dockerfile commands. There are many more options, with tons of blogs and YouTube videos discussing how to create images. The Dockerfile below builds an image that simply runs a Python web server. When creating an image, using ENV and COPY in the Dockerfile is extremely useful and, for the most part, required. These instructions set up the image environment (i.e. CLASSPATH, PATH, or anything else the app needs to run) and ensure dependent libraries are in the correct path.
- FROM
- Declares a parent image to use as a base.
- ENV
- Sets up environment variables if the application requires them. For example, the postgres image has several environment variables that control startup of the container. Providing environment variables so users can change behavior or supply different startup options adds flexibility.
- COPY / ADD
- Copy or add a file from the host to the image. Most commonly this is an initialization script to prepare the container’s environment. Alternatively, you can specify a persistent volume when running the container if the files are installed on the host.
- Important note: Wherever the scripts, libraries or other dependencies are copied, make sure the application is configured to search the destination path.
- EXPOSE
- Documents a port that the application inside the container listens on. This is typically used when running a container with -p. For example, docker run -p 5448:5432 tells Docker to map port 5448 on the host to port 5432 in the container. When connecting to the service, you would connect to the host name on port 5448.
- ENTRYPOINT
- The container execs ENTRYPOINT as its main process; this is the running application. If the application exits or is killed, the container stops as well.
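One detail worth noting about ENTRYPOINT: it has two forms, and the choice affects signal handling. A minimal sketch using the run_http.py script from this article:

```
# Exec form (preferred): the script runs as PID 1 and receives
# stop signals directly (docker stop sends SIGTERM).
ENTRYPOINT ["/run_http.py"]

# Shell form: the script runs under /bin/sh -c, so signals are
# delivered to the shell rather than to the application.
ENTRYPOINT /run_http.py
```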
```
FROM centos
LABEL maintainer="Your Friendly Maintainer"
EXPOSE 80
COPY run_http.py /
RUN yum install -y epel-release
RUN yum install -y python3 python3-pip
RUN python3 -m pip install pip --upgrade
ENV PATH=${PATH}
ENTRYPOINT /run_http.py
```
Distributing Images
With this file saved in the wc directory as Dockerfile, run the build command. Note that -t is typically used to tag the version of the application. Make use of this, since tags will make life much easier for upgrades and lifecycle management.
```
docker build -t py-web .
docker image tag py-web:latest py-web:1.0
```
At this point, the developer has two choices: either push the image to a registry (local or Docker Hub), or save the image to a file for distribution.
```
docker push py-web:latest                                  # Pushing to Docker Hub

docker image tag py-web:latest 1.1.1.1:5000/py-web:latest  # The image must be tagged with the registry host
docker push 1.1.1.1:5000/py-web:latest                     # before pushing to a local registry
```
If the image is saved instead, users can run docker image load to load it locally. The drawback of this approach is usability. While the docker image load command is straightforward, users will sometimes react negatively, since most companies use Docker Hub or their own registry for distribution, which makes it very easy for users to run containers. The other issue is that with every release, the user has to download the tar file again and load it onto their system. Whenever possible, use Docker Hub or another registry.
Here is an example of saving an image:
```
docker save py-web:latest > py-web.tar
```
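On the receiving side, the tar file can be loaded back into the local image cache and run. A minimal sketch, assuming the file was transferred to the target machine as py-web.tar and Docker is installed there:

```shell
# Load the image from the tar file into the local image cache
docker load < py-web.tar

# Run it, publishing container port 80 on host port 8080
docker run -d -p 8080:80 py-web:latest
```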