A Comprehensive Guide to Demystifying Kubernetes Networking Configuration

Kubernetes has become the de facto standard for container orchestration, enabling the seamless deployment and scaling of applications. However, understanding and configuring networking in a Kubernetes cluster can be complex, especially for newcomers. We’ll delve into the intricacies of Kubernetes networking and provide a comprehensive guide to help you navigate through the various options and configurations.

In a K8s cluster, networking plays a vital role in facilitating communication between pods, services, and external clients. Each pod in Kubernetes gets its own IP address, allowing containers within the pod to communicate with each other over the loopback interface. However, pods are ephemeral, and their IP addresses change whenever pods are rescheduled. This is where Kubernetes networking abstractions, such as Services, come into play.

Pod-to-Pod Communication

To enable communication between pods in the cluster, Kubernetes implements a flat networking model. Pods can communicate directly with each other using their IP addresses, regardless of the node they are running on. The Container Network Interface (CNI) plugin is responsible for managing pod networking and assigning IP addresses to pods. Popular CNI plugins include Calico, Flannel, and Weave.
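As a quick way to see this flat model in action, you can list each pod's IP address and node, then make a direct pod-to-pod request (the pod names, IP, and port below are placeholders):

kubectl get pods -o wide                              # shows each pod's IP and the node it runs on
kubectl exec <pod-a> -- curl http://<pod-b-ip>:8080   # direct pod-to-pod request (assumes curl exists in the image)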

Kubernetes Services provide a stable endpoint for accessing pods. Services abstract the underlying pod IP addresses, allowing clients to access pods through a consistent DNS name or IP address. Services support different types of load balancing, such as round-robin or session affinity, to distribute traffic among the pods behind the service. Kubernetes automatically manages the load balancing configuration based on the service type and endpoints.
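As a minimal sketch, a Service manifest for a set of pods labeled app: web and listening on port 8080 could look like this (all names and ports here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # selects the pods behind this Service
  ports:
    - port: 80          # stable port clients connect to
      targetPort: 8080  # port the pods actually listen on

Inside the cluster, clients can then reach these pods at the stable DNS name web-svc, regardless of individual pod restarts.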

Ingress and External Connectivity

Ingress is a Kubernetes resource that provides external connectivity to services within the cluster. It acts as an entry point for incoming traffic and allows for the routing and load balancing of requests to different services based on specific rules. To enable Ingress functionality, an Ingress Controller is required, which can be implemented using various solutions such as Nginx Ingress Controller, Traefik, or Istio.
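For illustration, a minimal Ingress that routes traffic for a hypothetical hostname to the Service sketched earlier might look like this (the host and names are placeholders, and an Ingress Controller must already be running in the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com      # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc    # route requests to the Service above
                port:
                  number: 80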

Network Policies allow you to define fine-grained rules to control traffic flow within the cluster. They act as a firewall for your Kubernetes network, allowing or denying traffic based on specific criteria such as pod labels, namespaces, or IP ranges. By leveraging policies, you can enforce security and isolation between different components of your application and ensure that only authorized communication is allowed.
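As an example, a NetworkPolicy that only allows pods labeled app: frontend to reach the web pods could be sketched like this (the labels are illustrative, and enforcement requires a CNI plugin that supports policies, such as Calico):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: web              # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only these pods may connect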

Networking Plugins and Configuration

Kubernetes network plugins, such as Calico, Flannel, or Weave, provide the underlying infrastructure for pod communication. These plugins integrate with the CNI interface and handle IP address management, routing, and network policy enforcement. Choosing the right plugin depends on factors such as scalability requirements, performance, and compatibility with your cloud provider or on-premises infrastructure.

In some cases, you may require custom networking configurations to meet specific requirements. Kubernetes allows for advanced networking features, such as network overlays, multi-cluster networking, or integrating with external services. These custom configurations often involve working with additional tools and technologies like Virtual Extensible LAN (VXLAN), Border Gateway Protocol (BGP), or Service Mesh solutions like Istio.

Understanding and configuring networking in Kubernetes is crucial for building scalable, resilient, and secure applications. By grasping the basics of pod-to-pod communication, service discovery, load balancing, ingress, network policies, and networking plugins, you can effectively design and manage your Kubernetes networking infrastructure. As you gain expertise, exploring custom networking configurations can provide additional flexibility and enable advanced networking capabilities. With this guide, you're equipped to navigate the intricacies of Kubernetes networking and build robust, production-quality networking solutions for your applications.

Unlocking the Power of Orchestration with AWS Kubernetes Service

Containerization has revolutionized the way we develop, deploy, and scale applications. Kubernetes, an open-source container orchestration platform, has emerged as the de facto standard for managing containerized workloads efficiently. However, setting up and managing a Kubernetes (K8s) cluster can be a complex and time-consuming task. This is where AWS Elastic Kubernetes Service (EKS) comes to the rescue. In this blog post, we’ll explore the key features and benefits of EKS and how it simplifies the deployment and management of Kubernetes clusters on AWS.

Understanding Elastic Kubernetes Service

EKS is a fully managed service that makes it easier to run Kubernetes on AWS without the need to install and manage the K8s control plane. It takes care of the underlying infrastructure, including server provisioning, scaling, and patching, allowing developers and operations teams to focus on deploying and managing applications.
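As a sketch, creating a basic EKS cluster is a single command with eksctl, a popular open-source CLI for EKS (the cluster name, region, and node count below are placeholders):

eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodes 3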

One of the significant advantages of AWS EKS is its seamless integration with other AWS services. EKS leverages Elastic Load Balancers (ELB), Amazon RDS for database management, and Amazon VPC for networking, enabling you to build highly scalable and resilient applications on the AWS platform. Additionally, EKS integrates with AWS Identity and Access Management (IAM) for secure authentication and authorization.

Scalability and Security

AWS EKS provides a highly available and scalable K8s control plane. It runs across multiple Availability Zones (AZs), ensuring redundancy and minimizing downtime. EKS automatically detects and replaces unhealthy control plane nodes, ensuring the stability of your cluster. Moreover, EKS enables you to scale your cluster horizontally by adding or removing worker nodes to meet the changing demands of your applications.
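Continuing the eksctl sketch above, a managed node group can be scaled up or down with one command (the cluster and node group names are placeholders):

eksctl scale nodegroup --cluster demo-cluster --name ng-1 --nodes 5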

Security is a critical aspect of any cloud service, and AWS EKS offers robust security features. EKS integrates with AWS Identity and Access Management (IAM), allowing you to define granular access controls for your Kubernetes cluster. It also supports encryption of data at rest and in transit, using AWS Key Management Service (KMS) and Transport Layer Security (TLS) respectively. With EKS, you can meet various compliance requirements, such as HIPAA, GDPR, and PCI DSS.

Monitoring and Logging

AWS EKS provides comprehensive monitoring and logging capabilities. You can leverage Amazon CloudWatch to collect and analyze logs, metrics, and events from your EKS cluster. CloudWatch enables you to set up alarms and notifications to proactively monitor the health and performance of your applications. Additionally, EKS integrates with AWS X-Ray, a service for tracing and debugging distributed applications, allowing you to gain insights into the behavior of your microservices.

Cost Optimization

AWS EKS offers cost optimization features to help you manage your infrastructure efficiently. With EKS, you only pay for the resources you use, and you can scale your worker nodes based on demand. EKS integrates with AWS Auto Scaling, which automatically adjusts the number of worker nodes in your cluster based on predefined rules and metrics. This ensures optimal resource utilization and cost savings.

Elastic Kubernetes Service is a powerful service that simplifies management of Kubernetes clusters on the AWS platform. By leveraging the seamless integration with other AWS services, high availability, scalability, robust security, monitoring, and cost optimization features, AWS EKS empowers developers and operations teams to focus on building and scaling their applications without worrying about the underlying infrastructure. If you’re considering Kubernetes for your next project on AWS, EKS should be at the top of your list.

Creating Containerized Applications

Creating Docker Images

Once you’ve mastered running containers, the next step is to build your own containerized applications. In this article, we will create a Docker image, walk through the most useful Dockerfile options, and look at distributing the finished image. If you haven’t read the previous Docker articles, please see Part 1 and Part 2.

Getting Started with Images – Prep Work

As a first step, create a working directory for all of the files used in creating the image. This is where the Dockerfile and any libraries or other applications needed in the image can be stored.

mkdir wc
cd wc

The Dockerfile is the configuration file for creating Docker images. For all of the options, the reference is located in Docker’s documentation. Next, create a file called run_http.py with these contents:

#!/usr/bin/env python3
# Serve the current directory over HTTP on port 80 (the port EXPOSEd in the Dockerfile)
from http.server import HTTPServer, SimpleHTTPRequestHandler
HTTPServer(("", 80), SimpleHTTPRequestHandler).serve_forever()

Dockerfile Options

Now that the working directory is created, let’s start with some Dockerfile commands. There are many more options, with plenty of blogs and YouTube videos discussing how to create images. The Dockerfile below simply runs a Python web server. When creating an image, using ENV and COPY in the Dockerfile is extremely useful and, for the most part, required. These directives set up the image environment (i.e. CLASSPATH, PATH, or anything else the app needs to run) and ensure dependent libraries are in the correct path.

  • FROM
    • Declares a parent image to use as a base.
  • ENV
    • Sets up environment variables if the application requires them. For example, the postgres image has several environment variables that control container startup. Providing environment variables so users can change behavior or supply different startup options adds flexibility.
  • COPY / ADD
    • Copy or add a file from the host to the image. Most commonly this is an initialization script to prepare the container’s environment. Alternatively, you can specify a persistent volume when running the container if the files are installed on the host.
    • Important note: Wherever the scripts, libraries or other dependencies are copied, make sure the application is configured to search the destination path.
  • EXPOSE
    • Exposes a port from the container. This is typically used together with -p when running the container. For example, docker run -p 5448:5432 maps port 5448 on the host to port 5432 in the container; clients would then connect to the host on port 5448.
  • ENTRYPOINT
    • The container runs an exec of ENTRYPOINT as its main application process. If the application dies or is killed, the container stops as well. A complete example Dockerfile follows:
FROM centos
LABEL maintainer="Your Friendly Maintainer"

EXPOSE 80

COPY run_http.py /
RUN yum install -y epel-release
RUN yum install -y python3 python3-pip
RUN python3 -m pip install --upgrade pip
# ENV would go here if the app needed environment variables, e.g. ENV PATH=/opt/app/bin:${PATH}
RUN chmod +x /run_http.py
ENTRYPOINT ["/run_http.py"]

Distributing Images

With this file saved in the wc directory as Dockerfile, run the build command. Note that -t names the image and can include a version tag (name:tag). Make use of tags, since they will make upgrades and lifecycle management much easier.

docker build -t py-web .
docker image tag py-web:latest py-web:1.0 # add a version tag (1.0 here is an example)
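Before distributing the image, it is worth a quick local test: run it and map an arbitrary host port to the container’s port 80:

docker run -d -p 8080:80 py-web:latest
curl http://localhost:8080   # should return the directory listing served by run_http.py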

At this point, the developer has two choices: either push the image to a registry (Docker Hub or a local/private one), or save the image to a tar file for distribution.

docker push <hub-user>/py-web:latest # Pushing to Docker Hub (tag the image with your Hub username first)
docker push 1.1.1.1:5000/py-web:latest # Pushing the image to a local registry

If the image is saved instead, users can run docker image load to load it locally. The drawback of this approach is usability. While the docker image load command is straightforward, users sometimes react negatively to it, since most companies use Docker Hub or their own registry for distribution, which makes running containers very easy. The other issue is that with every release, the user has to download the tar file again and load it onto their system. Whenever possible, use Docker Hub or your own registry.
Here is an example of saving an image:

docker save py-web:latest > py-web.tar
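On the receiving system, the tar file is loaded back into the local image store with:

docker load < py-web.tar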

Docker Containers Part 2 – Working with Images

If you haven’t installed Docker, please read Part 1 of this Docker series.

Managing container lifecycles involves more than starting and stopping. In this second part of Docker Containers, we show how to administer images locally and on remote repositories. Image maintenance is handled through the docker image subcommand. We will cover list/ls/inspect, pull, and rm/prune in this article.

Working with Images

List Images

The first part of managing images is knowing which images are in use, their disk utilization, and their versions. Listing images is done in one of two ways: either the long form docker image list or the more Linux/UNIX-friendly docker image ls.

docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
python       latest   e285995a3494   10 days ago   921MB
postgres     latest   75993dd36176   10 days ago   376MB

Inspecting images provides information such as environment variables, parent image, commands used during initialization, network information, volumes, and much more. This data is vital when troubleshooting container startup issues or creating new images. The following is only an excerpt; the actual output runs to about two pages.

docker inspect postgres
[
    {
        "RepoTags": [
            "postgres:latest"
        ],
        "Hostname": "81312c458473",
        "ExposedPorts": {
            "5432/tcp": {}
        },
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/14/bin",
            "PG_MAJOR=14",
            "PG_VERSION=14.5-1.pgdg110+1",
            "PGDATA=/var/lib/postgresql/data"
        ],
        "Cmd": [
            "/bin/sh",
            "-c",
            "#(nop) ",
            "CMD [\"postgres\"]"
.......

Search Repositories

Searching repositories can be accomplished by either going to Docker Hub, or searching by command line, docker search <string>, so you never have to leave the shell. Here’s an example:

docker search postgres
NAME                        DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
postgres                    The PostgreSQL object-relational database sy…   11486   [OK]
bitnami/postgresql          Bitnami PostgreSQL Docker Image                 154                [OK]
circleci/postgres           The PostgreSQL object-relational database sy…   30
ubuntu/postgres             PostgreSQL is an open source object-relation…   19
bitnami/postgresql-repmgr                                                   18
rapidfort/postgresql        RapidFort optimized, hardened image for Post…   15

Pulling Images

In order to run containers, you will need to pull the image from a repository. This can be accomplished either with docker pull <image name> or with docker run <image name>, which automatically pulls the image if it doesn’t exist locally. By default, pull gets the latest version. Alternatively, you can specify a version by adding a colon (:) and a tag after the image name, like this: docker pull <image>:4.2.0

docker pull postgres
Using default tag: latest
latest: Pulling from library/postgres
31b3f1ad4ce1: Pull complete
1d3679a4a1a1: Pull complete
667bd4154fa2: Pull complete
87267fb600a9: Pull complete
Digest: sha256:b0ee049a2e347f5ec8c64ad225c7edbc88510a9e34450f23c4079a489ce16268
Status: Downloaded newer image for postgres:latest
docker.io/library/postgres:latest

Removing and Pruning Images

Unfortunately, Docker doesn’t automatically remove images, so disk utilization tends to grow fairly quickly if not managed. Docker has three commands for cleanup: image prune, rm, and rmi. As part of normal maintenance, prune should run from cron every few weeks or once a month, depending on how active the system is.

docker image prune – Deletes unused (dangling) images.
docker rm <container IDs> – Removes the given containers from the system.
docker rmi <image ID> – Removes the image.

docker image prune
WARNING! This will remove all dangling images.
Are you sure you want to continue? [y/N] y
<none>                               <none>      48e3a3f31a48   10 months ago   999MB
<none>                               <none>      89108dc97df7   10 months ago   1.37GB
<none>                               <none>      26e43fa5dd7c   11 months ago   998MB
<none>                               <none>      b98d351f790b   11 months ago   1.37GB
<none>                               <none>      334a4df3c05a   11 months ago   998MB
<none>                               <none>      17c5a57654e4   11 months ago   1.37GB

Docker Containers – Part 1 Installation

Containers in DevOps allow an application to run in any supported environment. An application running in a container can run on Windows and Linux hosts without any changes to the application. A container is a lightweight piece of software, similar in nature to FreeBSD Jails or Linux Containers (LXC). However, a container isn’t like a traditional operating system: although containers can be configured to behave like an OS, this is not the design. Containers are highly configurable and can run just about any application, for example middle-tier applications, web servers, and in some cases databases. One of the most popular container platforms is Docker, which is the focus of this post, but there are many more.

Docker has a few different offerings. The two most common are Community Edition (CE) and Enterprise Edition (EE). CE is the free and unsupported version whereas EE is a paid model and bundled with support.

Before installing and configuring Docker, we need to understand some key terms.

Images are templates for running a container. For example, to build a middle-tier application server, an installation of JBoss, a version of Java, and an Oracle driver would all need to be part of the image. You can see all of the images on a system by running the docker images command.

Containers are running instances of images. To see the containers, use the command docker ps -a.

Repositories are sets of images with different tags. This is similar to code repositories, where you check out different versions of code based on a tag or version. Omitting the tag will check out the latest image version. With repos, you can create and share the repo with the world (which is the default behavior) or you can keep the repo private.

Dockerfile is the configuration file used while creating a Docker image.

Installing Docker

Windows

Docker is easy to install no matter which OS is being used. On Windows, download the stable version of Docker for Windows from Docker’s website. The installer works like any other: double-click it and follow the instructions. Note: .NET version 4.0.3 is required for Docker.

RHEL, CentOS, or Fedora Linux

Prerequisites

First, install the following required packages, then enable the Docker Yum repository, as root or as a user with sudo privileges for yum.

yum install -y yum-utils device-mapper-persistent-data lvm2

Next, add the Yum Docker repo

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Once these steps are complete, you can use yum to install Docker and service to start the Docker process.

yum install -y docker-ce
service docker start

Ubuntu Linux

Install the prerequisite software, add the gpg key to the system and add the Docker repository. Adding the repository will allow us to use apt-get to install Docker.

apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Once the software is installed and the key added to the system, use apt-get to install Docker Community Edition (docker-ce).

apt-get install docker-ce

Linux

No matter which version of Linux, only the root user is configured to run Docker commands by default. If other users need permissions, create a docker group as root and add the users to the group. The gpasswd command adds a user to the docker group; it takes the username and group as arguments.

groupadd docker
gpasswd -a dockeruser1 docker
service docker restart

After the installation, you can now use the docker command to list images and run containers. In the next post, we will create our own repository and make it publicly available.
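As a quick smoke test that the installation works end to end, Docker’s standard hello-world image can be run:

docker run hello-world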