This line copies the binary from the first build stage to the current directory of the second-stage image. We didn't set `WORKDIR` in the second build stage, so by default, the current directory is the root directory.

CMD ["./auth-app", "-a", "0.0.0.0", "-p", "8080"]

This line sets the default command to run when the container starts.

Now, we can build the image by using the following command:

docker build -t auth-app .

The `-t` flag sets the image tag, the same tag we placed in the `FROM` instruction in the first build stage. We didn't put a version after the colon, so Docker automatically builds with the `latest` tag. The `.` at the end of the command is the build context; the build process happens inside the build context's directory.
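If you want to pin a specific version instead of relying on `latest`, pass it after the colon. The `1.0.0` below is just an illustrative version, not one defined elsewhere in this book:

docker build -t auth-app:1.0.0 .

You can then reference that exact tag in later `docker run` commands.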

After the image is built, we can check it by using the following command:

docker images

Note that the size of our built image is much smaller compared to the regular Rust image:

REPOSITORY TAG IMAGE ID CREATED SIZE

auth-app latest 94e11dc49c66 2 minutes ago 34.8MB

rust 1.73-bookworm 890a6b209e1c 3 days ago 1.5GB

Now it is time to create an instance of this image. We call such an instance a container. We can create one by using the following command:

docker run -p 8080:8080 auth-app:latest

The `-p` flag maps the container port to the host port. The first port is the host port, and the second is the container port. We also explicitly specified a tag; if it is omitted, Docker defaults to the `latest` tag. Let's now request the `/health` endpoint:

curl http://localhost:8080/health

You should see the following response, meaning that our application is healthy:

{"status": "OK"}
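If port 8080 is already taken on your host, you can map any free host port to the container's port 8080. The host port `9090` below is an arbitrary example, and the `-d` flag runs the container in the background:

docker run -d -p 9090:8080 auth-app:latest

curl http://localhost:9090/health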

Containerizing with Podman

To start with Podman, you need to install Podman Desktop. You can download it from the [official website](https://podman.io/docs/installation). Once you've installed it, you can check the Podman version by using this command:

podman --version

You should see the version of Podman:

podman version 4.7.0

Compared to Docker Desktop, Podman requires an additional step from the user to start a virtual machine. You can do it by running the command below:

podman machine start

By default, the Podman machine is configured in rootless mode. So if your container requires root permissions (for example, to access privileged ports), you need to switch the machine to rootful mode:

podman machine set --rootful

We will run our application in rootless mode, which is more secure. If the `USER` instruction is not specified, the image runs as root by default. That is not a good practice, so we should declare an unprivileged user and group in the container and run the application as that user. So, let's adjust the second stage in our Dockerfile:

# … (first stage is omitted)

FROM gcr.io/distroless/cc-debian12

ENV RUST_LOG=info

COPY --from=builder /app/target/release/auth-app .

USER nobody:nobody

ENTRYPOINT ["./auth-app"]

CMD ["-a", "0.0.0.0", "-p", "8080"]

The `nobody` user is a reserved account on Unix-like operating systems. To limit the harm if a process is compromised, it's common to run daemons and other processes as the `nobody` user. We also added the `ENTRYPOINT` instruction to the Dockerfile. It is like `CMD`, but arguments passed when running the container override `CMD`, not the entrypoint.

After that, you can build the image similarly to Docker by using the following statement:

podman build -t auth-app:latest .

Start the container, overriding the predefined `CMD` instruction:

podman run -p 5555:5555 auth-app -a 0.0.0.0 -p 5555

The part after the image name replaces the `CMD` instruction. It overrides the default command specified in the Dockerfile with a new port. After requesting the `/health` endpoint on the new port, we should get the same response as with Docker, confirming that our application is healthy.
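To double-check that the process really runs as an unprivileged user, you can ask Podman which user owns the container's main process. This is a verification sketch, not a step from the recipe; replace `<container-id>` with the ID reported by `podman ps`:

podman ps

podman top <container-id> user huser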

Containerizing with Colima

To install Colima, you can obtain the latest release from the official [GitHub repository](https://github.com/abiosoft/colima) and follow the provided installation guide. Once you've installed it, you can check the Colima version by using this command:

colima --version

You should see something like this:

colima version 0.5.6

git commit: ceef812c32ab74a49df9f270e048e5dced85f932

To start the Colima machine, you need to use the `start` command:

colima start

This command adds a Docker context to your environment. You can use the Docker client (Docker CLI) to interact with the Docker daemon inside the Colima machine. To get the list of contexts, run the following:

docker context ls --format=json

[
  {
    "Current": true,
    "Description": "colima",
    "DockerEndpoint": "unix:///Users/m_muravyev/.colima/default/docker.sock",
    "KubernetesEndpoint": "",
    "ContextType": "moby",
    "Name": "colima",
    "StackOrchestrator": ""
  },
  {
    "Current": false,
    "Description": "",
    "DockerEndpoint": "unix:///Users/m_muravyev/.docker/run/docker.sock",
    "KubernetesEndpoint": "",
    "ContextType": "moby",
    "Name": "desktop-linux",
    "StackOrchestrator": ""
  }
]

The colima context is the default one, pointing to the Docker daemon inside the Colima machine. The desktop-linux context is Docker Desktop's default context. You can always switch between them.
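Switching is a single standard Docker CLI command:

docker context use desktop-linux # point the CLI at Docker Desktop

docker context use colima # point it back at the Colima machine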

Building Multi-Architecture Docker Images

Docker, Podman, and Colima support multi-architecture images, a powerful feature. You can create and share container images that work on different hardware types. This section will briefly touch on the concept of multi-arch images and how to make them.

Let's refresh our memory about computer architecture. The Rust compiler can build the application for different architectures; the default one is the host architecture. For example, if you want to run an application on a modern macOS with an M chip, you would normally compile it on that machine. That's because the M chip has the `arm64` architecture, which differs from the common `amd64` architecture found on most regular Windows or Linux systems.

You can use Rust's cross-compilation feature to compile a project for any architecture. It works on any host platform, even if the target is different. With a couple of simple flags, you can produce a binary for Apple's M chip on a regular Linux machine. No matter what our host system is, the Rust compiler will make an M-chip-compatible binary:

rustup target add aarch64-apple-darwin # add the M chip target triple

cargo build --release --target aarch64-apple-darwin # build the binary using that target

To build the application for Linux, we can use the target triple `x86_64-unknown-linux-gnu`. Don't worry about the `unknown` part; it is just a placeholder for the vendor, and in this case it simply means any vendor. The `gnu` part means the GNU C library is used, which is the most common C library for Linux.
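For instance, you can list the targets installed on your machine and build for the Linux triple, assuming the matching toolchain and linker for that target are available on your host:

rustup target list --installed

cargo build --release --target x86_64-unknown-linux-gnu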

It is important to say that there are drawbacks to using this method instead of creating images that support multiple architectures:

– Cross-compilation adds complexity and overhead to the build process because it works differently for each programming language.

– Building an image takes more time because of installing and configuring the cross-compilation toolchains.

– Creating distinct Dockerfiles for each architecture becomes necessary, leading to a less maintainable and scalable approach.

– Distinguishing the image's architecture relies on using tags or names. In the case of multi-arch images, these tags or names may remain identical across all architectures.

Let’s create a multi-arch image for our application. We will use the Dockerfile we created earlier.

docker buildx create --use --name multi-arch # create a builder instance

docker buildx build --platform linux/amd64,linux/arm64 -t auth-app:latest .

Buildx is a Docker CLI plugin, built on top of BuildKit, that extends the Docker build capabilities. Because we are using Colima with the Docker runtime inside, we can use Buildx. Podman also supports multi-platform builds. The `--platform` flag specifies the target platforms. The `linux/amd64` platform is the default; `linux/arm64` is the platform for Apple's M chip.
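You can verify which builder instance is active and which platforms it supports with the standard listing command:

docker buildx ls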

Under the hood, Buildx uses QEMU to emulate the target architecture. The build can take more time than usual because each target architecture is built under emulation. After the build is complete, you can find out the image's available architectures by using the following command:

docker inspect auth-app | jq '.[].Architecture'

You need to install the `jq` tool to run this and the following commands. It is a command-line JSON processor that helps you parse and manipulate JSON data.

brew install jq

You will get the following output:

"amd64"

You might notice that only one architecture is available. This is because Buildx uses the `--output=docker` type by default, which cannot export multi-platform images. Instead, multi-platform images must be pushed to a registry using the `--output=oci` type or simply with the `--push` flag. When you use this flag, Docker creates a manifest listing all available architectures for the image and pushes it alongside the image to the registry. When you pull the image, the variant matching your architecture is chosen. Let's check the manifest for the [official Rust image](https://hub.docker.com/_/rust) on the Docker Hub registry:

docker manifest inspect rust:1.73-bookworm | jq '.manifests[].platform'

Why don't we specify a URL for the remote Docker Hub registry? Because the Docker CLI has a default registry, so the command above actually looks like this in its explicit form:

docker manifest inspect docker.io/rust:1.73-bookworm | jq '.manifests[].platform'

You will see output like so:

{
  "architecture": "amd64",
  "os": "linux"
}
{
  "architecture": "arm",
  "os": "linux",
  "variant": "v7"
}
{
  "architecture": "arm64",
  "os": "linux",
  "variant": "v8"
}
{
  "architecture": "386",
  "os": "linux"
}

You can see that the Rust image supports four architectures. Roughly speaking, the `arm` architecture is for devices like the Raspberry Pi, the `386` architecture is for 32-bit systems, the `amd64` architecture is for 64-bit x86 systems, and the `arm64` architecture is for Apple's M chip.
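To publish our own multi-arch image with the `--push` flag described above, the build command looks like the sketch below. The repository path `docker.io/<your-username>/auth-app` is a placeholder; replace it with a registry and repository you control:

docker buildx build --platform linux/amd64,linux/arm64 -t docker.io/<your-username>/auth-app:latest --push .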

The Role of Docker in Modern Development

Docker has transformed modern software development by providing a standardized approach through containerization. This approach has made software development, testing, and operations more efficient. Docker creates container images on various hardware configurations, including traditional x86-64 and ARM architectures. It integrates with multiple programming languages, making development and deployment more accessible and versatile for developers.

Docker is helpful for individual development environments and container orchestration and management. Organizations use Docker to streamline their software delivery pipelines, making them more efficient and reliable. Docker provides a comprehensive tool suite for containerization, which impacts software development at all stages.

Our journey doesn’t end with Docker alone as we navigate the complex world of modern development. The following section will explain the critical role of Kubernetes in orchestration and how it fits into the contemporary development landscape. Let’s explore how Kubernetes can orchestrate containerized applications.

Understanding Kubernetes’ Role in Orchestration

Building on our prior knowledge, we understand that container deployment is straightforward. What Kubernetes brings to the table, as detailed earlier, is large-scale container orchestration – particularly beneficial in complex microservice and multi-cloud environments.

Kubernetes, often regarded as the cloud’s operating system, extends beyond its origins as Google’s internal project, now serving as a cornerstone in the orchestration of containerized applications. It is a decent system for automating containerized application deployment, scaling, and management. It is a portable, extensible, and open-source platform. It is also a production-ready platform that powers the most extensive applications worldwide. Google, Spotify, The New York Times, and many other companies use Kubernetes at scale.

With the increasing complexity of microservices, Kubernetes’ vibrant community, including contributors from leading entities like Google and Red Hat, continually enhances its capabilities to simplify its management. Its active development mirrors the characteristic rapid evolution of open-source projects. Expect more discussions about Kubernetes involving IT professionals and individuals from diverse technical backgrounds, even those less familiar with technology.

Comparing Docker Compose and Kubernetes

Docker is a container platform. Kubernetes is a platform for orchestrating containers. It's crucial to recognize that these two platforms cater to distinct purposes. An alternative to Kubernetes, even if incomplete, is Docker Compose. It presents a simpler solution for running Docker applications with multiple containers, finding its niche in local development environments. Some fearless individuals even deploy it in production. However, when comparing them, Docker Compose is like a small forklift that moves containers, while Kubernetes can be envisioned as a cutting-edge logistics center comparable to the top-tier facilities in Amazon's warehouses. It provides advanced automation, offering unparalleled container management at scale.

Docker Compose for Multi-Container Applications

With Docker Compose, you can define and run multiple containers. It uses a simple YAML file structure to configure the services. A service definition contains the configuration that is applied to each container. You can create and start all the services from your configuration with a single command.

Let's enhance our auth-app application. Let's assume it requires in-memory storage to keep the user's data. We will use Redis for that. We also need a broker to send messages to a queue; we will use RabbitMQ, a traditional choice for that. Let's create a `compose.yml` file with the following content:

version: "3"

services:
  auth-app:
    image: auth-app:latest
    ports:
      - "8080:8080"
    environment:
      RUST_LOG: info
      REDIS_HOST: redis
      REDIS_PORT: 6379
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
  redis:
    image: redis:latest
    volumes:
      - redis:/data
    ports:
      - 6379
  rabbitmq:
    image: rabbitmq:latest
    volumes:
      - rabbitmq:/var/lib/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    ports:
      - 5672

volumes:
  redis:
  rabbitmq:
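Before starting the stack, it's worth validating the file. The standard `docker-compose config` command parses the YAML and prints the fully resolved configuration, failing with an error if the syntax is wrong:

docker-compose config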

To run all the services, you need to use the following command:

docker-compose up

Often it's practical to run containers in the background:

docker-compose up -d

And follow the logs in the same terminal session:

docker-compose logs -f

To stop all the containers of the Compose stack, use the following command:

docker-compose down
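Note that `docker-compose down` stops and removes the containers but keeps the named volumes, so the Redis and RabbitMQ data survives a restart. If you also want to remove the volumes declared in the file, add the `-v` flag:

docker-compose down -v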

Transitioning from Docker Compose to Kubernetes Orchestration

Migrating from Docker Compose to Kubernetes can offer several benefits and enhance the capabilities of your containerized applications. There are various reasons why Kubernetes can be a suitable option for this transition:

– Docker Compose is constrained by a single-host limitation, restricting deployment to just one machine. Conversely, Kubernetes is a platform that effectively manages containers across multiple hosts.

– In Docker Compose, the failure of the host running the containers results in the failure of all containers on that host. In contrast, Kubernetes employs a primary node to oversee the cluster and multiple worker nodes. If a worker node fails, the cluster can continue operating with minimal disruption.

– Kubernetes boasts many features and possibilities that can be expanded with new components and functionalities. Docker Compose allows adding a few features, but it lags behind Kubernetes in popularity and scope.

– With robust cloud-native support, Kubernetes facilitates deployment on any cloud provider. This flexibility has contributed to its growing popularity among software developers in recent years.

Conclusion

This section discusses how software packaging has evolved from traditional methods to modern containerization techniques using Docker and Kubernetes. It explains the benefits and considerations associated with Docker Engine, Docker Desktop, Podman, and Colima. The book will further explore the practical aspects of encapsulating applications into containers, the importance of Docker in current development methods, and the crucial role Kubernetes plays in orchestrating containerized applications at scale.


Creating a Local Cluster with Minikube

Minikube is a tool that makes it easy to run Kubernetes locally. It simplifies the process by running a single-node cluster inside a virtual machine (VM) on your device, which can emulate a multi-node Kubernetes cluster. Minikube is the most used local Kubernetes cluster. It is a great way to get started with Kubernetes. It is also an excellent environment for testing Kubernetes applications before deploying them to a production cluster.

There are equivalent alternatives to Minikube, such as Kubernetes support in Docker Desktop and Kind (Kubernetes in Docker), where you can also run Kubernetes clusters locally. However, Minikube is the most favored and widely used tool. It is also the most straightforward. It is a single binary that you can quickly download and run on your machine. It is also available for Windows, macOS, and Linux.

Installing Minikube

To install Minikube, download the binary from the [official website](https://minikube.sigs.k8s.io/docs/start/). For example, if you use macOS with an Intel chip, apply these commands:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64

sudo install minikube-darwin-amd64 /usr/local/bin/minikube

If you prefer not to use the curl and sudo combination, you can use Homebrew:

brew install minikube
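Either way, you can confirm the installation with the standard version command; the exact version you see depends on when you install it:

minikube version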

Configuring and Launching Your Minikube Cluster

You can start Minikube in the simplest possible way, with the default configuration:

minikube start

While the provided command is generally functional, it's recommended to specify the Minikube driver explicitly so that you better understand future provisioning configurations. For instance, the Container Network Interface (CNI) is set to auto by default, which can lead to unforeseen consequences depending on the driver Minikube selects.

It’s worth noting that Minikube often selects the driver based on the underlying operating system configuration. For example, if the Docker service runs, Minikube might default to using the Docker driver. Explicitly specifying the driver ensures a more predictable and tailored configuration for your specific needs.

minikube start --cpus=4 --memory=8192 --disk-size=50g --driver=docker --addons=ingress --addons=metrics-server

Most options are self-explanatory. The `--driver` option specifies the virtualization driver. By default, Minikube prefers the Docker driver, or a VM on macOS if Docker is not installed. On Linux, the Docker, KVM2, and Podman drivers are favored; however, you can use any of the seven currently available options. The `--addons` option specifies the list of add-ons to enable. You can list the available add-ons by using the following command:

minikube addons list
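Add-ons can also be toggled after the cluster is running. For example, to enable the dashboard add-on (shown purely as an illustration; we don't use it in this recipe):

minikube addons enable dashboard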

If you use Docker Desktop, make sure the virtual machine’s CPU and memory settings are higher than Minikube’s settings. Otherwise, you will get an error like:

Exiting due to MK_USAGE: Docker Desktop has only 7959MB memory, but you specified 8192MB.

Once you’ve started, use this command to check the cluster’s status:

minikube status

And get:

minikube

type: Control Plane

host: Running

kubelet: Running

apiserver: Running

kubeconfig: Configured

Interacting with Minikube Cluster

The kubectl command-line tool is the most common way to interact with Kubernetes. It should be the first tool for any Kubernetes user. It's the official client for the Kubernetes API. Minikube already bundles it, and we could use that copy; however, the recommended way is to install kubectl from the [official website](https://kubernetes.io/docs/tasks/tools/) and use it separately from Minikube, not least because Minikube's kubectl is not always up to date and can be a few versions behind.

You can check Minikube’s kubectl version by using the following command:

minikube kubectl -- version

Alternatively, if you have kubectl installed separately, you can check the version directly:

kubectl version

From now on, we will use the kubectl command-line tool installed separately from Minikube.

You will receive the client version (kubectl itself) and the server version (the Kubernetes cluster). It's okay if the versions differ, as the Kubernetes server has a different release cycle than kubectl. While it's better to aim for identical versions, it's not always necessary.

To get the list of nodes in the cluster, use the following command:

kubectl get nodes

You will get our cluster’s single node:

NAME STATUS ROLES AGE VERSION

minikube Ready control-plane 10m v1.24.1

This output means that we have one node that was created 10 minutes ago. The node has the control-plane role, meaning it is the primary node. Usually, control-plane nodes are reserved for Kubernetes components (the things that make Kubernetes run), not for user workloads (the applications users deploy on Kubernetes). But since Minikube is intended for development, this single node hosts everything.
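You can see why workloads are allowed on this node by inspecting it. On a typical production cluster, the control-plane node carries a `NoSchedule` taint, while Minikube leaves its single node untainted; this check is just an illustration:

kubectl describe node minikube | grep -i taints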

It is also worth noting that this single node exposes the Kubernetes API server. You can find its URL by using the following command:

kubectl cluster-info

You will get the same address that kubectl sends its requests to:

Kubernetes control plane is running at https://127.0.0.1:59813

Finally, let’s use the first add-on we enabled earlier. The metrics server is a cluster-wide aggregator of resource usage data. It collects metrics from Kubernetes, such as CPU and memory usage per node and pod. It is a prerequisite for the autoscaling mechanism we will discuss later in this book. For now, let’s check cluster node resource usage:

kubectl top node

You will receive data showing the utilization of CPU and memory resources by the node. In our case, the usage might appear minimal because nothing has been deployed yet. The specific percentages can vary depending on background processes and Minikube’s overhead.

NAME CPU (cores) CPU% MEMORY (bytes) MEMORY%

minikube 408m 10% 1600Mi 20%
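The metrics server aggregates per-pod usage as well. Since we haven't deployed anything yet, the only pods you will see are Kubernetes' own system pods:

kubectl top pod -A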

Stopping and Deleting Your Minikube Cluster

To stop Minikube, use the following command:

minikube stop

You can also delete the cluster by using the following command:

minikube delete

Recipe: Deploying Your First Application to Kubernetes

In this recipe, we will deploy our first application to the Kubernetes cluster. We will use the same application we containerized in the previous recipe; that is, the same Docker image we built earlier. However, we will deliberately use the less common imperative approach, issuing command-line commands, to start with simple things. We will switch to the declarative way later in this chapter once we have warmed up. For now, let's refresh our fundamental computer science knowledge and recall the differences between these two approaches.

Understanding Imperative vs. Declarative Management Model

The imperative paradigm is a term that is mainly, but not always, related to programming. In this programming style, the engineer tells the computer step by step how to do a task. The imperative approach is used to operate programs or issue direct commands to configure infrastructure. For example, using terminal commands to start a Docker container demonstrates the imperative approach.
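As a preview of how this looks in Kubernetes, the imperative style means commands like the sketch below, where every detail is passed as a flag rather than recorded in a file. The flags are illustrative only; the actual deployment steps follow in this recipe:

kubectl run auth-app --image=auth-app:latest --port=8080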