
{
  "storage-opts": [
    "dm.basesize=50G",
    "dm.min_free_space=5%"
  ]
}

You can limit the resources consumed by the container in its settings:

* -m 256m – maximum size of RAM consumption (here 256Mb);

* -c 512 – CPU usage priority weight (1024 by default);

* --cpuset-cpus="0,1" – numbers of the processor cores allowed to be used (see the example right after this list).
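A minimal sketch combining these flags in a single run (the nginx image is used here only for illustration and is not part of the book's example):

docker run -d --name limited-nginx -m 256m -c 512 --cpuset-cpus="0,1" nginx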

Product transfer and distribution

To transfer a project, for example, to a customer, and distribute it between developers and servers, you can use installation scripts, archives, images, and containers. Each of these ways to distribute a project has its own characteristics, disadvantages and advantages. Let's talk about them and compare.

lines, but the main thing is that it has a follow mode, enabled by the -f switch, which dynamically updates the output as new lines arrive, for example, docker logs name_container | tail -f (or simply docker logs -f name_container).

When there are too many applications to manually monitor their work separately, it is advisable to centralize application logs. For centralization, numerous programs can be used that collect logs from different services and send them to a central repository, for example, Fluentd. It is convenient to use ElasticSearch to store logs, simply by writing them to this search engine. It is highly desirable that the logs be in a structured format – JSON. This will allow you to sort them, select the ones you need, identify trends using built-in aggregate functions, and perform analysis and forecasting, rather than just searching by text. For analysis, the Kibana web interface included in the Elastic stack can be used.
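As a sketch of such centralization (assuming a Fluentd collector is already listening on localhost:24224 and myimage is our application image), Docker's fluentd logging driver can ship a container's stdout/stderr to it:

docker run -d --name myapp --log-driver=fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag="docker.{{.Name}}" myimage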

Logging is important not only for long-running applications. For test containers, for example, it is convenient to get the output of the tests that were run. This can be done by writing in the Dockerfile, in the CMD instruction, the npm test command, which will run the tests.
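A minimal sketch of such a Dockerfile, assuming a Node.js project with a test script defined in package.json:

FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "test"]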

Image storage:

* public and private Docker Hub (http://hub.docker.com)

* for private and secret projects, you can create your own image repository; the image used to run it is called registry

Docker for building apps and one-off jobs

Unlike virtual machines, whose launch is associated with significant human and computational costs, Docker is often used to perform one-off actions, when software needs to be launched once and it is desirable not to spend effort on installing and removing it. To do this, a container is launched that is mounted to the folder with our application, performs the required actions on it and, after they are completed, is deleted. An example is a JavaScript project that needs to be built and tested. The project itself does not use NodeJS, but contains only build-tool configs, for example for WEBPack, and written tests. To do this, we start the build container in interactive mode, in which you can control the build process if necessary, and after the build is completed the container will stop and delete itself; for example, you can run something like this at the root of the application: docker run -it --rm -v $(pwd):/app node-build. Tests can be run in a similar way. As a result, the application is built and tested on a test server, but the software that is not required for its operation on the production server is not installed there and does not consume resources, and the application itself can be transferred to the production server, for example, using a container. In order not to write documentation on starting the build and testing, you can add two corresponding configs, docker-compose-build.yml and docker-compose-test.yml, and call them with docker-compose -f ./docker-compose-build.yml up.
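A possible sketch of such a docker-compose-build.yml (the node image and the WEBPack invocation are assumptions, not the book's exact config):

version: '3'
services:
  build:
    image: node:alpine
    working_dir: /app
    volumes:
      - .:/app
    command: sh -c "npm install && npx webpack"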

Management and access

We manage containers using the docker command. Now, let's say there is a need to manage them remotely. Using VNC, SSH or something else just to run docker commands remotely will probably be too time-consuming if the task gets complicated. First you will need to figure out what Docker is, because the docker command and the Docker program are not the same thing; rather, the docker command is a console client for managing the Docker Engine client-server application. The command interacts with the Docker Engine server through the Docker REST API, which is intended for remote interaction with the server. But in this case, you need to take care of authorization and SSL encryption of traffic. This is ensured by the creation of keys, but in general, if the task is centralized management, differentiation of rights and security, it is better to look towards products that provide this out of the box and use Docker as a means of launching containers, not as a system.

By default, for security reasons, the client communicates with the server over a Unix socket (the special file /var/run/docker.sock), not over a network socket. To work through the Unix socket, you can tell the curl program to use it: curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json, but this feature is supported only since curl 7.40, which is not available on CentOS. To solve this problem and to communicate between remote servers, we use a network socket. To activate it, stop the server with systemctl stop docker and start it with the settings dockerd -H tcp://0.0.0.0:9002 & (listen to everyone on port 9002, which is not permissible for production). After that, the docker ps command will not work, but docker -H 5.23.52.111:9002 ps or docker -H tcp://geocode1.essch.ru:9002 ps will. By default, Docker uses port 2375 for http and 2376 for https. In order not to change the code everywhere and not to specify the socket every time, we will write it in an environment variable:

export DOCKER_HOST=tcp://5.23.52.111:9002
docker ps
docker info
unset DOCKER_HOST

It is important to use export to make the variable available to all child processes. Also, there should be no spaces around the = sign. Now we can access the dockerd server from anywhere on the same network. To manage remote and local Docker Engines (Docker on hosts), the Docker Machine program has been developed. Interaction with it is carried out using the docker-machine command. First, let's install it:

base=https://github.com/docker/machine/releases/download/v0.14.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) > /usr/local/bin/docker-machine &&
chmod +x /usr/local/bin/docker-machine

Group of related applications

We already have several different applications, let's say NGINX, MySQL and our own application. We have isolated them in different containers, and now they do not conflict with each other; for NGINX and MySQL we did not waste time and effort on making our own configuration, but simply downloaded ready-made images: docker run mysql, docker run nginx, and for the application docker build -t myapp .; docker run -p 80:80 myapp bash. As you can see, it would seem that everything is very simple, but there are two points: control and interconnection.

To demonstrate control, we will take the container of our application and implement two possibilities – start and re-creation (rebuild). For a manual start, when we know that the container has already been created but is simply stopped, it is enough to execute docker start myapp, but for automatic mode this is not enough: we need to write a script that takes into account whether the container already exists and whether there is an image for it:

if docker ps -a | grep -q myapp
then
  docker start myapp
else
  if ! docker images | grep -q myimage
  then
    docker build -t myimage .
  fi
  docker run -d --name myapp -p 80:80 myimage bash
fi

… And to re-create it, you need to delete the container first, if it exists:

if docker ps -a | grep -q myapp
then
  docker rm -f myapp
fi
if ! docker images | grep -q myimage
then
  docker build -t myimage .
fi
docker run -d --name myapp -p 80:80 myimage bash

… It is clear that you need to put the general parameters – the name of the image and of the container – into variables, check that the Dockerfile is there and is valid, and only after that delete the container, and much more. To understand the real scale, without going into the interaction of containers, into cloning (scaling) of these groups and the like, I will just mention that the docker run command can easily exceed one to two dozen lines: for example, a dozen forwarded ports, mounted folders, memory and processor limits, connections with other containers and a few more specific parameters. Yes, this is not good, and it is difficult to split this into many containers in this variant, due to the lack of a map of container interaction. But the question arises: isn't it too much work just to give the user the opportunity to start the container or rebuild it? Often the answer of the system administrator boils down to giving access only to a select few. But even here there is a solution: Docker Compose is a tool for working with a group of containers:

# docker-compose.yml
version: '3'
services:
  myapp:
    container_name: myapp
    image: myimage
    build: .
    ports:
      - "80:80"

… To start it, run docker-compose up -d, and to rebuild it, docker-compose down; docker-compose up -d. Moreover, when the configuration changes and a complete rebuild is not needed, it will simply be updated.

Now that we have simplified the process of managing a single container, let's work with a group. Here, only the config itself will change:

# docker-compose.yml
version: '3'
services:
  mysql:
    image: mysql
  nginx:
    image: nginx
    ports:
      - "80:80"
  myapp:
    container_name: myapp
    build: .
    image: myimage
    depends_on:
      - mysql
    links:
      - mysql:db
      - nginx:nginx

… Here we see the whole picture at once: the containers are connected by one network, in which the application can access MySQL and NGINX via the db and nginx hosts respectively, and the myapp container will be created only after the mysql database is brought up, even if that takes some time.

Service Discovery

With the growth of a cluster, the probability of node failures increases, and manually detecting what has happened becomes more complicated; Service Discovery systems are designed to automate the detection of newly appeared services and of their disappearance. But for the cluster to be able to detect its state, given that the system is decentralized, the nodes must be able to exchange messages with each other and elect a leader; examples are Consul, etcd and ZooKeeper. We will consider Consul based on the following features: the whole program is one file, it is extremely easy to use and configure, it has a high-level interface (ZooKeeper does not have one; it is believed that over time third-party applications implementing it should appear), it is written in a language undemanding of machine resources (Consul – Go, ZooKeeper – Java), and we neglect the lack of its support in some other systems, such as, for example, ClickHouse (which supports ZooKeeper out of the box).

Let's check the distribution of information between the nodes using a distributed key-value storage: if we add records on one node, they should spread to the other nodes, and there should not be a hard-coded master node. Since Consul consists of a single executable file, download it from the official website at https://www.consul.io/downloads.html on each node:

wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_linux_amd64.zip -O consul.zip

unzip consul.zip

rm -f consul.zip

Now you need to start one node, for now as the master, with consul agent -server -ui, and the others as slaves with the same consul agent -server -ui command. After that, we will stop the Consul that is running in master mode and launch it again as an equal; as a result the Consul nodes will re-elect a temporary leader, and in case of its failure they will re-elect again. Let's check the work of our cluster with consul members:

consul members;

Now let's check the distribution of information in our storage:

curl -X PUT -d 'value1' .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1

Let's set up service monitoring; for more details, see the documentation https://www.consul.io/docs/agent/options.html#telemetry and, for that, .... https://medium.com/southbridge/monitoring-consul-with-statsd-exporter-and-prometheus-bad8bee3961b
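A minimal sketch of the telemetry block in the Consul agent configuration (the address is an assumption and should point at your statsd_exporter; the file would be passed to the agent with -config-file):

{
  "telemetry": {
    "statsd_address": "127.0.0.1:9125"
  }
}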

In order not to do the configuration ourselves, we will use a container and the development mode with an already configured IP address, 172.17.0.2:

essh@kubernetes-master:~$ mkdir consul && cd $_
essh@kubernetes-master:~/consul$ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
Unable to find image 'consul:latest' locally
latest: Pulling from library/consul
e7c96db7181b: Pull complete
3404d2df15cb: Pull complete
1b2797650ac6: Pull complete
42eaf145982e: Pull complete
cef844389e8c: Pull complete
bc7449359c58: Pull complete
Digest: sha256:94cdbd83f24ec406da2b5d300a112c14cf1091bed8d6abd49609e6fe3c23f181
Status: Downloaded newer image for consul:latest
c6079f82500a41f878d2c513cf37d45ecadd3fc40998cd35020c604eb5f934a1
essh@kubernetes-master:~/consul$ docker inspect dev-consul | jq '.[] | .NetworkSettings.Networks.bridge.IPAddress'
"172.17.0.4"
essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_1 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4
8ec88680bc632bef93eb9607612ed7f7f539de9f305c22a7d5a23b9ddf8c4b3e
essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_2 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4
babd31d7c5640845003a221d725ce0a1ff83f9827f839781372b1fcc629009cb
essh@kubernetes-master:~/consul$ docker exec -t dev-consul consul members
Node          Address          Status  Type    Build  Protocol  DC   Segment
53cd8748f031  172.17.0.5:8301  left    server  1.6.1  2         dc1
8ec88680bc63  172.17.0.5:8301  alive   server  1.6.1  2         dc1
babd31d7c564  172.17.0.6:8301  alive   server  1.6.1  2         dc1
essh@kubernetes-master:~/consul$ curl -X PUT -d 'value1' 172.17.0.4:8500/v1/kv/group1/key1
true
essh@kubernetes-master:~/consul$ curl $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/v1/kv/group1/key1
[
  {
    "LockIndex": 0,
    "Key": "group1/key1",
    "Flags": 0,
    "Value": "dmFsdWUx",
    "CreateIndex": 277,
    "ModifyIndex": 277
  }
]
essh@kubernetes-master:~/consul$ firefox $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/ui

Along with determining the location of the containers, it is necessary to provide authorization; for this, key-value stores are used:

dockerd -H fd:// --cluster-store=consul://192.168.1.6:8500 --cluster-advertise=eth0:2376

* --cluster-store – from where the data about the keys can be obtained;

* --cluster-advertise – through which address it can be saved.

docker network create --driver overlay --subnet 192.168.10.0/24 demo-network
docker network ls
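As a sketch (the container names here are hypothetical), containers started on hosts configured this way can be attached to the created network and will reach each other by name:

docker run -d --net=demo-network --name=web nginx
docker run -it --net=demo-network --name=client alpine ping web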

Simple clustering

In this section, we will not consider how to create a cluster manually, but will use two tools: Docker Swarm and Kubernetes (from Google) – the most popular and most common solutions. Docker Swarm is simpler, it is part of Docker and therefore has the largest audience (subjectively), while Kubernetes provides much more capabilities, more tool integrations (for example, distributed storage for Volumes), support in popular clouds, and scales more easily to large projects (greater abstraction, a component approach).

Let's consider what a cluster is and what good it will bring us. A cluster is a distributed structure that abstracts independent servers into one logical entity and automates work on:

* in the event of a server crash, moving its containers (creating new ones) to other servers;

* even distribution of containers across servers for fault tolerance;

* creating a container on a server that is suitable in terms of free resources;

* re-deploying a container in case of its failure;

* unified management interface from one point;

* performing operations taking into account server parameters, for example the size and type of disk, and container characteristics specified by the administrator, for example placing containers associated with a single mount point on the same server;

* unification of different servers, for example servers with different OSes, cloud and non-cloud.

We will now move from looking at Docker Swarm to Kubernetes. Both of these systems are orchestration systems, both work with Docker containers (Kubernetes also supports rkt and Containerd), but the interaction between containers is fundamentally different due to the additional Kubernetes abstraction layer – the POD. Both Docker Swarm and Kubernetes manage containers based on IP addresses and distribute them to nodes, inside which everything works through localhost, proxied by a bridge, but unlike Docker Swarm, which presents the user with physical containers, Kubernetes presents the user with logical ones – PODs. A logical Kubernetes container (POD) consists of physical containers, the networking between which occurs through their ports, so their ports must not be duplicated.
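A minimal sketch of such a POD with two containers that communicate over localhost within the shared network space (the image names are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: app
      image: myimage
      ports:
        - containerPort: 8080
    - name: cache
      image: redis
      ports:
        - containerPort: 6379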

Both orchestration systems use an overlay network between host nodes to emulate the presence of the managed units in a single local network space. This type of network is a logical one that uses ordinary TCP/IP networks for transport and is designed to emulate the presence of cluster nodes in a single network, in order to manage the cluster and exchange information between its nodes, even when at the TCP/IP level they cannot be directly connected. The point is that when a developer develops a cluster, he can describe a network for only one node, and when the cluster is deployed, several of its instances are created, and their number can change dynamically; in one network there cannot be three nodes with the same IP address and subnet (for example, 10.0.0.1), and it is wrong to require the developer to specify IP addresses, since it is not known which addresses are free and how many will be required. This network takes over the tracking of the real IP addresses of nodes, which can be allocated randomly from the free ones and change as the nodes in the cluster are re-created, and provides the ability to access them via container IDs / PODs. With this approach, the user refers to specific entities rather than to the dynamics of changing IP addresses. Interaction is carried out via a balancer, which for Docker Swarm is not logically separated, while in Kubernetes it is created as a separate entity, so that a specific implementation can be chosen, like other services. Such a balancer must be present in every cluster, and within the Kubernetes ecosystem it is called a Service. It can be declared either separately, as a Service, or in the description of a cluster entity, for example as part of a Deployment. The service can be accessed by its IP address (see its description) or by its name, which is registered as a first-level domain in the built-in DNS server; for example, if the name of the service specified in the metadata is my_service, then it can be accessed within the cluster like this: curl my_service;. This is a fairly standard solution when the components of a system, along with their IP addresses, change over time (they are re-created, new ones are added, old ones are deleted): traffic is sent through a proxy server, the IP or DNS addresses for the external network remain constant, while the internal ones can change, leaving the proxy server to take care of reconciling them.
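A minimal sketch of such a Service (note that Kubernetes object names must be valid DNS names, so my-service rather than my_service; the selector label and ports are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080

After applying it, other PODs in the cluster can reach the backing PODs simply with curl my-service.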

Both orchestration systems use the Ingress overlay network to provide access to themselves from the external network through a balancer, which matches the internal network with the external one based on the Linux kernel's IP address mapping tables (iptables), separating them and allowing information to be exchanged even if identical IP addresses exist in the internal and external networks. And here, to maintain the connection between these potentially conflicting networks at the IP level, the overlay Ingress network is used. Kubernetes also provides the ability to create a logical entity, an Ingress controller, which allows you to configure the LoadBalancer or NodePort service depending on the traffic content at a level above HTTP, for example routing based on address paths (an application router) or encrypting TLS/HTTPS traffic, as GCP and AWS do.
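A minimal sketch of an Ingress resource with routing based on address paths (the host, paths and backend service names are illustrative assumptions, and an Ingress controller must be installed in the cluster for it to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80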

Kubernetes is the result of the evolution of Google's internal projects: first Borg, then Omega. Based on the experience gained from these experiments, a fairly scalable architecture has developed. Let's highlight the main types of components:

* POD – regular POD;

* ReplicaSet, Deployment – scalable PODs;

* DaemonSet – a POD that is created on each cluster node;

* services (sorted in order of importance): ClusterIP (the default, the basis for the rest), NodePort (redirects ports open in the cluster, for each POD, to ports from the range 30000-32767 for accessing specific PODs from outside), LoadBalancer (NodePort with the ability to create a public IP address for Internet access in public clouds such as AWS and GCP), HostPort (opens the corresponding container ports on the host machine, that is, if port 9200 is open in the container, it will also be open on the host machine to forward traffic) and HostNetwork (the containers in the POD will be in the host's network space); see the NodePort sketch right after this list.
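For example, a hedged NodePort sketch (the label and port values are assumptions; the nodePort must fall within 30000-32767) that exposes a POD on every node of the cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080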

The master contains at least kube-apiserver, kube-scheduler and kube-controller-manager. The slave (worker) node consists of:

* kubelet – checks the health of system components (nodes), creates and manages containers. It is located on each node, accesses the kube-apiserver and corresponds to the node on which it is located.

* cAdvisor – node monitoring.

Let's say we have hosting and have created three AVS servers. Now you need to install Docker and Docker Machine on each server; how to do this was described above. Docker Machine itself creates a virtual machine for Docker containers; we only need to install an internal driver for it – VirtualBox – so as not to install additional packages. Now, of the operations that must be performed on each server, it remains only to create the Docker machines; the rest of the operations for setting up and creating containers on them can be performed from the master node, and they will be automatically launched on free nodes and redistributed when their number changes. So, let's start the Docker machine on the first node:

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-1
docker-machine env swarm-node-1 // tcp://192.168.99.100:2376
eval $(docker-machine env swarm-node-1)

We launch the second node:

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-2
docker-machine env swarm-node-2
eval $(docker-machine env swarm-node-2)

We launch the third node:

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-3
eval $(docker-machine env swarm-node-3)

Let's connect to the first node, initialize the distributed storage in it and pass it the address of the manager (leader) node:

docker-machine ssh swarm-node-1

docker swarm init --advertise-addr 192.168.99.100:2377

docker node ls // will display the current nodes

docker swarm join-token worker

If the tokens are forgotten, they can be obtained by executing the commands docker swarm join-token manager and docker swarm join-token worker on a node with the distributed storage.

To create a cluster, it is necessary to register (join) all its future nodes with the docker swarm join --token … 192.168.99.100:2377 command; a token is used for authentication, and for the nodes to discover each other they must be on the same subnet. You can view all servers with the docker node ls command.

The docker swarm init command will create a cluster leader, for now the only node, and in the response it prints the command needed to connect other nodes to this cluster; the important piece of information in it is the token, for example docker swarm join --token … 192.168.99.100:2377. Connect to the remaining nodes via SSH using the docker-machine ssh node_name command and execute it.

For the interaction of containers, the bridge network is used, which is a switch. But for several replicas to work, a subnet is needed, since all replicas will have the same ports, and proxying is done by IP using the distributed storage, regardless of whether the replicas are physically located on the same server or on different ones. It should be noted right away that balancing is carried out according to the round-robin rule, that is, requests go to each replica in turn. It is important to create an overlay network in order to create DNS on top of it and be able to refer to replicas by name. Let's create a subnet:
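A sketch of such a network and of a service with several replicas attached to it (the network name, subnet and service parameters here are illustrative assumptions, not the book's exact values):

docker network create -d overlay --subnet 10.10.10.0/24 swarm-net
docker service create --name web --replicas 3 --network swarm-net -p 80:80 nginx
docker service ps web // shows on which nodes the replicas were placed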