=Manage Virtual Hosts with docker-machine=

==Introduction==
Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or Digital Ocean. Using docker-machine commands, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.
:[[File:ClipCapIt-180622-224748.PNG]]
When people say "Docker" they typically mean Docker Engine, the client-server application made up of the Docker daemon, a REST API that specifies interfaces for interacting with the daemon, and a command line interface (CLI) client that talks to the daemon (through the REST API wrapper). Docker Engine accepts docker commands from the CLI, such as docker run <image>, docker ps to list running containers, docker image ls to list images, and so on.
'''Docker Machine''' is a tool for provisioning and managing your Dockerized hosts (hosts with Docker Engine on them). Typically, you install Docker Machine on your local system. Docker Machine has its own command line client docker-machine and the Docker Engine client, docker. You can use Machine to install Docker Engine on one or more virtual systems. These virtual systems can be local (as when you use Machine to install and run Docker Engine in VirtualBox on Mac or Windows) or remote (as when you use Machine to provision Dockerized hosts on cloud providers). The Dockerized hosts themselves can be thought of, and are sometimes referred to as, managed "machines".
Source: https://docs.docker.com/machine/overview/<br>
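The day-to-day lifecycle commands mentioned above look like this (a quick sketch; the machine name "default" is just an example):
<pre>
# docker-machine start default      # start the managed host
# docker-machine inspect default    # detailed info about the host (JSON)
# docker-machine stop default       # stop the managed host
# docker-machine restart default    # restart the managed host
# docker-machine upgrade default    # upgrade the Docker client and daemon on the host
</pre>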
==Hypervisor drivers==

===What is a driver===
Machines can be created with the '''docker-machine create''' command. The simplest usage:
<pre>docker-machine create -d <hypervisor driver name> <driver options> <machine name></pre>
The default value of the driver parameter is "virtualbox".
'''docker-machine''' can create and manage virtual hosts on the local machine and on remote clouds. The chosen driver determines where and how the virtual machine will be created. The guest operating system that gets installed on the new machine is also determined by the hypervisor driver. E.g. with the "virtualbox" driver you can create machines locally, using boot2docker as the guest OS.
The driver also determines the virtual network types and interface types that are created inside the virtual machine. E.g. the KVM driver creates two virtual networks (bridges), one host-global and one host-private network.
In the '''docker-machine create''' command, the available driver options are also determined by the driver. You always have to check the available options in the documentation of the driver's vendor. For cloud drivers, typical options are the remote URL, the login name and the password. Some drivers allow changing the guest OS, the CPU count or the default memory.
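For illustration, this is how driver-specific options look with the built-in virtualbox driver (the values are examples; every driver documents its own set of flags):
<pre>
# Create a local VirtualBox machine with 2 CPUs, 2 GB RAM and a 40 GB disk
docker-machine create -d virtualbox \
    --virtualbox-cpu-count 2 \
    --virtualbox-memory 2048 \
    --virtualbox-disk-size 40000 \
    test-machine
</pre>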
Here is a complete list of the currently available drivers: https://github.com/docker/docker.github.io/blob/master/machine/AVAILABLE_DRIVER_PLUGINS.md<br><br>

===KVM driver===
KVM driver home page: https://github.com/dhiltgen/docker-machine-kvm
Minimum parameters:
* --driver kvm
* --kvm-network: The name of the kvm virtual (public) network that we would like to use. If this is not set, the new machine will be connected to the '''"default"''' KVM virtual network.
'''Images''':<br>
By default docker-machine-kvm uses a boot2docker.iso as the guest OS for the kvm hypervisor. It's also possible to use any guest OS image that is derived from boot2docker.iso. For using another image, use the --kvm-boot2docker-url parameter.
'''Dual Network''':<br>
* '''eth1''' - A host private network called docker-machines is automatically created to ensure we always have connectivity to the VMs. The docker-machine ip command will always return this IP address, which is only accessible from your local system.
* '''eth0''' - You can specify any libvirt named network. If you don't specify one, the "default" named network will be used. If you have exotic networking topologies (openvswitch, etc.), you can use virsh edit mymachinename after creation, modify the first network definition by hand, then reboot the VM for the changes to take effect. Typically this would be your "public" network, accessible from external systems. To retrieve the IP address of this network, you can run a command like the following:
<pre>docker-machine ssh mymachinename "ip -one -4 addr show dev eth0|cut -f7 -d' '"</pre>
Driver Parameters:<br>
*--kvm-cpu-count Sets the used CPU cores for the KVM machine. Defaults to 1.
*--kvm-disk-size Sets the kvm machine disk size in MB. Defaults to 20000.
*--kvm-memory Sets the memory of the kvm machine in MB. Defaults to 1024.
*--kvm-network Sets the network of the kvm machine which it should connect to. Defaults to default.
*--kvm-boot2docker-url Sets the URL from which the boot2docker image is loaded. By default it's not set.
*--kvm-cache-mode Sets the caching mode of the kvm machine. Defaults to default.
*--kvm-io-mode Sets the disk IO mode of the kvm machine. Defaults to threads.
<br>
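Putting the above parameters together, a create command for a custom-sized KVM machine might look like this (the sizes and names are illustrative):
<pre>
# Create a KVM machine with 2 CPUs, 2 GB RAM, a 40 GB disk,
# attached to the libvirt network "docker-network"
docker-machine create -d kvm \
    --kvm-cpu-count 2 \
    --kvm-memory 2048 \
    --kvm-disk-size 40000 \
    --kvm-network "docker-network" \
    worker1
</pre>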
==Install software==
First we have to install the docker-machine app itself:
<pre>
base=https://github.com/docker/machine/releases/download/v0.14.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine &&
sudo install /tmp/docker-machine /usr/local/bin/docker-machine
</pre>
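You can verify the installation with the '''version''' subcommand (output shown is what v0.14.0 prints; treat it as illustrative):
<pre>
# docker-machine version
docker-machine version 0.14.0, build 89b8332
</pre>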
Secondly we have to install the hypervisor driver for docker-machine, so that it can create and manage virtual machines running on the hypervisor. As we are going to use the KVM hypervisor, we have to install the "docker-machine-driver-kvm" driver:
<pre>
# curl -Lo docker-machine-driver-kvm \
    https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.7.0/docker-machine-driver-kvm \
  && chmod +x docker-machine-driver-kvm \
  && sudo mv docker-machine-driver-kvm /usr/local/bin
</pre>
We suppose that KVM and libvirt are already installed on the system.
{{tip|If you want to use VirtualBox as your hypervisor, no extra steps are needed, as its docker-machine driver is included in the docker-machine app}}
Available 3rd party drivers: <br>https://github.com/docker/docker.github.io/blob/master/machine/AVAILABLE_DRIVER_PLUGINS.md<br><br>

==Create machines with KVM==
===Create the machine===
Before a new machine can be created with the docker-machine command, the proper KVM virtual network must be created.
See [[KVM#Add_new_networ|How to create KVM networks]] for details.
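A minimal sketch of creating such a network directly with libvirt (the network name, bridge name and address range below are assumptions chosen to match the rest of this example):
<pre>
# cat > docker-network.xml <<'EOF'
<network>
  <name>docker-network</name>
  <forward mode='nat'/>
  <bridge name='virbrDocker' stp='on' delay='0'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254'/>
    </dhcp>
  </ip>
</network>
EOF
# virsh net-define docker-network.xml
# virsh net-start docker-network
# virsh net-autostart docker-network
</pre>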
<pre>
# docker-machine create -d kvm --kvm-network "docker-network" manager
Running pre-create checks...
Creating machine...
(manager) Copying /root/.docker/machine/cache/boot2docker.iso to /root/.docker/machine/machines/manager/boot2docker.iso...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager
</pre>
{{tip|The machine is created under the '''/USER_HOME/.docker/machine/machines/<machine_name>''' directory.
If the new VM was created with the virtualbox driver, the VirtualBox graphical management interface must be started with the same user that the VM was created with, and VirtualBox will discover the new VM automatically}}
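For the machine created above, that directory typically holds the TLS certificates, the SSH key and the machine config (the listing below is illustrative):
<pre>
# ls /root/.docker/machine/machines/manager
boot2docker.iso  ca.pem  cert.pem  config.json  id_rsa  id_rsa.pub  key.pem  server-key.pem  server.pem
</pre>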
===Check what was created===
<br>
====Interfaces on the host====
<pre>
# ifconfig
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.105 netmask 255.255.255.0 broadcast 192.168.0.255
....
virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.42.1 netmask 255.255.255.0 broadcast 192.168.42.255
...
virbrDocker: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.123.1 netmask 255.255.255.0 broadcast 192.168.123.255
inet6 2001:db8:ca2:2::1 prefixlen 64 scopeid 0x0<global>
...
</pre>
On the host, in addition to the regular interfaces, we can see the two bridges of the two virtual networks:
* '''virbrDocker''': That is the virtual network that we created in libvirt. This is connected to the host network with NAT. We assigned these IP addresses, when we defined the network.
* '''virbr1''': That is the host-only virtual network that was created out-of-the-box. This one has no internet access.
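The same two networks can also be listed on the libvirt side (output trimmed; "docker-machines" is the auto-created host-private network, "docker-network" is the one we defined):
<pre>
# virsh net-list
 Name              State    Autostart   Persistent
----------------------------------------------------
 docker-machines   active   yes         yes
 docker-network    active   yes         yes
</pre>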
<br>
====Interfaces of the new VM====
You can log in to the newly created VM with the '''docker-machine ssh <machine_name>''' command.
On the newly created docker-ready VM, four interfaces were created.
<pre>
# docker-machine ssh manager
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 18.05.0-ce, build HEAD : b5d6989 - Thu May 10 16:35:28 UTC 2018
Docker version 18.05.0-ce, build f150324
</pre>
<br>
Check the interfaces of the new VM:
<pre>
docker@manager:~$ ifconfig
docker0 inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
...
eth0 inet addr:192.168.123.195 Bcast:192.168.123.255 Mask:255.255.255.0
...
eth1 inet addr:192.168.42.118 Bcast:192.168.42.255 Mask:255.255.255.0
</pre>
* '''eth0''': 192.168.123.195 - Interface to the new virtual network (docker-network) created by us. This network is connected to the host network, so it has public internet access as well.
* '''eth1''': 192.168.42.118 - This connects to the dynamically created host-only virtual network, just for VM-to-VM communication.
* '''docker0''': 172.17.0.1 - This VM is meant to host docker containers, so the docker daemon was already installed and started on it. From the docker point of view, this VM is also a (docker) host, and therefore the docker daemon created the default virtual bridge that the containers will be connected to, unless specified explicitly otherwise during container creation.
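As noted in the Dual Network section above, the '''docker-machine ip''' command returns the eth1 (host-private) address:
<pre>
# docker-machine ip manager
192.168.42.118
</pre>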
<br>
Inspect the new VM with the '''docker-machine inspect''' command
<pre>
# docker-machine inspect manager
{
"ConfigVersion": 3,
"Driver": {
....
"CPU": 1,
"Network": "docker-network",
"PrivateNetwork": "docker-machines",
"ISO": "/root/.docker/machine/machines/manager/boot2docker.iso",
"...
},
"DriverName": "kvm",
"HostOptions": {
....
},
"SwarmOptions": {
"IsSwarm": false,
...
},
"AuthOptions": {
....
}
},
"Name": "manager"
}
</pre>
<br>
====Routing table====
All the packets that are meant to go to the docker VMs are routed to the bridges:
<pre>
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
...
192.168.42.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr1 <<<<this
192.168.123.0 0.0.0.0 255.255.255.0 U 0 0 0 virbrDocker <<<this
</pre>
<br>
====IPtables modifications====
:[[File:ClipCapIt-180623-010335.PNG|800px]]
<br>
Switches:
* -o, --out-interface name
* -i, --input-interface name
* -s, source IP address
* -d, destination IP address
* -p, Sets the IP protocol for the rule
* -j, jump to the given target/chain
<br>
DNS and DHCP packets from the virtual bridges are allowed to be sent to the host machine.
<pre>
-A INPUT -i virbr1 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr1 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr1 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr1 -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -i virbrDocker -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbrDocker -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbrDocker -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbrDocker -p tcp -m tcp --dport 67 -j ACCEPT
</pre>
<br>
The host machine is allowed to send DHCP packets to the virtual bridges in order to configure them.
<pre>
-A OUTPUT -o virbr1 -p udp -m udp --dport 68 -j ACCEPT
-A OUTPUT -o virbrDocker -p udp -m udp --dport 68 -j ACCEPT
</pre>
<br>
The bridge '''virbrDocker''' can send packets anywhere (first line) and can receive packets back if the connection was previously established (second line)<br>
<pre>
-A FORWARD -d 192.168.123.0/24 -o virbrDocker -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.123.0/24 -i virbrDocker -j ACCEPT
</pre>
<br>
The bridges can send packets to themselves; everything else sent to or from the bridges is rejected.
<pre>
-A FORWARD -i virbrDocker -o virbrDocker -j ACCEPT
-A FORWARD -i virbr1 -o virbr1 -j ACCEPT
#If not accepted above, we reject everything from the two bridges
-A FORWARD -o virbrDocker -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbrDocker -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -o virbr1 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr1 -j REJECT --reject-with icmp-port-unreachable
</pre>
The bridge '''virbrDocker''' can send packets to the outside world. (MASQUERADE is a special SNAT target, where the destination IP doesn't have to be specified. SNAT replaces the source IP address of the packet with the public IP address of our system.) Last two lines: the bridge can't send anything to the multicast and broadcast addresses.
<pre>
-A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
-A POSTROUTING -s 192.168.123.0/24 -d 224.0.0.0/24 -j RETURN
-A POSTROUTING -s 192.168.123.0/24 -d 255.255.255.255/32 -j RETURN
</pre>
<br>

==Manage machines==
===List/Inspect===
With the '''ls''' subcommand we can list all the docker-machine managed hosts.
<pre>
# docker-machine ls
NAME      ACTIVE   DRIVER   STATE     URL                         SWARM   DOCKER        ERRORS
manager   -        kvm      Running   tcp://192.168.42.118:2376           v18.05.0-ce
</pre>
* NAME: the name of the created machine
* ACTIVE: from the Docker client point of view, the active virtual host can be managed with the docker and docker-compose commands from the local host, as if we had executed these commands on the remote virtual host. There can always be only a single active machine, marked with an asterisk '*' in the ls output.
* DRIVER:
* STATE:
* URL: the IP address of the virtual host.
* SWARM:

With the '''inspect <machine name>''' subcommand we can get very detailed information about a specific machine:
<pre>
# docker-machine inspect manager
{
    "ConfigVersion": 3,
    "Driver": {
        "IPAddress": "",
        "MachineName": "manager",
        "SSHUser": "docker",
        "SSHPort": 22,
        "SSHKeyPath": "",
        "StorePath": "/root/.docker/machine",
        "SwarmMaster": false,
        "SwarmHost": "tcp://0.0.0.0:3376",
        "SwarmDiscovery": "",
        "Memory": 1024,
        "DiskSize": 20000,
        "CPU": 1,
        "Network": "docker-network",
        "PrivateNetwork": "docker-machines",
        "ISO": "/root/.docker/machine/machines/manager/boot2docker.iso",
        ...
    },
    "DriverName": "kvm",
    "HostOptions": {
    ...
    },
    "Name": "manager"
}
</pre>
<br>

===Set Active machine===
With our local docker client we can connect to the docker daemon of any of the virtual hosts. The virtual host that we can manage locally is called the "active" host. From the Docker client point of view, the active virtual host can be managed with the '''docker''' and '''docker-compose''' commands from the local host, as if we executed these commands on the (remote) virtual host.

We can make any docker-machine managed virtual host active with the '''docker-machine env <machine name>''' command. Docker gets the connection information from environment variables; with this command we can redirect our docker CLI. Run this command on the host.
<pre>
# docker-machine env manager
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.42.118:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/manager"
export DOCKER_MACHINE_NAME="manager"
# Run this command to configure your shell:
# eval $(docker-machine env manager)
</pre>

As the output of the env command suggests, you have to run the '''eval''' command in the shell that you want to use to manage the active virtual host.
<pre>
# eval $(docker-machine env manager)
</pre>

Now, in the same shell, run the '''ls''' command again. The machine 'manager' will be marked with an asterisk in the ACTIVE column.
<pre>
# docker-machine ls
NAME      ACTIVE   DRIVER   STATE     URL                         SWARM   DOCKER        ERRORS
manager   *        kvm      Running   tcp://192.168.42.118:2376           v18.05.0-ce
</pre>

In the same shell on the host machine, create a docker container:
<pre>
# docker run -d -i -t --name container1 ubuntu /bin/bash
9dfda56f7739831b0d19c8acd95748b1c93f6c6bb82d2aa87cfb10ecee0e4f28
</pre>

Now we will log on to the virtual host with the '''ssh <machine name>''' command.
<pre>
# docker-machine ssh manager
...
Boot2Docker version 18.05.0-ce, build HEAD : b5d6989 - Thu May 10 16:35:28 UTC 2018
Docker version 18.05.0-ce, build f150324
docker@manager:~$
</pre>

List the available docker containers. We should see the newly created '''container1''' there. The '''docker run''' command was executed on the host, but it ran on the remote, virtual host.
<pre>
docker@manager:~$ docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED         STATUS         PORTS   NAMES
9dfda56f7739   ubuntu   "/bin/bash"   5 minutes ago   Up 5 minutes           container1
</pre>
Running the '''docker ps''' command on the host should give the same result.
<br>

===Unset active machine===
The active machine can be unset with the '''--unset''' switch. Once the active docker machine is unset, the docker client will manage the local docker daemon again.
<pre>
# docker-machine env --unset
unset DOCKER_TLS_VERIFY
unset DOCKER_HOST
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
# Run this command to configure your shell:
# eval $(docker-machine env --unset)
</pre>

As the output suggests, we have to run the eval command again with the '''--unset''' switch to clear the shell. Alternatively, you can just start a new shell.
<pre>
# eval $(docker-machine env --unset)
</pre>

Now let's run the '''ps''' command again. As the docker client is now connected to the local docker daemon, we shouldn't see '''container1''' in the list anymore.
<pre>
# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED        STATUS         PORTS   NAMES
66e9cfbbc947   busybox   "sh"      30 hours ago   Up 4 minutes           critcon
</pre>
<br><br><br>

=Services=

==Introduction==
In a distributed application, different pieces of the app are called "services." Services are really just "containers in production." A service only runs '''one''' '''image''', but it codifies the way that image runs: what ports it should use, '''how many replicas''' of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.

Luckily it's very easy to define, run, and scale services with the Docker platform -- just write a docker-compose.yml file.

Source: https://docs.docker.com/get-started/part3/#prerequisites

==YAML==
YAML /'jæm.ḷ/ is a human-readable data serialization language. It is commonly used for configuration files, but could be used in many applications where data is being stored (e.g. debugging output) or transmitted (e.g. document headers). YAML targets many of the same communications applications as XML but has a minimal syntax which intentionally breaks compatibility with SGML [1]. It uses both Python-style indentation to indicate nesting, and a more compact format that uses [] for lists and {} for maps, making YAML 1.2 a superset of JSON. Custom data types are allowed, but YAML natively encodes scalars (such as strings, integers, and floats), lists, and associative arrays (also known as hashes, maps, or dictionaries).
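A small illustration of the two styles mentioned above; the two snippets below encode the same data (block style with indentation vs. the compact flow style using [] and {}):
<pre>
# Block style
service:
  name: wordpress
  ports:
    - 8080
    - 443

# Flow style (the JSON-compatible subset)
service: {name: wordpress, ports: [8080, 443]}
</pre>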
=Docker Composition=

==Introduction==
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you build, create and start all the services from your configuration.

'''Multiple isolated environments on a single host'''<br>
Compose uses a project name to isolate environments from each other. You can make use of this project name in several different contexts:
* on a dev host, to create multiple copies of a single environment, such as when you want to run a stable copy for each feature branch of a project
* on a CI server, to keep builds from interfering with each other, you can set the project name to a unique build number

'''Development environments'''<br>
When you're developing software, the ability to run an application in an isolated environment and interact with it is crucial. The Compose command line tool can be used to create the environment and interact with it.
* The Compose file provides a way to document and configure all of the application's service dependencies (databases, queues, caches, web service APIs, etc). Using the Compose command line tool you can create and start one or more containers for each dependency with a single command (docker-compose up).
* Together, these features provide a convenient way for developers to get started on a project. Compose can reduce a multi-page "developer getting started guide" to a single machine-readable Compose file and a few commands.

'''Automated testing environments'''<br>
An important part of any Continuous Deployment or Continuous Integration process is the automated test suite. Automated end-to-end testing requires an environment in which to run tests. Compose provides a convenient way to create and destroy isolated testing environments for your test suite. By defining the full environment in a Compose file, you can create and destroy these environments in just a few commands.

Source: https://docs.docker.com/compose/overview/

==docker compose vs docker stack (swarm)==
In recent releases, a few things have happened in the Docker world. Swarm mode got integrated into the Docker Engine in 1.12, and has brought with it several new tools. Among others, it's possible to make use of docker-compose.yml files to bring up stacks of Docker containers, without having to install Docker Compose. The command is called docker stack, and it looks the same as docker-compose.

Both docker-compose and the new docker stack commands can be used with docker-compose.yml files which are written according to the specification of version 3. For your version 2 reliant projects, you'll have to continue using docker-compose. If you want to upgrade, it's not a lot of work though.

As docker stack does everything docker compose does, it's a safe bet that docker stack will prevail. This means that docker-compose will probably be deprecated and won't be supported eventually. However, switching your workflows to using docker stack is neither hard nor much overhead for most users. You can do it while upgrading your docker compose files from version 2 to 3 with comparably low effort.

If you're new to the Docker world, or are choosing the technology to use for a new project - by all means, stick to using docker stack deploy.

Source: https://vsupalov.com/difference-docker-compose-and-docker-stack/

==Install==
docker-compose is not part of the standard docker installation. We have to install it from github.
<pre>
sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
</pre>
<pre>
# docker-compose --version
docker-compose version 1.21.2, build a133471
</pre>
<br>

==How to use docker-compose==
I will demonstrate the potential of docker-compose through a simple example. We are going to build a WordPress service that requires two containers: one for the mysql database and one for WordPress itself. To make the example a little bit more complicated, we won't simply download the WordPress image from DockerHub; we are going to build it, using the WordPress image as the base image of our newly built image. So with a simple docker-compose.yml file we can build as many images as we want, and we can construct containers from them in the given order. Isn't it huge?
<pre>
$ mkdir wp-example
$ cd wp-example
$ mkdir wordpress
$ touch docker-compose.yml
</pre>
<pre>
[wp-example]# ll
total 8
-rw-r--r-- 1 root root  148 Jun 23 23:09 docker-compose.yml
drwxr-xr-x 2 root root 4096 Jun 23 22:58 wordpress
</pre>
<pre>
$ cd wordpress
$ touch Dockerfile
$ touch example.html
</pre>
<pre>
[wordpress]# ll
total 8
-rw-r--r-- 1 root root 159 Jun 23 22:58 Dockerfile
-rw-r--r-- 1 root root  18 Jun 23 22:44 example.html
</pre>

Dockerfile:
<syntaxhighlight lang="docker">
FROM wordpress:latest
COPY ["./example.html","/var/www/html/example.html"]
VOLUME /var/www/html
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["apache2-foreground"]
</syntaxhighlight>

docker-compose.yml:
<syntaxhighlight lang="yaml">
version: '3'
services:
  wordpress:
    container_name: my-wordpress-container
    image: myWordPress:6.0
    build: ./wordpress
    links:
      - db:mysql
    ports:
      - 8080:80

  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example
</syntaxhighlight>
{{note|You can use only spaces to make the indentation. Tab is not supported}}

<pre>
[wp-example]# docker-compose up -d
Building wordpress
Step 1/5 : FROM wordpress:latest
 ---> 7801d36d734c
Step 2/5 : COPY ./example.html /var/www/html/example.html
 ---> ab67aee3c270
Removing intermediate container a0894a2e834f
Step 3/5 : VOLUME /var/www/html
 ---> Running in 470025d9c877
 ---> 9890d3cd9f0a
Removing intermediate container 470025d9c877
Step 4/5 : ENTRYPOINT docker-entrypoint.sh
 ---> Running in 09548484b9b2
 ---> 555754d6a3a7
Removing intermediate container 09548484b9b2
Step 5/5 : CMD apache2-foreground
 ---> Running in 035fcfc0876d
 ---> 076e75c72b58
Removing intermediate container 035fcfc0876d
Successfully built 076e75c72b58
Successfully tagged wp2-example_wordpress:latest
WARNING: Image for service wordpress was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating wp2-example_db_1        ... done
Creating wp2-example_wordpress_1 ... done
Attaching to wp2-example_db_1, wp2-example_wordpress_1
</pre>

<pre>
[wp-example]# docker-compose ps
          Name                        Command              State         Ports
---------------------------------------------------------------------------------------
wp2-example_db_1          docker-entrypoint.sh mysqld      Up      3306/tcp
wp2-example_wordpress_1   docker-entrypoint.sh apach ...   Up      0.0.0.0:8080->80/tcp
</pre>

<pre>
# docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED              STATUS              PORTS                  NAMES
8ec2920234b6        wp2-example_wordpress   "docker-entrypoint..."   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp   wp2-example_wordpress_1
786fb7da1ca7        mariadb                 "docker-entrypoint..."   About a minute ago   Up About a minute   3306/tcp               wp2-example_db_1
</pre>
<br>

==docker-compose.yml syntax==
There are three major versions of the compose file format.

===compose only===
* '''build''': Configuration options that are applied at build time
** The object form is allowed in version 2 and up.
** In version 1, using build together with image is not allowed. Attempting to do so results in an error.
** From version 3, if you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image
** Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file. The docker stack command accepts only pre-built images.
*** '''dockerfile''': Alternate Dockerfile
*** '''CONTEXT''': Either a path to a directory containing a Dockerfile, or a url to a git repository.
*** '''TARGET''': Build the specified stage as defined inside the Dockerfile (added in 3.4)
<pre>
version: '3'
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      target: prod
</pre>

* '''container_name''': Specify a custom container name, rather than a generated default name.
{{note|
* If the build and the image are both provided, the image will be created with the name given in the image parameter.
* If the image tag is not provided, the default name of the new image is: <directory name>_<service_name>
* If the container_name is not provided, the container's default name is: <directory name>_<service_name>
}}

* '''external_links''': Link to containers started outside this docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services. external_links follow semantics similar to links when specifying both the container name and the link alias (CONTAINER:ALIAS).
<pre>
external_links:
 - redis_1
 - project_db_1:mysql
 - project_db_1:postgresql
</pre>

* '''network_mode''': Network mode. Use the same values as the docker client --network parameter, plus the special form service:[service name].
<pre>
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
</pre>

* '''depends_on''': You can control the order of service startup with the depends_on option. Compose always starts containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...". However, Compose does not wait until a container is "ready" (whatever that means for your particular application) - only until it's running. There's a good reason for this.

<hr>

===swarm only===
<br>
* '''deploy''': This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
** '''ENDPOINT_MODE''':
*** vip (default): a single virtual IP for the service; Swarm does the load balancing
*** dnsrr (DNS round robin): a DNS query gives the list of the individual swarm nodes, for our own load balancing
** '''MODE''': Either global (exactly one container per swarm node) or replicated (a specified number of containers)
** '''PLACEMENT''':
** '''REPLICAS''': If the service is replicated (which is the default), specify the number of containers that should be running at any given time.
** '''RESOURCES'''
** '''RESTART_POLICY''':
<pre>
version: '3'
services:
  redis:
    image: redis:alpine
    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
</pre>
<br><br>
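As a sketch of how such a file is used (the stack name "mystack" is an example; in swarm mode the compose file is deployed with docker stack and the resulting services are managed with docker service):
<pre>
# Deploy the stack defined in the compose file above
docker stack deploy -c docker-compose.yml mystack

# List the services and their replica counts
docker service ls

# Scale the redis service up from 6 to 10 replicas
docker service scale mystack_redis=10
</pre>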
==Good to know==
Before Docker version 1.12, Swarm (Classic) was a standalone product; it was not part of the docker engine. The swarm had to be created with swarm containers running on the docker engine.
=SWARM=

==Introduction==
A swarm is a group of machines that are running Docker and joined into a cluster. After that has happened, you continue to run the Docker commands you're used to, but now they are executed on a cluster by a swarm manager. The machines in a swarm can be physical or virtual. After joining a swarm, they are referred to as nodes.

Source: https://docs.docker.com/get-started/part4/#introduction

=Swarm Classic VS Swarm mode=
Docker has been innovating at quite a dramatic pace, and focussing on making their technology easier to deploy, and applicable for a wider range of use cases. One of the features that has received the highest level of focus is Clustering/Orchestration. In Docker language, that means Swarm.

Source: https://www.linkedin.com/pulse/docker-swarm-vs-mode-neil-cresswell/

==Swarm classic==
Prior to Docker 1.12, Swarm (Classic) existed as a standalone product, but it relied on external service discovery systems (eg consul) and a dedicated set of containers which ran as the swarm controllers. Load balancing network traffic across containers required external load balancers, and these needed to be integrated with the service discovery to function correctly. Standalone Docker hosts were members of a swarm cluster, and the swarm controllers presented the pooled capacity from all of the hosts as a single "virtual" docker host. Presenting the swarm cluster as a virtual docker host meant that the way you interacted with Swarm was exactly the same way you interacted with a standalone host (run, build, ps, volumes); you just directed the commands at the swarm master (using -H=tcp:<master IP and Port>) instead of at the individual host.

==Swarm mode==
Since releasing Docker 1.12, and embedding Swarm Mode (I really wish they had called it something else to minimise confusion) into the core Docker engine, the functionality and management of swarm has altered dramatically. No longer does the cluster (pool of resources) emulate a virtual docker host, and no longer can you run standard docker engine commands against the swarm cluster; you now need to use specific commands (service create, service inspect, service ps, service ls, service scale etc). If you run Docker engine commands (docker ps), what is returned is a list of containers running on the Docker Swarm master HOST (not the cluster). If you want to interact with the containers that make up a swarm "service", you need to take multiple steps (service ps, to show the containers and which host they are on; then change the focus of your docker commands to that host, connect to that host, and then issue the docker commands to manage the containers on that specific host/swarm member).

The key point of Swarm Mode is that it is an overlay engine for running SERVICES, not Containers. In fact, a service actually comprises a number of tasks, with a task being a container and any commands to execute within the container (but a task might also be a VM in the future).

One of the major enhancements in Swarm mode is the load balancing, which is now built-in; now when you publish a service, exposed ports will automatically be load balanced across the containers (tasks though, remember) that comprise that service. You don't need to configure any additional load balancing. This change makes it incredibly easy to, say for instance, scale an nginx service from 1 worker task (container) to 10.

So, if you are using Swarm mode in Docker 1.12, you need to stop thinking about Containers (and trying to interact with the containers that make up a service) and rather, manage the service and tasks. In Portainer.io, we exhibit the same behaviour as above, so if you click on "containers" you will only see the container

=Docker on Fedora 31=
https://www.reddit.com/r/linuxquestions/comments/dn2psl/upgraded_to_fedora_31_docker_will_not_work/<br>
https://fedoraproject.org/wiki/Changes/CGroupsV2<br>
Fedora 31 introduced CGroupsV2, which docker has not yet adopted, so docker will not work with CGroupsV2; it has to be switched off:

1- vim /etc/default/grub

2- Add systemd.unified_cgroup_hierarchy=0 to the GRUB_CMDLINE_LINUX line:
<pre>
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_localhost--live-swap rd.lvm.lv=fedora_localhost-live/root rd.luks.uuid=luks-42aca868-45a4-438e-8801-bb23145d978d rd.lvm.lv=fedora_localhost-live/swap rhgb quiet systemd.unified_cgroup_hierarchy=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
</pre>

3- Regenerate the grub configuration:
<pre>
# grub2-mkconfig -o /boot/grub2/grub.cfg
</pre>

4- Restart your PC
