Docker

=Manage VMs with docker-machine=
==Introduction==
https://docs.docker.com/machine/overview/

Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or DigitalOcean.
Using docker-machine commands, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host:
[[File:ClipCapIt-180622-224748.PNG]]
When people say "Docker" they typically mean '''Docker Engine''', the client-server application made up of the Docker daemon, a REST API that specifies interfaces for interacting with the daemon, and a command-line interface (CLI) client that talks to the daemon (through the REST API wrapper). Docker Engine accepts docker commands from the CLI, such as docker run <image>, docker ps to list running containers, docker image ls to list images, and so on.
'''Docker Machine''' is a tool for provisioning and managing your Dockerized hosts (hosts with Docker Engine on them). Typically, you install Docker Machine on your local system. Docker Machine has its own command-line client, docker-machine, and the Docker Engine client, docker. You can use Machine to install Docker Engine on one or more virtual systems. These virtual systems can be local (as when you use Machine to install and run Docker Engine in VirtualBox on Mac or Windows) or remote (as when you use Machine to provision Dockerized hosts on cloud providers). The Dockerized hosts themselves can be thought of, and are sometimes referred to as, managed "machines".
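The "configure a Docker client to talk to your host" part works through environment variables. Below is a minimal sketch of the kind of variables that `docker-machine env <machine_name>` prints; the IP address, port, and certificate path are made-up illustrative values, not real output:

```shell
# Illustrative sketch of the variables `docker-machine env manager` emits.
# All values below are made up for demonstration purposes.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.123.195:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/manager"
export DOCKER_MACHINE_NAME="manager"
# Once these are set, a plain `docker ps` talks to the daemon on the VM.
echo "docker client now targets $DOCKER_HOST"
```

In practice you would not export these by hand; you would run eval "$(docker-machine env manager)" and let the tool fill them in.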
==Hypervisor drivers==
===What are drivers===
A machine can be created with the '''docker-machine create''' command. The simplest usage:
<pre>docker-machine create -d <hypervisor driver name> <driver options> <machine name></pre>
The default value of the driver parameter is "virtualbox".
'''docker-machine''' can create and manage VMs on the local host and on remote clouds. The chosen driver always determines where and how the machine will be created. The guest operating system that is installed on the new machine is also determined by the hypervisor driver; e.g. with the "virtualbox" driver you can create machines locally using boot2docker as the guest OS.
The driver also determines the virtual network types and interface types that are created inside the machine. E.g. the KVM driver creates two virtual networks (bridges): one host-global and one host-private network, and the name of the host-private network is hardcoded.
In the '''docker-machine create''' command, the driver options are also determined by the driver. You always have to check the available options at the driver's provider. For cloud drivers, typical options are the remote URL, the login name, and the password. Some drivers allow changing the guest OS, the CPU count, or the default memory.
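As a concrete illustration, driver-specific options carry the driver's name as a prefix. A hypothetical KVM create call with custom resources could look like the following (the machine name worker1 and the option values are made up; the command is only assembled and printed here, not executed):

```shell
# Hypothetical create command using KVM driver options (assembled, not run).
# Option names follow the --kvm-* prefix convention of the KVM driver.
cmd='docker-machine create -d kvm --kvm-memory 2048 --kvm-cpu-count 2 --kvm-network docker-network worker1'
echo "$cmd"
```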
Here is a complete list of the currently available drivers: https://github.com/docker/docker.github.io/blob/master/machine/AVAILABLE_DRIVER_PLUGINS.md
===KVM driver===
Here is the KVM driver home page: https://github.com/dhiltgen/docker-machine-kvm

Minimum parameters:
* '''--driver kvm'''
* '''--kvm-network''': the name of the KVM virtual (public) network that we would like to use. If this is not set, the new machine will be connected to the '''"default"''' KVM virtual network.

'''Images''':<br>
By default docker-machine-kvm uses a boot2docker.iso as the guest OS for the KVM hypervisor. It is also possible to use any guest OS image that is derived from boot2docker.iso. To use another image, set the '''--kvm-boot2docker-url''' parameter.

'''Dual Network''':<br>
* '''eth1''' - A host-private network called docker-machines is automatically created to ensure we always have connectivity to the VMs. The docker-machine ip command will always return this IP address, which is only accessible from your local system.
* '''eth0''' - You can specify any libvirt named network. If you don't specify one, the "default" named network will be used. If you have exotic networking topologies (openvswitch, etc.), you can use virsh edit mymachinename after creation, modify the first network definition by hand, then reboot the VM for the changes to take effect. Typically this would be your "public" network, accessible from external systems. To retrieve the IP address of this network, you can run a command like the following:
<pre>docker-machine ssh mymachinename "ip -one -4 addr show dev eth0|cut -f7 -d' '"</pre>

Driver parameters:<br>
* '''--kvm-cpu-count''' Sets the number of CPU cores of the KVM machine. Defaults to 1.
* '''--kvm-disk-size''' Sets the disk size of the KVM machine in MB. Defaults to 20000.
* '''--kvm-memory''' Sets the memory of the KVM machine in MB. Defaults to 1024.
* '''--kvm-network''' Sets the network which the KVM machine should connect to. Defaults to default.
* '''--kvm-boot2docker-url''' Sets the URL from which the boot image is loaded. By default it's not set.
* '''--kvm-cache-mode''' Sets the caching mode of the KVM machine. Defaults to default.
* '''--kvm-io-mode''' Sets the disk I/O mode of the KVM machine. Defaults to threads.

=Docker on Fedora 31=
https://www.reddit.com/r/linuxquestions/comments/dn2psl/upgraded_to_fedora_31_docker_will_not_work/<br>
https://fedoraproject.org/wiki/Changes/CGroupsV2<br>
Fedora 31 introduced cgroups v2, which Docker has not yet caught up with, so Docker will not work with cgroups v2; it has to be switched off:

1- vim /etc/default/grub

2- Add systemd.unified_cgroup_hierarchy=0 to the GRUB_CMDLINE_LINUX line:
<pre>
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_localhost--live-swap rd.lvm.lv=fedora_localhost-live/root rd.luks.uuid=luks-42aca868-45a4-438e-8801-bb23145d978d rd.lvm.lv=fedora_localhost-live/swap rhgb quiet systemd.unified_cgroup_hierarchy=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
</pre>

3- Regenerate the GRUB config: grub2-mkconfig -o /boot/grub2/grub.cfg

4- Restart your PC
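Step 2 above can also be done non-interactively with sed instead of vim. A sketch, run here against a sample file (on a real Fedora 31 box the target would be /etc/default/grub, and you would still regenerate the GRUB config afterwards):

```shell
# Create a small sample grub defaults file to work on
# (a stand-in for /etc/default/grub in this demonstration).
cat > /tmp/grub-sample <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="rhgb quiet"
EOF
# Append the kernel argument that disables cgroups v2 to the quoted value.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 systemd.unified_cgroup_hierarchy=0"/' /tmp/grub-sample
grep '^GRUB_CMDLINE_LINUX' /tmp/grub-sample
```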
==Install software==
First we have to install the docker-machine app itself:
<pre>
base=https://github.com/docker/machine/releases/download/v0.14.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine &&
sudo install /tmp/docker-machine /usr/local/bin/docker-machine
</pre>
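The curl line above builds the download URL from the output of uname. The following shows what the $(uname -s)-$(uname -m) suffix expands to (the exact suffix depends on the machine; on a 64-bit Linux host it is Linux-x86_64):

```shell
# Show what the release URL from the install snippet expands to on this machine.
base=https://github.com/docker/machine/releases/download/v0.14.0
url="$base/docker-machine-$(uname -s)-$(uname -m)"
echo "$url"
```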
Secondly, we have to install the hypervisor driver so that docker-machine can create and manage virtual machines running on the hypervisor. As we are going to use the KVM hypervisor, we have to install the "docker-machine-driver-kvm" driver:
<pre>
# curl -Lo docker-machine-driver-kvm \
https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.7.0/docker-machine-driver-kvm \
&& chmod +x docker-machine-driver-kvm \
&& sudo mv docker-machine-driver-kvm /usr/local/bin
</pre>
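After both installs, a quick sanity check can confirm that the binaries are visible on the PATH. A sketch (it only reports status, it does not fail):

```shell
# Report whether the two installed binaries are visible on the PATH.
status=""
for bin in docker-machine docker-machine-driver-kvm; do
  if command -v "$bin" >/dev/null 2>&1; then
    status="$status$bin: OK\n"
  else
    status="$status$bin: missing\n"
  fi
done
printf "%b" "$status"
```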
We assume that KVM and libvirt are already installed on the system.
{{tip|If you want to use VirtualBox as your hypervisor, no extra steps are needed, as its docker-machine driver is included in the docker-machine app}}
Available 3rd party drivers: <br>
https://github.com/docker/docker.github.io/blob/master/machine/AVAILABLE_DRIVER_PLUGINS.md
<br>
<br>
==Create machines with KVM==
===Create the machine===
Before a new machine can be created with the docker-machine command, the proper KVM virtual network must be created.
See [[KVM#Add_new_networ|How to create KVM networks]] for details.
<pre>
# docker-machine create -d kvm --kvm-network "docker-network" manager
Running pre-create checks...
Creating machine...
(manager) Copying /root/.docker/machine/cache/boot2docker.iso to /root/.docker/machine/machines/manager/boot2docker.iso...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager
</pre>
{{tip|The machine is created under the '''/USER_HOME/.docker/machine/machines/<machine_name>''' directory. If the new VM was created with the virtualbox driver, the VirtualBox graphical interface must be started with the same user that the VM was created with, and VirtualBox will discover the new VM automatically}}

===Check what was created===
====Interfaces on the host====
<pre>
# ifconfig
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.105  netmask 255.255.255.0  broadcast 192.168.0.255
....
virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.42.1  netmask 255.255.255.0  broadcast 192.168.42.255
...
virbrDocker: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.123.1  netmask 255.255.255.0  broadcast 192.168.123.255
        inet6 2001:db8:ca2:2::1  prefixlen 64  scopeid 0x0<global>
...
</pre>
On the host, beside the regular interfaces, we can see the two bridges of the two virtual networks:
* '''virbrDocker''': the virtual network that we created in libvirt. It is connected to the host network with NAT. We assigned these IP addresses when we defined the network.
* '''virbr1''': the host-only virtual network that was created out-of-the-box. This one has no internet access.<br>

====Interfaces of the new VM====
You can log in to the newly created VM with the '''docker-machine ssh <machine_name>''' command. On the newly created docker-ready VM, four interfaces were created.
<pre>
# docker-machine ssh manager
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 18.05.0-ce, build HEAD : b5d6989 - Thu May 10 16:35:28 UTC 2018
Docker version 18.05.0-ce, build f150324
</pre>
<br>Check the interfaces of the new VM:
<pre>
docker@manager:~$ ifconfig
docker0   inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
...
eth0      inet addr:192.168.123.195  Bcast:192.168.123.255  Mask:255.255.255.0
...
eth1      inet addr:192.168.42.118  Bcast:192.168.42.255  Mask:255.255.255.0
</pre>
* '''eth0''': 192.168.123.195 - Interface to the new virtual network (docker-network) created by us. This network is connected to the host network, so it has public internet access as well.
* '''eth1''': 192.168.42.118 - This connects to the dynamically created host-only virtual network, for VM-to-VM communication.
* '''docker0''': 172.17.0.1 - This VM is meant to host docker containers, so the docker daemon was already installed and started on it.

From the docker point of view, this VM is also a docker (container) host, therefore the docker daemon created the default virtual bridge that the containers will be connected to unless specified otherwise during container creation.<br>
Inspect the new VM with the '''docker-machine inspect''' command:
<pre>
# docker-machine inspect manager
{
    "ConfigVersion": 3,
    "Driver": {
        ....
        "CPU": 1,
        "Network": "docker-network",
        "PrivateNetwork": "docker-machines",
        "ISO": "/root/.docker/machine/machines/manager/boot2docker.iso",
        ...
    },
    "DriverName": "kvm",
    "HostOptions": {
        ....
        "SwarmOptions": {
            "IsSwarm": false,
            ...
        },
        "AuthOptions": {
            ....
        }
    },
    "Name": "manager"
}
</pre>

====Routing table====
All the packets that are meant to go to the docker VMs are routed to the bridges:
<pre>
# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
...
192.168.42.0    0.0.0.0         255.255.255.0   U     0      0        0 virbr1       <<<<this
192.168.123.0   0.0.0.0         255.255.255.0   U     0      0        0 virbrDocker  <<<this
</pre>

====IPtables modifications====

=Swarm Classic VS Swarm mode=
source: https://www.linkedin.com/pulse/docker-swarm-vs-mode-neil-cresswell/

Docker has been innovating at quite a dramatic pace, focusing on making their technology easier to deploy and applicable to a wider range of use cases. One of the features that has received the highest level of focus is clustering/orchestration. In Docker language, that means Swarm.

==Swarm classic==
Prior to Docker 1.12, Swarm (Classic) existed as a standalone product; it was not part of the Docker Engine. It relied on a complicated setup of external service discovery systems (e.g. Consul) and a dedicated set of containers which ran as the swarm controllers. Load balancing network traffic across containers required external load balancers, and these needed to be integrated with service discovery to function correctly. Standalone Docker hosts were members of a swarm cluster, and the swarm controllers presented the pooled capacity from all hosts as a single "virtual" docker host. Presenting the swarm cluster as a virtual docker host meant that the way you interacted with Swarm was exactly the same way you interacted with a standalone host (docker run, docker ps, docker images, docker volumes); you just directed the commands (using -H=tcp://) at the swarm master IP:Port instead of individual swarm nodes.

==Swarm mode==
Since releasing Docker 1.12, and embedding Swarm Mode (I really wish they had called it something else to minimise confusion) into the core Docker engine, the functionality and management of swarm has altered dramatically. No longer does the cluster (pool of resources) emulate a virtual docker host, and no longer can you run standard docker engine commands against the swarm cluster; you now need to use specific commands (service create, service inspect, service ps, service ls, service scale, etc.). If you run Docker engine commands (docker ps), what is returned is a list of containers running on the Docker Swarm master host (not the cluster). If you want to interact with the containers that make up a swarm "service", you need to take multiple steps (service ps, to show the containers and which host they are on, then change the focus of your docker commands to that host, connect to that host, and then issue the docker commands to manage the containers on that specific host/swarm member).

The key point of Swarm Mode is that it is an overlay engine for running SERVICES, not containers. In fact, a service actually comprises a number of tasks, with a task being a container and any commands to execute within the container (but a task might also be a VM in the future).

One of the major enhancements in Swarm mode is load balancing, which is now built-in; when you publish a service, exposed ports will automatically be load balanced across the containers (tasks, remember) that comprise that service. You don't need to configure any additional load balancing. This change makes it incredibly easy to, for instance, scale a nginx service from 1 worker task (container) to 10.

So, if you are using Swarm mode in Docker 1.12, you need to stop thinking about containers (and trying to interact with the containers that make up a service) and rather manage the services and tasks.

In Portainer.io, we exhibit the same behaviour as above, so if you click on "containers" you will only see the container
