[[Docker basic]]

[[Docker Compose]]

[[Docker Machine]]

[[Docker Swarm Classic]]

[[Docker Swarm Mode]]

[[Docker Swarm management]]

[[Docker volume orchestration]]

[[Docker Swarm on AWS]]

[[Stateful load-balancing in swarm]]

[[Centralized logging in swarm]]

[[Metrics and Monitoring in swarm]]

[[Auto-scaling swarm]]

[[Kafka with ELK on swarm]]

[[Java EE application with docker]]<br>
Here we present a typical production Docker architecture with a two-node JBoss cluster, but without swarm.

[[Java EE application with swarm]]<br>
We show how to build a possible production-grade swarm architecture hosting a deployed Java EE application.
  
=Manage VMs with docker-machine=
  
==Introduction==
https://docs.docker.com/machine/overview/
 
Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or Digital Ocean.
 
  
Using docker-machine commands, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.
:[[File:ClipCapIt-180622-224748.PNG]]
 
  
When people say “Docker” they typically mean Docker Engine, the client-server application made up of the Docker daemon, a REST API that specifies interfaces for interacting with the daemon, and a command line interface (CLI) client that talks to the daemon (through the REST API wrapper). Docker Engine accepts docker commands from the CLI, such as docker run <image>, docker ps to list running containers, docker image ls to list images, and so on.
  
'''Docker Machine''' is a tool for provisioning and managing your Dockerized hosts (hosts with Docker Engine on them). Typically, you install Docker Machine on your local system. Docker Machine has its own command line client docker-machine and the Docker Engine client, docker. You can use Machine to install Docker Engine on one or more virtual systems. These virtual systems can be local (as when you use Machine to install and run Docker Engine in VirtualBox on Mac or Windows) or remote (as when you use Machine to provision Dockerized hosts on cloud providers). The Dockerized hosts themselves can be thought of, and are sometimes referred to as, managed “machines”.
  
  
==Install software==
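A minimal sketch of a typical docker-machine installation on Linux (the release version below is only an example; check the docker-machine releases page for the current one). For the KVM examples used on this page, a separate KVM driver plugin (e.g. docker-machine-driver-kvm) must also be installed on the host.

<pre>
# the version is only an example
base=https://github.com/docker/machine/releases/download/v0.16.2
curl -L $base/docker-machine-$(uname -s)-$(uname -m) > /tmp/docker-machine
chmod +x /tmp/docker-machine
mv /tmp/docker-machine /usr/local/bin/docker-machine
docker-machine version
</pre>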
  
  
==Create machines==
===Create the KVM network===
 
Before a new machine can be created with the docker-machine command, the proper KVM virtual network must be created.
 
  
See [[KVM#Add_new_networ|How to create KVM networks]] for details.
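A minimal sketch of such a network definition with virsh (the name "docker-network" and the 192.168.123.0/24 subnet match the values used later on this page; adjust them to your environment):

<pre>
# docker-network.xml
<network>
  <name>docker-network</name>
  <forward mode='nat'/>
  <bridge name='virbrDocker' stp='on' delay='0'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254'/>
    </dhcp>
  </ip>
</network>

# virsh net-define docker-network.xml
# virsh net-start docker-network
# virsh net-autostart docker-network
</pre>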
  
===Create machine===
A machine can be created with the '''docker-machine create''' command.

The simplest usage:
 
<pre>
docker-machine create -d <hypervisor driver name> --<driver options> <machine name>
</pre>
  
* -d: hypervisor driver. Default value: "virtualbox". For KVM use: "kvm".
* --kvm-network: The name of the kvm virtual (public) network that we would like to use. If this is not set, the new machine will be connected to the '''"default"''' KVM virtual network.
  
{{note|Docker will always create a second, isolated virtual network (bridge) called "'''docker-machines'''" that all the VMs will be connected to, regardless of the value of the '''--kvm-network''' parameter, which controls only the name of the "public" network. It seems that the name of the isolated virtual network is hardcoded and can't be changed.}}
 
  
<pre>
 
# docker-machine create -d kvm --kvm-network "docker-network" manager
 
  
Running pre-create checks...
 
Creating machine...
 
(manager) Copying /root/.docker/machine/cache/boot2docker.iso to /root/.docker/machine/machines/manager/boot2docker.iso...
 
Waiting for machine to be running, this may take a few minutes...
 
Detecting operating system of created instance...
 
Waiting for SSH to be available...
 
Detecting the provisioner...
 
Provisioning with boot2docker...
 
Copying certs to the local machine directory...
 
Copying certs to the remote machine...
 
Setting Docker configuration on the remote daemon...
 
Checking connection to Docker...
 
Docker is up and running!
 
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager
 
</pre>
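To actually use the new machine from the host, point the local docker client at it; a typical follow-up (a sketch, not part of the output above):

<pre>
# docker-machine env manager        # prints the DOCKER_HOST / TLS environment variables
# eval $(docker-machine env manager)
# docker ps                         # now talks to the daemon inside the "manager" VM
</pre>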
 
  
{{tip|The machine is created under the '''/USER_HOME/.docker/machine/machines/<machine_name>''' directory.

If the new VM was created with the virtualbox driver, the VirtualBox graphical interface must be started with the same user that created the VM, and VirtualBox will then discover the new VM automatically.}}
 
  
===Check what was created===
 
<br>
 
====Interfaces on the host====
 
<pre>
 
# ifconfig
 
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
 
        inet 192.168.0.105  netmask 255.255.255.0  broadcast 192.168.0.255
 
        ....
 
virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
 
        inet 192.168.42.1  netmask 255.255.255.0  broadcast 192.168.42.255
 
        ...
 
virbrDocker: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
 
        inet 192.168.123.1  netmask 255.255.255.0  broadcast 192.168.123.255
 
        inet6 2001:db8:ca2:2::1  prefixlen 64  scopeid 0x0<global>
 
        ...
 
</pre>
 
On the host, in addition to the regular interfaces, we can see the two bridges of the two virtual networks:

* '''virbrDocker''': The virtual network that we created in libvirt. It is connected to the host network with NAT. We assigned its IP addresses when we defined the network.

* '''virbr1''': The host-only virtual network that was created out-of-the-box. This one has no internet access.
 
 
<br>
 
<br>
  
====Interfaces on the new VM====
 
You can log in to the newly created VM with the '''docker-machine ssh <machine_name>''' command.

On the newly created, Docker-ready VM, four interfaces were created.
 
<pre>
 
# docker-machine ssh manager
 
                        ##        .
 
                  ## ## ##        ==
 
              ## ## ## ## ##    ===
 
          /"""""""""""""""""\___/ ===
 
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
 
          \______ o          __/
 
            \    \        __/
 
              \____\_______/
 
_                _  ____    _            _
 
| |__  ___  ___ | |_|___ \ __| | ___  ___| | _____ _ __
 
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
 
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|  <  __/ |
 
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
 
Boot2Docker version 18.05.0-ce, build HEAD : b5d6989 - Thu May 10 16:35:28 UTC 2018
 
Docker version 18.05.0-ce, build f150324
 
</pre>
 
<br>
 
Check the interfaces of the new VM:
 
<pre>
 
docker@manager:~$ ifconfig
 
docker0  inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
 
          ...
 
eth0      inet addr:192.168.123.195  Bcast:192.168.123.255  Mask:255.255.255.0
 
          ...
 
eth1      inet addr:192.168.42.118  Bcast:192.168.42.255  Mask:255.255.255.0         
 
</pre>
 
* '''eth0''': 192.168.123.195 - Interface to the new virtual network (docker-network) created by us. This network is connected to the host network, so it has public internet access as well.

* '''eth1''': 192.168.42.118 - This connects to the dynamically created host-only virtual network, used only for VM-to-VM communication.

* '''docker0''': 172.17.0.1 - This VM is meant to host docker containers, so the docker daemon was already installed and started on it. From Docker's point of view, this VM is also a (docker) host, and therefore the docker daemon created the default virtual bridge that the containers will be connected to unless another network is explicitly specified during container creation (see the quick check below).
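This can be verified from inside the VM (a quick check; the network IDs are omitted here):

<pre>
docker@manager:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
...                 bridge              bridge              local
...                 host                host                local
...                 none                null                local
</pre>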
 
<br>
 
Inspect the new VM with the '''docker-machine inspect''' command
 
<pre>
 
# docker-machine inspect manager
 
{
 
    "ConfigVersion": 3,
 
    "Driver": {
 
        ....
 
        "CPU": 1,
 
        "Network": "docker-network",
 
        "PrivateNetwork": "docker-machines",
 
        "ISO": "/root/.docker/machine/machines/manager/boot2docker.iso",
 
        "...
 
    },
 
    "DriverName": "kvm",
 
    "HostOptions": {
 
      ....
 
        },
 
        "SwarmOptions": {
 
            "IsSwarm": false,
 
            ...
 
        },
 
        "AuthOptions": {
 
          ....
 
        }
 
    },
 
    "Name": "manager"
 
}
 
</pre>
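Two other handy commands for a quick overview of the managed machines (the values shown reflect this example setup):

<pre>
# docker-machine ls
NAME      ACTIVE   DRIVER   STATE     URL                          SWARM   DOCKER        ERRORS
manager   -        kvm      Running   tcp://192.168.123.195:2376           v18.05.0-ce
# docker-machine ip manager
192.168.123.195
</pre>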
 
<br>
 
  
====Routing table====
 
  
All the packets that are meant to go to the docker VMs are routed to the bridges:

<pre>
# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
...
192.168.42.0    0.0.0.0         255.255.255.0   U     0      0        0 virbr1       <<<< this
192.168.123.0   0.0.0.0         255.255.255.0   U     0      0        0 virbrDocker  <<<< this
</pre>
====IPtables modifications====
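Libvirt implements the NAT of the public virtual network with iptables rules on the host; a quick way to list the ones for our subnet (a sketch: the MASQUERADE line shown is only illustrative, and the exact rule set depends on the libvirt version):

<pre>
# iptables -t nat -S POSTROUTING | grep 192.168.123
-A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
</pre>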


=Docker on Fedora 31=

https://www.reddit.com/r/linuxquestions/comments/dn2psl/upgraded_to_fedora_31_docker_will_not_work/<br>
https://fedoraproject.org/wiki/Changes/CGroupsV2<br>
Fedora 31 introduced cgroups v2, which Docker does not yet support, so Docker will not work with cgroups v2 and it has to be switched off.


1- Edit the GRUB configuration: vim /etc/default/grub

2- Add the option below to the GRUB_CMDLINE_LINUX line: systemd.unified_cgroup_hierarchy=0

<pre>
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_localhost--live-swap rd.lvm.lv=fedora_localhost-live/root rd.luks.uuid=luks-42aca868-45a4-438e-8801-bb23145d978d rd.lvm.lv=fedora_localhost-live/swap rhgb quiet systemd.unified_cgroup_hierarchy=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
</pre>

3- Then regenerate the GRUB configuration (on an EFI system the output file may be /boot/efi/EFI/fedora/grub.cfg instead):

# grub2-mkconfig -o /boot/grub2/grub.cfg

4- Restart your PC.
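After the reboot you can verify that the legacy (v1) cgroup hierarchy is active again (a quick check; on cgroups v2 the same command prints cgroup2fs):

<pre>
# stat -fc %T /sys/fs/cgroup/
tmpfs
</pre>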

=Swarm Classic VS Swarm mode=

Docker has been innovating at quite a dramatic pace, and focussing on making their technology easier to deploy, and applicable for a wider range of use cases. One of the features that has received the highest level of focus is Clustering/Orchestration. In Docker language, that means Swarm.

source: https://www.linkedin.com/pulse/docker-swarm-vs-mode-neil-cresswell/

==Swarm classic==

Prior to Docker 1.12, Swarm (Classic) existed as a standalone product. It relied on a complicated setup of external service discovery systems (e.g. Consul) and a dedicated set of containers which ran as the swarm controllers. Load balancing network traffic across containers required external load balancers, and these needed to be integrated with service discovery to function correctly. Standalone Docker hosts were members of a swarm cluster, and the swarm controllers presented the pooled capacity from all hosts as a single “virtual” docker host. Presenting the swarm cluster as a virtual docker host meant that the way you interacted with Swarm was exactly the same way you interacted with a standalone host (docker run, docker ps, docker images, docker volumes); you just directed the commands (using -H=tcp://) at the swarm master IP:Port instead of at individual swarm nodes.

Before the 1.12 release of Docker, Swarm (Classic) was thus a standalone product and not part of the docker engine itself; the swarm had to be built from swarm containers running on the docker engine.
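For illustration only (the manager address and port below are made up; classic Swarm managers were typically published on a dedicated port such as 3376):

<pre>
docker -H=tcp://192.168.123.10:3376 ps
docker -H=tcp://192.168.123.10:3376 run -d nginx
</pre>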

==Swarm mode==

Since the release of Docker 1.12 and the embedding of Swarm Mode (I really wish they had called it something else to minimise confusion) into the core Docker engine, the functionality and management of swarm has altered dramatically. The cluster (pool of resources) no longer emulates a virtual docker host, and you can no longer run standard docker engine commands against the swarm cluster; you now need to use specific commands (service create, service inspect, service ps, service ls, service scale etc.). If you run Docker engine commands (docker ps), what is returned is a list of containers running on the Docker Swarm master HOST (not the cluster). If you want to interact with the containers that make up a swarm “service”, you need to take multiple steps (service ps to show the containers and which host they are on, then change the focus of your docker commands to that host, connect to that host, and then issue the docker commands to manage the containers on that specific host/swarm member).
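A minimal illustration of this service-level workflow (the service name, image, replica count and port are examples):

<pre>
docker service create --name web --replicas 2 -p 8080:80 nginx
docker service ls
docker service ps web
docker service scale web=10
</pre>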

The key point of SwarmMode is that it is an overlay engine for running SERVICES, not Containers. In fact, a service actually comprises a number of tasks, with a task being a container and any commands to execute within the container (but a task might also be a VM in the future).

One of the major enhancements in Swarm mode is the load balancing, which is now built in; when you publish a service, the exposed ports will automatically be load balanced across the containers (tasks, remember) that comprise that service. You don’t need to configure any additional load balancing. This change makes it incredibly easy to, for instance, scale an nginx service from 1 worker task (container) to 10.

So, if you are using Swarm mode in Docker 1.12, you need to stop thinking about containers (and trying to interact with the containers that make up a service) and instead manage the services and their tasks.

In Portainer.io, we exhibit the same behaviour as above, so if you click on “containers” you will only see the container