Docker

[[Docker basic]]
[[Docker Compose]]
[[Docker Machine]]
[[Docker Swarm Classic]]
[[Docker Swarm Mode]]
[[Docker Swarm management]]
[[Docker volume orchestration]]
[[Docker Swarm on AWS]]
[[Stateful load-balancing in swarm]]
[[Centralized logging in swarm]]
[[Metrics and Monitoring in swarm]]
[[Auto-scaling swarm]]
[[Kafka with ELK on swarm]]
[[Java EE application with docker]]<br>Here we present a typical production Docker architecture with a two-node JBoss cluster, but without swarm.
[[Java EE application with swarm]]<br>We show how to build a possible production-grade swarm architecture with a deployed Java EE application.

=Docker on Fedora 31=
https://www.reddit.com/r/linuxquestions/comments/dn2psl/upgraded_to_fedora_31_docker_will_not_work/<br>
https://fedoraproject.org/wiki/Changes/CGroupsV2<br>
Fedora 31 introduced CGroupsV2, which Docker has not yet adopted, so Docker will not work with CGroupsV2; it has to be switched off:

1. vim /etc/default/grub<br>
2. Add '''systemd.unified_cgroup_hierarchy=0''' to the GRUB_CMDLINE_LINUX line:
<pre>
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_localhost--live-swap rd.lvm.lv=fedora_localhost-live/root rd.luks.uuid=luks-42aca868-45a4-438e-8801-bb23145d978d rd.lvm.lv=fedora_localhost-live/swap rhgb quiet systemd.unified_cgroup_hierarchy=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
</pre>
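The manual edit of /etc/default/grub described above can also be scripted. A minimal sketch with sed, run here against a sample copy rather than the real file (the sample content and the /tmp path are assumptions for illustration; back up the real file first):

```shell
# Create a small sample of /etc/default/grub to work on
cat > /tmp/grub.sample <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
EOF

# Append systemd.unified_cgroup_hierarchy=0 inside the GRUB_CMDLINE_LINUX quotes
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 systemd.unified_cgroup_hierarchy=0"/' /tmp/grub.sample

grep GRUB_CMDLINE_LINUX /tmp/grub.sample
# GRUB_CMDLINE_LINUX="rhgb quiet systemd.unified_cgroup_hierarchy=0"
```

On a real Fedora 31 host you would run the same sed against /etc/default/grub and then continue with the grub2-mkconfig step below.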
3. Run '''grub2-mkconfig'''<br>
4. Restart your PC

=Manage VMs with docker-machine=

==Introduction==

==Install software==

==Create machines==

===Create the KVM network===
Before a new machine can be created with the docker-machine command, the proper KVM virtual network must be created.
See [[KVM#Add_new_networ|How to create KVM networks]] for details.

===Create machine===
Machines can be created with the '''docker-machine create''' command. Most simple usage:
<pre>
docker-machine create -d <hypervisor driver name> <hypervisor options> <machine name>
</pre>
* '''-d''': hypervisor driver. Default value: "virtualbox". For KVM use "kvm".
* '''--kvm-network''': the name of the KVM virtual network that we would like to use. If this is not set, the new machine will be connected to the '''"default"''' KVM virtual network.
{{note|Even with the --kvm-network parameter provided, two new interfaces are created for every new VM:
* one for the virtual network given with the --kvm-network parameter
* docker-machine also creates a second, isolated virtual network called "'''docker-machines'''", which the new VM is connected to as well}}
<pre>
# docker-machine create -d kvm --kvm-network "docker-network" manager
Running pre-create checks...
Creating machine...
(manager) Copying /root/.docker/machine/cache/boot2docker.iso to /root/.docker/machine/machines/manager/boot2docker.iso...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager
</pre>
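The last line of the output suggests running '''docker-machine env manager''' to point the Docker client at the new VM. A sketch of how its output is consumed: on a real system you would simply run <code>eval "$(docker-machine env manager)"</code>; here we eval a sample of what the command prints (bash export format; the IP matches the eth0 address seen later in this walkthrough, the cert path is an assumption):

```shell
# Sample output of `docker-machine env manager` (values are examples)
env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.123.195:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/manager"
export DOCKER_MACHINE_NAME="manager"'

# eval-ing it configures the docker client in this shell to talk to the VM
eval "$env_output"

echo "$DOCKER_HOST"   # tcp://192.168.123.195:2376
```

After the eval, every plain <code>docker</code> command in that shell is executed against the remote daemon on the "manager" machine.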
{{tip|The machine is created under the '''USER_HOME/.docker/machine/machines/<machine_name>''' directory. If the new VM was created with the virtualbox driver, the VirtualBox graphical interface must be started with the same user that the VM was created with, and VirtualBox will discover the new VM automatically}}

===Check what was created===

====Check the interface list of the host====
On the host, beyond the regular interfaces, we can see the two bridges for the two virtual networks:
* '''virbrDocker''': the virtual network that we created in libvirt. It is connected to the host network with NAT. We assigned the IP addresses when we defined the network.
* '''virbr1''': the host-only virtual network that was created out-of-the-box. This one has no internet access.
<pre>
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.105  netmask 255.255.255.0  broadcast 192.168.0.255
....
virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.42.1  netmask 255.255.255.0  broadcast 192.168.42.255
...
virbrDocker: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.123.1  netmask 255.255.255.0  broadcast 192.168.123.255
        inet6 2001:db8:ca2:2::1  prefixlen 64  scopeid 0x0<global>
..
</pre>

====Login to the new machine====
<pre>
# docker-machine ssh manager
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 18.05.0-ce, build HEAD : b5d6989 - Thu May 10 16:35:28 UTC 2018
Docker version 18.05.0-ce, build f150324
</pre>

Check the interfaces of the new VM:
<pre>
docker@manager:~$ ifconfig
docker0   inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
...
eth0      inet addr:192.168.123.195  Bcast:192.168.123.255  Mask:255.255.255.0
...
eth1      inet addr:192.168.42.118  Bcast:192.168.42.255  Mask:255.255.255.0
</pre>

=Swarm Classic VS Swarm mode=
Docker has been innovating at quite a dramatic pace, focusing on making their technology easier to deploy and applicable to a wider range of use cases. One of the features that has received the highest level of focus is Clustering/Orchestration. In Docker language, that means Swarm.

source: https://www.linkedin.com/pulse/docker-swarm-vs-mode-neil-cresswell/

==Swarm classic==
Before Docker version 1.12, Swarm (Classic) was a standalone product; it was not part of the Docker engine, and the swarm had to be created with swarm containers running on the Docker engine. It relied on a complicated setup of external service discovery systems (e.g. Consul) and a dedicated set of containers which ran as the swarm controllers. Load balancing network traffic across containers required external load balancers, and these needed to be integrated with service discovery to function correctly. Standalone Docker hosts were members of a swarm cluster, and the swarm controllers presented the pooled capacity of all hosts as a single "virtual" Docker host. Presenting the swarm cluster as a virtual Docker host meant that you interacted with Swarm exactly the same way you interacted with a standalone host (docker run, docker ps, docker images, docker volumes); you just directed the commands (using -H=tcp://) at the swarm master IP:port instead of at individual swarm nodes.

==Swarm mode==
Since releasing Docker 1.12 and embedding Swarm Mode (I really wish they had called it something else to minimise confusion) into the core Docker engine, the functionality and management of swarm have altered dramatically. The cluster (pool of resources) no longer emulates a virtual Docker host, and you can no longer run standard Docker engine commands against the swarm cluster; you now need to use dedicated commands (service create, service inspect, service ps, service ls, service scale, etc.). If you run Docker engine commands (docker ps), what is returned is a list of containers running on the Docker Swarm master HOST (not on the cluster). If you want to interact with the containers that make up a swarm "service", you need to take multiple steps: run service ps to show the containers and which host they are on, then change the focus of your Docker commands to that host, connect to it, and issue the Docker commands to manage the containers on that specific host/swarm member.

The key point is that Swarm Mode is an overlay engine for running SERVICES, not containers. A service actually comprises a number of tasks, a task being a container and any commands to execute within the container (but a task might also be a VM in the future).

One of the major enhancements in Swarm mode is the load balancing, which is now built in: when you publish a service, exposed ports will automatically be load balanced across the containers (tasks, though, remember) that comprise that service. You don't need to configure any additional load balancing. This change makes it incredibly easy to, say for instance, scale an nginx service from 1 worker task (container) to 10.

So, if you are using Swarm mode in Docker 1.12, you need to stop thinking about containers (and trying to interact with the containers that make up a service) and rather manage the service and tasks. In Portainer.io, we exhibit the same behaviour as above, so if you click on "containers" you will only see the containers.
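A hypothetical swarm-mode session illustrating the service-level commands discussed above (the service name "web" and the published port are examples, not from the original; this is a sketch of the standard docker CLI, to be run on a swarm manager node):

```shell
# Turn this engine into a single-node swarm manager
docker swarm init

# Create a service: swarm schedules its tasks (containers) across the nodes,
# and the published port 8080 is load balanced across all tasks automatically
docker service create --name web --publish 8080:80 --replicas 1 nginx

docker service ls        # lists services, not containers
docker service ps web    # shows the tasks and which node each one runs on
docker service scale web=10   # scale from 1 task to 10 with one command
```

Note how none of these are plain container commands: you describe the desired state of the service and the swarm converges to it, instead of you starting containers by hand.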
