Docker

Revision as of 18:26, 22 June 2018 by Adam (talk | contribs) (Interface list of the new VM)

Contents

Manage VMs with docker-machine

Introduction

Install software

Create machines

Create the KVM network

Before a new machine can be created with the docker-machine command, the proper KVM virtual network must be created.

See How to create KVM networks for details.
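As a quick reference, a libvirt NAT network can be defined from an XML file with virsh. The sketch below is an assumption based on the network name ("docker-network") and the 192.168.123.0/24 address range that appear later on this page; adjust names and addresses to your setup.

```shell
# Sketch: define, start, and autostart a NAT network for docker-machine.
cat > docker-network.xml <<'EOF'
<network>
  <name>docker-network</name>
  <forward mode='nat'/>
  <bridge name='virbrDocker' stp='on' delay='0'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define docker-network.xml
virsh net-start docker-network
virsh net-autostart docker-network
```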

Create machine

A machine can be created with the docker-machine create command. Simplest usage:

docker-machine create -d <hypervisor driver name> --<hypervisor options> <machine name>
  • -d: hypervisor driver. Default value: "virtualbox". For KVM use: "kvm".
  • --kvm-network: The name of the KVM virtual network that we would like to use. If this is not set, the new machine will be connected to the "default" KVM virtual network.

Note
Even with the --kvm-network parameter provided, two new interfaces are created for every new VM:

  • one for the virtual network, described with the --kvm-network parameter
  • docker-machine creates a second, isolated virtual network called "docker-machines", to which the new VM is also connected


# docker-machine create -d kvm --kvm-network "docker-network" manager

Running pre-create checks...
Creating machine...
(manager) Copying /root/.docker/machine/cache/boot2docker.iso to /root/.docker/machine/machines/manager/boot2docker.iso...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager
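Following the hint in the last line above, the local docker client can be pointed at the daemon running inside the VM. A minimal sketch, assuming the machine is named "manager" as created above:

```shell
# Print the environment variables that point the docker client at the VM:
docker-machine env manager
# Load them into the current shell, then talk to the remote daemon:
eval "$(docker-machine env manager)"
docker info
```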

Tip
The machine is created under the /USER_HOME/.docker/machine/machines/<machine_name> directory.

If the new VM was created with the virtualbox driver, the VirtualBox graphical interface must be started as the same user that created the VM; VirtualBox will then discover the new VM automatically.


Check what was created


Interface list on the host

# ifconfig
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.105  netmask 255.255.255.0  broadcast 192.168.0.255
        ....
virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.42.1  netmask 255.255.255.0  broadcast 192.168.42.255
        ...
virbrDocker: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.123.1  netmask 255.255.255.0  broadcast 192.168.123.255
        inet6 2001:db8:ca2:2::1  prefixlen 64  scopeid 0x0<global>
        ...

On the host, in addition to the regular interfaces, we can see the two bridges for the two virtual networks:

  • virbrDocker: the virtual network that we created in libvirt. It is connected to the host network with NAT. We assigned these IP addresses when we defined the network.
  • virbr1: the host-only virtual network that was created out of the box. This one has no internet access.


Interface list of the new VM

You can log in to the newly created VM with the docker-machine ssh <machine_name> command:

# docker-machine ssh manager
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 18.05.0-ce, build HEAD : b5d6989 - Thu May 10 16:35:28 UTC 2018
Docker version 18.05.0-ce, build f150324

Check the interfaces of the new VM:

docker@manager:~$ ifconfig
docker0   inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          ...
eth0      inet addr:192.168.123.195  Bcast:192.168.123.255  Mask:255.255.255.0
          ...
eth1      inet addr:192.168.42.118  Bcast:192.168.42.255  Mask:255.255.255.0           
  • eth0: 192.168.123.195 - interface on the virtual network (docker-network) that we created; this network has public internet access
  • eth1: 192.168.42.118 - connects to the dynamically created host-only virtual network ("docker-machines"); used only for VM-to-VM communication
  • docker0: 172.17.0.1 - the default Docker bridge inside the VM, to which containers started on this machine are attached


Routing table

All the packets that are meant to go to the docker VMs are routed to the bridges:

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
...
192.168.42.0    0.0.0.0         255.255.255.0   U     0      0        0 virbr1       <<< this
192.168.123.0   0.0.0.0         255.255.255.0   U     0      0        0 virbrDocker  <<< this


IPtables modifications