April 08, 2017

Docker Compose

Docker Compose

Particularly with multi-tier applications, Dockerfiles and runtime commands become increasingly complex. Docker Compose is a tool that streamlines the definition and instantiation of multi-tier, multi-container Docker applications. Compose requires only a single configuration file and a single command to organize and spin up the application stack.

Docker Compose simplifies the containerization of a multi-tier, multi-container application, which can be stitched together using the docker-compose.yml configuration file and the docker-compose command to provide a single application service.

The Compose file provides a way to document and configure all of the application’s service dependencies (databases, queues, caches, web service APIs, etc.)
  • Docker Compose defines and runs complex services:
    • define single containers via Dockerfile
    • describe a multi-container application via single configuration file (docker-compose.yml)
    • manage application stack via a single binary (docker-compose up)
    • link services through Service Discovery
  • The Docker Compose configuration file specifies the services, networks, and volumes to run:
    • services - the equivalent of passing command-line parameters to docker run
    • networks - analogous to definitions from docker network create
    • volumes - analogous to definitions from docker volume create
  • The Compose configuration file is a YAML declarative file format:
    • YAML Ain’t Markup Language (YAML)
    • YAML philosophy is that "When data is easy to view and understand, programming becomes a simpler task"
    • human-friendly and compatible with modern programming languages for common tasks
    • Minimal structure for maximum data:
      • indentation is used for structure
      • colons separate key: value pairs
      • dashes are used to create “bullet” lists
      version: "3"
            services:
                web:
                     build: .
                     volumes:
                         - web-data:/var/www/data
                redis:
                     image: redis:alpine
                     ports:
                          - "6379"
                     networks:
                          - default
  • Docker Compose file, docker-compose.yml:
    • describes the services, networks, and volumes of the application stack
    • document and configure the application’s service dependencies (databases, queues, caches, web service APIs, etc.)
    • Use one or more -f flags to specify the Compose configuration file(s). Without the -f flag, the current directory is searched for a docker-compose.yml file (see the example after this list).
  • Command examples:
  • docker-compose up - launches all containers
    docker-compose stop - stops all containers
    docker-compose kill - kills all containers
    docker-compose exec <service> <command> - executes a command in a running service container
  • Enhances security and manageability by moving docker run commands to a YAML file
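For illustration, a minimal Compose workflow against the docker-compose.yml shown above might look like the following (the override file name is an assumption for the multi-file example):

    $ docker-compose up -d                        # build and start the whole stack in the background
    $ docker-compose ps                           # list the running services
    $ docker-compose exec redis redis-cli ping    # run a command inside the redis service container
    $ docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d    # combine multiple Compose files with -f
    $ docker-compose stop                         # stop all containers in the stack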

 

April 02, 2017

Docker Swarm

Docker Swarm

Docker Swarm is a clustering tool that allows the management of a set of Docker Hosts as if they were a single Docker Host. Most of the familiar Docker tools, APIs and services can be used in Docker Swarm, enabling scaling of the Docker ecosystem.
  • Native cluster management and orchestration of Docker Engines (nodes)
  • Runs distributed applications (containers) across multiple Docker hosts as if they were a single host
  • A node is an instance of the Docker engine participating in the swarm
    • Two types of Docker nodes:
      • Manager
        • deploys applications to the swarm
        • dispatches tasks (units of work) to worker nodes
        • performs the orchestration and cluster management functions
      • Worker
        • receives and executes tasks dispatched from manager nodes
        • runs an agent that reports on the state of its tasks to the manager node
    • A service is the definition of the tasks to execute on the worker nodes
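A minimal sketch of these roles in practice (the IP address, token, and service name are placeholders):

    $ docker swarm init --advertise-addr <managerIP>                    # run on the manager node
    $ docker swarm join --token <workerToken> <managerIP>:2377          # run on each worker node
    $ docker service create --name web --replicas 3 -p 80:80 nginx      # the manager dispatches three tasks to the workers
    $ docker node ls                                                    # list manager and worker nodes
    $ docker service ps web                                             # show which node runs each task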

Physical Network Requirements

Physical Design Requirements:

  • the Docker built-in network drivers have NO requirements for:
    • Multicast
    • External key-value stores
    • Specific routing protocols
    • Layer 2 adjacencies between hosts
    • Specific topologies such as spine and leaf, traditional 3-tier, and PoD designs
  • This is in line with the Container Networking Model which promotes application portability across all environments while still achieving the performance and policy required of applications.

Network Scope

Network Scope

  • The Docker network driver concept of scope is the domain of the driver, which can be either local or swarm
    • Local scope drivers provide connectivity and network services within the scope of the host
    • Swarm scope drivers provide connectivity and network services across a swarm cluster
  • Swarm scope networks will have the same network ID across the entire cluster
  • Local scope networks will have a unique network ID on each host.
  • Scope is identified via the docker network ls command:
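    Illustrative output (the IDs are placeholders); the SCOPE column distinguishes local from swarm networks, and the ingress overlay appears only when the host participates in a swarm:

        $ docker network ls
        NETWORK ID     NAME      DRIVER    SCOPE
        0a1b2c3d4e5f   bridge    bridge    local
        1b2c3d4e5f6a   host      host      local
        2c3d4e5f6a7b   none      null      local
        3d4e5f6a7b8c   ingress   overlay   swarm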


Plug-In Network Drivers

Plug-In Network Drivers:

Plug-In Network Drivers are network drivers created by users, the community and other vendors to provide integration with incumbent software and hardware and add specific functionality. Network driver plugins are supported via the LibNetwork project.
  • User-Defined Network
    • You can create a new bridge network that is isolated from the host's default bridge network (see the example after this list)
  • Community- and vendor-created
    • These are network drivers created by third-party vendors or the community
    • Enables integration with incumbent software and hardware
    • Can be used to provide functionality not available in standard or existing network drivers
    • E.g. Weave Network Plugin - a network plugin that creates a virtual network that connects your Docker containers across hosts or clouds
  • IPAM Drivers
    • IP Address Management (IPAM) Driver
    • Built-in or Plug-in IPAM drivers
    • provides default subnets or IP addresses for Networks and Endpoints if they are not specified
    • IP addressing can be manually created/assigned
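The user-defined bridge network example referenced above, with an optional manually specified subnet for IPAM (the network names and addresses are illustrative):

    $ docker network create --driver bridge my_bridge                                              # isolated from the default docker0 bridge
    $ docker network create --driver bridge --subnet 10.10.0.0/24 --gateway 10.10.0.1 my_bridge2   # manual subnet/gateway instead of IPAM defaults
    $ docker run -d --net my_bridge --name web nginx                                               # attach a container to the user-defined network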

Overlay & Underlay Drivers

Overlay and Underlay Drivers

  • Overlay
  • Overlay network driver creates networking tunnels to enable communication across hosts. Containers on this network behave as if they are on the same machine by tunneling network subnets from one host to the next. It spans one network across multiple hosts. Several tunneling technologies are supported, e.g. virtual extensible local area network (VXLAN)
    • The overlay driver creates an overlay network that supports multi-host networks
    • uses a combination of local Linux bridges and VXLAN to overlay container-to-container communications over physical network infrastructure
    • utilizes an industry-standard VXLAN data plane that decouples the container network from the underlying physical network (the underlay)
    • encapsulates container traffic in a VXLAN header which allows the traffic to traverse the physical Layer 2 or Layer 3 network
    • Created when a Swarm is instantiated
  • Underlay
  • Underlay network drivers expose host interfaces, e.g. eth0, directly to containers running on the host. An example of an underlay driver is the Media Access Control virtual local area network (MACvlan).
    • Allows direct connection to the host's physical interface
    • MACvlan eliminates the need for the Linux bridge, NAT and port-mapping
    • The MACvlan establishes a connection between container interfaces and the host interface (or sub-interfaces)
    • used to provide IP addresses to containers that are routable on the physical network
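Hedged examples of creating each type (the interface name, subnets, and network names are assumptions):

    $ docker network create --driver overlay my_overlay        # run on a swarm manager; the network spans all swarm nodes
    $ docker network create --driver macvlan --subnet 192.168.1.0/24 --gateway 192.168.1.1 -o parent=eth0 my_macvlan
                                                                # containers receive addresses routable on the physical network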

None Network Driver

None Network Driver

With the none network driver, the container's network stack has only a loopback interface; it has no external network interface and therefore cannot communicate outside the container.
  • The none network driver is an unmanaged networking option
    • Docker Engine will not:
      • create interfaces inside the container
      • establish port mapping
      • install routes for connectivity
    • the none driver will create a separate namespace for each container
      • This guarantees container network isolation between any containers and the host
  • The none driver gives a container its own networking stack and network namespace but does not configure interfaces inside the container
  • Only the local loopback address is available
  • I/O is performed through files or STDIN and STDOUT only
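A quick way to observe this behaviour (the image choice is arbitrary):

    $ docker run --rm --net none alpine ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> ...       # only the loopback interface is present
        inet 127.0.0.1/8 scope host lo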

Bridge Network Driver

Bridge Network Driver

The bridge network driver provides a single-host network on top of which containers may communicate. The IP addresses in the pool are private and not accessible from outside the host. Bridge networking leverages NAT and port mapping (via iptables) to communicate outside the host.
  •  The bridge driver creates a Linux bridge on the host that is managed by Docker
    • By default containers on the same bridge network will be able to communicate with one another
    • External access to containers can also be configured through the bridge driver.
  • On a Docker host, by default, there is a local Docker network named bridge, created using a bridge network driver which instantiates a Linux bridge called docker0
  • Unless otherwise specified, containers are started on the bridge network by default
  • Note: the --net bridge option is not needed, as bridge is the default network driver
  • On the default bridge network (172.17.0.0/16 by default), the docker0 bridge on the host takes an address such as 172.17.0.1. Any container launched on this host will be placed on this network, unless specified otherwise.
  • The scope of this network is local, i.e. these IPs can only be accessed from this host
  • A pair of veth interfaces will be created for the container
    • One side of the veth pair will remain on the host attached to the bridge while the other side of the pair will be placed inside the container’s namespaces
    • An IP address will be allocated for containers on the bridge’s network and traffic will be routed through to the container
  • Containers on the bridge network cannot be accessed from outside the Docker host, unless a port is mapped to the Docker host using the -p parameter of docker run.
  • To launch a container on any network other than the default, bridge, use the --net option:
    • docker run -d -P --net none --name <containerName> <imageName>
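Further examples on the default bridge network (the names and port values are illustrative):

    $ docker run -d -p 8080:80 --name web nginx     # publish container port 80 on host port 8080 via NAT/iptables
    $ docker network inspect bridge                 # shows the bridge subnet (e.g. 172.17.0.0/16) and the connected containers
    $ curl http://localhost:8080                    # reaches the container through the port mapping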

Host Network Driver

Host Network Driver

The host network driver has access to the host's network interfaces and makes them available to the containers. Advantages of the host network driver include higher performance and a NAT-free environment. A disadvantage is that it is susceptible to port conflicts.
  • The host network driver connects a container directly to the host's network stack
    • Containers using the host driver reside in the same network namespace as the host
    • Provide native bare-metal network performance at the cost of namespace isolation
    • There is no namespace separation
    • All interfaces on the host can be used directly by the container
  • When running containers on the host network, you don't have to expose and map ports, as they are bound directly on the Docker host

  • Host networks are not isolated like the bridge network; if two containerized applications use the same port number, there will be a port conflict because both would bind to the same host port
  • host mode gives the container full access to local system services and is considered insecure
  • host mode gives better networking performance than bridge mode, as it uses the host's native networking stack
    • bridge mode goes through one level of virtualization through the docker daemon
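For example, running nginx on the host network (no -p mapping is needed, and none is honoured in host mode):

    $ docker run -d --net host --name web nginx     # nginx binds directly to port 80 on the host
    $ curl http://localhost:80                      # reachable without any port publishing
    $ docker run -d --net host --name web2 nginx    # second instance starts but nginx exits: host port 80 is already in use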

Built-In Network Driver

Built-In Network Drivers

The Docker built-in network drivers facilitate the containers' ability to communicate on a network. They are built into the Docker Engine. They are invoked and used through standard docker network commands.
  • The none, host, and bridge network drivers exist by default on every Docker host.

Publishing Ports

Publishing Ports

Exposing and publishing network ports is a way to allow the container to communicate with other containers and the external world. The container ports can be bound or published to specific or random host ports.
  • Publish all exposed ports to random ports
    • -P or --publish-all
  • Publish or bind a container port or group of ports to the host
    • -p, --publish list
    • Publish or bind to specific port (<publicPort>:<privatePort>)
      • e.g. -p 8080:80
    • Publish or bind to random port (<privatePort>)
      • e.g. -p 80
      • This binds container port 80 to a random host port, e.g. port 32768
    • You can optionally specify which IP to bind on: <hostInterface>:<publicPort>:<privatePort>, e.g. 127.0.0.1:6379:6379.
      This limits the connection to this port from the localhost (127.0.0.1) only.
    • Examples
      • $ docker run -d -P redis
        • Run redis detached and publish all exposed ports to random ports. The container port, 6379, is exposed through a random port (e.g. 32768) on the host
        • Docker communicates through this random port to the default port in the container
        • The container is listening on this exposed port.
      • $ docker run -d -P nginx
        • Run the nginx server detached and publish all exposed ports. In this case, exposed ports 80 and 443 are made available through randomly assigned host ports.
      • -P publishes all exposed ports to random port numbers on the host
      • With -p (lower case p) you can pick which container port to publish and which host port to map it to
      • -p syntax is -p <container port> or -p <host port>:<container port> or -p <host interface>:<host port>:<container port>
        • -p <container port>

      • Publishes the container port to a random host port, e.g. container port 80 published to host port 32771.
        Note that only port 80 is published; container port 443 is not available on the host.
      • Use telnet to verify the service (HTTP) is bound to the random port (32771 in this example) on the host; see the example after this list.
      • -p <host port>:<container port>

      • The container port 80 is published as port 8080 to the host.
        A connection to port 8080 on the host is mapped to port 80 in the container
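Putting the -P and -p variants together (the host-side port numbers are examples only):

    $ docker run -d -P --name web1 nginx            # all exposed ports (80, 443) published to random host ports
    $ docker port web1                              # e.g. 80/tcp -> 0.0.0.0:32771, 443/tcp -> 0.0.0.0:32772
    $ telnet localhost 32771                        # verify the HTTP service answers on the random host port
    $ docker run -d -p 8080:80 --name web2 nginx    # container port 80 published as host port 8080
    $ docker run -d -p 127.0.0.1:6379:6379 redis    # bind to the loopback interface only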

 

Exposing Ports

Exposing Ports

Exposing a container port enables the container to accept incoming connections on that port, e.g. the web service container listening on port 80. Use the EXPOSE instruction in the Dockerfile or the --expose option at runtime to expose a port. Docker will only route traffic to exposed ports.
  • The EXPOSE instruction informs Docker that the container listens on the specified network port(s) at runtime, e.g. EXPOSE 80 443 indicates that the container listens for connections on two ports: 80 and 443
  • EXPOSE does not make the ports of the container accessible to the host
    • To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports
  • Use the command line option --expose to expose a port or a range of ports at runtime

  • A telnet connection to the container's IP on port 80 confirms that the container is listening for (HTTP) connections; see the example after this list.

  • With the EXPOSE instruction or the --expose command-line option, ports are only exposed on the container's IP. To make a container reachable from the external world, publish the port via the -p or -P runtime options.
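An illustrative sketch of exposing versus publishing (the image tag, container names, and IP address are assumptions):

    # Dockerfile
    FROM nginx
    EXPOSE 80 443                                   # documents the listening ports; does not publish them

    $ docker build -t mynginx .
    $ docker run -d --expose 8080 --name app1 mynginx                   # expose an additional port at runtime
    $ docker run -d -P --name app2 mynginx                              # -P publishes every exposed port to a random host port
    $ docker inspect --format '{{.NetworkSettings.IPAddress}}' app1     # find the container IP on the default bridge network
    $ telnet 172.17.0.2 80                                              # confirm the container listens on its exposed port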

Container Networking Model (CNM)

Container Networking Model (CNM)

    • Network Basics
      • IP address
        • IPv4
        • IPv6
        • Partitions/subnets
        • 0.0.0.0
      • Ports
        • Well-known ports
          • port numbers from 0 to 1023
          • used by system processes for widely used network services
          • requires super-user privileges to bind port
        • Registered ports
          • port numbers from 1024 to 49151
          • assigned by IANA for specific services
          • does not require super-user privileges to bind port
        • Ephemeral ports (dynamic, or private)
          • port numbers from 49152 to 65535
          • ports not available to be registered with IANA
          • used for temporary, private services and automatic allocation
        • Port Forwarding
          • Redirect incoming requests to specific services by port number
      • NAT (Network Address Translation)

    • Container Network Model (CNM)
    • CNM provides the forwarding rules, network segmentation, and management tools for complex network policies. It formalizes the steps required to enable networking for containers while providing an abstraction that can be used to support multiple network drivers. Docker uses several networking technologies to implement the CNM network drivers including Linux bridges, network namespaces, veth pairs, and iptables.

      • The CNM is built on three components, sandbox, endpoint, network:

        • Sandbox
          • contains the configuration of a container's network stack, e.g.
            • container interface management
            • routing table
            • DNS settings
          • implemented as a Linux Network Namespace
          • may contain multiple endpoints from multiple networks
          • local scope - associated with a specific host
        • Endpoint
          • joins a Sandbox to a Network
          • Endpoint can be a veth pair
        • Network
          • group of Endpoints that can directly communicate with one another
          • implemented as a Linux bridge, a VLAN, etc.


    • Libnetwork
      • is Docker’s extensibility model
      • is a code library that adds networking capability to the Docker daemon
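    A rough way to see Sandboxes, Endpoints, and Networks from the CLI (the network and container names are illustrative):

        $ docker network create --driver bridge app_net                     # a Network, implemented as a Linux bridge
        $ docker run -d --name web nginx                                    # the container gets a Sandbox (network namespace) with one Endpoint on the default bridge
        $ docker network connect app_net web                                # adds a second Endpoint, joining the Sandbox to app_net
        $ docker inspect --format '{{json .NetworkSettings.Networks}}' web  # shows one endpoint per connected network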

 

Docker Networking

Docker Networking

Containers are isolated, single-application environments. A network enables containers to communicate with one another, the host, and the external network. A multi-tier application, e.g. a web server, a PHP process, and a database, could be built across three containers. For this multi-tier application to function, a network is created between them (see the example at the end of this section). Note: inter-host networking is possible with a special overlay network driver.
  • Docker Networking design themes include:
    • portability
      • portability across diverse network environments
    • service discovery
      • locate services even as they are scaled and migrated
    • load balancing
      • dynamically share load across services
    • Security
      • segmentation and access control
    • Performance
      • minimize latency and maximize bandwidth
    • Scalability
      • maintain linearity of characteristics as applications scale across hosts
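The three-container example from the introduction might be wired together roughly as follows (the image choices, names, and password placeholder are assumptions):

    $ docker network create app_net
    $ docker run -d --net app_net --name db -e MYSQL_ROOT_PASSWORD=<secret> mysql   # database tier
    $ docker run -d --net app_net --name php php:fpm                                # application tier, reaches the database by the name "db"
    $ docker run -d --net app_net -p 80:80 --name web nginx                         # web tier, published to the external world
    # On a user-defined network, Docker's embedded DNS lets these containers resolve one another by container name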