
Posts

Showing posts from April 2, 2017

Docker Compose

Particularly with multi-tiered applications, Dockerfiles and runtime commands become increasingly complex. Docker Compose is a tool to streamline the definition and instantiation of multi-tier, multi-container Docker applications: a single configuration file (docker-compose.yml) and a single command (docker-compose) stitch the containers together and spin up the application as a single service. The Compose file provides a way to document and configure all of the application's service dependencies (databases, queues, caches, web service APIs, etc.). Docker Compose defines and runs complex services:
- define single containers via Dockerfile
- describe a multi-container application via a single configuration file
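A minimal sketch of a docker-compose.yml for a two-tier application, followed by the command that brings it up (the service names and images are illustrative):

version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: redis

$ docker-compose up -d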

Docker Swarm

Docker Swarm is a clustering tool that allows a set of Docker hosts to be managed as if they were a single Docker host. Most of the familiar Docker tools, APIs and services can be used with Docker Swarm, enabling scaling of the Docker ecosystem.
- Native cluster management and orchestration of Docker Engines (nodes)
- Runs distributed applications (containers) across multiple Docker hosts, as if they were a single host
- A node is an instance of the Docker Engine participating in the swarm
There are two types of Docker nodes:
- Manager: deploys applications to the swarm, dispatches tasks (units of work) to worker nodes, and performs the orchestration and cluster-management functions
- Worker: receives and executes tasks dispatched from manager nodes, and runs an agent which reports on its tasks to the manager node
A service is the definition of the tasks to execute on the worker nodes.
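A minimal sketch of initializing a swarm and deploying a replicated service to it (the service name, image and placeholders are illustrative):

$ docker swarm init                                    # on the manager node
$ docker swarm join --token <token> <managerIP>:2377   # on each worker node
$ docker service create --name web --replicas 3 nginx  # back on the manager
$ docker service ls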

Physical Network Requirements

The Docker built-in network drivers have NO requirements for:
- Multicast
- External key-value stores
- Specific routing protocols
- Layer 2 adjacencies between hosts
- Specific topologies such as spine-and-leaf, traditional 3-tier, and PoD designs
This is in line with the Container Networking Model, which promotes application portability across all environments while still achieving the performance and policy required of applications.
References: https://github.com/docker/labs/blob/master/networking/concepts/09-physical-networking.md

Network Scope

The Docker network driver concept of scope is the domain of the driver, which can be local or swarm.
- Local scope drivers provide connectivity and network services within the scope of the host
- Swarm scope drivers provide connectivity and network services across a swarm cluster
- Swarm scope networks have the same network ID across the entire cluster, while local scope networks have a unique network ID on each host
Scope is identified via the docker network ls command, as shown below.
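A minimal sketch of the output (the IDs and the swarm-scoped network name are illustrative):

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
74bb17ac1cfc        bridge              bridge              local
5d4b0a88a0ad        host                host                local
d3f71d8b8b1a        none                null                local
3fc287db4a24        my-overlay          overlay             swarm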

Plug-In Network Drivers

Plug-in network drivers are network drivers created by users, the community and other vendors to provide integration with incumbent software and hardware, and to add specific functionality. Network driver plugins are supported via the LibNetwork project.
User-defined networks
- You can create a new bridge network that is isolated from the host's default bridge network (see the example below)
Community- and vendor-created drivers
- Network drivers created by third-party vendors or the community
- Enable integration with incumbent software and hardware
- Can be used to provide functionality not available in standard or existing network drivers
- E.g. the Weave Network Plugin: a network plugin that creates a virtual network that connects your Docker containers across hosts or clouds
IPAM drivers
- IP Address Management (IPAM) drivers may be built-in or plug-in
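A minimal sketch of creating a user-defined bridge network and attaching a container to it (the network and container names are illustrative):

$ docker network create --driver bridge my-bridge
$ docker run -d --name web --net my-bridge nginx
$ docker network inspect my-bridge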

Overlay & Underlay Drivers

Overlay
The overlay network driver creates networking tunnels to enable communication across hosts. Containers on this network behave as if they are on the same machine by tunneling network subnets from one host to the next; it spans one network across multiple hosts. Several tunneling technologies are supported, e.g. virtual extensible local area network (VXLAN).
The overlay driver:
- creates an overlay network that supports multi-host networks
- uses a combination of local Linux bridges and VXLAN to overlay container-to-container communications over the physical network infrastructure
- utilizes an industry-standard VXLAN data plane that decouples the container network from the underlying physical network (the underlay)
- encapsulates container traffic in a VXLAN header, which allows the traffic to traverse the physical Layer 2 or Layer 3 network
An overlay network is created when a Swarm is instantiated.
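A minimal sketch of creating an overlay network and attaching a service to it, assuming the commands are run on a swarm manager (the names are illustrative):

$ docker network create --driver overlay my-overlay
$ docker service create --name web --network my-overlay nginx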

None Network Driver

With the none network driver the network stack has a loopback interface but no external network interface, so the container cannot communicate outside itself. The none network driver is an unmanaged networking option; Docker Engine will not:
- create interfaces inside the container
- establish port mapping
- install routes for connectivity
The none driver creates a separate namespace for each container, which guarantees network isolation between any containers and the host. It gives a container its own networking stack and network namespace but does not configure interfaces inside the container. Only the local loopback address is available, so I/O is performed through files or STDIN and STDOUT only.
Reference: https://github.com/docker/labs/blob/master/networking/concepts/08-host-networking.md
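A minimal sketch verifying that a container on the none network has only a loopback interface:

$ docker run --rm --net none busybox ip addr show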

Bridge Network Driver

The bridge network driver provides a single-host network on top of which containers may communicate. The IP addresses in the pool are private and not accessible from outside the host; bridge networking leverages NAT and port mapping (via iptables) to communicate outside the host. The bridge driver creates a Linux bridge on the host that is managed by Docker. By default, containers on the same bridge network are able to communicate with one another, and external access to containers can also be configured through the bridge driver. On a Docker host, by default, there is a local Docker network named bridge, created using the bridge network driver, which instantiates a Linux bridge called docker0. Unless otherwise specified, containers are started on the bridge network by default. Note: the --net bridge option is not needed, as bridge is the default.
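A minimal sketch showing that a newly started container lands on the default bridge network (the container name is illustrative):

$ docker run -d --name web nginx
$ docker network inspect bridge    # the web container appears in the Containers section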

Host Network Driver

The host network driver gives containers access to the host's network interfaces. Its advantages include higher performance and a NAT-free environment; a disadvantage is that it is susceptible to port conflicts.
- The host network driver connects a container directly to the host's network stack
- Containers using the host driver reside in the same network namespace as the host
- Provides native bare-metal network performance at the cost of namespace isolation: there is no namespace separation, and all interfaces on the host can be used directly by the container
- When running containers on the host network, you don't have to expose and map ports, as they are bound on the Docker host directly
- Host networks are not isolated like the bridge network: if you run two applications that listen on the same port, they will conflict
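A minimal sketch of running a web server on the host network; the server binds directly to port 80 on the host, with no -p mapping needed:

$ docker run -d --net host nginx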

Built-In Network Driver

The Docker built-in network drivers facilitate the ability of containers to communicate on a network. They are built into the Docker Engine and are invoked and used through standard docker network commands. The network drivers none, host, and bridge exist by default on every Docker host.

Publishing Ports

Exposing and publishing network ports is the way to allow a container to communicate with other containers and the external world. Container ports can be bound (published) to specific or random host ports.
- Publish all exposed ports to random ports: -P or --publish-all
- Publish or bind a container port or group of ports to the host: -p, --publish list
- Publish or bind to a specific port (<publicPort>:<privatePort>), e.g. -p 8080:80
- Publish or bind to a random port (<privatePort>), e.g. -p 80; this binds container port 80 to a random host port, e.g. port 32768
- Optionally specify which IP to bind on (<hostInterface>:<publicPort>:<privatePort>), e.g. -p 127.0.0.1:6379:6379; this limits connections to this port to the localhost (127.0.0.1) only
Examples
$ docker run -d -P redis
Run Redis detached, with all exposed ports published to random host ports.
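A few more illustrative variants (the host ports and names are arbitrary):

$ docker run -d -p 8080:80 nginx               # bind container port 80 to host port 8080
$ docker run -d -p 127.0.0.1:6379:6379 redis   # bind to the localhost interface only
$ docker port <containerID>                    # list the port mappings of a container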

Exposing Ports

Exposing a container port enables the container to accept incoming connections on that port, e.g. a web service container listening on port 80. Use the EXPOSE instruction in the Dockerfile or the --expose option at runtime to expose a port. Docker will only route traffic to exposed ports.
- The EXPOSE instruction informs Docker that the container listens on the specified network port(s) at runtime, e.g. EXPOSE 80 443 indicates that the container listens for connections on two ports: 80 and 443
- EXPOSE does not make the ports of the container accessible to the host; to do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports
- Use the command-line option --expose to expose a port or a range of ports at runtime
A telnet command allows us to confirm that the container is listening for (HTTP) connections, as sketched below.
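A minimal sketch, assuming a web container whose exposed port 80 has been published to host port 8080:

$ docker run -d -p 8080:80 nginx
$ telnet localhost 8080
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.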

Container Networking Model (CNM)

Network Basics
- IP address: IPv4, IPv6, partitions/subnets, 0.0.0.0
- Ports
  - Well-known ports: port numbers from 0 to 1023; used by system processes for widely used network services; binding requires super-user privileges
  - Registered ports: port numbers from 1024 to 49151; assigned by IANA for specific services; binding does not require super-user privileges
  - Ephemeral ports (dynamic, or private): port numbers from 49152 to 65535; not available to be registered with IANA; used for temporary, private services and automatic allocation
- Port forwarding: redirect incoming requests to specific services by port number
- NAT (Network Address Translation)
Container Network Model (CNM)
The CNM provides the forwarding rules, network segmentation, and management tools for complex network policies.

Docker Networking

Containers are isolated, single-application environments. A network enables containers to communicate with one another, the host and the external network. A multi-tier application, e.g. a web server, a PHP process and a database, could be built across three containers; for this multi-tier application to function, a network is created between them (see the sketch below). Note: inter-host networking is possible with a special overlay network driver.
Docker networking design themes include:
- Portability: portability across diverse network environments
- Service discovery: locate services even as they are scaled and migrated
- Load balancing: dynamically share load across services
- Security: segmentation and access control
- Performance: minimize latency and maximize bandwidth
- Scalability: maintain linearity of characteristics as applications scale across hosts
References: https://github.com
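A minimal sketch of wiring such a three-container application together on one user-defined network (the names, images and password are illustrative):

$ docker network create app-net
$ docker run -d --name db --net app-net -e MYSQL_ROOT_PASSWORD=secret mysql
$ docker run -d --name php --net app-net php:fpm
$ docker run -d --name web --net app-net -p 80:80 nginx

On a user-defined network, containers can reach one another by name (e.g. web can connect to php, and php to db).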