April 25, 2017

Relevant Linux Features: Exit Status

Exit Status


On Unix and Linux systems, every command ends with an exit status (also known as return status or exit code). Exit status is an integer value ranging from 0 to 255.

By default, a command that ends "successfully" has an exit status of zero, 0.
A command that ends with a "failure" has a non-zero (1 - 255) exit status.

Note: Success and failure, with respect to exit status, are relative. By default, if a command does what it's expected to do, on exit it sets a zero, 0, exit status. E.g. if the directory /var/log/apt exists, the command ls /var/log/apt will end successfully and result in an exit status of 0. However, if the argument, in this case a directory, is not accessible, the ls command will "fail" and leave a non-zero exit status:
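For example (a sketch; /var/log/foo stands in for a directory that does not exist, and echo $?, covered below, prints the exit status of the previous command):

$ ls /var/log/apt       # succeeds if the directory exists
$ echo $?
0
$ ls /var/log/foo       # fails; ls prints an error on stderr
$ echo $?
2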


By convention, success results in an exit status of zero; however, commands are generally free to decide which non-zero integer, between 1 and 255, to use. In the example above, ls uses an exit status of 2 to reflect that a directory is not accessible, and docker, for example, uses an exit status of 125 when an image is not accessible from the repository.

Commands are free to choose which value to use to reflect success or failure. However, there are some reserved values that have special meanings, defined here: http://www.tldp.org/LDP/abs/html/exitcodes.html

The shell stores a command's exit status in the ? shell variable, accessible via $?. This variable holds only one value at a time, so it is overwritten when the next command exits.

To read the previous command's exit status, use the command, echo $?.

Summary:

0         the exit status of a command on success
1 - 255   the exit status of a command on failure
?         the shell variable that holds the exit status of the last command executed
$?        reads the exit status of the last command executed


Relevant Linux Features: Control Operator

Control Operator


A Control Operator is a token that performs a control function. It is one of the following symbols: || & && ; ;; ( ) | |& <newline>. In this article we will focus on three of them: ; (semicolon), && (AND) and || (OR).

On occasion you might need to group Docker commands. Let's look at a few ways to do this in Linux using these three control operators.

; (semicolon) - delimits commands in a sequence.
Used to run multiple commands one after the other, similar to hitting ENTER after each command:

$ docker run --rm -it debian bash -c "ls /var; sleep 1; ls /"

This runs the container and executes the three commands one after the other, separated by ; (semicolon).

&& (AND) - runs commands conditionally, on success.
Has the form A && B, where B is run if, and only if, A succeeds, i.e. if A returns an exit status of zero.

$ apt-get update && apt-get install -y openssh-server

This runs the second command, apt-get install -y openssh-server, IF AND ONLY IF the first command, apt-get update, succeeded.

|| (OR) - runs commands conditionally, on failure.
Has the form A || B, where B is run if, and only if, A fails, i.e. if A returns a non-zero exit status.

$ false || true

This runs the second command, IF AND ONLY IF, the first command fails. In this example, since the first command, false, always fails, i.e. returns a non-zero exit status, the second command, true, runs and sets a zero exit status.
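The operators can also be combined to act on a command's exit status, e.g. (a sketch; note that the echo after || would also run if the first echo itself failed):

$ ls /var/log/apt && echo "directory exists" || echo "directory is missing"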

April 22, 2017

Relevant Linux Features: Pipe

Pipe

The pipe is implemented with the "|" symbol. It takes the output (stdout) of the command on the left and sends it as input (stdin) to the command on the right.


In the example below, docker run --help is the first command. Its output is used as input to the more command, which displays the output, one screen at a time:
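$ docker run --help | more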

Note: stderr (standard error) is NOT passed through the pipe; only stdout is sent to the next command.
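A quick way to see this (a sketch; nosuchfile stands in for a file that does not exist, and 2>&1 merges stderr into stdout when you do want errors to go through the pipe):

$ ls nosuchfile | wc -l        # the error message still reaches the terminal; wc counts 0 lines
$ ls nosuchfile 2>&1 | wc -l   # stderr is first merged into stdout, so wc counts 1 line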

Relevant Linux Features: Command Substitution

Command substitution - $(command)

In command substitution, the shell runs command and substitutes its output in place on the command line. The output can then be used as an argument to another command or stored in a variable.

The syntax of command substitution is $(command) or the older `command`, using back-ticks.

Let's say you want to remove the most recently created container. You can use docker ps -a, which lists all containers starting with the most recent, then copy the Container ID into the docker rm <Container ID> command:

 
Alternatively, you can use Command Substitution and let the system do some of the work for you. The following command runs docker ps -lq, which gets the ID of the most recent container, and passes that ID to the docker rm command: $ docker rm $(docker ps -lq)
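The same idea extends further, e.g. (a sketch; docker ps -aq lists the IDs of all containers, and docker rm will report an error for any that are still running):

$ docker rm $(docker ps -aq)      # remove all stopped containers
$ docker rm `docker ps -lq`       # the older back-tick form, equivalent to $(docker ps -lq)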

Relevant Linux Features: Standard I/O

Standard I/O

stdin, stdout, stderr:
Linux recognizes three input/output streams:
  • STDIN
    • standard input
    • by default input to a command comes from the keyboard
    • STDIN can be redirected to come from somewhere other than the keyboard, e.g. a file or a device
      • use the "<" symbol to redirect input, e.g. command < file
      • e.g. to send the contents of file, text001 as INPUT to the command pr and offset each line by 5 spaces:
        $ pr --indent=5 < text001
  • STDOUT
    • standard output
    • by default, output of a command is sent to the terminal
    • STDOUT can be redirected to go somewhere other than the terminal, e.g. a file or a device
      • use the ">" symbol to redirect output, e.g. command > file
      • e.g. send the output of command, ls to a file, text001 instead of the terminal:
        $ ls > text001
  • STDERR
    • standard error
    • by default, error from a command is sent to the terminal
    • STDERR can be redirected to go somewhere other than the terminal, e.g. a file or a device
      • use the "2>" symbol to redirect error, e.g. command 2> file
      • e.g. send the error of command, ls to a file, capture.err instead of the terminal:
        $ ls file 2> capture.err

Relevant Linux Features: Redirection

Redirection

Linux allows I/O to be redirected away from the default source or target.

The default source of STDIN is the keyboard, i.e. by default a command expects to get its input from the keyboard. To send input to a command from a file instead, use the "<" redirection symbol.

The default target of STDOUT is the terminal or screen, i.e. by default a command sends its output to the screen. To redirect it elsewhere, use the ">" symbol.

Note: "command > file" sends the output to a file, "file". If "file" already exists, any existing content is overwritten. To instead append the output to the end of the file, use ">>" instead, i.e. "command >> file".
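For example (a sketch; containers.txt is an arbitrary file name):

$ docker ps > containers.txt     # overwrites containers.txt with the list of running containers
$ docker ps >> containers.txt    # appends the list to the end of containers.txt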

The default target of STDERR is also the screen, i.e. by default a command sends its error output to the screen. To redirect it elsewhere, use the "2>" symbol.

Note: "command 2> file" sends the error output to a file, "file". If "file" already exists, any existing content is overwritten. To instead append any new output to the end of the file, use "2>>" instead, i.e. "command 2>> file".
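For example (a sketch; nosuchcontainer stands in for a container that does not exist, so docker inspect writes an error message to stderr):

$ docker inspect nosuchcontainer 2> errors.log     # the error message goes to errors.log instead of the screen
$ docker inspect nosuchcontainer 2>> errors.log    # appends subsequent error messages to errors.log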

April 11, 2017

The Linux Command Line

Linux Command Line

The Linux command line provides a way to manually interact with the Linux operating system. The shell is the program that acts as an interface between the user and the rest of the Linux operating system, including the kernel.

The shell displays the shell prompt. Users enter commands at this prompt. By default, the shell displays one of two prompts, depending on the type of user logged in.

For root users, the prompt is the # symbol:
 
For non-root users, the prompt is the $ symbol:
 
The shell accepts the command and processes it. The Linux command line refers to commands entered at the shell prompt.

The command line ends when you hit the Enter key.
A command line can, however, be extended beyond a single line: if the command is too long to fit on one line, you can use a backslash to continue it on the next line, e.g.

sudo docker run -v /home/user1/foo:/home/user2/src -v /projects/foo:/home/user2/data \
-p 127.0.0.1:40180:80 -p 127.0.0.1:48000:8000 -p 45820:5820 -t -i user2/foo bash

When the shell sees the backslash, it knows to ignore the Enter key and expect more arguments and/or options.

There are many shells in Linux. A commonly used shell is bash, the Bourne Again SHell. When you start a Linux container in Docker, you can specify which shell it should run, e.g. $ docker run --rm -it debian bash.

The Linux command line consists of three main types of objects: command, argument(s), option(s).
Command is the application/program to run, e.g. ls, perl, docker, docker-compose, etc. The command is always the first object on the command line.

There is normally one command object per command line. An exception is if you have pipes (|). A pipe allows multiple commands to be run in series on the same "command line". More on pipes in a later article.

An argument is a parameter or sub-command used to provide the command with additional information, e.g. by itself, the ls command lists the files and directories in the current directory. To list files in another directory, you can pass that directory as an argument, e.g. ls /opt/bin. A command line can have zero or more arguments.

Options are used to modify the behavior of the command. E.g. the ls command displays visible files/directories. Given the -a option, e.g. ls -a, it will display both visible and hidden files.

Options come in two forms: short-form, typically prepended with a single dash, and long-form, prepended with two dashes. Examples:
  • short-form option: ls -a or docker ps -a
  • Long-form option: ls --all or docker ps --all
There can be zero or more options per command line. Use a space to separate multiple options. You can mix and match short-form and long-form options on the same command line:
  • ls --all -l
For the short-form notation, you are allowed to concatenate the options. I.e. instead of ls -a -F -l, it's OK to combine the options, prepending the set with a single dash, e.g. ls -aFl.

An exception to combining options is when an option requires an argument, e.g. the -v option in Docker requires the volume path or directory as an argument, as in docker run -v /data, so it should stand by itself.
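For example, the argument-free short-form options can be combined while -v stays on its own with its argument (a sketch using the debian image):

$ docker run -it -v /data debian bash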

Docker: Deprecated Features

Deprecated Features

Periodically, existing Docker features may be removed or replaced with newer features.

Features to be removed/replaced are marked as deprecated in Docker documentation.
Deprecated features will remain available in Docker for at least three stable releases (roughly 9 months).

Users are expected to migrate away from deprecated features as soon as possible and within the deprecation time-frame.

References:
  • Deprecated Features page.

Docker: Combine Options

Combining options

Multiple single-character command line options, particularly if they do not require an argument, can be combined.

For example, rather than typing:
  • docker run -i -t --name test busybox sh
you can use:
  • docker run -it --name test busybox sh


Docker: Getting Help

Getting help

To get help with Docker at the command line, simply append the --help option to the command line:
  • docker --help
  • docker <command> --help
  • Note: If you enter an incomplete command, Docker will usually display a condensed syntax for that command

April 10, 2017

Docker and Sudo

Docker and Sudo

Docker is a privileged command: by default only root or a system administrator can run it. However, from a security point of view it's best practice to log in as a non-root user and elevate your privileges to root only when needed to administer the system.

The sudo command allows a non-root user to run commands reserved only for root. Depending on your Docker host configuration, you may be required to prepend docker commands with sudo:
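For example (a sketch; the exact error text varies by Docker version):

$ docker ps         # as a non-root user outside the docker group, this fails with a permission error
$ sudo docker ps    # the same command succeeds when run with elevated privileges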


To avoid this, particularly in a non-production environment, add a user to the docker group. Users that are part of the docker group can use docker without having to prepend sudo. E.g. edit the /etc/group file and update the line:

docker:x:999: to docker:x:999:user

where user is the username of a user on the system. To add multiple users delimit each name with a comma. Docker can then be run without prepending sudo.
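Alternatively, the usermod command appends a user to the group without editing /etc/group by hand (a sketch; user is a placeholder username, and the user must log out and back in for the change to take effect):

$ sudo usermod -aG docker user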

 

Docker Command Line Syntax

Docker Command Line Syntax

  • docker
    • A self-sufficient runtime for containers
    • Usage:
      • docker [OPTIONS] COMMAND [ARG...]
      • docker [ --help | -v | --version ]
  • docker-machine
    • Create and manage machines running Docker
    • Usage:
      • $ docker-machine [OPTIONS] COMMAND [arg…]
  • docker-compose
    • Define and run multi-container applications with Docker
    • Usage:
      • $ docker-compose [-f <arg>...] [options] [COMMAND] [ARGS…]
      • $ docker-compose -h|--help

Docker Command Line

Docker Command Line


Note: multiple short-form command line options without arguments can be combined, e.g. instead of specifying -i and -t separately, they can be combined under a single dash, as in -it.

April 09, 2017

Docker Command Line: Real-world Question

Docker Command Line: Real-world Question

Some time ago a user posted this question on the Google Docker Group. He had inherited a Docker platform and wanted to know what the following command line did:

$ sudo docker run -v /home/user1/foo:/home/user2/src -v /projects/foo:/home/user2/data  \
-p 127.0.0.1:40180:80 -p 127.0.0.1:48000:8000 -p 45820:5820 -t -i user2/foo bash

Let's take each command line parameter in turn:

Parameter                                  Description
sudo                                       runs docker as the superuser, needed unless the user has been given docker rights
docker run                                 the docker run command
-v <host path>:<container path>            mounts a host volume/directory into the container
-p <hostIP>:<hostPORT>:<containerPORT>     binds a container port to a host port on a specific host IP
-p <hostPORT>:<containerPORT>              binds a container port to a host port on any host IP
-t                                         attaches a terminal (pseudo-TTY) to the container
-i                                         enables interactive mode (keeps STDIN open)
user2/foo                                  the image identifier
bash                                       the command run in the container at startup

The main command, docker run, starts a container from the image, user2/foo, and runs the bash executable in the container. Persistent data (-v) is enabled by mounting the host directories /home/user1/foo and /projects/foo at /home/user2/src and /home/user2/data inside the container.

The container exposes three container ports, 80, 8000 and 5820, as host ports 40180, 48000 and 45820 respectively (-p). Additionally, container ports 80 and 8000 (host ports 40180 and 48000) can only be accessed on the host via the local interface, 127.0.0.1.

Finally -i and -t are used to enable interactive access to the standard input and output of the container, i.e. you can enter commands directly at the keyboard and see the output on the terminal.

Note: The back-slash (\) at the end of the line is a continuation mark. It tells the Linux Shell that the command line continues on the next line; it joins the two lines together as one contiguous command line.


April 08, 2017

Docker Compose

Docker Compose

Particularly with multi-tiered applications, your Dockerfile and runtime commands get increasingly complex. Docker Compose is a tool to streamline the definition and instantiation of multi-tier, multi-container Docker applications. Compose requires a single configuration file and a single command to organize and spin up the application tier.

Docker Compose simplifies the containerization of a multi-tier, multi-container application, which can be stitched together using the docker-compose.yml configuration file and the docker-compose command to provide a single application service.

The Compose file provides a way to document and configure all of the application’s service dependencies (databases, queues, caches, web service APIs, etc.)
  • Docker Compose defines and runs complex services:
    • define single containers via Dockerfile
    • describe a multi-container application via single configuration file (docker-compose.yml)
    • manage application stack via a single binary (docker-compose up)
    • link services through Service Discovery
  • The Docker Compose configuration file specifies the services, networks, and volumes to run:
    • services - the equivalent of passing command-line parameters to docker run
    • networks - analogous to definitions from docker network create
    • volumes - analogous to  definitions from docker volume create
  • The Compose configuration file is a YAML declarative file format:
    • YAML Ain’t Markup Language (YAML)
    • YAML philosophy is that "When data is easy to view and understand, programming becomes a simpler task"
    • human-friendly and compatible with modern programming languages for common tasks
    • Minimal structure for maximum data:
      • indentation may be used for structure
      • colons separate key: value pairs
      • dashes are used to create “bullet” lists
      version: "3"
      services:
        web:
          build: .
          volumes:
            - web-data:/var/www/data
        redis:
          image: redis:alpine
          ports:
            - "6379"
          networks:
            - default
      volumes:
        web-data:
  • Docker Compose file, docker-compose.yml:
    • describes the services, networks, and volumes of the application stack
    • document and configure the application’s service dependencies (databases, queues, caches, web service APIs, etc.)
    • Use one or more -f flags to specify the Compose configuration file. Without the -f flag, the current directory is searched for a docker-compose.yml file.
  • Command examples:
  • docker-compose up                          Launches all containers
    docker-compose stop                        Stops all containers
    docker-compose kill                        Kills all containers
    docker-compose exec <service> <command>    Executes a command in a running service's container
  • Enhances security and manageability by moving docker run commands to a YAML file

 

April 02, 2017

Docker Swarm

Docker Swarm

Docker Swarm is a clustering tool that allows the management of a set of Docker Hosts as if they were a single Docker Host. Most of the familiar Docker tools, APIs and services can be used in Docker Swarm, enabling scaling of the Docker ecosystem.
  • Native cluster management and orchestration of Docker Engines (nodes)
  • Run distributed applications (containers) across multiple Docker hosts, as if they were a single host
  • A node is an instance of the Docker engine participating in the swarm
    • Two types of Docker nodes:
      • Manager
        • Deploys applications to the swarm
        • dispatches tasks (units of work) to worker nodes
        • perform the orchestration and cluster management functions
      • Worker
        • receive and execute tasks dispatched from manager nodes
        • runs agents which report on tasks to the manager node
    • A service is the definition of the tasks to execute on the worker nodes
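A minimal sketch of setting up a swarm and deploying a service (assumes Docker 1.12 or later; <manager-ip> and <token> are placeholders reported by the init command):

$ docker swarm init --advertise-addr <manager-ip>        # on the manager node
$ docker swarm join --token <token> <manager-ip>:2377    # on each worker node
$ docker service create --name web --replicas 3 nginx    # define a service; tasks are dispatched to worker nodes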

Physical Network Requirements

Physical Design Requirements:

  • the Docker built-in network drivers have NO requirements for:
    • Multicast
    • External key-value stores
    • Specific routing protocols
    • Layer 2 adjacencies between hosts
    • Specific topologies such as spine and leaf, traditional 3-tier, and PoD designs
  • This is in line with the Container Networking Model which promotes application portability across all environments while still achieving the performance and policy required of applications.

Network Scope

Network Scope

  • The Docker network driver concept of scope is the domain of the driver, which can be the local or swarm
    • Local scope drivers provide connectivity and network services within the scope of the host
    • Swarm scope drivers provide connectivity and network services across a swarm cluster
  • Swarm scope networks will have the same network ID across the entire cluster
  • Local scope networks will have a unique network ID on each host.
  • Scope is identified via the docker network ls command:
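Typical output looks roughly like the following (IDs abbreviated; the ingress overlay network appears only when swarm mode is active):

$ docker network ls
NETWORK ID    NAME      DRIVER    SCOPE
<id>          bridge    bridge    local
<id>          host      host      local
<id>          none      null      local
<id>          ingress   overlay   swarm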


Plug-In Network Drivers

Plug-In Network Drivers:

Plug-In Network Drivers are network drivers created by users, the community and other vendors to provide integration with incumbent software and hardware and add specific functionality. Network driver plugins are supported via the LibNetwork project.
  • User-Defined Network
    • You can create a new bridge network that is isolated from the host's default bridge network
    • Example (a minimal sketch appears after this list)
  • Community- and vendor-created
    • These are network drivers created by third-party vendors or the community
    • Enables integration with incumbent software and hardware
    • Can be used to provide functionality not available in standard or existing network drivers
    • E.g. Weave Network Plugin - a network plugin that creates a virtual network that connects your Docker containers across hosts or clouds
  • IPAM Drivers
    • IP Address Management (IPAM) Driver
    • Built-in or Plug-in IPAM drivers
    • provides default subnets or IP addresses for Networks and Endpoints if they are not specified
    • IP addressing can be manually created/assigned
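As referenced above, a minimal sketch of creating and using a user-defined bridge network (isolated_nw is an arbitrary network name):

$ docker network create --driver bridge isolated_nw
$ docker run --rm -it --net isolated_nw debian bash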

Overlay & Underlay Drivers

Overlay and Underlay Drivers

  • Overlay
  • Overlay network driver creates networking tunnels to enable communication across hosts. Containers on this network behave as if they are on the same machine by tunneling network subnets from one host to the next. It spans one network across multiple hosts. Several tunneling technologies are supported, e.g. virtual extensible local area network (VXLAN)
    • The overlay driver creates an overlay network that supports multi-host networks
    • uses a combination of local Linux bridges and VXLAN to overlay container-to-container communications over physical network infrastructure
    • utilizes an industry-standard VXLAN data plane that decouples the container network from the underlying physical network (the underlay)
    • encapsulates container traffic in a VXLAN header which allows the traffic to traverse the physical Layer 2 or Layer 3 network
    • Created when a Swarm is instantiated
  • Underlay
  • Underlay network drivers expose host interfaces, e.g. eth0, directly to containers running on the host. An example of an underlay driver is the Media Access Control virtual local area network (MACvlan).
    • Allows direct connection to the hosts' physical interface
    • MACvlan eliminates the need for the Linux bridge, NAT and port-mapping
    • The MACvlan establishes a connection between container interfaces and the host interface (or sub-interfaces)
    • used to provide IP addresses to containers that are routable on the physical network
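Minimal sketches of creating each type of network (the network names, subnet, gateway, and parent interface eth0 are placeholders; the overlay network must be created on a swarm manager):

$ docker network create -d overlay my-overlay
$ docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my-macvlan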

None Network Driver

None Network Driver

With the none network driver, the network stack has only a loopback interface; there is no external network interface, so the container cannot communicate outside itself.
  • The none network driver is an unmanaged networking option
    • Docker Engine will not:
      • create interfaces inside the container
      • establish port mapping
      • install routes for connectivity
    • the none driver will create a separate namespace for each container
      • This guarantees container network isolation between any containers and the host
  • The none driver gives a container its own networking stack and network namespace but does not configure interfaces inside the container
  • Only the local loopback address is available
  • I/O performed through files or STDIN and STDOUT only
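A quick way to see this (a sketch assuming the busybox image, whose ip applet lists the container's interfaces):

$ docker run --rm --net none busybox ip addr

Only the loopback interface, lo, is listed.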

Bridge Network Driver

Bridge Network Driver

The bridge network driver provides a single-host network on top of which containers may communicate. The IP addresses in the pool are private and not accessible from outside the host. Bridge networking leverages NAT and port mapping (via iptables) to communicate outside the host.
  •  The bridge driver creates a Linux bridge on the host that is managed by Docker
    • By default containers on the same bridge network will be able to communicate with one another
    • External access to containers can also be configured through the bridge driver.
  • On a Docker host, by default, there is a local Docker network named bridge, created using a bridge network driver which instantiates a Linux bridge called docker0
  • Unless otherwise specified, containers are started on the bridge network by default
  • Note: the --net bridge option is not needed, as bridge is the default network driver
  • By default the docker0 bridge is assigned an address on the 172.17.0.0/16 network (typically 172.17.0.1). Any container launched on this host will receive an address on this network, unless specified otherwise.
  • The scope of this network is local, i.e. these IPs can only be accessed from this host
  • A pair of veth interfaces will be created for the container
    • One side of the veth pair will remain on the host attached to the bridge while the other side of the pair will be placed inside the container’s namespaces
    • An IP address will be allocated for containers on the bridge’s network and traffic will be routed through to the container
  • Any container in the bridge network cannot be accessed outside the docker host, unless the port is mapped to the docker host using the -p parameter in docker run.
  • To launch a container on any network other than the default, bridge, use the --net option:
    • docker run -d -P --net none --name <containerName> <imageName>
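For example, to make a containerized web server on the default bridge network reachable from outside the host (a sketch assuming the nginx image; 8080 is an arbitrary host port):

$ docker run -d -p 8080:80 --name web nginx

Container port 80 is then reachable on the host at port 8080.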

Host Network Driver

Host Network Driver

The host network driver has access to the host's network interfaces and makes them available to the containers. The advantages of the host network driver include higher performance and a NAT-free environment. A disadvantage is that it is susceptible to port conflicts.
  • The host network driver connects a container directly to the host's network stack
    • Containers using the host driver reside in the same network namespace as the host
    • Provide native bare-metal network performance at the cost of namespace isolation
    • There is no namespace separation
    • All interfaces on the host can be used directly by the container
  • When running containers on the host network, you don't have to expose or map ports; container ports are bound directly on the Docker host

  • Host networks are not isolated like the bridge network: if you run two containerized applications that use the same port number, there will be a port conflict, as both would bind to the same host port
  • host mode gives the container full access to local system services and is considered insecure
  • host mode gives better networking performance than in bridge mode as it uses the host’s native networking stack
    • bridge mode goes through one level of virtualization through the docker daemon
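For example (a sketch assuming the nginx image and that port 80 on the host is free):

$ docker run -d --net host --name web nginx

nginx binds directly to port 80 on the host; no -p port mapping is needed.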

Built-In Network Driver

Built-In Network Drivers

The Docker built-in network drivers facilitate the containers' ability to communicate on a network. They are built into the Docker Engine. They are invoked and used through standard docker network commands.
  • The network drivers none, host, and bridge exist by default on every Docker host.