
Posts

Showing posts from February 26, 2017

Containers vs. Virtual Machines

Virtualization turns one host computer into one or more pseudo-computers, known as virtual machines (VMs). Inside a VM, you can install an operating system and multiple applications. A Docker container is similar to a virtual machine in that it runs a pre-packaged application in isolation. Virtualization decouples the application from the underlying hardware; containers, operating at a higher level, decouple the application from the underlying operating system.

Hardware Virtualization

Virtualization abstracts hardware, allowing multiple workloads to share a common set of resources. It lets multiple workloads co-locate on the virtualized hardware while maintaining full isolation from each other. The hardware abstraction piece of virtualization is made possible by a portfolio of technologies such as Intel Virtualization Technology (Intel VT) and AMD

Containerization: A Timeline

Technologies that populate the containerization timeline include:

- chroot (1979) - UNIX system call for changing the root directory of a process to a new location in the filesystem, visible only to that process
- FreeBSD Jails (2000) - compartmentalization of files and resources
- Virtuozzo (2001)
- Linux-VServer (2001)
- Oracle Solaris Containers (2004)
- OpenVZ (Open Virtuozzo) (2005)
- Process Containers (2006)
- Google Control Groups (2007)
- HP-UX Containers (2007) - used to create an isolated operating environment within a single instance of the HP-UX 11i v3 operating system
- AIX WPARs - Workload Partitions (2007) - software-based virtualization that allows multiple AIX environments to run on a single system; WPARs are isolated from other processes and other WPARs
- LXC - LinuX Containers (2008)
- Cloud Foundry Warden (2011)
- Google LMCTFY (2013) - replacement for LXC (based on cgroups, does not

Resource Isolation

Cgroups and namespaces are features of the Linux kernel. They form the basis for lightweight process virtualization. Docker uses them to run individual "containers" in isolated environments within a single Linux kernel, avoiding the overhead of starting and maintaining virtual machines.

cgroups (control groups) handle resource allocation: a cgroup limits an application to a specific set of system resources (CPU, memory, disk I/O, network, etc.), facilitating the sharing of available hardware resources among containers. As a resource-management and resource-accounting/tracking solution, cgroups provide a generic process-grouping framework. This allows Docker to share available system resources among containers and enforce limits and constraints

Docker Characteristics

Four characteristics of Docker:

- single-process - the preferred run state is a single application per container, running as PID 1; multi-tier components are implemented in separate containers
- stateless by default - changes/state are kept in a writable layer which exists only until the container is deleted; container persistence is achieved by committing changes to a new image, and persistent data is implemented by writing to host mounts or data containers
- scalable - an image/container consists of a set of layers; a layer is downloaded only if it doesn't already exist on the host, saving space, and a container builds on existing resources instead of duplicating them, saving space and time; layering improves resource efficiency, performance, and scale
- portable - containers are portable across the Docker platform; each container is self-contained, i.e. packaged with its required configuration environment
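A hypothetical Dockerfile (the image name and file paths are placeholders) illustrating the first three characteristics: each instruction produces a cacheable layer, and the image runs a single foreground process as PID 1.

```dockerfile
# Base layer: pulled only if not already present on the host
FROM alpine:3.19

# Each instruction below adds a new layer on top of the previous one
RUN apk add --no-cache python3
COPY app.py /app/app.py

# Single application per container, running as PID 1; anything the
# process writes at runtime lives only in the container's writable
# layer and disappears when the container is deleted
CMD ["python3", "/app/app.py"]
```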

Docker Runtime Environment

Containerization, the ability to run multiple isolated compute environments on a single kernel, was not introduced by Docker. Docker's contribution is a user-friendly management model. Two features, cgroups and namespaces, introduced into the Linux kernel around 2008, make it possible to track and partition system resources within a single kernel. These and other capabilities are packaged by runtime-environment technologies such as LXC, libcontainer, and runC. The runtime environment forms the foundation of Docker's ability to host multiple isolated containers under a single kernel. Docker facilitates building an application image, packaging it with all its dependencies, and running it in a software container (an isolated user-space process). The container runs the same on any Docker-supported environment: a physical server, a virtual machine, or a cloud platform. The mantra is: "build once, run anywhere".

Containerization Is The New Virtualization

What is Docker? Docker is an open-source project that automates the deployment of applications inside software containers. At a high level, Docker is a utility that efficiently builds, ships, and runs containers; it is a container management tool. Software containers are self-contained, immutable execution environments: they do not change as they are promoted through the pipeline or development cycle. Each container has its own compute resources and shares the kernel of the host operating system. Containers offer an environment as close as possible to that of a virtual machine (VM) without the overhead of running a separate kernel and simulating the hardware. A container could accurately be described as "operating-system virtualization", which facilitates running multiple isolated user-space operating environments (containers) on top of a single kernel. User-space is that portion of system memory in
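One observable consequence of kernel sharing: the kernel release reported inside a container matches the host's, whereas a VM reports whatever kernel it booted. A tiny illustrative sketch (not part of Docker):

```python
import platform

def kernel_release() -> str:
    """Return the running kernel's release string (what `uname -r`
    prints). Inside a container this is the *host* kernel's release,
    because containers share the host kernel rather than booting
    their own."""
    return platform.release()
```

Running this inside several containers on one host yields the same string; running it inside VMs on that host can yield different strings per VM.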