DOCKER

Containers: a group of processes that run in isolation. All processes must be able to run on a shared kernel.

Each container has its own set of “namespaces” (isolated views of the system):

• PID – process IDs
• USER – user and group IDs
• UTS – hostname and domain name
• MNT – mount points
• NET – network devices, stacks, ports
• IPC – inter-process communication, message queues
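On a Linux host you can see the namespaces any process belongs to under /proc. A quick sketch, runnable on most modern Linux systems:

```shell
# Each entry is a namespace the current shell process belongs to;
# the number in brackets is that namespace's inode ID. Two processes
# in the same namespace show the same ID.
ls -l /proc/self/ns
```

Containers get fresh entries here (new pid, mnt, net, etc. IDs), which is what gives them their isolated view.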

cgroups (control groups) – limit and monitor resource usage for groups of processes
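Every process is already assigned to a cgroup, which you can inspect from /proc. A sketch (the docker flags shown in the comment are real `docker run` options, but the limit values are arbitrary examples):

```shell
# Show which cgroup(s) the current process has been assigned to.
cat /proc/self/cgroup

# With Docker, cgroup limits are set per container via flags, e.g.:
#   docker run --memory=256m --cpus=0.5 myimage
```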

VM vs Container

Each virtual machine has its own operating system, and each instance runs on the same host. Virtual machines are very heavy and slow to start.

Containers do not have a full-blown OS. They include only a few application-specific files, and this can be trimmed down further based on the needs of the application. Containers run on a shared kernel, are lightweight, and are quick to start.

Containers don’t replace virtual machines; containers can run on top of virtual machines.

What is Docker?

Docker is tooling to manage containers. It simplified existing kernel technology and made it usable for the masses, enabling developers to use containers for their applications. Packaging dependencies into containers gives us “build once, run anywhere” for building and deploying our applications.

Why are containers appealing to users?

• They enable us to work on any machine and remove system dependencies. No more “works on my machine”: we ship all our requirements in an image that can be updated whenever needed.

• Lightweight and fast

• Containers give us much better resource utilization than VMs: a host can fit far more containers than VMs, which lets us use our infrastructure more efficiently.

• They provide a standard developer-to-operations interface.

• Docker also provides an ecosystem and tooling around containers.

Containers are just a process (or a group of processes) running in isolation, which is achieved with Linux namespaces and control groups, both features built into the Linux kernel. Other than the kernel itself, there is nothing special about containers; what makes them useful is the tooling that surrounds them. Docker has become the de facto standard tool for building applications with containers, providing developers and operators a friendly interface to build, ship, and run containers in any environment.

Control groups (cgroups) provide a mechanism for easily managing and monitoring system resources: they partition resources like CPU time, system memory, and disk and network bandwidth into groups, then assign tasks to those groups.

LinuxKit makes it possible to run Docker containers on operating systems other than Linux.

Docker Images

A Docker image contains application code, libraries, tools, dependencies, and other files needed to make an application run. When a user runs an image, it can become one or many instances of a container. Docker images have multiple layers; each layer originates from the previous one but differs from it.
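Each filesystem-changing instruction in a Dockerfile typically produces one such layer. A minimal sketch (the image, package, and file names are illustrative):

```dockerfile
FROM alpine                    # base layer(s), pulled from a registry
RUN apk add --no-cache curl    # new layer: installed packages
COPY app.sh /app.sh            # new layer: a file from the build context
CMD ["/app.sh"]                # metadata only; adds no filesystem layer
```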

Docker Registry

The Registry is a stateless, highly scalable server side application that stores and lets you distribute Docker images. The Registry is open-source, under the permissive Apache license.
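A private registry is itself just a container. A hedged sketch, assuming a running Docker daemon (the image name `myapp` is hypothetical):

```shell
# Start a local registry listening on port 5000.
docker run -d -p 5000:5000 --name registry registry:2

# Tag a local image for that registry, then push and pull it.
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp
docker pull localhost:5000/myapp
```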

Creating a Docker image with docker build

The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
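A minimal sketch of a Dockerfile using COPY against the build context (the file names are hypothetical):

```dockerfile
FROM alpine
# config.json must exist inside the build context (the PATH passed to
# docker build); files outside the context cannot be referenced.
COPY config.json /etc/myapp/config.json
```

Running `docker build -t myapp .` sets the context to the current directory, so `config.json` is resolved relative to it.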

The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball contexts and plain text files.

Create a “Dockerfile”

The instructions for how to construct the image:

$ docker build -f Dockerfile .

$ cat Dockerfile
FROM ubuntu
ADD myapp /
EXPOSE 80
ENTRYPOINT ["/myapp"]

Docker image layers are cached: every layer is built on top of the layers before it. We should arrange our Dockerfile so that the instructions that change most often come last (they end up in the top layers, so only those layers need to be rebuilt).
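A sketch of this ordering for a Node.js-style app (file names are illustrative): dependencies change rarely, so they are installed first and stay cached; application source changes often, so it is copied last.

```dockerfile
FROM node:20
WORKDIR /app

# Rarely changes: copied and installed first so these layers stay cached.
COPY package.json package-lock.json ./
RUN npm ci

# Changes often: placed last so only these layers are rebuilt.
COPY src/ ./src/
CMD ["node", "src/index.js"]
```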

When a container is started from an image, Docker adds a thin writable (R/W) layer on top of the read-only image layers. This lets multiple instances of the same image share the underlying image layers.
