
Docker by kuldeep

December 21, 2022 | 12:00 AM


Docker

What is Docker?

Docker is an open-source platform that enables developers to build, deploy, run, update and manage containers.

Containers are standardized, executable components that combine application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.

Install (Linux)

ubuntu

curl -sSL https://get.docker.com/ | sh

Images

Docker images are a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

# Build an image from a Dockerfile in the current directory
docker build -t <image_name> .

# Build an image from a Dockerfile without using the cache
docker build --no-cache -t <image_name> .

# List local images
docker images

# Delete an Image
docker rmi <image_name>

# Remove all unused images
docker image prune

Docker Hub

Docker Hub is a service provided by Docker for finding and sharing container images with your team. Learn more and find images at https://hub.docker.com

# Log in to Docker Hub
docker login -u <username>

# Publish an image to Docker Hub
docker push <username>/<image_name>

# Search Docker Hub for an image
docker search <image_name>

# Pull an image from Docker Hub
docker pull <image_name>

General Commands

# Start the Docker daemon
dockerd

# Get help with Docker. You can also use --help on any subcommand
docker --help

# Display system-wide information
docker info

Containers

A container is a runtime instance of a Docker image. A container will always run the same regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences between, for instance, development and staging environments.

# Start a container
docker start <container_name>

# Stop a container (optionally wait up to N seconds before killing it)
docker stop <container_name>
docker stop -t 30 <container_name>

# Kill a container
docker kill <container_name>

# Restart a container
docker restart <container_name>

# Pause a container
docker pause <container_name>

# Resume container
docker unpause <container_name>

# Create and run a container from an image, with a custom name:
docker run --name <container_name> <image_name>

# Run a container and publish its port(s) to the host
docker run -p <host_port>:<container_port> <image_name>

# Run a container in the background
docker run -d <image_name>

# Start or stop an existing container:
docker start|stop <container_name> (or <container_id>)

# Remove a stopped container:
docker rm <container_name>

# Open a shell inside a running container:
docker exec -it <container_name> sh
docker exec -it <container_name> bash
docker exec <container_name> ls

# Create an image from a container
docker commit <container_name> (or <container_id>) <image_name>
docker commit -m "Added configuration changes" my_container my_new_image:v1

# Fetch and follow the logs of a container:
docker logs -f <container_name>

# To inspect a running container:
docker inspect <container_name> (or <container_id>)

# To list currently running containers:
docker ps

# List all docker containers (running and stopped):
docker ps --all

# View resource usage stats
docker container stats

Dockerfile


Dockerfile example

# Use an official Python runtime as a parent image
FROM python:3.8-slim

# Set the working directory in the container to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Define an environment variable (a value is required; empty here as a placeholder)
ENV BASEURL=""

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Run app.py when the container launches
CMD ["python", "app.py"]
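A Dockerfile like this can be built and run as follows (the image and container names here are illustrative):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-python-app .

# Run it in the background, mapping host port 8080 to the exposed port 80
docker run -d -p 8080:80 --name my-python-app my-python-app
```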

Dockerfile example

# Use the official Node.js 20 image based on Alpine Linux
FROM node:20-alpine

# Create a group named 'app' and a system user named 'app' that belongs to the 'app' group
RUN addgroup app && adduser -S -G app app

# Set the working directory to /app
WORKDIR /app

# Copy the package.json and package-lock.json files to the working directory
COPY package*.json ./

# Install the project dependencies as root user
RUN npm install

# Copy the rest of the application code to the working directory
COPY . .

# Change the ownership of all files in the working directory to the 'app' user and group
RUN chown -R app:app /app

# Switch to the 'app' user
USER app

# Expose port 5173 for the application
EXPOSE 5173

# Run the application using the 'npm run dev' command
CMD ["npm", "run", "dev"]

.dockerignore

node_modules/
.git/
npm-debug.log

compose.yaml

version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - '27017:27017'
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    ports:
      - '8081:8081'
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
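With this file saved as compose.yaml, the whole stack can be started and torn down with a couple of commands:

```shell
# Start both services in the background
docker compose up -d

# Follow the combined logs of all services
docker compose logs -f

# Stop and remove the containers (add -v to also remove volumes)
docker compose down
```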


compose with watch

version: "3"
services:
  web:
    build: .
    ports:
      - 3000:5173
    develop:
      watch:
        - path: ./package.json
          action: rebuild
        - path: ./package-lock.json
          action: rebuild
        - path: .
          target: /app
          action: sync
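With the develop/watch section above, Compose (v2.22 or later) can rebuild the image when package files change and sync other edits into the running container:

```shell
# Start the services and watch for file changes
docker compose watch

# Or start and watch in one step
docker compose up --watch
```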


Volumes

Docker volumes are a way to persist data generated by and used by Docker containers. Unlike data stored in a container’s filesystem, which is ephemeral and lost when the container is removed, volumes allow data to persist beyond the lifecycle of a container.

Why Use Docker Volumes?

  1. Data Persistence: Volumes retain data even if the container is deleted.
  2. Isolation from Container Filesystem: They provide a managed and isolated storage location, separate from the container’s filesystem.
  3. Improved Performance: Volumes are optimized for I/O operations, especially on Docker for Linux.
  4. Data Sharing: Multiple containers can access the same volume simultaneously, enabling data sharing.

Types of Docker Volumes

  1. Anonymous Volumes: Created automatically when you don’t specify a name for the volume. They are less commonly used because they are difficult to reference after creation.
    docker run -v /data busybox
  2. Named Volumes: Created explicitly by the user with a specific name. This is the most common type because they are easier to manage.
    docker volume create my_volume
    docker run -v my_volume:/data busybox
  3. Bind Mounts: Allow you to map a directory from the host machine to the container. Useful when you want the container to have access to existing files on the host.
    docker run -v /host/path:/container/path busybox
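Named volumes can be managed with the docker volume subcommands (my_volume as above):

```shell
# List all volumes
docker volume ls

# Show a volume's details (driver, mount point on the host)
docker volume inspect my_volume

# Remove a specific volume
docker volume rm my_volume

# Remove all volumes not used by at least one container
docker volume prune
```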

Network

In Docker, networks are used to manage how containers communicate with each other and with the outside world. Docker networking provides several built-in network drivers that enable different types of communication for containers based on your needs.

Why Use Docker Networks?

  1. Isolated Communication: Containers can communicate with each other without exposing services to the host machine or external networks.
  2. Service Discovery: Containers on the same network can communicate using container names instead of IP addresses, simplifying connectivity in microservices.
  3. Security: By using isolated networks, you can control access and limit communication between different containers.

Types of Docker Networks

Docker provides several types of network drivers, each with its specific use cases:

1. Bridge Network (default)

Create and use a bridge network:

docker network create my_bridge
docker run -d --name container1 --network my_bridge nginx
docker run -d --name container2 --network my_bridge alpine sleep infinity
docker exec container2 ping container1  # Check connectivity

2. Host Network

Run a container with host networking:

docker run --network host -d nginx

3. Overlay Network

Create an overlay network in a Swarm:

docker network create -d overlay my_overlay

4. Macvlan Network

Create a macvlan network:

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my_macvlan

5. None Network

Run a container with no network:

docker run --network none -d nginx

Docker Network Commands
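The most commonly used network commands (network and container names are placeholders):

```shell
# List networks
docker network ls

# Create a network (bridge driver by default)
docker network create my_network

# Inspect a network's configuration and connected containers
docker network inspect my_network

# Connect / disconnect a running container
docker network connect my_network my_container
docker network disconnect my_network my_container

# Remove a network, or all unused networks
docker network rm my_network
docker network prune
```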

Example: Networking with Docker Compose

Docker Compose makes it easy to define and manage networks between services in a single file (docker-compose.yml):

version: '3'
services:
  web:
    image: nginx
    networks:
      - my_network

  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      - my_network

networks:
  my_network:
    driver: bridge

In this example, both web and db services share the my_network, allowing them to communicate using container names (web and db).
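One quick way to confirm that service discovery works (assuming the stack is up; getent is available in the Debian-based nginx image):

```shell
# Resolve the db service name from inside the web container
docker compose exec web getent hosts db
```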


Docker compose file

A simple docker-compose.yml file

version: '3.8'

services:
  web:
    image: nginx
    container_name: my_nginx
    ports:
      - "8080:80"
    networks:
      - front-end
    volumes:
      - ./html:/usr/share/nginx/html

  db:
    image: postgres
    container_name: my_postgres
    environment:
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example_password
    networks:
      - back-end
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:

networks:
  front-end:
  back-end:

Breaking Down the docker-compose.yml File:

1. version: '3.8' — the Compose file format version.

2. services: — the containers to run:

web (Service 1): an nginx container named my_nginx, publishing host port 8080 to container port 80 and serving files bind-mounted from ./html.
db (Service 2): a postgres container named my_postgres, configured through environment variables.

3. volumes: — declares the named volume db_data, which persists the Postgres data directory across container restarts.

4. networks: — declares two user-defined networks, front-end and back-end, keeping the two services isolated from each other.

Docker Compose automatically creates these networks unless you specify an existing external network. By default, if no networks are specified, Docker Compose will create a default network for all services.


Additional Options in docker-compose.yml

1. build

Instead of pulling an image from Docker Hub, you can specify a build context. This is useful when you want to build an image from a Dockerfile located in the specified directory.

Example:

web:
  build: ./web-directory
  ports:
    - "8080:80"

This would build the Docker image from a Dockerfile in the ./web-directory.

2. depends_on

The depends_on directive controls the order in which services start. For example, if you want the database container to start before the web container, you can specify:

web:
  image: nginx
  depends_on:
    - db

Note that depends_on does not wait for the database to be fully ready (just for it to be started). If you need to ensure that a service is ready, you’ll need additional handling (like using health checks).
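A common pattern for this, sketched below, is to give the database a healthcheck and use the long depends_on syntax so web waits until db reports healthy:

```yaml
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10

  web:
    image: nginx
    depends_on:
      db:
        condition: service_healthy
```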

3. restart

The restart policy controls whether a container should be automatically restarted if it crashes or is stopped.

Example:

web:
  image: nginx
  restart: always

4. environment (Environment Variables)

The environment key passes variables into a container at runtime, as shown for the db service above (POSTGRES_USER, POSTGRES_PASSWORD).

Summary

A docker-compose.yml file allows you to define and manage multi-container Docker applications: the services to run, the networks they share, and the volumes that persist their data.

Docker Compose makes it easier to manage complex applications that involve multiple services, each with its own settings and configurations, by using a simple declarative YAML format.

Dockerfile in Depth

A Dockerfile is a script containing a series of instructions for building a Docker image. It automates the process of creating a containerized application by defining the steps to install software, set up environment variables, configure the application, and define container behavior.

Each line in a Dockerfile represents a command or instruction to be executed in the image-building process.

Basic Structure of a Dockerfile

Here’s a basic template of a Dockerfile:

# 1. Specify the base image
FROM ubuntu:20.04

# 2. Set environment variables (optional)
ENV APP_HOME /app

# 3. Install dependencies
RUN apt-get update && apt-get install -y \
    curl \
    git \
    python3

# 4. Set the working directory in the container
WORKDIR /app

# 5. Copy files from the host machine into the container
COPY . .

# 6. Expose a port for communication
EXPOSE 80

# 7. Define the command to run when the container starts
CMD ["python3", "app.py"]

Dockerfile Instructions

1. FROM — sets the base image the build starts from.

2. ENV — sets an environment variable in the image.

3. RUN — executes a command at build time (e.g. installing packages).

4. WORKDIR — sets the working directory for the instructions that follow.

5. COPY — copies files from the build context into the image.

6. ADD — like COPY, but can also fetch remote URLs and extract local tar archives.

7. EXPOSE — documents the port(s) the container listens on.

8. CMD — the default command to run when the container starts; overridden by arguments passed to docker run.

9. ENTRYPOINT — the executable that always runs on container start; docker run arguments are appended to it.

10. VOLUME — declares a mount point for persistent or shared data.

11. USER — sets the user for subsequent instructions and for the running container.

12. ARG — defines a build-time variable, set with --build-arg.

13. LABEL — adds metadata to the image as key-value pairs.


Example of a Full Dockerfile

Let’s look at a more complete Dockerfile for a Node.js application.

# 1. Use an official Node.js runtime as the base image
FROM node:20

# 2. Set the working directory inside the container
WORKDIR /app

# 3. Copy package.json and install dependencies
COPY package.json /app
RUN npm install

# 4. Copy the rest of the application code
COPY . /app

# 5. Expose the port that the app will run on
EXPOSE 3000

# 6. Define the command to start the app
CMD ["npm", "start"]

This Dockerfile does the following:

1. Starts with a Node.js base image (node:20).
  2. Sets the working directory to /app.
  3. Copies package.json to the container and runs npm install to install dependencies.
  4. Copies the rest of the application code to the container.
  5. Exposes port 3000, which is the port the Node.js app listens on.
  6. Defines the default command to run npm start when the container is started.

Building an Image Using the Dockerfile

Once you have your Dockerfile ready, you can build a Docker image from it by running the following command in the same directory as your Dockerfile:

docker build -t my-node-app .

Running the Image

Once the image is built, you can run it with the following command:

docker run -d -p 3000:3000 --name node-container my-node-app
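Two quick checks after starting it (assuming the app responds over HTTP on port 3000):

```shell
# Confirm the container is up
docker ps --filter name=node-container

# Hit the published port from the host
curl http://localhost:3000
```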

Multi-Stage Builds (Advanced)

For more complex applications, you may want to reduce the size of your image by using multi-stage builds. With multi-stage builds, you can use one image to build your application and another for the final image.

Here’s an example of a multi-stage Dockerfile:

# Stage 1: Build the application
FROM node:20 AS builder

WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app

# Stage 2: Create the production image
FROM node:20-slim

WORKDIR /app
COPY --from=builder /app /app

EXPOSE 3000
CMD ["npm", "start"]
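The build command is unchanged; only the last stage ends up in the final image, so comparing image sizes shows the benefit (tag names here are illustrative):

```shell
# Build the multi-stage Dockerfile (produces the final slim stage)
docker build -t my-node-app:slim .

# Compare the sizes of your images
docker images my-node-app
```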


Docker Advanced

Overlay Networks in Docker

An overlay network in Docker is a virtual network that allows containers to communicate with each other across multiple Docker hosts (i.e., across different physical or virtual machines) as if they were on the same network. This is especially useful in multi-host Docker setups, such as in Docker Swarm or Kubernetes, where containers on different hosts need to communicate securely and seamlessly.

How Overlay Networks Work

An overlay network abstracts the underlying physical network and enables communication between containers that are deployed across multiple Docker hosts. Docker uses a VXLAN (Virtual Extensible LAN) technology to create this overlay network. VXLAN encapsulates Ethernet frames inside UDP packets, which are then transmitted over the physical network.

Overlay networks make it possible for Docker to simulate a single, unified network layer that spans multiple hosts, allowing containers to talk to each other as if they are all in the same local network, even though they might be distributed across different physical machines.

Key Concepts of Overlay Networks

  1. VXLAN:

    • VXLAN is the encapsulation technology used in overlay networks. It allows the creation of Layer 2 networks over a Layer 3 infrastructure (like the internet or any IP-based network).
    • VXLAN uses a unique VXLAN Network Identifier (VNI) for each network, making it possible to create isolated logical networks, even if they share the same underlying physical network.
  2. Docker Swarm and Overlay Networks:

    • Overlay networks are crucial in Docker Swarm because they provide communication between services across different Swarm nodes.
    • When you deploy services in a Swarm cluster, they are automatically connected to the default overlay network (ingress), but you can also create custom overlay networks for more control.
  3. Control Plane and Data Plane:

    • Control Plane: Docker manages the overlay network’s configuration and coordination through a control plane (using a key-value store, such as Consul or etcd).
    • Data Plane: Once the overlay network is configured, containers can communicate with each other across Docker hosts via the data plane, where VXLAN packets are exchanged.
  4. Routing Between Hosts:

    • When a container on one Docker host sends a packet to a container on a different host in an overlay network, Docker encapsulates the packet in a VXLAN header and sends it across the underlying network to the destination host.
    • The destination host decapsulates the packet, delivering it to the correct container on the target machine.

Types of Overlay Networks in Docker

  1. Default Overlay Network (ingress):

    • The ingress network is created automatically when you initialize a Docker Swarm.
    • This network is used for service discovery and communication between containers on different nodes in the Swarm, primarily for managing service ports and load balancing.
    • It is not intended for direct container-to-container communication but rather for communication involving services exposed via ports.
  2. User-defined Overlay Networks:

    • You can create custom overlay networks for containers to communicate securely and isolate traffic.
    • Custom networks allow for finer control, such as specifying which services can talk to each other and ensuring that traffic stays isolated within the network.

    Example:

    docker network create --driver overlay my_overlay_network

    This creates a custom overlay network named my_overlay_network.

Advantages of Overlay Networks

  1. Multi-Host Communication:

    • The primary benefit of overlay networks is enabling containers to communicate across multiple hosts in a Docker Swarm or Kubernetes cluster, as if they are on the same local network.
  2. Isolation:

    • Overlay networks can provide network isolation. Each network is isolated, so containers connected to one network cannot directly communicate with containers on other networks unless explicitly configured.
  3. Security:

    • Overlay networks can be secured using TLS encryption for traffic between Docker hosts. This is particularly important when deploying containers in a distributed system where traffic traverses public or untrusted networks.
  4. Service Discovery:

    • Docker provides built-in service discovery for containers connected to the same overlay network. Containers can communicate using their service names instead of IP addresses, making it easier to manage container communication.
  5. Scalability:

    • Overlay networks enable containers to scale across multiple hosts. As your application grows, you can easily scale services and containers across multiple nodes in the cluster without having to reconfigure the network.

Setting Up an Overlay Network

  1. Creating an Overlay Network: To create a custom overlay network, you need to be running Docker Swarm (multi-node Docker setup). If Docker Swarm is not initialized, you’ll need to run:

    docker swarm init

    Then, create the overlay network:

    docker network create --driver overlay --attachable my_overlay_network
    • --driver overlay: Tells Docker to use the overlay driver.
    • --attachable: Allows standalone containers to attach to the overlay network (useful if you’re not just using Swarm services but also regular containers).
  2. Deploying Containers on the Overlay Network: Once you’ve created an overlay network, you can deploy containers to it, either by using Docker Swarm services or by running standalone containers.

    With Docker Swarm (using a service):

    docker service create --name my_service --replicas 3 --network my_overlay_network my_image

    With Standalone Containers:

    docker run -d --name my_container --network my_overlay_network my_image
  3. Service Discovery on Overlay Networks: Docker supports DNS-based service discovery within the overlay network. For example, if you have a service named my_service running on the overlay network, other containers can refer to it by the service name my_service, and Docker will automatically resolve the correct IP address.

  4. Inspecting an Overlay Network: To inspect the details of the overlay network, including connected containers and its configuration, use:

    docker network inspect my_overlay_network

Example Scenario: Deploying Multiple Containers Across Hosts

  1. Initialize Docker Swarm (on both machines): On the first node:

    docker swarm init

    On the second node:

    docker swarm join --token <join_token> <manager_ip>:2377
  2. Create an Overlay Network: On the manager node:

    docker network create --driver overlay my_overlay_network
  3. Deploy a Service Across Multiple Hosts: On the manager node:

    docker service create --name web --replicas 2 --network my_overlay_network nginx
  4. Verify the Deployment: Use the following to check the running services:

    docker service ls
  5. Inspect the Overlay Network: To check which containers are connected to the overlay network:

    docker network inspect my_overlay_network

Use Cases for Overlay Networks

  1. Microservices Architecture:

    • In a microservices setup, different services may need to communicate over a network. Using overlay networks, services running on different hosts can communicate seamlessly.
  2. Distributed Applications:

    • For applications that span multiple Docker hosts, overlay networks allow components running on different hosts to communicate as if they were part of the same local network.
  3. Isolated Network Segments:

    • You may want to isolate traffic between services (e.g., between front-end and back-end services) using different overlay networks for security and performance reasons.
