What is Docker?
By Brian Weeks
Today, we're diving into the core concepts of Docker, the technology that shapes how we build, deploy, and scale modern applications. It brings simplicity and consistency across environments: a containerized application behaves the same wherever it runs.
Isolation and Consistency
Docker containers are isolated environments. That means your application runs the same whether it's on your laptop, a staging server, or in production. This removes the classic “it works on my machine” problem. Developers can be confident that their code will behave the same across the board.
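A quick way to see this consistency in practice, assuming Docker is installed on the host:

```shell
# Run the official Node 18 Alpine image on any machine with Docker installed.
# The reported Node version and base OS are identical regardless of the host,
# because they come from the image, not from the machine running it.
docker run --rm node:18-alpine node --version
docker run --rm node:18-alpine sh -c 'head -n 1 /etc/os-release'
```

The `--rm` flag removes the container after it exits, so these one-off checks leave nothing behind.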
The Dockerfile
Let's start with the foundation: the Dockerfile. This is where we define the environment our application needs. We specify a base image, such as node:18-alpine, selecting only what's essential. Choosing slim variants of official images, combining commands to reduce layers, and removing build tools after compilation are all best practices that keep our images lean and efficient.
# Stage 1: Build stage
FROM node:18-alpine AS builder
# Set working directory
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --only=production
# Copy the rest of the application source
COPY . .
# Optional: Run build script if needed (e.g., for React/Vue apps)
# RUN npm run build
# Stage 2: Final image
FROM node:18-alpine
WORKDIR /app
# Copy only the necessary files from the builder stage
COPY --from=builder /app .
# Expose the app port
EXPOSE 3000
# Define the default command
CMD ["node", "index.js"]
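With that Dockerfile in the project root, building and running the image takes two commands (the image name `my-node-app` is just an illustrative tag):

```shell
# Build the image from the Dockerfile in the current directory and tag it
docker build -t my-node-app .

# Run it, mapping host port 3000 to the port the container exposes
docker run --rm -p 3000:3000 my-node-app
```

The `-p 3000:3000` mapping is what makes the app reachable from the host; `EXPOSE` in the Dockerfile is documentation, not a publish instruction.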
Layers: The Building Blocks of Images
In a Dockerfile, each instruction creates a new layer. These layers capture specific changes like installing dependencies or copying source code. The beauty of this approach is that Docker caches and reuses layers that haven’t changed. This speeds up builds and saves system resources.
FROM node:18-alpine # Layer 1
WORKDIR /app # Layer 2
COPY package*.json ./ # Layer 3
RUN npm ci --only=production # Layer 4
COPY . . # Layer 5
CMD ["node", "index.js"] # Layer 6
Build #1 (initial build)           Build #2 (rebuild with source code change)
+------------------------------+   +------------------------------+
| FROM node:18-alpine          |   | FROM node:18-alpine          | ✅ Cache hit
+------------------------------+   +------------------------------+
| WORKDIR /app                 |   | WORKDIR /app                 | ✅ Cache hit
+------------------------------+   +------------------------------+
| COPY package*.json ./        |   | COPY package*.json ./        | ✅ Cache hit
+------------------------------+   +------------------------------+
| RUN npm ci --only=production |   | RUN npm ci --only=production | ✅ Cache hit
+------------------------------+   +------------------------------+
| COPY . .                     |   | COPY . .                     | 🔄 Rebuilt
+------------------------------+   +------------------------------+
| CMD ["node", "index.js"]     |   | CMD ["node", "index.js"]     | 🔄 Rebuilt
+------------------------------+   +------------------------------+
🔁 In the second build, the
COPY . .
step invalidates the cache because the source code changed, so that layer and every instruction after it are re-executed.
✅ All layers before it are reused from Docker's cache — this is why dependencies are copied and installed before the source code, saving time and compute resources.
Portability and Deployment
One of Docker’s biggest strengths is portability. A containerized app can be deployed on any system that supports Docker. This opens the door for consistent deployment strategies — whether you're using Kubernetes, a CI/CD pipeline, or deploying manually.
Developer Experience and Speed
Containers make development faster. They reduce setup time, eliminate dependency conflicts, and allow for easy rollback. Need to try a different version of a dependency? Just tweak your Dockerfile and rebuild.
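For example, trying a newer runtime is a one-line change to the base image tag (node:20-alpine here is just an illustrative alternative):

```dockerfile
# Swap the base image tag to test against a different Node version,
# then rebuild; the rest of the Dockerfile stays unchanged.
FROM node:20-alpine
```

If the new version causes problems, rolling back is just reverting that line and rebuilding.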
Docker revolutionized how we think about application development. Its ability to provide isolated, repeatable, and portable environments has made it a standard in modern DevOps. Whether you're shipping a startup's first service or managing a complex system at scale, Docker is a tool worth mastering.
Docker FAQ
Q: What exactly is a container, and how is it different from a virtual machine?
A container is a lightweight, standalone executable unit that includes everything needed to run a piece of software: the code, runtime, libraries, and configuration. Unlike virtual machines, containers share the host operating system's kernel, which makes them much more efficient — they start faster and consume fewer resources.
Q: Why do layers matter in a Dockerfile?
Each instruction in a Dockerfile creates a layer in the image. Docker caches these layers so that when you rebuild the image, it only rebuilds the layers that have changed. This significantly speeds up build times and reduces the need to re-download dependencies or re-run unchanged steps.
Q: What is the difference between a Docker image and a container?
A Docker image is a static file that contains the application code, libraries, environment variables, and configuration files. A container is a running instance of an image. You can think of an image as the blueprint and the container as the live, running version of that blueprint.
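The blueprint/instance relationship is easy to see from the CLI (the image and container names below are illustrative):

```shell
# One image can back many independent containers
docker run -d --name web1 my-node-app
docker run -d --name web2 my-node-app

# 'docker ps' lists both running containers; both were created
# from the same image, but each has its own filesystem and process
docker ps
```

Stopping or deleting a container never modifies the image it was created from.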
Q: How do I deploy my Dockerized app to the cloud?
There are several ways to deploy a Dockerized app:
- Use a virtual machine with Docker installed (e.g., AWS EC2, DigitalOcean Droplets).
- Use a container platform like AWS ECS, Google Cloud Run, or Azure Container Apps.
- Use Kubernetes or Docker Swarm for container orchestration at scale.
- Push your Docker image to a container registry (e.g., Docker Hub or GitHub Packages) and pull it from there during deployment.
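The registry-based flow from the last bullet looks roughly like this, assuming a Docker Hub account (the `myuser/my-node-app` name is a placeholder):

```shell
# Tag the locally built image with a registry-qualified name and version
docker tag my-node-app myuser/my-node-app:1.0.0

# Push it to the registry (requires 'docker login' first)
docker push myuser/my-node-app:1.0.0

# On the deployment target, pull the exact same image and run it
docker pull myuser/my-node-app:1.0.0
docker run -d -p 3000:3000 myuser/my-node-app:1.0.0
```

Pinning an explicit version tag like `1.0.0` (rather than relying on `latest`) keeps deployments reproducible.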