Understanding Docker Virtualization: Images, Containers, and Essential Commands

Fundamentals of Docker Images and Containers
Docker is a platform that provides container-based virtualization, allowing applications and their dependencies to be packaged and executed in standardized, isolated environments. This ensures consistent behavior across development, testing, and production, reducing issues caused by environment discrepancies. Docker also simplifies deployment and scaling, and automates key workflows, significantly improving developer productivity. For these reasons, Docker has become an increasingly critical component in modern software development and delivery pipelines.
This article is based on a macOS environment with Docker Desktop installed. Using a simple Spring Boot project as an example, we will explore how Docker images and containers are structured and built in practice.
1. Hypervisor vs. Containers
Virtualization is a computing technique that allows multiple isolated environments to run on a single physical machine. It enables efficient resource allocation and provides secure isolation, creating a stable foundation for system operation.
There are two main approaches to virtualization: hypervisor-based virtual machines and container-based virtualization.
- Hypervisors: A hypervisor is software that virtualizes hardware to run multiple virtual machines (VMs). Each VM includes its own operating system and resources, providing complete isolation. This ensures that one VM does not interfere with others. However, VMs tend to consume significant system resources and require longer boot times.
- Containers: Containers offer operating-system-level virtualization. Multiple containers can run on the same host while sharing the host’s OS kernel. Each container includes the application along with its dependencies and libraries, but without a full OS. This makes containers lightweight and allows for rapid startup and shutdown.

Docker is currently one of the most widely adopted container platforms. Its popularity stems from the advantages containers provide over traditional hypervisors:
- Docker images are lightweight and responsive, enabling fast startup and deployment.
- Applications and their dependencies are bundled into self-contained images, simplifying the build-to-deploy lifecycle.
- Consistency is maintained across development, testing, and production environments.
- Containers isolate resource usage such as CPU, memory, and I/O, increasing stability and protecting the host system.
- Docker Hub offers a central repository for discovering, sharing, and reusing container images.
- Docker is compatible with multiple operating systems, including macOS, Linux, and Windows.
2. Docker Image
2-1. Docker Image Concepts
A Docker image is a lightweight, standalone, and executable software package that includes everything needed to run a piece of software—code, runtime, libraries, environment variables, and configuration files.
Each Docker image consists of multiple layers. Each layer records the filesystem changes introduced by a build step, and the layers are stacked to form the image's read-only filesystem. This layered structure allows for smaller image sizes, faster build and deployment processes, and efficient reuse of shared layers across images.
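To see the layers that make up a local image, the `docker history` command lists them along with the instruction that created each one; the image name below is only a placeholder.
# Show the layers of a local image, newest first (image name is illustrative)
$ docker history catsriding/hello-docker:1.0.0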

A Docker image can be compared to an architectural blueprint. A blueprint outlines a building’s core structure—room layouts, materials, and placement—and can be used to create multiple buildings with the same structure. However, the interior details such as lighting, material finishes, and furniture can differ from one building to another.
Similarly, a Docker image includes all the essential information needed to run a container—OS, libraries, and application code. You can create multiple containers from the same image, but customize their behavior by configuring them at runtime. For instance, you might override environment variables or mount different files, allowing each container to behave uniquely.
In short, a Docker image acts as the design foundation, and containers are the customized instances. This model offers flexibility and efficiency throughout the software development lifecycle.
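For example, two containers started from the same image can be given different runtime configuration; the image name and environment variable below are illustrative:
# Same image, different runtime configuration
$ docker run -d --name app-dev -e APP_ENV=dev catsriding/hello-docker:1.0.0
$ docker run -d --name app-prod -e APP_ENV=production catsriding/hello-docker:1.0.0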
2-2. Docker Image Commands
Docker provides a variety of commands for building, publishing, and managing images. These commands help developers easily create, inspect, push, and remove images from both local and remote repositories.
Docker image naming convention
Docker images typically follow the `username/repository:tag` format. Here, `username` is your Docker Hub account name, `repository` is the image name, and `tag` refers to the version.
For example, `catsriding/ongs:1.0.0` represents version `1.0.0` of the `ongs` image owned by the `catsriding` account.
Official images hosted on Docker Hub may omit the account prefix; for instance, the image `ubuntu` is shorthand for `library/ubuntu`.
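An existing local image can be renamed to follow this convention with `docker tag`, for example before pushing it to Docker Hub (names below are illustrative):
# Add a repository name and tag to a local image
$ docker tag ongs:latest catsriding/ongs:1.0.0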
2-2-1. docker images
The `docker images` command lists Docker images stored locally. It shows details such as image ID, repository name, tag, creation time, and size.
$ docker images [OPTIONS] [REPOSITORY[:TAG]]
- Options:
  - `-a`, `--all`: Show all images, including intermediate ones.
  - `-q`, `--quiet`: Display only image IDs.
  - `-f`, `--filter`: Filter output based on conditions.
Example usage:
# List all images
$ docker images -a
# Find a specific image
$ docker images catsriding/hello-docker
2-2-2. docker build
The `docker build` command reads instructions from a `Dockerfile` to build an image.
$ docker build [OPTIONS] PATH | URL | -
- Options:
  - `-t`, `--tag`: Assign a name and tag to the image.
  - `--platform`: Specify the build platform (e.g., `linux/amd64`).
  - `-f`, `--file`: Use a custom Dockerfile path or name.
  - `--build-arg`: Set build-time variables.
  - `--no-cache`: Skip cache when building the image.
  - `-q`, `--quiet`: Suppress output and show only the image ID.
Example usage:
$ docker build -t catsriding/waves:1.0.0 .
$ docker build -t catsriding/waves:1.0.0 -f Dockerfile.dev .
$ docker build --platform linux/amd64 -t catsriding/waves:1.0.0 .
2-2-3. docker buildx
The `docker buildx` command enables multi-platform builds and is available starting from Docker 19.03.
$ docker buildx build [OPTIONS] PATH | URL | -
- Options:
--platform
: Specify one or more platforms, such aslinux/amd64
,linux/arm64
.--push
: Push the image to a remote registry after building.
Example usage:
$ docker buildx build --platform linux/amd64,linux/arm64 -t catsriding/waves:1.0.0 --push .
$ docker buildx build --platform linux/amd64 -t catsriding/waves:1.0.0 -f Dockerfile.dev --push .
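Multi-platform builds rely on a BuildKit builder instance; if one has not been configured yet, it can be created and activated first. A minimal sketch:
# Create a buildx builder and make it the active one
$ docker buildx create --use
# Check the active builder and the platforms it supports
$ docker buildx ls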
2-2-4. docker pull
The `docker pull` command downloads an image from a remote repository like Docker Hub.
$ docker pull [OPTIONS] NAME[:TAG|@DIGEST]
- Options:
  - `-a`, `--all-tags`: Download all tags for a given repository.
  - `--platform`: Specify a platform for the image.
  - `--quiet`: Suppress detailed output.
Example usage: images can be pulled by tag or, for an exact match, by SHA256 digest:
$ docker pull ubuntu:18.04
$ docker pull ubuntu@sha256:abc123...
2-2-5. docker push
The `docker push` command uploads a local image to a remote registry such as Docker Hub.
Example usage:
$ docker login
$ docker push catsriding/waves-server:1.0.0
2-2-6. docker rmi
The `docker rmi` command removes one or more images from the local system.
$ docker rmi [OPTIONS] IMAGE [IMAGE...]
- Options:
  - `-f`, `--force`: Force removal, even if the image is in use.
Example:
$ docker rmi ubuntu:18.04
$ docker rmi -f $(docker images -q)
2-2-7. docker image inspect
The `docker image inspect` command displays detailed metadata for an image in JSON format.
$ docker image inspect [OPTIONS] IMAGE [IMAGE...]
- Options:
  - `-f`, `--format`: Format output using a Go template.
Example:
$ docker image inspect ubuntu:18.04
This command is useful for debugging, as it reveals environment variables, command history, labels, and configuration details.
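With the `--format` option, a Go template can extract individual fields instead of printing the full JSON:
# Print the image's OS/architecture and its baked-in environment variables
$ docker image inspect -f '{{.Os}}/{{.Architecture}}' ubuntu:18.04
$ docker image inspect -f '{{.Config.Env}}' ubuntu:18.04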
2-3. Docker Image Build
While it is possible to create a Docker image by capturing the state of a running container, the standard approach is to use a special file called a `Dockerfile`. A Dockerfile is a declarative script that contains a sequence of instructions used to build an image.
A Dockerfile is typically placed at the root of the project directory to provide access to all necessary resources. This is a common convention rather than a strict rule—Dockerfiles can be placed elsewhere as needed.
The basic structure of a Dockerfile is as follows:
# Comment
INSTRUCTION arguments
Instructions are case-insensitive, though uppercase is conventionally used for readability. Arguments are values passed to each instruction.
Below is a table of commonly used Dockerfile instructions:
Instruction | Description |
---|---|
ADD | Add local or remote files and directories. |
ARG | Use build-time variables. |
CMD | Specify default commands. |
COPY | Copy files and directories. |
ENTRYPOINT | Specify default executable. |
ENV | Set environment variables. |
EXPOSE | Declare the ports the application will listen on. |
FROM | Set the base image for a new build stage. |
HEALTHCHECK | Define a container health check. |
LABEL | Add metadata to an image. |
MAINTAINER | Identify the image author (deprecated in favor of LABEL). |
ONBUILD | Define triggers for when the image is used in another build. |
RUN | Execute shell commands during build. |
SHELL | Set the default shell. |
STOPSIGNAL | Specify system call signal for container shutdown. |
USER | Set the user and group to run subsequent commands. |
VOLUME | Create a mount point with a specified path. |
WORKDIR | Set the working directory for instructions that follow. |
Using these instructions, a Dockerfile may look like this:
FROM base-image
LABEL key=value
WORKDIR /app
COPY ./source /app
USER appuser
EXPOSE 8080
ARG BUILD_ENV=production
ENV APP_ENV=production
ENTRYPOINT ["java", "-jar", "app.jar"]
CMD ["--spring.profiles.active=prod"]
When the build process starts, Docker reads and executes each instruction in the Dockerfile sequentially to generate a new image. The resulting image can then be used to run containers.
For more details, refer to the Dockerfile reference.
2-3-1. Single Stage Build
Let’s walk through a basic example using a Spring Boot project. First, create a `Dockerfile` in the root directory of your project. Here is a simplified project structure:
.
├── .git
├── .gitignore
├── .gradle
├── .idea
├── build.gradle
├── Dockerfile
├── gradle
├── gradlew
├── gradlew.bat
├── HELP.md
├── settings.gradle
└── src
├── main
│ ├── java
│ │ └── app
│ │ └── catsriding
│ │ ├── api
│ │ │ └── DockerController.java
│ │ └── Application.java
│ └── resources
│ └── application.yml
└── test
Then write the Dockerfile as follows:
FROM gradle:8.6.0-jdk17
LABEL version="1.0.0" description="Hello, Docker!" vendor="catsriding" maintainer="Jynn"
WORKDIR /app
COPY . .
RUN gradle clean build --no-daemon
RUN cp build/libs/*[^.plain].jar application.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "application.jar"]
CMD ["--spring.profiles.active=test"]
This Dockerfile uses the Gradle base image, copies the project source into the container, builds the Spring Boot application, and prepares the JAR file for execution.
To build the image:
$ docker build -t catsriding/hello-docker:1.0.0 .
Sample output:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
catsriding/hello-docker 1.0.0 7071df080f72 a few seconds ago 832MB
Once the image is created, you can run a container based on it. This completes the single-stage image build using a Dockerfile.
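For reference, starting a container from the new image looks like this; the `docker run` command and its options are covered in detail in section 3:
$ docker run -d -p 8080:8080 --name hello-docker catsriding/hello-docker:1.0.0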
2-3-2. Multi Stage Builds
A single-stage build defines all steps in one image. While simple, this can lead to large images with unnecessary build tools included. To address this, Docker supports multi-stage builds by allowing multiple `FROM` statements in a Dockerfile.
This technique breaks the build process into stages. Artifacts from the build stage are copied into a smaller runtime image, resulting in a cleaner and more efficient image.
Here’s a multi-stage Dockerfile for the same Spring Boot application:
# Build stage
FROM gradle:8.6.0-jdk17 as build
WORKDIR /app
COPY . .
RUN gradle clean build --no-daemon
# Runtime stage
FROM openjdk:17
LABEL version="1.0.0" description="Hello, Docker!" vendor="catsriding" maintainer="Jynn"
WORKDIR /app
COPY --from=build /app/build/libs/*[^.plain].jar /app/application.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "application.jar"]
CMD ["--spring.profiles.active=test"]
This structure ensures that only the compiled JAR file is transferred to the final image, significantly reducing its size.
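Building the multi-stage Dockerfile under a new tag makes it easy to compare against the single-stage image:
$ docker build -t catsriding/hello-docker:2.0.0 .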
Compare image sizes:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
catsriding/hello-docker 2.0.0 f19f9785c066 just now 521MB
catsriding/hello-docker 1.0.0 7071df080f72 3 hours ago 832MB
The multi-stage image is smaller and cleaner. This method is highly recommended for efficient image construction in production environments.
3. Docker Container
3-1. Docker Container Concepts
A Docker container is a running instance of a Docker image. While a Docker image is a static package that includes everything required to run software—such as code, runtime, libraries, environment variables, and configuration files—a container represents the dynamic execution state of that image.
In essence, a Docker container serves as a deployable unit of software that operates in an isolated environment on the host system. Each container runs a single application process and maintains a completely independent execution environment, ensuring that it does not interfere with other containers.
Like Docker images, containers are built on a layered filesystem: the image's read-only layers sit at the bottom, and Docker adds a thin writable layer on top when the container is created. Any files created, modified, or deleted while the container runs are recorded in this writable layer, allowing the container to maintain state during execution.
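The `docker diff` command makes this writable layer visible by listing the files that have been added (A), changed (C), or deleted (D) since the container started; the container name below is illustrative:
# Show filesystem changes recorded in the container's writable layer
$ docker diff waves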
3-2. Docker Container Commands
Docker provides a wide range of commands for managing containers, including starting, stopping, removing, and inspecting logs.
3-2-1. docker run
The `docker run` command creates and starts a new container from a Docker image.
$ docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
- OPTIONS:
  - `-p`, `--publish`: Maps a port from the host to the container.
  - `-d`, `--detach`: Runs the container in the background.
  - `-e`, `--env`: Sets environment variables inside the container.
  - `--name`: Assigns a name to the container.
  - `--rm`: Automatically removes the container after it exits.
  - `-v`, `--volume`: Mounts a volume between host and container.
  - `-it`: Runs the container in interactive mode with a pseudo-TTY.
  - `--net`: Specifies the container's network mode.
  - `-a`, `--attach`: Attaches to `stdin`, `stdout`, or `stderr`.
  - `--add-host`: Adds entries to the container’s `/etc/hosts` file.
  - `--blkio-weight`: Configures block IO weight.
  - `--cpu-shares`: Limits CPU usage.
  - `--device`: Adds a device from the host.
  - `--dns`: Configures DNS servers.
  - `--expose`: Exposes additional container ports.
  - `--group-add`: Adds supplementary groups.
  - `--health-cmd`: Defines a health check command.
- COMMAND, ARG:
  - Specifies the command to run in the container and its arguments.
To pull an image from a repository and run a container in one step:
$ docker run -d -p 8080:8080 --name waves catsriding/hello-docker:1.0.0
This command runs a container named `waves` from the specified image, maps container port 8080 to the host, and detaches the process.
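Several of these options are often combined. A sketch, with illustrative values:
# Run detached, name the container, set a Spring profile, mount a log volume,
# and remove the container automatically when it exits
$ docker run -d --rm --name waves \
  -p 8080:8080 \
  -e SPRING_PROFILES_ACTIVE=test \
  -v waves-volume:/app/logs \
  catsriding/hello-docker:1.0.0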
3-2-2. docker ps
The `docker ps` command lists currently running containers.
$ docker ps [OPTIONS]
- OPTIONS:
  - `-a`, `--all`: Shows all containers (running and stopped).
  - `-q`, `--quiet`: Displays only container IDs.
  - `-f`, `--filter`: Filters output by given conditions.
Example:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
123456789abc catsriding/hello-docker:1.0.0 "java -jar applic…" 5 minutes ago Up 5 minutes 8080/tcp waves
3-2-3. docker stop
The `docker stop` command gracefully stops a running container.
$ docker stop [OPTIONS] CONTAINER [CONTAINER...]
- OPTIONS:
  - `-t`, `--time`: Seconds to wait before forcibly stopping the container (default: 10 seconds).
Example:
$ docker stop waves
$ docker stop --time 5 waves
Docker sends a `SIGTERM` signal to the container and, if it doesn’t stop within the given time, follows up with a `SIGKILL`.
3-2-4. docker rm
The `docker rm` command removes one or more containers.
$ docker rm [OPTIONS] CONTAINER [CONTAINER...]
- OPTIONS:
  - `-f`, `--force`: Forces removal of a running container.
  - `-l`, `--link`: Removes container links, but not the container.
  - `-v`, `--volumes`: Removes anonymous volumes associated with the container.
Examples:
$ docker rm waves # Remove a stopped container
$ docker rm -f waves # Force-remove a running container
3-2-5. docker logs
The `docker logs` command displays logs from a container.
$ docker logs [OPTIONS] CONTAINER
- OPTIONS:
  - `-f`, `--follow`: Streams logs in real time.
  - `--since`: Shows logs since a specific time.
  - `--tail`: Limits the number of lines shown.
Example:
$ docker logs --tail 100 waves
3-2-6. docker exec
The `docker exec` command runs a command in a running container.
$ docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
- OPTIONS:
  - `-i`, `--interactive`: Runs in interactive mode.
  - `-t`, `--tty`: Allocates a pseudo-TTY.
Example:
$ docker exec -it waves /bin/bash
root@717942d91b98:/app#
This allows you to interact with the container’s file system or shell.
3-2-7. docker container inspect
The `docker container inspect` command retrieves detailed information about a container.
$ docker container inspect [OPTIONS] CONTAINER [CONTAINER...]
- OPTIONS:
  - `-f`, `--format`: Specifies a Go template for formatted output.
  - `-s`, `--size`: Displays total file size and virtual size.
Examples:
$ docker container inspect waves
$ docker container inspect $(docker container ls -q)
The command outputs detailed information in JSON format, including container ID, creation time, status, network settings, mounted volumes, and environment variables.
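As with image inspection, the `--format` option can pull out individual fields:
# Print the container's IP address on the default bridge network
$ docker container inspect -f '{{.NetworkSettings.IPAddress}}' waves
# Print the container's current status
$ docker container inspect -f '{{.State.Status}}' waves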
4. Docker Network
4-1. Docker Network Concepts
Docker networking governs how containers access the network and communicate with each other. Docker provides several network modes, each determining how containers interact with their network environment.
The primary Docker network modes are as follows:
- Bridge: This is the default mode. Each container operates within its own network namespace and communicates externally via a bridge network. Containers in the same bridge share a virtual subnet and can communicate using private IP addresses.
- Host: In this mode, the container shares the host’s network namespace. It uses the host's IP address and network interfaces directly, eliminating network isolation from the host.
- None: This mode disables all networking. The container is assigned a network namespace but no interfaces, meaning it cannot communicate with the outside.
- Overlay: Used for multi-host networking, this mode allows containers running on different Docker hosts to communicate as if they were on the same local network. It’s primarily used in Docker Swarm for inter-service communication.
When a container is created, Docker sets up a virtual Ethernet (veth) pair: one end becomes the container's network interface inside its own network namespace, and the other end (typically named `veth*`) attaches to a virtual bridge on the host. This allows the container to access networks independently of the host.
Virtual Network
A virtual network is a software-defined network built on top of a physical one, enabling isolated environments that mimic real networking. Docker’s virtual network architecture enables containers to communicate securely using private IP addresses. Cloud providers like AWS offer similar concepts through services like VPC (Virtual Private Cloud), where users can define subnets and control traffic between cloud resources.
By default, Docker creates a bridge network named `docker0`, which acts as a gateway for containers to access external resources. This bridge connects containers to both other containers and the outside world.
To enable external communication, Docker uses the Linux kernel’s `iptables` feature to apply Network Address Translation (NAT). NAT maps internal container IPs and ports to the host, allowing traffic routing between the container and the outside world.
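Port publishing is where this NAT becomes visible in practice: the `-p` option maps a host port to a container port, and `docker port` lists the active mappings (image and container names are illustrative):
# Map host port 8080 to container port 8080
$ docker run -d -p 8080:8080 --name waves catsriding/hello-docker:1.0.0
# List the container's port mappings
$ docker port waves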
Users can also customize Docker networks. You can define your own networks, choose drivers (e.g., `bridge`, `overlay`), set custom subnets, control IP assignment, and configure routing rules to fit project-specific needs.

The image above illustrates Docker’s network mechanism. Containers 1 through 3 are on the same bridge network and can communicate with each other. Container 4 resides on a different bridge network and, by default, cannot access the others due to Docker’s isolation policies.
However, Docker allows containers to connect to multiple networks. Using the `docker network connect` command, you can explicitly join a container to another network, enabling communication between otherwise isolated containers.
Ultimately, Docker provides flexible networking that can be adjusted to match the architecture and security needs of any application. Proper configuration allows for seamless communication between containers, whether they are on the same host or distributed across multiple machines.
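As a minimal sketch, two containers joined to the same user-defined bridge network can reach each other by container name, because Docker's embedded DNS resolves container names on user-defined networks (image names are illustrative):
# Create a user-defined bridge network
$ docker network create app-network
# Start the application container on that network
$ docker run -d --name waves --network app-network catsriding/hello-docker:1.0.0
# Call it by container name from another container on the same network
$ docker run --rm --network app-network curlimages/curl http://waves:8080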
4-2. Docker Network Commands
Docker provides a set of commands that simplify networking operations, allowing developers to manage container communication, port mapping, network isolation and security, and integration with external infrastructure more efficiently.
4-2-1. docker network ls
The `docker network ls` command lists all currently defined Docker networks.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
0a350a6107de bridge bridge local
af4e13892c91 host host local
919eabc335c1 none null local
23adcabd9a3b waves-network bridge local
In addition to the default `bridge`, `host`, and `none` networks, any user-defined networks will also appear in this list.
4-2-2. docker network create
The `docker network create` command creates a new network. Users can specify the network driver and subnet configuration. By default, the `bridge` driver is used.
$ docker network create --driver bridge waves-network
This example creates a custom bridge network named `waves-network`.
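A custom subnet and gateway can also be specified when creating the network (values are illustrative):
$ docker network create --driver bridge --subnet 172.20.0.0/16 --gateway 172.20.0.1 waves-network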
4-2-3. docker network rm
The `docker network rm` command removes a user-defined Docker network.
$ docker network rm waves-network
4-2-4. docker network connect
The `docker network connect` command connects a running container to a specified network.
$ docker network connect waves-network waves
In this example, the `waves` container is added to the `waves-network`.
A container can be connected to multiple networks, enabling it to communicate across different network scopes. To connect a container to multiple networks, repeat the command for each one:
$ docker network connect another-network waves
After this, the container can communicate within both `waves-network` and `another-network`.
4-2-5. docker network disconnect
The `docker network disconnect` command removes a container from a specified network.
$ docker network disconnect waves-network waves-container
This disconnects the `waves-container` from `waves-network` without stopping the container itself.
4-2-6. docker network inspect
The `docker network inspect` command retrieves detailed information about a specific network.
$ docker network inspect NETWORK
For example, inspecting the `waves-network` would look like this:
$ docker network inspect waves-network
The output includes detailed information such as configuration, connected containers, IP address assignments, and more, in JSON format:
[
    {
        "Name": "waves-network",
        "Id": "d7b9c780de8a2cd17a52c924745cece8dd4b5adceb32553b66e54a49364dc426",
        "Created": "2024-04-06T03:37:31.749671326Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
This output helps verify network configuration, diagnose connectivity issues, or audit container communication within a Docker environment.
5. Docker Volume
5-1. Docker Volume Concepts
Docker containers are inherently stateless, meaning each container processes requests independently and does not retain any internal state after it stops. This stateless nature allows containers to be easily created and destroyed, significantly improving scalability and flexibility.
However, this same characteristic introduces challenges in data management. Any data generated or modified within a container is lost once the container is removed.
To address this issue, Docker provides several mechanisms for persisting and sharing data:
- Volume: A volume is a storage mechanism managed by Docker that mounts a portion of the host’s filesystem into the container. Volumes can be created, maintained, and removed independently of containers. They can also be shared across multiple containers. Since Docker handles the management of volumes internally, users do not need to know the exact physical location of the data on the host.
- Bind mount: A bind mount allows any directory on the host system to be mounted into a container. This method enables direct access to the host’s file system, making it particularly useful for development, debugging, or testing scenarios.
- tmpfs mount: A tmpfs mount stores data in the host system’s memory (RAM), rather than on disk. These mounts are ephemeral and are not shared between containers or persisted on the host. They are ideal for storing sensitive or temporary data that should not survive beyond the container’s runtime.
Among these options, volumes are generally recommended when persistent storage is needed. Volumes provide a high degree of flexibility for backup, replication, and migration through Docker’s built-in tooling. Furthermore, they decouple data from the container lifecycle, ensuring that data remains intact even when the container is removed or restarted.
Docker Volume vs Bind Mount
While both volumes and bind mounts enable persistent data storage, they differ significantly in terms of use cases and management. Volumes are managed by Docker itself and integrate with Docker’s APIs and features, making them well-suited for production environments. The actual data location is abstracted away from the user. In contrast, bind mounts provide direct access to host directories and are more transparent to the user. They allow the container and host to modify the same data, which is useful for development workflows but can be risky in production if not carefully managed.
5-2. Docker Volume Commands
Docker provides a powerful set of commands for managing volumes efficiently. These commands simplify complex data operations such as persistent storage, backup, replication, and migration.
5-2-1. docker volume ls
The `docker volume ls` command lists all available volumes.
$ docker volume ls [OPTIONS]
- OPTIONS:
  - `-f`, `--filter`: Filters volumes based on conditions.
  - `--format`: Specifies the output format.
  - `-q`, `--quiet`: Displays only volume names.
Example:
$ docker volume ls
DRIVER VOLUME NAME
local ocean-volume
local waves-volume
5-2-2. docker volume create
The `docker volume create` command creates a new volume.
$ docker volume create [OPTIONS] [VOLUME]
- OPTIONS:
  - `-d`, `--driver`: Specifies the volume driver.
  - `--label`: Adds metadata labels to the volume.
  - `--name`: Names the volume.
  - `--opt`: Sets driver-specific options.
Example:
$ docker volume create ocean-volume
ocean-volume
5-2-3. docker volume rm
The `docker volume rm` command removes one or more volumes.
$ docker volume rm [OPTIONS] VOLUME [VOLUME...]
- OPTIONS:
  - `-f`, `--force`: Forces removal even if the volume is in use.
Example:
$ docker volume rm ocean-volume
ocean-volume
5-2-4. docker volume prune
The `docker volume prune` command removes all unused volumes.
$ docker volume prune [OPTIONS]
- OPTIONS:
  - `-f`, `--force`: Removes without confirmation.
  - `--filter`: Removes only volumes that match certain conditions.
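For example, to remove all unused volumes without a confirmation prompt:
$ docker volume prune -f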
5-2-5. docker volume inspect
The `docker volume inspect` command displays detailed information about one or more volumes.
$ docker volume inspect VOLUME
Example:
$ docker volume inspect ocean-volume
Sample output:
[
    {
        "CreatedAt": "2024-03-30T05:20:40Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/ocean-volume/_data",
        "Name": "ocean-volume",
        "Options": null,
        "Scope": "local"
    }
]
5-3. Docker Storage Connection
Docker supports two main ways to connect files and directories inside containers to the host: volumes and bind mounts.
5-3-1. Volumes
To mount a Docker-managed volume named `waves-volume` to the `/app/logs` directory inside a container:
$ docker run -d --name waves -v waves-volume:/app/logs catsriding/waves-server:1.0.1
Any data written to `/app/logs` inside the container is persisted in the `waves-volume` volume. Even if the container is deleted, the volume and its data remain intact and can be reused in a new container.
5-3-2. Bind Mounts
Bind mounts connect a specific directory from the host to a directory inside the container. This allows direct access and modification of host files by the container.
$ docker run -d --name waves -v /home/ec2-user/app/log:/app/logs catsriding/waves-server:1.0.1
This example binds the host directory `/home/ec2-user/app/log` to the container's `/app/logs`. This approach is useful during development but may require careful management in production due to access control and complexity in backup or migration.
6. Wrapping Up
This guide covered the fundamental concepts of Docker. At first, Docker felt quite daunting—not because the tool itself was inherently complex, but because of gaps in my foundational knowledge of computer science. As I gradually built up that foundation, Docker’s features and internal workings became increasingly clear. It reaffirmed the idea that strong fundamentals are essential in any technical field.