Installation
Source: https://docs.docker.com/engine/install/
Ubuntu
OS requirements
To install Docker Engine, you need the 64-bit version of one of these Ubuntu versions:
- Ubuntu Oracular 24.10
- Ubuntu Noble 24.04 (LTS)
- Ubuntu Jammy 22.04 (LTS)
- Ubuntu Focal 20.04 (LTS)
Docker Engine for Ubuntu is compatible with x86_64 (or amd64), armhf, arm64, s390x, and ppc64le (ppc64el) architectures.
Uninstall old version
Before you can install Docker Engine, you need to uninstall any conflicting packages.
Your Linux distribution may provide unofficial Docker packages, which may conflict with the official packages provided by Docker. You must uninstall these packages before you install the official version of Docker Engine.
The unofficial packages to uninstall are:
- docker.io
- docker-compose
- docker-compose-v2
- docker-doc
- podman-docker
Run the following command to uninstall all conflicting packages:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker; do sudo apt-get remove $pkg; done
Set up the apt repository
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
Installation from the apt repository
Run the following command to install the latest version of Docker Engine from the apt repository:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
To verify that Docker Engine was successfully installed, run the following command:
sudo docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
Mac
Install steps
1. Download the installer using the download buttons at the top of the page, or from the release notes.
2. Double-click Docker.dmg to open the installer, then drag the Docker icon to the Applications folder. By default, Docker Desktop is installed at /Applications/Docker.app.
3. Double-click Docker.app in the Applications folder to start Docker.
4. Select Accept to continue.
5. From the installation window, select either:
   - Use recommended settings (Requires password). This lets Docker Desktop automatically set the necessary configuration settings.
   - Use advanced settings. You can then set the location of the Docker CLI tools either in the system or user directory, enable the default Docker socket, and enable privileged port mapping. See Settings for more information and how to set the location of the Docker CLI tools.
6. Select Finish. If you applied any configuration that requires a password in the previous step, enter your password to confirm your choice.
Windows
Installation steps
1. Download the installer using the download button at the top of the page, or from the release notes.
2. Double-click Docker Desktop Installer.exe to run the installer. By default, Docker Desktop is installed at C:\Program Files\Docker\Docker.
3. When prompted, select or deselect the Use WSL 2 instead of Hyper-V option on the Configuration page, depending on your choice of backend. On systems that support only one backend, Docker Desktop automatically selects the available option.
4. Follow the instructions on the installation wizard to authorize the installer and proceed with the installation.
5. When the installation is successful, select Close to complete the installation process.
NB: If your administrator account is different from your user account, you must add the user to the docker-users group.
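The group membership can be added from an elevated terminal; this is a sketch and `<your-username>` is a placeholder to fill in:

```powershell
# Run as administrator; replace <your-username> with the actual account name
net localgroup docker-users <your-username> /add
```

Sign out and back in for the group change to take effect.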
Docker and Docker Compose Locally
Running a Next.js project on Docker for local development is a little tricky. Below I explain the individual files and how they work. I am open to suggestions for improvements and for making the process more efficient.
.npmrc
- Since we have private @phpcreation-owned packages in package.json, a GitHub read-only token is required to download them.
- Create a .npmrc file with the following content. I have added .npmrc to .gitignore to simplify the setup for now.
- Improvement for automation:
  - Replace PHPC_GITHUB_PKG_READ_TOKEN with your GitHub private npm packages read token.
@phpcreation:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=${PHPC_GITHUB_PKG_READ_TOKEN}
registry=https://registry.npmjs.org
Node modules
Unlike the PHP docker compose setup, we need node_modules installed in order to run the Next.js app. Hence, in the Dockerfile, we install the package.json dependencies. However, it is extremely slow to sync them back to the host computer. We will discuss a solution for this below.
Makefile
Docs: https://www.gnu.org/software/make/
make makes it easy to write executable commands. Please refer to the relevant OS docs for installing make.
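Since the project is driven through `make up` (see "Run the project" below), a Makefile along these lines is plausible. This is an assumption-based sketch, not the project's actual Makefile; only the `up` target name and the UID/GID export come from this doc:

```makefile
# Hypothetical sketch -- the real Makefile may differ.
# Export the host user's IDs so the compose file can run the container
# as the same user (see the user: ${UID}:${GID} note further below).
export UID := $(shell id -u)
export GID := $(shell id -g)

up:
	docker compose up --build
```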
loadEnv.sh
- This script loads the correct .env file configuration from the environments directory based on APP_ENV.
- It also copies the yarn.lock file to /app, so that it can be synced back to the host files.
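Based on the description above, loadEnv.sh plausibly looks something like this. It is a sketch under assumed paths and names; the real script may differ:

```shell
#!/bin/sh
# Pick the env file matching APP_ENV (defaulting to dev here for illustration)
APP_ENV="${APP_ENV:-dev}"
ENV_FILE="environments/.env.$APP_ENV"
# Copy it into place if present
[ -f "$ENV_FILE" ] && cp "$ENV_FILE" /app/.env
# Restore the yarn.lock preserved by the Dockerfile (see Dockerfile snapshot)
[ -f /tmp/cache/yarn.lock ] && cp /tmp/cache/yarn.lock /app/yarn.lock
echo "environment: $APP_ENV"
```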
Run the project
Run the following command to start up the project:
$ make up
- It will export the UID and GID.
- Trigger docker-compose.yaml.
- Build the Docker image using the Dockerfile.
- As the entrypoint, it will copy the /environments/.env.* file based on APP_ENV.
- Run next dev.
Dockerfile Snapshot
FROM node:18-alpine AS runner
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat gettext
# RUN apt-get update && apt-get install -y gettext
# Next.js collects completely anonymous telemetry data about general usage: https://nextjs.org/telemetry
# Disable nextjs telemetry
ENV NEXT_TELEMETRY_DISABLED=1
WORKDIR /app
# Install package.json dependencies
COPY package.json yarn.lock* .npmrc ./
# RUN export $(grep -v '^#' .env.setup | xargs) && envsubst < .npmrc > .npmrc.tmp && mv .npmrc.tmp .npmrc
RUN yarn install && yarn cache clean
# RUN addgroup --gid 1001 nodejs && adduser -D --uid 1001 -G nodejs nextjs
RUN mkdir -p /tmp/cache/
RUN cp /app/yarn.lock /tmp/cache/yarn.lock
RUN chown node:node /tmp/cache/yarn.lock
COPY . .
# USER nextjs
EXPOSE 3000
yarn.lock file is copied to /tmp/cache/yarn.lock inside Dockerfile
- This preserves the yarn.lock so it is not overwritten when the code is copied at the COPY . . step.
- Later on, in the loadEnv.sh script, the yarn.lock is copied back as part of the code (to the /app directory).
docker-compose.yaml Snapshot
services:
  # ... other services
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      APP_ENV: dev
      NODE_ENV: development
      DYNAMODB_RUNTIME: local
      WATCHPACK_POLLING: "true"
    command: ["npm", "run", "dev"]
    # user: ${UID}:${GID}
    depends_on:
      - dynamodb
    volumes:
      - ./:/app
      - /app/node_modules
    # ... other config
We are creating 2 volumes:
- /app/node_modules does not point to any path on the host OS; hence node_modules is not synced back to the host.
- ./:/app syncs the rest of the code files between the host OS and the Docker image.
user: ${UID}:${GID}
Since we are working locally, to simplify most of the permission issues, we run the code inside the containers with the same UID and GID as the host OS user's. This resolves the weird permission issues.
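For illustration, the host IDs can be captured like this (a sketch; the HOST_ prefix is my own naming, since bash reserves UID as a read-only variable, which is why a Makefile or plain sh handles the export in practice):

```shell
# Capture the host user's and group's numeric IDs
HOST_UID="$(id -u)"
HOST_GID="$(id -g)"
# docker compose would then pick these up via the user: directive
echo "containers run as ${HOST_UID}:${HOST_GID}"
```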
Self-Note: Review and improve permission-related issues.
VS Code Dev Containers
With the Docker setup, there is one issue: node_modules is not present locally, so VS Code won't provide IntelliSense and you will see red squiggly lines everywhere.
VS Code provides Dev Containers feature which makes it possible to connect your VS Code IDE to a Docker container. Hence, you write your code inside the container itself and test it. Please refer to the following documentation on how to connect to a Docker container from VS Code:
VS Code Dev Containers
Now that you are attached to the container and node modules are present inside the container, you will have a seamless development experience without needing to install node_modules separately on the local host OS.
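As a starting point, a minimal .devcontainer/devcontainer.json that attaches VS Code to the existing compose service might look like this. The service name app comes from the compose snapshot above; the other values are assumptions:

```jsonc
{
  // Reuse the project's compose file and attach to the app service
  "name": "nextjs-app",
  "dockerComposeFile": "../docker-compose.yaml",
  "service": "app",
  // Matches the WORKDIR /app set in the Dockerfile
  "workspaceFolder": "/app"
}
```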