
Installation

Source : https://docs.docker.com/engine/install/ 

Ubuntu

OS requirements

To install Docker Engine, you need the 64-bit version of one of these Ubuntu versions:

  • Ubuntu Oracular 24.10
  • Ubuntu Noble 24.04 (LTS)
  • Ubuntu Jammy 22.04 (LTS)
  • Ubuntu Focal 20.04 (LTS)

Docker Engine for Ubuntu is compatible with x86_64 (or amd64), armhf, arm64, s390x, and ppc64le (ppc64el) architectures.

Uninstall old versions

Before you can install Docker Engine, you need to uninstall any conflicting packages.

Your Linux distribution may provide unofficial Docker packages, which may conflict with the official packages provided by Docker. You must uninstall these packages before you install the official version of Docker Engine.

The unofficial packages to uninstall are:

  • docker.io
  • docker-compose
  • docker-compose-v2
  • docker-doc
  • podman-docker

Run the following command to uninstall all conflicting packages:

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker; do sudo apt-get remove $pkg; done

Set up the apt repository

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Installation from the apt repository

Run the following command to install the latest version of Docker Engine from the apt repository:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

To verify that Docker Engine was successfully installed, run the following command:

sudo docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.

Mac

Install steps

  1. Download the installer using the download buttons at the top of the page, or from the release notes.

  2. Double-click Docker.dmg to open the installer, then drag the Docker icon to the Applications folder. By default, Docker Desktop is installed at /Applications/Docker.app.

  3. Double-click Docker.app in the Applications folder to start Docker.

  4. Select Accept to continue.

  5. From the installation window, select either:

    • Use recommended settings (Requires password). This lets Docker Desktop automatically set the necessary configuration settings.
    • Use advanced settings. You can then set the location of the Docker CLI tools either in the system or user directory, enable the default Docker socket, and enable privileged port mapping. See Settings, for more information and how to set the location of the Docker CLI tools.
  6. Select Finish. If you have applied any of the previous configurations that require a password in step 5, enter your password to confirm your choice.

Windows

Installation steps

  1. Download the installer using the download button at the top of the page, or from the release notes.

  2. Double-click Docker Desktop Installer.exe to run the installer. By default, Docker Desktop is installed at C:\Program Files\Docker\Docker.

  3. When prompted, select or clear the Use WSL 2 instead of Hyper-V option on the Configuration page, depending on your chosen backend.

  4. On systems that support only one backend, Docker Desktop automatically selects the available option.

  5. Follow the instructions on the installation wizard to authorize the installer and proceed with the installation.

  6. When the installation is successful, select Close to complete the installation process.

NB: If your administrator account is different from your user account, you must add the user to the docker-users group.

Docker and Docker Compose Locally

Running a NextJS project on Docker for local development is a little bit tricky. Here I explain the individual files and how they work. I am open to suggestions for improving the process and making it more efficient.

.npmrc

  • The package.json includes private packages owned by @phpcreation, which require a GitHub read-only token to download.
  • Create a .npmrc file with the following content. I have added .npmrc to .gitignore to simplify the setup for now (automating this is a possible improvement).
  • Replace PHPC_GITHUB_PKG_READ_TOKEN with your GitHub private npm packages read token.
@phpcreation:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=${PHPC_GITHUB_PKG_READ_TOKEN}
registry=https://registry.npmjs.org
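The Dockerfile snapshot below contains a commented-out envsubst line for substituting the token at build time. As an alternative that avoids the gettext dependency, here is a hedged sketch using sed. The template file name .npmrc.tmpl is an assumption for illustration, not a file the repo currently has; the sketch creates its own sample template so it runs standalone.

```shell
#!/bin/sh
# Hypothetical helper: render .npmrc from a committed template so the real
# token never lands in version control. The .npmrc.tmpl name is an assumption.
set -eu

# Fall back to a dummy value so the sketch runs standalone.
PHPC_GITHUB_PKG_READ_TOKEN="${PHPC_GITHUB_PKG_READ_TOKEN:-dummy-token}"

# Create a sample template (in a real setup this would be committed instead
# of the generated .npmrc).
cat > .npmrc.tmpl <<'EOF'
@phpcreation:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=${PHPC_GITHUB_PKG_READ_TOKEN}
registry=https://registry.npmjs.org
EOF

# Substitute the placeholder; sed avoids depending on gettext's envsubst.
sed "s|\${PHPC_GITHUB_PKG_READ_TOKEN}|${PHPC_GITHUB_PKG_READ_TOKEN}|" \
    .npmrc.tmpl > .npmrc

grep _authToken .npmrc
```

Note that npm and yarn can also expand `${PHPC_GITHUB_PKG_READ_TOKEN}` from the environment at install time, so build-time substitution is only needed when the variable cannot be passed through to the install step.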

Node modules

Unlike the PHP docker compose setup, we need node_modules installed in order to run the Next app, so the Dockerfile installs the package.json dependencies. However, it is extremely slow to sync them back to the host computer. We discuss a solution for this below in the doc.

Makefile

Docs: https://www.gnu.org/software/make/

make makes it easy to write executable commands. Please refer to the relevant OS docs for installing make.

loadEnv.sh

  • This script loads the correct .env file configuration from the environments directory based on APP_ENV.
  • It also copies the yarn.lock file to /app, so that it can be synced back to the host files.
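For reference, here is a minimal sketch of what such a script could look like; the real loadEnv.sh in the repo may differ. APP_DIR defaults to the current directory (instead of /app) and the script creates a demo environments directory, purely so the sketch can be tried outside a container.

```shell
#!/bin/sh
# Hypothetical sketch of loadEnv.sh: copy the env file matching APP_ENV into
# place and restore the yarn.lock preserved by the Dockerfile.
set -eu

APP_ENV="${APP_ENV:-dev}"
APP_DIR="${APP_DIR:-.}"

# Demo scaffolding so the sketch runs standalone; the real project already
# has an environments/ directory with real env files.
mkdir -p "$APP_DIR/environments"
[ -f "$APP_DIR/environments/.env.dev" ] || \
    echo "API_URL=http://localhost:3000" > "$APP_DIR/environments/.env.dev"

ENV_FILE="$APP_DIR/environments/.env.$APP_ENV"
cp "$ENV_FILE" "$APP_DIR/.env.local"

# Restore the lock file stashed at image build time so the bind mount does
# not clobber it (see the Dockerfile section).
[ -f /tmp/cache/yarn.lock ] && cp /tmp/cache/yarn.lock "$APP_DIR/yarn.lock" || true

echo "Loaded $ENV_FILE"
```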

Run the project

Run the following command to start up the project:

$ make up
  • It exports the UID and GID.
  • Triggers docker-compose.yaml.
  • Builds the Docker image using the Dockerfile.
  • As entrypoint, copies the /environments/.env.* file based on APP_ENV.
  • Runs next dev.
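The steps above can be sketched as the following shell equivalent; this is a hypothetical approximation, and the real Makefile target may differ.

```shell
#!/bin/sh
# Rough shell equivalent of `make up` (hypothetical sketch).
# docker-compose.yaml references ${UID} and ${GID} so the container user
# matches the host user.
set -eu

HOST_UID="$(id -u)"
HOST_GID="$(id -g)"
echo "Starting with UID=$HOST_UID GID=$HOST_GID"

# The actual invocation (commented out so the sketch runs without Docker);
# `env` is used because UID is a read-only variable in some shells:
# env UID="$HOST_UID" GID="$HOST_GID" docker compose up --build
```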

Dockerfile Snapshot

FROM node:18-alpine AS runner

# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine
# to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat gettext
# RUN apt-get update && apt-get install -y gettext

# Next.js collects completely anonymous telemetry data about general usage: https://nextjs.org/telemetry
# Disable nextjs telemetry
ENV NEXT_TELEMETRY_DISABLED 1

WORKDIR /app

# Install package.json dependencies
COPY package.json yarn.lock* .npmrc ./
# RUN export $(grep -v '^#' .env.setup | xargs) && envsubst < .npmrc > .npmrc.tmp && mv .npmrc.tmp .npmrc
RUN yarn install && yarn cache clean

# RUN addgroup --gid 1001 nodejs && adduser -D --uid 1001 -G nodejs nextjs
RUN mkdir -p /tmp/cache/
RUN cp /app/yarn.lock /tmp/cache/yarn.lock
RUN chown node:node /tmp/cache/yarn.lock

COPY . .
# USER nextjs

EXPOSE 3000

The yarn.lock file is copied to /tmp/cache/yarn.lock inside the Dockerfile

  • This preserves yarn.lock so it is not overwritten once the code is copied at the COPY . . step.
  • Later, in the loadEnv.sh script, yarn.lock is copied back as part of the code (to the /app directory).
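This preserve-and-restore dance can be demonstrated with plain files; the sketch below uses a local demo directory in place of the container's /app and /tmp/cache, so nothing here is the actual build.

```shell
#!/bin/sh
# Standalone demo of the yarn.lock preserve/restore trick.
set -eu

demo="./yarnlock-demo"
mkdir -p "$demo/app" "$demo/cache"

# 1. `yarn install` inside the image produces an up-to-date lock file.
echo "lock-from-yarn-install" > "$demo/app/yarn.lock"

# 2. The Dockerfile stashes it before `COPY . .` runs.
cp "$demo/app/yarn.lock" "$demo/cache/yarn.lock"

# 3. `COPY . .` (and later the bind mount) clobbers it with the host copy.
echo "stale-lock-from-host" > "$demo/app/yarn.lock"

# 4. loadEnv.sh restores the stashed copy at container start.
cp "$demo/cache/yarn.lock" "$demo/app/yarn.lock"

cat "$demo/app/yarn.lock"
```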

docker-compose.yaml Snapshot

services:
  # ... other services
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      APP_ENV: dev
      NODE_ENV: development
      DYNAMODB_RUNTIME: local
      WATCHPACK_POLLING: true
    command: ["npm", "run", "dev"]
    # user: ${UID}:${GID}
    depends_on:
      - dynamodb
    volumes:
      - ./:/app
      - /app/node_modules
    # ... other config

We create two volumes:

  • /app/node_modules, which does not point to any path in the host OS; hence node_modules is not synced back to the host.
  • ./:/app, which syncs the rest of the code files between the host OS and the container.

user: ${UID}:${GID}

Since we are working locally, to simplify most of the permission issues we run the code inside containers with the same UID and GID as the host OS user's. This resolves the weird permission issues.

Self-Note: Review and improve permission related issues

VS Code Dev Containers

With the Docker setup there is one issue: node_modules is not present locally, so the IDE cannot provide IntelliSense and you see red squiggly lines everywhere.
VS Code provides the Dev Containers feature, which makes it possible to connect your VS Code IDE to a Docker container. You then write and test your code inside the container itself. Please refer to the following documentation on how to connect to a Docker container from VS Code:
VS Code Dev Containers 
Now that you are attached to the container and the node modules are present inside it, you get a seamless development experience without needing to install node_modules separately on the host OS.
