Problem
You have an app that you want to deploy to the cloud. How do you set it up so that you get
- portability (the ability to switch to another cloud provider)
- rapid deployments (frequent, low-friction releases)
Solution: We use OCI DevOps tooling to drive the pipeline operations. The instance is provisioned from a thin image, with a cloud-init script that installs Docker. Since Docker is installed on the instance, we keep the apps in containers and implement a blue-green swap pattern. Under this pattern, the old version of the app container is marked blue and the new app container is marked green. The swap then rewires traffic so that all essential wiring reaches the green container, completing the update.
Instance Setup
The instance is a single virtual machine running the app. It is launched from an image. The image itself is a thin base image, extended at boot by user data (a cloud-init script).
A dichotomy appears: bake the additional dependencies and other software into the image (static), or install them at boot time (dynamic). Here’s a simple table to contrast the two.
| Feature | Cloud-Init (Scripting) | Image Baking (Packer) |
|---|---|---|
| Setup Time | Slower (Installs on boot) | Fastest (Already installed) |
| Updates | Always the latest versions | Stale (Requires re-baking) |
| Portability | High (Works across shapes) | Low (Locked to CPU arch) |
| Debugging | Harder (Check /var/log) | Easier (It either works or it doesn’t) |
| Best For | Development / Small Scale | High-Scale Production |
Thin Image + Custom Cloud-Init
```bash
#!/bin/bash
# 1. Update the Ubuntu repositories to the latest versions
apt-get update -y
apt-get upgrade -y

# 2. Install prerequisites for the official Docker repository
apt-get install -y ca-certificates curl gnupg lsb-release

# 3. Add Docker's official GPG key for security
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# 4. Set up the official Docker stable repository
# (the architecture is detected automatically, so this works on ARM/Aarch64 too)
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

# 5. Update the repo list again to include Docker's specific repo
apt-get update -y

# 6. Install Docker Engine, CLI, and the Compose plugin
# This installs 'docker compose' (the V2 version)
apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# 7. Start and enable the Docker service so it persists through reboots
systemctl enable docker
systemctl start docker

# 8. Add the default 'ubuntu' user to the docker group
# This lets you run 'docker' commands without 'sudo' after you log in
usermod -aG docker ubuntu

# 9. Verify installation in the logs
echo "Docker installation and system update complete."
```
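One way to attach this script to the instance at launch is as user data. A minimal sketch, assuming the script above is saved as `install-docker.sh` and the VM is provisioned with the OCI Terraform provider (required arguments like shape, image, and subnet are elided):

```hcl
resource "oci_core_instance" "app" {
  # ...availability_domain, compartment_id, shape, source_details,
  # and create_vnic_details omitted for brevity...

  metadata = {
    # Cloud-init user data must be base64-encoded when passed through the API
    user_data = base64encode(file("install-docker.sh"))
  }
}
```

The same base64-encoded script can equally be pasted into the Console's "Cloud-init script" field when creating the instance by hand.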
Zero Downtime Strategy
We want a setup where new code, a release, or a deployment can be pushed to production with ZERO downtime. To do this, we set up a build pipeline. When code is pushed, OCI triggers a Build Stage that creates an image and submits it to the OCI Registry. Once the image is in the registry, the pipeline runs a deployment command on the VM.
Pipeline Setup
In short: build the image, then push it to the registry.
```yaml
version: 0.1
component: build
timeoutInSeconds: 600
steps:
  - type: Command
    name: "Build Docker Image"
    command: |
      docker build -t my-app:${OCI_BUILD_RUN_ID} .
  - type: Command
    name: "Push to Registry"
    command: |
      docker push my-app:${OCI_BUILD_RUN_ID}
```
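One caveat worth flagging: a bare tag like `my-app:${OCI_BUILD_RUN_ID}` will not push to the OCI Registry; OCIR requires a fully qualified name of the form `<region-key>.ocir.io/<tenancy-namespace>/<repo>:<tag>`. A small sketch with hypothetical values (in the real pipeline, `OCI_BUILD_RUN_ID` is injected by the build run):

```shell
REGION_KEY="iad"               # example region key (Ashburn)
TENANCY_NAMESPACE="mytenancy"  # hypothetical tenancy namespace
OCI_BUILD_RUN_ID="42"
IMAGE_URL="${REGION_KEY}.ocir.io/${TENANCY_NAMESPACE}/my-app:${OCI_BUILD_RUN_ID}"
echo "$IMAGE_URL"   # → iad.ocir.io/mytenancy/my-app:42
```

This fully qualified URL is also what the deployment spec below receives as `${DOCKER_IMAGE_URL}`.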
Then deploy to the VM. Note that this simple stop-and-replace sequence leaves a brief gap between the old container stopping and the new one starting; the blue-green swap below closes that gap.
```yaml
version: 1.0
component: deployment
agentConfig:
  runAs: root
steps:
  - type: Command
    name: "Update Container"
    command: |
      docker stop my-app-container || true
      docker rm my-app-container || true
      docker pull ${DOCKER_IMAGE_URL}
      docker run -d --name my-app-container -p 80:80 ${DOCKER_IMAGE_URL}
```
Blue-Green Swapout
For this setup to have zero downtime, we run two copies of the app container.
- The old container (blue) is listening on port 8081.
- The new container (green) comes up on port 8082.
- The pipeline waits and pings the green container as a health check.
- If the check passes, the pipeline tells Nginx to switch traffic to port 8082.
- Then it kills the blue container.
Here’s the bash script.
```bash
#!/bin/bash
# 1. Determine which port is currently "Live"
# (2>/dev/null keeps the check quiet on the very first deploy, when app-blue doesn't exist yet)
if [ "$(docker inspect -f '{{.State.Running}}' app-blue 2>/dev/null)" = "true" ]; then
    TARGET_COLOR="green"
    TARGET_PORT="8082"
    OLD_COLOR="blue"
else
    TARGET_COLOR="blue"
    TARGET_PORT="8081"
    OLD_COLOR="green"
fi

# 2. Start the NEW container
# (first remove any stale container of the target color left by a previous failed run)
docker rm -f app-${TARGET_COLOR} 2>/dev/null || true
docker pull ${IMAGE_URL}
docker run -d --name app-${TARGET_COLOR} -p ${TARGET_PORT}:80 ${IMAGE_URL}

# 3. Wait for it to be ready (the "Health Check")
until curl --output /dev/null --silent --head --fail http://localhost:${TARGET_PORT}/health; do
    printf '.'
    sleep 5
done

# 4. Atomic swap in Nginx
# (We overwrite a small config file that defines the 'upstream')
echo "upstream my_app { server 127.0.0.1:${TARGET_PORT}; }" > /etc/nginx/conf.d/upstream.conf
nginx -s reload

# 5. Clean up the OLD container
# ('|| true' so the first-ever deploy doesn't fail when there is no old container)
docker stop app-${OLD_COLOR} || true
docker rm app-${OLD_COLOR} || true
```
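For the swap to take effect, Nginx's site config must proxy through the `my_app` upstream that the script rewrites. A minimal sketch, assuming the stock `/etc/nginx/conf.d/*.conf` include is in place and using a hypothetical filename:

```nginx
# /etc/nginx/conf.d/my-app.conf
# The 'upstream my_app' block lives in upstream.conf, which the deploy
# script overwrites on every swap; this server block never changes.
server {
    listen 80;

    location / {
        # Proxy all traffic to whichever port the upstream currently points at
        proxy_pass http://my_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Because `nginx -s reload` starts new workers with the new config while old workers finish their in-flight requests, the port switch itself drops no connections.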