I publish journal notes of my cost-effective blog engineering challenge. This time, the topic is the cornerstone of my design - the compute instance. Bear with me for a few more minutes, and I'll walk you through the chain of decision points to the final solution.

Once again, I start with my mantra: containers are the most effective way to keep up with the latest software releases. Lucky me: the Ghost community publishes Docker images and a few examples of container-based deployments.

Since I need at least two containers (MySQL and Ghost), I might have gone with Podman Compose (which I use under WSL2), but Google has already made this choice for me with its standard Container-Optimized OS image. It is a hardened image with a single purpose - safely and effectively running Docker containers. Next stop - building the startup script.

Startup Scripts

In broad strokes, a new instance should perform the following steps:

  • Adjust the instance configuration
  • Download the latest backup and configuration files
  • Spin up a new container stack

For starters, Container-Optimized OS ships without the Google Cloud CLI, and you can't install it because there is no package manager; the OS does not allow you to execute custom applications either. Fortunately, it offers the toolbox utility - a containerized app you can use to access Google Cloud services. Another feature of this instance: the OS takes about 5 GB of the storage device, and the rest is mounted as stateful storage. What we can take and use: the toolbox container mounts the /var location by default, and my extra space for the site and database lives outside the root partition. There is one more essential system update - disabling the Docker daemon's live restore. So the first part of my startup script looks like this:

echo "0. Prepare file structure"
 mkdir -p /mnt/stateful_partition/ghost 
 ln -s /mnt/stateful_partition/ghost /var/ghost
 mkdir -p /var/ghost/sql-init
 mkdir -p /var/ghost/sql-load

echo "1. Update Docker Daemon Configuration"
 sed -i 's/\("live-restore"\): true/\1: false/g' /etc/docker/daemon.json 
 systemctl restart docker
Set Up the Compute Engine Instance
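Before baking the sed expression into the startup script, you can sanity-check it against a throwaway copy of the daemon configuration (the /tmp path and sample file content below are just for the demo; the real file on the instance is /etc/docker/daemon.json):

```shell
# Demo: the live-restore substitution applied to a sample daemon.json
printf '{\n  "live-restore": true\n}\n' > /tmp/daemon.json
sed -i 's/\("live-restore"\): true/\1: false/g' /tmp/daemon.json
grep '"live-restore": false' /tmp/daemon.json && echo OK
```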

Now the system is ready for the configuration files. Next, I need to restore three things: the static content, the database dump file, and the stack configuration. With the toolbox in mind, my restoration commands are:

echo "2.1 Fetch Site Content"
 cd /var/ghost/
 toolbox gsutil cp gs://${google_cloud_bucket}/site-backup/chronicler.content.tgz  /media/root/var/ghost/chronicler.content.tgz
echo "2.2 Unpack Site Content"
 tar zxf chronicler.content.tgz && rm chronicler.content.tgz

echo "3.1 Fetch Database Content"
  toolbox gsutil cp gs://${google_cloud_bucket}/site-backup/chronicler.sql.gz /media/root/var/ghost/sql-load/chronicler.sql.gz
echo "3.2 Unpack DB Dump"
  gunzip /var/ghost/sql-load/chronicler.sql.gz 
echo "3.3 Fix Legacy Charset Settings"
  # assumed conversion of a legacy utf8 dump to utf8mb4; adjust the patterns to your dump
  sed -i 's/utf8_general_ci/utf8mb4_0900_ai_ci/g;s/CHARSET=utf8\b/CHARSET=utf8mb4/g' /var/ghost/sql-load/chronicler.sql
echo "4. Get Stack File"
  toolbox gsutil cp gs://${google_cloud_bucket}/stack-scripts/docker-compose.yaml /media/root/var/ghost/docker-compose.yaml
Restore the Site Content

Please note that gsutil runs in the toolbox container, so the host's /var/ghost folder becomes /media/root/var/ghost inside it. An extra step adjusts my legacy database character set and collation settings.
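You can dry-run the charset conversion on a sample dump line first. The exact patterns depend on what your legacy dump contains; mapping utf8/utf8_general_ci to utf8mb4/utf8mb4_0900_ai_ci is my assumption here, and the sample SQL line is illustrative:

```shell
# Demo: legacy utf8/utf8_general_ci rewritten to utf8mb4/utf8mb4_0900_ai_ci
echo 'ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;' > /tmp/sample.sql
sed -i 's/utf8_general_ci/utf8mb4_0900_ai_ci/g;s/CHARSET=utf8\b/CHARSET=utf8mb4/g' /tmp/sample.sql
cat /tmp/sample.sql
```

Note the order: the collation is rewritten first, so the plain `CHARSET=utf8` pattern no longer risks double-matching the already-converted `utf8mb4` strings.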

Now the whole scene is set to start up the blog's clone. All I need to do is initialize the swarm and bring up the Ghost stack. But before we do, let's look at the Docker stack descriptor. It's very similar to a docker-compose file, with a few product-specific twists:

version: '3.1'

services:
  db:
    image: docker.io/library/mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      -  /var/ghost/sql-load/chronicler.sql:/docker-entrypoint-initdb.d/chronicler.sql:ro

  ghost:
    image: docker.io/library/ghost:latest
    ports:
      - 80:2368
    environment:
      # see https://ghost.org/docs/config/#configuration-options
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: example
      database__connection__database: ghost_dbatabase
      # this url value is just an example, and is likely wrong for your environment!
      url: http://localhost:80
      logging__level: 'info'
    volumes:
      -  /var/ghost/content:/var/lib/ghost/content:rw
Docker Stack Descriptor

There are a few critical things to consider:

  • Static content for the Ghost container is mounted at /var/lib/ghost/content; on our side, respectively, the content lives at /var/ghost/content.
  • Set your own database password. The "example" one is for illustration purposes only, even though it's not reachable from outside the stack.
  • The database name 'ghost_dbatabase' is part of the full database export.
  • The /docker-entrypoint-initdb.d/ mapping is for the site content restoration. If you have a more complex initial setup, put .sql, .sql.gz, or .sh files there, and MySQL will run them as part of the database initialization.
  • The Ghost blog connection parameters are derived from the MySQL container configuration.
  • I used a port 80 mapping since allowing HTTP traffic is only one click in the instance configuration.
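The final startup-script step - initializing the swarm and bringing up the stack - could look like the sketch below (the stack name "ghost" and the error handling are my assumptions, not from the original script):

```shell
echo "5. Start the Stack"
 # docker stack deploy requires swarm mode; on a fresh single node, init it
 # and ignore the error if the node is already part of a swarm
 docker swarm init 2>/dev/null || true
 docker stack deploy -c /var/ghost/docker-compose.yaml ghost
```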

Once all the commands executed without a hitch, I combined them into a single shell script and uploaded it to the same storage bucket. This way, I can use it as the startup-script-url parameter for my instance template.
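Assuming the init-scripts path used in the instance template and a script named init-vm.sh (the name and bucket variable are placeholders), the upload is a one-liner:

```shell
# Upload the combined startup script to the bucket referenced by the template
gsutil cp init-vm.sh gs://${your-bucket}/init-scripts/init-vm.sh
```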

Create an Instance Group Template

Since day one, I have meant to use spot instances for my development environment. So let's start by saving the startup and shutdown scripts to the same Google Cloud Storage bucket; you can then point the startup-script-url and shutdown-script-url metadata attributes at those files. After some cost/performance estimation, I ended up with a regular two-vCPU instance with 2 GB of RAM and a standard persistent disk (50 GB is more than enough today). For development purposes, I use the e2-small VM with the Container-Optimized OS image for the spot instance.

gcloud compute instance-templates create dev-chronicler-template --project=${your-project} --machine-type=e2-small --network-interface=network=default,\
network-tier=PREMIUM --metadata=startup-script-url=https://storage.cloud.google.com/${your-bucket}/init-scripts/init-vm.sh,shutdown-script-url=https://storage.cloud.google.com/${your-bucket}/init-scripts/stop-vm.sh,google-logging-enabled=true,google-monitoring-enabled=true \
--no-restart-on-failure --maintenance-policy=TERMINATE --provisioning-model=SPOT \
--instance-termination-action=STOP --service-account=${your-id}-compute@developer.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=allow-health-check,http-server,https-server \
--create-disk=auto-delete=yes,boot=yes,device-name=instance-template-1,image=projects/cos-cloud/global/images/cos-101-17162-210-12,mode=rw,size=50,type=pd-standard \
--no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring \
--reservation-affinity=any
Instance Template Configuration

To fit the strict budget limitations, my instance template:

  • Uses the e2-small instance as the most suitable low-load shape for the task.
  • Uses the standard persistent disk rather than the balanced one (the default and more expensive option).
  • Allocates 50 GB instead of the default 10 GB.

The next step covers the load balancer, firewall, and instance group configuration.

Previous articles in the series are: