Deploying My Site on Kubernetes with GitHub Actions and ArgoCD

This is an update of a post on my GitHub from August 2024.

So, I wanted a place to tie together everything: my music study crams, my deployments, the books, and this educational resource repo. So I made the repo for my main site, Beatsinthe.cloud, public for you to observe or even fork. It consists of a static HTML front page with CSS and JavaScript elements, plus a Ghost blog attached to the backend with its own unique domain name (both through Cloudflare). I will explain how I got this deployed. I've updated things a bit.

This repository contains the source code and configuration for my website, deployed on a self-managed K3s cluster hosted on Hetzner Cloud. The deployment is automated using GitHub Actions and ArgoCD Core to ensure that changes pushed to the repository are reflected in the live environment without manual intervention. I am still figuring out how to get Ghost to work in a GitOps manner, or if it's even feasible. I started hosting on Digital Ocean's managed Kubernetes, which I quite liked, but realized that scaling would be costly, so I decided to move to Hetzner.

Overview

I set up the CI/CD pipeline to automatically build and deploy changes to my site.

Tools Used

  • GitHub Actions: For automating the CI process (Docker image build and push).
  • Argo: For automating the CD process (monitoring and updating Kubernetes deployments).
  • Docker: For containerizing the nginx app.
  • Kubernetes(K3s): For orchestrating the containerized application in a cloud environment. Lightweight and familiar.
  • Hetzner (formerly Digital Ocean managed K8s): As the cloud provider for hosting the Kubernetes cluster.

Project Structure

Here’s an overview of the important files and directories in this repo:

  • .github/workflows/docker-image.yml: The GitHub Actions workflow that automates the Docker image build and push process.
  • deployment.yaml: Defines the Kubernetes deployment for the app.
  • service.yaml: Configures the Kubernetes service to expose the app.
  • ingress.yaml: Manages traffic flow for the cluster.
  • Dockerfile: Describes how the Docker image is built.

Workflow Overview

1. GitHub Actions for CI

Whenever I commit changes to the repository, GitHub Actions triggers a workflow that:

  • Builds the Docker image using the Dockerfile, courtesy of GitHub Actions.
  • Pushes the updated image to my DockerHub account.

This automates the image creation process, ensuring that the latest version of the site is always available if disaster occurs.
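
For reference, here's a minimal sketch of what that workflow can look like. The image name and secret names are placeholders, not necessarily what my repo actually uses.

```yaml
# .github/workflows/docker-image.yml (sketch; image and secret names are placeholders)
name: Build and push Docker image

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      # Check out the repo so the Dockerfile and site files are available
      - uses: actions/checkout@v4

      # Authenticate against Docker Hub using repository secrets
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # Build the image from the Dockerfile and push it to Docker Hub
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: mydockerhubuser/beatsinthe-cloud:latest
```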

2. Argo CD for Continuous Deployment

Argo CD continuously monitors my GitHub repository for changes. When a new version is detected, Argo:

  • Updates the Kubernetes deployment with the new image.
  • Rolls out the changes by replacing the old deployment with the new one, ensuring zero downtime during the update.

Argo knows this because I've configured it to recognize changes in the deployment manifest, which I version with timestamps.
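
One simple way to do that kind of timestamp versioning is bumping an annotation on the pod template; the key and value in this excerpt are illustrative, not necessarily what my manifest uses.

```yaml
# deployment.yaml (excerpt) — changing the annotation gives Argo CD a diff to sync
# and forces a fresh rollout of the pods.
spec:
  template:
    metadata:
      annotations:
        deployed-at: "2024-08-15T12:00:00Z"  # bumped on every release
```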

3. Kubernetes

Kubernetes (K3s) handles the deployment and scaling of the containerized app. The app's deployment and service are defined in the deployment.yaml, service.yaml, and ingress.yaml files. These manifests define how the site runs and how it is exposed to the internet.
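
Here's a trimmed-down sketch of what those three manifests can look like together. The names, image, replica count, and domain are placeholders, and I'm assuming K3s's default Traefik ingress class.

```yaml
# deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: site
  template:
    metadata:
      labels:
        app: site
    spec:
      containers:
        - name: nginx
          image: mydockerhubuser/beatsinthe-cloud:latest  # placeholder image
          ports:
            - containerPort: 80
---
# service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: site
spec:
  selector:
    app: site
  ports:
    - port: 80
      targetPort: 80
---
# ingress.yaml (sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site
spec:
  ingressClassName: traefik          # K3s ships with Traefik by default
  tls:
    - hosts: [beatsinthe.cloud]
      secretName: site-tls           # placeholder for the TLS secret
  rules:
    - host: beatsinthe.cloud
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site
                port:
                  number: 80
```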

How It Works

Here’s a high-level look at how the automation works:

  1. Push updates to the repository (e.g., modifying deployment.yaml or Dockerfile).
  2. GitHub Actions builds and pushes the new Docker image to DockerHub.
  3. Argo detects the changes in GitHub and updates the K3s deployment.
  4. Kubernetes rolls out the updated deployment without any downtime.

Migrating from Digital Ocean to Hetzner Cloud

Parting is such sweet sorrow. But this was truly painless. Since everything was on GitHub, it was as simple as:

  • Installing K3s on the new VM (I prefer Ubuntu. Whatever, man). Make sure you run sudo apt update before you do and save yourself the headaches.
  • Installing Argo CD Core and setting up the needed permissions, passkeys, secrets, etc. in the cluster.
  • Creating the App in the ArgoCD UI and setting sync to automatic (the declarative equivalent is sketched after this list).
  • Pointing the DNS records to the server's public IP.
  • Importing the TLS secrets from the previous cluster on Digital Ocean (you could also issue a new certificate from cert-manager).
  • Updating the Ingress manifest to reflect the domain and TLS secret, then versioning my deployment manifest.
  • Going back to the Argo UI and syncing.
  • Drinking some well-deserved coffee.
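
For reference, the declarative equivalent of that Application (with automatic sync) looks roughly like this; the repo URL, path, and namespaces are placeholders.

```yaml
# argocd-app.yaml (sketch; repo URL, path, and namespaces are placeholders)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: beatsinthe-cloud
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-user>/beatsinthe.cloud.git
    targetRevision: main
    path: .                      # directory holding deployment.yaml, service.yaml, ingress.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true                # remove resources deleted from Git
      selfHeal: true             # revert manual drift in the cluster
```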

Ghost Backend Blog

So this was quite a challenge, and I spent more time on it than I am comfortable admitting. The blog originally sat alone in a Docker container from the official Ghost team on an AWS instance. I had reverse-proxied it into my cluster, since the resources I had on Digital Ocean didn't allow it to live on the same cluster (due to resources and cost). However, I encountered several problems with the Ingress controller(s). I looked at countless blog posts, forums, and videos, with little success for what I was trying to achieve. I ran into "too many redirects" errors, and ingress inconsistencies where the JS and CSS assets were not displaying while sharing an ingress with the HTML page (although the HTML page displayed just fine). Then I tried separate Ingresses. I tried everything: going into the config files, changing absolute paths. It was painful.

One day it just suddenly came to me. See, one of the primary reasons for moving to Hetzner Cloud was the flexibility in pricing for load balancing while self-hosting, compared to Digital Ocean's admittedly excellent managed Kubernetes offering. What ultimately worked was installing Ghost directly on the AWS VM, configuring the required database with the environment variables, creating a Docker image from it all, and uploading that image to my private Docker Hub repository. From there, I was able to deploy it to my K3s cluster easily by creating a LoadBalancer service for it and an Ingress that specifies a different port than the static HTML page uses, all in its own namespace (I don't want ArgoCD deleting it for not being on GitHub). That allowed me to delete my AWS instance and save the credits for other endeavors. On Digital Ocean I would not have been able to do this, as creating an additional load balancer incurs an additional $15 a month.
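
A rough sketch of the Ghost pieces described above. The namespace, domain, and port are placeholders (2368 is Ghost's default), and the Deployment that runs the private image is omitted for brevity.

```yaml
# Ghost lives in its own namespace so the Argo-managed app doesn't touch it.
apiVersion: v1
kind: Namespace
metadata:
  name: ghost
---
apiVersion: v1
kind: Service
metadata:
  name: ghost
  namespace: ghost
spec:
  type: LoadBalancer        # Hetzner provisions a load balancer for this
  selector:
    app: ghost
  ports:
    - port: 2368            # different port than the static site's 80
      targetPort: 2368
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghost
  namespace: ghost
spec:
  ingressClassName: traefik
  rules:
    - host: blog.example.com          # placeholder for the blog's own domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ghost
                port:
                  number: 2368
```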

Why This Setup?

This setup provides a fully automated CI/CD pipeline for deploying my site. Using GitHub Actions for CI and Argo for CD ensures that I can focus on the fun stuff (coding HTML, yay), while the deployment process is handled seamlessly. Also because I don't want a lot of maintenance and wanted to show off. In other words, I'm both lazy and a show-off.

Why My Setup MIGHT BE Overkill

Here are a few reasons why the setup I implemented could be considered over-engineered for this project:

Single Node Deployment

  • Running a Kubernetes cluster for a single-node website might be overkill. A simpler solution, such as using Docker directly on a virtual machine (VM), could achieve the same result with less complexity. But what's the fun in that?
  • Kubernetes is best suited for complex, multi-node environments, which isn't necessary for a basic or small-scale site.

Automated CI/CD for Small Projects

  • While using GitHub Actions and Argo for CI/CD is powerful, manually deploying the app or using a simpler CI tool could have been more efficient for a personal or small project.
  • The level of automation might not be fully needed if updates and deployments happen infrequently.

[Image: visual representation of what I deployed.]

Argo for Image Reconciliation

  • Argo automatically monitors GitHub and updates the Kubernetes deployment with new images, but for a small site that rarely changes, manually updating the deployment would have been simpler and less resource-intensive.

Horizontal Auto-scaling

  • Enabling auto-scaling in Kubernetes is useful for high-traffic applications, but for a single-node personal website, it adds complexity without much benefit. Static resources or manually scaled environments could be easier to manage.

High Availability Features

  • Kubernetes provides features like rolling updates and zero-downtime deployments, which are overkill for a personal or small project. If the site doesn’t need high availability or frequent updates, a simpler deployment strategy could suffice.

If you're looking to not spend much on Kubernetes

If you're looking for cost-effective (see: CHEAP) ways to run Kubernetes, or if setting up a home lab is out of the question, consider these options:

  1. DigitalOcean/Linode/Vultr/Civo
    These VPS providers have low-cost managed Kubernetes services. I get zero revenue from them. They're just cheap. There, I said it. But they charge for load balancers, and you kind of need those.
  2. K3s, Minikube or MicroK8s
    Lightweight Kubernetes distributions suitable for cloud VMs or edge devices. They could run on most free-tier VMs offered by Oracle Cloud (I highly recommend trying them out), GCP, Azure, or AWS. I like Canonical's work, don't let the snap-haters tell you otherwise, but I have a feeling you'll end up liking Minikube. Sorry K3s, you're the "Community" of light K8s distros: loved by your base, but continually failing to catch on with wider audiences. Six distros and a movie!
  3. Self-Hosted on Cloud VMs
    This is how I learned, and also how I got permanently suspended from OCI (it's OK, I appealed it and won many months later). Use cloud VMs from AWS, GCP, Azure, OVH, Civo, or Digital Ocean (in my experience OCI does not play well with self-hosting) to self-manage Kubernetes for lower costs. If you want to keep costs low, use the minimal K8s distros (MicroK8s, Minikube, K3s) rather than vanilla. I migrated from Digital Ocean to K3s, and it was painless. Learn GitOps and create a solid CI/CD pipeline.
  4. Bare Metal Kubernetes
    Rent bare metal servers for affordable, large-scale Kubernetes hosting. Rackspace spot instances often go for $17 A YEAR! With a big caveat... they will likely get deleted out of nowhere. That's the caveat. Yikes.com
  5. Docker Desktop or OpenShift OKD
    Docker Desktop has a Kubernetes feature that installs kubectl for you, so you can run a small cluster locally. I've never tried OpenShift, but I hear it's just lovely.

But if you want to do a project to show off... doing it on the cloud is the way to go.


As I said before, the point of this was to prove my abilities, prove that I know what I am talking about, and show my work, so to speak.

  • In 2020, I was still making sites on Blogger and WordPress.
  • In 2021 I was using AWS S3 to make an object public and use its public URL as the site.
  • In 2022 I was finally using nginx and Apache to manage my own servers on a VM with multiple sites.
  • In 2023 I was reducing resource usage by deploying those servers with Docker.
  • In 2024 I am fully multicloud, and now overdoing it by using Kubernetes, with high availability and automated CI/CD. All of this for under $16 a month, down from $30, including domain names.
  • And yeah, the blog section is no longer on managed WordPress. I am self-hosting it on my own VPS.

Let's see what happens when I set up Cilium & Prometheus.
