i decided to make a website. a static one. this one. with Hugo. the main reason i need a website is as a vanity project: so i have some stuff to host in the Kubernetes cluster i’m running. the k8s cluster is also a vanity project.
because i don’t like software, i wanted a way to deploy my site that doesn’t involve much of it. this post is about that.
## Getting Started
i built my site by following the straightforward Getting Started guide in the Hugo documentation. i did `hugo new site estradiol.cloud`, and then `cd estradiol.cloud; git init`.

and then i picked a ridiculous theme “inspired by terminal ricing aesthetics”, installing it like `git submodule add https://github.com/joeroe/risotto.git themes/risotto; echo "theme = 'risotto'" >> hugo.toml`.[^1]
at this point, my website is basically finished (i also changed the title in `hugo.toml`). i probably won’t be putting anything on it, so there’s no point fiddling with other details.
about deployment, the Hugo guide’s Basic Usage page has this to offer:
> Most of our users deploy their sites using a CI/CD workflow, where a push¹ to their GitHub or GitLab repository triggers a build and deployment. Popular providers include AWS Amplify, CloudCannon, Cloudflare Pages, GitHub Pages, GitLab Pages, and Netlify.
>
> 1. The Git repository contains the entire project directory, typically excluding the public directory because the site is built after the push.
importantly, you can’t make a post about deploying this way. everyone deploys this way. if i deploy this way, this site will have no content.
this approach also involves a build system somewhere that can run Hugo to compile the code and assets and push them onto my host. i definitely already need Hugo installed on my laptop if i’m going to post anything.[^2] so now i’m running Hugo in two places. there’s surely going to be other complex nonsense like webhooks involved.
and hang on. let’s look at this again:
> 1. The Git repository contains the entire project directory, typically excluding the public directory because the site is built after the push.
you’re telling me i’m going to build a nice static site and not check the actual content into version control? couldn’t be me.
## Getting Static
suppose i instead check my content in exactly as i intend to serve it? then i could shell into my server box, pull the site, and nifty-galifty! isn’t this the way it has always been done?
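(if that box existed, the whole pipeline would be a hypothetical one-liner; the host and path here are invented for illustration:)

```bash
# hypothetical server-box deploy: shell in and pull. host and path are made up.
ssh tamsin@box.example.net 'cd /var/www/estradiol.cloud && git pull'
```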
my problem is that i don’t have a server box. i have a container orchestration system. there are several upsides to this[^3] but it means that somehow my generated content needs to end up in a container. because `Pod`s are ephemeral and i’d like to run my site with horizontal scalability[^4], i don’t want my container to retain runtime state across restarts or replicas.
i could run a little pipeline that builds a container image wrapping my content and pushes it to a registry. when i deploy, the cluster pulls the image, content and all. all ready to go. but now i’ve got software again: build stages and webhooks and, to make matters worse, now i’m hosting and versioning container images.
i don’t want any of this. i just want to put some HTML and static assets behind a web server.
instead, i’d like to deploy a popular container image from a public registry and deliver my content to it continuously.
a minimal setup to achieve this might look like:
- a `Pod` with:
  - an `nginx` container to serve the content;
  - a `git-pull` sidecar that loops, pulling the content;
  - an `initContainer` to do the initial checkout;
  - an `emptyDir` volume to share between the containers.
- a `ConfigMap` to store the nginx config.
when a new `Pod` comes up, the `initContainer` mounts the `emptyDir` at `/www` and clones the repository into it. i use `git sparse-checkout` to avoid pulling repository contents i don’t want to serve out:
```bash
# git-clone command
git clone https://code.estradiol.cloud/tamsin/estradiol.cloud.git --no-checkout --branch trunk /tmp/www
cd /tmp/www
git sparse-checkout init --cone
git sparse-checkout set public   # only the generated site, not the Hugo source
git checkout
shopt -s dotglob                 # make * match dotfiles so .git moves too
mv /tmp/www/* /www
```
for the sidecar, i script up a `git pull` loop:
```bash
# git-pull command
while true; do
  cd /www && git -c safe.directory=/www pull origin trunk
  sleep 60
done
```
and i create a `ConfigMap` with a server block to configure `nginx` to use Hugo’s `public/` as root:
```nginx
# ConfigMap; data: default.conf
server {
  listen 80;
  location / {
    root /www/public;
    index index.html;
  }
}
```
the rest of this is pretty much boilerplate:
```bash
kubectl apply -f https://estradiol.cloud/posts/hugo-on-k8s/site.yaml
```
```yaml
# estradiol-cloud.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
  name: nginx-server-block
data:
  default.conf: |-
    server {
      listen 80;
      location / {
        root /www/public;
        index index.html;
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25.4
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /www
          name: www
        - mountPath: /etc/nginx/conf.d
          name: nginx-server-block
    - name: git-pull
      image: bitnami/git
      command:
        - /bin/bash
        - -ec
        - |
          while true; do
            cd /www && git -c safe.directory=/www pull origin trunk
            sleep 60
          done
      volumeMounts:
        - mountPath: /www
          name: www
  initContainers:
    - name: git-clone
      image: bitnami/git
      command:
        - /bin/bash
        - -c
        - |
          shopt -s dotglob
          git clone https://code.estradiol.cloud/tamsin/estradiol.cloud.git --no-checkout --branch trunk /tmp/www
          cd /tmp/www
          git sparse-checkout init --cone
          git sparse-checkout set public
          git checkout
          mv /tmp/www/* /www
      volumeMounts:
        - mountPath: /www
          name: www
  volumes:
    - name: www
      emptyDir: {}
    - name: nginx-server-block
      configMap:
        name: nginx-server-block
```
my Hugo workflow now looks like:

- make changes to source;
- run `hugo --gc --minify`;[^5]
- `git` commit & push.

my `git pull` control loop takes things over from here and i’m on easy street.
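the publish step itself stays small enough to type from memory. a sketch (the commit message is a placeholder):

```bash
# publish a post; the in-cluster git-pull loop picks it up within 60 seconds
hugo --gc --minify                 # regenerate public/ next to the source
git add -A
git commit -m "post: placeholder"
git push origin trunk
```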
## Getting Web
this is going great! my `Pod` is running. it’s serving out my code. i get Continuous Deployment™ for the low price of 11 lines of `bash`. i mean…

no one can actually browse to my website[^6] but that will be an easy fix, right? yes. networking is always the easy part.
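in the meantime, i can at least peek at it with a port-forward. a sketch, assuming the `Pod` name `nginx` from the YAML above:

```bash
# forward a local port to the Pod and poke the site
kubectl port-forward pod/nginx 8080:80 &
curl -s http://localhost:8080/ | head
```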
first, i need a `Service`. this gives me a proxy to my several replicas[^7] and in-cluster service discovery.
```bash
kubectl apply -f https://estradiol.cloud/posts/hugo-on-k8s-nginx/service.yaml
```
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
  name: nginx
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
```
next, i need an `Ingress` to handle traffic inbound to the cluster and direct it to the `Service`:
```bash
kubectl apply -f https://estradiol.cloud/posts/hugo-on-k8s-nginx/ingress.yaml
```
```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
  name: nginx
spec:
  rules:
    - host: estradiol.cloud
      http:
        paths:
          - backend:
              service:
                name: nginx
                port:
                  name: http
            path: /
            pathType: Prefix
```
this part expresses a routing rule: traffic reaching the cluster via `estradiol.cloud` should go to my `Service`, and then to one of its backend `Pod`s.
to actually apply this rule, i need an ingress controller. mine is ingress-nginx. when i deployed the controller in my cluster, it created some more nginx `Pod`s. these update their configuration dynamically based on the rules in my `Ingress` resource(s). the controller also creates a `Service` of type `LoadBalancer`, which magically creates a load balancer appliance in my cloud provider. off-screen, i can point DNS to that appliance to finish the setup.
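for completeness, deploying the controller was roughly this sketch (the chart and repo are ingress-nginx’s upstream defaults; the Service name is the one the chart typically generates):

```bash
# install the ingress-nginx controller from its upstream chart
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# the EXTERNAL-IP of this Service is the appliance to point DNS at
kubectl get service --namespace ingress-nginx ingress-nginx-controller
```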
you can tell it’s working by looking at your browser bar.
as this has come together, i’ve gotten increasingly anxious about how much YAML i’ve had to write. this is a problem because YAML is software and, as established, i’m hoping not to have much of that. it’s also annoying that most of this YAML really is just boilerplate.
conveniently, Bitnami maintains a Helm chart that templates out all the boilerplate and does exactly what i’ve just been doing.[^8] i can replace all my YAML with a call out to this chart and a few lines of configuration, assuming i have the `helm` client installed:
```bash
helm upgrade --install --create-namespace --namespace estradiol-cloud \
  -f https://estradiol.cloud/posts/hugo-on-k8s-nginx/values.yaml \
  web oci://registry-1.docker.io/bitnamicharts/nginx
```
```yaml
# values.yaml
cloneStaticSiteFromGit:
  enabled: true
  repository: "https://code.estradiol.cloud/tamsin/estradiol.cloud.git"
  branch: trunk
  gitClone:
    command:
      - /bin/bash
      - -ec
      - |
        [[ -f "/opt/bitnami/scripts/git/entrypoint.sh" ]] && source "/opt/bitnami/scripts/git/entrypoint.sh"
        git clone {{ .Values.cloneStaticSiteFromGit.repository }} --no-checkout --branch {{ .Values.cloneStaticSiteFromGit.branch }} /tmp/app
        [[ "$?" -eq 0 ]] && cd /tmp/app && git sparse-checkout init --cone && git sparse-checkout set public && git checkout && shopt -s dotglob && rm -rf /app/* && mv /tmp/app/* /app/
ingress:
  enabled: true
  hostname: estradiol.cloud
  ingressClassName: nginx
  tls: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
serverBlock: |-
  server {
    listen 8080;
    root /app/public;
    index index.html;
  }
service:
  type: ClusterIP
```
configuration for the `git-clone` script and our custom server block are added via `values.yaml`. the `git-pull` loop configured by the chart works as-is.
by using the chart, we get a few other niceties. for instance, my `Pod`s are now managed by a `Deployment`.[^9] this will make my grand scale-out plans a breeze.
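(when the grand scale-out day arrives, it should be one command. a sketch, assuming the chart derived the `Deployment` name `web-nginx` from my release name:)

```bash
# hypothetical scale-out; the Deployment name depends on the Helm release name
kubectl scale deployment web-nginx --replicas=3 --namespace estradiol-cloud
```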
## Getting Flux’d
by now, i’m riding high. my whole setup is my static site code and <30 lines of YAML.
i do have a bunch of stuff deployed into my cluster, and none of this is very reproducible without all of that. my workflow has also expanded to:
- for routine site deploys:
  - make changes to source;
  - run `hugo --gc --minify`;[^5]
  - `git` commit & push.
- to update `nginx`, the chart version, or change config:
  - make changes to `values.yaml`;
  - `helm upgrade`.
i could do without the extra `helm` client dependency on my laptop. i’m also pretty `git push`-pilled, and i really want the solution to all my problems to take the now familiar shape: put a control loop in my cluster and push to a `git` repository.
enter `flux`.
with `flux`, i decide on a repository (and maybe a path within it) to act as a source for my Kubernetes YAML. i go through a short bootstrap process which installs the `flux` controllers and adds them to the repository. to make a change to a resource in my cluster, i edit the YAML and push to the repository. `flux` listens and applies the changes.
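the bootstrap is roughly the sketch below; the owner, repository, and path are placeholders, and i use the GitLab variant because that’s where my source repository lives:

```bash
# sketch of a flux bootstrap; owner/repository/path are placeholders.
# expects a GITLAB_TOKEN with api scope in the environment.
flux bootstrap gitlab \
  --owner=tamsin \
  --repository=cluster-config \
  --branch=main \
  --path=clusters/estradiol-cloud \
  --personal
```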
`flux` supports Helm deploys, so i can get that `helm` client off my laptop. i can also use it to manage my ingress controller, `cert-manager`, `flux` itself, and whatever other infrastructural junk i may end up needing.
to move my web stack into `flux`, i create a `HelmRepository` resource for the `bitnami` Helm charts:
```yaml
# bitnami-helm.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bitnami
  namespace: default
spec:
  url: https://charts.bitnami.com/bitnami
```
and add a `HelmRelease` pointing to the repository/chart version and containing my `values.yaml`:
```yaml
# release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: web
  namespace: estradiol-cloud
spec:
  interval: 5m
  chart:
    spec:
      chart: nginx
      version: '15.12.2'
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: default
      interval: 1m
  values:
    cloneStaticSiteFromGit:
      enabled: true
      repository: "https://code.estradiol.cloud/tamsin/estradiol.cloud.git"
      branch: trunk
      gitClone:
        command:
          - /bin/bash
          - -ec
          - |
            [[ -f "/opt/bitnami/scripts/git/entrypoint.sh" ]] && source "/opt/bitnami/scripts/git/entrypoint.sh"
            git clone {{ .Values.cloneStaticSiteFromGit.repository }} --no-checkout --branch {{ .Values.cloneStaticSiteFromGit.branch }} /tmp/app
            [[ "$?" -eq 0 ]] && cd /tmp/app && git sparse-checkout init --cone && git sparse-checkout set public && git checkout && shopt -s dotglob && rm -rf /app/* && mv /tmp/app/* /app/
    ingress:
      enabled: true
      hostname: estradiol.cloud
      ingressClassName: nginx
      tls: true
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
    serverBlock: |-
      server {
        listen 8080;
        root /app/public;
        index index.html;
      }
    service:
      type: ClusterIP
```
when i push these to my `flux` source repository, the Helm release rolls out.
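and i can watch it land without any `helm` on my laptop:

```bash
# check that flux reconciled the release and the new Pods rolled out
flux get helmreleases --namespace estradiol-cloud
kubectl get pods --namespace estradiol-cloud
```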
## A Note About Software
in the end, i’m forced to admit there’s still a lot of software involved in all of this. setting aside the stuff that provisions and scales my cluster nodes, and the magic `LoadBalancer`, i have:

- `nginx` (running from a stock image);
- `git` & `bash` (running from a stock image);
- a remote git server (i’m running `gitea`[^10], but github dot com is fine here);
- Kubernetes (oops!);
- `flux`, especially `kustomize-controller` and `helm-controller`;
- the `ingress-nginx` controller;
- `cert-manager` and Let’s Encrypt;
- the `bitnami/nginx` Helm chart.
the bulk of this i’ll be able to reuse for the other things i deploy on the cluster[^11]. and it replaces SaaS black-boxes like “AWS Amplify, CloudCannon, Cloudflare Pages, GitHub Pages, GitLab Pages, and Netlify” in the recommended Hugo deployment.
to actually deploy my site, i get to maintain a `bash` script for `git-clone`, my NGINX config, and a couple of blobs of YAML.
at least there are no webhooks.
fin
[^1]: i appreciate the culinary branding.
[^2]: unlikely.
[^3]: few of which could be considered relevant for my project.
[^4]: i absolutely will not need this.
[^5]: i added `disableHTML = true` and `disableXML = true` to the `[minify]` configuration in `hugo.toml` to keep HTML and RSS diffs readable.
[^6]: i can check that it’s working, at least, with a port-forward.
[^7]: lmao
[^8]: what incredible luck! (obviously, until now i’ve been working backward from this chart)
[^9]: i also snuck a TLS certificate configuration via Let’s Encrypt with `cert-manager` into this iteration. if you’re following along at home and don’t have `cert-manager` installed, this should still work fine (but with no HTTPS).
[^10]: because i’m running `gitea` in my cluster and i want to avoid a circular dependency for my `flux` source repository, i also depend on GitLab dot com.
[^11]: i won’t.