Computers Can Be Fun

k3s


To collect telemetry on the K3s control host, audit logging must be configured first. Detailed configuration for setting up API Server logging is provided at https://docs.k3s.io/security/hardening-guide#api-server-audit-configuration.

A logging directory must be created first.

mkdir -p -m 744 /var/lib/rancher/k3s/server/logs

Next, a default audit policy, audit.yaml, should be created in /var/lib/rancher/k3s/server.

A simple policy manifest that logs only request metadata looks like this:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
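The minimal policy above can be extended with additional rules. As an illustrative sketch (these rules are my own, not from the K3s docs), noisy read-only discovery requests could be dropped while everything else stays at the Metadata level:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Illustrative: skip read-only requests to discovery endpoints entirely
  - level: None
    verbs: ["get", "watch"]
    nonResourceURLs: ["/api*", "/healthz*", "/version"]
  # Everything else is logged at the Metadata level
  - level: Metadata
```

Rules are evaluated in order, so the catch-all Metadata rule must come last.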

Additional server options are added to the k3s systemd service file.

...
ExecStart=/usr/local/bin/k3s \
    server \
    ...
    '--kube-apiserver-arg=audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \
    '--kube-apiserver-arg=audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml' \

We should see an audit.log file in /var/lib/rancher/k3s/server/logs/ once the K3s Server service is restarted.
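Each line in audit.log is one JSON audit event. As a quick illustration of the Metadata-level format (the event below is fabricated for demonstration), a field such as verb can be pulled out with standard tools:

```shell
# A fabricated Metadata-level audit event, one JSON object per line
event='{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","verb":"get","requestURI":"/api/v1/nodes"}'

# Extract the verb field; jq is more robust in practice, but sed keeps this dependency-free
echo "$event" | sed -n 's/.*"verb":"\([^"]*\)".*/\1/p'   # prints: get
```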

Collect Logs With Splunk OTEL

We are now ready to deploy an updated Splunk OTEL Collector configuration for monitoring the K3s server audit log. This is done by adding extraFileLogs and agent stanzas to the my_customized_values.yaml file used to initially deploy the splunk-otel-collector.

logsCollection:
  ...
  extraFileLogs:
    filelog/audit-log:
      include: [/var/lib/rancher/k3s/server/logs/audit.log]
      start_at: beginning
      include_file_path: true
      include_file_name: false
      resource:
        com.splunk.source: /var/lib/rancher/k3s/server/logs/audit.log
        host.name: 'EXPR(env("K8S_NODE_NAME"))'
        com.splunk.sourcetype: kube:apiserver-audit
agent:
  extraVolumeMounts:
    - name: audit-log
      mountPath: /var/lib/rancher/k3s/server/logs
  extraVolumes:
    - name: audit-log
      hostPath:
        path: /var/lib/rancher/k3s/server/logs

Let's review the important parts of this manifest. First, extraFileLogs defines a new set of logs for the DaemonSet's container to collect via the include directive. The resource directive sets the Splunk-internal values for host, source, and sourcetype.

Next, the agent section configures the DaemonSet to mount the K3s server logs directory inside the container running the OTEL collector. agent.extraVolumes.hostPath points to the logging path on the control node.

Then we can run helm to apply the new values configuration.

helm upgrade -n infra splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector -f my_customized_values.yaml

#splunk #k3s #otel #logging #apiserver


I decided to deploy a wiki application to the Kubernetes cluster to have a place to store documentation for research, new projects, and existing system architectures. I found BookStack simple to use, and it provided a nice test app to get running in the K3s cluster.

BookStack will run in two Pods, each with dedicated storage. A Pod consists of one or more Containers and is the smallest unit of work in Kubernetes. While use cases do exist for multiple Containers in a Pod, normally only one is assigned. I will be creating a Pod for BookStack's backend SQL database as well as a Pod for the frontend web application. These Pods will each have dedicated storage, provisioned with Longhorn.

MariaDB will be used for the SQL database. A PersistentVolumeClaim will be used to allocate disk space.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    storage: bookstack-db-storage
  name: bookstack-db-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn

A second PVC will be used for the BookStack's frontend. This storage will be for file uploads and attachments.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    storage: bookstack-storage
  name: bookstack-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn

The configuration accessModes: ReadWriteOnce instructs the cluster that the volume can be mounted read-write by only a single node at a time.
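Once the claims exist, a Pod consumes them by name. A minimal sketch of how the database Pod might reference its claim (the volume and mount names here are illustrative, not from the actual manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bookstack-db
spec:
  containers:
    - name: mariadb
      image: mariadb:latest
      volumeMounts:
        - name: db-data                  # illustrative volume name
          mountPath: /var/lib/mysql
  volumes:
    - name: db-data
      persistentVolumeClaim:
        claimName: bookstack-db-storage  # matches the PVC defined above
```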

Apply the yaml configuration with kubectl.

control-01:~/apps/bookstack$ kubectl apply -f bookstack-db-storage.yaml -f bookstack-storage.yaml



Longhorn provides native block storage for Kubernetes clusters and can be used with K3s. The K3s worker nodes have a secondary disk attached which will be used by Longhorn to provision storage. Disks attached to the nodes must be prepped. 

First, we will wipe the second disk attached to the node.

worker-01:~# wipefs -a /dev/<second_disk>

Now, format the disk as ext4.

worker-01:~# mkfs.ext4 /dev/<second_disk>

Create a directory to mount the disk. Longhorn will use this directory for data storage.

worker-01:~# mkdir /longhorn-storage

Mount the disk, then add it to /etc/fstab to persist across reboots.

worker-01:~# mount /dev/<second_disk> /longhorn-storage/
worker-01:~# echo "UUID=$(findmnt -n -o UUID,TARGET,FSTYPE,OPTIONS /longhorn-storage) 0 0" | tee -a /etc/fstab

open-iscsi and an NFSv4 client are required for Longhorn.

worker-01:~# apt install -y nfs-common open-iscsi

Now, we are ready to install Longhorn using Helm. Additional settings, such as defaultSettings.defaultDataPath, can be passed to Helm as described in customize default settings.

helm repo add longhorn https://charts.longhorn.io; helm repo update

helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --set defaultSettings.defaultDataPath="/longhorn-storage"

We can access the Longhorn UI by creating a service from the control node.

control-01:~$ kubectl apply -f longhorn-svc-lb.yaml

longhorn-svc-lb.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: longhorn-svc-lb
  namespace: longhorn-system
spec:
  selector:
    app: longhorn-ui
  type: LoadBalancer
  loadBalancerIP: 10.33.0.210
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http

#kubernetes #k3s #k8s #longhorn #storage #weekendproject

MetalLB handles load balancing for bare-metal installations of Kubernetes, with the LoadBalancer Service type providing network connectivity into the cluster. To get a better understanding of how modular and flexible Kubernetes can be, ServiceLB, which ships as the default service load balancer in K3s, can be disabled during installation or through configuration. Once disabled, MetalLB can be set up.

Start by disabling ServiceLB with a statement in config.yaml.

cat >> /etc/rancher/k3s/config.yaml <<EOF
disable:
  - servicelb
EOF

Using Helm, install MetalLB.

helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb

IPAddressPool configures the addresses available for MetalLB to assign to Services deployed in K3s. The IP address selected from the pool effectively acts as the “external” address for the service in the cluster.

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-address-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.33.0.200-10.33.0.254

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: metallb-l2-adver
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb-address-pool

Now, MetalLB will automatically assign the next available IP address to any Service with configuration type: LoadBalancer.
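As a sketch, any Service of type LoadBalancer will now receive an address from the pool without specifying loadBalancerIP; the name and selector below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-web        # illustrative
spec:
  selector:
    app: example-web       # illustrative
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```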

#kubernetes #k3s #k8s #metallb #weekendproject


Building a Kubernetes cluster is a great way to start learning about the power, functionality and purpose this technology provides in the modern containerization stack.

K3s is a lightweight Kubernetes-compliant distribution that runs well on low-powered devices and can be used as a single-node cluster or scaled up to a multi-node cluster.

This project will start with a 4-node cluster: 1 control and 3 workers. Nodes are virtual machines with 8GB of RAM and two 32GB disks. The first disk will have Debian installed while the second disk will be used for cluster storage.

K3s can easily be installed on the control node with the following command, as described in the documentation.

control-01:~$ curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 --disable servicelb --token super_secret_password --node-taint CriticalAddonsOnly=true:NoExecute --bind-address 10.33.0.100

A couple of additional flags are passed to the install script beyond those described in the K3s Quick-Start guide. Details on the configuration options can be found at https://docs.k3s.io/cli/server.

Briefly, --write-kubeconfig-mode 644 allows non-root users to read /etc/rancher/k3s/k3s.yaml, the K3s configuration file.

--disable servicelb disables the ServiceLB package. Later on, a different load balancer, MetalLB, will be installed.

--token super_secret_password defines the token that will be used to join agent nodes to the server.

--node-taint CriticalAddonsOnly=true:NoExecute creates a node taint that prevents pods from running on the control-plane node unless they carry a matching "CriticalAddonsOnly" toleration. As the K3s documentation explains,

By default, server nodes will be schedulable and thus your workloads can get launched on them. If you wish to have a dedicated control plane where no user workloads will run, you can use taints. https://docs.k3s.io/datastore/ha#2-launch-server-nodes
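Conversely, a workload that should run on the tainted control-plane node needs a matching toleration in its Pod spec; a sketch:

```yaml
# Pod spec fragment: tolerate the control-plane taint set above
spec:
  tolerations:
    - key: CriticalAddonsOnly
      operator: Exists
      effect: NoExecute
```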

Set up an environment variable on the control node to simplify using kubectl.

control-01:~$ echo "KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> /etc/environment

Install K3s as an agent on the worker nodes, worker-01 through worker-03. K3S_URL is the URL of the control node's API server.

worker-01:~$ curl -sfL https://get.k3s.io | K3S_URL=https://10.33.0.100:6443 K3S_TOKEN=super_secret_password sh -

Label each worker to organize the nodes in the cluster.

control-01:~$ kubectl label nodes k3s-worker-01 kubernetes.io/role=worker

The cluster should be operational at this point. We can check the status of the nodes with kubectl get nodes --show-labels.

#kubernetes #k3s #k8s #weekendproject

Finally!

It took some work, but WriteFreely is finally running on my K3s cluster.

My starting point was the configuration in writefreely-docker, which provided an excellent source of knowledge for building the Docker container. Once built, I published the image to a local container registry for K3s to pull from.

I deployed a Pod to keep things simple. The key here is to make sure the securityContext object has fsGroup and runAsUser set to the same user defined in the image build process. In this case, the user is daemon (uid=2).

apiVersion: v1
kind: Pod
...
spec:
  securityContext:
    runAsUser: 2
    runAsGroup: 2
    fsGroup: 2
  containers:
    ...

A ConfigMap can be used to inject WriteFreely's config.ini into the Pod. The container image expects the configuration at /config/config.ini and a directory for the templates and database at /data/. The data directory should be a PersistentVolume.

  containers:
    ...
      volumeMounts:
      - name: data
        mountPath: /data
      - name: config
        mountPath: /config/config.ini
        subPath: config.ini
    ...
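On the Pod spec side, those mounts need matching volumes entries. A sketch, with illustrative ConfigMap and PVC names (not from the actual manifests):

```yaml
# Pod spec fragment: volumes matching the mounts above
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: writefreely-data      # illustrative PVC name
    - name: config
      configMap:
        name: writefreely-config         # illustrative ConfigMap name
```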

The container expects environment variables to configure the admin user on first deployment.

...
containers:
    - name: writefreely
      image: containers.internal.hrck.net/writeas/writefreely:latest
      ...
      env:
        - name: USERNAME
          value: "alex"
        - name: PASSWORD
          value: "<value of admin user's password>"
     ...

Full build configuration and Kubernetes manifests can be found on GitHub.

#kubernetes #k3s #k8s