19 Jun 2019, 06:00

k3s on arm64

I’m evaluating k3s, a lightweight Kubernetes distribution, on a 3-node arm64 cluster (RK3328 quad-core arm64).

At the time of writing the stable release is k3s v0.6.1.

Here are my notes:

  • If you haven’t installed k3s with the install.sh, you may need to load some modules: br_netfilter and overlay
  • Docker is not needed since k3s uses containerd, but it seems I had to start Docker to initialize cgroups, at least on Arch
  • Remember to modify all the templates to use an arm64 image

              - name: kubernetes-dashboard
                image: k8s.gcr.io/kubernetes-dashboard-arm64:v1.10.1
  • Deploy the metrics server: clone the k3s repository and change the image to the arm64 one, k8s.gcr.io/metrics-server-arm64:v0.3.1

       git clone git@github.com:rancher/k3s.git
       vi k3s/recipes/metrics-server/metrics-server-deployment.yaml
       sudo k3s kubectl -n kube-system create -f k3s/recipes/metrics-server

Test it by running k3s kubectl top node

  • Dynamic storage class
    Kubernetes provides local storage but no dynamic volume provisioning: you still have to manually create the PersistentVolume, which breaks any automatic deployment (especially from Helm).
    Rancher developed local-path-provisioner for that purpose. I couldn’t find a recent build for arm64, so I built one, you can find it here.

    curl -sfL https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml > local-path-storage.yaml
    # Edit local-path-storage.yaml to set image to akhenakh/local-path-provisioner:arm64-v0.0.9
    kubectl create -f local-path-storage.yaml
    kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  • Deploy dashboard
    Change the image to the arm64 one, k8s.gcr.io/kubernetes-dashboard-arm64:v1.10.1.
    Add the skip option to the args: - --enable-skip-login

          curl -sfL https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml > kubernetes-dashboard.yaml
          vi kubernetes-dashboard.yaml
          cat <<EOF | sudo k3s kubectl create -f -
          apiVersion: rbac.authorization.k8s.io/v1beta1
          kind: ClusterRoleBinding
          metadata:
            name: kubernetes-dashboard
            labels:
              k8s-app: kubernetes-dashboard
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
            name: cluster-admin
          subjects:
          - kind: ServiceAccount
            name: kubernetes-dashboard
            namespace: kube-system
          EOF
          sudo k3s kubectl -n kube-system create -f kubernetes-dashboard.yaml

Merge /etc/rancher/k3s/k3s.yaml into ~/.kube/config on your workstation, then kubectl proxy to localhost:8001.

  • Install a specific version

      curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.6.1 sh -
  • Install agents

      curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.6.1 K3S_URL=https://mymaster.home:6443 K3S_TOKEN=K101864xxxxxxxxxxxxxxxxxxx::node:xxxxxxxxxxxxxxxxxxxxxxxxx sh -

27 May 2019, 00:00

H96 Max+ Android box as a Linux server

The H96 Max+ is an Android 8.1 box with a Rockchip RK3328, 4GB RAM and 32GB or 64GB of built-in eMMC; it's the same chipset as the Rock64, and it costs around 60 USD.
The only downside of this board is the 100Mb network link, which can be a non-issue if you use a USB network adapter.

Here are some notes on how to install Arch Linux on the H96 to make it a small server appliance.

Serial Console Pins

Video output won't work during installation; you need to establish a serial connection to the board.

Baudrate 1500000 8N1 3.3V


A DTB (Device Tree Blob, compiled from DTS sources) describes the hardware to the kernel.

You can extract the DTB from the Android H96 image, but I'm using rk3328-t9.dtb from the Armbian forum.


If you write to mmcblk2 you may overwrite the Rockchip firmware located on the eMMC, which will result in a non-booting device…

Since this box does not have any reset button you need to short two pins as described in this post.

You also need a male-to-male USB cable (a straight cable: black, green, white and red on both ends).

Short the two pins while inserting the USB cable; do not plug in the power cable.

Find an image for the H96 somewhere or dump yours (it's only useful to recover the boot image), then write it to the eMMC with upgrade_tool:

sudo ./upgrade_tool uf RK3328-H96MAX+_hs2734_8.1.0_20180823.1108.img


Prepare an SD card as described in the Rock64 Arch installation.

Note the output of blkid for this partition.

Before unmounting root, create a new file root/boot/boot.txt.

part uuid ${devtype} ${devnum}:${bootpart} uuid
setenv bootargs console=ttyS2,1500000 root=UUID=a3490212-03b6-46a6-9e6b-7733513a3efa storagemedia=emmc rw rootwait earlycon=uart8250,mmio32,0xff130000
setenv fdtfile rockchip/rk3328-t9.dtb

if load ${devtype} ${devnum}:${bootpart} ${kernel_addr_r} /boot/Image; then
  if load ${devtype} ${devnum}:${bootpart} ${fdt_addr_r} /boot/dtbs/${fdtfile}; then
    fdt addr ${fdt_addr_r}
    fdt resize
    if load ${devtype} ${devnum}:${bootpart} ${ramdisk_addr_r} /boot/initramfs-linux.img; then
      booti ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r};
    else
      booti ${kernel_addr_r} - ${fdt_addr_r};
    fi;
  fi;
fi

Copy the dtb file:

cp rk3328-t9.dtb root/boot/dtbs/rockchip

Update the bootloader:

cd root/boot
mkimage -A arm -O linux -T script -C none -n "U-Boot boot script" -d boot.txt boot.scr

mkimage is provided by uboot-tools on Arch.

The mainline kernel won't work properly for now; here is a copy of the ayufan kernel 1187, untar it in root/boot.

This is a one time thing only…

First boot

Boot with the SD card; you should be able to see /dev/mmcblk2. It should be safe to write after block 32768: create a partition (this is where I bricked, then unbricked, mine).

Device         Boot Start       End   Sectors  Size Id Type
/dev/mmcblk2p1      32768 122142719 122109952 58.2G 83 Linux

You can safely perform a normal Arch install on it, then remember to modify the UUID in boot.txt from the SD card, to boot / from the newly created /dev/mmcblk2p1 partition (the SD card is /dev/mmcblk0).

Reboot, the SD card is only needed to find the kernel, the rest is read off the eMMC.

Kernel Updates

Until mainline is stable for the rk3328, you can use my updated Arch rockchip ayufan pkg.

mount /dev/mmcblk0p1 /mnt
mount --bind /mnt/boot /boot
cd linux-aarch64-rock64-bin
makepkg -si


I had tons of issues, but it seems they were related to USB 3.0.

I've blacklisted video output since it's not used; create /etc/modprobe.d/video-blacklist.conf:

blacklist mali 
blacklist dw_hdmi_i2s_audio

Do not use the USB 3 port and you should be fine (until mainline is stable).

3D Printed rack

Since this board is not standard, I've made a 3D printed rack for the H96, with supports for RPis/Odroid and Rock64.


It should be possible to boot directly from the eMMC without the SD Card.

This board is incredibly fast and convenient for the price; it's a perfect target to play with edge clusters.
It's a bit hacky though, and I don't recommend it if you are not ready to debug from a serial console.

22 May 2019, 00:00

Arch on Rock64 with USB boot

Here are my notes to install Arch on a Rock64 and boot on USB first.

Warning, you can brick your device (but can unbrick it), you are on your own.

  • Follow Arch Instructions to Install on an SD Card
  • Boot on it
  • Insert your USB device and follow the exact same installation, but this time to the /dev/sda device.
    At the end, mount / into root again and run blkid to grab the UUID of your /:

  /dev/sda1: UUID="a3490212-03b6-46a6-9e6b-7733513a3efa" TYPE="ext4"

Add the file root/boot/boot.txt as follows, substituting your own UUID:

              # MAC address (use spaces instead of colons)
              setenv macaddr da 19 c8 7b 6d bd

              part uuid ${devtype} ${devnum}:${bootpart} uuid
              setenv bootargs console=ttyS2,1500000 root=UUID=a3490212-03b6-46a6-9e6b-7733513a3efa rw rootwait earlycon=uart8250,mmio32,0xff130000
              setenv fdtfile rockchip/rk3328-rock64.dtb

              if load ${devtype} ${devnum}:${bootpart} ${kernel_addr_r} /boot/Image; then
                if load ${devtype} ${devnum}:${bootpart} ${fdt_addr_r} /boot/dtbs/${fdtfile}; then
                  fdt addr ${fdt_addr_r}
                  fdt resize
                  fdt set /ethernet@ff540000 local-mac-address "[${macaddr}]"
                  if load ${devtype} ${devnum}:${bootpart} ${ramdisk_addr_r} /boot/initramfs-linux.img; then
                    booti ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r};
                  else
                    booti ${kernel_addr_r} - ${fdt_addr_r};
                  fi;
                fi;
              fi

Then compile it to boot.scr:

  cd root/boot
  pacman -S uboot-tools
  mkimage -A arm -O linux -T script -C none -n "U-Boot boot script" -d boot.txt boot.scr

Reboot and remove the USB device

  • Download a dedicated image labelled u-boot-flash-spi-rock64.img.xz from ayufan’s github
  • Burn it to an SD card, turn on your Rock64, and wait for the white LED to flash several times at a 1-second frequency
  • Turn it off, remove the SD card, insert the USB device: it should now boot from your USB device


Here is a config for an ext4 /boot and a btrfs /.

mount -o noatime,ssd,compress=lzo,subvol=@root /dev/sda2 root
mount /dev/sda1 root/boot
cd root/boot
mkimage -A arm -O linux -T script -C none -n "U-Boot boot script" -d boot.txt boot.scr

And the matching boot.txt

# After modifying, run ./mkscr
# MAC address (use spaces instead of colons)
setenv macaddr da 19 c8 7a 6d f4
part uuid ${devtype} ${devnum}:${bootpart} uuid
setenv bootargs console=ttyS2,1500000 rootfstype=btrfs root=UUID=7ba94ef7-a453-495a-a93c-a6cdb875b28e rootflags=subvol=@root rw rootwait earlycon=uart8250,mmio32,0xff130000
setenv fdtfile rockchip/rk3328-rock64.dtb
if load ${devtype} ${devnum}:${bootpart} ${kernel_addr_r} /Image; then
  if load ${devtype} ${devnum}:${bootpart} ${fdt_addr_r} /dtbs/${fdtfile}; then
    fdt addr ${fdt_addr_r}
    fdt resize
    fdt set /ethernet@ff540000 local-mac-address "[${macaddr}]"
    if load ${devtype} ${devnum}:${bootpart} ${ramdisk_addr_r} /initramfs-linux.img; then
      booti ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r};
    else
      booti ${kernel_addr_r} - ${fdt_addr_r};
    fi;
  fi;
fi

Do not use zstd compression for now: it was not available in the 4.4 ayufan kernel, and you may find yourself stuck with a filesystem you cannot read back on an older kernel (been there).


I've tried the mainline 5.1 kernel for days, with a lot of instabilities: memory corruption, compiler crashes.
It looked like a CPU overheating problem but it was not; I simply downgraded to the 4.4 ayufan kernel and everything went back to normal.

You can install linux-aarch64-rock64-bin from AUR, or use my more recent AUR packages.


The USB 3 port is unstable with the ayufan kernel; hopefully it will be fixed in mainline.

Disabling uas for your USB key seems to help a lot.

Identify your USB storage with lsusb -v

Add usb-storage quirks identifying your device to the bootargs in /boot/boot.txt:

setenv bootargs console=ttyS2,1500000 rootfstype=btrfs root=UUID=c9136915-614c-4203-ba8d-5f2f202ffbcd rootflags=subvol=@root usb-storage.quirks=0x0781:0x5588:u rw rootwait earlycon=uart8250,mmio32,0xff130000

27 Apr 2019, 02:01

Deploying a website with Caddy, Git and Kubernetes

Caddy is the Swiss Army knife of web servers, and with the recent commercial license changes, it's time to give it some love back.

I have several static websites, some generated with Hugo, some are plain HTML.
I wanted a small container, to run it inside a Kubernetes cluster, capable of pulling some git repos and serve them.


Caddy is already capable of that with the help of caddy-git; unfortunately, it only works with SSH keys.
I wanted it to use a GitHub access token; also, the current implementation relies on the git command and sh, and I wanted mine to be able to run on Distroless.


I've used go-git, a pure Go implementation of git, to first make a clone of the git command: minigit.
minigit can be useful in devops environments and scripting to facilitate git pulls.
Fake the git command by copying minigit into your image, then tweak caddy-git to pass an extra parameter --ghtoken:

root /public
git https://github.com/myuser/repo {
   path /public
   clone_args --ghtoken XXXXXXXXXXXXX
   pull_args --ghtoken XXXXXXXXXXXXX
   interval 3600
}

It's nice, but I wanted something cleaner and to get rid of the sh dependency, so I had to fork caddy-git.


So here is caddy-puregit, a fork without execs but native pure Go git calls.
Give it your token and it will clone then pull on regular intervals.

root /public
puregit https://github.com/myuser/repo {
   path /public
   auth_token XXXXXXXXXXXXX 
   interval 3600
}

I've also created a Caddy + Hugo image, so you can trigger a Hugo build on every commit.

root /public
puregit https://github.com/myuser/hugo-blog {
   path /data
   then hugo --destination=/public --source=/data
   auth_token XXXXXXXXXXXXX 
   interval 3600
}

Here are caddy-puregit and the associated Docker image & Dockerfile.


Since Caddy supports environment variables, it's easy to deploy in k8s:

root /public
puregit {$REPO} {
    auth_token {$TOKEN}
}

Put your token into a secret and expose it as an environment variable.
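For instance, a minimal sketch of that wiring (the secret name caddy-repo-token and key token here are made up for the example):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: caddy-repo-token   # hypothetical name
type: Opaque
stringData:
  token: XXXXXXXXXXXXX
---
# excerpt of the container spec in the Caddy deployment:
# env:
# - name: TOKEN
#   valueFrom:
#     secretKeyRef:
#       name: caddy-repo-token
#       key: token
```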

Here is a template which will deploy Caddy, pull your repo, then serve it according to the config.


For development purposes, to work on a new Caddy plugin you can use RegisterDevDirective; otherwise you have to fork Caddy.

I don't plan on maintaining this fork, but I'll reach out to the author: since the pure Go git concept works, maybe he will be interested.

11 Mar 2019, 00:32

gRPC Load Balancing inside Kubernetes


I've wanted to blog about this for years: how do you connect to a Kubernetes load-balanced service?
How do you deal with disconnections/re-connections and maintenance? What about gRPC specifically?
The answer is heavily tied to the network stack used by Kubernetes, but with the "Mesh Network" revolution, it's not always clear how it works anymore and what the options are.

How it works

First I recommend you to watch this great yet simple video: Container Networking From Scratch, then the Services clusterIP documentation.

To keep it simple: when you create a Service in Kubernetes, it creates a layer 4 proxy and load balances connections to your pods using iptables; the service endpoint is a single IP and port hiding your real pods.

The Problem

A simple TCP load balancer is good enough for a lot of things, especially for HTTP/1.1: since connections are mainly short-lived, the clients reconnect often, so they won't stay connected to an old running pod.

But with gRPC over HTTP/2, a TCP connection is kept open, which can lead to issues, like staying connected to a dying pod, or unbalancing the cluster because the clients end up on the older pods.

One solution is to use a more advanced proxy that knows about the higher layers.

Envoy, HAProxy and Traefik are layer 7 reverse proxy load balancers; they know about HTTP/2 (even about gRPC) and can disconnect a backend's pod without the clients noticing.


On the edge of your Kubernetes cluster you need a public IP, provided by your cloud provider; via the Ingress directive it will expose your internal service.

To further control your request routing you need an Ingress Controller.
It’s a reverse proxy that knows about the Kubernetes clusters and can direct the requests to the right place. Envoy, HAProxy and Traefik can act as Ingress Controllers.

Internal Services & Service Mesh

In a micro-services environment, most if not all of your micro-services will also be clients of other micro-services.

Istio, a "Mesh Network" solution, uses Envoy as a sidecar. This sidecar is configured from a central place (the control plane) and makes each micro-service talk to the others through Envoy.

This way the client does not need to know about all the topology.

That’s great but in a controlled environment (yours), where you control all the clients, sending all the traffic through a proxy is not always necessary.

Client Load Balancing

In Kubernetes you can create a headless service: there is no load-balanced single endpoint anymore, the service pods are directly exposed, and Kubernetes DNS will return all of them.

Here is an example service called geoipd scaled to 3.

Name:      geoipd
Address 1: 172-17-0-18.geoipd.default.svc.cluster.local
Address 2: 172-17-0-21.geoipd.default.svc.cluster.local
Address 3: 172-17-0-9.geoipd.default.svc.cluster.local

It’s up to your client to connect them all and load balance the connections.
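Conceptually, client-side load balancing is just cycling over the resolved addresses; here is a minimal, illustrative round-robin picker (the `rr` type is mine, not a real library, and the addresses are made-up pod addresses like the lookup above returns):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// rr is a trivial client-side round-robin picker over the addresses
// a headless-service DNS lookup would return.
type rr struct {
	addrs []string
	n     uint64
}

// next returns the next address, wrapping around the slice.
func (r *rr) next() string {
	i := atomic.AddUint64(&r.n, 1)
	return r.addrs[(i-1)%uint64(len(r.addrs))]
}

func main() {
	p := &rr{addrs: []string{"172.17.0.18:9200", "172.17.0.21:9200", "172.17.0.9:9200"}}
	for i := 0; i < 4; i++ {
		fmt.Println(p.next()) // cycles through the three addresses, then wraps
	}
}
```

A real client would also have to re-resolve periodically and drop dead connections, which is exactly what the gRPC machinery below handles for you.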

On the Go gRPC client side, a simple dns:/// notation will fetch the entries for you, then the roundrobin package will handle load balancing.

conn, err := grpc.Dial(
    "dns:///geoipd:8080", // example target: use your own service name and port
    grpc.WithBalancerName(roundrobin.Name),
    grpc.WithInsecure(),
)

This may sound like a good solution but it is not: the default DNS refresh frequency is 30 minutes, meaning that whenever you add new pods, it can take up to 30 minutes for them to start getting traffic! You can mitigate this by tweaking MaxConnectionAge on the gRPC server:

gsrv := grpc.NewServer(
    // MaxConnectionAge is just to avoid long-lived connections, to facilitate load balancing
    // MaxConnectionAgeGrace will tear them down, defaults to infinity
    grpc.KeepaliveParams(keepalive.ServerParameters{MaxConnectionAge: 2 * time.Minute}),
)

Even if you could refresh the list more often, you wouldn't know about pod evictions fast enough and you'd miss some traffic.

There is a nicer solution: implementing a gRPC client resolver for Kubernetes, talking to the Kubernetes API to get the endpoints and watch them constantly; this is exactly what Kuberesolver does.

// Register kuberesolver to grpc
kuberesolver.RegisterInCluster()

conn, err := grpc.Dial(
    "kubernetes:///geoipd:8080", // example target: use your own service name and port
    grpc.WithBalancerName(roundrobin.Name),
    grpc.WithInsecure(),
)

By using the kubernetes:/// scheme you tell Kuberesolver to fetch and watch the endpoints for the geoipd service.

For this to work, the pod must have GET and WATCH access to endpoints, using a role:

kubectl create role pod-reader-role --verb=get --verb=watch --resource=endpoints,services 
kubectl create sa pod-reader-sa 
kubectl create rolebinding pod-reader-rb --role=pod-reader-role --serviceaccount=default:pod-reader-sa 

Redeploy your app (the client) with the service account:

  serviceAccountName: pod-reader-sa

Deploy, scale up, scale down, kill your pods: your client is still sending traffic to a living pod!

I'm surprised it's not mentioned more often: client load balancing has done the job for years, and the same applies inside a Kubernetes environment.
It is fine for small to medium projects and can deal with a lot of traffic; this will do it for many of you, unless you are Netflix-sized…


Load-balancing proxies are great tools, especially useful on the edge of your platform. “Mesh Network” solutions are nice additions to our tool set, but the cost of operating and debugging a full mesh network could be really expensive and overkill in some situations, while a client load balancing solution is simple and easy to grasp.

Thanks to Prune who helped me with this post, and to Robteix & diligiant for reviewing.

04 Mar 2019, 00:32

Traefik gRPC Load Balancing and Traces Propagation

Following my recent blog post on setting up a dev environment in Kubernetes, here are some tips to use Traefik as a gRPC load balancer.

Traefik can be used on the edge to route incoming HTTP traffic to your Kubernetes cluster, but it also supports gRPC.

gRPC Load Balancing with Traefik

Here I have a gRPC service I want to expose on the edge.

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    name: "myapp"
    type: "grpc"
spec:
  ports:
    - port: 9200
      name: "grpc"
      targetPort: grpc
      protocol: TCP
  selector:
    app: "myapp"
  clusterIP: None

Note the clusterIP: None; it's a headless service.

It creates a non-load-balanced service: the pods can be accessed directly.

myapp.default.svc.cluster.local.    2       IN      A
myapp.default.svc.cluster.local.    2       IN      A
myapp.default.svc.cluster.local.    2       IN      A

Here is the ingress for Traefik.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
  labels:
    name: "myapp"
  annotations:
    ingress.kubernetes.io/protocol: h2c
spec:
  rules:
  - host: myapp-lb.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 9200

Note the h2c protocol annotation, indicating HTTP/2 without TLS to your backend!



Traefik can be configured to emit tracing.

I'm using ocgrpc from OpenCensus for gRPC metrics & traces.
It automatically emits several counters for gRPC, and traces, using the StatsHandler.

Unfortunately ocgrpc does not yet propagate Jaeger traces; I've temporarily forked it to support Jaeger.

As you can see you can follow the request from Traefik down to your services.


Happy tracing !

21 Feb 2019, 00:19

Kubernetes Quick Setup with Prometheus, Grafana & Jaeger


When starting on a new project or prototyping on a new idea, I find myself doing the same tasks again and again.
Thanks to Kubernetes, it's possible to set up a new env from scratch really fast.

Here is a quick setup (mostly notes) to create a dev environment using Minikube and the workflow I’m using with it.

Not knowing in advance where this future project will be hosted, I try to stay platform agnostic.
OpenCensus or OpenTracing can abstract the target platform, letting you choose what tooling you want for your dev.

I consider some tools to be mandatory these days:

  • Jaeger for tracing
  • Prometheus for instrumentation/metrics
  • Grafana to display these metrics
  • A logging solution: this is already taken care of by Kubernetes and depends on your cloud provider (StackDriver on GCP…), otherwise use another tool like ELK stack.
    On your dev, plain structured logs to stdout with Kubernetes dashboard or Stern should be fine.
  • A messaging system: for example NATS but out of the scope of this post.

Tools Installation

I find it easier to let Minikube open reserved ports, though it's not mandatory:

minikube start --extra-config=apiserver.service-node-port-range=80-30000

I'll use Traefik as the Ingress controller, to simplify access to several admin UIs later.

I'm not a big fan of Helm, so here is a little trick: I only use helm to create my deployment templates, using the charts repo as a source template, so I can commit or modify the resulting generated files (thanks Prune for this tip).

git clone git@github.com:helm/charts.git 

helm template charts/stable/traefik --name traefik --set metrics.prometheus.enabled=true --set rbac.enabled=true \
--set service.nodePorts.http=80 --set dashboard.enabled=true --set dashboard.domain=traefik-ui.minikube > traefik.yaml 

helm template charts/stable/grafana --name grafana --set ingress.enabled=true \ 
--set ingress.hosts\[0\]=grafana.minikube --set persistence.enabled=true --set persistence.size=100Mi > grafana.yaml 

helm template charts/stable/prometheus --name prometheus --set server.ingress.enabled=true \ 
--set server.ingress.hosts\[0\]=prometheus.minikube --set alertmanager.enabled=false \ 
--set kubeStateMetrics.enabled=false --set nodeExporter.enabled=false --set server.persistentVolume.enabled=true \
--set server.persistentVolume.size=1Gi --set pushgateway.enabled=false > prometheus.yaml 

A lot more templates are available: NATS, Postgresql …

For Jaeger, a development-ready solution exists; here is mine, slightly tweaked to use an ingress:

curl -o jaeger.yaml https://gist.githubusercontent.com/akhenakh/615686891340f5306dcbed82dd1d9d67/raw/41049afecafb05bc29de3b0d25208c784f963695/jaeger.yaml

Deploy to Minikube (ensure your current context is minikube…); if you need to work on several projects at the same time, remember you can use Kubernetes namespaces (beware: some helm templates override it).

kubectl create -f traefik.yaml
kubectl create -f jaeger.yaml
kubectl create -f grafana.yaml
kubectl create -f prometheus.yaml

Again, to ease my workflow, I want Traefik to bind port 80: edit the service and change it to nodePort 80.

kubectl edit service traefik

Add some URLs to your /etc/hosts:

echo "$(minikube ip) prometheus.minikube" | sudo tee -a /etc/hosts 
echo "$(minikube ip) grafana.minikube" | sudo tee -a /etc/hosts 
echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
echo "$(minikube ip) jaeger.minikube" | sudo tee -a /etc/hosts

Point your browser to any of these addresses and you are good to go !

Deploy your own apps

First, remember to always use the Minikube Docker daemon:

eval `minikube docker-env` 

My applications read their parameters from the environment; in Go I'm using namsral/flag, so the flag -httpPort can also be set by the environment variable HTTPPORT.

I then use templates, where I set all my environment variables, to create my yaml deployment.


So typical Makefile targets would fill the template with envsubst:

VERSION := $(shell git describe --always --tags)
DATE := $(shell date -u +%Y%m%d.%H%M%S)
LDFLAGS := -ldflags "-X=main.version=$(VERSION)-$(DATE)"
PROJECT = mysuperproject

helloworld:
	cd helloworld/cmd/helloworld && go build $(LDFLAGS)

helloworld-image: helloworld
	cd helloworld/cmd/helloworld && docker build -t helloworld:$(VERSION) .

helloworld-deploy: NAME=helloworld
helloworld-deploy: helloworld-image
	cat deployment/project-apps.yaml | envsubst | kubectl apply -f -
	cat deployment/project-services.yaml | envsubst | kubectl apply -f -

project-clean:
	kubectl delete --ignore-not-found=true deployments,services,replicasets,pods --selector=appgroup=$(PROJECT)

The Dockerfile contains nothing but FROM gcr.io/distroless/base and a copy of the helloworld binary.

Note all this setup is only good for your dev, production deployment is another story.

With make helloworld-deploy, compilation, image creation and deployment take less than 2s over here!

Shell tool

Another useful trick when working with new tools is to have a shell available inside Kubernetes to experiment with.

Create an image for this purpose, and copy your tools and clients.

FROM alpine:3.9
RUN apk add --no-cache curl busybox-extras tcpdump

WORKDIR /root/
COPY helloworldcli .
ENTRYPOINT ["tail", "-f", "/dev/null"]

And the relevant Makefile targets:

debugtool-image: helloworldcli
	cp helloworld/cmd/helloworldcli/helloworldcli debugtool/helloworldcli
	cd debugtool && docker build -t debugtool:latest .
	rm -f debugtool/helloworldcli

debugtool-deploy: debugtool-image
	kubectl delete --ignore-not-found=true pod debugtool
	sleep 2
	kubectl run --restart=Never --image=debugtool:latest --image-pull-policy=IfNotPresent debugtool

debugtool-shell:
	kubectl exec -it debugtool -- /bin/sh

By calling make debugtool-shell, you’ll be connected on a shell inside Kubernetes.


Hope this will help some of you; I would love to hear your own tricks and setups for developing on Minikube!

27 Aug 2018, 00:19

Wasm with Go to build an S2 cover map viewer

I needed a reason to use the new Go 1.11 Wasm port for “real”.

To make it short, it compiles Go code to the Wasm binary format, run by a virtual machine in web browsers.

I've always needed a debug tool to display S2 Cells on a map for different shapes; some online tools already exist.

I had planned on a Qt Go app, or a QGIS plugin with C++ bindings to Python, but shipping those modules would be a nightmare.

I needed another solution; a simple web app would do it, nothing very complicated, but since I hate HTML/CSS/js, I never bothered to start one…
The s2map solution was great, but having to run a backend and pay for it was a no-go in the long run; plus, since I work with Go, I needed something relying on the S2 Go port for matching results.

So here I am doing some web dev…

Wasm & Go

First Wasm is not the best solution (GopherJS maybe is) to my problem but hey it’s working.

The main() is a bit weird but close enough to a normal Go program:

func registerCallbacks() {
	js.Global().Set("geocell", js.NewCallback(geoJSONToCells))
}

func main() {
	c := make(chan struct{}, 0)
	registerCallbacks()
	println("Wasm ready")
	<-c
}
Since our function geocell() will be called from js, we block on a channel that will never be triggered, so main() won't return.

NewCallback() wants a fn func(args []js.Value); it means you can't return values directly from Go to js:

func geoJSONToCells(i []js.Value)  

Just a slice of untyped values thank you js.

All other functions (not exposed to js) can be normal Go functions, packages …

Interaction with the DOM is very limited, via the syscall/js package.
So far, to update the UI (from Go back to js), I pass the result of the computation via a Set and call a js method, very hackish…

func updateUIWithData(data string) {
	js.Global().Set("data", data)
	js.Global().Call("updateui")
}
updateui() is a regular js function that processes data and updates the DOM.


The app calls a lot of Go code and libraries that were not written for the web, yet they now run from a webpage without any backend:

  • the web interface creates a shape on a js Leaflet layer
  • exports the shape in GeoJSON
  • serializes it to JSON
  • calls the geoJSONToCells() func passing the JSON as string argument
  • computes the S2 cells in the Go world
  • sets the result back via a js var containing GeoJSON as string
  • reads back this GeoJSON and displays it as a Leaflet layer

The first load and run of the Wasm binary is very slow on Chrome but not on Firefox; the execution itself is really fast.


Code is on Github and Demo is hosted on Github Pages at https://s2.inair.space, sorry folks Wasm works on phones but the demo won’t be very useful on mobile.


You can argue 11MB (1.5MB gzipped) is too big for a webapp providing only one function, and you are probably right (then look at a modern webpage), but how big would a Python app shipped with the C++ library be, or a full Qt app…

Also the size could be a non-issue, in a near future, a solution like jsgo.io could provide package level CDN caching.


Again you should have really good reasons to use Wasm, the back and forth between js & Wasm is nonsense, but the tooling will improve and I’m sure we will see plenty of solutions around it (one is gRPC in the browser).

EDIT: Call() can invoke a js method directly, no need for eval!

01 Aug 2018, 00:19

My Own Car System, Rear Camera, Offline Maps & Routing, Map Matching with Go on Raspberry Pi part II

This is my journey building an open source car system with Go & Qt, rear camera, live OpenGL map …

Cross compilation

In Part I, I had to patch qtmultimedia for the camera to work, but Qt compilation is resource hungry; same goes for the osrm compilation, the memory of the Raspberry Pi being too small.

I had to set up a cross-compilation system, in my case for armv7h.

QML Development

Since most of the application is in QML, I used the C++ main.cpp launcher as long as possible during development.
The moment I needed to inject data from the outside world (like the GPS location) into QML, I switched to Go, using the therecipe Qt Go bindings.

The Go bindings project is young but the main author is really active fixing issues.

It makes desktop applications easy to code without the hassle of C++ (at least for me).

About QML: by separating the logic and the forms using .ui.qml files, you can still edit your views with Qt Creator.
That's just the narrative; the truth is that Creator is really buggy and I edited the ui files by hand most of the time.
I worked with Interface Builder on iOS for years; Qt is painful, the lack of a decent visual editor for QML really hurts.

Serving the map without internet access

In Part I, we talked about OpenMapTiles and OpenGL rendering, but I needed a web server capable of reading MBTiles format and serving the necessary assets for the map to be rendered.

I've created mbmatch in Go for that purpose, so Mocs can render the map without Internet access; it will also map match the positions in the future.

Experimenting with another touch screen

I'm using a less performant but smaller LANDZO 5 inch 800x480 touch display.
This touchscreen is handled as a one-button mouse.

It can be calibrated using tslib's ts_calibrate command.

Then in your start env tell Qt to use tslib.



Like I said in Part I, the Linux GPS daemons use obscure and overcomplicated protocols, so I decided to write my own GPS daemon in Go with a gRPC stream interface. You can find it here.
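A server-streaming RPC is a natural fit here: clients subscribe once and receive fixes as they arrive. Hypothetically, the service definition could look like this (names are illustrative, the real one lives in the linked repo):

```proto
syntax = "proto3";

message LocationRequest {}

message Location {
  double latitude = 1;
  double longitude = 2;
  double speed = 3;
  int64 timestamp = 4;
}

service GPSService {
  // Server-streaming: one subscription, a stream of fixes back.
  rpc LiveLocation(LocationRequest) returns (stream Location);
}
```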

I’m also not satisfied with OSRM’s map matching for real time display; I may rewrite one using mbmatch.


I’ve started POI lookups with full text search and geo proximity using bleve, exposing an API compatible with the OSM API so it can be used directly by QML Locations.
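Bleve handles the full text side; geo proximity itself reduces to a great-circle distance check. A minimal stdlib-only Go sketch of that check (not the actual indexing code):

```go
package main

import (
	"fmt"
	"math"
)

// haversine returns the great-circle distance in kilometers
// between two (lat, lon) points given in degrees.
func haversine(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadiusKm = 6371.0
	toRad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*
			math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKm * math.Asin(math.Sqrt(a))
}

func main() {
	// Quebec City to Montreal, roughly 233 km as the crow flies.
	d := haversine(46.8139, -71.2080, 45.5017, -73.5673)
	fmt.Printf("%.0f km\n", d)
}
```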

Night Map

I’m a huge fan of the Solarized colors, so I’ve made a style for the map; you can find it here.


Speeding up boot

systemctl mask systemd-udev-settle.service
systemctl mask lvm2-activation-net.service
systemctl mask lvm2-monitor.service


The project is far from finished and not ready for everybody, but it’s fun to play with.

I’ve open sourced most of the code for Mocs on GitHub; feel free to contribute.

10 Jun 2018, 09:19

My Own Car System, Rear Camera, Offline Maps & Routing on Raspberry Pi part I

At first I needed a car rear camera, one thing led to another…

My car, from 2011, only has an LCD display and no rear camera, so I bought a PAL rear camera; we ran some cables from the rear window to the front, and then everything began.
Here is my journey to transform my car into a modern system running on an RPi3 (a never ending project).


I’m using an RPi3 (old model) with Arch Linux ARM, but any system will do.

The screen is an Eleduino Raspberry Pi 7 inch 1024x600 IPS capacitive touch display with HDMI input.
A USB 2.0 EasyCap retrieves the composite signal.

No drivers needed for either the screen or the video capture dongle.

mplayer tv:// -tv driver=v4l2:device=/dev/video0:fps=25:outfmt=yuy2:norm=PAL

mplayer worked out of the box, so I thought everything was okay with the camera… or so I thought.

I needed a GUI to display the camera and the date (at that time this project was just a rear camera display).
So I chose Qt & Golang, not the usual contenders, but I can’t handle C++ and had experience with QtGo, and modern Qt apps are just QML code anyway… So I thought…

I’ve started to code a small QML app but when displaying the video I got:

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video0' does not support progressive interlacing
Additional debug info:
gstv4l2object.c(3768): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Device wants interleaved interlacing

Qt via GStreamer requests progressive video, which this device can’t deliver :(
One solution is to force a pipeline with an interlacer.

Easy on the command line gst-launch-1.0 v4l2src ! interlace ! xvimagesink

Not that easy via Qt: I had to patch the Qt GStreamer plugin camerabinsession.cpp to insert a filter on the preview, at the end of GstElement *CameraBinSession::buildCameraSource():

    const QByteArray envInterlace = qgetenv("QT_GSTREAMER_CAMERABIN_VIDEO_INTERLACE");
    if (envInterlace.length() > 0) {
        GstElement *interlace = gst_element_factory_make("interlace", NULL);
        if (interlace == NULL)
            g_error("Could not create 'interlace' element");

        // Set the interlacer as the viewfinder filter on the camerabin
        g_object_set(G_OBJECT(m_camerabin), "viewfinder-filter", interlace, NULL);
        qDebug() << "set camera filter" << interlace;
        gst_object_unref(interlace);
    }

I had a serial GPS around, so why not display a moving map?

Enable the serial port in /boot/config.txt (note: Bluetooth must be disabled …)
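On an RPi3 this typically means something like the following in /boot/config.txt (a sketch; overlay names vary between firmware versions):

```shell
# Free the PL011 UART for the GPS (this disables Bluetooth on the Pi 3)
dtoverlay=pi3-disable-bt
enable_uart=1
```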


pin 8 TXD
pin 10 RXD

remove console=ttyAMA0,115200 and kgdboc=ttyAMA0,115200 from /boot/cmdline.txt

I thought it would be very easy to read NMEA via serial.
It was: gpsd worked in seconds, but… it seems you can’t disable the automatic baud rate detection, which means 4 seconds lost at startup.
Plus Qt uses libgeoclue or Gypsy, neither of which wants to talk to gpsd.
I tried both of them; they didn’t work, they’re a mess to debug, and the documentation is just the API…

So one thing led to another… I’ve written a very small and simple gpsd in Go with a gRPC interface, so it can be queried from anything.
It’s also a bit more advanced, since it can map match & route match the positions with OSRM.
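Reading NMEA by hand really is simple; here is a minimal Go sketch parsing the coordinates out of a $GPGGA sentence (no checksum validation, field layout per the usual NMEA 0183 convention):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCoord converts an NMEA ddmm.mmmm (or dddmm.mmmm for longitude)
// field plus its hemisphere letter into signed decimal degrees.
func parseCoord(field, hemi string) (float64, error) {
	dot := strings.Index(field, ".")
	if dot < 3 {
		return 0, fmt.Errorf("bad coordinate %q", field)
	}
	deg, err := strconv.ParseFloat(field[:dot-2], 64)
	if err != nil {
		return 0, err
	}
	min, err := strconv.ParseFloat(field[dot-2:], 64)
	if err != nil {
		return 0, err
	}
	v := deg + min/60
	if hemi == "S" || hemi == "W" {
		v = -v
	}
	return v, nil
}

func main() {
	// Sample $GPGGA fix sentence (checksum ignored in this sketch).
	s := "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
	f := strings.Split(s, ",")
	lat, _ := parseCoord(f[2], f[3])
	lon, _ := parseCoord(f[4], f[5])
	fmt.Printf("lat %.4f lon %.4f\n", lat, lon)
}
```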

Offline maps

The OpenMapTiles project is great for generating vector data in MBTiles format; you can serve them with mbmatch.
Qt’s Map QML can display them using the mapboxgl driver, and some free styles are provided.

Here is an example QML Map plugin.

    Plugin {
        id: mapPlugin
        name: "mapboxgl"
        PluginParameter {
            name: "mapboxgl.mapping.additional_style_urls"
            value: "http://localhost:4000/osm-liberty-gl.style"
        }
    }

Note on X11 and EGL:
Using the mapboxgl renderer under X11 on the RPi3 takes a lot of resources.
Qt5 is capable of talking directly to the GPU without X11; the performance difference is night and day.

So just run your Qt app without X11 with the following env vars.
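For example (a sketch, assuming Qt was built with the eglfs platform plugin; the binary name is hypothetical):

```shell
# Render full screen directly via EGL, no X11 needed
export QT_QPA_PLATFORM=eglfs
./mocs
```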


Offline Routing

Fortunately the provided Qt osm plugin knows how to route using the OSRM API.
So you can run a local OSRM backend for routing and it will just work.

Generate the route indexes.

osrm-extract -p /usr/share/osrm/profiles/car.lua quebec-latest.osm.pbf 
osrm-contract quebec-latest.osrm
osrm-routed quebec-latest.osrm
    Plugin {
        id: routePlugin
        name: "osm"
        PluginParameter {
            name: "osm.routing.host"
            value: "http://localhost:5000/route/v1/driving/"
        }
    }


The app can display the rear camera and a moving map!!

Part 2 will be about searching places by extracting OSM data and indexing them in a small Go program that can run on the RPi, reading OBD data from the car via Bluetooth, packaging the whole thing, and open sourcing some code.