27 Aug 2018, 00:19

Wasm with Go to build an S2 cover map viewer

I needed a reason to use the new Go 1.11 Wasm port for “real”.

To make it short: it compiles Go code to the Wasm binary format, to be run by a virtual machine embedded in web browsers.

I’ve always needed a debug tool to display S2 cells on a map for different shapes; some online tools already exist.

I’d planned a Qt Go app or a QGIS plugin using the C++ bindings from Python, but shipping those modules would be a nightmare.

I needed another solution; a simple web app would do, nothing complicated, but since I hate HTML/CSS/js, I’ve never bothered to start one…
The s2map solution was great, but having to run a backend and pay for it was a no-go in the long run; plus, since I work with Go, I needed something relying on the S2 Go port for matching results.

So here I am doing some web dev…

Wasm & Go

First, Wasm is probably not the best solution to my problem (GopherJS may be a better fit), but hey, it’s working.

The main() is a bit weird but close enough to a normal Go program:

func registerCallbacks() {
	// expose geoJSONToCells to js as geocell()
	js.Global().Set("geocell", js.NewCallback(geoJSONToCells))
}

func main() {
	c := make(chan struct{}, 0)
	println("Wasm ready")
	registerCallbacks()
	<-c // block forever so the js side can keep calling into Go
}

Since our function geocell() will be called from js, we wait on a channel that will never be triggered, so main() never returns.

NewCallback() wants a fn func(args []js.Value), which means you can’t return a value directly from Go to js:

func geoJSONToCells(i []js.Value)  

Just a slice of untyped values, thank you js.

All other functions (not exposed to js) can be normal Go functions, packages …

Interaction with the DOM is very limited via the syscall/js package.
So far, to update the UI (from Go back to js), I pass the result of the computation via a Set and call a js method; very hackish…

func updateUIWithData(data string) {
	js.Global().Set("data", data)
	js.Global().Call("updateui")
}
The updateui() is a regular js function that processes data and updates the DOM.


The app calls a lot of Go code and libraries that were not written for the web but are now executed from a webpage without any backend:

  • the web interface creates a shape on a js Leaflet layer
  • exports the shape in GeoJSON
  • serializes it to JSON
  • calls the geoJSONToCells() func passing the JSON as string argument
  • computes the S2 cells in the Go world
  • sets the result back via a js var containing GeoJSON as string
  • reads back this GeoJSON and displays it as a Leaflet layer
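The Go side of that pipeline starts by decoding the GeoJSON string it receives; here is a minimal sketch of that first step (the type and function names are hypothetical — the real app then hands the ring to the S2 coverer):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// geometry mirrors the minimal subset of GeoJSON we need:
// a Polygon's coordinates are rings of [lng, lat] pairs.
type geometry struct {
	Type        string        `json:"type"`
	Coordinates [][][]float64 `json:"coordinates"`
}

// outerRing extracts the outer ring of a GeoJSON Polygon string.
func outerRing(geojson string) ([][]float64, error) {
	var g geometry
	if err := json.Unmarshal([]byte(geojson), &g); err != nil {
		return nil, err
	}
	if g.Type != "Polygon" || len(g.Coordinates) == 0 {
		return nil, fmt.Errorf("expected a Polygon, got %q", g.Type)
	}
	return g.Coordinates[0], nil
}

func main() {
	ring, err := outerRing(`{"type":"Polygon","coordinates":[[[2.22,48.83],[2.40,48.83],[2.40,48.90],[2.22,48.83]]]}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(ring)) // 4 vertices in the outer ring
}
```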

The first load and run of the Wasm binary is very slow on Chrome but not on Firefox; the execution itself is really fast.


Code is on GitHub and the demo is hosted on GitHub Pages at https://s2.inair.space; sorry folks, Wasm works on phones but the demo won’t be very useful on mobile.


You can argue 11M (1.5M gzipped) is too big for a webapp providing only one feature, and you are probably right (then again, look at a modern webpage), but how big would a Python app shipped with the C++ library be, or a full Qt app…

Also, the size could become a non-issue: in the near future, a solution like jsgo.io could provide package-level CDN caching.


Again, you should have really good reasons to use Wasm; the back and forth between js & Wasm is nonsense, but the tooling will improve and I’m sure we will see plenty of solutions built around it (one is gRPC in the browser).

EDIT: Call() can call a js method, no need for eval!

01 Aug 2018, 00:19

My Own Car System, Rear Camera, Offline Maps & Routing, Map Matching with Go on Raspberry Pi part II

This is my journey building an open source car system with Go & Qt, rear camera, live OpenGL map …

Cross compilation

In Part I, I had to patch qtmultimedia for the camera to work, but compiling Qt is resource hungry, and the same goes for compiling osrm: the memory of the Raspberry Pi is just too small.

I had to set up a cross-compilation system, in my case for armv7h.

QML Development

Since most of the application is in QML, I used the C++ main.cpp launcher for as long as possible during development.
The moment I needed to inject data from the outside world (like the GPS location) into QML via Qt, I switched to Go using therecipe’s Qt Go bindings.

The Go bindings project is young but the main author is really active fixing issues.

It makes desktop applications easy to code without the hassle of C++ (at least for me).

About QML: by separating logic and forms using .ui.qml files, you can still edit your views with Qt Creator.
That’s just the narrative; the truth is that Creator is really buggy, and I edited the ui files by hand most of the time.
I worked with Interface Builder on iOS for years; Qt is painful, and the lack of a decent visual editor for QML really hurts.

Serving the map without internet access

In Part I, we talked about OpenMapTiles and OpenGL rendering, but I needed a web server capable of reading MBTiles format and serving the necessary assets for the map to be rendered.

I’ve created mbmatch in Go for that purpose, so Mocs can render the map without Internet access; it will also map match positions in the future.

Experimenting with another touch screen

I’m using a less performant but smaller LANDZO 5 Inch Touch Display (800x480).
This touchscreen is handled as a one-button mouse.

It can be calibrated using tslib’s ts_calibrate command.

Then tell Qt to use tslib in your startup environment.
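Something along these lines, assuming the touchscreen shows up as /dev/input/event0 (the device node varies):

```shell
export TSLIB_TSDEVICE=/dev/input/event0
export TSLIB_CALIBFILE=/etc/pointercal              # written by ts_calibrate
export QT_QPA_GENERIC_PLUGINS=tslib:/dev/input/event0
```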



Like I said in part I, the Linux GPS daemons use obscure and over-complicated protocols, so I decided to write my own GPS daemon in Go using a gRPC stream interface. You can find it here.

I’m also not satisfied with OSRM’s map matching for real-time display; I may rewrite one using mbmatch.


I’ve started POI lookups with full-text search and geo proximity using bleve, exposing an API compatible with the OSM API so it can be used directly by QML Location.

Night Map

I’m a huge fan of the Solarized colors, so I’ve made a Solarized style for the map; you can find it here.


Speeding up boot

systemctl mask systemd-udev-settle.service
systemctl mask lvm2-activation-net.service
systemctl mask lvm2-monitor.service


The project is far from finished and not ready for everybody but it’s fun to play with.

I’ve open sourced most of the code for Mocs on GitHub; feel free to contribute.

10 Jun 2018, 09:19

My Own Car System, Rear Camera, Offline Maps & Routing on Raspberry Pi part I

At first I needed a car rear camera, one thing led to another…

My car, from 2011, only has an LCD display and no rear camera, so I bought a PAL rear camera; we ran some cables from the rear window to the front, and then everything began.
Here is my journey to transform my car into a modern system running on an RPi3 (a never ending project).


I’m using an RPi3 (old model).

With Arch for ARM but any system will do.

The screen is an Eleduino Raspberry Pi 7 Inch 1024x600 IPS HDMI capacitive touch screen display.
A USB 2.0 EasyCap retrieves the composite signal.

No driver needed for both the screen and the video capture dongle.

mplayer tv:// -tv driver=v4l2:device=/dev/video0:fps=25:outfmt=yuy2:norm=PAL

mplayer worked out of the box, so I thought everything was okay with the camera… so I thought.

I needed a GUI to display the camera and the date (at this time, the project was just a rear camera display).
So I chose Qt & Golang, not the usual contenders, but I can’t handle C++ and had experience with QtGo, and modern Qt apps are just QML code anyway… So I thought…

I’ve started to code a small QML app but when displaying the video I got:

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video0' does not support progressive interlacing
Additional debug info:
gstv4l2object.c(3768): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Device wants interleaved interlacing

Qt via GStreamer does not allow non-interlaced video :(
One solution is to force a pipeline with an interlacer.

Easy on the command line gst-launch-1.0 v4l2src ! interlace ! xvimagesink

Not that easy via Qt: I had to patch the Qt GStreamer plugin camerabinsession.cpp to insert a filter on the preview, at the end of GstElement *CameraBinSession::buildCameraSource():

    const QByteArray envInterlace = qgetenv("QT_GSTREAMER_CAMERABIN_VIDEO_INTERLACE");
    if (envInterlace.length() > 0) {
        GstElement *interlace = gst_element_factory_make("interlace", NULL);
        if (interlace == NULL)
            g_error("Could not create 'interlace' element");

        g_object_set(G_OBJECT(m_camerabin), "viewfinder-filter", interlace, NULL);
        qDebug() << "set camera filter" << interlace;
        gst_object_unref(interlace);
    }

I had a serial GPS around, why not display a moving map?

Enable serial port in /boot/config.txt (note Bluetooth must be disabled …)


pin 8 TXD
pin 10 RXD
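On an RPi3 that typically translates to something like this (the overlay name varies across firmware versions, so double-check yours):

```shell
# /boot/config.txt
enable_uart=1              # expose the UART on pins 8 & 10
dtoverlay=pi3-disable-bt   # otherwise Bluetooth owns the good UART
```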

Remove console=ttyAMA0,115200 and kgdboc=ttyAMA0,115200 from /boot/cmdline.txt.

I thought it would be very easy to read NMEA via serial.
It was: gpsd worked in seconds, but… it seems you can’t disable the baud rate autodetection, which means 4s lost at startup.
Plus Qt uses libgeoclue or Gypsy, neither of which wants to talk to gpsd.
I tried both of them, they didn’t work; it’s a mess to debug and the documentation is just the API…

So one thing led to another… I’ve written a very small and simple gpsd in Go with a gRPC interface, so it can be queried from anything.
It’s also a bit more advanced since it can map match & route match the positions with OSRM.

Offline maps

The OpenMapTiles project is great for generating vector data in MBTiles format; you can serve them with mbmatch.
Qt Map QML can display them using the mapboxgl driver, and some free styles are provided.

Here is an example QML Map plugin.

    Plugin {
        id: mapPlugin
        name: "mapboxgl"
        PluginParameter {
            name: "mapboxgl.mapping.additional_style_urls"
            value: "http://localhost:4000/osm-liberty-gl.style"
        }
    }

Note on X11 and EGL:
Using the mapboxgl renderer under X11 on the RPi3 takes a lot of resources.
Qt5 is capable of talking directly to the GPU without X11; the performance difference is night and day.

So just run your Qt app without X11 with the following env vars.
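I believe the relevant switch is the eglfs platform plugin (device-specific knobs may be needed on top of this):

```shell
export QT_QPA_PLATFORM=eglfs   # render straight to the GPU, no X11
```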


Offline Routing

Luckily, the provided Qt osm plugin knows how to route using the OSRM API.
So you can run a local OSRM backend for routing and it will just work.

Generate the route indexes.

osrm-extract -p /usr/share/osrm/profiles/car.lua quebec-latest.osm.pbf 
osrm-contract quebec-latest.osrm
osrm-routed quebec-latest.osrm
    Plugin {
        id: routePlugin
        name: "osm"
        PluginParameter {
            name: "osm.routing.host"
            value: "http://localhost:5000/route/v1/driving/"
        }
    }


The app can display the rear camera and a moving map !!

Part 2 will be about searching places by extracting OSM data and indexing it in a small Go program that can run on the RPi, reading OBD data from the car via Bluetooth, packaging the whole thing and open sourcing some code.

23 Apr 2018, 14:37

Bitlbee: Slack, Hangouts & Facebook via IRC gateway

Having multiple clients to handle multiple networks is a mess, especially the Slack client which is really heavy and annoying.

Slack is deprecating its IRC gateway interface on May 15th, 2018.

Bitlbee is an IRC server working as a gateway to different IMs.

Optionally Bitlbee can be compiled with LibPurple to support even more networks, like Slack and Hangouts.

Here are some notes for Slack, Facebook and Hangouts to be enabled with Bitlbee.


You need Slack for libpurple, compiled and installed; it will provide /usr/lib/purple-2/libslack.so.
Restart Bitlbee, then check with help purple that you can see * slack (Slack).

account add slack myuser@domain.slack.com
Create a Legacy token.
account slack set api_token xoxp-xxxxxxxxx
account slack on

Join an existing Slack channel:
chat add slack general then join #general

Hangouts, Gtalk

You need Hangouts for libpurple (AUR on Arch); it will provide /usr/lib/purple-2/libhangouts.so.

account add hangouts youremail@yourdomain.com


You need Facebook for Bitlbee.

List existing group chats with fbchats facebook; sometimes it will only display older chats.

Look at the URL in your browser when clicking on a chat on https://messenger.com, grab the id of the group chat and type the command:
chat add facebook 1393844480dd74 #myclub then join #myclub channel.

More Facebook commands.

Apply to all

Autojoin a channel on connection:
channel myclub set auto_join true

18 Jan 2018, 08:20

Google S2 with Python & Jupyter

Google is working again on S2, a spatial library!!!

And they have even created a website to communicate about it: s2geometry.

The C++ port contains a Python Swig interface.

I’ve been using an unofficial Python port with Jupyter for years; now things are much simpler.

If you are on Arch, I’ve created a package: simply install s2geometry-git from the AUR.

First we want a clean Jupyter install from scratch:

virtualenv3 ~/dev/venv3
source ~/dev/venv3/bin/activate
pip install jupyter
pip install cython
pip install numpy
pip install matplotlib scikit-learn scipy Shapely folium geojson Cartopy
cp /usr/lib/python3.6/site-packages/_pywraps2.so $VIRTUAL_ENV/lib/python3.6/site-packages      
cp /usr/lib/python3.6/site-packages/pywraps2.py $VIRTUAL_ENV/lib/python3.6/site-packages

Here is a simple test map.

import folium
import pywraps2 as s2

# create a rect in s2
region_rect = s2.S2LatLngRect(
        s2.S2LatLng.FromDegrees(48.831776, 2.222639),
        s2.S2LatLng.FromDegrees(48.902839, 2.406))

# ask s2 to create a cover of this rect
coverer = s2.S2RegionCoverer()
covering = coverer.GetCovering(region_rect)
print([c.ToToken() for c in covering])

# create a map
map_osm = folium.Map(location=[48.86, 2.3],zoom_start=12, tiles='Stamen Toner')

# get vertices from rect to draw them on map
rect_vertices = []
for i in [0, 1, 2, 3, 0]:
    vertex = region_rect.GetVertex(i)
    rect_vertices.append([vertex.lat().degrees(), vertex.lng().degrees()])
# draw the cells
style_function = lambda x: {'weight': 1, 'fillColor':'#eea500'}
for cellid in covering:
    cell = s2.S2Cell(cellid)
    vertices = []
    for i in range(0, 4):
        vertex = cell.GetVertex(i)
        latlng = s2.S2LatLng(vertex)
        # GeoJSON coordinates are [lng, lat]
        vertices.append([latlng.lng().degrees(), latlng.lat().degrees()])
    gj = folium.GeoJson({"type": "Polygon", "coordinates": [vertices]}, style_function=style_function)
    gj.add_to(map_osm)
# warning PolyLine is lat,lng based while GeoJSON is not
ls = folium.PolyLine(rect_vertices, color='red', weight=2)
ls.add_to(map_osm)
map_osm

And here is the resulting Jupyter notebook

08 Oct 2017, 14:28

Hacking Temperature Radio Sensors and Graphing with Prometheus

One year ago I started collecting temperatures in my house using Acurite sensors.


These sensors are not too expensive and of good quality, but the “base”, aka the radio receiver connected to the internet, is costly and totally closed: it sends your data to the Cloud™. And it’s not just Acurite; all those “IoT” devices are generally poor on the software side.

Receiving radio data

Most of these sensors have their protocols already reverse engineered; you only need the radio receiver part.

RTL-SDR dongles to the rescue, with a great community around them.

Prune998 and I added some JSON support to the acurite driver and submitted it to the project.

Collecting and graphing the data

The last needed piece was something to collect the data and send it to Prometheus.

So I wrote a quick Go program to do exactly that: Acurite to graph.
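That program essentially turns rtl_433’s JSON lines into something Prometheus can scrape. Here is a minimal sketch of the translation step, not the actual program (the JSON field names are assumed — check what your driver really emits):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// reading is the subset of the rtl_433 JSON output we care about
// (field names assumed; adjust them to your driver's output).
type reading struct {
	Model       string  `json:"model"`
	ID          int     `json:"id"`
	Temperature float64 `json:"temperature_C"`
}

// promLine renders one reading as a Prometheus text-format sample.
func promLine(r reading) string {
	return fmt.Sprintf("temperature_celsius{model=%q,id=\"%d\"} %g", r.Model, r.ID, r.Temperature)
}

func main() {
	// in real use this would be `rtl_433 -F json` piped into os.Stdin
	input := `{"model":"Acurite tower sensor","id":7,"temperature_C":21.5}` + "\n"
	sc := bufio.NewScanner(strings.NewReader(input))
	for sc.Scan() {
		var r reading
		if err := json.Unmarshal(sc.Bytes(), &r); err != nil {
			continue // skip malformed lines
		}
		fmt.Println(promLine(r))
	}
}
```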

Et voilà, we can graph all the sensors in our home.


It has been tested on OSX & Linux.

08 Oct 2017, 09:57

Notes on PacBSD

PacBSD is a FreeBSD kernel/world with the Pacman Arch package manager and an optional OpenRC init system.

In short: ZFS, DTrace and the FreeBSD kernel, with the simplicity of Arch for package management and the Gentoo-style init.

It’s experimental, unfinished, and lacks proper documentation, but it works and could be/should be the solution we are waiting for :).

Here are some notes on installation (in QEMU); note that it slightly diverges from the official install since it’s using a whole ZFS disk, so no GPT.

zpool create tank /dev/vtbd0
zfs create -o canmount=off -o mountpoint=legacy tank/ROOT
zfs create -o canmount=on -o compression=lz4 -o mountpoint=/ tank/ROOT/pacbsd
zfs create -o compression=lz4 -o mountpoint=/home tank/HOME
zfs create -o compression=lz4 -o mountpoint=/root tank/HOME/root
pacstrap /mnt base

Add to loader.conf

arch-chroot /mnt
ln -s /usr/share/zoneinfo/zone/subzone /etc/localtime
rc-update add zfs default
echo 'hostname="pacbsd"' > /etc/conf.d/hostname
dd if=/boot/zfsboot of=/dev/vtbd0 count=1
dd if=/boot/zfsboot of=/dev/vtbd0 iseek=1 oseek=1024

Edit loader.conf


Reboot, manually start dhcpcd, explore…

dhcpcd vtnet0

Even if you don’t plan on installing PacBSD, the provided ISO is a useful bootable FreeBSD 11 kernel with ZFS and pacman/pacstrap tools.

22 Jul 2017, 21:28

Arch Linux on a Chromebook Asus C301SA

I’ve got an Asus C301S at work; it’s a Chromebook running ChromeOS.

I like those little laptops; for the price it’s actually a very good little machine.
ChromeOS is very responsive, and ssh and Chrome work well too.
But I often need more, like X11 forwarding or offline coding…

You will see the C301SA is marked as a C300SA internally.

Here are the steps to install Arch Linux on it, do it at your own risk, you can brick your computer, you will need an external USB keyboard.

  • Activate the recovery mode, by using ESC + F3 (refresh) and the power key

  • Activate the developer mode with ctrl + d, confirm you want to continue, it will take a long time before it’s finished preparing the developer mode.

  • Shutdown and remove the write protection screw
    It’s actually documented as not mandatory but also as “could be dangerous”; I didn’t take the chance.
    Gently remove the keyboard (by removing the back screws); warning: the screws are different, so remember their positions.
    Locate the write protection screw (mine was covered with black tape, on the left close to the USB port) and remove it.

  • Reboot into ChromeOS by pressing ctrl + d

  • Patch the firmware [Ctrl+Alt+T] to get a ChromeOS terminal
    type or copy paste cd; curl -LO https://mrchromebox.tech/firmware-util.sh && sudo bash firmware-util.sh
    Choose 1 to enable legacy boot then n to boot from internal then shutdown the computer.

  • Create a bootable USB key for Arch Linux
    sudo dd if=/home/akh/Downloads/archlinux-2017.06.01-x86_64.iso of=/dev/sde bs=512

  • Boot on Arch by pressing ctrl + l then ctrl + u, sometimes it fails

  • The current Arch kernel does not see the internal keyboard… you’ll need an external USB keyboard.

  • Perform a normal installation of Arch (I’ve totally removed ChromeOS partitions) and even performed an MBR install (instead of GPT) with only 2 partitions.
    The internal SSD disk is /dev/mmcblk0

  • Install grub
    grub-install --target=i386-pc /dev/mmcblk0
    grub-mkconfig -o /boot/grub/grub.cfg

  • Install yaourt

git clone https://aur.archlinux.org/package-query.git
cd package-query
makepkg -si
cd ..
git clone https://aur.archlinux.org/yaourt.git
cd yaourt
makepkg -si
cd ..
  • Install the galliumos patched kernel; you can go to bed because it will take a long time to compile
    yaourt --tmp /var/tmp -S aur/linux-galliumos-braswell

The keyboard should work now.
Also install or grab the config file from aur/galliumos-braswell-config for audio support.

If it won’t boot, don’t worry: you can still boot using the Arch USB key, then remount your system:

mount /dev/mmcblk0p1 /mnt
arch-chroot /mnt

Touch pad config:

Section "InputClass"
    Identifier "Elan Touchpad"
    Driver "libinput"
    MatchIsTouchpad "on"
    Option "Tapping" "on"
    Option "NaturalScrolling" "true"
    Option "ClickMethod" "clickfinger"
EndSection

Congratulations, you have a fully working computer.

EDIT: Arch kernel 4.12 is fully working without the need to use aur/linux-galliumos-braswell, but 4.13 breaks this support, so keep 4.12 for now.

25 Feb 2017, 09:57

From OSX to Linux

I’ve been a long-time UNIX user; I ditched Microsoft back in the 90s for FreeBSD, Solaris and Linux on the desktop, but when Apple released Mac OS X, I adopted it as my workstation.

For the last few years I’ve used Linux desktops, but not on my main computer; today, here I am switching back to Linux.

This post is not about the reasons I’m switching; they are simple.
My typical work day is mostly about parsing giant files, running VMs and Docker, and coding in Go, and no longer about developing for iOS.

I’m using Arch Linux and KDE/Plasma but many items from this list apply to any Linux distributions.

Put your user in the following groups: uucp, audio, input, lp.

Bonjour, mDNS and .local

I was used to querying the .local domain to ssh back into my laptop.

  • Install nss-mdns, add mdns_minimal [NOTFOUND=return] before resolve in /etc/nsswitch.conf
  • Install avahi and start avahi-daemon.service.
  • To make your ssh server visible to others, cp /usr/share/doc/avahi/ssh.service /etc/avahi/services/

Try pinging a Mac host on your LAN with ping hostname.local

Google drive

Install kio-gdrive, then start Dolphin and go to Network then Google Drive to set up your account, or run the shell command kioclient5 exec gdrive:/.

Emojis in color 💻

Install noto-fonts-emoji and edit .config/fontconfig/fonts.conf as follows:

<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>

This setup is working in most applications but can sometimes display weird results in terminals.

2 Factor USB key U2F

I have a cheap FIDO U2F key but Chrome was unable to see it. Edit /etc/udev/rules.d/50-fido-u2f.rules:

# this udev file should be used with udev 188 and newer
ACTION!="add|change", GOTO="u2f_end"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="096e", ATTRS{idProduct}=="0850|0880", TAG+="uaccess"
LABEL="u2f_end"
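After editing the rules, udev has to pick them up; a quick way (then re-plug the key):

```shell
udevadm control --reload-rules
udevadm trigger
```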

Win key aka ⌘ cmd key

On a PC keyboard the left Alt and the Windows key are swapped compared to a Mac, where Alt is left of ⌘.
To avoid being lost when I switch back to my MacBook, I’ve physically swapped the keys and changed the behavior in Plasma under “Hardware | Input Devices”.


Compose key

To type special or accentuated characters you’ll use the “Compose” key.
You can set the compose key in Plasma in “Hardware | Input Devices”.


I’m using the right alt, which is the right ⌘ cmd on a Mac (and also a compose key).

For a complete list of compose shortcuts see the bottom of this page


The tool/window equivalent to Spotlight is called “Plasma Search”; you can configure what it searches for.
The mapping for this key is in “System Settings | Global Shortcuts | Run Command”.


It’s also capable of indexing files content, in KDE/Plasma this service is provided by baloo, ignored directories can be set by calling “Configure File Search”.

You can empty your baloo index by killing all the baloo processes and running rm -r ~/.local/share/baloo, then restart indexing with balooctl start.

Exposé and active corners

It’s called “Screen Edges” and it’s under “Windows behavior”.


Alternative for Dash

Dash was part of my workflow for documentation; an alternative is to install zeal, which uses the exact same docsets as Dash.
Also see DevDocs.

Samba share

Install samba, tweak /etc/samba/smb.conf and enable nmbd.service.
I personally prefer a different password than my shell account: smbpasswd -a yourusername.

Taking Screenshot

Install spectacle and use the Prt Sc key (this shortcut can be set up in Plasma).

Webcam, Hangout and Skype

I have a Logitech C920; it worked without any configuration in Chrome (so Hangouts works), and even with Skype for Linux.

Mounting a macOS disk

That one is weird …
mount -o ro,sizelimit=498876809216 /dev/sda2 /mnt/OSX

If your existing partition was big, you need to find this magic number by following this guide


Steam needs x86 32-bit libraries; on Arch you have to enable multilib by editing /etc/pacman.conf:

[multilib]
Include = /etc/pacman.d/mirrorlist

Then install the steam package.

Virtualbox / QEMU

I prefer QEMU/KVM & virt-manager, when available, over VirtualBox; it’s more integrated into the system, and it’s capable of emulating other CPU architectures like aarch64…

One more huge advantage for QEMU: it’s also capable of booting a virtual macOS.


Install cups and print-manager, then enable org.cups.cupsd.service; note that you need the .local resolution above for network printer discovery.
Also install hplip for HP printers.

Share your session aka remote desktop

Install x11vnc and run x11vnc -usepw -once -noxdamage -ncache 10 from your X session.

Note that this VNC server is not compatible with the macOS embedded VNC viewer (vnc://hostname); here is one for Chrome: RealVNC.

Remember the VNC protocol is not secure; you should use an SSH tunnel over it.

Magic Trackpad

I’m using a Magic Trackpad 2; after enabling bluetooth and pairing the trackpad, one-, two- & three-finger touches and vertical & horizontal scrolling worked via the hid_magicmouse module.

For gestures you need libinput-gestures, here is an example /etc/libinput-gestures.conf file:

gesture swipe left 3 xdotool key control+Right

gesture swipe right 3 xdotool key control+Left

gesture swipe up 3 xdotool key control+F9

The bad

  • Not as nice, not as well integrated; for example, supporting HiDPI with only one retina screen is weird with Xorg.
  • Keyboard shortcuts are a giant mess under Linux; every application has its own and they can’t be configured centrally.
  • I’m still missing some applications, like Sketch, but most of all Tower when dealing with a git merge issue.

The good

After 8 years of absence from Linux as my main desktop, things have changed: it’s not free of bugs, but it’s much simpler to use now than before, and much more configurable than macOS.

It won’t work for everybody, but I’m really happy with this setup; the gains compared to a Mac are big. First, the machine itself: a 4GHz i7 with 64G of RAM does not even exist at Apple (and a Hackintosh is not a good solution). Then ZFS, native Docker over ZFS, better OpenGL (faster fps in games), well maintained packages compared to Brew/MacPorts, well maintained drivers… my work is easier.


20 Oct 2016, 08:58

Telegraf & Prometheus Swiss Army Knife for Metrics

There are a lot of different solutions when it comes to collecting metrics, I found myself happy with this hybrid solution.

Telegraf is an agent written in Go for collecting metrics from the system it’s running on.
It’s developed by InfluxData, the people behind InfluxDB, but Telegraf has a lot of output plugins and can be used without InfluxDB.
Many different platforms (FreeBSD, Linux, x86, ARM…) are supported, and only a single static binary (thanks to Golang) is needed to deploy the agent.

Prometheus is a time series database for your metrics, with efficient storage.
It’s easy to deploy, has no external dependencies, and is gaining traction in the community because it’s a complete solution, for example capable of discovering your targets inside a Kubernetes cluster.

Here is a simple configuration to discover both products.

You can deploy the agent on every host you want to monitor, but you only need one Prometheus server running.

Install Prometheus

Download Prometheus for your platform and edit a config file named prometheus.yml.

scrape_configs:
  - job_name: 'telegraf'
    scrape_interval: 10s
    static_configs:
      - targets: ['mynode:9126']

Prometheus is a special beast in the monitoring world: the agents don’t connect to the server, it’s the opposite, the server scrapes the agents.
In this config we create a job called telegraf, scraped every 10s, connecting to the mynode host on port 9126.

That’s all you need to run a Prometheus server, start it by specifying a path to store the metrics and the path of the config file:

prometheus -storage.local.path /opt/local/var/prometheus -config.file prometheus.yml

The server will listen on port 9090 for the HTTP console.

Install Telegraf

Download Telegraf agent for your platform and edit telegraf.conf.

[[outputs.prometheus_client]]
  ## Address to listen on, scraped by the Prometheus server
  listen = ":9126"

# Read metrics about cpu usage
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = false

# Read metrics about memory usage
[[inputs.mem]]

# Read metrics about network interface usage
[[inputs.net]]
  ## By default, telegraf gathers stats from any up interface (excluding loopback)
  ## Setting interfaces will tell it to gather these explicit interfaces,
  ## regardless of status.
  interfaces = ["en2"]

Remember this is the node agent, the “client”, but since the Prometheus server connects to it, you provide a listening endpoint.
Start the agent with telegraf -config telegraf.conf

There are many more input plugins for telegraf; for example you can monitor all your Docker instances.

# Read metrics about docker containers
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
  ## Only collect metrics for these containers, collect all if empty
  container_names = []
  ## Timeout for docker list, info, and stats commands
  timeout = "5s"

  ## Whether to report for each container per-device blkio (8:0, 8:1...) and
  ## network (eth0, eth1, ...) stats or not
  perdevice = false
  ## Whether to report for each container total blkio and network stats or not
  total = false

It’s also capable of monitoring third-party products like MySQL, Cassandra…

Read Metrics

Prometheus comes with a visual HTTP console & query tool, available on port 9090.

The query language is described here.

The console can’t really be used as a dashboard; for that you can use Grafana, which can talk directly to Prometheus.

Install Grafana, or run it with Docker:

docker run -i -p 3000:3000 -e "GF_SECURITY_ADMIN_PASSWORD=mypassword"  grafana/grafana

Point your browser to port 3000.
Add a Prometheus data source and point the host to your Prometheus server port 9090.

Then create your dashboard, here are some queries to display the telegraf agents:

  • CPU
    cpu_usage_idle{host="myhost", cpu="cpu-total"}
    cpu_usage_user{host="myhost", cpu="cpu-total"}
    cpu_usage_system{host="myhost", cpu="cpu-total"}

  • Memory

  • Docker Memory
    Legend format {{container_name}}

  • Docker CPU
    Legend format {{container_name}}



You can easily instrument your own code using the client libraries.
There is also a Prometheus pushgateway for short-lived jobs, so you can push to the gateway between scrape periods.

It’s a simple setup but capable of handling a lot of data in different contexts: system monitoring & instrumentation.