26 Jan 2016, 14:01

A fast geo database with Google S2 take #2

Six months ago, I wrote on this blog about Geohashes and LevelDB with Go, to create a fast geo database.
This post is very similar and works the same way, but replaces geohashes with the Google S2 library for better performance.

There is an S2 Go implementation maintained by Google; it is not as complete as the C++ one, but close.

For the storage this post will stay agnostic to avoid any troll, but it applies to any key-value store: LevelDB/RocksDB, LMDB, Redis…
I personally use BoltDB and gtreap for my experiments.

This post will focus on Go usage but can be applied to any language.

Or skip to the images below for visual explanations.

Why not Geohash?

Geohash is a great solution to perform geo coordinates queries but the way it works can sometimes be an issue with your data.

  • Remember geohashes are cells of 12 different widths from 5000km down to 3.7cm. When you perform a lookup around a position, if your position is close to a cell’s edge you could miss some points from the adjacent cell; that’s why you have to query the 8 neighbour cells, which means 9 range queries into your DB to find all the closest points around your location.

  • If your lookup does not fit in a level 4 cell (39km by 19.5km), the next level up is 156km by 156km!

  • The query is not performed around your coordinates: you search for the cell you are in, then you query the adjacent cells at the same level/radius based on your needs. It works very approximately and you can only perform ‘circle’ lookups around the cell you are in.

  • The most precise geohash needs 12 bytes of storage.

  • -90 +90, +180 -180 and -0 +0 are not side-by-side prefixes.

Why S2?

S2 cells have a level ranging from 30 (~0.7cm²) to 0 (~85,000,000km²).
S2 cells are encoded as a uint64, which makes them easy to store.

The main advantage is the region coverer algorithm: give it a region and the maximum number of cells you want, and S2 will return cells at different levels that cover the region you asked for. Remember, each cell corresponds to a range lookup you’ll have to perform in your database.

The coverage is more accurate, which means fewer reads from the DB and fewer objects to unmarshal…

Real world study

We want to query for objects inside Paris city limits using a rectangle:

h5
Using level 5 we can’t fit the left part of the city.
We could add 3 cells (12 DB queries in total) on the left, but most algorithms will zoom out to level 4.

h4
But now we are querying for the whole region.

s2
Using S2, asking for 9 cells with a rectangle around the city limits.

s2 vs h4
The zones queried by Geohash in pink and S2 in green.

Example S2 storage

Let’s say we want to store every city in the world and perform lookups to find the closest cities around a position; first we need to compute the CellID for each city.

// Compute the CellID for lat, lng
c := s2.CellIDFromLatLng(s2.LatLngFromDegrees(lat, lng))

// store the uint64 value of c to its bigendian binary form
key := make([]byte, 8)
binary.BigEndian.PutUint64(key, uint64(c))

Big endian is needed to order bytes lexicographically, so we can seek later from one cell to the next closest cell on the Hilbert curve.

c is a CellID at level 30.

Now we can store key as the key and a value (a string or msgpack/protobuf) for our city, in the database.
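For example with BoltDB, which I use for my experiments, a minimal sketch could look like this (the bucket name and the coordinates are just placeholders):

package main

import (
	"encoding/binary"
	"log"

	"github.com/boltdb/bolt"
	"github.com/golang/geo/s2"
)

func main() {
	db, err := bolt.Open("cities.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// level 30 CellID for the city position
	c := s2.CellIDFromLatLng(s2.LatLngFromDegrees(48.8566, 2.3522))
	key := make([]byte, 8)
	binary.BigEndian.PutUint64(key, uint64(c))

	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("cities"))
		if err != nil {
			return err
		}
		// the value is just the city name here, it could be msgpack/protobuf
		return b.Put(key, []byte("Paris"))
	})
	if err != nil {
		log.Fatal(err)
	}
}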

Example S2 lookup

For the lookup we use the opposite procedure, first looking for one CellID.

// citiesInCellID looks for cities inside c
func citiesInCellID(c s2.CellID) {
  // compute min & max limits for c
  bmin := make([]byte, 8)
  bmax := make([]byte, 8)
  binary.BigEndian.PutUint64(bmin, uint64(c.RangeMin()))
  binary.BigEndian.PutUint64(bmax, uint64(c.RangeMax()))

  // perform a range lookup in the DB from bmin key to bmax key, cur is our DB cursor
  var cell s2.CellID
  for k, v := cur.Seek(bmin); k != nil && bytes.Compare(k, bmax) <= 0; k, v = cur.Next() {
    buf := bytes.NewReader(k)
    binary.Read(buf, binary.BigEndian, &cell)

    // Read back a city
    ll := cell.LatLng()
    lat := float64(ll.Lat.Degrees())
    lng := float64(ll.Lng.Degrees())
    name := string(v)
    fmt.Println(lat, lng, name)
  }
}

Then compute the CellIDs for the region we want to cover.

rect := s2.RectFromLatLng(s2.LatLngFromDegrees(48.99, 1.852))
rect = rect.AddPoint(s2.LatLngFromDegrees(48.68, 2.75))

rc := &s2.RegionCoverer{MaxLevel: 20, MaxCells: 8}
r := s2.Region(rect.CapBound())
covering := rc.Covering(r)

for _, c := range covering {
    citiesInCellID(c)
}

RegionCoverer will return at most 8 cells (in this case 7 cells: four at level 8, one at level 7, one at level 9, one at level 10) that are guaranteed to cover the given region. It means we may have to exclude cities that were NOT in our rect, using func (Rect) ContainsLatLng(LatLng).
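As a sketch, that final filtering could look like this (assuming you keep the rect around and decode each city’s cell as in citiesInCellID):

// keepInRect only keeps the cells whose center falls inside the requested rectangle
func keepInRect(rect s2.Rect, cells []s2.CellID) []s2.CellID {
	var out []s2.CellID
	for _, c := range cells {
		if rect.ContainsLatLng(c.LatLng()) {
			out = append(out, c)
		}
	}
	return out
}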

Congrats, we have a working geo DB.

S2 can do more with complex shapes like Polygons and includes a lot of tools to compute distances, areas, intersections between shapes…

Here is a Github repo with the data & scripts used to generate the images.

11 Jan 2016, 09:33

MyAPRS APRS for iPhone

I’m happy to announce a new side project: MyAPRS, a modern iOS APRS application, for radio amateur enthusiasts.

I’ve already mentioned APRS on this blog, it will mainly be useful for radio amateurs but can be interesting to RTL-SDR listeners too.

The application is built around LevelDB and geohashing as mentioned in this blog post; it’s a lot faster than using SQLite, especially on iOS. SatSat is still using the SQLite cities lookup and you can compare: it’s terribly slow.
It receives weather data & APRS data from APRS-IS connections, then decodes and stores the packets into LevelDB for indexing.

MyAPRS has some extra features over other applications, like transceiver model detection, so it can highlight all C4FM/Fusion users in your area.
It receives the last 10 minutes of packets for your area, helped by a small Golang API.

The first release is simple but works very well to discover repeaters and amateurs around you or in a region you plan to visit. I’ll add position sharing later: due to the way APRS-IS works, it’s the responsibility of the developer to provide passwords & check for the amateur license of the users (even if the algorithm is publicly known…).

I plan to make it work with some Bluetooth modems, starting with the one I’ve worked on, so it can be used completely off the grid; that way we can have a cheap and powerful APRS application with a real keyboard (try to answer a message with a Yaesu…).

I had to put some ads in it; a lot of people have asked for a paid SatSat version to get rid of the ads, but that’s the only way I can maintain these apps, so I’m experimenting with ads for now.

It was a fun project to develop, and I hope you will find a use for it.

73 de KK6NXK

MyAPRS

10 Jan 2016, 11:32

Freebsd on Raspberry Pi 2 and Golang

FreeBSD is now fully supported on the Raspberry Pi 2, which makes it a fun small computer to experiment with BSD on.

If you have a Raspberry Pi 1, you can simply install the 10.2-RELEASE image.

For the Raspberry Pi 2, you need 11.0-CURRENT, which is the development branch; images can be found here.

dd the image as usual to an SD card; it will be auto resized at first boot (see growfs_enable="YES" in rc.conf).

CPU frequency

To enable on-demand CPU overclocking (ranging from 600 to 1000MHz), enable powerd by adding this to rc.conf:

powerd_enable="YES"
powerd_flags="-a hadp"

Production speed

FreeBSD CURRENT is the development version; some debugging features may slow down your system.

As stated in UPDATING, if you are running CURRENT you can disable the malloc debugging options:

ln -s 'abort:false,junk:false' /etc/malloc.conf

Wifi

Depending on your wifi dongle this may differ; for Realtek devices, add an entry to /etc/wpa_supplicant.conf:

network={
    ssid="myssid"
    psk="mypass"
}

And this to /etc/rc.conf:

wlans_urtwn0="wlan0"
ifconfig_wlan0="WPA SYNCDHCP"

And this to /boot/loader.conf:

legal.realtek.license_ack=1

And type service netif restart.

Installing the ports

The ports are a long list of third-party software you can install on your system; first synchronize the ports tree:

portsnap fetch
portsnap extract

It’s highly recommended to install portmaster to keep your ports updated:

cd /usr/ports/ports-mgmt/portmaster/
make install clean

To later update your ports tree and ports:

portsnap fetch update
portmaster -a

To compile and install a port simply go to its directory and run make install clean.

Keeping the sources updated (optional)

All the FreeBSD sources are available and can be used to recompile the whole system.

Subversion needs some space in /tmp to complete this task; edit /etc/fstab to grow tmpfs to at least 70M, then reboot:

tmpfs /tmp tmpfs rw,mode=1777,size=70m 0 0

Then install the root certificates and check out the sources:

cd /usr/ports/security/ca_root_nss
make install clean
svnlite checkout https://svn.FreeBSD.org/base/head /usr/src

To keep it in sync later, just type:

cd /usr/src
svnlite update

Keeping your FreeBSD up to date can be achieved by recompiling the system, aka make world; note that this can take a long time on a Raspberry Pi but is still doable (remember to use make -j 4 on the RPi2).

Installing Go (optional)

If you are into Go and need a recent version, you first need to compile Go 1.4 as a bootstrap compiler (note that you also need to install git):

cd /usr/ports/lang/go14
make install clean

Then you can compile a more recent Go, for example using /usr/local/go:

cd /usr/local
git clone https://go.googlesource.com/go
cd go/src
env TMPDIR=/var/tmp GOARM=7 GOROOT_BOOTSTRAP=/usr/local/go14 ./all.bash

Add /usr/local/go/bin to your PATH.
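A quick way to check that the freshly built toolchain works is a minimal sketch like this:

// hello.go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// on a Raspberry Pi 2 running FreeBSD this should print freebsd/arm
	fmt.Printf("Hello from %s/%s\n", runtime.GOOS, runtime.GOARCH)
}

Run it with go run hello.go.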

09 Nov 2015, 18:51

Listening to satellites for 30 dollars

I’ve always dreamed of space & satellites; it turns out you can receive pictures from them.
After getting my radio amateur license in the US, I discovered there were some satellites dedicated to radio amateurs, but also some weather satellites from the late 90s still working and capable of sending pictures from space, like NOAA-15.

SDR

You don’t need expensive hardware anymore thanks to the SDR (software defined radio) movement and some great developers: a simple $20 USB key and some practice are good enough to make it work.

Software

There is plenty of software available, but the tools I’m using on OSX or Linux are:

  • GQRX the software receiver
  • Audacity for sounds editing
  • Soundflower (OSX only) to reroute the sound from GQRX to Audacity, but you can do that with pulseaudio on Linux.

They are all free.

Know what, when and where to listen

The most important things to know are what frequency and when to listen.
For that you need pass prediction software; I have developed an app for iOS devices called SatSat, it’s free (with ads), and hoping enough people are interested, I’ll invest more into the product.

SatSat

Good satellite candidates for a start, some weather satellites, are:

  • NOAA-15
  • NOAA-18
  • NOAA-19

Experiment with GQRX options:
Receiver options -> Mode Narrow FM -> Mode options -> Max Dev APT 17k -> Tau OFF.
Record in mono inside Audacity at 11025Hz.

Open the sound you just recorded inside WXtoImg.

Cities are really noisy, so expect the best results outdoors; sometimes a simple wire antenna is enough for great results.

Here is an example of a weather satellite:
Sat view

And this is the “sound” of a satellite:

30 Aug 2015, 18:25

A blazing fast geo database with LevelDB, Go and Geohashes

You probably have heard of LevelDB: it’s a blazing fast key-value store (a library, not a daemon) that uses Snappy compression.
There are plenty of uses for it, and the API is very simple, at least in Go (I will be using Goleveldb).

The key is a []byte, the value is a []byte, so you can “get”, “put” & “delete”; that’s it.

I needed a low memory, low CPU system that could collect millions of geo data points and query over them. Geohash has an interesting property: you can encode longitude and latitude into a string, e.g. f2m616nn. This hash represents the lat & long 46.770, -71.304; if you shorten the string to f2m61, it still refers to the same lat & long but with less precision.
A 4-digit hash gives about 19545 meters of precision; to perform a lookup around a position you simply query for the 8 adjacent blocks. A Geohash library for Go.

Here you would store all of your data points matching a geohash in the same set.
Problem: there is no such thing as a set in LevelDB.

But there is a cursor, so you can seek to a position then iterate over the next or previous entries (byte ordered).
So your data could be stored that way: a 4-digit geohash + a unique id.

Then you can perform a proximity lookup by searching for the 8 adjacent hashes around the position you are looking at, with a precision of 20km: good, but not very flexible.

We can have a more generic solution; first we need a key, a simple int64 unique id.

// NewKey generates a new key using time prefixed by 'K'
func NewKey() Key {
	return NewKeyWithInt(time.Now().UnixNano())
}

// NewKeyWithInt returns a key prefixed by 'K' with value i
func NewKeyWithInt(id int64) Key {
	key := bytes.NewBufferString("K")
	binary.Write(key, binary.BigEndian, id)
	return key.Bytes()
}

Here we can encode a key with a Unix timestamp, so our key is not just a key: it’s also an encoded time value, and it will be unique thanks to the nanosecond precision. We are using BigEndian so keys can be byte compared: older keys sort before newer ones.

Now about geo encoding, our key will be of the form:
G201508282105dhv766K��Ϸ�Y� (note the end of the key is binary encoded). You always need a prefix for your keys so you can seek and browse them without running over different key types. Here I have a G as Geo, then a string encoded date prefix so we can search by date; we don’t want extra precision here, as it would add extra seeks to LevelDB (that’s why we have a modulo of 10 for the minutes), then we add a precise geohash and finally our previous unique id.

// NewGeoKey generates a new key using a position & a key
func NewGeoKey(latitude, longitude float64) GeoKey {
	t := time.Now().UTC()
	kint := t.UnixNano()
	kid := NewKeyWithInt(kint)
	// G + string date + geohash 6 + timestamped key 
	// G201508282105dhv766K....
	gk := geohash.EncodeWithPrecision(latitude, longitude, 6)
	ts := t.Format("2006010215")

	// modulo 10 to store 10mn interval
	m := t.Minute() - t.Minute()%10
	zkey := []byte("G" + ts + fmt.Sprintf("%02d", m) + gk)
	zkey = append(zkey, kid...)
	return zkey
}

We can now look up by flexible date & flexible proximity, like a Redis ZRANGE; you simply need to reverse the process.

// GeoKeyPrefix returns the prefixes to look up using a GeoKey and a time range
func GeoKeyPrefix(start, stop time.Time) []string {
	var res []string
	d := 10 * time.Minute
	t := start
	for {
		if t.After(stop) {
			break
		}

		key := "G" + t.Format("2006010215") + fmt.Sprintf("%02d", t.Minute()-t.Minute()%10)
		res = append(res, key)
		t = t.Add(d)
	}
	return res
}

Lookup that way:

	d := time.Duration(-10) * time.Minute
	geoPrefixs := GeoKeyPrefix(time.Now().UTC().Add(d), time.Now().UTC())

	// find adjacent hashes in m
	// 1, 5003530
	// 2, 625441
	// 3, 123264
	// 4, 19545
	// 5, 3803
	// 6, 610
	gk := geohash.EncodeWithPrecision(lat, long, 4)
	adjs := geohash.CalculateAllAdjacent(gk)
	adjs = append(adjs, gk)

	// for each adjacent blocks
	for _, gkl := range adjs {

		// for each time range modulo 10
		for _, geoPrefix := range geoPrefixs {
			startGeoKey := []byte(geoPrefix + gkl)
			iter := s.NewIterator(util.BytesPrefix(startGeoKey), nil)

			for iter.Next() {
				log.Println(iter.Value())
			}
			iter.Release()
		}
	}

It can be optimized by reducing the size of the keys, but it performs extremely well, storing around 3 million geopoints per day using less than 3% CPU, and it can receive hundreds of queries per second.

Oh did I forget to mention it’s running on a Raspberry Pi? :)

I could maybe turn it into a library but it’s so simple it’s probably useless.
Next blog post: what are those millions points used for?

02 Aug 2015, 16:26

Offering free internet (well, almost) and surviving disasters, power outages, Internet blackouts

In my neighborhood, I’m experimenting with giving free access to some services: not the full internet, but full access to Wikipedia, maps…
Here are some tips to do the same.

Getting access to the internet may be crucial for our lives, but the commercial providers are here to make money out of it. They don’t even provide disaster or minimum safety access.

Sometimes even in normal conditions, while traveling, roaming will cost you hundreds just to get access to a map or Wikipedia.

With a simple RaspberryPi, a wifi antenna and a good geographical position you can reach hundreds of people.

Wifi

To serve wifi to others you need to get a wifi USB dongle that can be set in AP mode.
I’m using an Alfa network one.

To enable the AP mode you need to install the package hostapd. Here is my /etc/hostapd/hostapd.conf.

ssid=freewifi
interface=wlan0
auth_algs=3
channel=7
hw_mode=b
logger_stdout=-1
logger_stdout_level=2
country_code=CA

(Always ensure you are using the correct country to avoid disturbing others and look for a free channel.)

You also need a DNS & DHCP server; install dnsmasq. My /etc/dnsmasq.conf:

no-resolv
no-poll
server=/localnet/10.4.0.1
address=/#/10.4.0.1
interface=wlan0
bind-interfaces
dhcp-range=10.4.0.10,10.4.0.200,12h

It means all DNS queries will be intercepted and answered with the same IP 10.4.0.1; by extension, every HTTP request will be redirected to 10.4.0.1.

Enable the same IP on the wlan0 interface:
ifconfig wlan0 10.4.0.1

You are all set: no need for NAT, no need for routing, we just want to provide access to Wikipedia.

Content

I’m using Gozim to run a full offline copy of Wikipedia. (For example, make it listen on port 8080)

Install nginx to reverse proxy your content to a fake domain called wikipedia.wifi (port 8080), or to serve any directories you want to publish.
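Here is a minimal sketch of what the nginx server block could look like, assuming Gozim listens on port 8080:

server {
    listen 80;
    server_name wikipedia.wifi;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}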

Wispr

Another issue while running your own wifi AP is that these days almost all traffic is secured by HTTPS, so there is no way to intercept those requests without the right certificates: the users will just get a security error page.
To solve that you need to display a popup page, like the one you get when you connect to a Starbucks free wifi network.

I’m working on a set of tools to do just that; uninstall nginx and give it a shot, the project is called WisprGo and the code is on Github too.
It’s a work in progress, any help is appreciated.

02 Aug 2015, 13:10

Access OS metrics from Golang

I’ve recently published StatGo, it gives you access to your operating system metrics like free memory, used disk space…

It’s a binding to the C library libstatgrab, a proven, stable piece of code that works on many different systems: FreeBSD, Linux, OSX…

It’s very simple to use:

s := NewStat()
c := s.CPUStats()
fmt.Println(c.Idle)
// Output: 98.2

Feel free to contribute; it may need some improvements but it’s working, and I’m using it in a small metrics web server to monitor a small network of servers.
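As an illustration, a minimal sketch of such a metrics endpoint could look like this (the import path and the /metrics route are assumptions, adjust them to your setup):

package main

import (
	"fmt"
	"net/http"

	"github.com/akhenakh/statgo" // assumed import path, see the Github repo below
)

func main() {
	s := statgo.NewStat()
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		// expose the CPU idle value as plain text
		c := s.CPUStats()
		fmt.Fprintf(w, "cpu_idle %.2f\n", c.Idle)
	})
	http.ListenAndServe(":8080", nil)
}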

The code is on Github

02 Aug 2015, 00:32

Host your blog on Github with autodeploy

I’ve always developed my own blog system; that’s a good way to learn a new language.
But having to maintain a working server or hosting is no fun. There are some solutions like Jekyll or Hugo: they generate static web pages based on Markdown files you write.

As it’s just basic HTML files, they can be served by Github gh-pages.
It opens the door to blogging from anywhere, without an internet connection or your own laptop: just write some Markdown then publish to Github later, or even edit your new blog post from the Github editor.
Coupled with a Wercker auto deploy, publishing is automated; no excuse anymore.

Here are some tips for this to work smoothly with Hugo.

Github setup

First create 2 repositories on Github, one for the HTML pages, one for the markdown itself.
Let’s call them blog & hugo-blog.

DNS setup

You can use your own domain; if so you need to add a CNAME file for gh-pages to the blog repo, then add a CNAME record at your DNS provider:
blog.mydomain.com. IN CNAME username.github.io.

Hugo setup

Clone the hugo-blog repo and put your new Hugo blog files in it (created via hugo new).
Choose a theme for your blog (don’t forget to remove the .git directory from it) and set it up in your config.toml as follows:

baseurl = "http://blog.mydomain.com/"
languageCode = "en-us"
title = "My supa blog"
canonifyurls = true
theme = "hyde"

Then run hugo server --buildDrafts and point your browser to http://localhost:1313; no need to reload, all modifications appear directly.

Wercker setup

For the auto deploy to occur after each commit you need a build & deploy system. Wercker is very cool, as you can reproduce the exact same system on your host with Docker; subscribe to the service (it’s free), register your hugo-blog repo, hit next, next…

Add a wercker.yml file in your hugo-blog that looks exactly like this.

box: debian
build:
  steps:
    - arjen/hugo-build:
        version: "0.14"
        theme: purehugo
        flags: --disableSitemap=true
deploy:
  steps:
    - script:
        name: Configure git
        code: |-
          sudo apt-get -y update
          sudo apt-get -y install git-core
          git config --global user.email "pleasemailus@wercker.com"
          git config --global user.name "wercker"

          # remove current .git folder
          rm -rf .git
    - script:
        name: Deploy to Github pages
        code: |-
          cd public
          # if you are using a custom domain set it here
          echo "blog.mydomain.com" > CNAME
          git init
          git add .
          git commit -m "deploy commit from $WERCKER_STARTED_BY"
          git push -f $GIT_REMOTE master:gh-pages 2> /dev/null

It will use Hugo to generate the pages then deploy to your Github repo on every commit to the blog repo.

Last step is to get a token from Github for the deploy to occur without your credentials.
Go to your Github settings, Personal access tokens, generate a token.

Go to your Wercker app settings, Deploy targets, add a new target, check auto deploy successful builds to branch(es): and type master.
Add a new variable named GIT_REMOTE, check protected, and type https://{TOKEN}@github.com/yourusername/blog.git, replacing {TOKEN} with the token from Github.
You can use this outdated blog post from Wercker for the screenshots, but don’t follow what it says.

You are all set, happy blogging! You can check the exact same config in my repos on Github @akhenakh.

01 Aug 2015, 22:36

Migrate to Hugo

This blog is running Hugo with auto deploy via Wercker and is hosted on Github Pages. Take #2

The previous blog was hosted on Google App Engine with a Python blog system; to get the previous articles, I had to run a small data migration.

Make a backup from the GAE admin web interface: go to Datastore Admin and back up your entity (mine was Post) to the blobstore, then go to Blob Viewer and download your file, named something like datastore_backup_datastore_backup_2015_08_02_Post-157413521680733022360302ADC43E4-output-1-attempt-1.

I put the migration code I’ve used here; it’s super ugly but it helped me migrate from GAE to Hugo, so it may help you too.

I’ve used html2text to convert my HTML data back to Markdown.

import sys
import os
import json
import html2text
import errno
import datetime

sys.path.append('/usr/local/google_appengine')
from google.appengine.api.files import records
from google.appengine.datastore import entity_pb
from google.appengine.api import datastore

def mkdir_p(path):
    try:
        os.makedirs(path)
    except OSError as exc:
        if exc.errno == errno.EEXIST and os.path.isdir(path):
            pass
        else: raise

raw = open("datastore", 'r')
titles = open("titles.txt", 'r').readlines()
reader = records.RecordsReader(raw)
i = 0
for record in reader:
    entity_proto = entity_pb.EntityProto(contents=record)
    entity = datastore.Entity.FromPb(entity_proto)

    if entity.get("status") == 1:
        path = titles[i].rstrip()
        content = html2text.html2text(entity["content_html"], "http://blog.nobugware.com/")
        directory = os.path.dirname(path)
        mkdir_p("post" + directory)

        # get current local time and utc time
        localnow = datetime.datetime.now()
        utcnow = datetime.datetime.utcnow()

        # compute the time difference in seconds
        tzd = localnow - utcnow
        secs = tzd.days * 24 * 3600 + tzd.seconds

        # get a positive or negative prefix
        prefix = '+'
        if secs < 0:
            prefix = '-'
            secs = abs(secs)

        # print the local time with the difference, correctly formatted
        suffix = "%s%02d:%02d" % (prefix, secs/3600, secs/60%60)
        now = localnow.replace(microsecond=0)
        date = "%s%s" % (entity["creation_date"].isoformat(' '), suffix)
        tags_string = ""
        tags_cleaned = []
        if entity.get("tags") is not None:
            tags = entity.get("tags")
            for tag in tags:
                tags_cleaned.append("\""+ tag + "\"")
            tags_string = ",".join(tags_cleaned)

        print tags_string
        page = """+++
date = "%s"
title = "%s"
tags = [%s]
+++

%s
""" % ( date, entity["title"] , tags_string, content)
    md=open("post" + path + ".md", 'w')
    md.write(page.encode('utf8'))
    md.close()
    i = i + 1

02 Apr 2015, 12:49

A 10 minutes walk into Grafana & Influxdb

This is a 10 minute tutorial to set up InfluxDB + Grafana with Go on your Mac; it should work with minor modifications on your favorite Unix too, and it assumes you already have a working Go compiler.

InfluxDB is a database specialized in time series: think of storing everything associated with a time, which makes it perfect for monitoring and graphing values. Grafana is a JS frontend capable of reading the data from InfluxDB and graphing it.

brew install influxdb

Start InfluxDB, then point your browser to http://localhost:8083; the default user is root, the password is root and the default port is 8086.

influxdb -config /usr/local/etc/influxdb.conf

Create a database called test.

Let’s test the connection with the DB and Go; first install the InfluxDB driver for Go:

go get github.com/influxdb/influxdb/client

Test your setup with some code:

package main

import (
    "fmt"

    "github.com/influxdb/influxdb/client"
)

func main() {
    c, err := client.NewClient(&client.ClientConfig{
        Username: "root",
        Password: "root",
        Database: "test",
    })

    if err != nil {
        panic(err)
    }

    dbs, err := c.GetDatabaseList()
    if err != nil {
        panic(err)
    }

    fmt.Println(dbs)
}

If everything is set up correctly, you should see a map containing all your InfluxDB databases.

Now let’s measure something real: the time it takes for your http handler to answer.

package main

import (
    "fmt"
    "log"
    "math/rand"
    "net/http"
    "time"

    "github.com/influxdb/influxdb/client"
)

var c *client.Client

func mySuperFastHandler(rw http.ResponseWriter, r *http.Request) {
    start := time.Now()
    // sleeping some random time
    rand.Seed(time.Now().Unix())
    i := rand.Intn(1000)
    time.Sleep(time.Duration(time.Duration(i) * time.Millisecond))
    fmt.Fprintf(rw, "Waiting %dms", i)
    t := time.Since(start)

    // sending the serie
    s := &client.Series{
        Name:    "myhostname.nethttp.mySuperFastHandler.resp_time",
        Columns: []string{"duration", "code", "url", "method"},
        Points: [][]interface{}{
            []interface{}{int64(t / time.Millisecond), 200, r.RequestURI, r.Method},
        },
    }
    err := c.WriteSeries([]*client.Series{s})
    if err != nil {
        log.Println(err)
    }
}

func main() {
    var err error
    c, err = client.NewClient(&client.ClientConfig{
        Username: "root",
        Password: "root",
        Database: "test",
    })
    if err != nil {
        panic(err)
    }

    http.HandleFunc("/", mySuperFastHandler)
    http.ListenAndServe(":8080", nil)
}

This is not very useful as it’s measuring the time to write to the ResponseWriter (that’s why I’ve added some random sleep), but you get the idea. It will save a series per request with: duration, status code, URL, HTTP method. The name of the series is important, as many tools (like Graphite) use the dots as separators, so think twice before naming your series. Point your browser to http://localhost:8080 and reload the page several times.

Now that we have data, let’s browse it with the InfluxDB admin: hit “explore data” and run a query:

SELECT duration FROM myhostname.nethttp.mySuperFastHandler.resp_time WHERE code = 200;


You should be able to see the inserted data points.

Now let’s work with Grafana: download the tar.gz, uncompress it somewhere, and copy this demo config.js file into the root directory of Grafana. Go to the InfluxDB admin with your browser and add a new database called “grafana”.

In your web browser, open the file index.html in the Grafana directory; you should see the Grafana interface. Edit the default graph and enter the query as follows:

  • Click on series; it will autocomplete with myhostname.nethttp.mySuperFastHandler.resp_time.
  • In alias type $0 $2; it will use the 1st and 3rd parts of the name (remember the dots), so it will display myhostname mySuperFastHandler.
  • Finally click on mean and choose duration in the completion, then add code = 200 as the where clause.

Hit save and you are done!


There is so much more you can do with InfluxDB & Grafana, it’s really simple to collect and display, hope you want to go further after this. You can look at my generic net/http handler for InfluxDB on Github that can be integrated into your code.