Troglobit

Help me Obi-Wan Troglobi, you’re my only hope!

Using Netcat to Test Your Internet Daemon

So you’re having a problem with the Internet daemon you wrote. You’re convinced the firewall, or some other magic, in your modern Linux distribution is eating your packets.

No.

First, make sure your daemon is actually running and has successfully bound to the address and port in question:

sudo netstat -atnup

If your application is not listed there, you have a problem with it binding its server socket. Check the return values from bind().
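
As an example, here is a minimal sketch of the kind of error checking I mean, for a hypothetical UDP daemon binding to port 9999:

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in sin;
    int sd;

    sd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sd < 0) {
        perror("Failed creating socket");
        return 1;
    }

    memset(&sin, 0, sizeof(sin));
    sin.sin_family      = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port        = htons(9999);

    /* This is the return value to check, EACCES or EADDRINUSE
     * tells you a lot more than any firewall theory ... */
    if (bind(sd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("Failed binding to port 9999");
        return 1;
    }

    return 0;
}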

If you’re still suspecting the firewall, or some other magic, test your theory with netcat. First start a server on the relevant address and port. Remember, you need to be root, or have the CAP_NET_BIND_SERVICE capability, to bind to ports below 1024:

nc -l -u -p 9999

Here we bind to 0.0.0.0:9999. Now, start a client to test if the server can receive any data:

nc -u 127.0.0.1 9999

Type something on the console and press enter to send it to the server. If you receive the data then there’s no magic, except for bugs in your application, preventing your application from working.

The Key to Successful Boot

How do you know when your UNIX service (daemon) is ready? Simple, it has created a PID file, signalling to you that it is up and how to reach it. Usually this file is created as /var/run/daemon.pid, or /run/daemon.pid, and holds the PID of the daemon as the first and only data in the file. This data may or may not have a UNIX line ending.
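
For illustration, a minimal sketch of what creating such a PID file typically boils down to; the path and the function name here are just examples:

#include <stdio.h>
#include <unistd.h>

/* Create, e.g., /run/daemon.pid with our PID as its only contents.
 * Call this when the daemon is ready to serve requests. */
static int pidfile_write(const char *path)
{
    FILE *fp;

    fp = fopen(path, "w");
    if (!fp)
        return -1;

    fprintf(fp, "%d\n", (int)getpid());

    return fclose(fp);
}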

Only trouble is: most UNIX daemons do not re-assert that PID file properly on SIGHUP (if they support SIGHUP that is). When I send SIGHUP to a daemon I expect it to re-read its /etc/daemon.conf and resume operation, basically a quicker way than stop/start.

Annoyingly however, most daemons do not signal back to tell us when they’re done with the SIGHUP. Naturally a new movement has arisen that says we should all instrument our daemons with D-Bus … I say no. Simply touch the PID file instead.
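
Here is a sketch of what I mean, assuming the daemon keeps its PID file path around. The handler only sets a flag, the main loop re-reads the config and then updates the mtime of the PID file:

#include <fcntl.h>
#include <signal.h>
#include <sys/stat.h>

static volatile sig_atomic_t reload = 0;

/* At startup: signal(SIGHUP, sighup_cb); */
static void sighup_cb(int signo)
{
    (void)signo;
    reload = 1;
}

/* Called from the main loop whenever reload is set */
static void reconf(const char *pidfile)
{
    /* ... re-read /etc/daemon.conf here ... */

    /* "Touch" the PID file so anyone watching its mtime
     * knows the reload is complete. */
    utimensat(AT_FDCWD, pidfile, NULL, 0);
    reload = 0;
}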

Yeah, one could argue the natural (and pure) thing would be to add a UNIX domain socket and use a daemonctl client instead of SIGHUP + PID file … but for this little mechanism of signalling back to the user that a daemon is ready for business, it’s too much overhead.

My own Init replacement, Finit, is being fitted with a system to synchronize services with events. E.g., wait for one service to start, an interface to be created, or come up, or get an address, or a gateway to be set … and so on.

In the case where process B depends on process A you do not want to start process B before process A is actually up and running. Simply starting process A is rarely sufficient – starting B too soon can lead to B terminating prematurely because it cannot yet connect to A.

One may argue that B should try to reconnect, or that A and B should have some other means of synchronizing. Sure, when dependencies are clear and developers create cooperating services this works great. But this is rarely the case in real life. Services are usually developed by multiple teams, scattered across both time and distance. So what we’re left with is finding the least common denominator and using that for our synchronization needs.

Waiting for daemon A to create its PID file before we start process B is enough. When the system is reconfigured – many services may need to be restarted – we can shake the dependency tree and send SIGHUP to all daemons in the correct order. The only patching required is to ensure that all daemons re-assert their PID files after having reloaded their respective config files.
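
For illustration, a minimal sketch of such a wait in C, simply polling for the PID file; a real init system would rather use inotify, and the path and timeout here are just examples:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Wait for daemon A's PID file to show up before starting B */
static int wait_for_pidfile(const char *path, int timeout_sec)
{
    struct stat st;
    int i;

    for (i = 0; i < timeout_sec * 10; i++) {
        if (stat(path, &st) == 0)
            return 0;           /* A is ready, safe to start B */
        usleep(100000);         /* 100 ms */
    }

    fprintf(stderr, "Timed out waiting for %s\n", path);
    return -1;
}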

More on the changes in Finit3 and the upcoming new dependency system in a later article. Hopefully this has piqued your interest!

Lecture From the UNIX Beards

After the rm -rf /* disaster that hit me a couple of weeks ago I’ve been rebuilding my setup, restoring the few files I had backed up, and collecting advice from the elders.

Turns out there are a few tricks that can save your home directory from accidents like mine. The first one is rather obvious, but I’m writing it down anyway:

  1. Keep separate accounts. If possible, use different accounts (with different permissions, obviously) for different projects. I had my private life and work life mixed up, so that’s a big no-no to begin with.
  2. Create a file called -t in your $HOME

The last bit of advice I had heard about before, roughly ten years ago, but had completely forgotten. The trick is to create a file that rm will interpret as an unknown option. For GNU rm, -t is a good choice.

Here are two ways of creating such a file:

cd
touch ./-t

or, the perhaps easier to remember:

touch -- -t

Cheers!

Disaster Recovery

Days like these inconspicuously start out just like any other day, except on days like these you accidentally manage to erase $HOME and have no real backup to rely on … Maundy Thursday will forever be Black Thursday for me.

The best thing you can do, after cursing at yourself for a couple of hours, is to:

  1. Come up with a useful backup and restore strategy
  2. Read up on undeletion tools for Ext4
  3. Blog about it, naturally

BUT FIRST – QUICK – UNMOUNT OR POWER-OFF YOUR COMPUTER – PULL OUT THE BATTERY – AND STEP AWAY FROM THE COMPUTER! Must protect the partition from being accidentally written to – I completely fumbled this step, so take heed young people!

Now calm down and act like an engineer again.

There exist two neat tools, ext4magic and extundelete, three if you count the more hardcore debugfs.

Both have been around a while, and both are capable of restoring deleted files. See their respective home pages for details.

My system used LVM, of course, to make things a bit harder. That’s why I’m documenting my steps here:

  1. Remove disk from lappy
  2. Prepare to connect the disk to your workstation using a USB cradle, prepare, but don’t do it yet! I did the next few steps as root on my workstation. That’s right, I didn’t care anymore at this point. The world may as well burn!
  3. You haven’t connected the disk yet, right stupid? Most systems today have cool features like automount that would cause the journal to be replayed – WE DO NOT WANT THAT OK? OK!

You need the lvm tools:

apt-get install lvm2
vgscan
vgchange -ay ubuntu-vg
lvs

The last command should list your logical volume(s) and their volume group, in my case labelled ubuntu-vg. To be able to run extundelete or ext4magic on the partition you have to mount it, read-only needless to say:

mkdir /rescue
mount -o ro /dev/ubuntu-vg/root /rescue

Now we try some fairy dust:

ext4magic /dev/ubuntu-vg/root -r

OK, so let’s see:

ls -l RECOVERDIR/

The results between ext4magic and extundelete may vary, so try them both out, test different options, and maybe even look at debugfs, we’re desperate after all.

extundelete /dev/ubuntu-vg/root --restore-all
ls -l RECOVERED_FILES

If there’s a LOT of files you may have some luck narrowing down the result using the --after DATE option … where DATE is the number of seconds since the UNIX epoch:

extundelete /dev/ubuntu-vg/root --restore-all --after 1458802800
ls -l RECOVERED_FILES

I found my magic number using:

date -d "Mar 24 8:00 2016" +%s

Well, as I said, I didn’t turn off my computer in time. Instead I took the braindead option of starting to google for solutions. So all my recovered files (with proper filenames) turned out to contain nothing but cached data from the browser – freed blocks get reclaimed quickly, so watch out, kids.

Testing Multicast With Docker

Recently, issue #70 was reported against pimd. Getting that many issues reported is cool in itself, but this one was a question about Docker and pimd.

Up until that point I had only read about this new fad, and played around with it a bit at work for use as a stable build environment for cross-compiling. I had no idea people would want to use a Docker container as a multicast sink. Basically I was baffled.

The reporter used a Java-based tool but simply couldn’t get things to work properly with pimd running on the host:

                eth0
 MC sender ---> [ Server host ]    <--- router running pimd
                       |
               ________|________
              /     docker0     \   <--- bridge    ______
             /         |         \                |      |   <--- MC receiver
  __________/          |          \_______________|______|_____
 \                     |                            /         /
  \                     `------------------>-------'         /
   \________________________________________________________/
      Container ship

We tried several approaches, but nothing seemed to help. This became a bit of a blocker for the pimd v2.3.2 release and I admittedly lost a bit of sleep over it. So finally, this weekend, I sat down and whipped my old mcjoin tool into shape. I’ve relied on it for years, but it couldn’t send or receive packets, until now.

Running Docker v1.5 on Ubuntu 15.10, I ran this, with pimd on the host and mcjoin as a multicast sink for 250 groups in a container:

cd ~/Troglobit/mcjoin
docker run -t -i -u `id -u`:`id -g` -v $HOME:$HOME -w $PWD troglobit/toolchain:latest ./mcjoin 225.1.2.3+250
^C
Received total: 2500 packets

Both pimd and the multicast sender run on my host, which should not matter since Linux still has to route the traffic to the docker0 interface. Also, without setting the TTL to 2 (or greater), the container receives no traffic at all. Here’s what I run in another terminal on my host:

./mcjoin -s -t 2 -c 10 225.1.2.3+250

Although pimd is a little slow to register and install the forwarding rules in the kernel, it sure enough worked on the first attempt! \o/

This is my first real application-level experience with Docker, but it surely won’t be the last. Docker is a truly revolutionary new tool!

Multicast Testing, Made Easy!

For the better part of the last ten years I’ve been working with multicast in one way or another. I’ve used many different tools for testing, but usually simply using ping(1) and tcpdump(1) is quite sufficient. However, you often need to tell bridges (switches) to open up multicast in your general direction for your pings to get through, so you need to send an IGMP “join” first. I found mcjoin, written by David Stevens, in this posting to LKML back in 2006, and started improving it and adding features to it over the years.

Now, my interest and fascination with multicast only grew with time, and despite elegant tools like mgen(1) and omping(1) I was never quite happy.

When releasing SMCRoute v2.1.0 recently, and now while working on the pimd v2.3.2 release, I got so tired of having to do so many manual steps just to verify correct operation of a routing daemon. Therefore I spent the better part of the weekend fixing up my old mcjoin tool.

I wanted a reliable, simple, and UNIX-y tool to just test things for me. So I cleaned up the old mcjoin project: first by migrating it from the toolbox repo, then removing confusing command line options, improving and simplifying the syntax, and finally adding send/receive capabilities. I’ve been meaning to get around to this for ages, and now it seems I had finally had enough. So here it is, v2.0:

Most of the time I simply want to see a resulting IGMP join message in Wireshark, see it bite in a switch’s FDB or a routing daemon’s forwarding table. So, join is the default operation, and also continues to be the name of the tool. My favourite testing group is set as the default, 225.1.2.3, so you only need to start the tool and you’re off. To send to the same default group (225.1.2.3), simply add -s to the sender side.

sender$ mcjoin -s
^C
sender$

receiver$ mcjoin
joined group 225.1.2.3 on eth0 ...
..................................................................^C
Received total: 66 packets
receiver$

If you ever need anything else, e.g. routing multicast, there’s even a man page. It mentions setting the TTL and other such nastiness :)

Useful UNIX APIs

Had an interesting conversation with a buddy last night. It started out as a shift-reduce problem with Bison and ended up a ping-pong of useful UNIX APIs. We concluded that despite having worked professionally with UNIX for over a decade, it is still very satisfying to find gems like these.

Most people are completely unaware they exist and end up rolling their own (buggy) implementations.

SysV search.h

Manage a simple queue:

  • insque()
  • remque()

Manage a hash search table:

  • hsearch()
  • hcreate()
  • hdestroy()

Manage a binary search tree:

  • tsearch()
  • tfind()
  • tdelete()
  • twalk()
  • tdestroy()

Linear search and update:

  • lfind()
  • lsearch()
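
To give a taste of the hash table functions, here is a minimal, hypothetical example mapping interface names to addresses:

#include <stdio.h>
#include <search.h>

int main(void)
{
    ENTRY e, *ep;

    /* Create a hash table with room for ~30 entries */
    if (!hcreate(30))
        return 1;

    e.key  = "eth0";
    e.data = "192.168.1.1";
    hsearch(e, ENTER);

    e.key = "eth0";
    ep = hsearch(e, FIND);
    if (ep)
        printf("%s -> %s\n", ep->key, (char *)ep->data);

    hdestroy();
    return 0;
}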

BSD sys/queue.h

This header has lots of macros for handling various forms of linked lists. The version in GLIBC is a bit behind the BSDs’, because the latter also have _SAFE versions of some macros to aid the user in tricky cases, e.g. when removing entries while iterating.

Several types of lists are supported:

  • LIST: Doubly linked list
  • SLIST: Single linked list
  • STAILQ: Single linked tail queue
  • SIMPLEQ: Simple queue
  • TAILQ: Tail queue

Here are a few of them, in this example for doubly linked lists:

  • LIST_INIT()
  • LIST_EMPTY()
  • LIST_FIRST()
  • LIST_NEXT()
  • LIST_REMOVE()
  • LIST_FOREACH()
  • LIST_INSERT_AFTER()
  • LIST_INSERT_BEFORE()
  • LIST_INSERT_HEAD()
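
A minimal, self-contained example of the LIST API, just to show the flavour of these macros:

#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct node {
    int value;
    LIST_ENTRY(node) link;
};

LIST_HEAD(node_list, node);

int main(void)
{
    struct node_list list;
    struct node *np;
    int i;

    LIST_INIT(&list);

    /* Insert a few nodes at the head of the list */
    for (i = 0; i < 5; i++) {
        np = malloc(sizeof(*np));
        np->value = i;
        LIST_INSERT_HEAD(&list, np, link);
    }

    /* Iterate over all nodes */
    LIST_FOREACH(np, &list, link)
        printf("%d\n", np->value);

    /* Tear down the list */
    while (!LIST_EMPTY(&list)) {
        np = LIST_FIRST(&list);
        LIST_REMOVE(np, link);
        free(np);
    }

    return 0;
}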

I wrote a demo of the TAILQ API a couple of years ago.

Other Noteworthy APIs

Other functions worth mentioning here are:

stdlib.h

  • bsearch()
  • qsort()
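
A minimal example of the classic qsort() + bsearch() combo, sorting an array in place and then searching it with the same comparison function:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int key = 7;
    int arr[] = { 9, 3, 7, 1, 5 };
    size_t len = sizeof(arr) / sizeof(arr[0]);
    int *hit;

    qsort(arr, len, sizeof(arr[0]), cmp);                /* sort in place */
    hit = bsearch(&key, arr, len, sizeof(arr[0]), cmp);  /* binary search */

    printf("%d %s\n", key, hit ? "found" : "not found");
    return 0;
}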

glob.h

  • glob()
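
And a minimal sketch of glob(), expanding a shell-style pattern into matching paths; the pattern here is just an example:

#include <stdio.h>
#include <glob.h>

int main(void)
{
    glob_t gl;
    size_t i;

    /* Expand the pattern into a list of matching paths */
    if (glob("/etc/*.conf", 0, NULL, &gl))
        return 1;

    for (i = 0; i < gl.gl_pathc; i++)
        printf("%s\n", gl.gl_pathv[i]);

    globfree(&gl);
    return 0;
}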

An Ordinary Day at Work (Translated from Swedish)

As usual we’re bashing DNS at work, because it stubbornly refuses to work over VPN for most of us. (Yes, we all run Linux, except the managers, who insist on using something out of the Old Testament.) Here follows an excerpt from our IRC:

14:32 <n00b> Success! Finally got DNS working via guest
      wifi -> vpn -> office network. Celebrating by making some irc noise. :D
14:32 < rooth> n00b: You did get the distributed /etc/hosts file, right?
14:33 < rooth> !dns paperboy
14:33 < |master|> rooth: server028.intranet.example.com. 192.168.130.72 lazyboy lazyboy.intranet.example.com
14:33 < rbot> rooth: lazyboy: 192.168.130.72
14:33 < rooth> otherwise the bots will help you =)
14:34 <n00b> hosts file is so... "unclean" ;)
14:35 <n00b> But good to know the bots can help. :)
14:37 < rooth> n00b: I don't understand what you mean at all... =)
14:37 < rooth> n00b: There are two schools of thought around here, dns vs hosts.
14:38 <n00b> Oh! So apparently I've picked a side now then. :D
14:40 < rooth> n00b: dns = dark-side.
14:50 < lazzer> n00b: ;-)
14:52 < rooth> n00b: s#local dns cache#/etc/hosts# =)
14:52 < moggen> hehe HOSTS.TXT revival
14:52 < moggen> that's how it all started once upon a time
14:52 < moggen> before DNS was invented
14:53 < lazzer> We need to come up with a good system for sharing our hosts files.
14:53 <n00b> A long time ago, in a network far far away...
14:53 <n00b> git repo for a shared hosts file, other conf?
14:57 <n00b> Is there any kind of sharing mechanism for a basic test
             setup btw? Or is it too hard to get a generic base conf
             for a local test setup?
15:38 < troglobit> Oh, so you've opened the DNS can of worms! LOL =)
15:38  * troglobit uses a hosts file, of course ;)
15:40 < lazzer> troglobit: good choice ;-=
15:44 < rooth> troglobit: I figured it was about time =)
16:00 < moggen> troglobit: noticed while working from home that you get the "wrong" IP for
                web.example.com unless you're on an office IP... hosts ftw.
16:22 < troglobit> moggen: Great to see you've joined the one true faith ;)
16:59 < rooth> =)
17:06 < wkz> moggen,troglobit : another one bites the dust... :)
18:12 < n00b2> The weekend is saved, the VPN token has been found
20:19 < troglobit> wkz: DNS is a remnant of a dying system. Come, join us
               instead, we fight for justice, security, and free static
               /etc/hosts updates throughout the galaxy!
20:19 < wkz> troglobit: NEVER!
20:20 < troglobit> wkz: *Feel* the power of the dark side!
20:20 < wkz> troglobit: LOL
20:20 < troglobit> wkz: :-D

Some names have been changed to protect the innocent, the rest are Sith Lords or Jedi Knights.

Awesome: Changing Next/Prev Tune in Spotify

Back to using the Awesome WM in Ubuntu. This time I’m setting up everything from scratch and first up is fixing keybindings to control my main music player: Spotify!

Edit your ~/.config/awesome/rc.lua with Emacs (obviously). If you do not have an rc file, simply copy the system /etc/xdg/awesome/rc.lua:

globalkeys = awful.util.table.join(globalkeys,
    awful.key({ }, "XF86AudioRaiseVolume", function () awful.util.spawn("amixer -D pulse sset Master 5%+", false) end),
    awful.key({ }, "XF86AudioLowerVolume", function () awful.util.spawn("amixer -D pulse sset Master 5%-", false) end),
    awful.key({ }, "XF86AudioMute", function () awful.util.spawn('amixer -D pulse sset Master 1+ toggle') end),
    awful.key({ }, "XF86AudioNext", function () awful.util.spawn("dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Next") end),
    awful.key({ }, "XF86AudioPrev", function () awful.util.spawn("dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Previous") end),
    awful.key({ }, "XF86AudioPlay", function () awful.util.spawn("dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause") end),
    awful.key({ }, "XF86AudioStop", function () awful.util.spawn("dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Stop") end))
root.keys(globalkeys)

That’s it.

Stray Puppies

Sometimes I just cannot help myself. It’s like finding a stray puppy, or an abandoned kitten …

… I recently decided to adopt mini-snmpd since the original upstream site had passed into the great beyond. At this point in my life almost everyone I know can tell you I have no warm fuzzy feels for SNMP, at all. So why did I even consider this to begin with?!

Well, I have to confess that there are certain things that SNMP can be really useful for. Most of it is remote monitoring, and since I work with embedded systems a lot, SNMP can be quite a handy tool.

I first ran into mini-snmpd when I created TroglOS, by forking miniroot, and what struck me immediately was how small and simple it was. The code was also in reasonably good shape, so it had passed my initial quality control. Suddenly one day I could no longer download it when building TroglOS, so I had to do something!

My plans for mini-snmpd are quite humble. I’m currently cleaning it up a bit, adding a configure script for all optional features, testing portability to FreeBSD, and integrating (good) patches from various sources scattered around the web.

I will not add lots of new features, but as always I’m your humble patch monkey, so if you submit a pull request at GitHub it’ll probably be merged and put into a release.

Cheers