There’s just one thing to say on the topic of commit messages: ur doin it wrong if …
Sometimes people ask me what I do for a living. Usually I pause and think, real hard, because the people asking me this aren’t programmers. They use computers, but are mostly limited to a Windows machine, writing in MS Word and browsing the Internet, mostly for Facebook.
I often start off with: “It’s a bit complicated to explain …”, by which time I’ve lost most of the people in the room listening to me. Sometimes I say: “I’m a software architect”, because people seem to know what architects do for a living: they draw houses, design stuff, and drive SAABs. Much like dentists. The “software” prefix, however, does confuse people.
In reality I’m just a programmer with a little bit more responsibility. A day starts with coming in to the office, or sitting down in my home office, connecting to the company VPN, meanwhile planning loosely the day and my goals. All while checking status of e-mail, our IRC channel and the issue tracker for any recently reported/updated issues in my fields of responsibility. Somewhere around there I have a pretty good idea about what must be done and in what order. So I write it down, in no particular order, on folded A4 sheets I use for TODO lists and start working. About 20 minutes into that, when I’ve just reached The Zone, I get interrupted for an impromptu meeting, or telco, which in turn always leads to another meeting, which in turn runs over and suddenly it’s 16:00 (4 pm) and I have 45 minutes to complete a days work before picking up the kids from school.
Or … you can apply that same crazy to a day spent trying to fix a simple bug, finding another horrible bug and fixing that first, leading to a minor redesign in need of a refactor, which in turn becomes a pain to merge, since in the meantime my colleagues have made multiple changes to the same files I’ve been working on.
That’s what I do for a living. And yes, I know it’s not sustainable work conditions. I’m working on that, it’s on the TODO list …
Don’t try this at home kids. (Disclaimer: I live in Sweden :)
This is a rant about something I recently found to be a long-standing battle line in the world of programming [Lau78]: the event vs. thread based approach to programming. As rants go, I do not aspire to deliver a clear or logical message whatsoever. It’s basically just something I need to get off my chest.
It was not until 2007 I first learned about the event based approach to programming and event libraries like libevent and libev. Up until that point the silver bullet everyone was using was … Threads.
I don’t really know when it all started, maybe it was the Linux revolution, the first NPTL release with GLIBC, Java or Solaris. Nevertheless, from my point of view it was sometime in the mid 90’s during my time at university that the use of threads was starting to become prevalent.
With the rise of the thread based model of programming we now had a hammer, and every problem looked like a nail. I had a gut feeling there was something really wrong with using threads for every conceivable program, but I could not find a way to express it, so I chugged away with my threads, semaphores and condition variables. I convinced myself I was happy like this.
Of course I knew about the event based approach, but it was more or less dismissed as a thing of the past, a while(1) loop to mimic the behavior of PLCs. So almost every program I wrote, and every program I took over from others, was an Indiana Jones style maze full of deadlocks and race conditions.
I thought I was doing something wrong, and so did many others like me. I spent days and nights trying to understand, refactor, and redesign threaded programs. What I found instead was a growing doubt: the thread based model simply does not suit every problem [Ous96]. There are quite a few domains, however, where thread based models shine. Usually in languages that come with thread support built in, like Erlang.
Most of the programs I work with today are network daemons. Meaning they are essentially message based applications that spend a lot of time waiting for an event to occur: receiving a data frame, waiting for a timer to expire, a signal to be raised, etc. Of course threads can be used for this, but it is a lot simpler to employ an event based framework instead. Also, they are all written in C for speed and portability between different UNIX systems. For that domain, where I currently make my living, it will be difficult to convince me to ever look at threads again.
Vacation time means catching up on my Open Source projects! :)
Currently I’m shaping up the home pages and this blog to improve access to, and overview of, all the packages I maintain. The following packages have new releases, or can expect new releases soon:
- Minix Editline v1.14.1
- SMCRoute v1.99.1 – There’s even a v2.0.0 being planned, with libev support
- mrouted – minor cleanup and sync with OpenBSD
- pimd – cleanups and bug fixes, needs testing
- inadyn – in dire need of a release, but needs more testing and fixes
As usual, see my GitHub for the latest commits if you want to try anything out, file an issue report, or if you want to contribute.
I usually run Debian or Ubuntu on my machines. However, having recently found some time to work on my various projects again, I’ve now suddenly found myself in need of a CentOS machine.
All I had to provide was an FTP server and directory:
- Anonymous FTP
That’s it, the graphical installer started and I had to start selecting various packages. I must say it’s a bit confusing, since the package naming in RedHat/CentOS is not the same as in Debian.
Oh, and if the installation seems to have gotten stuck, just wait it out. It’ll get there :)
Read the following to get console in virsh working with CentOS guest.
The below setup is done using four Ubuntu 12.04 LTS virtual machines running the linux-virtual kernel package. In the HowTo I mention both pimd and mrouted, since they work out-of-the-box w/o any config changes, but you could just as easily use SMCRoute for the same purpose.
When setting up virtual machines and virtual networking there are several requirements for the host. The most important one, that needs pointing out, is a bug in the IGMP snooping code in the Linux bridging code: the bridge handles the special case 224.0.0.* well, but all unknown multicast streams outside of that segment should also be forwarded as-is to all multicast routers. Since this does not work with the current IGMP snooping code in the Linux kernel bridge code you must disable snooping:
host# echo 0 > /sys/devices/virtual/net/virbr1/bridge/multicast_snooping
host# echo 0 > /sys/devices/virtual/net/virbr2/bridge/multicast_snooping
host# echo 0 > /sys/devices/virtual/net/virbr3/bridge/multicast_snooping
Disabling IGMP snooping on the hosts’ virbr3 is not really necessary, but is done anyway for completeness, and also because I re-use the same setup in other test cases as well.
   R1                       R2                       R3                       R4
+-------+                 +-------+                 +-------+                 +-------+
|eth0   |  172.16.12.0/24 |eth0   |  172.16.10.0/24 |eth0   |   10.1.0.0/24   |eth0   |
|     .1|-----------------|.2   .1|-----------------|.2   .1|-----------------|.2     |
|   eth1|      virbr1     |   eth1|      virbr2     |   eth1|      virbr3     |       |
+-------+                 +-------+                 +-------+                 +-------+
This setup can be used in two separate use cases; remember, there is only one multicast routing socket, so you have to choose one of:
- pimd -c pimd.conf
- mrouted -c mrouted.conf
- smcroute -f smcroute.conf
The default configuration files delivered with pimd and mrouted usually suffice, see their respective manual pages or the comments in each .conf file for help.
When you start mrouted you’re usually ready to go immediately, but in the case of pimd you need to wait for the routers to peer first. Then you can test your setup using ping from R1 to a tcpdump on R4:
R1# ping -I eth1 -t 3 184.108.40.206
R4# tcpdump -i eth0
As soon as the PIM routers R2 and R3 have peered you should start seeing ICMP traffic reaching R4.
Now, to the actual test case. The first command for R1 adds a route for all multicast packets; this is necessary for tools where you cannot set the outbound interface for the multicast stream, in our case iperf.
R1# ip route add 220.127.116.11/4 dev eth1
R1# iperf -u -c 18.104.22.168 -T 3
R4# iperf -s -u -B 22.214.171.124
The -T option is important since it tells iperf to raise the TTL to 3; the default TTL for multicast is otherwise 1, due to its broadcast-like nature.
The desired output from iperf is as follows:
R1# iperf -u -c 126.96.36.199 -T 3
------------------------------------------------------------
Client connecting to 188.8.131.52, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 3
UDP buffer size:  160 KByte (default)
------------------------------------------------------------
[  3] local 172.16.12.1 port 55731 connected with 184.108.40.206 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 893 datagrams

R4# iperf -s -u -B 220.127.116.11
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 18.104.22.168
Joining multicast group  22.214.171.124
Receiving 1470 byte datagrams
UDP buffer size:  160 KByte (default)
------------------------------------------------------------
[  3] local 126.96.36.199 port 5001 connected with 172.16.12.1 port 55731
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.268 ms    0/  893 (0%)
To achieve the same using SMCRoute you need to set up the multicast routing rules manually. By far the easiest way to do this is to update /etc/smcroute.conf and start/restart smcroute, or send SIGHUP to an already running daemon. The example below uses the source-less (*,G) approach, since in our limited setup we have full control over all multicast senders. There is a slight setup cost associated with this: the time it takes for the kernel to notify SMCRoute about a new source before the actual multicast route is written to the kernel. In most cases this is acceptable.
smcroute.conf on R2:
mgroup from eth0 group 188.8.131.52
mroute from eth0 group 184.108.40.206 to eth1
smcroute.conf on R3:
mgroup from eth0 group 220.127.116.11
mroute from eth0 group 18.104.22.168 to eth1
Now, start smcroute on each of R2 and R3, then proceed to start iperf on R4 and R1, as described above. You should get the same result as with mrouted and pimd.
That’s it. Have fun!
- It doesn’t work? – Check the TTL.
- It doesn’t work? – Check the TTL!
- It doesn’t work? – CHECK THE TTL!
- Why does the TTL in multicast default to 1? – Because multicast is classified as broadcast, which inherently is dangerous. Without proper limitation, like switches with support for IGMP Snooping, multicast IS broadcast.
- It doesn’t work? – Check your network, maybe a switch between the sender and the receiver doesn’t properly support IGMP Snooping.
This post doesn’t cover fully setting up KVM/Qemu with virt-manager and creating virtual machine guests. See the Ubuntu KVM Installation, VirtManager Guide, the Ubuntu Server Guide on libvirt, or HowtoForge for that.
Instead this blog post details the most relevant steps to get file system pass-through between a Linux host and Qemu guest working. The upstream Qemu docs provide a good starting point, as is the original IBM paper on VirtFS. For users of Ubuntu <= 13.04, watch out for the libvirt bug that I know many people run into, myself included.
First of all, I could never really master the beast called AppArmor in Ubuntu. Once I got the hang of which files to edit, the order to make changes in, and the syntax of its profile files, I think I tried every possible permutation without any success. So I ended up disabling the profile(s) of my VM guests. The UUID in the profile filename can be found in the details of your VM, or in the process listing on the host: ps ax | grep guestname. Here is an example of how to disable one guest:
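As a sketch, this is the command (the UUID in the path is a placeholder; substitute your own guest’s UUID, e.g. from virsh domuuid guestname):

```shell
# Switch the guest's libvirt AppArmor profile to complain mode,
# i.e. log would-be denials instead of enforcing them.
# The UUID below is a placeholder -- use your own guest's UUID.
sudo aa-complain /etc/apparmor.d/libvirt/libvirt-4d8bc24e-0d00-4b29-a8ff-000000000000
```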
You need to install apparmor-utils to get the aa-complain tool, where “complain” basically means: ignore any hits from the given profile and just complain about them in the log. The default is aa-enforce. For more info on AppArmor, see the excellent upstream docs.
Now, how to do it. I like virsh, but most of the time the VMware-like virt-manager is a lot more user friendly. In the VM’s Details view, click the “Add Hardware” button and select “Filesystem”. This is where the action happens.
- Type: preset to Passthrough
- Mode: change to Mapped. This is the most important step in this post – without it you will not get read/write support!
- Source path: select the path on your host that will be shared with this guest. I use /var/lib/libvirt/share, but you can use any directory you want
- Target path: enter the magic string that you’ll use in the mount command in the guest. I use share, no slashes or anything. In reality this isn’t a path per se, it’s a tag that the guest sends to the kernel 9p driver via the mount command
Please note that 9P file systems simply pass through the owner UID/GID and directory permissions from the host to the guest. This can be a bit confusing, but just make sure to use the same UID/GID for all guests that share the same directory. I chowned it to my account on the host:
host# chown jocke:users /var/lib/libvirt/share
In guest:/etc/modules I added the following modules, even though the kernel can probably load them by itself on demand:
9p
9pnet
9pnet_virtio
The actual command to get the ball rolling on the guest:
guest# mount -t 9p -o trans=virtio,version=9p2000.L,rw share /mnt
To automatically mount this at every boot, add the following line to the guest’s /etc/fstab:
share /mnt 9p trans=virtio,version=9p2000.L,rw 0 0
That’s it. Good Luck!
This is a response to the excellent post by Jani Gorše, titled Why is Programming an Art?
Ever since I began studying Computer Engineering at university back in 1995 I have struggled to find the “proper” ways to format my code, name functions and variables appropriately, structure functions into files and files into directories with Makefiles and Makefile snippets, using both recursive and non-recursive make. Formatting of code, for instance, was for a while a bit of an obsession of mine, and it sort of is still. But today I am more concerned with the overall structure and how components interact. Even though I can still get very annoyed at people naming their local variables obtrusively.
I still do most of my work, professional and hobby, using plain old C. I’ve read many books and style guides on the subject, and the one that really stood out as extremely helpful, apart from reading a LOT of code, is The Practice of Programming by Kernighan and Pike! The book is crammed full of neat advice and best practices that simply make sense. Stuff like “name your local variables i, j, k for counters and loop variables”, which is basically what most of us do already. Just read it and you’ll see what I mean – spot on!
Spot on? I hear you say, why should you read something that seemingly doesn’t tell you anything you don’t already know? Well, even the most evident practices often come under scrutiny when programmers from very different schools meet. At my job we have programmers with Microsoft, Bombardier, and ABB backgrounds who have little to no experience with UNIX, Linux, or any Free/Open Source software development. They are used to more cumbersome practices: “Hungarian Style Notation”, inflexible corporate styles which nobody can explain anymore, and so on.
It is difficult to explain the way of Free/Open Source software, but I think the most important message is: keep it simple, maintain the same style as used in the rest of the project, and if you comment then write what you mean with a piece of code, not what it does. And, in the true style of UNIX, write one program that does its job well.
The beauty in software comes from the structure, the flow, the similarities and how the modules fit together. The absence of duplication, and the readability.
Visit the homepage for the book and download the source code to have a look for yourself. Enjoy!
- Multiple TTYs
- One-shot tasks
Let’s start off with tasks. Tasks are one-shot commands, with a syntax like service directives, but are not monitored and respawned like services. Tasks are started in parallel, just like services. For some cases, like the system bootstrap phase, some tasks may need to be executed in sequence, and for that purpose there also exists a run command. Run commands are executed in the order listed in finit.conf and will run until completion before continuing with the next task or service.
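To make the task/run distinction concrete, here is a sketch of what this could look like in finit.conf – the commands and descriptions are made-up examples, and the exact syntax may differ between Finit versions:

```
# Tasks: one-shot, started in parallel with services, not respawned
task /sbin/hwclock -s -- Set system clock from RTC (example)

# Run commands: executed in listed order, each runs to completion
run /bin/mount -a -- Mount file systems (example)
```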
Multiple TTYs is another neat feature. Similar to services, multiple TTYs can be started and are automatically respawned when a user logs out. For embedded targets wanting to save CPU cycles, usually only one TTY is needed: the system console. Use the console command to point to a defined TTY to activate the classic “Press any key to activate this console.” prompt.
Finally, runlevels! This is the key feature in this release of Finit. Adding the flexibility from SysV init, without the complexity. This is one of the key points of Finit – it should be simple!
service [2345] /usr/sbin/sshd -D -- OpenSSH Daemon
This command tells Finit that the OpenSSH daemon should only run in runlevel 2-5. Finit will also respawn sshd if it should crash, just like before.
Runlevels are needed in many embedded use cases, e.g. bootstrap, upgrading, and regular operation. It is completely up to the system administrator to set up the runlevels of the product or installation. At boot, runlevel ‘S’ runs, well before any networking is up. This is used to do one-time probing and setup of the system. When done, the runlevel defined in finit.conf, or the default 2, is started.
See the README for more information, or the code for the full details.