
Why I find developing on/for Windows exasperating

I ran DOS on my first PC. The natural progression unfolded: I then ran Windows 95, Windows 98, and Windows XP (Windows ME, like the Matrix sequels, was a collective bad dream that didn’t really happen). I used Borland’s IDE to write C code, then RHIDE with DJGPP, since I couldn’t even imagine using a compiler from the command line. I mention that because I wasn’t “brought up” on *nix at all; my only exposure to it was at university. These days, however, I do nearly all of my development on Linux. Why? I find it to be a much, much better experience.

Somewhat unfortunately for me, my current job requires me to do Windows development. Every time I boot into Windows or have to fix a Windows-specific problem, it makes me want to cry. Let me name some of the reasons.

Speed, or the lack thereof. I haven’t done a thorough scientific analysis of this, because I don’t think it’d be worth my while. It seems clear to me that NTFS is very, very slow: doing anything on it, from running CMake to compiling to linking, seems to take forever, to the point that I actively wonder how anyone manages to get anything done on Windows. I can rebuild the reference D compiler on my laptop in about 1.6s after modifying one file. On Windows the same build, on the same machine, takes around a minute. Given that I find 1.6s infuriatingly slow, you can imagine the dark swear words I reserve for waiting a whole minute while what would have been considered a supercomputer a few years ago gets around to doing anything.

Dependencies. Unlike *nix, there are no standard paths in which to look for libraries. Granted, different Linux distros use different conventions and paths from each other, but libraries are usually installed with a package manager anyway, so mostly you don’t care. And if you did, your linker would find them without the need for extra flags. Need to link to, say, nanomsg on Windows? Good luck with that. Ah, but there’s vcpkg, I hear you say. Apparently Visual Studio auto-magically finds the libraries that vcpkg “installs”. Job done if you’re clicking a button in an IDE; not so much if you’re using a real build system running in CI. It _could_ be just as easy as adding a flag to your linker, but, alas, the .lib files don’t all end up in the same directory. vcpkg lets me download libraries without having to write PowerShell, but actually linking against them is, for lack of a better word, “fun”. On Linux? pacman -S nanomsg; ninja

Batch files and/or PowerShell. I personally find bash horrible to write code in, but then I do Windows work and remember there’s worse. So much worse. Sigh.

Bash. I’ll explain. Git Bash is amazing; I remember a time before it existed (I tried, unsuccessfully, to compile bash from source for Windows with at least three different implementations back in the day). So why am I complaining? First of all, because I use zsh and haven’t yet seen an easy way to do that on Windows. Secondly, because building on Windows from the command line often requires cmd.exe. Building C++ code? I’m not going to write my own bash version of vcvarsall.bat just to do that. Commands have a habit of spitting out error messages with backslashes (because, duh, Windows), and good luck copying and pasting those into your bash shell.

Tooling. Want to create a zip? You’ll have to download and install a third-party tool. Oh, but the binary doesn’t get added to the PATH, so you’ll have to write out the full path in your batch file and pray that none of your machines installs it to a different location.

Things are better on Windows than they used to be. We now have the Windows Subsystem for Linux, Git Bash, and alternatives to the horrible built-in terminal emulator. To me, that just makes things less bad, and the moment I’m back on Arch Linux it feels like coming home from a not particularly good holiday.


Adding Java and C++ to the MQTT benchmarks or: How I Learned to Stop Worrying and Love the Garbage Collector

This is a follow-up to my first post, where I compared different MQTT broker implementations written in D, C, Erlang and Go. Then my colleague who wrote the Erlang version decided to write a Java version too, and I felt compelled to do a C++11 implementation. This post was only supposed to add the results of those two to the benchmarks, but unfortunately I had problems with the C++ version, which led to the title of this blog post. More on that later; suffice it to say for now that the C++ results should be taken with a large lump of salt. Results:

loadtest (throughput - bigger is better)
Connections:         500            750            1k
D + vibe.d:          166.9 +/- 1.5  171.1 +/- 3.3  167.9 +/- 1.3
C (Mosquitto):       122.4 +/- 0.4   95.2 +/- 1.3   74.7 +/- 0.4
Erlang:              124.2 +/- 5.9  117.6 +/- 4.6  117.7 +/- 3.2
Go:                  100.1 +/- 0.1   99.3 +/- 0.2   98.8 +/- 0.3
Java:                105.1 +/- 0.5  105.8 +/- 0.3  105.8 +/- 0.5
C++11 + boost::asio: 109.6 +/- 2.0  107.8 +/- 1.1  108.5 +/- 2.6

pingtest (throughput constrained by latency - bigger is better)
parameters:          400p 20w       200p 200w      100p 400w
D + vibe.d:          50.9 +/- 0.3   38.3 +/- 0.2   20.1 +/- 0.1
C (Mosquitto):       65.4 +/- 4.4   45.2 +/- 0.2   20.0 +/- 0.0
Erlang:              49.1 +/- 0.8   30.9 +/- 0.3   15.6 +/- 0.1
Go:                  45.2 +/- 0.2   27.5 +/- 0.1   16.0 +/- 0.1
Java:                63.9 +/- 0.8   45.7 +/- 0.9   23.9 +/- 0.5
C++11 + boost::asio: 50.8 +/- 0.9   44.2 +/- 0.2   21.5 +/- 0.4

In loadtest the C++ and Java implementations turned out to be in the middle of the pack, with comparable performance between the two. Both are slightly worse than Erlang, and D is still a good distance ahead. Pingtest is more interesting: Java mostly matches the previous winner (the C version) and beats it in the last benchmark, so it’s now the clear winner. The C++ version matches both of those in the middle benchmark and does well in the last one, but only performs as well as the D version in the first. A win for Java.

Now, about my C++ woes. I brought them on myself a little, since my approach was to minimise the amount of work I had to do; after all, writing C++ takes a long while at the best of times, so I ported it from my D version, translating it by hand. I gleaned a few insights from doing so:

  • Using C++11 made my life a lot easier since it closes the gap with D considerably. const and immutable became const auto, auto remained the same except when used as a return type, etc.
  • Having written both the D and C++ versions of the serialisation and unit-testing libraries I used made things a lot easier, since they share the same concepts and names.
  • I’m glad I took the time to port the unit tests as well. I ended up introducing several bugs in the manual translation.
  • A lot of those bugs were initialisation errors that simply don’t exist in D. Or Java. Or Go. Sigh.
  • I hate headers with a burning passion. Modules should be the top C++17 priority IMHO, since there’s zero chance of them making it into C++14.
  • I missed slices. A lot. std::vector and std::deque are poor substitutes (see the sketch after this list).
  • Trying to port code written in a garbage-collected language by simply introducing std::unique_ptr and std::shared_ptr where appropriate was a massive PITA. I’m not even sure I got it right; more on that below.
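
To make the slice and initialisation points above concrete, here’s a minimal D sketch. It’s purely illustrative; none of it is taken from the broker code:

    // A minimal sketch of slices and default initialisation in D.
    // Purely illustrative; not code from the actual broker.
    void main() {
        import std.stdio: writeln;

        ubyte[] buffer = [0, 1, 2, 3, 4, 5, 6, 7];

        // A slice is a view over the same memory: no copy, no iterator pairs.
        // The closest thing the C++ port had was copying std::vectors of bytes.
        auto payload = buffer[2 .. $];
        writeln(payload.length); // 6
        payload[0] = 42;
        assert(buffer[2] == 42); // same underlying memory

        // Every variable gets a well-defined initial value (ints are 0,
        // pointers are null), which is why a whole class of the port's
        // initialisation bugs can't happen on the D side.
        int messageCount;
        assert(messageCount == 0);

        // 'immutable x = ...;' in D became 'const auto x = ...;' in the port.
        immutable firstByte = buffer[0];
        writeln(firstByte); // 0
    }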

The C++ implementation is incomplete and will stay that way, since I’m now bored of it, tired, and just want to move on. It’s also buggy: all of the loadtest benchmarks were run with only 1000 messages instead of the values used at the top, since it crashes if left running for long enough. I’m not going to debug it, because it wouldn’t be any fun and nobody is paying me to do it.

It’s not optimised either. I never even bothered to run a profiler; I was going to do that once I’d fixed all the bugs, but I gave up long before then. I know it does excessive copying, because copying vectors of bytes around was the easiest way to get it to compile after porting the D code, which uses slices. Replacing those copies with iterators was on my TODO list but, as I mentioned, that’s not going to happen.

I reckon a complete version would probably do as well as Java in pingtest, but I have a hunch that D would still win loadtest. This is, of course, pure speculation. So why did I bother to include the C++ results? I thought they would still be interesting and give a rough idea of how it would compare. I wish I had the energy to finish it, but I just wasn’t having fun anymore and I don’t see the point. Writing it from scratch in C++ would have been a better idea, but it would definitely have taken longer. It would have looked similar to what I have now anyway (I’d still be the author), but I have a feeling it would have had fewer bugs. Thinking about memory management from the start is very different from trying to apply smart pointers to an existing design that depended on a garbage collector.

My conclusion from all of this is that I really don’t want to write C++ again unless I have to. And that, for all the misgivings I had about a garbage collector, it saves me time I would otherwise have spent tracking down memory leaks, double frees and all those other “fun” activities. At least for this exercise, it doesn’t even seem to make a dent in performance: Java won pingtest, after all, though its GC is a lot better than D’s. To add insult to C++’s injury, the Java implementation took Patrick a morning to write from scratch and an afternoon to profile and optimise. It took me days to port an existing, working implementation from the closest language there is to C++, and I ended up with a crashing binary. It just wasn’t worth the time and effort, but at least now I know that.


Go vs D vs Erlang vs C in real life: MQTT broker implementation shootout.

At work we recently started using the MQTT protocol, which follows a publish/subscribe model. It’s simple in the good way and well thought out. We went with an open-source implementation named Mosquitto. A few weeks ago, on the way back from a lunch break, my colleague Jeff told me he was writing an MQTT broker in Go, his new favourite language. Since we’re using MQTT at work, I guess he was looking for a new project to write in Go, and voilà. It should be a good fit; after all, this is the type of application Go was made for. But hubris caught up with him when he uttered: “And, of course, it’ll be super fast. It won’t even be fair to the other languages.” I’m paraphrasing, but that’s how I remember it. You can read Jeff’s account here.

I’m not a fan of Go at all. I wasn’t particularly impressed when I first read about it, but given how much I kept hearing about it on proggit and from Jeff himself, I gave it a go a few months back by writing a genetic algorithm framework in it. I came out of that experience liking it even less. It’s just not for me: Go is an opinionated language, which would be fine if I agreed with its creators’ opinions, but the way my brain works puts me on the opposite side of nearly all of them. It does have a few things I like. The absence of semicolons and parentheses, for instance. Goroutines and channels are a huge win. I can live without exceptions, even though I’d rather not, but generics? They can pry them from my cold dead hands.

D, on the other hand… now we’re talking. Everything I like about C++ and more, with none of the warts. Of course, it has its own warts too, but nothing’s perfect. So, as a D fan and not so much of a Go one, I took Jeff’s statement as a gauntlet to the face. I had learned of vibe.d from the DConf 2013 videos and really liked its idea of a synchronous API on top of asynchronous IO. I was convinced I could at least match a Go implementation’s performance, if not exceed it. So I wrote enough of an MQTT broker implementation to be able to run Jeff’s Go benchmark and compare performance. After about two days I had a version that was faster than his. He then came up with a second benchmark, on which my implementation performed poorly, so I went back to optimising. Around this time another colleague wanted in on the competition, used it as an excuse to learn Erlang, and wrote his own implementation. A few rounds of optimisation later the results were in; they’re included below, with an explanation of the methodology to follow.

loadtest (throughput - bigger is better)
Connections:   100            500            750            1k
D + vibe.d:    121.7 +/- 1.5  166.9 +/- 1.5  171.1 +/- 3.3  167.9 +/- 1.3
C (Mosquitto): 106.1 +/- 0.8  122.4 +/- 0.4   95.2 +/- 1.3   74.7 +/- 0.4
Erlang:        104.1 +/- 2.2  124.2 +/- 5.9  117.6 +/- 4.6  117.7 +/- 3.2
Go:             90.9 +/- 11   100.1 +/- 0.1   99.3 +/- 0.2   98.8 +/- 0.3

pingtest (throughput constrained by latency - bigger is better)
parameters:    400p 20w       200p 200w      100p 400w
D + vibe.d:    50.9 +/- 0.3   38.3 +/- 0.2   20.1 +/- 0.1
C (Mosquitto): 65.4 +/- 4.4   45.2 +/- 0.2   20.0 +/- 0.0
Erlang:        49.1 +/- 0.8   30.9 +/- 0.3   15.6 +/- 0.1
Go:            45.2 +/- 0.2   27.5 +/- 0.1   16.0 +/- 0.1

All of the numbers are thousands of messages received per second by the client application. All measurements were done on my laptop, a Lenovo W530 running Arch Linux, so all of the TCP connections were over localhost. Each number is the mean of several measurements, with the standard deviation used as the error estimate. All of the MQTT broker implementations run in one system thread; using multiple threads brought no performance benefit for latency and made throughput worse.

Mosquitto was compiled with gcc 4.8.2, the Go implementation was executed with go run, and the D implementation was compiled with dmd 2.064.2. As for the Erlang version, I’m not sure: I installed the Arch Linux erlang package and used my colleague’s Makefile without looking at it.

The two benchmarks are loadtest and pingtest; the former measures throughput, the latter latency. In loadtest a few hundred connections are made to the broker. Half of them subscribe to a topic and the other half publish to that topic as fast as possible. The benchmark ends when every subscriber has received a certain number of messages, set by a command-line argument. I varied the number of connections to see how that would affect each broker. There was no contest here: the D implementation was by far the fastest. With 100 connections I think there wasn’t enough work to do, so all the implementations ended up waiting on IO. Except for Mosquitto, they all scaled rather nicely. I had problems measuring Jeff’s implementation due to a bug; he knows about it but can’t be bothered to fix it. His loadtest numbers were taken with Go 1.1 (the pingtest numbers are from Go 1.2). When his implementation works, Go 1.2 produces a binary that performs on the order of 10-15% faster than the numbers above, which might mean performance equivalent to the Erlang implementation. I even suspect the bug shows up more often with Go 1.2 precisely because the resulting binary is faster.

pingtest is Jeff’s attempt at a better benchmark, and it measures latency. The two main command-line arguments are the number of connection pairs and the number of wildcard subscribers. For each pair, one connection subscribes to a request topic unique to that pair and the partner connection subscribes to a reply topic. One partner publishes a request and waits for the other to publish a reply, so the number of messages sent per second depends on the round-trip time between the two. Additionally, the wildcard subscribers receive both the request and the reply messages of the first connection pair. In the table, the number before ‘p’ is the number of connection pairs and the number before ‘w’ is the number of wildcard subscriber connections. Here Mosquitto is the fastest, but the performance difference shrinks as the number of wildcards grows, and it is on par with the D implementation in the last column. I’m not sure why it’s the fastest; one possibility is that vibe.d is switching to the “wrong” fiber, but that’s pure speculation on my part.

What about readability and ease of writing? I can’t read Erlang, so I can’t comment on that. Despite my preference for D, I think the D and Go implementations are equally readable. Since the Erlang unit tests live in the same files as the implementation, it’s hard to know exactly how many lines long it is, and it gets worse: the Erlang version implements most of MQTT, whereas the D implementation essentially only implements what’s needed to run the benchmarks. With those caveats (and without counting dependencies), the three implementations clock in somewhere between 800 and 1000 lines, without filtering out blank lines and comments.

Could they be optimised further? Probably. In the end the choice of algorithms and data structures matters more than the programming language, so my personal advice is to choose the language that makes you productive. None of them magically made the implementations fast; we all had to profile, analyse, optimise, try ideas and measure. I loved writing it in D, but then again I’m a convert. I particularly enjoyed using the serialisation library I wrote for it, Cerealed: much of the typical bit-twiddling boilerplate in networking code disappeared, and that was only made possible by D’s compile-time reflection and user-defined attributes.
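
To give a flavour of the idea, here’s a tiny, self-contained D sketch of a serialiser driven by compile-time reflection. It is not Cerealed’s actual API, and the message type is made up; it’s only meant to show why the hand-written packing code disappears:

    // Toy generic serialiser: walks a struct's fields at compile time.
    // Not Cerealed's API; SubackHeader is a made-up example type.
    ubyte[] toBytes(T)(T value) if (is(T == struct)) {
        ubyte[] result;
        // This foreach is unrolled at compile time, so there is no
        // per-message packing code to write by hand.
        foreach (field; value.tupleof) {
            static if (is(typeof(field) == ubyte))
                result ~= field;
            else static if (is(typeof(field) == ushort)) {
                // MQTT integers are big-endian on the wire.
                result ~= cast(ubyte)(field >> 8);
                result ~= cast(ubyte)(field & 0xff);
            } else
                static assert(0, "field type not handled in this sketch");
        }
        return result;
    }

    struct SubackHeader { // hypothetical message type, not the broker's own
        ushort msgId;
        ubyte qos;
    }

    unittest {
        assert(SubackHeader(0x1234, 1).toBytes == [0x12, 0x34, 1]);
    }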

Source:

D: https://github.com/atilaneves/mqtt
C: https://bitbucket.org/oojah/mosquitto/
Go: https://github.com/jeffallen/mqtt
Erlang: https://bitbucket.org/pvalsecc/erlangmqtt