
Type inference debate: a C++ culture phenomenon?

I read two C++ subreddit threads today on using the auto keyword. They’re both questions: the first asks why certain people seem to dislike using type inference, while the second asks which commonly taught guidelines should be considered bad practice. A few replies there mention auto.

This confuses me for more than one reason. The first is that I’m a big proponent of type inference. Having to work to tell a computer what it already knows is one of my pet peeves. I also believe that wanting to know the exact type of a variable is a, for lack of a better term, “development smell”, especially in typed languages with generics. I think that the possible operations on a type are what matters, and figuring out the exact type if needed is a tooling problem that is solved almost everywhere. If you don’t at least have “jump to definition” in your development environment, you should.
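
To make that concrete, here’s a minimal C++ sketch of what I mean; the names are invented for illustration:

#include <map>
#include <string>
#include <vector>

int main() {
    std::map<std::string, std::vector<int>> scoresByPlayer{{"alice", {1, 2, 3}}};

    // The exact type of entry is std::pair<const std::string, std::vector<int>>&,
    // which the compiler already knows and a reader is one "jump to definition"
    // away from. What matters is that .first and .second are available.
    for(const auto& entry : scoresByPlayer)
        (void)entry.second.size();
}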

The other main source of confusion for me is why this debate seems restricted to the C++ community, at least according to my anecdotal evidence. As far as I’m aware, type inference is taken to be a good thing almost universally in Go, D, Rust, Haskell, et al. I don’t have statistics on this at all, but from what I’ve seen so far it seems to be the case. Why is that?

One reason might be change. As I learned reading Peopleware, humans aren’t too fond of it. C++ lacked type inference until C++11, and whoever learned it then might not like using auto now; it’s “weird”, so it must be bad. I’ve heard C programmers who “like” looping over a collection using an integer index and didn’t understand when I commented on their Python code asking them to please not do that. “Like” here means “this is what I’m used to”. Berating people for disliking change is definitely not the way forward, however. One must take their feelings into account if one wants to effect change. Another lesson learned from Peopleware: the problem is hardly ever technical.

Otherwise, I’m not sure. I’m constantly surprised by what other people find to be more or less readable when it comes to code. There are vocal opponents to operator overloading, for instance. I have no idea why.

What do you think? Where do you stand on auto?


Build systems are stupid

Big complicated projects nearly always need a build system. They might be generating code. They might be compiling different branches depending on the platform or a build-time configuration option. They might turn unity builds on or off for C++. For most simple projects, however, build systems are stupid.

Let’s say we have a single C file. We need to tell the compiler the name of the file we want to compile, and the name of the resulting binary unless we’re happy with it being called a.out, which is unlikely:

gcc -Wall -Wextra -Werror -o foo foo.c

Maybe it’s the first commit in a brand new repository and all you have is foo.c in there. Why am I telling the compiler what to build? What else would it build??

$ ls
foo.c
$ gcc
gcc: fatal error: no input files
compilation terminated.

“Real projects don’t look like that!” Ok:

$ ls
build include src 
$ ls src
foo.c main.c
$ ls include
foo.h

I doubt any human who has ever written C would be confused as to how this project is supposed to be compiled. And yet, this is probably the simplest way I can come up with to tell the computer to do it properly:

CFLAGS ?= -Iinclude -Wall -Wextra -Werror -g

SRCS := $(shell find src -name '*.c')
HDRS := $(shell find include -name '*.h')
OBJS := $(subst src,build,$(subst .c,.o,$(SRCS)))

build/foo: $(OBJS)
	gcc -o $@ $^

build/%.o: src/%.c $(HDRS)
	gcc $(CFLAGS) -o $@ -c $<

Is there a better way than $(subst)? Probably. I don’t care. I was also too lazy to figure out dependencies, so all source files depend on all headers. That’s a lot of lines of Makefile when all I wanted to specify was that source files are in src, headers are in include, binaries go in build, and that the compiler flags are -Wall -Wextra -Werror -g. The first three are obvious from the existence and names of those directories, so there are 8 lines of code here to tell a computer what the compiler flags are. Robots: 1, Humans: 0.

CMake isn’t that much better:

cmake_minimum_required (VERSION 2.8.11)
project (foo)

file(GLOB SRC_FILES ${PROJECT_SOURCE_DIR}/src/*.c)
add_executable(foo ${SRC_FILES})
target_include_directories(foo PRIVATE ${PROJECT_SOURCE_DIR}/include)

Robots: 2, Humans: 0.

I’ll probably get told off for using file(GLOB) because what I really should be doing is manually listing source files, because reasons. Why would I have files in src if I don’t want them to be compiled? In what universe does it make sense for a person, whose job is to tell computers what to do so that humans don’t have to, to have to tell a build system which files to build? In my opinion, every new language should default to doing what Pony does: running the compiler just builds everything in the current directory tree.

I also think that new languages should ship with a compiler-based build system, i.e. the compiler is the build system. The reason is that, as a developer, I want to rebuild the minimum possible amount of code after my edits in order to run my tests. The CMake solution above fails at this: the compiler calculates dependencies and automatically feeds them into the build system, but those dependencies are too coarse.

If I touch any header, all source files that #include it will be recompiled. This happens even if it’s an additive change, which by definition can’t affect any existing source files. If I change the signature of a function that is only used in foo.c, well, tough: I’ll have to wait until bar.c, baz.c and quux.c all get recompiled for the temerity of depending on different functions from the same header.

This problem is bad enough in C and C++ but actually gets worse for languages with modules: now every file is a “header”. Changes to the implementation trigger recompilations, because our build systems are too stupid to know any better. I can’t see how a good build system can be an external tool unless the compiler is a library, and even then it would be doing the same work twice since dependency calculations are better done while a module/translation unit is being compiled.

Taking it a step further, the whole concept of caring about files and modules is probably outdated anyway. Why not a Smalltalk-like environment? You make changes to the source and your “image” gets rebuilt in the background, one part of the AST at a time, automagically resolving and rebuilding dependencies (at the function level!).

Computers and build systems: what have you done for me lately?


My DConf2019 talk

I’ll be speaking at DConf this year. I haven’t yet decided how to structure the talk, but I’d like to take a page out of Dan Saks’ excellent CppCon 2016 talk that covered technical and psychological aspects of language adoption. In his case he was addressing a C++ audience on the difficulty of convincing C programmers to adopt C++ (an admirable goal!). In mine, it’s: how do I out-C++ C++?

The genesis for my talk was my decision a few years ago to write tests for a legacy C codebase. I wanted to use D, but even as a fan of the language I chose to write the tests in C++ instead. And if even I wouldn’t have picked D for that task, who would?

I still think it was the right decision to make. The first step in “importing” the legacy project into a C++ test looks like this:

extern "C" {
    #include "legacy.h"
}

There is no second step. From here one #includes a test library such as catch2 and starts writing tests for the C functions in the API.
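
A whole test file can be as small as this sketch, assuming catch2’s single-header distribution; legacy_add is a hypothetical function standing in for whatever legacy.h actually declares:

// test_legacy.cpp
#define CATCH_CONFIG_MAIN // ask catch2 to generate main()
#include "catch.hpp"

extern "C" {
    #include "legacy.h" // the legacy C API, included as-is
}

TEST_CASE("legacy_add adds two ints") {
    REQUIRE(legacy_add(2, 3) == 5); // legacy_add is hypothetical
}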

As far as I know, every language under the sun can interface with C. There’s usually (always?) some way to do FFI, and examples invariably show how to call a function that takes 2 integers, maybe even a const char* parameter. Unfortunately, that’s not what real codebases need to do. They need to call a function that takes a pointer to a structure that’s defined in a different header, which has 3 pointers to structures that are themselves defined in different headers, which…

You get the idea. Writing the definitions by hand is error-prone, boring, and time-consuming. Even with correct definitions, nobody seems to talk about the elephant in the C API room: macros. Not once have I encountered a non-trivial C API that doesn’t require the C preprocessor to use properly. C++ users are at an advantage here: just use the macros. Everyone else has to get by with ad-hoc solutions. And if the function-like macro implementations change, the code will break.
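
To make the macro point concrete, here’s a self-contained sketch in the style of many real C APIs; every name in it is invented for illustration:

// The intended entry point is a macro, so FFI binding generators never see it.
extern "C" {
    struct legacy_context { int version; };
    inline void legacy_init_impl(legacy_context* ctx, int version) {
        ctx->version = version;
    }
}

#define LEGACY_API_VERSION 3
#define legacy_init(ctx) legacy_init_impl((ctx), LEGACY_API_VERSION)

int main() {
    legacy_context ctx;
    legacy_init(&ctx); // from C++ this just works: the preprocessor is shared
}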

I think that the only way to make it so that C++ isn’t the obvious and/or only choice is for a challenger to allow one to write `#include "legacy.h"` and have it work just as easily as it does in C++. That’s why I started the dpp project, and I’m looking forward to talking about the technical challenges I’ve encountered on my journey so far.

See you at DConf!


Issues DIP1000 can’t (yet) catch

D’s DIP1000 is the language’s attempt to avoid memory safety issues. It introduces the notion of scoped pointers and in principle (modulo bugs) prevents one from writing memory-unsafe code.

I’ve been using it wherever I can since it was implemented, and it’s what makes it possible for me to have written a library allowing for fearless concurrency. I’ve also written a D version of C++’s std::vector that leverages it, which I thought was safe but turns out to have had a gaping hole.

I did wonder if Rust’s more complicated system would have advantages over DIP1000, and now it seems that the answer is yes. As the linked github issue above shows, there are ways of creating dangling pointers in the same stack frame:

void main() @safe {
    import automem.vector;

    auto vec1 = vector(1, 2, 3);
    int[] slice1 = vec1[];
    vec1.reserve(4096);  // invalidates slice1 here
}

Fortunately this is caught by ASAN if slice1 is used later. I’ve worked around the issue for now by returning a custom range object from the vector that has a pointer back to the container – that way it’s impossible to “invalidate iterators” in C++-speak. It probably costs more in performance though due to chasing an extra pointer. I haven’t measured to confirm the practical implications.
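
For reference, “invalidating iterators” refers to C++ code like the following minimal sketch, which also compiles without a peep:

#include <vector>

int main() {
    std::vector<int> vec{1, 2, 3};
    const int* slice = vec.data(); // the moral equivalent of vec1[] above
    vec.reserve(4096);             // may reallocate: slice now dangles
    return *slice;                 // undefined behaviour, happily compiled
}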

In Rust, the problem just can’t occur. This fails to compile (as it should):

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    let s = v.as_slice();
    println!("org v: {:?}\norg s: {:?}", v, s);
    v.resize(15, 42);
    println!("res: v: {:?}\norg s: {:?}", v, s);
}

I wonder if D can learn/borrow/steal an idea or two about only having one mutable borrow at a time. Food for thought.


The joys of translating C++’s std::function to D

I wrote a program to translate C headers to D. Translating C was actually more challenging than I thought; I even got to learn things I didn’t know about the language, despite having known it for 24 years. The problems that I encountered were all minor though, and to the best of my knowledge have all been resolved (modulo bugs).

C++ is a much larger language, so the effort should be considerably greater. Even so, I didn’t expect it to be as hard as it’s been, and in this blog post I want to talk about how “interesting” it was to translate C++11’s std::function by hand.

The first issue for most languages would be that it relies on template specialisation:

template<typename>
class function;  // doesn't have a definition anywhere

template<typename R, typename... Args>
class function<R(Args...)> { /* ... */ }

This is a way of constraining the std::function template to only accept function types. Perhaps surprisingly to some, the C++ syntax for the type of a function that takes two ints and returns a double is double(int, int). I doubt most people see this outside of C++ standard library templates. If it’s still confusing, think of double(int, int) as the type that is obtained by dereferencing a pointer of type double(*)(int, int).
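
A quick C++17 sketch of that relationship, checked at compile time:

#include <type_traits>

using FunPtr  = double(*)(int, int); // the familiar function pointer type
using FunType = double(int, int);    // the function type itself

// "Dereferencing" the pointer type yields the function type.
static_assert(std::is_same_v<std::remove_pointer_t<FunPtr>, FunType>);

int main() {}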

D is, as far as I know, the only language other than C++ to support partial template specialisation. There are however two immediate problems:

  • function is a keyword in D
  • There is no D syntax for a function type

I can mitigate the name issue by calling the symbol function_ instead; however, this will affect name mangling, meaning nothing will actually link. D does have pragma(mangle) to tell the compiler how to mangle symbols, but std::function is a template; it doesn’t have any mangling until it’s instantiated. Let’s worry about that later and call the template function_ for now.

The second issue can be worked around:

// C++: `using funPtr = double(*)(int, int);`
alias funPtr = double function(int, int);
// C++: `using funType = double(int, int);`
alias funType = typeof(*funPtr.init);

As in C++, the function type is the type one gets from dereferencing a function pointer. Unlike C++, there’s currently no syntax to write it directly. First attempt:

// helper to automate getting an alias to a function type
template FunctionType(R, Args...) {
    alias ptr = R function(Args);
    alias FunctionType = typeof(*ptr.init);
}

struct function_(T);
struct function_(T: FunctionType!(R, Args), R, Args...) { }

This doesn’t work, probably due to this bug preventing the helper template FunctionType from working as intended. Let’s forget the template constraint:

extern(C++, "std") {
    struct function_(T) {
        import std.traits: ReturnType, Parameters;
        alias R = ReturnType!T;
        alias Args = Parameters!T;
        // In C++: `R operator()(Args) const`;
        R opCall(Args) const;
    }
}

void main() {
    alias funPtr = double function(double);
    alias funType = typeof(*funPtr.init);
    function_!funType f;
    double result = f(3.3);
}

This compiles but it doesn’t link: there’s an undefined reference to std::function_::operator()(double) const. Looking at the symbols in the object files using nm, we see that g++ emitted _ZNKSt8functionIFddEEclEd but dmd is trying to link to _ZNKSt9function_IFddEEclEd. As expected, name mangling issues related to renaming the symbol.

We could manually add a pragma(mangle) to tell D how to mangle the operator for the double(double) template instantiation, but that solution doesn’t scale. CTFE (constexpr if you speak C++ but not D) to the rescue!

// snip - as before
pragma(mangle, opCall.mangleof.fixMangling)
R opCall(Args) const;

// (elsewhere at file scope)
string fixMangling(string str) {
    import std.array: replace;
    return str.replace("9function_", "8function");
}

What’s going on here is an abuse of D’s compile-time power. The .mangleof property is a compile-time string that tells us how a symbol is going to be mangled. We pass this string to the fixMangling function which is evaluated at compile-time and fed back to the compiler telling it what symbol name to actually use. Notice that function_ is still a template, meaning .mangleof has a different value for each instantiation. It’s… almost magical. Hacky, but magical.

The final code compiles and links. Actually creating a valid std::function<double(double)> from D code is left as an exercise to the reader.


Comparing Pythagorean triples in C++, D, and Rust

EDIT: As pointed out in the Rust reddit thread, the Rust version can be modified to run faster due to a subtle difference between ..=z and ..(z+1). I’ve updated the measurements, with rustc0 being the former and rustc1 the latter. I’ve also had to change some of the conclusions.

You may have recently encountered and/or read this blog post criticising a possible C++20 implementation of Pythagorean triples using ranges. In it, the author benchmarks different implementations of the problem, comparing readability, compile times, run times and binary sizes. My main language these days is D, and given that D also has ranges (and right now, as opposed to in a future version of the language), I almost immediately reached for my keyboard. By that time there were already some D and Rust versions floating about as a result of the reddit thread, so fortunately for lazy me “all” I had to do next was to benchmark the lot of them. All the code for what I’m about to talk about can be found on github.

As fancy/readable/extensible as ranges/coroutines/etc. may be, let’s get a baseline by comparing the simplest possible implementation using for loops and printf. I copied the “simplest.cpp” example from the blog post then translated it to D and Rust. To make sure I didn’t make any mistakes in the manual translation, I compared the output to the canonical C++ implementation (output.txt in the github repo). It’s easy to run fast if the output is wrong, after all. For those not familiar with D, dmd is the reference compiler (compiles fast, generates sub-optimal code compared to modern backends) while ldc is the LLVM-based D compiler (same frontend, uses LLVM for the backend). Using ldc also makes for a fairer comparison with clang and rustc, since all three use the same backend (albeit possibly different versions of it).
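
For reference, the shape of the simple version is roughly the sketch below (the exact code that was timed is in the repo):

#include <cstdio>

int main() {
    int numTriples = 0;
    for(int z = 1; numTriples < 1000; ++z)
        for(int x = 1; x <= z && numTriples < 1000; ++x)
            for(int y = x; y <= z && numTriples < 1000; ++y)
                if(x * x + y * y == z * z) {
                    std::printf("(%d, %d, %d)\n", x, y, z);
                    ++numTriples;
                }
}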

All times reported will be in milliseconds as reported by running commands with time on my Arch Linux Dell XPS15. Compile times were measured by excluding the linker, the commands being clang++ -c src.cpp, dmd -c src.d, ldc -c src.d, and rustc --emit=obj src.rs for each compiler (obviously for unoptimised builds). The original number of iterations was 100, but the runtimes were so short as to be practically useless for comparisons, so I increased that to 1000. The times presented are the result of running each command 10 times and taking the minimum, except for the clang unoptimised build of C++ ranges because, as a wise lady on the internet once said, ain’t nobody got time for that (you’ll see why soon). Optimised builds were done with -O2 for clang and ldc, -O -inline for dmd, and -C opt-level=2 for rustc. The compiler versions were clang 7.0.1, dmd 2.083.1, ldc 1.13.0 and rustc 1.31.1. The results for the simple, readable, non-extensible version of the problem (simple.cpp, simple.d, simple.rs in the repo):

Simple          CT (ms)  RT (ms)

clang debug      59       599
clang release    69       154
dmd debug        11       369
dmd release      11       153
ldc debug        31       599
ldc release      38       153
rustc0 debug    100      8445
rustc0 release  103       349
rustc1 debug             6650
rustc1 release            217

C++ run times are the same as D when optimised, with compile times being a lot shorter for D. Rust stands out here for both being extremely slow without optimisations, compiling even slower than C++, and generating code that takes around 50% longer to run even with optimisations turned on! I didn’t expect that at all.

The simple implementation couples the generation of the triples with what’s meant to be done with them. One option to solve that, not discussed in the original blog, is to pass in a lambda or other callable instead of hardcoding the printf. To me this is the simplest way to solve the problem, but according to one redditor there are composability issues that may arise. That might or might not be important depending on the application. One can also compose functions in a pipeline and pass that in, so I’m not sure what the problem is. In any case, I wrote 3 implementations (lambda.cpp, lambda.d, lambda.rs) along the lines of the sketch below, and timed them.
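
A sketch of the C++ version (the repo versions are the ones actually measured):

#include <cstdio>

// Generation is decoupled from consumption: the caller decides what
// to do with each triple by passing in any callable.
template <typename F>
void triples(int limit, F onTriple) {
    int found = 0;
    for(int z = 1; found < limit; ++z)
        for(int x = 1; x <= z && found < limit; ++x)
            for(int y = x; y <= z && found < limit; ++y)
                if(x * x + y * y == z * z) {
                    onTriple(x, y, z);
                    ++found;
                }
}

int main() {
    triples(1000, [](int x, int y, int z) {
        std::printf("(%d, %d, %d)\n", x, y, z);
    });
}

The results: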

Lambda          CT (ms)  RT (ms)

clang debug      188       597
clang release    203       154
dmd debug         33       368
dmd release       37       154
ldc debug         59       580
ldc release       79       154
rustc0 debug     111      9252
rustc0 release   134       352
rustc1 debug              6811
rustc1 release             154

The first thing to notice is that, except for Rust (which was slow anyway), compile times have risen by about a factor of 3: there’s a cost to being generic. Run times seem unaffected, except that the unoptimised Rust build got slightly slower. I’m glad my intuition seems to have been right on how to extend the original example: keep the for loops, pass a lambda, profit. Sure, the compile times went up, but in the context of a larger project this probably wouldn’t matter that much. Even for C++, 200ms is… ok. Performance-wise, it’s now a 3-way tie between the languages, with no runtime penalty compared to the non-generic version. Nice!

Next up, the code that motivated all of this: ranges. I didn’t write any of it: the D version was from this forum post, the C++ code is in the blog (using the “available now” ranges v3 library), and the Rust code was on reddit. Results:

Range           CT (ms)   RT (ms)

clang debug     4198     126230
clang release   4436        294
dmd debug         90      12734
dmd release      106       7755
ldc debug        158      15579
ldc release      324       1045
rustc0 debug     128      11018
rustc0 release   180        422
rustc1 debug               8469
rustc1 release              168

I have to hand it to rustc – whatever you throw at it the compile times seem to be flat. After modifying the code as mentioned in the edit at the beginning, it’s now the fastest out of the 3!

dmd compile times are still very fast, but the generated code isn’t. ldc does better at optimising, but the runtimes are pretty bad for both of them. I wonder what changes would have to be made to the frontend to generate the same code as for loops.

C++? I only ran the debug version once. I’m just not going to wait that long, and besides, whatever systematic errors are in the measurement process don’t really matter when it takes over 2 minutes. Compile times are abysmal, the only solace being that optimising the code only takes 5% longer than not bothering at all.

In my opinion, none of the versions are readable. Rust at least manages to be nearly as fast as the previous two versions, with C++ taking twice as long as it used to for the same task. D didn’t fare well at all where performance is concerned. It’s possible to get the ldc optimised version down to ~700ms by eliminating bounds checking, but even then it wouldn’t seem to be worth the bother.

My conclusion? Don’t bother with the range versions. Just… don’t. Nigh unreadable code that compiles and runs slowly.

Finally, I tried the D generator version from reddit. There’s a Rust version on Hacker News as well, but it involves using Rust nightly and enabling features, and I can’t be bothered figuring out how to do any of that. If any Rustacean wants to submit something that works with a build script, please open a PR on the repo.

Generator      CT (ms)  RT (ms)

dmd debug      208      381
dmd release    222      220
ldc debug      261      603
ldc release    335      224

Compile times aren’t great (the worst so far for D), but the runtimes are quite acceptable. I had a sneaking suspicion that the runtimes here were slower due to startup code for std.concurrency, so I increased N to 5000 and compared this version to the simplest possible one at the top of this post:

N=5k                    RT (ms)
dmd release simple      8819
dmd release generator   8875

As expected, the difference vanished.

Final conclusions: for me, passing a lambda is the way to go, or generators if you have them (C++ doesn’t have coroutines yet, and the Rust version mentioned above needs features that are apparently only available in the nightly builds). I like ranges in both their C++ and D incarnations, but sometimes they just aren’t worth it. Lambdas/generators FTW!


libclang: not as great as I thought

I’ve been hearing about the delights of libclang for a while now. A compiler as a library, what a thought! Never-get-it-wrong again parsing/completion/whathaveyou! Amazing.

Then I tried using it.

If you’re parsing C (and maybe Objective C, but I wouldn’t know), then it’s great. It does what it says on the tin and then some, and all the information is at your fingertips. C++? Not so much.

libclang is the C API of the clang frontend. The “real” code is written in C++, but it’s unstable in the sense that there are no API guarantees. The C API, however, is stable. It’s also the only option if you want to use the compiler as a library from a different language.

As I’ve found out, the only C++ entities that are exposed by libclang are the ones that the authors have needed, which leaves a lot to be desired. Do you want to get a list of a struct’s template parameters? You can get the number of them, and you can get a type template argument at a particular index after that. That sounds great, until you realise that some template arguments are values, and you can’t get those. At all. You can’t even tell if they’re values or not. I had to come up with a heuristic where I’d call clang_Type_getTemplateArgumentAsType and then use the type kind to determine if it’s a value or not (`CXType_Invalid` means a value, `CXType_Unexposed` means a non-specialised type argument and anything else is a specialised template type argument. Obviously.). Extracting the value? That involves going through the tokens of the struct and finding the ith token in the angle brackets. Sigh.
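
Condensed, the heuristic looks something like this sketch (those are real libclang entry points; error handling elided):

#include <clang-c/Index.h>

// Guess whether the template argument at `index` is a value rather than a type.
bool isValueTemplateArgument(CXType type, unsigned index) {
    const CXType argType = clang_Type_getTemplateArgumentAsType(type, index);
    // CXType_Invalid: libclang can't produce a type, so it's (probably) a value.
    // CXType_Unexposed would be a non-specialised type parameter; anything else
    // is a specialised type argument. Obviously.
    return argType.kind == CXType_Invalid;
}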

And this is because it’s a templated struct/class. Template functions don’t need any of this and are better supported because reasons.

Then there are bugs. Enum constants in template structs show up as 0 for no reason at all, and template argument naming is inconsistent:

template <typename> struct Struct;
template <typename T> struct Struct {
    Struct(const Struct& other) {}
};

See that copy constructor above? Its only parameter is, technically, of type const Struct<T>&. So you’d think libclang would tell you that the template argument’s spelling is T, but no, it’s type-parameter-0-0. Remove the first line with the struct declaration? Then the type template argument’s spelling is, as a normal person would have guessed, T. If the declaration names the type as `T` it also works as expected. I assume, again, it’s because reasons.

It’s bad enough that I’m not the only one to have encountered issues. There’s at least one library I know of written to get around libclang’s problems, but I can’t use it for my project because it’s written in C++.

I’m going to eventually have to submit patches to libclang myself, but I have no idea what the approval process over there is like.


Myths programmers believe

I’m fascinated by misconceptions. I used to regularly host pub quizzes (aka trivia nights in some parts of our blue marble), and anyone who’s attended a few of my quizzes knows that if it sounds like a trick question, it probably is. I’ve done more than my fair share of “True or False” rounds where the answer to all the questions was false, but hardly anyone noticed the pattern because each and every one of them was something people believe that simply isn’t true. Wait, how does that relate to programming? Read on.

Programmers that comment on Hacker News or Proggit links are not a randomly sampled cross section of the industry. They’re better. As with anyone who’s ever attended a conference, they’re self selected individuals who are more passionate about their craft than the average bear. And yet, I see comments over and over again, from well-meaning savvy coders about beliefs and misconceptions that seem to be as widespread as they are provably false. Below are some of my favourites, with explanations as to what they mean.

C is magically fast.

I’ve written about this before. If anything on this list is going to get me downvoted, cause cognitive dissonance, and start flame wars, this will probably be the one.

As with everything else in this blog post, I just… don’t understand. Maybe it’s because I learned C back when it was considered slow for games programming and only asm would do. Or maybe it’s that it all compiles down to machine code anyway, and I don’t harbour homeopathic-like tendencies to think that the asm, like water, has a memory of whence it came, causing the CPU to execute it faster. Which brings me to…

GC languages are magically slow.

Where GC means “tracing GC”. If I had a penny for each time someone comments on a thread about how tracing GCs just aren’t acceptable when one does serious systems programming I’d have… a couple of pounds and/or dollars.

I kind of understand this one, given that I believed it myself until 4 years ago. I too thought that the magical pixies working in the background to clear up my mess “obviously” cause a program to run slower. I’ll do a blog post on this one specifically.

Emacs is a text-mode program.

Any time somebody argues that text editors can’t compete with IDEs, somebody invariably asks why they’d want to use a text-mode editor. I’m an Emacs user, and I don’t know why anyone would want to do that either. Similarly…

Text editors can’t and don’t have features like autocomplete, go to definition, …

This usually shows up in the same discussions where Emacs is a terminal program. It also usually comes up when someone says an IDE is needed because of the features above. I don’t program without them, I think it’d be crazy to. Then again, I’ve rarely seen anyone use their editor of choice well. I’ve lost count of how many times I’ve watched someone open a file in vim, realise it’s not the one they want, close vim, open another file, close it again… aaarrrgh.

I’ve also worked with someone who “liked Eclipse”, but didn’t know “go to definition” was a thing. One wonders.

Given that the C ABI is the lingua franca of programming, the implementation should be written in C

C is the true glue language when it comes to binary interfaces. It got that way for historical reasons, but there’s no denying it reigns supreme in that respect. For reasons that dumbfound me, a lot of people take this to mean that if you have a C API, then obviously the only language you can implement it in is C. Never mind that the Visual Studio libc is written in C++; this is why you pick C!
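
For what it’s worth, here’s a minimal sketch of a C API implemented in C++; all the names are invented:

#include <vector>

namespace {
    std::vector<int> values; // C++ internals, invisible across the C ABI
}

extern "C" {
    // Callers see plain C functions; the implementation language doesn't leak.
    void values_push(int x) { values.push_back(x); }
    int values_count(void) { return static_cast<int>(values.size()); }
}

int main() {
    values_push(42);
    return values_count() == 1 ? 0 : 1;
}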

C and C++ are the same language, and its real name is C/C++.

And nobody includes Objective-C in there (which, unlike C++, is an actual superset of C), because… reasons? There are so many things in common between the two that sometimes it makes sense to write C/C++ because whatever it is you’re saying applies to both. But that’s not how I see it used most of the time.

Rust is unique in its fearless concurrency.

I guess all the people writing Pony, D, Haskell, Erlang, etc. didn’t get the memo. Is it way safer than in C++? Of course it is, but that’s a pretty low bar to clear.

The first time I wrote anything in Rust it took me 30min to write a deadlock. Not what I call fearless. “Fearless absence of data races”? Not as catchy, I guess.

You can only write kernels in C

Apparently Haiku and Redox don’t exist. Or any number of unikernel frameworks.

The endianness of your platform matters.

This is another one that’s always confused me. I never even understood why `htons` and `ntohs` existed. There’s so much confusion over this, and I’ve failed so miserably at explaining it to other people, that I just point them to someone who’s explained it better than I can.

EDIT: I didn’t mean that endianness never matter, it’s just that 99.9% of the time people think it does, it doesn’t. I had to read up on IEEE floats once, but that doesn’t mean that in the normal course of programming I need to know which bits are the mantissa.

If your Haskell code compiles, it works!

Err… no. Just… no.

Simple languages get rid of complexity.

Also known as the “sweep the dust under the rug” fallacy.

EDIT: I apparently wasn’t clear about this. I mean that the inherent complexity of the task at hand doesn’t go away and might even be harder to solve with a simple language. There are many joke languages that are extremely simple but that would be a nightmare to code anything in.

Line coverage is a measure of test quality.

I think measuring coverage is important. I think looking at pretty HTML coverage reports helps to write better software. I’m confident that chasing line coverage targets is harmful and pointless. Here’s a silly function:

int divide(int x, int y) { return x + y; }

Here’s the test suite:

TEST(divide) { divide(4, 2); }

100% test coverage. The function doesn’t even return the right answers to anything. Oops. Ok, ok, so we make sure we actually assert on something:

TEST(divide) { 
    assert(divide(4, 2) == 2); 
    assert(divide(6, 3) == 2);
}

int divide(int x, int y) { return 2; }

100% test coverage. Hmm…

TEST(divide) { 
    assert(divide(4, 2) == 2); 
    assert(divide(9, 3) == 3);
}

int divide(int x, int y) { return x / y; }

Success! Unless, of course, you pass in 0 for y…

Measure test coverage. Look at the reports. Make informed decisions on what to do next. I’ve also written about this before.

C maps closely to hardware.

If “hardware” means a PDP-11 or a tiny embedded device, sure. There’s a reason you can’t write a kernel in pure C with no asm. And then there’s all of this: SIMD, multiple CPU cores, cache hierarchies, cache lines, memory prefetching, out-of-order execution, etc. At least it has atomics now.
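
A sketch of the kind of thing C can’t see: two loops with identical language semantics and very different performance, purely because of cache lines (names and sizes invented for illustration):

#include <vector>

constexpr int kDim = 4096;

// Contiguous accesses: roughly one cache miss per cache line.
long sumRowMajor(const std::vector<int>& matrix) {
    long total = 0;
    for(int row = 0; row < kDim; ++row)
        for(int col = 0; col < kDim; ++col)
            total += matrix[row * kDim + col];
    return total;
}

// Strided accesses: potentially a cache miss on every single load.
long sumColumnMajor(const std::vector<int>& matrix) {
    long total = 0;
    for(int col = 0; col < kDim; ++col)
        for(int row = 0; row < kDim; ++row)
            total += matrix[row * kDim + col];
    return total;
}

int main() {
    const std::vector<int> matrix(kDim * kDim, 1);
    return sumRowMajor(matrix) == sumColumnMajor(matrix) ? 0 : 1;
}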

I wonder if people will still say C maps closely to hardware when they’re writing code for quantum computers. It sounds silly, but I’ve heard worse.

Also, Lisp machines never existed and FPGAs/GPUs aren’t hardware (remember Poe’s law).

No other language can do X

No matter what your particular X is, Lisp probably can.


#include C headers in D code

I’ll lead with a file:

// stdlib.dpp
#include <stdio.h>
#include <stdlib.h>

void main() {
    printf("Hello world\n".ptr);

    enum numInts = 4;
    auto ints = cast(int*) malloc(int.sizeof * numInts);
    scope(exit) free(ints);

    foreach(int i; 0 .. numInts) {
        ints[i] = i;
        printf("ints[%d]: %d ".ptr, i, ints[i]);
    }

    printf("\n".ptr);
}

The keen eye will notice that, except for the two include directives, the file is just plain D code. Let’s build and run it:

% d++ stdlib.dpp
% ./stdlib
Hello world
ints[0]: 0 ints[1]: 1 ints[2]: 2 ints[3]: 3

Wait, what just happened?

You just saw a D file directly #include two headers from the C standard library and call two functions from them, which was then compiled and run. And it worked!

Why? I mean, just… why?

I’ve argued before that #include is C++’s killer feature. Interfacing to existing C or C++ libraries is, for me, C++’s only remaining use case. You include the relevant headers, and off you go. No bindings, no nonsense, it just works. As a D fan, I envied that. So this is my attempt to eliminate that “last” (again, for me, reasonable people may disagree) use case where one would reach for C++ as the weapon of choice.

There’s a reason C++ became popular. Upgrading to it from C was a decision with essentially 0 risk. I wanted that “just works” feature for D.

How?

d++ is a compiler wrapper. By default it uses dmd to compile the D code, but that’s configurable through the --compiler option. But dmd can’t compile code with #include directives in it (the lexer won’t even like it!), so what gives?

d++ goes through a .dpp file and, upon encountering an #include directive, expands it in place, similarly to what would happen with a C or C++ compiler. Unlike with clang or gcc, however, the header file can’t just be pasted in, since the declarations are in a different language. So it uses libclang to parse the header and translates all of the declarations on the fly. This is trickier than it sounds, since C and C++ allow for things that aren’t valid in D.

There’s one piece of the usability puzzle that’s missing from that story: the preprocessor. C header files have declarations but also macros, and some of those are necessary to use the library as it was intended. One can try and emulate this with CTFE functions in D, and sometimes it works. But I don’t want “sometimes”, I want guarantees, and the only way to do that is… to use the C preprocessor.

Blasphemy, I know. But since worse is better, d++ redefines all macros in the included header file so they’re available for use by the D program. It then runs the C preprocessor on the result of expanding all the #include directives, and the final result is a regular D file that can be compiled by dmd.

What next?

Bug fixing and C++ support. I won’t be happy until this works:

#include <vector>
void main() {
    auto v = std.vector!int();
    v.push_back(42);
}

Code or it didn’t happen

I almost forgot: https://github.com/atilaneves/dpp.


On ESR’s thoughts on C and C++

ESR wrote two blog posts about moving on from C recently. As someone who has been advocating for never writing new code in C again unless absolutely necessary, I have my own thoughts on this, and issues with several things stated in the follow-up post.

“C++ as the language to replace C. Which ain’t gonna happen” – except it has. C++ hasn’t completely replaced C, but no language ever will. There’s just too much of it out there. People will be maintaining C code 50 years from now no matter how many better alternatives exist. If even gcc switched to C++…

It’s true that you’re (usually) not supposed to use raw pointers in C++, and also true that you can’t stop another developer in the same project from doing so. I’m not entirely sure how C is better in that regard, given that _all_ developers will be using raw pointers, with everything that entails. And shouldn’t code review prevent the raw pointers from crashing the party?

“if you can mentally model the hardware it’s running on, you can easily see all the way down” – this used to be true, but no longer is. On a typical server/laptop/desktop (i.e. x86-64), the CPU that executes the instructions is far too complicated to model, and doesn’t even execute the actual assembly in your binary (xor rax, rax doesn’t xor anything, it just tells the CPU a register is free). C doesn’t have the concept of cache lines, which is essential for high performance computing on any non-trivial CPU.

“One way we can tell that C++ is not sufficient is to imagine an alternate world in which it is. In that world, older C projects would routinely up-migrate to C++”. Like gcc?

“Major OS kernels would be written in C++”. I don’t know about “major”, but there’s BeOS/Haiku and IncludeOS.

“Not only has C++ failed to present enough of a value proposition to keep language designers uninterested in imagining languages like D, Go, and Rust, it has failed to displace its own ancestor.” – I think the problem with this argument is the (for me) implicit assumption that if a language is good enough, “better enough” than C, then logically programmers will switch. Unfortunately, that’s not how humans behave, and as much as some of us would like to pretend otherwise, programmers are still human.

My opinion is that C++ is strictly better than C. I’ve met and worked with many bright people who disagree. There’s nothing that C++ can do to bring them in – they just don’t value the trade-offs that C++ makes/made. Some of them might be tempted by Rust, but my anecdotal experience is that those who tend to favour C over C++ end up liking Go a lot more. I can’t stand Go myself, but the things about Go that I don’t like don’t bother its many fans.

My opinion is also that D is strictly better than C++, and yet I never expect the former to replace the latter. I’m even fuzzier on why that is than on why anybody chooses to write C in a 2017 greenfield project.

My advice to everyone is to use whatever tool you can be most productive in. Our brains are all different, we all value completely different trade-offs, so use the tool that agrees with you. Just don’t expect the rest of the world to agree with you.
