Monthly Archives: March 2019

My DConf2019 talk

I’ll be speaking at DConf this year. I haven’t decided how to structure the talk yet, but I’d like to take a page out of Dan Saks’s excellent CppCon 2016 talk, which covered both the technical and the psychological aspects of language adoption. In his case, he was addressing a C++ audience on the difficulty of convincing C programmers to adopt C++ (an admirable goal!). In mine, it’s: how do I out-C++ C++?

The genesis of my talk was my decision a few years ago to write tests for a legacy C codebase. I wanted to use D, but even as a fan of the language I chose to write the tests in C++ instead. And if even I wouldn’t pick D for that task, who would?

I still think it was the right decision to make. The first step in “importing” the legacy project into a C++ test looks like this:

extern "C" {
    #include "legacy.h"
}

There is no second step. From here one #includes a test library such as catch2 and starts writing tests for the C functions in the API.

As far as I know, every language under the sun can interface with C. There’s usually (always?) some way to do FFI, and examples invariably show how to call a function that takes 2 integers, maybe even a const char* parameter. Unfortunately, that’s not what real codebases need to do. They need to call a function that takes a pointer to a structure that’s defined in a different header, which has 3 pointers to structures that are themselves defined in different headers, which…

You get the idea. Writing the definitions by hand is error-prone, boring, and time-consuming. And even when they’re correct, nobody seems to talk about the elephant in the C API room: macros. Not once have I encountered a non-trivial C API that doesn’t require the C preprocessor to use properly. C++ users are at an advantage here: they can just use the macros. Everyone else has to get by with ad-hoc solutions, and if the function-like macro implementations change, that code will break.

I think the only way to make C++ stop being the obvious and/or only choice is for a challenger to let one write `#include "legacy.h"` and have it work just as easily as it does natively. That’s why I started the dpp project, and I’m looking forward to talking about the technical challenges I’ve encountered on the journey so far.
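Concretely, a dpp input file is just D source with #include directives, compiled via the d++ wrapper instead of dmd. Here’s a rough sketch of what a test of the legacy codebase could look like (the legacy_* functions are hypothetical stand-ins):

// tests.dpp - run through d++, which expands the #include into D declarations
#include "legacy.h"

unittest {
    // The C API - types, functions, and even macros - is used directly.
    auto ctx = legacy_init();          // hypothetical C function
    scope(exit) legacy_shutdown(ctx);  // hypothetical C function
    assert(legacy_version() >= 1);     // hypothetical C function
}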

See you at DConf!


When you can’t (and shouldn’t) unit test

I’m a unit test aficionado and, as such, have attempted to unit test things that really shouldn’t be. It’s common to get excited by a new hammer and then see nails everywhere, and unit testing can get out of hand (cough! mocks! cough!).

I still believe that the best tests are free from side effects, deterministic, and fast. What’s important to me isn’t whether this fits someone’s definition of a unit test, but that these attributes rule out slow and/or flaky tests. There is, however, another class of tests that are the bane of my existence: brittle tests. These are the ones that break when you change the production code even though your app/library still works as intended. Sometimes, insisting on unit tests means they break for no good reason.

Let’s say we’re writing a new build system. Let’s also say that said build system works like CMake does and spits out build files for other build systems such as ninja or make. Our unit test fan comes along and writes a test like this:

assert(makeOutput == "all: foo\nfoo: foo.c\n\tgcc -o foo foo.c");

I believe this to be a bad test because it checks the implementation instead of the behaviour of the production code. Consider what happens when the implementation is changed without affecting behaviour:

all: foo\nfoo: foo.c\n\tgcc -o $@ $<

The behaviour is the same as before: any time `foo.c` is changed, `foo` will get recompiled. The implementation not only isn’t the same, it’s arguably better now, and yet the assertion in the test would fail. I think we can all agree that the ROI for this test is negative if this is all it takes to break it.

The orthodox unit test approach to situations like these is to mock the service in question, except most people don’t get the memo that you should only mock code you own. We don’t control GNU make, so we shouldn’t mock it. It’s impossible to replicate make exactly in a mock/stub/etc., and it’s foolish to even try. We (mostly) don’t care about the exact string our code outputs; we care that make interprets that string with the correct semantics.

My conclusion is that I shouldn’t even try to write unit tests for code like this. Integration tests exist for a reason.
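To make that concrete, here’s a rough sketch of the kind of integration test I’d write instead. It checks the behaviour we actually care about (make rebuilds foo when foo.c changes) rather than any particular string; generateMakefile is a hypothetical stand-in for whatever our imagined build system produces:

unittest {
    import std.file : write, mkdirRecurse, tempDir, timeLastModified;
    import std.path : buildPath;
    import std.process : execute;
    import core.thread : Thread;
    import core.time : seconds;

    const dir = buildPath(tempDir, "build_test");
    mkdirRecurse(dir);
    write(buildPath(dir, "foo.c"), "int main(void) { return 0; }\n");
    write(buildPath(dir, "Makefile"), generateMakefile()); // hypothetical

    // The first build must succeed and produce the binary.
    assert(execute(["make", "-C", dir]).status == 0);
    const before = timeLastModified(buildPath(dir, "foo"));

    // Sleep past filesystem timestamp granularity, then touch the source.
    Thread.sleep(1.seconds);
    write(buildPath(dir, "foo.c"), "int main(void) { return 42; }\n");

    // Rebuilding must recompile foo, however the rules happen to be spelled.
    assert(execute(["make", "-C", dir]).status == 0);
    assert(timeLastModified(buildPath(dir, "foo")) > before);
}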


Issues DIP1000 can’t (yet) catch

D’s DIP1000 is its attempt to avoid memory safety issues in the language. It introduces the notion of scoped pointers and in principle (modulo bugs) prevents one from writing memory unsafe code.

I’ve been using it wherever I can since it was implemented, and it’s what makes it possible for me to have written a library allowing for fearless concurrency. I’ve also written a D version of C++’s std::vector that leverages it, which I thought was safe but turns out to have had a gaping hole.

I did wonder if Rust’s more complicated system would have advantages over DIP1000, and now it seems that the answer is yes. As the linked GitHub issue above shows, there are ways of creating dangling pointers in the same stack frame:

void main() @safe {
    import automem.vector;

    auto vec1 = vector(1, 2, 3);
    int[] slice1 = vec1[];
    vec1.reserve(4096);  // invalidates slice1 here
}

Fortunately this is caught by ASAN if slice1 is used later. I’ve worked around the issue for now by returning a custom range object from the vector that holds a pointer back to the container – that way it’s impossible to “invalidate iterators” in C++-speak. It probably costs more in performance due to chasing an extra pointer, but I haven’t measured to confirm the practical implications.
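A minimal sketch of that workaround, with simplified names (the real implementation lives in automem):

// A range that goes through the container on every access instead of
// holding a raw slice, so a reallocation can't leave it dangling.
struct VectorRange(V) {
    private V* vector;   // back-pointer to the container
    private size_t index;

    bool empty() const { return index >= vector.length; }
    auto ref front() { return (*vector)[index]; }
    void popFront() { ++index; }
}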

In Rust, the problem just can’t occur. This fails to compile (as it should):

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    let s = v.as_slice();
    println!("org v: {:?}\norg s: {:?}", v, s);
    v.resize(15, 42);
    println!("res: v: {:?}\norg s: {:?}", v, s);
}
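For the record, the borrow checker rejects this with an error along these lines:

error[E0502]: cannot borrow `v` as mutable because it is also borrowed as immutable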

I wonder if D can learn/borrow/steal an idea or two about only having one mutable borrow at a time. Food for thought.


The joys of translating C++’s std::function to D

I wrote a program to translate C headers to D. Translating C was actually more challenging than I expected; I even learned things about the language that I didn’t know, despite having used it for 24 years. The problems I encountered were all minor though, and to the best of my knowledge they have all been resolved (modulo bugs).

C++ is a much larger language, so the effort should be considerably greater. I didn’t expect it to be as hard as it has been, however, and in this blog post I want to talk about how “interesting” it was to translate C++11’s std::function by hand.

The first issue for most languages would be that it relies on template specialisation:

template<typename>
class function;  // doesn't have a definition anywhere

template<typename R, typename... Args>
class function<R(Args...)> { /* ... */ };

This is a way of constraining the std::function template to only accept function types. Perhaps surprisingly to some, the C++ syntax for the type of a function that takes two ints and returns a double is double(int, int). I doubt most people ever see this outside of C++ standard library templates. If it’s still confusing, think of double(int, int) as the type obtained by dereferencing a pointer of type double(*)(int, int).

D is, as far as I know, the only language other than C++ to support partial template specialisation. There are, however, two immediate problems:

  • function is a keyword in D
  • There is no D syntax for a function type

I can mitigate the name issue by calling the symbol function_ instead; however, this affects name mangling, meaning nothing will actually link. D does have pragma(mangle) to tell the compiler how to mangle symbols, but std::function is a template, so it doesn’t have any mangling until it’s instantiated. Let’s worry about that later and call the template function_ for now.
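For reference, this is what pragma(mangle) looks like on a non-template declaration (foo_ is a made-up free function here, with its Itanium mangling written out by hand):

extern(C++) {
    // Link to C++ `int foo(int)` even though the D-side name is foo_.
    pragma(mangle, "_Z3fooi") int foo_(int);
}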

The second issue can be worked around:

// C++: `using funPtr = double(*)(int, int);`
alias funPtr = double function(int, int);
// C++: `using funType = double(int, int);`
alias funType = typeof(*funPtr.init);

As in C++, the function type is the type one gets from dereferencing a function pointer. Unlike in C++, there’s currently no syntax to write it directly. First attempt:

// helper to automate getting an alias to a function type
template FunctionType(R, Args...) {
    alias ptr = R function(Args);
    alias FunctionType = typeof(*ptr.init);
}

struct function_(T);
struct function_(T: FunctionType!(R, Args), R, Args...) { }

This doesn’t work, probably due to this bug preventing the helper template FunctionType from working as intended. Let’s forget the template constraint:

extern(C++, "std") {
    struct function_(T) {
        import std.traits: ReturnType, Parameters;
        alias R = ReturnType!T;
        alias Args = Parameters!T;
        // In C++: `R operator()(Args) const`;
        R opCall(Args) const;
    }
}

void main() {
    alias funPtr = double function(double);
    alias funType = typeof(*funPtr.init);
    function_!funType f;
    double result = f(3.3);
}

This compiles but it doesn’t link: there’s an undefined reference to std::function_::operator()(double) const. Looking at the symbols in the object files with nm, we see that g++ emitted _ZNKSt8functionIFddEEclEd but dmd is trying to link to _ZNKSt9function_IFddEEclEd. As expected, these are name mangling issues caused by renaming the symbol.

We could manually add a pragma(mangle) to tell D how to mangle the operator for the double(double) template instantiation, but that solution doesn’t scale. CTFE (constexpr if you speak C++ but not D) to the rescue!

// snip - as before
pragma(mangle, opCall.mangleof.fixMangling)
R opCall(Args) const;

// (elsewhere at file scope)
string fixMangling(string str) {
    import std.array: replace;
    return str.replace("9function_", "8function");
}

What’s going on here is an abuse of D’s compile-time power. The .mangleof property is a compile-time string that tells us how a symbol is going to be mangled. We pass this string to the fixMangling function, which is evaluated at compile time, and feed the result back to the compiler, telling it what symbol name to actually use. Notice that function_ is still a template, meaning .mangleof has a different value for each instantiation. It’s… almost magical. Hacky, but magical.

The final code compiles and links. Actually creating a valid std::function<double(double)> from D code is left as an exercise to the reader.
