Writing unit tests is easy in D: the built-in unittest block lets you start writing tests and be productive with no special setup.
Unfortunately, the assert expression does not help you write expressive asserts, and when one fails it is hard to tell why. The fluent-asserts library lets you specify the expected outcome of a TDD- or BDD-style test more naturally.
- Add the DUB dependency: https://code.dlang.org/packages/fluent-asserts

```sh
$ dub add fluent-asserts
```

- Use it:

```d
unittest {
    true.should.equal(false).because("this is a failing assert");
}

unittest {
    Assert.equal(true, false, "this is a failing assert");
}
```

- Run the tests:

```sh
➜ dub test --compiler=ldc2
```

The library provides the `expect` and `should` templates and the `Assert` struct.
`expect` is the main assert function exposed by the library. It takes as parameter the value under test. You can use any assert operation provided by the base library, or any operation registered by a third-party library.
```d
Expect expect(T)(lazy T testedValue, ...);
Expect expect(void delegate() callable, ...);
...
```
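The second overload accepts a delegate, which lets you assert over a block of code; the exception operations are used this way. A minimal sketch, using the built-in `throwAnyException` operation:

```d
unittest {
    // the delegate overload runs the block and inspects its outcome
    expect({
        throw new Exception("boom");
    }).to.throwAnyException;
}
```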
A typical value assertion looks like this:

```d
expect(testedValue).to.equal(42);
```

In addition, the library provides the `not` and `because` modifiers, which let you refine your asserts.
`not` negates the assert condition:

```d
expect(testedValue).to.not.equal(42);
```

`because` allows you to add a custom message:

```d
expect(true).to.equal(false).because("of test reasons");
/// will output this message: Because of test reasons, true should equal `false`.
```

`because` also supports format strings for dynamic messages:

```d
foreach (i; 0..100) {
    result.should.equal(expected).because("at iteration %s", i);
}
```

`withContext` attaches key-value debugging data:

```d
result.should.equal(expected)
    .withContext("userId", 42)
    .withContext("input", testInput);
/// On failure, displays: CONTEXT: userId = 42, input = ...
```

`should` is designed to be used in combination with Uniform Function Call Syntax (UFCS) and is an alias for `expect`:

```d
auto should(T)(lazy T testData, ...);
```

So the following statements are equivalent:

```d
testedValue.should.equal(42);
expect(testedValue).to.equal(42);
```

In addition, you can use the `not` and `because` modifiers with `should`. `not` negates the assert condition:

```d
testedValue.should.not.equal(42);
```

and `because` adds a custom message:

```d
true.should.equal(false).because("of test reasons");
```

`Assert` is a wrapper over the `expect` function that lets you write the same asserts with a different syntax.
For example, the following lines are equivalent:

```d
expect(testedValue).to.equal(42);
Assert.equal(testedValue, 42);
```

All the asserts available with the `expect` syntax are also available with `Assert`. If you want to negate a check, just add `not` before the assert name:

```d
Assert.notEqual(testedValue, 42);
```

The `recordEvaluation` function allows you to capture the result of an assertion without throwing an exception on failure. This is useful for testing assertion behavior itself, or for inspecting the evaluation result programmatically:
```d
import fluentasserts.core.lifecycle : recordEvaluation;

unittest {
    auto evaluation = ({
        expect(5).to.equal(10);
    }).recordEvaluation;

    // Inspect the evaluation result
    assert(evaluation.result.expected == "10");
    assert(evaluation.result.actual == "5");
}
```

The function:
- Takes a delegate containing the assertion to execute
- Temporarily disables failure handling so the test doesn't abort
- Returns the `Evaluation` struct containing the result
`Evaluation.result` provides access to:

- `expected` - the expected value as a string
- `actual` - the actual value as a string
- `negated` - whether the assertion was negated with `.not`
- `missing` - array of missing elements (for collection comparisons)
- `extra` - array of extra elements (for collection comparisons)
This is particularly useful when writing tests for custom assertion operations or when you need to verify that assertions produce the correct error messages.
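Because the whole result is surfaced, you can also inspect the collection diagnostics. A short sketch, assuming the `containOnly` operation populates `missing` and `extra` as the list above describes:

```d
import fluentasserts.core.lifecycle : recordEvaluation;

unittest {
    auto evaluation = ({
        [1, 2, 3].should.containOnly([2, 3, 4]);
    }).recordEvaluation;

    // 4 was expected but absent; 1 was present but unexpected
    assert(evaluation.result.missing.length > 0);
    assert(evaluation.result.extra.length > 0);
}
```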
fluent-asserts tracks assertion counts for monitoring test behavior:
```d
import fluentasserts.core.lifecycle : Lifecycle;
import std.stdio : writeln;
// Run some assertions
expect(1).to.equal(1);
expect("hello").to.contain("ell");
// Access statistics
auto stats = Lifecycle.instance.statistics;
writeln("Total: ", stats.totalAssertions);
writeln("Passed: ", stats.passedAssertions);
writeln("Failed: ", stats.failedAssertions);
// Reset statistics
Lifecycle.instance.resetStatistics();
```

The `AssertionStatistics` struct contains:
- `totalAssertions` - Total number of assertions executed
- `passedAssertions` - Number of assertions that passed
- `failedAssertions` - Number of assertions that failed
- `reset()` - Resets all counters to zero
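Since the counters are process-wide, one way to use them is to assert over a known block of checks. A minimal sketch using only the fields documented above:

```d
import fluent.asserts;
import fluentasserts.core.lifecycle : Lifecycle;

unittest {
    Lifecycle.instance.resetStatistics();

    expect(1).to.equal(1);
    expect("hello").to.contain("ell");

    auto stats = Lifecycle.instance.statistics;
    assert(stats.totalAssertions == 2);
    assert(stats.passedAssertions == 2);
    assert(stats.failedAssertions == 0);
}
```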
By default, fluent-asserts behaves like D's built-in assert: assertions are enabled in debug builds and disabled (become no-ops) in release builds. This allows you to use fluent-asserts as a replacement for assert in your production code without any runtime overhead in release builds.
Default behavior:
- Debug build: assertions enabled
- Release build (`dub build -b release` or the `-release` flag): assertions disabled (no-op)
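This means fluent asserts can guard production code the same way assert does. A hypothetical example (the function and its check are illustrative, not part of the library):

```d
import fluent.asserts;

// Compiled away in release builds, just like a built-in assert.
double divide(double a, double b) {
    expect(b).to.not.equal(0).because("division by zero is undefined");
    return a / b;
}
```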
Force enable in release builds:
`dub.sdl`:

```sdl
versions "FluentAssertsDebug"
```

`dub.json`:

```json
{
    "versions": ["FluentAssertsDebug"]
}
```

Force disable in all builds:
`dub.sdl`:

```sdl
versions "D_Disable_FluentAsserts"
```

`dub.json`:

```json
{
    "versions": ["D_Disable_FluentAsserts"]
}
```

Check at compile-time:
```d
import fluent.asserts;

static if (fluentAssertsEnabled) {
    // assertions are active
} else {
    // assertions are disabled (release build)
}
```

During unittest builds, the library automatically installs a custom handler for D's built-in assert statements. This provides fluent-asserts-style error messages even when you use a standard assert:

```d
unittest {
    assert(1 == 2, "math is broken");
    // Output includes ACTUAL/EXPECTED formatting and source location
}
```

The handler is only active in version(unittest) builds, so it does not affect release builds. It is installed using pragma(crt_constructor), which runs before druntime initialization; this avoids the cyclic module dependency issues that static this() would cause.
If you need to temporarily disable this handler during tests:
```d
unittest {
    import core.exception;

    // Save and restore the handler
    auto savedHandler = core.exception.assertHandler;
    scope(exit) core.exception.assertHandler = savedHandler;

    // Disable the fluent handler for the rest of this test
    core.exception.assertHandler = null;
}
```

The library provides assertions for checking memory allocations:
```d
// Check GC allocations
({ auto arr = new int[100]; }).should.allocateGCMemory();
({ int x = 5; }).should.not.allocateGCMemory();

// Check non-GC allocations (malloc, etc.)
({
    import core.stdc.stdlib : malloc, free;
    auto p = malloc(1024);
    free(p);
}).should.allocateNonGCMemory();
```

Note: non-GC memory measurement uses process-wide metrics (mallinfo on Linux, phys_footprint on macOS). It is inherently unreliable during parallel test execution because allocations from other threads are counted too. For accurate non-GC memory testing, run the tests single-threaded with `dub test -- -j1`.
Even though the library ships with an extensive set of operations, sometimes you need a new one to test your code. Operations are functions that receive an `Evaluation` and modify it to indicate success or failure; on failure, the operation sets the `expected` and `actual` fields on `evaluation.result`. You can check any of the built-in operations for a reference implementation.
```d
void customOperation(ref Evaluation evaluation) @safe nothrow {
    // Perform your check; replace this placeholder with your own logic
    bool success = false;

    if (!success) {
        evaluation.result.expected = "expected value description";
        evaluation.result.actual = "actual value description";
    }
}
```

Once the operation is ready to use, it has to be registered with the global registry:

```d
static this() {
    // bind the type to different matchers
    Registry.instance.register!(SysTime, SysTime)("between", &customOperation);
    Registry.instance.register!(SysTime, SysTime)("within", &customOperation);

    // or use * to match any type
    Registry.instance.register("*", "*", "customOperation", &customOperation);
}
```
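Assuming registered names are dispatched through the fluent API the same way the built-in operations are, the new operation then becomes reachable with the usual syntax. A hypothetical usage of the "between" binding registered above:

```d
import core.time : hours;
import std.datetime.systime : Clock;

unittest {
    auto now = Clock.currTime;

    // hypothetical: resolves to the customOperation registered for SysTime
    expect(now).to.be.between(now - 1.hours, now + 1.hours);
}
```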
In order to set up an Evaluation, the actual and expected values need to be converted to strings. Most of the time the default serializer will do a great job, but sometimes you might want to add a custom serializer for your types:

```d
static this() {
    HeapSerializerRegistry.instance.register(&jsonToString);
}

string jsonToString(Json value) {
    /// add your custom serialization for Json values here,
    /// e.g. return value.toString();
    return value.toString();
}
```

Areas for potential improvement:
- Reduce Evaluator duplication - `Evaluator`, `TrustedEvaluator`, and `ThrowableEvaluator` share similar code that could be consolidated with templates or mixins.
- Simplify the Registry - The type generalization logic could benefit from clearer naming or documentation.
- Remove ddmp dependency - For simpler diffs or no diffs, removing the ddmp dependency would simplify the build.
- Consistent error messages - Standardize error message patterns across operations for more predictable output.
- Make source extraction optional - Source code tokenization runs on every assertion; making it opt-in could improve performance.
- GC allocation optimization - Several hot paths use string/array concatenation that could be optimized with `Appender` or pre-allocation.
MIT. See LICENSE for details.
