Reminds me of the awesome bug report I saw once: ‘Everything is broken. Steps to reproduce: do anything. Expected result: it should work’. –Felipe Knorr Kuhn


⎕IO ← 0
]box on
]rows on
Was ON

APL programmers don’t need to test their code, as they always write correct code. Next chapter…

Ahem. Ninja master @ngn offered up the following unit testing framework on APL Orchard:

 ⍝ usage: expectedoutput ≡ f input.  prints 1 for ok and 0 for failure

Whilst @ngn was (at least partially) joking, he’s got a point.

There is no official testing framework blessed by Dyalog. However, APL programmers do of course test their code. We can learn a lot from Dyalog’s published source code. Here, for example, is the test suite for the Link package. This is a fairly hefty namespace running to 2k+ LOC. If the code looks unfamiliar, it’s because it’s written in the tradfn style.

A ‘framework’?

No. But let’s look at how unit testing is handled in other languages. In Python there are (too) many testing frameworks to choose from, all different. The original unit testing framework, included in Python by default, is the imaginatively named unittest module. unittest talks about test suites comprising related test cases, controlled by a runner, with context managed by fixtures. There is no reason why we couldn’t have something similar in APL if we wanted to build such a thing. But we probably want to find a more APL-y way of doing that.

Here’s what I’d want from my testing system:

  1. Ability to automatically run all tests, and get a report back on which tests succeeded.

  2. Ability to run a single test.

  3. Easy way to create more tests and have them be picked up by the test runner.
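To make the wish-list concrete, here is what those three requirements amount to in a few lines of Python. This is just an illustrative sketch, not any existing framework; `run_tests` and the sample test names are invented.

```python
# Sketch of the three requirements above. All names here are invented
# for illustration; this is not a real framework.

def test_upper():
    return 'foo'.upper() == 'FOO'

def test_isupper():
    return 'FOO'.isupper() and not 'Foo'.isupper()

def run_tests(namespace):
    """Run every callable whose name starts with 'test_'; report results."""
    return {name: '[OK]' if fn() else '[FAIL]'
            for name, fn in sorted(namespace.items())
            if name.startswith('test_') and callable(fn)}

print(run_tests({'test_upper': test_upper, 'test_isupper': test_isupper}))
# → {'test_isupper': '[OK]', 'test_upper': '[OK]'}
print(test_upper())   # running a single test is just calling it → True
```

Discovery by naming convention is the key trick: requirement 3 falls out for free, as any new `test_` function is picked up automatically.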

Here’s the first example from the unittest docs:

import unittest

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
To add further tests, we just add methods starting with test_ to the TestStringMethods class, and the unittest.main() method will run them for us. So for our first attempt, let’s try to replicate that functionality in a Dyalog namespace. Here’s our test runner, which simply executes all functions where the name starts with test_ and produces a little report.

:Namespace unittest
    ⎕IO←0
    run←{
        tests←'test_.+'⎕S'&'⎕NL ¯3
        0=≢tests:'no tests found'
        ↑{⍺,('.'/⍨30-≢⍺),⍵⊃'[FAIL]' '[OK]'}⌿↑tests(⍎¨tests,¨⊂' ⍬')
    }
:EndNamespace

unittest.run ⍬
no tests found

Let’s make some functions that we can unit test. We can take the ones from the Python example above.

upper ← 1∘⎕C        ⍝ Convert to upper case
isupper ← {⍵≡1⎕C ⍵} ⍝ Already all upper case?
split ← ≠⊆⊢         ⍝ Won't complain about a non-string separator, but hey, why should it?

Now we can write the tests themselves. Note that in this case the functions we’re testing are defined outside the unittest namespace, so we need to prefix the calls with #.. Note how we’re using the test framework proposed by @ngn, outlined above :)

unittest.test_upper ← {'FOO'≡#.upper 'foo'}
unittest.test_isupper ← {(#.isupper 'FOO')∧~#.isupper 'Foo'}
unittest.run ⍬
test_isupper..................[OK]
test_upper....................[OK]

Let’s add a test that fails.

unittest.test_isupper2 ← {(#.isupper 'FOO')∧~#.isupper 'BAR'} ⍝ Failing test
unittest.run ⍬
test_isupper..................[OK]
test_isupper2.................[FAIL]
test_upper....................[OK]
unittest.test_split ← {'hello' 'world'≡' '#.split 'hello world'}
unittest.run ⍬
test_isupper..................[OK]
test_isupper2.................[FAIL]
test_split....................[OK]
test_upper....................[OK]

We can of course run single tests trivially:

unittest.test_isupper ⍬
1

Nice, simple, and surprisingly useful.

Data-driven testing

Data-driven testing, also known as parametrized testing, is where you provide what is essentially a table of inputs and expected outputs and let your testing framework run them all. This approach isn’t supported out of the box in Python’s basic unittest module, but other, more fully-featured Python frameworks, such as pytest, do support it.

Here’s how that can look:

import pytest

@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Here the decorator @pytest.mark.parametrize defines the test function arguments, and then provides a list of tuples. The test runner will then call the test function with the arguments as given by each tuple in turn. Let’s see if we can achieve something similar in APL.

This sounds like a job for an operator: we pass a function to the test runner, and a vector of 3-“tuples” representing left argument, right argument and expected outcome.
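Sketched in Python first, the idea behind such an operator is just a higher-order function: take a dyadic function and a list of (left argument, right argument, expected) triples, and return one boolean per case. The names `param_test` and `split` below are invented for illustration; this is not pytest.

```python
# A parametrized test runner as a higher-order function: one boolean
# per (left, right, expected) triple. Invented names, for illustration.
def param_test(f, cases):
    return [f(left, right) == expected for left, right, expected in cases]

def split(sep, s):
    return s.split(sep)

print(param_test(split, [
    (' ', 'hello world', ['hello', 'world']),
    (',', 'hello,world', ['hello', 'world']),
]))
# → [True, True]
```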

_test ← {(⊢/↑⍵)≡⍤0 0⊢⍺⍺/¯1↓⍤1⊢↑⍵}
split _test (' ' ('hello world') ('hello' 'world')) (',' ('hello,world') ('hello' 'world'))
1 1

What we need now is a convenient way to specify such parameter sets so they can be picked up by the test runner. We can do this by defining variables named fn_testdata, and have that be picked up by our unit testing namespace.
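In Python terms, the convention works like this sketch: scan a namespace for variables named `<fn>_testdata`, pair each with the function `<fn>`, and tally passes against total. All the Python names here are invented; only the naming convention mirrors the text above.

```python
# Sketch: discover <fn>_testdata parameter sets by naming convention and
# run each against the matching function <fn>. Invented names throughout.
def param_test(f, cases):
    return [f(left, right) == expected for left, right, expected in cases]

def split(sep, s):
    return s.split(sep)

split_testdata = [
    (' ', 'hello world', ['hello', 'world']),
    (',', 'hello,world', ['hello', 'world']),
]

def run_datatests(namespace):
    report = {}
    for name, cases in sorted(namespace.items()):
        if name.endswith('_testdata'):
            fn_name = name[:-len('_testdata')]
            fn = namespace.get(fn_name)
            if callable(fn):                      # corresponding function defined?
                results = param_test(fn, cases)
                report[fn_name] = f'[{sum(results)}/{len(results)}]'
    return report

print(run_datatests({'split': split, 'split_testdata': split_testdata}))
# → {'split': '[2/2]'}
```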

split_testdata ← (' ' ('hello world') ('hello' 'world')) (',' ('hello,world') ('hello' 'world'))
split _test split_testdata
1 1
:Namespace datatest
    ⎕IO←0
    _test←{(⊢/↑⍵)≡⍤0 0⊢⍺⍺/¯1↓⍤1⊢↑⍵}
    run←{ ⍝ ⍵ -- ns containing functions to be tested
        params←'_'(≠⊆⊢)¨'[^_]+_testdata'⎕S'&'⎕NL ¯2.1    ⍝ https://aplcart.info/?q=%E2%8E%95NL#
        0=≢params:'no test parameter sets found'
        funs←⊃¨¯1↓¨params                                ⍝ Corresponding functions defined?
        testable←funs/⍨funs∊⍵.⎕NL ¯3
        result←⍵∘{(⍺.⍎⍵)_test ⍎⍵,'_testdata'}¨testable   ⍝ Run the tests
        ↑{⍺,('.'/⍨30-≢⍺),'[',(⍕+/⍵),'/',(⍕≢⍵),']'}⌿↑testable result ⍝ Format
    }
:EndNamespace

datatest.split_testdata ← (' ' ('hello world') ('hello' 'world')) (',' ('hello,world') ('hello' 'world')) ⍝ dyadic function
datatest.isupper_testdata ← (⍬ ('FOO') 1) (⍬ ('Foo') 0) (⍬ (,'F') 1) ⍝ monadic function: ⍬ as dummy left argument
datatest.run ⎕THIS
isupper.......................[3/3]
split.........................[2/2]

So there we have it. Of course, in a real project you may want a slightly more fleshed-out test framework, capable of testing for exceptions and the like.
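For instance, an exception check can be as small as this Python sketch, comparable in spirit to unittest's assertRaises; the `raises` helper is an invented name, not a library function.

```python
# Sketch: an exception-testing helper. 'raises' is invented for
# illustration; cf. unittest's assertRaises.
def raises(exc_type, fn, *args, **kwargs):
    """True if fn(*args, **kwargs) raises exc_type, else False."""
    try:
        fn(*args, **kwargs)
    except exc_type:
        return True
    except Exception:
        return False
    return False

def test_split_rejects_bad_separator():
    # str.split raises TypeError when the separator is not a string
    return raises(TypeError, 'hello world'.split, 2)

print(test_split_rejects_bad_separator())   # → True
```

In APL the equivalent test body would reach for error guards or a :Trap block to turn a specific error number into a pass.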