Unit Testing

ForML provides a custom testing framework for user-defined operators. It is built on top of the standard unittest library with an API specialized to cover all the standard operator outcomes while minimizing any boilerplate. Internally, it uses the Virtual launcher to carry out the particular test scenarios as genuine ForML workflows wrapping the tested operator.

The tests need to be placed under the tests/ folder of your project (note that unittest requires all test files, as well as the tests/ directory itself, to be Python modules, hence the directory needs to contain the appropriate __init__.py files).
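For illustration, a minimal layout with a single test module might look as follows (test_mytransformer.py being just a placeholder name):

myproject/
├── ...
└── tests/
    ├── __init__.py
    └── test_mytransformer.py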

The testing framework is available after importing the forml.testing module:

from forml import testing

See also

See the tutorials for real-world unit test implementations.

Operator Test Case Outcome Assertions

The testing framework allows asserting the following possible outcomes of the particular operator under test:

INIT_RAISES

is a scenario where the operator raises an exception right upon initialization. This can be used to assert the expected (hyper)parameter validation.

Synopsis:

mytest1 = testing.Case(arg1='foo').raises(ValueError, 'invalid value of arg1')

PLAINAPPLY_RAISES

asserts an exception to be raised when executing the apply-mode of an operator without any previous train-mode execution.

Synopsis:

mytest2 = testing.Case(arg1='bar').apply('foo').raises(RuntimeError, 'Not trained')

PLAINAPPLY_RETURNS

asserts the output value of a successful apply-mode execution, again without any previous train-mode.

Synopsis:

mytest3 = testing.Case(arg1='bar').apply('baz').returns('foo')

STATETRAIN_RAISES

checks that the train-mode of the given operator fails with the expected exception.

Synopsis:

mytest4 = testing.Case(arg1='bar').train('baz').raises(ValueError, 'wrong baz')

STATETRAIN_RETURNS

compares the output value of the successfully completed train-mode with the expected value.

Synopsis:

mytest5 = testing.Case(arg1='bar').train('foo').returns('baz')

STATEAPPLY_RAISES

asserts an exception to be raised from the apply-mode when executed after a previous successful train-mode.

Synopsis:

mytest6 = testing.Case(arg1='bar').train('foo').apply('baz').raises(ValueError, 'wrong baz')

STATEAPPLY_RETURNS

is a scenario where the apply-mode, executed after a previous successful train-mode, returns the expected value.

Synopsis:

mytest7 = testing.Case(arg1='bar').train('foo').apply('bar').returns('baz')

Operator Test Suite

All the test case assertions of the same operator are defined within an operator test suite, which is created simply as follows:

class TestMyTransformer(testing.operator(mymodule.MyTransformer)):
    """MyTransformer unit tests."""
    # Test scenarios
    invalid_params = testing.Case('foo').raises(TypeError, 'takes 1 positional argument but 2 were given')
    not_trained = testing.Case().apply('bar').raises(ValueError, "Must be trained ahead")
    valid_transformation = testing.Case().train('foo').apply('bar').returns('baz')

You create the suite simply by inheriting your Test... class from the testing.operator() utility wrapping the operator under test. You then put the operator scenarios (the test case outcome assertions) right into the body of the test suite class.

Running Your Tests

All the suites are transparently expanded into full-blown unittest.TestCase definitions, so from here on you can treat them as normal unit tests, which means you can simply run them using the usual:

$ forml project test
running test
TestNaNImputer
Test of Invalid Params ... ok
TestNaNImputer
Test of Not Trained ... ok
TestNaNImputer
Test of Valid Imputation ... ok
TestTitleParser
Test of Invalid Params ... ok
TestTitleParser
Test of Invalid Source ... ok
TestTitleParser
Test of Valid Parsing ... ok
----------------------------------------------------------------------
Ran 6 tests in 0.591s

OK
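
Since every suite gets expanded into an ordinary unittest.TestCase, the tests should also be discoverable by the standard Python tooling; for instance, running the stock unittest discovery from the project root (an alternative sketch, not the canonical ForML workflow):

$ python -m unittest discover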

Custom Value Matchers

All the .returns() assertions are implemented using unittest.TestCase.assertEqual(), which compares the expected and actual values checking for object.__eq__() equality. If this is not a valid comparison for the particular data types used by the operator, you have to supply a custom matcher as the second parameter to the assertion. The matcher needs to be a callable with the signature typing.Callable[[typing.Any, typing.Any], bool], where the first argument is the expected and the second the actual value.

This can be useful, for example, for pandas.DataFrame, which does not support a simple boolean equality check. The following example uses a custom matcher for comparing the memory consumption of the tested datasets:

import sys


def size_equals(expected: object, actual: object) -> bool:
    """Custom object comparison logic based on their size."""
    return sys.getsizeof(actual) == sys.getsizeof(expected)


class TestFooBar(testing.operator(FooBar)):
    """Unit testing the FooBar operator."""
    # Dataset fixtures
    INPUT = ...
    EXPECTED = ...

    # Test scenarios
    valid_parsing = testing.Case().apply(INPUT).returns(EXPECTED, size_equals)

For convenience, there are a number of explicit matchers provided as part of the forml.testing package:

forml.testing.pandas_equals(expected: NDFrame, actual: NDFrame) -> bool

Compare Pandas DataFrames for equality.

Parameters:
    expected: NDFrame
        Instance of the expected data representation.
    actual: NDFrame
        Test case produced data.

Returns:
    True if the data is equal.
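
Such a matcher is plugged into the .returns() assertion the same way as the custom size_equals above; a minimal sketch reusing the illustrative FooBar operator and its fixtures from the previous example (assuming they carry pandas data):

class TestFooBar(testing.operator(FooBar)):
    """Unit testing the FooBar operator."""
    # Dataset fixtures (assumed to be pandas objects)
    INPUT = ...
    EXPECTED = ...

    # Test scenarios using the provided pandas matcher
    valid_parsing = testing.Case().apply(INPUT).returns(EXPECTED, testing.pandas_equals)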