Jamoma API  0.6.0.a19
Unit Testing
Author: Jamoma, Timothy Place & Nathan Wolek

Running Tests in the IDE

Jamoma's Build & Test system delivers a unit testing solution that allows you to "Test Early. Test Often. Test Automatically" (Hunt & Thomas 1999). Whenever tests exist within a given library, the corresponding makefiles have been configured to run these tests during each build attempt. Therefore, if any part of a test fails, the build attempt will stop and report an error. In Xcode 5, an error looks something like this:

[Image: TTBufferXcodeAssertionFail.png, a failing test assertion reported as a build error in Xcode 5]

The biggest benefit of this system is the ability to receive immediate feedback when something breaks. If a test exists and your changes to the code cause that test to fail, you get feedback from the IDE as soon as you try to build. This makes it much easier to code using a test-driven development or red-green-refactor approach.

Running Tests in Ruby

The Ruby implementation provides another easy way to run unit tests. In the Jamoma/Core/DSP/Tests folder there is a simple example (gain.test.rb), which looks like this:

#!/usr/bin/env ruby -wKU
# encoding: utf-8

require 'Jamoma'                                # load the Jamoma Foundation

environment = TTObject.new "environment"
environment.set "benchmarking", 1               # enable benchmarking of process methods

o = TTObject.new "gain"                         # instantiate the TTGain class
o.send "test"                                   # run its unit tests

err, cpu = o.send "getProcessingBenchmark", 1   # query the time spent in the audio process method

puts
puts "time spent calculating audio process method: #{cpu} µs"
puts

The require statement loads the Jamoma Foundation. The next two lines obtain the environment object and enable benchmarking, so that processing time can be reported later. The line o = TTObject.new "gain" instantiates the TTGain class. Once we have an instance, we send it the test message to run the test, then query getProcessingBenchmark for the time spent in the audio processing method. You can run this Ruby script from the terminal by typing 'ruby gain.test.rb', and it will quickly return the results to you.

Writing Tests in C++

Any object inheriting from TTDataObjectBase will inherit a 'test' message. TTDataObjectBase defines a virtual default test method. This test will be run unless you specify your own test method. The default test method simply reports a failure because you haven't written a custom test. To define your test method, you can use the following prototype (which is the same as for any message with arguments in Jamoma):

/** Default (empty) template for unit tests.
    @param returnedTestInfo    Returned information on the outcome of the unit test(s)
    @return                    #kTTErrNone if tests exist and they all pass, otherwise a #TTErr error code depending on the outcome of the tests.
*/
virtual TTErr test(TTValue& /*returnedTestInfo*/)
{
    logMessage("No Tests have been written for this class -- please supply a test method.\n");
    return kTTErrGeneric;
}
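For example, a class that provides its own tests declares the override in its header. The sketch below is illustrative only: the class name TTMyClass is hypothetical, while the TTDataObjectBase base class and the test prototype come from the documentation above.

class TTMyClass : public TTDataObjectBase {
public:
    // ... constructor, attributes, messages, process methods ...

    /** Unit test for this class. */
    virtual TTErr test(TTValue& returnedTestInfo);
};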

You can then implement a test with code such as the block that follows. A test makes 'assertions' that certain conditions are true. If any of these conditions is not true, the failure is logged to the console and the test fails.

TTErr TTGain::test(TTValue& returnedTestInfo)
{
    // preliminary setup
    int errorCount = 0;
    int testAssertionCount = 0;

    TTTestLog("Testing Parameter value conversions");

    // N test assertions

    // Test 1: trivial value conversion
    this->set("midiGain", 100);
    TTTestAssertion("midi gain of 100 == linear gain of 1.",
                    TTTestFloatEquivalence(this->mGain, 1.0),
                    testAssertionCount,
                    errorCount);

    // Test 2: trivial value conversion
    this->set("midiGain", 99);
    TTTestAssertion("midi gain of 99 != linear gain of 1.",
                    TTTestFloatEquivalence(this->mGain, 1.0, false),
                    testAssertionCount,
                    errorCount);

    // Test 3: audio test
    // set the input samples to 1, apply -6 dB gain,
    // then check that the output samples are properly scaled

    // create 1-channel audio signals with a vector size of 64
    TTAudio input(1);
    TTAudio output(1);

    input.allocWithVectorSize(64);
    output.allocWithVectorSize(64);

    for (int i = 0; i < 64; i++)
        input.rawSamples()[0][i] = 1.0;

    this->set("gain", -6.0);
    this->process(input, output);

    TTSampleValuePtr samples = output.rawSamples()[0];
    int validSampleCount = 0;

    for (int i = 0; i < 64; i++)
        validSampleCount += TTTestFloatEquivalence(0.5011872336272722, samples[i]);

    TTTestAssertion("accumulated audio error at gain = -6 dB",
                    validSampleCount == 64,
                    testAssertionCount,
                    errorCount);
    TTTestLog("Number of bad samples: %i", 64 - validSampleCount);

    // Wrap up the test results to pass back to whoever called this test
    return TTTestFinish(testAssertionCount, errorCount, returnedTestInfo);
}
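The same test can also be exercised directly from C++, mirroring the Ruby script above. The following is a minimal sketch, assuming the TTObject wrapper class and the TTDSPInit() call provided by the Foundation and DSP libraries; the exact include and initialization details may differ in your checkout.

#include "TTDSP.h"    // umbrella header for the DSP library (assumption)

int main()
{
    TTDSPInit();                          // register the DSP classes, including "gain"

    TTObject gain("gain");                // instantiate TTGain by its registered class name
    TTValue  input, testInfo;

    TTErr err = gain.send("test", input, testInfo);   // run the unit tests defined above
    return (err == kTTErrNone) ? 0 : -1;
}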

Further Considerations

This section anticipates common developer questions about unit testing and provides brief answers. If your question is not answered below, please do not hesitate to contact the authors.

If an assertion fails, what happens to the build attempt? Be aware that a project will not build if it contains a test assertion that fails. You should either solve the problem so that the assertion passes, or comment out the assertion and log an issue in our GitHub repository.

I am noticing an odd behavior in a specific class. How can I test it? If no test currently exists, create a new test that demonstrates the problem and fails on each build, as in the sketch below. You can then set out to work on a fix and know immediately when you have a solution, because the project will once again build successfully.
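For example, a regression assertion for the suspect behavior might look like the following sketch, added to the class's test method. The attribute name, member variable, and expected value are placeholders; the TTTestAssertion and TTTestFloatEquivalence calls are the same ones used in the TTGain example above.

// Sketch: assert the behavior you expect; this fails until the bug is fixed.
this->set("someAttribute", 42.0);
TTTestAssertion("someAttribute stores the value it was given",
                TTTestFloatEquivalence(this->mSomeAttribute, 42.0),
                testAssertionCount,
                errorCount);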

I am adding a class to a project. How can I make sure it will leverage the Build & Test system? The most important requirement is that you design tests at the same time you design your class. You must then make sure that your class includes the designated tag for that project, which can be easily found in the test.cpp file of the relevant directory.

I am creating a new library or extension. How can I make sure it will leverage the Build & Test system? This is more involved than simply adding a class, but the most important requirement is to establish a unique tag for the library and a customized test.cpp file. Consult one of the authors above for further advice on how to go about creating a new library that uses Build & Test.