FIXME: The goals of the test suite and an overview of it
The most common way of running tests from the test suite is to use the top level make target verify, which installs a test Pike in the build directory and uses it to run the entire test suite. This and the other test-related make targets are defined in the top level make file.
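In its simplest form, the entire suite is run with:

make verify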
It is possible to alter the flags given to the test program by using the TESTARGS make variable.
make verify TESTARGS="-a -v4 -l2 -t1 -c1 -m"
The actual testing is done by the program bin/test_pike.pike, which can be run as a stand-alone application to test any Pike binary with one or more test suites. The Pike binary that executes the test program is the one being tested, and it is tested with the test suites given as arguments to the test program.
/home/visbur/Pike/7.2/bin/pike bin/test_pike.pike testsuite1 testsuite2
The individual testsuite files are generated from testsuite.in files scattered about the lib/ and src/ trees. When you run the make targets described above, they are generated for you automagically, but to do it by hand (i.e. if you have added a test to one of them), cd to the top directory and run
make testsuites
The testsuite files then appear in build/arch, in locations corresponding to where they live in the Pike tree, except those from the lib/ hierarchy, which end up in build/arch/tlib.
The test_pike.pike program accepts a number of command line arguments. The verbosity of its output is set with the -v flag, which takes one of the following levels:
Level | Description
0 | No extra printouts.
1 | Some additional information is printed after every finished block of tests.
2 | Some extra information about tests that will or will not be run.
3 | Every test is printed out.
4 | The time spent in each individual test is printed out.
10 | The actual Pike code compiled, including wrappers, is printed. Note that the code will be quoted.
$ pike bin/test_pike.pike -v1 testsuite
Doing tests in testsuite (1 tests)
Total tests: 1 (0 tests skipped)

$ pike bin/test_pike.pike -v2 testsuite
Doing tests in testsuite (1 tests)
Doing test 1 (1 total) at /home/nilsson/Pike/7.3/lib/modules/ADT.pmod/testsuite.in:9
Failed tests: 0.
Total tests: 1 (0 tests skipped)

$ pike bin/test_pike.pike -v4 testsuite
Doing tests in testsuite (1 tests)
Doing test 1 (1 total) at /home/nilsson/Pike/7.3/lib/modules/ADT.pmod/testsuite.in:9
0: mixed a() {
1: object s = ADT.Stack();
2: s->push(1);
3: return s->pop();
4: ; }
5: mixed b() { return 1; }
Time in a(): 0.000, Time in b(): 0.000000
Failed tests: 0.
Total tests: 1 (0 tests skipped)

$ pike bin/test_pike.pike -v10 testsuite
Doing tests in testsuite (1 tests)
Doing test 1 (1 total) at /home/nilsson/Pike/7.3/lib/modules/ADT.pmod/testsuite.in:9
0: mixed a() {
1: object s = ADT.Stack();
2: s->push(1);
3: return s->pop();
4: ; }
5: mixed b() { return 1; }
0: mixed a() {
1: object s = ADT.Stack();
2: s->push(1);
3: return s->pop();
4: ; }
5: mixed b() { return 1; }
6: int __cpp_line=__LINE__; int __rtl_line=[int]backtrace()[-1][1];
7:
8: int \30306\30271\30310=0;
9:
Time in a(): 0.000, Time in b(): 0.000000
Failed tests: 0.
Total tests: 1 (0 tests skipped)
How often the _verify_internals function is run during testing is controlled by a check level:

Level | Effect
1 | _verify_internals is run before every test.
2 | _verify_internals is run after every compilation.
3 | _verify_internals is run after every test.
4 | An extra gc and _verify_internals is run before every test.
X<0 | For values below zero, _verify_internals is run before every n:th test, where n=abs(X).
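Assuming the check level is the value given to the -c flag seen in the TESTARGS example above, a run that also verifies the interpreter internals after every test could look like this:

pike bin/test_pike.pike -v1 -c3 testsuite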
Whenever you write a new function in a module or in Pike itself, it is good to add a few test cases to the test suite, both to ensure that regressions are spotted as soon as they appear and to aid in finding problems when porting Pike to another platform. Since you wrote the code, you are the one best suited to come up with tricky test cases. A good test suite for a function includes both some trivial tests, to ensure that the basic functionality works, and some nasty tests that probe the borderlands of what the function is capable of, e.g. empty input parameters.
Also, when a bug in Pike has been found, a minimized test case that triggers the bug should be added to the test suite. After all, such a test case has already proven itself useful once.
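As a sketch of this advice, using the test_eq macro described below and the built-in reverse function as the function under test, a trivial test plus a border case test (the empty string) could look like this:

test_eq(reverse("pike"), "ekip")
test_eq(reverse(""), "")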
The test_any macro tests whether the results of two Pike expressions are equal, i.e. whether a==b. Technically, the actual test performed is !(a!=b). The first expression should be a complete block that returns a value, while the second should be a simple Pike expression.
test_any([[ int f (int i) {i = 0; return i;}; return f (1); ]],0)
The test_any_equal macro tests whether the results of two Pike expressions are identical, i.e. whether equal(a,b) holds. The first expression should be a complete block that returns a value, while the second should be a simple Pike expression.
test_any_equal([[ mixed a=({1,2,3}); a[*] += 1; return a; ]], [[ ({2,3,4}) ]])
The test_eq macro tests whether the results of two Pike expressions are equal, i.e. whether a==b. Technically, the actual test performed is !(a!=b).
test_eq(1e1,10.0);
The test_equal macro tests whether the results of two Pike expressions are identical, i.e. whether equal(a,b) holds.
test_equal([[ ({10,20})[*] + 30 ]], [[ ({40, 50}) ]])
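The difference between test_eq and test_equal shows up with container types: two separately written array literals have the same contents but are different objects, so equal() is true while == is not. The following sketch therefore succeeds with test_equal, whereas the corresponding test_eq test would fail:

test_equal([[ ({ 1, 2, 3 }) ]], [[ ({ 1, 2, 3 }) ]])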
test_do simply executes its code. This test fails if there is any compilation error or if an error is thrown during execution.
test_do([[ int x; if (time()) x = 1; else foo: break foo; ]])
The test_true macro succeeds if the Pike expression evaluates to a non-zero value.
test_true([[1.0e-40]]);
The test_false macro succeeds if the Pike expression evaluates to zero.
test_false(glob("*f","foo"))
The test_compile macro only tries to compile an expression. It fails on compilation warnings or errors.
test_compile([[Stdio.File foo=Stdio.File();]])
The test_compile_any macro tests whether the code compiles, just like test_compile, but it takes a complete block of code rather than a single expression.
test_compile_any([[ void foo() { Stdio.File bar(int x, int y) { return 0; }; } ]])
The test_compile_error macro does the inverse of test_compile: it verifies that the expression does not compile.
test_compile_error([[ int a="a"; ]])
The test_compile_error_any macro does the inverse of test_compile_any: it verifies that the code block does not compile.
test_compile_error_any([[ int a=5; string b="a"; a=b; ]])