Testing Strategies

Authors

Rudolph van Graan

This page is under construction; it needs to be filled out.

How to deal with test data

Mnesia

  1. Before starting Mnesia, make sure that all previous test schemas are cleaned out
  2. Create a new Mnesia schema
  3. Install the Mnesia tables
  4. Run the code that puts the basic data in place
    1. Insert the minimum number of rows
    2. Try to use unique keys for each test, so they don't interfere with each other
  5. Run the tests
  6. Leave the database intact so you can perform analysis after a failure
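
Below is a minimal sketch of steps 1 to 4, assuming a single test node and one illustrative table called account; the record definition and the seed row are placeholders for your own data.

-record(account, {id, balance}).

setup_mnesia() ->
    %% 1. Clean out any schema left over from a previous test run
    mnesia:stop(),
    ok = mnesia:delete_schema([node()]),
    %% 2. Create a fresh schema and start Mnesia
    ok = mnesia:create_schema([node()]),
    ok = mnesia:start(),
    %% 3. Install the tables
    {atomic, ok} =
        mnesia:create_table(account,
                            [{attributes, record_info(fields, account)}]),
    %% 4. Put the minimal basic data in place, using a key unique to this test
    ok = mnesia:dirty_write(#account{id = test_1_key, balance = 0}),
    ok.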

Relational Databases where your system controls the data

This is essentially the same as the Mnesia case.

  1. To get your initial data into the database:
    1. Recreate the database schema or
    2. Clear all the tables that you require
  2. Insert test data
  3. Run the tests
  4. Leave the database for later analysis
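
A sketch of the same flow using the OTP odbc application; the connection string, table name, and test row are assumptions about your schema, so substitute your own.

setup_database() ->
    ok = odbc:start(),
    {ok, Ref} = odbc:connect("DSN=testdb;UID=test;PWD=test", []),
    %% 1.2 Clear the tables that the tests require
    {updated, _} = odbc:sql_query(Ref, "DELETE FROM account"),
    %% 2. Insert the test data, using keys that are unique per test
    {updated, _} = odbc:sql_query(Ref,
                                  "INSERT INTO account (id, balance) "
                                  "VALUES ('test_1_key', 0)"),
    {ok, Ref}.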

Relational Databases where your system doesn't control the data

This is a difficult problem to solve, especially where the database can change outside your control.

Try the following:

  • Copy the database onto a clean machine and take a backup of its state, possibly deleting a lot of the records first so the backup is as small as possible
  • Before running a set of suites or a single test, restore this backup
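
As a sketch, the restore can be driven from the suite's init function; here pg_restore and the dump file name are assumptions, so use whatever restore mechanism your database provides.

init_per_suite(Config) ->
    %% Restore the known-good backup before any test in the suite runs.
    %% pg_restore and testdb.dump are placeholders for your own tooling.
    DataDir = proplists:get_value(data_dir, Config),
    DumpFile = filename:join(DataDir, "testdb.dump"),
    Output = os:cmd("pg_restore --clean --dbname=testdb " ++ DumpFile),
    io:format("restore output: ~s~n", [Output]),
    Config.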

Modules without side-effects

Modules with side-effects

In this scenario you need to make sure you clean up after running the test. Examples where you need this:

Testing functions that use ETS

[{conf,
  start_ets_owner_process,
  [test_1,
   test_2],
  stop_ets_owner_process}]

The configuration function start_ets_owner_process spawns a process that owns the ETS table and stores that process's PID in the test configuration it returns; a sketch of both configuration functions follows the notes below.

Notes

  1. Don't use spawn_link(...), as the owning process (and its table) would terminate as soon as the test case process exits
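
A minimal sketch of what these two configuration functions could look like; the table name test_table and the stop message are illustrative.

start_ets_owner_process(Config) ->
    Caller = self(),
    %% Plain spawn, not spawn_link: a linked owner would die (taking the
    %% table with it) as soon as the test case process exits.
    Pid = spawn(fun() ->
                        ets:new(test_table, [named_table, public, set]),
                        Caller ! {ets_owner_started, self()},
                        receive stop -> ok end
                end),
    receive {ets_owner_started, Pid} -> ok end,
    %% Store the PID in the test configuration so the tests and the
    %% cleanup function can find it
    [{ets_owner, Pid} | Config].

stop_ets_owner_process(Config) ->
    Pid = proplists:get_value(ets_owner, Config),
    Pid ! stop,
    Config.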

Testing applications in the context of an entire system

Testing applications that depend on other applications

Running variations of tests

An example where you need to run variations of tests is when your system supports multiple versions of the same protocol. Here is a configuration case illustrating it:

[{conf,
  connect_version_1,
  [test_function_1,        % Functions 1 and 2 are common to both protocol versions
   test_function_2,
   test_function_3_fails], % Should produce an error: this function doesn't exist in V1
  disconnect_version_1},
 {conf,
  connect_version_2,
  [test_function_1,
   test_function_2,
   test_function_3],       % Function 3 was added in V2, so it must succeed
  disconnect_version_2}]
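
A sketch of the version 1 configuration functions; protocol_client and its connect/disconnect functions are hypothetical placeholders for your own protocol code, and connect_version_2/disconnect_version_2 would follow the same pattern with version 2. Because the connection is kept in the test configuration, the shared test functions can run unchanged against both versions.

connect_version_1(Config) ->
    %% protocol_client is a hypothetical module standing in for your
    %% real protocol implementation
    {ok, Conn} = protocol_client:connect([{version, 1}]),
    %% Store the connection so the shared test functions can use it
    [{connection, Conn}, {protocol_version, 1} | Config].

disconnect_version_1(Config) ->
    Conn = proplists:get_value(connection, Config),
    ok = protocol_client:disconnect(Conn),
    Config.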