ContiPerf 2

 

Overview

ContiPerf is a lightweight testing utility that enables the user to easily leverage JUnit 4 test cases as performance tests e.g. for continuous performance testing. It is inspired by JUnit 4's easy test configuration with annotations and by JUnitPerf's idea of wrapping Unit tests for performance testing, but more powerful and easier to use:

  • Using Java annotations for defining test execution characteristics and performance requirements
  • You can mark a test to run a certain number of times or to be repeatedly executed for a certain amount of time
  • Performance requirements can be maximum, average, median or any percentile execution time
  • Perfect control and readability of performance requirements
  • You can run tests in two different modes, using them as simple unit tests or performance tests
  • Easy integration with Eclipse and Maven
  • Export of execution summary to a CSV file
  • Small library without external dependencies (only JUnit)
  • Easy extension with custom statistics evaluation and tracking mechanisms

Here is a very simple test:

  import org.junit.*;
  import org.databene.contiperf.*;

  public class SmokeTest {

      @Rule
      public ContiPerfRule i = new ContiPerfRule();

      @Test
      @PerfTest(invocations = 1000, threads = 20)
      @Required(max = 1200, average = 250)
      public void test1() throws Exception {
          Thread.sleep(200);
      }

  }

 

Defining an attribute of type ContiPerfRule with the annotation @Rule activates ContiPerf. You can then choose from different settings for specifying test execution (@PerfTest) and performance requirements (@Required). In the example, the test is configured to be executed 1000 times with 20 concurrent threads, so each thread performs 50 invocations. A maximum execution time of 1.2 seconds and an average of at most 250 milliseconds are tolerated.

A small but important detail is that ContiPerf deviates from the behavior you might expect from JUnit: ContiPerf creates a new test class instance and setup for each test method, but not for each test invocation! ContiPerf's purpose is to measure the performance of your code, not of JUnit and the garbage collector! So the behavior is as follows:

For each test method, a new instance of the test class is created and all further invocations of this method will happen on one and the same Java object!

First the @Before method(s) are called, then the test method is invoked 1,000 times in succession, and finally the @After method(s) are called. ContiPerf 2 supports this behavior for all JUnit 4 versions since 4.7.

An example: Assuming you have a test class with two test methods, test1() and test2() which are executed two times each, a @Before method called before() and an @After method called after(), the invocation sequence is

constructor()
before()
test1()
test1()
after()

constructor()
before()
test2()
test2()
after()


You may annotate a test class itself with @PerfTest and @Required. This way, defaults are defined for all test methods that do not have method annotations:

  import org.junit.*;
  import org.databene.contiperf.*;

  @PerfTest(invocations = 5)
  @Required(max = 1200, average = 250)
  public class SmokeTest {

      @Rule
      public ContiPerfRule i = new ContiPerfRule();

      @Test
      public void test1() throws Exception {
          Thread.sleep(200);
      }

      @Test
      public void test2() throws Exception {
          Thread.sleep(150);
      }

 }

 

Performance Test Suites

Test suites are a powerful mechanism. They can be used, for example, for

  • reusing existing unit tests that are not ContiPerf-aware in a manner that configures repeated execution and adds performance requirements
  • defining general performance requirements in a ContiPerf test class and reusing this in different test scenarios that induce different execution characteristics, e.g. long-term,  expected load, load peak, etc.
  • ...

The general suite concept was introduced in JUnit: You take one or more test classes which contain one or more test methods and group them into a test suite. When running the test suite, all tests contained in the related test classes are executed. While most JUnit users use it only as a grouping mechanism, ContiPerf allows you to add ContiPerf functionality to JUnit and ContiPerf tests.

A ContiPerf test suite is defined with two main annotations, @RunWith and @SuiteClasses and can be configured with the already known @PerfTest and @Required annotations:

@RunWith(ContiPerfSuiteRunner.class)
@SuiteClasses(MyApplicationTest.class)
@PerfTest(invocations = 1000, threads = 30)
public class PeakLoadTest {
}

You always need to use @RunWith(ContiPerfSuiteRunner.class) for executing a suite. The @SuiteClasses annotation must list all classes that contain the 'real' test implementations. If you add @PerfTest and @Required annotations, they apply as defaults for all 'real' tests that do not have a specific annotation of their own.

However, there is a limitation: ContiPerf test suites can only wrap test classes that use the default JUnit 4 test runner. Tests that use another runner, indicated by a @RunWith annotation, do not interoperate correctly with ContiPerf. The technical reason is that two JUnit design issues prevent their use for performance testing: First, JUnit does not propagate @Rules from test suites to the wrapped tests. Second, JUnit by default repeats test class instantiation, setup, teardown and result tracking for each single test method invocation. Applied to several thousand performance test invocations, this would induce severe additional load and would render the approach useless for most local performance test purposes.

Wait Times

In order not to have tests run continuously at full speed, but to achieve more user-interaction-like behavior, timers can be used: they introduce a wait time between invocations. ContiPerf comes with the predefined timers ConstantTimer, RandomTimer and CumulatedTimer, and you can easily define custom ones.


As an example,

@PerfTest(invocations = 1000, threads = 10, timer = RandomTimer.class, timerParams = { 30, 80 })

causes ContiPerf to wait for 30 to 80 milliseconds between the test invocations of each thread. 
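To illustrate what such a timer does, here is a minimal, self-contained sketch of a uniformly distributed wait time between a min and a max bound. This is an illustration only, not ContiPerf's actual RandomTimer implementation; the class and method names are made up for this example:

```java
import java.util.Random;

// Sketch of the wait behavior of a RandomTimer with timerParams = { 30, 80 }:
// before each invocation, a thread waits a uniformly distributed
// number of milliseconds in the range [min, max).
public class RandomWaitSketch {
    private static final Random RANDOM = new Random();

    // hypothetical helper, not part of the ContiPerf API
    static long randomWaitMillis(long min, long max) {
        return min + (long) (RANDOM.nextDouble() * (max - min));
    }

    public static void main(String[] args) {
        long wait = randomWaitMillis(30, 80);
        System.out.println(wait >= 30 && wait < 80);
    }
}
```

With 10 threads configured, each thread draws its own wait time independently, so invocations of different threads are not synchronized with each other.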

 

Ramp-up and warm-up time

If the tested component or system would be overloaded if all threads accessed it immediately, a ramp-up mechanism can be used: When specifying a rampUp time, the test run begins with a single thread. After the ramp-up period, a second thread is added, after one more ramp-up period a third, and so on until the full number of threads has been reached. In order to ease switching between ramp-up scenarios, the duration always specifies the time running with the full number of threads; ramp-up times are always added to the duration.

As an example, the annotation

@PerfTest(threads = 10, duration = 60000, rampUp = 1000)

makes ContiPerf start with one thread, add a new thread each second until 10 threads are reached (which is the case after 9 seconds), and then run the test with the full number of threads for 60 seconds. Consequently, the total test run time is 69 seconds. For measuring only under full load, you can configure a warmUp time to tell ContiPerf after which amount of time it should begin to measure and validate test executions. For the example above, a warmUp time of at least 9 seconds is useful:

 

@PerfTest(threads = 10, duration = 60000, rampUp = 1000, warmUp = 9000)
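The ramp-up arithmetic from the example can be written out explicitly. The following sketch (class and method names invented for illustration) computes the total run time as one ramp-up period per additional thread beyond the first, plus the full-load duration:

```java
// Sketch of the ramp-up timing arithmetic described above.
public class RampUpMath {
    // (threads - 1) ramp-up periods are needed until all threads run,
    // then the test runs for 'durationMillis' at full load.
    static long totalRunMillis(int threads, long rampUpMillis, long durationMillis) {
        return (threads - 1) * rampUpMillis + durationMillis;
    }

    public static void main(String[] args) {
        // threads = 10, rampUp = 1000 ms, duration = 60000 ms
        System.out.println(totalRunMillis(10, 1000, 60000)); // 69000 ms total
    }
}
```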

 

Executing multiple test methods in parallel

In order to simulate different concurrent actions of several users, ContiPerf can execute all test methods of a class in parallel. Classes to be executed that way need to be run with the ParallelRunner. 

An example:

@RunWith(ParallelRunner.class)
public class ParallelRunnerTest {
   
    @Rule public ContiPerfRule rule = new ContiPerfRule();
   
    @Test
    @PerfTest(duration = 2000, threads = 3, timer = ConstantTimer.class, timerParams = { 1200 })
    public void test1() throws Exception {
        System.out.println("test1()");
    }
   
    @Test
    @PerfTest(duration = 3000, threads = 2, timer = ConstantTimer.class, timerParams = { 700 })
    public void test2() throws Exception {
        System.out.println("test2()");
    }   
}

 

Using ContiPerf in an IDE

For running the tests in your IDE (e.g. Eclipse), simply add contiperf.jar to the classpath and run the JUnit test. If any requirement is violated, this is reported as a test failure with an appropriate message.

 

Integrating ContiPerf with Maven

Simply add a dependency on contiperf with scope test, and don't forget a dependency on JUnit as the test invoker:

    <dependencies>

        ...

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.7</version>
            <scope>test</scope>
        </dependency>
       
        <dependency>
            <groupId>org.databene</groupId>
            <artifactId>contiperf</artifactId>
            <version>2.1.0</version>
            <scope>test</scope>
        </dependency>


    </dependencies>

 

ContiPerf is automatically activated when JUnit encounters a test class with a @Rule attribute of type ContiPerfRule:

  mvn test  

 

Dual use with Maven

Using the same test class implementation for unit tests and performance tests is usually not recommendable, but if you insist on doing so, you can run the tests in two different modes:

  1. By default, performance testing is activated
  2. You can suppress performance testing, reducing the test to its unit test character by invoking Maven with the system property -Dcontiperf.active=false:
    mvn test -DargLine="-Dcontiperf.active=false"
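The switch works through a plain JVM system property, so it is easy to check what value the test JVM actually sees. The following sketch (my own illustration, not ContiPerf code) shows the kind of check that such a property-based switch boils down to:

```java
// Sketch: how a 'contiperf.active' switch can be read from a system property.
// By default the property is unset, so performance testing stays active.
public class ActiveCheck {
    public static void main(String[] args) {
        String value = System.getProperty("contiperf.active");
        // only an explicit "false" deactivates performance testing (assumption)
        boolean active = !"false".equals(value);
        System.out.println(active);
    }
}
```

Run with `java -Dcontiperf.active=false ActiveCheck` to see the deactivated case; the Maven `argLine` in the example above passes the property to the forked test JVM in the same way.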

License

ContiPerf is Open Source and you can choose among the following licenses:

  • Apache License 2.0
  • Lesser GNU Public License (LGPL) 3.0
  • Eclipse Public License 1.0
  • BSD License

Requirements

You need at least Java 5 and JUnit 4.7 to use ContiPerf.

 

Getting help / getting involved

If you are stuck, found a bug, have ideas for ContiPerf or want to help, visit the forum.

 

Defining performance test execution

@PerfTest(invocations = 300)
Executes the test 300 times, regardless of the number of threads.
If left out, ContiPerf defaults to invocations = 1, ignoring the 'threads' count.

@PerfTest(threads = 30)
Executes the test in 30 concurrent threads.
The total number of invocations is distributed evenly over the threads.
Thus, for invocations = 300, threads = 30, each thread performs 10 invocations.
If left out, ContiPerf defaults to a single thread.

@PerfTest(duration = 20000)
Executes the test repeatedly for at least 20,000 milliseconds (20 seconds).

 

Defining performance requirements

@Required(throughput = 20)
Requires at least 20 test executions per second.

@Required(average = 50)
Requires an average execution time of not more than 50 milliseconds.

@Required(median = 45)
Requires that 50% of all executions do not take longer than 45 milliseconds.

@Required(max = 2000)
Requires that no invocation takes more than 2000 milliseconds (2 seconds).

@Required(totalTime = 5000)
Requires that the sum of all execution times is not more than 5000 milliseconds (5 seconds).

@Required(percentile90 = 3000)
Requires that 90% of all executions do not take longer than 3000 milliseconds.

@Required(percentile95 = 5000)
Requires that 95% of all executions do not take longer than 5000 milliseconds.

@Required(percentile99 = 10000)
Requires that 99% of all executions do not take longer than 10000 milliseconds.

@Required(percentiles = "66:200,96:500")
Requires that 66% of all executions do not take longer than 200 milliseconds
and 96% of all executions do not take longer than 500 milliseconds.
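To make the percentile requirements concrete, here is a small sketch of how such a check can be evaluated over measured latencies. This uses the nearest-rank method as an assumption; it is an illustration, not ContiPerf's actual statistics code:

```java
import java.util.Arrays;

// Sketch of checking a requirement like @Required(percentile90 = 3000):
// the 90th-percentile latency must not exceed 3000 ms.
public class PercentileCheck {

    // nearest-rank percentile over the measured latencies (assumption)
    static long percentile(long[] latenciesMillis, double percent) {
        long[] sorted = latenciesMillis.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(percent / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // hypothetical measured latencies in milliseconds
        long[] latencies = { 120, 180, 200, 250, 300, 450, 500, 800, 950, 2900 };
        // requirement: 90% of executions within 3000 ms
        System.out.println(percentile(latencies, 90) <= 3000);
    }
}
```

Note how the single 2900 ms outlier does not violate a percentile90 requirement, while it would violate a max = 2000 requirement; this is why percentile bounds are often more robust than a hard maximum.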

 

Standard ContiPerf Report

ContiPerf writes invocation statistics to a report file 'target/contiperf-report/index.html'. It displays an overview of all test methods with a success or failure indicator, followed by a detailed report for each test method with a latency distribution graph and statistical information.

ContiPerf Report

 

Using alternative clocks

By default ContiPerf uses the system's most exact clock. You can use other clocks alternatively and even plug in custom clock implementations. The following clocks are provided with ContiPerf (in package org.databene.contiperf.clock):

  • SystemClock
  • CpuClock
  • UserClock

If nothing else is specified, the SystemClock is used. Alternative clocks can be selected with the @PerfTest annotation's 'clocks' attribute:

@PerfTest(invocations = 10, clocks = { SystemClock.class, UserClock.class, CpuClock.class })

When multiple clocks are specified, only the first one is used in requirement checking.
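The distinction between CPU and user time that the CpuClock and UserClock names suggest is exposed by the JVM itself. The following self-contained sketch reads the per-thread counters that such clocks can plausibly be built on; this is an assumption about their basis, not ContiPerf's actual implementation:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Sketch: per-thread CPU-time counters available in the JVM,
// a plausible basis for CPU/user clocks (assumption).
public class CpuTimeSketch {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (bean.isCurrentThreadCpuTimeSupported()) {
            long cpuNanos = bean.getCurrentThreadCpuTime();   // user + system CPU time
            long userNanos = bean.getCurrentThreadUserTime(); // user-mode CPU time only
            System.out.println(cpuNanos >= 0 && userNanos >= 0);
        } else {
            System.out.println(true); // not supported on this JVM
        }
    }
}
```

Measuring with a CPU-based clock instead of wall-clock time filters out wait times and scheduling delays, which changes what a latency requirement actually asserts.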

 

Release History

v1.0   2010-03-29 First release

v1.01 2010-04-08 Bug fix release

v1.02 2010-04-12 Bug fix release for non-Maven-based projects

v1.03 2010-04-19 Supporting multithreaded test execution

v1.04 2010-04-23 Bug fix release

v1.05 2010-05-24 Supporting test suites

v1.06 2010-05-24 Reduced memory footprint and performance impact

v1.07 2010-11-14 Bug fix release

v2.0  2011-09-11 Consistent support for all JUnit 4 versions since 4.7, HTML report with distribution charts

v2.1.0 2012-04-09 Ramp-up, warm-up, wait times, ParallelRunner

v2.2.0 2012-05-25 Alternative Clocks