While a simulation is running, performance metrics are collected from the measurable Spec instances defined in your simulations and their DSL methods, e.g. HttpSpec. A measurable spec creates a metric instance for each execution. A metric instance holds the name of the measurement point, which is defined in the measurable spec, e.g. http("Discovery Request"), and used in reporting and monitoring; the performance metric itself, e.g. the execution time; and the status of the execution, e.g. for HttpSpec the HTTP status codes 200, 404, 500, etc. Rhino then creates a performance report and writes it to stdout:

if the full stats option is set in rhino.properties:

simulation.output.style=full

otherwise, if you omit the configuration, the simple output style is used by default:

given the following test DSL:

  @Dsl(name = "Load DSL Discovery and GET")
  public DslBuilder loadTestDiscoverAndGet() {
    return dsl()
        .run(http("Discovery Request")
            .header(session -> headerValue(X_REQUEST_ID, "Rhino-123"))
            .header(X_API_KEY, SimulationConfig.getApiKey())
            .auth()
            .endpoint(DISCOVERY_ENDPOINT)
            .get()
            .saveTo("result"));
  }

The test execution’s output in stdout is not the only place where the metrics can be reported. The stdout output certainly helps to monitor the test run during simulation execution, but it is not that handy if you need to create dashboards, e.g. with Gatling. The simulation metrics gathered in measurements can also be written into simulation log files for further processing. To enable simulation logging, add the @Logging annotation to your simulation class:

@Simulation(name = "Server-Status Simulation")
@Logging(file = "/var/log/simulation.log", formatter = GatlingLogFormatter.class)
public class PerformanceTestingExample {
  // your DSLs here
}

The formatter attribute defines the format of the log file you want to use. The framework currently supports only the Gatling simulation log format through GatlingLogFormatter.class, so Gatling simulation reports can be generated with the Gatling tooling. You can also write your own log formatter and add it to your class, as sketched below.
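
A minimal sketch of a custom formatter could look as follows. Note that the LogFormatter interface name, the format(...) signature, and the event accessors are assumptions here rather than the framework's confirmed API; mirror the contract that GatlingLogFormatter actually implements in your framework version:

// Sketch only: the interface name, method signature and event accessors are
// assumptions; check how GatlingLogFormatter is implemented for the real contract.
public class CsvLogFormatter implements LogFormatter {

  @Override
  public String format(final LogEvent logEvent) {
    // Write one comma-separated line per measurement (hypothetical accessors).
    return String.join(",",
        logEvent.getMeasurementName(),
        String.valueOf(logEvent.getElapsedTime()),
        logEvent.getStatus());
  }
}

The class is then referenced from the annotation, e.g. @Logging(file = "/var/log/simulation.csv", formatter = CsvLogFormatter.class).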

Another option is to use Influx DB to store the metrics as time-series data. Please refer to the Influx DB integration documentation for further information.

Defining custom measurement points

While the framework collects metrics from measurable Spec executions, you can additionally define your own measurement scope by wrapping your DSL items with the measure() DSL to collect aggregated metrics:

  @Dsl(name = "Load DSL Discovery and GET")
  public DslBuilder loadTestDiscoverAndGet() {
    return dsl()
        .measure("Outer Measurement",
            dsl()
                .run(discovery())
                .measure("Inner Measurement", run(getResource())));
  }

  private HttpRetriableDsl getResource() {
    return http("Get Request")
        .auth()
        .header(session -> headerValue("X-Request-Id", "Test-" + provider.take()))
        .endpoint(s -> FILE_ENDPOINT)
        .get();
  }

  private HttpDsl discovery() {
    return http("Discovery Request")
        .auth()
        .header(session -> headerValue("X-Request-Id", "Test-" + provider.take()))
        .endpoint(DISCOVERY_ENDPOINT)
        .get()
        .saveTo("result");
  }

In the example above, the outer measure() DSL item wraps the discovery() and getResource() calls while measuring the entire execution time. In this way, you can aggregate the metrics of multiple executions together:

You can check out the DSL documentation for further information.

Handling retries in Metrics

You can make an HttpSpec retry the request by adding a retryIf expression, until the expected result is returned. In the example below, the first attempt of the GET request results in an HTTP 404, and the second attempt succeeds with an HTTP 200:

http("Monitor")
    .header(session -> headerValue(X_REQUEST_ID, "Rhino-" + uuidProvider.take()))
    .header(X_API_KEY, SimulationConfig.getApiKey())
    .auth()
    .endpoint(session -> MONITOR_ENDPOINT)
    .get()
    .saveTo("result")
    .retryIf(response -> response.getStatusCode() != 200, 2)

In this case, each HTTP request within the “Monitor” spec, including the retries, will be measured individually in the reports:

However, you may want to see the cumulative execution time of the Monitor spec including all retries. By using the cummulative() expression, you can tell the framework to measure the entire execution time of the spec, including the retries:

http("Monitor")
    .header(session -> headerValue(X_REQUEST_ID, "Rhino-" + uuidProvider.take()))
    .header(X_API_KEY, SimulationConfig.getApiKey())
    .auth()
    .endpoint(session -> MONITOR_ENDPOINT)
    .get()
    .saveTo("result")
    .retryIf(response -> response.getStatusCode() != 200, 2)
    .cummulative();

The report now shows all attempts as if they were a single request:
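
If you also need an aggregated view, a retrying, cumulative spec can be wrapped with the measure() DSL introduced above. The following is a sketch that only reuses the DSL calls shown in this section; the MONITOR_ENDPOINT constant, uuidProvider and the X_* header constants are assumed to be defined in your simulation class:

  @Dsl(name = "Load DSL Monitor with retries")
  public DslBuilder loadTestMonitorWithRetries() {
    // The outer measure() aggregates the cumulative "Monitor" execution,
    // i.e. all retry attempts, under a single measurement point.
    return dsl()
        .measure("Monitor Scope",
            dsl().run(http("Monitor")
                .header(session -> headerValue(X_REQUEST_ID, "Rhino-" + uuidProvider.take()))
                .header(X_API_KEY, SimulationConfig.getApiKey())
                .auth()
                .endpoint(session -> MONITOR_ENDPOINT)
                .get()
                .saveTo("result")
                .retryIf(response -> response.getStatusCode() != 200, 2)
                .cummulative()));
  }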