While a simulation is running, the framework collects performance metrics from Spec instances, e.g. HttpSpec. A metric instance contains the name of the measurement point (used in reporting and monitoring), the status of the execution as a string, and performance figures such as the elapsed time. Rhino then creates a performance report and writes it to stdout:

Starting load test for 5 minutes ...
Preparation in progress.
Number of users logged in : 2
Tests started : 20:45:35
Elapsed : 1 secs ETA : 20:50:35 (duration 5 mins)
-- Number of executions --------------------------------------------------
> Discovery        Discovery Request                      200 2
-- Response Time ---------------------------------------------------------
> Discovery        Discovery Request                      200 12 ms

                             Average Response Time        12 ms
                                     Total Request         2 

for the following test DSL:

  @Dsl(name = "Load DSL Discovery and GET")
  public DslBuilder loadTestDiscoverAndGet() {
    return dsl()
        .run(http("Discovery Request")
            .header(session -> headerValue(X_REQUEST_ID, "Rhino-123"))
            .header(X_API_KEY, SimulationConfig.getApiKey()));
  }
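Conceptually, each metric instance gathered from such a Spec execution bundles a measurement-point name, an execution status, and timing data. A minimal sketch of that shape (the record and field names below are illustrative, not Rhino's actual metric type):

```java
// Illustrative only: a minimal metric record, not Rhino's internal type.
public class MetricExample {

  // measurementPoint is used in reporting; status is a string like "200";
  // elapsedMs is the measured execution time.
  record Metric(String measurementPoint, String status, long elapsedMs) {}

  public static void main(String[] args) {
    Metric m = new Metric("Discovery Request", "200", 12L);
    // The reporter aggregates such records per measurement point.
    System.out.println(m.measurementPoint() + " " + m.status() + " " + m.elapsedMs() + " ms");
  }
}
```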

The test execution’s output in stdout is not the only place where metric instances are collected. The stdout output certainly helps to monitor the run while the simulation is executing, but it is less handy if you need to build dashboards, e.g. in Gatling, from the results.

The simulation metrics gathered in measurements can be written to simulation log files or sent to a time-series database for further processing. To enable simulation logging, for instance, add the @Logging annotation to the class:

@Simulation(name = "Server-Status Simulation")
@Logging(file = "/var/log/simulation.log", formatter = GatlingLogFormatter.class)
public class PerformanceTestingExample {
  // your DSLs here
}

The formatter attribute defines the format of the log file. The framework currently supports only the Gatling simulation file format, via GatlingLogFormatter.class, so Gatling tooling can generate simulation reports from the output. You can also write your own log formatter and reference your class here.
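As a sketch of what a custom formatter might look like, assuming the formatter's job reduces to turning one measurement into one log line (the class and method names below are assumptions for illustration, not Rhino's actual formatter API):

```java
// Hypothetical CSV formatter sketch: the Measurement shape and format()
// contract are illustrative; check Rhino's actual formatter interface.
public class CsvLogFormatterExample {

  // Assumed shape of a measurement handed to the formatter.
  record Measurement(String measurementPoint, String status, long elapsedMs) {}

  // Emits one CSV line per measurement: point,status,elapsed
  static String format(Measurement m) {
    return String.join(",",
        m.measurementPoint(), m.status(), Long.toString(m.elapsedMs()));
  }

  public static void main(String[] args) {
    System.out.println(format(new Measurement("Discovery Request", "200", 12L)));
  }
}
```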

Another option is to use InfluxDB to store the metrics as time-series data. Please refer to the InfluxDB integration documentation for further information.

Selective Metrics

Although the framework collects metrics from Spec executions automatically, you can define your own measurement scope with the measure() DSL:

    dsl().measure("Total Execution",
        run(http("Discovery Request"))
            .run(http("Get Request")));

In this way, you can aggregate the metrics of multiple executions together:

-- Number of executions ------------------------------------------
>       Discovery Request 3                            200 11
>       Discovery Request                              200 12
>       Get Request                                    200 3
>       Total Execution                                    3
-- Response Time -------------------------------------------------
>       Discovery Request 3                            200 124 ms
>       Discovery Request                              200 104 ms
>       Get Request                                    200 207 ms
>       Total Execution                                    375 ms

                 Average Response Time                     143 ms
                       Total Request                  40 

You can check out the DSL documentation for further information.