
Developer Guide

All Purpose Interface | Developer Guide

Abstract

This document explains:

  1. How to develop a test program from scratch.
  2. How to add self-programmed measurements to the HTTP Test Wizard (as plug-ins).

The product’s open architecture enables you to develop plug-ins, scripts and programs that measure anything that has a numeric value, no matter which protocol is used!

The measured data are evaluated in real time and displayed as diagrams and lists. In addition to successfully measured values, errors such as timeouts or invalid response data can also be collected and displayed in real time.

At least in theory, programs and scripts written in any programming language can be executed, as long as the program or script supports the All Purpose Interface.

In practice there are currently two options for integrating your own measurements into the Real Load Platform:

  1. Write an HTTP Test Wizard Plug-In in Java that performs the measurement. This has the advantage that you only have to implement a subset of the “All Purpose Interface” yourself:

    • Declare Statistic
    • Register Sample Start
    • Add Sample Long
    • Add Sample Error
    • [Optional: Add Counter Long, Add Average Delta And Current Value, Add Efficiency Ratio Delta, Add Throughput Delta, Add Test Result Annotation Exec Event]

    Such plug-ins can be developed quite quickly, as all other functions of the “All Purpose Interface” are already implemented by the HTTP Test Wizard.

    Tip: An HTTP Test Wizard session can also consist solely of plug-ins, i.e. you can “misuse” the HTTP Test Wizard to carry out only measurements that you have programmed yourself: Plug-In Example

  2. Write a test program or script from scratch. This can currently be programmed in Java or PowerShell (support for additional programming languages will be added in the future). This is more time-consuming, but has the advantage that you have more freedom in program development. In this case you have to implement all functions of the “All Purpose Interface”.

Interface Specification

Basic Requirements for all Programs and Scripts

The All Purpose Interface must be implemented by all programs and scripts which are executed on the Real Load Platform. The interface is independent of any programming language and has only three requirements:

  1. The executed program or script must be able to be started from a command line, and it must support passing program or script arguments.
  2. The executed program or script must be able to read and write files.
  3. The executed program or script must be able to measure one or more numerical values.

All of this may seem a bit trivial, but it has been chosen deliberately so that the interface can support almost all programming languages.

Generic Program and Script Arguments

Each executed program or script must support at least the following arguments:

  • Number of Users: The total number of simulated users (integer value > 0).
  • Test Duration: The maximum test duration in seconds (integer value > 0).
  • Ramp Up Time: The ramp up time in seconds until all simulated users are started (integer value >= 0). Example: If 10 users are started within 5 seconds, the first user is started immediately and the remaining 9 users are then started at intervals of (5 seconds / 9 users) ≈ 0.56 seconds.
  • Max Session Loops: The maximum number of session loops per simulated user (integer value > 0, or -1 means infinite number of session loops).
  • Delay Per Session Loop: The delay in milliseconds before a simulated user starts the next session loop iteration (integer value >= 0). Not applied to the first session loop iteration.
  • Data Output Directory: The directory to which the measured data have to be written. Other data, such as debug information, can also be written to this directory.

Implementation Note: The test ends when either the Test Duration has elapsed or Max Session Loops has been reached for all simulated users. Sessions that are currently executing are not aborted.

In addition, the following arguments are optional, but also standardized:

  • Description: A brief description of the test
  • Debug Execution: Write debug information about the test execution to stdout
  • Debug Measuring: Write debug information about the declared statistics and the measured values to stdout
Argument                 Java                          PowerShell
Number of Users          -users number                 -totalUsers number
Test Duration            -duration seconds             -inputTestDuration seconds
Ramp Up Time             -rampupTime seconds           -rampUpTime seconds
Max Session Loops        -maxLoops number              -inputMaxLoops number
Delay Per Session Loop   -delayPerLoop milliseconds    -inputDelayPerLoopMillis milliseconds
Data Output Directory    -dataOutputDir path           -dataOutDirectory path
Description              -description text             -description text
Debug Execution          -debugExec                    -debugExecution
Debug Measuring          -debugData                    -debugMeasuring
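For illustration, a multi-threaded Java test program could be started like this (the program name MyLoadTest, the classpath and the paths are hypothetical; the argument names are those from the table above):

java -cp com.dkfqs.tools.jar:. MyLoadTest -users 10 -duration 300 -rampupTime 5 -maxLoops -1 -delayPerLoop 1000 -dataOutputDir /var/tmp/job42 -description "Smoke test" -debugExec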

Single-Threaded Scripts vs. Multiple-Threaded Programs

For scripts which don’t support multiple threads, the Real Load Platform starts a separate operating system process for each simulated user. For programs which support multiple threads, on the other hand, only one operating system process is started for all simulated users.

Scripts which are not able to run multiple threads must support the following additional generic command line argument:

  • Executed User Number: The currently executed user (integer value > 0). Example: If 10 scripts are started, then 1 is passed to the first started script, 2 is passed to the second started script, et cetera.
Argument               PowerShell
Executed User Number   -inputUserNo number
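A corresponding single-threaded PowerShell script would be started once per simulated user, for example for the first user (the script name MyLoadTest.ps1 and the paths are hypothetical; the argument names are those from the tables above):

pwsh -File MyLoadTest.ps1 -totalUsers 10 -inputTestDuration 300 -rampUpTime 5 -inputMaxLoops -1 -inputDelayPerLoopMillis 1000 -dataOutDirectory C:\Jobs\42 -inputUserNo 1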

Specific Program and Script Arguments

Additional program- and script-specific arguments are supported by the Real Load Platform. However, their values are not validated by the platform.

Job Control Files

During the execution of a test, the Real Load Platform can create and delete additional control files at runtime in the Data Output Directory of a test job. The running script or program must frequently check for the existence, respectively the absence, of such control files, but not too often, to avoid CPU and I/O overload. Rule of thumb: multi-threaded programs should check for the existence of such files every 5..10 seconds; single-threaded scripts should check such files before executing a new session loop iteration.

The following control files are created or removed in the Data Output Directory by the Real Load Platform:

  • DKFQS_Action_AbortTest.txt : If the existence of this file is detected, the test execution must be aborted gracefully as soon as possible. Currently executing session loops are not aborted.
  • DKFQS_Action_SuspendTest.txt : If the existence of this file is detected, the further execution of session loops is suspended until the file is removed by the Real Load Platform. Currently executing session loops are not interrupted on suspend. When the test is resumed, the Ramp Up Time passed as a generic argument to the script or program must be re-applied. If a suspended test runs out of Test Duration, the test must end.
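As a minimal sketch of the polling logic (assuming a multi-threaded Java test program; everything except the two control file names is illustrative, not part of the interface):

import java.io.File;

public class ControlFilePoller {
    /** Polls the job control files in the Data Output Directory every 5 seconds. */
    public static void pollLoop(File dataOutputDir) throws InterruptedException {
        File abortFile = new File(dataOutputDir, "DKFQS_Action_AbortTest.txt");
        File suspendFile = new File(dataOutputDir, "DKFQS_Action_SuspendTest.txt");
        while (true) {
            if (abortFile.exists()) {
                // abort gracefully: let running session loops finish, start no new ones
                break;
            }
            while (suspendFile.exists()) {
                // suspended: start no new session loops until the file is removed;
                // on resume, the Ramp Up Time must be re-applied
                Thread.sleep(5000);
            }
            Thread.sleep(5000);   // rule of thumb: check every 5..10 seconds
            // (a real program would also exit this loop when the Test Duration
            //  has elapsed or when all simulated users have finished)
        }
    }
}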

Testjob Data Files

When a test job is started by the Real Load Platform on a Measuring Agent, the Real Load Platform first creates an empty data file for each simulated user in the Data Output Directory of the test job:

Data File: user_<Executed User Number>_statistics.out

Example: user_1_statistics.out, user_2_statistics.out, user_3_statistics.out, .. et cetera.

After that, the test script(s) or the test program is started as an operating system process. The test script or the test program has to write the current state of the simulated user and the measured data to the corresponding Data File of the simulated user in JSON object format (append data to the file only; don’t create new files).

The Real Load Platform component Measuring Agent and the corresponding Data Collector listen to these data files and interpret the measured data in real time, line by line, as JSON objects.
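A minimal sketch of how a test program could append its JSON objects to the data file (assuming Java; only the append-only, one-object-per-line-with-\r\n convention is prescribed by the interface, the class itself is illustrative):

import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class DataFileWriter {
    private final Writer writer;

    public DataFileWriter(String dataOutputDir, int userNumber) throws IOException {
        // open in append mode: the platform has already created the (empty) file
        writer = new FileWriter(dataOutputDir + "/user_" + userNumber + "_statistics.out", true);
    }

    /** Writes one JSON object as a single line terminated by \r\n. */
    public synchronized void writeJsonLine(String jsonObject) throws IOException {
        writer.write(jsonObject + "\r\n");
        writer.flush();   // flush so the Data Collector can read the line immediately
    }
}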


Writing JSON Objects to the Data Files

The following JSON Objects can be written to the Data Files:

JSON Object                             Description
Declare Statistic                       Declares a new statistic
Register Execute Start                  Registers the start of a user
Register Execute Suspend                Registers that the execution of a user is suspended
Register Execute Resume                 Registers that the execution of a user is resumed
Register Execute End                    Registers that a user has ended
Register Loop Start                     Registers that a user has started a session loop iteration
Register Loop Passed                    Registers that a session loop iteration of a user has passed
Register Loop Failed                    Registers that a session loop iteration of a user has failed
Register Sample Start                   Statistic-type sample-event-time-chart: Registers the start of measuring a sample
Add Sample Long                         Statistic-type sample-event-time-chart: Registers that a sample has been measured and reports the value
Add Sample Error                        Statistic-type sample-event-time-chart: Registers that the measuring of a sample has failed
Add Counter Long                        Statistic-type cumulative-counter-long: Adds a positive delta value to the counter
Add Average Delta And Current Value     Statistic-type average-and-current-value: Adds delta values to the average and sets the current value
Add Efficiency Ratio Delta              Statistic-type efficiency-ratio-percent: Adds efficiency ratio delta values
Add Throughput Delta                    Statistic-type throughput-time-chart: Adds a delta value to a throughput
Add Error                               Adds an error to the test result
Add Test Result Annotation Exec Event   Adds an annotation event to the test result

Note that the data of each JSON object must be written as a single line which ends with a \r\n line terminator.

Program Sequence

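In outline, a simulated user writes JSON objects to its data file in the following typical order (a sketch derived from the specification below; timestamps and values are abbreviated):

{"subject":"declare-statistic","statistic-id":1,"statistic-type":"sample-event-time-chart", ...}
{"subject":"register-execute-start","timestamp":...}
{"subject":"register-loop-start","timestamp":...}
{"subject":"register-sample-start","statistic-id":1,"timestamp":...}
{"subject":"add-sample-long","statistic-id":1,"value":105,"timestamp":...}
{"subject":"register-loop-passed","loop-time":1451,"timestamp":...}
(further session loop iterations ...)
{"subject":"register-execute-end","timestamp":...}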

JSON Object Specification

Declare Statistic Object

Before the measurement of data begins, the corresponding statistics must be declared at runtime. Each declared statistic must have a unique ID. Multiple declarations with the same ID are discarded by the platform.

Currently 5 types of statistics are supported:

  • sample-event-time-chart : This is the most common statistic type and contains continuously measured response times or any other continuously measured values of any unit. Information about failed measurements can also be added to the statistic. Statistics of this type are added to the ‘Overview Statistic’ area and can also be displayed as a chart.
  • cumulative-counter-long : This is a single counter whose value is continuously increased during the test. Statistics of this type are added to the ‘Test-Specific Values’ area.
  • average-and-current-value : This is a separately measured mean value and the last measured current value. Statistics of this type are added to the ‘Test-Specific Values’ area.
  • efficiency-ratio-percent : This is a measured efficiency in percent (0..100%). Statistics of this type are added to the ‘Test-Specific Values’ area.
  • throughput-time-chart : This is a measured throughput per second. Statistics of this type are added to the ‘Test-Specific Values’ area.


New statistics can also be declared at any time during test execution, but each statistic must be declared before measured data are added to it.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "DeclareStatistic",
  "type": "object",
  "required": ["subject", "statistic-id", "statistic-type", "statistic-title"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'declare-statistic'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "Unique statistic id"
    },
    "statistic-type": {
      "type": "string",
      "description": "'sample-event-time-chart' or 'cumulative-counter-long' or 'average-and-current-value' or 'efficiency-ratio-percent' or 'throughput-time-chart'"
    },
    "statistic-title": {
      "type": "string",
      "description": "Statistic title"
    },
    "statistic-subtitle": {
      "type": "string",
      "description": "Statistic subtitle | only supported by 'sample-event-time-chart'"
    },
    "y-axis-title": {
      "type": "string",
      "description": "Y-Axis title | only supported by 'sample-event-time-chart'. Example: 'Response Time'"
    },
    "unit-text": {
      "type": "string",
      "description": "Text of measured unit. Example: 'ms'"
    },
    "sort-position": {
      "type": "integer",
      "description": "The UI sort position"
    },
    "add-to-summary-statistic": {
      "type": "boolean",
      "description": "If true = add the number of measured and failed samples to the summary statistic | only supported by 'sample-event-time-chart'. Note: Synthetic measured data like Measurement Groups or Delay Times should not be added to the summary statistic"
    },
    "background-color": {
      "type": "string",
      "description": "The background color either as #hex-triplet or as bootstrap css class name, or an empty string = no special background color. Examples: '#cad9fa', 'table-info'"
    }
  }
}

Example: 
{
  "subject":"declare-statistic",
  "statistic-id":1,
  "statistictype":"sample-event-time-chart",
  "statistic-title":"GET http://192.168.0.111/",
  "statistic-subtitle":"",
  "y-axis-title":"Response Time",
  "unit-text":"ms",
  "sort-position":1,
  "add-to-summarystatistic":true,
  "background-color":""
}

After the statistics are declared, the activities of the simulated users can be started. Each simulated user must report the following changes of the current user-state:

  • register-execute-start : Register that the simulated user has started the test.
  • register-execute-suspend : Register that the simulated user suspends the execution of the test.
  • register-execute-resume : Register that the simulated user resumes the execution of the test.
  • register-execute-end : Register that the simulated user has ended the test.

Register Execute Start Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteStart",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-start'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-start","timestamp":1596219816129}

Register Execute Suspend Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteSuspend",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-suspend'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-suspend","timestamp":1596219816129}

Register Execute Resume Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteResume",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-resume'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-resume","timestamp":1596219816129}

Register Execute End Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteEnd",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-end'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-end","timestamp":1596219816129}

Once a simulated user has started its activity, it measures the data in so-called ‘session loops’. Each simulated user must report when a session loop iteration starts and ends:

  • register-loop-start : Register the start of a session loop iteration.
  • register-loop-passed : Register that a session loop iteration has passed (reported at the end of the session loop iteration).
  • register-loop-failed : Register that a session loop iteration has failed (reported when the session loop iteration is aborted).

Register Loop Start Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopStart",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-start'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-start","timestamp":1596219816129}

Register Loop Passed Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopPassed",
  "type": "object",
  "required": ["subject", "loop-time", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-passed'"
    },
    "loop-time": {
      "type": "integer",
      "description": "The time it takes to execute the loop in milliseconds"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-passed","loop-time":1451, "timestamp":1596219816129}

Register Loop Failed Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopFailed",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-failed'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-failed","timestamp":1596219816129}

Within a session loop iteration the samples of the declared statistics are measured. For sample-event-time-chart statistics the simulated user must report when the measuring of a sample starts and ends:

  • register-sample-start : Register that the measuring of a sample has started.
  • add-sample-long : Add a measured value to a declared statistic.
  • add-sample-error : Add an error to a declared statistic.

Register Sample Start Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterSampleStart",
  "type": "object",
  "required": ["subject", "statistic-id", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-sample-start'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-sample-start","statisticid":2,"timestamp":1596219816165}

Add Sample Long Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddSampleLong",
  "type": "object",
  "required": ["subject", "statistic-id", "value", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-sample-long'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "value": {
      "type": "integer",
      "description": "The measured value"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"add-sample-long","statisticid":2,"value":105,"timestamp":1596219842468}

Add Sample Error Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddSampleError",
  "type": "object",
  "required": ["subject", "statistic-id", "error-subject", "error-severity",
  "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-sample-error'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "error-subject": {
      "type": "string",
      "description": "The subject or title of the error"
    },
    "error-severity": {
      "type": "string",
      "description": "'warning' or 'error' or 'fatal'"
    },
    "error-type": {
      "type": "string",
      "description": "The type of the error. Errors which contains the same error
    type can be grouped."
    },
    "error-log": {
      "type": "string",
      "description": "The error log. Multiple lines are supported by adding \r\n line terminators."
    },
    "error-context": {
      "type": "string",
      "description": "Context information about the condition under which the error occurred. Multiple lines are supported by adding \r\n line terminators."
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{
  "subject":"add-sample-error",
  "statistic-id":2,
  "error-subject":"Connection refused (Connection refused)",
  "error-severity":"error",
  "error-type":"java.net.ConnectException",
  "error-log":"2020-08-01 21:24:51.662 | main-HTTPClientProcessing[3] | INFO | GET http://192.168.0.111/\r\n2020-08-01 21:24:51.670 | main-HTTPClientProcessing[3] | ERROR | Failed to open or reuse connection to 192.168.0.111:80 |
 java.net.ConnectException: Connection refused (Connection refused)\r\n",
  "error-context":"HTTP Request Header\r\nhttp://192.168.0.111/\r\nGET / HTTP/1.1\r\nHost: 192.168.0.111\r\nConnection: keep-alive\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate\r\n",
  "timestamp":1596309891672
}

Note about the error-severity:

  • warning : After the error has occurred, the simulated user continues with the execution of the current session loop. Error color = yellow.
  • error : After the error has occurred, the simulated user aborts the execution of the current session loop iteration and starts the execution of the next session loop iteration. Error color = red.
  • fatal : After the error has occurred, the simulated user aborts any further execution of the test, which means that the test has ended for this simulated user. Error color = black.

Implementation note: After an error has occurred, the simulated user should wait at least 100 milliseconds before continuing its activities. This prevents several thousand errors from being measured and reported to the UI within a few seconds.
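A minimal way to honor this in a Java test program (a sketch; where exactly the wait is placed in the error path is up to your implementation):

try {
    Thread.sleep(100);   // back off for at least 100 ms after a measured error
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}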

Add Counter Long Object (cumulative-counter-long only)

For cumulative-counter-long statistics there is no 2-step mechanism as for ‘sample-event-time-chart’ statistics. The value can simply be increased by reporting an Add Counter Long object.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddCounterLong",
  "type": "object",
  "required": ["subject", "statistic-id", "value"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-counter-long'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "value": {
      "type": "integer",
      "description": "The value to increment"
    }
  }
}

Example: 
{"subject":"add-counter-long","statistic-id":10,"value":2111}

Add Average Delta And Current Value Object (average-and-current-value only)

To update an average-and-current-value statistic, the delta (difference) of the cumulated sum of values and the delta (difference) of the cumulated number of values have to be reported. The platform then calculates the average value by dividing the cumulated sum by the cumulated number of values. In addition, the last measured value must also be reported.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddAverageDeltaAndCurrentValue",
  "type": "object",
  "required": ["subject", "statistic-id", "sumValuesDelta", "numValuesDelta", "currentValue", "currentValueTimestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-average-delta-and-current-value'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "sumValuesDelta": {
      "type": "integer",
      "description": "The sum of delta values to add to the average"
    },
    "numValuesDelta": {
      "type": "integer",
      "description": "The number of delta values to add to the average"
    },
    "currentValue": {
      "type": "integer",
      "description": "The current value, or -1 if no such data is available"
    },
    "currentValueTimestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the current value, or -1 if no such data is available"
    }
  }
}

Example: 
{
  "subject":"add-average-delta-and-current-value",
  "statistic-id":100005,
  "sumValuesDelta":6302,
  "numValuesDelta":22,
  "currentValue":272,
  "currentValueTimestamp":1634401774374
}
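In this example, 22 newly measured values with a total sum of 6302 are added to the statistic; taken on their own, these deltas would result in a displayed average of 6302 / 22 ≈ 286, with 272 shown as the current value.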

Add Efficiency Ratio Delta Object (efficiency-ratio-percent only)

To update an efficiency-ratio-percent statistic, the delta (difference) of the number of efficiently performed procedures and the delta (difference) of the number of inefficiently performed procedures have to be reported.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddEfficiencyRatioDelta",
  "type": "object",
  "required": ["subject", "statistic-id", "efficiencyDeltaValue", "inefficiencyDeltaValue"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-efficiency-ratio-delta'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "efficiencyDeltaValue": {
      "type": "integer",
      "description": "The number of efficient performed procedures to add"
    },
    "inefficiencyDeltaValue": {
      "type": "integer",
      "description": "The number of inefficient performed procedures to add"
    }
  }
}

Example: 
{
  "subject":"add-efficiency-ratio-delta",
  "statistic-id":100006,
  "efficiencyDeltaValue":6,
  "inefficiencyDeltaValue":22
}
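Assuming the displayed ratio is calculated as efficient / (efficient + inefficient) procedures, this report contributes 6 efficient and 22 inefficient procedures; taken on their own, these deltas would result in a ratio of 6 / 28 ≈ 21%.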

Add Throughput Delta Object (throughput-time-chart only)

To update a throughput-time-chart statistic, the delta (difference) between the last reported cumulated value and the current cumulated value has to be reported, whereby the current timestamp is included in the calculation.

Although this type of statistic always has the unit ‘throughput per second’, a measured delta (difference) value can be reported at any time.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddThroughputDelta",
  "type": "object",
  "required": ["subject", "statistic-id", "delta-value", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-throughput-delta'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "delta-value": {
      "type": "number",
      "description": "the delta (difference) value"
    },
    "timestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the delta (difference) value"
    }
  }
}

Example: 
{
  "subject":"add-throughput-delta",
  "statistic-id":100003,
  "delta-value":0.53612,
  "timestamp":1634401774410
}
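For example, to report network throughput (a sketch, not prescribed by the interface): keep the cumulated byte count of the last report, and at each report send the difference to the current cumulated count as delta-value together with the current timestamp; the platform derives the per-second rate from the reported deltas and their timestamps.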

Add Error Object

Add an error to the test result.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddError",
  "type": "object",
  "required": ["subject", "statistic-id", "error-subject", "error-severity",
  "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-error'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id, or -1 if this error is not bound to any statistic"
    },
    "error-subject": {
      "type": "string",
      "description": "The subject or title of the error"
    },
    "error-severity": {
      "type": "string",
      "description": "'warning' or 'error' or 'fatal'"
    },
    "error-type": {
      "type": "string",
      "description": "The type of the error. Errors which contains the same error type can be grouped."
    },
    "error-log": {
      "type": "string",
      "description": "The error log. Multiple lines are supported by adding \r\n line terminators."
    },
    "error-context": {
      "type": "string",
      "description": "Context information about the condition under which the error occurred. Multiple lines are supported by adding \r\n line terminators."
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{
  "subject":"add-error",
  "statistic-id":-1,
  "error-subject":"Connection refused (Connection refused)",
  "error-severity":"error",
  "error-type":"java.net.ConnectException",
  "error-log":"2020-08-01 21:24:51.662 | main-HTTPClientProcessing[3] | INFO | GET http://192.168.0.111/\r\n2020-08-01 21:24:51.670 | main-HTTPClientProcessing[3] | ERROR | Failed to open or reuse connection to 192.168.0.111:80 |
 java.net.ConnectException: Connection refused (Connection refused)\r\n",
  "error-context":"HTTP Request Header\r\nhttp://192.168.0.111/\r\nGET / HTTP/1.1\r\nHost: 192.168.0.111\r\nConnection: keep-alive\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate\r\n",
  "timestamp":1596309891672
}

Note: Do not use this error object for sample-event-time-chart statistics; use the Add Sample Error object instead.

Add Test Result Annotation Exec Event Object

Add an annotation event to the test result.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddTestResultAnnotationExecEvent",
  "type": "object",
  "required": ["subject", "event-id", "event-text", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-test-result-annotation-exec-event'"
    },
    "event-id": {
      "type": "integer",
      "description": "The event id, valid range: -1 .. -999999"
    },
    "event-text": {
      "type": "string",
      "description": "the event text"
    },
    "timestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the event"
    }
  }
}

Example: 
{
  "subject":"add-test-result-annotation-exec-event",
  "event-id":-1,
  "event-text":"Too many errors: Test job stopped by plug-in",
  "timestamp":1634401774410
}

Notes:

  • The event id must be in the range from -1 (minus one) to -999999.
  • Events with the same event id are merged to one event.

[End of Interface Specification]

Example

HTTP Test Wizard Plug-In

This plug-in “measures” a random value, and is executed in this example as the only part of an HTTP Test Wizard session.

The All Purpose Interface JSON objects are written using the corresponding methods of the com.dkfqs.tools.javatest.AbstractJavaTest class. This class is located in the JAR file com.dkfqs.tools.jar which is already predefined for all plug-ins.

import com.dkfqs.tools.javatest.AbstractJavaTest;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginContext;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginInterface;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginSessionFailedException;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginTestFailedException;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginUserFailedException;
import com.dkfqs.tools.logging.LogAdapterInterface;
import java.util.ArrayList;
import java.util.List;
// add your imports here

/**
 * HTTP Test Wizard Plug-In 'All Purpose Interface Example'.
 * Plug-in Type: Normal Session Element Plug-In.
 * Created by 'DKF' at 24 Sep 2021 22:50:04
 * DKFQS 4.3.22
 */
@AbstractJavaTestPluginInterface.PluginResourceFiles(fileNames={"com.dkfqs.tools.jar"})
public class AllPurposeInterfaceExample implements AbstractJavaTestPluginInterface {
	private LogAdapterInterface log = null;
	
	private static final int STATISTIC_ID = 1000;
	private AbstractJavaTest javaTest = null;       // reference to the generated test program
	
	/**
	 * Called by environment when the instance is created.
	 * @param log the log adapter
	 */
	@Override
	public void setLog(LogAdapterInterface log) {
		this.log = log;
	}
	
	/**
	 * On plug-in initialize. Called when the plug-in is initialized. <br>
	 * Depending on the initialization scope of the plug-in the following specific exceptions can be thrown:<ul>
	 * 	<li>Initialization scope <b>global:</b> AbstractJavaTestPluginTestFailedException</li>
	 * 	<li>Initialization scope <b>user:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException</li>
	 * 	<li>Initialization scope <b>session:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginSessionFailedException</li>
	 * </ul>
	 * @param javaTest the reference to the executed test program, or null if no such information is available (in debugger environment)
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws AbstractJavaTestPluginSessionFailedException if the plug-in signals that the 'user session' has to be aborted (abort current session - continue next session)
	 * @throws AbstractJavaTestPluginUserFailedException if the plug-in signals that the user has to be terminated
	 * @throws AbstractJavaTestPluginTestFailedException if the plug-in signals that the test has to be terminated
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onInitialize(AbstractJavaTest javaTest, AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws AbstractJavaTestPluginSessionFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginTestFailedException, Exception {
		// log.message(log.LOG_INFO, "onInitialize(...)");
		
		// --- vvv --- start of specific onInitialize code --- vvv ---
		if (javaTest != null) {
		    this.javaTest = javaTest;
		    
		    // declare the statistic
		    javaTest.declareStatistic(STATISTIC_ID, 
            		                  AbstractJavaTest.STATISTIC_TYPE_SAMPLE_EVENT_TIME_CHART,
            		                  "My Measurement",
            		                  "",
            		                  "My Response Time",
            		                  "ms",
            		                  STATISTIC_ID,
            		                  true,
            		                  "");
		}
		// --- ^^^ --- end of specific onInitialize code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

	/**
	 * On plug-in execute. Called when the plug-in is executed. <br>
	 * Depending on the execution scope of the plug-in the following specific exceptions can be thrown:<ul>
	 * 	<li>Initialization scope <b>global:</b> AbstractJavaTestPluginTestFailedException</li>
	 * 	<li>Initialization scope <b>user:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException</li>
	 * 	<li>Initialization scope <b>session:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginSessionFailedException</li>
	 * </ul>
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws AbstractJavaTestPluginSessionFailedException if the plug-in signals that the 'user session' has to be aborted (abort current session - continue next session)
	 * @throws AbstractJavaTestPluginUserFailedException if the plug-in signals that the user has to be terminated
	 * @throws AbstractJavaTestPluginTestFailedException if the plug-in signals that the test has to be terminated
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onExecute(AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws AbstractJavaTestPluginSessionFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginTestFailedException, Exception {
		// log.message(log.LOG_INFO, "onExecute(...)");
		
		// --- vvv --- start of specific onExecute code --- vvv ---
		if (javaTest != null) {
		    
		    // register the start of the sample 
		    javaTest.registerSampleStart(STATISTIC_ID);
		    
		    // measure the sample
		    final long min = 1L;
		    final long max = 20L;
		    long responseTime = Math.round(((Math.random() * (max - min)) + min));
		    
		    // add the measured sample to the statistic
		    javaTest.addSampleLong(STATISTIC_ID, responseTime);
		    
		    /*
		    // error case
		    javaTest.addSampleError(STATISTIC_ID,
                                    "My error subject",
                                    AbstractJavaTest.ERROR_SEVERITY_WARNING,
                                    "My error type",
                                    "My error response text or log",
                                    "");
            */
		}
		// --- ^^^ --- end of specific onExecute code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

	/**
	 * On plug-in deconstruct. Called when the plug-in is deconstructed.
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onDeconstruct(AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws Exception {
		// log.message(log.LOG_INFO, "onDeconstruct(...)");
		
		// --- vvv --- start of specific onDeconstruct code --- vvv ---
		// no code here
		// --- ^^^ --- end of specific onDeconstruct code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

}


Debugging the Interface

  1. In order to debug the processing of the reported data of the interface, activate the “Debug Measuring” checkbox when starting the test job.
  2. After the test job has completed, select in the Test Jobs menu at the corresponding test job the option “Job Log Files” and then select the file “DataCollector.out”.
  3. Review the “DataCollector.out” file for any errors. Lines which contain “| Tailer data” reflect the raw reported data.


1 - Developing a JUnit Monitoring Test

This example shows a JUnit Test that executes a DNS query to resolve and verify a hostname of a domain.

Abstract

This example shows a JUnit Test that executes a DNS query to resolve a hostname of a domain. In addition, the received IP V4 address is verified. You can use this example to verify that a DNS hostname is currently defined and pointing to the correct IP V4 address.

If you scroll through the code below, you will notice that the Java class is not part of a Java package and extends AbstractJUnitTest, and that there is a small amount of code at @Before and @After. This is necessary so that the JUnit test can run on the RealLoad infrastructure.

The code at @Test corresponds to a normally programmed JUnit test, with the only exception that a special logger is used (class MemoryLogAdapter, method log.message).

TestDnsARecord.java [Source Code]

To compile the code, the JAR files com.dkfqs.tools.jar and junit-4.13.2.jar are required. The Javadoc of com.dkfqs.tools is published at https://download.realload.com
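If you prefer to check the compilation locally first, a plain javac invocation along these lines should work (the JAR locations are hypothetical; on Windows use ';' instead of ':' as the classpath separator):

javac -cp com.dkfqs.tools.jar:junit-4.13.2.jar TestDnsARecord.java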

A simple test case was deliberately chosen so that you can become familiar with the RealLoad product. We recommend that you carry out the steps described here yourself. After you know how to get a JUnit test running, you can create any JUnit test yourself, or migrate existing JUnit tests to RealLoad - simply by replacing the @Test method of this example.

1. Create the File TestDnsARecord.java in the Project Tree

You must first create an empty file named TestDnsARecord.java in a Project’s ‘Resource Set’. ‘Resource Sets’ are something like sub-directories of a project, which contain all the files necessary to define and execute a test. New Projects and Resource Sets can be created in the Projects Menu.

Create a new Project and Resource Set: JUnit Tests / Test DNS A Record. Create new Project with Resource Set

Create a new (empty) file: TestDnsARecord.java in the Resource Set. Create Empty File

After the empty file has been created, click on ‘Edit File’ and paste the source code shown above into the editor. Edit File

Then save the file and close the editor. Save File

2. Compile the TestDnsARecord.java File and Define the RealLoad ‘Test’

Click on the ‘Compile Java File’ icon. Compile File

Select the JAR files com.dkfqs.tools.jar and junit-4.13.2.jar and click the ‘Compile’ button. Compile File

After the file has been successfully compiled click on ‘Define or Update JUnit Test’. Define or Update JUnit Test

Select the @Test method (in this case resolveARecord) and click ‘Select’. Leave the switches at ‘Additional Required JAR Libraries’ as shown (default setting) and then click on ‘Define Test’. Define Test

Enter the ‘Test Description’ and click on ‘Define Test’. Define Test

The RealLoad ‘Test’ is now defined. From here you can now create both a Load Test Job and a Monitoring Job. RealLoad ‘Test’ is Defined

3. Verify the Test

To verify that the test works correctly, first start it as a Load Test Job with only one simulated user and one loop. Click ‘Define Test Job’. Define Test Job

Click ‘Continue’. Define Load Test Job

Select the Measuring Agent on which the Load Test Job will be executed and turn on the switch ‘Debug Execution’. Then click on ‘Define Load Test Job’. Define Load Test Job

The Load Test Job is then created and in the state ‘Defined’. Click on ‘Start Test Job’. Start Test Job

The settings of the Load Test Job are displayed, which can also be modified here (not necessary in this case). Click ‘Start Test Job’. Start Test Job

Wait a few seconds until you receive the notification that the Load Test Job has completed. Then delete the notification. Load Test Job completed Notification

From the Load Test Job drop-down menu, select ‘Job Log Files’. Select ‘Job Log Files’

Select the Job Log File ‘users.out’. Select ‘users.out’ File

If no error or Java stack trace is displayed, the Load Test Job was executed successfully. Check ‘users.out’ File

4. Troubleshooting a Test

If the job log file users.out shows an error, you need to modify the Java code of the test, or you need to add additional JAR files required to run the test. After editing the Java code, compile the Java file again and then click the ‘Define or Update JUnit Test’ button after successful compilation. Then jump directly into the Load Test Jobs menu, ‘clone’ the Load Test Job and run it again. ‘Clone’ the Load Test Job

5. Define the Monitoring Job

If this is your first Monitoring Job, you must first create a ‘Monitoring Group’. Navigate to Monitoring, click the ‘Configuration’ tab and then click ‘Add Monitoring Group’. Note: There is also separate help for monitoring. Navigate to Monitoring Configuration

Enter the ‘Group Title’ and select at least one ‘Measuring Agent’ on which the Monitoring Job(s) will be executed. Then click ‘Add Monitoring Group’. Add Monitoring Group

In the ‘Monitoring Group’ click ‘Monitoring Jobs’ and then click ‘Add Monitoring Job’. Add Monitoring Job

Select the ‘Test’ of the Monitoring Job. Select ‘Test’ of Monitoring Job

Turn on the switch ‘Debug Execution’ and click ‘Define Monitoring Job’. Define Monitoring Job

Enable the execution of the Monitoring Group and navigate to ‘Dashboard’. Enable Monitoring Group and Navigate to Dashboard

The Monitoring Job is now defined and will be executed periodically. For additional help configuring monitoring (e.g. adding ‘Alert Devices’), see Monitoring Help. Monitoring Job is Defined

6. Optional: Create a Generic DNS A-Record Test

As you saw in the code before, the following 3 variables are defined as constants:

private static final String dnsServer = "8.8.8.8";  // use Google LLC public DNS server
private static final String dnsHostnameToResolve = "www.realload.com";  // the DNS host name to resolve
private static final String expectedIpV4Address = "83.150.39.46";       // the expected IP V4 address of the resolved host name

This section now describes how you can dynamically initialize these variables using User Input Fields.

The ‘User Input Fields’ values can be entered when starting a Load Test Job and when defining a Monitoring Job. This makes the test reusable, i.e. you can add the same ‘Test’ multiple times to multiple Monitoring Jobs, but use different values for the ‘User Input Fields’ for each Monitoring Job.

First you have to create a file that contains the definitions of the User Input Fields. To do this, invoke the User Input Fields Wizard. Invoke the ‘User Input Fields Wizard’

Enter (add) the 3 User Input Fields and save the file at JUnit Tests / Test DNS A Record / InputFields_TestDnsARecordGeneric.json

GUI Label             Variable Name          Input Type   Default Value
DNS Server            dnsServer              String       8.8.8.8
Hostname to Resolve   dnsHostnameToResolve   String       www.realload.com
Expected IP Address   expectedIpV4Address    String       83.150.39.46

Add the 3 User Input Fields

Then copy the file TestDnsARecord.java to TestDnsARecordGeneric.java and then edit TestDnsARecordGeneric.java. Copy Java File


Edit TestDnsARecordGeneric.java

At the beginning of the Java class replace the class name and modify the 3 variables:

public class TestDnsARecordGeneric extends AbstractJUnitTest
{
    private String dnsServer;  // example: 8.8.8.8
    private String dnsHostnameToResolve; // example: www.realload.com
    private String expectedIpV4Address; // example: 83.150.39.46

Replace also the class name in the constructor:

    /**
     * Constructor.
     */
    public TestDnsARecordGeneric() {
        // disable DNSJava to search for default DNS servers (DNSJava is integrated in com.dkfqs.tools)
        System.setProperty("dnsjava.noDefaultDnsServers", "true");
    }

And modify the code @Before as follows:

    /**
     * Prepare the test.
     */
    @Before
    public void setUp() {
        if (isArgDebugExecution()) {
            log.setLogLevel(LOG_DEBUG);
        }

        // get the user input fields
        dnsServer = getUserInputField("dnsServer", null);
        if (dnsServer == null) {
            throw new RuntimeException("User input field 'dnsServer' missing");
        }
        dnsHostnameToResolve = getUserInputField("dnsHostnameToResolve", null);
        if (dnsHostnameToResolve == null) {
            throw new RuntimeException("User input field 'dnsHostnameToResolve' missing");
        }
        expectedIpV4Address = getUserInputField("expectedIpV4Address", null);
        if (expectedIpV4Address == null) {
            throw new RuntimeException("User input field 'expectedIpV4Address' missing");
        }

        openAllPurposeInterface();
    }

Then save and compile TestDnsARecordGeneric.java. Compile TestDnsARecordGeneric.java

At ‘Define Test’ turn on the switch InputFields_TestDnsARecordGeneric.json at ‘Additional Required Resource Files’. Turn on Switch InputFields_TestDnsARecordGeneric.json

Define the Test and the corresponding Load Test Job as usual, and then start the Load Test Job. You can now enter the values for the 3 User Input Fields. Load Test Job with User Input Fields

Check the log file users.out after the Load Test Job is completed. Check Job Output File

When defining a Monitoring Job, you can also enter the values for the User Input Fields. Monitoring Job with User Input Fields

Final Dashboard

7. Conclusion and Prospects

As you have seen, a JUnit Test can be run as both a Load Test Job and a Monitoring Job.

Additionally, a Test Job Template can also be defined from any ‘Load Test Job’, which can then be part of a Test Suite that is executed as Regression Test. This means that you can add multiple JUnit Tests to a Test Suite and execute them in a single run as a Regression Test.

Last but not least, note that JUnit Tests can also be executed with many virtual users, up to several thousand - for Monitoring Jobs, for Load Tests and for Regression Tests. This is a generic feature of the RealLoad architecture which applies to all kinds of RealLoad ‘Tests’.