1 - Getting started

Getting started with Real Load

This section of the documentation will walk you through the first steps to get started with the Real Load product.

Follow the next sections of the document and you should be able to run your first basic load test script using the SaaS Evaluation scenario within 20 minutes or so.

Please let us know if you encounter any issues while getting started, as that will help us update this documentation to make it as clear and user friendly as possible.

1.1 - Deployment types

Choose the right deployment type for you

First of all you’ll have to decide which Real Load deployment type best suits your needs. These guidelines might assist you in making an informed decision.

If your intent is to first evaluate the product, we strongly suggest choosing the “quick evaluation” option.

Architecture

There are two key components that make up the Real Load application:

The Portal Server

  • Exposes the main GUI to end users.
  • All tasks to prepare test scripts are performed from here.
  • Used to trigger test executions and visualize results.

Measuring Agents

  • Load test scripts are triggered from agents.
  • More than one agent can be deployed.
  • The agent needs to be reachable from the portal server and must be able to reach the servers to be load tested.

Depending on your network topology, the location of the Measuring Agent (externally exposed or not) will be the main factor dictating whether you can use our SaaS solution (cloud hosted) or you’ll have to proceed with an on-premises deployment.

The following diagram summarizes the architecture:

Architecture

Quick evaluation

If you’d just like to perform an initial evaluation of the Real Load product, your best option is to use our SaaS offering. Simply create an account in our cloud-based portal and, once you’re set up, you’ll be able to prepare a load testing script and perform a low-volume test against an internet-exposed server.

Requirements / Constraints

  • Perform a functional evaluation without having to deploy any software on-premises.
  • Do not want to incur any cloud-related (AWS / Azure) costs.

Infrastructure Requirements

  • The website you want to run the load test against needs to be publicly reachable.

SaaS offering (all cloud based)

If all the servers you’re planning to run your load test against are cloud hosted (AWS, Azure, etc.) you might be able to use our existing AWS or Azure agent images as load test generators, controlled by our cloud-based portal.

Requirements / Constraints

  • The website you want to run the load test against is hosted in the cloud but doesn’t need to be publicly reachable. It does need to be reachable from a cloud-deployed Real Load agent.
  • You’ll have to start a Real Load agent instance (Virtual Machine) under your own cloud account.

Infrastructure Requirements

  • You’ll have to run some instances of our agent AWS or Azure images under your own cloud account. A suitable AWS or Azure subscription will be required. These machines need to be assigned public IP addresses so that they are reachable from our Portal Server.

Hybrid offering (controller on-premises, agents cloud or on-premises based)

If at least some of the servers you’re planning to run your load test against are hosted on an internal network that is not externally reachable, and exposing the Real Load agents to the internet is not an option, you’ll have to deploy the Real Load portal on-premises so that both internal and any external Real Load agents are reachable.

Requirements / Constraints

  • At least some of the servers you want to load test are not exposed externally.
  • Exposing the Real Load agent externally is not an option.

Infrastructure Requirements

  • You’ll have to deploy the Real Load portal internally on a supported operating system.
  • You’ll have to deploy the Real Load agent(s) internally on a supported operating system.
  • If required, you might have to run some instances of our agent AWS or Azure images under your own cloud account. A suitable AWS or Azure subscription will be required.

On-premises offering

You can also run all of your Real Load infrastructure internally. The Real Load software doesn’t require any connectivity to external systems in this deployment scenario, which is otherwise very similar to the hybrid scenario above.

Requirements / Constraints

  • At least some of the servers you want to load test are not exposed externally.
  • Exposing the Real Load agent externally is not an option.

Infrastructure Requirements

  • You’ll have to deploy the Real Load portal internally on a supported operating system.
  • You’ll have to deploy the Real Load agent(s) internally on a supported operating system.

1.2 - Portal Sign Up and Login

To get started you’ll need to set up an account for yourself at the Real Load portal. See here how…

If you already have an account, you can login at this link:

https://portal.realload.com

Signing Up

In order to log in to the portal you’ll first have to set up an account. Go to the portal URL and click on the Sign Up button, or go to this URL: https://portal.realload.com/SignUp

You’ll need to provide:

  • Email address.
  • Mobile phone number.

No credit card is required; sign-up is completely free.

Sign Up Process

When you sign up, you will get 20 free Cloud Credits, which can be used to start Measuring Agents (= load generators) directly from the portal server.

Video

This is a brief description of the information required to sign up to the Real Load portal.

Step 1

Provide your details, including email and mobile number.

Step 2

Validate your email address.

Step 3

Validate your mobile number.

Step 4

Configure your nickname and password.

Setting Up Measuring Agents

In order to debug and execute a load test, you’ll need at least one Measuring Agent (= load generator).

There are 3 ways to set up Measuring Agents:

  1. If you have Cloud Credits (20 free CC already credited at sign up), you can easily launch cloud-based Measuring Agents directly from the portal server. Additional Cloud Credits can be purchased at the Real Load Store.
  2. You can launch cloud-based Measuring Agents with the Desktop Companion (locally installable Windows program) under your own AWS account, whereby you can use pre-built AWS EC2 AMIs.
  3. You can also download the Measuring Agent software and install it on your own servers. After that you can register your Measuring Agent(s) at the portal server.

Options 2 and 3 do not require any Cloud Credits.

Video: Define and run a simple test with a Measuring Agent instance launched using Cloud Credits.

1.3 - Create a simple REST GET test

This section gets you kickstarted with a simple REST test that executes requests on an API resource via a GET request.

Pre-requisites

To configure and execute this simple test you’ll need:

  • Access to the Real Load portal. If you haven’t signed up yet, do so first: Sign Up
  • A REST API URL you can test with a GET request. Alternatively, you can use the REST API mentioned below.
  • Approx. 20 minutes of your time
  • A cup of tea or coffee

Prepare the project

  • Create a new project called “Simple REST test” (… or something that makes sense to you)
  • Click on the create project icon (pointed at by the green arrow) and then enter a suitable name for the project.

Create Project

Then define a new resource set in the freshly created project. I’ll call this “GET tests”, but again feel free to choose any name that makes sense to you.

Create Resource Set

Create the test script using the HTTP test wizard

Now we’ll create the test script with the help of the HTTP test wizard.

Enter a name for the test script and click on OK.

Then click on “Add” and select “URL” from the menu. This is to add a new URL request to the test script.

Enter the URL of the REST API endpoint, for example https://www.realload.com/RealLoadListImagesWS/rest/AWSImages

  • In the “Verify HTTP Response” section, add some validation assertions. In this case, assert that the response code is HTTP 200 and that the string “ap-southeast-2” appears in the response body.
  • Then click on “Add URL”.
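
For illustration only, the same two checks can be expressed as plain Java. This is not the code the wizard generates; only the expected status code and the “ap-southeast-2” string come from the example above, everything else is a hypothetical sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class ResponseCheck {
    // Returns the list of assertion failures; an empty list means the response passed.
    static List<String> validate(int statusCode, String body) {
        List<String> failures = new ArrayList<>();
        if (statusCode != 200) {
            failures.add("Expected HTTP 200 but got " + statusCode);
        }
        if (!body.contains("ap-southeast-2")) {
            failures.add("Response body does not contain \"ap-southeast-2\"");
        }
        return failures;
    }

    public static void main(String[] args) {
        // A response with status 200 and the expected string passes both checks.
        System.out.println(validate(200, "[{\"region\":\"ap-southeast-2\"}]").isEmpty()); // true
        // A failing response accumulates one failure message per violated assertion.
        System.out.println(validate(500, "Internal Server Error").size()); // 2
    }
}
```

Both assertions must hold for the request to count as successful; otherwise the debugger (and later the load test) reports the failure.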

Debug your test

You can now test with the debugger that your test does what it is supposed to do.

Click on “Debug Session”:

Then click on Next Step to execute the REST request. Note the update in the area highlighted in red.

You can inspect the response header and body by clicking on the icons pointed at by the arrows:

The response content (body) will appear in the debugger. If all looks good, close the window.

Save and compile the test

The last step is to save the test and then execute it. First exit from the debugger window:

… then save the session using a suitable name. Also select the resource group the test should be saved to.

Now generate the Java code that will contain your test logic which will be executed on the Measuring Agent. Click on “Generate Code”:

Then click on “Generate Source Code” and “Compile & Generate Jar”.

Now that the test code is compiled, you can define a new test by clicking on “Define New Test”.

You’ll have to give a name to this test job, enter something meaningful to you:

Execute the test job

You’re now ready to execute the test. Click on the test you’ve just defined to create a new test job:

… select “continue”…

… select the agent to run the test job from.

You’re now ready to start the test. Click on “Test jobs” then click on “Start Test Job”:

… select the number of Virtual Users to simulate, the duration of the test and the think time between test loops:

Monitor the test job

Once the test job is running click on the “Monitor Jobs” menu item:

… and you’ll be able to see measurements related to your test job:

Done, congrats, you’ve run your first load test with Real Load.

To learn more, we suggest heading to the User Guide section where you’ll find detailed documentation on the steps outlined in this document.

2 - Overview

What is Real Load?

Real Load is the future ready today - a next-generation load and stress testing tool.

At the core of the Real Load product is a universal measurement interface that supports capturing data from stress tests run against anything that has a response time. The product is highly scalable and can be used to run tests from a few hundred simulated users up to an almost unlimited load with millions of concurrent users.

In addition, you have access to advanced features for generating HTTP/S stress tests, such as HTTP/S proxy recorders and an HTTP test wizard.

The executed load tests can either be generated automatically using the wizards, or alternatively programmed by hand, with all libraries and interfaces fully documented in detail.

Regardless of what type of test you are running, all data reported to the universal interface during a test are displayed directly in real time in the form of statistics and charts. Results from load-releasing clusters are even displayed in real time by combining the cluster members’ data on the fly.

Real Load itself is written in Java, but it can execute load and stress tests written in any programming language, since the universal measurement interface is file-based and supports any programming language that is capable of writing data to a file.

3 - Release Notes

Real Load Release Notes

4.7.3 | 2022-8-28

  • New Features:
    • HTTP Test Wizard: New menu “URL Explorer” added. The URL Explorer shows all the details of the recorded HTTP session data and also supports searching within it. Furthermore, the URL Explorer also contains a “Variables Wizard” which displays all distinct values clearly sent in an HTTP session and makes it much easier to extract and assign session variables.
    • HTTP Test Wizard / Review and Edit URL Settings: This menu has been revised and enhanced. Two new auto-configurations wizards are available with which the content of HTTP responses can be automatically checked for keywords and with which the parallel execution of HTTP calls can be configured automatically.
    • HTTP Test Wizard / Debugger: UI improved.
    • A new “Text Phrase Tool” has been added. This tool determines the most meaningful human-readable text phrases or keywords from any given content (HTML/JSON/XML). This tool has also been integrated into the HTTP Test Wizard.
    • Remote Proxy Recorder: “URL Filter Quick Setting” menu added.
    • Remote Proxy Recorder: “Convert Recording to HTTP Test Wizard Session” improved and enhanced.
    • Portal Server (all menus): Large screen support has been improved.
    • Admin Menu: Mobile Companion API integrated and “Mobile Companion App - Test & Logs” menu added.
  • Bug Fixes:
    • HTTP Test Wizard / Generate Code: The measured statistic IDs of executed load tests have now the same values as the indexes of the HTTP Test Wizard session elements.
    • Load test programs generated by the HTTP Test Wizard did not correctly measure the response time in some cases when an HTTP response had no response content (bug fix in com.dkfqs.tools.jar, new version 2.3.0).
  • Portal Server version 4.7.3 requires now Measuring Agent and Cluster Controller version 4.5.0, and Remote Proxy Recorder version 1.0.0

4.6.8 | 2022-5-19

  • Bug Fixes:
    • Adding cluster members to AWS/EC2 ‘Cluster Controller’ instances which were launched by cloud credits failed.
    • The status of cluster jobs was not updated by the real-time monitor.
    • Deleting test jobs failed if the corresponding cluster was no longer defined.
    • Starting cloud instances is now prevented if the portal’s internal monitoring system is no longer working.
  • Portal Server version 4.6.8 requires now Measuring Agent version 4.2.0 (bug fix: incorrect measurement results were reported sporadically)

4.6.7 | 2022-5-18

  • New Features:
    • AWS/EC2 ‘Remote Proxy Recorders’ can now be started by cloud credits.
    • Preview of ‘Remote Proxy Recorder CA Root Certificate’ added when downloading a remote proxy recorder CA root certificate.
    • A 40-second countdown has been added to the left navigation when launching EC2 instances by cloud credits.

4.6.5 | 2022-5-10

  • New Features:
    • Support for team member accounts added:
      • Depending on the license (price plan), a normal user (main user) can also define sub-users (so-called team member accounts).
      • Team members can login as usual, with their email and password.
      • Team members can either have the same rights as the main user or alternatively only have read rights.
      • The team members (including the main user) can communicate with each other via push messages. The portal also shows which team member is currently online. Offline team members receive an email instead of an online push notification.
      • A list of all team members is provided for each team member (including the main user), which contains the profile text and profile image of each team member.
      • Team member accounts are not portal-wide ‘public users’, and cannot act as public technical expert.
  • Bug Fixes:
    • Deleting test jobs failed if the corresponding measuring agent was no longer defined.

4.6.4 | 2022-4-18

  • New Features:
    • Support of special licenses to top up the amount of ‘Cloud Credits’.
    • License restrictions on the maximum number of Measuring Agents and Remote Proxy Recorders no longer apply when such components are launched by spending Cloud Credits.
    • The Web Browser interface has been optimized to support large screens.

4.6.2 | 2022-4-12

  • New Features:
    • Support of ‘Sign in with Microsoft’ added.

4.6.0 | 2022-4-10

  • New Features:
    • The list of excluded AWS/EC2 regions can now be edited in the admin menu.
    • The additional data transfer costs of AWS/EC2 instances which are started via the Portal Server are now billed via Cloud Credits.
    • A new Measuring Agent version 4.1.0 is available which is able to automatically configure the Java memory of the ‘Data Collector’ and the Java memory of executed Java test programs.
  • Portal Server version 4.6.0 supports Measuring Agent version 4.0.4 and version 4.1.0

In order to enable the automatic Java memory configuration of a Measuring Agent 4.1.0, the following Java program arguments must be set in the startup script: -autoAdjustMemory -osReservedMemory 1GB

Furthermore, the Java memory of the Measuring Agent should be set in the startup script as shown in the table below:

OS Physical Memory    Measuring Agent Java -Xmx setting
< 2 GiB               256m
2..3 GiB              512m
4..7 GiB              512m
8..15 GiB             1536m
16..31 GiB            3072m
32..63 GiB            4096m
64..96 GiB            6144m
> 96 GiB              8192m

A fractional number of GiB should be rounded up (e.g. 7.7 GiB = 8 GiB = 1536m).

Example: sudo -H -u dkfqs bash -c 'CLASSPATH=/home/dkfqs/agent/bin/bcpkix-jdk15on-160.jar:/home/dkfqs/agent/bin/bcprov-jdk15on-160.jar:/home/dkfqs/agent/bin/bctls-jdk15on-160.jar:/home/dkfqs/agent/bin/DKFQSMeasuringAgent.jar;export CLASSPATH;nohup java -Xmx512m -DdkfqsMeasuringAgentProperties=/home/dkfqs/agent/config/measuringagent.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.measuringagent.internal.StartDKFQSMeasuringAgent -autoAdjustMemory -osReservedMemory 1GB 1>/home/dkfqs/agent/log/MeasuringAgent.log 2>&1 &'
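
The table lookup, including the round-up rule, can be sketched as a small helper. This is a hypothetical convenience function for illustration, not part of the Real Load software; the thresholds and values are taken directly from the table above:

```java
public class XmxLookup {
    // Maps physical memory (GiB, rounded up) to the suggested Measuring Agent
    // -Xmx value from the table above.
    static String suggestedXmx(double physicalGiB) {
        int gib = (int) Math.ceil(physicalGiB);  // fractional GiB rounds up
        if (gib < 2)   return "256m";
        if (gib <= 3)  return "512m";
        if (gib <= 7)  return "512m";
        if (gib <= 15) return "1536m";
        if (gib <= 31) return "3072m";
        if (gib <= 63) return "4096m";
        if (gib <= 96) return "6144m";
        return "8192m";
    }

    public static void main(String[] args) {
        // 7.7 GiB rounds up to 8 GiB, per the rule above.
        System.out.println(suggestedXmx(7.7)); // 1536m
    }
}
```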

4.5.0 | 2022-3-27

  • New Features:
    • Amazon AWS/EC2 instances of ‘Measuring Agents’ and ‘Cluster Controllers’ can now be started directly in the Portal Server - without the customer needing an Amazon AWS account. The costs of such started AWS/EC2 instances are charged to the customer via so-called ‘Cloud Credits’, whereby ‘Cloud Credits’ can be purchased as an integral part of a license or separately. For customers who register for the first time at the Portal Server, a free amount of ‘Cloud Credits’ will be credited to allow them to try out this functionality. Note: It is still fully supported to launch Amazon AWS/EC2 instances of ‘Measuring Agents’ and ‘Cluster Controllers’ with your own Amazon AWS account by using the standalone tool ‘Desktop Companion’.
    • User Top Navigation: Link to Download Server and to Real Load Store added (both links are manageable by the administrator menu).
    • HTTP Test Wizard: Review and Edit URLs menu added.
    • Launched Cloud Instances menu added.
    • Cloud Credit Statement menu added.
    • Measuring Agents & Cluster Controllers menu: The ‘List of Predefined AWS/EC2 Measuring Agent AMIs’ no longer shows incompatible AMI versions.
    • The following new functions have been added for administrators:
      • User Accounts menu: Cloud Credit Statement menu per user added.
      • Server Settings menu: AWS/EC2 Cloud Provider API Settings section added.
      • Customize Auth Users Navigation menu: Download Components and Shop sections added.
      • Cloud Providers & Instance Type Credit Costs menu added.
      • Inspect AWS/EC2 Cloud Instances menu added.
      • Launched Cloud Instances menu added.
      • Cloud Credit Transactions menu added.

Advantages and disadvantages of ‘Cloud Credits’ versus launching AWS/EC2 instances by the ‘Desktop Companion’:

Benefits of ‘Cloud Credits’:

  • Full cost control. Cloud instances are automatically terminated after a selected time.
  • Easy to use. Seamless integration with the portal server.
  • No Amazon AWS account required and therefore no security issue for unauthorized AWS access.

Advantages of the ‘Desktop Companion’:

  • Lower costs for cloud instances compared to ‘Cloud Credits’ (only AWS self-costs are charged).
  • Suitable for customers who use cloud instances frequently and are aware of AWS/EC2 costs involved.

4.4.4 | 2022-1-30

  • New Features:
    • Load-releasing Clusters of Measuring Agents are now supported. In order to operate a cluster, a separate process/component is required (so-called “Cluster Controller”) which is not part of the portal server and is operated by the customer himself. Cluster-Features:
      • Multiple clusters can be registered in the portal server using the UI, and the cluster members (measuring agents) can be assigned to the clusters with simple clicks.
      • Depending on the power of the Cluster Controller, a cluster can have several hundred members, which allows performing load tests of almost unlimited strength (up to millions of concurrent users).
      • A cluster job can be started as easily as a normal job. The only difference is that a cluster is selected for execution instead of a single measuring agent.
      • Optionally, the content of input files can be automatically divided among the cluster members (e.g. if they contain user account credentials).
      • Cluster jobs support all runtime features like normal jobs: suspend job, resume job, stop job and kill job. Entering annotations at runtime is also supported.
      • The real-time display of cluster jobs is exactly the same as for normal jobs, but the statistics displayed at real time already contain the combined values of all cluster members. In addition, a table about the current operating status of each cluster member is available.
      • After a cluster job has been completed, the test result contains the combined values of all cluster members (analogous to the real-time display). In addition, the individual test results of the cluster members can also be displayed.
    • The following functions have been added to the “Remote User API”:
      • getMinRequiredMeasuringAgentVersion
      • getMinRequiredClusterControllerVersion
      • getMinRequiredProxyRecorderVersion
      • getMeasuringAgentClusters
      • getClusterControllers
      • pingClusterController
      • addMeasuringAgentCluster
      • addMemberToMeasuringAgentCluster
      • removeMemberFromMeasuringAgentCluster
      • pingMeasuringAgentClusterMembers
      • setMeasuringAgentClusterActive
      • deleteMeasuringAgentCluster
    • Portal Server “Measuring Agents” menu: The AWS/EC2 popup highlights now incompatible AMI versions.
    • Portal Server “Test Jobs” menu: The log files of executed jobs can now be copied directly to the project tree.
  • Bug Fixes:
    • A test job can now also have the state “execution failed”.
  • Portal Server version 4.4.4 requires now Measuring Agent version 4.0.4

4.3.23 | 2021-11-13

  • New Features:
    • HTTP Test Wizard: New Session Element ‘Outbound IP Address’ added. For this element to be used, multiple valid IP addresses must be assigned to the network interface of the Measuring Agent(s).
    • HTTP Test Wizard: Support of specific HTTP processing timeout per URL added.
    • Test Results: Support of annotation and annotation events added. The annotation and annotation events can be added at runtime during the test and are shown in the test result. In addition, annotation events can also be reported via the All Purpose Interface and generated, for example, by HTTP Test Wizard plug-ins.
  • Bug Fixes:
    • The cached HTTP Test Wizard Session in the Portal Server UI is now updated when a variable is defined, modified or deleted, and when a variable extractor or a variable assigner is deleted.
    • Invalid Java code was generated by the HTTP Test Wizard if the error handling of URLs was set to ‘Continue as usual’.
    • Performance bottleneck fixed in com.dkfqs.tools.http.HTTPClient (occurred since previous version 4.3.22).
    • An incorrect error exception was thrown in com.dkfqs.tools.crypto.EncryptedSocket when an SSL handshake timeout occurred.
    • Realtime charts sometimes showed inexact measurement results.
  • Portal Server version 4.3.23 requires now Measuring Agent version 3.9.33 and DKFQS Tools version 2.2.25

4.3.22 | 2021-10-24

  • Documentation: All Purpose Interface added.
  • The All Purpose Interface has been extended by 3 new statistics types that can be declared at runtime:
    • average-and-current-value : An average and current value
    • efficiency-ratio-percent : An efficiency ratio in percent (0..100%)
    • throughput-time-chart : A chart of a throughput per second
  • The measurement results of the HTTP Test Wizard now contain the following additional test-specific values (if the executed HTTP Test Wizard session contains URL session elements):
    • Total Bytes Sent
    • Total Bytes Received
    • Network Throughput in Mbps (real-time: current value, test result: average value and chart)
    • Average TCP Connect Time in milliseconds (real-time: + latest value)
    • Average SSL Handshake Time in milliseconds (real-time: + latest value)
    • HTTP Keep-Alive Efficiency (0..100%)
  • Bug fix: The time in the name of the test result files is now always set in the time zone of the portal server, regardless of the time zone in which the Measuring Agents are operated.
  • Portal Server version 4.3.22 requires now Measuring Agent version 3.9.32 and DKFQS Tools version 2.2.24

4.3.21 | 2021-09-18

  • Searching for a text in all files of the project tree is now supported.
  • Support for uploading files by drag and drop added.
  • Bug fix for assigning variables to HTTP requests in HTTP Test Wizard debugger.
  • Bug fix on real time statistics and test results if the measured unit is other than ms.
  • User profile images can now have a max size of 400 KB instead of 200 KB.
  • CA Root Certificates of HTTP/S Proxy Recorder(s) can now be downloaded via the Portal Server UI.
  • Documentation: User Guide added.
  • SNMP Plug-In published at https://portal.realload.com/publicPublishedPlugins

Portal Server Installation / Ubuntu 20: The “fontconfig” package has to be installed in order for the captcha generator to work:

sudo apt-get update
sudo apt-get install fontconfig

4.3.20 | 2021-08-07

  • HTTP Test Wizard Plug-Ins can now be published and are then available to other users.
  • The following HTTP Test Wizard Plugin fields are now protected by algorithms and cannot be manually modified by the JSON editor:
    • pluginId
    • authorNickname
  • A (new) “Resources Library” project is automatically created for each user in the Project Tree which contains by default the following “Resource Sets”:
    • “HTTP Test Wizard Plug-Ins”: by default empty
    • “Java”: always contains the latest version of com.dkfqs.tools.jar
    • “PowerShell”: always contains the latest version of DKFQSLibrary2.psm1
  • The “Resources Library” project contains common resources which are used by multiple projects. The users can add additional resource sets and files as needed to this project.
  • The file com.dkfqs.tools.jar is no longer copied to the corresponding resource set when an HTTP Test Wizard test is generated. Instead, generated tests now contain a reference to “Resources Library / Java / com.dkfqs.tools.jar”.
  • Newly created HTTP Test Wizard Plug-Ins now contain by default a reference to the resource file “Resources Library / Java / com.dkfqs.tools.jar”.
  • More than 50 minor bugs have been fixed and some improvements have been made to the portal user interface.
  • Portal Server version 4.3.20 requires now Measuring Agent version 3.9.31
  • Existing tests and plug-ins should be upgraded to use com.dkfqs.tools.jar version 2.2.21 (located at “Resource Sets / Java”). This means that the tests have to be generated and defined once again.
  • The tuning parameters of Linux operating systems on which “Measuring Agents” run must be increased:

In /etc/security/limits.conf, add or modify:

# TCP/IP Tuning
# =============
* soft     nproc          262140
* hard     nproc          262140
* soft     nofile         262140
* hard     nofile         262140
root soft     nproc          262140
root hard     nproc          262140
root soft     nofile         262140
root hard     nofile         262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=8966

If you get a value less than 262140, then add in /etc/systemd/system.conf:

# System Tuning
# =============
DefaultTasksMax=262140

Reboot the system and verify the settings. Enter: ulimit -n

output: 262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=262140

4.3.18 | 2021-07-06

  • Support of Licenses added. Multiple licenses per user account are supported. New licenses can be entered via the Admin Menu and via the User Menu. Users whose license has expired can enter a new license during sign-in.
  • Pluggable Architecture for License Providers implemented.
  • Remote Admin API and Remote User API added. The corresponding API Authentication Tokens can be generated via the Admin Menu and via the User Menu. See API Documentation.
  • Unix Time Tool in User Menu added.
  • User accounts that expired a long time ago are now automatically deleted, including all user-specific data. The number of days between the expiry date and the deletion date can be configured in the Admin Menu.
  • Deleted user accounts can now be viewed in the Admin Menu.
  • Portal Server version 4.3.18 requires now Measuring Agent version 3.9.30

4.3.14 | 2021-04-08

  • Support of multiple Price Plans / Limits for User Accounts such as Disk-Space, Number of Measuring Agents, Account expires time … (Admin Menu).
  • Access to Measuring Agents can now optionally be protected by an Authentication Token (password).
  • Configurable Default Price Plan for Sign Up (Admin Menu).
  • Configurable HTML content for Sign Up steps 1 to 4 and for (new) Sign Up completed “Welcome Page” (Admin Menu).
  • Test jobs are now digitally signed. This means that the following job settings on a Measuring Agent cannot be modified after the job has been transmitted to the Measuring Agent: type of job, local test job ID, number of users, maximum test duration.

4 - All Purpose Interface

All Purpose Interface | Developer Guide

Abstract

This document explains:

  1. How to develop a test program from scratch.
  2. How to add self-programmed measurements to the HTTP Test Wizard (as plug-ins).

The product’s open architecture enables you to develop plug-ins, scripts and programs that measure anything that has a numeric value - no matter which protocol is used!

The measured data are evaluated in real time and displayed as diagrams and lists. In addition to successfully measured values, errors such as timeouts or invalid response data can also be collected and displayed in real time.

At least in theory, programs and scripts of any programming language can be executed, as long as such a program or script supports the All Purpose Interface.

In practice there are currently two options for integrating your own measurements into the DKFQS platform:

  1. Write an HTTP Test Wizard Plug-In in Java that performs the measurement. This has the advantage that you only have to implement a subset of the “All Purpose Interface” yourself:

    • Declare Statistic
    • Register Sample Start
    • Add Sample Long
    • Add Sample Error
    • [Optional: Add Counter Long, Add Average Delta And Current Value, Add Efficiency Ratio Delta, Add Throughput Delta, Add Test Result Annotation Exec Event]

    Such plug-ins can be developed quite quickly, as all other functions of the “All Purpose Interface” are already implemented by the HTTP Test Wizard.

    Tip: An HTTP Test Wizard session can also consist only of plug-ins, i.e. you can “misuse” the HTTP Test Wizard to carry out only measurements that you have programmed yourself: Plug-In Example

  2. Write a test program from scratch. This can currently be done in Java or PowerShell (support for additional programming languages will be added in the future). This approach is more time-consuming, but has the advantage that you have more freedom in program development. In this case you have to implement all functions of the “All Purpose Interface”.

Interface Specification

Basic Requirements for all Programs and Scripts

The All Purpose Interface must be implemented by all programs and scripts which are executed on the DKFQS Platform. The interface is independent of any programming language and has only three requirements:

  1. The executed program or script must be able to be started from a command line, and passing program or script arguments must be supported.
  2. The executed program or script must be able to read and write files.
  3. The executed program or script must be able to measure one or more numerical values.

All of this may seem a bit trivial, but it has been chosen deliberately so that the interface can support almost all programming languages.

Generic Program and Script Arguments

Each executed program or script must support at least the following arguments:

  • Number of Users: The total number of simulated users (integer value > 0).
  • Test Duration: The maximum test duration in seconds (integer value > 0).
  • Ramp Up Time: The ramp up time in seconds until all simulated users are started (integer value >= 0). Example: If 10 users are started within 5 seconds, the first user is started immediately and the remaining 9 users are then started at (5 seconds / 9 users) = 0.55-second intervals.
  • Max Session Loops: The maximum number of session loops per simulated user (integer value > 0, or -1 means infinite number of session loops).
  • Delay Per Session Loop: The delay in milliseconds before a simulated user starts a next session loop iteration (integer value >= 0) – but not applied for the first session loop iteration.
  • Data Output Directory: The directory to which the measured data have to be written. Other data, such as debug information, can also be written to this directory.

Implementation Note: The test ends when either the Test Duration has elapsed or Max Session Loops has been reached for all simulated users. Currently executing sessions are not aborted.

In addition, the following arguments are optional, but also standardized:

  • Description: A brief description of the test
  • Debug Execution: Write debug information about the test execution to stdout
  • Debug Measuring: Write debug information about the declared statistics and the measured values to stdout
| Argument | Java | PowerShell |
| --- | --- | --- |
| Number of Users | -users number | -totalUsers number |
| Test Duration | -duration seconds | -inputTestDuration seconds |
| Ramp Up Time | -rampupTime seconds | -rampUpTime seconds |
| Max Session Loops | -maxLoops number | -inputMaxLoops number |
| Delay Per Session Loop | -delayPerLoop milliseconds | -inputDelayPerLoopMillis milliseconds |
| Data Output Directory | -dataOutputDir path | -dataOutDirectory path |
| Description | -description text | -description text |
| Debug Execution | -debugExec | -debugExecution |
| Debug Measuring | -debugData | -debugMeasuring |

Single-Threaded Scripts vs. Multiple-Threaded Programs

For scripts which don’t support multiple threads, the DKFQS Platform starts a separate operating system process for each simulated user. For programs which do support multiple threads, on the other hand, only one operating system process is started for all simulated users.

Scripts which are not able to run multiple threads must support the following additional generic command line argument:

  • Executed User Number: The currently executed user (integer value > 0). Example: If 10 scripts are started then 1 is passed to the first started script, 2 is passed to the second started script, .. et cetera.
| Argument | PowerShell |
| --- | --- |
| Executed User Number | -inputUserNo number |
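To illustrate the ramp-up arithmetic described above, the start delay of each simulated user can be computed as follows. This is an illustrative Python sketch only (the platform currently supports Java and PowerShell; the function name is made up for this example):

```python
def rampup_delay(user_no, total_users, rampup_seconds):
    """Seconds to wait before starting the given user (1-based).

    The first user starts immediately; the remaining users are spread
    evenly across the ramp-up time, as in the example in the text:
    10 users within 5 seconds -> users 2..10 start in 5/9 s steps.
    """
    if total_users <= 1 or rampup_seconds <= 0:
        return 0.0
    return (user_no - 1) * rampup_seconds / (total_users - 1)
```

A single-threaded script would receive its own Executed User Number and sleep for `rampup_delay(user_no, ...)` seconds before registering its start; a multi-threaded program would apply the same delay per worker thread.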

Specific Program and Script Arguments

Additional program- and script-specific arguments are supported by the DKFQS platform. However, their values are not validated by the platform.

Job Control Files

During the execution of a test, the DKFQS Platform can create and delete additional control files at runtime in the Data Output Directory of a test job. The running script or program must frequently check for the existence (or absence) of these control files, but not too often, to avoid CPU and I/O overload. Rule of thumb: multi-threaded programs should check for these files every 5..10 seconds; single-threaded scripts should check for them before executing a new session loop iteration.

The following control files are created or removed in the Data Output Directory by the DKFQS Platform:

  • DKFQS_Action_AbortTest.txt If the existence of this file is detected, test execution must be aborted gracefully as soon as possible. Currently executing session loops are not aborted.
  • DKFQS_Action_SuspendTest.txt If the existence of this file is detected, the further execution of session loops is suspended until the file is removed by the DKFQS Platform. Currently executing session loops are not interrupted on suspend. When the test is resumed, the Ramp Up Time passed as a generic argument to the script or program must be re-applied. If a suspended test runs out of Test Duration, the test must end.

Testjob Data Files

When a test job is started by the DKFQS Platform on a Measuring Agent, the DKFQS Platform first creates an empty data file for each simulated user in the Data Output Directory of the test job:

Data File: user_<Executed User Number>_statistics.out

Example: user_1_statistics.out, user_2_statistics.out, user_3_statistics.out, .. et cetera.

After that, the test script(s) or test program is started as an operating system process. The test script or test program has to write the current state of the simulated user and the measured data to the corresponding Data File of the simulated user in JSON object format (only append data to the file – don’t create new files).

The DKFQS Platform component Measuring Agent and the corresponding Data Collector listen to these data files and interpret the measured data in real time, line by line, as JSON objects.


Writing JSON Objects to the Data Files

The following JSON Objects can be written to the Data Files:

| JSON Object | Description |
| --- | --- |
| Declare Statistic | Declares a new statistic |
| Register Execute Start | Registers the start of a user |
| Register Execute Suspend | Registers that the execution of a user is suspended |
| Register Execute Resume | Registers that the execution of a user is resumed |
| Register Execute End | Registers that a user has ended |
| Register Loop Start | Registers that a user has started a session loop iteration |
| Register Loop Passed | Registers that a session loop iteration of a user has passed |
| Register Loop Failed | Registers that a session loop iteration of a user has failed |
| Register Sample Start | Statistic type sample-event-time-chart: registers the start of measuring a sample |
| Add Sample Long | Statistic type sample-event-time-chart: registers that a sample has been measured and reports the value |
| Add Sample Error | Statistic type sample-event-time-chart: registers that the measuring of a sample has failed |
| Add Counter Long | Statistic type cumulative-counter-long: adds a positive delta value to the counter |
| Add Average Delta And Current Value | Statistic type average-and-current-value: adds delta values to the average and sets the current value |
| Add Efficiency Ratio Delta | Statistic type efficiency-ratio-percent: adds efficiency ratio delta values |
| Add Throughput Delta | Statistic type throughput-time-chart: adds a delta value to a throughput |
| Add Test Result Annotation Exec Event | Adds an annotation event to the test result |

Note that each JSON object must be written as a single line, terminated with a \r\n line terminator.

Program Sequence

[Diagrams: program sequence]

JSON Object Specification

Declare Statistic Object

Before the measurement of data begins, the corresponding statistics must be declared at runtime. Each declared statistic must have a unique ID. Multiple declarations with the same ID are discarded by the platform.

Currently 5 types of statistics are supported:

  • sample-event-time-chart : This is the most common statistic type and contains continuously measured response times or any other continuously measured values of any unit. Information about failed measurements can also be added to the statistic. Statistics of this type are added to the ‘Overview Statistic’ area and can also be displayed as a chart (see picture below).
  • cumulative-counter-long : This is a single counter whose value is continuously increased during the test. Statistics of this type are added to the ‘Test-Specific Values’ area.
  • average-and-current-value : This is a separately measured mean value and the last measured current value. Statistics of this type are added to the ‘Test-Specific Values’ area.
  • efficiency-ratio-percent : This is a measured efficiency in percent (0..100%). Statistics of this type are added to the ‘Test-Specific Values’ area.
  • throughput-time-chart : This is a measured throughput per second. Statistics of this type are added to the ‘Test-Specific Values’ area.


New statistics can also be declared at any time during test execution, but a statistic must always be declared before measured data are added to it.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "DeclareStatistic",
  "type": "object",
  "required": ["subject", "statistic-id", "statistic-type", "statistic-title"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'declare-statistic'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "Unique statistic id"
    },
    "statistic-type": {
      "type": "string",
      "description": "'sample-event-time-chart' or 'cumulative-counter-long' or 'average-and-current-value' or 'efficiency-ratio-percent' or 'throughput-time-chart'"
    },
    "statistic-title": {
      "type": "string",
      "description": "Statistic title"
    },
    "statistic-subtitle": {
      "type": "string",
      "description": "Statistic subtitle | only supported by 'sample-event-time-chart'"
    },
    "y-axis-title": {
      "type": "string",
      "description": "Y-Axis title | only supported by 'sample-event-time-chart'. Example: 'Response Time'"
    },
    "unit-text": {
      "type": "string",
      "description": "Text of measured unit. Example: 'ms'"
    },
    "sort-position": {
      "type": "integer",
      "description": "The UI sort position"
    },
    "add-to-summary-statistic": {
      "type": "boolean",
      "description": "If true = add the number of measured and failed samples to the summary statistic | only supported by 'sample-event-time-chart'. Note: Synthetic measured data like Measurement Groups or Delay Times should not be added to the summary statistic"
    },
    "background-color": {
      "type": "string",
      "description": "The background color either as #hex-triplet or as bootstrap css class name, or an empty string = no special background color. Examples: '#cad9fa', 'table-info'"
    }
  }
}

Example: 
{
  "subject":"declare-statistic",
  "statistic-id":1,
  "statistic-type":"sample-event-time-chart",
  "statistic-title":"GET http://192.168.0.111/",
  "statistic-subtitle":"",
  "y-axis-title":"Response Time",
  "unit-text":"ms",
  "sort-position":1,
  "add-to-summary-statistic":true,
  "background-color":""
}

After the statistics are declared, the activities of the simulated users can be started. Each simulated user must report the following changes of its current user state:

  • register-execute-start : Register that the simulated user has started the test.
  • register-execute-suspend : Register that the simulated user has suspended the execution of the test.
  • register-execute-resume : Register that the simulated user has resumed the execution of the test.
  • register-execute-end : Register that the simulated user has ended the test.

Register Execute Start Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteStart",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-start'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-start","timestamp":1596219816129}

Register Execute Suspend Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteSuspend",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-suspend'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-suspend","timestamp":1596219816129}

Register Execute Resume Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteResume",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-resume'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-resume","timestamp":1596219816129}

Register Execute End Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteEnd",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-end'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-end","timestamp":1596219816129}

Once a simulated user has started its activity, it measures the data in so-called ‘session loops’. Each simulated user must report when a session loop iteration starts and ends:

  • register-loop-start : Register the start of a session loop iteration.
  • register-loop-passed : Register that a session loop iteration has passed / at end of the session loop iteration.
  • register-loop-failed : Register that a session loop iteration has failed / if the session loop iteration is aborted.
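Taken together, the user-state and session-loop events are reported in the order sketched below. This is illustrative Python (not code generated by the platform); `emit` stands for a function that appends one JSON object as a single \r\n-terminated line to the user's data file, and the handling of Test Duration and an infinite Max Session Loops (-1) is deliberately simplified:

```python
import time

def now_ms():
    """Unix-like timestamp in milliseconds, as used by all JSON objects."""
    return int(time.time() * 1000)

def run_user(emit, max_loops, session_body):
    """Drive one simulated user: register the start, run the session loop
    iterations, register passed/failed per iteration, register the end.

    emit         -- callable that reports one JSON object
    session_body -- callable executing one session loop iteration;
                    raises an exception if the iteration fails
    """
    emit({"subject": "register-execute-start", "timestamp": now_ms()})
    for _ in range(max_loops):
        loop_start = now_ms()
        emit({"subject": "register-loop-start", "timestamp": loop_start})
        try:
            session_body()
            emit({"subject": "register-loop-passed",
                  "loop-time": now_ms() - loop_start,
                  "timestamp": now_ms()})
        except Exception:
            emit({"subject": "register-loop-failed", "timestamp": now_ms()})
    emit({"subject": "register-execute-end", "timestamp": now_ms()})
```

A real script would additionally apply the Delay Per Session Loop between iterations and check the job control files before each iteration.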

Register Loop Start Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopStart",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-start'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-start","timestamp":1596219816129}

Register Loop Passed Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopPassed",
  "type": "object",
  "required": ["subject", "loop-time", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-passed'"
    },
    "loop-time": {
      "type": "integer",
      "description": "The time it takes to execute the loop in milliseconds"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-passed","loop-time":1451, "timestamp":1596219816129}

Register Loop Failed Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopFailed",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-failed'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-failed","timestamp":1596219816129}

Within a session loop iteration the samples of the declared statistics are measured. For sample-event-time-chart statistics the simulated user must report when the measuring of a sample starts and ends:

  • register-sample-start : Register that the measuring of a sample has started.
  • add-sample-long : Add a measured value to a declared statistic.
  • add-sample-error : Add an error to a declared statistic.
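The two-step pattern for one sample can be sketched as follows (illustrative Python; `emit` again stands for writing one JSON object line, and the function name is hypothetical):

```python
import time

def now_ms():
    """Unix-like timestamp in milliseconds."""
    return int(time.time() * 1000)

def measure_sample(emit, statistic_id, action):
    """Measure one sample of a sample-event-time-chart statistic:
    register the start, run the measured action, then report the
    elapsed time in milliseconds as the sample value."""
    start = now_ms()
    emit({"subject": "register-sample-start",
          "statistic-id": statistic_id,
          "timestamp": start})
    action()
    emit({"subject": "add-sample-long",
          "statistic-id": statistic_id,
          "value": now_ms() - start,
          "timestamp": now_ms()})
```

On failure, an add-sample-error object would be reported instead of add-sample-long (see below).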

Register Sample Start Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterSampleStart",
  "type": "object",
  "required": ["subject", "statistic-id", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-sample-start'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-sample-start","statistic-id":2,"timestamp":1596219816165}

Add Sample Long Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddSampleLong",
  "type": "object",
  "required": ["subject", "statistic-id", "value", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-sample-long'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "value": {
      "type": "integer",
      "description": "The measured value"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"add-sample-long","statistic-id":2,"value":105,"timestamp":1596219842468}

Add Sample Error Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddSampleError",
  "type": "object",
  "required": ["subject", "statistic-id", "error-subject", "error-severity",
  "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-sample-error'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "error-subject": {
      "type": "string",
      "description": "The subject or title of the error"
    },
    "error-severity": {
      "type": "string",
      "description": "'warning' or 'error' or 'fatal'"
    },
    "error-type": {
      "type": "string",
      "description": "The type of the error. Errors which contain the same error type can be grouped."
    },
    "error-log": {
      "type": "string",
      "description": "The error log. Multiple lines are supported by adding \r\n line terminators."
    },
    "error-context": {
      "type": "string",
      "description": "Context information about the condition under which the error occurred. Multiple lines are supported by adding \r\n line terminators."
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{
  "subject":"add-sample-error",
  "statistic-id":2,
  "error-subject":"Connection refused (Connection refused)",
  "error-severity":"error",
  "error-type":"java.net.ConnectException",
  "error-log":"2020-08-01 21:24:51.662 | main-HTTPClientProcessing[3] | INFO | GET http://192.168.0.111/\r\n2020-08-01 21:24:51.670 | main-HTTPClientProcessing[3] | ERROR | Failed to open or reuse connection to 192.168.0.111:80 |
 java.net.ConnectException: Connection refused (Connection refused)\r\n",
  "error-context":"HTTP Request Header\r\nhttp://192.168.0.111/\r\nGET / HTTP/1.1\r\nHost: 192.168.0.111\r\nConnection: keep-alive\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate\r\n",
  "timestamp":1596309891672
}

Notes about the error-severity values:

  • warning : After the error has occurred, the simulated user continues with the execution of the current session loop iteration. Error color = yellow.
  • error : After the error has occurred, the simulated user aborts the execution of the current session loop iteration and starts the execution of the next session loop iteration. Error color = red.
  • fatal : After the error has occurred, the simulated user aborts any further execution of the test, which means that the test has ended for this simulated user. Error color = black.

Implementation note: After an error has occurred, the simulated user should wait at least 100 milliseconds before continuing its activities. This prevents several thousand errors from being measured and reported to the UI within a few seconds.

Add Counter Long Object (cumulative-counter-long only)

For cumulative-counter-long statistics there is no 2-step mechanism as for ‘sample-event-time-chart’ statistics. The value can simply be increased by reporting an Add Counter Long object.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddCounterLong",
  "type": "object",
  "required": ["subject", "statistic-id", "value"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-counter-long'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "value": {
      "type": "integer",
      "description": "The value to increment"
    }
  }
}

Example: 
{"subject":"add-counter-long","statistic-id":10,"value":2111}

Add Average Delta And Current Value Object (average-and-current-value only)

To update an average-and-current-value statistic, the delta (difference) of the cumulated sum and the delta (difference) of the cumulated number of values have to be reported. The platform then calculates the average value by dividing the cumulated sum by the cumulated number of values. In addition, the last measured value must also be reported.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddAverageDeltaAndCurrentValue",
  "type": "object",
  "required": ["subject", "statistic-id", "sumValuesDelta", "numValuesDelta", "currentValue", "currentValueTimestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-average-delta-and-current-value'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "sumValuesDelta": {
      "type": "integer",
      "description": "The sum of delta values to add to the average"
    },
    "numValuesDelta": {
      "type": "integer",
      "description": "The number of delta values to add to the average"
    },
    "currentValue": {
      "type": "integer",
      "description": "The current value, or -1 if no such data is available"
    },
    "currentValueTimestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the current value, or -1 if no such data is available"
    }
  }
}

Example: 
{
  "subject":"add-average-delta-and-current-value",
  "statistic-id":100005,
  "sumValuesDelta":6302,
  "numValuesDelta":22,
  "currentValue":272,
  "currentValueTimestamp":1634401774374
}
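The delta bookkeeping this requires can be sketched as a small helper that tracks what has already been reported. Illustrative Python only (the class name is hypothetical; `emit` stands for writing one JSON object line):

```python
class AverageDeltaReporter:
    """Track a cumulated sum/count and report only the deltas since the
    last report, as required by 'add-average-delta-and-current-value'."""

    def __init__(self, statistic_id, emit):
        self.statistic_id = statistic_id
        self.emit = emit
        self.sum_total = 0       # cumulated sum of all measured values
        self.num_total = 0       # cumulated number of measured values
        self.sum_reported = 0    # portion of sum already reported
        self.num_reported = 0    # portion of count already reported
        self.current_value = -1  # -1 = no data available yet
        self.current_ts = -1

    def add(self, value, timestamp_ms):
        """Record one measured value locally (nothing is emitted yet)."""
        self.sum_total += value
        self.num_total += 1
        self.current_value = value
        self.current_ts = timestamp_ms

    def report(self):
        """Emit the deltas accumulated since the previous report."""
        self.emit({
            "subject": "add-average-delta-and-current-value",
            "statistic-id": self.statistic_id,
            "sumValuesDelta": self.sum_total - self.sum_reported,
            "numValuesDelta": self.num_total - self.num_reported,
            "currentValue": self.current_value,
            "currentValueTimestamp": self.current_ts,
        })
        self.sum_reported = self.sum_total
        self.num_reported = self.num_total
```

The platform reconstructs the overall average from the sum of all reported deltas, so each delta must be reported exactly once.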

Add Efficiency Ratio Delta Object (efficiency-ratio-percent only)

To update an efficiency-ratio-percent statistic, the delta (difference) of the number of efficiently performed procedures and the delta (difference) of the number of inefficiently performed procedures have to be reported.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddEfficiencyRatioDelta",
  "type": "object",
  "required": ["subject", "statistic-id", "efficiencyDeltaValue", "inefficiencyDeltaValue"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-efficiency-ratio-delta'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "efficiencyDeltaValue": {
      "type": "integer",
      "description": "The number of efficiently performed procedures to add"
    },
    "inefficiencyDeltaValue": {
      "type": "integer",
      "description": "The number of inefficiently performed procedures to add"
    }
  }
}

Example: 
{
  "subject":"add-efficiency-ratio-delta",
  "statistic-id":100006,
  "efficiencyDeltaValue":6,
  "inefficiencyDeltaValue":22
}

Add Throughput Delta Object (throughput-time-chart only)

To update a throughput-time-chart statistic, the delta (difference) from the last cumulated value to the current cumulated value has to be reported; the current time stamp is included in the calculation.

Although this type of statistic always has the unit throughput per second, a measured delta (difference) value can be reported at any time.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddThroughputDelta",
  "type": "object",
  "required": ["subject", "statistic-id", "delta-value", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-throughput-delta'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "delta-value": {
      "type": "number",
      "description": "the delta (difference) value"
    },
    "timestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the delta (difference) value"
    }
  }
}

Example: 
{
  "subject":"add-throughput-delta",
  "statistic-id":100003,
  "delta-value":0.53612,
  "timestamp":1634401774410
}
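Tracking the cumulated value and emitting only its growth can be sketched as follows (illustrative Python; the class name is hypothetical; `emit` stands for writing one JSON object line):

```python
class ThroughputDeltaReporter:
    """Report the growth of a cumulated value (e.g. total bytes
    transferred) as 'add-throughput-delta' objects."""

    def __init__(self, statistic_id, emit):
        self.statistic_id = statistic_id
        self.emit = emit
        self.last_cumulated = 0.0  # cumulated value at the last report

    def report(self, cumulated_value, timestamp_ms):
        """Emit the delta from the last reported cumulated value to the
        current one; the platform derives the per-second throughput
        from the deltas and their timestamps."""
        self.emit({
            "subject": "add-throughput-delta",
            "statistic-id": self.statistic_id,
            "delta-value": cumulated_value - self.last_cumulated,
            "timestamp": timestamp_ms,
        })
        self.last_cumulated = cumulated_value
```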

Add Test Result Annotation Exec Event Object

Add an annotation event to the test result.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddTestResultAnnotationExecEvent",
  "type": "object",
  "required": ["subject", "event-id", "event-text", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-test-result-annotation-exec-event'"
    },
    "event-id": {
      "type": "integer",
      "description": "The event id, valid range: -1 .. -999999"
    },
    "event-text": {
      "type": "string",
      "description": "the event text"
    },
    "timestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the event"
    }
  }
}

Example: 
{
  "subject":"add-test-result-annotation-exec-event",
  "event-id":-1,
  "event-text":"Too many errors: Test job stopped by plug-in",
  "timestamp":1634401774410
}

Notes:

  • The event id must be in the range from -1 (minus one) to -999999.
  • Events with the same event id are merged to one event.

[End of Interface Specification]

Example

HTTP Test Wizard Plug-In

This plug-in “measures” a random value and is executed in this example as the only part of an HTTP Test Wizard session.

The All Purpose Interface JSON objects are written using the corresponding methods of the com.dkfqs.tools.javatest.AbstractJavaTest class. This class is located in the JAR file com.dkfqs.tools.jar which is already predefined for all plug-ins.

import com.dkfqs.tools.javatest.AbstractJavaTest;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginContext;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginInterface;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginSessionFailedException;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginTestFailedException;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginUserFailedException;
import com.dkfqs.tools.logging.LogAdapterInterface;
import java.util.ArrayList;
import java.util.List;
// add your imports here

/**
 * HTTP Test Wizard Plug-In 'All Purpose Interface Example'.
 * Plug-in Type: Normal Session Element Plug-In.
 * Created by 'DKF' at 24 Sep 2021 22:50:04
 * DKFQS 4.3.22
 */
@AbstractJavaTestPluginInterface.PluginResourceFiles(fileNames={"com.dkfqs.tools.jar"})
public class AllPurposeInterfaceExample implements AbstractJavaTestPluginInterface {
	private LogAdapterInterface log = null;
	
	private static final int STATISTIC_ID = 1000;
	private AbstractJavaTest javaTest = null;       // reference to the generated test program
	
	/**
	 * Called by environment when the instance is created.
	 * @param log the log adapter
	 */
	@Override
	public void setLog(LogAdapterInterface log) {
		this.log = log;
	}
	
	/**
	 * On plug-in initialize. Called when the plug-in is initialized. <br>
	 * Depending on the initialization scope of the plug-in the following specific exceptions can be thrown:<ul>
	 * 	<li>Initialization scope <b>global:</b> AbstractJavaTestPluginTestFailedException</li>
	 * 	<li>Initialization scope <b>user:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException</li>
	 * 	<li>Initialization scope <b>session:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginSessionFailedException</li>
	 * </ul>
	 * @param javaTest the reference to the executed test program, or null if no such information is available (in debugger environment)
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws AbstractJavaTestPluginSessionFailedException if the plug-in signals that the 'user session' has to be aborted (abort current session - continue next session)
	 * @throws AbstractJavaTestPluginUserFailedException if the plug-in signals that the user has to be terminated
	 * @throws AbstractJavaTestPluginTestFailedException if the plug-in signals that the test has to be terminated
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onInitialize(AbstractJavaTest javaTest, AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws AbstractJavaTestPluginSessionFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginTestFailedException, Exception {
		// log.message(log.LOG_INFO, "onInitialize(...)");
		
		// --- vvv --- start of specific onInitialize code --- vvv ---
		if (javaTest != null) {
		    this.javaTest = javaTest;
		    
		    // declare the statistic
		    javaTest.declareStatistic(STATISTIC_ID, 
            		                  AbstractJavaTest.STATISTIC_TYPE_SAMPLE_EVENT_TIME_CHART,
            		                  "My Measurement",
            		                  "",
            		                  "My Response Time",
            		                  "ms",
            		                  STATISTIC_ID,
            		                  true,
            		                  "");
		}
		// --- ^^^ --- end of specific onInitialize code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

	/**
	 * On plug-in execute. Called when the plug-in is executed. <br>
	 * Depending on the execution scope of the plug-in the following specific exceptions can be thrown:<ul>
	 * 	<li>Initialization scope <b>global:</b> AbstractJavaTestPluginTestFailedException</li>
	 * 	<li>Initialization scope <b>user:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException</li>
	 * 	<li>Initialization scope <b>session:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginSessionFailedException</li>
	 * </ul>
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws AbstractJavaTestPluginSessionFailedException if the plug-in signals that the 'user session' has to be aborted (abort current session - continue next session)
	 * @throws AbstractJavaTestPluginUserFailedException if the plug-in signals that the user has to be terminated
	 * @throws AbstractJavaTestPluginTestFailedException if the plug-in signals that the test has to be terminated
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onExecute(AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws AbstractJavaTestPluginSessionFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginTestFailedException, Exception {
		// log.message(log.LOG_INFO, "onExecute(...)");
		
		// --- vvv --- start of specific onExecute code --- vvv ---
		if (javaTest != null) {
		    
		    // register the start of the sample 
		    javaTest.registerSampleStart(STATISTIC_ID);
		    
		    // measure the sample
		    final long min = 1L;
		    final long max = 20L;
		    long responseTime = Math.round(((Math.random() * (max - min)) + min));
		    
		    // add the measured sample to the statistic
		    javaTest.addSampleLong(STATISTIC_ID, responseTime);
		    
		    /*
		    // error case
		    javaTest.addSampleError(STATISTIC_ID,
                                    "My error subject",
                                    AbstractJavaTest.ERROR_SEVERITY_WARNING,
                                    "My error type",
                                    "My error response text or log",
                                    "");
            */
		}
		// --- ^^^ --- end of specific onExecute code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

	/**
	 * On plug-in deconstruct. Called when the plug-in is deconstructed.
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onDeconstruct(AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws Exception {
		// log.message(log.LOG_INFO, "onDeconstruct(...)");
		
		// --- vvv --- start of specific onDeconstruct code --- vvv ---
		// no code here
		// --- ^^^ --- end of specific onDeconstruct code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

}

Debugging the Interface

  1. To debug how the reported data of the interface is processed, activate the “Debug Measuring” checkbox when starting the test job.
  2. After the test job has completed, open the Test Jobs menu, select the option “Job Log Files” for the corresponding test job, and then select the file “DataCollector.out”.
  3. Review the “DataCollector.out” file for any errors. Lines containing “| Tailer data” reflect the raw reported data.

5 - API

Portal Server APIs

The portal server has two APIs:

  • Remote Admin API
  • Remote User API

Both APIs require so-called “Authentication Tokens” to authorize execution. If 5 or more invalid “Authentication Tokens” are received within 60 seconds, the corresponding remote IP address is blocked for 30 minutes.

5.1 - Remote Admin API

Portal Server Remote Admin API Specification

Generating an Authentication Token

To perform a Remote Admin API call, you must first generate an “Admin API Auth Token” in the Administrator Menu of the portal. When generating the token you can enter a purpose (informational only) and restrict the remote IP addresses for which the token is valid. You can also specify whether the token grants read/write or read-only access.

API Functions

The API supports the following functions (so-called “actions”):

  • getAllUserAccounts (Get all user accounts of the portal server)
  • getAllPricePlans (Get all price plans of the portal server)
  • addLicenseCertificateToUser (Add a (new) license certificate to a user)
  • getServerSettings (Get the server settings)
  • setServerMaintenanceMode (Turn the server maintenance mode on or off)
The API endpoint is:

  • URL: https://portal.realload.com/RemoteAdminAPI
  • HTTP Method: POST

All data is sent and received in JSON format. The “authTokenValue” and the “action” fields must always be sent when an API call is made.

Example

API HTTP/S Request

curl -v --request POST --header "Content-Type: application/json" --data "@getAllUserAccounts.json"  https://portal.realload.com/RemoteAdminAPI

API Request Data

{
  "authTokenValue": "8mKSz1UzaQg17kfu",
  "action": "getAllUserAccounts"
}

API Response Data

{"allUserAccountsArray":
[{"userId":13,"nickname":"DKF","firstName":"Max","lastName":"Fischer","primaryEmail":"max@dkfqa.com","primarySMSPhone":"+41771111111","secondaryEmail":"","secondarySMSPhone":"","accountBlocked":false,"accountCreateTime":1538556183756,"lastLoginTime":1625181623869,"lastLoginIP":"127.0.0.1","pricePlanId":1,"accountExpiresTime":-1,"pricePlanTitle":"Unlimited"},{"userId":18,"nickname":"AX","firstName":"Alex","lastName":"Fischer","primaryEmail":"alexfischer66@yahoo.com","primarySMSPhone":"+41781111111","secondaryEmail":"","secondarySMSPhone":"","accountBlocked":false,"accountCreateTime":1539874749677,"lastLoginTime":1616111301975,"lastLoginIP":"127.0.0.1","pricePlanId":1,"accountExpiresTime":-1,"pricePlanTitle":"Unlimited"},{"userId":22,"nickname":"Kes","firstName":"Kesorn","lastName":"Fischer","primaryEmail":"gsklsta@yahoo.com","primarySMSPhone":"+66000000000","secondaryEmail":"","secondarySMSPhone":"","accountBlocked":false,"accountCreateTime":1605303204754,"lastLoginTime":1624389324770,"lastLoginIP":"127.0.0.1","pricePlanId":6,"accountExpiresTime":-1,"pricePlanTitle":"BASIC1"},{"userId":48,"nickname":"BET","firstName":"Bettina","lastName":"Meier","primaryEmail":"b123456@lucini.id.au","primarySMSPhone":"+61404905702","secondaryEmail":"","secondarySMSPhone":"","accountBlocked":false,"accountCreateTime":1623719604561,"lastLoginTime":-1,"lastLoginIP":"","pricePlanId":6,"accountExpiresTime":1625061600000,"pricePlanTitle":"BASIC1"}],
"isError":false}

If the API call is successful, then the response field “isError” is false. If a numeric field has a value of -1 (minus one), this means “no data” or “unlimited” depending on the context.
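Since every action follows this same request/response convention, a client can wrap it in a small helper. The sketch below (Python, standard library only) is an illustration, not an official client; the function names are our own and the token value is the placeholder from the examples above:

```python
import json
import urllib.request

PORTAL_URL = "https://portal.realload.com/RemoteAdminAPI"

def build_request(auth_token: str, action: str, **fields) -> bytes:
    """Build the JSON body; authTokenValue and action are always required."""
    payload = {"authTokenValue": auth_token, "action": action, **fields}
    return json.dumps(payload).encode("utf-8")

def check_response(raw: str) -> dict:
    """Parse a response and raise if the API signalled an error via isError."""
    data = json.loads(raw)
    if data.get("isError"):
        raise RuntimeError(data.get("genericErrorText") or "API call failed")
    return data

def call_api(auth_token: str, action: str, **fields) -> dict:
    """POST the action to the portal server (network call, shown for completeness)."""
    req = urllib.request.Request(
        PORTAL_URL,
        data=build_request(auth_token, action, **fields),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return check_response(resp.read().decode("utf-8"))
```

Remember that a numeric -1 in a parsed response is a valid value meaning “no data” or “unlimited”, not an error.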

getAllUserAccounts

Specific Request Fields:

  • [none]

Specific Error Flags:

  • [none]

getAllPricePlans

Specific Request Fields:

  • [none]

Specific Error Flags:

  • [none]

addLicenseCertificateToUser

Specific Request Fields:

  • mapToUserEmailAddress
  • mapToUserMobilePhone
  • licenseProvider
  • licenseCertificate

The license is successfully assigned to a user if either mapToUserEmailAddress or mapToUserMobilePhone matches a user account.

Specific Error Flags:

  • writeAccessError
  • mapToUserError
  • licenseProviderError
  • licenseCertificateError
  • licenseCertificateAlreadyAddedError
  • pricePlanError

JSON Request Example:

{
  "authTokenValue":"8mKSz1UzaQg17kfu",
  "action":"addLicenseCertificateToUser",
  "licenseProvider": "Real Load Pty Ltd / nopCommerce",
  "mapToUserEmailAddress": "max@dkfqa.com",
  "mapToUserMobilePhone": "+41771111111",
  "licenseCertificate": "-----BEGIN CERTIFICATE-----\r\nMIIEnjCCA4agAwIBAgIEyDnukzANBgkqhkiG9w0BA ...... Hn/UMGAGRB6xF4w+TewYqTAZrdhi/WLyYwg==\r\n-----END CERTIFICATE-----"
}

JSON Response Example (Success Case):

{"licenseId":12,"cloudCreditLicenseId":-1,"userId":13,"isCloudCreditsLicense":false,"isError":false}

JSON Response Example (Error Case):

{"isError":true,"genericErrorText":"","writeAccessError":false,"licenseProviderError":false,"mapToUserError":false,"pricePlanError":false,"licenseCertificateAlreadyAddedError":false,"licenseCertificateError":true}

getServerSettings

Specific Request Fields:

  • [none]

Specific Error Flags:

  • [none]

JSON Response Example:

{
  "isServerMaintenanceMode":false,
  "isSignInSelectPricePlanFromMultipleValidLicenseCertificates":true,
  "isSignInExpiredAccountCanEnterLicenseCertificate":true,
  "isSignUpEnabled":true,
  "isSignUpRequiresInvitationTicket":false,
  "signUpDefaultPricePlanId":2,
  "signUpDefaultAccountExpiresInDays":14,
  "deleteExpiredUserAccountsAfterDays":183,
  "isError":false
}

setServerMaintenanceMode

Specific Request Fields:

  • serverMaintenanceMode

Specific Error Flags:

  • writeAccessError

JSON Request Example:

{
  "authTokenValue":"8mKSz1UzaQg17kfu",
  "action":"setServerMaintenanceMode",
  "serverMaintenanceMode":true
}

JSON Response Example (Success Case):

{"isServerMaintenanceMode":true,"isError":false}

5.2 - Remote User API

Portal Server Remote User API Specification

Generating an Authentication Token

To perform a Remote User API call, you must first sign in to the portal and generate an “API Authentication Token”. When generating the token you can enter a purpose (informational only) and restrict the remote IP addresses for which the token is valid.

API Functions

The API supports the following functions (so-called “actions”):

Common Functions:

  • getUserAccountInfo (Get information about the own user account)
  • getPricePlanInfo (Get information about the current price plan)

Projects, Resource Sets and Files Functions:

  • getProjectTree (Get the project tree, including all resource sets and all file information)
  • createProject (Create a new project)
  • deleteProject (Delete a project)
  • getResourceSetsOfProject (Get all resource sets of a project, including all file information)
  • createResourceSet (Create a new resource set)
  • deleteResourceSet (Delete a resource set)
  • getFilesInfoOfResourceSet (Get information about all files of a resource set)
  • createFile (Create or overwrite a file)
  • getFile (Get the content of a file and the file information)
  • deleteFile (Delete a file)

Measuring Agents Functions:

  • getMeasuringAgents (Get all defined measuring agents)
  • getMinRequiredMeasuringAgentVersion (Get the minimum required measuring agent version)
  • addMeasuringAgent (Add a new measuring agent)
  • pingMeasuringAgent (Ping a measuring agent)
  • setMeasuringAgentActive (Set the state of a measuring agent to active or inactive)
  • deleteMeasuringAgent (Delete a measuring agent)

Measuring Agent Cluster Functions:

  • getMeasuringAgentClusters (Get all defined measuring agent clusters)
  • getClusterControllers (Get all cluster controllers and, for each cluster controller, the list of measuring agent clusters that reference it)
  • getMinRequiredClusterControllerVersion (Get the minimum required cluster controller version)
  • pingClusterController (Ping a cluster controller)
  • addMeasuringAgentCluster (Add a new measuring agent cluster)
  • addMemberToMeasuringAgentCluster (Add a member to a measuring agent cluster)
  • removeMemberFromMeasuringAgentCluster (Remove a member from a measuring agent cluster)
  • pingMeasuringAgentClusterMembers (Ping the cluster members of a measuring agent cluster via cluster controller)
  • setMeasuringAgentClusterActive (Set the state of a measuring agent cluster to active or inactive)
  • deleteMeasuringAgentCluster (Delete a measuring agent cluster)

HTTP/S Remote Proxy Recorders Functions:

  • getProxyRecorders (Get all defined HTTP/S proxy recorders)
  • getMinRequiredProxyRecorderVersion (Get the minimum required HTTP/S proxy recorder version)
  • addProxyRecorder (Add a new HTTP/S proxy recorder)
  • pingProxyRecorder (Ping an HTTP/S proxy recorder)
  • deleteProxyRecorder (Delete an HTTP/S proxy recorder)
The API endpoint is:

  • URL: https://portal.realload.com/RemoteUserAPI
  • HTTP Method: POST

All data is sent and received in JSON format. The “authTokenValue” and the “action” fields must always be sent when an API call is made.

Example

API HTTP/S Request

curl -v --request POST --header "Content-Type: application/json" --data '{"authTokenValue":"jPmFClqeDUXaEk8Q274q","action":"getUserAccountInfo"}' https://portal.realload.com/RemoteUserAPI

API Request Data

{
  "authTokenValue": "jPmFClqeDUXaEk8Q274q",
  "action": "getUserAccountInfo"
}

API Response Data

{
  "userAccountInfo":{
    "userId":48,
    "nickname":"BET",
    "firstName":"Bettina",
    "lastName":"MeierHans",
    "primaryEmail":"bettina@meierhans.id.au",
    "primarySMSPhone":"+61401111111",
    "secondaryEmail":"",
    "secondarySMSPhone":"",
    "accountBlocked":false,
    "accountCreateTime":1623719604561,
    "lastLoginTime":1625348376450,
    "lastLoginIP":"127.0.0.1",
    "pricePlanId":6,
    "accountExpiresTime":1625953109397,
    "pricePlanTitle":"BASIC1"
    },
  "isError":false
}

If the API call is successful, then the response field “isError” is false. If a numeric field has a value of -1 (minus one), this means “no data” or “unlimited” depending on the context.

getUserAccountInfo

Specific Request Fields:

  • [none]

Specific Error Flags:

  • [none]

getPricePlanInfo

Specific Request Fields:

  • [none]

Specific Error Flags:

  • [none]

JSON Response Example:

{
  "pricePlanInfo":{
    "pricePlanId":6,
    "title":"BASIC1",
    "description":"",
    "isDeprecated":false,
    "lastModified":1625348413042,
    "maxDiskSpaceMB":1024,
    "maxSubUserAccounts":0,
    "maxMeasuringAgentsOwnedByUser":3,
    "maxRemoteProxyRecordersOwnedByUser":3,
    "executeLoadJobsEnabled":true,
    "executeMonitoringJobsEnabled":false,
    "apiAccessEnabled":true,
    "maxStartLoadJobsLast24h":24,
    "maxUsersPerLoadJob":500,
    "maxDurationPerLoadJob":1800
  },
  "isError":false
}

The unit for “maxDurationPerLoadJob” is seconds.

getProjectTree

Specific Request Fields:

  • [none]

Specific Error Flags:

  • [none]

JSON Response Example:

{
  "projectsArray": [
    {
      "projectId": 97,
      "projectName": "Common",
      "projectDescription": "",
      "resourceSetsArray": [
        {
          "resourceSetId": 154,
          "resourceSetName": "Input Files",
          "resourceSetDescription": "",
          "filesArray": [
            {
              "fileName": "InputFile.txt",
              "fileSize": 233,
              "fileHashCode": 1873256029,
              "fileLastModified": 1613835992073
            }
          ]
        },
        {
          "resourceSetId": 155,
          "resourceSetName": "Jar Files",
          "resourceSetDescription": "",
          "filesArray": [
            {
              "fileName": "com.dkfqs.tools.jar",
              "fileSize": 578087,
              "fileHashCode": -2033420926,
              "fileLastModified": 1613838181727
            }
          ]
        },
        {
          "resourceSetId": 156,
          "resourceSetName": "Plug-Ins",
          "resourceSetDescription": "",
          "filesArray": [
            {
              "fileName": "HttpSessionPlugin_ChangeCopyright.json",
              "fileSize": 5321,
              "fileHashCode": 1958407366,
              "fileLastModified": 1613838287871
            }
          ]
        }
      ]
    },
...
...
...
  ],
  "isError": false
}
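The getProjectTree response nests files inside resource sets inside projects. As an illustration of the structure (assuming only the field names shown above; the helper name is our own), the tree can be flattened into (project, resource set, file) tuples:

```python
def list_all_files(project_tree: dict):
    """Yield (projectName, resourceSetName, fileName) for every file in the tree."""
    for project in project_tree.get("projectsArray", []):
        for resource_set in project.get("resourceSetsArray", []):
            for file_info in resource_set.get("filesArray", []):
                yield (project["projectName"],
                       resource_set["resourceSetName"],
                       file_info["fileName"])
```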

createProject

Specific Request Fields:

  • projectName
  • projectDescription (optional)

Response Fields:

  • projectId

Specific Error Flags:

  • projectNameError
  • diskSpaceLimitExceededError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"createProject",
  "projectName":"My New Project",
  "projectDescription": "Created by API call"
}

JSON Response Example (Success Case):

{"projectId":113,"isError":false}

JSON Response Example (Error Case):

{"isError":true,"genericErrorText":"","diskSpaceLimitExceededError":false,"projectNameError":true}

deleteProject

Specific Request Fields:

  • projectId
  • moveToTrash (optional, default: false)

Specific Error Flags:

  • projectIdError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"deleteProject",
  "projectId":113,
  "moveToTrash":false
}

getResourceSetsOfProject

Specific Request Fields:

  • projectId

Specific Error Flags:

  • projectIdError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getResourceSetsOfProject",
  "projectId":97
}

JSON Response Example (Success Case):

{
  "resourceSetsArray": [
    {
      "resourceSetId": 154,
      "resourceSetName": "Input Files",
      "resourceSetDescription": "",
      "filesArray": [
        {
          "fileName": "InputFile.txt",
          "fileSize": 233,
          "fileHashCode": 1873256029,
          "fileLastModified": 1613835992073
        }
      ]
    },
    {
      "resourceSetId": 155,
      "resourceSetName": "Jar Files",
      "resourceSetDescription": "",
      "filesArray": [
        {
          "fileName": "com.dkfqs.tools.jar",
          "fileSize": 578087,
          "fileHashCode": -2033420926,
          "fileLastModified": 1613838181727
        }
      ]
    },
    {
      "resourceSetId": 156,
      "resourceSetName": "Plug-Ins",
      "resourceSetDescription": "",
      "filesArray": [
        {
          "fileName": "HttpSessionPlugin_ChangeCopyright.json",
          "fileSize": 5321,
          "fileHashCode": 1958407366,
          "fileLastModified": 1613838287871
        }
      ]
    }
  ],
  "isError": false
}

createResourceSet

Specific Request Fields:

  • projectId
  • resourceSetName
  • resourceSetDescription (optional)

Response Fields:

  • resourceSetId

Specific Error Flags:

  • projectIdError
  • resourceSetNameError
  • diskSpaceLimitExceededError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"createResourceSet",
  "projectId":97,
  "resourceSetName":"My New Resource Set",
  "resourceSetDescription": "Created by API call"
}

JSON Response Example (Success Case):

{"resourceSetId":172,"isError":false}

JSON Response Example (Error Case):

{"isError":true,"genericErrorText":"","resourceSetNameError":true,"projectIdError":false,"diskSpaceLimitExceededError":false}

deleteResourceSet

Specific Request Fields:

  • projectId
  • resourceSetId
  • moveToTrash (optional, default: false)

Specific Error Flags:

  • projectIdError
  • resourceSetIdError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"deleteResourceSet",
  "projectId":97,
  "resourceSetId":172,
  "moveToTrash":false
}

getFilesInfoOfResourceSet

Specific Request Fields:

  • projectId
  • resourceSetId

Specific Error Flags:

  • projectIdError
  • resourceSetIdError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getFilesInfoOfResourceSet",
  "projectId":23,
  "resourceSetId":143
}

JSON Response Example:

{
  "filesArray": [
    {
      "fileName": "DKFQSLibrary2.psm1",
      "fileSize": 16339,
      "fileHashCode": -1503445747,
      "fileLastModified": 1603566144851
    },
    {
      "fileName": "powershell-http-bern2.ps1",
      "fileSize": 12900,
      "fileHashCode": -1174212096,
      "fileLastModified": 1603566162094
    },
    {
      "fileName": "TestResult_powershell-http-bern2Neu_2020-10-24@21-06-04.json",
      "fileSize": 14395,
      "fileHashCode": -951574615,
      "fileLastModified": 1603566379097
    },
    {
      "fileName": "TestResult_powershell-http-bern2Neu_2020-10-24@21-09-45.json",
      "fileSize": 55128,
      "fileHashCode": 1499924815,
      "fileLastModified": 1603566591322
    }
  ],
  "isError": false
}

createFile

Specific Request Fields:

  • projectId
  • resourceSetId
  • fileName
  • fileContentB64 (the content of the file, in Base64 format)

Response Fields:

  • fileName
  • fileSize
  • fileHashCode
  • fileLastModified

Specific Error Flags:

  • projectIdError
  • resourceSetIdError
  • fileNameError
  • diskSpaceLimitExceededError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"createFile",
  "projectId":23,
  "resourceSetId":143,
  "fileName":"test.txt",
  "fileContentB64":"VGhpcyBpcyB0aGUgY29udGVudCBvZiB0aGUgZmlsZS4=" 
}

JSON Response Example (Success Case):

{
  "fileName":"test.txt",
  "fileSize":32,
  "fileHashCode":-1460278014,
  "fileLastModified":1625423562384,
  "isError":false
}

JSON Response Example (Error Case):

{"isError":true,"genericErrorText":"","projectIdError":false,"resourceSetIdError":false,"diskSpaceLimitExceededError":false,"fileNameError":true}
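Because fileContentB64 must be Base64-encoded, a client typically encodes the raw file bytes before building the createFile request. A minimal sketch (Python standard library; the helper name is our own, and the token and ids are the example values from above):

```python
import base64
import json

def build_create_file_request(auth_token: str, project_id: int,
                              resource_set_id: int, file_name: str,
                              content: bytes) -> str:
    """Build the JSON body for a createFile call, Base64-encoding the content."""
    return json.dumps({
        "authTokenValue": auth_token,
        "action": "createFile",
        "projectId": project_id,
        "resourceSetId": resource_set_id,
        "fileName": file_name,
        "fileContentB64": base64.b64encode(content).decode("ascii"),
    })
```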

getFile

Specific Request Fields:

  • projectId
  • resourceSetId
  • fileName

Response Fields:

  • fileName
  • fileContentB64 (the content of the file, in Base64 format)
  • fileSize
  • fileHashCode
  • fileLastModified

Specific Error Flags:

  • projectIdError
  • resourceSetIdError
  • fileNameError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getFile",
  "projectId":23,
  "resourceSetId":143,
  "fileName":"test.txt"
}

JSON Response Example (Success Case):

{
  "fileName":"test.txt",
  "fileContentB64":"VGhpcyBpcyB0aGUgY29udGVudCBvZiB0aGUgZmlsZS4=",
  "fileSize":32,
  "fileHashCode":-1460278014,
  "fileLastModified":1625423562384,
  "isError":false
}
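Conversely, the fileContentB64 field of a getFile response decodes back to the raw file bytes. A sketch (the helper name is our own):

```python
import base64

def extract_file_content(get_file_response: dict) -> bytes:
    """Decode fileContentB64 from a getFile response into the raw file bytes."""
    return base64.b64decode(get_file_response["fileContentB64"])
```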

deleteFile

Specific Request Fields:

  • projectId
  • resourceSetId
  • fileName
  • moveToTrash (optional, default: false)

Response Fields:

  • fileDeleted (a flag which is true if the file was deleted)

Specific Error Flags:

  • projectIdError
  • resourceSetIdError
  • fileNameError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"deleteFile",
  "projectId":23,
  "resourceSetId":143,
  "fileName":"test.txt",
  "moveToTrash":false
}

JSON Response Example (Success Case):

{"fileDeleted":true,"isError":false}

JSON Response Example (Error Case):

{"isError":true,"genericErrorText":"","projectIdError":false,"resourceSetIdError":false,"fileNameError":true}

getMeasuringAgents

Specific Request Fields:

  • [none]

Response Fields:

  • agentId (the unique measuring agent id)
  • createdBySystem (normally false, true = the user cannot modify or delete the measuring agent)
  • ownerUserId (always the same as the user account id)
  • agentActive (flag: if false then the availability of the measuring agent is not monitored)
  • agentDescription
  • agentHost
  • agentPort
  • authToken (the authentication token to access the measuring agent, or an empty string = no access protection | don’t confuse it with the API authTokenValue)

Specific Error Flags:

  • [none]

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getMeasuringAgents"
}

JSON Response Example (Success Case):

{
  "measuringAgentsArray": [
    {
      "agentId": 46,
      "createdBySystem": false,
      "ownerUserId": 13,
      "agentActive": true,
      "agentDescription": "Local Agent",
      "agentHost": "127.0.0.1",
      "agentPort": 8080,
      "authToken": "OrKmpkbyNWEHok"
    },
    {
      "agentId": 49,
      "createdBySystem": false,
      "ownerUserId": 13,
      "agentActive": false,
      "agentDescription": "Raspberry 1",
      "agentHost": "192.168.0.51",
      "agentPort": 8080,
      "authToken": ""
    },
    {
      "agentId": 50,
      "createdBySystem": false,
      "ownerUserId": 13,
      "agentActive": true,
      "agentDescription": "Ubuntu 10",
      "agentHost": "192.168.0.110",
      "agentPort": 8080,
      "authToken": ""
    },
    {
      "agentId": 51,
      "createdBySystem": false,
      "ownerUserId": 13,
      "agentActive": true,
      "agentDescription": "Ubuntu 11",
      "agentHost": "192.168.0.111",
      "agentPort": 8080,
      "authToken": ""
    }
  ],
  "isError": false
}

getMinRequiredMeasuringAgentVersion

Specific Request Fields:

  • [none]

Response Fields:

  • minRequiredMeasuringAgentVersion (the minimum required measuring agent version)

Specific Error Flags:

  • [none]

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getMinRequiredMeasuringAgentVersion"
}

JSON Response Example (Success Case):

{
  "minRequiredMeasuringAgentVersion":"3.9.34",
  "isError":false
}

addMeasuringAgent

Specific Request Fields:

  • agentDescription (must be unique across all measuring agents of the user, cannot be an empty string)
  • agentHost
  • agentPort
  • agentActive (flag: if false then the availability of the measuring agent is not monitored)
  • agentAuthToken (the authentication token to access the measuring agent, or an empty string = no access protection)

Response Fields (JSON object “measuringAgent”):

  • agentId (the unique measuring agent id)
  • createdBySystem (always false for this function)
  • ownerUserId (always the same as the user account id)
  • agentActive (flag: if false then the availability of the measuring agent is not monitored)
  • agentDescription
  • agentHost
  • agentPort
  • authToken (the authentication token to access the measuring agent, or an empty string = no access protection)

Specific Error Flags:

  • agentDescriptionError
  • agentHostError
  • agentPortError
  • maxNumberMeasuringAgentsLimitExceededError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"addMeasuringAgent",
  "agentDescription":"Ubuntu 12",
  "agentHost":"192.168.0.112",
  "agentPort":8080,
  "agentActive": true,
  "agentAuthToken": "nixda"
}

JSON Response Example (Success Case):

{
  "measuringAgent": {
    "agentId": 53,
    "createdBySystem": false,
    "ownerUserId": 13,
    "agentActive": true,
    "agentDescription": "Ubuntu 12",
    "agentHost": "192.168.0.112",
    "agentPort": 8080,
    "authToken": "nixda"
  },
  "isError": false
}

pingMeasuringAgent

Specific Request Fields:

  • agentId

Response Fields (JSON object “agentResponse”):

  • pingFromRemoteIp
  • pingFromRemoteUserId
  • productVersion (measuring agent version | don’t confuse with portal server version)
  • limitMaxUsersPerJob (limit of the measuring agent, -1 = unlimited | don’t confuse with price plan limit)
  • limitMaxJobDurationSeconds (limit of the measuring agent, -1 = unlimited | don’t confuse with price plan limit)
  • osName
  • osVersion
  • javaVersion
  • javaVendor
  • javaMaxMemory
  • systemTime
  • deltaTimeMillis
  • agentStartupTimeStamp
  • httpExecuteTimeMillis

Specific Error Flags:

  • agentIdError
  • agentAccessDeniedError
  • agentVersionOutdatedError
  • agentNotReachableError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"pingMeasuringAgent",
  "agentId":48
}

JSON Response Example (Success Case):

{
  "agentResponse": {
    "pingFromRemoteIp": "83.150.39.44",
    "pingFromRemoteUserId": 13,
    "productVersion": "3.9.30",
    "limitMaxUsersPerJob": 500,
    "limitMaxJobDurationSeconds": 300,
    "osName": "Linux",
    "osVersion": "4.15.0-136-generic",
    "javaVersion": "11.0.1",
    "javaVendor": "Oracle Corporation",
    "javaMaxMemory":"2048 MB",
    "systemTime": 1625513238236,
    "deltaTimeMillis": 841,
    "agentStartupTimeStamp": 1622836702172,
    "httpExecuteTimeMillis": 247
  },
  "isError": false
}

JSON Response Example (Error Case 1):

{
  "isError": true,
  "genericErrorText": "API V1 request to 192.168.0.51:8080 timed out",
  "agentIdError": false,
  "agentAccessDeniedError": false,
  "agentNotReachableError": true,
  "agentVersionOutdatedError": false
}

JSON Response Example (Error Case 2):

{
  "isError": true,
  "genericErrorText": "Min. measuring agent version required: 3.9.30",
  "agentIdError": false,
  "agentAccessDeniedError": false,
  "agentNotReachableError": false,
  "agentVersionOutdatedError": true
}
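In the error examples above, each failure sets one specific flag, so a client can translate the flags into readable outcomes. A sketch assuming the flag names listed above (the function name and messages are our own):

```python
# Order matters only for readability; the error examples set one flag each.
PING_ERROR_FLAGS = [
    ("agentIdError", "invalid measuring agent id"),
    ("agentAccessDeniedError", "access to the measuring agent denied"),
    ("agentNotReachableError", "measuring agent not reachable"),
    ("agentVersionOutdatedError", "measuring agent version outdated"),
]

def describe_ping_error(response: dict) -> str:
    """Return a readable reason for a pingMeasuringAgent result."""
    if not response.get("isError"):
        return "success"
    for flag, reason in PING_ERROR_FLAGS:
        if response.get(flag):
            return reason
    return response.get("genericErrorText") or "unknown error"
```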

setMeasuringAgentActive

Specific Request Fields:

  • agentId
  • agentActive

Response Fields:

  • [none]

Specific Error Flags:

  • agentIdError
  • agentAccessDeniedError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"setMeasuringAgentActive",
  "agentId":46,
  "agentActive":false
}

JSON Response Example (Success Case):

{"isError":false}

deleteMeasuringAgent

Specific Request Fields:

  • agentId

Response Fields:

  • [none]

Specific Error Flags:

  • agentIdError
  • agentAccessDeniedError
  • agentDeleteDeniedError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"deleteMeasuringAgent",
  "agentId":54
}

JSON Response Example (Success Case):

{"isError":false}

getMeasuringAgentClusters

Specific Request Fields:

  • [none]

Response Fields (JSON array “measuringAgentClustersArray”):

  • clusterId (the unique cluster id)
  • createdBySystem (normally false, true = the user cannot modify or delete the cluster)
  • ownerUserId (always the same as the user account id)
  • clusterActive (flag: if false then the availability of the cluster is not monitored)
  • clusterDescription
  • controllerHost (the hostname or IP address of the cluster controller)
  • controllerPort (the IP port of the cluster controller)
  • controllerAuthToken (the authentication token to access the cluster controller, or an empty string = no access protection | don’t confuse it with the API authTokenValue)
  • clusterMembersArray
    • clusterMemberId (the unique cluster member id)
    • loadFactor (integer 0..100: the default load factor of this cluster member)
    • agentId (the referenced measuring agent id)
    • agentActive (flag: if false then the availability of the measuring agent is not monitored)
    • agentDescription
    • agentHost
    • agentPort
    • agentAuthToken (the authentication token to access the measuring agent, or an empty string = no access protection | don’t confuse it with the API authTokenValue)

Specific Error Flags:

  • [none]

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getMeasuringAgentClusters"
}

JSON Response Example:

{
  "measuringAgentClustersArray": [
    {
      "clusterId": 11,
      "createdBySystem": false,
      "ownerUserId": 13,
      "clusterActive": true,
      "clusterDescription": "C1",
      "controllerHost": "192.168.0.50",
      "controllerPort": 8083,
      "controllerAuthToken": "aberaber",
      "clusterMembersArray": [
        {
          "clusterMemberId": 59,
          "loadFactor": 1,
          "agentId": 64,
          "agentActive": true,
          "agentDescription": "Agent 1",
          "agentHost": "192.168.0.10",
          "agentPort": 8080,
          "agentAuthToken": "OrKmAAbyNWEHok"
        },
        {
          "clusterMemberId": 60,
          "loadFactor": 1,
          "agentId": 59,
          "agentActive": true,
          "agentDescription": "Ubuntu 10",
          "agentHost": "192.168.0.110",
          "agentPort": 8080,
          "agentAuthToken": "asc7jhacab"
        },
        {
          "clusterMemberId": 61,
          "loadFactor": 1,
          "agentId": 60,
          "agentActive": true,
          "agentDescription": "Ubuntu 11",
          "agentHost": "192.168.0.111",
          "agentPort": 8080,
          "agentAuthToken": "66ascascsdac"
        }
      ]
    },
    {
      "clusterId": 14,
      "createdBySystem": false,
      "ownerUserId": 13,
      "clusterActive": true,
      "clusterDescription": "C2",
      "controllerHost": "192.168.0.50",
      "controllerPort": 8083,
      "controllerAuthToken": "aberaber",
      "clusterMembersArray": [
        {
          "clusterMemberId": 66,
          "loadFactor": 1,
          "agentId": 56,
          "agentActive": true,
          "agentDescription": "Test System",
          "agentHost": "192.168.0.60",
          "agentPort": 8080,
          "agentAuthToken": "aberdoch"
        },
        {
          "clusterMemberId": 67,
          "loadFactor": 1,
          "agentId": 59,
          "agentActive": true,
          "agentDescription": "Ubuntu 10",
          "agentHost": "192.168.0.110",
          "agentPort": 8080,
          "agentAuthToken": "asc7jhacab"
        }
      ]
    }
  ],
  "isError": false
}

getClusterControllers

Specific Request Fields:

  • [none]

Response Fields (JSON array “clusterControllersArray”):

  • controllerHost (the hostname or IP address of the cluster controller)
  • controllerPort (the IP port of the cluster controller)
  • controllerAuthToken (the authentication token to access the cluster controller)
  • measuringAgentClustersArray (an array of measuring agent clusters that reference this cluster controller)
    • clusterId
    • clusterDescription
    • clusterActive

Specific Error Flags:

  • [none]

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getClusterControllers"
}

JSON Response Example:

{
  "clusterControllersArray": [
    {
      "controllerHost": "192.168.0.33",
      "controllerPort": 8083,
      "controllerAuthToken": "2fasdtfffe",
      "measuringAgentClustersArray": [
        {
          "clusterId": 11,
          "clusterDescription": "C1",
          "clusterActive": 1
        },
        {
          "clusterId": 13,
          "clusterDescription": "C2",
          "clusterActive": 1
        },
        {
          "clusterId": 14,
          "clusterDescription": "C3",
          "clusterActive": 1
        }
      ]
    },
    {
      "controllerHost": "192.168.0.50",
      "controllerPort": 8083,
      "controllerAuthToken": "asfsdgh763",
      "measuringAgentClustersArray": [
        {
          "clusterId": 15,
          "clusterDescription": "C4",
          "clusterActive": 1
        },
        {
          "clusterId": 16,
          "clusterDescription": "C7",
          "clusterActive": 1
        }
      ]
    }
  ],
  "isError": false
}
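
All of these actions share the same request pattern: a JSON body containing authTokenValue and action (plus any action-specific fields), sent to the portal server's API endpoint. As a rough sketch of a client-side helper - the endpoint URL, the helper names and the use of Python's urllib here are illustrative assumptions, not taken from this documentation:

```python
import json
import urllib.request

# Placeholder endpoint URL - substitute your portal server's actual API URL.
API_URL = "https://portal.example.com/api"

def build_request(auth_token, action, **fields):
    """Assemble the JSON request body common to all API actions."""
    body = {"authTokenValue": auth_token, "action": action}
    body.update(fields)  # action-specific request fields, e.g. clusterId
    return body

def call_api(auth_token, action, **fields):
    """POST the JSON request and decode the JSON response."""
    data = json.dumps(build_request(auth_token, action, **fields)).encode("utf-8")
    req = urllib.request.Request(API_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With this sketch, the getClusterControllers example above reduces to call_api(token, "getClusterControllers"), and a response should always be checked for "isError" before its payload fields are read.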

getMinRequiredClusterControllerVersion

Specific Request Fields:

  • [none]

Response Fields:

  • minRequiredClusterControllerVersion (the minimum required cluster controller version)

Specific Error Flags:

  • [none]

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getMinRequiredClusterControllerVersion"
}

JSON Response Example:

{
  "minRequiredClusterControllerVersion":"4.0.4",
  "isError":false
}

pingClusterController

Specific Request Fields:

  • controllerHost (the cluster controller host name or IP address)
  • controllerPort (the cluster controller IP port)
  • controllerAuthToken (the authentication token to access the cluster controller, or an empty string = no access protection)

Response Fields (JSON object “controllerResponse”):

  • pingFromRemoteIp
  • pingFromRemoteUserId
  • productVersion (cluster controller version | don’t confuse with portal server version)
  • osName
  • osVersion
  • javaVersion
  • javaVendor
  • javaMaxMemory
  • systemTime
  • deltaTimeMillis
  • controllerStartupTimeStamp
  • httpExecuteTimeMillis
  • clusterControllerOutdated

Specific Error Flags:

  • controllerHostError
  • controllerPortError
  • controllerVersionOutdatedError
  • controllerNotReachableError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"pingClusterController",
    "controllerHost":"192.168.0.50",
    "controllerPort":8083,
    "controllerAuthToken":"hagsajjs99"
}

JSON Response Example (Success Case):

{
  "controllerResponse": {
    "pingFromRemoteIp": "192.168.0.100",
    "pingFromRemoteUserId": 13,
    "productVersion": "4.0.4",
    "osName": "Linux",
    "osVersion": "4.15.0-135-generic",
    "javaVersion": "11.0.1",
    "javaVendor": "Oracle Corporation",
    "javaMaxMemory": "512 MB",
    "systemTime": 1643406118552,
    "deltaTimeMillis": 1120,
    "controllerStartupTimeStamp": 1643322597013,
    "httpExecuteTimeMillis": 249,
    "clusterControllerOutdated": false
  },
  "isError": false
}

JSON Response Example (Error Case 1):

{
  "isError": true,
  "genericErrorText": "API call pingGetControllerInfo failed. Error code = 18, Error message = Invalid authentication token",
  "controllerHostError": false,
  "controllerVersionOutdatedError": false,
  "controllerNotReachableError": true,
  "controllerPortError": false
}

JSON Response Example (Error Case 2):

{
  "isError": true,
  "genericErrorText": "Min. cluster controller version required: 4.0.4",
  "controllerHostError": false,
  "controllerVersionOutdatedError": true,
  "controllerNotReachableError": false,
  "controllerPortError": false
}
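
A caller would typically check isError first and then branch on the specific error flags, as in this sketch (the function name is ours; the flag names come from the error cases above):

```python
def classify_ping_error(response):
    """Map a pingClusterController JSON response to a short reason string."""
    if not response.get("isError"):
        return "ok"
    if response.get("controllerNotReachableError"):
        return "not reachable"
    if response.get("controllerVersionOutdatedError"):
        return "outdated: " + response.get("genericErrorText", "")
    if response.get("controllerHostError") or response.get("controllerPortError"):
        return "bad host/port"
    return "unknown error"
```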

addMeasuringAgentCluster

Specific Request Fields:

  • clusterActive (flag: if false then the availability of the cluster is not monitored)
  • clusterDescription (must be unique across all measuring agent clusters and all measuring agents of the user, cannot be an empty string)
  • controllerHost (the cluster controller host name or IP address)
  • controllerPort (the cluster controller IP port)
  • controllerAuthToken (the authentication token to access the cluster controller, or an empty string = no access protection)
  • clusterMembersArray (an array of cluster members - can also be empty)
    • agentId (the referenced measuring agent id)
    • loadFactor (integer 0..100: the load factor of this cluster member, recommended value = 1)

Response Fields:

  • clusterId (the unique cluster id)
  • clusterMembersArray (the array of cluster members)
    • clusterMemberId (the unique cluster member id)
    • agentId (the referenced measuring agent id)
    • loadFactor (integer 0..100: the load factor of this cluster member)

Specific Error Flags:

  • clusterDescriptionError
  • controllerHostError
  • controllerPortError
  • agentIdError
  • loadFactorError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"addMeasuringAgentCluster",
  "clusterActive":true,
  "clusterDescription":"C7",
  "controllerHost":"192.168.0.50",
  "controllerPort":8083,
  "controllerAuthToken":"aberaber",
  "clusterMembersArray":[
    {
      "agentId":59,
      "loadFactor":1
    },
    {
      "agentId":60,
      "loadFactor":1
    }
  ]
}

JSON Response Example (Success Case):

{
  "clusterId":16,
  "clusterMembersArray":[
    {
      "clusterMemberId":71,
      "agentId":59,
      "loadFactor":1
    },
    {
      "clusterMemberId":72,
      "agentId":60,
      "loadFactor":1
    }
  ],
  "isError":false
}

JSON Response Example (Error Case):

{
  "isError": true,
  "genericErrorText": "Invalid agentId = 101",
  "controllerHostError": false,
  "agentIdError": true,
  "controllerPortError": false,
  "loadFactorError": false,
  "clusterDescriptionError": false
}

addMemberToMeasuringAgentCluster

Specific Request Fields:

  • clusterId
  • agentId (the referenced measuring agent id)
  • loadFactor (integer 0..100: the load factor of this cluster member, recommended value = 1)

Response Fields (JSON object “clusterMember”):

  • clusterMemberId (the unique cluster member id)
  • agentId
  • loadFactor

Specific Error Flags:

  • clusterIdError
  • clusterAccessDeniedError
  • clusterModifyDeniedError
  • agentIdError
  • agentAccessDeniedError
  • loadFactorError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"addMemberToMeasuringAgentCluster",
  "clusterId":17,
  "agentId":64,
  "loadFactor":1
}

JSON Response Example:

{
  "clusterMember": {
    "clusterMemberId": 75,
    "agentId": 64,
    "loadFactor": 1
  },
  "isError": false
}

removeMemberFromMeasuringAgentCluster

Specific Request Fields:

  • clusterId
  • clusterMemberId

Response Fields:

  • [none]

Specific Error Flags:

  • clusterIdError
  • clusterAccessDeniedError
  • clusterModifyDeniedError
  • clusterMemberIdError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"removeMemberFromMeasuringAgentCluster",
  "clusterId":17,
  "clusterMemberId":75
}

JSON Response Example:

{"isError":false}

pingMeasuringAgentClusterMembers

Specific Request Fields:

  • clusterId

Response Fields (JSON object “controllerResponse”):

  • productVersion (cluster controller version)
  • clusterConnectResult (the connect information to the cluster members)
    • measuringAgentClusterMemberArray (the array of cluster members)
      • clusterMemberId
      • loadFactor
      • agentId
      • agentActive
      • agentDescription
      • agentHost
      • agentPort
      • agentAuthToken
    • connectSuccessfulClusterMemberArray (the array of cluster member ids to which the connection was successfully established)
    • connectFailedClusterMemberArray (the array of cluster members to which the connection has failed)
      • clusterMemberId
      • errorMessage
    • clusterConnectStartTimestamp
    • clusterConnectDurationMillis
  • clusterActionResult (the ping result of the cluster members)
    • actionSuccessfulClusterMemberArray (the array of cluster member ids which have performed the ping to the measuring agent)
    • actionFailedClusterMemberArray (the array of cluster members which have not performed the ping to the measuring agent)
      • clusterMemberId
      • errorMessage
    • jsonResponseClusterMemberArray (the array of cluster members which have performed the ping)
      • clusterMemberId
      • jsonResponseObject (the pong response of the cluster member)
        • productVersion (measuring agent product version)
        • systemTime
        • deltaTimeMillis (the OS time difference in milliseconds between the cluster controller and the measuring agent)
        • osName
        • osVersion
        • javaVersion
        • javaVendor
        • javaMaxMemory
        • samplingGranularityMillis (the data collector sampling granularity in milliseconds)
        • isError (boolean flag, normally always false)
        • measuringAgentOutdated (a boolean flag, true = measuring agent product version is outdated)
    • clusterActionStartTimestamp
    • clusterActionDurationMillis
  • httpExecuteTimeMillis
  • clusterControllerOutdated

Specific Error Flags:

  • clusterIdError
  • clusterAccessDeniedError
  • controllerVersionOutdatedError
  • controllerNotReachableError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"pingMeasuringAgentClusterMembers",
  "clusterId":16
}

JSON Response Example (Success Case):

{
  "controllerResponse": {
    "productVersion": "4.0.4",
    "clusterConnectResult": {
      "measuringAgentClusterMemberArray": [
        {
          "clusterMemberId": 71,
          "loadFactor": 1,
          "agentId": 59,
          "agentActive": true,
          "agentDescription": "Ubuntu 10",
          "agentHost": "192.168.0.110",
          "agentPort": 8080,
          "agentAuthToken": "agsdhagsj"
        },
        {
          "clusterMemberId": 72,
          "loadFactor": 1,
          "agentId": 60,
          "agentActive": true,
          "agentDescription": "Ubuntu 11",
          "agentHost": "192.168.0.111",
          "agentPort": 8080,
          "agentAuthToken": "nvbjnvbnn"
        }
      ],
      "connectSuccessfulClusterMemberArray": [
        71,
        72
      ],
      "connectFailedClusterMemberArray": [],
      "clusterConnectStartTimestamp": 1643410829270,
      "clusterConnectDurationMillis": 79
    },
    "clusterActionResult": {
      "actionSuccessfulClusterMemberArray": [
        71,
        72
      ],
      "actionFailedClusterMemberArray": [],
      "jsonResponseClusterMemberArray": [
        {
          "clusterMemberId": 71,
          "jsonResponseObject": {
            "productVersion": "4.0.4",
            "systemTime": 1643410829340,
            "deltaTimeMillis": -10,
            "osName": "Linux",
            "osVersion": "5.4.0-92-generic",
            "javaVersion": "11.0.1",
            "javaVendor": "Oracle Corporation",
            "javaMaxMemory": "2048 MB",
            "samplingGranularityMillis": 4000,
            "isError": false,
            "measuringAgentOutdated": false
          }
        },
        {
          "clusterMemberId": 72,
          "jsonResponseObject": {
            "productVersion": "4.0.4",
            "systemTime": 1643410829351,
            "deltaTimeMillis": -10,
            "osName": "Linux",
            "osVersion": "5.4.0-92-generic",
            "javaVersion": "11.0.1",
            "javaVendor": "Oracle Corporation",
            "javaMaxMemory": "2048 MB",
            "samplingGranularityMillis": 4000,
            "isError": false,
            "measuringAgentOutdated": false
          }
        }
      ],
      "clusterActionStartTimestamp": 1643410829349,
      "clusterActionDurationMillis": 43
    },
    "httpExecuteTimeMillis": 1778,
    "clusterControllerOutdated": false
  },
  "isError": false
}

JSON Response Example (Error Case / Partly failed):

{
  "controllerResponse": {
    "productVersion": "4.0.4",
    "clusterConnectResult": {
      "measuringAgentClusterMemberArray": [
        {
          "clusterMemberId": 71,
          "loadFactor": 1,
          "agentId": 59,
          "agentActive": true,
          "agentDescription": "Ubuntu 10",
          "agentHost": "192.168.0.110",
          "agentPort": 8080,
          "agentAuthToken": "marderzahn"
        },
        {
          "clusterMemberId": 72,
          "loadFactor": 1,
          "agentId": 60,
          "agentActive": true,
          "agentDescription": "Ubuntu 11",
          "agentHost": "192.168.0.111",
          "agentPort": 8080,
          "agentAuthToken": "marderzahn"
        }
      ],
      "connectSuccessfulClusterMemberArray": [
        72
      ],
      "connectFailedClusterMemberArray": [
        {
          "clusterMemberId": 71,
          "errorMessage": "Connection refused (Connection refused)"
        }
      ],
      "clusterConnectStartTimestamp": 1643414272214,
      "clusterConnectDurationMillis": 97
    },
    "clusterActionResult": {
      "actionSuccessfulClusterMemberArray": [
        72
      ],
      "actionFailedClusterMemberArray": [],
      "jsonResponseClusterMemberArray": [
        {
          "clusterMemberId": 72,
          "jsonResponseObject": {
            "productVersion": "4.0.4",
            "systemTime": 1643414272310,
            "deltaTimeMillis": -8,
            "osName": "Linux",
            "osVersion": "5.4.0-92-generic",
            "javaVersion": "11.0.1",
            "javaVendor": "Oracle Corporation",
            "javaMaxMemory": "2048 MB",
            "samplingGranularityMillis": 4000,
            "isError": false,
            "measuringAgentOutdated": false
          }
        }
      ],
      "clusterActionStartTimestamp": 1643414272311,
      "clusterActionDurationMillis": 21
    },
    "httpExecuteTimeMillis": 1769,
    "clusterControllerOutdated": false
  }
}

setMeasuringAgentClusterActive

Specific Request Fields:

  • clusterId
  • clusterActive

Response Fields:

  • [none]

Specific Error Flags:

  • clusterIdError
  • clusterAccessDeniedError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"setMeasuringAgentClusterActive",
  "clusterId":16,
  "clusterActive":true
}

JSON Response Example (Success Case):

{"isError":false}

deleteMeasuringAgentCluster

Specific Request Fields:

  • clusterId

Response Fields:

  • [none]

Specific Error Flags:

  • clusterIdError
  • clusterAccessDeniedError
  • clusterDeleteDeniedError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"deleteMeasuringAgentCluster",
  "clusterId":16
}

JSON Response Example (Success Case):

{"isError":false}

getProxyRecorders

Specific Request Fields:

  • [none]

Response Fields (JSON array “proxyRecordersArray”):

  • recorderId (the unique proxy recorder id)
  • createdBySystem (normally false, true = the user cannot modify or delete the proxy recorder)
  • ownerUserId (always the same as the user account id)
  • recorderDescription
  • recorderProxyHost
  • recorderProxyPort (HTTP and HTTPS port of the proxy)
  • recorderProxyAuthUsername (proxy authentication username, or an empty string = no proxy authentication required)
  • recorderProxyAuthPassword (proxy authentication password)
  • recorderControlPort (the proxy recorder control port)
  • recorderControlAuthToken (the authentication token to access the proxy recorder control port, or an empty string = no access protection | don’t confuse it with the API authTokenValue)

Specific Error Flags:

  • [none]

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getProxyRecorders"
}

JSON Response Example:

{
  "proxyRecordersArray": [
    {
      "recorderId": 3,
      "createdBySystem": false,
      "ownerUserId": 13,
      "recorderDescription": "Erster",
      "recorderProxyHost": "192.168.0.40",
      "recorderProxyPort": 8082,
      "recorderProxyAuthUsername": "",
      "recorderProxyAuthPassword": "",
      "recorderControlPort": 8081,
      "recorderControlAuthToken": ""
    },
    {
      "recorderId": 4,
      "createdBySystem": false,
      "ownerUserId": 13,
      "recorderDescription": "proxy.realload.com",
      "recorderProxyHost": "proxy.realload.com",
      "recorderProxyPort": 8082,
      "recorderProxyAuthUsername": "max.meier",
      "recorderProxyAuthPassword": "123456",
      "recorderControlPort": 8081,
      "recorderControlAuthToken": "aZujkl97zuwert"
    }
  ],
  "isError": false
}

getMinRequiredProxyRecorderVersion

Specific Request Fields:

  • [none]

Response Fields:

  • minRequiredProxyRecorderVersion (the minimum required HTTP/S proxy recorder version)

Specific Error Flags:

  • [none]

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"getMinRequiredProxyRecorderVersion"
}

JSON Response Example (Success Case):

{
  "minRequiredProxyRecorderVersion":"0.2.2",
  "isError":false
}

addProxyRecorder

Specific Request Fields:

  • recorderDescription (must be unique across all HTTP/S proxy recorders of the user, cannot be an empty string)
  • recorderProxyHost
  • recorderProxyPort (HTTP and HTTPS port of the proxy)
  • recorderProxyAuthUsername (proxy authentication username, or an empty string = no proxy authentication required)
  • recorderProxyAuthPassword (proxy authentication password, applied if recorderProxyAuthUsername is not an empty string)
  • recorderControlPort (the proxy recorder control port)
  • recorderControlAuthToken (the authentication token to access the proxy recorder control port, or an empty string = no access protection)

Response Fields (JSON object “proxyRecorder”):

  • recorderId (the unique HTTP/S proxy recorder id)
  • createdBySystem (always false for this function)
  • ownerUserId (always the same as the user account id)
  • recorderDescription
  • recorderProxyHost
  • recorderProxyPort
  • recorderProxyAuthUsername
  • recorderProxyAuthPassword
  • recorderControlPort
  • recorderControlAuthToken

Specific Error Flags:

  • recorderDescriptionError
  • recorderProxyHostError
  • recorderProxyPortError
  • recorderControlPortError
  • maxNumberProxyRecordersLimitExceededError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"addProxyRecorder",
  "recorderDescription":"My New Proxy Recorder",
  "recorderProxyHost":"192.168.0.148",
  "recorderProxyPort":8082,
  "recorderProxyAuthUsername":"max.meier",
  "recorderProxyAuthPassword":"123456",
  "recorderControlPort":8081,
  "recorderControlAuthToken":"aZujkl97zuwert"
}

JSON Response Example (Success Case):

{
  "proxyRecorder": {
    "recorderId": 10,
    "createdBySystem": false,
    "ownerUserId": 13,
    "recorderDescription": "My New Proxy Recorder",
    "recorderProxyHost": "192.168.0.148",
    "recorderProxyPort": 8082,
    "recorderProxyAuthUsername": "max.meier",
    "recorderProxyAuthPassword": "123456",
    "recorderControlPort": 8081,
    "recorderControlAuthToken": "aZujkl97zuwert"
  },
  "isError": false
}

pingProxyRecorder

Specific Request Fields:

  • recorderId

Response Fields (JSON object “pongResponse”):

  • pingFromRemoteIp
  • pingFromRemoteUserId
  • productVersion (the remote proxy recorder version | don’t confuse with portal server version)
  • recorderComponentVersion (the proxy recorder component version | don’t confuse with portal server version)
  • isRecording
  • recordHostFilter
  • numRecordedElements
  • osName
  • osVersion
  • javaMemoryMB
  • javaVersion
  • javaVendor
  • systemTime
  • deltaTimeMillis
  • httpExecuteTimeMillis

Specific Error Flags:

  • recorderIdError
  • recorderAccessDeniedError
  • recorderNotReachableError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"pingProxyRecorder",
  "recorderId":4
}

JSON Response Example (Success Case):

{
  "pongResponse": {
    "pingFromRemoteIp": "83.150.39.44",
    "pingFromRemoteUserId": 13,
    "productVersion": "0.2.0",
    "recorderComponentVersion": "1.1.0",
    "isRecording": false,
    "recordHostFilter": "www.dkfqa.com",
    "numRecordedElements": 0,
    "osName": "Linux",
    "osVersion": "5.4.0-74-generic",
    "javaMemoryMB": 2048,
    "javaVersion": "11.0.1",
    "javaVendor": "Oracle Corporation",
    "systemTime": 1625529858405,
    "deltaTimeMillis": 790,
    "httpExecuteTimeMillis": 88
  },
  "isError": false
}

JSON Response Example (Error Case):

{
  "isError": true,
  "genericErrorText": "connect timed out",
  "recorderNotReachableError": true,
  "recorderIdError": false,
  "recorderAccessDeniedError": false
}

deleteProxyRecorder

Specific Request Fields:

  • recorderId

Response Fields:

  • [none]

Specific Error Flags:

  • recorderIdError
  • recorderAccessDeniedError
  • recorderDeleteDeniedError

JSON Request Example:

{
  "authTokenValue":"jPmFClqeDUXaEk8Q274q",
  "action":"deleteProxyRecorder",
  "recorderId":10
}

JSON Response Example (Success Case):

{"isError":false}

6 - Installation

How to install Real Load

Usually you just need a Portal Server account and the “Desktop Companion” installed on your laptop, which enables you to record and upload HTTP/S sessions and to launch Measuring Agents on Amazon EC2. You don’t need a special installation license for the Desktop Companion. However, user licenses are required to use the Portal Server and to perform tests on Measuring Agents - see https://shop.realload.com/.

The Real Load components Measuring Agent(s), Cluster Controller(s) and Remote Proxy Recorder(s) can also be installed and operated on your own hosted machines.

The installation and operation of your own dedicated Portal Server requires a contract with us and a special commercial license.

The software can be downloaded from https://download.realload.com

Prerequisites

Supported operating systems / for all Real Load components

  • Windows 10 / Windows Server 2012 or newer.
  • Centos 8
  • Red Hat Linux 6
  • Ubuntu 16, 18 or 20
  • OS X

Installation

Install using the installer

Desktop Companion Windows installer: https://download.realload.com/desktop_companion/latest_win64

Manual installation

Follow the links below to perform a manual installation.

6.1 - Ubuntu 16/18/20 Measuring Agent manual install

Ubuntu 16/18/20 Measuring Agent Install Instructions

Prerequisites

Supported Hardware

  • Amazon EC2 Cloud instances, or
  • Own hosted Servers with any Intel or AMD CPU, or
  • Own hosted Raspberry Pi 4 Model B / 8 GB (ARM CPU) / Ubuntu 20 only / for light load tests up to max. 100 concurrent users (with loop iteration delay = 1000 ms)

Minimum Requirements

  • Minimum required CPU cores: 4
  • Minimum required Memory: 8 GB
  • Minimum required Disk: 64 GB
  • Minimum required Network Speed: 100 Mbps (1000 Mbps or faster strongly recommended)

Usual Requirements

  • Suggested Hardware for performing load tests up to 500 concurrent users: Intel CPU i3 / 16 GB Memory / Disk: 256 GB
  • Suggested Hardware for performing load tests up to 1000 concurrent users: Intel CPU i5 / 16 GB Memory / Disk: 512 GB
  • Suggested Hardware for performing load tests up to 5000 concurrent users: Intel CPU i7 / 64 GB Memory / Disk: 1024 GB

Rule of Thumb for Amazon EC2 Instances

  • Per EC2 vCPU, 100 virtual users can be simulated
  • Required Memory: 5 GB + (1 GB per 100 virtual users)
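
The rule of thumb above can be expressed as a small sizing calculation (a sketch only; the helper name is ours):

```python
import math

def ec2_sizing(virtual_users):
    """Estimate EC2 vCPUs and memory (GB) per the rule of thumb:
    100 virtual users per vCPU; memory = 5 GB + 1 GB per 100 users."""
    vcpus = math.ceil(virtual_users / 100)
    memory_gb = 5 + virtual_users / 100
    return vcpus, memory_gb

# e.g. 1000 virtual users -> 10 vCPUs, 15 GB memory
```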

Environment and Location

Tests performed from ‘Measuring Agents’ which are virtualized, or which run in a container environment, often measure incorrect results because additional CPU and network delays occur at the virtualization/container level. It’s recommended that you use bare-metal servers to perform your tests. Alternatively, you can also use Amazon EC2 Cloud instances.

You can place your ‘Measuring Agents’ at any location (anywhere on the internet or inside your local DMZ), depending on which kind of traffic you have to test. Note that your Measuring Agents - usually running on TCP/IP port 8080 (HTTPS) - must be reachable from the ‘Portal Server’, and that you have to enable the corresponding inbound firewall rule.

Network & System Tuning

In /etc/sysctl.conf add:

# TCP/IP Tuning
# =============
fs.file-max = 524288
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 16384 60999
net.core.somaxconn = 256
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576
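
To load the settings above into the running kernel without waiting for a reboot, and to spot-check one of them, you can run:

```shell
# Reload /etc/sysctl.conf into the running kernel
sudo sysctl -p

# Spot-check a value
sysctl net.ipv4.ip_local_port_range
```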

In /etc/security/limits.conf add:

# TCP/IP Tuning
# =============
* soft     nproc          262140
* hard     nproc          262140
* soft     nofile         262140
* hard     nofile         262140
root soft     nproc          262140
root hard     nproc          262140
root soft     nofile         262140
root hard     nofile         262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=8966

If you get a value less than 262140, add the following to /etc/systemd/system.conf:

# Ubuntu Tuning
# =============
DefaultTasksMax=262140

Reboot the system and verify the settings. Enter: ulimit -n

output: 262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=262140

Install Dependencies

Install haveged

sudo apt-get update
sudo apt-get install haveged

Configure the UFW Firewall (optional)

sudo ufw allow ssh
sudo ufw allow 8080/tcp
sudo ufw logging off
sudo ufw enable

Enter: sudo ufw status verbose

Status: active
Logging: off
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
8080/tcp                   ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
8080/tcp (v6)              ALLOW IN    Anywhere (v6)

Install OpenJDK Java 8 and 11 / For Intel and AMD CPUs

Get the Java Installation Kits

wget https://download.java.net/openjdk/jdk8u41/ri/openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz

Install OpenJDK Java 8

gunzip openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
tar -xvf  openjdk-8u41-b04-linux-x64-14_jan_2020.tar
rm openjdk-8u41-b04-linux-x64-14_jan_2020.tar
sudo bash
mkdir /opt/OpenJDK
mv java-se-8u41-ri /opt/OpenJDK
cd /opt/OpenJDK
ls -al
chown root -R java-se-8u41-ri
chgrp root -R java-se-8u41-ri
exit # end sudo bash

Verify the Java 8 installation.

/opt/OpenJDK/java-se-8u41-ri/bin/java -version

openjdk version "1.8.0_41"
OpenJDK Runtime Environment (build 1.8.0_41-b04)
OpenJDK 64-Bit Server VM (build 25.40-b25, mixed mode)

Install OpenJDK Java 11

gunzip openjdk-11.0.1_linux-x64_bin.tar.gz
tar -xvf openjdk-11.0.1_linux-x64_bin.tar
rm openjdk-11.0.1_linux-x64_bin.tar
sudo bash
mv jdk-11.0.1 /opt/OpenJDK
cd /opt/OpenJDK
ls -al
chown root -R jdk-11.0.1
chgrp root -R jdk-11.0.1

Execute the following commands (still as sudo bash):

update-alternatives --install "/usr/bin/java" "java" "/opt/OpenJDK/jdk-11.0.1/bin/java" 1
update-alternatives --install "/usr/bin/javac" "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac" 1
update-alternatives --install "/usr/bin/keytool" "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool" 1
update-alternatives --install "/usr/bin/jar" "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar" 1
update-alternatives --set "java" "/opt/OpenJDK/jdk-11.0.1/bin/java"
update-alternatives --set "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac"
update-alternatives --set "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool"
update-alternatives --set "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar"
exit # end sudo bash

Verify the Java 11 installation.

java -version

openjdk version "11.0.1" 2018-10-16
OpenJDK Runtime Environment 18.9 (build 11.0.1+13)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.1+13, mixed mode)

Install OpenJDK Java 8 and 11 / For Raspberry Pi 4 Model B / ARM CPU

sudo apt install openjdk-8-jre-headless
sudo apt install openjdk-8-jdk-headless
sudo apt install openjdk-11-jre-headless
sudo apt install openjdk-11-jdk-headless

Verify the Java installation.

java -version

openjdk version "11.0.10" 2021-01-19
OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.10)
OpenJDK 64-Bit Server VM (build 11.0.10+9-Ubuntu-0ubuntu1.20.10, mixed mode)

Install PowerShell (optional)

You only need to install PowerShell if you run load tests with PowerShell scripts.

# Install PowerShell
sudo snap install powershell --classic

# Start PowerShell
pwsh
exit

Install the Measuring Agent

Create the DKFQS account which is running the Measuring Agent

sudo adduser dkfqs    # follow the questions, remember or write down the password

Install the Measuring Agent

Log in with the dkfqs account (SSH), or enter: sudo -u dkfqs bash. Alternatively, install Samba to get convenient access to /home/dkfqs as the Samba dkfqs user.

Create the directory /home/dkfqs/agent (as dkfqs user):

cd /home/dkfqs
mkdir agent

Create the following sub-directories at /home/dkfqs/agent (as dkfqs user):

  • bin
  • config
  • internalData
  • log
  • scripts
  • usersData

cd /home/dkfqs/agent
mkdir bin config internalData log scripts usersData

Copy the following files to the bin directory /home/dkfqs/agent/bin

  • bcpkix-jdk15on-160.jar
  • bcprov-jdk15on-160.jar
  • bctls-jdk15on-160.jar
  • DKFQSMeasuringAgent.jar

Copy the following files to the config directory /home/dkfqs/agent/config

  • datacollector.properties
  • measuringagent.properties

Modify the measuringagent.properties file. Set the following properties:

  • HttpsCertificateCN (set the public DNS name or the IP address for the automatically generated SSL/TLS server certificate)
  • HttpsCertificateIP (set the public IP address for the automatically generated SSL/TLS server certificate)
  • PowerShellCore6Path
  • OpenJDK8JavaPath
  • OpenJDK8JavaJobDefaultXmx (set around 20% of total OS memory - example: 1024m)
  • OpenJDK11JavaPath
  • OpenJDK11JavaJobDefaultXmx (set around 20% of total OS memory - example: 1024m)
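
The "around 20% of total OS memory" guideline for the two Xmx properties above can be turned into a quick calculation (a sketch; the helper name is ours):

```python
def xmx_for_total_memory_mb(total_mb, fraction=0.2):
    """Suggest a Java -Xmx value (as an 'NNNNm' string) as ~20% of total OS memory."""
    return f"{int(total_mb * fraction)}m"

# e.g. 5120 MB (5 GB) total OS memory -> "1024m"
```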

Example: datacollector.properties

# local TCP/HTTPS data collector ports
DataCollectorPortStartRange=44444
DataCollectorPortEndRange=45000
DataCollectorPortExcludeList=

LogLevel=info
MaxLifeTimeMinutes=240

MaxWebSocketConnectTimeSeconds=14400
MaxInboundWebSocketTrafficPerConnection=67108864
MaxInboundWebSocketPayloadPerFrame=1048576
MaxInboundWebSocketFramesPerIPTimeFrame=10
MaxInboundWebSocketFramesPerIPLimit=1000

RealtimeStatisticsSamplingGranularityMillis=4000

Example: measuringagent.properties

HttpsPort=8080
HttpsCertificateCN=agent2.realload.com
HttpsCertificateIP=83.150.39.43
LogLevel=info

# AuthTokenEnabled: true or false, if true = the AuthTokenValue must be configured at portal server measuring agent settings
AuthTokenEnabled=false
# If AuthTokenEnabled is true, but AuthTokenValue is undefined or an empty string, then the (permanent) AuthTokenValue is automatically generated and printed at the log output
# AuthTokenValue=

MeasuringAgentLogFile=/home/dkfqs/agent/log/MeasuringAgent.log
MeasuringAgentInternalDataDirectory=/home/dkfqs/agent/internalData
MeasuringAgentUsersDataRootDirectory=/home/dkfqs/agent/usersData

ApiV1MaxRequestSizeMB=256
ApiV1WorkerThreadBusyTimeoutSeconds=330
ApiV1WorkerThreadExecutionTimeoutSeconds=300

MaxWebSocketConnectTimeSeconds=14400
MaxInboundWebSocketTrafficPerConnection=67108864
MaxInboundWebSocketPayloadPerFrame=20971520
MaxInboundWebSocketFramesPerIPTimeFrame=10
MaxInboundWebSocketFramesPerIPLimit=1000

DataCollectorProcessJavaPath=java
DataCollectorProcessJavaXmx=512m
DataCollectorPropertiesPath=/home/dkfqs/agent/config/datacollector.properties

# Settings for Supported Scripts / Programing Languages
PowerShellCore6Path=/snap/bin/pwsh
OpenJDK8JavaPath=/opt/OpenJDK/java-se-8u41-ri/bin/java
OpenJDK8JavaJobDefaultXmx=512m
OpenJDK11JavaPath=/opt/OpenJDK/jdk-11.0.1/bin/java
OpenJDK11JavaJobDefaultXmx=512m

# Limits
# LimitMaxUsersPerJob=500
# LimitMaxJobDurationSeconds=300

First Test - Start the Measuring Agent manually (as dkfqs user)

cd /home/dkfqs/agent/bin
export CLASSPATH=bcpkix-jdk15on-160.jar:bcprov-jdk15on-160.jar:bctls-jdk15on-160.jar:DKFQSMeasuringAgent.jar
java -Xmx512m -DdkfqsMeasuringAgentProperties=../config/measuringagent.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.measuringagent.internal.StartDKFQSMeasuringAgent

Data Collector service port range from 44444 to 45000
LimitMaxUsersPerJob = unlimited
LimitMaxJobDurationSeconds = unlimited
X509 TLS server certificate generated for CN = 192.168.0.51
Internal RSA 2048 bit keypair generated in 373 ms
2021-03-11 18:20:27.947 | QAHTTPd | WARN | QAHTTPd V1.3-U started
2021-03-11 18:20:27.990 | QAHTTPd | INFO | HTTPS server starting at port 8080
2021-03-11 18:20:28.089 | QAHTTPd | INFO | HTTPS server ready at port 8080

Create the Measuring Agent Startup Script (as root)

sudo bash # become root
cd /etc/init.d
vi MeasuringAgent

Edit - create /etc/init.d/MeasuringAgent

#!/bin/sh
# /etc/init.d/MeasuringAgent
# install with: update-rc.d MeasuringAgent defaults

### BEGIN INIT INFO
# Provides:          MeasuringAgent
# Required-Start:    $local_fs $network $time $syslog
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start MeasuringAgent daemon at boot time
# Description:       MeasuringAgent daemon
### END INIT INFO

case "$1" in
  start)
    if [ -f /home/dkfqs/agent/log/MeasuringAgent.log ]; then
       mv /home/dkfqs/agent/log/MeasuringAgent.log /home/dkfqs/agent/log/MeasuringAgent.log_$(date +"%Y_%m_%d_%H_%M")
    fi
    sudo -H -u dkfqs bash -c 'CLASSPATH=/home/dkfqs/agent/bin/bcpkix-jdk15on-160.jar:/home/dkfqs/agent/bin/bcprov-jdk15on-160.jar:/home/dkfqs/agent/bin/bctls-jdk15on-160.jar:/home/dkfqs/agent/bin/DKFQSMeasuringAgent.jar;export CLASSPATH;nohup java -Xmx512m -DdkfqsMeasuringAgentProperties=/home/dkfqs/agent/config/measuringagent.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.measuringagent.internal.StartDKFQSMeasuringAgent -autoAdjustMemory -osReservedMemory 1GB 1>/home/dkfqs/agent/log/MeasuringAgent.log 2>&1 &'
    ;;
  stop)
       PID=`ps -o pid,args -e | grep "StartDKFQSMeasuringAgent" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "MeasuringAgent stopped with pid(s) : $PID"
          kill -9 ${PID} 1> /dev/null 2>&1
       fi
    ;;
  status)
       PID=`ps -o pid,args -e | grep "StartDKFQSMeasuringAgent" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "MeasuringAgent running with pid(s) : $PID"
       else
          echo "No MeasuringAgent running"
       fi
    ;;
  *)
    echo "Usage: /etc/init.d/MeasuringAgent {start|stop|status}"
    exit 1
    ;;
esac

exit 0

The Java memory of the Measuring Agent should be set in the startup script as shown in the table below:

OS Physical Memory    Java -Xmx setting
<2 GiB                256m
2..3 GiB              512m
4..7 GiB              512m
8..15 GiB             1536m
16..31 GiB            3072m
32..63 GiB            4096m
64..96 GiB            6144m
>96 GiB               8192m

In-between values should be rounded up to whole GiB (e.g. 7.7 GiB rounds up to 8 GiB, so use 1536m).
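The table above can also be expressed as a small helper script. This is a sketch, assuming a Linux host where physical memory is read from /proc/meminfo; the function name is ours, not part of the product:

```shell
#!/bin/sh
# Map physical memory (in whole GiB) to the recommended Java -Xmx value,
# following the table above.
xmx_for_gib() {
  gib=$1
  if   [ "$gib" -lt 2 ];  then echo 256m
  elif [ "$gib" -le 3 ];  then echo 512m
  elif [ "$gib" -le 7 ];  then echo 512m
  elif [ "$gib" -le 15 ]; then echo 1536m
  elif [ "$gib" -le 31 ]; then echo 3072m
  elif [ "$gib" -le 63 ]; then echo 4096m
  elif [ "$gib" -le 96 ]; then echo 6144m
  else                         echo 8192m
  fi
}

# Round the installed memory up to whole GiB, as the note above recommends
# (MemTotal is reported in kB).
mem_gib=$(awk '/^MemTotal:/ { printf "%d", ($2 + 1048575) / 1048576 }' /proc/meminfo)
echo "-Xmx$(xmx_for_gib "$mem_gib")"
```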

Change owner and file protection of /etc/init.d/MeasuringAgent (root at /etc/init.d):

chown root MeasuringAgent
chgrp root MeasuringAgent
chmod 755 MeasuringAgent

Register /etc/init.d/MeasuringAgent to be started at system boot (root at /etc/init.d):

update-rc.d MeasuringAgent defaults

Reboot the system. Login as dkfqs and check /home/dkfqs/agent/log/MeasuringAgent.log

Register and Verify the Measuring Agent

  • Sign-in at the ‘Portal Server’
  • Select at Top Navigation ‘Measuring Agents’
  • Add your new Measuring Agent
  • Ping the Measuring Agent at application level


6.2 - Ubuntu 16/18/20 Cluster Controller manual install

Ubuntu 16/18/20 Cluster Controller Install Instructions

Prerequisites

Supported Hardware

  • Amazon EC2 Cloud instance, or
  • Own hosted server with any Intel or AMD CPU

Minimum Requirements

  • Minimum required CPU Cores of Processor: 2
  • Minimum required Memory: 8 GB
  • Minimum required Disk: 64 GB
  • Minimum required Network Speed: 100 Mbps (1000 Mbps or faster strongly recommended)

Environment and Location

In network terms, the cluster controller should be located as close as possible to the cluster members.

Running a cluster controller together with one or more Measuring Agents on the same machine is possible but not recommended: the cluster controller should run on its own machine, especially if a cluster contains more than 100 members.

The time difference of the operating system time between the cluster controller and the cluster members must not be greater than one second (1000 ms). It is recommended to use the same time server for the cluster controller and the cluster members.

The Portal Server supports the use of multiple cluster controllers. Each cluster controller can manage multiple clusters. And each measuring agent can be a member of multiple clusters.
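The 1000 ms limit above can be checked with a small helper. This is a sketch with sample timestamp values; in practice you would take one epoch-millisecond timestamp on the controller and one on each member (e.g. via date +%s%3N over ssh):

```shell
#!/bin/sh
# Absolute clock difference, in ms, between two epoch-millisecond timestamps.
abs_delta_ms() {
  d=$(( $1 - $2 ))
  echo "${d#-}"
}

# Example with sample values: the member's clock is 420 ms ahead of ours.
controller_ms=1643485520118
member_ms=1643485520538
delta=$(abs_delta_ms "$controller_ms" "$member_ms")
echo "OS delta time: ${delta} ms"
if [ "$delta" -gt 1000 ]; then
  echo "WARNING: clock difference exceeds 1000 ms"
fi
```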

Network & System Tuning

In /etc/sysctl.conf add:

# TCP/IP Tuning
# =============
fs.file-max = 524288
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 16384 60999
net.core.somaxconn = 256
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576

In /etc/security/limits.conf add:

# TCP/IP Tuning
# =============
* soft     nproc          262140
* hard     nproc          262140
* soft     nofile         262140
* hard     nofile         262140
root soft     nproc          262140
root hard     nproc          262140
root soft     nofile         262140
root hard     nofile         262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=8966

If the reported value is less than 262140, add the following in /etc/systemd/system.conf:

# Ubuntu Tuning
# =============
DefaultTasksMax=262140

Reboot the system and verify the settings. Enter: ulimit -n

output: 262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=262140

Install Dependencies

Install haveged

sudo apt-get update
sudo apt-get install haveged

Install OpenJDK 11

Get the Java Installation Kit

wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz

Install OpenJDK Java 11

gunzip openjdk-11.0.1_linux-x64_bin.tar.gz
tar -xvf openjdk-11.0.1_linux-x64_bin.tar
rm openjdk-11.0.1_linux-x64_bin.tar
sudo bash
mv jdk-11.0.1 /opt/OpenJDK
cd /opt/OpenJDK
ls -al
chown root -R jdk-11.0.1
chgrp root -R jdk-11.0.1

Execute the following commands (still as sudo bash):

update-alternatives --install "/usr/bin/java" "java" "/opt/OpenJDK/jdk-11.0.1/bin/java" 1
update-alternatives --install "/usr/bin/javac" "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac" 1
update-alternatives --install "/usr/bin/keytool" "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool" 1
update-alternatives --install "/usr/bin/jar" "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar" 1
update-alternatives --set "java" "/opt/OpenJDK/jdk-11.0.1/bin/java"
update-alternatives --set "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac"
update-alternatives --set "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool"
update-alternatives --set "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar"
exit # end sudo bash

Verify the Java 11 installation.

java -version

openjdk version "11.0.1" 2018-10-16
OpenJDK Runtime Environment 18.9 (build 11.0.1+13)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.1+13, mixed mode)

Install the Cluster Controller

Create the DKFQS account which is running the Cluster Controller

sudo adduser dkfqs    # follow the questions, remember or write down the password

Install the Cluster Controller

Log in with the dkfqs account (SSH), or enter: sudo -u dkfqs bash. Alternatively, install Samba to get convenient access to /home/dkfqs as the dkfqs Samba user.

Create the directory /home/dkfqs/controller (as dkfqs user):

cd /home/dkfqs
mkdir controller

Create the following sub-directories at /home/dkfqs/controller (as dkfqs user):

  • bin
  • config
  • internalData
  • log
  • scripts
  • usersData
cd /home/dkfqs/controller
mkdir bin config internalData log scripts usersData

Copy the following files to the bin directory /home/dkfqs/controller/bin

  • bcpkix-jdk15on-160.jar
  • bcprov-jdk15on-160.jar
  • bctls-jdk15on-160.jar
  • DKFQSMeasuringAgent.jar

Copy the following files to the config directory /home/dkfqs/controller/config

  • clustercontroller.properties

Modify the clustercontroller.properties file. Set the following properties:

  • HttpsCertificateCN (set the public DNS name or the IP address for the automatically generated SSL/TLS server certificate)
  • HttpsCertificateIP (set the public IP address for the automatically generated SSL/TLS server certificate)
  • AuthTokenValue

Example: clustercontroller.properties

HttpsPort=8083
HttpsCertificateCN=192.168.0.50
HttpsCertificateIP=192.168.0.50
LogLevel=info

# AuthTokenEnabled: true or false, if true = the AuthTokenValue must be configured at portal server measuring agent cluster settings
AuthTokenEnabled=true
# If AuthTokenEnabled is true, but AuthTokenValue is undefined or an empty string, then the (permanent) AuthTokenValue is automatically generated and printed at the log output
AuthTokenValue=aberaber

ClusterControllerLogFile=/home/dkfqs/controller/log/ClusterController.log
ClusterControllerInternalDataDirectory=/home/dkfqs/controller/internalData
ClusterControllerUsersDataRootDirectory=/home/dkfqs/controller/usersData

ApiMaxRequestSizeMB=256
ApiWorkerThreadBusyTimeoutSeconds=330
ApiWorkerThreadExecutionTimeoutSeconds=300

MaxWebSocketConnectTimeSeconds=14400
MaxInboundWebSocketTrafficPerConnection=83886080
MaxInboundWebSocketPayloadPerFrame=20971520
MaxInboundWebSocketFramesPerIPTimeFrame=10
MaxInboundWebSocketFramesPerIPLimit=1000

First Test - Start the Cluster Controller manually (as dkfqs user)

cd /home/dkfqs/controller/bin
export CLASSPATH=bcpkix-jdk15on-160.jar:bcprov-jdk15on-160.jar:bctls-jdk15on-160.jar:DKFQSMeasuringAgent.jar
java -Xmx512m -DdkfqsClusterControllerProperties=../config/clustercontroller.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.measuringagent.clustercontroller.StartDKFQSClusterController

Cluster Controller V4.0.4
Max. Memory = 512 MB
AuthTokenEnabled = true
AuthTokenValue = ********
X509 TLS server certificate generated for CN = 192.168.0.50
Internal RSA 2048 bit keypair generated in 305 ms
2022-01-29 20:45:20.118 | QAHTTPd | WARN | QAHTTPd V1.3-Y started
2022-01-29 20:45:20.219 | QAHTTPd | INFO | HTTPS server starting at port 8083
2022-01-29 20:45:20.278 | QAHTTPd | INFO | HTTPS server ready at port 8083

Create the Cluster Controller Startup Script (as root)

sudo bash # become root
cd /etc/init.d
vi ClusterController

Edit - create /etc/init.d/ClusterController

#!/bin/sh
# /etc/init.d/ClusterController
# install with: update-rc.d ClusterController defaults

### BEGIN INIT INFO
# Provides:          ClusterController
# Required-Start:    $local_fs $network $time $syslog
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start ClusterController daemon at boot time
# Description:       ClusterController daemon
### END INIT INFO

case "$1" in
  start)
    if [ -f /home/dkfqs/controller/log/ClusterController.log ]; then
       mv /home/dkfqs/controller/log/ClusterController.log /home/dkfqs/controller/log/ClusterController.log_$(date +"%Y_%m_%d_%H_%M")
    fi
    sudo -H -u dkfqs bash -c 'CLASSPATH=/home/dkfqs/controller/bin/bcpkix-jdk15on-160.jar:/home/dkfqs/controller/bin/bcprov-jdk15on-160.jar:/home/dkfqs/controller/bin/bctls-jdk15on-160.jar:/home/dkfqs/controller/bin/DKFQSMeasuringAgent.jar;export CLASSPATH;nohup java -Xmx6144m -DdkfqsClusterControllerProperties=/home/dkfqs/controller/config/clustercontroller.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.measuringagent.clustercontroller.StartDKFQSClusterController 1>/home/dkfqs/controller/log/ClusterController.log 2>&1 &'
    ;;
  stop)
       PID=`ps -o pid,args -e | grep "StartDKFQSClusterController" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "ClusterController stopped with pid(s) : $PID"
          kill -9 ${PID} 1> /dev/null 2>&1
       fi
    ;;
  status)
       PID=`ps -o pid,args -e | grep "StartDKFQSClusterController" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "ClusterController running with pid(s) : $PID"
       else
          echo "No ClusterController running"
       fi
    ;;
  *)
    echo "Usage: /etc/init.d/ClusterController {start|stop|status}"
    exit 1
    ;;
esac

exit 0

Change owner and file protection of /etc/init.d/ClusterController (root at /etc/init.d):

chown root ClusterController
chgrp root ClusterController
chmod 755 ClusterController

Register /etc/init.d/ClusterController to be started at system boot (root at /etc/init.d):

update-rc.d ClusterController defaults

Reboot the system. Login as dkfqs and check /home/dkfqs/controller/log/ClusterController.log

Define a Cluster and Verify the Cluster Controller

  • Sign-in at the ‘Portal Server’
  • Select at Top Navigation ‘Measuring Agents’
  • Add a ‘Measuring Agent Cluster’
  • Add one or more cluster members
  • Ping the Cluster Controller at application level
  • Ping the cluster members by the Cluster Controller, and verify that the absolute value of OS Δ Time for each cluster member is not greater than 1000 ms


6.3 - Ubuntu 16/18/20 Remote Proxy Recorder manual install

Ubuntu 16/18/20 Remote Proxy Recorder Install Instructions

Prerequisites

Supported Hardware

  • Amazon EC2 Cloud instance, or
  • Own hosted server with any Intel or AMD CPU

Minimum Requirements

  • Minimum required CPU Cores of Processor: 2
  • Minimum required Memory: 8 GB
  • Minimum required Disk: 32 GB
  • Minimum required Network Speed: 100 Mbps (1000 Mbps or faster strongly recommended)

Environment and Location

The Remote Proxy Recorder can be placed at any network location, but the control port must be reachable from the Portal Server.
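The reachability requirement can be verified from the Portal Server host before installing anything. This is a sketch using bash's /dev/tcp pseudo-device; the host and port below are the example values used later on this page and must be adjusted to your installation:

```shell
#!/bin/bash
# Return 0 if a TCP connection to host:port succeeds within 3 seconds.
port_reachable() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: the control port of the Remote Proxy Recorder (assumed values).
if port_reachable proxy2.realload.com 8081; then
  echo "control port reachable"
else
  echo "control port NOT reachable"
fi
```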

Network & System Tuning

In /etc/sysctl.conf add:

# TCP/IP Tuning
# =============
fs.file-max = 524288
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 16384 60999
net.core.somaxconn = 256
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576

In /etc/security/limits.conf add:

# TCP/IP Tuning
# =============
* soft     nproc          262140
* hard     nproc          262140
* soft     nofile         262140
* hard     nofile         262140
root soft     nproc          262140
root hard     nproc          262140
root soft     nofile         262140
root hard     nofile         262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=8966

If the reported value is less than 262140, add the following in /etc/systemd/system.conf:

# Ubuntu Tuning
# =============
DefaultTasksMax=262140

Reboot the system and verify the settings. Enter: ulimit -n

output: 262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=262140

Install Dependencies

Install haveged

sudo apt-get update
sudo apt-get install haveged

Install OpenJDK 11

Get the Java Installation Kit

wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz

Install OpenJDK Java 11

gunzip openjdk-11.0.1_linux-x64_bin.tar.gz
tar -xvf openjdk-11.0.1_linux-x64_bin.tar
rm openjdk-11.0.1_linux-x64_bin.tar
sudo bash
mv jdk-11.0.1 /opt/OpenJDK
cd /opt/OpenJDK
ls -al
chown root -R jdk-11.0.1
chgrp root -R jdk-11.0.1

Execute the following commands (still as sudo bash):

update-alternatives --install "/usr/bin/java" "java" "/opt/OpenJDK/jdk-11.0.1/bin/java" 1
update-alternatives --install "/usr/bin/javac" "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac" 1
update-alternatives --install "/usr/bin/keytool" "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool" 1
update-alternatives --install "/usr/bin/jar" "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar" 1
update-alternatives --set "java" "/opt/OpenJDK/jdk-11.0.1/bin/java"
update-alternatives --set "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac"
update-alternatives --set "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool"
update-alternatives --set "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar"
exit # end sudo bash

Verify the Java 11 installation.

java -version

openjdk version "11.0.1" 2018-10-16
OpenJDK Runtime Environment 18.9 (build 11.0.1+13)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.1+13, mixed mode)

Generate the Certificate Authority (CA) Root Certificate

For technical reasons, the Remote Proxy Recorder generates “fake” web server certificates during operation in order to break the encryption between the web browser and the web servers and to record the data exchanged.

In order for this to work, the Remote Proxy Recorder needs its own CA root certificate, which you then have to import into your browser.

For security reasons, never use a CA root certificate obtained from us or from anyone else as the Remote Proxy Recorder root certificate. Always create your own CA root certificate.

Example:

C:\Scratch2>openssl genrsa -des3 -out myCAPrivate.key 2048
Generating RSA private key, 2048 bit long modulus
.......................+++
.............................+++
unable to write 'random state'
e is 65537 (0x10001)
Enter pass phrase for myCAPrivate.key:
Verifying - Enter pass phrase for myCAPrivate.key:

C:\Scratch2>openssl pkcs8 -topk8 -inform PEM -outform PEM -in myCAPrivate.key -out myCAPrivateKey.pem -nocrypt
Enter pass phrase for myCAPrivate.key:

C:\Scratch2>openssl req -x509 -new -nodes -key myCAPrivate.key -sha256 -days 3700 -out myCARootCert.pem
Enter pass phrase for myCAPrivate.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:CH
State or Province Name (full name) [Some-State]:Bern
Locality Name (eg, city) []:Bern
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Pty Ltd
Organizational Unit Name (eg, section) []:QA
Common Name (e.g. server FQDN or YOUR name) []:DKFQS Proxy Recorder Root
Email Address []:

C:\Scratch2>dir
 Volume in drive C is OS
 Volume Serial Number is AEF7-CFB1

 Directory of C:\Scratch2

06 Feb 2022  20:44    <DIR>          .
06 Feb 2022  20:44    <DIR>          ..
06 Feb 2022  20:40             1.743 myCAPrivate.key
06 Feb 2022  20:41             1.704 myCAPrivateKey.pem
06 Feb 2022  20:44             1.350 myCARootCert.pem
               3 File(s)          4.797 bytes
               2 Dir(s)  310.772.580.352 bytes free
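The same certificate can also be created non-interactively on the Ubuntu host itself. This is a sketch reusing the subject values from the example above (adjust them to your organisation); note that, unlike the example, the private key is written unencrypted because -des3 is omitted:

```shell
#!/bin/sh
# Generate a 2048-bit CA private key (unencrypted, since -des3 is omitted).
openssl genrsa -out myCAPrivate.key 2048

# Convert it to PKCS#8 PEM, the file later referenced by
# ProxyServerDefaultCaRootPrivateKeyFilePath.
openssl pkcs8 -topk8 -inform PEM -outform PEM \
  -in myCAPrivate.key -out myCAPrivateKey.pem -nocrypt

# Self-signed CA root certificate, valid ~10 years, with the Distinguished
# Name supplied non-interactively via -subj.
openssl req -x509 -new -nodes -key myCAPrivate.key -sha256 -days 3700 \
  -subj "/C=CH/ST=Bern/L=Bern/O=My Pty Ltd/OU=QA/CN=DKFQS Proxy Recorder Root" \
  -out myCARootCert.pem

# Quick check of what was generated.
openssl x509 -in myCARootCert.pem -noout -subject -enddate
```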

Install the Remote Proxy Recorder

Create the DKFQS account which is running the Remote Proxy Recorder

sudo adduser dkfqs    # follow the questions, remember or write down the password

Install the Remote Proxy Recorder

Log in with the dkfqs account (SSH), or enter: sudo -u dkfqs bash. Alternatively, install Samba to get convenient access to /home/dkfqs as the dkfqs Samba user.

Create the directory /home/dkfqs/proxy (as dkfqs user):

cd /home/dkfqs
mkdir proxy

Create the following sub-directories at /home/dkfqs/proxy (as dkfqs user):

  • bin
  • config
  • log
cd /home/dkfqs/proxy
mkdir bin config log

Copy the following files to the bin directory /home/dkfqs/proxy/bin

  • bcmail-jdk15on-168.jar
  • bcpg-jdk15on-168.jar
  • bcpkix-jdk15on-168.jar
  • bcprov-jdk15on-168.jar
  • bctls-jdk15on-168.jar
  • com.dkfqs.remoteproxyrecorder.jar

Copy the following files to the config directory /home/dkfqs/proxy/config

  • config.properties
  • myCAPrivateKey.pem (the private key of your self generated CA root certificate)
  • myCARootCert.pem (your self generated CA root certificate)

Modify the config.properties file. Set (modify) the following properties:

  • ControlServerHttpsCertificateCN (set the public DNS name or the IP address of the Remote Proxy Recorder)
  • ControlServerHttpsCertificateIP (set the public IP address of the Remote Proxy Recorder)
  • ControlServerAuthToken
  • ProxyServerDefaultCaRootCertFilePath (set the CA root certificate of the proxy)
  • ProxyServerDefaultCaRootPrivateKeyFilePath (set the private key of the CA root certificate)

Example: config.properties

ControlServerLogLevel=info
ControlServerHttpsPort=8081
ControlServerHttpsCertificateCN=proxy2.realload.com
ControlServerHttpsCertificateIP=83.150.39.45
#Note: the control server authentication token is required to connect to the control server
ControlServerAuthToken=krungthep

ProxyServerLogLevel=warn
ProxyServerPort=8082
ProxyServerDefaultCaRootCertFilePath=/home/dkfqs/proxy/config/myCARootCert.pem
ProxyServerDefaultCaRootPrivateKeyFilePath=/home/dkfqs/proxy/config/myCAPrivateKey.pem
#Note: the proxy authentication credentials are replaced on the fly when the portal user connects via the control interface to the control server
ProxyServerDefaultAuthenticationUsername=max
ProxyServerDefaultAuthenticationPassword=meier

First Test - Start the Remote Proxy Recorder manually (as dkfqs user)

cd /home/dkfqs/proxy/bin
export CLASSPATH=bcmail-jdk15on-168.jar:bcpg-jdk15on-168.jar:bcpkix-jdk15on-168.jar:bcprov-jdk15on-168.jar:bctls-jdk15on-168.jar:com.dkfqs.remoteproxyrecorder.jar
java -Xmx2048m -DconfigProperties=../config/config.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.remoteproxyrecorder.main.StartRemoteProxyRecorder

> Remote Proxy Recorder V0.2.0
> Max. Memory = 2048 MB
> Internal RSA 2048 bit keypair generated in 85 ms
> 2021-06-05 23:24:37.710 | QAHTTPd | WARN | QAHTTPd V1.3-V started
> 2021-06-05 23:24:37.710 | QAHTTPd | INFO | HTTPS server starting at port 8081
> 2021-06-05 23:24:37.726 | QAHTTPd | INFO | HTTPS server ready at port 8081
> 2021-06-05 23:24:38.722 | Proxy | WARN | ProxyRecorder V1.1.0 started at port 8082

Create the Remote Proxy Recorder Startup Script (as root)

sudo bash # become root
cd /etc/init.d
vi RemoteProxyRecorder

Edit - create /etc/init.d/RemoteProxyRecorder

#!/bin/sh
# /etc/init.d/RemoteProxyRecorder
# install with: update-rc.d RemoteProxyRecorder defaults

### BEGIN INIT INFO
# Provides:          RemoteProxyRecorder
# Required-Start:    $local_fs $network $time $syslog
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start RemoteProxyRecorder daemon at boot time
# Description:       RemoteProxyRecorder daemon
### END INIT INFO

case "$1" in
  start)
    if [ -f /home/dkfqs/proxy/log/RemoteProxyRecorder.log ]; then
       mv /home/dkfqs/proxy/log/RemoteProxyRecorder.log /home/dkfqs/proxy/log/RemoteProxyRecorder.log_$(date +"%Y_%m_%d_%H_%M")
    fi
    sudo -H -u dkfqs bash -c 'CLASSPATH=/home/dkfqs/proxy/bin/bcmail-jdk15on-168.jar:/home/dkfqs/proxy/bin/bcpg-jdk15on-168.jar:/home/dkfqs/proxy/bin/bcpkix-jdk15on-168.jar:/home/dkfqs/proxy/bin/bcprov-jdk15on-168.jar:/home/dkfqs/proxy/bin/bctls-jdk15on-168.jar:/home/dkfqs/proxy/bin/com.dkfqs.remoteproxyrecorder.jar;export CLASSPATH;nohup java -Xmx4096m -DconfigProperties=/home/dkfqs/proxy/config/config.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.remoteproxyrecorder.main.StartRemoteProxyRecorder 1>/home/dkfqs/proxy/log/RemoteProxyRecorder.log 2>&1 &'
    ;;
  stop)
       PID=`ps -o pid,args -e | grep "StartRemoteProxyRecorder" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "RemoteProxyRecorder stopped with pid(s) : $PID"
          kill -9 ${PID} 1> /dev/null 2>&1
       fi
    ;;
  status)
       PID=`ps -o pid,args -e | grep "StartRemoteProxyRecorder" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "RemoteProxyRecorder running with pid(s) : $PID"
       else
          echo "No RemoteProxyRecorder running"
       fi
    ;;
  *)
    echo "Usage: /etc/init.d/RemoteProxyRecorder {start|stop|status}"
    exit 1
    ;;
esac

exit 0

Change owner and file protection of /etc/init.d/RemoteProxyRecorder (root at /etc/init.d):

chown root RemoteProxyRecorder
chgrp root RemoteProxyRecorder
chmod 755 RemoteProxyRecorder

Register /etc/init.d/RemoteProxyRecorder to be started at system boot (root at /etc/init.d):

update-rc.d RemoteProxyRecorder defaults

Reboot the system. Login as dkfqs and check /home/dkfqs/proxy/log/RemoteProxyRecorder.log

Register and Verify the Remote Proxy Recorder

  • Sign-in at the ‘Portal Server’
  • Follow the instructions in the User Guide

6.4 - Centos 8 Portal Server manual install

Centos 8 Portal Server manual install instructions

Prepare your system

Install Centos 8 minimal server.

Disable SELinux in /etc/selinux/config

Network & System Tuning

Open ports 443 and 80 on firewall:

firewall-cmd --zone=public --add-service=http
firewall-cmd --zone=public --add-service=https
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload

In /etc/sysctl.conf add:

# TCP/IP Tuning
# =============
fs.file-max = 524288
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 16384 60999
net.core.somaxconn = 256
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576

In /etc/security/limits.conf add:

# TCP/IP Tuning
# =============
* soft     nproc          262140
* hard     nproc          262140
* soft     nofile         262140
* hard     nofile         262140
root soft     nproc          262140
root hard     nproc          262140
root soft     nofile         262140
root hard     nofile         262140

Reboot the system and check with ulimit -n ; the output should be 262140.

Install dependencies

Install SQLite

sudo yum update
yum install sqlite

Install haveged

yum -y install epel-release
yum repolist
yum install haveged

Other tools

yum install unzip
yum install tar

Install JDKs (As root)

Download OpenJDK 11 and 8 (TODO: URLs) and then:

mkdir -p /opt/OpenJDK
cd /opt/OpenJDK
tar xzvf openjdk-11.0.1_linux-x64_bin.tar.gz
update-alternatives --install "/usr/bin/java" "java" "/opt/OpenJDK/jdk-11.0.1/bin/java" 1
update-alternatives --install "/usr/bin/javac" "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac" 1
update-alternatives --install "/usr/bin/keytool" "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool" 1
update-alternatives --install "/usr/bin/jar" "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar" 1
update-alternatives --set "java" "/opt/OpenJDK/jdk-11.0.1/bin/java"
update-alternatives --set "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac"
update-alternatives --set "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool"
update-alternatives --set "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar"

tar xzvf openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz

Java installation validation steps

java -version
openjdk version "11.0.1" 2018-10-16

/opt/OpenJDK/java-se-8u41-ri/bin/java -version
openjdk version "1.8.0_41"

Install Real Load

Create the DKFQS account

sudo adduser -m dkfqs 

su - dkfqs
cd /home/dkfqs
mkdir portal
cd /home/dkfqs/portal
mkdir backup bin config db htdocs jks log scripts usersLib usersData

Copy various files into place

cp /opt/install_sw/Common/*.jar /home/dkfqs/portal/bin/
cp /opt/install_sw/V4.2.11/PortalServer/bin/DKFQS.jar /home/dkfqs/portal/bin/
cp /opt/install_sw/V4.2.11/PortalServer/config/* /home/dkfqs/portal/config/

Copy the htdocs.jar file to the htdocs directory /home/dkfqs/portal/htdocs

Navigate to /home/dkfqs/portal/htdocs and un-jar the file:

jar -xvf htdocs.jar
rm htdocs.jar     # delete the jar
rm -R META-INF    # delete the META-INF directory

Create SQLite DBs

Copy the following files to the db directory /home/dkfqs/portal/db

  • CreateNewAdminDB.sql
  • CreateNewOperationsDB.sql
  • CreateNewUsersDB.sql

Login with the dkfqs account, navigate to /home/dkfqs/portal/db and create the Admin, Operations and the Users DB:

sqlite3 AdminAccounts.db < CreateNewAdminDB.sql
sqlite3 Operations.db < CreateNewOperationsDB.sql
sqlite3 Users.db < CreateNewUsersDB.sql
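As an optional sanity check, sqlite3 can confirm that the three files were created as valid databases (run in /home/dkfqs/portal/db); PRAGMA integrity_check prints "ok" for a healthy database:

```shell
#!/bin/sh
# Each database should answer "ok"; anything else indicates a corrupt file.
for db in AdminAccounts.db Operations.db Users.db; do
  printf '%s: ' "$db"
  sqlite3 "$db" 'PRAGMA integrity_check;'
done
```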

Allow privileged port binding

Allow un-privileged accounts to bind to privileged ports (80, 443)

sysctl net.ipv4.ip_unprivileged_port_start=0
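Note that this sysctl command takes effect immediately but does not survive a reboot. To make the setting permanent, the same key can be added to /etc/sysctl.conf (edited earlier in this guide):

```
net.ipv4.ip_unprivileged_port_start = 0
```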

Create services

Create the /home/dkfqs/portal/bin/portal.sh file:

#!/usr/bin/bash

case "$1" in
  start)
    if [ -f /home/dkfqs/portal/log/DKFQS.log ]; then
       mv /home/dkfqs/portal/log/DKFQS.log /home/dkfqs/portal/log/DKFQS.log_$(date +"%Y_%m_%d_%H_%M")
    fi
    CLASSPATH=/home/dkfqs/portal/bin/bcpkix-jdk15on-160.jar:/home/dkfqs/portal/bin/bcprov-jdk15on-160.jar:/home/dkfqs/portal/bin/bctls-jdk15on-160.jar:/home/dkfqs/portal/bin/DKFQS.jar;export CLASSPATH;nohup java -Xmx2048m -DdkfqsProperties=/home/dkfqs/portal/config/dkfqs.properties -DrewriteProperties=/home/dkfqs/portal/config/rewrite.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.server.internal.StartDKFQSserver 1>/home/dkfqs/portal/log/DKFQS.log 2>&1 &
    ;;
  stop)
       PID=`ps -o pid,args -e | grep "StartDKFQSserver" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "DKFQS stopped with pid(s) : $PID"
          kill -9 ${PID} 1> /dev/null 2>&1
       fi
    ;;
  status)
       PID=`ps -o pid,args -e | grep "StartDKFQSserver" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "DKFQS running with pid(s) : $PID"
       else
          echo "No DKFQS running"
       fi
    ;;
  *)
    echo "Usage: /home/dkfqs/portal/bin/portal.sh {start|stop|status}"
    exit 1
    ;;
esac

exit 0

Create the unit file

Create the file /etc/systemd/system/DKFQSPortal.service with the below content:

[Unit]
Description=DKFQS portal
After=network.target

[Service]
User=dkfqs
Group=dkfqs
Type=simple
RemainAfterExit=yes
ExecStart=/home/dkfqs/portal/bin/portal.sh start
ExecStop=/home/dkfqs/portal/bin/portal.sh stop
TimeoutStartSec=0

[Install]
WantedBy=default.target

Start the services

systemctl daemon-reload
systemctl enable DKFQSPortal.service
systemctl start DKFQSPortal.service
journalctl -ex    # check that no errors occurred

6.5 - Ubuntu 16/18/20 Portal Server manual install

Ubuntu 16/18/20 Portal Server Install Instructions

Prerequisites

Supported Hardware

  • Amazon EC2 Cloud instances
  • Own hosted Servers with any Intel or AMD CPU

Minimum Hardware Requirements

  • Minimum required CPU Cores of Processor: 4
  • Minimum required RAM: 16 GB
  • Minimum required Disk: 512 GB
  • Minimum required Network Speed: 1000 Mbps

Email Server

The Portal Server sends its emails via SMTP. You need an email server which receives and forwards these SMTP messages.

Twilio SMS Gateway

If the Portal Server is operated/configured in such a way that any person can "sign up" (self-registration), you need a customer account for the Twilio SMS Gateway: www.twilio.com/docs/sms

Network & System Tuning

In /etc/sysctl.conf add:

# TCP/IP Tuning
# =============
fs.file-max = 524288
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 16384 60999
net.core.somaxconn = 256
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576

In /etc/security/limits.conf add:

# TCP/IP Tuning
# =============
* soft     nproc          262140
* hard     nproc          262140
* soft     nofile         262140
* hard     nofile         262140
root soft     nproc          262140
root hard     nproc          262140
root soft     nofile         262140
root hard     nofile         262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=8966

If the reported value is less than 262140, add the following in /etc/systemd/system.conf:

# Ubuntu Tuning
# =============
DefaultTasksMax=262140

Reboot the system and verify the settings. Enter: ulimit -n

output: 262140

Enter: systemctl show -p TasksMax user-0

output: TasksMax=262140

Forward the external TCP/IP server port 80 (HTTP) to port 8000, and forward external port 443 (HTTPS) to port 8001

Create/edit the file DKFQSiptables in /etc/network/if-pre-up.d/ and add:

#!/bin/sh
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000
iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8000
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8001
iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 443 -j REDIRECT --to-ports 8001
exit 0

Then give execute permission to /etc/network/if-pre-up.d/DKFQSiptables : sudo chmod 755 /etc/network/if-pre-up.d/DKFQSiptables

Reboot the machine and check with:

sudo iptables -L -t nat

> Chain PREROUTING (policy ACCEPT)
> target     prot opt source               destination
> REDIRECT   tcp  --  anywhere             anywhere             tcp dpt:http redir ports 8000
> REDIRECT   tcp  --  anywhere             anywhere             tcp dpt:https redir ports 8001
> REDIRECT   tcp  --  anywhere             anywhere             tcp dpt:http redir ports 8000
> REDIRECT   tcp  --  anywhere             anywhere             tcp dpt:https redir ports 8001
>
> Chain INPUT (policy ACCEPT)
> target     prot opt source               destination
>
> Chain OUTPUT (policy ACCEPT)
> target     prot opt source               destination
> REDIRECT   tcp  --  anywhere             localhost            tcp dpt:https redir ports 8001
> REDIRECT   tcp  --  anywhere             localhost            tcp dpt:http redir ports 8000
> REDIRECT   tcp  --  anywhere             localhost            tcp dpt:https redir ports 8001
> REDIRECT   tcp  --  anywhere             localhost            tcp dpt:http redir ports 8000
>
> Chain POSTROUTING (policy ACCEPT)
> target     prot opt source               destination

Install Dependencies

Install fontconfig

sudo apt-get update
sudo apt-get install fontconfig

Install haveged

sudo apt-get update
sudo apt-get install haveged

Install SQLite

sudo apt-get update
sudo apt-get install sqlite3

Install OpenJDK Java 8 and 11

Get the Java Installation Kits

wget https://download.java.net/openjdk/jdk8u41/ri/openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz

Install OpenJDK Java 8

gunzip openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
tar -xvf  openjdk-8u41-b04-linux-x64-14_jan_2020.tar
rm openjdk-8u41-b04-linux-x64-14_jan_2020.tar
sudo bash
mkdir /opt/OpenJDK
mv java-se-8u41-ri /opt/OpenJDK
cd /opt/OpenJDK
ls -al
chown root -R java-se-8u41-ri
chgrp root -R java-se-8u41-ri
exit # end sudo bash

Verify the Java 8 installation.

/opt/OpenJDK/java-se-8u41-ri/bin/java -version

openjdk version "1.8.0_41"
OpenJDK Runtime Environment (build 1.8.0_41-b04)
OpenJDK 64-Bit Server VM (build 25.40-b25, mixed mode)

Install OpenJDK Java 11

gunzip openjdk-11.0.1_linux-x64_bin.tar.gz
tar -xvf openjdk-11.0.1_linux-x64_bin.tar
rm openjdk-11.0.1_linux-x64_bin.tar
sudo bash
mv jdk-11.0.1 /opt/OpenJDK
cd /opt/OpenJDK
ls -al
chown root -R jdk-11.0.1
chgrp root -R jdk-11.0.1

Execute the following commands (still as sudo bash):

update-alternatives --install "/usr/bin/java" "java" "/opt/OpenJDK/jdk-11.0.1/bin/java" 1
update-alternatives --install "/usr/bin/javac" "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac" 1
update-alternatives --install "/usr/bin/keytool" "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool" 1
update-alternatives --install "/usr/bin/jar" "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar" 1
update-alternatives --set "java" "/opt/OpenJDK/jdk-11.0.1/bin/java"
update-alternatives --set "javac" "/opt/OpenJDK/jdk-11.0.1/bin/javac"
update-alternatives --set "keytool" "/opt/OpenJDK/jdk-11.0.1/bin/keytool"
update-alternatives --set "jar" "/opt/OpenJDK/jdk-11.0.1/bin/jar"
exit # end sudo bash

Verify the Java 11 installation.

java -version

openjdk version "11.0.1" 2018-10-16
OpenJDK Runtime Environment 18.9 (build 11.0.1+13)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.1+13, mixed mode)
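If you later need to switch the default java between the two installed JDKs, update-alternatives can do this interactively (a sketch; it lists the alternatives registered above and lets you pick one):

```shell
# Show all registered "java" alternatives and select one interactively
sudo update-alternatives --config java
```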

Install the Portal Server

Create the DKFQS account which is running the Portal Server

sudo adduser dkfqs    # follow the questions, remember or write down the password

Install the Portal Server

Login with the dkfqs account (SSH), or enter: sudo -u dkfqs bash. Alternatively, install Samba to get convenient access to /home/dkfqs as the Samba dkfqs user.

Create the directory /home/dkfqs/portal (as dkfqs user):

cd /home/dkfqs
mkdir portal

Create the following sub-directories in /home/dkfqs/portal (as dkfqs user):

  • backup
  • bin
  • config
  • db
  • htdocs
  • jks
  • log
  • scripts
  • usersLib
  • usersData
cd /home/dkfqs/portal
mkdir backup bin config db htdocs jks log scripts usersLib usersData

Copy the following files to the bin directory /home/dkfqs/portal/bin

  • bcpkix-jdk15on-160.jar
  • bcprov-jdk15on-160.jar
  • bctls-jdk15on-160.jar
  • DKFQS.jar

Copy the following files to the db directory /home/dkfqs/portal/db

  • CreateNewAdminDB.sql
  • CreateNewOperationsDB.sql
  • CreateNewUsersDB.sql

Edit the file CreateNewUsersDB.sql and modify the following line to set the nickname, email address, phone number and temporary password of the Admin account. Note: The nickname must always start with “Admin-”.

insert into AdminAccountsTable (adminUserId, nickname, adminPrimaryEmail, adminPrimaryPhone, initialPassword) values (1, "Admin-One", "falarasorn@yahoo.com", "+43123456789", "ginkao1234");

Navigate to /home/dkfqs/portal/db and create the Admin, Operations and the Users DB (as dkfqs user):

sqlite3 AdminAccounts.db < CreateNewAdminDB.sql
sqlite3 Operations.db < CreateNewOperationsDB.sql
sqlite3 Users.db < CreateNewUsersDB.sql
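The pattern used here — piping a .sql file into sqlite3 — can be illustrated with a throwaway database (demo.db, demo.sql and the inserted row are illustrative only; the table layout is reduced to two columns):

```shell
# Write a demo schema file, build a DB from it, then query the row back
cat > demo.sql <<'EOF'
CREATE TABLE AdminAccountsTable (adminUserId INTEGER, nickname TEXT);
INSERT INTO AdminAccountsTable (adminUserId, nickname) VALUES (1, 'Admin-One');
EOF
sqlite3 demo.db < demo.sql
sqlite3 demo.db 'SELECT nickname FROM AdminAccountsTable;'   # prints: Admin-One
```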

Copy the following file to the htdocs directory /home/dkfqs/portal/htdocs

  • htdocs.jar

Navigate to /home/dkfqs/portal/htdocs and execute (as dkfqs user):

jar -xvf htdocs.jar
rm htdocs.jar   # delete the jar
rm -R META-INF  # delete the META-INF directory

Copy the following file to the jks directory /home/dkfqs/portal/jks

  • dkfqscom.jks

Copy the following files to the usersLib directory /home/dkfqs/portal/usersLib

  • com.dkfqs.tools.jar
  • DKFQSLibrary2.psm1

Copy the following files to the config directory /home/dkfqs/portal/config

  • dkfqs.properties
  • rewrite.properties
  • twilio.properties

Modify the dkfqs.properties file. Set the following properties:

  • ServerName
  • ServerDNSName
  • DNSJavaDefaultDNSServers
  • UsersMailServerHost
  • UsersMailFrom
  • UsersMailServerAuthUser
  • UsersMailServerAuthPassword
  • ServerStatusPageEnabledIPList
  • AlertMailServerHost
  • AlertMailFrom
  • AlertMailToList
  • AlertMailBounceAddress
  • AlertMailServerAuthUser
  • AlertMailServerAuthPassword

Example: dkfqs.properties

IsProduction=true
ServerName=192.168.0.50
ServerDNSName=192.168.0.50
DiskDocumentRootDirectory=/home/dkfqs/portal/htdocs
SQLiteDBDirectory=/home/dkfqs/portal/db
UsersDataRootDirectory=/home/dkfqs/portal/usersData
OSProcessLogFile=/home/dkfqs/portal/log/DKFQS.log
LogLevel=info
StaticContentMaxAgeTime=7200
MaxHTTPRequestSize=20240000
MaxInvalidAnonymousSessionsPerIPLimit=32
AnonymousSessionTimeout=1200
MaxAnonymousSessionTime=21600
MaxWebSocketConnectTimeSeconds=14400
MaxInboundWebSocketTrafficPerConnection=67108864
MaxInboundWebSocketPayloadPerFrame=1048576
MaxInboundWebSocketFramesPerIPTimeFrame=10
MaxInboundWebSocketFramesPerIPLimit=1000
HTTPExternalServerPort=80
HTTPInternalServerPort=8000
HTTPSExternalServerPort=443
HTTPSInternalServerPort=8001
HTTPSKeyStoreFile=/home/dkfqs/portal/jks/dkfqscom.jks
HTTPSKeyStorePassword=topsecret
#
FileTreeApiMaxRequestSizeMB=256
FileTreeApiWorkerThreadBusyTimeoutSeconds=330
FileTreeApiWorkerThreadExecutionTimeoutSeconds=300
TestjobsApiMaxRequestSizeMB=256
TestjobsApiWorkerThreadBusyTimeoutSeconds=330
TestjobsApiWorkerThreadExecutionTimeoutSeconds=300
#
DNSJavaDefaultDNSServers=8.8.8.8,8.8.4.4
#
JavaSDK8BinaryPath=/opt/OpenJDK/java-se-8u41-ri/bin
JavaSDK11BinaryPath=/opt/OpenJDK/jdk-11.0.1/bin
HTTPTestWizardJavaCodeLibraries=/home/dkfqs/portal/usersLib/com.dkfqs.tools.jar
#
UserSignInURL=/SignIn
UsersMailServerHost=192.168.1.4
UsersMailFrom=xxxxxxxxx@xxxxxxx.com
UsersMailServerAuthUser=xxxxxxxxx@xxxxxxx.com
UsersMailServerAuthPassword=*********
UsersMailTransmitterThreads=2
UsersMailDebugSMTP=false
#
smsGatewaysClassNames=com.dkfqs.server.sms.twilio.TwilioSMSGateway
#
# ServerStatusPageEnabledIPList=127.0.0.1,192.168.0.99
ServerStatusPageEnabledIPList=*.*.*.*
AdminSignInURL=/AdminSignIn
AlertMailEnabled=true
AlertMailServerHost=192.168.1.4
AlertMailFrom=xxxxxxxxx@xxxxxxx.com
AlertMailToList=yyyyyyyyy@xxxxxxx.com,zzzzzzzzz@xxxxxxx.com
AlertMailBounceAddress=bbbbbbb@xxxxxxx.com
AlertMailServerAuthUser=xxxxxxxxx@xxxxxxx.com
AlertMailServerAuthPassword=*******
AlertMailDebugSMTP=false
AlertMailNotifyStartup=false
SecurityMaxRequestsPerIpLimit=200
SecurityMaxRequestsPerIpTimeFrame=10
SecurityMaxInvalidRequestsPerIpLimit=12
SecurityMaxInvalidRequestsPerIpTimeFrame=60
SecurityMaxAnonymousFormSubmitPerIpLimit=8
SecurityMaxAnonymousFormSubmitPerIpTimeFrame=60
SecurityMaxAuthenticationFailuresPerIpLimit=5
SecurityMaxAuthenticationFailuresPerIpTimeFrame=60
#
MeasuringAgentConnectTimeout=10

Modify the twilio.properties file. Set the following properties:

  • sid
  • authToken
  • fromTwilioPhoneNumber

Example: twilio.properties

apiURLMainPath=https://api.twilio.com/2010-04-01/Accounts/
sid=********************************
authToken=********************************
fromTwilioPhoneNumber=+1123456789
tcpConnectTimoutMillis=10000
sslHandshakeTimeoutMillis=5000
httpProcessingTimeoutMillis=10000

First Test - Start the Portal Server manually (as dkfqs user)

cd /home/dkfqs/portal/bin
export CLASSPATH=bcpkix-jdk15on-160.jar:bcprov-jdk15on-160.jar:bctls-jdk15on-160.jar:DKFQS.jar
java -Xmx2048m -DdkfqsProperties=../config/dkfqs.properties -DrewriteProperties=../config/rewrite.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.server.internal.StartDKFQSserver

Internal RSA 2048 bit keypair generated in 1220 ms
2021-03-10 22:27:25.040 | QAHTTPd | INFO | SQL connection pool for DB "UsersDB" initialized
2021-03-10 22:27:25.062 | QAHTTPd | INFO | SQL connection pool for DB "AdminAccountsDB" initialized
2021-03-10 22:27:25.068 | QAHTTPd | INFO | Alarm adapter "IP Blacklist Alarm Adapter" started
2021-03-10 22:27:25.069 | QAHTTPd | WARN | QAHTTPd V1.3-U started
2021-03-10 22:27:25.071 | QAHTTPd | INFO | Execute PreUpStartupLoadIPRangeBlacklist
2021-03-10 22:27:25.082 | QAHTTPd | INFO | HTTP server starting at port 8000
2021-03-10 22:27:25.109 | QAHTTPd | INFO | HTTP server ready at port 8000
2021-03-10 22:27:25.110 | QAHTTPd | INFO | HTTPS server starting at port 8001
2021-03-10 22:27:25.124 | QAHTTPd | INFO | HTTPS server ready at port 8001
2021-03-10 22:27:25.821 | EMAIL-1 | INFO | Email transmitter thread started
2021-03-10 22:27:25.822 | EMAIL-2 | INFO | Email transmitter thread started
2021-03-10 22:27:25.828 | main | INFO | Twilio SMS Gateway registered
2021-03-10 22:27:25.853 | main | INFO | Twilio SMS Gateway initialized
2021-03-10 22:27:25.857 | SMS-Dispatcher | INFO | Thread started

Create the Portal Server Startup Script (as root)

sudo bash # become root
cd /etc/init.d
vi DKFQS

Edit - create /etc/init.d/DKFQS

#!/bin/sh
# /etc/init.d/DKFQS
# install with: update-rc.d DKFQS defaults

### BEGIN INIT INFO
# Provides:          DKFQS
# Required-Start:    $local_fs $network $time $syslog
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start DKFQS daemon at boot time
# Description:       DKFQS daemon
### END INIT INFO

case "$1" in
  start)
    if [ -f /home/dkfqs/portal/log/DKFQS.log ]; then
       mv /home/dkfqs/portal/log/DKFQS.log /home/dkfqs/portal/log/DKFQS.log_$(date +"%Y_%m_%d_%H_%M")
    fi
    sudo -H -u dkfqs bash -c 'CLASSPATH=/home/dkfqs/portal/bin/bcpkix-jdk15on-160.jar:/home/dkfqs/portal/bin/bcprov-jdk15on-160.jar:/home/dkfqs/portal/bin/bctls-jdk15on-160.jar:/home/dkfqs/portal/bin/DKFQS.jar;export CLASSPATH;nohup java -Xmx3072m -DdkfqsProperties=/home/dkfqs/portal/config/dkfqs.properties -DrewriteProperties=/home/dkfqs/portal/config/rewrite.properties -Dnashorn.args="--no-deprecation-warning" com.dkfqs.server.internal.StartDKFQSserver 1>/home/dkfqs/portal/log/DKFQS.log 2>&1 &'
    ;;
  stop)
       PID=`ps -o pid,args -e | grep "StartDKFQSserver" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "DKFQS stopped with pid(s) : $PID"
          kill -9 ${PID} 1> /dev/null 2>&1
       fi
    ;;
  status)
       PID=`ps -o pid,args -e | grep "StartDKFQSserver" | egrep -v grep | awk '{print $1}'`
       if [ ! -z "$PID" ] ; then
          echo "DKFQS running with pid(s) : $PID"
       else
          echo "No DKFQS running"
       fi
    ;;
  *)
    echo "Usage: /etc/init.d/DKFQS {start|stop|status}"
    exit 1
    ;;
esac

exit 0

Change owner and file protection of /etc/init.d/DKFQS (root at /etc/init.d):

chown root DKFQS
chgrp root DKFQS
chmod 755 DKFQS

Register /etc/init.d/DKFQS to be started at system boot (root at /etc/init.d):

update-rc.d DKFQS defaults

Reboot the system. Then check /home/dkfqs/portal/log/DKFQS.log
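After the reboot you can use the status action of the init script created above and inspect the end of the log file:

```shell
sudo /etc/init.d/DKFQS status                 # prints the pid(s) if the server is up
tail -n 20 /home/dkfqs/portal/log/DKFQS.log   # should show the startup messages
```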

Administrator Sign In

Enter https://admin-portal-host/admin in your browser.

You will get a browser warning because the SSL server certificate is expired. Ignore the warning and enter on the Sign In page the email address and the password that you set in CreateNewAdminDB.sql.

You will now be asked to set a new password. Then you are signed in.

“alt attribute”

Disable Sign Up if you don’t have an SMS gateway:

“alt attribute”

User accounts can be added directly:

“alt attribute”

Replace the SSL Server Certificate

If you or your company can already issue SSL server certificates, you can skip the next sub-chapter and continue with “Convert and Install the SSL Server Certificate”.

Get a Let's Encrypt SSL Server Certificate | Ubuntu 20

Make sure that your portal server has a public, valid DNS name.

Install certbot:

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot

Stop the Portal Server:

sudo /etc/init.d/DKFQS stop

To get the SSL server certificate enter:

sudo certbot certonly --standalone   # enter your email address and the DNS name of your portal server, then follow the instructions

On success certbot generates the following two files:

  • fullchain.pem
  • privkey.pem

Start the Portal Server:

sudo /etc/init.d/DKFQS start

Convert and Install the SSL Server Certificate

Become root and navigate to the directory where the fullchain.pem and privkey.pem files are located. Enter:

sudo bash
openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out your-certificate-name.p12  # convert cert to PKCS12 file
keytool -importkeystore -srckeystore your-certificate-name.p12 -srcstoretype PKCS12 -destkeystore your-certificate-name.jks -deststoretype JKS  # convert PKCS12 file to Java keystore file
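Optionally, you can inspect the converted keystore before installing it, to confirm that the certificate entry is present (keytool prompts for the keystore password):

```shell
# List the keystore contents, including certificate chain details
keytool -list -v -keystore your-certificate-name.jks
```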

Copy the Java keystore file to /home/dkfqs/portal/jks

cp your-certificate-name.jks /home/dkfqs/portal/jks

Edit /home/dkfqs/portal/config/dkfqs.properties and replace:

HTTPSKeyStoreFile=/home/dkfqs/portal/jks/your-certificate-name.jks
HTTPSKeyStorePassword=*********

Restart the Portal Server

sudo /etc/init.d/DKFQS stop
sudo /etc/init.d/DKFQS start

Create a Cron Job to Renew the Let's Encrypt SSL Certificate

Create the sub-directory system_cronjobs_scripts in your home directory and add/edit the file “DKFQS_certbot_renew” in this directory. Replace the ********* placeholders with your real values.

#!/bin/sh
#
# renew the letsencrypt DKFQS certificate
# =======================================
certbot renew
#
# set the default working directory
cd /home/*********/system_cronjobs_scripts
#
# cleanup in any case
rm -f *.jks
rm -f *.p12
#
# convert the letsencrypt certificate to PKCS12 and place it in the default directory
openssl pkcs12 -export -in /etc/letsencrypt/live/*********/fullchain.pem -inkey /etc/letsencrypt/live/*********/privkey.pem -out ./*********.p12 -passin pass:******** -passout pass:********
#
# convert the PKCS12 certificate to a Java keystore
echo ******** | keytool -importkeystore -srckeystore *********.p12 -srcstoretype PKCS12 -destkeystore *********.jks -deststoretype JKS -storepass ********
#
# update DKFQS keystore file
cp *********.jks /home/dkfqs/portal/jks
chown dkfqs /home/dkfqs/portal/jks/*********.jks
chgrp dkfqs /home/dkfqs/portal/jks/*********.jks
chmod 600 /home/dkfqs/portal/jks/*********.jks
#
# restart DKFQS server
/etc/init.d/DKFQS stop
sleep 5
sudo /etc/init.d/DKFQS start
#
# cleanup again in any case
rm -f *.jks
rm -f *.p12
#
# display DKFQS log file
sleep 5
cat /home/dkfqs/portal/log/DKFQS.log
#
# all done
exit 0

Then, as root, set the execute bit on the script and try it out manually:

sudo bash # become root
chmod 700 DKFQS_certbot_renew # change file protection and set execute bit
./DKFQS_certbot_renew  # try out manually

Add the script to crontab. The schedule 1 1 1 * * runs it at 01:01 on the first day of every month. Important: The last line in crontab must be an empty line!

sudo crontab -e
1 1 1 * * /home/*********/system_cronjobs_scripts/DKFQS_certbot_renew > /home/*********/system_cronjobs_scripts/DKFQS_certbot_renew.log 2>&1


7 - User Guide

Portal Server User Guide

Thank you for using the Real Load product.

On this first page of the user guide you will learn:

  • How to perform a simple HTTP/S test
  • The basic concepts of the product
  • How to determine the system capacity of a stressed server

Projects Menu

“alt attribute”

After you have signed in to the portal server, you will see a navigation bar whose first menu on the left is Projects. This is a file browser that displays all of your files. The view is divided into Projects and Resource Sets, which contain the files for executing your tests and the measured test results. You can also store additional files in a Resource Set, e.g. instructions on how a test should be performed.

Among other things, you can:

  • Create, clone and delete projects.
  • Create, clone and delete resource sets (subdirectories of projects).
  • Upload and download files, create new files, display the content of files, edit and delete files.

There is also a recycle bin from which you can restore deleted projects, resource sets and files.

“alt attribute”

Measuring Agents and Proxy Recorders

“alt attribute”

In order to perform a test you need at least one “Measuring Agent” (load generator). If you want to test entire Web pages, you also need an “HTTP/S Proxy Recorder” to record the HTTP/S calls of the Web pages.

Ready-to-use Amazon EC2 instances of these two components are already available, which you can start yourself. Alternatively, you can install and operate “Measuring Agents” and “HTTP/S Proxy Recorders” on your own systems.

After you have started a “Measuring Agent” or “HTTP/S Proxy Recorder” you can register it in the portal server, and check the reachability by a “ping” at application level.

“alt attribute”

“alt attribute”

HTTP Test Wizard

“alt attribute”

Tests can be created in a number of different ways. You can even develop test programs from scratch that measure anything and use any protocol.

To keep it simple in the first step, we introduce the “HTTP Test Wizard”, with which you can easily define HTTP/S sessions and run them as tests.

Test Preparation

  1. Navigate to the “Projects Menu” and create a new Project and a new Resource Set.
  2. Navigate to the “HTTP Test Wizard”.
  3. Create a New Session in the “HTTP Test Wizard”.
  4. Save the New Session to the Project and Resource Set which you have created before.

“alt attribute”

You now have an empty session to which you can add HTTP requests (URL calls).

Test Creation

Now add one or more URLs to the test. If you want to subdivide the URLs into individual groups, first add a “Measuring Group”, then add the associated URLs, then add the next “Measuring Group” with its corresponding URLs, and so on.

If you define a “Measuring Group”, the (separately measured) response time of the group is measured over all URLs of the group.

“alt attribute”

Entering a URL is easy. Select the HTTP method (GET, POST, …) and enter the absolute URL. If you want to execute an HTTP POST method, first select the request's content type and then enter the data of the POST request. Finally click on the “Add URL” button at the bottom of the dialog.

After you’ve created the test, it looks similar to this:

“alt attribute”

Save the test again (save session) and then debug the test.

Debugging the Test

In order to debug a test, at least one “Measuring Agent” must be available. This is because it is a remote debugger.

If you click on “Debug Session”, the test is automatically transmitted to one of your measuring agents and the debugger is started. If you have registered several measuring agents, you can select in the debugger at the top right on which measuring agent the debug session will be executed.

“alt attribute”

During debugging, please pay close attention to the HTTP status code you get from the URLs. For example, if you receive a 404 status code (not found), you have probably entered a faulty URL. In such a case, cancel the debug session, correct the URL and then start the debugger once again.

As you may have already noticed, the HTTP status code of the URLs has not yet been checked by the debugger. Return to the HTTP Test Wizard and configure the corresponding HTTP status code for each URL (section “Verify HTTP Response”). Don’t forget to click on the “Modify URL” button at the bottom of the dialog.

“alt attribute”

Debug your test one last time and then save it.

Generating the Code

Generating the code is pretty easy. First click on “Generate Source Code” and then on “Compile & Generate JAR”.

“alt attribute”

“alt attribute”

Then click on “Define New Test”.

Defining the Test

Defining a test with the “HTTP Test Wizard” is also quite easy. Optionally, you can enter a test description here. Then click on “Define Test”.

“alt attribute”

After you have clicked on “Define Test”, the test is created and the view changes from the “HTTP Test Wizard” menu to the “Tests” menu.

Tests Menu

The Tests Menu shows all tests that you have defined, across all Projects and Resource Sets. You can also filter the view by Projects and Resource Sets, and sort the tests in different ways.

“alt attribute”

Note that a test is something like a bracket that only contains references to the files that are required for the test execution. There are no files stored inside the test itself.

On the one hand, this means that a test becomes invalid/corrupted if you delete the corresponding resource files of the test in the Projects Menu. On the other hand, it also means that all changes to files saved in the tree of the Projects Menu are immediately applied to the test.

Each test has a base location from which it was defined (Project and Resource Set) and to where also the test results are saved.

A test itself can reference its resource files in two ways:

  • Resource Files that reside in the base location of the test (the test core files).
  • Referenced Files that are located in other Projects / Resource Sets. These are typically libraries, plug-ins and input files which are used/shared by several, different tests.

Defining a Test Job

After you have clicked on “Define Test Job” in a test and then on “Continue”, the intermediate “Define Test Job” menu is displayed.

“alt attribute”

In contrast to a “test”, all referenced files of the test are copied into the “test job”. For this reason there is also the option “Synchronize Files at Start of Job”, which you should always leave switched on. Under “Additional Job Arguments” you can enter test-specific command line arguments that are passed directly to the test program or test script. However, you usually do not need to enter anything.

After you have clicked on “Define Load Test Job” the test job is created and the view changes from the “Tests” menu to the “Test Jobs” menu.

Test Jobs Menu

In this menu all test jobs are displayed with their states. The green dot at the top right next to the title indicates that all measuring agents that are set to active can be reached. The color of the dot changes to yellow or red if one or more measuring agents cannot be reached / are not available.

“alt attribute”

A test job can have one of the following states:

  • invalid: The job is damaged and can only be deleted.
  • defined: The job is defined locally in the portal server, but has not yet been transmitted to a measuring agent. Jobs in this state can still be modified.
  • submitted: The job was transmitted to a measuring agent, but is not yet ready to be started.
  • ready to run: The job on the measuring agent is ready to start (the corresponding data collector process is already running on the measuring agent, but the job itself is not yet started).
  • start failed: The start of the job has failed.
  • running: The job is currently running and can be monitored in real time.
  • completed: A job that was previously running is now completed (done).

As soon as a job is in the “defined” state, it has a “local job Id”. If the job is then transmitted to a measuring agent, the job has additionally a “remote job Id”.

Starting a Test Job

After you have clicked on “Start Test Job” in a test job you can select/modify the measuring agent on which the job will be executed and configure the job settings.

“alt attribute”

Input Fields:

  • Measuring Agent: The measuring agent on which the test is executed.
  • Number of Users: The number of simulated users that are started.
  • Max. Test Duration: The maximum test duration.
  • Max. Loops per User: The maximum number of sessions executed per user (the number of iterations of the test per user). Each time a new session of a user is started, the user’s session context is reset.
  • Loop Iteration Delay: The delay time after a session of a user has ended until the next session of the user is started.
  • Ramp Up Time: The length of time at the beginning of the test until all simulated users are started. Example: with 20 simulated users and a time of 10 seconds, a new user is started every 0.5 seconds.
  • Additional Arguments: Additional values which are transferred on the command line when the test script is started. These arguments are test specific. For tests that were created with the “HTTP Test Wizard” you can specify for example the following values: “-tcpTimeout 8000 -sslTimeout 5000 -httpTimeout 60000” (TCP connect timeout / SSL handshake timeout / HTTP processing timeout) which are considered by the executed URL calls and override the default values.
  • Debug Execution: This option causes detailed information to be written to the log file of the test, for example variable values extracted from input files or from HTTP responses, as well as variable values assigned at runtime. Only activate this option if you have problems with the execution of the test.
  • Debug Measuring: Causes the Data Collector of the Measuring Agent to write the JSON objects of the DKFQS All Purpose Interface to its log file. This option can be enabled to debug self-developed tests that have been written from scratch.

Normally you do not have to enter any “Additional Arguments”, and you can leave “Debug Execution” and “Debug Measuring” switched off.

After clicking on “Start Test Job”, the job is started on the measuring agent and the status of the job is now “Running”. Then click on “Monitor Jobs”.

“alt attribute”

Real-Time Monitor Menu

The real-time display shows all currently running jobs including their measured values and measured errors.

“alt attribute”

You can also suspend a running job for a while and resume it later. However this has no effect on the “Max. Test Duration”.

“alt attribute”

After the job is completed you can click on “Analyze Result”. The view changes then to the “Test Results” menu.

“alt attribute”

Test Results Menu

The “Test Results” menu is a workspace into which you can load multiple test results. You can switch back and forth between the test results. As in the “Real-Time Monitor” menu, all measured values and all measured errors are displayed. In addition, percentile statistics and diagrams of error types are also displayed in this menu.

This menu also enables you to combine several test results into a so-called “load curve”, and thus to determine the maximum number of users that a system such as a Web server can handle (see next chapter).

“alt attribute”

The Summary Statistic of a test result contains some interesting values:

  • Avg. Measured Samples per Second: The number of successful measured values per second, counted over the whole test (also called “hits per second” in other products).
  • Max. Concurrent Users: The maximum number of concurrent users actually reached during the test (which may differ from the “Number of Users” defined when starting the test).
  • Max. Pending Samples: The maximum number of requests to the server for which no immediate response has been received, measured over the whole test (the maximum traffic jam of requests).
  • Average CPU Load: This is the average CPU load in percent on the measuring agent (load generator) which was captured during the execution of the test. If this value is greater than 95% the test is invalid because the measuring agent itself was overloaded.

Since several thousand to several million response times can be measured in a very short time during a test, the successfully measured response times are summarized in the response time diagrams at 4-second intervals. For this reason, a minimum value, an average value and a maximum value are displayed for each 4-second interval in such diagrams.

However, this summarization is not performed for percentile statistics and for measured errors. Every single measured value is taken into account here.

Determining System Capacity

The maximum capacity of a system, such as the maximum number of users that a web server can handle, can be determined by a so-called “load curve”.

To obtain such a load curve, you must repeat the same test several times, increasing the number of users with each test. For example, a test series with 10, 50, 100, 200, 400, 800, 1200 and 1600 users.

The easiest way to repeat a test is to clone a test job. You can then enter the higher number of users when starting the test.

“alt attribute”

A measured load curve looks like this, for example:

“alt attribute”

As you can see, the throughput of the server increases linearly up to 400 users, with the response times remaining more or less the same (Avg. Passed Session Time). Then, with 800, 1200 and 1600 users, at first only individual errors are measured, then many errors, with the response times now increasing sharply.

This means that the server can serve up to 400 users without any problems.
But could you operate the server with 800 users if you accept longer response times? With 800 users, 745,306 URL calls were successfully measured, with only 50 errors occurring. To find out, let’s compare the detailed response times of “Page 1” at 400 users with those at 800 users.

Response Times of “Page 1” at 400 Users: “alt attribute”

Response Times of “Page 1” at 800 Users: “alt attribute”

The 95% percentile value at 400 users is 224 milliseconds and increases to 1952 milliseconds at 800 users. Now you could say that it just takes longer. However, if you look at the red curve of the outliers, these exceed one second only once at 400 users, but often exceed 8 seconds at 800 users. Conclusion: The server cannot be operated with 800 users because it is then overloaded.

Now let’s do one last test with 600 users. Result:

“alt attribute”

The throughput of the server at 600 users is a little bit higher than at 400 users and also a little bit higher than at 800 users. No errors were measured.

Response Times of “Page 1” at 600 Users: “alt attribute”

The 95% percentile value at 600 users is 650 milliseconds, and there are only two outliers of a little more than one second. Final Conclusion: The server can serve up to 600 users, but no more.

Troubleshooting a Test

In rare cases it can happen that a test does not measure anything (neither measurement results nor errors). In this case you should either wait until the test is finished or stop it directly in the “Real-Time Monitor” menu.

You can then download the test log files in the “Test Jobs” menu and search them for errors.

“alt attribute”

“alt attribute”

7.1 - AWS Measuring Agents

How to locate and use AWS based Measuring Agents

This document describes how to launch Measuring Agents in AWS as EC2 instances. Readers are assumed to be familiar with AWS EC2 terms used in here, in particular if launching Measuring Agents manually.

Listing available AWS AMIs

We make pre-installed Measuring Agents available for everybody to use as ready-to-run AWS AMIs. No additional costs (beyond the costs charged by AWS) apply when using these images.

To obtain a list of these ready-to-run AMIs you can use one of these options:

Via our Desktop Companion

Our Desktop Companion allows you to manage AWS Measuring Agents. You can launch and terminate agents as well as register them directly with the Real Load Portal with a few mouse clicks.

Refer to the Desktop Companion Documentation for further information.

Via the Real Load portal

Log in to the Real Load portal and then head to the Measuring Agents configuration section. Then click on the AWS logo:

… and a list of available AWS AMIs will be displayed.

Via the AWS Console

Using the AWS console locate AMIs belonging to our AWS account 775953462348 in your preferred AWS region. This can be done by looking for public images owned by our account, as illustrated in this screenshot:

For example in the Sydney region (ap-southeast-2) you’ll find the above listed AMI. If you can’t find an AMI in your desired region please contact us so that we can make it available.

Launching an AMI

Via our Desktop Companion

Our Desktop Companion allows you to launch AMIs with a mouse click.

Refer to the Desktop Companion Documentation for further information.

Via the AWS Console

Once you’ve located a suitable AMI you can launch it as you would with any other Linux image as illustrated in this section.

Configure a Security Group

In order for the AMI to be reachable from the Real Load Portal you’ll need to configure a Security Group allowing inbound connections to port 8080 as a minimum.

Set the Measuring Agent secret

In order to protect access to your Measuring Agent we strongly recommend setting a non-default secret when launching the AMI.

The secret can be set at launch time by providing User Data to the AMI as follows (replace “secret” with the secret of your choice):

#!/bin/sh
echo "AGENT_SECRET=secret" > /home/ec2-user/agent_secret.sh

(Optional) Set instance tag

If you want this instance to appear in the Desktop Companion application then you’ll have to set the “REAL_LOAD_AGENT” tag to “true” when launching.
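
If you prefer scripting the launch over clicking through the console, the settings described above (a security group allowing inbound port 8080, the agent secret via User Data, and the optional REAL_LOAD_AGENT tag) can be sketched as parameters for boto3's `run_instances` call. This is a sketch only: the AMI ID, security group ID, instance type and secret are placeholders, not real values.

```python
# Sketch only: assembling the launch options described above as parameters for
# boto3's run_instances call. The AMI ID, security group ID, instance type and
# secret below are placeholders -- substitute your own values.
user_data = '#!/bin/sh\necho "AGENT_SECRET=secret" > /home/ec2-user/agent_secret.sh\n'

launch_params = {
    "ImageId": "ami-00000000000000000",        # placeholder: a Real Load agent AMI
    "InstanceType": "t3.medium",               # placeholder instance type
    "MinCount": 1,
    "MaxCount": 1,
    "SecurityGroupIds": ["sg-00000000"],       # group must allow inbound TCP 8080
    "UserData": user_data,                     # sets a non-default agent secret
    "TagSpecifications": [{
        "ResourceType": "instance",
        # Makes the instance visible in the Desktop Companion:
        "Tags": [{"Key": "REAL_LOAD_AGENT", "Value": "true"}],
    }],
}

# With boto3 installed and AWS credentials configured you would then call:
#   import boto3
#   boto3.client("ec2").run_instances(**launch_params)
```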

Select the Security Group

(Optional) Select SSH key

If for some reason you think you might need to log in via SSH to the Measuring Agent instance, then select an SSH key. This is not required for normal agent operations.

Configure EC2 instance in Real Load Portal

Via our Desktop Companion

Our Desktop Companion allows you to configure EC2 instances with a mouse click.

Refer to the Desktop Companion Documentation for further information.

Manual Configuration

Once the EC2 instance is up and running, you’ll need to configure it in the Real Load portal so that you can execute load tests from it.

Once added you can validate connectivity to the agent as shown in this screenshot:

Terminating EC2 instances

Once you’ve completed your load testing it’s important to terminate the Measuring Agent instance to avoid unwanted AWS charges.

Via our Desktop Companion

Our Desktop Companion allows you to terminate EC2 instances and remove them from the Real Load Portal configuration with a mouse click.

Refer to the Desktop Companion Documentation for further information.

Via the AWS Console

Terminate the instance as you would terminate any other Linux instance.

Remove EC2 instance from Real Load Portal

Via our Desktop Companion

Our Desktop Companion allows you to remove Measuring Agent instances with a mouse click.

Refer to the Desktop Companion Documentation for further information.

Manual Configuration

From the Measuring Agents menu select the option to delete the agent.

Alternatively you could decide to keep the agent configured and simply deactivate it. Disabling is recommended to avoid warning messages in the Portal console about failing connectivity tests.

7.2 - HTTP Test Wizard

User Guide | HTTP Test Wizard

“alt attribute”

Overview and Functionality

The HTTP Test Wizard helps you create sophisticated tests in an easy way. You can compile (define) the entire test via the user interface and then generate a ready-to-run test program from this definition. A powerful debugger helps you to perfect your test.

As the name suggests, the HTTP Test Wizard is optimized for the execution of HTTP/S tests. However, by using plug-ins, any other protocols can also be tested and measured (such as SMTP, DNS queries, DB-SQL queries, etc.).

HTTP Test Wizard Features:

  • No programming required (except for self-written plug-ins). You can create your test through the powerful user interface by assembling all required functionality via simple dialogs.
  • The executed HTTP/S calls can either be entered manually or imported from a Proxy Recorder session.
  • Variable values can be extracted and assigned directly via the user interface.
  • Integrated Remote Debugger: The defined test can be debugged in advance on any Measuring Agent - before generating the test program. The debugger also supports defining variables, variable extractors and variable assigners on the fly, which are adopted into the test.
  • Once the test passes in the debugger, it can be automatically converted into a performance-optimized test program.

Session and Session Elements

The test sequence is referred to as a “Session”, whereby each simulated user repeatedly executes the session in a loop (a so-called “User Session” or “Session Loop”).
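
The relationship between users, session elements and the session loop can be sketched as follows (the function and element names are illustrative only, not the product's API):

```python
def run_user(session_elements, loop_count):
    """Model of one simulated user: the whole session is executed
    repeatedly in a loop (the "session loop")."""
    executed = []
    for iteration in range(loop_count):      # one "user session" per iteration
        for element in session_elements:     # the session elements in order
            executed.append((iteration, element))
    return executed

# A two-element session executed over three session loop iterations:
log = run_user(["URL[1]", "URL[2]"], loop_count=3)
```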

In order to define a test sequence, you can add various “Session Elements” to a session:

  • User Input Field: Displays an additional input field when starting the test and stores the entered value in a global variable.
  • Measurement Group: Measures the execution time over several session elements.
  • URL: Executes an HTTP request and receives the HTTP response. Measures the execution time and verifies the HTTP response. The HTTP response can either be received synchronously or asynchronously (see also URL Synchronisation Point). Supports “Bound to URL Plug-Ins” which, among other things, can modify the HTTP request and post-process the HTTP response.
  • URL Synchronisation Point: Synchronizes asynchronously received HTTP responses.
  • Delay Time: Delays the execution of the user session for an amount of time.
  • Basic Authentication: Adds a Basic Authentication username and password to all or selected URLs.
  • SSL/TLS Client Certificate: Adds a SSL/TLS Client Certificate to all or selected URLs.
  • General Request Header Field: Adds a HTTP request header field (for example “User-Agent”) to all or selected URLs.
  • Cookie Modifier: Sets/Extracts/Deletes cookies.
  • Plug-In: Initializes and executes a “Normal Session Element Plug-In”.
  • Input File: Reads an input file line by line, tokenizes the line and extracts the values of the tokens into variables.
  • Output File: Stores the values of variables into an output file.

“alt attribute”

Adding session elements is in most cases quite simple and self-explanatory, but it can be a bit challenging for URLs and Input Files. For this reason, adding these two session elements is described in more detail in the next two sub-chapters.

Adding URLs

When adding a URL you have to at least select the HTTP Method (for example GET) and enter the absolute URL (https://<host>/path).

Checking the HTTP response code and the HTTP response content is optional, but strongly recommended, as otherwise the test result may contain false positive results.

“alt attribute”

The following fields can be entered or selected:

  • General Area
    • HTTP Method: Select the HTTP method (GET, POST, …).
    • URL: Enter an absolute URL, inclusive (optional) query string parameters. Example: https://www.dkfqa.com/?v=1&w=2
    • Execute - synchronous or asynchronous: Select how the HTTP response is received. If you choose “asynchronous” then you also have to add an “URL Synchronisation Point” to the test sequence after the asynchronously executed URLs.
    • Error Handling: Select what happens if the HTTP request fails or if the HTTP response is invalid. “Final End” means that the whole test is aborted.
    • Enable Implicit Assigners ${<variable-name>}: If checked, then placeholders for variables are considered in the URL as well as in the HTTP request header and content. Example: URL = https://${vHost}/?v=1&w=2, Variable vHost = “www.realload.com”, Result = https://www.realload.com/?v=1&w=2 . Note: instead of using Implicit Assigners you can also define Variable Assigners (see next chapter: Variables).
  • HTTP Request Content Area | Only fill in this area if the HTTP request contains content, for example for POST requests.
    • Request Content Type: Select or enter the request content type. For example “application/x-www-form-urlencoded” or “application/json”.
    • Request Content Charset: Select or enter the request content charset - or leave the field blank if not applicable.
    • Direct Value or Read From File: Select if the request content data are directly entered here or are read from a file.
    • Request Content Data: Enter the request content data (if they are not read from a file). Note: select the Request Content Type first, before you enter the Request Content Data.
    • Also send Zero Content Length: If checked, but no content is available, the HTTP request header field “Content-Length: 0” is sent.
  • Additional HTTP Header Fields Area
    • You can enter additional HTTP request header fields here which are only applied for this URL. Note: additional HTTP request header fields that apply to all URLs should instead be defined by using a “General Request Header Field” session element.
  • Verify HTTP Response Area | Note: you should configure at least an HTTP response code in order to verify the test result.
    • Verify HTTP Status Code: Select the expected HTTP response code(s).
    • Verify Content Type: Select the expected HTTP response content type(s).
    • Verify Content Text: Enter text fragments that must be present in the HTTP response content, or leave the fields blank if not applicable.
  • Plug-Ins Area
    • Here you can add “Bound to URL Plug-Ins” (see chapter “Plug-Ins”).
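
The Implicit Assigner substitution described above (${<variable-name>} placeholders in the URL, request header and content) can be sketched like this. This is a minimal illustration, not the wizard's actual implementation:

```python
import re

def apply_implicit_assigners(text, variables):
    """Replace ${variable-name} placeholders with the current variable values,
    as the "Enable Implicit Assigners" option does for the URL, the HTTP
    request header and the request content."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: variables[m.group(1)], text)

result = apply_implicit_assigners("https://${vHost}/?v=1&w=2",
                                  {"vHost": "www.realload.com"})
# result == "https://www.realload.com/?v=1&w=2"
```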

Adding Input Files

Input files are text files (*.txt, *.csv) whose content is read line by line during the test. Each line is divided into tokens from which values of variables are extracted. Empty lines are skipped. Note that the variables must first be defined before you can add an input file.

The following fields can be entered or selected:

  • Token Delimiter: The token delimiter. This is usually a single character, but can also be a string.
  • Comment Tag: Lines starting with the comment tag are skipped. You can also place the comment tag within a line, which means that the remaining part of the line is ignored. The comment tag is usually a single character, but can also be a string.
  • Cyclic Read: If not checked, then the test will end when no further line can be read (at end of file). If checked, the file is re-read from the beginning after the end of file is reached.
  • Randomize Lines: If checked, then the order of the lines is randomized each time the file is read.
  • Trim Values: If checked then whitespace characters are removed from the start and end of the extracted values (tokens).
  • Remove Double Quotes: If checked then double quotes are removed from the extracted values (tokens).
  • Scope: Get Next Line per:
    • Global: Reads only a single line of the file. The line is read at the start of the test.
    • User: Reads a line from the file each time a simulated user is started.
    • User Session: Reads a line from the file each time a simulated user starts a new session loop iteration.
  • 6x Variable - Token[Index]: You can configure up to 6 token indexes whose values are extracted into variables.
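
As an illustration of how these options interact, here is a minimal sketch of tokenizing one input-file line. The behavior is assumed from the option descriptions above; it is not the product's actual code:

```python
def parse_input_line(line, delimiter=",", comment_tag="#",
                     trim_values=True, remove_double_quotes=True):
    """Sketch: tokenize one input-file line per the options described above."""
    comment_pos = line.find(comment_tag)
    if comment_pos != -1:
        line = line[:comment_pos]        # remainder of the line is ignored
    if not line.strip():
        return None                      # empty / comment-only lines are skipped
    tokens = line.split(delimiter)
    if trim_values:
        tokens = [t.strip() for t in tokens]       # strip whitespace
    if remove_double_quotes:
        tokens = [t.strip('"') for t in tokens]    # strip double quotes
    return tokens

# Token[1] could feed a username variable, Token[2] a password variable:
tokens = parse_input_line('"alice" , secret123  # test account')
```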

Variables

The HTTP Test Wizard (as well as the debugger) supports defining variables and extracting/assigning variable values from/to session elements. Variable definitions which are (remotely) made in the debugger are automatically synchronized with the HTTP Test Wizard at portal server side.

The data type of a variable is always a string, which can also be empty, but is usually never null.

Defining Variables

“alt attribute”

When defining a variable, the following attributes can be set:

  1. Variable Name
  2. Scope: “Global”, “User” or “User Session”
  3. Default Value (initial value)

For variables with the scope “Global” there is only one instance which is initialized when the test is started.

For variables that have the scope “User”, there is a separate instance for each simulated user, which is initialized when the user is started.

In the case of variables with the scope “User Session”, a new instance is created and initialized at the start of each iteration of the session loop. The visibility is restricted to the current session loop of the simulated user.
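
The three initialization rules described above can be sketched as follows (class and variable names are illustrative only):

```python
# Sketch of when each variable scope is (re)initialized. Not the product's
# actual data model -- just a model of the rules described above.
class VariableStore:
    def __init__(self, definitions):
        # definitions: {name: (scope, default_value)}
        self.definitions = definitions
        self.values = {}

    def init_scope(self, scope):
        """Reset all variables of the given scope to their default value."""
        for name, (var_scope, default) in self.definitions.items():
            if var_scope == scope:
                self.values[name] = default

store = VariableStore({
    "vHost":     ("Global",       "www.realload.com"),
    "vUsername": ("User",         ""),
    "vToken":    ("User Session", ""),
})
store.init_scope("Global")            # once, when the test is started
store.init_scope("User")              # once per simulated user
store.values["vToken"] = "abc123"     # set during a session loop iteration
store.init_scope("User Session")      # next iteration: vToken is reset
```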

Variable Extractors and Assigners

Variable extractors are definitions which contain instructions on how to extract a value from a session element into a variable. Variable assigners, on the other hand, are definitions that contain instructions on how to assign or replace a value of a session element.

The following two images show a variable extractor that extracts a dynamic value from URL[2], and a variable assigner which assigns the value to a JSON post request of URL[3].

“alt attribute”

“alt attribute”

While variables (almost) always have to be defined manually, the definition of variable extractors and assigners is usually done implicitly (semi-automatically), depending on the context of the session element.

For example, if you define an input file, you must first create the variables that contain the extracted values (for username and password in this example). If you then assign the line token numbers to the variables, the corresponding variable extractors are automatically created.

“alt attribute”

“alt attribute”

“alt attribute”

Note: In the example above, a separate username and password are read from the input file for each simulated user and assigned as the values of the corresponding variables. For this reason the variables have the scope “User”, and the input file has the scope “Get Next Line per User”.

Assigning and Extracting Variable Values to/from URLs

Apart from Implicit Assigners (see “Adding URLs”), variable values of URLs can only be extracted and assigned by using the debugger. However, this is not a problem as definitions made in the debugger are automatically synchronized with the portal server.

Extracting variable values from URLs during debugging is quite easy. After executing a URL in the debugger, click on the symbol of the HTTP response header or of the HTTP response content and then extract the value.

“alt attribute”

“alt attribute”

Assigning variable values to URLs is a little bit more tricky. You have to interrupt the HTTP request in the debugger in the middle - after the request is initialized, but before the request is sent to the server.

To do this, you have to temporarily enable the option “HTTP Request Breakpoint” in the debugger before you execute the URL (Next Step):

“alt attribute”

After you have clicked on “Next Step” you can assign the variable value to the pending HTTP request:

“alt attribute”

Click on the symbol of the URL, or of the HTTP request header or of the HTTP request content, and then assign the variable value:

“alt attribute”

“alt attribute”

Plug-Ins

Plug-Ins are reusable HTTP Test Wizard extensions that are manually programmed. They can also be published by the users who create them and imported by other users. Therefore, before you start programming your own plug-in, first take a look at the already published plug-ins.

Plug-In Types

There are 3 types of plug-ins:

  • Normal Session Element Plug-Ins: As the name suggests, such plug-ins are executed as a normal session element. This type of plug-in is versatile and can, among other things, process variables, abort the current session, the user or the entire test, or even perform independent measurements with any protocol.
  • Bound to URL Plug-Ins: Such plug-ins are linked to a URL session element and can change the HTTP request before sending it to the server and post-process the received HTTP response - such as extracting variable values or checking the content of the HTTP response. Such plug-ins can also abort the current session, the user, or the entire test.
  • Java Source Code Modifier Plug-Ins: These are special plug-ins that can subsequently modify the source code generated by the HTTP Test Wizard. Usually such plug-ins are only used as a workaround if the HTTP Test Wizard generates faulty or incomplete source code.

Importing Published Plug-Ins

“alt attribute”

“alt attribute”

After you have imported the plug-in you have to load and compile it - then save the compiled plug-in.

“alt attribute”

“alt attribute”

“alt attribute”

“alt attribute”

After you have saved the compiled plug-in you can add it to your HTTP Test Wizard session. Plug-ins of the type “Java Source Code Modifier Plug-Ins” do not have to be added, but can be called directly after the source code of the test program has been generated.

Developing Your Own Plug-Ins

Your own plug-ins can be developed in Java 8 or 11. The following example shows a plug-in which decodes a Base64-encoded string.
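
Plug-ins themselves are written in Java; purely to illustrate the logic this example plug-in's onExecute method performs, here is the equivalent decoding step in Python:

```python
import base64

def on_execute(input_value):
    """Illustrative equivalent of this plug-in's onExecute logic:
    decode a Base64-encoded string into text."""
    return base64.b64decode(input_value).decode("utf-8")

decoded = on_execute("SGVsbG8gV29ybGQ=")
# decoded == "Hello World"
```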

“alt attribute”

After clicking on “New Plug-In” the plug-in type has to be selected (in this example “Normal Session Element Plug-In”): “alt attribute”

On the “General Settings” tab you have to at least enter the Plug-In Title and the Java Class Name. You should also enter a Plug-In Description, which can be formatted with simple HTML tags. The onInitialize Scope can be set to “Global”, as this plug-in does not require any initialization. The onExecute Scope is set to “User Session” so that all kinds of variable scopes are supported. “alt attribute”

On the next tab - “Input & Output Values” - a plug-in input and a plug-in output (Java) variable are defined for the plug-in method onExecute. The values of these two Java variables will later correspond to two HTTP Test Wizard variables (which may have different variable names): “alt attribute”

When you click on the next tab - “Resources Files” - you will see that the Java library com.dkfqs.tools.jar has already been added to the plug-in. This is normal, as all kinds of plug-ins require this library. Here you may also add other Java libraries required by the plug-in - but in this case no other libraries are needed. “alt attribute”

Now, on the “Source Code & Compile” tab, you can first generate the plug-in template. Then you have to extend the generated code with your own code, i.e. in this example you have to complete the Java import definitions and program the inner logic of the Java method onExecute. Then compile the plug-in.

“alt attribute”

“alt attribute”

On the last tab, “Test & Save”, you can first test the plug-in (remotely) on any Measuring Agent. To perform the test, enter for example “SGVsbG8gV29ybGQ=” as input and you will see “Hello World” as output. “alt attribute”

“alt attribute”

Finally save the plug-in: “alt attribute”

After you have saved the plug-in, click on the “Close” button. Then you will see the new plug-in in the plug-in list: “alt attribute”

Publishing Plug-Ins

If your plug-in can be used universally, it would be nice if you also publish it - to make it available to other users.

Note that you have to enable the option “Public In-App User” in your Profile Settings in order to be entitled to publish plug-ins.

Publishing plug-ins is especially useful for users who have additionally activated the option “Publish My Profile as Technical Expert” in their Profile Settings. This will significantly improve your visibility as an expert.

“alt attribute”

“alt attribute”

7.3 - HTTP/S Remote Proxy Recorder

User Guide | HTTP/S Remote Proxy Recorder

By using an HTTP/S Proxy Recorder, the HTTP/S traffic from Web browsers and technical Web clients can be recorded and easily converted into an HTTP Test Wizard session.

The HTTP Test Wizard session can then be debugged and post-processed. Finally, an executable test program can be generated from the recorded session.

Functional Overview: Remote Proxy Recorder

“alt attribute”

An HTTP/S Remote Proxy Recorder has two service ports:

  • A Control Port that is addressed by the portal server.
  • A Proxy Port that records the data traffic from a Web browser or from a technical Web client.

All data traffic that passes through the proxy port is first decrypted by the Proxy Recorder and then encrypted again before it is forwarded to the target Web server(s).

In order to record a Web surfing session by a Web browser you have to start two different Web browser products on your local machine. For example:

  • A Chrome browser to operate the portal server (Web Browser 1), and
  • A Firefox browser to record the Web surfing session (Web Browser 2)

We recommend always using Firefox as Web Browser 2.

To be able to record a Web surfing session, you have to reconfigure Web Browser 2:

  • Import the Proxy Server CA Root Certificate into the Web Browser 2.
  • Configure the Network Settings of Web Browser 2 to use the Proxy Server.

Additional note: Before you start recording a Web surfing session in Web Browser 2, you must always clear the browser cache.

Once the recording is completed you should undo the configuration changes in Web Browser 2 (restore the original network settings and delete the Proxy Server CA Root Certificate).

Registering a Proxy Recorder at the Portal

After you have started a HTTP/S Proxy Recorder as a cloud instance or installed it on one of your systems, you have to register it at the Portal Server.

“alt attribute”

“alt attribute”

You can set any user name and password for Proxy Authentication. The Control API Authentication Token was defined/set by the person who started or installed the Proxy Recorder.

Once you have registered the Proxy Recorder you can try to ping it on the application level:

“alt attribute”

“alt attribute”

Then try to connect to the Control Port of the Proxy Recorder. If this works, the Proxy Recorder is now ready for use.

“alt attribute”

“alt attribute”

Configuring Web Browser 2 for Recording

Import the Proxy Server CA Root Certificate

Click on the certificate icon and store the certificate at any location.

“alt attribute”

“alt attribute”

Then open the Web browser settings and import the certificate.

“alt attribute”

“alt attribute”

“alt attribute”

Since there is a small bug in some versions of Firefox, you have to close and reopen the Certificate Manager to view the imported certificate.

“alt attribute”

Configure the Network Settings

“alt attribute”

“alt attribute”

Recording a Web Surfing Session

Navigate with Web Browser 1 to the Proxy Recorder control menu.

“alt attribute”

Then configure the “Hosts Recording Filter” and click on the “Start Recording” icon.

“alt attribute”

In Web Browser 2, first clear the browser cache and then enter the URL where you want the recording to start.

“alt attribute”

“alt attribute”

In Web Browser 1 you can now see the recording of the first page. Before navigating to the next page in Web Browser 2, first insert a “Page Break” in Web Browser 1 with a brief description of the next page.

“alt attribute”

“alt attribute”

Then navigate to the next page in Web Browser 2.

“alt attribute”

In Web Browser 1 you can now see the recording of the next page.

“alt attribute”

Then continue as before: insert a Page Break in Web Browser 1, navigate to the next page in Web Browser 2, and so on …

After you have finished the recording, click on the “Stop Recording” icon. Then convert the recording into an HTTP Test Wizard session.

“alt attribute”

Converting the Recorded Session

Here you can select different options. The HTTP Status Code Filter also supports * as a wildcard. An exclamation mark in front of a value means “do not / exclude”.

The option Enable Parallel Execution of HTTP Requests should be enabled if you have recorded a session with a Web browser (as described above), but should be disabled if the recording was performed by a technical Web client.
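
Assuming straightforward wildcard and negation semantics (this guide does not spell them out, so the exact behavior is an assumption), a single status code filter value could be evaluated like this:

```python
import re

def status_matches(code, pattern):
    """Match a 3-digit HTTP status code against one filter value, where '*'
    is a wildcard and a leading '!' means "do not / exclude" (assumed
    semantics, not confirmed by the product documentation)."""
    negate = pattern.startswith("!")
    if negate:
        pattern = pattern[1:]
    regex = "^" + re.escape(pattern).replace(r"\*", ".*") + "$"
    matched = re.match(regex, str(code)) is not None
    return matched != negate   # invert the result for a '!' pattern

# "3*" keeps all redirects, "!304" drops not-modified responses.
```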

“alt attribute”

Once you have clicked on the Convert button, the recorded session is converted and directly loaded into the HTTP Test Wizard.

“alt attribute”

After that, debug the converted session in the HTTP Test Wizard and then save the HTTP Test Wizard session. Finally, generate and execute the test.

8 - Desktop Companion User Guide

Real Load Desktop Companion - Introduction

The Desktop Companion is a small application that allows you to manage some tasks related to the DKFQS Platform and the Real Load Portal.

In particular the desktop companion allows you to:

  • Generate HTTP Load Test scripts from HTTP Archive (.har) files
  • Locally run the Real Load Proxy Recorder
  • Perform some basic editing of test scripts
  • Upload test scripts directly to the Real Load Portal
  • Manage AWS Measuring Agents (launch, terminate, register with Real Load Portal)

See sections below to learn more about the Desktop Companion.

8.1 - Release Notes

Desktop Companion change log

0.25 | 12 Sept 2022

0.24 | 21 Mar 2022

0.23 | 25 Jan 2022

0.22 | 13 Dec 2021

0.21 | 9 Dec 2021

0.20 | 3 Dec 2021

  • Usability: Added help links to online documentation.
  • Usability: Added various configuration checks and related alert dialogs.
  • Usability: On exit, added alert in relation to running AWS EC2 instances.
  • Usability: List of AWS instances is updated via a background thread, periodically.
  • Usability: List of registered Measuring Agents is updated via a background thread, periodically.
  • Usability: The AWS region where an EC2 instance is launched is automatically added to “My Regions” list.
  • Installer: Created a Windows full installer that includes a JRE.
  • Bugfix: Fixed an issue with HAR files containing POST requests with no content type.
  • Bugfix: Updated JavaFX to v17
  • Download URL JAR: https://download.realload.com/desktop_companion/RealLoadDesktopCompanion-0.20.jar
  • Download URL Windows full installer: https://download.realload.com/desktop_companion/RLDTC_windows-x64_0_20.exe

0.10 | 8 Nov 2021

8.2 - Desktop Companion - Pre-requisites

Read this first.

Before running the Desktop Companion make sure you satisfy the following pre-requisites:

JRE 11 (Mandatory to execute JAR file)

The Desktop Companion is a Java application delivered as an executable .jar file. If you run the application from the .jar file (rather than via the Windows full installer, which bundles a JRE), you’ll need to install a Java 11 JRE as a pre-requisite.

While it should run on any system that supports Java 11 and FXML, it was only tested on the following platforms:

Operating System | JRE 11 Vendor and Build | JRE download link
Windows 10 | Microsoft “11.0.11” 2021-04-20 | https://docs.microsoft.com/en-au/java/openjdk/download
Windows 11 | Microsoft | https://docs.microsoft.com/en-au/java/openjdk/download
Apple OS X | Microsoft “build 11.0.13+8-LTS” | https://docs.microsoft.com/en-au/java/openjdk/download

AWS API Security Credentials (Optional)

In order to use the AWS integration features, you’ll need to prepare Security Credentials with AWS IAM.

Search for “IAM” in the AWS search bar and click on “Users”:

Then add a new user:

Enter a suitable user name and select “Access key” as the AWS credential type.

On the next page set an appropriate policy for the user. Using the “AmazonEC2FullAccess” policy provides sufficient permissions.

Skip the “Add tags” screen and go to the “Review” screen and click on “Create user”. Make sure the permissions you’ve assigned to the user appear on this screen.

Finally take note of the Access Key ID and Secret Access Key, as these need to be configured in the Real Load Desktop Companion.

Real Load Portal User API Authentication Token (Optional)

In order to register the AWS EC2 instances you’ve launched with the Real Load portal, the Desktop Companion requires that you configure an authentication token.

To create an authentication token proceed as follows:

Log in to the Real Load Portal and click on the User -> API Authentication Tokens menu.

Then click on the “Add API Authentication Tokens” button and enter a suitable purpose description. Optionally you can restrict the src IP address from where this token can be used.

Take note of the authentication Token Value as you’ll need to configure it in the Desktop Companion.

8.3 - Desktop Companion - Download and run

Where to download the application from and how to run it.

Downloading the full installer (Windows)

The full installer includes a JRE as well as the Desktop Companion application itself. This is the preferred method of deploying the application, as it avoids issues that might occur because of JRE customization, etc…

Additionally the application can be started from the Windows start menu, as any other application.

The latest version of the full installer can be downloaded from here: https://download.realload.com/desktop_companion/latest_win64

Downloading standalone JAR file only

You can download the latest version of the Desktop Companion as an executable .jar file from here: https://download.realload.com/desktop_companion/latest

There is no installer to execute; the application can be launched by double clicking on the .jar file. If that doesn’t work, consider launching from the command line.

Windows: Launch from file explorer

If you’ve only got one JDK/JRE version installed on your Windows computer then it’s very likely that .jar files are associated with the correct JRE. In that case you should be able to launch the application by double clicking on the .jar file.

If the application doesn’t start, it’s possible the .jar extension is associated with the incorrect JRE or another application altogether. You can try to right-click on the .jar file and select “Open with” from the context menu.

Then select the OpenJRE launcher:

… or alternatively navigate to the JRE 11 binary, as shown in this screenshot:

Similarly on Windows 11:


If the application doesn’t start, consider starting it from the command line as explained below.

Windows: Launch from command line

Open a Terminal window. First validate that the java binary in your path is a JRE 11 binary by running this command:

java -version

The output should indicate you’re running a JRE 11 binary, as shown here:

If the version of the JRE is 11 then you can run the following command from the folder where the Desktop Companion .jar file was downloaded:

java.exe -jar .\RealLoadCompanionFXML-0.1.jar

OS X: Launch from finder

If you only have JRE 11 installed on your Mac, you should be able to launch the application simply by double clicking on its icon:

Alternatively right-click on the icon and select “Open With” Jar Launcher.

If this doesn’t work, refer to the section below on how to launch from the command line.

OS X: Launch from command line

Open a Terminal window. First validate that the java binary in your path is a JRE 11 binary by running this command:

java -version

The output should indicate you’re running a JRE 11 binary, as shown here:

If a different JRE version is returned (like for example a JRE 8 binary) then you’ll need to specify the full path to the java 11 command. For example, assuming you installed the Microsoft OpenJDK 11, you would typically find it at this location:

/Library/Java/JavaVirtualMachines/microsoft-11.jdk/Contents/Home/bin/java

To run the Desktop Companion, assuming the .jar file is present in the current directory, run the following command from a Terminal window:

/Library/Java/JavaVirtualMachines/microsoft-11.jdk/Contents/Home/bin/java -jar RealLoadCompanionFXML-0.1.jar
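As an alternative to hard-coding the vendor-specific path, macOS provides the `/usr/libexec/java_home` utility to locate an installed JDK by version. A minimal sketch, assuming a JDK 11 is installed and the .jar file is in the current directory:

```shell
# Locate a Java 11 installation via the macOS java_home utility,
# then use it to launch the Desktop Companion
JAVA11_HOME=$(/usr/libexec/java_home -v 11)
"$JAVA11_HOME/bin/java" -jar RealLoadCompanionFXML-0.1.jar
```

This keeps the command working even if you later switch to a different JDK 11 distribution.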

8.4 - Desktop Companion - Settings Menu

Configure the Desktop Companion for integration with AWS and the Real Load Portal.

Various application settings are configurable in the File -> Settings menu.

General settings

  • Default HAR input folder: The folder where the application looks for .har (HTTP Archive) files by default. This should be the location your browser writes these files to.

  • Default export folder: This is the location where Real Load HTTP test scripts will be exported to in JSON format.

  • Agent Secret: The Measuring Agent secret used to authenticate connections from the Real Load Portal to the Cloud Agent instance (AWS EC2 instances; Azure instances in the future).

User API settings

  • Portal URL: The endpoint of the Real Load portal User API. Unless you have an on-premise installation, use the default value https://portal.realload.com/RemoteUserAPI

  • Authentication Token: Enter here the authentication token that was generated by the Portal as part of the pre-requisites steps. Click on the “Test” button to test the API token.

  • Refresh Interval: How frequently the list of Measuring Agents registered with the Real Load Portal is refreshed. Any value of 60 seconds or lower disables background refresh; you’ll need to trigger a refresh manually via the context menu. The default is 61 seconds (background refresh enabled).

AWS settings

  • AWS Access Key: Paste the AWS Access Key that was obtained as part of the preparation steps.

  • Secret Access Key: Paste the AWS Secret Access Key that was obtained as part of the preparation steps. Use the “Test” button to validate the credentials.

  • Preferred Instance Type: The EC2 instance type to use when launching a new Measuring Agent instance.

  • AWS EC2 Refresh Interval: How frequently the list of AWS EC2 instances is refreshed. Any value of 60 seconds or lower disables background refresh; you’ll need to trigger a refresh manually via the context menu. The default is 61 seconds (background refresh enabled).

  • My AWS Regions: Select the AWS regions you commonly use. To select multiple regions hold the CTRL key while selecting.

Proxy Recorder settings

  • Proxy Port: The TCP port on the local machine that will listen for incoming proxy connections. You’ll need to configure this port as the HTTP proxy in your browser.

  • Export CA Certificate: Exports the CA certificate used by the Recording Proxy to sign SSL certificates. The exported CA certificate should be added as a trusted CA to the browser you’re planning to use to record HTTP requests.

The following settings should be left at their default values unless there is a specific reason to change them.

  • Proxy Backend Server Start Port Range: Starting TCP source port range for connections generated by the Proxy.
  • Proxy Backend Server End Port Range: Ending (high) TCP source port range for connections generated by the Proxy.
  • Debug HTTP Headers: Enables capturing HTTP headers values in Proxy debug log.

8.5 - Desktop Companion - File Menu

Import HTTP archives and export test scripts locally or to the Portal.

Import HAR file

To import an HTTP Archive generated by a browser, use the corresponding option in the file menu and select the relevant .har file. All requests present in the file will appear in the editor tab.

Export test script to JSON file

You can export your test script to a Real Load JSON session file by selecting the corresponding option in the file menu. The file will be saved in the folder you select.

You’ll then need to manually upload the file to a Resource Set in the Real Load Portal using the upload function.

Export test script to Portal

Test scripts can be directly uploaded to the Real Load Portal via the “Export session to portal…” menu. You’ll need to select the resource set to save the load test to:

You have 2 options:

  • Overwrite existing test script: If you select an existing HTTP test script without modifying the filename, you will effectively overwrite it.
  • Select a resource set: If instead you select a resource set (… “Web Test” in the above screenshot, for example), you’ll then have to enter a filename for your new test script.

8.6 - Desktop Companion - Editor

Simple request editor.

In the request editor tab you’ll be able to perform some basic editing of the requests imported from an HAR file or recorded by the proxy recorder.

For more complex editing, please use the HTTP wizard in the Real Load Portal after uploading your test script there.

Overview

The Editor tab is split into three main parts:

  • HTTP Requests list: This section shows all HTTP requests imported from an HAR file or recorded by the Recording Proxy. The context menu, accessible via a right click, allows you to perform some basic editing functions.
  • Request details: When selecting a request some additional details about the request will appear, like values extracted from HTTP headers, etc…
  • Unique domains: In this section all unique domain names will be listed. Clicking on a domain will select all relevant rows in the requests table.

Delete Requests

You can mark requests in the main editor window by clicking on them. Hold the CTRL key to mark multiple requests, or use the SHIFT key to mark a range of requests.

Once marked you can delete the requests by right-clicking on a marked request and selecting “Delete selected” from the context menu:

Adding time delays

To add a time delay (currently hardcoded to 1 second) before or after a request, select the relevant context menu item while hovering over a request.

Time delays rows appear as requests of type “T” in the main editor window:

Domain based selection

To bulk-select all requests belonging to specific domains, select one or more domains (holding the CTRL key for multiple). This selects the corresponding requests in the requests list, which can then be easily deleted.

It is also possible to directly delete requests belonging to one or multiple domains using the context menu.

8.7 - Desktop Companion - Measuring Agents

Launch and terminate AWS based measuring agents and manage their registration with the Portal.

The Measuring Agents tab allows you to manage cloud-based (… currently AWS) Measuring Agents. This section of the application is of most use if you have configured AWS credentials in the preferences section of the application.

If no AWS credentials are configured, you won’t be able to start and terminate AWS instances.

Listing Measuring Agents AMIs

In the left pane you’ll see a list of available AWS AMIs. You can further filter the list of AMIs by selecting a specific version and/or an AWS region.

Launching an instance

To launch a new EC2 instance, right-click on the relevant AMI and select “Launch”. A screen to confirm the launch will be displayed. Confirm by clicking on the “Launch” button.

When launching an instance in an AWS EC2 region that is not yet part of your “My Regions” list, the region will be automatically added to the list.

Launching a new instance will automatically trigger a refresh of the AWS Measuring Agents list. It might take a few seconds for the list to update and the new instance to be reflected in it.

Then confirm the launch action:

Listing running EC2 instances

After launching an instance go to the top right part of the window listing the running instances and right click on “Refresh”. This will retrieve all running EC2 instances from the preferred AWS regions and display them in the table.

Registering an instance

To register an AWS instance with the Real Load Portal right-click on the instance and then select the “Register with portal” menu item. The instance ID will be used as the description in the Real Load Portal.

List registered instances

In the left bottom part of the window you’ll see the Measuring Agents currently registered on the Portal. To update the list right click and select the “Refresh” option.

De-register an instance

In order to de-register an instance from the Portal right click on the instance name and select De-register.

Terminate an instance

To terminate an AWS EC2 instance, right click on the instance name and select the terminate option.

8.8 - Desktop Companion - Proxy Recorder

Run the proxy recorder locally on your desktop.

The Desktop Companion allows you to run the Proxy Recorder locally on your desktop. Recorded requests will then appear in the Editor tab. Refer to the Proxy Recorder documentation for further information on how it works.

There are two main sections in this chapter:

  • How to configure your browser.
  • How to start/stop the recording process.

Configure your browser

You’ll have to configure the port the Real Load Proxy listens on (default: 18080) in your browser. We recommend using a browser that allows you to configure proxy settings independently of the Operating System proxy settings. Firefox is a good candidate, and we’ve documented its configuration in this section.

Import Reverse Proxy CA certificate

For the browser to trust SSL certificates issued by the Recording Proxy, you’ll need to import the Proxy’s CA certificate as follows:

  • Export the certificate to a file: In the Desktop Companion’s Proxy Settings tab, click the “Export CA Certificate” button, then select a folder where the certificate is to be exported. It will be saved as “RecProxyCert.cer”.

  • Import certificate: Navigate to the certificate settings in Firefox:

Then click on the “Authorities” tab, click the “Import” button, and select the “RecProxyCert.cer” file that you’ve just exported from the Desktop Companion.

Make sure you trust the certificate to identify websites:
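Before importing, you can sanity-check the exported certificate from a terminal. This is a sketch assuming `openssl` is installed, the file is in the current directory, and it was exported PEM-encoded (add `-inform DER` if it is DER-encoded):

```shell
# Show the subject, issuer and validity period of the exported
# Recording Proxy CA certificate
openssl x509 -in RecProxyCert.cer -noout -subject -issuer -dates
```

The subject and issuer should match, since this is a self-signed CA certificate.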

Configure proxy settings

Open the Firefox “Settings” page and scroll to the bottom, where the “Network Settings” section is located. Open the connection settings window and configure the host and port (18080) for HTTP, as shown in the below screenshot. Also tick “Also use this proxy for HTTPS” so the same port is used for both.

Test browser configuration

Navigate to any SSL-enabled page. You shouldn’t see any warnings about untrusted SSL certificates being used.

If you check the certificate of the site you’re visiting, the issuer should be “Real Load Pty Ltd”, as shown in this screenshot.
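You can also exercise the proxy from a terminal with `curl` instead of a browser. This is a sketch assuming the recorder is running on its default port 18080 on the local machine, and that the exported CA certificate is PEM-encoded and in the current directory:

```shell
# Fetch a page through the Recording Proxy, trusting its CA certificate;
# with -v, the TLS handshake output includes the certificate issuer
curl --proxy http://127.0.0.1:18080 --cacert RecProxyCert.cer \
     -sv https://example.com -o /dev/null
```

If the request succeeds without a certificate error, the proxy and its CA certificate are set up correctly.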

Start the recording process

To start the recording process go to the Proxy Recorder tab and click on the “Start Recorder” button. Note that when the recording process is started, previous Proxy Recorder logs are purged.

Stop the recording process

To stop the recording process, click on the “Stop Recorder” button. Then navigate to the “Editor” window, where you should see all recorded requests, as in this example: