Real Load News

Regression testing using Test Templates and Suites

Execute performance regression testing using test Templates and Suites

An exciting set of new features was added in Real Load v4.8.24: Test Templates and Test Suites.

Test Templates allow you to:

  • Pre-define a load test’s execution parameters, such as VUs, duration, etc.
  • Associate each Template with a specific performance test script.

Test Suites allow you to:

  • Execute one or more test templates to build complex execution scenarios simulating a variety of activities.
  • Organize multiple Test Templates in test groups. Execution within a test group can be parallel or sequential.
  • Have multiple test groups within a Test Suite.
  • Configure sequential or parallel execution of test groups.

Once a Test Suite has been executed you can:

  • Compare results at the test suite level with multiple previous executions.
  • Compare results at the template level with multiple previous executions.

Test Suites allow you to implement regression testing by executing a specific set of performance tests with the very same execution parameters. You can even automate execution of Test Suites by triggering them via the APIs exposed by the product.

All of this is documented in this short video (9 minutes), which walks you through these new features.

As always, feedback or questions are welcome using our contact form.

Monitoring SSL endpoints

Leverage Real Load JUnit tests to monitor SSL endpoints

Last week I wrote an article to illustrate how the ability to execute JUnit tests opens a whole new world of synthetic monitoring possibilities.

This week I’ve implemented another use case: detecting SSL certificate related issues, which I’ve actually seen cause problems in production environments. Most of these were caused by expired certificates or CRLs, affecting websites, APIs or VPNs.

I’ve implemented a series of JUnit tests to verify these attributes of an SSL endpoint:

  • Check for weak SSL cipher suites
  • Check for weak SSL/TLS protocols (TLS v1.0 or v1.1, for example)
  • Check the SSL certificate expiration date. Alert if it expires less than 30 days in the future
  • Check whether the certificate appears in the CRL (i.e. has been revoked)
  • Check that the currently published CRL does not expire in the next 2 days

Below you’ll find the JUnit code implementing the above tests. Once the code is deployed to the Real Load platform, you can configure a Synthetic Monitoring task to execute it regularly.

The code is intended for demonstration purposes only. Actual production code could be enhanced to perform additional checks on the presented SSL certificates or on the other certificates in the chain. The code could also be extended to retrieve the list of SSL endpoints to validate from a document or an API of some sort, to simplify maintenance (one possible approach is sketched after the listing below).

Happy monitoring!

Required dependencies (Maven)

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>com.dkfqs.tools</groupId>
            <artifactId>tools</artifactId>
            <version>4.8.23</version>
        </dependency>
        <dependency>
            <groupId>org.bouncycastle</groupId>
            <artifactId>bcpkix-jdk15to18</artifactId>
            <version>1.68</version>
            <type>jar</type>
        </dependency>
    </dependencies>

This is the code implementing the SSL checks mentioned above. You’ll notice a few variables at the top to set hostname, port and some other configuration parameters…


import com.dkfqs.tools.crypto.EncryptedSocket;
import com.dkfqs.tools.javatest.AbstractJUnitTest;
import static com.dkfqs.tools.javatest.AbstractJUnitTest.isArgDebugExecution;
import static com.dkfqs.tools.logging.LogAdapterInterface.LOG_DEBUG;
import static com.dkfqs.tools.logging.LogAdapterInterface.LOG_ERROR;
import com.dkfqs.tools.logging.MemoryLogAdapter;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import javax.net.ssl.SSLSocket;
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.net.URL;
import java.net.URLConnection;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import java.security.cert.X509CRL;
import java.security.cert.X509CRLEntry;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import junit.framework.TestCase;
import org.bouncycastle.asn1.ASN1InputStream;
import org.bouncycastle.asn1.ASN1Primitive;
import org.bouncycastle.asn1.DERIA5String;
import org.bouncycastle.asn1.DEROctetString;
import org.bouncycastle.asn1.x509.CRLDistPoint;
import org.bouncycastle.asn1.x509.DistributionPoint;
import org.bouncycastle.asn1.x509.DistributionPointName;
import org.bouncycastle.asn1.x509.Extension;
import org.bouncycastle.asn1.x509.GeneralName;
import org.bouncycastle.asn1.x509.GeneralNames;
import org.bouncycastle.cert.jcajce.JcaX509CertificateHolder;
import org.junit.Assert;
import org.junit.Ignore;

public class TestSSLPort extends AbstractJUnitTest {

    private final static int TCP_CONNECT_TIMEOUT_MILLIS = 3000;
    private final static int SSL_HANDSHAKE_TIMEOUT_MILLIS = 2000;
    private String sslEndPoint = "www.realload.com"; // The SSL endpoint to validate
    private int sslEndpointPort = 443; // The port of the SSL endpoint
    private String DNofCertWithCRLdistPoint = "R3"; // The DN of the issuing CA - used for CRL checks
    private int certExpirationDaysAlertThreshold = -30; // How many days before cert expiration we should be alerted
    private int crlExpirationDaysAlertThreshold = -2; // How many days before CRL expiration we should be alerted
    private final MemoryLogAdapter log = new MemoryLogAdapter();  // default log level is LOG_INFO

    @Before
    public void setUp() throws Exception {
        if (isArgDebugExecution()) {
            log.setLogLevel(LOG_DEBUG);
        }
        //log.setLogLevel(LOG_DEBUG);

        openAllPurposeInterface();
        log.message(LOG_DEBUG, "Testing SSL endpoint " + sslEndPoint + ":" + sslEndpointPort);
    }

    @Test
    public void CheckForWeakCipherSuites() throws Exception {
        SSLSocket sslSocket = SSLConnect(sslEndPoint, sslEndpointPort);
        String[] cipherSuites = sslSocket.getEnabledCipherSuites();
        // Add all weak ciphers here....
        TestCase.assertNotNull(cipherSuites);
        TestCase.assertFalse(Arrays.asList(cipherSuites).contains("SSL_RSA_WITH_DES_CBC_SHA"));
        TestCase.assertFalse(Arrays.asList(cipherSuites).contains("SSL_DHE_DSS_WITH_DES_CBC_SHA"));
        sslSocket.close();
    }

    @Test
    public void CheckForWeakSSLProtocols() throws Exception {
        SSLSocket sslSocket = SSLConnect(sslEndPoint, sslEndpointPort);
        String[] sslProtocols = sslSocket.getEnabledProtocols();
        TestCase.assertNotNull(sslProtocols);
        TestCase.assertFalse(Arrays.asList(sslProtocols).contains("TLSv1")); // "TLSv1" is the JSSE name for TLS 1.0
        TestCase.assertFalse(Arrays.asList(sslProtocols).contains("TLSv1.1"));
        sslSocket.close();
    }

    @Test
    public void CheckCertExpiration30Days() throws Exception {
        SSLSocket sslSocket = SSLConnect(sslEndPoint, sslEndpointPort);
        TestCase.assertNotNull(sslSocket);
        Certificate[] peerCerts = sslSocket.getSession().getPeerCertificates();
        for (Certificate cert : peerCerts) {
            if (cert instanceof X509Certificate) {
                X509Certificate x = (X509Certificate) cert;
                if (x.getSubjectDN().toString().contains(sslEndPoint)) {
                    JcaX509CertificateHolder certAttrs = new JcaX509CertificateHolder(x);
                    Date expDate = certAttrs.getNotAfter();
                    log.message(LOG_DEBUG, "Cert expiration: " + expDate + " " + sslEndPoint);
                    Calendar c = Calendar.getInstance();
                    c.setTime(expDate);
                    c.add(Calendar.DATE, certExpirationDaysAlertThreshold);
                    if (new Date().after(c.getTime())) {
                        log.message(LOG_ERROR, "Cert expiration: " + expDate + ". Less than " + certExpirationDaysAlertThreshold + "days in future");
                        Assert.assertEquals(true, false);
                    }
                }
            }
        }
        sslSocket.close();
    }

    @Test
    public void CheckCRLRevocationStatus() throws Exception {
        SSLSocket sslSocket = SSLConnect(sslEndPoint, sslEndpointPort);
        TestCase.assertNotNull(sslSocket);
        Certificate[] peerCerts = sslSocket.getSession().getPeerCertificates();
        for (Certificate cert : peerCerts) {
            if (cert instanceof X509Certificate) {
                X509Certificate certToBeVerified = (X509Certificate) cert;
                if (certToBeVerified.getSubjectDN().toString().contains(DNofCertWithCRLdistPoint)) {
                    checkCRLRevocationStatus(certToBeVerified);
                }
            }
        }
        sslSocket.close();
    }

    @Test
    public void CheckCRLUpdateDueLess2Days() throws Exception {
        SSLSocket sslSocket = SSLConnect(sslEndPoint, sslEndpointPort);
        TestCase.assertNotNull(sslSocket);
        Certificate[] peerCerts = sslSocket.getSession().getPeerCertificates();
        for (Certificate cert : peerCerts) {
            if (cert instanceof X509Certificate) {
                X509Certificate certToBeVerified = (X509Certificate) cert;
                if (certToBeVerified.getSubjectDN().toString().contains(DNofCertWithCRLdistPoint)) {
                    checkCRLUpdateDueLessXDays(certToBeVerified);
                }
            }
        }
        sslSocket.close();
    }

    @After
    public void tearDown() throws Exception {
        closeAllPurposeInterface();
        log.writeToStdoutAndClear();
    }

    private static SSLSocket SSLConnect(String host, int port) throws Exception {
        EncryptedSocket encryptedSocket = new EncryptedSocket(host, port);
        encryptedSocket.setTCPConnectTimeoutMillis(TCP_CONNECT_TIMEOUT_MILLIS);
        encryptedSocket.setSSLHandshakeTimeoutMillis(SSL_HANDSHAKE_TIMEOUT_MILLIS);
        SSLSocket sslSocket = encryptedSocket.connect();
        return sslSocket;
    }

    private void checkCRLRevocationStatus(X509Certificate certificate) throws Exception {

        List<String> crlUrls = getCRLDistributionEndPoints(certificate);
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Loop through all CRL distribution endpoints
        for (String urlS : crlUrls) {
            log.message(LOG_DEBUG, "CRL endpoint: " + urlS);
            URL url = new URL(urlS);
            URLConnection connection = url.openConnection();
            X509CRL crl = null;
            try (DataInputStream inStream = new DataInputStream(connection.getInputStream())) {
                crl = (X509CRL) cf.generateCRL(inStream);
            }
            X509CRLEntry revokedCertificate = crl.getRevokedCertificate(certificate.getSerialNumber());

            if (revokedCertificate != null) {
                log.message(LOG_ERROR, "Certificate revoked, serial number: " + certificate.getSerialNumber());
                Assert.fail("Certificate is listed in the CRL (revoked)");
            } else {
                log.message(LOG_DEBUG, "Valid");
            }
        }
    }

    private void checkCRLUpdateDueLessXDays(X509Certificate certificate) throws Exception {
        List<String> crlUrls = getCRLDistributionEndPoints(certificate);
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Loop through all CRL distribution endpoints
        for (String urlS : crlUrls) {
            log.message(LOG_DEBUG, "CRL endpoint: " + urlS);
            URL url = new URL(urlS);
            URLConnection connection = url.openConnection();
            X509CRL crl = null;
            try (DataInputStream inStream = new DataInputStream(connection.getInputStream())) {
                crl = (X509CRL) cf.generateCRL(inStream);
            }
            Date nextUpdateDueBy = crl.getNextUpdate();
            log.message(LOG_DEBUG, "CRL Next Update: " + nextUpdateDueBy + " " + urlS);
            Calendar c = Calendar.getInstance();
            c.setTime(nextUpdateDueBy);
            c.add(Calendar.DATE, crlExpirationDaysAlertThreshold);
            if (new Date().after(c.getTime())) {
                log.message(LOG_ERROR, "CRL " + urlS + " expiration: " + nextUpdateDueBy + ". Less than " + crlExpirationDaysAlertThreshold + " days in future");
                Assert.assertEquals(true, false);
            }
        }
    }

    private List<String> getCRLDistributionEndPoints(X509Certificate certificate) throws Exception {
        byte[] crlDistributionPointDerEncodedArray = certificate.getExtensionValue(Extension.cRLDistributionPoints.getId());

        ASN1InputStream oAsnInStream = new ASN1InputStream(new ByteArrayInputStream(crlDistributionPointDerEncodedArray));
        ASN1Primitive derObjCrlDP = oAsnInStream.readObject();
        DEROctetString dosCrlDP = (DEROctetString) derObjCrlDP;
        oAsnInStream.close();

        byte[] crldpExtOctets = dosCrlDP.getOctets();
        ASN1InputStream oAsnInStream2 = new ASN1InputStream(new ByteArrayInputStream(crldpExtOctets));
        ASN1Primitive derObj2 = oAsnInStream2.readObject();
        CRLDistPoint distPoint = CRLDistPoint.getInstance(derObj2);
        oAsnInStream2.close();

        List<String> crlUrls = new ArrayList<String>();
        for (DistributionPoint dp : distPoint.getDistributionPoints()) {
            DistributionPointName dpn = dp.getDistributionPoint();
            // Look for URIs in fullName
            if (dpn != null) {
                if (dpn.getType() == DistributionPointName.FULL_NAME) {
                    GeneralName[] genNames = GeneralNames.getInstance(dpn.getName()).getNames();
                    // Look for an URI
                    for (int j = 0; j < genNames.length; j++) {
                        if (genNames[j].getTagNo() == GeneralName.uniformResourceIdentifier) {
                            String url = DERIA5String.getInstance(genNames[j].getName()).getString();
                            crlUrls.add(url);
                        }
                    }
                }
            }
        }
        return crlUrls;
    }

}
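
As an aside, the “retrieve the list of SSL endpoints” extension mentioned at the top could be sketched with JUnit 4’s Parameterized runner. The file name ssl-endpoints.txt and its one host:port entry per line format are assumptions made for this sketch, not part of the code above:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

@RunWith(Parameterized.class)
public class TestSSLEndpointsFromFile {

    // One "host:port" entry per line; the file name is a hypothetical example.
    @Parameterized.Parameters(name = "{0}:{1}")
    public static Collection<Object[]> endpoints() throws Exception {
        List<Object[]> result = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get("ssl-endpoints.txt"))) {
            String[] parts = line.trim().split(":");
            result.add(new Object[]{parts[0], Integer.parseInt(parts[1])});
        }
        return result;
    }

    private final String host;
    private final int port;

    public TestSSLEndpointsFromFile(String host, int port) {
        this.host = host;
        this.port = port;
    }

    @Test
    public void checkEndpoint() throws Exception {
        // Invoke the same checks as in TestSSLPort here, using host and port.
    }
}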

P.S.: Some of the code above was inspired by this Stack Overflow post.

New test execution APIs

Trigger Real Load tests programmatically

Version 4.8.23 of the Real Load platform exposes new API methods that allow triggering load test scripts programmatically, via REST API calls. This enhancement is quite important, as it allows you to automate performance test execution as part of build processes, etc.

Another use case is to regularly execute performance tests to maintain data volume in application databases. One of the applications I work with (Outseer’s Fraud Manager on Premise) has internal housekeeping processes that over time remove runtime data from its DB. While this is to be expected, it can be detrimental in environments used for performance testing, where you might want to simulate production-like data volume, or at least maintain the data set at a specific size.

In this case, a good way to maintain the application data volume at a given size is to simulate each day the same number of transactions that occur in the production environment, by generating a similar volume of API calls.

In this blog post I’ll illustrate, with a simple PowerShell example, how you can automate the execution of a Real Load performance test using the newly exposed API methods.

The new API methods

As you can see in the v4.8.23 release notes, Real Load now exposes these new API methods:

  • getTestjobTemplates
  • defineNewTestjobFromTemplate
  • submitTestjob
  • makeTestjobReadyToRun
  • startTestjob
  • getMeasuringAgentTestjobs
  • getTestjobOutDirectoryFilesInfo
  • getTestjobOutDirectoryFile
  • saveTestjobOutDirectoryFileToProjectTree
  • deleteTestjob

Prerequisites

Before getting started with the script, you’ll need to:

  1. Obtain the API authentication token from the Real Load portal:

  2. Configure a Load Test template and note its ID

In the Load Test Jobs menu, look for a recently executed load test job that you’d like to trigger via the new APIs. To create a template from it, select the item pointed out in the screenshot:

  3. Get the ID of the Measuring Agent on which you want to execute the test

… as shown in this screenshot.

Prepare your script

The next step is to prepare a script that invokes the Real Load API methods to trigger the test. I used the following PowerShell script, which I’ll execute regularly as an Azure RunBook.

We’ll use 4 API methods:

  • defineNewTestjobFromTemplate
  • submitTestjob
  • makeTestjobReadyToRun
  • startTestjob

You’ll need to change the value of the agentId and templateId variables as needed. If you’re planning to run this script from an on-premises scheduler, you can hardcode the authToken value instead of invoking Get-AutomationVariable.

$url='https://portal.realload.com/RemoteUserAPI'
$authToken=Get-AutomationVariable -Name 'RL_Portal_authToken'
$agentId=78
$templateId=787676

## Define test
$body = @{
  'authTokenValue'=$authToken
  'action'='defineNewTestjobFromTemplate'
  'templateId'=$templateId
  'measuringAgentOrClusterId'=$agentId
  'isCluster'=$false
  'jobDescription'= 'Daily seeding test'
}
$definedJobResp = Invoke-RestMethod -Method 'Post' -Uri $url -Body ($body|ConvertTo-Json) -ContentType "application/json"
Write-Host "JobId:" $definedJobResp.newTestjobId

if ($definedJobResp.isError -eq $true)
{
    Write-Error "defineNewTestjobFromTemplate failed"
    exit -1
}


## Submit job
$body = @{
  'authTokenValue'=$authToken
  'action'='submitTestjob'
  'localTestjobId'=$definedJobResp.newTestjobId
}
$submitJobResp = Invoke-RestMethod -Method 'Post' -Uri $url -Body ($body|ConvertTo-Json) -ContentType "application/json"

if ($submitJobResp.isError -eq $true)
{
    Write-Error "submitTestjob failed"
    exit -1
}

## Make job ready to run
$body = @{
  'authTokenValue'=$authToken
  'action'='makeTestjobReadyToRun'
  'localTestjobId'=$definedJobResp.newTestjobId
}
$makeJobReadyToRunResp = Invoke-RestMethod -Method 'Post' -Uri $url -Body ($body|ConvertTo-Json) -ContentType "application/json"

if ($makeJobReadyToRunResp.isError -eq $true)
{
    Write-Error "makeTestjobReadyToRun failed"
    exit -1
}
Write-Host "LocalTestJobId:" $makeJobReadyToRunResp.agentResponse.testjobProperties.localTestjobId
Write-Host "Max Test duration (s):" $makeJobReadyToRunResp.agentResponse.testjobProperties.testjobMaxTestDuration

## Start Job
$body = @{
  'authTokenValue'=$authToken
  'action'='startTestjob'
  'localTestjobId'=$definedJobResp.newTestjobId
}
$startTestjobResp = Invoke-RestMethod -Method 'Post' -Uri $url -Body ($body|ConvertTo-Json) -ContentType "application/json"

if ($startTestjobResp.isError -eq $true)
{
    Write-Error "startTestjob failed"
    exit -1
}
Write-Host "Job state:" $startTestjobResp.agentResponse.testjobProperties.testjobState

Configure RunBook variable

Configure an Azure RunBook variable with the same name as used in the PowerShell script, in this example “RL_Portal_authToken”. Make sure you set the type to String and select the “Encrypted” flag.

Create a new PowerShell RunBook

Now create a new RunBook of type PowerShell and copy and paste the above code into it. Use the “Test pane” to test execution; once it has been tested successfully, save and publish.

Create a schedule

Create a schedule that fits your needs. In this example, it’s an hourly schedule with an expiration date set (optional).

Associate the schedule to the RunBook

The last step is to associate the schedule to the RunBook:

Done. From now on, the selected performance test will be executed as per the configured schedule. You’ll be able to see the results of the executed load test in the Real Load portal.

Other use cases

The use case illustrated in this blog post is a trivial one. You can use similar code to integrate Real Load into the build pipelines of almost any CI/CD tool you’re using.
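
As a sketch of what such an integration step could look like in Java, here is the first API call of the sequence above, issued with the JDK’s built-in HTTP client. The auth token, template ID and agent ID below are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerRealLoadTest {

    public static void main(String[] args) throws Exception {
        // Placeholder values: substitute your own auth token, template id and agent id.
        String json = "{"
                + "\"authTokenValue\":\"YOUR_AUTH_TOKEN\","
                + "\"action\":\"defineNewTestjobFromTemplate\","
                + "\"templateId\":787676,"
                + "\"measuringAgentOrClusterId\":78,"
                + "\"isCluster\":false,"
                + "\"jobDescription\":\"Daily seeding test\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://portal.realload.com/RemoteUserAPI"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // The response carries newTestjobId, which the subsequent submitTestjob,
        // makeTestjobReadyToRun and startTestjob calls need (as in the PowerShell script).
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}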

A simple use case for JUnit testing

JUnit testing

A key feature added to the Real Load product recently is support for JUnit tests. In a nutshell, it is possible to execute JUnit code as part of Synthetic Monitoring or even performance test scripts.

A simple example… Some DNS testing (again)

A while ago I implemented a simple DNS record testing script using an HTTP Web Test Plugin. Well, if I had to implement the same test today, I’d do it with a JUnit test, as that looks more straightforward to me.

Prepare the JUnit code

First you’ll need to prepare your JUnit test code. The JUnit tests have to be compiled into a .jar archive, so I created a new Maven project in my preferred IDE (NetBeans).

The dependencies I’ve used for the DNS tests are:

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>dnsjava</groupId>
            <artifactId>dnsjava</artifactId>
            <version>3.5.2</version>
        </dependency>
    </dependencies>

Below is the code implementing the DNS lookups against 4 specific servers. As you can see, each JUnit test executes the lookup against a different server:

import java.net.UnknownHostException;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.xbill.DNS.*;

public class DNSNameChecker {

    public DNSNameChecker() {
    }

    @Test
    public void testGoogle8_8_8_8() {

        String ARecord = CNAMELookup("8.8.8.8", "kb.realload.com");
        assertEquals("www.realload.com.", ARecord);
    }

    @Test
    public void testGoogle8_8_4_4() {

        String ARecord = CNAMELookup("8.8.4.4", "kb.realload.com");
        assertEquals("www.realload.com.", ARecord);
    }

    @Test
    public void testOpenDNS208_67_222_222() {

        String ARecord = CNAMELookup("208.67.222.222", "kb.realload.com");
        assertEquals("www.realload.com.", ARecord);
    }

    @Test
    public void testSOA_ns10_dnsmadeeasy_com() {

        String ARecord = CNAMELookup("ns10.dnsmadeeasy.com", "kb.realload.com");
        assertEquals("www.realload.com.", ARecord);
    }

    private String CNAMELookup(String DNSserver, String CNAME) {
        try {
            Resolver dnsResolver = null;
            dnsResolver = new SimpleResolver(DNSserver);
            Lookup l = new Lookup(Name.fromString(CNAME), Type.CNAME, DClass.IN);
            l.setResolver(dnsResolver);
            l.run();

            if (l.getResult() == Lookup.SUCCESSFUL) {
                // We'll only get back one CNAME record, so we only return
                // the first record.
                return (l.getAnswers()[0].rdataToString());
            }

        } catch (UnknownHostException | TextParseException ex) {
            return null;
        }
        // Return null in all other cases, which means some sort of error
        // occurred while doing the lookup.
        return null;
    }

}

Test your code and then compile it into a .jar file.

Upload the .jar file to Real Load portal

The next step is to upload the .jar file containing the JUnit tests to the Real Load portal.

Configure the test

Now go to the Tests tab and define the new JUnit test:

Select the .jar file containing the tests. You’ll then be presented with a page listing all tests found in the .jar file. Also make sure you select any dependencies required by your JUnit code, in this example the dnsjava and the slf4j jar files.

To verify that the test works, you might want to execute it once as a Performance Test (1 VU, 1 cycle).

Configure Synthetic Monitoring job

Now that the test is defined, add a Monitoring Job to execute your JUnit test at regular intervals. Assuming you already have a Monitoring Group defined, you’ll be able to add a JUnit test by selecting the test you’ve just configured:

Review results

Done! Your JUnit tests will now execute as per configuration of your Monitoring Group, and you’ll be able to review historic data via the Real Load portal:

Test script involving OTPs? No problem

The Real Load plugin framework allows you to generate OTPs…

Say that you have an application that requires customers to authenticate using a Time based One Time Password (TOTP), generated by a mobile application. These OTPs are typically generated by implementing the algorithm described in RFC 6238.

Thanks to Real Load’s ability to implement plugins using the HTTP Test Wizard, it’s quite straightforward to generate OTPs in order to performance test your applications or to be used as part of synthetic monitoring.

So… from theory to practice. We had to implement such a plugin in order to performance test a third-party product that requires users to submit a valid TOTP if challenged. First, we selected a Java TOTP implementation capable of generating OTPs as per the above RFC. There are a few implementations out there; we decided to use Bastiaan Jansen’s implementation. Only a few lines of Java code are required to generate the OTPs, and this implementation relies on a single dependency, so it was the perfect candidate.
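
As background, the RFC 6238 algorithm itself is compact enough to sketch with plain JDK classes. The following is an illustration of the algorithm, not the library or plugin code we used; the demo key is the RFC’s SHA-1 test secret:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class TotpSketch {

    // Generates an RFC 6238 TOTP (HMAC-SHA1, 30 second interval, 6 digits).
    // 'key' is the raw shared secret; real-world secrets are usually Base32
    // encoded and must be decoded first (e.g. with commons-codec's Base32).
    public static String generate(byte[] key, long epochSeconds) throws Exception {
        long counter = epochSeconds / 30;                        // time step (RFC 6238)
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);
        int offset = hash[hash.length - 1] & 0x0f;               // dynamic truncation (RFC 4226)
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return String.format("%06d", binary % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        byte[] demoKey = "12345678901234567890".getBytes();      // RFC 6238 test secret
        System.out.println(generate(demoKey, System.currentTimeMillis() / 1000L));
    }
}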

Define input and output parameters

The first thing to do is to define which input and output values the plugin requires. Things are quite straightforward in this case, as most of the TOTP parameters are well known (OTP interval, number of digits and HMAC algorithm), so we’ve hardcoded them in the plugin’s logic. The only variable input is the secret (Base32 encoded) required to generate the TOTP, which is specific to each Virtual User.

The only output value is the generated One Time Password.

Using the Plugin Wizard, we configured the parameters as follows:

Implement the plugin logic

The next step is to implement the logic that generates the TOTP. In theory you can key in your Java code directly in the Plugin Wizard shown in the screenshot below, but I actually prepared the code in a separate IDE and then copied and pasted it back into the online editor. Please note that the Wizard produces all the scaffolding code; you just have to add the code shown between lines 108 and 115.

Test

You’re now ready to test the plugin in the Wizard by going to the Test and Save tab. Provide the TOTP secret as a Base32 encoded string in the input parameter field:

… then verify that the returned value is correct by comparing it to the value produced by one of the many online TOTP generators.

Add to your test script

The last step is to add the plugin to your test script and invoke it at the right spot, as shown here:

The plugin’s output value will be assigned to a variable, which in turn will be used in the next test step.

You can now use the script both for synthetic monitoring and for performance testing on the Real Load platform, even for scenarios where a user has to provide a valid OTP.

Application High Latency Is an Outage!

Application High Latency Is an Outage.

If an application experiences latency so high that it effectively becomes unavailable, the consequences for a business and its reputation can be severe.

In such cases, it is crucial to address the issue promptly and effectively.

Below are some steps that can be taken in order to avoid application high latency issues.

  1. Load Testing and Capacity Planning: Conduct regular load testing to ensure that the application can handle anticipated user loads without significant latency issues. Use the insights gained from load testing to plan for future capacity needs and scale the infrastructure accordingly.

  2. Monitoring and Alerting: Enhance your monitoring and alerting systems to detect potential latency issues early and notify about them. Implement proactive monitoring for performance metrics, response times, and key indicators of the application’s health. Set up alerts to notify the appropriate teams when thresholds are breached.

To take the steps mentioned above, you need a cost-effective tool that provides a flawless Load Testing and Synthetic (Proactive) Monitoring experience.

Welcome to Real Load: The next generation Load Testing & Synthetic Monitoring tool!

Real Load offers a Synthetic Monitoring and Load Testing solution that is flexible enough to cater for the testing of a variety of applications, such as:

  1. Web applications (Both legacy and modern single page applications)
  2. Mobile Applications
  3. APIs
  4. Any other network protocol, provided a Java client implementation exists
  5. Custom scripts written in languages such as PowerShell
  6. Databases
  7. Message Queues and more…

All Purpose Interface!

The All Purpose Interface is what makes Real Load unique.

Want to create your scripts in whatever programming language you’re comfortable with? Check out Real Load’s All Purpose Interface.

What sets Real Load apart from other tools is its All Purpose Interface, which enables customers to define scripts in any programming language. The only requirement is that the script or program complies with this interface in order to be executed by the Real Load Platform.

How it works

This document explains:

  1. How to develop a test program from scratch.
  2. How to add self-programmed measurements to the HTTP Test Wizard (as plug-ins).

The product’s open architecture enables you to develop plug-ins, scripts and programs that measure anything that has a numeric value - no matter which protocol is used!

The measured data are evaluated in real time and displayed as diagrams and lists. In addition to successfully measured values, errors such as timeouts or invalid response data can also be collected and displayed in real time.

At least in theory, programs and scripts in any programming language can be executed, as long as the program or script supports the All Purpose Interface.

In practice there are currently two options for integrating your own measurements into the Real Load Platform:

  1. Write an HTTP Test Wizard Plug-In in Java that performs the measurement. This has the advantage that you only have to implement a subset of the “All Purpose Interface” yourself:

    • Declare Statistic
    • Register Sample Start
    • Add Sample Long
    • Add Sample Error
    • [Optional: Add Counter Long, Add Average Delta And Current Value, Add Efficiency Ratio Delta, Add Throughput Delta, Add Test Result Annotation Exec Event]

    Such plug-ins can be developed quite quickly, as all other functions of the “All Purpose Interface” are already implemented by the HTTP Test Wizard.

    Tip: An HTTP Test Wizard session can also consist solely of plug-ins, i.e. you can “misuse” the HTTP Test Wizard to carry out only measurements that you have programmed yourself: Plug-In Example

  2. Write a test program or script from scratch. This can currently be done in Java or PowerShell (support for additional programming languages will be added in the future). This is more time-consuming, but has the advantage that you have more freedom in program development. In this case you have to implement all functions of the “All Purpose Interface”.

Interface Specification

Basic Requirements for all Programs and Scripts

The All Purpose Interface must be implemented by all programs and scripts which are executed on the Real Load Platform. The interface is independent of any programming language and has only three requirements:

  1. The executed program or script must be able to be started from a command line, and passing program or script arguments must be supported.
  2. The executed program or script must be able to read and write files.
  3. The executed program or script must be able to measure one or more numerical values.

All of this may seem a bit trivial, but it has been chosen deliberately so that the interface can support almost all programming languages.

Generic Program and Script Arguments

Each executed program or script must support at least the following arguments:

  • Number of Users: The total number of simulated users (integer value > 0).
  • Test Duration: The maximum test duration in seconds (integer value > 0).
  • Ramp Up Time: The ramp up time in seconds until all simulated users are started (integer value >= 0). Example: If 10 users are started within 5 seconds then the first user is started immediately and then the remaining 9 users are started in (5 seconds / 9 users) = 0.55 seconds intervals.
  • Max Session Loops: The maximum number of session loops per simulated user (integer value > 0, or -1 means infinite number of session loops).
  • Delay Per Session Loop: The delay in milliseconds before a simulated user starts a next session loop iteration (integer value >= 0) – but not applied for the first session loop iteration.
  • Data Output Directory: The directory to which the measured data have to be written. In addition, other data can also be written to this directory, for example debug information.

Implementation Note: The test ends when either the Test Duration has elapsed or Max Session Loops has been reached for all simulated users. Currently executing sessions are not aborted.

In addition, the following arguments are optional, but also standardized:

  • Description: A brief description of the test
  • Debug Execution: Write debug information about the test execution to stdout
  • Debug Measuring: Write debug information about the declared statistics and the measured values to stdout
Argument                 Java                            PowerShell
Number of Users          -users <number>                 -totalUsers <number>
Test Duration            -duration <seconds>             -inputTestDuration <seconds>
Ramp Up Time             -rampupTime <seconds>           -rampUpTime <seconds>
Max Session Loops        -maxLoops <number>              -inputMaxLoops <number>
Delay Per Session Loop   -delayPerLoop <milliseconds>    -inputDelayPerLoopMillis <milliseconds>
Data Output Directory    -dataOutputDir <path>           -dataOutDirectory <path>
Description              -description <text>             -description <text>
Debug Execution          -debugExec                      -debugExecution
Debug Measuring          -debugData                      -debugMeasuring
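
For illustration, here is a minimal sketch of how a Java test program might parse these generic arguments. The argument names come from the table above; the parsing code itself is just one possible approach:

// Minimal sketch: parsing the generic Java command line arguments.
public class GenericArgs {
    int users = 1;
    long durationSeconds = 60;
    long rampUpSeconds = 0;
    int maxLoops = -1;              // -1 = infinite number of session loops
    long delayPerLoopMillis = 0;
    String dataOutputDir = ".";
    boolean debugExec = false;
    boolean debugData = false;

    static GenericArgs parse(String[] args) {
        GenericArgs a = new GenericArgs();
        for (int i = 0; i < args.length; i++) {
            switch (args[i]) {
                case "-users":         a.users = Integer.parseInt(args[++i]); break;
                case "-duration":      a.durationSeconds = Long.parseLong(args[++i]); break;
                case "-rampupTime":    a.rampUpSeconds = Long.parseLong(args[++i]); break;
                case "-maxLoops":      a.maxLoops = Integer.parseInt(args[++i]); break;
                case "-delayPerLoop":  a.delayPerLoopMillis = Long.parseLong(args[++i]); break;
                case "-dataOutputDir": a.dataOutputDir = args[++i]; break;
                case "-description":   i++; break;               // brief test description
                case "-debugExec":     a.debugExec = true; break;
                case "-debugData":     a.debugData = true; break;
            }
        }
        return a;
    }

    public static void main(String[] args) {
        GenericArgs a = parse(args);
        System.out.println(a.users + " users, " + a.durationSeconds + " s test duration");
    }
}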

Single-Threaded Scripts vs. Multiple-Threaded Programs

For scripts which don’t support multiple threads, the Real Load Platform starts a separate operating system process for each simulated user. On the other hand, for programs which support multiple threads, only one operating system process is started for all simulated users.

Scripts which are not able to run multiple threads must support the following additional generic command line argument:

  • Executed User Number: The currently executed user (integer value > 0). Example: If 10 scripts are started then 1 is passed to the first started script, 2 is passed to the second started script, .. et cetera.
Argument                 PowerShell
Executed User Number     -inputUserNo <number>

Specific Program and Script Arguments

Additional program- and script-specific arguments are supported by the Real Load Platform. However, their values are not validated by the platform.

Job Control Files

During the execution of a test, the Real Load Platform can create and delete additional control files at runtime in the Data Output Directory of a test job. The running script or program must check frequently for the existence (or absence) of these control files, but not too often, to avoid CPU and I/O overload. Rule of thumb: multi-threaded programs should check for these files every 5 to 10 seconds; single-threaded scripts should check them before executing a new session loop iteration (a minimal sketch follows the list below).

The following control files are created or removed in the Data Output Directory by the Real Load Platform:

  • DKFQS_Action_AbortTest.txt : If the existence of this file is detected, test execution must be aborted gracefully as soon as possible. Currently executing session loops are not aborted.
  • DKFQS_Action_SuspendTest.txt : If the existence of this file is detected, the further execution of session loops is suspended until the file is removed by the Real Load Platform. Currently executing session loops are not interrupted on suspend. When the test is resumed, the Ramp Up Time passed as a generic argument to the script or program must be re-applied. If a suspended test runs out of Test Duration, the test must end.
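
A minimal Java sketch of such a check (the file names are as specified above; the polling loop structure is just one way to organize it):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Minimal sketch of the control file handling described above.
// dataOutputDir is the generic 'Data Output Directory' argument.
public class ControlFileCheck {

    static boolean mustAbort(Path dataOutputDir) {
        return Files.exists(dataOutputDir.resolve("DKFQS_Action_AbortTest.txt"));
    }

    static boolean mustSuspend(Path dataOutputDir) {
        return Files.exists(dataOutputDir.resolve("DKFQS_Action_SuspendTest.txt"));
    }

    public static void main(String[] args) throws InterruptedException {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        while (!mustAbort(dir)) {
            while (mustSuspend(dir)) {   // suspended: wait until the file is removed,
                Thread.sleep(5_000);     // then re-apply the ramp up time (see above)
            }
            // ... execute the next session loop iteration here ...
            Thread.sleep(5_000);         // multi-threaded programs: check every 5-10 s
        }
        // Abort detected: finish the current session loop gracefully, then exit.
    }
}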

Testjob Data Files

When a test job is started by the Real Load Platform on a Measuring Agent, the platform first creates an empty data file for each simulated user in the Data Output Directory of the test job:

Data File: user_<Executed User Number>_statistics.out

Example: user_1_statistics.out, user_2_statistics.out, user_3_statistics.out, .. et cetera.

After that, the test script(s) or the test program is started as an operating system process. The test script or test program has to write the current state of the simulated user and the measured data to the corresponding Data File of the simulated user in JSON object format (append data to the file only – don’t create new files).

The Real Load Platform’s Measuring Agent component and the corresponding Data Collector listen to these data files and interpret the measured data in real time, line by line, as JSON objects.

Writing JSON Objects to the Data Files

The following JSON Objects can be written to the Data Files:

JSON Object                              Description
Declare Statistic                        Declare a new statistic
Register Execute Start                   Registers the start of a user
Register Execute Suspend                 Registers that the execution of a user is suspended
Register Execute Resume                  Registers that the execution of a user is resumed
Register Execute End                     Registers that a user has ended
Register Loop Start                      Registers that a user has started a session loop iteration
Register Loop Passed                     Registers that a session loop iteration of a user has passed
Register Loop Failed                     Registers that a session loop iteration of a user has failed
Register Sample Start                    Statistic-type sample-event-time-chart: Registers the start of measuring a sample
Add Sample Long                          Statistic-type sample-event-time-chart: Registers that a sample has been measured and reports the value
Add Sample Error                         Statistic-type sample-event-time-chart: Registers that the measuring of a sample has failed
Add Counter Long                         Statistic-type cumulative-counter-long: Add a positive delta value to the counter
Add Average Delta And Current Value      Statistic-type average-and-current-value: Add delta values to the average and set the current value
Add Efficiency Ratio Delta               Statistic-type efficiency-ratio-percent: Add efficiency ratio delta values
Add Throughput Delta                     Statistic-type throughput-time-chart: Add a delta value to a throughput
Add Test Result Annotation Exec Event    Add an annotation event to the test result

Note that each JSON object must be written as a single line which ends with a \r\n line terminator.
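
For illustration, a minimal Java helper that appends such JSON lines to a user’s data file. The helper class is an example sketch, not part of the platform:

import java.io.FileWriter;
import java.io.IOException;

// Minimal sketch: append All Purpose Interface JSON objects, one per line,
// to the data file of a simulated user (e.g. user_1_statistics.out).
public class DataFileWriter {
    private final String dataFilePath;

    public DataFileWriter(String dataOutputDir, int userNumber) {
        this.dataFilePath = dataOutputDir + "/user_" + userNumber + "_statistics.out";
    }

    // Append only - never recreate the file; terminate each line with \r\n.
    public synchronized void writeJsonLine(String json) throws IOException {
        try (FileWriter w = new FileWriter(dataFilePath, true)) {
            w.write(json);
            w.write("\r\n");
        }
    }

    public static void main(String[] args) throws IOException {
        DataFileWriter out = new DataFileWriter(".", 1);
        out.writeJsonLine("{\"subject\":\"register-execute-start\",\"timestamp\":"
                + System.currentTimeMillis() + "}");
    }
}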

Program Sequence

JSON Object Specification

Declare Statistic Object

Before the measurement of data begins, the corresponding statistics must be declared at runtime. Each declared statistic must have a unique ID. Multiple declarations with the same ID are discarded by the platform.

Currently 5 types of statistics are supported:

  • sample-event-time-chart : This is the most common statistic type and contains continuously measured response times or any other continuously measured values of any unit. Information about failed measurements can also be added to the statistic. Statistics of this type are added to the ‘Overview Statistic’ area and can also be displayed as a chart.
  • cumulative-counter-long : This is a single counter whose value is continuously increased during the test. Statistics of this type are added to the ‘Test-Specific Values’ area.
  • average-and-current-value : This is a separately measured mean value and the last measured current value. Statistics of this type are added to the ‘Test-Specific Values’ area.
  • efficiency-ratio-percent : This is a measured efficiency in percent (0..100%). Statistics of this type are added to the ‘Test-Specific Values’ area.
  • throughput-time-chart : This is a measured throughput per second. Statistics of this type are added to the ‘Test-Specific Values’ area.

It’s also possible to declare new statistics at any time during test execution, but a statistic must always be declared before measured data are added to it.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "DeclareStatistic",
  "type": "object",
  "required": ["subject", "statistic-id", "statistic-type", "statistic-title"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'declare-statistic'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "Unique statistic id"
    },
    "statistic-type": {
      "type": "string",
      "description": "'sample-event-time-chart' or 'cumulative-counter-long' or 'average-and-current-value' or 'efficiency-ratio-percent' or 'throughput-time-chart'"
    },
    "statistic-title": {
      "type": "string",
      "description": "Statistic title"
    },
    "statistic-subtitle": {
      "type": "string",
      "description": "Statistic subtitle | only supported by 'sample-event-time-chart'"
    },
    "y-axis-title": {
      "type": "string",
      "description": "Y-Axis title | only supported by 'sample-event-time-chart'. Example: 'Response Time'"
    },
    "unit-text": {
      "type": "string",
      "description": "Text of measured unit. Example: 'ms'"
    },
    "sort-position": {
      "type": "integer",
      "description": "The UI sort position"
    },
    "add-to-summary-statistic": {
      "type": "boolean",
      "description": "If true = add the number of measured and failed samples to the summary statistic | only supported by 'sample-event-time-chart'. Note: Synthetic measured data like Measurement Groups or Delay Times should not be added to the summary statistic"
    },
    "background-color": {
      "type": "string",
      "description": "The background color either as #hex-triplet or as bootstrap css class name, or an empty string = no special background color. Examples: '#cad9fa', 'table-info'"
    }
  }
}

Example: 
{
  "subject":"declare-statistic",
  "statistic-id":1,
  "statistictype":"sample-event-time-chart",
  "statistic-title":"GET http://192.168.0.111/",
  "statistic-subtitle":"",
  "y-axis-title":"Response Time",
  "unit-text":"ms",
  "sort-position":1,
  "add-to-summarystatistic":true,
  "background-color":""
}

After the statistics are declared, the activities of the simulated users can be started. Each simulated user must report the following changes of the current user-state:

  • register-execute-start : Register that the simulated user has started the test.
  • register-execute-suspend : Register that the simulated user has suspended the execution of the test.
  • register-execute-resume : Register that the simulated user has resumed the execution of the test.
  • register-execute-end : Register that the simulated user has ended the test.

Register Execute Start Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteStart",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-start'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-start","timestamp":1596219816129}

Register Execute Suspend Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteSuspend",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-suspend'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-suspend","timestamp":1596219816129}

Register Execute Resume Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteResume",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-resume'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-resume","timestamp":1596219816129}

Register Execute End Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterExecuteEnd",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-execute-end'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-execute-end","timestamp":1596219816129}

Once a simulated user has started its activity, it measures the data in so-called ‘session loops’. Each simulated user must report when a session loop iteration starts and ends:

  • register-loop-start : Register the start of a session loop iteration.
  • register-loop-passed : Register that a session loop iteration has passed / at end of the session loop iteration.
  • register-loop-failed : Register that a session loop iteration has failed / if the session loop iteration is aborted.
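
Putting the pieces together, here is an illustrative sketch (not platform code) of how a simulated user could report one session loop iteration together with one measured sample; the sample-related objects are specified further below:

import java.io.FileWriter;
import java.io.IOException;

// Illustrative skeleton: how one simulated user could report a session loop
// and a single measured sample. Statistic id 1 must have been declared first.
public class SessionLoopSketch {
    private final String dataFile;

    SessionLoopSketch(String dataFile) { this.dataFile = dataFile; }

    void writeJsonLine(String json) throws IOException {
        try (FileWriter w = new FileWriter(dataFile, true)) {   // append only
            w.write(json + "\r\n");
        }
    }

    void runOneLoop() throws IOException {
        long loopStart = System.currentTimeMillis();
        writeJsonLine("{\"subject\":\"register-loop-start\",\"timestamp\":" + loopStart + "}");
        try {
            long sampleStart = System.currentTimeMillis();
            writeJsonLine("{\"subject\":\"register-sample-start\",\"statistic-id\":1,\"timestamp\":" + sampleStart + "}");
            // ... perform the measured operation here (e.g. an HTTP request) ...
            long value = System.currentTimeMillis() - sampleStart;
            writeJsonLine("{\"subject\":\"add-sample-long\",\"statistic-id\":1,\"value\":" + value
                    + ",\"timestamp\":" + System.currentTimeMillis() + "}");
            writeJsonLine("{\"subject\":\"register-loop-passed\",\"loop-time\":"
                    + (System.currentTimeMillis() - loopStart)
                    + ",\"timestamp\":" + System.currentTimeMillis() + "}");
        } catch (RuntimeException ex) {
            writeJsonLine("{\"subject\":\"register-loop-failed\",\"timestamp\":" + System.currentTimeMillis() + "}");
        }
    }
}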

Register Loop Start Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopStart",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-start'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-start","timestamp":1596219816129}

Register Loop Passed Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopPassed",
  "type": "object",
  "required": ["subject", "loop-time", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-passed'"
    },
    "loop-time": {
      "type": "integer",
      "description": "The time it takes to execute the loop in milliseconds"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-passed","loop-time":1451, "timestamp":1596219816129}

Register Loop Failed Object

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterLoopFailed",
  "type": "object",
  "required": ["subject", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-loop-failed'"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-loop-failed","timestamp":1596219816129}

Within a session loop iteration, the samples of the declared statistics are measured. For sample-event-time-chart statistics the simulated user must report when the measuring of a sample starts and ends:

  • register-sample-start : Register that the measuring of a sample has started.
  • add-sample-long : Add a measured value to a declared statistic.
  • add-sample-error : Add an error to a declared statistic.

Register Sample Start Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "RegisterSampleStart",
  "type": "object",
  "required": ["subject", "statistic-id", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'register-sample-start'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"register-sample-start","statisticid":2,"timestamp":1596219816165}

Add Sample Long Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddSampleLong",
  "type": "object",
  "required": ["subject", "statistic-id", "value", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-sample-long'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "value": {
      "type": "integer",
      "description": "The measured value"
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{"subject":"add-sample-long","statisticid":2,"value":105,"timestamp":1596219842468}

Add Sample Error Object (sample-event-time-chart only)

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddSampleError",
  "type": "object",
  "required": ["subject", "statistic-id", "error-subject", "error-severity",
  "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-sample-error'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "error-subject": {
      "type": "string",
      "description": "The subject or title of the error"
    },
    "error-severity": {
      "type": "string",
      "description": "'warning' or 'error' or 'fatal'"
    },
    "error-type": {
      "type": "string",
      "description": "The type of the error. Errors which contains the same error
    type can be grouped."
    },
    "error-log": {
      "type": "string",
      "description": "The error log. Multiple lines are supported by adding \r\n line terminators."
    },
    "error-context": {
      "type": "string",
      "description": " Context information about the condition under which the error occurred. Multiple lines are supported by adding \r\n line terminators."
    },
    "timestamp": {
      "type": "integer",
      "description": "Unix-like time stamp"
    }
  }
}

Example: 
{
  "subject":"add-sample-error",
  "statistic-id":2,
  "error-subject":"Connection refused (Connection refused)",
  "error-severity":"error",
  "error-type":"java.net.ConnectException",
  "error-log":"2020-08-01 21:24:51.662 | main-HTTPClientProcessing[3] | INFO | GET http://192.168.0.111/\r\n2020-08-01 21:24:51.670 | main-HTTPClientProcessing[3] | ERROR | Failed to open or reuse connection to 192.168.0.111:80 |
 java.net.ConnectException: Connection refused (Connection refused)\r\n",
  "error-context":"HTTP Request Header\r\nhttp://192.168.0.111/\r\nGET / HTTP/1.1\r\nHost: 192.168.0.111\r\nConnection: keep-alive\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate\r\n",
  "timestamp":1596309891672
}

Notes about the error-severity:

  • warning : After the error has occurred, the simulated user continues with the execution of the current session loop. Error color = yellow.
  • error : After the error has occurred, the simulated user aborts the execution of the current session loop iteration and starts the execution of the next session loop iteration. Error color = red.
  • fatal : After the error has occurred, the simulated user aborts any further execution of the test, which means that the test has ended for this simulated user. Error color = black.

Implementation note: After an error has occurred, the simulated user should wait at least 100 milliseconds before continuing its activities. This prevents several thousand errors from being measured and reported to the UI within a few seconds.

Add Counter Long Object (cumulative-counter-long only)

For cumulative-counter-long statistics there is no 2-step mechanism as for ‘sample-event-time-chart’ statistics. The value can simply be increased by reporting an Add Counter Long object.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddCounterLong",
  "type": "object",
  "required": ["subject", "statistic-id", "value"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-counter-long'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "value": {
      "type": "integer",
      "description": "The value to increment"
    }
  }
}

Example: 
{"subject":"add-counter-long","statistic-id":10,"value":2111}

Add Average Delta And Current Value Object (average-and-current-value only)

To update an average-and-current-value statistic, the delta (difference) of the cumulated sum and the delta (difference) of the cumulated number of values have to be reported. The platform then calculates the average value by dividing the cumulated sum by the cumulated number of values. In addition, the last measured value must also be reported.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddAverageDeltaAndCurrentValue",
  "type": "object",
  "required": ["subject", "statistic-id", "sumValuesDelta", "numValuesDelta", "currentValue", "currentValueTimestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-average-delta-and-current-value'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "sumValuesDelta": {
      "type": "integer",
      "description": "The sum of delta values to add to the average"
    },
    "numValuesDelta": {
      "type": "integer",
      "description": "The number of delta values to add to the average"
    },
    "currentValue": {
      "type": "integer",
      "description": "The current value, or -1 if no such data is available"
    },
    "currentValueTimestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the current value, or -1 if no such data is available"
    }
  }
}

Example: 
{
  "subject":"add-average-delta-and-current-value",
  "statistic-id":100005,
  "sumValuesDelta":6302,
  "numValuesDelta":22,
  "currentValue":272,
  "currentValueTimestamp":1634401774374
}

Add Efficiency Ratio Delta Object (efficiency-ratio-percent only)

To update an efficiency-ratio-percent statistic, the delta (difference) of the number of efficiently performed procedures and the delta (difference) of the number of inefficiently performed procedures have to be reported.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddEfficiencyRatioDelta",
  "type": "object",
  "required": ["subject", "statistic-id", "efficiencyDeltaValue", "inefficiencyDeltaValue"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-efficiency-ratio-delta'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "efficiencyDeltaValue": {
      "type": "integer",
      "description": "The number of efficient performed procedures to add"
    },
    "inefficiencyDeltaValue": {
      "type": "integer",
      "description": "The number of inefficient performed procedures to add"
    }
  }
}

Example: 
{
  "subject":"add-efficiency-ratio-delta",
  "statistic-id":100006,
  "efficiencyDeltaValue":6,
  "inefficiencyDeltaValue":22
}

Add Throughput Delta Object (throughput-time-chart only)

To update a throughput-time-chart statistic, the delta (difference) between the last reported cumulated value and the current cumulated value has to be reported; the current timestamp is included in the calculation.

Although this type of statistic always has the unit throughput per second, a measured delta (difference) value can be reported at any time. For example, if the cumulated number of processed requests grew from 100 to 153 since the last report, a delta-value of 53 is reported together with the current timestamp.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddThroughputDelta",
  "type": "object",
  "required": ["subject", "statistic-id", "delta-value", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-throughput-delta'"
    },
    "statistic-id": {
      "type": "integer",
      "description": "The unique statistic id"
    },
    "delta-value": {
      "type": "number",
      "description": "the delta (difference) value"
    },
    "timestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the delta (difference) value"
    }
  }
}

Example: 
{
  "subject":"add-throughput-delta",
  "statistic-id":100003,
  "delta-value":0.53612,
  "timestamp":1634401774410
}

Add Test Result Annotation Exec Event Object

Add an annotation event to the test result.

{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "title": "AddTestResultAnnotationExecEvent",
  "type": "object",
  "required": ["subject", "event-id", "event-text", "timestamp"],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Always 'add-test-result-annotation-exec-event'"
    },
    "event-id": {
      "type": "integer",
      "description": "The event id, valid range: -1 .. -999999"
    },
    "event-text": {
      "type": "string",
      "description": "the event text"
    },
    "timestamp": {
      "type": "integer",
      "description": "The Unix-like timestamp of the event"
    }
  }
}

Example: 
{
  "subject":"add-test-result-annotation-exec-event",
  "event-id":-1,
  "event-text":"Too many errors: Test job stopped by plug-in",
  "timestamp":1634401774410
}

Notes:

  • The event id must be in the range from -1 (minus one) to -999999.
  • Events with the same event id are merged to one event.

[End of Interface Specification]

Example

HTTP Test Wizard Plug-In

This plug-in “measures” a random value, and is executed in this example as the only part of an HTTP Test Wizard session.

The All Purpose Interface JSON objects are written using the corresponding methods of the com.dkfqs.tools.javatest.AbstractJavaTest class. This class is located in the JAR file com.dkfqs.tools.jar which is already predefined for all plug-ins.

import com.dkfqs.tools.javatest.AbstractJavaTest;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginContext;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginInterface;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginSessionFailedException;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginTestFailedException;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginUserFailedException;
import com.dkfqs.tools.logging.LogAdapterInterface;
import java.util.ArrayList;
import java.util.List;
// add your imports here

/**
 * HTTP Test Wizard Plug-In 'All Purpose Interface Example'.
 * Plug-in Type: Normal Session Element Plug-In.
 * Created by 'DKF' at 24 Sep 2021 22:50:04
 * DKFQS 4.3.22
 */
@AbstractJavaTestPluginInterface.PluginResourceFiles(fileNames={"com.dkfqs.tools.jar"})
public class AllPurposeInterfaceExample implements AbstractJavaTestPluginInterface {
	private LogAdapterInterface log = null;
	
	private static final int STATISTIC_ID = 1000;
	private AbstractJavaTest javaTest = null;       // reference to the generated test program
	
	/**
	 * Called by environment when the instance is created.
	 * @param log the log adapter
	 */
	@Override
	public void setLog(LogAdapterInterface log) {
		this.log = log;
	}
	
	/**
	 * On plug-in initialize. Called when the plug-in is initialized. <br>
	 * Depending on the initialization scope of the plug-in the following specific exceptions can be thrown:<ul>
	 * 	<li>Initialization scope <b>global:</b> AbstractJavaTestPluginTestFailedException</li>
	 * 	<li>Initialization scope <b>user:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException</li>
	 * 	<li>Initialization scope <b>session:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginSessionFailedException</li>
	 * </ul>
	 * @param javaTest the reference to the executed test program, or null if no such information is available (in debugger environment)
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws AbstractJavaTestPluginSessionFailedException if the plug-in signals that the 'user session' has to be aborted (abort current session - continue next session)
	 * @throws AbstractJavaTestPluginUserFailedException if the plug-in signals that the user has to be terminated
	 * @throws AbstractJavaTestPluginTestFailedException if the plug-in signals that the test has to be terminated
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onInitialize(AbstractJavaTest javaTest, AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws AbstractJavaTestPluginSessionFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginTestFailedException, Exception {
		// log.message(log.LOG_INFO, "onInitialize(...)");
		
		// --- vvv --- start of specific onInitialize code --- vvv ---
		if (javaTest != null) {
		    this.javaTest = javaTest;
		    
		    // declare the statistic
		    javaTest.declareStatistic(STATISTIC_ID, 
            		                  AbstractJavaTest.STATISTIC_TYPE_SAMPLE_EVENT_TIME_CHART,
            		                  "My Measurement",
            		                  "",
            		                  "My Response Time",
            		                  "ms",
            		                  STATISTIC_ID,
            		                  true,
            		                  "");
		}
		// --- ^^^ --- end of specific onInitialize code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

	/**
	 * On plug-in execute. Called when the plug-in is executed. <br>
	 * Depending on the execution scope of the plug-in the following specific exceptions can be thrown:<ul>
	 * 	<li>Execution scope <b>global:</b> AbstractJavaTestPluginTestFailedException</li>
	 * 	<li>Execution scope <b>user:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException</li>
	 * 	<li>Execution scope <b>session:</b> AbstractJavaTestPluginTestFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginSessionFailedException</li>
	 * </ul>
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws AbstractJavaTestPluginSessionFailedException if the plug-in signals that the 'user session' has to be aborted (abort current session - continue next session)
	 * @throws AbstractJavaTestPluginUserFailedException if the plug-in signals that the user has to be terminated
	 * @throws AbstractJavaTestPluginTestFailedException if the plug-in signals that the test has to be terminated
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onExecute(AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws AbstractJavaTestPluginSessionFailedException, AbstractJavaTestPluginUserFailedException, AbstractJavaTestPluginTestFailedException, Exception {
		// log.message(log.LOG_INFO, "onExecute(...)");
		
		// --- vvv --- start of specific onExecute code --- vvv ---
		if (javaTest != null) {
		    
		    // register the start of the sample 
		    javaTest.registerSampleStart(STATISTIC_ID);
		    
		    // measure the sample
		    final long min = 1L;
		    final long max = 20L;
		    long responseTime = Math.round(((Math.random() * (max - min)) + min));
		    
		    // add the measured sample to the statistic
		    javaTest.addSampleLong(STATISTIC_ID, responseTime);
		    
		    /*
		    // error case
		    javaTest.addSampleError(STATISTIC_ID,
                                    "My error subject",
                                    AbstractJavaTest.ERROR_SEVERITY_WARNING,
                                    "My error type",
                                    "My error response text or log",
                                    "");
            */
		}
		// --- ^^^ --- end of specific onExecute code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

	/**
	 * On plug-in deconstruct. Called when the plug-in is deconstructed.
	 * @param pluginContext the plug-in context
	 * @param inputValues the list of input values
	 * @return the list of output values
	 * @throws Exception if an error occurs in the implementation of this method
	 */
	@Override
	public List<String> onDeconstruct(AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws Exception {
		// log.message(log.LOG_INFO, "onDeconstruct(...)");
		
		// --- vvv --- start of specific onDeconstruct code --- vvv ---
		// no code here
		// --- ^^^ --- end of specific onDeconstruct code --- ^^^ ---
		
		return new ArrayList<String>();		// no output values
	}

}


Debugging the Interface

  1. In order to debug the processing of the data reported via the interface, activate the “Debug Measuring” checkbox when starting the test job.
  2. After the test job has completed, open the Test Jobs menu, select the “Job Log Files” option for the corresponding test job, and then select the file “DataCollector.out”.
  3. Review the “DataCollector.out” file for any errors. Lines which contain “| Tailer data” reflect the raw reported data.


Real Load Synthetic Monitoring Generally Available!

Real Load Synthetic Monitoring Generally Available.

Performance testing as a main and synthetic monitoring as a dessert? Now available at Real Load.

Recently we’ve launched a new version of our Real Load portal which adds the ability to periodically monitor your applications. The best thing about it is that you can re-use already developed load testing scripts for monitoring. It makes sense to re-use the same underlying technology for both tasks, correct?

And, of course, nobody forces you to have the main followed by the dessert. You can also first have the dessert (… synthetic monitoring) followed by the main (… load testing). Or perhaps the dessert is the main for you; you can arrange the menu as it best suits your taste.

How it works

As for load testing, you’ll first have to prepare your testing script. Let’s assume you’ve already prepared it using our Wizards. Once the script is ready, the only remaining thing to do is to schedule it for periodic execution.

Configuring Monitoring Groups and Jobs

You can set up so-called Monitoring Groups, each made up of a number of Monitoring Jobs. Each Monitoring Job executes one of the prepared test scripts.

In this screenshot you’ll see one Monitoring Group called “Vinnuo APIs” (in the red box) which executes one test script, highlighted in the green box.

Scheduling execution interval and location

The key properties you can configure on a Monitoring Group are:

  • Execution interval: Down to 1 minute, depending on licensing level.
  • Execution timeout: How long to wait for the job to complete, before considering it failed.
  • Max. Data Storage: How long to retain job execution results. Up to approx. 1 year, depending on licensing level.
  • Measuring Agents: The location (agents) to execute the monitoring jobs from. We recommend executing each job from at least 2 agents.

Last, you can enable/disable execution for the group by using the Execution Enabled toggle switch.

Configuring Monitoring Job

Next, you’ll add at least one Monitoring Job to the Monitoring Group. Simply select a previously prepared Test from one of your projects.

In this example, I’ve picked one of the tests that generate the relevant REST API call(s).

Next you’ll need to configure these key parameters relating to the execution of the test script. If you’re familiar with the load testing features of our product, these parameters will look familiar:

  • Number of Users: The number of Virtual Users to simulate. Given this is a monitoring job, this would typically be a low number.
  • Max test Duration: This will limit the duration of the job. Again, given this is a monitoring job, the max duration should be kept short.
  • Max Loops per User: The maximum number of iterations of the test script executed by each Virtual User. One iteration should typically be sufficient for a monitoring job.

Alerting Groups and Devices

A synthetic monitoring solution wouldn’t be complete without alerting functionality.

You can configure Alerting Groups to which you can assign a number of different device types to be alerted. Supported device types are:

  • Email: An email address to deliver the alert to.
  • SMS: SMS alerting. This is subject to additional costs (SMS delivery costs).
  • Webhook: You can configure a WebHook, for example integrating with an existing alerting system.

In this example, I’ve created an alerting group called “SafeArea IT”…

… to which I’ve assigned one email alerting device. Needless to say, you can assign the same alerting group to multiple Monitoring Groups or Monitoring Jobs.

Now that the Alerting Groups are configured, you can configure alerting at the Monitoring Group or Monitoring Job level, whichever best suits your use case. Simply click on the alert icon and assign the Alerting Groups accordingly:

Monitoring Dashboard

Once you’re done with the configuration, you’ll be able to monitor the health of your applications from the Real Time Dashboard. Please note that this dashboard is evolving, and we’re adding new features on a regular basis.

For now, you’ll be able to:

  • See the overall status of all your synthetic monitoring jobs.
  • Look at the results of the last test execution for each job by clicking on the graph symbol pointed at by the red arrow.
  • Look at the logs of the last execution by clicking on the symbol pointed at by the green arrow.
  • Look at the overall scheduling log by clicking on the symbol pointed at by the blue arrow.

This screenshot shows the logs collected for the last job execution (green arrow). Note the additional logfiles (highlighted in the green box) that you can look at.

Interested?

Regardless of whether you’re looking for a Synthetic Monitoring or a Performance Testing solution, we can satisfy both needs.

Sign up for a free account on our portal portal.realload.com and click on “Sign up” (no credit card required). Then reach out to us at support@realload.com so that we can get you started with your first project.

Happy monitoring and load testing!

Support for SSL Cert Client authentication in Proxy Recorder

Added support for SSL Client certs to proxy recorder

Do you need to record HTTP requests that require SSL Client Cert authentication for your test script? Tick, we support this use case now…

How does it work…

The Proxy Recorder has been enhanced to support recording against websites or applications that require presenting a valid SSL Client certificate.

From a high-level point of view, this is how things work:

  1. SSL Client certificates are uploaded to the Real Load Portal. Each certificate is associated with a hostname (or IP address) of the target server, so that the Proxy Recorder knows when to present the SSL Client certificate.
  2. The Real Load Portal then shares the SSL Client certs with the Proxy Recorder. Currently, Cloud Hosted proxy recorders are supported.
  3. The tester then executes the steps to be recorded and included in the test script.
  4. When the Proxy Recorder accesses hosts that require SSL Client authentication, the relevant SSL Client certificate is presented.

SSL Client Certificate Configuration

SSL Client certificates in the .pfx/.p12 format need to be uploaded to the Real Load Portal server.

The configuration of such SSL Client certificates in the Real Load Portal server is done by going to the Remote Proxy Recorders menu item and then clicking on the certificate symbol:

Then provide details about the certificate you’re uploading. Importantly, the target server host must exactly match the hostname (or IP address) that will appear in HTTP requests.

Done. Once the certificate is uploaded, use the Proxy Recorder to access a resource that requires SSL Client Cert authentication. You should now be able to access the resource.

Some SQL performance testing today?

SQL load testing

Most performance testing scenarios involve an application or an API presented over the HTTP or HTTPS protocol. The Real Load performance testing framework is capable of supporting essentially any type of network application, as long as there is a way to generate valid client requests.

Real Load testing scripts are Java based applications that are executed by our platform. While our portal offers a wizard to easily create tests for the HTTP protocol, you can write a performance test application for any network protocol by implementing such a Java based application.

This article illustrates how to prepare a simple load test application for a non-HTTP application. I’ve chosen to performance test our lab MS-SQL server. What I want to find out is how the SQL server performs if multiple threads attempt to update data stored in the same row. While the test sounds academic, this is a scenario I’ve seen leading to performance issues in real life applications…

Requirements

Key requirements to implement such an application are:

  • You’ll need Java client libraries (… and related dependencies) implementing the protocol you want to test. In this case I’ll use Microsoft’s JDBC driver and Hikari as the SQL connection pool manager.
  • You’ll need to determine what logic your load test application should execute. In this example, I’ll run an update SQL statement.
  • You’ll need to determine the metrics you want to measure during test execution. We’ll collect time to obtain a connection from the pool and the time to execute the SQL operation.
  • Make sure the Measuring Agent has network access to the service to be tested (… MS-SQL DB in this case).
  • Last, you’ll need some Java skills to put together the load testing application, or access to somebody who will do that for you.

Step 1 - Implement the test script as a Java application

Using your preferred Java development environment, create a project and add the following dependencies to it:

  • DKFQSTools.jar - Required for all performance testing applications
  • mssql-jdbc.jar (The MS-SQL JDBC driver)
  • hikari-cp.jar (JDBC connection pooling)
  • slf4j-api.jar (Required by Hikari)

In NetBeans, the dependencies section would look as follows:

Once the dependencies are configured in your project, we’ll implement the test logic by extending the AbstractJavaTest class. For this application, we’ll create the class below.

Of particular relevance are these methods:

  • declareStatistics(): This is where you declare statistics metrics to be collected as the test is executed.
  • executeUserSession(): This method is invoked for every virtual user to be simulated. Note the SQL update statement that will be executed as part of this test script.

MSSQLTest.java

import com.dkfqs.tools.javatest.AbstractJavaTest;
import com.dkfqs.tools.javatest.AbstractJavaTestPeriodicThread;
import com.dkfqs.tools.javatest.AbstractJavaTestPeriodicThreadInterface;
import com.dkfqs.tools.logging.CombinedLogAdapter;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.pool.HikariPool;
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.time.Instant;
import javax.sql.DataSource;

@AbstractJavaTest.ResourceFiles(fileNames = {})
public class MSSQLTest extends AbstractJavaTest implements AbstractJavaTestPeriodicThreadInterface {

    private static HikariPool pool;
    private static HikariDataSource dataSource = null;

    /**
     * Static Main: Create a new instance per simulated user and execute the
     * test.
     *
     * @param args the command line arguments
     */
    public static void main(String[] args) throws SQLException, NoSuchFieldException, IllegalArgumentException, IllegalAccessException {
        stdoutLog.message(LOG_INFO, "Max. Java Memory = " + (Runtime.getRuntime().maxMemory() / (1024 * 1024)) + " MB");

        dataSource = new HikariDataSource();
        dataSource.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        dataSource.setJdbcUrl("jdbc:sqlserver://192.168.11.61:1433;databaseName=DEMO_DB;multiSubnetFailover=true;applicationName=RealLoad");
        dataSource.setUsername("sqluser");
        dataSource.setPassword("password");
        dataSource.setMinimumIdle(100);
        dataSource.setMaximumPoolSize(2000);
        dataSource.setAutoCommit(true);
        dataSource.setLoginTimeout(3);
        dataSource.setConnectionTimeout(3000);

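        // Grab a reference to Hikari's internal pool via reflection; this could be
        // used (e.g. from onPeriodicInterval()) to report connection pool metrics.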
        java.lang.reflect.Field field;
        field = dataSource.getClass().getDeclaredField("pool");
        field.setAccessible(true);
        pool = (HikariPool) field.get(dataSource);

        // log test specific resource files, annotated by @AbstractJavaTest.ResourceFiles at class level
        logTestSpecificResourceFileNames(MSSQLTest.class);
        try {
            // get all generic command line arguments
            abstractMain(args);

            // create a new instance per simulated user
            for (int x = 0; x < getArgNumberOfUsers(); x++) {
                new MSSQLTest(x + 1);
            }

            // start the test
            stdoutLog.message(LOG_INFO, "[Start of Test]");
            try {
                // start the user threads
                startUserThreads();

                // wait for the user threads end
                waitUserThreadsEnd();
            } catch (InterruptedException ie) {
                stdoutLog.message(LOG_WARN, "Test aborted by InterruptedException");
            }

            stdoutLog.message(LOG_INFO, "[End of Test]");
        } catch (Exception ex) {
            stdoutLog.message(LOG_FATAL, "[Unexpected End of Test]", ex);
        } finally {
            closeOutputFiles();
        }
    }

    /**
     * Close all output files.
     */
    private static void closeOutputFiles() {
    }

    // - - - vvv - - - instance  - - - vvv - - -
    private CombinedLogAdapter log = new CombinedLogAdapter();

    /**
     * Constructor: Create a new instance per simulated user.
     *
     * @param userNumber the simulated user number
     * @throws IOException if the user statistics out file cannot be created
     */
    public MSSQLTest(int userNumber) throws IOException {
        super(userNumber);
        addSimulatedUser(this);
    }

    @Override
    public void declareStatistics() {
        declareStatistic(0, STATISTIC_TYPE_SAMPLE_EVENT_TIME_CHART, "Get connection from pool", "", "Execution Time", "ms", 0, true, "");
        declareStatistic(1, STATISTIC_TYPE_SAMPLE_EVENT_TIME_CHART, "Exec SQL Update stmnt ", "", "Execution Time", "ms", 1, true, "");
    }

    @Override
    public void executeUserStart(int userNumber) throws Exception {
        // start a periodic thread that reports summary measurement results measured across all simulated users
        if (userNumber == 1) {
            AbstractJavaTestPeriodicThread periodicThread = new AbstractJavaTestPeriodicThread(this, 1000L, this);
            periodicThread.setName("periodic-thread");
            periodicThread.setDaemon(true);
            periodicThread.start();
        }

    }

    @Override
    public int executeUserSession(int userNumber, int sessionLoopNumber) throws Exception {
        long measurementGroupStartTime$0 = System.currentTimeMillis();
        registerSampleStart(0);

        // 1- Get a connection from pool
        Connection connection = null;
        try {
            connection = dataSource.getConnection();
        } catch (Exception e) {
            log.message(LOG_ERROR, e.getMessage());
            return SESSION_STATUS_FAILED;
        }
        addSampleLong(0, System.currentTimeMillis() - measurementGroupStartTime$0);

        // 2 - Prepare SQL statement
        Statement st = connection.createStatement();
        String SQL = "update TEST_TABLE set VALUE_NUM = '7058195060625506304' where DATA_URI = '2566' AND DATA_URI = '0' AND DATA_ID = '-1'";

        // 3 - Execute statement
        registerSampleStart(1);
        long measurementGroupStartTime$1 = System.currentTimeMillis();
        st.executeUpdate(SQL);
        addSampleLong(1, System.currentTimeMillis() - measurementGroupStartTime$1);
        st.close();
        connection.close();

        // end of passed session
        return SESSION_STATUS_SUCCESS;
    }

    @Override
    public void executeUserSessionEnd(int sessionStatus, int userNumber, int sessionLoopNumber) throws Exception {
    }

    /**
     * Called periodically by an independent thread with the context of the
     * first simulated user. Reports summary measurement results which were
     * measured over all simulated users.
     *
     * @param abstractJavaTest the context of the first simulated user
     * @throws Exception if an error occurs - logged to stdout
     */
    @Override
    public void onPeriodicInterval(AbstractJavaTest abstractJavaTest) throws Exception {
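        // No periodic reporting implemented in this example; the static 'pool'
        // reference obtained in main() could be used here to report pool statistics.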
    }

    @Override
    public void onUserSuspend(int userNumber) throws Exception {
    }

    @Override
    public void onUserResume(int userNumber) throws Exception {
    }

    @Override
    public void executeUserEnd(int userNumber) throws Exception {

    }

    @Override
    public void onUserTestAbort(int userNumber) throws Exception {

    }

}

Step 2 - Upload Java app and dependencies to Real Load portal

Once you’ve compiled your application and generated a jar file (… make sure the main class is mentioned in the META-INF/MANIFEST.MF file) you’re ready to configure the load test in the Real Load portal.

After logging into the portal, create a new project (… “MSSQL” in the below screenshot) and a new Resource Set (“Test 1”). Upload your performance test application jar file (“RealLoadTest3.jar” in this example) and all other dependencies.

Once everything is uploaded, define a new test by clicking on the Resource Set (“Test 1”). Make sure you select the .jar file you’ve developed as the “Executing Script” and tick all the required dependencies in the Resource list.

Step 3 - Execute load test

You’re now ready to execute your performance test. When starting the test job, select how many threads (Users) should execute your test application, the ramp-up time and the test execution duration.

While the performance test is executing, you’ll notice that the metrics you’ve declared in the Java source code appear in the real time monitoring window:

If you keep an eye on MS-SQL Management Studio, you’ll notice in the activity monitor that resource locking is the wait class with the highest wait times. Not surprisingly, I might add, given the nature of the test.

Also note that the waiting task number is very close to the number of virtual users (concurrent threads) simulated, approx. 100.

Once the test has completed, you can review the collected metrics. The graph at the bottom of this screenshot shows the execution times of the SQL update statement throughout the test, as load ramped up.

Summarizing….

As you can see, it’s quite straightforward to prepare an application to performance test almost any network protocol.

Should you have a requirement to performance test an exotic protocol and your current tool doesn’t allow you to do so, do not hesitate to contact us. Perhaps we can help…

Thank you for reading and do not hesitate to reach out should you have any Qs.

New Feature - URL Explorer

Easy handling of random values in your load test scripts

An exciting new feature was added in Real Load v4.7.3. The URL Explorer allows you to quickly handle session-specific random values that might appear in your load test script requests and responses.

In a nutshell, this is how things work:

  • Record a session using the Proxy Recorder.
  • Locate random values that might appear in your test script.
  • Search for these values in the recording using the URL Explorer.
  • Extract and assign these values to a variable.
  • Finally assign the value of the variable to all locations in the load test script where the same value appears.

All of this is documented in this short video (7 minutes) which walks you through the above process.

As always, feedback or questions are welcome using our contact form.

Oracle and async I/O... A world of a difference

What a difference enabling async I/O in Oracle makes…

While running a load test against an API product that I have to deal with in my other, day-to-day job, I’ve noticed something in both the results and at the OS level (… on the DB server) that didn’t make much sense.

Odd results

The results of the performance test were somewhat OK but kinda unstable (… some strange variances in the response times). This graph tells the story better than 1000s of words.

Note the green line (transactions per second) going all over the place:

At first I suspected some sort of issue with the application server (Weblogic) and the DB Connection Pool. But all looked good there…

Then I’ve cast an eye on Oracle Enterprise Manager and noticed that most of the DB waits were related to I/O, although the storage of this particular test DB is located on a reasonably fast NVME SSD.

So I started looking at I/O stats on the Oracle Linux server hosting this DB. Being a lab DB, it’s more or less a standard Oracle install with not much performance tuning applied. Nor am I an Oracle expert who knows all the secrets of the trade…

Anyways, there was one thing that somehow didn’t stack up: at the OS level, the percentage of CPU time spent in iowait was sporadically incredibly high (… 70%+ or so), with the CPU idle time plunging to less than 10%:

After reading various online articles about this, most of which suggested beefier HW or rewriting the app so that it would be more efficient with commits, it dawned on me that perhaps Oracle wasn’t using async I/O when writing to disk, causing these high iowait stats.

I finally bumped into a few articles talking about async I/O settings in Oracle and found a few useful SQL queries…

This one will assist in figuring out whether async I/O is enabled on your Oracle DB files:

COL NAME FORMAT A50
SELECT NAME,ASYNCH_IO FROM V$DATAFILE F,V$IOSTAT_FILE I
WHERE  F.FILE#=I.FILE_NO
AND    FILETYPE_NAME='Data File';

… leading to a result like this. Note that for all files async IO is disabled…

So I decided to enable async I/O with these few SQL commands:

ALTER SYSTEM SET FILESYSTEMIO_OPTIONS=SETALL SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;

… and then checking again. As you can see, async I/O is enabled now:

NAME                                               ASYNCH_IO
-------------------------------------------------- ---------
/opt/oracle/oradata/AAOP74/datafile/o1_mf_system_j ASYNC_ON
zml41fy_.dbf

/opt/oracle/oradata/AAOP74/itblspc01.dbf           ASYNC_ON
/opt/oracle/oradata/AAOP74/datafile/o1_mf_sysaux_j ASYNC_ON
zml5rwh_.dbf

/opt/oracle/oradata/AAOP74/datafile/o1_mf_undotbs1 ASYNC_ON
_jzml6l1k_.dbf

/opt/oracle/oradata/AAOP74/dtblspc01.dbf           ASYNC_ON

NAME                                               ASYNCH_IO
-------------------------------------------------- ---------
/opt/oracle/oradata/AAOP74/datafile/o1_mf_users_jz ASYNC_ON
ml6m5f_.dbf

/opt/oracle/oradata/AAOP74/btblspc.dbf             ASYNC_ON
/opt/oracle/oradata/AAOP74/cm.dbf                  ASYNC_ON
/opt/oracle/oradata/AAOP74/cm.idx                  ASYNC_ON
/opt/oracle/oradata/AAOP74/bodtblspc.dbf           ASYNC_ON
/opt/oracle/oradata/AAOP74/boitblspc.dbf           ASYNC_ON
/opt/oracle/oradata/AAOP74/DTBLSPC03.dbf           ASYNC_ON
/opt/oracle/oradata/AAOP74/ITBLSPC03.idx           ASYNC_ON
/opt/oracle/product/19c/dbhome_1/dbs/reportdt.dat  ASYNC_ON

14 rows selected.

Smooth sailing….

Time to re-run the load test with my preferred tool and the results look encouraging.

As you can see, the green results line is much more stable. Not only that, but the number of transactions per second (TPS) increased to approx. 136, up from 101 in the previous run. Response times also went down somewhat, from 90 to 70ish msecs.

The CPU waitio stats also dramatically improved on the Oracle server:

To summarize, it makes sense to scratch beyond the surface of performance bottlenecks before investing in HW upgrades or so… Sometimes the solution is a low-hanging fruit waiting to be picked.

External references:

ORACLE-BASE - Direct and Asynchronous I/O

I/O Configuration and Design

Apache HTTPD on FreeBSD and Linux Load Test

Comparison of infrastructure resource usage between Linux and FreeBSD HTTPD instances

For various reasons, I’ve had to perform a series of tests to ensure our Measuring Agent can generate traffic from a large number of source IP addresses. Aside from validating that capability, the by-product of the test is a somewhat interesting comparison of a FreeBSD and a Linux based Apache HTTPD server.

Generating Load From Multiple IPs

First, a quick overview of what I wanted to prove: I needed to make sure that we can run a Load Test simulating a large number of source IP addresses. To validate this requirement, I’ve configured one of our Measuring Agents with approx. 12k IP addresses. I’ve used a bash script to do that, as doing it manually would take forever. All IPs are assigned as aliases to the NIC from which the load will be generated, and all IPs are within the same /16 subnet.

Finally, I’ve configured my Real Load test script with two additional steps:

  1. Step 0, which selects a random IP address configured on the NIC and stores it in a variable (a rough sketch of this selection logic follows below).
  2. Step 2, which instructs the load test to use the address stored in the variable as the source IP.
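
For illustration only, the selection logic of Step 0 could look roughly like the Java sketch below (the class name, method name and NIC name are assumptions; in the actual test the step is configured within the Real Load test script):

import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Hedged sketch: pick a random IPv4 address among the aliases configured on a NIC.
public final class RandomSourceIp {

    public static InetAddress pick(String nicName) throws Exception {
        NetworkInterface nic = NetworkInterface.getByName(nicName);  // e.g. "eth0" (assumption)
        if (nic == null) {
            throw new IllegalArgumentException("Unknown NIC: " + nicName);
        }
        List<InetAddress> candidates = new ArrayList<>();
        for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
            if (addr instanceof Inet4Address) {
                candidates.add(addr);
            }
        }
        return candidates.get(ThreadLocalRandom.current().nextInt(candidates.size()));
    }
}

A client socket could then be bound to the picked address before connecting, for example via Socket.bind() with the picked address and local port 0.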

Infrastructure Details

The hypervisor is a Windows 2019 Server Standard edition machine, running Hyper-V and fitted with a somewhat old Xeon E5-2683v3 CPU. The measuring agent and the tested servers are connected to the same virtual switch.

The Linux and FreeBSD VMs are minimal installs of their distributions, onto which I’ve installed the latest Apache HTTPD build offered by the built-in software distribution mechanisms. That’s why the HTTPD versions are not identical.

In order for the results to be somewhat comparable, I’ve deployed the same set of static HTML pages on both servers. I’ve also aligned several key HTTPD config parameters on both systems, as shown in this table.

Parameter           Measuring Agent   FreeBSD HTTPD VM   Linux HTTPD VM
OS Version          RH 8.4            13.0               Oracle Lnx 8.4
RAM                 4 GBs             4 GBs              4 GBs
vCPUs               10                4                  4
HTTPD Version       n/a               2.4.53             2.4.37
HTTPD MPM           n/a               event              event
ServerLimit         n/a               8192               8192
MaxRequestWorkers   n/a               2048               2048
ThreadsPerChild     n/a               25                 25

See further down for other tuning parameters applied to the HTTPD VMs.

Load Test Execution and Result Metrics

I’ve then executed a 20 minute, 1000 VUs load test, which is configured to maximize the number of HTTP requests generated. Apache is configured to serve some static HTML pages, made up of text and some images.

This table summarizes the metrics observed once the max. load was reached, approx. 10 minutes into the test. The PDF reports allow you to take a closer look at the test results.

Metric                     Linux HTTPD        FreeBSD HTTPD
User CPU usage             21%                20%
System CPU usage           47%                70%
Avg reqs/s                 8.8k               10.3k
Avg network throughput     1.1 Gbps           1.3 Gbps
Hyper-V CPU usage          10%                11%
Test report PDF            Linux Report PDF   FreeBSD Report PDF
Test progress screenshot   (screenshot)       (screenshot)

Notes

  • CPU usage was measured with the “iostat 20” command.
  • Hyper-V CPU usage was taken from Windows Admin Center.

And the winner is…

… is difficult to pick, to be honest.

  • CPU usage, as measured by Hyper-V, was a little bit higher for FreeBSD. CPU metrics measured within the VMs seem to indicate an overall higher CPU usage by FreeBSD (… in particular System CPU). Perhaps the Linux NIC driver is better optimized for Hyper-V.
  • FreeBSD HTTPD seems to deliver a higher throughput (network and avg requests/s).
  • FreeBSD HTTPD also seems to offer a higher HTTP Keep-Alive efficiency, which might partially explain the higher throughput.
  • Observations (like CPU usage, etc…) were averaged by “eyeballing” metrics displayed on screen. Expect some rounding error…

Had I had more time to tune and align the two platforms, I might have been able to squeeze a bit more performance out of each server, but I doubt that would have materially changed the result in favor of one OS or the other. Obviously I’m happy to be proven wrong…

Feel free to email us with your feedback; I’ll be more than happy to test any further tuning suggestions.

OS Tuning

Below is the OS level tuning that was applied to the Linux and FreeBSD servers. I didn’t have time to research each of the parameters below in full; they were mentioned in various other online sources and adopted. I’ve implemented the ones that seemed to make the most sense…

Linux HTTPD (/etc/sysctl.conf)

The last 2 tunables were required to prevent the Linux server from no longer accepting connections for various reasons…

fs.file-max = 524288
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 16384 60999
net.core.somaxconn = 256
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576
net.core.message_cost=0
net.ipv4.neigh.default.gc_thresh3=64000

FreeBSD HTTPD (/etc/sysctl.conf)

kern.threads.max_threads_per_proc=4096
kern.ipc.somaxconn=4096
kern.ipc.maxsockets=204800
kern.ipc.nmbclusters=262144
kern.maxfiles=204800
kern.maxfilesperproc=200000
kern.maxvnodes=200000
net.inet.tcp.delayed_ack=0
net.inet.tcp.msl=5000
net.inet.tcp.maxtcptw=200000
net.inet.ip.intr_queue_maxlen=4096
net.inet.ip.dummynet.io_fast=1

Real Load Portal Generally Available!

The Real Load Portal portal.realload.com is open for public registration.

Feel free to register for an account and trial our product by following the registration instructions here. You’ll be given a two-week, 100 VUs demo license, and no credit card is required to sign up!

If you’d like a one-on-one session to guide you through the first steps of using our product, please do not hesitate to contact us at sales@realload.com.

Happy testing!

Desktop Companion 0.24 Released

Quick update walkthrough video

The Desktop Companion is a Desktop GUI that allows you to manage several Real Load aspects directly from your desktop.

We’ve released the latest update, and this 5 minute video illustrates the key changes.

Happy watching!

My system just got faster!

My system just got much faster… I wonder why.

Today I’ve started executing a lengthy performance test against a SOAP API to seed the underlying DB. For various reasons, I need to replicate the daily DB volume increase of a production system in my own lab DB.

I’ve prepared a Real Load test script and started hammering a server in my lab environment. I’ve noticed the performance wasn’t particularly good but that didn’t matter, as I wasn’t actually executing a performance test.

I’ve let the test run and went for lunch (… a sandwich). When I came back, I noticed that my system had become much faster, rising from approx. 50 TPS to approx. 200 TPS. Each transaction represents a SOAP request…

See this graph from the real time monitoring window:

Knowing how this particular product works and knowing that typically the performance is limited by the performance of the underlying DB, I’ve started looking at various DB counters and one thing I’ve noticed is that the Response Time reported by MS SQL Studio on a particular DB file went down considerably (… from 100ms+ to 10-20ms).

That was curious… why would this happen? I then cast an eye on the metrics of my storage system (a TrueNAS self-build…) and noticed that the ZFS L2ARC read cache hits improved noticeably around that time. Notice the orange line, next to a 0% hit ratio around 12PM and then rising to 90%+ after approx. 50 minutes.

Anyways… this just goes to show that having access to metrics of all infrastructure components during a load test is critical. But sometimes getting to these metrics can really be hard. You just need to persist to get to the bottom of things…

Desktop Companion Enhancements

Features added to make recording of HTTP sessions more user friendly

This is a short update video to illustrate enhancements in the last update of the Desktop Companion.

Enhancements were done to the Proxy Recorder tab:

  • Allow adding page breaks as you navigate from page to page while recording.
  • Added a real-time counter of the requests being recorded.
  • Added a button to force the Desktop Companion window on top of others, so it doesn’t get hidden by browser windows.

All of the above is illustrated in this short video…

Desktop Companion Released

Conveniently manage AWS Measuring Agents from your desktop and more…

The Desktop Companion is a Desktop GUI that allows you to manage several Real Load aspects directly from your desktop.

It was freshly released in the last few days, please do not hesitate to try it out. We’ve put together a short video that shows how to:

  • Prepare a simple load test using the Recording Proxy on your Desktop.
  • Upload the load test to the Real Load Portal.
  • Start an AWS EC2 Measuring Agent (Load Generator) from the Desktop Companion.
  • Execute the load test script.
  • Terminate the AWS EC2 Measuring Agent.

All of this in 8 minutes…

It’s the first video I’ve ever had to publish, so I apologize in advance for the rather basic editing…

Real Load Plugins Introduction

Real Load plugins - Create, share or simply re-use.

A great new feature of the Real Load portal is that it allows you to share or simply consume plugins that have been prepared by others.

Plugins are written in Java. There are 3 types of plugins supported by the Real Load application:

  1. Session Element Plug-In - Typically used to generate custom data required by your load test script (a minimal skeleton is sketched after this list). For example:
  • Extract data from a DB.
  • Generate random data that follows a specific syntax.
  • Query an external webservice to obtain data to be injected in the load test.
  2. URL Plug-in - Allows you to modify request or response data:
  • Modify the HTTP request (… change the URL, etc…)
  • Modify response data.
  3. Java Source Code Modifier Plug-in - Allows you to automatically modify a test script’s Java source code.
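
As a rough idea of what a minimal Session Element Plug-In could look like, here is a sketch that follows the plug-in interface shown in the All Purpose Interface example earlier in this feed (the class name and the random-UUID output are illustrative assumptions):

import com.dkfqs.tools.javatest.AbstractJavaTest;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginContext;
import com.dkfqs.tools.javatest.AbstractJavaTestPluginInterface;
import com.dkfqs.tools.logging.LogAdapterInterface;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Hedged sketch of a Session Element Plug-In returning one random value per execution.
@AbstractJavaTestPluginInterface.PluginResourceFiles(fileNames={"com.dkfqs.tools.jar"})
public class RandomDataExamplePlugin implements AbstractJavaTestPluginInterface {
    private LogAdapterInterface log = null;

    @Override
    public void setLog(LogAdapterInterface log) {
        this.log = log;
    }

    @Override
    public List<String> onInitialize(AbstractJavaTest javaTest, AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws Exception {
        return new ArrayList<String>();                 // no output values
    }

    @Override
    public List<String> onExecute(AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws Exception {
        // log.message(log.LOG_INFO, "onExecute(...)");
        List<String> outputValues = new ArrayList<String>();
        outputValues.add(UUID.randomUUID().toString()); // random data consumed by the test script
        return outputValues;
    }

    @Override
    public List<String> onDeconstruct(AbstractJavaTestPluginContext pluginContext, List<String> inputValues) throws Exception {
        return new ArrayList<String>();                 // no output values
    }
}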

One of the key features of the product is that plugins can optionally be published on the Real Load portal, for other users to consume. You can have a glimpse of the available plugins here.

Interested in plugins but don’t know where to start? We’ll soon publish getting started documentation on our website. In the meantime, please reach out to us at support@realload.com.

Real Load Demo Portal online

The Real Load Demo Portal demo2.realload.com is now up and running.

The demo portal demo2.realload.com is now available for selected customers who wish to evaluate the product functionality. You need an invitation code from us to sign up at the portal.

Quick Start Guide

  1. Navigate to demo2.realload.com and click on “Sign up”
  2. Enter your invitation code and follow the instructions
  3. Once you are signed in, navigate to “Measuring Agents”
  4. Add the following Measuring Agent: agent2.realload.com port 8080
  5. Ping the Measuring Agent at application level
  6. Click on the “Wizards” icon and select “HTTP Test Wizard”
  7. Define your first HTTP/S test, debug the test, save the session, generate the code and run your test

Note: The Measuring Agent agent2.realload.com has the following restrictions:

  • Maximum number of users per test job: 500
  • Maximum test job duration: 5 minutes
