
Integrating Apache JMeter with Jenkins

In the world of software development, ensuring the performance and reliability of applications is paramount. One of the most popular tools for performance testing is Apache JMeter, known for its flexibility and scalability. Meanwhile, Jenkins has become the go-to choice for continuous integration and continuous delivery (CI/CD). Combining the power of JMeter with the automation capabilities of Jenkins can significantly enhance the efficiency of performance testing within the development pipeline. In this article, we’ll explore the integration of JMeter with Jenkins and how it can streamline the performance testing process.

Apache JMeter

Apache JMeter is a powerful open-source tool designed for load testing, performance testing, and functional testing of applications. It provides a user-friendly GUI that allows testers to create and execute several types of tests, including HTTP, FTP, JDBC, LDAP, and more. JMeter supports simulating heavy loads on servers, analyzing overall performance metrics, and finding performance bottlenecks.

With its scripting and parameterization capabilities, JMeter offers flexibility and scalability for testing web applications, APIs, databases, and other software systems. Its extensive reporting features help teams assess application performance under different conditions, making it an essential tool for ensuring the reliability and scalability of software applications. More information is available on the Apache JMeter website.

Jenkins

Jenkins is one of the most popular open-source automation servers and has been widely used for continuous integration (CI) and continuous delivery (CD) in software development for several years. It allows developers to automate the building, testing, and deployment of applications, thereby streamlining the development lifecycle. Jenkins supports integration with various version control systems like Git, SVN, and Mercurial, enabling automatic triggers for builds whenever code changes are pushed.

Its extensive plugin ecosystem provides flexibility to integrate with a wide range of tools and technologies, making it a versatile solution for managing complex CI/CD pipelines. Jenkins' intuitive web interface, extensive plugin library, and robust scalability make it a popular choice for teams aiming to achieve efficient and automated software delivery processes. The Jenkins documentation has a page to help with the installation process.

Why Integrate JMeter with Jenkins?

Traditionally, performance testing has been a manual and time-consuming process, often conducted by test teams as a separate part of the development lifecycle. The results then had to be shared with the rest of the team, since test execution and result capture were not automated as part of the CI/CD pipeline. However, in today's fast-paced software development environment, there is a growing need to automate the complete testing process, including the execution of performance tests as part of the CI/CD pipeline. By integrating JMeter with Jenkins, organizations can achieve the following benefits:

Automation: Jenkins allows you to automate the execution of JMeter tests as part of your CI/CD pipeline, enabling frequent and consistent performance testing with minimal manual intervention.

Continuous Feedback: Incorporating performance tests into Jenkins pipelines provides immediate feedback on the impact of code changes on application performance, allowing developers to find and address performance issues early in the development cycle.

Reporting: Jenkins provides robust reporting and visualization capabilities, allowing teams to analyze test results and track performance trends over time, helping data-driven decision-making.

Our Proposed Approach & Its Advantages

In addition to using the existing JMeter plugin for Jenkins, we have adopted an approach in which we enhance the Jenkins pipeline to include detailed notifications and better result organization.

The key steps of our approach are as follows.

  1. We install JMeter directly on the agent's base OS. This ensures we have access to the latest features and updates.
  2. We use the powerful BlazeMeter plugin to generate our JMeter scripts.
  3. We've written a dedicated Jenkins pipeline to automate the execution of these JMeter scripts.
  4. The Jenkins pipeline also includes steps to distribute the execution status and log by email to chosen users.
  5. We store the results in a configurable path for future reference.

All of this provides better automation, flexibility and control over execution and notification, and efficient performance testing as part of the CI/CD pipeline.

Setting Up BlazeMeter & Capturing the Test Scripts

To automate script creation we use the BlazeMeter tool. Navigate to the Chrome extensions store, search for BlazeMeter, and click Add to Chrome. Then visit the official BlazeMeter website and create an account for the next steps.

Open Chrome; you will now find the BlazeMeter extension in the top right corner.

Click on the BlazeMeter Chrome extension. A toolbox will be visible. Open the application where you want to record the scripts for JMeter and click on the record button to start.

Navigate through the application and perform necessary operations as end users of the application would. Click on the stop button to stop the recording.

Now BlazeMeter has recorded the scripts. To save them in .jmx format, click on save, check the JMeter only box, and click on save as shown in the screenshot below.

For more information on how to record a JMeter script using BlazeMeter, follow the link.

Modifying the Test Scripts in JMeter

The recorded script can then be opened in JMeter, and necessary changes can be made as per the different Load and Performance Scenarios to be assessed for the application.

Select the generated .jmx file and click on open.

In addition, you can add listeners to the thread groups for better visibility into each sample's results.

Setting up a Jenkins pipeline to execute JMeter tests

Install Jenkins: If you haven’t already, install Jenkins on your server following the official documentation.

Create a New Pipeline Job: A new pipeline job should be created to orchestrate the performance testing process. Click on New Item to create the job and select the Pipeline option.

After creating the new pipeline, navigate to Configure and specify the schedule in the following cron format.

('H 0 */3 * *')
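
If you prefer to keep the schedule inside the pipeline definition itself rather than in the job configuration, the same cron spec can be declared in a declarative pipeline triggers block. A minimal sketch (the stage body here is just a placeholder):

pipeline {
    agent any
    // Run at a Jenkins-hashed minute within hour 0, every third day of the month
    triggers {
        cron('H 0 */3 * *')
    }
    stages {
        stage('Placeholder') {
            steps {
                echo 'Scheduled run'
            }
        }
    }
}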

Define Pipeline Script: Configure the pipeline script to execute the JMeter test at regular intervals using a cron expression.

pipeline {
    agent {
        label '{agent name}'
    }

This part of the Jenkins pipeline script specifies the agent (or node) where the pipeline should run. The label ‘{agent name}’ should be replaced with the label of the specific agent you want to use. This ensures that the pipeline will execute on a machine that matches the provided label.

    stages {
        stage('Running JMeter Scripts') {
            steps {
                script {
                    sh '''
                        output_directory="{path}/$(date +'%Y-%m-%d')"
                        mkdir -p "$output_directory"
                        cd {Jmeter Path}/bin
                        sh jmeter -n -t {Jmeter Script Path} -l "$output_directory/{Result file name}" -e -o "$output_directory"
                        cp "$output_directory"/{Result file name} $WORKSPACE
                        cp -r $output_directory $WORKSPACE
                    '''
                }
            }
        }
    }

This stage named ‘Running JMeter Scripts’ has steps to execute a shell script. The script does the following:
1. Creates an output directory with the current date.
2. Navigates to the JMeter binary directory.
3. Runs the JMeter test script specified by {Jmeter Script Path}, storing the results in the created directory.
4. Copies the result file and output directory to the Jenkins workspace for archiving.
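
If you also want Jenkins to keep these results with the build record, an archiveArtifacts step could be appended to the stage. This is an optional addition, and the patterns below assume the result file is a .jtl and the HTML dashboard's index.html ends up in the copied folder:

                // Optional: retain the JMeter result file and HTML report with the build
                archiveArtifacts artifacts: '**/*.jtl, **/index.html', allowEmptyArchive: true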

    post {
        always {
            script {
                // Default to SUCCESS when no stage has set a result
                currentBuild.result = currentBuild.result ?: 'SUCCESS'
                def date = sh(script: "date +'%Y-%m-%d'", returnStdout: true).trim()
                def subject = "${currentBuild.result}: Job '${env.JOB_NAME}'"
                def buildLog = currentBuild.rawBuild.getLog(1000)
                emailext(
                    subject: subject,
                    body: """Hi Team, the JMeter build was successful. Please contact the team for the results.""",
                    mimeType: 'text/html',
                    to: '{Receiver Email}',
                    from: '{Sender Email}',
                    attachLog: true
                )
            }
        }
    }
}

This post block runs after the pipeline completes. It retrieves the last 1000 lines of the build log and sends an email notification with the build result, a message, and the build log attached to specified recipients.

View the generated reports.

On the Linux instance, navigate to the path where the .html files, i.e. the output reports of the JMeter scripts, are stored.

Before you open the HTML file, move the complete folder to your local device. Once the folder has been moved, open the .html file and you'll be able to analyze the reports.
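
For example, assuming SSH access to the Linux instance, the dated report folder could be pulled down with scp (user, host, and paths are placeholders):

    scp -r {user}@{linux instance}:{path}/{YYYY-MM-DD} ./jmeter-report

You can then open index.html from the copied folder in a browser to analyze the results.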

Conclusion

By following the steps described and the approach suggested, we have shown how integrating JMeter with Jenkins enables teams to automate performance testing and incorporate it seamlessly into the CI/CD pipeline. By scheduling periodic tests, storing results, and sending out email notifications, organizations can ensure the reliability and scalability of their applications with minimal manual intervention. Embrace the power of automation and elevate your performance testing efforts with Jenkins and JMeter integration. For any assistance in automating your performance tests, please get in touch with us at [email protected] or leave us a note at <form> and we will get in touch with you.

Pentaho vs Pyramid: A comprehensive comparison and Roadmap for Migration

Hitachi Vantara Pentaho has been a very popular commercial open-source business intelligence platform, used extensively over the last 12+ years. It provides a comprehensive set of tools with great flexibility and extensibility, and it used to feature in analyst reports including the Gartner BI Magic Quadrant, Forrester, and Dresner's Wisdom of Crowds.

Over the last 3-5 years, however, the industry has shifted to a new augmented era of business intelligence and analytics, and several niche Pentaho alternatives have emerged. Pyramid Analytics is one of these Pentaho replacement platforms and is consistently recognized as a leader in this space by several analysts, including Gartner, BARC, Forrester and Dresner, having featured in their reports for over 7 years in a row.
Hitachi Vantara Pentaho hasn't been able to keep pace and has since been dropped from these analyst reports. This series of articles is aimed at current users of Pentaho and similar older-generation BI platforms, such as Jasper, who are evaluating Pentaho replacements or alternatives. We map the most commonly used modules and features of the Pentaho BI Platform to their equivalents in Pyramid Analytics, compare and highlight the improvements, and present a roadmap for migration.

Architecture Overview and Comparison

About Pentaho

Pentaho BI Platform covers the entire spectrum of Analytics. It includes both web-based components and design tools. The design tools include Pentaho Data Integration for ETL, Metadata Editor, Schema workbench, Aggregate & Report Designers to build Reports and Weka for Data Science / Machine Learning.

Pentaho BI Server includes a set of web components including the User Console, Analyzer, Interactive Reports, Dashboard Designer, CTools, and the Data Source Model Editor / Wizard. Specific design tools and web components can be used to generate different analytical content depending on the specific use cases and requirements. The flexible and open architecture was one of the key reasons for its long popularity; there were few Pentaho alternatives with similar capabilities, so it enjoyed years with little competition.

Please refer to this link for a detailed explanation of each of the above components and Design tools. 

About Pyramid Analytics

Pyramid Analytics is a modern, scalable, enterprise-grade, end-to-end, cloud-centric, unified decision intelligence platform for tomorrow's analytical needs. Being an adaptive analytics platform, it provides different capabilities and experiences based on user needs and skills, all while managing content as a shared resource. It gives organizations one analytics solution for everyone, across all user types and skill levels, proving itself a worthy and capable Pentaho replacement platform.

Unlike Pentaho and other Pentaho replacement platforms, there are no separate client or design tools that developers need to install on local systems; instead, all modules are hosted on a server and can be accessed using just the browser.

Please refer to the Platform Overview & Pyramid Modules pages for a more detailed explanation of each of the above components and modules.

Mapping of Modules & Design Tools between Pyramid & Pentaho

Here is the mapping between the modules of Pentaho and the corresponding ones in Pyramid Analytics.

Key Platform Capabilities & Differentiators

We have listed some of the key capabilities of both the Pentaho and Pyramid Analytics platforms and highlighted the differences in how they are built.

Decision Intelligence and Augmented Analytics

As per Gartner, Decision Intelligence & Augmented analytics is the use of enabling technologies such as machine learning and AI to assist with data preparation, insight generation, and insight explanation to augment how people explore and analyze data in analytics and BI platforms. It also augments the expert and citizen data scientists by automating many aspects of data science, machine learning, and AI model development, management, and deployment. 

Pentaho doesn't offer any augmented analytics or decision intelligence capability as part of its offerings. This gap makes Pyramid Analytics an even more compelling Pentaho replacement option.

Pyramid offers augmented analytics capabilities in several ways, such as Explain (NLQ & chatbot), Smart Insights, Smart Model, Smart Discover, Smart Publish and Present, Data Structure Analyzer, Auto Discover, and auto recommendations. Among the most used are Auto Discover and Smart Discover, which offer users the simplest method for building data visualizations in Pyramid through a simple point-and-click wizard. The wizard presents the user with an ultra-streamlined interface consisting of the report canvas, the visualization menu, and a single unified drop zone.

Collaboration & Conversation

If discussion or real-time collaboration is needed between business users around a report or dashboard, Pentaho users usually have to rely on email or similar tools to discuss issues or observations related to reports.

Pyramid Analytics, however, not only has built-in collaboration and conversation features, where any user can write a comment and share it with a single user or a group of users, it also offers a very powerful Custom Workflow API to support integration with other applications. Other users get notifications about new comments and can respond or continue the conversation.

Dashboard & Data Visualization

Pentaho's Dashboard Designer helps create ad hoc interactive visualizations and dashboards with a 360-degree view of data through dynamic filter controls and content linking. Drag-and-drop, attribute highlighting, and zoom-in capabilities make it easy to isolate key trends, details, and patterns. We can also use the open-source CTools component of Pentaho to build custom dashboards, but this requires strong JavaScript skills. We can also integrate business analytics with other applications through portal and mashup integrations.

Pyramid offers a wide range of visualization capabilities in the Discover, Present, and Illustrate modules, with a wide variety of charts and graphs. It has features like Time Intelligence, a broad set of formula capabilities, and better representation of geospatial data using inbuilt mapping capabilities. It also has Explain, where we can ask questions to get the information needed and it provides the results using NLP. Users can also set alerts based on dynamic conditions without any coding, unlike Pentaho and other Pentaho alternatives. Using the powerful Publish module, you can create data-driven graphics, text, and visual elements that can be scheduled and delivered to users via email as PowerPoint, Word, Excel, PDF, and other formats.

Data Sources Support

Pentaho supports more than 45 databases using JDBC & JNDI, including the ability to retrieve information from Google Analytics and Salesforce. Pentaho Server also provides a database connection wizard to create custom data sources. Details can be found here.

Pyramid Analytics also offers a wide range of data source connectivity options using JDBC, ODBC, OLAP (Microsoft Analysis Services, SAP HANA, SAP BW), and external applications like Facebook, Twitter, and Salesforce. It provides an easy wizard to retrieve and mash the data by creating a logical semantic layer.

It should be highlighted that out-of-the-box connectivity to SAP HANA and BW makes it easy for SAP users to modernize their analytical solution using Pyramid Analytics. You can find more details here.

Metadata Layer Support

Pentaho has two design tools that help end users create the metadata required for ad hoc report creation: Schema Workbench and Metadata Editor. Schema Workbench helps create a Mondrian schema, which needs an underlying database in a star schema; this is OLAP technology and requires the MDX language to query data. Metadata Editor is used to create a metadata model, which primarily transforms the physical DB structure into a business logical model.

The Pyramid metadata layer is managed using the Model component. Everything in Pyramid revolves around the Model. The model is highly sophisticated and underpins all of the visualization capabilities. Pyramid models are easy to create and can be built with little or no database change. The model creation process comes with many data preparation and calculation features, and the model also mimics OLAP concepts and more.

Predictive Analytics

The Pentaho product suite includes the Weka module, which enables predictive analytics features like data preprocessing, classification, association, time series analysis, and clustering. However, it takes some effort to bring the data into a good visualization and then consume it in the context of other analytical artefacts. This is not easy to achieve with other Pentaho alternatives either, but Pyramid Analytics solves it with an out-of-the-box solution.

Pyramid has out-of-the-box predictive modelling capabilities as part of the whole analytical process, which can be executed seamlessly. To facilitate the AI framework, Pyramid comes with tools to deliver machine learning in R, Python, Java, JavaScript, and Ruby (with more to be added in the future).

Natural Language Processing

Pentaho can be integrated with external tools like Spark MLlib, Weka, TensorFlow, and Keras, but these are not suitable for NLP use cases. The same is true of many other Pentaho replacement solutions.

Pyramid's Explain and Ask Question features, built on Natural Language Query (NLQ), support easy text-based searches, allowing users to type a question in conversational language and get answers instantly in the form of automatic data visualizations. Users can enhance the output by customizing the underlying semantic model according to their business needs.

Native Mobile Applications

Given business users' need today for instant access to information and data to make quick decisions, and the fact that mobile devices are the de facto medium, it is very important to deliver analytical content, including KPIs, on the go and even when offline. This requires access to data on mobile devices, delivered through responsive web interfaces and native mobile apps.

Pentaho doesn't have a native mobile app, but mobile-friendly content can be delivered through a mobile browser.

Pyramid, on the other hand, offers a native mobile app that empowers business users on the go, making it one of the best Pentaho alternatives in this respect. The app can be downloaded from the app stores.

Admin, Security & Management

User, role, and folder/file management is done through the Pentaho User Console (PUC) when logged in as an administrator. Your predefined users and roles can be used for the PUC if you are already using a security provider such as LDAP, Microsoft Active Directory (MSAD), or single sign-on. Pentaho Data Integration (PDI) can also be configured to use your implementation of these providers, or Kerberos, to authenticate users and authorize data access.

Clustering and load balancing need to be configured separately for the PDI and BI servers. The server is a Tomcat application, so clustering and load balancing generally follow the usual Tomcat approach. Upgrading the server version needs to follow an upgrade path which may involve changes to content artefacts.

Pyramid has multiple layers of security, which makes it very robust and enables secure content delivery. It also supports third-party security integration such as Active Directory, Azure LDAPS, LDAP, OpenID, and SAML. Pyramid offers advanced administration with simplified security handling, monitoring, fine-tuning, and multi-tenancy management, without the need to edit and manage multiple server configuration files as in Pentaho.

All of this can be done in the browser by an administrative user. The Pulse module helps a Pyramid server hosted in the cloud connect securely to on-premises data repositories, and the built-in distributed architecture offers easy, dynamic scalability across multiple servers with a built-in load balancer.

Conclusion 

With Pyramid ranking ahead of Pentaho in most features and capabilities, it is not surprising that it is rated so highly by the analysts, making Pyramid a natural choice as your next-generation Pentaho replacement for enterprise analytics and decision intelligence. More details on why to choose Pyramid are provided here.

We have only covered the high-level aspects and differences with Pentaho in this article. In the next article, we delve deeper into the individual components and walk through how each of them can be migrated from existing Pentaho-based solutions or Pentaho alternatives into Pyramid, with specific examples.

Please get in touch with us here if you are currently using Pentaho and want assistance with migrating to the Pyramid Analytics platform.


Locust Load Testing in Kubernetes [with Concourse]

Load testing is a critical part of any web application quality process: it tells you how the application behaves when the load, or volume of users accessing it, is high. The first step of any load test is to select the right tool. Locust is one such tool, popular for testing Python-based applications. For testers who prefer a command-line interface, Locust is faster, easier to configure, and enables test code to be easily reused between projects and different environments. Locust also works around resource limitations during a load test by letting you spin up a distributed load test; through this, we can simulate a large number of users by scaling the number of workers. To install Locust on your local system for development, run the following command:

$ pip install locust [you can use the Anaconda command line as well]

For reference, see the Locust documentation.
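
To give an idea of what a locust test script looks like, here is a minimal illustrative locustfile (not the Odoo locust scripts used later in this article):

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-5 seconds between tasks
    wait_time = between(1, 5)

    @task
    def load_home_page(self):
        # Repeatedly request the home page of the target host
        self.client.get("/")

Running locust -f locustfile.py locally and opening the web UI lets you try this out before moving to the distributed setup described below.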
Prerequisites:
1) Kubernetes
2) Concourse
3) Helm

Kubernetes is a container management technology developed in a Google lab to manage containerized applications across different kinds of environments such as physical, virtual, and cloud infrastructure. It is an open-source system that helps in creating and managing the containerization of applications. A Helm chart is the package format for Kubernetes' package manager and consists of a collection of files. Concourse CI is a system built with a loosely coupled microservice architecture; to manage a Concourse pipeline from the command line we use fly.

Setup Locust in Kubernetes:
Add the Helm repository with the commands below:
1) $ helm repo add stable https://charts.helm.sh/stable
2) $ helm repo list
The output will show the newly added Helm repo:

3) Create a ConfigMap with any name; here I'm calling it locust-worker-configs, and it holds our locust files. Since in our example we are running Odoo locust tests, there are multiple files, and all the files in that folder are added to this ConfigMap, which is later referenced in the locust pods (master and worker).

$ kubectl create configmap locust-worker-configs --from-file <local dir>/odoolocust -o yaml --dry-run=client | kubectl apply -f -
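
To double-check that all the locust files from the folder made it into the ConfigMap before installing the chart, you can inspect it (an optional sanity check):

$ kubectl describe configmap locust-worker-configs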

4) Install Locust:
$ helm install locust stable/locust -f values.yaml
Here is the list of all configurable values. Be sure to add the ConfigMap name created earlier in the values.yaml file.

After running this command, we will have the following output:

5) Check if all the pods are up and running.

$ kubectl get pods

6) Let’s check the service created:
$ kubectl get service

7) We now have one service named "locust-master-svc", and we can expose it with port forwarding to access it locally.

$ kubectl port-forward service/locust-master-svc 8089:8089

We now have the Locust dashboard up and running, where we can provide the number of users, spawn rate, and host.

Below is an image of the load test, showing three workers in the ready state along with the CPU usage metrics:

Now we can connect to the remote Locust instance and run our tests. The next steps automate the tests and store the results in a Google Cloud Storage bucket using Concourse.

Concourse pipeline:

To create the ConfigMap and run the pods, we have to provide the directory of the folder where all test scripts and Concourse pipeline files are located. For that, check the folder structure below:

Here, we're using the pipeline file "locust-tests.pipeline.yaml". It has two kinds of jobs: one that updates the locust scripts and others that run the tests for different numbers of users.


groups:
- name: locust-test
  jobs:
  - update-locust-script
  - users-50
  - users-100
  - users-150
  - users-200

jobs:
- name: update-locust-script
  plan:
  - get: pipeline
    trigger: true
  - put: task
    params:
      wait_until_ready: 0
      kubectl: create configmap locust-worker-configs --from-file pipeline/tasks/odoolocust/ -o yaml --dry-run=client -n <namespace> | kubectl apply -f -

- name: users-50
  plan:
  - get: pipeline
  - task: users-50
    file: pipeline/tasks/locust-test.yaml
    params:
      user_count: 50

- name: users-100
  plan:
  - get: pipeline
  - task: users-100
    file: pipeline/tasks/locust-test.yaml
    params:
      user_count: 100

- name: users-150
  plan:
  - get: pipeline
  - task: users-150
    file: pipeline/tasks/locust-test.yaml
    params:
      user_count: 150

- name: users-200
  plan:
  - get: pipeline
  - task: users-200
    file: pipeline/tasks/locust-test.yaml
    params:
      user_count: 200

resource_types:
- name: kubernetes
  type: docker-image
  source:
    repository: zlabjp/kubernetes-resource
    tag: "1.17"

resources:
- name: pipeline
  type: git
  source:
    uri:
    branch:
    private_key: {{git_private_key}}

- name: task
  type: kubernetes
  source:
    server: {{cluster_url}}
    namespace:
    token: {{token}}
    certificate_authority: ((kubernetes_ca_crt))

The pipeline
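
With the pipeline file ready, it can be registered and unpaused with fly. A sketch, assuming a fly target named ci and a params file holding the secrets referenced above:

$ fly -t ci set-pipeline -p locust-tests -c locust-tests.pipeline.yaml -l params.yaml
$ fly -t ci unpause-pipeline -p locust-tests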

Under the tasks folder, we have two files, and the Odoo locust folder has the locust files. The cred.py script below is used in locust-test.yaml to convert the Google Cloud credentials retrieved from a secret manager into a JSON file, so that we can authenticate to Google Cloud and push the test results to a Google Cloud Storage bucket.

import json

# Read the raw credentials material written out by the pipeline
f = open('credentials.json')
d = f.read()

data = json.loads(d, strict=False)

# Write the parsed credentials back out as well-formed JSON for gcloud
with open("cred.json", "w") as outfile:
    json.dump(data, outfile)

                                                                      cred.py

The locust-test.yaml task runs the actual tests for the different numbers of users defined in the pipeline.


platform: linux

image_resource:
  type: docker-image
  source:
    repository: google/cloud-sdk
    tag: "alpine"

params:
  KUBE_CONFIG: ((kubernetes-config))

inputs:
- name: pipeline

run:
  path: sh
  args:
  - -exc
  - |
    set +x
    mkdir -p /root/.kube/
    printf %s "${KUBE_CONFIG}" > /root/.kube/config
    curl -L -s https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
    chmod +x /usr/local/bin/kubectl
    cd pipeline/tasks/
    cat << EOF >> credentials.json
    ((kubernetes-bucket-storage))
    EOF
    python3 cred.py
    export GOOGLE_APPLICATION_CREDENTIALS=cred.json
    POD_NAME=$(kubectl get pods -n <namespace> | grep locust-master | awk '{print $1}')
    kubectl exec --stdin --tty ${POD_NAME} bash -n <namespace> << EOF
    mkdir tmp
    cd tmp
    locust -f /mnt/locust/test.py Seller --csv=${user_count}_users --headless --run-time 30m -u ${user_count} -r ${user_count} -H <host> --only-summary
    cd ..
    tar -zcf users_${user_count}.tgz tmp
    rm -rf tmp
    EOF
    kubectl cp <namespace>/${POD_NAME}:/home/locust/users_${user_count}.tgz users_${user_count}.tgz
    kubectl exec --stdin --tty ${POD_NAME} bash -n <namespace> << EOF
    rm users_${user_count}.tgz
    EOF
    gcloud auth activate-service-account --project=<google_project_id> --key-file=cred.json
    gsutil mv users_${user_count}.tgz gs://locust-tests/${user_count}_users/users_${user_count}_$(date '+%d-%m-%YT%H:%M:%S').tgz

                                                                       Locust-test.yaml

When we trigger the job for 50 users, it runs the test for the duration set in this line of the code, where the run time is 30 minutes:

locust -f /mnt/locust/test.py Seller --csv=${user_count}_users --headless --run-time 30m -u ${user_count} -r ${user_count} -H <host> --only-summary

Once the test is completed, it pushes the .tgz file with the results to a Google Cloud Storage bucket named locust-tests, which has subfolders for the different numbers of users so that we can track the tests across different periods.
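
To review earlier runs, the archived results can be listed and pulled back from the bucket with gsutil, for example (the timestamped file name is a placeholder):

$ gsutil ls gs://locust-tests/50_users/
$ gsutil cp gs://locust-tests/50_users/users_50_<timestamp>.tgz .
$ tar -xzf users_50_<timestamp>.tgz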

This flow can also be integrated with the deployment pipeline of the application so that it runs the tests right after deployment. Here we are using the Odoo locust script to test an Odoo application; the same procedure can be followed for any other application.

This is how we can completely automate the execution of Locust tests using Concourse.

Please get in touch for assistance with setup and use of Locust for your Performance testing needs.