Optimizing DevSecOps with SonarQube and DefectDojo Integration

In the evolving landscape of software development, the importance of integrating security practices into every phase of the software development lifecycle (SDLC) cannot be overstated.

This is where DevSecOps comes into play: a philosophy that emphasizes incorporating security as a shared responsibility throughout the entire development process, rather than treating it as an afterthought.

By shifting security left, organizations can detect vulnerabilities earlier, reduce risks, and improve compliance with industry standards. DevSecOps isn’t just about tools; it’s about creating a culture where developers, security teams, and operations work together seamlessly. However, having the right tools is essential to implementing an effective DevSecOps strategy.

In this article, we’ll delve into the setup of DefectDojo, a powerful open-source vulnerability management tool, and demonstrate how to integrate it with SonarQube to automate code analysis as part of the CI/CD process and capture the vulnerabilities identified during testing so they can be tracked to closure.

Why DevSecOps and What Tools to Use?

DevSecOps aims to automate security processes, ensuring that security is built into the CI/CD pipeline without slowing down the development process. Here are some key tools that can be used in a DevSecOps environment:

SonarQube: Static Code Analysis

SonarQube inspects code for bugs, code smells, and vulnerabilities, providing developers with real-time feedback on code quality. It helps identify and fix issues early in the development process, ensuring cleaner and more maintainable code.

Dependency Check: Dependency Vulnerability Scanning

Dependency Check scans project dependencies for known vulnerabilities, alerting teams to potential risks from third-party libraries. This tool is crucial for managing and mitigating security risks associated with external code components.
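As a rough sketch, a Dependency-Check scan can be triggered from the CLI and exported to a report that can later be pushed to DefectDojo (the project name and paths below are placeholders):

# Scan the working tree for dependencies with known CVEs and write a JSON report
dependency-check.sh --project "my-app" --scan ./ --format JSON --out ./reports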

Trivy: Container Image Scanning

Trivy examines Docker images for vulnerabilities in both OS packages and application dependencies. By integrating Trivy into the CI/CD pipeline, teams ensure that the container images deployed are secure, reducing the risk of introducing vulnerabilities into production.
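For illustration, a pipeline stage could invoke Trivy against a freshly built image along these lines (the image name and output path are placeholders):

# Scan an image and write HIGH/CRITICAL findings to a JSON report
trivy image --severity HIGH,CRITICAL --format json -o trivy-report.json myapp:latest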

ZAP: Dynamic Application Security Testing

ZAP performs dynamic security testing on running applications to uncover vulnerabilities that may not be visible through static analysis alone. It helps identify security issues in a live environment, ensuring that the application is secure from real-world attacks.
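As an illustration, ZAP's packaged baseline scan can be run from Docker against a deployed environment (the image name and target URL are assumptions to adapt to your setup):

# Passive baseline scan of a running application, writing an HTML report to the mounted directory
docker run --rm -v $(pwd):/zap/wrk -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://staging.example.com -r zap-report.html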

DefectDojo: Centralized Security Management

DefectDojo collects and centralizes security findings from SonarQube, Dependency Check, Trivy, and ZAP, providing a unified platform for managing and tracking vulnerabilities. It streamlines the process of addressing security issues, offering insights and oversight to help prioritize and resolve them efficiently.

DefectDojo ties all these tools together by acting as a central hub for vulnerability management. It allows security teams to track, manage, and remediate vulnerabilities efficiently, making it an essential tool in the DevSecOps toolkit.

Installation and Setup of DefectDojo

Before integrating DefectDojo with your security tools, you need to install and configure it. Here’s how you can do that:

Prerequisites: 

1. Docker
2. Jenkins

Step 1: Download the official DefectDojo repository from GitHub using the link.

Step 2: Once downloaded, navigate to the path where you downloaded the repository and run the command below.

docker-compose up -d --build

Step 3: DefectDojo will start running and you will be able to access it at http://localhost:8080/. Enter the credentials set during installation and log in to the application.
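If you did not set credentials explicitly, the standard docker-compose setup generates an admin password during initialization; on that assumption, it can usually be retrieved from the initializer container logs:

docker-compose logs initializer | grep "Admin password:"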

Step 4: Create a product by navigating as shown below.

Step 5: Click on the created product name and create an engagement by navigating as shown below.

Step 6: Once an engagement is created, click on the engagement name and create a test from it.

Step 7: Copy the test ID and enter it in the post stage of the pipeline.

Step 8: Build a pipeline in Jenkins and integrate it with DefectDojo in order to generate the vulnerability report.

pipeline {
    agent {
        label 'Agent-1'
    }
    environment {
        SONAR_TOKEN = credentials('sonarqube')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: 'dev']],
                    userRemoteConfigs: [[credentialsId: 'parameterized credentials', url: 'repositoryURL']]
                ])
            }
        }
        stage('SonarQube Analysis') {
            steps {
                script {
                    withSonarQubeEnv('Sonarqube') {
                        def scannerHome = tool name: 'sonar-scanner', type: 'hudson.plugins.sonar.SonarRunnerInstallation'
                        def scannerCmd = "${scannerHome}/bin/sonar-scanner"
                        sh "${scannerCmd} -Dsonar.projectKey=ProjectName -Dsonar.sources=./ -Dsonar.host.url=Sonarqubelink -Dsonar.login=${SONAR_TOKEN} -Dsonar.java.binaries=./"
                    }
                }
            }
        }
        // Other stages...
        stage('Generate SonarQube JSON Report') {
            steps {
                script {
                    def SONAR_REPORT_FILE = "/path/for/.jsonfile"
                    sh """
                    curl -u ${SONAR_TOKEN}: \
                        "Sonarqube URL" \
                        -o ${SONAR_REPORT_FILE}
                    """
                }
            }
        }
    }
    post {
        always {
            withCredentials([string(credentialsId: 'Defect_Dojo_Api_Key', variable: 'API_KEY')]) {
                script {
                    def defectDojoUrl = 'Enter the defect dojo URL'  // Replace with your DefectDojo URL
                    def testId = '14'  // Replace with the correct test ID
                    def scanType = 'SonarQube Scan'
                    def tags = 'Enter the tag name'
                    def SONAR_REPORT_FILE = "/path/where/.json/file is present"
                    sh """
                    curl -X POST \\
                      '${defectDojoUrl}' \\
                      -H 'accept: application/json' \\
                      -H 'Authorization: Token ${API_KEY}' \\
                      -H 'Content-Type: multipart/form-data' \\
                      -F 'test=${testId}' \\
                      -F 'file=@${SONAR_REPORT_FILE};type=application/json' \\
                      -F 'scan_type=${scanType}' \\
                      -F 'tags=${tags}'
                    """
                }
            }
        }
    }
}

The entire script is wrapped in a pipeline block, defining the CI/CD pipeline in Jenkins.

agent: Specifies the Jenkins agent (node) where the pipeline will execute. The label Agent-1 indicates that the pipeline should run on a node with that label.

environment: Defines environment variables for the pipeline.

SONAR_TOKEN: Retrieves the SonarQube authentication token from Jenkins credentials using the ID ‘sonarqube’. This token is needed to authenticate with the SonarQube server during analysis.

The pipeline includes several stages, each performing a specific task.

stage(‘Checkout’): The first stage checks out the code from the Git repository.

checkout: This is a Jenkins step that uses the GitSCM plugin to pull code from the specified branch.

branches: Indicates which branch (dev) to checkout.

userRemoteConfigs: Specifies the Git repository’s URL and the credentials ID (parameterized credentials) used to access the repository.

stage(‘SonarQube Analysis’): This stage runs a SonarQube analysis on the checked-out code.

withSonarQubeEnv(‘Sonarqube’): Sets up the SonarQube environment using the SonarQube server named Sonarqube, which is configured in Jenkins.

tool name: Locates the SonarQube scanner tool installed on the Jenkins agent.

sh: Executes the SonarQube scanner command with the following parameters:

-Dsonar.projectKey=ProjectName: Specifies the project key in SonarQube.

-Dsonar.sources=./: Specifies the source directory for the analysis (in this case, the root directory).

-Dsonar.host.url=Sonarqubelink: Specifies the SonarQube server URL.

-Dsonar.login=${SONAR_TOKEN}: Uses the SonarQube token for authentication.

-Dsonar.java.binaries=./: Points to the location of Java binaries (if applicable) for analysis.

stage(‘Generate SonarQube JSON Report’): This stage generates a JSON report of the SonarQube analysis.

SONAR_REPORT_FILE: Defines the path where the JSON report will be saved.

sh: Runs a curl command to retrieve the SonarQube issues data in JSON format (an example endpoint is sketched after this list):

  1. -u ${SONAR_TOKEN}:: Uses the SonarQube token for authentication.
  2. “Sonarqube URL”: Specifies the API endpoint to fetch the issues from SonarQube.
  3. -o ${SONAR_REPORT_FILE}: Saves the JSON response to the specified file path.
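For illustration, the "Sonarqube URL" placeholder usually points at SonarQube's issues search Web API. A hedged example of what the full command might look like, where the hostname, project key, and the 500-item page size are placeholder assumptions to adjust for your server:

curl -u ${SONAR_TOKEN}: \
    "https://sonarqube.example.com/api/issues/search?componentKeys=ProjectName&resolved=false&ps=500" \
    -o sonar-report.json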

post: Defines actions to perform after the pipeline stages are complete. The always block ensures that the following steps are executed regardless of the pipeline’s success or failure.

withCredentials: Securely retrieves the DefectDojo API key from Jenkins credentials using the ID ‘Defect_Dojo_Api_Key’.

script: Contains the script block where the DefectDojo integration happens.

defectDojoUrl: The URL of the DefectDojo API endpoint for reimporting scans.

testId: The specific test ID in DefectDojo where the SonarQube report will be uploaded.

scanType: Indicates the type of scan (in this case, SonarQube Scan).

tags: Tags the scan in DefectDojo.

SONAR_REPORT_FILE: Points to the JSON file generated earlier.

sh: Runs a curl command to POST the SonarQube JSON report to DefectDojo:

-X POST: Specifies that this is a POST request.

-H ‘accept: application/json’: Indicates that the response should be in JSON format.

-H ‘Authorization: Token ${API_KEY}’: Authenticates with DefectDojo using the API key.

-F ‘test=${testId}’: Specifies the test ID in DefectDojo.

-F ‘file=@${SONAR_REPORT_FILE};type=application/json’: Uploads the JSON file as part of the request.

-F ‘scan_type=${scanType}’: Indicates the type of scan being uploaded.

-F ‘tags=${tags}’: Adds any specified tags to the scan in DefectDojo.

Step 9: Verify the Integration

After the pipeline executes, the vulnerability report generated by SonarQube will be available in DefectDojo under the corresponding engagement. You can now track, manage, and remediate these vulnerabilities using DefectDojo’s robust features.
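Beyond checking in the UI, the upload can also be spot-checked through DefectDojo's REST API. A minimal sketch, assuming the v2 API and the same test ID used in the pipeline (the URL and ID are placeholders):

curl -H "Authorization: Token $API_KEY" \
    "https://defectdojo.example.com/api/v2/findings/?test=14&limit=20"

This should return the findings imported from the SonarQube report for that test.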

Conclusion

Integrating security tools like SonarQube with DefectDojo is a critical step in building a secure DevSecOps pipeline. By automating vulnerability management and integrating it directly into your CI/CD pipeline, you can ensure that security remains a top priority throughout the development process.

In this article, we focused on setting up DefectDojo and integrating it with SonarQube. In future articles, we will cover the integration of additional tools like OWASP ZAP, Trivy, and Dependency-Check. Stay tuned to further enhance your DevSecOps practices.

Frequently Asked Questions (FAQs)

DefectDojo is an open-source application security vulnerability management tool. It helps organizations manage, track, and remediate security vulnerabilities efficiently. It integrates with a variety of security tools to automate workflows and provides continuous security monitoring throughout the SDLC.

The prerequisites for setting up DefectDojo are Docker, to run DefectDojo inside containers, and Jenkins, to integrate it into CI/CD pipelines.

To install DefectDojo:

  1. Download the official DefectDojo documentation from its GitHub repository.
  2. Navigate to the path where you downloaded the repository.
  3. Run the docker-compose up -d --build command to build and start the DefectDojo container and its dependencies.

Once DefectDojo is running, you can access it via a web browser at http://localhost:8080/. Log in with the credentials set during installation, and you can start managing security vulnerabilities by creating products, engagements, and tests.

Yes, DefectDojo integrates seamlessly with other security tools like SonarQube, OWASP ZAP, Trivy, and Dependency-Check. It allows the centralization of vulnerability management across multiple tools, making it an indispensable part of the DevSecOps pipeline.


Keycloak deployment on Kubernetes with Helm charts using an external PostgreSQL database

Prerequisites:

  1. Kubernetes cluster set up and configured.
  2. Helm installed on your Kubernetes cluster.
  3. Basic understanding of Kubernetes concepts like Pods, Deployments, and Services.
  4. Familiarity with Helm charts and templating.

Introduction:

Deploying Keycloak on Kubernetes with an external PostgreSQL database can be challenging, especially when using Helm charts. One common issue is that the Helm chart deploys Keycloak with a default database service, making it difficult to integrate with an external database.

In this article, we’ll explore the problem we encountered while deploying Keycloak on Kubernetes using Helm charts and describe the solution we implemented to seamlessly use an external PostgreSQL database.

Problem:

The primary issue we faced during the deployment of Keycloak on Kubernetes using Helm was the automatic deployment of a default database service. This default service conflicted with our requirement to use an external PostgreSQL database for Keycloak. The Helm chart, by default, would deploy an internal database, making it challenging to configure Keycloak to connect to an external database.

Problem Analysis

  1. Default Database Deployment: The Helm chart for Keycloak automatically deploys an internal PostgreSQL database. This default setup is convenient for simple deployments but problematic when an external database is required.
  2. Configuration Complexity: Customizing the Helm chart to disable the internal database and correctly configure Keycloak to use an external PostgreSQL database requires careful adjustments to the values.yaml file.
  3. Integration Challenges: Ensuring seamless integration with an external PostgreSQL database involves specifying the correct database connection parameters and making sure that these settings are correctly propagated to the Keycloak deployment.
  4. Persistence and Storage: The internal database deployed by default may not meet the persistence and storage requirements for production environments, where an external managed PostgreSQL service is preferred for reliability and scalability.

To address these issues, the following step-by-step guide provides detailed instructions on customizing the Keycloak Helm chart to disable the default database and configure it to use an external PostgreSQL database.

Overview Diagram:

Step-by-Step Guide

Step 1: Setting Up Helm Repository

If you haven’t already added the official Helm charts repository for Keycloak, you can add it using the following command:

helm repo add codecentric https://codecentric.github.io/helm-charts

helm repo update

By adding the official Helm charts repository for Keycloak, you ensure that you have access to the latest charts maintained by the developers. Updating the repository ensures you have the most recent versions of the charts.
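Before customizing anything, it can help to start from the chart's default values. A quick sketch, where the codecentric/keycloakx chart name is an assumption (the repository also hosts an older codecentric/keycloak chart, so use whichever chart your deployment is based on):

helm show values codecentric/keycloakx > values.yaml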

Step 2: Customizing Helm Values

Objective: Customize the Keycloak Helm chart to avoid deploying the default database and configure it to use an external PostgreSQL database.

Configure Keycloak for development mode

Create a values.yaml File

  1. Create a new file named values.yaml.
  2. Add the following content to the file 

image:
  # The Keycloak image repository
  repository: quay.io/keycloak/keycloak
  # Overrides the Keycloak image tag whose default is the chart appVersion
  tag: 24.0.3
  # The Keycloak image pull policy
  pullPolicy: IfNotPresent

resources:
  requests:
    cpu: "500m"
    memory: "1024Mi"
  limits:
    cpu: "500m"
    memory: "1024Mi"

args:
  - start-dev
  - --hostname=<url>
  - --hostname-url=<url>
  - --verbose

autoscaling:
  # If `true`, a autoscaling/v2beta2 HorizontalPodAutoscaler resource is created (requires Kubernetes 1.18 or above)
  # Autoscaling seems to be most reliable when using KUBE_PING service discovery (see README for details)
  # This disables the `replicas` field in the StatefulSet
  enabled: false
  # The minimum and maximum number of replicas for the Keycloak StatefulSet
  minReplicas: 1
  maxReplicas: 2

ingress:
  enabled: true
  #hosts:
  #  - <url>
  ssl:
    letsencrypt: true
    cert_secret: <url>
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt
    cert-manager.io/acme-challenge-type: dns01
    cert-manager.io/acme-dns01-provider: route53
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "sticky-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
  labels: {}
  rules:
    -
      # Ingress host
      host: '<url>'
      # Paths for the host
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - <url>
      secretName: "<url>"

extraEnv: |
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: QUARKUS_THREAD_POOL_MAX_THREADS
    value: "500"
  - name: QUARKUS_THREAD_POOL_QUEUE_SIZE
    value: "500"

This configuration file customizes the Keycloak Helm chart to set specific resource requests and limits, ingress settings, and additional environment variables. By setting the args to start Keycloak in development mode, you allow for easier initial setup and testing.

Configuring for Production Mode

  1. Add or replace the following content in values.yaml for production mode:

args:
  - start
  - --hostname=<url>
  - --hostname-url=<url>
  - --verbose
  - --optimized
  - -Dquarkus.http.host=0.0.0.0
  - -Dquarkus.http.port=8080

Note: The production configuration includes optimizations and ensures that Keycloak runs in a stable environment suitable for production workloads. The --optimized flag is added for performance improvements.

Configuring for External Database

  1. Add the following content to values.yaml to use an external PostgreSQL database:

args:
  - start
  - --hostname-url=<url>
  - --verbose
  - --db=postgres
  - --db-url=<jdbc-url>
  - --db-password=${DB_PASSWORD}
  - --db-username=${DB_USER}

postgresql:
  enabled: false

This configuration disables the default PostgreSQL deployment by setting postgresql.enabled to false. The database connection arguments are provided to connect Keycloak to an external PostgreSQL database.
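The ${DB_USER} and ${DB_PASSWORD} placeholders must exist as environment variables inside the Keycloak container. One way to supply them, sketched under the assumption that the credentials live in a Kubernetes secret named keycloak-db-secret and that the chart's extraEnv hook shown earlier is available:

kubectl create secret generic keycloak-db-secret \
  --from-literal=username=keycloak \
  --from-literal=password='<db-password>'

extraEnv: |
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: password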

Step 3: Deploying Keycloak with PostgreSQL and Custom Themes Using Helm

Objective: Add custom themes to Keycloak and deploy it using Helm.

  1. Add the following to values.yaml to include custom themes:

extraInitContainers: |
  - name: theme-provider
    image: <docker-hub-registry-url>
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "Copying custom theme..."
        cp -R /custom-themes/* /eha-clinic
    volumeMounts:
      - name: custom-theme
        mountPath: /eha-clinic

extraVolumeMounts: |
  - name: custom-theme
    mountPath: /opt/jboss/keycloak/themes/

extraVolumes: |
  - name: custom-theme
    emptyDir: {}

This configuration uses an init container to copy custom themes into the Keycloak container. The themes are mounted at the appropriate location within the Keycloak container, ensuring they are available when Keycloak starts.
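With the values file prepared, the release can then be installed or upgraded. A sketch, where the release name, namespace, and the codecentric/keycloakx chart name are assumptions to adapt to your setup:

helm upgrade --install keycloak codecentric/keycloakx \
  --namespace keycloak --create-namespace \
  -f values.yaml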

Step 4: Configuring Keycloak

Objective: Log in to the Keycloak admin console and configure realms, users, roles, client applications, and other settings.

Access the Keycloak admin console using the URL provided by your ingress configuration.

  1. Log in with the default credentials (admin/admin).
  2. Configure the following according to your application requirements:
  • Realms
  • Users
  • Roles
  • Client applications

The Keycloak admin console allows for comprehensive configuration of all aspects of authentication and authorization, tailored to the needs of your applications.

Step 5: Configuring Custom Themes

Objective: Apply and configure custom themes within the Keycloak admin console.

  1. Log in to the Keycloak admin console using the default credentials (admin/admin).
  2. Navigate to the realm settings and select the “Themes” tab.
  3. Select and configure your custom themes for:
  • Login pages
  • Account pages
  • Email pages

Custom themes enhance the user experience by providing a personalized and branded interface. This step ensures that the authentication experience aligns with your organization’s branding and user interface guidelines.

Conclusion

By following the steps outlined in this article, you can deploy Keycloak with PostgreSQL on Kubernetes using Helm, while also incorporating custom themes to personalize the authentication experience. Leveraging Helm charts simplifies the deployment process, while Keycloak and PostgreSQL offer robust features for authentication and data storage. Integrating custom themes allows you to tailor the authentication pages according to your branding and user interface requirements, ultimately enhancing the user experience and security of your applications.

Frequently Asked Questions (FAQs)

Keycloak is an open-source identity and access management solution for modern applications and services. It offers features such as single sign-on, identity brokering, and social login, which make user identity management easier while strengthening application security.

Deploying Keycloak on Kubernetes gives you an elastic application that can scale with the number of users and remain resilient to failures such as external service unavailability or internal server crashes; it is also easy to manage, supports numerous authentication protocols, and connects with different types of external databases.

Helm charts are pre-configured Kubernetes resource packages that simplify the efficient management and deployment of applications on Kubernetes.

To disable the default PostgreSQL database, set postgresql.enabled to false in the values.yaml file.

Provide the necessary database connection parameters in the values.yaml file, including --db-url, --db-password, and --db-username.

You can add custom themes by configuring init containers in the values.yaml file to copy the themes into the Keycloak container and mounting them at the appropriate location.


Integrating Apache Jmeter with Jenkins

In the world of software development, ensuring the performance and reliability of applications is paramount. One of the most popular tools for performance testing is Apache JMeter, known for its flexibility and scalability. Meanwhile, Jenkins has become the go-to choice for continuous integration and continuous delivery (CI/CD). Combining the power of JMeter with the automation capabilities of Jenkins can significantly enhance the efficiency of performance testing within the development pipeline. In this article, we’ll explore the integration of JMeter with Jenkins and how it can streamline the performance testing process.

Apache Jmeter

Apache JMeter is a powerful open-source tool designed for load testing, performance testing, and functional testing of applications. It provides a user-friendly GUI that allows testers to create and execute several types of tests, including HTTP, FTP, JDBC, LDAP, and more. JMeter supports simulating heavy loads on servers, analyzing overall performance metrics, and finding performance bottlenecks.

With its scripting and parameterization capabilities, JMeter offers flexibility and scalability for testing web applications, APIs, databases, and other software systems. Its extensive reporting features help teams assess application performance under different conditions, making it an essential tool for ensuring the reliability and scalability of software applications. More information is available on the Apache JMeter website.

Jenkins

Jenkins is one of the most popular open-source automation servers widely used for continuous integration (CI) and continuous delivery (CD) processes in software development for several years now. It allows developers to automate the building, testing, and deployment of applications, thereby streamlining the development lifecycle. Jenkins supports integration with various version control systems like Git, SVN, and Mercurial, enabling automatic triggers for builds whenever code changes are pushed.

Its extensive plugin ecosystem provides flexibility to integrate with a wide range of tools and technologies, making it a versatile solution for managing complex CI/CD pipelines. Jenkins’ intuitive web interface, extensive plugin library, and robust scalability make it a popular choice for teams aiming to achieve efficient and automated software delivery processes. The Jenkins documentation has a page to help with the installation process.

Why Integrate JMeter with Jenkins?

Traditionally, performance testing has been a manual and time-consuming process, often conducted by test teams as a separate part of the development lifecycle. The results then had to be shared with the rest of the team, as there was no automated execution or capturing of the test results as part of the CI/CD pipeline. However, in today’s fast-paced software development environment, there is a growing need to automate the complete testing process, including the execution of performance tests as part of the CI/CD pipeline. By integrating JMeter with Jenkins, organizations can achieve the following benefits:

Automation: Jenkins allows you to automate the execution of JMeter tests as part of your CI/CD pipeline, enabling frequent and consistent performance testing with minimal manual intervention.

Continuous Feedback: Incorporating performance tests into Jenkins pipelines provides immediate feedback on the impact of code changes on application performance, allowing developers to find and address performance issues early in the development cycle.

Reporting: Jenkins provides robust reporting and visualization capabilities, allowing teams to analyze test results and track performance trends over time, helping data-driven decision-making.

Our Proposed Approach & its advantages

In addition to using the existing JMeter plugin for Jenkins, we have adopted an approach in which we enhance the Jenkins pipeline to include detailed notifications and better result organization.

The key steps of our approach are as follows.

  1. We install JMeter directly on the agent base OS. This ensures we have access to the latest features and updates.
  2. We use the powerful BlazeMeter extension to generate our JMeter scripts.
  3. We have written a dedicated Jenkins pipeline to automate the execution of these JMeter scripts.
  4. We have also defined steps in the Jenkins script to distribute the execution status and log by email to chosen users.
  5. We also store the results in a configurable path for future reference.

All of this ensures better automation, greater flexibility and control over execution and notification, and efficient performance testing as part of the CI/CD pipeline.

Setting Up BlazeMeter & Capturing the Test Scripts

To automate the process of writing scripts, we use the BlazeMeter tool. Navigate to the Chrome Web Store, search for BlazeMeter, and click Add to Chrome. Then access the official BlazeMeter website and create an account for the next steps.

Open Chrome, and you will now find the BlazeMeter extension in the top right corner.

Click on the BlazeMeter Chrome extension and a toolbox will become visible. Open the application where you want to record the scripts for JMeter and click on the record button to start.

Navigate through the application and perform the necessary operations as end users of the application would. Click on the stop button to stop the recording.

BlazeMeter has now recorded the script. To save it in .jmx format, click on save, check the JMeter only box, and click on save again, as shown in the screenshot below.

For more information on how to record a JMeter script using BlazeMeter, follow the link.

Modifying the Test Scripts in JMeter

The recorded script can then be opened in JMeter, and necessary changes can be made as per the different Load and Performance Scenarios to be assessed for the application.

Select the generated .jmx file and click on open

In addition, you can add listeners to the thread groups for better visibility of individual sample results.

Setting up a Jenkins pipeline to execute JMeter tests

Install Jenkins: If you haven’t already, install Jenkins on your server following the official documentation.

Create a New Pipeline Job: A new pipeline job should be created to orchestrate the performance testing process. Click on New Item, enter a name for the job, and select the Pipeline option.

After creating the new pipeline, navigate to Configure and specify the scheduled time in the format below.

('H 0 */3 * *')
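In a declarative pipeline, that schedule is typically declared in a triggers block. A minimal sketch using the expression above (note that H 0 */3 * * fires around midnight every third day of the month; adjust the expression if you need a different cadence):

pipeline {
    agent any
    triggers {
        // Hashed minute, hour 0, every third day of the month
        cron('H 0 */3 * *')
    }
    stages {
        stage('Placeholder') {
            steps {
                echo 'Scheduled run'
            }
        }
    }
}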

Define Pipeline Script: Configure the pipeline script to execute the JMeter test at regular intervals using a cron expression.

pipeline {
    agent {
        label '{agent name}'
    }

This part of the Jenkins pipeline script specifies the agent (or node) where the pipeline should run. The label ‘{agent name}’ should be replaced with the label of the specific agent you want to use. This ensures that the pipeline will execute on a machine that matches the provided label.

    stages {
        stage('Running JMeter Scripts') {
            steps {
                script {
                    sh '''
                    output_directory="{path}/$(date +'%Y-%m-%d')"
                    mkdir -p "$output_directory"
                    cd {Jmeter Path}/bin
                    sh jmeter -n -t {Jmeter Script Path} -l "$output_directory/{Result file name}" -e -o "$output_directory"
                    cp "$output_directory"/{Result file name} $WORKSPACE
                    cp -r "$output_directory" $WORKSPACE
                    '''
                }
            }
        }
    }

This stage named ‘Running JMeter Scripts’ has steps to execute a shell script. The script does the following:
1. Creates an output directory with the current date.
2. Navigates to the JMeter binary directory.
3. Runs the JMeter test script specified by {Jmeter Script Path}, storing the results in the created directory
4. Copies the result file and output directory to the Jenkins workspace for archiving.

    post {
        always {
            script {
                currentBuild.result = currentBuild.result
                def date = sh(script: "date +'%Y-%m-%d'", returnStdout: true).trim()
                def subject = "${currentBuild.result}: Job '${env.JOB_NAME}'"
                def buildLog = currentBuild.rawBuild.getLog(1000)
                emailext(
                    subject: subject,
                    body: """ Hi Team, the JMeter build was successful, please contact the team for the results """,
                    mimeType: 'text/html',
                    to: '{Receiver Email}',
                    from: '{Sender Email}',
                    attachLog: true
                )
            }
        }
    }
}

This post block runs after the pipeline completes. It retrieves the last 1000 lines of the build log and sends an email notification with the build result, a message, and the build log attached to specified recipients.

View the generated reports.

On the Linux instance, navigate to the path where the .html files (the output reports of the JMeter scripts) are stored.

Before you open the HTML file, move the complete folder to your local device. Once the folder is moved, open the .html file and you will be able to analyze the reports.
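For example, the dated report folder can be copied from the agent to your machine with scp before opening it (the user, host, and paths below are placeholders):

scp -r jenkins@agent-host:/path/to/output/2024-05-01 ./jmeter-report
# then open ./jmeter-report/index.html in a browser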

Conclusion

By following the steps described and the approach suggested, we have shown how integrating JMeter with Jenkins enables teams to automate performance testing and incorporate it seamlessly into the CI/CD pipeline. By scheduling periodic tests, storing results, and sending out email notifications, organizations can ensure the reliability and scalability of their applications with minimal manual intervention. Embrace the power of automation and elevate your performance testing efforts with Jenkins and JMeter integration. For any assistance in automating your performance tests, please get in touch with us at [email protected] or leave us a note via the contact form and we will get in touch with you.

Frequently Asked Questions (FAQs)

You can download the latest version from the Apache JMeter homepage. After downloading, extract the files to a directory on the agent machine and make sure to add the JMeter bin directory to the system's PATH variable so that JMeter commands can be run from the command line.
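On a Linux agent, that typically amounts to something like the following (the install path and version are placeholders):

# Add the JMeter bin directory to PATH, e.g. in ~/.bashrc or the agent's environment
export PATH="$PATH:/opt/apache-jmeter-5.6.3/bin"
jmeter -v   # verify the installation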

It's a Chrome extension that helps users record and create JMeter scripts easily. To use it, install the BlazeMeter extension from the Chrome Web Store, record the desired scenarios on your web application, and export the recorded script in .jmx format. This script can then be modified in JMeter and used in your Jenkins pipeline for automated performance testing.

Create a new pipeline job in Jenkins and define the pipeline script to include stages for running JMeter tests. The script should include steps to execute JMeter commands, store the results, and send notifications. Here's an example script:

pipeline {
    agent { label 'your-agent-label' }
    stages {
        stage('Run JMeter Tests') {
            steps {
                script {
                    sh '''
                    output_directory="/path/to/output/$(date +'%Y-%m-%d')"
                    mkdir -p "$output_directory"
                    cd /path/to/jmeter/bin
                    sh jmeter -n -t /path/to/test/script.jmx -l "$output_directory/results.jtl" -e -o "$output_directory"
                    cp "$output_directory/results.jtl" $WORKSPACE
                    cp -r "$output_directory" $WORKSPACE
                    '''
                }
            }
        }
    }
    post {
        always {
            emailext (
                subject: "JMeter Test Results",
                body: "The JMeter tests have completed. Please find the results attached.",
                recipientProviders: [[$class: 'DevelopersRecipientProvider']],
                attachLog: true
            )
        }
    }
}

You can schedule the JMeter test execution using cron syntax in the Jenkins pipeline configuration. For example, to run the tests every three hours, you can use:

H */3 * * *

This will trigger the pipeline once every three hours, with Jenkins choosing a consistent minute within the hour (the H token).

After the JMeter tests are executed, the results are typically stored in a specified directory on the Jenkins agent. You can navigate to this directory, download the results to your local machine, and open the HTML reports generated by JMeter for detailed analysis.