
Comparing and Migrating from Pentaho Data Integration (PDI/Kettle) to Apache HOP

Introduction

As data engineering evolves, so do the tools we use to manage and streamline our data workflows. The commercial open-source Pentaho Data Integration (PDI) platform, commonly known as Kettle or Spoon, has been a popular choice among data professionals for over a decade. After acquiring Pentaho, Hitachi Vantara continued to support the Community Edition alongside the commercial offering, not just for the PDI / Data Integration platform but for the complete business intelligence suite, which included a comprehensive set of tools with great flexibility and extensibility. As a result, Pentaho used to feature highly in analyst reports including the Gartner BI Magic Quadrant, Forrester and Dresner’s Wisdom of Crowds.

Over the last few years, however, there has been a shift in the industry and several niche Pentaho alternatives have appeared. An alternative is also needed for Pentaho Community Edition users, since Hitachi Vantara / Pentaho stopped releasing and supporting the Community Edition (CE) of the Pentaho Business Intelligence and Data Integration platforms in November 2022. With the emergence of Apache Hop (Hop Orchestration Platform), a top-level Apache open-source project, organizations now have a modern, flexible alternative that builds on the foundations laid by PDI, making it one of the top Pentaho Data Integration alternatives.

This is the first part of a series of articles in which we highlight why Apache Hop can be considered a replacement for the Pentaho Data Integration platform, explore its benefits, and list a few of its current limitations. In the next part, we provide a step-by-step guide to make the transition as smooth as possible.

Current Pentaho Enterprise and Community edition Releases:

A summary of the release dates for recent versions of the Pentaho platform, along with their support commitment, is captured in the table below. Note that the last CE version was released in November 2022, while three newer EE versions have been released since.

Enterprise Version | EE Release Date | Community Version | CE Release Date | Support
Pentaho 10.2 | Expected in Q3 2024 | NA | NA | Long Term
Pentaho 10.1 GA | March 5, 2024 | NA | NA | Normal
Pentaho 10.0 | December 01, 2023 | NA | NA | Limited
Pentaho 9.5 | May 31, 2023 | NA | NA | Limited
Pentaho 9.4 | November 01, 2022 | 9.4 CE | Same as EE | Limited
Pentaho 9.3 | May 04, 2022 | 9.3 CE | Same as EE | Long Term
Pentaho 9.2 | August 03, 2021 | 9.2 CE | Same as EE | Unsupported
Pentaho 9.1 | October 06, 2020 | NA | - | Unsupported
Pentaho 9.0 | February 04, 2020 | NA | - | Unsupported
Pentaho 8.3 | July 01, 2019 | 8.3 CE | Same as EE | Unsupported

Additionally, Pentaho EE 8.2, 8.1, 8.0 and Pentaho 7.x are all unsupported versions as of this writing.

Apache HOP - An Overview

Apache HOP is an open-source data integration and orchestration platform.

It allows users to design, manage, and execute data workflows (pipelines) and integration tasks (workflows) with ease. HOP’s visual interface, combined with its powerful backend, simplifies complex data processes, making it accessible for both technical and non-technical users.

Evolution from Kettle to HOP

As the visionary behind both Pentaho Data Integration (Kettle) and Apache HOP (Hop Orchestration Platform), Matt Casters has played a pivotal role in shaping the tools that power modern data workflows.

The Early Days: Creating Kettle

Matt Casters began his journey into the world of data integration in the early 2000s. Frustrated by the lack of flexible and user-friendly ETL (Extract, Transform, Load) tools available at the time, he set out to create a solution that would simplify the complex processes of data integration. This led to the birth of Kettle, an acronym for “Kettle ETTL Environment” (where ETTL stands for Extraction, Transformation, Transportation, and Loading).

Key Features of Kettle:

  1. Visual Interface: Kettle introduced a visual drag-and-drop interface, making it accessible to users without extensive programming knowledge.
  2. Extensibility: It was designed to be highly extensible, allowing users to create custom plugins and transformations.
  3. Open Source: Recognizing the power of community collaboration, Matt released Kettle as an open-source project, inviting developers worldwide to contribute and improve the tool.

Kettle quickly gained popularity for its ease of use, flexibility, and robust capabilities. It became a cornerstone for data integration tasks, helping organizations manage and transform their data with unprecedented ease.

The Pentaho Era

In 2006, Matt Casters joined Pentaho, a company dedicated to providing open-source business intelligence (BI) solutions. Kettle was rebranded as Pentaho Data Integration (PDI) and integrated into the broader Pentaho suite. This move brought several advantages:

  1. Resource Support: Being part of Pentaho provided Kettle with added resources, including development support, marketing, and a broader user base.
  2. Enhanced Features: Under Pentaho, PDI saw many enhancements, including improved scalability, performance, and integration with other BI tools.
  3. Community Growth: The backing of Pentaho helped grow the community of users and contributors, driving further innovation and adoption.

Despite these advancements, Matt Casters never lost sight of his commitment to open-source principles and community-driven development, ensuring that PDI stayed a flexible and powerful tool for users worldwide.

The Birth of Apache HOP

While PDI continued to evolve, Matt Casters recognized the need for a modern, flexible, and cloud-ready data orchestration platform. The landscape of data integration had changed significantly, with new challenges and opportunities emerging in the era of big data and cloud computing. This realization led to the creation of Apache HOP (Hop Orchestration Platform).

In 2020, Apache HOP was accepted as an incubator project by the Apache Software Foundation, marking a new chapter in its development and community support. This move underscored the project’s commitment to open-source principles and ensured that HOP would benefit from the robust governance and community-driven innovation that the Apache Foundation is known for.

Advantage of Apache HOP compared to Pentaho Data Integration

Apache HOP (Hop Orchestration Platform) and Pentaho Data Integration (PDI)/Kettle are both powerful data integration and orchestration tools. However, Apache HOP has several advantages over PDI because it evolved from PDI and has been adapted to modern data needs. Below, we explore the key advantages of Apache HOP over Pentaho Data Integration (Kettle):

Modern Architecture and Design

Feature | Apache HOP | PDI (Kettle)
Modular and Extensible Framework | Built on a modern, modular and extensible architecture, allowing for easier customization and addition of new features. Users can add or remove plugins without affecting the core functionality. | While PDI is also extensible, its older architecture can make customization and plugin integration more cumbersome compared to HOP’s more streamlined approach.
Lightweight and Performance Optimized | Designed to be lightweight and efficient, improving performance, particularly for large-scale and complex workflows. | The older codebase may not be as optimized for performance in modern, resource-intensive data environments.

Hop’s metadata-driven design and extensive plugin library offer greater flexibility for building complex data workflows. Users can also develop custom plugins to extend Hop’s capabilities to meet specific needs.

Enhanced User Interface and Usability

Feature | Apache HOP | PDI (Kettle)
Modern UI | Features a modern and intuitive user interface, making it easier for users to design, manage, and monitor data workflows. | Although functional, the user interface is dated and may not offer the same level of user experience and ease of use as HOP.
Improved Workflow Visualization | Provides better visualization tools for workflows and pipelines, helping users understand and debug complex data processes more effectively. | Visualization capabilities are good but can be less intuitive and harder to navigate compared to HOP.

The drag-and-drop functionality, combined with a cleaner and more organized layout, helps users create and manage workflows and pipelines more efficiently.

Apache HOP Web

Apache Hop also supports a web interface for the development and maintenance of Hop files, unlike Pentaho Data Integration, where this feature is still in beta and only available in the Enterprise Edition. The web interface can be accessed through http://localhost:8080/hop/ui

Accessing HOP Status Page: http://localhost:8080/hop/status/

https://hop.apache.org/dev-manual/latest/hopweb/index.html
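For a quick local trial, Hop Web can also be started from the project's published Docker image. The image name, tag and port mapping below are assumptions to verify against the Hop documentation for your version; this is a minimal sketch, not the only way to deploy Hop Web.

# Pull and run Hop Web locally (image name/tag and port are assumptions; check the Hop docs)
docker pull apache/hop-web:latest
docker run -d --name hop-web -p 8080:8080 apache/hop-web:latest

# Hop Web UI in the browser:   http://localhost:8080/hop/ui
# Hop status page:             http://localhost:8080/hop/status/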

Advanced Development and Collaboration Features

Feature | Apache HOP | PDI (Kettle)
Project-Based Approach | Uses a project-based approach, allowing users to organize workflows, configurations, and resources into cohesive projects. This facilitates better version control, collaboration, and project management. | Lacks a project-based organization, which can make managing complex data integration tasks more challenging.
Integration with Modern DevOps Practices | Designed to integrate seamlessly with modern DevOps tools and practices, including CI/CD pipelines and containerization. | Integration with DevOps tools is possible but not as seamless or integrated as with HOP, especially with the Community Edition.

Apache HOP for CI/CD Integration with GitHub / Gitlab

Apache HOP (Hop Orchestration Platform) is a powerful and flexible data integration and orchestration tool. One of its standout features is its compatibility with modern development practices, including Continuous Integration and Continuous Deployment (CI/CD) pipelines. By integrating Apache HOP with GitHub, development teams can streamline their workflows, automate testing and deployment, and ensure consistent quality and performance. In this blog, we’ll explore the advanced features of Apache HOP that support CI/CD integration and provide a guide on setting it up with GitHub.

Why Integrate Apache HOP with CI/CD?

  1. Automation: Automate repetitive tasks such as testing, building, and deploying HOP projects.
  2. Consistency: Ensure that all environments (development, testing, production) are consistent by using automated pipelines.
  3. Faster Delivery: Speed up the delivery of updates and new features by automating the deployment process.
  4. Quality Assurance: Integrate testing into the pipeline to catch errors and bugs early in the development cycle.
  5. Collaboration: Improve team collaboration by using version control and automated workflows.

Advanced Features of Apache HOP for CI/CD

  1. Project-Based Approach
  • Apache HOP’s project-based architecture allows for easy organization and management of workflows, making it ideal for CI/CD pipelines.
  2. Command-Line Interface (CLI)
  • HOP provides a robust CLI that enables automation of workflows and pipelines, easing integration into CI/CD pipelines (see the sketch after this list).
  3. Integration with Version Control Systems
  • Apache HOP supports integration with Git, allowing users to version control their workflows and configurations directly in GitHub.
  4. Parameterization and Environment Configurations
  • HOP allows parameterization of workflows and environment-specific configurations, enabling seamless transitions between development, testing, and production environments.
  5. Test Framework Integration
  • Apache HOP supports integration with various testing frameworks, allowing for automated testing of data workflows as part of the CI/CD pipeline.
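As a minimal sketch of how the CLI can drive a pipeline from a CI job, the command below uses hop-run from the Hop installation directory. The project name, file path and run configuration are placeholder assumptions, and the flag names should be verified against the hop-run documentation for your Hop version.

# Run a single pipeline non-interactively, e.g. from a CI/CD job
cd /opt/hop                      # assumed Hop installation directory
./hop-run.sh --project=my-project \
             --file=pipelines/load_customers.hpl \
             --runconfig=local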

Cloud-Native Capabilities

As the world moves towards cloud-first strategies, understanding how Apache HOP integrates with cloud environments is crucial for maximizing its potential. Exploring Apache HOP’s cloud support, with its benefits, features, and practical applications, opens a world of possibilities for organizations looking to optimize their data workflows in the cloud. As cloud adoption continues to grow, using Apache HOP can help organizations stay ahead in the data-driven world.

Feature | Apache HOP | PDI (Kettle)
Cloud Integration | Built with cloud integration in mind, providing robust support for deploying on various cloud platforms and integrating with cloud storage, databases, and services. | While PDI can be used in cloud environments, it lacks the inherent cloud-native design and seamless integration capabilities of HOP, especially for the Community Edition.

Integration with Cloud Storage

Data workflows often involve large data sets stored in cloud storage solutions. Apache HOP provides out-of-the-box connectors for major cloud storage services:

  • Amazon S3: Seamlessly read from and write to Amazon S3 buckets.
  • Google Cloud Storage: Integrate with GCS for scalable and secure data storage.
  • Azure Blob Storage: Use Azure Blob Storage for efficient data handling.
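As a rough sketch, cloud storage locations are typically addressed in Hop through VFS-style URLs, and credentials are usually supplied via the provider's standard mechanisms. The exact variable names and URL form below are assumptions to check against the Hop VFS documentation for your version.

# Export standard AWS credentials before launching Hop (assumption: Hop's S3 VFS
# support picks these up via the default AWS credential chain)
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1

# A pipeline can then reference a file location such as:
#   s3://my-bucket/input/customers.csv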

Cloud-native Databases and Data Warehouses: 

Modern data architectures often leverage cloud-native databases and data warehouses. Apache HOP supports integration with:

  • Amazon RDS and Redshift: Connect to relational databases and data warehouses on AWS.
  • Google BigQuery: Integrate with BigQuery for fast, SQL-based analytics.
  • Azure SQL Database and Synapse Analytics: Use Microsoft’s cloud databases for scalable data solutions.

Cloud-native Data Processing

Apache HOP’s integration capabilities extend to cloud-native data processing services, allowing for powerful and scalable data transformations:

  • AWS Glue: Use AWS Glue for serverless ETL jobs.
  • Google Dataflow: Integrate with Dataflow for stream and batch data processing.
  • Azure Data Factory: Leverage ADF for hybrid data integration.

Security and Compliance

Security is paramount in cloud environments. Apache HOP supports various security protocols and practices to ensure data integrity and compliance:

  • Encryption: Support for encrypted data transfers and storage.
  • Authentication and Authorization: Integrate with cloud identity services for secure access control.

  • Compliance: Ensure workflows comply with industry standards and regulations.

Features Summary and Comparison

Feature | Kettle | Hop
Projects and Lifecycle Configuration | No | Yes
Search Information in projects and configurations | No | Yes
Configuration management through UI and command line | No | Yes
Standardized shared metadata | No | Yes
Pluggable runtime engines | No | Yes
Advanced GUI features: memory, native zoom | No | Yes
Metadata Injection | Yes | Yes (most transforms)
Mapping (sub-transformation/pipeline) | Yes | Yes (simplified)
Web Interface | WebSpoon | Hop Web
APL 2.0 license compliance | LGPL doubts regarding pentaho-metastore library | Yes
Pluggable metadata objects | No | Yes
GUI plugin architecture | XUL based (XML) | Java annotations

External Link:

https://hop.apache.org/tech-manual/latest/hop-vs-kettle/hop-vs-kettle.html

 

Community and Ecosystem

Open-Source Advantages

  • Apache HOP: Fully open-source under the Apache License, offering transparency, flexibility, and community-driven enhancements.
  • PDI (Kettle): While also open-source, with a large user base and extensive documentation, PDI’s development has slowed and it has not received as many updates or new features as HOP. In recent years PDI’s development has been more tightly controlled by Hitachi Vantara, potentially limiting community contributions and innovation compared to HOP.

Active Development and Community Support

Apache Hop is actively developed and maintained under the Apache Software Foundation, ensuring regular updates, bug fixes, and new features. The community support for Apache HOP is a cornerstone of its success. The Apache Software Foundation (ASF) has always championed the concept of community over code, and Apache HOP is a shining example of this ethos in action.

Why Community Support Matters

  1. Accelerated Development and Innovation: The community continuously contributes to the development and enhancement of Apache HOP. From submitting bug reports to developing new features, the community’s input is invaluable. This collaborative effort accelerates the innovation cycle, ensuring that Apache HOP stays innovative and highly functional.
  2. Resource Sharing: The Apache HOP community is a treasure trove of resources. From comprehensive documentation and how-to guides to video tutorials and webinars, community members create and share a wealth of knowledge. This collective pool of information helps both beginners and experienced users navigate the platform with ease.
  3. Peer Support and Troubleshooting: One of the standout benefits of community support is the peer-to-peer assistance available through forums, mailing lists, and chat channels. Users can seek help, share solutions, and discuss best practices. This collaborative troubleshooting often leads to quicker resolutions and deeper understanding of the platform.
  4. Networking and Collaboration: Being part of the Apache HOP community opens doors to networking opportunities. Engaging with other professionals in the field can lead to collaborative projects, job opportunities, and professional growth. It’s a platform for like-minded individuals to connect and create meaningful professional relationships.

All of this can be seen in the frequent, consistent releases, with the key features of each release captured in the table below.

Version | Release Date | Description
Apache Hop 3.0 | Q4 2024 | Future release items.
Apache Hop 2.10 | August 31, 2024 | The Apache Hop 2.10 release introduced several new features and improvements. Key updates include enhanced plugin management, bug fixes and performance enhancements, and new tools and utilities.
Apache Hop 2.9 | May 24, 2024 | This version includes various new features such as a static schema metadata type, a CrateDB database dialect and bulk loader, and several improvements in transforms. Check here for more details.
Apache Hop 2.8 | March 13, 2024 | This update brought new AWS transforms (SNS Notify and SQS Reader), many bug fixes, and performance improvements. Check here for more details.
Apache Hop 2.7 | December 1, 2023 | This release featured the Redshift bulk loader, JDBC driver refactoring, and other enhancements. Check here for more details.
Apache Hop 2.6 | September 19, 2023 | This version included new Google transforms (Google Analytics 4 and Google Sheets Input/Output), an Apache Beam upgrade, and various bug fixes. Check here for more details.
Apache Hop 2.5 | July 18, 2023 | This version focused on various bug fixes and new features, including an upgrade to Apache Beam 2.48.0 with support for Apache Spark 3.4, Apache Flink 1.16, and Google Cloud Dataflow. Additional updates included a new InterSystems IRIS database type, JSON input and output improvements, Salesforce input enhancements, an upgrade to DuckDB 0.8, and the addition of Polish language support. Check here for more details.
Apache Hop 2.4 | March 31, 2023 | This update introduced new features like DuckDB support, a new script transform, and various improvements in existing transforms and documentation.
Apache Hop 2.3 | February 1, 2023 | This release focused mainly on bug fixes and included a few new features. One significant update was the integration of Weblate, a new translation tool that simplifies the contribution of translations. Another key addition was the integration of the Vertica Bulk Loader into the main code base, enhancing data loading speeds to the Vertica analytical database. Check here for more details.
Apache Hop 2.2 | December 6, 2022 | This release involved significant improvements and fixes, addressing over 160 tickets. Key updates included enhancements to the Hop GUI, such as a new welcome dialog, navigation viewport, data grid toolbars, and a configuration perspective, along with upgrades to various components, including Apache Beam and Google Dataflow. Check here for more details.
Apache Hop 2.1 | October 14, 2022 | This release included various new features such as MongoDB integration, Apache Beam execution, new plugins for data profiling, and documentation improvements. Check here for more details.
Apache Hop 2.0 | June 17, 2022 | Introduced various bug fixes and improvements, including enhancements to the metadata injection functionality and documentation updates. The update also included various new transform plugins such as Apache Avro File Output, Apache Doris Bulk Loader, Drools Rules Accumulator, and Drools Rules Executor, as well as a new Formula transform. Additionally, the user interface for the Dimension Lookup/Update transform was cleaned up and improved. Check here for more details.
Apache Hop 1.2 | March 7, 2022 | This release included several improvements to Hop GUI, Docker support, Neo4j integration, and Kafka and Avro transforms. It also introduced the Hop Translator tool for easier localization efforts, starting with Chinese translations. Check here for more details.
Apache Hop 1.1 | January 24, 2022 | Key updates in Apache Hop 1.1 included improvements in metadata injection, enhancements to the graphical user interface, support for more data formats, and various performance optimizations. Check here for more details.
Apache Hop 1.0 | January 17, 2022 | This version marked Hop’s transition from incubation, featuring a clean architecture, support for over 20 plugin types, and a revamped Hop GUI for designing workflows and pipelines. Check here for more details.

Additional Links:

https://hop.apache.org/categories/Release/ , https://hop.apache.org/docs/roadmap/ 

A Few Limitations of HOP

While Apache HOP has several advantages compared to Pentaho ETL, as a comparatively newer platform it has a few limitations that we have encountered when using it. Some of these are already recorded as issues in the HOP GitHub repository and are scheduled to be fixed in upcoming releases.

Type | Details
HOP GUI | The HOP GUI application does not allow changing the "Project Home" path to a valid path after an invalid project path has been set.
HOP GUI | Repeatedly prompting to enter GitHub credentials.
HOP GUI | While saving a new pipeline, HOP GUI appends the previously opened pipeline names.
HOP Server | Multiple Hop Server object IDs for a single HOP pipeline on the HOP Server.
HOP Server | Hop Server object (pipeline/workflow) status is null and the metrics information is not shown.
HOP Web | Unable to copy a transform from one pipeline to another pipeline.
HOP GUI | Log table options in the Workflow Properties tab.
HOP GUI | Folder icon shown for HPL files.
HOP GUI | Dimension Lookup & Update transform SQL button NullPointerException.

Only a few of these issues are likely to be an impediment to using Apache HOP, depending on the specific use cases. We will talk more about this in the next blog article of this series.

Conclusion

Apache HOP brings a host of advantages over Pentaho Data Integration Kettle, driven by its modern architecture, enhanced usability, advanced development features, cloud-native capabilities, and active community support. These advantages make Apache HOP a compelling choice for organizations looking to streamline their data integration and orchestration processes, especially in today’s cloud-centric and agile development environments. By using Apache HOP, businesses can achieve more efficient, scalable, and manageable data workflows, positioning themselves for success in the data-driven future.

Most importantly, Hitachi Vantara / Pentaho has not released Community Edition versions of PDI or security patches for nearly two years and has also removed the links to download older versions of the software from SourceForge. This makes it risky for users to continue running Pentaho Community Edition in production because of unresolved vulnerabilities.

Need help to migrate your Pentaho Artifacts to Apache HOP? Our experts can help.


Simpler alternative to Kubernetes – Docker Swarm with Swarmpit

Introduction:

As more and more applications move to private or public cloud environments as part of modernization, adopting microservices architectures, they are commonly containerized using Docker and orchestrated with Kubernetes as popular platform choices. Kubernetes offers many advanced features and great flexibility, making it suitable for complex and large-scale applications; however, it has a steeper learning curve and requires more time, effort and resources to set up, monitor and manage for simpler applications where these advanced features may not be needed.

Docker Swarm is another open-source orchestration platform with a much simpler architecture. It can be used for most of the activities you would perform with Kubernetes, including the deployment and management of containerized applications across a cluster of Docker hosts, with built-in clustering capabilities and load balancing that let you manage multiple Docker hosts as a single virtual entity.

Swarmpit is a little-known open-source web-based tool that offers a simple and intuitive interface for monitoring and managing a Docker Swarm cluster.

In this article, we walk through the process of setting up a Docker Swarm cluster with one master node and two worker nodes and configure Swarmpit to easily manage and monitor this Swarm Cluster.

Problem Statement

Managing containerized applications at scale can be challenging due to the complexity of handling multiple nodes, ensuring high availability, and supporting scalability. Single-node deployments limit redundancy and scalability, while manually managing multiple nodes is time-consuming and error-prone.

Docker Swarm addresses these challenges with its clustering and orchestration capabilities, but monitoring and managing the cluster can still be complex. This is addressed using Swarmpit.

Use Cases

  1. High Availability: Ensure your application remains available even if individual nodes fail.
  2. Scalability: Easily scale services up or down to meet demand.
  3. Simplified Management: Use a single interface to manage multiple Docker hosts.
  4. Efficient Resource Utilization: Distribute container workloads across multiple nodes.

Why Choose Docker Swarm Over Kubernetes?

Docker Swarm is an excellent choice for smaller or less complex deployments because:

  1. Simplicity and Ease of Use: Docker Swarm is easier to set up and manage compared to Kubernetes. It integrates seamlessly with Docker’s command line tools, making it simpler for those already familiar with Docker.
  2. Faster Deployment: Swarm allows you to get your cluster up and running quickly without the intricate setup required by Kubernetes.
  3. High Availability and Scaling: Docker Swarm effectively manages high availability by redistributing tasks if nodes fail and supports multiple manager nodes. Scaling is straightforward with the docker service scale command and resource constraints. Kubernetes offers similar capabilities but with more advanced configurations, like horizontal pod autoscaling based on custom metrics for finer control.

Limitations of Docker Swarm

  1. Advanced Features and Extensibility: Docker Swarm lacks some of the advanced features and customization options found in Kubernetes, such as detailed resource management and extensive extensions.
  2. Ecosystem: Kubernetes has a larger community and more integrations, offering a broader range of tools and support.

While Kubernetes might be better for complex needs and extensive customization, Docker Swarm offers effective high availability and scaling in a more straightforward and manageable way for simpler use cases.


Prerequisites

Before you start, ensure you have the following:

  1. Three Ubuntu machines with Docker installed.
  2. Access to the machines via SSH.
  3. A basic understanding of Docker and Docker Compose.

Setting Up the Docker Swarm Cluster

To start, you need to prepare your machines. Begin by updating the system packages on all three Ubuntu machines. This ensures that you are working with the latest software and security updates. Use the following commands:

sudo apt update

sudo apt upgrade -y

Next, install Docker on each machine using:

sudo apt install -y docker.io

Enable and start Docker to ensure it runs on system boot:

sudo systemctl enable docker

sudo systemctl start docker 

It is essential that all nodes are running the same version of Docker to avoid compatibility issues. Check the Docker version on each node with:

docker --version

With the machines prepared, go to the instance where you want the manager node to be present and run the command below.

docker swarm init

This will initialize the Swarm and print a docker swarm join command containing a token that you can use to add worker nodes, similar to the sketch below.
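The output looks roughly like the following; the token and manager address are placeholders for whatever your own init command prints.

# Example join command printed by 'docker swarm init' (token and address are placeholders)
docker swarm join --token SWMTKN-1-<generated-token> <manager-ip>:2377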

Copy the join command printed by docker swarm init and run it on each instance or machine that should act as a worker node. Now both the manager and worker nodes are ready.

Once all nodes are joined, verify that the Swarm is properly configured. On the master node, list all nodes using: 

docker node ls

This command should display the master node and the two worker nodes, confirming that they are part of the Swarm. Additionally, you can inspect the Swarm’s status with:

docker info

Check the Swarm section to ensure it is active and reflects the correct number of nodes.

To ensure that your cluster is functioning as expected, deploy a sample service.

Create a Docker service named my_service with three replicas of the nginx image: 

docker service create --name my_service --replicas 3 nginx

Verify the deployment by listing the services and checking their status:

docker service ls

docker service ps my_service

Managing and scaling your Docker Swarm cluster is straightforward. To scale the number of replicas for your service, use:

docker service scale my_service=5 

If you need to update the service to a new image, you can do so with: 

docker service update --image nginx:latest my_service

Troubleshooting Common Issues

Node Not Joining the Swarm

  1. Check Docker Version: Ensure all nodes are running compatible Docker versions.
  2. Firewall Settings: Make sure port 2377 is open on the master node, along with the other Swarm ports (see the sketch after this list).
  3. Network Connectivity: Verify network connectivity between nodes.
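If the nodes run ufw, the commands below sketch how the standard Swarm ports can be opened; adjust to the firewall tooling you actually use.

# Open Docker Swarm ports (run on every node; ufw shown as an example)
sudo ufw allow 2377/tcp    # cluster management traffic to the manager
sudo ufw allow 7946/tcp    # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp    # overlay network (VXLAN) traffic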

Deploying Swarmpit for Monitoring and Management

Swarmpit is a user-friendly web interface for monitoring and managing Docker Swarm clusters. It simplifies cluster administration, providing an intuitive dashboard to monitor and control your Swarm services and nodes. Here’s how you can set up Swarmpit and use it to manage your Docker Swarm cluster.

Deploy Swarmpit Container

On the manager node, deploy Swarmpit using the following Docker command:

docker run -d -p 8888:8888 --name swarmpit -v /var/run/docker.sock:/var/run/docker.sock swarmpit/swarmpit:latest

Access the Swarmpit Interface

Open your web browser and navigate to http://<manager-node-ip>:8888. You will see the Swarmpit login page. Use the default credentials (admin/admin) to log in. It is recommended to change the default password after the first login.

Using Swarmpit for Monitoring and Management

Once you have logged in, you can perform a variety of tasks through Swarmpit’s web interface:


Dashboard Overview

○ Cluster Health: Monitor the overall health and status of your Docker Swarm cluster.
○ Node Information: View detailed information about each node, including its status, resources, and running services.


Service Management

  1. Deploy Services: Easily deploy new services by filling out a form with the necessary parameters (image, replicas, ports, etc.).
  2. Scale Services: Adjust the number of replicas for your services directly from the interface.
  3. Update Services: Change service configurations, such as updating the Docker image or environment variables.
  4. Service Logs: Access logs for each service to troubleshoot and watch their behavior.

Container Management

  1. View Containers: List all running containers across the cluster, including their status and resource usage.
  2. Start/Stop Containers: Manually start or stop containers as needed.
  3. Container Logs: Access logs for individual containers for troubleshooting purposes.

Network and Volume Management

  • Create Networks: Define and manage custom networks for your services.
  • Create Volumes: Create and manage Docker volumes for persistent storage.

User Management

  1. Add Users: Create additional user accounts with varying levels of access and permissions.
  2. Manage Roles: Assign roles to users to control what actions they can perform within the Swarmpit interface.

Benefits of Using Swarmpit

  1. User-Friendly Interface: Simplifies the complex task of managing a Docker Swarm cluster with a graphical user interface.
  2. Centralized Management: Provides a single point of control over all aspects of your Swarm cluster, from node management to service deployment.
  3. Real-Time Monitoring: Offers real-time insights into the health and performance of your cluster and its services.
  4. Enhanced Troubleshooting: Facilitates easy access to logs and service status for quick issue resolution.

Conclusion

By integrating Swarmpit into the Docker Swarm setup, we get a powerful tool that streamlines cluster management and monitoring. Its comprehensive features and intuitive interface make it easier to maintain a healthy and efficient Docker Swarm environment, enhancing the ability to deploy and manage containerized applications effectively.

Frequently Asked Questions (FAQs)

Docker Swarm is a clustering tool that manages multiple Docker hosts as a single entity, providing high availability, load balancing, and easy scaling of containerized applications.

Install Docker on all machines, initialize the Swarm on the master with docker swarm init, and join worker nodes using the provided token. Verify the setup with docker node ls.

Docker Swarm offers high availability, scalability, simplified management, load balancing, and automated failover for services across a cluster.

Swarmpit is a web-based interface for managing and monitoring Docker Swarm clusters, providing a visual dashboard for overseeing services, nodes, and logs.

Access Swarmpit at http://<manager-node-ip>:8888, log in, and use the dashboard to monitor the health of nodes, view service status, and manage configurations.


Keycloak deployment on Kubernetes with Helm charts using an external PostgreSQL database

Prerequisites:

  1. Kubernetes cluster set up and configured.
  2. Helm installed on your Kubernetes cluster.
  3. Basic understanding of Kubernetes concepts like Pods, Deployments, and Services.
  4. Familiarity with Helm charts and templating.

Introduction:

Deploying Keycloak on Kubernetes with an external PostgreSQL database can be challenging, especially when using Helm charts. One common issue is that Keycloak deploys with a default database service when the Helm chart is used, making it difficult to integrate with an external database.

In this article, we’ll explore the problem we encountered while deploying Keycloak on Kubernetes using Helm charts and describe the solution we implemented to seamlessly use an external PostgreSQL database.

Problem:

The primary issue we faced during the deployment of Keycloak on Kubernetes using Helm was the automatic deployment of a default database service. This default service conflicted with our requirement to use an external PostgreSQL database for Keycloak. The Helm chart, by default, would deploy an internal database, making it challenging to configure Keycloak to connect to an external database.

Problem Analysis

  1. Default Database Deployment: The Helm chart for Keycloak automatically deploys an internal PostgreSQL database. This default setup is convenient for simple deployments but problematic when an external database is required.
  2. Configuration Complexity: Customizing the Helm chart to disable the internal database and correctly configure Keycloak to use an external PostgreSQL database requires careful adjustments to the values.yaml file.
  3. Integration Challenges: Ensuring seamless integration with an external PostgreSQL database involves specifying the correct database connection parameters and making sure that these settings are correctly propagated to the Keycloak deployment.
  4. Persistence and Storage: The internal database deployed by default may not meet the persistence and storage requirements for production environments, where an external managed PostgreSQL service is preferred for reliability and scalability.

To address these issues, the following step-by-step guide provides detailed instructions on customizing the Keycloak Helm chart to disable the default database and configure it to use an external PostgreSQL database.

Overview Diagram:

Step-by-Step Guide

Step 1: Setting Up Helm Repository

If you haven’t already added the official Helm charts repository for Keycloak, you can add it using the following command:

helm repo add codecentric https://codecentric.github.io/helm-charts

helm repo update

By adding the official Helm charts repository for Keycloak, you ensure that you have access to the latest charts maintained by the developers. Updating the repository ensures you have the most recent versions of the charts.

Step 2: Customizing Helm Values

Objective: Customize the Keycloak Helm chart to avoid deploying the default database and configure it to use an external PostgreSQL database.

Configure Keycloak for development mode

Create a values.yaml File

  1. Create a new file named values.yaml.
  2. Add the following content to the file 

image:
  # The Keycloak image repository
  repository: quay.io/keycloak/keycloak
  # Overrides the Keycloak image tag whose default is the chart appVersion
  tag: 24.0.3
  # The Keycloak image pull policy
  pullPolicy: IfNotPresent

resources:
  requests:
    cpu: "500m"
    memory: "1024Mi"
  limits:
    cpu: "500m"
    memory: "1024Mi"

args:
  - start-dev
  - --hostname=<url>
  - --hostname-url=<url>
  - --verbose

autoscaling:
  # If `true`, an autoscaling/v2beta2 HorizontalPodAutoscaler resource is created (requires Kubernetes 1.18 or above)
  # Autoscaling seems to be most reliable when using KUBE_PING service discovery (see README for details)
  # This disables the `replicas` field in the StatefulSet
  enabled: false
  # The minimum and maximum number of replicas for the Keycloak StatefulSet
  minReplicas: 1
  maxReplicas: 2

ingress:
  enabled: true
  #hosts:
  #  - <url>
  ssl:
    letsencrypt: true
    cert_secret: <url>
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt
    cert-manager.io/acme-challenge-type: dns01
    cert-manager.io/acme-dns01-provider: route53
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "sticky-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
  labels: {}
  rules:
    -
      # Ingress host
      host: '<url>'
      # Paths for the host
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - <url>
      secretName: "<url>"

extraEnv: |
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: QUARKUS_THREAD_POOL_MAX_THREADS
    value: "500"
  - name: QUARKUS_THREAD_POOL_QUEUE_SIZE
    value: "500"

 

This configuration file customizes the Keycloak Helm chart to set specific resource requests and limits, ingress settings, and additional environment variables. By setting the args to start Keycloak in development mode, you allow for easier initial setup and testing.

Configuring for Production Mode

  1. Add or replace the following content in values.yaml for production mode:

args:
  - start
  - --hostname=<url>
  - --hostname-url=<url>
  - --verbose
  - --optimized
  - -Dquarkus.http.host=0.0.0.0
  - -Dquarkus.http.port=8080

Note: The production configuration includes optimizations and ensures that Keycloak runs in a stable environment suitable for production workloads. The --optimized flag is added for performance improvements.

Configuring for External Database

  1. Add the following content to values.yaml to use an external PostgreSQL database:

args:
  - start
  - --hostname-url=<url>
  - --verbose
  - --db=postgres
  - --db-url=<jdbc-url>
  - --db-password=${DB_PASSWORD}
  - --db-username=${DB_USER}

postgresql:
  enabled: false

This configuration disables the default PostgreSQL deployment by setting postgresql.enabled to false. The database connection arguments are provided to connect Keycloak to an external PostgreSQL database.
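The args above reference DB_PASSWORD and DB_USER as environment variables, which the values shown here do not define. One possible approach (an assumption, not part of the original configuration) is to keep the credentials in a Kubernetes secret and expose them to the Keycloak pod, for example through the chart's extraEnv mechanism. The namespace, secret name and credential values below are placeholders.

# Create a secret holding the external database credentials (names/values are placeholders)
kubectl create namespace keycloak
kubectl -n keycloak create secret generic keycloak-db-credentials \
  --from-literal=DB_USER=keycloak \
  --from-literal=DB_PASSWORD='change-me'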

Step 3: Deploying Keycloak with PostgreSQL and Custom Themes Using Helm

Objective: Add custom themes to Keycloak and deploy it using Helm.

  1. Add the following to values.yaml to include custom themes:

extraInitContainers: |
  - name: theme-provider
    image: <docker-hub-registry-url>
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "Copying custom theme..."
        cp -R /custom-themes/* /eha-clinic
    volumeMounts:
      - name: custom-theme
        mountPath: /eha-clinic

extraVolumeMounts: |
  - name: custom-theme
    mountPath: /opt/jboss/keycloak/themes/

extraVolumes: |
  - name: custom-theme
    emptyDir: {}

This configuration uses an init container to copy custom themes into the Keycloak container. The themes are mounted at the appropriate location within the Keycloak container, ensuring they are available when Keycloak starts.
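With values.yaml prepared, the release can be installed or upgraded with Helm. The chart name below (codecentric/keycloakx) and the namespace are assumptions; use whichever chart from the codecentric repository matches your setup.

# Install or upgrade the Keycloak release using the customized values
helm upgrade --install keycloak codecentric/keycloakx \
  --namespace keycloak --create-namespace \
  -f values.yaml

# Watch the rollout
kubectl -n keycloak get pods -w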

Step 4: Configuring Keycloak

Objective: Log in to the Keycloak admin console and configure realms, users, roles, client applications, and other settings.

Access the Keycloak admin console using the URL provided by your ingress configuration.

  1. Log in with the default credentials (admin/admin).
  2. Configure the following according to your application requirements:
  • Realms
  • Users
  • Roles
  • Client applications

The Keycloak admin console allows for comprehensive configuration of all aspects of authentication and authorization, tailored to the needs of your applications.

Step 5: Configuring Custom Themes

Objective: Apply and configure custom themes within the Keycloak admin console.

  1. Log in to the Keycloak admin console using the default credentials (admin/admin).
  2. Navigate to the realm settings and select the “Themes” tab.
  3. Select and configure your custom themes for:
  • Login pages
  • Account pages
  • Email pages

Custom themes enhance the user experience by providing a personalized and branded interface. This step ensures that the authentication experience aligns with your organization’s branding and user interface guidelines.

Conclusion

By following the steps outlined in this article, you can deploy Keycloak with PostgreSQL on Kubernetes using Helm, while also incorporating custom themes to personalize the authentication experience. Leveraging Helm charts simplifies the deployment process, while Keycloak and PostgreSQL offer robust features for authentication and data storage. Integrating custom themes allows you to tailor the authentication pages according to your branding and user interface requirements, ultimately enhancing the user experience and security of your applications.

Frequently Asked Questions (FAQs)

Keycloak is an open-source identity and access management solution for modern applications and services. It offers features such as single sign-on, identity brokering, and social login, which make user identity management and application security easier.

Deploying Keycloak on Kubernetes gives you an elastic application that can scale with the number of users and remain resilient to failures caused by external service unavailability or internal server crashes; it is also easy to manage, supports numerous authentication protocols, and connects with different types of external databases.

Helm charts are pre-configured Kubernetes resource packages that simplify the efficient management and deployment of applications on Kubernetes.

To disable the default PostgreSQL database, set postgresql.enabled to false in the values.yaml file.

Provide the necessary database connection parameters in the values.yaml file, including --db-url, --db-password, and --db-username.

You can add custom themes by configuring init containers in the values.yaml file to copy the themes into the Keycloak container and mounting them at the appropriate location.


Integrating Apache JMeter with Jenkins

In the world of software development, ensuring the performance and reliability of applications is paramount. One of the most popular tools for performance testing is Apache JMeter, known for its flexibility and scalability. Meanwhile, Jenkins has become the go-to choice for continuous integration and continuous delivery (CI/CD). Combining the power of JMeter with the automation capabilities of Jenkins can significantly enhance the efficiency of performance testing within the development pipeline. In this article, we’ll explore the integration of JMeter with Jenkins and how it can streamline the performance testing process.

Apache JMeter

Apache JMeter is a powerful open-source tool designed for load testing, performance testing, and functional testing of applications. It provides a user-friendly GUI that allows testers to create and execute several types of tests, including HTTP, FTP, JDBC, LDAP, and more. JMeter supports simulating heavy loads on servers, analyzing overall performance metrics, and finding performance bottlenecks.

With its scripting and parameterization capabilities, JMeter offers flexibility and scalability for testing web applications, APIs, databases, and other software systems. Its extensive reporting features help teams assess application performance under different conditions, making it an essential tool for ensuring the reliability and scalability of software applications. More information is available on the Apache JMeter site.
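For reference, a typical non-GUI JMeter run looks like the sketch below; the file names and paths are placeholders.

# Run a test plan in non-GUI mode, write raw results and generate an HTML dashboard report
jmeter -n -t test_plan.jmx -l results.jtl -e -o report/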

Jenkins

Jenkins is one of the most popular open-source automation servers widely used for continuous integration (CI) and continuous delivery (CD) processes in software development for several years now. It allows developers to automate the building, testing, and deployment of applications, thereby streamlining the development lifecycle. Jenkins supports integration with various version control systems like Git, SVN, and Mercurial, enabling automatic triggers for builds whenever code changes are pushed.

Its extensive plugin ecosystem provides flexibility to integrate with a wide range of tools and technologies, making it a versatile solution for managing complex CI/CD pipelines. Jenkins’ intuitive web interface, extensive plugin library, and robust scalability make it a popular choice for teams aiming to achieve efficient and automated software delivery processes. The Jenkins documentation has a page to help with the Jenkins installation process.
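For a quick evaluation environment (an assumption, not required for the setup described later), Jenkins can also be started from its official Docker image:

# Start a throw-away Jenkins controller locally (official image; data kept in a named volume)
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts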

Why Integrate JMeter with Jenkins?

Traditionally, performance testing has been a manual and time-consuming process, often conducted by test teams as a separate part of the development lifecycle. The results then had to be shared with the rest of the team, as there was no automated execution or capture of test results as part of the CI/CD pipeline. However, in today’s fast-paced software development environment, there is a growing need to automate the complete testing process, including the execution of performance tests, as part of the CI/CD pipeline. By integrating JMeter with Jenkins, organizations can achieve the following benefits:

Automation: Jenkins allows you to automate the execution of JMeter tests as part of your CI/CD pipeline, enabling frequent and consistent performance testing with minimal manual intervention.

Continuous Feedback: Incorporating performance tests into Jenkins pipelines provides immediate feedback on the impact of code changes on application performance, allowing developers to find and address performance issues early in the development cycle.

Reporting: Jenkins provides robust reporting and visualization capabilities, allowing teams to analyze test results and track performance trends over time, helping data-driven decision-making.

Our Proposed Approach & its advantages

We’ve adopted a new approach, in addition to using the existing JMeter plugin for Jenkins, in which we enhance the Jenkins pipeline to include detailed notifications and better result organization.

The key steps of our approach are as follows.

  1. We install JMeter directly on the agent base OS. This ensures we have access to the latest features and updates.
  2. We use the powerful BlazeMeter plugin to generate our JMeter scripts.
  3. We’ve written a dedicated Jenkins pipeline to automate the execution of these JMeter scripts.
  4. We have also defined steps in the Jenkins script to distribute the execution status and log by email to chosen users.
  5. We also store the results in a configurable path for future reference.

All of this ensures better automation, flexibility and control of the execution, notifications, and efficient performance testing as part of the CI/CD pipeline.

Setting Up BlazeMeter & Capturing the Test Scripts

To automate the process of script creation, we use the BlazeMeter tool. Navigate to Chrome extensions, search for BlazeMeter, and click Add to Chrome. Then visit the official BlazeMeter website and create an account for the next steps.

Open Chrome and you will now find the BlazeMeter extension in the top right corner.

Click on the BlazeMeter Chrome extension and a toolbox will become visible. Open the application where you want to record the scripts for JMeter and click on the record button to start.

Navigate through the application and perform necessary operations as end users of the application would. Click on the stop button to stop the recording.

BlazeMeter has now recorded the scripts. To save them in .jmx format, click on Save, check the JMeter only box, and click Save.

For more information on how to record a JMeter script using BlazeMeter, follow the link.

Modifying the Test Scripts in JMeter

The recorded script can then be opened in JMeter, and necessary changes can be made as per the different Load and Performance Scenarios to be assessed for the application.

Select the generated .jmx file and click on open

In addition to this, you can add listeners to the thread groups for better visibility of each sample’s results.
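One common adjustment when covering different load scenarios is to drive the load level from the command line. This is a sketch that assumes the thread group fields have been parameterized with __P functions such as ${__P(users,10)}; the property names and file paths are placeholders.

# Pass user count and ramp-up as JMeter properties at run time
jmeter -n -t recorded_script.jmx -Jusers=50 -Jrampup=120 \
  -l results.jtl -e -o report/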

Setting up a Jenkins pipeline to execute JMeter tests

Install Jenkins: If you haven’t already, install Jenkins on your server following the official documentation.

Create a New Pipeline Job: A new pipeline job should be created to orchestrate the performance testing process. Click on new item to create a pipeline job and select pipeline option.

After creating the new pipeline, navigate to Configure and specify the schedule in the format below.

'H 0 */3 * *'

Define Pipeline Script: Configure the pipeline script to execute the JMeter test at regular intervals using a cron expression.

pipeline {
    agent {
        label '{agent name}'
    }

This part of the Jenkins pipeline script specifies the agent (or node) where the pipeline should run. The label '{agent name}' should be replaced with the label of the specific agent you want to use. This ensures that the pipeline will execute on a machine that matches the provided label.

    stages {
        stage('Running JMeter Scripts') {
            steps {
                script {
                    sh '''
                    output_directory="{path}/$(date +'%Y-%m-%d')"
                    mkdir -p "$output_directory"
                    cd {Jmeter Path}/bin
                    sh jmeter -n -t {Jmeter Script Path} -l "$output_directory/{Result file name}" -e -o "$output_directory"
                    cp "$output_directory"/{Result file name} $WORKSPACE
                    cp -r $output_directory $WORKSPACE
                    '''
                }
            }
        }
    }

This stage named ‘Running JMeter Scripts’ has steps to execute a shell script. The script does the following:
1. Creates an output directory with the current date.
2. Navigates to the JMeter binary directory.
3. Runs the JMeter test script specified by {Jmeter Script Path}, storing the results in the created directory
4. Copies the result file and output directory to the Jenkins workspace for archiving.

    post {
        always {
            script {
                // Default to SUCCESS when the result has not been set explicitly yet
                currentBuild.result = currentBuild.result ?: 'SUCCESS'
                def date = sh(script: "date +'%Y-%m-%d'", returnStdout: true).trim()
                def subject = "${currentBuild.result}: Job '${env.JOB_NAME}'"
                // Last 1000 lines of the build log (the full log is attached via attachLog)
                def buildLog = currentBuild.rawBuild.getLog(1000)
                emailext(
                    subject: subject,
                    body: """Hi Team, the JMeter build has completed. Please contact the team for the results.""",
                    mimeType: 'text/html',
                    to: '{Receiver Email}',
                    from: '{Sender Email}',
                    attachLog: true
                )
            }
        }
    }

This post block runs after the pipeline completes. It retrieves the last 1000 lines of the build log and sends an email notification with the build result, a message, and the build log attached to specified recipients.

View the generated reports.

On the Linux instance, navigate to the path where the .html files, i.e. the output reports of the JMeter scripts, are stored.

Before you open the HTML file, move the complete folder to your local device. Once the folder has been moved, open the .html file and you will be able to analyze the reports.
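One way to copy the generated report folder from the agent to a local machine (host name and paths are placeholders) is:

# Copy the dated report folder from the Jenkins agent to the local machine
scp -r user@jenkins-agent:/path/to/output/2024-07-01 ./jmeter-report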

Conclusion

By following the steps described and the approach suggested, we have shown how Integrating JMeter with Jenkins enables teams to automate performance testing and incorporate it seamlessly into the CI/CD pipeline. By scheduling periodic tests, storing results, and sending out email notifications, organizations can ensure the reliability and scalability of their applications with minimal manual intervention. Embrace the power of automation and elevate your performance testing efforts with Jenkins and JMeter integration. For any assistance in automation of your Performance tests please get in touch with us at [email protected] or leave us a note at form and we will get in touch with you.

Frequently Asked Questions (FAQs)

You can download the latest version from the Apache JMeter homepage. After downloading, extract the files to a directory on the agent machine and make sure to add the JMeter bin directory to the system's PATH variable so that JMeter commands can be run from the command line.
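A sketch of what that looks like on a Linux agent; the version number and install directory are placeholders.

# Extract JMeter and add its bin directory to PATH (version and paths are placeholders)
tar -xzf apache-jmeter-5.6.3.tgz -C /opt
echo 'export PATH=$PATH:/opt/apache-jmeter-5.6.3/bin' >> ~/.bashrc
source ~/.bashrc
jmeter -v   # verify the installation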

It's a Chrome extension that helps users record and create JMeter scripts easily. To use it, install the BlazeMeter extension from the Chrome Web Store, record the desired scenarios on your web application, and export the recorded script in .jmx format. This script can then be modified in JMeter and used in your Jenkins pipeline for automated performance testing.

Create a new pipeline job in Jenkins and define the pipeline script to include stages for running JMeter tests. The script should include steps to execute JMeter commands, store the results, and send notifications. Here's an example script:

pipeline {
    agent { label 'your-agent-label' }
    stages {
        stage('Run JMeter Tests') {
            steps {
                script {
                    sh '''
                    output_directory="/path/to/output/$(date +'%Y-%m-%d')"
                    mkdir -p "$output_directory"
                    cd /path/to/jmeter/bin
                    sh jmeter -n -t /path/to/test/script.jmx -l "$output_directory/results.jtl" -e -o "$output_directory"
                    cp "$output_directory/results.jtl" $WORKSPACE
                    cp -r "$output_directory" $WORKSPACE
                    '''
                }
            }
        }
    }
    post {
        always {
            emailext (
                subject: "JMeter Test Results",
                body: "The JMeter tests have completed. Please find the results attached.",
                recipientProviders: [[$class: 'DevelopersRecipientProvider']],
                attachLog: true
            )
        }
    }
}

How do I schedule the JMeter tests to run periodically?

You can schedule the JMeter test execution using the cron syntax in the Jenkins pipeline configuration. For example, to run the tests every three hours, you can use:

H */3 * * *


This triggers the pipeline once every three hours; the H token lets Jenkins pick a consistent minute within the hour to spread the load across jobs.

Where are the test results stored and how do I analyze them?

After the JMeter tests are executed, the results are typically stored in a specified directory on the Jenkins agent. You can navigate to this directory, download the results to your local machine, and open the HTML reports generated by JMeter for detailed analysis.
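If you want a quick summary without opening the HTML dashboard, the raw .jtl file can also be parsed directly. The sketch below is a minimal example that assumes the default CSV sample-log format (with elapsed, label, and success columns) and a locally downloaded results.jtl; adjust the column names if your jmeter.properties changes the sample log configuration.

import csv
from collections import defaultdict
from statistics import mean

def summarize(jtl_path):
    # Group samples by label (one entry per HTTP sampler)
    stats = defaultdict(list)
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            stats[row["label"]].append((int(row["elapsed"]), row["success"] == "true"))
    for label, samples in stats.items():
        elapsed = [e for e, _ in samples]
        errors = sum(1 for _, ok in samples if not ok)
        print(f"{label}: {len(samples)} samples, avg {mean(elapsed):.0f} ms, "
              f"max {max(elapsed)} ms, {errors} errors")

if __name__ == "__main__":
    summarize("results.jtl")   # path to the locally downloaded .jtl file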


Pentaho vs Pyramid: A comprehensive comparison and Roadmap for Migration

Hitachi Vantara Pentaho has been a very popular Commercial Open-Source Business intelligence platform used extensively over the last 12+ years providing a comprehensive set of tools with great flexibility and extensibility and hence used to be featured in the Analysts reports including Gartner BI Magic Quadrant, Forrester and Dresner’s Wisdom of Crowds. 

Over the last 3-5 years, however, there has been a shift in the industry to the new Augmented era of Business Intelligence and Analytics and several niche Pentaho alternatives have emerged. Pyramid Analytics is one of these Pentaho replacement platforms and is consistently recognized as a leader in this space by several Analysts including Gartner, BARC, Forrester & Dresner having featured in their reports now for over 7 years in a row. 
Hitachi Vantara Pentaho hasn't been able to keep pace and has since been dropped from these analyst reports. This series of articles is aimed at helping current users of Pentaho and other similar older-generation BI platforms like Jasper who are evaluating Pentaho replacements or alternatives. We try to map the most commonly used modules and features of the Pentaho BI Platform to their equivalents in Pyramid Analytics, comparing and highlighting the improvements, and also present a roadmap for migration.

Architecture Overview and Comparison

About Pentaho

Pentaho BI Platform covers the entire spectrum of Analytics. It includes both web-based components and design tools. The design tools include Pentaho Data Integration for ETL, Metadata Editor, Schema workbench, Aggregate & Report Designers to build Reports and Weka for Data Science / Machine Learning.

Pentaho BI Server includes a set of Web Components including the User Console, Analyzer, Interactive Reports, Dashboard Designer, CTools and Data Source Model Editor / Wizard. Specific Design tools and Web Components can be used to generate different Analytical content depending on the specific use cases and requirements. The flexible and open architecture was one of the key reasons for its popularity for so long as there were not many Pentaho alternatives with similar capabilities, and hence, it enjoyed its days with little competition.

Please refer to this link for a detailed explanation of each of the above components and Design tools. 

About Pyramid Analytics

Pyramid Analytics is a Modern, Scalable, Enterprise Grade, End to End Cloud-Centric Unified Decision Intelligence platform for tomorrow's Analytical needs. Being an adaptive analytics platform, it provides different capabilities and experiences based on user needs and skills, all while managing content as a shared resource. It provides organizations with one analytics solution for everyone, across all user types and skill levels. Hence, it proves itself to be a worthy and capable Pentaho replacement platform.

Unlike Pentaho and other Pentaho replacement platforms, there are no different Client or Design tools that need to be installed on local systems by developers; instead, all Modules are hosted in a Server and can be accessed using just the browser.

Please refer to these Platform Overview & Pyramid Modules for a more detailed explanation of each of the above components and modules.

Mapping of Modules & Design Tools between Pyramid & Pentaho

Here is the mapping between the Modules of Pentaho with the corresponding ones of Pyramid Analytics.

Key Platform Capabilities & Differentiators

We have listed some of the key capabilities of both the Pentaho and Pyramid Analytics platforms and highlighted the differences in terms of how they are built.

Decision Intelligence and Augmented Analytics

As per Gartner, Decision Intelligence & Augmented analytics is the use of enabling technologies such as machine learning and AI to assist with data preparation, insight generation, and insight explanation to augment how people explore and analyze data in analytics and BI platforms. It also augments the expert and citizen data scientists by automating many aspects of data science, machine learning, and AI model development, management, and deployment. 

Pentaho doesn’t offer any Augmented Analytics or Decision Intelligence capability as part of its offerings. This feature makes Pyramid Analytics an even more solid Pentaho replacement option.

Pyramid offers augmented analytics capabilities in several ways, such as Explain (NLQ & chatbot), Smart Insights, Smart Model, Smart Discover, Smart Publish and Present, Data Structure Analyzer, Auto Discover, and auto recommendations. Among the most used are Auto-Discovery and Smart Discovery, which offer users the simplest method for building data visualizations in Pyramid through a simple point-and-click wizard. The wizard presents the user with an ultra-streamlined interface, consisting of the report canvas, the visualization menu, and a single unified drop zone.

Collaboration & Conversation

If some discussion or real-time collaboration is required between business users around a report or dashboard, Pentaho users usually need to resort to email or similar tools to discuss any issues or pointers related to the reports.

Pyramid Analytics, however, not only has inbuilt collaboration and conversation features, where any user can write a comment and share it with a single user or a group of users, but also offers a very powerful Custom Workflow API to support integration with other applications. Other users get notifications about new comments and can respond or continue the conversation accordingly.

Dashboard & Data Visualization

Pentaho’s Dashboard Designer helps create Ad Hoc interactive visualizations and Dashboards with a 360-degree view of data through dynamic filter controls and content linking. Drag-and-drop, attribute highlighting, and zoom-in capabilities make it easy to isolate key trends, details, and patterns. We can also use the Open Source CTools component of Pentaho to build custom Dashboards but this requires highly technical Javascript skills. We can also integrate business analytics with other applications through portal and mashup integrations.

Pyramid offers a wide range of Visualization capabilities in the Discover, Present, and Illustrate Modules with a wide range of charts and Graphs. It has features like Time Intelligence, a wide range of formulae capabilities, better representation of GeoSpatial Data using inbuilt Maps capabilities. It also has the capability of Explain where we can ask questions to get the information needed and it provides the results using NLP. Users can also set alerts based on dynamic conditions without any coding, unlike Pentaho and other Pentaho alternatives. Using the powerful Publish module, you can create data-driven graphics, text, and visual elements which can be scheduled and delivered to users via email as PowerPoint, Word, Excel, PDF and other formats.

Data Sources Support

Pentaho supports more than 45 Databases using JDBC & JNDI  including the ability to retrieve information from Google Analytics and Salesforce. Pentaho Server also provides a Database connection wizard to create custom data sources. Details can be found here.

Pyramid Analytics also offers a wide range of data source connectivity options using JDBC, ODBC, OLAP (Microsoft Analysis Services, SAP HANA, SAP BW) and external applications like Facebook, Twitter & Salesforce. It provides an easy wizard to retrieve and mash the data by creating a logical semantic layer.

It should be highlighted that out-of-box connectivity to SAP HANA and BW makes it easy for SAP users to modernize their Analytical solution using Pyramid Analytics. You can find more details here.

Metadata Layer Support

Pentaho has two Design tools which help end-users to create the Meta Data required for Ad Hoc Reports creation – Schema Workbench and Metadata Editor. Schema Workbench helps in creating a Mondrian Schema which needs an underlying Database in Star Schema. This is OLAP technology and needs MDX language to query data.  Metadata Editor is used to create a Metadata model which primarily transforms DB physical structure to a business Logical Model.

The Pyramid metadata layer is managed using the Model component. Everything in Pyramid revolves around the Model. The model is highly sophisticated and facilitates all visualization capabilities. Pyramid models are easy to create and require little or no database change. The model creation process comes with many data preparation and calculation features. The model also mimics OLAP concepts and more.

Predictive Analytics

The Pentaho product suite has a module, Weka, that enables predictive analytics features such as data preprocessing, classification, association, time series analysis, and clustering. However, it takes some effort to bring the data into a good visualization and then to consume it in the context of other analytical artefacts. This is not easy to achieve with other Pentaho alternatives either, but Pyramid Analytics solves it with an out-of-the-box solution.

Pyramid has out-of-the-box predictive modelling capabilities as part of the whole analytical process, which can be executed seamlessly. To facilitate the AI framework, Pyramid comes with tools to deliver machine learning in R, Python, Java, JavaScript, and Ruby (with more to be added in the future).

Natural Language Processing

Pentaho can be integrated with external tools like Spark MLlib, Weka, Tensorflow, and Keras but these are not suitable for NLP use cases. The same is the case with many other Pentaho replacement solutions.

Pyramid’s Explain and Ask Question using Natural Language Query (NLQ) however supports easy text-based searches, allowing users to type a question in conversational language and get answers instantly in the form of automatic data visualizations. Users can enhance the output by customizing the underlying semantic model according to their business needs.

Native Mobile Applications

Considering today's need for business users to have instant access to information and data to make quick decisions, and the fact that mobiles are the de facto medium, it is very important to deliver analytical content, including KPIs, on the go and when offline. This is achieved through responsive web interfaces and native mobile apps.

Pentaho doesn't have a native mobile app, but we can deliver mobile-friendly content using a mobile browser.

Pyramid, on the other hand, offers a native mobile app, making it one of the best Pentaho alternatives for empowering business users on the go. The app can be downloaded from the app stores.

Admin, Security & Management

User, role, and folder/file management is done through the Pentaho User Console (PUC) when logged in as an Administrator. Your predefined users and roles can be used for the PUC if you are already using a security provider such as LDAP, Microsoft Active Directory (MSAD), or Single Sign-On. Pentaho Data Integration (PDI) can also be configured to use your implementation of these providers, or Kerberos, to authenticate users and authorize data access.

Clustering and load balancing need to be configured separately for the PDI and BI servers. The server is a Tomcat application, so clustering and load balancing generally follow accordingly. Upgrading the server version needs to follow an upgrade path which involves possible changes to content artefacts.

Pyramid has multiple layers of security which makes it very robust and offers secured content delivery. It also facilitates third-party security integration like Active Directory, Azure LDAPS, LDAP, OpenID and SAML. Pyramid has Advanced Administration with simplified Security handling, Monitoring, Fine Tuning, and Multi-Tenancy management without the need to edit and manage multiple configuration server files as in Pentaho. 

All of this can be done from the browser by an administrative user. The Pulse module helps a Pyramid server hosted in the cloud securely connect to data repositories on-premises. The inbuilt distributed architecture offers easy dynamic scalability across multiple servers with a built-in load balancer.

Conclusion 

With Pyramid ranking ahead of Pentaho in most of the features and capabilities, it is not surprising that it is rated so highly by the analysts, and it is a no-brainer to select Pyramid as your next-generation Pentaho-replacement Enterprise Analytics and Decision Intelligence platform. More details on why to choose Pyramid are provided here.

We only covered the high-level aspects and differences with Pentaho as part of this article. In the next article, we delve deeper into the individual components and walk through how each of them from existing Pentaho-based solutions or Pentaho alternatives can be migrated into Pyramid by giving specific examples. 

Please get in touch with us here if you are currently using Pentaho and want assistance with migrating to the Pyramid Analytics Platform.


Locust Load Testing in Kubernetes [with Concourse]

Load testing is a critical part of any web application quality test: it tells you how the application behaves when the load or volume of users accessing it is high. The first step of any load test is to select the right tool. Locust is one of those tools and is popular for testing Python-based applications. For testers who prefer a command-line interface, Locust is faster, easier to configure, and enables testing code to be easily reused between projects and different environments. Locust works around resource limitations while running the load test by allowing you to spin up a distributed load test; through this, we can simulate a large number of users by scaling the number of workers. To install Locust on your local system for development, you can run the following command:

$ pip install locust   (you can also use the Anaconda command line)

For reference, see: Locust.
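For illustration, here is what a minimal locust file can look like; the class name, endpoints, and weights below are placeholders and not the Odoo Locust scripts used later in this article.

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks
    wait_time = between(1, 3)

    @task(3)   # weighted: runs three times as often as checkout
    def browse_products(self):
        self.client.get("/products")

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"cart_id": 42})

Saved as locustfile.py, this can be run locally with "locust -f locustfile.py --host https://your-app.example.com" and scaled out later with the master/worker setup described below.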
Prerequisites:
1) Kubernetes
2) Concourse
3) Helm

Kubernetes is a container management technology developed in a Google lab to manage containerized applications in different kinds of environments such as physical, virtual, and cloud infrastructure. It is an open-source system that helps in creating and managing the containerization of applications. Helm is the package manager for Kubernetes, and a Helm chart is a collection of files that describes a set of Kubernetes resources. Concourse CI is a system built with a loosely coupled microservice architecture; to manage a Concourse pipeline from the command line, we use fly.

Setup Locust in Kubernetes:
Add the Helm repository by the command below:
1) $ helm repo add stable https://charts.helm.sh/stable
2) $ helm repo list
The output will show the newly added Helm repo:

3) Create a ConfigMap with any name (here I'm calling it locust-worker-configs) that holds our locust files. Since our example runs the Odoo Locust scripts, which consist of multiple files, all the files in that folder are added to this ConfigMap, which is later referenced by the locust pods (master and workers).

$ kubectl create configmap locust-worker-configs --from-file <local dir>/odoolocust -o yaml --dry-run=client | kubectl apply -f -

4) Install locust:
$ helm install locust stable/locust -f values.yaml
Here is the list of all configurable values. Be sure to add the ConfigMap name created earlier in the values.yaml file.

After running this command, we will have the following output:

5) Check if all the pods are up and running.

$ kubectl get pods

6) Let’s check the service created:
$ kubectl get service

7) We have one service named "locust-master-svc" created, and we can expose it by port forwarding to access it locally.

$ kubectl port-forward service/locust-master-svc 8089:8089

The Locust dashboard is now up and running; from there we can provide the number of users, spawn rate, and host.

Below is an image of the load test with three workers in the ready state and the CPU percentage metrics shown:

Now we can connect to the remote locust instance and run our tests. The next steps will automate the tests and store the results in a google bucket using Concourse.
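As an aside, because the dashboard is just a small web application, the same run can also be driven programmatically against the port-forwarded service, which is handy for quick experiments before automating everything in Concourse. The rough sketch below assumes Locust 1.x+ endpoint and field names (/swarm with user_count and spawn_rate; older releases used locust_count and hatch_rate) and a placeholder target host.

import time
import requests

LOCUST_URL = "http://localhost:8089"   # the port-forwarded locust-master-svc

# Start a run: 50 users spawned at 10 per second against a placeholder host
requests.post(f"{LOCUST_URL}/swarm", data={
    "user_count": 50,
    "spawn_rate": 10,
    "host": "https://your-app.example.com",
})

time.sleep(60)   # let the test run for a minute

# Fetch the aggregated stats (the last entry is the 'Aggregated' row) and stop the run
stats = requests.get(f"{LOCUST_URL}/stats/requests").json()
print(stats["stats"][-1])
requests.get(f"{LOCUST_URL}/stop")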

Concourse pipeline:

To create the ConfigMap and to run the pods, we have to provide the directory of our folder where all test scripts and concourse pipeline files are located. For that, check the folder structure below:

Here, we're using a pipeline named "locust-tests.pipeline.yaml". It has two types of jobs: one updates the locust scripts and the others run the tests for different numbers of users.


groups:
  - name: locust-test
    jobs:
      - update-locust-script
      - users-50
      - users-100
      - users-150
      - users-200

jobs:
  - name: update-locust-script
    plan:
      - get: pipeline
        trigger: true
      - put: task
        params:
          wait_until_ready: 0
          kubectl: create configmap locust-worker-configs --from-file pipeline/tasks/odoolocust/ -o yaml --dry-run=client -n | kubectl apply -f -

  - name: users-50
    plan:
      - get: pipeline
      - task: users-50
        file: pipeline/tasks/locust-test.yaml
        params:
          user_count: 50

  - name: users-100
    plan:
      - get: pipeline
      - task: users-100
        file: pipeline/tasks/locust-test.yaml
        params:
          user_count: 100

resource_types:
  - name: kubernetes
    type: docker-image
    source:
      repository: zlabjp/kubernetes-resource
      tag: "1.17"

resources:
  - name: pipeline
    type: git
    source:
      uri:
      branch:
      private_key: {{git_private_key}}

  - name: task
    type: kubernetes
    source:
      server: {{cluster_url}}
      namespace:
      token: {{token}}
      certificate_authority: ((kubernetes_ca_crt))

The pipeline

Under the tasks folder, we have 2 files, and the Odoo locust folder has the locust files. The cred.py shown below is used in locust-test.yaml to convert the Google Cloud credentials retrieved from a secret manager into a JSON file, which authenticates against Google Cloud so that the test results can be pushed to a Google Cloud Storage bucket.

import json

# Read the raw credentials written by the pipeline (see the cat << EOF step)
f = open('credentials.json')
d = f.read()

# strict=False tolerates control characters in the pasted secret
data = json.loads(d, strict=False)

# Write the clean JSON file that GOOGLE_APPLICATION_CREDENTIALS/gcloud expects
with open("cred.json", "w") as outfile:
    json.dump(data, outfile)

                                                                      cred.py

The locust-test.yaml task runs the actual tests for the different numbers of users defined in the pipeline.


platform: linux

image_resource:
  type: docker-image
  source:
    repository: google/cloud-sdk
    tag: "alpine"

params:
  KUBE_CONFIG: ((kubernetes-config))

inputs:
  - name: pipeline

run:
  path: sh
  args:
    - -exc
    - |
      set +x
      mkdir -p /root/.kube/
      printf %s "${KUBE_CONFIG}" > /root/.kube/config
      curl -L -s https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
      chmod +x /usr/local/bin/kubectl
      cd pipeline/tasks/
      cat << EOF >> credentials.json
      ((kubernetes-bucket-storage))
      EOF
      python3 cred.py
      export GOOGLE_APPLICATION_CREDENTIALS=cred.json
      POD_NAME=$(kubectl get pods -n <namespace> | grep locust-master | awk '{print $1}')
      kubectl exec --stdin --tty ${POD_NAME} bash -n <namespace> << EOF
      mkdir tmp
      cd tmp
      locust -f /mnt/locust/test.py Seller --csv=${user_count}_users --headless --run-time 30m -u ${user_count} -r ${user_count} -H <host> --only-summary
      cd ..
      tar -zcf users_${user_count}.tgz tmp
      rm -rf tmp
      EOF
      kubectl cp <namespace>/${POD_NAME}:/home/locust/users_${user_count}.tgz users_${user_count}.tgz
      kubectl exec --stdin --tty ${POD_NAME} bash -n <namespace> << EOF
      rm users_${user_count}.tgz
      EOF
      gcloud auth activate-service-account --project=<google_project_id> --key-file=cred.json
      gsutil mv users_${user_count}.tgz gs://locust-tests/${user_count}_users/users_${user_count}_$(date '+%d-%m-%YT%H:%M:%S').tgz

                                                                       locust-test.yaml

When we trigger the job for 50 users, it runs the test for the duration specified in this line of the code, where the run time is 30 minutes:

locust -f /mnt/locust/test.py Seller --csv=${user_count}_users --headless --run-time 30m -u ${user_count} -r ${user_count} -H <host> --only-summary

Once the test is completed, it pushes the .tgz file with the results to a Google Cloud Storage bucket named locust-tests, which has subfolders per user count so that we can track the test runs over different periods.
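If you later want to pull a particular run back down for analysis without gsutil, the archive can also be fetched with the Cloud Storage client library. A rough sketch, assuming the bucket layout created by the pipeline above, that google-cloud-storage is installed, and that GOOGLE_APPLICATION_CREDENTIALS points at a valid key:

from google.cloud import storage

def fetch_latest(user_count: int, dest: str = "latest.tgz"):
    client = storage.Client()
    # Results are stored under <user_count>_users/ inside the locust-tests bucket
    blobs = list(client.list_blobs("locust-tests", prefix=f"{user_count}_users/"))
    if not blobs:
        raise SystemExit(f"no results found for {user_count} users")
    latest = max(blobs, key=lambda b: b.time_created)
    latest.download_to_filename(dest)
    print(f"downloaded {latest.name} -> {dest}")

if __name__ == "__main__":
    fetch_latest(50)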

This flow can also be integrated with the deployment pipeline of the application so that it runs the test right after the deployment. Here we are using the Odoo locust script to test an Odoo application and the same procedure can be followed for any other application.

This is how we can completely automate the execution of Locust tests using Concourse.

Please get in touch for assistance with setup and use of Locust for your Performance testing needs.


How to integrate a Bot in Rocketchat using RASA

This is the 2nd in our series of articles on using RocketChat as an Open Source Enterprise Communication Platform as an alternative to Slack or Microsoft Teams, and on enhancing its capabilities by integrating one of the most popular Open Source chatbots, RASA, to simplify and enhance customer engagement as one of the steps to accelerate the organization's digital transformation.