Integrating Apache Hop with n8n: Axxonet’s Blueprint for Scalable, Automation‑Driven Data Pipelines

Enterprises today are under pressure to modernise their data operations while reducing cost, improving reliability, and accelerating decision‑making. Automation platforms are no longer expected merely to trigger notifications or orchestrate APIs; they must reliably control data‑intensive pipelines that feed analytics, reporting, and operational intelligence.

At Axxonet, we combine n8n’s event‑driven automation with Apache Hop’s scalable data transformation engine to deliver a unified architecture that is robust, maintainable, and built for enterprise growth.

Why Enterprises Need More Than Automation Alone

Automation tools like n8n excel at orchestrating business logic, but they are not designed to handle the complexity and scale of modern data engineering. As data volumes grow, organisations face challenges such as:

  • Workflows that become unmanageable due to embedded transformation logic
  • Performance bottlenecks when processing large datasets
  • Difficulty maintaining JavaScript-heavy transformations
  • Limited reusability and governance across teams
  • Increased debugging and operational overhead

This is where Apache Hop becomes the perfect complement.

The Strategic Value of Apache Hop in Enterprise Data Pipelines

Apache Hop is a modern, open-source ETL and data orchestration platform built for clarity, scalability, and operational efficiency. 

Apache Hop’s architecture is fundamentally different from legacy ETL tools, focusing on metadata-driven design, portability, and cloud-native execution. We explored this in detail in our article on Apache Hop architecture and development, including how its design enables long-term scalability and maintainability.

For data management leaders, Hop offers:

Visual, Reusable, Governed Pipelines

Pipelines are easy to version, audit, and reuse—critical for teams managing dozens of data flows.

Enterprise-Grade Data Processing

Hop supports relational databases, cloud warehouses, files, streaming, and batch workloads—ensuring flexibility across environments.

Parameterised, Automation-Friendly Execution

Pipelines accept runtime parameters, enabling dynamic execution from n8n based on business events.
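As an illustrative sketch, an n8n Execute Command node could assemble a `hop-run` invocation from business-event values. The pipeline name, parameter names, and exact flag syntax below are assumptions for illustration and may differ between Hop versions:

```shell
#!/bin/bash
# Hypothetical sketch: values an n8n trigger would supply at runtime.
RUN_DATE="2024-06-01"
REGION="emea"

# Build the hop-run command, passing the values as pipeline parameters.
# daily-ingest.hpl and the parameter names are invented for this example.
HOP_CMD="hop-run.sh --file daily-ingest.hpl --runconfig local --parameters RUN_DATE=${RUN_DATE},REGION=${REGION}"

echo "${HOP_CMD}"
```

In a real workflow, n8n would execute this command directly rather than echoing it; the point is that the pipeline itself stays unchanged while the parameters vary per business event.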

Clear Separation of Concerns

Automation logic stays in n8n; data transformation logic stays in Hop—reducing risk and improving maintainability.

The Combined Value: n8n + Apache Hop

For CXOs, the combined architecture delivers measurable business outcomes:

| Enterprise Priority | How n8n Helps | How Hop Helps | Combined Impact |
|---|---|---|---|
| Operational Efficiency | Automates triggers & workflows | Executes heavy ETL | Faster, leaner operations |
| Scalability | Event-driven orchestration | High-volume data processing | Scale automation & data pipelines independently |
| Governance & Compliance | Audit trails & workflow logs | Versioned, visual pipelines | End-to-end traceability |
| Cost Optimization | Lightweight automation | Open-source ETL | Lower TCO vs. proprietary tools |
| Time-to-Value | Rapid workflow creation | Reusable pipelines | Faster deployment of new data products |

This architecture is not just technically sound—it is strategically aligned with enterprise modernisation goals.

How Axxonet Implements This Architecture for Enterprises

Axxonet brings deep expertise in automation, data engineering, and enterprise integration. Our implementation approach ensures reliability, governance, and long-term scalability.

Architecture Overview: n8n + Apache Hop

1. Event-Driven Orchestration with n8n

We design n8n workflows that handle:

  • Business event triggers
  • Pre-execution validation
  • Parameter construction
  • Execution control (retries, timeouts, fail-fast logic)
  • Post-execution branching and notifications

This keeps automation logic clean, resilient, and easy to maintain.
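The execution-control responsibilities above (retries, timeouts, fail-fast) can be sketched as a small wrapper script. This is an illustrative pattern, not part of any n8n node; the attempt limit and the wrapped command are placeholders:

```shell
#!/bin/bash
# Illustrative retry wrapper: run a command up to MAX_ATTEMPTS times,
# then fail fast once the attempts are exhausted.
run_with_retries() {
  local max_attempts=$1; shift
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "Failed after ${attempt} attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1   # back off briefly before retrying
  done
  return 0
}

# Example: 'false' always fails, so after three attempts the wrapper
# returns a non-zero status and the fail-fast branch fires.
run_with_retries 3 false || echo "fail-fast triggered"
```

The same shape (bounded retries, then a hard failure that downstream branches can react to) is what the n8n workflow implements around each Hop execution.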

2. Scalable ETL Pipelines with Apache Hop

Our Hop pipelines handle:

  • Multi-source extraction
  • Data cleansing, enrichment, and validation
  • Business rule application
  • Aggregations and transformations
  • Loading into analytics or operational systems

Pipelines are reusable across teams and environments.

3. Flexible Integration Patterns

Depending on scale and infrastructure, we integrate Hop with n8n using:

  • Command-line execution for lightweight deployments
  • Containerised Hop pipelines for scalable, isolated execution
  • Dynamic parameter passing for multi-tenant or multi-environment use cases
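As a hedged sketch of the containerised pattern with dynamic parameters, the command below builds a per-tenant `docker run` invocation; the tenant name, config file path, and volume layout are invented for illustration:

```shell
#!/bin/bash
# Hypothetical sketch: the same Hop image serves multiple tenants by
# switching only the injected environment configuration.
TENANT="acme"

DOCKER_CMD="docker run --rm \
--env HOP_ENVIRONMENT_CONFIG_FILE_NAME_PATHS=/files/${TENANT}-env.json \
--env HOP_RUN_CONFIG=local \
-v /data/${TENANT}:/files \
apache/hop:latest"

echo "${DOCKER_CMD}"
```

In practice n8n would execute the assembled command; echoing it here simply shows that tenant isolation reduces to changing one configuration path.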

4. Enterprise-Ready Governance

Axxonet ensures:

  • CI/CD integration for pipelines
  • Git-based version control
  • Centralised logging and monitoring
  • Secrets management and environment isolation
  • Auditability across automation and ETL layers

This is where our engineering discipline becomes a differentiator.

Example Enterprise Use Case: Automated Analytics Ingestion

A daily reporting pipeline that consolidates data from multiple operational systems into an analytics warehouse.

n8n handles:

  • Triggering (schedule, webhook, or API)
  • Validating source system readiness
  • Passing runtime parameters
  • Monitoring execution status
  • Sending alerts and triggering downstream workflows

Apache Hop handles:

  • Extracting data from multiple systems
  • Cleansing, enriching, and validating records
  • Applying business rules
  • Loading curated data into the analytics database

Outcome for the enterprise:

  • Reduced manual effort
  • Faster data availability
  • Improved data quality
  • Clear separation of automation and transformation logic
  • Easier maintenance and scaling

Why CXOs Choose Axxonet

Enterprises partner with Axxonet because we deliver more than technology—we deliver outcomes.

Our Differentiators

  • Proven expertise in both automation and data engineering
  • Deep experience integrating n8n, Apache Hop, and enterprise systems
  • Strong architectural discipline for scalability and governance
  • Ability to deploy across cloud, hybrid, and on-prem environments
  • Accelerators for CI/CD, monitoring, and parameterised execution
  • A track record of reducing operational overhead and improving data reliability

We don’t just implement tools—we build future-proof data automation ecosystems.

Conclusion: A Future-Ready Foundation for Enterprise Data Operations

Integrating Apache Hop into n8n workflows enables organisations to scale automation beyond simple tasks into production-grade data pipelines. With Axxonet’s expertise, enterprises gain:

  • Lightweight, maintainable automation
  • Reusable, governed ETL pipelines
  • Faster deployment of analytics and reporting
  • Lower operational cost and higher reliability
  • A scalable architecture aligned with digital transformation goals

Our engineering approach aligns with modern DevOps practices, including Git-based versioning and automated deployment pipelines, which we detailed in our article on integrating Apache Hop with GitLab CI/CD for automated pipeline deployment.

For CXOs looking to modernise their data and automation landscape, this architecture provides a strategic, cost-effective, and future-ready foundation.

If your organisation is struggling with brittle automation or slow data pipelines, Axxonet can help you modernise with a scalable, event-driven architecture. 

Reach out to us at [email protected] or submit your details via our contact form.

SQL Server Integration Service vs Apache Hop – Execution, Cloud Strategy, and the Microsoft Fabric Question (Part 2)

In Part 1 of this series, we explored how SQL Server Integration Services (SSIS) and Apache Hop differ in their architectural foundations and development philosophies. We examined how each tool approaches pipeline design, metadata management, portability, and developer experience, highlighting the contrast between a traditional, Microsoft-centric ETL platform and a modern, open, orchestration-driven framework.

Architecture explains how a tool is built.

Adoption, however, depends on how well it runs in the real world.

In this second part, we shift focus from design philosophy to practical execution. We compare SSIS and Apache Hop across performance, scalability, automation, scheduling, cloud readiness, and operational flexibility: the areas that ultimately determine whether an ETL platform can keep up with modern data platforms.

This comparison also reflects a broader transition currently facing many organizations. As Microsoft positions Microsoft Fabric as its strategic, cloud-first analytics platform, existing SSIS customers are increasingly encouraged to migrate or coexist within the Fabric ecosystem when modernizing their data platforms. Understanding how SSIS, Apache Hop, and Microsoft’s evolving cloud strategy differ in execution, cost, and operational flexibility is essential for making informed modernization decisions.

Performance & Scalability

SSIS

SSIS is built around a highly optimized, in-memory data flow engine that performs well for batch-oriented ETL workloads, especially when operating close to SQL Server. Its execution model processes rows in buffers, delivering reliable throughput for structured transformations and predictable data volumes.

Within a single server, SSIS supports parallel execution through multiple data flows and tasks. However, scaling is primarily achieved by increasing CPU, memory, or disk capacity on that server.

Key Characteristics

  • Efficient in-memory batch processing
  • Excellent performance for SQL Server-centric workloads
  • Parallelism within a single machine
  • Scaling achieved through stronger hardware

This model works well for stable, on-premise environments with known workloads. As data volumes grow or workloads become more dynamic, flexibility becomes limited.

Apache Hop

Apache Hop approaches performance differently. Pipeline design is separated from execution, allowing the same pipeline to run on different execution engines depending on scale and performance requirements.

Hop supports local execution for development and testing, as well as distributed execution using engines such as Apache Spark, Apache Flink, and Apache Beam. This enables true horizontal scaling across clusters rather than relying on a single machine.

Key Characteristics

  • Lightweight execution for low-latency workloads
  • Parallelism across nodes, not just threads
  • Native support for distributed compute engines
  • Suitable for batch and streaming-style workloads

Because pipelines do not need to be redesigned to scale, teams can start small and grow naturally as data volumes and complexity increase.

Comparative Summary

| Aspect | SSIS | Apache Hop |
|---|---|---|
| Execution Model | Single-node | Single-node & distributed |
| Scaling Type | Vertical (scale-up) | Horizontal (scale-out) |
| Distributed Engines | Not native | Spark, Flink, Beam |
| Cloud Elasticity | Limited | Strong |
| Container/K8s Support | Not native | Native |
| Workload Flexibility | Predictable batch | Batch + scalable execution |

Automation & Scheduling

SSIS: Built-in Scheduling with Tight Coupling

SSIS relies primarily on SQL Server Agent for automation and scheduling. Once packages are deployed to the SSIS Catalog (SSISDB), they are typically executed via SQL Agent jobs configured with fixed schedules, retries, and alerts.

This approach works well in traditional on-premise environments where SQL Server is always available and centrally managed. However, scheduling and orchestration logic are tightly coupled to SQL Server, which limits flexibility in distributed or cloud-native architectures.

Strengths

  • Built-in scheduling via SQL Server Agent
  • Integrated logging and execution history
  • Simple retry and failure handling

Limitations

  • Requires SQL Server to be running
  • Not cloud-agnostic
  • Difficult to integrate with external orchestration tools
  • Limited support for event-driven or dynamic workflows

While reliable, this model assumes a static infrastructure and does not align naturally with modern, elastic execution patterns.

Apache Hop: Decoupled, Orchestration-First Automation

Apache Hop deliberately avoids embedding a fixed scheduler. Instead, it exposes clear execution entry points (CLI and Hop Server), making it easy to integrate with industry-standard orchestration tools. This allows teams to choose the scheduler that best fits their infrastructure rather than being locked into one model.

Scheduling Apache Hop on Google Dataflow

Lean With Data and the Apache Beam team at Google work closely together to provide seamless integration between the Google Cloud platform and Apache Hop. The ability to schedule and run pipelines directly on Google Cloud follows this philosophy: you no longer have to worry about provisioning resources, and you are billed only for the compute time you use. This allows you to focus more on business problems and less on operational overhead.

Apache Hop with Apache Airflow

Apache Airflow is an open-source workflow orchestration tool originally developed by Airbnb. It allows you to define workflows as code, providing a dynamic, extensible platform to manage your data pipelines. Airflow’s rich features enable you to automate and monitor workflows efficiently, ensuring that data moves seamlessly through various processes and systems.

Apache Airflow is a powerful and versatile tool for managing workflows and automating complex data pipelines. Its “workflow as code” approach, coupled with robust scheduling, monitoring, and scalability features, makes it an essential tool for data engineers and data scientists.

A detailed article by the Axxonet team provides an in-depth overview of how Apache Hop integrates with Apache Airflow; please follow the link:

Streamlining Apache HOP Workflow Management with Apache Airflow

Apache Hop with Kubernetes CronJobs

Apache Hop integrates naturally with Kubernetes CronJobs, making it well suited for cloud-native and container-based ETL architectures. In this setup, Apache Hop is packaged as a Docker image containing the required pipelines, workflows, and runtime configuration. Kubernetes CronJobs are then used to schedule and trigger Hop executions at defined intervals, with each run executed as a separate, ephemeral pod.

This execution model provides strong isolation, as each ETL run operates independently and terminates once completed, eliminating the need for long-running ETL servers. Environment-specific configuration, credentials, and secrets are injected at runtime using Kubernetes ConfigMaps and Secrets, enabling the same Hop image and pipelines to be reused across development, test, and production environments without modification.
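A minimal sketch of such a CronJob follows. The schedule, resource names, and container settings are assumptions for illustration, not a production-tested manifest:

```shell
#!/bin/bash
# Write a hypothetical Kubernetes CronJob manifest that launches a Hop
# run nightly as a one-shot, ephemeral pod. All names are examples.
cat > hop-cronjob.yaml <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hop-nightly-etl
spec:
  schedule: "0 2 * * *"            # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never     # each run terminates when done
          containers:
            - name: hop
              image: apache/hop:latest
              env:
                - name: HOP_RUN_CONFIG
                  value: local
EOF
echo "Manifest written: hop-cronjob.yaml"
# Apply with: kubectl apply -f hop-cronjob.yaml
```

Real deployments would additionally mount ConfigMaps and Secrets for the environment-specific configuration described above.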

Comparative Summary

| Aspect | SSIS | Apache Hop |
|---|---|---|
| Built-in Scheduler | SQL Server Agent | External (by design) |
| Orchestration Logic | Limited | Native workflows |
| Event-driven Execution | Limited | Strong |
| Cloud-native Scheduling | Azure-specific | Kubernetes, Airflow, Cron |
| CI/CD Integration | Moderate | Strong |
| Execution Flexibility | Server-bound | Fully decoupled |

Cloud & Container Support

SSIS: Cloud-Enabled but Platform-Bound

SSIS was designed for on-premise Windows environments, and its cloud capabilities were introduced later. In Azure, SSIS packages typically run using Azure SSIS Integration Runtime within Azure Data Factory.

While this enables lift-and-shift migrations, the underlying execution model remains largely unchanged. Packages still depend on Windows-based infrastructure and SQL Server-centric components.

Cloud & Container Characteristics

  • Cloud support primarily through Azure Data Factory
  • Requires managed Windows nodes (SSIS-IR)
  • No native Docker or Kubernetes support
  • Limited portability across cloud providers
  • Scaling and cost tied to Azure runtime configuration

As a result, SSIS fits best in Azure-first or hybrid Microsoft environments, but is less suitable for multi-cloud or container-native strategies.

Microsoft Fabric: Microsoft’s Cloud Direction for SSIS Customers

As organizations move ETL workloads to the cloud, Microsoft increasingly positions Microsoft Fabric as the strategic destination for analytics and data integration workloads including those historically built using SSIS.

Microsoft Fabric is a unified, SaaS-based analytics platform that brings together data integration, engineering, warehousing, analytics, governance, and AI under a single managed environment. Rather than modernizing SSIS itself into a cloud-native execution engine, Microsoft’s approach has been to absorb SSIS use cases into a broader analytics platform.

For existing SSIS customers, this typically presents three cloud-oriented paths:

  1. Lift-and-Shift SSIS Using Azure SSIS Integration Runtime

Organizations can continue running SSIS packages in the cloud by hosting them on Azure SSIS Integration Runtime (SSIS-IR) within Azure Data Factory. This approach minimizes refactoring but preserves SSIS’s original execution model, including its reliance on Windows-based infrastructure and SQL Server-centric components.

  2. Gradual Transition into Microsoft Fabric

Microsoft Fabric introduces Fabric Data Factory, which shares conceptual similarities with Azure Data Factory but is tightly integrated with the Fabric ecosystem. Customers are encouraged to incrementally move data integration, analytics, and reporting workloads into Fabric, leveraging shared storage (OneLake), unified governance, and native Power BI integration.

  3. Platform Consolidation Around Fabric

At a broader level, Fabric represents Microsoft’s strategy to consolidate ETL, analytics, and AI workloads into a single managed platform. For organizations already heavily invested in Azure and Power BI, this provides a clear modernization path, but one that increasingly ties execution, storage, and analytics to Microsoft-managed services.

Implications for Cloud Adoption

From a cloud and container perspective, Fabric differs fundamentally from traditional SSIS deployments:

  • Execution is platform-managed, not user-controlled
  • Workloads are optimized for always-on analytics capacity, not ephemeral execution
  • Containerization and Kubernetes are abstracted away rather than exposed
  • Portability outside the Microsoft ecosystem is limited

This makes Fabric attractive for organizations seeking a fully managed analytics experience, but it also represents a shift from tool-level ETL execution to platform-level dependency.

Apache Hop: Cloud-Native and Container-First

Apache Hop embraces container-based execution through Docker, allowing pipelines and workflows to run in consistent, isolated environments. ETL logic and runtime dependencies can be packaged together, ensuring the same behavior across development, testing, and production.

Configuration is injected at runtime rather than hardcoded, making Hop naturally environment-agnostic. This approach aligns well with Kubernetes, CI/CD pipelines, and ephemeral execution models.

				
Example command:

```shell
#!/bin/bash

# Run the workflow
docker run -it --rm \
  --env HOP_LOG_LEVEL=Basic \
  --env HOP_FILE_PATH='${PROJECT_HOME}/code/flights-processing.hwf' \
  --env HOP_PROJECT_FOLDER=/files \
  --env HOP_ENVIRONMENT_CONFIG_FILE_NAME_PATHS=${PROJECT_HOME}/dev-env.json \
  --env HOP_RUN_CONFIG=local \
  --name hop-pipeline-container \
  -v /path/to/my-hop-project:/files \
  apache/hop:latest

# Check the exit code
if [ $? -eq 0 ]; then
    echo "Workflow executed successfully!"
else
    echo "Workflow execution failed. Check the logs for details."
fi
```

This script runs the workflow and checks whether it completed successfully. You could easily integrate this into a larger CI/CD pipeline or set it up to run periodically.

Docker-based execution makes Apache Hop particularly well suited for CI/CD pipelines, cloud platforms, and Kubernetes-based deployments, where ETL workloads can be triggered on demand, scaled horizontally, and terminated after execution. Overall, this model aligns strongly with modern DevOps and cloud-native data engineering practices.

Cloud & Container Characteristics

  • Native Docker support
  • Kubernetes-ready (Jobs, CronJobs, autoscaling)
  • Cloud-agnostic (AWS, Azure, GCP)
  • Supports object storage, cloud databases, and APIs
  • Stateless, ephemeral execution model

This architecture enables teams to build once and deploy anywhere, without modifying pipeline logic.

Comparative Summary

| Aspect | SSIS | Apache Hop |
|---|---|---|
| Cloud Strategy | Azure-centric | Cloud-agnostic |
| Container Support | Not native | Native Docker & K8s |
| Execution Model | Long-running runtime | Ephemeral, stateless |
| Multi-Cloud Support | Limited | Strong |
| CI/CD Integration | Moderate | Strong |
| Infrastructure Overhead | Higher | Lightweight |

The “Microsoft Fabric Trap” (Cost & Strategy Perspective)

Microsoft Fabric presents itself as a unified, future-ready data platform, combining data integration, analytics, governance, and AI under a single umbrella. While this approach can be compelling at scale, it introduces a common risk for small and mid-sized organizations: what can be described as the “Fabric Trap.”

Cost vs Data Size Reality

One of the most overlooked aspects of Microsoft Fabric adoption is the mismatch between platform cost and actual data scale.

For many small and mid-sized organizations, real-world workloads look like this:

  • Total data volume well below 50 TB
  • A few hundred users at most (< 500)
  • Primarily batch ETL, reporting, and operational analytics
  • Limited or no advanced AI/ML workloads

In these scenarios, Fabric’s capacity-based licensing model often becomes difficult to justify.

Key cost-related realities:

  • You pay for capacity, not consumption
    Fabric requires reserving compute capacity regardless of whether workloads run continuously or only a few hours per day. Periodic ETL jobs often leave expensive capacity idle.
  • Costs scale faster than data maturity
    While Fabric is designed for large, multi-team analytics platforms, many organizations adopt it before reaching that scale, resulting in enterprise-level costs for non-enterprise workloads.
  • User count amplifies total spend
    As reporting and analytics adoption grows, licensing and capacity planning become more complex and expensive, even when data volumes remain modest.
  • Cheaper alternatives handle this scale well
    Open-source databases like PostgreSQL comfortably support tens of terabytes for analytics workloads, and orchestration tools like Apache Hop deliver robust ETL and automation without licensing overhead.
  • ROI improves only at higher scale
    Fabric’s unified analytics, governance, and AI features begin to pay off primarily at larger data volumes, higher concurrency, and greater organizational complexity.

For organizations operating below this threshold, a modular open-source stack allows teams to scale incrementally, control costs, and postpone platform consolidation decisions until business and data requirements genuinely demand it.

Which One Should You Choose?

Choose SSIS/Fabric if:

  • Your ecosystem is entirely Microsoft
  • Your datasets live in SQL Server
  • You need a stable on-prem ETL with minimal DevOps complexity
  • Licensing is not a constraint
  • Your workloads justify always-on analytics capacity
  • You are comfortable adopting a platform-managed execution model
  • Vendor lock-in is an acceptable trade-off for consolidation

Choose Apache Hop if:

  • You prefer open-source tools
  • You need cross-platform or containerized ETL
  • Your data sources include cloud DBs, APIs, NoSQL, or diverse systems
  • You want modern DevOps support with Git-based deployments
  • You need scalable execution engines or distributed orchestration
  • You are a small to mid-sized organization modernizing ETL
  • Your data volumes are moderate (≪ 50 TB) with hundreds—not thousands—of users
  • You run periodic batch ETL and reporting, not always-on analytics
  • You want cloud, container, or hybrid execution without platform lock-in
  • You want to modernize without committing early to an expensive unified platform

Conclusion

Microsoft’s current cloud strategy places Fabric at the center of its analytics ecosystem, and for some organizations, that direction makes sense. However, for many small and mid-sized teams, this approach introduces unnecessary complexity, cost, and architectural rigidity: what we described earlier as the Microsoft Fabric Trap.

Apache Hop offers an alternative modernisation path: one that focuses on execution flexibility, incremental scaling, and architectural intent rather than platform consolidation.

Need Help Modernizing or Migrating?

If you’re:

  • Running SSIS today
  • Evaluating Fabric but unsure about cost or lock-in
  • Looking to modernise ETL using Apache Hop and open platforms

We help teams assess, migrate, and modernize SSIS workloads into Apache Hop–based architectures, with minimal disruption and a clear focus on long-term sustainability.

Reach out to us to discuss your migration or modernisation strategy.
We’ll help you choose the path that fits your data, your scale, and your future, and not just your vendor roadmap.

Official Links for Apache Hop and SSIS

The following official documentation and resources were consulted while writing this blog about Apache Hop and SQL Server Integration Services. Below is a list of key official links:

🔹 Apache Hop Official Resources

🔹 SQL Server Integration Service Official Resources

Install SQL Server Integration Services – SQL Server Integration Services (SSIS)

Development and Management Tools – SQL Server Integration Services (SSIS)

Integration Services (SSIS) Projects and Solutions – SQL Server Integration Services (SSIS)

SSIS Toolbox – SQL Server Integration Services (SSIS)

Other posts in the Apache HOP Blog Series

If you would like to enable this capability in your application, please get in touch with us at [email protected] or submit your details via the form.

SQL Server Integration Service vs Apache Hop – How ETL Tools have evolved and where Modern Tools Fit In (Part 1 of 2)

Introduction

Before 2015, most ETL tools were designed for a world where data lived inside centralized databases, workloads ran on fixed on‑premise servers, and development happened inside proprietary IDEs. Tools like SSIS were built for this environment: stable, tightly integrated with SQL Server, and optimized for Windows‑based enterprise data warehousing.

After 2015, the data landscape changed dramatically. Cloud platforms, distributed systems, containerization, and DevOps practices reshaped how data pipelines are built, deployed, and maintained. ETL tools had to evolve from server‑bound, vendor‑specific systems into flexible, portable, metadata‑driven platforms that could run anywhere.

This shift led to the rise of a broad ecosystem of open‑source ETL and orchestration tools, including Airflow, Talend Open Studio, Pentaho Kettle, Meltano, and more recently, Apache Hop—a modern, actively developed platform designed for cloud‑native and hybrid environments.

This article is Part 1 of a two‑part series.

Here, we focus on how SSIS and Apache Hop are built, examining their architectural foundations, development philosophies, and the historical context that shaped them.

In Part 2, we will examine how these architectural differences translate into performance, scalability, automation, cloud readiness, and real‑world usage scenarios, helping you decide which tool best fits your future data strategy.

The Fundamental Distinctions

At a high level, SSIS and Apache Hop differ in how they are designed, deployed, and evolved.

  • SSIS is a Microsoft‑centric ETL tool built for on‑premise SQL Server environments. It offers a stable, tightly integrated experience for teams operating within the Windows and SQL Server ecosystem.
  • Apache Hop is an open‑source, cross‑platform orchestration framework built with modularity, portability, and cloud‑readiness in mind. It emphasizes metadata‑driven design, environment‑agnostic execution, and seamless movement across local, containerized, and distributed environments.

These foundational differences shape how each tool behaves across development, deployment, scaling, and modernization scenarios.

Overview of the Tools

What is SSIS?

SQL Server Integration Services (SSIS) is a mature ETL and data integration tool packaged with SQL Server. It provides a visual, drag‑and‑drop development experience inside Visual Studio, enabling teams to build batch processes, data pipelines, and complex transformations.

SSIS is optimized for Windows‑based enterprise environments and integrates deeply with SQL Server, SQL Agent, and the broader Microsoft data ecosystem.

Extended Capabilities

  • Built‑in transformations for cleansing, validating, aggregating, and merging data
  • Script Tasks using C# or VB.NET
  • SSIS Catalog for deployment, monitoring, and logging
  • High performance with SQL Server through native connectors

What is Apache Hop?

Apache Hop (Hop Orchestration Platform) is a modern, open‑source data orchestration and ETL platform under the Apache Foundation. It provides a clean, flexible graphical interface (Hop GUI) for designing pipelines and workflows across diverse data ecosystems.

Hop builds on the legacy of Pentaho Kettle but introduces a fully re‑engineered, metadata‑driven framework designed for portability and cloud‑native execution.

Extended Capabilities

  • Large library of transforms and connectors for databases, cloud services, APIs, and file formats
  • First‑class support for Docker, Kubernetes, and remote engines like Spark, Flink, and Beam
  • Pipelines‑as‑code (JSON/YAML) enabling DevOps workflows
  • Metadata injection for reusable, environment‑agnostic pipelines

Feature-by-Feature Comparison

1. Installation & Platform Support

SSIS

SSIS is tightly coupled with SQL Server and Windows. Installation typically involves SQL Server setup, enabling Integration Services, and configuring Visual Studio with SSDT.

Key Characteristics

  • Runs only on Windows
  • Requires SQL Server licensing
  • Vertical scaling
  • Cloud usage limited to Azure SSIS IR
  • No native container or Kubernetes support

This monolithic, server‑bound architecture works well in traditional environments but becomes restrictive in hybrid or multi‑cloud scenarios.

Apache Hop

Hop is lightweight and platform‑independent. It runs on Windows, Linux, and macOS, and supports local, remote, and containerized execution.

Typical Deployment Models

  • Local execution
  • Hop Server for remote execution
  • Docker containers
  • Kubernetes clusters
  • Integration with Airflow, Cron, and other schedulers

Key Characteristics

  • Fully cross‑platform
  • No licensing cost
  • Horizontal scaling via containers
  • Cloud‑agnostic
  • Metadata‑driven portability

Hop treats deployment as a first‑class concern, enabling “build once, run anywhere” pipelines.

Comparative Summary

| Category | SSIS | Apache Hop |
|---|---|---|
| OS Support | Windows only | Windows, Linux, macOS |
| Deployment | Local server, SQL Agent | Desktop, server, Docker, Kubernetes |
| Licensing | SQL Server license | Free, open‑source |

Hop aligns naturally with modern infrastructure patterns, while SSIS remains best suited for Microsoft‑centric environments.

Why Apache Hop Has an Advantage Here

Apache Hop aligns naturally with modern infrastructure patterns such as microservices, containers, and GitOps-driven deployments. Its ability to run the same pipelines across environments without modification significantly reduces operational overhead and future migration costs.

SSIS, while stable, is best suited for organizations that remain fully invested in Windows-based, on-premise architectures.

2. Development Environment

SSIS

SSIS development happens inside Visual Studio using SSDT. Pipelines are stored as binary .dtsx files, which complicates version control and collaboration.

Characteristics

  • Strongly UI‑driven
  • Script Tasks via C#/VB.NET
  • Harder Git diffs
  • Environment‑bound debugging
  • Manual multi‑environment handling

This often leads to developer‑machine dependency and challenges in CI/CD automation.

Apache Hop

Hop provides a standalone GUI with pipelines stored as human‑readable JSON/YAML. It embraces separation of logic and configuration through variables, parameters, and metadata injection.

Characteristics

  • No IDE dependency
  • Clean Git diffs
  • Metadata‑driven environment handling
  • Plugin and script extensibility
  • CI/CD‑friendly design

Metadata Injection in Hop

Metadata injection allows pipeline configuration (connections, file paths, parameters) to be supplied at runtime rather than hardcoded.

This enables:

  • Reusable pipelines
  • Clean environment promotion
  • Consistent DevOps workflows

The same pipeline can run in dev, test, and prod simply by changing metadata—not the pipeline itself.
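To make the idea concrete, here is a minimal Python sketch of the principle behind metadata injection (this is an illustrative analogy, not Apache Hop's actual API): the pipeline logic is written once, and environment-specific details are supplied as metadata at runtime.

```python
# Illustrative sketch of the metadata-injection principle (not Hop's real API):
# the pipeline logic never changes; only the injected metadata does.

def run_pipeline(metadata: dict) -> str:
    """A generic 'pipeline' whose connections and paths are injected at runtime."""
    source = metadata["source_path"]
    target = metadata["target_connection"]
    batch = metadata.get("batch_size", 1000)  # sensible default if not injected
    return f"load {source} -> {target} (batch={batch})"

# Environment-specific metadata, e.g. loaded from a per-environment JSON file.
dev_meta = {"source_path": "/data/dev/input.csv",
            "target_connection": "jdbc:postgresql://dev-db/dw"}
prod_meta = {"source_path": "/data/prod/input.csv",
             "target_connection": "jdbc:postgresql://prod-db/dw",
             "batch_size": 50000}

print(run_pipeline(dev_meta))   # same logic, dev configuration
print(run_pipeline(prod_meta))  # same logic, prod configuration
```

Promoting from dev to prod means swapping the metadata dictionary, never editing the pipeline itself, which is exactly the promotion model Hop's metadata injection enables.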

Git integration in Apache Hop’s GUI

Git allows you to track changes to your project over time, collaborate with others without overwriting each other’s work, and roll back to previous versions if something goes wrong. Whether you’re working solo or in a team, using Git is a best practice that saves time and headaches down the road.

Using Git within Apache Hop’s GUI is a fantastic option if you prefer a visual interface. The integration helps you:

  • Track changes in real-time with color-coded file statuses.
  • Easily stage, commit, push, and pull changes without leaving the Hop environment.
  • Visually compare file revisions to see what’s changed between different versions of pipelines or workflows.

The built-in Git integration in Hop simplifies managing your project’s version history and collaborating with others.

This perspective gives you access to all the files associated with your project, such as workflows (.hwf), pipelines (.hpl), JSON, CSV, and more.

Through this, your project is version-controlled, backed up, and ready for collaboration.

Comparative Summary

Aspect               | SSIS                   | Apache Hop
---------------------|------------------------|-------------------
Environment handling | Hardcoded/config files | Metadata injection
Pipeline portability | Limited                | High
CI/CD friendliness   | Moderate               | Strong
Multi‑env support    | Manual                 | Native

3. Transformations & Connectors

SSIS

SSIS provides strong built‑in transformations optimized for SQL Server and structured ETL patterns. However, connectors outside the Microsoft ecosystem are limited or require third‑party components.

Apache Hop

Hop offers a broad, extensible library of transforms and connectors, covering databases, cloud platforms, APIs, and big‑data ecosystems. Its plugin‑based architecture allows rapid adaptation to new technologies.

Hop also supports:

  • Nested workflows
  • Parallel pipeline execution
  • Streaming and batch patterns
  • ELT and ETL

Series and parallel execution

Comparative Summary

Aspect               | SSIS              | Apache Hop
---------------------|-------------------|--------------------
Transformation style | Monolithic        | Modular
Extensibility        | Limited           | Plugin‑based
API/cloud connectors | Limited           | Strong
ELT support          | Partial           | Native
Ecosystem reach      | Microsoft‑focused | Broad, cloud‑native
Reusability          | Moderate          | High

Conclusion (Part 1)

SSIS remains a strong and reliable option for organizations deeply embedded in the Microsoft ecosystem, offering stability, rich transformations, and tight SQL Server integration. However, its platform dependency and limited portability make it less adaptable to modern, cloud‑native workflows.

Apache Hop, on the other hand, embraces a metadata‑driven, platform‑agnostic approach, enabling greater reuse, cleaner DevOps practices, and seamless movement across environments. Its design aligns closely with today’s demands for flexibility, automation, and scalability.

  • Part 1 sets the stage by examining how these tools are built and how their architectural foundations differ.

In Part 2, we will explore how these differences translate into performance, scalability, automation, cloud readiness, and real‑world usage scenarios, helping you determine which tool best fits your future data strategy.

If you would like to enable this capability in your application, please get in touch with us at [email protected] or update your details in the form


Interactive What-If Analysis using Streamlit: Empowering Real-Time Decision Making

In today’s fast-moving business landscape, leaders need more than static reports — they need the ability to explore multiple business scenarios instantly. Whether it’s evaluating pricing strategies, forecasting operational costs, or measuring profitability under different assumptions, What-If Analysis has become an essential capability for modern enterprises.

At Axxonet, we leverage advanced Python-based tools like Streamlit to build interactive What-If dashboards that allow businesses to simulate outcomes in real time and make data-driven decisions with confidence.

What Is What-If Analysis?

What-If Analysis is a decision-support technique that helps organizations understand how changes in key inputs — such as revenue, unit cost, volume, or operational parameters — impact key metrics like profitability, efficiency, or ROI.

Traditional methods rely heavily on Excel or BI tools. While effective, these approaches are often:

  • Manual and time-consuming
  • Difficult to scale
  • Prone to formula errors
  • Dependent on licensed software
  • Limited in automation and real-time data connectivity

Interactive web-based tools overcome these limitations by making scenario exploration intuitive, visual, and real time.

Why Streamlit?

Streamlit provides a powerful yet simple framework to build secure, interactive, analytical applications without front-end development. We use Streamlit because:

  1. Open-Source & Free
  • No licensing restrictions, unlike Excel or proprietary BI tools.
  2. Extremely Fast Development
  • Widgets, charts, layouts, and logic can be built in minutes using pure Python.
  3. Real-Time Interaction
  • Any parameter change triggers instant recalculation and updates on the screen.
  4. Easy Integration With Databases

Streamlit connects effortlessly with:

  • PostgreSQL
  • MySQL
  • ClickHouse
  • Azure SQL
  • REST APIs
  • Any operational data source

This enables What-If dashboards to pull live, monthly-updated operational averages directly from backend systems.

  5. Automation & Scalability

Streamlit apps can:

  • Auto-refresh values
  • Run simulations at scale
  • Support multiple teams at once
  • Be embedded inside internal systems

  6. Ease of Deployment

Runs on:

  • Local machine
  • Docker container
  • Cloud (AWS, Azure, GCP)
  • Streamlit Community Cloud
  • Enterprise servers

  7. Built-in Support for Charts, KPIs, and PDF Exports

  • Interactive dashboards and exportable insights make decision-making seamless.
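As a sketch of the database integration point above, the snippet below pulls a live operational average from a backend table. It uses Python's built-in sqlite3 as a stand-in for PostgreSQL, MySQL, or ClickHouse (the table and column names are illustrative); in a real Streamlit app, the returned value would seed the default of an input widget such as st.number_input.

```python
# Minimal sketch: pulling a live operational average into a What-If app.
# sqlite3 stands in for an operational database; table/column names are
# illustrative assumptions.
import sqlite3

def monthly_avg_cost(conn: sqlite3.Connection) -> float:
    """Return the average trip cost from the operational 'trips' table."""
    row = conn.execute("SELECT AVG(cost) FROM trips").fetchone()
    return row[0]

# Build a tiny in-memory stand-in for the backend system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (id INTEGER, cost REAL)")
conn.executemany("INSERT INTO trips VALUES (?, ?)",
                 [(1, 100.0), (2, 140.0), (3, 120.0)])

print(monthly_avg_cost(conn))  # → 120.0
```

Because the query runs every time the app refreshes, the dashboard's baseline assumptions track the backend data rather than a stale spreadsheet copy.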

Why Companies Need Interactive What-If Tools

  1. Instant Scenario Simulation
  • Decision-makers can adjust parameters on the fly and instantly see how results change, based on live data from operational systems.
  2. Better Visibility for Strategic Planning
  • Dynamic dashboards help compare multiple business scenarios, enabling more informed choices.
  3. Reduction of Manual Work
  • Automated recalculation eliminates time-consuming spreadsheet work, where data must be extracted from multiple sources and compiled manually, a process that is both slow and error-prone.
  4. Improved Collaboration
  • Teams across finance, logistics, or operations can access the same interactive tool via a browser, using real-time information.

How We Use Streamlit to Build What-If Dashboards

Streamlit provides an elegant framework for building data apps without front-end development.
At Axxonet, we extend Streamlit to create:

  • Clean and interactive user interfaces
  • Dynamic input controls for cost, revenue, and operational parameters
  • Real-time Key Metrics updates
  • Visual charts and insights for faster comprehension
  • Downloadable PDF summaries for easy sharing

The result is a seamless, responsive experience where any input change automatically updates the output metrics and visualisations, driven by live data from the backend database.

Profitability Simulation Dashboard

Below is a conceptual example of the type of dashboard we build:

  • Adjustable inputs for cost components, pricing, and volume
  • Real-time calculation of revenue, costs, and profit
  • Key Metrics widgets for quick interpretation
  • Charts showing cost breakdown, profit distribution, or sensitivity
  • Exportable reports summarizing key assumptions and outcomes

This approach helps businesses experiment with ideas before making real-world decisions — all in a secure browser-based environment.
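The calculation core behind such a dashboard can be sketched in a few lines of Python (the parameter names and formulas below are a simplified illustration, not a client implementation). In Streamlit, each input would come from st.slider or st.number_input, and any change reruns the script, recomputing these metrics instantly.

```python
# Sketch of the calculation core of a profitability What-If dashboard.
# Inputs are illustrative; in Streamlit they would be bound to widgets.

def simulate(price: float, unit_cost: float, volume: int, fixed_cost: float) -> dict:
    """Recompute the key metrics for one business scenario."""
    revenue = price * volume
    variable_cost = unit_cost * volume
    profit = revenue - variable_cost - fixed_cost
    margin = profit / revenue if revenue else 0.0
    return {"revenue": revenue,
            "cost": variable_cost + fixed_cost,
            "profit": profit,
            "margin_pct": round(margin * 100, 1)}

# Two scenarios a decision-maker might compare side by side:
base = simulate(price=25.0, unit_cost=15.0, volume=10_000, fixed_cost=40_000)
bold = simulate(price=27.5, unit_cost=15.0, volume=9_000, fixed_cost=40_000)
print(base)  # base scenario: profit 60000.0, margin 24.0%
print(bold)
```

Wrapping this function in Streamlit widgets and st.metric displays is what turns a static spreadsheet model into a live, shareable simulator.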

Real Industry Use Cases

What-If analysis solutions support a wide range of industries and scenarios:

Financial Services

  • Profitability modelling
  • Interest rate sensitivity
  • Loan pricing scenarios

Logistics & Supply Chain

  • Trip-based cost modelling
  • Driver/vehicle scenario simulation
  • Fuel and toll forecasting

Retail & Consumer Business

  • Price optimization
  • Discount impact analysis

Operations & Planning

  • Resource allocation
  • Budget forecasting

Whether you’re planning next quarter’s financial forecast or optimizing operations, interactive What-If tools provide clarity and confidence.

Architecture Overview

The following diagram reflects the internal operational design of a Streamlit-based what-if simulator.

Our architecture is designed for:

  • Modularity — separate layers for UI, business logic, and calculations
  • Responsiveness — real-time recalculation and instant visual feedback
  • Scalability — deployable on cloud, server, or container environments
  • Security — access-controlled dashboards and isolated computation layers

We use modern Python frameworks and best practices to deliver a smooth experience without exposing internal complexities.

Deployment Options

Local Deployment

streamlit run app.py

Docker Deployment

FROM python:3.10-slim

WORKDIR /app

COPY . /app

RUN pip install streamlit pandas numpy reportlab plotly

EXPOSE 8501

CMD ["streamlit", "run", "app.py", "--server.address=0.0.0.0"]

Cloud Deployment

  • Streamlit Community Cloud
  • Any container-based cloud platform (Azure, AWS, GCP)

Conclusion

Interactive What-If dashboards transform the way organisations evaluate scenarios, forecast outcomes, and make strategic decisions. By combining Streamlit’s powerful UI capabilities with our expertise in analytics and engineering, Axxonet delivers solutions that are:

  • Simple to use
  • Fully customizable
  • Real-time
  • Insight-driven

Businesses no longer need to rely on static spreadsheets — with dynamic What-If simulation, teams can explore opportunities, mitigate risks, and drive smarter decisions faster.

If you would like to enable this capability in your application, please get in touch with us at [email protected] or update your details in the form

References

The following official documentation and resources were referred to:

  1. Streamlit Official Documentation — Widgets, Layout, API Reference
    https://docs.streamlit.io

  2. Streamlit Deployment Documentation — Community Cloud, Docker, Configuration
    https://docs.streamlit.io/streamlit-community-cloud
    https://docs.streamlit.io/deploy/tutorials


Apache Hop Meets GitLab: CICD Automation with GitLab

Introduction

In our previous blog, we discussed Apache HOP in more detail. In case you have missed it, refer to Comparison of and migrating from Pentaho Data Integration PDI/ Kettle to Apache HOP. As a continuation of the Apache HOP article series, here we touch upon how to integrate Apache HOP with GitLab for version management and CI/CD. 

In the fast-paced world of data engineering and data science, organizations deal with massive amounts of data that need to be processed, transformed, and analyzed in real-time. Extract, Transform, and Load (ETL) workflows are at the heart of this process, ensuring that raw data is ingested, cleaned, and structured for meaningful insights. Apache HOP (Hop Orchestration Platform) has emerged as one of the most powerful open-source tools for designing, orchestrating, and executing ETL pipelines, offering a modular, scalable, and metadata-driven approach to data integration. 

However, as ETL workflows become more complex and business requirements evolve quickly, managing multiple workflows can be difficult. This is where Continuous Integration and Continuous Deployment (CI/CD) come into play. By automating the deployment, testing, and version control of ETL pipelines, CI/CD ensures consistency, reduces human intervention, and accelerates the development lifecycle.

This blog post explores Apache HOP integration with GitLab, its key features, and how to leverage it to streamline and manage your Apache HOP workflows and pipelines.

Apache Hop:

Apache Hop (Hop Orchestration Platform) is a robust, open-source data integration and orchestration tool that empowers developers and data engineers to build, test, and deploy workflows and pipelines efficiently. One of Apache Hop’s standout features is its seamless integration with version control systems like Git, enabling collaborative development and streamlined management of project assets directly from the GUI.

GitLab:

GitLab is a widely adopted DevSecOps platform that provides built-in CI/CD capabilities, version control, and infrastructure automation. Integrating GitLab with Apache HOP for ETL development and deployment offers several benefits:

  1. Version Control for ETL Workflows
    • GitLab allows teams to track changes in Apache HOP pipelines, making it easier to collaborate, review, and revert to previous versions when needed.
    • Each change to an ETL workflow is documented, ensuring transparency and traceability in development.
  2. Automated Testing of ETL Pipelines
    • Data pipelines can break due to schema changes, logic errors, or unexpected data patterns.
    • GitLab CI/CD enables automated testing of HOP pipelines before deployment, reducing the risk of failures in production.
  3. Seamless Deployment to Multiple Environments
    • Using GitLab CI/CD pipelines, teams can deploy ETL workflows across different environments (development, staging, and production) without manual intervention.
    • Environment-specific configurations can be managed efficiently using GitLab variables.
  4. Efficient Collaboration & Code Reviews
    • Multiple data engineers can work on different aspects of ETL development simultaneously using GitLab’s branching and merge request features.
    • Code reviews ensure best practices are followed, improving the quality of ETL pipelines.
  5. Rollback and Disaster Recovery
    • If an ETL workflow fails in production, previous stable versions can be quickly restored using GitLab’s versioning and rollback capabilities.
  6. Security and Compliance
    • GitLab provides access control, audit logging, and security scanning features to ensure that sensitive ETL workflows adhere to compliance standards.

Jenkins:

Jenkins, one of the most widely used automation servers, plays a key role in enabling CI/CD by automating build, test, and deployment processes. Stay tuned for our upcoming articles on how to integrate Jenkins with GitLab for managing and deploying Apache HOP artifacts.

In this blog post, we’ll explore how Git actions can be utilized in Apache Hop GUI to manage and track changes to workflows and pipelines effectively. We’ll cover the setup process, common Git operations, and best practices for using Git within Apache Hop.

Manual Git Integration for CI/CD Process (Problem Statement)

Earlier, CI/CD for ETLs was a manual and tedious process. In ETL tools like Pentaho PDI or older versions, we had to manually manage CI/CD for the ETL artefacts (Apache Hop or Pentaho pipelines/transformations and workflows/jobs) with Git by following these summary steps:

  1. Create an Empty Repository
    • Log in to your Git account.
    • Create a new repository and leave it empty (do not add a README, .gitignore, or license file).
  2. Clone the Repository
    • Clone the empty repository to your local system.
    • This will create a local folder corresponding to your repository.
  3. Use the Cloned Folder as Project Home
    • Set the cloned folder as your Apache Hop project home folder.
    • Save all your pipelines (.hpl files), workflows (.hwf files), and configuration files in this folder.

Common Challenges in Pentaho Data Integration (PDI)

Pentaho Data Integration (PDI), also known as Kettle, has been widely used for ETL (Extract, Transform, Load) processes. However, as data workflows became more complex and teams required better collaboration, automation, and version control, Pentaho’s limitations in CI/CD (Continuous Integration/Continuous Deployment) and Git integration became apparent.

1. Lack of Native Git Support

  • PDI lacked built-in Git integration, making version control and collaboration difficult for teams working on large-scale data projects.

2. Manual Deployment Processes

  • Without automated CI/CD pipelines, teams had to manually deploy and migrate transformations, leading to inefficiencies and errors.

3. Limited Workflow Orchestration

  • Handling complex workflows required custom scripting and external tools, increasing development overhead.

4. Scalability Issues

  • PDI struggled with modern cloud-native architectures and containerized deployments, requiring additional customization.

Official Link: https://pentaho-public.atlassian.net/jira/software/c/projects/PDI

Birth of Apache Hop (Solution)

With Apache Hop, we can now connect the Hop project to Git and manage the CI/CD process for our ETLs, resulting in efficient code integration and code management. The need for CI/CD and Git integration in Pentaho Data Integration and other ETL tools led to Apache Hop: to address these limitations, the Apache Hop project was created, evolving from PDI while introducing modern development practices such as:

  • Built-in Git Integration: Enables seamless version control, collaboration, and tracking of changes within the Hop GUI.
  • CI/CD Compatibility: Supports automated testing, validation, and deployment of workflows using tools like Jenkins, GitHub Actions, and GitLab CI/CD.
  • Improved Workflow Orchestration: Provides metadata-driven workflow design with enhanced debugging and visualization.
  • Containerization & Cloud Support: Fully supports Kubernetes, Docker, and cloud-native architectures for scalable deployments.

Official Link: https://hop.apache.org/manual/latest/hop-gui/hop-gui-git.html

Impact of Git and CI/CD in Apache Hop

The integration of Git and Continuous Integration/Continuous Deployment (CI/CD) practices into Apache Hop has significantly transformed the way data engineering teams manage and deploy their ETL workflows.

1. Enhanced Collaboration with Git

Apache Hop’s support for Git allows multiple team members to work on different parts of a data pipeline simultaneously. Each developer can clone the repository, make changes in isolated branches, and submit pull requests for review. Git’s version control enables teams to:

  • Track changes to workflows and metadata over time
  • Review historical modifications and troubleshoot regressions
  • Merge contributions efficiently while minimizing conflicts

This collaborative environment leads to better code quality, transparency, and accountability within the team.

2. Reliable Deployments through CI/CD Pipelines

By integrating Apache Hop with CI/CD tools like Jenkins, GitLab CI, or GitHub Actions, organizations can automate the process of testing, packaging, and deploying ETL pipelines. Benefits include:

  • Automated testing of workflows to ensure stability before production releases
  • Consistent deployment across development, staging, and production environments
  • Rapid iteration cycles, reducing the time from development to delivery

These pipelines reduce human error and enhance the repeatability of deployment processes.
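As a hedged sketch of what such automation can look like, the fragment below outlines a GitLab CI configuration that runs a Hop pipeline headlessly on every push. The image tag, install path, project name, and file paths are illustrative assumptions, not an official Apache Hop template; verify the hop-run options against your Hop version's documentation.

```yaml
# .gitlab-ci.yml (illustrative sketch, not an official template)
stages:
  - test
  - deploy

test-pipelines:
  stage: test
  image: apache/hop:latest            # assumed image tag
  script:
    # Execute a Hop pipeline headlessly so a broken change fails the build.
    # The install path and flags below are assumptions; check the Hop docs.
    - /opt/hop/hop-run.sh --project my-project --file pipelines/smoke-test.hpl --runconfig local

deploy-prod:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    - echo "Promote tested .hpl/.hwf artifacts to the production Hop Server"
```

Keeping the run configuration and environment variables in GitLab (rather than in the pipeline files) mirrors Hop's own separation of logic and configuration.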

3. Improved Agility and Scalability

The combination of Git and CI/CD fosters a modern DevOps culture within data engineering. Teams can:

  • React quickly to changing business requirements
  • Scale solutions across projects and environments with minimal overhead
  • Maintain a centralized repository for configuration and infrastructure-as-code artifacts

This level of agility makes Apache Hop a powerful and future-ready tool for enterprises aiming to modernize their data integration and transformation processes.

Why Use Git with Apache Hop?

Integrating Git with Apache Hop offers several benefits:

  1. Version Control
    • Track changes in pipelines and workflows with Git’s version history.
    • Revert to previous versions when needed.
  2. Collaboration
    • Multiple users can work on the same repository, ensuring smooth collaboration.
    • Resolve conflicts using Git’s merge and conflict resolution features.
  3. Centralized Management
    • Store pipelines, workflows, and associated metadata in a Git repository for centralized access.
  4. Branch Management
    •  Experiment with new features or workflows in isolated branches.
  5. Rollback
    •  Revert to earlier versions of workflows in case of issues.

By incorporating Git into your Apache Hop workflow, you ensure a smooth and organized development process.

Git Actions in Apache Hop GUI

Apache Hop GUI provides a range of Git-related actions to simplify version control tasks. 

These actions can be accessed from the toolbar or context menus within the application.

  1. Committing Changes
  • After modifying a workflow or pipeline, save the changes.
  • Use the Commit option in the GUI to add a descriptive message for the changes.
  2. Pulling Updates
  • Fetch the latest changes from the remote repository using the Pull option.
  • Resolve any conflicts directly in the GUI or using an external merge tool.
  3. Pushing Changes
  • Once you commit changes locally, use the Push option to sync them with the remote repository.
  4. Branching and Merging
  • Create new branches for feature development or experimentation.
  • Merge branches into the main branch to integrate completed features.
  5. Viewing History
  • View the commit history to understand changes made to workflows or pipelines over time.
  • Use the diff viewer to compare changes between commits.
  6. Reverting Changes
  • If a workflow is not functioning as expected, revert to a previous commit directly from the GUI.

In addition to adding and committing files, Apache Hop’s File Explorer perspective allows you to manage other Git operations:

  • Pull: To retrieve the latest changes from your remote repository, click the Git Pull button in the toolbar. This ensures you’re always working with the most up-to-date version of the project.
  • Revert: If you need to discard changes to a file or folder, select the file and click Git Revert in the Git toolbar.
  • Visual Diff: Apache Hop allows you to visually compare different versions of a file. Click the Git Info button, select a specific revision, and use the Visual Diff option to see the changes between two versions of a pipeline or workflow. This opens two tabs, showing the before and after states of your project.

Setting Up Git in Apache Hop GUI (CI/CD)

Apache Hop (Hop Orchestration Platform) provides Git integration to help users manage their workflows, pipelines, and metadata effectively. This integration allows version control of Hop projects, making it easier to track changes, collaborate, and revert to previous versions.

Apache Hop supports Git integration to track metadata changes such as:

  • Pipelines (ETL workflows)
  • Workflows (job orchestration)
  • Project metadata (variables, environment settings)
  • Database connections (stored securely)

With Git, users can:

  • Commit and push changes to a repository
  • Revert changes
  • Collaborate with other team members
  • Maintain a version history of Hop projects

Prerequisites

  • Install Apache Hop on your system.
  • Set up Git on your machine and configure it with your credentials (username and email).
  • Ensure you have access to a Git repository (local or remote).
  • Create or clone a repository to store Apache Hop files.

Create a GitLab access token to authenticate and push the code artifacts from Apache HOP.

Configure Git in Apache Hop

Here are the steps for configuring the CI/CD process using GitLab in Apache HOP.

Step 1: Launch Apache Hop GUI

Step 2: Navigate to the Preferences Menu

Step 3: Locate the Version Control Settings and Configure the Path to your Git Executable

Step 4: Optionally, Specify Default Repositories and Branch Names for your Projects

Step 5: Initialize a Git Repository

  • Create a new project in Apache Hop.
  • Open and verify the project’s folder in your file system.
  • Use Git to initialize the repository: run the init command in the Hop project folder.

    $ git init

  • Add a .gitignore file to exclude temporary files generated by Hop, for example:

    *.log
    *.bak

Step 6: Git Info

  • After successfully initializing Git on your local Hop project folder, use the Git Info button to view the repository status.

Step 7: Adding Files to Git

  • Select the file(s) you want to add to Git from the File Explorer.
  • In the toolbar at the top, you’ll see the Git Add button. Clicking this will stage the selected files, meaning they’re ready to be committed to your Git repository.
  • Alternatively, right-click the file in the File Explorer and select Git Add.

Once staged, the file will change from red to blue, indicating that it’s ready to be committed.

Step 8: Committing Changes

  • Click the Git Commit button from the toolbar.
  • Select the files you’ve staged from the File Explorer that you want to include in the commit.
  • A dialog will prompt you to enter a commit message; this message should summarize the changes you’ve made. Confirm the commit.

Once committed, the blue files will return to a neutral color.

  • After a successful commit, the commit details are shown in the Revision tab.

Step 9: Connect to a Remote Repository

  • Add a remote repository URL:

    git remote add origin <repository_url>

  • Push the local repository to the remote repository:

    git push -u origin main

Step 10: Pushing Changes to a Remote Repository

  • In the Git toolbar, click the Git Push button.
  • Apache Hop will prompt you for your Git username and password. Enter the correct authentication details.
  • A confirmation message will appear, indicating that the push was successful.

GitLab / GitHub Operation

In the previous section of this blog, we have seen how to practically initialize a new Git project, commit the ETL files, and push the ETLs and config files from the Apache HOP GUI tool. This section will show how to manage merges and approve merge requests in GitLab after sending the push request from Apache HOP.

  1. Go to the GitLab project. 

Once the code is pushed from Hop, you should be able to see the merge request notification as shown below:

2. Create a Merge request in the GitLab as shown below: 

3. After the merge request is sent, the corresponding user should review the code artifacts to Approve and Merge the request.

4. After the merge request is approved we should be able to see the merged artifacts in the GitLab project as shown in the screenshot below:

Note: Save Pipelines and Workflows:

  • Store your Hop .hpl (pipelines) and .hwf (workflows) files in the Git repository.
  • Use this folder as your main project path.

In the upcoming blogs, we will see how to streamline the Git operations (approval and merging) and deploy the merged development files to new environments (Dev -> QA -> Production) using Jenkins to implement more robust CI/CD operations. Stay tuned for the upcoming blog releases.

Best Practices for Git in Apache Hop

Apache Hop (Hop Orchestration Platform) provides seamless integration with Git, enabling version control for pipelines and workflows. This integration allows teams to collaborate effectively, track changes, and manage multiple versions of ETL processes.

  1. Use Descriptive Commit Messages
    • Ensure commit messages clearly describe the changes made.
    • Example: “Added error handling to data ingestion pipeline.”
  2. Commit Frequently
    • Break changes into small, logical units and commit regularly.
  3. Leverage Branching
    • Use branches for new features, bug fixes, or experimentation.
    • Merge branches only after thorough testing.
  4. Collaborate Effectively
    • Use pull requests to review and discuss changes with your team before merging.
  5. Keep the Repository Clean
    • Use a .gitignore file to exclude temporary files and logs generated by Hop.

Conclusion

Using Git with Apache Hop GUI combines the power of modern version control with an intuitive data integration platform. By integrating Git into your ETL workflows, you’ll enhance collaboration and organization and improve your projects’ reliability and maintainability. Integrating GitLab CI with Apache HOP revolutionizes ETL workflow management by automating testing, deployment, and monitoring. This Continuous Integration (CI) ensures that data pipelines remain reliable, scalable, and maintainable in the ever-evolving landscape of data engineering. By embracing CI/CD best practices, organizations can enhance efficiency, reduce downtime, and accelerate the delivery of high-quality data insights. 

Start leveraging Git actions in Apache Hop today to streamline your data orchestration projects. 

Up Next

In the next part of this CI/CD blog series, we will talk more about Jenkins for the Continuous Deployment side of the CI/CD process. Stay tuned.

Would you like us to officially set up Hop with GitLab & Jenkins CI/CD for your new project?

Official Links for Apache Hop and GitLab Integration

When writing this blog about Apache Hop and its integration with GitLab for CI/CD, the following official documentation and resources were referred to:

1. Apache Hop Official Resources

2. GitLab CI/CD Official Resources

3. DevOps & CI/CD Best Practices

These links will help you explore Apache Hop, GitLab, and CI/CD automation in more depth. 

Other posts in the Apache HOP Blog Series


Why choose Apache Druid over Snowflake

Introduction

In our previous blog, Apache Druid Integration with Apache Superset, we talked about Apache Druid’s integration with Apache Superset. In case you have missed it, we recommend reading it before continuing with the Apache Druid blog series. In the Exploring Apache Druid: A High-Performance Real-Time Analytics Database blog post, we covered Apache Druid in more detail.

In this blog, we talk about the advantages of Druid over Snowflake. Apache Druid and Snowflake are both high-performance databases, but they serve different use cases, with Druid excelling in real-time analytics and Snowflake in traditional data warehousing.

Snowflake

Snowflake is a cloud-based data platform and data warehouse service that has gained significant popularity due to its performance, scalability, and ease of use. It is built from the ground up for the cloud and offers a unified platform for data warehousing, data lakes, data engineering, data science, and data application development.

Apache Druid

Apache Druid is a high-performance, distributed real-time analytics database designed to handle fast, interactive queries on large-scale datasets, especially those with a time component. It is widely used for powering business intelligence (BI) dashboards, real-time monitoring systems, and exploratory data analytics.

Here are the advantages of Apache Druid over Snowflake, particularly in real-time, time-series, and low-latency analytics:

(a) Real-Time Data Ingestion

    • Apache Druid: Druid is designed for real-time data ingestion, making it ideal for applications where data needs to be available for querying as soon as it’s ingested. It can ingest data streams from sources like Apache Kafka or Amazon Kinesis and make the data immediately queryable.
    • Snowflake: Snowflake is primarily a batch-processing data warehouse. While Snowflake can load near real-time data using external tools or connectors, it is not built for real-time streaming data ingestion like Druid.

Advantage: Druid is superior for real-time analytics, especially for streaming data from sources like Kafka and Kinesis.
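To make the streaming-ingestion point concrete, here is a rough sketch of the Kafka supervisor spec Druid consumes (topic, datasource, and broker address are hypothetical placeholders; the exact field set should be confirmed against the Druid documentation for your version):

```python
# Minimal sketch of a Druid Kafka supervisor spec. Topic/datasource/broker
# names are hypothetical; verify fields against the Druid docs for your version.
import json

def kafka_supervisor_spec(topic, datasource, bootstrap_servers):
    return {
        "type": "kafka",
        "spec": {
            "ioConfig": {
                "type": "kafka",
                "topic": topic,
                "consumerProperties": {"bootstrap.servers": bootstrap_servers},
            },
            "dataSchema": {
                "dataSource": datasource,
                "timestampSpec": {"column": "timestamp", "format": "iso"},
                "dimensionsSpec": {"useSchemaDiscovery": True},
                "granularitySpec": {"segmentGranularity": "hour"},
            },
        },
    }

spec = kafka_supervisor_spec("clickstream", "clicks", "kafka:9092")
# The spec would be POSTed to http://<overlord>/druid/indexer/v1/supervisor
print(json.dumps(spec, indent=2))
```

Once the supervisor is submitted, Druid begins consuming the topic and the rows become queryable almost immediately, which is the contrast with batch-oriented loading drawn above.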

(b) Sub-Second Query Latency for Interactive Analytics

    • Apache Druid: Druid is optimized for sub-second query latency, which makes it highly suitable for interactive dashboards and ad-hoc queries. Its columnar storage format and advanced indexing techniques (such as bitmap and inverted indexes) enable very fast query performance on large datasets, even in real time.
    • Snowflake: Snowflake is highly performant for large-scale analytical queries, but it is designed for complex OLAP (Online Analytical Processing) queries over historical data, and query latency may be higher for low-latency, real-time analytics, particularly for streaming or fast-changing datasets.

Advantage: Druid offers better performance for low-latency, real-time queries in use cases like interactive dashboards or real-time monitoring.
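To give an intuition for why bitmap indexes make filtering fast, here is a toy Python sketch (illustrative only, not Druid code): each distinct value maps to a bitmap of row positions, so a conjunctive filter becomes a single bitwise AND instead of a row scan.

```python
# Toy bitmap index: each distinct column value -> integer bitmask of row ids.
# A filter like (country = 'US' AND device = 'mobile') becomes one bitwise AND.
from collections import defaultdict

def build_bitmap_index(rows, column):
    index = defaultdict(int)
    for row_id, row in enumerate(rows):
        index[row[column]] |= 1 << row_id   # set the bit for this row
    return index

rows = [
    {"country": "US", "device": "mobile"},
    {"country": "US", "device": "desktop"},
    {"country": "IN", "device": "mobile"},
]
country_idx = build_bitmap_index(rows, "country")
device_idx = build_bitmap_index(rows, "device")

# Rows matching country=US AND device=mobile:
mask = country_idx["US"] & device_idx["mobile"]
matches = [i for i in range(len(rows)) if mask >> i & 1]
print(matches)  # [0]
```

Druid combines this idea with columnar storage, so a filtered aggregation touches only the matching rows of the referenced columns.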

(c) Time-Series Data Optimization

    • Apache Druid: Druid is natively optimized for time-series data, with its architecture built around time-based partitioning and segmenting. This allows for efficient querying and storage of event-driven data (e.g., log data, clickstreams, IoT sensor data).
    • Snowflake: While Snowflake can store and query time-series data, it does not have the same level of optimization for time-based queries as Druid. Snowflake is better suited for complex, multi-dimensional queries rather than the high-frequency, time-stamped event queries where Druid excels.

Advantage: Druid is more optimized for time-series analytics and use cases where data is primarily indexed and queried based on time.

(d) Streaming Data Support

    • Apache Druid: Druid has native support for streaming data platforms like Kafka and Kinesis, enabling direct ingestion from these sources and offering real-time visibility into data as it streams in.
    • Snowflake: Snowflake supports streaming data through tools like Snowpipe, but it works better with batch data loading or micro-batch processing. Its streaming capabilities are generally less mature and lower-performing compared to Druid’s real-time streaming ingestion.

Advantage: Druid has stronger native streaming data support and is a better fit for real-time analytics from event streams.

(e) Low-Latency Aggregations

    • Apache Druid: Druid excels in performing low-latency aggregations on event data, making it ideal for use cases that require real-time metrics, summaries, and rollups. For instance, it’s widely used in monitoring, fraud detection, ad tech, and IoT, where data must be aggregated and queried in near real-time.
    • Snowflake: While Snowflake can aggregate data, it is more optimized for batch-mode processing, where queries are run over large datasets. Performing continuous, real-time aggregations would require external tools and more complex architectures.

Advantage: Druid is better suited for real-time aggregations and rollups on streaming or event-driven data.

(f) High Cardinality Data Handling

    • Apache Druid: Druid’s advanced indexing techniques, like bitmap indexes and sketches (e.g., HyperLogLog), allow it to efficiently handle high-cardinality data (i.e., data with many unique values). This is important for applications like ad tech where unique user IDs, URLs, or click events are frequent.
    • Snowflake: Snowflake performs well with high-cardinality data in large-scale analytical queries, but its query execution model is generally more suited to aggregated batch processing rather than fast, high-cardinality lookups and filtering in real time.

Advantage: Druid has better indexing for real-time queries on high-cardinality datasets.
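The idea behind sketches like HyperLogLog can be illustrated with a compact, unoptimized toy version: a few hundred bytes of registers estimate the distinct count of an arbitrarily large stream, at the cost of a few percent of error.

```python
# Toy HyperLogLog: estimates distinct counts with fixed memory (m registers).
# Illustration only; Druid uses production-grade sketch implementations.
import hashlib

class HyperLogLog:
    def __init__(self, b=8):
        self.b = b                       # 2^b registers
        self.m = 1 << b
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        x = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
        idx = x >> (64 - self.b)                      # first b bits -> register
        rest = x & ((1 << (64 - self.b)) - 1)
        rank = (64 - self.b) - rest.bit_length() + 1  # leftmost 1-bit position
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self):
        z = sum(2.0 ** -r for r in self.registers)
        return self.alpha * self.m * self.m / z

hll = HyperLogLog()
for user_id in range(10_000):   # 10k distinct "user IDs"
    hll.add(user_id)
    hll.add(user_id)            # duplicates don't change the estimate
print(round(hll.estimate()))    # within a few percent of 10,000
```

This is why high-cardinality dimensions like user IDs or URLs stay cheap to aggregate in Druid: the sketch size is fixed regardless of how many unique values flow through.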

(g) Real-Time Analytics for Operational Use Cases

    • Apache Druid: Druid was built to serve operational analytics use cases where real-time visibility into systems, customers, or events is critical. Its ability to handle fast-changing data and return instant insights makes it ideal for powering monitoring dashboards, business intelligence tools, or real-time decision-making systems.
    • Snowflake: Snowflake is an excellent data warehouse for historical and batch-oriented analytics but is not optimized for operational or real-time analytics where immediate insights from freshly ingested data are needed.

Advantage: Druid is better suited for operational analytics and real-time, event-driven systems.

(h) Cost-Effectiveness for Real-Time Workloads

    • Apache Druid: Druid’s open-source nature means there are no licensing fees, and you only pay for the infrastructure it runs on (if you use a managed service or deploy it on the cloud). For organizations with significant real-time analytics workloads, Druid can be more cost-effective than cloud-based data warehouses, which charge based on storage and query execution.
    • Snowflake: Snowflake’s pricing is based on compute and storage usage, with charges for data loading, querying, and storage. For continuous, high-frequency querying (such as real-time dashboards), these costs can add up quickly.

Advantage: Druid can be more cost-effective for real-time analytics, especially in high-query environments with constant data ingestion.

(i) Schema Flexibility and Semi-Structured Data Handling

    • Apache Druid: Druid supports schema-on-read and is highly flexible in terms of handling semi-structured data such as JSON or log formats. This flexibility is particularly useful for use cases where the schema may evolve over time or when working with less structured data types.
    • Snowflake: Snowflake also handles semi-structured data like JSON, but it requires more structured schema management compared to Druid’s flexible schema handling, which makes Druid more adaptable to changes in data format or structure.

Advantage: Druid offers greater schema flexibility for semi-structured data and evolving datasets.

(j) Open-Source and Vendor Independence

    • Apache Druid: Druid is open-source, which gives users full control over deployment, management, and scaling without being locked into a specific vendor. This makes it a good choice for organizations that want to avoid vendor lock-in and have the flexibility to self-manage or choose different cloud providers.
    • Snowflake: Snowflake is a proprietary, cloud-based data warehouse. While Snowflake offers excellent cloud capabilities, users are tied to Snowflake’s platform and pricing model, which may not be ideal for organizations preferring more control or customization in their infrastructure.

Advantage: Druid provides more freedom and control as an open-source platform, allowing for vendor independence.

When to Choose Apache Druid over Snowflake

  • Real-Time Streaming Analytics: If your use case involves high-frequency event data or real-time streaming analytics (e.g., user behavior tracking, IoT sensor data, or monitoring dashboards), Druid is a better fit.
  • Interactive, Low-Latency Queries: For interactive dashboards requiring fast response times, especially with frequently updated data, Druid’s sub-second query performance is a significant advantage.
  • Time-Series and Event-Driven Data: Druid’s architecture is designed for time-series data, making it superior for analyzing log data, time-stamped events, and similar data.
  • Operational Analytics: Druid excels in operational analytics where real-time data ingestion and low-latency insights are needed for decision-making.
  • Cost-Effective Real-Time Workloads: For continuous real-time querying and analysis, Druid’s cost structure may be more affordable compared to Snowflake’s compute-based pricing.

Data Ingestion, Data Processing, and Data Querying

Apache Druid

Data Ingestion and Data Processing

Apache Druid is a high-performance real-time analytics database designed for fast querying of large datasets. One common use case is ingesting a CSV file into Druid to enable interactive analysis. In this guide, we will walk through each step to upload a CSV file to Apache Druid, covering both the Druid Console and the ingestion configuration details.

Note: We have used the default configuration settings that come with the trial edition of Apache Druid and Snowflake for this exercise. Results may vary depending on the configuration of the applications.

Prerequisites:

(a) Ensure your CSV file is:

    • Properly formatted, with headers in the first row.
    • Clean of inconsistencies (e.g., missing data or malformed values).
    • Stored locally or accessible via a URL if uploading from a remote location.

(b) Launch the Apache Druid Console.

(c) Start your Apache Druid cluster if it is not already running.

(d) Open the Druid Console by navigating to its URL (default: http://localhost:8888).

(e) Load Data: Select the Datasources Grid.

Create a New Data Ingestion Task

(a) Navigate to the Ingestion Section:

    • In the Druid Console, click on Data in the top navigation bar.
    • Select Load data to start a new ingestion task.

(b) Choose the Data Source:

    • Select Local disk if the CSV file is on the same server as the Druid cluster.
    • Select HTTP(s) if the file is accessible via a URL.
    • Choose Amazon S3, Google Cloud Storage, or other options if the file is stored in a cloud storage service.

(c) Upload or Specify the File Path:

    • For local ingestion: Provide the absolute path to the CSV file.
    • For HTTP ingestion: Enter the file’s URL.

(d) Start a New Batch Spec: We are using a new file for processing.

(e) Connect and parse raw data: Select Local Disk option.

(f) Specify the Data: Select the base directory of the file and choose the file type you are using to process. 

  •  File Format:
    • Select CSV as the file format.
    • If your CSV uses a custom delimiter (e.g., ;), specify it here.
  • Parse Timestamp Column:
    • Druid requires a timestamp column to index the data.
    • Select the column containing the timestamp (e.g., timestamp).
    • Specify the timestamp format if it differs from ISO 8601 (e.g., yyyy-MM-dd HH:mm:ss).
  •  Preview Data:
    • The console will show a preview of the parsed data.
    • Ensure that all columns are correctly identified.

(g) Connect: Review the connection details.

(h) Parse Data: At this stage, you should see the data parsed in Druid.

(i) Parse Time: At this stage, you should see how Druid parses the timestamp details.

(j) Transform: At this stage, Druid allows you to perform some fundamental transformations.

(k) Filter: At this stage, Druid allows you to apply filter conditions to your data.

(l) Configure Schema: At this stage, we configure the schema details for the file we uploaded to Druid.

  • Define Dimensions and Metrics:
    • Dimensions are fields you want to filter or group by (e.g., category).
    • Metrics are fields you want to aggregate (e.g., value).
  • Primary Timestamp:
    • Confirm the primary timestamp field.
  • Partitioning and Indexing:
    • Select a time-based partitioning scheme, such as day or hour.
    • Choose indexing options like bitmap or auto-compaction if needed.

(m) Partition: At this stage, Druid allows you to configure the date-specific granularity details.

(n) Tune:

    • Max Rows in Memory:
      • Specify the maximum number of rows to store in memory during ingestion.
    • Segment Granularity:
      • Define how data is divided into segments (e.g., daily, hourly).
    • Partitioning and Indexing:
      • Configure the number of parallel tasks for ingestion if working with large datasets.

(o) Publish:

(p) Edit Spec:

(q) Submit Task:

    1. Generate the Ingestion Spec:
      • Review the auto-generated ingestion spec JSON in the console.
      • Edit the JSON manually if you need to add custom configurations.
    2. Submit the Task:
      • Click Submit to start the ingestion process.
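For reference, the JSON shown in the Edit Spec step follows roughly this shape. The sketch below assembles a minimal batch (index_parallel) spec for a local CSV file; the base directory, file name, datasource, and timestamp column are hypothetical, and the console-generated spec remains the authoritative version.

```python
# Rough shape of a batch (index_parallel) ingestion spec for a local CSV file.
# Paths, datasource, and column names are hypothetical placeholders.
def csv_ingestion_spec(base_dir, csv_file, datasource, timestamp_col="timestamp"):
    return {
        "type": "index_parallel",
        "spec": {
            "ioConfig": {
                "type": "index_parallel",
                "inputSource": {"type": "local", "baseDir": base_dir,
                                "filter": csv_file},
                "inputFormat": {"type": "csv", "findColumnsFromHeader": True},
            },
            "dataSchema": {
                "dataSource": datasource,
                "timestampSpec": {"column": timestamp_col, "format": "auto"},
                "granularitySpec": {"segmentGranularity": "day"},
            },
            "tuningConfig": {"type": "index_parallel"},
        },
    }

spec = csv_ingestion_spec("/data", "customer.csv", "customer")
# Submitting via the console's Submit button POSTs a spec like this to the
# Overlord's task endpoint (/druid/indexer/v1/task).
```

Editing the JSON directly is useful when you need options the wizard does not surface, such as custom tuning parameters.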

(r) Monitor the Ingestion Process:

    1. Navigate to the Tasks section in the Druid Console.
    2. Monitor the progress of the ingestion task.
    3. If the task fails, review the logs for errors, such as incorrect schema definitions or file path issues.

Druid processed the CSV load of 1 million rows in 33 seconds.

Data Querying

Query the Ingested Data 

    1. Navigate to the Query Tab:
      • In the Druid Console, go to Query and select your new data source.
    2. Write and Execute Queries:
      • Use Druid SQL or the native JSON query language to interact with your data.

Example SQL Query:
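(The original query screenshot did not survive; the following representative Druid SQL query, with hypothetical column names against the ingested customer datasource, stands in for it, along with the JSON payload that Druid's SQL endpoint expects.)

```python
# Representative Druid SQL query (column names are hypothetical) and the JSON
# payload accepted by Druid's SQL endpoint (POST /druid/v2/sql).
import json

sql = """
SELECT city, COUNT(*) AS customers
FROM customer
GROUP BY city
ORDER BY customers DESC
LIMIT 10
"""

payload = json.dumps({"query": sql})
# e.g. POST this payload to http://localhost:8888/druid/v2/sql
# with Content-Type: application/json
```

The same query can be typed directly into the console's Query tab.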

Query performance: querying and rendering 1 million rows took 15 ms.

    1. Visualize Results:
      • View the query results directly in the console, or connect Druid to a visualization tool like Apache Superset. In an upcoming blog we will see how to visualize this ingested data using Apache Superset; meanwhile, our earlier posts Exploring Apache Druid: A High-Performance Real-Time Analytics Database and Apache Druid Integration with Apache Superset discuss Apache Druid in more detail, and we recommend reading them before continuing the Apache Druid blog series.

Uploading data sets to Apache Druid is a straightforward process, thanks to the intuitive Druid Console. By following these steps, you can ingest your data, configure schemas, and start analyzing it in no time. Whether you’re exploring business metrics, performing real-time analytics, or handling time-series data, Apache Druid provides the tools and performance needed to get insights quickly.

Snowflake

1. Log in to the Snowflake application using your credentials.

2. Upload the CSV file to the Server.

3. The uploaded “customer.csv” file has 1 million rows and is around 166 MB in size.

4. Specify the Snowflake Database.

5. Specify the Snowflake Table. 

6. The CSV data is being processed into the specified Snowflake Database and table.

7. Snowflake allows us to format or edit the metadata information before loading the data into Snowflake DB. Update the details as per the data and click on the Load button.

8. Snowflake successfully loaded the customer data, taking 45 seconds to process the CSV. That is about 12 seconds longer than Apache Druid took to load, transform, and process the same 1 million records.

9. Query performance for rendering 1 million rows is 20 ms, about 5 ms slower than Apache Druid for querying the same 1 million records.

Conclusion

While Snowflake is a powerful, cloud-native data warehouse with strengths in batch processing, historical data analysis, and complex OLAP queries, Apache Druid stands out in scenarios where real-time, low-latency analytics are needed. Druid’s open-source nature, time-series optimizations, streaming data support, and operational focus make it the better choice for real-time analytics, event-driven applications, and fast, interactive querying on large datasets.

Apache Druid FAQ

What is Apache Druid used for?

Apache Druid is used for real-time analytics and fast querying of large-scale datasets. It's particularly suitable for time-series data, clickstream analytics, operational intelligence, and real-time monitoring.

What ingestion methods does Druid support?

Druid supports:

  • Streaming ingestion: Data from Kafka, Kinesis, etc., is ingested in real time.
  • Batch ingestion: Data from Hadoop, S3, or other static sources is ingested in batches.

How does Druid achieve fast query performance?

Druid uses:

  • Columnar storage: Optimizes storage for analytic queries.
  • Advanced indexing: Bitmap and inverted indexes speed up filtering and aggregation.
  • Distributed architecture: Spreads query load across multiple nodes.

Can Druid be used for transactional (OLTP) workloads?

No. Druid is optimized for OLAP (Online Analytical Processing) and is not suitable for transactional use cases.

How does Druid store data?

  • Data is segmented into immutable chunks and stored in Deep Storage (e.g., S3, HDFS).
  • Historical nodes cache frequently accessed segments for faster querying.

What query languages does Druid support?

  • SQL: Provides a familiar interface for querying.
  • Native JSON-based query language: Offers more flexibility and advanced features.

What are Druid’s main components?

  • Broker: Routes queries to appropriate nodes.
  • Historical nodes: Serve immutable data.
  • MiddleManager: Handles ingestion tasks.
  • Overlord: Coordinates ingestion processes.
  • Coordinator: Manages data availability and balancing.

What tools does Druid integrate with?

Druid integrates with tools like:

  • Apache Superset
  • Tableau
  • Grafana
  • Stream ingestion tools (Kafka, Kinesis)
  • Batch processing frameworks (Hadoop, Spark)

Is Apache Druid open source?

Yes, it’s open-source and distributed under the Apache License 2.0.

What are some alternatives to Apache Druid?

Alternatives include:

  • ClickHouse
  • Elasticsearch
  • BigQuery
  • Snowflake

Whether you're exploring real-time analytics or need help getting started, feel free to reach out!

Watch the Apache Blog Series

Stay tuned for the upcoming Apache Blog Series:

  1. Exploring Apache Druid: A High-Performance Real-Time Analytics Database
  2. Unlocking Data Insights with Apache Superset
  3. Streamlining Apache HOP Workflow Management with Apache Airflow
  4. Comparing and migrating from Pentaho Data Integration (PDI/Kettle) to Apache HOP
  5. Apache Druid Integration with Apache Superset
  6. Why choose Apache Druid over Vertica
  7. Why choose Apache Druid over Snowflake
  8. Why choose Apache Druid over Google Big Query
  9. Integrating Apache Druid with Apache Superset for Realtime Analytics

Apache Druid Integration with Apache Superset

Introduction

In our previous blog, Exploring Apache Druid: A High-Performance Real-Time Analytics Database, we discussed Apache Druid in more detail. In case you have missed it, it is recommended that you read it before continuing the Apache Druid blog series. In this blog, we talk about integrating Apache Superset and Apache Druid.

Apache Superset and Apache Druid are both open-source tools that are often used together in data analytics and business intelligence workflows. Here’s an overview of each and their relationship.

Apache Superset

Apache Superset is an open-source data exploration and visualization platform developed by Airbnb and later donated to the Apache Software Foundation. It is now a top-level Apache project, widely adopted across industries for data analytics and visualization.

  1. Type: Data visualization and exploration platform.
  2. Purpose: Enables users to create dashboards, charts, and reports for data analysis.
  3. Features:
    1. User-friendly interface for querying and visualizing data.
    2. Wide variety of visualization types (charts, maps, graphs, etc.).
    3. SQL Lab: A rich SQL IDE for writing and running queries.
    4. Extensible: Custom plugins and charts can be developed.
    5. Role-based access control and authentication options.
    6. Integrates with multiple data sources, including databases, warehouses, and big data engines.
  4. Use Cases:
    1. Business intelligence (BI) dashboards.
    2. Interactive data exploration.
    3. Monitoring and operational analytics.

Apache Druid

Apache Druid is a high-performance, distributed real-time analytics database designed to handle fast, interactive queries on large-scale datasets, especially those with a time component. It is widely used for powering business intelligence (BI) dashboards, real-time monitoring systems, and exploratory data analytics.

  1. Type: Real-time analytics database.
  2. Purpose: Designed for fast aggregation, ingestion, and querying of high-volume event streams.
  3. Features:
    1. Columnar storage format: Optimized for analytic workloads.
    2. Real-time ingestion: Can process and make data available immediately.
    3. Fast queries: High-performance query engine for OLAP (Online Analytical Processing).
    4. Scalability: Handles massive data volumes efficiently.
    5. Support for time-series data: Designed for datasets with a time component.
    6. Flexible data ingestion: Supports batch and streaming ingestion from sources like Kafka, Hadoop, and object stores.
  4. Use Cases:
    1. Real-time dashboards.
    2. Clickstream analytics.
    3. Operational intelligence and monitoring.
    4. Interactive drill-down analytics.

How They Work Together

  1. Integration: Superset can connect to Druid as a data source, enabling users to visualize and interact with the data stored in Druid.
  2. Workflow:
    1. Data Ingestion: Druid ingests large-scale data in real-time or batches from various sources.
    2. Data Querying: Superset uses Druid’s fast query capabilities to retrieve data for visualizations.
    3. Visualization: Superset presents the data in user-friendly dashboards, allowing users to explore and analyze it.

Key Advantages of Using Them Together

  1. Speed: Druid’s real-time and high-speed querying capabilities complement Superset’s visualization features, enabling near real-time analytics.
  2. Scalability: The combination is suitable for high-throughput environments where large-scale data analysis is needed.
  3. Flexibility: Supports a wide range of analytics use cases, from real-time monitoring to historical data analysis.

Use Cases of Apache Druid and Apache Superset

  1. Fraud Detection: Analyzing transactional data for anomalies.
  2. Real-Time Analytics: Monitoring website traffic, application performance, or IoT devices in real-time.
  3. Operational Intelligence: Tracking business metrics like revenue, sales, or customer behaviour.
  4. Clickstream Analysis: Analyzing user interactions and behaviour on websites and apps.
  5. Log and Event Analytics: Parsing and querying log files or event streams.

Importance of Integrating Apache Druid with Apache Superset

Integrating Apache Druid with Apache Superset opens many doors for organizations that need high-scale data analysis and interactive visualization. Here’s why this combination is so powerful:

  1. High-Performance Analytics:

Druid’s Speed: Apache Druid is optimized for low-latency queries and fast aggregation over large datasets. Those quick responses let Superset deliver responsive, production-quality visualizations.

Real-Time Data: Druid supports real-time data ingestion, allowing Superset dashboards to visualise near-live data streams, making it suitable for monitoring and operational analytics.

  2. Advanced Query Capabilities:

Complex Aggregations: Druid’s ability to handle complex OLAP-style aggregations and filtering ensures that Superset can provide rich and detailed visualizations.

Native Time-Series Support: Druid’s in-built time-series functionalities allow seamless integration for Superset’s time-series charts and analysis.

  3. Scalability:

Designed for Big Data: Druid is highly scalable and can manage billions of rows efficiently. Together with Superset, dashboards scale easily to high volumes of data.

Distributed Architecture: Druid’s distributed nature ensures Superset performs reliably even under high load or many concurrent user sessions.

  4. Simplified Data Exploration:

Columnar Storage: Druid’s columnar storage format greatly accelerates exploratory data analysis in Superset, since it can efficiently scan only the selected columns.

Pre-Aggregated Data: Superset can take advantage of Druid’s roll-up and pre-aggregation mechanisms to reduce query cost and improve visualization performance.

  5. Flawless Integration:

Built-In Connector: Superset integrates natively with Druid, so there is little implementation hassle. Users can query Druid directly and curate dashboards with minimal configuration, and non-technical users can run SQL-like queries (Druid SQL) within the interface.

  6. Advanced Features:

Granular Access Control: When using Druid with Superset, it is possible to enforce row-level security and user-based data access, ensuring that sensitive data is only accessible to authorized users.

Segment-Level Optimization: Druid’s segment-based storage enables Superset to optimize queries for specific data slices.

  7. Cost Efficiency:

Efficient Resource Utilization: Druid’s architecture minimizes storage and compute costs while providing high-speed analytics, ensuring that Superset dashboards run efficiently.

  8. Real-Time Monitoring and BI:

Operational Dashboards: In the case of monitoring systems, website traffic, or application logs, Druid’s real-time capabilities make the dashboards built in Superset very effective for real-time operational intelligence.

Streaming Data Support: Druid supports ingestion from streaming data sources such as Kafka, enabling Superset to visualize live data with minimal delay.

Prerequisites

  1. The Superset Docker Container or instance should be up and running.
  2. Installation of pydruid in Superset Container or instance.

Installing PyDruid

1. Enable Druid-Specific SQL Queries

PyDruid is the Python library that serves as a client for Apache Druid. This allows Superset to: 

  1. Run Druid SQL queries efficiently.
  2. Access Druid’s powerful querying capabilities directly from Superset’s SQL Lab and dashboards.

Without this client, Superset cannot communicate with the Druid database, resulting in partial or missing functionality.

2. Native Connection Support

Superset is largely dependent upon PyDruid for:

  1. Establishing seamless connections to Apache Druid.
  2. Translating Superset queries into the Druid-native SQL or JSON that the Druid engine understands.

3. Efficient Data Fetching

PyDruid optimizes data fetching from Druid by:

  1. Managing Druid’s query API for aggregation, filtering, and time-series analysis.
  2. Reducing latency through better management of connections and responses.

4. Advanced Visualization Features

Many of Superset’s advanced visualization features (such as time-series charts and heat maps) depend on Druid capabilities that PyDruid enables. It also exposes Druid’s granularity and time-aggregation options to Superset, along with access to complex metrics pre-aggregated in Druid.

5. Integration with Superset’s Druid Connector

The Druid connector in Superset relies on PyDruid as a dependency to:

  1. Register Druid as a data source.
  2. Allow Superset users to query and visually explore Druid datasets directly.

Without PyDruid, the Druid connector in Superset will not work, limiting your ability to use Druid as an analytics backend.

6. Data Discovery

PyDruid enables Superset to fetch metadata from Druid, including schemas, tables, and columns, and to explore Druid data ad hoc in SQL Lab.
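As a minimal sketch of what PyDruid enables from Python, the helper below uses PyDruid's DB-API client (`pydruid.db.connect`); the host, port, and scheme are assumptions for a local, unsecured Druid router and should be adjusted for your setup.

```python
# Minimal PyDruid DB-API sketch. Assumes `pip install pydruid` and a Druid
# router on localhost:8888; host/port/scheme are assumptions for your setup.
DRUID_SQL_PATH = "/druid/v2/sql/"

def run_druid_sql(sql, host="localhost", port=8888, scheme="http"):
    from pydruid.db import connect   # imported lazily; requires pydruid
    conn = connect(host=host, port=port, path=DRUID_SQL_PATH, scheme=scheme)
    cursor = conn.cursor()
    cursor.execute(sql)
    return cursor.fetchall()

# Example (requires a running Druid):
# rows = run_druid_sql("SELECT COUNT(*) FROM customer")
```

This is the same round trip Superset makes under the hood when it queries Druid through the connector.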

Apache Druid as a Data Source

1. Make sure your Druid instance is set up and reachable on its configured ports.

2. Install pydruid by logging into the container.

$ docker exec -it --user root <superset_container_name> /bin/bash

$ pip install pydruid

3. Once pydruid is installed, navigate to Settings and click on Database Connections.

4. Click on Add Database and select Apache Druid as shown in the screenshot below:

5. Here we have to enter the SQLAlchemy URI. To learn more about the SQLAlchemy URI rules, follow the link below:

https://superset.apache.org/docs/configuration/databases/

6. For Druid the SQLALCHEMY URI will be in the below format:

druid://<User>:<password>@<Host>:<Port-default-9088>/druid/v2/sql

Note: in the URI template above, “<Port-default-9088>” will typically be “8888” or “8081” on a default Druid installation.

If your Druid installation is running on plain HTTP without a secure connection, a URI following the above rule will work without any issue.

Ex: druid://localhost:8888/druid/v2/sql/

If you have a secure connection, druid://localhost:8888/druid/v2/sql/ will normally not work and will produce an error.

Since a secured connection is enabled for the Druid UI, we might assume we simply need a domain name instead of localhost, and enter the URI as below:

druid://domain.name.com:8888/druid/v2/sql/. This will also not work.

Other sources suggest changes like:

druid+https://localhost:8888/druid/v2/sql/

OR

druid+https://localhost:8888/druid/v2/sql/?ssl_verify=false

Neither of these will work either.

Normally, once a secured connection is enabled, HTTPS (port 443) is served by a proxy sitting in front of the application’s default ports.

So if we enter the SQLAlchemy URI as below:

druid+https://domain.name.com:443/druid/v2/sql/?ssl=true

Superset will understand that Druid is behind a secured connection and running on the HTTPS port.
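The plain-HTTP and HTTPS cases worked out above can be captured in a small helper. This is just a sketch that reproduces the URI patterns from this section; the host names are the same placeholders used above.

```python
# Builds the SQLAlchemy URI patterns discussed above: plain HTTP on the
# default router port vs. HTTPS through a proxy on port 443.
def druid_sqlalchemy_uri(host, port=8888, https=False, user=None, password=None):
    auth = f"{user}:{password}@" if user else ""
    if https:
        return f"druid+https://{auth}{host}:443/druid/v2/sql/?ssl=true"
    return f"druid://{auth}{host}:{port}/druid/v2/sql/"

print(druid_sqlalchemy_uri("localhost"))
# druid://localhost:8888/druid/v2/sql/
print(druid_sqlalchemy_uri("domain.name.com", https=True))
# druid+https://domain.name.com:443/druid/v2/sql/?ssl=true
```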

7. Since we have enabled SSL for our Druid application, we enter the SQLAlchemy URI below

druid+https://domain.name.com:443/druid/v2/sql/?ssl=true 

and click on test connection.

8. Once the test connection succeeds, Apache Superset is integrated with Apache Druid and ready to use.

In upcoming articles, we will cover ingesting Apache Druid data and visualizing it in Apache Superset.

Apache Druid FAQ

What is Apache Druid used for?

Apache Druid is used for real-time analytics and fast querying of large-scale datasets. It's particularly suitable for time-series data, clickstream analytics, operational intelligence, and real-time monitoring.

What ingestion methods does Druid support?

Druid supports:

  1. Streaming ingestion: Data from Kafka, Kinesis, etc., is ingested in real-time.
  2. Batch ingestion: Data from Hadoop, S3, or other static sources is ingested in batches.

How does Druid achieve fast query performance?

Druid uses:

  1. Columnar storage: Optimizes storage for analytic queries.
  2. Advanced indexing: Bitmap and inverted indexes speed up filtering and aggregation.
  3. Distributed architecture: Spreads query load across multiple nodes.

Can Druid be used for transactional (OLTP) workloads?

No. Druid is optimized for OLAP (Online Analytical Processing) and is not suitable for transactional use cases.

How does Druid store data?

  1. Data is segmented into immutable chunks and stored in Deep Storage (e.g., S3, HDFS).
  2. Historical nodes cache frequently accessed segments for faster querying.

What query languages does Druid support?

  1. SQL: Provides a familiar interface for querying.
  2. Native JSON-based query language: Offers more flexibility and advanced features.

What are Druid’s main components?

  1. Broker: Routes queries to appropriate nodes.
  2. Historical nodes: Serve immutable data.
  3. MiddleManager: Handles ingestion tasks.
  4. Overlord: Coordinates ingestion processes.
  5. Coordinator: Manages data availability and balancing.

What tools does Druid integrate with?

Druid integrates with tools like:

  1. Apache Superset
  2. Tableau
  3. Grafana
  4. Stream ingestion tools (Kafka, Kinesis)
  5. Batch processing frameworks (Hadoop, Spark)

Is Apache Druid open source?

Yes, it’s open-source and distributed under the Apache License 2.0.

What are some alternatives to Apache Druid?

Alternatives include:

  1. ClickHouse
  2. Elasticsearch
  3. BigQuery
  4. Snowflake

Apache Superset FAQ

What is Apache Superset?

Apache Superset is an open-source data visualization and exploration platform. It allows users to create dashboards, charts, and interactive visualizations from various data sources.

What visualization types does Superset offer?

Superset offers:

  1. Line charts, bar charts, pie charts.
  2. Geographic visualizations like maps.
  3. Pivot tables, histograms, and heatmaps.
  4. Custom visualizations via plugins.

Which data sources does Superset support?

Superset supports:

  1. Relational databases (PostgreSQL, MySQL, SQLite).
  2. Big data engines (Hive, Presto, Trino, BigQuery).
  3. Analytics databases (Apache Druid, ClickHouse).

How does Superset compare to commercial BI tools?

Superset is a robust, open-source alternative to commercial BI tools. While it’s feature-rich, it may require more setup and development effort compared to proprietary solutions like Tableau.

Superset primarily supports SQL for querying data. Its SQL Lab provides an environment for writing, testing, and running queries.

Which authentication methods does Superset support?

Superset supports:

  1. Database-based login.
  2. OAuth, LDAP, OpenID Connect, and other SSO mechanisms.
  3. Role-based access control (RBAC) for managing permissions.

Superset can be installed using:

  1. Docker: For containerized deployments.
  2. Python pip: For a native installation in a Python environment.
  3. Helm: For Kubernetes-based setups.

Can Superset power real-time dashboards?

Yes. By integrating Superset with real-time analytics platforms like Apache Druid or streaming data sources, you can build dashboards that reflect live data.

What are Superset’s limitations?

  1. Requires knowledge of SQL for advanced queries.
  2. May not have the polish or advanced features of proprietary BI tools like Tableau.
  3. Customization and extensibility require some development effort.

Superset performance depends on:

  1. The efficiency of the connected database or analytics engine.
  2. The complexity of queries and visualizations.
  3. Hardware resources and configuration.

Would you like further details on specific features or help setting up either tool? Reach out to us.

Watch the Apache Blog Series

Exploring Apache Druid: A High-Performance Real-Time Analytics Database

Introduction

Apache Druid is a distributed, column-oriented, real-time analytics database designed for fast, scalable, and interactive analytics on large datasets. It excels in use cases requiring real-time data ingestion, high-performance queries, and low-latency analytics. 

Druid was originally developed to power interactive data applications at Metamarkets and has since become a widely adopted open-source solution for real-time analytics, particularly in industries such as ad tech, fintech, and IoT.

It supports batch and real-time data ingestion, enabling users to perform fast ad-hoc queries, power dashboards, and interactive data exploration.

In big data and real-time analytics, having the right tools to process and analyze large volumes of data swiftly is essential. In this blog post, we’ll delve into Druid’s key features, its architecture, and how it can revolutionize your data analytics capabilities. Refer to the official documentation for more information.

Apache Druid

Apache Druid is a high-performance, real-time analytics database designed for fast slice-and-dice analytics on large datasets. It was created by Metamarkets (now part of Snap Inc.) and is now an Apache Software Foundation project. Druid is built to handle both batch and streaming data, making it ideal for use cases that require real-time insights and low-latency queries.

Key Features of Apache Druid:

Real-Time Data Ingestion

Druid excels at real-time data ingestion, allowing data to be ingested from various sources such as Kafka, Kinesis, and traditional batch files. It supports real-time indexing, enabling immediate query capabilities on incoming data with low latency.

High-Performance Query Engine

Druid’s query engine is optimized for fast, interactive querying. It supports a wide range of query types, including Time-series, TopN, GroupBy, and search queries. Druid’s columnar storage format and advanced indexing techniques, such as bitmap indexes and compressed column stores, ensure that queries are executed efficiently.
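To make the query interface concrete, here is a minimal sketch of a native timeseries query submitted over HTTP. The Broker address (localhost:8082) and the example_data datasource are placeholder assumptions, not values from a real deployment; the /druid/v2/ endpoint is Druid's native query API.

```python
import json
from urllib import request

# A minimal native timeseries query; datasource, interval, and granularity
# are illustrative placeholders.
timeseries_query = {
    "queryType": "timeseries",
    "dataSource": "example_data",
    "granularity": "hour",
    "intervals": ["2023-01-01/2023-01-31"],
    "aggregations": [{"type": "count", "name": "events"}],
}

payload = json.dumps(timeseries_query).encode("utf-8")

def build_request(broker_url="http://localhost:8082"):
    # Native queries are POSTed as JSON to the Broker's /druid/v2/ endpoint.
    return request.Request(
        broker_url + "/druid/v2/",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Against a live cluster you would then execute:
#   with request.urlopen(build_request()) as resp:
#       print(json.load(resp))
```

The same pattern applies to TopN and GroupBy queries: only the `queryType` and its type-specific fields change.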

Scalable and Distributed Architecture

Druid’s architecture is designed to scale horizontally. It can be deployed on a cluster of commodity hardware, with data distributed across multiple nodes to ensure high availability and fault tolerance. This scalability makes Druid suitable for handling large datasets and high query loads.

Flexible Data Model

Druid’s flexible data model allows for the ingestion of semi-structured and structured data. It supports schema-on-read, enabling dynamic column discovery and flexibility in handling varying data formats. This flexibility simplifies the integration of new data sources and evolving data schemas.

Built-In Data Management

Druid includes built-in features for data management, such as automatic data partitioning, data retention policies, and compaction tasks. These features help maintain optimal query performance and storage efficiency as data volumes grow.

Extensive Integration Capabilities

Druid integrates seamlessly with various data ingestion and processing frameworks, including Apache Kafka, Apache Storm, and Apache Flink. It also supports integration with visualization tools like Apache Superset, Tableau, and Grafana, enabling users to build comprehensive analytics solutions.

Use Cases of Apache Druid

Real-Time Analytics

Druid is used in real-time analytics applications where the ability to ingest and query data in near real-time is critical. This includes monitoring applications, fraud detection, and customer behavior tracking.

Ad-Tech and Marketing Analytics

Druid’s ability to handle high-throughput data ingestion and fast queries makes it a popular choice in the ad tech and marketing industries. It can track user events, clicks, impressions, and conversion rates in real time to optimize campaigns.

IoT Data and Sensor Analytics

IoT applications produce time-series data at high volume. Druid’s architecture is optimized for time-series data analysis, making it ideal for analyzing IoT sensor data, device telemetry, and real-time event tracking.

Operational Dashboards

Druid is often used to power operational dashboards that provide insights into infrastructure, systems, or applications. The low-latency query capabilities ensure that dashboards reflect real-time data without delay.

Clickstream Analysis

Organizations leverage Druid to analyze user clickstream data on websites and applications, allowing for in-depth analysis of user interactions, preferences, and behaviors in real time.

The Architecture of Apache Druid

Apache Druid follows a distributed, microservice-based architecture. The architecture allows for scaling different components based on the system’s needs.

The main components are:

Coordinator and Overlord Nodes

  1. Coordinator Node: Manages data availability, balancing the distribution of data across the cluster, and overseeing segment management (segments are the basic units of storage in Druid).
  2. Overlord Node: Responsible for managing ingestion tasks. It works with the middle managers to schedule and execute data ingestion tasks, ensuring that data is ingested properly into the system.

Historical Nodes

Historical nodes store immutable segments of historical data. When queries are executed, historical nodes serve data from the disk, which allows for low-latency and high-throughput queries.

MiddleManager Nodes

MiddleManager nodes handle real-time ingestion tasks. They manage tasks such as ingesting data from real-time streams (like Kafka), transforming it, and publishing the resulting segments to deep storage, from which historical nodes then load them.

Broker Nodes

The broker nodes route incoming queries to the appropriate historical or real-time nodes and aggregate the results. They act as the query routers and perform query federation across the Druid cluster.

Query Nodes

Query nodes are responsible for receiving, routing, and processing queries. They can handle a variety of query types, including SQL, and route these queries to other nodes for execution.

Deep Storage

Druid relies on an external deep storage system (such as Amazon S3, Google Cloud Storage, or HDFS) to store segments of data permanently. The historical nodes pull these segments from deep storage when they need to serve data.

Metadata Storage

Druid uses an external relational database (typically PostgreSQL or MySQL) to store metadata about the data, including segment information, task states, and configuration settings.

Advantages of Apache Druid

  1. Sub-Second Query Latency: Optimized for high-speed data queries, making it perfect for real-time dashboards.
  2. Scalability: Easily scales to handle petabytes of data.
  3. Flexible Data Ingestion: Supports both batch and real-time data ingestion from multiple sources like Kafka, HDFS, and Amazon S3.
  4. Column-Oriented Storage: Efficient data storage with high compression ratios and fast retrieval of specific columns.
  5. SQL Support: Familiar SQL-like querying capabilities for easy data analysis.
  6. High Availability: Fault-tolerant and highly available due to data replication across nodes.

Getting Started with Apache Druid

Installation and Setup

Setting up Apache Druid involves configuring a cluster with different node types, each responsible for specific tasks:

  1. Master Nodes: Oversee coordination, metadata management, and data distribution.
  2. Data Nodes: Handle data storage, ingestion, and querying.
  3. Query Nodes: Manage query routing and processing.

You can install Druid using a package manager, Docker, or by downloading and extracting the binary distribution. Here’s a brief overview of setting up Druid using Docker:

  1. Download the Docker Compose File:
    curl -O https://raw.githubusercontent.com/apache/druid/master/examples/docker-compose/docker-compose.yml
  2. Start the Druid Cluster: docker-compose up
  3. Access the Druid Console: Open your web browser and navigate to http://localhost:8888 to access the Druid console.

Ingesting Data

To ingest data into Druid, you need to define an ingestion spec that outlines the data source, input format, and parsing rules. Here’s an example of a simple ingestion spec for a CSV file:

JSON Code

{
  "type": "index_parallel",
  "spec": {
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": {
        "type": "local",
        "baseDir": "/path/to/csv",
        "filter": "*.csv"
      },
      "inputFormat": {
        "type": "csv",
        "findColumnsFromHeader": true
      }
    },
    "dataSchema": {
      "dataSource": "example_data",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["column1", "column2", "column3"] }
    },
    "tuningConfig": { "type": "index_parallel" }
  }
}

Submit the ingestion spec through the Druid console or via the Druid API to start the data ingestion process.
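The API route can also be scripted. The sketch below wraps the spec above in a Python dict and builds the POST request for Druid's /druid/indexer/v1/task endpoint (served by the Overlord and reachable through the Router in the quickstart setup); the localhost:8888 address and file paths are placeholder assumptions.

```python
import json
from urllib import request

# The ingestion spec from above as a Python dict; paths are illustrative.
ingestion_spec = {
    "type": "index_parallel",
    "spec": {
        "ioConfig": {
            "type": "index_parallel",
            "inputSource": {"type": "local", "baseDir": "/path/to/csv", "filter": "*.csv"},
            "inputFormat": {"type": "csv", "findColumnsFromHeader": True},
        },
        "dataSchema": {
            "dataSource": "example_data",
            "timestampSpec": {"column": "timestamp", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["column1", "column2", "column3"]},
        },
        "tuningConfig": {"type": "index_parallel"},
    },
}

def build_task_request(router_url="http://localhost:8888"):
    # Ingestion tasks are POSTed as JSON to /druid/indexer/v1/task.
    return request.Request(
        router_url + "/druid/indexer/v1/task",
        data=json.dumps(ingestion_spec).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Against a live cluster you would then execute:
#   with request.urlopen(build_task_request()) as resp:
#       print(json.load(resp))  # returns the assigned task id
```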

Querying Data

Once your data is ingested, you can query it using Druid’s native query language or SQL. Here’s an example of a simple SQL query to retrieve data from the example_data data source:

SELECT __time, column1, column2, column3
FROM example_data
WHERE __time BETWEEN '2023-01-01' AND '2023-01-31'

Use the Druid console or connect to Druid from your preferred BI tool to execute queries and visualize data.
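For programmatic access, the same SQL can be sent to Druid's HTTP SQL endpoint (/druid/v2/sql/). A minimal sketch follows; the localhost:8888 address is a placeholder, and note that Druid SQL expects TIMESTAMP literals when filtering on __time.

```python
import json
from urllib import request

sql = (
    "SELECT __time, column1, column2, column3 "
    "FROM example_data "
    "WHERE __time BETWEEN TIMESTAMP '2023-01-01' AND TIMESTAMP '2023-01-31'"
)

def build_sql_request(router_url="http://localhost:8888"):
    # Druid SQL queries are POSTed as a JSON object with a "query" key.
    return request.Request(
        router_url + "/druid/v2/sql/",
        data=json.dumps({"query": sql}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Against a live cluster you would then execute:
#   with request.urlopen(build_sql_request()) as resp:
#       for row in json.load(resp):
#           print(row)
```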

Conclusion

Apache Druid is a powerful, high-performance real-time analytics database that excels at large-scale data ingestion and querying. Its robust architecture, flexible data model, and extensive integration capabilities make it a versatile solution for a wide range of analytics use cases. Whether you need real-time insights, interactive queries, or scalable OLAP capabilities, Druid’s combination of real-time ingestion, sub-second query speeds, and horizontal scalability makes it a strong choice for businesses analyzing vast amounts of time-series and event-driven data, and its adoption continues to grow across industries.

Need help transforming your real-time analytics with high-performance querying? Contact our experts today!

Watch the Apache Druid Blog Series

Stay tuned for the upcoming Apache Druid Blog Series:

  1. Why choose Apache Druid over Vertica
  2. Why choose Apache Druid over Snowflake
  3. Why choose Apache Druid over Google BigQuery
  4. Integrating Apache Druid with Apache Superset for Realtime Analytics

Streamlining Apache HOP Workflow Management with Apache Airflow

Introduction

In our previous blog, we discussed Apache HOP in more detail. In case you missed it, refer to it here: https://analytics.axxonet.com/comparison-of-and-migrating-from-pdi-kettle-to-apache-hop/. As a continuation of the Apache HOP article series, here we touch upon how to integrate Apache Airflow and Apache HOP. In the fast-paced world of data engineering and data science, efficiently managing complex workflows is crucial. Apache Airflow, an open-source platform for programmatically authoring, scheduling, and monitoring workflows, has become a cornerstone in many data teams’ toolkits. This blog post explores what Apache Airflow is, its key features, and how you can leverage it to streamline and manage your Apache HOP workflows and pipelines.

Apache HOP

Apache HOP is an open-source data integration and orchestration platform. For more details refer to our previous blog here.

Apache Airflow

Apache Airflow is an open-source workflow orchestration tool originally developed by Airbnb. It allows you to define workflows as code, providing a dynamic, extensible platform to manage your data pipelines. Airflow’s rich features enable you to automate and monitor workflows efficiently, ensuring that data moves seamlessly through various processes and systems.

Use Cases:

  1. Data Pipelines: Orchestrating ETL jobs to extract data from sources, transform it, and load it into a data warehouse.
  2. Machine Learning Pipelines: Scheduling ML model training, batch processing, and deployment workflows.
  3. Task Automation: Running repetitive tasks, like backups or sending reports.

DAG (Directed Acyclic Graph):

  1. A DAG represents the workflow in Airflow. It defines a collection of tasks and their dependencies, ensuring that tasks are executed in the correct order.
  2. DAGs are written in Python and allow you to define the tasks and how they depend on each other.

Operators:

  • Operators define a single task in a DAG. There are several built-in operators, such as:
    1. BashOperator: Runs a bash command.
    2. PythonOperator: Runs Python code.
    3. SqlOperator: Executes SQL commands.
    4. HttpOperator: Makes HTTP requests.

Custom operators can also be created to meet specific needs.

Tasks:

  1. Tasks are the building blocks of a DAG. Each node in a DAG is a task that does a specific unit of work, such as executing a script or calling an API.
  2. Tasks are defined by operators and their dependencies are controlled by the DAG.

Schedulers:

  1. The scheduler is responsible for triggering tasks at the appropriate time, based on the schedule_interval defined in the DAG.
  2. It continuously monitors all DAGs and determines when to run the next task.

Executors:

  • The executor is the mechanism that runs the tasks. Airflow supports different types of executors:
    1. SequentialExecutor: Executes tasks one by one.
    2. LocalExecutor: Runs tasks in parallel on the local machine.
    3. CeleryExecutor: Distributes tasks across multiple worker machines.
    4. KubernetesExecutor: Runs tasks in a Kubernetes cluster.

Web UI:

  1. Airflow has a web-based UI that lets you monitor the status of DAGs, view logs, and check the status of each task in a DAG.
  2. It also provides tools to trigger, pause, or retry DAGs.

Key Features of Apache Airflow

Workflow as Code

Airflow uses Directed Acyclic Graphs (DAGs) to represent workflows. These DAGs are written in Python, allowing you to leverage the full power of a programming language to define complex workflows. This approach, known as “workflow as code,” promotes reusability, version control, and collaboration.

Dynamic Task Scheduling

Airflow’s scheduling capabilities are highly flexible. You can schedule tasks to run at specific intervals, handle dependencies, and manage task retries in case of failures. The scheduler executes tasks in a defined order, ensuring that dependencies are respected and workflows run smoothly.

Extensible Architecture

Airflow’s architecture is modular and extensible. It supports a wide range of operators (pre-defined tasks), sensors (waiting for external conditions), and hooks (interfacing with external systems). This extensibility allows you to integrate with virtually any system, including databases, cloud services, and APIs.

Robust Monitoring and Logging

Airflow provides comprehensive monitoring and logging capabilities. The web-based user interface (UI) offers real-time visibility into the status of your workflows, enabling you to monitor task progress, view logs, and troubleshoot issues. Additionally, Airflow can send alerts and notifications based on task outcomes.

Scalability and Reliability

Designed to scale, Airflow can handle workflows of any size. It supports distributed execution, allowing you to run tasks on multiple workers across different nodes. This scalability ensures that Airflow can grow with your organization’s needs, maintaining reliability even as workflows become more complex.

Getting Started with Apache Airflow

Installation using PIP

Setting up Apache Airflow is straightforward. You can install it using pip, Docker, or by deploying it on a cloud service. Here’s a brief overview of the installation process using pip:

1. Create a Virtual Environment (optional but recommended):

           python3 -m venv airflow_env

            source airflow_env/bin/activate

2. Install Apache Airflow: 

           pip install apache-airflow

3. Initialize the Database:

           airflow db init

4. Create a User:

           airflow users create --username admin --password admin --firstname Admin --lastname User --role Admin --email [email protected]

5. Start the Web Server and Scheduler:

           airflow webserver --port 8080

           airflow scheduler

6. Access the Airflow UI: Open your web browser and go to http://localhost:8080.

Installation using Docker

Pull the docker image and run the container to access the Airflow web UI. Refer to the link for more details.

Creating Your First DAG

Airflow DAG Structure:

A DAG in Airflow is composed of three main parts:

  1. Imports: Necessary packages and operators.
  2. Default Arguments: Arguments that apply to all tasks within the DAG (such as retries, owner, start date).
  3. Task Definition: Define tasks using operators, and specify dependencies between them.

Scheduling:

Airflow allows you to define the schedule of a DAG using schedule_interval:

  1. @daily: Run once a day at midnight.
  2. @hourly: Run once every hour.
  3. @weekly: Run once a week at midnight on Sunday.
  4. Cron expressions, like "0 12 * * *", are also supported for more specific scheduling needs.

1. Define the DAG: Create a Python file (e.g., run_lms_transaction.py) in the dags folder of your Airflow installation directory.

    Example

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from datetime import datetime

default_args = {
    'owner': 'airflow',
    'start_date': datetime(2023, 1, 1),
    'retries': 1,
}

dag = DAG('example_dag', default_args=default_args, schedule_interval='@daily')

start = DummyOperator(task_id='start', dag=dag)
end = DummyOperator(task_id='end', dag=dag)

start >> end

2. Deploy the DAG: Place the DAG Python script in the DAGs folder (~/airflow/dags by default). Airflow will automatically detect and load the DAG.

3. Monitor the DAG: Access the Airflow UI, where you can view and manage the newly created DAG. Trigger the DAG manually or wait for it to run according to the defined schedule.

Calling the Apache HOP Pipelines/Workflows from Apache Airflow

In this example, we walk through how to integrate Apache HOP with Apache Airflow. Here, Apache Airflow and Apache HOP run in two independent Docker containers. The Apache HOP ETL pipelines and workflows are configured with a persistent-volume storage strategy so that the DAG code can request their execution from Airflow.

Steps

  1. Define the DAG: Create a Python file (e.g., Stg_User_Details.py) in the dags folder of your Airflow installation directory.

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator
from airflow.operators.dummy_operator import DummyOperator

default_args = {
    'owner': 'airflow',
    'description': 'Stg_User_details',
    'depend_on_past': False,
    'start_date': datetime(2022, 1, 1),
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5)
}

with DAG('Stg_User_details', default_args=default_args, schedule_interval='0 10 * * *', catchup=False, is_paused_upon_creation=False) as dag:

    start_dag = DummyOperator(task_id='start_dag')

    end_dag = DummyOperator(task_id='end_dag')

    hop = DockerOperator(
        task_id='Stg_User_details',
        # use the Apache Hop Docker image. Add your tags here in the default apache/hop: syntax
        image='test',
        api_version='auto',
        auto_remove=True,
        environment={
            'HOP_RUN_PARAMETERS': 'INPUT_DIR=',
            'HOP_LOG_LEVEL': 'TRACE',
            'HOP_FILE_PATH': '/opt/hop/config/projects/default/stg_user_details_test.hpl',
            'HOP_PROJECT_DIRECTORY': '/opt/hop/config/projects/',
            'HOP_PROJECT_NAME': 'ISON_Project',
            'HOP_ENVIRONMENT_NAME': 'ISON_Env',
            'HOP_ENVIRONMENT_CONFIG_FILE_NAME_PATHS': '/opt/hop/config/projects/default/project-config.json',
            'HOP_RUN_CONFIG': 'local',
        },
        docker_url='unix://var/run/docker.sock',
        network_mode='bridge',
        force_pull=False,
        mount_tmp_dir=False
    )

    start_dag >> hop >> end_dag

Note: For reference purposes only.

2. Deploy the DAG: Save the file in the dags folder. Airflow will automatically detect and load the DAG.

After successful deployment, the new “Stg_User_Details” DAG appears in the Active and All tabs of the Airflow portal.

3. Run the DAG: Trigger the pipeline or workflow manually from the Airflow application by clicking the Trigger DAG option.

4. Monitor the DAG: Access the Airflow UI, where you can view and manage the newly created DAG. Trigger the DAG manually or wait for it to run according to the defined schedule.

After successful execution, the Airflow portal shows the DAG status along with its execution history and log details.

Managing and Scaling Workflows

  1. Use Operators and Sensors: Leverage Airflow’s extensive library of operators and sensors to create tasks that interact with various systems and handle complex logic.
  2. Implement Task Dependencies: Define task dependencies using the >> and << operators to ensure tasks run in the correct order.
  3. Optimize Performance: Monitor task performance through the Airflow UI and logs. Adjust task configurations and parallelism settings to optimize workflow execution.
  4. Scale Out: Configure Airflow to run in a distributed mode by adding more worker nodes, ensuring that the system can handle increasing workload efficiently.

Conclusion

Apache Airflow is a powerful and versatile tool for managing workflows and automating complex data pipelines. Its “workflow as code” approach, coupled with robust scheduling, monitoring, and scalability features, makes it an essential tool for data engineers and data scientists. By adopting Airflow, you can streamline your workflow management, improve collaboration, and ensure that your data processes are efficient and reliable. Explore Apache Airflow today and discover how it can transform your data engineering workflows.

Streamline your Apache HOP workflow management with Apache Airflow through our team of experts.

Upcoming Apache HOP Blog Series

Stay tuned for the upcoming Apache HOP Blog Series:

  1. Migrating from Pentaho ETL to Apache Hop
  2. Integrating Apache Hop with an Apache Superset
  3. Comparison of Pentaho ETL and Apache Hop

Unlocking Data Insights with Apache Superset

Introduction

In today’s data-driven world, having the right tools to analyze and visualize data is crucial for making informed decisions. With vast amounts of data generated daily, visualization becomes essential for uncovering patterns, trends, and insights. One of the standout solutions in the open-source landscape is Apache Superset, a powerful, user-friendly data exploration and visualization platform that enables users to create, explore, and share interactive visualizations and dashboards. Whether you’re a data scientist, analyst, or business intelligence professional, Apache Superset can significantly enhance your data analysis capabilities. In this blog post, we’ll dive into what Apache Superset is, its key features, architecture, installation process, and use cases, and how you can leverage it to unlock valuable insights from your data.

Apache Superset

Apache Superset is an open-source data exploration and visualization platform originally developed at Airbnb and later donated to the Apache Software Foundation. It is now a top-level Apache project, widely adopted across industries for data analytics and visualization. Apache Superset is designed to be a modern, enterprise-ready business intelligence web application that allows users to explore, analyze, and visualize large datasets. Its intuitive interface allows users to quickly create beautiful, interactive visualizations and dashboards from various data sources without needing extensive programming knowledge.

Superset is designed to be lightweight yet feature-rich, offering powerful SQL-based querying, interactive dashboards, and a wide variety of data visualization options—all through an intuitive web-based interface.

Key Features

Rich Data Visualizations

Superset offers a clean and intuitive interface that makes it easy to navigate and build visualizations, and its drag-and-drop functionality makes chart and dashboard creation accessible even to non-technical users. It provides an extensive library of customizable visualizations, from simple bar, line, and pie charts and scatter plots to heatmaps and geospatial maps, so users can choose the representation that best fits their data and analysis.

  1. Bar Charts: Perfect for comparing different categories of data.
  2. Line Charts: Excellent for time-series analysis.
  3. Heatmaps: Useful for showing data density or intensity.
  4. Geospatial Maps: Visualize location-based data on geographical maps.
  5. Pie Charts, Treemaps, Sankey Diagrams, and More: Additional options for exploring relationships and proportions in the data.

SQL-Based Querying

One of Superset’s most powerful features is its support for SQL-based querying through SQL Lab, a built-in SQL editor where users can write and execute queries directly against connected databases, explore schemas, and preview data before creating visualizations. SQL Lab supports syntax highlighting, autocompletion, and query history, enhancing the SQL writing experience.

Interactive Dashboards

Superset allows users to create interactive dashboards with multiple charts, filters, and data points. These dashboards can be customized and shared across teams to deliver insights interactively. Real-time data updates ensure that the latest metrics are always displayed.

Extensible and Scalable

Apache Superset is highly extensible and can connect to a variety of data sources such as:

  1. SQL-based databases (PostgreSQL, MySQL, Oracle, etc.)
  2. Big Data platforms (Presto, Druid, Hive, and more)
  3. Cloud-native databases (Google BigQuery, Snowflake, Amazon Redshift)

This versatility ensures that users can easily access and analyze their data, regardless of where it is stored. Its architecture supports horizontal scaling, making it suitable for enterprises handling large-scale datasets.
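For illustration, databases are registered in Superset as SQLAlchemy connection URIs. The examples below use placeholder hosts, credentials, and project names; the exact URI scheme depends on which driver package is installed alongside Superset.

```python
# Illustrative SQLAlchemy URIs as entered in Superset's "Add Database" form.
# All hosts, credentials, and identifiers below are placeholders.
EXAMPLE_URIS = {
    "postgresql": "postgresql+psycopg2://user:password@db-host:5432/analytics",
    "mysql": "mysql+pymysql://user:password@db-host:3306/analytics",
    "bigquery": "bigquery://my-gcp-project",
    "snowflake": "snowflake://user:password@account-id/database/schema",
}
```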

Security and Authentication

As an enterprise-ready platform, Superset offers robust security features, including role-based access control (RBAC) and integration with common authentication protocols (OAuth, OpenID, LDAP) for secure access. Fine-grained, role-based permissions let administrators control access to specific dashboards, charts, and databases, and the platform is designed to scale with your organization, handling large volumes of data and concurrent users.

Low-Code and No-Code Data Exploration

Superset is ideal for both technical and non-technical users. While advanced users can write SQL queries to explore data, non-technical users can use the point-and-click interface to create visualizations without requiring code. This makes it accessible to everyone, from data scientists to business analysts.

Customizable Visualizations

Superset’s visualization framework allows users to modify the look and feel of their charts using custom JavaScript, CSS, and the powerful ECharts and D3.js libraries. This gives users the flexibility to create branded and unique visual representations.

Advanced Analytics

Superset includes features for advanced analytics, such as time-series analysis, trend lines, and complex aggregations. These capabilities enable users to perform in-depth analysis and uncover deeper insights from their data.

Architecture of Apache Superset

Superset’s architecture is modular and designed to be scalable, making it suitable for both small teams and large enterprises.

Here’s a breakdown of its core components:

Frontend (React-based):

Superset’s frontend is built using React, offering a smooth and responsive user interface for creating visualizations and interacting with data. The UI also leverages Bootstrap and other modern JavaScript libraries to enhance the user experience.

Backend (Python/Flask-based):

  1. The backend is powered by Python and Flask, a lightweight web framework. Superset uses SQLAlchemy as the SQL toolkit and Alembic for database migrations.
  2. Superset communicates with databases using SQLAlchemy to execute queries and fetch results.
  3. Celery and Redis can be used for background tasks and asynchronous queries, allowing for scalable query processing.

Metadata Database:

  1. Superset stores information about visualizations, dashboards, and user access in a metadata database. Common choices include PostgreSQL or MySQL.
  2. This database does not store the actual data being analyzed but rather metadata about the analysis (queries, charts, filters, and dashboards).

Caching Layer:

  1. Superset supports caching using Redis or Memcached. Caching improves the performance of frequently queried datasets and dashboards, ensuring faster load times.
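As a sketch of what enabling the cache looks like, caching is configured in `superset_config.py` using Flask-Caching-style dictionaries; the Redis URL below is a placeholder for your own instance:

```python
# superset_config.py -- caching layer (placeholder Redis URL; adjust
# host/port/db for your own deployment).
CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 300,       # seconds before a cached entry expires
    "CACHE_KEY_PREFIX": "superset_",
    "CACHE_REDIS_URL": "redis://localhost:6379/0",
}

# A separate cache can be pointed at chart data specifically.
DATA_CACHE_CONFIG = {**CACHE_CONFIG, "CACHE_KEY_PREFIX": "superset_data_"}
```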

Asynchronous Query Execution:

  1. For large datasets, Superset can run queries asynchronously using Celery workers. This prevents the UI from being blocked during long-running queries.

Worker and Beat

These are one or more workers that execute tasks such as running async queries, taking snapshots of reports, and sending emails, plus a “beat” process that acts as the scheduler and tells the workers when to perform their tasks. Most installations use Celery for these components.
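As a sketch, the Celery side is also wired up in `superset_config.py`; the broker and result-backend URLs below are placeholders for your own Redis instance:

```python
# superset_config.py -- Celery wiring for async queries and scheduled reports.
# The Redis URLs are placeholders; point them at your own broker.
class CeleryConfig:
    broker_url = "redis://localhost:6379/0"
    result_backend = "redis://localhost:6379/0"
    imports = ("superset.sql_lab", "superset.tasks.scheduler")
    worker_prefetch_multiplier = 1

CELERY_CONFIG = CeleryConfig
```

The worker and beat processes are then started alongside the web server, e.g. `celery --app=superset.tasks.celery_app:app worker` and `celery --app=superset.tasks.celery_app:app beat`.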

Getting Started with Apache Superset

Installation and Setup

Setting up Apache Superset is straightforward. It can be installed using Docker, pip, or by deploying it on a cloud platform. Here’s a brief overview of the installation process using Docker:

1. Install Docker: Ensure Docker is installed on your machine.

2. Clone the Superset Repository:

git clone https://github.com/apache/superset.git

cd superset

3. Run the Docker Compose Command:

docker-compose -f docker-compose-non-dev.yml up

4. Initialize the Database:

docker exec -it superset_superset-worker_1 superset db upgrade

docker exec -it superset_superset-worker_1 superset init

5. Access Superset: Open your web browser and go to http://localhost:8088 to access the Superset login page.

Configuring the Metadata Storage

The metadata database is where chart and dashboard definitions, user information, logs, and similar records are stored. Superset is tested to work with PostgreSQL and MySQL. In a Docker Compose installation, this data is stored in a PostgreSQL container volume, while the PyPI installation method defaults to an on-disk SQLite database. Neither is recommended for production; a properly configured, managed, standalone database should be used instead. Whatever database you choose, plan to back it up regularly. In an upcoming Superset blog, we will walk through configuring Apache Superset with metadata storage.
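For a production install, pointing Superset at a standalone PostgreSQL metadata database is a short addition to `superset_config.py`; the host and credentials below are placeholders:

```python
# superset_config.py -- metadata database connection (placeholder credentials).
# URI format: dialect+driver://user:password@host:port/database
SQLALCHEMY_DATABASE_URI = (
    "postgresql+psycopg2://superset_user:superset_pass@db-host:5432/superset_meta"
)

# Required for signing sessions and encrypting stored secrets;
# generate a long random value for real deployments.
SECRET_KEY = "replace-with-a-long-random-string"
```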

Creating Your First Dashboard

1. Connect to a Data Source: Navigate to the Sources tab and add a new database or table.

2. Explore Data: Use SQL Lab to run queries and explore your data.

3. Create Charts: Go to the Charts tab, choose a dataset, and select a visualization type. Customize your chart using the various configuration options.

4. Build a Dashboard: Combine multiple charts into a cohesive dashboard. Drag and drop charts, add filters, and arrange them to create an interactive dashboard.

Use Cases of Apache Superset

  1. Business Intelligence & Reporting: Superset is widely used in organizations to build BI dashboards that track KPIs, sales, revenue, and other critical metrics. It is a strong alternative to commercial BI tools like Tableau or Power BI, particularly for organizations that prefer open-source solutions.
  2. Data Exploration for Data Science: Data scientists can leverage Superset to explore datasets, run queries, and visualize complex relationships in the data before moving on to machine learning tasks.
  3. Operational Dashboards: Superset can be used to create operational dashboards that track system health, service uptime, or transaction statuses in real time. Its ability to connect to various databases and run SQL queries on demand makes it a suitable choice for this use case.
  4. Geospatial Analytics: With built-in support for geospatial visualizations, Superset is ideal for businesses that need to analyze location-based data. For example, a retail business can use it to analyze customer distribution or store performance across regions.
  5. E-commerce Data Analysis: E-commerce companies frequently use Superset to analyze sales data, customer behavior, product performance, and marketing campaign effectiveness.

Advantages of Apache Superset

  1. Open-source and Cost-effective: Being an open-source tool, Superset is free to use and can be customized to meet specific needs, making it a cost-effective alternative to proprietary BI tools.
  2. Rich Customizations: Superset supports extensive visual customizations and can integrate with JavaScript libraries for more advanced use cases.
  3. Easy to Deploy: It’s relatively straightforward to set up on both local and cloud environments.
  4. SQL-based and Powerful: Ideal for organizations with a strong SQL-based querying culture.
  5. Extensible: Can be integrated with other data processing or visualization tools as needed.

Sharing and Collaboration

Superset makes it easy to share your visualizations and dashboards with others. You can export and import dashboards, share links, and embed visualizations in other applications. Additionally, Superset’s role-based access control ensures that users only have access to the data and visualizations they are authorized to view.

Conclusion

Apache Superset is a versatile and powerful tool for data exploration and visualization. Its user-friendly interface, wide range of visualizations, and robust integration capabilities make it an excellent choice for businesses and data professionals looking to unlock insights from their data. Whether you’re just getting started with data visualization or you’re an experienced analyst, Superset provides the tools you need to create compelling and informative visualizations. Give it a try and see how it can transform your data analysis workflow.

You can also get in touch with us, and we will be happy to help with your custom implementations.