
Aug 11, 2025 | 4 Minute Read

How We Built A Modular, Multi-Environment Automation Testing Framework

Sadeesh Kumar MN, Staff Software Engineer


Introduction

In fast-moving engineering organizations, quality can't afford to lag behind innovation. As release cycles shorten and digital experiences scale, the pressure on QA teams is enormous: how do you guarantee confidence without slowing delivery?

At Axelerant, we’ve seen too many teams either underinvest in automation or over-engineer it into complexity. So, when we faced the challenge of building a scalable testing framework across multiple services and environments, we asked ourselves: What does a future-ready QA strategy actually look like in practice?

This blog is a behind-the-scenes walkthrough of how we reimagined test automation for a cloud platform, not just as a technical system, but as an operating model for quality engineering. The approach revolves around building reliability, clarity, and speed into how your entire engineering team tests. If you’re leading a team that’s scaling, you’ll want to read every step.

Defining What To Test: Functional Coverage Built Around Real Journeys

We began by categorizing test coverage across:

  • User & Role Flows: From CO → MGR → GO → SA → AD → B1, B2, with verification of hierarchy and permissions
  • Transactional Workflows: Login → Deposit → Perform Action → Validate Market Outcome → Rollback
  • Streaming Validation: Compare internal odds against third-party sources in real time
  • Settlement & PnL Checks: Ensure outcomes are reflected correctly in account history and user-level summaries
  • Account & Permission Validations: CO/MGR/EMP permissions to manage accounts
  • B2B Role and Whitelabel Testing: Admin vs. user permissions, user role restrictions, and whitelabel visibility

Each of these was assigned to specific GitHub Actions jobs, with consistent naming patterns across environments.
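
For illustration, the convention pairs a coverage category with a target environment in the job name. A stripped-down sketch (the job names and test commands below are placeholders, not our exact suite):

```yaml
# Naming pattern: <coverage-category>_<environment>
jobs:
  transactional_workflows_uat:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Login, deposit, perform action, validate market outcome, rollback
        run: npm run test:transactional -- --env=uat   # hypothetical test command

  settlement_pnl_checks_uat:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Verify outcomes in account history and user-level summaries
        run: npm run test:settlement -- --env=uat      # hypothetical test command
```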

Testing In Dev, UAT, And Prod Without Duplication

A truly scalable testing framework must work across environments without multiplying complexity. At Axelerant, we engineered our automation architecture to support parallel workflows in Dev, UAT, and Prod, without duplicating test logic.

We approached this challenge by separating test logic from environment context. Our design pattern looked like this:

  • Reusable Test Jobs: Centralized scripts in shared repositories
  • Environment-Specific Inputs: Passed as runtime parameters
  • Dynamic Job Names: Using _dev, _uat, _prod suffixes to maintain clarity

This enabled high traceability. When a test failed in UAT but passed in Dev, we could isolate environment-specific issues quickly.
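
A minimal sketch of that separation using GitHub Actions reusable workflows; the file names, input names, and test command are assumptions for illustration, not our exact implementation:

```yaml
# .github/workflows/run-tests.yml -- shared, environment-agnostic test logic
name: run-tests
on:
  workflow_call:
    inputs:
      environment:
        description: "Target environment (dev, uat, prod)"
        required: true
        type: string
      suite:
        description: "Test suite to execute"
        required: true
        type: string

jobs:
  run_suite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run ${{ inputs.suite }} against ${{ inputs.environment }}
        run: npm run test:${{ inputs.suite }} -- --env=${{ inputs.environment }}  # hypothetical command
```

```yaml
# .github/workflows/uat.yml -- thin caller that only supplies environment context
name: uat
on: [workflow_dispatch]
jobs:
  transactional_workflows_uat:   # dynamic suffix keeps the job traceable per environment
    uses: ./.github/workflows/run-tests.yml
    with:
      environment: uat
      suite: transactional
```

Because only the thin caller changes per environment, a failure in UAT but not in Dev points at the environment rather than the test logic.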

Additionally, we embedded scheduling logic into each job:

  • Dev: Every push + on-demand
  • UAT: Scheduled sanity tests
  • Prod: High-risk workflows only, verified through staging first

This tiered testing model provided layered safety, giving every build a chance to pass lightweight checks before escalating to heavier tests.
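
In GitHub Actions terms, the tiering is simply three different trigger blocks in three workflow files; the branch name and cron time below are placeholders:

```yaml
# dev workflow: every push + on-demand
on:
  push:
    branches: [develop]   # placeholder branch name
  workflow_dispatch:
```

```yaml
# uat workflow: scheduled sanity tests (cron time is a placeholder)
on:
  schedule:
    - cron: "0 6 * * 1-5"
  workflow_dispatch:
```

```yaml
# prod workflow: high-risk suites only, triggered manually after staging verification
on:
  workflow_dispatch:
```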

GitHub Workflows Made Human-Friendly

Automation should empower teams, not gatekeep quality behind a wall of YAML syntax. To democratize access to testing, we built a UI-first test execution model within GitHub Actions.

Here’s what made it effective:

  • Preconfigured Branch Selection: Drop-down options to choose the relevant code branch
  • Job Selector Interface: Users could select from descriptive job names without editing code
  • One-Click Execution: After selection, the workflow ran instantly with real-time status updates

This turned our automation suite into a self-service testing portal for QA, developers, product managers, and even customer support leads. It also reduced dependency on the core automation team, accelerating team-wide productivity.
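
Under the hood, this kind of self-service trigger can be modeled with workflow_dispatch inputs, which GitHub renders as form fields in the “Run workflow” dialog; the input names and options here are illustrative:

```yaml
on:
  workflow_dispatch:
    inputs:
      branch:
        description: "Code branch to test"
        required: true
        default: "develop"          # placeholder default
      job:
        description: "Test job to run"
        required: true
        type: choice                # rendered as a drop-down in the Actions UI
        options:
          - transactional_workflows
          - streaming_validation
          - settlement_pnl_checks
          - account_permission_validations
```

The “Run workflow” dialog also has a built-in ref picker, so branch selection can come for free even without a custom input.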

In high-pressure release cycles, we saw this model cut down triage time by hours, since anyone could replicate failures in Prod by running the corresponding UAT test job instantly.

Make Failures Loud And Actionable: Reporting And Monitoring

In most QA systems, test failure is a dead end. Logs are buried, context is missing, and handoffs take days. We wanted to flip that.

Our goal: make failures actionable and fast to resolve.

To achieve this, we implemented a robust test monitoring and feedback loop:

  • HTML Test Reports: Automatically generated, with summaries of test cases, pass/fail statuses, and durations
  • Assertion-First Logging: Failures logged with the most relevant line highlighted
  • Slack Alerts: Integrated with dedicated testing alert channels that tagged the QE owner immediately
  • Live GitHub Links: Every test alert linked directly to the failure step in the GitHub Action log
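
A sketch of what the reporting and alerting steps can look like at the tail end of a test job; the artifact name, report path, Slack payload, and webhook secret are assumptions, not our exact setup:

```yaml
      - name: Upload HTML test report
        if: always()                       # keep the report even when the suite fails
        uses: actions/upload-artifact@v4
        with:
          name: html-test-report
          path: reports/                   # hypothetical report output directory

      - name: Alert Slack on failure
        if: failure()
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\": \"Test failure in ${{ github.workflow }}: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\"}" \
            "${{ secrets.SLACK_WEBHOOK_URL }}"
```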

Most importantly, we set clear protocols:

  • QE leads triage immediately
  • If the root cause is suspected in app logic, relevant developers are added to the thread
  • Fixes are validated using on-demand reruns

This tight feedback loop built trust across teams. Engineering started seeing QA as an early warning system, not just a gatekeeper at the end of the pipeline.

Performance Testing: Going Beyond Functionality

While functional tests catch immediate issues in logic and flow, performance testing reveals how your system holds up under pressure. It's where we answer: Can our application scale with real-world traffic? Will it remain stable under concurrent loads?

To answer those questions confidently, we implemented a robust, CI-integrated performance testing layer that mirrors the end-user journey.

 

End-to-End Load Flows Simulated:

  • Login workflows for multiple accounts
  • User transactions in rapid succession
  • Result generation 

These flows were configured to run using GitHub Actions workflows, ensuring consistency with our functional testing stack. This allowed engineers to treat performance testing as part of their daily delivery rhythm, not a once-a-sprint checkbox.
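
The post doesn’t prescribe a load-testing tool, so this sketch assumes a k6-style script run through the grafana/k6 Docker image; the script path and repository variable are placeholders:

```yaml
jobs:
  load_test_uat:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Simulate login, rapid transactions, and result generation
        run: |
          docker run --rm -i \
            -e BASE_URL="${{ vars.UAT_BASE_URL }}" \
            grafana/k6 run - < perf/end-to-end-flow.js
```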

 

Visual Feedback With Grafana:

The output of these tests was streamed into Grafana Cloud dashboards, which helped us:

  • Visualize latency spikes, throughput, and system health over time
  • Compare expected vs actual response time ranges
  • Share real-time performance stats with both Dev and QA teams
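
Continuing the same assumption (a k6-based load test), streaming results into Grafana Cloud is mostly a matter of adding the cloud output and an API token stored as a secret:

```yaml
      - name: Run load test and stream results to Grafana Cloud
        env:
          K6_CLOUD_TOKEN: ${{ secrets.K6_CLOUD_TOKEN }}
        run: |
          docker run --rm -i -e K6_CLOUD_TOKEN \
            grafana/k6 run --out cloud - < perf/end-to-end-flow.js
```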

We deliberately kept these tests modular and composable, so they could be scaled horizontally across test types or reused for stress testing future features.

And most importantly, these tests were triggerable on demand, allowing developers to validate performance fixes as easily as they’d validate a feature branch.

This move transformed performance testing from a “release gate” into a continuous engineering insight, bridging the gap between testing and observability.

Untapped Modules and Future Optimization

Some test areas, like B2B Whitelabel Management and Account Assignment, were documented but marked for maintenance due to app changes. These represent future coverage opportunities as the product evolves.

 

Module                | Coverage Status        | Notes
--------------------- | ---------------------- | ----------------------
Whitelabel Management | To be maintained       | Changing app logic
Account Assignment    | Manual validation only | CI jobs not configured
Security Testing      | Not implemented        | On roadmap

We built this framework to adapt and extend, not remain static.

Beyond Automation: Building A Culture of Quality

This wasn’t just automation. It was a QA transformation, built from the ground up to prioritize real user journeys, transparency, and engineering alignment.

By thinking beyond scripts and adopting a platform approach to test automation, we created something flexible, repeatable, and human-friendly. And more importantly, we made quality everyone’s responsibility.

We’ve helped teams go from zero to mature QA practices in weeks, not months, by leaning into the same principles outlined here. If your org is dealing with fragmented testing, inconsistent coverage, or manual bottlenecks, this guide isn’t just a story. It’s your blueprint.

Want to explore how to evolve your automation practices? Let’s talk.

 

About the Author

Sadeesh Kumar MN, Staff Software Engineer

Sailing through life on a ship woven with experiences and divine design, Sadeesh sees the world and the self with wonder and acceptance. Each day is a chapter in a story written by nature, filled with feelings, growth, and quiet revelations.
