
Customer:

OpenScholar

Industry:
Higher Education

 

Services: 
Backend Development | Test Automation

 

Technologies:
Drupal 8 | Drupal Test Traits | PHPUnit | Angular framework

 

Tools:
Docker | Travis CI | GitHub | GitHub Actions


Platform:
Amazon Web Services

 

Faster, high-quality releases using test automation

Axelerant collaborated with OpenScholar’s team to help deliver releases at a quicker pace by implementing test automation in a systematic way across the entire testing pyramid.

 

 

About the customer

OpenScholar offers a leading Drupal distribution for educational and research institutions that makes it easy for faculty to build professional, research-centric websites and bring their ideas to the world. It is forging a new standard for sharing research and building digital experiences, serving institutions such as Harvard and Princeton University.


 

Business Challenge

OpenScholar’s Drupal 7 distribution was being ported to Drupal 8 for better support and maintainability. The product has its own complete layer built on top of Drupal, which makes it heavily customized.

The development team was distributed across time zones, with each individual working on different features. Because of the customized architecture, many practices were implemented globally and inherited at the feature level, so any change made at the global level was reflected in every feature that inherited it.

For instance, the product bundles user roles, their corresponding displays, and all related content-type permissions (view, add, edit, and delete) into a package called an application type. A permission-related change made to one role would therefore ripple across every content type packaged as an application type.

This is why there was a need to thoroughly test the customizations. The objective was not just to verify functionality accurately but also to achieve faster feedback at all levels of the implementation phase.

The OpenScholar team needed to:

01. Identify a test automation strategy: decide on the tools and techniques to be used.

02. Identify feature breakdown rules: define the rules of thumb for breaking features into unit, integration, and end-to-end tests.

03. Implement the CI/CD strategy: put continuous integration and continuous delivery into practice.

04. Ensure process adherence: make sure everyone on the team adopted the approach.


 

 

Solution

When considering how to ensure faster feedback, test automation is one of the first solutions that comes to mind. However, test automation is an ecosystem in itself, and automating the wrong way can introduce serious problems. There is no single right way to do it; each project needs its own strategy.

 

The OpenScholar team decided to use Drupal Test Traits (DTT) because it allows tests to be written against already installed sites. It also wraps Drupal core traits for creating content (nodes, taxonomy terms, users, media, etc.), so setup and teardown are handled by the flexibility of the DTT framework.
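As a minimal sketch of what such a test looks like (the module namespace, content type, and title are illustrative, not OpenScholar's actual code), a DTT test running against an installed site might be:

```php
<?php

namespace Drupal\Tests\openscholar_example\ExistingSite;

use Weitzman\DrupalTestTraits\ExistingSiteBase;

/**
 * Runs against an already installed site; DTT cleans up created entities.
 */
class PublicationPageTest extends ExistingSiteBase {

  public function testPublicationIsVisible() {
    // createNode() is DTT's wrapper around Drupal core's creation traits;
    // the node is tracked and deleted automatically in tearDown().
    $node = $this->createNode([
      'type' => 'publication',
      'title' => 'Sample Publication',
      'status' => 1,
    ]);

    // Visit the node and assert it renders for the current user.
    $this->drupalGet($node->toUrl());
    $this->assertSession()->statusCodeEquals(200);
    $this->assertSession()->pageTextContains('Sample Publication');
  }

}
```

Because the site is already installed, there is no per-test site install, which is what makes DTT suites fast enough to run on every pull request.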

 

Axelerant collaborated with OpenScholar to help implement this system. For each feature, the automated test suite spanned the entire testing pyramid: unit tests were written for each helper method, integration tests validated the interactions between units, and finally the most common end-to-end user workflows were verified through automated functional tests.

 

For each pull request (PR), independent Travis CI jobs executed all unit tests first, then the integration tests, and finally the functional/acceptance tests.
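A hedged sketch of that ordering using Travis CI build stages (the PHP version, test suite names, and scripts are assumptions, not the project's actual `.travis.yml`):

```yaml
language: php
php: "7.2"

services:
  - docker

jobs:
  include:
    # Stages run in order; a later stage only starts if the previous one passed.
    - stage: unit
      script: vendor/bin/phpunit --testsuite unit
    - stage: kernel
      script: vendor/bin/phpunit --testsuite kernel
    - stage: functional
      script: vendor/bin/phpunit --testsuite functional
```

Because a failing unit stage short-circuits the slower kernel and functional stages, developers get the cheapest possible failure signal first.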


 

Results

It’s often believed that automated tests cannot be designed until a feature is stable and available in the QA environment. In fact, the design of automated tests can begin as soon as feature development begins, and that is the approach we adopted on this project.

 

Granular Feedback, Shortened Feedback Loops

Because the helper methods, along with their unit tests, were reused across the application by multiple developers, a high degree of test coverage at this level proved especially valuable. These tests were quick to design and execute and gave developers granular feedback, drastically shortening the feedback loop whenever a modification was made.

 

Quicker Validation of Functionality

Integration tests were designed using only APIs and services. This resulted in faster validation of the functionality, as no part of the UI was referenced in these tests. Functionality was tested for all three user groups: individual scholars, small groups, and universities.
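As an illustration of such an API-only test (the role ID and permission are hypothetical; the real suite exercises OpenScholar's own services), a Drupal kernel test might look like:

```php
<?php

namespace Drupal\Tests\openscholar_example\Kernel;

use Drupal\KernelTests\KernelTestBase;
use Drupal\user\Entity\Role;

/**
 * Validates role/permission wiring through APIs only; no UI is rendered.
 */
class RolePermissionTest extends KernelTestBase {

  protected static $modules = ['system', 'user'];

  public function testRoleReceivesPermission() {
    $this->installEntitySchema('user');
    $this->installConfig(['user']);

    // Create a role and grant it a permission via the entity API.
    $role = Role::create(['id' => 'scholar', 'label' => 'Scholar']);
    $role->grantPermission('access content');
    $role->save();

    $this->assertTrue(Role::load('scholar')->hasPermission('access content'));
  }

}
```

Skipping the UI entirely keeps these tests an order of magnitude faster than browser-driven checks while still exercising real Drupal services.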

 

Logical Test Grouping for Quicker Triage

JavaScript-based tests verified the most common end-to-end user workflows, covering the important functionality without requiring a UI-level acceptance test for every scenario.
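A sketch of such a browser-driven workflow test using DTT's Selenium base class (the content type, field labels, and user setup are illustrative assumptions):

```php
<?php

namespace Drupal\Tests\openscholar_example\ExistingSiteJavascript;

use Weitzman\DrupalTestTraits\ExistingSiteSelenium2DriverTestBase;

/**
 * End-to-end check of a common workflow in a real browser session.
 */
class CreatePostWorkflowTest extends ExistingSiteSelenium2DriverTestBase {

  public function testScholarCanCreatePost() {
    // createUser() and the created content are cleaned up by DTT.
    $user = $this->createUser([], NULL, TRUE);
    $this->drupalLogin($user);

    // Walk the same path a real user would: open the form, fill it, save.
    $this->drupalGet('node/add/blog');
    $page = $this->getSession()->getPage();
    $page->fillField('Title', 'My first post');
    $page->pressButton('Save');

    $this->assertSession()->pageTextContains('My first post');
  }

}
```

Reserving this style of test for only the highest-value workflows is what keeps the top of the pyramid thin.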


The scheduled jobs were ordered logically: unit tests ran first, then kernel tests, with acceptance tests at the end. Because tests and jobs were grouped this way, developers could pinpoint the exact failing test and identify its root cause. For instance, if a functional-JS test failed, the developer could look at the corresponding kernel and unit tests, quickly identify the root cause, and fix all of them with minimal effort.


Thorough testing at all these levels meant a better product delivered to the QA team, and ultimately to the end-user, at a faster pace.

 

Unit Level Tests

When the focus is mainly on designing functional tests, the team can miss verifying core features or backend functionality. By achieving ~100 percent test coverage at the unit level, we ensured that core services not directly exposed at the user interface were always verified by our automated checks. The team also ensured that everyone used a common language to design these tests, along with agreed-upon coding practices.
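As a sketch of what a unit-level test for one such helper looks like (the `PermissionHelper` class and its method are hypothetical, included inline so the example is self-contained):

```php
<?php

namespace Drupal\Tests\openscholar_example\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * Hypothetical helper that composes Drupal permission strings.
 */
class PermissionHelper {

  public static function buildPermissionName(string $op, string $bundle): string {
    return sprintf('%s any %s content', $op, $bundle);
  }

}

/**
 * Fast, isolated test for the helper; no Drupal bootstrap is needed.
 */
class PermissionHelperTest extends UnitTestCase {

  public function testBuildPermissionName() {
    $this->assertSame(
      'edit any publication content',
      PermissionHelper::buildPermissionName('edit', 'publication')
    );
  }

}
```

Tests at this level run in milliseconds, which is what makes near-total coverage practical.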

Strategic Use of the Test Pyramid with DTT

Using DTT, the team could efficiently handle the additional setup and teardown required in any test automation effort, especially for kernel-level and functional-level execution. Helper methods consumed by both kernel and functional tests, previously written in their respective base classes, were identified and moved into a separate class file as common traits, which drastically reduced the overall execution time.
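A sketch of that refactoring (the trait, namespace, and helper are illustrative, not the project's actual code): helpers that both base classes once duplicated move into a single shared trait.

```php
<?php

namespace Drupal\Tests\openscholar_example\Traits;

use Drupal\user\Entity\Role;

/**
 * Helpers shared by the kernel and functional base classes, extracted
 * into one trait so the logic is written and maintained only once.
 */
trait CommonTestSetupTrait {

  /**
   * Creates the role most tests depend on.
   */
  protected function createScholarRole(): Role {
    $role = Role::create(['id' => 'scholar', 'label' => 'Scholar']);
    $role->save();
    return $role;
  }

}
```

Each base class then declares `use CommonTestSetupTrait;` instead of defining its own copy of the helper.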

Quick Feedback and Releases

By implementing the test pyramid, the team designed the largest number of automated tests at the unit level, with the count decreasing further up the pyramid. Neither the design nor the execution of automated tests had to wait for a stable feature to reach QA or the Software Development Engineers in Test (SDETs). With this balance achieved, automated tests ran efficiently on Travis CI, leading to faster feedback and quicker releases.

Faster Verification of Modules

Although Drupal core and contrib modules have their own test suites that run before each release, this project implemented a layer above Drupal. A rich test automation suite meant faster verification of the customizations built on top of Drupal whenever core or contrib modules were upgraded, allowing the team to release the product with confidence.