UI Automation is costly to build and maintain. Measuring these suites properly ensures we do not end up in a heavy-maintenance state without knowing the actual ROI. Ideally, we would know the value these tests provide and quantify it to understand that ROI. Here is what we did to break the routine and measure our functional automation tests.
We (Raj Vasikarla & Suresh) followed the approach below to achieve this:
- Build with Istanbul Instrumentation
- Run your end-to-end UI Automation Suite
- Generate the Coverage
Build & Serve with Istanbul Instrumentation: This is the first step in preparing your code to provide useful insights into the automation tests you write for your feature. There are many ways to instrument your React code; below is one method that uses Istanbul.
For the example below, we assume there is already a grunt task that builds your React code; for simplicity, we will update that same grunt task to do the necessary instrumentation.
1. Add the loader for instrumenting the code with Istanbul, using babel-plugin-istanbul.
2. Define the filter for the source files, including/excluding the folders that need to be instrumented.
3. Update the loader.
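The three steps above could look like the following in a webpack config driven by the grunt task. This is a sketch, assuming babel-loader; the paths and filter patterns are illustrative, not taken from the original article.

```javascript
// webpack configuration fragment (illustrative) that instruments React
// sources with babel-plugin-istanbul while building.
module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,         // step 2: which source files to consider
        include: /src/,          // instrument application code...
        exclude: /node_modules/, // ...but not third-party dependencies
        use: {
          // step 3: the updated loader entry
          loader: 'babel-loader',
          options: {
            // step 1: babel-plugin-istanbul injects coverage counters
            plugins: ['istanbul'],
          },
        },
      },
    ],
  },
};
```

This instrumentation should be enabled only for test builds, since the injected counters add size and overhead to the bundle.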
You can verify the instrumentation by checking for the “__coverage__” variable in the dev tools console. If the variable is present, instrumentation succeeded; if not, go back and check the instrumentation setup.
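The same check can be scripted instead of done by hand. A small hypothetical helper (not from the original article) that decides whether a page is instrumented, given the value of `window.__coverage__` (for example, fetched from the page in a test):

```javascript
// Returns true when the page exposes an istanbul coverage object,
// i.e. when instrumentation succeeded.
function isInstrumented(coverage) {
  return typeof coverage === 'object' && coverage !== null;
}

// An instrumented page maps file paths to coverage records:
console.log(isInstrumented({ 'src/App.js': {} })); // true
// An uninstrumented page leaves window.__coverage__ undefined:
console.log(isInstrumented(undefined)); // false
```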
Run end-to-end UI Automation: Once the instrumented build is up and running, we can run the UI automation suites. Among the many open-source UI automation frameworks available, we chose WebdriverIO for various reasons: it has a bunch of features that let automation engineers set up and build a suite in no time. Below is an example of what a typical UI automation test built with WebdriverIO looks like:
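The original post embedded the example here; as a stand-in, a minimal mocha-style WebdriverIO spec might look like this. The URL, selectors, and test scenario are hypothetical placeholders, not from the original article.

```javascript
// login.spec.js — a typical WebdriverIO test (async API).
// `browser`, `$`, and `expect` are provided globally by the wdio test runner.
describe('login page', () => {
  it('shows an error for a wrong password', async () => {
    await browser.url('https://example.com/login'); // hypothetical URL
    await $('#username').setValue('demo-user');
    await $('#password').setValue('wrong-password');
    await $('button[type="submit"]').click();
    // expect-webdriverio assertion on a hypothetical error element
    await expect($('.error-banner')).toBeDisplayed();
  });
});
```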
More examples of how to use WebdriverIO can be found here. Once we have a suite of tests to run against the instrumented code, we need to generate the coverage and present it in a readable format.
Generate the Coverage:
We enhance the WebdriverIO config file to do the following:
Generate coverage after each test. We leverage WebdriverIO’s afterTest() hook to capture the __coverage__ object once every test completes. Each captured coverage object is appended to an array of coverageObjects.
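A sketch of how that hook could look in the WebdriverIO config. The array name and file layout are illustrative; the original config snippet is not reproduced here.

```javascript
// wdio.conf.js (fragment): collect window.__coverage__ after every test.
const coverageObjects = [];

exports.config = {
  // ...the rest of the usual wdio options (specs, capabilities, etc.)...

  afterTest: async function () {
    // Pull the istanbul coverage object out of the instrumented page.
    const coverage = await browser.execute(() => window.__coverage__);
    if (coverage) {
      coverageObjects.push(coverage);
    }
  },
};
```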
Merge the Overall Coverage: At the end of the entire suite, we merge all the coverage information from the coverageObjects and instruct Istanbul to generate readable reports. Istanbul provides various report formats; we opted for “html”. Below is a screenshot of how a sample report looks:
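To make the merge step concrete, here is a simplified sketch that sums Istanbul’s statement, function, and branch hit counters across coverage objects. In a real setup, istanbul-lib-coverage’s CoverageMap.merge() together with the istanbul report APIs (or the nyc CLI) does this work; the function below is only an illustration of what merging means.

```javascript
// Merge istanbul coverage objects by summing their hit counters.
// Each object maps a file path to a record with `s` (statements),
// `f` (functions), and `b` (branches) counters.
function mergeCoverage(coverageObjects) {
  const merged = {};
  for (const cov of coverageObjects) {
    for (const [file, data] of Object.entries(cov)) {
      if (!merged[file]) {
        // First sighting of this file: deep-copy its coverage record.
        merged[file] = JSON.parse(JSON.stringify(data));
        continue;
      }
      // Sum statement and function hit counts keyed by id.
      for (const key of ['s', 'f']) {
        for (const id of Object.keys(data[key])) {
          merged[file][key][id] += data[key][id];
        }
      }
      // Branch counters are arrays (one count per branch arm).
      for (const id of Object.keys(data.b)) {
        data.b[id].forEach((hits, i) => {
          merged[file].b[id][i] += hits;
        });
      }
    }
  }
  return merged;
}
```

The merged object can then be handed to Istanbul’s reporting tooling to produce the HTML report described above.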