Anti-Patterns in Test Automation – Part 2


Welcome to part 2 of the anti-pattern series. I can’t wait to get started, because I am opening this article with Cucumber, a very famous BDD testing and collaboration tool. Let’s jump right into it.

Cucumber in Every Recipe

Before I say anything, I want you to read this –

“If you need a testing tool for driving a mouse and a keyboard, don’t use Cucumber. There are other tools to do this with far less abstraction and typing overhead than Cucumber.”

-Aslak Hellesøy, creator of Cucumber

So if you are not following the BDD process in your product development, there is one anti-pattern that will undoubtedly make it hard for you to get value out of test automation: using Cucumber or other similar Gherkin-style tools.

If you want to review that decision, you can do it now. If you are already too deep into it, consider starting a new suite for future testing. Getting a suite up and running does not take long, and even if it did, it would be worth it.

There is a lot to say about Cucumber and BDD. I will try to cover the why and why not of BDD in a future article.

More the Merrier in UI Automation?

I can guarantee that your team and stakeholders will lose trust in your automation, and may even withdraw the program, if you let UI automation sprawl and try to test everything through it. More is not necessarily better.

Imagine a team running 100 cases a day for ten days with an average 70% passing rate. Each day, 30% of the tests fail. Can the team claim these failures are caused by bugs in the product or feature? Let’s assume they are. That works out to 300 bugs reported in 10 days, which sounds exceptionally improbable. And yes, it is.
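The arithmetic above can be checked with a quick sketch (the figures are the hypothetical ones from the example, not real data):

```python
# Implied defect count if every UI failure were treated as a real bug.
cases_per_day = 100   # hypothetical suite size
pass_rate = 0.70      # hypothetical average pass rate
days = 10

failures_per_day = round(cases_per_day * (1 - pass_rate))  # 30 failures/day
implied_bugs = failures_per_day * days                     # 300 "bugs" in 10 days

print(failures_per_day, implied_bugs)
```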

These tests are failing not because of actual software defects, but as the result of a careless automation effort. Such failures are referred to as false positives.

And honestly, nobody investigates these failed automated tests. Instead, we should be asking ourselves these questions:

  • Are we going to production with these 30% failures?
  • What value are these failed cases adding to product quality?

Think about a feature like search or profile editing in a popular web app. How often do these features break? Our automated UI tests should behave the same way: staying green almost all the time and failing rarely, and only because of a genuine regression issue.

If we are determined to make our automation reliable, we have to remove this anti-pattern from the equation.

Using Excel as a Data Store

Test data is a prerequisite in most test automation, and it should be managed in file formats like .properties and .csv. Using Excel as a data source, however, should be avoided at all costs: the I/O involved in reading Excel is expensive and translates into poor performance.

On top of that, keeping track of the compatibility of third-party APIs like Apache POI and JExcel, and maintaining an MS Excel subscription for all environments, becomes additional maintenance overhead.

Here are a few reasons why I do not encourage teams to use Excel files within automation solutions:

  • The logic to read and manage Excel files is pure overhead.
  • That code requires regular maintenance and is prone to error.
  • It eats time you could spend automating, while adding zero value to the business.
  • Excel must be downloaded, installed and licensed for every environment.

I always try to generate prerequisite data within the suite itself, so that it is unique for every run. This saves me a lot of maintenance time between executions.
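As an illustration, a small utility along these lines keeps each run’s data unique without any external file (the field names here are invented for the sketch):

```python
import uuid
from datetime import datetime, timezone

def build_test_user(prefix="autotest"):
    """Generate a unique, throwaway user record for a single test run.

    A fresh UUID fragment keeps runs from colliding with each other,
    or with leftover data in shared environments.
    """
    unique = uuid.uuid4().hex[:8]
    return {
        "username": f"{prefix}_{unique}",
        "email": f"{prefix}_{unique}@example.com",
        "created": datetime.now(timezone.utc).isoformat(),
    }

# Two calls never produce the same user.
user_a = build_test_user()
user_b = build_test_user()
print(user_a["username"], user_b["username"])
```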

Another use of Excel I have seen is for building keyword-driven frameworks. I once created automated tests in Excel, and it was my first and last time.

The fact is, I haven’t seen anyone use it successfully in a long time. There are better ways to build a robust framework if anyone wants to try, and keyword-driven is not one of them.

Hence it is an anti-pattern and should be avoided.

A few tips I can suggest for data-driven testing:

  1. Write utility functions to generate test data.
  2. Use CSV and JSON files. They are lightweight and easy to read.
  3. Generate prerequisite data using web services.
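A minimal sketch of tip 2: the standard library reads both formats with no extra dependency. The file contents below are inlined and invented for the example, standing in for a real login.csv or login.json:

```python
import csv
import io
import json

# Inline sample data standing in for login.csv / login.json files.
csv_text = "username,password,expected\nalice,secret1,ok\nbob,badpass,error\n"
json_text = '[{"username": "alice", "password": "secret1", "expected": "ok"}]'

# csv.DictReader yields one dict per data row, keyed by the header.
csv_rows = list(csv.DictReader(io.StringIO(csv_text)))

# JSON parses straight into the same dict shape.
json_rows = json.loads(json_text)

for row in csv_rows:
    print(row["username"], row["expected"])
```

Either format gives you a list of plain dicts you can feed straight into a parameterised test, with none of the Excel I/O or library-compatibility baggage.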

Automation replaces Manual Testing

The straight answer: automation cannot replace the manual testing you still need. Automated tests come with technical limitations, and some scenarios are simply not feasible to automate. This relates to the same ‘automate everything’ argument I presented in my previous article.

Moreover, automation is a set of predefined instructions that can help us find regression defects, but not new bugs during feature development. For those, we do exploratory testing, which is what uncovers them.

It is also unachievable to test usability, or the “what if we did it the other way” kind of approach, with automation.

Automation, Functional and Performance Love Triangle

This question has two sides.

  • Can my automated functional tests measure the performance of my application?

Functional automation and performance testing are two different requirements, and both have equally capable tools and techniques available. One can argue that you can measure UI performance from functional automation test reports, and yes, we can, but only if that suits the strategy.

I prefer to keep my performance, load and monitoring requirements separate from functional automation. It keeps the demands of both verticals separate and manageable.

  • How can I measure the performance of my functional automation tests?

Answer this: if one test runs in one minute, how long will ten tests take to execute? You are doing it correctly if you say one minute. Run in parallel, the suite’s feedback time stays at one minute, even though each test carries its own setup and teardown, whereas running the ten cases in sequence would take at least ten. Parallel execution of tests is the most compelling way to scale automation.
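The idea can be sketched with the standard library: ten simulated tests of equal length finish in roughly the time of one when run in parallel (the “tests” here just sleep, with durations shortened so the sketch runs fast):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_test(n, duration=0.2):
    """Stand-in for one UI test: just sleeps for its 'runtime'."""
    time.sleep(duration)
    return f"test_{n}: pass"

start = time.perf_counter()
# Ten workers, so all ten "tests" run at the same time.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fake_test, range(10)))
elapsed = time.perf_counter() - start

# Ten 0.2s tests in parallel take ~0.2s, not ~2s in sequence.
print(f"{len(results)} tests in {elapsed:.2f}s")
```

Real test runners offer the same lever; the scheduling mechanics differ, but the feedback-time argument is identical.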

I hope you are enjoying reading about anti-patterns so far. I will conclude this series in my next article, with a more focused look at coding conventions and code hygiene when implementing automation.

Until then, happy testing and keep automating.