
Why Does Automation Fail in Many Projects?


We always discuss the benefits of automation in every project, but we often forget to ask ourselves: can it actually be implemented successfully, or will the automation fail? Currently, almost all projects incorporate automation, especially with automation in agile. This helps them reduce manual effort and also provides continuous feedback about the operation of the application.

Even though automation promises all of the above, reaching the goal is not a simple task. Every testing team faces one hurdle or another while jumpstarting automation testing in their project. This article is written to serve as a checklist of the things that might go wrong with automation testing.

Challenges That Make Automation Fail

Before we find out why automation fails in many projects, let us first understand why automation testing is not implemented immediately when a project starts.

  • Expensive QA practice – Automation testing is an expensive QA process. To set up automation, the project manager has to allow testers time to understand the scope of the project and choose the tool that would best serve their purpose.

    • Even though there are a number of free automation tools on the market, they might not serve the needs of the project. In such circumstances, opting for a paid tool is the only option.

    • Also, a QA engineer who is already busy with manual testing can’t devote time to automation. For this reason, assigning a dedicated QA resource is the right choice.

  • The need for coding skills – Even though many automation tools provide a record-and-playback feature, relying on it is not a good practice. Every automation script should be written in such a way that it is easy to understand and remains scalable.

    • Also, a tester should be able to understand the existing codebase and start improving or adding features to it. Such practices require a thorough understanding of the IDE and the programming language.
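The maintainability point above can be sketched with a simple page-object layout in Python. This is a hypothetical example: the LoginPage, its element ids, and the stub driver are all illustrative, not taken from any real application; in a real project the injected driver would be a Selenium WebDriver exposing the same find_element interface.

```python
# A minimal page-object sketch (hypothetical page and element ids).
# Locators live in one place, so a UI change means a one-line fix.
class LoginPage:
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type='submit']")

    def __init__(self, driver):
        # Any object exposing find_element(by, value) works here --
        # a real WebDriver, or the stub below for demonstration.
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


# Tiny stub driver so the sketch is runnable without a browser.
class _StubElement:
    def __init__(self, log, locator):
        self.log, self.locator = log, locator
    def send_keys(self, text):
        self.log.append(("type", self.locator, text))
    def click(self):
        self.log.append(("click", self.locator))

class _StubDriver:
    def __init__(self):
        self.log = []
    def find_element(self, by, value):
        return _StubElement(self.log, (by, value))

driver = _StubDriver()
LoginPage(driver).login("alice", "secret")
print(driver.log)  # the recorded actions: two send_keys, then the click
```

Because the test reads almost like the manual steps, a tester who understands the codebase can extend it without touching raw selectors, which is the readability and scalability a recorded script rarely offers.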

Culprits of Automation Failure

Automation testing holds a lot of promise. Once an automation framework is finalized and implemented, slowly yet steadily, most teams, in spite of their best automation testing practices, face one problem or another. Some of the prominent root causes are:

  • The absence of strong development skills – Yes, automation demands good coding ability and software testing skills. When automation is freshly implemented, the QA team has to decide on the right framework to suit the project’s needs and demands.

    • Also, the automation scripts should be reviewed and refactored so that they adhere to the coding practices prevalent in the project.

    • If an automation framework is already in place, a QA member should be capable enough to understand the project structure and even contribute to it.

    • As testers are mostly involved in different QA practices, they seldom make an effort to upgrade their coding ability. Thus, when assigned to a challenging automation project, they create scripts at a slow pace.

  • Lack of product knowledge – To eliminate any overhead on the QA team, many projects hire dedicated automation testers whose sole responsibility is to create scripts. They receive the test data and test cases that need to be automated.

    • As these automation test cases are created by blindly adhering to the data provided, the resulting scripts can lack quality.

    • Also, in the case of missing information or wrong test data, the automation tester has to consult the manual tester. Such incidents can prolong the automation script creation process.

  • Shortage of quality scripts – The main reason for introducing automation in a project is to increase the testability of the application without manual intervention. Even if the automation framework is stable, there is a chance that it might not add any value to the project.

    • Test Data – If the test data used in the automation scripts remains unchanged, most test cases would test the same features over and over again with the same data set. Even though the process is completely automated and works seamlessly, it won’t add any actual value to the QA team.

    • Test Scenarios – The approach to automation differs from project to project. In some projects, the QA team tries to automate each and every feature present in the application. While maximizing the number of features tested, they forget to take into account the realistic scenarios that an ordinary user would encounter. This results in continued dependency on manual testing and, in the worst cases, can even lead to bug slippage in production.

    • Multiple platform/browser support – Most applications are built to run on different platforms and devices. But executing the same script again and again on different platforms requires different browser drivers, and often different systems as well. Also, automating a mobile application requires even more technical knowledge.
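The test-data concern above can be illustrated with a small data-driven sketch: one check routine run over a table of varied inputs, so different branches of the feature are exercised instead of the same happy path with a frozen data set. The sign-up rules and values here are hypothetical, purely for illustration.

```python
# Data-driven sketch: one test routine, many data sets
# (hypothetical sign-up validation rules, for illustration only).

def validate_signup(email, age):
    """Return True if the sign-up data passes the (made-up) rules."""
    return "@" in email and 18 <= age <= 120

# Varying the data set exercises different branches of the same feature;
# an unchanged data set would re-test the happy path over and over.
TEST_DATA = [
    ("alice@example.com", 30, True),    # typical valid user
    ("bob-at-example.com", 30, False),  # malformed email
    ("carol@example.com", 17, False),   # under age
    ("dave@example.com", 121, False),   # out of range
]

for email, age, expected in TEST_DATA:
    result = validate_signup(email, age)
    assert result == expected, f"{email}, {age}: got {result}"
print(f"{len(TEST_DATA)} data-driven checks passed")
```

In a real framework the same idea is usually expressed with a parameterization facility (for example, pytest's parametrize), keeping the test logic written once while the data table grows.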

  • Application feature changes – When any new feature is added to the application, it is likely that existing scripts will fail. To make sure that the automation scripts work as before, QA has to make changes to them.

    • Such changes are hard to estimate and can take up a lot of bandwidth of the QA team.

    • Until the automation scripts are fixed, all the scripts affected by the new feature will keep failing, and automation won’t add any value to the project.

    • As every automation script uses element locators to mimic user actions, any change in an element locator can cause the automation to fail. Any new application feature can change the element locators and require automation maintenance.

  • Poor selection of automation tool – As discussed earlier, while implementing automation in a project, selecting the right automation tool can make a lot of difference.

    • Every application has different technicalities and is supported on different systems. As automation tools are not tailor-made for each and every application, the QA team must weigh the pros and cons of each tool before finalizing one.

    • If tool selection is done in a hurry, or if automating the application falls outside the scope of the automation tool, the project’s automation has to be scrapped.

    • Such circumstances can prove to be very expensive for the project, and can also impact upcoming deliveries.

  • The absence of automated test scheduling – The purpose of automation is to run existing test scripts repeatedly without the need for human intervention. For this reason, almost all stable QA automation suites are scheduled.

    • By using tools like Jenkins and Bamboo, the QA team can schedule automation test runs for all the available test cases. Such test runs can be configured and set to execute any number of times. Such practices increase the QA team’s faith in automation and also help capture bugs and issues.

    • But when such test scheduling is not done, a tester has to manually run each and every script and monitor its progress. This takes away the advantage provided by automation.

  • Lack of resources – Ideally, the application and the automation framework are set up on different servers. Also, depending on the type of application, the QA team might require different devices on which the automated tests would be executed.

    • As using emulators for automation doesn’t provide fully accurate results, it is advisable to test the application either in the cloud or on actual devices. Procuring all these facilities can be an expensive and lengthy procedure, which can lead to automation failure and incur losses.

    • Also, some automation tools, like TestComplete, are not available for free. In such cases, the project manager has to either ask the higher authorities to allocate a budget for the tool or tell the QA team to search for an alternative.

How to Prevent Project Automation from Failing?

To prevent automation failure, every team member, be it QA or developer, should work together. QA can invest their time in creating, running, and fixing the test cases and scripts. A key decision has to be made before starting automation in a project: to automate or not.

But in case there is a lot of workload and the QA team also has to participate in manual or planning tasks, they can delegate some of their work to developers. Such a step will ensure steady progress of the automation suite and won’t create any major hurdle all of a sudden. Apart from distributing automation tasks among the project members, the following steps can help strengthen project automation and reduce failures.

  • Automating integration test cases – An important and often ignored cause of automation failure. Feature-specific automation doesn’t test the impact areas of the application when different events are triggered by a single user in a session. Automating integration test cases would not only increase test coverage but also improve the quality of the automated test cases.

  • Using generic locators – In most automation scripts, application elements are identified using XPath, id, class name, or CSS selectors. But when the application’s UI components change, the web driver fails to identify the elements, the test case fails, and script maintenance overhead is created for the QA team. To avoid such occurrences, generic, stable selectors should be used. This will not only reduce the need for maintenance but also create a strong automation suite.
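One way to read the "generic locators" advice is a centralized locator registry: tests refer to elements by a logical name, so a UI change means editing one table instead of hunting through many scripts. This is a sketch under assumptions; the element names and selectors below are hypothetical.

```python
# Centralized locator registry (hypothetical element names and selectors).
# Scripts look up elements by a logical name; when the UI changes,
# only this table changes, not every test.
LOCATORS = {
    # Prefer stable hooks (ids, data-* attributes) over brittle
    # absolute XPaths like "/html/body/div[3]/div[2]/form/input[1]".
    "search_box":    ("css selector", "input#search"),
    "search_button": ("css selector", "button[data-test='search-submit']"),
}

def locator(name):
    """Resolve a logical element name to its (strategy, value) pair."""
    try:
        return LOCATORS[name]
    except KeyError:
        raise KeyError(f"No locator registered for '{name}'") from None

# A test references the logical name, never the raw selector:
by, value = locator("search_box")
print(by, value)
```

In a Selenium-based suite the returned pair would feed straight into find_element; the registry itself stays framework-agnostic.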

  • Refactoring automation scripts – Rather than all QA testers merging their changes onto the master branch, it is advisable to assign a gatekeeper who would monitor code check-ins and look out for any faulty code. The gatekeeper would make sure that all check-ins comply with the coding standards, and then merge each commit into the master branch.

  • Cloud-based environment – If a project faces a resource crunch (a widely cited reason for automation failure), it is advisable to opt for cloud environments like BrowserStack to test the application on different devices and browsers.

About the author


Arindam Bandyopadhyay is an automation tester with over 5 years of experience in software testing. While during the day he juggles between Eclipse and spreadsheets, at night he lets his fingers do the talking as he writes about anything and everything his paradoxical mind desires.
