Automation Plan – Keep it simple

These days, most companies are moving towards more and more automation. It is essential to have a plan in place; otherwise, the automation effort may fail.

Here’s a guide (by Ray Claridge) to making automation a success.

  • Business buy-in – Before starting to automate, make sure you’ve got buy-in from line managers and developers. Remember, automation is time-consuming and will cost your company money to get off the ground.
  • Plan – Don’t just start automating random functionality; have a plan and a document explaining the approach and how long each test will take to develop. Remember to get sign-off from all parties involved.
  • Identify high-risk areas – Automating a fully fledged system is going to take a long time, so do some analysis to identify the high-risk areas, such as the most used, high-volume, security-critical or transactional sections, and focus on them first.
  • Identify areas less likely to change – Maintaining automated test scripts is not a five-minute job, so don’t start on areas that are likely to change. Equally, don’t assume that functionality less likely to change doesn’t need testing. Past experience has taught me never to assume.
  • Document your tests – You need to do this so that it’s clear to others exactly what the tests cover. It’s also handy if your automation tool is unavailable or your tests are falling over.
  • Keep track of your test runs – Keeping a chart of all your tests and tracking automated vs manual effort gives visibility that you’re saving your company money. It’s also handy when trying to get buy-in.
  • Keep it simple – Remember, tests should be simple so they can be reused again and again. This keeps maintenance costs down and allows others to pick the tests up in the future, especially if you’ve brought a contractor in to write them.
  • And lastly, one for all the Product Managers, Development Managers and Business Units –
    Don’t assume that because you’ve got someone writing automated tests, all your code quality issues are over. Remember – automation is only as good as the tests written!
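The “document your tests” and “keep it simple” points above can be sketched as a small, self-documenting test. The function under test and all names here are invented for illustration; the point is the explanatory docstring and the single clear assertion.

```python
# Hypothetical example: a small, self-documenting automated test.
# add_to_cart is a toy stand-in for real application code.

def add_to_cart(cart, item, qty):
    """Toy implementation standing in for the application under test."""
    cart[item] = cart.get(item, 0) + qty
    return cart

def test_add_to_cart_increments_quantity():
    """Covers: adding an existing item increases its quantity.

    Documented so that anyone picking up the suite later (e.g. after a
    contractor leaves) knows exactly what this test protects, and so the
    scenario can be run manually if the automation tool is unavailable.
    """
    cart = {"apple": 1}
    add_to_cart(cart, "apple", 2)
    assert cart["apple"] == 3

test_add_to_cart_increments_quantity()
print("test passed")
```

A test this small is cheap to maintain and readable by anyone who inherits the suite.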

Download android app Software Testing – Full Stack QE / SDET and get the early access.


 

Common Pitfalls – Software Test Automation

Implementing a test automation tool is not easy. Below are some of the common pitfalls that companies have come across:

  • The various tools used throughout the development lifecycle did not easily integrate.
  • Spending more time automating test scripts than actually testing. It is important to decide first which tests should be automated and which cannot be.
  • Everyone on the testing team trying to automate scripts. It is preferable to assign test script development to people who have some development background, so that manual testers can concentrate on other testing aspects.
  • Elaborate test scripts being developed, duplicating the development effort.
  • Test tool training being given late in the process, which leaves test engineers lacking tool knowledge.
  • Testers resisting the tool. It is important to have a Tool Champion who can advocate the tool’s features in the early stages to avoid resistance.
  • Expectations for return on investment in test automation being too high. When a testing tool is introduced, the testing scope will initially grow, but if automation is done correctly the effort will decrease in subsequent releases.
  • The tool having problems recognizing third-party controls (widgets).
  • A lack of test development guidelines.
  • Reports produced by the tool being useless, because the data required to produce them was never accumulated in the tool.
  • Tools being selected and purchased before the system engineering environment is defined.
  • Various tool versions being in use, resulting in scripts created in one version not running in another. One way to prevent this is to ensure that tool upgrades are centralized and managed by a configuration management team.
  • A new tool upgrade not being compatible with the existing system engineering environment. It is preferable to beta test the new version before rolling it out in the project.
  • The tool’s database not allowing for scalability. It is better to pick a tool backed by a robust, scalable database. Additionally, it is important to back up the test database.
  • Incorrect use of a test tool’s management functionality, resulting in wasted time.
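The version-drift pitfall above is usually countered by keeping tool versions in one centrally managed pin list and checking each environment against it. A minimal sketch of that idea follows; the tool names and version numbers are made up for illustration.

```python
# Sketch of centralized tool-version management: a single pin list is the
# source of truth, and a check reports any machine that has drifted.
# "ui-test-tool" and "report-plugin" are hypothetical tool names.

PINNED = {"ui-test-tool": "7.2.1", "report-plugin": "1.4.0"}

def check_versions(installed: dict) -> list:
    """Return a list of mismatches between installed tools and the pins."""
    problems = []
    for tool, pinned in PINNED.items():
        actual = installed.get(tool)
        if actual != pinned:
            problems.append(f"{tool}: pinned {pinned}, found {actual}")
    return problems

# A machine that has drifted from the pinned versions:
print(check_versions({"ui-test-tool": "7.3.0", "report-plugin": "1.4.0"}))
```

In practice the pin list would live in version control (e.g. a requirements file), owned by the configuration management team rather than individual testers.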


Important factors to consider before Starting with Test Automation

1. Scope – It is not practical to try to automate everything, nor is there generally the time available. Pick very carefully the functions/areas of the application that are to be automated.

2. Preparation Timeframe – The preparation time for automated test scripts has to be taken into account. In general, preparing automated scripts can take two to three times longer than manual testing. In reality, chances are that the tool will initially increase the testing scope, so it is very important to manage expectations. An automated testing tool does not replace manual testing, nor does it replace the test engineer. Initially the test effort will increase, but when automation is done correctly it will decrease on subsequent releases.

3. Return on Investment – Because the preparation time for test automation is so long, I have heard it stated that the benefit of test automation only begins to accrue after approximately the third time the tests have been run.
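The “third run” rule of thumb is simple break-even arithmetic: automation has a large up-front cost and a small per-run cost, while manual testing costs the same every run. The figures below are illustrative assumptions, not measurements.

```python
# Rough break-even arithmetic behind the "benefit after ~3 runs" claim.
# Assumed costs (in hours) are invented for illustration.

def breakeven_runs(manual_cost_per_run, automation_setup_cost, automated_cost_per_run):
    """Smallest number of runs at which total automated cost is no more than manual."""
    runs = 0
    while True:
        runs += 1
        manual_total = manual_cost_per_run * runs
        automated_total = automation_setup_cost + automated_cost_per_run * runs
        if automated_total <= manual_total:
            return runs

# Manual pass: 8 hours. Automating the same tests: 24 hours up front
# (three times one manual pass), then 0.5 hours per run to execute and review.
print(breakeven_runs(8, 24, 0.5))  # prints 4: automation pays off around the fourth run
```

Changing the assumptions (e.g. a higher setup cost or more expensive automated runs) pushes the break-even point further out, which is exactly why expectations need managing.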

4. When is the benefit to be gained? – Choose your objectives wisely, and think seriously about when and where the benefit is to be gained. If your application is changing significantly and regularly, forget about test automation: you will spend so much time updating your scripts that you will not reap many benefits. (However, if only disparate sections of the application are changing, or the changes are minor, or there is a specific section that is not changing, you may still be able to use automated tests successfully.) Bear in mind that you may only ever be able to do a complete automated test run when your application is almost ready for release, i.e. nearly fully tested. If your application is very buggy, the likelihood is that you will not be able to run a complete suite of automated tests, because of the failing functions encountered along the way.

5. The Degree of Change – The best use of test automation is regression testing: using automated tests to ensure that pre-existing functions (e.g. functions from version 1.0, i.e. not the new functions in this release) are unaffected by changes introduced in version 1.1. Since proper test automation planning requires that scripts are designed so that a simple GUI change (such as renaming or moving a particular control) does not totally invalidate them, you need to take into account the time and effort required to update the scripts. For example, if your application is changing significantly, the scripts from version 1.0 may need to be completely rewritten for version 1.1, and the effort involved may be prohibitive, or at the very least not accounted for. However, if only disparate sections of the application are changing, or the changes are minor, you should be able to use automated tests successfully to regress these areas.
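One common way to keep scripts resilient to simple GUI renames, as described above, is to collect locators in one page-object-style class instead of scattering them through every script. The sketch below uses an invented FakeDriver and control names; a real suite would pass in a UI driver such as Selenium’s.

```python
# Sketch of the page-object idea: locators live in one mapping, so a
# renamed or moved control means one edit, not a rewrite of every script.
# FakeDriver and all locator strings are hypothetical.

class LoginPage:
    # Single place to update if a control is renamed or moved in v1.1
    LOCATORS = {
        "username": "id=user-name-input",
        "password": "id=password-input",
        "submit": "id=login-button",
    }

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.LOCATORS["username"], user)
        self.driver.type(self.LOCATORS["password"], password)
        self.driver.click(self.LOCATORS["submit"])

class FakeDriver:
    """Stand-in for a real UI driver; records the actions performed."""
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

driver = FakeDriver()
LoginPage(driver).login("alice", "secret")
print(len(driver.actions))  # prints 3: two typed fields and one click
```

If version 1.1 renames the login button, only the `LOCATORS` entry changes, and every script that calls `login` keeps working.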

6. Test Integrity – How do you know (measure) whether a test passed or failed? Just because the tool returns a ‘pass’ does not necessarily mean that the test itself passed. For example, the absence of an error message does not mean that the next step in the script completed successfully. This needs to be taken into account when specifying pass/fail criteria for test scripts.
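The test-integrity point can be made concrete with a toy example: a save operation that silently drops bad input raises no error, so a script that only checks for errors reports a false ‘pass’, while a check on the resulting state catches the failure. Everything here is invented for illustration.

```python
# "No error was raised" is not the same as "the step worked".
# save_record below quietly ignores invalid records instead of raising.

db = []

def save_record(record):
    # Buggy-but-quiet behavior: records without an id are silently dropped.
    if "id" in record:
        db.append(record)

def weak_check():
    save_record({"name": "no id"})   # completes without error...
    return True                      # ...so a naive script reports "pass"

def strong_check():
    before = len(db)
    save_record({"name": "no id"})
    return len(db) == before + 1     # verify the state actually changed

print(weak_check(), strong_check())  # prints: True False
```

Only the state-based check exposes the silent failure, which is why pass/fail criteria should assert on outcomes, not merely on the absence of errors.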

7. Test Independence – Test independence must be built in so that a failure in the first test case won’t cause a domino effect and either prevent, or cause the failure of, the rest of the test scripts in that test suite. In practice, however, this is very difficult to achieve.
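A minimal sketch of building in that independence: each test constructs its own fresh fixture instead of reusing state left behind by an earlier test. The names are illustrative; frameworks like pytest provide fixtures that do this more systematically.

```python
# Each test builds its own state, so one test's failure cannot
# cascade into the others. All names are hypothetical.

def fresh_account():
    """Every test gets its own account; none depends on another's leftovers."""
    return {"balance": 100, "frozen": False}

def test_withdraw():
    account = fresh_account()
    account["balance"] -= 30
    assert account["balance"] == 70

def test_freeze():
    account = fresh_account()   # unaffected even if test_withdraw failed
    account["frozen"] = True
    assert account["frozen"]

test_withdraw()
test_freeze()
print("both tests passed independently")
```

Shared mutable state (a common database row, a logged-in session reused across scripts) is exactly what reintroduces the domino effect.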

8. Debugging or “testing” of the test scripts themselves – Time must be allowed for this, and for proving the integrity of the tests themselves.

9. Maintenance of Scripts – Finally, there is a high maintenance overhead for automated test scripts: they have to be kept continuously up to date, otherwise you will end up abandoning hundreds of hours of work because there have been too many changes to the application to make modifying the scripts worthwhile. For the same reason, it is important that the documentation of the test scripts is also kept up to date.



How Giants Test the Software – Google, Apple, Facebook, Amazon & Spotify

Google takes testing seriously:

To that end, Google employs a four-stage testing process for changes to the search engine, consisting of:

  • Testing by dedicated, internal testers (Google employees)
  • Further testing on a crowdtesting platform
  • “Dogfooding,” which involves having Google employees use the product in their daily work
  • Beta testing, which involves releasing the product to a small group of Google product end users

Facebook: Developer-driven testing

Facebook employs a wide variety of automated testing solutions. The tools that are used range from PHPUnit for back-end unit testing to Jest (a JavaScript test tool developed internally at Facebook) to Watir for end-to-end testing efforts.

Amazon: Deployment comes first

The feeling at Amazon is that its development and deployment processes are so mature (the company famously deploys software every 11.6 seconds!) that there is no need for elaborate and extensive testing efforts. It is all about making software easy to deploy, and, equally if not more important, easy to roll back in case of a failure.

Spotify: Squads, tribes and chapters

Testing at Spotify is taken very seriously. Just like programming, testing is considered a creative process, and something that cannot be (fully) automated. Unlike most of the other companies mentioned in this article, Spotify relies heavily on dedicated testers who explore and evaluate the product, instead of trying to automate as much as possible.

Microsoft: Engineers and testers are one

Microsoft’s ratio of testers (SDETs) to developers is currently around 2:3.

Source:

5 effective and powerful ways to test like tech giants: https://bit.ly/2nkjIvy

