Let's Make Software Testing Fun

Article by – softwaretestingsucks.com –

I have been in software testing for quite some time, and at times it sucked: usually it's boring, and sometimes the testing activity is looked down on as a menial job done by less skilled people.
Let's make an attempt to overcome the sucking factors above. Apart from each tester's individual effort, a large and important part has to be played by management.

i. The tester's right attitude: First, realize how important the testing activity is. No matter how much development effort has been spent on making a highly desirable product with state-of-the-art technology, if it does not correctly do what it is supposed to do, it is doomed. An improperly tested application reaching customers translates directly into lost business.

The right attitude for a tester is to find satisfaction, even joy, in hunting down bugs. My uncle Sam, who happens to be a hard-core tester, gave me this inspiring speech – "When a product comes for testing, you must pounce on it. Use every method to break and destroy the system. You must inundate the developer with so many bugs that he curses himself for building that product and for not finding those bugs earlier. Destroy the product utterly and completely. Then you can rightfully claim the title 'Barbaric Tester'. You must find that sadistic joy. Never ever assume that the product that comes for testing is bug free. In the vast land of the product code, the treasure of bugs is hidden somewhere and everywhere; it is your responsibility as a tester to dig it up, discover it, and claim your treasure."
Hmmmmmmm. Developers, pardon my uncle Sam. My fellow comrades (testers), don't take my uncle's advice literally – the point is to add a bit of imagination and some big titles, see the testing activity from a new perspective, and have fun.

ii. Usage of automation tools: Manual testing is sometimes boring, but running the same set of test cases again and again, every time a bug is fixed, in the name of 'regression testing' – damn, life sucks.
If a test case has to be run several times, automate it using tools like WinRunner or Rational Robot. There are significant advantages to automating: first, you can run the tests overnight and check the results in the morning, which saves a lot of time and effort. Second, since it sucks to execute the same test manually again and again, there is a higher probability that a tester in his half-awake state will miss a regression bug, whereas the dumb automated test case checks all the checkpoints every time you run it.
Try to convince your management to purchase an automation tool license. Maybe go for a pilot project using a trial version of the software.
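As a rough illustration of the idea (not tied to WinRunner or any particular tool), a repeated test can be captured as a small self-checking script that re-runs every recorded case on each build. The function `apply_discount` and its cases below are invented purely for this sketch:

```python
# Hypothetical function under test -- it stands in for any behavior where a
# bug was once fixed and must never silently return.
def apply_discount(price, percent):
    """Return price after a percent discount, rounded to 2 decimals."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# The regression suite: one (input, expected) pair per previously fixed bug.
REGRESSION_CASES = [
    ((200.0, 25), 150.0),
    ((99.99, 0), 99.99),
    ((80.0, 100), 0.0),
]

def run_regression_suite():
    """Re-run every recorded case; a tireless stand-in for a half-awake tester."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

print("failures:", run_regression_suite())
```

Unlike the half-awake tester, this dumb script checks every recorded checkpoint identically on every run, which is exactly what regression testing needs.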

iii. How to make bug hunting fun: Management should create healthy competition between testers. They must find a way to make their testers aggressive and make finding bugs a fun-filled activity. Here is one way that comes to my mind:

- A 'big bug board' should be hung in a place visible to the entire team. The board should contain every tester's name and the number of valid bugs logged by them, updated each week. Every week, ranks should be awarded to the testers based on bug count. The tester who logged the most bugs is ranked number one and, optionally, given a big title, say 'Tester of the Week'.
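The weekly ranking itself is trivial to automate. Here is a minimal sketch in Python, where the tester names and bug counts are made up; in practice they would come from your bug tracker:

```python
# Made-up weekly tally of valid bugs per tester (would come from a bug tracker).
weekly_bugs = {"Asha": 12, "Ben": 7, "Carol": 12, "Dev": 3}

# Rank testers by valid bug count, highest first.
board = sorted(weekly_bugs.items(), key=lambda item: item[1], reverse=True)

for rank, (tester, count) in enumerate(board, start=1):
    title = " <- Tester of the Week" if rank == 1 else ""
    print(f"{rank}. {tester}: {count} bugs{title}")
```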

iv. A little change once in a while: If a tester is confined to a single product, or a single module within a product, for a long time, it will suck. Once in a while he should be allowed some change (at the tester's request), either by moving to a different product or by doing a different kind of work, like a pilot project with a new tool.

v. Equal status: Any organization that understands the importance of customer satisfaction knows the importance of testing the product properly before shipping it to the customer. Management also has to realize that testing needs skill. The goal of any testing project is to test the product as thoroughly as possible while consuming as few resources (time and manpower) as possible. It takes skill and experience to design test cases that are small but exercise large chunks of code, that are easy to execute but catch a lot of bugs. It takes skill to identify weak areas in order to allocate more resources to them. It takes skill to use automation tools, test management tools, code coverage and performance testing tools. It takes time for any new entrant to the testing field to become a 'trained bug catcher', and experience in this field is valuable.
Management has to take explicit measures to give testers equal status. The following are some of them:

  • The testing team should be invited to all important meetings related to a product, and their views considered when taking important decisions such as when to release the product.
  • A separate career path has to be created for testers. They must be made equal to developers in terms of pay and position.
  • Management has to allocate adequate funds, as a priority, for the testing team to build adequate infrastructure.
  • Training programs have to be organized for new entrants to the testing field, to teach testing techniques and the usage of testing tools.

Revolutionizing the Testing Process and Accuracy: The Decision Model [Testing Webinar]

The Relational Model changed the way we manage data. Today, the Decision Model organizes business rules and logic in a similarly rigorous manner.  This webinar with Barbara von Halle and Larry Goldberg discusses how the Decision Model enables innovation in the way we can generate test cases, minimize the testing needed, test the logic prior to coding, and assure completeness of testing procedures.


Learning Objectives:

  • Understand how and when program logic errors can be found prior to coding
  • Comprehend a high-level business-centric Decision Model from a testing perspective
  • Develop test cases from a single Rule Family in a Decision Model
  • Seek the minimum number of test cases for an entire Decision Model
  • Modify current testing scenarios to be Decision Model-driven and of higher quality
  • Provide test cases in parallel with completing requirements
  • Support test-driven development paradigms

    It's a free webinar. Webinar date: 9/2/2010
    Time: 11:00 AM to 12:30 PM
    Register Here: http://solutions.compaid.com/forms/WebinarA20100902?ProcessType=PreReg

    Testing without Requirements?

    [Author – Surbhi Shridhar] Are you testing without requirements? Are you testing as per the instructions given by the development team? In this post we will discuss: 1. some common risks/issues that you might face while testing software without requirements, 2. ways to tackle those risks, and 3. alternative ways of testing.
    On-the-fly development, pressured deadlines, changing user requirements, legacy systems, large systems, varied requirements and more are known to lead many a project manager to discard, or fail to update, the functional specification document. While a project manager or requirements manager may be aware of the system requirements, this can cause havoc for an independent test team.

    More so with ever-expanding teams and temporary resources: a large system may not be documented well enough, or sometimes at all, for the tester to form complete scenarios from the functional specification document (if present).

    This poses a great risk to the testing of the system. Requirements will never be clearly understood by the test team, and developers will adopt an 'I am king' attitude, citing incorrectly working functionality as the way the code or design is supposed to work. This can derail development, and you may end up with a product that the business doesn't relate to, the end users don't understand and the testers can't test.

    Dealing with a system that needs to be tested without functional specifications requires good planning, smart understanding and good documentation – lack of which caused the problem in the first place.
    Before I begin describing some ways in which you can tackle the problem – here are a few questions you could be asking yourself:

    1. What type of Software testing will be done on the system– unit / functional / performance / regression?
    2. Is it a maintenance project or is it a new product?
    3. From where have the developers achieved their understanding of the system?
    4. Is there any kind of documentation other than functional specification?
    5. Is there a business analyst or domain knowledge expert available?
    6. Is an end user available?
    7. Is there a priority on any part of the system's functionality?
    8. Are there known high-risk areas?
    9. What is the level of expertise of the development team and the testing team?
    10. Are there any domain experts within your test team?
    11. Are there any known skill sets in your team?
    12. Are you considering manual/automated testing?

    All of these questions should help you identify the key areas for testing, the ignorable areas, and the strong and weak points of your team. Once you have ascertained the level of skill and expertise in your team, you have choices available to choose from.
    Some common risks/issues that you might face while testing the software without requirements are: –

    1. Half-baked development – expected, since your requirements aren't sufficient.
    2. Inexperienced resources.
    3. Functional knowledge unavailable within your team.
    4. Diagrammatic representations that can be interpreted in various ways.
    5. Inadequate time to document functionality properly.
    6. Unavailability of business analysts, developers, and team architects.
    7. Constantly changing requirements.
    8. The level of detail required may be high while the time to produce it is short.
    9. Conflicting diagrams/views arising from ambiguous discussions with business analysts and developers (yes, this is possible!).

    And some ways to tackle the risks are: –

    1. Know your team strength.
    2. Know the technical “modules” as developed by the developers.
    3. Collate the modules or decompose them further, depending on the type of testing you plan to do. For example, a system test will require you to combine modules, while a unit test might require you to break a module down in extensive detail.
    4. Assign modules as per person strength and knowledge.
    5. Maintain a clean communication line between the team, developers and the business analysts.
    6. If the above is done, changing your documentation to match the ever-changing requirements should be possible – although beware, it could play havoc with your estimates.
    7. When conflicts arise, make sure to discuss them with the parties involved. If that is not possible, refer to prototypes, if any. If there are no prototypes, validate with an end user against the desired business approach/need.
    8. To avoid ambiguous interpretation of diagrams or documents, establish standards for your teams to follow. For diagrams, choose approaches that are clean and leave no room for analysis: when a diagram is open to analysis, it is open to interpretation, which may lead to errors. Maintaining error-free documentation, and tracking it, can be painful on a large project.
    9. If the level of detail required is high and time is limited, document the basics – i.e. a very high-level approach – and update the test cases as you go ahead with testing.
    10. Unfortunately there is little you can do about inexperienced resources or inadequate functional knowledge within your team. Be prepared to hire outside help, or build up the knowledge by assigning a team member to the task.

    Alternative Ways of Testing
    If there is absolutely no time for you to document your system, do consider these other approaches of testing:
    User/Acceptance Testing: User testing is used to identify issues with user acceptability of the system. Everything from the most trivial to the most complicated defects can be reported here. It is also possibly the best way to determine how well people interact with the system. The resources used are minimal, and it potentially requires no prior functional or system knowledge – in fact, the less familiar a user is with the system, the better suited he is for user testing. A repeated, regression-style form of user testing can help uncover defects in the high-risk areas.

    Random Testing: Random testing has no rules. The tester is free to do as he pleases and as he sees fit. The system will benefit, however, if the tester knows the domain and has a basic understanding of the business requirements; this prevents the reporting of unnecessary defects. A variation of random testing – Monte Carlo simulation, the generation of values for uncertain variables over and over to simulate a model – can also be used. This suits systems with large variation in numeric or data inputs.
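    As a toy illustration of the Monte Carlo idea, the sketch below feeds a thousand randomly generated inputs to a hypothetical function (`safe_divide`, invented for this example) and checks a simple oracle against every result:

```python
import random

# Hypothetical function under test -- the point is the random-testing
# harness, not the function itself.
def safe_divide(a, b):
    return a / b if b != 0 else 0.0

random.seed(42)  # reproducible run, so any failure can be replayed

failures = []
for _ in range(1000):
    # Generate values for the uncertain inputs over and over (Monte Carlo style).
    a = random.uniform(-1e6, 1e6)
    b = random.choice([0, random.uniform(-1e6, 1e6)])
    result = safe_divide(a, b)
    # Oracle: the result must always be an actual number, never NaN.
    if result != result:  # NaN is the only value unequal to itself
        failures.append((a, b))

print(f"{len(failures)} failing inputs out of 1000")
```

    Seeding the generator is the important design choice here: random testing is only useful if a failing input can be reproduced and reported as a defect.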

    Customer Stories: User/customer stories are simple, clear, brief descriptions of functionality that will be valuable to real users. Small stories are written about each desired function by a team that includes the customer, the project manager, a developer and the tester. Concise stories are handwritten on small cards, so that each story stays small and testable. Once this starting point is available to the tester, existing functional and system knowledge can speed up testing and further test case development.

    Conclusion
    In the end, all one requires is careful thought, planning and mapping out of tasks before one actually begins testing. Testers and test analysts often react by pressing the panic button and beginning to test immediately – which ultimately leads to complete chaos and unmeasurable product quality. Clear thinking with valid goals will help you and your product achieve the desired quality.


    Author – Surbhi Shridhar, B.Tech(I.T.) is a Senior Software Engineer. She has 4 years of valuable experience in the software field.

    WAYS THAT TEAM MEMBERS BUILD TRUST WITH EACH OTHER

    (Just found an excellent article for testers, via http://www.estherderby.com) [A must-read article for test leads and testers]

    Building trust may seem mysterious—something that just happens or grows through some unknowable process. The good news is there are concrete actions that tend to build trust (and concrete actions that are almost guaranteed to break down trust).

    First, let’s agree on a definition of trust in the workplace. We all know that trust is the foundation for teamwork. But to hear some people talk about it, you’d think team members were getting married, not creating software together. What we need in the workplace is professional trust. Professional trust says, “I trust that you are competent to do the work, that you’ll share relevant information, and that you have good intentions towards the team.” Taken broadly, that’s trust about communication, commitment, and competence.

    Click here to read the complete article.
    [About Author – Esther works with individuals, teams, and managers to improve their ability to deliver valuable software. Esther is recognized as a leader in the human side of software development, including management, systems thinking, organizational change, collaboration, team building, facilitation and retrospectives. She is the co-author of the top-rated book Agile Retrospectives: Making Good Teams Great.]

    Dummies Guide to Performance Testing | Performance Testing Life Cycle


    On readers' request, we are starting the series "Dummies Guide to Performance Testing". This is the first post, where we will go through the basics of performance testing:

    What is Performance Testing?
    Why Performance Testing?
    Tests carried out in Performance Testing
    When to start Performance Testing?
    Performance Test Process

    What is Performance Testing?
    The primary objective of performance testing is "to demonstrate that the system works functionally as per specifications, within a given response time, on a production-sized database".
    Why Performance Testing?
    – To assess the system capacity for growth
    The load and response data gained from the tests can be used to validate the capacity planning model and assist decision making.
    – To identify weak points in the architecture
    The controlled load can be increased to extreme levels to stress the architecture and break it; bottlenecks and weak components can then be fixed or replaced.
    – To detect obscure bugs in software
    Tests executed for extended periods can trigger failures caused by memory leaks and reveal obscure contention problems or conflicts.
    – To tune the system
    Repeat runs of tests can be performed to verify that tuning activities are having the desired effect – improving performance.
    – To verify resilience & reliability
    Executing tests at production loads for extended periods is the only way to assess the system's resilience and reliability and to ensure that the required service levels are likely to be met.
    Tests carried out in Performance Testing:

      1. Performance-Tests: Used to test each part of the web application, to find out which parts of the website are slow and how we can make them faster.
      2. Load-Tests: This type of test exercises the website with the load the customer expects on his site – something like a "real-world test" of the website.

      • First we define the maximum request times we want customers to experience. This is done from a business and usability point of view, not a technical one; at this point we need to calculate the impact of a slow website on the company's sales and support costs.
      • Then we calculate the anticipated load and load pattern for the website (refer to Annexure I for details on load calculation), which we then simulate using the tool.
      • At the end we compare the test results with the request times we wanted to achieve.

      3. Stress-Tests:

      • These simulate brute-force attacks with excessive load on the web server. In the real world, situations like this can be created by a massive spike of users far above normal usage, e.g. caused by a large referrer (imagine the website being mentioned on national TV…). The goals of stress tests are to learn under what load the server starts generating errors, whether it will come back online at all after such a massive spike or crash instead, and when it will come back online.
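    The final comparison step of a load test can be sketched in a few lines of Python. The response-time samples, target, and percentile cut below are all assumed values, invented for the sketch:

```python
# A minimal sketch (not a real load tool) of the load-test verdict:
# comparing measured response times against the target set by the business.
TARGET_MS = 800      # assumed maximum acceptable response time
PERCENTILE = 0.95    # assumed service level: 95% of requests within target

# Response times (ms) as they might be collected from a simulated load run.
samples = [120, 340, 250, 780, 910, 400, 150, 620, 330, 700]

# Simple percentile cut: take the value below which ~95% of samples fall.
ranked = sorted(samples)
index = max(0, int(len(ranked) * PERCENTILE) - 1)
p95 = ranked[index]

passed = p95 <= TARGET_MS
print(f"95th percentile: {p95} ms -> {'PASS' if passed else 'FAIL'}")
```

    Comparing a high percentile rather than the average matters: an average can look healthy while a meaningful fraction of customers still waits far too long.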

    When to start Performance Testing?

    • It is even a good idea to start performance testing before a single line of code is written! Testing the base technology early (network, load balancer, application, database and web servers) at the expected load levels can save a lot of money if you discover at this stage that your hardware is too slow. The first stress tests can also be a good idea at this point.
    • The cost of correcting a performance problem rises steeply from the start of development until the website goes into production, and can be unbelievably high for a website that is already online.
    • As soon as several web pages are working, the first load tests should be conducted, and from then on they should be part of the regular testing routine – each day, each week, or for each build of the software.

    Generic Performance Test Process
    This is a general process for performance testing, which can be customized to the project's needs. A few more process steps can be added to the existing process, but deleting any of the steps may result in an incomplete process. If the client is using a particular tool, one can simply follow the process demonstrated by that tool.

      [Figure: Performance Test Process]

    We will go through each box in upcoming posts.