CSTE Sample Papers | CSTE Sample Questions

On readers’ request, I am posting CSTE certification practice questions.

CSTE Certification Essay based questions:
1. What fields would you include in creating a new defect tracking program (used by QA, developers, etc)? (25 points)
a. Draw a pictorial diagram of a report you would create for developers to determine project status.
b. Draw a pictorial diagram of a report you would create for users and management to show project status.
2. What 3 tools would you purchase for your company for use in testing and justify why you would want them? (this question is in both essay parts, only rephrased. I think 10 points each time)
3. Describe the difference between validation and verification. (5 points)
4. Put the following testing types in order and give a brief description of each. System testing, acceptance testing, unit testing, integration testing, benefits realization testing. (10 points)
5. Describe automated capture/playback tools and list the benefits of using them. (10 points)
6. The QAI is starting a project to put the CSTE certification online. They will use an automated process for recording candidate information, scheduling candidates for exams, keeping track of results and sending out certificates. Write a brief test plan for this new project. (30 points)
7. List what you think are the two primary goals of testing. (5 or 10 points)
8. If your company is going to conduct a review meeting, what position would you select on the review committee, and why?
9. What are the three factors that will affect the decision to stop testing?
10. Write any three attributes that will impact the testing process.
11. This is a problem-solving question: write a test transaction for the following rule: a company deducts 6.2% of salary on the first $62,000 of earnings.
12. What activity is done in acceptance testing, but not in system testing, to ensure that customer requirements are met?
13. Prepare a checklist for developers on unit testing, to be completed before the application comes to the testing department.
CSTE Multiple Choice Questions:
1. What is the percent of the total cost of quality that comes from rework?
2. What is the percent of total gross sales that comes from product failure?
3. What is the cost of quality?
4. What is management by fact?
5. What are the three types of interfaces?
6. What three rules should be followed for all reviews?
7. What is boundary value testing?
8. What is decision/branch coverage strategy?
9. Which of the following is not one of the 6 Structural Test Approaches?
10. Which of the following is not one of the 8 Functional Test Approaches?
11. Which of the following is not a perspective of quality?

a. transcendent
b. product-based
c. translucent
d. user-based
e. value-based
f. manufacturing based

12. True or False. Effectiveness is doing things right and efficiency is doing the right things.
13. Which of the following is not one of Deming’s 14 points for management?

a. Adopt a new philosophy
b. Eliminate slogans, exhortations, and targets for the work force
c. Mobility of management
d. Create constancy of purpose
14. True or False. The largest cost of quality is from production failure.

15. Defects are least costly to correct at what stage of the development cycle?

a. Requirements
b. Analysis & Design
c. Construction
d. Implementation

16. A review is what category of cost of quality?

a. Preventive
b. Appraisal
c. Failure

17. True or False. A defect is related to the term fault.
18. What type of change do you need before you can obtain a behaviour change?

a. Lifestyle
b. Vocabulary
c. Internal
d. Management

19. Software testing accounts for what percent of software development costs?

a. 10-20
b. 40-50
c. 70-80
d. 5-10

20. The purpose of software testing is to:

a. Demonstrate that the application works properly
b. Detect the existence of defects
c. Validate the logical design

21. True or False. One of the key concepts of a task force is that the leader be an expert in leading groups as opposed to an expert in a topical area.
22. Match the following terms with their definitions:

a. Black box testing
b. White box testing
c. Conversion testing
d. Thread testing
e. Integration testing

Answers to Advanced Software Testing Interview Questions

Ans 1: Advantages of path coverage metric of software testing:
Path coverage requires extremely thorough testing.
Disadvantages of path coverage metric of software testing:
1) Since loops introduce an unbounded number of paths, this metric considers only a limited number of looping possibilities.
2) The number of paths is exponential in the number of branches. For example, a function containing 10 if-statements has 1024 paths to test. Adding just one more if-statement doubles the count to 2048.
3) Many paths are impossible to exercise due to relationships of data.
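The path-explosion arithmetic above can be sketched directly; the helper below is purely illustrative.

```python
# Illustrative sketch: each independent if-statement doubles the number of
# distinct execution paths, so n sequential ifs yield 2**n paths.
def count_paths(num_ifs: int) -> int:
    """Paths through num_ifs sequential, independent if-statements."""
    return 2 ** num_ifs

print(count_paths(10))  # 1024
print(count_paths(11))  # 2048 -- one more if doubles the count
```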
Ans 2: Decision coverage has the main advantage of simplicity & is free from many problems of statement coverage.
Disadvantage of decision coverage is that this metric ignores branches within Boolean expressions which occur due to short-circuit operators.
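A minimal sketch of the short-circuit problem, using hypothetical functions: both outcomes of the whole decision are covered, yet one sub-expression is never evaluated on the short-circuited run.

```python
def b() -> bool:
    # Hypothetical sub-expression hidden behind a short-circuit operator.
    return False

def decide(x: bool) -> str:
    # In `x or b()`, b() is evaluated only when x is False.
    if x or b():
        return "taken"
    return "not taken"

# Two tests give 100% decision coverage (True and False outcomes)...
print(decide(True))   # b() is short-circuited: its branch goes unmeasured
print(decide(False))  # only here does b() actually run
```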

Ans 3: Drawbacks of statement coverage metric of software testing:
1) It is insensitive to some of the control structures.
2) It does not report whether loops reach their termination condition – only whether the loop body was executed. With C, C++, and Java, this limitation
affects loops that contain break statements.
3) It is completely insensitive to the logical operators (|| and &&).
4) It cannot distinguish consecutive switch labels.
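A small sketch of drawback 1: the function below has no else branch, so a single test reaches every statement while the untested implicit-else path still hides a (hypothetical) defect.

```python
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price = price * 0.5  # members pay half (made-up rule)
    # Hypothetical bug: non-members were supposed to get a surcharge,
    # but the else branch was never written -- statement coverage is
    # blind to it, since no statement is left unexecuted.
    return price

print(apply_discount(100.0, True))   # 50.0 -- this single test executes
                                     # every statement in the function
print(apply_discount(100.0, False))  # 100.0 -- the untested implicit else
```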

Ans 4: Advantages of statement coverage metric of software testing
1) The main advantage of the statement coverage metric is that it can be applied directly to object code and does not require processing the source code. Performance profilers usually use this metric.
2) Bugs are evenly distributed through code; therefore the percentage of executable statements covered reflects the percentage of faults discovered.

Ans 5: Automation Testing versus Manual Testing Guidelines:
I met with my team’s automation experts a few weeks back to get their input on when to automate and when to test manually. The general rule of thumb has always been to use common sense: if you’re only going to run the test once or twice, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying “use common sense” when you need to come up with a deterministic set of guidelines on how and when to automate?

Pros of Automation
• If you have to run a set of tests repeatedly, automation is a huge win for you
• It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner
• It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner (see What is a Nightly)
• Aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.

Cons of Automation
• It costs more to automate. Writing the test cases and writing or configuring the automation framework you’re using costs more initially than running the test manually.
• You can’t automate visual checks; for example, if you can’t verify the font colour via code or the automation tool, it has to be a manual test.
Pros of Manual
• If the test case only runs once or twice per coding milestone, it most likely should be a manual test; that costs less than automating it.
• It allows the tester to perform more ad-hoc (random testing). In my experiences, more bugs are found via ad-hoc than via automation. And, the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.

Cons of Manual
• Running tests manually can be very time consuming
• Each time there is a new build, the tester must rerun all required tests – which after a while would become very mundane and tiresome.
Other deciding factors:
• What you automate depends on the tools you use. If the tools have any limitations, those tests are manual.
• Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?

Ans: A test scenario is a user workflow in the application.
Example: checking mail in Gmail is a scenario in which the user logs in, checks the mail in the inbox, and then logs off. This scenario can have two different test cases: one for login and another for the inbox.
So a test scenario can consist of several test cases.

Ans: The simple answer to the question ‘Can automated testing replace all manual testing’ is ‘No.’
Automated functional tests can be used for regression testing (which is a small part of the overall testing effort). If an organization is running the same manual regression tests repeatedly, then the automated tests can replace some of that effort, but they also add the effort to maintain the tests, which is sometimes more than the work required to just run the tests manually. When I say some of the effort, I mean that test failures from an automated test run still must be analyzed manually. Also, any part of the process of provisioning and setting up the machine to run the tests, kicking off the test run, and babysitting it along the way that isn’t automated will still require manual attention.

Ans: The basic assumptions behind coverage analysis tell us about the strengths and limitations of this testing technique. Some fundamental assumptions are listed
below.
* Bugs relate to control flow and you can expose Bugs by varying the control flow [Beizer1990 p.60]. For example, a programmer wrote “if (c)” rather than
“if (!c)”.
* You can look for failures without knowing what failures might occur and all tests are reliable, in that successful test runs imply program correctness
[Morell1990]. The tester understands what a correct version of the program would do and can identify differences from the correct behaviour.
* Other assumptions include achievable specifications, no errors of omission, and no unreachable code.
Clearly, these assumptions do not always hold. Coverage analysis exposes some plausible bugs but does not come close to exposing all classes of bugs.
Coverage analysis provides more benefit when applied to an application that makes a lot of decisions rather than data-centric applications, such as a
database application.
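The first assumption above (bugs relate to control flow) can be illustrated with a deliberately inverted condition, mirroring the "if (c)" vs. "if (!c)" example; only tests that exercise both directions of the control flow expose it.

```python
def is_adult_buggy(age: int) -> bool:
    # Intended: age >= 18. Hypothetical bug: the condition is inverted,
    # exactly the "if (c)" vs. "if (!c)" mistake described above.
    if not (age >= 18):
        return True
    return False

# Varying the control flow (both branches) exposes the inversion:
print(is_adult_buggy(21))  # False -- wrong answer for an adult
print(is_adult_buggy(12))  # True  -- wrong answer for a minor
```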

Ans: For test automation, the entry criteria are:
* Availability of a stable application under test (around 80% of test cases passing).
* Availability of the automation test tool, with the required add-ins and patches.
* Availability of a stable and controlled test environment.
* Automation test strategy sign-off, covering:
- scope (types of tests)
- functionalities/features to be automated
- assumptions
* SIT or UAT sign-off.
* Signed-off manual test cases.
* Availability of a stable test bed.

Ans: As a lead automation engineer deciding whether to automate tests, you would raise the following questions with yourself and your manager:
1) Automating this test and running it once will cost more than simply running it manually once. How much more?
2) An automated test has a finite lifetime, during which it must recoup that additional cost. Is this test likely to die sooner or later? What events are likely to end it?
3) During its lifetime, how likely is this test to find additional bugs (beyond whatever bugs it found the first time it ran)? How does this uncertain benefit balance against the cost of automation?
4) What is the return on investment?

Ans: Keyword driven testing:
This requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that “drives” the application under test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application under test is documented in a table, as step-by-step instructions for each test. In this method, the entire process is data-driven, including the functionality.
The merits of the Keyword Driven Testing are as follows:
– The Detail Test Plan can be written in Spreadsheet format containing all input and verification data.
– If “utility” scripts can be created by someone proficient in the automated tool’s Scripting language prior to the Detail Test Plan being written, then the tester can use the Automated Test Tool immediately via the “spreadsheet-input” method, without needing to learn the Scripting language.
– The tester need only learn the “Key Words” required, and the specific format to use within the Test Plan.

This allows the tester to be productive with the test tool very quickly, and allows more extensive training in the test tool to be scheduled at a more convenient time.

Demerits of keyword driven testing:
The demerits of the Keyword Driven Testing are as follows:
– Development of “customized” (Application-Specific) Functions and Utilities requires proficiency in the tool’s Scripting language. (Note that this is also true for any method)
– If application requires more than a few “customized” Utilities, this will require the tester to learn a number of “Key Words” and special formats. This can be time-consuming, and may have an initial impact on Test Plan Development. Once the testers get used to this, however, the time required to produce a test case is greatly improved.
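A minimal keyword-driven runner sketch under the approach above (all keywords, utility functions, and table rows are made up). The tester writes only the table; a tool expert supplies the utility scripts once.

```python
# Utility scripts (written once by someone proficient in the tool's language).
def enter_text(field, value):
    return f"typed {value!r} into {field}"

def press_button(name):
    return f"pressed {name}"

def verify_title(expected):
    return f"verified window title == {expected!r}"

KEYWORDS = {
    "EnterText": enter_text,
    "PressButton": press_button,
    "VerifyTitle": verify_title,
}

# Rows as saved from the tester's spreadsheet: a keyword, then its parameters.
test_table = [
    ("EnterText", "username", "alice"),
    ("EnterText", "password", "secret"),
    ("PressButton", "Login"),
    ("VerifyTitle", "Inbox"),
]

def run(table):
    # The "controller": dispatch each keyword row to its utility script.
    return [KEYWORDS[keyword](*params) for keyword, *params in table]

for line in run(test_table):
    print(line)
```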

Ans: The architecture of the “Test Plan Driven” method appears similar to that of the “Functional Decomposition” method, but in fact, they are substantially
different:
* Driver Script
  - Performs initialization, if required;
  - Calls the application-specific “Controller” script, passing to it the file names of the test cases (which have been saved from the spreadsheets as tab-delimited files).
* Controller Script
  - Reads and processes the file name received from the driver;
  - Matches on the “key words” contained in the input file;
  - Builds a parameter list from the records that follow;
  - Calls the “utility” scripts associated with the “key words”, passing the created parameter list.
* Utility Scripts
  - Process the input parameter list received from the “Controller” script;
  - Perform specific tasks (e.g. press a key or button, enter data, verify data), calling “user-defined functions” if required;
  - Report any errors to a test report for the test case;
  - Return to the “Controller” script.
* User-Defined Functions
  - General and application-specific functions may be called by any of the above script types in order to perform specific tasks.

Advantages:
This method has all of the advantages of the “Functional Decomposition” method, as well as the following:
* The Detail Test Plan can be written in Spreadsheet format containing all input and verification data. Therefore the tester only needs to write this once, rather than, for example, writing it in Word, and then creating input and verification files as is required by the “Functional Decomposition” method.

* Test Plan does not necessarily have to be written using MS Excel. Any format can be used from which either “tab-delimited” or “comma-delimited” files can be saved (e.g. Access Database, etc.).

* If “utility” scripts can be created by someone proficient in the Automated tool’s Scripting language prior to the Detail Test Plan being written, then the tester can use the Automated Test Tool immediately via the “spreadsheet-input” method, without needing to learn the Scripting language. The tester need
only learn the “Key Words” required, and the specific format to use within the Test Plan. This allows the tester to be productive with the test tool very quickly, and allows more extensive training in the test tool to be scheduled at a more convenient time.
* If Detailed Test Cases already exists in some other format, it is not difficult to translate these into the “spreadsheet” format.
* After a number of “generic” Utility scripts have already been created for testing an application, we can usually re-use most of these if we need to
test another application. This would allow the organization to get their automated testing “up and running” (for most applications) within a few days,
rather than weeks.
Disadvantages
* Development of “customized” (Application-Specific) Functions and Utilities requires proficiency in the tool’s Scripting language. Note that this is
also true of the “Functional Decomposition” method, and, frankly of any method used including “Record/Playback”.
* If application requires more than a few “customized” Utilities, this will require the tester to learn a number of “Key Words” and special formats.
This can be time-consuming, and may have an initial impact on Test Plan Development. Once the testers get used to this, however, the time required to
produce a test case is greatly improved.
Ans: In comparison testing, we compare the old application with the new application to see whether the new application works better than the old one.
Comparison testing means comparing your software with a better one, or with your competitor’s.
In comparison testing we basically compare the performance of the software.
For example, if you have to do comparison testing of a PDF converter (a desktop application), you would compare your software with your competitor’s on the basis of:
1. Speed of conversion of a PDF file into Word.
2. Quality of the converted file.
Ans: Parallel Testing
Parallel (audit) testing is a type of testing in which the tester reconciles the output of the new system with the output of the current system, in order to verify that the new system operates correctly.
OR
Comparing our product/application build with other products existing in the market; in this sense, parallel testing is also known as comparative or competitive testing.
Testing the newly developed system and comparing the results with the already existing system to check for any discrepancy between them.
Ans: Automated Testing:
Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
Test automation can be expensive, and it is usually employed in combination with manual exploratory testing. It can be made cost-effective in the longer term
though, especially in regression testing. One way to generate test cases automatically is model-based testing where a model of the system is used for test case generation, but research continues into a variety of methodologies for doing so.
What to automate, when to automate, and even whether automation is really needed are crucial decisions that the testing (or development) team has to make.
Selecting the correct features of the product for automation largely determines the success of the automation. Unstable features, or features that are undergoing changes, should be avoided.
Ans: Data Flow Diagram (DFD)
Data Flow Diagram is a graphical representation of the “flow” of data through an information system. A data flow diagram can also be used for the visualization of data processing. It is common practice for a designer to draw a context-level DFD first which shows the interaction between the system and outside entities.
The process model is typically used in structured analysis and design methods. Also called a data flow diagram (DFD), it shows the flow of information through a system. Each process transforms inputs into outputs.
The model generally starts with a context diagram showing the system as a single process connected to external entities outside of the system boundary. This process explodes to a lower level DFD that divides the system into smaller parts and balances the flow of information between parent and child diagrams. Many diagram levels may be needed to express a complex system. Primitive processes, those that don’t explode to a child diagram, are usually described in a connected textual specification.

Ans: Traceability Matrix
Traceability means that we would like to be able to trace back and forth how and where any work product fulfils the directions of the preceding product.
The matrix deals with the “where”; the “how” we have to work out ourselves, once we know the where.
A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and the product tested to meet the requirement.

More things can be included in a traceability matrix than are shown here. In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many.
Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.

Traceability ensures completeness, that all lower level requirements come from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used to manage change and provides the basis for test planning.
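A toy traceability check following the description above (requirement and test IDs are made up): associate each requirement with the tests that satisfy it, then flag requirements left uncovered.

```python
# Requirement -> tests that satisfy it; the relationship may be one-to-many.
matrix = {
    "REQ-1": ["TC-101"],
    "REQ-2": ["TC-102", "TC-103"],  # one driver, many satisfiers
    "REQ-3": [],                    # a completeness gap
}

# Completeness: every requirement must be allocated at least one test.
uncovered = [req for req, tests in matrix.items() if not tests]
print(uncovered)  # ['REQ-3']
```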

Ans: Defect leakage
Defect leakage refers to a defect found or reproduced by the client or user which the tester was unable to find.
Defect leakage is the number of bugs that are found in the field that were not found internally. There are a few ways to express this:
* total number of leaked defects (a simple count)
* defects per customer: number of leaked defects divided by number of customers running that release
* % found in the field: number of leaked defects divided by number of total defects found in that release

In theory, this can be measured at any stage – number of defects leaked from dev into QA, number leaked from QA into beta certification, etc. I’ve mostly used it for customers in the field, though.
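The three expressions above reduce to simple arithmetic; the numbers here are made up for illustration.

```python
leaked = 12           # defects found in the field
internal = 188        # defects found before release
customers = 40        # customers running that release

total_leaked = leaked                                  # simple count
defects_per_customer = leaked / customers
percent_in_field = 100 * leaked / (leaked + internal)  # % of all defects

print(total_leaked)                # 12
print(defects_per_customer)        # 0.3
print(round(percent_in_field, 1))  # 6.0
```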

Ans: Configuration Management
Configuration Management is a discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.

Configuration management (CM) is the detailed recording and updating of information that describes an enterprise’s computer systems and networks, including all hardware and software components. Such information typically includes the versions and updates that have been applied to installed software packages and the locations and network addresses of hardware devices. Special configuration management software is available. When a system needs a hardware or software upgrade, a computer technician can access the configuration management program and database to see what is currently installed. The technician can then make a more informed decision about the upgrade needed.

An advantage of a configuration management application is that the entire collection of systems can be reviewed to make sure any changes made to one system do not adversely affect any of the other systems.

Ans: Statement coverage in Software Testing – Has each line of the source code been executed?
Statement coverage is one of the ways of measuring code coverage. It describes the degree to which the software code of a program has been tested.

All the statements in the code must be executed and tested; statement coverage means choosing test cases so that every statement runs at least once.
Branch coverage additionally requires both the true and the false outcome of each if-statement to be exercised, so branch coverage generally needs more test cases than statement coverage.
For simple structured code, the number of independent paths to cover is one more than the number of branch decisions (the cyclomatic complexity), so path coverage needs more test cases still.
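A concrete sketch of the statement/branch distinction: one test reaches every statement below, but branch coverage also demands a test where the condition is false.

```python
def classify(n: int) -> str:
    result = "non-negative"
    if n < 0:
        result = "negative"
    return result

print(classify(-1))  # covers every statement (100% statement coverage)
print(classify(1))   # also needed for 100% branch coverage (false side)
```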

Ans: Multiple condition coverage metric of software testing
Multiple condition coverage reports whether every possible combination of Boolean sub-expressions occurs. 100% multiple condition coverage implies 100% condition determination coverage.
A drawback of this metric is that it becomes tedious to determine the minimum number of test cases required, especially for very complex Boolean expressions.
Another drawback is that the number of test cases required can vary widely among conditions of similar complexity.
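Enumerating the combinations shows why the metric grows tedious: n independent sub-conditions require 2**n rows (the decision here is an arbitrary made-up example).

```python
from itertools import product

def decision(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c  # three Boolean sub-expressions

combos = list(product([False, True], repeat=3))
print(len(combos))  # 8 combinations for 3 sub-conditions (2**3)
for a, b, c in combos:
    print(a, b, c, "->", decision(a, b, c))
```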

Ans: Difference between code coverage analysis & test coverage analysis
Both these terms are similar. Code coverage analysis is sometimes called test coverage analysis. The academic world generally uses the term “test coverage” whereas the practitioners use the term “code coverage”.
Ans: Structural testing & functional testing
Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic.
Functional testing examines what the program accomplishes, without regard to how it works internally.

Ans: Probe Testing
It is almost the same as exploratory testing. It is a creative, intuitive process: everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses. Session-based test management is one method to organize and direct exploratory testing; it allows us to provide meaningful reports to management while preserving the creativity that makes exploratory testing work.

Ans: Purpose of Automation Testing Tools
The real use and purpose of automated test tools is to automate regression testing.
This means that we must have or must develop a database of detailed test cases that are repeatable, and this suite of tests is run every time there is a change to the application to ensure that the change does not produce unintended consequences.

Ans: Difference between Retesting and Regression Testing

Regression testing is also a form of retesting, but the objective is different.
Regression testing is done on every build change: we retest already-tested functionality to catch any new bug introduced by the change.
Retesting is testing the same functionality again, for example after a bug fix or after a change in implementation technology.

Ans: Best sequence of coverage goals as implementation strategy
1) Invoke at least one function in 90% of the source files (or classes).
2) Invoke 90% of the functions.
3) Attain 90% condition/decision coverage in each function.
4) Attain 100% condition/decision coverage

Advanced Software Testing Interview Questions – Part 1 (Advanced Level)

Below are some advanced-level (test lead level) software testing questions. Answer them now and judge yourself.
– Advantages & drawbacks of path coverage metric of software testing?
– Advantages & drawbacks of decision coverage metric of software testing?
– Drawbacks of statement coverage metric of software testing?
– Main advantages of statement coverage metric of software testing?
– Compare Automation Testing with Manual Testing.
– Define Test Scenario?
– Can automated testing tools replace manual testers?
– Basic assumptions behind coverage analysis?
– When to automate tests?
– If you are a Lead Automation engineer then what questions you would ask to yourself and your manager while deciding to automate the tests?

– What is :
“Key Word Driven” Method of Testing?
“Test Plan Driven” Method of Testing?
Comparison Testing?
Parallel Testing?
Automated Testing?
a Data Flow Diagram (DFD)?
a Traceability Matrix?
Defect Leakage?
Configuration Management?
statement coverage in software testing?
the multiple condition coverage metric of software testing?
the difference between code coverage analysis & test coverage analysis?
the difference between structural testing & functional testing?
Probe Testing?
the purpose of automated test tools?
the difference between retesting and regression testing?
the best sequence of coverage goals as an implementation strategy?

Click here for Answers

Tips for Software Testing Resume

The purpose of this article is to show you how to write a software testing resume that gets the most out of your qualifications. A good software tester resume helps you articulate your skills and opens the door to a job interview.
Let’s check the sample software testing resume below.
Always use MS Word to write your resume; a good font would be Verdana 10 or Arial 11.
Your Name
1234 Commonwealth Ave
Boston, USA-02134
1-234-567-0171 (Home)
1-234-567-0171 (Cell)
Youremail@gmail.com

Summary:
State this clearly in Bulleted points. This will make your achievements stand out:
Example:
• Has been working in the IT industry for more than eight and a half years, of which over four years have been in the SQA and testing arena.
• Has the ability and experience to understand the architecture and life cycle of a project in depth, and would be an ideal candidate for process-oriented black box and white box testing and team management.
• Has in-depth knowledge of the processes and procedures needed in a professional performance management environment; well versed in testing methodologies such as data-driven tests, regression tests and code coverage, and in Mercury Interactive’s proprietary Test Script Language (TSL).

Skills/Tools:
The list below is a fairly comprehensive set of skills. But never list anything false, or too many things, in the hope that people won’t be able to catch it, particularly if you are writing a fresher software testing resume. Once you are hired, a falsehood on your resume can be grounds for termination. If your resume is examined as part of a promotion review, you could lose your job if someone finds a lie. Or, if your employer wants an excuse to fire you, they could investigate details on your resume in the hope of finding one.

  • Testing Tools: Win Runner 6.0/5.0, Load Runner 6.0/5.0, Silk Test 5.03, Silk Performer 3.5, Silk Test Radar 2.1, Test Track 5.0
  • Languages: Java, C++, C and Power Builder 4.0
  • Internet: ASP, Java Script, VB Script, HTML, XML and DHTML
  • GUI: Visual Basic 6.0/5.0/4.0/3.0, Oracle Forms 6i/4.5, Reports 6i/2.5, Crystal Reports 8.0/6.0
  • Statistical Package: SAS
  • Web Servers: IIS 5.0/4.0,Vignette Story Server, Web logic Server, SQL Server 7.0/6.0
  • Middleware: COM/DCOM, MTS
  • Trackers: PVCS Tracker, SQA Manager and Test Director
  • IDEs: Visual InterDev, HomeSite and Cold Fusion
  • RDBMS: SQL Server 7.0/6.5, Oracle 8i/7.x/6.0, MS-Access
  • Operating Systems: Macintosh, Unix, Windows XP/2000/NT/98/95

Experience:
Company Name 1 and web site: It is good practice to put the web site address so that people can find the company quickly. Give the name of the latest company first.
May 2001 – Till date


Project #1: Customization, development and maintenance of web-enabled, fee-based portfolio management solutions.
Client: Give the name of the client.
Duration: From September 2001.
Role: QA Lead
Technology: Development: ASP, JavaScript, HTML, Java, .NET, COM, SQL Server, etc.; Testing: WinRunner, LoadRunner, Astra QuickTest, LoadTest, PVCS Tracker, Notify, Visual Source Safe, CSE HTML Validator, etc.
Description
  • Leading a team of 11 testers for various clients.
  • Planning and implementation of tests for various projects at various stages. 
  • Generation of manual and automated test plans and test cases.
  • Preparation of Automated Regression Test Suites for multiple projects.
  • Execution of automated regression tests on a day-to-day basis.
  • Assisting team members in generation of WinRunner, QuickTest scripts.
  • Administration of on-site and local installations of PVCS Tracker for defect tracking.
  • Performance testing using LoadRunner and Astra LoadTest.
  • Reporting at multiple levels for multiple projects.
  • Training members of QA Team in WinRunner, TSL, QuickTest and LoadTest.
  • Assisting QA Consultants for the CMMI compliance activities including Gap Analysis, Internal Audit, etc.

Similarly, list Project # 2 and so on.

You can also use this format:
Company:  Company name.
Web site: www.xyz.com
Role:  Team Leader
Duration:  January 1999 to March 2001.

Responsibilities:
• Leading teams of developers/designers/testers, system analysis, and quality assurance.
• Designing, Planning, Executing White Box and Black Box tests.
• Planning and scheduling multiple projects as well as overall administration and management.
• Development of various e-commerce web sites.
• Administration and Maintenance of Windows 2000 Advanced Server based production network and MS SQL Server based database.
• Maintenance of web sites including content management, online database administration using MS SQL Server Enterprise Manager and performance analysis.
• Remote administration of collocated web server, Windows 2000/IIS5, through PcAnywhere.
• Remote administration of LINUX/Apache server through SSH.

 Education:
• Master in Computer Applications (MCA)
• PG Diploma in Computer Application (PGDCA)
Personal Information:
Age:
Date of Birth:
Sex: 
Marital status: 
Nationality: 
Place of Birth: 
Religion: 
Permanent Address:
Passport Details:
Passport Number: 
Date of Issue: 
Date of Expiration: 
Place of Issue:

 Conclusion: Do not make your resume too long; try to keep it to around three pages at most. A fresher's software testing resume should run only 1-2 pages. If it goes a little over one page, a bit of editing will usually fit it onto one.

Bug triage Meeting Process

.. Continuing the Test Management Series

Bug Triage Meeting
A meeting held by the CFT**, QA group, test manager, quality manager, project manager, product manager and testing leads for a project. The objective of the meeting is to prioritize and track the defects to be addressed, ensuring timely and accurate resolution. During iterative Quality Assurance testing, the QA department logs defects. In bug triage, the bugs are prioritized to determine when fixes will be released, the difficulty of each fix and the difficulty of retesting it.

[**CFT – Cross Functional Team – An inter-departmental team comprising a Cross-Functional Team Leader and team members representing a network of experienced and knowledgeable staff members. The Cross-Functional Team concept allows staffing levels for each development discipline to increase and decrease at the proper times during the development cycle.]

A bug triage meeting should be held regularly during the construction phase (or testing cycle) of a project. The Quality Assurance (QA) lead calls these meetings. The frequency and number of occurrences will vary from project to project, but are typically based on the number of defects being reported, the overall project schedule, and the current status of the project (i.e. Red, Yellow, or Green status).

Prior to the meeting, the QA lead will send out a bug report with the new defects reported in the current iteration. At the meeting, the CFT will reassess the severity and priority of each defect.
At the bug triage meeting, the CFT should also discuss the status of defects that were reported at the previous bug triage meeting.

At the bug triage meeting, each defect should be discussed, even those that are rated at a lower priority. The developer should present the level of complexity and the risk associated with fixing each defect. The CFT can then decide which defects should be addressed immediately or those that can wait for future release.

Triaging a bug involves:
1. Making sure the bug has enough information for the developers and makes sense
2. Making sure the bug is filed in the correct place
3. Making sure the bug has sensible “Severity” and “Priority” fields
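The three checks above can be run as an automated pre-screen before the meeting. The sketch below is hypothetical (the field names, severity values and `triage_ready` helper are illustrative, not tied to any particular defect tracker):

```python
# Hypothetical pre-triage check: flags bugs that are not ready for the
# triage meeting (missing info, wrong component, or missing ratings).
REQUIRED_FIELDS = ("headline", "steps_to_reproduce", "component")
VALID_SEVERITIES = ("Critical", "High", "Medium", "Low")
VALID_PRIORITIES = ("A", "B", "C")

def triage_ready(bug: dict, known_components: set) -> list:
    """Return a list of problems; an empty list means the bug is triage-ready."""
    problems = []
    # 1. Enough information for the developers
    for field in REQUIRED_FIELDS:
        if not bug.get(field):
            problems.append(f"missing field: {field}")
    # 2. Filed in the correct place
    if bug.get("component") and bug["component"] not in known_components:
        problems.append(f"unknown component: {bug['component']}")
    # 3. Sensible Severity and Priority values
    if bug.get("severity") not in VALID_SEVERITIES:
        problems.append("severity not set or invalid")
    if bug.get("priority") not in VALID_PRIORITIES:
        problems.append("priority not set or invalid")
    return problems

example = {"headline": "Crash on save", "steps_to_reproduce": "Save a file",
           "component": "Editor", "severity": "High", "priority": "A"}
problems = triage_ready(example, {"Editor"})  # → [] (triage-ready)
```

Bugs that fail the pre-screen can be bounced back to the submitter instead of consuming meeting time.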

DECIDING WHICH DEFECTS TO ADDRESS
To understand which defects to address during a project lifecycle, several factors should be considered:
• If the project is in the early iterations, it is feasible to address as many defects as possible, even the lower priority defects.
• If the project is near completion and its final stages of development and QA (i.e. less than 2 iterations left before completion), the CFT should concentrate on addressing only “A priority” defects or those that have low risk (i.e. the fix is not complex and re-testing is minimal).
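The two rules above can be expressed as a small decision function. This is a hypothetical sketch (real projects weigh more factors, and the `should_fix_now` name and risk scale are invented for illustration):

```python
# Hypothetical sketch of the fix-now/defer decision described above.
def should_fix_now(priority: str, risk: str, iterations_left: int) -> bool:
    """priority: 'A'/'B'/'C'; risk: 'low'/'medium'/'high'."""
    if iterations_left >= 2:
        # Early iterations: address as many defects as feasible,
        # including lower-priority ones.
        return True
    # Final stages: only A-priority defects, or low-risk fixes
    # (not complex, minimal re-testing).
    return priority == "A" or risk == "low"
```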

BUG TRIAGE TEMPLATE
The QA lead will submit a bug report to the CFT prior to the scheduled bug triage meeting. This report will contain (but is not limited to) the following fields and should be sorted by Fix Priority:
• ID
• Headline
• Date Reported
• Submitter
• Severity
• Fix Priority
• Owner
• Status

DEFECT TRACKING
During the bug triage meeting, the QA engineer is required to have the defect tracking system running on a laptop or desktop. The QA engineer will make the changes/updates to each defect as it is being discussed, and the "Comments" field should be updated to capture all changes to the defect entry.
At the conclusion of the meeting, the QA engineer will print a report from Defect Tracking System, capturing the updates. This report will serve as the meeting minutes.

ROLES & RESPONSIBILITIES of individuals in Bug Triage Meeting
1. Project Manager
– Assists in the prioritization of the defects
– Sends out meeting minutes when appropriate
– Tracks issues list
– Discusses the delivery date of next iteration to QA.

2. Product Manager
– Assists in the prioritization of the defects.

3. Test Lead (QA Lead)
– Calls the bug triage meeting
– Submits a defect report to the CFT, prior to the start of the meeting
– Assists in the prioritization/severity of the defects
– Assists in determining Root Cause of defect
– Manages defects in CQ
– Distributes updated defect report, capturing the notes from the bug triage meeting.

4. Development Lead and/or Developer
– Assists in the prioritization of the defects (in practice, development assesses severity, complexity and risk, while business priority is set by the product side)
– Explains the level of complexity and the risk associated with each defect being presented at the bug triage meeting
– Assigns the defects to the appropriate developer
– Updates Resolution and development notes fields in CQ
– Assists in determining Root Cause of defect
– Discusses the delivery date of next iteration to QA.


5. User Transition Manager
– Ensures that appropriate User Representatives are invited to the bug triage meeting
– Assists in the prioritization of the defects

6. User Representative
– Assists in the prioritization of the defects

OUTPUTS
CFT is aligned on the severity and priority of defects discussed during the bug triage meeting.
EXIT CRITERIA of Bug Triage Meeting
QA lead will distribute the defect list from Defect Management System, capturing the updates.

METRICS
At the end of the triage cycle, bug triage metrics are compiled, for example counts of defects triaged, fixed, deferred and rejected per iteration.
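Such metrics can be derived directly from the triaged defect list. A minimal sketch, assuming each defect record carries a triage outcome field (the field name and outcome labels are illustrative):

```python
# Hypothetical sketch of simple bug-triage metrics: count defects
# per triage outcome at the end of a triage cycle.
from collections import Counter

def triage_metrics(defects: list) -> dict:
    """Count defects per triage outcome (e.g. fix-now, deferred, rejected)."""
    return dict(Counter(d["outcome"] for d in defects))

defects = [
    {"id": 1, "outcome": "fix-now"},
    {"id": 2, "outcome": "deferred"},
    {"id": 3, "outcome": "fix-now"},
    {"id": 4, "outcome": "rejected"},
]
metrics = triage_metrics(defects)  # {'fix-now': 2, 'deferred': 1, 'rejected': 1}
```

Tracked over several iterations, these counts show whether the defect backlog is growing or shrinking.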
Hope this is helpful to all testing professionals in understanding the Bug Triage meeting process.