Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (Client, Business Analyst, Technical Leads, System Architects, etc.) to understand the requirements in detail. Requirements can be either Functional (defining what the software must do) or Non-Functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.
Activities
- Identify types of tests to be performed.
- Gather details about testing priorities and focus.
- Prepare Requirement Traceability Matrix (RTM).
- Identify test environment details where testing is supposed to be carried out.
- Automation feasibility analysis (if required).
Deliverables
- RTM
- Automation feasibility report (if applicable)
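For example, the RTM listed above can be kept as a simple mapping from requirement IDs to the test cases that cover them. The sketch below uses made-up IDs purely for illustration; in practice the matrix usually lives in a spreadsheet or a test management tool.

```python
# Minimal sketch of a Requirement Traceability Matrix (RTM).
# Requirement and test case IDs are invented for illustration.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # login: valid and invalid credentials
    "REQ-002": ["TC-003"],             # password reset e-mail
    "REQ-003": [],                     # not yet covered by any test case
}

def uncovered_requirements(matrix):
    """Return requirement IDs that have no test case mapped to them."""
    return [req for req, cases in matrix.items() if not cases]

if __name__ == "__main__":
    print("Requirements without coverage:", uncovered_requirements(rtm))
```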
Test planning and control:
Test planning is the activity of verifying the mission of testing, defining the objectives of testing, and specifying the test activities needed to meet those objectives and that mission.
Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.
Test analysis and design:
Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.
Test analysis and design has the following major tasks:
- Reviewing the test basis (such as requirements, architecture, design, interfaces).
- Evaluating testability of the test basis and test objects.
- Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.
- Designing and prioritizing test cases.
- Identifying necessary test data to support the test conditions and test cases.
- Designing the test environment set-up and identifying any required infrastructure and tools.
Test implementation and execution:
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.
Test implementation and execution has the following major tasks:
- Developing, implementing and prioritizing test cases.
- Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
- Creating test suites from the test procedures for efficient test execution.
- Verifying that the test environment has been set up correctly.
- Executing test procedures either manually or by using test execution tools, according to the planned sequence.
- Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
- Comparing actual results with expected results.
- Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).
- Repeating test activities as a result of action taken for each discrepancy. For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
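The logging and comparison tasks above are usually handled by the test framework itself. As a minimal sketch (assuming pytest as the runner; the function under test and the build identifier are stand-ins), an automated check might record the build and outcome alongside the expected-versus-actual comparison:

```python
# Minimal sketch: compare actual vs. expected results and log the outcome.
# The function under test (add) and the build identifier are illustrative only.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("test-run")

BUILD_UNDER_TEST = "1.4.2"  # hypothetical version of the software under test


def add(a, b):
    """Stand-in for the real unit under test."""
    return a + b


def test_add_two_positive_numbers():
    expected = 5
    actual = add(2, 3)
    log.info("build=%s test=%s expected=%s actual=%s",
             BUILD_UNDER_TEST, "test_add_two_positive_numbers", expected, actual)
    assert actual == expected  # a mismatch would be reported as an incident
```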
Evaluating exit criteria and reporting:
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:
- Checking test logs against the exit criteria specified in test planning.
- Assessing if more tests are needed or if the exit criteria specified should be changed.
- Writing a test summary report for stakeholders.
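As a simple illustration of checking test logs against exit criteria (the result counts and thresholds below are invented), the assessment often boils down to a pass-rate and open-defect check:

```python
# Sketch: evaluate exit criteria from logged test results (illustrative data).
results = {"passed": 182, "failed": 6, "blocked": 2}
open_critical_defects = 1

# Example exit criteria from the test plan (hypothetical thresholds).
REQUIRED_PASS_RATE = 0.95
MAX_OPEN_CRITICAL_DEFECTS = 0

executed = sum(results.values())
pass_rate = results["passed"] / executed

criteria_met = (pass_rate >= REQUIRED_PASS_RATE
                and open_critical_defects <= MAX_OPEN_CRITICAL_DEFECTS)

print(f"Pass rate: {pass_rate:.1%}, open critical defects: {open_critical_defects}")
print("Exit criteria met" if criteria_met else "More testing or a criteria review is needed")
```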
Test closure activities:
Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. For example, when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.
Test closure activities include the following major tasks:
- Checking which planned deliverables have been delivered, the closure of incident reports or raising of change records for any that remain open, and the documentation of the acceptance of the system.
- Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
- Handover of testware to the maintenance organization.
- Analyzing lessons learned for future releases and projects, and the improvement of test maturity.
IEEE 829-2008, also known as the 829 Standard for Software and System Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are a matter of judgment outside the purview of the standard. The documents are:
Test Plan: a management planning document that shows:
- How the testing will be done (including SUT (system under test) configurations).
- Who will do it
- What will be tested
- How long it will take (although this may vary, depending upon resource availability).
- What the test coverage will be, i.e. what quality level is required
Test Design Specification: detailing test conditions and the expected results as well as test pass criteria.
Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification
Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed
Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next
Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed
Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed. This document is deliberately named as an incident report, and not a fault report. The reason is that a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or inconsistency in the requirements meaning that more than one interpretation could be made. The report consists of all details of the incident such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact of an incident upon testing.
Test Summary Report: A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met acceptance criteria defined by project stakeholders.
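To make the Test Log and Test Incident Report more concrete, the records below sketch the kind of fields they typically carry. The field names and values are illustrative only, not the exact layouts mandated by IEEE 829.

```python
# Illustrative records for a Test Log entry and a Test Incident Report.
# Field names are examples, not the exact IEEE 829 layouts.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class TestLogEntry:
    test_case_id: str
    executed_by: str
    executed_at: datetime
    passed: bool


@dataclass
class TestIncidentReport:
    incident_id: str
    test_case_id: str
    expected_result: str
    actual_result: str
    impact_on_testing: str
    supporting_evidence: list = field(default_factory=list)  # logs, screenshots, etc.


log_entry = TestLogEntry("TC-003", "tester_a", datetime(2015, 6, 1, 10, 30), passed=False)
incident = TestIncidentReport(
    incident_id="INC-042",
    test_case_id="TC-003",
    expected_result="Password reset e-mail is sent",
    actual_result="No e-mail sent; server returned HTTP 500",
    impact_on_testing="Blocks the remaining password-reset scenarios",
)
```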
------------------------------------------------------------------------------------------------------
Contrary to popular belief, Software Testing is not just a single activity. It consists of a series of activities carried out methodically to help certify your software product. These activities (stages) constitute the Software Testing Life Cycle (STLC).
The different stages in the Software Testing Life Cycle are Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution and Test Closure.
Each of these stages has definite Entry and Exit criteria, Activities and Deliverables associated with it.
In an ideal world you would not enter the next stage until the exit criteria for the previous stage are met, but in practice this is not always possible. So for this tutorial, we will focus on the activities and deliverables for the different stages of the STLC. Let's look into them in detail.
Test Planning:
This phase is also called the Test Strategy phase. Typically, in this stage, a Senior QA Manager determines effort and cost estimates for the project and prepares and finalizes the Test Plan.
Activities
- Preparation of test plan/strategy document for various types of testing
- Test tool selection
- Test effort estimation
- Resource planning and determining roles and responsibilities.
- Training requirement
Deliverables
- Test plan/strategy document
- Effort estimation document
Test Case Development
This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created, reviewed and then reworked as well.
Activities
- Create test cases, automation scripts (if applicable)
- Review and baseline test cases and scripts
- Create test data (If Test Environment is available)
Deliverables
- Test cases/scripts
- Test data
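As a small illustration of a scripted test case with its own test data (the login function and the data values are hypothetical stand-ins, and pytest is assumed as the test runner), keeping the data separate from the test logic makes it easier to review and rework:

```python
# Sketch of an automated test case with externalized test data.
# login() is a stand-in for the real application code; data values are invented.
import pytest


def login(username, password):
    """Dummy system under test: accepts one known credential pair."""
    return username == "alice" and password == "s3cret"


# Test data kept separate from the test logic so it can be reviewed/reworked.
TEST_DATA = [
    ("alice", "s3cret", True),    # valid credentials
    ("alice", "wrong", False),    # wrong password
    ("", "", False),              # empty input
]


@pytest.mark.parametrize("username,password,expected", TEST_DATA)
def test_login(username, password, expected):
    assert login(username, password) == expected
```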
Test Environment Setup
Test environment decides the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment; in that case the test team is required to do a readiness check (smoke testing) of the given environment.
Activities
- Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment.
- Setup test Environment and test data
- Perform smoke test on the build
Deliverables
- Environment ready with test data set up
- Smoke Test Results
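The readiness check mentioned above is often scripted as a handful of quick probes. The sketch below simply verifies that the application responds before functional testing starts; the URL and endpoints are placeholders, and the requests library is assumed to be available.

```python
# Minimal environment smoke check: the URL and endpoints are placeholders.
import requests

BASE_URL = "http://test-env.example.com"  # hypothetical test environment

CHECKS = {
    "application is up": "/health",
    "login page reachable": "/login",
}

def run_smoke_checks():
    failures = []
    for name, path in CHECKS.items():
        try:
            response = requests.get(BASE_URL + path, timeout=5)
            if response.status_code != 200:
                failures.append(f"{name}: HTTP {response.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{name}: {exc}")
    return failures

if __name__ == "__main__":
    problems = run_smoke_checks()
    print("Environment ready" if not problems else f"Smoke test failed: {problems}")
```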
Test Execution
During this phase the test team carries out the testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed.
Activities
- Execute tests as per plan
- Document test results, and log defects for failed cases
- Map defects to test cases in RTM
- Retest the defect fixes
- Track the defects to closure
Deliverables
- Completed RTM with execution status
- Test cases updated with results
- Defect reports
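Building on the earlier RTM sketch, execution status and linked defects can be recorded against each test case so the completed RTM reflects what was run and what failed. The IDs and statuses below are invented for illustration.

```python
# Sketch: RTM extended with execution status and linked defects (IDs are invented).
rtm = {
    "REQ-001": {
        "TC-001": {"status": "Passed", "defects": []},
        "TC-002": {"status": "Failed", "defects": ["DEF-101"]},
    },
    "REQ-002": {
        "TC-003": {"status": "Blocked", "defects": ["DEF-102"]},
    },
}

def execution_summary(matrix):
    """Count test cases per execution status across all requirements."""
    summary = {}
    for cases in matrix.values():
        for result in cases.values():
            summary[result["status"]] = summary.get(result["status"], 0) + 1
    return summary

if __name__ == "__main__":
    print(execution_summary(rtm))  # e.g. {'Passed': 1, 'Failed': 1, 'Blocked': 1}
```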
Test Closure
The testing team meets, discusses and analyzes testing artifacts to identify strategies that have to be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for similar projects.
Activities
- Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software, Critical Business Objectives and Quality
- Prepare test metrics based on the above parameters.
- Document the learnings from the project
- Prepare Test closure report
- Qualitative and quantitative reporting of quality of the work product to the customer.
- Test result analysis to find out the defect distribution by type and severity.
Deliverables
- Test Closure report
- Test metrics
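As an illustration of the test metrics and the defect distribution by type and severity mentioned above (all counts here are invented), a closure report might summarize the cycle like this:

```python
# Sketch: simple closure metrics from invented execution and defect data.
from collections import Counter

executed = {"passed": 182, "failed": 6, "blocked": 2}
defects = [
    {"id": "DEF-101", "type": "Functional", "severity": "High"},
    {"id": "DEF-102", "type": "UI", "severity": "Low"},
    {"id": "DEF-103", "type": "Functional", "severity": "Medium"},
]

total = sum(executed.values())
print(f"Pass rate: {executed['passed'] / total:.1%} of {total} executed test cases")
print("Defects by type:    ", Counter(d["type"] for d in defects))
print("Defects by severity:", Counter(d["severity"] for d in defects))
```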