Wednesday, 1 July 2015

Risk management for testing...


Types of Risks in Software Projects
As testing is the last part of the project, it is always under pressure and time constraints. To save time and money, you should be able to prioritize your testing work. How will you prioritize testing work? For this, you should be able to judge which testing work is more important and which is less important. How will you decide which work is more or less important? This is where risk-based testing comes in.

What is Risk?

“Risks are future uncertain events with a probability of occurrence and a potential for loss”

Risk identification and management are the main concerns in every software project. Effective analysis of software risks helps in effective planning and assignment of work.

Risks are identified, classified and managed before actual execution of the program. These risks are classified into different categories.

Categories of risks:

Schedule Risk: The project schedule slips when project tasks and schedule release risks are not addressed properly. Schedule risks mainly affect the project and, ultimately, the company's economy, and may lead to project failure. Schedules often slip due to the following reasons:
Wrong time estimation

Resources are not tracked properly. All resources like staff, systems, skills of individuals etc.

Failure to identify complex functionalities and time required to develop those functionalities.

Unexpected project scope expansions.

Budget Risk:

Wrong budget estimation.

Cost overruns

Project scope expansion

Operational Risks:

Risks of loss due to improper process implementation, failed systems or some external events.




Causes of Operational risks:




Failure to address priority conflicts

Failure to resolve the responsibilities

Insufficient resources

No proper subject training

No resource planning

No communication in team.

Technical risks:

Technical risks generally lead to failure of functionality and performance.

Causes of technical risks are:




Continuous changing requirements




No advanced technology available or the existing technology is in initial stages.

Product is complex to implement.

Difficult integration of project modules.

Programmatic Risks:

These are external risks beyond the operational limits. They are uncertain risks that are outside the control of the program.




These external events can be:




Running out of funds.

Market development

Changing customer product strategy and priority

Government rule changes.




Risk Identification

Risks are identified within the scope of the project. Risks can be identified using a number of resources e.g. project objectives, risk lists of past projects, prior system knowledge, understanding of system usage, understanding of system architecture/ design, prior customer bug reports/ complaints, project stakeholders and industry practices. For example, if certain areas of the system are unstable and those areas are being developed further in the current project, it should be listed as a risk.




It is good to document the identified risks in detail so that it stays in project memory and can be clearly communicated to project stakeholders. Usually risk identification is an iterative process. It is important to re-visit the risk list whenever the project objectives change or new business scenarios are identified. As the project proceeds, some new risks appear and some old risks disappear.




Risk Prioritization

It is simpler to prioritize a risk if the risk is understood accurately. Two measures, Risk Impact and Risk Probability, are applied to each risk. Risk Impact is estimated in tangible terms (e.g. dollar value) or on a scale (e.g. 10 to 1 or High to Low). Risk Probability is estimated somewhere between 0 (no probability of occurrence) and 1 (certain to occur) or on a scale (10 to 1 or High to Low). For each risk, the product of Risk Impact and Risk Probability gives the Risk Magnitude. Sorting the Risk Magnitude in descending order gives a list in which the risks at the top are the more serious risks and need to be managed closely.

Adding all the Risk Magnitudes gives an overall Risk Index of the project. If the same Risk Prioritization scale is used across projects, it is possible to identify the riskier projects by comparing their Risk Indices.
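As a minimal sketch of this calculation (in Python; the risk names, the 1 to 10 impact scale and the 0 to 1 probability values are illustrative assumptions), the snippet below computes the Risk Magnitude of each risk, sorts the list, and sums the magnitudes into a Risk Index:

    # A minimal sketch of risk prioritization; risk names, the 1-10 impact scale and
    # the 0-1 probability values below are illustrative assumptions.

    risks = [
        {"risk": "Unstable area of the system being developed further", "impact": 9, "probability": 0.6},
        {"risk": "Test environment has only one server",                "impact": 7, "probability": 0.3},
        {"risk": "New testing team without prior system knowledge",     "impact": 5, "probability": 0.8},
    ]

    for r in risks:
        r["magnitude"] = r["impact"] * r["probability"]      # Risk Magnitude = Impact x Probability

    # Sorting by magnitude in descending order puts the most serious risks at the top.
    for r in sorted(risks, key=lambda r: r["magnitude"], reverse=True):
        print(f'{r["magnitude"]:5.2f}  {r["risk"]}')

    # Adding all the magnitudes gives the overall Risk Index of the project.
    print("Risk Index:", sum(r["magnitude"] for r in risks))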







Risk Treatment

Each risk in the risk list is subject to one or more of the following Risk Treatments.

a. Risk Avoidance: For example, if there is a risk related to a new component, it is possible to postpone this component to a later release. Risk Avoidance is uncommon because it impacts the project objectives e.g. delivery of new features.

b. Risk Transfer: For example, if the risk is insufficient security testing of the system, it may be possible to hire a specialized company to perform the security testing. Risk Transfer takes place when this vendor is held accountable for ample security testing of the system. Risk Transfer increases the project cost.

c. Risk Mitigation: This is a common risk treatment. The objective of Risk Mitigation is to reduce the Risk Impact or Risk Probability or both. For example, if the testing team is new and does not have prior system knowledge, a risk mitigation treatment may be to have a knowledgeable team member join the team to train others on-the-fly. Risk Mitigation also increases the project cost.

d. Risk Acceptance: Any risk not treated by any prior treatments has to be accepted. This happens when there is no viable mitigation available due to reasons such as cost. For example, if the test environment has only one server, risk acceptance means not building another server. If the existing server crashes, there will be down-time and it will be a real issue in the project.







A few other points:

1. Risk management brings clarity and focus to the team and other stakeholders. However, the team should avoid spending more time on risk management than the value it provides.

2. The risk list should be a live document, consisting of current risks, their prioritization and treatment plans. The test approach and test plan should be synched with the risk list whenever the latter is updated.

3. Bigger projects commonly involve more stakeholders and have a more formal risk management process.

Defect lifecycle...


NEW: Tester finds a ‘bug’ and posts it with the status NEW. This bug is yet to be studied/approved. The fate of a NEW bug is one of ASSIGNED, DROPPED and DEFERRED.

ASSIGNED / OPEN: Test / Development / Project lead studies the NEW bug and if it is found to be valid it is assigned to a member of the Development Team. The assigned Developer’s responsibility is now to fix the bug and have it COMPLETED. Sometimes, ASSIGNED and OPEN can be different statuses. In that case, a bug can be open yet unassigned.

DEFERRED: If a valid NEW or ASSIGNED bug is decided to be fixed in upcoming releases instead of the current release it is DEFERRED. This bug is ASSIGNED when the time comes.

DROPPED / REJECTED: Test / Development/ Project lead studies the NEW bug and if it is found to be invalid, it is DROPPED / REJECTED. Note that the specific reason for this action needs to be given.

COMPLETED / FIXED / RESOLVED / TEST: Developer ‘fixes’ the bug that is ASSIGNED to him or her. Now, the ‘fixed’ bug needs to be verified by the Test Team and the Development Team ‘assigns’ the bug back to the Test Team. A COMPLETED bug is either CLOSED, if fine, or REASSIGNED, if still not fine.

If a Developer cannot fix a bug, some organizations may offer the following statuses:

Won’t Fix / Can’t Fix: The Developer will not or cannot fix the bug due to some reason.

Can’t Reproduce: The Developer is unable to reproduce the bug.

Need More Information: The Developer needs more information on the bug from the Tester.

REASSIGNED / REOPENED: If the Tester finds that the ‘fixed’ bug is in fact not fixed or only partially fixed, it is reassigned to the Developer who ‘fixed’ it. A REASSIGNED bug needs to be COMPLETED again.

CLOSED / VERIFIED: If the Tester / Test Lead finds that the bug is indeed fixed and is no more of any concern, it is CLOSED / VERIFIED. This is the happy ending.
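The life cycle above can be expressed as a small state machine. The sketch below (Python; the status names and transitions follow the descriptions above, but real defect-tracking tools have their own workflows) validates status changes and enforces the rule that a DROPPED bug needs a reason:

    # A minimal sketch of the bug statuses and transitions described above.
    # Real tools have their own workflows; adapt the table accordingly.

    ALLOWED_TRANSITIONS = {
        "NEW":        {"ASSIGNED", "DROPPED", "DEFERRED"},
        "ASSIGNED":   {"COMPLETED", "DEFERRED"},
        "DEFERRED":   {"ASSIGNED"},
        "COMPLETED":  {"CLOSED", "REASSIGNED"},
        "REASSIGNED": {"COMPLETED"},
        "DROPPED":    set(),   # terminal, with a mandatory reason
        "CLOSED":     set(),   # terminal: the happy ending
    }

    def change_status(current, new, reason=""):
        """Validate a status change and require a reason when dropping a bug."""
        if new not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError(f"Illegal transition {current} -> {new}")
        if new == "DROPPED" and not reason:
            raise ValueError("A specific reason must be given when dropping a bug")
        return new

    status = "NEW"
    status = change_status(status, "ASSIGNED")
    status = change_status(status, "COMPLETED")
    status = change_status(status, "CLOSED")
    print(status)   # CLOSED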

Bug Life Cycle Implementation Guidelines

  • Make sure the entire team understands what each bug status exactly means. Also, make sure the bug life cycle is documented. 
  • Ensure that each individual clearly understands his/her responsibility as regards each bug. 
  • Ensure that enough detail is entered in each status change. For example, do not simply DROP a bug but provide a reason for doing so. 
  • If a bug tracking tool is being used, avoid entertaining any 'bug related requests' without an appropriate change in the status of the bug in the tool. Do not let anybody take shortcuts. Or else, you will never be able to get up-to-date bug metrics for analysis. 

Test preparation process...

Baseline Documents
Construction of an application and its testing are done using certain documents. These documents are written in sequence, each derived from the previous one.

Business Requirement
This document describes the users' needs for the application. It is prepared over a period of time, going through various levels of requirements. It should also portray functionalities that are technically feasible within the stipulated time frames for delivery of the application. As this contains requirements from the user's perspective, the User Acceptance Test is based on this document.

How to read a Business Requirement?
In the case of the Integrated Test Process, this document is used to understand the user requirements and find the gaps between the User Requirement and the Functional Specification. The User Acceptance Test team should break the business requirement document into modules depending on how the user will use the application. While reading the document, the test team should put themselves in the place of end users of the application. This document serves as a base for UAT test preparation.

Functional Specification
This document describes the functional needs, the design of the flow and the user-maintained parameters. These are primarily derived from the Business Requirement document, which specifies the client's business needs.
The proposed application should adhere to the specifications in this document, which is used henceforth to develop further documents for software construction and for validation and verification of the software. In order to achieve synchronization between the software construction and testing processes, the Functional Specification (FS) serves as the base document.

How to read a Functional Specification?
The testing process begins with understanding the functional specification. The FS is normally divided into modules. The tester should understand the entire functionality proposed in the document by reading it thoroughly. It is natural for a tester at this point to get confused about the overall flow and functionality. To overcome this, it is advisable for the tester to read the document multiple times, seeking clarifications as needed until clarity is achieved. Testers are then given a module or multiple modules for validation and verification. These modules then become the tester's responsibility.
The tester should then begin to acquire an in-depth knowledge of their respective modules. In the process, these modules should be split into segments like field-level validations, module rules, business rules, etc. To do this precisely, the tester should interpret each module's importance and its role within the application. A high-level understanding of the data requirements for the respective modules is also expected from the tester at this point. Interaction with the test lead at this juncture is crucial to draw up a testing approach, such as end-to-end test coverage or individual tests. (Explained later in the document)

Tester’s Reading Perspective
The functional specification is sometimes written assuming some level of knowledge on the part of the testers and constructors. We can categorize the explanations as:

Explicit Rules: Functionality expressed as conditions clearly in writing, in the document.

Implicit Rules: Functionality that is implied based on what is expressed as a specification/condition or requirement of a user.

The tester must also bear in mind the test type, i.e. Integrated System Testing (IST) or User Acceptance Testing (UAT), and orient the testing approach accordingly.

Design Specification 
This document is prepared based on the functional specification. It contains the system architecture, table structures and program specifications. This is ideally prepared and used by the construction team. The Test Team should also have a detailed understanding of the design specification in order to understand the system architecture.

System Specification 
This document is a combination of the functional specification and the design specification. It is used in the case of small applications or an enhancement to an existing application. In such situations it may not be advisable to maintain two separate documents.

Prototype
This is a look-and-feel representation of the proposed application. It basically shows the placement of the fields, the modules and the generic flow of the application. The main objective of the prototype is to demonstrate the understanding of the application to the users and obtain their buy-in before actual design and construction begin.
The development team also uses the prototype as a guide to build the application. It is usually done using HTML or MS PowerPoint with user interaction facilities.

Scenarios in Prototype
The flow and positioning of the fields and modules are projected using several possible business scenarios derived from the application functionality.
Testers should not expect all possible scenarios to be covered in the prototype.

Flow of Prototype
The flow and positioning are derived from the initial documentation of the project. A project is normally dynamic during its initial stages, and hence the tester should bear in mind any changes to the specification while using the prototype to develop test conditions.
It is a value addition to the project when the tester can identify mismatches between the specifications and the prototype, as the application can then be rectified in the initial stages itself.

Test estimation for testing process


Introduction:


In my opinion, one of the most difficult and critical activities in IT is the estimation process. I believe this is because, when we say that a project will be accomplished in a given time at a given cost, it has to happen.

The testing estimation process in place was quite simple. The inputs for the process, provided by the development team, were the size of the development team and the number of working days needed to build a solution before starting system tests.

The testing estimation process said that the number of testing engineers would be half the number of development engineers, and the number of testing working days one third of the number of development working days.

1st Rule: Estimation shall always be based on the software requirements

All estimation should be based on what would be tested, i.e., the software requirements.

Normally, the software requirements are established by the development team with little or no participation from the testing team. After the specifications have been established and the project costs and duration have been estimated, the development team asks how long it would take to test the solution, and expects the answer almost right away. Instead, the software requirements should be read and understood by the testing team, too. Without the testing team's participation, no serious estimation can be considered.

2nd Rule: Estimation shall be based on expert judgment

Before estimating, the testing team classifies the requirements into the following categories:

  • Critical: The development team has little knowledge of how to implement it; 
  • High: The development team has good knowledge of how to implement it, but it is not an easy task; 
  • Normal: The development team has good knowledge of how to implement it. 

The experts in each requirement should say how long it would take to test it. The categories help the experts estimate the effort for testing the requirements.

3rd Rule: Estimation shall be based on previous projects
All estimations should be based on previous projects. If a new project has requirements similar to those of a previous one, the estimation is based on that previous project.
4th Rule: Estimation shall be based on metrics
My organization has created an OPD, an Organization Process Database, where project metrics are recorded. We have three years' worth of metrics obtained from dozens of projects.
The number of requirements is the basic information for estimating a testing project. From it, my organization has metrics that guide us in estimating a testing project. The table below shows the metrics used to estimate a testing project, assuming a team size of one testing engineer.

Metric                                                   Value

1. Number of test cases created for each requirement     4.53
2. Number of test cases developed per working day        14.47
3. Number of test cases executed per working day         10.20
4. Number of ARs per test case                            0.77
5. Number of ARs verified per working day                24.64
For instance, if we have a project with 70 functional requirements and a testing team size of 2 engineers, we reach the following estimates:

Estimate                                  Based on    Value

Number of test cases                      metric 1    317.10
Preparation phase                         metric 2    11 working days
Execution phase                           metric 3    16 working days
Number of ARs                             metric 4    244 ARs
Regression phase                          metric 5    6 working days

The testing duration is estimated at 22 (16 + 6) working days, plus 11 working days for preparation.
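A small sketch of the metrics-based calculation above (Python, using the metric values from the table for 70 requirements and 2 test engineers; the ceiling-based rounding is an assumption, so figures may differ slightly from the worked example):

    import math

    # Metric values come from the OPD table above; the rounding policy is an assumption,
    # so figures may differ slightly from the article's worked example (e.g. regression days).
    METRICS = {
        "test_cases_per_requirement": 4.53,
        "test_cases_developed_per_engineer_day": 14.47,
        "test_cases_executed_per_engineer_day": 10.20,
        "ars_per_test_case": 0.77,
        "ars_verified_per_engineer_day": 24.64,
    }

    def estimate(requirements, team_size):
        test_cases = requirements * METRICS["test_cases_per_requirement"]
        ars = test_cases * METRICS["ars_per_test_case"]
        return {
            "test_cases": round(test_cases, 2),                                                                          # metric 1
            "preparation_days": math.ceil(test_cases / (METRICS["test_cases_developed_per_engineer_day"] * team_size)), # metric 2
            "execution_days": math.ceil(test_cases / (METRICS["test_cases_executed_per_engineer_day"] * team_size)),    # metric 3
            "anticipated_ars": round(ars),                                                                               # metric 4
            "regression_days": math.ceil(ars / (METRICS["ars_verified_per_engineer_day"] * team_size)),                 # metric 5
        }

    print(estimate(requirements=70, team_size=2))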
A spreadsheet was created to work out the estimation and calculate the duration and cost of testing. It is based on the following formulas:
Testing working days = (Development working days) / 3.

Testing engineers = (Development engineers) / 2.

Testing costs = Testing working days * Testing engineers * person daily costs.
As the process was only playing with numbers, it was not necessary to register anywhere how the estimation was obtained.

To exemplify how the process worked: if a development team said that it would need 4 engineers and 66 working days to deliver a solution for system testing, then the system test would need 2 engineers (half) and 22 working days (one third). So, the solution would be ready for delivery to the customer after 88 (66 + 22) working days.

Just to be clear, this testing time did not include the time for developing the test cases and preparing the testing environment. Normally, the testing team would need an extra 10 days for that.
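For comparison, the old, purely formula-based process can be sketched like this (Python; the person daily cost of 500 is an assumed figure for illustration only):

    # A sketch of the old, purely formula-based estimation process described above.

    def old_estimate(dev_engineers, dev_working_days, person_daily_cost):
        testing_engineers = dev_engineers / 2        # half of the development team
        testing_working_days = dev_working_days / 3  # one third of the development days
        testing_cost = testing_working_days * testing_engineers * person_daily_cost
        return testing_engineers, testing_working_days, testing_cost

    engineers, days, cost = old_estimate(dev_engineers=4, dev_working_days=66, person_daily_cost=500)
    print(engineers, days, cost)   # 2 engineers, 22 working days, and the resulting cost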

5th Rule: Estimation shall never forget the past

I have not thrown away the past. The testing team continues using the old process and the spreadsheet. After the estimation is done following the new rules, the testing team estimates again using the old process in order to compare the two results.

Normally, the results from the new estimation process are about 20 to 25% cheaper and faster than those from the old one. If the testing team gets a different percentage, it goes back over the process to understand whether something was missed.

6th Rule: Estimation shall be recorded

All decisions should be recorded. This is very important because, if requirements change for any reason, the records help the testing team estimate again. The testing team does not need to revisit all the steps and make the same decisions again. Sometimes it is also an opportunity to adjust the estimation made earlier.

7th Rule: Estimation shall be supported by tools

A new spreadsheet has been created containing metrics that help to reach the estimation quickly. The spreadsheet automatically calculates the costs and duration of each testing phase.
There is also a letter template containing sections such as a cost table, risks, and free notes to be filled out. This letter is sent to the customer. It also shows the different options for testing, which can help the customer decide which kind of test he needs.

8th Rule: Estimation shall always be verified

Finally, all estimations should be verified. I have created another spreadsheet for recording the estimations. Each estimation is compared to the previous ones recorded in the spreadsheet to see whether they follow a similar trend. If the estimation deviates from the recorded ones, a re-estimation should be made.

Tuesday, 30 June 2015

Software Testing life cycle...STLC


Requirement Analysis

During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (Client, Business Analyst, Technical Leads, System Architects etc.) to understand the requirements in detail. Requirements could be either Functional (defining what the software must do) or Non-Functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.

Activities
  • Identify types of tests to be performed. 
  • Gather details about testing priorities and focus. 
  • Prepare the Requirement Traceability Matrix (RTM); a minimal sketch follows this list. 
  • Identify test environment details where testing is supposed to be carried out. 
  • Automation feasibility analysis (if required). 

Deliverable 

  • RTM 
  • Automation feasibility report (if applicable) 
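As referenced in the activity list, here is a minimal sketch of an RTM (Python; the requirement and test case IDs are illustrative). Mapping each requirement to the test cases that cover it makes coverage gaps easy to spot:

    # A minimal Requirement Traceability Matrix (RTM) sketch; IDs are illustrative.
    rtm = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": ["TC-003"],
        "REQ-003": [],                     # not yet covered by any test case
    }

    uncovered = [req for req, cases in rtm.items() if not cases]
    print("Requirements without test coverage:", uncovered)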

Test planning and control :

Test planning is the activity of verifying the mission of testing, defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.


Test analysis and design :

Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.

Test analysis and design has the following major tasks:

  • Reviewing the test basis (such as requirements, architecture, design, interfaces).
  • Evaluating testability of the test basis and test objects.
  • Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.
  • Designing and prioritizing test cases.
  • Identifying necessary test data to support the test conditions and test cases.
  • Designing the test environment set-up and identifying any required infrastructure and tools.
Test Environment Setup

Test environment decides the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of testing process and can be done in parallel with Test Case Development Stage. Test team may not be involved in this activity if the customer/development team provides the test environment in which case the test team is required to do a readiness check (smoke testing) of the given environment.

Activities 

  • Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment.  
  • Setup test Environment and test data  
  • Perform smoke test on the build 

Deliverable 

  • Environment ready with test data set up  
  • Smoke Test Results. 

Test implementation and execution :

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.

Test implementation and execution has the following major tasks:

  • Developing, implementing and prioritizing test cases.
  • Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
  • Creating test suites from the test procedures for efficient test execution.
  • Verifying that the test environment has been set up correctly.
  • Executing test procedures either manually or by using test execution tools, according to the planned sequence.
  • Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
  • Comparing actual results with expected results.
  • Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).
  • Repeating test activities as a result of action taken for each discrepancy. For example, re execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
Deliverable 

  • Completed RTM with execution status  
  • Test cases updated with results  
  • Defect reports 

Evaluating exit criteria and reporting:

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.

Evaluating exit criteria has the following major tasks:

  • Checking test logs against the exit criteria specified in test planning.
  • Assessing if more tests are needed or if the exit criteria specified should be changed.
  • Writing a test summary report for stakeholders.

Test closure activities :

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. They take place, for example, when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.

Test closure activities include the following major tasks:

  • Checking which planned deliverables have been delivered, the closure of incident reports or raising of change records for any that remain open, and the documentation of the acceptance of the system.
  • Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
  • Handover of testware to the maintenance organization.
  • Analyzing lessons learned for future releases and projects, and the improvement of test maturity.
Documents:
IEEE 829-2008, also known as the 829 Standard for Software and System Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are a matter of judgment outside the purview of the standard. The documents are:

Test Plan: a management planning document that shows: 

  • How the testing will be done (including SUT (system under test) configurations). 
  • Who will do it 
  • What will be tested 
  • How long it will take (although this may vary, depending upon resource availability). 
  • What the test coverage will be, i.e. what quality level is required 

Test Design Specification: detailing test conditions and the expected results as well as test pass criteria. 

Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification 

Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed 

Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next 

Test Log: recording which tests cases were run, who ran them, in what order, and whether each test passed or failed 

Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed. This document is deliberately named as an incident report, and not a fault report. The reason is that a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or inconsistency in the requirements meaning that more than one interpretation could be made. The report consists of all details of the incident such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact of an incident upon testing. 
Test Summary Report: A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met acceptance criteria defined by project stakeholders. 

------------------------------------------------------------------------------------------------------

Contrary to popular belief, Software Testing is not just a single activity. It consists of a series of activities carried out methodically to help certify your software product. These activities (stages) constitute the Software Testing Life Cycle (STLC).

The different stages in the Software Test Life Cycle are Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution and Test Cycle Closure.

Each of these stages has definite Entry and Exit criteria, Activities and Deliverables associated with it.

In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met. But practically this is not always possible. So, for this tutorial, we will focus on the activities and deliverables for the different stages in the STLC. Let's look into them in detail.


Test Planning:

This phase is also called the Test Strategy phase. Typically, in this stage, a Senior QA manager will determine effort and cost estimates for the project and will prepare and finalize the Test Plan.

Activities

  • Preparation of test plan/strategy document for various types of testing 
  • Test tool selection 
  • Test effort estimation 
  • Resource planning and determining roles and responsibilities 
  • Training requirements 

Deliverable 

  • Test plan/strategy document 
  • Effort estimation document 



Test Case Development

This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created and is reviewed and then reworked as well.

Activities

  • Create test cases, automation scripts (if applicable) 
  • Review and baseline test cases and scripts 
  • Create test data (if Test Environment is available) 

Deliverable 

  • Test cases/scripts 
  • Test data 

Test Environment Setup

Test environment decides the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of testing process and can be done in parallel with Test Case Development Stage. Test team may not be involved in this activity if the customer/development team provides the test environment in which case the test team is required to do a readiness check (smoke testing) of the given environment.
Activities 

  • Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment. 
  • Setup test Environment and test data 
  • Perform smoke test on the build 

Deliverable 

  • Environment ready with test data set up 
  • Smoke Test Results 

Test Execution
During this phase test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction and retesting will be performed.

Activities

  • Execute tests as per plan 
  • Document test results, and log defects for failed cases 
  • Map defects to test cases in RTM 
  • Retest the defect fixes 
  • Track the defects to closure 

Deliverable 

  • Completed RTM with execution status 
  • Test cases updated with results 
  • Defect reports 
Test Cycle Closure

The testing team will meet, discuss and analyze testing artifacts to identify strategies that have to be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for any similar projects in the future.

Activities

  • Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software, Critical Business Objectives, Quality 
  • Prepare test metrics based on the above parameters. 
  • Document the learning out of the project 
  • Prepare Test closure report 
  • Qualitative and quantitative reporting of quality of the work product to the customer. 
  • Test result analysis to find out the defect distribution by type and severity. 

Deliverable 

  • Test Closure report 
  • Test metrics 

What is a Defect, Bug, Error, Issue, Failure?


Error: A mistake in coding is called an error. An error found by a tester is called a defect. A defect accepted by the development team is called a bug. If the build does not meet the requirements, it is a failure.

Mistake (an error): A human action that produces an incorrect result.

Fault: An incorrect step, process or data definition.

         - manifestation of the error in implementation
         - this is really nebulous, hard to pin down the 'location'
         - when everything is correct but we are not able to get a result

Failure: An incorrect result.


Bug: Deviation from the expected result.

Defect: A problem in the algorithm that leads to failure.

What are the different types of testing methods?

White box testing :

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing an internal perspective of the system, as well as programming skills are required and used to design test cases. While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

White box testing is performed based on the knowledge of how the system is implemented. White box testing includes analyzing data flow, control flow, information flow, coding practices, and exception and error handling within the system, to test the intended and unintended software behavior. White box testing can be performed to validate whether code implementation follows intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities.

White box testing requires access to the source code. Though white box testing can be performed any time in the life cycle after the code is developed, it is a good practice to perform white box testing during the unit testing phase.

White box testing requires knowing what makes software secure or insecure, how to think like an attacker, and how to use different testing tools and techniques. The first step in white box testing is to comprehend and analyze available design documentation, source code, and other relevant development artifacts, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit software, a tester must think like an attacker. Third, to perform testing effectively, testers need to know the different tools and techniques available for white box testing. The three requirements do not work in isolation, but together.

White-box test design techniques include:

  • Control flow testing 
  • Data flow testing 
  • Branch testing : Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g. the True and False options of an IF statement) that have been exercised by a test case suite. Decision testing derives test cases to execute specific decision outcomes, normally to increase decision coverage. 
  • Decision testing is a form of control flow testing as it generates a specific flow of control through the decision points. Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not vice versa
  • Path testing 

Statement Testing and Coverage :

In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.
Statement coverage is determined by the number of executable statements covered by (designed or executed) test cases divided by the number of all executable statements in the code under test.
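A small illustrative example (Python, hypothetical function) of why decision coverage is stronger than statement coverage: one test executes every statement of the function below, yet exercises only the True outcome of its single decision.

    # Illustrative example: one test gives 100% statement coverage of this function
    # but only 50% decision coverage, since the False outcome of the IF is never taken.

    def apply_discount(total):
        discount = 0
        if total > 100:          # decision with two outcomes: True and False
            discount = 10        # executed by the first test below
        return total - discount

    assert apply_discount(150) == 140   # covers every statement, but only the True branch
    # A second test is needed for full decision coverage:
    assert apply_discount(50) == 50     # exercises the False outcome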

Black box testing :

Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure and programming knowledge in general is not required. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output.
Typical black-box test design techniques include:

1. Equivalence partitioning:

Inputs to the software or system are divided into groups that are expected to exhibit similar behaviour, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data and invalid data, i.e. values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g. before or after an event) and for interface parameters (e.g. during integration testing). Tests can be designed to cover partitions. Equivalence partitioning is applicable at all levels of testing.

Equivalence partitioning as a technique can be used to achieve input and output coverage. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
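As an illustrative sketch (Python), assume a hypothetical age field that is valid from 18 to 65. Three partitions emerge, and one representative value is picked from each:

    # Equivalence partitioning sketch for a hypothetical "age" input valid from 18 to 65.
    partitions = {
        "invalid_below": range(0, 18),     # values that should be rejected
        "valid":         range(18, 66),    # values that should be accepted
        "invalid_above": range(66, 130),   # values that should be rejected
    }

    # Pick one representative test value per partition (here, the middle of each range).
    representatives = {name: list(values)[len(values) // 2] for name, values in partitions.items()}
    print(representatives)   # {'invalid_below': 9, 'valid': 42, 'invalid_above': 98}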

2 Boundary value analysis

Behavior at the edge of each equivalence partition is more likely to be incorrect, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen. Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect finding capability is high; detailed specifications are helpful. This technique is often considered as an extension of equivalence partitioning. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g. time out, transactional speed requirements) or table ranges (e.g. table size is 256*256). Boundary values may also be used for test data selection.
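Continuing the hypothetical 18 to 65 age range from the sketch above, boundary value analysis adds tests at each edge of the valid partition and at its invalid neighbours:

    # Boundary value analysis for the same hypothetical 18-65 range: test each edge
    # of the valid partition plus its invalid neighbours.

    def boundary_values(low, high):
        return [low - 1, low, high, high + 1]   # invalid/valid boundaries on both edges

    print(boundary_values(18, 65))   # [17, 18, 65, 66]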

3 Decision table testing

Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. The specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they can either be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions, which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column, which typically involves covering all combinations of triggering conditions. The strength of decision table testing is that it creates combinations of conditions that might not otherwise have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.
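A small sketch (Python) of a decision table for a hypothetical discount rule, with one test per column (rule):

    # Each row below corresponds to one column (rule) of the decision table: a unique
    # combination of Boolean conditions and the action it triggers. The usual coverage
    # standard is at least one test per rule. The rule itself is a hypothetical example.

    decision_table = [
        {"is_member": True,  "order_over_100": True,  "discount": 15},
        {"is_member": True,  "order_over_100": False, "discount": 10},
        {"is_member": False, "order_over_100": True,  "discount": 5},
        {"is_member": False, "order_over_100": False, "discount": 0},
    ]

    def expected_discount(is_member, order_over_100):
        for rule in decision_table:
            if rule["is_member"] == is_member and rule["order_over_100"] == order_over_100:
                return rule["discount"]

    # One test per rule:
    for rule in decision_table:
        assert expected_discount(rule["is_member"], rule["order_over_100"]) == rule["discount"]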

4 State transition testing

A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown as a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number. A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid. Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions. State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g. for internet applications or business scenarios).
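A minimal sketch (Python) of a state table for a hypothetical login screen; each valid (state, event) pair becomes a positive test, and combinations missing from the table are candidates for invalid-transition tests:

    # Hypothetical state table: (state, event) -> next state.
    state_table = {
        ("LOGGED_OUT", "enter_valid_pin"):   "LOGGED_IN",
        ("LOGGED_OUT", "enter_invalid_pin"): "LOGGED_OUT",
        ("LOGGED_IN",  "logout"):            "LOGGED_OUT",
        ("LOGGED_IN",  "timeout"):           "LOGGED_OUT",
    }

    # One test per transition ("exercise every transition" coverage):
    for (state, event), next_state in state_table.items():
        print(f"Test: in {state}, on {event}, expect {next_state}")

    # An invalid transition to test negatively:
    assert ("LOGGED_OUT", "logout") not in state_table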

5 Use case testing

Tests can be specified from use cases or business scenarios. A use case describes interactions between actors, including users and the system, which produce a result of value to a system user. Each use case has preconditions, which need to be met for a use case to work successfully. Each use case terminates with post-conditions, which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e. most likely) scenario, and sometimes alternative branches.

Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see.

Gray box testing :

Grey box testing involves having knowledge of internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level. Manipulating input data and formatting output do not qualify as grey box, because the input and output are clearly outside of the "black-box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey box, as the user would not normally be able to change the data outside of the system under test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Unit testing :

Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. In object-oriented programming a unit is usually an interface, such as a class. Unit tests are created by programmers or occasionally by white box testers during the development process.

Integration testing

Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system, hardware, or interfaces between systems. There may be more than one level of integration testing and it may be carried out on test objects of varying size.

For example:

1. Component integration testing tests the interactions between software components and is done after component testing;

2. System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system, which may lead to increased risk.

Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or component. In order to reduce the risk of late defect discovery, integration should normally be incremental rather than “big bang”.

Top-down approach : Top-Down Testing is an approach to integration testing where the top-level integrated modules are tested first, and each branch of the module is then tested step by step until the end of the related module.

For example: suppose we have modules X, Y and Z. Module X is ready and needs to be tested, but it calls functions from Y and Z, which are not ready. To test X, we write small dummy pieces of code that simulate Y and Z and return values to X. These pieces of dummy code are called stubs in top-down integration.

So, in top-down integration, stubs stand in for the called functions.
Bottom-up approach : Bottom-Up Testing is an approach to integration testing where the lowest-level components are tested first and then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.

All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of the lower-level modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.
Similar to the above example: if modules Y and Z are ready but module X, which calls them, is not, then to test Y and Z we need the values that X would pass to them. So we write a small piece of dummy code for X that calls Y and Z; this piece of code is called a driver in bottom-up integration.

So, in bottom-up integration, drivers stand in for the calling functions.
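As a minimal sketch of the X/Y/Z example above (Python; function names and values are illustrative), the fragment below shows a stub standing in for the called modules in top-down testing, and a driver standing in for the calling module in bottom-up testing:

    # Top-down: module X is ready but calls Y and Z, which are not. Stubs simulate the
    # called functions and return canned values so X can be tested in isolation.
    def y_stub():
        return 10        # canned value standing in for the real Y

    def z_stub():
        return 20        # canned value standing in for the real Z

    def x(get_y=y_stub, get_z=z_stub):
        return get_y() + get_z()

    assert x() == 30     # X tested top-down using stubs

    # Bottom-up: Y is ready but X, which would call it, is not. A driver is a small
    # piece of code that plays the role of the caller.
    def y():
        return 10

    def driver_for_y():
        result = y()                 # the driver calls the lower-level module
        assert result == 10
        print("Y passed its bottom-up test")

    driver_for_y()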

Sandwich Testing is an approach to combine top down testing with bottom up testing.

The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-Down, it is easier to find a missing branch link.

Regression testing : 

The intent of regression testing is to ensure that a change, such as a bug fix, did not introduce new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.

Usability testing
Performance testing

Scalability testing : 

Scalability testing is an extension of performance testing. The purpose of scalability testing is to identify major workloads and mitigate bottlenecks that can impede the scalability of the application.

Use performance testing to establish a baseline against which you can compare future performance tests. As an application is scaled up or out, a comparison of performance test results will indicate the success of scaling the application. When scaling results in degraded performance, it is typically the result of a bottleneck in one or more resources.

Software stress testing
Recovery testing

Security testing : Security testing is a process to determine that an information system protects data and maintains functionality as intended.

The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation.

Confidentiality: A security measure which protects against the disclosure of information to parties other than the intended recipient. On its own, it is by no means the only way of ensuring the security of a system.

Integrity: A measure intended to allow the receiver to determine that the information it receives is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.

Authentication: This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claims to be, or assuring that a computer program is a trusted one.

Authorization: The process of determining that a requester is allowed to receive a service or perform an operation.
Access control is an example of authorization.


Availability: Assuring information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Non-repudiation: In reference to digital security, nonrepudiation means to ensure that a transferred message has been sent and received by the parties claiming to have sent and received the message. Nonrepudiation is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message.

Security Testing Taxonomy:
Common terms used for the delivery of security testing;

Discovery - The purpose of this stage is to identify systems within scope and the services in use. It is not intended to discover vulnerabilities, but version detection may highlight deprecated versions of software / firmware and thus indicate potential vulnerabilities.

Vulnerability Scan - Following the discovery stage this looks for known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool with no manual verification or interpretation by the test vendor. This can be supplemented with credential based scanning that looks to remove some common false positives by using supplied credentials to authenticate with a service (such as local windows accounts).

Vulnerability Assessment - This uses discovery and vulnerability scanning to identify security vulnerabilities and places the findings into the context of the environment under test. An example would be removing common false positives from the report and deciding risk levels that should be applied to each report finding to improve business understanding and context.

Security Assessment - Builds upon Vulnerability Assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could be in the form of authorised access to a system to confirm system settings and involve examining logs, system responses, error messages, codes, etc. A Security Assessment is looking to gain a broad coverage of the systems under test but not the depth of exposure that a specific vulnerability could lead to.

Penetration Test - Penetration test simulates an attack by a malicious party. Building on the previous stages and involves exploitation of found vulnerabilities to gain further access. Using this approach will result in an understanding of the ability of an attacker to gain access to confidential information, affect data integrity or availability of a service and the respective impact. Each test is approached using a consistent and complete methodology in a way that allows the tester to use their problem solving abilities, the output from a range of tools and their own knowledge of networking and systems to find vulnerabilities that would/ could not be identified by automated tools. This approach looks at the depth of attack as compared to the Security Assessment approach that looks at the broader coverage.

Security Audit - Driven by an Audit / Risk function to look at a specific control or compliance issue. Characterised by a narrow scope, this type of engagement could make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test).

Security Review - Verification that industry or internal security standards have been applied to system components or product. This is typically completed through gap analysis and utilises build / code reviews or by reviewing design documents and architecture diagrams. This activity does not utilise any of the earlier approaches (Vulnerability Assessment, Security Assessment, Penetration Test, Security Audit)

Conformance testing : Conformance testing or type testing is testing to determine whether a product or system meets some specified standard that has been developed for efficiency or interoperability.

Conformance testing, also known as compliance testing, is a methodology used in engineering to ensure that a product, process, computer program or system meets a defined set of standards. These standards are commonly defined by large, independent entities such as the Institute of Electrical and Electronics Engineers (IEEE), the World Wide Web Consortium (W3C) or the European Telecommunications Standards Institute (ETSI).

Smoke testing : In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going too deep into any of them.

  • A smoke test is scripted, either using a written set of tests or an automated test 
  • A Smoke test is designed to touch every part of the application in a cursory way. It’s shallow and wide. 
  • Smoke testing is conducted to ensure whether the most crucial functions of a program are working, but not bothering with finer details. (Such as build verification). 
  • Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing. 
  • Smoke test refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail.

SANITY TESTING:

  • A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep. 
  • A sanity test is usually unscripted. 
  • A Sanity test is used to determine that a small section of the application is still working after a minor change. 
  • Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. 
  • Sanity testing is to verify whether requirements are met or not, checking all features breadth-first.

Compatibility testing : Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application's compatibility with the computing environment. Computing environment may contain some or all of the below mentioned elements:
  • Computing capacity of Hardware Platform (IBM 360, HP 9000, etc.) 
  • Bandwidth handling capacity of networking hardware 
  • Compatibility of peripherals (Printer, DVD drive, etc.) 
  • Operating systems (MVS, UNIX, Windows, etc.) 
  • Database (Oracle, Sybase, DB2, etc.) 
  • Other System Software (Web server, networking/ messaging tool, etc.) 
  • Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.) 

System testing:

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

The following examples are different types of testing that should be considered during System testing:

Graphical user interface testing

Usability testing

Performance testing

Compatibility testing

Error handling testing

Load testing

Volume testing

Stress testing

Security testing

Scalability testing

Sanity testing

Smoke testing

Exploratory testing

Ad hoc testing

Regression testing

Reliability testing

Installation testing

Maintenance testing

Recovery testing and failover testing.

Accessibility testing, including compliance with accessibility standards.

Alpha testing : performed at the developer's site, by the client

Beta testing : performed at the client's site, by the client (end users)