Tuesday, 30 June 2015

Software Testing Life Cycle (STLC)


Requirement Analysis

During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements can be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.

Activities
  • Identify types of tests to be performed. 
  • Gather details about testing priorities and focus. 
  • Prepare Requirement Traceability Matrix (RTM). 
  • Identify test environment details where testing is supposed to be carried out. 
  • Automation feasibility analysis (if required). 

Deliverables

  • RTM (a minimal sketch of one simple representation follows below) 
  • Automation feasibility report (if applicable) 
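As a rough illustration, a Requirement Traceability Matrix simply maps each requirement to the test cases that cover it. The sketch below shows one minimal way to represent that mapping in Python; the requirement and test-case IDs are made-up examples, not taken from any real project.

# Minimal sketch of a Requirement Traceability Matrix (RTM) as a data structure.
# The requirement and test-case IDs below are hypothetical examples.

rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # e.g. a login requirement covered by two test cases
    "REQ-002": ["TC-003"],             # e.g. a password-reset requirement
    "REQ-003": [],                     # not yet covered -- a traceability gap
}

# Report requirements that have no test coverage yet.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test cases:", uncovered)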

Test planning and control:

Test planning is the activity of verifying the mission of testing, defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.


Test analysis and design:

Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.

Test analysis and design has the following major tasks:

  • Reviewing the test basis (such as requirements, architecture, design, interfaces).
  • Evaluating testability of the test basis and test objects.
  • Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.
  • Designing and prioritizing test cases.
  • Identifying necessary test data to support the test conditions and test cases.
  • Designing the test environment set-up and identifying any required infrastructure and tools.
Test Environment Setup

The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment; in that case, the test team is required to do a readiness check (smoke testing) of the given environment.

Activities 

  • Understand the required architecture and environment set-up, and prepare the hardware and software requirement list for the test environment.  
  • Set up the test environment and test data  
  • Perform a smoke test on the build 

Deliverables 

  • Environment ready with test data set up  
  • Smoke Test Results. 

Test implementation and execution:

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.

Test implementation and execution has the following major tasks:

  • Developing, implementing and prioritizing test cases.
  • Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
  • Creating test suites from the test procedures for efficient test execution.
  • Verifying that the test environment has been set up correctly.
  • Executing test procedures either manually or by using test execution tools, according to the planned sequence.
  • Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
  • Comparing actual results with expected results.
  • Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).
  • Repeating test activities as a result of action taken for each discrepancy. For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
Deliverables 

  • Completed RTM with execution status  
  • Test cases updated with results  
  • Defect reports 

Evaluating exit criteria and reporting:

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.

Evaluating exit criteria has the following major tasks:

  • Checking test logs against the exit criteria specified in test planning.
  • Assessing if more tests are needed or if the exit criteria specified should be changed.
  • Writing a test summary report for stakeholders.

Test closure activities:

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. They occur at project milestones: for example, when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.

Test closure activities include the following major tasks:

  • Checking which planned deliverables have been delivered, the closure of incident reports or raising of change records for any that remain open, and the documentation of the acceptance of the system.
  • Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
  • Handover of testware to the maintenance organization.
  • Analyzing lessons learned for future releases and projects, and the improvement of test maturity.
Documents:
IEEE 829-2008, also known as the 829 Standard for Software and System Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are a matter of judgment outside the purview of the standard. The documents are:

Test Plan: a management planning document that shows: 

  • How the testing will be done (including SUT (system under test) configurations). 
  • Who will do it 
  • What will be tested 
  • How long it will take (although this may vary, depending upon resource availability). 
  • What the test coverage will be, i.e. what quality level is required 

Test Design Specification: detailing test conditions and the expected results as well as test pass criteria. 

Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification 

Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed 

Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next 

Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed 

Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed. This document is deliberately named as an incident report, and not a fault report. The reason is that a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or inconsistency in the requirements meaning that more than one interpretation could be made. The report consists of all details of the incident such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact of an incident upon testing. 
Test Summary Report: A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met acceptance criteria defined by project stakeholders. 

------------------------------------------------------------------------------------------------------

Contrary to popular belief, software testing is not just a single activity. It consists of a series of activities carried out methodically to help certify your software product. These activities (stages) constitute the Software Testing Life Cycle (STLC).

The different stages of the Software Testing Life Cycle are described below.

Each of these stages has definite Entry and Exit criteria, Activities and Deliverables associated with it.

In an ideal world, you would not enter the next stage until the exit criteria for the previous stage are met. Practically, this is not always possible. So, for this tutorial, we will focus on the activities and deliverables for the different stages of the STLC. Let's look at them in detail.


Test Planning:

This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager determines effort and cost estimates for the project and prepares and finalizes the Test Plan.

Activities

  • Preparation of test plan/strategy document for various types of testing 
  • Test tool selection 
  • Test effort estimation 
  • Resource planning and determining roles and responsibilities. 
  • Training requirements 

Deliverables

  • Test plan/strategy document 
  • Effort estimation document 



Test Case Development

This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created, reviewed and then reworked as well.

Activities

  • Create test cases, automation scripts (if applicable) 
  • Review and baseline test cases and scripts 
  • Create test data (if the test environment is available) 

Deliverables

  • Test cases/scripts 
  • Test data 

Test Environment Setup

The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment; in that case, the test team is required to do a readiness check (smoke testing) of the given environment.
Activities
  • Understand the required architecture and environment set-up, and prepare the hardware and software requirement list for the test environment. 
  • Set up the test environment and test data 
  • Perform a smoke test on the build 

Deliverables

  • Environment ready with test data set up 
  • Smoke test results 

Test Execution
During this phase, the test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction, and retesting will be performed.

Activities

  • Execute tests as per plan 
  • Document test results, and log defects for failed cases 
  • Map defects to test cases in RTM 
  • Retest the defect fixes 
  • Track the defects to closure 

Deliverables

  • Completed RTM with execution status 
  • Test cases updated with results 
  • Defect reports 
Test Cycle Closure

The testing team will meet, discuss and analyze testing artifacts to identify strategies that should be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for any similar projects in the future.

Activities

  • Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business objectives and quality 
  • Prepare test metrics based on the above parameters. 
  • Document the learning out of the project 
  • Prepare Test closure report 
  • Qualitative and quantitative reporting of quality of the work product to the customer. 
  • Test result analysis to find out the defect distribution by type and severity. 

Deliverables

  • Test closure report 
  • Test metrics 

What is Defect, Bug, Error, Issue, Failure ?


Error: A mistake in coding is called an error. An error found by a tester is called a defect. A defect accepted by the development team is called a bug. When the build does not meet the requirements, it is a failure.

Mistake (an error): A human action that produces an incorrect result.

Fault: An incorrect step, process or data definition.

         - the manifestation of an error in the implementation
         - often nebulous; it can be hard to pin down its exact location

Failure: An incorrect result; for example, everything appears to run, but we are not able to get the expected result.


Bug: Deviation from the expected result.

Defect: A problem in the algorithm that leads to failure.

What are the different types of testing methods?

White box testing:

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing an internal perspective of the system, as well as programming skills are required and used to design test cases. While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

White box testing is performed based on the knowledge of how the system is implemented. White box testing includes analyzing data flow, control flow, information flow, coding practices, and exception and error handling within the system, to test the intended and unintended software behavior. White box testing can be performed to validate whether code implementation follows intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities.

White box testing requires access to the source code. Though white box testing can be performed any time in the life cycle after the code is developed, it is a good practice to perform white box testing during the unit testing phase.

White box testing requires knowing what makes software secure or insecure, how to think like an attacker, and how to use different testing tools and techniques. The first step in white box testing is to comprehend and analyze available design documentation, source code, and other relevant development artifacts, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit software, a tester must think like an attacker. Third, to perform testing effectively, testers need to know the different tools and techniques available for white box testing. The three requirements do not work in isolation, but together.

White-box test design techniques include:

  • Control flow testing 
  • Data flow testing 
  • Branch testing: Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g. the True and False options of an IF statement) that have been exercised by a test case suite. Decision testing derives test cases to execute specific decision outcomes, normally to increase decision coverage. 
  • Decision testing is a form of control flow testing, as it generates a specific flow of control through the decision points. Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not vice versa. 
  • Path testing 

Statement Testing and Coverage:

In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.
Statement coverage is determined by the number of executable statements covered by (designed or executed) test cases divided by the number of all executable statements in the code under test.
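To make the difference between statement coverage and decision coverage concrete, here is a minimal sketch in Python; the discount function and its test values are assumed examples, not taken from the text above.

# Illustrative sketch: statement coverage vs. decision coverage on a tiny function.
# The function and the test values are made-up examples.

def apply_discount(price, is_member):
    if is_member:              # decision point: True / False outcomes
        price = price - 10     # statement executed only on the True outcome
    return price

# One test with is_member=True executes every statement (100% statement coverage),
# but it exercises only the True outcome of the IF, so decision coverage is 50%.
assert apply_discount(100, True) == 90

# Adding a test for the False outcome brings decision coverage to 100% as well.
assert apply_discount(100, False) == 100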

Black box testing:

Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure and programming knowledge in general is not required. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output.
Typical black-box test design techniques include:

1. Equivalence partitioning:

Inputs to the software or system are divided into groups that are expected to exhibit similar behaviour, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data and invalid data, i.e. values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g. before or after an event) and for interface parameters (e.g. during integration testing). Tests can be designed to cover partitions. Equivalence partitioning is applicable at all levels of testing.

Equivalence partitioning as a technique can be used to achieve input and output coverage. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
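As a small illustration, the sketch below applies equivalence partitioning to a hypothetical "age" input field that accepts values from 18 to 65; the field and its limits are assumptions made only for this example.

# Sketch of equivalence partitioning for an assumed age field valid from 18 to 65.
# One representative value is picked from each partition.

def is_valid_age(age):
    return 18 <= age <= 65

partition_representatives = {
    "invalid (below 18)": 10,
    "valid (18 to 65)":   30,
    "invalid (above 65)": 80,
}

for partition, value in partition_representatives.items():
    print(partition, "->", "accepted" if is_valid_age(value) else "rejected")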

2. Boundary value analysis

Behavior at the edge of each equivalence partition is more likely to be incorrect, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen. Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect finding capability is high; detailed specifications are helpful. This technique is often considered as an extension of equivalence partitioning. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g. time out, transactional speed requirements) or table ranges (e.g. table size is 256*256). Boundary values may also be used for test data selection.
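Continuing the same assumed 18-to-65 age field from the equivalence partitioning sketch, the boundary values to test are the partition edges and the values immediately on either side of them.

# Sketch of boundary value analysis for the assumed 18..65 age field.

def is_valid_age(age):
    return 18 <= age <= 65

boundary_cases = {
    17: False,  # invalid boundary value just below the valid partition
    18: True,   # minimum valid boundary value
    65: True,   # maximum valid boundary value
    66: False,  # invalid boundary value just above the valid partition
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, f"boundary check failed for {value}"
print("all boundary checks passed")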

3. Decision table testing

Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. The specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they can either be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions, which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column, which typically involves covering all combinations of triggering conditions. The strength of decision table testing is that it creates combinations of conditions that might not otherwise have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.
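The sketch below turns a made-up two-condition business rule (a request is approved only when the user is authenticated and has sufficient balance) into a decision table and derives one test per column; the rule itself is an assumption used only for illustration.

# Sketch of decision table testing: each column of the table becomes one test.
# Conditions are Booleans; the action is the expected result.

decision_table = [
    # (authenticated, sufficient_balance) -> expected action
    ((True,  True),  "approve"),
    ((True,  False), "reject"),
    ((False, True),  "reject"),
    ((False, False), "reject"),
]

def process(authenticated, sufficient_balance):
    return "approve" if authenticated and sufficient_balance else "reject"

for (authenticated, balance_ok), expected in decision_table:
    assert process(authenticated, balance_ok) == expected
print("one test per decision-table column passed")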

4. State transition testing

A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown as a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number. A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid. Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions. State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g. for internet applications or business scenarios).
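As an illustration, the sketch below models a made-up order workflow as a transition table and exercises both a typical sequence of states and one invalid transition; the states and events are assumptions for the example.

# Sketch of state transition testing for an assumed order workflow.
# Valid transitions are listed in a table; anything else is an invalid transition.

VALID_TRANSITIONS = {
    ("new",     "pay"):     "paid",
    ("paid",    "ship"):    "shipped",
    ("shipped", "deliver"): "delivered",
    ("new",     "cancel"):  "cancelled",
}

def next_state(state, event):
    try:
        return VALID_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} from state {state!r}")

# Cover a typical sequence of states...
assert next_state("new", "pay") == "paid"
assert next_state("paid", "ship") == "shipped"

# ...and explicitly test one invalid transition.
try:
    next_state("shipped", "cancel")
except ValueError as error:
    print("rejected as expected:", error)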

5. Use case testing

Tests can be specified from use cases or business scenarios. A use case describes interactions between actors, including users and the system, which produce a result of value to a system user. Each use case has preconditions, which need to be met for a use case to work successfully. Each use case terminates with post-conditions, which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e. most likely) scenario, and sometimes alternative branches.

Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see.

Gray box testing:

Grey box testing involves having knowledge of internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level. Manipulating input data and formatting output do not qualify as grey box, because the input and output are clearly outside of the "black-box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey box, as the user would not normally be able to change the data outside of the system under test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Unit testing:

Unit testing is a method by which individual units of source code are tested to determine whether they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. In object-oriented programming a unit is usually an interface, such as a class. Unit tests are created by programmers or occasionally by white box testers during the development process.
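A minimal sketch of a unit test written with Python's standard unittest module is shown below; the add() function under test is a made-up example.

# Minimal unit test sketch using Python's built-in unittest module.

import unittest

def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()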

Integration testing

Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system, hardware, or interfaces between systems. There may be more than one level of integration testing and it may be carried out on test objects of varying size.

For example:

1. Component integration testing tests the interactions between software components and is done after component testing;

2. System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system, which may lead to increased risk.

Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or component. In order to reduce the risk of late defect discovery, integration should normally be incremental rather than “big bang”.

Top-down approach: Top-down testing is an approach to integration testing where the top-level integrated modules are tested first, and each branch of the module is then tested step by step until the end of the related module.

For example: if we have modules X, Y and Z, and module X is ready and needs to be tested, but it calls functions from Y and Z (which are not ready), then to test X we write small dummy pieces of code that simulate Y and Z and return values to X. These pieces of dummy code are called stubs in top-down integration.

So stubs are the called functions in top-down integration.
Bottom-up approach: Bottom-up testing is an approach to integration testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.

All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of the lower-level integrated modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules at the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.

Similar to the above example: if modules Y and Z are ready and module X is not, and we need to test Y and Z, which get values from X, then we write a small piece of dummy code for X that returns values to Y and Z. These pieces of code are called drivers in bottom-up integration.

So drivers are the calling functions in bottom-up integration.
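The sketch below illustrates the stub idea from the X/Y/Z example: module X is under test and a hand-written stub stands in for the unfinished module Y. The function names and values are assumptions made for the example; a driver would be the mirror image, a small throwaway caller written to exercise a finished lower-level module before its real caller exists.

# Sketch of a stub for top-down integration, loosely matching the X/Y/Z example.

def y_stub(order_id):
    # Stub for the unfinished module Y: returns a canned, predictable value.
    return {"order_id": order_id, "total": 100}

def module_x(order_id, get_order=y_stub):
    # The module under test; it would normally call the real module Y here.
    order = get_order(order_id)
    return order["total"] + 20   # X's own logic, e.g. adding a flat handling fee

# Top-down integration test of X, with the stub standing in for Y.
assert module_x(42) == 120
print("module X tested against the Y stub")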

Sandwich Testing is an approach to combine top down testing with bottom up testing.

The main advantage of the bottom-up approach is that bugs are more easily found. With top-down, it is easier to find a missing branch link.

Regression testing:

The intent of regression testing is to ensure that a change, such as a bug fix, did not introduce new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.

Usability testing
Performance testing

Scalability testing:

Scalability testing is an extension of performance testing. The purpose of scalability testing is to identify major workloads and mitigate bottlenecks that can impede the scalability of the application.

Use performance testing to establish a baseline against which you can compare future performance tests. As an application is scaled up or out, a comparison of performance test results will indicate the success of scaling the application. When scaling results in degraded performance, it is typically the result of a bottleneck in one or more resources.

Software stress testing
Recovery testing

Security testing : Security testing is a process to determine that an information system protects data and maintains functionality as intended.

The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation.

Confidentiality: A security measure which protects against the disclosure of information to parties other than the intended recipient.

Integrity: A measure intended to allow the receiver to determine that the information being provided is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.

Authentication: This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claims to be, or assuring that a computer program is a trusted one.

Authorization: The process of determining that a requester is allowed to receive a service or perform an operation.
Access control is an example of authorization.


Availability: Assuring information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Non-repudiation: In reference to digital security, nonrepudiation means to ensure that a transferred message has been sent and received by the parties claiming to have sent and received the message. Nonrepudiation is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message.

Security Testing Taxonomy:
Common terms used for the delivery of security testing:

Discovery - The purpose of this stage is to identify systems within scope and the services in use. It is not intended to discover vulnerabilities, but version detection may highlight deprecated versions of software / firmware and thus indicate potential vulnerabilities.

Vulnerability Scan - Following the discovery stage this looks for known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool with no manual verification or interpretation by the test vendor. This can be supplemented with credential based scanning that looks to remove some common false positives by using supplied credentials to authenticate with a service (such as local windows accounts).

Vulnerability Assessment - This uses discovery and vulnerability scanning to identify security vulnerabilities and places the findings into the context of the environment under test. An example would be removing common false positives from the report and deciding risk levels that should be applied to each report finding to improve business understanding and context.

Security Assessment - Builds upon Vulnerability Assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could be in the form of authorised access to a system to confirm system settings and involve examining logs, system responses, error messages, codes, etc. A Security Assessment is looking to gain a broad coverage of the systems under test but not the depth of exposure that a specific vulnerability could lead to.

Penetration Test - A penetration test simulates an attack by a malicious party, building on the previous stages and involving the exploitation of found vulnerabilities to gain further access. Using this approach will result in an understanding of the ability of an attacker to gain access to confidential information, affect data integrity or availability of a service, and the respective impact. Each test is approached using a consistent and complete methodology in a way that allows the tester to use their problem solving abilities, the output from a range of tools and their own knowledge of networking and systems to find vulnerabilities that would or could not be identified by automated tools. This approach looks at the depth of attack, as compared to the Security Assessment approach that looks at broader coverage.

Security Audit - Driven by an Audit / Risk function to look at a specific control or compliance issue. Characterised by a narrow scope, this type of engagement could make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test).

Security Review - Verification that industry or internal security standards have been applied to system components or product. This is typically completed through gap analysis and utilises build / code reviews or by reviewing design documents and architecture diagrams. This activity does not utilise any of the earlier approaches (Vulnerability Assessment, Security Assessment, Penetration Test, Security Audit)

Conformance testing : Conformance testing or type testing is testing to determine whether a product or system meets some specified standard that has been developed for efficiency or interoperability.

Conformance testing, also known as compliance testing, is a methodology used in engineering to ensure that a product, process, computer program or system meets a defined set of standards. These standards are commonly defined by large, independent entities such as the Institute of Electrical and Electronics Engineers (IEEE), the World Wide Web Consortium (W3C) or the European Telecommunications Standards Institute (ETSI).

Smoke testing: In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested without going into too much depth. A small scripted sketch follows the list below.

  • A smoke test is scripted, either using a written set of tests or an automated test 
  • A Smoke test is designed to touch every part of the application in a cursory way. It’s shallow and wide. 
  • Smoke testing is conducted to ensure whether the most crucial functions of a program are working, but not bothering with finer details. (Such as build verification). 
  • Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing. 
  • Smoke test refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail.
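A minimal sketch of such a scripted smoke test is shown below, using only the Python standard library; the base URL and the list of "most crucial" pages are placeholders, not values from the text.

# Sketch of a scripted smoke test: a shallow pass over a few critical pages
# of a hypothetical web build. URL and paths are assumptions for illustration.

import urllib.request

BASE_URL = "http://localhost:8000"           # assumed address of the build under test
CRITICAL_PATHS = ["/", "/login", "/search"]  # assumed most-crucial functions

def smoke_test():
    for path in CRITICAL_PATHS:
        with urllib.request.urlopen(BASE_URL + path, timeout=5) as response:
            # Shallow check only: the page answers with HTTP 200, nothing deeper.
            assert response.status == 200, f"{path} returned {response.status}"
    print("smoke test passed: build is fit for deeper testing")

if __name__ == "__main__":
    smoke_test()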

Sanity testing:

  • A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep. 
  • A sanity test is usually unscripted. 
  • A sanity test is used to determine that a small section of the application is still working after a minor change. 
  • Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. 
  • Sanity testing is to verify whether requirements are met or not, checking all features breadth-first.

Compatibility testing : Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application's compatibility with the computing environment. Computing environment may contain some or all of the below mentioned elements:
  • Computing capacity of hardware platform (IBM 360, HP 9000, etc.) 
  • Bandwidth handling capacity of networking hardware 
  • Compatibility of peripherals (Printer, DVD drive, etc.) 
  • Operating systems (MVS, UNIX, Windows, etc.) 
  • Database (Oracle, Sybase, DB2, etc.) 
  • Other System Software (Web server, networking/ messaging tool, etc.) 
  • Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.) 

System testing:

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic

The following examples are different types of testing that should be considered during System testing:

Graphical user interface testing

Usability testing

Performance testing

Compatibility testing

Error handling testing

Load testing

Volume testing

Stress testing

Security testing

Scalability testing

Sanity testing

Smoke testing

Exploratory testing

Ad hoc testing

Regression testing

Reliability testing

Installation testing

Maintenance testing

Recovery testing and failover testing.

Accessibility testing, including compliance with applicable accessibility standards.

Alpha testing: performed at the developer's site by the client

Beta testing: performed at the client's site by the client

Monday, 29 June 2015

What are the different types of testing methodologies or testing models?

Waterfall model: 

The waterfall model adopts a 'top down' approach regardless of whether it is being used for software development or testing. The basic steps involved in this software testing methodology are:
  • Requirement analysis 
  • Test case design 
  • Test case implementation 
  • Testing, debugging and validating the code or product 
  • Deployment and maintenance 

In this methodology, you move on to the next step only after you have completed the present step. There is no scope for jumping backward or forward or performing two steps simultaneously. Also, this model follows a non-iterative approach. The main benefit of this methodology is its simple, systematic and orthodox approach. However, it has many shortcomings, since bugs and errors in the code are not discovered until the testing stage is reached. This can often lead to wastage of time, money and valuable resources.

V model:

The V model gets its name from the fact that the graphical representation of the different test process activities involved in this methodology resembles the letter 'V'. The basic steps involved in this methodology are more or less the same as those in the waterfall model. However, this model follows both a 'top-down' as well as a 'bottom-up' approach (you can visualize them forming the letter 'V'). The benefit of this methodology is that in this case, both the development and testing activities go hand-in-hand. For example, as the development team goes about its requirement analysis activities, the testing team simultaneously begins with its acceptance testing activities. By following this approach, time delays are minimized and optimum utilization of resources is assured.

Spiral model:

As the name implies, the spiral model follows an approach in which there are a number of cycles (or spirals) of all the sequential steps of the waterfall model. Once the initial cycle is completed, a thorough analysis and review of the achieved product or output is performed. If it is not as per the specified requirements or expected standards, a second cycle follows, and so on. This methodology follows an iterative approach and is generally suited for very large projects having complex and constantly changing requirements.

RUP: Rational Unified Process

The RUP methodology is also similar to the spiral model in the sense that the entire testing procedure is broken up into multiple cycles or processes. Each cycle consists of four phases, namely: inception, elaboration, construction and transition. At the end of each cycle, the product or the output is reviewed, and a further cycle (made up of the same four phases) follows if necessary. Today, you will find certain organizations and companies adopting a slightly modified version of the RUP, which goes by the name of Enterprise Unified Process (EUP).

Agile model:

This methodology follows neither a purely sequential approach nor does it follow a purely iterative approach. It is a selective mix of both of these approaches in addition to quite a few new developmental methods. Fast and incremental development is one of the key principles of this methodology. The focus is on obtaining quick, practical and visible outputs and results, rather than merely following theoretical processes. Continuous customer interaction and participation is an integral part of the entire development process.

Agile methods break tasks into small increments with minimal planning, and do not directly involve long-term planning. Iterations are short time frames (time boxes) that typically last from one to four weeks. Each iteration involves a team working through a full software development cycle including planning, requirements analysis, design, coding, unit testing, and acceptance testing when a working product is demonstrated to stakeholders. This minimizes overall risk and allows the project to adapt to changes quickly. Stakeholders produce documentation as required. An iteration may not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration. Multiple iterations may be required to release a product or new features.

RAD: Rapid Application Development

The name says it all. In this case, the methodology adopts a rapid development approach by using the principle of component-based construction. After understanding the various requirements, a rapid prototype is prepared and is then compared with the expected set of output conditions and standards. Necessary changes and modifications are made after joint discussions with the customer or the development team (in the context of software testing). Though this approach does have its share of advantages, it can be unsuitable if the project is large, complex and of an extremely dynamic nature, wherein the requirements are constantly changing.

What different approaches are required for different types of testing?


Desktop Application testing:


1. User interface testing:

For any application, the first thing to check is the GUI. All the controls in the application should be laid out and used in a proper manner, e.g. size, font, placement, etc.

2. Functional testing:

  • In functional testing, more stress should be given to the functionality of the application: is the product built right or not?
  • Proper error messages or warning messages should be displayed in case of wrong input or an invalid action performed by the user.
  • Check whether the basic functionality works, e.g. what happens to printing when no printer is connected to the system.
  • Check whether the application installs easily or not. 
  • In case of any changes done to the system, like a theme change or resolution change, the application should work properly. Test with multiple accounts on the desktop.
  • Sleep: While the application is running, put the system to sleep (S3). Wake the system up after two minutes. 
          a) Verify the application is still running.
          b) Verify there is no distortion or error.

3. Compatibility testing: 

Test the application on different operating systems to check its compatibility.

4. Performance testing: 

Measure the launch time required to start the application and its memory use.

Web based application testing:

Step 1: User interface testing:

  • Content, wording and labels used on web pages should be correct and meaningful.
  • Wrap-around should occur properly.
  • Instructions given on a web page should be correct.
  • Images on the web page should be placed properly and should not take a long time to load.
  • All controls should be placed properly.
  • View in a text browser: Test each web page in a text-only browser, or a text-browser emulator. It will help you pick up on badly chosen or missing ALT texts. 

Step 2 - Functional Testing

  • Check for broken links (a small automated sketch follows after this list). 
  • Validate the HTML. 
  • Disable cookies in your browser settings. If your site uses cookies, its major functionality will not work with cookies disabled; see if appropriate messages are displayed.
  • (A cookie is a small piece of information stored as a text file on your computer that a web server uses when you browse certain web sites that you've visited before.)
  • Switch JavaScript off: It is important to check that your site still functions with JavaScript disabled, or provides a proper JavaScript error message.
  • Warning messages: Error/warning messages should be flashed to the user for incorrect inputs.
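As a small illustration of the broken-link check mentioned above, the sketch below fetches one page, collects its absolute links and reports the ones that fail to load; the page URL is a placeholder and this is a rough sketch rather than a production crawler.

# Sketch of an automated broken-link check for a single page, using only the
# Python standard library. The URL at the bottom is a placeholder.

import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def find_broken_links(page_url):
    with urllib.request.urlopen(page_url, timeout=5) as response:
        html = response.read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for link in collector.links:
        try:
            urllib.request.urlopen(link, timeout=5)
        except Exception:
            broken.append(link)   # unreachable or returned an HTTP error
    return broken

if __name__ == "__main__":
    print(find_broken_links("http://example.com"))   # placeholder URL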

Step 3 - Interface Testing

  • Data displayed in the browser should match the data available on the server: To test the browser and server interface, run queries on the database to make sure the transaction data is being retrieved and stored properly.
  • Error handling: Make sure the system can handle application errors.

Step 4 - Compatibility Testing

  • Test on different operating systems: Test your web application on different operating systems like Windows (XP, Vista, Win7, etc.), Unix, Mac, Linux and Solaris, with different OS flavors.
  • Test on different browsers: Test the web application on different browsers, such as Firefox, which has the best standards compliance and is the second most-used browser; Internet Explorer for Windows, currently the most widely used browser (IE6, IE7, IE8); and Opera, growing in popularity due to its speed and pretty good standards compliance. Mobile browsing: this is a new technology age, so mobile browsing will grow; test your web pages on mobile browsers, as compatibility issues may exist on mobile.

Step 5 - Security Testing

  • Limit should be defined for the number of tries: Is there a maximum number of failed logins allowed before the server locks out the current user?
  • Verify rules for password selection.
  • Is there a timeout limit?
  • Test by pasting an internal URL directly into the browser address bar without logging in; internal pages should not open (a small sketch of this check follows after this list).
  • Test the CAPTCHA against automated script logins.
  • Test whether SSL is used for security measures. If it is used, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
  • All transactions, error messages, security breach attempts should get logged in log files somewhere on web server.
  • Clear your Cache: Be sure to clear the browser cache, including cookies, before each test.
  • Session hijacking: If your application has a session identifier number in the URL decrease that number by one and reload the page. The app has a session hijacking vulnerability if the app then "sees" you as a different user. 
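The sketch below illustrates the "paste an internal URL without logging in" check from the list above: an unauthenticated request to an assumed protected page should either be redirected to a login page or rejected with 401/403. The URL is a placeholder, and the login-page heuristic is an assumption for the example.

# Sketch: verify that a protected internal page is not served to an
# unauthenticated request. URL and expectations are placeholders.

import urllib.request
import urllib.error

INTERNAL_URL = "http://localhost:8000/account/settings"   # assumed protected page

def check_requires_login(url):
    try:
        # A fresh request with no session cookie, i.e. an unauthenticated user.
        with urllib.request.urlopen(url, timeout=5) as response:
            # Acceptable outcome: the server redirected us to a login page.
            return "login" in response.geturl().lower()
    except urllib.error.HTTPError as error:
        # Acceptable outcome: access is denied outright.
        return error.code in (401, 403)

if __name__ == "__main__":
    print("protected page requires login:", check_requires_login(INTERNAL_URL))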

Step 6 - Performance testing:

  • Can your site handle a large number of users requesting a certain page? (A small probe sketch follows after this list.)
  • Long periods of continuous use: Is the site able to run for a long period without downtime?
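As a rough probe for the "many users requesting one page" question, the sketch below fires a number of concurrent requests with the Python standard library and reports how many succeeded and how long the slowest one took; the URL and user count are placeholders, and a real load test would use a dedicated tool.

# Sketch of a tiny load probe: N concurrent requests against one page.
# The page URL and the number of simulated users are assumptions.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

PAGE_URL = "http://localhost:8000/"   # assumed page under load
USERS = 50                            # assumed number of simulated users

def fetch(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(PAGE_URL, timeout=10) as response:
            ok = response.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(fetch, range(USERS)))
    successes = sum(1 for ok, _ in results if ok)
    slowest = max(duration for _, duration in results)
    print(f"{successes}/{USERS} requests succeeded; slowest took {slowest:.2f}s")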

What is testing, why is testing required, and who does the testing?

What is Testing: 

Testing is performed to show that the system is doing what it is supposed to do and is not doing what it is not supposed to do. Testing can show that there are defects in the system, but it cannot prove that there are no defects.

Why is testing required:

Testing is required to assess the quality of the product by finding defects. Once the defects are fixed, we can say that the quality has been improved by testing.

Who does the testing:

Different roles do the testing at different stages. At the unit level, developers do the testing; integration-level and system-level testing is done by testers; and user acceptance testing is done by the client or end users.

What are the different types of testing:

Desktop application testing, web-based application testing, database testing, client-server testing.