Tuesday, 30 June 2015

What are the different types of testing methods?

White box testing :

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing, an internal perspective of the system and programming skills are used to design test cases. While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and paths between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

White box testing is performed based on the knowledge of how the system is implemented. White box testing includes analyzing data flow, control flow, information flow, coding practices, and exception and error handling within the system, to test the intended and unintended software behavior. White box testing can be performed to validate whether code implementation follows intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities.

White box testing requires access to the source code. Though white box testing can be performed any time in the life cycle after the code is developed, it is a good practice to perform white box testing during the unit testing phase.

White box testing requires knowing what makes software secure or insecure, how to think like an attacker, and how to use different testing tools and techniques. The first step in white box testing is to comprehend and analyze available design documentation, source code, and other relevant development artifacts, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit software, a tester must think like an attacker. Third, to perform testing effectively, testers need to know the different tools and techniques available for white box testing. The three requirements do not work in isolation, but together.

White-box test design techniques include:

  • Control flow testing 
  • Data flow testing 
  • Branch testing : Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g. the True and False options of an IF statement) that have been exercised by a test case suite. Decision testing derives test cases to execute specific decision outcomes, normally to increase decision coverage. Decision testing is a form of control flow testing, as it generates a specific flow of control through the decision points. Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not vice versa (see the sketch after this list). 
  • Path testing 
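
To illustrate decision (branch) coverage, here is a minimal sketch in Python using the standard unittest module; the function and its threshold are invented for illustration. The IF statement has two outcomes, and one test per outcome gives 100% decision coverage:

    import unittest

    def classify(amount):
        # One decision point: the IF below has a True and a False outcome.
        if amount >= 100:          # threshold is a made-up example value
            return "bulk"          # True outcome
        return "standard"          # False outcome

    class DecisionCoverageTests(unittest.TestCase):
        def test_true_outcome(self):
            self.assertEqual(classify(150), "bulk")      # True branch

        def test_false_outcome(self):
            self.assertEqual(classify(10), "standard")   # False branch

    if __name__ == "__main__":
        unittest.main()

Note that a single test such as classify(150) would execute two of the three statements but only one of the two decision outcomes; both tests together achieve 100% decision coverage and, with it, 100% statement coverage.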

Statement Testing and Coverage :

In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.
Statement coverage is determined by the number of executable statements covered by (designed or executed) test cases divided by the number of all executable statements in the code under test.
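
As an illustration of the formula above (the function is invented for the example):

    def absolute(n):
        if n < 0:      # statement 1 (executable)
            return -n  # statement 2
        return n       # statement 3

    # The single test absolute(-5) executes statements 1 and 2:
    # statement coverage = 2 / 3, i.e. about 67%.
    # Adding absolute(5) also executes statement 3, reaching 100%.
    assert absolute(-5) == 5
    assert absolute(5) == 5

In practice, a coverage tool (for example, coverage.py for Python) reports this percentage automatically.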

Black box testing :

Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure and programming knowledge in general are not required. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output.
Typical black-box test design techniques include:

1. Equivalence partitioning:

Inputs to the software or system are divided into groups that are expected to exhibit similar behaviour, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data and invalid data, i.e. values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g. before or after an event) and for interface parameters (e.g. during integration testing). Tests can be designed to cover partitions. Equivalence partitioning is applicable at all levels of testing.

Equivalence partitioning as a technique can be used to achieve input and output coverage. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
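
A minimal sketch of equivalence partitioning in Python; the validator and its 18-65 limits are assumed purely for illustration:

    def is_eligible(age):
        # Valid partition: 18..65; invalid partitions: below 18, above 65.
        return 18 <= age <= 65

    # One representative value per partition is usually enough:
    assert is_eligible(10) is False   # invalid partition: below 18
    assert is_eligible(40) is True    # valid partition: 18..65
    assert is_eligible(70) is False   # invalid partition: above 65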

2. Boundary value analysis:

Behavior at the edge of each equivalence partition is more likely to be incorrect, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.

Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high; detailed specifications are helpful. This technique is often considered an extension of equivalence partitioning. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g. time-out, transactional speed requirements) or table ranges (e.g. table size is 256*256). Boundary values may also be used for test data selection.
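
Continuing the assumed 18-65 example from equivalence partitioning, a boundary value analysis sketch tests each edge of the partition instead of a mid-range representative:

    def is_eligible(age):
        return 18 <= age <= 65

    assert is_eligible(17) is False   # invalid value just below the lower boundary
    assert is_eligible(18) is True    # valid lower boundary
    assert is_eligible(65) is True    # valid upper boundary
    assert is_eligible(66) is False   # invalid value just above the upper boundary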

3. Decision table testing:

Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. The specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they can either be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions, which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column, which typically involves covering all combinations of triggering conditions. The strength of decision table testing is that it creates combinations of conditions that might not otherwise have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.
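
A minimal sketch of decision table testing; the business rule (two Boolean conditions giving four rules) is assumed for illustration, and the coverage goal of one test per column is met below:

    def approve_loan(good_credit, sufficient_income):
        # Actions derived from the two triggering conditions.
        if good_credit and sufficient_income:
            return "approve"
        if good_credit or sufficient_income:
            return "refer"
        return "reject"

    # One test per column of the decision table:
    assert approve_loan(True,  True)  == "approve"   # rule 1: T, T
    assert approve_loan(True,  False) == "refer"     # rule 2: T, F
    assert approve_loan(False, True)  == "refer"     # rule 3: F, T
    assert approve_loan(False, False) == "reject"    # rule 4: F, F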

4. State transition testing:

A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown as a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number. A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid. Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.

State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g. for internet applications or business scenarios).
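
A minimal state transition sketch; the states, events, and transition table below are assumed for illustration. The tests exercise every transition plus one invalid transition:

    # Transition table: (current state, event) -> next state
    TRANSITIONS = {
        ("closed", "open"):   "opened",
        ("opened", "close"):  "closed",
        ("closed", "lock"):   "locked",
        ("locked", "unlock"): "closed",
    }

    def next_state(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"invalid transition: {event} in state {state}")

    # Exercise every transition...
    assert next_state("closed", "open") == "opened"
    assert next_state("opened", "close") == "closed"
    assert next_state("closed", "lock") == "locked"
    assert next_state("locked", "unlock") == "closed"

    # ...and test one invalid transition, which should be rejected:
    try:
        next_state("locked", "open")
    except ValueError:
        pass  # expected: "open" is not allowed while locked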

5. Use case testing:

Tests can be specified from use cases or business scenarios. A use case describes interactions between actors, including users and the system, which produce a result of value to a system user. Each use case has preconditions, which need to be met for a use case to work successfully. Each use case terminates with post-conditions, which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e. most likely) scenario, and sometimes alternative branches.

Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see.

Grey box testing :

Grey box testing involves having knowledge of internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level. Manipulating input data and formatting output do not qualify as grey box, because the input and output are clearly outside of the "black-box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey box, as the user would not normally be able to change the data outside of the system under test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Unit testing :

Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. In object-oriented programming a unit is usually an interface, such as a class. Unit tests are created by programmers or occasionally by white box testers during the development process.
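
A minimal unit test sketch using Python's standard unittest module; the function under test is invented for the example:

    import unittest

    def add(a, b):
        # The smallest testable unit here: a single function.
        return a + b

    class AddTests(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()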

Integration testing

Integration testing tests the interfaces between components, the interactions with different parts of a system (such as the operating system, file system, and hardware), and the interfaces between systems. There may be more than one level of integration testing, and it may be carried out on test objects of varying size.

For example:

1. Component integration testing tests the interactions between software components and is done after component testing;

2. System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system, which may lead to increased risk.

Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or component. In order to reduce the risk of late defect discovery, integration should normally be incremental rather than “big bang”.

Top-down approach : Top-down testing is an approach to integration testing where the top-level integrated modules are tested first, and each branch of the module is then tested step by step down to the end of the related module.

For example: suppose we have modules X, Y, and Z. Module X is ready and needs to be tested, but it calls functions from Y and Z, which are not ready. To test X, we write small pieces of dummy code that simulate Y and Z and return values to X. These pieces of dummy code are called stubs in top-down integration.

So in top-down integration, stubs stand in for the called functions.
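
A minimal sketch of a stub, following the X/Y/Z example above (all names are illustrative). Module X is ready but Y is not, so a stub stands in for Y:

    def stub_y(value):
        # Stub: returns a fixed, predictable value instead of Y's real logic.
        return 42

    def module_x(value, y=stub_y):
        # X calls Y through a parameter so the stub can be injected in tests.
        return y(value) + 1

    assert module_x(10) == 43  # X can be tested before Y exists
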
Bottom-up approach : Bottom-up testing is an approach to integration testing where the lowest-level components are tested first and then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy has been tested.

All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of the lower-level integrated modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.

Similar to the example above: if modules Y and Z are ready but module X is not, and we need to test Y and Z, which receive values from X, we write a small piece of dummy code for X that supplies values to Y and Z. This piece of code is called a driver in bottom-up integration.

So in bottom-up integration, drivers play the role of the calling functions.
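
A minimal sketch of a driver for the same example (names are illustrative): Y is ready, its caller X is not, so a small driver exercises Y directly:

    def module_y(value):
        # The lower-level module under test.
        return value * 2

    def driver_for_y():
        # Driver: temporary calling code that plays the role of the missing X.
        assert module_y(3) == 6
        assert module_y(0) == 0
        print("module_y passed the driver's checks")

    driver_for_y()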

Sandwich Testing is an approach to combine top down testing with bottom up testing.

The main advantage of the bottom-up approach is that bugs are more easily found. With top-down, it is easier to find a missing branch link.

Regression testing : 

The intent of regression testing is to ensure that a change, such as a bug fix, did not introduce new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.

Usability testing
Performance testing

Scalability testing : 

Scalability testing is an extension of performance testing. The purpose of scalability testing is to identify major workloads and mitigate bottlenecks that can impede the scalability of the application.

Use performance testing to establish a baseline against which you can compare future performance tests. As an application is scaled up or out, a comparison of performance test results will indicate the success of scaling the application. When scaling results in degraded performance, it is typically the result of a bottleneck in one or more resources.
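
A minimal sketch of recording such a baseline with Python's standard time module; the measured operation and the number of runs are arbitrary placeholders:

    import time

    def measure(operation, runs=100):
        # Average wall-clock time per run of the given operation.
        start = time.perf_counter()
        for _ in range(runs):
            operation()
        return (time.perf_counter() - start) / runs

    # Record the baseline before scaling; repeat the identical measurement
    # after scaling up or out, and compare the two numbers.
    baseline = measure(lambda: sum(range(10_000)))
    print(f"average time per run: {baseline:.6f}s")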

Software stress testing
Recovery testing

Security testing : Security testing is a process to determine that an information system protects data and maintains functionality as intended.

The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation.

Confidentiality: A security measure which protects against the disclosure of information to parties other than the intended recipient. This is by no means the only way of ensuring the security of a system.

Integrity : A measure intended to allow the receiver to determine that the information provided to it is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.

Authentication: This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claims to be, or assuring that a computer program is a trusted one.

Authorization: The process of determining that a requester is allowed to receive a service or perform an operation. Access control is an example of authorization.
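
A minimal sketch of an authorization check via role-based access control; the roles and permissions are assumed for illustration:

    # Role -> set of operations that role may perform.
    PERMISSIONS = {
        "admin":  {"read", "write", "delete"},
        "editor": {"read", "write"},
        "viewer": {"read"},
    }

    def is_authorized(role, operation):
        # Authorization: is this requester allowed to perform the operation?
        return operation in PERMISSIONS.get(role, set())

    assert is_authorized("editor", "write") is True
    assert is_authorized("viewer", "delete") is False  # request should be denied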


Availability: Assuring information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Non-repudiation: In reference to digital security, nonrepudiation means to ensure that a transferred message has been sent and received by the parties claiming to have sent and received the message. Nonrepudiation is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message.

Security Testing Taxonomy:
Common terms used for the delivery of security testing:

Discovery - The purpose of this stage is to identify systems within scope and the services in use. It is not intended to discover vulnerabilities, but version detection may highlight deprecated versions of software / firmware and thus indicate potential vulnerabilities.

Vulnerability Scan - Following the discovery stage, this looks for known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool, with no manual verification or interpretation by the test vendor. This can be supplemented with credential-based scanning that looks to remove some common false positives by using supplied credentials to authenticate with a service (such as local Windows accounts).

Vulnerability Assessment - This uses discovery and vulnerability scanning to identify security vulnerabilities and places the findings into the context of the environment under test. An example would be removing common false positives from the report and deciding risk levels that should be applied to each report finding to improve business understanding and context.

Security Assessment - Builds upon Vulnerability Assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could be in the form of authorised access to a system to confirm system settings and involve examining logs, system responses, error messages, codes, etc. A Security Assessment is looking to gain a broad coverage of the systems under test but not the depth of exposure that a specific vulnerability could lead to.

Penetration Test - A penetration test simulates an attack by a malicious party. It builds on the previous stages and involves exploitation of found vulnerabilities to gain further access. Using this approach will result in an understanding of the ability of an attacker to gain access to confidential information, affect data integrity or availability of a service, and the respective impact. Each test is approached using a consistent and complete methodology, in a way that allows the tester to use their problem-solving abilities, the output from a range of tools, and their own knowledge of networking and systems to find vulnerabilities that would or could not be identified by automated tools. This approach looks at the depth of attack, as compared to the Security Assessment approach, which looks at broader coverage.

Security Audit - Driven by an Audit / Risk function to look at a specific control or compliance issue. Characterised by a narrow scope, this type of engagement could make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test).

Security Review - Verification that industry or internal security standards have been applied to system components or products. This is typically completed through gap analysis and utilises build / code reviews or reviews of design documents and architecture diagrams. This activity does not utilise any of the earlier approaches (Vulnerability Assessment, Security Assessment, Penetration Test, Security Audit).

Conformance testing : Conformance testing or type testing is testing to determine whether a product or system meets some specified standard that has been developed for efficiency or interoperability.

Conformance testing, also known as compliance testing, is a methodology used in engineering to ensure that a product, process, computer program or system meets a defined set of standards. These standards are commonly defined by large, independent entities such as the Institute of Electrical and Electronics Engineers (IEEE), the World Wide Web Consortium (W3C) or the European Telecommunications Standards Institute (ETSI).

Smoke testing : In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going into too much depth (a minimal sketch follows the list below).

  • A smoke test is scripted, either using a written set of tests or an automated test 
  • A Smoke test is designed to touch every part of the application in a cursory way. It’s shallow and wide. 
  • Smoke testing is conducted to check whether the most crucial functions of a program are working, without bothering with finer details (such as build verification). 
  • Smoke testing is a normal health check-up for a build of an application before taking it into in-depth testing. 
  • Smoke test refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail.
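
A minimal smoke-test sketch against a hypothetical application object (the class and its methods are invented): each major area gets one shallow check, and nothing is tested in depth:

    class App:
        # Stand-in for the real application under test.
        def start(self):
            return True
        def login(self, user, password):
            return user == "demo"
        def search(self, query):
            return [query]

    def smoke_test():
        app = App()
        assert app.start()              # the build starts at all
        assert app.login("demo", "x")   # the main entry point responds
        assert app.search("anything")   # one core feature returns something

    smoke_test()  # run before handing the build over for in-depth testing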

Sanity testing :

  • A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep. 
  • A sanity test is usually unscripted. 
  • A sanity test is used to determine that a small section of the application is still working after a minor change. 
  • Sanity testing is cursory testing, performed whenever a cursory check is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. 
  • Sanity testing is to verify whether requirements are met or not, checking all features breadth-first.

Compatibility testing : Compatibility testing, part of software non-functional testing, is testing conducted on the application to evaluate the application's compatibility with the computing environment. The computing environment may contain some or all of the elements mentioned below:
  • Computing capacity of the hardware platform (IBM 360, HP 9000, etc.) 
  • Bandwidth handling capacity of networking hardware 
  • Compatibility of peripherals (Printer, DVD drive, etc.) 
  • Operating systems (MVS, UNIX, Windows, etc.) 
  • Database (Oracle, Sybase, DB2, etc.) 
  • Other System Software (Web server, networking/ messaging tool, etc.) 
  • Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.) 

System testing:

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

The following examples are different types of testing that should be considered during System testing:

  • Graphical user interface testing 
  • Usability testing 
  • Performance testing 
  • Compatibility testing 
  • Error handling testing 
  • Load testing 
  • Volume testing 
  • Stress testing 
  • Security testing 
  • Scalability testing 
  • Sanity testing 
  • Smoke testing 
  • Exploratory testing 
  • Ad hoc testing 
  • Regression testing 
  • Reliability testing 
  • Installation testing 
  • Maintenance testing 
  • Recovery testing and failover testing 
  • Accessibility testing, including compliance with applicable accessibility standards 

Alpha testing : Performed at the developer's site by the client.

Beta testing : Performed at the client's site by the client.
