Monday, February 4, 2008

SOFTWARE TESTING STANDARDS

A Tutorial

Software testing is the process of checking the functionality and correctness of software by running it. Software testing is usually performed for one of two reasons: (1) defect detection, and (2) reliability estimation [1]. The problem with applying software testing to defect detection is that testing can only suggest the presence of flaws, not their absence (unless the testing is exhaustive). The problem with applying software testing to reliability estimation is that the input distribution used for selecting test cases may be flawed. Software systems have evolved into the most complex artifacts ever created by humans, which has exacerbated the problems of testing them. In response, standards organizations have produced a myriad of testing standards that help testers maintain the quality of the system.


The ISO definition of a standard says:

“Document, established by consensus and approved by a recognized body, that provides, for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context” [2].

1. Why do we need testing standards?

  • Improve market confidence in third-party quality management system certification through accredited certification bodies for the software sector
  • Improve professional practice amongst quality management system auditors in the software sector
  • Publish authoritative guidance material for all software developers

Testing standards are beneficial to both the client and the developer of software systems. Unfortunately, there is no single software testing standard that testers can adhere to. As will be shown, there are many standards that touch upon software testing, but many of these standards overlap and may contain contradictory requirements. Perhaps worse, there are large gaps in the coverage of software testing by standards, such as integration testing [3], where no useful standard exists at all. The rationale of testing standards is to define what constitutes adequate proof that the units tested are an accurate representation of their specification. The motivation of testing standards is to ensure that the purpose of testing is met in a systematic way. Some of the prominent testing standards and their features are discussed in the following sections of this document.

2. Stages in the development of International Standards [5]:

An International Standard is the result of an agreement between the member bodies of ISO. International Standards are developed by ISO technical committees (TC) and subcommittees (SC) through a six-stage process:

· Stage 1: Proposal stage: the need for a new standard is proposed and confirmed.

· Stage 2: Preparatory stage: a working group is set up to prepare a working draft.

· Stage 3: Committee stage: consensus is reached and a draft international standard is submitted.

· Stage 4: Enquiry stage: the draft is circulated for voting and comment.

· Stage 5: Approval stage: a final yes/no vote is taken.

· Stage 6: Publication stage: the final standard is published.

If a document with a certain degree of maturity is available at the start of a standardization project, for example a standard developed by another organization, it is possible to omit certain stages.

There are many testing standards. IEC and ISO standards are international standards. ANSI/IEEE standards are national US standards; IEEE standards are created by a professional association. A good example is IEEE/ANSI 829-83, the software test documentation standard. DoD standards are organization-internal standards that are also known outside that organization. They are intended to provide the government with visibility into the development contractor’s activities and to ensure third-party maintainability.

3. OVERVIEW OF A FEW STANDARDS:

3.1 ANSI/IEEE 829 STANDARD FOR SOFTWARE TEST DOCUMENTATION [4]:

The intent of the ANSI/IEEE Standard for Software Test Documentation is to describe the form and content of a set of software test documents. A standardized set of test documents offers several advantages:

  • Facilitates communication by providing a common frame of reference
  • Serves as a completeness checklist for the associated testing process
  • Provides a baseline for the evaluation of current test documentation practices
  • Significantly increases the manageability of testing [4]

This standard specifies the form and content of the following set of documents.


a) A Test Plan describes the scope, approach, resources, and schedule of testing activities. It specifies the items to be tested, the features to be tested, the features not to be tested, the testing tasks to be performed, the personnel responsible for each task, the staffing and training needed, and the risks and contingencies.

b) A Test Design Specification identifies the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures and describes the feature pass/fail criteria.

c) A Test Case Specification documents the actual values used for input along with the anticipated outputs. A test case also identifies any constraints on the test procedures resulting from use of that specific test case. Test cases are separated from test designs to allow for use in more than one design and for reuse in other situations (a minimal code sketch follows this list).

d) A Test Procedure Specification describes steps required to operate the system and exercise test cases in order to implement the associated test design. Test procedures are separated from test-design specifications as they are intended to be followed step by step and should not have extraneous detail.

e) A Test Log is used to record what occurred during test execution.

f) A Test Incident Report describes any event that occurs during testing which requires further investigation. Events such as defects and requests for enhancement are included in this report.

g) A Test Summary report summarizes the testing activities associated with one or more test design specifications.
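IEEE 829 specifies documents rather than code, but the idea behind a Test Case Specification — recorded inputs, anticipated outputs, and reuse across designs — maps naturally onto table-driven tests. The sketch below is purely illustrative: the discount function and its rules are hypothetical assumptions, not part of the standard.

```python
import unittest

def discount(order_total):
    """Hypothetical unit under test: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Each entry plays the role of a Test Case Specification:
# an identifier, the actual input values, and the anticipated output.
TEST_CASES = [
    ("TC-001-below-threshold", 50.0, 50.0),
    ("TC-002-at-threshold", 100.0, 90.0),
    ("TC-003-above-threshold", 200.0, 180.0),
]

class DiscountTests(unittest.TestCase):
    def test_cases(self):
        # The table is reusable by any test design that needs these cases.
        for case_id, order_total, expected in TEST_CASES:
            with self.subTest(case_id=case_id):
                self.assertAlmostEqual(discount(order_total), expected)

if __name__ == "__main__":
    unittest.main()
```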

3.2 BS 7925-2: A Component Testing Standard [6]:

BS 7925-2 defines a generic test process, test case design techniques, and coverage measures for use in component testing.

Objective: The main objective of this standard is to define a means of assessing how well a software component has been tested by dynamic testing. The intention was to start with component testing and then to produce further standards, gradually building a framework of software testing standards.

3.2.1 Contents of BS 7925-2:

The standard consists of two parts:

(i) The Normative Part

This must be satisfied to comply with the standard.

The Normative Part consists of the following clauses:

1. Scope

This defines the objectives of the standard: to specify the process for dynamic component testing and the techniques for the design and measurement of that testing. The functional (black-box) techniques apply to components written in any language, while white-box testing is restricted to components written in procedural languages.

2. Normative References and Definitions

BS 7925 is intended to be the first of a number of software testing standards; hence a single standard of software testing terminology was included to support the framework of all the standards under it.

3. Process

At the project level there must be:

a) A project component test plan that requires the specification of dependencies between component tests and their sequence.

b) A project component test strategy, which requires the specification of the following:

· The test case design techniques to be used (an exhaustive list)

· The criteria for test completion (an exhaustive list)

· The degree of independence of testers

· The approach to component testing (e.g. isolation, top-down, bottom-up)

· The test environment, including both hardware and software

· The test process for software component testing

The sequence of activities defined by the generic test process must be followed for each test case. Each component tested must have a specification from which it is possible to derive the expected outcome for a given set of inputs.

The specific requirements for each of the individual activities in the generic test process are described as follows.

The generic model [7] for the test process, as specified by the standard, is as follows (a minimal code sketch follows the list):

· Component Test Planning

o Specify how the component test strategy and project component test plan apply to the component under test, listing any exceptions.

· Component Test Specification

o Specify test inputs for each test, selected using a test design technique

o Specify the expected outcomes for each test in advance

o Document the point of each test

o All test cases shall be repeatable

· Component Test Execution

o Each test case in the test specification shall be run.

· Component Test Recording

o Check and record the results of each test

o Analyse the reasons for failure to subsequently allow the removal of the fault.

o Record the test coverage achieved

o Record the test configuration.

· Checking for Component Test Completion

o Check whether the test completion criterion is satisfied.
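To illustrate how the generic process fits together, here is a minimal, hypothetical sketch in Python. The component under test (leap_year), the completion criterion, and the recording format are all assumptions made for this example; the standard prescribes the process, not any code.

```python
# Minimal sketch of the BS 7925-2 generic test process (illustrative only).

def leap_year(year):
    """Hypothetical component under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Component Test Specification: inputs, expected outcomes, and the purpose of
# each test are fixed in advance, so every case is repeatable.
SPECIFICATION = [
    {"input": 1996, "expected": True,  "purpose": "divisible by 4"},
    {"input": 1900, "expected": False, "purpose": "century not divisible by 400"},
    {"input": 2000, "expected": True,  "purpose": "divisible by 400"},
]

def run_component_tests(spec):
    """Component Test Execution and Recording: run every case, record results."""
    record = []
    for case in spec:
        actual = leap_year(case["input"])
        record.append({**case, "actual": actual, "passed": actual == case["expected"]})
    return record

results = run_component_tests(SPECIFICATION)
for entry in results:
    print(entry)

# Checking for Component Test Completion: the (assumed) completion criterion
# here is simply that every specified case has run and passed.
assert all(entry["passed"] for entry in results), "completion criterion not met"
```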

4. Relation to other standards

It aligns with the ISO 9000 series of quality assurance standards, which requires software developers to use software testing standards.

(ii) The Informative Annexes

These provide informative support to the normative part through examples and advice.

3.3 Flight Software Unit Test Standard [8]:

NASA’s requirements for unit tests of flight software are as follows:

a) Clear demonstration that the unit is implemented correctly.

b) Execution of every statement in the unit, including all branches in conditional statements.

c) Boundary value testing: each conditional branch in the unit should be executed with data at each side of the boundary of the condition and with data away from the boundary on each side (illustrated in the sketch after this list).

d) All operations that might cause erroneous execution should be tested explicitly.

e) All parameters and inputs to functions should be tested with nominal values and with extreme values.

f) Unit tests shall not require any changes to the module source code. If a preprocessor is used, exactly the same preprocessor values should be set for the final release and unit test builds.

g) Unit tests should be repeatable, and they should produce identical results on each run.

h) Unit tests should run without user interaction to the maximum possible extent.

i) If input data files are used by the unit test, they shall be treated as source code for the purposes of configuration management. Such input data files should be in human-readable form; if this is not possible, a separate tool shall be provided to generate the actual test input files from some human-readable form.

j) Distinct elements of input vectors and matrices should have distinct values in order to catch indexing errors.

k) When the order of inputs to an operation matters, the inputs should have distinct values to catch order errors.

l) Unit tests should be compiled with the target compiler, using the same configuration switches as the final target build, and run on the target hardware. There should be documentation of which targets the unit has been tested on.
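To make requirements (c), (j), and (k) concrete, here is a minimal sketch in Python. The units under test (in_range, weighted_sum) and the chosen boundary are assumptions made for illustration; the NASA standard does not prescribe any particular language or framework.

```python
import unittest

def in_range(x):
    """Hypothetical unit under test with a conditional boundary at 100."""
    return x <= 100

def weighted_sum(values, weights):
    """Hypothetical unit where argument order and element indexing matter."""
    return sum(v * w for v, w in zip(values, weights))

class FlightSoftwareStyleUnitTests(unittest.TestCase):
    def test_boundary_values(self):
        # Requirement (c): data at each side of the boundary and away from it.
        self.assertTrue(in_range(100))    # on the boundary
        self.assertFalse(in_range(101))   # just over the boundary
        self.assertTrue(in_range(99))     # just under the boundary
        self.assertTrue(in_range(0))      # well away, inside
        self.assertFalse(in_range(1000))  # well away, outside

    def test_distinct_input_values(self):
        # Requirements (j) and (k): distinct element values catch indexing
        # and argument-order errors that identical values would mask.
        self.assertEqual(weighted_sum([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]), 32.0)

if __name__ == "__main__":
    unittest.main()
```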

4. Detailed description of MIL-STD-498

Objective: The purpose of this standard is to establish software development and documentation requirements to be applied during the acquisition, development, or support of software systems [9].

Features:

  • This Military Standard is approved for use by all Departments and Agencies of the Department of Defense [10].
  • This standard merges DOD-STD-2167A and DOD-STD-7935A to define a set of activities and documentation suitable for the development of both weapon systems and Automated Information Systems [10].
  • This standard can be applied to contractors, subcontractors, or Government in-house agencies performing software development. For uniformity, the term "acquirer" is used for the organization requiring the technical effort, "developer" for the organization performing the technical effort, "contract" for the agreement between these parties [9].
  • Part or all of the software to which the standard is applied may be the software element of firmware; this standard does not apply to the hardware element of firmware [10].
  • This standard and its Data Item Descriptions (DIDs) are meant to be tailored for each item or type of software to which they are applied. The tailoring process for the standard includes deletion of non-applicable requirements and modification of requirements not meeting contract needs [10].

Though the standard specifies activities for all phases of the software process, only the activities undertaken in the testing phase are outlined in this document. Testing at all levels (unit, integration, and system) is discussed. The standard describes testing at these levels as illustrated below.

4.1. Unit Testing [10]:

1.1 Preparing for Unit testing:

The developer shall develop test cases, test procedures, and test data for testing each software unit. The test cases shall cover all aspects of the unit’s detailed design. The developer shall record this information in appropriate Software Development Files (SDFs).

1.2 Performing Unit testing:

The developer shall test software corresponding to each software unit. The testing shall be in accordance with unit test cases and procedures.

1.3 Revision and retesting:

The developer shall make all necessary revisions to the software, perform all necessary retesting, and update the SDFs and other software products as needed, based on the results of the unit testing.

1.4 Analyzing and recording unit test results:

The developer shall analyze the unit test results and shall record the test and analysis results in appropriate SDFs.
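MIL-STD-498 does not prescribe a format for Software Development Files. As a purely illustrative sketch, the results of a unit test run could be captured in a structured record like the one below; the unit name, file name, and fields are all assumptions made for this example.

```python
import json
import unittest
from datetime import datetime, timezone

class ExampleUnitTests(unittest.TestCase):
    """Placeholder tests standing in for the tests of a real software unit."""
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Run the tests and record the analysis results in an SDF-like JSON entry.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ExampleUnitTests)
)
sdf_entry = {
    "unit": "ExampleUnit",  # hypothetical unit name
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tests_run": result.testsRun,
    "failures": len(result.failures),
    "errors": len(result.errors),
}
with open("sdf_unit_test_record.json", "w") as f:
    json.dump(sdf_entry, f, indent=2)
```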

4.2. Unit Integration and testing [10]:

The developer shall perform unit integration and testing in accordance with the following requirements.

  • Unit integration and testing means integrating software units and testing the resulting software against its requirements, continuing this process until all the units in the CSCI (Computer Software Configuration Item) are integrated and tested.
  • If a CSCI is developed in multiple builds, unit integration and testing of that CSCI is not completed until the final build.

2.1 Preparing for Unit Integration and testing:

The developer shall develop test cases, test procedures, and test data for unit integration and testing. The test cases shall cover all aspects of the CSCI-wide and CSCI architectural design. The developer shall record this information in appropriate SDFs.

2.2 Performing Unit Integration and testing:

The testing shall be in accordance with unit integration test cases and procedures.

2.3 Revision and retesting:

The developer shall make all necessary revisions to the software, perform all necessary retesting, and update the SDFs and other software products as needed, based on the results of Unit integration and testing.

2.4 Analyzing and recording results:

The developer shall analyze the unit integration and testing results and shall record the test and analysis results in appropriate SDFs.

4.3. CSCI Qualification testing [10]:

The developer shall perform CSCI qualification testing in accordance with the following requirements:

· CSCI qualification testing is performed to demonstrate to the acquirer that CSCI requirements have been met. This testing contrasts with developer-internal CSCI testing performed at the final stages of Unit Integration and testing.

· If a CSCI is developed in multiple builds, its qualification testing is not completed until the final build.

3.1 Independence in CSCI qualification testing:

The qualification team is independent of the development team, although the development team may contribute to the qualification testing.

3.2 Testing on the target computer system:

This involves testing on the target computer system or on an alternative system approved by the acquirer.

3.3 Preparing for CSCI qualification testing:

The developer shall develop the test cases, test procedures, and test data to be used for CSCI qualification testing, together with traceability between the test cases and the CSCI requirements. The developer shall notify the acquirer of the time and location of CSCI qualification testing.

3.4 Dry run of CSCI qualification testing:

If the testing is to be witnessed by the acquirer, then the developer shall first dry run the tests to ensure that they are complete and accurate and that the software is ready for witnessed testing.

3.5 Performing CSCI qualification testing:

The developer shall perform CSCI qualification testing of each CSCI in accordance with testing procedures and cases.

3.6 Revision and retesting:

Activities are similar to those of the unit integration and testing phase.

3.7 Analyzing and recording CSCI qualification test results:

Activities are similar to those of the unit integration and testing phase.

4.4. CSCI/HWCI integration and testing [10]:

The developer shall participate in CSCI/HWCI integration and testing activities in accordance with the following requirements:

  • CSCI/HWCI integration and testing means integrating CSCIs with interfacing HWCIs (Hardware Configuration Items) and other CSCIs, testing the resulting groups for the intended functionality, and continuing this process until all HWCIs and CSCIs in the system are integrated and tested. The last stage of this testing is developer-internal system testing.
  • If a system or CSCI is developed in multiple builds, this testing is not completed until the final build.

All the activities are similar to those of the unit integration and testing phase.

4.5. System Testing [10]:

The developer shall participate in system qualification testing in accordance with the following requirements:

  • System qualification testing is performed to demonstrate to the acquirer that the system requirements have been met. It differs from the developer-internal system testing performed in the final stages of CSCI/HWCI integration and testing.
  • If a system is developed in multiple builds, qualification testing of the system is not complete until the final build.

5.1 An independent team, distinct from the development team, performs this testing.

5.2 This testing is performed on the target system or on an alternative system approved by the acquirer.

5.3 Preparing for the system qualification testing.

5.4 Dry run of system qualification testing.

5.5 Performing system qualification testing.

5.6 Revision and retesting.

5.7 Analyzing and recording system qualification test results.

The standard goes on to discuss the other phases and aspects of the software development process.

5. Conclusion:

Ever-growing competition in the global economy has made the quality of systems under development increasingly important, and there is strong demand for standards that assess and maintain that quality. Most defense organizations and safety-critical systems must comply with strict government and military standards (notably MIL-STD-498). It is worth emphasizing that most other, non-critical systems comply with less stringent standards (such as IEEE 1008) that they tailor to meet their requirements.

“For non-critical systems, standards may be tailored to meet the organization’s requirements.”

6. References:

[1] Software Testing Techniques, http://www.rspa.com/

[2] Standards development, http://www.iso.ch/

[3] Stuart C. Reid, Software Testing Standards, http://www.testingstandards.co.uk/

[4] IEEE standard for software test documentation, http://www.ustreas.gov/hrsolutions/doc/test

[5] ISO standards development, http://www.iso.ch/iso/en/stdsdevelopment/whowhenhow/how.html

[6] Stuart C. Reid, “BS 7925-2: The Software Component Testing Standard”, Cranfield University, IEEE Software.

[7] “Standard for Software Component Testing”, Working Draft 3.4, BCS SIGIST, http://www.testingstandards.co.uk/

[8] Flight Software Branch Unit Test Standard, http://fse.gsfc.nasa.gov/linrary/

[9] System testing extracts from draft MILITARY STANDARD, http://sunset.usc.edu/

[10] MIL-STD-498 Public Distribution Document, http://www.pogner.demon.co.uk/mil_498/
