Monday, February 4, 2008

Software Testing Guide Book - 02

14. Understanding Scenario Based Testing

Scenario Based Tests (SBT) are best suited when your tests need to concentrate on the functionality of the application more than anything else.

Let us take an example where you are testing an application which is quite old (a legacy application) - a banking application. This application has been built based on the requirements of the organization for various banking purposes. Now, this application will have continuous upgrades in how it works (technology-wise and business-wise). What do you do to test the application?

Let us assume that the application is undergoing only functional changes and no UI changes. The test cases should be updated for every release, and over a period of time, maintaining the test ware becomes a major setback. Scenario Based Tests would help you here.

As per the requirements, the base functionality is stable and there are no UI changes. There are only changes with respect to the business functionality. As per the requirements and the situation, we clearly understand that only regression tests need to be run continuously as part of the testing phase. Over a period of time, the individual test cases would become difficult to manage. This is the situation where we use Scenarios for testing.

What do you do for deriving Scenarios?

We can use the following as the basis for deriving scenarios:

1. From the requirements, list out all the functionalities of the application.

2. Using a graph notation, draw depictions of the various transactions which pass through the various functionalities of the application.

3. Convert these depictions into scenarios.

4. Run the scenarios when performing the testing.

Will you use Scenario Based Tests only for Legacy application testing?

No. Scenario Based Tests are not only for legacy application testing, but for any application which requires you to concentrate more on the functional requirements. If you can plan out a perfect test strategy, then the Scenario Based Tests can be used for any application testing and for any requirements.

Scenario Based Tests are a good choice, in combination with various test types and techniques, when you are testing projects which adopt UML (Unified Modeling Language) based development strategies.

You can derive scenarios based on the Use Cases. Use Cases provide good coverage of the requirements and functionality.

15. Understanding Agile Testing

The concept of Agile testing rests on the values of the Agile Alliance, which states:

"We have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more." - http://www.agilemanifesto.org/

What is Agile testing?

1) Agile testers treat the developers as their customer and follow the agile manifesto. The context-driven testing principles (explained later) act as a set of principles for the agile tester.

2) Alternatively, it can be treated as the testing methodology followed by the testing team when an entire project follows agile methodologies. If so, what is the role of a tester in such a fast-paced methodology?

Traditional QA seems to be totally at loggerheads with the Agile manifesto in the following regards:

· Process and tools are a key part of QA and testing.

· QA people seem to love documentation.

· QA people want to see the written specification.

· And where is testing without a PLAN?

So the question arises: is there a role for QA in Agile projects?

The answer is maybe, but the roles and tasks are different.

In the first definition of Agile testing we described it as one following the Context driven principles.

The context driven principles which are guidelines for the agile tester are:

1. The value of any practice depends on its context.

2. There are good practices in context, but there are no best practices.

3. People, working together, are the most important part of any project’s context.

4. Projects unfold over time in ways that are often not predictable.

5. The product is a solution. If the problem isn’t solved, the product doesn’t work.

6. Good software testing is a challenging intellectual process.

7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

http://www.context-driven-testing.com/

In the second definition we described Agile testing as a testing methodology adopted when an entire project follows Agile (development) Methodology. We shall have a look at the Agile development methodologies being practiced currently:

Agile Development Methodologies

· Extreme Programming (XP)

· Crystal

· Adaptive Software Development (ASD)

· Scrum

· Feature Driven Development (FDD)

· Dynamic Systems Development Method (DSDM)

· Xbreed

In a fast paced environment such as in Agile development the question then arises as to what is the “Role” of testing?

Testing is as relevant in an Agile scenario, if not more so, than in a traditional software development scenario.

Testing is the headlight of the agile project, showing where the project stands now and the direction in which it is headed.

Testing provides the required and relevant information to the teams to take informed and precise decisions.

The testers in agile frameworks get involved in much more than finding “software bugs”: anything that can “bug” the potential user is an issue for them. But testers don’t make the final call; the entire team discusses a potential issue and takes a decision on it.

A firm belief of Agile practitioners is that no testing approach assures quality by itself; it is the team that does (or doesn’t) assure it. So there is a heavy emphasis on the skill and attitude of the people involved.

Agile Testing is not a game of “gotcha”; it’s about finding ways to set goals rather than focusing on mistakes.

Among these Agile methodologies mentioned we shall look at XP (Extreme Programming) in detail, as this is the most commonly used and popular one.

The basic components of the XP practices are:

· Test-First Programming

· Pair Programming

· Short Iterations & Releases

· Refactoring

· User Stories

· Acceptance Testing

We shall discuss these factors in detail.

Test-First Programming

§ Developers write unit tests before coding. It has been noted that this kind of approach motivates the coding, speeds it up, and results in better designs (with less coupling and more cohesion).

§ It supports a practice called Refactoring (discussed later on).

§ Agile practitioners prefer Tests (code) to Text (written documents) for describing system behavior. Tests are more precise than human language and they are also a lot more likely to be updated when the design changes. How many times have you seen design documents that no longer accurately described the current workings of the software? Out-of-date design documents look pretty much like up-to-date documents. Out-of-date tests fail.

§ Many open source tools like xUnit have been developed to support this methodology.

Refactoring

§ Refactoring is the practice of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.

§ Traditional development tries to understand how all the code will work together in advance. This is the design. With agile methods, this difficult process of imagining what code might look like before it is written is avoided. Instead, the code is restructured as needed to maintain a coherent design. Frequent refactoring allows less up-front planning of design.

§ Agile methods replace high-level design with frequent redesign (refactoring). But successful refactoring also requires a way of checking that the behavior wasn’t inadvertently changed. That’s where the tests come in.

§ Make the simplest design that will work, add complexity only when needed, and refactor as necessary.

§ Refactoring requires unit tests to ensure that design changes (refactorings) don’t break existing code.

Acceptance Testing

§ Make up user experiences or User stories, which are short descriptions of the features to be coded.

§ Acceptance tests verify the completion of user stories.

§ Ideally they are written before coding.

With all these features and process included we can define a practice for Agile testing encompassing the following features.

· Conversational Test Creation

· Coaching Tests

· Providing Test Interfaces

· Exploratory Learning

Looking deep into each of these practices we can describe each of them as:

Conversational Test Creation

§ Test case writing should be a collaborative activity including the majority of the team. As the customers will be busy, we should have someone representing the customer.

§ Defining tests is a key activity that should include programmers and customer representatives.

§ Don't do it alone.

Coaching Tests

§ A way of thinking about Acceptance Tests.

§ Turn user stories into tests.

§ Tests should provide goals and guidance, instant feedback, and progress measurement.

§ Tests should be specified in a format that is clear enough for users/customers to understand, and specific enough that it can be executed.

§ Specification should be done by example.

Providing Test Interfaces

§ Developers are responsible for providing the fixtures that automate coaching tests

§ In most cases XP teams are adding test interfaces to their products, rather than using external test tools

(Figure: Test Interaction Model)

Exploratory Learning

§ Plan to explore, learn and understand the product with each iteration.

§ Look for bugs, missing features and opportunities for improvement.

§ We don’t understand software until we have used it.

We believe that Agile Testing is a major step forward. You may disagree, but regardless, Agile programming is the wave of the future. These practices will develop, and some of the extreme edges may be worn off, but it is only growing in influence and attraction. Some testers may not like it, but those who don’t figure out how to live with it are simply going to be left behind.

Some testers are still upset that they don’t have the authority to block the release. Do they think that they now have the authority to block the adoption of these new development methods? They’ll need to get on this ship if they want to try to keep it from the shoals. Stay on the dock if you wish. Bon voyage!

16. API Testing

Application Programming Interfaces (APIs) are collections of software functions or procedures that can be used by other applications to fulfill their functionality. APIs provide an interface to the software component. They form critical elements for developing applications and are used in varied applications, from graph-drawing packages to speech engines, web-based airline reservation systems and computer security components.

Each API is supposed to behave the way it is coded, i.e. it is functionality specific. APIs may offer different results for different types of input provided, and the errors or exceptions returned may also vary. However, once integrated within a product, the common functionality covers only a very minimal code path of the API, and functionality testing / integration testing may cover only those paths. By considering each API as a black box, a generalized approach to testing can be applied; but there may be some paths which are not tested, and these lead to bugs in the application. Applications can be viewed and treated as APIs from a testing perspective.

There are some distinctive attributes that make testing of APIs slightly different from testing other common software interfaces like GUI testing.

· Testing APIs requires a thorough knowledge of their inner workings - Some APIs may interact with the OS kernel, with other APIs, or with other software to offer their functionality. An understanding of the inner workings of the interface therefore helps in analyzing call sequences and detecting the failures they cause.

· Adequate programming skills - API tests are generally in the form of sequences of calls, namely, programs. Each tester must possess expertise in the programming language(s) that are targeted by the API. This helps the tester review and scrutinize the interface under test when the source code is available.

· Lack of domain knowledge – Since the testers may not be well trained in using the API, a lot of time might be spent in exploring the interfaces and their usage. This problem can be reduced to an extent by involving the testers from the initial stage of development. This helps the testers gain some understanding of the interface and avoid exploring while testing.

· No documentation – Experience has shown that it is hard to create precise and readable documentation, and the APIs developed will hardly have any proper documentation available. Without documentation, it is difficult for the test designer to understand the purpose of calls, the parameter types and possible valid/invalid values, the return values, the calls made to other functions, and usage scenarios. Proper documentation helps the test designer design tests faster.

· Access to source code – The availability of the source code helps the tester understand and analyze the implementation mechanism used, and identify loops or vulnerabilities that may cause errors. If the source code is not available, the tester has no chance to find anomalies that may exist in the code.

· Time constraints – Thorough testing of APIs is time consuming, and requires a learning overhead and resources to develop tools and design tests. Keeping up with deadlines and ship dates may become a nightmare.

Testing of API calls can be done in isolation or in sequence, to vary the order in which the functionality is exercised and to make the API produce useful results from these tests. Designing tests is essentially designing sequences of API calls that have a potential of satisfying the test objectives. This in turn boils down to designing each call with specific parameters and to building a mechanism for handling and evaluating return values.

Thus the design of the test cases can depend on some general questions like:

· Which values should a parameter take?

· What values together make sense?

· What combination of parameters will make the API work in a desired manner?

· What combination will cause a failure, a bad return value, or an anomaly in the operating environment?

· Which sequences are the best candidates for selection? etc.

Some interesting problems for testers are:

1. Ensuring that the test harness varies parameters of the API calls in ways that verify functionality and expose failures. This includes assigning common parameter values as well as exploring boundary conditions.

2. Generating interesting parameter value combinations for calls with two or more parameters.

3. Determining the context under which an API call is made. This might include setting external environment conditions (files, peripheral devices, and so forth) and also internal stored data that affect the API.

4. Sequencing API calls to vary the order in which the functionality is exercised and to make the API produce useful results from successive calls.

By analyzing the problems listed above, a strategy needs to be formulated for testing the API. The API to be tested requires some environment for it to work; hence all the conditions and prerequisites must be understood by the tester. The next step is to identify and study its points of entry. GUIs have items like menus, buttons, check boxes and combo lists that trigger the event or action to be taken; similarly, for APIs, the input parameters and the events that trigger the API act as the points of entry. Subsequently, a chief task is to analyze the points of entry as well as the significant output items. The input parameters should be tested with valid and invalid values, using strategies like boundary value analysis and equivalence partitioning. The fourth step is to understand the purpose of the routines and the contexts in which they are to be used. Once all these parameter selections and combinations are designed, the different call sequences need to be explored.

The steps can be summarized as follows:

1. Identify the initial conditions required for testing.

2. Identify the parameters – Choosing the values of individual parameters.

3. Identify the combination of parameters – pick out the possible and applicable parameter combinations with multiple parameters.

4. Identify the order to make the calls – deciding the order in which to make the calls to force the API to exhibit its functionality.

5. Observe the output.

1. Identify the initial conditions:

The testing of an API depends largely on the environment in which it is to be tested. Hence the initial conditions play a very vital role in understanding and verifying the behavior of the API under test. The initial conditions for testing APIs can be classified as:

· Mandatory pre-setters.

· Behavioral pre-setters.

Mandatory Pre-setters

The execution of an API requires some minimal state and environment. These types of initial conditions are classified under the mandatory initializations (mandatory pre-setters) for the API. For example, a non-static member function API requires an object to be created before it can be called. This is an essential activity required for invoking the API.

Behavioral pre-setters

To test the specific behavior of the API, some additional environmental state is required. These types of initial conditions fall into the behavioral pre-setters category. They are optional conditions required by the API and need to be set before invoking the API under test, thus influencing its behavior. Since they influence the behavior of the API under test, they are considered additional inputs over and above the parameters.

Thus, to test any API, the environment required should also be clearly understood and set up. Without this, the API under test might not function as required, and the tester’s job would be left undone.

2. Input/Parameter Selection: The list of valid input parameters needs to be identified to verify that the interface actually performs the tasks that it was designed for. While there is no method that ensures this behavior will be tested completely, using inputs that return quantifiable and verifiable results is the next best thing. The different possible input values (valid and invalid) need to be identified and selected for testing. Techniques like boundary value analysis and equivalence partitioning need to be used while choosing the input parameter values. The boundary values, or the limits that would lead to errors or exceptions, need to be identified. It would also help if the data structures, and the other components that use these data structures apart from the API, are analyzed. The data structures can be loaded by using the other components, and the API can be tested while another component is accessing them. Verify that the functionality of all dependent components is unaffected while the API accesses and manipulates the data structures.

The availability of the source code to the testers would help in analyzing the various inputs values that could be possible for testing the API. It would also help in understanding the various paths which could be tested. Therefore, not only are testers required to understand the calls, but also all the constants and data types used by the interface.

3. Identify the combination of parameters: Parameter combinations are extremely important for exercising stored data and computation. In API calls, two independently valid values might cause a fault when used together, which might not have occurred with other combinations. Therefore, a routine called with two parameters requires selection of values for one based on the value chosen for the other. Often the response of a routine to certain data combinations is incorrectly programmed due to the underlying complex logic.

The API needs to be tested taking into consideration the combinations of different parameters. The number of possible combinations of parameters for each call is typically large. For a given set of parameters, even if only the boundary values have been selected, the number of combinations, while relatively diminished, may still be prohibitively large. For example, consider an API which takes three parameters as input: the various combinations of different values for these inputs need to be identified.

Parameter combination is further complicated by the function overloading capabilities of many modern programming languages. It is important to isolate the differences between such functions and take into account that their use is context driven. The APIs can also be tested to check that there are no memory leaks after they are called. This can be verified by continuously calling the API and observing the memory utilization.

4. Call Sequencing: When combinations of possible arguments to each individual call are unmanageable, the number of possible call sequences is infinite. Parameter selection and combination issues further complicate the call-sequencing problem. Faults caused by improper call sequences tend to give rise to some of the most dangerous problems in software; most security vulnerabilities are caused by the execution of such seemingly improbable sequences.

5. Observe the output: The outcome of an execution of an API depends upon the behavior of that API, the test condition and the environment. The outcome of an API can take different forms: some APIs return certain data or a status, while others might not return at all, might wait for a period of time, trigger another event, modify a certain resource, and so on.

The tester should be aware of the output to expect for the API under test. The outputs returned for the various input values - valid/invalid, boundary values, etc. - need to be observed and analyzed to validate that they are as per the functionality. All the error codes and exceptions returned for all the input combinations should be evaluated.

API Testing Tools: There are many testing tools available. Depending on the level of testing required, different tools could be used. Some of the API testing tools available are mentioned here.

JVerify: This is from Man Machine Systems.

JVerify is a Java class/API testing tool that supports a unique invasive testing model. The invasive model allows access to the internals (private elements) of any Java object from within a test script. The ability to invade class internals facilitates more effective testing at class level, since controllability and observability are enhanced. This can be very valuable when a class has not been designed for testability.

JavaSpec: JavaSpec is SunTest's API testing tool. It can be used to test Java applications and libraries through their API. JavaSpec guides the users through the entire test creation process and lets them focus on the most critical aspects of testing. Once the user has entered the test data and assertions, JavaSpec automatically generates self-checking tests, HTML test documentation, and detailed test reports.

Here is an example of how to automate the API testing.

Assumptions: -

  1. The test engineer is supposed to test some API.
  2. The APIs are available in the form of a library (.lib).
  3. The test engineer has the API document.

There are mainly two things to test in API testing: -

  1. Black box testing of the APIs
  2. Interaction / integration testing of the APIs.

By black box testing of the API we mean that we have to test the API for its outputs. In simple words, when we give a known input (parameters to the API), we also know the ideal output. So we have to check the actual output against the ideal output.

For this we can write a simple C program that will do the following:

a) Take the parameters from a text file (this file will contain many of such input parameters).

b) Call the API with these parameters.

c) Match the actual and ideal outputs, and also check that parameters passed by reference (pointers) come back with good values.

d) Log the result.

Secondly, we have to test the integration of the APIs.

For example, consider two APIs, say

Handle h = createcontext();

When the handle to the device is to be closed, then the corresponding function is

bool bIsHandleDeleted = deletecontext(&h);

Here we have to call the two APIs and check that the handle created by createcontext() is correctly deleted by deletecontext().

This will ensure that these two APIs are working fine.

For this we can write a simple C program that will do the following:

a) Call the two APIs in the same order.

b) Pass the output parameter of the first as the input of the second.

c) Check the output parameter of the second API.

d) Log the result.

The example is oversimplified, but this approach works: we use this kind of test tool for extensive regression testing of our API library.

17. Understanding Rapid Testing

Rapid testing is testing software faster than usual, without compromising on the standards of quality. It is a technique to test as thoroughly as is reasonable within the constraints. This technique looks at testing as a process of heuristic inquiry, and logically speaking it should be based on exploratory testing techniques.

Although most projects undergo continuous testing, it does not usually produce the information required to deal with the situations where it is necessary to make an instantaneous assessment of the product's quality at a particular moment. In most cases the testing is scheduled for just prior to launch and conventional testing techniques often cannot be applied to software that is incomplete or subject to constant change. At times like these Rapid Testing can be used.

It can be said that rapid testing has a structure that is built on a foundation of four components namely,

  • People
  • Integrated test process
  • Static Testing and
  • Dynamic Testing

There is a need for people who can handle the pressure of tight schedules. They need to be productive contributors even through the early phases of the development life cycle. According to James Bach, a core skill is the ability to think critically.

It should also be noted that dynamic testing lies at the heart of the software testing process, and the planning, design, development, and execution of dynamic tests should be performed well for any testing process to be efficient.

The Rapid Testing practice

It would help us if we scrutinize each phase of a development process to see how the efficiency, speed and quality of testing can be improved, bearing in mind the following factors:

  • Actions that the test team can take to prevent defects from escaping. For example, practices like extreme programming and exploratory testing.
  • Actions that the test team can take to manage risk to the development schedule.
  • The information that can be obtained from each phase so that the test team can speed up the activities.

If a test process is designed around the answers to these questions, both the speed of testing and the quality of the final product should be enhanced.

Some of the aspects that can be used while rapid testing are given below:

  1. Test for link integrity
  2. Test for disabled accessibility
  3. Test the default settings
  4. Check the navigation
  5. Check for input constraints by injecting special characters at the sources of data
  6. Run multiple instances
  7. Check for interdependencies and stress them
  8. Test for consistency of design
  9. Test for compatibility
  10. Test for usability
  11. Check for the possible variabilities and attack them
  12. Go for possible stress and load tests
  13. And our favorite – banging the keyboard

18. Test Ware Development

Test Ware development is the key role of the testing team. What comprises Test Ware, and some guidelines for building it, are discussed below:

18.1 Test Strategy

Before starting any testing activities, the team lead will have to think a lot and arrive at a strategy. This will describe the approach to be adopted for carrying out test activities, including the planning activities. This is a formal document, the very first document regarding the testing area, and it is prepared at a very early stage in the SDLC. It must provide a generic test approach as well as specific details regarding the project. The following areas are addressed in the test strategy document.

18.1.1 Test Levels

The test strategy must state which test levels will be carried out for the particular project. Unit, integration and system testing will be carried out in all projects, but many times the integration and system testing may be combined. Details like this may be addressed in this section.

18.1.2 Roles and Responsibilities

The roles and responsibilities of the test leader, individual testers and project manager are to be clearly defined at the project level in this section. This may not have names associated, but each role has to be very clearly defined. The review and approval mechanism for test plans and other test documents must be stated here. Also, we have to state who reviews the test cases and test records, and who approves them. The documents may go through a series of reviews or multiple approvals, and these have to be mentioned here.

18.1.3 Testing Tools

Any testing tools which are to be used at the different test levels must be clearly identified. This includes justifications for the tools being used at that particular level.

18.1.4 Risks and Mitigation

Any risks that will affect the testing process must be listed along with their mitigation. By documenting the risks in this document, we can anticipate their occurrence well ahead of time and proactively prevent them from occurring. Sample risks are dependency on completion of coding done by sub-contractors, capability of testing tools, etc.

18.1.5 Regression Test Approach

When a particular problem is identified, the program will be debugged and a fix will be applied. To make sure that the fix works, the program will be tested again against those criteria. Regression testing will make sure that one fix does not create other problems in that program or in any other interface. So, a set of related test cases may have to be repeated again to make sure that nothing else is affected by a particular fix. How this is going to be carried out must be elaborated in this section. In some companies, whenever there is a fix in one unit, all unit test cases for that unit are repeated to achieve a higher level of quality.

18.1.6 Test Groups

From the list of requirements, we can identify related areas whose functionality is similar. These areas are the test groups. For example, in a railway reservation system, anything related to ticket booking is a functional group; anything related to report generation is a functional group. In the same way, we have to identify the test groups based on the functionality aspect.

18.1.7 Test Priorities

Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Some other test cases may be treated as cosmetic and, if they fail, we can release the product without much compromise on the functionality. These priority levels must be clearly stated. They may be mapped to the test groups as well.

18.1.8 Test Status Collections and Reporting

When test cases are executed, the test leader and the project manager must know where exactly the project stands in terms of testing activities. To know this, the inputs from the individual testers must reach the test leader. These include which test cases were executed, how long the execution took, how many test cases passed, how many failed, etc. How often the status is collected must also be stated clearly; some companies have a practice of collecting the status on a daily basis, others weekly.

18.1.9 Test Records Maintenance

When the test cases are executed, we need to keep track of execution details such as when a case was executed, who executed it, how long it took and what the result was. This data must be available to the test leader and the project manager, along with all the team members, in a central location. It may be stored in a specific directory on a central server, and the document must clearly state the locations and directories. The naming convention for the documents and files must also be mentioned.

18.1.10 Requirements Traceability Matrix

Ideally, each software product developed must satisfy its set of requirements completely. So, right from design, each requirement must be addressed in every single document in the software process. The documents include the HLD, LLD, source code, unit test cases, integration test cases and system test cases. Refer to the following sample table, which describes the Requirements Traceability Matrix. In this matrix, the rows hold the requirements, and for every document {HLD, LLD etc.} there is a separate column. In each cell, we state which section of that document addresses the particular requirement. Ideally, if every requirement is addressed in every document, all the individual cells will have valid section ids or names filled in, and we know that every requirement is covered. If a requirement is missing from a document, we need to go back to that document and correct it so that it addresses the requirement.

For testing at each level, we may have to address the requirements. One integration or system test case may address multiple requirements.

| Requirement   | DTP Scenario No | DTC Id  | Code      | LLD Section |
|---------------|-----------------|---------|-----------|-------------|
| Requirement 1 | 1,2,3,4         | +ve/-ve |           |             |
| Requirement 2 | 1,2,3,4         | +ve/-ve |           |             |
| Requirement 3 | 1,2,3,4         | +ve/-ve |           |             |
| Requirement 4 | 1,2,3,4         | +ve/-ve |           |             |
| Requirement N | 1,2,3,4         | +ve/-ve |           |             |
| Filled in by  | TESTER          | TESTER  | DEVELOPER | TEST LEAD   |
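The completeness check described above (every cell must have a valid section id) can be sketched as a small script. The matrix representation and the requirement/document names are illustrative assumptions.

```python
# Documents in which every requirement must be addressed.
DOCUMENTS = ["HLD", "LLD", "Unit TC", "System TC"]

# requirement -> {document: section id}; an empty cell means "not addressed".
rtm = {
    "REQ-1": {"HLD": "3.1", "LLD": "4.2", "Unit TC": "UTC-7", "System TC": "STC-2"},
    "REQ-2": {"HLD": "3.2", "LLD": "", "Unit TC": "UTC-9", "System TC": "STC-3"},
}

def missing_cells(matrix):
    """Return (requirement, document) pairs whose cell is empty or absent."""
    gaps = []
    for req, cells in matrix.items():
        for doc in DOCUMENTS:
            if not cells.get(doc):
                gaps.append((req, doc))
    return gaps
```

Here `missing_cells(rtm)` reports that REQ-2 is not yet addressed in the LLD, which is exactly the "go back to the document and correct it" trigger the text describes.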

18.1.11 Test Summary

The senior management may like to have a test summary on a weekly or monthly basis; if the project is very critical, they may need it on a daily basis. This section must state what kinds of test summary reports will be produced for senior management, along with their frequency.

The test strategy must give a clear vision of what the testing team will do for the whole project, for its entire duration. This document may also be presented to the client, if needed. The person who prepares this document must be functionally strong in the product domain and very experienced, as this is the document that is going to drive the entire team's testing activities. The test strategy must be clearly explained to the testing team members right at the beginning of the project.

18.2 Test Plan

The test strategy identifies the multiple test levels to be performed for the project. Activities at each level must be planned well in advance and formally documented. The individual test levels are carried out based on these individual plans.

The plans are to be prepared by experienced people only. In all test plans, the ETVX {Entry-Task-Validation-Exit} criteria are to be mentioned. Entry means the entry criteria for that phase; for example, for unit testing, coding must be complete before unit testing can start. Task is the activity that is performed. Validation is the way in which progress, correctness and compliance are verified for that phase. Exit states the completion criteria of the phase, after the validation is done; for example, the exit criterion for unit testing is that all unit test cases must pass.

ETVX is a modeling technique for developing both world-level and atomic-level models. It stands for Entry, Task, Verification and Exit. It is a task-based model where the details of each task are explicitly defined in a specification table against each element, i.e. Entry, Exit, Task, Feedback In, Feedback Out, and measures.

There are two types of cells: unit cells and implementation cells. Implementation cells are unit cells that contain further tasks.

For example, if there is a task of size estimation, there will be a unit cell for size estimation. Since this task has further sub-tasks, namely define measures and estimate size, the unit cell containing these sub-tasks is referred to as an implementation cell, and a separate table is constructed for it.

The purpose is also stated, and the intended viewer of the model may be defined, e.g. top management or the customer.
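The ETVX idea above can be sketched as plain data plus two checks: a phase may only be started when its entry criteria hold, and only closed when its exit criteria hold. The criteria strings are illustrative, following the unit-testing example in the text.

```python
# ETVX criteria for the unit test phase (illustrative wording).
unit_test_phase = {
    "entry": ["coding complete"],
    "task": ["execute all unit test cases"],
    "validation": ["review unit test results"],
    "exit": ["all unit test cases pass"],
}

def may_enter(phase, satisfied):
    """A phase may begin only when every entry criterion is satisfied."""
    return all(c in satisfied for c in phase["entry"])

def may_exit(phase, satisfied):
    """A phase may close only when every exit criterion is satisfied."""
    return all(c in satisfied for c in phase["exit"])
```

With coding complete we may enter unit testing, but we cannot exit it until all unit test cases pass, matching the entry/exit examples given above.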

18.2.1 Unit Test Plan {UTP}

The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections.

18.2.1.1 What is to be tested?

The unit test plan must clearly specify the scope of unit testing. Normally, the basic input/output of the units along with their basic functionality is tested; the input units are tested for format, alignment, accuracy and totals. The UTP must clearly give the rules for the data types present in the system, their formats and their boundary conditions. This list may not be exhaustive, but it is better to have a complete list of these details.

18.2.1.2 Sequence of Testing

The sequence of test activities to be carried out in this phase is listed in this section: whether to execute positive or negative test cases first, whether to execute test cases based on priority or on test groups, etc. Positive test cases prove that the system does what it is supposed to do; negative test cases prove that the system does not do what it is not supposed to do. Testing of screens, files, databases etc. is to be given in proper sequence.

18.2.1.3 Basic Functionality of Units

This section describes how the independent functionality of each unit is tested, excluding any communication between the unit and other units; the interface part is out of scope at this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing.

· Unit Testing Tools

· Priority of Program units

· Naming convention for test cases

· Status reporting mechanism

· Regression test approach

· ETVX criteria

18.2.2 Integration Test Plan

The integration test plan is the overall plan for carrying out the activities in the integration test level, which contains the following sections.

18.2.2.1 What is to be tested?

This section clearly specifies which kinds of interfaces fall under the scope of testing; internal and external interfaces, with their requests and responses, are to be explained. This need not go deep into technical details, but the general approach to how the interfaces are triggered should be explained.

18.2.2.2 Sequence of Integration

When there are multiple modules in an application, the sequence in which they are to be integrated is specified in this section. Here the dependencies between the modules play a vital role. If unit B has to be executed, it may need data fed by unit A and unit X; in this case, units A and X have to be integrated first, and then, using that data, unit B is tested. This has to be stated for the whole set of units in the program. Done correctly, the testing activities slowly build up the product, unit by unit, and then integrate them.
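Deriving such a sequence from the unit dependencies is a topological sort. A sketch using the example above (unit B needs data from units A and X, so A and X must come before B); `graphlib` is in the Python standard library from 3.9 onward.

```python
from graphlib import TopologicalSorter

# Each unit maps to the set of units it depends on (data it consumes).
deps = {"B": {"A", "X"}}

# static_order() yields units so that every unit appears after its
# dependencies: here A and X (in either order), then B.
order = list(TopologicalSorter(deps).static_order())
```

For larger systems the same call flags circular dependencies (it raises `CycleError`), which would mean no valid integration sequence exists.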

18.2.2.3 List of Modules and Interface Functions

There may be any number of units in the application, but only the units that are going to communicate with each other are tested in this phase. If the units were designed to be mutually independent, interfaces would not come into the picture; but this is almost impossible in any real system, as units have to communicate with other units in order to execute different functionalities. In this section, we need to list the units and mention for what purpose each talks to the others. This need not go into technical aspects; it has to be explained at a higher level, in plain English.

Apart from the above sections, the following sections are addressed, very specific to integration testing.

· Integration Testing Tools

· Priority of Program interfaces

· Naming convention for test cases

· Status reporting mechanism

· Regression test approach

· ETVX criteria

· Build/Refresh criteria {When multiple programs or objects are linked to arrive at a single product, and one unit has modifications, the entire product may need to be rebuilt and then loaded into the integration test environment. When and how often the product is rebuilt and refreshed is to be mentioned.}

18.2.3 System Test Plan {STP}

The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, some special testing activities are carried out, such as stress testing. The following sections are normally present in a system test plan.

18.2.3.1 What is to be tested?

This section defines the scope of system testing, very specific to the project. Normally, system testing is based on the requirements, and all requirements are to be verified within its scope. This covers the functionality of the product. Apart from this, any special testing performed is also stated here.

18.2.3.2 Functional Groups and the Sequence

The requirements can be grouped by functionality, and there may be priorities among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area and anything related to inter-branch transactions into another. In the same way, the areas of the product being tested are mentioned here, and the suggested sequence of testing these areas, based on their priorities, is described.

18.2.3.3 Special Testing Methods

This covers special tests like load/volume testing, stress testing, interoperability testing etc. These tests are performed based on the nature of the product; it is not mandatory that every one of these special tests be performed for every product.

Apart from the above sections, the following sections are addressed, very specific to system testing.

· System Testing Tools

· Priority of functional groups

· Naming convention for test cases

· Status reporting mechanism

· Regression test approach

· ETVX criteria

· Build/Refresh criteria

18.2.4 Acceptance Test Plan {ATP}

The client performs the acceptance testing at their site. It will be very similar to the system test performed by the software development unit. Since the client decides the format and testing methods of acceptance testing, there is no specific rule on how they will carry out the testing, but it will not differ much from system testing; assume that all the rules that apply to the system test also apply to acceptance testing.

Since this is the one level of testing done by the client for the overall product, it may include test cases covering unit and integration level details as well.

A sample test plan outline, along with a description of each item, is shown below:

Test Plan Outline

  1. BACKGROUND – This item summarizes the functions of the application system and the tests to be performed.
  2. INTRODUCTION
  3. ASSUMPTIONS – Indicates any anticipated assumptions which will be made while testing the application.
  4. TEST ITEMS - List each of the items (programs) to be tested.
  5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which will be tested or demonstrated by the test.
  6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement which won't be tested and why not.
  7. APPROACH - Describe the data flows and test philosophy.
    Simulation or Live execution, Etc. This section also mentions all the approaches which will be followed at the various stages of the test execution.
  8. ITEM PASS/FAIL CRITERIA - A blanket statement, or an itemized list of expected outputs and tolerances.
  9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion?
    Under what circumstances may it be resumed in the middle?
    Establish check-points in long tests.
  10. TEST DELIVERABLES - What, besides software, will be delivered?
    Test report
    Test software
  11. TESTING TASKS Functional tasks (e.g., equipment set up)
    Administrative tasks
  12. ENVIRONMENTAL NEEDS
    Security clearance
    Office space & equipment
    Hardware/software requirements
  13. RESPONSIBILITIES
    Who does the tasks in Section 10?
    What does the user do?
  14. STAFFING & TRAINING
  15. SCHEDULE
  16. RESOURCES
  17. RISKS & CONTINGENCIES
  18. APPROVALS

The schedule details of the various test passes, such as unit tests, integration tests and system tests, should be clearly mentioned along with the estimated efforts.

18.3 Test Case Documents

Designing good test cases is a complex art. The complexity comes from three sources:

§ Test cases help us discover information. Different types of tests are more effective for different classes of information.

§ Test cases can be “good” in a variety of ways. No test case will be good in all of them.

§ People tend to create test cases according to certain testing styles, such as domain testing or risk-based testing. Good domain tests are different from good risk-based tests.

What’s a test case?

“A test case specifies the pretest state of the IUT and its environment, the test inputs or conditions, and the expected result. The expected result specifies what the IUT should produce from the test inputs. This specification includes messages generated by the IUT, exceptions, returned values, and resultant state of the IUT and its environment. Test cases may also specify initial and resulting conditions for other objects that constitute the IUT and its environment.”

What’s a scenario?

A scenario is a hypothetical story, used to help a person think through a complex problem or system.

Characteristics of Good Scenarios

A scenario test has five key characteristics. It is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate.

The primary objective of test case design is to derive a set of tests that have the highest likelihood of discovering defects in the software. Test cases are designed based on the analysis of requirements, use cases, and technical specifications, and they should be developed in parallel with the software development effort.

A test case describes a set of actions to be performed and the results that are expected. A test case should target specific functionality or aim to exercise a valid path through a use case. This should include invalid user actions and illegal inputs that are not necessarily listed in the use case. How a test case is described depends on several factors, e.g. the number of test cases, the frequency with which they change, the level of automation employed, the skill of the testers, the selected testing methodology, staff turnover, and risk.

The test cases will have a generic format as below.

Test case ID - The test case id must be unique across the application

Test case description - The test case description must be very brief.

Test prerequisite - The test prerequisite clearly describes what should be present in the system before the test can be executed.

Test Inputs - The test input is nothing but the test data that is prepared to be fed to the system.

Test steps - The test steps are the step-by-step instructions on how to carry out the test.

Expected Results - The expected results are the ones that say what the system must give as output or how the system must react based on the test steps.

Actual Results – The actual results record the output the system actually produced, or how it actually reacted, for the given inputs.

Pass/Fail - If the expected and actual results match, the test passes; otherwise it fails.
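The generic format above can be sketched as a small data structure, with Pass/Fail derived by comparing expected and actual results. The field values used in the example are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str        # unique across the application
    description: str    # very brief description
    prerequisite: str   # what must be present before execution
    inputs: dict        # test data fed to the system
    steps: list         # step-by-step instructions
    expected: str       # expected result
    actual: str = ""    # filled in after execution

    @property
    def status(self):
        """Pass if expected and actual match; Fail otherwise; Not Run if unexecuted."""
        if not self.actual:
            return "Not Run"
        return "Pass" if self.actual == self.expected else "Fail"
```

A tester fills in `actual` after executing the steps, and the Pass/Fail column falls out of the comparison rather than being recorded by hand.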

The test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the system accepts valid inputs and processes them correctly; suitable techniques for designing them are specification-derived tests, equivalence partitioning and state-transition testing. Negative test cases are designed to prove that the system rejects invalid inputs and does not process them; suitable techniques are error guessing, boundary value analysis, internal boundary value testing and state-transition testing. The test case details must be specified very clearly, so that a new person can go through the test cases step by step and execute them. Test cases are explained with specific examples in the following section.
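Boundary value analysis, one of the techniques named above, can be sketched in a few lines: for a field that accepts values in the range [lo, hi], test on, just inside and just outside each boundary. The split into positive/negative candidates mirrors the classification in the text.

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis candidates for an inclusive range."""
    return {
        "positive": [lo, lo + 1, hi - 1, hi],  # should be accepted
        "negative": [lo - 1, hi + 1],          # should be rejected
    }

# For example, a field limited to lengths 1..6 (such as a 6-character
# username) yields lengths 1, 2, 5, 6 as positive cases and 0, 7 as negative.
bva = boundary_values(1, 6)
```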

For example, consider an online shopping application. At the user interface level, the client requests the web server to display the product details by giving an Email id and Username. The web server processes the request and gives the response. For this application, we will design the unit, integration and system test cases.


Figure 6: Web based application

Unit Test Cases (UTC)

These are very specific to a particular unit. The basic functionality of the unit is to be understood from the requirements and the design documents. Generally, the design document provides a lot of information about the functionality of a unit; it has to be referred to before the UTC is written, because it specifies how the system must behave for given inputs.

For example, in the online shopping application, assume the design document says that if the user enters valid Email id and Username values, the system must display the product details and insert the Email id and Username into a database table; if the user enters invalid values, the system must display an appropriate error message and must not store them in the database.

Figure 7: Snapshot of Login Screen

Test Conditions for the fields in the Login screen

Email - It should be in this format (e.g. clickme@yahoo.com).

Username – It should accept only alphabetic characters, and no more than 6 of them. Numerals and special characters are not allowed.

Test Prerequisite: The user should have access to the Customer Login screen.
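The field conditions above can be sketched as validators. The email pattern is an assumption of what "valid format" means (a name@domain.tld shape); the username rule follows the text: alphabetic only, at most 6 characters.

```python
import re

# Assumed interpretation of a "valid" email: local part, @, domain, dot, TLD.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def valid_email(email):
    return bool(EMAIL_RE.match(email))

def valid_username(name):
    # Alphabetic characters only, no more than 6 of them.
    return name.isalpha() and len(name) <= 6
```

These validators reproduce the verdicts in the test case tables that follow: `keerthi@rediffmail` (no top-level domain) and `john26#rediffmail.com` (no `@`) are rejected, `Mark24` fails the username rule, while `dave` and the well-formed addresses pass.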

Negative Test Cases

Project Name: Online shopping
Version: 1.1
Module: Catalog

| Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail |
|--------|-------------|-------------|------------------|----------------|-----------|
| 1 | Check for inputting values in Email field | Email=keerthi@rediffmail, Username=Xavier | Inputs should not be accepted. It should display the message "Enter valid Email". | | |
| 2 | Check for inputting values in Email field | Email=john26#rediffmail.com, Username=John | Inputs should not be accepted. It should display the message "Enter valid Email". | | |
| 3 | Check for inputting values in Username field | Email=shilpa@yahoo.com, Username=Mark24 | Inputs should not be accepted. It should display the message "Enter correct Username". | | |

Positive Test Cases

| Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail |
|--------|-------------|-------------|------------------|----------------|-----------|
| 1 | Check for inputting values in Email field | Email=shan@yahoo.com, Username=dave | Inputs should be accepted. | | |
| 2 | Check for inputting values in Email field | Email=knki@rediffmail.com, Username=john | Inputs should be accepted. | | |
| 3 | Check for inputting values in Username field | Email=xav@yahoo.com, Username=mark | Inputs should be accepted. | | |

Integration Test Cases

Before designing the integration test cases, the testers should go through the integration test plan, which gives a complete idea of how to write integration test cases. The main aim of integration test cases is to test multiple modules together; by executing them, the tester can find errors in the interfaces between the modules.

For example, in online shopping there are Catalog and Administration modules. In the Catalog module the customer can browse the list of products and buy them online; in the Administration module the admin can enter the product name and related information.

Table 3: Integration Test Cases

| Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail |
|--------|-------------|-------------|------------------|----------------|-----------|
| 1 | Check for Login screen | Enter values in Email and Username fields, e.g. Email=shilpa@yahoo.com, Username=shilpa | Inputs should be accepted. Backend verification: running "Select email, username from Cus;" should display the entered Email and Username at the SQL prompt. | | |
| 2 | Check for Product Information | Click the product information link | It should display the complete details of the product. | | |
| 3 | Check for Admin screen | Enter values in Product Id and Product name fields, e.g. Product Id=245, Product name=Norton Antivirus | Inputs should be accepted. Backend verification: running "Select pid, pname from Product;" should display the entered Product Id and Product name at the SQL prompt. | | |

NOTE: The tester has to execute the above unit and integration test cases after coding, and fill in the Actual Results and Pass/Fail columns. If a test case fails, a defect report should be prepared.

System Test Cases

The system test cases are meant to test the system as per the requirements, end-to-end. This is basically to make sure that the application works as per the SRS. In system test cases (generally in system testing itself), the testers are supposed to act as end users. So system test cases normally concentrate on the functionality of the system; inputs are fed through the system, and each and every check is performed using the system itself. Verifications done by checking database tables directly or by running programs manually are normally not encouraged in the system test.

The system test must focus on functional groups, rather than on individual program units. When it comes to system testing, it is assumed that the interfaces between the modules are working fine (integration has passed).

Ideally, the test cases are nothing but a union of the functionalities tested in unit testing and integration testing, except that instead of testing the system's inputs and outputs through the database or external programs, everything is tested through the system itself. For example, in an online shopping application, the Catalog and Administration screens (program units) would have been independently unit tested and the results verified through the database; in system testing, the tester acts as an end user and checks the application through its output.

There are occasions where some or many of the integration and unit test cases are repeated in system testing, especially when units were earlier tested with test stubs rather than with the real modules; during system testing those cases are performed again with real modules and data.

19. Defect Management

Defects determine the effectiveness of the testing we do; if no defects were ever found, testers would have no job. There are two points worth considering here: either the developers are so strong that no defects arise, or the test engineer is weak. In many situations, the second proves correct, implying that the testers lack the knack for finding defects. In this section, let us understand defects.

19.1 What is a Defect?

For a test engineer, a defect is any of the following:

  • Any deviation from specification
  • Anything that causes user dissatisfaction
  • Incorrect output
  • Software does not do what it is intended to do.

Bug / Defect / Error:

  • Software is said to have a bug if its features deviate from the specification.
  • Software is said to have a defect if it has unwanted side effects.
  • Software is said to have an error if it gives incorrect output.

But for a test engineer, all of these are the same; the distinctions above are only for documentation or indicative purposes.

19.2 Defect Taxonomies

Categories of Defects:
All software defects can be broadly categorized into the following types:

Errors of commission: something wrong is done

Errors of omission: something left out by accident

Errors of clarity and ambiguity: different interpretations

Errors of speed and capacity

However, the above is a broad categorization; below is a list of the varied types of defects that can be identified in different software applications:

  1. Conceptual bugs / Design bugs
  2. Coding bugs
  3. Integration bugs
  4. User Interface Errors
  5. Functionality
  6. Communication
  7. Command Structure
  8. Missing Commands
  9. Performance
  10. Output
  11. Error Handling Errors
  12. Boundary-Related Errors
  13. Calculation Errors
  14. Initial and Later States
  15. Control Flow Errors
  16. Errors in Handling Data
  17. Race Conditions Errors
  18. Load Conditions Errors
  19. Hardware Errors
  20. Source and Version Control Errors
  21. Documentation Errors
  22. Testing Errors

19.3 Life Cycle of a Defect

The following self-explanatory figure explains the life cycle of a defect:
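A typical defect life cycle can be sketched as a small state machine. The states and transitions below are a common convention (New, Assigned, Fixed, Retest, then Closed or Reopened), not taken verbatim from the figure.

```python
# Legal transitions between defect states (assumed typical life cycle).
TRANSITIONS = {
    "New": {"Assigned", "Rejected"},
    "Assigned": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Closed", "Reopened"},
    "Reopened": {"Assigned"},
}

def advance(state, nxt):
    """Move a defect to the next state, refusing illegal transitions."""
    if nxt not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt
```

Encoding the cycle this way lets a defect tracker refuse shortcuts such as closing a defect that was never retested.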

20. Metrics for Testing

What is a Metric?

‘Metric’ is a measure used to quantify software, software development resources, and/or the software development process. A metric can quantify any of the following factors:

· Schedule,

· Work Effort,

· Product Size,

· Project Status, and

· Quality Performance

Measuring enables….

Metrics enable estimation of future work.

That is, considering the case of testing: deciding whether the product is fit for shipment or delivery depends on the rate at which defects are found and fixed. Defects collected and fixed are one kind of metric. (www.processimpact.com)

As defined in the MISRA Report,

It is beneficial to classify metrics according to their usage. IEEE 928.1 [4] identifies two classes:

i) Process – Activities performed in the production of the Software

ii) Product – An output of the Process, for example the software or its documentation.

Defects are analyzed to identify the major causes of defects and the phases that introduce the most defects. This can be achieved by performing Pareto analysis of defect causes and defect introduction phases. The main requirement for any of these analyses is software defect metrics.

A few of the defect metrics are:

Defect Density: (No. of defects reported by SQA + No. of defects reported by peer review) / Actual size.

The size can be in KLOC, SLOC, or function points, depending on the method used in the organization to measure the size of the software product.

SQA is considered to be part of the software testing team.

Test Effectiveness: t / (t + Uat), where t = total no. of defects reported during testing and Uat = total no. of defects reported during user acceptance testing.

User Acceptance Testing is generally carried out using the Acceptance Test Criteria according to the Acceptance Test Plan.

Defect Removal Efficiency:

(Total No Of Defects Removed /Total No. Of Defects Injected)*100 at various stages of SDLC

Description

This metric will indicate the effectiveness of the defect identification and removal in stages for a given project

Formula

· Requirements: DRE = [(Requirement defects corrected during Requirements phase) / (Requirement defects injected during Requirements phase)] * 100

· Design: DRE = [(Design defects corrected during Design phase) / (Defects identified during Requirements phase + Defects injected during Design phase)] * 100

· Code: DRE = [(Code defects corrected during Coding phase) / (Defects identified during Requirements phase + Defects identified during Design phase + Defects injected during coding phase)] * 100

· Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] * 100

Metric Representation

Percentage

Calculated at

Stage completion or Project Completion

Calculated from

Bug Reports and Peer Review Reports
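All four DRE formulas above share one shape: defects corrected in a phase over the defects available at that phase (those injected there plus any that escaped earlier phases), times 100. A sketch with illustrative numbers:

```python
def dre(corrected, available):
    """Defect Removal Efficiency as a percentage."""
    return corrected / available * 100

# Design-phase example: 5 defects carried over from requirements plus 20
# injected during design give 25 available; 20 of them corrected in design.
design_dre = dre(20, 5 + 20)
```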

Defect Distribution: Percentage of Total defects Distributed across Requirements Analysis, Design Reviews, Code Reviews, Unit Tests, Integration Tests, System Tests, User Acceptance Tests, Review by Project Leads and Project Managers.

Software Process Metrics are measures which provide information about the performance of the development process itself.

Purpose:

1. Provide an indicator of the ultimate quality of the software being produced

2. Assist the organization in improving its development process by highlighting areas of inefficiency or error-prone areas of the process

Software Product Metrics are measures of some attribute of the Software Product. (Example, Source Code).

Purpose:

1. Used to assess the quality of the output

What are the most general metrics?

Requirements Management

Metrics Collected

1. Requirements by state – Accepted, Rejected, Postponed

2. No. of baselined requirements

3. Number of requirements modified after base lining

Derived Metrics

1. Requirements Stability Index (RSI)

2. Requirements to Design Traceability

Project Management

| Metrics Collected                       | Derived Metric    |
|-----------------------------------------|-------------------|
| Planned no. of days; Actual no. of days | Schedule Variance |
| Estimated effort; Actual effort         | Effort Variance   |
| Estimated cost; Actual cost             | Cost Variance     |
| Estimated size; Actual size             | Size Variance     |

Testing & Review

Metrics Collected

1. No. of defects found by Reviews

2. No. of defects found by Testing

3. No. of defects found by Client

4. Total No. of defects found by Reviews

Derived Metrics

1. Overall Review Effectiveness (ORE)

2. Overall Test Effectiveness

Peer Reviews

Metrics Collected

1. KLOC / FP per person hour (Language) for Preparation

2. KLOC / FP per person hour (Language) for Review Meeting

3. No. of pages / hour reviewed during preparation

4. Average number of defects found by Reviewer during Preparation

5. No. of pages / hour reviewed during Review Meeting

6. Average number of defects found by Reviewer during Review Meeting

7. Review Team Size Vs Defects

8. Review speed Vs Defects

9. Major defects found during Review Meeting

10. Defects Vs Review Effort

Derived Metrics

1. Review Effectiveness (Major)

2. Total number of defects found by reviews for a project

Other Metrics

Metrics Collected

1. No. of Requirements Designed

2. No. of Requirements not Designed

3. No. of Design elements matching Requirements

4. No. of Design elements not matching Requirements

5. No. of Requirements Tested

6. No. of Requirements not Tested

7. No. of Test Cases with matching Requirements

8. No. of Test Cases without matching Requirements

9. No. of Defects by Severity

10. No. of Defects by stage of - Origin, Detection, Removal

Derived Metrics

1. Defect Density

2. No. of Requirements Designed Vs not Designed

3. No. of Requirements Tested Vs not Tested

4. Defect Removal Efficiency (DRE)

Some Metrics Explained

Schedule Variance (SV)

Description

This metric gives the variation of the actual schedule vs. the planned schedule. It is calculated stage-wise for each project

Formula

SV = [(Actual no. of days – Planned no. of days) / Planned no. of days] * 100

Metric Representation

Percentage

Calculated at

Stage completion

Calculated from

Software Project Plan for planned number of days for completing each stage and for actual number of days taken to complete each stage
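The variance metrics in this chapter (schedule, effort, cost, and size) all share the same shape, so a single helper covers them. This is an illustrative sketch; the function name and the numbers are hypothetical, not from the source:

```python
def variance_pct(actual: float, planned: float) -> float:
    """Generic variance metric: ((actual - planned) / planned) * 100."""
    if planned == 0:
        raise ValueError("planned/estimated value must be non-zero")
    return (actual - planned) * 100.0 / planned

# Schedule Variance: a stage planned for 20 days that actually took 23
sv = variance_pct(actual=23, planned=20)    # 15.0 -> the stage slipped by 15%

# Effort Variance: 400 person hours estimated, 360 actually spent
ev = variance_pct(actual=360, planned=400)  # -10.0 -> 10% under the estimate
```

A positive result means an overrun against the plan or estimate; a negative result means the stage came in under it.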

Defect Removal Efficiency (DRE)

Description

This metric indicates the effectiveness of defect identification and removal in each stage of a given project

Formula

· Requirements: DRE = [(Requirement defects corrected during Requirements phase) / (Requirement defects injected during Requirements phase)] * 100

· Design: DRE = [(Design defects corrected during Design phase) / (Defects identified during Requirements phase + Defects injected during Design phase)] * 100

· Code: DRE = [(Code defects corrected during Coding phase) / (Defects identified during Requirements phase + Defects identified during Design phase + Defects injected during coding phase)] * 100

· Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] * 100

Metric Representation

Percentage

Calculated at

Stage completion or Project Completion

Calculated from

Bug Reports and Peer Review Reports
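The Overall DRE formula above can be sketched as a small function; the name and the defect counts below are illustrative, not from the source:

```python
def overall_dre(pre_delivery: int, post_delivery: int) -> float:
    """Overall DRE: share of all known defects corrected before delivery."""
    total = pre_delivery + post_delivery
    if total == 0:
        raise ValueError("no defects recorded")
    return pre_delivery * 100.0 / total

# 90 defects corrected before delivery, 10 more reported by the client afterwards
overall_dre(90, 10)  # 90.0 -> the process caught 90% of known defects pre-delivery
```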

Overall Review Effectiveness

Description

This metric will indicate the effectiveness of the Review process in identifying the defects for a given project

Formula

· Overall Review Effectiveness: ORE = [(Number of defects found by reviews) / (Number of defects found by reviews + Number of defects found during Testing + Number of defects found during post-delivery)] * 100

Metric Representation

· Percentage

Calculated at

· Monthly

· Stage completion or Project Completion

Calculated from

· Peer reviews, Formal Reviews

· Test Reports

· Customer Identified Defects

Overall Test Effectiveness (OTE)

Description

This metric will indicate the effectiveness of the Testing process in identifying the defects for a given project during the testing stage

Formula

· Overall Test Effectiveness: OTE = [(Number of defects found during Testing) / (Number of defects found during Testing + Number of defects found during post-delivery)] * 100

Metric Representation

· Percentage

Calculated at

· Monthly

· Build completion or Project Completion

Calculated from

· Test Reports

· Customer Identified Defects
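The ORE and OTE formulas differ only in their denominators, which a short sketch makes plain (function names and counts are illustrative):

```python
def ore(review_defects: int, test_defects: int, post_delivery_defects: int) -> float:
    """Overall Review Effectiveness: reviews' share of all defects found."""
    total = review_defects + test_defects + post_delivery_defects
    return review_defects * 100.0 / total

def ote(test_defects: int, post_delivery_defects: int) -> float:
    """Overall Test Effectiveness: testing's share of defects that escaped review."""
    total = test_defects + post_delivery_defects
    return test_defects * 100.0 / total

# 60 defects found in reviews, 35 in testing, 5 by the customer
ore(60, 35, 5)  # 60.0 -> reviews caught 60% of all defects
ote(35, 5)      # 87.5 -> testing caught 87.5% of what reviews missed
```

Note that OTE deliberately excludes review defects from its denominator: it judges testing only against the defects that were still present when testing began.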

Effort Variance (EV)

Description

This metric gives the variation of the actual effort vs. the estimated effort. It is calculated stage-wise for each project

Formula

· EV = [(Actual person hours – Estimated person hours) / Estimated person hours] * 100

Metric Representation

· Percentage

Calculated at

· Stage completion as identified in SPP

Calculated from

· Estimation sheets for estimated values in person hours, for each activity within a given stage and Actual Worked Hours values in person hours.

Cost Variance (CV)

Description

This metric gives the variation of the actual cost vs. the estimated cost. It is calculated stage-wise for each project

Formula

· CV = [(Actual Cost – Estimated Cost) / Estimated Cost] * 100

Metric Representation

· Percentage

Calculated at

· Stage completion

Calculated from

· Estimation sheets for estimated values in dollars or rupees, for each activity within a given stage

· Actual cost incurred

Size Variance

Description

This metric gives the variation of the actual size vs. the estimated size. It is calculated stage-wise for each project

Formula

· Size Variance = [(Actual Size – Estimated Size) / Estimated Size] * 100

Metric Representation

· Percentage

Calculated at

· Stage completion

· Project Completion

Calculated from

· Estimation sheets for estimated values in Function Points or KLOC

· Actual size

Productivity on Review Preparation – Technical

Description

This metric indicates the effort spent on preparation for review. Calculate it separately for each language used in the project

Formula

For every language used in the project (such as C, C++, Java, XSL, etc.), calculate:

· (KLOC or FP) per hour, for that language

Metric Representation

· KLOC or FP per hour

Calculated at

· Monthly

· Build completion

Calculated from

· Peer Review Report

Number of defects found per Review Meeting

Description

This metric will indicate the number of defects found during the Review Meeting across various stages of the Project

Formula

· Number of defects per Review Meeting

Metric Representation

· Defects / Review Meeting

Calculated at

· Monthly

· Completion of Review

Calculated from

· Peer Review Report

· Peer Review Defect List

Review Team Efficiency (Review Team Size Vs Defects Trend)

Description

This metric will indicate the Review Team size and the defects trend. This will help to determine the efficiency of the Review Team

Formula

· Review Team Size to the Defects trend

Metric Representation

· Ratio

Calculated at

· Monthly

· Completion of Review

Calculated from

· Peer Review Report

· Peer Review Defect List

Review Effectiveness

Description

This metric will indicate the effectiveness of the Review process

Formula

Review Effectiveness = [(Number of defects found by reviews) / (Number of defects found by reviews + Number of defects found by testing)] * 100

Metric Representation

· Percentage

Calculated at

· Completion of Review or Completion of Testing stage

Calculated from

· Peer Review Report

· Peer Review Defect List

· Bugs Reported by Testing

Total number of defects found by Reviews

Description

This metric will indicate the total number of defects identified by the Review process. The defects are further categorized as High, Medium or Low

Formula

Total number of defects identified by reviews in the Project

Metric Representation

· Defects per Stage

Calculated at

· Completion of Reviews

Calculated from

· Peer Review Report

· Peer Review Defect List

Defects vs. Review effort – Review Yield

Description

This metric relates the review effort expended in each stage to the defects found in that stage

Formula

· Defects / Review effort

Metric Representation

· Defects / Review effort

Calculated at

· Completion of Reviews

Calculated from

· Peer Review Report

· Peer Review Defect List

Requirements Stability Index (RSI)

Description

This metric gives the stability factor of the requirements over a period of time, after the requirements have been mutually agreed upon and baselined between Ivesia Solutions and the Client

Formula

· RSI = 100 * [ (Number of baselined requirements) – (Number of changes in requirements after the requirements are baselined) ] / (Number of baselined requirements)

Metric Representation

· Percentage

Calculated at

· Stage completion and Project completion

Calculated from

· Change Request

· Software Requirements Specification
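The RSI formula above can be sketched directly; the function name and counts below are illustrative:

```python
def rsi(baselined: int, changed_after_baseline: int) -> float:
    """Requirements Stability Index as a percentage of baselined requirements."""
    if baselined == 0:
        raise ValueError("no baselined requirements")
    return 100.0 * (baselined - changed_after_baseline) / baselined

# 200 baselined requirements, 30 changed after baselining
rsi(200, 30)  # 85.0 -> the requirements are 85% stable
```

An RSI of 100% means no baselined requirement changed; a falling RSI over successive stages signals requirements churn.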

Change Requests by State

Description

This metric provides an analysis of the state of the requirements

Formula

· Number of accepted requirements

· Number of rejected requirements

· Number of postponed requirements

Metric Representation

· Number

Calculated at

· Stage completion

Calculated from

· Change Request

· Software Requirements Specification

Requirements to Design Traceability

Description

This metric provides an analysis of the number of requirements designed vs. the number of requirements not designed

Formula

· Total Number of Requirements

· Number of Requirements Designed

· Number of Requirements not Designed

Metric Representation

· Number

Calculated at

· Stage completion

Calculated from

· SRS

· Detail Design

Design to Requirements Traceability

Description

This metric provides an analysis of the number of design elements matching requirements vs. the number of design elements not matching requirements

Formula

· Number of Design elements

· Number of Design elements matching Requirements

· Number of Design elements not matching Requirements

Metric Representation

· Number

Calculated at

· Stage completion

Calculated from

· SRS

· Detail Design

Requirements to Test case Traceability

Description

This metric provides an analysis of the number of requirements tested vs. the number of requirements not tested

Formula

· Number of Requirements

· Number of Requirements Tested

· Number of Requirements not Tested

Metric Representation

· Number

Calculated at

· Stage completion

Calculated from

· SRS

· Detail Design

· Test Case Specification

Test cases to Requirements traceability

Description

This metric provides an analysis of the number of test cases matching requirements vs. the number of test cases not matching requirements

Formula

· Number of Requirements

· Number of Test cases with matching Requirements

· Number of Test cases not matching Requirements

Metric Representation

· Number

Calculated at

· Stage completion

Calculated from

· SRS

· Test Case Specification

Number of defects in coding found during testing by severity

Description

This metric provides an analysis of the number of coding defects found during testing, broken down by severity

Formula

· Number of Defects

· Number of defects of low severity

· Number of defects of medium severity

· Number of defects of high severity

Metric Representation

· Number

Calculated at

· Stage completion

Calculated from

· Bug Report

Defects – Stage of origin, detection, removal

Description

This metric provides an analysis of the number of defects by their stage of origin, detection, and removal.

Formula

· Number of Defects

· Stage of origin

· Stage of detection

· Stage of removal

Metric Representation

· Number

Calculated at

· Stage completion

Calculated from

· Bug Report

Defect Density

Description

This metric provides an analysis of the ratio of the number of defects to the size of the work product

Formula

Defect Density = [Total no. of Defects / Size (FP / KLOC)] * 100

Metric Representation

· Percentage

Calculated at

· Stage completion

Calculated from

· Defects List

· Bug Report
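The Defect Density formula above is a straight ratio scaled by 100; a minimal sketch with illustrative numbers:

```python
def defect_density(total_defects: int, size: float) -> float:
    """Defect Density per the formula above: (defects / size) * 100.

    `size` is in the project's chosen unit, Function Points or KLOC.
    """
    if size <= 0:
        raise ValueError("size must be positive")
    return total_defects * 100.0 / size

# 45 defects found in a 30-KLOC work product
defect_density(45, 30.0)  # 150.0 (i.e., 1.5 defects per KLOC, scaled by 100)
```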

How do you determine metrics for your application?

The objective of metrics is not only to measure, but also to understand progress toward the organizational goal.

The parameters for determining the metrics for an application:

· Duration

· Complexity

· Technology Constraints

· Previous Experience in Same Technology

· Business Domain

· Clarity of the scope of the project

One interesting and useful approach to arrive at the suitable metrics is using the Goal-Question-Metric Technique.

As evident from the name, the GQM model consists of three layers: a Goal, a set of Questions, and a set of corresponding Metrics. It is thus a hierarchical structure starting with a goal (specifying the purpose of measurement, the object to be measured, the issue to be measured, and the viewpoint from which the measure is taken). The goal is refined into several questions that usually break the issue down into its major components. Each question is then refined into metrics, some objective, some subjective. The same metric can be used to answer different questions under the same goal. Several GQM models can also have questions and metrics in common, making sure that, when the measure is actually taken, the different viewpoints are taken into account correctly (i.e., the metric might have different values when taken from different viewpoints).

An example of the application of the model:

Goal

Purpose – Improve; Issue – the timeliness of; Object – Change Request processing; Viewpoint – the Project Manager

Question

What is the current Change Request processing speed?

Metrics

· Average cycle time

· Standard deviation

· % cases outside of the upper limit

Question

Is the performance of the process improving?

Metrics

· (Current average cycle time / Baseline average cycle time) * 100

· Subjective rating of manager's satisfaction
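The GQM hierarchy described above maps naturally onto a small data structure. A minimal sketch; the class and field names are my own, not from the source:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Question:
    text: str
    metrics: List[str]  # names of the metrics that answer this question

@dataclass
class Goal:
    purpose: str
    issue: str
    object_: str   # trailing underscore avoids shadowing the builtin `object`
    viewpoint: str
    questions: List[Question]

# The Change Request example from the text, encoded as a GQM tree
goal = Goal(
    purpose="Improve",
    issue="the timeliness of",
    object_="Change Request processing",
    viewpoint="the Project Manager",
    questions=[
        Question("What is the current Change Request processing speed?",
                 ["Average cycle time", "Standard deviation",
                  "% cases outside of the upper limit"]),
        Question("Is the performance of the process improving?",
                 ["(Current / Baseline average cycle time) * 100",
                  "Subjective rating of manager's satisfaction"]),
    ],
)
```

Encoding the model this way makes the hierarchy explicit: one goal fans out into questions, and each question fans out into the metrics that answer it.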

When do you determine Metrics?

Determine the metrics once the requirements are understood at a high level. At that stage, the team size and project size are known well enough that the project is at a "defined" stage.

References

· Effective Methods for Software Testing, William E. Perry.

· Software Engineering – A Practitioner's Approach, Roger S. Pressman.

· An API Testing Method, Alan A. Jorgensen and James A. Whittaker.

· API Testing Methodology, Anoop Kumar P, Novell Software Development (I) Pvt Ltd., Bangalore.

· "Why Is API Testing Different", Nikhil Nilakantan, Hewlett-Packard, and Ibrahim K. El-Far, Florida Institute of Technology.

GNU Free Documentation License

Version 1.2, November 2002

Copyright (C) 2000,2001,2002 Free Software Foundation, Inc.

59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

Everyone is permitted to copy and distribute verbatim copies

of this license document, but changing it is not allowed.

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

· A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

· B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

· C. State on the Title page the name of the publisher of the Modified Version, as the publisher.

· D. Preserve all the copyright notices of the Document.

· E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

· F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

· G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.

· H. Include an unaltered copy of this License.

· I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

· J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

· K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

· L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

· M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.

· N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.

· O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements."

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
