Basic Test Case Concepts
A testcase is simply a test with formal steps and instructions; testcases are valuable because they are repeatable, reproducible under the same environment, and easy to improve with feedback. A testcase is the difference between saying that something seems to be working and proving that a specific set of tasks is known to work correctly.

Some tests are more straightforward than others. For example, say you need to verify that all the links in your web site work. There are several different approaches to checking this:

- you can read your HTML code to see that all the link code is correct
- you can run an HTML DTD validator to see that all of your HTML syntax is correct, which would imply that your links are correct
- you can use your browser (or even multiple browsers) to check every link manually
- you can use a link-checking program to check every link automatically
- you can use a site maintenance program that graphically displays the relationships between pages on your site, including good and bad links
- you could use all of these approaches to test for any possible failures or inconsistencies in the tests themselves

Verifying that your site's links are not broken is relatively unambiguous. You simply decide which one or more of these tests best suits your site structure, your test resources, and your need for granularity of results. You run the test, and you get your results showing any broken links.

Notice that you now have a list of broken links, not of incorrect links. If a link is syntactically valid but points at the wrong page, your link test won't catch the problem. The general point here is that you must understand what you are testing: a testcase is a series of explicit actions and examinations that identifies the "what".

A testcase for checking links might specify that each link is tested for functionality, appropriateness, usability, style, consistency, and so on. For example, a testcase for checking links on a typical page of a site might include these steps. Link Test: for each link on the page, verify that

- the link works (i.e., it is not broken)
- the link points at the correct page
- the link text effectively and unambiguously describes the target page
- the link follows the approved style guide for this web site (for example, closing punctuation is or is not included in the link text, as the style guide specifies)
- every instance of a link to the same target page is coded the same way

As you can see, this is a detailed test of many aspects of the link, with the result that on completion of the test you can say definitively what you know works. However, this is a simple example: testcases can run to hundreds of instructions, depending on the types of functionality being tested and the need for iterations of steps.

Defining Test and Testcase Parameters

A testcase should set up any special environment requirements the test may have, such as clearing the browser cache, enabling JavaScript support, or turning on warnings for the dropping of cookies. In addition to specific configuration instructions, testcases should also record browser types and versions, operating system, machine platform, and connection speed -- in short, the testcase should record any parameter that would affect the reproducibility of the results or could aid in troubleshooting any defects found by testing.
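The "not broken" check in the list above is the easiest part to automate. The sketch below shows roughly what such a link-checking program could look like in Python; it is only an illustration (the start URL is made up, and the third-party requests library is assumed to be installed), and the other checks in the link testcase, such as whether the link points at the correct page or whether its text describes the target, still need human judgment.

# link_check.py -- minimal sketch of an automated "links are not broken" check.
# Assumes the third-party `requests` library is installed; the start URL is illustrative.
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests


class LinkCollector(HTMLParser):
    """Collects href values from <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def check_links(page_url):
    """Return a list of (link, status) pairs for links that look broken."""
    page = requests.get(page_url, timeout=10)
    collector = LinkCollector()
    collector.feed(page.text)

    broken = []
    for href in collector.links:
        target = urljoin(page_url, href)          # resolve relative links
        try:
            response = requests.head(target, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                broken.append((target, response.status_code))
        except requests.RequestException as error:
            broken.append((target, str(error)))
    return broken


if __name__ == "__main__":
    for link, status in check_links("http://example.com/"):
        print(f"BROKEN: {link} ({status})")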
To state the environment requirement a little differently: specify which platforms this testcase should be run against, record which platforms it was actually run against, and, for any defect, report the exact environment in which the defect was found.

The required fields of a test case are as follows:

Test Case ID: a unique number given to the test case so that it can be identified.
Test Description: a description of what the test case is going to test.
Revision History: each test case needs a revision history so that you know when and by whom it was created or modified.
Function to be Tested: the name of the function to be tested.
Environment: the environment in which you are testing.
Test Setup: anything you need to set up outside of your application, for example printers, the network, and so on.
Test Execution: a detailed description of every step of execution.
Expected Results: a description of what you expect the function to do.
Actual Results: pass / fail. If the test passes, record what actually happened when you ran it; if it fails, record a description of what you observed.

Sample Testcase

Here is a simple test case for applying bold formatting to text.

Test Case ID: B 001
Test Description: verify B - bold formatting applied to the text
Revision History: 3/23/00 1.0 - Valerie - Created
Function to be Tested: B - bold formatting of the text
Environment: Win 98
Test Setup: N/A
Test Execution:
1. Open the program.
2. Open a new document.
3. Type any text.
4. Select the text to make bold.
5. Click Bold.
Expected Result: bold formatting is applied to the text
Actual Result: pass

Testcase Definition

Define testcases in the Definition pane of the Component Test perspective. This is also where you define the hosts on which the testcases will run. Once you define the testcase element in the Definition pane, its contents appear in the Outline pane. You can add elements to the testcase's main block, and once your definition is complete you can prepare it to run and create a testcase instance.

Testcase Stages

As you work with testcases in the Component Test perspective, they go through different stages, from definition to analysis. Each stage is generated from the previous one, but is otherwise unrelated: for example, although a testcase instance is generated from a testcase definition, changes to the definition will not affect the instance.

Creating Manual Testcases

Create manual testcases to guide a tester through the steps necessary to test a component or application. Once you have created a manual testcase, you can prepare it to run. To add a manual testcase to the Component Test perspective, follow these steps:
1. In the Definition pane, right-click Testcases and click New > Testcase.
2. In the New Testcase wizard, select the project you want to define the testcase in.
3. Name the project and click Next.
4. Select the Manual scheduler.
5. Click Finish to add the testcase to the Testcase folder under the selected project.
The contents of the testcase appear in the Outline pane. To start with, it contains a main block, which will organize all the other contents of the testcase.

Creating HTTP Testcases

Create HTTP testcases to run methods and queries against an HTTP server. You can define an HTTP testcase by importing an HTTP XML file that defines a set of interactions, or you can define it using the tasks below. Once you have defined the testcase, you can prepare it to run.

Creating Java Testcases

Create Java testcases to test static Java methods by calling them and verifying the results.
Once you have defined the testcase, you can generate an instance of it and edit the instance's code to provide the logic for evaluating each task and verification point. To add a Java testcase to the Component Test perspective:
1. In the Definition pane, right-click Testcases and click New > Testcase.
2. In the New Testcase wizard, select the project you want to define the testcase in.
3. Name the project and click Next.
4. Select the Java scheduler.
5. Click Finish to add the testcase to the Testcase folder under the selected project.
The contents of the testcase appear in the Outline pane. To start with, it contains a main block, which will organize all the other contents of the testcase.

Reusing Testcases

You can reuse existing testcase definitions when you define new ones. This lets you define testcases for common sequences (such as logging into an application) that you can then reuse in more complex compound testcases. To reuse a testcase:
1. Select the testcase you want to add the existing testcase to.
2. In the Outline pane, right-click the block you want to add the testcase to and click Add Testcase Definition Reference.
3. In the Add Testcase Definition Reference wizard, select the testcase you want to reuse.
4. Click Finish.
The reused testcase is incorporated by reference: its definition is still maintained separately, and the compound testcase definition will pick up changes to the testcases it reuses. However, when you create a testcase instance, the generated code for the referenced testcase definition is stored as part of the referencing testcase instance. In other words, reuse happens only at the definition level: at the instance level, each reusing testcase creates its own copy of the reused testcases.

Test Cases and Their Explanation

We will not supply you with test input for most of your assignments. Part of your job is to select input cases that show your program works correctly. You should select input from the following categories:

Normal Test Cases: inputs that would be considered "normal" or "average" for your program. For example, if your program computes square roots, you could try several positive numbers, both less than and greater than 1, including some perfect squares such as 16 and some numbers without rational square roots.

Boundary Test Cases: inputs that are legal, but on or near the boundary between legal and illegal values. For example, in a square root program, you should try 0 as a boundary case.

Exception Test Cases: inputs that are illegal. Your program may give an error message or it might crash. In a square root program, negative numbers would be exception test cases.

You must hand in the outputs (saved as files) of your test runs. In addition to handing in your actual test runs, give us a quick explanation of how you picked them. For example, if you write a program to compute square roots, you might say "my test input included zero, small and large positive numbers, perfect squares and numbers without a rational square root, and a negative number to demonstrate error handling". You may give this explanation in the separate README file, or alongside the test cases. You will be marked on how well the test cases you pick demonstrate that your program works correctly. If your program doesn't work correctly in all cases, please be honest about it. It is perfectly valid to have test cases that illustrate the circumstances in which your program does not yet work.
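To make the three categories concrete, here is a minimal sketch of such a hand-in for the square-root example; math.sqrt stands in for the program under test, and the exact assertions are only an illustration of one reasonable selection of inputs.

# test_sqrt.py -- sketch of normal, boundary, and exception test cases
# for a square-root program (math.sqrt stands in for the program under test).
import math
import unittest


class SqrtTests(unittest.TestCase):
    # Normal cases: "average" inputs, above and below 1, with and without rational roots.
    def test_perfect_square(self):
        self.assertEqual(math.sqrt(16), 4)

    def test_value_below_one(self):
        self.assertAlmostEqual(math.sqrt(0.25), 0.5)

    def test_irrational_root(self):
        self.assertAlmostEqual(math.sqrt(2) ** 2, 2, places=9)

    # Boundary case: legal input on the edge of the legal range.
    def test_zero(self):
        self.assertEqual(math.sqrt(0), 0)

    # Exception case: illegal input should be rejected, not silently accepted.
    def test_negative_input(self):
        with self.assertRaises(ValueError):
            math.sqrt(-4)


if __name__ == "__main__":
    unittest.main()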
If your program doesn't run at all, you can hand in a set of test cases with an explanation of how you picked them and what the correct output would be. Both of these will get you full marks for testing. If you pick test cases to hide the faults in your program, you will lose marks.

Black Box Test Case Design

Objective and Purpose: The purpose of Black Box Test Case Design (BBTD) is to discover circumstances under which the assessed object does not react and behave according to the requirements or, respectively, the specifications.

Operational Sequence: The test cases in a black box test case design are derived from the requirements or specifications. The object to be assessed is considered as a black box, i.e. the assessor is not interested in the internal structure and behavior of the object to be assessed. The following black box test case designs can be differentiated:
> generation of equivalence classes
> marginal value analysis
> intuitive test case definition
> function coverage

1. Generation of Equivalence Classes

Objective and Purpose: The objective of the generation of equivalence classes is to achieve an optimal probability of detecting errors with a minimum number of test cases.

Operational Sequence: The principle of the generation of equivalence classes is to group all input data of a program into a finite number of equivalence classes, so that it can be assumed that any representative of a class will detect the same errors as any other representative of that class. Test cases are defined via equivalence classes in the following steps:
1. Analysis of the input data requirements, the output data requirements, and the conditions according to the specifications.
2. Definition of the equivalence classes by setting up the ranges for input and output data.
3. Definition of the test cases by selecting values for each class.
When defining equivalence classes, two groups of classes have to be differentiated: valid equivalence classes and invalid equivalence classes. For valid equivalence classes, valid input data are selected; for invalid equivalence classes, erroneous input data are selected. Even when the specification is available, the definition of equivalence classes is predominantly a heuristic process.

2. Marginal Value Analysis

Objective and Purpose: The objective of the marginal value analysis is to define test cases that can be used to discover errors connected with the handling of range margins.

Operational Sequence: The principle of the marginal value analysis is to consider the range margins when defining test cases. This analysis is based on the equivalence classes defined by means of the generation of equivalence classes. Contrary to the generation of equivalence classes, not just any representative of a class is selected as a test case but specifically the representatives at the class margins. The marginal value analysis therefore represents an addition to test case design by generation of equivalence classes.

3. Intuitive Test Case Definition

Objective and Purpose: The objective of the intuitive test case definition is to qualitatively improve the systematically derived test cases, and also to detect supplementary test cases.

Operational Sequence: The basis for this approach is the intuitive ability and experience of human beings to select test cases according to expected errors. A regulated procedure does not exist.
Apart from analyzing the requirements and the systematically defined test cases (if any exist), it is most practical to generate a list of possible errors and error-prone situations. Here it is possible to draw on experience with standard errors that have occurred repeatedly. Based on these identified errors and critical situations, the additional test cases are then defined.

4. Function Coverage

Objective and Purpose: The purpose of function coverage is to identify test cases that can be used to prove that the corresponding function is available and can be executed. Here the test cases concentrate on both the normal behavior and the exceptional behavior of the object to be assessed.

Operational Sequence: Based on the defined requirements, the functions to be tested must be identified. Then the test cases for the identified functions can be defined.

Recommendation: With the help of a test case matrix it is possible to check whether functions are covered by several test cases. To improve the efficiency of the tests, redundant test cases ought to be deleted.

White Box Test Case Design

Objective and Purpose: The objective of White Box Test Case Design (WBTD) is to detect errors by means of execution-oriented test cases.

Operational Sequence: White box testing is a test strategy that investigates the internal structure of the object to be assessed in order to specify execution-oriented test cases on the basis of the program logic. The specifications still have to be taken into consideration, though. In a test case design, the portion of the assessed object that is addressed by the test cases is taken into consideration. The considered aspect may be a path, a statement, a branch, or a condition. The test cases are selected in such a manner that the correspondingly addressed portion of the assessed object is increased. The following white box test case methods exist:
1. Path coverage
2. Statement coverage
3. Branch coverage
4. Condition coverage
5. Branch/condition coverage
6. Coverage of all multiple conditions

1. Path Coverage

Objective and Purpose: The objective of path coverage is to identify test cases executing a required minimum number of paths in the object to be assessed. The execution of all paths cannot, as a rule, be achieved.

Operational Sequence: Taking the specification into consideration, the paths to be executed and the corresponding test cases are defined.

2. Statement Coverage

Objective and Purpose: The objective of statement coverage is to identify test cases executing a required minimum number of statements in the object to be assessed.

Operational Sequence: Taking the specification into consideration, statements are identified and the corresponding test cases are defined. Depending on the required coverage degree, either all or only a certain number of statements are used for the test case definition.

3. Branch Coverage

Objective and Purpose: The objective of branch coverage is to identify test cases executing a required minimum number of branches in the object to be assessed, each at least once.

Operational Sequence: Taking the specification into consideration, a sufficiently large number of test cases must be designed so that both the THEN and the ELSE branch are executed at least once for each decision, i.e. the exit for the fulfilled condition and the exit for the unfulfilled condition must both be used, and each entry must be addressed at least once.
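As a small illustration of the THEN/ELSE requirement just described, consider the sketch below; the discount rule in it is invented purely for illustration. The single decision has one exit for the fulfilled condition and one for the unfulfilled condition, so two tests are needed to exercise both branches.

# branch_coverage_sketch.py -- illustrative only; the discount rule is invented.
def discount(order_total):
    if order_total >= 100:        # decision: one THEN exit, one ELSE exit
        return order_total * 0.9  # THEN branch
    return order_total            # ELSE branch


def test_then_branch_taken():
    assert discount(200) == 180   # fulfilled condition: THEN exit exercised


def test_else_branch_taken():
    assert discount(50) == 50     # unfulfilled condition: ELSE exit exercised


if __name__ == "__main__":
    test_then_branch_taken()
    test_else_branch_taken()
    print("Both branches of the decision have been executed.")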
For multiple decisions there is the additional requirement to test each possible exit at least once and to address each entry at least once.

4. Condition Coverage

Objective and Purpose: The objective of condition coverage is to identify test cases executing a required minimum number of conditions in the object to be assessed.

Operational Sequence: Taking the specification into consideration, conditions are identified and the corresponding test cases are defined. The test cases are defined on the basis of a path sequence analysis.

5. Branch/Condition Coverage

Objective and Purpose: The objective of branch/condition coverage is to identify test cases executing a required minimum number of branches and conditions in the object to be assessed.

Operational Sequence: Taking the specification into consideration, branches and conditions are identified and the corresponding test cases are defined.

6. Coverage of all Multiple Conditions

Objective and Purpose: The objective of the coverage of all multiple conditions is to identify test cases executing a required minimum number of all possible condition combinations for a decision in the object to be assessed.

Operational Sequence: Taking the specification into consideration, condition combinations for decisions are identified and the corresponding test cases are defined. When defining the test cases, make sure that all entries are addressed at least once.

How to Write Test Cases

To write test cases you should be clear about the specifications required for a particular case. Once the case is decided, check the requirements and then write the test cases. Before writing test cases you must first work out the boundary value analysis.

Let us write a test case for a Consignee Details form. (Consignee Details: the consignee is the customer who purchases our product; here he gives information about himself, for example name, address, and so on.) Here is the screen shot of the form.

Software Requirement Specification: According to the software requirement specification (SRS), you should write test cases up to the expected results. Here is the screen shot of the SRS.

Boundary Value Analysis: Boundary value analysis concentrates on the range between the minimum and maximum values; it does not concentrate on the values in the centre. For example, here is how to calculate the boundary values for the Company Name field. The minimum length is 4 and the maximum length is 15. For the boundary values you check one value below, at, and above each limit, so for the Company Name field:
minimum values = 3, 4, 5
maximum values = 14, 15, 16
According to the software requirement specification, the valid values are 4, 5, 14, 15 and the invalid values are 3 and 16, because those values are outside the range given in the SRS. (A code sketch of these values follows at the end of this subsection.)

Some points to keep in mind:
> You have to write test cases for the boundary values as well. For a single User ID field you will have 11 test cases including the boundary values.
> You have to write test cases up to the expected result; you can start writing test cases as soon as you have the software requirement specification.
> Test case creation is then complete.
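Here is that sketch: a minimal illustration of the company-name boundary values. Only the 4-15 length rule comes from the SRS example above; the validation function itself is hypothetical.

# boundary_values.py -- sketch of boundary value analysis for the Company Name field.
# The 4..15 length rule comes from the SRS example; the validator itself is hypothetical.
MIN_LEN, MAX_LEN = 4, 15


def company_name_is_valid(name):
    return MIN_LEN <= len(name) <= MAX_LEN


def boundary_lengths(low, high):
    """One value below, at, and above each limit: 3, 4, 5 and 14, 15, 16."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]


if __name__ == "__main__":
    for length in boundary_lengths(MIN_LEN, MAX_LEN):
        name = "x" * length
        expected = MIN_LEN <= length <= MAX_LEN      # valid: 4, 5, 14, 15; invalid: 3, 16
        actual = company_name_is_valid(name)
        label = "valid" if expected else "invalid"
        verdict = "PASS" if actual == expected else "FAIL"
        print(f"length {length:2d}: expected {label:7s} -> {verdict}")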
Once test case creation is complete, the build arrives in the testing field (a build is the complete project) and you execute the test cases.

Execution of Test Cases

You have to try all the possible test inputs given in the test cases and then check whether all the test cases have been executed.

How to execute? For example, to check that Company Name is a mandatory (compulsory) field: leave the Company Name field empty, enter the password, and click the OK button. The alert message "Enter Company name" must be displayed. That is your expected result; the test passes if this is what happens when you execute the test case against the project.

Test Case 1

Test Case ID / Test Case Title: The test case ID may be any convenient identifier, as decided by the tester. Identifiers should follow a consistent pattern within test cases, and a similar consistency should apply across test modules written for the same project.
Purpose: The purpose of the test case, usually to verify a specific requirement.
Owner: The person or department responsible for keeping the test case accurate.
Expected Result: Describes the expected results and outputs of this test case. It is also desirable to include some method of recording whether or not the expected results actually occurred, i.e. whether the test case, or even individual steps of the test case, passed.
Test Data: Any data input required for the test case.
Test Tools: Any specific or unusual tools or utilities required to execute this test case.
Dependencies: If correct execution of this test case depends on it being preceded by other test cases, that fact should be mentioned here. Similarly, any dependency on factors outside the immediate test environment should also be mentioned.
Initialization: If the system software or hardware has to be initialized in a particular manner for this test case to succeed, that initialization should be described here.
Description: Describes what will take place during the test case. The description should take the form of a narrative description of the test case, along with a test procedure, which in turn can be specified by test case steps, tables of values or configurations, further narrative, or whatever is most appropriate to the type of testing taking place.

Test Case 2, Test Case 3, Test Case 4

Test Case Description: Identify the items or features to be tested by this test case.
Pre- and Post-conditions: Description of changes (if any) to the standard environment. Any modification should be done automatically.

Test Case 4 - Description

Case: Test Case Name
Component: Component Name
Author: Developer Name
Date: MM-DD-YY
Version: Version Number
Input / Output Specifications: Identify all inputs and outputs required to execute the test case. Be sure to identify all required inputs and outputs, not just data elements and values:
> Data (values, ranges, sets)
> Conditions (states: initial, intermediate, final)
> Files (database, control files)
Test Procedure: Identify any special constraints on the test case. Focus on key elements such as special setup.
Expected Results: Fill this row with a description of the test results.
Failure Recovery: Explain which actions should be performed in case of test failure.
Comments: Suggestions, descriptions of possible improvements, etc.

Test Case 5

WEB TESTING: Writing Test Cases for Web Browsers

This is a guide to making test cases for Web browsers, for example test cases that show HTML, CSS, SVG, DOM, or JS bugs.
There are always exceptions to all the rules when making test cases. The most important thing is to show the bug without distractions. This isn't something that can be done just by following a set of steps; you have to be intelligent about it.

Minimising Existing Testcases

STEP ONE: FINDING A BUG

The first step in making a testcase is finding a bug in the first place. There are four ways of doing this:
1. Letting someone else do it for you: most of the time, the testcases you write will be for bugs that other people have filed. In those cases you will typically have a Web page that renders incorrectly, either a demo page or an actual Web site. However, it is also possible that the bug report will have no problem page listed, just a problem description.
2. Alternatively, you can find a bug yourself while browsing the Web. In such cases you will have a Web site that renders incorrectly.
3. You could also find the bug because one of the existing testcases fails. In this case you have a Web page that renders incorrectly.
4. Finally, the bug may be hypothetical: you might be writing a test suite for a feature without knowing whether the feature is broken or not, with the intention of finding bugs in the implementation of that feature. In this case you do not have a Web page, just an idea of what a problem could be.

If you have a Web page showing a problem, move to the next step. Otherwise, you will have to create an initial testcase yourself; this is covered in the section "Creating testcases from scratch" later.

STEP TWO: REMOVING DEPENDENCIES

You have a page that renders incorrectly. Make a copy of this page and all the files it uses, and update the links so they all point to the copies you made. Make sure that it still renders incorrectly in the same way -- if it doesn't, find out why not. Make the environment of your copies as close to the original environment as it needs to be to reproduce the bug. For example, instead of loading the files locally, put the files on a remote server and try it from there. Make sure the MIME types are the same if they need to be, and so on.

Once you have your page and its dependencies all set up and still showing the same problem, embed the dependencies one by one. For example, change markup like this:

<link rel="stylesheet" href="foo.css">

...to an embedded <style> element containing the contents of foo.css. Each time you do this, check that you haven't broken any relative URIs and that the page still shows the problem. If the page stops showing the problem, you either made a mistake when embedding the external file, or you found a bug specifically related to the way that particular file was linked. Move on to the next file.

STEP THREE: MAKING THE TEST FILE SMALLER

Once you have put as many of the external dependencies into the test file as you can, start cutting the file down. Go to the middle of the file. Delete everything from the middle of the file to the end. (Don't pay attention to whether the file is still valid or not.) Check that the error still occurs. If it doesn't, put that part back and remove the top half instead, or a smaller part. Continue in this vein until you have removed almost all of the file and are left with 20 or fewer lines of markup, or at least the smallest amount you need to reproduce the problem.

Now, start being intelligent. Look at the file. Remove bits that clearly will have no effect on the bug.
For example, if the bug is that the text "investments are good" is red but should be green, replace the text with just "test" and check that it is still the wrong colour.

Remove any scripts. If the scripts are needed, try doing what the scripts do and then removing them -- for example, replace this:

<script> document.write('test') </script> <noscript> test </noscript>

...with:

test

...and check that the bug still occurs.

Merge any <style> blocks together. Change presentational markup to CSS. For example, change this:

<font color="red">...</font>

...to:

span { color: red; } /* in the stylesheet */

Do the same with style="" attributes (remove the attributes, and put the rules in a <style> block instead). Remove any classes, and use element names instead. For example:

.a { color: red; } .b { color: green; }
<div class="a"> <p class="b"> This should be green. </p> </div>

...becomes:

div { color: red; } p { color: green; }
<div> <p> This should be green. </p> </div>

Do the same with IDs. Make sure there is a strict mode DOCTYPE. Remove any <meta> elements. Remove any "lang" attributes or anything else that isn't needed to show the bug. If you have images, replace them with very simple images, e.g.: http://hixie.ch/resources/images/sample

If there is script that is required, remove as many functions as possible, merge functions together, and put the code inline instead of in functions.

STEP FOUR: GIVE THE TEST AN OBVIOUS PASS CONDITION

The final step is to make sure that the test can be used quickly. It must be possible to look at a test and determine whether it has passed or failed within about 2 seconds. There are many tricks for doing this, which are covered in other documents such as the CSS2.1 Test Case Authoring Guidelines: http://www.w3.org/Style/CSS/Test/guidelines.html

Make sure your test looks like it has failed even if no script runs at all. Make sure the test doesn't look blank if it fails.

Creating Testcases from Scratch

STEP ONE: FIND SOMETHING TO TEST

Read the relevant specification. Read it again. Read it again, making sure you read every last bit of it, cover to cover. Read it one more time, this time checking all the cross-references. Read the specification in random order, making sure you understand every last bit of it.

Now, find a bit you think is likely to be implemented wrongly. Work out a way in which a page could be created so that if the browser gets it right, the page will look like the test has passed, and if the browser gets it wrong, the page will look like it failed. Write that page. Now jump to step four above.

Note: This information is collected.
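The halving strategy from step three can also be automated for text-based testcases. The sketch below is only a rough illustration; shows_bug is a placeholder predicate that you would replace with whatever tells you the bug is still visible (a manual check, a rendering comparison, and so on), and testcase.html is an assumed file name.

# reduce_testcase.py -- sketch of the "delete half and re-check" strategy from step three.
# `shows_bug` is a placeholder predicate; plug in your own manual or automated check.
def shows_bug(lines):
    """Return True if the testcase built from `lines` still shows the bug (stub)."""
    answer = input("Does the page built from %d lines still show the bug? [y/n] " % len(lines))
    return answer.strip().lower().startswith("y")


def reduce(lines):
    """Repeatedly try to drop chunks of the file while the bug stays visible."""
    chunk = max(1, len(lines) // 2)
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]   # remove one chunk
            if candidate and shows_bug(candidate):
                lines = candidate                        # keep the smaller version
            else:
                i += chunk                               # put that part back, move on
        if chunk == 1:
            break
        chunk //= 2                                      # try smaller chunks
    return lines


if __name__ == "__main__":
    with open("testcase.html") as f:
        original = f.readlines()
    for line in reduce(original):
        print(line, end="")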
Friday, 23 November 2007
ISTQB Foundation Exam Preparation
Exam preparation follows. Each syllabus module contains:
> A description of the module.
> The content of the module.
> A document regarding the module.
> A simple test on the module.

Fundamentals of Testing

This section looks at why testing is necessary, what testing is, general testing principles, the fundamental test process, and the psychological aspects of testing.
1.0 Fundamentals (or principles) of testing
1.1 Why is testing necessary
1.2 What is testing
1.3 General testing principles
1.4 Fundamental test process
1.5 Psychology of testing
Prepare a bit from here: Chapter 1

Take a small test now:

1. Use numbers 1 to 5 to indicate which fundamental test process activity the following major tasks belong to: 1 for planning and control, 2 for analysis and design, 3 for implementation and execution, 4 for evaluating exit criteria and reporting, and 5 for test closure activities.
A. _____ Creating the test data
B. _____ Designing test cases
C. _____ Analyzing lessons learned
D. _____ Defining the testing objectives
E. _____ Assessing whether more tests are needed
F. _____ Identifying the required test data
G. _____ Comparing actual progress against the plan
H. _____ Preparing a test summary report
I. _____ Documenting the acceptance of the system
J. _____ Re-executing a test that previously failed

2. What should be taken into account to determine when to stop testing?
I. Technical risk  II. Business risk  III. Project constraints  IV. Product documentation
A. I and II are true; III and IV are false
B. III is true; I, II, and IV are false
C. I, II, and IV are true; III is false
D. I, II and III are true; IV is false

3. How can software defects in future projects be prevented from reoccurring?
A. Creating documentation procedures and allocating resource contingencies
B. Asking programmers to perform thorough and independent testing
C. Combining levels of testing and mandating inspections of all documents
D. Documenting lessons learned and determining the root cause of problems

4. Use numbers 1 to 5 to indicate which fundamental test process activity the following major tasks belong to: 1 for planning and control, 2 for analysis and design, 3 for implementation and execution, 4 for evaluating exit criteria and reporting, and 5 for test closure activities.
K. _____ Reporting the status of testing
L. _____ Documenting the infrastructure for reuse later
M. _____ Checking the test logs against the exit criteria
N. _____ Identifying the required test environment
O. _____ Developing and prioritizing test procedures
P. _____ Comparing actual vs. expected results
Q. _____ Designing and prioritizing test cases
R. _____ Assessing whether the exit criteria should be changed
S. _____ Receiving feedback and monitoring test activities
T. _____ Handing over the testware to the operations team

Testing Throughout the Software Lifecycle

Explains the relationship between testing and life cycle development models, including the V-model and iterative development. Outlines four levels of testing:
• Component testing
• Integration testing
• System testing
• Acceptance testing
Describes four test types, the targets of testing:
• Functional
• Non-functional characteristics
• Structural
• Change-related
Outlines the role of testing in maintenance.
2.0 Testing throughout the life cycle
2.1 Software development models
2.2 Test levels
2.3 Test types: the targets of testing
2.4 Maintenance testing
Prepare a bit from here: Chapter 2

Take a small test now:

1. What test can be conducted for off-the-shelf software to get market feedback?
A. Beta testing
B. Usability testing
C. Alpha testing
D. COTS testing

2. Fill in the blanks:
1. _____ are the capabilities that a component or system must perform.
2. Reliability, usability, and portability are examples of _____.
3. Hardware and instrumentation needed for testing are parts of a _____.
4. _____ is also known as structural testing.
5. _____ ignores the internal mechanisms of a system being tested.
6. Which test level tests individual components or a group of related units?
7. Which test level determines if the customer will accept the system?
8. _____ checks the interactions between components.
9. _____ is usually performed on a complete, integrated system.
10. _____ is another name for unit testing.

3. Which test levels are USUALLY included in the common type of V-model?
A. Integration testing, system testing, acceptance testing and regression testing
B. Component testing, integration testing, system testing and acceptance testing
C. Incremental testing, exhaustive testing, exploratory testing and data-driven testing
D. Alpha testing, beta testing, black-box testing and white-box testing

Static Techniques

Explains the differences between the various types of review and outlines the characteristics of a formal review. Describes how static analysis can find defects.
3.0 Static techniques
3.1 Reviews and the test process
3.2 Review process
3.3 Static analysis by tools
Prepare a bit from here: Chapter 3

Take a small test now:

1. Which typical defects are easier to find using static rather than dynamic testing?
L. Deviation from standards  M. Requirements defects  N. Insufficient maintainability  O. Incorrect interface specifications
A. L, M, N and O
B. L and N
C. L, N and O
D. L, M and N

2. In a formal review, who is primarily responsible for the documents to be reviewed?
A. Author  B. Manager  C. Moderator  D. Reviewers

3. What are the typical six main phases of a formal review?

Test Design Techniques

This section explains how to identify test conditions (things to test) and how to design test cases and procedures. It also explains the difference between white-box and black-box testing. The following techniques are described in some detail, with practical exercises:
• Equivalence partitioning
• Boundary value analysis
• Decision tables
• State transition testing
• Statement and decision testing
In addition, use case testing and experience-based testing (such as exploratory testing) are described, and advice is given on choosing techniques.
4.0 Test design techniques
4.1 Identifying test conditions and designing test cases
4.2 Categories of test design techniques
4.3 Specification-based or black-box techniques
4.4 Structure-based or white-box techniques
4.5 Experience-based techniques
4.6 Choosing test techniques
Prepare a bit from here: Chapter 4

Take a small test now:

1. Features to be tested, approach, item pass/fail criteria and test deliverables should be specified in which document?
A. Test case specification  B. Test procedure specification  C. Test plan  D. Test design specification

2. Which aspects of testing will establishing traceability help?
A. Configuration management and test data generation
B. Test specification and change control
C. Test condition and test procedure specification
D. Impact analysis and requirements coverage

Test Management

This section covers test organisation, test planning and estimation, test progress monitoring and control, configuration management, risk and testing, and incident (bug) management.
5.0 Test management
5.1 Test organisation
5.2 Test planning and estimation
5.3 Test progress monitoring and control
5.4 Configuration management
5.5 Risk and testing
5.6 Incident or bug management
Prepare a bit from here: Chapter 5

Take a small test now:

1. Which of the following is a KEY task of a tester?
A. Reviewing tests developed by others
B. Writing a test strategy for the project
C. Deciding what should be automated
D. Writing test summary reports

2. Which of the following are test leader's tasks and which are tester's tasks?
A. Adjust plans as needed
B. Analyze design documents
C. Analyze overall test progress
D. Assess user requirements
E. Automate tests as needed
F. Contribute to test plans
G. Coordinate configuration management
H. Coordinate the test strategy
I. Create test specifications
J. Decide what to automate

Tool Support for Testing

Different types of tool support for testing are described throughout the course. This session summarises them, discusses how to use them effectively, and explains how best to introduce a new tool.
6.0 Tool support for testing
6.1 Types of test tools
6.2 Effective use of tools, potential benefits and risks
6.3 Introducing a tool into an organisation
Prepare a bit from here: Chapter 6

Take a small test now:

1. Match the test tool classifications to the test tools.
1. Test management -- applies to all test activities
2. Static testing -- facilitates static analysis in detecting problems early
3. Test specification -- generates tests and prepares data
4. Test execution and logging -- runs tests and provides a framework
5. Performance and monitoring -- observes system behavior
6. Specialized -- caters to a specific environment or platform
7. Other -- assists in other miscellaneous testing tasks
A. ___ Configuration management tools
B. ___ Coverage measurement tools
C. ___ Debugging tools
D. ___ Dynamic analysis tools
E. ___ Incident management tools
F. ___ Industry-specific tools
G. ___ Modeling tools
H. ___ Monitoring tools
I. ___ Performance testing tools
J. ___ Platform-specific tools

2. Which of the following are potential benefits of using test support tools?
A. Ensuring greater consistency and minimizing software project risks
B. Reducing repetitive work and gaining easy access to test information
C. Performing objective assessment and reducing the need for training
D. Allowing for greater reliance on the tool to automate the test process

Mail me for the answers. All the best :-) Please follow this post often to see the latest questions as they are updated. Note: this is just for reference; don't rely on it completely for your exam.
Bug Life Cycle
What is a Bug/Defect?

The simple Wikipedia definition of a bug is: "A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design."

Other definitions:
- An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.
- A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Lastly, the general definition of a bug is: "failure to conform to specifications".

If you want to detect and resolve defects in the early development stages, defect tracking and the software development phases should start simultaneously.

Life Cycle of a Bug

1) Log the new defect. When a tester logs a new bug, the mandatory fields are: Build version, Submitted On, Product, Module, Severity, Synopsis, and Description to Reproduce. You can add some optional fields if you are using a manual bug submission template; these optional fields are: Customer name, Browser, Operating system, File attachments or screenshots.

The following fields remain either specified or blank: if you have the authority to set the bug Status, Priority, and 'Assigned to' fields, then you can specify them; otherwise the test manager will set the status and priority and assign the bug to the respective module owner.

Considering the significant steps in the bug life cycle gives a quick picture of a bug's life. On successful logging, the bug is reviewed by the development or test manager. The test manager can set the bug status to Open, assign the bug to a developer, or defer the bug until the next release. When the bug is assigned to a developer, the developer can start working on it. The developer can set the bug status to Won't Fix, Couldn't Reproduce, Need More Information, or Fixed. If the status set by the developer is either 'Need more info' or 'Fixed', QA responds with the corresponding action: if the bug is fixed, QA verifies it and can set the bug status to Verified Closed or Reopen.

Bug status description: these are the various stages of the bug life cycle. The status captions may vary depending on the bug tracking system you are using.

1) New: when QA files a new bug.
2) Deferred: if the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to Deferred.
3) Assigned: the 'Assigned to' field is set by the project lead or manager, assigning the bug to a developer.
4) Resolved/Fixed: when the developer makes the necessary code changes and verifies them, he or she can set the bug status to Fixed, and the bug is passed to the testing team.
5) Could Not Reproduce: if the developer is not able to reproduce the bug with the steps given in the bug report by QA, the developer can mark the bug as CNR. QA then needs to check whether the bug is still reproducible and can assign it back to the developer with detailed reproduction steps.
6) Need More Information: if the developer is not clear about the reproduction steps provided by QA, he or she can mark the bug as 'Need more information'.
In this case, QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.

7) Reopen: if QA is not satisfied with the fix and the bug is still reproducible after the fix, QA can mark it as Reopen so that the developer can take appropriate action.
8) Closed: if the bug has been verified by the QA team, the fix is OK, and the problem is solved, QA can mark the bug as Closed.
9) Rejected/Invalid: sometimes the developer or team lead can mark the bug as Rejected or Invalid if the system is working according to the specifications and the bug is simply due to some misinterpretation.

Bugs and the Statuses Used During a Bug Life Cycle

The main purpose behind any software development process is to provide the client (the final or end user of the software product) with a complete solution (software product) that will help him manage his business or work in a cost-effective and efficient way. A software product is considered successful if it satisfies all the requirements stated by the end user.

Any software development process is incomplete if the most important phase, testing of the developed product, is excluded. Software testing is a process carried out in order to find and fix previously undetected bugs/errors in the software product. It helps improve the quality of the software product and makes it safe for the client to use.

What is a bug/error? A bug or error in a software product is any exception that can hinder the functionality of either the whole software or part of it.

How do I find a bug/error? Basically, test cases/scripts are run in order to find any unexpected behavior of the software product under test. If any such unexpected behavior or exception occurs, it is called a bug.

What is a test case? A test case is a documented set of steps/activities that are carried out or executed on the software in order to confirm its functionality/behavior under a certain set of inputs.

What do I do if I find a bug/error? In normal terms, if a bug or error is detected in a system, it needs to be communicated to the developer in order to get it fixed. From the first time a bug is detected until the point when it is fixed and closed, it is assigned various statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed. (Please note that there are various ways to communicate the bug to the developer and track the bug status.)

Statuses associated with a bug:

New: When a bug is found or revealed for the first time, the software tester communicates it to his/her team leader (test leader) in order to confirm that it is a valid bug. After getting confirmation from the test lead, the software tester logs the bug and the status New is assigned to it.

Assigned: After the bug is reported as New, it comes to the development team. The development team verifies whether the bug is valid.
If the bug is valid, the development leader assigns it to a developer to fix, and the status Assigned is given to it.

Open: Once the developer starts working on the bug, he/she changes the status of the bug to Open to indicate that he/she is working on a solution.

Fixed: Once the developer makes the necessary changes in the code and verifies the code, he/she marks the bug as Fixed and passes it to the development lead in order to pass it on to the testing team.

Pending Retest: After the bug is fixed, it is passed back to the testing team to be retested, and the status Pending Retest is assigned to it.

Retest: The testing team leader changes the status of the bug from Pending Retest to Retest and assigns it to a tester for retesting.

Closed: After the bug is assigned the status Retest, it is tested again. If the problem is solved, the tester closes it and marks it with the Closed status.

Reopen: If, after retesting the software for the opened bug, the system behaves in the same way or the same bug arises once again, the tester reopens the bug and sends it back to the developer, marking its status as Reopen.

Pending Reject: If the developers think that a particular behavior of the system, which the tester reported as a bug, is in fact intended and the bug is invalid, the bug is rejected and marked as Pending Reject.

Rejected: If the testing leader finds that the system is working according to the specifications, or that the bug is invalid as per the explanation from the development team, he/she rejects the bug and marks its status as Rejected.

Postponed: Sometimes, testing of a particular bug has to be postponed for an indefinite period. This situation may occur for many reasons, such as unavailability of test data or unavailability of a particular functionality. At that time, the bug is marked with the Postponed status.

Deferred: In some cases a particular bug is of no immediate importance and can be left for later; in that case it is marked with the Deferred status.

Software Testing - How To Log A Bug (Defect)

As we have already discussed the importance of software testing in any software development project (to summarize: software testing helps improve the quality of the software and deliver a cost-effective solution that meets customer requirements), it becomes necessary to log a defect properly, track the defect, keep a log of defects for future reference, and so on.

As a tester tests an application, if he/she finds any defect, the life cycle of the defect starts, and it becomes very important to communicate the defect to the developers in order to get it fixed, keep track of the current status of the defect, find out whether any similar defect was found in earlier rounds of testing, and so on. For this purpose, manually created documents were previously used and circulated to everyone associated with the software project (developers and testers); nowadays many bug reporting tools are available, which help in tracking and managing bugs effectively.

How to report a bug? It is good practice to take screen shots of the execution of every step during software testing. If any test case fails during execution, it needs to be failed in the bug-reporting tool and a bug has to be reported/logged for it. The tester can choose to first report a bug and then fail the test case in the bug-reporting tool, or to fail the test case first and then report the bug.
In any case, the Bug ID that is generated for the reported bug should be attached to the failed test case.

At the time of reporting a bug, all the mandatory fields of the bug record (such as Project, Summary, Description, Status, Detected By, Assigned To, Date Detected, Test Lead, Detected in Version, Closed in Version, Expected Date of Closure, Actual Date of Closure, Severity, Priority, and Bug ID) are filled in, and a detailed description of the bug is given along with the expected and actual results. The screen shots taken at the time of execution of the test case are attached to the bug for reference by the developer.

After a bug is reported, a unique Bug ID is generated by the bug-reporting tool, which is then associated with the failed test case. This Bug ID helps in associating the bug with the failed test case. After the bug is reported, it is assigned the status New, which goes on changing as the bug fixing process progresses.

If more than one tester is testing the software application, it is possible that another tester has already reported a bug for the same defect found in the application. In such a situation, it becomes very important for the tester to find out whether any bug has been reported for a similar type of defect. If yes, then the test case has to be blocked by the previously raised bug (in this case, the test case has to be executed once the bug is fixed). If no such bug has been reported previously, the tester can report a new bug and fail the test case against the newly raised bug.

If no bug-reporting tool is used, then the test case is written in tabular form in a file with four columns: Test Step No, Test Step Description, Expected Result, and Actual Result. The expected and actual results are written for each step, and the test case is failed at the step at which it fails. This file containing the test case, together with the screen shots taken, is sent to the developers for reference. As the tracking process is not automated, it becomes important to keep the information about the bug updated from the time it is raised until the time it is closed.

(Please note: the above procedure for reporting a bug is general and not based on any particular project. Most of the time, the bug reporting procedure, the values used for the various fields at the time of reporting a bug, the bug tracking system, etc. will change as per the software testing project and company requirements.)
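As a rough illustration of the ideas in this post, here is a minimal sketch of a bug record with a few of the fields and status transitions described above. The field names and the allowed-transition table are simplified assumptions, not the schema of any real bug-reporting tool.

# bug_record.py -- simplified sketch of a bug record and its status transitions.
# The statuses and fields follow the descriptions above; real tools differ.
from dataclasses import dataclass, field

# A few of the allowed moves in the life cycle described above (not exhaustive).
ALLOWED_TRANSITIONS = {
    "New": {"Assigned", "Deferred", "Rejected"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Could Not Reproduce", "Need More Information"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
}


@dataclass
class Bug:
    bug_id: str
    summary: str
    severity: str
    detected_by: str
    assigned_to: str = ""
    status: str = "New"
    history: list = field(default_factory=list)

    def move_to(self, new_status):
        """Change status only along an allowed transition, keeping a history."""
        if new_status not in ALLOWED_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"{self.status} -> {new_status} is not allowed")
        self.history.append((self.status, new_status))
        self.status = new_status


if __name__ == "__main__":
    bug = Bug("B-101", "Bold formatting not applied", "Major", "QA")
    for status in ("Assigned", "Open", "Fixed", "Pending Retest", "Retest", "Closed"):
        bug.move_to(status)
    print(bug.status, bug.history)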
Saturday, 17 November 2007
Statement Coverage & Decision Coverage
The ISEB Foundation Certification syllabus covers three categories of test design techniques:
1) Specification-based or black-box techniques.
2) Structure-based or white-box techniques.
3) Experience-based techniques.
As requested by my blog readers, I will post the structure-based (white-box) techniques first and continue with the other design techniques later. The test design techniques for structure-based (white-box) testing are:
-> Statement testing and coverage
-> Decision testing and coverage
(Statement coverage and decision coverage are covered in the ISEB Foundation syllabus in the 4th chapter, "Test Design Techniques".)

Structure-based or White-Box Techniques

White-box testing:
-> Testing based on knowledge of the internal structure and logic.
-> Logic errors and incorrect assumptions are inversely proportional to a path's execution probability.
-> We often believe that a path is not likely to be executed, but reality is often counter-intuitive.
-> Measures coverage.

Structure-based or white-box techniques are based on an identified structure of the software or system, as seen in the following examples:
Component level: the structure is that of the code itself, i.e. statements, decisions, or branches.
Integration level: the structure may be a call tree (a diagram in which modules call other modules).
System level: the structure may be a menu structure, business process, or web page structure.

Structure-based or white-box testing can be applied at different levels of testing. Here we will focus on white-box testing at the code level, but it can be applied wherever we want to test the structure of something -- for example, to ensure that all modules in a particular system have been executed. Further on, two code-related structural techniques for code coverage, based on statements and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.

As said earlier, I focus mainly on code-related structural techniques. These techniques identify the paths through the code that need to be exercised in order to achieve the required level of code coverage. There are methods that can make the identification of white-box test cases easier -- one such method is control-flow graphing, which uses nodes, edges, and regions. I will show these in detail with examples here.

Now coming to the actual topic, the test design techniques for structure-based (white-box) testing are:
-> Statement testing and coverage
-> Decision testing and coverage

1. Statement Testing and Coverage

A statement is: 'An entity in a programming language, which is typically the smallest indivisible unit of execution' (ISTQB definition).
Statement coverage is: 'The percentage of executable statements that has been exercised by a test suite' (ISTQB definition).

Statement coverage does not ensure coverage of all functionality. The objective of statement testing is to show that the executable statements within a program have been executed at least once. An executable statement can be described as a line of program source code that will carry out some type of action. If all statements in a program have been executed by a set of tests, then 100% statement coverage has been achieved.
However, if only half of the statements have been executed by a set of tests, then only 50% statement coverage has been achieved. The aim is to achieve the maximum amount of statement coverage with the minimum number of test cases.

Statement testing derives test cases to execute specific statements, normally to increase statement coverage. 100% statement coverage for a component is achieved by executing all of the executable statements in that component. If we are required to carry out statement testing, the amount of statement coverage required for the component should be stated in the test coverage requirements in the test plan. We should aim to achieve at least the minimum coverage requirements with our test cases. If 100% statement coverage is not required, then we need to determine which areas of the component are more important to test by this method.

Consider the following lines of code: one test would be required to execute all three executable statements. If our component consists of three lines of code, we will execute all of them with one test case, thus achieving 100% statement coverage. There is only one way we can execute the code: starting at line number 1 and finishing at line number 3.

Statement testing is more complicated when there is logic in the code. In the next example there is one executable statement, i.e. "Display error message", hence one test is required to execute all executable statements. Program code becomes harder to cover when logic is introduced. It is likely that a component will have to carry out different actions depending on the circumstances at the time of execution. In the code example shown, the component will do different things depending on whether the age input is less than 17 or is 17 and above. With statement testing we have to determine the routes through the code we need to take in order to execute the statements, and the input required to get us there. In this example, the statement will be executed if the age is less than 17, so we would create a test case accordingly.

For more complex logic we can use control flow graphing. Control flow graphs consist of nodes, edges, and regions. A control flow graph describes the logic structure of a software program: it is a method by which the flows through the program logic are charted, using the code itself rather than the program specification. Each flow graph consists of nodes and edges. The nodes represent computational statements or expressions, and the edges represent the transfer of control between the nodes. Together the nodes and edges enclose an area known as a region.

In the diagram, the structure represents an 'If Then Else Endif' construct. Nodes are shown for the 'If' and the 'Endif'. Edges are shown for the 'Then' (the true path) and the 'Else' (the false path). The region is the area enclosed by the nodes and the edges.

All programs consist of these basic structures. This is Hetzel notation, which only shows logic flow. There are four basic structures used within control-flow graphing. The 'Do While' structure will execute a section of code while a field or indicator is set to a certain value. The 'Do Until' structure will execute a section of code until a field or indicator is set to a certain value; the evaluation of the condition occurs after the code is executed. The 'Go To' structure will divert the program execution to the program section in question.
If we apply control-flow graphing to our sample code (the age example above), the 'If Then Else' structure is the one that applies, and the logic flow can be drawn accordingly. However, while the graph shows us the structure of the code, it doesn't show us where the executable statements are, so on its own it doesn't yet help us determine the tests required for statement coverage.
>> We can introduce extra nodes to indicate where the executable statements are.
>> Then we can see the path we need to travel to execute the statement in the code.
What we can do is introduce extra nodes to indicate where the statements occur in the program code. Now, in our example, we can see that we need to answer 'yes' to the question being posed in order to traverse the code and execute the statement on line 2.
>> Now consider a version of the code where each outcome has a statement, and its control flow graph:
>> We will need 2 tests to achieve 100% statement coverage.
Program logic can be a lot more complicated than the examples I have given so far! In this version of the code we have executable statements associated with each outcome of the question being asked: we have to display an error message if the age is less than 17 (answering 'yes' to the question), and we have to display 'Customer OK' if we answer 'no'. We can only traverse the code once with a given test; therefore we require two tests to achieve 100% statement coverage.
>> And in this example...
>> We will need 3 tests to achieve 100% statement coverage.
Now it gets even more complicated! In this example we have a supplementary question, or what is known as a 'nested if'. If we answer 'yes' to 'Is the fuel tank empty?' we are then asked a further question, and each outcome of that question has an associated statement. Therefore we will need two tests that answer 'yes' to 'Is the fuel tank empty?':
* Fuel tank empty AND petrol engine (to execute line 3)
* Fuel tank empty AND NOT petrol engine (to execute line 5)
One further test will be required where we answer 'no' to 'Is the fuel tank empty?' to enable us to execute the statement at line 8. (A code sketch of this example appears at the end of this statement-coverage section.)
>> And this will be the last example for statement coverage; we will then move on to decision coverage.
>> We will need 2 tests to achieve 100% statement coverage.
In this example we have two separate questions that are being asked. The tests shown are:
* A coffee drinker who wants cream
* A non-coffee drinker who doesn't want cream
Our two tests achieve 100% statement coverage, but equally we could have had two tests with:
* A coffee drinker who doesn't want cream
* A non-coffee drinker who wants cream
If we were asked to achieve 100% statement coverage, and all statements were of equal importance, it wouldn't matter which set of tests we chose.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Checking your calculation values:
Minimum tests required to achieve 100% coverage:
Decision coverage >= Statement coverage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
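Since the listings for these examples aren't reproduced, here is a minimal sketch of the nested-if (fuel tank) example, with assumed statements and with line numbers in the comments matching the discussion above, together with the three tests that give 100% statement coverage:

    # Hypothetical version of the nested-if example (line numbers in the
    # comments mirror the discussion above, not any real listing).
    def check_car(fuel_tank_empty, petrol_engine):
        if fuel_tank_empty:                            # line 1
            if petrol_engine:                          # line 2
                print("Fill up with petrol")           # line 3
            else:                                      # line 4
                print("Fill up with diesel")           # line 5
            # line 6: endif
        else:                                          # line 7
            print("Fuel OK")                           # line 8

    # Three tests are needed for 100% statement coverage:
    check_car(True, True)     # executes line 3
    check_car(True, False)    # executes line 5
    check_car(False, False)   # executes line 8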
2. Decision testing & coverage:
A decision is:
>> 'A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.' (ISTQB definition)
Decision coverage is:
>> 'The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.' (ISTQB definition)
Decision coverage:
>> The objective of decision testing is to show that all the decisions within a component have been executed at least once.
A decision can be described as a line of source code that asks a question. For example, if all decisions within a component have been exercised by a given set of tests then 100% decision coverage has been achieved; if only half of the decision outcomes have been taken with a given set of tests then you have only achieved 50% decision coverage. Again, as with statement testing, the aim is to achieve the maximum amount of coverage with the minimum number of tests.
>> Decision testing derives test cases to execute specific decision outcomes, normally to increase decision coverage.
>> Decision testing is a form of control flow testing, as it generates a specific flow of control through the decision points.
If we are required to carry out decision testing, the amount of decision coverage required for a component should be stated in the test requirements in the test plan. We should aim to achieve at least the minimum coverage requirements with our test cases. If 100% decision coverage is not required, then we need to determine which areas of the component are more important to test by this method.
>> Decision coverage is stronger than statement coverage.
>> 100% decision coverage for a component is achieved by exercising all decision outcomes in the component.
>> 100% decision coverage guarantees 100% statement coverage, but not vice versa.
Decision testing can be considered the next logical progression from statement testing, in that we are no longer concerned only with executing every statement but with exercising the true and false outcomes of every decision. As we saw in our earlier examples of statement testing, not every decision outcome has a statement (or statements) to execute. If we achieve 100% decision coverage, we will have exercised every outcome of every decision, regardless of whether there were associated statements or not.
>> Let's take the earlier example we had for statement testing (the single age check):
>> This would require 2 tests to achieve 100% decision coverage, but only 1 test to achieve 100% statement coverage.
In this example there is one decision, and therefore two outcomes. To achieve 100% decision coverage we need two tests:
* Age less than 17 (answer 'yes')
* Age equal to or greater than 17 (answer 'no')
This is a greater number of tests than would be required for statement testing, as statements are only associated with one decision outcome (line 2).
>> Again, consider the earlier example where each outcome has a statement:
>> We will need 2 tests to achieve 100% decision coverage and also 2 tests to achieve 100% statement coverage.
This example would still result in two tests, as there is one decision and therefore two outcomes to test. However, we would also need two tests to achieve 100% statement coverage, as there are statements associated with each outcome of the decision. So, in this instance, statement and decision testing give us the same number of tests. Note that if 100% coverage is required, statement testing can give us the same number of tests as decision testing, BUT NEVER MORE!
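To make the comparison concrete, here is a rough sketch (assumed code) of the single age-check example, showing why statement coverage needs one test while decision coverage needs two:

    # Hypothetical single-decision example.
    def check_age(age):
        if age < 17:                       # line 1: the decision
            print("Error: too young")      # line 2: the only statement in a branch
        # line 3: endif

    # 100% statement coverage: one test is enough.
    check_age(16)      # takes the 'yes' outcome and executes line 2

    # 100% decision coverage: both outcomes must be exercised, so two tests.
    check_age(16)      # 'yes' outcome
    check_age(18)      # 'no' outcome (no associated statement, but it still counts)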
>> Let's look at some more examples now.
>> We will need 3 tests to achieve 100% decision coverage, but only 1 test to achieve 100% statement coverage.
Here we have an example of a supplementary question, or a 'nested if'. We have two decisions, so you may think that four tests would be required to achieve 100% decision coverage (two for each decision). This is NOT the case! We can achieve 100% decision coverage with three tests - we need to exercise the 'yes' outcome from the first decision (line 1) twice, in order to subsequently exercise the 'yes' and then the 'no' outcome from the supplementary question (line 2). We need a further, third test to ensure we exercise the 'no' outcome of the first decision (line 1). There is only one decision outcome that has an associated statement - this means that 100% statement coverage can be achieved with one test.
>> As more statements are added, the tests for decision coverage stay the same:
>> 3 tests to achieve 100% decision coverage, and 2 tests to achieve 100% statement coverage.
We have now introduced a statement that is associated with the 'no' outcome of the decision on line 2. This change affects the number of tests required to achieve 100% statement coverage, but does NOT alter the number of tests required to achieve 100% decision coverage - it is still three!
>> And again an example...
>> 3 tests to achieve both 100% decision coverage and 100% statement coverage.
Finally, we have statements associated with each outcome of each decision - the numbers of tests to achieve 100% statement coverage and 100% decision coverage are now the same.
>> And the last example...
>> We will need 2 tests to achieve 100% decision coverage and 100% statement coverage.
We looked at this example of the 'If Then Else' structure when considering statement testing (the coffee and cream example). As the decisions are separate questions, we only need two tests to achieve 100% decision coverage (the same as the number required for statement coverage). You may have thought that four tests were required - exercising the four different routes through the code - but remember, with decision testing our concern is to exercise each outcome of each decision at least once: as long as we have answered 'yes' and 'no' to each decision we have satisfied the requirements of the technique.
The tests we have illustrated would need the following input conditions:
* Coffee drinker wanting cream
* Non-coffee drinker not wanting cream (but milk)
Equally, we could have chosen the following input conditions:
* Coffee drinker not wanting cream (but milk)
* Non-coffee drinker wanting cream
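Again the original listing isn't shown, so here is a sketch (assumed code) of the two separate decisions and one pair of tests that exercises every outcome of each decision:

    # Hypothetical two-decision example: the questions are independent.
    def take_order(drinks_coffee, wants_cream):
        if drinks_coffee:            # decision 1
            print("Pour coffee")
        else:
            print("Pour tea")
        if wants_cream:              # decision 2
            print("Add cream")
        else:
            print("Add milk")

    # Two tests cover 'yes' and 'no' for BOTH decisions - we do not need all
    # four path combinations to reach 100% decision coverage.
    take_order(True, True)      # coffee drinker who wants cream
    take_order(False, False)    # non-coffee drinker who doesn't want cream (milk)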
>> Then what about loops?
>> If we choose an initial value of p = 4, we only need 1 test to achieve 100% statement coverage and 100% decision coverage.
The control-flow graphs we showed earlier depicted a 'Do While' construct. To reiterate, the 'Do While' structure will execute a section of code while a field or indicator is set to a certain value; the evaluation of the condition occurs before the code is executed. Unlike the 'If Then Else', we can loop around the 'Do While' structure, which means that we can exercise different routes through the code with one test.
In this example, if we set 'p' to an initial value of 4, the first time through the code we will:
* Go from line 1 to line 2
* Answer 'yes' to the 'If' on line 2 (If p < 5)
* Execute the statement on line 3 (p = p * 2)
* Go from line 4 to line 5
* Execute the statement on line 5 (which adds 1 to 'p')
* Execute the statement on line 6, which takes us back up to line 1
On the next pass the 'If' on line 2 is answered 'no', line 5 again adds 1 to 'p' (making its value 10), and line 6 takes us back up to line 1. Once more we evaluate line 1: 'p' is not less than 10 (it is equal to 10), therefore we exit the structure.
1 test - and it achieves 100% statement coverage and 100% decision coverage.
>> And it's the same for the 'Do Until' structure.
>> If we choose an initial value of A = 15, we only need 1 test to achieve 100% decision coverage and 100% statement coverage.
The control-flow structures we showed earlier also depicted a 'Do Until' construct. To reiterate, the 'Do Until' structure will execute a section of code until a field or indicator is set to a certain value; the evaluation of the condition occurs after the code is executed. Again, we can loop around the 'Do Until' structure, which means that we can exercise different routes through the code with one test.
In this example, if we set 'A' to an initial value of 15, the first time through the code we will:
* Go from line 1 to line 2
* Answer 'yes' to the 'If' on line 2 (If A < 20)
* Execute the statement on line 3 (A = A * 2, which makes A = 30)
* Go from line 3, through line 4, to line 5
* Execute the statement on line 5 (which adds 1 to 'A', making its value 31)
* Execute the statement on line 6, which takes us back to line 1
Again we execute the code, with the value of 'A' now 31:
* Go from line 1 to line 2
* Answer 'no' to the 'If' on line 2 (If A < 20)
* Go from line 2, through line 4, to line 5
* Execute the statement on line 5 (which adds 1 to 'A', making its value 32)
* Execute the statement on line 6, which exits the structure ('A' is now greater than 31)
1 test - and it achieves 100% statement coverage and 100% decision coverage.
END...
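The loop listings are also missing from the post, so here is a minimal sketch in the same spirit (the exact statements are assumed; only the shape follows the walkthrough above), showing how a single test through a loop can exercise every statement and every decision outcome:

    # Hypothetical 'Do While' style loop.
    def grow(p):
        while p < 10:        # loop decision: evaluated before the body runs
            if p < 5:        # inner decision
                p = p * 2
            p = p + 1
        return p

    # One test, p = 4:
    #   pass 1: p < 10 yes, p < 5 yes  -> p becomes 8, then 9
    #   pass 2: p < 10 yes, p < 5 no   -> p becomes 10
    #   pass 3: p < 10 no              -> exit the loop
    # Every statement and every decision outcome has been exercised, so this
    # single test gives 100% statement coverage and 100% decision coverage.
    assert grow(4) == 10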
Posted by Shailaja Kiran at 10:34
Labels: ISTQB/ISEB, Testing
Tester’s Aptitude/Knowledge Test
Note: This is not an ISEB/ISTQB certification sample test, but a test for anyone who has knowledge and experience in software testing. Your grade is decided according to your marks. Read further...
Introduction
The tester's aptitude test has been compiled to assist the test manager/team leader in recruiting good quality testers. This test should be used in conjunction with other interviewing techniques.
Structure
The test comprises 25 questions, each carrying different marks. The questions have been designed to test a broad knowledge of testing, from scenario testing to specific questions on testing tools.
Marking
Total number of questions: 25.
• D - Score less than 50% - Fail
• C - Score 50% to 65% - Trainee Tester
• B - Score 65% to 80% - Tester
• A - Score more than 80% - Senior Tester
Questions:
1. Which statement do you consider to be most important, and why?
a) Testing has the primary intent of showing the system meets the users' needs.
b) Testing has the primary intent of finding faults.
2. You have run all your tests and they all pass. Is this good news or bad news?
3. What would you do if you were asked to test a system which is unfamiliar to you and has out-of-date or inadequate documentation?
4. In running a test you find the actual result does not match the expected result - what would you do?
5. Do you consider positive testing, negative testing, or trying to break the system to be most important - and why?
6. How would you define a good test?
7. You have been assigned to test the new Triangle Determination Application (see screen shot below). As you can see, the screen consists of three text fields and a single button. The user is expected to enter an integer value into each of the three text fields. Upon hitting the OK button the program will print a message in a separate dialog box stating whether the triangle is scalene (all sides are different lengths), isosceles (two sides are the same length), or equilateral (all three sides are the same length). Write a set of test cases (i.e. specific sets of data) that you feel would adequately test this program. Write the tests so that someone other than you can run them.
8. In testing the above application you identify what you believe to be a fault - instead of printing the message concerning the type of triangle in a separate dialog box, the application is printing the message in the space between the 3 text fields and the OK button. What should your next step be (answer and state why)?
a) Continue testing to the end of the script, and then report the bug.
b) Stop testing, report the bug immediately, then continue alternative scripts.
c) Stop testing, report the bug and await a fix.
d) Continue testing and report the bug later, along with those found in other scripts.
9. You have raised a fault, but Development are unable to reproduce it. What should your next step be?
(Give an answer and state why.)
a) Let Development sign off the bug as not reproducible.
b) Sign off the bug yourself as not reproducible.
c) Tell Development the bug definitely exists and you will not pass it unless fixed.
d) Re-test, and upon confirmation provide more detailed information to Development, talking them through each stage if necessary.
10. Scenario: You have two sets of tests to run on the new version of the software.
Test Set 1: a test set to provide confidence that the software has not regressed from the previous version.
Test Set 2: a detailed test set to investigate potential faults in the new release of software.
Having run test set 1 you discover a number of faults in the new version of software - what do you do?
11. Draw and explain the 'V' Model and how testing fits into the Development Lifecycle. Indicate on the model where you would design your tests.
12. Describe the stages of testing and what the objectives are at each stage.
13. Explain what you understand by the terms Regression Testing and Re-Testing.
14. Scenario: You have planned to run 600 tests on your own. Each test will take approximately 10 minutes to run. Your manager has told you that you must complete these tests within one week. What would you do?
15. Do you consider testing tools to be valuable during the testing process - why/why not?
16. List 3 test tool categories and describe what each can do.
17. Name 2 standards that refer to testing.
18. How would you test these requirements:
a) The system must be user-friendly.
b) The system must be easy to install.
c) The following response times are to be achieved with the new system:
• Initial loading of the web application must be achieved within 3 seconds
• Updating of the information on the web page must be no more than 5 seconds
19. Why do you consider testing to be necessary?
20. A hotel telephone system can perform 3 functions:
• Call another hotel room by entering a room number (201 to 500)
• Call an external line by entering a 9, followed by the number
• Call various hotel services
• 0 = Operator
• 7 = Room Service
• 8 = Reception
Write a set of test cases to adequately test this telephone system.
21. Describe what you understand by the term "Static Testing" and list 3 static testing techniques.
22. How would you prioritise your tests (list 5 ways)?
23. Scenario: You are testing 2 programs and have 3 weeks to test them both. Having run all of your tests on both programs you finish testing within 2 weeks. You need to decide which of the 2 programs you would re-visit and run further tests against. Choose which program you would re-test (you can choose only one!) - and state your reasons:
Program A
Programmer: A
Complexity Level: 2
Lines of Code: 2000
Number of tests: 100
Number of bugs found: 10 (1 high severity, 3 medium & 6 low)
Program B
Programmer: B
Complexity Level: 2
Lines of Code: 2000
Number of tests: 100
Number of bugs found: 50 (10 high severity, 25 medium & 15 low)
24. An ATM has been specified to work in the following way: Enter a card; if the card is invalid, reject the card and exit the system. If it is a valid card then enter a PIN number. Check to see if the PIN is invalid - if it is, then display the message 'invalid pin number, please re-enter'. If 3 attempts are made with an invalid PIN then the machine keeps the card. If it is a valid PIN then the user can select one of the following transactions:
• Cash Withdrawal without receipt
• Cash Withdrawal with receipt
• Balance Enquiry
• Statement request
• Cancel
What tests would you produce to test this application?
State any assumptions made when testing.
25. The following is an extract from a fault log. Write down any potential problems or omissions with it.
So now it's time to compare your answers with the suggested answers...
1. They are both accurate! The purpose of testing is to find faults AND to ensure the system meets the users' needs (is fit for purpose).
2. It depends on how good your tests were and what they were testing. To have justified confidence in the software we must have confidence in our tests, data and environment.
3. Talk to users, developers and analysts to understand what the system is supposed to do. Document this understanding, get it reviewed, and use it as a substitute for the Requirements/Design documentation. Talk with testers who have tested the system previously. Read whatever is available and clarify assumptions.
4. The tester should first establish whether the reason is a test fault (i.e. they have made a mistake) or an environment fault. If neither of these is true, then they should check to see whether this fault has already been raised. If not, then either raise the fault or, preferably, talk to the development group to check the fault out.
5. They are as important as each other. However, testers need to have a different mindset to developers and therefore should actively look for potential faults. If we only concentrate on positive tests (showing that the system does what it should do) then we will potentially experience problems when the system goes live. If we only concentrate on negative tests (showing the system doesn't do what it shouldn't) then again we could potentially miss significant faults. However, if we look primarily at breaking the system then we may find lots of faults (the "what if" scenarios) but we may not establish whether the system is going to meet the users' needs and requirements. A balance of all three approaches is needed.
6. A good test is one that can potentially find a fault in the system. If the test does not find a fault then it gives us a certain amount of confidence. Tests must also be efficient - we should not have tests which all do the same thing.
7. Do you have a test case:
1. for a valid scalene triangle?
2. for a valid equilateral triangle?
3. for a valid isosceles triangle?
4. for each of the three permutations of two equal sides in valid isosceles triangles?
5. in which one side has a length of zero?
6. in which one side has a negative length?
7. in which the sum of the lengths of two sides is equal to the length of the third?
8. for each of the three permutations of case 7?
9. in which the sum of the lengths of two sides is less than the length of the third?
10. for each of the three permutations of case 9?
11. in which all side lengths are zero?
12. which uses non-integer input values?
13. which uses the wrong number of input values?
14. did all your test cases specify the expected output?
Myers states that experienced professional programmers score on average 7.8 out of the first 14 questions. Extra points can be given for further tests such as performance, reliability and configuration. (A runnable sketch of some of these cases appears at the end of this post.)
8. This is not a serious problem - the message is being printed. The best answer would be (a) or (d): it is essential that faults are raised as soon as possible so that Development can fix them, but this is dependent on the severity and priority of the fault.
This fault is not stopping any further testing on this script - and it might be that similar problems occur with other messages, so this extra information might assist Development with further investigation.
9. The answer is (d) - it might be our environment, or it could have been fixed by some other fault fix in the new version.
10. First we should investigate the faults - is it because we ran our tests wrongly, or because we were running the tests on the wrong environment? Assuming that it is because the software has regressed, then we must establish the nature and severity of the faults. It is probably inefficient to run any further tests at this stage. We should work with development to get a new version of the software with the faults fixed and re-tested before running test set 2.
11. The key point here is that testing should happen throughout the Development Lifecycle. Also, the design of the test cases should happen as soon as possible.
12. Component Testing: the lowest level of testing, detailed, aimed at finding faults, performed by the developers.
Component Integration: combining components and testing interfaces, performed by developers, using various types of integration (top-down, functional, bottom-up and big bang). Business scenarios and non-functional aspects if possible.
System Testing (functional and non-functional): testing the system as a whole; testing requirements and business processes; also testing non-functional aspects such as performance, usability etc.
System Integration: testing the system with other systems and networks.
Acceptance Testing: testing by users/customers to gain confidence that the system is going to support the business as well as meet their requirements.
13. Regression Testing: running tests to ensure that the software has not regressed in any way as a result of changes to the software and/or environment. Regression testing is running previously passed tests again to ensure that they still pass.
Re-Testing: running a test again that had found a fault, to check that the fault has been fixed correctly. Re-testing is running a failed test again to ensure that it now passes.
14. Assuming there are 7 working hours per day, this task would take: 600 x 10 = 6,000 minutes = 100 hours = about 14.3 days.
There are a number of options that could be considered:
• Work overtime (this should not be considered as a first resort)
• Ask for more staff to help (again, this may not be the best approach, particularly if you need to spend time training and mentoring the new staff)
We should:
• Re-prioritise our tests and run the most important tests first
• Assuming that not all 600 tests will have been run within the week, make a risk assessment of the consequences of not running the remaining tests
• After the initial week, once the system is implemented, there is no reason why the remaining tests could not be run (assuming that you are given the time)
15. Testing tools are very important in assisting the tester in their work. Using tools can also make the tester more efficient - they are able to run more tests (using regression testing tools, for example).
Tools can also quickly compare reports (using a comparison tool). The tools in themselves, however, do not make good testers, and should not be considered if the test process is in 'chaos'.
16. Any of the following:
• Requirements Testing Tools
• Test Design Tools
• Test Data Preparation Tools
• Regression Testing Tools
• Debug Tools
• Dynamic Analysis Tools
• Coverage Measurement Tools
• Static Analysis Tools
• Performance Testing Tools
• Test Management Tools
• Network Monitoring Tools
• Test Harness or Simulation Tools
The importance of this question is to see whether the candidate has any knowledge about tools. We do not want the names of tools, but want to know whether the candidate can distinguish between the types of tool.
17. Any of the following: BS 7925-1 (Glossary of testing terms), BS 7925-2 (Component Testing), ISO 9000 and ISO 9001 (Quality standards), IEEE 829 (Test Documentation), IEEE 1028 (Reviews), IEEE 1044 (Incidents).
18. How would you approach these requirements:
a) The system must be user-friendly.
What do we mean by 'user-friendly'? Questions to ask:
• Friendly to whom?
• Who are the users?
Test approaches:
• Talk to the users
• Document assumptions
• Compile test scenarios for people who have not seen the system
• Document tests and review these with the users
b) The system must be easy to install.
What do we mean by 'easy'? Questions to ask:
• For whom?
• Is there any installation documentation to follow?
Test approaches:
• Follow the installation documentation (if there is any)
• Allow tests to be run by an inexperienced user to see how easy it is
• Document tests and review these with the users
c) The following response times are to be achieved with the new system: initial loading of the web application within 3 seconds; updating of the information on the web page in no more than 5 seconds.
Once more we need to ask some probing questions about this requirement:
• What happens if we don't meet the times?
• Would a range of values be better?
• What is happening on the network?
• Are these average times or 'peak' times?
• What is involved in updating - how much information?
In attempting to test this requirement we would document the exact criteria for the test, and the simplest way would be to time a number of tests and report the average. With all three requirements, what we are looking for is whether the potential tester will challenge the requirements or whether they would just accept them and try to test to the best of their ability.
19.
• There are faults in the software
• Failures in live operation can be expensive
• Testing is sometimes a legal or contractual requirement
• To assess the quality of the software
• To preserve the quality of the software
• To help achieve quality software (by finding and removing the faults)
20.
21. Static testing is the non-execution of the code. Techniques include reviews, inspections, walkthroughs, and individual techniques such as desk checking, data-stepping and proofreading. There is also static analysis (data flow and control flow analysis).
22.
• Ask the customer to prioritise the requirements
• Ask the customer to prioritise the tests
• Test what is most critical to the customer's business
• Test where a failure would be most severe
• Test where failures would be most visible
• Test where failures are most likely
• Test areas changed most often
• Test areas with most problems in the past
• Test the most complex, or technically critical, areas
23. Key points:
1. Different programmers wrote A and B
2. The complexity levels of the programs are the same
3. The sizes of the programs are the same
4. The tester is the same for testing A and B
5. The number of tests run on both programs is the same
6. The number of bugs is higher in program B
Program B seems to have far more faults, therefore we would be inclined to spend the further week testing Program B, as there are likely to be more bugs to find. We may also not be very confident in Program B at this point, so we need to see our confidence increased.
24.
1. Invalid card - reject card and exit
2. Valid card and invalid PIN - error message 'invalid pin...' (then enter a valid PIN)
3. Valid card and invalid PIN - error message 'invalid pin...' (then enter another 2 invalid PINs)
4. Valid card, valid PIN & Cancel (correct length PIN)
5. Valid card, PIN entered with more digits than the maximum allowed - should error
6. Valid card, valid PIN & cash withdrawal without receipt
7. Valid card, valid PIN & cash withdrawal with receipt
8. Valid card, valid PIN & balance enquiry
9. Valid card, valid PIN & statement request
10. Destructive tests include:
• Putting in 2 cards
• Putting in a correct PIN, but adding an extra digit to make it invalid
Assumptions:
1. Up to 3 invalid PINs can be entered, after which the machine retains the card
2. Only one transaction can be selected, after which the card must be re-inserted
3. Pressing Cancel will return the card
25. Potential problems/omissions:
• No date on the log as to when it was raised
• No keywords (i.e. screen) so that searches can be performed, preventing duplication of fault logs
• No status on the log (opened/fixed/closed/cleared etc.)
• No owner of the log
• Has a priority - but no severity (i.e. risk to the customer)
• No version number of the system being tested - it is very likely that the testers are on a different version to development, and the fault may have been inadvertently fixed in the latest software
• Query the priority of this log (should it be a 3?)
• No actual error message on the log - this may give some clue to the developer about the nature of the fault
• The response seems to be leading to a dialogue - if we are not careful this fault will never be fixed! The tester should talk to the developer rather than sending another message via the fault log
• The response by the developer points to another part of the system (security) - this may be an indication of developers trying to quickly close the issue without performing sufficient investigation. It could, however, be because the tester has not spent enough time documenting the problem
NOW, WHAT IS THE RESULT?
• D - Score less than 50% - Fail
• C - Score 50% to 65% - Trainee Tester
• B - Score 65% to 80% - Tester
• A - Score more than 80% - Senior Tester
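To show the kind of concrete, runnable answer that question 7 is looking for, here is a small sketch; classify_triangle is only a hypothetical stand-in for the application logic (the real application is a GUI) so that the tests can actually be executed:

    import unittest

    def classify_triangle(a, b, c):
        # Hypothetical stand-in for the Triangle Determination Application.
        sides = sorted((a, b, c))
        if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
            return "invalid"
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    class TriangleTests(unittest.TestCase):
        # A few of the checklist cases from answer 7, each with an expected result.
        def test_valid_scalene(self):
            self.assertEqual(classify_triangle(3, 4, 5), "scalene")

        def test_valid_equilateral(self):
            self.assertEqual(classify_triangle(5, 5, 5), "equilateral")

        def test_isosceles_all_three_permutations(self):
            for sides in [(5, 5, 3), (5, 3, 5), (3, 5, 5)]:
                self.assertEqual(classify_triangle(*sides), "isosceles")

        def test_zero_and_negative_sides_are_invalid(self):
            self.assertEqual(classify_triangle(0, 4, 5), "invalid")
            self.assertEqual(classify_triangle(-3, 4, 5), "invalid")

        def test_sum_of_two_sides_equal_to_third_is_invalid(self):
            self.assertEqual(classify_triangle(1, 2, 3), "invalid")

    if __name__ == "__main__":
        unittest.main()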
Posted by Shailaja Kiran at 04:43
Labels: Testing
Software Testing Fundamentals—Concepts, Roles, and Terminology
SOFTWARE TESTING - WHAT, WHY, AND WHO
WHAT IS SOFTWARE TESTING?
Software testing is a process of verifying and validating that a software application or program
1. Meets the business and technical requirements that guided its design and development, and
2. Works as expected.
Software testing also identifies important defects, flaws, or errors in the application code that must be fixed. The modifier "important" in the previous sentence is, well, important, because defects must be categorized by severity (more on this later). During test planning we decide what an important defect is by reviewing the requirements and design documents with an eye towards answering the question "Important to whom?" Generally speaking, an important defect is one that, from the customer's perspective, affects the usability or functionality of the application. Using colors for a traffic-lighting scheme in a desktop dashboard may be a no-brainer during requirements definition and easily implemented during development, but in fact may not be entirely workable if during testing we discover that the primary business sponsor is color blind. Suddenly, it becomes an important defect. (About 8% of men and 0.4% of women have some form of color blindness.)
The quality assurance aspect of software development - documenting the degree to which the developers followed corporate standard processes or best practices - is not addressed in this paper, because assuring quality is not a responsibility of the testing team. The testing team cannot improve quality; they can only measure it, although it can be argued that doing things like designing tests before coding begins will improve quality because the coders can then use that information while thinking about their designs and during coding and debugging.
Software testing has three main purposes: verification, validation, and defect finding.
♦ The verification process confirms that the software meets its technical specifications. A "specification" is a description of a function in terms of a measurable output value given a specific input value under specific preconditions. A simple specification may be along the lines of "a SQL query retrieving data for a single account against the multi-month account-summary table must return these eight fields ordered by month within 3 seconds of submission."
♦ The validation process confirms that the software meets the business requirements. A simple example of a business requirement is "After choosing a branch office name, information about the branch's customer account managers will appear in a new window. The window will present manager identification and summary information about each manager's customer base." Other requirements provide details on how the data will be summarized, formatted and displayed.
♦ A defect is a variance between the expected and actual result. The defect's ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phases.
WHY DO SOFTWARE TESTING?
"A clever person solves a problem. A wise person avoids it." - Albert Einstein
Why test software? "To find the bugs!" is the instinctive response, and many people, developers and programmers included, think that that's what debugging during development and code reviews is for, so formal testing is redundant at best. But a "bug" is really a problem in the code; software testing is focused on finding defects in the final product. Here are some important defects that better testing would have found.
♦ In February 2003 the U.S. Treasury Department mailed 50,000 Social Security checks without a beneficiary name. A spokesperson said that the missing names were due to a software program maintenance error.
♦ In June 1996 the first flight of the European Space Agency's Ariane 5 rocket failed shortly after launching, resulting in an uninsured loss of $500,000,000. The disaster was traced to the lack of exception handling for a floating-point error when a 64-bit integer was converted to a 16-bit signed integer.
Software testing answers questions that development testing and code reviews can't.
♦ Does it really work as expected?
♦ Does it meet the users' requirements?
♦ Is it what the users expect?
♦ Do the users like it?
♦ Is it compatible with our other systems?
♦ How does it perform?
♦ How does it scale when more users are added?
♦ Which areas need more work?
♦ Is it ready for release?
What can we do with the answers to these questions?
♦ Save time and money by identifying defects early
♦ Avoid or reduce development downtime
♦ Provide better customer service by building a better application
♦ Know that we've satisfied our users' requirements
♦ Build a list of desired modifications and enhancements for later versions
♦ Identify and catalog reusable modules and components
♦ Identify areas where programmers and developers need training
WHAT DO WE TEST?
First, test what's important. Focus on the core functionality - the parts that are critical or popular - before looking at the 'nice to have' features. Concentrate on the application's capabilities in common usage situations before going on to unlikely situations. For example, if the application retrieves data and performance is important, test reasonable queries with a normal load on the server before going on to unlikely ones at peak usage times. It's worth saying again: focus on what's important. Good business requirements will tell you what's important.
The value of software testing is that it goes far beyond testing the underlying code. It also examines the functional behavior of the application. Behavior is a function of the code, but it doesn't always follow that if the behavior is "bad" then the code is bad. It's entirely possible that the code is solid but the requirements were inaccurately or incompletely collected and communicated. It's entirely possible that the application is doing exactly what we're telling it to do, but we're not telling it to do the right thing.
A comprehensive testing regime examines all components associated with the application. Even more, testing provides an opportunity to validate and verify things like the assumptions that went into the requirements, the appropriateness of the systems that the application is to run on, and the manuals and documentation that accompany the application. More likely, though, unless your organization does true "software engineering" (think of Lockheed-Martin, IBM, or SAS Institute), the focus will be on the functionality and reliability of the application itself.
Testing can involve some or all of the following factors. The more, the better.
♦ Business requirements
♦ Functional design requirements
♦ Technical design requirements
♦ Regulatory requirements
♦ Programmer code
♦ Systems administration standards and restrictions
♦ Corporate standards
♦ Professional or trade association best practices
♦ Hardware configuration
♦ Cultural issues and language differences
WHO DOES THE TESTING?
Software testing is not a one-person job. It takes a team, but the team may be larger or smaller depending on the size and complexity of the application being tested.
The programmer(s) who wrote the application should have a reduced role in the testing if possible. The concern here is that they're already so intimately involved with the product and "know" that it works that they may not be able to take an unbiased look at the results of their labors. Testers must be cautious, curious, critical but non-judgmental, and good communicators. One part of their job is to ask questions that the developers might not be able to ask themselves, or that are awkward, irritating, insulting or even threatening to the developers.
♦ How well does it work?
♦ What does it mean to you that "it works"?
♦ How do you know it works? What evidence do you have?
♦ In what ways could it seem to work but still have something wrong?
♦ In what ways could it seem to not work but really be working?
♦ What might cause it to not work well?
A good developer does not necessarily make a good tester, and vice versa, but testers and developers do share at least one major trait - they itch to get their hands on the keyboard. As laudable as this may be, being in a hurry to start can cause important design work to be glossed over, and special, subtle situations might be missed that would otherwise be identified in planning. Like code reviews, test design reviews are a good sanity check and well worth the time and effort.
Testers are the only IT people who will use the system as heavily as an expert user on the business side. User testing almost invariably recruits too many novice business users because they're available and the application must be usable by them. The problem is that novices don't have the business experience that the expert users have and might not recognize that something is wrong. Testers from IT must find the defects that only the expert users will find, because the experts may not report problems if they've learned that it's not worth their time or trouble.
THE V-MODEL OF SOFTWARE TESTING
Software testing is too important to leave to the end of the project, and the V-Model of testing incorporates testing into the entire software development life cycle. In a diagram of the V-Model, the V proceeds down and then up, from left to right, depicting the basic sequence of development and testing activities. The model highlights the existence of different levels of testing and depicts the way each relates to a different development phase. Like any model, the V-Model has detractors and arguably has deficiencies and alternatives, but it clearly illustrates that testing can and should start at the very beginning of the project. (See Goldsmith for a summary of the pros and cons and an alternative. Marick's articles provide criticism and an alternative.) In the requirements gathering stage the business requirements can verify and validate the business case used to justify the project. The business requirements are also used to guide the user acceptance testing. The model illustrates how each subsequent phase should verify and validate work done in the previous phase, and how work done during development is used to guide the individual testing phases. This interconnectedness lets us identify important errors, omissions, and other problems before they can do serious harm. Application testing begins with Unit Testing, and in the section titled "Types of Tests" we will discuss each of these test phases in more detail.
THE TEST PLAN
The test plan is a mandatory document. You can't test without one. For simple, straightforward projects the plan doesn't have to be elaborate, but it must address certain items.
As identified by the "American National Standards Institute and Institute of Electrical and Electronics Engineers Standard 829/1983 for Software Test Documentation", the following components should be covered in a software test plan.
Items Covered by a Test Plan
REDUCE RISK WITH A TEST PLAN
The release of a new application or an upgrade inherently carries a certain amount of risk that it will fail to do what it's supposed to do. A good test plan goes a long way towards reducing this risk. By identifying areas that are riskier than others we can concentrate our testing efforts there. These areas include not only the must-have features but also areas in which the technical staff is less experienced, perhaps such as the real-time loading of a web form's contents into a database using complex ETL logic. Because riskier areas require more certainty that they work properly, failing to correctly identify those risky areas leads to a misallocated testing effort.
How do we identify risky areas? Ask everyone for their opinion! Gather information from developers, sales and marketing staff, technical writers, customer support people, and of course any users who are available. Historical data and bug and testing reports from similar products or previous releases will identify areas to explore. Bug reports from customers are important, but also look at bugs reported by the developers themselves. These will provide insight into the technical areas they may be having trouble in.
When the problems are inevitably found, it's important that both the IT side and the business users have previously agreed on how to respond. This includes having a method for rating the importance of defects so that repair effort can be focused on the most important problems. It is very common to use a set of rating categories that represent decreasing relative severity in terms of business/commercial impact. In one system, '1' is the most severe and '6' has the least impact. Keep in mind that an ordinal system doesn't allow an average score to be calculated, but you shouldn't need to do that anyway - a defect's category should be pretty obvious.
1. Show Stopper - It is impossible to continue testing because of the severity of the defect.
2. Critical - Testing can continue, but the application cannot be released into production until this defect is fixed.
3. Major - Testing can continue, but this defect will result in a severe departure from the business requirements if released for production.
4. Medium - Testing can continue, and the defect will cause only a minimal departure from the business requirements when in production.
5. Minor - Testing can continue, and the defect will not affect the release into production. The defect should be corrected, but little or no change to the business requirements is envisaged.
6. Cosmetic - Minor cosmetic issues like colors, fonts, and pitch size that do not affect testing or production release. If, however, these features are important business requirements then they will receive a higher severity level.
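As a small illustration (the names are assumed), one way a team might encode this ordinal severity scale in a defect tracker is sketched below; the point is that the categories can be ordered and filtered, but not meaningfully averaged:

    from enum import IntEnum

    class Severity(IntEnum):
        # Same six ordinal categories as the list above; lower value = more severe.
        SHOW_STOPPER = 1
        CRITICAL     = 2
        MAJOR        = 3
        MEDIUM       = 4
        MINOR        = 5
        COSMETIC     = 6

    # Ordinal values can be compared and sorted, but averaging them is
    # meaningless - a defect simply belongs to one category.
    defects = [Severity.CRITICAL, Severity.COSMETIC, Severity.MAJOR]
    blockers = [d for d in defects if d <= Severity.CRITICAL]
    print(blockers)   # [<Severity.CRITICAL: 2>]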
WHAT SHOULD A TEST PLAN TEST?
Testing experts generally agree that test plans are often biased towards functional testing, during which each feature is tested alone in a unit test, and that the systems integration test is just a series of unit tests strung together. (More on test types later.) The problem this approach causes is that if we test each feature alone and then string a bunch of these tests together, we might never find that a series of steps such as {open a document, edit the document, print the document, save the document, edit one page, print one page, save as a new document} doesn't work. But a user will find out, and probably quickly. Admittedly, testing every combination of keystrokes or commands is difficult at best and may well be impossible (this is where unstructured testing comes in), but we must remember that features don't function in isolation from each other.
Users have a task orientation. To find the defects that they will find - the ones that are important to them - test plans need to exercise the application across functional areas by mimicking both typical and atypical user tasks. A test like the sequence shown above is called scenario testing, task-based testing, or use-case testing.
An incomplete test plan can result in a failure to check how the application works on different hardware and operating systems, or when combined with different third-party software. This is not always needed, but you will want to think about the equipment your customers use. There may be more than a few possible system combinations that need to be tested, and that can require a possibly expensive computer lab stocked with hardware and much time spent setting up tests. Configuration testing isn't cheap, but it's worth it when you discover that the application running on your standard in-house platform which "entirely conforms to industry standards" behaves differently on the boxes your customers are using. In a 1996 incident this author was involved in, the development and testing was done on new 386-class machines and the application worked just fine. Not until customers complained about performance did we learn that they were using 286s with slow hard drives.
A crucial test is to see how the application behaves when it's under a normal load and then under stress. The definition of stress, of course, will be derived from your business requirements, but for a web-enabled application stress could be caused by a spike in the number of transactions, a few very large transactions at the same time, or a large number of almost identical simultaneous transactions. The goal is to see what happens when the application is pushed to substantially more than the basic requirements. Stress testing is often put off until the end of testing, after everything else that's going to be fixed has been. Unfortunately, that leaves little time for repairs when the requirements specify 40 simultaneous users and you find that performance becomes unacceptable at 50.
Finally, Marick (1997) points out two common omissions in many test plans: the installation procedures and the documentation. Everyone has tried to follow installation instructions that were missing a key step or two, and we've all paged through incomprehensible documentation. Although those documents may have been written by a professional technical writer, they probably weren't tested by a real user. Bad installation instructions immediately lower expectations of the product, and poorly organized or written documentation certainly doesn't help a confused or irritated customer feel better.
Testing installation procedures and documentation is a good way to avoid making a bad first impression or making a bad situation worse.
Test Plan Terminology
TYPES OF SOFTWARE TESTS
The V-Model of testing identifies five software testing phases, each with a certain type of test associated with it. Each testing phase and each individual test should have specific entry criteria that must be met before testing can begin and specific exit criteria that must be met before the test or phase can be certified as successful. The entry and exit criteria are defined by the Test Coordinators and listed in the Test Plan.
UNIT TESTING
A series of stand-alone tests is conducted during Unit Testing. Each test examines an individual component that is new or has been modified. A unit test is also called a module test because it tests the individual units of code that comprise the application. Each test validates a single module that, based on the technical design documents, was built to perform a certain task with the expectation that it will behave in a specific way or produce specific results. Unit tests focus on functionality and reliability, and the entry and exit criteria can be the same for each module or specific to a particular module. Unit testing is done in a test environment prior to system integration. If a defect is discovered during a unit test, the severity of the defect will dictate whether or not it will be fixed before the module is approved.
Sample Entry and Exit Criteria for Unit Testing
SYSTEM TESTING
System Testing tests all components and modules that are new, changed, affected by a change, or needed to form the complete application. The system test may require involvement of other systems, but this should be minimized as much as possible to reduce the risk of externally-induced problems. Testing the interaction with other parts of the complete system comes in Integration Testing. The emphasis in system testing is validating and verifying the functional design specification and seeing how all the modules work together. For example, the system test for a new web interface that collects user input for addition to a database doesn't need to include the database's ETL application - processing can stop when the data is moved to the data staging area, if there is one.
The first system test is often a smoke test. This is an informal, quick-and-dirty run through of the application's major functions without bothering with details. The term comes from the hardware testing practice of turning on a new piece of equipment for the first time and considering it a success if it doesn't start smoking or burst into flame.
System testing requires many test runs because it entails feature-by-feature validation of behavior using a wide range of both normal and erroneous test inputs and data. The Test Plan is critical here because it contains descriptions of the test cases, the sequence in which the tests must be executed, and the documentation needed to be collected in each run. When an error or defect is discovered, previously executed system tests must be rerun after the repair is made to make sure that the modifications didn't cause other problems. This will be covered in more detail in the section on regression testing.
Sample Entry and Exit Criteria for System Testing
As part of system testing, conformance tests and reviews can be run to verify that the application conforms to corporate or industry standards in terms of portability, interoperability, and compliance with standards. For example, to enhance application portability a corporate standard may be that SQL queries must be written so that they work against both Oracle and DB2 databases.
INTEGRATION TESTING
Integration testing examines all the components and modules that are new, changed, affected by a change, or needed to form a complete system. Where system testing tries to minimize outside factors, integration testing requires involvement of other systems and interfaces with other applications, including those owned by an outside vendor, external partners, or the customer. For example, integration testing for a new web interface that collects user input for addition to a database must include the database's ETL application even if the database is hosted by a vendor - the complete system must be tested end-to-end. In this example, integration testing doesn't stop with the database load; test reads must verify that it was correctly loaded.
Integration testing also differs from system testing in that when a defect is discovered, not all previously executed tests have to be rerun after the repair is made. Only those tests with a connection to the defect must be rerun, but retesting must start at the point of repair if it is before the point of failure.
For example, the retest of a failed FTP process may use an existing data file instead of recreating it if up to that point everything else was OK.
Sample Entry and Exit Criteria for Integration Testing
Integration testing has a number of sub-types of tests that may or may not be used, depending on the application being tested or expected usage patterns.
♦ Compatibility Testing - Compatibility tests ensure that the application works with differently configured systems based on what the users have or may have. When testing a web interface, this means testing for compatibility with different browsers and connection speeds.
♦ Performance Testing - Performance tests are used to evaluate and understand the application's scalability when, for example, more users are added or the volume of data increases. This is particularly important for identifying bottlenecks in high-usage applications. The basic approach is to collect timings of the critical business processes while the test system is under a very low load (a 'quiet box' condition) and then collect the same timings with progressively higher loads until the maximum required load is reached. For a data retrieval application, reviewing the performance pattern may show that a change needs to be made in a stored SQL procedure or that an index should be added to the database design.
♦ Stress Testing - Stress testing is performance testing at higher than normal simulated loads. Stressing runs the system or application beyond the limits of its specified requirements to determine the load under which it fails and how it fails. A gradual performance slow-down leading to a non-catastrophic system halt is the desired result, but if the system will suddenly crash and burn it's important to know the point where that will happen. Catastrophic failure in production means beepers going off, people coming in after hours, system restarts, frayed tempers, and possible financial losses. This test is arguably the most important test for mission-critical systems.
♦ Load Testing - Load tests are the opposite of stress tests. They test the capability of the application to function properly under expected normal production conditions and measure the response times for critical transactions or processes to determine whether they are within limits specified in the business requirements and design documents, or whether they meet Service Level Agreements. For database applications, load testing must be executed on a current production-size database. If some database tables are forecast to grow much larger in the foreseeable future then serious consideration should be given to testing against a database of the projected size.
Performance, stress, and load testing are all major undertakings and will require substantial input from the business sponsors and IT staff in setting up a test environment and designing test cases that can be accurately executed. Because of this, these tests are sometimes delayed and made part of the User Acceptance Testing phase. Load tests especially must be documented in detail so that the tests are repeatable in case they need to be executed several times, to ensure that new releases or changes in database size do not push response times beyond prescribed requirements and Service Level Agreements.
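As a rough illustration of the 'quiet box, then progressively higher loads' approach described above, here is a minimal timing sketch; run_critical_transaction is a hypothetical stand-in for whatever business process is being measured:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from statistics import mean

    def run_critical_transaction():
        # Stand-in for the real business process (e.g., an HTTP request or a query).
        time.sleep(0.05)

    def timed_run():
        start = time.perf_counter()
        run_critical_transaction()
        return time.perf_counter() - start

    # Collect timings at a very low load first, then at progressively higher loads.
    for simulated_users in (1, 5, 10, 20, 40):
        with ThreadPoolExecutor(max_workers=simulated_users) as pool:
            timings = list(pool.map(lambda _: timed_run(), range(simulated_users)))
        print(f"{simulated_users:>3} users: average {mean(timings):.3f}s, "
              f"worst {max(timings):.3f}s")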
USER ACCEPTANCE TESTING (UAT)

User Acceptance Testing is also called Beta testing, application testing, and end-user testing. Whatever you choose to call it, it's where testing moves from the hands of the IT department into those of the business users. Software vendors often make extensive use of Beta testing, some more formally than others, because they can get users to do it for free.

By the time UAT is ready to start, the IT staff has resolved, in one way or another, all the defects they identified. Regardless of their best efforts, though, they probably haven't found all the flaws in the application. A general rule of thumb is that no matter how bulletproof an application seems when it goes into UAT, a user somewhere can still find a sequence of commands that will produce an error.

Nothing is foolproof. Fools are just too darn clever. - anonymous

To be of real use, UAT cannot be random users playing with the application. A mix of business users with varying degrees of experience and subject matter expertise needs to actively participate in a controlled environment. Representatives from the group work with Testing Coordinators to design and conduct tests that reflect activities and conditions seen in normal business usage. Business users also participate in evaluating the results. This ensures that the application is tested in real-world situations and that the tests cover the full range of business usage. The goal of UAT is to simulate realistic business activity and processes in the test environment.

A phase of UAT called "Unstructured Testing" will be conducted whether or not it's in the Test Plan. Also known as guerilla testing, this is when business users bash away at the keyboard to find the weakest parts of the application. In effect, they try to break it. Although it's a free-form test, it's important that participating users understand they must be able to reproduce the steps that led to any errors they find; otherwise the findings are of no use.

A common occurrence in UAT is that once the business users start working with the application, they find that it doesn't do exactly what they want it to do, or that it does something that, although correct, is not quite optimal. Investigation finds that the root cause is in the Business Requirements, so the users ask for a change. UAT is when change control must be most strictly enforced, but change control is beyond the scope of this paper. Suffice it to say that scope creep is especially dangerous in this late phase and must be avoided.

Sample Entry and Exit Criteria for User Acceptance Testing

PRODUCTION VERIFICATION TESTING

Production verification testing is a final opportunity to determine whether the software is ready for release. Its purpose is to simulate the production cutover as closely as possible and, for a period of time, simulate real business activity. As a sort of full dress rehearsal, it should identify anomalies or unexpected changes to existing processes introduced by the new application. For mission-critical applications the importance of this testing cannot be overstated.

The application should be completely removed from the test environment and then completely reinstalled exactly as it will be in the production implementation. Then mock production runs verify that the existing business process flows, interfaces, and batch processes continue to run correctly.
Unlike parallel testing, in which the old and new systems are run side by side, mock processing may not provide accurate data handling results due to limitations of the testing database or the source data.

Sample Entry and Exit Criteria for Production Verification Testing

REGRESSION TESTING

Bugs will appear in one part of a working program immediately after an 'unrelated' part of the program is modified. - Murphy

Regression testing is also known as validation testing and provides a consistent, repeatable validation of each change to an application under development or being modified. Each time a defect is fixed, the potential exists to inadvertently introduce new errors, problems, and defects. An element of uncertainty is introduced about the ability of the application to repeat everything that went right up to the point of failure. Regression testing is the selective retesting of an application or system that has been modified, to ensure that no previously working components, functions, or features fail as a result of the repairs (a minimal sketch of this selective approach appears at the end of this section).

Regression testing is conducted in parallel with other tests and can be viewed as a quality control tool to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the change. It is important to understand that regression testing doesn't test that a specific defect has been fixed. Regression testing tests that the rest of the application, up to the point of repair, was not adversely affected by the fix.

Sample Entry and Exit Criteria for Regression Testing
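As referenced above, here is a minimal sketch of the selective-retest idea: each testcase is mapped to the components it exercises, and only the tests connected to the repaired component are queued for a rerun. The component and testcase names are hypothetical.

# Map each testcase to the components it touches (hypothetical names).
TEST_COVERAGE = {
    "TC-LOGIN-01": {"login", "session"},
    "TC-SEARCH-04": {"search", "database"},
    "TC-REPORT-02": {"report", "database"},
    "TC-EXPORT-03": {"report", "ftp"},
}

def tests_to_rerun(repaired_components):
    # Select only the testcases that exercise a repaired component.
    repaired = set(repaired_components)
    return sorted(tc for tc, touched in TEST_COVERAGE.items()
                  if touched & repaired)

# A fix in the report module triggers only the report-related regression tests.
print(tests_to_rerun(["report"]))   # ['TC-EXPORT-03', 'TC-REPORT-02']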
Thursday, 8 November 2007
ISEB/ISTQB Foundation Documents
Below are some documents that can be referred to for your ISEB/ISTQB Foundation in software testing. Click on a link to download the file.

ISEB/ISTQB Foundation Document-1
ISEB/ISTQB Foundation Document-2
ISEB/ISTQB Foundation Document-3
ISEB/ISTQB Foundation Document-4
ISEB/ISTQB Foundation Document-5

To download a file, follow this procedure: when you click a link, you are redirected to megaupload.com. There you need to enter the three characters shown into the adjacent text box and click Download, then wait 45 seconds. After 45 seconds a Free Download button appears; click it to download the file. If you have a paid account you can download directly without waiting. Please get back to me if you still have any problems.

Please note that this material alone is not sufficient for the Foundation-level exam; these documents only give a brief overview of the syllabus.

All The Best
Wednesday, 7 November 2007
ISTQB TEST 8
ISTQB Foundation Level Mock Test 2Duration: 1 hourInstructions:1. Pass criteria will be 60%2. No negative marking1. COTS is known as A. Commercial off the shelf softwareB. Compliance of the softwareC. Change control of the softwareD. Capable off the shelf software2. From the below given choices, which one is the ‘Confidence testing’ A. Sanity testing B. System testingC. Smoke testing D. Regression testing3. ‘Defect Density’ calculated in terms of A. The number of defects identified in a component or system divided by the size of the component or the systemB. The number of defects found by a test phase divided by the number found by that test phase and any other means after wardsC. The number of defects identified in the component or system divided by the number of defects found by a test phaseD. The number of defects found by a test phase divided by the number found by the size of the system4. ‘Be bugging’ is known as A. Preventing the defects by inspectionB. Fixing the defects by debuggingC. Adding known defects by seeding D. A process of fixing the defects by tester5. An expert based test estimation is also known as A. Narrow band DelphiB. Wide band DelphiC. Bespoke DelphiD. Robust Delphi6. When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as: A. Equivalence partitioning B. Boundary value analysisC. Decision tableD. Hybrid analysis7. All of the following might be done during unit testing except A. Desk checkB. Manual support testingC. WalkthroughD. Compiler based testing8. Find the Min number of tests to ensure that each statement is executed at least once A. 5 B. 6 C. 4 D. 89. Which of the following characteristics is primarily associated with software reusability? A. The extent to which the software can be used in other applicationsB. The extent to which the software can be used by many different usersC. The capability of the software to be moved to a different platformD. The capability of one system to be coupled with another system10. Which of the following software change management activities is most vital to assessing the impact of proposed software modifications? A. Baseline identification B. Configuration auditingC. Change control D. Version control11. Which of the following statements is true about a software verification and validation program? I. It strives to ensure that quality is built into software.II. It provides management with insights into the state of a software project.III. It ensures that alpha, beta, and system tests are performed.IV. It is executed in parallel with software development activities.A. I, II&III B.II, III&IV C.I, II&IV D.I, III&IV12. Which of the following is a requirement of an effective software environment? I. Ease of useII. Capacity for incremental implementationIII. Capability of evolving with the needs of a projectIV. Inclusion of advanced toolsA.I, II &III B.I, II &IV C.II, III&IV D.I, III&IV13. A test manager wants to use the resources available for the automated testing of a web application. The best choice is A. Test automater, web specialist, DBA, test leadB. Tester, test automater, web specialist, DBAC. Tester, test lead, test automater, DBAD. Tester, web specialist, test lead, test automater14. A project manager has been transferred to a major software development project that is in the implementation phase. The highest priority for this project manager should be to B. Establish a relationship with the customerC. 
Learn the project objectives and the existing project planD. Modify the project’ s organizational structure to meet the manager’ s management styleE. Ensure that the project proceeds at its current pace15. Change X requires a higher level of authority than Change Y in which of the following pairs? Change X Change YA. Code in development Code in productionB. Specifications during requirements analysis Specifications during systems testC. Documents requested by the technical development group Documents requested by customersD. A product distributed to several sites A product with a single user16. Which of the following functions is typically supported by a software quality information system? I. Record keepingII. System designIII. Evaluation schedulingIV. Error reportingA.I, II&III B.II, III &IV C.I, III &IV D.I, II & IV17. During the testing of a module tester ‘X’ finds a bug and assigned it to developer. But developer rejects the same, saying that it’s not a bug. What ‘X’ should do? A. Report the issue to the test manager and try to settle with the developer.B. Retest the module and confirm the bugC. Assign the same bug to another developerD. Send to the detailed information of the bug encountered and check the reproducibility18. The primary goal of comparing a user manual with the actual behavior of the running program during system testing is to A. Find bugs in the programB. Check the technical accuracy of the documentC. Ensure the ease of use of the documentD. Ensure that the program is the latest version19. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. A. System Testing B. Big-Bang Testing C. Integration Testing D. Unit Testing20. In practice, which Life Cycle model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing. A. Water Fall Model B.V-Model C. Spiral Model D. RAD Model21. Which technique can be used to achieve input and output coverage? It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing. A. Error Guessing B. Boundary Value Analysis C. Decision Table testing D. Equivalence partitioning22. There is one application, which runs on a single terminal. There is another application that works on multiple terminals. What are the test techniques you will use on the second application that you would not do on the first application? A. Integrity, Response time B. Concurrency test, ScalabilityC. Update & Rollback, Response time D. Concurrency test, Integrity23. You are the test manager and you are about the start the system testing. The developer team says that due to change in requirements they will be able to deliver the system to you for testing 5 working days after the due date. You can not change the resources(work hours, test tools, etc.) What steps you will take to be able to finish the testing in time. ( A. Tell to the development team to deliver the system in time so that testing activity will be finish in time.B. Extend the testing plan, so that you can accommodate the slip going to occurC. Rank the functionality as per risk and concentrate more on critical functionality testingD. Add more resources so that the slippage should be avoided24. Item transmittal report is also known as A. 
Incident report B. Release note C. Review report D. Audit report25. Testing of software used to convert data from existing systems for use in replacement systems A. Data driven testing B. Migration testing C. Configuration testing D. Back to back testing26. Big bang approach is related to A. Regression testing B. Inter system testingC. Re-testing D. Integration testing27. Cause effect graphing is related to the standard A. BS7799 B. BS 7925/2 C. ISO/IEC 926/1 D. ISO/IEC 2382/128. “The tracing of requirements for a test level through the layers of a test documentation” done by A. Horizontal tracebility B. Depth tracebilityC. Vertical tracebility D. Horizontal & Vertical tracebilities29. A test harness is a A. A high level document describing the principles, approach and major objectives of the organization regarding testingB. A distance set of test activities collected into a manageable phase of a projectC. A test environment comprised of stubs and drives needed to conduct a testD. A set of several test cases for a component or system under test30. You are a tester for testing a large system. The system data model is very large with many attributes and there are a lot of inter dependencies with in the fields. What steps would you use to test the system and also what are the efforts of the test you have taken on the test plan A. Improve super vision, More reviews of artifacts or program means stage containment of the defects.B. Extend the test plan so that you can test all the inter dependenciesC. Divide the large system in to small modules and test the functionalityD. Test the interdependencies first, after that check the system as a whole31. Change request should be submitted through development or program management. A change request must be written and should include the following criteria. I. Definition of the changeII. Documentation to be updatedIII. Name of the tester or developerIV. Dependencies of the change request.A. I, III and IV B. I, II and III C. II, III and IV D. I, II and IV32. ‘Entry criteria’ should address questions such as I. Are the necessary documentation, design and requirements information available that will allow testers to operate the system and judge correct behavior.II. Is the test environment-lab, hardware, software and system administration support ready?III. Those conditions and situations that must prevail in the testing process to allow testing to continue effectively and efficiently.IV. Are the supporting utilities, accessories and prerequisites available in forms that testers can useA. I, II and IV B. I, II and III C. I, II, III and IV D. II, III and IV.33. “This life cycle model is basically driven by schedule and budget risks” This statement is best suited for A. Water fall model B. Spiral model C. Incremental model D. V-Model34. The bug tracking system will need to capture these phases for each bug. I. Phase injectedII. Phase detectedIII. Phase fixedIV. Phase removedA. I, II and III B. I, II and IV C. II, III and IV D. I, III and IV35. One of the more daunting challenges of managing a test project is that so many dependencies converge at test execution. One missing configuration file or hard ware device can render all your test results meaning less. You can end up with an entire platoon of testers sitting around for days. Who is responsible for this incident?A. Test managers faults onlyB. Test lead faults onlyC. Test manager and project manager faultsD. Testers faults only36. System test can begin when? I. 
The test team completes a three-day smoke test and reports on the results to the system test phase entry meeting II. The development team provides software to the test team 3 business days prior to the start of system testing III. All components are under formal, automated configuration and release management control
A. I and II only B. II and III only C. I and III only D. I, II and III

37. Test charters are used in ________ testing
A. Exploratory testing B. Usability testing C. Component testing D. Maintainability testing

All The Best

ISTQB Foundation Level Mock Test 2 Key
Q.No - Answer: 1 A, 2 C, 3 A, 4 C, 5 B, 6 A, 7 B, 8 B, 9 A, 10 C, 11 C, 12 A, 13 B, 14 B, 15 D, 16 C, 17 D, 18 B, 19 B, 20 B, 21 D, 22 C, 23 C, 24 B, 25 B, 26 D, 27 B, 28 A, 29 C, 30 A, 31 D, 32 A, 33 D, 34 B, 35 A, 36 D, 37 A
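A quick check on question 3: defect density is conventionally computed as the number of defects identified in a component or system divided by the size of that component or system, i.e.

defect density = defects found / size (commonly expressed per KLOC or per function point)

so, for example, 30 defects found in a 15 KLOC component gives a density of 2 defects per KLOC, which is why option A is the keyed answer.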
ISTQB TEST 7
ISTQB Foundation Level Mock Test 1Duration: 1 hourInstructions:1. Pass criteria will be 60% , 2. No negative marking1. ___________ Testing will be performed by the people at client own locations (1M)A. Alpha testing B. Field testing C. Performance testing D. System testing2. System testing should investigate (2M)A. Non-functional requirements only not Functional requirementsB. Functional requirements only not non-functional requirementsC. Non-functional requirements and Functional requirementsD. Non-functional requirements or Functional requirements3. Which is the non-functional testing (1M)A. Performance testing B. Unit testing C. Regression testing D. Sanity testing4. Who is responsible for document all the issues, problems and open point that were identified during the review meeting (2M) A. Moderator B. Scribe C. Reviewers D. Author5. What is the main purpose of Informal review (2M) A. Inexpensive way to get some benefit B. Find defects C. Learning, gaining understanding, effect findingD. Discuss, make decisions, solve technical problems6. Purpose of test design technique is (1M) A. Identifying test conditions only, not Identifying test casesB. Not Identifying test conditions, Identifying test cases onlyC. Identifying test conditions and Identifying test casesD. Identifying test conditions or Identifying test cases7. ___________ technique can be used to achieve input and output coverage (1M) A. Boundary value analysis B. Equivalence partitioning C. Decision table testing D. State transition testing 8. Use cases can be performed to test (2M) A. Performance testing B. Unit testing C. Business scenarios D. Static testing9. ________________ testing is performed at the developing organization’s site (1M) A. Unit testing B. Regression testing C. Alpha testing D. Integration testing10. The purpose of exit criteria is (2M)A. Define when to stop testingB. End of test level C. When a set of tests has achieved a specific pre conditionD. All of the above11. Which is not the project risks (2M) A. Supplier issues B. Organization factors C. Technical issues D. Error-prone software delivered12. Poor software characteristics are (3M) A. Only Project risksB. Only Product risksC. Project risks and Product risksD. Project risks or Product risks13. ________ and ________ are used within individual workbenches to produce the right output products. (2M) A. Tools and techniques B. Procedures and standards C. Processes and walkthroughs D. Reviews and update14. The software engineer's role in tool selection is (3M) A. To identify, evaluate, and rank tools, and recommend tools to managementB. To determine what kind of tool is needed, then find it and buy itC. To initiate the tool search and present a case to managementD. To identify, evaluate and select the tools15. A _____ is the step-by-step method followed to ensure that standards are met (2M) A. SDLC B. Project Plan C. Policy D. Procedure16. Which of the following is the standard for the Software product quality (1M) A. ISO 1926 B. ISO 829 C. ISO 1012 D. ISO 102817. Which is not the testing objectives (1M) A. Finding defectsB. Gaining confidence about the level of quality and providing informationC. Preventing defects.D. Debugging defects18. Bug life cycle (1M) A. Open, Assigned, Fixed, ClosedB. Open, Fixed, Assigned, ClosedC. Assigned, Open, Closed, FixedD. Assigned, Open, Fixed, Closed19. Which is not the software characteristics (1M) A. Reliability B. Usability C. Scalability D. Maintainability20. Which is not a testing principle (2M) A. Early testing B. 
Defect clustering C. Pesticide paradox D. Exhaustive testing21. ‘X’ has given a data on a person age, which should be between 1 to 99. Using BVA which is the appropriate one (3M) A. 0,1,2,99 B. 1, 99, 100, 98 C. 0, 1, 99, 100 D. –1, 0, 1, 99 22. Which is not the fundamental test process (1M) A. Planning and control B. Test closure activitiesC. Analysis and design D. None 23. Which is not a Component testing (2M) A. Check the memory leaks B. Check the robustness C. Check the branch coverage D. Check the decision tables24. PDCA is known as (1M) A. Plan, Do, Check, Act B. Plan, Do, Correct, ActC. Plan, Debug, Check, Act D. Plan, Do, Check, Accept 25. Contract and regulation testing is a part of (2M) A. System testing B. Acceptance testing C. Integration testing D. Smoke testing26. Which is not a black box testing technique (1M) A. Equivalence partition B. Decision tablesC. Transaction diagrams D. Decision testing 27. Arc testing is known as (2M) A. Branch testing B. Agile testingC. Beta testing D. Ad-hoc testing28. A software model that can’t be used in functional testing (2M)A. Process flow model B. State transaction modelC. Menu structure model D. Plain language specification model29. Find the mismatch (2M) A. Test data preparation tools – Manipulate Data basesB. Test design tools – Generate test inputsC. Requirement management tools – Enables individual tests to be traceable D. Configuration management tools – Check for consistence30. The principle of Cyclomatic complexity, considering L as edges or links, N as nodes, P as independent paths (2M) A. L-N +2PB. N-L +2PC. N-L +PD. N-L +P31. FPA is used to (2M) A. To measure the functional requirements of the projectB. To measure the size of the functionality of an Information systemC. To measure the functional testing effortD. To measure the functional flow32. Which is not a test Oracle (2M) A. The existing system (For a bench mark)B. The code C. Individual’s knowledgeD. User manual33. Find the correct flow of the phases of a formal review (3M) A. Planning, Review meeting, Rework, Kick offB. Planning, Individual preparation, Kick off, ReworkC. Planning, Review meeting, Rework, Follow upD. Planning, Individual preparation, Follow up, Kick off34. Stochastic testing using statistical information or operational profiles uses the following method (3M) A. Heuristic testing approach B. Methodical testing approachC. Model based testing approachD. Process or standard compliant testing approach35. A project that is in the implementation phase is six weeks behind schedule. The delivery date for the product is four months away. The project is not allowed to slip the delivery date or compromise on the quality standards established for this product. Which of the following actions would bring this project back on schedule? (3M) A. Eliminate some of the requirements that have not yet been implemented.B. Add more engineers to the project to make up for lost work.C. Ask the current developers to work overtime until the lost work is recovered.D. Hire more software quality assurance personnel.36. One person has been dominating the current software process improvement meeting. Which of the following techniques should the facilitator use to bring other team members into the discussion? (3M) A. Confront the person and ask that other team members be allowed to express their opinions.B. Wait for the person to pause, acknowledge the person’ s opinion, and ask for someone else’ s opinion.C. 
Switch the topic to an issue about which the person does not have a strong opinion. D. Express an opinion that differs from the person's opinion in order to encourage others to express their ideas.

37. Maintenance releases and technical assistance centers are examples of which of the following costs of quality? (3M)
A. External failure B. Internal failure C. Appraisal D. Prevention

All the best

ISTQB Foundation Level Mock Test 1 Key
Q.No - Answer: 1 B, 2 C, 3 A, 4 B, 5 A, 6 C, 7 B, 8 C, 9 C, 10 D, 11 D, 12 B, 13 B, 14 A, 15 D, 16 A, 17 D, 18 A, 19 C, 20 D, 21 C, 22 D, 23 D, 24 A, 25 B, 26 D, 27 A, 28 C, 29 D, 30 A, 31 B, 32 B, 33 C, 34 C, 35 A, 36 B, 37 A
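A quick check on question 30: with L edges (links), N nodes, and P connected components, cyclomatic complexity is

V(G) = L - N + 2P

For a single if/else construct the control-flow graph has 4 edges, 4 nodes, and 1 component, giving 4 - 4 + 2 = 2 independent paths, consistent with the keyed answer A.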
ISTQB TEST 6
Foundation Certificate In Software Testing Practice ExamTime allowed: 1 hour40 QUESTIONSQuestionNOTE: Only one answer per question1 We split testing into distinct stages primarily because:a) Each test stage has a different purpose.b) It is easier to manage testing in stages.c) We can run different tests in different environments.d) The more stages we have, the better the testing.2 Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities?a) Regression testingb) Integration testingc) System testingd) User acceptance testing3 Which of the following statements is NOT correct?a) A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch coverage.b) A minimal test set that achieves 100% path coverage will also achieve 100% statement coverage.c) A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.d) A minimal test set that achieves 100% statement coverage will generally detect more faults than one that achieves 100% branch coverage. 4 Which of the following requirements is testable?a) The system shall be user friendly.b) The safety-critical parts of the system shall contain 0 faults.c) The response time shall be less than one second for the specified design load. d) The system shall be built to be portable.5 Analyse the following highly simplified procedure:Ask: “What type of ticket do you require, single or return?”IF the customer wants ‘return’Ask: “What rate, Standard or Cheap-day?”IF the customer replies ‘Cheap-day’Say: “That will be £11:20”ELSESay: “That will be £19:50”ENDIFELSESay: “That will be £9:75”ENDIFNow decide the minimum number of tests that are needed to ensure that allthe questions have been asked, all combinations have occurred and allreplies given.a) 3 b) 4c) 5d) 6 6 Error guessing:a) supplements formal test design techniques.b) can only be used in component, integration and system testing.c) is only performed in user acceptance testing.d) is not repeatable and should not be used.7 Which of the following is NOT true of test coverage criteria?a) Test coverage criteria can be measured in terms of items exercised by a test suite.b) A measure of test coverage criteria is the percentage of user requirements covered.c) A measure of test coverage criteria is the percentage of faults found.d) Test coverage criteria are often used when specifying test completion criteria.8 In prioritising what to test, the most important objective is to:a) find as many faults as possible.b) test high risk areas.c) obtain good test coverage.d) test whatever is easiest to test.9 Given the following sets of test management terms (v-z), and activity descriptions (1-5), which one of the following best pairs the two sets?v – test controlw – test monitoringx - test estimationy - incident managementz - configuration control1 - calculation of required test resources2 - maintenance of record of test results3 - re-allocation of resources when tests overrun4 - report on deviation from test plan5 - tracking of anomalous test resultsa) v-3,w-2,x-1,y-5,z-4b) v-2,w-5,x-1,y-4,z-3c) v-3,w-4,x-1,y-5,z-2 d) v-2,w-1,x-4,y-3,z-510 Which one of the following statements about system testing is NOT true?a) System tests are often performed by independent teams.b) Functional testing is used more than structural testing.c) Faults found during system tests can be very expensive to fix.d) End-users should be involved in system tests.11 Which of the following is 
false?a) Incidents should always be fixed.b) An incident occurs when expected and actual results differ.c) Incidents can be analysed to assist in test process improvement.d) An incident can be raised against documentation.12 Enough testing has been performed when:a) time runs out.b) the required level of confidence has been achieved.c) no more faults are found.d) the users won’t find any serious faults.13 Which of the following is NOT true of incidents?a) Incident resolution is the responsibility of the author of the software under test.b) Incidents may be raised against user requirements.c) Incidents require investigation and/or correction.d) Incidents are raised when expected and actual results differ.14 Which of the following is not described in a unit test standard?a) syntax testingb) equivalence partitioningc) stress testing d) modified condition/decision coverage15 Which of the following is false?a) In a system two different failures may have different severities.b) A system is necessarily more reliable after debugging for the removal of a fault.c) A fault need not affect the reliability of a system.d) Undetected errors may lead to faults and eventually to incorrect behaviour.16 Which one of the following statements, about capture-replay tools, is NOT correct?a) They are used to support multi-user testing.b) They are used to capture and animate user requirements.c) They are the most frequently purchased types of CAST tool.d) They capture aspects of user behaviour.17 How would you estimate the amount of re-testing likely to be required?a) Metrics from previous similar projectsb) Discussions with the development teamc) Time allocated for regression testingd) a & b 18 Which of the following is true of the V-model?a) It states that modules are tested against user requirements.b) It only models the testing phase.c) It specifies the test techniques to be used.d) It includes the verification of designs. 
19 The oracle assumption:a) is that there is some existing system against which test output may be checked.b) is that the tester can routinely identify the correct outcome of a test.c) is that the tester knows everything about the software under test.d) is that the tests are reviewed by experienced testers.20 Which of the following characterises the cost of faults?a) They are cheapest to find in the early development phases and the most expensive to fix in the latest test phases.b) They are easiest to find during system testing but the most expensive to fix then.c) Faults are cheapest to find in the early development phases but the most expensive to fix then.d) Although faults are most expensive to find during early development phases, they are cheapest to fix then.21 Which of the following should NOT normally be an objective for a test?a) To find faults in the software.b) To assess whether the software is ready for release.c) To demonstrate that the software doesn’t work.d) To prove that the software is correct.22 Which of the following is a form of functional testing?a) Boundary value analysisb) Usability testingc) Performance testingd) Security testing23 Which of the following would NOT normally form part of a test plan?a) Features to be testedb) Incident reportsc) Risksd) Schedule24 Which of these activities provides the biggest potential cost saving from the use of CAST?a) Test managementb) Test designc) Test executiond) Test planning25 Which of the following is NOT a white box technique?a) Statement testingb) Path testingc) Data flow testingd) State transition testing26 Data flow analysis studies:a) possible communications bottlenecks in a program.b) the rate of change of data values as a program executes.c) the use of data on paths through the code.d) the intrinsic complexity of the code.27 In a system designed to work out the tax to be paid:An employee has £4000 of salary tax free. 
The next £1500 is taxed at 10%The next £28000 is taxed at 22%Any further amount is taxed at 40%To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?a) £1500b) £32001c) £33501 d) £2800028 An important benefit of code inspections is that they:a) enable the code to be tested before the execution environment is ready.b) can be performed by the person who wrote the code.c) can be performed by inexperienced staff.d) are cheap to perform.29 Which of the following is the best source of Expected Outcomes for User Acceptance Test scripts?a) Actual resultsb) Program specificationc) User requirementsd) System specification30 What is the main difference between a walkthrough and an inspection?a) An inspection is lead by the author, whilst a walkthrough is lead by a trained moderator.b) An inspection has a trained leader, whilst a walkthrough has no leader.c) Authors are not present during inspections, whilst they are during walkthroughs.d) A walkthrough is lead by the author, whilst an inspection is lead by a trained moderator.31 Which one of the following describes the major benefit of verification early in the life cycle?a) It allows the identification of changes in user requirements.b) It facilitates timely set up of the test environment.c) It reduces defect multiplication.d) It allows testers to become involved early in the project.32 Integration testing in the small:a) tests the individual components that have been developed.b) tests interactions between modules or subsystems.c) only uses components that form part of the live system.d) tests interfaces to other systems.33 Static analysis is best described as:a) the analysis of batch programs.b) the reviewing of test plans.c) the analysis of program code.d) the use of black box testing.34 Alpha testing is:a) post-release testing by end user representatives at the developer’s site.b) the first testing that is performed.c) pre-release testing by end user representatives at the developer’s site.d) pre-release testing by end user representatives at their sites.35 A failure is:a) found in the software; the result of an error.b) departure from specified behaviour.c) an incorrect step, process or data definition in a computer program.d) a human action that produces an incorrect result.36 In a system designed to work out the tax to be paid:An employee has £4000 of salary tax free. The next £1500 is taxed at 10%The next £28000 is taxed at 22%Any further amount is taxed at 40%Which of these groups of numbers would fall into the same equivalence class?a) £4800; £14000; £28000b) £5200; £5500; £28000c) £28001; £32000; £35000d) £5800; £28000; £32000 37 The most important thing about early test design is that it:a) makes test preparation easier.b) means inspections are not required.c) can prevent fault multiplication.d) will find all faults.38 Which of the following statements about reviews is true?a) Reviews cannot be performed on user requirements specifications.b) Reviews are the least effective way of testing code.c) Reviews are unlikely to find faults in test plans.d) Reviews should be performed on specifications, code, and test plans.39 Test cases are designed during:a) test recording.b) test planning.c) test configuration.d) test specification.40 A configuration management system would NOT normally provide:a) linkage of customer requirements to version numbers.b) facilities to compare test results with expected results. 
c) the precise differences in versions of software component source code. d) restricted access to the source code library.

Question number - Correct answer: 1 A, 2 A, 3 D, 4 C, 5 A, 6 A, 7 C, 8 B, 9 C, 10 D, 11 A, 12 B, 13 A, 14 C, 15 B, 16 B, 17 D, 18 D, 19 B, 20 A, 21 D, 22 A, 23 B, 24 C, 25 D, 26 C, 27 C, 28 A, 29 C, 30 D, 31 C, 32 B, 33 C, 34 C, 35 B, 36 D, 37 C, 38 D, 39 D, 40 B
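A quick check on the two tax questions (27 and 36): with the first £4000 tax free, the 10% band covers £4001 to £5500, the 22% band covers £5501 to £33500, and the 40% rate applies from £33501 upward. The boundary values to the nearest pound are therefore 4000/4001, 5500/5501, and 33500/33501, which makes £33501 the valid boundary value test case in question 27, and places £5800, £28000, and £32000 together in the 22% equivalence class in question 36, matching the key.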
Monday, 5 November 2007
ISTQB - Normative References
At the time of publication, the edition indicated was valid. All standards are subject to revision, and parties to agreements based upon this Standard are encouraged to investigate the possibility of applying the most recent edition of the standards listed below. Members of IEC and ISO maintain registers of currently valid International Standards.

- BS 7925-2:1998. Software Component Testing.
- DO-178B:1992. Software Considerations in Airborne Systems and Equipment Certification, Requirements and Technical Concepts for Aviation (RTCA SC167).
- IEEE 610.12:1990. Standard Glossary of Software Engineering Terminology.
- IEEE 829:1998. Standard for Software Test Documentation.
- IEEE 1008:1993. Standard for Software Unit Testing.
- IEEE 1012:1986. Standard for Verification and Validation Plans.
- IEEE 1028:1997. Standard for Software Reviews and Audits.
- IEEE 1044:1993. Standard Classification for Software Anomalies.
- IEEE 1219:1998. Software Maintenance.
- ISO/IEC 2382-1:1993. Data processing - Vocabulary - Part 1: Fundamental terms.
- ISO 9000:2000. Quality Management Systems – Fundamentals and Vocabulary.
- ISO/IEC 9126-1:2001. Software Engineering – Software Product Quality – Part 1: Quality characteristics and sub-characteristics.
- ISO/IEC 12207:1995. Information Technology – Software Life Cycle Processes.
- ISO/IEC 14598-1:1996. Information Technology – Software Product Evaluation - Part 1: General Overview.