Thursday, January 31, 2008

ISTQB Foundation Course Content


Fundamentals of Software Testing
  • Grasping the software systems context
  • Identifying causes of software defects
  • Bug
  • Defect
  • Error
  • Failure
  • Fault
  • Mistake
  • Quality
  • Risk
Ensuring Software Success Through Testing
The key objectives of testing
  • Finding defects during development
  • Providing confidence and information
Adhering to seven testing principles
  • Presence of defects
  • Exhaustive testing
  • Early testing
  • Defect clustering
  • Pesticide paradox
  • Context dependent
  • Absence-of-errors fallacy
Applying common sense processes
  • Planning and controlling
  • Analyzing and designing
  • Implementing and executing
  • Evaluating exit criteria and reporting
  • Closing activities
Coping with the psychology of testing
  • Contrasting developer vs. tester mindset
  • Discerning levels of independence
Testing and the Software Life Cycle
Distinguishing software development models
  • Adapting to V-model and iterative models
  • Performing tests within a life cycle model
Conducting the main test levels
  • Component
  • Integration
  • System
  • Acceptance
Comparing four software test types
  • Recognizing functional and structural tests
  • Performing non-functional testing
  • Analyzing software structure/architecture
  • Conducting confirmation and regression tests
Performing maintenance testing
  • Identifying reasons for maintenance testing
  • Modification
  • Migration
  • Retirement
Finding Defects with Static Techniques
Comparing static analysis to dynamic testing
  • Detection
  • Correction
  • Improvement
Differentiating various review types
  • Informal
  • Technical
  • Walkthrough
  • Inspection
Leveraging Test Design Techniques
Differentiating various "specifications"
  • Test design
  • Test case
  • Test procedure
Applying specification-based techniques
  • Equivalence partitioning
  • State transition
  • Boundary value analysis
  • Use case
  • Decision table
Utilizing structure-based techniques
  • Statement coverage
  • Decision coverage
Deploying experience-based knowledge
  • Intuition
  • Experience
  • Knowledge
Managing the Testing Process
Organizing and assigning responsibilities
  • Independence
  • Test leader
  • Tester
Planning and estimating the activities
  • Metrics-based vs. expert-based approach
  • Justifying exit criteria adequacy
  • Standardizing test documentation
Monitoring and controlling test progress
  • Applying common metrics
  • Interpreting test summary reports
Implementing configuration management
  • Ensuring proper version control
  • Generating incident reports
Addressing project and product risks
  • Contractual
  • Organizational
  • Technical
  • Assess
  • Determine
  • Implement
Adopting Test Support Tools
Classifying different types of test tools
  • Test management
  • Static testing
  • Test specification
  • Executing and logging
  • Performance and monitoring
  • Other
Introducing a tool into an organization
  • Recognizing potential benefits and risks
  • Considering special circumstances

Monday, January 28, 2008

Automation Fundamental Concepts & QTP At A Glance

Automation Fundamental Concepts

1. What is Test Automation?

Software test automation is the process of automating the steps of manual test cases using an automation tool or utility, with the goal of shortening the testing life cycle.

When an application undergoes regression testing manually, some steps might be missed or skipped; automation avoids this.

Automation also helps avoid human errors and expedites the testing process.

Implementing test automation requires detailed planning and effort.

The time and effort saved result in a shorter test life cycle.

Benefits of Automation

Consistency of Test Execution

Reducing the duration of regression test cycles

Data driven testing

Repeatability

Coverage

Reliability

Reusability of test wares

The automation life cycle is a subset of the overall test life cycle.

Automation planning can be initiated in parallel with the test planning phase.

Factors to be considered in automation planning:

Stability of AUT (Application under test)

Number of regression cycles to be performed

Compatibility of the application platform with the testing tools

Cost-benefit analysis (ROI); a break-even sketch follows this list

Availability of skilled resources
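
On the cost-benefit point, the analysis is often a simple break-even calculation. Here is a minimal sketch in Python; every figure in it is a hypothetical placeholder, not a benchmark:

```python
# Break-even model for automating a regression suite.
# Every number here is an illustrative assumption.

manual_cost_per_cycle = 40.0    # person-hours to run the suite manually
automation_build_cost = 400.0   # one-time scripting effort (person-hours)
automated_cost_per_cycle = 4.0  # person-hours to run and review automated results
maintenance_per_cycle = 2.0     # script upkeep per regression cycle

def cumulative_cost(cycles: int, automated: bool) -> float:
    """Total person-hours spent after the given number of regression cycles."""
    if automated:
        return automation_build_cost + cycles * (automated_cost_per_cycle
                                                 + maintenance_per_cycle)
    return cycles * manual_cost_per_cycle

# Find the first regression cycle where automation becomes cheaper.
cycle = 1
while cumulative_cost(cycle, automated=True) >= cumulative_cost(cycle, automated=False):
    cycle += 1
print(f"Automation breaks even after {cycle} regression cycles")
```

With these sample numbers the model reports break-even at cycle 12, which is why the number of planned regression cycles sits right next to ROI in the list above.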

Regression Testing & Automation

2. When is Automation Applicable?

Regression testing cycles are long and iterative.

The application is planned to have multiple releases/builds.

It is a long-running application in which small enhancements and bug fixes keep happening.

Test repeatability is required.

QTP At a Glance

Introduction to QTP (QuickTest Professional)
“The Mercury advanced keyword-driven testing solution”
Technologies Supported
Default Support
1. Standard Windows applications
2. Web objects / Applications
3. ActiveX controls
4. Visual Basic applications

Supported via additional QuickTest add-ins:
1. Java
2. Oracle
3. SAP solutions
4. .NET Windows
5. Web Forms
6. Siebel
7. PeopleSoft
8. Web services
9. Terminal emulator applications

Testing Process with QTP

Quick Test Pro involves 3 main stages:

1. Creating test scripts

2. Running tests

3. Analyzing test results

Creating Tests

Create a test script by recording a manual test scenario on the AUT (Application Under Test) using QTP.

Quick Test Pro graphically displays each step a user performs as a collapsible, icon-based tree in QTP’s Keyword View.

Running Tests & Analyzing Test Results

Running Tests:

Once the test scripts are recorded or created, the next step is to execute them. While running (executing) the tests, Quick Test Pro connects to the website or AUT and performs each operation in the test just as it was performed manually during recording.

Debugging tests: identifying and eliminating defects in the test scripts themselves.

Analyzing Test Results:

Once the test scripts are executed, the test results and summary can be viewed for analysis.

QTP’s Add-in Manager lets users select the technology/environment appropriate for the AUT (Application Under Test) from a variety of supported environments.

Once an add-in is loaded, users can record the application in its supported environment, and QTP then recognizes the objects specific to the AUT as loaded through the Add-in Manager.

It is critical that users know the development technologies/environment of the AUT and load the right add-ins when invoking Quick Test Pro.

Quick Test Professional - Record & Run Modes

Recording Modes

Normal

Analog

Low level

Run Modes

Normal

Fast

Update

Quick Test Professional - Options → General

Best Practices for General Options:

Deselect all check boxes except “Save data for integrating with performance testing…” and “Display Add-in Manager on startup”, which is the default setting.

Click the “Restore Layout” button to reset the screens to the initial layout from when QuickTest was first installed.

Best Practices for Run Mode Options:

Run Mode as Normal:

This ensures that the execution arrow appears, which helps with troubleshooting the tests.

Synchronization with the AUT (Application Under Test) also improves.

Test Results:

Deselect the option “View Results when run session ends”

Mercury Tool Integration:

Select "Allow other Mercury products to run tests and components“

Screen Capture:

Set “Save step screen capture to results” to “On errors and warnings”.

Test Pane:

The Test Pane contains two tabs for viewing the tests:

Keyword View

Expert View

Keyword View:

Quick Test Pro displays your test in the form of a collapsible, icon-based tree.

Expert View:

Quick Test Pro displays the source code (VBScript) of the tests in this view.

Data Table:

The Data Table assists in parameterizing the tests.
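
The idea behind the Data Table is plain data-driven testing: one scripted flow fed by many rows of inputs. QTP itself does this in VBScript via its Data Table object; purely as an illustration of the concept, here is the same pattern sketched in Python with pytest (all the names and values below are hypothetical):

```python
# Data-driven testing: one test body, many data rows -- the same idea
# QTP's Data Table implements. All values below are illustrative only.
import pytest

# Each tuple is one "row" of the data table: (username, password, expected outcome)
LOGIN_ROWS = [
    ("alice", "correct-password", True),
    ("alice", "wrong-password", False),
    ("", "any-password", False),
]

def try_login(username: str, password: str) -> bool:
    """Hypothetical stand-in for driving the AUT's login screen."""
    return username == "alice" and password == "correct-password"

@pytest.mark.parametrize("username,password,expected", LOGIN_ROWS)
def test_login(username, password, expected):
    assert try_login(username, password) == expected
```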

Debug Viewer Pane:

It assists in debugging tests with the help of the Watch Expressions, Variables, and Command tabs.

Quick Test Pro Commands:

Quick Test Pro commands can be chosen from the menu bar or from a toolbar.

File Toolbar:

The File toolbar contains buttons for managing the test.

Test Toolbar:

The Test toolbar contains buttons for the commands used when creating or maintaining tests.

Debug Toolbar:

It contains buttons for the commands used when debugging the steps in the tests.

Action Toolbar:

It lets you view all actions in the test flow, or the details of a selected action.

Thursday, January 10, 2008

Best Practices in Context-Driven Testing/Thinking!

Rita asks, “My test manager has asked me to test an FMCG retailing website. I have never tested such a website before. Can you guide me on how to start testing this website? Can you point me to some best practices that can be followed while testing it? Please help.”

Ritika asks, “I am doing my project work in component-based integration testing and I am new to this field, so I need your help. I have to implement different integration testing techniques like UML-based, dataflow specification, etc. Could you please help me?”


Deepak asks, “Hi Debasis, nice articles. I tried searching for context-driven testing but couldn't find any article. I am new to testing and want to pursue my career in testing, a little bit about me. I want to know the framework for context-driven testing. Is it possible to write an article w.r.t the above-mentioned framework? Thanks any way. It is a pleasure reading your article (although I have read only one article)!”

Can you pick out the common thread in the above three questions? Do you have answers to any of the above FAQs? If your answer is no, then read on. In case your answer is yes, this post might still be intended for testers like you; keep reading!

What is a ‘context’?
The WordWeb dictionary on my desktop computer defines a “context” as “the set of facts or circumstances that surround a situation or event”. Definitions apart, a context is the set of circumstances relevant to something under consideration. It is the background information that enhances understanding of the technical and business environments to which the problem relates.

What is ‘context-driven testing’?
Context-driven testing is often considered a flavor of Agile Testing that recommends continuous and creative evaluation of testing opportunities in light of the potential information revealed (mostly by context-based questioning) and the value of that information to the organization right now! It can be defined as testing driven by an understanding of the environment, culture, and intended use of the software. For example, the testing approach for life-critical spacecraft control software (remember the Columbia disaster?) would be completely different from that for low-cost payroll processing software.

A doctor does not use a chopper (as a butcher does) while operating on a patient. Instead, he uses a scalpel. Both the chopper and the scalpel are sharp knives meant for cutting, but they cannot replace each other; each has its use in its own specific area (context). A chopper cannot be used to operate on a patient, nor a scalpel to slaughter a pig! As the context changes, the effectiveness of the equipment (practices/approaches) also changes. So there is no such thing as one-approach-fits-all! Approaches need to change as the situation and circumstances change.

As James Bach puts it - “Context-driven” means to match your solutions to your problems as your problems change. Good practice is not a matter of popularity. It’s a matter of skill and context.

I believe that context-driven testing starts with context-driven thinking! Context-driven testers never try to fit the same solution to different problems and still hope that they will eventually succeed! Rather, they try to find a different solution that might solve the new problem and follow it! Remember, there is no “master key” (a best practice that succeeds in *every* case, no matter what the context is) in software testing. [A master key is a key that secures entrance everywhere by fitting multiple locks!]

The Seven Basic Principles of the Context-Driven School!
1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
3. People, working together, are the most important part of any project's context.
4. Projects unfold over time in ways that are often not predictable.
5. The product is a solution. If the problem isn't solved, the product doesn't work.
6. Good software testing is a challenging intellectual process.
7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Context-driven testers agree with these principles and follow them while solving testing problems! In case you are interested in knowing about other schools of software testing you might consider reading this post by Dr. Cem Kaner.

What a context-driven tester like me might do when allotted to a testing assignment?
1. A context-driven tester might start by asking a bunch of context-based/context-related questions in an attempt to know the context better and deduce information about the mission, aims, and objectives of the testing assignment! As it is said, proper questioning can solve half of the problem; and a context-driven tester knows this fact clearly from experience and practice!
2. Somebody once said that the only thing that does not change in this world is “the word change” itself! This applies to software development too. Projects keep changing shape as development continues. A smart context-driven tester knows and understands that it is wise to be flexible and change the testing strategy as the context changes during development. He can change his mind and the way he thinks once he realizes that the context has changed and is not the same as it was before!
3. A context-driven tester understands that the testing practices/approaches/procedures that worked in an earlier testing project might or might not work in the current assignment, simply because the contexts of the two projects might not be the same! A context-driven tester clearly knows that what looked like the (so-called) “best practice” in the earlier project might suddenly lose its value and become a “bad practice” once the context is different!
4. A context-driven tester also understands that context-driven testing is a set of values rather than a process or technique. It revolves around the fact that software users are human beings with diverse preferences, needs, abilities and limitations. A program that works well for one person in a given situation might prove inadequate or inappropriate for another person or situation.

“Survival of the fittest” [Charles Darwin called it “natural selection”, or the preservation of favored races in the struggle for life, in his theory of evolution in the book “The Origin of Species”] is a phrase that refers to the process by which heritable traits that are favorable become more common in successive generations of a population of reproducing organisms, while heritable traits that are unfavorable become less common! In other words, those who are better equipped with the power of adaptation survive, and the rest perish! In a software development environment where the context is unstable and likely to change, the testers who are better trained to adapt and change gears with the changing context have a better chance of success than those who are unable to do so! It is all about adjusting yourself (the tester) to the situation, rather than adjusting the situation to your comfort level! We can't change this world. The world won't change until we change. The questions are:
» "Will we change"?
» “Are we prepared to evolve”?
» “Are we prepared to change our practices and approaches to testing when the context has changed”?
» “Are we prepared to stop expecting any best practice to come to our rescue and do wonders for us even when the context in question is entirely different from the case where these practices had been successful”?

What do you think? Do you let the *context* drive your testing approach or the *best practices*? I am excited to hear your views. It is time to get vocal and let out your views/opinions by commenting.


Beta Testing Crashed Mozilla Firefox 3 Beta 1!

“The best software tester isn’t the one who finds the most bugs in the software or who embarrasses the most programmers. The best tester is the one who gets the most bugs fixed.” - (Dr. Cem Kaner: The Bug Advocacy)


After an arduous alpha development period that included no fewer than nine milestone releases, Mozilla finally announced Firefox 3 Beta 1 on November 19th, 2007. Mike Beltzner has a comprehensive post on DevNews, which reminds readers that the Firefox 3 Beta 1 milestone release is intended for testing purposes only and is not for casual users! According to Mozilla developers, the Firefox 3 Beta 1 milestone fixes over 11,000 bugs and has almost 2 million lines of code changed in comparison with the current Firefox 2.x browser.


Here are a few features and changes (as claimed by Mozilla) in this milestone that aroused my curiosity to test the beta version:

» Improved security features such as: better presentation of website identity and security, malware protection, stricter SSL error pages, anti-virus integration in the download manager, and version checking for insecure plugins.
» Improved ease of use through: better password management, easier add-on installation, new download manager with resumable downloading, full page zoom, animated tab strip, and better integration with Windows Vista and Mac OS X.
» Richer personalization through: one-click bookmarking, smart search bookmark folders, direct typing in location bar searches your history and bookmarks for URLs and page titles, ability to register web applications as protocol handlers, and better customization of download actions for file types.
» Improved platform features such as: new graphics and font rendering architecture, major changes to the HTML rendering engine to provide better CSS, float-, and table layout support, native web page form controls, color profile management, and offline application support.
» Performance improvements such as: better data reliability for user profiles, architectural improvements to speed up page rendering, over 300 memory leak fixes, and a new XPCOM cycle collector to reduce entire classes of leaks.


(You can find out more about all of these features in the “What’s New” section of the release notes.)

For all those, who might be new to the concept of Beta Release and Beta Testing, here is a small explanation from Wikipedia:


“A beta version is the first version released outside the organization or community that develops the software, for the purpose of evaluation or real-world black/grey-box testing. The process of delivering a beta version to the users is called beta release. Beta level software generally includes all features, but may also include known issues and bugs of a less serious variety. The users of a beta version are called beta testers. They are usually customers or prospective customers of the organization that develops the software. They receive the software for free or for a reduced price, but act as free testers. The software is released for beta testers so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.”


I love to participate in testing beta products such as this. To make sure I don’t miss an opportunity to get involved in testing a beta release, I have set up a Google Alert to notify me whenever there is one. This time too, I was notified (thanks to Google Alert) as soon as the Firefox 3 Beta 1 milestone was released. I have downloaded and upgraded my Firefox 2 to this beta release and have been testing it since then. Here I must admit that this release seems to be a lot faster (in terms of performance) than its predecessor, thanks to the evidently improved architectural design. On the usability front, Beta 1 enables users to resume a download after the browser is restarted. The tab functionality in this release is improved with a new tab-scrolling feature and the ability to save tabs when closing the browser.


My experience with the Beta 1 browser had been pretty good until today! Today, I was surfing through the blogosphere when suddenly all my open instances of Mozilla Firefox 3.0b1 closed and I was shown the Mozilla Crash Reporter with the typical Crash!Bang!Boom! screen (click for a screenshot of the crash message)! I have submitted the crash to Mozilla with my email ID so that I can be notified when the problem is fixed. Here are the details of the crash report:

[BuildID: 2007110904
CrashTime: 1196180506
Email: debasis_pradhan123[AT]yahoo[DOT]co[DOT]in
InstallTime: 1195841518
Name:
ProductName: Firefox
SecondsSinceLastCrash: 662
URL: http://www.howtocopewithpain.org/blog/
UserID: 2e4e1718-9a38-418d-be09-891e176a371c
Vendor: Mozilla
Version: 3.0b1

This report also contains information about the state of the application when it crashed.]


Pros and Cons of this Crash Report:
Pros – The crash report passes on functional grounds: it tells the user that an unexpected problem has occurred and the browser needs to shut down. The usability of the report is also quite acceptable. It reports the crash fairly well and goes on to say that the browser will try to restore the tabs and windows when it restarts. It allows the reporter to view the report before sending, gives a choice to report it to the developers at Mozilla, and also allows attaching an email ID to be notified once the problem is sorted out. These are a few qualities of a good error report, in my opinion.


Cons – On the negative side, the error report does not include much technical detail. Although it contains some details to identify the crash, it does not include a crash dump or any data that could give a clue about the possible cause of the crash! It just contains basic details like the BuildID, CrashTime, InstallTime, ProductName, SecondsSinceLastCrash, URL, UserID, Vendor, and Version, leaving the tester without much information about the crash as such. Something like a time-stamped list of actions or a stack dump could have been much more useful in giving the tester more information regarding the crash.


Recoverability of the application after the crash: I chose the “Restart Firefox” button on the crash report, and it was indeed able to restore my previous tabs and windows with the last opened URLs. It is good to see the browser recover on its own after the crash.


Reproducibility: In this case, however, reproducing the crash was not as difficult as it seemed at first glance! Looking into the crash report gave me the URL where I had gotten Mozilla Firefox 3 Beta 1 to crash. I tried opening the same URL once again and, bang, it crashed again with the same error message [notice the “SecondsSinceLastCrash: 662” in the crash report]. However, this crash is not always reproducible: I was able to get it 6 out of 10 times while retesting. I have reported the crash in Mozilla’s Bugzilla too and am awaiting their response. I will keep you updated once I get any further communication from the Mozilla developer team.


In the meanwhile, feel free to retest this on your own system and let me know (by commenting) if you are able to reproduce the above crash!



Website Testing - Did you miss anything while Testing?

A lot has been written about website testing till date. But still, website testing is probably one of the most commonly confused topics among testers! Need evidence? Spend a little time searching and you can see tons of queries flooding the Internet (online forums, Usenet groups, Orkut communities, tech corners, etc.) regarding website testing: how a website should be tested, what should be tested, which things should be given priority while testing, what should not be given much importance while testing, and so on. I might well be the trillionth person on this planet to write an article on website testing here! But I am writing this because I am really tired of replying to emails asking me to write an article on website testing in my blog.

However, this article is not an attempt to show you how you should test your website. You should understand that there is no universal rule of testing that can fit all kinds of similar testing assignments. The approach that follows consists of a checklist of items that might be tested while testing a website. This does not mean that following this checklist can guarantee you success while testing *any and every* kind of website. On the contrary, this post is meant to be used as a website testing cheat-sheet, a kind of checklist of items to remember while testing a website! While following this checklist, the chance of success (or failure) greatly depends on your own particular context! Here, then, is the cheat-sheet for website testing:


[A] Functionality Testing:

While testing the functionality of a website, the following areas should be covered.


a) Links/URL Testing: There are mainly four types of links in most websites (a scripted broken-link check follows this list).

» Internal links [Test the links that point to the pages of the same website.]

» External links [Test the links that point to external websites.]

» Mail links [Test if the email links open the default email client with the recipient email ID already filled in the “To” field.]

» Broken links [Test if any of those links are broken or dead! Free tools like Link Valet (which is convenient for checking a few pages) or Xenulink (convenient for checking a whole site) can be helpful in testing broken links.]
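
Beyond ready-made tools like Link Valet and Xenu, a single-page broken-link check is easy to script yourself. A minimal sketch using only Python's standard library (the starting URL is hypothetical; there is no robots.txt or rate-limit handling here):

```python
# Minimal broken-link checker for one page: fetch it, collect href targets,
# and flag any link that fails to load or answers with an HTTP error.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and not value.startswith(
                        ("mailto:", "javascript:", "#")):
                    self.links.append(value)

def check_links(page_url: str) -> None:
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    parser = LinkCollector()
    parser.feed(html)
    for link in parser.links:
        target = urljoin(page_url, link)  # resolves internal (relative) links
        try:
            urllib.request.urlopen(target, timeout=10)  # raises on HTTP >= 400
        except Exception as error:
            print(f"BROKEN {target} ({error})")

check_links("http://example.com/")  # hypothetical page under test
```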


b) Forms Testing: The web forms should be consistent and should contain all the required input and output controls. Test the integrity of the web forms and the consistency of the variables.


c) Validation Testing:

» You can use tools like W3C validator to test and make sure that you have valid HTML (or XHTML).

» Most of the modern day websites use CSS (Cascading Style Sheet). You can use tools like W3C CSS validator to test and validate the CSS used in your site.

» Test the different fields for field-level validation. Test and validate user inputs like TextBox inputs, ComboBox inputs, DropDownBox selections, KeyDown, KeyPress, KeyUp, etc.
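
Field-level validation lends itself to a simple table of inputs and expected verdicts, including the boundary cases. A minimal sketch, assuming a hypothetical 5-digit ZIP code field:

```python
# Field-level validation probe: feed the field boundary and malformed
# inputs and compare against the expected verdict. The rule is hypothetical.
import re

def validate_zip(value: str) -> bool:
    """Hypothetical field rule: exactly five digits."""
    return bool(re.fullmatch(r"\d{5}", value))

cases = {
    "12345": True,    # valid value
    "1234": False,    # one digit short (boundary)
    "123456": False,  # one digit long (boundary)
    "12a45": False,   # non-digit character
    "": False,        # empty input / required-field check
}

for value, expected in cases.items():
    verdict = validate_zip(value)
    status = "PASS" if verdict == expected else "FAIL"
    print(f"{status}: validate_zip({value!r}) -> {verdict}, expected {expected}")
```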


d) Test the Error Messages: Error messages are an integral part of any well-developed website; they guide the user whenever any wrong/unexpected input is submitted. Testing error messages is very important, as a badly designed error message can misguide the end user about the actual impact of the error! On testing error messages, Ben Simo has a nice article that you might find interesting.


e) Testing optional and mandatory fields: Test if the web forms handle the optional and mandatory fields efficiently. Ideally, the application should not allow you to proceed unless you have filled in ALL the mandatory fields and should not restrict you from proceeding if you have left ANY of those optional fields unfilled!
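
One scripted way to exercise that rule is to submit the form once per mandatory field, omitting that field, and confirm the server rejects each submission. A sketch using Python's standard library; the endpoint, field names, and rejection behavior are all hypothetical and app-specific:

```python
# Mandatory-field check: post the form with each required field omitted
# in turn; acceptance of such a submission would be a defect.
import urllib.error
import urllib.parse
import urllib.request

FORM_URL = "http://example.com/register"  # hypothetical form endpoint
mandatory = {"name": "Rita", "email": "rita@example.com", "password": "s3cret"}

for omitted in mandatory:
    payload = {k: v for k, v in mandatory.items() if k != omitted}
    data = urllib.parse.urlencode(payload).encode()
    try:
        urllib.request.urlopen(FORM_URL, data=data, timeout=10)
        # Many apps re-render the form instead of returning an HTTP error,
        # so in practice you would also inspect the response body here.
        print(f"Check manually: submission without '{omitted}' was not rejected")
    except urllib.error.HTTPError:
        pass  # rejected with an HTTP error, as hoped
```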


f) Database Testing: Most modern-day websites come with a backend database (unless the site consists purely of static web pages). So testing the database for its integrity becomes essential to make sure the website is able to handle data processing effectively.
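
Parts of this can be scripted directly against the database. A minimal referential-integrity probe using Python's built-in sqlite3 module (the database file, tables, and columns are hypothetical):

```python
# Referential-integrity probe: every order row must point at an existing
# customer. Schema, table, and file names are hypothetical.
import sqlite3

conn = sqlite3.connect("shop.db")  # hypothetical backend database
orphans = conn.execute(
    """
    SELECT o.id
    FROM orders AS o
    LEFT JOIN customers AS c ON o.customer_id = c.id
    WHERE c.id IS NULL
    """
).fetchall()
conn.close()

if orphans:
    print(f"Integrity defect: {len(orphans)} orders reference missing customers")
else:
    print("Referential integrity check passed")
```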


g) Cookies Testing: A cookie is information that a website (server side) puts on your hard disk (client side) so that it can remember something about you at a later time. If your website sets cookies on the client machines, then test that cookies and other session tokens are created in a secure and unpredictable way. Poor handling of cookies can result in security holes and vulnerabilities that can be taken advantage of by malicious users and hackers. Here is a nice article on Testing for Cookie and Session Token Manipulation.
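
A quick scripted pass can at least flag cookies that are set without the Secure or HttpOnly attributes, two common contributors to the holes mentioned above. A sketch using Python's standard library (host and path hypothetical):

```python
# Inspect Set-Cookie headers: cookies without Secure or HttpOnly are
# candidates for theft or script access. Host and path are hypothetical.
import http.client

conn = http.client.HTTPSConnection("example.com", timeout=10)
conn.request("GET", "/login")
response = conn.getresponse()

for header in response.headers.get_all("Set-Cookie") or []:
    attrs = header.lower()
    missing = [flag for flag in ("secure", "httponly") if flag not in attrs]
    if missing:
        cookie_name = header.split(";")[0]
        print(f"Weak cookie: {cookie_name} (missing {', '.join(missing)})")
conn.close()
```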


h) Client-side Testing: Test the temporary Internet files on the client-side system to make sure no sensitive data (like passwords or credit card numbers) is being stored on the client system unencrypted or in an insecure way.
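
This check can be partly automated by scanning the browser's cache directory for values that were typed during the test session. A crude sketch; the cache path and probe strings are hypothetical:

```python
# Crude client-side check: search the browser cache for sensitive values
# entered during the test. Path and probe values are hypothetical.
from pathlib import Path

CACHE_DIR = Path.home() / "browser-cache"           # hypothetical cache location
PROBES = [b"s3cret-password", b"4111111111111111"]  # values typed during the test

for path in CACHE_DIR.rglob("*"):
    if not path.is_file():
        continue
    try:
        data = path.read_bytes()
    except OSError:
        continue  # locked or unreadable file; skip it
    for probe in PROBES:
        if probe in data:
            print(f"Sensitive value stored unencrypted: {probe!r} in {path}")
```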

[B] Performance Testing:

IEEE defines performance testing as testing conducted to evaluate the compliance of a system or component with specified performance requirements. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future load/stress testing. Performance testing can be applied to understand the website’s scalability, to find any loopholes in the load balancing, to measure the response time between a request (from the client) and the reply (from the server), and to determine the amount of load/stress the site is able to sustain. Scott Barber’s PerfTestPlus and Chris Loosley’s Web Performance Matters are two great resources for more exhaustive information on performance testing.
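
Before reaching for a full load-testing tool, even a scripted response-time baseline is useful for later comparison. A minimal sketch (the URL and sample count are hypothetical):

```python
# Response-time baseline: time a handful of sequential requests and report
# the average and worst case. URL and sample size are illustrative.
import time
import urllib.request

URL = "http://example.com/"  # hypothetical page under test
samples = []
for _ in range(10):
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=30).read()
    samples.append(time.perf_counter() - start)

print(f"average {sum(samples) / len(samples):.3f}s, worst {max(samples):.3f}s")
```

Note that this is only a baseline, not a load test; concurrency, ramp-up, and think time are what the dedicated tools add on top.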

[C] Connection Speed Testing:

Test the website over various networks: a dial-up connection with a 56-kbps modem (hard to believe, but there are a lot of web users who still use a modem), ISDN, cable, broadband connections with different download speeds, DSL, satellite Internet, etc. If your website performs slowly or loads pages only partially over slower connections, that should be a cause for concern. As testers, we act as representatives of the end users. If we find that even a part of our intended end-user community is going to have problems with the application we are testing, that should be sufficient to raise the issue as a defect!

[D] Web Usability Testing:

If you develop a website which is not user-friendly and is difficult to learn, understand, and navigate, then it won’t be of much use to the user. For example, if your website relies too much on JavaScript for navigation, a browser with JavaScript disabled can render the site unusable. Some of the criteria to keep in mind while testing the usability of a website are:

» Ease of learning (How intuitive and self-explanatory the site is).

» Navigation.

» User satisfaction.

» Web accessibility testing (If all the content and parts of the site are accessible).

» General appearance (Look and feel).

[E] Testing Client-Server Interface:

In web testing, the server-side interface is tested to verify that communication is properly established with the client. In the case of web applications based on an n-tier architecture, the middle-tier business logic APIs should also be tested (even if they are third-party shrink-wrapped software) for any communication/interface malfunction!

[F] Compatibility Testing:

Test your website to verify that its pages render adequately in different browsers (e.g. IE 5, IE 6, IE 7, Firefox 2, Opera, Safari, etc.) on different operating systems (Windows XP, Vista, Linux, Mac, etc.) and on different hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the web pages and can introduce embarrassing (and often costly) bugs. However, an important thing to remember while testing web compatibility is that we should first identify the major customer base for the website and then decide on the main browsers and operating systems to test for. Some typical compatibility tests include testing your application:

» Using various browsers.

» Using various window sizes.

» Using various font sizes.

» Using browsers with CSS, JavaScript turned OFF!

» Using browsers with pop-up blockers!

» On various Operating Systems.

» With different screen resolutions and color depths.

» On various client hardware configurations.

» Using different memory sizes and hard drive space.

» In different network environments.

» Ability to take printouts with different printers (Printer-friendly Versions)

[G] Web Security Testing:

Web security testing is often also referred to as penetration testing. The primary objective of testing the security of a website is to identify potential vulnerabilities/security holes and to patch/repair them. For example, if your website allows files to be uploaded, your web server should have proper automated antivirus checking in place to detect and disable any attempt to upload a virus from the client side. Some of the major aspects of web security testing are listed below (a small network-scanning sketch follows the list):

» Network Scanning.

» Vulnerability Scanning.

» Password Cracking.

» Log Review.

» Integrity Checkers.

» Virus Detection.
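
Of the items above, network scanning is the simplest to illustrate. A minimal TCP port probe in Python; the host and port list are hypothetical, and you should run such a probe only against systems you are authorized to test:

```python
# Minimal TCP port probe for the network-scanning item above. Scan only
# hosts you own or are explicitly authorized to test.
import socket

HOST = "192.0.2.10"                      # hypothetical host (TEST-NET address)
PORTS = [21, 22, 23, 25, 80, 443, 3306]  # a few common service ports

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        if sock.connect_ex((HOST, port)) == 0:  # 0 means the connect succeeded
            print(f"Port {port} is open")
```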

I hope this article helps by giving rough coverage of the areas that need attention while testing a website. Did I miss something that should have been included in the checklist? Feel free to let me know by commenting, and I will update this post along with proper credit to you.


Update: An important aspect of website testing that I had missed including in my cheat-sheet is Testability Testing. Thanks to Michael Bolton for his comment.

Regression Testing Revisited - Thanks to this Interesting Question!

“Software” and “defects” are like two sides of a coin. If you start developing software, chances are very high that you will (although unintentionally) end up leaving some defects in it. Defects in software are almost unavoidable. Software defects are like Christmas combo offers, where you get something for free with the purchase of some product! In the case of a Christmas bonanza offer, you might throw away the free gift if you don’t like it [without spending anything extra!]. I wish software development were as simple as that! In software development, you just can’t discard the unwanted defects as simply, without investing time, effort, money, and whatnot!

As has been said a million times, even (so-called) thorough testing cannot produce 100% defect-free software. As long as there is software, there will be defects in it. And as long as testers keep finding defects, they will have to report them in some form of defect report/bug report so that further action can be taken. As Dr. Cem Kaner puts it - “The aim of writing a problem report (defect report) is to get the defects fixed”.

Just imagine a tester who is very efficient and skillful in finding defects in the application he is testing but is very poor at reporting them. Is it not something like cooking a great dish but serving it on an old dirty plate that leaks and messes up the dining table? What is the point of taking your time and putting in effort to prepare a tasty dish if your guests can’t enjoy it (due to the bad way in which it is served)? The same can happen if a tester is good at finding important defects quickly but ends up logging bad defect reports/bug reports! Chances are that the seriousness of his reported defect will go unnoticed [worse still, the defect might get turned down as “Rejected”, since the programmer looking at the defect report finds it hard to figure out the bug].

The work of a software tester on software projects is much like that of a technician at a diagnostics center. There is no scope for ambiguity when a medical diagnosis is reported. Before a technician reveals his diagnosis, a lot of symptom analysis, critical thinking, cause-effect analysis, study of related syndromes, and mind work goes into it. A diagnostics technician has to be very accurate in reporting the problems he diagnoses. A carelessly delivered “Patient XYZ has perfect symptoms of acute dual renal failure” might kill the patient just out of shock. In any field, effective reporting is much of an art.

Similarly, when diagnosing issues with software while testing, there is hardly any scope for ambiguity. As testers, we have to keep in mind that each reported bug involves some work for the programmers. They have to understand the context of the issue, try to reproduce it, and resolve it once it is reproduced. Unless your defect report is clear and precise, the programmer might find it a tough call to reproduce the defect. And unless the programmer is able to reproduce the defect, there is not much he can do to fix it. But non-reproducibility of the defect on the programmer’s machine does not mean that the defect is gone! It (the defect) is still sitting there (in the code, the design, or the architecture of the project). And there begins the problem. Many a time, a project manager needs to understand which severe issues are open and manage the project accordingly. But he might fail to consider the severity of your reported defect, as it was badly reported in the first place and the manager can’t reproduce it!

For this, testers must distinctly and succinctly report each finding with appropriate severity and priority assigned. [Note: When I say that your defect report must have appropriate severity and priority, you (the tester) must understand that these are perceived severity and priority and are subject to change in the future if the person evaluating the defect (the system analyst, your test lead, the test manager, the project manager) thinks that a different severity/priority is more appropriate in the given context.] Suppose you are a programmer and you see a bug report stating, “The Parser is unreliable” or “The values in the City combo box are not proper”. How would you react to it? Words like *unreliable* and *proper* are relative. What is proper to me might not appear proper to another tester. So the use of words like these can make the defect report ambiguous and confuse the programmer reading it!

Following are few pointers that can be kept in mind to effectively report software issues:

1. It’s all in a Name: Each defect should be clearly identifiable by the defect title. Keep it short, and yet the defect report title should give a descriptive summary of the defect. Use meaningful defect titles that reveal the point of the report without further reading; the key here is to create microcontent that can fare well on its own. Give the maximum amount of information with the least amount of words. Time is finite and programmers/managers are infinitely busy. Blast your defect investigation result into the person reading your defect report at the speed of light!

2. You miss the Context, you miss the Defect: Each defect should be reported after building a proper context. What are the pre-conditions for reproducing the defect? What was the test environment while you got the defect? Software defects are very sensitive. They don’t show up unless all the error triggering conditions are in place and all the pre-requisites are met while setting up the test bed. Try to include all such information in your defect report.

3. Don’t leave a Missing Link in your Defect Report: They say the Waterfall Model is dead in today’s fast-paced, agile era of complex software development. But I would say that following a Waterfall-like approach is not always bad! At least not when writing down the steps required to reproduce the defect. Sequentially write down the steps to reproduce the defect in exact order. Missing a step might result in non-reproducibility of the defect.

4. If it’s not clear, it can get blurry at the other end: Be very clear and precise. Use short and meaningful sentences. As a simple practice, read the defect report aloud before hitting the “Publish”/“Submit” button of your defect tracking tool. If it is not clear and you are finding it difficult to understand how to reproduce the defect, chances are high that the programmer will too.

5. Tell Stories to show Defects: Defect Reporting is another area where a tester’s story telling skills can come in handy. Cite examples wherever necessary, especially if defects are a result of a combination of values. Build scenarios and present your scenarios to strengthen the importance of the defect.

6. Tag your Defect: Give references to specifications (if available) wherever required. e.g. Refer Shopping Cart module on page 137 of SRS Version 4.2.

7. Simplicity is the mother of Elegance: Keep the descriptions simple. It’s never a good idea to use sophisticated words that not everyone may know. Keep your report description simple and easily understandable. Don’t expect the programmer to have dictionary software running on his machine to find the meanings of the highly sophisticated words in your defect report. Save such words for your doctoral thesis in English! ;)

8. Pass Information, not Judgements: Be objective. Report what you are supposed to without passing any kind of judgement in the defect descriptions.

9. Severity and Priority: Thoughtfully assign severity and priority to the defect. It becomes easier to filter out important issues later on if they are already assigned with severities and priorities. Remember, a minor issue can have high priority and vice versa. So give it a thought before assigning a defect with severity and priority.

10. Don’t create a Mess: Maintain basic hygiene while reporting defects: spell-check, grammar-check, style-check. Spell-check your report and read it for clarity once or twice before posting. An error now and then isn’t bad, but a lot of errors in your defect report might send a wrong signal about your credibility as a tester and about your defect reporting skills.

Reporting a defect is no rocket science, but it surely requires a lot of common sense. I have seen people write mini-essays in bug reports, and others who report one-liners. Reported bugs should not add an overhead of understanding for the programmers; they should instead help them reproduce the bug and resolve it effectively.

Security Testers! Are you ready for the Top 10 Security Threats for 2008?

Security Testers, Security Analysts, Penetration Testers, Vulnerability Assessors, pay attention! Security company McAfee has released a list of the top 10 predicted security threats for the year 2008. If you are a tester who gives importance to the security aspects of the product you are testing and who likes to do a security analysis based on the potential risks associated with the components of the AUT (Application Under Test), then this post might interest you. In case you are not a software tester, don’t stop reading yet! Knowledge of such threats can be helpful for any computer user, so read on. :)

McAfee Inc has released its top ten predictions for security threats for 2008. Researchers at McAfee Avert Labs expect an increase in Web dangers and threats targeting Microsoft Corp's Windows Vista operating system, among other new or increased threats. "Threats are increasingly moving to the Web and migrating to newer technologies such as VoIP and instant messaging," said Jeff Green, senior vice president of McAfee Avert Labs and product development. "Professional and organized criminals continue to drive a lot of the malicious activity. As they become increasingly sophisticated, it is more important than ever to be aware and secure when traversing the Web."

A popular proverb says, “you must know your enemy if you want to win the war”! As testers, we must know the security threats before we can plan any strategy to combat them. And I feel it is important to know these threats before we plan a risk-based testing strategy for the application we are testing. Having said that, this list of top ten security threats is NOT exhaustive. These are mere predictions by a reputed security company. We should also keep our eyes and ears open for other possible threats in addition to the ones mentioned in the list. Here is a summarized version of the list of security threats as released by McAfee for the year 2008:

1. Web 2.0 on Target – Attackers have started using Web 2.0 sites as a way to distribute malware and are data-mining the Web, looking for information people share to give their attacks more authenticity. With more and more users looking for this type of website, attackers are adapting their methods and attempting to conduct malware attacks and other malicious actions through these pages. The recent Salesforce and MySpace attacks are pretty edifying, with most attempts targeting users’ login credentials. As a tester, if you are testing a web application or social networking site that uses Web 2.0 standards, then this should be a matter of concern for you!

2. The Botnet Storm – A recently noticed threat known as “Storm” exposed a new trend in malicious attacks on computers. Also known as “Nuwar”, Storm created the largest peer-to-peer botnet ever. It has been the most versatile malware on record. The infection constantly changes its code and file formats, making blocking and removal very difficult for the security technologies that are supposed to protect the data stored on hard drives. A number of PCs were turned into bots after infection. Bots are computer programs that give cyber crooks full control over PCs. Bot programs typically get installed surreptitiously on the PCs of unknowing computer users. More such attacks are to be expected in 2008, as per McAfee.

3. You will hate this IM (Instant Malware) – Instant messaging clients continue to rise in popularity as many Internet users choose Yahoo Messenger, Windows Live Messenger, or Skype to communicate on the web. For several years, researchers have warned of the risk of a self-executing instant-messaging (IM) worm. Such a threat could reach millions of users and circle the globe in a matter of seconds. Although IM malware has existed for years, we have yet to see such a self-executing threat. With the number of IM virus families increasing, 2008 could be the year we witness a devastating self-executing instant malware.

4. It’s all about Money – The threat to virtual economies is outpacing the growth of the threat to the real economy. As virtual objects continue to gain real value, more attackers will look to capitalize on this. The numbers and types of password-stealing Trojans are on the rise, the two favorite targets being: online gaming and banking industries.

5. Bull’s eye on Windows Vista – Once the market share of Windows Vista crosses the threshold of 10% and Vista becomes more prevalent (with the advent of Service Pack 1), professional attackers and malware authors may begin to see an impact on their businesses and expend some effort in exploring ways to circumvent the new operating system’s defense mechanisms. The old threats will still persist, but a new crop is on its way!

6. Virtualization Honeypot – As security vendors continue to embrace virtualization to create new, more resilient defenses to defeat complex threats, researchers, professional hackers, and malware authors will begin looking at ways to circumvent the new defensive technology.

7. VoIP Attack – Attacks on VoIP (Voice over Internet Protocol) applications should increase by 50 percent in 2008, according to McAfee. The technology is still new and defense strategies are lagging, making VoIP a favorite target for professional hackers.

8. Phishers to target less-popular sites – Phishing attacks have always been pretty effective, as they use copies of genuine websites to trick users into entering their sensitive data (like user IDs, passwords, credit card numbers, etc.). Cyber criminals are getting smarter. They have learned that it’s tougher and riskier to target top-tier sites, which are attacked regularly and are prepared to respond quickly. Knowing that a large percentage of people reuse their user names and passwords, malware writers are likely to target less-popular sites more frequently than before, to gain access to primary targets using information gained from secondary-target victims.

9. Beware of Parasites – In 2007, several crimeware authors went old-school to deliver threats like Grum, Virut, and Almanahe: parasitic viruses with a monetary mission. The number of variants of an older parasitic threat, Philis, grew by more than 400 percent, while over 400 variants of a newcomer, Fujacks, were catalogued. McAfee expects continued interest in parasitic malware from the crimeware community, with overall parasitic malware expected to grow by 20 percent in 2008.

10. Adware Attacks – And lastly, this one is like a breath of fresh air. Adware will diminish in 2008, according to McAfee. The combination of lawsuits, better defenses, and the negative connotations associated with advertising through adware started the decline of adware in 2006, and according to McAfee, this decline will continue in 2008. Still, the threat of adware attacks is serious enough to keep it in the top 10 threats list for 2008.

Well, this concludes the list of the top ten security threats for the year 2008 (what a nice way to welcome a new year)! Let’s see if knowledge of these threats can help us (testers) plan a better strategy for our next security test plan when approaching risk-based testing. Wish you all a very happy New Year 2008 ahead.
