Author: ashiqirphan2013

Developer/Tester Support for Developing a Defect Repository

Software engineers and test specialists should follow the examples of engineers in other disciplines who make use of defect data. A requirement for repository development should be a part of testing and/or debugging policy statements.

Forms and templates should be designed to collect the data. Each defect and its frequency of occurrence must be recorded after testing.

Defect monitoring should be done for each on-going project. The distribution of defects will change when changes are made to the process.

Figure 1.5 The Defect Repository and Support for TMM Maturity Goals

Defect data is useful for test planning, which is a TMM level 2 maturity goal. It helps a tester to select applicable testing techniques, design test cases, and allocate the resources needed to detect and remove defects. This allows the tester to estimate testing schedules and costs.

The defect data can also support debugging activities. A defect repository can help in implementing several TMM maturity goals, including

  • Controlling and monitoring of test,
  • Software quality evaluation and control,
  • Test measurement,
  • Test process improvement.

Defect Example: The Coin Problem

Specification for the program calculate_coin_value

This program calculates the total rupee value for a set of coins. The user inputs the number of 25p, 50p, and 1rs coins. There are six different denominations of coins. The program outputs the total rupees and paise value of the coins to the user.

Input  : number_of_coins is an integer

Output : number_of_rupees is an integer
         number_of_paise is an integer

This is a sample informal specification for a simple program that calculates the total value of a set of coins. The program could be a component of an interactive cash register system. This simple example shows

  • Requirements/specification defects,
  • Functional description defects,
  • Interface description defects.

The functional description defects arise because the functional description is ambiguous and incomplete. It does not state that the inputs and outputs must be zero or greater, and that negative values cannot be accepted. Because of these ambiguities and this incompleteness, a checking routine may be omitted from the design. A more formally stated set of preconditions and postconditions is needed in the specification.

A precondition is a condition that must be true in order for a software component to operate properly.

A postcondition is a condition that must be true when a software component completes its operation properly.

The functional description is unclear about the maximum number of coins of each denomination allowed, and the maximum number of rupees and paise allowed as output values. It is not clear from the specification how the user interacts with the program to provide input, and how the output is to be reported.
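The pre- and postconditions above can be stated executably. The following Python sketch is illustrative only (the function name, parameter names, and checks are assumptions, not part of the specification); it shows how a checking routine derived from the conditions would reject the negative inputs the informal specification fails to rule out:

```python
def calculate_coin_value(num_25p, num_50p, num_1rs):
    """Compute the total value of a set of coins as (rupees, paise).

    Precondition:  every coin count is an integer >= 0.
    Postcondition: rupees >= 0 and 0 <= paise < 100.
    """
    counts = (num_25p, num_50p, num_1rs)
    # Precondition check: reject negative or non-integer inputs.
    if not all(isinstance(c, int) and c >= 0 for c in counts):
        raise ValueError("coin counts must be non-negative integers")

    total_paise = num_25p * 25 + num_50p * 50 + num_1rs * 100
    rupees, paise = divmod(total_paise, 100)

    # Postcondition check: the output must be a valid rupees/paise pair.
    assert rupees >= 0 and 0 <= paise < 100
    return rupees, paise
```

With explicit conditions, the missing checking routine can no longer be silently omitted from the design: either the check is present or the conditions are visibly violated.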

1. Design Description for the Coin Problem

Design Description for Program calculate_coin_values

Program calculate_coin_values
    number_of_coins is integer
    total_coin_value is integer
    number_of_rupees is integer
    number_of_paise is integer
    coin_values is array of six integers representing
        each coin value in paise
        initialized to: 25, 25, 100
begin
    initialize total_coin_value to zero
    initialize loop_counter to one
    while loop_counter is less than six
    begin
        output "enter number of coins"
        read (number_of_coins)
        total_coin_value = total_coin_value + number_of_coins * coin_values[loop_counter]
        increment loop_counter
    end
    number_of_rupees = total_coin_value / 100
    number_of_paise = total_coin_value - 100 * number_of_rupees
    output (number_of_rupees, number_of_paise)
end

2. Design Defects in the Coin Problem

Control, logic, and sequencing defects. The defect in this subclass arises from an incorrect “while” loop condition (it should be less than or equal to six).

Algorithmic, and processing defects. These arise from the lack of error checks for incorrect and/or invalid inputs, lack of a path where users can correct erroneous inputs, lack of a path for recovery from input errors.

Data defects. This defect relates to an incorrect value for one of the elements of the integer array, coin_values, which should be 25, 50, 100.

External interface description defects. These are defects arising from the absence of input messages or prompts that introduce the program to the user and request inputs.

3. Coding Defects in the Coin Problem

Control, logic, and sequence defects. These include the loop-variable increment step, which lies outside the scope of the loop. Note that the incorrect loop condition (i < 6) is carried over from design and should be counted as a design defect.

Algorithmic and processing defects. The division operator may cause problems if negative values are divided, although this problem could be eliminated with an input check.

Data Flow defects. The variable total_coin_value is not initialized. It is used before it is defined.

Data Defects. The error in initializing the array coin_values is carried over from design and should be counted as a design defect.

External Hardware, Software Interface Defects. The call to the external function “scanf” is incorrect. The address of the variable must be provided.

Code Documentation Defects. The documentation that accompanies this code is incomplete and ambiguous. It reflects the deficiencies in the external interface description and other defects that occurred during specification and design.
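By way of contrast, the defects catalogued above can all be repaired. The following Python sketch is a hypothetical corrected version (the function names and the three-denomination list are assumptions drawn from the specification): it initializes total_coin_value explicitly, uses the correct coin values 25, 50, 100, iterates over every denomination, and checks its inputs:

```python
# Repaired sketch of calculate_coin_values (illustrative, not the book's code).
# Assumes the three denominations named in the specification: 25p, 50p, 1rs.
COIN_VALUES = [25, 50, 100]  # correct initialization (was 25, 25, 100)

def read_coin_counts(lines):
    """Parse one coin count per denomination, rejecting invalid input."""
    counts = []
    for line in lines:
        n = int(line)          # raises ValueError on non-numeric input
        if n < 0:
            raise ValueError("number of coins cannot be negative")
        counts.append(n)
    return counts

def total_value(counts):
    total_coin_value = 0       # explicit initialization (was missing in code)
    # Iterate over every denomination (the design's "< six" bound skips one).
    for value, n in zip(COIN_VALUES, counts):
        total_coin_value += n * value
    rupees = total_coin_value // 100
    paise = total_coin_value - 100 * rupees
    return rupees, paise
```

For example, total_value([4, 2, 1]) yields (3, 0): four 25p, two 50p, and one 1rs coin come to exactly three rupees.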

The poor quality of this small program is due to defects injected during several phases of the life cycle, for reasons such as lack of education, a poor process, and oversight by the designers and developers.

Defect Classes, the Defect Repository, and Test Design

Defects can be classified in many ways. It is important for an organization to follow a single classification scheme and apply it to all projects.

Some defects will fit into more than one class or category. Because of this problem, developers, testers, and SQA staff should try to be as consistent as possible when recording defect data.

The defect types and frequency of occurrence should be used in test planning and test design. Execution-based testing strategies should be selected that have the strongest possibility of detecting particular types of defects. The four classes of defects are as follows:

  • Requirements and specifications defects,
  • Design defects,
  • Code defects,
  • Testing defects

1. Requirements and Specifications Defects

The beginning of the software life cycle is important for ensuring high quality in the software being developed. Defects injected in early phases can be very difficult to remove in later phases. Since many requirements documents are written using a natural language representation, they may be

  • Ambiguous,
  • Contradictory,
  • Unclear,
  • Redundant,
  • Imprecise.

Some specific requirements/specification defects are:

1.1 Functional Description Defects

The overall description of what the product does, and how it should behave (inputs/outputs), is incorrect, ambiguous, and/or incomplete.

1.2 Feature Defects

A feature is a distinguishing characteristic of a software component or system. Feature defects are due to feature descriptions that are missing, incorrect, incomplete, or unnecessary.

1.3 Feature Interaction Defects

These are due to an incorrect description of how the features should interact with each other.

1.4 Interface Description Defects

These are defects that occur in the description of how the target software is to interface with external software, hardware, and users.

2. Design Defects

Design defects occur when the following are incorrectly designed:

  • System components,
  • Interactions between system components,
  • Interactions between the components and outside software/hardware, or users.

This class includes defects in the design of algorithms, control, logic, data elements, module interface descriptions, and external software/hardware/user interface descriptions. The design defect subclasses are:

2.1 Algorithmic and Processing Defects

These occur when the processing steps in the algorithm as described by the pseudo code are incorrect.

2.2 Control, Logic, and Sequence Defects

Control defects occur when logic flow in the pseudo code is not correct.

2.3 Data Defects

These are associated with incorrect design of data structures.

2.4 Module Interface Description Defects

These defects occur because of incorrect or inconsistent usage of parameter types, incorrect number of parameters or incorrect ordering of parameters.

2.5 Functional Description Defects

The defects in this category include incorrect, missing, or unclear design elements.

2.6 External Interface Description Defects

These are derived from incorrect design descriptions for interfaces with COTS components, external software systems, databases, and hardware devices.

3. Coding Defects

Coding defects are derived from errors in implementing the code. Coding defects classes are similar to design defect classes. Some coding defects come from a failure to understand programming language constructs, and miscommunication with the designers.

3.1 Algorithmic and Processing Defects

Code related algorithm and processing defects include

  • Unchecked overflow and underflow conditions,
  • Comparing inappropriate data types,
  • Converting one data type to another,
  • Incorrect ordering of arithmetic operators,
  • Misuse or omission of parentheses,
  • Precision loss,
  • Incorrect use of signs.
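Two of these defect types can be illustrated in a few lines of Python (a constructed example, not code from the coin program): precision loss through truncating integer division, and misuse of parentheses that changes the effective ordering of arithmetic operators:

```python
# Precision loss: integer division truncates the fractional part.
average = (7 + 8) // 2        # 7, not 7.5 -> use / when precision matters

# Misuse of parentheses: the intended computation was (a + b) * c.
a, b, c = 1, 2, 3
wrong = a + b * c             # 7, because * binds tighter than +
right = (a + b) * c           # 9, the intended result
```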

3.2 Control, Logic and Sequence Defects

This type of defect includes incorrect expression of case statements, incorrect iteration of loops, and missing paths.

3.3 Typographical Defects

These are mainly syntax errors, for example, incorrect spelling of a variable name, that are usually detected by a compiler, self-reviews, or peer reviews.

3.4 Initialization Defects

This type of defect occurs when initialization statements are omitted or are incorrect. This may happen because of misunderstandings or lack of communication between programmers, or between programmers and designers, carelessness, or misunderstanding of the programming environment.

3.5 Data-Flow Defects

Data-Flow defects occur when the code does not follow the necessary data-flow conditions.
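A minimal Python illustration of such a defect (constructed for this example) is a variable that is used before it is defined, together with its repair:

```python
def buggy_total(values):
    for v in values:
        total += v            # data-flow defect: total is used before it is defined
    return total

def fixed_total(values):
    total = 0                 # the definition now precedes every use
    for v in values:
        total += v
    return total
```

Calling buggy_total fails at run time (Python raises UnboundLocalError), which is exactly the "use before define" condition the defect class describes; some languages would instead silently compute with a garbage value.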

3.6 Data Defects

These are indicated by incorrect implementation of data structures.

3.7 Module Interface Defects

Module interface defects occur because of the use of incorrect or inconsistent parameter types, an incorrect number of parameters, or improper ordering of parameters.

3.8 Code Documentation Defects

When the code documentation does not describe what the program actually does, or is incomplete or ambiguous, it is called a code documentation defect.

3.9 External Hardware, Software Interfaces Defects

These defects occur because of problems related to

  • System calls,
  • Links to databases,
  • Input/output sequences,
  • Memory usage,
  • Resource usage,
  • Interrupts and exception handling,
  • Data exchanges with hardware,
  • Protocols,
  • Formats,
  • Interfaces with build files,
  • Timing sequences.

4. Testing Defects

Test plans, test cases, test harnesses, and test procedures can also contain defects. These defects are called testing defects. Defects in test plans are best detected using review techniques.

4.1 Test Harness Defects

In order to test software, at the unit and integration levels, auxiliary code must be developed. This is called the test harness or scaffolding code. The test harness code should be carefully designed, implemented, and tested since it is a work product and this code can be reused when new releases of the software are developed.

4.2 Test Case Design and Test Procedure Defects

These consist of incorrect, incomplete, missing, or inappropriate test cases and test procedures.

Origins of Defects

Defects have negative effects on software users. Software engineers work very hard to produce high-quality software with a low number of defects.

Figure 1.4 Origins of Defects

  1. Education: The software engineer did not have the proper educational background to prepare the software artifact.
  2. Communication: The software engineer was not informed about something by a colleague.
  3. Oversight: The software engineer omitted to do something.
  4. Transcription: The software engineer knows what to do, but makes a mistake in doing it.
  5. Process: The process used by the software engineer misdirected his/her actions.

The impact of a defect on the user ranges from a minor inconvenience to rendering the software unfit for use. Testers have to discover these defects before the software is put into operation. The results of the tests are analysed to determine whether the software has behaved correctly.

In this scenario a tester develops hypotheses about possible defects. Test cases are then designed based on the hypotheses. The hypotheses are used to,

  • Design test cases.
  • Design test procedures.
  • Assemble test sets.
  • Select the testing levels suitable for the tests.
  • Evaluate the results of the tests.

1. Fault Model

A fault (defect) model can be described as a link between the error made, and the fault/defect in the software.

2. Defect Repository

To increase the effectiveness of their testing and debugging processes, software organizations need to initiate the creation of a defect database, or defect repository. The defect repository supports storage and retrieval of defect data from all projects in a centrally accessible location.

The Tester’s Role in a Software Development Organization

The tester’s job is to

  • Reveal defects,
  • Find weak points,
  • Find inconsistent behaviour,
  • Find circumstances where the software does not work as expected.

It is difficult for developers to effectively test their own code. A tester needs very good programming experience in order to understand how code is constructed, and to know where, and what types of, defects could occur.

A tester should work with the developers to produce high-quality software that meets the customers’ requirements.

Teams of testers and developers are very common in industry, and projects should have a correct developer/tester ratio. The ratio will vary depending on

  • Available resources,
  • Type of project,
  • TMM level,
  • Nature of the project,
  • Project schedules.

Testers also need to work with requirements engineers to make sure that requirements are testable, and to plan for system and acceptance tests.

Testers also need to work with designers to plan for integration and unit test.

Test managers need to cooperate with project managers in order to develop reasonable test plans, and with upper management to provide input for the development and maintenance of organizational

  • Testing standards,
  • Policies,
  • Goals.

Testers also need to cooperate with software quality assurance staff and software engineering process group members.

Testers may be part of the development group, concentrating on testing, or part of the software quality assurance group. Testers are specialists; their main function is to plan, execute, record, and analyse tests. They do not debug software. When defects are detected during testing, the software should be returned to the developers.

The developers locate the defect and repair the code. The developers have a detailed understanding of the code, and they can perform debugging better.

Testers need the support of management. Testers ensure that developers release code with few or no defects, and that marketers can deliver software that satisfies the customers’ requirements, and is reliable, usable, and correct.

Software Testing Principles

Testing principles are important to test specialists and engineers because they are the foundation for developing testing knowledge and acquiring testing skills. They also provide guidance for defining testing activities. A principle can be defined as,

  1. A general or fundamental law.
  2. A rule or code of conduct.
  3. The laws or facts of nature underlying the working of an artificial device.

In the software domain, principles may also refer to rules or codes of conduct relating to professionals who design, develop, test, and maintain software systems. The following are a set of testing principles,

Principle 1. Testing is the process of exercising a software component using a selected set of test cases, with the intent of revealing defects, and evaluating quality.

This principle supports testing as an execution-based activity to detect defects. It also supports the separation of testing from debugging since the intent of debugging is to locate defects and repair the software.

The term “software component” means any unit of software ranging in size and complexity from an individual procedure or method, to an entire software system.

The term “defects” represents any deviations in the software that have a negative impact on its functionality, performance, reliability, security, and/or any other of its specified quality attributes.

Principle 2. When the test objective is to detect defects, then a good test case is one that has a high probability of revealing an as yet undetected defect.

Testers must carry out testing in the same way as scientists carry out experiments. Testers need to create a hypothesis and work towards proving or disproving it; that is, they must prove the presence or absence of a particular type of defect.

Principle 3. Test results should be inspected meticulously.

Testers need to carefully inspect and interpret test results. Several erroneous and costly scenarios may occur if care is not taken.

A failure may be overlooked, and the test may be granted a “pass” status when in reality the software has failed the test. Testing may continue based on erroneous test results. The defect may be revealed at some later stage of testing, but in that case it may be more costly and difficult to locate and repair.

Principle 4. A test case must contain the expected output or result.

The test case is of no value unless there is an explicit statement of the expected outputs or results. Expected outputs allow the tester to determine

  • Whether a defect has been revealed,
  • Pass/fail status for the test.

It is very important to have a correct statement of the output so that time is not spent due to misconceptions about the outcome of a test. The specification of test inputs and outputs should be part of test design activities.
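In executable form, a test case with an explicit expected output reduces to an assertion: pass/fail status is decidable mechanically. The function below is a hypothetical unit under test, introduced only for illustration:

```python
def coin_value_in_paise(n25, n50, n100):
    """Illustrative unit under test: total coin value in paise."""
    return n25 * 25 + n50 * 50 + n100 * 100

# A test case with an explicit expected output: two 25p coins and one 50p
# coin must come to exactly 100 paise, or the test fails.
test_input = (2, 1, 0)
expected_output = 100
assert coin_value_in_paise(*test_input) == expected_output
```

Without the expected_output line, a tester running this test could observe a result but could not decide whether the software had passed or failed.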

Principle 5. Test cases should be developed for both valid and invalid input conditions.

A tester must not assume that the software under test will always be provided with valid inputs. Inputs may be incorrect for several reasons.

Software users may have misunderstandings, or lack information about the nature of the inputs. They often make typographical errors even when complete/correct information is available. Devices may also provide invalid inputs due to erroneous conditions and malfunctions.
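A sketch in Python of test cases covering both valid and invalid input conditions (parse_coin_count is a hypothetical input handler, introduced only for illustration):

```python
def parse_coin_count(text):
    """Illustrative input handler: convert user-supplied text to a coin count."""
    n = int(text)                       # non-numeric text raises ValueError
    if n < 0:
        raise ValueError("count cannot be negative")
    return n

# Valid-input test cases
assert parse_coin_count("0") == 0
assert parse_coin_count("12") == 12

# Invalid-input test cases: the program must reject, not miscompute
for bad in ["-3", "abc", "1.5"]:
    try:
        parse_coin_count(bad)
        raise AssertionError(f"accepted invalid input {bad!r}")
    except ValueError:
        pass
```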

Principle 6. The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component.

The higher the number of defects already detected in a component, the more likely it is to have additional defects when it undergoes further testing.

If there are two components A and B, and testers have found 20 defects in A and 3 defects in B, then the probability of the existence of additional defects in A is higher than in B.

Principle 7. Testing should be carried out by a group that is independent of the development group.

This principle is true for psychological as well as practical reasons. It is difficult for a developer to admit that software he/she has created and developed can be faulty. Testers must realize that

  • Developers have a great pride in their work,
  • Practically it is difficult for the developer to conceptualize where defects could be found.

Principle 8. Tests must be repeatable and reusable.

The tester needs to record the exact conditions of the test, any special events that occurred, and the equipment used, and carefully note the results. This information is very useful to the developers when the code is returned for debugging, so that they can duplicate test conditions. It is also useful for tests that need to be repeated after defect repair.

Principle 9. Testing should be planned.

Test plans should be developed for each level of testing. The objective for each level should be described in the associated plan. The objectives should be stated as quantitatively as possible.

Principle 10. Testing activities should be integrated into the software life cycle.

Testing activity should be integrated into the software life cycle starting as early as in the requirements analysis phase, and continue on throughout the software life cycle in parallel with development activities.

Principle 11. Testing is a creative and challenging task.

Difficulties and challenges for the tester include the following:

  • A tester needs to have good knowledge of the software engineering discipline.
  • A tester needs to have knowledge, from both experience and education, of how software is specified, designed, and developed.
  • A tester needs to be able to manage many details.
  • A tester needs to have knowledge of fault types and where faults of a certain type might occur in code construction.
  • A tester needs to reason like a scientist and make hypotheses that relate to presence of specific types of defects.
  • A tester needs to have a good understanding of the problem domain of the software that he/she is testing. Familiarity with a domain may come from educational, training, and work-related experiences.
  • A tester needs to create and document test cases. To design the test cases the tester must select inputs, often from a very wide domain. The selected test cases should have the highest probability of revealing a defect. Familiarity with the domain is essential.
  • A tester needs to design and record test procedures for running the tests.
  • A tester needs to plan for testing and allocate proper resources.
  • A tester needs to execute the tests and is responsible for recording results.
  • A tester needs to analyse test results and decide on success or failure for a test. This involves understanding and keeping track of a huge amount of detailed information.
  • A tester needs to learn to use tools and keep up to date with the newest test tools.
  • A tester needs to work and cooperate with requirements engineers, designers, and developers, and often must establish a working relationship with clients and users.
  • A tester needs to be educated and trained in this specialized area and often will be required to update his/her knowledge on a regular basis due to changing technologies.

Basic Definitions

Many of the definitions generally used in testing are based on the terms described in the IEEE Standards Collection for Software Engineering. The standards collection includes the IEEE Standard Glossary of Software Engineering Terminology, which is a dictionary of software engineering vocabulary.

Errors

An error is a mistake, misconception, or misunderstanding on the part of a software developer.

Faults (Defects)

A fault (defect) is introduced into the software as the result of an error. It is an irregularity in the software that may cause it to behave incorrectly, and not according to its specification.

Failures

A failure is the inability of a software system or component to perform its required functions within specified performance requirements.

Test Cases

To detect defects in a piece of software the tester selects a set of input data and then executes the software with the input data under a particular set of conditions.

A test case is a test-related item which contains the following information:

  1. A set of test inputs. These are data items received from an external source by the code under test. The external source can be hardware, software, or human.
  2. Execution conditions. These are conditions required for running the test, for example, a certain state of a database, or a configuration of a hardware device.
  3. Expected outputs. These are the specified results to be produced by the code under test.
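These three parts can be captured directly in a small data structure. The schema below is an illustrative sketch, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """The three parts of a test case named above (illustrative schema)."""
    inputs: dict                        # data received from an external source
    execution_conditions: dict          # required environment or state
    expected_outputs: dict              # specified results from the code under test

# A test case for a hypothetical coin-value program
tc = TestCase(
    inputs={"num_25p": 2, "num_50p": 1, "num_1rs": 0},
    execution_conditions={"locale": "en_IN"},
    expected_outputs={"rupees": 1, "paise": 0},
)
```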

Test

A test is a group of related test cases, or a group of related test cases and test procedures.

Test Oracle

A test oracle is a document, or piece of software that allows testers to determine whether a test has been passed or failed.
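A simple oracle can be a second, independent computation of the expected result. The following Python sketch is illustrative (the function and its arguments are assumptions):

```python
def oracle(n25, n50, n100, observed):
    """Illustrative oracle: decide pass/fail from an independent computation."""
    expected = n25 * 25 + n50 * 50 + n100 * 100
    return "pass" if observed == expected else "fail"

# Two 25p coins and one 50p coin: an observed total of 100 paise passes,
# any other observed value fails.
assert oracle(2, 1, 0, 100) == "pass"
assert oracle(2, 1, 0, 99) == "fail"
```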

Test Bed

A test bed is an environment that contains all the hardware and software needed to test a software component or a software system.

Software Quality

Software quality can be defined in either of two ways,

  1. Quality relates to the degree to which a system, system component, or process meets specified requirements.
  2. Quality relates to the degree to which a system, system component, or process meets customer or user needs, or expectations.

Metric

A metric is a quantitative measure of the degree to which a system, system component, or process has a given attribute

Quality Metric

A quality metric is a quantitative measurement of the degree to which an item possesses a given quality attribute. Some examples of quality metrics are,

  1. Correctness—the degree to which the system performs its intended function
  2. Reliability—the degree to which the software is expected to perform its required functions under stated conditions for a stated period of time
  3. Usability—relates to the degree of effort needed to learn, operate, prepare input, and interpret output of the software
  4. Integrity—relates to the system’s ability to withstand both intentional and accidental attacks
  5. Portability—relates to the ability of the software to be transferred from one environment to another
  6. Maintainability—the effort needed to make changes in the software
  7. Interoperability—the effort needed to link or couple one system to another.

Software Quality Assurance Group

The software quality assurance (SQA) group is a team of people with the necessary training and skills to ensure that all necessary actions are taken during the development process so that the resulting software conforms to established technical requirements.

Reviews

A review is a group meeting whose purpose is to evaluate a software artifact or a set of software artifacts.

Testing as a Process

The software development process is described as a series of phases, procedures, and steps that result in the production of software products. Embedded within the software development process are several other processes, including testing.

Testing is related to two other processes called verification and validation.

Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements. Validation is usually associated with traditional execution-based testing, that is, exercising the code with test cases.

Figure 1.3 Process Embedded in the Software Development Processes

Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Verification is usually associated with inspections and reviews of software deliverables.

Two definitions of testing are,

Testing is described as a group of procedures carried out to evaluate some aspect of a piece of software.

Testing can be described as a process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes.

Testing covers both validation and verification activities. Testing includes the following,

  • Technical reviews,
  • Test planning,
  • Test tracking,
  • Test case design,
  • Unit test,
  • Integration test,
  • System test,
  • Acceptance test, and
  • Usability test.

Testing can also be described as a dual-purpose process. It reveals defects and evaluates quality attributes of the software such as

  • Reliability,
  • Security,
  • Usability, and
  • Correctness.

The debugging process begins after testing has been carried out and the tester has noted that the software is not behaving as specified.

Debugging is the process of

  1. Locating the fault or defect,
  2. Repairing the code,
  3. Retesting the code.

Testing has economic, technical and managerial aspects. Testing must be managed. Organizational policy for testing must be defined and documented.

The Role of Process in Software Quality

The need for software products of high quality has pressured those in the software profession to identify and quantify quality factors such as usability, testability, maintainability and reliability and to identify engineering practices that support the production of quality products having these favourable attributes. Among the practices identified that contribute to the development of high-quality software are,

  • Project planning,
  • Requirements management,
  • Development of formal specifications,
  • Structured design with use of information hiding and encapsulation,
  • Design and code reuse,
  • Inspections and reviews,
  • Product and process measurements,
  • Education and training of software professionals,
  • Development and application of CASE tools,
  • Use of effective testing techniques,
  • Integration of testing activities into the entire life cycle.

Process, in the software engineering domain, is the set of methods, practices, standards, documents, activities, policies, and procedures that software engineers use to develop and maintain a software system and its associated artifacts, such as project and test plans, design documents, code, and manuals.

Adding individual practices to an existing software development process in an ad hoc way is not satisfactory. The software development process, like any other engineering activity, must be engineered. It must be

  • Designed
  • Implemented
  • Evaluated
  • Maintained

Similar to other engineering processes, a software development process must evolve in a consistent and predictable manner, and the best technical and managerial practices must be integrated in a systematic way. Most of the software process improvement models accepted by industry are high-level models: they focus on the software process as a whole and do not support the development of any specific subprocess, such as design or testing.

Figure 1.2 Components of an Engineering Process

Testing as an Engineering Activity

Software systems are becoming more challenging to build. They are playing an increasingly important role in society. People with software development skills are in demand. There is pressure on software development professionals to focus on quality issues. Poor-quality software that can cause loss of life or property is no longer acceptable to society. Failures can result in catastrophic losses.

Conditions demand software development staff with interest and training in the areas of software product and process quality. Highly qualified staff make sure that software products are built on time, within budget, and are of the highest quality.

Quality is determined by attributes such as reliability, correctness, usability and the ability to meet all user requirements.

The education and training of engineers in each engineering discipline is based on the teaching of related scientific principles as shown in Figure 1.1.

A joint task force has been formed to define a body of knowledge that covers the software engineering discipline, to discuss the nature of education for this new profession, and to define a code of ethics for the software engineering discipline. The members of the joint task force are the IEEE Computer Society and the Association for Computing Machinery (ACM).

Using an engineering approach to software development means the following

  1. The development process is well understood.
  2. Projects are planned.
  3. Life cycle models are defined and adhered to.
  4. Standards are in place for product and process.
  5. Measurements are employed to evaluate product and process quality.
  6. Components are reused.

Figure 1.1 Elements of Engineering Disciplines

The validation and verification processes play a key role in quality determination. Engineers should have proper education, training, and certification.

A test specialist is one whose education is based on the principles, practices and processes that constitute the software engineering discipline and whose specific focus is on one area of that discipline, software testing.

A test specialist who is trained as an engineer should have knowledge on the following

  • Test related principles,
  • Processes,
  • Measurements,
  • Standards,
  • Plans,
  • Tools and methods,
  • How to apply them to the testing tasks.

Testing is not an isolated collection of technical and managerial activities; it should be integrated within the context of a quality testing process, one that grows in competency and uses engineering principles to guide improvement.