Verification and Validation

The process of examining documents and finding errors in them is known as verification. Verification is also referred to as static testing, since existing documents are reviewed with the intention of finding defects. Verification is done in the initial stages of software development; everything from system requirement gathering to code review is covered under verification. On the other hand, validation is the process of executing the software and finding defects.

The process of validation covers executing the software on a terminal/computer and finding errors. It is also called dynamic testing, since the code is executed and the errors/defects are tracked during the execution of the application. Validation begins when partial code is developed or modules are ready.

Software verification and validation activities check the software against its specifications. All software that is produced must be verified and validated. This is done by:

  • Checking that each item/object in software meets specified requirements.
  • Checking each software item before it is used as an input to another activity.
  • Ensuring that the amount of verification and validation effort is adequate to show that each software item is suitable for operational use.
  • Ensuring that checks on each software item are done, as far as possible, by someone other than the author.

Project management is responsible for organizing software verification and validation activities, defining software verification and validation roles (e.g., review team leaders), and allocating staff to those roles. Whatever the size of the project, software verification and validation greatly affect software quality. People are not infallible, and software that has not been verified has little chance of working. Typically, 20 to 50 errors per 1000 lines of code are found during development, and 1.5 to 4 errors per 1000 lines of code remain even after system testing. Each of these errors could lead to an operational failure or non-compliance with a requirement.

Objectives:

Today’s mission-critical applications employ sophisticated programming techniques to produce the required results. Technologies are complex, and time frames are shorter. Given contemporary market dynamics, releasing a poorly tested product without proper verification and validation can result in unexpected problems for customers. To minimize the risk of failure, testing, verification, and validation have to be in place as early as possible, using proper testing processes and methodologies. The main objective of software verification and validation is to reduce software errors to an acceptable level. The effort needed can range from 30% to 90% of total project resources, depending upon the criticality and complexity of the software.

Explanation:

Software testing is a process used to identify the correctness, completeness, and quality of developed computer software. In order to make sure that product development is on the correct track, we have to start our testing process right from the beginning. The figure below depicts the Verification and Validation model, which shows that the software testing process runs in parallel with the development process. The left arm of the “V” is called verification and is carried out while the product is under development; the right arm of the “V” is called validation and is carried out after a part of the product has been developed. This model is also called the Software Testing Life Cycle (STLC). In STLC, each development activity is followed by a testing activity.

Different stages of SDLC with STLC:

Stage 1: Requirement Gathering

Development Activity: Defining requirements to establish specifications is the first step in the development of software. However, in many situations, not enough care is taken in establishing correct requirements upfront. It is necessary that requirements are established in a systematic way to ensure their accuracy and completeness, but this is not always an easy task.

Testing Activity

In order to make the requirements accurate and complete, we start our testing right from the requirements phase, in which we review the requirements. For example, the requirements should not contain ambiguous words like “may” or “may not”; they should be clear and concise.

Stage 2: Functional Specifications

Development Activity

The Functional Specification describes the features of the software product. It describes the product’s behavior as seen by an external observer and contains the technical information and data needed for the design. The Functional Specification defines what the functionality will be.

Testing Activity


In order to make the functional specifications accurate and complete, we review them.

Stage 3: Design

Development Activity

During the design process, the software specifications are transformed into design models that describe the details of the data structures, system architecture, interface, and components. At the end of the design process, a design specification document is produced. This document is composed of the design models that describe the data, architecture, interfaces, and components.

Testing Activity

Each design product is reviewed for quality before moving to the next phase of software development. In order to evaluate the quality of a design (representation), the criteria for a good design should be established. Such a design should:

  • Exhibit good architectural structure.
  • Be modular and contain distinct representations of data, architecture, interfaces, and components (modules).
  • Lead to data structures that are appropriate for the objects to be implemented and be drawn from recognizable design patterns.
  • Lead to components that exhibit independent functional characteristics.
  • Lead to interfaces that reduce the complexity of connections between modules and with the external environment.
  • Be derived using a reputable method that is driven by information obtained during software requirements analysis.

These criteria are not achieved by chance. The software design process encourages good design through the application of fundamental design principles, systematic methodology, and through review.

Stage 4: Code

Development Activity

In this phase, the designs are translated into code. Computer programs are written using a conventional programming language or an application generator. Programming tools like compilers, interpreters, and debuggers are used to generate the code. Different high-level programming languages such as C, C++, VB, Java, and others are used for coding. With respect to the type of application, the right programming language is chosen.

Testing Activity

Code review is a process of verifying the source code. Code reviews are done to find and fix defects that were overlooked in the initial development phase and to improve the overall quality of the code. Code reviews can often find and remove common security vulnerabilities such as format string attacks, race conditions, and buffer overflows, thereby improving software security. Online software repositories, like anonymous CVS, allow groups of individuals to collaboratively review code to improve software quality and security.
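
To illustrate the kind of defect a review can catch, here is a minimal sketch in Java: a hypothetical counter class with a race condition, followed by a corrected version. The class names are invented for this example and are not taken from any particular library.

// Hypothetical example of a defect a code review might catch: a race condition.
// The class names are invented for illustration.

// Before review: count++ is a read-modify-write sequence and is not atomic,
// so concurrent callers can lose updates.
class HitCounter {
    private int count = 0;
    void increment() { count++; }      // not thread-safe
    int value() { return count; }
}

// After review: AtomicInteger makes the update atomic and thread-safe.
class SafeHitCounter {
    private final java.util.concurrent.atomic.AtomicInteger count =
            new java.util.concurrent.atomic.AtomicInteger(0);
    void increment() { count.incrementAndGet(); }
    int value() { return count.get(); }
}

A reviewer reading the first class would flag the unsynchronized update and suggest a fix such as the second class, which is exactly the sort of finding that improves quality before the code ever runs.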

Stage 5: Building Software

Development Activity

In this phase, we build different software units (components) and integrate them one by one to build a single software product.

Testing Activity

a. Unit Testing


Once the units are ready, individual components should be tested to verify that the units function as per the specifications. A unit test is a validation procedure to check the working of the smallest module of source code. Test cases are written for all functions and methods to identify and fix the problems faster. For testing of units, dummy objects are written, such as stubs and drivers. This helps in testing each unit separately when all the code is not written. Usually, a developer uses this method to review their code.
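
As an illustration, the following is a minimal sketch of a unit test, assuming JUnit 5 is available; the PriceService interface and DiscountCalculator class are hypothetical, and the stub stands in for a real dependency so the unit can be tested in isolation.

// A minimal unit-test sketch (JUnit 5). All class names are hypothetical.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

interface PriceService {                 // dependency of the unit under test
    double basePrice(String productId);
}

class DiscountCalculator {               // the smallest module (unit) under test
    private final PriceService prices;
    DiscountCalculator(PriceService prices) { this.prices = prices; }

    double discountedPrice(String productId, double discountRate) {
        return prices.basePrice(productId) * (1.0 - discountRate);
    }
}

class DiscountCalculatorTest {
    @Test
    void appliesTenPercentDiscount() {
        // Stub: replaces the real PriceService so the unit is tested in isolation
        PriceService stub = productId -> 100.0;
        DiscountCalculator calc = new DiscountCalculator(stub);
        assertEquals(90.0, calc.discountedPrice("ABC", 0.10), 0.0001);
    }
}

A stub like this lets the unit be tested before the real PriceService implementation exists, matching the idea of dummy objects (stubs and drivers) mentioned above.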

b. Integration Testing

Individual software modules are combined and tested as a group under integration testing. Integration testing follows unit testing and is done before system testing. The purpose is to validate functional, performance, and reliability requirements. Test cases are constructed to test all components and their interfaces and confirm whether they are working correctly. It also includes inter-process communication and shared data areas.
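
For comparison with the unit-test sketch above, here is a hedged sketch of an integration test, again assuming JUnit 5; the OrderStore and OrderService modules are hypothetical, and the point is that the real modules are wired together and exercised through their actual interface rather than through stubs.

// A minimal integration-test sketch (JUnit 5): two modules are combined and
// tested as a group through their real interface. All names are hypothetical.
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class OrderStore {                        // module A: stores order totals
    private final List<Double> totals = new ArrayList<>();
    void save(double total) { totals.add(total); }
    double sum() { return totals.stream().mapToDouble(Double::doubleValue).sum(); }
}

class OrderService {                      // module B: depends on module A
    private final OrderStore store;
    OrderService(OrderStore store) { this.store = store; }
    void placeOrder(double amount) { store.save(amount); }
}

class OrderIntegrationTest {
    @Test
    void serviceAndStoreWorkTogether() {
        OrderStore store = new OrderStore();        // real module, not a stub
        OrderService service = new OrderService(store);
        service.placeOrder(10.0);
        service.placeOrder(15.5);
        assertEquals(25.5, store.sum(), 0.0001);    // checks the interface and shared data
    }
}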

Stage 6: Building System

Development Activity

After the software has been built, the whole system is assembled, taking into account non-functional requirements such as installation procedures, configuration, and so on.

Testing Activity

a. System Testing

Testing the complete integrated system to confirm that it complies with the requirement specification is called System Testing. Under System Testing, the entire system is tested against its Functional Requirement Specifications (FRS) and/or System Requirement Specification (SRS) and with the non-functional requirements. System Testing is crucial. Testers have to test from users’ perspective and need to be more creative.

b. Acceptance Testing

User Acceptance Testing (UAT) is a process to obtain confirmation by the owner of the object under test, through trial or review, that the modification or addition meets mutually agreed-upon requirements. In software development, UAT is one of the final stages of a project and will often occur before a customer accepts a new system. Users of the system will perform these tests according to their User Requirements Specification, to which the system should conform. There are two stages of acceptance testing: ALPHA and BETA.

Stage 7: Release for Use

After the whole product has been developed and the required level of quality has been achieved, the software is released for actual use by the customers. The difference between verification and validation is given in Table 3.1 below.

Table 3.1 Difference between Verification and Validation

Verification:

  • Am I building the product right?
  • Low-level activity.
  • Performed during development on key artifacts, such as walkthroughs, reviews, inspections, mentor feedback, training, checklists, and standards.
  • Demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.

Validation:

  • Am I building the right product?
  • High-level activity.
  • Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
  • Determination of the correctness of the final software product with respect to the user needs and requirements.
  • Determining whether the system complies with the requirements, performs the functions for which it is intended, and meets the organization’s goals and user needs; it is traditionally performed at the end of the project.

Verification Methods

There are mainly three techniques of verification. They are as follows:

  • Inspections
  • Walkthroughs
  • Peer Reviews

Inspections

Inspections are planned, structured meetings carried out to verify a particular work product. The following phases are involved:

  • Planning: selecting the personnel, allocating roles; defining the entry and exit criteria for more formal review types (e.g., inspection); and selecting a document or a part to review.
  • Kick-off: distributing documents; explaining the objectives, process, and documents to the participants; and checking entry criteria (for more formal review types).
  • Individual preparation: work done by each of the participants on their own before the review meeting, noting potential defects, questions, and comments.
  • Review meeting: discussion or logging, with documented results or minutes (for more formal review types). The meeting participants may simply note defects, make recommendations for handling the defects, or make decisions about the defects.
  • Rework: fixing defects found, typically done by the author.
  • Follow-up: checking that defects have been addressed, gathering metrics, and checking on exit criteria (for more formal review types).

The key activities performed in inspections are:

  • The meeting is led by a trained moderator (not the author). The roles to be performed by the individual are defined. All the individuals need to be prepared before attending the meeting.
  • The meeting is carried out, during which the presenter reads through the document.
  • The defects are identified and recorded by a recorder.
  • The identified defects are reviewed by the author.
  • A formal follow-up process is done to assess the defects found while inspecting the product.

Walkthroughs

Walkthroughs are informal meetings carried out to verify a particular work product. They have the following key characteristics:

  • An informal meeting carried out to verify the work product. The pre-meeting preparation of reviewers is not necessary.
  • The main purposes are learning and understanding the product and finding defects. The meeting is led by the author who presents the document.

Peer Reviews

A peer review simply means giving the document to another person and asking them to find defects in it. Peer reviews are also called “buddy checks”.

Validation Methods

There are two main strategies for validating software. They are as follows:

  • White Box Testing
  • Black Box Testing

White Box Testing

The white box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass box, structural, open box, or clear box testing. Tests written using the white box strategy aim to cover the code that has been written: its branches, paths, statements, and internal logic.

In order to implement white box testing, the tester has to deal with the code and hence needs to possess knowledge of coding and the internal working of the code.

White box testing also requires the tester to look into the code and find out which units, statements, or chunks of the code are malfunctioning, so white box testing is normally done by developers.
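
As a small, hypothetical sketch of the white box approach (assuming JUnit 5), the tests below are derived by looking at the internal branches of a Grader method and making sure each branch is executed at least once; the class is invented for this example.

// White box sketch: tests are derived from the code's internal branches.
// The Grader class is a hypothetical example.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Grader {
    // Two branches: a score of 50 or more passes, otherwise it fails.
    String grade(int score) {
        if (score >= 50) {
            return "PASS";
        } else {
            return "FAIL";
        }
    }
}

class GraderWhiteBoxTest {
    private final Grader grader = new Grader();

    @Test
    void coversTrueBranch() {              // exercises the score >= 50 path
        assertEquals("PASS", grader.grade(50));
    }

    @Test
    void coversFalseBranch() {             // exercises the else path
        assertEquals("FAIL", grader.grade(49));
    }
}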

Black Box Testing

Black Box Testing is not a single type of testing; rather, it is a testing strategy that does not require any knowledge of the internal design or code.

As the name “black box” suggests, no knowledge of internal logic or code structure is required.

The types of testing under this strategy are based entirely on the requirements and functionality of the work product/software application. Black box testing is sometimes also called “Opaque Testing,” “Functional/Behavioral Testing,” and “Closed Box Testing.”

The basis of the Black Box Testing strategy lies in the selection of appropriate data as per functionality and testing it against the functional specifications in order to check for the normal and abnormal behavior of the system.

In order to implement the Black Box Testing Strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action.
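
To make the idea concrete, here is a hedged sketch (assuming JUnit 5) in which the test cases are derived purely from an assumed specification, “a login name must be 3 to 10 characters long,” and exercise normal and abnormal inputs without referring to the internal code; the LoginValidator class is hypothetical and included only so the example is self-contained.

// Black box sketch: test data is chosen from the assumed specification
// ("a login name must be 3 to 10 characters"), not from the implementation.
// The LoginValidator class is hypothetical.
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class LoginValidator {
    boolean isValid(String name) {
        return name != null && name.length() >= 3 && name.length() <= 10;
    }
}

class LoginValidatorBlackBoxTest {
    private final LoginValidator validator = new LoginValidator();

    @Test
    void acceptsNormalInput() {            // valid equivalence class
        assertTrue(validator.isValid("alice"));
    }

    @Test
    void rejectsTooShortInput() {          // abnormal input below the lower boundary
        assertFalse(validator.isValid("ab"));
    }

    @Test
    void rejectsTooLongInput() {           // abnormal input above the upper boundary
        assertFalse(validator.isValid("abcdefghijk"));
    }
}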
