Software Quality Process

Karanfil Duygu Ozdemir
6 min read · Feb 5, 2022

Software testing is an investigation conducted to provide stakeholders with information about the quality of the service or product under test. It is also the independent and objective examination of software to understand the risks of a software application. This article covers the details of managing this process.

Requirement Analysis

Requirements analysis is the first stage of the software life cycle. It involves constant communication with all project teams and the end users of the product to define expectations, resolve conflicts, and document all the key specifications. Requirements analysis is critical to the success or failure of a system or software project. Requirements must be documented, practicable, measurable, testable, traceable, and appropriate to the identified business needs.

Categorization of Requirements

The analysis is essential for project planning, system verification and validation, and integration coordination. Categorizing requirements can help developers and test teams.

- Functional Requirements

Functional requirements describe what the system must do: the tasks, actions, or activities that must be performed.

- Non-functional Requirements

Non-functional requirements can be described as system constraints or expected qualities. For example, how many users the software must support and which platform versions it must run on are non-functional requirements. For testing, they must be classified separately from the functional requirements.

- Performance Requirements

Performance requirements define how well a task or function must be carried out, in terms of quantity, quality, coverage, timeliness, or readiness. In requirements analysis, performance requirements are developed interactively across all functions, taking into account the precision of the estimates, their criticality for the application, and their relation to intended uses, over the preparation phase and the system's lifespan.

- Design Requirements

Design requirements include the "build to" and "code/test to" requirements for products and the "how to execute" requirements for processes, expressed in technical data packages and technical manuals.

Designing test cases

About Test Case

Test cases are documents that specify the inputs, events, or actions prepared according to the requirements, together with the results expected to occur. The variety of possible input sets yields an effectively infinite number of test cases, and time and cost constraints make it impossible to consider all of them individually. Testers or QA engineers therefore have to develop strategies that achieve the most test coverage with the fewest scenarios. To complete the test process, every line of code should be executed at least once, and the test case set should cover every class of input.
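One common strategy for shrinking the infinite input space is equivalence partitioning: pick one representative value per input class, plus the boundaries between classes. The sketch below illustrates this with a hypothetical `ticket_price` function (the age bands and prices are invented for illustration only).

```python
# Equivalence partitioning: instead of testing every possible age,
# pick one representative value per partition plus the boundaries.
# `ticket_price` is a hypothetical function used only for illustration.

def ticket_price(age: int) -> float:
    """Return a ticket price based on age bands."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return 5.0   # child rate
    if age < 65:
        return 10.0  # adult rate
    return 7.0       # senior rate

# One case per partition exercises every branch of the code
# without enumerating the whole input space.
representative_cases = {
    10: 5.0,   # inside the child partition
    18: 10.0,  # lower boundary of the adult partition
    40: 10.0,  # inside the adult partition
    65: 7.0,   # lower boundary of the senior partition
}

for age, expected in representative_cases.items():
    assert ticket_price(age) == expected
```

Four cases here give the same branch coverage as thousands of individual ages would.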

Required Fields

A basic test case body consists of expected and actual outputs versus inputs. Each test script must have a purpose. A traceability matrix should be built over the objectives to keep track of open issues and of which scenarios should be run.
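A traceability matrix can be as simple as a mapping from requirements to the test cases that cover them; requirements with no linked case are the open gaps. The IDs below are hypothetical placeholders.

```python
# A minimal traceability matrix, using invented requirement and
# test case IDs: each requirement maps to the cases that cover it.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # no coverage yet: an open issue to track
}

# Requirements with no linked test case still need scenarios written.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)
```

In practice this lives in a test management tool, but the underlying relation is the same.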

Suite: Related test cases should be in the same group.

Test Case No: ID of the test case.

Use Case No: Which use case the test case is associated with.

Test Purpose or Title: A title about why the test case was written.

Preconditions: Infrastructure, prerequisites required to perform the test.

Priority: The importance of the test case for the system. There is no specific standard of priority.

Test Step No: Test step number.

Test Data: The data to be used when performing the test.

Expected Outputs: Expected outputs as a result of inputs (operation performed).

Actual Outputs: Outputs encountered while testing.
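The fields above can be sketched as a simple data structure. This is one possible shape, not a standard schema; the field names and sample values are illustrative.

```python
from dataclasses import dataclass, field

# A sketch of the test case fields listed above as a Python dataclass.
@dataclass
class TestCase:
    suite: str              # group of related test cases
    test_case_no: str       # ID of the test case
    use_case_no: str        # which use case it is associated with
    title: str              # why the test case was written
    preconditions: str      # infrastructure/prerequisites for the test
    priority: str           # importance of the case for the system
    steps: list = field(default_factory=list)  # (step no, test data, expected output)
    actual_outputs: list = field(default_factory=list)  # filled in during execution

tc = TestCase(
    suite="Login",
    test_case_no="TC-001",
    use_case_no="UC-01",
    title="Valid user can log in",
    preconditions="Test user exists; application is reachable",
    priority="High",
    steps=[(1, "valid username and password", "Dashboard is shown")],
)
```

Keeping the expected outputs per step, and the actual outputs separate, makes the later comparison step mechanical.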

Execution and Evaluation of Test Cases

Test suites and individual test cases are executed, either manually or with test execution (automation) tools, according to a scheduled sequence. After creating the test cases, testers take the test conditions, convert them into test cases and test procedures, and set up the test environment before execution.

The process of executing tests can be facilitated by a variety of tools. Evaluation is a comparison process: the actual test results are checked against the specified expected outputs.

The result of each test execution should be recorded in a test execution log, including the identities and versions of the software under test, the test tools, and the test software.

The actual results (what happened after running the tests) are compared with the expected results (what we predicted).

In case of discrepancies between actual and expected results, bugs should be reported.

The fix for each failure is then tested again. For example, a previously failed test is re-run (a validation test) to confirm the fix, and further tests (regression tests) are run to ensure that the change does not break unmodified areas of the software or reveal other failures.
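The execute–compare–log–re-run cycle described above can be sketched in a few lines. The system under test and the case data here are hypothetical; real suites would use a framework such as pytest, but the logic is the same.

```python
# A minimal execution/evaluation loop: run each case, compare the
# actual output to the expected one, log a verdict, and collect the
# failures so they can be re-run after a fix (validation), alongside
# the full suite (regression). All names and data are illustrative.

def system_under_test(x: int) -> int:
    return x * 2

# case id -> (input, expected output); TC-003 is deliberately wrong
cases = {"TC-001": (2, 4), "TC-002": (3, 6), "TC-003": (5, 11)}

def run(test_cases):
    log = {}  # the test execution log
    for case_id, (value, expected) in test_cases.items():
        actual = system_under_test(value)
        log[case_id] = "PASS" if actual == expected else "FAIL"
    return log

execution_log = run(cases)
failed = {cid: cases[cid] for cid, verdict in execution_log.items()
          if verdict == "FAIL"}

# After the developer's fix, re-run only the failed cases to validate
# the fix, then re-run the whole suite to guard against regressions.
validation_log = run(failed)
```

Keeping `run` pure over a case dictionary is what makes the validation re-run and the regression re-run the same operation on different inputs.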

Bug Management

In the bug management and reporting phase, a priority level is selected for each defect and its description is entered in detail. The bug description should include the data and environment information used in the test scenario. The test scenario is entered step by step, and the expected state and the actual state are indicated, supported by a screenshot or video if possible. The bug record is created by assigning it to the person who will fix the failure.

Bug Title

The bug title distinguishes the found bug from other bugs, so it is an important part of the report. It should be short and understandable: the person who will fix the bug, or another QA engineer, should fully understand the issue just by reading the title.

The main reason for specifying the URL where the error occurred is so that the person who will fix it can quickly reproduce or view the error. However, a URL cannot be given in the bug report for dynamic URLs or one-time URLs.

Steps

This is the section where the steps to reproduce the error are written. A bug report should include a steps section so that developers can reproduce the error quickly by repeating the same process.

Sample:

Step 1: Navigate to the homepage.

Step 2: Click on the ‘Form’ menu.

Step 3: Click the view option.

Expected Result — Actual Result

While testing any feature of the system, both the expected result and the actual result of the tested module must be written down. This way, the person who will fix the error can understand what is wrong and resolve the problem quickly.

Sample:

• Step 1: The homepage opens.

• Step 2: Click the “Forms” button.

• Step 3: Check the opened form list

[Expected Result: Form list should be opened with all options.]

[Actual Result: Options are listed.]
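Taken together, the bug report fields discussed so far can be represented as a simple structure. Every field name and value below is an invented example, not a fixed schema.

```python
# A sketch of a bug report with the fields described above.
# All names and values are illustrative placeholders.
bug_report = {
    "title": "Form list opens without all options",
    "url": "https://example.com/forms",  # omitted for dynamic/one-time URLs
    "steps": [
        "Navigate to the homepage.",
        "Click on the 'Forms' menu.",
        "Click the view option.",
    ],
    "expected_result": "Form list should open with all options.",
    "actual_result": "Only some options are listed.",
    "priority": "High",
    "severity": "Medium",
    "assignee": "developer who will fix the failure",
    "attachments": ["screenshot.png"],  # screenshot/video if possible
}
```

A structure like this maps directly onto the fields of trackers such as Jira, so reports stay consistent across the team.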

Priority and Severity

Each bug must have a priority. Critical bugs that stop the system must be fixed first. Priority can be classified as follows.

• Critical: Problems that stop the system. It must be resolved immediately.

• High: These are the errors that do not stop the system but affect it significantly.

• Medium: Errors that do not affect the operation of the system but show the quality of the project.

• Low: These are the problems that do not affect the operation of the system.

The severity of a software bug determines the extent of the impact of the bug on the operation of the system. The severity classification may be as follows.

• Critical: A new feature or a bug fix has caused a change in the infrastructure of the system.

• High: Does not stop the system, but is at a level that affects its operation.

• Medium: Does not disturb the operation of the system.

• Low: A minor, cosmetic issue that does not affect the system.
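Priority (fix order) and severity (impact) are separate scales, and encoding them separately makes that distinction explicit. A minimal sketch, with invented bug IDs:

```python
from enum import Enum

# The four priority and severity levels described above as enums.
class Priority(Enum):
    CRITICAL = 1  # stops the system; fix immediately
    HIGH = 2      # significant impact, but the system still runs
    MEDIUM = 3    # affects perceived quality, not operation
    LOW = 4       # no impact on operation

class Severity(Enum):
    CRITICAL = 1  # changed the infrastructure of the system
    HIGH = 2      # affects operation without stopping the system
    MEDIUM = 3    # does not disturb operation
    LOW = 4       # minor, cosmetic only

def fix_order(bugs):
    """Sort (bug_id, priority) pairs so the most urgent come first."""
    return sorted(bugs, key=lambda b: b[1].value)

bugs = [("BUG-2", Priority.LOW), ("BUG-1", Priority.CRITICAL)]
```

A low-severity bug can still be high priority (for example, a typo on the landing page), which is why the two values are stored independently.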
