ISO/IEC 9126

Angelica De Loyola
Sep 2, 2017


ISO/IEC 9126, Software engineering — Product quality, was an international standard for the evaluation of software quality. It has since been replaced by ISO/IEC 25010:2011.

The quality criteria according to ISO 9126

The fundamental objective of the ISO/IEC 9126 standard is to address some of the well-known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project or not having any clear definition of “success”. By clarifying and then agreeing on the project priorities, and subsequently converting abstract priorities (e.g. “compliance”) into measurable values (e.g. “output data can be validated against schema X with zero intervention”), ISO/IEC 9126 tries to develop a common understanding of the project’s objectives and goals.
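To make “measurable” concrete, an abstract priority like the one above can be restated as an automated check whose pass rate becomes the metric. Below is a minimal sketch in Python; the record layout, field names, and schema are hypothetical examples for illustration, not anything drawn from the standard.

# Illustrative only: a "compliance" priority expressed as a measurable check
# that every exported record matches a simple field/type specification.
# The schema and record layout are hypothetical.

SCHEMA = {"order_id": int, "customer": str, "total": float}

def conforms(record: dict) -> bool:
    """True if the record has exactly the required fields with the required types."""
    return (set(record) == set(SCHEMA)
            and all(isinstance(record[field], expected) for field, expected in SCHEMA.items()))

records = [
    {"order_id": 1, "customer": "ACME", "total": 99.5},
    {"order_id": "2", "customer": "Globex", "total": 10.0},  # wrong type -> non-conforming
]

failures = [r for r in records if not conforms(r)]
print(f"{len(records) - len(failures)}/{len(records)} records conform; target is 100% with zero intervention")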

The standard is divided into four parts:

· quality model

· external metrics

· internal metrics

· quality in use metrics.

Function

The quality model presented in the first part of the standard, ISO/IEC 9126–1, classifies software quality in a structured set of characteristics and sub-characteristics as follows:

· Functionality — “A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.”

· Suitability

· Accuracy

· Interoperability

· Security

· Functionality compliance

· Reliability — “A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.”

· Maturity

· Fault tolerance

· Recoverability

· Reliability compliance

· Usability — “A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.”

· Understandability

· Learnability

· Operability

· Attractiveness

· Usability compliance

· Efficiency — “A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.”

· Time behaviour

· Resource utilization

· Efficiency compliance

· Maintainability — “A set of attributes that bear on the effort needed to make specified modifications.”

· Analyzability

· Changeability

· Stability

· Testability

· Maintainability compliance

· Portability — “A set of attributes that bear on the ability of software to be transferred from one environment to another.”

· Adaptability

· Installability

· Co-existence

· Replaceability

· Portability compliance
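For reference, the characteristic/sub-characteristic structure listed above can be kept as a simple data structure; the sketch below is just one possible representation built directly from the list, not part of the standard itself.

# ISO/IEC 9126-1 quality model: characteristics mapped to their sub-characteristics.
ISO_9126_QUALITY_MODEL = {
    "Functionality": ["Suitability", "Accuracy", "Interoperability", "Security",
                      "Functionality compliance"],
    "Reliability": ["Maturity", "Fault tolerance", "Recoverability",
                    "Reliability compliance"],
    "Usability": ["Understandability", "Learnability", "Operability",
                  "Attractiveness", "Usability compliance"],
    "Efficiency": ["Time behaviour", "Resource utilization", "Efficiency compliance"],
    "Maintainability": ["Analyzability", "Changeability", "Stability",
                        "Testability", "Maintainability compliance"],
    "Portability": ["Adaptability", "Installability", "Co-existence",
                    "Replaceability", "Portability compliance"],
}

for characteristic, subs in ISO_9126_QUALITY_MODEL.items():
    print(f"{characteristic}: {', '.join(subs)}")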

Myers–Briggs Type Indicator

The Myers–Briggs Type Indicator (MBTI) is an introspective self-report questionnaire claiming to indicate psychological preferences in how people perceive the world around them and make decisions.

The MBTI was constructed by Katharine Cook Briggs and her daughter Isabel Briggs Myers. It is based on the typological theory proposed by Carl Jung, who had speculated that there are four principal psychological functions by which humans experience the world — sensation, intuition, feeling, and thinking — and that one of these four functions is dominant for a person most of the time. The MBTI was constructed for normal populations and emphasizes the value of naturally occurring differences. “The underlying assumption of the MBTI is that we all have specific preferences in the way we construe our experiences, and these preferences underlie our interests, needs, values, and motivation.”

Although popular in the business sector, the MBTI exhibits significant psychometric deficiencies, notably including poor validity (i.e. not measuring what it purports to measure) and poor reliability (giving different results for the same person on different occasions). The four scales used in the MBTI have some correlation with four of the Big Five personality traits, which are a more commonly accepted framework.

Ishikawa diagram

Ishikawa diagrams (also called fishbone diagrams, herringbone diagrams, cause-and-effect diagrams, or Fishikawa) are causal diagrams created by Kaoru Ishikawa (1968) that show the causes of a specific event.

Common uses of the Ishikawa diagram are product design and quality defect prevention to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify and classify these sources of variation.

1. Define the problem (effect) to be solved. This first step is probably one of the most important tasks in building a cause and effect diagram. While defining your problem or event, your problem statement may also contain information about the location and time of the event. On the cause and effect diagram the problem is visually represented by drawing a horizontal line with a box enclosing the description of the problem on the tip of the arrow.

2. Identify the key causes of the problem or event. In this step, the primary causes of the problem are drilled down by using brainstorming techniques. Often these causes are categorized under people, equipment, materials, external factors, etc.

3. Identify the reasons behind the key causes. The goal in this step is to brainstorm as many causes as possible for each of the key causes. Tools such as the 5 Whys can help your team to drill down to these sub-causes. To facilitate participation from all of your team members, ask each member of the group to provide one reason behind a key cause.

4. Identify the most likely causes. At the end of step three, your team should have a good overview of the possible causes for the problem or event; if there are areas in the chart where possible causes are few, see if your team can dig deeper to find more potential causes.
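As a rough illustration of steps 1–4, the sketch below stores a problem statement with categorized causes and sub-causes and prints them as an indented outline; the effect, categories, and causes are made-up examples rather than anything prescribed by the technique.

# Hypothetical fishbone data: effect -> categories -> causes -> sub-causes.
fishbone = {
    "effect": "Late shipments",
    "categories": {
        "People": {"Insufficient training": ["No onboarding checklist"]},
        "Equipment": {"Frequent label-printer jams": ["Worn rollers"]},
        "Materials": {"Packaging out of stock": ["No reorder point defined"]},
    },
}

print(f"Effect: {fishbone['effect']}")
for category, causes in fishbone["categories"].items():
    print(f"  Category: {category}")
    for cause, sub_causes in causes.items():
        print(f"    Cause: {cause}")
        for sub in sub_causes:
            print(f"      Why: {sub}")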

Check sheet

The check sheet is a form (document) used to collect data in real time at the location where the data is generated. The data it captures can be quantitative or qualitative. When the information is quantitative, the check sheet is sometimes called a tally sheet.[1]

The check sheet is one of the so-called Seven Basic Tools of Quality Control

Steps:

1. Identify the end objectives of the measurement, such as what questions are to be answered and what decisions are to be made. Consequently, identify what data needs to be collected, and in what format.

2. Identify the data that needs to be collected about the process. This should include all variables which could be problem causes or could contribute to variation in results, such as date, time, operator, batch number, machine reference, etc.

3. Identify the period and circumstances of data collection and, consequently, estimate the maximum number of measurements per check sheet.

4. Design the check sheet, aiming to ease the collection, transcription, and interpretation processes.

5. Ensure the check sheet works as intended by testing it, preferably in a live situation.

6. Ensure users are able to use the check sheets properly. This may include training, adjusting work instructions, etc. In any case, the data recording should not be too intrusive.

7. Collect the data, ensuring all required data is entered onto the form and can be clearly read. Ensure that representative samples are being taken for the conclusions that will be drawn from the results (for example, conclusions about all operators cannot be drawn if only one person fills in the sheet).

8. Interpret and use the results as planned.
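To make the idea concrete, here is a minimal sketch of a defect tally sheet kept as a simple table in code; the defect types, dates, and counts are invented for illustration.

from collections import Counter
from datetime import date

# Hypothetical tally sheet: one Counter per collection day, keyed by defect type.
tally = {
    date(2017, 9, 1): Counter(),
    date(2017, 9, 2): Counter(),
}

# Recording observations as they occur (in real time, where the data is generated).
tally[date(2017, 9, 1)].update(["Scratch", "Scratch", "Dent"])
tally[date(2017, 9, 2)].update(["Dent", "Missing part"])

for day, counts in tally.items():
    row = ", ".join(f"{defect}: {n}" for defect, n in counts.items())
    print(f"{day}: {row or 'no defects recorded'}")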

Scatter Diagram

The scatter diagram graphs pairs of numerical data, with one variable on each axis, to look for a relationship between them. If the variables are correlated, the points will fall along a line or curve. The better the correlation, the tighter the points will hug the line.

When to Use a Scatter Diagram

· When you have paired numerical data.

· When your dependent variable may have multiple values for each value of your independent variable.

· When trying to determine whether the two variables are related, such as…

o When trying to identify potential root causes of problems.

o After brainstorming causes and effects using a fishbone diagram, to determine objectively whether a particular cause and effect are related.

o When determining whether two effects that appear to be related both occur with the same cause.

o When testing for autocorrelation before constructing a control chart.

· Step 1: Enter your data into two columns (the steps below use Minitab). One column should be the x-variable (the independent variable) and the second column should be the y-variable (the dependent variable). Make sure you put a header for your data in the first row in each column — it will make the creation of the scatter plot easier in Step 4 and Step 5.

· Step 2: Click “Graph” on the toolbar and then click “Scatter plot.”

· Step 3: Click “Simple” Scatter plot. In most cases, this is the option you’ll use for scatter plots in elementary statistics. You can choose one of the others (such as the scatter plot with lines), but you’ll rarely need to use them.

· Step 4: Click your y-variable name in the left window, then click “Select” to move that y-variable into the y-variable box.

· Step 5: Click your x-variable name in the left window, then click “Select” to move that x-variable into the x-variable box.

· Step 6: Click “OK” to create the scatter plot in Minitab. The graph will appear in a separate window.

· Tip: If you want to change the ticks (the spacing for the x-axis or y-axis), double-click one of the numbers to open the Edit Scale box, where you can change a variety of options for your scatter plot, including ticks.
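For readers not working in Minitab, an equivalent plot can be sketched in a few lines of Python using matplotlib; the temperature/defect-rate pairs below are invented purely for illustration, and the correlation call requires Python 3.10 or later.

import statistics
import matplotlib.pyplot as plt

# Hypothetical paired data: x is the independent variable, y the dependent variable.
oven_temperature = [150, 160, 170, 180, 190, 200]   # x
defect_rate      = [9.1, 8.2, 7.4, 6.9, 5.8, 5.1]   # y

# Pearson correlation gives a quick numeric check of how tightly points hug a line.
r = statistics.correlation(oven_temperature, defect_rate)  # Python 3.10+
print(f"correlation r = {r:.2f}")

plt.scatter(oven_temperature, defect_rate)
plt.xlabel("Oven temperature")
plt.ylabel("Defect rate")
plt.title("Scatter diagram")
plt.show()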

Histogram

A histogram is an accurate graphical representation of the distribution of numerical data. It is an estimate of the probability distribution of a continuous variable (quantitative variable) and was first introduced by Karl Pearson. It is a kind of bar graph. To construct a histogram, the first step is to “bin” the range of values — that is, divide the entire range of values into a series of intervals — and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) must be adjacent, and are often (but are not required to be) of equal size.

Histograms give a rough sense of the density of the underlying distribution of the data, and are often used for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot.
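A minimal sketch of the bin-and-count construction and of the density normalization described above, using NumPy; the measurement values are invented.

import numpy as np

# Hypothetical measurements of a continuous variable.
values = [4.1, 4.3, 4.4, 4.7, 5.0, 5.1, 5.2, 5.2, 5.6, 6.0, 6.3, 6.8]

# Bin the range into 4 equal-width, adjacent, non-overlapping intervals and count.
counts, bin_edges = np.histogram(values, bins=4)
print("counts:", counts, "edges:", bin_edges)

# With density=True the bar areas sum to 1, giving an estimate of the probability density.
densities, _ = np.histogram(values, bins=4, density=True)
bin_width = bin_edges[1] - bin_edges[0]
print("total area:", (densities * bin_width).sum())  # ~1.0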

A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect the distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes.

Pareto Chart

A Pareto chart, named after Vilfredo Pareto, is a type of chart that contains both bars and a line graph, where individual values are represented in descending order by bars, and the cumulative total is represented by the line.

The purpose of the Pareto chart is to highlight the most important among a (typically large) set of factors. In quality control, it often represents the most common sources of defects, the highest occurring type of defect, or the most frequent reasons for customer complaints, and so on. Wilkinson (2006) devised an algorithm for producing statistically based acceptance limits (similar to confidence intervals) for each bar in the Pareto chart.
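A rough sketch of building the two parts of a Pareto chart — individual values sorted in descending order plus a cumulative-percentage line — from hypothetical complaint counts:

# Hypothetical counts of customer-complaint categories.
complaints = {"Late delivery": 42, "Damaged item": 18, "Wrong item": 11,
              "Billing error": 6, "Other": 3}

# Bars: individual values in descending order.
ordered = sorted(complaints.items(), key=lambda kv: kv[1], reverse=True)
total = sum(complaints.values())

# Line: cumulative total, expressed as a percentage.
running = 0
for category, count in ordered:
    running += count
    print(f"{category:15s} {count:3d}  cumulative {100 * running / total:5.1f}%")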

Cross functional flowchart/Deployment flowchart

Deployment flowchart (sometimes referred to as a cross functional flowchart) is a business process mapping tool used to articulate the steps and stakeholders of a given process.

“Deployment flowcharts consist of a sequence of activity steps and also the interactions between individuals or groups.”[1] Each participant in the process is displayed on the map (which is constructed as a matrix) — tasks/activities are then articulated in sequence under the column corresponding to that stakeholder.

As deployment flowcharts highlight the relationships between stakeholders in addition to the process flow[2] they are especially useful in highlighting areas of inefficiency, duplication or unnecessary processing.[3] Often utilized within Six sigma activity,[4] completed flowcharts are commonly used to examine the interfaces between “participants” which are typically causes for delays and other associated issues. Deployment flowcharts are useful for determining who within an organization is required to implement a process and are sometimes used as a business planning tool.

While deployment flowcharts can be drawn by hand using pen and paper, various software tools include functionality to construct them on a computer; these include products such as Microsoft Visio.
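As a toy illustration of the matrix idea, the sketch below lists process steps in execution order with their owning stakeholder and then prints each task under its stakeholder's column; the process, tasks, and stakeholders are invented.

# Hypothetical process: each step is (stakeholder, task), kept in execution order.
steps = [
    ("Customer", "Submit order"),
    ("Sales", "Check stock and confirm price"),
    ("Warehouse", "Pick and pack order"),
    ("Sales", "Send invoice"),
    ("Customer", "Confirm receipt"),
]

stakeholders = ["Customer", "Sales", "Warehouse"]  # columns of the matrix

# Print one row per step, placing the task under its stakeholder's column.
for task_no, (owner, task) in enumerate(steps, start=1):
    row = [f"{task_no}. {task}" if owner == col else "" for col in stakeholders]
    print(" | ".join(f"{cell:35s}" for cell in row))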

Run chart

A run chart, also known as a run-sequence plot, is a graph that displays observed data in a time sequence. Often, the data displayed represent some aspect of the output or performance of a manufacturing or other business process. It is therefore a form of line chart.
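A small sketch of a run chart: observations plotted in time order, with the median drawn as a reference line; the daily output values are made up for illustration.

import statistics
import matplotlib.pyplot as plt

# Hypothetical daily output of a process, in time order.
daily_output = [52, 48, 50, 55, 47, 49, 53, 58, 51, 46, 54, 50]
median = statistics.median(daily_output)

plt.plot(range(1, len(daily_output) + 1), daily_output, marker="o")
plt.axhline(median, linestyle="--", label=f"median = {median}")
plt.xlabel("Observation (time order)")
plt.ylabel("Output")
plt.title("Run chart")
plt.legend()
plt.show()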

Six Sigma

“Six Sigma is a quality program that, when all is said and done, improves your customer’s experience, lowers your costs, and builds better leaders.” — Jack Welch

Six Sigma at many organizations simply means a measure of quality that strives for near perfection. Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects (driving toward six standard deviations between the mean and the nearest specification limit) in any process — from manufacturing to transactional and from product to service. Many frameworks exist for implementing the Six Sigma methodology, and Six Sigma consultants all over the world have developed proprietary methodologies for implementing it.
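Since “six sigma” refers to roughly six standard deviations between the process mean and the nearest specification limit, a defect rate can be converted into a sigma level. A minimal sketch follows, using the common 1.5-sigma shift convention (an assumption, not something stated in this article); the defect counts are hypothetical.

from statistics import NormalDist

# Hypothetical process data.
defects = 350
units = 10_000
opportunities_per_unit = 5

# Defects per million opportunities (DPMO).
dpmo = defects / (units * opportunities_per_unit) * 1_000_000

# Long-term sigma level; the +1.5 term is the conventional shift used in
# most Six Sigma tables (a convention assumed here, not given in the text).
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(f"DPMO = {dpmo:.0f}, sigma level = {sigma_level:.2f}")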
