Notes on sociotechnical systems design

Marc Rettig · Rettig’s Notes · Aug 24, 2017

This is not a piece of thoughtful writing. It’s a place for me to take notes, which I’m making public in case it helps someone else. This collection will change (though maybe not very frequently). It’s a work in progress, the cookies are still dough, standard disclaimers apply.

Harvested from Wikipedia

Sociotechnical systems theory concerns the interplay between the social aspects of people and society and the technical aspects of organizational structure and processes. Here, technical does not necessarily imply material technology. The focus is on procedures and related knowledge, i.e. it refers to the ancient Greek term logos. “Technical” refers to structure and a broader sense of technicalities. Sociotechnical refers to the interrelatedness of the social and technical aspects of an organization, or of society as a whole.

. . .

Responsible autonomy

Sociotechnical theory was pioneering for… a shift in emphasis towards considering teams or groups as the primary unit of analysis, not the individual. Sociotechnical theory pays particular attention to internal supervision and leadership at the level of the “group”, and refers to it as “responsible autonomy”. The… simple ability of individual team members to perform their function is not the only predictor of group effectiveness. A range of issues in team cohesion research, for example, are answered by having regulation and leadership internal to a group or team.

…The key to responsible autonomy seems to be to design an organization possessing the characteristics of small groups whilst preventing the “silo-thinking” and “stovepipe” neologisms of contemporary management theory. In order to preserve “…intact the loyalties on which the small group [depend]…the system as a whole [needs to contain] its bad in a way that [does] not destroy its good”. In practice, this requires groups to be responsible for their own internal regulation and supervision, with the primary task of relating the group to the wider system falling explicitly to a group leader. This principle, therefore, describes a strategy for removing more traditional command hierarchies.

Adaptability

“the rate at which uncertainty overwhelms an organisation is related more to its internal structure than to the amount of environmental uncertainty.”

De Sitter (1997) offered two solutions for organisations confronted, like the military, with an environment of increased (and increasing) complexity:

  1. “The first option is to restore the fit with the external complexity by an increasing internal complexity. …This usually means the creation of more staff functions or the enlargement of staff-functions and/or the investment in vertical information systems”.
  2. “…the organisation tries to deal with the external complexity by ‘reducing’ the internal control and coordination needs. …This option might be called the strategy of ‘simple organisations and complex jobs’”.

Adaptability and complexity

“A very large variety of unfavourable and changing environmental conditions is encountered … many of which are impossible to predict. Others, though predictable, are impossible to alter.”

Many types of organisation are clearly motivated by the appealing “industrial age”, rational principles of “factory production”, a particular approach to dealing with complexity: “In the factory a comparatively high degree of control can be exercised over the complex and moving ‘figure’ of a production sequence, since it is possible to maintain the ‘ground’ in a comparatively passive and constant state”. On the other hand, many activities are constantly faced with the possibility of “untoward activity in the ‘ground’” of the figure-ground relationship. The central problem, one that appears to be at the nub of many problems that “classic” organisations have with complexity, is that “the instability of the ‘ground’ limits the applicability … of methods derived from the factory”.

In classic organisations, problems with the moving “figure” and moving “ground” often become magnified through a much larger social space, one with a far greater extent of hierarchical task interdependence. For this reason, the semi-autonomous group, with its ability to make a much more fine-grained response to the “ground” situation, can be regarded as “agile”. Moreover, local problems that do arise need not propagate throughout the entire system (affecting the workload and quality of work of many others), because a complex organization doing simple tasks has been replaced by a simpler organization doing more complex tasks. The agility and internal regulation of the group allows problems to be solved locally, without propagation through a larger social space, thus increasing tempo.

Whole tasks

A whole task “has the advantage of placing responsibility for the … task squarely on the shoulders of a single, small, face-to-face group which experiences the entire cycle of operations within the compass of its membership.” The sociotechnical embodiment of this principle is the notion of minimal critical specification. This principle states that, “While it may be necessary to be quite precise about what has to be done, it is rarely necessary to be precise about how it is done.”

[This seems directly analogous to encapsulation in object-based systems, with similar advantages for similar reasons.]

The key factor in minimally critically specifying tasks is the responsible autonomy of the group to decide, based on local conditions, how best to undertake the task in a flexible, adaptive manner. This principle is isomorphic with ideas like effects-based operations (EBO). EBO asks what goal we want to achieve and what objective we need to reach, rather than what tasks have to be undertaken, when, and how. The EBO concept enables managers to “…manipulate and decompose high level effects. They must then assign lesser effects as objectives for subordinates to achieve. The intention is that subordinates’ actions will cumulatively achieve the overall effects desired”. In other words, the focus shifts from being a scriptwriter for tasks to being a designer of behaviours. In some cases, this can make the task of the manager significantly less arduous.
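
[A minimal sketch of that encapsulation analogy, in Python. Everything here is my own illustration with hypothetical names, not anything from the sources: the contract is precise about what effect must be achieved, and each implementing “team” keeps responsible autonomy over how.]

```python
from abc import ABC, abstractmethod

# Minimal critical specification: precise about WHAT must be done,
# silent about HOW it is done.
class DeliverSupplies(ABC):
    @abstractmethod
    def deliver(self, destination: str, deadline_hours: float) -> bool:
        """Achieve the effect: supplies at the destination before the deadline."""

# Each group decides the "how" from local conditions (responsible autonomy).
class AirliftTeam(DeliverSupplies):
    def deliver(self, destination: str, deadline_hours: float) -> bool:
        # Local decisions: route, aircraft, load order...
        return True

class ConvoyTeam(DeliverSupplies):
    def deliver(self, destination: str, deadline_hours: float) -> bool:
        # A different "how", the same specified effect.
        return True

def assign_effect(team: DeliverSupplies) -> bool:
    # The manager assigns an effect to achieve, not a task script.
    return team.deliver("forward base", deadline_hours=48)
```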

Harvested from Baxter and Sommerville

The underlying premise of socio-technical thinking is that systems design should be a process that takes into account both social and technical factors that influence the functionality and usage of computer-based systems. The rationale for adopting socio-technical approaches to systems design is that failure to do so can increase the risks that systems will not make their expected contribution to the goals of the organisation. Systems often meet their technical ‘requirements’ but are considered to be a ‘failure’ because they do not deliver the expected support for the real work in the organisation. The source of the problem is that techno-centric approaches to systems design do not properly consider the complex relationships between the organisation, the people enacting business processes and the system that supports these processes.

[Describe problems of practicality and usability of sociotechnical design and engineering approaches. They aim to bridge the gap.]

We believe that it is not enough to simply analyse a situation from a socio-technical perspective and then explain this analysis to engineers. We also must suggest how socio-technical analyses can be used constructively when developing and evolving systems. Many companies have invested heavily in software design methods and tools, so socio-technical approaches will only be successful if they preserve and are compatible with these methods.

We must avoid terminology that is alien to engineers, develop an approach that they can use, and generate value that is proportionate to the time invested.

Five key characteristics of open socio-technical systems

(Badham et al., 2000):

  • Systems should have interdependent parts.
  • Systems should adapt to and pursue goals in external environments.
  • Systems have an internal environment comprising separate but interdependent technical and social subsystems.
  • Systems have equifinality. In other words, systems goals can be achieved by more than one means. This implies that there are design choices to be made during system development.
  • System performance relies on the joint optimisation of the technical and social subsystems. Focusing on one of these systems to the exclusion of the other is likely to lead to degraded system performance and utility.

STSD methods were developed to facilitate the design of such systems. We have restricted our scope here to this class of systems, and do not consider deeply embedded systems, for example, where there is usually no social subsystem involved.

Problems with existing approaches

The development of STSD methods has identified and attempted to address real problems in understanding and developing complex organisational systems which, nowadays, inevitably rely on large-scale software-intensive systems. Despite positive experiences in demonstrator projects, however, these methods have not had any significant impact on industrial software engineering practice. The reasons for this failure to adopt and maintain the use of STSD approaches have been analysed in several places, and from several viewpoints (e.g., Mathews, 1997; Mumford, 2000, 2006). We summarise the main problems identified by these authors below, and also discuss other issues that have arisen in our own use of STSD methods.

Problems elaborated upon in the article

  • inconsistent terminology
  • levels of abstraction
  • conflicting value systems (humanistic principles vs. managerial values)
  • lack of agreed success criteria
  • analysis without synthesis
  • multidisciplinarity
  • perceived anachronism
  • fieldwork issues

Socio-technical systems engineering

Systems engineering activities

**** I paused at that point in the article ****

Harvested from Social Design of Technical Systems

Whitworth and Ahmad, https://www.interaction-design.org/literature/topics/socio-technical-systems

[There’s a lot of good stuff in here, but I’m going to narrow the focus of these notes to things relevant for the NASA work.]

The evolution of computing implies a requirements hierarchy. If the hardware works, software becomes the priority; if the software works, user needs arise; and when user needs are met, social requirements follow. As one level’s issues are met those of the next appear, as climbing one hill reveals another. … As software response times improve, user response times become the issue. Companies like Google and eBay still seek customer satisfaction, but customers in crowds have social needs like fairness and synergy. As computing evolves, higher levels come to drive success. In general, the highest level of a system defines its success, e.g. social networks need a community to succeed. If no community forms, it doesn’t matter how easy to use, fast or reliable the software is. Lower levels are essential to avoid failure, but higher levels are essential to success.

Conversely, any level can cause failure: e.g. it doesn’t matter how high community morale is if the hardware fails, the software crashes or the interface is unusable. An STS fails if its hardware fails, if its program crashes or if users can’t figure it out. Hardware, software, personal and community failures are all computing errors. The one thing they have in common is that the system fails to perform, and in evolution, what doesn’t perform doesn’t survive.

When computing was just technology, it only failed for technical reasons, but now it is socio-technology; it can also fail for social reasons. Technology is hard, but society is soft. That the soft should direct the hard seems counter-intuitive, but trees grow at their soft tips not their hard base. As a tree trunk doesn’t direct its expanding canopy, so today’s social computing was undreamt of by its technical base.

  1. Hardware systems exchange energy. So “functionality” is power, i.e. hardware with high CPU-cycle or disk read-write rates. “Usable” hardware uses less power for the same result, e.g. mobile phones that last longer. Reliable hardware is rugged enough to work if you drop it, and flexible hardware is mobile, so it still works if you move around, i.e. change environments. Secure hardware blocks physical theft, e.g. by laptop cable locks, and extendible hardware has ports for peripherals to be attached. Connected hardware has wired or wireless links, and private hardware is tempest-proof, i.e. it doesn’t physically leak energy.
  2. Software systems exchange information. Functional software has many ways to process information, while “usable” software uses less CPU processing (“lite” apps). Reliable software avoids errors or recovers from them quickly. Flexible software is operating system platform independent. Secure software can’t be corrupted or overwritten. Extendible software can access OS program library calls. Connected software has protocol “handshakes” to open read/write channels. Private software can encrypt information so others can’t see it.
  3. HCI systems exchange meaning, including ideas, feelings and intents. In functional HCI the human-computer pair is effectual, i.e. meets the task goal. Usable HCI requires less intellectual, affective or conative effort, i.e. is intuitive. Reliable HCI avoids or recovers from unintended user errors by checks or undo choices — the web Back button is an HCI invention. Flexible HCI lets users change language, font size or privacy preferences, as each person is a new environment to the software. Secure HCI avoids identity theft by user password. Extendible HCI lets users use what others create, e.g. mash-ups and third-party add-ons. Connected HCI communicates with others, while private HCI includes not getting spammed or being located via a mobile device.
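
[A toy model of the level logic above, my own sketch rather than Whitworth and Ahmad’s. It assumes the four levels this source names (hardware, software, HCI/personal, community): any one level failing fails the whole system.]

```python
# Lower levels gate failure; the highest level defines success.
LEVELS = ["hardware", "software", "HCI", "community"]

def sts_performs(level_ok):
    # Any single failing level fails the whole socio-technical system.
    return all(level_ok[level] for level in LEVELS)

# High community morale cannot rescue crashed software:
print(sts_performs({"hardware": True, "software": False,
                    "HCI": True, "community": True}))  # False
```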

Socio-technical systems not only deny defections, but also enable synergies. Forums like AnandTech illustrate this: if anyone in a group solves a problem, everyone can get the answer. The larger the group, the more likely someone can solve in seconds a problem you have struggled with for days. “Same again” functions let Amazon readers use the experiences of others to find books bought by those who bought the book they are looking at now. Wikipedia users correct errors of fact and supply references and examples for everyone.

Synergy reduces when citizens work to personal requirements like:

“Take what you can and give nothing back.”

Synergy increases when citizens follow community ethics like:

“Give unto others as you would they give unto you”.

Harvested from Emergent properties of sociotechnical systems

Ian Sommerville:

Three important characteristics:

1:25 — defines emergence

Uses aircraft as an example. Stability, control, …many properties besides flight itself.

Emergent properties are a consequence of the relationships between system components

They can therefore only be assessed and measured once the components have been integrated into a system.

Two types of emergent property:

Functional properties are ones we are trying to achieve when we create the system. Something the system does.

The “ilities” are non-functional emergent properties.

Very important for critical systems, because if we don’t get the right level of “ilities,” the system won’t be useful.

Examples

Digging deeper into reliability…

Three principal influences on reliability in a sociotechnical system: hardware, software, and people.

[6:00ish — bullet points for hardware and software reliability]

Operator reliability: likelihood of person making an error.

Emergence comes from the fact that these different influences on reliability interact.

If we have a hardware failure, the software may behave in an unexpected way. This may be misinterpreted by the operator, who then takes the wrong actions (for reasons that make complete sense at the time, based on his or her current interpretation of events). The operator’s actions are interpreted by the software, which feeds back to the hardware, and so on.

So overall reliability is an emergent property.
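
[A numeric sketch of my own, not from the talk. The naive way to combine the three influences is to treat them as independent and multiply; the point of emergence is precisely that this model breaks down.]

```python
# Naive, non-emergent model: hardware, software and operator treated
# as independent links in a series system.
def naive_reliability(r_hw, r_sw, r_op):
    return r_hw * r_sw * r_op

print(naive_reliability(0.999, 0.99, 0.95))  # ~0.94

# The emergence argument says this model is wrong for sociotechnical
# systems: a hardware fault changes software behaviour, which misleads
# the operator, so the factors interact and overall reliability cannot
# be computed component by component.
```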

Some notes on prototyping sociotechnical systems

  1. Talk to Jess McMullin. Can he share anything I can use with client?

Generalized approach to patchwork prototyping

Based on the experiences described above, we have outlined a general approach to building patchwork prototypes using OSS. While our experience has been primarily with web-based tools, and this process has been defined with such tools in mind, it is likely that a similar approach could be taken with prototyping any kind of software. Like other prototyping methods, this is designed to be iterated, with the knowledge and experience gained from one step feeding into the next. The approach entails the following five stages:

  1. Make an educated guess about what the target system might look like;
  2. Select tools which support some aspect of the desired functionality;
  3. Integrate the tools into a rough composite;
  4. Deploy the prototype and solicit feedback from users;
  5. Reflect on the experience of building the prototype and the feedback given by users, and repeat.
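
[The cycle written as a loop, just to make the iteration explicit. Every function here is a hypothetical stub of mine, not part of the published method.]

```python
def patchwork_prototype(iterations=3):
    design = guess_target_system()                # 1. educated guess
    for _ in range(iterations):
        tools = select_oss_tools(design)          # 2. tools covering functionality
        prototype = integrate(tools)              # 3. rough composite
        feedback = deploy_and_solicit(prototype)  # 4. deploy, gather feedback
        design = reflect(design, feedback)        # 5. reflect, feed the next pass
    return design

# Trivial stubs so the sketch runs; the real work happens in each stage.
def guess_target_system():          return {"features": ["wiki", "chat"]}
def select_oss_tools(design):       return ["oss-" + f for f in design["features"]]
def integrate(tools):               return {"composite": tools}
def deploy_and_solicit(prototype):  return ["user feedback"]
def reflect(design, feedback):      return design
```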

See Service Prototype on servicedesigntools.org

[Oy, this is pretty thin. Eilidh Dickson is mentioned. Might she have materials? Does CIID have something to share?]

From IxD Foundation

You will need to pay attention to these four key components of prototyping and testing, no matter what method you choose to utilise:

  • People — including those whom you are testing and the observers
  • Objects — static and interactive, including the prototype and other objects the people and/or prototype interact/s with
  • Location — places and environments
  • Interactions — digital or physical, between people, objects and the location

When you are building your prototypes, as well as when you’re testing them, keep in mind these key components. For instance, if you are testing your prototype in a lab, think about how to simulate the natural environment in which your design will engage its users. Also, take note of other objects that the prototype will be used with. When performing a task, for example, will the users be wearing gloves, or have their hands full? What implications would that have on how they can use a product or service? With these four components of testing a prototype in mind, let us look at the eight common methods of prototyping that you can use.

Methods

  • Sketches and diagrams
  • Paper interfaces
  • Storyboards
  • Lego prototypes
  • Role-playing
  • Physical models
  • Wizard of Oz
  • User-driven prototypes

User-driven prototypes

A user-driven prototype is unlike any other prototyping method previously mentioned. Instead of building a prototype to test on users, you will instead get the user to create something, and from the process learn more about the user. When you ask the user to design a solution, rather than provide feedback on a prototype, you can learn about the assumptions and desires that the user possesses. The purpose of a user-driven prototype is not to use the solutions that the users have generated; instead, it is to use their designs to understand their thinking. You can use user-driven prototypes to gain empathy with your users or to fine-tune the details of your product once you have an idea in mind.

In order to create a user-driven prototype, you should ask the users to create something that enables you to understand how they think about certain issues. For instance, if you are interested in creating an improved airport waiting experience, you could ask users to draw out what they think is the ideal airport waiting process — or you could give them a bunch of Lego bricks and encourage them to show you their dream waiting area in an airport. Alternatively, if your solution is a website, you could ask your users to create a sketch of what features they think the website should have. For user-driven prototypes to be useful, you should balance the amount of help you offer: enough that the users do not feel lost (and thus fail to ideate), while keeping the session open enough that you can learn about the users without shepherding them towards your own ideas, which would defeat the purpose.
