Category: socio-technical design

  • Socio-Technical System Design

    Origins of Human-Centered Design

    A few years ago, I published an academic paper on user-centered vs. human-centered design, which proved to be one of my most-read articles. In that paper, I compared the typical analytic methods and tools of user-centered design with an idea of human-centered design that came out of the field of industrial engineering. Having seen the recent explosion of “user-centered” design fields such as User Experience design, I feel even more strongly that human-centered design is a discipline that has not yet fulfilled its potential to change the way in which we design technology systems for both work and play.

    Human-centered design ideas come out of an emancipatory labor movement – originally in the UK – that examined the constraints imposed by technology on work and focused on the impact of design on the quality of working life. This “socio-technical” approach to design (Emery & Trist, 1960) originated in studies of industrial processes, often embedded in the rapid societal and technical change of post-World War II Britain, conducted by researchers from the Tavistock Institute of Human Relations in London. A research team led by Eric Trist, Ken Bamforth, and Fred Emery studied the organization of coal-mining teams across various types of mine and coal-seam environment, concluding that the design of working arrangements and the use of technology needed to be balanced with the conditions of each type of working environment. They noted the tension between the need for miners to self-organize into collaborative groups, which increased productivity by allowing miners autonomy in selecting their team roles, and management directives that constrained group autonomy because it empowered the miners and allowed them to negotiate the higher rates paid for skilled labor (Trist et al., 1963). They coined the term “sociotechnical” to define an approach to the design of working arrangements that balanced the socially-situated needs of human workers with the use of machinery to automate repetitive and dangerous work.

    The ideas behind socio-technical design really took off in the 1980s, with the explosion of affordable office technologies and personal computing. Some notable thinkers in this aspect of design include:

    Mike Cooley (Architect or Bee?, 2016), who explained how technology design choices exert control over the labor force at the expense of social good. A key element of his argument was to explain how the combination of conceptual design ability with the practical ability to understand the context of practice across multiple domains, common in the Renaissance, has given way to a “deep dive” specialization in one area or another. This separation of “planning” from “doing” leads to design problems, as designers cannot envision the context in which their design will be used and so make stupid mistakes. It also excludes consideration of social good from design choices: technology decisions are made on the basis of manufacturing cost rather than long-term environmental impact.

    Ken Eason, who argued in his early work (e.g. Eason, 1982) that designers’ choice of design approach affects system usability: a technology-led approach leads to ‘fire fighting’ when negative organizational effects become apparent, and user involvement in design tends to fail because users take longer to understand new technology than developers, so design is complete by the time they are able to make a contribution. He proposed an evolutionary design process that builds slowly from small-scale systems to large ones, retaining the flexibility to change and adapt to emerging user needs, promoting user learning via system prototypes and training, and involving users in system evaluation. His later work discusses how the typical “closed system” approach to IT design (goal-oriented and focused on predefined requirements) constrains the “open system” approach needed to balance the emergent needs of human users with technology goals, and to cater for the evolving system requirements engendered by changing global business environments (Eason, 2009).

    Howard Rosenbrock (1981, 1988) was a visionary engineering theorist who not only developed innovative approaches to algorithm design for control engineering, but also saw engineering as an “art” (Rosenbrock, 1988) that needed to balance the design of technology with the social needs, personal experience, and judgment of human beings. The opening of his 1981 paper, Engineers And The Work That People Do, contains the most chilling description of a work environment that I have ever read:

    The plant was almost completely automatic. Parts of the glass envelope, for example, were sealed together without any human intervention. Here and there, however, were tasks which the designer had failed to automate, and workers were employed, mostly women and mostly middle-aged. One picked up each glass envelope as it arrived, inspected it for flaws, and replaced it if it was satisfactory, once every four-and-one-half seconds. Another picked out a short length of aluminum wire from a box with tweezers, holding it by one end. Then she inserted it delicately inside a coil which would vaporize it to produce the reflector, repeating this again every four-and-one-half seconds. Because of the noise, and the isolation of the work places, and the concentration demanded by some of them, conversation was hardly possible.

    Rosenbrock, H. H. (1981). Engineers And The Work That People Do, p. 4.

    This description still sends shivers down my spine. Not just because of the working conditions, but because of the casual way in which Rosenbrock mentions that the few manual work-processes on the light-bulb factory floor were left unautomated only because they were too complex or expensive to automate. Human beings were employed in repetitive, demeaning jobs, in an environment that made it too difficult to socialize with others, simply because they were cheaper or easier than designing an automated solution.

    Moving to Participative Design

    Obviously, no blog post can capture the whole of the socio-technical movement, with all the complexities that the various studies introduced. Here, I have tried to outline the tip of the iceberg, explaining the motivations that led to the HCI, CSCW, and agile design fields that influence contemporary design. But I cannot leave this discussion without mentioning the key influence of Enid Mumford. Professor Mumford was critical in promoting the importance of user participation in design (Mumford, 1983). She even conducted studies to demonstrate how users “went native” when participating in technology design, as technology-design skills were considered so glamorous and career-enhancing (Mumford & Sackman, 1975). She devised a method – the ETHICS approach – that illustrated how to analyze requirements in ways that balanced the technical and the social aspects of design, while also managing the inevitable subversion of social (work-system) design by considerations of technical expediency and optimization (Mumford & Weir, 1979; Mumford, 1995).

    So how do we design human-centered systems that support workers in the work they need to do, while allowing them autonomy in the way that they do this work? The process devised over many years is to use socio-technical systems design.

    [Figure 1. Socio-Technical System Design: the balance of social and technical considerations in system design]

    As shown in Figure 1, above, socio-technical design balances the needs of a “supported system” of people doing work – a.k.a. the social system – with a “supporting system” of information and communication technology – a.k.a. the technical system. It is important to start with the social system: the people who do the work are unfailingly those who understand best what it requires in the way of information and computing support. It is also important to see the drivers of design as the need to balance changes across the two systems, so that the IT system supports the system of work (and not vice-versa). I refer to this principle as the co-design of business processes and IT systems. This concept is taken from the work of Peter Checkland, who argues that designed IT systems often solve the wrong problem because designers fail to appreciate that the point of design is to support purposeful systems of human activity, rather than to pursue the separate aims of a technology architecture, data structures, and information systems (Checkland, 1981; Winter, Brown, & Checkland, 1995).

    References

    Checkland, P. (1981) Systems Thinking, Systems Practice, John Wiley & Sons, Chichester.

    Cooley, Mike (2016). Architect or Bee? The Human Price of Technology. UK: Spokesman Books. ISBN 978-0-85124-849-3.

    Eason, K. D. (1982). The Process Of Introducing Information Technology. Behaviour and Information Technology, 1(2), April-June 1982.
    Reprinted as Eason, K.D. (1984) “Managing Technological Change,” in Rob Paton, Suzanne Brown, Jake Chapman, Mike Floyd and John Hamwee (Eds.) Organizations: Cases, Issues, Concepts. The Open University, Milton Keynes, UK.

    Eason, K. D. (2009). Before the Internet: The Relevance of SocioTechnical Systems Theory to Emerging Forms of Virtual Organisation. International Journal of Sociotechnology and Knowledge Development, 1(2). 

    Emery, F. E., & Trist, E. L. (1960). Socio-Technical Systems. In C. W. Churchman & M. Verhulst (Eds.), Management Science Models and Techniques (Vol. 2). Oxford UK: Pergamon Press.

    Mumford, E. & Sackman, H. (1975) Human Choice and Computers, North-Holland Publishing Company.

    Mumford, E. & Weir, M. (1979) Computer Systems in Work Design: The ETHICS Method. John Wiley, New York.

    Mumford, E. (1983) Designing Participatively: A Participative Approach to Computer Systems Design. Manchester Business School, Manchester, UK.

    Mumford, E. (1995) Effective Systems Design and Requirements Analysis: The ETHICS Approach. Macmillan, Basingstoke, UK.

    Rosenbrock, H. H. (1981). Engineers And The Work That People Do. IEEE Control Systems Magazine, 1(3), 4-8.

    Rosenbrock, H. H. (1988). Engineering As An Art. AI & Society, 2(4), 315-320.

    Trist, E., Higgin, G., Murray, H., and Pollock, A. B. (1963) Organisational Choice. London: Tavistock Publications.

    Trist, E. L. (1981). The evolution of socio-technical systems. Toronto: Ontario Quality of Working Life Centre. Report access is provided courtesy of Larry Miller’s Blog on Leadership, Learning and Culture.

    Winter, M. C., Brown, D. H., & Checkland, P. B. (1995). A Role For Soft Systems Methodology in Information Systems Development. European Journal of Information Systems, 4(3), 130-142.

  • User-Centered Vs. Human-Centered Design

    In the last few years, the terms human-centered and user-centered have become synonymous in HCI, with a focus on disciplines such as “user experience” and “interaction design.” Here I will argue that neither discipline really deals with the core issues of human-centered design.

    Human-centeredness in design involves designing artifacts and technology support environments that provide a “support system” to the humans performing specific work, applying their effort to achieving specific goals, or collaborating around a set of (more or less) well-defined aims. The human is seldom alone in these endeavors: they engage with others in a social network of communications and collaborative actions, they socialize and exchange ideas, and they work together purposefully. A better term than “user” for the context of use of the artifacts and environments we design would be “human-activity system.” As humans, we are thrown into a world of work and activity by others, which we can take control of through action (Heidegger, 1962). We can support this world by understanding the various purposes of human activity and designing technology to assist in those purposes (Checkland & Winter, 2000). So human-centered design is systemic: it appreciates the social and organizational context of work, and it supports the multivocal purposes of the system of human activity within its context.

    User-centeredness, by contrast, is isolationist in its focus on interaction design. It takes a human being, rich in purpose and understanding, and reduces them to the role of artifact user; by implication, the user of a pre-defined artifact whose purpose is understood but whose mechanisms of interaction remain to be fully defined. By focusing on conceptual models of use, user scripts, and activity/task frameworks (e.g. Sharp et al., 2019), it isolates the user from the social context of work, describing activities in terms of fixed procedures and embedding assumptions about how and why the artifact will be used. It loses the joyful multivocality of the human-centered approach to design. Instead of understanding, with Heidegger, that thrownness is a temporary state in which we can choose between reacting and acting proactively, user-centered design embeds reaction as a paradigm. It separates tasks from workflows, making each interaction an end in itself and enforcing the approach to design that led Lucy Suchman to write her famous treatise on situated action (Suchman, 1987). There is no linked flow of work processes, where the human being knows that (for example) they have already photocopied the report covers (onto special cardstock) and the early chapters, so now have only to copy the later chapters. There is only the dumb, lack-of-saved-status machine, which jams halfway, then asks the user to reload the report pages in their original order, starting with the covers, which require the user to load special cardstock into the paper feeder.

    Designing for humans rather than users is a choice.

    • It involves more complex and realistic state machines, which account for multiple stages of linked workflow, supported by multiple sets of machine interactions (see the sketch after this list).
    • It involves a conscious decision to support informal communications and activities – for example, water-cooler conversations or phone calls, which may or may not result in recorded outcomes.
    • It treats the participants in a human activity system as autonomous individuals, not agents to be modeled, controlled, and curtailed.
    • It recognizes that a social system of information exchange exists, of which the machine is only a part; that humans need to exercise a deliberate choice about what to record and why; and that any computer-based system of data is part of a wider, human-network-based system of information.
    • Above all, it acknowledges that knowledge, understanding, and the meanings that we ascribe to work are emergent. We understand how to do things by doing them, after which we have a better understanding of how to do them next time. Embedding any particular set of procedures into a computer-based system is not only a waste of time but may be counterproductive in the face of new ways of proceeding.
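
    To make the first point concrete, here is a minimal sketch of what a “saved-status” state machine for the photocopier example might look like. It illustrates the design principle only: the names (CopyJob, FlakyCopier, PaperJam) are hypothetical, not drawn from any real copier firmware.

    ```python
    # Hypothetical sketch: a copy job that remembers which pages (and which
    # paper stock) are already done, so a jam resumes where it left off
    # instead of forcing the user to reload every sheet from the covers on.
    from dataclasses import dataclass
    from enum import Enum, auto


    class Stock(Enum):
        CARDSTOCK = auto()  # report covers
        PLAIN = auto()      # body pages


    @dataclass(frozen=True)
    class Page:
        number: int
        stock: Stock


    class PaperJam(Exception):
        pass


    class FlakyCopier:
        """Demo copier that jams once, partway through the job."""
        def __init__(self, jam_at: int) -> None:
            self.jam_at = jam_at
            self.already_jammed = False

        def copy(self, page: Page) -> None:
            if page.number == self.jam_at and not self.already_jammed:
                self.already_jammed = True
                raise PaperJam()
            print(f"copied page {page.number} ({page.stock.name})")


    @dataclass
    class CopyJob:
        """Workflow state that survives a jam: next_index marks the
        first page not yet successfully copied."""
        pages: list
        next_index: int = 0

        def run(self, copier: FlakyCopier) -> None:
            while self.next_index < len(self.pages):
                page = self.pages[self.next_index]
                try:
                    copier.copy(page)
                except PaperJam:
                    # The machine, not the user, remembers where to resume,
                    # and can say exactly which stock must be reloaded.
                    remaining = self.pages[self.next_index:]
                    print(f"jam at page {page.number}: clear it, reload pages "
                          f"{remaining[0].number}-{remaining[-1].number} "
                          f"({page.stock.name} first); earlier pages are done")
                    return
                self.next_index += 1
            print("job complete")


    if __name__ == "__main__":
        covers = [Page(1, Stock.CARDSTOCK), Page(2, Stock.CARDSTOCK)]
        body = [Page(n, Stock.PLAIN) for n in range(3, 8)]
        job = CopyJob(pages=covers + body)
        copier = FlakyCopier(jam_at=5)
        job.run(copier)  # jams at page 5; covers and early pages are done
        job.run(copier)  # resumes at page 5, not page 1
    ```

    The point is not this particular code but the design choice it embodies: the interaction state belongs to the linked workflow, not to the single transaction, so a failure mid-job never erases what the human and the machine have already accomplished together.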

    So no – “user experience” and “interaction design” do not support human-centered information system design. They seek to humanize the artificial processes imposed by transaction-based systems by associating these with specific paradigms or conceptual models that guide the psychology of human activity and interaction. But they don’t even scratch the surface of understanding systemic activity. For that, you need to employ methods such as Soft Systems Methodology (Checkland & Poulter, 2007) – and to take human-centeredness seriously.

    User-centered design is NOT the same as human-centered design. User-centered design focuses on ameliorating design decisions that have already been made, defining IT and data use around the needs of notional “business goals.” Human-centered design focuses design on the needs of people engaged in purposeful activities aimed at a variety of goals.

    To conclude, user-centered design – as the term is employed in HCI and UX – is not the same as human-centered design. User-centered design aims to mitigate and improve the experience of using a system of technology that was designed for purposes other than those the user prioritizes: to make money, to “engage” users on the website so they return (and spend more money), and to publicize the firm’s products and services. In contrast, human-centered design is an approach that starts with user values, priorities, and purposes. It seeks to afford uses of the system that fit how users would like to access the features they value and expect. It designs the flow of use-interactions around the expected user flow of work (or play), allowing users to configure this flow as they wish. It does not make you do illogical or stupid things, like reloading all the sheets in a photocopier feeder in their original order, even when the copy failed on the last-page-but-one. It does not make you enter the same information repeatedly, because the designer was too unimaginative to anticipate that a user might want to change some of the options they selected earlier (e.g. when booking an airline ticket). And it doesn’t make you go through seven layers of menu to reach the one page you need.
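
    The airline-ticket complaint has the same shape as the photocopier one. A minimal sketch, assuming an invented BookingDraft type (no real booking API is implied): amending one option returns a new draft that retains everything the user has already entered.

    ```python
    # Hypothetical sketch: edit one booking option without re-entering the rest.
    from dataclasses import dataclass, replace


    @dataclass(frozen=True)
    class BookingDraft:
        origin: str
        destination: str
        date: str
        seat: str = "any"
        meal: str = "standard"

        def amend(self, **changes) -> "BookingDraft":
            # Return a new draft with only the named fields changed; the
            # user never re-supplies information they already entered.
            return replace(self, **changes)


    draft = BookingDraft(origin="PHL", destination="LHR", date="2025-03-01")
    draft = draft.amend(meal="vegetarian")  # everything else is retained
    print(draft)
    ```

    The design choice here is that changing one thing is cheap for the system rather than expensive for the user: the draft, like the copy job above, carries the state of the whole flow of work.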

    Human-centered design is performed by people who talk to users, learn to think like users, and walk alongside them in their work. These designers not only prototype and evaluate their designs, but also listen to the feedback they are given. They value user input and see it as critical to their portfolio of design experience. In the design literature of the 1980s there was a lot of discussion of how user representatives would “go native” when participating in design projects, learning to think like designers and subsuming the interests of their fellow users in the process. In the 2020s, we need to see more IT designers going native: learning to think like users, reworking IT system designs to support how users work, and valuing the aspects of system design that users value. That is human-centered design.

    References

    Checkland, P. & Winter, M. C. (2000) “The relevance of soft systems thinking,” Human Resource Development International (3:3), pp. 411-417.

    Checkland, P. and Poulter, J. (2007) Learning For Action: A Short Definitive Account of Soft Systems Methodology, and its use for Practitioners, Teachers and Students, Wiley, UK

    Heidegger, M. (1962) Being and Time. New York, NY: Harper & Row.

    Sharp, H., Preece, J. & Rogers, Y. (2019) Interaction Design: Beyond Human-Computer Interaction 5th Edition, Wiley, UK

    Suchman, L. (1987) Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge, UK: Cambridge University Press.