Systems of Systems (SoS) Engineering: a view from the inside looking out

C.E. Siemieniuch & M.A. Sinclair

Loughborough University, UK

[email protected]

Copyright © 2012 by C.E. Siemieniuch & M.A. Sinclair. Published and used by INCOSE with permission.

Abstract. The paper provides a human factors-based perspective on System of Systems Engineering (SoSE), different from that usually encountered in the military and civilian arenas. This reflects the evident fact that the vast majority of Systems of Systems (SoS) are created, configured, controlled, supervised, maintained, upgraded and decommissioned by humans. The paper commences by outlining what an SoS feels like from the inside, and then goes on to outline a selection of issues of importance to SoS engineering and performance, stated from a human factors perspective. These include tipping points, incomplete knowledge, SoS immortality, complexity, and the importance of trustworthy behaviour within the SoS. Some design approaches to address these issues are then outlined.

Introduction

It is an interesting fact that when we define a system boundary, we typically exclude the overall infrastructure on which the system depends, or, if we do take it into account, we often assume it is constant and unchanging. Yet when one looks at the usual set of interfaces that a system has with its external environment, as shown in Fig. 1, this could be merely a convenience rather than a correct view; most times, we should engineer an individual System of Interest (SoI) from the perspective of its accommodating System of Systems (SoS).

The most important aspect of this was captured 2,500 years ago by Heracleitus (535-475 BCE): ‘You cannot step in the same river twice’ (more colloquially, ‘nothing stays the same’); the expected operational lifetime of the SoI will determine how important it is to consider the wider SoS in which the SoI is embedded. When one considers the range of SoS that, in various instantiations, have become critical to the operations of our societies and therefore, of necessity, immortal (examples are energy supply grids, transport, national security systems, national health care, public education and so on), the expectation that neither the component systems nor the SoS environment can or will remain the same becomes fundamental.

On shorter timescales, there is still much pressure for change: for many instantiations of SoS the mantra ‘Faster, Better, Cheaper’ has a strong effect. For example, in automotive manufacturing systems it has long been an expectation that costs will drop by 3% per annum (Touche 2003), funded by greater efficiencies within the system and by reduced waste, particularly of time. The result is an incremental stream of changes to the system, usually with the effect of removing resilience from the primary function(s) of the system. This can only work well in a stable process environment, which in turn requires that all the complexities, unexpected events, and disturbances of real life be addressed at the defined system boundaries and interfaces. This is where, in the orthodox systems engineering paradigm, we decide that stability lies in the systems beyond the boundary, and that we can ignore them. That may have been acceptable in the past, but with the growth of networks of interdependent systems around the globe, the increasing requirement for interoperability and the pervasion of these networks into our societies, it is true no longer. This means that there is, and will be, considerable human involvement in these SoS, owing to human capabilities in the application of knowledge and wisdom, and because humans are still the only agents capable of moral authority and responsibility. Humans will remain involved in strategy and policy; design, development and change; supervision, monitoring and control; and maintenance and logistical functions. Legally, humans are still responsible for the performance and behaviour of these SoS, and it will not have escaped attention that armies of people are employed in Call Centres, explaining and fixing the incivilities of various SoS in their interactions with the rest of humanity.
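To make the cumulative effect of such a target concrete, the following minimal illustration (our own arithmetic; the 3% figure is the one quoted above, while the ten-year horizon is hypothetical) shows how quickly the cost base is expected to shrink, and hence how relentless the resulting stream of incremental changes becomes:

# Illustrative only: compound effect of a 3% per-annum cost-reduction target.
rate = 0.03    # annual cost reduction expected of the system (figure quoted above)
years = 10     # hypothetical planning horizon
cost = 1.0     # normalised starting cost
for year in range(1, years + 1):
    cost *= (1.0 - rate)
    print(f"Year {year:2d}: cost index = {cost:.3f}")
# After 10 years the cost index is ~0.737, i.e. roughly a quarter of the original
# cost base has been engineered out, one incremental change at a time.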

 

 

 

Fig. 1 Representation of a system-of-interest, from a human factors perspective, to show a number of the external interfaces of the system. All of them imply that another system must exist with which to interface.

But the design, development and operation of SoS have some differences from systems design. By definition, SoS comprise federations of systems, where there is operational and managerial independence of the individual systems within the SoS, albeit there may be contractual and other arrangements between the organisations or enterprises owning the systems (Maier 1998; Fisher 2006; Dahmann and Baldwin 2008; Jamshidi 2009; Jamshidi 2009). As these authors have pointed out, in support of the assertions above, it is at the SoS interfaces where SoSE has its major effects.

In the rest of this paper, we discuss a few of the human-related issues in the engineering of SoS. It should be remembered that not all systems in an SoS are IT-centric; human-centred, organisational systems are often heavily involved.

A view from inside an SoS

What tends to be missing from discussions of SoS is the ‘view from the inside’: how humans within the SoS understand what it is that these SoS do. We present three examples below.

The first is from Intel:

"Our business model is one of very high risk: We dig a very big hole in the ground, spend three billion dollars to build a factory in it, which takes three years, to produce technology we haven't invented yet, to run products we haven't designed yet, for markets which don't exist. We do that two or three times a year." (P. Otellini, CEO Intel Corporation, BBC News, CES Las Vegas, 2008)

The second is a compilation of comments generated by the authors from several senior managers in the automotive industry, expanding on the Intel version above:

“In fifty years’ time, we will be delivering services to customers we do not know now, involving products we have not yet identified, made with materials yet to be invented, with processes we have yet to develop. We will do this in partnership with suppliers, some of whom will cease to exist, and all of which will be different by then. This will be accomplished by people we have not yet recruited, with money we are trying to earn now.”

Taken together, these quotations highlight two things: firstly, that wisdom, knowledge and experience are the most valuable commodities to carry forward into the future; and secondly, that given the uncertainties, there must be collaboration with people in other organizations to bring about a future SoS that is successful.

There is a well-hidden corollary to these quotations: if an organization is involved in an SoS, it is likely to be involved in other SoS in parallel, with the implication that the organization will have to manage a portfolio of involvements across a range of SoS, with the attendant management issues. This brings us nicely to the next quotation, which captures well how many managers and other professionals in SoS environments experience their work:

   

"For example, there was nothing I could do, I learned sometimes painfully, that did not have its own rhythms and pacings, pauses and accelerations, beginnings and endings. And time was not a matter of merely academic interest; it was central to whether I got anything done at all. Furthermore, there was the problem of the intrusiveness of events: things did not occur one at a time; no competence could be practiced in pristine singularity. Instead, at any moment I was flowing with the multiple, disjointed time streams of the various projects in which I was involved. ... The multiple time streams were, of course, not co-ordinated in space; they competed for my attention. Frequently there were three or four places where I was supposed to be at virtually the same time. The calendar was filled with contingencies. I (and even more my secretary) tried heroically to keep some harmony in this overscheduled life, but there was usually little margin for such 'errors' as delayed planes, unexpected visitors, summons from top administrators, long-winded faculty members, or, heaven forbid, any of those ills of the flesh and the psyche that accompany the manager's harried life. ... Everything was interactive. ... I simply had to learn to understand myself in a spatio-temporal field of relationships, flowing and shifting. It was a field of multiple players, each of whom had his or her own schedules, expectations of the rates at which things ought to proceed, and resistances to being sidetracked by other people's temporal perceptions and  priorities." (Vaill, P. B. (1998). The unspeakable texture of process wisdom. Organizational wisdom and executive courage. S. Srivastva and D. L. Cooperrider. San Francisco, Jossey-Bass: 25-39.)

This last quotation highlights the significance of time, and of the intrusion of events. Indirectly, it also embraces the point made earlier: if his academic institution is to deliver a smooth, efficient learning process for its students, then all the disturbances to this must be handled elsewhere, preferably at the interfaces, by people living his kind of professional life.

Relevant human/organizational issues within SoS

There are many of these; for this paper, we discuss a few of the more pervasive ones before presenting possible solutions for consideration.

Tipping points. One aspect lurking within the dynamics of SoS is the problem of ‘tipping points’ (Repenning, Goncalves et al. 2001; Rudolph and Repenning 2002), where the operations of parts of an SoS descend inexorably into a state of near-chaos; this is especially likely for SoS that exhibit close-coupling, as many do. One reason for these is quite subtle: an unfortunate temporal concatenation of events across an SoS may cause widespread failures, as in some large electricity grid failures (Andersson, Donalek et al. 2005). Another version of this is organisational; consider Fig. 2 below, from (Gover 1993).

   

 

Fig. 2 A lifecycle model from the IT industry, showing the gradual commoditisation of components. Of interest are the different organisational drivers across the lifecycle.  

If an SoS is composed of organisations at the four different stages of the lifecycle, we may expect that different organisational structures, policies, and ways of working will collide at the interfaces. Emergent behaviour is likely, including both delays and concatenated changes. Combined, these are likely to produce a significant tipping point into possible financial or operational failure, with all the attendant dangers this may present to the SoS owners, stakeholders or the public at large. It is well known that SoS engineering must concentrate on the interfaces; what this example indicates is that the interface is thick rather than thin, and necessarily must include the levels of strategy, policy, operations and transactions.

A different example is a recent significant slow-down in the supply of consumer IT devices, because a Japanese manufacturer of 70% of the world’s supply of the polyvinylidene fluoride used in making lithium-ion batteries was incapacitated by the 2011 tsunami (leMerle 2011). Other well-known examples include the disruption to the Toyota, Honda and Nissan production and procurement systems caused by the same tsunami.

It will be obvious that the occurrence and effects of these tipping points will be exacerbated by delays, both intrinsic delays occasioned by the time it takes for the system’s processes to happen, and extrinsic delays such as those imposed by the management’s structures, procedures and habits. Hence, provision for resilience in the design of an SoS is of great importance, though perhaps not just by piling further pressures on management to deliver this, as Vaill has hinted above.
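To make the role of delay concrete, the following minimal sketch (our own toy model with hypothetical numbers, not one taken from the cited authors) captures the flavour of close-coupled dynamics: overload breeds rework, and whether a temporary surge of work tips the system over depends largely on how quickly extra capacity is brought to bear:

# Illustrative toy model only (our own, with hypothetical numbers): a close-coupled
# workload loop in which errors/rework rise with the backlog, and extra capacity
# arrives only after a management response delay.
def simulate(response_delay, steps=60):
    backlog = 0.0
    base_capacity, surge_capacity = 12.0, 4.0
    for t in range(steps):
        inflow = 10.0 + (8.0 if t < 10 else 0.0)   # a temporary surge of work
        rework = 0.1 * backlog                      # overload breeds rework
        capacity = base_capacity + (surge_capacity if t >= response_delay else 0.0)
        backlog = max(0.0, backlog + inflow + rework - capacity)
    return backlog

for delay in (2, 8):
    print(f"response delay {delay}: backlog after 60 steps = {simulate(delay):.0f}")
# With a prompt response the surge is absorbed and the backlog returns to zero;
# with a slow response the rework loop outruns the added capacity and the backlog
# grows without recovering - a tipping point driven largely by delay.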

   

Perpetually working with incomplete knowledge. An important aspect of the design and management of SoS is the perennial fact of working with imperfect, incomplete knowledge. For reasons of confidentiality, security, and competitiveness, the design or configuration of an SoS will always be associated with incomplete knowledge and information flows, leading to emergent problems in operation, as illustrated in Fig. 3 below, taken from the domain of Fast-Moving Consumer Goods (FMCG). Competitive marketing campaigns (and other coups d’oeil) demand secrecy within the supply chain, exacerbating the usual demands for commercial confidentiality and resulting in unexpected disturbances for those systems and organizations not in the know, for which the requisite knowledge and information will not be available beforehand (though hindsight may help to explain what has happened and why).

 

Compounding the issue of incomplete knowledge is the evolution of individual systems within the SoS, often dictated by changes in circumstances of the owner organisation; while discussions should occur, large organisations may evolve their systems without much regard to the effects on other systems – the current financial crisis in the banking world being a good example.

Another issue in many SoS is the near-enshrinement of the mantra ‘Faster, better, cheaper’, as outlined above. While laudable (especially from the customer’s perspective), this guarantees that not only does the SoS have to accommodate changes to the environment outside its boundaries, it must also accommodate continual evolution of its component systems while maintaining an ever more precise, regulated manufacturing or production process, and full-time resources must be devoted to this effort. As an aside, the mantra can also introduce brittleness into systems by introducing too much close-coupling and by reducing manpower (which is where knowledge, wisdom, and the main capability for resilience reside).

When coupled with frequent demands for change, the implication of this is that SoS design is unlikely ever to be completed and, in the absence of full knowledge, adjustments will always be necessary. Furthermore, designers come and go, and knowledge conservation becomes an important issue. In fact, in this scenario it becomes unwise to believe at any time that you have sufficient knowledge and information, and hence resilience in the design approach is essential. The dictum ‘We should consider the user to be the designer’s on-site representative’ (Rasmussen and Goodstein 1986) is significant across the SoS, including what is currently known as ‘Customer Relationship Management’. Apart from its oft-stated benefits, it is the only way one can check one’s assumptions and knowledge about the reality of the SoS and the role of one’s SoI within it.

   

 

Fig. 3 Illustration of a supply chain in the Fast Moving Consumer Goods domain. At right are two competing supermarkets; in the middle is a major supplier to them; there is a tier 1 supplier and a consultant. The boxes indicate some of the issues affecting the SoS interfaces; information must be kept confidential, and warnings about changes of competitive advantage will be nearly non-existent. Nevertheless, there are opportunities for information leakage.

Fuzzy SoS boundaries. SoS boundaries can be fuzzy for four main reasons:

• Component systems and their owners within the SoS may have a short-term presence; they are there to perform a function and then depart. Consultants are an example. Are they part of the SoS or not, and if so, how does the SoS deal with their ingress and egress?

• It may be the case that nobody knows the full complement of who or which systems are actually participating in the SoS: emergency services SoS would be an example of this. Organisations owning a system within the SoS typically know their immediate neighbours, but few participating organisations beyond them.

• Even within a single organisation that owns a participating system, there may be confusion about the extent of that system boundary. Where there are interfaces with the rest of the SoS, the boundary is likely to be well-described. Elsewhere, this may not be the case; how often, for example, do system engineers consider the local State infrastructure that supports the system; the provision for waste removal, education, road transportation, emergency services and the like?

   

• Individuals within the SoS (who may have signed many security and confidentiality documents) may extend the boundaries in order to get their jobs done. Consider the Systems Engineer faced with an intractable problem and a looming deadline. That SE might discuss the problem with others within the SoS, but given no useful answer might make a few discreet inquiries of trusted colleagues through a professional organisation like INCOSE, maybe pushing out the boundaries with unknown effects.

While the fourth reason may not be serious (except, perhaps, for confidentiality), the first three imply a lack of knowledge and situational awareness within the SoS. This provides some rationale for the comment by a senior manager in the aerospace industry: ‘Stupidity is the second most expensive thing for an organisation. Ignorance comes first’.

Induced and intrinsic complexity in organizations. Encompassing all of the issues mentioned above is complexity, which we may define as:

‘The behavioural characteristics of the network of agents and relationships that make up the system of interest. These characteristics are not decomposable to individual elements or relationships’ (Siemieniuch and Sinclair 2005).

For an organisational system, Gregg (1996) has defined some attributes of complexity that make long-term behaviour unpredictable:

• Many agents, of different kinds (e.g. individuals, teams, etc; doing different tasks at different times)

• Some degree of behavioural autonomy for agents

• Multiple steady states for agents

• Interactions between agents in an environment (e.g. reaching design decisions with engineers at different sites)

• Lots of connections between agents (e.g. email, phone, fax)

• Communicating in parallel (e.g. engineers in a concurrent engineering environment)

• Effects of an evolving environment

• Effects of co-evolving agents

• Interactions between different goals within an agent

• Interactions between agents with different goals

• Language/culture differences

   

This set of organisational attributes is a good description of the working environment of most Systems Engineers, and as Gregg pointed out, even a sub-set of these is sufficient for emergent behaviour, with which Systems Engineers often have to contend. The same applies to the organisation.

However, it is possible to split organisational complexity into two parts: intrinsic and induced complexity. The former is the inescapable complexity necessary to achieve the goals of the enterprise system (if you would go to the moon and return, the system to accomplish this is necessarily complex). Induced complexity arises when the system’s management is not structured, staffed or operated appropriately, thus exacerbating the effects of Gregg’s attributes. Fig. 4 below illustrates this.

What can be done about intrinsic complexity? Not much; as (Woods and Hollnagel 2006) have said, ‘Complexity is conserved under transformation and translation’. However, there are some benefits that can accrue from translating complexity to other parts of the SoS. An example comes again from the FMCG domain: consider a global company whose factories in different regions each make a wide range of a particular product, and which are already running at only 3% down-time. It is very expensive to reduce this figure further. On the other hand, if the company reduces the range made in each factory, thereby reducing down-time, there is an increase in distribution complexity in delivering the same range of product to each region’s customers. But the latter is easier to manage, enabling the company to maximise its profits.
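A minimal worked illustration of this translation (our own, with entirely hypothetical numbers; only the 3% down-time figure comes from the text above) is:

# Illustrative only: hypothetical numbers for the trade-off described above
# (they are not data from the paper or from any company).
def annual_profit(downtime_frac, distribution_cost):
    capacity_hours = 8000.0     # hypothetical productive hours per factory per year
    margin_per_hour = 1000.0    # hypothetical contribution per productive hour
    productive = capacity_hours * (1.0 - downtime_frac)
    return productive * margin_per_hour - distribution_cost

# Each factory makes the full range: 3% down-time, simple local distribution.
full_range = annual_profit(downtime_frac=0.03, distribution_cost=50_000.0)
# Each factory makes a reduced range: fewer changeovers (say 1% down-time),
# but a more complex, more expensive distribution network.
reduced_range = annual_profit(downtime_frac=0.01, distribution_cost=120_000.0)

print(f"full range, per factory   : {full_range:,.0f}")
print(f"reduced range, per factory: {reduced_range:,.0f}")
# The complexity has not disappeared: it has been translated from manufacturing
# (changeover down-time) to distribution, where it is cheaper to manage.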

Fig. 4 Illustration of Intrinsic and Induced Complexity, and their overlap.

   

Induced complexity is actually quite common; it is often a product of unclear authorities, confused responsibilities, a lack of comprehensive training of staff, issues of organisational culture, lack of trust, and many other well-known ills. Because these are well known, there are any number of tools and consultants ready to apply them, though the solutions could be applied just as easily by the companies themselves, given a little wisdom. The emerging field of Enterprise Systems Engineering is of relevance here.

 

Information integrity and security considerations. There are several parts to this. Firstly, if we consider information to comprise data with meaning, it is possible that a database passed from one system to another may, for sound internal reasons, be conflated, elaborated, condensed, rearranged and/or clipped (thereby altering meaning), and then be made available to the rest of the SoS. Secondly, there is the issue of confidentiality, which may preclude the passage of useful information around the SoS (passing one competitor’s plans to another within the SoS being an example). Thirdly, there is the inadvertent disclosure of information that is intended to be secure, which may occur because of different implementations of security across the SoS, or because tiny bits happen to be disclosed in different parts of the SoS, again for good internal reasons, but which together amount to a significant breach. Most of these problems are human-initiated, and SoSE is currently weak in addressing them.
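As a minimal, hypothetical illustration of the first point (our own example; the records and the downstream system are invented), condensing a dataset for sound internal reasons can strip out exactly the meaning that another system in the SoS relies on:

# Illustrative only: hypothetical hourly order records held by one system.
hourly_orders = {
    "09:00": 40, "10:00": 45, "11:00": 350,   # an intra-day spike (e.g. a promotion)
    "12:00": 50, "13:00": 42, "14:00": 48,
}

# For sound internal reasons, the owning system condenses the data before
# sharing it: only the daily total is passed to the rest of the SoS.
daily_total = sum(hourly_orders.values())
print(f"daily total passed downstream: {daily_total}")

# A downstream planning system sees 575 orders spread over the day and plans
# level production; the fact that demand arrived as a spike - the detail that
# actually stresses logistics - has been lost in the condensation.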

Legacy issues. The classic example of these is the old, still-working software system that performs some vital function within an SoS, that is deemed too costly to replace, for which documentation is scarce, and whose workings nobody now understands. A more modern case is an SoS some of whose systems are no longer supported by their vendors; for example, ‘we observed an outsourced application with 120 COTS products, 46% of which were delivered in a vendor-unsupported state’ (Boehm 2006). From an enterprise systems perspective, there is a loud message here about knowledge conservation and distribution in order to maintain these systems in a viable operational condition.

Summary of the issues discussed above

There are many other issues that may impede SoS performance; however, a summary of those discussed in this paper implies the following:

• Many SoS are sufficiently important to society that they will be maintained for very long periods; effectively, they become immortal (lasting beyond the designer’s three score years and ten). Over that period of time, no system in the SoS will stay the same, and it is likely that over a 40-year period there will be close to 100% staff turnover. Since most SoS knowledge and wisdom is held in human heads, this has strong implications for on-going knowledge conservation and management (Siemieniuch and Sinclair 2001)

• The complexity inherent in most SoS, allied to the frequent processes of change within the SoS, implies that unexpected, emergent behaviour is likely to occur over any reasonable period of time. While some of this complexity is induced through the organisations that administer the SoS, and can therefore be ameliorated to a large extent, the intrinsic complexity cannot; it can only be translated for better functioning.

   

• SoS system dynamics, allied to emergent behaviour and unexpected environmental events, mean that ‘tipping points’ are a credible possibility. It will be necessary to engineer SoS for resilience, not just for efficiency. Immediate resilience comes from good design and implementation of the SoS, but deep resilience, responding to major upheavals, requires human wisdom and ingenuity allied to organisational competence. Inter-organisational interfaces will be thick, covering strategic, tactical and operational issues as well as policy and transactions, and will extend backwards into each organization.

• Commercial confidence, security, and respect for the ownership of information mean that there is seldom a complete flow of information around the SoS.

• The possibility that nobody within the SoS organisations knows the full extent of the SoS may have expensive consequences when some disturbance happens to the SoS.

• Knowledge conservation is a critical, continuous issue for any long-term SoS. But this alone is not enough; communication for mutual benefit across organizations is an important component of overall resilience and agility (though there may be necessary restrictions on this).

Addressing the issues raised above

All of the issues above involve human activity, either in their occurrence or in their control, amelioration or enhancement. Clearly, there is ample scope to apply some organizational SoS engineering capability in this domain, and we address some of these issues below.

Model-based design of the SoS

Firstly, there is the issue of necessarily-bounded knowledge for the SoS engineers. This happens for security and confidentiality reasons, with systems developed and optimised for internal usage. They may then be given a patch to enable them to communicate with the rest of the SoS, either through a standards-based internet interface, or by human-centred communication methods. Whatever the route, the net effect is that the pool of SoS system designers comprises distributed groups, with little knowledge of the SoS outside their own organisation, and even less about its potential evolution. The design of an efficient SoS, under these circumstances, is perhaps best achieved by adopting a model-based systems engineering (MBSE) approach, distributed across the SoS organizations, such as is common in software engineering (DOD 2008; Sprinkle, Eklund et al. 2009), but with particular emphasis on the embodiment of open standards, the clear, agreed specification of interfaces, and the conservation of knowledge. This MBSE approach necessarily must embrace the software systems, their interconnections, and the enterprise systems that own and operate them.

We would emphasise, in common with other authors (deMeyer, Loch et al. 2002; Williams 2005; Daw 2007; Alberts 2011), that SoS engineering starts with patterns and visions, not requirements, and hence the MBSE approach should accept that for much of the development of a given SoS, significant parts of the model will be held in human heads, and that IT tools do not give much recognition to this. Necessarily, too, the model-based approach must include corporate and engineering governance metrics to enable unwanted patterns of SoS behaviour to be made apparent and to be explored, particularly in relation to the three governance questions (a sketch of such a model and its governance checks follows the questions below):

• Are we doing the right things?

• Are we doing those things right?

• How do we know this?
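The sketch referred to above is our own illustration (the class names, fields and example systems are hypothetical and are not drawn from the DOD guide), intended only to show how agreed interface specifications, open standards and knowledge custodianship can be made visible and checkable across a distributed SoS model:

# Illustrative only: a toy, distributed SoS model with governance checks over its
# interfaces and knowledge custodianship. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Interface:
    provider: str
    consumer: str
    open_standard: Optional[str] = None   # agreed open standard, if any
    agreed_spec: bool = False             # has the specification been agreed by both owners?

@dataclass
class SoSModel:
    systems: dict = field(default_factory=dict)            # system -> owning organisation
    interfaces: list = field(default_factory=list)         # Interface records
    knowledge_owners: dict = field(default_factory=dict)   # system -> knowledge custodian

    def governance_findings(self):
        """Surface patterns the three governance questions are meant to expose."""
        findings = []
        for i in self.interfaces:
            if not i.agreed_spec:
                findings.append(f"No agreed specification: {i.provider} -> {i.consumer}")
            if i.open_standard is None:
                findings.append(f"No open standard declared: {i.provider} -> {i.consumer}")
        for system in self.systems:
            if system not in self.knowledge_owners:
                findings.append(f"No knowledge custodian recorded for {system}")
        return findings

# Hypothetical usage: two systems owned by different organisations.
model = SoSModel(
    systems={"OrderSystem": "Org A", "LogisticsSystem": "Org B"},
    interfaces=[
        Interface("OrderSystem", "LogisticsSystem", open_standard="EDIFACT", agreed_spec=True),
        Interface("LogisticsSystem", "OrderSystem"),
    ],
    knowledge_owners={"OrderSystem": "Org A systems team"},
)
for finding in model.governance_findings():
    print(finding)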

 

By taking a full approach, MBSE will do much to enable frequent re-design of the SoS and its systems throughout the life of the SoS, particularly when one considers the key elements of SoS engineering (DOD 2008), from an organisational perspective:

• Translating capability objectives into SoS requirements

• Understanding systems and relationships

• Assessing performance to capability objectives

• Developing and evolving an SoS architecture

• Monitoring and assessing changes

• Addressing requirements and solution options

• Orchestrating upgrades to SoS

Reducing the likelihood of tipping points. While there are numerous ways in which a tipping-point problem could happen in an SoS, an almost-universal exacerbating factor is delay. Delays can be intrinsic or extrinsic; the former are an artefact of systems design (and it is impressive how much effort is spent on addressing this issue in real-time software systems), whereas the latter are frequently a function of the organisation’s structure and of the allocation of responsibilities and authorities to make decisions. These organisational delays are again an area where Human Factors and other socio-technical professionals can make a big contribution in the area of enterprise systems engineering (Sinclair 2007; Hubbard, Siemieniuch et al. 2010; Hodgson, Hubbard et al. 2011; Henshaw, Morcos et al. 2012; Sinclair, Siemieniuch et al. 2012).

Resilience and agility. There is the problem of resilience and agility of the SoS, especially for long-lived, essentially immortal systems such as health care, governmental systems, transport and energy. This is best discussed by an example, taken from the defence domain and shown in Fig. 5 below, adapted from (Mackley 2008). Agility, then, becomes a question of how much time one has in which to adapt to changed circumstances. If there is less than a day, one can only use built-in adaptability. If there is more time, one can reach back down the supply chain that is provided by the SoS to make changes that enable a greater degree of adaptability. Resilience, also related to time, becomes a measure of the capability of the whole SoS to adapt (it is not included in Fig. 5, since it embraces the whole diagram).

Firstly, both resilience and agility can be increased if the activities in the cells in Fig. 5 can be moved upwards. Inspection of the cells in the diagram readily reveals that there is considerable human-technology involvement in most of the cells, pointing yet again to a human factors or socio-technical role in bringing about this upwards shift. Secondly, it is likely that distinct changes to the capability delivered by the SoS will be required frequently – recall Heracleitus. Consequently, the design process will have to be repeated equally frequently, perhaps extending to the swapping of organisations within the SoS (the average life expectancy of a company in Europe is 12.5 years (deGeus 2002), and for the ‘immortal’ systems mentioned above, this is not long enough).
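As a minimal illustration of this time-based view of agility (our own sketch; the ‘less than a day’ threshold comes from the text above, while the longer horizons and route names are hypothetical), the available adaptation routes can be expressed as a simple mapping from warning time to the depth of the SoS that can be engaged:

# Illustrative only: mapping warning time to the adaptation routes available.
# The one-day threshold comes from the text; the longer horizons are hypothetical.
def adaptation_routes(warning_time_days):
    routes = ["built-in adaptability of the deployed systems"]
    if warning_time_days >= 1:
        routes.append("reconfigure or re-task systems already in the SoS")
    if warning_time_days >= 30:
        routes.append("reach back down the supply chain for modified equipment")
    if warning_time_days >= 365:
        routes.append("re-design parts of the SoS (organisations, training, doctrine)")
    return routes

for days in (0.5, 10, 90, 1000):
    print(f"{days:>6} days warning -> {adaptation_routes(days)}")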

Fig. 5 The development process to deploy a drone capable of attacking a ship. Across the top of the matrix are the MOD ‘Lines of development’, all of which are components of a given capability. Down the side is a timescale to have the capability developed. To have the capability to attack a ship by a drone today, one must ‘backwards-chain’ along the arrows in the matrix to discover that the SoS that delivers this capability should have started at least a decade ago.

Possible socio-technical contributions to SoS design and operation

Consideration of the topics discussed above (and many others not included) may lead to the conclusion that Human Factors specialists (or cognate specialists) involved in the design of SoS are faced with a ‘wicked’ problem (Rittel and Webber 1973; Vicente, Burns et al. 1997), largely due to the bounds imposed on the distribution of knowledge across the SoS, and the need to predict the future quite far in advance. While the need for change may be driven by the SoS, change will occur at the level of the individual organisational systems involved in the SoS, and it is the people within these systems who will decide what changes will be made, how they will be implemented, and who will then implement them. In turn, this will depend on the ingenuity, skills, motivation, culture and readiness to change of these people (leaving aside all the other legal, financial, etc. issues). For all of these, Human Factors and other socio-technical professionals already have the capability to address them, at the human-machine, human-system and organisational levels; however, there may be a little less competence at the inter-organisational aspects. The latter include:

 

The specification of organisational interfaces. While contracts will delineate the structure of the interface through service-level and other agreements, their effective, efficient, and flexible implementation may need further exploration and, perhaps, standardisation. These specifications should include consideration of strategy, policy, organisation and transaction, with the emphasis on flexibility and partnerships rather than on organisational power and the creation of positions that can be argued in courts of law. As an example, in the 1990s a global supermarket chain refused to sign contracts with its suppliers; instead, it issued Letters of Intent, backed up with a handshake and subsequent behavioural integrity in the SoS: they did what they said, and they shared any accruing benefits. It is now one of the world’s largest chains.

The propagation of trust. Given the importance of trust between organisations (necessitated by the forced bounds on knowledge and information transfer), there is a strong requirement for trustworthy behaviour by individual organisations involved in an SoS. Integrity of performance engenders trust, and without trust there can be little expectation of good performance by one’s partners, with damaging effects for co-ordinated planning and operations. Trust comes initially from company values as enunciated and strongly supported by the company Board, through good governance within a strong organizational culture, devolved authority and accompanying responsibility, properly supported roles, well-understood processes, and excellent feedback (Siemieniuch, Sinclair et al. 1997; Weick, Sutcliffe et al. 1999; Siemieniuch and Sinclair 2000; Reason 2001; Weick and Sutcliffe 2001; Siemieniuch and Sinclair 2004). Organisations can be designed to embody these principles, albeit each will look different according to its circumstances, and each will change with time.

Knowledge conservation. Given the expectation that people will be flexible and adaptable to change in the SoS, there is a significant role for the discovery, formalisation, dissemination and conservation of knowledge and wisdom (the latter being the combination of knowledge and experience) within the SoS. While we are knowledgeable about the management of knowledge, the management of wisdom in organisations is much less well understood, though contributions do exist (Hammer 2002; Sinclair, Henshaw et al. 2009).

Ethical organizational behaviour. The points above carry an implication that ethical behaviour will be the norm. This becomes very important in the context of global SoS, and it is fortunate that some recent research (Haidt 2010; Hodgson, Hubbard et al. 2011) has explored aspects of this. Coupled with the notion of personal duties (Ross 1930, 2002), it should be possible to design roles, responsibilities and the allocation of authority to enhance the likelihood of ethical behaviour in the organisation, and hence engender trust more easily, with all the ensuing benefits for the SoS.

Conclusion

This paper has addressed the contribution that socio-technical knowledge could make to the growing domain of SoS engineering. But is this an important consideration for society? In the opinion of the authors, it is – especially when it is realised that just about every operating system in the world, including its supporting infrastructure, is an SoS. The steady growth of IT&T systems into almost all aspects of human life and society, and the interconnections between them that are occurring, mean that this is not an ephemeral topic and that, if only from a professional ethics perspective, it behoves the discipline to include it.

 

References

Alberts, D. S. 2011. The agility advantage. US DOD Command & Control Research Program.

Andersson, G., P. Donalek, et al. 2005. "Causes of the 2003 major grid blackouts in North America and Europe, and recommended means to Improve system dynamic performance." IEEE Transactions on Power Systems 20(4): 1922-1928.

Boehm, B. 2006. "Some future trends and implications for systems and software engineering processes." Systems Engineering 9(1): 1-19.

Dahmann, J. and K. Baldwin 2008. Understanding the Current State of US Defense Systems of Systems and the Implications for Systems Engineering. 2nd Annual IEEE Systems Conference. Montreal.

Daw, A. J. 2007. Keynote: On the wicked problem of defence acquisition. 7th AIAA Aviation Technology, Integration and Operations Conference: Challenges in Systems Engineering for Advanced Technology Programmes. Belfast, N.I., AIAA: 1-26.

deGeus, A. P. 2002. The living company: habits for survival in a turbulent business environment. Boston, MA, Harvard Business School Press.

deMeyer, A., C. H. Loch, et al. 2002. "Managing project uncertainty: from variation to chaos." Sloan Management Review 43(2): 60-67.

DOD, U. 2008. Systems engineering guide for systems of systems. Washington, DC, US Department of Defense, Office of the Deputy Under Secretary of Defense for Acquisition and Technology.

Fisher, D. A. 2006. An emergent perspective on interoperation in systems of systems, Carnegie-Mellon University.

Gover, J. E. 1993. "Analysis of US semiconductor collaboration." IEEE Transactions on Engineering Management 40(2): 104-113.

   

Gregg, D. 1996. Emerging challenges in business and manufacturing decision support. The Science of Business Process Analysis, ESRC Business Process Resource Centre, University of Warwick, Coventry, UK, ESRC Business Process Resource Centre.

Haidt, J. 2010. "The new science of morality." The new science of morality, part 1. A taste analogy in moral psychology: picking up where Hume left off Retrieved 20/09/2011.

Hammer, M. 2002. The Getting and Keeping Of Wisdom - Inter-Generational Knowledge Transfer in a Changing Public Service. Ottawa, Research Directorate, Public Service Commission of Canada.

Henshaw, M. J. D., M. S. Morcos, et al. 2012. "Identification of induced complexity in PSS enterprises." Journal of Enterprise Transformation.

Heracleitus 535-475 BCE. (cited in G. Davenport, 1979). Herakleitos and Diogenes. Bolinas, Grey Fox Press.

Hodgson, A., E.-M. Hubbard, et al. 2011. Culture and the performance of teams in complex systems. IEEE Conference on Systems of Systems Engineering. Albuquerque, USA.

Hubbard, E. M., C. E. Siemieniuch, et al. 2010. Working towards a holistic organizational systems model. Proceedings of the 5th IEEE International Conference on System of Systems Engineering (SoSE), June 2010, Loughborough, UK, IEEE.

Jamshidi, M., Ed. 2009. System of systems engineering - innovations for the 21st century, J. Wiley & Sons.

Jamshidi, M., Ed. 2009. Systems of systems engineering - principles and applications. Boca Raton, CRC Press.

leMerle, M. 2011. How to prepare for a black swan. Strategy+Business, Booz & Co. 64.

Mackley, T. 2008. Concepts of agility in Network Enabled Capability. Realising Network Enabled Capability. Oulton Hall, Leeds, UK, BAE Systems.

Maier, M. W. 1998. "Architecting principles for systems-of-systems." Systems Engineering 1(4): 267-284.

Rasmussen, J. and L. P. Goodstein 1986. Decision support in supervisory control. Analysis, design and evaluation of man-machine systems, Varese, Italy, 2nd IFAC/IFIP/IFORS/IEA Conference.

Reason, J. 2001. The dimensions of organisational resilience to operational hazards. British Airways Human Factors Conference: Enhancing Operational Effectiveness, No publisher.

Repenning, N. P., P. Goncalves, et al. 2001. "Past the Tipping Point: The Persistence of Firefighting in Product Development." California Management Review 43(4): 44-63.

Rittel, H. W. J. and M. M. Webber 1973. "Dilemmas in a general theory of planning." Policy Sciences 4: 155-169.

Ross, W. D. 1930, 2002. The right and the good. Oxford, UK, Oxford University Press (reprinted).

   

Rudolph, J. W. and N. P. Repenning 2002. "Disaster dynamics: understanding the role of quantity in organizational collapse." Administrative Science Quarterly 47(1): 1-30.

Siemieniuch, C. E. and M. A. Sinclair 2000. "Implications of the supply chain for role definitions in concurrent engineering." International Journal of Human Factors and Ergonomics in Manufacturing 10(3): 251-272.

———.   2001, 23 Nov.. "Organisational readiness for knowledge management within e-supply chains." Organisational readiness in e-supply Retrieved 12 Nov, 2001, from http://www.sck2001.com/.

———.   2004. "Organisational Readiness for Knowledge Management." International Journal of Operations & Production Management 24(1): 79-98.

———.   2005. The analysis of organisational processes. Evaluation of Human Work. J. R. Wilson and N. Corlett. Boca Raton, Florida, USA, CRC Press: 977 – 1008.

Siemieniuch, C. E., M. A. Sinclair, et al. 1997. Supply chain issues in the fast-moving consumer goods industry. 13th Triennial Congress of the International Ergonomics Association - IEA'97, Tampere, Finland, June 29 - July 4th 1997, Finnish Institute of Public Health.

Sinclair, M. A. 2007. "Ergonomics issues in future systems." Ergonomics 50(12): 1957 - 1986.

Sinclair, M. A., M. J. D. Henshaw, et al. 2009. Governance, Agility and Wisdom in the Capability Paradigm. Contemporary Ergonomics 2009. P. D. Bust. London, Taylor & Francis: 100 – 108.

Sinclair, M. A., C. E. Siemieniuch, et al. 2012. "The development of a tool to predict team performance." Applied Ergonomics 43(1): 176-183.

Sprinkle, J., J. M. Eklund, et al. 2009. "Model-based design: a report from the trenches of the DARPA Urban Challenge." Software and System Modeling 8: 551-566.

Touche, D. 2003. The road to world class manufacturing 2002. London: 56.

Vicente, K. J., C. M. Burns, et al. 1997. "Muddling through wicked design problems." Ergonomics in Design 5(1): 25-30.

Weick, K. E. and K. M. Sutcliffe 2001. Managing the unexpected : assuring high performance in an age of complexity. San Francisco, Jossey-Bass.

Weick, K. E., K. M. Sutcliffe, et al. 1999. " Organizing for high reliability: processes of collective mindfulness." Research in Organisational Behaviour 21: 23-81.

Williams, T. M. 2005. "Assessing and moving on from the dominant project management discourse in the light of project over-runs." IEEE Transactions on Engineering Management 52(4): 497-508.

Woods, D. D. and E. Hollnagel 2006. Joint cognitive systems: patterns in cognitive systems engineering. Basingstoke, Taylor & Francis.

   

Carys Siemieniuch is a Senior Lecturer in Systems Engineering in the School of Electrical, Electronic and Systems Engineering at Loughborough University. She has worked as a systems ergonomist for over 25 years in the manufacturing and aerospace domains (civilian and defence). As a PI/CoI on a range of EPSRC (8), EU (7) and MoD/Industry (9) funded projects, she has developed new understandings about: the management, capture and utilisation of tacit knowledge; allocation of function and systems design; the impact of cultural factors on system autonomy levels; enterprise system modelling; human and organisational performance; and emergent system behaviours. With European CREE professional registration, she is a registered EU ‘expert’ with evaluation, project reviewer and project management expertise within the Framework Programmes. She has over 100 refereed publications in the area of organisational and system design.

Murray Sinclair is now a Visiting Fellow at Loughborough University. He is a Systems Ergonomist of some 40 years standing, having been an academic member of Loughborough University since 1970. His interests have evolved from the understanding of organisational processes of manufacturing from the shopfloor, through manufacturing systems engineering to design processes and the management of knowledge. Over the last five years, due to the steady infiltration of information technology into society and its pervasiveness in the lives of individuals, his interests now include the assurance of ethical behaviour by autonomous and semi-autonomous systems, such as robots, healthcare systems and the like. All of this work has necessitated close involvement with industry, in the automotive domain and latterly in the aerospace, defence and nuclear safety domains.

 

