UPGRADE, Vol. XII, issue no. 5, December 2011 <http://cepis.org/upgrade/media/full_2011_51.pdf>

3 Editorial. CEPIS UPGRADE: A Proud Farewell — Nello Scarabottolo, President of CEPIS

ATI, Novática and CEPIS UPGRADE — Dídac López-Viñas, President of ATI

4 Presentation. Trends and Advances in Risk Management — Darren Dalcher

10 The Use of Bayes and Causal Modelling in Decision Making, Uncertainty and Risk — Norman Fenton and Martin Neil

22 Event Chain Methodology in Project Management — Michael Trumper and Lev Virine

34 Revisiting Managing and Modelling of Project Risk Dynamics - A System Dynamics-based Framework — Alexandre Rodrigues

41 Towards a New Perspective: Balancing Risk, Safety and Danger — Darren Dalcher

45 Managing Risk in Projects: What’s New? — David Hillson

48 Our Uncertain Future — David Cleden

55 The application of the ‘New Sciences’ to Risk and Project Management — David Hancock

59 Communicative Project Risk Management in IT Projects — Karel de Bakker

67 Decision-Making: A Dialogue between Project and Programme Environments — Manon Deguire

75 Decisions in an Uncertain World: Strategic Project Risk Appraisal — Elaine Harris

82 Selection of Project Alternatives while Considering Risks — Marta Fernández-Diego and Nolberto Munier

87 Project Governance — Ralf Müller

91 Five Steps to Enterprise Risk Management — Val Jonas

Vol. XII, issue No. 5, December 2011

CEPIS UPGRADE is the European Journal for the Informatics Professional, published bi-monthly at <http://cepis.org/upgrade>

Publisher
CEPIS UPGRADE is published by CEPIS (Council of European Professional Informatics Societies, <http://www.cepis.org/>), in cooperation with the Spanish CEPIS society ATI (Asociación de Técnicos de Informática, <http://www.ati.es/>) and its journal Novática

CEPIS UPGRADE monographs are published jointly with Novática, which publishes them in Spanish (full version printed; summary, abstracts and some articles online)

CEPIS UPGRADE was created in October 2000 by CEPIS and was first published by Novática and INFORMATIK/INFORMATIQUE, bimonthly journal of SVI/FSI (Swiss Federation of Professional Informatics Societies)

CEPIS UPGRADE is the anchor point for UPENET (UPGRADE European NETwork), the network of CEPIS member societies’ publications, which currently includes the following:
• inforeview, magazine from the Serbian CEPIS society JISA
• Informatica, journal from the Slovenian CEPIS society SDI
• Informatik-Spektrum, journal published by Springer Verlag on behalf of the CEPIS societies GI, Germany, and SI, Switzerland
• ITNOW, magazine published by Oxford University Press on behalf of the British CEPIS society BCS
• Mondo Digitale, digital journal from the Italian CEPIS society AICA
• Novática, journal from the Spanish CEPIS society ATI
• OCG Journal, journal from the Austrian CEPIS society OCG
• Pliroforiki, journal from the Cyprus CEPIS society CCS
• Tölvumál, journal from the Icelandic CEPIS society ISIP

Editorial Team
Chief Editor: Llorenç Pagés-Casas
Deputy Chief Editor: Rafael Fernández Calvo
Associate Editor: Fiona Fanning

Editorial Board
Prof. Nello Scarabottolo, CEPIS President
Prof. Wolffried Stucky, CEPIS Former President
Prof. Vasile Baltac, CEPIS Former President
Prof. Luis Fernández-Sanz, ATI (Spain)
Llorenç Pagés-Casas, ATI (Spain)
François Louis Nicolet, SI (Switzerland)
Roberto Carniel, ALSI – Tecnoteca (Italy)

UPENET Advisory Board
Dubravka Dukic (inforeview, Serbia)
Matjaz Gams (Informatica, Slovenia)
Hermann Engesser (Informatik-Spektrum, Germany and Switzerland)
Brian Runciman (ITNOW, United Kingdom)
Franco Filippazzi (Mondo Digitale, Italy)
Llorenç Pagés-Casas (Novática, Spain)
Veith Risak (OCG Journal, Austria)
Panicos Masouras (Pliroforiki, Cyprus)
Thorvardur Kári Ólafsson (Tölvumál, Iceland)
Rafael Fernández Calvo (Coordination)

English Language Editors: Mike Andersson, David Cash, Arthur Cook, Tracey Darch, Laura Davies, Nick Dunn, Rodney Fennemore, Hilary Green, Roger Harris, Jim Holder, Pat Moody.

Cover page designed by Concha Arias-Pérez
"Liberty with Risk" / © ATI 2011
Layout Design: François Louis Nicolet
Composition: Jorge Llácer-Gil de Ramales

Editorial correspondence: Llorenç Pagés-Casas <[email protected]>
Advertising correspondence: <[email protected]>
Subscriptions
If you wish to subscribe to CEPIS UPGRADE please send an email to [email protected] with ‘Subscribe to UPGRADE’ as the subject of the email, or follow the link ‘Subscribe to UPGRADE’ at <http://www.cepis.org/upgrade>

Copyright
© Novática 2011 (for the monograph)
© CEPIS 2011 (for the sections Editorial, UPENET and CEPIS News)
All rights reserved unless otherwise stated. Abstracting is permitted with credit to the source. For copying, reprint, or republication permission, contact the Editorial Team

The opinions expressed by the authors are their exclusive responsibility

ISSN 1684-5285

Monograph
Risk Management (published jointly with Novática*)
Guest Editor: Darren Dalcher


Farewell Edition

* This monograph will also be published in Spanish (full version printed; summary, abstracts, and some articles online) by Novática, journal of the Spanish CEPIS society ATI (Asociación de Técnicos de Informática) at <http://www.ati.es/novatica/>.


99 From inforeview (JISA, Serbia)
Information Society
Steve Jobs — Dragana Stojkovic

101 From Informatica (SDI, Slovenia)
Surveillance Systems
An Intelligent Indoor Surveillance System — Rok Piltaver, Erik Dovgan, and Matjaz Gams

111 From Informatik-Spektrum (GI, Germany, and SI, Switzerland)
Knowledge Representation
What’s New in Description Logics — Franz Baader

121 From ITNOW (BCS, United Kingdom)
Computer Science
The Future of Computer Science in Schools — Brian Runciman

124 From Mondo Digitale (AICA, Italy)
IT for Health
Neuroscience and ICT: Current and Future Scenarios — Gianluca Zaffiro and Fabio Babiloni

135 From Novática (ATI, Spain)
IT for Music
Katmus: Specific Application to support Assisted Music Transcription — Orlando García-Feal, Silvana Gómez-Meire, and David Olivieri

145 From Pliroforiki (CCS, Cyprus)
IT Security
Practical IT Security Education with Tele-Lab — Christian Willems, Orestis Tringides, and Christoph Meinel

153 Selected CEPIS News — Fiona Fanning


CEPIS NEWS

UPENET (UPGRADE European NETwork)



CEPIS UPGRADE Vol. XII, No. 5, December 2011 © CEPIS

Editorial


It was in the year 2000 that CEPIS made the decision to create "a bimonthly technical, independent, non-commercial freely distributed electronic publication", with the aim of gaining visibility among the large memberships of its affiliated societies and, beyond them, the wider ICT communities in the professional, business, academic and public administration sectors worldwide, while in parallel helping to enlarge and permanently update their professional skills and knowledge.

CEPIS UPGRADE was the name chosen for that journal, born with the initial cooperation and support of the societies ATI (Asociación de Técnicos de Informática, Spain) and SVI/FSI (Swiss Federation of Professional Informatics Societies), along with their respective publications, Novática and Informatik/Informatique; in the case of ATI and Novática, that cooperation and support have continued to this day.

Eleven years and more than 60 issues later, measurable facts show that CEPIS UPGRADE has achieved those goals: hundreds of thousands of visits to, and downloads from, the journal website at <http://www.cepis.org/upgrade>; presence in prestigious international indexes; references in many publications; citations in countless business, professional, academic and even political fora; and a newsletter with around 2,500 subscribers.

All these achievements must be duly stressed now that CEPIS has made the decision to discontinue CEPIS UPGRADE, because it is not failure or lack of results that dictated this extremely painful choice but the general economic climate. In our case, CEPIS has reached the conclusion that publishing a technical-professional journal is not a top priority today and that our resources should be dedicated to other projects and activities.

CEPIS is proud of its journal, and at the sad moment of distributing its farewell issue our most sincere acknowledgement and gratitude go to each and every one who has contributed to its success. Let me name a few of them: the above-mentioned societies ATI and SVI/FSI; Wolffried Stucky and François Louis Nicolet, who gave the initial impulse; the three Chief Editors who have skilfully and dedicatedly led the journal over these eleven years (the same François Louis Nicolet, Rafael Fernández Calvo and Llorenç Pagés-Casas); professionals in Spain, Belgium and Switzerland (especially Fiona Fanning, Jorge Llácer, Carol-Ann Kogelman, Pascale Schürman and Steve Turpin); plus the Chief Editors of the nine publications belonging to UPENET (UPGRADE European NETwork), set up in 2003 in order to increase the pan-European projection of the journal.

And, last but not least, many thanks also to the multitude of authors from Europe and other continents who have submitted their papers for review and publication, as well as to the Guest Editors of the monographs and our team of volunteer English-language editors. We cannot praise them all enough for their decisive and valuable collaboration.

Now let’s say farewell to CEPIS UPGRADE, but a really proud one!

Nello Scarabottolo
President of CEPIS

<http://www.cepis.org>

Note: A detailed history of CEPIS UPGRADE is available at <http://www.cepis.org/upgrade/files/iv-09-calvo.pdf>.

Editorial

ATI, Novática and CEPIS UPGRADE

The lifecycle of CEPIS UPGRADE has come to an end after eleven years. The decision has been taken by the governing bodies of CEPIS and is fully shared by the Board of ATI (Asociación de Técnicos de Informática), the Spanish society that has edited the journal on behalf of CEPIS from the very beginning.

ATI, a founding member of CEPIS which has participated in a large number of its projects and undertakings, is proud to have played a decisive role in CEPIS UPGRADE’s success by providing all its own human and material editorial resources through its journal Novática.

We must thank CEPIS for having given us the opportunity to be part of such an important publishing endeavour.

New projects and activities will undoubtedly be promoted by CEPIS and, as in the case of CEPIS UPGRADE, ATI will, as always, be available and willing to cooperate.

Dídac López-Viñas
President of ATI

<http://www.ati.es>



Risk Management


Presentation

Trends and Advances in Risk Management
Darren Dalcher

1 Introduction

Risks can be found in most human endeavours. They come from many sources and influence most participants. Increasingly, they play a part in defining and shaping activities, intentions and interpretations, thereby directly influencing the future. Accomplishing anything inevitably implies addressing risks. Within organisations and society at large, learning to deal with risk is therefore increasingly viewed as a key competence expected at all levels.

Practitioners in computing and information technology are at the forefront of many new developments. Modern society is characterised by powerful technology, instantaneous communication, rising complexity, tangled networks and unprecedented levels of interaction and participation. Devising new ways of integrating with modern society inevitably implies learning to co-exist with higher levels of risk, uncertainty and ignorance. Moreover, society engages in ever more demanding ventures whilst continuously requiring performance and delivery levels that are better, faster and cheaper. Developers, managers, sponsors, senior executives and stakeholders are thus faced with escalating levels of risk.

In order to accommodate and address risk we have built a variety of mechanisms, approaches and structures that we utilise at different levels and in different situations. This special issue brings together a collection of reflections, insights and experiences from leading experts working at the forefront of risk assessment, analysis, evaluation, management and communication. The contributions come from a variety of domains, addressing the myriad of tools, perspectives and new approaches required for making sense of risk at different levels within organisations. Many of the papers report on new ideas and advances, thereby offering novel perspectives and approaches for improving the management of risk. The papers are grounded in both research and practice and therefore deliver insights that summarise the state of the discipline whilst indicating avenues for improvement and placing new trends in the context of risk management and leadership in an organisational setting.

2 Structure and Contents of the Monograph

The thirteen papers selected for this issue showcase four perspectives in terms of the trends identified within the risk management domain. The first three papers report on new tools and approaches that can be used to identify complex dependencies, support decision making and develop improved capability for uncertainty modelling. The following four papers look at new ways of interacting with risk management and at the development of new perspectives and lenses for addressing uncertainty and the emergence of risk leadership, thereby encouraging a new understanding of the concept of risk. The next two papers report on results from empirical studies related to differences in the perception of decisions between managers of projects and programmes, and on the difference that risk management can make in avoiding IT project failures. The final four papers look at the development of decision making and risk management infrastructure by addressing areas such as strategic project risk appraisal, project governance, selection of alternative projects at the portfolio level and the development of enterprise risk management.

The Guest Editor

Darren Dalcher – PhD (Lond), HonFAPM, FBCS, CITP, FCMI – is a Professor of Software Project Management at Middlesex University, UK, and Visiting Professor in Computer Science at the University of Iceland. He is the founder and Director of the National Centre for Project Management. He has been named by the Association for Project Management, APM, as one of the top 10 "movers and shapers" in project management and has also been voted Project Magazine’s Academic of the Year for his contribution in "integrating and weaving academic work with practice". Following industrial and consultancy experience in managing IT projects, Professor Dalcher gained his PhD in Software Engineering from King’s College, University of London, UK. Professor Dalcher is active in numerous international committees, steering groups and editorial boards. He is heavily involved in organising international conferences, and has delivered many keynote addresses and tutorials. He has written over 150 papers and book chapters on project management and software engineering.

He is Editor-in-Chief of Software Process Improvement and Practice, an international journal focusing on capability, maturity, growth and improvement. He is the editor of a major new book series, Advances in Project Management, published by Gower Publishing. His research interests are wide and include many aspects of project management. He works with many major industrial and commercial organisations and government bodies in the UK and beyond. Professor Dalcher is an invited Honorary Fellow of the Association for Project Management (APM), a Chartered Fellow of the British Computer Society (BCS), a Fellow of the Chartered Management Institute, and a Member of the Project Management Institute, the Academy of Management, the IEEE and the ACM. He received an Honorary Fellowship of the APM, "a prestigious honour bestowed only on those who have made outstanding contributions to project management", at the 2011 APM Awards Evening. <[email protected]>

Many risk calculations, especially in banking and insurance, are derived from statistical models operating on carefully collected banks of historical data. The other typical approach relies on developing risk registers and quantifying the exposure to risk by identifying and estimating the probability and the loss impact. The paper by Fenton and Neil encourages practitioners to look beyond simple causal explanations available through identification of correlation or the somewhat ‘accidental’ figures developed through registers. In order to obtain a true measure of risk, practitioners must therefore develop a more holistic perspective that embraces a causal view of the dependencies and interconnectedness of events. Bayes networks have long been used to depict relationships and conditional dependencies. The authors show how risks can be modelled as event chains with a number of possible outcomes, enabling the integration of risks from multiple perspectives and the decomposition of a risk problem into chains of interrelated events. As a result, control and mitigation measures may become more obvious through the process of modelling risks and the identification of relationships and dependencies that extend beyond simple causal explanations.
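The causal, Bayesian view described above can be sketched in a few lines. The numbers below are purely illustrative assumptions, not figures from the Fenton and Neil paper: a single hypothetical cause (a critical design flaw) conditions a single observable effect (an integration failure), and Bayes' rule updates the prior once the effect is observed.

```python
def posterior(prior, likelihood, false_alarm):
    """P(cause | effect) for a two-node network cause -> effect,
    where likelihood = P(effect | cause) and
    false_alarm = P(effect | no cause)."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical figures: prior belief that a critical design flaw exists,
# and how strongly an observed integration failure depends on it.
p_flaw = 0.05
p_fail_given_flaw = 0.90
p_fail_given_no_flaw = 0.10

print(f"P(flaw | failure) = "
      f"{posterior(p_flaw, p_fail_given_flaw, p_fail_given_no_flaw):.3f}")
```

Even with a 90% detection likelihood the posterior stays near 0.32 because the prior is low, which is exactly the kind of dependency that a register of raw frequencies tends to miss.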

Project planning is initiated during the earlier part of a project, when uncertainty is at its greatest. The resulting schedules often fail to capture the full detail of reality. Moreover, they fail to account for change. The paper by Trumper and Virine proposes Event Chain methodology as an approach for modelling uncertainty and evaluating the impacts of events on project schedules. Event Chain methodology is informed by ideas from other disciplines and has been used as a network analysis technique in project management. Tools such as event chain diagrams visualise the complex relationships and interdependencies between events. The collection of tools and diagrams supports the planning, scheduling and monitoring of projects, allowing management to visualise some of the issues and take corrective action. Event Chain methodology takes into account factors such as delays, chains and complex dynamics that are not acknowledged by other scheduling methods. It attempts to overcome human and knowledge limitations and enables schedules to be updated in the light of new information that emerges throughout the development process.
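As a rough illustration of the event-chain idea (not the authors' tooling), a risk event during one task can trigger a chained event in a later task; sampling over the chain then yields a schedule estimate that a static plan would miss. The tasks, probabilities and delays below are invented for the example.

```python
import random

# Hypothetical three-task schedule; durations in days.
BASE = {"design": 10, "build": 20, "test": 8}

def simulate_once(rng):
    """One Monte Carlo pass: a 'supplier late' event may hit 'build'
    and, as a chained event, trigger extra rework in 'test'."""
    total = sum(BASE.values())
    if rng.random() < 0.30:        # risk event fires during build
        total += 5                 # direct delay to build
        if rng.random() < 0.50:    # chained event: rework in test
            total += 3
    return total

def expected_duration(n=50_000, seed=42):
    """Average schedule length over n sampled scenarios."""
    rng = random.Random(seed)
    return sum(simulate_once(rng) for _ in range(n)) / n

# Analytically: 38 + 0.30*5 + 0.30*0.50*3 = 39.95 days on average.
print(f"Expected schedule length: {expected_duration():.2f} days")
```

The chained delay only appears when its parent event fires, so the distribution of outcomes is wider and more skewed than any single-point plan suggests.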

Complex relationships and interdependencies between causes and effects require more sophisticated methods of modelling the impacts and influences between factors. Moreover, the dynamics emerging from uncertain knowledge necessitate a deeper understanding of causal interactions. The paper by Rodrigues highlights the use of system dynamics to capture some of the closed chains of feedback operating within uncertain environments. Feedback loops and impact diagrams can show the effects of positive feedback cycles that can "snowball" alongside other non-linear effects. Dynamic modelling provides an effective tool for identifying emergent risks resulting from complex interactions, interconnected chains of causes and events, and chains of feedback. It encourages the adoption of holistic solutions by investigating the full conditions that play a part in a given interaction and identifying the full chain of events leading to a risk. Moreover, as the model includes multiple variables, it becomes possible to assess the range of impacts on all aspects and objectives and to determine the interactions of risks, events and causes in order to derive a better understanding of the true complexity and behaviour of the risks.
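A toy model can make the reinforcing "snowball" loop concrete. The sketch below is not from the Rodrigues paper; the stocks, flows and coefficients are invented: outstanding work creates schedule pressure, pressure raises the error rate, and flawed work returns to the backlog as rework, slowing progress.

```python
def simulate(pressure_gain, steps=10):
    """Step-by-step simulation of a reinforcing feedback loop:
    outstanding work -> schedule pressure -> error rate -> rework
    -> more outstanding work."""
    work_remaining = 100.0            # stock: tasks outstanding
    trajectory = []
    for _ in range(steps):
        pressure = work_remaining / 100.0
        error_rate = 0.05 + 0.1 * pressure_gain * pressure
        completed = 10.0              # gross progress per period
        rework = completed * error_rate   # flawed work returns
        work_remaining = max(0.0, work_remaining - completed + rework)
        trajectory.append(round(work_remaining, 2))
    return trajectory

# Compare no feedback against a strong reinforcing loop.
print(simulate(pressure_gain=0.0))
print(simulate(pressure_gain=0.8))
```

Comparing the two trajectories shows the emergent effect the paragraph describes: the stronger the feedback gain, the more slowly the backlog drains, even though the per-period effort is identical.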

Developing the right strategy for addressing risk depends on the context. Different approaches will appeal depending on the specific circumstances and on the knowledge and uncertainty associated with a situation. Dalcher contends that risk is often associated with danger, and makes use of the idea of safety to identify different positions on a spectrum with regard to our approach to risk. At one extreme, anticipation relies on developing full knowledge of the circumstances in advance; addressing risks can then proceed in a reasonably systematic manner through quantification and adjustments. The alternative is to develop sufficient flexibility to enable the system to adopt a resilient stance that allows it to be ready to respond to uncertainties, as they emerge, in a more dynamic fashion. This is done by searching for the next acceptable state and allowing the system to evolve and grow through experimentation. While the ideal position is somewhere between the two extremes, organisations can try to balance the different perspectives in a more dynamic fashion. The adoption of alternative metaphors may also help us to think about risk management in new ways. We often acknowledge that risk is all about perspective. If managers focus on safety as a resource, they can develop an alternative representation of the impacts of risk. The dynamic management of safety, or well-being, can thus benefit from a change of perspective that allows managers to engage with opportunities, success and the future in new ways.

Managing risk is closely integrated with project management. However, despite the awareness of risk and the recognition of the role of risk management in successfully delivering projects, there is still evidence that risk is not being viewed as an integrated perspective that extends beyond processes. Indeed, the management of risk is not a precise and well-defined science: it is an art that relies on


attitudes, perceptions, expectations, preferences, influences, biases, stakeholders and perspectives. The paper by Hillson looks at how risk is managed in projects. Focusing on risks within a project may ignore the risk that the overall project poses to the organisation, perhaps at a portfolio or programme level. The actual process of managing risks is often flawed, as some of the links and review points are missing. Moreover, insufficient attention has been paid to the human component in risk assessment. Overall, the process required for managing risks demands a more dynamic approach, responsive to learning and change. Revisiting our current processes and rethinking our approach can serve to improve our engagement with risk, thereby improving the outcomes of projects.

The management of uncertainty, as opposed to risk, offers new challenges. The impact of uncertainty often defers decisions and delays actions as managers attempt to figure out their options. While risks can be viewed as the known unknowns, uncertainty is concerned with the unknown unknowns that are not susceptible to analysis and assessment. Increasingly, organisations allocate additional contingency resources for other things that we do not know about. The paper by Cleden contends that the management of uncertainty requires a completely different approach. Uncertainties cannot be analysed and formulated. Managing project uncertainty depends on developing an understanding of the life cycle of uncertainty. Projects exist in a continual state of dynamic tension, with the accumulation of uncertainties contributing to pushing the project away from its expected trajectory. Managers endeavour to act swiftly to correct the deviations and must therefore apply a range of strategies required to stabilise the project. Uncertainties result from complex dynamics which will often defy organised attempts at careful planning. The solution is to adapt and restructure in a flexible and resilient fashion that will allow the project to benefit from the uncertainty. Small adjustments will thereby allow projects to improve and adjust whilst responding favourably to the conditions of uncertainty.

Project managers often have to deal with novel, one-of-a-kind, unfocused and complex situations that can be characterised as ill-structured. To reflect the open-ended, interconnected, social perspective, planners and designers talk of wicked problems. Such problems tend to be ill-defined and rely upon much elusive political judgement for resolution. The paper by Hancock points out that projects are not tame, instead displaying chaotic, messy and wicked characteristics. Behavioural and dynamic complexities co-exist and interact, confounding decision makers. Applying simplistic, sequential resolution processes is simply inadequate for messy problems. Problems cannot be solved in isolation; they require conceptual, systemic and social resolution. Moreover, solutions are likely to be good enough at best and will require stakeholder participation and engagement. The direct implication for tackling uncertainty and addressing complexity is that the managing-risks mindset needs to evolve into a risk leadership perspective. Such a perspective would look to guide, learn and adapt to new situations. Different events, outcomes and behaviours would require adjustments, and the risk process needs to adapt in order to overcome major political issues. Addressing the new uncertainties requires a move away from controlling risk towards a negotiated flexibility that accommodates the disorder and unpredictability inherent in many complex project environments.

Risk management is often proposed as a solution to the high failure rate of IT projects. However, the literature is at best inconclusive about the contribution of risk management to project success. The paper by de Bakker reports on a detailed literature review which identified only anecdotal evidence to this effect. A further analysis confirms that risk management needs to be considered in social terms, given the interactive nature of the process and the limited knowledge that exists about the project and the desired outcomes. In the following stage, a collection of case studies identified the activity of risk identification as a crucial step contributing to success, as viewed by all involved stakeholders. It would appear that the action, understanding and reflection generated during that phase make recognisable contributions as identified by the relevant stakeholders. Risk reporting is likewise credited with generating an impact. An experiment with 53 project groups suggests that those that carried out a risk identification and discussed the results performed significantly better than those that did not. These groups also seemed to be more positive about their project and its result. The research suggests that it is the exchange and interaction that make people more aware of the issues. It also helps in forming the expectations of the different stakeholder groups. The discussion also has inevitable side effects, such as changing people’s views about probabilities and values. Nonetheless, the act of sharing, discussing and deliberating appears to be



crucial in forming a better understanding of the issues and their scale and magnitude.

The long-held assumption of utilising linear sequences in order to address problems, guide projects and make decisions has contributed to the perception of project and risk management as engineering or technical domains. Some of the softer aspects related to the human side of interaction have been neglected over the years. Deguire points out that to accommodate complexity the softer aspects of human interaction need to be taken into account. Indeed, problem solving requires reflection, interaction and deliberation. Given that problems and decisions are addressed at the project management level and, in some organisations, also at the programme management level, and that their approaches to solving problems require deliberation and reflection at different levels of granularity, it is interesting to contrast the perceptions and expectations of managers in these domains. In contrast with project managers, programme managers appear to favour inductive processes. The difference might relate to the need to deliver outcomes and benefits, rather than outputs and products. As the level of complexity rises, decisions become more context-related and less mechanistic. Decisions made by programme managers may relate to making choices about specific projects and determining wider direction, and thus compel managers to engage with the problem and its context. Indeed, the need to define more of the assumptions in a wider context forces deeper and wider consideration, involving people, preferences, context and organisational issues.

Early choices need to be made about selecting the right projects, committing resources and maintaining balanced portfolios and programmes. These decisions are taken at an early stage under conditions of uncertainty and can be viewed as strategic project decisions. The project appraisal process and the decision-making behaviour that accompanies it clearly influence the resulting project. The paper by Harris explores the strategic-level risks encountered by managers in different types of projects. This is achieved by developing a project typology identifying eight major types of strategic-level projects and their typical characteristics. It provides a rare link between strategic-level appraisal and risk management by focusing on the common risks shared by each type. The strategic investment appraisal process proposed in the work further supports the implementation of effective decision making, ranging from idea generation and opportunity identification through preliminary assumptions to the findings of the post-audit review. Overall, managers can be guided towards implementing a strategy that is better suited to the context of their project, thereby enabling the development of a more flexible and adaptable response. Identification of risks at an early stage enables better decision making when uncertainty is at its height.

The choice of the most suitable project is often subject to financial, technical, environmental or geographical constraints. Choices often have to be made at the project portfolio level to select the most viable or useful approach. Alternatively, even when a project has been agreed in principle, there is still a need to determine the most suitable method for delivering the benefits. The paper by Fernández-Diego and Munier offers a linear programming method to support the choice of a particular approach and to quantify the risks relevant to each of the options. The approach allows the decision maker to maximise on the basis of particular threats (or benefits) and to balance various factors. The use of linear programming in project management for quantifying values and measuring constraints is relatively new.

Large corporate failures in the last decade have raised awareness of the need for organisational governance functions to oversee the effectiveness and integrity of decision making in organisations. Governance spans the entire scope of corporate activity, extending from strategic aspects and their ethical implications to the execution of projects and tasks. It provides the mechanisms, frameworks and reference points for self-regulation. Project governance is rapidly becoming a major area of interest for many organisations and is the topic of the paper by Müller. Governance sets the boundaries for project management action by defining the objectives, providing the means to achieve them and evaluating and controlling progress. The orientation of the organisation, in terms of being shareholder or stakeholder oriented, and the control focus, on outcome or behaviour, play a key part in identifying the most suitable governance paradigm, which can range from conformist, through agile pragmatist, to versatile artist. The paradigm in turn can shape the approach of the organisation to development, the processes applied and the overall orientation and structure. The governance of project management plays a part in directing the governance paradigm, which guides the governance of portfolios, programmes and projects. This helps to reduce the risk of conflicts and inconsistencies and supports the achievement of organisational goals.

Focusing only on operational risks related to a specific implementation project is insufficient. Risk relates to, and impacts, organisational concerns connected with the survival, development and growth of an organisation. Specific projects will incur individual risks. They will also contribute to the organisation's risk and may impact other areas and efforts. The paper by Jonas introduces Enterprise Risk

"The thirteen papers selected for the issue showcase four perspectives in terms of the trends identified within the risk management domain"


8 CEPIS UPGRADE Vol. XII, No. 5, December 2011 © Novática

Risk Management

Farewell Edition

Management as a wider framework used by the entire business to assess the overall exposure to risk, and the organisational ability to make timely and well-informed decisions. The paper looks at the five steps required to implement a simple and effective Enterprise Risk Management framework. The approach encourages horizontal integration of organisational risk, allowing different units to become aware of the potential impacts of initiatives in other areas on their own future, targets and systems. The normal expectation is for vertical integration, where guidance and instructions are passed downwards and information is cascaded upwards. However, the cross-functional perspective allows integration and sharing across different functional units. Vertical management chains can be used to support leadership and provide the basis for improved decision making through enterprise-wide reporting. The required culture change is from risk management to managing risk. Facilitating the shift requires people to look ahead and make risk-focused decisions that will benefit their organisations. It also requires the support and reward mechanisms to recognise and support such a shift.

There are some common themes that run through the papers in this monograph. Most modern undertakings involve people: processes cannot ignore the human element and focus on computational steps alone, and therefore a greater attention to subjective perceptions, stakeholders and expectations pervades many of the articles. The context of risk is also crucial. Most authors refer to complex dynamics and interactions. It would appear that our projects are becoming increasingly more complex and that the risks we grapple with increasingly involve technical, social and environmental impacts. An unprecedented level of uncertainty features in many of the contributions. The direction advocated in many of the papers requires a growing recognition of the dynamics involved in interactions, of the need to lead and guide, of the holistic and systemic aspects of solving problems, of the need to adapt and respond, and of a need to adopt a more strategic, enterprise-wide view of situations.

3 Looking ahead

Risk management appears to be an active area for researchers and practitioners. It is encouraging to see such a range of views and perspectives and to hear about the advances being proposed. New work in the areas of decision making, uncertainty, complexity, problem solving, enterprise risk management and governance will continue to revitalise risk management knowledge, skills and competences. Risk management has progressed in the last 25 years, but it appears that the new challenges and the focus on organisations, enterprises and wider systems will add new ideas and insights. In this issue leading researchers and practitioners have surveyed the development of ideas, perspectives and concepts within risk management and given us a glimpse of the potential solutions. The journey from risk management towards the wider management of risk, opportunity and uncertainty feels worthwhile. While there is still a long way to go, the journey seems to be both promising and exciting.

Acknowledgements

Sincere thanks must be extended to all the authors for their contribution to this special issue. The UPGRADE Editorial Team must also be mentioned, in particular the Chief Editor, Llorenç Pagés, for his help and support in producing this issue.

"While there is still a long way to go, the journey seems to be both promising and exciting"

Useful References on "Risk Management"

In addition to the materials referenced by the authors in their articles, we offer the following ones for those who wish to dig deeper into the topics covered by the monograph.

Books

J. Adams. Risk, UCL Press, 1995.
D. Apgar. Risk Intelligence, Harvard Business School Press, 2006.
P.L. Bernstein. Against the Gods: The Remarkable Story of Risk, Wiley, 1998.
B.W. Boehm. Software Risk Management, IEEE Computer Society Press, 1989.
R.N. Charette. Software Engineering Risk Analysis and Management, McGraw Hill, 1989.
D. Cleden. Managing Project Uncertainty, Gower, 2009.
D. Cooper, S. Grey, G. Raymond, P. Walker. Managing Risk in Large Projects and Complex Procurements, Wiley, 2005.
T. DeMarco. Waltzing with Bears, Dorset House, 2003.
N. Fenton, M. Neil. Risk Assessment and Decision Analysis with Bayesian Networks, CRC Press, 2012.
G. Gigerenzer. Reckoning with Risk, Penguin Books, 2003.
E. Hall. Managing Risks: Methods for Software Systems Development, Addison Wesley, 1998.
D. Hancock. Tame, Messy and Wicked Risk Leadership, Gower, 2010.


E. Harris. Strategic Project Risk Appraisal and Management, Gower, 2009.
D. Hillson, P. Simon. Practical Project Risk Management: The ATOM Methodology, Management Concepts, 2007.
D. Hillson. Managing Risk in Projects, Gower, 2009.
C. Jones. Assessment and Control of Software Risks, Prentice Hall, 1994.
M. Modarres. Risk Analysis in Engineering: Techniques, Tools and Trends, Taylor and Francis, 2006.
R. Müller. Project Governance, Gower, 2009.
M. Ould. Managing Software Quality and Business Risk, Wiley, 1999.
P.G. Smith, G.M. Merritt. Proactive Risk Management: Controlling Uncertainty in Product Development, Productivity Press, 2002.
N.N. Taleb. The Black Swan: The Impact of the Highly Improbable, Random House, 2007.
S. Ward, C. Chapman. How to Manage Project Opportunity and Risk, 3rd edition, John Wiley, 2011.
G. Westerman, R. Hunter. IT Risk: Turning Business Threats into Competitive Advantage, Harvard Business School Press, 2007.

Articles and Papers

H. Barki, S. Rivard, J. Talbot. Toward an Assessment of Software Development Risk, Journal of Management Information Systems, 10(2), 203-225, 1993.
C.B. Chapman. Key Points of Contention in Framing Assumptions for Risk and Uncertainty Management, International Journal of Project Management, 24(4), 303-313, 2006.
F.M. Dedolph. The Neglected Management Activity: Software Risk Management, Bell Labs Technical Journal, 8(3), 91-95, 2003.
A. De Meyer, C.H. Loch, M.T. Pich. Managing Project Uncertainty: From Variation to Chaos, MIT Sloan Management Review, 59-67, 2002.
R.E. Fairley. Risk Management for Software Projects, IEEE Software, 11(3), 57-67, 1994.
R.E. Fairley. Software Risk Management Glossary, IEEE Software, 22(3), 101, 2005.
D. Gotterbarn, S. Rogerson. Responsible Risk Analysis for Software Development: Creating the Software Development Impact Statement, Communications of the Association for Information Systems, 15, 730-750, 2005.
S.J. Huang, W.M. Han. Exploring the Relationship between Software Project Duration and Risk Exposure: A Cluster Analysis, Journal of Information and Management, 45(3), 175-182, 2008.
J. Jiang, G. Klein. Risks to Different Aspects of System Success, Information and Management, 36(5), 264-272, 1999.
J.J. Jiang, G. Klein, S.P.J. Wu, T.P. Liang. The Relation of Requirements Uncertainty and Stakeholder Perception Gaps to Project Management Performance, The Journal of Systems and Software, 82(5), 801-808, 2009.
M. Kajko-Mattsson, J. Nyfjord. State of Software Risk Management Practice, IAENG International Journal of Computer Science, 35(4), 451-462, 2008.
M. Keil, L. Wallace, D. Turk, G. Dixon-Randall, U. Nulden. An Investigation of Risk Perception and Risk Propensity on the Decision to Continue a Software Development Project, The Journal of Systems and Software, 53(2), 145-157, 2000.
T.A. Longstaff, C. Chittister, R. Pethia, Y.Y. Haimes. Are We Forgetting the Risks of Information Technology? IEEE Computer, 33(12), 43-51, 2000.
S. Pender. Managing Incomplete Knowledge: Why Risk Management is not Sufficient, International Journal of Project Management, 19(1), 79-87, 2001.
O. Perminova, M. Gustaffson, K. Wikstrom. Defining Uncertainty in Projects: A New Perspective, International Journal of Project Management, 26(1), 73-79, 2008.
S. Pfleeger. Risky Business: What we have Yet to Learn About Risk Management, Journal of Systems and Software, 53(3), 265-273, 2000.
J. Ropponen, K. Lyytinen. Components of Software Development Risk: How to Address Them? A Project Manager Survey, IEEE Transactions on Software Engineering, 26(2), 98-112, 2000.
L. Sarigiannidis, P. Chatzoglou. Software Development Project Risk Management: A New Conceptual Framework, Journal of Software Engineering and Applications (JSEA), 4(5), 293-305, 2011.
R. Schmidt, K. Lyytinen, M. Keil, P. Cule. Identifying Software Project Risks: An International Delphi Study, Journal of Management Information Systems, 17(4), 5-36, 2001.
L. Wallace, M. Keil. Software Project Risks and their Effects on Project Outcomes, Communications of the ACM, 47(4), 68-73, 2004.
L. Wallace, M. Keil, A. Rai. Understanding Software Project Risk: A Cluster Analysis, Journal of Information and Management, 42(1), 115-125, 2004.

Web Sites

<http://www.best-management-practice.com/Risk-Management-MoR/>
<http://www.computerweekly.com/feature/Risk-Management-Software-Essential-Guide>
<http://www.riskworld.com/>
<http://www.riskworld.com/websites/webfiles/ws5aa015.htm>
Directory of risk management websites: <http://www.riskworld.com/websites/webfiles/ws00aa009.htm>
Risk management journals: <http://www.riskworld.com/software/sw5sw001.htm>


Keywords: Bayes, Bayesian Networks, Causal Models, Risk.

1 Introduction

The 2008-10 credit crisis brought misery to millions around the world, but it at least raised awareness of the need for improved methods of risk assessment. The armies of analysts and statisticians employed by banks and government agencies had failed to predict either the event or its scale until far too late. Yet the methods that could have worked – and which are the subject of this paper – were largely ignored. Moreover, the same methods have the potential to transform risk analysis and decision making in all walks of life. For example:

· Medical: Imagine you are responsible for diagnosing a condition and for prescribing one of a number of possible treatments. You have some background information about the patient (some of which is objective, like age and number of previous operations, but some is subjective, like 'overweight' and 'prone to stress'); you also have some prior information about the prevalence of different possible conditions (for example, bronchitis may be ten times more likely than cancer). You run some diagnostic tests about which you have some information of the accuracy (such as the chances of false negative and false positive outcomes). You also have various bits of information about the success rates of the different possible treatments and their side effects. On the basis of all this information how do you arrive at a decision of which treatment pathway to take? And how would you justify that decision if something went wrong?

· Legal: Anybody involved in a legal case (before or

The Use of Bayes and Causal Modelling in Decision Making, Uncertainty and Risk

Norman Fenton and Martin Neil

The most sophisticated commonly used methods of risk assessment (used especially in the financial sector) involve building statistical models from historical data. Yet such approaches are inadequate when risks are rare or novel because there is insufficient relevant data. Less sophisticated commonly used methods of risk assessment, such as risk registers, make better use of expert judgement but fail to provide adequate quantification of risk. Neither the data-driven nor the risk register approaches are able to model dependencies between different risk factors. Causal probabilistic models (called Bayesian networks) that are based on Bayesian inference provide a potential solution to all of these problems. Such models can capture the complex interdependencies between risk factors and can effectively combine data with expert judgement. The resulting models provide rigorous risk quantification as well as genuine decision support for risk management.

Authors

Norman Fenton is a Professor in Risk Information Management at Queen Mary University of London, United Kingdom, and also CEO of Agena, a company that specialises in risk management for critical systems. His current work on quantitative risk assessment focuses on using Bayesian networks. Norman's experience in risk assessment covers a wide range of application domains such as software project risk, legal reasoning (he has been an expert witness in major criminal and civil cases), medical decision-making, vehicle reliability, embedded software, transport systems, and financial services. Norman has published over 130 articles and 5 books on these subjects, and his company Agena has been building Bayesian Net-based decision support systems for a range of major clients in support of these applications. <[email protected]>

Martin Neil is Professor in Computer Science and Statistics at the School of Electronic Engineering and Computer Science, Queen Mary, University of London, United Kingdom. He is also a joint founder and Chief Technology Officer of Agena Ltd and is Visiting Professor in the Faculty of Engineering and Physical Sciences, University of Surrey, United Kingdom. Martin has over twenty years experience in academic research, teaching, consulting, systems development and project management and has published or presented over 70 papers in refereed journals and at major conferences. His interests cover Bayesian modeling and/or risk quantification in diverse areas: operational risk in finance, systems and design reliability (including software), software project risk, decision support, simulation, AI and statistical learning. He earned a BSc in Mathematics, a PhD in Statistics and Software Metrics and is a Chartered Engineer. <[email protected]>

"The 2008-10 credit crisis brought misery to millions around the world, but it at least raised awareness of the need for improved methods of risk assessment"


during a trial) will see many pieces of evidence. Some of the evidence favours the prosecution hypothesis of guilty and some of the evidence favours the defence hypothesis of innocence. Some of the evidence is statistical (such as the match probability of a DNA sample) and some is purely subjective, such as a character witness statement. It is your duty to combine the value of all of this evidence either to determine if the case should proceed to trial or to arrive at a probability ('beyond reasonable doubt') of innocence. How would you arrive at a decision?

· Safety: A transport service (such as a rail network or an air traffic control centre) is continually striving to improve safety, but must nevertheless ensure that any proposed improvements are cost effective and do not degrade efficiency. There are a range of alternative competing proposals for safety improvement, which depend on many different aspects of the current infrastructure (for example, in the case of an air traffic control centre alternatives may include new radar, new collision avoidance detection devices, or improved air traffic management staff training). How do you determine the 'best' alternative taking into account not just cost but also impact on safety and efficiency of the overall system? How would you justify any such decision to a team of government auditors?

· Financial: A bank needs sufficient liquid capital readily available in the event of exceptionally poor performance, either from credit or market risk events, or from catastrophic operational failures of the type that brought down Barings in 1995 and almost brought down Société Générale in 2007. It therefore has to calculate and justify a capital allocation that properly reflects its 'value at risk'. Ideally this calculation needs to take account of a multitude of current financial indicators, but given the scarcity of previous catastrophic failures, it is also necessary to consider a range of subjective factors such as the quality of controls in place within the bank. How can all of this information be combined to determine the real value at risk in a way that is acceptable to the regulatory authorities and shareholders?

· Reliability: The success or failure of major new products and systems often depends on their reliability, as experienced by end users. Whether it is a high-end digital TV, a software operating system, or a complex military vehicle, like an armoured vehicle, too many faults in the delivered product can lead to financial disaster for the producing company or even a failed military mission including loss of life. Hence, pre-release testing of such systems is critical. But no system is ever perfect and a perfect system delivered after a competitor gets to the market first may be worthless. So how do you determine when a system is 'good enough' for release, or how much more testing is needed? You may have hard data in the form of a sequence of test results, but this has to be considered along with subjective data about the quality of testing and the realism of the test environment.

What is common about all of the above problems is that a 'gut-feel' decision based on doing all the reasoning 'in your head' or on the back of an envelope is fundamentally inadequate and increasingly unacceptable. Nor can we base our decision on purely statistical data of 'previous' instances, since in each case the 'risk' we are trying to calculate is essentially unique in many aspects. To deal with these kinds of problems consistently and effectively we need a rigorous method of quantifying uncertainty that enables us to combine data with expert judgement. Bayesian probability, which we introduce in Section 2, is such an approach. We also explain how Bayesian probability combined with causal models (Bayesian networks) enables us to factor in causal relationships and dependencies. In Section 3 we review standard statistical and other approaches to risk assessment, and argue that a proper causal approach based on Bayesian networks is needed in critical cases.

2 Bayes Theorem and Bayesian Networks

At their heart, all of the problems identified in Section 1 incorporate the basic causal structure shown in Figure 1.

There is some unknown hypothesis H about which we wish to assess the uncertainty and make some decision. Does the patient have the particular disease? Is the defendant guilty of the crime? Will the system fail within a given period of time? Is a capital allocation of 5% going to be sufficient to cover operational losses in the next financial year?

Consciously or unconsciously we start with some (unconditional) prior belief about H (for example, 'there is a 1 in a 1000 chance this person has the disease'). Then we update our prior belief about H once we observe evidence

Figure 1: Causal View of Evidence.

"'Gut-feel' decision based on doing all the reasoning 'in your head' or on the back of an envelope is fundamentally inadequate and increasingly unacceptable"


E (for example, depending on the outcome of a test our belief about H being true might increase or decrease). This updating takes account of the likelihood of the evidence, which is the chance of seeing the evidence E if H is true.

When done formally this type of reasoning is called Bayesian inference, named after Thomas Bayes who determined the necessary calculations for it in 1763. Formally, we start with a prior probability P(H) for the hypothesis H. The likelihood, for which we also have prior knowledge, is formally the conditional probability of E given H, which we write as P(E|H).

Bayes's theorem provides the correct formula for updating our prior belief about H in the light of observing E. In other words Bayes calculates P(H|E) in terms of P(H) and P(E|H). Specifically:

$$P(H|E) = \frac{P(E|H)\,P(H)}{P(E)} = \frac{P(E|H)\,P(H)}{P(E|H)\,P(H) + P(E|\text{not }H)\,P(\text{not }H)}$$

Example 1: Assume one in a thousand people has a particular disease H. Then:

P(H) = 0.001, so P(not H) = 0.999

Also assume a test to detect the disease has 100% sensitivity (i.e. no false negatives) and 95% specificity (meaning 5% false positives). Then if E represents the Boolean variable "Test positive for the disease", we have:

P(E | not H) = 0.05
P(E | H) = 1

Now suppose a randomly selected person tests positive. What is the probability that the person actually has the disease? By Bayes Theorem this is:

$$P(H|E) = \frac{P(E|H)\,P(H)}{P(E|H)\,P(H) + P(E|\text{not }H)\,P(\text{not }H)} = \frac{1 \times 0.001}{1 \times 0.001 + 0.05 \times 0.999} \approx 0.01963$$

So there is a less than 2% chance that a person testing positive actually has the disease.
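The calculation in Example 1 can be made explicit in a few lines of code. This is an illustrative sketch using the assumptions stated in the text; the function name and parameter names are ours, not from the paper:

```python
# Bayes theorem for a diagnostic test, as in Example 1.
# posterior() is an illustrative helper, not code from the paper.

def posterior(prior, sensitivity, specificity):
    """P(H | positive test): probability of disease given a positive result."""
    p_e_given_h = sensitivity          # P(E | H); 1.0 means no false negatives
    p_e_given_not_h = 1 - specificity  # P(E | not H): the false positive rate
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# 1-in-1000 prevalence, 100% sensitivity, 95% specificity:
print(posterior(prior=0.001, sensitivity=1.0, specificity=0.95))  # ≈ 0.0196
```

Varying the prior shows how strongly the prevalence drives the result: with a prior of 0.1 instead of 0.001 the same test yields a posterior of about 0.69.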

Bayes theorem has been used for many years in numerous applications ranging from insurance premium calculations [1], through to web-based personalisation (such as with Google and Amazon). Many of the applications pre-date modern computers (see, e.g. [2] for an account of the crucial role of Bayes theorem in code breaking during World War 2).

However, while Bayes theorem is the only rational way of revising beliefs in the light of observing new evidence, it is not easily understood by people without a statistical/mathematical background. Moreover, the results of Bayesian calculations can appear, at first sight, as counter-intuitive.

Indeed, in a classic study [3] when Harvard Medical School staff and students were asked to calculate the probability of the patient having the disease (using the exact assumptions stated in Example 1) most gave the wildly incorrect answer of 95% instead of the correct answer of less than 2%. The potential implications of such incorrect 'probabilistic risk assessment' are frightening. In many cases, lay people only accept Bayes theorem as being 'correct' and are able to reason correctly, when the information is presented in alternative graphical ways, such as using event trees and frequencies (see [4] and [5] for a comprehensive investigation of these issues). But these alternative presentation techniques do not scale up to more complex problems.

If Bayes theorem is difficult for lay people to compute and understand in the case of a single hypothesis and piece of evidence (as in Figure 1), the difficulties are obviously compounded when there are multiple related hypotheses and evidence as in the example of Figure 2.

As in Figure 1 the nodes in Figure 2 represent variables (which may be known or unknown) and the arcs represent causal (or influential) relationships. Once we have relevant prior and conditional probabilities associated with each variable (such as the examples shown in Figure 3) the model is called a Bayesian network (BN).

The BN in Figure 2 is intended to model the problem of diagnosing diseases (TB, Cancer, Bronchitis) in patients attending a chest clinic. Patients may have symptoms (like dyspnoea – shortness of breath) and can be sent for diagnostic tests (X-ray); there may also be underlying causal

Figure 2: Bayesian Network for Diagnosing Disease.

"Bayesian probability is a rigorous method of quantifying uncertainty that enables us to combine data with expert judgement"


Figure 3: Node Probability Table (NPT) Examples (probability tables for "Visit to Asia?" and "Bronchitis?").

Figure 4: Reasoning within the Bayesian Network. (a) Prior beliefs point to bronchitis as most likely. (b) Patient is 'non-smoker' experiencing dyspnoea (shortness of breath): strengthens belief in bronchitis. (c) Positive x-ray result increases probability of TB and cancer but bronchitis still most likely. (d) Visit to Asia makes TB most likely now.

factors that influence certain diseases more than others (such as smoking, visit to Asia).

To use Bayesian inference properly in this type of network necessarily involves multiple applications of Bayes Theorem in which evidence is 'propagated' throughout. This process is complex and quickly becomes infeasible when there are many nodes and/or nodes with multiple states. This complexity is the reason why, despite its known benefits, there was for many years little appetite to use Bayesian inference to solve real-world decision and risk problems. Fortunately, due to breakthroughs in the late 1980s that produced efficient calculation algorithms [2][6], there are now widely available tools such as [7] that enable anybody to do the Bayesian calculations without ever having to understand, or even look at, a mathematical formula. These developments were the catalyst for an explosion of interest in BNs. Using such a tool we can do the kind of powerful reasoning shown in Figure 4.

Specifically:

· With the prior assumptions alone (Figure 4a) Bayes


theorem computes what are called the prior marginal probabilities for the different disease nodes (note that we did not 'specify' these probabilities – they are computed automatically; what we specified were the conditional probabilities of these diseases given the various states of their parent nodes). So, before any evidence is entered the most likely disease is bronchitis (45%).

· When we enter evidence about a particular patient the probabilities for all of the unknown variables get updated by the Bayesian inference. So (in Figure 4b), once we enter the evidence that the patient has dyspnoea and is a non-smoker, our belief in bronchitis being the most likely disease increases (75%).

· If a subsequent X-ray test is positive (Figure 4c) our belief in both TB (26%) and cancer (25%) is raised but bronchitis is still the most likely (57%).

· However, if we now discover that the patient visited Asia (Figure 4d) we overturn our belief in bronchitis in favour of TB (63%).

Note that we can enter any number of observations anywhere in the BN and update the marginal probabilities of all the unobserved variables. As the above example demonstrates, this can yield some exceptionally powerful analyses that are simply not possible using other types of reasoning and classical statistical analysis methods.
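The propagation described above can be sketched at toy scale by brute-force enumeration of a joint distribution. The three-node fragment below (Smoker → Bronchitis → Dyspnoea) and all of its probability values are our own illustrative assumptions; they are not the NPTs of the article's chest-clinic model:

```python
# Enumeration-based inference in a tiny BN fragment:
# Smoker -> Bronchitis -> Dyspnoea. All CPT values are invented
# for illustration; they are not the values used in Figures 3-4.

P_S = {True: 0.5, False: 0.5}            # P(Smoker)
P_B_given_S = {True: 0.60, False: 0.30}  # P(Bronchitis=true | Smoker)
P_D_given_B = {True: 0.80, False: 0.10}  # P(Dyspnoea=true | Bronchitis)

def joint(s, b, d):
    """P(S=s, B=b, D=d) via the chain rule along the arcs."""
    pb = P_B_given_S[s] if b else 1 - P_B_given_S[s]
    pd = P_D_given_B[b] if d else 1 - P_D_given_B[b]
    return P_S[s] * pb * pd

def p_bronchitis(smoker, dyspnoea):
    """P(Bronchitis=true | evidence) by enumerating the joint distribution."""
    num = joint(smoker, True, dyspnoea)
    return num / (num + joint(smoker, False, dyspnoea))

# Entering the evidence 'non-smoker with dyspnoea' raises belief in
# bronchitis from its conditional prior of 0.30 to about 0.77:
print(p_bronchitis(smoker=False, dyspnoea=True))
```

Real tools replace this exponential enumeration with the efficient propagation algorithms mentioned above, but the result of the calculation is the same.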

In particular, BNs offer the following benefits:

· Explicitly model causal factors
· Reason from effect to cause and vice versa
· Overturn previous beliefs in the light of new evidence (also called 'explaining away')
· Make predictions with incomplete data
· Combine diverse types of evidence including both subjective beliefs and objective data
· Arrive at decisions based on visible auditable reasoning (unlike black-box modelling techniques there are no "hidden" variables and the inference mechanism is based on a long-established theorem)

With the advent of the BN algorithms and associated tools, it is therefore no surprise that BNs have been used in a range of applications that were not previously possible with Bayes Theorem alone. A comprehensive (and easily accessible) overview of BN applications, with special emphasis on their use in risk assessment, can be found in [8].

It is important to recognise that the core intellectual overhead in using the BN approach is in defining the model structure and the NPTs – the actual Bayesian calculations can and must always be carried out using a tool. However, while these tools enable large-scale BNs to be executed efficiently, most provide little or no support for users to actually build large-scale BNs, nor to interact with them easily. Beyond a graphical interface for building the structure, BN-builders are left to struggle with the following kinds of practical problems that combine to create a barrier to the more widespread use of BNs:

- Eliciting and completing the probabilities in very large NPTs manually (e.g. for a node with 5 states having three parents, each with 5 states, the NPT requires 625 entries);
- Dealing with very large graphs that contain similar, but slightly different, "patterns" of structure;
- Handling continuous, as well as discrete, variables.

Fortunately, recent algorithm and tool developments (also described in [8]) have gone a long way to addressing these problems and may lead to a 'second wave' of widespread BN applications. But before BNs are used more widely in critical risk assessment and decision making, there needs to be a fundamental cultural shift away from the current standard approaches to risk assessment, which we address next.

3 From Statistical Models and Risk Registers to Causal Models

3.1 Prediction based on Correlation is not Risk Assessment

Standard statistical approaches to risk assessment seek

Table 1: Fatal Automobile Crashes per Month.

Month       Total fatal crashes   Average monthly temperature (F)
January     297                   17.0
February    280                   18.0
March       267                   29.0
April       350                   43.0
May         328                   55.0
June        386                   65.0
July        419                   70.0
August      410                   68.0
September   331                   59.0
October     356                   48.0
November    326                   37.0
December    311                   22.0



CEPIS UPGRADE Vol. XII, No. 5, December 2011 15© Novática

Risk Management

Farewell Edition

Figure 5: Scatterplot of Temperature against Road Fatalities (each Dot represents a Month).

Figure 6: Causal Model for Fatal Crashes.

to establish hypotheses from relationships discovered in data. Suppose we are interested, for example, in the risk of fatal automobile crashes. Table 1 gives the number of crashes resulting in fatalities in the USA in 2008, broken down by month (source: US National Highway Traffic Safety Administration). It also gives the average monthly temperature.

We plot the fatalities and temperature data in a scatterplot, as shown in Figure 5.

There seems to be a clear relationship between temperature and fatalities: fatalities increase as the temperature increases. Indeed, using the standard statistical tools of correlation and p-values, statisticians would accept the hypothesis of a relationship as 'highly significant' (the correlation coefficient here is approximately 0.869 and it comfortably passes the criteria for a p-value of 0.01).

However, in addition to serious concerns about the use of p-values generally (as described comprehensively in [6]), there is an inevitable temptation arising from such results to infer causal links such as, in this case, higher temperatures cause more fatalities. Even though any introductory statistics course teaches that correlation is not causation, the regression equation is typically used for prediction (e.g. in this case the equation relating N to T is used to predict that at 80F we might expect to see 415 fatal crashes per month).
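The correlation coefficient and the 80F prediction can be reproduced directly from the Table 1 data with an ordinary least-squares fit; the following sketch uses plain Python with no statistics library:

```python
# Monthly data transcribed from Table 1 (temperature in F, fatal crashes).
temps   = [17, 18, 29, 43, 55, 65, 70, 68, 59, 48, 37, 22]
crashes = [297, 280, 267, 350, 328, 386, 419, 410, 331, 356, 326, 311]

n = len(temps)
mt = sum(temps) / n
mc = sum(crashes) / n

# Sums of squares and cross-products about the means
sxy = sum((t - mt) * (c - mc) for t, c in zip(temps, crashes))
sxx = sum((t - mt) ** 2 for t in temps)
syy = sum((c - mc) ** 2 for c in crashes)

r = sxy / (sxx * syy) ** 0.5        # Pearson correlation, ~0.869
slope = sxy / sxx                    # ordinary least-squares fit
intercept = mc - slope * mt
pred_80 = intercept + slope * 80     # ~415 fatal crashes predicted at 80F

print(round(r, 3), round(pred_80))
```

The fit is mechanically correct, which is precisely the point of the discussion that follows: the arithmetic says nothing about whether the prediction is a sensible basis for managing risk.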

But there is a grave danger of confusing prediction with risk assessment. For risk assessment and management the regression model is useless, because it provides no explanatory power at all. In fact, from a risk perspective this model would provide irrational, and potentially dangerous, information: it would suggest that if you want to minimise your chances of dying in an automobile crash you should do your driving when the highways are at their most dangerous, in winter.

One obvious improvement to the model, if the data is available, is to factor in the number of miles travelled (i.e. journeys made). But there are other underlying causal and influential factors that might do much to explain the apparently strange statistical observations and provide better insights into risk. With some common sense and careful reflection we can recognise the following:

- Temperature influences the highway conditions (which will be worse as temperature decreases).
- Temperature also influences the number of journeys made; people generally make more journeys in spring and summer and will generally drive less when weather conditions are bad.
- When the highway conditions are bad people tend to reduce their speed and drive more slowly. So highway conditions influence speed.
- The actual number of crashes is influenced not just by the number of journeys, but also by the speed. If relatively few people are driving, and taking more care, we might expect fewer fatal crashes than we would otherwise experience.

The influence of these factors is shown in Figure 6.

The crucial message here is that the model no longer involves a simple single causal explanation; instead it combines the statistical information available in a database (the 'objective' factors) with other causal 'subjective' factors derived from careful reflection. These factors now interact in a non-linear way that helps us to arrive at an explanation for the observed results. Behaviour, such as our natural caution to drive slower when faced with poor road conditions, leads to lower accident rates (people are known to adapt to the perception of risk by tuning the risk to tolerable levels; this is formally referred to as risk homeostasis). Conversely, if we insist on driving fast in poor road conditions then, irrespective of the temperature, the risk of an accident increases, and so the model is able to capture our intuitive beliefs that were contradicted by the counterintuitive results from the simple regression model.

The role played in the causal model by driving speed reflects human behaviour. The fact that the data on the average speed of automobile drivers was not available in a database explains why this variable, despite its apparent obviousness, did not appear in the statistical regression model. The situation whereby a statistical model is based only on available data, rather than on reality, is called "conditioning on the data". This enhances convenience but at the cost of accuracy.

By accepting the statistical model we are asked to defy our senses and experience and actively ignore the role unobserved factors play. In fact, we cannot even explain the results without recourse to factors that do not appear in the database. This is a key point: with causal models we seek to dig deeper behind and underneath the data to explore richer relationships missing from over-simplistic statistical models. In doing so we gain insights into how best to control risk and uncertainty. The regression model, based on the idea that we can predict automobile crash fatalities based on temperature, fails to answer the substantial question: how can we control or influence behaviour to reduce fatalities? This at least is achievable; control of weather is not.

3.2 Risk Registers do not help quantify Risk

While statistical models based on historical data represent one end of a spectrum of sophistication for risk assessment, at the other end is the commonly used idea of a 'risk register'. In this approach there is no need for past data; in considering the risks of a new project, risk managers typically prepare a list of 'risks' that could be things like:

- Some key people you were depending on become unavailable
- A piece of technology you were depending on fails
- You run out of funds or time

The very act of listing and then prioritising risks means that, mentally at least, risk managers are making a decision about which risks are the biggest. Most standard texts on risk propose decomposing each risk into two components:

- ‘Probability’ (or likelihood) of the risk

Figure 7: Standard Impact-based Risk Measure.


An Example: Meteor Strike Alarm in the Film "Armageddon"

By destroying the meteor in the film "Armageddon", Bruce Willis saved the world. Both the chance of the meteor strike and the consequences of such a strike were so high that nothing much else mattered except to try to prevent the strike. In popular terminology, what the world was confronting was a truly massive 'risk'. But if the NASA scientists in the film had measured the size of the risk using the formula in Figure 7 they would have discovered such a measure was irrational, and it certainly would not have explained to Bruce Willis and his crew why their mission made sense. Specifically:

- Cannot get the Probability number (for meteor strikes Earth). According to the NASA scientists in the film the meteor was on a direct collision course with Earth. Does that make it a certainty (i.e. a 100% chance) of striking Earth? Clearly not, because if it was then there would have been no point in sending Bruce Willis and his crew up in the space shuttle. The probability of the meteor striking Earth is conditional on a number of control events (like intervening to destroy the meteor) and trigger events (like being on a collision course with Earth). It makes no sense to assign a direct probability without considering the events it is conditional on. In general it makes no sense (and would in any case be too difficult) for a risk manager to give the unconditional probability of every 'risk' irrespective of relevant controls and triggers. This is especially significant when there are, for example, controls that have never been used before (like destroying the meteor with a nuclear explosion).

- Cannot get the Impact number (for meteor striking Earth). Just as it makes little sense to attempt to assign an (unconditional) probability to the event "Meteor strikes Earth", so it makes little sense to assign an (unconditional) number to the impact of the meteor striking. Apart from the obvious question "impact on what?", we cannot say what the impact is without considering the possible mitigating events, such as getting people underground and as far away as possible from the impact zone.

- Risk score is meaningless. Even if we could get round the two problems above, what exactly does the resulting number mean? Suppose the (conditional) probability of the strike is 0.95 and, on a scale of 1 to 10, the impact of the strike is 10 (even accounting for mitigants). The meteor 'risk' is 9.5, which is a number close to the highest possible 10. But it does not measure anything in a meaningful sense.

- It does not tell us what we really need to know. What we really need to know is the probability, given our current state of knowledge, that there will be massive loss of life.

- ‘Impact’ (or loss) the risk can cause

The most common way to measure each risk is to multiply the probability of the risk (however you happen to measure that) by the impact of the risk (however you happen to measure that), as in Figure 7.

The resulting number is the 'size' of the risk; it is based on analogous 'utility' measures. This type of risk measure is quite useful for prioritising risks (the bigger the number the 'greater' the risk) but it is normally impractical and can be irrational when applied blindly. We are not claiming that this formulation is wrong. Rather, we argue that it is normally not sufficient for decision-making.

One immediate problem with the risk measure of Figure 7 is that, normally, you cannot directly get the numbers you need to calculate the risk without recourse to a much more detailed analysis of the variables involved in the situation at hand.

In addition to the problem of measuring the size of each individual risk in isolation, risk registers suffer from the following problems:

- However the individual risk size is calculated, the cumulative risk score measures the total project risk. Hence there is a paradox involved in such an approach: the more carefully you think about risk (and hence the more individual risks you record in the risk register) the higher the overall risk score becomes. Since higher risk scores are assumed to indicate greater risk of failure, it seems to follow that your best chance of a new project succeeding is to simply ignore, or under-report, any risks.

Figure 8: Meteor Strike Risk.



Figure 9: Conditional Probability Table for "Meteor strikes Earth".

- Different projects or business divisions will assess risk differently and tend to take a localised view of their own risks and ignore those of others. This "externalisation" of risk to others is especially easy to ignore if their interests are not represented when constructing the register. For example, the IT department may be forced to accept the deadlines imposed by the marketing department.

- A risk register does not record "opportunities" or "serendipity" and so does not deal with upside uncertainty, only downside.

- Risks are not independent. For example, in most circumstances cost, time and quality will be inextricably linked; you might be able to deliver faster but only by sacrificing quality. Yet "poor quality" and "missed delivery" will appear as separate risks on the register, giving the illusion that we can control or mitigate one independently of the other. In the subprime loan crisis of 2008 there were three risks: 1) extensive defaults on subprime loans, 2) growth in novelty and complexity of financial products, and 3) failure of AIG (American International Group Inc.) to provide insurance to banks when customers default. Individually these risks were assessed as 'small'. However, when they occurred together the total risk was much larger than the individual risks. In fact, it never made sense to consider the risks individually at all.

Hence, risk analysis needs to be coupled with an assessment of the impact of the underlying events, one on another, and in terms of their effect on the ultimate outcomes being considered. The accuracy of the risk assessment is crucially dependent on the fidelity of the underlying model; the simple formulation of Figure 7 is insufficient. Instead of going through the motions to assign numbers without actually doing much thinking, we need to consider what lies under the bonnet.
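The risk-register paradox described above (every additional risk honestly recorded can only raise the cumulative score) follows mechanically from the probability times impact scoring. A minimal sketch with made-up register entries:

```python
# Naive cumulative risk-register score: sum of probability x impact
# over all listed risks. The register entries are illustrative numbers.
def total_risk_score(register):
    return sum(p * impact for p, impact in register)

register = [(0.5, 4), (0.25, 8)]    # two identified risks
print(total_risk_score(register))    # 4.0

register.append((0.125, 8))          # think harder, record a third risk
print(total_risk_score(register))    # 5.0 -- the score can only rise, so
                                     # an empty register looks 'safest'
```

Since every term in the sum is non-negative, the "total project risk" is monotonically increasing in the number of risks recorded, which is exactly the perverse incentive to under-report.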

Figure 11: Initial Risk of Meteor Strike.

Figure 10: Probability Table for "Meteor on Collision Course with Earth".


Figure 12: The Potential Difference made by Bruce Willis and Crew.

Risk is a function of how closely connected events, systems and actors in those systems might be. Proper risk assessment requires a holistic outlook that embraces a causal view of interconnected events. Specifically, to get rational measures of risk you need a causal model, as we describe next. Once you do this, measuring risk starts to make sense, but it requires an investment in time and thought.

3.2.1 Thinking about Risk using Causal Analysis

It is possible to avoid all the above problems and ambiguities surrounding the term risk by considering the causal context in which risks happen (in fact everything we present here applies equally to opportunities, but we will try to keep it as simple as possible). The key thing is that a risk is an event that can be characterised by a causal chain involving (at least):

- the event itself
- at least one consequence event that characterises the impact
- one or more trigger (i.e. initiating) events
- one or more control events which may stop the trigger event from causing the risk event
- one or more mitigating events which help avoid the consequence event

This is shown in the example of Figure 8.

With this causal perspective, a risk is therefore actually characterised not by a single event, but by a set of events. These events each have a number of possible outcomes (to keep things as simple as possible in the example here we will assume each has just two outcomes, true and false, so we can assume "Loss of life" here means something like 'loss of at least 80% of the world population').

The 'uncertainty' associated with a risk is not a separate notion (as assumed in the classic approach). Every event (and hence every object associated with risk) has uncertainty that is characterised by the event's probability distribution. Triggers, controls, and mitigants are all inherently uncertain. The sensible risk measures that we are proposing are simply the probabilities you get from running the BN model. Of course, before you can run it you still have to provide the prior probability values. But, in contrast to the classic approach, the probability values you need to supply are relatively simple and they make sense. And you never have to define vague numbers for 'impact'.

Figure 13: Incorporating Different Risk Perspectives.

Example. To give you a feel of what you would need to do, in the Armageddon BN example of Figure 8, for the uncertain event "Meteor strikes Earth" we still have to assign some probabilities. But instead of second guessing what this event actually means in terms of other conditional events, the model now makes it explicit and it becomes much easier to define the necessary conditional probability. What we need to do is define the probability of the meteor strike given each combination of parent states, as shown in Figure 9.

For example, if the meteor is on a collision course then the probability of it striking the earth is 1 if it is not destroyed, and 0.2 if it is. In completing such a table we no longer have to try to 'factor in' any implicit conditioning events like the meteor trajectory.

There are some events in the BN for which we do need to assign unconditional probability values. These are represented by the nodes in the BN that have no parents; it makes sense to get unconditional probabilities for these because, by definition, they are not conditioned on anything (this is obviously a choice we make during our analysis). Such nodes can generally only be triggers, controls or mitigants. An example, based on dialogue from the film, is shown in Figure 10.

Once we have supplied the prior probability values, a BN tool will run the model and generate all the measures of risk that you need. For example, when you run the model using only the initial probabilities, the model (as shown in Figure 11) computes the probability of the meteor striking Earth as 99.1% and the probability of loss of life (meaning at least 80% of the world population) as about 94%.

In terms of the difference that Bruce Willis and his crew could make, we run two scenarios: one where the meteor is exploded and one where it is not. The results of both scenarios are shown together in Figure 12.

Reading off the values for the probability of "loss of life" being false, we find that we jump from just over 4% (when the meteor is not exploded) to 81% (when the meteor is exploded). This massive increase in the chance of saving the world clearly explains why it merited an attempt.
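This kind of scenario comparison can be sketched in a few lines of code. The strike CPT below mirrors Figure 9 (the strike is certain if the meteor is on a collision course and not destroyed, and has probability 0.2 if destroyed); the remaining numbers are illustrative assumptions of ours, not the priors of the article's model, so the outputs will not match the 99.1%/94% figures quoted above.

```python
from itertools import product

# Sketch of the Armageddon causal chain: trigger ("on collision course"),
# control ("meteor destroyed"), risk event ("strike"), consequence
# ("loss of life"). Only the strike CPT follows Figure 9; the trigger
# prior and consequence probabilities are illustrative assumptions.
P_COLLISION = 0.999                  # assumed prior for the trigger

def p_strike(collision, destroyed):
    # CPT from Figure 9: certain strike if on course and not destroyed
    if not collision:
        return 0.0
    return 0.2 if destroyed else 1.0

def p_loss(strike):
    # Assumed P(loss of >= 80% of world population | strike status)
    return 0.95 if strike else 0.01

def p_loss_of_life(destroyed):
    """Marginal P(loss of life) by enumerating trigger and risk event."""
    total = 0.0
    for collision, strike in product((False, True), repeat=2):
        pc = P_COLLISION if collision else 1 - P_COLLISION
        ps = p_strike(collision, destroyed)
        ps = ps if strike else 1 - ps
        total += pc * ps * p_loss(strike)
    return total

# The two scenarios of Figure 12: without and with the control event.
print(p_loss_of_life(destroyed=False))   # ~0.95
print(p_loss_of_life(destroyed=True))    # ~0.20
```

Even with these toy numbers the structure of the argument survives: intervening on the control event changes the consequence probability dramatically, which is exactly the comparison the risk register's single probability times impact score cannot express.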

Clearly risks in this sense depend on stakeholders and perspectives, but these perspectives can be easily combined, as shown in Figure 13 for 'flood risk' in some town.

The types of events are all completely interchangeable depending on the perspective. From the perspective of the local authority the risk event is 'Flood', whose trigger is 'dam bursts upstream' and which has 'flood barrier' as a control. Its consequences include 'loss of life' and also 'house floods'. But the latter is a trigger for flood risk from a Householder perspective. From the perspective of the Local Authority Solicitor the main risk event is 'Loss of life', for which 'Flood' is the trigger and 'Rapid emergency response' becomes a control rather than a mitigant.

This ability to decompose a risk problem into chains of interrelated events and variables should make risk analysis more meaningful, practical and coherent. The BN tells a story that makes sense. This is in stark contrast with the "risk equals probability times impact" approach, where not one of the concepts has a clear unambiguous interpretation. Uncertainty is quantified and at any stage we can simply read off the current probability values associated with any event.

The causal approach can accommodate decision-making as well as measures of utility. It provides a visual and formal mechanism for recording and testing subjective probabilities. This is especially important for a risky event for which you do not have much or any relevant data.

4 Conclusions

We have addressed some of the core limitations of both a) the data-driven statistical approaches and b) risk registers, for effective risk management and assessment. We have demonstrated how these limitations are addressed by using BNs. The BN approach helps to identify, understand and quantify the complex interrelationships (underlying even seemingly simple situations) and can help us make sense of how risks emerge, are connected and how we might represent our control and mitigation of them. By thinking about the hypothetical causal relations between events we can investigate alternative explanations, weigh up the consequences of our actions and identify unintended or (un)desirable side effects.

Of course it takes effort to produce a sensible BN model:

- Special care has to be taken to identify cause and effect: in general, a significant correlation between two factors A and B (where, for example, A is 'yellow teeth' and B is 'cancer') could be due to pure coincidence or a causal mechanism, such that:
  - A causes B
  - B causes A
  - Both A and B are caused by C (where in our example C might be 'smoking') or some other set of factors

The difference between these possible mechanisms is crucial in interpreting the data, assessing the risks to the individual and society, and setting policy based on the analysis of these risks. In practice causal interpretation may collide with our personal view of the world and the prevailing ideology of the organisation and social group of which we will be a part. Explanations consistent with the ideological viewpoint of the group may be deemed more worthy and valid than others, irrespective of the evidence. Hence simplistic causal explanations (e.g. 'poverty' causes 'violence') are sometimes favoured by the media and reported unchallenged. This is especially so when the explanation fits the established ideology, helping to reinforce ingrained beliefs. Picking apart over-simplistic causal claims and reconstructing them into a richer, more realistic causal model helps separate ideology from reality and determine whether the model explains reality. The richer model may then also help identify more realistic possible policy interventions.

- The states of variables need to be carefully defined and probabilities need to be assigned that reflect our best knowledge.

- It requires an analytical mindset to decompose the problem into "classes" of event and relationships that are granular enough to be meaningful, but not so detailed that they are overwhelming.

If we were omniscient we would have no need of probabilities; the fact that we are not gives rise to our need to model uncertainty at a level of detail that we can grasp, that is useful, and which is accurate enough for the purpose required. This is why causal modelling is as much an art (but an art based on insight and analysis) as a science.

The time spent analysing risks must be balanced against the short-term need to take action and the magnitude of the risks involved. Therefore, we must make judgements about how deeply we model some risks and how quickly we use this analysis to inform our actions.

References

[1] S.L. Lauritzen, D.J. Spiegelhalter. "Local computations with probabilities on graphical structures and their application to expert systems (with discussion)." Journal of the Royal Statistical Society, Series B, 50(2), 157-224, 1988.
[2] I.B. Hossack, J.H. Pollard, B. Zehnwirth. Introductory Statistics with Applications in General Insurance. Cambridge University Press, 1999.
[3] W. Casscells, A. Schoenberger, T.B. Graboys. "Interpretation by physicians of clinical laboratory results." New England Journal of Medicine 299, 999-1001, 1978.
[4] L. Cosmides, J. Tooby. "Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty." Cognition 58, 1-73, 1996.
[5] N. Fenton, M. Neil. "Comparing risks of alternative medical diagnosis using Bayesian arguments." Journal of Biomedical Informatics 43, 485-495, 2010.
[6] J. Pearl. "Fusion, propagation, and structuring in belief networks." Artificial Intelligence 29(3), 241-288, 1986.
[7] Agena 2010, <http://www.agenarisk.com>.
[8] N.E. Fenton, M. Neil. Managing Risk in the Modern World: Bayesian Networks and the Applications. London Mathematical Society, Knowledge Transfer Report 1, 2007. <http://www.lms.ac.uk/activities/comp_sci_com/KTR/apps_bayesian_networks.pdf>.


Keywords: Project Management, Project Scheduling, Quantitative Methods, Schedule Network Analysis.

1 Why Project Managers ignore Risks in Project Schedules

Virtually all projects are affected by multiple risks and uncertainties. These uncertainties are difficult to identify and analyze, which can lead to inaccurate project schedules. Due to these inherent uncertainties, most projects do not proceed exactly as planned and, in many cases, they lead to project delays, cost overruns, and even project failures. Therefore, creating accurate project schedules, which reflect potential risks and uncertainties, remains one of the main challenges in project management.

In [1][2][3] the authors reviewed technical, psychological and political explanations for inaccurate scheduling and forecasting. They found that strategic misrepresentation under political and organizational pressure, expressed by project planners, as well as cognitive biases, play major roles in inaccurate forecasting. In other words, project planners either unintentionally, due to psychological biases, or intentionally, due to organizational pressures, consistently deliver inaccurate estimates for cost and schedule, which in turn lead to inaccurate forecasts [4].

Among the cognitive biases related to project forecasting are the planning fallacy [5] and the optimism bias [6]. According to one explanation, project managers do not account for risks or other factors that they perceive as lying outside of the specific scope of a project. Project managers may also discount multiple improbable high-impact risks because each has a very small probability of occurring. It has been proposed in [7] that limitations in human mental processes cause people to employ various simplifying strategies to ease the burden of mentally processing information when making judgments and decisions. During the planning stage, project managers rely on heuristics, or rules of thumb, to make their estimates. Under many circumstances, heuristics lead to predictably faulty judgments or cognitive biases [8]. The availability heuristic [9][10] is a rule of thumb with which decision makers assess the probability of an event by the

Authors

Michael Trumper has over 20 years of experience encompassing technical communications, instructional and software design for project risk and economics software. He has consulted in the development and delivery of project risk analysis and management software, consulting, and training solutions to clientele that includes NASA, Boeing, Dynatec, Lockheed Martin, Proctor and Gamble, L-3com and others. He coauthored "Project Decisions: The Art and Science" (published in 2007 and currently in the PMI bookstore) and authored papers on quantitative methods in project risk analysis. <[email protected]>.

Lev Virine has more than 20 years of experience as a structural engineer, software developer, and project manager. In the past 10 years he has been involved in a number of major projects performed by Fortune 500 companies and government agencies to establish effective decision analysis and risk management processes, as well as to conduct risk analyses of complex projects. Lev's current research interests include the application of decision analysis and risk management to project management. He writes and speaks at conferences around the world on project decision analysis, including the psychology of judgment and decision-making, modeling of business processes, and risk management. Lev received his doctoral degree in engineering and computer science from Moscow State University, Russia. <[email protected]>.

Event Chain Methodology in Project Management

Michael Trumper and Lev Virine

Risk management has become a critical component of project management processes. Quantitative schedule risk analysis methods enable project managers to assess how risks and uncertainties will affect project schedules and increase the effectiveness of their project planning. Event chain methodology is an uncertainty modeling and schedule network analysis technique that focuses on identifying and managing the events and event chains that affect projects. Event chain methodology improves the accuracy of project planning by simplifying the modeling and analysis of risks and uncertainties in project schedules. As a result, it helps to mitigate the negative impact of cognitive and motivational biases related to project planning. Event chain methodology is currently used in many organizations as part of their project risk management process.

ease with which instances or occurrences can be brought to mind. For example, project managers sometimes estimate task duration based on similar tasks that have been previously completed. If they base their judgments on their most or least successful tasks, this can cause inaccurate estimations. The anchoring heuristic refers to the human tendency to remain close to an initial estimate. Anchoring is related to overconfidence in the estimation of probabilities: a tendency to provide overly optimistic estimates of uncertain events. Arbitrary anchors can also affect people's estimates of how well they will perform certain problem-solving tasks [11].


Problems with estimation are also related to selective perception - the tendency for expectations to affect perception [12]. Selective perception is sometimes summarized as "I only see what I want to see". One of the biases related to selective perception is the confirmation bias. This is the tendency of decision makers to actively seek out and assign more weight to evidence that confirms their hypothesis, and to ignore or underweight evidence that could discount it [13][14].

Another problem related to improving the accuracy of project schedules is the complex relationship between different uncertainties. Events can occur in the middle of an activity, they can be correlated with each other, one event can cause other events, the same event may have different impacts depending upon circumstances, and different mitigation plans can be executed under different conditions. These complex systems of uncertainties must be identified and visualized to improve the accuracy of project schedules.

Finally, the accuracy of project scheduling can be improved by constantly refining the original plan using actual project performance measurement [15]. This can be achieved through the analysis of uncertainties during different phases of the project and the incorporation of new knowledge into the project schedule. In addition, a number of scheduling techniques, such as resource leveling, the incorporation of mitigation plans, and the presence of repeated activities, are difficult to model in project schedules with risks and uncertainties. Therefore, the objective is to identify a simpler process which includes project performance measurement and other analytical techniques.

Event chain methodology has been proposed as an attempt to satisfy the following objectives related to project scheduling and forecasting:

1. Mitigating the negative effects of motivational and cognitive biases and improving the accuracy of estimating and forecasting.

2. Simplifying the process of modeling risks and uncertainties in project schedules, in particular by improving the ability to visualize multiple events that affect project schedules and to perform reality checks.

3. Performing more accurate quantitative analysis while accounting for such factors as the relationships between different events and the actual moments of events.

4. Providing a flexible framework for scheduling which includes project performance measurement, resource leveling, execution of mitigation plans, correlations between risks, repeated activities, and other types of analysis.

2 Existing Techniques as Foundations for Event Chain Methodology

The accuracy of project scheduling with risks and uncertainties can be improved by applying a process or workflow tailored to the particular project or set of projects (portfolio), rather than using one particular analytical technique. According to the PMBOK® Guide of the Project Management Institute [16], such processes can include methods for the identification of uncertainties, qualitative and quantitative analysis, risk response planning, and risk monitoring and control. The actual processes may involve various tools and visualization techniques.

One of the fundamental issues associated with managing project schedules lies in the identification of uncertainties. If the estimates for input uncertainties are inaccurate, this will lead to inaccurate results regardless of the analysis methodology. The accuracy of project planning can be significantly improved by applying advanced techniques for the identification of risks and uncertainties. Extensive sets of techniques and tools which can be used by individuals as well as by groups are available to simplify the process of uncertainty modeling [17][18].

The PMBOK® Guide recommends creating risk templates based on historical data. There are no universal, exhaustive risk templates for all industries and all types of projects. The project management literature includes many examples of risk lists which can be used as templates [19]. A more advanced type of template, the risk questionnaire, is proposed in [20]. Risk questionnaires provide three choices for each risk, from which the project manager can select when the risk can manifest itself during the project: a) at any time, b) about half the time, and c) less than half the time. One of the most comprehensive analyses of risk sources and categories was performed by Scheinin and Hefner [21]. Each risk in their risk breakdown structure includes what they call a "frequency" or rank property.

The PMBOK® Guide recommends a number of quantitative analysis techniques, such as Monte Carlo analysis, decision trees, and sensitivity analysis. Monte Carlo analysis is used to approximate the distribution of potential results based on probabilistic inputs [22][23][24][25]. Each trial is generated by randomly pulling a sample value for each input variable from its defined probability distribution. These input sample values are then used to calculate the results. This procedure is repeated until the probability distributions are sufficiently well represented to achieve the desired level of accuracy. The main advantage of Monte Carlo simulation is that it helps to incorporate risks and uncertainties into the process of project scheduling.
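The sampling loop just described can be sketched in a few lines of Python. This is an illustrative example only; the three-task schedule and its triangular duration parameters are assumptions, not data from the article.

```python
import random

def simulate_schedule(n_trials=20_000, seed=1):
    """Monte Carlo simulation of a three-task sequential schedule."""
    rng = random.Random(seed)
    # (optimistic, most likely, pessimistic) durations in days -- assumed values
    tasks = [(4, 5, 8), (2, 3, 6), (5, 7, 12)]
    totals = []
    for _ in range(n_trials):
        # one trial: pull a sample value for every input distribution, sum the path
        totals.append(sum(rng.triangular(low, high, mode)
                          for low, mode, high in tasks))
    totals.sort()
    mean = sum(totals) / n_trials
    p90 = totals[int(0.9 * n_trials)]  # high (90th percentile) estimate
    return mean, p90

mean, p90 = simulate_schedule()
```

The mean and the 90th percentile both come out above the sum of the most likely durations (15 days), which is exactly the information a single-point schedule hides.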

However, Monte Carlo analysis has the following limitations:

1. Project managers perform certain recovery actions when a project slips. In most cases these actions are not taken into account by Monte Carlo analysis. In this respect, Monte Carlo may give overly pessimistic results [26].

2. Defining distributions is not a trivial process. Distributions are a very abstract concept that some project managers find difficult to work with [27].

Monte Carlo simulations may be performed based on uncertainties defined as risk drivers, or events [28][29]. Such risk drivers may lead to increases in task duration or cost. Each event is defined by its probability and impact, and can be assigned to a specific task. For example, the event "problem with delivery of the component" may lead to a 20% delay of the task with a probability of 20%. The issue with such an approach is that relations between risks must be defined and taken into account during the simulation process. For example, in many cases one risk will trigger another risk, but only under certain conditions. These relationships can be very difficult to define using traditional methods.
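A minimal sketch of the risk-driver approach, using the delivery-problem numbers from the text (the function and parameter names are our own, and the sketch deliberately omits the inter-risk relations the paragraph warns about):

```python
import random

def sample_task_duration(base, risks, rng):
    """Risk-driver sampling: each risk is (probability, relative_delay).
    If a risk fires in this trial, the task is extended by that
    fraction of its base duration."""
    duration = base
    for probability, relative_delay in risks:
        if rng.random() < probability:
            duration += base * relative_delay
    return duration

rng = random.Random(42)
# "problem with delivery of the component": 20% chance of a 20% delay
samples = [sample_task_duration(10.0, [(0.20, 0.20)], rng)
           for _ in range(50_000)]
mean = sum(samples) / len(samples)
# expected value: 10 * (1 + 0.2 * 0.2) = 10.4 days
```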

Another approach to project scheduling with uncertainties was developed by Goldratt [30], who applied the theory of constraints to project management. The cornerstone of the theory is the resource-constrained critical path, called a "critical chain". Goldratt's approach is based on the deterministic critical path method. To deal with uncertainties, Goldratt suggests using project buffers and encouraging early task completion. Although critical chain has proved to be a very effective methodology for a wide range of projects [31][32], it is not fully embraced by many project managers because it requires changes to established processes.

A number of quantitative risk analysis techniques have been developed to deal with specific issues related to uncertainty management. Decision trees [33] help to calculate the expected value of projects, as well as to identify project alternatives and select better courses of action. Sensitivity analysis is used to determine which variables, such as risks, have the most potential impact on projects [25]. These types of analysis usually become important components in a project planning process that accounts for risks and uncertainties.

One approach which may help to improve the accuracy of project forecasts is the visualization of project plans with uncertainties. Traditional visualization techniques include bar charts or Gantt charts and various schedule network diagrams [16]. Visual modeling tools are widely used to describe complex models in many industries; the Unified Modeling Language (UML), for example, is actively used in software design [34][35].

Among the integrated processes designed to improve the accuracy of project planning with risks and uncertainties is the reference class forecasting technique [36]. This process includes identifying similar past and present projects, establishing probability distributions for selected reference classes, and using them to establish the most likely outcome of a specific project. The American Planning Association officially endorses reference class forecasting. Analysis based on historical data helps to make more accurate forecasts; however, it has the following major shortcomings:

1. Creating sets of references or analogues is not a trivial process, because it involves a relevance analysis of previous projects, and previous projects may not be relevant to the current one.

2. Many projects, especially in the area of research and development, do not have any relevant historical data.

3 Overview of Event Chain Methodology

Event chain methodology is a practical schedule network analysis technique as well as a method of modeling and visualizing uncertainties. Event chain methodology comes from the notion that regardless of how well project schedules are developed, some events may occur that will alter them. Identifying and managing these events or event chains (when one event causes another event) is the focus of event chain methodology. The methodology focuses on events rather than on continuous processes for changing project environments, because continuous problems within a project can be detected and fixed before they have a significant effect upon the project.

Project scheduling and analysis using event chain methodology includes the following steps:

1. Create a project schedule model using best-case scenario estimates of duration, cost, and other parameters. In other words, project managers should use estimates that they are comfortable with, which in many cases will be optimistic: because of the cognitive and motivational factors outlined earlier, project managers tend to create optimistic estimates.

2. Define a list of events and event chains with their probabilities and impacts on activities, resources, lags, and calendars. This list of events can be represented in the form of a risk breakdown structure. These events should be identified separately (separate time, separate meeting, different experts, different planning department) from the schedule model, to avoid situations where expectations about the project (cost, duration, etc.) affect the event identification.

3. Perform a quantitative analysis using Monte Carlo simulations. The results of Monte Carlo analysis are statistical distributions of the main project parameters (cost, duration, and finish time), as well as similar parameters associated with particular activities. Based on such statistical distributions, it is possible to determine the chance that the project or activity will be completed on a certain date and within a certain cost. The results of Monte Carlo analysis can be expressed on a project schedule as percentiles of start and finish times for activities.

4. Perform a sensitivity analysis as part of the quantitative analysis. Sensitivity analysis helps identify the crucial activities and the critical events and event chains: those with the most effect on the main project parameters. Reality checks may be used to validate whether the probabilities of the events are defined properly.

5. Repeat the analysis on a regular basis during the course of the project based on actual project data, and include the actual occurrence of certain risks. The probability and impact of risks can be reassessed based on actual project performance measurement. This helps to provide up-to-date forecasts of project duration, cost, or other parameters.

4 Foundations of Event Chain Methodology

Event chain methodology expands on Monte Carlo simulations of project schedules, and particularly on the risk driver (event) approach. Event chain methodology focuses on the relationships between risks, the conditions for risk occurrence, and the visualization of risk events.

Some of the terminology used in event chain methodology comes from the field of quantum mechanics. In particular, quantum mechanics introduces the notions of excitation and entanglement, as well as ground and excited states [37][38]. The notion of event subscription and multicasting is used in object-oriented software development as one of the types of interaction between objects [39][40].

5 Basic Principles of Event Chain Methodology

Event chain methodology is based on six major principles. The first principle deals with single events, the second principle focuses on multiple related events or event chains, the third principle defines rules for visualization of the events or event chains, the fourth and fifth principles deal with the analysis of the schedule with event chains, and the sixth principle defines project performance measurement techniques with events or event chains. Event chain methodology is not a completely new technique, as it is based on existing quantitative methods such as Monte Carlo simulation and Bayes' theorem.

Principle 1: Moment of Event and Excitation States

An activity in most real-life processes is not a continuous and uniform procedure. Activities are affected by external events that transform them from one state to another. The notion of state means that the activity will be performed differently in response to the event. This process of changing the state of an activity is called excitation. In quantum mechanics, the notion of excitation is used to describe an elevation in energy level above an arbitrary baseline energy state. In event chain methodology, excitation indicates that something has changed the manner in which an activity is performed. For example, an activity may require different resources, take a longer time, or have to be performed under different conditions. As a result, this may alter the activity's cost and duration.

Figure 1: Events cause an Activity to transform from Ground States to Excited States.

The original or planned state of the activity is called a ground state. Other states, associated with different events, are called excited states (Figure 1). For example, in the middle of an activity the project requirements change, and as a result the planned activity must be restarted. Similarly to quantum mechanics, if a significant event affects an activity, it will dramatically affect the properties of the activity, for example by cancelling it.

Events can affect one or many activities, material or work resources, lags, and calendars. Such event assignment is an important property of the event. An example of an event that can be assigned to a resource is the illness of a project team member. This event may delay all activities to which this resource is assigned. Similarly, resources, lags, and calendars may have different ground and excited states. For example, the event "Bad weather conditions" can transform a calendar from a ground state (5 working days per week) to an excited state: non-working days for the next 10 days.

Each state of an activity may subscribe to certain events. This means that an event can affect the activity only if the activity is subscribed to this event. For example, an assembly activity has started outdoors. The ground state of the activity is subscribed to the external event "Bad weather". If "Bad weather" actually occurs, the assembly should move indoors. This constitutes an excited state of the activity. This new excited state (indoor assembly) will not be subscribed to "Bad weather": if this event occurs again, it will not affect the activity.

Event subscription has a number of properties. Among them are:

- Impact of the event: the impact is a property of the state rather than of the event itself. This means the impact can be different if the activity is in a different state. For example, an activity is subscribed to the external event "Change of requirements". In the activity's ground state, this event can cause a 50% delay of the activity. However, once the event has occurred, the activity is transformed to an excited state. If "Change of requirements" occurs again in the excited state, it will cause only a 25% delay of the activity, because management performed certain actions when the event first occurred.

- Probability of occurrence: this is also a property of the subscription. For example, there is a 50% chance that the event will occur. As with impact, the probability of occurrence can be different for different states.

- Excited state: the state the activity is transformed to after an event occurs.

- Moment of event: the actual moment when the event occurs during the course of an activity. The moment of event can be absolute (a certain date and time) or relative to the activity's start and finish times. In most cases, the moment when the event occurs is probabilistic and can be defined using a statistical distribution (Figure 1). Very often, the overall impact of the event depends on when the event occurs. For example, the moment of the event can affect the total duration of the activity if it is restarted or cancelled. Below is an example of how one event (restart activity) with a probability of 50% can affect one activity (Table 1). Monte Carlo simulation was used to perform the analysis; the original activity duration is 5 days.

Table 1: Moment of Risk significantly affects Activity Duration.

  Risk most likely occurs at the end of the activity (triangular distribution for moment of risk): mean activity duration with the event occurred 5.9 days; 90th percentile 7.9 days
  Equal probability of risk occurrence during the course of the activity: mean activity duration with the event occurred 6.3 days; 90th percentile 9.14 days
  Risk occurs only at the end of the activity: mean activity duration with the event occurred 7.5 days; 90th percentile 10 days
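The effect of the moment of event can be reproduced with a short simulation. This is an illustrative sketch under our own assumptions: only the uniform-moment and end-of-activity cases are shown, and a restart is modeled as losing all work done before the event.

```python
import random

def mean_duration(moment_sampler, base=5.0, probability=0.5,
                  n_trials=100_000, seed=7):
    """Mean duration of a 5-day activity hit by a 'restart activity'
    event: work done before the moment of event is lost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < probability:
            # the event fires: restart from zero at the sampled moment
            total += moment_sampler(rng) + base
        else:
            total += base
    return total / n_trials

# equal probability of occurrence during the course of the activity
uniform_case = mean_duration(lambda rng: rng.uniform(0.0, 5.0))
# risk occurs only at the very end of the activity
end_case = mean_duration(lambda rng: 5.0)
```

Under this model, uniform_case comes out near 6.25 days and end_case near 7.5 days, matching the pattern in Table 1: the later the event can occur, the larger its expected impact.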

Events can have negative impacts (risks) or positive impacts (opportunities) on projects. Mitigation efforts are considered to be events which are executed if an activity is in an excited state. Mitigation events may attempt to transform the activity back to the ground state.

Impacts of an event affecting activities, a group of activities, or lags can be:

- Delay activity, split activity, or start activity later; delays can be defined as fixed (a fixed period of time) or relative (a percentage of activity duration); a delay can also be negative
- Restart activity
- Stop activity and restart it later if required
- End activity
- Cancel activity, or cancel activity with all successors; this is similar to end activity, except that the activity will be marked as cancelled for future calculation of the activity's success rate
- Fixed or relative increase or reduction of the cost
- Redeploy resources associated with the activity; for example, a resource can be moved to another activity
- Execute events affecting another activity, a group of activities, a resource, or a calendar; for example, this event can start another activity such as a mitigation plan, change the excited state of another activity, or update event subscriptions for the excited state of another activity

The impacts of events are characterized by some additional parameters. For example, a parameter associated with the impact "Fixed delay of activity" is the actual duration of the delay.

The impact of events associated with resources is similar to the impact of activity events. Resource events will affect all activities the resource is assigned to. If a resource is only partially involved in an activity, the probability of the event will be proportionally reduced. The impact of events associated with a calendar changes working and non-working times.

One event can have multiple impacts at the same time. For example, a "Bad weather" event can cause an increase in both cost and duration. Events can be local, affecting a particular activity, group of activities, lag, resource, or calendar, or global, affecting all activities in the project.
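As an illustration of these event properties, the following sketch models an event record carrying several simultaneous impacts and a local/global scope flag. The field names and values are our own, not part of the methodology's specification.

```python
from dataclasses import dataclass, field

@dataclass
class Impact:
    kind: str      # e.g. "relative delay" or "fixed cost increase"
    value: float

@dataclass
class Event:
    name: str
    probability: float
    impacts: list = field(default_factory=list)  # one event, several impacts
    scope: str = "local"   # "local": specific assignment; "global": whole project

# one event with two simultaneous impacts, affecting every activity
bad_weather = Event("Bad weather", 0.15,
                    [Impact("relative delay", 0.30),
                     Impact("fixed cost increase", 5000.0)],
                    scope="global")
```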

Principle 2: Event Chains

Some events can cause other events. These series of events form event chains, which may significantly affect the course of the project by creating a ripple effect through the project (Figure 2). Here is an example of an event chain ripple effect:

1. Requirement changes cause a delay of an activity.
2. To accelerate the activity, the project manager diverts resources from another activity.
3. The diversion of resources causes deadlines to be missed on the other activity.
4. Cumulatively, this reaction leads to the failure of the whole project.

Event chains are defined using the event impact called "Execute events affecting another activity, group of activities, change resources or update calendar". Here is how the aforementioned example can be defined using event chain methodology:

1. The event "Requirement change" transforms the activity to an excited state which is subscribed to the event "Redeploy resources".

2. Execute the event "Redeploy resources" to transfer resources from another activity. The other activities should be in a state subscribed to the "Redeploy resources" event; otherwise the resources will not be available.

3. As soon as the resources are redeployed, the activity with reduced resources moves to an excited state, and the duration of the activity in this state will increase.

4. Successors of the activity with the increased duration will start later, which can cause a missed project deadline.

Table 2: An Event Chain leads to Higher Project Duration compared to a Series of Independent Events with the Same Probability.

  Independent events in each activity: mean duration 18.9 days; 90th percentile (high estimate of duration) 22.9 days
  Event chain: mean duration 19.0 days; 90th percentile (high estimate of duration) 24.7 days

Figure 2: Example of an Event Chain.

An event that causes another event is called the sender. The sender can cause multiple events in different activities. This effect is called multicasting. For example, a broken component may cause multiple events: a delay in assembly, an additional repair activity, and some new design activities. Events that are caused by the sender are called receivers. A receiver event can also act as a sender for another event.

The actual effect of an event chain on a project schedule can be determined as a result of quantitative analysis. The example shown in Figure 2 and Table 2 illustrates the difference between an event chain and independent events. Monte Carlo simulations were used to perform the analysis. The project includes three activities of 5 days each. Each activity is affected by the event "restart activity" with a probability of 50%.
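A sketch of this comparison under our own simplified assumptions: the chain is modeled as the first event being multicast to all three activities, and a restart loses the work done before a uniformly distributed moment of event.

```python
import random

def project_duration(rng, chained):
    """Three sequential 5-day activities, each subject to a
    'restart activity' event with a 50% probability."""
    if chained:
        # sender event: one draw, multicast to all three activities
        fired = [rng.random() < 0.5] * 3
    else:
        fired = [rng.random() < 0.5 for _ in range(3)]
    total = 0.0
    for f in fired:
        moment = rng.uniform(0.0, 5.0)   # moment of event, if it fires
        total += (moment + 5.0) if f else 5.0
    return total

def summarize(chained, n_trials=100_000, seed=3):
    rng = random.Random(seed)
    totals = sorted(project_duration(rng, chained) for _ in range(n_trials))
    return sum(totals) / n_trials, totals[int(0.9 * n_trials)]

mean_ind, p90_ind = summarize(chained=False)
mean_chain, p90_chain = summarize(chained=True)
```

The means come out nearly identical, but the chained case shows a noticeably higher 90th percentile: correlation fattens the tail, which is the pattern reported in Table 2.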

Below are four different strategies for dealing with risks [41], defined using event chain methodology's event chain principle:

1. Risk acceptance: the excited state of the activity is considered to be acceptable.

2. Risk transfer: represents an event chain; the impact of the original event is the execution of an event in another activity (Figure 3).

3. Risk mitigation: represents an event chain; the original event transforms an activity from a ground state to an excited state, which is subscribed to a mitigation event; the mitigation event that occurs in the excited state will try to transform the activity to a ground state or a lower excited state (Figure 4).

4. Risk avoidance: the original project plan is built in such a way that none of the states of the activities are subscribed to the event.

Principle 3: Event Chain Diagrams and State Tables

Complex relationships between events can be visualized using event chain diagrams (Figure 5). Event chain diagrams are presented on the Gantt chart according to a specification: a set of rules which can be understood by anybody using the diagram.

1. All events are shown as arrows. Names and/or IDs of events are shown next to the arrow.


Figure 3: Event Chain: Risk Transfer.

Figure 4: Event Chain: Risk Mitigation.

Figure 5: Example of an Event Chain Diagram.



2. Events with negative impacts (threats) are represented by down arrows; events with positive impacts (opportunities) are represented by up arrows.

3. Individual events are connected by lines representing the event chain.

4. A sender event with multiple connecting lines to receivers represents multicasting.

5. Events affecting all activities (global events) are shown outside the Gantt chart. Threats are shown at the top of the diagram; opportunities are shown at the bottom.

Event chain diagrams can often become very complex. In these cases, some details of the diagram do not need to be shown. Here is a list of optional rules for event chain diagrams:

1. Horizontal positions of the event arrows on the Gantt bar correspond to the mean moment of the event.

2. The probability of an event can be shown next to the event arrow.

3. The size of the arrow represents the relative probability of an event. If the arrow is small, the probability of the event is correspondingly small.

4. Excited states are represented by elevating the associated section of the bar on the Gantt chart (see Figure 1). The height of the state's rectangle represents the relative impact of the event.

5. Statistical distributions for the moment of event can be shown together with the event arrow (see Figure 1).

6. Multiple diagrams may be required to represent different event chains for the same schedule.

7. Different colors can be used to represent different events (arrows) and the connecting lines associated with different chains.

The central purpose of event chain diagrams is not to show all possible individual events; rather, event chain diagrams can be used to understand the relationships between events. Therefore, it is recommended that event chain diagrams be used only for the most significant events during the event identification and analysis stage. Event chain diagrams can be used as part of the risk identification process, particularly during brainstorming meetings. Members of project teams can draw arrows between associated activities on the Gantt chart. Event chain diagrams can be used together with other diagramming tools.

Another tool that can be used to simplify the definition of events is a state table. Columns in the state table represent events; rows represent the states of an activity. The information for each event in each state includes the four properties of event subscription: probability, moment of event, excited state, and impact of the event. A state table helps to depict an activity's subscriptions to events: if a cell is empty, the state is not subscribed to the event.

An example of a state table for a software development activity is shown in Table 3. The ground state of the activity is subscribed to two events: "architectural changes" and "development tools issue". If either of these events occurs, it transforms the activity to a new excited state called "refactoring". "Refactoring" is subscribed to another event: "minor requirements change". The refactoring state is not subscribed to the two previous events, which therefore cannot reoccur while the activity is in this state.
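A state table maps naturally onto a data structure. The following sketch (our own representation, not a prescribed format) maps each state to its event subscriptions; an empty mapping plays the role of an empty table cell.

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    probability: float   # chance that the event occurs in this state
    moment: str          # when the event can occur
    excited_state: str   # state the activity is transformed to
    impact: str          # effect of the event on the activity

# state -> {event name -> subscription}, following Table 3
STATE_TABLE = {
    "ground": {
        "architectural changes":
            Subscription(0.20, "any time", "refactoring", "delay 2 weeks"),
        "development tools issue":
            Subscription(0.10, "any time", "refactoring", "delay 1 week"),
    },
    "refactoring": {
        "minor requirements change":
            Subscription(0.10, "beginning of the state",
                         "minor code change", "delay 2 days"),
    },
    "minor code change": {},   # subscribed to no events
}

def is_subscribed(state, event):
    """An empty cell means the state ignores the event."""
    return event in STATE_TABLE.get(state, {})
```

For example, is_subscribed("refactoring", "architectural changes") is False, so that event cannot reoccur while the activity is refactoring.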

Principle 4: Monte Carlo Schedule Risk Analysis

Once events, event chains, and event subscriptions are defined, a Monte Carlo analysis of the project schedule can be performed to quantify the cumulative impact of the events. The probabilities and impacts of events are used as input data for the analysis.

In most real-life projects, even if all the possible risks are defined, there are always some uncertainties, fluctuations, or noise in duration and cost. To take these fluctuations into account, distributions for activity duration, start time, cost, and other parameters should be defined in addition to the list of events. These statistical distributions must not have the same root cause as the defined events, as this would double-count the project's risk.

Table 3: Example of a State Table for a Software Development Activity.

  Ground state
    Event 1 (Architectural changes): probability 20%; moment of event: any time; excited state: refactoring; impact: delay 2 weeks
    Event 2 (Development tools issue): probability 10%; moment of event: any time; excited state: refactoring; impact: delay 1 week
    Event 3 (Minor requirements change): not subscribed
  Excited state: refactoring
    Events 1 and 2: not subscribed
    Event 3 (Minor requirements change): probability 10%; moment of event: beginning of the state; excited state: minor code change; impact: delay 2 days
  Excited state: minor code change
    Not subscribed to any events

The Monte Carlo simulation process for event chain methodology has a number of specific features. Before the sampling process starts, all event chains should be identified. In particular, all sender and receiver events should be identified through an analysis of the state tables for each activity. Also, if events are assigned to resources, they need to be reassigned to activities based on resource usage for each particular activity. For example, if a manager is equally involved in two activities, a risk "Manager is not familiar with technology" with a probability of 6% will be transferred to both activities with a probability of 3% for each activity. Events assigned to summary activities will be assigned to each activity in the group. Events assigned to lags are treated the same way as events assigned to activities.

Each trial of the Monte Carlo simulation includes the following steps specific to event chain methodology:

1. Moments of events are calculated based on the statistical distribution for the moment of event in each state.

2. Determine whether sender events have actually occurred in this particular trial, based on the probability of the sender.

3. Determine whether the probabilities of receiver events are updated based on the sender event. For example, if a sender event unconditionally causes a receiver event, the probability of the receiver event will equal 100%.

4. Determine whether receiver events have actually occurred; if a receiver event is also a sender event, the process of determining the probabilities of receiver events continues.

5. The process is repeated for all ground and excited states of all activities and lags.

6. If an event that causes the cancellation of an activity occurs, the activity is marked as canceled and its duration and cost are adjusted.

7. If an event that causes the start of another activity occurs, such as the execution of a mitigation plan, the project schedule is updated for that particular trial. The number of trials in which the particular activity is started is counted.

8. The cumulative impact of all events on the activity's duration and cost is augmented by accounting for fluctuations of duration and cost.

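The trial steps above can be sketched in miniature. This is a simplified illustration, not the authors' algorithm: it assumes a single project path, independent sender events, fixed impacts, and a uniform noise term, and it omits moments of events, cancellations, and mitigation-plan branches (all names and numbers below are hypothetical):

```python
import random

def run_trial(activities, events):
    """One Monte Carlo trial: sample sender events, propagate
    sender -> receiver chains, and accumulate duration impacts.
    `events` maps an event name to (probability, impact_days, receivers)."""
    occurred = set()

    # Steps 1-2 (simplified): sample which sender events occur in this trial
    for name, (prob, _impact, _receivers) in events.items():
        if random.random() < prob:
            occurred.add(name)

    # Steps 3-4: trigger receivers of occurred senders; a receiver that is
    # also a sender keeps propagating until the chain is exhausted
    frontier = list(occurred)
    while frontier:
        sender = frontier.pop()
        for receiver in events[sender][2]:
            if receiver not in occurred:
                occurred.add(receiver)
                frontier.append(receiver)

    # Step 8: cumulative impact of all events, plus a duration fluctuation
    # whose root cause is independent of the defined events
    duration = sum(activities.values())
    duration += sum(events[name][1] for name in occurred)
    duration *= random.uniform(0.95, 1.05)
    return duration

activities = {"design": 10, "coding": 20}          # durations in days
events = {
    "requirements change": (0.3, 5, ["rework"]),   # sender -> receiver chain
    "rework": (0.0, 3, []),                        # occurs only if triggered
}
durations = [run_trial(activities, events) for _ in range(10_000)]
```

Collecting the per-trial durations over many trials yields the statistical distributions described next.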
The results of the analysis are similar to the results of classic Monte Carlo simulations of project schedules. They include statistical distributions for the duration, cost, and success rate of the complete project and of each activity or group of activities. Success rates are calculated based on the number of simulations in which the event "Cancel activity" or "Cancel group of activities" occurred. Probabilistic and conditional branching, calculating the chance that the project will be completed before a deadline, probabilistic cash flow, and other types of analysis are performed in the same manner as in a classic Monte Carlo analysis of project schedules. The probability of an activity's existence is calculated based on two types of inputs: probabilistic and conditional branching, and the number of trials in which an activity is executed as a result of a "Start activity" event.

Principle 5: Critical Event Chains and Event Cost

Single events or event chains that have the most potential to affect the project are the critical events or critical event chains. By identifying critical events or critical event chains, it is possible to mitigate their negative effects. These critical event chains can be identified through sensitivity analysis: by analyzing the correlations between the main project parameters, such as project duration or cost, and event chains.

Critical event chains based on cost and duration may differ. Because the same event may affect different activities and have a different impact on these activities, the goal is to measure the cumulative impact of the event on the project schedule. Critical event chains based on duration are calculated using the following approach. For each event and event chain, on each trial, the cumulative impact of the event on project duration (Dcum) is calculated using the formula:

    Dcum = Σ(i=1..n) (Di' - Di) * ki

where n is the number of activities in which this event or event chain occurs, Di is the original duration of activity i, Di' is the duration of activity i with this particular event taken into account, and ki is the Spearman rank order correlation coefficient between total project duration and the duration of activity i. If events are assigned to calendars, Di' is the duration of the activity with the calendar used as a result of the event.

The cumulative impact of the event on cost (Ccum) is calculated using the formula:

    Ccum = Σ(i=1..n) (Ci' - Ci)

where Ci is the original cost of activity i and Ci' is the activity cost taking this particular event into account. The Spearman rank order correlation coefficient is calculated based on the cumulative effect of the event on cost and duration (Ccum and Dcum) and the total project cost and duration.

One useful measure of the impact of an event is event cost: the additional expected cost that would be added to the project as a result of the event. Event cost is not a mitigation cost. Event cost can be used as a decision criterion for the selection of risk mitigation strategies. The mean event cost Cevent is the normalized cumulative effect of the event on cost and is calculated according to the following formula:

    Cevent = (Cproject' - Cproject) * kevent / Σ(i=1..n) ki

where Cproject' is the mean total project cost with risks and uncertainties, Cproject is the mean total project cost without taking events into account but accounting for the fluctuations defined by statistical distributions, kevent is the correlation coefficient between total project cost and the cumulative impact of the event on cost for the particular activity, and ki is the correlation coefficient between total cost and the cumulative impact of the event on activity i. Event cost can be calculated based on any percentile of the statistical distribution of project cost.

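The Dcum formula and the correlation coefficients ki can be illustrated with simulated trial data. Everything below (trial counts, probabilities, impacts) is hypothetical; `spearmanr` from SciPy supplies the Spearman rank order correlation coefficient:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
trials = 5000

# Baseline activity durations (fluctuations only, no events), 3 activities
base = rng.uniform(9, 11, size=(trials, 3))
# One event that can hit activities 0 and 1 with different probabilities
event_hits = rng.random((trials, 3)) < [0.4, 0.2, 0.0]
with_event = base + 2.0 * event_hits          # the event adds 2 days when it occurs
total = with_event.sum(axis=1)                # total project duration per trial

# k_i: Spearman correlation between total duration and duration of activity i
k = np.array([spearmanr(total, with_event[:, i]).correlation for i in range(3)])

# D_cum = sum_i (D'_i - D_i) * k_i, averaged over all trials
d_cum = ((with_event - base) * k).sum(axis=1).mean()
```

As expected, the activity hit most often by the event ends up with the largest ki, so the same event contributes more to Dcum through that activity.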

Critical events or critical event chains can be visualized using a sensitivity chart, as shown in Figure 6. This chart represents the events affecting cost in the schedule shown in Figure 2. Event 1 occurs in Task 1 (probability 47%) and Task 3 (probability 41%). Event 3 occurs in Task 3 (probability 50%) and Event 2 occurs in Task 2 (probability 10%). All events are independent. The impact of all these events is "restart task". All activities have the same variable cost of $6,667; therefore, the total project cost without risks and uncertainties equals $20,000. The total project cost with risks, as a result of the analysis, equals $30,120. The cost of Event 1 is $5,300, of Event 2 $3,440, and of Event 3 $1,380. Because this schedule model does not include fluctuations for activity cost, the sum of the event costs equals the difference between the original cost and the cost with risks and uncertainties ($10,120).

Critical events and event chains can be used to perform a reality check. If the probability and outcome of events are properly defined, the most important risks, based on subjective expert judgment, should emerge as critical risks in the quantitative analysis.

Principle 6: Project Performance Measurement with Events and Event Chains

Monitoring the progress of activities ensures that updated information is used to perform the analysis. While this is true for all types of analysis, it is a critical principle of Event chain methodology. During the course of the project, actual performance data can be used to recalculate the probability of occurrence and the moment of events. The analysis can then be repeated to generate a new project schedule with updated costs and durations.

But what should one do if an activity is partially completed and certain events are assigned to it? If an event has already occurred, will it occur again? Or, vice versa, if nothing has occurred yet, will it happen?

There are three distinct approaches to this problem:

1. The probabilities of a random event in a partially completed activity stay the same regardless of the outcome of previous events. This mostly applies to external events, which cannot be affected by project stakeholders. Suppose it was originally determined that a "bad weather" event could occur 10 times during the course of a one-year construction project. After half a year, bad weather has occurred 8 times. For the remaining half year, the event could still occur 5 times. This approach is related to the psychological effect called the "gambler's fallacy": the belief that a successful outcome is due after a run of bad luck [42].

2. The probabilities of events in a partially completed activity depend on the moment of the event. If the moment of risk is earlier than the moment when the actual measurement is performed, the event will not affect the activity. For example, the activity "software user interface development" takes 10 days. The event "change of requirements" can occur at any time during the course of the activity and can cause a delay (a uniform distribution of the moment of event). 50% of the work is completed within 5 days. If the probabilistic moment of the event happens to fall between the start of the activity and 5 days, the event is ignored (it does not cause any delay). In this case, the probability that the event will occur is reduced, and eventually becomes zero as the activity approaches completion.

3. The probabilities of events need to be defined by the subjective judgment of project managers or other experts at any stage of an activity. For example, the event "change of requirements" has occurred. It may occur again depending on many factors, such as how well the requirements are defined and interpreted and the particular business situation. To implement this approach, excited-state activities should be explicitly subscribed, or not subscribed, to certain events. For example, a new excited state after the event "change of requirements" may not be subscribed to this event again, and as a result the event will not affect the activity a second time.

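Under approach 2, with a uniform distribution for the moment of event, the updated probability reduces to a one-line formula; the helper name below is hypothetical:

```python
def remaining_probability(p0: float, fraction_complete: float) -> float:
    """Probability that an event with a uniformly distributed moment still
    occurs, given the activity is `fraction_complete` done and the event
    has not occurred yet: only the remaining window can contain it."""
    return p0 * (1.0 - fraction_complete)

# "change of requirements": 30% chance over a 10-day activity;
# after 5 days (50% complete), only half the event window remains
print(remaining_probability(0.30, 0.5))   # 0.15
print(remaining_probability(0.30, 1.0))   # 0.0 at completion
```

The probability shrinks linearly with percent complete and reaches zero exactly when the activity finishes, matching the behaviour described above.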
The chance that the project will meet a specific deadline can be monitored and presented on the chart shown in Figure 7. This chance changes constantly as a result of various events and event chains. In most cases, it decreases over time. However, risk response efforts, such as risk mitigations, can increase the chance of successfully meeting a project deadline. The chance of the project meeting the deadline is continually updated as a result of the quantitative analysis, based on the original assessment of the project uncertainties and the actual project performance data.

In the critical chain method, the constant change in the size of the project buffer is monitored to ensure that the project is on track. In Event chain methodology, the chance of the project meeting a certain deadline during different phases of the project serves a similar purpose: it is an important indicator of project health. Monitoring the chance of the project meeting a certain deadline does not require a project buffer. It is always possible to attribute particular changes in the chance of meeting a deadline to actual and forecasted events and event chains, and as a result, mitigate their negative impact.

Figure 6: Critical Events and Event Chains. (Sensitivity chart showing, for each event, the tasks in which it occurs with their probabilities, the correlation coefficient K with total project cost, and the resulting event cost.)

6 Conclusions

Event chain methodology is designed to mitigate the negative impact of cognitive and motivational biases related to the estimation of project uncertainties:

Task duration, start and finish times, cost, and other project input parameters are influenced by motivational factors, such as total project duration, to a much greater extent than events and event chains are, because events cannot be easily translated into durations, finish times, and so on. Therefore, Event chain methodology can help to overcome the negative effects of selective perception, in particular the confirmation bias and, to a certain extent, the planning fallacy and overconfidence.

Event chain methodology relies on duration estimates based on best-case scenarios and does not necessarily require low, base, and high estimates or statistical distributions; it therefore mitigates the negative effect of anchoring.

The probability of events can be easily calculated from historical data, which can mitigate the effect of the availability heuristic. Compound events can easily be broken into smaller events. The probability of events can be calculated using the relative frequency approach, where probability equals the number of times an event has occurred divided by the total number of observations. In classic Monte Carlo simulations, the statistical distributions of input parameters can also be obtained from historical data; however, the procedure is more complicated and is often not used in practice.

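The relative frequency approach amounts to a single division; a toy example with hypothetical historical data:

```python
# Relative frequency sketch: estimating an event's probability from
# historical project records (all figures are hypothetical).
past_projects = 25          # past projects with comparable activities
requirement_changes = 8     # projects where "change of requirements" occurred

p_event = requirement_changes / past_projects
print(p_event)              # 0.32
```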
Event chain methodology makes it possible to take into account factors that are not analyzed by other schedule network analysis techniques: the moment of an event, chains of events, delays in events, the execution of mitigation plans, and others. Complex relationships between different events can be visualized using event chain diagrams and state tables, which simplifies event and event chain identification.

Finally, Event chain methodology includes techniques designed to incorporate new information about actual project performance into the original project schedule, and therefore to constantly improve the accuracy of the schedule during the course of a project. Event chain methodology offers practical solutions for resource leveling, managing mitigation plans, and handling correlations between events and other activities.

Event chain methodology is a practical approach to scheduling software projects that contain multiple uncertainties. A process that utilizes this methodology can easily be applied to different projects, regardless of size and complexity. Scheduling using Event chain methodology is an easy-to-use process, which can be performed using off-the-shelf software tools. Although Event chain methodology is a relatively new approach, it is actively used in many organizations, including large corporations and government agencies.

Figure 7: Monitoring Chance of Project Completion on a Certain Date.

References

[1] B. Flyvbjerg, M.K.S. Holm, S.L. Buhl. Underestimating costs in public works projects: Error or lie? Journal of the American Planning Association, vol. 68, no. 3, pp. 279-295, 2002.

[2] B. Flyvbjerg, M.K.S. Holm, S.L. Buhl. What causes cost overrun in transport infrastructure projects? Transport Reviews, vol. 24, no. 1, pp. 3-18, 2004.

[3] B. Flyvbjerg, M.K.S. Holm. How inaccurate are demand forecasts in public works projects? Journal of the American Planning Association, vol. 71, no. 2, pp. 131-146, 2005.

[4] L. Virine, M. Trumper. Project Decisions: The Art and Science. Management Concepts, Vienna, VA, 2007.

[5] R. Buehler, D. Griffin, M. Ross. Exploring the "planning fallacy": Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67, 366-381, 1994.

[6] D. Lovallo, D. Kahneman. Delusions of success: How optimism undermines executives' decisions. Harvard Business Review, July issue, pp. 56-63, 2003.

[7] A. Tversky, D. Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131, 1974.

[8] G.E. McCray, R.L. Purvis, C.G. McCray. Project management under uncertainty: The impact of heuristics and biases. Project Management Journal, vol. 33, no. 1, pp. 49-57, 2002.

[9] A. Tversky, D. Kahneman. Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232, 1973.

[10] J.S. Carroll. The effect of imagining an event on expectations for the event: An interpretation in terms of the availability heuristic. Journal of Experimental Social Psychology, 14, 88-96, 1978.

[11] D. Cervone, P.K. Peake. Anchoring, efficacy, and action: The influence of judgmental heuristics on self-efficacy judgments. Journal of Personality and Social Psychology, 50, 492-501, 1986.

[12] S. Plous. The Psychology of Judgment and Decision Making. McGraw-Hill, 1993.

[13] P.C. Wason. On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129-140, 1960.

[14] J.St.B.T. Evans, J.L. Barston, P. Pollard. On the conflict between logic and belief in syllogistic reasoning. Memory and Cognition, 11, 295-306, 1983.

[15] R.K. Wysocki, R. McGary. Effective Project Management: Traditional, Adaptive, Extreme, 3rd Edition. John Wiley & Sons Canada, Ltd., 2003.

[16] Project Management Institute. A Guide to the Project Management Body of Knowledge (PMBOK® Guide), Fourth Edition. Newtown Square, PA: Project Management Institute, 2008.

[17] R.T. Clemen. Making Hard Decisions, 2nd Edition. Brooks/Cole Publishing Company, Pacific Grove, CA, 1996.

[18] G.W. Hill. Group versus individual performance: Are N + 1 heads better than one? Psychological Bulletin, 91, 517-539, 1982.

[19] D. Hillson. Use a risk breakdown structure (RBS) to understand your risks. In Proceedings of the Project Management Institute Annual Seminars & Symposium, October 3-10, 2002, San Antonio, TX, 2002.

[20] T. Kendrick. Identifying and Managing Project Risk: Essential Tools for Failure-Proofing Your Project, 2nd Revised Edition. AMACOM, a division of American Management Association, New York, 2009.

[21] W. Scheinin, R. Hefner. A comprehensive survey of risk sources and categories. In Proceedings of Space Systems Engineering and Risk Management Symposiums, Los Angeles, CA, pp. 337-350, 2005.

[22] D.T. Hulett. Schedule risk analysis simplified. PM Network, July 1996, pp. 23-30, 1996.

[23] D.T. Hulett. Project schedule risk analysis: Monte Carlo simulation or PERT? PM Network, February 2000, p. 43, 2000.

[24] J. Goodpasture. Quantitative Methods in Project Management. J. Ross Publishing, Boca Raton, FL, 2004.

[25] J. Schuyler. Risk and Decision Analysis in Projects, 2nd Edition. Project Management Institute, Newtown Square, PA, 2001.

[26] T. Williams. Why Monte Carlo simulations of project networks can mislead. Project Management Journal, September 2004, pp. 53-61, 2004.

[27] G.A. Quattrone, C.P. Lawrence, D.L. Warren, K. Souza-Silva, S.E. Finkel, D.E. Andrus. Explorations in anchoring: The effects of prior range, anchor extremity, and suggestive hints. Unpublished manuscript, Stanford University, Stanford, CA, 1984.

[28] D.T. Hulett. Practical Schedule Risk Analysis. Gower Publishing, 2009.

[29] D.T. Hulett. Integrated Cost-Schedule Risk Analysis. Gower Publishing, 2011.

[30] E. Goldratt. Critical Chain. North River Press, Great Barrington, MA, 1997.

[31] M. Srinivasan, W. Best, S. Chandrasekaran. Warner Robins Air Logistics Center streamlines aircraft repair and overhaul. Interfaces, 37(1), pp. 7-21, 2007.

[32] P. Wilson, S. Holt. Lean and Six Sigma - A Continuous Improvement Framework: Applying Lean, Six Sigma, and the Theory of Constraints to Improve Project Management Performance. In Proceedings of the 2007 PMI College of Scheduling Conference, April 15-18, Vancouver, BC, 2007.

[33] D.T. Hulett, D. Hillson. Branching out: Decision trees offer a realistic approach to risk analysis. PM Network, May 2006, pp. 36-40, 2006.

[34] J. Arlow, I. Neustadt. Enterprise Patterns and MDA: Building Better Software with Archetype Patterns and UML. Addison-Wesley Professional, 2003.

[35] G. Booch, J. Rumbaugh, I. Jacobson. The Unified Modeling Language User Guide, 2nd Edition. Addison-Wesley Professional, 2005.

[36] B. Flyvbjerg. From Nobel Prize to project management: Getting risks right. Project Management Journal, August 2006, pp. 5-15, 2006.

[37] R. Shankar. Principles of Quantum Mechanics, 2nd Edition. Springer, New York, 1994.

[38] E.B. Manoukian. Quantum Theory: A Wide Spectrum. Springer, New York, 2006.

[39] M. Fowler. Patterns of Enterprise Application Architecture. Addison-Wesley Professional, 2002.

[40] R.C. Martin. Agile Software Development: Principles, Patterns, and Practices. Prentice Hall, 2002.

[41] Project Management Institute. A Guide to the Project Management Body of Knowledge (PMBOK® Guide). Newtown Square, PA: Project Management Institute, 2004.

[42] A. Tversky, D. Kahneman. Belief in the law of small numbers. Psychological Bulletin, 76, 105-110, 1971.


Revisiting Managing and Modelling of Project Risk Dynamics — A System Dynamics-based Framework¹

Alexandre Rodrigues

The fast changing environment and the complexity of projects have increased the exposure to risk. The PMBOK (Project Management Body of Knowledge) standard from the Project Management Institute (PMI) proposes a structured risk management process, integrated within the overall project management framework. However, unresolved difficulties call for further developments in the field. In projects, risks occur within a complex web of numerous interconnected causes and effects, which generate closed chains of feedback. Project risk dynamics are difficult to understand and control, and hence not all types of tools and techniques are appropriate to address their systemic nature. As a proven approach to project management, System Dynamics (SD) provides this alternative view. A methodology to integrate the use of SD within the established project management process has been proposed by the author. In this paper, it is further extended to integrate the use of SD modelling within the PMBOK risk management process, providing a useful framework for the effective management of project risk dynamics.

Keywords: PMBOK Framework, Project Management Body of Knowledge, Project Risk Dynamics, Risk Management Processes, SYDPIM Methodology, System Dynamics.

Author

Alexandre Rodrigues is an Executive Partner of PMO Projects Group, an international consulting firm based in Lisbon specialized in project management, with operations and offices in the UK, Africa and South America. He is also a senior consultant with the Cutter Consortium. Dr. Rodrigues is PM Ambassador™ and International Correspondent for the international PMForum. He was the founding President of the PMI Portugal Chapter and served as PMI Chapter Mentor for four years. He was an active member of the PMI teams that developed the 3rd edition of the PMBOK® Guide and the OPM3® model for organizational maturity assessment. He was a core team member for the PMI Practice Standard for Earned Value Management, which has just been released. He holds a PhD from the University of Strathclyde (UK), specializing in the application of System Dynamics to Project Management. <[email protected]>

1 Risk Management in Projects

1.1 Overview

In response to the growing uncertainty in modern projects, over the last decade the project management community has developed project-specific risk management frameworks. The latest edition of PMI's body of knowledge (the PMBOK® Guide [1]) presents perhaps the most complete and commonly accepted framework, which has been further detailed in the Practice Standard for Project Risk Management [2]. Further developments, such as the establishment of project risk management maturity models to help organizations evaluate and improve their ability to control risks in projects, complement this framework. However, most organizations still fall short of implementing these structured frameworks effectively. In addition, there are certain types of risks that are not handled properly by the traditional tools and techniques proposed.

1.2 Current Framework for Project Risk Management

The fourth and latest edition of PMI's Project Management Body of Knowledge [1] considers six risk management processes: plan risk management, identify risks, perform qualitative risk analysis, perform quantitative risk analysis, plan risk responses, and monitor and control risks. While this framework provides a comprehensive approach to problem solving, its effectiveness relies on the ability of these processes to cope with the multidimensional uncertainty of risks: identification, likelihood, impact, and occurrence. The majority of the traditional tools and techniques used in these processes were not designed to address the increasingly systemic nature of risk uncertainty in modern projects. This problem and limitation calls for further developments in the field.

1.3 Project Risk Dynamics

Risks are dynamic events. Overruns, slippage and other problems can rarely be traced back to the occurrence of a single discrete event in time. In projects, risks take place within a complex web of numerous interconnected causes and effects which generate closed chains of causal feedback. Risk dynamics are generated by the various feedback loops that take place within the project system.

¹ This paper derives from an article entitled "Managing and Modelling Project Risk Dynamics - A System Dynamics-based Framework", which was presented by the author at the Fourth European Project Management Conference, PMI Europe 2001 [3]. As the use of computer simulation based on System Dynamics to support Project Risk Management in a systematic manner is still in its early phases, most likely due to the high level of organizational maturity and expertise required by System Dynamics modelling, the author decided to revisit the ideas contained in the aforementioned paper.

The feedback perspective is particularly relevant for understanding, explaining, and acting upon the behaviour of complex social systems. Its added value for risk management is that it sheds light on the systemic nature of risks. No single factor can be blamed for generating a risk, nor can management find effective solutions by acting only upon individual factors. To understand why risks emerge and to devise effective solutions, management needs to look at the whole. As an example of this analysis, Figure 1 shows the feedback structure of a project, focused on the dynamics that can generate risks related to requirements changes imposed by the client. This understanding of risks is crucial for identifying, assessing, monitoring and controlling them better (see [4] for more details).

Feedback loops identified as "R+" are reinforcing effects (commonly referred to as "snowball effects"), and the ones identified as "B-" are balancing effects (e.g. control decisions). The arrows indicate cause-effect relationships, and are marked with an "o" when the cause and the direct effect change in opposite directions. The arrows in red identify the cause-effect relationships likely to generate risks. This type of diagram is referred to as an "Influence Diagram" (ID).

If we ask the question "What caused the quality problems and delays?", the right answer is not "staff fatigue", "poor QA implementation" or "schedule pressure". It is the whole feedback structure that, over time and under certain conditions, generated the quality problems and delays. In other words, the feedback structure causes problems to unfold over time. To manage systemic risks effectively, it is necessary to act upon this structure. This type of action consists of eliminating problematic feedback loops and creating beneficial ones.

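As a toy numeric illustration of how a feedback structure, rather than any single factor, generates problems over time, the following sketch simulates one reinforcing loop (schedule pressure raises the error fraction, and errors return as rework) together with a balancing loop (work completion). All coefficients are hypothetical; this is not the model behind Figure 1:

```python
def simulate(weeks: int, scope: float = 100.0, capacity: float = 10.0):
    """Simulate a work backlog with a pressure -> errors -> rework loop."""
    backlog, done, history = scope, 0.0, []
    for week in range(weeks):
        # R+ loop: the tighter the remaining schedule, the higher the pressure
        pressure = min(backlog / (capacity * max(weeks - week, 1)), 2.0)
        error_fraction = 0.05 * (1 + pressure)   # pressure degrades quality
        output = min(capacity, backlog)
        rework = output * error_fraction         # errors return as rework
        backlog += rework - output               # B- loop: completion drains backlog
        done += output - rework
        history.append(backlog)
    return done, history

done, history = simulate(weeks=12)
```

Raising the pressure coefficient or shortening the deadline makes the reinforcing loop dominate, and the backlog stops converging: the "cause" of the slippage is the loop, not any single week's decision.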
However, project risk dynamics are difficult to understand and control. The major difficulties have to do with the subjective, dynamic and multi-factor nature of systemic risks. Feedback effects include time-delays, non-linear effects and subjective factors. Not all types of tools and techniques are appropriate to address and model problems of this nature. The more classical modelling approaches tend to deliver static views based on top-down decomposition and bottom-up forecasts, while focusing on the readily quantifiable factors. Managing project risk dynamics requires a different approach, based on a systemic and holistic perspective, capable of capturing feedback and of quantifying the subjective factors where relevant.

Figure 1: Example of a Project Feedback Structure focused on Scope Changes.

2 A Proposed Framework for Managing and Modelling Project Risk Dynamics

2.1 Overview

Managing systemic risks requires an approach supported by specialized tools and techniques. System Dynamics (SD) is a simulation modelling approach aimed at analysing the systemic behaviour of complex social systems, such as projects. The framework proposed here is based on the integrated use of SD within the existing project risk management framework, supporting the six risk management processes proposed by the PMBOK [1]. This is an extension of the more general methodology developed by the author, called SYDPIM [5], which integrates the use of SD within the generic project management framework. The use of SD is proposed as a complementary tool and technique to address systemic risks.

2.2 System Dynamics

SD was developed in the late 1950s [6] and has enjoyed a sharp increase in popularity over the last ten years. Its application to project management has also been growing impressively, with numerous successful applications to real-life projects [7]. An overview of the SD methodology can be found in [5] and [8].

The SD modelling process starts with the development of qualitative influence diagrams and then moves into the development of quantitative simulation models. These models allow for a flexible representation of complex scenarios, such as mixing the occurrence of various risks with the implementation of mitigating actions. The model simulation generates patterns of behaviour over time. Figure 2 provides an example of the output produced by an SD project model when simulating the design phase of a software project under two different scenarios.

SD models work as experimental management laboratories, wherein decisions can be devised and tested in a safe environment. Their feedback perspective and "what-if" capability provide a powerful means through which systemic problems can be identified, understood and managed.

3 The SYDPIM FrameworkThe SYDPIM methodology integrates the use of SD

modelling within the established project management proc-ess. A detailed description can be found in [5] (see also [9]for a summary description). SYDPIM comprises two mainmethods: the model development method and the projectmanagement method. The first is aimed at supporting thedevelopment of valid SD models for a specific project. Thelatter supports the use of this model embedded within the

Figure 2: Example of Behaviour Patterns produced by an SD Project Model.

[Figure 2 comprises two time-series panels plotted over Time 0.00–120.00: (a) simulation "as planned" and (b) simulation "with scope changes". Each panel traces five variables: 1: Actual Schedule, 2: Estimated Cost, 3: Designs Completed, 4: Cum Changes, 5: Errors to rework.]


CEPIS UPGRADE Vol. XII, No. 5, December 2011 37© Novática

Risk Management

Farewell Edition

Figure 3: Overview of the SYDPIM Process Logic.

traditional project management framework, and formally integrated with the PERT/CPM models. An overview of the process logic is provided in Figure 3. The arrows in black identify the flows within the traditional project control process. SYDPIM places the use of an SD project model at the core of this process, enhancing both planning and monitoring and thereby the overall project control.

The use of the SD model adds new steps to the basic control cycle (the numbers indicate the sequence of the steps). In planning, the SD model is used to proactively test and improve the current project plan. This includes forecasting and diagnosing the likely outcome of the current plan, uncovering assumptions (e.g. expected productivity), testing the plan's sensitivity to risks, and testing the effectiveness of mitigating actions. In monitoring, the SD model is used to explain the current project outcome and status, to enhance progress visibility by uncovering important intangible information (e.g. undiscovered rework), and to carry out retrospective "what-if" analysis for process improvement while the project is underway. Overall, the SD model works as a test laboratory for assessing future plans and diagnosing the project's past. The model also works as an important repository of project history and metrics.

4 Using SYDPIM to manage Risk Dynamics within the PMBOK Framework

According to the SYDPIM framework, the SD model can be used in various ways to support the six risk management processes identified in the PMBOK. Given the limited size of this paper, this is now briefly described separately for each risk process. A more detailed explanation is forthcoming in the literature.

Plan Risk Management

The implementation of SYDPIM within risk management planning allows for the definition of the appropriate level of structuring for the risk management activity, and for the planning of the use of SD models within this activity.

Adjusting the level of structuring for the risk management activity is crucial for the practical implementation of the risk management process. An SD project model can be used to analyse this problem. Various scenarios reflecting different levels of structuring can be simulated and their full impacts quantified. Typically, a "U-curve" will result from the analysis of these scenarios, ranging from pure ad hoc to over-structuring. An example of the use of an SD model for this purpose can be found in [3].

Identify Risks

An SD project model can support risk identification in two ways: at the qualitative level, through the analysis of influence diagrams, risks that result from feedback forces can be identified; at the quantitative level, intangible project status information (e.g. undiscovered rework) and assumptions in the project plan (e.g. required productivity) can be uncovered.



Risks can be identified in an influence diagram as events that result from: (i) balancing loops that limit a desired growth or decay (e.g. the lack of available resources leads to a balancing loop that limits the potential growth of work accomplishment); (ii) reinforcing loops that lead to undesired growth or decay (e.g. schedule pressure leads to QA cuts, which in turn lead to more rework and delays, thereby reinforcing schedule pressure; see "R+" loop L3 in Figure 1); (iii) external factors that exacerbate either of these two types of feedback loop (e.g. training delays exacerbate the following reinforcing loop: "the more you hire in the later stages, the worse the slippage due to training overheads"). This type of analysis also allows risks to be managed as opportunities: feedback loops can be put to work in favour of the project.
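
The loop classification described above can even be automated over a machine-readable influence diagram: the polarity of a loop is the product of the polarities of its links (positive product: reinforcing; negative: balancing). A minimal sketch, with variable names and link signs assumed for illustration:

```python
# Hypothetical influence diagram as a map of directed links to polarities.
links = {
    ("schedule pressure", "QA effort"): -1,  # pressure leads to QA cuts
    ("QA effort", "rework"): -1,             # less QA means more rework
    ("rework", "delay"): +1,
    ("delay", "schedule pressure"): +1,
}

def loop_polarity(loop):
    """Multiply the link signs around a closed loop of variable names."""
    sign = 1
    for a, b in zip(loop, loop[1:] + loop[:1]):
        sign *= links[(a, b)]
    return "reinforcing" if sign > 0 else "balancing"

l3 = ["schedule pressure", "QA effort", "rework", "delay"]
print(loop_polarity(l3))  # prints "reinforcing"
```

The two negative links cancel, so the pressure/QA-cut loop is reinforcing, matching the undesired-growth pattern described in case (ii) above.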

SD simulation models allow the project manager to check whether and how certain feedback loops previously identified as "risk generators" will affect the project. In this way, irrelevant risks can be eliminated, preventing unnecessary mitigating effort. Secondly, the calibration of the SD model uncovers important quantitative information about the project status and past, which typically is not measured because of its intangible and subjective nature. In this way, it forces planning assumptions to be made explicit, thereby identifying potential risks.

Perform Qualitative Risk Analysis

Influence diagrams can help to assess risk probability and impacts through feedback loop analysis. Given a specific risk, it is possible to identify in the diagram which feedback loops favour or counter the occurrence of the risk. Each feedback loop can be seen as a dynamic force that pushes the project outcome towards (or away from) the risk occurrence. The likelihood and the impact of each risk can be qualitatively inferred from this feedback loop analysis.

An SD simulation model can be used to identify the specific scenarios in which a risk would occur (i.e. likelihood). Regarding impact, with simple models and preliminary calibrations, quantitative estimates can be taken as qualitative indications of the order of magnitude of the risk impacts.

Perform Quantitative Risk Analysis

In quantifying risks, an SD project model provides two additional benefits over traditional models: first, it delivers a wide range of estimates; secondly, these estimates reflect the full impacts of risk occurrence, including both direct and indirect effects.

Quantifying the impact of a risk consists in calibrating the model for a scenario where the risk occurs (e.g. scope changes) and then simulating the project. The impact of the risk occurrence on virtually any project variable can be analysed by comparing the produced behaviour pattern with the one obtained when a risk-absent scenario is simulated. For example, Figure 2(b) shows the behaviour patterns produced by an SD project model when scope changes are introduced by the client over time (curve 4). These patterns can be compared with those of Figure 2(a), which shows the scenario where no scope changes are introduced. This type of analysis allows the project manager to identify a risk's impact on various aspects of the project, and over time, not just on the final value. In addition, the feedback nature of the SD model ensures that both direct and indirect impacts of risks are quantified: ultimately, when a risk occurs it will affect everything in the project, and the SD model captures the full impacts.
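
As a hedged illustration of this calibrate-and-compare procedure (using a deliberately trivial model with made-up numbers, not the model behind Figure 2):

```python
# Compare the same toy project model with and without client scope changes.
# Structure and all parameter values are assumptions for illustration only.

def run(change_rate=0.0, staff=10.0, dt=0.25, horizon=300.0):
    """Return (finish_time, final_scope) for a trivially simple project."""
    remaining, scope, t = 500.0, 500.0, 0.0
    while remaining > 0 and t < horizon:
        # Client injects extra tasks mid-design (weeks 30-60) in the risk case.
        inject = change_rate if 30.0 <= t < 60.0 else 0.0
        remaining += (inject - staff) * dt
        scope += inject * dt
        t += dt
    return t, scope

base_t, base_scope = run(0.0)
risk_t, risk_scope = run(5.0)  # risk scenario: 5 extra tasks/week for 30 weeks
print(f"baseline finishes at t={base_t:.1f}; with changes t={risk_t:.1f} "
      f"(scope {base_scope:.0f} -> {risk_scope:.0f})")
```

Comparing the two runs isolates the schedule impact of the scope-change risk over time, not just in the final values, which is the comparison the article describes between Figures 2(a) and 2(b).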

An SD project model generally includes variables related to the various project objectives (cost, time, quality and scope). One can therefore assess the risk impacts on all dimensions of the project objectives. The SD model also allows scenarios combining several risks to be simulated, whereby their cross-impacts are also captured. Sensitivity analysis can be carried out to analyse the project's sensitivity both to particular risks and to their intensity (e.g. what is the critical productivity level below which problems will escalate?).
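
A sensitivity sweep of this kind can be sketched as follows; the model, the pressure-to-error link and all thresholds are invented for illustration only:

```python
# Sweep productivity and observe when a rework feedback loop makes the
# schedule escalate. Toy model; every number here is an assumption.

def finish_time(productivity, scope=400.0, staff=10.0, dt=0.25, horizon=400.0):
    """Return the finish time, or None if the project never converges."""
    remaining, t = scope, 0.0
    while remaining > 0 and t < horizon:
        pressure = max(0.0, (t - 40.0) / 40.0)          # builds after week 40
        error_frac = min(1.0, 0.1 + 0.3 * pressure)     # pressure erodes quality
        rate = staff * productivity
        remaining -= rate * (1.0 - error_frac) * dt     # only defect-free work leaves
        t += dt
    return t if remaining <= 0 else None

for p in [1.2, 1.0, 0.8, 0.6, 0.4]:
    t = finish_time(p)
    print(f"productivity {p:.1f}: " +
          (f"finishes at t={t:.1f}" if t is not None else "does not finish"))
```

In this toy run the schedule degrades gracefully at first, then below some productivity level fails to converge at all: exactly the kind of critical threshold the sensitivity analysis is looking for.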

Plan Risk Responses

Influence diagrams and SD simulation models are very powerful tools for supporting the development of effective risk responses. They provide three main distinctive benefits: (i) they support the definition and testing of complex risk-response scenarios; (ii) they provide the feedback perspective for the identification of response opportunities; and (iii) they are very effective for diagnosing and better understanding the multi-factor causes of risks; these causes can be traced back through the chains of cause and effect, with counter-intuitive solutions often being identified.

Influence diagrams provide the complementary feedback perspective. Therefore, the power to influence, change and improve results rests on acting on the project's feedback structure. Risk responses can be identified as actions that eliminate vicious loops, or attenuate or reverse their influence on the project behaviour. By looking at the feedback loops and external factors identified as risks, the project manager can devise effective responses.



An SD simulation model provides a powerful test-bed where, at low cost and in a safe environment, various risk responses can be developed, their effectiveness tested against the full impacts, and improvements made prior to implementation.

Risk Monitoring and Control

An SD project model can be used as an effective tool for risk monitoring and control. The model can be used to identify early signs of risk emergence which would otherwise remain unperceived until problems had worsened. The implementation of risk responses can also be monitored and their effectiveness evaluated.

Risk occurrence can be monitored by analysing the project behavioural aspects of concern (i.e. the risk "symptoms"). An SD model has the ability to produce many of these patterns, which in the real world are not quantified owing to their intangible and subjective nature (the amount of undetected defects flowing throughout the development life-cycle is a typical example). The SD model provides a wide range of additional risk triggers, thereby enhancing the effectiveness of monitoring risk occurrence.
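
One hypothetical way to operationalise such model-derived triggers is to compare observed metrics against the simulated plan trajectory and flag large relative deviations. The metric names, times and tolerance below are assumptions for the sketch:

```python
# Flag metrics whose observed values deviate from the model's planned
# trajectory by more than a relative tolerance (illustrative sketch).

def check_triggers(planned, observed, tolerance=0.15):
    """Return (time, metric) pairs where observation deviates > tolerance."""
    alerts = []
    for t, plan_row in planned.items():
        obs_row = observed.get(t)
        if obs_row is None:
            continue
        for metric, plan_val in plan_row.items():
            obs_val = obs_row.get(metric)
            if obs_val is not None and plan_val:
                if abs(obs_val - plan_val) / plan_val > tolerance:
                    alerts.append((t, metric))
    return alerts

planned = {30: {"designs_completed": 150.0, "errors_to_rework": 40.0}}
observed = {30: {"designs_completed": 148.0, "errors_to_rework": 55.0}}
print(check_triggers(planned, observed))  # -> [(30, 'errors_to_rework')]
```

Here the design count tracks the plan, but the rework backlog has drifted well beyond tolerance at week 30: an early symptom of the kind of risk emergence the model-based monitoring is meant to surface.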

The implementation of a risk response can be characterized by changes in project behaviour. These changes can be monitored in the model to check whether the responses are being implemented as planned. The effectiveness of the risk response (i.e. the expected impacts) can be monitored in the same way. When deviations occur, the SD model can be used to diagnose why the results are not as expected.

5 Placing System Dynamics in the PMBOK Framework

System Dynamics modelling is a very complete technique and tool that covers a wide range of project management needs by addressing the systemic issues that influence and often dominate the project outcome. Its feedback and endogenous perspective on problems is very powerful, widening the range for devising effective management solutions. It is an appropriate approach for managing and modelling project risk dynamics, for which most of the traditional modelling techniques are inappropriate. SD therefore has a strong potential to provide a number of distinctive benefits to the overall project management process. One of the necessary conditions is that its application is integrated with the traditional models within that process. The SYDPIM methodology was developed for that purpose, integrating the use of SD project models with the traditional PERT/CPM models, based on the WBS and OBS structures [5]. SYDPIM provides SD models with specific roles within the project control process. One of these roles is to support the risk management activity.

As a proven tool and technique, already applied with success to various real projects [10], SD needs to be properly placed in the PMBOK. This paper has briefly discussed its potential roles within the six project risk management processes presented in the latest edition of the PMBOK [1]. It is concluded that SD has the potential to add value to these processes, in particular to risk identification, risk quantification and response planning.

Influence diagrams are already proposed by the PMBOK for risk identification (process 11.2). System Dynamics modelling is further proposed in the specialized Practice Standard for Project Risk Management [2], also for risk identification. This is an important acknowledgement that systemic problems in projects may require specialized techniques, different from and complementary to the more traditional ones. However, from practical experience in real projects and extensive research carried out in this field, it is the author's opinion that the range of application of SD within the project management process is much wider. There are many other processes in the PMBOK framework where SD can be employed as a useful tool and technique. These benefits can be maximized through the SYDPIM methodology.

It is also the author's opinion that by implementing the SYDPIM-based risk framework proposed here, the project manager can take better advantage of the benefits offered by System Dynamics modelling, while enhancing the performance of the existing risk management process.

The use of System Dynamics in the field of Project Management, and in particular for Project Risk Management, has been receiving growing attention since the author first proposed an integrated process-based approach [3], as reported in the literature.

References

[1] Project Management Institute (PMI). A Guide to the Project Management Body of Knowledge. Project Management Institute, North Carolina, 2008.

[2] Project Management Institute (PMI). Practice Standard for Project Risk Management. Project Management Institute, North Carolina, 2009.

[3] A. Rodrigues. "Managing and Modelling Project Risk Dynamics - A System Dynamics-based Framework". Proceedings of the 4th European PMI Conference, London, United Kingdom, 2001.

[4] A.G. Rodrigues. "Finding a common language for global software projects." Cutter IT Journal, 1999.



[5] A. Rodrigues. "The Application of System Dynamics to Project Management: An Integrated Methodology (SYDPIM)". PhD Thesis, Department of Management Sciences, University of Strathclyde, 2000.

[6] J. Forrester. Industrial Dynamics. MIT Press, Cambridge, US, 1961.

[7] A. Rodrigues. "The Role of System Dynamics in Project Management: A Comparative Analysis with Traditional Models." 1994 International System Dynamics Conference, Stirling, Scotland, pp. 214-225, 1994.

[8] M.J. Radzicki and R. Taylor. "Origin of System Dynamics: Jay W. Forrester and the History of System Dynamics". In: U.S. Department of Energy's Introduction to System Dynamics. Retrieved 23 October 2008.

[9] A. Rodrigues. "SYDPIM – A System Dynamics-based Project-management Integrated Methodology." 1997 International System Dynamics Conference: "Systems Approach to Learning and Education into the 21st Century", Istanbul, Turkey, pp. 439-442, 1997.

[10] A.G. Rodrigues, J. Bowers. "The role of system dynamics in project management." International Journal of Project Management, 14(4), 235-247, 1996.

Additional Related Literature

C. Pavlovski, B. Moore, B. Johnson, R. Cattanach, K. Hambling, S. Maclean. "Project Risk Forecasting Method." International Conference on Software Development (SWDC-REK), University of Iceland, Reykjavik, Iceland, May 27 - June 1, 2005.
D.A. Hillson. "Towards a Risk Maturity Model." International Journal of Project & Business Risk Management, 1(1), 35-45, 1997.
J. Morecroft. Strategic Modelling and Business Dynamics: A Feedback Systems Approach. John Wiley & Sons, ISBN 0470012862, 2007.
A.G. Rodrigues, T.M. Williams. "System dynamics in project management: assessing the impacts of client behaviour on project performance." Journal of the Operational Research Society, 49(1), 2-15, 1998.
Seng Chia. "Risk Assessment Framework for Project Management." 2006 IEEE International Engineering Management Conference. Print ISBN: 1-4244-0285-9.
P. Senge. The Fifth Discipline. Currency, ISBN 0-385-26095-4, 1990.
J.D. Sterman. Business Dynamics: Systems Thinking and Modeling for a Complex World. McGraw Hill, ISBN 0-07-231135-5, 2000.
J.R. Wirthlin. "Identifying Enterprise Leverage Points in Defense Acquisition Program Performance." Doctoral Thesis, MIT, Cambridge, USA, 2009.


Keywords: Anticipation, Danger, Resilience, Risk, Risk Management, Safety.

Introduction

Imagine a clash between two worlds, one that is risk-averse, traditional and conservative, the other risk-seeking, opportunistic and entrepreneurial. The former is the old world, dedicated to the precautionary principle parading under the banner 'better safe than sorry'. The latter is the new world, committed to the maxim 'no pain, no gain'. The question we are asked to address is whether the defensive posture exhibited by the old world or the forever offensive stance of the new world is likely to prevail.

Would their attitude to risk determine the outcome of this question? The answer must be a qualified yes. The notion of risk has become topical and pervasive in many contexts. Indeed, Beck [1] argues that risk has become a dominant feature of society, and that it has replaced wealth production as the means of measuring decisions.

In that Case, let's survey the Combatants

Encamped on one bank, the old world is likely to resist the temptation of genetically modified crops and hormone-induced products despite the advertised potential benefits. Risk is traditionally perceived as a negative quantity: danger, hazard or potential harm. Much of risk management is predicated on the concept of the precautionary principle, asserting that acting in anticipation of the worst form of harm should ensure that it does not materialise. Action is therefore biased towards addressing certain forms of risk that are perceived as particularly unacceptable and preventing them from occurring, even if scientific proof of the effects is not fully established. According to this principle, old-world risk regulators cannot afford to take a chance with some (normally highly political) risks.

Author

Darren Dalcher – PhD (Lond) HonFAPM, FBCS, CITP, FCMI – is a Professor of Software Project Management at Middlesex University, UK, and Visiting Professor in Computer Science at the University of Iceland. He is the founder and Director of the National Centre for Project Management. He has been named by the Association for Project Management, APM, as one of the top 10 "movers and shapers" in project management and has also been voted Project Magazine's Academic of the Year for his contribution in "integrating and weaving academic work with practice". Following industrial and consultancy experience in managing IT projects, Professor Dalcher gained his PhD in Software Engineering from King's College, University of London, UK. Professor Dalcher is active in numerous international committees, steering groups and editorial boards. He is heavily involved in organising international conferences, and has delivered many keynote addresses and tutorials. He has written over 150 papers and book chapters on project management and software engineering. He is Editor-in-Chief of Software Process Improvement and Practice, an international journal focusing on capability, maturity, growth and improvement. He is the editor of a major new book series, Advances in Project Management, published by Gower Publishing. His research interests are wide and include many aspects of project management. He works with many major industrial and commercial organisations and government bodies in the UK and beyond. Professor Dalcher is an invited Honorary Fellow of the Association for Project Management (APM), a Chartered Fellow of the British Computer Society (BCS), a Fellow of the Chartered Management Institute, and a Member of the Project Management Institute, the Academy of Management, the IEEE and the ACM. He has received an Honorary Fellowship of the APM, "a prestigious honour bestowed only on those who have made outstanding contributions to project management", at the 2011 APM Awards Evening. <[email protected]>

Towards a New Perspective: Balancing Risk, Safety and Danger

Darren Dalcher

The management of risk has gradually emerged as a normal activity that is now a constituent part of many professions. The concept of risk has become so ubiquitous that we continually search for risk-based explanations of the world around us. Decisions and projects are often viewed through the lens of risk to determine progress, value and utility. But risk can have more than one face depending on the stance that we adopt. The article looks at the implications of adopting different positions regarding risk, thereby opening a wider discussion about the links to danger and safety. In rethinking our position, we are able to appraise the different strategies that are available and reason about the need to adopt a more balanced position as an essential step towards developing a better informed perspective for managing risk and potential.



Old-world thinking supports the adoption of precautionary measures even when some cause-and-effect relationships are not fully understood. In other words, the principle links hazards or threats with (scientific) uncertainty to demand defensive measures. Following the lead offered by the legal systems of Germany, Sweden and Denmark, the precautionary principle is likely to be fully embraced in guiding European Commission policy (such as the White Paper on Food Safety published by the Commission in 2000). When followed to the extreme, this policy leads to the pursuit of a zero-risk approach, which, like zero defects, will remain elusive.

Amassed opposite is the new world, where risks convey potential, opportunity and innovation. Risk offers the potential for gains, and occasionally creative chances and opportunities to discover new patterns of behaviour that can lead to serious advantage over the competition. Risk thus offers a key source of innovation. This can be viewed as the aggressive entrepreneurial approach to business.

Who would you bet your Money on?

In the old-world camp, risk management is a disciplined way of analysing risk and safety problems in well-defined domains. The difficulty lies in the mix of complexity, ambiguity and uncertainty with human values, where problems are not amenable to old-world technical solutions. New-world problems manifest themselves as human interactions with systems. They are complex, vexing socio-technical dilemmas involving multiple participants with competing interests and conflicting values (read that as opportunities).

A ground rule for the clash is that total elimination of risk is both impossible and undesirable. It is a natural human tendency to try to eliminate a given risk; however, doing so may increase other risks or introduce new ones. Furthermore, the risks one is likely to attempt to eliminate are the better-known risks that must have occurred in the past and about which more is known. Given that elimination is not an option, we are forced into a more visible coexistence with risks and their implications. The rest of this article will focus on the dynamic relationship between safety, risk and danger as an alternative way of viewing the risk-opportunity spectrum. It will therefore help to map, and potentially resolve, the roots of the clash from an alternative perspective.

Away with Danger?

The old world equates risk with danger, in an attempt to achieve a safer environment. If only it were that simple! Safety may result from the experience of danger. Early programmes, models and inventions are fraught with problems. Experience accumulates through interaction with and resolution of these problems. Trial and error leads to the ability to reduce error. Eliminate all errors and you reduce the opportunity for true reflective learning.

Safety, be it in air traffic control systems, business environments, manufacturing or elsewhere, is normally achieved through the accumulated experience of taking risks. In the old world, the ability to know how to reduce risks inevitably grows out of historical interaction with risk. Solutions are shaped by past problems. Without taking risks in order to learn how to reduce them, you would not know which solutions are safe or useful.

What happens when a risk is actually reduced? Experience reveals that safety also comes at a price. As we feel safe, we tend to take more chances and attract new dangers. Research shows that the addition of safety measures, such as safety belts in cars or helmets in sport, encourages danger-courting behaviour, often leading to a net increase in overall risk taking. This may be explained by the reduced incentive to avoid a risk once protection against it has been obtained.

Adding safety measures also adds to the overall complexity of the design process and the designed system, and to the number of interactions, thereby increasing the difficulty of understanding them and the likelihood of accidents and errors. In some computer systems, adding safety devices may likewise decrease the overall level of safety. The more interconnected the technology and the greater the number of components, the greater the potential for components to affect each other unexpectedly and to spread problems, and the greater the number of potential ways for something to go wrong.

So far we have observed that risk and danger maintain a paradoxical relationship, in which risks can improve safety and safety measures can increase risks. Danger and benefits are intertwined in complex ways, ensuring that safety always comes at a price. Safety, like risk, depends on the perception of participants.

Predicting Danger

The mitigation of risk, as practised in the old world, is typically predicated on the assumption of anticipation. It thus assumes that risks can be identified, characterised, quantified and addressed in advance of their occurrence. The separation of cause and effect, implied by these actions, depends on stability and equilibrium within the system. The purpose of intended action is to return the system to the status quo following temporary disturbances. The old world equates danger with deviation from the status quo, which must be reversed. The purpose of risk management is to apply resources to eliminate such disturbances. The old world is thus busy projecting past experience into the future. It is perfectly placed to fight previous battles but not new engagements.

The assumption of anticipation offers a bad bet in an uncertain and unpredictable environment. An alternative strategy is resilience, which represents the way an organism or a system adapts itself to new circumstances in a more active and agile search for safety. The type of approach applied by new-world practitioners calls for an ability to absorb change and disruption, keep options open, and deal with the unexpected by conserving energy and utilising surplus resources more effectively and more creatively.

The secret in new-world thinking is to search for the next acceptable state rather than focus on returning to the previous state. In the absence of knowledge about the future, it is still possible to respond to change by finding a new beneficial state as the result of a disturbance. Bouncing back and grabbing new opportunities becomes the order of the day. Entrepreneurs, like pilots, learn to deal with new situations through the gradual development of a portfolio of coping patterns and strategies that is honed by experience. Above all, they learn to adapt and respond.

New-world actors grow up experimenting. Trial and experimentation make them more knowledgeable and capable. Experiments provide information and varied experience about unknown processes, different strategies and alternative reaction modes. Intelligent risk-taking in the form of trial and error leads to true learning and ultimate improvement. The key to avoiding dramatic failures, and to developing new methods and practices for dealing with them, lies in such learning-in-the-small.

Acceptance of small errors is at the crux of developing the skills and capability to deal with larger problems. Small doses of danger provide the necessary feedback for learning and improvement. Similar efforts are employed by credit card companies, banks and security organisations, which orchestrate frequent threats and organised breaches of security to test their capability and learn new strategies and approaches for coping with problems. In the new world, taking small chances is a part of learning, and so is failure! Small, recognisable and reversible actions permit experimentation with new phenomena at relatively low risk. Once again we paradoxically discover that contained experimentation with danger leads to improved safety.

Large numbers of small moves, with frequent feedback and adjustment, permit experimentation on a large scale with new phenomena at relatively low risk. Contained experimentation with danger leads to improved understanding of safety. Risk management is therefore a balancing act between stopping accidents, increasing safety, avoiding catastrophes and receiving rewards. Traditional, mechanistically based risk management spends too much time and effort on minimising accidents: as a result it loses the ability to respond, ignores potential rewards and opportunities, and may face tougher challenges as they accumulate. It also focuses excessively on reducing accidents, to the extent that rewards are often neglected and excluded from decision-making frames. Such fixation with worst-case scenarios and anticipation of worst-case circumstances often leads to an inability to deal with alternative scenarios.

In the new world, safety is not a state or status quo but a dynamic process that tolerates natural change and discovery cycles. It can thus be viewed as a discovered commodity. This resource needs to be maintained and cherished to preserve its relevance and value. Accepting safety (and even danger?) as a resource makes possible the adoption of a long-term perspective, and it thus becomes natural to strive for the continuous improvement of safety.

While many organisations may object to the introduction of risk assessment and risk management because of the negative overtones, it is more difficult to resist an ongoing perspective emphasising improvement and enhanced safety. After all, successful risk assessment, like testing, is primarily concerned with identifying problems (albeit before they occur). The natural extension, therefore, is not to focus simply on risk as a potential for achievement, but to regard the safety to which it can lead as a resource worth cherishing.

Like other commodities, safety degrades and decays with time. The safety asset therefore needs continuous maintenance to reverse entropy and maintain its relevance with respect to an ever-changing environment. Relaxing this effort will lead to a decline both in the level of safety and in its value as a corporate asset. In order to maintain its value, the process of risk management (or, more appealingly, safety management) must be kept central and continuous.

Exploring risks as an ongoing activity offers another strategic advantage, in the form of the continuous discovery of new opportunities. Risk anticipation locks actors into the use of tactics that have worked in the past (even doing nothing reduces the number of available options). Resilience and experimentation can easily uncover new options and innovative methods for dealing with problems. They thus lead to divergence, and the value of the created diversity is in having the ability to call on a host of different types of solutions.

Miller and Friesen observe that successful organisations appear to be sensitive to changes in their environment [2]. Peters and Waterman [3] report that successful companies typically:

- experiment more,
- encourage more tries,
- permit small failures,
- keep things small,
- interact with customers,
- encourage internal competition and allow resultant duplication and overlap, and
- maintain a rich information environment.

Uncertainty and ambiguity lead to potential opportunities as well as ‘unanticipated’ risks. Resilience is built through experimentation, through delaying commitment, through enabling, recognising and embracing opportunities and, above all, through the acquisition of knowledge, experience and practice in dealing with adversity-in-the-small.

"Risk is traditionally perceived as a negative quantity: danger, hazard or potential harm."

44 CEPIS UPGRADE Vol. XII, No. 5, December 2011 © Novática

Risk Management

Farewell Edition

Risk management requires flexible technologies arranged with diversity and agility. Generally, a variety of styles, approaches and methods are required to ensure that more problems can be resolved. This argument can be extended to propose that such a diverse armoury should include anticipation (which is essentially proactive), as well as resilience (essentially reactive, in response to unknowable events) in various combinations. The two approaches are not mutually exclusive and can complement one another, as each responds to a particular type of situation.

Resilience and exploration are ideal under conditions of ambiguity and extreme uncertainty. Anticipation can be used under risky, yet reasonably certain, conditions; while the vast space in between would qualify for a balanced combination of anticipation and resilience operating in concert.

The management of risks therefore needs to be coupled to the nature of the environment. After all, managing progress is not about fitting an undertaking to a (probably already redundant) plan, but about reducing the difference between plan and reality. This need not be achieved through the elimination of problems (which may prove to be a source of innovation), but through adaptation to changing circumstances. By overcoming the momentum that resists change, with small incursions and experiments leading to rapid feedback, it becomes possible to avoid major disasters and dramatic failures through acting in-the-small and utilising agile risk management.

Remember the two Worlds?

Well, it appears we need both. The old world is outstanding at using available information in an effort to improve efficiency and execution, while the new world is concerned with potential, promise and innovation.

The single most important characteristic of success has often been described as conflict or contention. The clash between the worlds provides just that. It gives rise to a portfolio of attitudes, experiences and expertise that can be used as needed. Skilful manipulation of the safety resource and the knowledge of both worlds would entail balancing a portfolio of risks, ensuring that the right risks are taken and that the right opportunities are exploited, while keeping a watchful eye on the balance between safety and danger. A satisfactory balance will thus facilitate the exploration of new possibilities alongside the exploitation of old and well-understood certainties. By consulting all those affected by risks, and by maximising the repertoire, it becomes possible to damp the social amplification of risk and to embrace risk and danger from an intelligent and collective perspective.

If this balance is not achieved, one of the two worlds will prevail. It will bring with it its baggage, which will dominate risk practice. A practice dominated by either ‘better safe than sorry’ or ‘no pain, no gain’ will be unable to combine the benefits of agile exploration and mature exploitation. Intelligent risk management depends on a dynamic balancing act that is responsive to environmental feedback.

Perhaps more importantly, the justification for creating such a balance lies in taking a long-term perspective and viewing safety as an evolving commodity. Risk management is not a service. A specific risk may be discrete, but risk management is a growing and evolving body of knowledge: an improving asset. In this improvement lies the value of the asset.

"There is no point in getting into a panic about the risks of life until you have compared the risks which worry you with those that don't, but perhaps should!"

Lord N. Rothschild, 1978

Once we graduate beyond viewing risk management as a fad offered by either world, we can find the middle ground and the benefit of both worlds.

References:

[1] U. Beck. Risk Society: Towards a New Modernity. Sage, London, 1992.
[2] D. Miller and P. H. Friesen. "Archetypes of Strategy Formulation". Management Science, Vol. 24, 1978, pp. 921-923.
[3] T. J. Peters and R. H. Waterman. In Search of Excellence: Lessons from America's Best-Run Companies. Harper and Row, London, 1982.

To probe further:

C. Hood and D. K. C. Jones (Eds.). Accident and Design: Contemporary Debates in Risk Management. UCL Press, London, 1996.
V. Postrel. The Future and Its Enemies: The Growing Conflict over Creativity, Enterprise and Progress. Scribner Book Company, 1999.
A. Wildavsky. Searching for Safety. Transaction Books, Oxford, 1988.
<http://www.biotech-info.net/precautionary.html>
<http://europa.eu.int/comm/dgs/health_consumer/library/press/press38_en.html>
<http://www.sehn.org/precaution.html>

Keywords: Energy Levels for Risk Management, Human Aspects, Individual Project Risk, Overall Project Risk, Post-Project Review, Project Risk Management Principles, Project Risk Process, Risk Responses.

Humans have been undertaking projects for millennia, with more or less formality, and with greater or lesser degrees of success. We have also recognised the existence of risk for about the same period of time, understanding that things don't always go according to plan for a range of reasons. In relatively recent times these two phenomena have coalesced into the formal discipline called project risk management, offering a structured framework for identifying and managing risk within the context of projects. Given the prevalence and importance of the subject, we might expect that project risk management would be fully mature by now, only needing occasional minor tweaks and modifications to enhance its efficiency and performance. Surely there is nothing new to be said about managing risk in projects?

While it is true that there is wide consensus on project risk management basics, the continued failure of projects to deliver consistent benefits suggests that the problem of risk in projects has not been completely solved. Clearly there must be some mismatch between project risk management theory and practice, or perhaps there are new aspects to be discovered and implemented; otherwise all project risks would be managed effectively and most projects would succeed.

Managing Risk in Projects: What’s New?1

David Hillson, "The Risk Doctor"

Project Risk Management has continued to evolve into what many organisations consider to be a largely mature discipline. Given this evolution we can ask if there are still new ideas that need to be considered in the context of managing project risks. In this article we consider the state of project risk management and reflect on whether there is still a mismatch between project risk management theory and practice. We also look for gaps in the available practice and suggest some areas where further improvement may be needed, thereby offering insights into new approaches and perspectives.

Author

David Hillson (FIRM HonFAPM PMI-Fellow FRSA FCMI), known globally as "The Risk Doctor", is an international risk consultant and Director of Risk Doctor & Partners, offering specialist risk management consultancy across the globe, at both strategic and tactical levels. He has worked in over 40 countries with major clients in most industry sectors. David is recognised internationally as a leading thinker and practitioner in risk management, and he is a popular conference speaker and author on the subject. He has written eight books on risk, as well as many papers. He has made several innovative contributions to the discipline which have been widely adopted. David is well-known for promoting inclusion of opportunities throughout the risk process. His recent work has focused on risk attitudes (see <http://www.risk-attitude.com>), and he has also developed a scaleable risk methodology, <http://www.ATOM-risk.com>. David was named Risk Personality of the Year in 2010 by the Institute of Risk Management (IRM). He was the first recipient of this award, recognising his significant global contribution to improving risk management and advancing the risk profession. He is also an Honorary Fellow of the UK Association for Project Management (APM), and a PMI Fellow in the Project Management Institute (PMI®), both marking his contribution to developing project risk management. David was elected a Fellow of the Royal Society of Arts (RSA) to contribute to its Risk Commission. He is currently leading the RSA Fellows project on societal attitudes to failure. He is also a Chartered Manager and Fellow of the Chartered Management Institute (CMI), reflecting his broad interest in topics beyond his own speciality of risk management. <[email protected]>

So what could possibly remain to be discovered about this venerable topic? Here are some suggestions for how we might do things differently and better, under four headings:

1 This article was previously published online in the "Advances in Project Management" column of PM World Today (Vol. XII Issue II - February 2010), <http://www.pmworldtoday.net/>. It is republished with all permissions.

1. Principles
2. Process
3. People
4. Persistence

Problems with Principles

There are two potential shortfalls in the way most project teams understand the concept of risk. It is common for the scope of project risk management processes to be focused on managing possible future events which might pose threats to project cost and schedule. While these are undoubtedly important, they are by no means the full story. The broad proto-definition of risk as "uncertainty that matters" encompasses the idea that some risks might be positive, with potential upside impacts, mattering because they could enhance performance, save time or money, or increase value. And risks to objectives other than cost and schedule are also important and must be managed proactively. This leads to the use of an integrated project risk process to manage both threats and opportunities alongside each other. This is more than a theoretical nicety: it maximises a project's chances of success by intentionally seeking out potential upsides and capturing as many as possible, as well as finding and avoiding downsides.
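The integrated treatment of threats and opportunities can be sketched as a simple scoring exercise. The sketch below is purely illustrative and not part of Hillson's method: the `Risk` class, the example entries and the signed-impact convention are all assumptions. The point it demonstrates is that if opportunities carry positive impact and threats negative, both can be ranked together by the magnitude of their expected impact, so upsides are not quietly dropped from the register.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a combined threat-and-opportunity register (illustrative)."""
    name: str
    probability: float  # chance of occurring, 0..1
    impact: float       # effect on objectives: negative = threat, positive = opportunity

def prioritise(register):
    """Rank threats and opportunities together by the magnitude of
    their expected impact, so that upsides compete for attention
    on equal terms with downsides."""
    return sorted(register, key=lambda r: abs(r.probability * r.impact), reverse=True)

register = [
    Risk("supplier slips delivery", 0.4, -20.0),      # downside: costs 20 units if it occurs
    Risk("reusable module saves effort", 0.3, 15.0),  # upside: saves 15 units if captured
    Risk("minor tooling glitch", 0.2, -1.0),
]

for r in prioritise(register):
    kind = "opportunity" if r.impact > 0 else "threat"
    print(f"{r.name}: {kind}, expected impact {r.probability * r.impact:+.1f}")
```

In this toy register the opportunity outranks the small threat, which is exactly the behaviour a threats-only process would miss.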

Another conceptual limitation which is common in the understanding of project risk is to think only about detailed events or conditions within the project when considering risk. This ignores the fact that the project itself poses a risk to the organisation at a higher level, perhaps within a programme or portfolio, or perhaps in terms of delivering strategic value. The distinction between "overall project risk" and "individual project risks" is important, leading to a recognition that risk exists at various levels reflecting the context of the project. It is therefore necessary to manage overall project risk (risk of the project) as well as addressing individual risk events and conditions (risks in the project). This higher-level connection is often missing in the way project risk management is understood or implemented, limiting the value that the project risk process can deliver. Setting project risk management in the context of an integrated Enterprise Risk Management (ERM) approach can remedy this lack.

Problems with Process

The project risk process as implemented by many organisations is often flawed in a couple of important respects. The most significant of these is a failure to turn analysis into action, with Risk Registers and risk reports being produced and filed, but with these having little or no effect on how the project is actually undertaken. The absence of a formal process step to "Implement Risk Responses" reinforces this failing. It is also important to make a clear link between the project plan and risk responses that have been agreed and authorised. Risk responses need to be treated in the same way as all other project tasks, with an agreed owner, a budget and timeline, included in the project plan, reported on and reviewed. If risk responses are seen as "optional extras" they may not receive the degree of attention they deserve.

A second equally vital omission is the lack of a "post-project review" step in most risk processes. This is linked to the wider malaise of failure to identify lessons to be learned at the end of each project, denying the organisation the chance to learn from its experience and improve performance on future projects. There are many risk-related lessons to be learned in each project, and the inclusion of a formal "Post-project Risk Review" will help to capture these, either as part of a more generic project meeting or as a separate event. Such lessons include identifying which threats and opportunities arise frequently on typical projects, finding which risk responses work and which do not, and understanding the level of effort typically required to manage risk effectively.

Problems with People

It is common for project risk management to be viewed as a collection of tools and techniques supporting a structured system or a process, with a range of standard reports and outputs that feed into project meetings and reviews. This perspective often takes no account of the human aspects of managing risk. Risk is managed by people, not by machines, computers, robots, processes or techniques. As a result we need to recognise the influence of human psychology on the risk process, particularly in the way risk attitudes affect judgement and behaviour. There are many sources of bias, both overt and hidden, affecting individuals and groups, and these need to be understood and managed proactively where possible.

The use of approaches based on emotional literacy to address the human behavioural aspects of managing risk in projects is in its infancy. However, some good progress has been made in this area, laying out the main principles and boundaries of the topic and developing practical methods for understanding and managing risk attitude. Without taking this into account, the project risk management process as typically implemented is fatally flawed, relying on judgements made by people who are subject to a wide range of unseen influences, and whose perceptions may be unreliable, with unforeseeable consequences.

Problems with Persistence

Even where a project team has a correct concept of risk that includes opportunity and addresses the wider context, and if they ensure that risk responses are implemented effectively and risk-related lessons are learned at the end of their project, and if they take steps to address risk attitudes proactively, it is still possible for the risk process to fail! This is because the risk challenge is dynamic, constantly changing and developing throughout the project. As a result, project risk management must be an iterative process, requiring ongoing commitment and action from the project team. Without such persistence, project risk exposure will get out of control, the project risk process will become ineffective and the project will have increasing difficulty in reaching its goals.

Insights from the new approach of "risk energetics" suggest that there are key points in the risk process where the energy dedicated by the project team to managing risk can decay or be dampened. A range of internal and external Critical Success Factors (CSFs) can be deployed to raise and maintain energy levels within the risk process, seeking to promote positive energy and counter energy losses. Internal CSFs within the control of the project include good risk process design, expert facilitation, and the availability of the required risk resources. Equally important are external CSFs beyond the project, such as the availability of appropriate infrastructure, a supportive risk-aware organisational culture, and visible senior management support.

So perhaps there is still something new to be said about managing risk in projects. Despite our long history of attempting to foresee the future of our projects and address risk proactively, we might do better by extending our concept of risk, addressing weak spots in the risk process, dealing with the risk attitudes of both individuals and groups, and taking steps to maintain energy levels for risk management throughout the project. These simple and practical steps offer achievable ways to enhance the effectiveness of project risk management, and might even help us to change the course of future history.

Note: All of these issues are addressed in the book "Managing Risk in Projects" by David Hillson, published in August 2009 by Gower (ISBN 978-0-566-08867-4) as part of the Fundamentals in Project Management series.

Our Uncertain Future

David Cleden

Risk arises from uncertainty, but it is difficult to express all types of uncertainty in terms of risks. Therefore managing uncertainty often requires an approach which differs from conventional risk management. A knowledge of the lifecycle of uncertainty (latency, trigger points, early warning signs, escalation into crisis) helps to inform the different strategies which can be used at different stages of the lifecycle. This paper identifies five tenets to help project teams deal more effectively with uncertainty, combining pragmatism (e.g. settle for containing uncertainty, don't try to eliminate it completely), an emphasis on informed decision-making, and the need for projects to be structured in an agile fashion to increase their resilience in the face of uncertainty.

Author

David Cleden is a senior Project Manager with more than twenty years' experience of commercial bid management and project delivery, mainly within the public sector. With a successful track record in delivering large and technically challenging IT projects, he also writes widely on the challenges faced by modern businesses striving for efficiency through better management processes. He is the author of "Managing Project Uncertainty", published by Gower as part of the Advances in Project Management series edited by Professor Darren Dalcher, and "Bid Writing for Project Managers", also published by Gower. <[email protected]>

Keywords: Agility, Decision-Making, Latent Uncertainty, Management Strategies, Resilience, Risk, Trigger Point, Uncertainty, Uncertainty Lifecycle, Unexpected Outcomes.

1 Introduction

There is a fundamental truth that all management professionals would do well to heed: all risks arise from uncertainties, but not all uncertainties can be dealt with as risks. By this we mean that uncertainty is the source of every risk (arising from, for example, information that we don't possess, something that we can't forecast, or decisions that have not yet been made). However, a set of project risks, no matter how comprehensive the risk analysis, will only address a subset of the uncertainties which threaten a project.

We know this empirically. For every credible risk that is identified, we reject (or choose to ignore) a dozen others. These are 'ghost risks': events considered most unlikely to occur, or too costly to make any kind of effective provision for. Risk management quite rightly acts on priorities: what are the things that represent the greatest threat to this project, and what action can be taken to reduce this threat? But prioritisation means that at some point the line is drawn: above it are the risks that are planned for and actively managed. Below the line are risks that have a low likelihood of occurring, or will have minimal impact if they do, or (sometimes) have no effective means of prevention or mitigation. Not surprisingly, where the line is drawn very much depends on a project's 'risk appetite'. A project with a low risk appetite, where human lives or major investment are at stake, will be far more diligent in its risk analysis than one where the prospect of failure may be unwelcome but can be tolerated.

No matter where the line is drawn in terms of the risks we choose to recognise, there remain risks that cannot be formulated at this time, no matter how desirable this might be. By definition, if we cannot conceive of a threat, we cannot formulate it as a risk and manage it accordingly, as Figure 1 shows. These may be the so-called 'black swan' events, or 'bolts from the blue' (things that it would be very difficult, if not impossible, to know about in advance) or, just as likely, they may be gaps in our understanding or knowledge of the tasks to be undertaken.
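The idea of drawing a line according to risk appetite, actively managing what sits above it and leaving the 'ghost risks' below, can be shown in a small sketch. Everything here is a hypothetical illustration, not from the paper: the function name, the crude scoring by probability times impact, and the example figures are all assumptions.

```python
def draw_the_line(risks, appetite):
    """Split candidate risks into those actively managed (above the line)
    and 'ghost risks' left below it. 'appetite' is the minimum
    expected-loss score this project chooses to act on."""
    managed, ghosts = [], []
    for name, probability, impact in risks:
        score = probability * impact  # a crude expected-loss measure
        (managed if score >= appetite else ghosts).append(name)
    return managed, ghosts

candidates = [
    ("key staff leave mid-project", 0.30, 50.0),
    ("data centre flooded", 0.01, 100.0),
    ("meteor strike", 0.000001, 1000.0),
]

# A project tolerant of failure sets a high threshold; a safety-critical
# one sets it low and so manages more of the same candidate list.
managed, ghosts = draw_the_line(candidates, appetite=1.0)
```

Lowering `appetite` moves items from `ghosts` into `managed`, which is exactly the 'diligence' difference between a safety-critical project and a failure-tolerant one.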

A knowledge-based analytical approach is often helpful for understanding the threat from this kind of uncertainty. Some uncertainty is susceptible to analysis and can be managed as risks, but some cannot. We don't know anything about these risks (principally because we have not, or cannot, conceived of them), but it is entirely possible that some of them would rank highly in our risk register if we could.

Let's examine the possibilities (see Figure 2). The top-left quadrant describes everything that we know (or think we know) about the project. This is the knowledge on which plans are built, which informs our decision-making processes and against which we compare progress. Broadly speaking, these are the facts of the project.

Often there are more facts available than we realise. These are things that we don't know, but could if we tried. This untapped knowledge can take many forms: a colleague with relevant experience or skills that we haven't consulted, lessons learnt from a previous project which could aid our decision-making, standards, guidelines and best practices which the project team have overlooked, and many other things besides. In the knowledge-centric view of uncertainty, clearly the more facts and information we possess, the better able we are to deal with uncertainty.

Naturally, no matter how good our understanding of the project's context, there will always be gaps. By acknowledging this, we accept that there are some things about the project that we don't know or can't predict with accuracy (the classic 'known unknowns'). However, as long as they can be conceived of, they can be addressed as risks using risk management techniques.

What does this leave us with? The fourth quadrant, the 'unknown unknowns', represents the heart of uncertainty. This kind of uncertainty is unfathomable; it is not susceptible to analysis in the way that risks are. By definition we have little knowledge of its existence (although if we did, we might be able to do something about it). Some terrible event (a natural disaster or freak combination of circumstances, say) may occur tomorrow which will fundamentally undermine the basis on which the project has been planned, but we have no way of knowing the specifics of this event or how it might impact the project.
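The four quadrants just described (facts, untapped knowledge, known unknowns and unknown unknowns) amount to a classification over two questions: are we aware of the item, and is it knowable? The tiny function below is an illustrative sketch of that grid; the quadrant labels follow the text, but the two boolean axes are my assumed reading of Figure 2, which is not reproduced here.

```python
def quadrant(aware: bool, knowable: bool) -> str:
    """Place an item of project knowledge in the 2x2 grid described in
    the text, according to whether we are aware of it and whether it
    can practically be known."""
    if aware and knowable:
        return "facts"               # knowledge that plans are built on
    if not aware and knowable:
        return "untapped knowledge"  # could be known, if we tried
    if aware and not knowable:
        return "known unknowns"      # can be addressed as risks
    return "unknown unknowns"        # the heart of uncertainty
```

The useful observation is that risk management techniques only reach the third quadrant; the fourth calls for the uncertainty strategies discussed below.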

Note that it is possible to know a situation is unfathomable without being able to change the fundamental nature of the uncertainty. Someone may tell us that a terrible danger lurks behind a locked door, but we still have no idea (and no practical way of finding out) what uncertainty faces us if we unlock the door and enter. We know the situation is unfathomable but we don't know what it is that we don't know. In other words, the future is still unforeseeable.

All this points to a need for a project to have not only a sound risk management strategy in place, but also an effective strategy for dealing with uncertainty. The unfathomable uncertainty of 'unknown unknowns' may not be susceptible to the kind of analysis techniques used in risk management, but that doesn't mean a project cannot be prepared to deal with uncertainty.

2 The Lifecycle of Uncertainty

Any strategy for managing project uncertainty depends on an understanding of the lifecycle of uncertainty. At different stages in this lifecycle we have different opportunities for addressing the issues.

It begins with a source of uncertainty (see Figure 3). In the moment of crisis we may not always be aware of the source, but hindsight will reveal its existence. If detected early enough, anticipation strategies can be used to contain the uncertainty at source. Anticipating uncertainty often means trying to learn more about the nature of the uncertainty; for example by framing the problem it represents, or modelling future scenarios and preparing for them. Using discovery techniques, such as constructing a knowledge map of what is and isn't known about a particular issue, can highlight key aspects of unfathomable uncertainty. Of course, once a source of uncertainty is revealed, it is no longer unfathomable and can be dealt with as a project risk.

The greatest threat arises towards the end of the uncertainty lifecycle, as problems gain momentum and turn into crises. Something happens to trigger a problem, giving rise to an unexpected event. For example, it may not be until two components are integrated that it becomes apparent that incorrect manufacturing tolerances have been used. The latent uncertainty (what manufacturing tolerance is needed?) triggers an unexpected outcome (a bad fit) only at the point of component integration, even though the uncertainty could have been detected much earlier and addressed.

This trigger may be accompanied by early warning signs.

Figure 1: Not All Uncertainties can be analysed and formulated as Risks.

Figure 2: A Knowledge-centric View of Uncertainty and Risk.

Page 51: UPGRADE, Vol. XII, issue no. 5, December 2011 - cepis.orgcepis.org/upgrade/media/full_2011_51.pdf · 99 From inforeview (JISA, Serbia) Information Society Steve Jobs — Dragana Stojkovic

50 CEPIS UPGRADE Vol. XII, No. 5, December 2011 © Novática

Risk Management

Farewell Edition

Figure 3: The Uncertainty Lifecycle and the Strategies Best Suited to addressing Uncertainty.

An alert project manager may be able to respond swiftly and contain the problem even without prior knowledge of the uncertainty, either by recognising the warning signs or by removing the source of uncertainty before it has a chance to develop.

It is also worth remembering that many kinds of uncertainty will never undergo the transition which results in an unexpected outcome. Uncertainty which doesn't manifest as a problem is ultimately no threat to a project. Once again, the economic argument (that it is neither desirable nor possible to eliminate all uncertainty from a project) is a powerful one. The goal is to focus sufficient effort on the areas of uncertainty that represent the greatest threat and have the highest chance of developing into serious problems.

Based on this understanding of the uncertainty lifecycle,

different sets of strategies are effective at different points:Knowledge-centric strategies: These help to reveal

the sources of uncertainty, resolve them where possible orprepare appropriately, for example through mitigation plan-ning and risk management.

Anticipation strategies: These offer a more holis-tic approach than the knowledge-centred view of uncer-tainty. By looking at a project from different perspectives,for example by visualising future scenarios and examiningcausal relationships, previously hidden uncertainties arerevealed.

Resilience strategies: Trying to contain uncertaintyat source will never be 100 percent successful. Therefore, aproject needs resilience and must be able to detect and re-spond rapidly to unexpected events. Whilst it is impossible

All risks arise from uncertainties,but not all uncertainties can be dealt with as risks“ ”

Page 52: UPGRADE, Vol. XII, issue no. 5, December 2011 - cepis.orgcepis.org/upgrade/media/full_2011_51.pdf · 99 From inforeview (JISA, Serbia) Information Society Steve Jobs — Dragana Stojkovic

CEPIS UPGRADE Vol. XII, No. 5, December 2011 51© Novática

Risk Management

Farewell Edition

to predict the nature of the problems in advance, a projectmanager can employ strategies which will imbue theirprojects with much greater resilience.

Learning strategies: These give the project manager and the organisation as a whole the ability to improve and benefit from experience over time. No two projects face exactly the same uncertainty, so it is important to be able to adapt and learn lessons.

3 Five Tenets for Dealing Effectively with Project Uncertainty

3.1 Aim to contain Uncertainty, not eliminate it

No individual can bring order to the universe, and neither can the project manager protect his or her project from every conceivable threat. Managers who try to do this labour under unworkable risk management regimes, constructing unwieldy risk logs and impossibly costly mitigation plans. Amidst all the effort being poured into managing small, hypothetical risks (the ‘ghost risks’), a project manager may be too busy to notice that the nuts and bolts of the project – where the real focus of attention should be – have come loose. It is much better to concentrate on detecting and reacting swiftly to early signs of problems. Whilst uncertainty can never be entirely eliminated, it can most certainly be contained, and that should be good enough. Ultimately this is a far more effective use of resources.

It may be helpful to visualise the project as existing in a continual state of dynamic tension (see Figure 4). The accumulation of uncertainties continually tries to push the project off its planned path. If left unchecked, the problems may grow so severe that there is no possibility of recovering back to the original plan.

The project manager’s role is to act swiftly to correct the deviations, setting actions to resolve issues, implementing contingency plans or nipping problems in the bud. This requires mindfulness and agility: mindfulness to be able to spot things going wrong at the earliest possible stage, and agility in being able to react swiftly and effectively to damp down the problems and bring the project back on track.

3.2 Uncertainty is an Attribute, not an Entity in its Own Right

We often talk about uncertainties as if they are discrete objects when in fact uncertainty is an attribute of every aspect of the project. The ‘object’ model of uncertainty is unhelpful because it suggests that there are clusters of uncertainties hiding away in the darker corners of the project. If only we could find them, we could dispose of them and our project would be free of uncertainty.

This is a flawed point of view. Uncertainty attaches to every action or decision much like smell or colour does to a flower. The level of uncertainty may be strong or weak, but collectively we can never completely eliminate uncertainty because the only project with no uncertainty is the project that does nothing.

Once this is accepted, it becomes pointless to attempt to manage uncertainty in isolation from everything else. A project manager cannot set aside a certain number of hours each week to manage uncertainty; it is inherent in every decision taken. Uncertainty cannot be compartmentalised.

Figure 4: The Illusion of Project Stability.

“Managing uncertainty often requires an approach which differs from conventional risk management”


Figure 5: Collective Team Responsibility to react rapidly during the Transition Period is Key to minimising the Impact of Uncertainty.

Figure 6: Four Possible Modes for confronting Major Uncertainty.

“A knowledge of the lifecycle of uncertainty helps to inform the different strategies which can be used at different stages of the lifecycle”


It lurks in all project tasks, in their dependencies and underlying assumptions.

Alertness to any deviation from the norm is vital. A culture of collective problem ownership and responsibility is also important. All team members need to be capable of resolving issues within their domain as soon as they are spotted. The period between a trigger event and a full-blown crisis is often small, so there may not always be time to refer up the management chain and await a decision. The ability to act decisively – often on individual initiative – needs to be instilled in the team and backed up by clear lines of responsibility and powers of delegation. In time, this should become part of the day job for members of the team at all levels.

Project tolerances can sometimes mask emerging uncertainty. Thresholds need to be set low enough so that issues are picked up early in the uncertainty lifecycle, giving more time to react effectively. Much also depends on the nature of the metrics being used to track progress, for example: number of defects appearing at the prototyping stage, individual productivity measures, number of client issues flagged, etc. Choose the metrics carefully. The most obvious metrics will not necessarily give the clearest picture (or the earliest warning) of emerging problems.
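The idea of low thresholds on carefully chosen metrics can be sketched in a few lines of code. The metric names and threshold values below are hypothetical examples, not figures taken from this article:

```python
# Illustrative early-warning check. Metric names and thresholds are
# invented for this sketch; a real project would choose its own.
THRESHOLDS = {
    "prototype_defects_per_week": 5,
    "client_issues_flagged": 3,
    "tasks_slipping": 2,
}

def early_warnings(metrics):
    """Return, sorted, the names of metrics whose value breaches its threshold."""
    return sorted(
        name for name, value in metrics.items()
        if name in THRESHOLDS and value >= THRESHOLDS[name]
    )

# Snapshot of a hypothetical reporting week: two metrics breach.
week_12 = {"prototype_defects_per_week": 7,
           "client_issues_flagged": 1,
           "tasks_slipping": 2}
print(early_warnings(week_12))  # ['prototype_defects_per_week', 'tasks_slipping']
```

Deliberately setting the thresholds below the level at which a breach becomes a crisis is what buys the team time to react early in the uncertainty lifecycle.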

3.3 Put Effective Decision-making at the Heart of Managing Uncertainty

When faced with uncertainty, the project manager has several options available (see Figure 6). The project manager must decide how to act – either by suppressing uncertainty (perhaps through plugging knowledge gaps), or adapting to it by drawing up mitigation plans, or detouring around it and finding an alternative path to the project’s goals.

Whichever action is taken, the quality of decision-making determines a project’s survival in the face of uncertainty and is influenced by everything from individual experience and line management structures to the establishment of a blame-free culture which encourages those put on the spot to act in the project’s best interests with confidence. As the old adage says: decisions without actions are pointless; actions without decisions are reckless.

The most commonly used tactic against major uncertainty is to suppress it: reduce the magnitude of the uncertainty and hence the threat it represents. If this can be done pre-emptively by reducing the source of the uncertainty, the greatest benefits will be achieved. Avoiding uncertainty by suppressing it sounds like a safe bet – and it is, providing it can be done cost-effectively. As the first tenet states, reduction is the goal, not elimination. For novel or highly complex projects, particularly those with many co-dependencies, it may be too difficult or costly to suppress all possible areas of uncertainty.

By adapting to uncertainty, the project tolerates a working level of uncertainty but is prepared to act swiftly to limit the most damaging aspects of any unexpected events. This is a highly pragmatic approach. It requires agile and flexible management processes which can firstly detect emerging issues in their infancy and secondly deal with them swiftly and decisively. For example, imagine a yacht sailing in strong winds. The helmsman cannot predict the strength of sudden gusts or the direction in which the boat will be deflected, but by making frequent and rapid tiller adjustments, the boat continues to travel in an approximately straight line towards its destination.
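The helmsman analogy can be made concrete with a toy feedback loop. All figures here are invented purely for illustration: random gusts deflect the heading, and small, frequent proportional corrections keep the average deviation bounded:

```python
import random

def sail(steps=1000, correction=0.5, seed=42):
    """Toy model: each step a random gust deflects the heading, then a
    proportional tiller correction pulls it back towards the intended
    course (heading 0). Returns the mean absolute deviation from course."""
    rng = random.Random(seed)
    heading = 0.0
    total = 0.0
    for _ in range(steps):
        heading += rng.uniform(-1.0, 1.0)   # unpredictable gust
        heading -= correction * heading     # frequent small adjustment
        total += abs(heading)
    return total / steps

print("with corrections:", round(sail(correction=0.5), 2))
print("no corrections:  ", round(sail(correction=0.0), 2))
```

With corrections applied the heading stays close to the course; with the correction switched off, the same gusts accumulate into an ever-growing drift. The point of the sketch is not the numbers but the shape of the strategy: no attempt is made to predict individual gusts, only to react to each one quickly.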

Given the choice, we should like to detour around all areas of uncertainty. Avoiding the source of uncertainty means that the consequences (that is, the unexpected outcomes) are no longer relevant to the project. Thus there is no need to take costly precautions to resolve unknowns or deal with their repercussions. Unfortunately, detouring around uncertainty is hard to achieve, for two reasons.

Firstly, many sources of uncertainty are simply unavoidable, or the avoidance measures are too costly. Consider the example of a subcontractor who, it later transpires, may be incapable of delivering a critical input on time. We could detour around this uncertainty by dismissing the subcontractor in favour of some competitor who can provide a better service. This will mean cancelling existing contracts, researching the marketplace and renegotiating commercial terms with an alternative supplier – all time-consuming and potentially costly activities – and with the risk of being no better off with the alternative supplier.

Figure 7: Making an Intuitive Leap to visualise a Future Scenario.

“Naturally, no matter how good our understanding of the project’s context, there will always be gaps”

Secondly, detouring only works for quantifiable uncertainty (the ‘known unknowns’). Unfathomable uncertainty may well strike too rapidly to permit a detour.

Our final option is reorientation. This is a more dramatic form of detour where we aim for a modified set of objectives in the face of insurmountable uncertainty. Highly novel projects sometimes have to do this. To plough on in the face of extreme uncertainty risks total failure. The only alternative is to redefine the goals, that is, reorient the project in a way that negates the worst of the uncertainty. This is not a tactic for the faint-hearted. Convincing the client that a project cannot be delivered as originally conceived is no easy task. But it is worth asking the question, "Is it better to deliver something different (but broadly equivalent) than nothing at all?"

3.4 Uncertainty encompasses both Opportunity and Threat

It is important to seize opportunities when they arise. If some aspects of a project are uncertain, it means there are still choices to be made, so we must choose well. Too often, the negative consequences dominate the discussion, but perhaps the project can achieve more than was planned, or achieve the same thing by taking a different path. Is there a chance to be innovative? Project managers must always be open to creative solutions. As Einstein said, "We can’t solve problems by using the same kind of thinking we used when we created them."

All approaches to dealing with uncertainty depend to a greater or lesser extent on being able to forecast future events. The classic approach is sequential: extrapolating from one logical situation to the next, extending out to some point in the future. But with each step, cumulative errors build up until we are no longer forecasting but merely enumerating the possibilities.

Suppose instead we don’t try to forecast what will happen, but focus on what we want to happen? This means visualising a desired outcome and examining which attributes of that scenario are most valuable. Working backwards from this point, it becomes possible to see what circumstances will naturally lead to this scenario. Take another step back, and we see what precursors need to be in place to lead to the penultimate step – and so on until we

have stepped back far enough to be within touching distance of the current project status (see Figure 7).

This approach focuses on positive attributes (what are the project’s success criteria?) not the negative aspects of the risks to be avoided. Both are important, but many project managers forget to pay sufficient attention to nurturing the positive aspects. By ‘thinking backwards’ from a future scenario, the desired path often becomes much clearer. It is ironic that ‘backward thinking’ is often just what is needed to lead a project forward to successful completion.
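The ‘thinking backwards’ idea can be sketched as a simple backward walk over a map of precursors. The states and the map itself below are hypothetical, invented for illustration:

```python
# Hypothetical precursor map: for each state, the state that leads to it.
PRECURSOR = {
    "successful launch": "client sign-off",
    "client sign-off": "acceptance tests passed",
    "acceptance tests passed": "stable prototype",
    "stable prototype": "current status",
}

def backward_path(goal, start="current status"):
    """Step backwards from the desired scenario until we reach the
    current status, then reverse so the path reads forwards."""
    path = [goal]
    while path[-1] != start:
        path.append(PRECURSOR[path[-1]])
    return list(reversed(path))

print(backward_path("successful launch"))
# ['current status', 'stable prototype', 'acceptance tests passed',
#  'client sign-off', 'successful launch']
```

Reading the reversed list forwards gives exactly the chain of circumstances the text describes: each step is the precursor that must be in place for the next.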

3.5 Meet Uncertainty with Agility

Perhaps the best defence against uncertainty is to organise and structure a project in a sufficiently agile fashion to be resilient to the problems that uncertainty inevitably brings. This manifests in two ways: how fast can the project adapt and cope with the unexpected, and how flexible is the project in identifying either new objectives or new ways to achieve the same goals?

One approach is to ensure that the project only ever takes small steps. Small steps are easier to conceptualise, plan for and manage. They can be retraced more easily if they haven’t delivered the required results or if it becomes clear they are leading in the wrong direction. Small steps also support the idea of fast learning loops. For instance, a lengthy project phase reduces the opportunity to feed back lessons learned quickly. If the project is too slow to respond, it may fail under the accumulated weight of uncertainty.

More iterative ways of working are becoming increasingly common and do much to increase the agility of a project. A feature of monolithic projects (i.e. those which do not follow an iterative strategy) is the assumption that everything proceeds more or less as a sequence of tasks executed on a ‘right first time’ basis. Generally speaking, more effort is directed at protecting this assumption (for example, by analysing and mitigating risks which may threaten the task sequence) than on planning for a certain level of rework. In contrast, by planning to tackle tasks iteratively, two benefits are gained: firstly, early sight of unfathomable issues which wouldn’t otherwise surface until much later in the schedule, and secondly, greater opportunity to make controlled changes.

Finally, an agile project is continuously looking for ways to improve. A project which is unable (or unwilling) to learn lessons is destined to repeat its early mistakes because it ignores opportunities to learn from the unexpected. Some lessons are obvious; some require much soul-searching, brainstorming or independent analysis. What matters above all else is that the improvements are captured and disseminated and the changes implemented, either in the latter project stages or in the next project the organisation undertakes.



Keywords: Behavioural Complexities, Chaotic Systems, Dynamic Complexities, Quantitative Assessments, Qualitative Estimates, Risk Leadership, Risk Management, Scientific Management, Tame Problems, Wicked Problems.

"We’re better at predicting events at the edge of the galaxy or inside the nucleus of an atom than whether it’ll rain on auntie’s garden party three Sundays from now. Because the problem turns out to be different. We can’t even predict the next drip from a dripping tap when it gets irregular. It’s the best possible time to be alive, when almost everything you thought you knew is wrong."

"Arcadia" by Tom Stoppard

Introduction

There is a feeling amongst some risk practitioners, myself included, that theoretical risk management has strayed from our intuition of the world of project management. Historically, project risk management has developed from the numerical disciplines, dominated by a preoccupation with statistics (insurance, accountancy, engineering, etc.). This has led to a bias towards the numerical in the world of project management.

In the 1950s a new type of scientific management was emerging: that of project management. This consisted of the development of formal tools and techniques to help manage large complex projects that were considered uncertain or risky. It was dominated by the construction and engineering industries, with companies such as Du Pont developing Critical Path Analysis (CPA) and RAND Corp developing the Programme Evaluation and Review Technique (PERT). Following on the heels of these early project management techniques, institutions began to be

Author

David Hancock is Head of Project Risk for London Underground, part of Transport for London, United Kingdom. He has run his own consultancy, and was Director of Risk and Assurance for the London Development Agency (LDA) under both Ken Livingstone’s and Boris Johnson’s leadership, with responsibilities for risk management activities including health & safety, business continuity and audit for all of the Agency’s and its partners’ programmes. Prior to this role, for 6 years he was Executive Resources Director with the Halcrow Group, responsible for establishing and expanding the business consultancy group. He has a wide breadth of knowledge in project management and complex projects and extensive experience in opportunity & risk management, with special regard to the people & behavioural aspects. He is presently a board director with ALARM (The National Forum for Risk Management in the Public Sector), a co-director of the managing partners’ forum risk panel, a member of the programme committee for the Major Projects Association and a visiting Fellow at Cranfield University, United Kingdom, in their School of Management. <[email protected]>

The application of the ‘New Sciences’ to Risk and Project Management1

David Hancock

The type of problems that need to be solved in organizations are very variable in terms of their complexity, ranging from ‘tame’ problems to ‘wicked messes’. We state that projects tend to have the characteristics of wicked messes, where decision making gets confused by behavioural and dynamic complexities which coexist and interact. To address the situation we cannot continue to rely on sequential resolution processes, quantitative assessments and simple qualitative estimates. We propose instead to develop the concept of risk leadership, which is intended to capture the activities and knowledge necessary for project managers to accommodate the disorder and unpredictability inherent in project environments through flexible practices leading to negotiated solutions.

formed in the 1970s as repositories for these developing methodologies. In 1969 the American Project Management Institute (PMI) was founded; by 2009 the organization had more than 420,000 members, with 250 chapters in more than 171 countries. It was followed in 1975 by the UK Association of Project Managers (renamed the Association for Project Management in 1999) with its own set of methodologies. In order to explicitly capture and codify the processes by which they believed projects should be managed, these institutions developed qualifications and guidelines to support them. However, whilst the worlds of physics, mathematics, economics and science have moved on beyond Newtonian methods to a more behavioural understanding – the so-called new sciences, led by eminent scholars in the field such as Einstein, Lorenz and Feynman – project and risk management appears largely to have remained stuck in the principles of the 1950s.
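As an aside, the PERT technique mentioned above reduces, at its core, to a three-point estimate per activity. A minimal sketch of that calculation follows; the activity figures are invented for illustration:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point estimate: a beta-distribution
    approximation of an activity's expected duration and its
    standard deviation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# Hypothetical activity: 2 weeks best case, 4 likely, 12 worst case.
mean, sd = pert_estimate(2, 4, 12)
print(f"expected duration: {mean:.1f} weeks, std dev: {sd:.2f}")
# expected duration: 5.0 weeks, std dev: 1.67
```

The weighting towards the most likely value, with the spread captured separately, is precisely the kind of numerical framing that, as the article argues, came to dominate early project risk thinking.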

1 This article was previously published online in the "Advances in Project Management" column of PM World Today (Vol. XII, Issue V, May 2010), <http://www.pmworldtoday.net/>. It is republished with all permissions.


Box 1: The Butterfly Effect. In 1961, whilst working on long-range weather prediction, Edward Lorenz made a startling discovery. Rather than starting the second run of a particular weather simulation from the beginning, he started it part way through, using the figures from the first run. This should have produced an identical run, but he found that it diverged rapidly until, after a few simulated months, it bore no resemblance to the first run. At first he thought he had entered the numbers in error. This turned out to be far from the case: what he had actually done was round the figures, using only three decimal places of the output instead of six (.506 instead of .506127). The difference, one part in a thousand, he had considered inconsequential, especially as a weather satellite able to read to this level of accuracy would be considered quite unusual. Yet this slight difference caused a massive difference in the resulting end point. This gave rise to the idea that a butterfly could produce small, undetectable changes in pressure which, if considered in the model, could over time alter the path of a tornado, delay it, or stop it altogether. (Edward N. Lorenz, 1972: "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?")

Figure: Two pendulums with an initial starting difference of only 1 arcsec (1/3600 of a degree).
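Lorenz's observation is easy to reproduce with any chaotic system. The sketch below does not use his weather model; it substitutes the logistic map as a simple chaotic stand-in, seeded with his rounded figures (.506127 versus .506), and shows the two runs parting company within a few dozen iterations:

```python
def logistic_run(x0, steps=50, r=4.0):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

full = logistic_run(0.506127)   # six decimal places
rounded = logistic_run(0.506)   # three decimal places, as Lorenz re-typed
gaps = [abs(a - b) for a, b in zip(full, rounded)]
print(f"initial gap: {gaps[0]:.6f}, largest gap within 50 steps: {max(gaps):.3f}")
```

A starting difference of about one part in a thousand grows until the two trajectories bear no relation to each other: sensitive dependence on initial conditions in miniature.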

Risk Management

The general perception amongst most project and risk managers that we can somehow control the future is, in my opinion, one of the most ill-conceived in risk management. However, we have made at least two advances in the right direction. Firstly, we now have a better understanding of the likelihood of unpleasant surprises; secondly, and more importantly, we are learning how to recognise their occurrence early on and subsequently to manage the consequences when they do occur.

Qualitative and Quantitative Risk

The biggest problem facing us is how to measure all these risks in terms of their potential likelihood, their possible consequences, their correlation and the public’s perception of them. Most organisations measure different risks using different tools. They use engineering estimates for property exposures, leading to MFLs (maximum foreseeable loss) and PMLs (probable maximum loss). Actuarial projections are employed for expected loss levels where sufficient loss data is available. Scenario analyses and


Table 1: The implications of the New Concept of Risk Leadership.



Monte Carlo simulations are used when data is thin, especially to answer ‘how much should I apply?’ questions. Probabilistic and quantitative risk assessments are used for toxicity estimates for drugs and chemicals, and to support public policy decisions. For political risks, managers rely on qualitative analyses by ‘experts’. When it comes to financial risks (credit, currency, interest rate and market), we are inundated with Greek letters (betas, thetas, and so on) and complex econometric models that are comprehensible only to the trained and initiated. The quantitative tools are often too abstract for laymen, whereas the qualitative tools lack mathematical rigour. Organisations need a combination of both tools, so that they can deliver sensible and practical assessments of their risks to their stakeholders. Finally, it is important to remember that the results of quantitative risk assessment should be continuously checked against one’s own intuition about what constitutes reasonable qualitative behaviour. When such a check reveals disagreement, the following possibilities must be considered:

1. A mistake has been made in the formal mathematical development;
2. The starting assumptions are incorrect and/or constitute too drastic an oversimplification;
3. One’s own intuition about the field is inadequately developed;
4. A penetrating new principle has been discovered.
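The use of Monte Carlo simulation "when data is thin", mentioned above, can be sketched in a few lines. The three work packages and their (low, likely, high) cost figures below are invented purely for illustration:

```python
import random

def simulate_total_cost(trials=20_000, seed=1):
    """Monte Carlo sketch: three hypothetical work packages, each with an
    invented (low, likely, high) cost estimate modelled as a triangular
    distribution. Returns the median and 90th-percentile total cost."""
    packages = [(80, 100, 150), (40, 60, 120), (20, 25, 60)]  # invented figures
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(low, high, likely) for low, likely, high in packages)
        for _ in range(trials)
    )
    return totals[trials // 2], totals[int(trials * 0.9)]

p50, p90 = simulate_total_cost()
print(f"median cost: {p50:.0f}, 90th percentile: {p90:.0f}")
```

Even with only rough three-point estimates, the simulation yields a distribution of outcomes rather than a single number, which is exactly what makes it useful for 'how much contingency should I apply?' questions.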

Tame Messes and Wicked Problems

One of the first areas to be investigated is whether our current single classification of projects is a correct assumption. The general view at present appears to treat them as linear, deterministic, predictable systems, where a complex system or problem can be reduced into simple forms for the purpose of analysis. It is then believed that the analysis of those individual parts will give an accurate insight into the working of the whole system – the strongly held feeling that science will explain everything. The use of Gantt charts with their critical paths, and quantitative risk models with their corresponding risk correlations, would support this view. However, this type of problem, which can be termed tame, appears to be only part of the story when it comes to defining our projects.

Tame problems are problems which have straightforward, simple linear causal relationships and can be solved by analytical methods, sometimes called the cascade or waterfall method. Here lessons can be learnt from past events and behaviours and applied to future problems, so that best practices and procedures can be identified. In contrast, ‘messes’ have high levels of system complexity and are clusters of interrelated or interdependent problems. Here the elements of the system are normally simple; the complexity lies in the nature of the interaction of its elements. Their principal characteristic is that they cannot be solved in isolation but need to be considered holistically. Here the solutions lie in the realm of systems thinking. Project management has introduced the concepts of Programme and Portfolio management to attempt to deal with this type of complexity and address the issues of interdependencies. Using strategies for dealing with messes is fine as long as most of us share an overriding social theory or social ethic; if we don’t, we face ‘wickedness’. Wicked problems are termed ‘divergent’, as opposed to ‘convergent’, problems. Wicked problems are characterised by high levels of behavioural complexity. What confuses real decision-making is that behavioural and dynamic complexities co-exist and interact in what we call wicked messes. Dynamic complexity requires high-level conceptual and systems thinking skills; behavioural complexity requires high levels of relationship and facilitative skills. The fact that problems cannot be solved in isolation from one another makes it even more difficult to deal with people’s differing assumptions and values; people who think differently must learn about and create a common reality, one which none of them initially understands adequately. The main thrust to the resolution of these types of problems is stakeholder participation and ‘satisficing’.

Many risk planning and forecasting exercises are still being undertaken on the basis of tame problems, assuming that the variables on which they are based are few, fully understood and able to be controlled. However, uncertainties in the economy, politics and society have become so great as to render this kind of risk management, which many projects and organisations still practise, counterproductive if not futile.



Chaos and Projects

At best, I believe projects should be considered as deterministic chaotic systems rather than tame problems. Here I am not using the term chaos as defined in the English language, which tends to be associated with absolute randomness and anarchy (the Oxford English Dictionary describes chaos as "complete disorder and confusion"), but as based on the chaos theory developed in the 1960s. This theory showed that in systems which have a degree of feedback incorporated in them, tiny differences in input can produce overwhelming differences in output (the so-called Butterfly Effect; see Box 1 [1]). Here chaos is defined as aperiodic (never repeating twice) banded dynamics (a finite range) of a deterministic system (definite rules) that is sensitive to initial conditions. This appears to describe projects much better than the linear, deterministic and predictable view: both randomness and order can exist simultaneously within such systems. The characteristic of these types of problem is that they are far from equilibrium, either amongst their parts or with their environment, and the system operates ‘at the edge of chaos’, where small changes in input can cause the project either to settle into a pattern or just as easily veer into total discord. For those who are sceptical, consider the failing project that receives new leadership: it can just as easily move into abject failure as settle into successful delivery, and at the outset we cannot predict with any certainty which one will prevail. At worst, projects are wicked messes.

Conclusion

How should the project and risk professional exist in this world of future uncertainty? Not by returning to a reliance on quantitative assessments and statistics where none exists. We need to embrace its complexities and understand the type of problem we face before deploying our armoury of tools and techniques to uncover a solution, be they the application of quantitative data or qualitative estimates. To address risk in the future tense we need to develop the concept of ‘risk leadership’, which consists of:

Guiding rather than prescribing
Adapting rather than formalising
Learning to live with complexity rather than simplifying
Inclusion rather than exclusion
Leading rather than managing

The implications of the new concept of risk leadership are described in Table 1.

What does this all mean? At the least it means we must apply a new approach to project and risk management for problems which are not tame; that we should look to enhance our understanding of the behavioural aspects of the profession and move away from a blind application of process and generic standards towards an informed implementation of guidance; that project and risk management is more of an art than a science; and that this truly is the best time to be alive and to be in project and risk management.

References

[1] J. Gleick. Chaos: Making a New Science. Penguin, 1987.


Keywords: Case Studies, Communicative Action, ERP, Experiment, Project Risk Management, Project Success.

1 Introduction

The question as to whether project risk management contributes to project success is, in the context of project management practitioners, essentially a question about the value of an instrument. An instrument that is employed by project managers during the planning and execution stages of a project, employed to secure project success, regardless of all manner of unexpected events and situations that may occur during project execution.

In order to answer the question, a research project [1] was conducted which was divided into four stages. The structure of this article embodies this staged approach, the first stage being a study of recent literature on the relationship between risk management and Information Technology (IT) project success. IT projects were chosen because they are well known for their frequent failure (see e.g. [2]), and because of the recommendation to use risk management more frequently in order to increase the success rate [3].

From the literature study it appeared that in order to an-swer the question about the contribution of project risk man-agement to IT project success, an additional view on projectrisk management and project success is necessary. This ad-ditional view is developed in the second stage of the re-search. Exploration of the additional view is done in thethird stage, by means of case studies of ERP implementa-tion projects. Finally, in stage four, an experiment is con-ducted in which the influence of a single risk managementactivity on project success is investigated. This article con-cludes with a section on theoretical implications and impli-cations for practitioners.

2 What does Literature tell us about Risk Man-agement and IT Project Success?

The conducted literature study investigated 29 papers,published between 1997 and 2009 in scientific journals, re-porting on the relationship between risk management andproject success in IT projects. The study demonstrates twomain approaches on how risk management is defined in the

Communicative Project Risk Management in IT Projects

Karel de Bakker

Project management practitioners and scientists assume that risk management contributes to project success throughbetter planning of time, money and requirements. However, current literature on the relation between risk managementand IT project success provides hardly any evidence for this assumption. Nevertheless, risk management is used frequentlyon IT projects. Findings from new research provide evidence that individual risk management activities are able to con-tribute to project success through "communicative effects". Risk management triggers or stimulates action taking, itinfluences and synchronizes stakeholders’ perceptions and expectations and it shapes inter-stakeholder relationships.These effects contribute to the success of the project.

Author

Karel de Bakker is a Senior Consultant for Het ExpertiseCentrum, The Netherlands. He received his PhD from theUniversity of Groningen, The Netherlands (2011), and hisMasters’ degree from the University of Enschede, TheNetherlands (1989). Hel has been a PMI certified project ma-nager (PMP) since 2004. His assignments brought him in contactwith various organisations, including ABN AMRO Bank, INGBank, KLPD (Netherlands Police Agency), KPN Telecom, andNS (Dutch Railways). Over the years, risk management becamean important element in his assignments. His scientific work onthe relation between risk management and project success waspublished in International Journal of Project Management,Project Management Journal and International Journal of ProjectOrganisation and Management. <[email protected]>

literature, one of them being the management approach.The management approach considers risk management asbeing an example of a rational problem solving process inwhich risks are identified, analysed, and responses are de-veloped and implemented. Evidence found in all investi-gated papers for the relationship between risk managementand project success is primarily anecdotal or not presentedat all.

Additional empirical findings indicate that the assumptions underpinning the management approach to risk management are often invalid. Firstly, IT projects contain risks for which there is no classical or statistical probability distribution available. These risks cannot be managed by means of the risk management process [4]. Secondly, project managers in IT projects show a tendency to deny the actual


Figure 1: Traditional View on Risk Management and its Relation to the Project. (Risk management, as instrumental action, influences the project, an instrumental object, and produces an instrumental effect.)

Figure 2: Adjusted (or New) View on Risk Management and its Relation to the Project. (Risk management, as social action, influences the project, a social object, and produces instrumental and additional effects.)

presence of risk; they avoid it, ignore it or delay their actions [5]. This behaviour is not in line with the assumed rational behaviour of actors. Thirdly, project stakeholders in general deliberately overestimate the benefits of the project and at the same time underestimate the project risks at the start of the project [6]. Finally, various authors (e.g. [7]) indicate that the complete sequence of risk management activities is often not followed in projects; consequently the assumption of rational problem solving is incorrect.

Not only is there very little evidence from recent literature that risk management contributes to IT project success, empirical findings thus far indicate it is also unlikely that risk management is able to contribute to IT project success. Taking into consideration the remarks made by various authors about the limitations of IT projects, risk management is able to contribute to IT project success only if the project: (1) has clear and fixed requirements, (2) uses a strict method of system development, and (3) has historical and applicable data available, collected from previous projects. The combination of these three criteria will only occasionally be met in IT projects. As an example we can consider the development of a software module of known functionality and function points by a software development organisation, certified on CMM level 4 or 5.

It remains remarkable that there is such a large gap between project risk management in theory and project risk management in practice. Findings from research indicate that the complete risk management process as described for instance in the PMI Body of Knowledge [8] is often not followed [9], or even that practitioners do not see the value of executing particular steps of the risk management process [7]. In addition, it is remarkable that both project management Bodies of Knowledge and established current literature ignore the results from research which indicate that the assumptions and mechanisms underpinning project risk management only work in specific situations, or do not work at all. This should at least lead to a discussion about the validity of certain elements of the Bodies of Knowledge, and to the adjustment of the project risk management process, which is claimed to be founded on good practice [8] or even Best Practice [10].

3 An Additional View on Project Risk Management

An important assumption in the current literature, underpinning both project management and the way risk management influences the project and consequently project success, is the assumption that projects take place in a reality that is known, and that this reality responds according to laws of nature the project stakeholders either know or may be able to know (see e.g. [11]). This so-called instrumentalism assumption defines project risk management, its effects, and the object on which project risk management works, i.e. the project, in instrumental terms. Figure 1 depicts the relation between risk management and the project in traditional terms, in other words under the assumption of instrumentalism.

Risk management may work well in situations in which the object of risk management can be described in terms of predictable behaviour (the instrumental context), for instance controlling an airplane or a nuclear power plant, or a piece of well defined software that must be created as part of an IT project. Risk management is then an analytical process in which information is collected and analysed on events that may negatively influence the behaviour of the object of risk management. However, projects, and particularly IT projects, generally consist of a combination of elements that contain both predictable and human behaviour; the latter of


Figure 3: Communicative and Instrumental Effects of Risk Management on Project Success. (Risk management activities, e.g. identification, registration, analysis, allocation and reporting, generate instrumental effects and communicative effects, which together influence the success of the project: an individual stakeholder opinion.)

which is not always predictable. The presence of human behaviour makes a project a social object, an object which does not behave completely predictably.

Furthermore, human behaviour, together with human interaction, plays a role in the risk management process itself. During the various activities of the risk management process, participants in these activities interact with each other. Risk management can then no longer be considered instrumental action, but should be considered social action instead. These interactions between participants in the risk management process may be able to create effects in addition to the assumed instrumental effects of risk management. Figure 2 presents this adjusted view on the relationship between risk management and the project.

This adjusted view, which considers risk management as being social action working on a social object, instead of instrumental action working on an instrumental object, leads to various changes in model definitions and assumptions compared to the traditional view.

The adjusted view considers project success to be the result of a personal evaluation of project outcome characteristics by each stakeholder individually (see e.g. [12]). Timely delivery, delivery within budget limits and delivery according to requirements, being the traditional objective project success criteria, may play an important role in this stakeholder evaluation process, but they are no longer the only outcomes that together determine if the project can be considered a success. Therefore, project success becomes opinionated project success, and is no longer considered as something that can be determined and measured only in objective terms.

The adjusted view, considering risk management in terms of social action, implies that risk management is a process in which participants interact with each other. In addition to the traditional view, which considers risk management only in terms of instrumental action and instrumental effects, the additional view assumes that interaction between participants, or social interaction, exists, which may lead to additional effects on the project and its success (see Figure 3). This research refers to these effects resulting from interaction as "communicative effects", and the research assumes that each risk management activity individually may be able to generate communicative effects and may therefore individually contribute to project success.

Generally speaking, this additional view on risk management creates an environment in which human behaviour and perception play central roles in terms of describing the effect of risk management and the success of the


Case 1
Sector: Food industry
Project description: SAP system implemented on two geographic locations in four organisational units. System used to support a number of different food production processes and financial activities.
Duration: 13 months
Additional information: Use of method for organisational change, not for project management. Time & material project contract. External project manager, hired by the customer, and not related to the IT supplier.

Case 2
Sector: Government
Project description: SAP system implemented on 40 locations. System used for production, issuing and administration of personalized cards that provide access to office buildings. SAP linked on all 40 locations to peripheral equipment (photo equipment, card printers).
Duration: 17 months
Additional information: Internal project with internal project manager. Limited number of external personnel. No formal project contract. Limited Prince2 methodological approach, combined with organization specific procedures and templates.

Case 3
Sector: Government
Project description: SAP system implemented on four locations. System used for scheduling duty rosters of around 3000 employees. Time critical project because of expiring licences of previous scheduling system.
Duration: 24 months (including feasibility study); 21 months excluding it.
Additional information: Internal project with internal project manager. Limited number of external personnel. No formal project contract. Limited Prince2 methodological approach, combined with organization specific procedures and templates.

Case 4
Sector: Energy
Project description: Creation from scratch of a new company, being part of a larger company. SAP designed and implemented to support all business processes of the new company. SAP system with high level of customization.
Duration: 9 months (for stage 1; time according to original plan, but with scope limited)
Additional information: The ERP project was part of a much larger project. Fixed price, fixed time, fixed scope contract with financial incentives. Project manager from IT supplier. Project restarted and re-scoped after failure of first attempt. Strict use of (internal) project management methodology, procedures and templates.

Case 5
Sector: Public utility (social housing)
Project description: ERP system based on Microsoft Dynamics Navision. Implemented to support various primary business processes, for instance: customer contact, contract administration, property maintenance.
Duration: 12 months
Additional information: Time and material contract. Project restart after failure of first attempt. Project manager from IT supplier organization. Limited Prince2 methodological approach.

Table 1A: Overview of Seven investigated ERP Implementation Projects (Cases 1-5).


Case 6
Sector: Public utility (social housing)
Project description: ERP system based on Microsoft Dynamics Navision. Implemented to support various primary business processes, for instance: customer contact, contract administration, …
Duration: 11 months
Additional information: Time and material contract. External project manager, hired by the customer organization and with no formal relation to the IT supplier. No formal project management methodology used.

Case 7
Sector: Petro-chemical industry
Project description: Divestment project. Selling all activities of one specific country to a new owner. Existing ERP systems related to the sold activities carved out of the company wide ERP system (mainly SAP) and handed over to the new owner.
Duration: 14 months (ready for hand-over as planned)
Additional information: The ERP project was part of a larger project. The ERP project budget was low (less than 5%) compared to the overall deal (approx. 400 million EUR). Internal project manager. Fixed time project, but delayed several times because of external factors. Internal project management guidelines and templates used.

Table 1B: Overview of Seven investigated ERP Implementation Projects (Cases 6-7).

¹ This typology of effects is based on The Theory of Communicative Action by Jürgen Habermas (1984) [13]. In order to avoid an excessively wide scope for this article, this theoretical background is not discussed here.

project. The additional view acknowledges the influence of stakeholders interacting with each other, and influencing each other through communication. By doing so, this additional view positions itself outside the strict instrumental or "traditional" project management approach that can be found in project management Bodies of Knowledge. However, the additional view does not deny the fact that risk management may influence project success in an instrumental way; it only states that in addition to the potential instrumental effect of risk management, there is a communicative effect. Given the limitations of the effectiveness of the instrumental effect, the influence of the communicative effect of risk management on project success may well be larger than the influence of the instrumental effect.

4 Results from Case Studies

Seven ERP implementation projects were investigated for the presence of communicative effects as a result of the project risk management process. Table 1 presents an overview of all investigated ERP implementation projects. A total of 19 stakeholders from the various projects were interviewed. Data collection took place between one and two months after project completion.

Considering project success, two projects score low on objective project success because of serious issues with time, budget and requirements; both projects had a restart. Four projects score medium on objective project success, all having minor issues with one or more of the objective success criteria. One project scores high on objective project success. Variation on opinionated project success is low. Stakeholders from the two low objective success projects score lower on opinionated project success than stakeholders from the other five projects, but based on the objective success scores, the difference is less than expected.

ERP implementation projects that participated in the research were selected based on the criterion that they had done "something" on risk management. The sample of projects therefore does not include projects that performed no risk management at all. Risk identification was conducted on all projects, in various formats including brainstorm sessions, moderated sessions and expert sessions. Risk analysis was carried out in five projects, but only in a rather basic way; none of the projects used techniques for quantitative risk analysis. Other risk management activities whose use was investigated in the projects are: the planning of the risk management process, the registration of risks, the allocation of risks to groups or individuals, the reporting of risks to stakeholders or stakeholder groups, and the control of risks. Actual use and format of these practices vary over the projects.
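To make the relation between these activities concrete, the risks produced by identification can be held in a simple risk register that supports registration, analysis, allocation (to an owner) and reporting. The sketch below is only an illustration; the class names, field names and example risks are my own assumptions, not artefacts from the investigated projects.

```python
# Minimal sketch of a risk register supporting the activities named in the
# text: registration, analysis, allocation and reporting. All names and
# example risks are illustrative assumptions, not taken from the study.

from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str              # produced by risk identification
    probability: float = 0.0      # analysis: chance of occurring (0.0 - 1.0)
    impact: float = 0.0           # analysis: consequence (e.g. cost in EUR)
    owner: str = "unassigned"     # allocation: responsible group/individual

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def register(self, risk: Risk) -> None:
        self.risks.append(risk)

    def allocate(self, description: str, owner: str) -> None:
        for r in self.risks:
            if r.description == description:
                r.owner = owner

    def report(self) -> list:
        # reporting: risks ordered by exposure (probability x impact)
        return sorted(self.risks,
                      key=lambda r: r.probability * r.impact,
                      reverse=True)

reg = RiskRegister()
reg.register(Risk("Key user unavailable during testing", 0.4, 50000))
reg.register(Risk("Interface to card printers fails", 0.2, 200000))
reg.allocate("Interface to card printers fails", "IT supplier")
top = reg.report()[0]
print(top.description, top.owner)  # highest-exposure risk and its owner
```

Note that, in line with the article's argument, the communicative value of such a register lies less in the numbers it stores than in the discussion stakeholders have while filling it in.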

The case studies' results demonstrate that, according to stakeholders, project risk management activities contribute to the perceived success of the project. Risk identification is considered by all stakeholders to be the risk management activity that contributes most to project success. Furthermore, stakeholders provide a large number of indications of how risk identification, in their view, contributes to project success. Finally, risk identification is considered by stakeholders to be able to contribute to project success through a number of different effects: Action, Perception, Expectation and Relation effects¹.

Risk identification triggers, initiates or stimulates action taking, or makes actions more effective (Action effect). It influences the perception of an individual stakeholder and synchronizes various stakeholders' perceptions (Perception effect). It influences the expectations of stakeholders towards the final project result or the expectations of stakeholder behaviour during project execution (Expectation effect). Finally, it contributes to the process of building and maintaining a working and interpersonal relationship between project stakeholders (Relation effect). Risk reporting is another risk management activity that influences project success through these four effects. Other risk management activities also generate effects, but fewer than the four effects mentioned for risk identification and reporting. The research data demonstrate a positive relation (both in quantity and in quality) between the effects generated through risk management activities and project success.

5 Results from an Experiment

The conclusion that individual risk management activities contribute to project success is based upon the opinions of individual stakeholders, meaning that the effect of risk management on project success is directly attributable to those effects as perceived by project stakeholders. Given the case study research setting, the possibilities for "objective" validation of these perceptions are limited. In order to create additional information on the effect of a specific risk management practice on project success, independently of various stakeholders' perceptions, an experiment was developed with the aim of answering the following sub-question: Does the use of a specific risk management practice influence objective project success and project success as perceived by project members?

Building on the results of the case studies, risk identification was chosen as the risk management activity for the experiment. Risk identification is the activity which, according to the results from the case studies, has the most impact on project success. Furthermore, a project generally starts with a risk identification session, which makes risk identification relatively easy to implement in an experimental setting. The experiment was conducted with 212 participants in 53 project groups. All participants were members of a project group where, in the project, each member had the same role. The project team had a common goal, which further diminished the chances of strategic behaviour by participants. The common goal situation provided the conditions for open communication and therefore for communicative effects generated by the risk management activity.

All project groups that performed risk identification before project execution used a risk prompt list to support the risk identification process. 17 groups did risk identification by discussing the risks with team members (type 3 groups); 18 groups did risk identification without discussing the risks with team members (type 2 groups). The control group projects (type 1 groups, 18 groups) conducted no risk identification at all before project execution. All project groups had to execute the same project, consisting of 20 tasks.

Figure 4: Trend Line, demonstrating the Influence of Risk Identification (RI) with or without Group Discussion on the Number of correctly Performed Tasks.

² Jonckheere-Terpstra test: J = 625, r = .36, p < .01, N = 53.

Results from the experiment demonstrate that project groups that conducted risk identification plus discussion


perform significantly better in the number of correctly completed tasks than the control groups that did not conduct risk identification at all. The number of correctly performed tasks is, in this experiment, one of the indicators of objective project success. A trend test² demonstrates a highly significant result, indicating that the number of correctly performed tasks increases when groups perform risk identification, and increases further when groups do risk identification plus discussion. Figure 4 illustrates this trend. Types of projects are on the X-axis. The Y-axis presents the average number of correctly performed tasks by the project team (Q3).
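The Jonckheere-Terpstra statistic used for such an ordered-alternative trend test (type 1 < type 2 < type 3) can be computed directly as a sum of pairwise Mann-Whitney counts over the ordered group types. The sketch below uses small hypothetical score lists for illustration only; they are not the study's data (the article reports J = 625, r = .36, p < .01 for N = 53 groups).

```python
# Sketch: Jonckheere-Terpstra J statistic for an ordered alternative.
# The group scores below are HYPOTHETICAL illustration data.

from itertools import combinations

def jonckheere_j(groups):
    """J = sum, over ordered group pairs (a < b), of the number of
    cross-pairs where the later group's value is larger; ties count 1/2."""
    j = 0.0
    for a, b in combinations(range(len(groups)), 2):
        for x in groups[a]:
            for y in groups[b]:
                if y > x:
                    j += 1.0
                elif y == x:
                    j += 0.5
    return j

# Hypothetical numbers of correctly performed tasks per project group
type1 = [12, 13, 14]   # no risk identification
type2 = [13, 15, 16]   # risk identification without discussion
type3 = [16, 17, 18]   # risk identification plus discussion

print(jonckheere_j([type1, type2, type3]))  # → 25.0
```

A J value close to its maximum (here 3 × 3 × 3 = 27 cross-pairs per the three ordered pairs, i.e. 27) indicates a strong increasing trend across the ordered group types; the p-value is then obtained from the null distribution of J.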

Perceived (opinionated) project success was measured by asking project groups to grade the project result. The analysis of grades demonstrates some remarkable research findings. Project groups that did risk identification plus discussion (type 3) score significantly better on the number of correctly performed tasks than control groups (type 1). After project groups have been informed about their own project result (and their own result only), all project groups value their project result equally. There is no difference in grades assigned by project groups from any of the group types. The result of project groups that conducted risk identification plus discussion is objectively better, but apparently this better result is not reflected in the opinion of the project groups who conducted risk identification plus discussion.

It is remarkable to see that, directly after project execution, before project groups are informed about their project result, project groups who conducted risk identification plus discussion are significantly more positive about their result than groups that conducted no risk identification or risk identification without communication. The grades for project success given by project groups directly after project execution indicate that project groups attribute positive effects to risk management in relation to project success.

6 Conclusions and Implications

The main conclusion of this research is: project risk management as described in handbooks for project management and project risk management [14][8] only occasionally contributes to project success if project risk management is considered solely in terms of instrumental action working on an instrumental object. If, on the other hand, project risk management is considered a set of activities in which actors interact and exchange information, also known as communicative action, working on a social object, individual risk management activities contribute to project success because the activities may generate Action, Perception, Expectation and Relation effects. A positive relation exists between the effects generated through risk management activities and project success.

The experiment demonstrates that an individual risk management activity is able to contribute to elements of project success. For this effect to occur, it is not necessary to measure or to quantify the risk. For instance, in a risk identification brainstorm, project stakeholders exchange information on what they individually see as the potential dangers for the project. Such an exchange of information may lead to adjustments of the expectations of individual actors and the creation of mindfulness [15]. Mindfulness includes awareness and attention; actors become sensitive to what is happening around them, and they know when and how to act in case of problems. This leads to a remarkable conclusion, which can be described as "the quantum effect" of project risk management, because its appearance is somewhat similar to what Werner Heisenberg in quantum mechanics described as the uncertainty principle.

Firstly, in order to influence the risk, it is not necessary to measure the risk. The experiment demonstrated that a risk prompt list, in which five risks were mentioned that were realistic but all of which had a very low probability of occurring, is enough to make project members aware of potential project risks and to influence their behaviour. As a result, the project groups who talked about the risks before project execution performed better and gave themselves a higher grade for the performance of their project. Secondly, as a result of this communicative effect, it is impossible to measure a risk without changing its probability. The moment the risk is discussed, stakeholders become influenced, and this consequently leads to an effect on the probability of the risk.

Based on the research findings, the main implication or recommendation for practitioners is to continue the use of risk management on IT projects. However, this research provides some important recommendations that should be taken into account when risk management is used on IT projects. Practitioners should be aware that the assumptions underlying the project risk management process as described in handbooks for project management (the instrumental view) are often not correct. Hence, only in specific situations is the risk management process able to contribute to project success in terms of "on-time, on-budget" delivery of a predefined IT system. If project risk management is used in a situation in which the assumptions are not met, it will inevitably lead to a situation in which project stakeholders think that the project risks are under control, where in fact they are not.

However, individual risk management activities such as risk identification or risk allocation generate non-instrumental effects, possibly in addition to instrumental effects. These non-instrumental or communicative effects occur as a result of interaction (discussion, exchange of information) between project stakeholders during the execution of risk management activities. Communicative effects stimulate instrumental action taking by stakeholders, and the effects create a common view among project stakeholders about the project situation by influencing stakeholders' perceptions and expectations and shaping the inter-stakeholder relationships. Practitioners should be aware that the creation of communicative effects can be stimulated by providing capacity for interaction during risk management activities. For instance, a risk identification brainstorm session or moderated meeting will generate more communicative effects than a risk identification session in which only checklists or questionnaires are used. For the communicative effects to occur it is not necessary that the complete risk management process is executed as described in handbooks for project management. Individual risk management activities each have their own effect on project success through the various communicative effects they may generate. The communicative effect contributes to project success, not only in terms of time, budget and quality, but also in terms of perceived success.

At the same time, practitioners should be aware that communicative effects with an effect on project success will not occur in every project situation, nor is the effect in all situations a positive one. If, for instance during risk identification, certain information about risks is labelled as being important for the project, where in fact these risks were relevant in an earlier project but not in the forthcoming project, the risk communication can lead project members to focus upon (what will later appear to be) the "wrong risks". By focussing upon the wrong risks, project members are unable to detect and respond to risks that have not been identified; one of the cases (case 7) of this research provides an example of this type of problem. Furthermore, communicative effects with a positive effect on project success occur predominantly in situations where information is not used strategically. In situations in which information on risks is not shared openly, the positive communicative effect may not occur. One other case (case 4) of this research provides some indications that not sharing risk related information between customer and IT supplier leads to lower communicative effects, resulting in lower project success.

References

[1] K. de Bakker. Dialogue on Risk – Effects of Project Risk Management on Project Success (diss.). Groningen, the Netherlands: University of Groningen, 2011. Download at: <http://www.debee.nl>.
[2] The Standish Group International. Chaos: A Recipe for Success, 1999. Retrieved from <http://www.standishgroup.com/sample_research/index.php>, (21.06.07).
[3] Royal Academy of Engineering. The Challenges of Complex IT Projects, 2004. Retrieved from <http://www.raeng.org.uk/news/publications/list>, (19.06.07).
[4] M.T. Pich, C.H. Loch, A. de Meyer. On Uncertainty, Ambiguity and Complexity in Project Management. Management Science 48(8), 1008–1023, 2002.
[5] E. Kutsch, M. Hall. Intervening conditions on the management of project risk: Dealing with uncertainty in information technology projects. International Journal of Project Management 23(8), 591–599, 2005.
[6] B. Flyvbjerg, N. Bruzelius, W. Rothengatter. Megaprojects and Risk – An Anatomy of Ambition. Cambridge, UK: Cambridge University Press, 2003.
[7] C. Besner, B. Hobbs. The perceived value and potential contribution of project management practices to project success. Project Management Journal 37(3), 37–48, 2006.
[8] Project Management Institute. A Guide to the Project Management Body of Knowledge (PMBOK®). Newtown Square, PA: Author, 2008.
[9] R.J. Voetsch, D.F. Cioffi, F.T. Anbari. Project risk management practices and their association with reported project success. In: Proceedings of the 6th IRNOP Project Research Conference, Turku, Finland, August 25-27, 2004.
[10] Office of Government Commerce. Managing Successful Projects with PRINCE2. Norwich, UK: The Stationery Office, 2009.
[11] T. Williams. Assessing and moving on from the dominant project management discourse in the light of project overruns. IEEE Transactions on Engineering Management 52(4), 497–508, 2005.
[12] N. Agarwal, U. Rathod. Defining "success" for software projects: An exploratory revelation. International Journal of Project Management 24(4), 358–370, 2006.
[13] J. Habermas. The Theory of Communicative Action – Reason and the Rationalization of Society. Boston, MA: Beacon Press, 1984.
[14] Association for Project Management (APM). Project Risk Analysis and Management Guide. Buckinghamshire, UK: Author, 2004.
[15] K.E. Weick, K.M. Sutcliffe. Managing the Unexpected. New York, NY: Wiley, 2007.

CEPIS UPGRADE Vol. XII, No. 5, December 2011 67© Novática

Risk Management

Farewell Edition

Decision-Making: A Dialogue between Project and Programme Environments

Manon Deguire

This paper proposes to revisit and examine the underlying thought processes which have led to our present state of DM knowledge at project and programme levels. The paper presents an overview of the Decision Making literature, observations and comments from practitioners, and proposes a DM framework which may lead to empowering project and programme managers in the future.

Author

Manon Deguire is a Managing Partner and founder of Valense Ltd., a PMI Global Registered Education Provider, which offers consultancy, training and research services in value, project, programme, portfolio and governance management. She has 25 years' work experience in the field of Clinical and Organizational Psychology in Canada, the USA, the UK and Europe and has extensive experience in teaching, as well as in project and programme management. From 1988 to 1996 Manon held a full-time academic post at McGill University (Montreal) which included both teaching at graduate and undergraduate levels as well as being the Programme Manager for the Clinical Training Program (P&OT), Faculty of Medicine. Her responsibilities involved heading and monitoring all professional development projects as well as accreditation processes for more than 120 'McGill Affiliated' hospital departments. Although Manon relocated to London in 1996, her more recent North American experience includes being actively involved in PMI® activities. More specifically, she was Director-at-Large of Professional Development for the Education Special Interest Group from 2004 to 2006 and has been a member of the Registered Educational Providers Advisory Group since 2005. Her responsibilities in this role involve presenting and facilitating activities for REPAG events worldwide. She is also a regular speaker at PMI® Congresses in the US and abroad, as well as PMI® Research Conferences and PMI® Chapter events. More recently she has initiated a working relationship with the Griffin Tate Group in the US and completed their 'train the trainer' course. She is an Adjunct Professor with the Lille Graduate School of Management and conducts research on Decision-Making in project and programme environments. Her ongoing involvement in academia and her experience as a practitioner with multinational organizations and multicultural groups give her a unique understanding of both the theory and practice of project-based organizations in general and project management in particular. She regularly teaches both PMP® and CAPM® Certification courses. Manon is currently finishing a PhD in Projects, Programs and Strategy at the ESC Lille School of Management (France); she holds a Masters degree in Clinical Psychology from the University of Montreal (CA) and a Masters degree in Organizational Psychology from Birkbeck College in London (UK). She is a certified PMP® and also holds both Prince2 Practitioner and MSP Advanced Practitioner certifications (UK). <[email protected]>

1 Decision Making

"Decision-making is considered to be the most crucial part of managerial work and organizational functioning."
Mintzberg, in [2, p. 829]

According to some definitions, a decision is an allocation of resources. For others, it can be likened to writing a cheque and delivering it to the payee. It is irrevocable, except that a new decision may reverse it. Similarly, the decision maker who has authority over the resources being allocated makes a decision. Presumably, he/she makes the decision in order to further some objective, which he/she hopes to achieve by allocating the resources [1].

Different definitions of what a decision is and involves abound in a literature that spreads through the knowledge of many centuries and all disciplines [2]. Decision Making (DM) is very important to most companies, and modern organizational definitions can be traced back to von Neumann and Morgenstern in 1947 [3], who developed a normative decision theory from the mathematical elaboration of utility theory applied to economic DM. Their approach was deeply rooted in sixteenth-century probability theory, has persisted until today and can be found relatively intact in present decision analysis models such as those defined under the linear decision processes. This well-known approach uses probability theory to structure and quantify the process of making choices among alternatives. Issues are structured and decomposed to small decisional levels, and re-aggregated with the underlying assumption that many good small decisions will lead to a good big decision. Analysis involves putting each fact in consequent order and deciding on its respective weight and importance.

Although most descriptive research in the area of DM concludes that humans tend to use both an automatic non-conscious thought process as well as a more controlled one when making decisions [4], the more controlled approach to DM remains the most important trend in both theoretical and practical models of DM. However, this dual thought process is possible because of the human mind's capability to create patterns from facts and experiences, store them in the registers of long-term memory and re-access them in the course of assessing and choosing options. Many authors refer to this mechanism as "intuitive DM", a term that has not gained much credibility in the business environment and is still looked down upon by many decision analysts.
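The classical decomposition described earlier, structuring a choice into weighted criteria and re-aggregating the parts into one "big" decision, can be sketched as a simple additive scoring model. This is an illustrative sketch only: the criteria, weights, scores and vendor names below are hypothetical and do not come from the paper.

```python
# Simple additive weighting: decompose a decision into criteria,
# weight each criterion, score each alternative, then re-aggregate.
# All names and numbers here are hypothetical examples.

def weighted_score(scores, weights):
    """Aggregate per-criterion scores (0-10) into a single weighted total."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * scores[criterion] for criterion, w in weights.items())

weights = {"cost": 0.40, "risk": 0.35, "strategic_fit": 0.25}

alternatives = {
    "vendor_A": {"cost": 7, "risk": 5, "strategic_fit": 8},
    "vendor_B": {"cost": 5, "risk": 8, "strategic_fit": 6},
}

# The "good big decision" is the alternative with the best aggregate score.
best = max(alternatives, key=lambda a: weighted_score(alternatives[a], weights))
```

Under these illustrative weights, vendor_A aggregates to 6.55 and vendor_B to 6.30, so the model selects vendor_A; changing the weights can reverse the ranking, which is precisely the sensitivity that this linear style of analysis makes explicit.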

Given the years during which modern Project Management was developed (as well as other management trends), it is not surprising to find that the more controlled, linear, mechanistic approach to DM permeates its literature, and the project context seems to have neglected the importance of the softer and/or more qualitative aspects of the management domain that are now being recognized as essential for good business to develop. Therefore, in the new context of projects and programmes, quantitative aspects of the DM process are progressively becoming secondary to such qualitative issues as the meaningfulness of a decision for different stakeholders and for the overall organization.

Project managers are repeatedly expected to listen to different stakeholders' needs and account for the numerous qualitative and quantitative variables when making decisions; however, both information overload and organizational constraints usually make this difficult to implement, and very little guidance can be found in the project literature. If anything, the overwhelming importance of the DM issue seems to be accepted as common knowledge for project managers, as it is not mentioned or explored in the PMBOK® Guide [5] or in other popular project approaches despite the bulk of recent research and growing interest in this domain. In spite of the increasing importance placed on DM knowledge and skills, many project and programme managers continue to struggle with DM, which can stand in the way of career progression and may be one of the primary factors preventing project and programme success.

Project management practice is permeated with the thought that, in order to facilitate DM in the project context, simple (linear) evaluation tools should be widely used. However, it has now long been documented that these decision-support tools are no longer sufficient when project managers' roles have grown to accommodate the ever-changing complexity of the business environment. This situation has added considerably to the number of variables and the dimensions of an already complex web of relationships brought about by the stakeholder focus. With such changes as the implementation of Project Management Offices, Portfolio Management, Programme Management and Project-Based Organizations, project managers are now called upon to interact with an ever-expanding pool of stakeholders; other tools, such as meetings, reports and electronic networks, are also important. Intuition, judgment and vision have become essential for successful strategic project and programme management.

Without an appropriate framework, some authors have suggested, managers do not characteristically solve problems but only apply rules and copy solutions from others [6]. Managers do not seem to use new decision-support tools that address potentially all-encompassing sector-based elements, such as flexibility, organizational impact, communication and adaptability, nor technological and employee developments. There is therefore a potential for managerial application of new, value-creation decision-support tools. Because these are not mature tools, in the first instance they might be introduced in a more qualitative way – 'a way of thinking', as suggested in [7] – to reduce managerial skepticism. Recent decision-support tools might be fruitfully combined with traditional tools to address critical elements and systematize strategic project management.

It is now a well-accepted fact that traditional problem-solving techniques are no longer sufficient, as they lead to the restrictive, linear Cartesian conclusions on which decisions were usually based in the past. Instead, practitioners need to be able to construct and reconstruct the body of knowledge according to the demands and needs of their ongoing practice [8]. Reflecting, questioning and creating processes must gain formal status in the workplace [9].

In [10] it is implied that management is a series of DM processes and asserted that DM is at the heart of executive activity in business. In the new business world, decisions need to be made fast and most often will need to evolve in time. However, most of the research is based on a traditional linear understanding of the DM process. In this linear model, predictions are made about a known future and decisions are made at the start of a project, taking for granted that the future will remain an extension of the past.

2 DM at Project Level

The commonly accepted definition of a project as a unique interrelated set of tasks with a beginning, an end and a well-defined outcome [5] assumes that everyone can identify the tasks at the outset, provide contingency alternatives, and maintain a consistent project vision throughout the course of the project [11]. The 'performance paradigm' [12][13] used to guide project management holds true only under stable conditions or in a time-limited, change-limited context [14][15]. This is acceptable as long as, by definition, the project is a time-limited activity and, for the sake of theoretical integrity, is restricted to "the foreseeable future."

The traditional DM model has provided project managers with a logical step-by-step sequence for making a decision. This is typical of models proposed in the decision-making literature of corporate planning and management science of the past. It describes how decisions should be made, rather than how they are made. The ability of this process to deliver best decisions rests upon the activities that make up the process and the order in which they are attended to. In this framework, the process of defining a problem is similar to making a medical diagnosis: the performance gap becomes a symptom of problems in the organization's health, and identification of the problem is followed by a search for alternative solutions. The purpose of this phase of the decision-making process is to seek the best solution [16, Ch. 1]. Several authors have identified a basic structure, or shared logic, underlying how organizations and decision-makers handle decisions. Three main decision-making phases can be defined: Identification, by which situations that require a decision-making response come to be recognized; Development, involving two basic routines (a search routine for locating ready-made solutions and a design routine to modify or develop custom-made solutions); and Selection, with its three routines (screening, evaluation-choice and authorization) [17].
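The shared decision-process structure just cited can be written down as plain data. The phase and routine names below come from the text [17]; the dictionary layout and helper function are this sketch's own illustrative choices.

```python
# The three decision-making phases and the routines the text names for them.
# Encoding is this sketch's own choice; the names follow [17].

DECISION_PHASES = {
    # situations requiring a decision-making response come to be recognized
    # (the text names no sub-routines for this phase)
    "identification": [],
    # locate ready-made solutions, or design custom-made ones
    "development": ["search", "design"],
    # filter, compare and formally approve a solution
    "selection": ["screening", "evaluation-choice", "authorization"],
}

def routines_for(phase):
    """Return the routines the model names for a given phase."""
    return DECISION_PHASES[phase]
```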

3 DM at Programme Level

More recently, many organizations have felt a need to develop further towards a fully projectised structure, which goes beyond a simple portfolio approach and involves the management of strategic decisions through programmes [18][19]. This move has somewhat shifted the responsibilities and decision-making roles of project and programme managers. At this level, several projects need to be managed together in order to create synergies and deliver benefits to the organization, rather than delivering a specific product or service in isolation, and in most organizations programme managers are actively working within a paradox. They have an official role in a legitimate control system (project level), facilitating an integrated transactional change process, and simultaneously participate in a shadow system in which no one is in control [20].

A mechanistic style of management warranting a more rational and linear approach to DM is appropriate when goals are clear and little uncertainty exists in the prevailing environment [11][21]. Programme management practice is not meant to replace this management focus; rather, it encompasses it in a larger context. Here, managers cannot control their organization to the degree that the mechanistic perspective implies, but they can see the direction of its evolution [22]. When several variables are added to a system or when the environment is changed, the relationships quickly lose any resemblance to linearity [23]. This has been raised by many authors in reference to strategic issues such as the organization's competitive position, the achievement of the programme's benefits and the effects of changes on the programme business case [24][25]. These same issues have traditionally been processed through a project view of change control rather than a strategic view of change management, one of the main drawbacks being that these standard approaches focus on a linear programme lifecycle [26][27]. According to these authors, focus on early definition and control of scope severely restricts flexibility, thus negating the value of having a programme. Furthermore, insistence on a rigid life cycle intrinsically limits the ability of the programme to adapt in response to evolving business strategy [26].

When studying the implementation of strategic projects, Grundy [25] found that cognitive, emotional and territorial themes were so intrinsically interwoven with the decision-making process that he suggested using the concept of "muddling through" originally introduced by Lindblom in 1959 [28]. Similarly unsatisfied with the rational model of decision-making at top management levels, Isenberg stated in [29] that managers "rely heavily on a mix of intuition and disciplined analysis" and "might improve their thinking by combining rational analysis with intuition, imagination and rules of thumb" (p. 105).

Much of the literature concerning decision-making at higher management levels seems to manifest perplexity and more questions than answers. By increasing our knowledge in this domain and providing an appropriate framework, project and programme managers might find material for reflection and possibly enhance their skills to better fit each environment.

4 Discovering Project and Programme Level Views

Beer [30] felt that most organizational research was irrelevant to practitioners because practitioners worked in a world of chaos and complex systems, whereas research was still about simple and equilibrated systems operated by researchers who maintain their objectivity. In order to respond to such concerns, this research project was set in a participatory paradigm [31] and uses a mix of observation and semi-structured interviews. The interview questions are based on the theoretical framework that was developed from the literature review and designed to capture the complex web of thought processes leading to decisions. The main objective was to uncover characteristics of linear and non-linear decision situations at project and programme levels. All respondents were either project or programme managers and had a good understanding of the differences between these roles and responsibilities.

Project managers typically described their working environment as consisting of "the team of people on the project", and DM activities involved either these specific people or the project-specific tasks and goals. DM analysis was often restricted to project-level variables and remained confined to the scope limits and constraints of the project.

On the other hand, as this example clearly demonstrates, one programme manager described her work environment from an organizational point of view, and her discourse was not programme specific: "The programme manager has to relate not only to the different projects involved in the programme, but also to the organization in terms of people (horizontal and vertical relationships) as well as the short, medium and long term strategy". This view was also coherent with how project managers in our study perceive the programme managers' roles and responsibilities.

Programme managers were described as seeing things from above, implying that the thought processes used at their level of analysis are different from those useful to oversee one project. The general impression is one of managing many ongoing concurrent decisions rather than a sequenced series of bounded decisions. A typical response from a project manager describing the programme management role was: "programme managers look down from above at different projects and need to pace several projects and the resources involved in groups of projects together." When describing her own role, one programme manager states: "developing strategic goals that are in line with the governance is really important, that's one part of the job. Then figuring out how to deliver that strategy is the other part."

Project managers speak of themselves, and are referred to by programme managers, as dealing with single projects and having to make more sequenced, isolated decisions (technical, human related…). Decisions at this level are referred to as being more independent from one another and sequenced in time. One decision is followed by resolution until another decision has to be made. Each decision is more discrete in nature (technical, human resource, procurement related) whereas programme decisions are often interrelated, covering many areas simultaneously.

One project manager described his work in the following way: "My projects have a beginning and an end. I am involved mainly in engineering projects at the moment and they have specific finish dates." A programme manager's description of the project manager's role was that "a project manager deals more precisely with things like budgets and constraints of the project that they are in charge of, they seem to operate within specific parameters." The vocabulary used to describe decisions at the project level was generally more precise and specific.

Both project and programme managers feel that DM activities occupy a major part of their day or time at work. It was extremely difficult for both groups of respondents to evaluate the number of decisions taken in the course of any fixed period of time (day, month…). A typical response from one project manager illustrates this when he says: "I would say I can spend the better part of my nine hours at work making decisions, from small ones like deciding to change activity or big ones like for example a large screen project […] this could mean making hundreds of decisions per day." Similarly, one programme manager states that "A great deal of time is devoted to decision making activities at the beginning of the programme, perhaps 100% of my time gets devoted to it at this phase as I am looking at things like risks involved."

Although it was difficult for both project and programme managers to quantify the time spent on DM activities or the number of decisions involved in their work, their subjective evaluations all converged to say that they felt they spent a great deal of time in DM activities.

Both groups also feel that in the initial phase of the project or programme they spend almost all their time making and taking decisions. This was described as an acute DM time. Later-phase decisions seem to focus on more specific issues for project managers, described as either technical or human relation issues. Programme managers mention the technical issues sporadically and mainly in the context of understanding what is going on. But unlike for project managers, technical versus human resources is not one of the important dichotomies in the themes of their DM discourse. When technical DM was discussed, it was usually in terms of grasping a better understanding of what people actually did, or of the skills or appropriate environment to enhance their performance, but not of actually solving the technical problem at hand or making any decision about it.

When questioned about the use of specific DM tools, one project manager spontaneously described the traditional rational method of DM: "When the problem is purely a technical one, it is easy in a way because we have tools to measure what is going on like oscilloscopes and things. Even if it looks like a complicated problem with thousands of cables, then we look at the symptoms and we come up with a diagnosis; often this is based just on our experience of similar problems [...] We have a discussion on how to go about it, how to measure it, we cut the problem in half and we again look at the symptoms. So, in a way, in the decision making process we break down the problem to something that we can observe or measure." This description could have been taken from a number of DM texts that are concerned with the way decisions should be made. In fact, for project managers, most purely technical decisions seem to follow the traditional DM model, breaking down into more manageable small decisions and exploring alternatives against each other. However, even in this group, many state that few decisions are purely technical and say that most decisions involve a human component that varies in importance. The importance of this aspect ranges from at least equal to outweighing the technical aspect. Together with the traditional DM breakdown process, experience is usually mentioned as a key factor of the DM process.

Figure 1: Decision-Making Model in Projects.

Figure 2: DM Model in Programmes.

Contrary to the discourse held by project managers, there are no such straightforward textbook answers from programme managers. This could be simply symptomatic of the sample; however, programme managers describe an iterative, ongoing process of information gathering in order to make sense of holistic situations. One programme manager saw herself as constantly gathering information in order to organize it in a cohesive way. Talking about the programme she is presently involved in, she described the process in the following words: "It involves many different people at different levels and I need to set time aside to understand exactly what is going on. Then, I will need to get back to them and formulate how it all fits in together, but I need to give myself some time to get my head around it."

5 Discussion

The data analysis shows that project managers seem to have a natural predisposition toward using a more traditional and structured approach to DM. This observation can be accounted for in more than one way, and the research method employed does not enable the establishment of causal relationships. The difference could be caused by the nature of their roles and responsibilities, or people who have personal affinities for this type of DM approach may tend to be attracted to this type of work. Further psychological testing would be necessary to establish this second type of relationship. Nevertheless, project managers have described logical step-by-step sequences that could actually have been used as examples for the typical models proposed in the DM literature, such as those described in [16] and [17]. Although critics of this approach have outlined the fact that the ability of this process to deliver best decisions rests upon the activities that make up the process and the order in which they are attended to, the project managers interviewed seem comfortable with, and skilled at, using this method to resolve problems.

Within this DM model, project managers also tend to use a process of deductive reasoning more often than programme managers, who have described inductive reasoning as a preferential thought process when engaged in DM activities. Aristotle, Thales and Pythagoras first described deductive reasoning around 600 to 300 B.C. This is the type of reasoning that proceeds from general principles or premises to derive particular information (Merriam-Webster). It is characteristic of most linear DM tools used in the context of high certainty. These tools are aimed at achieving an optimal solution to a problem that has been modeled with two essential requirements:

a) each of the variables involved in the decision-making process behaves in a linear fashion, and

b) the number of feasible solutions is limited by constraints on the solution.
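These two requirements can be made concrete in a toy model: a purely linear objective (requirement a) over a feasible set bounded by linear constraints (requirement b). The variables, coefficients and bounds below are hypothetical, and a model of any realistic size would use a linear-programming solver; exhaustive search is used here only to keep the sketch self-contained.

```python
# Toy linear decision model: choose integer effort levels (a, b)
# to maximize a linear objective subject to linear constraints.
# All coefficients and bounds are hypothetical examples.

from itertools import product

def value(a, b):
    # requirement (a): every variable enters the objective linearly
    return 3 * a + 2 * b

def feasible(a, b):
    # requirement (b): constraints limit the set of feasible solutions
    return a + b <= 10 and a <= 6

# exhaustive search over the small feasible grid
candidates = [(a, b) for a, b in product(range(11), repeat=2) if feasible(a, b)]
best = max(candidates, key=lambda p: value(*p))
```

With these numbers the optimum is a = 6, b = 4 (value 26): the binding constraints, not the objective alone, determine the chosen alternative, which is the sense in which the feasible set must be "limited by constraints".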

These tools rely almost entirely on the logic and basic underlying assumptions of statistical analysis, regression analysis, past examples and the linear expectations and predictions they stimulate. A good example is the story told by Aristotle of how Thales used predictive logic to deduce, from accumulated historical data, that the next season's olive crop would be a very large one, and bought all the olive presses, making a fortune in the process. However, given that deductive reasoning is dependent on its premises, a false premise can lead to a false result. In the best circumstances, results from deductive reasoning are typically qualified as non-false conclusions such as: "All humans are mortal. Paul is a human → Paul is mortal".

From the project managers' perspective, the project's basic assumptions and constraints are the starting premises for all further decisional processes. In fact, these initial conditions of the project environment act as limits or boundaries, necessary for this type of DM process to be effective. Project managers generally feel that most large decisions are actually made during the first phases of the project, before and during the planning stage. Project management typically delivers outputs in the form of products and services, and most project decisions are made to commit to the achievement of these specific outputs [32]. This perspective implies that a series of small decisions, which amount to the project plan, are made during the planning phase and finally add up to what is referred to as a large decision: the approved project plan. All these decisions, which shape the project, are made at the onset of the project. All later decisions are considered less important, more specific, and aimed at problem solving, often limited to one domain of knowledge at a time (i.e. technical, human relations…). Because most large decisions have been made at the onset, once the scope is defined, the number of possible dependent variables in the DM process is limited. The number of significant stakeholders involved is also limited, and the overall situation is described as limited to the project's immediate environment. Much of the DM follows a relatively traditional structured model to which the deductive thought process seems to adapt readily. Figure 1 illustrates this DM model for projects.

6 Programme Management Framework

A particularly interesting finding is the fact that deductive reasoning does not seem quite as popular or as universally called for in the DM processes of the programme managers we interviewed. However, the use of inductive reasoning seems more popular than it is for project managers. Deductive reasoning applies general principles to reach specific conclusions, whereas inductive reasoning examines specific information, perhaps many pieces of specific information, to derive a general principle.

A well-known example of this type of thought process is found in the story of Isaac Newton. By observing and thinking about phenomena such as how apples fall and how the planets move, he induced the theory of gravity. In much the same way, programme managers relate stories about having to collect information through observation, questions and numerous exchanges in order to put the pieces together into a cohesive story to manage the programme. The use of analogy (plausible conclusion) is often apparent in the programme managers’ discourse. This process uses comparisons, such as between the atom and the solar system, and the DM process is then based on the solutions of similar past problems, intuition, or what is often referred to as experience. Contrary to project management, where most decisions are taken to commit to the achievement of specific outputs, programme management typically delivers outcomes in the form of benefits, and business case decisions are taken over longer periods of time, depending on the number of projects that are progressively integrated into the programme and on the timescales of these different projects [32].

These decisions increasingly commit an organization to the achievement of the outcomes or benefits, and the DM period, although important at the beginning, continues progressively as the situation evolves to accommodate the changes in this larger environment. Typical responses from programme managers tend to converge toward an ongoing series of large decisions (affecting the totality of entire projects) as the programme evolves over time. This can be compared to the project-level discourse, which described large decisions at the onset and smaller ones (not affecting the overall business case of the project) as the project evolved. This is in keeping with the fact that, since programmes deliver benefits as opposed to specific products or services, the limits of the programme environment are not as specific or as clearly defined as those of the project. Organizational benefits are inherently linked to organizational strategy, value systems, culture, vision and mission. This creates an unbounded environment, and basic assumptions are not as clear as in the project environment. This could account for the fact that deductive thought processes are less suited than inductive ones to the DM processes of programme managers.

7 Conclusion

Both project and programme managers were unanimous in recognizing the importance of, and the amount of time spent in, decision-making activities, and in acknowledging that further knowledge is needed in this domain.

It would seem that a more mechanistic style of management, warranting a more rational and linear approach to decision making, is appropriate when goals are clear and little uncertainty exists in the prevailing environment. The time-limited definition of projects makes them well adapted to this performance paradigm.

These observations do not aim to lessen the requirements for traditional DM, but highlight the fact that programme management DM practice encompasses a larger context. Here, managers cannot control their organizations to the degree that the mechanistic perspective implies, but have to develop an awareness of their future evolution. The implications are readily felt at the decisional level: when several variables are added to a system, or when the environment is changed, relationships quickly lose any semblance of linearity.

Finally, this dialogue has highlighted the fact that the DM processes at project and programme level differ significantly in the timing, pacing and number of major decisions, as well as in the nature of the DM processes employed. Most large or important project decisions are bound by the project’s basic assumptions, and project managers tend to have a preference for deductive mental processes when making decisions. The occurrence of large or important programme decisions seems to persist throughout the programme life cycle, as they are prompted by setting the assumptions for each project as these kick off. Because the programme delivers benefits, and these cannot be defined as clearly as products or services, its environment is not as clearly defined or bound by set basic assumptions, and inductive reasoning seems better suited to meeting the programme managers’ decision-making needs.

References

[1] T. Spradlin. A Lexicon of Decision Making. DSSResources.COM, 03/05/2004. Extracted from <http://dssresources.com/papers/features/spradlin/spradlin03052004.html> on 12 Jan 2007.

[2] R.B. Sambharya. Organizational decisions in multinational corporations: An empirical study. International Journal of Management, 11, 827-838, 1994.

[3] J. von Neumann, O. Morgenstern. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press, 1947.

[4] R. Hastie, M. Dawes. Rational Choice in an Uncertain World. Thousand Oaks, CA: Sage Publications, Inc., 2001.

[5] PMI. A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 4th ed., 2008.

[6] J.G. March. How decisions happen in organizations. Human-Computer Interaction, 6(2), 95-117, 1991.

[7] M. Amram, N. Kulatilaka. Real Options: Managing Strategic Investment in an Uncertain World (1st ed.). Boston, Massachusetts: Harvard Business School Press, 1999.

[8] D.A. Schön. Educating the Reflective Practitioner. London: Jossey-Bass, 1987.

[9] C. Bredillet. Knowledge management and organizational learning. In P.W.G. Morris & J.K. Pinto (Eds.), The Wiley Project Management Resource Book. New York, NY: John Wiley and Sons, 2004.

[10] R.M. Cyert, H.A. Simon, D.B. Trow. Observations of a business decision. The Journal of Business, 29(4), 237-248, 1956.

[11] M.T. Pich, C.H. Loch, A. De Meyer. On Uncertainty, Ambiguity, and Complexity in Project Management. Management Science, 48(8), 1008-1023, Aug. 2002.

[12] M. Thiry. Combining value and project management into an effective programme management model. International Journal of Project Management, 20(3), 221-228, Special Issue April 2002; and Proceedings of the 4th Annual Project Management Institute-Europe Conference [CD-ROM].

[13] M. Thiry. The development of a strategic decision management model: An analytic induction research process based on the combination of project and value management. Proceedings of the 2nd Project Management Institute Research Conference, 482-492, 2002.

[14] Standish Group International. The CHAOS Report, 1994. Retrieved 25 Feb 2000 from <http://www.standishgroup.com/sample_research/chaos_1994_1.php>.

[15] KPMG. "What went wrong? Unsuccessful information technology projects", 1997. Retrieved 10 Mar 2000 from <http://audit.kpmg.ca/vl/surveys/it_wrong.htm>.

[16] D. Jennings. Strategic decision making. In D. Jennings & S. Wattam (Eds.), Decision Making: An Integrated Approach (2nd ed., pp. 251-282). Harlow, UK: Prentice Hall Pearson, 1998.

[17] H. Mintzberg, D. Raisinghani, A. Theoret. The structure of "unstructured" decision processes. Administrative Science Quarterly, June 1976, 246-275.

[18] T.J. Moore. An evolving program management maturity model: Integrating program and project management. Proceedings of the Project Management Institute’s 31st Annual Seminars & Symposium, 2000 [CD-ROM].

[19] D. Richards. Implementing a corporate programme office. Proceedings of the 4th Project Management Institute-Europe Conference, 2001 [CD-ROM].

[20] P. Shaw. Intervening in the shadow systems of organizations: consulting from a complexity perspective. Journal of Organizational Change Management, 10(3), 235-250, 1997.

[21] PMCC. A Guidebook of Project and Program Management for Enterprise Innovation (P2M), Summary Translation, Revised Edition. Project Management Professionals Certification Center, Japan, 2002.

[22] M. Santosus. Simple, Yet Complex. CIO Enterprise Magazine, April 15, 1998. Retrieved 20 Jan 2004 from <http://www.cio.com/archive/enterprise/041598_qanda.html>. Interview of R. Lewin and B. Regine based on their book "Soul at Work: Complexity Theory and Business" (published in 2000 by Simon & Schuster).

[23] J.W. Begun. Chaos and complexity frontiers of organization science. Journal of Management Inquiry, 3(4), 329-335, 1994.

[24] M. Görög, N. Smith. Project Management for Managers. Project Management Institute, Sylva, NC, 1999.

[25] T. Grundy. Strategic project management and strategic behaviour. International Journal of Project Management, 18, 93-103, 2000.

[26] M. Lycett, A. Rassau, J. Danson. Programme management: a critical review. International Journal of Project Management, 22, 289-299, 2004.

[27] M. Thiry. FOrDAD: A Program Management Life-Cycle Process. International Journal of Project Management, 22(3), 245-252, April 2004.

[28] C.E. Lindblom. The science of muddling through. Public Administration Review, 19, 79-88, 1959.

[29] D.J. Isenberg. How senior managers think. Harvard Business Review, Nov/Dec 1984, 80.

[30] M. Beer. Why management research findings are unimplementable: An action science perspective. Reflections, The SoL MIT Press-Society for Organizational Learning Journal on Knowledge, Learning and Change, 2(3), 58-65, 2001.

[31] J. Heron, P. Reason. A participatory inquiry paradigm. Qualitative Inquiry, 3, 274-294, 1997.

[32] OGC. Managing Successful Programmes, Eighth Impression. The Stationery Office, London, 2003.


CEPIS UPGRADE Vol. XII, No. 5, December 2011 75© Novática

Risk Management

Farewell Edition

Decisions in an Uncertain World: Strategic Project Risk Appraisal

Elaine Harris

This article is developed from the author’s book on strategic project risk appraisal [1] and her special report on project management for the ICAEW [2]. The book is based on over eight years of research in the area of risk and uncertainty in strategic decision making, including a project funded by CIMA [3], and explores the strategic-level risks encountered by managers involved in different types of project. The special report classifies these using the suits from a pack of cards. This article illustrates the key risks for three types of project, including IT projects, and suggests how managers can deal with these risks. It makes a link between strategic analysis, risk assessment and project management, offering a new approach to thinking about project risk management.

Author

Elaine Harris is Professor of Accounting and Management and Director of the Business School at the University of Roehampton in London, United Kingdom. She is author of Gower Publishing’s Strategic Project Risk Appraisal and Management and Managing Editor of Emerald’s Journal of Applied Accounting Research (JAAR). She chairs the Management Control Association (MCA), a network of researchers working in the area of control systems and human behaviour in organisations. <[email protected]>

Keywords: Decisions, Managerial Judgement, Project Appraisal, Risk, Uncertainty.

1 Investing in Projects in an Uncertain World

Projects are often thought of as a sequence of activities with a life cycle from start to finish. One of the biggest problems at or before the start is being able to foresee the end, at some time in the future. Uncertainty poses a range of issues for project planning and risk assessment. If we think of projects as temporary endeavours, not all outcomes may be measurable by the end, where lasting benefits may be desirable. This raises the problem of how we judge projects to be successful. Performance of projects has typically been measured by the three constraints of time, money and quality. Whilst it may be easy to ascertain whether a project is delivered on time and within budget, it is harder to assess quality, especially when a project is first delivered. Many projects, even those that were famously late and well over budget like the Sydney Opera House, can become icons in society and be perceived as very successful after a longer period of time. The classic issue in project management is that only a small minority of projects achieve success in all three measures, so academics have been searching for better ways to measure the success of projects, which involves unpicking ‘quality’, and asking in whose eyes projects are perceived to succeed or fail [2].

All strategic decisions that select which projects an organisation should invest in are taken without certain knowledge of what the future will hold and how successful the project will be. Faced with this uncertainty, we can attempt to predict the factors that can impact on a project. Once we can identify these factors and their possible impacts, we can call them risks and attempt to analyse and respond to them. Risks can be both positive, such as embedded opportunities, perhaps to do more business with a new client or customer in future, and negative, things that can go wrong, and the latter indeed require more focus in most risk management processes. Project risk assessment should begin before the organisation makes its decision about whether to undertake a project or, if faced with several options, which alternative to choose.

One common weakness in the approach that organisations take to project risk management is the failure to identify the sources of project risk early enough, before the organisation commits resources to the project (appraisal stage). Another is not to share that risk assessment information with project managers so that they can develop suitable risk management strategies. Through action research in a large European logistics company, a new project risk assessment technique (Pragmatix®) has been developed to overcome these problems. It provides an alternative method for risk identification, ongoing risk management, project review and learning. This technique has been applied to eight of the most common types of projects that organisations experience.



2 Project Typology

Whilst the definition of a project as a temporary activity with a start and finish implies that each project will be different in some way from previous projects, there are many which share common characteristics. Table 1 shows the most commonly experienced projects, informed by finance professionals in a recent survey. Each is marked with a suit from a pack of cards, which attempts to classify projects as follows:

Hearts – need to engage participants’ hearts and minds to succeed
Clubs – need to work to a fixed schedule of events
Diamonds – products need to capture the imagination and look attractive in the marketplace
Spades – physical structures, e.g. buildings, roads, bridges, tunnels

This article features three of the types of project (1, 2 and 6) shown in Table 1, to give a flavour of the research findings.

3 Project Appraisal and Selection

In order to generate a suitable project proposal for this purpose, the project needs to be scoped, and alternative options may need to be developed from which the most suitable option may be selected. The way the project is defined and described in presenting a business case for investment can influence decision makers. It is important for senior managers, both financial and non-financial, to understand the underlying psychological issues in managerial judgement, such as heuristics (using mental models, personal bias and rules of thumb), framing (use of positive, negative or emotive language in the presentation of data) and consensus (use of political lobbying and social practice to build support for a case). These behaviours can be positively encouraged to draw on the valuable knowledge and experience of organisational members, or impact negatively, for example status quo bias creating barriers to change [3].

In many organisations it is possible to observe bottom-up ideas being translated into approved projects by a team at business unit level working up a business case to justify a proposal, using standard capital budgeting templates and procedures for group board approval (Figure 1). There are feedback loops, and projects may be delayed while sufficient information is gathered, analysed and presented. This process can take days (for example corporate events), months (for example new client or business development) or even years (for example new products where health and safety features in approval, such as drugs or aeroplanes). Where delay is feasible, where the opportunity will not be lost in competitive market situations, a real options approach is possible. The use of the term real options here denotes an approach or way of thinking, not a calculable risk as in derivatives. It simply means that there is an option to delay, disaggregate or redefine the project decision to maximise the benefit of options, for example to build in embedded opportunities for further business. This may be more important in difficult economic times, as capital may be rationed.

However, where projects are initiated by senior management in a top-down process, the usual steps in capital investment appraisal may not be followed, as there may be external pressure brought to bear on a chief executive or finance director, for example in business acquisitions, strategic alliances, etc.

Table 1: Types of Projects. (Source: [2, p. 4].)

Type of project – Characteristics
1. IT/systems dev’t – Advanced technology manufacturing or new information systems
2. Site or relocation – New building or site, relocation or site development
3. Business acquisition – Takeovers and mergers of all or part of another business
4. New product dev’t – Innovation, R & D, new products or services in established markets
5. Change e.g. closure – Decommissioning, reorganisation or business process redesign
6. Business dev’t – New customers or markets, may be defined by invitation to tender
7. Compliance – New legislation or professional standards, e.g. health & safety
8. Events – Cultural, performing arts or sporting events, e.g. Olympics


Figure 1: IT Project Risk Map. (Source [4].)



Table 2: Project Risk Attributes for Business Development Projects. (Source: adapted from [4].)

Project risk attribute – Brief definition

CORPORATE FACTORS:
Strategic fit – Potential contribution to strategy
Expertise – Level of expertise available compared to need
Impact – Potential impact on company/brand reputation

PROJECT OPPORTUNITY:
Size – Scale of investment, time and volume of work
Complexity – Number of and association between assumptions
Planning timescale – Time available to develop proposal pre-decision
Quality of customer/supplier – Credit checking etc., added during version 4 updates

EXTERNAL FACTORS:
Cultural fit – Matching set of values, beliefs & practices of parties
Quality of information – Reliability, validity & sufficiency of base data
Demands of customer(s) – Challenge posed by specific customer requirements
Environmental – Likely impact of PEST factors, inc. TUPE

COMPETITIVE POSITION:
Market strength – Power position of company in contract negotiations
Proposed contract terms – Likely contract terms and possible risk transference

Appraisal procedures may be over-ridden or hijacked in such cases, often with negative consequences in terms of shareholder value. The justification for such projects is often argued on a financial basis, but evidence shows that the target company’s shareholders make more money out of these than those in the bidding company. This is a key risk that may be picked up by internal audit.

4 Risk Analysis

There is a common risk management framework in business organisations that can be applied to projects as well as to continuing operations. The number and labelling of steps might differ, but the process usually involves:

1. Identify risks (where will the risk come from?)
2. Assess or evaluate risks (quantify and/or prioritise)
3. Respond to risks (take decisions, e.g. avoid, mitigate or limit effect)
4. Take action to manage risks (adopt risk management strategies)
5. Monitor and review risks (update risk assessment and evaluate risk strategies)

Linking these to the project life cycle, steps 1 and 2 form the risk analysis that should be undertaken during the project initiation stage, step 3 links to the planning stage, and steps 4 and 5 should occur during project execution. Risks should also be reviewed as part of the project review stage to improve project risk management knowledge and skills for the future [2].
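The five steps above can be sketched as a minimal risk register. The step names follow the list in the text; the class design, the 1-5 scoring scale and the example risk (drawn loosely from Table 3's relocation scenario) are illustrative assumptions, not part of the article.

```python
# Minimal risk-register sketch of the five-step process described above.
# Step names follow the article; the 1-5 scales and example risk are
# illustrative assumptions only.

class RiskRegister:
    def __init__(self):
        self.risks = []

    def identify(self, source, description):
        """Step 1: record where the risk comes from."""
        self.risks.append({"source": source, "description": description,
                           "score": None, "response": None, "actions": []})

    def assess(self, index, probability, impact):
        """Step 2: quantify (probability x impact) to prioritise."""
        self.risks[index]["score"] = probability * impact

    def respond(self, index, decision):
        """Step 3: decide, e.g. avoid, mitigate or limit effect."""
        self.risks[index]["response"] = decision

    def act(self, index, action):
        """Step 4: adopt a risk management strategy."""
        self.risks[index]["actions"].append(action)

    def review(self, index, probability, impact):
        """Step 5: update the assessment as the project proceeds."""
        self.assess(index, probability, impact)

register = RiskRegister()
register.identify("Employees", "loss of key staff during relocation")
register.assess(0, probability=3, impact=4)   # 1-5 scales -> score 12
register.respond(0, "mitigate")
register.act(0, "offer a positive and consistent benefits package")
register.review(0, probability=2, impact=4)   # re-scored after action
```

The point of the sketch is the ordering: assessment precedes response, and review re-runs assessment during execution rather than being a one-off exercise.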

Evidence from practice suggests that steps 1 and 2 are rarely carried out early enough in the project life cycle, step 5 monitoring is often undertaken in a fairly mechanical way, and comprehensive review at project level is hardly found to occur at all after the project has ended, especially in non-project-based organisations.

The difficulty in identifying the risks relating to projects, especially at an early stage when the project may not be well defined, is that no two projects are exactly the same. However, using the project typology in Table 1, it can be seen that headline or strategic risks are likely to be similar for projects of a similar type. In [9] a range of qualitative methods for project risk identification is presented, including cognitive mapping, and examples are given for several types of project.



Knowledge of where the risks are likely to come from is usually developed intuitively by managers through their experience in the organisation and industry. Advanced methods used in the research reported here included repertory grid (1) and cognitive mapping (2) techniques to elicit this valuable knowledge. However, common risks may be found in projects of a similar type, and up to half may be identified by applying common management techniques. These are explained for the three project examples presented.

(1) Repertory grid technique (RGT) is a method of discovering how people subconsciously make sense of a complex topic from their range of experience. This was used to identify the project risk attributes in Table 2.
(2) Cognitive mapping uses a visual representation of concepts around a central theme. This was used to display risk attributes in a project risk map in Figure 2.

Figure 2: IT Project Risk Map.

Table 3: Mitigating Actions. (Source: adapted from unpublished MBA group coursework with permission.)

Source of risk – Mitigating actions

Employees:
Loss of staff – Offer positive and consistent benefits package
Loss of expertise – Negotiate key employees’ benefits package to encourage move
Effect on morale – Good communications with staff & transparency of business case
Poor local labour market – Establish good market intelligence (before choice of location)

Management:
Leadership – Establish dedicated project management team with strong leader

Continuity:
Current projects – Maintain extra resources during move; flex project schedules for projects spanning relocation period

Organisational impact:
Culture – Use relocation as a catalyst for change, improve existing culture
Business procedures – Requires a development plan

Infrastructure:
Office equipment – Transport all office equipment from current site, reduce need for new
Capacity – Determine capacity required and ensure building completed in time


Example 1: Business Development Projects (BDP)

These projects involve securing new customers and markets for existing products or services. The strategic analysis of the organisational and environmental context for a BDP can help to generate several possible risks. The analysis of strengths, weaknesses, opportunities and threats (SWOT) can identify risk areas for the organisation (corporate factors in Table 2), and help to analyse the strategic fit of the project. Then a more detailed analysis of the external factors, political, economic, social, technical, legal and environmental (PESTLE), can identify further risk areas (external and market factors in Table 2). The invitation to tender might also help to identify risks in a BDP project, for example the ‘demands of the customer’ in Table 2.
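The SWOT/PESTLE prompting described above can be sketched as a simple checklist seeder. The category prompts follow PESTLE as the text describes; the function name and the two example notes are invented for illustration (TUPE is mentioned in Table 2 as a PEST factor).

```python
# Sketch: seeding a project risk checklist from PESTLE prompts, as the
# strategic analysis described above suggests. The two example notes are
# invented placeholders, not findings from the study.

PESTLE = ["Political", "Economic", "Social", "Technical", "Legal",
          "Environmental"]

def seed_checklist(prompts, notes):
    """Pair each analysis prompt with the risks noted under it."""
    return {p: notes.get(p, []) for p in prompts}

notes = {
    "Economic": ["new customer's creditworthiness in a downturn"],
    "Legal": ["TUPE obligations on any staff transfer"],
}

checklist = seed_checklist(PESTLE, notes)
# Categories with no notes stay visible as empty lists, prompting review.
```

Keeping the empty categories in the output is the design point: the analysis framework forces every prompt to be considered, not just those where risks spring to mind.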

Example 2: Systems Development or IT Projects

For an IT project, which is essentially a supply problem, the chain from software supplier to client (users) via sponsor (owner) can reveal at least half of the sources of risk. The functional requirements of the system are defined by the client, and the risks here may determine whether the client will be satisfied that the system does what it is supposed to do. Internal clients in IT projects may be more demanding than external clients in BDP projects.

Figure 2 shows a typical project risk map for an IT project. The figure shows the high-risk areas shaded darker and the lower-risk areas lighter. The key to managing these risks is understanding and responding to stakeholder motivations and expectations.
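The darker/lighter shading of the risk map can be mimicked with a simple scoring sketch. The attribute names echo Table 2, but the 0-10 scale, the scores and the band thresholds are invented for illustration; the article does not prescribe a scoring scheme.

```python
# Sketch: bucketing scored risk attributes into the shaded bands of a
# risk map. Attribute names echo Table 2; scores and thresholds are
# invented for illustration.

scores = {          # assessed 0-10 by the project team (assumed scale)
    "Strategic fit": 3,
    "Expertise": 8,
    "Complexity": 7,
    "Demands of customer(s)": 5,
    "Proposed contract terms": 2,
}

def band(score, high=7, low=4):
    """Map a score to the darker/lighter shading of the risk map."""
    if score >= high:
        return "high (dark)"
    if score >= low:
        return "medium"
    return "low (light)"

risk_map = {attr: band(s) for attr, s in scores.items()}
# Here "Expertise" and "Complexity" land in the high (dark) band.
```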

Example 3: New Site or Relocation Projects

A new site may involve the choice of location, acquisition, construction or refurbishment of buildings. In a relocation project, stakeholder analysis can reveal key groups of people who need managing closely. The employees are the principal group, followed by management and customers (continuity). Infrastructure risks (geographic factors) may be revealed by PESTLE analysis. Table 3 shows how risk management strategies can be developed to mitigate these risks.

The final section of this article shows the analysis of 100 risk management strategies into six categories, and draws conclusions for the use of a strategic approach to project risk identification, assessment and management [1].

5 Risk Management Strategies

For each type of project covered in the research, a set of risk management strategies like those shown in Table 3 was identified. These totalled 100, and the following six categories emerge from their analysis, in order of frequency of observation:

1: Project Management (23%)

This category includes the deployment of project management methodologies such as work breakdown structure, scheduling, critical path analysis, etc., and the establishment of a project leader and project team, as found in the PM body of knowledge. The most observations for this type of risk management strategy were in IT projects, relocation and events management, where timing is critical.

2: Human Resource Management (21%)

This category includes recruitment, training and development of personnel, including managers, and the management of change in work practices. This type of strategy featured most strongly in acquisitions, IT projects and relocation.

3: Stakeholder Management (19%)

This category includes stakeholder analysis and management through consultation, relationship management and communications. It featured most strongly in systems development projects, NPD projects and events management, which are necessarily customer-focussed. In IT projects and events management there are many more stakeholder groups with diverse interests to manage.

4: Knowledge Management (18%)

This category includes searching for information, and recording, analysing, sharing and documenting information, for example in market research and feasibility studies. It features most strongly in BDP and NPD projects and in acquisitions. It is closely related to training and development, so overlaps with that aspect of human resource management.

5: Financial Management (10%)

This category includes credit checking of suppliers and customers, financial modelling and budget management, as well as business valuation, pricing strategies and contract terms. It is no surprise that it features most in business acquisitions, where a high level of financial expertise is required, and next in BDPs, where terms are agreed and new customers vetted.



6: Trials and Pilot Testing (9%)

This category includes testing ideas at the feasibility study stage, and testing possible solutions and new products. This could be clinical trials in pharmaceuticals, tasting panels for new food products or system testing in IT products, so it features most strongly in IT and NPD projects.
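As a cross-check, the six category frequencies quoted above can be tallied directly; the counts come from the percentages in the text (100 strategies in total), and the sketch simply confirms they partition the sample and that the order of presentation matches frequency order.

```python
# Cross-check of the six strategy categories reported above; counts come
# from the percentages in the text (100 strategies in total).

counts = {
    "Project Management": 23,
    "Human Resource Management": 21,
    "Stakeholder Management": 19,
    "Knowledge Management": 18,
    "Financial Management": 10,
    "Trials and Pilot Testing": 9,
}

assert sum(counts.values()) == 100  # the six categories cover all 100

# Order of presentation in the article matches frequency order:
by_frequency = sorted(counts, key=counts.get, reverse=True)
```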

Project reviews are recommended to evaluate how well risk management strategies have worked and to identify how risk management can be improved as part of organisational learning. The evaluation of Pragmatix® for risk identification, assessment and management revealed important benefits for the case organisation, not least the opportunity to link risk assessment to later project management and post-audit review of projects. This joined-up thinking links strategic choice to strategy implementation through project management.

In conclusion, the identification of likely risks at an early stage helps managers make better decisions in the face of uncertainty. However, unless these risks are fully appraised and communicated to those responsible for managing the implementation of the project and monitoring the risks, the full benefits of risk appraisal will not be realised.



Keywords: Free Software, Linear Programming, Project, Risk Management, Threat.

1 Introduction

Failing to satisfy project objectives is a major concern in project management. Risks can generate problems with consequences that are not often considered, and indeed, in many cases risk management is not even taken into account [1]. However, the benefits of risk management are considerable. Risk management allows, at the beginning of the project, the detection of problems that could otherwise be ignored, and so effectively helps the Project Manager to deliver the project on time, under budget and with the required quality [2]. However, if risk management is not performed throughout the whole project, the Project Manager will probably not be able to take advantage of its full benefits.

This paper proposes a methodology that consists in building, from a final value of risk for each pair (threat/alternative), a decision matrix to determine, using Linear Programming (LP), which is the most effective alternative considering the risks. Of course, in a real case these same constraints, plus others, can be added to the battery of constraints that address environmental, economic, technical, financial, political and other matters. The result will reflect the best selection on the basis of all the constraints considered simultaneously.

The application of LP to this decision-making problem is new in the treatment of risk. It opens up a series of possibilities in the field of risk management, in that this methodology represents a project's features more accurately than other methods, solving problems with all kinds of constraints, including those related to risk, and therefore placing risks at the same level as the economic, social and environmental constraints normally considered, with the idea of raising the discipline of risk management in projects. In short, although an even higher level of organizational maturity in terms of risk management would correspond to the integrated risk management of the portfolio of projects, it is expected that the outcome will be projects driven by risk management [3].

Selection of Project Alternatives while Considering Risks

Marta Fernández-Diego and Nolberto Munier

The selection of projects consists in choosing the most suitable out of a portfolio of projects, or the most fitting alternative when there are constraints in regard to financing, commercial, environmental, technical, capacity, location and other factors. Unfortunately, the selection process does not place the same importance on the various risks inherent in any project. It is possible, however, to determine quantitative values of risk for each alternative/threat pair in order to assess these risk constraints.

Authors

Marta Fernández-Diego holds a European PhD in Electronic and Telecommunications Engineering. After some research and development contracts in universities and multinational companies in France, Great Britain and Spain, she is currently a lecturer in the Department of Business Organization at Universitat Politècnica de València, Spain, where she teaches project risk management, among other subjects. <[email protected]>.

Nolberto Munier is a Mechanical Engineer, Master in Project Management and PhD in Design, Manufacturing and Management of Industrial Projects. He has worked extensively with linear programming techniques and applied them to solving decision problems in urban projects in several cities in different countries. In addition, he developed a methodology called Simus for solving problems of a complex nature, with multiple objectives and with any type of constraints. He is currently an international consultant on issues of urban and regional planning. <[email protected]>.


The paper presents an application example in the next section. The following section describes in detail the characteristics of the problem, to determine the choice of one alternative or another according to various criteria, along with its constraints. Finally, once the problem is solved by LP, the results are discussed.


2 Application Example

2.1 Background

In the past decade, free software has exploded, even challenging the inertia that still exists in software engineering, mainly derived from proprietary software, resulting in new business models and product offerings that enable real choice for consumers.

To understand free software, let us begin by clarifying that the fundamental characteristic of proprietary software is that all ownership rights therein are exclusively held by the owner, as well as any possibility of improvement or adaptation. The user merely pays for the right to use the product, rather than buying it outright.

The problems associated with software, regardless of whether it is free or proprietary, lie in its own nature. The key problem addressed by free software is precisely the possibility of reusing it, in the logical sense that you can use parts already coded by others and create derivatives. For any transformation of a person's work, authorization of the copyright holder is required. Instead of using the simple copyright of proprietary software licenses, which means "all rights reserved", free software licenses reserve only some rights, and state whether or not the user is allowed to make copies, create derivative works such as adaptations or translations, or put the copies or derivatives to commercial use.

In contrast, the essential feature of free software is that it is freely used [4]. Specifically, it allows the user to exercise four basic freedoms. These freedoms are:

- The freedom to run the program for any purpose;
- The freedom to study how the program works, and change it to make it do what you wish;
- The freedom to redistribute copies;
- The freedom to improve the program, and release your improvements to the user.

Open source code¹ is required to meet these freedoms. By open source code we mean that the source code is always available with the program. In addition, the exercise of these freedoms facilitates software evolution, exposing the software as much as possible to use and change – because greater exposure means that the software receives more testing – and removing artificial constraints on its evolution, making it more subject to its environment.

2.2 Background of the Case: Alternatives and Objective

Considering the future commercialization of computer models with free software preinstalled, an entrepreneur who plans to start a small business analyzes the possibility of buying computers with free software installed for his business. Given this possibility, he needs to make a decision between the two alternatives, that is, proprietary software or free software, according to risk criteria, with the objective of minimizing the total cost, taking into account an estimated difference of €100 in favour of a computer with a free software operating system.

Threat                  x1 (Free software)   x2 (Proprietary software)   Action   Operator   Threshold (B)
Resistance to change    0.85                 0.15                        MIN      ≥          0.15
Dependency              0.16                 0.64                        MIN      ≥          0.16
Lack of security        0.125                0.375                       MIN      ≥          0.125

Table 1: Characteristics of the Problem.

¹ Text written using the format and syntax of the programming language, with instructions to be followed in order to implement the program.


3 Problem Characteristics

The characteristics of the problem are summarized in Table 1. The options and constraints of the problem, reflected in this table, are explained in the following points.

3.1 Criteria

This case raises two alternatives, effectively two projects, which will be analyzed on the basis of criteria that take into account the various risks covered by both projects. Specifically, we consider three selection criteria that correspond to three of the potential threats related to the software, and which mirror the main differences between free software and proprietary software. Of course, in a real case there may be many other criteria related to the economy, availability and experience of personnel, environment, etc., but all are considered simultaneously, together with the risk criteria. Therefore, the alternatives or options will have to comply simultaneously with all factors.

The risk in a project involves a deviation from its objectives in terms of the three major project success criteria: schedule, cost and functionality. In this sense, risk indicates the probability that something can happen which endangers the project outcome.

Risk can be measured as the combination of the probability that an incident occurs and the severity of its impact [5]. Mathematically, risk can be expressed as follows:

Risk = Probability × Impact    (1)

In the certainty of the materialization of the threat, the risk would be equal to the impact; if the probability of the threat materializing is zero, then there is no risk at all. However, risk is a combination of both probability and impact, and in statistics risk is often modeled as the expected value of some impact. This combines the probabilities of the various possible threats and some assessment of the corresponding outcomes into a single value. Consequently, each threat/alternative pair contributes partially to this expected value, or risk.

The threats considered, which appear as rows in Table 1, are as follows:

Resistance to change

It is clear that there is still a lot of inertia and reluctance to move from the proprietary model, and despite the advantages of free software, this is the main barrier. Inertia is the resistance of the user to giving up something he knows (proprietary software), i.e. there is a resistance to change (to free software), paralleled by the laws of physics (e.g., the resistance to initiating a movement).

Although the data depend on many factors, including company size, the software's purpose, its scope, field of application, etc., 85% of small businesses would opt for proprietary software products out of inertia, lack of knowledge about free software alternatives or simply the fear of moving to a new field, compared with 15% who would venture into something new. Therefore the likelihood of resistance to change for free software is higher (85%) than that for proprietary software (15%). On the other hand, we consider that in both cases the impact is total, i.e. 100%, since what is at stake is the choice of one alternative or the other.

Dependency

A non-technical advantage of free software is its independence from the supplier, ensuring business continuity even if the original manufacturer disappears.

Initially, free software arose from abusive practices used by leading developers of proprietary software, which require users to permanently buy all updates and upgrades; in this sense the user's hands are tied, since they have very limited rights over the product purchased. But when companies turn to free software, they liberate themselves from the constraints imposed by the software vendor. Indeed, free software appears to ensure the user certain freedoms.

In addition, the user is dependent not only on the manufacturer, but also on the manufacturer's related products. The product often works best with other products from the same manufacturer. With free software, however, users have the power to make their own decisions.

To simplify the problem, equal values of probability and impact have been considered, resulting in a dependency risk of 16% for free software and 64% for proprietary software.

Lack of security

There is a widely held belief that free software operating systems are inherently more secure than proprietary ones because of their Unix heritage, which was built specifically to provide a high degree of security. This statement can be justified as follows:

On the one hand, a coding error can potentially cause a security risk (such as problems due to lack of validation). Free software is higher quality software, since more people can see and test a set of code, improving the chance of detecting a failure and correcting it quickly. This means that quality is assured by public review of the software and by the open collaboration of a large number of people. This is why free software is less vulnerable to viruses and malicious attacks. We could estimate that the vulnerability of free software to security issues is 25%, while for proprietary software such vulnerability amounts to 50%.

On the other hand, the impact of a security problem is generally lower in the case of free software, because these bugs are usually addressed with speedy fixes wherever possible, thanks to an entire global community of developers and users providing input. In contrast, in the world of proprietary software, security patches take considerably longer to resolve. We might consider impacts of 50% for free software and 75% for proprietary software.

In short, considering risk as a combination of vulnerability and impact, the risk due to lack of security results in 12.5% for free software versus 37.5% for proprietary software.

Furthermore, since transparency in fact hinders the introduction of malicious code, free software is usually more secure.
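The risk figures above follow directly from equation (1). A minimal sketch in Python (illustrative, not from the original paper; the probability/impact pair for the dependency threat, 0.4 and 0.8, is inferred from the text's statement that probability and impact are equal and from the stated risks of 16% and 64%):

```python
# Risk = Probability x Impact, per equation (1) in the text.
# Probability/impact pairs per (threat, alternative):
# x1 = free software, x2 = proprietary software.
threats = {
    "resistance to change": {"x1": (0.85, 1.00), "x2": (0.15, 1.00)},
    "dependency":           {"x1": (0.40, 0.40), "x2": (0.80, 0.80)},  # inferred
    "lack of security":     {"x1": (0.25, 0.50), "x2": (0.50, 0.75)},
}
risk = {threat: {alt: p * i for alt, (p, i) in pairs.items()}
        for threat, pairs in threats.items()}
print(risk)  # reproduces the risk column values of Table 1
```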

3.2 Constraints

Since in all three cases we are talking about negative events, or threats, and we have not considered any opportunity, the constraints that we impose on these criteria call for minimization, effectively finding a solution greater than or equal to the value of minimal risk, since we cannot find a solution with lower risk than this.

For example, the opposite of resistance to change could have been considered. The term inertia may also refer to the difficulty of accepting a change: while no force is applied, we follow our own inertia, which is an opportunity for the favored option. In this approach, the appropriate action would have been to maximize, or find a solution less than or equal to the maximum benefit, because we cannot find a solution with greater benefit.

4 Linear Programming Resolution

The matrix expression of the LP problem is as follows:

[A] [X] ≥ [B]    (2)

Where:

[A] is the decision matrix, shown boxed in Table 1. The components Aij of this matrix are the values of risk that each threat brings to each alternative.

[X] is the vector of unknowns, i.e. the option to choose in this case.

[B] is the vector of thresholds, i.e. the limits of each constraint, according to the discussion in Section 3.2.

To meet the objective of minimizing the objective function Z, this objective function is expressed as the sum of the products of the cost of each alternative and the value of each of them (i.e. the unknown X represents what we wish to determine).

Thus, assuming that the cost of a computer with a free software operating system preinstalled is €600 and that of one with proprietary software is €700, the objective function is:

Min Z = 600 x1 + 700 x2    (3)

Applying the LP simplex method [6], which is essentially a repeated matrix inversion of (2) according to certain rules, one obtains, if it exists, the optimal solution of the problem; that is, the best combination or selection of alternatives to optimize the objective function (3).

5 Discussion of Results

5.1 Optimal Solution

The optimal solution to the LP problem is as follows:

x1 = 0.125,  x2 ≈ 0.292    (4)

We choose the higher value because, although both contribute to obtaining the goal, if only one option is actually possible it is clear that the one with the higher value contributes more efficiently than the other and is therefore chosen. In the case above, proprietary software (x2) should be chosen.

In our case both values are very close, but since the LP indicates that the alternative with proprietary software contributes more efficiently to the objective, taking into account the risk constraints, it is chosen.
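The solution can be reproduced without dedicated LP software. With only two unknowns the optimum lies at a vertex of the feasible region, so brute-force vertex enumeration (a toy stand-in for the simplex method; the code is illustrative and not part of the original paper) suffices, using the risk coefficients of Table 1 and the costs of equation (3):

```python
from itertools import combinations

# Minimise Z = 600*x1 + 700*x2 subject to A.x >= b and x >= 0 (Table 1).
cost = (600.0, 700.0)
A = [(0.85, 0.15), (0.16, 0.64), (0.125, 0.375)]
b = [0.15, 0.16, 0.125]
# Constraint boundary lines a.x = t, plus the axes x1 = 0 and x2 = 0.
boundaries = list(zip(A, b)) + [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]

def intersect(l1, l2):
    """Intersection point of two lines a.x = t, or None if parallel."""
    (a1, t1), (a2, t2) = l1, l2
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None
    return ((t1 * a2[1] - t2 * a1[1]) / det,
            (a1[0] * t2 - a2[0] * t1) / det)

def feasible(p):
    return (p[0] >= -1e-9 and p[1] >= -1e-9 and
            all(a[0] * p[0] + a[1] * p[1] >= t - 1e-9
                for a, t in zip(A, b)))

vertices = [p for l1, l2 in combinations(boundaries, 2)
            if (p := intersect(l1, l2)) is not None and feasible(p)]
best = min(vertices, key=lambda v: cost[0] * v[0] + cost[1] * v[1])
print(best)  # x1 = 0.125, x2 ~ 0.292: choose proprietary software (x2)
```

At this vertex the binding constraints are resistance to change and lack of security, which is consistent with the marginal values later reported in Table 2.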

5.2 Dual Problem

Every direct LP problem, such as this one, can be converted into its image, which is called the 'dual problem'. In the dual problem the columns represent threats while the rows represent the alternatives. While the direct problem variables indicate which option contributes best to the goal, the dual problem variables provide us with the values of the 'marginal contributions of each constraint', or 'shadow prices', which is an economic term. In essence, this means knowing how much the objective function changes per unit of variation in a constraint, which ultimately gives an idea of the importance of each constraint.

In this case we obtain the results shown in Table 2.


It turns out that the problem of lack of security is the most decisive in the choice between the alternatives, while resistance to change, which intuitively might be seen as the most decisive, comes in second place. The problem of dependency does not affect the solution, since its marginal value is zero.
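The marginal values in Table 2 can be verified by hand. Assuming (as the zero marginal value for dependency suggests) that only the resistance-to-change and lack-of-security constraints are binding at the optimum, their shadow prices y solve the 2×2 system formed from those two rows of Table 1 and the costs of equation (3). A sketch, not from the original paper:

```python
# Binding constraints (rows of Table 1): coefficients for (x1, x2).
a_resistance = (0.85, 0.15)   # resistance to change
a_security = (0.125, 0.375)   # lack of security
c = (600.0, 700.0)            # costs of the two alternatives, equation (3)

# Shadow prices solve A^T y = c for the binding rows, by Cramer's rule:
#   0.85*y_r + 0.125*y_s = 600
#   0.15*y_r + 0.375*y_s = 700
det = a_resistance[0] * a_security[1] - a_security[0] * a_resistance[1]
y_resistance = (c[0] * a_security[1] - c[1] * a_security[0]) / det
y_security = (a_resistance[0] * c[1] - a_resistance[1] * c[0]) / det
print(round(y_resistance, 3), round(y_security, 3))  # 458.333 1683.333
```

These match the marginal values in Table 2; the marginal value of the dependency constraint is zero because it is slack at the optimum (0.16·0.125 + 0.64·0.292 ≈ 0.207 > 0.16, the 'equal value' shown for it in Table 2).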

This powerful tool will allow, for example, a discussion of the cost difference that makes the solution, and thus the selection, change, making the selection of computers with a free software operating system preinstalled more interesting. Moreover, in that case we would observe that the component of inertia ceases to be key, or even to influence the selection process, and the real criteria for selecting the alternative are in this case the security issue first, and the problem of dependency second.

6 Conclusions

The use of LP is a new application in the treatment of risks in projects. Its main advantage is that it is possible to represent real-world scenarios with some degree of accuracy, as the number of constraints – and alternatives – can be measured in the hundreds. On the other hand, by analyzing the objective function for various scenarios it is possible to infer which is the best option [7].

Another major advantage is that, if there is a solution, it is optimal, i.e. the solution cannot be improved, thus confirming the Pareto optimum.

References

[1] M. Fernández-Diego, N. Munier. Bases para la Gestión de Riesgos en Proyectos, 1st ed. Valencia, Spain: Universitat Politècnica de València, 2010.

[2] Project Management Institute. Practice Standard for Project Risk Management. Project Management Institute, 2009.

[3] M. Fernández-Diego, J. Marcelo-Cocho. "Driving IS Projects". In D. Avison et al., eds. Advances in Information Systems Research, Education and Practice. Boston: Springer, pp. 113-124, 2008.

[4] R.M. Stallman. Free Software, Free Society: Selected Essays of Richard M. Stallman. GNU Press, 2002.

[5] International Organization for Standardization. ISO 31000:2009 Risk management — Principles and guidelines. International Organization for Standardization, 2009.

[6] G.B. Dantzig. "Maximization of a linear function of variables subject to linear inequalities", 1947. Published pp. 339-347 in T.C. Koopmans (ed.): Activity Analysis of Production and Allocation. New York-London: Wiley & Chapman-Hall, 1951.

[7] N. Munier. A Strategy for Using Multicriteria Analysis in Decision-Making. Dordrecht: Springer, 2011.

                        Equal value   Marginal value
Lack of security        0.125         1683.333
Dependency              0.207         0.000
Resistance to change    0.150         458.333

Table 2: Equal Value and Marginal Value.


Keywords: Behaviour Control, Framework for Governance, Governance Model for Project Management, Governance Structures, Outcome Control, Project Governance, Project Management Action, Shareholder Orientation, Stakeholder Orientation.

Governance starts at the corporate level and provides a framework to guide managers in their daily work of decision making and action taking. At the level of projects, governance is often implemented through defined policies, processes, roles and responsibilities, which set the framework for people's behaviour, which, in turn, influences the project. Governance sets the boundaries for project management action by:

- Defining the objectives of a project. These should be derived from the organization's strategy and clearly outline the specific contribution a project makes to the achievement of the strategic objectives.
- Providing the means to achieve those objectives. This is the provision of, or enabling access to, the resources required by the project manager.
- Controlling progress. This is the evaluation of the appropriate use of resources, processes, tools, techniques and quality standards in the project.

Without a governance structure, an organization runs the risk of conflicts and inconsistencies between the various means of achieving organizational goals, such as processes and resources, thereby causing costly inefficiencies that negatively impact both smooth running and bottom-line profitability.

Project Governance¹

Ralf Müller

Having a governance structure in organizations provides a framework to guide managers in decision making and action taking and helps to alleviate the risk of conflicts and inconsistencies between the various means of achieving organizational goals, such as processes and resources. This article introduces project governance, a major area of interest in organizations, which is intended to guide, direct and lead project work in a more successful setting. To that purpose a new three-step governance model is presented and described.

Author

Ralf Müller, PhD, is Professor of Business Administration at Umeå University, Sweden, and Professor of Project Management at BI Norwegian Business School, Norway. He lectures and researches in governance and management of projects, as well as in research methodologies. He is the (co)author of more than 100 publications and received, among others, the Project Management Journal's 2009 Paper of the Year, IRNOP's 2009 best conference paper award, and several Emerald Literati Network Awards for outstanding journal papers and referee work. He holds an MBA degree from Heriot-Watt University and a DBA degree from Henley Management College, Brunel University, U.K. Before joining academia he spent 30 years in industry, consulting for large enterprises and governments in 47 different countries on their project management and governance. He also held related line management positions, such as Worldwide Director of Project Management at NCR Teradata. <[email protected]>

Approaches to governance vary by the particularities of organizations. Some organizations are more shareholder oriented than others, and thus aim mainly for Return on Investment for their shareholders (i.e. having a shareholder orientation), while others try to balance a wider set of objectives, including societal goals or recognition as a preferred employer (i.e. having a stakeholder orientation). Within this continuum, the work in organizations might be controlled through compliance with existing processes and procedures (i.e. behaviour control), or by ensuring that work outcomes meet expectations (i.e. outcome control). Four governance paradigms derive from that and are shown in Figure 1.

The Conformist paradigm emphasizes compliance with existing work procedures to keep costs low. It is appropriate when the link between specific behaviour and project outcome is well known. The Flexible Economist paradigm is more outcomes-focused, requiring a careful selection of project management methodologies etc. in order to ensure economic project delivery. Project managers in this paradigm must be skilled, experienced and flexible, and often work autonomously to optimize shareholder returns through professional management of their projects. The Versatile Artist paradigm maximizes benefits by balancing the diverse set of requirements arising from a number of different stakeholders and their particular needs and desires. These project managers are also very skilled and experienced and work autonomously, but are expected to develop new or tailor existing methodologies, processes or tools to economically balance the diversity of requirements. Organizations using this governance paradigm possess a very heterogeneous set of projects in high-technology or high-risk environments. The Agile Pragmatist paradigm is found when maximization of technical usability is needed, often through a time-phased approach to the development and product release of functionality over a period of time. Products developed in projects under this paradigm grow from a core functionality, which is developed first, to ever increasing features which, although of lesser and lesser importance to the core functionality, enhance the product in flexibility, sophistication and ease-of-use. These projects often use Agile/Scrum methods, with the sponsor prioritising deliverables by business value over a given timeframe.

¹ This article was previously published online in the "Advances in Project Management" column of PM World Today (Vol. XII, Issue III, March 2010), <http://www.pmworldtoday.net/>. It is republished with all permissions.

                        Shareholder Orientation   Stakeholder Orientation
Outcome Control         Flexible economist        Versatile artist
Behaviour Control       Conformist                Agile pragmatist

Figure 1: Four Project Governance Paradigms.

Figure 2: Framework for Governance of Project, Program and Portfolio Management.

Larger enterprises often apply different paradigms to different parts of their organization. Maintenance organizations are often governed using the conformist or economist paradigms, while R&D organizations often use the versatile artist or agile pragmatist approach to project governance.

Governance is executed at all layers of the organizational hierarchy, or in hierarchical relationships in organizational networks. It starts with the Board of Directors, which defines the objectives of the company and the role of projects in achieving these objectives. This implies decisions about the establishment of steering groups and Project Management Offices (PMOs) as additional governance institutions. The former are often responsible for the achievement of the project's business case through direct governance of the project, by setting goals, providing resources (mainly financial) and controlling progress. The latter (the PMOs) are set up with a variety of structures and mandates, in order to solve particular project-related issues within the organization. Some PMOs focus on more tactical tasks, like ensuring the compliance of project managers with existing methodologies and standards; that supports governance along the behaviour control paradigms. Other PMOs are more strategic in nature, perform stewardship roles in project portfolio management and foster project management within the organization, thereby supporting governance along the outcome control paradigms. A further governance task of the Board of Directors is the decision to adopt programme and/or portfolio management as a way to manage the many projects simultaneously going on in an organization. Programme management is the governing body of the projects within its programme, and portfolio management the governing body of the groups of projects and programmes that make up the organization. They select and prioritize the projects and programmes, and with them their staffing.

How Much Project Management is Enough for my Organization?

This is addressed through governance of project management. Research has shown that project-oriented companies balance investments and returns in project management through the careful implementation of measures that address the three forces that make them successful. These forces are (see also Figure 2):

a) Educated project managers. This determines what can be done;

b) Higher management demanding professionalism in project management. This determines what should be done; and

c) Control of project management execution. This shows what is done in an organization in terms of project management.

Companies economize their investments in project management by using a three-step process to migrate from process orientation to project orientation. Depending on their particular needs, they stop the migration at step 1, 2 or 3, when they have found the balance between investments in project management (and improved project results) and the percentage of their business that is based on projects. Organizations with only a small portion of their business based on projects should invest less, while project-based organizations invest more in order to gain higher returns from their investments. The three steps are (see also Figure 2):

Step 1: Basic training in project management, use of steering groups, and audits of troubled projects. This relatively small investment yields small returns and is appropriate for businesses with very few activities in projects.

Step 2: All of step 1 plus project manager certification, establishment of a PMO, and mentor programs for project managers. This medium level of investment yields higher returns in terms of better project results and is appropriate for organizations with a reasonable amount of their business being dependent on projects.

“Approaches to governance vary by the particularities of organizations”

Figure 3: Model of Project Governance.

Page 91: UPGRADE, Vol. XII, issue no. 5, December 2011 - cepis.orgcepis.org/upgrade/media/full_2011_51.pdf · 99 From inforeview (JISA, Serbia) Information Society Steve Jobs — Dragana Stojkovic

90 CEPIS UPGRADE Vol. XII, No. 5, December 2011 © Novática

Risk Management

Farewell Edition

Step 3: All of steps 1 and 2 plus advanced training and certification, benchmarking of project management capabilities, and use of project management maturity models. This highest level of investment yields the highest returns through better project results and is appropriate for project-based organizations, or organizations whose results are significantly determined by their projects.

The same concept applies for programme and portfolio management. This allows the tailoring of efforts for governance of project, programme and portfolio management to the needs of the organization. By achieving a balance of return and investment through the establishment of the three elements of each step, organizations can become mindful of their project management needs. Organizations can stop at each step, after they have reached the appropriate amount of project management for their business.

How does All that link together in an Organization?

The project governance hierarchy from the board of directors, via portfolio and programme management, down to steering groups is linked with governance of project management through the project governance paradigm (see Figure 3).

A paradigm such as the Conformist paradigm supports project management approaches as described above in Step 1 of the three-step governance model for project management, that is, methodology compliance, audits and steering group observation. A Versatile Artist paradigm, on the other hand, will foster autonomy and trust in the project manager, and align the organization towards a ‘project-way-of-working’, where skilled and flexible project managers work autonomously on their projects.

The paradigm is set by management and the nature of the business the company is in. The project governance paradigm influences the extent to which an organization implements steps 1 to 3 of the governance model for project management. It then synchronizes these project management capabilities with the level of control and autonomy needed for projects throughout the organization. This then becomes the tool for linking capabilities with requirements in accordance with the wider corporate governance approach.



Keywords: Enterprise Risk Management, Enterprise Risk Map, Enterprise Risk Reporting, Enterprise Risk Structure, ERM, ERM Strategy, Horizontal Enterprise Risk Management, Left Shift, Risk Relationships, Scoring Systems, Vertical Enterprise Risk Management, Vertical Management Chain.

1 Introduction

With the changing business environment brought on by events such as the global financial crisis, gone are the days of focussing only on operational and tactical risk management. Enterprise Risk Management (ERM), a framework for a business to assess its overall exposure to risk (both threats and opportunities), and hence its ability to make timely and well informed decisions, is now the norm.

Ratings agencies, such as Standard & Poor’s, are reinforcing this shift towards ERM by rating the effectiveness of a company’s ERM strategy as part of their overall credit assessment. This means that, aside from being best practice, not having an efficient ERM strategy in place will have a detrimental effect on a company’s credit rating.

Not only do large companies need to respond to this new focus, but the public sector also needs to demonstrate efficiency going forward, by ensuring ERM is embedded not only vertically but also horizontally across their organisations (Figure 1). This whitepaper¹ provides help, in the form of five basic steps to implementing a simple and effective ERM solution.

2 Five Steps to implementing a Simple and Effective ERM Solution

The five steps to implementing a simple and effective ERM solution are explained in this section.

Author

Val Jonas is a highly experienced risk management expert, with extensive experience of training, facilitating and implementing project, programme and strategic risk management systems for companies in a wide range of industries in the UK, Europe, USA and Australia. With more than 18 years’ experience in risk management and analysis, working with large organisations, Val has a wealth of practical experience and vision on how organisations can improve project and business performance through their risk management strategic framework and good practice. Val played a major part in the design and development of the leading Risk Management and Analysis software product Predict!. More recently, she has pioneered Governance and Risk Management Master Class sessions for senior management in industry and government and has been a keen and active participant in forging the interfacing of Risk and Earned Value Management, including speaking at international conferences on these topics. She has a joint honours BA in Mathematics and Computing from Oxford University. <[email protected]>.

About Risk Decisions

Risk Decisions Limited is part of Risk Decisions Group, a pioneering global risk management solutions company, with offices in the UK, USA and Australia. The company specialises in the development and delivery of enterprise solutions and services that enable risk to be managed more effectively on large capital projects, as well as helping users to meet strategic business objectives and achieve compliance with corporate governance obligations. Clients include Lend Lease, Mott MacDonald, National Grid, Eversholt Rail, BAE Systems, Selex Galileo, Raytheon, Navantia, UK MoD, Australian Defence Materiel Organisation and New Zealand Air Force.

Five Steps to Enterprise Risk Management

Val Jonas

With the changing business environment brought on by events such as the global financial crisis, gone are the days of focussing only on operational and tactical risk management. Enterprise Risk Management (ERM), a framework for a business to assess its overall exposure to risk (both threats and opportunities), and hence its ability to make timely and well informed decisions, is now increasingly becoming the norm. Ratings agencies, such as Standard & Poor’s, are reinforcing this shift towards ERM by rating the effectiveness of a company’s ERM strategy as part of their overall credit assessment. This means that, aside from being best practice, not having an efficient ERM strategy in place will have a detrimental effect on a company’s credit rating. Not only do large companies need to respond to this new focus, but the public sector also needs to demonstrate efficiency going forward, by ensuring ERM is embedded not only vertically but also horizontally across their organisations. This whitepaper provides help, in the form of five basic steps to implementing a simple and effective ERM solution.

1 This is the first of a series of whitepapers on Enterprise Risk Management. Future papers will expand on each of the steps in this whitepaper, as well as continuing to cover Governance and Compliance.


Figure 1: Vertical and Horizontal ERM.

Figure 2: Enterprise Risk Structure in the Predict! Hierarchy Tree.



Figure 3: Vertical Management Chain of Owners and Leaders.

Figure 4: Global Categories.

Step 1 – Establish an Enterprise Risk Structure

ERM requires the whole organisation to identify, communicate and proactively manage risk, regardless of position or perspective. Everyone needs to follow a common approach, which includes a consistent policy and process, a single repository for their risks and a common reporting format. However, it is also important to retain existing working practices based on localised risk management perspectives, as these reflect the focus of operational risk management.

The corporate risk register will look different from the operational risk register, with a more strategic emphasis on risks to business strategy, reputation and so on, rather than more tactical product, contract and project focused risks. The health and safety manager will identify different kinds of risks from the finance manager, while asset risk management and business continuity are disciplines in their own right. ERM brings together risk registers from different disciplines, allowing visibility, communication and central reporting, while maintaining distributed responsibility.

In addition to the usual vertical risk registers, such as corporate, business units, departments, programmes and projects, the enterprise also needs horizontal, or functional, risk registers. These registers allow function and business managers, who are responsible for identifying risks to their own objectives, to identify risks arising from other areas of the organisation.
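
The vertical and horizontal registers described above can be pictured as one tree. As a rough illustrative sketch only (the class, field and register names here are invented for the example, not taken from Predict! or any other tool), the structure might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegister:
    """One node in an enterprise risk structure (illustrative only)."""
    name: str
    kind: str  # "vertical" (executive) or "horizontal" (functional/business)
    risks: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def add(self, child: "RiskRegister") -> "RiskRegister":
        self.children.append(child)
        return child

# Vertical (executive) branch: corporate -> business unit -> programme -> project.
corporate = RiskRegister("Corporate", "vertical")
unit = corporate.add(RiskRegister("Business Unit A", "vertical"))
programme = unit.add(RiskRegister("Programme X", "vertical"))
programme.add(RiskRegister("Project X1", "vertical"))

# Horizontal (functional/business) registers sit alongside the executive branch.
corporate.add(RiskRegister("HR", "horizontal"))
corporate.add(RiskRegister("Business Continuity", "horizontal"))

def walk(node):
    """Flatten the hierarchy, e.g. for central visibility and reporting."""
    yield node
    for child in node.children:
        yield from walk(child)

all_registers = [n.name for n in walk(corporate)]
```

The point of the sketch is only that both kinds of register live in a single structure, so central reporting can see everything while responsibility stays distributed.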

The enterprise risk structure (Figure 2) should match the organisation’s structure: the hierarchy represents vertical (executive) as well as horizontal (functional and business) aspects of the organisation. This challenges the conventional assumption that risks can be rolled up automatically, by placing horizontal structures side by side with vertical executive structures. Risks should be aggregated using a combination of vertical structure and horizontal intelligence. This is a key factor in establishing ERM.

Figure 5: Scoring by Cluster Maps from Local to Enterprise Level.

Figure 6: Metrics Reports by Business Objective, Cluster and Supplier.

Figure 7: Robust Risk Information for Decision-making.

Step 2 – Assign Responsibility

Once an appropriate enterprise risk structure is established, assigning responsibility and ownership should be straightforward. Selected nodes in the structure will have specified objectives; each will have an associated manager (executive, functional or business), who will be responsible for achieving those objectives and managing the associated risks. Each node containing a set of risks, along with its owner and leader, is a Risk Management Cluster (see Figure 3).

Vertical managers take executive responsibility not only for their cluster risk register, but also overall leadership responsibility for the Risk Management Clusters² below. Responsibility takes two forms: ownership at the higher level and leadership at the lower level. For example, a programme manager will manage his programme risks, but also have responsibility for overseeing risk within each of the programme’s projects.

Budgetary authority (setting and using Management Reserve), approval of risk response actions, communication of risk appetite, management reporting and risk performance measures are defined as part of the Owner and Leader roles, as illustrated in Figure 3. This structure is also used to escalate and delegate risks.
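
Escalation and delegation along the owner/leader chain can be sketched in a few lines. This is a toy illustration under stated assumptions: the class, field and role names are invented for the example and do not reflect any particular tool’s data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Cluster:
    """A Risk Management Cluster: a node's risks plus its owner and leader (illustrative)."""
    name: str
    owner: str                       # executive ownership at this level
    leader: str                      # oversight of the clusters below
    parent: Optional["Cluster"] = None
    risks: list = field(default_factory=list)

def escalate(risk: str, source: Cluster) -> Cluster:
    """Move a risk up to the parent cluster; returns the cluster now holding it."""
    if source.parent is None:
        return source                # already at the top of the chain
    source.risks.remove(risk)
    source.parent.risks.append(risk)
    return source.parent

def delegate(risk: str, source: Cluster, target: Cluster) -> Cluster:
    """Move a risk down to a child cluster for local management."""
    source.risks.remove(risk)
    target.risks.append(risk)
    return target

programme = Cluster("Programme X", owner="Programme Manager", leader="Portfolio Director")
project = Cluster("Project X1", owner="Project Manager",
                  leader="Programme Manager", parent=programme)

project.risks.append("Key supplier insolvency")
holder = escalate("Key supplier insolvency", project)
```

After the call, the risk sits with the programme-level owner, who holds the budgetary authority to respond to it.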

Horizontal managers take responsibility for their own functional or business Risk Management Clusters, but also for gathering risks from other areas of the Enterprise Risk Structure related to their discipline. For example, the HR functional manager will be responsible for identifying common skills shortfall risks to bring them under central management. Similarly, the business continuity manager will identify all local risks relating to use of a test facility and manage them under one site management plan. To assist in this, we use an enterprise risk map – see Step 3.

Step 3 – Create an Enterprise Risk Map

Risk budgeting and common sense dictate that risks should reside at their local point of impact, because this is where attention is naturally focused. However, the risk cause, mitigation or exploitation strategy may come from elsewhere in the organisation, and often common causes and actions can be identified. In this case, we take a systemic approach, where risks are managed more efficiently when brought together at a higher level. To achieve this, we need to be able to map risks to different parts of the risk management structure.

2 Risk Management Clusters® are unique to the Predict! risk management software.



To create an enterprise risk map, you need:

- a set of global categories to communicate information to the right place;
- the facility to define the relationships between risks (parent, child, sibling, etc.);
- scoring systems with consistent common impact types.

Global Categories

Functional and business managers should use these global categories to map risks to common themes, such as strategic or business objectives, functional areas and so on. These categories then provide ways to search and filter on these themes and to bring common risks together under a parent risk (see Figure 4).

Risk Relationships

For example, if skills shortage risks are associated with HR, the HR manager can easily call up a register of all the HR risks, regardless of project, contract, asset, etc. across the organisation and manage them collectively.

Similarly, the impact of a supplier failing on any one contract may be manageable, but across many contracts it could be a major business risk. In that case, the supply chain function needs to bring the risks against this supplier together and manage the problem centrally.
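
In code terms, global categories act as tags that cut across the cluster structure. A minimal sketch (all data, cluster names and category names below are made up for the example):

```python
# Each risk lives in its local cluster but carries a global category tag.
risks = [
    {"cluster": "Project A",  "category": "Skills shortage", "impact_gbp": 80_000},
    {"cluster": "Project C",  "category": "Skills shortage", "impact_gbp": 40_000},
    {"cluster": "Contract B", "category": "Supplier: Acme",  "impact_gbp": 150_000},
    {"cluster": "Contract D", "category": "Supplier: Acme",  "impact_gbp": 2_000_000},
]

def by_category(risks, category):
    """A horizontal manager's view: all risks sharing one theme, from any cluster."""
    return [r for r in risks if r["category"] == category]

# The HR manager pulls together every skills-shortage risk across the organisation.
hr_view = by_category(risks, "Skills shortage")

# Individually tolerable supplier risks add up to a major business exposure.
acme_exposure = sum(r["impact_gbp"] for r in by_category(risks, "Supplier: Acme"))
```

The design point is that the tag, not the register a risk happens to sit in, determines who gets to see and aggregate it.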

Each Risk Management Cluster will include both global and local categories in a Predict! Group, so that each area of the organisation needs only to review relevant information.

Scoring systems are also applied by Risk Management Cluster, with locally meaningful High, Medium and Low thresholds which map automatically when rolled up (Figure 5). For example, a High impact of £150k at project or contract level will appear as Low at corporate level, whereas a £5m risk at a project or contract level may appear as High at the corporate level.

Typically, financial and reputation impacts will be common to all clusters, whereas local impacts, such as project schedule, will not be visible higher up.
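
The threshold mapping above can be pictured as a small function. Only the £150k project-level and £5m corporate-level figures come from the text; the threshold values themselves are assumptions invented for this sketch:

```python
# Each cluster scores the same monetary impact against its own High/Medium/Low
# thresholds, so a risk's rating maps automatically as it is rolled up.
THRESHOLDS_GBP = {
    "project":   {"High": 100_000,   "Medium": 25_000},
    "corporate": {"High": 2_000_000, "Medium": 500_000},
}

def score(impact_gbp: int, cluster: str) -> str:
    """Rate one impact against the named cluster's local thresholds."""
    t = THRESHOLDS_GBP[cluster]
    if impact_gbp >= t["High"]:
        return "High"
    if impact_gbp >= t["Medium"]:
        return "Medium"
    return "Low"

# A High impact of £150k at project level appears as Low at corporate level,
# whereas a £5m risk scores High at both levels.
project_view, corporate_view = score(150_000, "project"), score(150_000, "corporate")
```

No risk is re-assessed on the way up; only its rating is re-expressed against the thresholds of the level doing the reading.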

Step 4 – Decision Making through Enterprise Risk Reporting

The most important aspect of risk management is carrying out appropriate actions to manage the risks. However, you cannot manage every identified risk, so you need to prioritise and make decisions on where to focus management attention and resources. The decision making process is underpinned by establishing risk appetite against objectives and setting a baseline, both of which should be recorded against each Risk Management Cluster®.

Enterprise-wide reporting allows senior managers to review risk exposure and trends across the organisation. This is best achieved through metrics reports, such as the risk histogram (see Figure 6). For example, you might want to review the risk to key business objectives by cluster, or how exposed different contracts and projects are to various suppliers.

Furthermore, there is a need to use a common set of reports across the organisation, to avoid time wasted interpreting unfamiliar formats (Figure 7). Such common reports ensure the risk is communicated and well understood by all elements of the organisation, and hence provide timely information on the current risk position and trends, initially top-down, then drilling down to the root cause.

Step 5 – Changing Culture from Local to Enterprise

At all levels of an organisation, changing the emphasis from ‘risk management’ to ‘managing risks’ is a challenge; however, across the enterprise it is particularly difficult. It requires people to look ahead and take action to avert (or exploit) risk to the benefit of the organisation. It also requires the organisation to encourage and reward this change in emphasis!

Figure 8: Proactive Management of Risks – looking ahead.

Unfortunately, problem management (fire-fighting) deals with today’s problems at the expense of future ones. This is generally a far more expensive process, as the available remedies are limited. However, if potential problems are identified (as risks) before they arise, you have far more options available to effect a ‘Left Shift’: from a costly and overly long process to one better matching the original objectives set! (See Figure 8.)

Most organisations have pockets of good risk management, and many have a mechanism to report ‘top N’ risks vertically, but very few have started to implement horizontal, functional or business risk management. Both a bottom-up and a top-down approach are required. An ERM initiative should allow good local practices to continue, provided they are in line with enterprise policy and process (establishing each pocket of good risk management as a Risk Management Cluster will provide continuity).

From a top-down perspective, functional and business focused risk management needs to be kick-started. A risk steering group comprising functional heads and business managers is a good place to start. Such a group getting together to understand inter-discipline risk helps break down stove-piped processes. This can trigger increasingly relaxed cross-discipline discussions and a focus on aligning business and personal objectives, leading to rapid progress on understanding and managing risk.

Finally, to ensure that an organisational culture shift is effected, senior management must be engaged. This engagement is not only aimed at encouraging them to see the benefits of managing risk, but also at helping the organisation as a whole see that proactive management of risk (the Left Shift principle) is valued by all.

A Risk Management MasterClass for the executive board and senior managers can provide them with the tools necessary to progress an organisation towards effective ERM.

3 The Benefits

ERM delivers confidence, stability, improved performance and profitability. It provides:

- Access to risk information across the organisation in real time
- Faster decision making and less ‘fire fighting’
- Fewer surprises (managed threats and successful opportunities)
- Improved confidence and trust across the stakeholder community
- Reduced cost, better use of resources and improved morale
- Stronger organisations resilient to change, ready to exploit new opportunities

Over time this will:

- Increase customer satisfaction, enhance reputation and generate new business
- Safeguard life, company assets and the environment
- Achieve best value and maximise profits
- Maintain credit ratings and lower finance costs

4 Summary

All of the risk management skills and techniques required to implement Enterprise Risk Management can easily be learned and applied. From senior managers to risk practitioners, Masterclasses, training, coaching and process definition can be used to support rollout of Enterprise Risk Management.

Create a practical Enterprise Risk Structure, set clear responsibilities and hold people accountable. Define a simple risk map and provide localised working practices to match perspectives on risk. Be seen to make decisions based on good risk management information.

Enterprise Risk Management should be simple to understand and simple to implement.

Keep it simple! Make it effective!

Bibliography

- AS/NZS 4360:2004. Risk Management. SAI Global Ltd, ISBN 0-7337-5904-1, 2004.
- Association for Project Management. Project Risk Analysis and Management (PRAM) Guide, Second Edition. Association for Project Management, ISBN 1-903494-12-5, 2004.
- COSO. Enterprise Risk Management – Integrated Framework. AICPA, 2004.
- Office of Government Commerce. Management of Risk: Guidance for Practitioners. The Stationery Office, ISBN-13 9780113310388, 2007.
- Project Management Institute. Practice Standard for Project Risk Management. Project Management Institute, 2009.
- ISO 31000:2009. Risk Management – Principles and Guidelines. ISO, <http://www.iso.org>, 2009.
- ISO Guide 73. Risk Management – Vocabulary. ISO, 2009.

Note: All of these publications are listed at <http://www.riskdecisions.com>.

Glossary

Note: Where ‘source’ is in brackets, minor amendments have been incorporated to the original definition.



Budget — The resource estimate (in £/$s or hours) assigned for the accomplishment of a specific task or group of tasks. Source: Risk Decisions.

Change Control (Management) — Identifying, documenting, approving or rejecting and controlling change. Source: (PMBoK).

Control Account — A management control point at which actual costs can be accumulated and compared to earned value and budgets (resource plans) for management control purposes. A control account is a natural management point for budget/schedule planning and control since it represents the work assigned to one responsible organisational element on one Work Breakdown Structure (WBS) element. Source: APM EVM guideline.

Cost Benefit Analysis — The comparison of costs before and after taking an action, in order to establish the saving achieved by carrying out that action. Source: Risk Decisions.

Cost Risk Analysis — Assessment and synthesis of the cost risks and/or estimating uncertainties affecting the project, to gain an understanding of their individual significance and their combined impact on the project’s objectives, and to determine a range of likely outcomes for project cost. Source: (PRAM).

Enterprise Risk Map — The structure used to consolidate risk information across the organisation, to identify central responsibility and common response actions, with the aim of improving top-down visibility and managing risks more efficiently. Source: Risk Decisions.

Enterprise Risk Management (ERM) — The application of risk management across all areas of a business, from contracts, projects, programmes, facilities, assets and plant, to functions, financial, business and corporate risk. Source: Risk Decisions.

Left Shift — The practice by which an organisation takes proactive action to mitigate risks when they are identified rather than when they occur, with the aim of reducing cost and increasing efficiency. Source: Risk Decisions.

Management Reserve (MR) — Management Reserve may be subdivided into: Specific Risk Provision, to manage identifiable and specific risks; Non-Specific Risk Provision, to manage emergent risks; and Issues Provision. Source: APM EV/Risk Working Group.

Non-specific Risk Provision — The amount of budget/schedule/resources set aside to cover the impact of emergent risks, should they occur. Source: APM EV/Risk Working Group.

Operational Risk — The different types of risks managed across an organisation, typically excluding financial and corporate risks. Source: Risk Decisions.

Opportunity — An ‘upside’, beneficial Risk Event. Source: PRAM.

Baseline — An approved scope/schedule/budget plan for work, against which execution is compared, to measure and manage performance. Source: (PMBoK).

Performance Measurement — The objective measurement of progress against the Baseline. Source: APM EV/Risk Working Group.

Proactive Risk Response — An action or set of actions to reduce the probability or impact of a threat, or increase the probability or impact of an opportunity. If approved, they are carried out in advance of the occurrence of the risk. They are funded from the project budget. Source: (PRAM).

Reactive Risk Response — An action or set of actions to be taken after a risk has occurred, in order to reduce or recover from the effect of the threat or to exploit the opportunity. They are funded from Management Reserve. Source: (PRAM).

Risk Appetite — The amount of risk exposure an organisation is willing to accept in connection with delivering a set of objectives. Source: APM EV/Risk Working Group.

Risk Event — An uncertain event or set of circumstances that, should it or they occur, would have an effect on the achievement of one or more objectives. Source: PRAM.

Risk Exposure — The difference between the total impact of risks should they all occur and the Risk Provision. Source: APM EV/Risk Working Group.

Risk Management Clusters — Functionality in Risk Decisions’ Predict! risk management software that enables users to organise different groups of risks to form a single, enterprise-wide risk map. Source: Risk Decisions.

Risk Provision — The amount of budget/schedule/resources set aside to manage the impact of risks. Risk Provision is a component part of Management Reserve. Source: APM EV/Risk Working Group.

Risk Response Activities — Activities carried out to implement a Proactive Risk Response. Source: APM EV/Risk Working Group.

Schedule Risk Analysis — Assessment and synthesis of schedule risks and/or estimating […]. Source: (PRAM).


UPENET


Keywords: Apple, Innovation, IT Devices, Steve Jobs.

Although many considered him to be the best innovator in the technological world, Steve Jobs was never so much a skilled engineer as he was someone able to recognize a good idea and do everything necessary to realize it and bring it to perfection. He himself honoured his long-time partner Steve Wozniak, with whom he founded the company Apple Computer, for his ingenious engineering skills. However, although Steve Wozniak was the man most responsible for the construction of the first revolutionary computer, the Apple I, as he said, the idea of selling them never crossed his mind at the time. Jobs was the one who gathered resources, organized production, and assembled a great team of successful managers.

After the success of the Apple II, the next generation of the computer, the inventor of the famous Macintosh computer (also known as the Mac), Jef Raskin, insisted that the Apple team, led by Jobs, visit the company Xerox PARC, which was working on the greatest innovations of that time at its premises – the graphical user interface and the computer mouse. However, what the people from Xerox did not know was how to realize their idea, and how to preserve it. Recognizing the ingeniousness of these creations, Jobs immediately made his team work on implementing the idea in the next generations of Apple computers, Lisa and Macintosh.

When mentioning the Macintosh, it is hard to find a tech-savvy person or a marketing expert who has not heard of “1984”, the famous commercial with which this computer was presented in the USA during the Super Bowl in 1984. What is less known is that the Board of Directors did not like the commercial at all, and that Jobs was the one who supported the project until its very realization. After the premiere broadcast, all three major TV networks of the time and around 50 local TV stations broadcast their reports about the commercial, and hundreds of newspapers and magazines wrote about it, providing publicity worth 5 million dollars for free.

Information Society

Steve Jobs

Dragana Stojkovic

© 2011 JISA

This paper was first published, in English, by inforeview, issue 5/2011, pp. 58-63. inforeview, a UPENET partner, is a publication from the Serbian CEPIS society JISA (Jedinstveni informatički savez Srbije – Serbian Information Technology Association). The 5/2011 issue of inforeview can be accessed at <http://www.emagazine.inforeview.biz/si9/index.html>.

Note: Abstract and keywords added by UPGRADE.

This paper offers a review of the role played by the late Steve Jobs in the development and commercialization of trendy and innovative IT devices (the Mac computer, iPod, iPhone, and iPad) that have greatly influenced the daily lives of hundreds of millions of people around the world.

Author

Dragana Stojkovic has a Bachelor’s Degree in Philology, English language and literature branch, from the University of Belgrade, Serbia. She is a freelance journalist and English translator. <[email protected]>


After the dispute within the company, Jobs left Apple and founded his own computer company, called NeXT. When, 12 years later, Apple bought NeXT and brought Jobs back, what he found was a company that was slowly dying, since major companies such as Microsoft, IBM, and Dell produced the same machines as Apple did, but at a lower cost and with faster processors. Visiting Apple’s premises, in a basement across from the main building, Jobs found a designer who was sitting among a bunch of prototypes and thinking about quitting. Among the prototypes he had been working on was a monolithic monitor with soft edges and integrated components. In that room Jobs saw what other managers had missed. Almost immediately, he told the designer, Jonathan Ive, that from that moment on they would be working on a new line of computers. That was when the first iMacs were born.

The next device that directed the development of high technology, this time in the consumer electronics field, was certainly the famous iPod. Considering existing digital music players to be either too big or too small but useless, and their software completely inadequate, Jobs engaged a team of engineers to design a complete line of iPods. The first model was presented in 2001. It was the size of a deck of cards, which at the time was real progress, storing up to 1000 songs, while the battery lasted an amazing 10 hours. And, of course, the whole story about iPod devices would not make sense without the existence of the iTunes Music Store, announced in 2003, which caused a revolution in the mass distribution of digital content.

It might be needless to say how much the iPhone has affected the development of smartphones since 2007 with its revolutionary design and user interface. It is enough to mention that it did not have a worthy competitor on the market for years, and even today fans of the Apple ecosystem would not exchange it for a model from another company. However, the path from the idea to the final product was immensely demanding and difficult, especially for the engineers. It is known that Jobs broke at least three iPhone prototypes into pieces before he was finally satisfied.

The iPhone has literally changed the appearance of the mobile phone and caused the fast growth of smartphones and subsequently of tablet PCs. Great interest in the tablet market was sparked by Apple's iPad, which borrowed its OS and interface from the iPhone. At first observed with a dose of scepticism


as a useless device which was nothing but an enlarged iPod touch, the iPad quickly became the best-selling tablet PC ever. If it is about the belief that your creation will be something extraordinary, then Jobs was certainly the greatest believer, at least in the tech world.

Jobs was frequently asked to comment on his vision. Once, for the American magazine Fortune in January 2000, he said:

"This is what customers pay us for – to sweat all these details so it's easy and pleasant for them to use our computers. We're supposed to be really good at this. That doesn't mean we don't listen to customers, but it's hard for them to tell you what they want when they've never seen anything remotely like it. Take desktop video editing. I never got one request from someone who wanted to edit movies on his computer. Yet now that people see it, they say, 'Oh my God, that's great!'"

During his life he enjoyed the status of a rock star thanks to his interesting life story, eccentric behaviour, and unmistaken vision when it comes to products of the future. It is certain that there was a whole team of engineers, designers, and loyal associates standing behind his success; however, Jobs was the one who knew how to spin an idea.



Surveillance Systems

An Intelligent Indoor Surveillance System

Rok Piltaver, Erik Dovgan, and Matjaz Gams

© Informatica, 2011

This paper was first published, in English, by Informatica (Vol. 35, issue no. 3, 2011, pp. 383-390). Informatica, <http://www.informatica.si/>, is a quarterly journal published, in English, by the Slovenian CEPIS society SDI (Slovensko Drustvo Informatika – Slovenian Society Informatika, <http://www.drustvo-informatika.si/>).

The development of commercial real-time locating systems (RTLS) enables new ICT solutions. This paper presents an intelligent surveillance system for indoor high-security environments based on RTLS and artificial intelligence methods. The system consists of several software modules, each specialized for the detection of specific security risks. The validation shows that the system is capable of detecting a broad range of security risks with high accuracy.

Authors

Rok Piltaver received his B.Sc. degree in Computer Science from the University of Ljubljana, Slovenia, in 2008. He is a research assistant at the Department of Intelligent Systems of the Jozef Stefan Institute, Ljubljana, and a Ph.D. student of New media and e-science at the Jozef Stefan International Postgraduate School, where he is working on his dissertation on combining accurate and understandable classifiers. His research interests are in artificial intelligence and machine learning, with applications in ambient intelligence and ambient assisted living. He has published two papers in international scientific journals and eight papers at international conferences, and received awards for the best innovation in Slovenia in 2009 and for the best joint project between business and academia in 2011. <[email protected]>

Erik Dovgan received his B.Sc. degree in Computer Science from the University of Ljubljana, Slovenia, in 2008. He is a research assistant at the Department of Intelligent Systems of the Jozef Stefan Institute, Ljubljana, and a Ph.D. student of New media and e-science at the Jozef Stefan International Postgraduate School, where he is working on his dissertation on multiobjective optimization of vehicle control strategies. His research interests are in evolutionary algorithms, stochastic multiobjective optimization, classification algorithms, clustering, and the application of these techniques in energy efficiency,

Keywords: Expert System, Fuzzy Logic, Intelligent System, Real-Time Locating System, Surveillance.

1 Introduction

Security of people, property, and data is becoming increasingly important in today's world. Security is ensured by physical protection and technology, such as movement detection, biometric sensors, surveillance cameras, and smart cards. However, the crucial factor in most security systems is still a human [7], providing the intelligence to the system. The security personnel have to be trustworthy, trained and motivated, and in good psychological and physical shape. Nevertheless, they are still human and as such tend to make mistakes, are subjective and biased, get tired, and can be bribed. For example, it is well known that a person watching live surveillance video often becomes tired and may therefore overlook a security risk. Another problem is finding trustworthy security personnel in foreign countries where locals are the only candidates for the job.

With that in mind, there is an opportunity to use modern information and communication technology in conjunction with methods of artificial intelligence to mitigate or even eliminate these human shortcomings and increase the level of security while lowering the overall security costs.

transportation, security systems and ambient assisted living. <[email protected]>

Matjaz Gams is Head of the Department of Intelligent Systems at the Jozef Stefan Institute and professor of computer science at the University of Ljubljana and MPS, Slovenia. He received his degrees from the University of Ljubljana and MPS. He is or was teaching at 10 faculties in Slovenia and Germany. His professional interests include intelligent systems, artificial intelligence, cognitive science, intelligent agents, business intelligence and the information society. He is a member of numerous international program committees of scientific meetings, national strategic boards and institutions, and editorial boards of 11 journals, and is managing director of the Informatica journal. He was co-founder of various societies in Slovenia, e.g. the Engineering Academy, AI Society, and Cognitive Society, and was president and/or secretary of various societies including ACM Slovenia. He is president of the institute and faculty union members in Slovenia. He headed several national and international projects, including the major national employment agent on the Internet, the first to present over 90% of all available jobs in a country. His major scientific achievement is the discovery of the principle of multiple knowledge. In 2009 his team received an award for the best innovation in Slovenia and in 2011 for the best joint project between business and academia. <[email protected]>


The rest of the paper is structured as follows. Section 2 summarizes the related work. An overview of the software modules and a brief description of the used sensors are given in Section 3. Section 4 describes the five PDR modules, including the Expert System Module and the Fuzzy Logic Module in more detail. Section 5 presents the system verification, while Section 6 provides conclusions.

2 Related Work

There has been a lot of research in the field of automatic surveillance based on video recordings. The research ranges from extracting low-level features and modelling the usual optical flow to methods for optimal camera positioning and the evaluation of automatic video surveillance systems [8]. There are many operational implementations of such systems increasing the security in public places (subway stations, airports, parking lots).

On the other hand, there has not been much research in the field of automatic surveillance systems based on real-time locating systems (RTLS), due to the novelty of the sensory equipment. Nevertheless, there are already some simple commercial systems with so-called room-accuracy RTLS [20] that enable tracking of objects and basic alarms based on if-then rules [18]. Some of them work outdoors using GPS (e.g., for tracking vehicles [21]) while others use radio systems for indoor tracking (e.g., in hospitals and warehouses). Some systems allow video monitoring in combination with RTLS tracking [19].

Our work is novel as it uses several complex artificial intelligence methods to extract models of the usual behaviour and detect unusual behaviour based on an indoor RTLS. In addition, our work also presents the benefits of combining video and RTLS surveillance.

Our first intelligent security system, focused on entry control, is described in [5]. In this paper we present a prototype of an intelligent indoor surveillance system (i.e., one that works in the whole indoor area and not only at entry control) that automatically detects security risks.

The prototype of an intelligent security system, called "Poveljnikova desna roka" (PDR, Eng. "commander's right hand"), is specialized for the surveillance of personnel, data containers, and important equipment in indoor high-security areas (e.g., an archive of classified data with several rooms). The system is focused on internal threats; nevertheless, it also detects external security threats. It detects any unusual behaviour based on user-defined rules and automatically extracted models of the usual behaviour. The artificial intelligence methods enable the PDR system to model usual and to recognize unusual behaviour. The system is capable of autonomous learning, reasoning and adaptation. The PDR system alarms the supervisor about unusual and forbidden activities, enables an overview of the monitored environment, and offers simple and effective analysis of past events. Tagging all personnel, data containers, and important equipment is required, as it enables real-time localization and integration with automatic video surveillance. The PDR system notifies the supervisor with an alarm of the appropriate level and an easily comprehensible explanation in the form of natural language sentences, tagged video recordings and graphical animations. The PDR system detects intrusions of unidentified persons, forbidden actions of known and unknown persons, and unusual activities of tagged entities. The concrete scenarios detected by the system include thefts, sabotage, staff negligence and insubordination, unauthorised entry, unusual employee behaviour and similar incidents.

3 Overview of the PDR System

This section presents a short overview of the PDR system. The first subsection presents the sensors and hardware used by the system. The second subsection introduces the software modules. Subsection 3.3 describes RTLS data pre-processing and primitive routines.

3.1 Sensors and other Hardware

The PDR system hardware includes a real-time locating system (RTLS), several IP video cameras (Figure 1), a processing server, network infrastructure, and optionally one or more workstations, such as personal computers, handheld devices, and mobile phones with internet access, which are used for alerting the security personnel.

RTLS provides the PDR system with information about the locations of all personnel and important objects (e.g., a container with classified documents) in the monitored area. RTLS consists of sensors, tags, and a processing unit (Figure 1). The sensors detect the distance and the angle at which the tags are positioned. The processing unit uses these measurements to calculate the 3D coordinates of the tags. Commercially available RTLS use various technologies: infrared, optical, ultrasound, inertial sensors, Wi-Fi, or ultra-wideband radio. The technology determines RTLS accuracy (1 mm – 10 m), update frequency (0.1 Hz – 120 Hz), covered area (6 – 2500 m²), size and weight of tags and sensors, various limitations (e.g., a required line of sight between sensors and tags), reliability, and price (2,000 – 150,000 €) [13]. PDR uses the Ubisense RTLS [15], which is based on ultra-wideband technology and is among the more affordable RTLSs. It uses relatively small and energy-efficient active tags, has an update rate of up to 9 Hz and an accuracy of ±20 cm in 3D space given good conditions. It covers areas of up to 900 m² and does not require line of sight.

The advantages of a RTLS are that people feel more comfortable being


tracked by it than being filmed by video cameras, and that localization with a RTLS is simpler, more accurate, and more robust than localization from video streams. On the other hand, RTLS is not able to locate objects that are not marked with tags. Therefore, the most vital areas also need to be monitored by video cameras in order to detect intruders who do not wear RTLS tags. However, only one PDR module requires video cameras, while the other four depend on RTLS alone. Moreover, the cameras enable on-camera processing, so only the extracted features are sent over the network.

3.2 Software Structure

The PDR software is divided into five modules. Each of them is specialized for detecting a certain kind of abnormal behaviour (i.e., a possible security risk) and uses an appropriate artificial intelligence method for detecting it. The modules reason in real time independently of each other and asynchronously trigger alarms about detected anomalies. Three of the PDR modules are able to learn automatically, while the other two use predefined knowledge and knowledge entered by the supervisor. The Video Module detects persons without tags and is the only module that needs video cameras. The Expert System Module is customisable by the supervisor, who enters information about forbidden events and actions in the form of simple rules, thus enabling automatic rule checking. The three learning modules that automatically extract models of the usual behaviour for each monitored entity and compare current behaviour with it in order to detect abnormalities are the Statistic, Macro and Fuzzy Logic Modules. The Statistic Module collects statistical information about entity movement, such as time spent walking, sitting, lying, etc. The Macro Module is based on macroscopic properties such as the usual time of entry into a certain room, the day of the week, etc. Both modules analyse relatively long time intervals, while the Fuzzy Logic Module analyses short intervals. It uses fuzzy discretization to represent short actions and fuzzy logic to infer whether they are usual or not.

Figure 1: Overview of the PDR System.

3.3 RTLS Data Pre-processing and Primitive Routines

Since the used RTLS has a relatively low accuracy and a relatively high update rate, two-stage data filtering is used to increase the reliability and to mitigate the negative effect of the noisy location measurements. In the first stage, a median filter [1] with window size 20 is used to filter the sequences of x, y, and z coordinates of the tags. Equation (1) gives the median filter equation for direction x. The median filter is used to correct the RTLS measurements that differ from the true locations by more than ~1.5 m and occur in up to 2.5% of measurements. Such false measurements are relatively rare and occur only in short sequences (e.g., the probability of more than 5 consecutive measurements having a high error is very low); therefore, the median filter corrects these errors well.

$\tilde{x}_n = \mathrm{med}\{x_{n-10}, x_{n-9}, \ldots, x_{n+8}, x_{n+9}\}$   (1)
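As an illustration, the first filtering stage can be sketched in Python as follows. This is a simplified sketch, not the authors' implementation: in particular, the window is clipped at the sequence boundaries, which the paper does not specify.

```python
from statistics import median

def median_filter(xs, half_window=10):
    """Slide a 20-sample median window over one coordinate sequence,
    as in equation (1): sample n is replaced by the median of
    x[n-10] .. x[n+9] (clipped at the sequence boundaries)."""
    filtered = []
    for n in range(len(xs)):
        lo = max(0, n - half_window)
        hi = min(len(xs), n + half_window)
        filtered.append(median(xs[lo:hi]))
    return filtered

# A short spike (a false RTLS measurement) is suppressed:
track_x = [0.0] * 30
track_x[15] = 2.0            # 2 m outlier
smoothed = median_filter(track_x)
```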

The second stage uses a Kalman filter [6] that performs the following three tasks: smoothing the RTLS measurements, estimating the velocities of the tags, and predicting the missing measurements. The Kalman filter state is a six-dimensional vector that includes the positions and velocities in each of the three dimensions. The new state is calculated as the sum of the previous position (e.g., xₙ) and the product of the previous velocity (e.g., v_{x,n}) and the time Δt between consecutive measurements, for each direction separately. The velocities remain constant. Equation (2) gives the exact vector formula used to calculate the next state of the Kalman filter. The measurement noise covariance matrix was set based on the RTLS system specification, while the process noise covariance matrix was fine-tuned experimentally.

Once the measurements are filtered, primitive routines can be applied. They are a set of basic pre-processing methods used by all the PDR modules and are robust to noise in the 3D location measurements. They take short intervals of RTLS data as input and output a symbolic representation of the processed RTLS data.

\[
\begin{bmatrix} x_{n+1} \\ y_{n+1} \\ z_{n+1} \\ v_{x,n+1} \\ v_{y,n+1} \\ v_{z,n+1} \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & \Delta t & 0 & 0 \\
0 & 1 & 0 & 0 & \Delta t & 0 \\
0 & 0 & 1 & 0 & 0 & \Delta t \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_n \\ y_n \\ z_n \\ v_{x,n} \\ v_{y,n} \\ v_{z,n} \end{bmatrix}
\qquad (2)
\]
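The prediction of equation (2) amounts to a constant-velocity update. A minimal sketch follows; it covers the prediction step only, and omits the measurement update and the noise covariance matrices tuned by the authors.

```python
def predict_state(state, dt):
    """Constant-velocity state prediction as in equation (2): each
    position is advanced by its velocity times dt; the velocities
    remain constant. state = [x, y, z, vx, vy, vz]."""
    x, y, z, vx, vy, vz = state
    return [x + vx * dt, y + vy * dt, z + vz * dt, vx, vy, vz]
```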

The first primitive routine detects in which area (e.g., a room or a user-defined area) a given tag is located, when it has entered, and when it has exited from the area. The routine takes into account the positions of walls and doors. A special method is used to handle situations when a tag moves along the boundary between two areas that are not separated by a wall.
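A much-simplified sketch of such an area routine, assuming axis-aligned rectangular areas and ignoring walls, doors, and the boundary special case mentioned above:

```python
def area_events(positions, area):
    """Emit (time, 'enter'/'exit') events for one tag against a
    rectangular area given as (xmin, ymin, xmax, ymax).
    positions is a list of (t, x, y) samples."""
    xmin, ymin, xmax, ymax = area
    inside_prev = None
    events = []
    for t, x, y in positions:
        inside = xmin <= x <= xmax and ymin <= y <= ymax
        if inside_prev is not None and inside != inside_prev:
            events.append((t, "enter" if inside else "exit"))
        inside_prev = inside
    return events
```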

The second primitive routine classifies the posture of a person wearing a tag into standing, sitting, or lying. A parameterized classifier, trained on pre-recorded and hand-labelled training data, is used to classify the sequences of tag heights into the three postures. The algorithm has three parameters: the first two are the thresholds t_lo and t_hi dividing the height of a tag into the three states, while the third parameter is the tolerance d. The algorithm stores the previous posture and adjusts the boundaries between the postures according to it (Figure 2). If the current state is below the threshold t_i, that threshold is increased by d, otherwise it is decreased by d. The new posture is set to the posture that occurs most often in the window of consecutive tag heights according to the dynamically set thresholds. The thresholds t_lo and t_hi were obtained from a classification tree that classifies the posture of a person based on the height of a tag. It was trained on a half-hour-long, manually labelled recording of lying, sitting and standing.

Figure 2: Dynamic Thresholds.

The third group of primitive routines is a set of routines that detect whether a tag is moving or not. This is not a trivial task due to the considerable amount of noise in the 3D location data. There are separate routines for detecting the movement of persons, movable objects (e.g., a laptop) and objects that are considered stationary. The routines include hardcoded, handcrafted, common-sense algorithms and a classifier trained on an extensive, pre-recorded, hand-labelled training set. The classifier uses the following attributes, calculated in a sliding window of size 20: the average speed, the approximate distance travelled, the sum of consecutive position distances, and the standard deviation of the moving direction. The classifier was trained on a more than two-hour-long hand-labelled recording of consecutive moving and standing still. Despite the noise in the RTLS measurements, a classification accuracy of 95% per single classification was achieved; [12] describes the classifier in more detail.
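The dynamic-threshold posture classifier might be sketched as follows. This is one interpretation of the hysteresis rule of Figure 2; the threshold values t_lo and t_hi, the tolerance d and the window size are illustrative assumptions, not the values learned by the authors' classification tree.

```python
from collections import Counter

def classify_postures(heights, t_lo=0.5, t_hi=1.1, d=0.1, window=5):
    """Classify a sequence of tag heights (metres) into lying / sitting /
    standing, biasing the thresholds toward the previous posture and
    taking a majority vote over a window of consecutive heights."""
    def label(h, lo, hi):
        return "lying" if h < lo else ("sitting" if h < hi else "standing")

    posture, out = None, []
    for n in range(len(heights)):
        lo, hi = t_lo, t_hi
        # Hysteresis: make it harder to leave the current posture.
        if posture == "lying":
            lo += d
        elif posture == "standing":
            hi -= d
        elif posture == "sitting":
            lo, hi = lo - d, hi + d
        win = heights[max(0, n - window + 1):n + 1]
        votes = Counter(label(h, lo, hi) for h in win)
        posture = votes.most_common(1)[0][0]
        out.append(posture)
    return out
```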

The final group of routines detects whether two tags (or a tag and a given 3D position) are close together by comparing short sequences of the tags' positions. There are separate methods for detecting distances between two persons (e.g., used to detect if a visitor is too far away from its host), between


a person and an object, and between a person and a given 3D location (e.g., used to assign the tags of moving persons to the locations of moving objects detected by video processing).
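A sketch of such a proximity routine: instead of comparing single noisy samples, it requires most of a short sequence of sample pairs to be within a radius. The radius and fraction are illustrative parameters, not the authors' tuned values.

```python
import math

def are_close(track_a, track_b, max_dist=2.0, min_fraction=0.8):
    """Return True if two tags are close: at least min_fraction of the
    paired (x, y, z) samples are within max_dist metres of each other."""
    close = sum(1 for a, b in zip(track_a, track_b)
                if math.dist(a, b) <= max_dist)
    return close >= min_fraction * min(len(track_a), len(track_b))
```

Comparing sequences rather than single samples is what makes the check robust to the occasional false RTLS measurement.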

All of the described primitive routines are robust to the noise in RTLS measurements and are specialized for the PDR's RTLS. The primitive routines' parameters were tuned according to the noise of the RTLS, using the data mining tools Orange [4] and Weka [16]. In the case of a more accurate RTLS, the primitive routines could be simpler and more accurate. Nevertheless, the presented primitive routines perform well despite the considerable amount of noise. This is possible because of the relatively high update rate. If it were significantly lower, the primitive routines would not work as well. Therefore, the accuracy, reliability and update rate of the RTLS are crucial for the performance of the entire PDR system.

4 PDR Modules

4.1 Expert System Module

The Expert System Module enables the supervisor to customize the PDR system according to his/her needs by setting simple rules that must not be violated. It is the simplest and the most reliable module of the PDR system [11]. It is capable of detecting the vast majority of predictable security risks, enables simple customization, is reliable and robust to noise, raises almost no false alarms, and offers a comprehensible explanation for the raised alarms. In addition, it does not suffer from the typical problems common to the learning modules/algorithms, such as a long learning curve, difficulty learning from unlabelled data, a relatively high probability of false alarms, and the elusive balance between false negative and false positive classifications. The expert system consists of three parts, described in the following subsections.

4.1.1 Knowledge Base

The knowledge base of an expert system contains the currently available knowledge about the state of the world. The knowledge base of the PDR expert system consists of RTLS data, predefined rules, and user-defined rules. The first type of knowledge is in the form of a data stream, while the latter two are in the form of if-then rules.

The expert system gets the knowledge about objects' positions from the RTLS data stream. Each unit of the data stream is a filtered RTLS measurement that contains a 3D location with a time stamp and an RTLS tag ID.

User-defined rules enable simple customization of the expert system according to the specific supervisor's needs by specifying prohibited and obligatory behaviour. The supervisor can add, edit, view, and delete the rules at any time using an intuitive graphical user interface. There are several rule templates available. The supervisor has to specify only the missing parameters of the rules, such as for which entities (tags), in which room(s) or user-defined area(s), and at which time the rules apply.

For instance, a supervisor can choose to add a rule based on the following template: "Person P must be in the room R from time Tmin to time Tmax." and set P to John Smith, R to the hallway H, Tmin to 7 am, and Tmax to 11 am. Now the expert system knows that John must be in the hallway from 7 am to 11 am. If he leaves the hallway during that period or if he does not enter it before 7 am, the PDR supervisor will be notified.

Some of the most often used rule templates are listed below:

Object Oi is not allowed to enter area Ai.

Object Oi can only be moved by object Oj.

Object Oi must always be close to object Oj.
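One such template could be represented as follows; this is a minimal sketch of the template mechanism with illustrative field names, not the PDR system's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PresenceRule:
    """Instance of the template 'Person P must be in the room R
    from time Tmin to time Tmax'."""
    person: str
    room: str
    t_min: float  # hour of day, e.g. 7.0 for 7 am
    t_max: float

    def violated(self, person, room, t):
        """True if the rule applies to this person at time t and the
        person is not in the required room."""
        applies = person == self.person and self.t_min <= t <= self.t_max
        return applies and room != self.room

# The example from the text: John must be in hallway H from 7 am to 11 am.
rule = PresenceRule("John Smith", "hallway H", 7.0, 11.0)
```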


The predefined rules are a set of rules that are valid in any application where PDR might be used. Nevertheless, the supervisor has the option to turn them on or off. The predefined rules define when alarms about hardware failures should be triggered.

4.1.2 Inference Engine

The inference engine is the part of the PDR expert system that deduces conclusions about security risks from the knowledge stored in the knowledge base. The inference process is done in real time. First, the RTLS data stream is processed using the primitive routines. Second, all the rules related to a given object (e.g., a person) are checked. If a rule fires, an alarm is raised and an explanation for the raised alarm is generated. An example is presented in the next paragraph.

Suppose that the most recent 3D location of John Smith's tag (from the previous example) has just been received at 8:32 am. The inference engine checks all the rules concerning John Smith. Among them is the rule Ri, which says: "John Smith must be in the hallway H from 7 am to 11 am." The inference engine calls the primitive routine that checks whether John is in the hallway H. There are two possible outcomes. In the first outcome, he is in the hallway H; therefore, the rule Ri is not violated. If John was not in the hallway H in the previous instant, there is an ongoing alarm that is now ended by the inference engine. In the second outcome, John is not in the hallway H; hence the rule Ri is violated at this moment. In this case the inference engine checks whether there is an ongoing alarm about John not being in the hallway H. If there is no such ongoing alarm, the inference engine triggers a new alarm. On the other hand, if there is such an alarm, the inference engine knows that the PDR supervisor has already been notified about it.
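The ongoing-alarm bookkeeping in this example reduces to a small state machine; a sketch (explanation generation omitted):

```python
def update_alarm(ongoing, violated):
    """One inference step for a single rule: raise an alarm only when a
    violation begins, end it when the violation stops, and stay silent
    while an already-reported alarm is ongoing.
    Returns (ongoing, action) with action in {'raise', 'end', None}."""
    if violated and not ongoing:
        return True, "raise"
    if not violated and ongoing:
        return False, "end"
    return ongoing, None
```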


If an alarm were raised every time a rule was violated, the supervisors would get flooded with alarm messages. Therefore, the inference engine automatically decreases the number of alarm messages and groups alarm messages about the same incident together, so that they are easier to handle by the PDR supervisor. The method will be illustrated with an example. Because of the noise in the 3D location measurements, the inference engine does not trigger or end an alarm immediately after the status of rule Ri (violated/not violated) changes. Instead, it waits for more RTLS measurements and checks the trend in the given time window: if there are only a few instances when the rule was violated, they are considered as noise. On the other hand, if there are many such instances (over the global threshold set by the supervisor), then the instances when the rule was not violated are treated as noise. Two consecutive alarms that are interrupted by a short period of time will therefore result in a single alarm message. A short period in which a rule seems to be violated because of the noise in the RTLS data, however, will not trigger an alarm. The grouping of alarms works in the following way: the inference engine groups the alarm messages based on two rules Ri and Rj together if, at the time when rule Ri is violated, another rule Rj concerning John Smith or hallway H is violated too. As a result, the supervisor has to deal with fewer alarm messages.
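The trend check over a time window can be sketched as a simple debouncer; the window size and threshold below are illustrative (in the PDR system the threshold is a global setting chosen by the supervisor):

```python
from collections import deque

class Debouncer:
    """Report a rule as violated only when the number of violated samples
    in a sliding window exceeds a threshold, so that isolated noisy
    samples neither trigger nor interrupt an alarm."""
    def __init__(self, window=10, threshold=5):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def update(self, violated):
        self.samples.append(violated)
        return sum(self.samples) > self.threshold
```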

4.1.3 Generating Alarm Explanations

The Expert System Module also provides the supervisor with an explanation of the alarm. It consists of three parts: an explanation in natural language, a graphical explanation, and a video recording of the event.

Each alarm is the result of a particular rule violation. Since each rule is an instance of a certain rule template, explanations are partially prepared in advance. Each rule template has an assigned pattern in the form of a sentence in natural language with some objects and subjects missing. In order to generate the full explanation, the inference engine fills in the missing parts of the sentence with details about the objects (e.g., person names, areas, times, etc.) related to the alarm.

Figure 3: Video Explanation of an Alarm.

The graphical explanation is given in the form of a ground plan animation and can be played upon the supervisor's request. The inference engine determines the start and end times of an alarm and sets the animation to begin slightly before the alarm was caused and to end slightly after the causes for the alarm are no longer present. The animation is generated from the recorded RTLS data and the ground plan of the building under surveillance. The animated objects (e.g., persons, objects, areas) that are relevant to the alarm are highlighted in red.

If a video recording of the incident that caused an alarm is available, it is added to the alarm explanation. Based on the location of the person that caused the alarm, the person in the video recording is marked with a bounding rectangle (Figure 3). The video explanation is especially important if an alarm is caused by a person or object without a tag.

The natural language explanation, ground plan animation, and video recordings with embedded bounding rectangles produced by the PDR expert system efficiently indicate when and to which events the security personnel should pay attention.

4.2 Video Module

The Video Module periodically checks whether the movement detected by the video cameras is caused by people marked with tags. If it detects movement in an area where no authorised humans are located, it triggers an alarm. It combines the data about tag locations and visible movement to reason about unauthorised entry.


Data about visible moving objects (with or without tags) is available as the output of video pre-processing. Moving objects are described with their 3D locations in the same coordinate system as the RTLS data, the sizes of their bounding boxes, the similarity of the moving object to a human, and a time stamp. A detailed description of the algorithm that processes the video data (developed at the Faculty of Electrical Engineering, University of Ljubljana, Slovenia) can be found in [9] and [10].

The Video Module determines the pairing between the locations of tagged personnel and the detected movement locations. If it determines that there is movement in a location that is far enough from all the tagged personnel, it raises an alarm. In this case the module reports the movement of an unauthorised person or an unknown object (e.g., a robot) based on the similarity between the moving object and a person. The probability of false alarms can be reduced if several cameras are used to monitor the area from various angles. This also enables more accurate localization of moving objects.
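A minimal sketch of this pairing check, assuming 2D floor-plane coordinates and an illustrative distance threshold (the system's actual threshold is not given in the text):

```python
import math

def check_unauthorised(movements, tag_locations, max_pair_dist=1.5):
    """Pair detected movement locations with tagged-personnel locations.

    movements: list of (x, y) movement centroids from video pre-processing.
    tag_locations: list of (x, y) RTLS tag positions.
    max_pair_dist: assumed threshold (metres) beyond which a movement
    cannot be explained by any nearby tag.
    Returns the movement locations that should trigger an alarm.
    """
    alarms = []
    for mx, my in movements:
        nearest = min(
            (math.hypot(mx - tx, my - ty) for tx, ty in tag_locations),
            default=float("inf"),  # no tags at all: every movement is suspect
        )
        if nearest > max_pair_dist:
            alarms.append((mx, my))
    return alarms
```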

Whenever the Video Module triggers an alarm, it also offers an explanation for it in the form of video recordings with embedded bounding boxes highlighting the critical areas (Figure 3). The supervisor of the PDR system can quickly determine whether the alarm is true or false by checking the supplied video recording.

The video pre-processing algorithm is also capable of detecting if a certain camera is blocked (e.g., covered with a piece of fabric). Such information is forwarded to the Video Module, which triggers an alarm.

4.3 Fuzzy Logic Module

The Fuzzy Logic Module is based on the following presumption: frequent behaviour is usual and therefore uninteresting, while rare behaviour is interesting, as it is highly possible that it is unwanted or at least unusual. Therefore the module counts the number of actions performed by the object under surveillance and reasons about the oddity of the observed behaviour based on the counters. If it detects a high number of odd events (i.e., events that rarely took place in the past) in a short period of time, it triggers an alarm.

The knowledge of the module is stored in two four-dimensional arrays of counters for each object under surveillance (implemented as red-black trees [2]). Events are split into two categories, hence the two arrays: events caused by movement and stationary events. A moving event is characterised by its location, direction, and speed of movement. A stationary event, on the other hand, is characterised by location, duration, and posture (lying, sitting, or standing). When an event is characterised, fuzzy discretization [17] is used, hence the name of the module. The location of an event in the floor plane is determined using the RTLS system and discretized in classes of size 50 cm; the module therefore considers the area under surveillance as a grid of 50 by 50 cm squares. The speed of movement is estimated by the Kalman filter and is used to calculate the direction, which is discretized into 8 classes (N, NE, E, SE, S, SW, W, and NW). The scalar velocity is discretized into the following four classes: very slow, slow, normal, and fast. The posture is determined by a primitive routine (see Section 3.3). The duration of an event is discretized into the following classes: 1, 2, 4, 8, 15, and 30 seconds, minutes, or hours.

Figure 4: Calculating the Oddity of Events. (The figure plots the oddity of events against the lower cumulative probability of an event, with the thresholds f_low and f_hi marked at 20 % and 80 %.)
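The crisp part of the discretization described above can be sketched as follows; the speed class boundaries are assumptions (the text does not list them), and a full fuzzy discretization would additionally distribute an event's weight among the neighbouring classes:

```python
def discretize_moving_event(x, y, speed, direction_deg, cell=0.5):
    """Map a moving event to discrete class indices.

    A sketch of the discretization grid described in the text: 50 cm
    location cells, 8 compass direction classes, 4 speed classes.
    The speed boundaries (in m/s) are illustrative assumptions.
    """
    # 50 cm location grid in the floor plane
    cell_ix, cell_iy = int(x // cell), int(y // cell)
    # 8 compass classes of 45 degrees each, N centred on 0 degrees
    compass = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    dir_class = compass[int(((direction_deg + 22.5) % 360) // 45)]
    # assumed boundaries between the four scalar-velocity classes
    for speed_class, upper in [("very slow", 0.3), ("slow", 0.8), ("normal", 2.0)]:
        if speed < upper:
            break
    else:
        speed_class = "fast"
    return (cell_ix, cell_iy, dir_class, speed_class)
```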

The fuzzy discretization has four major advantages. The first is the smaller amount of memory needed to store the counters, as there is only one counter for a whole group of similar events. Note that the accuracy of the stored knowledge is not significantly decreased, because the discrete classes are relatively small. The second advantage is the time complexity of counting the events that are similar to a given event, which is constant instead of being dependent on the number of events seen in the past. The third advantage is the linear interpolation implicitly introduced by fuzzy discretization, which enables a more accurate estimation of the rare events' frequencies. The fourth advantage is the low time complexity of updating the counters' values compared to the time complexity of adding a new counter with value 1 for each new event.

The oddity of the observed behaviour is calculated using a sliding window over which the average oddity of events is calculated. Averaging the oddity over time intervals prevents the false alarms that would otherwise be triggered, if the oddity of single events were used, by RTLS data noise or short sequences of uncommon events. The oddity of a single event is calculated by comparing the frequency of events similar to the given event with the frequencies of the other events. For this purpose the supervisor sets two relative frequencies, f_low and f_hi. The threshold f_low determines the share of the rarest events that are treated as completely unusual and therefore assigned the maximum level of oddity. On the other hand, f_hi determines the share of the most frequent events that are treated as completely usual and therefore assigned an oddity of 0. The oddity of an event whose frequency lies between the thresholds f_low and f_hi decreases linearly as the share of events rarer than the given event increases (Figure 4).
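The threshold scheme can be sketched as follows: a function maps an event's lower cumulative probability to an oddity in [0, 1], and a sliding-window average is taken before comparing against the alarm threshold. The numeric values (0.2, 0.8, window of 10) mirror the marks in Figure 4 but are illustrative assumptions, not system defaults:

```python
def oddity(cum_share_rarer, f_low=0.2, f_hi=0.8):
    """Oddity of a single event.

    cum_share_rarer: lower cumulative probability of the event, i.e. the
    share of past events that are rarer than this one.
    f_low, f_hi: supervisor-set thresholds (illustrative values).
    """
    if cum_share_rarer <= f_low:
        return 1.0  # among the rarest events: completely unusual
    if cum_share_rarer >= f_hi:
        return 0.0  # among the most frequent events: completely usual
    # linear decrease between the two thresholds
    return (f_hi - cum_share_rarer) / (f_hi - f_low)

def windowed_oddity(oddities, window=10):
    """Average oddity over a sliding window, suppressing the alarms that
    single noisy events would otherwise trigger (window size assumed)."""
    recent = oddities[-window:]
    return sum(recent) / len(recent) if recent else 0.0
```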

The drawback of the described method is the relatively long learning period needed before the module starts to perform well. On the other hand, the module discards outdated knowledge and emphasizes new data, which enables adapting to gradual changes in the observed person's behaviour. The module is also highly responsive: it takes only about 3 seconds to detect unusual behaviour. The module autonomously learns the model of usual behaviour, which enables the detection of unusual behaviour. It can detect events such as an unconscious person lying on the floor, running in a room where people usually do not run, a person sitting at a table at which he usually does not sit, etc. The module also triggers an alarm when a long sequence of events happens for the first time. If such a false alarm is triggered, the supervisor can mark it as false. Consequently, the module will increase the appropriate counters and will not raise an alarm for that kind of behaviour in the future.

When the Fuzzy Logic Module triggers an alarm, it also provides a graphical explanation for it. It draws a target-like graph in each square of the mesh dividing the observed area. The colour of a sector of the target represents the frequency of a given group of similar events. The concentric circles represent the speed of movement, e.g., a small radius represents a low speed. The triangles, on the other hand, represent the direction of movement. The location of a target on the mesh represents the location in the physical area. White depicts the lowest frequency, black depicts the highest frequency, while shades of grey depict the frequencies in between. The events that caused an alarm are highlighted with a scale ranging from green to red. For stationary events, tables are used instead of the targets. The row of the table represents the posture, while the column represents the duration. A supervisor can read the graphical explanations quickly and effectively. The visualization is also used for general analysis of the behaviour in the observed area.

4.4 Macro and Statistic Modules

The Macro and Statistic Modules analyse persons' behaviour and trigger alarms if it significantly deviates from the usual behaviour. In order to do that, several statistics about the movement of each tagged person are collected, calculated, and averaged over various time periods. Afterwards, these statistics are compared to the previously stored statistics of the same person and a deviation factor is calculated. If it exceeds the predefined bound, the modules trigger an alarm.

The Statistic Module collects data over time periods from one minute to several hours regardless of the person's location or context. The Macro Module, on the other hand, collects data regarding behaviour in certain areas (e.g., a room), i.e., the behaviour collection starts when a person enters the area and ends when he/she leaves it.

Both modules use behaviour attributes such as the percentage of the time the person spent lying, sitting, standing, or walking during the observed time period, and the average walking speed. Additionally, the Macro Module uses the following attributes: area id, day of the week, length of stay, entrance time, and exit time.

The behaviours are classified with the LOF algorithm [3], a density-based kNN algorithm which calculates the local outlier factor of the tested instance with respect to the learning instances. The LOF algorithm was chosen based on the study [14]. The bias towards false positives or false negatives can be adjusted by setting the alarm threshold.
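A compact, self-contained sketch of the LOF computation follows; the PDR system's actual implementation, neighbourhood size k, and threshold are not specified in the text, so the values here are illustrative:

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def _knn(p, pts, k):
    """k nearest neighbours of p among pts (pts must not contain p)."""
    return sorted(pts, key=lambda q: _dist(p, q))[:k]

def lof(p, train, k=2):
    """Local Outlier Factor of point p w.r.t. the training points.

    LOF near 1 means p is about as densely surrounded as its neighbours;
    values well above 1 flag an outlier, so an alarm would fire once the
    supervisor-set threshold is exceeded. k=2 is an illustrative choice.
    """
    def k_distance(o):
        # distance from o to its k-th nearest neighbour within train
        rest = [q for q in train if q is not o]
        return _dist(o, _knn(o, rest, k)[-1])

    def lrd(o, ref):
        # local reachability density of o w.r.t. the reference set
        nn = _knn(o, ref, k)
        reach = [max(k_distance(n), _dist(o, n)) for n in nn]
        return len(nn) / sum(reach)

    neighbours = _knn(p, train, k)
    ratio_sum = sum(lrd(n, [q for q in train if q is not n]) for n in neighbours)
    return ratio_sum / (len(neighbours) * lrd(p, train))
```

For example, with a tight cluster of four normal-behaviour points, a far-away point scores well above 1 while a point inside the cluster scores close to 1.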

The modules show a graphical explanation for each alarm in the form of a parallel coordinates plot. Each attribute is represented by one of the parallel vertical axes, while the statistics for a given time period are represented by a zigzag line connecting the values of each attribute from the leftmost axis to the rightmost one. Past behaviour is represented by green zigzag lines, while the zigzag line pertaining to the behaviour that triggered the alarm is coloured red. The visualisation offers a quick and simple way of establishing the cause of an alarm and often indicates a more specific reason for it.

5 Verification

Due to the complexity of the PDR system and the diverse tasks that it performs, it is difficult to verify its quality with a single test or to summarize it in a single number such as the true positive rate. Therefore, validation was done on a more subjective and qualitative level with several scenarios for each of the individual modules. Four demonstration videos of the PDR tests are available at http://www.youtube.com/user/ijsdis. A single test case or scenario is a sequence of actions and events including a security risk that should be detected by the system. "A person enters a room without permission" is an example of a scenario. Each scenario has a complement pair: a similar sequence of actions which, on the contrary, must not trigger an alarm. "A person with permission enters the room" is the complement scenario for the above example. The scenarios and their complements were carefully defined in cooperation with and under the supervision of security experts from the Slovenian Ministry of Defence.

The Expert System Module was tested with two to three scenarios per expert rule template. Each scenario was performed ten times with various persons and objects. The module has perfect accuracy (no false positives and no false negatives) in cases when the RTLS noise was within normal limits. When the noise was extremely large, the system occasionally triggered false alarms or overlooked security risks. However, in those cases even human experts were not able to tell whether the observed behaviour should trigger an alarm or not based on the noisy RTLS measurements alone. Furthermore, the extreme RTLS noise occurred in less than 2% of the scenario repetitions, and the system made an error in less than 50% of those cases.

The Video Module was tested using the following three scenarios: "a person enters the area under surveillance without the RTLS tag", "a robot is moving without an authorised person's presence", and "a security camera is intentionally obscured". Scenarios were repeated ten times with different people as actors. The module detected the security risks in all of the scenario repetitions with movement and distinguished between a human and a robot perfectly. It failed to detect the obscured camera in one out of 10 repetitions. The module also did not trigger any false alarms.

The Fuzzy Logic Module was tested with several scenarios, while the fuzzy knowledge was gathered over two weeks. The module successfully detected a person lying on the floor, sitting on a colleague's chair for a while, running in a room, walking on a table, crawling under a table, squeezing behind a wardrobe, standing on the same spot for an extended period of time, and similar unusual events. However, the experts' opinion was that some of the alarms should not have been triggered.

Table 1: Evaluation of the PDR System.

Module           TP     TN    FP    FN     N
Expert Sys.     197    199     2     2   400
Video            30     30     0     0    60
Fuzzy Logic      47     42     8     3   100
Macro             9     10     1     0    20
Statistic         9     10     1     0    20
Total           292    291    12     5   600
Percentage (%) 48.7   48.5   2.0   0.8


Indeed, we expect that in more extensive tests the module's supervised learning capabilities would prevent further repetitions of unnecessary alarms.

The test of the Macro and Statistic Modules included the simulation of a usual day at work condensed into one hour. The statistic time periods were 2 minutes long. Since the modules require a collection of a person's past behaviour, two usual days of work were recorded by a person, constituting two hours of past behaviour data. Afterwards, the following activities were performed 10 times by the same person and classified: performing a normal day of work, stealing a container with classified data, acting agitated as if under the effect of drugs, and running. The classification accuracy was 90%. The errors were due to the low amount of past behaviour data: the modules did not learn the usual behaviour of the test person but only a condensed (simulated) behaviour in a limited learning time. We expect that the classification accuracy would be even higher if the learning time were extended and if the person acted as usual instead of simulating a condensed day of work.

The overall system performance was tested on a single scenario: "stealing a container with classified documents". In the test, five persons tried to steal the container from a cabinet in a small room under surveillance. Each person tried to steal the container five times with and without a tag. All the attempts were successfully detected by the system, which reported the alarm and provided an explanation for it.

The validation test data is summarized in Table 1. It gives the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) alarms, and the total number (N) of scenario repetitions. Each row gives the results for one of the five modules. The bottom two rows give the total sum for each column and the relative percentage.


The system received the award for the best innovation among research groups in Slovenia for 2009 at the Fourth Slovenian Forum of Innovations.

6 Conclusion

This paper presents an intelligent surveillance system utilizing a real-time location system (RTLS), video cameras, and artificial intelligence methods. It is designed for surveillance of high-security indoor environments and is focused on internal security threats. The data about the movement of personnel and important equipment is gathered by the RTLS and video cameras. After basic pre-processing with filters and primitive routines, the data is sent to five independent software modules, each of them specialized for detecting a specific security risk. The Expert System Module detects suspicious situations that can be described by the location of a person or other tagged objects in space and time. It detects many different scenarios with high accuracy. The Video Module automatically detects the movement of persons and objects without tags, which is not allowed inside the surveillance area. The Fuzzy Logic, Macro, and Statistic Modules automatically extract the usual movement patterns of personnel and equipment and detect deviations from the usual behaviour. The Fuzzy Logic Module is focused on short-term anomalous behaviour such as entering an area for the first time, lying on the ground, or walking on a table. The Macro and Statistic Modules, on the other hand, are focused on mid- and long-term behaviour such as deviations in the daily work routine.

The validation of the system shows that it is able to detect all the security scenarios it was designed for and that it does not raise too many false alarms even in more challenging situations. In addition, the system is customizable and can be used in a range of security applications such as confidential data archives and banks.

Acknowledgement

Research presented in this paper was financed by the Republic of Slovenia, Ministry of Defence. We would like to thank the colleagues from the Machine Vision Laboratory, Faculty of Electrical Engineering, University of Ljubljana, Slovenia, and Spica International, d.o.o. for fruitful cooperation on the project. Thanks also to Boštjan Kaluža, Mitja Luštrek, and Bogdan Pogorelc for help regarding the RTLS and for discussions, and to Anže Rode for discussions about security systems, expert system rule templates, and the specification of scenarios.

References

[1] G. R. Arce. "Nonlinear Signal Processing: A Statistical Approach", Wiley: New Jersey, USA, 2005.

[2] R. Bayer. "Symmetric Binary B-Trees: Data Structures and Maintenance Algorithms", Acta Informatica, 1, pp. 290–306, 1972.

[3] M. M. Breunig, H. P. Kriegel, R. T. Ng, J. Sander. "LOF: Identifying Density-Based Local Outliers", Proceedings of the International Conference on Management of Data – SIGMOD ’00, pp. 93–104, Dallas, Texas, 2000.

[4] J. Demšar, B. Zupan, G. Leban. "Orange: From Experimental Machine Learning to Interactive Data Mining", White Paper (www.ailab.si/orange), Faculty of Computer and Information Science, University of Ljubljana, Slovenia, 2004.

[5] M. Gams, T. Tušar. "Intelligent High-Security Access Control", Informatica, vol. 31(4), pp. 469–477, 2007.

[6] R. E. Kalman. "A New Approach to Linear Filtering and Prediction Problems", Journal of Basic Engineering, 82(1), pp. 35–45, 1960.

[7] M. Kolbe, M. Gams. "Towards an Intelligent Biometric System for Access Control", Proceedings of the 9th International Multiconference Information Society – IS 2006, Ljubljana, Slovenia, 2006, pp. 118–122.

[8] B. Krausz, R. Herpers. "Event Detection for Video Surveillance Using an Expert System", Proceedings of the 1st ACM Workshop on Analysis and Retrieval of Events/Actions and Workflows in Video Streams – AREA 2008, Vancouver, Canada, pp. 49–56.

[9] M. Kristan, J. Perš, M. Perše, S. Kovačič. "Closed-World Tracking of Multiple Interacting Targets for Indoor-Sports Applications", Computer Vision and Image Understanding, vol. 113(5), pp. 598–611, 2009.

[10] M. Perše, M. Kristan, S. Kovačič, G. Vučković, J. Perš. "A Trajectory-Based Analysis of Coordinated Team Activity in a Basketball Game", Computer Vision and Image Understanding, vol. 113(5), pp. 612–621, 2009.

[11] R. Piltaver, M. Gams. "Expert System as a Part of Intelligent Surveillance System", Proceedings of the 18th International Electrotechnical and Computer Science Conference – ERK 2009, vol. B, pp. 191–194, 2009.

[12] R. Piltaver. "Strojno učenje pri načrtovanju algoritmov za razpoznavanje tipov gibanja" [Machine learning in designing algorithms for movement-type recognition], Proceedings of the 11th International Multiconference Information Society – IS 2008, pp. 13–17, 2008.

[13] V. Schwarz, A. Huber, M. Tüchler. "Accuracy of a Commercial UWB 3D Location Tracking System and its Impact on LT Application Scenarios", Proceedings of the IEEE International Conference on Ultra-Wideband, Zürich, Switzerland, 2005.

[14] T. Tušar, M. Gams. "Odkrivanje izjem na primeru inteligentnega sistema za kontrolo pristopa" [Outlier detection in the case of an intelligent access-control system], Proceedings of the 9th International Multiconference Information Society – IS 2006, Ljubljana, Slovenia, 2006, pp. 136–139.

[15] Ubisense: available at http://www.ubisense.net/.

[16] I. H. Witten, E. Frank. "Data Mining: Practical Machine Learning Tools and Techniques" (2nd edition), Morgan Kaufmann, 2005.

[17] L. A. Zadeh. "Fuzzy Sets", Information and Control, 8(3), pp. 338–353, 1965.

[18] http://www.pervcomconsulting.com/secure.html

[19] http://www.visonictech.com/Active-RFID-RTLS-Tracking-and-Mangement-Software-Eiris.html

[20] http://www.aeroscout.com/content/healthcare

[21] http://www.telargo.com/solutions/track_trace.asp


Author

Franz Baader is Full Professor for Theoretical Computer Science at TU Dresden, Germany. He obtained his PhD in Computer Science at the University of Erlangen, Germany. He was a Senior Researcher at the German Research Institute for Artificial Intelligence (DFKI) for four years, and Associate Professor at RWTH Aachen, Germany, for eight years. His main research area is Logic in Computer Science, in particular knowledge representation (description logics, modal logics, nonmonotonic logics) and automated deduction (term rewriting, unification theory, combination of decision procedures). <[email protected]>

Knowledge Representation

What’s New in Description Logics

Franz Baader

© 2011 Informatik Spektrum

This paper was first published, in English, by Informatik-Spektrum (Volume 34, issue no. 5, October 2011, pp. 434–442). Informatik-Spektrum (<http://www.springerlink.com/content/1432-122X/>), a UPENET partner, is a journal published, in German or English, by Springer Verlag on behalf of the German CEPIS society GI (Gesellschaft für Informatik, <http://www.gi-ev.de/>) and the Swiss CEPIS society SI (Schweizer Informatiker Gesellschaft – Société Suisse des Informaticiens, <http://www.s-i.ch/>).

Mainstream research in Description Logics (DLs) until recently concentrated on increasing the expressive power of the employed description language while keeping standard inference problems like subsumption and instance checking manageable, in the sense that highly-optimized reasoning procedures for them behave well in practice. One of the main successes of this line of research was the adoption of OWL DL, which is based on an expressive DL, as the standard ontology language for the Semantic Web. More recently, there has been a growing interest in more light-weight DLs, and in other kinds of inference problems, mainly triggered by the needs of applications with large-scale ontologies. In this paper, we first review the DL research leading to the very expressive DLs with practical inference procedures underlying OWL, and then sketch the recent development of light-weight DLs and novel inference procedures.

Keywords: Description Logics, Logic-based Knowledge Representation Formalism, Ontology Languages, OWL, Practical Reasoning Tools.

1 Mainstream DL research of the last 25 years: towards very expressive DLs with practical inference procedures

Description Logics [BCNMP03] are a well-investigated family of logic-based knowledge representation formalisms, which can be used to represent the conceptual knowledge of an application domain in a structured and formally well-understood way. They are employed in various application domains, such as natural language processing, configuration, and databases, but their most notable success so far is the adoption of the DL-based language OWL1 as the standard ontology language for the Semantic Web [HoPH03].

The name Description Logics is motivated by the fact that, on the one hand, the important notions of the domain are described by concept descriptions, i.e., expressions that are built from atomic concepts (unary predicates) and atomic roles (binary predicates) using concept constructors. The expressivity of a particular DL is determined by which concept constructors are available in it. From a semantic point of view, concept names and concept descriptions represent sets of individuals, whereas roles represent binary relations between individuals. For example, using the concept names Man, Doctor, and Happy and the role names married and child, the concept of "a man that is married to a doctor, and has only happy children" can be expressed using the concept description

Man ⊓ ∃married.Doctor ⊓ ∀child.Happy

1 <http://www.w3.org/TR/owl-features/>.

On the other hand, DLs differ from their predecessors in that they are equipped with a formal, logic-based semantics, which can, e.g., be given by a translation into first-order predicate logic. For example, the above concept description can be translated into the following first-order formula (with one free variable x):

Man(x) ∧ ∃y.(married(x, y) ∧ Doctor(y)) ∧ ∀y.(child(x, y) → Happy(y))

The motivation for introducing the early predecessors of DLs, such as semantic networks and frames [Quil67, Mins81], actually was to develop means of representation that are closer to the way humans represent knowledge than a representation in formal logics, like first-order predicate logic. Minsky [Mins81] even combined his introduction of the frame idea with a general rejection of logic as an appropriate formalism for representing knowledge. However, once people tried to equip these "formalisms" with a formal semantics, it turned out that they can be seen as syntactic variants of (subclasses of) first-order predicate logic [Haye79, ScGC79]. Description Logics were developed with the intention of keeping the advantages of the logic-based approach to knowledge representation (like a formal model-theoretic semantics and well-defined inference problems), while avoiding the disadvantages of using full first-order predicate logic (e.g., by using a variable-free syntax that is easier to read, and by ensuring decidability of the important inference problems).

Concept descriptions can be used to define the terminology of the application domain, and to make statements about a specific application situation in the assertional part of the knowledge base. In its simplest form, a DL terminology (usually called TBox) can be used to introduce abbreviations for complex concept descriptions. For example, the concept definitions

Man ≡ Human ⊓ ¬Female    Woman ≡ Human ⊓ Female    Father ≡ Man ⊓ ∃child.⊤

define the concept of a man (woman) as a human that is not female (is female), and the concept of a father as a man that has a child, where ⊤ stands for the top concept (which is interpreted as the universe of all individuals in the application domain). The above is a (very simple) example of an acyclic TBox, which is a finite set of concept definitions that is unambiguous (i.e., every concept name appears at most once on the left-hand side of a definition) and acyclic (i.e., there are no cyclic dependencies between definitions). In general TBoxes, so-called general concept inclusions (GCIs) can be used to state additional constraints on the interpretation of concepts and roles. In our example, it makes sense to state domain and range restrictions for the role child. The GCIs

∃child.Human ⊑ Human    Human ⊑ ∀child.Human

say that only human beings can have human children, and that the child of a human being must be human.

In the assertional part (ABox) of a DL knowledge base, facts about a specific application situation can be stated by introducing named individuals and relating them to concepts and roles. For example, the assertions

Man(JOHN)    child(JOHN, MACKENZIE)    Female(MACKENZIE)

state that John is a man who has the female child Mackenzie.

Knowledge representation systems based on DLs provide their users with various inference services that allow them to deduce implicit knowledge from the explicitly represented knowledge. For instance, the subsumption algorithm allows one to determine subconcept-superconcept relationships. For example, w.r.t. the concept definitions from above, the concept Human subsumes the concept Father, since all instances of the second concept are necessarily instances of the first concept, i.e., whenever the above concept definitions are satisfied, then Father is interpreted as a subset of Human. With the help of the subsumption algorithm, one can compute the hierarchy of all concepts defined in a TBox. This inference service is usually called classification. The instance algorithm can be used to check whether an individual occurring in an ABox is necessarily an instance of a given concept. For example, w.r.t. the above assertions, concept definitions, and GCIs, the individual MACKENZIE is an instance of the concept Human. With the help of the instance algorithm, one can compute answers to instance queries, i.e., all individuals occurring in the ABox that are instances of the query concept C.
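The subsumption between Father and Human mentioned above can be read off the definitions in the text directly: unfolding Father yields a conjunction containing Man, and unfolding Man yields a conjunction containing Human (a sketch in standard DL notation):

```latex
\mathit{Father} \;\equiv\; \mathit{Man} \sqcap \exists \mathit{child}.\top
\;\sqsubseteq\; \mathit{Man}
\;\equiv\; \mathit{Human} \sqcap \neg\mathit{Female}
\;\sqsubseteq\; \mathit{Human}
```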

In order to ensure a reasonable and predictable behavior of a DL system, the underlying inference problems (like the subsumption and the instance problem) should at least be decidable for the DL employed by the system, and preferably of low complexity. Consequently, the expressive power of the DL in question must be restricted in an appropriate way. If the imposed restrictions are too severe, however, then the important notions of the application domain can no longer be specified using concept descriptions. Investigating this trade-off between the expressivity of DLs and the complexity of their inference problems has been one of the most important issues in DL research.

The general opinion on the (worst-case) complexity that is acceptable for a DL has changed dramatically over time. Historically, in the early times of DL research, people concentrated on identifying formalisms for which reasoning is tractable, i.e., can be performed in polynomial time [Pate84]. The precursor of all DL systems, KL-ONE [BrSc85], as well as its early successor systems, like KANDOR [Pate84], K-REP [MaDW91], and BACK [Pelt91], indeed employed polynomial-time subsumption algorithms. Later on, however, it turned out that subsumption in rather inexpressive DLs may be intractable [LeBr87], that subsumption in KL-ONE is even undecidable [Schm89], and that even for systems like KANDOR and BACK, for which the expressiveness of the underlying DL had been carefully restricted with the goal of retaining tractability, the subsumption problem is in fact intractable [Nebe88]. The reason for the discrepancy between the complexity of the subsumption algorithms employed in the above-mentioned early DL systems and the worst-case complexity of the subsumption problems these algorithms were supposed to solve was that these systems employed sound but incomplete subsumption algorithms, i.e., algorithms whose positive answers to subsumption queries are correct, but whose negative answers may be incorrect. The use of incomplete algorithms has since then largely been abandoned in the DL community, mainly because of the problem that the behavior of the systems is no longer determined by the semantics of the description language: an incomplete algorithm may claim that a subsumption relationship does not hold, although it should hold according to the semantics. All the intractability results mentioned above already hold for subsumption between concept descriptions without a TBox. An even worse blow to the quest for a practically useful DL with a sound, complete, and polynomial-time subsumption algorithm was Nebel's result [Nebe90] that subsumption w.r.t. an acyclic TBox (i.e., an unambiguous set of concept definitions without cyclic dependencies) in a DL with conjunction and value restriction is already intractable.2

At about the time when these (negative) complexity results were obtained, a new approach for solving inference problems in DLs, such as the subsumption and the instance problem, was introduced. This so-called tableau-based approach was first introduced in the context of DLs by Schmidt-Schauß [Schm89] and Smolka [ScSm91], though it had already been used for modal logics long before that [Fitt72]. It has turned out that this approach can be used to handle a great variety of different DLs (see [BaSa01] for an overview and, e.g., [HoSa05, HoKS06, LuMi07] for more recent results), and it yields sound and complete inference algorithms also for very expressive DLs. Although the worst-case complexity of these algorithms is quite high, the tableau-based approach nevertheless often yields practical procedures: optimized implementations of such procedures have turned out to behave quite well in applications [BFHN*94, Horr03, HaMo08], even for expressive DLs with a high worst-case complexity (ExpTime and beyond). The advent of tableau-based algorithms was the main reason why the DL community basically abandoned the search for DLs with tractable inference problems and concentrated on the design of practical tableau-based algorithms for expressive DLs. The most prominent modern DL systems, FaCT++ [TSHo06], Racer [HaMo01b], and Pellet [SiPa04], support very expressive DLs and employ highly optimized tableau-based algorithms. In addition to the fact that DLs are equipped with a well-defined formal semantics, the availability of mature systems that support sound and complete reasoning in very expressive description formalisms was an important argument in favor of using DLs as the foundation of OWL, the standard ontology language for the Semantic Web. In fact, OWL DL is based on the expressive DL SHOIN(D), for which reasoning is in the worst case NExpTime-complete [HoPa04].
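To give a flavor of the tableau-based approach, the following sketch decides satisfiability of an ALC concept description without a TBox. It is an illustrative simplification, not the algorithm of any of the systems named above: the concept is converted to negation normal form, and the tableau rules for conjunction, disjunction, existential and value restrictions are applied until either a clash is found or a clash-free expansion exists. The tuple encoding of concepts is an assumption of this sketch.

```python
def nnf(c):
    """Push negation inward (negation normal form)."""
    op = c[0]
    if op == 'not':
        d = c[1]
        if d[0] == 'atom':   return c
        if d[0] == 'not':    return nnf(d[1])
        if d[0] == 'and':    return ('or', nnf(('not', d[1])), nnf(('not', d[2])))
        if d[0] == 'or':     return ('and', nnf(('not', d[1])), nnf(('not', d[2])))
        if d[0] == 'exists': return ('forall', d[1], nnf(('not', d[2])))
        if d[0] == 'forall': return ('exists', d[1], nnf(('not', d[2])))
    if op in ('and', 'or'):        return (op, nnf(c[1]), nnf(c[2]))
    if op in ('exists', 'forall'): return (op, c[1], nnf(c[2]))
    return c  # atomic concept

def satisfiable(concepts):
    """Tableau expansion on a node label; `concepts` is a set of NNF concepts."""
    label = set(concepts)
    # deterministic conjunction rule: add both conjuncts
    changed = True
    while changed:
        changed = False
        for c in list(label):
            if c[0] == 'and' and not ({c[1], c[2]} <= label):
                label |= {c[1], c[2]}; changed = True
    # clash: A and not-A in the same label
    for c in label:
        if c[0] == 'atom' and ('not', c) in label:
            return False
    # nondeterministic disjunction rule: branch on the first unresolved 'or'
    for c in label:
        if c[0] == 'or' and c[1] not in label and c[2] not in label:
            return (satisfiable(label | {c[1]}) or
                    satisfiable(label | {c[2]}))
    # existential rule: each 'exists' spawns a successor constrained by the 'forall's
    for c in label:
        if c[0] == 'exists':
            succ = {c[2]} | {d[2] for d in label
                             if d[0] == 'forall' and d[1] == c[1]}
            if not satisfiable(succ):
                return False
    return True
```

Since there is no TBox, the expansion always terminates; handling general TBoxes is where blocking techniques (discussed later) become necessary.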

The research on how to extend the expressive power of DLs has actually not stopped with the adoption of SHOIN(D) as the DL underlying OWL. In fact, the new version of the OWL standard, OWL 2,³ is based on the even more expressive DL SROIQ, for which reasoning is 2NExpTime-complete [Kaza08]. The main new features of SROIQ are the use of qualified number restrictions rather than simple number restrictions, and the availability of (a restricted form of) role inclusion axioms. For example, with a simple number restriction we can describe the concept of a man that has three children,

Man ⊓ (≥ 3 child),

but we cannot specify properties of these children, as in the qualified number restriction

Man ⊓ (≥ 3 child.Woman).

2 More recent developments: Light-weight DLs and the need for novel inference tools

In this section, we first discuss the EL and the DL-

2 All the systems mentioned above supported these two concept constructors, which were at that time viewed as being indispensable for a DL. The DL with exactly these two concept constructors is called FL0 [Baad90c].
3 <http://www.w3.org/TR/2009/REC-owl2-overview-20091027/>.


114 CEPIS UPGRADE Vol. XII, No. 5, December 2011 © CEPIS

UPENET

Farewell Edition


Subsumption in FL0 w.r.t. an acyclic TBox is coNP-complete [Nebe90], and w.r.t. GCIs it is even ExpTime-complete [BaBL05]. In contrast, subsumption in EL stays tractable even w.r.t. GCIs [Bran04], and this result is stable under the addition of several interesting means of expressivity [BaBL05, BaBL08].

The polynomial-time subsumption algorithm for EL [Bran04, BaBL05] actually classifies the given TBox T, i.e., it simultaneously computes all subsumption relationships between the concept names occurring in T. This algorithm proceeds in four steps:

1. Normalize the TBox.
2. Translate the normalized TBox into a graph.
3. Complete the graph using completion rules.
4. Read off the subsumption relationships from the normalized graph.

An EL-TBox is normalized if it only contains GCIs of the following form:

A1 ⊑ B,   A1 ⊓ A2 ⊑ B,   A1 ⊑ ∃r.B,   ∃r.A1 ⊑ B,

where A1, A2, B are concept names or the top concept ⊤. Any EL-TBox can be transformed in polynomial time into a normalized one by applying equivalence-preserving normalization rules [Bran04]. In the next step, a classification graph (V, S, R) is built, where

V is the set of concept names (including ⊤) occurring in the normalized TBox T;
S labels nodes with sets of concept names (again including ⊤);
R labels edges with sets of role names.

The label sets are supposed to satisfy the following invariants:

S(A) contains only subsumers of A w.r.t. T.

Lite families of light-weight DLs, and then consider inference problems different from the subsumption and the instance problem.

Light-weight DLs: the EL family

The ever increasing expressive power and worst-case complexity of expressive DLs, combined with the increased use of DL-based ontology languages in practical applications due to the OWL standard, has also resulted in an increasing number of ontologies that cannot be handled by tableau-based reasoning systems without manual tuning by the system developers, despite highly optimized implementations. Perhaps the most prominent example is the well-known medical ontology SNOMED CT⁴, which comprises 380,000 concepts and is used as a standardized health care terminology in a variety of countries such as the US, Canada, and Australia. In tests performed in 2005 with FaCT++ and Racer, neither of the two systems could classify SNOMED CT [BaLS05]⁵, and Pellet still could not classify SNOMED CT in tests performed in 2008 [Meng09].

From the DL point of view, SNOMED CT is an acyclic TBox that contains only the concept constructors conjunction (⊓), existential restriction (∃r.C), and the top concept (⊤). The DL with exactly these three concept constructors is called EL [BaKM99]. In contrast to its counterpart with value restrictions, FL0, the light-weight DL EL has much better algorithmic properties: subsumption without a TBox is polynomial in both EL [BaKM99] and FL0 [LeBr87].

Figure 1: The completion rules for subsumption in EL w.r.t. general TBoxes.

4 <http://www.ihtsdo.org/snomed-ct/>.
5 Note, however, that more recent versions of FaCT++ and Racer perform quite well on SNOMED CT [Meng09], due to optimizations specifically tailored towards the classification of SNOMED CT.


R(A,B) contains only roles r such that ∃r.B subsumes A w.r.t. T.

Initially, we set S(A) := {A, ⊤} for all nodes A, and R(A,B) := ∅ for all edges (A,B). Obviously, the above invariants are satisfied by these initial label sets.

The labels of nodes and edges are then extended by applying the rules of Figure 1. Note that a rule is only applied if it really extends a label set. It is easy to see that these rules preserve the above invariants. The fact that subsumption in EL w.r.t. TBoxes can be decided in polynomial time is an immediate consequence of the facts that (i) rule application terminates after a polynomial number of steps, and (ii) if no more rules are applicable then S(A) contains exactly those concept names B occurring in T that are subsumers of A w.r.t. T (see [Bran04, BaBL05] for more details and full proofs).
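The completion-rule idea can be turned into a small working sketch. The following program is an illustrative reconstruction for TBoxes already in the normal form described above; the tuple encoding of the four axiom shapes is an assumption of the sketch, not the authors' implementation.

```python
def classify(tbox, names):
    """Saturate the label sets S and R for a normalized EL-TBox.
       Axioms are tuples:
         ('sub',  A, B)        # A ⊑ B
         ('conj', A1, A2, B)   # A1 ⊓ A2 ⊑ B
         ('ex_r', A, r, B)     # A ⊑ ∃r.B
         ('ex_l', r, A, B)     # ∃r.A ⊑ B
       Returns S with S[A] = set of subsumers of A (TOP stands for ⊤)."""
    nodes = set(names) | {'TOP'}
    S = {A: {A, 'TOP'} for A in nodes}   # initial node labels
    R = {}                               # R[(A, B)] = roles labelling edge A -> B
    changed = True
    while changed:                       # apply rules until no label set grows
        changed = False
        for ax in tbox:
            if ax[0] == 'sub':
                _, A1, B = ax
                for A in nodes:
                    if A1 in S[A] and B not in S[A]:
                        S[A].add(B); changed = True
            elif ax[0] == 'conj':
                _, A1, A2, B = ax
                for A in nodes:
                    if A1 in S[A] and A2 in S[A] and B not in S[A]:
                        S[A].add(B); changed = True
            elif ax[0] == 'ex_r':
                _, A1, r, B = ax
                for A in nodes:
                    if A1 in S[A] and r not in R.get((A, B), set()):
                        R.setdefault((A, B), set()).add(r); changed = True
            else:  # 'ex_l': from A ⊑ ∃r.B' and B' ⊑ A1 and ∃r.A1 ⊑ B infer A ⊑ B
                _, r, A1, B = ax
                for (A, Bp), roles in list(R.items()):
                    if r in roles and A1 in S[Bp] and B not in S[A]:
                        S[A].add(B); changed = True
    return S
```

For example, from Dad ⊑ Man, Dad ⊑ ∃child.Girl, ∃child.⊤ ⊑ HasChild, and Man ⊓ HasChild ⊑ Father, the saturation places Father in S(Dad).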

Light-weight DLs: the DL-Lite family

Another problematic issue with expressive DLs is that query answering in such DLs does not scale too well to knowledge bases with a very large ABox. In this context, queries are conjunctions of assertions that may also contain variables, some of which can be existentially quantified. For example, the query

Man(x) ∧ child(x, y) ∧ Woman(y)

asks for all men that have a child that is a woman⁶, but in general the use of variables allows the formulation of more complex queries than simple instance queries. In the database world, these kinds of queries are called conjunctive queries [AbHV95]; the difference to the pure database case is that, in addition to the instance data, we also have a TBox. As an example, consider the ABox assertions stating facts about John and Mackenzie from the previous section. Without any additional information about the meaning of the predicates Man, child, and Woman, the individual JOHN is not an answer to the above query. However, if we take the concept definitions and GCIs introduced in the previous section into account, then JOHN turns out to be an answer to this query.

Query answering in expressive DLs such as the already mentioned SHOIN (i.e., SHOIN(D) without concrete domains) is 2ExpTime-complete regarding combined complexity [Lutz08], i.e., the complexity w.r.t. the size of the TBox and the ABox. Thus, query answering in this logic is even harder than subsumption, while at the same time being much more time critical. Moreover, query answering in SHOIN is coNP-complete [OrCE08] regarding data complexity (i.e., in the size of the ABox), which is viewed as "unfeasible" in the database community. These complexity hardness results for answering conjunctive queries in expressive DLs are dramatic since many DL applications, such as those that use ABoxes as web repositories, involve ABoxes with hundreds of thousands of individuals. It is a commonly held opinion that, in order to achieve truly scalable query answering in the short term, it is essential to make use of conventional relational database systems for query answering in DLs. Given this proviso, the question is what expressivity a DL can offer such that queries can be answered using relational database technology while at the same time meaningful concepts can be specified in the TBox. As an answer to this, the DL-Lite family has been introduced in [CGL+05, CDL+-KR06, CGL+07], designed to allow the implementation of conjunctive query answering "on top of" a relational database system.

DL-Litecore is the basic member of the DL-Lite family [CGL+07]. Concept descriptions of this DL are of the form

A | ∃r | ∃r⁻,

where A is a concept name, r is a role name, and r⁻ denotes the inverse of the role name r. A DL-Litecore knowledge base (KB) consists of a TBox and an ABox. The TBox formalism allows for GCIs and disjointness axioms between DL-Litecore concept descriptions C, D:

C ⊑ D   and   disj(C, D),

where disj(C, D) states that C and D must always be interpreted as disjoint sets. A DL-Litecore ABox is a finite set of concept and role assertions A(a) and r(a, b), where A is a concept name, r is a role name, and a, b are individual names.

In contrast to EL, DL-Lite cannot express qualified existential restrictions such as ∃child.Woman in the TBox. Conversely, EL does not have inverse roles, which


6 This simple query could also be expressed as an instance query using the EL-concept description Man ⊓ ∃child.Woman, but in general the use of variables allows the formulation of more complex queries than simple instance queries.


are available (albeit in a limited way) in DL-Lite.

In principle, query answering in DL-Lite can be realized as follows:

1. use the TBox to reformulate the given conjunctive query q into a first-order query q′ and then discard the TBox;
2. view the ABox as a relational database;
3. evaluate q′ in the database using a relational query engine.

In practice, more work needs to be done to turn this into a scalable approach for query answering. For example, the queries generated by the reformulation step are very different from the SQL queries usually formulated by humans, and thus relational database engines are not optimized for such queries.
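The reformulation step can be illustrated by the following much-simplified sketch, inspired by but far less complete than the PerfectRef algorithm of [CGL+07]; the axiom and atom encodings are assumptions of the sketch. It expands a conjunctive query into a union of conjunctive queries by applying positive inclusion axioms from right to left:

```python
from itertools import count

def rewrite(query, tbox):
    """query: tuple of atoms; concept atom ('C', name, var),
       role atom ('R', role, var1, var2).
       tbox: positive inclusions:
         ('isa', B, A)  # B ⊑ A
         ('dom', r, A)  # ∃r ⊑ A    (the domain of r is A)
         ('rng', r, A)  # ∃r⁻ ⊑ A   (the range of r is A)
       Returns the set of rewritten queries (a union of CQs)."""
    fresh = count()          # source of fresh existential variables
    result, frontier = {query}, [query]
    while frontier:
        q = frontier.pop()
        for i, atom in enumerate(q):
            if atom[0] != 'C':
                continue     # only concept atoms are rewritten in this sketch
            _, A, x = atom
            for ax in tbox:
                if ax[2] != A:
                    continue
                if ax[0] == 'isa':
                    new_atom = ('C', ax[1], x)
                elif ax[0] == 'dom':
                    new_atom = ('R', ax[1], x, f'_v{next(fresh)}')
                else:  # 'rng'
                    new_atom = ('R', ax[1], f'_v{next(fresh)}', x)
                # duplicates up to variable renaming may occur; fine for a sketch
                q2 = q[:i] + (new_atom,) + q[i+1:]
                if q2 not in result:
                    result.add(q2); frontier.append(q2)
    return result
```

Each rewritten CQ can then be translated into SQL over the ABox tables and evaluated by the database engine, with the TBox discarded.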

Interestingly, also in EL it is possible to implement query answering using a relational database system [LuToWo-IJCAI-09]. In contrast to the approach for DL-Lite, the TBox is incorporated into the ABox and not into the query. In addition, some limited query reformulation (independent of both the TBox and the ABox) is also required.

The relevance of the light-weight DLs discussed above is underlined by the fact that both of them are captured in the official W3C profiles⁷ document for OWL 2. Each of the OWL 2 profiles is designed for specific application requirements. For applications that rely on reasoning services for ontologies with a large number of concepts, the profile OWL 2 EL has been introduced, which is based on EL++, a tractable extension of EL. For applications that deal with large sets of data and that mainly use the reasoning service of query answering, the profile OWL 2 QL has been defined. The DL underlying this profile is a member of the DL-Lite family.

Novel inference problems

The developers of the early DL systems concentrated on the subsumption and the instance problem, and the same was true until recently for the developers of highly optimized systems for expressive DLs. The development, maintenance, and usage of large ontologies can, however, also profit from the use of other inference procedures. Certain non-standard inference problems, like unification [BaNa00, BaMo09], matching [BKBM99, BaKu00], and the problem of computing least common subsumers [BaKu98, BaKM99, BaST07, DCNS09], have been investigated for quite a while [BaKu06]. Unification and matching can, for example, help the ontology engineer to find redundancies in large ontologies, and least common subsumers and most specific concepts can be used to generate concepts from examples.

Other non-standard inference problems have, however, come into the focus of mainstream DL research only recently. One example is conjunctive query answering, which is investigated not only for light-weight DLs (see above), but also for expressive DLs [GHLS07, Lutz08].

Another is the identification and extraction of modules inside an ontology. Intuitively, given an ontology T and a signature Σ (i.e., a subset of the concept and role names occurring in T), a module is a subset M of T such that the following holds for all concept descriptions C, D that can be built from symbols in Σ: C is subsumed by D w.r.t. M if C is subsumed by D w.r.t. T. Consequently, if one is only interested in subsumption between concepts built from symbols in Σ, it is sufficient to use M instead of the (possibly much larger) whole ontology T. Similarly, one can also introduce the notion of a module for other inference problems (such as query answering). An overview of different approaches for defining modules and a guideline for when to use which notion of a module can be found in [SaSZ09]. Module identification and extraction is computationally costly for expressive DLs, and even undecidable for very expressive ones such as OWL DL [LuWW07]. Both for the EL family [LuWo07, Sunt08] and the DL-Lite family [KWZ-KR-08], the reasoning problems that are relevant in this area are decidable and usually of much lower complexity than for expressive DLs.
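A minimal sketch in the spirit of reachability-based module extraction (a deliberate simplification; the precise notions of module, such as locality-based modules, are more subtle [SaSZ09]): starting from the signature of interest, repeatedly pull in every axiom whose left-hand side only uses symbols already in the signature, and extend the signature with the symbols on its right-hand side. The triple encoding of axioms is an assumption of the sketch.

```python
def extract_module(axioms, signature):
    """axioms: iterable of (lhs_symbols, rhs_symbols, axiom) triples, where
       lhs_symbols/rhs_symbols are frozensets of concept and role names.
       Returns the axioms reachable from the given signature."""
    sig = set(signature)
    module, remaining = [], list(axioms)
    changed = True
    while changed:
        changed = False
        still = []
        for lhs, rhs, ax in remaining:
            if lhs <= sig:          # axiom is "triggered" by the current signature
                module.append(ax)
                sig |= rhs          # its right-hand side symbols become relevant
                changed = True
            else:
                still.append((lhs, rhs, ax))
        remaining = still
    return module
```

Reasoning about concepts over the chosen signature can then be restricted to the (often much smaller) extracted module.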

For a developer or user of a DL-based ontology, it is often quite hard to understand why a certain consequence computed by the reasoner actually follows from the knowledge base. For example, in the DL version of the medical ontology SNOMED CT, the concept Amputation-of-Finger is classified as a subconcept of Amputation-of-Arm. Finding the six axioms that are responsible for this error [BaSu08] among the more than 350,000 concept definitions of SNOMED CT without support by an automated reasoning tool is not easy. Axiom pinpointing [ScCo03] has been introduced to help developers or users of DL-based ontologies understand the reasons why a certain consequence holds by computing minimal subsets of the knowledge base that have the consequence in question (called MinAs or explanations). There are two general approaches for computing MinAs: the black-box approach and the glass-box approach. The most naïve variant of the black-box approach considers all subsets of the ontology, and computes for each of them whether it still has the consequence or not. More sophisticated versions [KPHS07] use a variant of Reiter's [Reit87] hitting set tree algorithm to compute all MinAs. Instead of applying such a black-box approach to a large ontology, one can also first try to find a small and easily computed subset of the ontology that contains all MinAs, and then apply the black-box approach to this subset [BaSu08]. The main advantage of the black-box approach is that it can use existing highly optimized DL reasoners unchanged. However, it may be necessary to call the reasoner an exponential number of times. In contrast, the glass-box approach tries to find all MinAs by a single run of a modified reasoner.

7 <http://www.w3.org/TR/owl2-profiles/>.
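A deletion-based variant of the black-box approach can be sketched in a few lines. The reasoner is passed in as an opaque entails callback, mirroring the fact that the black-box method uses an existing reasoner unchanged; the rest of the encoding is an illustrative assumption.

```python
def one_mina(ontology, consequence, entails):
    """Compute one minimal axiom set (MinA) for a consequence by the
       deletion-based black-box method: try to drop each axiom in turn,
       keeping it only if the consequence would otherwise be lost.
       entails(axioms, consequence) is the unmodified reasoner."""
    assert entails(ontology, consequence), "consequence must follow initially"
    kept = list(ontology)
    for ax in list(kept):
        trial = [a for a in kept if a != ax]
        if entails(trial, consequence):   # ax is not needed for the consequence
            kept = trial
    return kept  # removing any remaining axiom loses the consequence
```

This yields a single MinA with one reasoner call per axiom; computing all MinAs requires the hitting-set-tree machinery of [KPHS07] on top of such a minimization step.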

Most of the glass-box pinpointing algorithms described in the DL literature (e.g., [ScCo03, PaSK05, LeMP06]) are obtained as extensions of tableau-based reasoning algorithms [BaSa01] for computing consequences from DL knowledge bases. To overcome the problem of having to design a new pinpointing extension for every tableau-based algorithm, the papers [BaPe07, BaPe09] introduce a general approach for extending tableau-based algorithms to pinpointing algorithms. This approach is based on a general notion of "tableau algorithm," which captures many of the known tableau-based algorithms for DLs and modal logics, but also other kinds of decision procedures, like the polynomial-time subsumption algorithm for the DL EL sketched above. Any such tableau algorithm can be extended to a pinpointing algorithm, which is correct in the sense that a terminating run of the algorithm computes all MinAs. Unfortunately, however, termination need not transfer from a given tableau to its pinpointing extension, and the approach only applies to tableau-based algorithms that terminate without requiring any cycle-checking mechanism (usually called "blocking" in the DL community). Though these problems can, in principle, be solved by restricting the general framework to so-called forest tableaux [BaPe09], this solution makes the definitions and proofs more complicated and less intuitive.

In [BaPe08], a different general approach for obtaining glass-box pinpointing algorithms is introduced, which also applies to DLs for which the termination of tableau-based algorithms requires the use of blocking. It is well known that automata working on infinite trees can often be used to construct worst-case optimal decision procedures for such DLs [BaTo01, CaGL02]. In this automata-based approach, the input inference problem is translated into a tree automaton, which is then tested for emptiness. Basically, pinpointing is then realized by transforming the tree automaton into a weighted tree automaton working on infinite trees, and computing the so-called behavior of this weighted automaton.

3 Conclusion

The DL research of the last 30 years has led, on the one hand, to highly expressive ontology languages, which can nevertheless be supported by practical reasoning tools. On the other hand, the recent development of light-weight DLs and specialized reasoning tools for them ensures that DL reasoning scales to large ontologies with hundreds of thousands of terminological axioms (like SNOMED CT) and, by using database technology, to much larger sets of instance data. In addition, novel inference methods such as modularization and pinpointing support building and maintaining high-quality ontologies.


References

[AbHV95] Serge Abiteboul, Richard Hull, and Victor Vianu. Foundations of Databases. Addison Wesley Publ. Co., Reading, Massachusetts, 1995.

[Baad90c] Franz Baader. Terminological cycles in KL-ONE-based knowledge representation languages. In Proc. of the 8th Nat. Conf. on Artificial Intelligence (AAAI'90), pages 621-626, Boston (MA, USA), 1990.

[BaBL05] Franz Baader, Sebastian Brandt, and Carsten Lutz. Pushing the EL envelope. In Leslie Pack Kaelbling and Alessandro Saffiotti, editors, Proc. of the 19th Int. Joint Conf. on Artificial Intelligence (IJCAI 2005), pages 364-369, Edinburgh (UK), 2005. Morgan Kaufmann, Los Altos.

[BaBL08] Franz Baader, Sebastian Brandt, and Carsten Lutz. Pushing the EL envelope further. In Kendall Clark and Peter F. Patel-Schneider, editors, Proceedings of the Fifth International Workshop on OWL: Experiences and Directions (OWLED'08), Karlsruhe, Germany, 2008.

[BCNMP03] Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2003.

[BFHN*94] Franz Baader, Enrico Franconi, Bernhard Hollunder, Bernhard Nebel, and Hans-Jürgen Profitlich. An empirical analysis of optimization techniques for terminological representation systems or: Making KRIS get a move on. Applied Artificial Intelligence. Special Issue on Knowledge Base Management, 4:109-132, 1994.

[BaKu98] Franz Baader and Ralf Küsters. Computing the least common subsumer and the most specific concept in the presence of cyclic ALN-concept descriptions. In Proc. of the 22nd German Annual Conf. on Artificial Intelligence (KI'98), volume 1504 of Lecture Notes in Computer Science, pages 129-140. Springer-Verlag, 1998.

[BaKu00] Franz Baader and Ralf Küsters. Matching in description logics with existential restrictions. In Proc. of the 7th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2000), pages 261-272, 2000.

[BaKu06] Franz Baader and Ralf Küsters. Nonstandard inferences in description logics: The story so far. In D.M. Gabbay, S.S. Goncharov, and M. Zakharyaschev, editors, Mathematical Problems from Applied Logic I, volume 4 of International Mathematical Series, pages 1-75. Springer-Verlag, 2006.

[BKBM99] Franz Baader, Ralf Küsters, Alex Borgida, and Deborah L. McGuinness. Matching in description logics. J. of Logic and Computation, 9(3):411-447, 1999.

[BaKM99] Franz Baader, Ralf Küsters, and Ralf Molitor. Computing least common subsumers in description logics with existential restrictions. In Proc. of the 16th Int. Joint Conf. on Artificial Intelligence (IJCAI'99), pages 96-101, 1999.

[BaLS05] Franz Baader, Carsten Lutz, and Boontawee Suntisrivaraporn. Is tractable reasoning in extensions of the description logic EL useful in practice? In Proceedings of the 2005 International Workshop on Methods for Modalities (M4M-05), 2005.

[BaMo09] Franz Baader and Barbara Morawska. Unification in the description logic EL. In Ralf Treinen, editor, Proc. of the 20th Int. Conf. on Rewriting Techniques and Applications (RTA 2009), volume 5595 of Lecture Notes in Computer Science, pages 350-364. Springer-Verlag, 2009.

[BaNa00] Franz Baader and Paliath Narendran. Unification of concept terms in description logics. J. of Symbolic Computation, 31(3):277-305, 2001.

[BaPe07] Franz Baader and Rafael Peñaloza. Axiom pinpointing in general tableaux. In Proc. of the Int. Conf. on Analytic Tableaux and Related Methods (TABLEAUX 2007), volume 4548 of Lecture Notes in Artificial Intelligence, pages 11-27. Springer-Verlag, 2007.

[BaPe08] Franz Baader and Rafael Peñaloza. Automata-based axiom pinpointing. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, Proc. of the Int. Joint Conf. on Automated Reasoning (IJCAR 2008), volume 5195 of Lecture Notes in Artificial Intelligence, pages 226-241. Springer-Verlag, 2008.

[BaPe09] Franz Baader and Rafael Peñaloza. Axiom pinpointing in general tableaux. Journal of Logic and Computation, 2009. To appear.

[BaSa01] Franz Baader and Ulrike Sattler. An overview of tableau algorithms for description logics. Studia Logica, 69:5-40, 2001.

[BaST07] Franz Baader, Baris Sertkaya, and Anni-Yasmin Turhan. Computing the least common subsumer w.r.t. a background terminology. J. of Applied Logic, 5(3):392-420, 2007.

[BaSu08] Franz Baader and Boontawee Suntisrivaraporn. Debugging SNOMED CT using axiom pinpointing in the description logic EL+. In Proceedings of the International Conference on Representing and Sharing Knowledge Using SNOMED (KR-MED'08), Phoenix, Arizona, 2008.

[BaTo01] Franz Baader and Stephan Tobies. The inverse method implements the automata approach for modal satisfiability. In Proc. of the Int. Joint Conf. on Automated Reasoning (IJCAR 2001), volume 2083 of Lecture Notes in Artificial Intelligence, pages 92-106. Springer-Verlag, 2001.

[BrLe85] Ronald J. Brachman and Hector J. Levesque, editors. Readings in Knowledge Representation. Morgan Kaufmann, Los Altos, 1985.

[BrSc85] Ronald J. Brachman and James G. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9(2):171-216, 1985.

[Bran04] Sebastian Brandt. Polynomial time reasoning in a description logic with existential restrictions, GCI axioms, and what else? In Ramon López de Mántaras and Lorenza Saitta, editors, Proc. of the 16th Eur. Conf. on Artificial Intelligence (ECAI 2004), pages 298-302, 2004.

[CGL+05] Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, and Riccardo Rosati. DL-Lite: Tractable description logics for ontologies. In Manuela M. Veloso and Subbarao Kambhampati, editors, Proc. of the 20th Nat. Conf. on Artificial Intelligence (AAAI 2005), pages 602-607. AAAI Press/The MIT Press, 2005.

[CDL+-KR06] Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, and Riccardo Rosati. Data complexity of query answering in description logics. In


Patrick Doherty, John Mylopoulos, and Christopher A. Welty, editors, Proc. of the 10th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2006), pages 260-270. AAAI Press/The MIT Press, 2006.

[CGL+07] Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, and Riccardo Rosati. Tractable reasoning and efficient query answering in description logics: The DL-Lite family. J. of Automated Reasoning, 39(3):385-429, 2007.

[CaGL02] Diego Calvanese, Giuseppe De Giacomo, and Maurizio Lenzerini. 2ATAs make DLs easy. In Proc. of the 2002 Description Logic Workshop (DL 2002), pages 107-118. CEUR Electronic Workshop Proceedings, <http://ceur-ws.org/Vol-53/>, 2002.

[DCNS09] Francesco M. Donini, Simona Colucci, Tommaso Di Noia, and Eugenio Di Sciascio. A tableaux-based method for computing least common subsumers for expressive description logics. In Craig Boutilier, editor, Proc. of the 21st Int. Joint Conf. on Artificial Intelligence (IJCAI 2009), pages 739-745, 2009.

[Fitt72] Melvin Fitting. Tableau methods of proof for modal logics. Notre Dame J. of Formal Logic, 13(2):237-247, 1972.

[GHLS07] Birte Glimm, Ian Horrocks, Carsten Lutz, and Ulrike Sattler. Conjunctive query answering for the description logic SHIQ. In Manuela M. Veloso, editor, Proc. of the 20th Int. Joint Conf. on Artificial Intelligence (IJCAI 2007), pages 399-404, Hyderabad, India, 2007.

[HaMo01b] Volker Haarslev and Ralf Möller. RACER system description. In Proc. of the Int. Joint Conf. on Automated Reasoning (IJCAR 2001), volume 2083 of Lecture Notes in Artificial Intelligence, pages 701-706. Springer-Verlag, 2001.

[HaMo08] Volker Haarslev and Ralf Möller. On the scalability of description logic instance retrieval. J. of Automated Reasoning, 41(2):99-142, 2008.

[Haye79] Patrick J. Hayes. The logic of frames. In D. Metzing, editor, Frame Conceptions and Text Understanding, pages 46-61. Walter de Gruyter and Co., 1979. Republished in [BrLe85].

[Horr03] Ian Horrocks. Implementation and optimization techniques. In [BCNMP03], pages 306-346. 2003.

[HoKS06] Ian Horrocks, Oliver Kutz, and Ulrike Sattler. The even more irresistible SROIQ. In Patrick Doherty, John Mylopoulos, and Christopher A. Welty, editors, Proc. of the 10th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2006), pages 57-67, Lake District, UK, 2006. AAAI Press/The MIT Press.

[HoPa04] Ian Horrocks and Peter F. Patel-Schneider. Reducing OWL entailment to description logic satisfiability. J. Web Sem., 1(4):345-357, 2004.

[HoPH03] Ian Horrocks, Peter F. Patel-Schneider, and Frank van Harmelen. From SHIQ and RDF to OWL: The making of a web ontology language. Journal of Web Semantics, 1(1):7-26, 2003.

[HoSa05] Ian Horrocks and Ulrike Sattler. A tableaux decision procedure for SHOIQ. In Proc. of the 19th Int. Joint Conf. on Artificial Intelligence (IJCAI 2005), Edinburgh (UK), 2005. Morgan Kaufmann, Los Altos.

[KPHS07] Aditya Kalyanpur, Bijan Parsia, Matthew Horridge, and Evren Sirin. Finding all justifications of OWL DL entailments. In Proceedings of the 6th International Semantic Web Conference and 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, volume 4825 of Lecture Notes in Computer Science, pages 267-280, Busan, Korea, 2007. Springer-Verlag.

[Kaza08] Yevgeny Kazakov. RIQ and SROIQ are harder than SHOIQ. In Gerhard Brewka and Jérôme Lang, editors, Proc. of the 11th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2008), pages 274-284. AAAI Press, 2008.

[KWZ-KR-08] Roman Kontchakov, Frank Wolter, and Michael Zakharyaschev. Can you tell the difference between DL-Lite ontologies? In Gerhard Brewka and Jérôme Lang, editors, Proc. of the 11th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2008), pages 285-295. Morgan Kaufmann, Los Altos, 2008.

[LeMP06] Kevin Lee, Thomas Meyer, and Jeff Z. Pan. Computing maximally satisfiable terminologies for the description logic ALC with GCIs. In Proc. of the 2006 Description Logic Workshop (DL 2006), volume 189 of CEUR Electronic Workshop Proceedings, 2006.

[LeBr87] Hector J. Levesque and Ron J. Brachman. Expressiveness and tractability in knowledge representation and reasoning. Computational Intelligence, 3:78-93, 1987.

[Lutz08] Carsten Lutz. The complexity of conjunctive query answering in expressive description logics. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, Proc. of the Int. Joint Conf. on Automated Reasoning (IJCAR 2008), Lecture Notes in Artificial Intelligence, pages 179-193. Springer-Verlag, 2008.

[LuMi07] Carsten Lutz and Maja Milicic. A tableau algorithm for description logics with concrete domains and general TBoxes. J. of Automated Reasoning, 38(1-3):227-259, 2007.

[LuToWo-IJCAI-09] Carsten Lutz, David Toman, and Frank Wolter. Conjunctive query answering in the description logic EL using a relational database system. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI'09). AAAI Press, 2009. To appear.

[LuWW07] Carsten Lutz, Dirk Walther, and Frank Wolter. Conservative extensions in expressive description logics. In Manuela M. Veloso, editor, Proc. of the 20th Int. Joint Conf. on Artificial Intelligence (IJCAI 2007), pages 453-458, Hyderabad, India, 2007.

[LuWo07] Carsten Lutz and Frank Wolter. Conservative extensions in the lightweight description logic EL. In Frank Pfenning, editor, Proc. of the 21st Int. Conf. on Automated Deduction (CADE 2007), volume 4603 of Lecture Notes in Computer Science, pages 84-99, Bremen, Germany, 2007. Springer-Verlag.

[MaDW91] E. Mays, R. Dionne, and R. Weida. K-REP system overview. SIGART Bull., 2(3), 1991.

[Mins81] Marvin Minsky. A framework for representing knowledge. In John Haugeland, editor, Mind Design. The MIT Press, 1981. A longer version appeared in The Psychology of Computer Vision (1975). Republished in [BrLe85].

Page 121: UPGRADE, Vol. XII, issue no. 5, December 2011 - cepis.orgcepis.org/upgrade/media/full_2011_51.pdf · 99 From inforeview (JISA, Serbia) Information Society Steve Jobs — Dragana Stojkovic

120 CEPIS UPGRADE Vol. XII, No. 5, December 2011 © CEPIS

UPENET

Farewell Edition

published in [BrLe85].[Nebe88] Bernhard Nebel. Computational

complexity of terminological rea-soning in BACK. Artificial Intelli-gence, 34(3):371-383, 1988.

[Nebe90] Bernhard Nebel. Terminologicalreasoning is inherently intractable.Artificial Intelligence, 43:235-249,1990.

[OrCE08] Magdalena Ortiz, DiegoCalvanese, and Thomas Eiter. Datacomplexity of query answering inexpressive description logics viatableaux. J. of Automated Reason-ing, 41(1):61-98, 2008.

[PaSK05] Bijan Parsia, Evren Sirin, andAditya Kalyanpur. DebuggingOWL ontologies. In Allan Ellis andTatsuya Hagino, editors, Proc. ofthe 14th International Conferenceon World Wide Web (WWW’05),pages 633-640. ACM, 2005.

[Pate84] Peter F. Patel-Schneider. Small canbe beautiful in knowledge repre-sentation. In Proc. of the IEEEWorkshop on Knowledge-BasedSystems, 1984. An extended ver-sion appeared as Fairchild Tech.Rep. 660 and FLAIR Tech. Rep.37, October 1984.

[Pelt91] Christof Peltason. The BACK sys-tem: an overview. SIGART Bull.,2(3):114-119, 1991.

[Quil67] M. Ross Quillian. Word concepts:A theory and simulation of somebasic capabilities. Behavioral Sci-ence, 12:410-430, 1967. Repub-lished in [BrLe85].

[Reit87] R. Reiter. A theory of diagnosisfrom first principles. Artificial In-telligence, 32(1):57-95, 1987.

[SaSZ09] Ulrike Sattler, Thomas Schneider,and Michael Zakharyaschev.Which kind of module should I ex-tract? In Proc. of the 2008 Descrip-tion Logic Workshop (DL 2009),volume 477 of CEUR WorkshopProceedings, 2009.

[ScCo03] Stefan Schlobach and Ronald Cor-net. Non-standard reasoning serv-ices for the debugging of descrip-tion logic terminologies. In GeorgGottlob and Toby Walsh, editors,Proc. of the 18th Int. Joint Conf.on Artificial Intelligence (IJCAI2003), pages 355-362, Acapulco,Mexico, 2003. Morgan Kaufmann,Los Altos.

[Schm89] Manfred Schmidt-Schauß.Subsumption in KL-ONE is unde-cidable. In Ron J. Brachman, Hec-tor J. Levesque, and Ray Reiter,editors, Proc. of the 1st Int. Conf.on the Principles of Knowledge

Representation and Reasoning(KR’89), pages 421-431. MorganKaufmann, Los Altos, 1989.

[ScSm91] Manfred Schmidt-Schauß and GertSmolka. Attributive concept de-scriptions with complements. Ar-tificial Intelligence, 48(1):1-26,1991.

[ScGC79] Len K. Schubert, Randy G. Goebel,and Nicola J. Cercone. The struc-ture and organization of a seman-tic net for comprehension and in-ference. In N. V. Findler, editor, As-sociative Networks: Representa-tion and Use of Knowledge byComputers, pages 121-175. Aca-demic Press, 1979.

[SiPa04] Evren Sirin and Bijan Parsia. Pel-let: An OWL DL reasoner. In Proc.of the 2004 Description LogicWorkshop (DL 2004), pages 212-213, 2004.

[Sunt08] Boontawee Suntisrivaraporn.Module extraction and incremen-tal classification: A pragmatic ap-proach for EL+ ontologies. In SeanBechhofer, Manfred Hauswirth,Joerg Ho mann, and ManolisKoubarakis, editors, Proceedingsof the 5th European Semantic WebConference (ESWC’08), volume5021 of Lecture Notes in Compu-ter Science, pages 230-244.Springer-Verlag, 2008.

[Meng09] Boontawee Suntisrivaraporn. Poly-nomial-Time Reasoning Supportfor Design and Maintenance ofLarge-Scale BiomedicalOntologies. PhD thesis, FakultätInformatik, TU Dresden, 2009.<http://lat.inf.tu-dresden.de/re-search/phd/#Sun-PhD-2008>.

[TSHo06] Dmitry Tsarkov and Ian Horrocks.Fact++ description logic reasoner:System description. In UlrichFurbach and Natarajan Shankar,editors, Proc. of the Int. Joint Conf.on Automated Reasoning (IJCAR2006), volume 4130 of LectureNotes in Artificial Intelligence,pages 292-297. Springer-Verlag,2006.

Page 122: UPGRADE, Vol. XII, issue no. 5, December 2011 - cepis.orgcepis.org/upgrade/media/full_2011_51.pdf · 99 From inforeview (JISA, Serbia) Information Society Steve Jobs — Dragana Stojkovic

CEPIS UPGRADE Vol. XII, No. 5, December 2011 121© CEPIS

UPENET

Farewell Edition

Keywords: Computer Science, Digital Literacy, Schools.

With so many organisations depending on computing, computer science itself should be viewed as a fundamental discipline like Maths and English. Engineering- and science-based industries require computers to simulate, calculate, emulate, model and more, yet there is a shortage in the UK of people with the requisite abilities to run these systems. And the problem begins in school.

The Next Gen report shows that 40 per cent of teachers conflate ICT with computing, not appreciating that ICT is learning to use applications but computing is learning how to make them. This is a fundamental difference that can be compared to that between reading and writing.

Children certainly need to learn about digital literacy, and BCS already addresses some of these issues with qualifications like Digital Creator, ECDL, Digital Skills, eType and the like. But teaching computing as a discipline in schools will allow children to express creativity.

Author

Brian Runciman MBCS has been at BCS (British Computer Society, United Kingdom) since 2001, starting out as a writer, moving on to being Managing Editor and now acting as Publisher for editorial content. He tweets via @BrianRunciman. <[email protected]>

Computer Science

The Future of Computer Science in Schools

Brian Runciman

© 2011 The British Computer Society

This paper was first published by ITNOW (Volume 53, num. 6, Winter 2011, pp. 10-11). ITNOW, a UPENET partner, is the member magazine for the British Computer Society (BCS), a CEPIS member. It is published, in English, by Oxford University Press on behalf of the BCS, <http://www.bcs.org/>. The Winter 2011 issue of ITNOW can be accessed at <http://itnow.oxfordjournals.org/content/53/6.toc>.

We all know that digital literacy is vital in the modern world, but are we making sure our next generation of researchers and academics, the innovators who will produce the UK's valuable digital intellectual property of the future, are being looked after too?

Disciplines learnt in even older computing courses apply because these are based on principles. It's the skills area, such as specific programming languages, that changes. Of course, practical work is still needed to pick up practical techniques, but an understanding of the discipline can take children right through from primary school to a university computer science course.

What about the teachers and the schools?

Unfortunately teaching computing seems to have gone backwards in schools. In the 1980s children using BBC Micros had the opportunity to learn programming and wanted to create something using digital building blocks. But at a certain point that disappeared and schools took to teaching ICT – how to use word processors, spreadsheets and the like. Whilst these skills are useful, you can't forge a career in a creative industry with them.

The qualification network has been set up in such a way that the main motivation for schools is to climb the league tables, so they go for ICT qualifications that are based around using software. The teachers available have often done a great job teaching ICT, but there aren't enough of them. So those two things together have actually lowered standards and don't create the environment where head teachers want to teach computer science-related syllabuses.

Computer science itself should be viewed as a fundamental discipline like Maths and English

This means fighting the ethos of many head teachers that they go for a qualification because they can get a very high pass rate in it, rather than getting children involved in a more demanding qualification that would lead to our next generation of innovators.

This also affects the motivation of the teachers who could teach computer science-related areas, because ICT teaching has been seen as something that can be done by anyone who has those basic IT skills.

Strange approaches

Strangely, the new English Baccalaureate doesn't have computer science, or even ICT, included in it. Even art isn't included, so this could have knock-on effects in, for example, games development, which is a coming together of art and technology.

A way of thinking of this is seeing the teaching of computing as three-layered: firstly the basic digital literacy, which most people come out of the womb with now; then the next level of the intelligent user, perhaps in architecture or the like; then there is the top layer: those who are specialists in computing and are creating new technologies and applications. They are the ones who keep us at the forefront of the creative economy.

An interesting example of skewed viewpoints was demonstrated recently when Michael Gove spoke of Mark Zuckerberg, surely an excellent computer science role model as founder of Facebook, as having studied Latin in school. Gove didn't mention that he had also studied computer science, surely much more relevant. This shows the traditional emphasis on the classics, but computer science should also be part of the curriculum.

"For me computer science is the new Latin," said Ian Livingstone at this point in the discussion.

Another example of the difficulties faced in changing approaches is shown in games development as promoted by universities. There are 144 games courses at universities, but only 10 of those have been approved as fit for purpose by Skillset. Most are really updated versions of media studies, showing context and impact, but not teaching how to create games.

The codes used by universities to grade courses are also viewed as not really doing the job. The universities could help more by labeling courses more accurately.

How do we get young people excited about computer science in schools?

A drawback of the current curriculums is that a child could be taught the use of Excel spreadsheets three times over their time at school, when most could probably master it in a week. It's no wonder many of them find ICT so boring.

Parents, guardians and teachers need to be aware of the opportunities computer science can offer. What IT can do in the creative areas is exciting for children. For children in secondary education, seeing the application of computer science in, for example, robotics, such as Lego Mindstorms, can show them that through a computer you can build and animate an entire world.

If they see the creative potential while they are young they will stay engaged later.

There are also exciting possibilities in the games industry – despite the bad press, 97 per cent of what is produced is family friendly – and very innovative. It's true in the financial industry too, which uses advanced modeling techniques. Many physics PhDs wind up in the City of London doing computer science activities. Computer modeling in engineering is vibrant; pharmaceutical companies are dependent on modeling too. There are huge opportunities for those with programming talent.

Teaching computing as a discipline in schools will allow children to express creativity

The new English Baccalaureate doesn't have computer science, or even ICT, included in it

We could also make better use of role models. If you stopped the average child in the street they would be hard pushed to name an IT role model. Possibly they would think of Sir Tim Berners-Lee, but we need to champion these more too.

What progress is being made and what can be done?

"This is where BCS and the Computing at Schools group have a very important role, because they can bring together the academic community, grow it and help others get involved," commented Andrew Herbert.

This needs to include a partnership between the universities and schools. Until recently the government was happy that there were plenty of ICT qualifications and a curriculum in place, but with the national curriculum review, it seems that the DfE now recognises not only the importance of digital literacy, but also the core academic discipline of computing.

The UK needs to take this seriously when in China there are a million graduates with computer science, engineering and software engineering degrees. Some of the best intellectual property in technology is coming out of Israel, where computer science is taught in schools nationally.

Industry can help too, perhaps encouraging the young to program on new mobile platforms through competitions and the like. This is being done, but more is always helpful.

What next?

The panel agreed that computer science should be an option in the science part of STEM and that education needs to be reformed in schools and universities. Computer science needs to be seen as an essential discipline and be on the school curriculum from the early stages.

Parents, guardians and teachers need to be aware of the opportunities computer science can offer

Bill Mitchell concluded: "Every child should be experiencing computing throughout their school life, starting at primary school, through to age 16, even 18."

Note: This article is based on a video round table discussion produced by BCS, The Chartered Institute for IT, on behalf of the BCS Academy of Computing. It was attended by BCS Academy of Computing Director Bill Mitchell; Andrew Herbert, former Chairman of Microsoft Research Europe and a key player in setting up the Computing at Schools Working Group; and Ian Livingstone of EIDOS, coauthor of the recent NESTA report, 'Next Gen'. Brian Runciman MBCS chaired.

The full video is at <http://www.bcs.org/video>.

The NESTA report is at <http://www.nesta.org.uk/publications/assets/features/next_gen>.


IT for Health

Neuroscience and ICT: Current and Future Scenarios

Gianluca Zaffiro and Fabio Babiloni

© Mondo Digitale, 2011

This paper was first published, in its original Italian version, under the title "Neuroscienze e ICT: Una Panoramica", by Mondo Digitale (issue no. 2-3, June-September 2011, pp. 5-14, available at <http://www.mondodigitale.net/>). Mondo Digitale, a founding member of UPENET, is the digital journal of the CEPIS Italian society AICA (Associazione Italiana per l'Informatica ed il Calcolo Automatico, <http://www.aicanet.it/>).

In the last couple of decades the study of the human brain has made great advancements thanks to powerful neuroimaging devices such as high resolution electroencephalography (hrEEG) or functional magnetic resonance imaging (fMRI). Such advancements have increased our understanding of basic cerebral mechanisms related to memory and sensory processes. Recently, neuroscience results have attracted the attention of several researchers from the Information and Communication Technologies (ICT) domain, with the aim of generating new devices and services for disabled as well as non-disabled people. This paper briefly reviews the applications of Neuroscience in the ICT domain, based on the research currently funded by the European Union in this field.

Authors

Fabio Babiloni holds a PhD in Computational Engineering from the Helsinki University of Technology, Finland. He is currently Professor of Physiology at the Faculty of Medicine of the Università di Roma La Sapienza, Italy. Professor Babiloni is the author of more than 185 papers on bioengineering and neurophysiological topics in international peer-reviewed scientific journals, and more than 250 contributions to conferences and book chapters. His total impact factor is more than 350 and his H-index is 37 (Google Scholar). His current interests are in the field of the estimation of cortical connectivity from EEG data and the area of BCI. Professor Babiloni is currently a grant reviewer for the National Science Foundation (NSF) USA, the European Union through the FP6 and FP7 research programs, and other European agencies. He is an Associate Editor of four scientific journals: "IEEE Trans. on Neural Systems and Rehabilitation Engineering", "Frontiers in Neuroprosthesis", "International Journal of Bioelectromagnetism" and "Computational Intelligence and Neuroscience". <[email protected]>

Gianluca Zaffiro graduated in Electronic Engineering from the Politecnico di Torino, Italy, and joined the Italian company Telecom in 1994. He has participated in international research projects funded by the EU and MIUR, occupying various positions of responsibility, and has taken part in IEC standardization activities in telecommunications. Currently he holds a position as senior strategy advisor in the Telecom Italia Future Centre, where he is in charge of conducting analysis of technological innovation and defines scenarios for the evolution of ICT and its impact on telecommunications services. He is the author of numerous articles in journals and conferences. <[email protected]>

This paper reviews briefly the applications of Neuroscience in the ICT domain

1 New Brain Imaging Tools allow the Study of Cerebral Activity in vivo in Human Beings

In the history of science the development of new analysis tools has often allowed the exploration of new scientific horizons and the overcoming of old boundaries of knowledge. In the last 20 years scientific research has generated a set of powerful tools to measure and analyze cerebral activity in human beings in a completely "non-invasive" way, meaning that they can be employed to gather data in awake subjects without causing them any harm. Such tools provide images of the cerebral activity of a subject while he or she is performing a given task; these can then be presented by means of colors overlaid on real images of the cerebral structure. In this way neuroscientists can observe, as on a geographic map, which cerebral areas are more active (more colored) during a particular experimental task. High resolution electroencephalography (hrEEG) is a brain imaging tool that gathers the cerebral activity of human beings "in vivo" by measuring the electrical potential on the head surface [1, 2]. The hrEEG returns images of the cerebral activity with a high temporal resolution (a millisecond or less) and a moderate spatial resolution (on the order of fractions of centimeters). Figure 1 presents images of the cerebral activity some milliseconds after a sensorial stimulation of the right wrist of a healthy subject. The tridimensional head model, on the left side of the picture, is employed for the estimation of the cerebral activity. The cerebral cortex, the dura mater (the meningeal membrane that envelops the brain), the skull and the head surface are represented. The spheres show the position of the electrodes employed for the recording of the hrEEG. In the upper row of the same picture we can observe the sequence of the distribution of the cerebral activity during an electrical stimulation of the wrist, coded with a color scale ranging from purple to red. The second row presents the cortical activity, related to the same temporal instants represented in the previous row, that is to say the activity of the superficial part of the brain (the cortex), which plays a key role in complex mental mechanisms such as memory, concentration, thought and language.

In the last decades, the use of modern brain imaging tools has made it possible to clarify the main cerebral structures involved in the cognitive and motor processes of the human being. These techniques have highlighted the key role of particular cerebral areas, such as the ones located just behind the forehead and near the eye sockets (the prefrontal and orbitofrontal areas), in the planning and generation of voluntary actions, as well as in the short and medium term memorization of concepts and images [3]. In recent years, "signs" of the cerebral activity related to variations in memorization, attention and emotion have been measured and recognized in tasks ever more similar to everyday life conditions.

Figure 1: Images of the Cerebral Activity some Milliseconds after a Sensorial Stimulation.

There are tools that provide images of the brain cerebral activity of a subject while s/he is performing a given task

2 Brain-Computer Interfaces' Working Principle

In the last years researchers have observed, by means of hrEEG techniques, that in human beings the act of evoking motor activities occurs in the same cerebral areas related to the control of the real movement of the limbs. This important experimental evidence is at the basis of a technology, known as the "brain computer interface" (BCI), which aims at controlling electronic and mechanical devices solely by means of the modulation of people's cerebral activity. Figure 2 presents the scheme of a typical BCI system: on the left side a user is represented who, with his/her own mental effort, produces a change in the electrical brain activity which can be detected by means of recording devices and analysis of the EEG signals. If such activity is generated periodically, an automatic system can recognize the generation of such mental states by means of proper classification routines. The system can then generate actions in the outside world and give feedback to the user.

Figure 2: Logical Scheme of a BCI System.
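The closed loop of Figure 2 (record the EEG, recognize the mental state with a classification routine, act in the outside world, and feed the result back to the user) can be sketched in a few lines of Python. This is purely an illustrative sketch: the acquisition, classifier and actuation functions below are invented stand-ins, not components of any system described in the article.

```python
# Minimal sketch of the BCI loop in Figure 2: acquire a window of EEG,
# classify the mental state, act in the outside world, report feedback.
# Every function here is an invented stand-in, not a real BCI component.

def acquire_eeg_window(n_samples=256):
    """Stub acquisition: one second of flat 'EEG' at 256 Hz."""
    return [0.0] * n_samples

def classify(window):
    """Stub classifier: call any high-energy window a motor evocation."""
    energy = sum(x * x for x in window)
    return "movement_evoked" if energy > 1.0 else "rest"

def actuate(state):
    """Map the recognized state to an action in the world."""
    return {"movement_evoked": "move cursor", "rest": "hold"}[state]

def bci_step():
    """One pass through the loop; the returned pair is the user feedback."""
    window = acquire_eeg_window()
    state = classify(window)
    return state, actuate(state)

print(bci_step())  # ('rest', 'hold')
```

In a real system the acquisition stub would be replaced by an EEG amplifier driver and the classifier by one trained on the user's own signals, but the control flow stays this simple.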

In particular, it can be observed experimentally that a subject can learn to autonomously modify the frequency pattern of his/her own EEG signals, without the need to resort to external stimuli. The so-called mu rhythm, a particular EEG wave, can be recorded from the scalp by means of surface electrodes located near the top of the head and towards the back (the central-parietal areas). This rhythm is known to undergo a strong reduction in its amplitude of oscillation (around 8-12 Hz) during limb movements, a phenomenon known in the literature as desynchronization of the alpha rhythm. Through training, a subject can learn to achieve such a desynchronization of the EEG rhythm in the absence of any visible movement, simply by evoking the movement of the same limb. In this way the user can achieve voluntary control of a component of his/her own cerebral activity, which can be detected in a particular EEG frequency band (8-12 Hz), preferentially on electrodes over particular cortical areas (the sensory-motor areas). As already explained, the simple evocation of motor acts generates patterns of cerebral activity which are basically stable and repeatable in time whenever the subject performs such an evocation [4, 5].

It is neither obvious nor simple for automatic systems to recognize voluntary modifications of the EEG trace with error rates low enough to safely drive mechanical and electronic devices. The main difficulties in recognizing the induced potential modifications on the scalp are of a manifold nature. First, a proper learning technique is required to let the subject control a specific pattern of his/her own EEG. Such a technique requires appropriate instrumentation that analyzes the EEG signals in real time and sends instantaneous feedback to the subject; a proper methodology, so that the subject is not frustrated by the temporary failures that are common during training sessions; and, finally, proper knowledge of the training software, so that the operator can efficiently adjust specific BCI parameters to facilitate control for each subject. The second difficulty in recognizing mental activity by EEG analysis comes from the low signal-to-noise ratio which is a typical feature of the EEG itself: at rest this signal is characterized by an oscillatory behavior which normally makes the variation of the mu-rhythm amplitude difficult to detect. To address this issue properly, specific signal processing techniques must be adopted to extract the most relevant EEG features, employing adequate automatic classification routines, known as classifiers. Just as fingerprints are compared against a police database to recognize people, the EEG features are compared with those obtained from the subject during the training period. The extraction of the EEG features is often done by means of an estimate of the power spectral density of the signal itself in a frequency range of 8-16 Hz. The recognition of these features as belonging to a specific mental state generated by the user during the training period is then performed by classifiers implementing mechanisms relying on artificial neural networks. Once such classifiers make a decision related to the user's motor evocation state, a control action is performed by an electronic or mechanical device in the surrounding environment. This physical action is therefore an answer to a purely mental event generated by the user, acquired by the hrEEG device and later classified by the BCI software.

Figure 3 shows how a user can directly move a cursor in two dimensions through the recognition of mental states. The command that triggers a movement to the right corresponds to evoking the right hand movement, and vice versa for the left hand; the evocation of right and left foot movements slides the cursor towards upper or lower positions. All the experiments have been performed at IRCCS Fondazione Santa Lucia in collaboration with the Physiology and Pharmacology Department of Università di Roma La Sapienza, Italy.

These techniques have highlighted the key role of particular cerebral areas in the planning and generation of voluntary actions

Figure 3: The subject generates a cortical activity recognizable by a computer by varying his/her own mental state. This phenomenon moves the cursor (red point on the screen) towards one of the possible targets (red bar on the edge of the screen).

Figure 4: Two subjects playing electronic ping-pong without moving a muscle, by means of a brain-computer interface installed at Fondazione Santa Lucia in Rome, Italy. (Panels run from A) to D).)

Figure 5: Several moments related to the control of some electronic devices in a room by using the modulation of the cerebral activity. (Experiments performed in the laboratories of Prof. Babiloni at Fondazione Santa Lucia, Rome, Italy.)
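The feature-extraction step described above (estimating the power spectral density in the 8-16 Hz band and detecting the drop in mu-rhythm amplitude during imagined movement) can be illustrated with synthetic signals. The sampling rate, signal shapes and detection threshold below are assumptions made for this sketch, not values from the experiments described in the article.

```python
# Illustrative feature extraction: periodogram power in the 8-16 Hz band,
# computed with a direct DFT (standard library only). The sampling rate,
# signal shapes and threshold are assumptions made for this sketch.
import cmath
import math
import random

FS = 256      # assumed sampling rate in Hz
N = 2 * FS    # two seconds of data

def band_power(signal, fs, lo=8.0, hi=16.0):
    """Sum periodogram power |X_k|^2 / N over DFT bins with lo <= f_k <= hi."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2 + 1):
        if lo <= k * fs / n <= hi:
            x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            total += abs(x) ** 2 / n
    return total

rng = random.Random(0)
noise = [0.1 * rng.gauss(0.0, 1.0) for _ in range(N)]

# A strong 10 Hz mu rhythm at rest; an attenuated one during imagined
# movement (the desynchronization described in the text).
rest = [math.sin(2 * math.pi * 10 * t / FS) + noise[t] for t in range(N)]
imagined = [0.2 * math.sin(2 * math.pi * 10 * t / FS) + noise[t] for t in range(N)]

# Toy detector: declare motor imagery when mu-band power drops well below
# the level measured at rest. Real systems use trained classifiers instead.
threshold = 0.5 * band_power(rest, FS)
print(band_power(imagined, FS) < threshold)  # True
```

In practice the power spectral density would be estimated with a windowed method such as Welch's, and the threshold replaced by a classifier calibrated per subject during the training period, but the band-power feature itself is this simple.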

In Figure 4, two subjects are shown playing ping-pong by means of a BCI. In this case the modulation of the mental activity translates into the movement of a cursor on the screen towards upper and lower positions for both subjects.
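The two-dimensional cursor control described for Figures 3 and 4 reduces to a mapping from recognized mental states to displacements. The state names and the assignment of each foot to up or down below are invented for illustration; the text only fixes hand evocations to horizontal moves and foot evocations to vertical ones.

```python
# Cursor control as a lookup table from classified mental states to 2D
# displacements. State names and the foot-to-up/down assignment are
# illustrative assumptions, not taken from the experiments.
MOVES = {
    "evoke_right_hand": (1, 0),   # cursor slides right
    "evoke_left_hand": (-1, 0),   # cursor slides left
    "evoke_right_foot": (0, 1),   # towards the upper target (assumed)
    "evoke_left_foot": (0, -1),   # towards the lower target (assumed)
}

def step_cursor(pos, mental_state):
    """Apply one classified mental state to the cursor position."""
    dx, dy = MOVES[mental_state]
    return (pos[0] + dx, pos[1] + dy)

pos = (0, 0)
for state in ("evoke_right_hand", "evoke_right_hand", "evoke_right_foot"):
    pos = step_cursor(pos, state)
print(pos)  # (2, 1)
```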

3 Examples of Use of BCI Technology in the ICT Domains of Robotics and Device Control

In Figure 5 some existing functionalities available for the control of simple electronic devices in a room are presented. In frames A and B it can be noticed how the subject switches a light on through the selection of an appropriate icon on the screen, just by using mental activity. In frames C and D of the same figure it can be observed how the same user can control the movement of a simple robot by using the modulation of cerebral activity. The possibility of controlling the robot, equipped with a camera on its head, allows a disabled user to show his/her presence in other parts of the home instead of having to use ubiquitous videocameras in every room, which would harm the privacy of the caregivers.

The 'brain computer interface' (BCI) aims at controlling electronic and mechanical devices only by means of the modulation of people's cerebral activity

In the area of assisted living there are companies working on the creation of a prototype of a motorized wheelchair controlled by using brain computer interface technology. A possible example of such a device is shown in Figure 7, as recently demonstrated by Toyota [6].

Figure 6 presents how a robotic device can be controlled by using cerebral activity, a capability that could also be used in contexts beyond tele-presence or domotics, for example in entertainment applications.


Figure 6: Robotic device (Aibo from Sony) driven by the modulation of EEG brainwaves as gathered by the EEG cap visible in some frames at the bottom right corner (frames are to be read from left to right and from top to bottom). These pictures show the possibility of sending mental commands via wireless technology to the Sony Aibo robot.

Figure 7: Motorized Wheelchair driven by BCI Technology, from Toyota.

4 About the Use of BCI Systems in the Near Future

BCI systems are currently studied to improve the quality of life of patients affected by severe motor disabilities, in order to provide them with some degree of autonomy of movement or decision. The next step for such systems is to make them available to non-disabled people in normal daily life situations. For instance, a videogame could be controlled just by thoughts (see Section 5), or messages could be sent, by modulating our mental activity, to other users who could be constantly connected to us. Such activity would be gathered by a few subtle, invisible sensors placed on the scalp, and the computational unit would be no larger than a watch and easily wearable. Although this kind of scenario seems taken from a science-fiction book or movie, such a description of our future comes from a European Union study about new lifestyles in 2030, the fruit of several days of debate between scientists from different disciplines, including ICT and health [7].

A subject can learn to autonomously modify the frequency pattern of his/her own EEG signals, without the need to resort to external stimuli

5 Neuroscience and BCIs are already used in the Entertainment and Cognitive Training Market Fields

Several examples of commercial solutions based on BCIs are reported in this section to demonstrate that these technologies are also present outside research labs. In most cases those solutions, such as gaming, healthcare,


Figure 10: Mattel’s Toy based on BCI.

Figure 9: XWave plays using cerebral brainwaves on the iPhone.

Figure 8: Game Controller developed by Emotiv, a California-based company.

coaching or training, are sold in the range of ten to a thousand dollars. Some companies address the controller market for PC videogames: for example, two American companies, Emotiv and OCZ Technologies, are providing BCIs that interpret both muscle movements and electrical cortical signals. Their devices consist of a headband or helmet equipped with special electrodes, which sell for $100-300. The Emotiv controller, shown in Figure 8, comes with a set of classic arcade games such as "brain-controlled" versions of Ping Pong and Tetris.

Other companies are offering these game controllers for smartphones or tablets, such as Xwave or MindSet. MindSet is a BCI developed by the American company NeuroSky which allows you to play BrainMaze with a Nokia N97, driving a ball with your mind through a labyrinth [8]. Xwave, a PLX Devices creation (Figure 9), is a device connected to your iPhone or iPad which allows you to compete in games or train your mind [9]. BCIs have also made inroads in toys: big companies like Mattel and Uncle Milton are producing two similar toys, respectively Mind Flex (Figure 10) and the Star Wars Science Force Trainer. These toys are available for about $100. Both of them are based on


a brainwave-controlled fan used to levitate a foam ball, which in turn has to be moved around to a given position. In the United States of America (USA) alone, the market for "cognitive training" has increased from 2 million dollars in 2005 to 80 million in 2009 [10]. Much attention has been attracted by neurofeedback, a technique aimed at training people to control their own brainwaves through their graphical display. This procedure is used both in


medicine, as a treatment for disorders such as ADD (Attention Deficit Disorder), and in the training of professionals, students, and athletes, so as to improve their concentration, attention and learning performance.

At CES 2011, the most important consumer electronics exhibition worldwide, a BCI-based prototype system for ADD treatment, BrainPal, was unveiled [11]. In Sweden, Mindball, a therapeutic toy used to train the brain to relax or concentrate, is available from ProductLine Interactive. Some top-level soccer teams like AC Milan and Chelsea have been undertaking neurofeedback training.

6 Applied Neuroscience can support Marketing and Advertising of Products and Services

Business people are looking into neuroscience in order to understand and predict human buying mechanisms. Neuromarketing is a discipline born from the combination of these two fields, Neuroscience and Marketing, aiming to know why a buyer chooses a product or service. Much attention is now directed to the analysis of advertising, notoriously one of the most effective stimuli for purchases.

Traditional marketing assesses people’s reactions to advertising stimuli with indirect techniques (observation, interviews and questionnaires), whilst Neuromarketing investigates the direct physiological response caused by advertising stimuli (the electrical response of the brain) and from this infers the cognitive implications (levels of attention, memory and pleasure).

Neuromarketing does not assess behaviors but tries to find out how advertising stimuli "leave their mark" on the brain. Two approaches based on cortical EEG measures have mainly been adopted in the market. One is the scientific approach, which starts from neuroscience evidence to infer the effectiveness of a given stimulus by measuring, with a high-density EEG (>60 electrodes), the cortical electrical activity in all areas of the brain. This approach can be simplified by limiting the area of the neural signal measurements to the frontal lobes,

Figure 11: An experimental setup related to an experiment of synthetic telepathy at the laboratory of the Fondazione Santa Lucia and the Università di Roma La Sapienza, Italy, led by Prof. Babiloni. The two subjects are exchanging simple information bits (the cursor is moving up or down) just by modulating their cerebral activity through a brain computer interface system linking them.


on which a minimum of 10 electrodes should be applied; these are sufficient to acquire indicators of the levels of attention, memory and emotion.
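As a rough illustration of how such indicators can be derived from band-limited EEG power, the sketch below computes a toy attention/engagement index as the ratio of frontal beta power to alpha-plus-theta power. The beta/(alpha+theta) ratio is a common heuristic in the neurofeedback literature, not the proprietary index of any company mentioned in this article, and all function names here are our own invention.

```python
import numpy as np

def band_power(eeg, sample_rate, lo_hz, hi_hz):
    """Mean spectral power of one EEG channel in the [lo_hz, hi_hz) band."""
    windowed = eeg * np.hanning(len(eeg))          # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sample_rate)
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return spectrum[mask].mean()

def attention_index(frontal_eeg, sample_rate):
    """Toy engagement heuristic: frontal beta relative to alpha + theta."""
    theta = band_power(frontal_eeg, sample_rate, 4, 8)
    alpha = band_power(frontal_eeg, sample_rate, 8, 13)
    beta = band_power(frontal_eeg, sample_rate, 13, 30)
    return beta / (alpha + theta)
```

A commercial system would add artifact rejection, per-subject baselining and calibration on top of this raw ratio.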

The obvious advantage of this approach is that the results can be directly related to scientific evidence, but there are limits to the practicality and scalability of the test, since the measurement devices required are often uncomfortable to wear and time-consuming in terms of the subject’s preparation.

The other approach is the heuristic one, whose strength lies in the use of proprietary EEG equipment with a reduced number of electrodes (it could be just one electrode centrally positioned on top of the head, or two on the frontal lobes) with which the parameters of interest in neuromarketing are measured. The simplified arrangements encourage portability by reducing discomfort and preparation


Focus on Neuromarketing

In this section we report the application areas that neuromarketing companies are addressing today, together with some examples of studies promoted by well-known international companies.

Advertising: Neuromarketing is widely used to measure the effectiveness of print ads or videos (commercials) and to enhance them as part of communication campaigns. Case Studies: we report an analysis produced by BrainSigns, a spin-off of the "La Sapienza" University of Rome. Figure A in this box presents two diagrams obtained for a population of viewers watching a TV commercial. The spot featured a flirt scene (a girl’s message immediately interrupted) that literally "catalysed" the attention and memorization of the viewers, at the expense of attention to and memorization of the advertised brand and its message. The viewers liked the spot, but they did not get the intended message from it. As a second example, Coca-Cola commissioned EmSense [13] to perform a study using neuromarketing techniques to choose, from several possibilities, the most effective commercial to air on television during the Superbowl, the final game of the USA National Football League. Finally, on Google’s behalf, NeuroFocus used neuromarketing techniques [14] to assess the impact on users of the introduction on YouTube of InVideo Ads, which are semitransparent banner ads superimposed on YouTube videos streamed over the Internet.

Multimedia: Neuromarketing can evaluate a movie trailer, an entire movie or a television show with the aim of understanding how the engagement level of the audience changes over time, and identify the points of a movie where, for example, there are high levels of suspense or surprise in the audience. Case Studies: 20th Century Fox commissioned Innerscope [15] to evaluate the movie trailers for the films "28 Weeks Later" and "Live Free or Die Hard". NBC commissioned Innerscope as well [15] to study viewers’ perception of advertising during the fast-forwarding of recorded TV content.

Ergonomics: Neuroscience can improve the design process of device interfaces and improve the user experience, assessing the cognitive workload required to learn how to use the device, and the engagement, satisfaction or stress levels generated by its use. Case study: in 2006 Microsoft [16] decided to apply EEG to experimentally investigate how to perform user task classification using a low-cost electroencephalograph.

Packaging: Neuromarketing can be used to obtain a more appealing package design so that, for example, a customer can recognize the product more easily on a supermarket shelf, choosing it from among others like it.

Videogames: Neuromarketing can evaluate players’ engagement, identify the most interesting features of games and optimize their details. Across all phases of a game, the difficulty level can be calibrated properly so that the game is challenging, but not excessively difficult. Case Study: EmSense conducted a study [17] on the "first person shooter" genre of videogames in which, during play, they evaluated the levels of positive emotion, engagement and cognitive activation of the players as a function of time.

Product Placement: Neuromarketing studies can support the identification of the best positioning of a product on a supermarket shelf, and the optimal placement of advertising for a product or a brand in a scene during a TV show.

Politics: Neuromarketing techniques can be applied to carry out studies in the political sphere, for example by measuring the reactions of voters to candidates at rallies and speeches. Case Study: during the election of the UK Prime Minister in 2010 [18], NeuroFocus conducted and published a study measuring prospective voters’ neurological reactions, highlighting the subconscious scores evoked by the candidates in a sample of subjects.

Figure A: Mean changes of attention (left) and memorization (right) of a given audience while watching a commercial. The higher the signal, the more active the processes of attention and memory toward the spot. (Courtesy BrainSigns Ltd.)


time, with the aim of making the testing process as close as possible to the actual experience of the subject. However, today it is not possible to compare the results obtained with the scientific literature.

Neuromarketing is extremely suitable for supporting the design of advertising spots: it makes it possible to increase the ability to stimulate attention and memory retention, and to place the advertisement in a manner consistent with the brand. In the TV spot post-creative phase, it is useful for measuring comparative efficacy and for selecting and optimizing existing spots, reducing their time format. Finally, in the spot programming phase, it allows the frequency within a given broadcasting timeframe to be optimized, checking in the lab how long subjects have to be exposed for the commercial to be memorized.

Today, most companies operating in neuromarketing are located in the USA, where they were founded in the last five years. Many of them employ devices for neurophysiological measures (EEG and sensors) developed in-house, while others adopt technological solutions from third parties (see the box "Focus on Neuromarketing").

7 What is going on in Research about ICT and Neuroscience

During the years 2007-2011 the European Union supported, with more than 30 million Euros, research projects linked to the use of BCI systems for the control of videogames, domestic appliances, and mechatronic prostheses for hands and limbs. In addition, EU funding has also been directed to the evaluation of the mental state of aircraft passengers during transoceanic flights, in order to provide them with onboard services in agreement with their emotional

state. Another interesting area of research in which the EU has supported scientific studies is the on-line monitoring of the cerebral workload of the drivers of public vehicles, such as aircraft or trains, as well as cars.

Recently, a line of research in the field of so-called "synthetic telepathy" has been developed in the USA, where the capability of two ordinary people to exchange information between them just by modulating their cerebral activity is being tested. This is made possible by using the concepts developed in the field of BCI. In particular, Figure 11 presents an experimental setup of "synthetic telepathy" developed at the joint laboratories of the Fondazione Santa Lucia and the Università di Roma La Sapienza, Italy. In the picture, two subjects are exchanging information about the position of an electronic cursor on the screen, which they are able to move by modulating their cerebral activity.

Although at this moment the transmission speed is limited to a few bits per minute, the proof of concept of such devices has already been demonstrated.

8 Conclusions
In this paper the main research streams involving both neuroscience and ICT have been described


briefly. There is increasing interest from the ICT area in the results offered by neuroscience, in terms of a new generation of ICT devices and tools "powered" by the ability to be guided by mental activity. Although the state of the art is still far from everyday technological implementations like those shown in modern science-fiction movies, there are thousands of researchers nowadays engaged in the area of brain computer interfaces, researching the next generation of electronic devices, while 10 years ago there were very few. As the eminent neuroscientist Martha Farah said recently [12], the question is not "if" but rather "when" and "how" our future will be shaped by neuroscience. When that time comes, it will be better to be ready to ride the "neuro-ICT revolution".

References

[1] Babiloni F., Babiloni C., Carducci F., Fattorini L., Onorati P., Urbano A., Spline Laplacian estimate of EEG potentials over a realistic magnetic resonance-constructed scalp surface model. Electroenceph. clin. Neurophysiol., 98(4):363-373, 1996.

[2] Nunez P., Neocortical Dynamics and Human EEG Rhythms, Oxford University Press, 1995.

[3] Damasio A. R., L’errore di Cartesio. Emozione, ragione e cervello umano, Adelphi, 1995.

[4] Wolpaw J. R., Birbaumer N., McFarland D. J., Pfurtscheller G., Vaughan T. M., Brain computer interfaces for communication and control. Clinical Neurophysiology, 113:767-791, 2002.

[5] Babiloni F., Cincotti F., Marciani M., Salinari S., Astolfi L., Aloise


F., De Vico Fallani F., Mattia D., On the use of brain-computer interfaces outside scientific laboratories: toward an application in domotic environments. Int. Rev. Neurobiol., 86:133-46, 2009.

[6] Toyota, <http://www.toyota.co.jp/en/news/09/0629_1.html>.

[7] COST (European Cooperation in Science and Technology), <http://www.cost.esf.org/events/foresight_2030_ccst-ict>.

[8] Engadget, <http://www.engadget.com/2010/01/18/nokia-n97s-brain-maze-requires-steady-hand-typical-mind-contro>.

[9] Plxwave, <http://www.plxwave.com>.

[10] e! Science News, <http://esciencenews.com/articles/200902/09study.questions.effectiveness.80.million.year.brain.exercise.products.industry>.

[11] I2R TechFest, <http://techfest.i2r.a-star.edu.sg/index.php?option=com_content&view=article&id=75&Itemid=56>.

[12] Farah M., Neuroethics: the practical and the philosophical. Trends in Cogn. Sciences, vol. 9, 2005.

[13] Adweek, <http://www.adweek.com/aw/content_display/news/media/e3i975331243e08d74c5b66f857ff12cfd5>.

[14] Neurofocus, <http://www.neurofocus.com/news/mediagoogle.html>.

[15] Boston.com, <http://www.boston.com/ae/tv/articles/2007/05/13/emote_control>.

[16] Lee J. C., Tan D. S., Using a Low-Cost Electroencephalograph for Task Classification in HCI Research. UIST, 2006.

[17] GGl.com, <http://wire.ggl.com/news/if-you-want-a-good-fps-use-close-combat>.

[18] PR Newswire, <http://www.prnewswire.com/news-releases/neurotips-for-each-candidate-in-the-final-48-hours-of-uk-prime-minister-campaign-be-aware-of-voters-subconscious-scores-for-strengthsweaknesses-be-wary-of-gender-splits-92777229.html>.


Keywords: Education, Katmus, Music Transcriptions, Open Source Software, Teaching Resources.

1 Introduction and Motivation
Just as books and texts capture thoughts and human speech, musical notation provides a written representation of a complex musical performance. While only approximately perfect, this notational system, in its modern incarnation, provides not only a representation of the full set of notes and their durations, but also key signatures, rhythm, dynamic range (volume), and a set of ornamental symbols for expressing suggested performance cues and articulations, using a complex system of symbols [1].

Within the scope of this paper, musical transcription can be defined as the act of listening to a melody (or polyphonic arrangement) and translating it into its corresponding musical notation, consisting of the set of notes and their durations with respect to the in-

Authors

Orlando García-Feal was born in Spain. He obtained an engineering degree in 2008 from the ESEI, Universidad de Vigo, Spain. Since 2008, he has worked at the Environmental Physics Laboratory (Universidad de Vigo), building, maintaining, and writing software for their large-scale cluster computing facility. He is presently pursuing his PhD degree in Computer Science. His research interests include the study of musical signal processing and cluster computing. <[email protected]>

Silvana Gómez-Meire holds a PhD from the Universidad de Vigo, Spain. She was born in Ourense, Spain, in 1972. She works as a full-time lecturer in the Computer Science Department of the Universidad de Vigo, collaborating as a researcher with the research group SING (New Generation Computer Systems) belonging to the Universidad de Vigo. Regarding her field of research, she has worked on topics related to audio signal analysis and music transcription soft-

IT for Music

Katmus: Specific Application to support Assisted Music Transcription

Orlando García-Feal, Silvana Gómez-Meire, and David Olivieri

© Novática, 2011

This paper will be published, in Spanish, by Novática. Novática <http://www.ati.es/novatica>, a founding member of UPENET, is a bimonthly journal published by the Spanish CEPIS society ATI (Asociación de Técnicos de Informática – Association of Computer Professionals).

In recent years, computers have become an essential part of music production. Thus, versatile music composition software which maps well to the underlying process of producing music is essential to professional and novice practitioners alike. The demand for computer music software covers the full spectrum of music production tasks, including software for synthesizers, notation editors, digital audio sequencers, automatic transcription, accompaniment, and educational use. Since different music composition tasks are quite diverse, there is no single application that is well suited to all application domains, and so each application has a particular focus. In this paper, we describe a novel software package, called Katmus, whose design philosophy accurately captures the specific manual process of transcribing complex musical passages from audio to musical scores. A novel concept, introduced within Katmus, is the synchronization between the audio waveform and the notation editor, intimately linking the time segments of the music recording to be transcribed to the measures of the sheet music score. Together with playback, frequency domain analysis of the input signal and a complete project management system for handling multiple scores per audio file, this system greatly aids the manual transcription process and represents a unique contribution to the present music software toolset.

ware development, although at present she is centred on the study of hybrid methods of Artificial Intelligence and their application to real problems. <[email protected]>

David Olivieri was born in the USA. He received his BSc, MSc and PhD degrees in Physics (1996) from the University of Massachusetts, Amherst (USA). From 1993-1996 he was a doctoral fellow at the Fermi National Accelerator Laboratory (Batavia, IL, USA) studying accelerator physics. From 1996-1999 he was a staff engineer at Digital Equipment Corporation for the Alpha Microprocessor product line. Since 1999, he has been an Associate Professor at the Universidad de Vigo, in the School of Computer Engineering (Spain). He has publications in pure and applied physics, computer science, audio signal processing, and bioinformatics. His present research interests focus on signal processing and applications in sound, images and videos. <[email protected]>


ferred time signature and rhythm [2]. Given this definition, Figure 1 shows a short audio signal waveform segment, where different measures with corresponding notes have been identified from the well-defined beats or rhythm extracted from the signal. This schematic mapping from the audio signal to the musical notation is referred to as musical transcription.

In the field of music analysis [3], the automatic transcription of monophonic melodies has been widely studied [4] and is essentially considered to be a solved problem. Although more sophisticated machine-learning-based methods may be applied, simple algorithms for extracting monophonic melodies based on peak-tracking techniques [5] have been shown to be quite effective. This success, however, does not extend to the general polyphonic music transcription problem [6]. Indeed, even for the case of a single polyphonic instrument such as the piano, automatic transcription methods still perform poorly. The more general polyphonic case, consisting of several different instruments (for example, in an orchestra) recorded in the same channel, is far beyond the capabilities of present transcription systems or, at best, success is limited to special cases.
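The peak-tracking idea behind monophonic transcription can be sketched in a few lines: each analysis frame is windowed, its spectrum computed, and the strongest spectral peak mapped to the nearest MIDI note. This is our own simplification for illustration, not the specific algorithm of [5], and the function names are invented.

```python
import numpy as np

def dominant_pitch(frame, sample_rate):
    """Return (freq_hz, midi_note) for the strongest spectral peak of one frame."""
    windowed = frame * np.hanning(len(frame))      # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak = int(np.argmax(spectrum[1:])) + 1        # skip the DC bin
    freq = freqs[peak]
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))  # Hz -> nearest MIDI note
    return freq, midi

def track_melody(signal, sample_rate, frame_len=4096, hop=2048):
    """Naive monophonic tracking: one pitch estimate per overlapping frame."""
    return [dominant_pitch(signal[i:i + frame_len], sample_rate)[1]
            for i in range(0, len(signal) - frame_len + 1, hop)]
```

Real systems add onset detection, octave-error correction and note segmentation on top of this raw per-frame estimate, which is precisely where the polyphonic case breaks down.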

Thus, while automatic transcription may help in certain situations, manual transcription remains the gold standard for music practitioners wishing to document their own performances or transcribe the performances of others. This

traditional manual transcription, however, is a time-intensive task that strongly depends on the training, musical knowledge, and experience of the person undertaking the process. Indeed, transcription consists of an iterative process: listening to (and normally repeating) short time segments of an audio recording, transcribing the notes in these segments, then moving on to the next short segment, and often returning to already transcribed sections in order to qualitatively evaluate the overall consistency. For those not possessing nearly perfect musical memory and pitch, this involves tedious and repetitive interaction with the input audio signal as well as with a music notation editor.

While not directly providing automatic transcription, several software tools exist whose purpose is to assist the task of transcription. Some of these tools, such as Noteedit [7], focus more upon providing a notational WYSIWYG editor and offer no tools for directly interacting with the audio signal while transcribing. Other systems, such as Transcribe [8], provide direct frequency analysis of segments of the time-domain audio signal, thereby indicating fundamental tones, yet offer no facilities for simultaneously writing the musical score.

Figure 1: Relationship between an Audio Segment and its corresponding Musical Notation.


Motivated by the shortcomings of presently available software in this domain, the work described in this paper grew out of the need to create a new software application that could aid the process of transcribing music from recorded digital audio, merging the strengths of editing software with those of audio signal analysis. The novelty of our software application, and a fundamental design criterion, is the interaction between the notation editor and the audio signal being analyzed, which we directly link in time. For the user, the present working measure in the note editor is highlighted in a different color on the rendered audio waveform to show this direct correspondence. Not only does this provide an intuitive user experience, but it really helps transcription, since the user always knows which parts of the audio signal correspond to parts that have been transcribed and which to parts that are yet to be transcribed.
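The time linkage between score measures and waveform segments can be sketched as follows, assuming a constant tempo and time signature (a simplification of ours; the paper does not show Katmus's internals, and these function names are invented):

```python
def measure_time_range(measure_idx, bpm, beats_per_measure):
    """Map a 0-based measure index to its (start_s, end_s) span of the recording,
    assuming constant tempo (bpm) and time signature."""
    measure_dur = beats_per_measure * 60.0 / bpm
    start = measure_idx * measure_dur
    return start, start + measure_dur

def measure_at_time(t_s, bpm, beats_per_measure):
    """Inverse mapping: which measure a playback position falls in."""
    return int(t_s // (beats_per_measure * 60.0 / bpm))
```

With such a correspondence, highlighting the editor's current measure on the waveform amounts to shading the sample range from start_s * sample_rate to end_s * sample_rate.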

2 State of the Art
As described in the previous section, other music transcription software focuses either on music score editing or on the implementation of different analysis tools, but not both together.



There are currently many software solutions, both commercial and open source, for music notation editing, and they are often presented in the context of a much larger and more encompassing computer music tool suite. However, software applications in the particular domain of automatic music transcription have had only modest success and lack many features needed to make them useful to the music practitioner. Since the number of computer music software application domains is large and beyond the scope of this paper, we provide a brief review of the software that we consider to be forerunners of Katmus for the specific purpose of aiding manual music transcription. Following an analysis of these applications, we provide a comparative table summarizing the features of each that are of interest for this task.

Sibelius [9] is a commercial music editing program that is both popular and easy to use. It supports playback of complete polyphonic arrangements by using backend instrument synthesis with standard sound fonts, and its basic functionality can be extended through the use of plugin libraries. The Sibelius suite provides a wide array of tools, useful for the experienced musician as well as the amateur. Moreover, several features are also useful for teaching, through functions that allow the creation of lessons. However, it is not directly designed for manual transcription and does not provide any facilities for frequency analysis or connection with an external audio file.

Finale [10] is another proprietary software tool which is quite complete and widely used by musicians. Like Sibelius, it does not provide the ability to interact with the time-domain signal and the notation editor at the same time, nor does it provide frequency analysis of the time-domain signal. It is exclusively geared towards the high-quality publication of sheet music scores.

Noteedit [7] is an open source application that provides score-editing functionality similar to the commercial counterparts just described, as well as saving and exporting MIDI files. The application is a hybrid between a full sequencer and a simple notation editor, in that it requires a real-time audio server (Jack on Linux) to provide a patchbay input/output for MIDI-based instruments, which can interface directly with the notation editor. However, for the purpose of transcribing music, this application does not provide useful tools linking the audio file with the editor, nor does it provide project management facilities.

Rosegarden [11] is a more general-purpose open source audio workstation that provides audio and MIDI sequencing, in the same way as Noteedit, and also provides notation editing. It is focused on providing the wide range of complex audio sequencing functionality and interaction with MIDI inputs found in high-end commercial computer music suites. For compositional purposes, Rosegarden contains a complete music score editor which can be connected to MIDI input devices; however, Rosegarden does not have specific facilities for manual transcription.

Transcribe [8] is a software tool specifically designed to aid manual music transcription. The application


supports different audio file formats and provides a graphical interface for the input audio waveform and the corresponding frequency analysis. The main window consists of an active piano keyboard with note synthesis, so that a user can compare the tone of a synthesized note with that in the input audio signal. The main window also contains the rendered audio waveform, and a plot of the frequency domain is superposed on the scale of the piano keyboard so that the peaks in frequency are centered on the corresponding notes of the keyboard. Another useful feature of Transcribe is the ability to replay small segments as well as to change the speed without sacrificing the tone, called time-warping.

With its intuitive design, Transcribe is a lightweight program useful for monophonic melodies, yet practically useless for polyphonic music. Another shortcoming of this tool for more serious transcription tasks is that it does not provide an integrated notation editor that collects the results of the frequency analysis; nor is it possible to add extensions, since the software is not open source and so cannot be extended to include new features.

Like the previous tool, TwelveKeys [12] is a proprietary software tool that provides analysis of monophonic and polyphonic music recordings in order to identify notes through the display of the frequency analysis of short time-domain segments of the audio file.


Once again, there is no provision for annotating the notes identified, so external editing software must be used.

AudioScore [13] is another commercial transcription tool that displays the signal in the time domain together with the frequency-domain analysis for note identification. This software does provide an editing environment but, as with all the other software tools described, the audio file is not directly linked to the notation editor, so the user does not know which time-domain segment corresponds to a particular measure in the score.

Thus, despite the wide array of software tools for music composition and transcription described, we believe that there is a well-defined conceptual gap in the way software tools have approached the problem of transcribing musical pieces, since they ignore the manner in which transcription is normally accomplished. We believe the key concept for a useful transcription software tool is to provide an explicit correspondence between the time

Table 1: Comparison of Relevant Software Tools


Figure 2: Main Window Workspace in Katmus.

points in the audio input waveform with the associated measures in the sheet music score. Our open source software tool, Katmus [14], bridges this gap. Moreover, Katmus not only combines time-frequency analysis of the audio waveform with a powerful WYSIWYG score editor, but also introduces a project-based workflow in the process of music transcription. In the Katmus environment, synchronization between the audio waveform and the note editor means that updates to the score trigger a corresponding update to the state of the associated time segment of the waveform. In particular, the state of being transcribed or not is represented on the rendered waveform as a color-coded highlight. Thus, an incomplete measure in the note editor is represented on the corresponding segment of the waveform with a different color from a measure that is completely transcribed.

Another important difference between Katmus and other similar software is our emphasis on a project management workflow approach. In this way, a user can save Katmus sessions, thereby saving the present state of all parameters in XML format. With this state persistence, a user can return at a later time to an unfinished transcription and continue at the point of the last session with all the previous parameters restored. Thus, there is no reason for the transcription to proceed in a linear order; large sections can be left untranscribed to be returned to at


a later time. Sections that are complex can be marked as unfinished, indicating to the user that these are points which must be revised and/or require further effort. Project creation in Katmus is flexible, allowing for multiple scores per audio file, as well as selecting scores based upon single or multiple staves. As with other full-featured notation editors, Katmus provides score playback with sound synthesis support and allows for exporting scores to PDF or MIDI (Musical Instrument Digital Interface).

Perhaps one of the most powerful software architecture features is the plugin management infrastructure, which allows smaller software modules to be hot-plugged without the



Figure 3: Typical Workflow in Katmus.

need for recompilation of the entire application. This has the advantage that experimental audio analysis and other additional features can be inserted without affecting the underlying software kernel of the application. Several modules have been written using this plugin system, including the time-stretching features found in other similar tools, frequency spectral analysis and filters, and experimental automatic transcription that can provide suggestions to the user. In this way, Katmus can act as a powerful workbench for researchers developing different audio applications related to musical analysis.

Table 1 shows a comparative summary of the different tools that have been described in this section, including the most relevant parameters for the specific task of transcription.

Table 1 provides a comparison which helps describe the advantages of Katmus, our application for musical transcription, showing the strengths and weaknesses of other applications with respect to this problem domain.

3 Environmental Features of Katmus

As described in the previous section, the novel aspect of Katmus is the tailored workflow for helping users transcribe complex musical compositions. This workflow consists of project management and a graphical interface that exposes a WYSIWYG notation editor coupled to a graphical representation of the time domain signal of the


audio signal. Also integral to this graphical interface is the ability to listen and apply various analysis algorithms to transcribe the musical arrangement, displaying at all times the correspondence between what is written, where it appears in the audio signal and score, and playback.

Figure 2 shows the main window of the Katmus application. The top panel displays the frequency domain representation of the audio signal, the middle panel the time domain signal, and the bottom panel the corresponding score. The representation in the frequency domain helps to identify the notes present in the audio segment. In the plot representing the audio waveform, it is possible to manually mark the limits of the measures, thus link-



Table 2: Tests based upon Scenarios.

Each test case lists its usage case, the background (starting conditions), and the expected sequence of events. (The row grouping below is reconstructed from the original table layout.)

Project

- Open. Background: a request is made to open an existing Katmus project that contains different scores (with various notes, chords, ties and other stylistic symbols) and/or an audio file in wav, mp3 or ogg format; the project may or may not contain errors, be corrupt, or have associated audio files. Events:
  1. If the source file is correct, the application should properly load the project with all source elements.
  2. If the source file has errors, the application should display a dialog box indicating that the file is corrupt or not found.

- Close. Background: a request is made to close a project. Events:
  1. Confirms or cancels the closure of the project.
  2. Asks the user whether to open or create a new project.

- Create. Background: a request is made to create a new project. Events:
  1. The project creation wizard opens.
  2. Features are introduced into the new project.
  3. The option is available to return and update previously entered element values.
  4. Loading an invalid file will display an error message.

- Save. Background: create a new score in the project with different elements (time signature, notes, and other symbols). Events:
  1. The project is saved with the original elements and the updated score.
  2. If the project has been previously saved, the name can be changed.

Score

- Delete. Background: a score can be selected and removed from the project tree. Events:
  1. A dialog box is shown for confirmation and removal.
  2. If cancelled, the process is aborted.
  3. If the score is the only one associated with the project, it cannot be removed.

- Export. Background: the user wants to export a score to be rendered. Events:
  1. The user can choose the format in which to export the score.
  2. If the file exists, it can be overwritten.

- New. Background: the user wants to add a new score to the project. Events:
  1. A dialog box is displayed for confirmation.
  2. The name and score type are entered.

- Rename. Background: the user wants to change the name of a score in the project. Events:
  1. The score is selected from the project tree.
  2. The rename option is chosen and the new name entered.
  3. If confirmed, the score is renamed; otherwise the current name is retained.

- Playback. Background: the user wants to play back a selected score. Events:
  1. An instrument is chosen.
  2. The score is reproduced with the chosen instrument.
  3. The user can use the same controls as with the reproduction of the audio signal.

Notes

- Delete. Background: the user selects one or more notes of a measure, or several measures, to be deleted. Events:
  1. The selected notes are deleted.

- Copy. Background: the user wants to select one or more notes of a measure, or several measures, to be copied to another measure. Events:
  1. All selected notes are copied and pasted into the desired measures.

- Cut. Background: the user wants to select one or more notes of a measure, or several measures, to be cut and pasted into another measure. Events:
  1. The cut notes are removed and pasted in the desired measure.

- Insert. Background: the user wants to select a note or a stylistic symbol to be inserted in the score. Events:
  1. If the score does not have any measures, nothing happens.
  2. If measures exist, the selected elements are inserted into the desired measure.

- Paste. Background: the user wants to paste one or more notes previously copied. Events:
  1. An empty space is selected and the notes are pasted.
  2. If the user tries to paste over an existing note or outside the present measure, the process is aborted.

- Select. Background: the user wants to select one or more notes in one or more measures. Events:
  1. The user selects the note and the background color changes.
  2. If an empty zone is pressed, the present selection is undone.
  3. Multiple notes are selected by selecting the first and dragging the cursor to the last note desired.
  4. All notes of a measure or staff are selected by selecting an empty zone prior to the notes desired and dragging the cursor to […]


ing the original signal with the note editor.

The application can import audio files that can be played and will be used for analysis during the transcription process. The interface also allows the user to associate the imported audio files with transcriptions in the form of scores. The major features that Katmus provides the user with are the following:

- Import and associate an audio file to be transcribed within the project. Several audio file formats are supported, including uncompressed wav, mp3 and ogg vorbis.

- Associate different transcriptions to the same audio file within a single project. This feature allows the user to save and maintain several transcriptions of the same audio file or to have individual transcriptions for different instruments.

- Zoom capability in both the audio waveform and notation editor, which allows the positioning of precise selections of audio segments for fast musical passages.

- Synchronization of the audio waveform with the measures in the score, thereby associating each audio segment with a measure. Combined with color coding, this provides a powerful functional advantage since completed and uncompleted parts of the audio file and/or score are indicated, saved and restored for multi-session work.

- Playback of the audio signal. The user can replay the entire signal or certain segments, selected by dragging the mouse over the graphical representation of the audio signal. A powerful feature for transcription is pitch-invariant time-stretching, where time domain segments can be slowed down without affecting the pitch. This feature is especially interesting for rapid musical segments or in cases where complicated chords (polyphony) need to be resolved.

- Edit scores. The integrated notation editor provides basic functionality for the editing of musical symbols.

- Play back the score. This functionality makes it possible to compare the original melody with the tune of the current work.

- Export scores. Supported formats are PDF, MIDI, Lilypond, SVG or PNG.

4 System Description and Technical Specifications

One of the fundamental aspects of the Katmus development and philosophy has been the use of open source software for its implementation, thereby encouraging future contributions from a wider community of developers. The application is written in C++ and makes extensive use of the open source Qt4 graphical interface library (originally developed by Trolltech and now owned by Nokia). The significant advantages of the Qt4 library are that it is cross-platform, object-oriented, and provides extensive technical documentation.

Since Qt provides a complete framework for developing applications, the core capabilities and functionality of Katmus rely heavily upon the standard and advanced features of the library. Some noteworthy features provided by Qt in the Katmus application are: (i) the use of the Plugin Manager API for developing shared modules, which extends the basic functionality of an application and encourages third party contributions and experimentation, (ii) the use of the specialized Qt thread classes, which can greatly accelerate the application's performance on computer architectures that can take advantage of multi-threading, (iii) interoperability through XML document exchange using standard SAX and DOM technology, and (iv) a clean implementation of object


event callback handling with the use of the signals and slots paradigm, characteristic of the Qt framework.

Project management in Katmus is implemented with the use of XML through a DOM implementation offered explicitly in the Qt4 library. In order to produce high quality score rendering, scores are exported using a Lilypond-based file generator [15], which produces the specific language syntax for post-processing by the Lilypond compiler, which in turn produces the desired output format (PS, PDF or MIDI). Within the application, the audio signal is played by invoking the libao library [16]. This library is cross-platform and provides a simple API for audio playback that can be used internally or through different standard audio drivers such as ALSA or OSS. Playback of scores is done by using the Lilypond syntax generator to generate MIDI files, with Timidity++ [17] for sound synthesis. The slow motion playback is programmed using the Rubberband library [18], which implements a phase vocoder that can change the speed of the original musical audio in real time without affecting the pitch. Finally, to obtain the frequency domain representation from the time domain of the audio signal, the popular open source Fourier transform library, fftw3 [19], is used.
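The paper does not show the schema of these XML session files. The elements it describes (an audio file, one or more scores, and per-measure boundaries with completion state) suggest a layout along the following lines; every tag and attribute name here is hypothetical, given only to make the persistence idea concrete:

```xml
<!-- Hypothetical sketch: the actual Katmus element names are not documented here. -->
<katmusProject name="my_transcription">
  <audio file="song.ogg" channel="left"/>
  <score name="piano" staves="2">
    <!-- start/end are the measure limits marked on the waveform, in seconds;
         complete mirrors the color-coded transcription state. -->
    <measure index="1" start="0.00" end="2.40" complete="true"/>
    <measure index="2" start="2.40" end="4.80" complete="false"/>
  </score>
</katmusProject>
```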

Figure 3 shows the typical workflow of Katmus, which is a standalone application with a complete user interface. The first action when launching the application is either to create a new project, select an existing project saved on disk, or start a default project by directly importing an audio file.

Since many different options can be used to instantiate a new project, a graphical wizard guides the user through the process of creating a new project. The information queried during this process includes: (i) the type of audio file to be imported, (ii) the channel (if stereo), (iii) the type of score (one or two staves) and (iv) the name of the project. Once the project is successfully created, the standard work area of the application is instantiated, which consists of three discrete parts, shown in Figure 2, and described


as follows:

1. Display window for audio signal: Provides zoom capability to change both the time and amplitude scales of the audio waveform, which is an important feature for transcription. Thus, the user can focus upon short time scale segments of the audio waveform. An important feature is the ability to select the time intervals corresponding to the different measures of the score, which are marked in the display window by vertical lines. Since time meters can vary within a musical composition, Katmus offers two different ways of making the correspondence between measures and times in the audio signal: (i) manually selecting the limits of each measure in the signal display window by mouse point/click events, or (ii) assigning a constant time duration to all measures, and then making small tweaks to the duration of individual measures where necessary. The user may also interact with the waveform display window by selecting small segments with mouse drag events. In this way, all the points included in the selected waveform segment can be used for subsequent analysis with built-in functions, plugin functions, or repetitive playbacks.

2. Intelligent Score Editor: Once the measures and the time signature are defined, the user can insert the various musical notes and symbols corresponding to the transcription. An important feature of the editor is that the time computation is automatically validated so that only measures with the correct time signature can be marked as complete.

3. Project tree: Displays the different elements, such as scores, measures and the audio file, which are part of the complete transcription project. As described previously, an advantage of the project paradigm is that it provides an intuitive way of allowing Katmus to contain many transcription scores for a single audio file, thereby containing different versions of a transcription or assigning different musical instruments to each separate score.

In order to ensure the proper functioning of the system described and the quality of the software, several evaluation tests were performed. There are numerous methods of evaluation in the literature that ensure the quality of software development based upon the product type and metrics [20]. For Katmus, both functional and structural tests were performed, focusing specifically on object-oriented systems.

The method chosen for these tests is based upon scenarios, since it focuses upon actions that the user performs in order to discover interaction errors. This means that the tasks performed by users must be captured in a series of use cases, together with any possible variants which may arise. Tests are then performed on this set of cases.

Table 2 shows the set of tests based on usage scenarios of the application. Applying each of these scenarios to the Katmus software system resulted in a thorough method for debugging the application.

Functional, or black box, testing was applied to the user interface for testing usage cases. The evaluation was based on informal handling tests following the evaluation cycle of the interface [21], during which users evaluated beta versions of the software in order to provide informal feedback for debugging the final design.

Once extensive usage tests and bug fixes were performed, Katmus was made available to the wider user community at SourceForge in July 2009. Since then, hundreds of users have successfully downloaded and installed the application without significant incident.

5 Conclusions and Future Work

This paper presents the architecture, implementation, and philosophy of Katmus, an easy to use software platform designed to help musicians, professionals, teachers and stu-


dents with complex music transcription tasks. For the expert musician, the Katmus philosophy and implementation provide a natural mapping of the transcription process, which is a great aid to producing the final arrangement. For more novice users, Katmus provides facilities that reinforce the recognition of musical notes and chords and may serve as an educational tool.

The end result is open source transcription software that is both intuitive and easy to use. Moreover, the application is easily extensible through a plugin architecture that simplifies the addition of enhancements or experimental algorithms. By taking advantage of the open source philosophy, future code enhancements and extensible modules could be provided by community contributions in the following areas:

- Extending the capabilities of the notation editor.
- Using beat detection algorithms for accurately associating measures.
- Extending the support for MIDI sequencing to handle more complex polyphonic output.
- Extending the labeling system for measures.
- Providing support for multichannel audio analysis.
- Module development for audio signal editing.

References

[1] R. Bennett. Elementos básicos de la música. Ed. Jorge Zahar, 1998.
[2] K.D. Martin. Automatic transcription of simple polyphonic music: a robust front end processing. MIT Media Laboratory Perceptual Computing Section, Technical Report No. 385, 1996.
[3] M. Pizczalski. A computational model of music transcription. PhD Thesis, University of Michigan, Ann Arbor, 1986.
[4] A. Sterian. Model based segmentation of time-frequency images for musical transcription. PhD Thesis, University of Michigan, 1999.
[5] W. Hess. Pitch determination of speech signals. Springer-Verlag, New York, 1983.
[6] A. Klapuri and M. Davy. Signal processing methods for music transcription. Springer, New York, 2006.
[7] NoteEdit. <http://noteedit.berlios.de/>.
[8] Transcribe. <http://www.seventhstring.com/>.
[9] Sibelius. <http://www.sibelius.com>.
[10] Finale. <http://www.finalemusic.com/>.
[11] Rosegarden. <http://www.rosegardenmusic.com/>.
[12] TwelveKeys. <http://twelvekeys.softonic.com/>.
[13] AudioScore. <http://www.neuratron.com/audioscore.htm>.
[14] Katmus. <http://katmus.sourceforge.net/>.
[15] Lilypond. <http://lilypond.org/>.
[16] Libao. <http://www.xiph.org/ao/>.
[17] Timidity++. <http://timidity.sourceforge.net/>.
[18] Rubberband. <http://www.breakfastquay.com/rubberband/>.
[19] fftw3. <http://www.fftw.org/>.
[20] A.R. Hevner, S.T. March, J. Park, S. Ram. Design Science in Information Systems Research. Management Information Systems Quarterly, Vol. 28, No. 1, 2004.
[21] R.S. Pressman. Software Engineering: A Practitioner's Approach. McGraw Hill, 6th edition, 2006.


Keywords: Information Security, IT Security Training, Online Attacks, Security Risks, Tele-Lab Project, Virtual Lab Environment.

Introduction

The increasing propagation of complex IT systems and the rapid growth of the Internet draw more and more attention to the importance of IT security issues. Technical security solutions cannot completely compensate for the lacking awareness of computer users, caused by indifference or laziness, inattentiveness, and lack of knowledge and education. In the context of awareness creation, IT security training has become a topic of strong interest, for companies as well as for individuals.

Traditional teaching techniques (i.e. lectures or literature) have turned out to be unsuitable for IT security training, because the trainee cannot apply the principles from the academic approach to a realistic environment within the class. In IT security training, gaining practical experience through exercises is indispensable for consolidating the knowledge. Precisely

the allocation of an environment for these practical exercises poses a challenge for research and development. That is because students need privileged access rights (root/administrator account) on the training system to per-

Authors

Christian Willems studied computer science at the University of Trier, Germany, and received his diploma degree in 2006. Currently he is a research assistant at the Hasso-Plattner-Institute for IT Systems Engineering, giving courses on internet technologies and security. Besides that he is working on his PhD thesis at the chair of Prof. Dr. Christoph Meinel. His special research interests focus on awareness creation, IT security teaching and virtualization technology. <[email protected]>

Orestis Tringides is the Managing Director of Amalgama Information Management and has participated in research projects since 2003 in the areas of e-learning, e-business, ICT Security and the Right of Access to Information. He holds a B.Sc. degree in Computer

IT Security

Practical IT Security Education with Tele-Lab

Christian Willems, Orestis Tringides, and Christoph Meinel

© 2011 Pliroforiki

This paper was first published, in English, by Pliroforiki (issue no. 21, July 2011, pp. 30-38). Pliroforiki ("Informatics" in Greek), a founding member of UPENET, is a journal published, in Greek or English, by the Cyprus CEPIS society CCS (Cyprus Computer Society, <http://www.ccs.org.cy/about/>). The July 2011 issue is available at <http://www.pliroforiki.org/>.

The rapid burst of Internet usage and the corresponding growth of security risks and online attacks for the everyday user or the enterprise employee have brought forth the terms Awareness Creation and Information Security Culture. Nevertheless, security education has remained widely an academic issue. Teaching system security or network security on the basis of practical experience poses a great challenge for the teaching environment, which is traditionally met using a computer laboratory at a university campus. The Tele-Lab project offers a system for hands-on IT security training in a remote virtual lab environment, on the web, accessible at any time.


Science, an M.Sc. degree in Information Systems and is currently pursuing an MBA degree. He also participates in Civil Society projects and is interested in elderly care, historical remembrance, soft tourism and social inclusion. <[email protected]>

Christoph Meinel is scientific director and CEO of the Hasso-Plattner-Institute for IT Systems Engineering and professor for computer science at the University of Potsdam, Germany. His research field is Internet and Web Technologies and Systems. Prof. Dr. Meinel is author or co-author of 10 textbooks and monographs and of various conference proceedings. He has published more than 350 peer-reviewed scientific papers in highly recognised international scientific journals and conferences. <[email protected]>

form most of the perceivable security exercises. With these privileges, students could easily destroy a training system or even use it for unintended, illegal attacks on other hosts within the campus network or on the Internet.


The classical approach requires a dedicated computer lab for IT security training. Such labs are exposed to a number of drawbacks: they are immobile, expensive to purchase and maintain, and must be isolated from all other networks on the site. Of course, students are not allowed to have Internet access on the lab computers. Hands-on exercises on network security topics even demand that each student be provided with more than one machine, and these machines have to be interconnected (i.e. a Man-in-the-Middle attack needs three computers: one for the attacker and two other machines as victims).

Teleteaching for security education mostly consists of multimedia courseware or demonstration software, which do not offer real practical exercises. In simulation systems users do have a kind of hands-on experience, but a simulator does not behave like a realistic environment, and the simulation of complex systems is very difficult, especially when it comes to interacting hosts on a network. The Tele-Lab project builds on a different approach for a Web-based teleteaching system (explained in detail in section 2).

Furthermore, we will describe a set of exercise scenarios to illustrate the

Figure 1: Screenshot of the Tele-Lab Tutoring Interface


capabilities of the Tele-Lab training environment: a simple learning unit on password security, an exercise on eavesdropping, and the practical application of a Man-in-the-Middle attack.

Tele-Lab: A Remote Virtual Security Laboratory

Tele-Lab, accessible at <http://www.tele-lab.org>, was first proposed as a standalone system [4], later enhanced to a live DVD system introducing virtual machines for the hands-on training [3], and then emerged as the Tele-Lab server [2, 6]. The Tele-Lab server provides a novel e-learning sys-


tem for practical security training on the WWW and inherits all positive characteristics from offline security labs. It basically consists of a web-based tutoring system (see Fig. 1) and a training environment built of virtual machines. The tutoring system provides learning units with three types of content: information chapters, introductions to security and hacker tools, and finally practical exercises. Students perform those exercises on virtual machines (VM) on the server, which they operate via remote desktop access. A virtual machine is a software system that provides a runtime environment for operating systems. Such software-emulated computer systems allow easy deployment and recovery in case of failure. Tele-Lab uses this feature to revert the virtual machines to the original state after each usage. This is a significant advantage over the traditional setting of a physical dedicated lab, since the recovery to the original state can be performed more quickly, more often and without any manual maintenance effort.

With the release of the current Tele-Lab 2.0, the platform introduced the dynamic assignment of several virtual machines to a single user at the same time. Those machines are connected within a virtual network (known as a team, see also [1]), providing the possibility to perform complex network attacks such as Man-in-the-Middle, or to interact with a virtual (scripted) victim (see the exemplary description of a learning unit below).

A short overview of the Tele-Lab architecture is given later in this section.

A Learning Unit in Tele-Lab

An exemplary Tele-Lab learning unit on malware (described in more detail in [5]) starts off with academic

Figure 2: Architecture of the Tele-Lab Platform.


1 See [9] for different teaching approaches.

knowledge such as definition, classification, and history of malware (worms, viruses, and Trojan horses). Methods to avoid becoming a victim and relevant software solutions against malware (e.g. scanners, firewalls) are also presented. Afterwards, various existing malware kits and ways of distribution are described in order to prepare the hands-on exercise. Following an offensive teaching approach1, the user is asked to take the attacker's perspective, and hence is able to vividly experience possible threats to his/her personal security objectives, as if physical live systems were used. The closing exercise for this learning unit on malware is to plant a Trojan horse on a scripted victim called Alice; in particular, the Trojan horse is the out-

Page 149: UPGRADE, Vol. XII, issue no. 5, December 2011 - cepis.orgcepis.org/upgrade/media/full_2011_51.pdf · 99 From inforeview (JISA, Serbia) Information Society Steve Jobs — Dragana Stojkovic

148 CEPIS UPGRADE Vol. XII, No. 5, December 2011 © CEPIS

UPENET

Farewell Edition

dated Back Orifice2 . In order toachieve that, the student has to preparea carrier for the BO server componentand send it to Alice via e-mail. Thescript on the victim VM will reply bysending back an e-mail,indicating thatthe Trojan horse server has been in-stalled (that the e-mail attachment hasbeen opened by the victim). The stu-dent can now use the BO client to takecontrol of the victim’s system and spyout some private information. Theknowledge of that information is theuser’s proof to the Tele-Lab tutoringenvironment, that the exercise has beensuccessfully solved.

Such an exercise implies the need for the Tele-Lab user to be provided with a team of interconnected virtual machines: one for attacking (with all necessary tools pre-installed), a mail server for e-mail exchange with the victim, and a vulnerable victim system (in this particular case, an unpatched Windows 95/98). Remote desktop access is only possible to the attacker's VM.

Learning units are also available on e.g. authentication, wireless networks, secure e-mail, etc. The system can easily be enhanced with new content. For example, in a project in which the Hasso-Plattner-Institut, the Vilnius Gediminas Technical University (VGTU), nSoft and Amalgama Information Management Ltd. participate, new learning units were easily added to the VGTU installation of Tele-Lab, <http://telelab.vgtu.lt>, and have been shared among partners. The content was translated for Lithuanian language localization. For the future, the project consortium plans to add more learning units and expand localization to the Greek language.

Architecture of the Tele-Lab Server

The current architecture of the Tele-Lab 2.0 server is a refactored enhancement of the infrastructure presented in [6]. Basically, it consists of the following components (illustrated in Fig. 2).

Portal and Tutoring Environment: The web-based training system of Tele-Lab is a custom Grails3 application running on a Tomcat application server. This web application handles user authentication, allows navigation through learning units, delivers their content and keeps track of the students' progress. It also provides controls to request a team of virtual machines for performing an exercise. The Portal and Tutoring Environment (along with the Database and Administration Interface components described later on) offer tutors and students the facilities of a Learning Management System, such as centralized and automated administration, assembly and delivery of learning content, reuse of learning units, etc. [11]

Virtual Machine Pool: The server is loaded with a set of different virtual machines needed for the exercise scenarios – the pool. The resources of the physical server limit the maximum total number of VMs in the pool. In practice, a few (3-5) machines of every kind are started up. Those machines are dynamically connected into teams and bound to a user on request. The current hypervisor solution used to provide the virtual machines is KVM/Qemu4. The way virtual machines are used in Tele-Lab's architecture allows for further creative ways to allocate resources in an optimized and collaborative manner, by setting up collaboration among different instances of the Tele-Lab system that are installed at different sites: in the example of the abovementioned consortium, HPI's and VGTU's Tele-Lab servers share resources in order to dynamically provide virtual machines to each other when needed. For example, if a student from VGTU requests to conduct a laboratory exercise, but VGTU's Tele-Lab server has already reached the maximum limit of VMs that can be allocated, it automatically requests HPI's Tele-Lab server to allocate a VM from its own resources (and vice versa). This automatic process occurs seamlessly, so the user does not experience any disruptions. In the future, this collaboration arrangement can easily be expanded into a grid of different institutions sharing their Tele-Lab servers' resources with each other, thus evenly distributing the whole processing workload when, e.g., there is a peak in VM demand at one of the partners' sites.

2 BackOrifice (BO) is a Remote Access Trojan Horse developed by the hacker group "Cult of the Dead Cow", see <http://www.cultdeadcow.com/tools/bo.php>.
3 Grails is an open-source framework for web application development, see <http://www.grails.org/>.
4 See <http://www.linux-kvm.org/> and <http://www.qemu.org/>.
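The cross-site fallback described above can be sketched in a few lines. This is an illustrative model only, not Tele-Lab code: the class and method names (VMPool, allocate, etc.) are invented for this example, and the real system negotiates allocation over its web services rather than via shared objects.

```python
# Hypothetical sketch of pool allocation with transparent fallback to a
# partner site when the local VM limit is reached (names are invented).

class VMPool:
    def __init__(self, site, max_vms):
        self.site = site
        self.max_vms = max_vms          # limit imposed by the physical server
        self.allocated = {}             # user -> VM identifier
        self.partner = None             # partner site's pool (e.g. HPI <-> VGTU)

    def has_capacity(self):
        return len(self.allocated) < self.max_vms

    def allocate(self, user, template):
        """Bind a VM of the given template to the user, trying the
        partner site transparently when the local pool is exhausted."""
        if self.has_capacity():
            vm_id = f"{self.site}/{template}-{len(self.allocated)}"
            self.allocated[user] = vm_id
            return vm_id
        if self.partner and self.partner.has_capacity():
            # seamless fallback: the user never sees which site serves the VM
            return self.partner.allocate(user, template)
        raise RuntimeError("no VM capacity at any site")

# Usage: two collaborating sites sharing resources
vgtu = VMPool("vgtu", max_vms=1)
hpi = VMPool("hpi", max_vms=2)
vgtu.partner, hpi.partner = hpi, vgtu

print(vgtu.allocate("student1", "attacker"))   # served locally
print(vgtu.allocate("student2", "attacker"))   # transparently served by HPI
```

The design point is that the fallback decision lives entirely on the server side, which is why the student experiences no disruption.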


For the network connections within the teams, Tele-Lab uses the Virtual Distributed Ethernet (VDE)5 package. VDE emulates all physical aspects of Ethernet LANs in software. The Tele-Lab Control Services launch virtual switches or hubs for each virtual network defined for a team of VMs and connect the machines to the appropriate network infrastructure. For the distribution of IP addresses in the virtual networks, a DHCP server is attached to every network. After sending out all leases, the DHCP server is killed due to security constraints. [7]

Database: The Tele-Lab database holds all user information, the content for web-based training and the learning unit structure, as well as the information on virtual machine and team templates. A VM template is the description of a VM disk image that can be cloned in order to get more VMs of that type. Team templates are models for connected VMs that are used to perform certain exercises. The database also persists current virtual machine states.

Remote Desktop Access Proxy: The Tele-Lab server must handle concurrent remote desktop connections for users performing exercises. This is realized using the open-source project noVNC6, a client for the Virtual Network Computing protocol based on HTML5 Canvas and WebSockets. The noVNC package comes with the HTML5 client and a WebSockets proxy which connects the clients to the VNC servers provided by QEMU. Ensuring a protected environment for both the Tele-Lab users and the system is a challenge that must be thoroughly addressed at all levels, as the issue of network security for virtual machines in a Cloud Computing setting (such as the case of Tele-Lab) poses special requirements. [8] The system uses token-based authentication: an access token for a remote desktop connection is generated whenever a user requests a virtual machine team for performing an exercise. Using TLS ensures the confidentiality of the token.
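A minimal sketch of such a token scheme follows. This is not the actual Tele-Lab implementation (which is a Grails application); the function names and the one-hour lifetime are assumptions chosen for illustration.

```python
# Hypothetical token-based access scheme: a random token is issued when a
# user requests a VM team; the remote desktop proxy only relays
# connections that present a currently valid token.
import secrets
import time

TOKEN_TTL = 3600  # assumed lifetime of one lab session, in seconds
_tokens = {}      # token -> (user, expiry timestamp)

def issue_token(user):
    """Generate an unguessable access token for a remote desktop session."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = (user, time.time() + TOKEN_TTL)
    return token

def check_token(token):
    """The proxy would call this before relaying a VNC connection."""
    entry = _tokens.get(token)
    if entry is None:
        return None
    user, expiry = entry
    if time.time() > expiry:
        del _tokens[token]              # expired sessions are purged
        return None
    return user

t = issue_token("alice")
assert check_token(t) == "alice"
assert check_token("forged-token") is None
```

Note that the scheme is only safe in combination with TLS, exactly as the text states: without transport encryption the token itself could be sniffed.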

Administration Interface: The Tele-Lab server comes with a sophisticated web-based administration interface that is also implemented as a Grails application (not depicted in Fig. 2). On the one hand, this interface is made for content management in the web-based training environment and, on the other, for user management. Additionally, the admin interface can be used for manual virtual machine control, monitoring, and for registering new virtual machine or team templates.

Tele-Lab Control Services: The purpose of the central Tele-Lab control services is to bring all the above components together. To realize an abstraction layer encapsulating the virtual machine monitor (or hypervisor) and the remote desktop proxy, the system implements a number of lightweight XML-RPC web services. The vmService is for controlling virtual machines – starting, stopping or recovering them, grouping teams, or assigning machines or teams to a user. The remoteDesktopService is used to initialize, start, control and terminate remote desktop connections to assigned machines. The above-mentioned Grails applications (portal, tutoring environment, and web admin) allow the user to control the whole system using the web services.
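The control-service pattern can be illustrated with Python's standard XML-RPC modules. The method names below (start_vm, stop_vm, list_running) are invented for this sketch; the real vmService and remoteDesktopService interfaces are not reproduced here.

```python
# Hypothetical sketch of a vmService-style XML-RPC control service: a
# lightweight server exposes VM control calls, and a web front end
# (the Grails portal, in Tele-Lab's case) would invoke them as a client.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class VmService:
    def __init__(self):
        self.running = set()

    def start_vm(self, vm_id):
        self.running.add(vm_id)
        return True

    def stop_vm(self, vm_id):
        self.running.discard(vm_id)
        return True

    def list_running(self):
        return sorted(self.running)

# Bind to an ephemeral port and serve in the background
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(VmService())
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A portal application would act as such a client:
client = ServerProxy(f"http://127.0.0.1:{port}")
client.start_vm("attacker-vm-1")
client.start_vm("victim-vm-1")
print(client.list_running())   # ['attacker-vm-1', 'victim-vm-1']
```

The benefit of the RPC layer, as in Tele-Lab, is that the hypervisor can be swapped out without touching the web applications that call the service.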

On the client side, the user only needs a web browser supporting SSL/TLS. The current implementation of the noVNC client does not even need an HTML5-capable browser: for older browsers, HTML5 Canvas and/or WebSockets are emulated using Adobe Flash.

IT Security Exercises

As stated before, one of the strengths of Tele-Lab (and other isolated laboratories) is the ability to provide secure training environments for exercises where the student takes the perspective of an attacker. Next to the learning unit on Trojan horses presented in section 2, we introduce a set of additional exercise scenarios to illustrate this approach: Attacks on Accounts and Passwords, Eavesdropping of Network Traffic, and a Man-in-the-Middle Attack.

5 See <http://vde.sourceforge.net/>.
6 See <http://kanaka.github.com/noVNC/>.
7 See e.g. <http://www.rsa.com/solutions/consumer_authentication/reports/9381_Aberdeen_Strong_User_Authentication.pdf>.
8 See <http://techcrunch.com/2009/12/14/rockyou-hack-security-myspace-facebook-passwords/>.

Exercise Scenario A: Attacks on Accounts and Passwords

Gaining valid user credentials for a computer system is obviously a major objective for any attacker. Hackers can get access to personal and confidential data or use a valid login as a starting point for numerous further attacks, such as gaining privileged access to their target system.

It is well known that one should set a password consisting of letters (upper and lower case), numbers and special characters. Moreover, the longer a password is, the harder it is to crack. Thus, it is inherently important for a user to choose strong credentials – even though passwords of high complexity are harder to memorize.

Studies7 show that users still choose very weak passwords if allowed to. In December 2009, a hacker stole passwords from the popular online platform rockyou.com and released a dataset of 32 million passwords to the Internet8. An analysis of those passwords revealed several interesting findings:

- 30% of the users chose passwords with a length of 6 characters or less; 50% had a password not longer than 7 characters
- Almost 60% of the users chose their password from a limited set of alphanumeric characters
- Nearly 50% used names, slang words, dictionary words or trivial passwords (consecutive digits, adjacent keyboard keys, and so on)

The learning unit on Password Security explains how passwords are stored within computer systems (e.g. password hashes in Linux), and how tools like password sniffers, dumpers and crackers work.
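The core of a password-cracking tool is easy to demonstrate. The sketch below is a simplification, not the approach of any specific tool mentioned here: it uses unsalted SHA-256 for brevity, whereas real systems use salted and deliberately slow hashing schemes.

```python
# Simplified dictionary attack: hashes of candidate words are compared
# against a stolen hash. Weak passwords from a common-word list fall
# almost instantly; strong random passwords are never found this way.
import hashlib

def sha256(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

# small dictionary of common choices, like those seen in the rockyou leak
wordlist = ["123456", "password", "qwerty", "iloveyou", "letmein"]

def crack(stolen_hash):
    """Try every dictionary word; return the matching password or None."""
    for candidate in wordlist:
        if sha256(candidate) == stolen_hash:
            return candidate
    return None

stolen = sha256("iloveyou")           # a hash dumped from the victim system
print(crack(stolen))                  # iloveyou
print(crack(sha256("X7#kp!2Rw")))     # None: not in the dictionary
```

This is exactly why the statistics above matter: any password that appears in a leak-derived wordlist is effectively already cracked.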


In the exercise section, the user is asked to experience how fast weak passwords can be cracked. On the training machine (Windows XP) the user must dump the passwords to a file using PwDump, and crack the hashes with the well-known John-the-Ripper9 password recovery tool. It becomes obvious that passwords like the username or words from dictionaries can usually be cracked within a few seconds.

The learning unit concludes with hints on how to choose a strong password that can be memorized easily.

Exercise Scenario B: Eavesdropping of Network Traffic

The general idea of eavesdropping is to secretly listen to the private communication of two (or more) communication partners without their consent. In the domain of computer networks, the common technique for eavesdropping is packet sniffing. There are a number of tools for packet sniffing – packet analyzers – freely available on the Internet, such as the well-known tcpdump or Wireshark10 (used in this learning unit).

A learning unit on packet sniffing in a local network starts with an introduction to communication on the data-link layer (Ethernet) and explains the difference between a network with a hub and a network in a switched environment.

This is important for eavesdropping, because this kind of attack is much easier when connected to a hub. The hub forwards every incoming packet to all its ports and hence to all connected computers. These hosts decide whether they accept and further process the incoming data based on the MAC address in the destination field of the Ethernet frame header: if the destination MAC is their own MAC address, the Ethernet frame is accepted, and dropped otherwise. If a packet analyzer is running, frames not intended for the respective host can also be captured, stored and analyzed. This situation is different in a switched network: the switch does not broadcast incoming data to all ports but interprets the MAC destination to "switch" a dedicated line between source and destination ports. In consequence, the Ethernet frame is only delivered to the actual receiver.

Figure 3: Man-in-the-Middle Attacks.

9 See <http://www.openwall.com/john> for information on John-the-Ripper, <http://www.foofus.net/~fizzgig/pwdump/> for PwDump6.
10 See <http://www.wireshark.org/>.
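The accept/drop decision described above can be modelled in a few lines. This is a toy model (no real networking): the Frame type and MAC addresses are invented for illustration.

```python
# Toy model of NIC frame filtering on a hubbed network: every host sees
# every frame, but normally passes up only frames addressed to its own
# MAC (or broadcast). A sniffer in promiscuous mode keeps them all.
from collections import namedtuple

Frame = namedtuple("Frame", ["dst_mac", "src_mac", "payload"])
BROADCAST = "ff:ff:ff:ff:ff:ff"

def nic_accepts(frame, own_mac, promiscuous=False):
    """Decide whether a NIC passes a frame up to the operating system."""
    if promiscuous:
        return True                      # packet analyzer captures everything
    return frame.dst_mac in (own_mac, BROADCAST)

f = Frame(dst_mac="aa:aa:aa:aa:aa:01", src_mac="aa:aa:aa:aa:aa:02",
          payload="USER bob")

# The addressed host accepts the frame; a third host normally drops it ...
assert nic_accepts(f, "aa:aa:aa:aa:aa:01")
assert not nic_accepts(f, "aa:aa:aa:aa:aa:03")
# ... unless it runs a sniffer in promiscuous mode on the hubbed network
assert nic_accepts(f, "aa:aa:aa:aa:aa:03", promiscuous=True)
```

On a switched network this model breaks down, since the third host never receives the frame in the first place; that is what motivates the ARP spoofing scenario later on.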

After this general information on Ethernet-based networking, the learning unit introduces the idea of packet sniffing and describes the capabilities and usage of the packet analyzer Wireshark, especially how to capture data from the Ethernet device and how to filter and read the captured data.

The practical exercise presents the following task to the learner: "Sniff and analyze network traffic on the local network. Identify login credentials and use them to obtain a private document." The student is challenged to enter the content of this private document to prove that he/she has solved the task.

When requesting access to a training environment, the user is assigned a team of three virtual machines: the attacker machine, which is equipped with the Wireshark tool, and two machines of (scripted) communication partners: Alice and Bob. In this scenario, Bob's machine hosts an FTP server and a web server, while Alice's VM runs a script that generates traffic by initiating arbitrary connections to the services on Bob's host. Among those client/server connections are successful logins to Bob's FTP server. As this learning unit focuses on sniffing and the interpretation of the captured traffic, the machines are connected with a hub. There is no need for the attacker to get into a Man-in-the-Middle position in order to capture the traffic between Alice and Bob.


Since FTP does not encrypt credentials, the student can obtain the username and password and log in to that service. On the server, the student finds a file called private.txt that contains the response to the challenge mentioned above.
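Conceptually, extracting the credentials amounts to reading USER and PASS commands straight out of the captured FTP control channel. The sample trace below is invented for illustration; in the exercise the student would read the same lines out of a Wireshark capture.

```python
# FTP transmits credentials in clear text on the control channel, so a
# sniffed session can simply be scanned for USER and PASS commands.
captured_control_channel = [
    "220 Bob's FTP server ready",
    "USER alice",
    "331 Password required",
    "PASS secret123",
    "230 Login successful",
]

def extract_credentials(lines):
    """Pull the plain-text username and password out of an FTP trace."""
    user = password = None
    for line in lines:
        if line.startswith("USER "):
            user = line.split(" ", 1)[1]
        elif line.startswith("PASS "):
            password = line.split(" ", 1)[1]
    return user, password

print(extract_credentials(captured_control_channel))  # ('alice', 'secret123')
```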

The section concludes with hints on preventing eavesdropping attacks, such as the usage of services with secure authentication methods (e.g. SFTP or FTPS instead of plain FTP) and data encryption.

Exercise Scenario C: Man-in-the-Middle Attack with ARP Spoofing

The general idea of a Man-in-the-Middle attack (MITM) is to intercept communication between two communication partners (Alice and Bob) by initiating connections between the attacker and both victims and spoofing the identity of the respective communication partner (Fig. 3). More specifically, the attacker pretends to be Bob and opens a connection to Alice (and vice versa). All traffic between Alice and Bob is relayed via the attacker's computer. While relaying, the messages can be captured and/or manipulated.

MITM attacks can be implemented on different layers of the TCP/IP network stack, e.g. DNS cache poisoning on the application layer, ICMP redirecting on the internet layer or ARP spoofing on the data-link layer. This learning unit focuses on the last-mentioned attack, which is also called ARP cache poisoning.

The Address Resolution Protocol (ARP) is responsible for resolving IP addresses to MAC addresses in a local network. When Alice's computer opens an IP-based connection to Bob's computer in the local network, it first has to determine Bob's MAC address, since all messages in the LAN are transmitted via the Ethernet protocol (which is only aware of MAC addresses). If Alice only knows the IP address of Bob's host (e.g. 192.168.0.10), she performs an ARP request: Alice sends a broadcast message to the local network and asks, "Who has the IP address 192.168.0.10?" Bob's computer answers with an ARP reply that contains its IP address and the corresponding MAC address. Alice stores that address mapping in her ARP cache for further communication.

ARP spoofing [10] is basically about sending forged ARP replies: referring to the above example, the attacker repeatedly sends ARP replies to Alice with Bob's IP address and the attacker's own MAC address – the attacker pretends to be Bob. When Alice starts to communicate with Bob, she sends the ARP request and instantly receives one of the forged ARP replies from the attacker. She then mistakenly thinks that the attacker's MAC address belongs to Bob and stores the faked mapping in her ARP cache. Since the attacker performs the same operation for Alice's MAC address, he/she can also manage to trick Bob into believing that the attacker's MAC address is the one of Alice. In consequence, Alice sends all messages intended for Bob to the MAC address of the attacker (and the same applies to Bob's messages to Alice). The attacker just has to store the original MAC addresses of Alice and Bob to be able to relay to the original receiver.
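The cache-poisoning mechanism can be simulated without any real networking. The Host class and all addresses below are invented for this toy model; it only captures the essential point that a victim's ARP cache stores whatever the latest reply claims.

```python
# Toy simulation of ARP cache poisoning: the victim's cache keeps the
# most recent mapping it receives, so an attacker flooding forged
# replies ends up owning the victim's entry for Bob's IP address.

class Host:
    def __init__(self, ip, mac):
        self.ip, self.mac = ip, mac
        self.arp_cache = {}             # ip -> mac, as kept by the OS

    def receive_arp_reply(self, ip, mac):
        # typical stacks accept replies and simply update the cache
        self.arp_cache[ip] = mac

alice = Host("192.168.0.5", "aa:aa:aa:aa:aa:05")
bob_ip, bob_mac = "192.168.0.10", "bb:bb:bb:bb:bb:10"
attacker_mac = "cc:cc:cc:cc:cc:cc"

# Legitimate resolution: Bob answers Alice's broadcast request
alice.receive_arp_reply(bob_ip, bob_mac)
print(alice.arp_cache[bob_ip])      # bb:bb:bb:bb:bb:10

# Poisoning: the attacker repeatedly sends forged replies claiming
# that Bob's IP address maps to the attacker's MAC address
for _ in range(3):
    alice.receive_arp_reply(bob_ip, attacker_mac)

# Alice now sends all frames "to Bob" to the attacker instead
print(alice.arp_cache[bob_ip])      # cc:cc:cc:cc:cc:cc
```

The symmetric poisoning of Bob's cache, plus relaying, turns this into the full Man-in-the-Middle position described above.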

A learning unit on ARP spoofing begins with general information on communication in a local network. It explains the Internet Protocol (IP), ARP and Ethernet, including the relationship between the two addressing schemes (IP and MAC addresses).

Subsequently, the above attack is described in detail and a tool that implements ARP spoofing and a number of additional MITM attacks is presented: Ettercap11. At this point, the learning unit also explains what the attacker can do once he/she has successfully become the Man-in-the-Middle, such as specifying Ettercap filters to manipulate the message stream.

11 See <http://ettercap.sourceforge.net/>.

The hands-on exercise of this chapter asks the student to perform two different tasks. The first one is the same as described in the exercise on packet sniffing above: to monitor the network traffic, gain FTP credentials and steal a private file from Bob's FTP server. The training environment is also set up similarly to the prior scenario. The difference is that this time the team of three virtual machines is connected through a virtual switch (instead of a hub), so that capturing the traffic with Wireshark would not reveal the messages between Alice and Bob. Again, the student has to prove the successful attack by entering the content of the secret file in the tutoring interface.

The second (optional) task is to apply a filter to the traffic and replace all images in the transmitted HTML content with an image from the attacker's host (which would then be displayed in Alice's browser).

This kind of attack still works and is dangerous in many currently deployed local network installations. The only way to protect oneself against ARP spoofing is the usage of SSL with careful verification of the host's certificate, which is explained in the conclusion of the learning unit.

A future enhancement of the practical exercise on ARP spoofing would be the interception of an SSL-secured channel: Ettercap also allows a more sophisticated MITM attack including the on-the-fly generation of faked SSL certificates, which are presented to the victims instead of the original ones. The Man-in-the-Middle can then decrypt and re-encrypt the SSL traffic when relaying the messages.


Outlook and Conclusion

The Tele-Lab system has been developed in order to address the particular challenges and needs of IT security training and IT security laboratory settings. First of all, it is essential for an IT security course to be able to provide real hands-on experience to the learners, using the necessary systems and contemporary IT security tools. For this, the use of virtual machines is an obvious approach in order to, on the one hand, deliver realistic hands-on exercises to the learners and, on the other hand, isolate such exercises from the "real" network infrastructure of the training provider.

The continuously increasing importance of IT security, as presented every day in the mass media, and the very serious negative repercussions it can bring nowadays, push for more awareness and a more imperative need for IT security knowledge and practical skills. Academic institutions and training providers need to offer training that is state of the art; however, constructing an IT security training environment (i.e., a computer laboratory devoted to IT security training) requires knowledge, a considerable upfront investment for acquisition, and costs for administration and maintenance, and poses risks when there are omissions in properly insulating such physical laboratories from the rest of the network infrastructure. Tele-Lab mitigates those difficulties by providing a considerably cheaper solution that requires nearly no effort at all for maintenance and administration.

More importantly, the continuous evolution of the issue of IT security (which evolves in parallel with, and is intertwined with, all innovations in ICT) demands constant updating of the curriculum with new learning units, or updating of existing learning units with new complicating factors. Although Tele-Lab provides the facilities for the easy addition of new learning units and exercises, the big feat of keeping the knowledge base up to date can be achieved through collaboration of the different institutions that are using Tele-Lab, sharing amongst themselves the new learning units and newly constructed system functionalities. Sharing resources (e.g. virtual machines) in order to even out the system's workload is also a valuable outcome of such cooperation. In the example of the project consortium mentioned in sections 2 and 3, such arrangements have already been put in place, and the consortium partners share knowledge, development tasks, functionalities, new curriculum content and resources.

It is a challenge to prove that Tele-Lab, in combination with such a collaborative and evolving model of cooperation among networks of institutions, can deliver an innovative and constantly updated course of high standards that addresses the difficulties faced in modern IT security training.

References

[1] C. Border. The development and deployment of a multi-user, remote access virtualization system for networking, security, and system administration classes. SIGCSE Bulletin, 39(1):576–580, 2007.
[2] J. Hu, D. Cordel, and C. Meinel. A Virtual Machine Architecture for Creating IT-Security Laboratories. Technical report, Hasso-Plattner-Institut, 2006.
[3] J. Hu and C. Meinel. Tele-Lab IT-Security on CD: Portable, reliable and safe IT security training. Computers & Security, 23:282–289, 2004.
[4] J. Hu, M. Schmitt, C. Willems, and C. Meinel. A tutoring system for IT-Security. In Proceedings of the 3rd World Conference in Information Security Education, pages 51–60, Monterey, USA, 2003.
[5] C. Willems and C. Meinel. Awareness Creation mit Tele-Lab IT-Security: Praktisches Sicherheitstraining im virtuellen Labor am Beispiel Trojanischer Pferde. In Proceedings of Sicherheit 2008, pages 513–532, Saarbrücken, Germany, 2008.
[6] C. Willems and C. Meinel. Tele-Lab IT-Security: an Architecture for an online virtual IT Security Lab. International Journal of Online Engineering (iJOE), X, 2008.
[7] C. Willems and C. Meinel. Practical Network Security Teaching in a Virtual Laboratory. In Proceedings of Security and Management 2011, Las Vegas, USA, 2011 (to appear).
[8] C. Willems, W. Dawoud, T. Klingbeil, and C. Meinel. Security in Tele-Lab – Protecting an Online Virtual Lab for IT Security Training. In Proceedings of ELS'09 (in conjunction with 4th ICITST), IEEE Press, London, UK, 2009.
[9] W. Yurcik and D. Doss. Different approaches in the teaching of information systems security. In Proceedings of the Information Systems Education Conference, pages 32–33, 2001.
[10] S. Whalen. An Introduction to ARP Spoofing. Online: <http://www.rootsecure.net/content/downloads/pdf/arp_spoofing_intro.pdf>.
[11] J. Bersin, C. Howard, K. O'Leonard, and D. Mallon. Learning Management Systems 2009. Technical Report, Bersin & Associates, 2009. Online: <http://www.bersin.com/Lib/Rs/Details.aspx?docid=10339576>.


CEPIS News


CEPIS Projects

Selected CEPIS News

Fiona Fanning

e-Skills and ICT Professionalism Interim Report Now Published

The interim report of the e-Skills and ICT Professionalism project has been published. This project is conducted by CEPIS and the Innovation Value Institute (IVI) on behalf of the European Commission. The synthesis report marks the halfway point of the research, which is due to be completed in 2012, and also signifies the end of phase 1 of the project. CEPIS and IVI aim to provide detailed proposals for a European Framework for ICT Professionalism and a European Training Programme for ICT Managers in the final report.

Phase 1 combined desktop research, analysis, and hundreds of interviews with ICT experts from across Europe, North America and Asia Pacific through the ICT Professionalism Survey. The research analysis so far suggests that the following four key areas act as building blocks for an ICT profession:

- a common body of knowledge
- competences
- certification, standards and qualifications
- professional ethics/codes of conduct

CEPIS would like to thank all of its Members who participated in the ICT Professionalism Survey and who provided essential expert information about their attitudes to structures of professionalism within ICT. We also welcome any further comments.

The European ICT Professionalism Project Interim Report can be downloaded at: <http://www.cepis.org/media/EU_ICT_Prof_interim_report_PublishedVersion1.pdf>.

Scoreboard shows only a Third of World's Top 50 R&D Investors are European

The 2011 EU Industrial R&D Investment Scoreboard, which ranks the world's top 1,400 companies by their R&D investment during 2010, has just been published by the European Commission. Overall, R&D investment by European companies increased by 6.1%, following the post-economic-crisis decrease of 2.6% in 2009. However, US companies reported an even higher rate of R&D investment growth, at 10%, during 2010.

European companies continue to lag behind other global R&D investors, especially since only 15 of the top 50 companies in the world investing in R&D during 2010 are European. Most of the non-EU companies in the top 50 with the largest increases were in the pharmaceutical and ICT sectors, yet among the European companies only four were in ICT. You can access the 2011 EU Industrial R&D Investment Scoreboard at: <http://iri.jrc.ec.europa.eu/research/docs/2011/SB2011.pdf>.

European Commission Proposes €80 Billion Horizon 2020 Programme for Research and Innovation

The European Commission recently announced a new programme for investment in research and innovation called Horizon 2020. Horizon 2020 will bring all EU research and innovation funding together under one programme, and in doing so aims to simplify rules and procedures and greatly reduce the amount of time-consuming bureaucracy associated with funding programmes until now.

Funding will be focused towards three main objectives:

- Support Europe's position as a world leader in science
- Help secure industrial leadership in innovation
- Address major concerns across several themes such as energy efficiency and inclusive, innovative and secure societies

The proposal and overall budget of Horizon 2020 are currently under negotiation with the European Parliament and the Council of Europe, and by January 2014 the first calls for proposals are expected to be launched. Horizon 2020 is the financial instrument of the flagship initiative Innovation Union and forms part of the drive to create new growth and jobs in Europe. To find out more about Horizon 2020, see: <http://ec.europa.eu/research/horizon2020/index_en.cfm?pg=home>.

CEPIS Research shows Gender Imbalance in the IT Profession Risks Europe's Growth Potential

Less than one fifth of European IT professionals are women, according to new research that calls for Europe to urgently redress the gender imbalance. Highly skilled roles and enough human capital to fill these jobs will be vital for the smart growth economy that Europe aspires to create by 2020. Yet a recent European report, as announced in the last issue of CEPIS UPGRADE, by the Council of European Professional Informatics Societies (CEPIS) shows that women represent only 8% of IT professionals in some countries. With few women entering the IT profession as the demand for skilled IT professionals increases, Europe's economic success may be jeopardized.

The research identified and analysed the e-competences of almost 2,000 IT professionals across 28 countries in greater Europe. It presents an up-to-date snapshot of the e-competences held by IT professionals today and shows that, worryingly, less than one fifth of IT professionals in Europe are female. In the European report CEPIS puts forward key recommendations for action, including a call for all countries to urgently redress the gender imbalance and increase the participation of women in IT careers: <http://cepis.org/media/CEPIS_Prof_eComp_Pan_Eu_Report_FINAL_101020111.pdf>.

CEPIS recommends that existing initiatives with a focus on role models and mentoring programmes, such as the European Commission's Shadowing Days, should be replicated and scaled up.

Another means to encourage a better balance would be to provide fiscal incentives for companies that adopt gender equity as part of their organisational culture, hiring practices and career advancement programmes. CEPIS strongly believes that the European Commission has a role to play in continuing to promote a European culture of gender equity in the IT profession. You can read more about the CEPIS Professional e-Competence Project at: <http://www.cepis.org/professionalecompetence>.

