Exmouth House 3–11 Pine Street London EC1R 0JH T +44 20 7832 5850 F +44 20 7832 5853 E [email protected] W www.adelard.com
Assurance Cases: Divide and conquer or divide and fall?
Robin Bloomfield Adelard and City University London LAW 2012 3rd December 2012 [email protected] [email protected]
CSR Building confidence in a computerised world
© ADELARD
Topics
1. Introduction • Safety and security
2. Assurance Cases • Claims, Arguments, Evidence
3. Research into structuring cases • Divide and conquer?
4. Composition/decomposition challenges • Divide and fall, compose and fail?
5. Conclusions and next steps
Protection parameters and viability domains
[Figure: pressure–temperature plot showing the normal operation region, the protection envelope and the plant damage boundary.]
WENRA Safety Objectives for New Nuclear Power Plants
1. O1. Normal operation, abnormal events and prevention of accidents
2. O2. Accidents without core melt
3. O3. Accidents with core melt
4. O4. Independence between all levels of defence-in-depth
5. O5. Safety and security interfaces
   • ensuring that safety measures and security measures are designed and implemented in an integrated manner; synergies between safety and security enhancements should be sought
6. O6. Radiation protection and waste management
7. O7. Leadership and management for safety
SESAMO project
Security and Safety Modelling for embedded systems: 14 companies and 6 research institutes in Europe and the U.S.
http://sesamo-project.eu/
Objectives include:
• joint reasoning about safety and security properties, conflicts and synergies
• a model-based methodology and solutions for addressing safety and security within an integrated process, supported by an effective tool chain
• validation in use cases in multiple industrial domains (e.g. aerospace, energy management, automotive, metropolitan rail and mobile medical)
Computer trading and systemic risks
1. The approaches to systemic risk definition and evaluation.
2. The definition of protection system parameters, risk controls and architecture.
3. The need for trust in computer-based systems.
Nuclear, complex adaptive systems and financial systems perspectives
The Future of Computer Trading in Financial Markets - Foresight Driver Review – DR 26
Computer trading and systemic risk: a nuclear perspective
Medical systems
• Tempo
• Heterogeneous systems
• Patient’s own devices
• Accidental systems
• Ad hoc Apps
• Off label
• Local and global
• Multi-stakeholder
FDA Policy on Infusion Pumps
Health Foundation Report
Safety and security
1. Safety – concerns the damage the system can do to the environment
2. Security – the damage the environment (in a broad sense) does to the system
3. A system that is not secure is not safe • Security-informed safety (Artemis)
4. Responsibility for safety
5. Safe systems have to work as well • So justifying safety and operation
6. Have to deal with both aleatory and epistemic uncertainties
Safety Case
• “a documented body of evidence that provides a convincing and valid argument that a system is adequately safe for a given application in a given environment”
Some Definitions

"A documented body of evidence that provides a convincing and valid argument that a system is adequately safe for a given application in a given environment" — ASCAD Manual, 1998

"A structured argument, supported by a body of evidence, that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given environment" — Def Stan 00-56 issue 4

"A security assurance case uses a structured set of arguments and a corresponding body of evidence to demonstrate that a system satisfies specific claims with respect to its security properties." — BSI Portal, Copyright © Carnegie Mellon University 2005-2007

"A formal presentation of evidence, arguments and assumptions aimed at providing assurance that a system, product or other change to the railway has met its safety requirements and that the safety requirements are adequate." — Yellow Book issue 4

"Software Assurance is the level of confidence that software is free of exploitable vulnerabilities, either intentionally designed as part of the software, or inadvertently created." — Software Assurance Programs Overview, Daniel Martin, Mitre
Structured Safety Case
• “a documented body of evidence that provides a convincing and valid argument that a system is adequately safe for a given application in a given environment”
Claim
Sub-Claim
Argument
Evidence
Types of argument
1. Deterministic or analytical: application of predetermined rules to derive a true/false claim (given some initial assumptions), e.g. formal proof (compliance to specification, safety property), execution time analysis, exhaustive test, single fault criterion
2. Probabilistic: quantitative statistical reasoning, to establish a numerical level, e.g. MTTF, MTTR, reliability testing
3. Qualitative: compliance with rules that have an indirect link to the desired attributes, e.g. compliance with QMS and safety standards, staff skills and experience
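As a minimal sketch of the probabilistic style of argument, assuming illustrative failure data (not from the talk), an MTTF point estimate from reliability testing might look like:

```python
# Sketch: probabilistic argument — MTTF estimated from reliability testing.
# The failure times below are illustrative assumptions, not real data.
failure_times = [120.0, 340.0, 95.0, 410.0, 260.0]  # hours between failures

# Point estimate under a constant-failure-rate (exponential) model
mttf = sum(failure_times) / len(failure_times)
print(f"Estimated MTTF: {mttf:.1f} hours")  # prints: Estimated MTTF: 245.0 hours
```

The claim supported is numerical ("MTTF of at least X hours"), in contrast to the true/false claims of the deterministic style.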
Information or evidence?
100s of documents, grouped according to phases and scope of documents
Layered assurance
Original concepts
[Diagram: Toulmin's original concepts (Claim, Warrant, Backing, Grounds, Assumptions, Evidence) alongside the simplified structure used here (Claim, Argument, Evidence).]
Concept
“a documented body of evidence that provides a convincing and valid argument that a system is adequately dependable for a given application in a given environment”
Claim
Sub-Claim
Argument
Evidence
In practice … the engineering
In practice …
What is a Case for?
1. A structured case has two key roles: communication and reasoning
• communication is an essential function of the case; from this we can build confidence
‒ boundary objects that record the shared understanding between the different stakeholders
• a method for reasoning about dependability (safety, security, reliability, resilience ...) properties of the system
2. Both are required to have systems that are trusted and trustworthy
Assurance process – building confidence, challenging assumptions
1. Captured in assurance management system and in meta-case
2. Challenge and response cycle essential
3. Proof as a social, technical, adversarial process
Reasoning, communication, confidence
Approaches
[Diagram: three approaches to assurance]
• Rule based – compliance with standards
• Goal based – safety properties satisfied; vulnerabilities and hazards mitigated; assurance goals
• Risk informed
Map evidence to claims
• iterative selection of techniques that generate evidence
Selecting techniques and activities to generate evidence
1. Catalogues of techniques • e.g. in IEC 61508 Part3 • P Bishop book
2. Standards leave it as “exercise for the reader” in justifying selection • Supported by case
3. Two useful mappings are • Activities/techniques → role in case • Attributes → techniques
4. Examples tables
Conservative long term prediction
MTTF(T) > e·T / (N·d)
Bishop, P.G., Bloomfield, R.E., "A Conservative Theory for Long-Term Reliability Growth Prediction", IEEE Trans. Reliability, vol. 45, no 4., pp. 550-560, 1996
Confirms every engineer's intuition
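Taking the slide's bound at face value (parameter names as shown on the slide; the values below are illustrative assumptions only), the conservative long-term prediction can be evaluated directly:

```python
import math

# Conservative reliability-growth bound (Bishop & Bloomfield 1996), as stated
# on the slide: MTTF(T) > e*T / (N*d). Parameter values here are assumptions.
T = 10_000.0  # operating time observed so far (hours)
N = 10        # assumed number of residual faults
d = 1.0       # slide's remaining factor, taken as 1 for illustration

mttf_bound = math.e * T / (N * d)
print(f"Conservative MTTF bound: {mttf_bound:.0f} hours")  # ~2718 hours
```

The bound grows with observed operating time T, which is the intuition the slide refers to: the longer the system survives, the longer it can conservatively be predicted to survive.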
Is this enough?
Can we trust the evidence?
Evidence
“a documented body of evidence that
provides a convincing and valid argument that a system is
adequately safe for a given application
in a given environment”
Maturity
1. Safety Cases • As a concept - cross sector, international • Structured safety cases (GSN, CAE)
2. Assurance and safety cases • Defence (ordnance, aviation, command and control, marine) • Civil Aviation (ATM) • Nuclear (claims, arguments, evidence in New Build) • Railways • Medical (FDA and infusion pumps) • Information infrastructure (financial sector) • International Standardisation (IEC/ISO, OMG) • Industrial strength tool ASCE
Not a silver bullet, not a panacea
1. Can be used and abused • Vague claims, weak arguments, questionable evidence • Commoditization, no controlling mind
2. Wide range of practice (too wide range?)
3. Needs technical underpinning for claims, arguments and evidence
Summary
1. Claims Argument Evidence • Simple framework
2. Emphasis on behaviour • But also compliance, design principles
3. Evidence not just information • Threat models
4. Two key roles for Case • Communication and reasoning • Allowing challenge and confidence/doubt
5. Importance of both narrative and graphical structure
Development of the "Fog" approach
1. Part of long term research for UK and Swedish nuclear industry • ASCAD 12 yrs ago
2. Consolidate • after (5–7 yrs)
3. Broad set of issues addressed
4. Process • Balance invention, empirical study, case studies, model
building
5. Publish structuring CAE
Scope of project
1. Structuring cases with Fog • Fog concepts • Presentation and definition of Fog patterns • Architecting cases – selection and application of patterns
2. Challenging safety cases • Safety case process • Empirical vulnerabilities • Confidence • Systematic checklist based review
3. Application, maturity and gap analysis • Applying Fog patterns to industry examples • Review of existing cases
Scope of project
Appendix A – Fog normal form: notation and proofs
Appendix B – Detailed patterns
Appendix C – Model of assessment process
Appendix D – Hazard analysis applied to safety case
Appendix E – Licensing vulnerabilities
Appendix F – Stopping rules
Appendix G – Issues in confidence
Appendix H – Assurance strategies
Appendix I – CCF case study
Appendix J – Cemsis example
Appendix K – Property decomposition and Cogs
Appendix L – Supporting Fog patterns using ASCE
Systematic Structuring of Cases – claims and arguments
Divide and conquer?
Approximate process
1. Developed more formal approach
2. Reviewed practice
3. Case studies and experimentation, consolidation
4. Revisited approach – simplification, clarity, guidance
5. Reviewed practice –more substantial
6. Case studies and examples
7. Tool support experiments
8. Consolidation and guidance
9. Beta release
Cases reviewed
• Smart sensor safety case for the nuclear industry
• CCF case from previous FOG results
• The safety of a computer based medical device
• Generic medical device safety case
• The dependability of an electronic funds transfer system
• Changes to a payments system
• A defence training system
• Safety of changes to a command and control system
• An approach to assessing safety of ordnance
• A weapons safety case
• A case supporting vulnerability testing of an eVoting machine
State of practice
1. Claim expansions and decompositions that appear in real-world cases are often very hard to justify, even informally.
• tendency for practitioners to use extended expansions that cover many issues
2. Architecting cases is a specialised activity.
• Some of the current cases we reviewed would need significant rewriting to use the Fog patterns.
3. State of cases reflects the lack of guidance on claims-argument-evidence to users
• wide variability in cases and large gaps between practice and "best practice"
• things to be fixed other than formality
From influence diagrams to claims
[Figure: an influence diagram (factors such as increased work load and more difficult access, with +/‒ influences) mapped onto a CAE structure of repeated fragments: Claim C, Argument A, sub-claims C11 and C12, warrant W: C11 /\ C12 => C1.]
Influence diagram → CAE structure
Engineering models | Mental models
Developing claim structure – define, design, challenge
Defining the claim
Designing the argument, focusing on arguments for decomposition/split of the claims
Conjunction of sub-Claims
Challenging the validity of the CAE structure developed in the design step
Claims as propositions
Explicit warrants or side-conditions
Logical vs epistemic, verification vs validation
Claim C1
Argument A
sub Claim C11
sub Claim C12
W: C11 /\ C12 => C1
Restricted types of claim expansion - argument
1. Claim expansion language initially unconstrained • CAE
2. Empirically found a small set of constructs expressive enough • “blocks” • new case studies identify a few new blocks • some blocks are used far more than others
3. Easy wins from a few core blocks: • property decomposition • object decomposition • concretion
Range of elements
Guidance and justification
Work to be done developing tables and guidance
Claim C1
Argument A
sub Claim C11
sub Claim C12
Guidance in selecting argument
Table to justify pattern use
Using the “blocks” – definition
1. Uniform representation
2. Textual descriptions
3. Assumptions
4. Semi-formal representation
5. CAE diagram
6. Examples
Supporting tables

Name: Complete Object Decomposition
Applicability: This pattern is used to claim that if a property holds for each component of an object, then it holds for the whole object.
Rule: X = X1 + X2 + ... + Xn
      P(X1) ∧ P(X2) ∧ ... ∧ P(Xn) ⇒ P(X)
Summary: Show that if X is composed of X1, X2, ..., Xn, then property P(X) can be demonstrated by reference to properties P(X1), P(X2), ..., P(Xn)
Assumptions: X is an object, P is a property of X.
      X can be decomposed into sub-objects such that X = X1 + X2 + ... + Xn
      P(X) can be inferred from P(X1), P(X2), ..., P(Xn)
Instantiation: ● The objects are .. ● The property P is …. ● The rule is stating that….
Guidance on satisfying rule:
      ● There will be a need to show that the abstraction of the system X can be split into the n components for demonstrating the property P. There will be issues around the definition of component: it might be necessary to include the links between components, although in some cases the links might be ignored.
      ● The rule is formulated around "object". Note that this will probably include a system and an environment. It might be useful to consider these explicitly.
Questions: ● What is the property? ● What is the object? ● What are the components? ● Why these and no others? What about links between components? ● What else is in the object? What else is it connected to? ● How can the object property be inferred from the component properties?
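The Complete Object Decomposition rule can be sketched as a check over components. This is a hedged illustration with invented component names and an assumed timing property, not Adelard's tooling:

```python
# Complete Object Decomposition: P(X1) /\ ... /\ P(Xn) => P(X).
# Component names and the 50 ms property are illustrative assumptions.
def object_decomposition_holds(components, prop):
    """The whole-object claim P(X), inferred from P(Xi) for every sub-object."""
    return all(prop(x) for x in components)

components = [
    {"name": "sensor",   "wcrt_ms": 12},
    {"name": "logic",    "wcrt_ms": 30},
    {"name": "actuator", "wcrt_ms": 45},
]
# Claim: every component's worst-case response time is under 50 ms
print(object_decomposition_holds(components, lambda x: x["wcrt_ms"] < 50))  # → True
```

As the guidance row warns, the warrant also needs the side-condition that the decomposition is complete, including links between components; the conjunction alone does not establish that.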
Defining the initial claim –tables and “prompt trees”
Define
Scope of the claim
Details of claim
Is level of abstraction sufficient? (what is missing)
What is assumed?
modes of object covered
fault and error handling
Is claim a proposition, a positive property?
Is more concretion needed?
Environment (physical, organisational, threat)
System/service (or service delivered by application of system)
Property see attribute list
Increasing rigour and formality
For common structures the justification is reusable – templates
Meta Case
Claim C1
Argument A
sub Claim C11
sub Claim C12 More formal justification
e.g. link to models, reasoning tools
W: C11 /\ C12 => C1
Guidance in selecting expansion
Table to justify pattern use
Formal definition of elements
1. Fog definitions reviewed • Natural deduction style, PVS
2. Simplified formal definitions in Sal • Defined types • Auxiliary functions • Warrants • Proofs • Proved well formed
3. Need for review and development
Other issues
1. Embed in overall process
2. Visualisation approaches and efficiencies
Overall process
1. Production of a preliminary CAE structure
• Exploration and brainstorming to produce a preliminary claim-argument structure and an initial identification of analysis techniques and evidence.
• Review edges of the overall CAE structure, consider whether it is feasible for the evidence to demonstrate the claims and consider alternative strategies.
2. Interim CAE structure
• Revisit using a "Define, Design and Challenge" approach. Develop detailed CAE structure. Specify justification activities. Define the split between the case and its supporting meta-case.
• Identify evidence and supporting analysis required.
3. Final CAE structure
• Undertake detailed justification, analysis and evidence-production activities, and link to claims structure. Adjust case in the light of results.
Simplifying presentation - scaling
Claim C
Argument A
sub Claim C11
sub Claim C12
W: C11 /\ C12 => C1
Explicit arguments, warrants and their justification
Recap- Fog approach
• Empirically based
• Propositionalisation of claims
• Logical arguments (no disjunctions)
• Un-interpreted functions linking the informal to the formal
• Separation of logical and epistemic, verification and validation
• Tabular guidance, prompt trees
Meta Case
Claim C1
Argument A
sub Claim C11
sub Claim C12 More formal justification
e.g. link to models, reasoning tools
W: C11 /\ C12 => C1
Guidance in selecting expansion
Table to justify pattern use
Challenges and issues
Divide and fall?
Compose and fail?
Challenges and issues
1. Lack of independence in defence in depth
2. Composing hazards and threats
3. Abstraction validity
4. System boundaries
5. “Small” changes – significant or insignificant
6. Claims about time split
7. ALARP and stopping rules
Divide and fall?
Difficulty and correlated failures
Failures of defence in depth barriers not independent
Even when different technologies, organizations, processes
(independence is a special case)
Issue of what is best approach on average and in a particular example
AV probability of detection
Other experimental examples
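A small numerical sketch of the point, using an assumed beta-factor common-cause model with illustrative numbers (not the experimental data referred to above):

```python
# Two defence-in-depth barriers, each failing on demand with probability p.
p = 0.01      # assumed per-barrier failure probability
beta = 0.1    # assumed fraction of failures that are common-cause

p_both_independent = p * p                            # the "divide and conquer" claim
p_both_correlated = beta * p + ((1 - beta) * p) ** 2  # beta-factor model

print(f"independent: {p_both_independent:.6f}")  # 0.000100
print(f"correlated:  {p_both_correlated:.6f}")   # 0.001081 — over 10x worse
```

Even a modest common-cause fraction dominates the product of the individual failure probabilities, which is why composing barrier claims by simple multiplication can fail.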
Hazards and threats
Many cases consider hazard classes separately, issues
1. Asset aggregation • Increase chance of attack • Design basis threat changes
2. Hazardous material combination • Two reagents, separately OK but together explosive • Proximity of person to source increased
3. Assume/guarantee • Hard to make work with high confidence, as abstraction may hide relevant details • Don't know what others might rely on or be vulnerable to • Might have to think of adding ontologies ‒ List of assets, components
Abstraction
Justifying decomposition relies on abstraction and its validation
1. Static integrity analysis • System as built different from architecture
2. Claims about “all modes” • Abstract process into events • Ignoring transitions and pluralities
3. Architecture links • Ignoring glue, attribute dependent
4. Toads and missed opportunities • Abstraction can remove indicators
Toads
Common toads appear to be able to sense an impending earthquake and will flee their colony days before the seismic activity strikes.
The evidence comes from a population of toads which left their breeding colony three days before an earthquake that struck L'Aquila in Italy in 2009.
How toads sensed the quake is unclear, but most breeding pairs and males fled.
They reacted despite the colony being 74km from the quake's epicentre, say biologists in the Journal of Zoology.
http://news.bbc.co.uk/earth/hi/earth_news/newsid_8593000/8593396.stm
Abstraction can remove key indicators
System boundaries
1. Often defined from legal rather than system engineering viewpoint • Responsibilities, liabilities
2. Transport safety • Trains vs cars
3. Training simulator safety • Risk of theatre vs risks of training • Perceived ability to cope with risks impacts risk decision makers • Adaptive behaviour • Values and rationality
System boundaries and underestimating risks
Interdependencies and openness
Adaptation
Systems are socio-technical, some attempted factorisations
safe(people) + safe(equipment) => safe(system)
Fog “block” on object decomposition
Or
socio-technical = socio + technical
socio-technical = socio + (socio + technical) + technical
The human element
1. People as a source of resilience, as a threat and as victims
2. Don’t blame the pilot – but the system
3. “fat fingers”, attacks, market abuse, fraud
Socio-technical, adaptation
1. A socio-technical perspective: • Define a range of vulnerabilities (narrow scope, misaligned responsibilities, undifferentiated users, adaptation, automation biases) and develop arguments of how they might be addressed.
2. Develop alignment of incentives so system evolution is shaped
Performative models – markets and security
1. In the past, models used to design and assess risks did not affect the threats or challenges that the system faces • Modelling severe weather does not change the wind speed in London ‒ except perhaps via a slow political process and people's behaviour
2. In the financial and security area this is not the case • Models can be performative, having a direct and unforeseen impact on the markets/systems and how they fail • Donald MacKenzie, An Engine, Not a Camera: How Financial Models Shape Markets and also Do Economists Make Markets?: On the Performativity of Economics
3. Cases may inform an adversary • a potential impact on the threats a system faces
Small changes and monotonicity
1. Appeals to monotonicity arguments • Just a small change, one more makes no difference, cannot make it worse • Non-linear responses, cliff edges
2. Examples • Cascades and critical infrastructures • Advisory systems • Adding another device (Infusion pump) • Small change to procedures (ATM, video) • Stock market price changes and other complex systems
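A toy queueing example of a cliff edge (an assumed M/M/1-style latency model, not from the talk): each "small" step in load looks harmless until the response blows up.

```python
# Mean delay in a simple M/M/1-style model: 1 / (capacity - load).
# The non-linear response near capacity defeats "one more makes no difference".
def mean_delay(load, capacity=100.0):
    return 1.0 / (capacity - load) if load < capacity else float("inf")

for load in (50, 90, 99, 99.9):
    print(f"load {load:5}: delay {mean_delay(load):.3f}")
```

The same +0.9 increment that is negligible at load 50 multiplies the delay roughly tenfold near capacity, which is why "just a small change" arguments need the system's response curve, not only the size of the change.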
Difficulty in predicting bubbles
[Excerpt reproduced from Physics Procedia 00 (2010) 1–17]

Figure 2: Time series of observed prices in USD of "NYMEX Light Sweet Crude, Contract 1" from the Energy Information Administration of the U.S. Government (see http://www.eia.doe.gov/emeu/international/Crude2.xls) and simple LPPL fits (see text for explanation). The oil price time series was scanned in multiple windows defined by (t1, t2), where t1 ranged from 1 April 2003 to 2 January 2008 in steps of approximately 3 months (see text) and t2 was fixed to 23 May 2008. Shaded box shows the 80% confidence interval of fit parameter tc for fits with tc less than six months beyond t2. Also shown are dates of our original analysis in June 2008 and the actual observed peak oil price on 3 July 2008. Reproduced from [24].

half of 2009. We thus successfully predicted time windows for this crash in advance with the same methods used to successfully predict the peak in mid-2006 of the US housing bubble [26] and the peak in July 2008 of the global oil bubble [24]. The more recent bubble in the Chinese indexes was detected and its end or change of regime was predicted independently by two groups with similar results, showing that the model has been well-documented and can be replicated by industrial practitioners.

In [27], we presented a thorough post-crash analysis of this 2008-2009 Chinese bubble and, also, the previous 2005-2007 Chinese bubble in Ref. [28]. This publication also documents another original forecast of the 2005-2007 bubble (though there was not a publication on that, except a public announcement at a hedge-fund conference in Stockholm in October 2007). Also, it clearly lays out some of our many technical methods used in testing our analyses and forecasts of bubbles: the search and fitting method of the LPPL model itself, Lomb periodograms of residuals to further identify the log-periodic oscillation frequencies, (H, q)-derivatives [29, 30] and, most recently, unit root tests of the residuals to confirm the Ornstein-Uhlenbeck property of their stationarity [17]. Here, we reproduce the main figure documenting the advance prediction which included the peak of the bubble on 4 August 2009 in its 5-95% confidence limits. The curves are fitted by a first order Landau model:

log p(t) = A + B(t_c − t)^m + C(t_c − t)^m cos(ω ln(t_c − t) − φ)   (2)

4. Detection of Rebounds using Pattern Recognition Method

Until now, we have focused our attention on bubbles, their peaks and the crashes that often follow. We have argued that bubbles develop due to positive feedbacks pushing the price upward towards an instability at which the bubble ends and the price may crash.

But positive feedbacks do not need to be active just for price run-ups, i.e., for exuberant "bullish" market phases. Positive feedbacks may also be at work during "bearish" market regimes, when pessimism replaces optimism and investors sell in a downward spiral. Zhou and Sornette (2003) have developed a generalized Weierstrass LPPL formulation to model the 5 drops and their successive rebounds observed in the US S&P 500 index during the period from 2000 to 2002 [31].

Here, we go further and develop a methodology that combines a LPPL model of accelerating price drops, termed a "negative bubble," and a pattern recognition method in order to diagnose the critical time tc of the rebound (the antithesis of the crash for a "positive bubble"). A negative bubble is modeled with the same tools as a positive bubble, that is, with the power law expression (1). But the essential difference is that the coefficient B is positive for a negative bubble (while it is negative for a normal positive bubble, as discussed above). The exponent m obeys the same condition 0 < m < 1 as for the positive bubbles. The positivity of B together with the condition 0 < m < 1 implies that the average log-price trajectory exhibits a downward curvature, expressing a faster-than exponential downward acceleration. In other words, the log-price trajectory associated with a negative bubble is the upside-down mirror image of the log-price trajectory of a positive bubble. Additional log-periodic oscillations are added to the LPPL model, which also account for the competition between value investors and trend followers.

We adapt the pattern recognition method of [32] to generate predictions of rebound times in financial markets. A similar method has been developed by Sornette and Zhou (2006) to combine
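The LPPL equation in the excerpt can be evaluated directly; the parameter values below are illustrative assumptions, not a fit to any of the series discussed:

```python
import math

# First-order Landau / LPPL model, Eq. (2) in the excerpt:
#   log p(t) = A + B*(tc - t)**m + C*(tc - t)**m * cos(w*ln(tc - t) - phi)
# Only defined for t < tc (before the critical time).
def lppl_log_price(t, tc, A, B, C, m, w, phi):
    dt = tc - t
    return A + B * dt**m + C * dt**m * math.cos(w * math.log(dt) - phi)

# Illustrative "positive bubble" parameters (B < 0, 0 < m < 1)
val = lppl_log_price(t=90.0, tc=100.0, A=5.0, B=-0.5, C=0.05, m=0.5, w=6.0, phi=0.0)
print(round(val, 4))
```

With B < 0 and 0 < m < 1 the trend term rises faster than exponentially as t approaches tc, with log-periodic oscillations superimposed, which is the signature the fitting procedure searches for.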
Medical systems
• Tempo
• Heterogeneous systems
• Patient’s own devices
• Accidental systems
• Ad hoc Apps
• Off label
• Local and global
• Multi-stakeholder
Time
1. Typical Case might split on “ok now” and “ok in the future”
2. Arrive at future in small steps • Each one seems rational at the time • Longevity – threat models change • Overall goals may change
3. Abstract processes into events • (see abstraction issues)
4. Abstraction hides connectivity • Maintenance and modification ‒ Might break independence assumptions ‒ Supply chain risks
5. Evidence degrades ‒ Risks of refurbishment ‒ Implicit safety information in design
6. Systems evolution vs design • Align incentives, economics • Markets drive out resilience
How does Fog help?
1. Emphasis on clarity of claims
2. Design of Fog separates • logical arguments and epistemic arguments ‒ recursive
• provides emphasis on validity of models, challenge ‒ scope and boundaries ‒ design basis threats ‒ abstraction ‒ adaptation ‒ monotonicity
3. Restricted set of argument structures • means can identify issues • and provide guidance and challenges
Summary and conclusions
Communication and reasoning
1. Structured justification has two roles:
• communication is an essential function of the case; from this we can build confidence
‒ boundary objects that record the shared understanding between the different stakeholders
• a method for reasoning about dependability (safety, security, reliability, resilience ...) properties of the system
2. Both are required to have systems that are trusted and trustworthy
State of practice
1. Many examples of composed cases • Submarines, aircraft, industrial plant
2. Vulnerabilities • Vague claims, weak arguments, questionable evidence • Commoditization, no controlling mind • Wide range of practice, lack of method and guidance • Unclear design basis for threats • Even when the arguments are clear, what about the system?
Rigorous engineering and argumentation models
[Figure repeated from "From influence diagrams to claims": an influence diagram (increased work load, more difficult access, +/‒ influences) mapped onto a CAE structure of repeated fragments (Claim C, Argument A, sub-claims C11 and C12, warrant W: C11 /\ C12 => C1); engineering models and mental models.]
Recap- Fog approach
• Propositionalisation of claims
• Logical arguments (no disjunctions)
• Un-interpreted functions linking the informal to the formal
• Separation of logical and epistemic, verification and validation
• Tabular guidance, prompt trees
Meta Case
Claim C1
Argument A
sub Claim C11
sub Claim C12 More formal justification
e.g. link to models, reasoning tools
W: C11 /\ C12 => C1
Guidance in selecting expansion
Table to justify pattern use
Some claims about approach
1. Improved rigour • Formality to underpin semi-formal approaches • Formality – details do matter • Phased rigour – provided by formal basis, to both claims and arguments
2. Sufficiently expressive • Generic “blocks” to capture 80% of structures • Combined into larger blocks, idioms and templates • Extensible
3. Intrinsically multi-model • Integrative • Narrative continues to be important
More claims
1. Challenge intrinsic to approach • Tables and guidance • Instantiation and proof • Supporting process – define, design, challenge
2. Epistemic doubts central • Vulnerabilities to validity ‒ abstraction, lack of independence in defence in depth,
composing hazards, system boundaries, “small” changes and monotonicity
3. Informal notations should be clear and more efficient • Reuse • Lingua-franca
More claims
1. Deployable • Adaptation path • Developing efficient tool support • Defining workflow • Scalable (Information management of real cases)
2. Prototype tool support • “blocks” and templates supported • Support addition of new “blocks” and proof well formed (in
SAL)
Conclusion
1. Need for methodology for CAE • Rigorous cases, efficiency, new domains, critical systems
2. Fog a step forward • Empirically justified • Formally based • Innovation in combining formal and informal parts • Challenge inherent in approach • Integrative (to models and other justifications) • Composition vulnerabilities addressed from ‒ Limited types, guidance and separation of logical and epistemic validity • Review and experimentation
Next steps
1. Consolidate • Set of blocks, semantics, guidance and selection, naming of "blocks" and approach, Fog?
2. Develop blocks, phrases and templates • For specific applications, domains
3. Tool support • For experimentation, templates
4. Publication plans • Part of security informed safety justification framework • Expose Fog to beta testers early 2013 (need guidance, tool support)
5. Teaching material • City MSc Information Security and Resilience, Assurance Case module
(Feb 2013) • Adelard courses on Assurance Cases for medical devices 2013