Date posted: 09-Feb-2017
Category: Technology
Uploaded by: mark-burgess
Fault-resilience as a promise theory project
An attempt to understand and formalise it
It is very hard to write down true statements about systems, or to give advice that is not specific to a single use case.
My attempt to make progress:
http://markburgess.org/Faults.pdf
Five key words
• Who are “we”? — it’s subjective
• What does “know” mean? — know it like a friend
• What is “the system”? — where does it start and end?
• What is “working”? — what’s the basis for judgement?
• What is “well”? — can we quantify it?
“How can we know when a system is working well?”
“System” - the modularity misconception
• In a “system”, there are freedoms and constraints in balance.
• Is it one thing, or many components working together?
• NB, this is a question of scale and perspective.
• Microservices?
• Causation becomes entangled - the modularity myth
Things to understand in systems
• What is intended / what is claimed (promised) - fit for purpose
• What is actual / what is measured (assessed) - are we succeeding?
• My goals might be different from yours (subjective)
• Only the origin agent is authoritative about its intent (autonomy)
• How do assessments change with scale and perspective (relativity)
Understanding “What” happens (2000-2003)
• On what scale?
  • Space
  • Time
• Assessments … uncertainty
• Non-deterministic
• Dynamics, but something is missing …
“Well” - dynamically and semantically suitable
• It meets “our” expectations
  • Who is “us”?
• Expectations are based on assumptions
  • Whose assumptions?
• The origin agent is the authoritative source
  • But the receiver is responsible for its own assessment
THINKING IN PROMISES: DESIGNING SYSTEMS FOR COOPERATION
MARK BURGESS
“We” - the subjective (2004-2017)
• Fitness for purpose
• Desired outcome
• Forms the measuring stick for correct outcome
• Define agent
• Define promise
• Define assessment
The basics of promise theory
• An agent can only make a promise about its own behaviours
• Obliging others is an ineffective strategy.
• An agent can only assess others’ promises from its own perspective
• Any reliance (dependency) on another agent invalidates a promise
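The four principles above can be sketched in code. This is a minimal illustrative sketch, not Burgess's formalism or any real promise theory library; the `Agent` and `Promise` classes and their fields are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Promise:
    """A promise is made by one agent about its OWN behaviour, to some scope."""
    promiser: str   # the origin agent making the promise
    body: str       # what is promised, e.g. "respond within 100 ms"
    scope: str      # who the promise is made to

@dataclass
class Agent:
    name: str
    promises: list = field(default_factory=list)
    assessments: dict = field(default_factory=dict)  # this agent's view only

    def promise(self, body, scope):
        # An agent can only promise its own behaviour: promiser is always self.
        p = Promise(promiser=self.name, body=body, scope=scope)
        self.promises.append(p)
        return p

    def assess(self, promise, kept):
        # Assessment is subjective: each agent keeps its own record,
        # and no agent's record is authoritative about another's intent.
        self.assessments[(promise.promiser, promise.body)] = kept

server = Agent("server")
client = Agent("client")
p = server.promise("respond within 100 ms", scope="client")
client.assess(p, kept=True)   # the client's view
server.assess(p, kept=True)   # the server's view may differ from the client's
```

Note that there is no global truth in this sketch: the same promise can be assessed as kept by one agent and not kept by another, which is the point of the subjectivity principle.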
“Working” - faults and resilience (2015-)
• To define a fault, error, flaw, you have to know what was intended
• But at what scale?
• What measuring stick?
  • Agents and promises
• What are the consequences of promises not kept?
The knowledge ladder transforms: time → space
experimentation → customisation → productisation → commoditisation → utilities
Can we still keep our promises if we scale the system?
• Change of spacetime scale
  • Space: bigger
  • Time: faster
• Scaling is more than “make it bigger”
  • Dynamics and semantics
  • Do we trade functionality for size?
How do we know a system is working well?
• Know - a relationship between the system curator and its agents
• System - a collection of agents collaborating by promises
• Working - Promises made and kept
• Well - which statistics about promise-keeping count as good?
If we can’t formalise these, we can’t answer this question
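One way the “Well” bullet could begin to be quantified is as a promise-keeping rate per agent. This is a sketch under invented assumptions: the log format `(promiser, assessor, kept)` is hypothetical, and the result is still relative to the assessors' perspectives, as the subjectivity principle requires.

```python
from collections import defaultdict

# Hypothetical assessment log: (promising agent, assessing agent, kept?)
assessments = [
    ("db", "web", True),
    ("db", "web", False),
    ("db", "monitor", True),
    ("web", "lb", True),
]

def keep_rate(records):
    """Fraction of assessments judged 'kept', per promising agent.
    This aggregates subjective assessments; it is not an objective truth."""
    kept = defaultdict(int)
    total = defaultdict(int)
    for promiser, _assessor, ok in records:
        total[promiser] += 1
        kept[promiser] += ok
    return {agent: kept[agent] / total[agent] for agent in total}

print(keep_rate(assessments))  # db kept 2 of 3 assessed promises, web 1 of 1
```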
@markburgess_osl
Does it scale? — single agents and queues
• Wind-tunnel (dynamics)
• Dimensionless ratios - universal scaling
• 1 dimensional bias: queues
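The “dimensionless ratios” bullet can be made concrete with the textbook M/M/1 queue (a standard model, chosen here for illustration; the slide does not name a specific model). Its behaviour depends only on the dimensionless utilisation ρ = λ/μ, not on the absolute rates, which is what makes wind-tunnel-style scaling comparisons possible.

```python
def mm1_stats(arrival_rate, service_rate):
    """Textbook M/M/1 queue: mean occupancy depends only on rho = lambda/mu."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable: arrivals exceed service capacity")
    mean_in_system = rho / (1 - rho)           # L = rho / (1 - rho)
    mean_time_in_system = mean_in_system / arrival_rate  # W = L / lambda (Little's law)
    return rho, mean_in_system, mean_time_in_system

# Two systems with the same dimensionless ratio behave identically in L:
print(mm1_stats(50, 100))      # rho = 0.5, mean occupancy 1.0
print(mm1_stats(5000, 10000))  # same rho = 0.5, same mean occupancy
```

The second system is 100× faster in absolute terms, yet its queue occupancy is identical: only the ratio matters, which is the universal-scaling point.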
Scaling to optimize…what?
• Microservices / modularity - optimises human knowledge
• Monolith - optimises localisation
• Continuous delivery - optimises convergence
• What costs the most?
• What optimizes certainty?
Repair versus redundancy
Continuous delivery vs fault avoidance
• Rapid repair cycle is the best strategy for temporal continuity
• Repair does not change the bulk scaling assumptions - using time agility instead of bulk for resilience (space —> time)
• Local time trumps space, because space is non-local time (!)