
Measuring black boxes

Description:
A short presentation (20 minutes) I gave to an internal audience on the use of attack surface and complexity / coupling metrics in analysing system architectures.
Transcript
Page 1: Measuring black boxes


Measuring Black Boxes: Practical Architecture Analysis

Internal Presentation, September 2013, V1
Phil Huggins

Page 2: Measuring black boxes


Who am I?

Security Architect for large delivery programmes:
- Multiple projects
- Challenging stakeholders
- Large, complex systems
- Multi-year delivery
- 100+ people customer delivery teams
- 200+ people supplier delivery teams
- Security mattered

Programmes:
- Government
- Commercial Airport Terminal New Build
- Smart Metering for a Big 6 UK Energy Supplier
- 7x UK Airports security refresh
- UK Banking ecommerce infrastructure
- Cloud Software as a Service Provider

Page 3: Measuring black boxes


Plugging together black boxes

- Many sub-systems
- Multiple stakeholders and connecting parties
- Multiple COTS products
- Multiple unsupported OSOTS products
- System-specific glue code and configuration
- Business-specific logic and processes
- Shared data models

The SDLC doesn't help with the majority of the vulnerabilities in these systems.

[Diagram labels: Trust Issues, Design Flaws, Software Bugs, Configuration Errors, Most Vulnerabilities]

Page 4: Measuring black boxes


The black box problem

Page 5: Measuring black boxes


Early Attack Surfaces

A measure of attackability NOT of vulnerability. Doesn’t look inside the box.

- Michael Howard at Microsoft (2003)
- Michael Howard & Jeannette Wing at Carnegie Mellon (2003): Relative Attack Surface Quotient
- 20 Attack Vectors (open sockets, weak ACLs, guest accounts, etc.)
- Channels, Process Targets, Data Targets, Process Enablers
- Pretty informal model
- Needs an expert to apply to software not previously analysed

Page 6: Measuring black boxes


Later Attack Surfaces

Pratyusa Manadhata & Jeannette Wing at Carnegie Mellon (2004 – 2010)

Positively correlated the severity of vulnerabilities in MS Security Bulletins with the following indicators:
- Method Privilege
- Method Access Rights
- Channel Protocol
- Channel Access Rights
- Data Item Type
- Data Item Access Rights

Attackers use a Channel to invoke a Method and send or receive a Data Item

Page 7: Measuring black boxes


Method: Damage Potential & Attacker Effort

Methods

Privilege | Value | Access Rights   | Value
System    |   5   | AuthN Admin     |   4
Admin     |   4   | AuthN Priv User |   3
Priv User |   3   | AuthN User      |   2
User      |   1   | UnAuthN         |   1

Attack Surface Contribution = Method Privilege Value / Method Access Rights Value
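
A minimal sketch of this per-method calculation, assuming the weights from the table above are held in simple lookup dictionaries; the names (PRIVILEGE_VALUE, ACCESS_RIGHTS_VALUE, method_contribution) are illustrative, not part of the original work:

    # Numeric weights from the table above (illustrative encoding).
    PRIVILEGE_VALUE = {"System": 5, "Admin": 4, "Priv User": 3, "User": 1}
    ACCESS_RIGHTS_VALUE = {"AuthN Admin": 4, "AuthN Priv User": 3, "AuthN User": 2, "UnAuthN": 1}

    def method_contribution(privilege, access_rights):
        """Attack Surface Contribution = Method Privilege Value / Method Access Rights Value."""
        return PRIVILEGE_VALUE[privilege] / ACCESS_RIGHTS_VALUE[access_rights]

    # An Admin-privilege method reachable by unauthenticated callers contributes 4 / 1 = 4.0
    print(method_contribution("Admin", "UnAuthN"))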

Page 8: Measuring black boxes


Channel: Damage Potential & Attacker Effort

Channel

Protocol                    | Value | Access Rights   | Value
Raw Stack Access            |   5   | AuthN Admin     |   4
Constrained Protocol Access |   4   | AuthN Priv User |   3
Encoded Message Access      |   3   | AuthN User      |   2
Signal Only                 |   1   | UnAuthN         |   1

Attack Surface Contribution = Channel Protocol Value / Channel Access Rights Value

Page 9: Measuring black boxes


Data item: Damage Potential & Attacker Effort

Data Item

Type                        | Value | Access Rights   | Value
Persistent Executable       |   5   | AuthN Admin     |   4
Persistent File / Data Item |   1   | AuthN Priv User |   3
                            |       | AuthN User      |   2
                            |       | UnAuthN         |   1

Attack Surface Contribution = Data Item Type Value / Data Item Access Rights Value

Page 10: Measuring black boxes


In use

Attack Surface Measurement = Sum of all Attack Surface Contributions

Assumes the probability of an exploitable vulnerability in a Method, Channel or Data Item is 1.

Comparing two boxes against each other, or against differently configured versions of themselves, is relatively easy. Beware: similar attack surface scores may hide boxes with a small attack surface but a very high damage potential!

Only considers attackability; no consideration of the impact of an attack.

This is not risk
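
A sketch of how the total could be rolled up, assuming each exposed method, channel and data item has already been assigned its damage-potential and access-rights values from the earlier tables; the data layout and function name are illustrative:

    # Each entry is (damage potential value, access rights value) for one exposed resource.
    methods = [(4, 1), (3, 2)]     # e.g. an Admin method open to UnAuthN; a Priv User method behind AuthN User
    channels = [(4, 2), (3, 3)]    # e.g. Constrained Protocol behind AuthN User; Encoded Message behind AuthN Priv User
    data_items = [(5, 4)]          # e.g. a Persistent Executable behind AuthN Admin

    def attack_surface_measurement(*groups):
        """Sum of all Attack Surface Contributions.
        Assumes the probability of an exploitable vulnerability in each resource is 1."""
        return sum(damage / effort for group in groups for damage, effort in group)

    print(attack_surface_measurement(methods, channels, data_items))  # a relative score, not risk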

Page 11: Measuring black boxes


Coupling and Complexity Problems

Page 12: Measuring black boxes


Coupling and Complexity

"The worst enemy of security is complexity." - Bruce Schneier

"Connectedness and complexity are what cause security disasters." - Marcus Ranum

"Risk is a necessary consequence of dependence." - Dan Geer

"Left to themselves, creative engineers will deliver the most complicated system they think they can debug." - Mike O'Dell

Page 13: Measuring black boxes


Normal Accident Theory (Charles Perrow, 1984)

Coupling: how fast cause and effect propagate through the system.
- Time dependent
- Rigid ordering
- Single path to successful outcome

Complexity: the number of interactions between components.
- Branching
- Feedback loops
- Unplanned sequences of events

Multiple component failures cause systemic cascade failures or accidents.

Accidents are inevitable in complex, tightly-coupled systems.

Page 14: Measuring black boxes


Component Complexity

Also a common solution architecture concern.

Simple Component Complexity:
- Fan-In Complexity: sum of all possible protocol connections to the component
- Fan-Out Complexity: sum of all possible protocol connections from the component
- Total Component Complexity: sum of Fan-In and Fan-Out Complexity

Complex Component Complexity:
- Fan-In Complexity: sum of all Methods offered by the component on each Channel
- Fan-Out Complexity: sum of all Methods used by the component on each Channel
- Total Component Complexity: sum of Fan-In and Fan-Out Complexity
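
A minimal sketch of the "complex" variant, assuming the system is described as a list of channels, each recording which component offers the methods and which component uses them; the component names and data layout are hypothetical:

    from collections import defaultdict

    # Hypothetical description: the consumer uses the listed methods offered by the provider.
    channels = [
        {"consumer": "WWW", "provider": "APP", "methods": ["login", "search", "checkout"]},
        {"consumer": "APP", "provider": "DB", "methods": ["read", "write"]},
    ]

    fan_in = defaultdict(int)    # Methods offered by the component on each Channel
    fan_out = defaultdict(int)   # Methods used by the component on each Channel

    for ch in channels:
        fan_in[ch["provider"]] += len(ch["methods"])
        fan_out[ch["consumer"]] += len(ch["methods"])

    for component in sorted(set(fan_in) | set(fan_out)):
        total = fan_in[component] + fan_out[component]   # Total Component Complexity
        print(component, fan_in[component], fan_out[component], total)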

Page 15: Measuring black boxes


Coupling & Security

In security, closely-coupled is analogous to highly-trusted. I propose that measuring the trust of connections has the following aspects:

Connection

Channel Privilege | Value | Channel Privacy | Value | Channel Access Rights | Value
System            |   5   | Plain Text      |   4   | AuthN Admin           |   4
Admin             |   4   | Binary          |   4   | AuthN Priv User       |   3
Priv User         |   3   | Obfuscated      |   3   | AuthN User            |   2
User              |   1   | Encrypted       |   1   | UnAuthN               |   1

Coupling = Channel Privilege Value x (Channel Privacy Value / Channel Access Rights Value)
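
A small sketch of this coupling value for a single connection, with the table above encoded as lookup dictionaries; the names are illustrative:

    # Weights from the table above (illustrative encoding).
    CHANNEL_PRIVILEGE = {"System": 5, "Admin": 4, "Priv User": 3, "User": 1}
    CHANNEL_PRIVACY = {"Plain Text": 4, "Binary": 4, "Obfuscated": 3, "Encrypted": 1}
    CHANNEL_ACCESS_RIGHTS = {"AuthN Admin": 4, "AuthN Priv User": 3, "AuthN User": 2, "UnAuthN": 1}

    def coupling(privilege, privacy, access_rights):
        """Coupling = Channel Privilege Value x (Channel Privacy Value / Channel Access Rights Value)."""
        return CHANNEL_PRIVILEGE[privilege] * (CHANNEL_PRIVACY[privacy] / CHANNEL_ACCESS_RIGHTS[access_rights])

    # A plain-text, admin-privilege connection open to unauthenticated parties: 4 x (4 / 1) = 16.0
    print(coupling("Admin", "Plain Text", "UnAuthN"))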

Page 16: Measuring black boxes


In Use

The coupling and connectivity of the system can be represented by a graph:

- Components = Nodes
- Connections = Edges
- Number of Methods = Edge Weighting
- Coupling = Edge Weighting

This doesn't need special tooling; you can represent a graph in a matrix (a spreadsheet, for example).

Graphs can be clustered using complexity or coupling to identify structurally related components in a system

Page 17: Measuring black boxes


Design Structure Matrix

A good example of representing system graphs as matrices in systems engineering is the Design Structure Matrix (DSM): http://www.dsmweb.org

These are easy to knock up while you're working to aid your analysis.

Simple complexity DSM example:

         | WWW | APP | DB | MESSAGE1 | MESSAGE2 | Fan-Out | Total
WWW      |  -  |  1  |  1 |    1     |    0     |    3    |   3
APP      |  0  |  -  |  1 |    1     |    0     |    2    |   3
DB       |  0  |  0  |  - |    0     |    0     |    0    |   2
MESSAGE1 |  0  |  0  |  0 |    -     |    1     |    1    |   4
MESSAGE2 |  0  |  0  |  0 |    0     |    -     |    0    |   1
Fan-In   |  0  |  1  |  2 |    3     |    1     |         |
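
A sketch of the same idea held as a plain matrix in code rather than a spreadsheet, where the Fan-In, Fan-Out and Total values fall out as column and row sums; the component names and dependency values here are hypothetical, not the example above:

    # Hypothetical DSM: rows are the depending (calling) component,
    # columns the component depended on; 1 marks a dependency.
    names = ["GW", "SVC", "STORE"]
    dsm = [
        # GW  SVC  STORE
        [0,   1,   0],   # GW depends on SVC
        [0,   0,   1],   # SVC depends on STORE
        [0,   0,   0],   # STORE depends on nothing
    ]

    for i, name in enumerate(names):
        fan_out = sum(dsm[i])                   # row sum
        fan_in = sum(row[i] for row in dsm)     # column sum
        print(f"{name}: fan-in={fan_in} fan-out={fan_out} total={fan_in + fan_out}")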

Page 18: Measuring black boxes


Bringing it Together

Page 19: Measuring black boxes


Results

Components can have:
- A relative attack surface measurement
- A relative total component complexity measurement

Connections between components can be relatively weighted by:
- Complexity
- Coupling

These are all indicators you can use to identify high-risk areas of large, complex systems that you can then focus on addressing:
- More testing
- Re-design

This approach previously highlighted an interesting situation: a firewall HA pair between two logical networks was routing a closely-coupled application protocol connection with a high level of privilege between two components; it was effectively useless as a security control and was removed.

Page 20: Measuring black boxes


http://blog.blackswansecurity.com

