Post on 29-Dec-2015
transcript
CS363 Week 8 - Wednesday
Last time
What did we talk about last time?
Authentication
Challenge response
Biometrics
Started the Bell-LaPadula model
Questions?
Project 2
Security Presentation: Yuki Gage
Bell-LaPadula Model
Bell-LaPadula overview
Confidentiality-based access control system
Military-style classifications
Uses a linear clearance hierarchy
All information is on a need-to-know basis
It uses clearance (or sensitivity) levels as well as project-specific compartments
Unclassified
Restricted
Confidential
Secret
Top Secret
Security clearances
Both subjects (users) and objects (files) have security clearances
Below are the clearances arranged in a hierarchy
Clearance Levels    Sample Subjects     Sample Objects
Top Secret (TS)     Tamara, Thomas      Personnel Files
Secret (S)          Sally, Samuel       E-mail Files
Confidential (C)    Claire, Clarence    Activity Log Files
Restricted (R)      Rachel, Riley       Telephone List Files
Unclassified (UC)   Ulaley, Ursula      Address of Headquarters
Simple security condition
Let level(O) be the clearance level of object O
Let level(S) be the clearance level of subject S
The simple security condition states that S can read O if and only if level(O) ≤ level(S) and S has discretionary read access to O
In short, you can only read down
Example?
In a few slides, we will expand the simple security condition to use the broader concept of a security level
*-Property
The *-property states that S can write O if and only if level(S) ≤ level(O) and S has discretionary write access to O
In short, you can only write up
Example?
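The two rules above can be sketched as a pair of comparisons over the clearance hierarchy from the earlier slide. This is an illustrative sketch only (the function and dictionary names are made up, and the discretionary-access check is omitted for brevity):

```python
# Clearance hierarchy from the slides, encoded as ordered integers.
LEVELS = {"UC": 0, "R": 1, "C": 2, "S": 3, "TS": 4}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security condition: level(O) <= level(S), i.e. read down only."""
    return LEVELS[object_level] <= LEVELS[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property: level(S) <= level(O), i.e. write up only."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# Sally (Secret) may read the Confidential activity log, but not write to it,
# since writing down could leak Secret information into a Confidential file.
print(can_read("S", "C"))   # True
print(can_write("S", "C"))  # False
```

Note that a real implementation would also consult the discretionary access control matrix, as both rules require.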
Basic security theorem
Assume your system starts in a secure initial state
Let T be all the possible state transformations
If every element in T preserves the simple security condition and the *-property, every reachable state is secure
This is sort of a stupid theorem, because we define “secure” to mean a system that preserves the security condition and the *-property
Adding compartments
We add compartments such as NUC = Non-Union Countries, EUR = Europe, and US = United States
The possible sets of compartments are: {NUC} {EUR} {US} {NUC, EUR} {NUC, US} {EUR, US} {NUC, EUR, US}
Put a clearance level with a compartment set and you get a security level
The literature does not always agree on terminology
Resulting lattice
The subset relationship induces a lattice:

              {NUC, EUR, US}
      {NUC, EUR}  {NUC, US}  {EUR, US}
         {NUC}      {EUR}      {US}
Updated properties
Let L be a clearance level and C be a category
Instead of talking about level(O) ≤ level(S), we say that security level (L, C) dominates security level (L′, C′) if and only if L′ ≤ L and C′ ⊆ C
Simple security now requires (LS, CS) to dominate (LO, CO) and S to have read access
*-property now requires (LO, CO) to dominate (LS, CS) and S to have write access
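The dominance relation maps naturally onto Python's set operations, since C′ ⊆ C is just a subset test. A minimal sketch, with hypothetical names and integer levels standing in for the clearance hierarchy:

```python
def dominates(level_a: int, cats_a: set, level_b: int, cats_b: set) -> bool:
    """(L, C) dominates (L', C') iff L' <= L and C' is a subset of C."""
    return level_b <= level_a and cats_b <= cats_a  # <= on sets is subset

# Subject at (Secret = 3, {NUC, EUR}); objects at Confidential = 2.
print(dominates(3, {"NUC", "EUR"}, 2, {"NUC"}))  # True: read allowed
print(dominates(3, {"NUC", "EUR"}, 2, {"US"}))   # False: US not in subject's categories
```

Note that dominance is only a partial order: two levels with incomparable category sets do not dominate each other in either direction, which is exactly why the compartments form a lattice rather than a line.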
Problems?
Clark-Wilson Model
Clark-Wilson model
Commercial model that focuses on transactions
Just like a bank, we want certain conditions to hold before a transaction and the same conditions to hold after
If conditions hold in both cases, we call the system consistent
Example:
D is the amount of money deposited today
W is the amount of money withdrawn today
YB is the amount of money in all accounts at the end of business yesterday
TB is the amount of money currently in all accounts
Thus, D + YB – W = TB
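The bank constraint above is exactly the kind of check an integrity verification procedure would run. A tiny sketch (function name and figures are hypothetical):

```python
def consistent(D: int, W: int, YB: int, TB: int) -> bool:
    """Integrity constraint: deposits plus yesterday's balance,
    minus withdrawals, must equal today's total balance."""
    return D + YB - W == TB

# $500 deposited, $200 withdrawn, $1000 on hand yesterday -> $1300 expected.
print(consistent(D=500, W=200, YB=1000, TB=1300))  # True: consistent state
print(consistent(D=500, W=200, YB=1000, TB=1400))  # False: money appeared from nowhere
```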
Clark-Wilson definitions
Data that have to follow integrity controls are called constrained data items, or CDIs
The rest of the data items are unconstrained data items or UDIs
Integrity constraints (like the bank transaction rule) constrain the values of the CDIs
Two kinds of procedures:
Integrity verification procedures (IVPs) test that the CDIs conform to the integrity constraints
Transformation procedures (TPs) change the data in the system from one valid state to another
Clark-Wilson rules
Clark-Wilson has a system of 9 rules designed to protect the integrity of the system
There are five certification rules that test to see if the system is in a valid state
There are four enforcement rules that give requirements for the system
Certification Rules 1 and 2
CR1: When any IVP is run, it must ensure that all CDIs are in a valid state
CR2: For some associated set of CDIs, a TP must transform those CDIs from one valid state into a (possibly different) valid state
By inference, a TP is only certified to work on a particular set of CDIs
Enforcement Rules 1 and 2
ER1: The system must maintain the certified relations and must ensure that only TPs certified to run on a CDI manipulate that CDI
ER2: The system must associate a user with each TP and set of CDIs. The TP may access those CDIs on behalf of the associated user. If the user is not associated with a particular TP and CDI, then the TP cannot access that CDI on behalf of that user.
Thus, a user is only allowed to use certain TPs on certain CDIs
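In practice, ER1 and ER2 reduce to lookups against tables of certified relations before any TP is allowed to run. A sketch under hypothetical names (the TP, CDI, and user names are invented for illustration):

```python
# ER1: which CDIs each TP is certified to manipulate.
certified_tps = {"post_deposit": {"checking", "ledger"}}

# ER2: which (user, TP, CDI) triples the system has associated.
allowed_triples = {("alice", "post_deposit", "checking")}

def may_run(user: str, tp: str, cdi: str) -> bool:
    """Allow a TP invocation only if both the certification (ER1)
    and the user association (ER2) are on record."""
    return cdi in certified_tps.get(tp, set()) and (user, tp, cdi) in allowed_triples

print(may_run("alice", "post_deposit", "checking"))  # True
print(may_run("bob", "post_deposit", "checking"))    # False: no triple for bob
```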
Certification Rule 3 and Enforcement Rule 3
CR3: The allowed relations must meet the requirements imposed by the principle of separation of duty
ER3: The system must authenticate each user attempting to execute a TP
In theory, this means that users don't necessarily have to log on if they are not going to interact with CDIs
Certification Rules 4 and 5
CR4: All TPs must append enough information to reconstruct the operation to an append-only CDI
Logging operations
CR5: Any TP that takes a UDI as input may perform only valid transformations, or no transformations, for all possible values of the UDI. The transformation either rejects the UDI or transforms it into a CDI
Gives a rule for bringing new information into the integrity system
Enforcement Rule 4
ER4: Only the certifier of a TP may change the list of entities associated with that TP. No certifier of a TP, or of any entity associated with that TP, may ever have execute permission with respect to that entity.
Separation of duties
Clark-Wilson summary
Designed close to real commercial situations
No rigid multilevel scheme
Enforces separation of duty
Certification and enforcement are separated
Enforcement in a system depends simply on following given rules
Certification of a system is difficult to determine
Chinese Wall Model
Chinese Wall overview
The Chinese Wall model respects both confidentiality and integrity
It's very important in business situations where there are conflict of interest issues
Real systems, including British law, have policies similar to the Chinese Wall model
Most discussions around the Chinese Wall model are couched in business terms
Chinese Wall definitions
We can imagine the Chinese Wall model as a policy controlling access in a database
The objects of the database are items of information relating to a company
A company dataset (CD) contains objects related to a single company
A conflict of interest (COI) class contains the datasets of companies in competition
Let COI(O) be the COI class containing object O
Let CD(O) be the CD that contains object O
We assume that each object belongs to exactly one COI class
COI Examples
Bank COI Class: Bank of America, Citibank, Bank of the West
Gasoline Company COI Class: Shell Oil, Standard Oil, Union '76, ARCO
CW-Simple Security Condition
Let PR(S) be the set of objects that S has read
Subject S can read O if and only if any of the following is true:
1. There is an object O′ such that S has accessed O′ and CD(O′) = CD(O)
2. For all objects O′, O′ ∈ PR(S) ⟹ COI(O′) ≠ COI(O)
3. O is a sanitized object
Give examples of objects that can and cannot be read
CW-*-Property
Subject S may write to an object O if and only if both of the following conditions hold:
1. The CW-simple security condition permits S to read O
2. For all unsanitized objects O′, S can read O′ ⟹ CD(O′) = CD(O)
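The CW-simple security condition is history-based, so a sketch needs to track what a subject has already read. This is an illustrative toy only; the object names reuse the COI examples above and the function names are invented:

```python
# Per-object lookups: which COI class and which company dataset each object is in.
COI = {"BofA": "banks", "Citibank": "banks", "Shell": "oil"}
CD = {"BofA": "BofA", "Citibank": "Citibank", "Shell": "Shell"}

def cw_can_read(pr: set, obj: str, sanitized: bool = False) -> bool:
    """CW-simple security condition; pr is PR(S), the set of objects S has read."""
    if sanitized:                                # condition 3: sanitized objects are free
        return True
    if any(CD[o] == CD[obj] for o in pr):        # condition 1: same company dataset
        return True
    return all(COI[o] != COI[obj] for o in pr)   # condition 2: no conflict of interest

history = {"BofA"}  # S has already read Bank of America data
print(cw_can_read(history, "Shell"))     # True: an oil company is a different COI class
print(cw_can_read(history, "Citibank"))  # False: a competing bank is walled off
```

The key design point is that access rights shrink over time: before reading anything, S could read either bank, but the first read builds the wall.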
Biba Model
Biba overview
Integrity-based access control system
Uses integrity levels, similar to the clearance levels of Bell-LaPadula
Precisely the dual of the Bell-LaPadula model
That is, we can only read up and write down
Note that integrity levels are intended only to indicate integrity, not confidentiality
They are actually a measure of accuracy or reliability
Formal rules
S is the set of subjects and O is the set of objects
Integrity levels are ordered
i(s) and i(o) give the integrity levels of s and o, respectively
Rules:
1. s ∈ S can read o ∈ O if and only if i(s) ≤ i(o)
2. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s)
3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1)
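Since Biba is the dual of Bell-LaPadula, its three rules are just the earlier comparisons with the inequalities flipped. A minimal sketch with hypothetical names and integer integrity levels:

```python
def biba_read(i_s: int, i_o: int) -> bool:
    """Rule 1: read up only -- a subject may only read equally or more reliable data."""
    return i_s <= i_o

def biba_write(i_s: int, i_o: int) -> bool:
    """Rule 2: write down only -- a subject may not taint more reliable data."""
    return i_o <= i_s

def biba_execute(i_s1: int, i_s2: int) -> bool:
    """Rule 3: a subject may only invoke subjects of equal or lower integrity."""
    return i_s2 <= i_s1

# A low-integrity subject (1) may read a high-integrity object (3)...
print(biba_read(1, 3))   # True
# ...but must not write to it, since that could corrupt reliable data.
print(biba_write(1, 3))  # False
```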
Extensions
Rules 1 and 2 imply that, if both read and write are allowed, i(s) = i(o)
By adding the idea of integrity compartments and domination, we can get the full dual of the Bell-LaPadula lattice framework
Real systems (for example the LOCUS operating system) usually have a command like run-untrusted
That way, users have to acknowledge that a risk is being taken
What if you used the same levels for integrity AND security? Could you implement both Biba and Bell-LaPadula on the same system?
Theoretical Limitations on Access Control
Determining security
How do we know if something is secure?
We define our security policy using our access control matrix
We say that a right is leaked if it is added to an element of the access control matrix that doesn't already have it
A system is secure if there is no way rights can be leaked
Is there an algorithm to determine if a system is secure?
Mono-operational systems
In a mono-operational system, each command consists of a single primitive command:
Create subject s
Create object o
Enter r into a[s,o]
Delete r from a[s,o]
Destroy subject s
Destroy object o
In this system, we can decide whether a right is leaked by checking every command sequence up to some bounded length k
Proof
Delete and Destroy commands can be ignored
No more than one Create command is needed (in the case that there are no subjects)
Entering rights is the trouble
We start with a set S0 of subjects and O0 of objects
With n generic rights, we might add all n rights to everything before we leak a right
Thus, the maximum length of the command sequence that leaks a right is k ≤ n(|S0|+1)(|O0|+1) + 1
If there are m different commands, how many different command sequences are possible?
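The bound can be made concrete with a small worked example. With m distinct commands, there are at most m^i sequences of each length i, so a brute-force checker would examine at most the sum of m^i for i from 1 to k. The numbers below are hypothetical, chosen only to show how fast this blows up:

```python
# Tiny hypothetical system: n rights, |S0| subjects, |O0| objects, m commands.
n, s0, o0, m = 3, 2, 2, 6

# Maximum length of a leaking command sequence, from the slide.
k = n * (s0 + 1) * (o0 + 1) + 1

# Number of candidate command sequences of length 1 through k.
total = sum(m**length for length in range(1, k + 1))

print(k)      # 28
print(total)  # astronomically large, even for this toy system
```

So leakage is decidable for mono-operational systems, but the naive decision procedure is wildly infeasible, which is the point of the later "bad news" slide.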
Turing machine
A Turing machine is a mathematical model for computation
It consists of a head, an infinitely long tape, a set of possible states, and an alphabet of characters that can be written on the tape
A list of rules says what it should write, whether it should move left or right, and what the next state is, given the current symbol and state
[Diagram: a tape containing 1 0 1 1 1 1 0 0 0 0, with the head in state A]
Turing machine example
3-state, 2-symbol "busy beaver" Turing machine:
Starting state: A

Tape    |    State A       |    State B       |    State C
Symbol  | Write Move Next  | Write Move Next  | Write Move Next
   0    |   1    R    B    |   0    R    C    |   1    L    C
   1    |   1    R   HALT  |   1    R    B    |   1    L    A
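The table above can be run directly with a short simulator. This is a straightforward sketch: the tape is a dictionary defaulting to 0, and each rule is a (write, move, next-state) triple:

```python
# Transition table for the 3-state, 2-symbol busy beaver above.
RULES = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, +1, "HALT"),
    ("B", 0): (0, +1, "C"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "C"), ("C", 1): (1, -1, "A"),
}

tape, pos, state, steps = {}, 0, "A", 0
while state != "HALT":
    write, move, state = RULES[(state, tape.get(pos, 0))]
    tape[pos] = write       # write the new symbol under the head
    pos += move             # move the head left (-1) or right (+1)
    steps += 1

print(steps)               # 14: steps taken before halting
print(sum(tape.values()))  # 6: ones left on the tape
```

This machine halts after 14 steps leaving six 1s, the maximum any 3-state, 2-symbol machine can achieve, which is what makes it a "busy beaver."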
Church-Turing thesis
If an algorithm exists, a Turing machine can perform that algorithm
In essence, a Turing machine is the most powerful model we have of computation
Power, in this sense, means the ability to compute some function, not the speed associated with its computation
Halting problem
Given a Turing machine and input x, does it reach the halt state?
It turns out that this problem is undecidable
That means that there is no algorithm that can be used to determine whether an arbitrary Turing machine will go into an infinite loop
Consequently, there is no algorithm that can take any program and check to see if it goes into an infinite loop
Leaking Undecidable
Simulate a Turing machine
We can simulate a Turing machine using an access control matrix
We map the symbols, states and tape for the Turing machine onto the rights and cells of an access control matrix
Discovering whether or not the right leaks is equivalent to the Turing machine halting with a 1 or a 0
The bad news
Without heavy restrictions on the rules of an access control system, it is impossible to construct an algorithm that will determine if a right leaks
Even for a mono-operational system, the problem might take an infeasible amount of time
But, we don't give up!
There are still lots of ways to model security
Some of them offer more practical results
Upcoming
Next time…
Finish theoretical limitations
Trusted system design elements
Common OS features and flaws
OS assurance and evaluation
Taylor Ryan presents
Reminders
Read Sections 5.4 and 5.5
Keep working on Project 2
Finish Assignment 3