Spring 2011© 2000-2011, Richard A. Stanley
ECE579S/3 #1
ECE579S Computer and Network Security
3: Design Principles, Access Control Mechanisms, & Covert
Channels
Professor Richard A. Stanley, P.E.
Last Time: Digital Signatures Summary
• Combining hashing algorithms and asymmetric cryptography, we can digitally sign a message
• A digitally signed message can, under certain conditions, assure both the integrity of the contents and the authenticity of the sender
• Trust relationships are necessary to extend this concept. Digital certificates can be used within a trust relationship to validate the public key belonging to a user. The most common such system is X.509 v3
X.509 Certificates
• Your web browser is loaded with many root certificates, no matter which browser it is
– For most folks, the browser is the most common client involving signatures
• If you have the root certificate in the chain that issued a user certificate, you can validate the user’s certificate
• Let’s take a look…
Design Basics
• KISS!
• Simplicity
– Don’t make things more complex than they absolutely need to be
• Restriction
– Allow only what is required to get the job done
Design Principles
• Least privilege
• Fail-safe defaults
• Economy of mechanism
• Complete mediation
• Open design
• Separation of privilege
• Least common mechanism
• Psychological acceptability
There is a lot here!
Access Control Model
Subject → Request → Reference Monitor → Object
Access Control Types
• Discretionary: the file owner is in charge
• Mandatory: the system policy is in charge
• One can exist within the other, especially discretionary within a class of mandatory
Access Control Matrix
• A = set of access operations permitted
• S = set of subjects
• O = set of objects

M = (M_so), s ∈ S, o ∈ O, with each M_so ⊆ A
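Concretely, the matrix can be modeled as a nested mapping checked by a reference monitor. A minimal sketch, with invented subjects, objects, and rights:

```python
# Minimal sketch of an access control matrix M: subjects x objects -> subset of A.
# The names (alice, payroll.txt, etc.) are invented for illustration.
A = {"read", "write", "execute"}          # permitted access operations
S = {"alice", "bob"}                      # subjects
O = {"payroll.txt", "backup.sh"}          # objects

# M[s][o] is the set of operations subject s may perform on object o.
M = {s: {o: set() for o in O} for s in S}
M["alice"]["payroll.txt"] = {"read", "write"}
M["bob"]["backup.sh"] = {"read", "execute"}

def check(subject, operation, obj):
    """Reference-monitor style check: every access request is mediated by M."""
    return operation in M.get(subject, {}).get(obj, set())

print(check("alice", "write", "payroll.txt"))  # True
print(check("bob", "write", "payroll.txt"))    # False
```

Real systems rarely store the full matrix; access control lists store its columns and capability lists its rows, but the check is the same lookup.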
Access Control Issues
• Conflicts
• Revocation of privileges
• Copying and amplifying privileges
• OS-specific implementations
• How do we make all this happen?
Security Kernel View
Layers, top to bottom: Applications | Services | OS | OS kernel (security kernel) | Hardware

NB: access control decisions made by the security kernel are far removed from access control decisions made by applications
Important!
• Users should be able to invoke the operating system
• Users must not be able to alter or misuse the operating system
These are competing requirements, generally implemented by use of controlled invocation and status monitoring
How To Implement the Rules?
• Modes of operation
– Must distinguish between
• Computation on behalf of the OS
• Computation on behalf of users
– Processors and OS’s can provide distinct modes:
• Supervisor mode
• User mode
Processes and Threads
• Process is a program in execution
– Contains executable code, data, context
– Has own address space
– Communicates w/other processes through OS
– Context switch between processes expensive
• Thread is a strand of execution within a process
– Threads share the process address space
– Context switching relatively inexpensive
– Shared addressing creates a security vulnerability
Controlled Invocation Revisited
• Predefined supervisor mode operations
– Return to user mode before transferring control to user
• Prevents user from writing directly to memory, etc.
• Distinguish programs from data
– The computer doesn’t know one from the other without help
Interrupts
• Interrupts preempt the current process
– Interrupts have different priorities
– An interrupt points to the entry in the interrupt vector table with the same number as the interrupt
– The interrupt vector points to the memory location of the handler for that interrupt
– Interrupts can themselves be interrupted by interrupts of higher priority
– Security vulnerabilities?
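The dispatch sequence above, and one way it can go wrong, can be modeled in a few lines. This is a toy software model (handler names invented), not real hardware:

```python
# Toy model of interrupt dispatch: interrupt number n indexes entry n in the
# interrupt vector table, which points at the handler for that interrupt.
def divide_error_handler():
    return "handled: divide error"

def keyboard_handler():
    return "handled: keyboard"

# Vector table: interrupt number -> handler (a real table holds addresses).
interrupt_vector_table = {0: divide_error_handler, 1: keyboard_handler}

def raise_interrupt(n):
    """Look up entry n and transfer control to the corresponding handler."""
    return interrupt_vector_table[n]()

print(raise_interrupt(1))      # handled: keyboard

# Security vulnerability: if an attacker can change a table entry,
# the next interrupt transfers control to attacker code.
interrupt_vector_table[1] = lambda: "attacker code runs"
print(raise_interrupt(1))      # attacker code runs
```

This is exactly why the vector table must be writable only in supervisor mode.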
Interrupt Processing
[Figure: interrupt n (raised by an error, user request, I/O needs, etc.) indexes entry n of the interrupt vector table; that vector points to interrupt handler n in memory]
How to Subvert Interrupts?
• To assure security, an interrupt must return the system to the same mode it was in when the interrupt was raised
• What about the following?
– Redirecting the vector output
– Changing the vector table entries
– Altering the priority masking
Hardware Implementations
• 68000 series
– One status bit = two modes
– 16-bit status register has security features
• 80x86 / Pentium
– Two status bits = four privilege levels
– Changeable by only one instruction, which runs in level 0 only
• Not all software implements these features
68000 Protections
• Always boots in supervisor mode
– Allows access to the S bit of the status register
– Once S is set to 0, only interrupts and errors can switch it back to 1
– OS calls executed by interrupt
• Can run in supervisor mode
• Switch S back to user mode after running
• Status register can implement memory segmentation, by knowing where an object or process originated
80x86 Protections
• It’s common knowledge that the 80x86 has a poor security implementation, right?
• WRONG!
• 80x86 provides four protection rings
– Processes can access only what they dominate
– System object data stored in descriptors
– Descriptors accessed by selectors
• Why the bad rep? Until recently, common software ignored these features; everything runs in Ring 0
How Does It Work?
• Descriptors stored in the Descriptor Table
– Contains the descriptor privilege level (DPL)
• Selectors are 16-bit fields containing an index pointer into the Descriptor Table and a requested privilege level (RPL)
– Only the OS can access selectors
• Current privilege level of the running process (CPL) is stored in the code segment register
Selectors and Descriptors
[Figure: a Selector (Index + RPL) selects entry n of the Descriptor Table (entries 0..n); the selected Descriptor carries its DPL]
80x86 *-property
• Outer ring processes can access inner ring services through gates– Gates permit execute-only access to inner ring– No outward calls permitted from within
• Only gates in same ring as current process can be used
• Subroutine calls– CPL changes to level gate points to– On return, CPL restored to that of calling process
A Problem Here?
• Where is the subroutine return data stored?– On the stack
– Is this a good idea?
• No, as the stack is vulnerable if it is in an outer ring
• So, selected stack data is copied to an inner ring• BUT ... If the data is left on the stack in the outer
ring?
Reference Monitor
• Operating Systems
– Manage access to data and resources
– Do not interpret user data (usually)
– Must maintain own integrity, user separation
• OS integrity => separate user/OS space
– File management: logical memory separation
– Memory management: physical separation
Memory Structures
• Segmentation
– Named segments with relative addressing
– Divides data into logical units
– Segments have variable lengths
• Paging
– Divides memory into equal-sized pages
– Relative addressing from page boundary
– Efficient, but page faults a problem
Memory Protection
• Sandboxing
– Confine each process to a memory segment
– Requires fixed address coding, a problem
• Fence registers
– Fence defines top of OS space
– User space defined by offset above the fence value
– Bounds register can provide upper limits
• Tagged architecture
– Each data item tagged with its type
– OS or hardware forbids type violation
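The fence/bounds idea amounts to a simple address comparison on every user access. A minimal sketch (the addresses are invented for illustration):

```python
# Sketch of fence and bounds registers mediating a user memory access.
FENCE = 0x1000   # fence register: top of OS space; user space lies above it
BOUNDS = 0x8000  # bounds register: upper limit of the user's region

def user_access_ok(addr):
    """Permit a user access only between the fence and the bounds register."""
    return FENCE < addr <= BOUNDS

print(user_access_ok(0x0800))  # False - below the fence, inside OS space
print(user_access_ok(0x2000))  # True  - within the user region
print(user_access_ok(0x9000))  # False - above the bounds register
```

In hardware this comparison happens on every memory reference, so a user program simply cannot address OS space.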
Segment Descriptor Word
• Segment ID
• Pointer to object
• Indicator flags
– read
– execute
– write
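The three parts of an SDW can be sketched as a record whose indicator flags are packed as bits. The bit layout here is invented, purely for illustration:

```python
# Sketch of a segment descriptor word: segment ID, pointer to the object,
# and read/write/execute indicator flags packed as bits (layout invented).
READ, WRITE, EXECUTE = 0b100, 0b010, 0b001

def make_sdw(segment_id, pointer, flags):
    return {"id": segment_id, "ptr": pointer, "flags": flags}

def allows(sdw, flag):
    """True if the descriptor's indicator flags include the requested access."""
    return bool(sdw["flags"] & flag)

sdw = make_sdw(segment_id=7, pointer=0x4000, flags=READ | EXECUTE)
print(allows(sdw, READ))   # True
print(allows(sdw, WRITE))  # False
```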
Security Permissions
• Objects
– Kept in the SDW
• Subjects
– Kept in process level table
– Kept in current level table
• Active segment table keeps track of active processes
– Only active processes can access an object
Writing Secure Code
• Topic of a textbook of the same name that is required reading at Microsoft
• Good code doesn’t just happen—it must be planned
• Let’s peek at the table of contents to see what the authors had in mind
Why Are We Here?
• To design and build security into computer systems in an effective manner, right?
• We have studied all the technologies and tools, so nothing can go wrong, right?
• Wrong!
– There are lots of things that can make our jobs harder and more challenging
Secure Development? Why?
• Security is like quality
– You cannot improve the quality of a poorly-manufactured product by inspecting it repeatedly
– You cannot improve the inherent security of software by trying to “bolt it on” after development is completed
• Thus, security has to be planned from the start, which means during development
Some definitions
• Threat: potential occurrence that can have an undesirable effect on the system assets or resources
• Vulnerability: a weakness that makes it possible for a threat to occur
• Risk: the chance that a threat will exploit a vulnerability on an asset you seek to protect
• In general, threats and vulnerabilities are not well-defined at development time (and sometimes, never)
Threats and Vulnerabilities Compared
• Threats are “just there” – you cannot generally remove them by design decisions
• Vulnerabilities occur due to design choices we make along the way
• They are not the same thing!
Vulnerability Assessment
• What is it?
• Why do we care?
• Whose job is it?
• How good a job do we have to do?
• How can we describe vulnerabilities?
– OVAL (Open Vulnerability and Assessment Language)
Balancing Technology and Reality
• Technologists tend to focus on technical solutions to technical problems
• Once the problem is defined, the “goodness” of the definition is rarely questioned
• A crucial issue is whether the problem as defined is actually the one that should be solved
An Example
• Problem as defined by the techie: Client company has sales of $10M, net profit of $1M, and fraud losses of $150K. Thus, problem is to reduce fraud loss
• Possible client view: Need to increase sales by 100% to $20M. Even if fraud losses remain at a constant percentage, this will provide profit of $1.7M, a 70% increase
• How does a technical solution to the first problem help to solve the second? Which does the client value more?
The Three Supermarkets
• Problem: how to cut shrinkage?
• Solutions:– RF tags– Face recognition– Self checkout
Ref: Anderson, 22.2.1
The Real Problem
• Balance risk and reward
– Risk and reward is what business is all about
– Businesses incur risk by being in business
• Much money is spent before any receipts are taken in
• If no one pays, the business goes bankrupt
– Profit is the reward for assuming risk
• How does this coincide with technical problem definition and solution?
Typical Security Problems
• “Just Say No”
• Failure to understand the business
• Focus on problems absent an understanding of the importance of the problem
• Inward- vs. outward-looking
Organizational Problems
• Risk thermostat
• Security/reliability interaction
• Solving the wrong problem
• Incompetent/inexperienced security staff
• Accountability
• Self-fulfilling prophecies
• Blame shared is blame diminished
Developmental Issues
• What problems to solve?
• Focus
• Methodology
– Is the desired output a quality product, or strict adherence to a methodology? Don’t be too quick to decide.
Lipner’s Security Requirements
• Users will not write their own programs
• Program development will not be done on production systems
• A special process is required to move a program from the development to the production system
• That special process must be both controlled and audited
• Managers and auditors must have access to both system state and system logs
Principles of Operation
• These follow from Lipner’s rules
• Separation of duties
– Critical functions broken into steps, where no single individual can perform all needed steps
• Separation of functions
– Development and production systems separated to prevent info leakage from one to the other
• Audit
– Analyze what actually was done, compare to policies, identify inappropriate actions (if any)
– Done by yet another group of individuals, separate from those above
Life Cycle
• Systems have life cycles, just as people do
• The life cycle begins with the statement of specifications
• The life cycle ends when the system is totally phased out
• Systems development must deal with security at all phases of the life cycle
• Minimizing cost over the life cycle is often a goal, and is monumentally hard to measure
Life Cycle Stages
• Conception
– Requirements definition
– Specification development
• Manufacture
– Detailed planning of each activity
– Software development
– Testing
• Deployment
– Initial roll-out
– Continued field support
• Retirement
Assurance and Requirements
• Assurance is confidence that an entity meets its security requirements, based on specific evidence provided by the application of assurance techniques
• Requirement is a statement of system goals that must be satisfied by the design
• Trusted System has been shown to meet well-defined requirements under credible evaluation by a disinterested third party (certification of the evaluator may be required)
More Assurance
• Policy Assurance is evidence establishing that the security requirements in the policy are complete, consistent, and technically sound
• Design Assurance is evidence that a design is sufficient to meet the security policy requirements
• Implementation Assurance is evidence establishing that the system implementation is consistent with the security policy requirements
• Operational Assurance is evidence that the system sustains the security policy requirements during installation, configuration, and operation
Methodologies
• Waterfall model
• Iterative model
• Exploratory programming
• Prototyping
• Formal transformation
• Reusable components
• Extreme programming
Top-Down Design: The Waterfall Model
• Evaluate the problem—This is where the concept is born.
• Identify deficiencies with existing solutions and gather information.
• Propose a solution—Present a detailed description of the solution, including pros and cons and the problems the software will address. Finalize timelines, budgets, work breakdown structures, and other supporting documentation. Most importantly, identify and analyze requirements.
• Design the architecture—Once the proposal has been accepted, create models of the solution, including workflow and dataflow diagrams, module and functionality layouts, and any other descriptions required by the solution. A vigorous review process usually accompanies this phase.
• Develop the code—Use the blueprints created in previous phases to write, debug, and unit-test the code. Next, integrate the code and test portions of the system. Finally, test the entire system. This cycle isn’t complete until all tests have passed.
• Deploy and use the system—Roll out the resulting functionality and provide training and documentation to users as needed.
• Maintain the solution—Support and upgrade the software when necessary and fix post-production bugs.
http://www.zdnet.com.au/builder/manage/project/story/0,2000035082,20266893,00.htm
Waterfall Model View
[Figure: waterfall stages Requirements → Specification → Implement/Unit test → Integration/System test → Operations & Maintenance, with refine/code/build/field transitions between stages and validate/verify feedback at each step]
Waterfall Model Advantages
• Easy to control
• Limits cross-team interaction
• Relatively easy to estimate resources
• Nicely compatible with project management schemes
Waterfall Model Disadvantages
• Easy for manager, hard for developers
– Testing comes at the end
• Debugging can be complex
• Entire system cannot be tested until close to end of project, a high risk
• Feedback nominally occurs only at end of each phase
• If the final system fails, how to correct it?
Iterative Development
• Spiral model developed by Barry Boehm, 1986
– Boehm led the way in developing software cost models in the 1980s, most notably COCOMO
– Uses prototypes to refine/define the requirements
– Uses the waterfall model for each step
• This fact is often overlooked by spiral supporters
– Provided the number of iterations is limited (and pre-agreed), risk can be controlled
– Believed to be more flexible than the waterfall model
Spiral Model
http://www.cc.gatech.edu/classes/cs3302_98_winter/1-08-mgt/sld012.htm
Exploratory Development
• No requirements
• No specifications
• “Quick and dirty” systems are developed, and then modified repeatedly until they achieve the desired performance
Prototyping
• Similar to exploratory development
• BUT… the objective of the phase-one system is to refine and produce system requirements
• After that, a production-quality system is developed, perhaps even using another model
• Goal is to avoid “You gave me just what I asked for, but it’s not what I wanted.”
Formal Transformation
• Begins with formal statement of specification
• Transformed into software by formal, correctness-preserving transforms
• Idea is to produce a system that provably complies with the specification
• This is where Bell and LaPadula and their contemporaries were headed
System Assembly FromReusable Components
• Objective is to avoid constantly re-developing components that already exist
– e.g. sine function calculation
• Presupposes existence of a library of tested, reusable components
• May be merged with other development models
• Source and compliance of components is an issue
– What if they are in another language?
– What is known about their security, correctness, etc.?
• A goal of Ada, and a common technique
Extreme Programming
• Rapid prototyping
• Best practices
• Frequent review and test
• “Build a little, test a lot”
• Vulnerable to lack of an overall security (or any other) architecture
• Common approach for commercial software
Some Issues
• How to define and verify security issues
• How do we write programs whose security can be verified?
– Modular programming is often viewed as a panacea in this role; it is not
• What about verifying the requirements?
• And how about the specification?
Threat Trees
• Security application of fault-tree analysis
– Widely used in the insurance industry
• Root is the undesired behavior
• Successive nodes decompose it into possible causes
• Can be expressed as a binary tree or as a nested outline
• Very useful for modeling threats
Threat Tree: Human Actors Using Network Access
http://www.cert.org/archive/pdf/OCTAVEthreatProfiles.pdf
Threat Tree: Human Actors Using Physical Access
http://www.cert.org/archive/pdf/OCTAVEthreatProfiles.pdf
Threat Tree: System Problems
http://www.cert.org/archive/pdf/OCTAVEthreatProfiles.pdf
Threat Tree: Other Problems
http://www.cert.org/archive/pdf/OCTAVEthreatProfiles.pdf
Threat Trees: Some Thoughts
• Threat trees work best when you have some idea of the probability of each threat
– Then you can work up the tree along the lines of highest probability to find the worst problems first
• This methodology can be automated, and simulations run to determine system issues
• It is seductive to allow this methodology to become the end result rather than a tool to produce a secure system
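As a sketch of “working up the tree along the lines of highest probability,” a threat tree in nested-outline form, with invented events and probabilities:

```python
# Toy threat tree: the root is the undesired behavior; leaves carry rough
# probabilities (all numbers invented for illustration).
tree = {
    "data disclosed": {
        "network attacker": {"stolen password": 0.05, "unpatched service": 0.10},
        "insider": {"deliberate leak": 0.02, "accidental email": 0.08},
    }
}

def worst_paths(node, path=()):
    """Return (probability, path) for every leaf, highest probability first."""
    leaves = []
    for name, child in node.items():
        if isinstance(child, dict):
            leaves.extend(worst_paths(child, path + (name,)))
        else:
            leaves.append((child, path + (name,)))
    return sorted(leaves, reverse=True)

for p, path in worst_paths(tree):
    print(f"{p:.2f}  " + " -> ".join(path))
```

The sorted output is the priority list: attack the highest-probability causes first. This is also the shape a simulation would automate over a much larger tree.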
Failure Modes and Effects Analysis (FMEA)
• Threat tree upside-down: work from failure modes to overall effect– Widely used within NASA
• This is a fairly standard engineering approach in any complex system
• Combining top-down and bottom-up analysis should lead to better systems
Requirements Creep
• Every system experiences evolution in its requirements; the challenge is to capture and manage the changes– Bug fixes– Controls and governance– Tragedy of the commons– Organizational change
Managing the Process
• Risk management
• Using parallel processes
• Real world economics
• Real world sociology
• Real world politics
• Providing for audits
Risk Management
• Annual loss expectancy (ALE) is a widely used measure of risk, and is required in US Government procurements
Loss Type         | Amount      | Incidence/Yr | ALE
EFT fraud         | $50,000,000 | .005         | $250,000
ATM fraud (large) | $250,000    | .2           | $50,000
ATM fraud (small) | $20,000     | .5           | $10,000
Teller fraud      | $3,240      | 200          | $648,000
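ALE is simply the loss per incident times the expected incidents per year, so the table can be recomputed directly:

```python
# Recompute ALE = amount x incidence/yr for each row of the loss table above.
losses = [
    ("EFT fraud",         50_000_000, 0.005),
    ("ATM fraud (large)",    250_000, 0.2),
    ("ATM fraud (small)",     20_000, 0.5),
    ("Teller fraud",           3_240, 200),
]

for name, amount, incidence in losses:
    print(f"{name:18s} ALE = ${amount * incidence:,.0f}")
```

Running the multiplication like this is a quick way to catch transcription slips in such tables before they drive a budget.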
ALE Observations
• Intended to highlight most costly losses
• Vulnerable to poorly defined probabilities
– e.g. is the teller fraud really as high as the ALE seems to show, or is the incidence figure off?
• Vulnerable to jiggering the figures to match the funds available
– “We want all the security we can afford.”
Parallel Inputs
• Good ideas are not confined to those at the top of the hierarchical pyramid
• Lots of input can find things single contributors cannot
• Requires a very good, broad-minded “finisher” to collect all the inputs and consolidate them into a requirement, specification, or what have you
Economics
• How much can be afforded is a very real constraint
• How to reconcile this with security and performance demands?
• How to trade off security and performance or reliability?
Sociology
• Social engineering
– People want to be liked, and most hate confrontation
– This can be used to compromise security
• Groups act differently from individuals
• Organizational culture is important
• Politics are a real facet of real life
Audits
• Lipner was right
• Audits need to be accomplished at all stages of the development process, to ensure that goals are met and to identify problems
• The objective is not necessarily to lay blame, but rather to discover potential problems before they become actual problems
• Auditors must be independent and trusted
Auditing Guidelines
• ISO 17799
• COBIT (Control Objectives for Information and Related Technology)
• ISO 9001 (maybe)
A Parting Thought
• “Management is that for which there is no algorithm. Where there is an algorithm, it’s administration.”
» Roger Needham
Spies at Work
• FBI counterintelligence agent Robert Hanssen convicted of espionage
• What can we learn from this?
– He wasn’t caught because he was careless
– He knew all the tricks used to catch spies
– He was arrogant (Philby book)
– He did “exceptionally grave” damage to the nation, and is probably directly responsible for at least two people being executed
• So what does that have to do with system security?
Where to Hide Things?
• In a difficult to find location?
• In a safe deposit box?
• In a dead drop?
• How about in plain sight?
• And…why are we hiding them, anyway?
One Worry
• This is a stegosaurus
• We need to worry about steganography
Steganography
• “Covered writing”
– from the Greek steganos and graphos
– steganos = covered (or roofed)
– graphos = writing
• Includes such arcana as invisible ink, hollow heels in shoes, open codes
• A real problem for systems security, as we shall see
Null Cipher Example
News Eight Weather: Tonight increasing snow. Unexpected precipitation smothers eastern towns. Be extremely cautious and use snowtires especially heading east. The highways are knowingly slippery. Highway evacuation is suspected. Police report emergency situations in downtown ending near Tuesday.

Decodes as:

Newt is upset because he thinks he is President.
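Null ciphers of this kind typically hide the message in the first letter of each word. A minimal decoder, demonstrated on an invented cover sentence:

```python
import re

def first_letters(text):
    """Decode a first-letter null cipher: concatenate each word's initial letter."""
    return "".join(word[0] for word in re.findall(r"[A-Za-z]+", text)).lower()

# Invented cover text whose first letters spell "hello".
cover = "Helen eats lunch late, obviously."
print(first_letters(cover))  # hello
```

The same function applied to a longer cover text, like the weather report above, recovers its hidden sentence in the same way.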
Actual WWII Null Cipher
Apparently neutral's protest is thoroughly discounted and ignored. Isman hard hit. Blockade issue affects pretext for embargo on by products, ejecting suets and vegetable oils.

Decodes as:

Pershing sails from NY June 1.
Another Example
[Figure: two images, S0 and S1, and the Result of combining them]
Interesting, but So What?
• What if we were to replace the least significant bits of a complex data file with information we wanted to transmit secretly?
• File compression
– Lossless (e.g., GIF, BMP)
– Lossy (e.g., MPEG, JPEG)
• Downgrading information--how can you be sure what you downgraded?
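The least-significant-bit idea can be sketched on raw bytes standing in for pixel samples; this is a hypothetical minimal embed/extract pair, not any particular stego tool:

```python
def embed(cover, message_bits):
    """Replace the least significant bit of each cover byte with a message bit."""
    return bytes((byte & 0xFE) | bit for byte, bit in zip(cover, message_bits))

def extract(stego, n):
    """Read back the low bit of the first n bytes."""
    return [byte & 1 for byte in stego[:n]]

cover = bytes([200, 201, 202, 203, 204, 205, 206, 207])  # e.g. 8 pixel samples
bits = [1, 0, 1, 1, 0, 0, 1, 0]                          # hidden payload
stego = embed(cover, bits)

print(extract(stego, 8))                                 # [1, 0, 1, 1, 0, 0, 1, 0]
print(max(abs(a - b) for a, b in zip(cover, stego)))     # 1
```

Each byte changes by at most one count, which is why the carrier looks essentially unchanged; this only survives lossless formats, since lossy compression destroys the low bits.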
King’s College, Cambridge (UK): the image in which another image will be hidden using steganography
Stego Summary
• Careful comparison of the two King’s College photos shows the stego image is slightly less sharp than the original
• Careful examination of the Pentagon aerial photos shows the recovered image is slightly less sharp than the original
• BUT…you knew what to look for
Stego Implications
• How can you be sure that what has been downgraded does not hide other information?
• Steganography can be used as a covert channel that is very hard to find
• Steganography also provides a tool that can be used to watermark a complex file
Fortunately, steganography is so complex and hard to implement that it is not likely the average hacker or crook would be able to exploit it.

Equally fortunately, we have discovered that the moon is made of green cheese.
Some Stego Tools
• OutGuess
• Information Hiding Homepage
• Steganography Tools
• Invisible Secrets
Other Stego Uses
• Covert information distribution
– eBay images have been found which contain stego information believed to be messages to terrorist cells
– Much of the imagery on the Internet contains stego data, which could be executables
• Don’t get too cute -- why would you suddenly start trading pictures with someone?
Some Thoughts
• What about Bell and LaPadula’s model?
– No write down?
– No read up?
• The Internet thrives on visual imagery. What does this imply for security, based on what we have studied tonight?
• Why did it take 15 years to catch Hanssen? How long would it take to uncover stego?
Another Problem
How do you counter these?
Summary – 1
• System design should be based on simplicity and restriction
– Use the 8 design principles to get as close to security as you are able
• Access control can be accomplished in several ways, but matrices / lists dominate
• Security kernels are the gatekeepers for implementing access control
Summary - 2
• Developing systems is hard; developing secure systems is even harder
• It is important to define requirements and specifications at the beginning, and to understand what can be traded for what
• Many development models exist; none is the “best” for any particular purpose
• Beware systems that promise tight analytical information where you have sketchy inputs (which you almost always have)
Summary - 3
• Security kernels involve hardware and software at the lowest levels of the system
• Implementing security kernels is difficult and often complicates the use of the OS
• Security must be planned
• Proof of security can still be elusive
• Many hardware security features exist that remain unused in commercial products
Summary - 4
• The existence of secure tools and protocols is not a guarantee of security
• Human spies are a real problem, and hard to catch
• Steganography is one way for information to leak out of a system (This is why you can’t take an unclassified file from a classified network and insert it onto an unclassified network without risking disclosure of classified material.)
• Steganography can be very hard to find, but it is very easy to implement at low cost
• New, helpful devices can make security much harder than it used to be
Assignment for Next Class
• Read text, Chapter 24(?) on O/S security
• Prepare a one-page description of your proposed research topic. Describe what you plan to research and the topic areas within the overall area to be covered. Identify the team members. One proposal per team.