
PROTECTION AND CONTROL OF

INFORMATION SHARING IN MULTICS

by

Jerome H. Saltzer

Massachusetts Institute of Technology
Department of Electrical Engineering and Project MAC

ABSTRACT

This paper describes the design of mechanisms to control sharing of information in the Multics system. Seven design principles help provide insight into the tradeoffs among different possible designs. The key mechanisms described include access control lists, hierarchical control of access specifications, identification and authentication of users, and primary memory protection. The paper ends with a discussion of several known weaknesses in the current protection mechanism design.

An essential part of a general-purpose computer utility system is a set of protection mechanisms which control the transfer of information among the users of the utility. The Multics system*, a prototype computer utility, serves as a useful case study of the protection mechanisms needed to permit controlled sharing of information in an on-line, general-purpose, information-storing system. This paper provides a survey of the various techniques currently used in Multics to provide controlled sharing, user authentication, inter-user isolation, supervisor-user protection, user-written proprietary programs, and control of special privileges.

Controlled sharing of information was a goal in the initial specifications of Multics[8, 11], and thus has influenced every stage of the system design, starting with the hardware modifications to the General Electric 635 computer which produced the original GE 645 base for Multics. As a result, information protection is more thoroughly integrated into the basic design of Multics than is the case for those commercial systems whose original specifications did not include comprehensive consideration of information protection.

Multics is an evolving system, so any case study must be a snapshot taken at some specific time. The time chosen for this snapshot is summer, 1973, at which time Multics is operating at M.I.T. using the Honeywell 6180 computer system. Rather than trying to document every detail of a changing environment, this paper concentrates on the protection strategy of Multics, with the goal of communicating those ideas which can be applied or adapted to other operating systems.

________________________
This research was supported by the Advanced Research Projects Agency of the Department of Defense under ARPA Order No. 2095 which was monitored by ONR Contract No. N00014-70-A-0362-0006.

* A brief description of Multics, and a more complete bibliography, are given in the paper by Corbató, Saltzer, and Clingen[6].


What is new?

In trying to identify the ideas related to protection which were first introduced by Multics, a certain amount of confusion occurs. The design was initially laid out in 1964-1967, and ideas were borrowed from many sources and embellished, and new ideas were added. Since then, the system has been available for study to many other system designers, who have in turn borrowed and embellished upon the ideas they found in Multics while constructing their own systems. Thus some of the ideas reported here have already appeared in the literature. Of the ideas reported here, the following seem to be both novel and previously unreported:

- The notion of designing a comprehensive computer utility with information protection as a fundamental objective.

- Operation of the supervisor under the same hardware constraints as user programs, under descriptor control and in the same address space as the user.

- Facilities for user-constructed protected subsystems.

- An access control system applicable to batch as well as on-line jobs.

- Extensive human engineering of the user authentication (password) interface.

- Decentralization of administrative control of the protection mechanisms.

- Ability to allow or revoke access with immediate effect.

Multics is unique in the extent to which information protection has been permitted to influence the entire system design. By describing the range of protection ideas embedded in Multics, the extent of this influence should become apparent.

Design Principles

Before proceeding, it is useful to review several design principles which were used in the development of facilities for information protection in Multics. These design principles provided guidance in many decisions, although admittedly some of the principles were articulated only during the design, rather than in advance.

________________________
This document was originally prepared off-line. This file is the result of scan, OCR, and manual touchup, starting with an original paper copy dated August 10, 1973. This is the author's original version of the paper. A revised version was published in Communications of the ACM 17, 7 (July 1974), pages 388-402. http://doi.acm.org/10.1145/361011.361067

1. Every designer should know and understand the protection objectives of the system. At the present rather shaky stage of understanding of operating system engineering, there are many points at which an apparently "don't care" decision actually has a bearing on protection. Although these decisions will eventually come to light as the system design is integrated, a system design cannot withstand very many reversals of early design decisions if it is to be completed on a reasonable schedule and within a budget. By keeping all designers aware of the protection objectives, the early decisions are more likely to be made correctly.

2. Keep the design as simple and small as possible. This principle is stated so often that it becomes tiresome to hear. However, it bears repeating with respect to protection mechanisms, since there is a special problem: design and implementation errors which result in unwanted access paths will not be immediately noticed during routine use, since routine use usually does not include attempts to utilize improper access paths. Therefore, techniques such as complete, line-by-line auditing of the protection mechanisms are necessary; for such techniques to be successful, a small and simple design is essential.

3. Protection mechanisms should be based on permission rather than exclusion. This principle means that the default situation is lack of access, and the protection scheme provides selective permission for specific purposes. The alternative, in which mechanisms attempt to screen off sections of an otherwise open system, seems to present the wrong psychological base for secure system design. A conservative design must be based on arguments on why objects should be accessible, rather than on why they should not; in a large system some objects will be inadequately considered and a default of lack of access is more fail-safe. Along the same line of reasoning, a design or implementation mistake in a mechanism which gives explicit permission tends to fail by refusing permission, a safe situation, since it will be quickly detected. On the other hand a design or implementation mistake in a mechanism which explicitly excludes access tends to fail by not excluding access, a failure which may go unnoticed.

4. Every access to every object must be checked for authority. This principle, when applied methodically, is the primary underpinning of the protection system. It forces a system-wide view of access control which includes initialization, recovery, shutdown, and maintenance. It also implies that a foolproof method of identifying the source of every request must be devised. In a system designed to operate continuously, this principle requires that when access decisions are remembered for future use, careful consideration be given to how changes in authority are propagated into such local memories.


5. The design is not secret. The mechanisms do not depend on the ignorance of potential attackers, but rather on possession of specific, more easily protected, protection keys or passwords. This strong decoupling between protection mechanisms and protection keys permits the mechanisms to be reviewed and examined by as many competent authorities as possible, without concern that such review may itself compromise the safeguards. Peters[19] and Baran[2] discuss this point further.

6. The principle of least privilege. Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job. If this principle is followed, the effect of accidents is reduced. Also, if a question related to misuse of a privilege occurs, the number of programs which must be audited is minimized. Put another way, if one has a mechanism available which can provide "firewalls", the principle of least privilege provides a rationale for where to install the firewalls.

7. Make sure that the design encourages correct behavior in the users, operators, and administrators of the system. Experience with systems which did not follow this principle revealed numerous examples in which users ignored or bypassed protection mechanisms for the sake of convenience. It is essential that the human interface be designed for naturalness, ease of use, and simplicity, so that users will routinely and automatically apply the protection mechanisms.

The application of these seven design principles will be evident in many of the specific mechanisms described in this paper.

Finally, in the design of Multics there were two additional functional objectives worth dwelling upon. The first of these was to provide the option of complete decentralization of the administration of protection specifications. If the system design forces all administrative decisions (e.g., protection specifications) to be set by a single administrator, that administrator quickly becomes a bottleneck and an impediment to effective use of the system, with the result that users begin adopting habits which bypass the administrator, often compromising protection in the bargain. Even if responsibility can be distributed among several administrators, the same effects may occur. Only by permitting the individual user some control of his own administrative environment can one insist that he take responsibility for his work. Of course, centralization of authority should be available as an option. It is easy to limit decentralization; it seems harder to adapt a centralized design to an environment in which decentralization is needed.

The second additional functional objective was to assume that some users will require protection schemes not anticipated in the original design. This objective requires that the system provide a complete set of handholds so that the user, without exercising special privileges, may construct a protection environment which can interpret access requests however he desires. The method used is to permit any user to construct a protected subsystem, which is a collection of programs and data with the property that the data may be accessed only by programs in the subsystem, and the programs may be entered only at designated entry points. A protected subsystem can thus be used to program any desired access control scheme.

The Storage System and Access Control Lists

The central fixture of Multics is an organized information storage system.[8] Since the storage system provides both reliability and protection from unauthorized information release, the user is thereby encouraged to make it the repository of all of his programs and data files. All use of information in the storage system is implemented by mapping the information into the virtual memory of some Multics process. Physical storage location is automatically determined by activity. As a result, the storage system is also used for all system data bases and tables, including those related to protection. The consequence of these observations is that one access control mechanism, that of the storage system, handles almost all of the protection responsibility in Multics.

Storage is logically organized in separately named data storage segments, each of which contains up to 262,144 36-bit words. A segment is the cataloguing unit of the storage system, and it is also the unit of separate protection. Associated with each segment is an access control list, an open-ended list of names of users who are permitted to reference the segment*. To understand the structure of the access control list, first consider that every access to a stored segment is actually made by a Multics process. Associated with each process is an unforgeable character string identifier, assigned to the process when it was created. In its simplest form, this identifier might consist of the personal name of the individual responsible for the actions of the process. (This responsible person is commonly called the principal, and the identifier the principal identifier.) Whenever the process attempts to access a segment or other object catalogued by the storage system, the principal identifier of the process is compared with those appearing on the access control list of the object; if any match is found access is granted.

Actually Multics uses a more flexible scheme which facilitates granting access to groups of users, not all of whose members are known, and which may have dynamically varying membership. A principal identifier in Multics consists of several parts; each part of the identifier corresponds to an independent, exhaustive partition of all users into named groups. At present, the standard Multics principal identifier contains three parts, corresponding to three partitions:

1. The first partition places every individual user of the installation in a separate access control group by himself, and names the group with his personal name. (This partition is identical to the simple mechanism of the previous paragraph.)

2. The second partition places users in groups called projects, which are basically sets of users who cooperate in some activity such as constructing a compiler or updating an inventory file. One person may be a member of several projects, although at the beginning of any instance of his use of Multics he must decide under which project he is operating.

________________________
* The Multics access control list corresponds roughly to a column of Lampson's protection matrix.[16]

3. The third partition allows an individual user to create his own, named protection compartments. Private compartments are chiefly useful for the user who has borrowed a program which he has not audited, and wishes to insure that the borrowed program does not access certain of his own files. The user may designate which of his own partitions he wishes to use at the time he authenticates his identity*.

Although the precise description in terms of exhaustive partitions sounds formidable, in practice a relatively easy-to-use mechanism results. For example, the user named "Jones" working on the project named "Inventory" and designating the personal compartment named "a" would be assigned the principal identifier:

Jones.Inventory.a

Whenever his process attempts to access an object catalogued by the storage system, this three part principal identifier is first compared with successive entries of the access control list for the object. An access control list entry similarly has three parts, but with the additional convention that any or all of the parts may carry a special flag to indicate "don't care" for that particular partition. (We represent the special flag with an asterisk in the following examples.) Thus, the access control list entry

Jones.Inventory.a

would permit access to exactly the principal of our earlier example. The access control list entry

Jones.*.*

would permit access to Jones no matter what project he is operating under, and independent of his personally designated compartment. Finally, the access control list entry

*.Inventory.*

would permit access to all users of the "Inventory" project. Matching is on a part by part basis, so there is no confusion if there happens to be a project named "Jones".
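The part-by-part matching just described can be sketched in a few lines of modern code. The sketch below is illustrative only: the function name and structure are hypothetical, not the actual Multics implementation (which predates Python by decades).

```python
def matches(entry: str, principal: str) -> bool:
    """Compare an access control list entry with a principal identifier,
    part by part; an '*' in the entry means "don't care" for that part."""
    entry_parts = entry.split(".")
    principal_parts = principal.split(".")
    if len(entry_parts) != len(principal_parts):
        return False
    return all(e == "*" or e == p
               for e, p in zip(entry_parts, principal_parts))

print(matches("Jones.Inventory.a", "Jones.Inventory.a"))  # True: exact match
print(matches("Jones.*.*", "Jones.Payroll.b"))            # True: any project
print(matches("*.Inventory.*", "Smith.Inventory.a"))      # True: whole project
print(matches("Jones.*.*", "Smith.Jones.a"))              # False: parts never cross
```

Because each part is compared only with the corresponding part, a project named "Jones" can never satisfy an entry written for the user "Jones".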

Using multi-component principal identifiers it is straightforward to implement a variety of standard security mechanisms. For example, the military "need-to-know" list corresponds to a series of access control list entries with explicit user names but (possibly) asterisks in the remaining fields. The standard government security compartments are examples of additional partitions, and would be implemented by extending the principal identifier to four or more parts, each additional part corresponding to one compartment in use at a particular installation. (Every person would be either in or out of each such compartment.) A restriction of access to users who are simultaneously in two or more compartments is then easily expressed.

________________________
* The third partition has not yet been completely implemented. The current system uses the third partition only to distinguish between interactive and absentee use of the system.


We have used the term "object" to describe the entities catalogued by the storage system with the intent of implying that segments are not the only kinds of objects. Currently, four kinds of objects are implemented or envisioned:

1. Segments

2. Message queues (experimental implementation)

3. Directories (called catalogues in some systems)

4. Removable media descriptors (not yet implemented)

For each object, there are several separately controllable modes of access to the object. For example, a segment may be read, written, or executed as a procedure. If we use letters r, w, and e for these three modes of access, an access control list entry for a segment may specify any of the combinations of access in Table I. Certain access mode combinations are prohibited either because they make no sense (e.g. write only) or correct implementation requires more sophisticated machinery than implied by the simple mode settings. (For example, an execute-only mode, while appealing as a method for obtaining proprietary procedures, leaves unsolved certain problems of general proprietary procedures, such as protection of return points of calls to other procedures. The protection ring mechanism described later is used in Multics to implement proprietary procedures. The execute-only mode, while probably useful for less general cases, has not been pursued.)

___________________________________________________
 Mode   | Typical use
 (none) | access denied
 r      | read-only data
 re     | pure procedure
 rw     | writeable data
 rew    | impure procedure

Table I: Acceptable combinations of access modes for a segment.
___________________________________________________

In a similar way, message queues permit separate control of enqueueing and dequeueing of messages, tape reel media descriptors permit separate control of reading, writing, and appending to the end of a tape reel, and directories permit separate control of listing of contents, modifying existing entries, and adding new entries. Control of these various forms of access to objects is provided by extending each access control list entry to include access mode indicators. Thus, the access control list entry

Smith.*.* rw

permits Smith to read and write the data segment associated with the entry.
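The mode check itself amounts to set containment: a request is permitted only when every requested mode appears in the matching entry. The following sketch is hypothetical (illustrative names, not the Multics code), using the acceptable combinations from Table I:

```python
# Acceptable mode combinations for a segment, per Table I; other
# combinations (e.g. write-only) are prohibited.
ACCEPTABLE = [set(), {"r"}, {"r", "e"}, {"r", "w"}, {"r", "e", "w"}]

def valid_combination(modes: str) -> bool:
    """True if the mode string is one of the acceptable combinations."""
    return set(modes) in ACCEPTABLE

def permits(entry_modes: str, requested: str) -> bool:
    """True if every requested mode appears in the entry's mode string."""
    return set(requested) <= set(entry_modes)

print(valid_combination("w"))   # False: write-only makes no sense
print(permits("rw", "w"))       # True: an 'rw' entry allows writing
print(permits("rw", "e"))       # False: 'rw' does not allow execution
```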

It would have been simpler to associate an access mode with the object itself, rather than with each individual access control list entry, but the flexibility of allowing different users to have different access modes seems useful. It also makes possible exceptions to the granting of access to all members of a group. In the case where more than one access control list entry applies, with different access modes, the convention is made that the first access control list entry which matches the principal identifier of the requesting process is the one which applies. Thus, the pair of access control list entries:

Smith.Inventory.*   (none)
*.Inventory.*       rw

would deny access to Smith, while permitting all other members of the "Inventory" project to read and write the segment*. To insure that such control is effective, when an entry is added to an access control list, it is sorted into the list according to how specific the entry is by the following rule: all entries containing specific names in the first part are placed before those with "don't cares" in the first part. Each of those subgroups is then similarly ordered according to the second part, and so on. The purpose of this sorting is to allow very specific additions to an access control list to tend to take precedence over previously existing (perhaps by default) less specific entries, without requiring that the user master a language which permits him arbitrary ordering of entries. The result is that most common access control intentions are handled correctly automatically, and only unusually sophisticated intentions require careful analysis by the user to get them to come out right.

To minimize the explicit attention which a user must give to setting access control lists, every directory contains an "initial access control list". Whenever a new object is created in that directory, the contents of the initial access control list are copied into the access control list of the newly created object**. Only if the user wishes access to be handled differently than this does he have to take explicit action. Permission to modify a directory's contents implies also permission to modify its initial access control list.
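The sorting rule and the first-match convention can be sketched together. This is again a hypothetical illustration, not the actual implementation: the specificity key simply sorts specific names before "don't cares", part by part, and lookup stops at the first matching entry.

```python
def specificity_key(entry: str):
    # False sorts before True, so an entry with a specific name in a part
    # precedes entries with '*' in that part; ties move to the next part.
    return tuple(part == "*" for part in entry.split("."))

def first_match(acl, principal):
    """acl: list of (entry, modes) pairs; the first matching entry applies."""
    parts = principal.split(".")
    for entry, modes in sorted(acl, key=lambda item: specificity_key(item[0])):
        if all(e == "*" or e == p for e, p in zip(entry.split("."), parts)):
            return modes
    return None  # no entry matches: access is denied by default

# The pair of entries from the example above, deliberately given unsorted:
acl = [("*.Inventory.*", "rw"), ("Smith.Inventory.*", "")]
print(repr(first_match(acl, "Smith.Inventory.a")))  # '' : Smith is denied
print(repr(first_match(acl, "Jones.Inventory.a")))  # 'rw': other members
```

Note that sorting places `Smith.Inventory.*` before `*.Inventory.*`, so the specific exception takes precedence without the user having to order the entries himself.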

The access control list mechanism illustrates an interesting subtlety. One might consider providing, as a convenience, checking of new access control list entries at the time they are made, for example to warn a user that he has just created an access control list entry for a non-existent person. Such checks were initially implemented in Multics,

________________________
* This feature violates design principle three, which proscribes selective exclusion from an otherwise open environment because of the risk of undetected errors. The feature has been provided nevertheless, because the alternative of listing every user except the few excluded seems clumsy.

** An earlier version of Multics did not copy the initial access control list, but instead considered it to be a common appendix to every access control list in that directory. That strategy made automatic sorting of access control list entries ineffective, so sorting was left to the user. As a result, the net effect of a single change to the common appendix could be different for every object in the directory, leading to frequent mistakes and confusion, in violation of the seventh design principle. Since in the protection area, it is essential that a user be able to easily understand the consequences of an action, this apparently more flexible design was abandoned in favor of the less flexible but more understandable one.


but it was quickly noticed that they represented a kind of compromise of privacy: by creating an access control list entry naming an individual, the presence or absence of an error message would tell whether or not that individual was a registered user of the system, thereby possibly compromising his privacy. For this reason, a name-encoding scheme which required checking of access control entry names at the time they were created was abandoned.

It is also interesting to compare the Multics access control scheme with that of the earlier CTSS system[6]. In CTSS, each file had a set of access restriction bits, applying to all users. Sharing of files was accomplished by permitting other users to place in their directories special entries called links, which named the original file, and typically contained further restrictions on allowable access modes. The CTSS scheme had several defects not present in the Multics arrangement:

1. Once a link was in place there was no way to remove it without modifying the borrower's directory. Thus, revocation of access was awkward.

2. A single user, using the same file via different links, could have different access privileges, depending on which link he used. Allowing access rights to depend on the name which happens to be used for an object certainly introduced an extra degree of flexibility, but this flexibility more often resulted in mistakes than in usefulness.

3. As part of a protection audit, one would like to be able to obtain a list of all users who can access a file. To construct that list, on CTSS, one had to search every directory in the system to make a list of links. Thus such an audit was expensive and also compromised other users' privacy.

Multics retains the concept of a link as a naming convenience, but the Multics link confers no access privileges -- it is only an indirect address.

Early in the design of Multics[8] an additional extension was proposed for an access control list entry: the "trap" extension, consisting of a one-bit flag and the name of a procedure. The idea was that for all users whose principal identifier matched with that entry, if the trap flag were on the procedure named in the trap extension should be called before access be granted. The procedure, supplied by the setter of the access control list entry, could supply arbitrary access constraints, such as permitting access only during certain hours or only after asking another logged in user for an OK. This idea, like that of the execute-only procedure, is appealing but requires an astonishing amount of supporting mechanism. The trap procedure cannot be run in the requesting user's addressing and protection environment, since he is in control of the environment and could easily subvert the trap procedure. Since the trap procedure is supplied by another user, it cannot be run in the supervisor's protection environment, either, so a separate, protected subsystem environment is called for. Since the current Multics protected subsystem scheme allows a subsystem to have access to all of its user's files, implementation of the trap extension could expose a user to unexpected threats from trap procedures on any data segment he touches.


Therefore, at the least, a user should be able to request that he be denied access to objects protected by trap extensions, rather than be subject to unexpected threats from trap procedures. Finally, if such a trap occurs on every read or write reference to the segment, the cost would seem to be high. On the other hand, if the trap occurs only at the time the segment is mapped into a user's address space*, then design principle four, that every reference be validated, is violated; revocation of access becomes difficult especially if the system is operated continuously for long periods. The sum total of these considerations led to temporarily abandoning the idea of the trap extension, perhaps until such time as a more general domain scheme, such as that suggested by Schroeder[21] is available.

Both backup copying of segments (for reliability) and bulk input and output to printers, etc. are carried out by operator-controlled processes which are subject to access control just as are ordinary users. Thus a user can insure that printed copies of a segment are not accidentally made, by failing to provide an access control list entry which permits the printer process to read the segment**. Access control list entries permitting backup and bulk I/O are usually part of the default initial access control list. Bulk input of cards is accomplished by an operator process which reads them into a system directory, and leaves a note for the user in question to move them to his own directory. This strategy guarantees that there is no way in which one user can overwrite another user's segment by submitting a spurious card input request. These mechanisms are examples of the fourth design principle: every access to every object is checked for authority.
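The point that system daemons are ordinary principals can be shown in a few lines. The principal names below are invented for illustration (in the spirit of, but not identical to, actual Multics identifiers): a user who leaves the printer daemon off an access control list thereby denies it read access, exactly as for any other user.

```python
def may_read(acl, principal):
    """Access is granted only via an explicit entry; absence denies."""
    return "read" in acl.get(principal, set())

# A typical default initial ACL (names illustrative):
default_acl = {"Saltzer.ProjMAC": {"read", "write"},
               "Backup.SysDaemon": {"read"},
               "Printer.SysDaemon": {"read"}}

# A privacy-conscious user simply omits the printer daemon's entry:
private_acl = {"Saltzer.ProjMAC": {"read", "write"},
               "Backup.SysDaemon": {"read"}}

print(may_read(default_acl, "Printer.SysDaemon"))  # True
print(may_read(private_acl, "Printer.SysDaemon"))  # False: cannot print it
```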

An administrative consequence of the access control list organization is that personal and project names, once assigned, cannot easily be reused, since the names may appear in access control lists. In principle, a system administrator could, when a user departs, unregister him and then examine every access control list of the storage system for instances of that name, and delete them. The system has been deliberately designed to discourage such a strategy, on the basis that a system administrator should not routinely paw through all the directories of all system users. Thus, the alternative scheme was adopted, requiring all user names, once registered, to be permanent.

Finally, the one most apparent limitation of the scheme as presently implemented is its "one-way" control of access. With the described access control list organization, the owner of a segment has complete control over who may access it. There are some cases in which users other than the owner may wish to see access restricted to an object which the owner has declared public. For example, an instructor of a class may for pedagogical purposes wish to require his students to write a

–––––––––––––––––––––––
* Or, in traditional file systems, at the time the file is "opened".

** Of course, another user who has permission to read the segment could make a copy and then have the copy printed. Methods of constraining even users who have permission are the subject of continuing research[20].


particular program rather than make use of an equivalent one already publicly available in the system. Alternatively, a project administrator concerned about security may wish to insure that his project members cannot copy sensitive information into storage areas belonging to other users and which are not under his control. He may also want to prevent his project members from setting access control lists to permit access by users outside the project. This kind of control can be expressed in Multics currently only by going to the trouble of constructing a protected subsystem which examines all supervisor calls, thereby permitting complete control over which objects are mapped into the address space and what terms are added to access control lists. Fortunately, there have so far appeared only a few examples in which such control is required, and the escape suggested has proven adequate for those cases. A more general, yet quite simple, solution would be to associate with the user's process two constraining lists: a list of pathnames of directories whose contents he may access, and a list of access control list terms which he is permitted to place on access control lists. These two constraining lists would be set only by the project administrator or security officer. The constraining lists would be especially useful in the military security environment, since they would help in the construction of a list of items a defector might have had access to.

As is evident, the Multics access control list mechanism represents an engineering tradeoff among three conflicting goals: flexibility of expression, ease of understanding and use, and economy of implementation. Additional flexibility of expression was tried (e.g., the common access control list mechanism previously footnoted) with the conclusion that the additional confusion which results from accidental misuse of the generality can outweigh the benefits; apparently the correct direction is the opposite, toward simpler, less general, and more easily understandable protection structures.

Hierarchical Control of Access Specifications

Since in Multics every object, including a directory, must be catalogued in some directory, all objects are arranged into a single hierarchical tree of directories. This naming hierarchy also provides a hierarchy of control of access, through the ability to modify the contents of a directory. Since a directory entry consists of the name of some object and its access control list, having access to modify directory entries is interpreted to include the ability to modify the access control lists of all the objects catalogued in that directory. No further hierarchical control is provided; for example, there is no ability to say "Allow read access to Jones for all segments below this node in the naming tree". Such specifications are similar in nature to the "common access control list" mentioned before; they make it difficult for a user to be sure of all the consequences of a change to the access specification. For example, removing a specification such as that quoted above, which permits only reading, might render effective a forgotten access control term lower in the naming hierarchy which permits both reading and writing*.
________________________
* Early versions of Multics provided a limited form of higher-level specification in the form of ability to deny all use of a directory, and


Although it would appear that the hierarchical scheme provides an inordinate amount of power to a project administrator and, above him, to a system administrator, in practice it forces a careful consideration of the lines of authority over protected information, and explicit recognition of an authority hierarchy which already existed. In some environments, it would probably be appropriate to publicly log all modifications of directory access above some level, so as to provide a measure of control of the use of hierarchical authority. More elaborate controls might include requiring cooperative consent of some quasi-judicial committee of users for modification of high-level directory access. Such controls are relatively easy for an installation or a project to implement, using protected subsystems.

It is possible, by choosing access modes correctly, to use the hierarchical access control scheme in combination with the initial access control list to accomplish a totally centralized control of all access decisions. If, for example, a project administrator creates a directory for a user, places an initial access control list in that directory, and then grants to the new user permission only to add new entries to the directory, all such new entries would automatically receive a copy of the initial access control list determined by the administrator -- the user would have no control over who may use the objects he creates. By policy, a system administrator could run an entire installation under this tight control, and retain for himself complete authority to determine what access control list is placed on every object, as in IBM's Resource Security System[14]. Alternatively, any smaller portion of the naming hierarchy can be kept under absolute control by the person having authority to modify access control lists at the top node of the portion.
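The centralized-control pattern just described can be sketched directly. The class and field names below are illustrative, not Multics structures: the administrator fixes the directory's initial access control list, grants the user only the ability to add entries, and every new object then inherits the administrator's list unchanged.

```python
class Directory:
    def __init__(self, initial_acl, user_modes):
        self.initial_acl = dict(initial_acl)  # copied onto each new entry
        self.user_modes = set(user_modes)     # what the user may do here
        self.entries = {}

    def create_segment(self, user, name):
        if "append" not in self.user_modes:
            raise PermissionError("user may not add entries")
        # The new object's ACL comes from the administrator's initial
        # list; the creating user has no say in it.
        self.entries[name] = dict(self.initial_acl)

# Administrator sets up the user's directory (names illustrative):
admin_acl = {"Admin.ProjA": {"read", "write"},
             "Jones.ProjA": {"read", "write"}}
home = Directory(initial_acl=admin_acl, user_modes={"append"})
home.create_segment("Jones.ProjA", "notes")
print(home.entries["notes"] == admin_acl)  # True: ACL fixed by admin
```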

The other obvious alternative to a hierarchical control of modification of access control lists would be some form of self-control. That is, the ability to modify an access control list would be one of the modes of access controlled by the list itself. A very general version of this alternative has been explored by Rotenberg[20]. This alternative has not been tried out in the Multics context, partly because the implications of the hierarchical method were easier to understand in the first implementation. Probably the chief advantage of self-control of access modification would be that one could provide an individual a fully private work area in which no one -- manager, security officer, or system administrator -- could intrude. On the other hand, the implementation of a "locksmith", while easy to do, may require introducing hidden access paths which are then subject to misuse*.
________________________
therefore of the objects contained within it. For the reasons suggested, this feature has been disabled.

* A locksmith would be an administrator who can provide accountable intervention when mistakes are made. For example, if an organization's key data base is under the exclusive control of a manager who has been disabled in an automobile accident, the locksmith could then provide another manager with access to the file. It seems appropriate to formalize the concept of a locksmith so that appropriate audit trails and authority to be a locksmith


Also, one wonders how a self-control scheme would fit smoothly into an organization which does not usually give an individual the privilege of choosing his own office door lock. Clearly, the social and organizational consequences of the choice between these two design alternatives deserve further study.

Authentication of Users

All of the machinery of access control lists, access modes, protected subsystems, and hierarchical control depend on an accurate principal identifier being associated with every process. Accuracy of identification depends on authentication of the user's claimed identity. A variety of mechanisms are used to help insure the security of this authentication. The general strategy chosen by Multics is to maintain individual accountability on a personal basis. Every user of a given installation (with one class of exception, noted later) is registered at the installation, which means that a unique name, usually his last name plus one or two initials, is permanently entered in a system registry. Associated with his name at the time he is registered is a password of up to eight ASCII characters. Whenever any person proposes to use the system, he supplies his unique name, at which point the system demands also that he provide his password.

Thus far, the authentication mechanism of Multics is essentially the same as for most other remote-accessed systems. However, Multics uses several extra measures related to user authentication, which are not often found in other systems. For one, all use of the system, whether interactive or absentee (batch), is authenticated interactively. That is, initiation of a batch job is not done on the basis of information found in a card reader. Arriving card decks are read in and held in on-line storage by a system process, for which an operator is responsible. All absentee jobs, whether they are to be controlled by files created from cards or files constructed interactively or files constructed by another program, must be initiated by some job already on the system, and whose legitimacy has been previously authenticated. Although a chain of absentee job requests can be developed, the chain must have begun with an interactive job, which requires interactive authentication. In the simplest case, the individual responsible goes to an interactive console, identifies and authenticates himself, and requests execution of the job represented by the incoming card deck. If necessary, the request will automatically wait until the card deck arrives, so that the user need not wait for the operator or for a card reader queue*. Thus, no job is ever run without prior positive identification of the responsible party. Note that for installations in which responsibility for card controlled jobs is considered unimportant, it is rather trivial to construct a Multics program, run under the responsibility of the card reader
______________________

can be well-defined. The alternative of sending a system programmer into the computer room with instructions to directly patch the system or its data may leave no audit trail and almost certainly encourages sloppy practice.

* The automatic wait is not yet implemented.


operator, which accepts and runs as a job anything found in the card reader. All such jobs would be run in processes bearing the principal identifier of the card reader operator, and are thus constrained in the range of on-line information which they can access. The inviolate principle of access control remains that on-line authentication of identity, by presenting a password, is required in order to start a process labeled with a particular desired principal identifier. Note also that the fact that a job happens to be operated without an interactive terminal has no bearing on its privileges, except as explicitly controlled by its principal identifier. Finally, to handle the situation where a busy researcher asks a friend to submit the batch job, a proxy login scheme permits the friend to identify himself, under his own password, and then request that the job be run under the principal identifier of the original researcher. The system will permit proxy logins only if the person responsible for the principal identifier to be used has previously authorized such logins by giving a list of proxies*.

As to protection of passwords, several facilities are provided. The user may, after authenticating himself, change his password at any time he feels that the old one may have been compromised. A program is available which will generate a new random eight-character password with English digraph statistics, thereby making it pronounceable and easy to memorize, and minimizing the need for written copies of the password. Users are encouraged to obtain their passwords from this program, rather than choosing passwords themselves, since human-chosen passwords are often surprisingly easy to guess.
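A generator of the kind described can be sketched as follows. This is not the Multics program: the real generator used English digraph statistics, whereas this sketch only approximates pronounceability by alternating consonants and vowels, which is a much cruder (and lower-entropy) assumption.

```python
import random

VOWELS = "aeiou"
CONSONANTS = "bcdfghjklmnprstvw"

def pronounceable_password(length=8, rng=random):
    """Crude stand-in for a digraph-statistics generator: alternate
    consonants and vowels so the result is pronounceable and memorable."""
    chars = []
    for i in range(length):
        pool = CONSONANTS if i % 2 == 0 else VOWELS
        chars.append(rng.choice(pool))
    return "".join(chars)

pw = pronounceable_password()
print(pw, len(pw))   # an 8-character alternating consonant/vowel string
```

A real digraph-based generator would weight each next character by its observed frequency after the previous one, rather than drawing uniformly.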
Passwords are stored in the file system in mildly encrypted form, using a one-way encryption scheme along the lines suggested by Wilkes[29]. As a result, passwords are not routinely known by any system administrator or project administrators, and there is never any occasion for which it is even appropriate to print out lists of passwords. If, through some accident, a stored password is exposed, its usefulness is reduced by its encrypted form.
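The one-way storage idea can be sketched briefly. The actual Multics scheme followed Wilkes and was far weaker than a modern hash; SHA-256 is used here purely to illustrate the principle that only a one-way image of the password is ever stored, so a leaked registry does not directly reveal passwords.

```python
import hashlib

def enroll(registry, name, password):
    """Store only the one-way image of the password."""
    registry[name] = hashlib.sha256(password.encode()).hexdigest()

def authenticate(registry, name, password):
    """Re-derive the image from the offered password and compare."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return registry.get(name) == digest

registry = {}
enroll(registry, "Saltzer", "kanotibe")          # names illustrative
print(authenticate(registry, "Saltzer", "kanotibe"))  # True
print(authenticate(registry, "Saltzer", "guess123"))  # False
print("kanotibe" in registry.values())                # False: not stored
```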

When the user is requested to give his password, at login time, the printer on his terminal is turned off, if possible, or else a background of garbling characters is first printed in the area where he is to type his password. Although the user could be indoctrinated to tear off and destroy the piece of paper containing his password, by routinely protecting it for him the system encourages a concern for security on the part of the user. In addition, if the user's boss (or someone from four levels of management higher) happens to be looking over his shoulder as he logs in, the user is not faced with the awkward social problem of scrambling to conceal his password from a superior who could potentially take offense at an implication that he is not to be trusted with the information.

A time-out is provided to help protect the user who leaves his terminal, is distracted, and forgets to log out. If no activity occurs for a period, a logout is automatically generated. The length of the time-out period can be adjusted to suit the needs of a particular installation. Similarly, whenever service is interrupted by a system failure for more than a moment, a new login

–––––––––––––––––––––––––
* The proxy login is not yet implemented.


is required of all interactive users, since some users may have given up and left their terminals.

Finally, several logging and penetration detection techniques help prevent attacks via the password routine. If a user provides an incorrect password, the event of an incorrect login attempt is noted in a threat-monitoring log, and the user is permitted to try again, up to a limit of ten times, at which point the telephone (or network) connection is forcibly broken by the system, introducing delay to frustrate systematic penetration attempts*. Whenever a user logs in, the time and physical location (terminal identification) of his previous login are printed out in his greeting message, thus giving him an opportunity to notice if his password has been used by someone else in his absence. Similarly, monthly accounting reports break down usage by shift and services used, and may be reviewed on-line at any time, thereby providing an opportunity for the individual to compare his pattern of use with that observed by the system, and perhaps to thereby detect unauthorized use. If either of these mechanisms suggests unauthorized use, the individual involved may ask the system administrator to check the system log, which contains an entry for every login and logout giving date and time, terminal type used, and terminal identification, if any.
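The logging-and-limit behaviour can be sketched compactly. The limit of ten attempts is taken from the text; the class, its field names, and the string outcomes are illustrative only.

```python
class LoginMonitor:
    LIMIT = 10   # attempts permitted before the line is dropped (per text)

    def __init__(self):
        self.log = []        # threat-monitoring log of bad attempts
        self.failures = {}   # consecutive failures per connection

    def attempt(self, connection, ok):
        if ok:
            self.failures[connection] = 0
            return "logged-in"
        self.log.append(("bad-password", connection))
        self.failures[connection] = self.failures.get(connection, 0) + 1
        if self.failures[connection] >= self.LIMIT:
            return "hang-up"   # forcibly break the line, adding delay
        return "try-again"

m = LoginMonitor()
results = [m.attempt("line-1", ok=False) for _ in range(10)]
print(results[8], results[9])  # try-again hang-up
print(len(m.log))              # 10
```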

For a project which maintains especially sensitive information, the project administrator may designate the initial procedure to be executed by some or all processes created using the name of that project as part of its principal identifier. This initial procedure, supplied by the project administrator, has complete control of the process, and can demand further authentication (e.g., a one-time password or a challenge-response scheme), perform project logging of the result, constrain the user to a subset of the available facilities, or initiate a logout sequence, thereby refusing access to the user. In the other direction, some projects may wish to allow unlimited public access to their files. If so, the project administrator may indicate that his project will accept login of unauthenticated users. In such a case, the system

______________________

* With ASCII passwords chosen to match English digraph frequency, a little less than four bits of information are represented by each character (despite the eight or nine bits required to store the characters.) An eight-character password thus carries about 30 bits of information, which would require about 10^9 guesses using an information-theoretic optimum guessing strategy. If one mounted a simultaneous attack from 100 computer-driven terminals, and the system-imposed delays average only 10 milliseconds per attempt, about 10^5 seconds, or one full day of systematic attack, would be required to guess a password. Although use of a uniformly random password generator would increase this work factor by several orders of magnitude, resistance to use of hard-to-remember passwords and the need to make written copies might act to wipe out the gain. Of course, this work factor calculation presumes that the attacker has no further basis on which to narrow the range of password possibilities, for example, by knowing that the user in question may have chosen his own password or by wiretapping a previous login.
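The footnote's arithmetic checks out in round numbers: 30 bits gives 2^30 (about 10^9) possible passwords, and 10^9 attempts at 10 ms each, spread over 100 terminals, take about 10^5 seconds.

```python
# Reproducing the footnote's work-factor estimate.
bits = 4 * 8                      # ~4 bits/char * 8 chars = ~30 bits (rounded)
guesses = 2 ** 30                 # ~1.07e9, "about 10^9 guesses"
terminals = 100                   # simultaneous attacking terminals
delay = 0.010                     # 10 ms system-imposed delay per attempt
seconds = guesses * delay / terminals
print(round(seconds))             # 107374, i.e. about 10^5 seconds
print(round(seconds / 86400, 1))  # 1.2, roughly one full day
```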


does not demand a password, instead assigning the personal name "anonymous" to the principal identifier of the process involved, using the name of the responsible project for the second part of the principal identifier. The principal identifier "anonymous" is the one exception to the registration scheme mentioned earlier. Allowing anonymous users does not compromise the security of the storage system, since the principal identifier is constrained, and all storage system access is based on the principal identifier. The primary use of anonymous users has been for educational purposes, in which all students in a class are to perform some assignment. Sometimes, this feature is coupled with the project-designated initial procedure, so that the project may implement its own password scheme, or control what facilities are made available, so as to limit its financial liability. Some statistical analysis and data-base development projects also permit anonymous use of data-retrieval programs.

The objective of many of these mechanisms, such as simple registration of every user, the proxy login, the anonymous user, concealment of printed passwords, and user-changeable passwords, together with a storage system which permits all authorized sharing of information, is to provide an environment in which there is never any need for anyone to know a password other than his own. Experience with the earlier CTSS system demonstrated that by omitting any of these features, the system itself may encourage borrowing of passwords, with an attendant reduction in overall security.

Primary Memory Protection

We may consider the access control list to be the first level of mechanism providing protection for stored information. Most of the burden of keeping users' programs from interfering with one another, with protected subsystems, and with the supervisor is actually carried by a second level of mechanism, which is descriptor-based. This second level is introduced essentially for speed, so that arbitration of access may occur on every reference to memory. As a result, the second level is implemented mostly in hardware in the central processing unit of the Honeywell 6180. Of course, this strategy requires that the second level of mechanism be operated in such a way as to carry out the intent expressed in the first-level access control lists.

As described by Bensoussan et al.[4], the Multics virtual memory is segmented to permit sharing of objects in the virtual memory, and to simplify address space management for the programmer. The implementation of segmentation uses addressing descriptors, a technique used, for example, in the Burroughs B5000 computer systems[9]. The Burroughs implementation of a descriptor is exclusively as an addressing and type-labeling mechanism, with protection provided on the basis that a process may access only those objects for which it has names. In Multics, the function of the descriptor* is extended to include modes of access (read, write, and execute) and to provide for protected subsystems which share object names with their users. Evans and LeClerc[10] were among the first to describe the usefulness of such an extension.

________________________
* With the exception of type identification, which is not provided in Multics.


As shown in figure one, there are three classes of descriptor extensions for protection purposes: mode control, protected subsystem entry control, and control on which protected subsystems may use the descriptor at all. Every reference of the processor to the segment described by this descriptor is thus checked for validity.

The virtual address space of a Multics process is implemented with an array of descriptors, called a descriptor segment, as in figure two. Every reference to the virtual memory specifies both a segment number (which is interpreted as an index into the descriptor segment) and a word number within the segment.
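The translation path just described can be sketched in software. The structures below are illustrative only, not the 6180 hardware formats: a (segment number, word number) pair is resolved through the per-process descriptor array, with the access mode and segment bounds checked on every reference.

```python
class Descriptor:
    def __init__(self, base, length, modes):
        self.base = base            # absolute address of segment start
        self.length = length        # segment length in words
        self.modes = set(modes)     # permitted modes: read/write/execute

class AccessViolation(Exception):
    pass

def reference(descriptor_segment, segno, wordno, mode):
    """Resolve a virtual (segno, wordno) reference, checking access."""
    if segno >= len(descriptor_segment):
        raise AccessViolation("no such segment")
    d = descriptor_segment[segno]
    if mode not in d.modes or wordno >= d.length:
        raise AccessViolation("mode or bounds check failed")
    return d.base + wordno          # absolute address of the word

dseg = [Descriptor(base=0o4000, length=1024, modes={"read", "execute"})]
print(oct(reference(dseg, 0, 5, "read")))   # 0o4005
try:
    reference(dseg, 0, 5, "write")          # denied: segment is read-only
except AccessViolation as e:
    print("violation:", e)
```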

Figure two also helps illustrate why the protection information is associated with the addressing descriptor rather than with the data itself*. Each computation is carried out in its own address space, so each computation has its own private descriptor segment. Using this mechanism, a single physical segment may appear in different address spaces with different access privileges for different users, even though they are referring to the same physical data. Since in a multiprocessor system such as Multics two such processes may be executing simultaneously, a single protection specification associated with the data is not

________________________

* The alternate option is chosen, for example, in the IBM 360/67 and the IBM 370 "Advanced Function" virtual memory systems[24].



sufficient. Having the protection specification associated with the descriptor allows for such controlled sharing to be handled easily.

An unusual feature of the descriptors used in Multics is embodied in the second and third extensions of figure one. Together, they allow hardware enforcement of protected subsystems. A protected subsystem is a collection of procedures and data bases which are intended to be used only by calls to designated entry points, known in Multics


as gates. If this intention is hardware enforced, it is possible to construct proprietary programs which cannot be read, data base managers which return only statistics rather than raw data to some callers, and debugging tools which cannot be accidentally disabled. The descriptor extensions are used to authenticate subroutine calls to protected subsystems. Two important advantages flow from using a hardware-checked call:

1. Calls to protected subsystems use the same structural mechanisms as do calls to unprotected subroutines, with the same cost in execution time. Thus a programmer does not need to take the fact that he is calling a protected subsystem into account when he tries to estimate the performance of a new program design.

2. It is quite easy to extend to the user the ability to write protected subsystems of his own. Without any special privileges, any user may develop his own proprietary program, data-screening system, or extra authentication system, and be assured that even though he permits others to use his protected subsystem, the information he is protecting receives the same kind of security as does the supervisor itself.

In support of call protection, hardware is also provided to automatically check the addresses of all arguments as they are used, to be sure that the caller has access to them. Checking the range of the argument values is left to the protected subsystem.

Protected subsystems are formed by using the third field of the descriptor extension of figure one. To simplify protected subsystem implementation, Multics imposes a hierarchical constraint on all subsystems which operate within a single process: each subsystem is assigned a number, between 0 and 7, and it is permitted to use all of those descriptors containing protected subsystem numbers greater than or equal to its own. Among the descriptors available to a subsystem may be some permitting it to call to the entry points of other protected subsystems. This scheme goes by the name rings of protection, and is more completely described by Graham[12] and by Schroeder and Saltzer[22].* As far as is known, the only previously existing systems to permit general, user-constructed protected subsystems are the M.I.T. PDP-1 time-sharing system[1] and the CAL operating system[15].
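The ring rule stated above reduces to two small predicates, sketched here with illustrative names: a subsystem may use exactly those descriptors whose ring number is greater than or equal to its own, and a call inward (toward a smaller ring number) is honored only through a designated gate entry point.

```python
def may_use(current_ring, descriptor_ring):
    """A subsystem may use descriptors of equal or less-protected rings
    (numbers greater than or equal to its own, on the 0-7 scale)."""
    return descriptor_ring >= current_ring

def may_call(current_ring, target_ring, is_gate):
    """Outward or same-ring calls are ordinary; inward calls (toward a
    smaller ring number) are permitted only via a gate entry point."""
    if target_ring >= current_ring:
        return True
    return is_gate

print(may_use(current_ring=4, descriptor_ring=5))  # True: less protected
print(may_use(current_ring=4, descriptor_ring=0))  # False: more protected
print(may_call(4, 0, is_gate=True))                # True: through a gate
print(may_call(4, 0, is_gate=False))               # False: no gate, no entry
```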

The descriptor-based strategy permits two further simplifying steps to be taken:

1. All information in the storage system is read and written by mapping it into the virtual memory, and then using load and store instructions whose validity is checked by the descriptor mechanism.

2. The supervisor itself is treated as an example of a protected subsystem, which operates in a virtual memory arbitrated by descriptors,

_______________________

* A more general approach, not yet implemented, but which removes the restriction that the protected subsystem be hierarchical, is described by Schroeder in his doctoral thesis[21].


exactly the same as do the user programs which it supports.

The reasons why the first step provides simplification for the user have been discussed extensively in the literature[4,13]. The second step deserves some more comment. By placing the supervisor itself under the control of the descriptors, as in figure two, a rather substantial benefit is achieved: the supervisor then operates with the same addressing and machine language code generation environment as the user, which means that supervisor programs may be constructed using the same compilers and debugging tools available to a user. The effect on protection is non-trivial: programs constructed and checked out with more powerful tools tend to have fewer errors, and errors in the supervisor which compromise protection often escape notice.

Perhaps equally important is that the determination of whether one is in or out of the supervisor is not based on some processor mode bit which can be accidentally left in the wrong state when control is passed to a user program. Instead, the addressing privileges of the current protected subsystem are governed by the subsystem identification, located in the descriptor of the segment which supplied the most recent instruction. Every transfer of control to a different program is thus guaranteed to automatically produce addressing privileges appropriate to the new program. If a supervisor procedure should accidentally transfer to a location in a user procedure, that procedure will find that the protection environment has automatically returned to the state appropriate for running user procedures.

Finally, the descriptors are adjusted to provide only the amount of access required by the supervisor, in consonance with design principle six. For example, procedures are not writeable, and data bases are not executable. As a result, programming errors related to using incorrect addresses tend to be immediately detected as protection violations, and do not persist into delivered systems. If one reviews the operation of Multics starting with the initial loading of the system on an empty machine, he will find that only the first hundred or so instructions do not use descriptors. Once a descriptor segment has been fashioned, all memory references by the processor from that point on are arbitrated by descriptors.

These mechanisms do not prohibit the supervisor from making full use of the hardware when appropriate. Rather, they protect against accidental overuse of supervisor privileges. Clearly, the supervisor must be able to write into the descriptor segment, in order to initially set it up, and also to honor requests to map additional objects of the storage system into segments of the virtual memory. This adjustment of descriptors is done with great care, using a single procedure whose only function is to construct descriptors which correspond to access control list entries. A call to the storage system which results in adjustment of a descriptor is illustrated in figure two. In this figure, it is worth noting that even the writing of the descriptor is done with use of a descriptor for the descriptor itself. Thus there is little danger of accidentally modifying a descriptor segment belonging to some other user,


since the only descriptor segment routinely appearing in the virtual memory of this process is its own.

Entries to the supervisor which implement "special privileges" (e.g., the operator may have the privilege of shutting the system down) are generally controlled by ordinary access control lists, either on the gates of supervisor entries, or in some cases by having the supervisor procedure access some data segment before proceeding with the privileged operation. If the user attempting to invoke the privilege does not appear on the access control list of the data segment, an access violation fault will occur, rather than an unauthorized use of the privilege.

The final step of "locking up" the supervisor lies in management of source-sink input-output. Recall first that all access to on-line catalogued information of the storage system is handled by direct mapping into the virtual memory. Thus, input and output operations in Multics consist only of true source-sink operations, that is, of streams of information which enter or leave the system. Such operations are performed by hardware I/O channels, following channel programs constructed by the I/O system in response to I/O requests of the calling program. These I/O channel programs are placed in a part of the virtual memory accessible only to the supervisor*. Similarly, all input data is read into a protected buffer area, accessible only to the supervisor. Only after the input has arrived and the supervisor has had a chance to check it is it turned over to the user, either by copying it, or by modifying a descriptor to make it accessible to the user. A similar, inverse pattern is used on output. Since during I/O neither the data nor the channel program is accessible to the user, there is no hesitation about permitting him to continue his computation in parallel with the I/O operation. Thus, fully asynchronous operations are possible.
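
The copy-in pattern for input can be sketched as below (function names are hypothetical; this is a simplified model, not the Multics I/O system). Input accumulates in a buffer accessible only to the supervisor, is checked, and only then is a copy turned over to the user:

```python
# Illustrative sketch of the protected-buffer input pattern.
def supervisor_read(device_stream, check):
    protected_buffer = bytearray()       # accessible only to the supervisor
    for chunk in device_stream:          # the I/O channel fills the buffer
        protected_buffer.extend(chunk)
    if not check(bytes(protected_buffer)):
        raise ValueError("input rejected by supervisor check")
    return bytes(protected_buffer)       # copy handed to the user

# The user may compute in parallel while the channel runs, since neither
# the buffer nor the channel program is in his accessible address space.
user_data = supervisor_read([b"hel", b"lo"], check=lambda b: len(b) < 4096)
```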

The system is initialized from a magnetic tape which contains copies of every program residing in the most protected area. In this way, the integrity of the protection mechanisms depends on protecting only one magnetic tape, and is independent of the contents of the secondary storage system (disk and drums) which are more exposed to compromise by maintenance staff. On the other hand, since the system is designed for continuous operation, there

______________________

* And to the I/O channels, which use absolute addresses. If separate I/O channels were available to each physical device and the I/O channels used the addressing descriptors, protected supervisor procedures would not be required for I/O operations after device assignment (which requires a descriptor to be constructed.)

Here is an example of a place where building a new system, rather than modifying an old one, has simplified matters. On some computer systems, the user constructs his own channel programs, and may even expect to modify them dynamically during channel operation. It is quite hard to invent a satisfactory scheme for protecting other users against such I/O operations without placing restrictions on their scope, or inhibiting parallel operation of the user with his I/O channel programs.


appears to be no need for a separate package consisting of passwords and clearance information as suggested by Weissman[28].

To round out the discussion of primary and virtual memory protection, we should consider storage residues. A storage residue is the data copy left in a physical storage device after the previous user has finished with it. Storage residues must be carefully controlled to avoid accidental release of information. In a virtual memory system, the only way a storage residue could be examined would be to read from a previously unused part of the virtual memory. By convention, in Multics, the supervisor provides pages of zeros in response to such attempts. Since all access to on-line storage is via the virtual memory, no additional mechanism is required to insure that a user never sees a residue from the storage system.
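
The zeros-on-first-reference convention can be modeled in a few lines (a simplified sketch, not Multics code; the page size is arbitrary). A page never written by the process reads as zeros, so no residue from a previous user is ever observable:

```python
# Illustrative sketch: unused virtual memory pages read as zeros.
PAGE_SIZE = 1024

class VirtualMemory:
    def __init__(self):
        self.pages = {}                  # only pages this process has written

    def read_page(self, page_no):
        # A page never written by this process yields zeros, not a residue.
        return self.pages.get(page_no, bytes(PAGE_SIZE))

    def write_page(self, page_no, data):
        self.pages[page_no] = bytes(data).ljust(PAGE_SIZE, b"\x00")

vm = VirtualMemory()
```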

Weaknesses of the Multics Protection Mechanisms

One is always hesitant to list the weaknesses in his system, for a variety of reasons. Often, they represent mistakes or errors of judgement, which are embarrassing to admit. Such a list provides an easy target for detractors of a design, and in the protection area provides an invitation for potential attackers at production installations which happen to be using the system. In the case of a system still evolving, such as Multics, known weaknesses are being corrected as rapidly as feasible, so any list of weaknesses is rapidly obsolete. And finally, any list of weaknesses is almost certainly incomplete, being subject to all of the built-in blindnesses of its authors. Nevertheless, such a list is quite useful, both to look for specific interesting unsolved problems, and also to establish what level of considerations are still considered relevant by the designers of the system. The weaknesses described here begin with two major areas, followed by several smaller problems.

Probably the most important weakness in the current Multics design lies in the large number of different program modules which have the ability, in principle, to compromise the protection system. Of the 2000 program modules which comprise Multics, some 400, or 20%, are in the "most protected area", consisting of system initialization, the storage system, miscellaneous supervisor functions, and system shutdown. Although all of these 400 modules operate using the descriptor-based virtual memory described earlier, the descriptors serve for them only as protection against accidentally generated illegal address references; these modules are not constrained by the inability to construct suitable descriptors in the same way as the remaining 1600 modules and user programs. Thus any of these 400 modules (averaging perhaps 200 lines of source code each) might contain an error which compromises the security mechanisms, or even a security violation intentionally inserted by a system programmer. The large number of programs and the very high internal intricacy level frustrates line-by-line auditing for errors, misimplementation, or intentionally planted trapdoors. This weakness is not surprising for the first implementation of a sophisticated system, and upon review it is now apparent that with mild software restructuring plus help from specialized hardware the number of lines of code in the most protected area can be greatly reduced --


perhaps by as much as an order of magnitude. In examining many specific examples, there seem to have been three common, interrelated reasons for the extra bulk currently found in the protected area:

• economics: at the time of design, a function could be implemented more cheaply in the most protected region. Since the protection ring mechanism was originally simulated by software, there were design decisions based on the assumption that calls across ring boundaries were expensive.

• rush to get on the air: in the hurry to get an initial version of the system going, a shortcut was found, which required unnecessarily placing a module in the most protected region.

• lack of understanding: a complex subsystem was not carefully enough analyzed to separate the parts requiring protection; the entire subsystem was therefore protected.

With hardware-supported protection rings, hindsight, and the experience of a complete working implementation, it is apparent that a smaller "most protected area" can be constructed. It now appears possible to make complete auditing a feasible task. A project is now underway to test this hypothesis by attempting to develop an auditable version of the most protected region of Multics.

The second serious weakness in the current Multics design is in the complexity of the user interface. In creating a new segment, a user should specify permitted lists of users and projects, specify allowed modes of access for each, decide whether or not backup copies should be allowed and whether or not bulk I/O should be permitted for the segment, and whether or not the segment should be part of a protected subsystem. He should check that permissions he has given to modify higher-level directories interact in the desired way with his current intent. A variety of defaults have been devised to reduce the number of explicit choices which need be made in common cases: as already mentioned, a per-directory "initial access control list" is by default assigned to any new segment created in that directory. The defaults merely hide the complex underlying structure, however, and do not help the user with an unusual protection requirement, who must figure out for himself how to accomplish his intentions amid a myriad of possibilities, not all of which he understands. The situation for a project administrator, who can control the initial program his users get, and may perhaps force all of his users to interact via a limited, protected subsystem, is similar, but with fewer defaults and more possibilities available.

The solution to this problem lies in better understanding the nature of the typical user's mental description of protection intent, and then devising interfaces which permit more direct specification of that protection intent. As an example, a graduate student devised a simple Multics program which prints a list of all users which may force access to a segment (by virtue of having modify access to some higher level directory.) This list does not correspond to any single access control list found anywhere in the system, yet it is clearly relevant to one's image of how the segment is protected. Setting up the mechanisms of access


control lists, accessibility modes, and rings of protection perhaps should be viewed as a problem of programming in which, as usual, the structures available in initial designs do not correspond directly with the user's way of thinking, even though there may be some way of programming the structure to accomplish any intent. In the area of protection, the problem has a special edge, since if a user, through confusion, devises an overly permissive protection specification, he may not discover his mistake until too late.

At a level of significance well below the two major points of system size and user interface complexity are several other kinds of problems. These problems are felt to be less significant not because they cannot be exploited as easily, but rather because the changes required to strengthen these areas are straightforward and relatively easy to implement. These problems include:

1. Communication links are weak. Of course, any use of switched telephone lines leads to vulnerability, but provision for integration of a Lucifer-like system[23] for end-to-end encryption of messages sent over public lines or through a communication network would probably be a desirable (and simple) addition. As an example of a typical problem in this area, the Bell System 202C6 DATAPHONE dataset, which is used for 1200 bps terminals, does not include provision for reporting telephone line disconnection to the computer system during data output transmission. If a user accidentally hangs up his telephone line during output, another user dialing to the same port on the computer may receive the output, and capture control of the process. Although remedial measures such as requiring reauthentication every few minutes could be used, automatic detection of the line disconnection would be far more reassuring. (Note that for the more commonly used 103A DATAPHONE dataset, which does report telephone line disconnections, this problem does not exist; upon observing the dropping of the carrier detect line from the dataset, Multics immediately logs the user out.)

2. The operator interface is weak. The primary interface of the operator is as a logged-in user, where his interactions can be logged, verified, and suitably restricted. However, he has a secondary interface: the switches and lights of the hardware itself. It would appear that the potential for error or sabotage via this route is far higher than necessary. If every hardware switch in the system were both readable and settable by (protected supervisor) programs, then all such switches could be declared off limits to the operator, and perhaps placed behind locked panels. Since all operator interaction would then be forced to take place via his terminal, his requests can be checked for plausibility by a program. What has really gone wrong here is a failure to completely reconsider the role of the operator in a computer system operating as a utility. Functions such as operation of card readers and printers do not require access to switches on the side of the processor -- or even physical presence in the same room as the computer, for that matter. The decision that a system failure has occurred and the


appropriate level of recovery action to take are probably the operator functions which are hardest to automate or decouple from the physical machine room, but certainly much movement in this direction would be easy to accomplish.

3. Users are permitted to specify their own passwords, leading to easy-to-guess passwords. The resulting loss of security has already been well documented in the literature[25], and this method has been used at least once to improperly obtain access to Multics at M.I.T., when a programmer chose as his Multics password the same password he used on another, unsecured time-sharing system. A better strategy here would be to force the use of system-generated randomly chosen passwords, and also to place an expiration date on them, to force periodic password changes. For sensitive applications, or situations where the password must be exposed to unknown observers (as in using a system via the ARPA network), the system should provide lists of one-time passwords.
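
The suggested policy can be sketched as follows (the alphabet, password length, and lifetime are arbitrary illustrative choices, not part of the original design):

```python
# Illustrative sketch: system-generated random passwords carrying an
# expiration date, to force periodic password changes.
import secrets
import string
from datetime import datetime, timedelta

ALPHABET = string.ascii_letters + string.digits
LIFETIME = timedelta(days=90)          # arbitrary expiration interval

def issue_password(length=8):
    """Return a randomly chosen password and its expiration date."""
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return password, datetime.utcnow() + LIFETIME

def still_valid(expires, now=None):
    # A login attempt after the expiration date forces a password change.
    return (now or datetime.utcnow()) < expires
```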

4. The supervisor interface is vulnerable to misimplementation. Although this difficulty could be described as a specific example of a supervisor too large and complex to audit, it is worth identifying in its own right. The problem has to do with checking the range of arguments passed to the supervisor. The hardware automatically checks that argument addresses are legitimately accessible to the caller, and completely checks all use of pointer variables as indirect addresses. However, it provides no help in determining whether the ultimate argument values are "reasonable" for the supervisor entry in question. Each entry must be prepared to operate correctly (or at least safely) no matter what combination of argument values is supplied by the caller. Certain kinds of interfaces make for difficulty in auditing a program to see if it properly checks the range of arguments. For example, if the allowed range of one argument depends on the result of computation which is based in part on another argument, then it may be hard to enforce a programming standard which requires that all supervisor entries check the range of all their arguments before performing any other computation. The current Multics interface has examples of situations in which, to verify that a supervisor entry is correctly programmed so that it does not blow up when presented with an illegal argument, one must trace hundreds of lines of code and many subroutine calls. Such interfaces discourage routine auditing of the supervisor interface, and probably result in some undetected implementation errors. It would be interesting to explore the design of argument range-checking hardware, which would force the system programmer to declare the allowed range of arguments for his entries, and thereby force out into the open the existence of arguments whose range is not trivially testable, for interface design revision.
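
The easily auditable style of entry described above looks like this in sketch form (the entry name and limits are hypothetical): all argument ranges are checked before any other computation is performed:

```python
# Illustrative sketch: a supervisor entry that validates the range of
# every argument before touching any supervisor state.
class BadArgument(Exception):
    pass

MAX_SEGMENT_NO = 4095        # hypothetical limit
MAX_SEGMENT_LENGTH = 2**18   # hypothetical limit

def set_segment_length(segment_no, new_length):
    # Range checks come first, before any other computation.
    if not 0 <= segment_no <= MAX_SEGMENT_NO:
        raise BadArgument("segment_no out of range")
    if not 0 <= new_length <= MAX_SEGMENT_LENGTH:
        raise BadArgument("new_length out of range")
    return ("ok", segment_no, new_length)
```

An entry written in this style can be audited by inspecting its first few lines; the hard cases are those where one argument's allowed range depends on computation over another.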

5. Secondary storage residues are not cleared until they are reassigned. When a segment is deleted, all descriptors for the physical


storage area are destroyed, and the area is marked as reusable. No further descriptors for the storage area will ever be constructed without first clearing the storage area, but meanwhile the residue remains intact. In principle, there is no way to exploit these residues using the system itself, but automatic overwriting of the residues at the time of deletion would provide an additional safeguard against accidents, and guarantee that a segment, once deleted, is not accessible even to a hardware maintenance engineer. A similar problem exists for the magnetic tapes containing backup copies of segments. In at least one case on another time-sharing system, the persistence of backup copies has proved embarrassing: a government agency requested that a file containing a list of special telephone access codes be completely deleted; the installation administrator found himself with no convenient way to purge the residues on the backup tapes. These tapes should probably be encrypted, using per-segment keys known only by the operating system. It is an interesting problem to construct a strategy for safely encrypting backup copy tapes, while ensuring that encrypting keys do not get destroyed upon system failure, making the backup copies worthless.
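
The proposed safeguard of overwriting at deletion time amounts to the following (a simplified model; the function name is hypothetical): the residue is destroyed the moment the segment is deleted, rather than whenever the area happens to be reassigned:

```python
# Illustrative sketch: clear a deleted segment's physical storage
# immediately, so no residue survives for a maintenance engineer to read.
def delete_segment(storage, start, length):
    storage[start:start + length] = bytes(length)  # overwrite the residue now
    return (start, length)                         # area marked as reusable

disk = bytearray(b"secret data")
delete_segment(disk, 0, len(disk))
```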

6. Over-privileged system administrator. Some system functions have been organized in such a way that the administrators of the system require more privilege than really necessary. For example, measures of secondary storage usage are stored in the using directory rather than in an account file. As a result, the administrative accounting programs which prepare bills for secondary storage use must have access to read every directory in the storage system. For another example, the "locksmith" function, mentioned earlier, is currently implemented by giving the locksmith permission to modify the root directory of the storage system directory hierarchy. Thus the locksmith has the unaudited ability to grant himself access to every file in the storage system. Such a design means that one of the easiest ways to attack is to attempt to influence the system administrator, possibly by surreptitiously inserting traps in some program he is likely to use* while running a process whose principal identifier needlessly permits extensive privileges. The countermeasure, currently partially implemented, is to provide administrators with protected subsystems from which they cannot escape, which are certified to exercise a minimum of privilege, and which maintain audit trails.

7. Ponderous backup copy and retrieval scheme. It has been noticed that the general method currently used for indexing the contents of storage system backup copy tapes is weak, so that the only effective way to identify a desired copy of a damaged segment is to permit the user to manually scan printed journals of the names of the segments copied onto each tape. These journals contain the names of

________________________

* This technique has been described as the "Trojan Horse" attack[5].


other users' segments and directories, and were intended for use only for emergency situations and with proper clearance. Unfortunately, the number of retrieval requests which can be handled on other than an emergency basis is a sensitive function of the quality of the tools available for searching the journals automatically while maintaining privacy. A simple scheme based on a protected subsystem for searching journals has recently been proposed, but is not yet implemented.

8. Counter-intelligence techniques have not been exploited. Although logs of suspicious events (such as incorrectly supplied passwords) are maintained, no true counter-intelligence strategies are employed. For example, Turn, et al.[26] have suggested inserting carefully monitored apparent flaws in the system. These flaws would be intended to attract a would-be attacker; any attempt to exploit them would result in an early warning of attack and an opportunity to apprehend the attacker.

9. Some areas of potential vulnerability have not been examined. These include vulnerability to undetected failures of the hardware protection apparatus[17],* electromagnetic radiation from the physical hardware machine[3], and traffic analysis possibilities, using performance measurement tools available to any user.

It is interesting to note that none of these nine specific weaknesses represent intrinsic difficulties of full-scale computer utility systems -- relatively straightforward modification can easily strengthen any of these areas. In fact, neither the two major weaknesses nor the nine specific ones represent "holes" in the sense of being immediately exploitable by an attacker. Rather, they are areas in which an attacker is more likely to discover a method of entry caused by misimplementation, misunderstanding, or mismanagement of an otherwise securable system. Thus we might describe the protection system as usable, though with known areas of weakness.

Conclusions

This paper has surveyed the complete range of information protection techniques which have been applied to a specific example of a system designed for production use as a computer utility. Over three years of experience in a production environment at M.I.T. has demonstrated that the mechanisms are generally useful. A commonly asked question (especially in the light of recent experiences with attempts to add security to other commercially available computer systems) is "how much performance is lost?" This question is difficult to answer since, as is evident, the protection structure is deeply integrated into the system and

______________________

* Although the 6180 hardware is less vulnerable than some. An asynchronous processor-memory interface tends to stop when an error occurs rather than proceeding with wrong data; complete instruction decoding explicitly traps all but legal operation codes and addressing modifiers; and the multiprocessor organization helps obviate the need for pipelines and other accident-prone highly-tuned logic tricks.


cannot be simply "turned off" for an experiment.* However, one significant observation may be made. In general, the protection mechanisms are closely related to naming mechanisms, and can be implemented with a minimum of extra fuss in a system which provides a highly structured naming environment. Thus, the users of Multics apparently have found that the overall package of a structured virtual memory with protection comes at an acceptable price.

The Multics protection mechanisms were designed to be basic and extendable, rather than a complete implementation of some specialized security model. Thus there are mechanisms which may be used to provide the multilevel security classification (top secret, secret, confidential, unclassified) and the access compartments of the U.S. governmental security system[27]. If one wished to precisely imitate the government security system, he could do so without altering the operating system. In this sense, Multics differs from, say, SDC's ADEPT[28] and IBM's Resource Security System[14], both of which specifically implement models of the government security system, but which do not permit, for example, user-written program-protected data bases.

We should also note that the Multics system was designed to be securable, which is different from stating that any particular site is actually operated in a completely secured fashion. Such matters as machine room security, certification of hardware maintenance engineers and system operators, and telephone wire tapping are largely outside of the scope of operating system design. In addition, correct administration can be encouraged by the design of an operating system, but not enforced. Further, we have reported the design of the system, realizing that its implementation has not yet been completely audited and therefore may contain trivial programming errors which affect protection.

Acknowledgements

As is usual in any large system design, many individuals have contributed ideas and suggestions, and a complete acknowledgement is very hard to compose. Professor E.L. Glaser provided the firm conviction that information protection was a reasonable goal during the critical initial design period of the Multics system. He also suggested several of the design principles and many of the specific protection mechanisms which were ultimately included. Professor R.M. Graham worked out the initial design of the protection ring mechanism, and Professor M.D. Schroeder expanded that design to include automatic argument validation and complete hardware support. Integration of protection into the storage system was accomplished by R.C. Daley. More recent upgradings of the user interface have been designed by V.L. Voydock, R.J. Feiertag, and T.H. VanVleck. P.A. Belmont,

______________________

* In analogy, we may consider a mouse. The mouse has an elaborate system which maintains a constant body temperature, where, for example, a lizard does not. There is a sense in which the mouse is thereby less efficient, but one may also credibly argue that the question of efficiency is incorrectly posed. In a similar way, comparison of systems with and without protection may also be incorrect. (Analogy thanks to Carla M. Vogt.)

Page 15: This document was originally prepared off-line. This file ...web.mit.edu › Saltzer › www › publications › protmult.pdf · Seven design principles help provide insight into

D.A. Stone, and M.A. Meer developed an early internal memorandum which helped articulate the design issues. Others offering significant help include Professor F.J. Corbató, C.T. Clingen, D.D. Clark, M.A. Padlipsky, and P.G. Neumann. Of course, every system programmer who worked in the most protected region of Multics has also contributed by his extra care and understanding of the protection objective.

References

1. Ackerman, W.B., and W.W. Plummer, "An Implementation of a Multiprocessing Computer System," ACM Symposium on Operating Systems Principles, October, 1967, Gatlinburg, Tennessee.

2. Baran, P., "Security, Secrecy, and Tamper-Free Considerations," On Distributed Communications 9, Rand Corp. Technical Report RM-3765-PR.

3. Beardsley, C.W., "Is Your Computer Insecure?" IEEE Spectrum 9, 1 (January, 1972), pp. 67-78.

4. Bensoussan, A., C.T. Clingen, and R.C. Daley, "The Multics Virtual Memory: Concepts and Design," Comm. ACM 15, 5 (May, 1972), pp. 308-318.

5. Branstad, D.K., "Privacy and Protection in Operating Systems," Computer 6, 1, 1973, pp. 43-47.

6. The Compatible Time-Sharing System: A Programmer's Guide, M.I.T. Press, 1966.

7. Corbató, F.J., J.H. Saltzer, and C.T. Clingen, "Multics: The First Seven Years," AFIPS Conf. Proc. 40, (1972 SJCC), pp. 571-583.

8. Daley, R.C., and P.G. Neumann, "A General-Purpose File System for Secondary Storage," AFIPS Conf. Proc. 27, (1965 FJCC), pp. 213-229.

9. The Descriptor -- A Definition of the B5000 Information Processing System. Burroughs Corporation, Business Machines Group, Sales Technical Services, Systems Documentation, Detroit, Michigan, 1961.

10. Evans, D.C., and J.Y. LeClerc, "Address Mapping and the Control of Access in an Interactive Computer," AFIPS Conf. Proc. 30, (1967 SJCC), pp. 23-30.

11. Glaser, E.L., "A Brief Description of Privacy Measures in the Multics Operating System," AFIPS Conf. Proc. 30, (1967 SJCC), pp. 303-304.

12. Graham, R.M., "Protection in an Information Processing Utility," Comm. ACM 11, 5 (May, 1968), pp. 365-369.

13. Holland, S.A., and C.J. Purcell, "The CDC Star-100 -- A Large Scale Network Oriented Computer System," IEEE International Computer Society Conf., (September, 1971), pp. 55-56.

14. IBM Application Program Manual, "OS/MVT with Resource Security, General Information and Planning Manual," File no. GH20-1058-0, IBM Corporation, December, 1971.

15. Lampson, B.W., "An Overview of the CAL Time-Sharing System," Computer Center, University of California, Berkeley, (September 5, 1969).

16. Lampson, B.W., "Protection," Proc. 5th Princeton Conf. on Information Sciences and Systems, (March, 1971), pp. 437-443.


17. Molho, L.M., "Hardware Aspects of Secure Computing," AFIPS Conf. Proc. 36, (1970 SJCC), pp. 135-141.

18. Needham, R.M., "Protection Systems and Protection Implementations," AFIPS Conf. Proc. 41, Vol. I, (1972 FJCC), pp. 572-578.

19. Peters, B., "Security Considerations in a Multi-Programmed Computer System," AFIPS Conf. Proc. 30, (1967 SJCC), pp. 283-286.

20. Rotenberg, L., "Making Computers Keep Secrets," Ph.D. Thesis, Department of Electrical Engineering, Massachusetts Institute of Technology, September, 1973. (Also available as M.I.T. Project MAC Technical Report TR-116.)

21. Schroeder, M.D., "Cooperation of Mutually Suspicious Subsystems in a Computer Utility," Ph.D. Thesis, Department of Electrical Engineering, Massachusetts Institute of Technology, September, 1972. (Also available as M.I.T. Project MAC Technical Report TR-104.)

22. Schroeder, M.D., and J.H. Saltzer, "A Hardware Architecture for Implementing Protection Rings," Comm. ACM 15, 3 (March, 1972), pp. 157-170.

23. Smith, J.L., W.A. Notz, and P.R. Osseck, "An Experimental Application of Cryptography to a Remotely Accessed Data System," Proc. ACM 1972 Conf., pp. 282-297.

24. System 370 Principles of Operation. IBM Systems Reference Library File no. GA22-7000.

25. "Third Party ID Aided Program Theft," ComputerWorld V, 14, April 7, 1971.

26. Turn, R., R. Fredrickson, and D. Hollingworth, Data Security at the Rand Corporation, Rand Corp. Technical Report P-4914, October, 1972.

27. Ware, W., et al., "Security Controls for Computer Systems," Rand Corp. Technical Report R-609, 1970. (Classified Confidential.)

28. Weissman, C., "Security Controls in the ADEPT-50 Time-Sharing System," AFIPS Conf. Proc. 35, (1969 FJCC), pp. 119-133.

29. Wilkes, M.V., Time-Sharing Computer Systems. American Elsevier Publishing Co., 1968.

