  • 8/3/2019 archprog-brinch_hansen

    1/328

THE ARCHITECTURE OF CONCURRENT PROGRAMS


Prentice-Hall Series in Automatic Computation

AHO, ed., Currents in the Theory of Computing
AHO and ULLMAN, The Theory of Parsing, Translation, and Compiling, Volume I: Parsing; Volume II: Compiling
ANDREE, Computer Programming: Techniques, Analysis, and Mathematics
ANSELONE, Collectively Compact Operator Approximation Theory and Applications to Integral Equations
AVRIEL, Nonlinear Programming: Analysis and Methods
BENNETT, JR., Scientific and Engineering Problem-Solving with the Computer
BLAAUW, Digital System Implementation
BLUMENTHAL, Management Information Systems
BRENT, Algorithms for Minimization without Derivatives
BRINCH HANSEN, The Architecture of Concurrent Programs
BRINCH HANSEN, Operating System Principles
BRZOZOWSKI and YOELI, Digital Networks
COFFMAN and DENNING, Operating Systems Theory
CRESS, et al., FORTRAN IV with WATFOR and WATFIV
DAHLQUIST, BJORCK, and ANDERSON, Numerical Methods
DANIEL, The Approximate Minimization of Functionals
DEO, Graph Theory with Applications to Engineering and Computer Science
DESMONDE, Computers and Their Uses, 2nd ed.
DIJKSTRA, A Discipline of Programming
DRUMMOND, Evaluation and Measurement Techniques for Digital Computer Systems
ECKHOUSE, Minicomputer Systems: Organization and Programming (PDP-11)
FIKE, Computer Evaluation of Mathematical Functions
FIKE, PL/1 for Scientific Programmers
FORSYTHE, MALCOLM, and MOLER, Computer Methods for Mathematical Computations
FORSYTHE and MOLER, Computer Solution of Linear Algebraic Systems
GEAR, Numerical Initial Value Problems in Ordinary Differential Equations
GILL, Applied Algebra for the Computer Sciences
GORDON, System Simulation
GRISWOLD, String and List Processing in SNOBOL4: Techniques and Applications
HANSEN, A Table of Series and Products
HARTMANIS and STEARNS, Algebraic Structure Theory of Sequential Machines
HILBURN and JULICH, Microcomputers/Microprocessors: Hardware, Software, and Applications
HUGHES and MICHTOM, A Structured Approach to Programming
JACOBY, et al., Iterative Methods for Nonlinear Optimization Problems
JOHNSON, System Structure in Data, Programs, and Computers
KIVIAT, et al., The SIMSCRIPT II Programming Language
LAWSON and HANSON, Solving Least Squares Problems
LORIN, Parallelism in Hardware and Software: Real and Apparent Concurrency
LOUDEN and LEDIN, Programming the IBM 1130, 2nd ed.
MARTIN, Communications Satellite Systems
MARTIN, Computer Data-Base Organization, 2nd ed.
MARTIN, Design of Man-Computer Dialogues



MARTIN, Design of Real-Time Computer Systems
MARTIN, Future Developments in Telecommunications, 2nd ed.
MARTIN, Principles of Data-Base Management
MARTIN, Programming Real-Time Computing Systems
MARTIN, Security, Accuracy, and Privacy in Computer Systems
MARTIN, Systems Analysis for Data Transmission
MARTIN, Telecommunications and the Computer, 2nd ed.
MARTIN, Teleprocessing Network Organization
MARTIN and NORMAN, The Computerized Society
MCKEEMAN, et al., A Compiler Generator
MEYERS, Time-Sharing Computation in the Social Sciences
MINSKY, Computation: Finite and Infinite Machines
NIEVERGELT, et al., Computer Approaches to Mathematical Problems
PLANE and MCMILLAN, Discrete Optimization
POLIVKA and PAKIN, APL: The Language and Its Usage
PRITSKER and KIVIAT, Simulation with GASP II: A FORTRAN-based Simulation Language
PYLYSHYN, ed., Perspectives on the Computer Revolution
RICH, Internal Sorting Methods Illustrated with PL/1 Programs
RUDD, Assembly Language Programming and the IBM 360 and 370 Computers
SACKMAN and CITRENBAUM, eds., On-Line Planning: Towards Creative Problem-Solving
SALTON, ed., The SMART Retrieval System: Experiments in Automatic Document Processing
SAMMET, Programming Languages: History and Fundamentals
SCHAEFER, A Mathematical Theory of Global Program Optimization
SCHULTZ, Spline Analysis
SCHWARZ, et al., Numerical Analysis of Symmetric Matrices
SHAH, Engineering Simulation Using Small Scientific Computers
SHAW, The Logical Design of Operating Systems
SHERMAN, Techniques in Computer Programming
SIMON and SIKLOSSY, eds., Representation and Meaning: Experiments with Information Processing Systems
STERBENZ, Floating-Point Computation
STOUTEMYER, PL/1 Programming for Engineering and Science
STRANG and FIX, An Analysis of the Finite Element Method
STROUD, Approximate Calculation of Multiple Integrals
TANENBAUM, Structured Computer Organization
TAVISS, ed., The Computer Impact
UHR, Pattern Recognition, Learning, and Thought: Computer-Programmed Models of Higher Mental Processes
VAN TASSEL, Computer Security Management
VARGA, Matrix Iterative Analysis
WAITE, Implementing Software for Non-Numeric Applications
WILKINSON, Rounding Errors in Algebraic Processes
WIRTH, Algorithms + Data Structures = Programs
WIRTH, Systematic Programming: An Introduction
YEH, ed., Applied Computation Theory: Analysis, Design, Modeling

To my father

THE ARCHITECTURE OF CONCURRENT PROGRAMS

PER BRINCH HANSEN
University of Southern California

PRENTICE-HALL, INC., Englewood Cliffs, New Jersey 07632


Library of Congress Cataloging in Publication Data

Brinch Hansen, Per.
The architecture of concurrent programs.
(Prentice-Hall series in automatic computation)
Summary in Danish.
Bibliography: p.
Includes index.
1. Concurrent Pascal (Computer programming language)
2. Operating systems (Computers) I. Title.
QA76.73.C65B73 1977 001.6'424 77-4901
ISBN 0-13-044628-9

© 1977 by Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632

All rights reserved. No part of this book may be reproduced in any form, by mimeography or by any means, without permission in writing from the publisher.

10 9 8 7 6 5 4 3

Printed in the United States of America

PRENTICE-HALL INTERNATIONAL, INC., London
PRENTICE-HALL OF AUSTRALIA PTY. LTD., Sydney
PRENTICE-HALL OF CANADA, LTD., Toronto
PRENTICE-HALL OF INDIA PRIVATE LIMITED, New Delhi
PRENTICE-HALL OF JAPAN, INC., Tokyo
PRENTICE-HALL OF SOUTHEAST ASIA PTE. LTD., Singapore
WHITEHALL BOOKS LIMITED, Wellington, New Zealand


CONTENTS

PROGRAMMING TOOLS

1. DESIGN PRINCIPLES 3

1.1. Program quality 3
1.2. Simplicity 4
1.3. Reliability 6
1.4. Adaptability 8
1.5. Portability 9
1.6. Efficiency 9
1.7. Generality 10
1.8. Conclusion 11
1.9. Literature 11

2. PROGRAMMING CONCEPTS 15

2.1. Concurrent processes 16
2.2. Private data 17
2.3. Peripherals 19
2.4. Shared data 19
2.5. Access rights 21
2.6. Abstract data types 23
2.7. Hierarchical structure 25


3. SEQUENTIAL PASCAL 29

3.1. Program structure 30
3.2. Constants and variables 31
3.3. Simple data types 33
3.4. Structured data types 36
3.5. Routines 40
3.6. Scope rules 41
3.7. Type checking 42
3.8. Literature 45

4. CONCURRENT PASCAL 47

4.1. Input/output 47
4.2. Processes 49
4.3. Monitors 52
4.4. Queues 54
4.5. Classes 54
4.6. A complete program 57
4.7. Execution times 62
4.8. Conclusion 63
4.9. Literature 65

CONCURRENT PROGRAMS 67

5. THE SOLO OPERATING SYSTEM 69

5.1. Overview 69
5.2. Job interface 80
5.3. Processes, monitors, and classes 98
5.4. Disk scheduling 142
5.5. List of Solo components 147

6. THE JOB STREAM SYSTEM 148

6.1. Function and performance 148
6.2. Sequential programs and files 153
6.3. Concurrent program 166
6.4. Final remarks 186
6.5. List of Job stream components 187

7. A REAL-TIME SCHEDULER 189

7.1. Purpose and design 189
7.2. Programming 197
7.3. Testing 214
7.4. Final remarks 226
7.5. List of Real-time components 227


LANGUAGE DETAILS 229

8. CONCURRENT PASCAL REPORT 231

8.1. Introduction 231
8.2. Syntax graphs 232
8.3. Character set 232
8.4. Basic symbols 233
8.5. Blocks 235
8.6. Constants 235
8.7. Types 236
8.8. Variables 246
8.9. Expressions 249
8.10. Statements 250
8.11. Routines 251
8.12. Queues 255
8.13. Scope rules 256
8.14. Concurrent programs 257
8.15. PDP 11/45 system 257
8.16. ASCII character set 267
8.17. Index of report 268

9. CONCURRENT PASCAL MACHINE 271

9.1. Store allocation 271
9.2. Code interpretation 278
9.3. Kernel 283
9.4. Compiler 293

THE NEXT STEP 298

REFERENCES 301

LIST OF PROGRAM COMPONENTS 304

DANISH SUMMARY 307

INDEX 311


PREFACE

CONCURRENT PROGRAMMING

This book describes a method for writing concurrent computer programs of high quality. It is written for professional programmers and students who are faced with the complicated task of building reliable computer operating systems or real-time control programs.

The motivations for mastering concurrent programming are both economic and intellectual. Concurrent programming makes it possible to use a computer where many things need attention at the same time, be they people at terminals or temperatures in an industrial plant. It is without doubt the most difficult form of programming.

This book presents a systematic way of developing concurrent programs in a structured language called Concurrent Pascal, the first of its kind. The use of this language is illustrated by three non-trivial concurrent programs: a single-user operating system, a job-stream system, and a real-time scheduler. All of these have been used successfully on a PDP 11/45 computer. The book includes the complete text of these three programs and explains how they are structured, programmed, tested, and described.

    In an earlier book, Operating System Principles [Prentice-Hall, 1973],


I tried to establish a background for studying existing operating systems in terms of basic concepts. This new text tells the other side of the story: how concurrent programs can be constructed systematically from scratch. It also illustrates details of important design problems, the management of input/output, data files, and programs, which were deliberately omitted from the first book. So it is useful both as a practical supplement to operating system courses and also as a handbook on structured concurrent programming for engineers.

COMPILATION AND TESTING

A concurrent program consists of sequential processes that are carried out simultaneously. The processes cooperate on common tasks by exchanging data through shared variables. The problem is that unrestricted access to the shared variables can make the result of a concurrent program dependent on the relative speeds of its processes. This is obvious if you think of a car and a train passing through the same railroad crossing: it is the relative timing of these "processes" that determines whether they will collide.

Unfortunately, the execution speed of a program will vary somewhat from one run to the next. It will be influenced by other (unrelated) programs running simultaneously and by operators responding to requests. So you can never be quite sure what an incorrect concurrent program is going to do. If you execute it many times with the same data you will get a different result each time. This makes it hopeless to judge what went wrong. Program testing is simply useless as a means of locating time-dependent errors.
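The lost update behind such time-dependent errors can be forced deterministically. Since Concurrent Pascal compilers are no longer in everyday use, this sketch uses Python threads; the sleep merely widens the scheduling window that, in a real program, opens unpredictably:

```python
import threading
import time

counter = 0  # shared variable with unrestricted access

def careless_increment():
    """Read-modify-write with no mutual exclusion."""
    global counter
    value = counter      # both threads read 0
    time.sleep(0.1)      # window in which the other thread runs
    counter = value + 1  # both threads write 1: one update is lost

threads = [threading.Thread(target=careless_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1, not the intended 2
```

Without the sleep, the same program usually prints 2, which is exactly why testing fails to reveal the error.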

Some of these errors can no doubt be located by proofreading. I have seen a programmer do this by looking at an assembly language program for a week. But, to proofread a large program, you must understand it in complete detail. So the search for an error may involve all of the people who wrote the program, and even then you cannot be sure it will be found.

Well, if we cannot make concurrent programs work by proofreading or testing, then I can see only one other effective method at the moment: to write all concurrent programs in a programming language that is so structured that you can specify exactly what processes can do to shared variables and depend on a compiler to check that the programs satisfy these assumptions. Concurrent Pascal is the first language that makes this possible.

In the long run it is not advisable to write large concurrent programs in machine-oriented languages that permit unrestricted use of store locations and their addresses. There is just no way we will be able to make such programs reliable (even with the help of complicated hardware mechanisms).

    CONCURRENT PASCAL

From 1963-65 I was one of ten programmers who wrote a Cobol compiler in assembly language. This program of 40,000 instructions took 15 man-years to build. Although it worked well, the compiler was very difficult to maintain since none of us understood it completely.

Five years later, compiler writing was completely changed by the sequential programming language Pascal, invented by Niklaus Wirth. Pascal is an abstract language that hides irrelevant machine detail from the programmer. At the same time it is efficient enough for system programming. It is easily understood by programmers familiar with Fortran, Algol 60, Cobol, or PL/I.

In 1974 Al Hartmann used Sequential Pascal to write a compiler for my new programming language, called Concurrent Pascal. This compiler is comparable to a machine program of 35,000 instructions. But, written in Pascal, the program text is only 8,300 lines long and can be completely understood by a single person. The programming and testing of this compiler took only 7 months.

The aim of Concurrent Pascal is to do for operating systems what Sequential Pascal has done for compilers: to reduce the programming effort by an order of magnitude.

Concurrent Pascal extends Sequential Pascal with concurrent processes and monitors. The compiler prevents some time-dependent programming errors by checking that the private variables of one process are inaccessible to another. Processes can only communicate by means of monitors.

A monitor defines all the possible operations on a shared data structure. It can, for example, define the send and receive operations on a message buffer. The compiler will check that processes only perform these two operations on a buffer.

A monitor can delay processes to make their interactions independent of their speeds. A process that tries to receive a message from an empty buffer will, for example, be delayed until another process sends a message to it.

If a programmer can design a process or monitor correctly, the rest of a program will not be able to make that component behave erratically (since no other part of the program has direct access to the variables used by a component). The controlled access to private and shared variables greatly reduces the risk of time-dependent program behavior caused by erroneous processes.
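The send/receive buffer described above can be imitated with the monitor-like facilities of Python's threading module: a lock plays the role of the monitor's implicit mutual exclusion, and a condition variable plays the role of its delay queue. The class name and the one-slot capacity are choices of this sketch, not of the book:

```python
import threading

class MessageBuffer:
    """Monitor-like one-slot buffer: all access to the shared slot
    goes through send and receive, which exclude each other in time."""

    def __init__(self):
        self._slot = None
        self._full = False
        self._changed = threading.Condition()  # monitor lock + delay queue

    def send(self, message):
        with self._changed:            # enter the monitor
            while self._full:          # delay the sender while the slot is full
                self._changed.wait()
            self._slot, self._full = message, True
            self._changed.notify_all() # wake a delayed receiver

    def receive(self):
        with self._changed:            # enter the monitor
            while not self._full:      # delay the receiver while the slot is empty
                self._changed.wait()
            message, self._slot, self._full = self._slot, None, False
            self._changed.notify_all() # wake a delayed sender
            return message

buffer = MessageBuffer()
received = []

consumer = threading.Thread(target=lambda: received.append(buffer.receive()))
consumer.start()             # blocks inside receive: the buffer is still empty
buffer.send("line of text")  # the delayed receiver now continues
consumer.join()
print(received)  # ['line of text']
```

What Concurrent Pascal adds, and Python does not, is the compile-time check that no statement outside the monitor can touch the slot at all.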


MODEL OPERATING SYSTEMS

This book stresses the practice of concurrent programming. It contains a complete description of three model operating systems written in Concurrent Pascal.

Chapter 5 describes a single-user operating system, called Solo. It supports the development of Sequential and Concurrent Pascal programs on the PDP 11/45 computer. Input/output are handled by concurrent processes. Pascal programs can call one another recursively and pass arbitrary parameters among themselves. This makes it possible to use Pascal as a job control language. Solo is the first major example of a hierarchical concurrent program made of processes and monitors.

Chapter 6 presents a job-stream system that compiles and executes short Pascal programs which are input from a card reader and are output on a line printer. Input, execution, and output take place simultaneously, using buffers stored on disk.

Chapter 7 discusses a real-time scheduler for process control applications in which a fixed number of concurrent tasks are carried out periodically with frequencies chosen by an operator.

These chapters not only describe how to build different kinds of operating systems but also illustrate the main steps of the program development process.

The Solo system shows how a concurrent program of more than a thousand lines can be structured and programmed as a sequence of components of less than one page each. The real-time scheduler is used to demonstrate how a hierarchical, concurrent program can be tested systematically. The job-stream system illustrates how a program structure can be derived from performance considerations.

LANGUAGE DEFINITION AND IMPLEMENTATION

I have tried to make this book as readable as possible to share an architectonic view of concurrent programming effectively. Formalism is often a stumbling block in the first encounter with a new field, and the practice of structured concurrent programming is not commonplace yet. So I have assumed in chapters 3 and 4 that you are so familiar with one or more programming languages that it is sufficient to show the flavor of Sequential and Concurrent Pascal by examples before describing the model operating systems.

But when you wish to use a new programming language in your own work, a precise definition of it becomes essential. So the Concurrent Pascal report is included in chapter 8.


The whole purpose of this work is to show how much a concurrent programming effort can be reduced by using an abstract language that suppresses as much machine detail as one can afford to without losing control of program efficiency. For this reason the introduction to Concurrent Pascal ignores the question of how the language is implemented.

Chapter 9 is an overview of the language implementation for those who feel uncomfortable unless they have a dynamic feeling for what their programs make the machine do. I suspect that most of us belong to that group. Once you understand what a machine does, however, it is easier to forget the details again and start relying completely on the abstract concepts that are built into the language.

TEACHING AND ENGINEERING

Very few operating systems are so well-structured and well-documented that they are worth studying in detail. And few (if any) computing centers make it possible for students to write their own concurrent programs in an abstract language. Since students can neither study nor build realistic operating systems it is almost impossible to make them feel comfortable about the subject.

This book tries to remedy that situation. It defines an abstract language for concurrent programming that has been implemented on the PDP 11/45 computer. The compiler can be moved to other computers since it is written in Sequential Pascal and generates code for a simple machine that can be simulated efficiently by microprogram or machine language.

The book also offers complete examples of model operating systems that can be studied by students.

If you are a professional programmer you can seldom choose your own programming language for large projects. But you can benefit from new language constructs, such as processes and monitors, by taking them as models of a systematic programming style that can be imitated as closely as possible in other languages (including assembly languages).

The system kernel that is described in chapter 9 illustrates this. It is an assembly language program written entirely by means of classes (a concept similar to monitors). Since this concept is not in the assembly language it is described by comments only.

The book can also be used as a handbook on the design of small operating systems and significant portions of larger ones.

If you are a software engineer you may feel that the operating systems described here are much smaller than those you are asked to build. This raises the question of whether the concepts used here can help you build huge systems. My recommendation is to use abstract programming concepts


(such as processes and monitors) wherever you can. This will probably solve most programming problems in a simple manner and leave you with only a few really machine-dependent components (such as a processor scheduler and a storage allocator). As a means of organizing your thoughts, Concurrent Pascal can only be helpful.

But I should also admit that I do not see a future for large operating systems. They never worked well and they probably never will. They are just too complicated for the human mind. They were the product of an early stage in which none of us had a good feeling for what software quality means. The new technology that supports wide-spread use of cheap, personal computers will soon make them obsolete.

Although operating systems have provided the most spectacular examples of the difficulty of making concurrent programs reliable, there are other applications that present problems of their own. As an industrial programmer I was involved in the design of process control programs for a chemical plant, a power plant, and a meteorological institute. These real-time applications had one thing in common: they were all unique in their software requirements.

When the cost of developing a large program cannot be shared by many users the pressure to reduce the cost is much greater than it is for general-purpose software, such as compilers and operating systems. The only practical way of reducing cost then is to give the process control engineers an abstract language for concurrent programming. To illustrate this I rewrote an existing real-time scheduler from machine language into Concurrent Pascal (chapter 7).

The recent reduction of hardware costs for microprocessors will soon put even greater pressure on software designers to reduce their costs as well. So there is every reason for a realistic programmer to keep an eye on recent developments in programming methodology.

PROJECT BACKGROUND

In 1971, Edsger Dijkstra suggested that concurrent programs might be easier to understand if all synchronizing operations on a shared data structure were collected into a single program unit (which we now call a monitor).

In May 1972 I wrote a chapter on Resource Protection for Operating System Principles. I introduced a language notation for monitors and pointed out that resource protection in operating systems and type checking in compilers are solutions to the same problem: to verify automatically that programs only perform meaningful operations on data structures. My conclusion was that "I expect to see many protection rules in future operating systems being enforced in the cheapest possible manner by type checking at compile time. However, this will require exclusive use of efficient, well-structured languages for programming." This is still the idea behind Concurrent Pascal.

I developed Concurrent Pascal at the California Institute of Technology from 1972-75. The compiler was written by Al Hartmann. Robert Deverill and Tom Zepko wrote the interpreter for the PDP 11/45. I built the model operating systems, and Wolfgang Franzen made improvements to one of them (Solo).

ACKNOWLEDGEMENT

The Institute of Electrical and Electronics Engineers, North-Holland Publishing Company, and John Wiley and Sons kindly granted permission to reprint parts of the papers:

"The programming language Concurrent Pascal." IEEE Transactions on Software Engineering 1, 2, June 1975.

"Universal types in Concurrent Pascal." Information Processing Letters 3, 6, July 1975.

"The Solo operating system." Software - Practice & Experience 6, 2, April-June 1976.

The development of Concurrent Pascal was partly supported by the National Science Foundation under grant number DCR74-17331.

Giorgio Ingargiola, Luis Medina, and Ramon Varela all gave helpful comments on the text. I also wish to thank Christian Gram, Ole-Johan Dahl, and Peter Naur for a constructive, detailed evaluation of this work.

PER BRINCH HANSEN
University of Southern California


PROGRAMMING TOOLS


1

DESIGN PRINCIPLES

This book describes a method for writing concurrent programs of high quality. Since there is no common agreement among programmers about the qualities a good program should have, I will begin by describing my own requirements.

1.1 PROGRAM QUALITY

A good program must be simple, reliable, and adaptable. Without simplicity one cannot expect to understand the purpose and details of a large program. Without reliability one cannot seriously depend on it. And without adaptability to changing requirements a program eventually becomes a fossil.

Fortunately, these essential requirements go hand in hand. Simplicity gives one the confidence to believe that a program works and makes it clear how it can be changed. Simplicity, reliability, and adaptability make programs manageable.

In addition, it is desirable to make programs that can work efficiently on several different computers for a variety of similar applications. But efficiency, portability, and generality should never be sought at the expense of simplicity, reliability, and adaptability, for only the latter qualities make it possible to understand what programs do, depend on them, and extend their capabilities.

The poor quality of much existing software is, to a large extent, the result of turning these priorities upside down. Some programmers justify extremely complex and incomprehensible programs by their high efficiency. Others claim that the poor reliability and efficiency of their huge programs are outweighed by their broad scope of application.

Personally I find the efficiency of a tool that nobody fully understands irrelevant. And I find it difficult to appreciate a general-purpose tool which is so slow that it cannot do anything well. But these are matters of taste and style and are likely to remain so.

Whenever program qualities appear to be in conflict with one another I shall consistently settle the issue by giving first priority to manageability, second priority to efficiency, and third priority to generality. This boils down to the simple rule of limiting our computer applications to those which programmers fully understand and which machines can handle well. Although this is too narrow a view for experimental computer usage it is sound advice for professional programming.

Let us now look more closely at these program qualities to see how they can be achieved.

    1.2 SIMPLICITY

We will be writing concurrent programs which are so large that one cannot understand them all at once. So we must reason about them in smaller pieces. What properties should these pieces have? Well, they should be so small that any one of them is trivial to understand in itself. It would be ideal if they were no more than one page of text each so that they can be comprehended at a glance.

Such a program could be studied page by page as one reads a book. But in the end, when we have understood what all the pieces do, we must still be able to see what their combined effect as a whole is. If it is a program of many pages we can only do this by ignoring most of our detailed knowledge about the pieces and relying on a much simpler description of what they do and how they work together.

So our program pieces must allow us to make a clear separation of their detailed behavior and that small part of it which is of interest when we consider combinations of such pieces. In other words, we must distinguish between the inner and outer behavior of a program piece.

Program pieces will be built to perform well-defined, simple functions. We will then combine program pieces into larger configurations to carry out more complicated functions. This design method is effective because it splits a complicated task into simpler ones: First you convince yourself that the pieces work individually, and then you think about how they work together. During the second part of the argument it is essential to be able to forget how a piece works in detail; otherwise, the problem becomes too complicated. But in doing so one makes the fundamental assumption that the piece always will do the same when it carries out its function. Otherwise, you could not afford to ignore the detailed behavior of that piece in your reasoning about the whole system.

So reproducible behavior is a vital property of program pieces that we wish to build and study in small steps. We must clearly keep this in mind when we select the kind of program pieces that large concurrent programs will be made of. The ability to repeat program behavior is taken for granted when we write sequential programs. Here the sequence of events is completely defined by the program and its input data. But in a concurrent program simultaneous events take place at rates not fully controlled by the programmer. They depend on the presence of other jobs in the machine and the scheduling policy used to execute them. This means that a conscious effort must be made to design concurrent programs with reproducible behavior.

The idea of reasoning first about what a piece does and then studying how it does it in detail is most effective if we can repeat this process by explaining each piece in terms of simpler pieces which themselves are built from still simpler pieces. So we shall confine ourselves to hierarchical structures composed of layers of program pieces.

It will certainly simplify our understanding of hierarchical structures if each part only depends on a small number of other parts. We will therefore try to build structures that have minimal interfaces between their parts. This is extremely difficult to do in machine language since the slightest programming mistake can make an instruction destroy any instruction or variable. Here the whole store can be the interface between any two instructions. This was made only too clear in the past by the practice of printing the contents of the entire store just to locate a single programming error.

Programs written in abstract languages (such as Fortran, Algol, and Pascal) are unable to modify themselves. But they still have broad interfaces in the form of global variables that can be changed by every statement (by intention or mistake).

We will use a programming language called Concurrent Pascal, which makes it possible to divide the global variables into smaller parts. Each of these is accessible to a small number of statements only.
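To make the contrast with global variables concrete, here is a small sketch in ordinary Python (not Concurrent Pascal; the class and method names are invented for illustration). The shared counter is reachable only through the two operations of one object, which is roughly the discipline Concurrent Pascal enforces at compile time:

```python
import threading

class JobCounter:
    """Shared state reachable only through these two methods, instead of
    a global variable that any statement could change by mistake."""
    def __init__(self):
        self._count = 0                   # private: no other code names it
        self._lock = threading.Lock()     # serializes the two operations

    def increment(self):
        with self._lock:
            self._count += 1

    def value(self):
        with self._lock:
            return self._count

counter = JobCounter()
threads = [threading.Thread(target=counter.increment) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value())  # → 10
```

Python only enforces this by convention (the leading underscore), whereas Concurrent Pascal makes the restriction checkable by the compiler.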

The main contribution of a good programming language to simplicity is to provide an abstract readable notation that makes the parts and structure of programs obvious to a reader. An abstract programming language suppresses machine detail (such as addresses, registers, bit patterns, interrupts, and sometimes even the number of processors available). Instead the language relies on abstract concepts (such as variables, data types, synchronizing operations, and concurrent processes). As a result, program texts written in abstract languages are often an order of magnitude shorter than those written in machine language. This textual reduction simplifies program engineering considerably.

The fastest way to discover whether or not you have invented a simple program structure is to try to describe it in completely readable terms, adopting the same standards of clarity that are required of a survey paper published by a journal. If you take pride in your own description you have probably invented a good program structure. But if you discover that there is no simple way of describing what you intend to do, then you should probably look for some other way of doing it.

Once you appreciate the value of description as an early warning signal of unnecessary complexity it becomes self-evident that program structures should be described (without detail) before they are built and should be described by the designer (and not by anybody else). Programming is the art of writing essays in crystal clear prose and making them executable.

    1.3 RELIABILITY

Even the most readable language notation cannot prevent programmers from making mistakes. In looking for these in large programs we need all the help we can get. A whole range of techniques is available:

correctness proofs
proofreading
compilation checks
execution checks
systematic testing

    With the exception of correctness proofs, all these techniques played a vitalrole in making the concurrent programs described in this book work.

Formal proofs are still at an experimental stage, particularly for concurrent programs. Since my aim is to describe techniques that are immediately useful for professional software development, I have omitted proofs here.

Among the useful verification techniques, I feel those that reveal errors at the earliest possible time during the program development should be emphasized to achieve reliability as soon as possible.

One of the primary goals of Concurrent Pascal is to push the role of compilation checks to the limit and reduce the use of execution checks as much as possible. This is not done just to make compiled programs more efficient by reducing the overhead of execution checks. In program engineering, compilation and execution checks play the same roles as preventive maintenance and flight recorders do in aviation. The latter only tells you why a system crashed; the former prevents it. This distinction seems essential to me in the design of real-time systems that will control vital functions in society. Such systems must be highly reliable before they are put into operation.

Extensive compilation checks are possible only if the language notation is redundant. The programmer must be able to specify important properties in at least two different ways so that a compiler can look for possible inconsistencies. An example is the use of declarations to introduce variables and their types before they are used in statements. The compiler could easily derive this information from the statements, provided these statements were always correct.

We shall also follow the crucial principle of language design suggested by Hoare: The behavior of a program written in an abstract language should always be explainable in terms of the concepts of that language and should never require insight into the details of compilers and computers. Otherwise, an abstract notation has no significant value in reducing complexity.

This principle immediately rules out the use of machine-oriented features in programming languages. So I shall assume that all programming will take place in abstract programming languages.

Dijkstra has remarked that testing can be used only to show the presence of errors but never their absence. However true that may be, it seems very worthwhile to me to show the presence of errors and remove them one at a time. In my experience, the combination of careful proofreading, extensive compilation checks, and systematic testing is a very effective way to make a program so dependable that it can work for months without problems. And that is about as reliable as most other technology we depend on. I do not know of better methods for verifying large programs at the moment.

I view programming as the art of building program pyramids by adding one brick at a time to the structure and making sure that it does not collapse in the process. The pyramid must remain stable while it is being built. I will regard a (possibly incomplete) program as being stable as long as it behaves in a predictable manner.

Why is program testing so often difficult? Mainly, I think, because the addition of a new program piece can spread a burst of errors throughout the rest of a program and make previously tested pieces behave differently. This clearly violates the sound principle of being able to assume that when you have built and tested a part of a large program it will continue to behave correctly under all circumstances.

So we will make the strong requirement that new program pieces added on top of old ones must not be able to make the latter fail. Since this property must be verified before program testing takes place, it must be done by a compiler. We must therefore use a language notation that makes it clear what program pieces can do to one another. This strong confinement of program errors to the part in which they occur will make it much easier to determine from the behavior of a large program where its errors are.

    1.4 ADAPTABILITY

A large program is so expensive to develop that it must be used for several years to make the effort worthwhile. As time passes the users' needs change, and it becomes necessary to modify the program somewhat to satisfy them. Quite often these modifications are done by people who did not develop the program in the first place. Their main difficulty is to find out how the program works and whether it will still work after being changed.

A small group of people can often succeed in developing the first version of a program in a low-level language with little or no documentation to support them. They do it by talking to one another daily and by sharing a mental picture of a simple structure.

But later, when the same program must be extended by other programmers who are not in frequent contact with the original designers, it becomes painfully clear that the "simple" structure is not described anywhere and certainly is not revealed by the primitive language notation used. It is important to realize that for program maintenance a simple and well-documented structure is even more important than it is during program development. I will not talk about the situation in which a program that is neither simple nor well documented must be changed.

There is an interesting relationship between programming errors and changing user requirements. Both of them are sources of instability in the program construction process that make it difficult to reach a state in which you have complete confidence in what a program does. They are caused by our inability to fully comprehend at once what a large program is supposed to do in detail.

The relative frequencies of program errors and changing requirements are of crucial importance. If programming introduces numerous errors that are difficult to locate, many of them may still be in the program when the user requests changes of its function. And when an engineer constantly finds himself changing a system that he never succeeded in making work correctly in the first place, he will eventually end up with a very unstable product.

On the other hand, if program errors can be located and corrected at a much faster rate than the system develops, then the addition of a new piece (or a change) to the program will soon lead to a stable situation in which the current version of the program works reliably and predictably. The engineer can then, with much greater confidence, adapt his product to slowly changing needs. This is a strong incentive to make program verification and testing fast.

A hierarchical structure consists of program pieces that can be studied one at a time. This makes it easier to read the program and get an initial understanding of what it does and how it does it. Once you have that insight, the consequences of changing a hierarchical program become clear. When you change a part of a program pyramid you must be prepared to inspect and perhaps change the program parts that are on top of it (for they are the only ones that can possibly depend on the one you changed).

    1.5 PORTABILITY

The ability to use the same program on a variety of computers is desirable for economic reasons: Many users have different computers; sometimes they replace them with new ones; and quite often they have a common interest in sharing programs developed on different machines.

    Portability is only practical if programs are written in abstract languagesthat hide the differences between computers as much as possible. Otherwise,it will require extensive rewriting and testing to move programs from onemachine to another. Programs written in the same language can be madeportable in several ways:

(1) by having different compilers for different machines. This is only practical for the most widespread languages.

(2) by having a single compiler that can be modified to generate code for different machines. This requires a clear separation within the compiler of those parts that check programs and those that generate code.

(3) by having a single computer that can be simulated efficiently on different machines.

The Concurrent Pascal compiler generates code for a simple machine tailored to the language. This machine is simulated by an assembly language program of 4 K words on the PDP 11/45 computer. To move the language to another computer one rewrites this interpreter. This approach sacrifices some efficiency to make portability possible. The loss of efficiency can be eliminated on a microprogrammable machine.
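Alternative (3) can be illustrated in miniature. The toy instruction set below is invented for illustration; only the idea is taken from the text: compile once to the code of a simple abstract machine, and port the language by rewriting the small interpreter (sketched here in Python rather than assembly language):

```python
# A miniature abstract machine: a program is a list of (opcode, argument)
# pairs for a stack machine. Porting means rewriting only this interpreter.
def interpret(code):
    stack = []
    for op, arg in code:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# "Compiled" form of the expression (2 + 3) * 4:
program = [("push", 2), ("push", 3), ("add", None),
           ("push", 4), ("mul", None)]
print(interpret(program))  # → 20
```

The compiled program never changes from machine to machine; only the few dozen lines of interpreter do.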

1.6 EFFICIENCY

Efficient programs save time for people waiting for results and reduce the cost of computation. The programs described here owe their efficiency to:

special-purpose algorithms
static store allocation
minimal run-time checking


Initially the loading of a large program (such as a compiler) from disk took about 16 sec on the PDP 11/45 computer. This was later reduced to 5 sec by a disk allocation algorithm that depends on the special characteristics of program files (as opposed to data files). A scheduling algorithm that tries to reduce disk head movement in general would have been useless here. The reasons for this will be made clear later.

Dynamic store algorithms that move programs and data segments around during execution can be a serious source of inefficiency that is not under the programmer's control. The implementation of Concurrent Pascal does not require garbage collection or demand paging of storage. It uses static allocation of store among a fixed number of processes. The store requirements are determined by the compiler.

When programs are written in assembly language it is impossible to predict what they will do. Most computers depend on hardware mechanisms to prevent such programs from destroying one another or the operating system. In Concurrent Pascal most of this protection is guaranteed by the compiler and is not supported by hardware mechanisms during execution. This drastic reduction of run-time checking is only possible because all programs are written in an abstract language.

    1.7 GENERALITY

To achieve simplicity and reliability we will depend exclusively on a machine-independent language that makes programs readable and extensive compilation checks possible. To achieve efficiency we will use the simplest possible store allocation.

These decisions will no doubt reduce the usefulness of Concurrent Pascal for some applications. But I see no way of avoiding that. To impose structure upon yourself is to impose restrictions on your freedom of programming. You can no longer use the machine in any way you want (because the language makes it impossible to talk directly about some machine features). You can no longer delay certain program decisions until execution time (because the compiler checks and freezes things much earlier). But the freedom you lose is often illusory anyhow, since it can complicate programming to the point where you are unable to cope with it.

This book describes a range of small operating systems. Each of them provides a special service in the most efficient and simple manner. They show that Concurrent Pascal is a useful programming language for minicomputer operating systems and dedicated real-time applications. I expect that the language will be useful (but not sufficient) for writing large, general-purpose operating systems. But that still remains to be seen. I have tried to make a programming tool that is very convenient for many applications rather than one which is tolerable for all purposes.

1.8 CONCLUSION

I have discussed the programming goals of

simplicity
reliability
adaptability
efficiency
portability

and have suggested that they can be achieved by careful design of program structure, language notation, compiler, and code interpreter. The properties that we must look for are the following:

structure:    hierarchical structure
              small parts
              minimal interfaces
              reproducible behavior
              readable documentation

notation:     abstract and readable
              structured and redundant

compiler:     reliable and fast
              extensive checking
              portable code

interpreter:  reliable and fast
              minimal checking
              static store allocation

This is the philosophy we will follow in the design of concurrent programs.

1.9 LITERATURE

For me the most enjoyable thing about computer programming is the insight it gives into problem solving and design. The search for simplicity and structure is common to all intellectual disciplines.


Here are a historian and a biologist talking about the importance of recognizing structure:

"It is a matter of some importance to link teaching and research, even very detailed research, to an acceptable architectonic vision of the whole. Without such connections, detail becomes mere antiquarianism. Yet while history without detail is inconceivable, without an organizing vision it quickly becomes incomprehensible... What cannot be understood becomes meaningless, and reasonable men quite properly refuse to pay attention to meaningless matters."

William H. McNeill [1974]

"There have been a number of physicists who suggested that biological phenomena are related to the finest aspects of the constitution of matter, in a manner of speaking below the chemical level. But the evidence, which is almost too abundant, indicates that biological phenomena operate on the 'systems' level, that is, above chemistry."

    Walter M. Elsasser [1975]

A linguist, a psychologist, and a logician have this to say about writing and notation:

"Omit needless words. Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subject only in outline, but that every word tell."

William Strunk, Jr. [1959]

    "How complex or simple a structure is depends critically upon the wayin which we describe it. Most of the complex structures found in the worldare enormously redundant, and we can use this redundancy to simplifytheir description. But to use it, to achieve the simplification, we must findthe right representation."

    Herbert A. Simon [1969]

"There is something uncanny about the power of a happily chosen ideographic language; for it often allows one to express relations which have no names in natural language and therefore have never been noticed by anyone. Symbolism, then, becomes an organ of discovery rather than mere notation."

Susanne K. Langer [1967]

An engineer and an architect discuss the influence of human errors and cultural changes on the design process:

"First, one must perform perfectly. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn't work. Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to the requirement for perfection is, I think, the most difficult part of learning to program."

Frederick P. Brooks, Jr. [1975]

"Misfit provides an incentive to change... However, for the fit to occur in practice, one vital condition must be satisfied. It must have time to happen. The process must be able to achieve its equilibrium before the next culture change upsets it again. It must actually have time to reach its equilibrium every time it is disturbed; or, if we see the process as continuous rather than intermittent, the adjustment of forms must proceed more quickly than the drift of the culture context."

Christopher Alexander [1964]

Finally, here are a mathematician and a physicist writing about the beauty and joy of creative work:

"The mathematician's patterns, like the painter's or the poet's, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way. Beauty is the first test: there is no permanent place in the world for ugly mathematics."

G. H. Hardy [1967]

"The most powerful drive in the ascent of man is his pleasure in his own skill. He loves to do what he does well and, having done it well, he loves to do it better. You see it in his science. You see it in the magnificence with which he carves and builds, the loving care, the gaiety, the effrontery. The monuments are supposed to commemorate kings and religions, heroes, dogmas, but in the end the man they commemorate is the builder."

Jacob Bronowski [1973]


REFERENCES

ALEXANDER, C., Notes on the synthesis of form. Harvard University Press, Cambridge, MA, 1964.

BRONOWSKI, J., The ascent of man. Little, Brown and Company, Boston, MA, 1973.

BROOKS, F. P., The mythical man-month: Essays on software engineering. Addison-Wesley, Reading, MA, 1975.

ELSASSER, W. M., The chief abstractions of biology. American Elsevier, New York, NY, 1975.

HARDY, G. H., A mathematician's apology. Cambridge University Press, New York, NY, 1967.

LANGER, S. K., An introduction to symbolic logic. Dover Publications, New York, NY, 1967.

MCNEILL, W. H., The shape of European history. Oxford University Press, New York, NY, 1974.

SIMON, H. A., The sciences of the artificial. M.I.T. Press, Cambridge, MA, 1969.

STRUNK, W., and WHITE, E. B., The elements of style. Macmillan, New York, NY, 1959.


PROGRAMMING CONCEPTS

We will construct large concurrent programs as hierarchies of smaller components. Each component should have a well-defined function that can be implemented and tested as an almost independent program. The components and their combinations should have reproducible behavior. And the verification and testing of such programs must take place much faster than they will change due to new requirements.

This chapter introduces the kind of components we will use and describes how to connect them. Our programming tool is a language called Concurrent Pascal. It extends the sequential programming language Pascal with new concepts called processes, monitors, and classes.

This is an informal description of Concurrent Pascal. It uses examples, pictures, and words to bring out the creative aspects of new programming concepts without getting into their finer details. Other chapters will introduce a language notation for these concepts and define them concisely. This form of presentation is perhaps not precise from a formal point of view. But it is, I hope, more effective from a human point of view.


2.1 CONCURRENT PROCESSES

    I will introduce the language by solving a simple and useful problem:How can text be copied as fast as possible from a card reader to a lineprinter?

Figure 2.1 shows a card reader, a line printer, and a program that copies data from one to the other. The card reader and line printer can transfer 1000 and 600 lines/min (corresponding to 60 and 100 msec/line).

The simplest solution to the problem is a cyclical, sequential program

    cycle input; output end

that inputs one line at a time from the card reader and outputs it to the line printer.

Unfortunately, this is very inefficient since it forces the card reader and line printer to alternate

    input, output, input, output, ...

so that one of them always waits while the other operates. As a result the copying speed is only 375 lines/min (or 160 msec/line).
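The quoted figures follow from simple arithmetic on the device timings given above. As a quick check (plain Python, mirroring the numbers in the text):

```python
reader_ms, printer_ms = 60, 100             # msec/line (1000 and 600 lines/min)

# Strict alternation: each line costs the sum of both transfer times.
sequential_ms = reader_ms + printer_ms      # 160 msec/line
print(60_000 // sequential_ms)              # → 375 lines/min

# Overlapped operation: each line costs only the slower device's time.
concurrent_ms = max(reader_ms, printer_ms)  # 100 msec/line
print(60_000 // concurrent_ms)              # → 600 lines/min
```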

We can only increase the speed by letting the card reader and the line printer operate simultaneously (Fig. 2.2). The copy program now consists of two sequential processes that are executed simultaneously

    card process:    cycle input; send end
    printer process: cycle receive; output end

A card process inputs one line at a time from the card reader and sends it through a buffer to a printer process that receives and outputs it to the line printer. This program copies text at the speed of the slowest device (600 lines/min).
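The two-process structure can be sketched with present-day primitives. The sketch below uses Python threads and a bounded queue as stand-ins for the Concurrent Pascal processes and buffer; the device I/O is simulated by plain lists, and all names are invented for illustration:

```python
import queue
import threading

lines = [f"line {i}" for i in range(5)]   # stands in for the card deck
buffer = queue.Queue(maxsize=1)           # the buffer between the processes
printed = []                              # stands in for the line printer

def card_process():                       # cycle input; send end
    for line in lines:
        buffer.put(line)                  # blocks while the buffer is full
    buffer.put(None)                      # end of input

def printer_process():                    # cycle receive; output end
    while True:
        line = buffer.get()               # blocks while the buffer is empty
        if line is None:
            break
        printed.append(line)

reader = threading.Thread(target=card_process)
printer = threading.Thread(target=printer_process)
reader.start(); printer.start()
reader.join(); printer.join()
print(printed == lines)  # → True
```

The blocking put and get are doing the work of send and receive: each process waits only when the buffer forces it to, so both devices can run at once.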

Fig. 2.1 Data copying: a program transfers data from a card reader to a line printer.

Fig. 2.2 Data flow among concurrent processes: a card process sends data through a buffer to a printer process.

Since we are interested in abstract programming it is not important how concurrent processes are implemented on a computer. All we need to know is that they are executed simultaneously so that they can make peripherals run at the same time.

On some computers, a single processor will be multiplexed among concurrent processes by means of clock interrupts. On other computers, each process will be executed by its own processor. We will deliberately ignore these details and assume that they are taken care of by the machine which executes the compiled code of our abstract concurrent programs. (Chapter 9 describes the implementation of Concurrent Pascal on the PDP 11/45 computer.)

Our refusal to be concerned with machine detail makes it impossible to predict the absolute and relative speeds of concurrent processes. We will, however, assume that all processes have positive speeds. (After all, why write a piece of program unless we know that the machine will execute it?) The machine will often be much faster than its peripherals so that we can expect processes to run roughly at the speed of the devices they control.

2.2 PRIVATE DATA

    We will build concurrent programs out of sequential processes that areexecuted simultaneously. This is quite attractive since most programmersalready have a deep intuitive understanding of sequential programming.

A sequential process consists of a data structure and a sequential program that operates on it (Fig. 2.3). The program statements are executed strictly one at a time.

Fig. 2.3 A process: private data operated on by a sequential program.

The important thing about a sequential program is that it always gives the same results when it operates on the same data independently of how fast it is executed. All that matters is the sequence in which operations are carried out.

A programming error in a sequential program can always be located by repeating the execution of the program several times with the data that revealed the error. In each of these experiments, the values of selected variables are recorded to determine whether or not a certain program part works. This process of elimination continues until the error has been located.

When a program part has been found to behave correctly in one test we can ignore that part (and its variables) in subsequent tests because it will continue to behave in exactly the same manner each time the program is executed with the same data. So our ability to test a large, sequential program in small steps depends fundamentally on the reproducible behavior of the program.

The time-independent behavior of a sequential process is guaranteed, however, only if its variables are inaccessible to other processes. But if a process uses the values of a variable which another process changes, then the result depends on the relative speeds of the processes.

When a concurrent program is executed several times with the same data, the relative speed of the processes will always vary somewhat. In a multiplexed computer the execution of a process will be influenced by the presence of other (perhaps unrelated) processes. And in a multiprocessor system, execution speeds will depend on how fast operators react to program requests.

If a concurrent program contains an error that makes one process change the variables of another process at unpredictable times, then that program will give different results each time it is executed with the same data.

Such unpredictable program behavior makes it impossible to locate an error by systematic testing. It can perhaps be found by studying the program text in detail for days. But this can be very frustrating (if not impossible) when it consists of thousands of lines and one has no clues about where to look.
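The kind of time-dependent error described here is easy to demonstrate with modern thread libraries. In this sketch (ordinary Python, not Concurrent Pascal; all names are invented) two threads update a shared total. The unsynchronized version performs a read-modify-write that can be interleaved with the other thread's, so its result may vary from run to run; the locked version is reproducible:

```python
import threading

N = 100_000
unsafe_total = 0
safe_total = 0
lock = threading.Lock()

def unsafe_worker():
    global unsafe_total
    for _ in range(N):
        unsafe_total += 1     # read-modify-write: not atomic across threads

def safe_worker():
    global safe_total
    for _ in range(N):
        with lock:            # the whole update is now indivisible
            safe_total += 1

for target in (unsafe_worker, safe_worker):
    threads = [threading.Thread(target=target) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()

print(safe_total)             # → 200000, on every run
print(unsafe_total <= 2 * N)  # → True; the exact value can differ between runs
```

The unsafe total depends on how the scheduler happened to interleave the two threads, which is exactly the irreproducibility the text warns about.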

If we wish to succeed in building large, concurrent programs which are reliable, we must use programming languages that are so well structured that a compiler can catch most time-dependent errors (because nobody else can). So we will choose a language notation that clearly shows which variables a process owns. The compiler will then make sure that these private variables are inaccessible to other processes.


2.3 PERIPHERALS

Peripheral devices are a potential source of erratic program behavior that deserves careful attention. The classical programming technique for simultaneous input and processing of data is to use a double buffer that is accessible both to a sequential program and its input device.

The program inputs the first data item in a buffer variable x. While the program operates on x, the device inputs the second data item in another buffer variable y. The program then processes y while the third data item is being input to x, and so on.
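The double-buffer scheme can be sketched with explicit synchronization (Python threads simulating the device; all names are invented for illustration). The semaphores make the hand-over of each buffer explicit; dropping the wait on `filled` is precisely the mistake of referring to a data item before its input is complete:

```python
import threading

items = ["rec0", "rec1", "rec2", "rec3"]   # simulated input stream
buffers = [None, None]                     # the two buffer variables x and y
filled = [threading.Semaphore(0), threading.Semaphore(0)]
emptied = [threading.Semaphore(1), threading.Semaphore(1)]
processed = []

def device():                              # fills x and y alternately
    for i, item in enumerate(items):
        k = i % 2
        emptied[k].acquire()               # wait until the program is done with it
        buffers[k] = item                  # "input transfer" into buffer k
        filled[k].release()                # signal: transfer complete

def program():                             # processes x and y alternately
    for i in range(len(items)):
        k = i % 2
        filled[k].acquire()                # wait for the transfer to finish
        processed.append(buffers[k].upper())
        emptied[k].release()               # hand the buffer back to the device

t = threading.Thread(target=device)
t.start()
program()
t.join()
print(processed)  # → ['REC0', 'REC1', 'REC2', 'REC3']
```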

More than one programmer has made the mistake of referring to a data item before it has been input completely. This makes the program result depend on the relative speed of program execution and input transfers.

The problem is that this programming technique turns a program and its peripherals into concurrent processes that can refer to each other's "private" variables by mistake.

    In Concurrent Pascal a peripheral device can only be accessed by an operation io that delays the calling process until the input/output has been completed. So a variable is at any time accessible either to a single process or to a single device (but not to both of them). A data transfer is just another sequential operation with a completely reproducible result.

    While a process is waiting for the completion of a data transfer, the computer can execute other processes. So this approach does not necessarily make the machine idle. Simultaneous input and processing of data items can be done by two processes connected by a buffer (Fig. 2.2).
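    The two-process scheme of Fig. 2.2 can be approximated in a modern language. The sketch below (not from the book; all names are illustrative) uses Python threads connected by a one-slot queue: the producer blocks while the slot is full, the consumer while it is empty, so neither can read a half-transferred item.

```python
import queue
import threading

buffer = queue.Queue(maxsize=1)   # one-slot buffer between the two processes

def card_process(lines):
    for line in lines:
        buffer.put(line)          # "send": blocks while the slot is full
    buffer.put(None)              # end-of-data marker (an invented convention)

def printer_process(output):
    while True:
        line = buffer.get()       # "receive": blocks while the slot is empty
        if line is None:
            break
        output.append(line)

output = []
producer = threading.Thread(target=card_process, args=(["a", "b", "c"],))
consumer = threading.Thread(target=printer_process, args=(output,))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(output)
```

While one thread waits on the queue, the interpreter runs the other, mirroring the point above that blocking transfers need not make the machine idle.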

    Another benefit of making input/output an indivisible operation is that peripheral interrupts become irrelevant to the programmer. They are handled completely at the machine level.

    When computer problems first arise they are often solved in very complicated ways. It takes a long time to discover the obvious solutions. And then it takes a while longer to get used to them. The programming of input/output illustrates this well.

    2.4 SHARED DATA

    Although it is vital to make sure that some variables are private to processes, they must also be able to share data structures (such as a buffer). Otherwise, concurrent processes cannot exchange data and cooperate on common tasks. But since shared data are the major pitfall of concurrent programming we must proceed with extreme care and define exactly what processes can do with such data structures.

    The buffer in the copying program is a data structure shared by two concurrent processes (Fig. 2.2). The details of how such a buffer is constructed are irrelevant to its users. All the processes need to know is that they can send and receive data through it. If they try to operate on the buffer in any other way it is probably either a programming mistake or an example of tricky programming. In both cases, one would like a compiler to detect such misuse of a shared data structure.

    To make this possible, we must introduce a language construct that will enable a programmer to tell a compiler how a shared data structure can be used by processes. This kind of system component is called a monitor. A monitor can synchronize concurrent processes and transmit data among them. It can also control the order in which competing processes use shared, physical resources.

    A monitor defines a shared data structure and all the operations processes can perform on it (Fig. 2.4). These synchronizing operations are called monitor procedures. A monitor also defines an initial operation that is executed when its data structure is created.

    We can define a buffer as a monitor. It will consist of shared variables defining the contents of the buffer. It will also include two monitor procedures, send and receive. The initial operation will make the buffer empty to begin with.
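    A buffer monitor of this kind can be sketched in Python (an approximation, not Concurrent Pascal): a Condition variable stands in for the monitor's implicit mutual exclusion, send and receive play the monitor procedures, and the constructor plays the initial operation that makes the buffer empty.

```python
import threading

class Buffer:
    def __init__(self):                 # initial operation: buffer starts empty
        self._cond = threading.Condition()
        self._full = False
        self._slot = None

    def send(self, item):               # monitor procedure
        with self._cond:                # exclusive access to the shared data
            while self._full:
                self._cond.wait()       # delay the sender until the slot is free
            self._slot, self._full = item, True
            self._cond.notify_all()

    def receive(self):                  # monitor procedure
        with self._cond:
            while not self._full:
                self._cond.wait()       # delay the receiver until data arrive
            item, self._full = self._slot, False
            self._cond.notify_all()
            return item

b = Buffer()
b.send("line 1")
print(b.receive())
```

The `with self._cond:` blocks are what make the procedures execute strictly one at a time, as the text below requires.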

    Processes cannot operate directly on shared data. They can only call monitor procedures (such as send and receive) that have access to the data. A monitor procedure is executed as part of the calling process (just like any other procedure).

    If concurrent processes simultaneously call monitor procedures which operate on the same shared data, these procedures must be executed strictly one at a time. Otherwise, processes might find the data structure in some (unknown) intermediate state, which would make the results of monitor calls unpredictable.

    This means that the machine must be able to delay processes for short periods of time until it is their turn to execute monitor procedures. We will not be concerned about how this is done, but will just notice that a process has exclusive access to shared data while it executes a monitor procedure. (Chapter 9 explains the implementation details of this.)

    [Fig. 2.4 A monitor: shared data, synchronizing operations, and an initial operation]

    So the machine on which concurrent processes run will handle short-term scheduling of simultaneous monitor calls. But the programmer must also be able to delay processes for longer periods of time until their requests for data and other resources can be satisfied. For example, if a process tries to receive data from an empty buffer it must be delayed until another process sends more data.

    Concurrent Pascal includes a simple data type, called a queue, that can be used by monitor procedures to control medium-term scheduling of processes. A monitor can either delay a calling process in a queue or continue a process waiting in a queue.

    It is not important yet to understand how these queues work except for the following rule: A process has exclusive access to shared data only as long as it continues to execute statements within a monitor procedure. As soon as a process is delayed in a queue it loses its exclusive access until another process calls the same monitor and continues its execution.
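    This rule has a close analogue in most modern thread libraries, which can make it concrete. In the Python sketch below (illustrative, not the book's mechanism), a waiting thread releases the lock while it is delayed, so another thread can enter the same "monitor" and continue it.

```python
import threading

cond = threading.Condition()
data = []

def consumer():
    with cond:                    # enter the "monitor"
        while not data:
            cond.wait()           # delayed: exclusive access is given up here
        data.append("consumed " + data[0])

t = threading.Thread(target=consumer)
t.start()
with cond:                        # possible only because the waiter released the lock
    data.append("item")
    cond.notify()                 # "continue" the delayed process
t.join()
print(data)
```

If `wait()` did not release the lock, the main thread could never enter the monitor to deliver the item, and both would be stuck.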

    A compiler will check that processes only access a monitor through its procedures. This has dramatic consequences for program reliability. It means that once a monitor has been implemented correctly other parts of a program cannot make it fail. It remains a stable, correct component no matter what the rest of the program does. Compile-time protection of private variables has the same effect on processes.

    Programming languages such as Fortran, Cobol, PL/I, and Pascal use common data structures ("global variables") as interfaces between separate program parts. This makes it easy for one part of a program to crash another by changing its data structure in unexpected ways.

    Concurrent Pascal is based on the assumption that procedures are a much safer interface mechanism than common data structures. Procedures associated with a data structure make it possible for a programmer to define all the possible operations on the data and depend on a compiler to prevent the rest of a program from using the data in any other way.

    2.5 ACCESS RIGHTS

    So far I have only introduced the components from which concurrent programs can be constructed, namely processes and monitors. But we still need a precise way of describing how these components can be connected to form hierarchical structures.

    Figure 2.2 makes it obvious that data flow from a card process through a buffer to a printer process. We will call this a data flow graph.

    Figure 2.5 shows the same system from a different viewpoint. The circles are system components, and the arrows are the access rights of these


    [Fig. 2.7 A monitor with access rights: access rights, shared data, synchronizing operations, and an initial operation]

    2.6 ABSTRACT DATA TYPES

    A process executes a sequential program--it is an active component. A monitor is just a collection of procedures which do nothing until they are called by processes--it is a passive component. But there are strong similarities between a process and a monitor: both define a data structure (private or shared) and the meaningful operations on it. The main difference between processes and monitors is the way they are scheduled for execution.

    It seems natural, therefore, to regard processes and monitors as abstract data types defined in terms of the operations one can perform on them. They are abstract because the rest of a program only knows what one can do with them without depending on how the data are structured and manipulated. It is even possible to change the data representation without influencing the rest of a program as long as the operations remain the same.

    In the copying system the buffer can be represented either by a single line slot, an array of lines, a linked list, or a tree structure. And it can be stored either in core or on disk. The processes do not care as long as they can send and receive lines through it. This gives the programmer the freedom to experiment with different data representations to improve performance.

    The hiding of implementation details within an abstract data type makes it easier to tune a program locally. It also makes it easier to understand what the program does as a whole, since all these different data representations implement the same abstract idea of sending and receiving.

    Since a compiler can check that these operations are the only ones carried out on the abstract data structure we can hope to be able to build very reliable, concurrent programs in which controlled access to data and physical resources is guaranteed before these programs are put into operation (or even tested). This will solve to a large extent the resource protection problems in the cheapest possible manner (without hardware mechanisms and run-time overhead).


    [Fig. 2.8 A pipeline system: a card process, copy process, and printer process connected by buffers between the card reader and line printer]

    A useful concept can be used over and over again (and not just once). So we will define processes and monitors as data types and make it possible to use several instances of each of them in a system. We can, for example, use two buffers to build a pipeline system in which data pass through a card process, a copy process, and a printer process (Fig. 2.8).

    The copy process will format the text so that each file begins and ends with a blank page, each page begins and ends with a blank line, and each line is surrounded by blank margins.

    Since input/output and execution alternate strictly within peripheral processes it is desirable to keep their data processing minimal to make the devices run as fast as possible. This is achieved by formatting the text in a separate process that can run while the other processes are waiting for input/output. This extension of the copying system also has the advantage of leaving all the previous components unchanged (Fig. 2.5). So here we have an example of how one can adapt a program to new requirements without changing it completely.

    In a concurrent program the programmer only defines the buffer type once but declares two instances of it. I will distinguish between definitions and instances of components by calling them system types and system components. Access graphs (such as Fig. 2.8) will always show system components (not system types).

    During program execution the machine creates a separate data structure for each system component. But components of the same type share a single copy of the procedures associated with the data. So the pipeline system uses two copies of the buffer variables but only one copy of the send and receive procedures.
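    The type/component distinction maps directly onto classes and instances in a modern language. In this Python sketch (illustrative names, not the book's code) the class is the "system type" and each instance a "system component": both instances get their own data, but there is only one copy of the procedure code, attached to the type.

```python
class Buffer:
    """The "system type": its procedure code exists once."""
    def __init__(self):
        self.slot = None              # each component gets its own data copy

    def send(self, item):
        self.slot = item

    def receive(self):
        return self.slot

buffer1, buffer2 = Buffer(), Buffer()  # two "system components" of one type
buffer1.send("a")
buffer2.send("b")
print(buffer1.receive(), buffer2.receive())
print(type(buffer1).send is type(buffer2).send)  # one shared copy of the code
```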

    To make the programming language useful for hierarchical system design it should permit the division of a system type, such as the copy process, into smaller system types. Let us assume that the buffers in Fig. 2.8 transmit whole lines of text between the processes. The text formatting can then be done one step at a time by means of three abstract data types inserted between the copy process and its output buffer (Fig. 2.9).

    The copy process calls a file maker which adds blank pages to each text file. The file maker in turn calls a page maker which adds blank lines to each text page. The page maker then calls a line maker which adds a margin to each text line before sending it through the buffer.

    [Fig. 2.9 Decomposition of the copy process: copy process, file maker, page maker, line maker, buffer]

    The file, page, and line makers are only used by the copy process. Such components which can only be called by a single other component will be called classes.

    A class defines a data structure and the possible operations on it (just like a monitor). The exclusive access of a process to class variables can be guaranteed completely at compile time. The machine does not have to schedule simultaneous calls of class procedures at run time, because such calls cannot occur. This makes class calls considerably faster than monitor calls.
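    The nesting of Fig. 2.9 can be sketched as ordinary classes, since each component is called by a single owner and needs no synchronization. The structure below follows the book's description; the concrete formatting details (two-space margins, one blank line/page on each side) are invented for illustration.

```python
class LineMaker:
    def __init__(self, buffer):
        self.buffer = buffer
    def put(self, line):
        self.buffer.append("  " + line + "  ")    # add a margin to each line

class PageMaker:
    def __init__(self, linemaker):
        self.linemaker = linemaker
    def put_page(self, lines):
        self.linemaker.put("")                    # blank line before the page
        for line in lines:
            self.linemaker.put(line)
        self.linemaker.put("")                    # blank line after the page

class FileMaker:
    def __init__(self, pagemaker):
        self.pagemaker = pagemaker
    def put_file(self, pages):
        self.pagemaker.put_page([])               # blank page before the file
        for page in pages:
            self.pagemaker.put_page(page)
        self.pagemaker.put_page([])               # blank page after the file

out = []   # stands in for the output buffer
FileMaker(PageMaker(LineMaker(out))).put_file([["hello"]])
print(out)
```

Each component only knows the one below it, so any layer's representation can change without touching its owner.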

    2.7 HIERARCHICAL STRUCTURE

    If we put all the components of the pipeline system together we get a complete picture of its structure. In Fig. 2.10, classes, monitors, and processes are marked C, M, and P.

    In an access graph a process is a node that no other node has access to. A class is one that a single other node has access to. And a monitor is one that two or more other nodes have access to. (The phrase "has access to" also means "points to.")
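    This classification is just a statement about in-degrees, which a few lines of code can make precise. The sketch below is illustrative (the graph loosely follows the pipeline example; the function is not from the book).

```python
def classify(access):
    """access maps each component to the components it has access to."""
    indegree = {node: 0 for node in access}
    for callees in access.values():
        for callee in callees:
            indegree[callee] += 1
    kinds = {0: "process", 1: "class"}          # 2 or more incoming: monitor
    return {node: kinds.get(n, "monitor") for node, n in indegree.items()}

graph = {"card": ["buffer"], "printer": ["buffer"],
         "copy": ["linemaker"], "linemaker": ["buffer"], "buffer": []}
print(classify(graph))
```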

    Some years ago I was part of a team that built a multiprogramming system in which processes can appear and disappear dynamically [Brinch Hansen, 1970]. In practice, this system was used mostly to set up a fixed configuration of processes. This is to be expected, since most concurrent programs control computers with a fixed configuration of peripherals and perform a fixed number of control tasks in some environment.


    [Fig. 2.10 Hierarchical system structure (labels include the card reader, buffers, line printer, card process, copy process, and file maker)]

    Dynamic process deletion certainly complicates the meaning and implementation of a programming language considerably. And since it seems to be unnecessary in many real-time applications, it is probably wise to exclude it altogether. So a concurrent program will consist of a fixed number of processes, monitors, and classes. These components and their data structures will exist forever after system initialization. A concurrent program can, however, be extended by recompilation.

    It remains to be seen whether this restriction will simplify or complicate operating system design. But the poor quality of most existing operating systems clearly demonstrates an urgent need for simpler approaches.

    In other programming languages the data structures of processes, monitors, and classes would be called global data. This term would be misleading in Concurrent Pascal, where each data structure can be accessed by a single component only. It seems more appropriate to call them permanent data structures.

    A Concurrent Pascal compiler will check that the private data of a process are accessed only by that process. It will also check that the data structure of a class or monitor is accessed only by its procedures.

    Figure 2.10 shows that the access rights within a concurrent program normally are not tree structured. Instead they form a directed graph. This partly explains why the traditional scope rules of block structured languages are inconvenient for concurrent programming (and, I believe, for sequential programming as well). In addition, the access rights to variables in these languages are not very selective (a block can use not only its own variables but also those defined in all blocks surrounding it). In Concurrent Pascal, a


    program component has access to only a small number of other components. And these components are accessible only through well-defined procedures.

    Since the execution of a monitor procedure will delay the execution of further calls of the same monitor, we must prevent a monitor from calling itself recursively. Otherwise, processes can become deadlocked waiting (in vain) for themselves to leave monitors before they reenter them. So the compiler will check that the access rights of system components are hierarchically ordered (or, if you like, that there are no cycles in the access graph).

    The hierarchical ordering of system components has vital consequences for system design and testing:

    A hierarchical, concurrent program can be tested component by component, bottom up (but could, of course, be conceived top down or by iteration). Here the "bottom" of a program is all the components which do not use any other components, while the "top" is those components which no other components use.

    When an incomplete program has been shown to work correctly (by proof or testing), a compiler can guarantee that this part of the system will continue to work correctly when new untested components are added on top of it. Programming errors within new components cannot make old components fail because old components do not call new components, and new components only call old components through well-defined procedures that have already been tested.

    Several other reasons besides program correctness make a hierarchical structure attractive:

    (1) A hierarchical system can be studied in a stepwise manner as a sequence of abstract machines simulated by programs [Dijkstra, 1971].

    (2) A partial ordering of process interactions permits one to use mathematical induction to prove certain overall properties of the system (such as the absence of deadlocks) [Brinch Hansen, 1973b].

    (3) Efficient resource utilization can be achieved by ordering the program components according to the speed of the physical resources they control (with the fastest resources being controlled at the bottom of the system) [Dijkstra, 1971].

    (4) A hierarchical system designed according to the previous criteria is often nearly decomposable from an analytical point of view. This means that one can develop stochastic models of its dynamic behavior in a stepwise manner [Simon, 1969].

    It seems most natural to represent a hierarchical system, such as Fig. 2.10, by a two-dimensional picture. But in order to write a concurrent


    program, we must somehow represent these access rules by linear text. This limitation of written language tends to obscure the simplicity of the original structure. That is why I have tried to explain the purpose of Concurrent Pascal by means of pictures instead of language notation.

    The next two chapters introduce the language notation of Sequential and Concurrent Pascal and present a complete, executable program for the pipeline system.


    SEQUENTIAL PASCAL

    The purpose of this work is to experiment with a small number of concepts for concurrent programming. Instead of inventing a new programming language from scratch I have used an existing sequential language Pascal as a host for these ideas. The resulting language is Concurrent Pascal.

    The model operating systems described here are written in Concurrent Pascal. All other programs are written in Sequential Pascal: compilers, editors, input/output drivers, job control interpreters, disk allocators, and user programs.

    This is a short, informal overview of Sequential Pascal. It is neither complete nor precise. But it should be sufficient to understand the programs described later. For historic reasons there are minor differences between the most recent version of Pascal [Jensen and Wirth, 1974] and the one used here. Since these differences do not change the direction of this work they will be ignored.

    The representation of basic symbols is somewhat restricted by the character set used (ASCII). I have improved this slightly by using boldface types for word symbols in this book. Apart from this, the programs are presented in their original executable form. I have become used to this program representation and find it as readable as any other.



    3.1 PROGRAM STRUCTURE

    A Sequential Pascal program consists of declarations of

        constants
        data types
        variables
        routines

    and a sequence of statements that operate on these objects. The statements will be executed one at a time.

    An outline of a program is shown below.

    const linelength = 132;

    type line = array (.1..linelength.) of char;

    var pageno, maxno: integer;
        ok: boolean;

    procedure writetext(text: line);
    var i: integer; c: char;
    begin
      i := 0;
      repeat
        i := i + 1;
        c := text(.i.);
        display(c);
      until c = '.';
    end;

    ...

    begin
      ...
      if pageno = maxno then
      begin
        writetext('file limit.');
        ok := false;
      end;
      ...
    end.

    The program defines a constant linelength with the value 132 and a data type line which is an array of characters numbered 1, 2, 3, ..., linelength.


    It uses three variables: two integers, called pageno and maxno, an