Mish, K.D. and Mello, J. "Computer-Aided Engineering." Mechanical Engineering Handbook, Ed. Frank Kreith. Boca Raton: CRC Press LLC, 1999. © 1999 by CRC Press LLC

15
Computer-Aided Engineering

15.1 Introduction
Definition of Terms (CAD/CAM/CAE) • Overview of Computer-Aided Engineering • Goals of Computer-Aided Engineering Chapter

15.2 Computer Programming and Computer Architecture
Software Engineering Overview • Computer Languages • Data Base Systems • Operating System Characteristics • Parallel Computation • Computer Graphics and Visualization

15.3 Computational Mechanics
Computational Solid, Fluid, and Thermal Problems • Mathematical Characteristics of Field Problems • Finite-Element Approximations in Mechanical Engineering • Finite-Difference Approximations in Mechanics • Alternative Numerical Schemes for Mechanics Problems • Time-Dependent and Nonlinear Computational Mechanics • Standard Packages Criteria • Selection of Software • Benchmarking

15.4 Computer Intelligence
Artificial Intelligence • Expert Systems in Mechanical Engineering Design • Neural Network Simulations • Fuzzy Logic and Other Statistical Methods

15.5 Computer-Aided Design (CAD)
Introduction • Entity-Based CAD Systems • Solid or Feature-Based CAD Systems • Computer-Aided Manufacturing (CAM)

    15.1 Introduction

The revolution in computer technology that has taken place over the last few decades has changed the face of all engineering disciplines. Fortunately, computers appropriate for solving complex mechanical engineering problems have evolved from the complicated, rare, and expensive mainframes or supercomputers of yesterday to the simple, common, and inexpensive desktop microcomputers widely available today. During this period, the application of improved computer technology has revolutionized the profession of mechanical engineering and enabled practicing engineers to solve problems routinely that a few decades ago would have resisted any available solution technique. However, many engineers possess only a limited grasp of important topics such as computational mechanics, computer science, and advanced computer-aided design principles. Thus, this chapter presents the fundamentals of these computational topics, in order to assist the practicing mechanical engineer in maintaining an appropriate high-level understanding of these emerging engineering tools.

Kyran D. Mish, California State University, Chico
Joseph Mello, Aerojet Corporation

    Definition of Terms (CAD/CAM/CAE)

First, it is necessary to define a few terms related to the integration of the computer into various aspects of engineering practice:

Computer-aided engineering (CAE) is a broadly inclusive term that includes application of the computer for general tasks encountered in mechanical engineering, including analysis, design, and production.

Computer-aided manufacture (CAM) is the topic concerned with the integration of the computer into the manufacturing process, including such tasks as controlling real-time production devices on the factory floor. This subject is treated in Section 13.

Computer-aided design (CAD) is a general term outlining areas where computer technology is used to speed up or eliminate efforts by engineers within the design/analysis cycle. This topic is also covered in Section 11.

The acronym CAD is also used as an abbreviation for computer-aided drafting. Since drafting is a key component of the mechanical engineering design process, this narrower meaning will be integrated into the general notion of computer-aided design practice.

    Overview of Computer-Aided Engineering

There are many important components of integrating the computer into mechanical engineering. In the past few decades, integrating the computer into mechanical engineering practice was largely a matter of generating input data for computational analyses and subsequent examination of the output in order to verify the results and to examine the response of the computer model. Today there are a wide variety of integration schemes used in engineering practice, ranging from artificial intelligence applications oriented toward removing the engineer from the design/analysis cycle, to "engineer-in-the-loop" simulations intended to offload to the computer the quantitative tedium of design and analysis in order to permit the engineer to concentrate instead on qualitative issues of professional judgment. In the near future, these existing computer integration techniques will be combined with virtual reality technology and improved models for human/computer interaction. In order to use these new developments, engineers must be appropriately schooled in the fundamental tenets of modern computer-aided engineering.

    Goals of Computer-Aided Engineering Chapter

This chapter is designed to aid practicing mechanical engineers in understanding the scope of applications of computer technology to their profession. To avoid premature obsolescence of the material presented here, a high-level orientation is utilized: details of computational techniques are relegated to a catalog of appropriate references, and high-level practical concepts are emphasized. Low-level details (such as computer performance issues) change rapidly relative to high-level concerns (such as general classifications of computer function, or qualitative measures of computer solution quality). Thus, concentrating on a technical overview of computer-aided engineering will permit this chapter to stay up-to-date over the long term. In addition, this approach provides the practicing mechanical engineer with the fundamental principles required to accommodate the relentless pace of change that characterizes the world of modern computation.

In addition to providing engineers with basic principles of computer science, computational mechanics, computer intelligence, and examples of the application of CAD/CAM principles, this chapter also provides insight into such simple but commonly neglected issues as:


What are the best ways to identify and judge computer applications for integration and use in a practical engineering setting?

What common pitfalls can be encountered using general-purpose CAE software, and how can these problems be avoided?

What are the most important characteristics of engineering computation, and how do engineering computer problems differ from computational methods used in other fields?

The development of material in this chapter is oriented toward presenting the practical "how" of effectively using engineering software, as opposed to the mathematical "why" of identifying the cause of pathological computer results. This chapter therefore first provides the practicing mechanical engineer with an overview of the field of computer-aided engineering, and then presents that material in a manner less likely to become obsolete due to the rapid pace of change in this important engineering discipline. Finally, in order to avoid overlap with other chapters, many details (for example, mathematical definitions) are relegated to other chapters, or to appropriate references. Thus this chapter mainly provides an overview of the broad subject of computer-aided engineering, leaving the individual component subjects to other chapters or to outside references.

    15.2 Computer Programming and Computer Architecture

Few other aspects of engineering practice change as rapidly as the use of the computer. Fifty years ago, mechanical engineers analyzed and designed automobiles that are not radically different from the automobiles of today, but practicing engineers of that era had no means of electronic calculation. Twenty-five years ago, mechanical engineers designed high-performance aircraft and space vehicles that are similar to those currently under development, but the computers of that more recent era had less power than a typical microprocessor available today in an automobile's braking system. Ten years ago, an advanced supercomputer appropriate for high-performance engineering computation cost several million dollars, required custom construction, installation, and support, and could perform between 10 and 100 million floating-point operations per second. Today, one can buy such computational power for a few thousand dollars at a retail electronics store. Given this background of astonishing change in the performance, size, and cost of computers, it is clear that the applications of computers to the practice of mechanical engineering can be expected to advance at an equally breathtaking rate.

This section presents fundamental principles from the viewpoint of what can be expected between today and the beginning of the next century. The emphasis is on basic principles and fundamental definitions.

    Software Engineering Overview

The modern field of software engineering is an outgrowth of what used to be called computer programming. Like many other engineering disciplines, software engineering gains much of its knowledge base from the study of failures in the specification, design, and implementation of computer programs. Unlike most engineering fields, however, the accepted codes of practice for software engineering are not yet universally adhered to, and many preventable (and expensive) software failures still occur.

    History of the Software Engineering Discipline

Software engineering is barely two decades old. Early efforts in this field are based on viewing the process of designing and implementing computer programs as analogous to designing and constructing physical objects. One of the most influential works demonstrating this "manufacturing tool" approach to the conceptualization of software is Software Tools (Kernighan and Plauger, 1976). In this view, large software projects are decomposed into reusable individual components in the same manner that complex machinery can be broken up into smaller reusable parts. The notion that computer programs can be constructed from components that can then be reused in new configurations is a theme that runs


throughout the history of the software engineering field. Modularity alone is not a panacea, however, in that the programming effort contained within modules must be readable so as to facilitate verification, maintenance, and extension. One of the most influential references on creating readable and reliable code is The Elements of Programming Style (Kernighan and Plauger, 1978).

Emphasizing a modular and reusable approach to software components represents one of the primary threads of software engineering; another important related thread arises in parallel to the idea of the manufacture of software as an administrative process. After the failure of several large software projects in the 1960s and 1970s, early practitioners of software engineering observed that many of these software failures resulted from poor management of the software development process. Chronicles of typical failures and the lessons to be learned are detailed in Fred Brooks' The Mythical Man-Month (Brooks, 1975) and Philip Metzger's Managing a Programming Project (Metzger, 1983). Brooks' conclusions are generally considered among the most fundamental principles of the field. For example, his observation that "adding manpower to a late software project makes it later" is generally accorded the status of an axiom, and is widely termed Brooks's law. More recent efforts in software management have concentrated on constructive solutions to past failures, as exemplified by Watts Humphrey's Managing the Software Process (Humphrey, 1989).

Understanding of the local (poor design) and global (poor management) causes of software failure has led to the modern study of software engineering. Obstacles to modularization and reuse of software tools were identified, and proposed remedies suggested and tested. Administrative and large-scale architectural failures have been documented, digested, and their appropriate solutions published. The result is a substantial body of knowledge concerning such important topics as specification and design of computer programs, implementation details (such as choice of programming language), and usability of software (e.g., how the human user relates to the computer program). This collection of knowledge forms the basis for the modern practice of software engineering (see Yourdon, 1982).

    Standard Models for Program Design and Implementation

A simple conceptual model of a program design and implementation methodology is presented by Yourdon (1993). This model is termed the "waterfall" model because its associated schematic overview bears a resemblance to a waterfall. The fundamental steps in the waterfall model (see Figure 15.2.1) proceed consecutively as follows:

System Analysis: the requirements of the software system are analyzed and enumerated as a set of program specifications.

System Design: the specifications are translated into precise details of implementation, including decomposition of the software system into relevant modules, and the implementation details required within this modular structure.

FIGURE 15.2.1 Waterfall model for software development: System Analysis → System Design → Programming → Testing → Delivery.


Programming: the proposed design is implemented in a computer programming language (or collection of languages) according to the structure imposed in the system design phase.

Testing: the resulting computer program is tested to ensure that no errors ("bugs") are present, and that it is sufficiently efficient in terms of speed, required resources (e.g., physical memory), and usability.

Delivery: the completed program is delivered to the customer.

This approach to software development works well for small tasks, but for larger mechanical engineering projects there are many problems with this model. Probably the most important difficulty is that large projects may take years of development effort, and the customer is forced to wait nearly until the end of the project before any results are seen. If the project's needs change during the implementation period (or, more commonly, if those needs were not specified with sufficient accuracy in the first place), then the final product will be inadequate, and required changes will have to be specified before those modifications can be percolated back through the waterfall life cycle again. Another important difficulty with this scheme occurs because quality-control issues (e.g., testing) are delayed until the project is nearly finished. In practice, preventative quality-assurance measures work better at reducing the likelihood that errors will occur in the program.

To avoid the inherent difficulties of the waterfall design, this sequential design model can be restructured in a spiral form that emphasizes more rapid incremental deliveries of programming function. These modified techniques are termed "incremental" or "spiral" development schemes (see Figure 15.2.2). The emphasis on quick delivery of incremental programming function permits seeing results sooner and, if necessary, making changes.

One important interpretation of the spiral development model is that the initial implementation of the program (i.e., the first pass through the spiral) can often be constructed as a simple prototype of the desired final application. This prototype is often implemented with limited function, in order to simplify the initial development process, so that little or no code developed in the prototypical implementation ends up being reused in the final commercial version. This approach has a rich history in practical software development, where it is termed "throwing the first one away" (after a similarly titled chapter in Brooks [1975]). This widely used scheme uses the prototype only for purposes of demonstration, education (i.e., to learn the location of potential development pitfalls), and marketing. Once the prototype has been constructed and approved, design and implementation strategies more similar to the traditional waterfall approach are used for commercial implementation.

An obvious argument against spiral development models that incorporate a disposable prototype is that the cost of development is paid twice: once for the prototype and once again for the final implementation. Current remedies for this inherent problem include rapid application development (RAD)

FIGURE 15.2.2 Spiral model for software development: START → Preliminary Analysis → Preliminary Design → Prototype Programming → Prototype Evaluation → Application Analysis → Application Design → Application Programming → Application Evaluation → END (or next spiral).


methods, which emphasize the use of computer-assisted software engineering (CASE) tools to permit substantial reuse of the prototype in the final version. With this approach, the programmer uses specially designed computer applications that build programs out of existing components from a software library. Program components that are especially mundane or difficult to implement (such as those commonly associated with managing a program's graphical user interface) often represent excellent candidates for automation using RAD or CASE tools. This area of software design and implementation will undoubtedly become even more important in the future, as the ability of computer programs to create other computer applications improves with time.

    Computer Languages

Regardless of what future developments occur in the field of CASE tools, most current computer programming efforts are carried out by human programmers using high-level programming languages. These programming languages abstract low-level details of computer function (such as the processes of performing numerical operations or allocating computer resources), thus allowing the programmer to concentrate on high-level concepts appropriate for software design and implementation. Figure 15.2.3 diagrams the various interrelationships among abstractions of data, abstractions of instructions, and combinations of abstracted data and instructions. In general, more expressive languages are to be found along the diagonal line, but there are many instances in which increased abstraction is not necessarily a primary goal (for instance, performing fast matrix operations, which can often be done efficiently in a procedural language).

In addition, high-level programming languages provide a means for making computer programs portable, in that they may be moved from one type of computer to another with relative ease. The combination of portability of computer languages and adherence to standard principles of software design enables programmers to produce applications that run on a wide variety of computer hardware platforms. In addition, programs that are designed to be portable across the different computers available today are generally easy to migrate to the new computers of tomorrow, which insulates the longer-term software development process from short-term advances in computer architecture.

Perhaps the best example of how good program design and high-level language adherence work together to provide longevity for computer software is the UNIX operating system. This common operating system was designed and implemented in the 1970s using the C programming language, and runs with only minor modifications on virtually every computer available today. This operating system and its associated suite of software tools provide an excellent demonstration of how good software

FIGURE 15.2.3 Language abstraction classification: assembly languages (little abstraction), procedural languages (abstraction of instructions), data base languages (abstraction of data), and object-programming languages (abstraction of both); more expressive languages lie along the diagonal.


modular design, when implemented in a portable programming language, can result in a software product that exhibits a lifespan much greater than that associated with a particular hardware platform.

    Computer Language Classification

Computer programs consist of two primary components: data and instructions. Instructions reflect actions taken by the computer program, and are comparable to verbs in natural human languages. In a similar manner, data reflect the objects acted on by computer instructions, in analogy with nouns in natural languages. Just as one would not attempt to construct sentences with only nouns or verbs, both data and instructions are necessary components of computer programs. However, programming languages are classified according to the relative importance of each of these components, and the resulting characterization of programming languages as data-centered (object-oriented) or instruction-centered (procedural) has major ramifications for the design, implementation, and maintenance of computer software.

Older programming languages (such as FORTRAN and COBOL) reflect a preeminent role for instructions. Languages that abstract the instruction component of programs permit the programmer to recursively decompose the overall computational task into logically separate subtasks. This successive decomposition of overall function into component subtasks highlights the role of the individual task as an encapsulated set of programming instructions. These encapsulated individual tasks are termed procedures (e.g., the SUBROUTINE structure in FORTRAN), and this "instruction first" approach to program design and construction is thus termed procedural programming. In procedural programming, the emphasis on decomposition of function obscures the important fact that the data component of the program is inherited from the procedural design, which places the data in a secondary role.

A more recent alternative model for programming involves elevating the data component of the program to a more preeminent position. This newer approach is collectively termed object programming, or object-oriented programming. Where procedural models abstract programming function via encapsulation of instructions into subroutines, object programming models bind the procedures to the data on which they operate. The design of the program is initially oriented toward modeling the natural data of the system in terms of its behaviors, and once this data component has been specified, the procedures that act upon the data are defined in terms of their functional operations on these preeminent data structures. Object programming models have been very successful in the important task of the creation of reusable code, and are thus valuable for settings where there is a natural need for considerable code reuse (such as implementation of a graphical user interface, where consistency among applications is desired).

The choice of language in computer programming is difficult because proponents of various languages and programming models favoring particular approaches often castigate those who advocate otherwise. Fortunately, there are few other branches of engineering that are so heavily politicized. It is important to look beyond the dogmatic issues surrounding language choice toward the important pragmatic goals of creating readable, verifiable, extensible, and reusable code.

Finally, it is important to recognize that the classification of programming languages into procedural and object-programming models is not precise. Regardless of whether data or instructions are given relative precedence, computer programs need both, and successful software design demands that the requirements of each component be investigated carefully. Although object-programming models relegate procedures to a secondary role relative to data structures, much of the final effort in writing an object-oriented program still involves design, creation, and testing of the procedures (which are termed class methods in object-programming models) that act upon the data. Similarly, it is possible to gain many object-programming advantages while using strictly procedural languages. In fact, some of the most successful languages utilized in current practice (e.g., the C++ programming language) are completely suitable for use as either procedural or object-oriented programming languages.

    Procedural Programming Models

The procedural programming languages most commonly used in current practice include Ada, C, FORTRAN, Basic, and Pascal. These procedural programming languages are characterized by a natural modular


structure based on programming function, which results in similar design methods being used for each. Since procedural languages are primarily oriented toward encapsulating programming function, each language has a rich set of control structures (e.g., looping, logical testing, etc.) that permits an appropriate level of control over the execution of the various procedural functions. Beyond this natural similarity, most procedural languages exhibit vast differences based on the expressiveness of the language, the range and extensibility of native data types, the facilities available for implementing modularity in the component procedures, and the run-time execution characteristics of the language.

Design Principles for Procedural Programming Models. The fundamental design principle for procedural programming is based on the concept of "divide and conquer," and is termed functional decomposition or top-down design. The overall effort of the program is successively decomposed into smaller, logically separate subtasks, until each remaining subtask is sufficiently limited in scope so as to admit implementation within a procedure of appropriately small size. The overall process is diagrammed in Figure 15.2.4.

The basic motivation behind functional decomposition is that the human mind is incapable of understanding an entire large computer program unless it is effectively abstracted into smaller "black boxes," each of which is simple enough so that its complete implementation can be grasped by the programmer. The issue of exactly how large an individual procedure can be before it becomes too large to understand is not an easy question to answer, but rules of thumb range from around ten up to a few hundred lines of executable statements. While it is generally accepted that extremely long procedures (more than a few hundred lines) have been empirically shown to be difficult to comprehend, there is still a lively debate in the software engineering community about how short procedures can become before an increased error rate (e.g., average number of errors per line of code) becomes apparent.

Although procedural programming languages possess the common characteristic of encapsulating programming function into separate granular modules, there are few similarities beyond this basic architectural resemblance. The various procedural languages often exhibit substantial differences in expressiveness, in the inhibition of practices associated with common language errors, and in the run-time characteristics of the resulting computer program.

Expressiveness represents the range of features permitted by the language that can be used by the programmer to implement a particular programming design. There is considerable evidence from the study of linguistics to support the notion that the more expressive the language, the wider the range of thoughts that can be entertained using this language. This important postulate is termed the Sapir-Whorf hypothesis. While the Sapir-Whorf hypothesis is considered controversial in the setting of natural human languages, in the vastly simpler setting of programming languages this phenomenon has been commonly

FIGURE 15.2.4 Sample functional decomposition: a complete task is divided into Subtasks 1, 2, and 3, which are in turn divided into Subtasks 1a and 1b; 2a, 2b, and 2c; and 3a and 3b.


observed in practice, where use of more expressive high-level languages has been correlated with overall programmer productivity. Expressiveness in a computer language generally consists of permitting more natural control structures for guiding the execution of the program, as well as permitting a wide range of data representations appropriate for natural abstraction of data.

Sample Procedural Programming Languages. Some procedural languages (including FORTRAN and Basic) permit only a limited set of control structures for looping, branching, and logical testing. For instance, before the FORTRAN-77 standard was promulgated, FORTRAN had no way of expressing the standard logical if-then-else statement. The FORTRAN-77 specification still does not permit standard nondeterministic looping structures, such as "do while" and "repeat until." The standard Basic language suffers from similar limitations, and is further impeded by the fact that most implementations of Basic are interpreted (each line of code is sequentially translated and then executed) instead of compiled (where the entire program's code is translated first and executed subsequently). Interpreted languages such as Basic are often incredibly inefficient, especially on problems that involve substantial looping, as the overhead of retranslating each line of code cannot be amortized in the same manner available to compiled languages. Finally, because Basic is limited in its expressiveness, many implementations of Basic extend the language to permit a greater range of statements or data types. While language extension facilitates programmer expression, it generally compromises portability, as different nonstandard dialects of the extended language generally develop on different computer platforms. The extreme case is illustrated by Microsoft's Visual Basic language, which is completely tied to Microsoft applications and operating systems software (and thus inherently nonportable), but so useful in its extensions to the original Basic language that it has become the de facto scripting language for Microsoft applications and operating systems.

Ada and Pascal are very expressive languages, permitting a rich set of control structures and a simple extension of the set of permitted data types. In addition, Ada and some Pascal dialects force the programmer to implement certain forms of data modularity that are specifically designed to aid in the implementation of procedural programs. In a similar vein, standard Pascal is so strongly typed that it forces the programmer to avoid certain common practices (such as misrepresenting the type of a data structure passed to a procedure, which is a widespread and useful practice in FORTRAN) that are associated with common errors in program implementation. In theory, Pascal's strict approach to representing data structures and its rich set of control structures ought to make it an attractive language for engineering programming. In practice, its lack of features for arithmetic calculation and its strict rules on data representation make it fairly difficult to use for numeric computation. Ada is a more recent language that is based on Pascal, but remedies many of Pascal's deficiencies. Ada is a popular language in mechanical engineering applications, as it is mandated for use on many Department of Defense programming projects.

C and FORTRAN are among the most common procedural languages used in large-scale mechanical engineering software applications. Both are weakly typed compiled languages with a rich set of available mathematical operations. C permits a considerable range of expressive control structures and extensible data structures. In addition, C is extremely portable and generally compiles and runs quickly, as the language's features are closely tuned to the instruction sets of modern microprocessors used in current generations of computers. The original C language specification was replaced in 1988 by a new ANSI standard, and this current language specification adds some features (such as type checking on arguments passed to procedures, a facility that aids greatly in preventing common programming errors) that resemble those found in Pascal, but do not seem to compromise the overall utility of the original C language standard.

FORTRAN's current implementation (FORTRAN-90) adds user-defined extensible data structures and a more expressive instruction set to the FORTRAN-77 standard, but the FORTRAN-90 standard has so far been slow to gain acceptance in the programming community. FORTRAN-77 compilers are still common, and the problems associated with this version of the language (e.g., minimal resources for abstracting data, limited control structures for expressing program flow) still compromise the architecture of FORTRAN programs. However, FORTRAN retains a rich set of intrinsic numeric operations, so it is still a good choice for its original goal of Formula Translation (from which the language derives its name). In addition, FORTRAN programs often execute very rapidly relative to other procedural languages, so for programs that emphasize rapid mathematical performance, FORTRAN is still a good language choice. Finally, many FORTRAN-callable libraries of mathematical operations commonly encountered in engineering applications are available, and this ability to leverage existing procedural libraries makes FORTRAN an excellent choice for many mechanical engineering applications.

Advantages and Disadvantages of Procedural Programming. Procedural programming has inherent advantages and disadvantages. One of the most important advantages of some procedural languages (notably FORTRAN) is the existence of many complete libraries of procedures for solving complex tasks. For example, there are many standard libraries for linear algebra (e.g., LINPACK, EISPACK, LAPACK) or general scientific numerical computation (e.g., IMSL) available in FORTRAN-callable form. Reuse of modules from these existing libraries permits programmers to reduce development costs substantially for a wide variety of engineering applications. Under most current portable operating systems, multiple-language integration is relatively straightforward, so high-quality FORTRAN-callable libraries can be called from C programs, and vice versa. The development of standard procedural libraries is largely responsible for the present proliferation of useful computer applications in mechanical engineering.

Another important advantage of procedural languages is that many important computational tasks (such as translating mathematical models into analogous computer codes) are naturally converted from the underlying mathematical algorithm (which is generally a sequence of instructions, and hence amenable to encapsulation within a procedure) into an associated modular procedure. As long as the data used within a program do not become unduly complex, procedural languages permit easy implementation of many of the standard methods used in engineering analysis and design.

Perhaps the biggest disadvantage of procedural models is that they are harder to reuse than competitive object-programming models. These obstacles to code reuse arise from the fact that data are modeled in procedural programming as an afterthought to the simulation of instructions. In order to reuse procedures between two different programs, the programmer must force the representation of data to be identical across the different computer applications. In procedural programming, the goal of code reuse requires standardization of data structures across different computer programs, regardless of whether or not such standardization is natural (or even desired) by those disparate computer programs. Object-programming models are an attractive alternative to procedural programming schemes in large part because these newer programming methods successfully avoid such unwarranted data standardization.

    Object Programming Models

Object-programming models place modeling of data structures in a more preeminent position, and bind to the data structures the procedures that manipulate the data. This relegation of procedures (which are termed methods in object programming) to a more secondary role facilitates a degree of code reuse substantially better than is feasible with conventional procedural programming languages. Object-programming languages employ aggregate data types (consisting of various data fields, as well as the associated methods that manipulate the data) that are termed classes, and these classes serve as templates for creation of objects that represent specific instances of the class type. Objects thus form the representative granularity found in object-oriented programming models, and interactions among objects during program execution are represented by messages that are passed among the various objects present. Each message sent by one object to another tells the receiving object what to do, and the details of exactly how the receiving object accomplishes the associated task are generally private to the class. This latter issue of privacy regarding implementation details of object methods leads to an independence among objects that is one of the main reasons that object-programming schemes facilitate the desired goal of code reuse.


One of the most important limitations of procedural languages is abstraction. High-level languages such as FORTRAN or C permit considerable abstraction of instructions, especially when compared to the machine and assembly languages they are designed to supplant. Unfortunately, these languages do not support similar levels of abstraction of data. For example, although FORTRAN-77 supports several different numeric types (e.g., INTEGER, REAL, DOUBLE PRECISION), the only derived types available for extending these simple numeric representations are given by vectors and multidimensional arrays. Unless the data of a problem are easily represented in one of these tabular forms, they cannot be easily abstracted in FORTRAN. To some extent, experienced programmers can create new user-defined data types in C using structures and typedefs, but effectively abstracting these derived data types requires considerable self-discipline on the part of the programmer.

Object-oriented programming languages avoid these pitfalls of procedural languages by using classes as templates for abstraction of both instructions and data. By binding the instructions and the data for classes together, the programmer can abstract both components of a programming model simultaneously, and this increased level of abstraction results in a radically new programming model. For instance, a natural class for finite-element modeling would be the class of finite-element mesh objects. A mesh object (which could easily be composed internally of node and element objects) makes it possible for the object programmer to hide all the details of mesh representation from the rest of the program. A procedural programming model would require standardization of the mesh to consist of (for example):

A list of nodes, each associated with individual nodal coordinates given in 1D, 2D, or 3D (depending on the geometry of the model used)

    A list of elements, each with a given number of associated nodes

    A list of element characteristics, such as material properties or applied loads

In this representation, each procedure that manipulates any of the mesh data must know all of the details of how these data have been standardized. In particular, each routine must know whether a 1D, 2D, or 3D finite-element analysis is being performed, and pertinent details of the analysis (e.g., is the problem being modeled thermal conduction or mechanical deformation?) are also spread throughout the code by the standardization of data into predefined formats. The sum of these constraints is to require the programmer to recode substantial components of a procedural program every time a major modification is desired.

In the setting of objects, the finite-element mesh object would store its particular geometric implementation internally, so that the rest of the program would be insulated from the effects of changes in that representation. Rather than calling a procedure to generate a mesh by passing predefined lists of nodes, elements, and element characteristics, an object-oriented approach to mesh generation would employ sending a message such as "discretize yourself" to the mesh object. This object would then create its internal representation of the mesh (perhaps using default values created by earlier messages) and store this information privately. Alternatively, the object-oriented program might later send a "solve yourself" message to the mesh object and then a "report your results in tabular form" message for generating output. In each case, the rest of the program has no need to know the particular details of how the mesh object is generating, storing, or calculating results. Only the internal procedures local to the class (i.e., the class methods) generally need to know this private data, which are used locally to implement the functions that act on the class.

This hiding of internal function within an object is termed encapsulation, and object-programming models permit simultaneous encapsulation of both data and instructions via appropriate abstraction. In this setting, encapsulation permits the programmer to concentrate on creating data and procedures naturally, instead of forcing either component into predefined formats such as floating-point arrays (for data) or predefined subroutine libraries (for instructions). Data and instruction abstraction of this form are thus useful additions to the similar (but less flexible) features available in procedural languages. If these new features constituted the only improvements available from object-programming models, they would offer only slight advantages over traditional procedural programming. There are, however, many other advantages present in object-programming models.

The most important advantages of object programming occur because of the existence of class hierarchies. These hierarchies permit new objects to be created from others by concentrating only on the differences between the objects' behaviors. For instance, a finite-element mesh for a rod lying in three dimensions can be derived from a one-dimensional mesh by adding two additional coordinates at each node. An object programmer could take an existing one-dimensional mesh class and derive a three-dimensional version using only very simple steps:

    Adding internal (private) representation for the additional coordinate data

Overriding the existing discretization method to generate the remaining coordinates when the "discretize yourself" message is sent

Overriding some low-level calculations in class methods pertinent to performing local element calculations using the new coordinate representation

Note that all of these steps are private to the mesh object, so that no other part of the program needs to be changed to implement this major modification to the problem statement. In practice, the added details are implemented via the creation of a derived class, where the additional coordinates and the modified methods are created. When messages appropriate to the new class are sent, the derived object created from the new class will handle only the modified data and instructions, and the parent object (the original mesh object) will take care of the rest of the processing. This characteristic of object-programming models is termed inheritance, as the individual derived (child) objects inherit their behavior from the parent class. When the changes required by the modification to the program's specifications are small, the resulting programming effort is generally simple. When the changes are large (such as generalizing a one-dimensional problem to a fully three-dimensional one), it is still often feasible to make only minor modifications to the program to implement the new features.

One of the most important rationales for using object-programming methods arises from the desire to provide a consistent user interface across diverse programs. Existing standardized graphical interface models (such as the Motif interface available on OSF/UNIX, or the Microsoft Windows interface used on Windows and Windows NT) place a premium on a consistent look and feel across different applications. Since managing the user interface commonly constitutes much of the programming effort required to implement interactive engineering applications, it is advantageous to consolidate all of the code required to implement the standard graphical user interface into a class library and allow the programmer to derive new objects pertinent to the application at hand.

One such class library is the Microsoft Foundation Classes, which implement the Windows interface via a class hierarchy requiring around a hundred thousand lines of existing C++ source code. Programmers using class libraries such as these can often generate full-featured graphics applications by writing only a few hundred or a few thousand lines of code (notably, for reading and storing data in files, for drawing content into windows, and for relevant calculations). In fact, it is relatively easy to graft graphical user interfaces onto existing procedural programs (such as old FORTRAN applications) by wrapping a C++ user-interface layer from an existing class library around the existing procedural code, and by recycling relevant procedures as class methods in the new object-oriented setting. This "interface wrapper" approach to recycling old procedural programs is one of many standard techniques used in reengineering of existing legacy applications (Barton and Nackman, 1994).

One other important characteristic of many object-programming languages is polymorphism. Polymorphism (from the Greek for "many forms") refers to the ability of a single message to spawn different behaviors in various objects. The precise meaning of polymorphism depends upon the run-time characteristics of the particular object-programming language used, but it is an important practical feature in any object-programming language.

Object-Oriented Design Principles. Because object-oriented programming is a relatively new discipline of software engineering (when compared to procedural programming), one cannot yet identify the best design schemes among the various competing object-oriented design principles. For this reason (and to avoid prejudging the future), this section treats the subject of object-oriented design in less detail than procedural design methods.

The fundamental tenet of object-oriented program design is that the programming objects should be chosen to model any real-world objects present in the system to be analyzed and simulated. For example, in a thermal analysis of a microprocessor, one might identify such natural physical objects as "heat sink," "thermocouple," and "fan." In general, the nature of the physical objects in a mechanical system is stable over long periods of time, so they make natural candidates for programming objects, as their specifications are least likely to vary, and thus they will require minimal modifications to the basic program design.

The next step in performing an object-oriented design is to model the behaviors of the various objects identified within the system. For example, a fan object can turn on and turn off, or might vary in intensity over a normalized range of values (e.g., 0.0 = off, 1.0 = high speed), and this behavior will form the basis for the messaging protocols used to inform objects which behaviors they should exhibit. At the same time, any relevant data appropriate to the object (in this case, fan speed, power consumption, requisite operating voltage) should be identified and catalogued. Here, these individual items of data will represent the private data of the fan class, and the behaviors of this class will be used to design the class methods.

The final step in specifying an object-oriented design is to examine the various objects for interrelationships that can be exploited in a class hierarchy. In this setting, "heat sink" and "fan" could be considered to be derived from a larger class of cooling devices (although in this trivial example, this aggregation is probably unnecessary). Careful identification of hierarchical relationships among the candidate objects will generally result in an arrangement of classes that will permit considerable code reuse through inheritance, and this is one of the primary goals of object-programming design practice.

In practice, there is no final step in designing object-oriented programs, as the design process is necessarily more complex and iterative than procedural programming models. In addition, the object-oriented designer must take more care than given here in differentiating the role of classes (which are the static templates for construction of objects) from objects themselves, which are the dynamic realization of specific members of a class created when an object-oriented program executes. Objects are thus specific instances of generic classes, and the process of creating objects at run time (including setting all appropriate default values, etc.) is termed instantiation.

Sample Object-Oriented Languages. There are not as many successful object-oriented languages as there are procedural languages, because some languages (such as Ada and FORTRAN-90) that possess limited object-oriented features are more properly classified as procedural languages. However, Ada 95 does include excellent facilities for object-oriented programming.

C++ is the most commonly used object-oriented language and was primarily developed at Bell Labs in the same pragmatic vein as its close procedural relative, C. In theory, the C++ language includes both procedural and object-programming models, and thus C++ can be used for either type of programming. In practice, the procedural features of C++ are nearly indistinguishable from those of ANSI C, and hence the phrase "programming in C++" is generally taken to mean object programming in C++. C++ is well known as an object-programming language that is not particularly elegant, but that is very popular because of its intimate relation with the C procedural programming language (C++ is a superset of ANSI C) and because of its extensive features. The design goal of maintaining back-compatibility with ANSI C has led to shortcomings in the C++ language implementation, but none of these shortcomings has seriously compromised its popularity. C++ is an efficient compiled language, providing the features of object-programming models without undue loss of performance relative to straight procedural C, and C++ is relatively easy to learn, especially for knowledgeable C programmers. It supports extensive inheritance, polymorphism, and a variety of pragmatic features (such as templates and structured exception handling) that are very useful in the implementation of production-quality code.


    ntAn important recent development in object-oriented design is the Java programming language: thepopularity of this new language is closely tied to the explosion of interest in the Internet. Java is widelyused to provide interactive content on the World-Wide-Web, and it has a syntax very similar to C++, apervasive object-orientation, and provides portable elements for constructing graphical user interfaces.Java programs can be deployed using interpreted forms over the web (utilizing a Java Virtual Machineon the client platform), or by a more conventional (though less portable) compilation on the targetcomputer.

SmallTalk is one of the oldest and most successful object-programming languages available, and was designed at the Xerox Corporation's Palo Alto Research Center (also responsible for the design of modern graphical user interfaces). SmallTalk supports both inheritance (in a more limited form than C++) and polymorphism, and is noted as a highly productive programming environment that is particularly amenable to rapid application development and construction of prototypes. SmallTalk is not a compiled language, and while this characteristic aids during the program implementation process, it generally leads to computer programs that are substantially less efficient than those implemented in C++. SmallTalk is generally used in highly portable programming environments that possess a rich library of classes, so that it is very easy to use SmallTalk to assemble portable graphical interactive programs from existing object components.

Eiffel is a newer object-oriented language with similar structure to object-oriented variants of the Pascal procedural programming language. Eiffel is similar in overall function to C++ but is considerably more elegant, as Eiffel does not carry the baggage of backward compatibility with ANSI C. Eiffel has many important features that are commonly implemented in commercial-quality C++ class libraries, including run-time checking for corruption of objects, which is a tremendous aid during the program debugging process. Even with its elegant features, however, Eiffel has not gained the level of acceptance of C++.

There are other object-oriented programming languages that are worth mentioning. The procedural language Ada provides some support for objects, but neither inheritance nor polymorphism. FORTRAN-90 is similarly limited in its support for object-programming practices. Object Pascal is a variant of Pascal that grafts SmallTalk-like object orientation onto the Pascal procedural language, and several successful implementations of Object Pascal exist (in particular, the Apple Macintosh microcomputer used Object Pascal calling conventions, and this language was used for most commercial Macintosh application development for many years). For now, none of these languages provides sufficient support for object-oriented programming features (or a large-enough user community) to provide serious competition for C++, SmallTalk, Eiffel, or Java.

    Data Base Systems

In procedural programming practice, the modeling of data is relegated to an inferior role relative to the modeling of instructions. Before the advent of object-oriented programming languages, which permit a greater degree of data abstraction, problems defined by large or complex data sets required more flexibility for modeling data than traditional procedural programming techniques allowed. To fill this void, specialized data base management systems were developed, and a separate discipline of computer programming (data management) arose around the practical issues of data-centered programming practice. The study of data base management evolved its own terminology and code of application design, and suffered through many of the same problems (such as language standardization to provide cross-platform portability) that had plagued early efforts in procedural programming. The data base management subdiscipline of software engineering is still fundamentally important, but the widespread adoption of object-oriented languages (which permit flexible modeling of data in a more portable manner than that provided by proprietary data base management systems) has led to many of the concepts of data base management becoming incorporated into the framework of object-oriented programming practice.


Technical Overview of Data Base Management

Many important engineering software applications are naturally represented as data base applications. Data base applications are generally developed within specialized custom programming environments specific to a particular commercial data base manager, and are usually programmed in a proprietary (and often nonportable) data base language. Regardless of these issues, data base programming is a particular form of computer programming, and so the relevant topics of software engineering, including procedural and object models, portability, reliability, etc., apply equally well to data base programming. Because many of these principles have already been presented in considerable detail, the following sections on design and programming issues for data base systems are kept relatively concise.

Data base applications are very similar to conventional programming applications, but one of the most important differences is in the terminology used. Data base applications have developed a nomenclature specifically designed for dealing with structured and unstructured data, and this terminology must be addressed. Some of the most appropriate terms are enumerated below.

Table: a logically organized collection of related data

Record: a collection of data that is associated with a single item (records are generally represented as rows in tabular data base applications)

Field: an individual item of data in a record (fields are generally represented as columns in a tabular data base)

Schema: the structure of the data base (schema is generally taken to mean the structure and organization of the tables in a tabular data base)

Query: a structured question regarding the data stored in the data base (queries are the mechanism for retrieving desired data from the data base system)

There are many other relevant terms for data base management, but these are sufficient for this brief introduction. One important high-level definition used in data base management is Structured Query Language, or SQL. SQL is a standard language for creating and modifying data bases, retrieving information from data bases, and adding information to data bases. In theory, SQL provides an ANSI-standard relational data base language specification that permits a degree of portability for data base applications. In practice, standard SQL is sufficiently limited in function that it is commonly extended via proprietary addition of language features (this situation is similar to that of the Basic procedural language, which suffers from many incompatible dialects). The practical effect of these nonstandard extensions is to compromise the portability of some SQL-based data base systems, and additional standardization schemes are presently under development in the data base management industry. One such scheme is Microsoft's Open Data Base Connectivity (ODBC) programming interface, which provides portable data base services for relational and nonrelational data base applications.

    Classification of Data Base Systems

There are many different types of data base systems in common use. One of the most important initial steps in designing and implementing a data base application is to identify the relevant characteristics of the data in order to choose the most appropriate type of data base for development. Depending upon the structure of the data to be modeled, the data base application developer can select the simplest scheme that provides sufficient capabilities for the problem. Three sample data base structures are presented below: flat-file data bases, relational data bases, and object-oriented data bases.

Flat-File Data Bases. Flat-file data bases represent the simplest conceptual model for data base structure. A flat data base can be idealized as a table with a two-dimensional matrix or grid structure. The individual data base records are represented by the rows of the matrix, and each record's component fields are represented by the columns of the matrix structure, as shown in Figure 15.2.5. Flat-file data bases are thus confined to applications where all records are structurally identical (i.e., have the same configuration of fields) and where the underlying matrix structure naturally represents the data component of the application.


The simplicity of a flat-file data base is simultaneously its greatest advantage and worst disadvantage. The main advantage of using a flat-file data base structure is that querying the data base is extremely simple and fast, and the resulting data base is easy to design, implement, and port between particular data base applications. In practice, spreadsheet applications are often used for constructing flat-file data bases, because these packages already implement the requisite tabular structure and include a rich variety of control structures for manipulating the data.

The biggest disadvantage of flat-file data bases is that the extreme simplicity of the flat structure simply does not reflect many important characteristics of representative data base applications. For programs requiring flexibility in data base schema, or complex relationships among individual data fields, flat-file data bases are simply a poor choice, and more complex data base models should be used.

Relational Data Bases. In practice, data base applications often require modeling relationships among various fields that may be contained in separate data files. Applications with these relational features are termed relational data bases. Relational data base technology is a rapidly evolving field, and this family of data bases is very common in practical data base applications.

Relations provide a way to generalize flat-file data base tables to include additional features, such as variation in the numbers of fields among different records. A schematic of a simple relational data base schema is shown in Figure 15.2.6. Here a data base of material properties is represented by related tables. Note that because of the disparity in number of material constants (i.e., differing numbers of fields for each material record), a flat-file data base would not be suitable for this data base storage scheme.

The material properties tables (containing the lists of material properties) are related to their parent table, which contains overall identification information. These parent-child relationships give relational data bases considerable flexibility in modeling diverse aggregates of data, but also add complexity to the task of storing and retrieving data in the data base. In a flat-file data base system, a simple lookup (similar to indexing into a two-dimensional array) is required to find a particular field. In a complex relational data base, which may exhibit many nested layers of parent-child relations, the task of querying may become very complex and potentially time-consuming. Because of this inherent complexity in storing and retrieving data, the topics of efficient data base organization and of query optimization are essential for careful study before any large-scale relational data base application is undertaken.

    Object-Oriented Data Bases. Many of the data base schemes found in relational and flat-file data base systems arose because of the inability to model data effectively in older procedural programming languages like FORTRAN. Commercial relational data base managers combined powerful data-modeling capabilities with new procedural languages (such as SQL or XBase) specifically designed to manipulate data base constructs. Recently, the proliferation of object-oriented programming languages, with their innate ability to abstract data as effectively as is possible with dedicated data base management systems, has led to the development of object-oriented data base systems. These object-oriented data base packages provide extremely powerful features that may ultimately make traditional SQL-based relational data base applications obsolete.

    FIGURE 15.2.5 Flat data base example.

    Record                 Field 1: Name   Field 2: Yield Strength   Field 3: Young's Modulus   Field 4: Shear Modulus
    Record 1: Material 1   Aluminum        250 MPa                   70 GPa                     25 GPa
    Record 2: Material 2   Magnesium       150 MPa                   45 GPa                     18 GPa
    Record 3: Material 3   Steel           400 MPa                   200 GPa                    85 GPa

    One interesting example of object-oriented data base technology is the integration of data base technology into a C++ framework. The Microsoft Foundation Class library for C++ provides numerous features, formerly requiring custom data base programming, that are implemented as C++ class members. For example, there are extensible data base classes that provide direct support for common data base functions, and there are ODBC (Open Data Base Connectivity, the extension of SQL to generic data base environments) classes allowing a C++ program to access existing relational data bases developed with specialized data base management systems. Given the extensibility of C++ class libraries, this object-oriented approach makes it feasible to gain all of the advantages of proprietary relational data base applications, while preserving the numerous features of working in a standard portable programming language.
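    As an illustration of the object-oriented idea (a hypothetical sketch; it does not reproduce the actual Microsoft Foundation Class API), a record can be modeled as a class that carries both its data fields and its own persistence logic:

```python
import json

class MaterialRecord:
    """A data base record that knows how to serialize itself.
    The class and field names are invented for illustration."""

    def __init__(self, name, yield_strength_mpa):
        self.name = name
        self.yield_strength_mpa = yield_strength_mpa

    def serialize(self):
        # A real object data base would write to managed storage;
        # here we simply emit a portable JSON string.
        return json.dumps({"name": self.name,
                           "yield_strength_mpa": self.yield_strength_mpa})

    @classmethod
    def deserialize(cls, text):
        # Reconstruct a live object from its stored representation.
        data = json.loads(text)
        return cls(data["name"], data["yield_strength_mpa"])
```

The point of the object-oriented approach is that the data and the code that manages it travel together, instead of being split between a data file and a separate query language.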

    Operating System Characteristics

    Computer programs depend on low-level resources for execution support, including file services for input/output, graphical display routines, scheduling, and memory management. The software layers that provide these low-level services are collectively termed the computer's operating system. Operating systems thus insulate individual programs from the details of the hardware platform where they are executed, and choosing the right operating system can be a critical decision in engineering practice.

    Engineering computation is generally identified by three fundamental characteristics:

    Large demand for memory, where extremely large data sets (generally on the order of megabytes or gigabytes) are used, and where all components of these demanded memory resources must be accessible simultaneously (This demand for memory can be contrasted with standard on-line transaction-processing schemes used in finance and commerce, where there is a similar characteristic of large data sets, but these large financial data models are seldom required to have all components available in memory at the same time.)

    Dependence on floating-point computation, where there are high-precision floating-point representations of numbers (i.e., numbers stored in the binary equivalent of scientific notation, where storage is divided among sign, mantissa, and exponent, requiring more extensive storage than that required for characters or integer data types)

    FIGURE 15.2.6 Example relational data base structure.

    Material identification (parent) table:

    Material Name   Material ID   Material Type
    Steel           1             Isotropic
    Wood            2             Orthotropic

    Material properties (child) table:

    Material ID   Material Property
    1             Steel Yield Strength
    1             Steel Young's Modulus
    1             Steel Shear Modulus
    2             Wood Tensile Strength with Grain
    2             Wood Cross-Grain Compressive Strength
    2             Wood Shear Strength
    2             Wood Tensile Elastic Modulus
    2             Wood Compressive Elastic Modulus
    2             Wood Shear Modulus

    Extensive use of graphics in input and display, as graphics is generally characterized as "the engineer's second language," because only the human visual sense has sufficient bandwidth to process the vast amounts of data generally present in engineering computation

    While many of these characteristics may be found in other computational settings, the simultaneous presence of all three is a hallmark of computation in science and engineering. Identifying and selecting an operating system that provides appropriate support for these characteristics is thus a fundamentally important problem in the effective development and use of engineering software.
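    The sign/mantissa/exponent storage mentioned in the floating-point characteristic above can be inspected directly. A minimal sketch using the standard IEEE 754 double-precision layout (1 sign bit, 11 biased-exponent bits, 52 mantissa bits):

```python
import struct

def decompose_double(x):
    """Return (sign, biased exponent, mantissa bits) of a 64-bit float."""
    # Reinterpret the 8 bytes of the double as one unsigned 64-bit integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                    # 1 sign bit
    exponent = (bits >> 52) & 0x7FF      # 11-bit biased exponent
    mantissa = bits & ((1 << 52) - 1)    # 52-bit fraction field
    return sign, exponent, mantissa
```

For example, 1.0 is stored with sign 0, biased exponent 1023 (i.e., true exponent 0), and an all-zero fraction field, which is why a double occupies eight bytes rather than the one byte of a character.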

    Technical Overview of Operating Systems

    A simple and effective way to gain an overview of operating systems theory is to review the classificationscheme used to identify various operating systems in terms of the services that they provide. The mostcommon characteristics used for these classifications are enumerated below.

    Multitasking. Humans are capable of performing multiple tasks simultaneously, and this characteristic is desirable in a computer operating system as well. Although an individual computer CPU can only process the instructions of one application at a time, it is possible with high-performance CPUs to manage the execution of separate programs concurrently by allocating processing time to each application in sequence. This sequential processing of different applications makes the computer appear to be executing more than one software application at a time. When an operating system is capable of managing the performance of concurrent tasks, it is termed a multitasking operating system. Many early operating systems (such as MS/DOS) could only execute a single task at a time and were hence termed single-tasking systems. While it is possible to load and store several programs in memory at one time and let the user switch between these programs (a technique sometimes termed context switching that is commonly used in MS/DOS applications), the lack of any coherent strategy for allocating resources among the competing programs limits the practical utility of this simple tasking scheme.

    A simple generalization of context switching is known as cooperative multitasking, and this simple tasking scheme works remarkably well in some settings (in fact, this method is the basis for the popular Microsoft Windows 3.x and Apple Macintosh System 7.x operating systems). In a cooperative multitasking setting, the allocation of computer resources is distributed among the competing programs: the individual programs are responsible for giving up resources when they are no longer needed. A comparison to human experience is a meeting attended by well-behaved individuals who readily yield the floor whenever another speaker desires to contribute. Just as this scheme for managing human interaction depends on the number of individuals present (obviously, the more people in the meeting, the more difficult the task of distributed management of interaction) as well as on the level of courtesy demonstrated by the individual speakers (e.g., there is no simple means for making a discourteous speaker yield the floor when someone else wants to speak), the successful use of cooperative multitasking schemes is completely dependent on the number and behavior of the individual software applications being managed. Ill-behaved programs (such as a communications application that allocates communications hardware when executed, but refuses to release it when not needed) compromise the effectiveness of cooperative multitasking schemes and may render this simple resource-sharing model completely unusable in many cases.

    The obvious solution to managing a meeting of humans is to appoint a chair who is responsible for allocating the prioritized resources of the meeting: the chair decides who will speak and for how long, depending upon scheduling information such as the meeting's agenda. The computer equivalent of this approach is termed preemptive multitasking and is a very successful model for managing the allocation of computer resources. Operating systems that use a preemptive multitasking model make use of a scheduler subsystem that allocates computer resources (such as CPU time) according to a priority system. Low-priority applications (such as a clock accessory, which can update its display every minute or so without causing serious problems) are generally given appropriately rare access to system resources, while high-priority tasks (such as real-time data acquisition applications used in manufacturing, which cannot tolerate long intervals without access to the operating system's services) are given higher priority.

    Of course, the scheduler itself is a software system and generally runs at the highest level of priority available.

    Preemptive multitasking operating systems are natural candidates for engineering software, as the intense memory and hardware resources associated with engineering computation require appropriately high-powered operating system support. Virtually all large engineering computers of the present era (e.g., workstations, mainframes, and supercomputers) run operating systems that provide preemptive multitasking, and many microcomputers are now available with similar operating system support.

    Multithreading. In the setting of multitasking, the term task has some inherent imprecision, and this ambiguity leads to various models for allocating computer resources among and within applications. In the simplest setting, a task can be identified as an individual software application, so that a multitasking operating system allocates resources sequentially among individual applications. In a more general context, however, individual programs may possess internal granularity in the form of subprocesses that may execute in parallel within an application. These subprocesses are termed threads, and operating systems that support multiple threads of internal program execution are termed multithreaded operating systems.

    Examples of multiple threads of execution include programs that support internally concurrent operations such as printing documents while other work is in progress (where a separate thread is spawned to handle the printing process), displaying graphical results while performing other calculations (where a separate thread can be used to paint the screen as data are read or calculated), or generating reports from within a data base application while other queries are performed. In general, multithreading of individual subtasks within an application will be advantageous whenever the spawned threads represent components of the application that are complicated enough that waiting for them to finish (which would be required in a single-threaded environment) would adversely affect the response of the program.
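    A minimal sketch of the document-printing example: a slow subtask is handed to a spawned thread so the main line of execution is not blocked (the file name and the sleep standing in for slow printing hardware are illustrative only):

```python
import threading
import time

results = []

def print_document(name):
    # Stand-in for a slow output device: the thread blocks here,
    # but the rest of the program keeps running.
    time.sleep(0.01)
    results.append(f"printed {name}")

# Spawn a separate thread for the printing subtask.
worker = threading.Thread(target=print_document, args=("report.txt",))
worker.start()

# Meanwhile the main thread continues with other work immediately.
results.append("main thread keeps working")

# Collect the spawned thread before exiting.
worker.join()
```

In a single-threaded program, the "main thread keeps working" step could not begin until the print completed.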

    Multiprocessing. One of the most important advantages of separating a program into multiple threads is that this decomposition of programming function permits individual threads to be shared among different processors. Computers with multiple CPUs have been common platforms for performing high-end engineering computation for over a decade (e.g., multiprocessor supercomputer architectures, such as the Cray X-MP and Cray Y-MP models introduced in the 1980s), but the availability of multiple processing units within a single computer has finally gravitated to the realm of low-end microcomputers. The ability of an operating system to support concurrent execution of different program threads on different processors is termed multiprocessing. Multiprocessing occurs in two fundamental flavors:

    Symmetric multiprocessing (SMP), where each individual CPU is capable of executing any process, including threads originating within applications or within operating system services

    Asymmetric multiprocessing (ASMP), where different processors are relegated to different tasks,such as running applications or running operating systems services

    Asymmetric multiprocessing is commonly implemented using a dual-CPU architecture involving a master/slave relation between the processing units. The master CPU runs the applications and some system services, while the slave CPU is relegated to pure system tasks (such as printing, waiting for slow input/output devices, etc.). Asymmetric multiprocessing architectures provide some speed-up of individual programs, but this increased performance is often limited to reducing the wait time required for some system services. Symmetric multiprocessing can produce substantial gains in program execution speed, as long as individual threads do not contend for resources. The ability of a program (or an operating system) to take advantage of multiple CPU resources is termed scalability, and scalable operating systems are well positioned to take advantage of current improvements in available multiprocessing hardware platforms.
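    A rough bound on the scalability just described is given by Amdahl's law (a standard result, not from this handbook's text): if a fraction p of a program's work can be divided among n processors, the ideal speedup is 1/((1 - p) + p/n). A small sketch:

```python
def amdahl_speedup(p, n):
    """Ideal speedup for parallel fraction p (0 <= p <= 1) on n CPUs.
    The serial fraction (1 - p) still runs on a single processor and
    limits how much adding CPUs can help."""
    return 1.0 / ((1.0 - p) + p / n)
```

A fully serial program gains nothing from extra CPUs, a perfectly parallel one scales linearly, and real programs fall in between, which is why thread contention matters so much in SMP systems.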

    Virtual Memory. Providing the extensive memory resources required for most engineering software can be an expensive undertaking. Dynamic Random-Access Memory (DRAM) is too expensive to maintain an appropriate supply for every program used in a multitasking environment. In practice, much of the memory demand in a multitasking setting can be satisfied by caching some of the blocks of data ostensibly stored in main memory to a fast disk storage subsystem. These blocks of data are reloaded to main memory only when they are absolutely required, and this practice of paging memory to and from the disk is termed virtual memory management. In most common implementations of virtual memory, the paging scheme provides a level of independence of memory addressing between processes that is carefully implemented so that one process cannot corrupt the memory of another. Schemes that implement memory protection to prevent interapplication memory corruption are termed protected virtual memory management.

    Depending on demand for physical memory, virtual memory schemes may be a great help or a hindrance. While there are sophisticated paging algorithms available that are designed to avoid writing needed memory to disk, in practice, if there are enough different applications competing for memory, the relative disparity in speed between memory and disk subsystems may lead to very sluggish performance for applications whose memory resources have been written to the disk subsystem. In addition, multiprocessing architectures place further constraints on virtual memory performance in order to avoid corruption of memory by different threads running on different CPUs. Modern virtual memory management is an active area of research in computer science, but one empirical rule is still true: perhaps the best way to improve the performance of any virtual memory operating system is to add physical (real) memory!
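    The paging behavior described above can be illustrated with a toy demand-paging model using least-recently-used (LRU) replacement, one of many possible paging policies; counting page faults for different numbers of physical page frames shows concretely why adding real memory helps:

```python
from collections import OrderedDict

def count_page_faults(references, frames):
    """Simulate LRU demand paging for a page-reference string.
    references: sequence of page numbers touched by a program.
    frames: number of physical page frames available.
    Returns the number of page faults (each fault means a slow
    disk access in a real virtual memory system)."""
    resident = OrderedDict()  # resident pages, least recently used first
    faults = 0
    for page in references:
        if page in resident:
            resident.move_to_end(page)        # hit: mark recently used
        else:
            faults += 1                        # fault: fetch from disk
            if len(resident) >= frames:
                resident.popitem(last=False)   # evict the LRU page
            resident[page] = None
    return faults
```

With the same reference string, more frames never produce more faults, which is the empirical rule above in miniature.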

    Networking and Security. One of the most fundamental shifts in computing over the last decade has been the transition from disconnected individual computers to a distributed computing model characterized by networked workstations that support various remote processing models. Most modern operating systems support standard networking protocols that allow easy integration of different computers into local- and wide-area networks, and also permit sharing of resources among computers. Traditional networking functions (such as sharing files between different computers on the same network) have been augmented to encompass remote computing services, including sharing applications between networked computers (which represents a generalization of symmetric multiprocessing architectures from a single computer to a disparate network of connected computers).

    Because of the tremendous pace of changes in the field of computer networking, one of the most important features of any network operating system is adherence to standard networking protocols. Networking standards provide a portable implementation of networking function that effectively abstracts network operations, allowing existing networking applications to survive current and future changes in networking hardware and software. The most common current networking model is one promulgated by the International Standards Organization and termed the Open Systems Interconnect (OSI) reference model. The OSI model uses layers (ranging from low-level hardware to high-level application connections) to idealize networking function. Adherence to the OSI model permits operating systems to become insulated from improvements in networking hardware and software, and thus preserves operating system investment in the face of rapid technological improvements in the field of computer networking.

    Once an individual computer is connected to a network, a whole host of security issues arise pertaining to accessibility of data across the network. Secure operating systems must satisfy both internal (local to an individual computer) and global (remote access across a network) constraints to ensure that sensitive data can be protected from users who have no right to access it. Since many mechanical engineering applications involve the use of military secrets, adherence to appropriate security models is an essential component of choosing an operating system for individual and networked computers.

    There are many aspects to securing computer resources, including some (such as protected virtual memory schemes) that satisfy other relevant computer needs. In the setting of computer security, operating systems are classified according to criteria developed by the Department of Defense (DOD 5200.28-STD, December 1985). These DOD criteria provide for such features as secure logons (i.e., logging into a computer requires a unique user identifier and password), access control structures (which restrict access to computer resources such as files or volumes), and auditing information (which provides automated record keeping of security resources so as to help prevent and detect unauthorized attempts at gaining access to secure computer resources).

    Portability. Some operating systems (for example, MS/DOS, written in Intel 8080 assembly language) are inextricably tied to the characteristics of a particular hardware platform. Given the rapid pace of development in CPU hardware, tying an operating system to a particular family of processors potentially limits the long-term utility of that operating system. Since operating systems are computer software systems, there is no real obstacle to designing and implementing them in accordance with standard practice in software engineering; in particular, they can be made portable by writing them in high-level languages whenever possible.

    A portable operating system generally abstracts the particular characteristics of the underlying hardware platform by relegating all knowledge of these characteristics to a carefully defined module responsible for managing all of the interaction between the low-level (hardware) layer of the operating system and the overlying system services that do not need to know precise details of low-level function. The module that abstracts the low-level hardware layer is generally termed a hardware abstraction layer (HAL), and the presence of a HAL permits an operating system to be ported to various processors with relative ease. Perhaps the most common portable operating systems are UNIX and Windows NT. Both of these operating systems are commonly used in engineering applications, operate on a wide variety of different CPUs, and are almost entirely written in the procedural C language.
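    The HAL concept can be sketched as an abstract interface with processor-specific implementations behind it (the class and method names here are illustrative, not drawn from any actual operating system):

```python
from abc import ABC, abstractmethod

class HAL(ABC):
    """The interface that portable layers of the OS are written against.
    Only HAL implementations know hardware-specific details."""

    @abstractmethod
    def flush_cache(self):
        ...

class IntelHAL(HAL):
    def flush_cache(self):
        return "x86 cache-flush sequence"

class AlphaHAL(HAL):
    def flush_cache(self):
        return "Alpha cache-flush sequence"

def portable_os_service(hal):
    # High-level system code never needs to know which CPU it runs on;
    # porting the OS means supplying a new HAL subclass, nothing more.
    return f"service ran after: {hal.flush_cache()}"
```

The same high-level service runs unchanged on either processor family; only the HAL object passed in differs.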

    Classification of Representative Operating Systems

    Several operating systems commonly encountered in engineering practice are classified below in accordance with the definitions presented above. Note that some of these operating systems are presently disappearing from use, some are new systems incorporating the latest advances in operating system design, and some are in the middle of a potentially long life span.

    MS/DOS and Windows 3.x. The MS/DOS (Microsoft Disk Operating System) operating system was introduced in the 1980s as a low-level controlling system for the IBM PC and compatibles. Its architecture is closely tailored to that of the Intel 8080 microprocessor, which has been both an advantage (leading to widespread use) and a disadvantage (reliance on the 8080's arcane memory-addressing scheme has prevented MS/DOS from realizing effective virtual memory schemes appropriate for engineering computation). MS/DOS is a single-processor, single-tasking, single-threaded operating system with no native support for virtual memory, networking, or security. Despite these serious shortcomings, MS/DOS has found wide acceptance, primarily because the operating system is so simple that it can be circumvented to provide new and desirable functions. In particular, the simplicity of MS/DOS provides an operating system with little overhead relative to more complex multitasking environments: such low-overhead operating systems are commonly used in real-time applications in mechanical engineering for such tasks as process control, data acquisition, and manufacturing. In these performance-critical environments, the increased overhead of more complex operating systems is often unwarranted, unnecessary, or counterproductive.

    Microsoft Windows is an excellent example of how MS/DOS can be patched and extended to provide useful features that were not originally provided. Windows 3.0 and 3.1 provided the first widely used graphical user interface for computers using the Intel 80x86 processor family, and the Windows subsystem layers, which run on top of MS/DOS, also provided some limited forms of cooperative multitasking and virtual memory for MS/DOS users. The combination of MS/DOS and Windows 3.1 was an outstanding marketing success: an estimated 40 million computers eventually ran this combination worldwide. Although this operating system had some serious limitations for many engineering applications, it is widely used in the mechanical engineering community.

    VAX/VMS. Another successful nonportable operating system that has found wide use in engineering is VAX/VMS, developed by Dave Cutler at Digital Equipment Corporation (DEC) for the VAX family of minicomputers. VMS (Virtual Memory System) was one of the first commercial 32-bit operating systems that provided a modern interactive computing environment, with features such as multitasking, multithreading, multiprocessing, protected virtual memory management, built-in high-speed networking, and robust security. VMS is closely tied to the characteristics of the DEC VAX microprocessor, which has limited its use beyond that platform (in fact, DEC has created a software emulator for its current family of 64-bit workstations that allows them to run VMS without the actual VAX microprocessor hardware). But the VMS architecture and feature set are widely imitated in many popular newer operating systems, and the flexibility of this operating system was one of the main reasons that DEC VAXes became very popular platforms for midrange engineering computation during the 1980s.

    Windows NT. Windows NT is a scalable, portable, multitasking, multithreaded operating system that supports OSI network models, high-level DOD security, and protected virtual memory. The primary architect of Windows NT is Dave Cutler (the architect of VAX/VMS), and there are many architectural similarities between these two systems. Windows NT is an object-oriented operating system that supports the client-server operating system topology, and it is presently supported on a wide range of high-performance microprocessors commonly used in engineering applications. Windows NT provides a Windows 3.1 subsystem that runs existing Windows 3.1 applications within a more robust and crash-proof computational environment.

