
AIAA 01–0974

A Multi-Code-Coupling Interface for Combustor/Turbomachinery Simulations

Sriram Shankaran∗
Juan J. Alonso†
Stanford University, Stanford, CA 94305

May-Fun Liou‡
Nan-Suey Liu§
NASA Glenn Research Center, Cleveland, OH 44135

Roger Davis¶
United Technologies Research Center, East Hartford, CT 06108

39th AIAA Aerospace Sciences Meeting and Exhibit
January 8–11, 2001 / Reno, NV

For permission to copy or republish, contact the American Institute of Aeronautics and Astronautics, 1801 Alexander Bell Drive, Suite 500, Reston, VA 20191–4344


This paper describes the design, implementation and validation of a method to couple multiprocessor solvers whose solution domains share a common surface. Using Message Passing Interface (MPI) constructs, parallel communication pathways are established between various simulation codes. These pathways allow applications to exchange data, synchronize time integrations and reinitialize communication data structures when meshes change their relative positions. At an interface with another simulation code, applications request specific flow variables, typically for a ghost/halo layer of cells or nodes. Numerical estimates of these flow variables are provided by the simulation software on the other side of the interface through three-dimensional interpolation. With the aim of achieving conservative interfacing between applications, particular instances of the requested flow variables and interpolation stencils will be used for different problems. Communication tables are built for processes involved with the exchange of information, and all exchanges occur strictly between specific processes, thereby minimizing communication bottlenecks. This paradigm has been used to build a code-coupling interface for a three-dimensional combustor/turbine interaction simulation in which a new massively parallel computational fluid dynamic solution procedure for turbomachinery, called TFLO, has been coupled with an unstructured-grid, parallel procedure for combustors, called NCC. Numerical and physical issues regarding the exchange of information, as well as the coupling of physics-disparate analyses, will be discussed. Several development test cases have been used to ensure the soundness of the communication procedures. A multi-component simulation for a dump combustor/exit duct has been performed as a demonstration of the new interface.

Introduction

COMPUTATIONAL Fluid Dynamic (CFD) simulations are an essential and integral element in the design process of modern gas turbine jet engines, providing engineering predictions of aerodynamic performance, heat transfer, and flow behavior. Steady-state flow predictions are commonplace for problems ranging in size from the design of an individual compressor or turbine blade, to large sections of a complete component such as a combustor or low-pressure turbine. Entire-component unsteady or multi-component steady-flow simulations have not yet become routine in the design process because of the exceedingly large resource requirements to perform these analyses and the disparate flow physics of each component. However, multi-component interaction effects, such as combustor/turbine hot-streak migration or compressor/combustor instability, are of great interest, especially at off-design conditions, due to their adverse effects on performance, durability, and operability.

∗Doctoral Candidate, Stanford University
†Assistant Professor, Member AIAA
‡Aerospace Engineer
§Aerospace Engineer, Senior Member, AIAA
¶United Technologies Research Center

Copyright © 2001 by the American Institute of Aeronautics and Astronautics, Inc. No copyright is asserted in the United States under Title 17, U.S. Code. The U.S. Government has a royalty-free license to exercise all rights under the copyright claimed herein for Governmental Purposes. All other rights are reserved by the copyright owner.

Shared and distributed memory parallel computer systems and networks have helped to greatly extend the feasible size and reduce the solution time of large-scale gas-turbine design and analysis problems. Many new advances in computer hardware, network communications, and simulation software are required, however, to bring large-scale simulations into practical, everyday use. The design and analysis of gas turbine jet engines is not the only engineering or scientific arena that faces these bottlenecks due to overwhelming problem size and the limited computer systems that can handle them. As a result, the Department of Energy (DoE) has launched the Accelerated Strategic Computing Initiative (ASCI) to promote the development of massively parallel computer systems and the simulation software that can take advantage of them. As part of this initiative, the current effort has focused on developing and demonstrating the capability to simulate the steady or unsteady flow through multiple components of a gas-turbine engine. In addition, plans for more ambitious simulations which are currently underway are briefly mentioned.

In the following sections, a detailed description of the integration framework and an overview of the two codes, TFLO and NCC, are presented. Finally, results from coupling the two codes for a few model problems as well as a dump combustor/duct are presented.

Integration Framework

One of the crucial steps in the coupling of different simulation codes is the need to develop a framework which allows these codes to communicate with each other within a parallel computing environment. In the following sections we outline a method which aims at developing one such framework. This framework allows codes to request/provide the necessary data from/to other codes, synchronize time integration, and reset communication data structures for meshes in relative motion. The framework described here has been developed with the aim of coupling TFLO and NCC. However, it has been kept as general as possible and it should be capable of handling multiple code-interfacing problems. Another aspect of coupling multiple codes is to identify the numerical nature of the coupling process. There are different ways to arrive at a numerical scheme, and they can be placed in an increasing hierarchy of complexity:

• Loosely coupled systems, in which the presence of the interface can be treated by the individual codes as a boundary condition. This involves no explicit exchange of values between the participating codes.

• Moderately coupled systems, in which the primary or primitive variables at the interface are exchanged between codes.

• Tightly coupled systems, where in addition to the primary variables, the fluxes across the interface are also exchanged.

• Simultaneously coupled systems, where algorithmic variables required by the numerical schemes of the individual codes are exchanged.

The integration framework described in the following sections aims to produce moderately coupled systems, with the possibility of extension to tightly coupled systems.

Problem Description

The interface between different codes requires the transfer of information from one code to the other. Typically, for problems in turbomachinery, an Unsteady Reynolds-Averaged Navier-Stokes (URANS) code that handles the turbine/compressor requires information at the entrance/exit of the physical domain being simulated. This information is to be provided by the code that handles the combustion process, which can also be URANS-based or can use a Large Eddy Simulation (LES) approach. For both cell-centered and cell-vertex schemes, the values of the flow variables in a row of surrounding halo/ghost cells/nodes need to be updated with information from the simulation software handling the opposite side of the interface (see Figure 1). Hence, to manage the transfer of information, it is necessary to identify a ‘donor’ cell for each halo/ghost cell/node that requests information. If the identification of this ‘donor’ is performed in an algorithmically efficient manner, codes which use different solution methodologies can be coupled efficiently. In combustor/turbomachinery simulations, the grids on either side of the interface between the codes are mostly stationary. However, with emerging technologies, there is a possibility that in the future the first blade row of the turbine might be designed to be a set of rotating blades. In this situation, it will become necessary to recompute the ‘donor’ of every ghost/halo cell whenever the grids move relative to each other. It is also crucial to minimize the amount of information exchanged, so that any bottlenecks that may arise due to communication may be eliminated. This is of particular importance in large-scale parallel computing applications, where the cost of using the communication subsystem is high relative to the cost of actual computing.

In the following sections, we describe the important steps needed to set up a framework which allows for the integration of different simulation codes. Suggestions are made for the naming of the various components of the integration framework, as well as for the algorithmic details of the implementation.
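As an illustration of the donor search, the following sketch pairs a cheap bounding-box rejection test with an exact point-in-cell check. It is a hypothetical implementation, not code from either solver: `cell_contains`-style geometry is abstracted behind a function pointer supplied by the host code, and a production search would replace the linear scan with a spatial tree when the grids move every time step.

```c
#include <stddef.h>

typedef struct { double x, y, z; } Point;

typedef struct {
    Point bbox_min, bbox_max;  /* axis-aligned bounding box of the cell */
} CellBox;

/* Exact point-in-cell test supplied by the host solver (e.g., trilinear
 * inversion for hexahedra, barycentric test for tetrahedra). */
typedef int (*ContainsFn)(size_t cell_index, Point p, void *mesh);

/* Return the index of the donor cell for a requested location, or -1 if
 * the point is orphaned on this processor. */
ptrdiff_t find_donor(const CellBox *box, size_t ncells,
                     Point p, ContainsFn contains, void *mesh)
{
    for (size_t i = 0; i < ncells; ++i) {
        /* Cheap rejection test before the exact point-in-cell check. */
        if (p.x < box[i].bbox_min.x || p.x > box[i].bbox_max.x ||
            p.y < box[i].bbox_min.y || p.y > box[i].bbox_max.y ||
            p.z < box[i].bbox_min.z || p.z > box[i].bbox_max.z)
            continue;
        if (contains(i, p, mesh))
            return (ptrdiff_t)i;
    }
    return -1;  /* orphan: handled by extrapolation, as discussed later */
}
```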

Interface For Multi-Code Integration

The two important steps involved in developing this framework are:

• Initialization - This step sets up the communication tables which allow each processor to identify the processors to/from which it needs to send/receive information. For each code, the processors involved in the interface must also identify the cells/vertices which will be used to compute the information requested by the other code. Furthermore, it is necessary to perform this initialization step whenever the grids in either code move relative to each other, or when any of the grid systems are adaptively refined.


• Communication - Once the processors have gathered the requisite information, this second step is used whenever information has to be exchanged between the two codes. The communication step will be used repeatedly during the pseudo-time iterations of all participating simulation codes, and, therefore, extreme care is placed in making this step as efficient as possible.

For the sake of clarity, the following discussion assumes that the two codes to be interfaced are TFLO and NCC. However, no inherent assumptions are made in the methodology and, hence, one should be able to generalize this method to any pair of grid-based simulation codes.

Initialization Step

In order to facilitate the integration of component simulation codes into a single parallel computing environment, we will make extensive use of user-defined MPI communicators. The MPI standard allows for the creation of sets of processors which are grouped according to a commonality of the tasks to be performed. At the very least, every component code should be a member of a separate communicator. In addition, subgroups of processors within a simulation code may also belong to additional communicators if they perform a specific task which differentiates them from the other processors.

The main reason for the introduction of additional communicators is to restrict global communication operations to only the processors that need to participate in the communication. In addition, the enrollment of various processors into a specific communicator allows a simulation code to provide information to itself and other component codes without a priori knowledge of the functions that these codes may perform. Using such a communicator-based processor decomposition, certain MPI global communication commands (such as MPI_BCAST and MPI_GATHER) can be restricted to members of a particular processor subset.
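As a concrete illustration (a minimal sketch, not code from TFLO or NCC), each rank can enroll in its code's communicator with MPI_Comm_split; any collective subsequently called on that communicator involves only that code's processors. How a rank learns which code it runs is left to the launch mechanism and is an assumption here:

```c
#include <mpi.h>

enum { CODE_TFLO = 0, CODE_NCC = 1 };  /* illustrative color values */

/* Build this code's CODENAME_WORLD communicator. */
MPI_Comm make_code_world(int my_code /* CODE_TFLO or CODE_NCC */)
{
    MPI_Comm code_world;
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Ranks sharing a color form one communicator (TFLO_WORLD or
     * NCC_WORLD); the key preserves the ranks' relative order. */
    MPI_Comm_split(MPI_COMM_WORLD, my_code, world_rank, &code_world);
    return code_world;
}
```

A call such as MPI_Bcast(buf, n, MPI_DOUBLE, 0, code_world) then reaches only the ranks of that one simulation.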

Creation of New Communicators

To distinguish between the processors that are running TFLO and NCC, all processors running TFLO will enroll in the TFLO_WORLD communicator, while those that run NCC will become part of NCC_WORLD. Moreover, we shall make a further distinction within each component code. Those processors from TFLO that are involved in the communication across the interface will become part of a new communicator, TFLO_AEE_WORLD, and those from NCC that perform a complementary role will enroll in NCC_AEE_WORLD. The AEE portion of the names of the communicators stands for Arbitrary Eulerian-Eulerian, which represents the types of simulation codes we have thought of in the development of this environment. In general, a given simulation code will need to enroll its processors in a communicator called CODENAME_WORLD, and the subset of its processors that participate in the exchange of information across the interface will become part of the CODENAME_AEE_WORLD communicator. The processors in CODENAME_AEE_WORLD of one simulation communicate with their counterparts in other simulations through new communicators called CODENAME1_CODENAME2. These communicators are called inter-communicators in MPI. When coupling multiple codes, a suitable naming strategy can be used to identify the different inter-communicators. Note that this strategy can also be used to provide communication pathways between multiple instances of the same simulation.
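The inter-communicator itself can be created with MPI_Intercomm_create once both AEE groups exist. The sketch below assumes rank 0 of each AEE communicator acts as group leader and that each side knows the other leader's rank in MPI_COMM_WORLD; none of these conventions is prescribed by the paper.

```c
#include <mpi.h>

/* Build the CODENAME1_CODENAME2 inter-communicator (e.g., TFLO_NCC). */
MPI_Comm make_aee_intercomm(MPI_Comm my_aee_world,
                            int remote_leader_world_rank)
{
    MPI_Comm intercomm;
    const int tag = 99;  /* arbitrary; must match on both sides */

    /* Local leader: rank 0 of this code's AEE communicator. The remote
     * leader is addressed by its rank in the peer communicator, here
     * MPI_COMM_WORLD. */
    MPI_Intercomm_create(my_aee_world, 0, MPI_COMM_WORLD,
                         remote_leader_world_rank, tag, &intercomm);
    return intercomm;
}
```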

Identification of Processors at the Interface

Once the new interface communicators have been created, every processor in each simulation code must determine for itself whether it belongs to either one of these groups. This determination may be made based on a variety of criteria. For example, special boundary-condition flags may be specified in the input file of each simulation program. Alternatively, processors may conduct a series of geometric tests to determine whether any part of their mesh lies on an a priori defined interface surface. However, this choice is entirely left to the developers of each simulation program.

Although typically all processors participating in the communication across the interface will lie directly on the interface, it is possible that other processors that do not lie directly on the interface will also have to provide information. This situation may arise when the sizes of the cells on both sides of the interface vary greatly: the halo of a large cell may lie inside a processor that is not in direct contact with the interface. In the event that one or more of the required cells are not identified, the value of the requested data can be computed from the updated values of the neighboring cells/nodes in the local processor.
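Whatever criterion is used, the outcome folds naturally into the communicator machinery: ranks that pass the test join the AEE communicator, and the rest opt out with MPI_UNDEFINED. A minimal sketch, assuming the boundary-condition-flag approach:

```c
#include <mpi.h>

/* touches_interface: nonzero if any of this rank's mesh lies on the
 * interface, e.g. because an input-file flag marked one of its
 * boundaries. Non-participating ranks receive MPI_COMM_NULL. */
MPI_Comm make_aee_world(MPI_Comm code_world, int touches_interface)
{
    MPI_Comm aee_world;
    int rank;
    MPI_Comm_rank(code_world, &rank);
    MPI_Comm_split(code_world,
                   touches_interface ? 0 : MPI_UNDEFINED,
                   rank, &aee_world);
    return aee_world;
}
```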

Compilation of Information to Be Requested

Once the processors that are involved in the exchange of information are identified and properly enrolled into the new communicators, these processors need to compile a list of locations and variables for which they require information from the other code. For each location at which information is needed, the following pieces of information will be compiled:

• Processor ID : ID of the processor in which the requested location resides.

• Block/Cell/Node ID : Block/Cell/Node number in the local processor where the information provided by the other code will be stored. The format of this data depends greatly on the data structure used by the simulation code. In TFLO, for example, each processor may handle an arbitrary number of blocks from the multiblock mesh. However, a one-dimensional array is used as the underlying data structure, and the Cell/Node ID then corresponds to the position in this one-dimensional array.

• x,y,z location : Cartesian coordinates at which interpolated data is required.

• Information Request Flag : This flag is an integer value which encodes the specific variables, their number, and the order in which these variables will be provided. For example, TFLO may request ρ, k, and ω from NCC, while NCC may only need p, ρu, and T.
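Gathered into a record, one plausible layout for a request-list entry looks like the following; the field names and types are illustrative, not taken from either code:

```c
/* One entry of the request list compiled by an interface processor. */
typedef struct {
    int    proc_id;      /* rank on which the requesting cell/node lives  */
    int    block_id;     /* block number (for multiblock codes like TFLO) */
    long   cell_id;      /* position in the code's one-dimensional array  */
    double x, y, z;      /* Cartesian location where data is needed       */
    int    request_flag; /* encodes which variables, how many, what order */
} RequestEntry;
```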

Each processor in TFLO_AEE_WORLD and NCC_AEE_WORLD compiles this list, which is then broadcast to all of its counterpart processors, and only to them. The initial exchange of information is done in three ‘steps’:

• During the first ‘step’, each and every processor in the TFLO_AEE_WORLD and NCC_AEE_WORLD groups will broadcast to the other group the complete list of locations and additional information described above for that specific processor.

• The receiving processors must then, in a second ‘step’, sort through all the entries in all the lists provided during the broadcast operation and identify those that they can provide information for. Internally, each processor will generate information regarding the interpolation weights necessary to provide the information requested, and will store this information using a data structure appropriate to the code in question.

• In the final, third ‘step’, all processors must communicate with those processors they can provide data to and inform them of the specific requests that can be fulfilled. In theory, all data requests by a given processor will be fulfilled by a combination of processors in the group on the other side of the interface. However, there may be situations in which this is not true; orphaned cells may occur, for which the error checking and extrapolation procedures described below will be necessary.
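The first ‘step’ maps cleanly onto MPI's extended collectives: on an inter-communicator, an all-gather delivers to each rank the concatenated contributions of the remote group, which is exactly the every-to-every-remote broadcast described above. A sketch, assuming the request lists have been flattened into arrays of doubles:

```c
#include <mpi.h>
#include <stdlib.h>

/* Step one: publish my request list to every rank of the remote group
 * and collect all of theirs. The caller frees *remote_lists. */
void exchange_request_lists(MPI_Comm intercomm,
                            const double *my_list, int my_len,
                            double **remote_lists, int *remote_total)
{
    int nremote;
    MPI_Comm_remote_size(intercomm, &nremote);

    /* Learn each remote rank's list length first. */
    int *len  = malloc(nremote * sizeof *len);
    int *disp = malloc(nremote * sizeof *disp);
    MPI_Allgather(&my_len, 1, MPI_INT, len, 1, MPI_INT, intercomm);

    int total = 0;
    for (int i = 0; i < nremote; ++i) { disp[i] = total; total += len[i]; }

    double *buf = malloc((total > 0 ? total : 1) * sizeof *buf);
    MPI_Allgatherv(my_list, my_len, MPI_DOUBLE,
                   buf, len, disp, MPI_DOUBLE, intercomm);

    *remote_lists = buf;   /* steps two and three sort through this */
    *remote_total = total;
    free(len);
    free(disp);
}
```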

Identifying ‘Donor’ Cells and Send Buffers

Each processor that receives the complete list broadcast in the first ‘step’ builds a different list for each of the processors to which it will provide interpolated data during the communication step. Hence, each processor scans the entries from all the lists that are broadcast to it and determines those entries for which it can provide information. The process of identifying the entries for which each processor can provide information is left to the developers of the respective simulation codes. This task constitutes the second ‘step’ described in the previous section. While identifying the entries it is responsible for, each processor builds a separate list for each processor on the other side of the interface to which it will provide information. Each entry in this list will contain the following:

• Processor ID : ID of the processor to which the data is to be sent.

• Block/Cell/Node ID : Block/Cell/Node number on the remote processor to which the data is sent.

• Data : Interpolated data that will be used to update the values in the Cell/Node ID of the processor that requested the data.

Each processor in TFLO_AEE_WORLD and NCC_AEE_WORLD sends this information to the appropriate processors on the other side of the interface. The receiving processors map this information to an array that is stored locally in each processor, to determine the cells/nodes that will be updated in the communication step.

Step three of the above process is illustrated through the following example. Suppose the list received by processor 15 (say), which is in TFLO_AEE_WORLD, from processor 48 (say), which is in NCC_AEE_WORLD, is:

15, 1001, p
15, 2035, ρ, u, v, p
15, 5019, ρ

The local array that is built by processor 15 will be:

48, 1001, Information Request Flag for p
48, 2035, Information Request Flag for ρ, u, v, p
48, 5019, Information Request Flag for ρ

Each receiving processor builds this local array, which contains entries in the order in which they will be received from the sending processor. This local array is built to avoid transmitting the cell/node ID during the communication step. Hence, during the communication step, each processor builds only a list of interpolated data that it communicates to the processor that needs the information, in the exact order specified above. The local processors then determine the cells/nodes to be updated using the pointer list that was compiled during the initialization step.


As was mentioned in the previous section, there might be situations in which some cells/nodes on a processor requesting information do not find a donor among the processors on the other side of the interface. The local arrays can easily determine the identity of these orphaned cells/nodes by direct comparison between the request list and the lists sent by the processors on the other side of the interface. The simplest way to handle this scenario is to use an extrapolation routine to update the values in such a cell/node from its neighboring values.

The processors in TFLO_AEE_WORLD and NCC_AEE_WORLD also compile a local array which allows them to build the send buffers during the communication step. This list contains the following information:

• Processor ID : ID of the processor to which the requested entry is to be sent.

• Interpolation weights : These entries are specific to the nature of the interpolation and are to be decided by the developers of the simulation code. This entry should be programmed to allow for ease of use of different interpolation stencils.
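Together with the ‘donor’ identification, these weights let each processor fill its send buffer without further handshaking. The sketch below assumes an eight-point (trilinear) stencil purely for illustration; the paper leaves the stencil choice to each code's developers:

```c
/* Donor stencil for one requested location. */
typedef struct {
    int    donor[8];   /* local indices of the stencil cells/nodes */
    double weight[8];  /* interpolation weights, summing to one    */
} Stencil;

/* Fill the send buffer for one destination processor. q[v][i] holds
 * variable v at local cell i; values are packed in the variable order
 * agreed upon during initialization. */
void pack_send_buffer(const Stencil *st, int nentries,
                      const double *const *q, int nvars, double *sendbuf)
{
    int k = 0;
    for (int e = 0; e < nentries; ++e) {
        for (int v = 0; v < nvars; ++v) {
            double val = 0.0;
            for (int s = 0; s < 8; ++s)
                val += st[e].weight[s] * q[v][st[e].donor[s]];
            sendbuf[k++] = val;  /* position k is fixed by the init step */
        }
    }
}
```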

Communication Step

In the communication step, each processor compiles a list of interpolated data to be sent. Hence, the send buffers during the communication step contain only the following information for each processor:

• Data : Interpolated data that will be used to update the values in the Block/Cell/Node ID of the processor that requested the data.

Only the actual data is needed here, since the initialization ‘step’ described in previous sections has carefully created lists of pointers that translate the information in an ordered communication buffer into specific memory locations of the ghost/halo cells of the receiving processor. To reiterate, each processor which receives this information then uses the local arrays that it built during the initialization step to determine which cells/nodes need to be updated.
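Putting the pieces together, the communication step reduces to a paired non-blocking exchange followed by a scatter through the pointer lists. This is a sketch of one plausible realization, with illustrative names; only the interpolated values travel over the wire:

```c
#include <mpi.h>
#include <stdlib.h>

/* One repeated communication step: post receives and sends for every
 * peer rank, wait, then scatter each ordered buffer into the halo
 * storage via the pointer list built at initialization. */
void exchange_interface_data(MPI_Comm intercomm,
                             int npeers, const int *peer,
                             double *const *sendbuf, const int *nsend,
                             double *const *recvbuf, const int *nrecv,
                             const long *const *halo_id,
                             double *halo_values)
{
    MPI_Request *req = malloc(2 * npeers * sizeof *req);

    for (int p = 0; p < npeers; ++p) {
        MPI_Irecv(recvbuf[p], nrecv[p], MPI_DOUBLE, peer[p], 0,
                  intercomm, &req[p]);
        MPI_Isend(sendbuf[p], nsend[p], MPI_DOUBLE, peer[p], 0,
                  intercomm, &req[npeers + p]);
    }
    MPI_Waitall(2 * npeers, req, MPI_STATUSES_IGNORE);

    /* No cell IDs were transmitted: position k of each buffer maps to
     * the halo location recorded during initialization. */
    for (int p = 0; p < npeers; ++p)
        for (int k = 0; k < nrecv[p]; ++k)
            halo_values[halo_id[p][k]] = recvbuf[p][k];

    free(req);
}
```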

Overview of TFLO

The unsteady Reynolds-Averaged Navier-Stokes equations are solved using a cell-centered discretization on arbitrary multiblock meshes. The solver is parallelized using domain decomposition, an SPMD (Single Program Multiple Data) strategy, and the Message Passing Interface (MPI) Standard.1

The solution procedure is based on efficient explicit modified Runge-Kutta methods with several convergence acceleration techniques such as multigrid, residual averaging, and local time-stepping. These techniques, multigrid in particular, provide excellent numerical convergence and fast solution turnaround. Turbulent viscosity is computed from a k–ω two-equation turbulence model. The dual-time stepping technique2–4 is used for time-accurate simulations that account for the relative motion of rotors and stators as well as other sources of flow unsteadiness.

The multiblock strategy facilitates the treatment of arbitrarily complex geometries using a series of structured blocks with point-to-point matching at their interfaces. This point-to-point matching ensures global conservation of the flow variables. The structure of the mesh is specified via a connectivity file which allows for arbitrary orientations of the blocks. Two layers of halo cells are used for inter-block information transfer, and an efficient communication scheme is implemented for the halo-cell data structures. The load of each processor is balanced on the basis of a combination of the amount of computation and communication that each processor performs. Communication of halo-cell values is conducted at every stage of the Runge-Kutta integration and at every level of the multigrid cycle in order to guarantee fast convergence rates. A general and parallel-efficient procedure has been developed to handle the inter-blade-row interface models for multistage turbomachinery simulations.
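For reference, the dual-time stepping approach cited above wraps the explicit Runge-Kutta/multigrid machinery around an implicit physical-time discretization. In its common second-order (BDF2) form, written here as a generic sketch rather than TFLO's exact scheme, each physical step solves

```latex
\frac{\partial w}{\partial \tau}
  + \frac{3\,w^{n+1} - 4\,w^{n} + w^{n-1}}{2\,\Delta t}
  + R\!\left(w^{n+1}\right) = 0 ,
```

where the pseudo-time τ is marched to a steady state with the convergence-acceleration techniques above, so that w at level n+1 satisfies the implicit unsteady equations.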

Overview of NCC

The National Combustion Code (NCC)5 is an unstructured-mesh solver for the solution of the time-dependent compressible Navier-Stokes equations with turbulent combustion. A finite-volume, cell-centered scheme is employed with an explicit four-stage Runge-Kutta algorithm to advance the solution. Local pseudo-time stepping and residual smoothing are used to accelerate convergence. Turbulence closure is obtained via either a high- or a low-Reynolds-number k–ε model. A Lagrangian scheme based on particle tracking and the dilute-spray approximation is used to solve the liquid-phase equations. NCC has been run on various massively parallel platforms.

Choice of Variables

The location and choice of variables that are exchanged between the participating codes depend on how we wish to model the coupled systems at hand. As mentioned earlier, in this approach we aim to build a set of moderately coupled systems with the possibility of extension to tightly coupled systems.

Our experiments with coupling TFLO and NCC revealed that the implementation specifics of each code also play an important role in determining the set of variables. Differing assumptions for the material properties of air (R, Cp, γ) in the two codes produce differing states of the gas at the interface. This makes it impossible to derive a set of variables which maintains continuity in all variables of interest, namely density, pressure, temperature, velocity, and energy. Furthermore, NCC and TFLO model the dimensional and the non-dimensional forms of the governing equations, respectively. This demanded that each code have knowledge of the non-dimensionalization procedure used in the other code. When the conditions of the flow warrant a turbulent simulation, variables that enable the estimation of the turbulent viscosity need to be exchanged. Differing turbulence models in the two codes required the identification of a new set of turbulence variables to be exchanged at the interface. Another factor that had to be accounted for was the disparate numerics in the two codes. For example, the implementation of pre-conditioning in NCC demands knowledge of the minimum velocity in the whole computational domain, and hence this had to be included in the set of variables exchanged between the two codes.

To resolve these issues, a choice of variables was adopted that allows the primitive variables to be continuous across the interface while allowing for a jump in the energy; it has produced the best set of results for the problems that were tested. Specifically, in the following analysis, the quantities exchanged between the two codes were ρ, ρu, ρv, ρw, T, and p. Note that this set provides continuity in the primitive variables, but there will be a jump in the energy due to the differing methods of estimating the energy. This set of variables loosely couples the systems in question. Extensions can be made to this set to include the fluxes through the appropriate cell faces on either side of the interface. However, this demands knowledge of the intersection of the planar grids from either side of the interface, a challenging task when coupling unstructured and structured grids. The above set of variables was requested for the cell centers of the two layers of face halo cells in TFLO, and for the cell- and face-center values of the one layer of halo cells in NCC. For turbulent flow simulations, in addition to the set of variables mentioned above, the turbulent kinetic energy and the turbulent viscosity were exchanged between the two codes. However, this did not provide satisfactory answers. Further research is required to determine an optimal set.
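The energy jump can be seen directly from how each code reconstructs total energy from the exchanged primitives. For a perfect gas (a sketch of the usual relation; each code's actual state model may differ), the total energy per unit volume is

```latex
\rho E \;=\; \frac{p}{\gamma - 1} \;+\; \tfrac{1}{2}\,\rho\,(u^{2} + v^{2} + w^{2}) ,
```

so identical ρ, ρu, ρv, ρw, and p on both sides of the interface still produce different ρE whenever the two codes assume different values of γ (or R and Cp).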

Results

The results from using this integration module to couple TFLO and NCC for a few model problems are presented in this section.

Coupling TFLO and TFLO

Inviscid Transonic Flow over a Bump

The inviscid transonic flow over a two-dimensional bump was modeled by dividing the domain into two equal regions. The two sub-domains form an interface at the maximum height of the bump (Figure 2). The integration module was used to exchange information across the interface. The inlet Mach number was held at 0.675 and the ratio of the back pressure to the inlet total pressure was held at 0.737. The individual blocks are structured grids of dimensions 32 × 17 × 9. The pressure contours of the steady-state flow field are shown in Figure 3(a). The contours are fairly continuous through the interface, suggesting that no spurious disturbances were introduced by the coupling process. The convergence histories of the two codes were also not significantly affected by the coupling process.

Coupling NCC and NCC

Inviscid Transonic Flow over a Bump

The same procedure used to couple TFLO to TFLO was repeated with NCC. The grids and the inlet and exit conditions were the same as those used in coupling TFLO to TFLO. The pressure contours obtained from this coupling process are shown in Figure 3(b). Note that the contours are continuous through the interface.

Coupling NCC and TFLO

Inviscid Transonic Flow over a Bump

The flow over the bump was now modeled by coupling TFLO and NCC. NCC was used on the left half of the bump and TFLO on the downstream side. The inlet Mach number and exit conditions were the same as before. The pressure contours are shown in Figure 4. Again, the contours are fairly continuous through the interface. A few of the contours are not strictly continuous, which could be resolved by a more optimal set of variables exchanged between TFLO and NCC. Also, note that the ranges of the color scales are slightly different between the two figures.

Dump Combustor/Duct

The inviscid flow through a backward-facing circular channel is modeled by coupling NCC and TFLO. NCC simulates the region within the dump combustor and TFLO handles the flow through the duct (Figure 5). NCC uses an unstructured tetrahedral grid (13,922 elements) and TFLO uses a multi-block structured grid (5 blocks, each of dimension 17 × 17 × 17). The grid for the whole geometry is shown in Figure 5. The inlet velocity to the dump combustor was fixed at 10 m/s and the back pressure was held at 1 atm. The velocity vectors are shown in Figure 6. The uniform flow downstream of the dump combustor passes through the interface and exits through the duct. There are no noticeable glitches near the interface, suggesting that the numerical errors introduced by the interface are negligible.

Conclusions

The development of a framework to integrate multiple simulations was outlined in this paper. The framework was tested for a few model problems by coupling the participating codes (TFLO and NCC) to one another and to themselves. The computational results from these experiments validate the implementation of the interface code. Further research needs to be done to identify a general ‘rule’ for determining an optimal set of flow variables for the participating codes to exchange. Ongoing work is focused on coupling turbulent simulations, and future work will aim at coupling a turbulent combustion simulation to a turbulent flow simulation in the turbine.

Acknowledgements

The authors would like to thank the U.S. Department of Energy (DoE) for its generous support under the ASCI Program. We would also like to thank Dr. Jixian Yao, Stanford University, and Dr. T. H. Shih, NASA Glenn Research Center, for generating the TFLO and NCC grids, respectively, for the dump combustor, and Dr. Jeff Modor for the valuable discussions on the NCC integration framework. The authors also wish to thank United Technologies Research Center and Pratt and Whitney for their support.

References

1. Yao, J., Jameson, A., Alonso, J. J., and Liu, F., “Development and Validation of a Massively Parallel Flow Solver for Turbomachinery Flows,” AIAA Paper 00-0882, Reno, NV, January 2000.

2. Jameson, A., “Time Dependent Calculations Using Multigrid, with Applications to Unsteady Flows Past Airfoils and Wings,” AIAA Paper 91-1596, AIAA 10th Computational Fluid Dynamics Conference, Honolulu, HI, June 1991.

3. Alonso, J. J., Martinelli, L., and Jameson, A., “Multigrid Unsteady Navier-Stokes Calculations with Aeroelastic Applications,” AIAA Paper 95-0048, AIAA 33rd Aerospace Sciences Meeting and Exhibit, Reno, NV, 1995.

4. Belov, A., Martinelli, L., and Jameson, A., “Three-Dimensional Computations of Time-Dependent Incompressible Flows with an Implicit Multigrid-Driven Algorithm on Parallel Computers,” Proceedings of the 15th International Conference on Numerical Methods in Fluid Dynamics, Monterey, CA, June 1996.

5. Liu, N.-S. and Quealy, A., “A Multidisciplinary Design/Analysis Tool for Combustion,” NASA CP 1999-208757, NASA HPCCP/CAS Workshop Proceedings, Cleveland, OH, January 1999.


Fig. 1 Grid structures for TFLO and NCC

Fig. 2 Section of the grid


Fig. 3 Transonic Bump: a) TFLO coupled to TFLO, b) NCC coupled to NCC

Fig. 4 NCC coupled to TFLO, a) NCC, b) TFLO


Fig. 5 NCC coupled to TFLO, grids for the dump combustor/duct


Fig. 6 NCC coupled to TFLO, a) NCC, b) TFLO
