
Bechtel Technology Journal
An Independent Analysis of Current Technology Issues

Volume 1, No. 1
December 2008

www.bechtel.com

Major Offices In:

Brisbane, Australia
Frederick, Maryland, USA
Houston, Texas, USA
London, England, UK
Montreal, Canada
New Delhi, India
San Francisco, California, USA
Santiago, Chile
Shanghai, China
Taipei, Taiwan, ROC


Bechtel Global Business Units

Bechtel Systems & Infrastructure, Inc. (BSII)
BSII (US Government Services) engages in a wide range of government and civil infrastructure development, planning, program management, integration, design, procurement, construction, and operations work in defense, demilitarization, energy management, telecommunications, and environmental restoration and remediation.

Civil
Civil is a global leader in developing, managing, and constructing a wide range of infrastructure projects from airport, rail, and highway systems to regional development programs; from ports, bridges, and office buildings to theme parks and resorts.

Communications
Communications integrates mobilization speed and a variety of disciplines, experience, and scalable resources to quickly and efficiently deliver end-to-end deployment services for wireless, wireline, and other telecommunications facilities around the world.

Mining & Metals (M&M)
Mining & Metals excels at completing logistically challenging projects—often in remote areas—involving ferrous, nonferrous, precious, and light metals, as well as industrial minerals, on time and within budget.

Oil, Gas & Chemicals (OG&C)
Oil, Gas & Chemicals has the experience with a broad range of technologies and optimized plant designs that sets Bechtel apart as a worldwide leader in designing and constructing oil, gas, petrochemical, LNG, pipeline, and industrial facilities.

Power
Power is helping the world to meet—in ways no other company can match—an ever-greater energy demand by providing services for existing plants, by designing and constructing fossil- and nuclear-fueled electric generation facilities incorporating the latest technologies, and by taking the initiative in implementing new and emerging energy technologies.

Bechtel Fellows
Chosen for their substantial technical achievement over the years, the Bechtel Fellows advise senior management on questions related to their areas of expertise, participate in strategic planning, and help disseminate new technical ideas and findings throughout the organization.

Prem Attanayake, PhD; Amos Avidan, PhD; August Benz; Siv Bhamra, PhD; Peter Carrato, PhD; Doug Elliot, PhD; Angelos Findikakis, PhD; Benjamin Fultz; Orhan Gürbüz, PhD; William Imrie; Joe Litehiser, PhD; Jake MacLeod; Sanj Malushte, PhD; Cyrus B. Meher-Homji; Ram Narula; Farhang Ostadan, PhD; Stew Taylor, PhD; Linda Trocki, PhD; Ping Wan, PhD; Fred Wettling


Bechtel Technology Journal
TECHNOLOGY PAPERS

Volume 1, No. 1
December 2008


Contents

Foreword v

Editorial vii

BECHTEL SYSTEMS & INFRASTRUCTURE, INC. (BSII)

Seismic Soil Pressure for Building Walls — An Updated Approach 3
Farhang Ostadan, PhD

Integrated Seismic Analysis and Design of Shear Wall Structures 13
Thomas D. Kohli; Orhan Gürbüz, PhD; and Farhang Ostadan, PhD

Structural Innovation at the Hanford Waste Treatment Plant 23
John Power, Mark Braccia, and Farhang Ostadan, PhD

CIVIL

Systems Engineering — The Reliable Method of Rail System Delivery 33
Samuel Daw

Simulation-Aided Airport Terminal Design 43
Michel A. Thomet, PhD, and Farzam Mostoufi

Safe Passage of Extreme Floods — A Hydrologic Perspective 49
Samuel L. Hui; André Lejeune, PhD, Université de Liège, Belgium; and Vefa Yucel, National Security Technologies, LLC

COMMUNICATIONS

FMC: Fixed-Mobile Convergence 57
Jake MacLeod and S. Rasoul Safavian, PhD

The Use of Broadband Wireless on Large Industrial Project Sites 77
Nathan Youell

Desktop Virtualization and Thin Client Options 91
Brian Coombe

MINING & METALS (M&M)

Computational Fluid Dynamics Modeling of the Fjarðaál Smelter Potroom Ventilation 101
Jon Berkoe, Philip Diwakar, Lucy Martin, Bob Baxter, and C. Mark Read; Patrick Grover, Alcoa; and Don Ziegler, PhD, Alcoa Primary Metals

Long-Distance Transport of Bauxite Slurry by Pipeline 111
Terry Cunningham


OIL, GAS & CHEMICALS (OG&C)

World’s First Application of Aeroderivative Gas Turbine Drivers for the ConocoPhillips Optimized Cascade® LNG Process 125
Cyrus B. Meher-Homji, Tim Hattenbach, and Dave Messersmith; Hans P. Weyermann, Karl Masani, and Satish Gandhi, PhD, ConocoPhillips Company

Innovation, Safety, and Risk Mitigation via Simulation Technologies 141
Ramachandra Tekumalla and Jaleel Valappil, PhD

Optimum Design of Turbo-Expander Ethane Recovery Process 157
Wei Yan, PhD; Lily Bai, PhD; Jame Yao, PhD; Roger Chen, PhD; and Doug Elliot, PhD, IPSI LLC; and Stanley Huang, PhD, Chevron Energy Technology Company

POWER

Controlling Chemistry During Startup and Commissioning of Once-Through Supercritical Boilers 171
Kathi Kirschenheiter, Michael Chuk, Colleen Layman, and Kumar Sinha

CO2 Capture and Sequestration Options — Impact on Turbomachinery Design 181
Justin Zachary, PhD, and Sara Titus

Recent Industry and Regulatory Developments in Seismic Design of New Nuclear Power Plants 201
Sanj Malushte, PhD; Orhan Gürbüz, PhD; Joe Litehiser, PhD; and Farhang Ostadan, PhD

© 2008 Bechtel Corporation. All rights reserved.

Bechtel Corporation welcomes inquiries concerning the BTJ. For further information or for permission to reproduce any paper included in this publication in whole or in part, please email us at [email protected].

Although reasonable efforts have been made to check the papers included in the BTJ, this publication should not be interpreted as a representation or warranty by Bechtel Corporation of the accuracy of the information contained in any paper, and readers should not rely on any paper for any particular application of any technology without professional consultation as to the circumstances of that application. Similarly, the authors and Bechtel Corporation disclaim any intent to endorse or disparage any particular vendors of any technology.


The BTJ is also available on the Web at www.bechtel.com/. (Enter BTJ in the search field to find the link.)

TRADEMARK ACKNOWLEDGMENTS

All brand, product, service, and feature names and trademarks mentioned in this Bechtel Technology Journal are the property of their respective owners. Specifically:

3GPP is a trademark of the European Telecommunications Standards Institute (ETSI) in France and other jurisdictions.

Alcatel-Lucent is a trademark of Alcatel-Lucent, Alcatel, and Lucent Technologies.

Alvarion is a registered trademark of Alvarion Ltd.

AMD is a trademark of Advanced Micro Devices, Inc.

ANSYS and FLUENT are registered trademarks of ANSYS, Inc., or its subsidiaries in the United States or other countries. [ICEM CFD is a trademark used by ANSYS, Inc. under license.]

Aspen HYSYS is a registered trademark of Aspen Technology, Inc.

cdma2000 is a registered trademark and certification mark of the Telecommunications Industry Association (TIA-USA).

Cisco Systems and Aironet are registered trademarks of Cisco Systems, Inc., and/or its affiliates in the United States and certain other countries.

ConocoPhillips Optimized Cascade is a registered trademark of ConocoPhillips.

Econamine FG is a service mark of Fluor Corporation.

Enhanced NGL Recovery Process is a service mark of IPSI LLC (Delaware Corporation).

Glenium is a registered trademark of Construction Research & Technology GMBH.

Huawei is a registered trademark of Huawei Corporation or its subsidiaries in the People’s Republic of China and other countries (regions).

IBM is a registered trademark of International Business Machines Corporation in the United States.

Intel is a registered trademark of Intel Corporation in the US and other countries.

Java is a trademark of Sun Microsystems, Inc., in the United States and other countries.

Microsoft and Excel are registered trademarks of the Microsoft group of companies.

Motorola is registered in the U.S. Patent and Trademark Office by Motorola, Inc.

Nortel is a trademark of Nortel Networks.

Rectisol is a registered trademark of Linde AG.

Selexol is a trademark owned by UOP LLC, a Honeywell Company.

SmartPlant is a registered trademark of Intergraph Corporation.

Tecore Networks is a registered trademark with the U.S. Patent and Trademark Office.

UNIX is a registered trademark of The Open Group.

Wi-Fi is a registered trademark and certification mark of the Wi-Fi Alliance.

WiMAX and WiMax Forum are trademarks of the WiMAX Forum; WiMAX is also the Forum’s certification mark.

Bechtel Technology Journal

Volume 1, Number 1

ADVISORY BOARD

Thomas Patterson – Principal Vice President and Corporate Manager of Engineering
Ram Narula – Chief Technology Officer, Bechtel Power; Chair, Bechtel Fellows

Jake MacLeod – Principal Vice President, Bechtel Corporation; Chief Technology Officer, Bechtel Communications; Bechtel Fellow

S. Rasoul Safavian, PhD – Vice President, Technology and Network Planning, Bechtel Communications

EDITORIAL TEAM

Pam Grimes – Production Coordinator
Barbara Oldroyd – Coordinating Technical Editor
Richard Peters – Senior Technical Editor
Teresa Baines – Senior Technical Editor
Peggy Dufour – Technical Editor
Brenda Thompson – Technical Editor
Drake Ogilvie – Technical Editor
Zelda Laskowsky – Technical Editor
Jan Davis – Technical Editor
Ruthanne Evans – Technical Editor
Brenda Goldstein – Technical Editor
JoAnn Ugolini – Technical Editor
Ann Miller – Technical Editor

GRAPHICS/DESIGN TEAM

Keith Schools – Graphic Design
Andy Johnson – Graphic Design
John Cangemi – Graphic Design
David Williams – Graphic Design
Joe Kelly – Graphic Design
Allison Levenson – Graphic Design
Matt Twain – Graphic Design
Matthew Long – Graphic Design
Michael Wescott – Graphic Design
Janelle Cataldo – Graphic Design
Luke Williams – Graphic Design
Kim Catterton – Graphic Design


EDITORIAL BOARD

S. Rasoul Safavian, PhD – Editor-in-Chief
Farhang Ostadan, PhD – BSII GBU Editor
Siv Bhamra, PhD – Civil GBU Editor
S. Rasoul Safavian, PhD – Communications GBU Editor
Bill Imrie – M&M GBU Editor
Cyrus B. Meher-Homji – OG&C GBU Editor
Sanj Malushte, PhD – Power GBU Editor

Foreword

It is our great pleasure to present you with this inaugural issue of the Bechtel Technology Journal (BTJ). The BTJ’s compilation of technical papers addresses current issues and technology advancements from each of Bechtel’s six Global Business Units (GBUs): Bechtel Systems & Infrastructure, Inc.; Civil; Communications; Mining & Metals; Oil, Gas & Chemicals; and Power.

The objective of the BTJ is to provide our customers and colleagues a fresh look at technical and operational advances that are of prime interest to the various industries we serve. This publication is a logical extension of our attempts, over the years, to look over the horizon to identify relevant issues that pose unique challenges and require new and innovative technical solutions.

Given the complexity of the arenas in which we do business, it is not difficult to identify numerous issues; the real challenge is to prioritize them relative to their impact on performance and return on investment. Therefore, we challenge our technical experts to collaborate with their counterparts around the world to define the critical issues insofar as possible. We then task them to address these issues from a technical perspective. Judging by the response to their past papers and presentations, we have experienced a high degree of success in many areas.

The papers in the BTJ are grouped by business unit, although many apply to more than one. The primary authors are from their indicated Bechtel business unit or an affiliate; some co-authors are from our customer organizations.

We want to take this opportunity to thank the authors and co-authors for helping make the BTJ a reality, and we certainly hope that you find their contributions of value. If you have an idea for a future paper, please feel free to contact me at [email protected]. Your suggestions are always welcome.

Sincerely,

Ram Narula
Chief Technology Officer, Bechtel Power
Chair, Bechtel Fellows


Editorial


I am happy to bring to you our very first issue of the Bechtel Technology Journal (BTJ). Some of you may already be familiar with the Bechtel Communications Technical Journal (BCTJ) that has been produced since 2002. The BCTJ focuses on our Communications business and related areas, whereas the BTJ has been developed with a much broader focus, encompassing all six of Bechtel’s Global Business Units (GBUs): Bechtel Systems & Infrastructure, Inc.; Civil; Communications; Mining & Metals; Oil, Gas & Chemicals; and Power.

Following in the footsteps of the BCTJ, we have modeled the BTJ to address leading technical, operational, business, and regulatory issues relevant to the different Bechtel GBUs. The papers selected address issues of current and future interest to their corresponding industries in general and our valued customers in particular. New technologies and trends are highlighted in papers such as Structural Innovation at the Hanford Waste Treatment Plant by Power et al. New regulatory concerns are discussed in other papers, such as Recent Industry and Regulatory Developments in Seismic Design of New Nuclear Power Plants by Malushte et al.

I hope you find this first edition of the BTJ informative and useful. As always, I look forward to your comments and contributions. I would also like to take this opportunity to wish everyone a very happy, prosperous, and safe new year!

Happy Reading!

Dr. S. Rasoul Safavian
Vice President, Technology and Network Planning, Bechtel Communications
Editor-in-Chief


BSII Technology Papers

Seismic Soil Pressure for Building Walls — An Updated Approach 3
Farhang Ostadan, PhD

Integrated Seismic Analysis and Design of Shear Wall Structures 13
Thomas D. Kohli; Orhan Gürbüz, PhD; and Farhang Ostadan, PhD

Structural Innovation at the Hanford Waste Treatment Plant 23
John Power, Mark Braccia, and Farhang Ostadan, PhD

BSII — Waste Treatment Plant
Bechtel is providing engineering, procurement, and construction of this first-of-a-kind nuclear and hazardous chemical processing facility for the US Department of Energy in Richland, Washington.

SEISMIC SOIL PRESSURE FOR BUILDING WALLS—AN UPDATED APPROACH

Farhang Ostadan, PhD
[email protected]

Originally Issued: August 2005
Updated: December 2008

Abstract—The Mononobe-Okabe (M-O) method of predicting dynamic earth pressure developed in the 1920s in Japan continues to be widely used despite many criticisms and its limitations. The method was developed for gravity walls retaining cohesionless backfill materials. In design applications, however, the M-O method and its derivatives are commonly used for below-ground building walls. In this regard, the M-O method is one of the most abused methods in the geotechnical practice. In recognition of the M-O method’s limitations, a simplified method was recently developed to predict lateral seismic soil pressure for building walls. The method is focused on the building walls rather than retaining walls and specifically considers the dynamic soil properties and frequency content of the design motion in its formulation.

Keywords—Mononobe-Okabe, SASSI2000, seismic, soil pressure, SSI

INTRODUCTION

The Mononobe-Okabe (M-O) method of predicting dynamic earth pressure was developed in the 1920s. [1, 2] Since then, a great deal of research has been performed to evaluate its adequacy and develop improvements. This research includes the work by Seed and Whitman [3], Whitman et al. [4, 5, 6], Richards and Elms [7], and Matsuzawa et al. [8]. A good summary of the various methods and their application is reported in [9]. Most developments cited above are based on the original M-O method. The M-O method is, strictly speaking, applicable to soil retaining walls, which, upon experiencing seismic loading, undergo relatively large movement to initiate the sliding wedge behind the wall and to relieve the pressure to its active state. Unfortunately, the method has been and continues to be used extensively for embedded walls of buildings as well. Recent field observations and experimental data, along with enhancements in analytical techniques, have shown that hardly any of the assumptions used in the development of the M-O method are applicable to building walls. The data and the subsequent detailed analysis have clearly shown that the seismic soil pressure is a result of the interaction between the soil and the building during the seismic excitation and as such is a function of all parameters that affect soil-structure interaction (SSI) responses. Some of the more recent observations and experimental data, including an expanded discussion of the method presented herein, are reported in [10].

The major developments that consider the soil-wall interaction under dynamic loading are those by Wood [11] and Veletsos et al. [12, 13]. The solution by Wood commonly used for critical facilities [14] is, in fact, based on static “1 g” loading of the soil-wall system and does not include the wave propagation and amplification of motion. The recent solution by Veletsos et al. is a much more rigorous solution and considers the effects of wave propagation in the soil mass. The solution, however, is complex and lacks simple computational steps for design application. The effect of soil non-linearity and incorporation of harmonic solution for application to transient design motion require additional development with an associated computer program not currently available.

At this time, while elaborate finite element techniques are available to obtain the soil pressure for design, no simple method has been proposed for quick prediction of the maximum soil pressure, thus hindering the designer’s ability to use an appropriate method in practice. To remedy this problem, the current research was conducted to develop a simple method that incorporates the main parameters affecting the seismic soil pressure for buildings. This paper presents the development of the simplified method and a brief summary of its extensive verification. Its application for a typical wall is demonstrated by a set of simple numerical steps. The results are compared with the commonly used methods such as the M-O method and the solution by Wood.

The proposed method has been adopted and recommended by the National Earthquake Hazard Reduction Program (NEHRP). [15]

Significance of Seismic Soil Pressure in Design: Recent Observations

Seed and Whitman [3] summarized damage to wall structures during earthquakes. Damage to retaining walls with saturated backfills is typically more dramatic and is frequently reported in the literature. However, reports of damage to walls above the water table are not uncommon. A number of soil-retaining structures were damaged in the San Fernando earthquake of 1971. Wood [11] reports that the walls of a large reinforced concrete underground reservoir at the Balboa Water Treatment Plant failed as a result of increased soil pressure during the earthquake. The walls were approximately 20 ft high and were restrained by top and bottom slabs.

Damage has been reported for a number of underground reinforced concrete box-type flood control channels. Richards and Elms [7] report damage to bridge abutments after the 1968 earthquake in Inangahua, New Zealand. Out of the 39 bridges inspected, 24 showed measurable movements and 15 suffered abutment damage. In the Madang earthquake of 1970 in New Guinea, the damage patterns were similar. Of the 29 bridges repaired, some experienced abutment lateral movements of as much as 20 in. Reports on failed or damaged bridge abutments indicate mainly settlement of the backfill and pounding of the bridge superstructure against the abutment in longitudinal and transverse directions.

Nazarian and Hadjian [16] also summarized damage to soil-retaining structures during past earthquakes. Damage to bridges has also been reported from various earthquakes, including 1960 Chile, 1964 Alaska, 1964 Niigata, 1971 San Fernando, and 1974 Lima. Most of the reported damage can be attributed to the increased lateral pressure during earthquakes.

Numerous reports are also available from recent earthquakes that report damage to the embedded walls of buildings. However, it is not possible to quantify the contribution of seismic soil pressure to the damage because the embedded walls often carry the inertia load of the superstructure, which is combined with seismic soil pressure load contributing to the damage. On the other hand, simple structures, such as underground box-type structures, retaining walls, and bridge abutments, have suffered damage due to the increased soil pressure. All of these reports and others not mentioned highlight the significance of using appropriate seismic soil pressure in design.

In recent years, the understanding of the attributes of seismic soil pressure has improved significantly. This is mainly due to extensive field and laboratory experiments and data collected from instrumented structures, as well as to the improvement in computational methods in handling the SSI problems. Recent experiments and analyses of the recorded response of structures and seismic soil pressure have been reported in numerous publications. [17–24] These observations confirm that seismic soil pressure is caused by the interaction of the soil and structure and is influenced by the dynamic soil properties, the structural properties, and the characteristics of the seismic motion. The new insight prompted the US Nuclear Regulatory Commission (NRC) to reject the M-O and M-O-based methods for application to critical structures.

SIMPLIFIED METHODOLOGY

The simplified methodology presented in this paper focuses on the building walls rather than soil-retaining walls and specifically considers the following factors:

• Deformation of the walls is limited due to the presence of the floor diaphragms and the internal cross walls, and the walls are considered rigid and non-yielding.

• The effect of wave propagation in the soil mass and interaction of the soil and wall are considered.

• The frequency content of the design motion is fully considered. Use of a single parameter, such as the peak ground acceleration, as a measure of design motion may misrepresent the energy content of the motion at frequencies important for soil pressure.

• Applicable dynamic soil properties, in terms of soil shear wave velocity and damping, are included in the analysis.

• The method is flexible to allow for consideration of soil nonlinear effect where soil nonlinearity is expected to be significant.

ABBREVIATIONS, ACRONYMS, AND TERMS

ATC     Applied Technology Council
M-O     Mononobe-Okabe
NEHRP   National Earthquake Hazard Reduction Program
NRC     Nuclear Regulatory Commission
SASSI   System for Analysis of Soil-Structure Interaction
SDOF    single degree of freedom
SSI     soil-structure interaction
TF      transfer function

It is recognized that the seismic soil pressure is affected not only by the kinematic interaction of the foundation, but also by the inertia effect of the building. The mass properties of buildings vary significantly from one to another. The proposed solution is limited to prediction of seismic soil pressure as affected by the kinematic interaction effects of the building, consistent with the inherent assumption used in the current methods. Experience from numerous rigorous SSI analyses of buildings confirms that using the proposed solution can adequately predict the amplitude of the seismic soil pressure for many buildings even when the inertia effect is included. Some local variation of soil pressure may be present, depending on the layout of the interconnecting slabs and the interior cross walls to the exterior walls, and the relative stiffness of the walls and the soil.

To investigate the characteristics of the lateral seismic soil pressure, a series of seismic soil-structure interaction analyses was performed using the Computer Program SASSI2000. [25] A typical SASSI model of a building basement is shown in Figure 1. The embedment depth is designated by H and the soil layer is identified by shear wave velocity Vs, Poisson’s ratio ν, total mass density ρ, and soil material damping β. The basemat is resting on rock or a firm soil layer. A column of soil elements next to the wall is explicitly modeled to retrieve the pressure responses from the solution. The infinite extent of the soil layers on the sides of the building, as well as the half-space condition below the building, are internally modeled in SASSI in computing the impedance functions for the structural nodes in contact with soil.

The assumption of a firm soil or a rock layer under the basemat eliminates the rocking motion of the foundation. For deep soil sites, and depending on the aspect ratio of the foundation, the rocking motion can influence the magnitude and distribution of soil pressure. Due to space limitation, the extension of the method for deep soil sites is not presented in this paper. A detailed discussion is reported in [10].

Figure 1. Typical SASSI Model of the Foundation

For the SASSI analysis, the acceleration time history of the input motion was specified at the top of the rock layer corresponding to the basemat elevation in the free-field. To characterize the dynamic behavior of the soil pressure, the most commonly used wave field, consisting of vertically propagating shear waves, was specified as input motion. The frequency characteristics of the pressure response were examined using harmonic shear waves for a wide range of frequencies. For each harmonic wave, the amplitude of the normal soil pressure acting on the building wall at several locations along the wall was monitored. To evaluate the frequency contents of the pressure response, the pressure transfer function (TF) amplitude was obtained. This consists of the ratio of the amplitude of the seismic soil pressure to the amplitude of the input motion (1 g harmonic acceleration in the free-field) for each harmonic frequency. The analyses were performed for a building with embedment of 15.2 m (50 ft) and soil shear wave velocities of 152, 305, 457, and 610 m/sec (500, 1,000, 1,500, and 2,000 ft/sec), all with the Poisson’s ratio of 1/3. The material damping in the soil was specified to be 5 percent. The transfer function results for a soil element near top of the wall are shown in Figure 2. As shown in this figure, the amplification of the pressure amplitude takes place at distinct frequencies. These frequencies increase as the soil shear wave velocity increases.

To evaluate the frequency characteristics of each transfer function, the frequency axis was also normalized using soil column frequency f, which was obtained from the following relationship:

f = Vs / (4H)    (1)

In the above equation, Vs is the soil shear wave velocity and H is the embedment depth of the building. The amplitude of soil pressure at low frequency was used to normalize the amplitude of the pressure transfer functions for all frequencies. The normalized transfer functions are shown in Figure 3. As can be seen, the amplification of the pressure and its frequency characteristics are about the same for the range of the shear wave velocities considered.
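As a numerical illustration of Equation 1, for a wall embedded 15.2 m (50 ft) in soil with a shear wave velocity of 457 m/sec (1,500 ft/sec), the soil column frequency is f = 457/(4 × 15.2) ≈ 7.5 Hz, the value cited later in the damping study for this configuration.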

In all cases, the maximum amplification takes place at the frequency corresponding to the soil column natural frequency. The same dynamic behavior was also observed for all soil elements along the height of the walls.

Examining the dynamic characteristics of the normalized pressure amplitudes (such as those shown in Figure 3), it is readily evident that such characteristics are those of a single-degree-of-freedom (SDOF) system. Each response begins with a normalized value of one, increases to a peak value at a distinct frequency, and subsequently reduces to a small value at high frequency. Dynamic behavior of an SDOF system is completely defined by the mass, stiffness, and associated damping constant. It is generally recognized that response of an SDOF system is controlled by stiffness at low frequency, by damping at resonant frequency, and by inertia at high frequencies.

Figure 2. Typical Transfer Functions for Soil Pressure Amplitude (pressure transfer function amplitude versus frequency, Hz, for Vs = 152, 305, 457, and 610 m/sec)

Following the analogy for an SDOF system and to characterize the stiffness component, the pressure amplitudes at low frequencies for all soil elements next to the wall were obtained. The pressure amplitudes at low frequency are almost identical for the wide range of the soil shear wave velocity profiles considered, due to the long wave length of the scattered waves at such low frequencies. The shape of the normalized pressure was used as a basis for determining seismic soil pressure along the height of the building wall.

A similar series of parametric studies was also performed by specifying the input motion at the ground surface level [10]. The results of these studies also showed that the seismic soil pressure, in normalized form, could be represented by an SDOF system. For all cases considered, the low frequency pressure profiles depict the same distribution of the pressure along the height of the wall. This observation is consistent with the results of the analytical model developed by Veletsos et al. [12, 13] Since the SSI analyses were performed for the Poisson’s ratio of 1/3, the pressure distribution was adjusted for the soil’s Poisson’s ratio using the factor recommended by Veletsos et al. The Ψν factor is defined by:

Ψν = 2 / √[(1 − ν)(2 − ν)]    (2)

For the Poisson’s ratio of 1/3, Ψν is 1.897. Use of Ψν in the formulation allows correction of the soil pressure amplitude for various Poisson’s ratios. The adjusted soil pressure profile is compared with the normalized solution by Wood and the M-O method in Figure 4. In the proposed method, the maximum soil pressure is at the top of the wall. This is due to amplification of the motion in the soil with highest amplification at ground surface level. This effect was not considered in the Wood solution.

Figure 3. Normalized Transfer Functions

Figure 4. Comparison of Normalized Pressure Profiles (normalized pressure versus Y/H for the M-O method, the Wood solution with ν = 1/3, and the proposed method)

Using the adjusted pressure distribution, a polynomial relationship was developed to fit the normalized pressure curve. The relationship in terms of normalized height, y = Y/H (Y is measured from the bottom of the wall and varies from 0 to H), is as follows:

p(y) = –0.0015 + 5.05y – 15.84y² + 28.25y³ – 24.59y⁴ + 8.14y⁵    (3)

The area under the curve can be obtained by integrating the pressure distribution over the height of the wall. The total area is 0.744 H for a wall with a height of H. Having obtained the normalized shape of the pressure distribution, the amplitudes of the seismic pressure can also be obtained from the concept of an SDOF. The response of an SDOF system subjected to earthquake loading is readily obtained from the acceleration response spectrum of the input motion at the damping value and the frequency corresponding to the SDOF. The total load is subsequently obtained from the product of the total mass times the acceleration spectral value at the respective frequency of the system.
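For reference, integrating Equation 3 term by term over 0 ≤ y ≤ 1 gives –0.0015 + 5.05/2 – 15.84/3 + 28.25/4 – 24.59/5 + 8.14/6 ≈ 0.744, consistent with the stated area of 0.744 H.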

To investigate the effective damping associated with the seismic soil pressure amplification and the total mass associated with the SDOF system, the system in Figure 1 with wall height of 15 m (50 ft) and soil shear wave velocity of 457 m/sec (1,500 ft/sec) was subjected to six different input motions in successive analyses. The motions were specified at the ground surface level in the free-field. The acceleration response spectra of the input motions at 5 percent damping are shown in Figure 5.

The motions are typical design motions used for analyses of critical structures. From the set of six motions shown in Figure 5, two motions labeled EUS local and distant are the design motions for a site in the Eastern United States with locations close and far away from a major fault. The Applied Technology Council (ATC) S1 motion is the ATC recommended motion for S1 soil conditions. The WUS motion is the design motion for a site close to a major fault in the Western United States. The RG 1.60 motion is the standard site-independent motion used for nuclear plant structures. Finally, the Loma Prieta motion is the recorded motion from the Loma Prieta earthquake. All motions are scaled to 0.30 g and limited to frequency cut-off of 20 Hz for use in the analysis. This cut-off frequency reduces the peak ground acceleration of the EUS local motion to less than 0.30 g due to the high frequency content of this motion.

From the SASSI analysis results, the maximum seismic soil pressure from each element along the wall height was obtained for each of the input motions. The amplitude of the pressure changes from one motion to the other, with larger values associated with use of RG 1.60 motion. Using the computed pressure profiles, the lateral force acting on the wall for each input motion was computed. The lateral force represents the total inertia force of an SDOF for which the system frequency is known. The system frequency for the case under consideration is the soil column frequency, which is 7.5 Hz based on Equation 1. The total force divided by the spectral acceleration of the system at 7.5 Hz at the appropriate damping ratio amounts to the mass of the SDOF.

Figure 5. Motions Used in the Study (5 percent damped acceleration response spectra, spectral acceleration in g versus frequency in Hz, for the EUS local, EUS distant, ATC S1, WUS, RG 1.60, and Loma Prieta motions)

To identify the applicable damping ratio, the acceleration response spectrum of the free-field response motions at a depth of 15 m (50 ft) was computed for a wide range of damping ratios. Knowing the total force of the SDOF, the frequency of the system, and the input motion to the SDOF system, the relationship in the form proposed by Veletsos et al. [12] was used to compute the total mass and the damping of the SDOF system. For the total mass, the relationship is

m = 0.50 ρ H² Ψν    (4)

where ρ is the mass density of the soil (total weight density divided by the acceleration of gravity), H is the height of the wall, and Ψν is the factor to account for the Poisson’s ratio as defined in Equation 2. In the analytical model developed by Veletsos et al., a constant coefficient of 0.543 was used in the formulation of the total mass. Study of the soil pressure transfer functions and the free-field response motions at a depth of 15 m (50 ft) showed that spectral values at the soil column frequency at 30 percent damping have the best correlation with the forces computed directly from the SSI analysis. The high value of 30 percent damping is due to the radiation damping associated with soil-wall interaction. However, the spectral values of the motions at the depth corresponding to the base of the wall in the free-field are insensitive to the spectral damping ratios at the soil column frequency due to the dip in the response motion that appears in the acceleration response spectra at the soil column frequency (soil column missing frequency). The various motions, however, have significantly different spectral values at the soil column frequency, depending on the energy content of the design motion at the soil column frequency.

This observation leads to the conclusion that while the frequency of the input motion, particularly at the soil column frequency, is an important component for magnitude of the seismic soil pressure, the spectral damping ratio selected is a much less sensitive parameter. In practice, it is often warranted to consider the variation of soil properties typically using the best estimate and the lower and upper bound range of soil velocity profile. This, in effect, shifts the soil column frequency to a wider range.

Computational Steps

To predict the lateral seismic soil pressure for below-ground building walls resting on a firm foundation and assuming rigid walls (no significant deformation), the following steps should be taken:

1. Perform free-field soil column analysis and obtain the response motion at the depth corresponding to the base of the wall in the free-field. The response motion in terms of acceleration response spectrum at 30 percent damping should be obtained. The free-field soil column analysis may be performed using the Computer Program SHAKE [26] with input motion specified either at the ground surface or at the depth of the foundation basemat. The choice for location of control motion is an important decision that needs to be made consistent with the development of the design motion. The location of input motion may significantly affect the dynamic response of the building and the seismic soil pressure amplitudes.

2. Use Equations 4 and 2 to compute the total mass for a representative SDOF system using the Poisson’s ratio and mass density of the soil.

3. Obtain the lateral seismic force from the product of the total mass obtained in Step 2 and the acceleration spectral value of the free-field response at the soil column frequency obtained at the depth of the bottom of the wall (Step 1).

4. Obtain the maximum lateral seismic soil pressure at the ground surface level by dividing the lateral force obtained in Step 3 by the area under the normalized seismic soil pressure, 0.744 H.

5. Obtain the pressure profile by multiplying the peak pressure from Step 4 by the pressure distribution relationship shown in Equation 3.
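To make the arithmetic of Steps 2 through 5 concrete, the following is a minimal Python sketch, assuming the Step 1 free-field analysis has already produced the 30 percent damped spectral acceleration at the soil column frequency; the function name and the example numbers are illustrative only and are not part of the original paper.

```python
import numpy as np

# Minimal sketch of the simplified method's Steps 2-5 (assumed SI units).
# sa_30 is the spectral acceleration (m/s^2) of the free-field response
# motion at the base of the wall, read at the soil column frequency and
# 30 percent damping (Step 1, e.g., from a SHAKE analysis).

def seismic_soil_pressure(H, Vs, rho, nu, sa_30, n=11):
    f = Vs / (4.0 * H)                                  # soil column frequency, Eq. (1)
    psi = 2.0 / np.sqrt((1.0 - nu) * (2.0 - nu))        # Poisson's ratio factor, Eq. (2)
    m = 0.50 * rho * H**2 * psi                         # SDOF mass per unit length of wall, Eq. (4), Step 2
    force = m * sa_30                                   # lateral seismic force, Step 3
    p_max = force / (0.744 * H)                         # peak pressure at the ground surface, Step 4
    y = np.linspace(0.0, 1.0, n)                        # normalized height Y/H from the base of the wall
    shape = (-0.0015 + 5.05*y - 15.84*y**2 + 28.25*y**3
             - 24.59*y**4 + 8.14*y**5)                  # normalized pressure profile, Eq. (3), Step 5
    return f, y, p_max * shape

# Example with assumed numbers: 9.2 m wall, Vs = 305 m/s, rho = 2,000 kg/m^3,
# nu = 1/3, and a spectral acceleration of 0.4 g at the soil column frequency.
f, y, p = seismic_soil_pressure(9.2, 305.0, 2000.0, 1.0/3.0, 0.4 * 9.81)
print(f"f = {f:.2f} Hz, peak pressure = {p.max()/1000.0:.0f} kPa")
```

The 0.744 H divisor is simply the area under the normalized pressure shape, so dividing the total force by it recovers the pressure ordinate at the top of the wall.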

One of the attractive aspects of the simplified method is its ability to consider soil nonlinear effects. Soil nonlinearity is commonly considered by use of the equivalent linear method and the strain-dependent soil properties. Depending on the intensity of the design motion and the soil properties, the effect of soil nonlinearity can be important in changing the soil column frequency and, therefore, the amplitude of the spectral response at the soil column frequency.

Accuracy of the Simplified Method

The simplified method outlined above was tested for building walls with the embedment depths of 4.6, 9, and 15.2 m (15, 30, and 50 ft) using up to six different time histories as input motion. The results computed directly with SASSI are compared with the results obtained from the simplified solution. To depict the level of accuracy, typical comparisons for a 4.6 m (15 ft) wall with shear wave velocity of 152 m/sec (500 ft/sec) using the ATC motion, a 9.2 m (30 ft) wall with soil shear wave velocity of 305 m/sec (1,000 ft/sec) using the Loma Prieta motion, and a 15.2 m (50 ft) wall with soil shear wave velocity of 457 m/sec (1,500 ft/sec) using the EUS local motion are shown in Figures 6, 7, and 8, respectively. In all cases, the soil material damping of 5 percent and Poisson’s ratio of 1/3 were used. These comparisons show a relatively conservative profile of seismic soil pressure predicted by the simple method as compared with a more rigorous solution. A comprehensive validation of the proposed method is presented in [10].

Figure 6. Predicted and Directly Computed Seismic Soil Pressure, 4.6 m (15 ft) Wall, Vs = 152 m/sec (500 ft/sec), ATC Motion

Figure 7. Predicted and Directly Computed Seismic Soil Pressure, 9.2 m (30 ft) Wall, Vs = 305 m/sec (1,000 ft/sec), Loma Prieta Motion

Figure 8. Predicted and Directly Computed Seismic Soil Pressure, 15.2 m (50 ft) Wall, Vs = 457 m/sec (1,500 ft/sec), EUS Local Motion

Comparison with Other Commonly Used Methods

The seismic soil pressure results obtained for a building wall 9.2 m (30 ft) high embedded in a soil layer with shear wave velocity of 305 m/sec (1,000 ft/sec) using the M-O, Wood, and proposed simplified methods are compared in Figure 9. For the simplified method, the input motions defined in Figure 5, all scaled to 0.30 g peak ground acceleration, were used.


The same soil shear wave velocity was used for all motions to compare the effects of frequency content of each motion on the pressure amplitude. In real application, the average strain-compatible soil velocity obtained from the companion free-field analysis would be used.

The M-O method and the Wood solution require only the peak ground acceleration as input, and each yields one pressure profile for all motions. For the M-O method, it is commonly assumed (although specified by neither Mononobe nor Okabe) that the seismic soil pressure has an inverted triangular distribution behind the wall. As shown in Figure 9, the M-O method results in lower pressure. This is understood, since this method relies on the wall movement to relieve the pressure behind the wall. The Wood solution generally results in the maximum soil pressure and is independent of the input motion as long as the peak acceleration is 0.30 g. The proposed method results in a wide range of pressure profiles, depending on the frequency contents of the input motion, particularly at the soil column frequency. For those motions for which the ground response motions at the soil column frequency are about the same as the peak ground acceleration of the input motion, e.g., RG 1.60 motion, the results of the proposed method are close to those of the Wood solution. There is a similar trend in the results of the various methods in terms of the magnitude of the total lateral force and the overturning moment.

Figure 9. Maximum Seismic Soil Pressure for the 9.2 m (30 ft) Wall

CONCLUSIONS

Using the concept of an SDOF system, a simplified method was developed to predict maximum seismic soil pressures for buildings resting on firm foundation materials. The method incorporates the dynamic soil properties and the frequency content of the design motion in its formulation. It was found that the controlling frequency that determines the maximum soil pressure is the one corresponding to the soil column adjacent to the embedded wall of the building. The proposed method requires the use of conventionally used, simple, one-dimensional soil column analysis to obtain the relevant soil response at the base of the wall. More importantly, this approach allows soil nonlinear effects to be considered in the process. The effect of soil nonlinearity can be important for some applications depending on the intensity of the design motion and the soil properties. Following one-dimensional soil column analysis, the proposed method involves a number of simple hand calculations to arrive at the distribution of the seismic soil pressure for design. The accuracy of the method relative to the more elaborate finite element analysis was verified for a wide range of soil properties, earthquake motions, and wall heights. The simplified method has been adopted by design codes and standards such as the NEHRP standards and the ASCE standards for nuclear structures.

ACKNOWLEDGMENT

The Bechtel technical grant for development of the method is acknowledged.

REFERENCES

[1] N. Mononobe and H. Matuo, “On the Determination of Earth Pressures During Earthquakes,” Proceedings of World Engineering Congress, Tokyo, Japan, Vol. 9, Paper 388, 1929.

[2] S. Okabe, “General Theory of Earth Pressures and Seismic Stability of Retaining Wall and Dam,” Journal of Japan Society of Civil Engineers, Vol. 12, No. 1, 1924.

[3] H.B. Seed and R.V. Whitman, “Design of Earth Retaining Structures for Seismic Loads,” ASCE Specialty Conference on Lateral Stresses in the Ground and Design of Earth Retaining Structures, Cornell University, Ithaca, New York, June 22–24, 1970.

[4] R.V. Whitman, “Seismic Design and Behavior of Gravity Retaining Walls,” Proceedings of ASCE 1990 Specialty Conference on Design and Performance of Earth Retaining Structures, Cornell University, Ithaca, New York, June 18–21, 1990, pp. 817–842, access via <http://cedb.asce.org/cgi/WWWdisplay.cgi?9002388>.



[5] R.V. Whitman and J.T. Christian, “Seismic Response of Retaining Structures,” POLA Seismic Workshop, San Pedro, California, 1990.

[6] R.V. Whitman, “Seismic Design of Earth Retaining Structures,” Proceedings of 2nd International Conference on Recent Advances in Geotechnical Earthquake Engineering and Soil Dynamics, St. Louis, Missouri, March 11–15, 1991, pp. 1767–1778.

[7] R. Richards, Jr., and D.G. Elms, “Seismic Behavior of Gravity Retaining Walls,” ASCE, Journal of Geotechnical Engineering, Vol. 105, No. GT4, April 1979, pp. 449–464.

[8] H. Matsuzawa, I. Ishibashi, and M. Kawamura, “Dynamic Soil and Water Pressures of Submerged Soils,” ASCE, Journal of Geotechnical Engineering, Vol. 111, No. 10, September 1984.

[9] R.M. Ebeling and E.E. Morrison, Jr., “The Seismic Design of Waterfront Retaining Structures,” US Army Corps of Engineers, Technical Report ITL-92-11, 1992.

[10] F. Ostadan and W.H. White, “Lateral Seismic Soil Pressure – An Updated Approach,” Bechtel Technical Grant report, 1997.

[11] J.H. Wood, “Earthquake Induced Soil Pressures on Structures,” Doctoral Dissertation, EERL 73-05, California Institute of Technology, Pasadena, California, 1973.

[12] A. Veletsos and A.H. Younan, “Dynamic Soil Pressure on Rigid Vertical Walls,” Earthquake Engineering and Soil Dynamics, Vol. 23, 1994, pp. 275–301.

[13] A. Veletsos and A.H. Younan, “Dynamic Modeling and Response of Soil-Wall Systems,” ASCE, Journal of Geotechnical Engineering, Vol. 120, No. 12, December 1994.

[14] ASCE 4-98, “Seismic Analysis of Safety-Related Nuclear Structures and Commentary,” American Society of Civil Engineers, 1998.

[15] NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, 2000 Edition, FEMA 369, March 2001.

[16] H. Nazarian and A.H. Hadjian, “Earthquake Induced Lateral Soil Pressure on Structures,” ASCE, Journal of Geotechnical Engineering, Vol. 105, No. GT9, September 1979.

[17] Electric Power Research Institute, “Proceedings: EPRI/NRC/TPC Workshop on Seismic Soil-Structure Interaction Analysis Techniques Using Data From Lotung, Taiwan,” EPRI Publication No. NP-6154, Two Volumes, March 1989.

[18] Electric Power Research Institute, “Post-Earthquake Analysis and Data Correlation for the 1/4-Scale Containment Model of the Lotung Experiment,” EPRI Publication No. NP-7305SL, October 1991.

[19] M. Hirota, M. Sugimoto, and S. Onimaru, “Study on Dynamic Earth Pressure Through Observation,” Proceedings of 10th World Conference on Earthquake Engineering, Madrid, Spain, July 1992.

[20] H. Matsumoto, K. Arizumi, K. Yamanoucho, H. Kuniyoshi, O. Chiba, and M. Watakabe, “Earthquake Observation of Deeply Embedded Building Structure,” Proceedings of 6th Canadian Conference on Earthquake Engineering, Toronto, Canada, June 1991.

[21] M. Watakabe, H. Matsumoto, Y. Fukahori, Y. Shikama, K. Yamanouchi, and H. Kuniyoshi, “Earthquake Observation of Deeply Embedded Building Structure,” Proceedings of 10th World Conference on Earthquake Engineering, Madrid, Spain, July 1992.

[22] T. Itoh and T. Nogami, “Effects of Surrounding Soils on Seismic Response of Building Basements,” Proceedings of 4th US National Conference on Earthquake Engineering, Palm Springs, California, May 20–24, 1990.

[23] K. Koyama, O. Watanabe, and N. Kusano, “Seismic Behavior of In-Ground LNG Storage Tanks During Semi-Long Period Ground Motion,” Proceedings of 9th World Conference on Earthquake Engineering, Tokyo-Kyoto, Japan, August 2–9, 1988.

[24] K. Koyama, N. Kusano, H. Ueno, and T. Kondoh, “Dynamic Earth Pressure Acting on LNG In-Ground Storage Tank During Earthquakes,” Proceedings of 10th World Conference on Earthquake Engineering, Madrid, Spain, July 1992.

[25] J. Lysmer, F. Ostadan, and C.C. Chen, “SASSI2000 – A System for Analysis of Soil-Structure Interaction,” Department of Civil Engineering, University of California, Berkeley, 1999.

[26] P.B. Schnabel, J. Lysmer, and H.B. Seed, “SHAKE – A Computer Program for Earthquake Response Analysis of Horizontally Layered Sites,” Earthquake Engineering Research Center, University of California, Berkeley, Report No. EERC 72-12, December 1972.

BIOGRAPHY

Farhang Ostadan, a Bechtel Fellow, has more than 25 years of experience in geotechnical and geotechnical earthquake engineering and foundation design. As chief soils engineer for Bechtel, he has overall responsibility for this discipline and manages the efforts of a large and diverse group of geotechnical specialists in locations across the US and around the globe. His project oversight responsibilities range from major transportation projects to petrochemical, nuclear, and power- and energy-related projects.

Dr. Ostadan has published more than 30 technical papers on topics related to geotechnical earthquake engineering. He co-developed a method for dynamic soil-structure interaction analysis currently in use by the industry worldwide. Dr. Ostadan is a frequent lecturer at universities and research organizations.

Dr. Ostadan is currently a member of the American Society of Civil Engineers (ASCE), Geotechnical Division; the Earthquake Engineering Research Institute (EERI); and the National Earthquake Hazard Reduction Program (NEHRP) Foundation Committee, and is a past member of California’s Seismic Safety Commission.

Dr. Ostadan received a PhD in Civil Engineering from the University of California, Berkeley; an MS in Civil Engineering from the University of Michigan, Ann Arbor; and a BS in Civil Engineering from the University of Tehran, Iran.

The original version of this paper was published in the Journal of Soil Dynamics and Earthquake Engineering, Vol. 25, Issues 7–10, August–October 2005, pp. 785–793.


INTEGRATED SEISMIC ANALYSIS AND DESIGN OF SHEAR WALL STRUCTURES

Thomas D. Kohli
[email protected]

Orhan Gürbüz, PhD
[email protected]

Farhang Ostadan, PhD
[email protected]

Originally Issued: April 2006
Updated: December 2008

Abstract—This paper summarizes a new approach for the design of concrete shear wall structures for nuclear facilities. Static and dynamic analyses are carried out with the same finite element model, using the SAP2000 and SASSI2000 computer programs, respectively. The method imports the dynamic solution from SASSI2000 into the optimum concrete (OPTCON) design computer code in order to compute the stresses in the concrete members for every time step of analysis. Static stresses are imported from the static solution for applicable static loads, and total stresses are computed for concrete design. The design process allows both element-based and cut-section design methods.

This approach has the advantage of considering the stress time history in the design of concrete members, avoiding the conventional approach of combining maximum seismic stresses for all elements simultaneously. Significant savings in concrete design (both time and material) were obtained in a test problem simulating a typical shear wall structure for nuclear facilities.

Keywords—computer code, cut-section design, element-based design, impedance matrix, integrated design, mass matrix, optimum design, reinforcement, seismic design, shear wall structure, shell element, static and dynamic analysis, stiffness matrix

INTRODUCTION

The current approach to the design of safety-related shear wall structures generally involves using the SASSI2000 [1] computer code for the seismic soil-structure interaction (SSI) analysis. Acceleration profiles obtained from the SASSI analysis are applied to a detailed finite element model as equivalent static loads to determine the seismic forces. The SASSI models may be coarser than the static models. The design may be carried out using a concrete design program, with appropriate combination of the applicable static loads. In the design process, the maximum seismic forces in each of the three orthogonal directions are combined. This step assumes that all maximum seismic loads act at the same time, resulting in a very conservative design.

The conventional two-step design procedure described above is tedious: it requires two separate analyses to develop design loads, and the compatibility of the static and dynamic models must be demonstrated for each application. Its advantage is that a detailed static model considering major openings and composite slabs can be analyzed for design. The proposed approach uses the same model for static and dynamic analysis and thus offers a more robust basis for concrete design.

For dynamic analysis, the new version of the SASSI2000 code is used. This version has the state-of-the-art thin/thick shell element with five stress output points, allowing computation of out-of-plane shear forces. To avoid transferring large sets of stress time histories from SASSI2000 to the optimum concrete (OPTCON) design computer code, the transfer function solutions, which comprise a much smaller set of data, are imported into OPTCON, and the STRESS module of SASSI2000 is implemented in OPTCON to compute element stresses. OPTCON combines the shell stresses into one 3-D record for design while preserving the maximum responses. OPTCON imports static loads, such as dead load, from SAP2000 [2] models. The OPTCON module is a Windows program that uses a project database.

An earlier version of the OPTCON reinforced concrete design engine was developed 30 years ago and used extensively in the design of


nuclear power plants. [3] OPTCON optimizes its reinforcement design by considering all the factored shell forces at once, determining the “best-fit” reinforcement at each face of the concrete.

For the new integrated design approach, the OPTCON computer code was modified to perform the design as follows:

• Design is performed using the element shell forces as a function of time, obtained from the Dynamic Link Library (DLL) using factored time history loads, thus preserving the phasing of the response motions, and combining them with the applicable static loads.

• OPTCON ensures that the design is adequate for the stresses at all time steps.

• OPTCON performs automatic element-grouping where forces on sections of elements need to be considered as a set.

• OPTCON uses a parabolic stress-strain relationship for concrete.

• All designs meet the requirements of ACI 349-01 (both shear walls and floor diaphragms are designed).

• Selective output includes the required reinforcement on each face of the wall or the slab, in each direction, for each element.

• Contour plots of shell forces and computed reinforcement are available to the designer.

The process streamlines the analysis and design process and reduces the engineering time for design significantly. The integrated process incorporates theoretical accuracy and engineering judgment and is a valuable tool in the design of the next generation of nuclear power plants.

METHODOLOGY

The integrated design methodology involves (1) seismic analysis of the overall structure using SASSI2000, (2) static analysis under the non-seismic loads using the same model, and (3) design of the structural members using the OPTCON module. Each of these steps is described below.

Seismic Analysis with SASSI2000

The SASSI2000 computer program is widely used in the nuclear industry for seismic SSI analysis of structures and for the development of seismic responses for structural and equipment design. In SASSI, the equation of motion is formulated in the frequency domain, and fast Fourier transform (FFT) techniques are used to convert the frequency domain solution to a time domain solution. The following subscripts refer to degrees of freedom (DOF) associated with different nodes (see Figure 1):

Subscript   Nodes
b   the boundary of the total system
i   at the boundary between the soil and the structure
w   within the excavated soil volume
g   at the remaining part of the free-field site
s   at the remaining part of the structure
f   combination of i and w nodes

For each selected frequency of analysis, SASSI solves the following equation of motion:

$$\begin{bmatrix} C_{ii}^{III} - C_{ii}^{II} + X_{ii} & -C_{iw}^{II} & C_{is}^{III} \\ -C_{wi}^{II} & -C_{ww}^{II} & 0 \\ C_{si}^{III} & 0 & C_{ss}^{III} \end{bmatrix} \begin{Bmatrix} U_i \\ U_w \\ U_s \end{Bmatrix} = \begin{Bmatrix} X_{ii}\,U_i' \\ 0 \\ 0 \end{Bmatrix} \qquad (1)$$

where the superscripts II and III denote the excavated soil volume and the structure (Substructures II and III in Figure 1), [C] is the complex frequency-dependent dynamic stiffness matrix

$$[C] = [K] - \omega^2 [M] \qquad (2)$$

and [K] and [M] are the global complex stiffness and mass matrices, respectively.

Formulation of the dynamic stiffness and mass matrices is very similar to that in other finite element codes. To include the SSI effects, SASSI2000 requires computation of the free-field motion U′i in Equation 1 and the impedance matrix Xii for all foundation interaction nodes. The impedance matrix is, in effect, a set of complex frequency-dependent springs and dashpots interconnecting all interaction nodes; it is obtained from the point load solution for each interaction node.
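As a rough illustration of the pattern Equations 1 and 2 describe (and not of SASSI2000's internals), the sketch below forms a complex dynamic stiffness matrix at each analysis frequency and solves it for the complex response, i.e., the transfer function ordinates. The matrices, load vector, and frequency range are arbitrary placeholders.

```python
# Illustrative sketch only: frequency-domain solution of [C]{U} = {F},
# with [C] = [K] - w^2 [M] formed at each analysis frequency (Eqs. 1 and 2).
import numpy as np

K = np.array([[400.0 + 40.0j, -200.0 + 0.0j],
              [-200.0 + 0.0j,  200.0 + 20.0j]])   # placeholder complex stiffness (hysteretic damping)
M = np.diag([2.0, 1.0]).astype(complex)            # placeholder mass matrix
load = np.array([1.0 + 0.0j, 0.0 + 0.0j])          # unit excitation on the first DOF

transfer_function = []
for f_hz in np.linspace(0.5, 25.0, 50):             # assumed analysis frequencies
    w = 2.0 * np.pi * f_hz
    C = K - (w ** 2) * M                             # dynamic stiffness at this frequency
    transfer_function.append(np.linalg.solve(C, load))

H = np.array(transfer_function)                      # (n_frequencies, n_dof) complex array
```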

ABBREVIATIONS, ACRONYMS, AND TERMS

ACI American Concrete Institute

CG center of gravity

DLL Dynamic Link Library

DOF degree(s) of freedom

FFT fast Fourier transform

OPTCON optimum concrete

SSI soil-structure interaction



Figure 1 depicts the substructuring method used in SASSI2000 analyses. Details of the SASSI2000 substructuring methods and the internal modeling can be obtained from the theoretical manual of the program (see [1]). The solution of Equation 1 in terms of U is the transfer function solution for each degree of freedom in the model. The transfer function solution is convolved with the Fourier components of the input motion and converted to the time domain to obtain the response time history for the respective degree of freedom in the model. The stress time histories for each element are computed from the response time histories of the nodes forming that element, using the stress-strain relationship of the respective element.
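A minimal sketch of that convolution step, assuming a single degree of freedom, a placeholder input motion, and a placeholder single-oscillator transfer function; it only illustrates the frequency-to-time conversion, not the SASSI2000 STRESS routines.

```python
# Illustrative sketch: transfer function x Fourier components of the input
# motion, converted back to the time domain to get a response time history.
import numpy as np

dt, n = 0.01, 4096
t = np.arange(n) * dt
input_accel = np.sin(2 * np.pi * 3.0 * t) * np.exp(-0.2 * t)    # placeholder input motion

freqs = np.fft.rfftfreq(n, d=dt)                                 # FFT frequencies
input_fourier = np.fft.rfft(input_accel)                         # Fourier components of input

# Placeholder transfer function: single oscillator at 5 Hz with 5% damping.
fn, zeta = 5.0, 0.05
H = 1.0 / (1.0 - (freqs / fn) ** 2 + 2j * zeta * (freqs / fn))

response_history = np.fft.irfft(H * input_fourier, n=n)          # response time history
```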

Static Analysis for Non-Seismic Loads

The finite element model of the structure used in SASSI2000 for seismic analysis should be used for the non-seismic loads in accordance with the project criteria. Any general purpose finite element analysis program can be used for this purpose. In this study, SAP2000 is used. Since the design will be carried out for the total structure, it is necessary to capture the internal forces and moments for all members. Consequently, the finite element model must include the soil stiffness to capture the basemat response. Also, if the structure is embedded, the lateral soil pressures must be included in the static analysis.

Figure 1. Substructuring Method in SASSI2000 Analysis: (a) Total System; (b) Substructure I, Free-Field Site; (c) Substructure II, Excavated Soil Volume; (d) Substructure III, Structure


The analysis results are saved in the project database that will be used during the design process.

Definition of Design Forces

Design of shear wall structures has been carried out in the past using both element stresses and “cut section” forces. In the integrated approach, a combination of both is used, as described below.

In the OPTCON program module, rows of elements in both walls and diaphragms are designated as “element-groups.” For each element-group, in-plane membrane forces and moments are calculated. For a wall, these forces are: membrane force in the vertical direction Pu, in-plane overturning moment Mu, and horizontal shear force Vu. For a diaphragm, these forces and moments would be calculated in both directions.

The out-of-plane design forces are calculated on an element basis, including the out-of-plane bending moments Mx and My and out-of-plane shears Vxz and Vyz .

All analysis is done on a time-step basis for the entire time-history duration using the 3-D combined shell stresses. At each time step, the ACI 349-01 code criteria are considered, so the envelope of the controlling design step is assured.

Integrated Design Using the OPTCON Module

The shell stresses computed by the SASSI2000 STRESS DLL in OPTCON are obtained from the transfer functions of the DOF defined by the connectivity of the 3-D shell finite element model. To do this, the SHL17 thin/thick shell element’s stress recovery routines were rewritten using complex arithmetic so that their inputs are the frequency domain nodal transfer functions rather than the original nodal displacement time histories. The stress recovery is therefore performed in the frequency domain and converted to the time domain using the SASSI2000 FFT routines. The time histories of shell element stresses are used for design. The SHL17 quadrilateral shell element has five output points where the shell stresses are computed; one is at the center of gravity (CG) of the element, and the other four are located about 80 percent of the way from the CG to the corner nodes.

SASSI2000 performs the analysis one direction at a time (three runs, for X-, Y-, and Z-excitation) and uses the three components of time histories, one for each respective direction. Therefore, in the process used by OPTCON, the user isolates all the shell elements to be considered in a design (i.e., a shear wall or a floor diaphragm), and the internal STRESS DLL is then executed three times, once for each of the global directions; the results of these analyses are saved to disk. Each of the three directional analyses is then post-processed so that the result is a combined 3-D time history record for each shell element in the analysis. During the 3-D combination, the three directions of excitation are permutated in terms of plus/minus sign for all eight possible combinations so that the maximum resulting shell stress components are captured. Thus, the final 3-D combined shell stresses contain all the time-history stress components—Sxx, Syy, Sxy, Mxx, Myy, Mxy, Vxz, and Vyz—for each shell at all five stress output points on the SHL17 element.
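A simplified sketch of the sign-permutation step for one stress component, using random placeholder time histories for the three directional runs; the actual OPTCON combination covers all eight stress components at all five output points.

```python
# Illustrative sketch: permutate the +/- sign of the three directional
# excitations (eight combinations) and keep the enveloping absolute value
# of the combined stress component at every time step.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_steps = 2000
# One stress component (e.g., Syy) from the X-, Y-, and Z-excitation runs:
s_x, s_y, s_z = (rng.standard_normal(n_steps) for _ in range(3))

combined_abs_max = np.zeros(n_steps)
for sign_x, sign_y, sign_z in itertools.product((1.0, -1.0), repeat=3):   # eight combinations
    combined = sign_x * s_x + sign_y * s_y + sign_z * s_z
    combined_abs_max = np.maximum(combined_abs_max, np.abs(combined))

peak_combined_stress = combined_abs_max.max()
```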

Then, OPTCON uses the imported static shell stresses for dead load and live load, along with user-specified loading-combination scale factors, and at each time step combines them into the final shell element stresses to be used for design.

For element-based design (i.e., with no element-grouping), the final shell element stresses are used directly. In this approach, Sxx and Mxx are typically considered as P and M pairs at each time step to design the horizontal reinforcement in the shear wall, and Syy and Myy to design the vertical reinforcement. The twisting moment Mxy is added to amplify both Mxx and Myy so that the design at each time step is conservative. OPTCON performs element-based design with the single shell element stresses using an iteration process in which both As and As′ are initially set to their minimums and then swept through all the steps so that all P and M pairs best fit within a numerical reinforced concrete interaction diagram.
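The iteration can be pictured roughly as follows; the capacity check here is a crude placeholder, not the numerical interaction diagram OPTCON builds, and the starting reinforcement and step size are assumptions.

```python
# Illustrative sketch: sweep all (P, M) pairs and increase the reinforcement
# on both faces from the assumed minimum until every pair is within capacity.
def pm_capacity_ok(P, M, As, As_prime):
    # Placeholder capacity surface; a real check would evaluate the
    # section's numerical P-M interaction diagram.
    return abs(M) <= 50.0 + 400.0 * (As + As_prime) - 0.002 * P ** 2

def design_reinforcement(pm_pairs, As_min=0.31, step=0.05, As_max=4.0):
    As = As_prime = As_min
    while As <= As_max:
        if all(pm_capacity_ok(P, M, As, As_prime) for P, M in pm_pairs):
            return As, As_prime
        As += step
        As_prime += step
    raise ValueError("section cannot be reinforced within the assumed limit")

# Example with a few placeholder (P, M) pairs from the stress time history:
pairs = [(120.0, 35.0), (-80.0, 60.0), (40.0, -55.0)]
As, As_prime = design_reinforcement(pairs)
```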

For element-groups, OPTCON automatically assembles the individual shell elements to be considered in the design into internal logical groups. This is accomplished by considering openings in the design, such as doors and windows in shear walls or openings in floor diaphragms. Such openings form piers at each of the element levels that, in certain cases, require special ACI 349-01 code considerations. Thus, OPTCON, with element-group design, always considers all possible cut-sections of groups of elements in the design at each element level. Figure 2 shows the element-grouping on a simple shear wall with one door opening.



In Figure 2, vectors 1, 2, 4, and 5 represent cut-sections on the wall piers to be used for design. Vectors 3 and 6 are cut-sections on multiple piers, and vectors 7 and 8 are cut-sections on the wall where no piers exist, across the whole wall.

When using element-grouping, the shell stresses at the five output points on the element are averaged and assumed to be located at the CG of the element. Then, the individual CG shell stresses are integrated at each time step to determine the design forces and/or moments. Figure 3 shows how these integrations are performed using an example of a five-element group:

Applied Axial Load on Wall (tension is positive):

$$P_u = \sum_{i=1}^{n} F_{yi} \qquad (3)$$

Figure 2. Elevation of Shear Wall with One Opening, Showing Horizontal Groups


Figure 3. Integration of CG Shell Stresses to Determine Design Forces and/or Moments — Five-Element Group
(Fxi = (Sxx)(hi); Fyi = (Syy)(wi); Fvi = (Sxy)(wi))


Applied Moment about Center Line of Wall or Wall Segment (counter-clockwise is positive):

$$M_u = \sum_{i=1}^{n} F_{yi} \left( x_i - \frac{l_w}{2} \right) \qquad (4)$$

Applied Shear Load on Wall:

$$V_u = \sum_{i=1}^{n} F_{vi} \qquad (5)$$
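A small worked sketch of Equations 3 through 5 for a hypothetical five-element group, using the edge-force definitions noted with Figure 3 (Fyi = Syy·wi, Fvi = Sxy·wi); all numbers are placeholders.

```python
# Illustrative sketch: integrate CG shell stresses of an element group at
# one time step into the section forces Pu, Mu, and Vu (Eqs. 3-5).
elements = [
    # (x_i: CG position along the wall, w_i: element width, Syy, Sxy) - placeholders
    (2.5,  5.0,  12.0, 8.0),
    (7.5,  5.0,   9.0, 7.5),
    (12.5, 5.0,  -3.0, 7.0),
    (17.5, 5.0, -10.0, 6.5),
    (22.5, 5.0, -14.0, 6.0),
]
lw = 25.0  # length of the wall segment (group)

Pu = sum(syy * wi for _, wi, syy, _ in elements)                     # Eq. 3 (tension positive)
Mu = sum(syy * wi * (xi - lw / 2.0) for xi, wi, syy, _ in elements)  # Eq. 4 (about the centerline)
Vu = sum(sxy * wi for _, wi, _, sxy in elements)                     # Eq. 5
```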

Concrete Stress Block

When designing reinforcing steel with element-groups while considering membrane forces and their overturning moments, OPTCON considers equally distributed rebar located along the section to be designed. Figure 4 shows the design of a typical cut-section where no openings exist:

Strain in Rebar Set, es , as a Function of the Strain of the Concrete, ec , at the Edge of the Concrete Sections:

$$\varepsilon_s = \varepsilon_c \left( \frac{x_i - L_{na}}{L_{na}} \right) \quad \text{(strain, i.e., inches per inch)} \qquad (6)$$

Stress in Bar Set:

$$f_s = \varepsilon_s E_s \le 0.9 f_y \quad \text{(force per unit of area)} \qquad (7)$$

Force in Bar Set:

$$F_s = f_s A_s \quad \text{(units of force)} \qquad (8)$$

NOTE: Forces on bar sets in compression zone are reduced to consider force taken by concrete.
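A compact sketch of Equations 6 through 8 for a single bar set; the units, sign convention, and numerical values are assumed for illustration, and the compression-zone reduction noted above is not included.

```python
# Illustrative sketch: strain, stress, and force in one bar set (Eqs. 6-8)
# for a trial neutral-axis position. All values are placeholder assumptions.
E_S = 29000.0   # steel modulus, ksi (assumed)
F_Y = 60.0      # yield strength, ksi (assumed)

def bar_set_force(x_i, L_na, eps_c, A_s):
    """Force in a bar set located x_i from the reference edge."""
    eps_s = eps_c * (x_i - L_na) / L_na            # Eq. 6: strain (in./in.)
    f_s = eps_s * E_S                              # Eq. 7: stress (ksi), capped below
    f_s = max(min(f_s, 0.9 * F_Y), -0.9 * F_Y)     # |f_s| limited to 0.9*fy
    return f_s * A_s                               # Eq. 8: force (kips)

# Example: bar set 60 in. from the edge, neutral axis at 20 in., ec = 0.002, As = 1.58 in.^2
force = bar_set_force(x_i=60.0, L_na=20.0, eps_c=0.002, A_s=1.58)
```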

If openings such as doors or windows exist in the cut-section shown in Figure 4, OPTCON iterates on the area of reinforcement using a concrete stress block with discontinuities, and the reinforcing steel is not considered where the openings exist.

DESIGN EXAMPLE

The integrated approach was applied to a shear wall structure in a high seismic zone. The two-story building is approximately 150 ft x 250 ft and 64 ft high. As can be observed from this model, numerous openings exist in the walls and slabs, and therefore the design forces must be calculated considering these discontinuities.

The finite element model for this building is shown in Figure 5. The model used a 5 ft x 5 ft mesh size, resulting in 9,000 nodes and 8,000 elements.

The design of the lower exterior shear wall illustrated in Figure 5 is shown as an example below. The shear wall runs from the basemat at elev. 0.0 to the first floor at elev. 27.27 and is 206 ft long, extending across the entire building. Figure 6 illustrates the shell element mesh in the wall segment used in the example.

The shear wall has three shear panels—numbered 4, 5, and 6—formed by intersecting interior walls as shown above.

Figure 7 is a contour plot of the Syy shell forces in units of kips per foot of shell width. These are the absolute values of the maximum 3-D combined seismic plus static vertical shell forces in the wall. This is one example of the plots of the eight shell stress components that may be viewed at the option of the engineer.


Figure 4. Design of a Typical Cut-Section with No Openings (Parabolic Hognestad Concrete Stress Block)



These plots are for information only and are used to view the stress concentrations that tend to dominate the design of the reinforcing steel.

In-Plane Reinforcement Requirements

The entire wall was checked using OPTCON with element-grouping for limiting shear strength per Section 21.6.5.6 of ACI 349-01. The largest demand-capacity ratio for individual piers was 0.45, and the largest for piers sharing a common lateral force was 0.39.

Horizontal shear reinforcement was designed with OPTCON, using element-grouping to meet the provisions of ACI 349-01, Equations 21-7, 11-31, and 11-32. The controlling reinforcement designed was 0.86 sq in. per foot of shell width per face.

Figure 5. Example of Shear Wall Structure

Figure 6. Lower Exterior Shear Wall

Figure 7. Contour Plot of Syy Shell Membrane Forces

Shear-friction was checked at the bottom of the wall (basemat intersection) using element-grouping per paragraph 11.7 of ACI 349-01 but did not control.

Reinforcement Required Resulting From Out-of-Plane Loads on the Wall

OPTCON was used with element-based design using only the shell moments Mxx and Myy, amplified by Mxy, with the membrane forces Sxx and Syy set to zero (since they had already been considered in the in-plane design above). This resulted in the added reinforcement needed to resist the out-of-plane loadings on the wall.

RESULTS

The total reinforcement was obtained by combining the reinforcement required for in-plane loadings with the added reinforcement required to resist out-of-plane loadings and considering minimum code requirements. The proposed approach resulted in reinforcement requirements that were 77%–93% of the reinforcement determined using the two-step approach.

No out-of-plane reinforcements (stirrups) were required in the wall during the design.

CONCLUSIONS

Adequate tools are important for the design of complex structures for both commercial nuclear power plants and US Department of Energy facilities. The integrated design approach presented in this paper takes advantage of the time history phase relationship of the seismic forces and also optimizes the design to provide a balanced design. This design tool will accelerate the design process and, at the same time, will minimize the peer review process that has become a large part of such projects.

The example design shown above was accomplished in less than 3.5 hours using a high-end PC running the OPTCON Windows program.

Because the design process meets the ACI code requirements, it can be readily applied to complex projects.

REFERENCES

[1] J. Lysmer, F. Ostadan, and C. Chin, “SASSI2000 – A System for Analysis of Soil-Structure Interaction – Theoretical Manual,” Geotechnical Engineering Division, Civil Engineering Department, University of California, Berkeley, 1999, access via <http://203.96.131.85/SASSI2000/index_html>.

[2] SAP2000 User’s Manual, Computers and Structures, Inc., Berkeley, CA, 2005, access via <http://orders.csiberkeley.com/ProductDetails.asp?ProductCode=SAP2KDOC-3>.

[3] T. Kohli and O. Gürbüz, “Optimum Design of Reinforced Concrete for Nuclear Containments, Including Thermal Effects,” Proceedings of the Second ASCE Specialty Conference on Structural Design of Nuclear Plant Facilities, New Orleans, Louisiana, December 1975, pp. 1292–1319, access via <http://cedb.asce.org/cgi/WWWdisplay.cgi?7670213> and <http://cedb.asce.org/cgi/WWWdisplaybn.cgi?0872621723>.

BIOGRAPHIES

Thomas D. Kohli is a consulting engineer, retired from Bechtel after 25 years of service. He has more than 40 years of experience in the analysis and design of nuclear-related structures.

Thomas served on the Senior Structural Staff in the Bechtel Los Angeles Office and managed the Containment Specialty Group that performed front-end nuclear containment design in the 1970s. His specialty is modern Windows software engineering as applied to finite element analysis and optimized reinforced concrete design meeting the requirements of ACI 349 and ACI 359 Codes, directly related to time-history design due to combined static and seismic loading.

Thomas holds a BS in Civil and Structural Engineering from the University of Southern California, Los Angeles.

Orhan Gürbüz is a Bechtel Fellow and senior principal engineer with over 35 years of experience in structural and earthquake engineering. As a Fellow, he is an advisor to senior management on technology issues, and represents Bechtel in technical societies and at industry associations.


The original version of this paper was published in the Proceedings of the 8th U.S. National Conference on Earthquake Engineering (Paper No. 996), held April 18–22, 2006, in San Francisco, California, USA.


As a senior principal engineer, he provides support to various projects and performs design reviews and independent peer reviews. The scope of work includes development of design criteria, seismic evaluations, structural evaluations and investigations, technical review and approval of design, serving as Independent Peer Reviewer for special projects, investigation and resolution of design and construction issues, and supervision of special analyses.

Dr. Gürbüz is a member of the American Society of Civil Engineers’ Dynamic Analysis of Nuclear Structures Committee and the American Concrete Institute 349 Committee. These committees develop and update standards and codes used for the nuclear safety-related structures, systems, and components.

Dr. Gürbüz received a PhD and an MS in Structural Engineering, and a BS in Civil Engineering, all from Iowa State University, Ames, Iowa.

Farhang Ostadan, a Bechtel Fellow, has more than 25 years of experience in geotechnical and geotechnical earthquake engineering and foundation design. As chief soils engineer for Bechtel, he has overall responsibility for this discipline and manages the efforts of a large and diverse group of geotechnical specialists in locations across the US and around the globe. His project oversight responsibilities range from major transportation projects to petrochemical, nuclear, and power- and energy-related projects.

Dr. Ostadan has published more than 30 technical papers on topics relating to geotechnical earthquake engineering. He co-developed a method for dynamic soil-structure interaction analysis currently in use by the industry worldwide. Dr. Ostadan is a frequent lecturer at universities and research organizations.

Dr. Ostadan is currently a member of the American Society of Civil Engineers (ASCE), Geotechnical Division; the Earthquake Engineering Research Institute (EERI); and the National Earthquake Hazard Reduction Program (NEHRP) Foundation Committee, and is a past member of California’s Seismic Safety Commission.

Dr. Ostadan received a PhD in Civil Engineering from the University of California, Berkeley; an MS in Civil Engineering from the University of Michigan, Ann Arbor; and a BS in Civil Engineering from the University of Tehran, Iran.


STRUCTURAL INNOVATION AT THE HANFORD WASTE TREATMENT PLANT

John Power
[email protected]

Mark Braccia
[email protected]

Farhang Ostadan, PhD
[email protected]

Issue Date: December 2008

Abstract—Three innovative techniques have been used in meeting the structural challenges of designing and constructing the mammoth facilities constituting the first-of-a-kind Hanford Waste Treatment Plant. Addressing the areas of design production, nuclear safety assurance, and constructability, the first technique is a systematic approach to developing bounding loads considering dynamic response; the second is an advanced technique in soil-structure interaction (SSI) analysis, which provides confidence in the adequacy of the design; and the third approach is a construction tool that ensures the quality of concrete placement in highly congested areas.

Keywords—acceleration, bounding load, congestion, Hanford, high-level waste, high-slump concrete, incoherency, low-activity waste, nuclear waste, radioactive waste, response acceleration, seismic criteria, soil-structure interaction (SSI), structural innovation, structural steel

INTRODUCTION

Situated on the banks of the Columbia River in the shrub-steppe desert area of southeastern Washington state, the US Department of Energy’s (DOE’s) Hanford Site was where plutonium was produced for the Manhattan Project atomic bomb program that began in the early 1940s. Throughout the four-decade Cold War, the facility continued its national security mission to produce materials for nuclear weapons. The end of production at Hanford brought the task of cleanup and finding a long-term storage solution for the site’s legacy: 53 million gallons of highly radioactive waste stored on site in underground steel tanks.

In December 2000, the DOE Office of River Protection awarded Bechtel National, Inc., the Hanford Waste Treatment Plant (WTP) project to build the world’s largest such facility to solve the long-term storage problem. The $12 billion facility will separate the waste into high-level and low-level streams, which, when mixed with silica, heated to melting, and deposited in stainless-steel canisters, will cool and harden into a stable glass safe for long-term storage.

The low-activity waste will be stored on site over the long term, while the high-level waste is planned for shipment to Yucca Mountain in the Nevada desert.

As shown in Figure 1, WTP consists of three mammoth nuclear waste processing facilities—Pretreatment, Low-Activity Waste Vitrification, and High-Level Waste (HLW) Vitrification—each with reinforced concrete core structures surrounded by structural steel frames. A fourth facility, the large Analytical Laboratory, will contain a concrete “hot cell” where radioactive materials will be tested and categorized. The concrete cores will house process equipment, with support services located in the steel portion of the structures.

Figure 1. Hanford Waste Treatment Plant Facilities, Washington State


The four facilities are designed to meet a combination of commercial and nuclear codes and regulations, and two of them, the Pretreatment Facility (PTF) and the HLW facility, are designed to the same rigor as nuclear power generating facilities, including full dynamic analyses using site-specific seismic criteria.

The sheer size and scope of these facilities present first-of-a-kind challenges. Due to schedule constraints, WTP is a “close-coupled” project in which the design of the upper floors was under way during construction of the foundation. This approach posed an enormous challenge in developing reasonably conservative design loads in anticipation of structural design changes.

This paper describes three of the several innovative techniques used in designing and constructing the facilities, addressing the areas of design production, nuclear safety assurance, and constructability. The first technique is a systematic approach to developing bounding loads considering dynamic response; the second is an advanced technique in soil-structure interaction (SSI) analysis, which provides confidence in the adequacy of the design; and the third approach is a construction tool that ensures the quality of concrete placement in highly congested areas.

DETERMINING BOUNDING LOADS [1]

The first of these innovative techniques is the two-step method for determining design bounding loads.

Figure 2. SAP2000 Static Model for Steel Comparison (Steel Structure Mesh and Concrete Shell Element Mesh)

ABBREVIATIONS, ACRONYMS, AND TERMS

ACI     American Concrete Institute
ASCE    American Society of Civil Engineers
DOE     US Department of Energy
HLW     high-level waste
ISRS    in-structure response spectra
NRC     US Nuclear Regulatory Commission
psi     pounds per square inch
PTF     Pretreatment Facility
SASSI   System for Analysis of Soil-Structure Interaction
SSI     soil-structure interaction
WTP     Waste Treatment Plant
ZPA     zero period acceleration

Determining adequate bounding loads is critical when design is “close-coupled” with construction.


Current codes and DOE regulations require consideration of SSI and dynamic response effects in the analysis of major nuclear facilities, such as the PTF and HLW facilities. However, current SSI software does not efficiently combine seismic responses with other loads required for design. Compounding this problem is the enormity of the structures and the analytical models that depict them (e.g., the finite element model for these structures exceeds 100,000 elements).

In addressing the challenge, WTP is one of the major nuclear projects using a detailed finite-element model for seismic SSI analysis. The state-of-the-art methodology and computer program SASSI (System for Analysis of Soil-Structure Interaction) was chosen for the SSI analysis. SASSI uses a complex frequency-response technique to simulate the time-history seismic analysis. Three separate SSI analyses are performed in which the seismic input motion—in terms of acceleration time history at grade—is applied in the X, Y, and Z directions, respectively. A range of soil cases is considered in the analysis to account for the variability of soil properties, and the results are enveloped. Maximum acceleration profiles and in-structure response spectra (ISRS) are calculated taking into account the co-directional responses.
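As a simple illustration of the enveloping step (not project data), the sketch below takes placeholder spectra for three assumed soil cases and forms their envelope.

```python
# Illustrative sketch: envelope in-structure response spectra over soil cases.
import numpy as np

freqs = np.logspace(-1, 2, 200)   # 0.1 to 100 Hz
soil_cases = {
    "lower_bound":   0.30 + 0.25 * np.exp(-((freqs - 4.0) / 2.0) ** 2),
    "best_estimate": 0.35 + 0.30 * np.exp(-((freqs - 6.0) / 2.5) ** 2),
    "upper_bound":   0.32 + 0.28 * np.exp(-((freqs - 9.0) / 3.0) ** 2),
}

isrs_envelope = np.maximum.reduce(list(soil_cases.values()))   # frequency-by-frequency maximum
```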

A review of the transfer functions obtained from the SASSI analyses confirms that the response of the concrete structure and steel roof is adequately captured. Each steel member’s maximum forces and moments are computed for validation purposes. These steel forces are spatially combined using the component factor method (100/40/40), which assumes that when the maximum response from one component occurs, the responses from the other two components are 40% of their respective maximums. [2] Also for validation purposes, shear stresses are calculated in the concrete shear walls.
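A small sketch of the 100/40/40 combination for one member force, with placeholder directional maxima.

```python
# Illustrative sketch of the 100/40/40 component factor method: take 100% of
# the governing directional maximum plus 40% of the other two, enveloping
# over all orderings. The member force maxima below are placeholders.
from itertools import permutations

Rx, Ry, Rz = 120.0, 85.0, 40.0   # peak member force from X-, Y-, Z-excitation

combined = max(full + 0.4 * (a + b) for full, a, b in permutations((Rx, Ry, Rz), 3))
# Here combined = Rx + 0.4 * (Ry + Rz), since Rx is the largest component.
```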

In the second step, acceleration profiles derived from the first step are used, with an adjustment scale factor greater than or equal to 1.0, for the static analysis of the detailed finite-element model using the SAP2000 computer code. [3] Figure 2 shows the finite-element model of the building studied in the steel member comparison. The Y-axis is in the N-S direction, and the Z-axis is vertical upward. Figures 3 and 4 show the steel roof frame and identify the critical bracing members for which seismic force comparisons were made.

Figure 3. Plan View of Roof — Critical Bracing Members

Figure 4. Section Cut A-A


The structure is supported by appropriate linear elastic soil springs. All steel member forces are extracted for design, and forces are combined using the component factor method.

The key to the two-step method is the use of conservative acceleration profiles derived from the SASSI analysis results, adjusted to ensure that calculated seismic stresses are conservative, even at local high-stress concentration areas. Establishing conservative acceleration profiles requires a good understanding of the dynamic responses of the structure from the first-step SSI analysis.

One of the tools used to show the dynamic behavior of the structure is the acceleration “bubble plot.” Available in Microsoft® Excel®, a bubble plot is simply a two-dimensional coordinate plan view of one elevation of the structure with a bubble plotted at every node within that elevation. The size of the bubble is proportional to the magnitude of the maximum acceleration at that node. Nine bubble plots are normally generated at each elevation, representing the response accelerations in the X, Y, and Z directions due to seismic input. In Figure 5, the bubble plot shows the maximum nodal response accelerations in the X direction due to seismic input in the X direction at the elevation of the steel roof.

Based on the nine bubble plots, a mass-weighted average maximum acceleration is determined for each story; then, if required, the story is divided into regions with similar accelerations and a conservative acceleration is assigned to each region. (These adjusted acceleration profiles are applied to the static model in the second step of analysis.) The acceleration plots provide a clear picture of the dynamic response of the structure, floor by floor and wall by wall, so that timely decisions on design changes can be made to improve the seismic response. Such an iterative process would not have been possible using the commonly followed practice of developing seismic shear and moment diagrams, which also would not have enabled examination of the dynamic response of individual structural members.
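A minimal sketch of the mass-weighted averaging behind the adjusted acceleration profiles; the nodal masses, accelerations, and adjustment factor are placeholders.

```python
# Illustrative sketch: mass-weighted average of the maximum nodal
# accelerations for one story, then an assumed adjustment factor >= 1.0.
nodes = [
    # (nodal mass, maximum nodal acceleration in g) - placeholders
    (12.0, 0.55), (15.0, 0.62), (9.0, 0.48), (20.0, 0.71), (11.0, 0.59),
]

total_mass = sum(m for m, _ in nodes)
weighted_accel = sum(m * a for m, a in nodes) / total_mass
bounding_accel = 1.1 * weighted_accel   # example adjustment factor, assumed here
```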

ADVANCED SOIL-STRUCTURE INTERACTION ANALYSIS USING INCOHERENCY EFFECT [4]

The second innovation entails consideration of incoherency when determining seismic accelerations in the design of major nuclear facilities. The WTP facilities will contain and process highly radioactive materials, which, if released, would pose a serious threat. The facilities are just miles from the Columbia River, the largest North American river to flow into the Pacific Ocean. The public and regulators demand that appropriately conservative assumptions be made to bound the natural phenomena hazards the facility could experience. Earthquake is one of the primary design inputs, and although prescriptive techniques are available to define the potential seismic events, they require making assumptions that would directly influence the final spectra.

The two-step method provides reasonable bounding loads to allow design and construction to progress.


Figure 5. Roof Bubble Plot — X Acceleration (g) due to X-Direction Seismic Input


With this situation in mind, the WTP design team took on the task of showing that conventionally developed spectra are conservative because they do not account for the fact that not all seismic waves impact the structure in unison.

When determining the SSI, a fully coherent model causes the ground motion to transfer directly to the structure across the full range of frequencies. Introducing incoherency into the model reduces the structural response at high frequencies, thereby reducing the seismic forces used to qualify equipment. This approach can result in significant project savings by lowering the cost of major equipment.

The basic concept of incoherency is that high-frequency waves imposed on a large structure will not translate into the full building motion that would result in smaller structures. The best way to visualize this concept is to consider two vessels in an ocean: a 30-foot sailboat and an aircraft carrier. Four-foot swells would cause noticeable rocking and bouncing for the sailboat’s passengers, but passengers on the aircraft carrier would barely perceive the motion. The same concept applies to the process equipment in the primary facilities of a major nuclear facility.

While the details of the incoherency model are beyond the scope of this paper, Figure 6 demonstrates the resulting reduction in accelerations at high frequencies as compared to the results using a fully coherent model. The peak acceleration is approximately 11% lower, and the zero period acceleration (ZPA) has been reduced by 25%. These reductions will translate into significant savings when seismically qualifying equipment in the high-frequency range. Following this important development and its documentation, the US Nuclear Regulatory Commission (NRC) approved the incoherency model and its implementation in the SASSI program for SSI analysis. [5]

DEVELOPMENT OF CONCRETE MIX FOR HIGHLY CONGESTED AREAS

The third innovation is the use of high-slump concrete in congested concrete placements.

The plant process areas will be located in multistory reinforced concrete structures. In addition to the heavy reinforcement, surface plates with embedded studs, used throughout to support equipment and commodity attachments, result in highly congested forms (see Figures 7 and 8). The danger in this situation is voids, which could require expensive repairs.

The proposed solution was to use a high-slump concrete mix that could work through the congestion without leaving voids. It was specified as a 5,000-psi (pounds per square inch) mix with a maximum water-cement ratio of 0.45. Additionally, maximum aggregate size was limited to 3/4 inch to prevent bridging or binding during placement. The construction team also sought a mix with an 11-inch slump, which results in a 1-inch final slump cone height, just slightly higher than the aggregate size. Finally, as the mix had to meet the criteria established in ACI 349, Bechtel worked with the project’s concrete supplier to develop a mix that met the criteria.

The NRC has approved the use of incoherency in design of nuclear facilities.

Figure 6. Foundation Motion in the Horizontal Direction (5% Damping, Rigid Massless Square Foundation); Spectral Acceleration (g) vs. Frequency (Hz) for Coherent and Incoherent Motion


The concrete supplier designed two high-slump concrete mixtures. The F-6 high-slump mixture uses a 3/8 inch aggregate and the F-7 high-slump mixture uses both a 3/4 inch and a 3/8 inch aggregate. The high slump is achieved using Master Builders 1 Glenium® 3000NS high-range water-reducing admixture. As a precaution to prevent segregation and excessive bleed water, a viscosity-modifying admixture, Master Builders VMA 358, was introduced into the mix.

To prove that the mix was viable, testing demonstrated not only compressive strength but also that the mix would not segregate. Four-inch-thick wall mock-ups were placed and allowed to cure for 24 hours. Once the forms were stripped, the concrete was broken and cross sections examined. Even with 11-inch slump, the aggregate remained evenly distributed throughout the matrix. Strength tests were also positive.

CONCLUSIONS

Using the high-slump mix in highly congested areas, WTP has avoided and will continue to avoid costly repairs associated with voids, saving DOE and taxpayers significant sums of money.

TRADEMARKS

Glenium is a registered trademark of Construction Research & Technology GMBH.

Microsoft and Excel are registered trademarks of the Microsoft group of companies.

REFERENCES

[1] D. Watkins, O. Gurbuz, F. Ostadan, and T. Ma, “Two-Step Method of Seismic Analysis,” First European Conference on Earthquake Engineering and Seismology, Geneva, Switzerland, September 4–6, 2006, Paper Number 1230.

[2] ASCE 4-98, “Seismic Analysis of Safety-Related Nuclear Structures,” American Society of Civil Engineers, 2000, access via <https://www.asce.org/bookstore/book.cfm?book=3929>.

[3] SAP2000 User’s Manual, Computers and Structures, Inc., Berkeley, California, 2005, access via <http://orders.csiberkeley.com/ProductDetails.asp?ProductCode=SAP2KDOC-3>.

[4] F. Ostadan, N. Deng, and R. Kennedy, “Soil-Structure Interaction Analysis Including Ground Motion Incoherency Effects,” 18th International Conference on Structural Mechanics in Reactor Technology (SMiRT 18), Beijing, China, August 7–12, 2005, Paper Number SMiRT18-K04-7 <http://www.iasmirt.org/iasmirt-2/SMiRT18/K04_7.pdf>.

[5] Interim US NRC Staff Guidance (ISG) supplement to NUREG-0800, “Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants,” Section 3.7.1, Seismic Design Parameters, regarding the review of seismic design information submitted to support design certification (DC) and combined license (COL) applications, May 2008.

Use of high-slump concrete mixes allows placement of concrete in congested forms without costly repairs due to voids.

Figure 7. Rebar at Shield Window (Note Multiple Layers of Reinforcing)

Figure 8. Reinforcing Spacing

1 Master Builders is the brand name of a line of products manufactured by the Admixture Systems business of BASF Construction Chemicals, Cleveland, Ohio, a division of BASF Aktiengesellschaft, which is headquartered in Ludwigshafen, Germany.


BIOGRAPHIES

John Power, deputy discipline production manager, CSA, on the Waste Treatment Project in Richland, Washington, has 22 years of experience in structural and civil engineering. He assists in managing a staff of 100 structural and civil engineers and architects.

John has a Bachelor of Civil Engineering from the Georgia Institute of Technology, Atlanta, Georgia, and is a certified Six Sigma Black Belt.

Mark Braccia, discipline production manager, CSA, on the Waste Treatment Project in Richland, Washington, has 30 years of experience in structural and civil engineering. Along with managing a staff of 100 structural, civil, and architectural engineers, he has been serving as the structural technical interface with the Defense Nuclear Facilities Safety Board and the Department of Energy for the project.

Mark has a BS in Civil Engineering from the University of California, Berkeley.

Farhang Ostadan, a Bechtel Fellow, has more than 25 years of experience in geotechnical and geotechnical earthquake engineering and foundation design. As chief soils engineer for Bechtel, he has overall responsibility for this discipline and manages the efforts of a large and diverse group of geotechnical specialists in locations across the US and around the globe. His project oversight responsibilities range from major transportation projects to petrochemical, nuclear, and power- and energy-related projects.

Dr. Ostadan has published more than 30 technical papers on topics relating to geotechnical earthquake engineering. He co-developed a method for dynamic soil-structure interaction analysis currently in use by the industry worldwide. Dr. Ostadan is a frequent lecturer at universities and research organizations.

Dr. Ostadan is currently a member of the American Society of Civil Engineers (ASCE), Geotechnical Division; the Earthquake Engineering Research Institute (EERI); and the National Earthquake Hazard Reduction Program (NEHRP) Foundation Committee, and is a past member of California’s Seismic Safety Commission.

Dr. Ostadan received a PhD in Civil Engineering from the University of California, Berkeley; an MS in Civil Engineering from the University of Michigan, Ann Arbor; and a BS in Civil Engineering from the University of Tehran, Iran.


Bechtel Civil Technology Papers

33   Systems Engineering — The Reliable Method of Rail System Delivery
     Samuel Daw

43   Simulation-Aided Airport Terminal Design
     Michel A. Thomet, PhD, and Farzam Mostoufi

49   Safe Passage of Extreme Floods — A Hydrologic Perspective
     Samuel L. Hui; André Lejeune, PhD; and Vefa Yucel

Civil — Albanian Motorway: This 38-mile (61-km) four-lane highway in Albania, the central leg of a 106-mile (171-km) motorway traversing the country, is being constructed by a Bechtel-ENKA joint venture.


SYSTEMS ENGINEERING — THE RELIABLE METHOD OF RAIL SYSTEM DELIVERY

Samuel Daw
[email protected]

Issue Date: December 2008

Abstract—This paper discusses the common issues and problems many rail system delivery projects face and provides insight into the way in which a systems engineering approach to rail system delivery would address them.

Keywords—collaboration, integration, rail system, requirements, systems engineering

INTRODUCTION

Many rail system delivery projects suffer from similar, if not the same, issues and problems, which usually result in increased cost and/or schedule delays.

Within a highly commercial and competitive environment, rail projects are rarely able to deliver required long-term operational performance benefits while satisfying short-term project delivery objectives. In some cases, requirements related to a railway’s long-term operational performance are compromised to fulfill short-term project delivery objectives, and overall performance is adversely affected. When the impact of such a compromise is understood, significant efforts are made to ensure that the project delivers what is required, but these efforts increase the project’s cost and/or delay its schedule, to the detriment of the business case.

After a brief background discussion, this paper defines 10 key areas of a rail system delivery project in which issues and problems commonly occur. Then, in the next section, it describes how a systems engineering approach, especially when applied from the outset, provides a project team with a reliable method of managing a complex rail system delivery project effectively in a commercial and competitive environment.

BACKGROUND

Objectives of Rail System Delivery Projects

The primary objective for most, if not all, rail system delivery projects is to deliver the defined system within the financial targets and time constraints agreed upon between the customer and supplier, with minimal disruption to existing operations. The rail system’s definition should accurately reflect the needs of its users, customers, and operations and maintenance personnel, and specify requisite features and functions, operational safety and performance requirements, and other requirements related to support over its life cycle.

Market Forces

The financial authority to undertake a rail system delivery project is usually based on the strength of the related business case, one which demonstrates that the perceived benefits to be delivered by the project will exceed anticipated costs and a reasonable return will be made over a reasonable period. Once a case has been made and the financial authority has been granted, the next step is to select a suitable supplier (prime contractor), usually through a competitive bidding process, the rigor of which will be aligned to the scale and complexity of the project. The procurement agent will be seeking a supplier that can fully satisfy the technical and commercial requirements, usually at lowest cost.

The successful supplier will most likely be the one that develops confidence in its ability to deliver what is required within the financial targets and time constraints set by the business case. In some cases, the procurement agent will work with the preferred supplier to further reduce the cost and schedule associated with the project as a part of contract negotiation. In an environment where market forces prevail and similarly capable suppliers compete for the same work, cost and schedule can be driven down to unrealistic levels. It is not unusual for suppliers to accept very challenging and potentially


unrealistic financial targets and schedules to secure competitive contracts to supply railway systems and subsystems.

Unfortunately, overly zealous attempts to drive down cost and shorten schedules to safeguard the business case can actually end up threatening it. The need for a supplier to make a reasonable return for its efforts may be overlooked. The competitive bidding approach sometimes aims at short-term cost reductions with little or no understanding of the challenges and cost buildup of supply, making long-term results very costly. In contrast, a supplier development process would seek to better understand the cost buildup of supply, encourage openness and innovation, and identify improved and leaner ways of working as a means of reducing costs over the longer term.

“It’s unwise to pay too much, but it is worse to pay too little. When you pay too much, you lose a little money, that is all. When you pay too little, you sometimes lose everything, because the thing you bought was incapable of doing the thing it was bought to do. The common law of business balance prohibits paying a little and getting a lot—it can’t be done. If you deal with the lowest bidder, it is well to add something for the risk you run. And if you do that, you will have enough to pay for something better.”

—John Ruskin, English philosopher (1819–1900)

When the business case is cost and/or schedule sensitive, the project needs to be managed particularly carefully to ensure that a financial return is actually delivered. In this respect, the sponsor will carefully monitor the progress of the project as a means of ensuring the investment.

Economies of Scale

Product sales volumes in the railway industry are significantly lower than product sales volumes in many other industries and markets. In addition, many railway administrations operate their railways differently and apply variants of products in different applications. Hence, the market size for rail products can be relatively small.

Unfortunately, this can lead suppliers to focus on their home markets and develop specific products for specific railway administrations, reducing the size of the overall market through which development costs can be recovered—in effect, increasing project costs. A number of global suppliers have begun developing generic products that can be used by multiple railway administrations, which allows development costs to be recovered over a wider market. However, these generic products usually require some form of adaptation before they can be used by a particular administration. In common with product development risk, product adaptation can also represent risk to project delivery.

Many rail projects seek to reduce technology risks by applying only “proven” technologies. However, unless the equipment has been previously applied, operated, and maintained in a very similar specific application environment, each new specific application represents some application risk.

Progress Measurement

Normally, project progress is measured by the achievement of planned deliverables. In other words, the supplier must deliver documentation, construction materials, equipment, infrastructure, and other requirements on an agreed-upon schedule.

In many cases, measurements that quantify deliverables but do not take into account their quality (that is, whether or not they truly fulfill project requirements) create a false sense of progress. This can result in the project moving from one phase in its life cycle to the next prematurely, and, like building on a foundation of sand, it is almost certain to introduce difficulties during later phases of the project life cycle.


ABBREVIATIONS, ACRONYMS, AND TERMS

EN50126:1999 European Standard entitled “Railway Applications – The Specification and Demonstration of Reliability, Availability, Maintainability, and Safety (RAMS)”

EU European Union

QA/QC quality assurance/quality control

RIA Regulatory Impact Analysis

SDS system design specification

SRS system requirements specification

SSDS subsystem design specification

SSRS subsystem requirements specification



Collaboration

It is worth noting that complex projects are delivered through the collaboration of people and organizations. Most project organizations, however, are based on vertical structures and organized into the engineering disciplines and functions that are required to deliver the project. Unfortunately, this type of structure does not encourage collaboration among various disciplines and functions—or even among individuals within these groups—in pursuit of a common goal.

To the contrary, verticality actually encourages the engineering disciplines and functions to work in isolation from one another toward their own objectives. This problem can be further exacerbated on major projects where these groups are geographically separated. Organizational problems can lead to rework, schedule delays, and increased cost.

Operational Concept

For many rail system projects, insufficient consideration is given up front to the definition of the operational concept. The operational concept defines how a system is intended to operate within the application environment, taking into account how it interacts with and influences adjacent systems, and the roles of its operators and maintainers. The operational concept should also define how operations are to be recovered in the event of a failure or disturbance, and the provisions that are required to facilitate preventative and reactive maintenance activities.

In most cases, operational principles having to do with safety are clearly set out, and rightly so. But operational principles related to performance and availability may be neglected. Without a well understood and clearly defined operational concept, it is difficult to develop an accurate and complete set of system requirements and to convey those requirements between customer and supplier.

Crucially, a system's inability to support an effective operational concept is often discovered only during system validation, or, worse, even later.

Standards

In many cases, projects are required to demonstrate compliance not only with customer and/or contract requirements, but also with a range of related standards, such as legislative and industry requirements and local custom and practice.

The process of identifying relevant standards and eliminating those that are not relevant can be time consuming, as is the subsequent capture, apportionment, and tracking of requirements.

When the hierarchy of industry and legislative standards has been aligned, compliance with high-level requirements can be proven simply by demonstrating compliance with the low-level requirements, as they are derived from the high-level requirements.

However, in some cases the hierarchy of industry and legislative standards may not be aligned. Furthermore, many standards are based on the solutions already in use on the railway, and while it may be possible to extrapolate the underlying requirements for the existing solutions, it can be difficult to achieve agreement on these underlying requirements.

For the supplier of modern products and systems, this can significantly increase overheads associated with the management of standards and noncompliances, and, in some cases, it can lead to delays in the acceptance of new products when the underlying requirements are unclear.

Definition and Apportionment of Requirements

For many projects, system and subsystem requirements are poorly defined, if they are defined at all. While it is an objective of many projects to make use of existing products and systems in new applications, and rightly so, the purpose of tracking requirements is to ensure that customer, contract, legislative, and industry requirements are all satisfied. One of the main reasons for defining, apportioning, and tracking the requirements through design and verification is to demonstrate this compliance. As such, it is an important means of determining to what degree requirements can be fulfilled using standard products and systems, allowing potential gaps to be identified so that appropriate action can be taken.

In some cases, customer requirements are used as the basis of apportionment to the subsystems, leading to difficulties in subsystem delivery and systems integration. Customer requirements usually contain:

• Actual customer requirements
• System requirements
• Useful information
• Constraints

System requirements should define "what" is required of the system in order to fulfill the customer requirements and to deliver the operational concept.


Quite often, the definition of system requirements is missed altogether, and the project concentrates on the definition and fulfillment of subsystem requirements, in some cases defined around product specifications, rather than through the definition and top-down apportionment of system requirements.

All in all, the definition of “what” the project is required to deliver can be quite poor. While the system may demonstrably fulfill the requirements that have been defined, missing features and functions are normally identified at a very late stage of the project, usually during validation, with significant impacts on project delivery.

Project Life Cycle

Another common feature of rail projects appears to be the desire to "get on with the job," and move into detailed design and construction as soon as possible. Enthusiasm for progress sometimes causes a project to move from one phase to the next before it is ready to do so. Even when deliverables and content from a previous phase are found to be missing, and attempts to complete them are prioritized during the current phase, enthusiasm for progress may lead the project to advance prematurely yet again to the next phase. But in such cases, it is usually only a matter of time before the project arrives at a point where incorrect assumptions have been made and rework becomes the order of the day.

Project Processes and Procedures

A related project deficiency is the lack of suitable project processes, which should be rolled out from the outset to those who are required to implement them. A manual of project processes and procedures should be developed that documents the way in which members of the project team have agreed to collaborate and work together.

In some cases, the task of defining project processes and procedures is allocated to the quality assurance/quality control (QA/QC) function. While members of the QA/QC function are capable of writing project processes and procedures, these documents may not necessarily reflect the way in which the project team members intend to work together.

In other cases, there is a belief that the project team knows what it is doing because it has delivered similar projects before. While this may be true, it is almost certain that the project team has not delivered a rail system with the identical group of people, or with the identical conditions of this particular application.

Ironically, with experienced and competent individuals, project processes and procedures should be easily agreed on and defined; but it’s not the experienced and competent individuals who will need to refer to and rely on them most—it’s those who are less experienced and competent.

On many projects, processes and procedures are treated in isolation from one another and are not integrated, with the catchall statement “to be read in conjunction with ….”

If the way in which processes and procedures relate to one another is left open to interpretation, there is room for error. This has an impact on a project’s ability to deliver the system effectively. It can lead to an alarming audit trail and further burden the project with corrective actions, some of which may be valid, but many of which are not, distracting the project’s attention from the delivery of the system.

Project Schedule

Another common weakness in many projects is the way in which the project schedule is developed and managed. In many cases, each project discipline and function defines its own sub-schedule, and attempts are then made to join the sub-schedules together at the master schedule level.

Due to the complexity of integrating the sub-schedules, as a compromise they are sometimes rolled up a level and integrated into the master schedule at a higher level only. Obviously, this means that any necessary integration at the detailed level can be missed, resulting in inputs that are not always made available when required, and information that is not exchanged as needed.

Systems Integration

In many projects, systems integration is perceived as the stage in the project where the black boxes are connected together and where validated subsystems are integrated with one another to form the system.

Systems integration actually takes place when the scope of the subsystems is defined, i.e., when system requirements are apportioned to the subsystems. At this stage, the system has essentially been disintegrated in such a way that parts of the system can be delivered in a manageable way and effectively integrated with one another at a later stage. Hence, the system is designed for integration.



Due to this misconception, systems integration is not always properly considered during early project phases. Also, a systematic approach to systems integration is sometimes lacking, and projects fail to identify the integration risks they face at an early enough stage. As a result, they fail to take positive action to eliminate integration risks at an early stage of the project.

Competence Management

One issue that is faced on almost every project is the effective management of competence. Staff are sometimes appointed to roles based upon their capability rather than their competence. A capable person is someone who can recognize the competencies required to undertake a role, develop those competencies, and then undertake the role.

Hence, if we appoint capable individuals, they will get the job done, but much of their time in the early stages will be spent learning rather than doing. If this learning period is not recognized and carefully managed, perhaps through mentoring, it can lead to inappropriate decisions and inappropriate direction for the project during early stages, decisions and direction that must be maintained to save face.

Role of the System Authority

In many systems projects, the need to establish a system authority from the outset is not identified. In complex, multidisciplinary projects, it is unusual to be able to identify a single person who understands all of the technical issues and challenges faced by the project and who is able to take all of the significant decisions and set the project direction in the interests of the project.

Normally, a system authority would be constituted, usually someone with diverse expertise, to provide strong direction, make good decisions, and manage and coordinate the activities of the subsystem delivery teams. Without a system authority, it is possible that a suboptimal solution will be delivered that will adversely impact initial operations, at least until problems and shortcomings are resolved.

SYSTEMS ENGINEERING APPROACH

Collaboration

A systems engineering approach clearly recognizes that projects are delivered through the collaboration of people and organizations. For example, the approach toward the definition and integration of project processes and procedures, as described in later sections of this paper, encourages collaboration from the outset, with project staff working together to agree on the way in which they will deliver the project.

The organization of the project, taking into account the various constituent engineering disciplines and functions and their geographical locations, is a key element of the project design as the vehicle to deliver the system. While a vertical organizational structure is valuable from the outset, allowing like-minded individuals to work together as they develop and refine their thinking, it can constrain the overall collaboration. Groups in such organizations often end up at cross purposes when issues, challenges, and problems arise.

In some instances, changing from a vertical organizational structure (based on various engineering disciplines and functions) to a horizontal organizational structure with multi-disciplinary and multifunctional teams (based on tasks to be undertaken and deliverables to be produced) can greatly improve collaboration. However, the timing of the change is important and, as with any organizational change, requires sensitive management.

Operational Concept

The systems engineering approach embraces defining the operational concept from the outset, and uses modeling to check and confirm understanding with all affected stakeholders, and to demonstrate the operational benefits to them, when possible, to secure their buy-in.

By using modeling, as appropriate, the operational concept can be validated from the outset, providing a graphical definition of the way in which the system is to operate, including the various modes of operation and human-system collaboration. This approach also fosters a common understanding between customer and supplier at an early stage.

Baselines of Standards and Stakeholders

A systems engineering approach establishes clear baselines of relevant standards and key stakeholders from the outset, including a record of decisions and justifications relating to the selection of standards and stakeholder input.

The baselines are subject to rigorous change control, such that the implications of any changes in inputs arising through changes in either related standards or related stakeholder input can be readily identified and considered prior to acceptance or implementation.



Definition and Apportionment of Requirements

Systems engineering takes a systematic approach to the development and definition of customer requirements, and to defining acceptance criteria for each requirement—i.e., what the project is required to do to demonstrate that customer requirements have been fully satisfied.

Similarly, a systematic approach is taken toward defining system requirements and the associated verification criteria—i.e., what the project is required to do to verify that system requirements have been fully satisfied.

System requirements and the design outline are then refined through analyses and assessments from different perspectives, such as operability, safety, performance, human-factors integration, constructability, maintainability, etc., in an iterative manner. Also, system requirements are apportioned to the subsystems according to the outline system design.

Importantly, the systems engineering approach aims to determine the minimal set of system and subsystem requirements necessary for full coverage. Getting that balance right from the beginning is essential, because establishing too many requirements will overburden the project, while establishing too few can lead to deficiencies in the system’s design and operability.
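To make the apportionment and tracing described above concrete, the following sketch (illustrative only, not a Bechtel tool) shows one minimal way to record system and subsystem requirements with their verification criteria and parent links, and to flag system requirements that have not been apportioned or subsystem requirements that do not trace back to the SRS. The identifiers and requirement texts are hypothetical.

# Minimal sketch (not the paper's tooling): a flat requirements register that
# links each subsystem requirement (SSRS) back to the system requirements (SRS)
# it was apportioned from, plus a verification criterion for each entry.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                  # e.g., "SRS-001" or "SSRS-SIG-001" (hypothetical IDs)
    text: str
    verification: str            # what must be done to show the requirement is met
    parents: list = field(default_factory=list)   # IDs this requirement is derived from

def unapportioned(system_reqs, subsystem_reqs):
    """System requirements that no subsystem requirement traces back to."""
    covered = {p for r in subsystem_reqs for p in r.parents}
    return [r.req_id for r in system_reqs if r.req_id not in covered]

def orphans(system_reqs, subsystem_reqs):
    """Subsystem requirements whose parents are not defined system requirements."""
    known = {r.req_id for r in system_reqs}
    return [r.req_id for r in subsystem_reqs if not set(r.parents) <= known]

srs = [Requirement("SRS-001", "Headway of 120 s in normal operation", "Timetable simulation")]
ssrs = [Requirement("SSRS-SIG-001", "Signalling supports 100 s design headway",
                    "Signalling headway calculation", parents=["SRS-001"])]
print(unapportioned(srs, ssrs), orphans(srs, ssrs))   # [] [] when coverage is complete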

Figures 1 and 2 provide illustrations of the iterative approach to the refinement of system requirements and system design, to the apportionment of system requirements to subsystem requirements, and to the refinement of subsystem requirements and design. They also show how the system design is updated to reflect each decision made during the detailed design of the subsystems.

Project Life Cycle

A systems engineering approach defines an appropriate project life cycle model at the outset, one that takes into account the profile of technical and commercial risks over the project life cycle. The life cycle may be based on an industry standard life cycle, e.g., EN50126:1999, but it is tailored to ensure its applicability to the project, taking into account the nature and context of the system to be delivered, its constituent subsystems, and their relationships to the system and to one another. By this means, the system management task is clearly defined.

Objectives, inputs, requirements, outputs, and phase verification criteria are clearly defined for each phase of the project life cycle, and it is only possible to move from one phase to the next when all requirements of the current phase have been demonstrably fulfilled.

Figure 3 provides an illustration of a project life cycle for a typical railway system delivery project, based on EN50126:1999 and organized as a “V” representation.

Project Processes and Procedures

A systems engineering approach seeks to define and harmonize the processes and procedures to be implemented by the project through a collaboration of the personnel who are required to implement them—effectively defining "how" they intend to work together to deliver the project, and encouraging a collaborative approach from the outset. Inputs, tasks, and outputs are clearly identified for each process, and processes are integrated with one another to ensure that all inputs can be satisfied and that owners of shared information take into account the needs of users.

The processes may be integrated through the use of a matrix, providing visibility of owners and users of shared information. This approach allows interactions between the various processes to be clearly identified, and aims to establish a dialogue between owners and users as to the format and content of information to be shared and exchanged. Hence, it encourages collaboration among the various engineering disciplines and functions within the project.
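As a simple illustration of such a matrix (the structure below is assumed for this sketch; the process and information names are hypothetical), shared information items can be recorded against their owning process and their using processes, and inputs without an identified owner can be flagged:

# Minimal sketch (illustrative only): a process-integration "matrix" recording,
# for each shared information item, the process that owns it and the processes
# that use it as an input. A simple check flags inputs with no identified owner.
shared_info = {
    # item                     (owner process,         user processes)  -- hypothetical names
    "design_day_timetable":   ("Operations Planning", ["Signalling Design", "Power Simulation"]),
    "traction_power_demand":  ("Power Simulation",    ["Substation Design"]),
}

required_inputs = {
    "Signalling Design": ["design_day_timetable"],
    "Substation Design": ["traction_power_demand", "track_alignment"],   # no owner defined yet
}

owned = set(shared_info)
for process, inputs in required_inputs.items():
    missing = [i for i in inputs if i not in owned]
    if missing:
        print(f"{process}: no identified owner for {missing}")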

Figure 1. Iterative Refinement of Requirements and Design (system requirements and system design are refined through analyses and assessments; system requirements are apportioned to subsystem requirements; subsystem designs are optimized and verified against them)



Figure 2. Apportionment of System Requirements (requirements identification, analysis, and classification from contractual, legal, RIA-stated, EU and national standards, stakeholder, and other sources; apportionment from the SRS and SDS to the subsystem SSRSs and SSDSs; requirements tracing and fulfillment)

Figure 3. Typical Project Life Cycle ("V" representation based on EN50126:1999: 1. Concept; 2. System Definition and Application Conditions; 3. Risk Analysis; 4. System Requirements; 5. Apportionment of System Requirements; 6.1 Subsystem Outline Design; 6.2 Subsystem Detailed Design; 6.3 Subsystem Installation Design; 7. Manufacturing; 8. Installation; 9. System Validation; 10. System Acceptance; 11. Operation and Maintenance; with verification at each level, under program management, contractual mechanisms, and the system authority)




Project Schedule

Following a systems engineering approach, the project schedule is based on the project life cycle and processes, and includes detailed information relating to task ownership and task durations, etc. By this means, the schedule reinforces project processes and procedures and vice versa.

The schedule is carefully checked to ensure that all inputs will be made available as they are needed, and that all outputs required of the project will be delivered. If a required input is not clearly identified in the schedule, it is doubtful that it will somehow materialize on time. Therefore, it is crucial that all inputs and outputs are included in the schedule.

Systems Integration

A systems engineering approach will aim to ensure that the system is specifically designed to enable integration, recognizing that system integration actually takes place during the apportionment of system requirements to subsystem requirements.

Integration risks are clearly identified and ranked at an early stage of the project life cycle, and specific activities are defined to mitigate potential risks at the earliest opportunity, making use of modeling, simulation, and testing as appropriate. As with all other project-related activities, it is essential that systems integration risk identification and mitigation activities be included in the project schedule.

Competence Management

A systems engineering approach seeks to identify the competencies that are required for each of the roles to be undertaken within the project. The aim is to employ both competent (with significant relevant experience) and capable individuals (with the ability to become competent with experience), and ensure that early decisions are made by those who are competent to make them while capable individuals are being developed and mentored in their project roles.

The mix of personal characteristics required to contribute to project definition, development, and delivery requires careful consideration. Most projects attempt to retain technically competent staff from start to finish, mainly for consistency and familiarity reasons. However, this may not be in the best interests of the staff or project for a number of reasons. During the project's initial phases, personnel who are "shapers" are needed to conceive and establish the project structure, as the vehicle for delivering the system, although they must be kept within the bounds of reality by well-grounded individuals. As the project progresses, "completer finishers" are needed to focus on delivery. Getting the balance of personality characteristics wrong in personnel who are assigned to each phase can adversely impact a supplier's project delivery performance.

Role of the System Authority

Depending on the nature and complexity of the system to be delivered, a systems engineering approach will implement a properly constituted system authority, whose role and responsibilities will be clearly defined, with decision-making authority to provide:

• Guidance and direction, based on highly relevant, broad experience

• System management, including oversight and coordination of subsystem delivery projects, interfaces, systems integration, and change management

• Long-term thinking

CONCLUSIONS

Systems engineering is an effective means of addressing many of the problems that we predictably experience in rail systems delivery projects. It ensures that customer requirements are actually delivered by the supplier in the most effective and efficient manner.

The systems engineering approach was deliberately conceived to ensure that the long-term objectives of a project are fulfilled in an ever-changing external environment by the most efficient route possible, focusing the minds of customers and their suppliers on a common goal based on a common understanding.

Ideally, systems engineering starts at the outset of project definition and continues as the facilitator for effective rail system delivery throughout the project life cycle. Although systems engineering is unable to prevent false expectations from being agreed upon at the outset, it represents the most reliable method available for successful rail system delivery.

Unfortunately, it seems that systems engineering is not well understood in the railway industry, and its half-hearted implementation on many projects has resulted in easily preventable project delivery difficulties.



Managers of rail system delivery projects often do not recognize the need to adopt a systems engineering approach until their projects experience difficulties. Fortunately, even when systems engineering is not applied until the middle stage of a project, it can be used to minimize the impact of difficulties on the outcome—although in some cases the approach may be applied too late to fully recover and satisfy all of the project objectives.

To reap its maximum benefits, a systems engineering approach should always be implemented at the start of a rail systems delivery project and applied throughout its life cycle, until successful system completion and turnover is accomplished.

BIOGRAPHY

Samuel Daw's 24 years of experience in the railway industry includes 2 years with Lloyd's Register Rail Limited as head of systems integration, 4 years with Siemens Transportation Systems as principal engineer for products and systems engineering, and 3 years with Bechtel Civil as a rail systems engineer. Sam began his career as an electronic technician apprentice with ABB Signal Limited, in Plymouth, England, where he advanced to the position of electronic design engineer before joining Adtranz Limited (ABB Daimler-Benz Transportation/Signal), in Plymouth, as product manager.

Sam’s extensive technical experience in rail systems covers systems integration and systems integration management, including operational concept definition, modeling, and validation; requirements engineering; project life cycle definition and integrated process definition and implementation; and system architecture and interface control.

Sam is a chartered engineer and registered European engineer. He is also a member of the Institution of Engineering and Technology (IET), the Institution of Railway Signalling Engineers (IRSE), the Institute of Electrical and Electronics Engineers (IEEE), and the International Council on Systems Engineering (INCOSE).

Sam earned a Diploma in Management Studies with distinction and a Certificate in Management from the Plymouth Business School, University of Plymouth, Drake Circus, Plymouth, England. He also holds a BE in Electrical and Electronic Engineering with honors from Polytechnic South West, Drake Circus, Plymouth, England.


SIMULATION-AIDED AIRPORT TERMINAL DESIGN

Michel A. Thomet, PhD
[email protected]

Farzam Mostoufi
[email protected]

Issue Date: December 2008

Abstract—This paper presents the application of simulation techniques to the design of a new passenger terminal at Curaçao International Airport. The purpose of the simulation was to confirm that the design would meet or exceed International Air Transport Association (IATA) level of service C (LOS C) planning standards during peak activity periods of the design day. The simulation model is a dynamic, object-oriented passenger movement analysis tool. The model is driven by a realistic flight schedule developed for a 24-hour design day, thereby providing passenger volumes and flows that reflect the arrival and departure of aircraft and passengers over the course of an entire day.

Keywords—airport terminal, design, level of service, passengers, planning, simulation

INTRODUCTION

The new terminal at Curaçao International Airport (see Figure 1) began operating in July 2006. When it opened, the terminal was capable of handling 1.6 million passengers annually, although that traffic level is not expected to occur before July 2011. Its ultimate capacity will be 2.5 million passengers per year. The airport is a terminal for Caribbean Basin traffic serving mainly European (primarily Dutch) and US tourists (via Miami), and a small business segment. There is also a very small number of transfer passengers to and from other islands in the Netherlands Antilles region, including Aruba, Bonaire, and St. Maarten.

When the new terminal was being designed, 18 airlines were expected to serve the airport, including three U.S.-based companies, American, Continental, and Delta. One airline, Dutch Caribbean Express, was expected to carry almost half of all passengers and connect Curaçao to the main Caribbean islands of Jamaica, Haiti, Santo Domingo, Trinidad and Tobago, and the other islands in the Netherlands Antilles region, including Aruba, Bonaire, and St. Maarten; and to cities in nearby Venezuela, including Caracas, Valencia, and Maracaibo. The airline would also offer long-range flights to Miami and Amsterdam. Other airlines would serve several South American countries, including Venezuela, Colombia, Surinam, and Peru, and Central American countries, including Costa Rica. Flights to Cuba and Puerto Rico would also be available from Curaçao. The projected activity meant that Curaçao International Airport was poised to become a flexible and convenient hub for the Caribbean Basin.

Figure 1. Architectural Rendering of the New Curaçao International Airport Terminal and Concourses at Opening



Equipment used to serve this market varies from long-range, E-size aircraft with 400 seats, such as the B-747, to small, twin turbo prop aircraft with 48 seats, such as the de Havilland Dash 8. Each day, three B-747 flights from Europe will arrive in Curaçao from London, Madrid, and Amsterdam. To serve these flights, the apron was designed with 12 positions, 5 of which have access to the terminal via passenger loading bridges. There are three positions for B-747s, three for B-767s, and six for Dash 8s.

The airport has a curfew at night from approximately 11:00 p.m. to 6:00 a.m. When flights resume, activity builds to a midday peak, when nine aircraft arrive during one hour, representing 15 percent of daily aircraft arrivals. A second peak is reached at a later hour, when nine aircraft depart. During these two hours, passenger peaks are also reached, with 920 passenger arrivals and 940 departures (more than 18 percent of daily passengers).

The most challenging conditions in the terminal occur during a peak hour when two B-747s arrive at the same time. The terminal is designed with sufficient facilities and public space so that the level of service (LOS) during this peak will not drop below IATA LOS C standards.

Because of the high level of concentrated activity at such a compact airport, it was more difficult to apply the planning methodologies advocated by IATA and the FAA. Therefore, it was decided that, in addition to the IATA and FAA methodologies, a passenger simulation model would be used for the design. The simulation model makes it possible to quantify the LOS in each area of the terminal and for each type of airport patron, given a specific terminal size and layout and a specific scenario of arriving and departing flights during a 24-hour day (the design day).

PASSENGER SIMULATION MODEL—TERMSIM

TERMSIM, Bechtel's proprietary simulation package, enables the airport planner to quantify the level of service experienced by passengers going through a specific terminal layout, for a traffic level based on a forecasted design day flight schedule.

The simulation model is driven by the design day flight schedule, in which each flight has a time of arrival or departure, consists of a specific aircraft type, is assigned to a specific gate or remote stand, and belongs to a specific airline with a unique flight number.

In addition, the number of originating or terminating passengers and the number of transferring passengers in each flight are based on projected load factors. These passengers are further divided into first, business, and economy classes.

For each departing flight, profiles of originating passengers are generated in the model at a time determined by their scheduled departure time and an earliness-of-arrival distribution. Once created by this process, each originating passenger has the following attributes:

• Travel class (first, business, or economy)
• Time and location of arrival at the airport
• Ground transportation mode used
• Airline and flight number
• Departure boarding lounge

In addition, each originating passenger is assigned the following attributes:

• Number of people travelling together in a party (party size distribution)

• Number of checked bags per passenger based on a bag distribution range, as well as a probability distribution that a percentage of these bags are oversized or require special handling (e.g., animals)

• Number of carry-on bags per passenger sampled from a distribution range

• Number of well-wishers per party sampled from a well-wisher distribution range

• Special needs passenger assumptions to account for wheelchairs or electric carts

ABBREVIATIONS, ACRONYMS, AND TERMS

AOCI Airports Operations Council International

ASCE American Society of Civil Engineers

ASME American Society of Mechanical Engineers

CAD computer-aided design

CADD computer-aided design and drafting

FAA Federal Aviation Administration

IATA International Air Transport Association

IEEE Institute of Electrical and Electronics Engineers

LOS level of service

LOS C [IATA] level of service C [standards]

SCS Society for Modeling and Simulation International

T&DI Transportation and Development Institute [of ASCE]




A similar process is used to generate terminating and transferring passenger profiles in arriving flights at their specific gate or hardstand, with appropriate attributes assigned as the passengers exit the aircraft and move into the terminal or onto a bus.
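TERMSIM itself is proprietary; the following simplified sketch only illustrates the kind of generation step described above, creating originating-passenger parties for a single departing flight from an assumed load factor, an assumed earliness-of-arrival distribution, and assumed attribute distributions. None of the numerical values are the Curaçao design inputs.

# Simplified illustration (not TERMSIM itself): generate originating-passenger
# parties for one departing flight. The distributions and parameter values are
# assumptions chosen for the example, not the project's calibrated inputs.
import random

def generate_parties(flight, load_factor=0.8, mean_party=1.8, seed=1):
    rng = random.Random(seed)
    passengers = int(flight["seats"] * load_factor)
    parties, remaining = [], passengers
    while remaining > 0:
        size = min(remaining, int(rng.expovariate(1.0 / mean_party)) + 1)
        earliness_min = rng.triangular(45, 180, 105)        # arrive 45-180 min before departure
        parties.append({
            "flight": flight["number"],
            "gate": flight["gate"],
            "size": size,
            "class": rng.choices(["first", "business", "economy"], [0.03, 0.12, 0.85])[0],
            "checked_bags": sum(rng.choice([0, 1, 1, 2]) for _ in range(size)),
            "arrival_time_min": flight["departure_min"] - earliness_min,
        })
        remaining -= size
    return parties

flight = {"number": "DCE123", "seats": 400, "gate": "3", "departure_min": 14 * 60}  # hypothetical
print(len(generate_parties(flight)), "parties generated")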

The passengers thus generated move within the terminal and concourses from one area to another according to their attributes. Each area or processing station has a location in the model specified by x, y, and z coordinates that are tied to a scaled CADD drawing of the terminal, such as the one in Figure 2. The distance between two areas is calculated as the most direct distance along a travel route. Or, when a straight line between two areas is not physically possible, intermediate points are defined through which the passengers must pass.

Walking time between areas is computed by giving each party a walking speed, sampled from a distribution between a minimum and a maximum. Two such speed distributions are used: one for passengers with special needs and one for all other passengers. These speeds are reduced when the occupancy of the area that the passengers traverse rises above a given threshold (crowding effect). Randomization of walking speeds is used to reflect the reality of people moving in a terminal.
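A minimal sketch of this walking-time calculation is shown below. The speed ranges, crowding threshold, and slowdown factor are assumed values for illustration, not the calibrated model parameters.

# Sketch of the walking-time idea described above (assumed parameter values):
# each party draws a walking speed from a range, and the speed is reduced when
# the area being traversed is more crowded than a threshold density.
import random

def walk_time_s(distance_m, occupants, area_m2, special_needs=False, rng=random.Random()):
    lo, hi = (0.6, 1.0) if special_needs else (1.0, 1.6)   # m/s, assumed ranges
    speed = rng.uniform(lo, hi)
    density = occupants / area_m2                          # passengers per m2
    if density > 0.5:                                      # assumed crowding threshold
        speed *= max(0.4, 1.0 - 0.5 * (density - 0.5))     # slow down, but not below 40%
    return distance_m / speed

print(round(walk_time_s(120, occupants=90, area_m2=100), 1), "seconds")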

The model simulates real-time behaviors. For example, while moving through the terminal, passengers have the option of walking, using a moving walkway, or boarding the automated people-mover. When changing levels, they can use escalators, elevators, or stairs. On the escalators and moving walkways, some passengers will stand while some will walk, adding their speed to that of the escalator or walkway.

When passengers arrive at a processing area, they join a queue. Queues can be universal (a single queue serving several identical processes or checkpoints) or individual (one queue per process or checkpoint).

Figure 2. Curaçao Airport Terminal Passenger Flows (first and second levels, showing arriving, departing, and escorted disabled passenger routes through curbsides, check-in and document check, security, passport control, immigration, baggage claim, customs, hold rooms, gates 1–5, and the remote stand bus connections)


When passengers reach the head of the queue, they are processed. The processing time is a value sampled from a distribution range specific to each facility and passenger attribute.

As passengers move through successive processing checkpoints or areas in the terminal and concourse, their movement is followed by the model and statistics are generated. The occupancy of each area (circulation or queuing) is tracked during the 24-hour simulation period. At each processing point, the flow of passengers and the queue length are tracked during the day, and these data points are, in turn, used to assess the different LOS metrics.
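The following sketch illustrates, in simplified form, how a universal queue feeding several identical counters can be simulated and how wait-time statistics of the kind described above can be collected. The arrival pattern, service-time distribution, and number of counters are assumptions for the example, not TERMSIM internals.

# Minimal sketch of a universal queue feeding several identical counters
# (illustrative only; processing-time values are assumptions). Passengers are
# taken in order of arrival; each is served by the next counter to become free.
import heapq, random

def serve_universal_queue(arrival_times_min, n_counters, rng=random.Random(7)):
    free_at = [0.0] * n_counters                     # time each counter becomes free
    heapq.heapify(free_at)
    waits = []
    for arrival in sorted(arrival_times_min):
        counter_free = heapq.heappop(free_at)
        start = max(arrival, counter_free)
        service = rng.triangular(1.0, 4.0, 2.0)      # triangular(low, high, mode), minutes
        waits.append(start - arrival)
        heapq.heappush(free_at, start + service)
    waits.sort()
    return {
        "served": len(waits),
        "no_wait_pct": 100.0 * sum(w == 0 for w in waits) / len(waits),
        "mean_wait": sum(waits) / len(waits),
        "p90_wait": waits[int(0.9 * (len(waits) - 1))],
        "max_wait": waits[-1],
    }

arrivals = [random.uniform(0, 60) for _ in range(300)]   # 300 passengers over one hour
print(serve_universal_queue(arrivals, n_counters=8))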

The model collects all of these output variables on a minute-by-minute basis. This enables the planners and architects to design facilities, public spaces, and corridor widths for the peak traffic activity of the design day.

INPUT ASSUMPTIONS

Input assumptions fall into four categories:

• Description of terminal spaces and processing facilities. This is summarized in the CAD drawings of each floor plan of the terminal building and of each concourse. On each drawing, the flow paths of originating, terminating, and transferring passengers are shown, as well as the paths of the electric carts.

• Flowchart of passenger circulation and processing. For each category of passengers (originating, terminating, and transferring), a flowchart describes all the facilities the passengers have to visit, and the order in which the facilities are traversed, from the time passengers arrive at the airport until they leave the airport.

• Functional description of each processing facility and subfacility. For each category of passenger, a detailed table summarizes the facility parameters such as the percent of passengers using it, together with the processing time distribution (maximum, minimum, and average). Some facilities have no associated processing time, but passengers wait a specific length of time (e.g., well-wisher leaving point in departure hall) or wait for a specific event (e.g., boarding call in the departure lounges).

• Minimum acceptable LOS in each facility and subfacility. In addition to the IATA levels of service, which are based on the area available per passenger, a maximum dwell time not to be exceeded is specified (queuing time plus processing time). Likewise, a 90th-percentile dwell time is also specified. This means that 90 percent of the passengers processed at that facility should have a dwell time shorter than or at most equal to that criterion (see the sketch following this list).
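A sketch of how such facility parameters and LOS criteria might be tabulated as model input is shown below; all names and values are hypothetical placeholders rather than the Curaçao assumptions.

# Sketch of how the facility parameters and LOS criteria described above might
# be tabulated for the model (all values below are hypothetical, not the
# Curaçao design inputs).
facilities = {
    "economy_check_in": {
        "pct_of_departing_pax": 85,
        "processing_time_s": {"min": 60, "avg": 120, "max": 240},
        "los_c_area_m2_per_pax": 1.4,        # queuing-area standard used for the LOS check
        "max_dwell_min": 20,                 # queuing plus processing
        "p90_dwell_min": 10,
    },
    "passport_control": {
        "pct_of_departing_pax": 100,
        "processing_time_s": {"min": 15, "avg": 30, "max": 90},
        "los_c_area_m2_per_pax": 1.2,
        "max_dwell_min": 15,
        "p90_dwell_min": 8,
    },
}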

SIMULATION OUTPUTS

The results of the simulation are summarized in five categories:

• Determination of facilities requirements. As the simulation run progresses, the model automatically adds facilities to maintain the desired LOS as demand for a facility grows. For instance, at the economy check-in, when the waiting time for 90 percent of the users exceeds 10 minutes, a new counter is opened.

• Performance of each processing station. The number of passengers processed during the design day is summarized in a comprehensive table for each processing station, together with the percentage of passengers that did not have to wait in a queue, the mean wait time and maximum wait time for all passengers, as well as the maximum queue length.

• Queuing areas. Groups of processing stations are generally fed from a single queuing space (universal queue). For instance, multiple economy check-in counters are fed from such a single, universal queue. The LOS in the queuing area is determined by the number of passengers and the size of the queuing area, using the IATA criteria. For each queuing area there is a graph showing the number of passengers in the queue every 10 minutes, together with lines showing the boundaries between LOS as shown in Figure 3.

• Clearance times. At the end of the simulation, each passenger from each category “remembers” the time spent in the airport, between the time of arrival or departure at a gate and the time of entrance or exit from a ground access mode. These clearance times are then ranked from the shortest to the longest and displayed in separate histograms for terminating passengers, originating passengers, and transferring passengers.

• Space occupancy. For boarding gate lounges and public lobbies where people are waiting in a given space, an occupancy graph is given, as well as a graph of the corresponding LOS, based on IATA criteria, as shown in Figure 4. For corridors, where people are walking, similar graphs are given, based on people per minute walking past a cross-section. The LOS is calculated by cordoning off a section of the corridor with virtual doors and counting how many people are in this section every 10 minutes. The number of people in that section is found by adding one every time a person passes the virtual entry door and subtracting one whenever a person passes the virtual exit door (see the sketch following this list).


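The virtual-door counting rule lends itself to a very small sketch: occupancy of the cordoned section is incremented on each entry-door crossing, decremented on each exit-door crossing, and sampled every 10 minutes; the area per passenger is then compared against LOS thresholds. The threshold values below are assumed for illustration only; the paper's LOS assessments use the IATA criteria.

# Sketch of the virtual-door counting rule described above (illustrative only):
# occupancy of a cordoned corridor section is +1 on each entry-door crossing and
# -1 on each exit-door crossing, sampled every 10 minutes for LOS banding.
def section_occupancy(entry_times_min, exit_times_min, day_min=24 * 60, step=10):
    events = [(t, +1) for t in entry_times_min] + [(t, -1) for t in exit_times_min]
    events.sort()
    samples, count, i = [], 0, 0
    for t in range(0, day_min + 1, step):
        while i < len(events) and events[i][0] <= t:
            count += events[i][1]
            i += 1
        samples.append((t, count))
    return samples

def los_band(pax_in_section, section_area_m2):
    m2_per_pax = section_area_m2 / max(pax_in_section, 1)
    # Assumed walkway-style thresholds for illustration; an actual study would use the IATA tables.
    for band, threshold in [("A", 2.7), ("B", 2.3), ("C", 1.9), ("D", 1.5), ("E", 1.0)]:
        if m2_per_pax >= threshold:
            return band
    return "F"

occ = section_occupancy([5, 7, 12, 14, 15], [9, 16, 20])
print(occ[:3], los_band(pax_in_section=40, section_area_m2=80))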

CONCLUSIONS

Curaçao is a small, compact airport in a very dynamic environment. In the contract, one of the performance specification items was to demonstrate that even during peak hours of the ultimate forecast, the LOS should be at IATA LOS C standards or higher.

Figure 3. Check-In Counters Queuing Area Passenger Density Distribution in 2031 (passengers in the queuing area by hour of day, with IATA LOS A, B, and C boundaries)

Figure 4. Boarding Area (Boarding Lounge Gate C001) IATA LOS Distribution in 2031 (passengers in the boarding lounge by hour of day, with IATA LOS A, B, and C boundaries)


TERMSIM, Bechtel's proprietary simulation package, made it possible to quickly investigate the performance of different terminal layouts and translate them into the design changes necessary to accommodate the traffic projections at desired design standards. Because TERMSIM can be used at different levels of detail, its use is practical and effective, even for small airports like Curaçao.

BIOGRAPHIES

Michel A. Thomet is the manager of facility planning and simulation for the Aviation Services Group in San Francisco, California. He has been involved in the master planning of transportation infrastructure megaprojects around the world, including airports, rail systems, transit systems, ports, mines, and industrial cities. On these projects, Dr. Thomet has been responsible for simulation studies (capacity, level of service), traffic forecast studies, economic feasibility studies, and noise and air quality impact studies. He currently supports the New Doha International Airport project in Qatar.

Previously, Dr. Thomet was the planning director at Suiselectra in Basel, Switzerland, where he coordinated a team of experts in various fields related to transportation, and traveled widely in Europe and North America to gain first-hand knowledge of state-of-the-art urban transportation systems. Earlier, as a senior electrical engineer at the Westinghouse research and development laboratories in Pittsburgh, Pennsylvania, he conducted research on solid-state power conversion systems.

Dr. Thomet is a member of the Institute of Electrical and Electronics Engineers (IEEE), the Society for Modeling and Simulation International (SCS), and the Transportation & Development Institute (T&DI) of the American Society of Civil Engineers (ASCE). As a member of the executive committee of the Vehicular Technology Society of IEEE, he has been involved in preparing and supporting the annual American Society of Mechanical Engineers (ASME)/IEEE Joint Railroad Conference. Dr. Thomet has authored and published 12 technical papers (4 on electrical engineering and 8 on transportation), several of which have been presented at the Winter Simulation Conference (WSC) and at conferences sponsored by the IEEE, ASME, and Airports Operations Council International (AOCI).

Dr. Thomet received an MBA in Management and Economics from the University of California, Berkeley; has a PhD in Systems Engineering and an MS in Electrical Engineering, both from Carnegie Mellon University, Pittsburgh, Pennsylvania; and received a Diploma in Electrical Engineering from the Federal Institute of Technology, Zurich, Switzerland.

Farzam Mostoufi is a senior planning and simulation specialist with Bechtel Civil, with 20 years of experience at Bechtel in the planning and design of transportation and material handling facilities, including international airport terminals, railroads, transit systems, bulk and container ports, and mining and metals production complexes. He is highly experienced in conducting technical simulation studies and economic analysis, and in the design, development, and use of specialized transportation and logistics models.

Farzam has developed economic models and participated in feasibility studies to test the impact of projected operations and designed facilities on revenues, capital expenditures, and maintenance costs. He is currently supporting the New Doha International Airport project in Qatar, being designed to meet Qatar’s aviation needs for decades to come. When the airport opens in 2011, as many as 8,000 passengers will be able to use the 590,000+ m2 passenger terminal complex in a single hour, and the 4,850 m eastern runway will be among the longest commercial runways in the world, allowing for unrestricted operations by Airbus A380 aircraft even under extreme meteorological conditions.

Farzam received an MBA in Finance from Golden Gate University, San Francisco, California; has a BS in Economics and Insurance from Tehran College, Tehran, Iran; and has completed course requirements in the Doctor of Business Administration (DBA) degree program at Golden Gate University. As a lecturer at Golden Gate University, he taught graduate and undergraduate level courses in computer modeling, simulation, and database systems. Farzam also holds a Certificate in Airport Systems Planning from Massachusetts Institute of Technology, Cambridge, Massachusetts.


SAFE PASSAGE OF EXTREME FLOODS — A HYDROLOGIC PERSPECTIVE

Samuel L. Hui
[email protected]

André Lejeune, PhD
Université de Liège, Belgium
[email protected]

Vefa Yucel
National Security Technologies, LLC
[email protected]

Issue Date: December 2008

Abstract—This paper takes a fresh look at uncertainty in estimates of the inflow design floods (IDFs) used for spillway design for safe passage of extreme floods through dams, particularly dams with a height of 30 m or less. Development of IDFs currently involves statistical analysis; thus, IDFs incorporate uncertainties. The paper defines the extreme flood and suggests a means by which it can be estimated in order to incorporate uncertainty in the IDF. A clear understanding of the physical site conditions and the physical processes in question, as well as engineering judgment, are paramount in developing a safe design.

Keywords—ARI, bootstrap CI, dam, ELV, EMA, flood, hydraulics, ICOLD, IDF, PMF, PMP, spillway, WRC

INTRODUCTION

Many developed countries use the probable maximum flood (PMF) as the design basis for establishing the inflow design flood (IDF) for dams that are classified as high hazard because of their high dam heights and large storage volumes. The failure of such dams would cause loss of life and result in major adverse economic consequences due to damage to properties downstream.

The PMF is defined as "the flood that may be expected from the most severe combination of critical meteorological and hydrologic conditions that are reasonably possible for the drainage basin in question." In general, the PMF is considered to be statistically indeterminate.

When the PMF is not used as the basis of the IDF, it is at least used as a "check flood" to ensure that a dam will not fail catastrophically if overtopped. However, in some developed and many developing countries, the peaks of the IDF are estimated from probabilistic approaches, regardless of the hazards these dams may pose to downstream inhabitants. This is particularly true for small dams with heights of 30 m or less. In many cases, the available flood-flow data is barely sufficient for a probabilistic analysis; therefore, estimates of design-flood peak discharges that use probabilistic approaches are highly uncertain.

The International Commission on Large Dams (ICOLD) has charged the Technical Committee of Hydraulics for Dams with developing a bulletin, entitled "Safe Passage of Extreme Floods," to provide insight and approaches for determining design-flood peak discharges when probabilistic approaches are used. The bulletin was also developed to provide a better design of the outlet works that could safely pass extreme floods. The purpose of this paper is to capture the essence of Chapter 2 of that bulletin, "Confidence Level Assessment of Design Flood Estimates," which suggests using the upper bound of the confidence limits to provide a margin of safety in defining IDFs for dams.

The precision of confidence-level determinations may also be improved by using recently developed algorithms in determining quantile estimators for some distributions that are commonly used in flood-flow frequency analysis. This paper provides additional discussion, not presented in the ICOLD bulletin, regarding the estimate of confidence levels in determining extreme floods for dam design.

CURRENT PRACTICE AND UNCERTAINTY OF IDF ESTIMATES

The current practice in the design of dams is to first select the IDF appropriate for the hazard potential of a dam and reservoir and then to determine its peak flow rate and/or its entire hydrograph. Then the spillway and outlet works can be designed, or adequate storage can be allocated in the reservoir, to safely accommodate the design flood without endangering the integrity of the dam and its appurtenant structures.




In many developed countries, dams are classified by their hazard potential, with regulations governing the selection of the IDF. For high-hazard dams, the PMF, or the flood that may be expected from the most severe combination of critical meteorological and hydrologic conditions that are reasonably possible, is commonly adopted as the IDF. When the PMF, which is considered to be statistically indeterminate, is not used as the basis of the IDF, it is used as a “check flood” to ensure that the dam does not fail catastrophically if overtopped.

The IDF is derived either deterministically with a precipitation-runoff model, using a design rainfall sequence and other basin hydrologic parameters appropriate for the design hydrometeorological conditions, or by means of a statistical analysis, using historical flood peaks observed at or near the proposed dam site. In the former case, the design precipitation values are generally determined using historical data. For example, in deriving the probable maximum precipitation (PMP) used in the development of the PMF, the 100-year or other frequency precipitation values form the basis of the PMP estimates. Therefore, regardless of the approach taken, determining the design-flood peak discharge involves statistical analysis of data. In many cases, the available data is insufficient (in terms of years of data collected), resulting in uncertainty in the estimate of the IDF.

Dam design professionals generally recognize that good engineering demands realistic or justified design, and that dams should be designed to accommodate the maximum flood computed based on approved hydrologic design criteria (i.e., ranging from a 100- to a 10,000-year IDF). Therefore, an IDF estimate that exceeds the design-flood peak discharges (i.e., an extreme flood), as advocated by the ICOLD bulletin, should account for any uncertainty, usually defined by the confidence limits.

FLOOD FREQUENCY ANALYSIS

The IDF chosen from a flood frequency analysis is generally located in the "upper tail" of the cumulative distribution of the observed phenomena. This would correspond to an average recurrence interval (ARI) of 100 years to more than 10,000 years, depending on the potential hazard a dam poses to the downstream inhabitants and properties. Unfortunately, available sample data may include, at best, 100 years of observations, and often less than 50 or even 20 years. Without some assumptions about the population distribution of the data, we would theoretically need 3,000 years of observation to roughly define the 1,000-year event. In this case, the interval bounded by the highest and the 7th highest of the sample data would have approximately a 90 percent chance of containing the 1,000-year value. [1]
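The quoted probability can be checked directly: if annual maxima are independent, the number of observations in a 3,000-year sample that exceed the true 1,000-year value follows a binomial distribution, and the 1,000-year value lies between the highest and the 7th-highest observation exactly when between one and six observations exceed it. A short check (purely illustrative, using scipy):

# Check of the ~90 percent figure quoted above: with n = 3,000 independent annual
# maxima, the number K of observations exceeding the true 1,000-year value is
# Binomial(3000, 0.001). The 1,000-year value lies between the 7th-highest and
# the highest observation exactly when 1 <= K <= 6.
from scipy.stats import binom

n, p = 3000, 1.0 / 1000
prob = binom.cdf(6, n, p) - binom.cdf(0, n, p)
print(f"P(1 <= K <= 6) = {prob:.3f}")        # roughly 0.92, i.e., about 90 percent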

Therefore, when there is a limited amount of available historical data at a site, the use of regionalization techniques may be required to increase the database in deriving a reliable at-site frequency distribution. In any case, extrapolation beyond the database is required in estimating the design frequency event, usually in the range of a return period of 1,000 to 10,000 years.

The extrapolation to derive extremely rare peak flows could lead to uncertain estimates, with the resulting flood quantiles highly dependent on the choice of theoretical distribution. Regardless of the appropriateness of the probability distribution used, the extrapolation of data to the 1,000- and 10,000-year range from even 100 years of available data is a stretch. However, sound design-flood estimates can be achieved by explicitly accounting for uncertainty in the estimates by means of confidence intervals (CI), relying on a clear understanding of the hydrometeorological characteristics of the watershed, and using professional judgment.

In addition, the evaluation of the safe passage of a flood requires the routing of the design flood hydrograph through the reservoir. It also requires a determination of the corresponding flood volumes and distributions.

ABBREVIATIONS, ACRONYMS, AND TERMS

ARI average recurrence interval

CI confidence interval

ELV estimated limiting value

EMA Expected Moments Algorithm

ICOLD International Commission on Large Dams

IDF inflow design flood

LP3 Log Pearson Type III

PMF probable maximum flood

PMP probable maximum precipitation

WRC Water Resources Council



CONFIDENCE INTERVALS

The precision of a design event estimate, in terms of a return period in years (described as T-years; i.e., T = 100 years or T = 1,000 years), that is derived from a probability distribution fitted to the sample data, can be quantified by computing a CI of a certain confidence level, e.g., 95 percent, for the T-year event. The CI is a range of estimated values within which the true value of the T-year event is expected to lie. If different CIs are derived using various methods, the CI giving the smaller range should be chosen.

The statistical distribution of the T-year event is usually unknown; therefore, it is not possible to derive an exact CI for the T-year event. However, analytical expressions (i.e., first-order approximations) have been developed that are acceptable for large sample sizes. Because hydrologic samples are typically small, these approximate CIs may lack accuracy. Methods for computing CIs are further summarized below.

IMPACT OF RECORD LENGTH ON CONFIDENCE INTERVALS

The accuracy of the T-year event estimates and of the associated CIs is a function of the number of years of record available for the analysis, the assumed probability relationship (frequency distribution), and the way the sample statistics are estimated.

As the length of the record increases, the reliability of the estimate also increases. Approximate values of reliability (percent chance) can be calculated for different return periods. The approximate values for infrequent events are shown in Table 1, giving the approximate reliabilities as a function of confidence limit, ARI, and record length. [2] For example, it is almost certain that, with 25 years of historical data, the estimate for the 2-year ARI will fall within plus or minus 50 percent of the estimated value; but the chance of having the estimated value fall within plus or minus 10 percent of the estimate is only about 68 percent. [3] The table also depicts the risk of having a flood with an ARI greater than T-years during the life of a project. It is important to note, however, that the lifetime of a dam, generally defined in economic terms, is different from the real lifetime of the structure, which is usually greater.
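The lifetime-risk column in Table 1 follows from the standard assumption of independent annual events: the chance of at least one flood exceeding the T-year event in an N-year lifetime is 1 − (1 − 1/T)^N. A minimal check of the tabulated values (the snippet is illustrative and not from the paper):

    # Risk of at least one flood exceeding the T-year event in an N-year lifetime,
    # assuming independent annual events: risk = 1 - (1 - 1/T)**N
    for T in (2, 10, 50, 100):
        for N in (30, 50):
            risk = 1.0 - (1.0 - 1.0 / T) ** N
            print(f"ARI = {T:>3} years, lifetime = {N} years: risk = {risk:.0%}")
    # Closely reproduces the lifetime-risk column of Table 1, e.g., 26% and 39% for the 100-year ARI.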

ACCOMMODATING UNCERTAINTY IN THE IDF

On the basis of the method suggested in the U.S. Geological Survey’s “Guidelines for Determining Flood Flow Frequency,” Bulletin 17B [4], for a project involving a 1,000-year flood of 4,478 m³/s derived from a 39-year systematic record using the Log Pearson Type III (LP3) distribution, the corresponding upper 95 percent confidence limit is estimated to be 7,632 m³/s. The extreme flood used for the design of such a project would include all floods up to 7,632 m³/s, an increase of about 72 percent over the expected value that is normally used as the IDF. In this example, the dam design professionals would have to find a way to accommodate the extra 3,154 m³/s, other than relying on the planned spillway, if they wish to have a 95 percent confidence that the dam could safely pass the IDF based on data accuracy alone.
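To illustrate how such an upper confidence limit is obtained from a fitted distribution, the sketch below applies a Bulletin 17B-style normal-quantile approximation to the logarithms of annual peaks, assuming zero log-space skew for simplicity (the full LP3 procedure uses a weighted skew coefficient and Pearson Type III frequency factors). The record, function name, and resulting numbers are hypothetical and are not intended to reproduce the 4,478 and 7,632 m³/s figures quoted above:

    import numpy as np
    from scipy import stats

    def t_year_quantile_with_upper_cl(peaks_m3s, T, conf=0.95):
        """Illustrative T-year quantile and one-sided upper confidence limit from a
        lognormal (zero log-space skew) fit to annual peak flows, using a
        Bulletin 17B-style normal-quantile approximation."""
        logs = np.log10(np.asarray(peaks_m3s, dtype=float))
        n, mean, sd = len(logs), logs.mean(), logs.std(ddof=1)
        k = stats.norm.ppf(1.0 - 1.0 / T)          # frequency factor for zero skew
        zc = stats.norm.ppf(conf)                  # standard normal value for the confidence level
        a = 1.0 - zc**2 / (2.0 * (n - 1))
        b = k**2 - zc**2 / n
        k_upper = (k + np.sqrt(k**2 - a * b)) / a  # adjusted frequency factor for the upper limit
        return 10.0 ** (mean + k * sd), 10.0 ** (mean + k_upper * sd)

    # Example with a synthetic 39-year record (illustrative only)
    rng = np.random.default_rng(1)
    peaks = rng.lognormal(mean=6.5, sigma=0.6, size=39)
    q1000, q1000_upper = t_year_quantile_with_upper_cl(peaks, T=1000)
    print(f"1,000-year estimate: {q1000:,.0f} m3/s; upper 95% limit: {q1000_upper:,.0f} m3/s")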

Using the same database and the same LP3 distribution, the 10,000-year flood peak discharge and the corresponding upper 95 percent confidence limit are estimated to be 11,768 m³/s and 19,039 m³/s, respectively.

                                  Confidence Limits, % error          Risk of Flood (ARI = T-years)
ARI, years   Record Length, years   (reliability, % chance)            Within Lifetime (N-years)
                                   ±10%     ±25%     ±50%               N = 30        N = 50

    2               10               47       88       99                100%          100%
    2               25               68       99      100                100%          100%
    2              100               96      100      100                100%          100%
   10               10               46       77       97                 95%           99%
   10               25               50       93       99                 95%           99%
   10              100               85      100      100                 95%           99%
   50               10               37       70       91                 45%           63%
   50               25               46       91       97                 45%           63%
   50              100               73       99      100                 45%           63%
  100               10               35       66       90                 26%           39%
  100               25               45       89       98                 26%           39%
  100              100               64       99      100                 26%           39%

Table 1. Approximate Reliabilities as a Function of Confidence Limit


The dam designers would have to find ways to accommodate the additional 7,271 m³/s in order to pass the extreme flood safely through the dam if the 10,000-year flood is adopted as the IDF.

It is possible that the extreme flood defined by the upper limit of the CI could exceed the PMF or the estimated limiting value (ELV) flood. The dam designers would need to perform sufficient analyses to ensure that this would not happen and to minimize any unnecessary over-design.

METHODS TO DERIVE CONFIDENCE INTERVALS

CIs based on asymptotic theory, along with CIs constructed using the non-central t-distribution, are commonly used in practice. Stedinger [5] discussed these methods for computing CIs for quantile estimates. He concluded that the non-central t-distribution and the asymptotic distribution normally work well with observations and their logarithms if the data is normally distributed. For the LP3 distribution with a known skew coefficient, a combination of the non-central t-distribution with an adjustment based on the asymptotic variance of the quantile estimator also generally performs satisfactorily. However, the approach suggested by Bulletin 17B did not perform as well as the other methods because it ignores possible error in the specified population skewness coefficient.

Recent literature on CIs includes attempts to remedy some of the issues identified with the current commonly used CI estimating methods, particularly issues associated with procedures suggested in Bulletin 17B. These can be summarized as analytical methods and bootstrap methods.

One of the more recent analytical methods is the Expected Moments Algorithm (EMA) developed by Cohn et al. [6] The EMA is an attempt to remedy the shortcomings of the Bulletin 17B procedures, in which the parameters used to describe the distribution are derived independently of the distribution, without modifying or abandoning the use of the “method of moments,” a basic statistical structure of Bulletin 17B. EMA is an iterative method-of-moments procedure that computes the parameters of the LP3 distribution using systematic flood peak data as well as historic flood peaks, with analytical expressions for the asymptotic variance of EMA flood-quantile estimators and CIs for flood quantile estimates. Using the parametric bootstrap method (also known as Monte Carlo simulation), Cohn et al. demonstrate that their expressions provide useful estimates of the CIs even though they are not exact.

Bootstrap methods are among the many modern tools used by statisticians. There are nonparametric and parametric bootstrap methods. The idea behind the nonparametric bootstrap method is to use the sample data at hand to generate many artificial samples of the same size by random sampling with replacement. For each artificial sample, the quantile of interest is computed from the sample distribution. If N artificial samples are generated, there will be N estimates of the quantile of interest. If a 95 percent CI is sought, the 2.5 and 97.5 percentiles of the N sample quantiles provide the needed lower and upper bounds of the CI. The major advantage of this method is that it can be applied to any estimation problem without the need to make assumptions about the uncertainty distribution around the estimate of the statistic of interest, which is often the problem with the analytical confidence expressions.
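A minimal sketch of this nonparametric procedure, with a hypothetical function name, statistic, and synthetic record (a modest quantile is used because very rare quantiles cannot be resampled directly from a short record without a fitted distribution):

    import numpy as np

    def bootstrap_ci(sample, stat, n_boot=10000, level=0.95, seed=0):
        """Nonparametric bootstrap CI: resample with replacement, recompute the
        statistic for each artificial sample, and take the percentile bounds."""
        rng = np.random.default_rng(seed)
        sample = np.asarray(sample, dtype=float)
        stats_boot = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                               for _ in range(n_boot)])
        lo, hi = np.percentile(stats_boot, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
        return lo, hi

    # Illustrative use: 95% CI for the 10-year flow (90th-percentile annual peak)
    annual_peaks = np.random.default_rng(2).lognormal(6.5, 0.6, size=50)   # synthetic record
    print(bootstrap_ci(annual_peaks, stat=lambda s: np.quantile(s, 0.9)))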

The parametric bootstrap method has four main steps: (1) use the observed data and compute the parameters based on a certain assumed parametric distribution, (2) generate a large number of samples from the assumed parametric distribution, (3) calculate the statistic of interest for each sample, and (4) sort these values and use the appropriate quantiles to define the CI.
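The same four steps can be sketched for an LP3 assumption applied to the logarithms of annual peaks; scipy’s pearson3 distribution (parameterized by the log-space skew, mean, and standard deviation) is used here to represent the Pearson Type III form, and the function name, parameter choices, and data are illustrative only:

    import numpy as np
    from scipy import stats

    def parametric_bootstrap_ci(peaks_m3s, T, n_boot=5000, level=0.95, seed=0):
        """Parametric bootstrap CI for the T-year flood under an LP3 assumption:
        (1) fit log-space mean, standard deviation, and skew by moments;
        (2) generate synthetic records from the fitted Pearson III distribution of log flows;
        (3) refit each synthetic record and compute its T-year quantile;
        (4) take percentile bounds of the resulting quantile estimates."""
        rng = np.random.default_rng(seed)
        logs = np.log10(np.asarray(peaks_m3s, dtype=float))
        n, p = logs.size, 1.0 - 1.0 / T
        mean0, sd0, g0 = logs.mean(), logs.std(ddof=1), stats.skew(logs, bias=False)

        estimates = []
        for _ in range(n_boot):
            synth = stats.pearson3.rvs(g0, loc=mean0, scale=sd0, size=n, random_state=rng)
            fit = stats.pearson3(stats.skew(synth, bias=False),
                                 loc=synth.mean(), scale=synth.std(ddof=1))
            estimates.append(10.0 ** fit.ppf(p))

        lo, hi = np.percentile(estimates, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
        return lo, hi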

The parametric bootstrap method of computing confidence levels is also found in the regional flood frequency analysis using L-moments developed by Hosking and Wallis. [7] Hosking and Wallis show that the method performs well even with the issues of heterogeneity and dependency among the gauging stations used in the regional analysis; the method also avoids the issue of making assumptions about the uncertainty distributions around the quantiles of interest.

CONCLUSIONS

We have defined the extreme flood in the context of the bulletin “Safe Passage of Extreme Floods” for the design of a dam. We have also suggested means by which the extreme flood can be estimated, while accounting for uncertainty. As in flood hydrology and in any predictive analysis that deals with nature, a clear understanding of the physical site conditions and the physical processes in question, as well as engineering judgment, are paramount in the development of a safe design.

The question of how confident we are in our estimates of the confidence levels remains the real issue. The papers referenced here point out the inexact nature of the approaches used in deriving CIs, as well as the associated shortcomings of these procedures. Some methods perform better than others, depending on the sample data at hand.

In [8], six methods are evaluated, including both analytical approximate methods and the bootstrap method. The following is an excerpt from that paper, which also makes reference to a paper presented at the American Water Resources Association’s 1997 conference. [9]

Nonparametric computer-intensive Bootstrap CIs are compared with parametric CIs for simulated samples, drawn from an LP3 distribution. Using this methodology, biased in favor of parametric CIs since the parent distribution is known, Bootstrap CIs are shown to be more accurate for small to moderate confidence level (≤80%), when parameters are estimated by the indirect method of moment (Bulletin 17B). However, the actual level of Bootstrap CIs is almost always lower than the target level. It is expected that, compared to parametric CIs, Bootstrap CIs perform even better when applied to actual series of maximum annual floods, since they need not come from an LP distribution.

It is recommended that several methods be used in defining confidence levels, and, based on performance criteria and professional judgment, the best method should be selected. Consultation with a professional statistician is always a prudent way for the hydrologist to build further confidence in his or her estimates.

REFERENCES

[1] ICOLD–CIGB, International Symposium on Dams and Extreme Floods (Tomo 1 and Tomo 2), Granada, Spain, September 1992 (see <www.icold-cigb.org> and <http://www.wrm.ir/ircold/pdf/eng_books/Proceedings5.pdf>).

[2] M. Wanielista, R. Kersten, and R. Eaglin, “Hydrology: Water Quantity and Quality Control,” 2nd Edition, John Wiley & Sons, New York, NY, 1997, access via <http://he-cda.wiley.com/WileyCDA/HigherEdTitle/productCd-0471072591,courseCd-E63100.html>.

[3] G.W. Kite, “Frequency and Risk Analyses in Hydrology,” Water Resources Publications, LLC, Littleton, CO, 1988, access via <http://www.wrpllc.com/books/kitebooks.html>.

[4] “Guidelines for Determining Flood Flow Frequency,” Bulletin 17B of the Hydrology Subcommittee, Office of Water Data Coordination, and Interagency Advisory Committee on Water Data, U.S. Geological Survey, Reston, VA, 1982 <http://choctaw.er.usgs.gov/new_web/reports/other_reports/flood_frequency/guidelinesflofreq.html>.

[5] J.R. Stedinger, “Confidence Intervals for Design Events,” ASCE, Journal of Hydraulic Engineering, Vol. 109, No. 1, January 1983, pp. 13–27, access via <http://cedb.asce.org/cgi/WWWdisplay.cgi?8300105>.

[6] T.A. Cohn, W.L. Lane, and J.R. Stedinger, “Confidence Intervals for Expected Moments Algorithm Flood Quantile Estimates,” Water Resources Research, Vol. 37, No. 6, June 2001, pp. 1695–1706 <http://www.agu.org/pubs/crossref/2001/2001WR900016.shtml>.

[7] J.R.M. Hosking and J.R. Wallis, “Regional Frequency Analysis: An Approach Based on L-Moments,” Cambridge University Press, Cambridge, UK, May 1997, see <http://www.springerlink.com/content/g848127107ul6403/>.

[8] V. Fortin and B. Bobee, “Nonparametric Bootstrap Confidence Intervals for the Log Pearson Type III Distribution,” Transactions on Ecology and the Environment, Vol. 6, WIT Press, 1994, access via <http://library.witpress.com/pages/listpapers.asp?q_bid=245&q_subject=Ecology>.

[9] J.F. England, Jr., and T.A. Cohn, “Scientific and Practical Considerations Related to Revising Bulletin 17B: The Case for Improved Treatment of Historical Information and Low Outliers,” Proceedings of the World Environmental & Water Resources Congress: Restoring Our Natural Habitat, K.C. Kabbes, ed., Tampa, FL, May 15–19, 2007, Paper 40927–2565, see <https://www.asce.org/bookstore/book.cfm?book=7402>.

This paper was presented at the International Conference on Dam Safety Management, held in Nanjing, China, in October 2008. The original version is slated for inclusion in the conference proceedings, which will be published at a future date.

Additionally, the original version of this paper is scheduled for publication in the January issue of L’Houille Blanche, the technical journal of the Société Hydraulique de France in Paris and one of the premier hydraulic engineering technical journals in France and the world. The paper was translated into French by Dr. Lejeune (who will be listed as lead author on the translation) and will be submitted under the title “Passage en sécurité des crues extrêmes.”

BIOGRAPHIES

Samuel L. Hui has 42 years of hydraulic engineering experience, including 35 years at Bechtel. His vast technical knowledge and skills have been applied to more than 40 projects in the United States and around the world, including such Bechtel megaprojects as the Jubail Industrial City, King Khalid International Airport, and King Fahd International Airport in Saudi Arabia; Sohar Aluminum Smelter in Oman; and Fjarðaál Aluminum Smelter in Iceland.

Currently, as the senior principal engineer with Bechtel Civil, Sam participates in hydraulic or hydrologic engineering tasks on multiple projects, and is the off-project design reviewer of hydraulic/hydrology tasks for the Guinea Alumina Project in West Africa, which ranks as one of the largest and most significant greenfield projects ever to be developed.

Sam was manager and global technical lead for Bechtel’s Hydraulics and Hydrology Group, which performed technically challenging hydraulics and hydrologic studies worldwide, from 1995 to 2004.

Sam’s many professional memberships include the U.S. Society on Dams (USSD), in which he serves on the Technical Committee on Hydraulics of Dams; and the International Commission on Large Dams (ICOLD), in which he serves on the subcommittee charged with the preparation of the ICOLD bulletin titled “Safe Passage of Extreme Floods.” He was also a member of the American Society of Civil Engineers (ASCE), and formerly chaired the Surface Water Hydrology Technical Committee’s control group and served on the subcommittee that oversaw revisions to the ASCE Hydrology Handbook (second edition).

Sam holds MS and BS degrees in Civil Engineering from Queen’s University, Kingston, Ontario, Canada. He is a registered civil engineer in the province of Ontario, Canada, and in the state of California.

Professor André Lejeune teaches at the Université de Liège, Belgium, where he heads the Department of Hydraulics and Transport and the Laboratory of Applied Hydrodynamics and Hydraulic Construction. Dr. Lejeune has also taught at the International Institute for Infrastructural, Hydraulic and Environmental Engineering, Netherlands, and l’Ecole Polytechnique Fédérale de Lausanne, Switzerland, as a visiting professor.

Dr. Lejeune has lent his outstanding hydraulics expertise to projects in 70 countries, including China, Egypt, Ethiopia, Indonesia, Iran, Israel, Japan, Jordan, Kenya, Madagascar, Pakistan, Poland, Thailand, the former Soviet Republic, Venezuela, and Yemen. He currently participates in a feasibility study for the Red Sea–Dead Sea Canal, a potential joint Jordanian–Israeli initiative to bring water from the Red Sea to the Dead Sea, which is shrinking rapidly due to evaporation and upstream water diversion. Also, Dr. Lejeune recently served as an advisor for post-earthquake reconstruction of the Jian River irrigation dam near the city of Mianyang, in Sichuan, China.

Dr. Lejeune is a member of the Belgian Royal Academy of Science, and a past peer reviewer for ABET, Inc. (formerly the Accreditation Board for Engineering and Technology), which accredits educational programs in applied science, computing, engineering, and technology. He is also a member of the International Commission on Large Dams (ICOLD), and currently chairs the Technical Committee on Hydraulics for Dams.

In 1972, Dr. Lejeune received the Lorenz G. Straub Award, a prestigious international award presented annually by the University of Minnesota to the author of a particularly meritorious doctoral thesis on a topic related to hydraulic engineering. His paper was titled “The Operating Forces for the Opening and Closing of Miter Gates on Navigation Locks.”

Dr. Lejeune holds a PhD in Hydraulics, and has received the US equivalent of master of science degrees in Oceanography and Civil Engineering, all with highest marks, from the Université de Liège, Belgium.

Vefa Yucel is a principal engineer with National Security Technologies, LLC (NSTec), which provides management and operations (M&O) services for the Nevada Test Site (NTS), a 1,350-square-mile area northwest of Las Vegas, Nevada. He leads GoldSim modeling (a contaminant transport and regulatory compliance model) development for low-level and transuranic (TRU) waste performance assessments and compliance evaluations of two disposal facilities, closure planning and cover design, and environmental monitoring of the site’s waste management facilities. As engineering supervisor and principal hydrologist at NTS with Bechtel Nevada, he managed many of the same tasks he currently manages as one of NSTec’s principal engineers.

Earlier, with Bechtel Environmental, Inc., in Oak Ridge, Tennessee, Vefa supervised Geotechnical and Hydraulic Engineering Services’ hydraulics and hydrology group, which provided specialty services to environmental restoration and waste management projects in surface water and groundwater hydrology, and fate and transport modeling. He was originally a senior engineer in Bechtel’s Hydraulics and Hydrology Group in San Francisco, where he was engaged in hydrologic studies for water resource and flood control projects.

Vefa is a member of the American Society of Civil Engineers (ASCE), and served on ASCE’s Task Committee on Paleoflood Hydrology in 1999. He has authored numerous technical papers, most of which have been presented at professional conferences worldwide, including “Decision Support System for Management of Low-Level Radioactive Waste Disposal at the Nevada Test Site,” “Hydrologic Simulation of the Rio Grande Basin, Bolivia,” “Pollutant Loadings on the San Francisco Bay Estuary,” “An Integrated Model for Surface-Subsurface Fate and Transport and Uncertainty Analyses (Part I: Theory, Part II: Application),” and “Development of Rainfall Intensity-Duration-Frequency Data for the Eastern Province of Saudi Arabia.”

Vefa holds MS and BS degrees in Engineering from Iowa State University, Ames, Iowa, and has completed graduate courses in Civil Engineering at Stanford University, Palo Alto, California. He is a registered civil engineer in the state of California.


Bechtel Communications

Communications — AT&T
The AT&T Mobility project involves construction, engineering, procurement, project management, and site acquisition for 3G mobile networks in the United States.

Technology Papers

57   FMC: Fixed-Mobile Convergence
     Jake MacLeod and S. Rasoul Safavian, PhD

77   The Use of Broadband Wireless on Large Industrial Project Sites
     Nathan Youell

91   Desktop Virtualization and Thin Client Options
     Brian Coombe


FMC: FIXED–MOBILE CONVERGENCE

Jake MacLeod
[email protected]

S. Rasoul Safavian, PhD
[email protected]

Originally Issued: June 2006
Updated: December 2008

Abstract—Fixed–mobile convergence (FMC) is providing a new direction for the future of telecommunications, with a potentially profound impact on various segments and industries. As the boundaries between various services blur, so do the rules of engagement of various industries. The impact is more than purely technical. FMC could potentially redefine the nature of telecommunications, information, and entertainment services and how various types of service providers compete. This paper looks into FMC drivers; various technical aspects of FMC; and the current evolutionary steps toward FMC implementation, such as generic access network (GAN), cellular–wireless local area network (WLAN) integration, femtocells, and next-generation networks (NGNs).

Keywords—3GPP, 3GPP2, end-to-end, fixed–mobile convergence, FMC, FTTH, GAN, generic access network, integration, interoperability, interworking, layer, mobility, next-generation network, NGN, nomadicity, seamless, security, TISPAN, VDSL, VHE, virtual home environment, WiMAX, wireless local area network, WLAN

© 2008 Bechtel Corporation. All rights reserved.

INTRODUCTION

Fixed–mobile convergence (FMC) is not new! The original concept has been around since the early 1990s and was originally perceived as representing the ultimate telecommunications merger. The future would be one in which there would be little or no difference between fixed and mobile phone services. Each user would have a single number and receive a single bill. There would be a quality of service (QoS) range, end-to-end security, and a single point of contact for customer service. The key focal points in this vision were, and still are, services built around individual users, independently of their access networks!

So what went wrong? The answer varies depending on whom you ask, but perhaps the main shortcomings could be attributed to technology immaturity; lack of unified standards; slow acceptance of packet services in the mobile arena; lack of (or delay in) delivery of the appropriate terminal devices; and, perhaps most importantly, absence of the appropriate market business drivers. Adding a further blow, the slowdown of the telecommunications boom has had a huge impact on capital expenditures (CAPEX).

Technologically, FMC on the core network was hampered by the QoS issues of the Internet Protocol (IP) backbone. On the mobile side, packet data services had a much slower than desired or expected takeoff. Fortunately, the recent emergence of common multimedia services standards such as session initiation protocol (SIP) and IP multimedia subsystem (IMS)—both wireless and wireline—has made it possible and efficient to share not only network infrastructure, but also application and billing layers, thus reversing the technical limitations.

This paper is organized under five headings:

• What is FMC?—Defines FMC and its main drivers.

• Different FMC Approaches and Solutions—Addresses various evolutionary steps toward full FMC, beginning with an examination of wireless local area network (WLAN)–cellular integration, generic access networks (GANs), and related issues.

• Next-Generation Networks—Examines a comprehensive new approach to FMC known as next-generation networks (NGNs) and addresses NGN requirements, architecture, functional elements, etc.

• Security Concerns—Addresses the all-important issue of end-to-end security.

• Conclusions—Provides concluding remarks and future research directions.


ABBREVIATIONS, ACRONYMS, AND TERMS

3G  third generation, enhanced digital mobile phone service at broadband speeds enabling both voice and nonvoice data transfer
3GPP™  Third Generation Partnership Project—a collaboration agreement among several communications standards bodies to produce and maintain globally applicable specifications for a third-generation mobile system based on GSM technology
3GPP2  Third Generation Partnership Project 2—a sister project to 3GPP and a collaboration agreement dealing with North American and Asian interests regarding third-generation mobile networks
AAA  authentication, authorization, and accounting
AC  authentication center
AGCF  access gateway control function
AGW  access gateway
AKA  authentication and key agreement
A-MGF  access MGF
AN  access node
ANSI  American National Standards Institute
AP  access point
API  application program interface
APN  access point name
ARPU  average revenue per user
AS  application server
ASP  application service provider
AuC  authentication center
BS  base station
BSC  base station controller
BTS  base transceiver station
CALEA  Communications Assistance for Law Enforcement Act
CAPEX  capital expenditures
CCF  charging collection function
CDMA  code division multiple access
cdma2000®  A family of standards, developed through comprehensive proposals from Qualcomm, describing the use of code division multiple access technology to meet 3G requirements for wireless communication systems
CGW  charging gateway
CN  core network
CS  circuit switched
CSCF  call session control function
DMH  dual-mode handset
DoS  denial of service
DSL  digital subscriber line
DSS  digital subscriber signaling
DTM  dynamic asynchronous transfer mode
E911  emergency 911 (service)
EAS  emulation AS
EDGE  enhanced data rates for GSM evolution
EMTEL  emergency telecommunication
ESP  encapsulating security payload
ETSI  European Telecommunications Standardization Institute
EV-DO  evolution–data optimized (3GPP2 standard)
FCC  Federal Communications Commission
FE  functional entity
FMC  fixed–mobile convergence
F-MMS  fixed line MMS
FMS  fixed-mobile substitution
FNO  fixed network operator
FNP  fixed network portability
FTTH  fiber to the home
GAA  generic authentication architecture
GAN  generic access network
GANC  GAN controller
GBA  generic bootstrapping architecture
GERAN  GSM/EDGE RAN
GGSN  gateway GPRS support node
GMSC  gateway MSC
GPRS  general packet radio service
GPS  global positioning system
GSM  global system for mobile communications
HLR  home location register
HPLMN  home PLMN
HSPA  high-speed packet access
HSS  home subscriber server
HTN  handoff trigger node
HTTP  hypertext transport protocol
IETF  Internet Engineering Task Force
IKE  Internet key exchange
IM  instant messaging
IMS  IP multimedia subsystem
IP  Internet Protocol
IPSec  IP security
IPv4  IP version 4
IPv6  IP version 6
ISDN  integrated services digital network
ISIM  IMS subscriber identity module


ISP  Internet service provider
ISUP  ISDN user part
IT  information technology
ITU-T  International Telecommunication Union–Telecommunication Standardization Sector
IUA  ISDN Q.921 user adaptation layer
MBMS  multimedia broadcast multicast service
MGC  media gateway control
MGF  media gateway function
MitM  man in the middle
MMS  multimedia messaging service
MNO  mobile network operator
MNP  mobile number portability
MoU  minutes of use
MP3  MPEG-1 Audio Layer 3
MS  mobile station
MSC  mobile switching center
MSF  Multiservice Forum
NASS  network attachment subsystem
NAT  network address translation
NGN  next-generation network
OA&M  operation, administration, and management
OCS  online charging system
OEM  original equipment manufacturer
OMA  Open Mobile Alliance
OPEX  operating expenditures
OSA  open service access
PBX  private branch exchange
PDA  personal digital assistant
PDG  packet data gateway
PDSN  packet data serving node
PES  PSTN/ISDN emulation subsystem
PLMN  public land mobile network
POTS  plain old telephone service
PS  packet switched
PSAP  public safety answering point
PSTN  public switched telephone network
QoS  quality of service
RACS  resource and admission control subsystem
RAN  radio access network
RF  radio frequency
RGW  residential gateway
R-MGF  residential MGF
RNC  radio network controller
SDO  standardization development organization
SDR  software-defined radio
SEGW  security gateway
SG-17  (ITU-T) Study Group 17
SGSN  serving GPRS support node
SIM  subscriber identity module
SIP  session initiation protocol
SLF  subscriber location function
SMLC  serving mobile location center
SPAN  Services and Protocols for Advanced Networks (ETSI TC)
TC  (ETSI) technical committee
TDD  time-division duplex
TDM  time-division multiplexing
TGW  trunking gateway
TIPHON  Telecommunications and Internet Protocol Harmonization Over Networks (ETSI TC)
TISPAN  Telecommunications- and Internet-converged Services and Protocols for Advanced Networking (ETSI TC)
TLS  transport layer security
UE  user equipment
UMA  unlicensed mobile access
UMAC  UMA Consortium
UMAN  UMA network
UMTS  universal mobile telecommunications system
UNC  UMA network controller
UPSF  user profile server function
USIM  UMTS SIM
UTRAN  UMTS terrestrial RAN
V5UA  V5.2-user adaptation layer
VCC  voice call continuity
VDSL  very high data rate DSL
VHE  virtual home environment
VLR  visitor location register
VOD  video on demand
VoIP  voice over IP
VPLMN  visited PLMN
WAG  WLAN access gateway
Wi-Fi®  wireless fidelity (Although synonymous with the IEEE 802.11 standards suite and standardized by IEEE, Wi-Fi is a certification mark promoted by the Wi-Fi Alliance.)
WiMAX™  worldwide interoperability for microwave access (Although synonymous with the IEEE 802.16 standards suite and standardized by IEEE, WiMAX is a certification mark promoted by the WiMAX Forum.)
WISP  wireless ISP
WLAN  wireless local area network
XCAP  XML configuration access protocol
xDSL  any type of DSL
XML  extensible markup language



WHAT IS FMC?

The European Telecommunications Standardization Institute (ETSI) is an independent, not-for-profit organization whose mission is to produce telecommunications standards for the global marketplace. ETSI members come from network operators, equipment manufacturers, government, and academia. ETSI’s various technical committees (TCs) work on different projects. ETSI defines FMC as being concerned with providing network and service capabilities independently of the access technique. It is concerned with developing converged network capabilities and supporting standards. This does not necessarily imply the physical convergence of networks. These standards may be used to offer a set of consistent services via fixed or mobile access to fixed or mobile, public or private, networks.

In other words, FMC allows users to access a consistent set of services from any fixed or mobile terminal via any compatible access point. An important extension of this principle relates to roaming: users should be able to roam from network to network while using the same consistent set of services throughout those visited networks. This feature is referred to as the virtual home environment (VHE).

The key word in FMC is convergence, and it is crucial to understand what convergence means and what convergence implies.

What Does Convergence Mean?
There are several fundamentally different types of convergence:

• Voice/data/multimedia convergence, which has the ultimate goal of providing intelligent and personalized services to subscribers and can be referred to as service convergence

• Information/communications/entertainment convergence, which can also be perceived as Internet or information technology (IT)/telecommunications/broadcasting or media convergence and implies industry convergence

• Broadband, heterogeneous, all-IP convergence, which implies network convergence

• Terminal/computer/electronic home appliance (game consoles, video cameras, personal digital assistants [PDAs], MPEG-1 Audio Layer 3 [MP3] players, etc.) convergence, which is referred to as device convergence

What Does Convergence Imply?
So, what does convergence in the context of FMC imply? Potentially, all of the above different types of convergence! Convergence also applies equally to consumer and enterprise users. Although enterprises potentially stand to benefit sooner by increased employee productivity attributed to nomadic activities, the distinction between consumer and enterprise users will eventually blur significantly as workplace, home, and personal spaces converge.

These different types of convergence can also profoundly change how telecommunications sectors compete nationally and internationally. Until recently, competition was limited to enterprises within a given sector; for example, a mobile network operator (MNO) competed only with other MNOs, a fixed network operator (FNO) competed only with other FNOs, and Internet service providers (ISPs) competed only with other ISPs. But this is no longer the case; instead, various industries are now converging into a single telecommunications industry.

What Drives FMC?
As of 2008, there were more than 3.2 billion wireless consumers and only approximately 1.2 billion wireline subscribers. These numbers are reflected in significant changes in recent telecommunications trends. For example, fixed line usage is decreasing dramatically for classic services, and mobile usage is increasing steadily. Likewise, fixed line minutes of use (MoU) have been steadily declining, while mobile MoU have been rising. On the other hand, the fierce competition among MNOs and saturation of subscriber penetration have led to a decline in mobile average revenue per user (ARPU).

Furthermore, broadband Internet deployment has grown rapidly from 100 million subscribers in 2003, to 280 million in 2006, to 350 million in early 2008. At the same time, cable companies have gone from delivering just entertainment services to delivering dial tone as a bundled part of triple-play (entertainment, Internet, and voice). Significantly, voice over Internet Protocol (VoIP) usage is on the rise: In 2005, the number of US VoIP subscribers tripled to 4.5 million and VoIP revenue surpassed $1 billion, with the strongest growth occurring in the fourth quarter. Although Vonage is currently the leader in providing VoIP services in the US, Time Warner Cable could overtake them in the very near future.

So what does one do with too much competition and too few customers?


One approach is to start buying up competitors, but other tactics could include seeking new markets, new services, and new ways of offering services. This is exactly where FMC enters the picture.

What Benefits Does FMC Offer?
In general, FMC offers two basic benefits:

1. It guarantees interoperability.

2. It reduces CAPEX and operating expenditures (OPEX) by using common resources; transports; operation, administration, and management (OA&M) functions; services; etc.

More specifically, FMC can provide:

• Benefits for network operators—For operators that own both fixed and mobile networks, FMC makes it easier and cheaper to launch new services. It provides service continuity for customers, raising their network performance experience and thus reducing churn, thereby maintaining or increasing revenue. FMC also makes it easier to manage services, thereby leading to potential reduction in OPEX. For operators that have either fixed or mobile networks, FMC builds new services that leverage on the other network, thereby providing service differentiators. This becomes particularly important where there are no longer just MNOs competing with MNOs or FNOs competing with FNOs and the focus shifts from delivering connectivity to delivering cost-effective services. Furthermore, MNOs can realize a reduction in CAPEX brought about by (a) less spectrum being required as they employ wireless fidelity (Wi-Fi®) technology to offload traffic from cellular networks to WLANs and (b) fewer cell sites, repeaters, etc., being needed as, for example, they leverage fixed networks via WLANs.

• Benefits for equipment vendors—Original equipment manufacturers (OEMs) benefit from FMC through developing common products (reusable software/hardware components); gaining a larger addressable market (increased revenue); and producing better, richer, more cost-effective products.

• Benefits for customers—FMC provides customers with new services, continuity of service, personalized services (same ergonomics, same feel and look), mobility, simplicity (via a single number independent of network connectivity and via single billing), guaranteed QoS, security, and single customer care interface.

What are FMC’s Potential Drawbacks?
One concern with FMC is that the business model may regress toward that in place before the 1984 divestiture in the US, when there was basically only one network operator and few equipment vendors that provided limited services. After all, some believe, it was divestiture that fueled competition in the telecommunications industry, leading to a broader set of services, more operators competing in price and quality, the Internet explosion, and so forth.

The security requirements of FMC also pose a concern. These requirements are currently based on IMS requirements, which may be challenging for FNOs to meet. Security issues are addressed in more detail later in this paper.

What Does FMC Bring to Pre-Convergence Networks?
Current legacy networks are basically single-purpose networks that provide silo solutions. These are also referred to as vertically integrated networks. Each provides its own services: A fixed network offers fixed services, a mobile network offers mobile services, an entertainment network offers entertainment services. A user who wants to access different services must go back and forth between these silos to get the complete set. This is called the “spaghetti” solution (see Figure 1). In the FMC approach, applications and services are placed in one layer, there is a service control layer, and all users, regardless of access technology, can access the applications or services using the service control layer. The obvious benefits of this horizontally layered approach (called the “lasagna” solution) include the capacity to provide all services to all eligible subscribers; the diversity to provide market/service differentiators for different operators; the consistency to use standard technologies such as IP, SIP, IMS, etc.; and the ability to reduce the OPEX explosion.


[Figure 1 is a diagram contrasting vertically integrated “spaghetti” networks (separate mobile, fixed, and entertainment networks, each with its own applications) with horizontally layered “lasagna” networks built from applications and content, common service capabilities, and a network/transport layer over different IP access networks (“From Silos to Layers”).]

Figure 1. Vertically Integrated Networks vs. Horizontally Layered Networks (Traditional “Silos” of Services vs. Future Converged Services)


What Technology Enables FMC?
The emergence of the following technologies has been instrumental in the development of FMC:

• VoIP, SIP, IP version 6 (IPv6), GAN/ unlicensed mobile access (UMA)

• Multimode/multiradio phones (e.g., Wi-Fi/ cellular, softphones, software-defined radio [SDR])

• New access and core network solutions, such as very high data rate digital subscriber line (VDSL), mobile worldwide interoperability for microwave access (WiMAX™), IMS, etc.

What Current Issues and Challenges Face FMC?
Some of the current issues and challenges facing FMC are:

• Number plans and number portability—Fixed numbers and mobile numbers come from separate blocks. Prefixes contain important information for interconnection charging and number portability. Interconnection charges usually have symmetrical arrangements between two fixed networks and asymmetrical arrangements between a fixed and a mobile network. Typically, charges increase fixed network costs and reduce mobile network costs; only the consumers are the beneficiaries. Currently, there is separate fixed number portability (FNP) and mobile number portability (MNP), but no fixed/mobile number portability.

• Directory services—Fixed carriers provide unified directory service to their customers through a unified directory database containing information on all fixed line customers. Currently, mobile carriers have no such obligations. In fact, mobile numbers are considered personal data. Changes to this situation may be subject to public consultation.

• Handset availability—This is a typical issue in the early stages of the introduction of any telecommunications technology.

• Role of regulators—There are two opposing views about the role of regulators in FMC. One viewpoint is that it is not for regulators (the Federal Communications Commission [FCC] and/or the US Congress) to decide whether there should be FMC and what its pace of implementation should be. Rather, regulators should set up the environment so that market forces guide direction, extent, and pace. The other viewpoint is that, since the definitions of information, data, and entertainment have changed, the rules governing network and service providers should change accordingly to encourage fair and healthy competition. For instance, when the Telecommunications Act of 1996 was passed, the capability to provide VoIP was virtually unanticipated, and “telecommunications services” and “information services” had different meanings. Regardless of their differences, both camps agree that new policies regarding FMC should provide a stable and secure legal and regulatory environment to enable innovation and competition and encourage investment.

DIFFERENT FMC APPROACHES AND SOLUTIONS

Current FMC solutions can be classified into four basic categories:

• GAN–cellular integration
• Third Generation Partnership Project (3GPP™)–WLAN interworking
• Femtocells
• NGNs

GAN–Cellular Integration
The first phase of FMC started with WLAN–cellular integration via UMA technology. The initial activities were defined by the Unlicensed Mobile Access Consortium (UMAC) and published in September 2004 for global system for mobile communications (GSM)/general packet radio service (GPRS) integration with WLAN (Wi-Fi) service. The key drivers behind this development were (a) the need for a quick fix to address the challenges of improving coverage for markets deploying higher frequency GSM networks and (b) the desire for brand association for the early convergence offerings and the higher data speed advantage implied by Wi-Fi access.

In this approach, a Wi-Fi network is perceived simply as an extension of a GSM/GPRS network and UMA as a new access technology. The main element in UMA is a UMA network controller (UNC), which provides the same basic functionality as a conventional base station controller (BSC). That is, the UNC handles mutual authentication and encryption and data integrity. The UNC enables mobile devices to access circuit-switched (CS) services via A interfaces with mobile switching centers (MSCs) and to access GPRS services via Gb interfaces with serving GPRS support nodes (SGSNs). The UNC maintains session control during handoff. The basic UMA network is shown in Figure 2.



UMA development work was transferred to 3GPP in April 2005 and renamed GAN. [1]

In addition to improving and extending the UMA standard, the GAN specification allows any generic IP-based access network to provide connectivity between the handset and the GAN controller (GANC), through the Up interface. The GANC also includes a security gateway (SEGW) that terminates secure remote access tunnels from the handset. The SEGW interfaces with the authentication, authorization, and accounting (AAA) proxy-server via the Wm interface, and the AAA proxy-server retrieves the user information from the home location register (HLR). The GAN (UMA) functional architecture is shown in Figure 3.

GAN Security Issues
GAN supports security mechanisms at different levels and interfaces, as shown in Figure 4.

There are three levels of security:

• First, the security mechanisms over the Up interface protect signaling, voice, and data traffic flows between the handset and the GANC from unauthorized use, data manipulation, and eavesdropping.

• Second, authentication of the subscriber by the core network (CN) occurs at the MSC/visitor location register (VLR) or SGSN and is transparent to the GANC; however, there is cryptographic binding between the handset and the CN, as well as handset and GANC authentication, to prevent man-in-the-middle (MitM) attacks.

• Third, there is also an optional additional application level security mechanism that may be employed in the packet-switched (PS) domain to secure end-to-end communication between the handset and the application server (AS).

[Figure 2 is a network diagram showing a UMA-enabled, dual-mode handset reaching the mobile core network either through the conventional RAN (BTS/BSC) or through an unlicensed wireless network (e.g., Wi-Fi, Bluetooth) and an IP access network to the UNC in the UMAN; both paths connect to the CS core (MSC/GMSC) via the A interface and to the PS core (SGSN/GGSN) via the Gb interface.]

Figure 2. UMA (GAN)–Cellular Networks

[Figure 3 is a diagram showing the MS connected over a generic IP access network to the GANC and its SEGW via the Up interface; the GANC connects to the MSC over the A interface, to the SGSN over the Gb interface, and to the SMLC over the Lb interface, while the SEGW reaches the AAA proxy/server via the Wm interface, the HLR via D'/Gr', and, in the roaming case, the home PLMN AAA server via Wd.]

Figure 3. GAN (UMA) Functional Architecture

[Figure 4 is a diagram showing the three security levels along the path from the MS through the generic IP network, GANC, A/Gb interfaces, and MSC/SGSN to the application server: (1) Up interface security, (2) CN authentication and ciphering, and (3) optional data application security, e.g., HTTPS.]

Figure 4. GAN (UMA) Security Mechanisms


GAN Advantages/Disadvantages
As mentioned, GAN uses lower cost Wi-Fi access points to improve and extend cellular network coverage. When using service at home or inside a building, a subscriber could have excellent coverage. GAN may also potentially relieve congestion on the GSM/GPRS network. That is, GAN will shift traffic to unlicensed spectrum. This means, however, that if another service provider is legally operating service in the same spectrum, available bandwidth could become limited or even be eliminated. Traffic prioritization techniques can significantly improve Wi-Fi performance and, of course, voice and data paths can be separated. Also, because the handset must listen to two different radio technologies, it must have two radios on board. Both radios must scan for networks at all times, in case the user roams into an area where a Wi-Fi network exists. This could affect battery life. And since all the data from the handset goes through the carrier’s servers, it may be chargeable! Subscribers might wonder why they are being charged for the data going over their own Internet connections, when they can use other devices such as laptops for no extra charge.

3GPP–WLAN Interworking
The interworking between 3GPP systems and WLAN has been defined in [2]. This specification is not limited to WLAN but is also valid for other IP-based access networks that support the same capabilities toward interworking that WLAN supports (e.g., any type of digital subscriber line [xDSL]). The intent of 3GPP–WLAN interworking is to extend 3GPP services and functionalities to the WLAN access environment, where the WLAN effectively becomes a complementary radio access technology to the 3GPP system. The interworking levels have been categorized into six hierarchical service scenarios. [3] Of these, scenarios 2 and 3 are of the most interest; more specifically:

• Scenario 2—This scenario deals with 3GPP system-based access control and charging. Here, AAA is provided by the 3GPP system for WLAN access. This ensures that the user does not see significant differences in the way access is granted. This may also provide the means for the operator to charge access in a consistent manner over the two platforms.

• Scenario 3—The goal of this scenario is to allow the operator to extend 3GPP system PS-based services (e.g., IMS) to the WLAN. These services may include, for example, access point names (APNs), IMS-based services, location-based services, instant messaging (IM), presence-based services, multimedia broadcast multicast service (MBMS), and any service built on a combination of several of these components. Even though this scenario allows access to all services, it is a question of implementation as to whether only a subset of services is actually provided. Also, service continuity is not required between the 3GPP system part and the WLAN part.

Figure 5 shows the 3GPP–WLAN interworking architecture. The key network components are:


[Figure 5 is a reference model diagram showing the WLAN UE attaching through a WLAN access network and WLAN 3GPP IP access to the 3GPP visited network (3GPP AAA proxy, WAG, offline charging system) and to the 3GPP home network (3GPP AAA server, HSS, HLR, SLF, OCS, offline charging system, and PDG with access to the intranet/Internet), together with the associated reference points (Wa, Wd, Wn, Wp, Wu, Ww, Wm, Wg, Wf, Wx, Wo, Wy, Wz, Wi, Dw, and D'/Gr').]

Figure 5. 3GPP–WLAN Roaming Reference Model


• Packet data gateway (PDG)—3GPP PS-based services are accessed via a PDG. A PDG has functionality like that of the gateway GPRS support node (GGSN), e.g., charging data generation, IP address management, tunnel endpoint, QoS handling.

• WLAN access gateway (WAG)—Data to/from the WLAN access node (AN) is routed through the WAG via a public land mobile network (PLMN) through a selected PDG to provide a WLAN terminal with third generation (3G) PS-based services.

• 3GPP AAA proxy-server—This proxy-server handles all AAA-related tasks and performs relaying when needed.

• HLR/home subscriber server (HSS)—The HLR and HSS contain the required authentication and subscription data to access the WLAN interworking services. They are located within the 3GPP subscriber’s home network. The WLAN user profile should be stored in the HSS. For the HLR, the user profile may be located in the 3GPP AAA server. The user profile contains such information as user identification, operator-determined barring of 3GPP–WLAN interworking subscription and tunneling, charging mode (prepaid, post-paid, both), roaming privileges, and so forth.

• Online charging system (OCS)/charging collection function (CCF)/charging gateway (CGW)—These entities collect charging data, perform accounting and on-line charging, and carry out similar functions.

The critical issue is network (both WLAN and PLMN) selection and advertisement. The standard allows two modes: automatic and manual. WLAN access network selection is technology dependent, although user and operator may have “preferred” lists of access networks. PLMN network selection and advertisement, however, should be WLAN agnostic.

3GPP–WLAN interworking is part of 3GPP Release 6. On the cdma2000® side, the Third Generation Partnership Project 2 (3GPP2) has also undertaken similar activities, and there are now 3GPP2–WLAN interworking specifications. [4, 5]

WLAN–Cellular Handover Issues
The make-before-break handover provided by WLAN–cellular interworking and GAN enables seamless service provisioning. [6] There are intra-system (horizontal) handovers and inter-system (vertical) handovers. Handovers from 3GPP access networks to WLAN (or GAN) are called handover in, and handovers from WLAN (or GAN) to 3GPP access networks are called handover out.

WLAN–cellular integration architecture is characterized by the degree of interdependence between the two component networks. There are two types of integration architectures: tightly coupled interworking and loosely coupled interworking (see Figure 6).

In tight coupling, the WLAN (gateway) is integrated into the cellular infrastructure (connected directly to either the SGSN for 3GPP or the packet data serving node (PDSN) for 3GPP2) and operates as a slave to the 3G network. Here, the WLAN network appears to the 3G CN as another 3G access network, and the WLAN gateway hides the details of the WLAN network from the 3G core. The WLAN gateway also needs to implement all 3G protocols (mobility management, authentication, etc.) required in a 3G radio access network (RAN). This approach could have several disadvantages [7], two of which bear mentioning. First, since the 3G CN directly exposes its interfaces to the WLAN network, and direct connectivity to the 3G core is required, the same operator must own both the WLAN and the 3G network. Second, since WLAN traffic is injected directly into the 3G CN, 3G core elements have to be dimensioned properly for the extra WLAN traffic.

In loose coupling, the WLAN gateway connects to the Internet and does not have a direct link to the 3G network elements. Here, the WLAN and 3G data paths are kept completely separate. There are several obvious advantages to this approach; again, two bear mentioning. First, this approach allows the 3G and WLAN networks to be independently deployed and engineered for traffic. Second, via roaming agreements with many partners, the 3G operator can have widespread coverage without extensive CAPEX.

[Figure 6 is a diagram showing a tightly coupled WLAN (WISP1) whose gateway connects directly to the SGSN of the 3G core network, and a loosely coupled WLAN (WISP2) whose gateway connects to the 3G core network (AuC, AAA, HLR, IMS) over the Internet, alongside the UTRAN/GERAN access (Node B/BTS and RNC).]

Figure 6. 3G and WLAN Integration Architectures


Unfortunately, the latency associated with vertical handoffs could be long, leading to unacceptable dropped call rates; this may be particularly true for voice calls. To make matters worse, the transition between WLAN hotspots and cellular coverage is typically very abrupt (e.g., upon entering or leaving a building). One approach to potentially alleviate this issue is to use handoff trigger nodes (HTNs) at the transition areas. [8] An HTN generates link layer triggers that cause early initiation of the vertical handoff. During a successful handoff, the terminal is assigned capacity in the cellular network. In tightly coupled architecture, it is possible to reserve capacity for WLAN–cellular handoff, thus improving performance (reducing the dropped call rate). In loosely coupled architecture, a cellular base transceiver station (BTS) may not be able to distinguish the vertical handoff from a new call request.

It is important to note, as mentioned at the beginning of this section, that the 3GPP–WLAN interworking specification is also valid for any other IP-based access network that supports the same capabilities toward interworking as WLAN, such as xDSL.

Femtocell
In the femtocell solution, the access point (AP) supports the same radio access technology (e.g., universal mobile telecommunications system [UMTS]/high-speed packet access [HSPA] or cdma2000/evolution–data optimized [EV-DO]) as the outdoor system, thus eliminating the need for dual-mode handsets (DMHs). Because the femtocell is gaining momentum as a path to fixed-mobile substitution (FMS), this paper describes a number of its potential challenges in the following expanded discussion:

• Zero-touch provisioning—To successfully deploy femtocells on a large scale, the devices must be designed to include simple plug-and-play and self-configuration capabilities that avoid costly and time-consuming truck rolls.

• Network integration and AP management—Typically, mobile networks have thousands or tens of thousands of macrocell sites. Adding potentially millions of femtocell base stations (BSs) could bring about a surge in network management and operational requirements and a significant increase in network signaling traffic, all of which must be addressed through proper planning. Also, since information will be traveling over the Internet, data must be authenticated and encrypted (e.g., using IP security [IPSec]) to avoid typical security risks. Because femtocells serve as points of entry to mobile operator networks, additional security issues are associated with placing them at customers’ homes or premises and providing physical access to the devices. Mobile operators must be able to act remotely to detect and promptly disable any rogue or malfunctioning APs.

• Interference—Because femtocells operate in the licensed spectrum, the potential exists for interference between femtocells and macrocells, as well as interference among femtocells, particularly in large multi-dwelling units. Operators having the required spectrum can set aside a portion to be used solely for femtocell deployment, thus eliminating potential macrocell-related interference issues. However, the potential for interference among femtocells would still remain and must be addressed through proper planning.

• Automatic system selection and handoff—The handset must appropriately select the system to operate on and hand off calls properly as the user moves between outdoor macrocell and indoor femtocell. Proper performance is particularly important for voice calls, given that radio frequency (RF) conditions can change significantly in a relatively short period of time. Improper design and planning could lead to an unacceptable level of dropped calls and user dissatisfaction and churn.

• Troubleshooting—Large-scale deployment of femtocells will require carrier-grade diagnostic capabilities and tools to minimize costs associated with troubleshooting. To provide a viable business model, most customer trouble calls must be resolvable in the first few minutes of the first call.

• Location determination and E911—Since voice calls can be placed over femtocells, the device must provide support for emergency 911 (E911) services. This task may become particularly challenging as femtocells are placed indoors and are easily moved from location to location. Also, the impact that adding potentially millions of femtocells (or BSs) can have on the public safety answering points (PSAPs) must be examined and plans made to accommodate an increase of this magnitude.

• Lawful interception—Femtocell APs, like any other public communications system, typically must comply with all relevant lawful interception requirements (e.g., the Communications Assistance for Law Enforcement Act [CALEA] in the US).

• Timing and synchronization—Precise timing and synchronization knowledge is vital to ensure proper performance of many access technologies and services. This is particularly important for time-division duplex (TDD) systems, such as current mobile WiMAX profiles. Several synchronization options exist, among them:

– Synchronizing to a network time protocol server via the IP backhaul connection

– Listening to timing signals from a macro network (viable only where one exists)

– Using a global positioning system (GPS) receiver for timing (although the necessary GPS hardware increases cost, and the femtocell or antenna must have unobstructed GPS access)

• Access control—Femtocells can operate either in the open access mode, where any potential user can get access, or in the closed mode, where only a few (typically two to eight) authorized users can use the AP. In either case, emergency calls should always be allowed (a minimal admission sketch follows this list). On one hand, since the AP is typically purchased by a specific user and the AP taps into that user’s broadband services, the owner of the AP should have the choice of operating in open or closed mode. The owner of the AP may also choose to move the AP from one geographical location to another completely different one. On the other hand, since the AP operates on the licensed spectrum, the owner of the spectrum (mobile operator) is fully responsible (and liable) for any use of, and RF transmissions from, the AP. This creates a dilemma regarding who really owns the femtocell; this dilemma must be addressed carefully.

• QoS on backhaul and business models—As the volume of femtocell-generated traffic increases, the indoor broadband service providers being paid by subscribers to provide femtocell backhaul may choose one of the following actions:

– Downgrade the level of quality accorded backhaul traffic. This could potentially deteriorate user quality of experience to the point that certain real-time services such as VoIP or video become unacceptable. Of course, taking this step could eventually raise legal issues regarding Internet neutrality.

– Switch from unlimited, unmetered use to service packages with different prices and different maximum traffic volumes and QoS levels (e.g., 1 GB/month for $30, 3 GB/month for $50).

– Institute tariff sharing, where the indoor broadband service providers demand a portion of the revenue from the mobile operators based on the amount of backhaul they carry over their networks.
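As a minimal sketch of the access-control behavior described in the list above (open versus closed mode, with emergency calls always admitted), the hypothetical fragment below shows one way the admission decision could be organized. The class, function, and variable names are illustrative assumptions, not an operator or vendor implementation.

```python
# Minimal sketch of the femtocell admission policy described above. The AccessMode
# enum, the size cap, and the function names are assumptions for illustration only.
from enum import Enum

MAX_CLOSED_MODE_USERS = 8          # "typically two to eight" authorized users

class AccessMode(Enum):
    OPEN = "open"                  # any potential user can get access
    CLOSED = "closed"              # only the authorized list can use the AP

def admit(imsi: str, mode: AccessMode, authorized: set, is_emergency: bool) -> bool:
    """Decide whether the femtocell AP should serve this terminal."""
    if is_emergency:
        return True                # emergency calls are always allowed, in either mode
    if mode is AccessMode.OPEN:
        return True
    return imsi in authorized      # closed mode: owner-provisioned allow list

# Example: a closed-mode AP provisioned for a household (IMSIs are made up)
household = {"310150123456789", "310150123456790"}
assert len(household) <= MAX_CLOSED_MODE_USERS
assert admit("310150000000000", AccessMode.CLOSED, household, is_emergency=True)
assert not admit("310150000000000", AccessMode.CLOSED, household, is_emergency=False)
```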

The Femto Forum [9] is currently working to standardize femtocell technology and address most of the above issues. Large-scale femtocell deployments are expected in the 2009–2010 time frame.

NEXT-GENERATION NETWORKS

The full evolution to FMC will be through the NGN path. NGNs promise to be multiservice, multiprotocol, multi-access, IP-based networks: secure, reliable, and trusted. The NGN framework is set by the International Telecommunication Union–Telecommunication Standardization Sector (ITU-T) through ETSI. ETSI’s Telecommunications- and Internet-converged Services and Protocols for Advanced Networking (TISPAN) TC deals with fixed networks and migration from CS networks to PS networks. TISPAN was formed in September 2003 by the merger of the Telecommunications and Internet Protocol Harmonization over Networks (TIPHON) and Services and Protocols for Advanced Networks (SPAN) TCs. The TISPAN TC focuses on all aspects of standardization for present and future converged networks, including NGNs, and produces implementable deliverables that cover NGN service aspects, architectural aspects, protocol aspects, QoS studies, security-related studies, and mobility aspects within fixed networks. Major standards development organizations (SDOs) such as the Internet Engineering Task Force (IETF), 3GPP, 3GPP2, American National Standards Institute (ANSI), CableLabs, MultiService Forum (MSF), and Open Mobile Alliance (OMA) are actively involved in defining NGN standards. The TISPAN TC structure (working groups and projects) is summarized in Figure 7.

Figure 7. ETSI TISPAN TC Structure (eight working groups: Services, Architecture, Protocols, Numbering and Routing, QoS, Testing, Security, and Network Management)


TISPAN NGN Roadmap

NGN Release 1, published on December 9, 2005, incorporates the following capabilities: real-time conversational services, messaging (IM and multimedia messaging service [MMS]), and content delivery (e.g., video on demand [VOD]). This release provides limited mobility support, with user-controlled roaming but no in-call handover. It allows a wide range of access technologies (xDSL, Ethernet, WLAN, cable) and allows for interworking with the public switched telephone network (PSTN), integrated services digital network (ISDN), PLMN, and other IP networks.

NGN Release 2, finalized in early 2008, focuses on optimizing access resource usage according to user subscription profile and service use.

Work began on NGN Release 3 in mid-2008. When released, it is expected to include full inter-domain nomadicity and accommodate higher bandwidth access such as VDSL, fiber to the home (FTTH), and WiMAX.

NGN Requirements

An ideal network would fuse the best of today’s networks and capabilities and allow the incorporation of tomorrow’s inventions; it would have the following characteristics:

• The reliability of a PSTN
• The mobility of a cellular network
• The bandwidth of an optical network
• The security of a private network
• The flexibility of the Ethernet
• The video delivery of cable television
• The content richness of the Internet
• A decoupling of services from networks and transports
• Open interfaces
• Full QoS selection and control
• The capability to support legacy as well as NGN-aware terminal devices
• Simplicity and reasonable price

NGNs promise to provide exactly all of these and more!

NGN Architecture

The TISPAN TC has developed a functional architecture [10] consisting of a number of subsystems and structured in a service layer and an IP-based transport layer. This subsystem-oriented architecture enables new subsystems to be added over time to cover new demands and service classes. It also provides the ability to import subsystems defined by other standardization bodies. Each subsystem is specified as a set of functional entities and related interfaces. Figure 8 shows the overall NGN functional architecture.

The NGN service layer comprises the following:

• PSTN/ISDN emulation subsystem (PES)
• Core IMS
• Other multimedia subsystems (e.g., streaming subsystem, content broadcasting subsystem)
• Common components used by several subsystems (e.g., subsystems required for accessing applications, charging functions, user profile management, security management)

The transport layer provides IP connectivity for NGN users and is composed of a transport control sub-layer on top of transfer functions. The transport control sub-layer is further divided into the network attachment subsystem (NASS) and the resource and admission control subsystem (RACS).

The NASS provides registration at the access level and initializes terminals for accessing NGN services. More specifically, the NASS provides the following functionalities [11]:

• Authorization of network access based on user profile

• Dynamic provisioning of IP addresses and other terminal configuration parameters

• Authentication at the IP layer, before or during the address allocation procedure

• Location management at the IP layer

Figure 8. NGN Overall Architecture


There may be more than one NASS to support multiple access networks.

The RACS provides applications with a mechanism for requesting and reserving resources from the access network. More specifically, the RACS provides the following functionalities [12]:

• Session admission control
• Resource reservation, permitting applications to request bearer resources in the access network (a toy sketch follows this list)

• Service-based local policy control to authorize QoS resources and define policies

• Network address translation (NAT) traversal
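To make the admission-control and resource-reservation roles above concrete, here is a deliberately simplified, hedged sketch. The ToyRacs class and its method names are invented for illustration and do not correspond to the ETSI-defined RACS interfaces or reference points.

```python
# Toy sketch of the RACS idea described above: an application asks for bearer
# resources in the access network, and admission control grants or rejects the
# request against the remaining capacity. All names are illustrative assumptions.
class ToyRacs:
    def __init__(self, access_capacity_kbps: int):
        self.available_kbps = access_capacity_kbps

    def request_bearer(self, session_id: str, kbps: int) -> bool:
        """Session admission control with a simple resource reservation."""
        if kbps <= self.available_kbps:
            self.available_kbps -= kbps   # reserve the bearer resources
            return True
        return False                      # insufficient access-network resources

racs = ToyRacs(access_capacity_kbps=10_000)
assert racs.request_bearer("voip-1", 100)         # small VoIP session admitted
assert not racs.request_bearer("hdtv-1", 20_000)  # exceeds access capacity, rejected
```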

As mentioned, the major subsystems in the service layer are the PES, core IMS, and other multimedia subsystems, such as a streaming subsystem.

PSTN/ISDN Service Continuity in an NGN

The NGN supports the legacy plain old telephone service (POTS). That is, an NGN mimics a PSTN/ISDN from the point of view of legacy terminals (or interfaces) via an IP network through a residential gateway (RGW) or an access gateway (AGW). This is referred to as PSTN/ISDN emulation. All PSTN/ISDN services remain available and identical (i.e., with the same ergonomics) so that end users are unaware that they are not connected to a time-division multiplexing (TDM)-based PSTN/ISDN. This allows TDM equipment replacement, while keeping legacy terminals unchanged. The ITU-T H.248 protocol is used by the emulation AS (EAS) to control the gateway. A typical PES configuration is shown in Figure 9. PES is defined in [13].

Figure 9. Emulation Configuration

The NGN also supports PSTN/ISDN simulation, allowing PSTN/ISDN–like services to be provisioned to advanced terminals (IP phones) or IP interfaces. Although there are no strict requirements to make all PSTN/ISDN services available or identical, end users expect to have access to the most popular ones, possibly with different ergonomics. Either the pure or the 3GPP/TISPAN version of SIP is used to provide simulation services.

Core IMS

The IMS is the main platform for convergence. [14] Currently, the IMS is at the heart of convergent NGNs. The mobile SIP-based IMS is also the core of both 3GPP and 3GPP2 networks. It is expected that tomorrow’s entire multimedia mobile world will be IMS-based. The IMS is IP end-to-end and allows applications and services to be supported seamlessly across all networks. The IMS is defined by 3GPP [15] and builds on IETF protocols; 3GPP has enhanced those protocols to allow for mobility. The TISPAN TC has decided to adopt the IMS and work with 3GPP on any modifications or improvements that may be needed for the NGN. [16] The main differences between the core IMS and the 3GPP IMS are as follows:

• Access networks differ significantly (xDSL and WLAN versus UMTS), although 3GPP Release 6 provides WLAN access and Release 7 provides xDSL access.

• There are bandwidth and transmission delay constraints.

• NGN terminals are usually more feature-rich and have less stringent requirements, such as for a UMTS subscriber identity module (USIM)/IMS subscriber identity module (ISIM).

• Location information is fundamentally different.

• Explicit resource reservation signaling is not available in terminals and access edge points; there is no dedicated channel for signaling.

• IP version 4 (IPv4) is still very much in use on the NGN.

SECURITY CONCERNS

The telecommunications and IT industries are seeking cost-effective, comprehensive, end-to-end security solutions. ITU-T Study Group 17 (SG-17) is the designated lead study group for telecommunications security. Working groups within SG-17, called Questions (Qs), are tasked with looking into specific areas of telecommunications security and produce technical specifications that are published as Recommendations. Q7 is chartered to look into telecommunications security management, Q5 into security architecture and framework, and Q9 into mobile secure communications. Q5 has published key Recommendations X.800 [17] and X.805 [18], and Q9 has published X.1121 [19] and X.1122 [20]. Recommendation X.800 deals mainly with security architecture, and X.805 addresses security architecture for end-to-end communications.

ITU-T Recommendation X.800 [17] provides a systematic way of defining security requirements. It defines security services in the five major categories of authentication, access control, data confidentiality, data integrity, and non-repudiation, and it defines five threat models, as listed in Table 1.

ITU-T Recommendation X.805 [18] defines the security architecture for systems providing end-to-end communications. This security architecture was created to address the global security challenges of service providers, enterprises, and consumers and is applicable to wireless and wireline, including optical and converged networks. The security architecture logically divides the complex set of end-to-end network-security-related features into separate architectural components: security dimensions, security layers, and security planes as follows:

Table 1. Threat Models Defined by ITU-T Recommendation X.800


Model          Definition/Description                                           Attack On
Destruction    Destruction of information and/or network resources              Availability
Corruption     Unauthorized tampering with an asset                             Integrity
Removal        Theft, removal, or loss of information and/or other resources    Availability
Disclosure     Unauthorized access to an asset                                  Confidentiality
Interruption   Unavailability or unusability of the network                     Availability
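For teams that track these threats in review tooling, a small lookup such as the hypothetical sketch below can encode the Table 1 mapping from threat model to the security property attacked. The dictionary and function names are illustrative only and are not part of Recommendation X.800.

```python
# Small lookup encoding Table 1's threat-model -> attacked-property mapping;
# purely an organizational aid for threat reviews, not part of X.800 itself.
THREAT_ATTACKS = {
    "destruction": "availability",
    "corruption": "integrity",
    "removal": "availability",
    "disclosure": "confidentiality",
    "interruption": "availability",
}

def properties_at_risk(observed_threats):
    """Return the set of security properties attacked by the observed threats."""
    return {THREAT_ATTACKS[t] for t in observed_threats}

# Example: a review that flags disclosure and interruption risks
assert properties_at_risk(["disclosure", "interruption"]) == {"confidentiality", "availability"}
```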

SECURITY-RELATED TERMS

CONFIDENTIALITY: The concealment of information or resources

AUTHENTICITY: The identification and assurance of the origin of information

INTEGRITY: The trustworthiness of data or resources in terms of preventing improper and unauthorized changes

NON-REPUDIATION: The prevention of the ability to deny that an activity on the network occurred

AVAILABILITY: The ability to use the information or resources desired

THREAT: A potential violation of security

ATTACK: Any action that violates security. An attack has an implicit concept of intent. A router misconfiguration or server crash can also cause loss of availability, but they are not attacks. There are passive attacks and active attacks. A passive attack refers to eavesdropping on or monitoring transmissions to obtain message content or monitor traffic flow, whereas in an active attack, the attacker modifies the data stream to masquerade one entity as another, to replay previous messages, to modify messages in transit, or to create denial of service (DoS).

POLICY: A statement of what is and is not allowed

MECHANISM: A procedure, tool, or method of enforcing a policy. Security mechanisms implement functions that help prevent, detect, respond to, and recover from security attacks. Security functions are typically made available to users as a set of security services through application program interfaces (APIs) or integrated interfaces. Cryptography underlies many security mechanisms.

SECURITY DOMAIN: A set of elements made up of the security policy, security authority, and security-relevant activities. The set of elements is subject to the security policy for the specified activities, and the security policy is administered by the security authority for the security domain.


• Security dimensions—A security dimension is a set of security measures designed to address a particular aspect of network security. Recommendation X.805 identifies eight sets of dimensions that protect against all major security threats. These eight sets are: access control, authentication, non-repudiation, data confidentiality, communication security, data integrity, availability, and privacy.

• Security layers—To provide an end-to-end security solution, the security dimensions are applied to a hierarchy of network equipment and facility groupings, referred to as security layers. There are three security layers: applications, services, and infrastructure. These layers identify where security must be addressed in products. Each security layer has unique vulnerabilities, threats, and mitigations. The infrastructure security layer enables the services layer, and the services layer enables the application layer.

• Security planes—Security planes address the security of activities performed in a network. There are three security planes: end-user, control, and management. Each security plane is applied to every security layer. This yields nine security perspectives. Each security perspective has unique vulnerabilities and threats. Since there are eight security dimensions for each security perspective, this implies 72 combinations that need to be addressed! (A brief enumeration sketch follows this list.)
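The 72-cell analysis space is straightforward to enumerate mechanically. The sketch below, offered only as an organizational aid and not as part of Recommendation X.805, builds the 3 × 3 × 8 checklist of plane/layer/dimension combinations.

```python
# Simple enumeration of the X.805 analysis space described above: three security
# planes applied to three security layers give nine perspectives, and each
# perspective is assessed against the eight security dimensions (72 cells in all).
from itertools import product

PLANES = ["end-user", "control", "management"]
LAYERS = ["applications", "services", "infrastructure"]
DIMENSIONS = [
    "access control", "authentication", "non-repudiation", "data confidentiality",
    "communication security", "data integrity", "availability", "privacy",
]

checklist = [
    {"plane": plane, "layer": layer, "dimension": dim, "assessed": False}
    for plane, layer, dim in product(PLANES, LAYERS, DIMENSIONS)
]

assert len(checklist) == 72   # 3 planes x 3 layers x 8 dimensions
```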

The architecture for the end-to-end network security proposed by Recommendation X.805 is shown in Figure 10.

NGN Security Issues

NGN security requirements are addressed in [21], and security architecture is addressed in [22]. The security requirements for IMS applications are, to a large extent, based on 3G requirements for the IMS, although there are some differences and challenges related specifically to fixed networks. The TISPAN NGN TC is working with 3GPP to add, modify, or extend the existing 3GPP IMS to encompass the fixed network requirement.

Some of the main security issues currently under study are:

• Security to support xDSL, WLAN, cable, etc.
• NAT/firewall traversal of NGN signaling and media protocols
• Authentication of NASS and IMS services
• Security to RGWs and AGWs
• Interworking of various security mechanisms
• Interdomain/interconnection security
• Lawful interception
• Legacy terminals (without ISIM)

The NGN Release 1 security architecture assumes the existence of a well-defined NGN architecture that includes the IMS, NASS, RACS, and PES, and basically consists of the following major parts:

• NGN security domains
• Security services (authentication, authorization, policy enforcement, key management, confidentiality, and integrity)
• Security protocols (IMS access security, SIP hypertext transport protocol [HTTP] digest, presence security)
• Application key management
• SEGW functions
• IMS RGWs (to secure access of legacy terminals)
• NGN subsystem-specific security measures (e.g., for PES)

Figure 10. Security Architecture for End-to-End Network Security


Within the NGN security architecture, the following logical security planes with their respective security functional entities (FEs) are distinguished:

• NASS security plane—This plane encompasses the security operations during network attachment for gaining access to the NGN access network.

• IMS security plane—This plane encompasses the call session control functions (CSCFs) and the user profile server function (UPSF). UPSF is the NGN version of HSS in the 3GPP IMS.

• Generic authentication architecture (GAA)/generic bootstrapping architecture (GBA) key management plane—This plane is optional and is provided for application layer security.

The NGN security architecture partitions the NGN into the following security domains [22]:

• Access network security domain—FEs are hosted by the access network provider.

• Visited NGN security domain—FEs are hosted by a visited network provider, where the visited network may provide access to some application services. The visited network provider may host some applications and may own its own database of subscribers. Alternatively, or additionally, the visited network provider may outsource some application services to the home network provider or even a third-party application provider.

• Home NGN security domain—FEs are hosted by the home network provider, where the home network may provide some application services. The home network provider hosts some applications and owns a database of subscribers.

• Third-party application service provider (ASP) network security domain—FEs are hosted by the ASP, which provides some application services. The ASP may be a separate service provider different from the visited or the home network provider. The ASP may need to deploy authorization information offered by the visited or home network provider.

The NASS and RACS FEs are mapped to these four NGN security domains. Figure 11 shows the NGN security architecture with the NASS and RACS. SEGW functions within each security domain protect the exposed interfaces between security domains and ensure that a minimum security policy is enforced among the domains.

The NGN IMS security architecture is very similar to that of the 3GPP IMS. In the NGN, the 3GPP-specific transport domain is replaced by the generic IP transport domain.

The security architectures of the NGN application and the IMS application are also similar. The NGN also defines a security protocol (HTTP digest over transport layer security [TLS]) to protect PSTN/ISDN simulation services. It uses an extensible markup language (XML) configuration access protocol [XCAP] on the Ut interface between the terminal(s) as the XCAP client and the AS as the XCAP server. [23] Use of an authentication proxy for user authentication is optional (see Figure 12).

NGN security can also be divided into the following three basic areas:

• Access security
• Core security
• Interconnection security

Access security, also known as first-hop or first-mile security, is a difficult part of the NGN architecture to achieve because of the different access technologies within interconnects. Access security consists of the network attachment part and the service layer part. Network attachment includes network authentication between the UE and the NASS; network authentication is access technology dependent. For IMS access security, TISPAN has adopted the 3GPP solution of using the IPSec transport mode and SIP digest authentication and key agreement (AKA). The presence of NAT introduces some difficulties, but several potential solutions have been under investigation.

Figure 11. NGN Security Architecture with NASS and RACS and Different Domains

Figure 12. Application Security Architecture


Core or intra-domain security is mainly the responsibility of the network operator. Protection at the domain borders is insufficient; experience has shown that many attacks are launched from inside the network. The separation principle, whereby information flow types (signaling, management, and media) and node types are isolated and individually protected, could significantly reduce the extent of an attack. [24]

Interconnection or interoperator security is addressed by SEGWs, which enforce a domain’s security policy toward the SEGW of another domain. The use of the IPSec encapsulating security payload (ESP) tunnel mode with Internet key exchange (IKE) is a recommended option for mutual SEGW authentication, information integrity, and anti-replay. Confidentiality is optional. [14]

The one remaining issue is security for non-IMS services, which have been mainly PSTN/ISDN services. VoIP can be supported by the IMS but can also be provided via a PES configuration. The PES is well positioned to replace the PSTN. The PES uses the ITU-T H.248 protocol instead of SIP between its AGW control function (AGCF) and media gateways. For security aspects, a distinction must be made between the AGWs at the operator’s premises and the RGWs in the subscribers’ homes.

For AGWs, no authentication is required, since AGWs have a one-to-one relationship with an AGCF and security features can be provisioned. The security solutions for RGWs are somewhat more difficult because authentication is required. Authentication should be performed while maintaining the user’s PSTN experience. Security negotiations should be fully embedded in the RGW, and the RGW and AGCF should belong to the same security domain. See Figure 13.

Figure 13. PSTN Emulation Security

From a business standpoint regarding risks and vulnerabilities, network operators are typically most worried about theft of service via identity theft and DoS attacks. The former threatens revenue, while the latter endangers service delivery and consequently service quality. Poor service quality often leads to higher churn, which, in turn, leads to loss of revenue.

As a closing remark, it is worth mentioning that, until recently, handover between a CS voice (or potentially video or other multimedia services) call and a PS (WLAN or IMS) call was not addressed. This issue is now being addressed by 3GPP as voice call continuity (VCC), and the final specifications are part of 3GPP Release 7.

CONCLUSIONS

FMC has strong market drivers, and convergence is inevitable! FMC promises to provide something for everyone—from end-user to network operator to application or service provider. Current FMC solutions are evolutionary steps toward full convergence, which is envisioned as occurring via the NGNs. The wheels of convergence are already in motion. We can choose to embrace, participate in, and prepare for convergence or be caught unprepared!

TRADEMARKS

3GPP is a trademark of the European Telecommunications Standards Institute (ETSI) in France and other jurisdictions.

cdma2000 is a registered trademark and certification mark of the Telecommunications Industry Association (TIA-USA).

Wi-Fi is a registered trademark and certification mark of the Wi-Fi Alliance.

WiMAX is a trademark and certification mark of the WiMAX Forum.


REFERENCES

[1] 3GPP TS 43.318, “Generic Access to the A/Gb Interface,” Release 6.

[2] 3GPP TS 23.234, “3GPP System to WLAN Interworking – System Description.”

[3] 3GPP TS 22.934, “Feasibility Study on 3GPP System to WLAN Interworking.”

[4] 3GPP2 X.P0028-200, “Access to Operator Service and Mobility for WLAN Interworking.”

[5] 3GPP2 S.R0087, “3GPP2 – WLAN Interworking.”
[6] 3GPP TS 33.234, “3G Security: WLAN Interworking Security.”
[7] M. Buddhikot et al., “Design and Implementation of a WLAN/CDMA2000 Interworking Architecture,” IEEE Communications Magazine, November 2003, pp. 90–100.

[8] P. Khadivi et al., “Handoff Trigger Nodes for Hybrid IEEE 802.11 WLAN/Cellular Networks,” Proceedings of the First International Conference on Quality of Service in Heterogeneous Wired/Wireless Networks, 2004.

[9] Femto Forum <http://www.femtoforum.org>.
[10] ETSI ES 282 001, “NGN Functional Architecture.”
[11] ETSI ES 187 004, “NGN Functional Architecture; Network Attachment Sub System (NASS).”
[12] ETSI ES 187 003, “Resources and Admission Control Sub-system (RACS); Functional Architecture.”

[13] ETSI ES 187 002, “PSTN/ISDN Emulation Sub-system (PES); Functional Architecture.”

[14] R. Safavian, “IP Multimedia Subsystem (IMS): A Standardized Approach to All-IP Converged Networks,” Bechtel Telecommunications Technical Journal, January 2006, pp. 13–36.

[15] 3GPP TS 23.228, “IP Multimedia Subsystem (IMS) (Stage 2) – Release 5.”

[16] ETSI ES 282 007, “TISPAN: IP Multimedia Subsystem (IMS): Functional Architecture.”

[17] ITU-T Recommendation X.800, “Security Architecture.”

[18] ITU-T Recommendation X.805, “Security Architecture for Systems Providing End-to-End Communications.”

[19] ITU-T Recommendation X.1121, “Framework of Security Techniques for Mobile End-to-End Communications.”

[20] ITU-T Recommendation X.1122, “Guidelines for Implementing Secure Mobile Systems Based on PKI.”

[21] ETSI TS 187 001, “NGN SECurity (SEC); Requirements.”

[22] ETSI TS 187 003, “NGN Security; Security Architectures.”

[23] ETSI TS 183 023, “TISPAN; PSTN/ISDN Simulation Services; XML Configuration Access Protocol (XCAP) over Ut Interface for Manipulating NGN PSTN/ISDN Simulation Services.”

[24] M. Mampaey et al., “Security from 3GPP IMS to TISPAN NGN,” Alcatel Telecommunications Review, 4th Quarter 2005.

BIOGRAPHIES

Jake MacLeod is chief technology officer, Engineering and Technology, for Bechtel Communications, and a Bechtel principal vice president. He was recently named a Bechtel Fellow in recognition of his contributions to the advancement of telecommunications technology and acknowledgment of his strong advocacy of Bechtel’s role in that arena.

Jake joined Bechtel in May 2000 and is responsible for expanding the scope of Bechtel’s communications engineering services to include all aspects of technical design, from network planning to commercial system optimization. He initiated and developed Bechtel’s RF and Network Planning team, which, at its peak, had more than 150 world-class engineers. Jake also designed and established two world-class Bechtel telecommunications laboratories that have provided clients with applied research and development services ranging from interoperability testing to product characterization.

Jake was the first Bechtel Communications person to enter Baghdad in 2003, immediately after the conflict paused. He and his teams assessed the Iraqi telecommunications network, then designed and replaced 12 wire centers (equivalent to 240,000 POTS lines) in a period of 4 months, an unprecedented achievement in telephony. Jake and his teams also analyzed and replaced the air traffic control system at Baghdad International Airport. Under his purview, Bechtel’s technology teams have developed the Virtual Survey Tool (VST), an automated network planning tool with the potential to radically change the conventional methods of network design. Jake’s laboratories are currently working with global wireless equipment manufacturers to analyze and characterize UMTS, HSDPA, Node B hotels, WiMAX, and intuitive networks. Under his direction, the Bechtel Communications Technical Journal authoritatively analyzes cutting-edge operational issues. Bechtel Communications also hosts semi-annual global technology debates focused on the pros and cons of the most advanced telecommunications technologies. Throughout the year, Jake typically gives an average of six to eight keynote and technology-based presentations at industry conferences.

Jake started his career in the telecommunications industry in 1978, beginning in transmission engineering with Southwestern Bell Telephone Company (SWBTC) in San Antonio, Texas. His responsibilities at SWBTC included design and implementation of radio systems in Texas west of Ft. Worth. He participated in the original cellular telephone system designs for SWBTC in San Antonio, Dallas, and Austin. After SWBTC, Jake became the second employee for PageNet/CellNet and vice president of Engineering for its cellular division. He designed more than 135 cellular network systems, including San Francisco’s, and filed them with the FCC. In addition to his responsibilities at PageNet/CellNet, Jake was asked to chair the FCC’s Operational Relationships Committee.

Jake has held executive management positions with NovAtel (Calgary), NovAtel (Atlanta), Western Communications, and West Central Cellular. More recently, Jake spent 9 years with Hughes Network Systems (HNS), where he was instrumental in establishing its cellular division. Jake designed and established cellular and WLL systems in areas ranging from central Russia to Indonesia, as well as in 57 US markets.

Jake holds a BS degree from the University of Texas.

S. Rasoul Safavian brings more than 20 years of experience in the wired and wireless communications industry to his position as Bechtel Communications’ vice president of Technology. He is charged with establishing and maintaining the overall technical vision and providing guidance and direction to its specific technological activities. In fulfilling this responsibility, he is well served by his background in cellular/PCS, fixed microwave, satellite communications, wireless local loops, and fixed networks; his working experience with major 2G, 2.5G, 3G, and 4G technologies; his exposure to the leading facets of technology development as well as its financial, business, and risk factors; and his extensive academic, teaching, and research experience.

Before joining Bechtel in June 2005, Dr. Safavian oversaw advanced technology research and development activities, first as vice president of the Advanced Technology Group at Wireless Facilities, Inc., then as chief technical officer and vice president of engineering at GCB Services. Earlier, over an 8-year period at LCC International, Inc., he progressed through several positions. Initially, as principal engineer at LCC’s Wireless Institute, he was in charge of CDMA-related programs and activities. Next, as lead systems engineer/senior principal engineer, he provided nationwide technical guidance for LCC’s XM satellite radio project. Then, as senior technical manager/senior consultant, he assisted key clients with the design, deployment, optimization, and operation of 3G wireless networks.

Dr. Safavian has spoken at numerous conferences and industry events and has been published extensively. He has authored three technical papers and co-authored two in the Bechtel Communications Technical Journal (formerly, Bechtel Telecommunications Technical Journal) since August 2005. Most recently, he co-authored “Next-Generation Mobile Backhaul,” which appeared in the September 2008 issue.

Dr. Safavian is quite familiar with the Electrical Engineering departments of four universities: The George Washington University, where he has been an adjunct professor for several years; The Pennsylvania State University, where he is an affiliated faculty member; Purdue University, where he received his PhD in Electrical Engineering, was a graduate research assistant, and was later a member of the visiting faculty; and the University of Kansas, where he received both his BS and MS degrees in Electrical Engineering and was a teaching and a research assistant. He is a senior member of the IEEE and a past official reviewer of various transactions and journals. Dr. Safavian is pleased to have been selected for inclusion in Marquis Who’s Who in America®, Sixty-Third Edition, January 2009.

The original version of this paper was published in the Bechtel Telecommunications Technical Journal, Vol. 4, No. 2, June 2006, pp. 37–53.


THE USE OF WIRELESS BROADBAND ON LARGE INDUSTRIAL PROJECT SITES

Nathan Youell
[email protected]

Issue Date: December 2008

Abstract—As wireless broadband technologies continue to evolve, they are becoming more practical and cost effective for use on large industrial project sites where reliable voice and data communications are essential. As a result, projects are becoming interested in the potential benefits and efficiencies of wireless broadband communications systems. In addition to facilitating a mobile workforce, wireless broadband can leverage a host of other advantages on large sites (e.g., safety and security, rapid deployment, survivability, asset tracking, sensors). This paper helps companies and projects become familiar with wireless broadband implementations and knowledgeable about the surrounding issues. Thus, they will be better prepared to evaluate this innovative way of providing value to their customers and their industries.

Keywords—asset tracking, network planning, network in a box, NIB, project criteria, rapid deployment, spectrum availability, sustainable development, traffic analysis, untethered workforce, wireless broadband, wireless mesh, WiMAX

INTRODUCTION

It is not unusual for large engineering, construction, and other heavy industry companies to be working on a variety of projects at locations around the world. The substantial size and complexity of these projects—power plants, refineries, mines, smelters, roads, airports, environmental cleanup facilities, pipelines, etc.—make reliable onsite voice and data communications essential for successful project execution. Bechtel, a global leader in the engineering, construction, and management of large projects of all kinds, prides itself on bringing an unmatched combination of knowledge, skill, experience, and customer commitment to every job. [1] This includes designing and deploying the latest jobsite communications technologies.

To grow and to be an industry leader, a company must keep abreast of those technologies that can help it to improve its processes, become more efficient, and provide better value to its customers. Deploying wireless broadband communications on large project sites is an example of using a technology that has the potential to provide such benefits. (Project size could be based on physical size, communications requirements, number of potential communications users, or a combination of these and similar factors.)

BACKGROUND

Industry Trends

Until recently, the voice/data communications systems typically used by large industrial projects have been combinations of narrowband mobile radio, push-to-talk (PTT) phones, hardwired office data connections, private branch exchange (PBX) voice switches, and temporary fiber to construction trailers. The wireless broadband system is a fairly new network concept that provides high-speed wireless Internet and data access over a wide area. In the past several years, both licensed and unlicensed services and devices have enjoyed extensive growth in the wireless marketplace. Initial wireless access technology deployments have already begun, but have been mostly limited to smaller wireless fidelity “hotspots” and wireless bridging devices. Significant opportunities exist today to deploy and extend wireless networks/technologies across entire project sites to support multiple operations. An example of how wireless communications could be deployed across a project is illustrated in Figure 1.

No single wireless broadband solution is applicable to all communications needs on all projects.



Figure 1. Wireless Project Site (offices, construction sites, residential camps, a neighboring community, and mobile users served by base stations)

ABBREVIATIONS, ACRONYMS, AND TERMS

3GPP™ Third Generation Partnership Project—a collaboration agreement among several communications standards bodies to produce and maintain globally applicable specifications for a third-generation mobile system based on GSM technology

3GPP2 Third Generation Partnership Project 2—a sister project to 3GPP and a collaboration agreement dealing with North American and Asian interests regarding third-generation mobile networks

4G fourth generation enhanced digital mobile phone service that promises to boost data transfer rates to 20 Mbps

CAPEX capital expenditures

CPE customer premises equipment

EV-DO evolution–data optimized

FTP file transfer protocol

GPS global positioning system

GSM global system for mobile communication

HSPA high speed packet access

IEEE Institute of Electrical and Electronics Engineers

IT information technology

LAN local area network

LTE long-term evolution

M2M mobile-to-mobile

MMS multimedia messaging service

NIB network in a box

OPEX operating expenditures

PBX private branch exchange

PTT push to talk

RFID radio frequency identification

SMS short message service

UHF ultra high frequency

VHF very high frequency

WiMAX™ worldwide interoperability for microwave access (Although synonymous with the IEEE 802.16 standards suite and standardized by IEEE, WiMAX is a certification mark promoted by the WiMAX Forum.)

WLAN wireless LAN


Instead, these solutions can play an important role in the successful execution of specific types of projects. At the same time, wireless solutions are being included more and more often by customers as a project requirement and expectation; this calls for a level of expertise and judgment on the part of the contractor. Why not leverage this knowledge and the potential benefits of wireless systems to solve multiple communication challenges?

Often, the issue with respect to using wireless broadband is one of overcoming risk or even the perception of risk. Traditional solutions have been used for years, are well understood, and are universally accepted. There is a natural reluctance to implement solutions perceived as being unproven. Factors such as cost, design and installation, performance, and supportability can affect a project’s bottom line. The good news is that wireless solutions tend to mitigate these concerns. There is great interest in wireless solutions because their advantages over traditional methods can make taking the risk worth the effort.

Vendor Support

Many large industrial project sites and engineering and construction companies are currently evaluating and selecting wireless broadband solutions to support their operations. Prime examples include mining and refineries, construction site automation, private security systems, ports, and infrastructure projects. Equipment vendors are working with these potential customers to develop solutions targeted to these opportunities. Vendors include those traditionally used on large sites—such as Motorola®, Nortel™, Cisco Systems®, and Alcatel-Lucent™—as well as new entrants to this market—such as Huawei®, Alvarion®, and Tecore Networks®. For example, Motorola has recently broadened its portfolio of communications solutions for the mining industry [2], and Alvarion has produced a brochure addressing the communications needs of oil, gas, and industrial sites [3]. There is little doubt that vendors are constantly evolving and evaluating innovative ways of moving forward with wireless broadband solutions for the world’s large industrial sites.

PROJECT CATEGORIZATION AND EVALUATION

Project Categorization

As stated earlier, mobile wireless broadband solutions do not fit universally across all projects. Therefore, it is important to identify the characteristics and attributes of a given project as they relate to the potential feasibility of using wireless solutions for site communications. Classification and categorization can also be used to provide a generic baseline for determining the feasibility and benefit of using wireless communications across different types of projects.

The following is an example of this categorization process:

• Very large, remote project—Often, the very large, remote project is a greenfield site with little or no existing wireless service available. The large, remote project may realize the greatest benefit from ubiquitous wireless broadband coverage. Besides size and remoteness, this kind of project often has multiple locations that could benefit from wireless coverage between them. These locations may include residential camps, office buildings, construction sites, adjacent communities, and the transportation networks connecting them. Wireless solutions provide an opportunity to support the entire project with access to data, voice, applications, and other specialized services, making the large, remote project an ideal candidate for wireless broadband solutions. Categorization: High Feasibility.

• Mega-project—Another common project type is the mega-project, which may or may not be remote and may or may not encompass multiple locations. Examples include large power plants, airports, and environmental waste treatment facilities. The benefits of wireless broadband depend primarily on the general wireless criteria discussed later in this paper, as well as on factors such as the number of users, access to spectrum, project duration, and specific wireless niches. Categorization: Medium Feasibility.

• Linear project—A linear project takes place over a large area that is linear in nature. This kind of project tends to be road or railway construction, where crews typically focus on one or only a few local areas before moving to the next, usually down the road. Wireless broadband is potentially very useful because of the significant area the project usually covers from one end to the other. Imagine a worker who must frequently travel the entire length of the project to get updated construction drawings from the main office. The time and safety implications alone make wireless communication particularly attractive. However, the same project configuration makes wireless communication equally difficult and costly to implement. Because of the longer distance, more base stations or radios are usually needed to provide adequate coverage, and the potential number of users is relatively small. For the linear project, it usually makes more sense to leverage existing commercial wireless infrastructure if possible. Categorization: Low Feasibility.

• Widely distributed project—The widely distributed project extends across a very large geographical area, such as multiple states or regions throughout a country. A number of smaller regional offices scattered throughout the country is a good example of this type of project. A wide-scale wireless broadband solution is not practical to implement on such a large scale, but it does make sense to leverage any existing wireless communication infrastructure. For individual offices, local area network (LAN) solutions may make more sense, including using existing pre-wired office buildings and wireless LAN (WLAN) technologies. Categorization: Very Low Feasibility.



Table 1. Evaluation Criteria for a Wireless Solution

CLASSIFICATION: Does the project fit into a generic project classification such as one of those discussed above?

LOCATION: The project location may lend itself to using a wireless broadband solution or just as easily prevent its use. Location factors affecting the feasibility and acceptability of wireless solutions include the availability of spectrum, either licensed or unlicensed; the availability of acceptable technologies; the presence of service providers; and the terrain.

SPECTRUM AVAILABILITY: Spectrum availability is one of the most important considerations. Frequencies may be licensed or unlicensed, and, although there are similarities in band usage worldwide, each country or region can be completely different regarding how spectrum is used and regulated. For example, a frequency that may be unlicensed in one country could very easily be licensed in another. Also, certain regulations could limit the types of technologies used. (Spectrum issues are discussed further later in this paper.)

STATUS, POINT IN PROJECT: It is more difficult to affect a project far along in the construction effort. Projects in the initial design stage or with specific communication challenges or issues are the prime targets to be assessed for the use of wireless broadband technologies.

DURATION: Project duration needs to be evaluated against needs and costs (cost-benefit analysis).

NUMBER OF USERS: The actual number of wireless users is usually much less than the total number of people at the jobsite. However, it is anticipated that the number of users will grow as wireless technologies become more acceptable and gain more traction for different applications throughout a project. The number of users is useful to know for the cost-benefit analysis and is a critical piece of information for performing traffic analysis and determining capacity requirements.

PROJECT AREA: How physically large is the project? The size of a particular project is a critical factor in evaluating the potential benefit of implementing a wireless broadband solution because it is used to determine how many cell sites are needed to provide adequate coverage based on project requirements.

COVERAGE AND PERFORMANCE REQUIREMENTS: Wireless solutions are engineered to provide the required coverage throughout a project area and to meet performance requirements. The solution could be as simple as providing basic outdoor connectivity in a well-defined area or as complex as supporting bandwidth-intensive applications in a coverage-limiting location. For example, coverage may need to be provided inside buildings with unusually thick walls, in mines, through tunnels, or in other hard-to-accommodate conditions. Because of the nature of project work, the more difficult situation is more often the standard rather than the exception. It is expected that unrealistic situations or particularly high demands will dictate the use of wired technologies. Wireless broadband is not a solution to every problem, which is why it is important to evaluate the requirements to determine what fit, if any, may exist.

THROUGHPUT AND LATENCY REQUIREMENTS: Throughput and latency requirements play a big role in technology selection and implementation strategies.

APPLICATIONS: The types of applications that need to be supported largely determine the performance requirements. Applications include voice, Internet/intranet, e-mail, video conferencing, streaming video, and other company-specific applications, each with its own set of requirements for efficient operation. The challenge comes in designing a wireless network with limited resources that can be used in the same way as, or as an extension to, a traditional wired LAN that provides unlimited bandwidth and low latency. To provide a high-quality user experience, attention must be paid to the types of applications that will be used on the wireless networks. In some cases, it could even necessitate specific application development or redesign.

DISTRIBUTION (MULTIPLE SITES): Is this project distributed across different sites? If so, to what extent? A key benefit of wireless broadband solutions is that they may help provide fast, efficient, and consistent services between different locations (construction site, residential camp, warehouse, etc.) without the need for a dedicated wired infrastructure. On the other hand, if a project is too widely distributed, a wireless solution might not be the best. Instead, a traditional leased line or microwave system is still generally the best option for extending connectivity.

BUDGET: It is hard to focus solely on solutions and technologies without at least mentioning cost, which is often the deciding factor. The question to ask is whether or not the cost of the solution is in scale with the overall project value and allocated budget.

NEEDS: Is there a specific project need or application that requires wireless broadband? Or, maybe a customer requirement or demand? More and more frequently, customers are requesting wireless broadband communications.



Project Evaluation Criteria

In evaluating the possible use of a wireless broadband solution on a project, a host of items needs to be considered to make an informed decision about whether or not a wireless solution is appropriate and, if so, the best technology to use. Items to consider include, but are not limited to, those described in Table 1.
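As a hedged, first-pass illustration of how the Project Area and Number of Users criteria in Table 1 feed an early sizing estimate, the sketch below counts the base station sites needed for coverage and for capacity. The cell radius, per-user demand, and per-sector throughput figures are placeholder assumptions, not design values; a real design would come from RF planning tools, propagation studies, and measured traffic profiles.

```python
# First-pass sizing sketch tied to the "Project Area" and "Number of Users"
# criteria in Table 1. All numeric values below are illustrative assumptions.
import math

def sites_for_coverage(area_km2: float, cell_radius_km: float) -> int:
    """Cells needed to blanket the area, approximating each cell as a circle."""
    cell_area = math.pi * cell_radius_km ** 2
    return math.ceil(area_km2 / cell_area)

def sites_for_capacity(users: int, busy_hour_kbps_per_user: float,
                       sector_capacity_mbps: float, sectors_per_site: int = 3) -> int:
    """Sites needed to carry the aggregate busy-hour demand."""
    demand_mbps = users * busy_hour_kbps_per_user / 1000.0
    return math.ceil(demand_mbps / (sector_capacity_mbps * sectors_per_site))

# Example: a 40 km2 site with 500 wireless users (all figures assumed)
coverage_sites = sites_for_coverage(40.0, cell_radius_km=1.5)
capacity_sites = sites_for_capacity(500, busy_hour_kbps_per_user=50.0,
                                    sector_capacity_mbps=6.0)
required_sites = max(coverage_sites, capacity_sites)   # the binding constraint wins
print(f"coverage: {coverage_sites} sites, capacity: {capacity_sites} sites")
```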

Table 2 shows how the various criteria could relate to project benefits and the feasibility of implementation. Although the relationship varies from project to project, the table provides a good starting point.

Table 2. Project Benefits and Implementation Feasibility

Project Benefit:
• Number of Users
• Project Area
• Throughput
• Applications
• Specific Need or Requirement
• Cost Savings

Implementation Feasibility:
• Location
• Spectrum Availability
• Status, Point in Project
• Duration
• Number of Users
• Coverage and Performance Requirements
• Throughput
• Distributed Site
• Budget
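One informal way to combine the implementation-feasibility criteria from Table 2 during early screening is a simple weighted score, as in the hypothetical sketch below. The weights and the 0–2 rating scale are invented for illustration only and do not represent a project methodology.

```python
# Toy screening sketch loosely based on the implementation-feasibility criteria in
# Table 2. The criteria weights and the 0-2 scoring scale are illustrative only.
FEASIBILITY_WEIGHTS = {
    "spectrum_availability": 3,      # licensed/unlicensed spectrum actually usable on site
    "status_point_in_project": 2,    # early design stage scores higher than late construction
    "duration": 1,
    "coverage_requirements": 2,      # realistic coverage/performance expectations
    "budget": 2,
    "distribution": 1,               # single contiguous site scores higher than scattered offices
}

def feasibility_score(ratings: dict) -> float:
    """Weighted 0-100 score from per-criterion ratings of 0 (poor) to 2 (good)."""
    max_points = 2 * sum(FEASIBILITY_WEIGHTS.values())
    points = sum(FEASIBILITY_WEIGHTS[name] * rating for name, rating in ratings.items())
    return 100.0 * points / max_points

# Example: a large remote greenfield site early in design (ratings assumed)
remote_site = {
    "spectrum_availability": 2, "status_point_in_project": 2, "duration": 2,
    "coverage_requirements": 1, "budget": 1, "distribution": 2,
}
print(f"screening score: {feasibility_score(remote_site):.0f}/100")
```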

Other Considerations

Needs and Requirements Versus “Nice-To-Haves”

At present, wireless broadband is seen as “nice to have” but, for the most part, not yet necessary or required for completing the project. In other words, a project may recognize that workers with tablets are potentially very useful, yet not identify the value/savings of having an always-connected workforce. Even though wireless is potentially more efficient, the mindset is that current solutions still work, so why change them? However, a shift is occurring: specific applications are more regularly prescribing the need for wireless solutions, and customers are beginning to expect and require them. Companies and projects should become familiar with wireless broadband implementations and be knowledgeable about the surrounding issues so as not to be caught off guard.

Phases of the Project Site
In addition to categorizing a project according to type, consideration should be given to evaluating the three main phases where there are potential uses for wireless broadband on a given project site: construction, operation, and sustainability. The classification process discussed previously focuses mainly on the construction phase. However, as will be seen more commonly in the future, customer requirements may dictate the installation and subsequent use of wireless networks for the operations phase or even as a sustainability accomplishment. Depending on project requirements, schedule, and contractual obligations, wireless could be used during any one of these phases individually or, more efficiently, throughout two or even all three.

WIRELESS TECHNOLOGIES FOR CONSIDERATION

Several dominant wireless broadband technologies need to be considered as possible solutions for use on large industrial project sites as alternatives to cable and DSL. The main technologies to evaluate are currently Institute of Electrical and Electronics Engineers (IEEE) 802.11 wireless fidelity standards, wireless fidelity mesh architecture, IEEE 802.16e-2005 worldwide interoperability for microwave access (WiMAX™) standards, Third Generation Partnership Project (3GPP™) high-speed packet access (HSPA) and long-term evolution (LTE), and 3GPP2 evolution–data optimized (EV-DO). Each has its pros and cons, as shown in the brief evaluation given in Table 3.


Table 2. Project Benefits and Implementation Feasibility

Project Benefit:
• Number of Users
• Project Area
• Throughput
• Applications
• Specific Need or Requirement
• Cost Savings

Implementation Feasibility:
• Location
• Spectrum Availability
• Status, Point in Project
• Duration
• Number of Users
• Coverage and Performance Requirements
• Throughput
• Distributed Site
• Budget


One overarching difficulty to mention is that technologies are constantly changing and evolving, a critical factor when it comes to feeling confident in selecting and investing in a specific wireless technology.

Each technology listed in Table 3 is currently viable and has commercial equipment available, with the exception of LTE, which is a little further out but fast approaching. These technologies need to be evaluated to determine which not only provides the best performance for a specific project, but is also the most cost-effective solution for use on that project. This evaluation usually points to a wireless broadband technology such as WiMAX or a mesh implementation of the popular wireless fidelity standards. Initial wireless deployments focus mostly on providing “data pipes” and usually coexist with existing commercial wireless voice services such as global system for mobile communications (GSM) and traditional construction-oriented solutions like two-way radios. As technologies continue to improve, increasing consolidation should be expected among the different wireless systems on the jobsite.
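To make the evaluation concrete, a project team might reduce the comparison to a weighted score per technology. The following Python sketch shows the idea; the criteria, weights, and 1-to-3 desirability values are illustrative assumptions for a hypothetical project, not figures taken from Table 3.

```python
# Illustrative weighted-scoring sketch for shortlisting a wireless technology.
# Weights and desirability values (3 = most desirable) are assumptions only.
WEIGHTS = {"cost": 0.30, "spectrum": 0.20, "coverage": 0.30, "throughput": 0.20}

CANDIDATES = {
    "Wireless Fidelity Mesh": {"cost": 3.0, "spectrum": 3.0, "coverage": 2.0, "throughput": 2.5},
    "WiMAX":                  {"cost": 2.0, "spectrum": 2.5, "coverage": 2.5, "throughput": 2.0},
    "HSPA":                   {"cost": 1.5, "spectrum": 1.0, "coverage": 2.5, "throughput": 1.0},
}

def weighted_score(scores):
    """Combine per-criterion desirability values into a single project score."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Print candidates from highest to lowest score for this hypothetical project.
for name, scores in sorted(CANDIDATES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

In practice the weights would be set project by project, so the same ratings can produce a different shortlist on a remote greenfield site than on an urban brownfield site.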

POTENTIAL BENEFITS OF A WIRELESS BROADBAND SOLUTION

Implementing a wireless broadband network can bring many benefits. Wireless solutions provide an opportunity to give a project site-wide access to data, voice, applications, project servers, and other specialized services. They also provide additional features and benefits. Some of these are security and safety improvements, more rapid deployment, survivability, asset tracking, and sustainable development. These additional benefits may be a high priority on some project sites and, by themselves, justify implementation. Even where wireless broadband is less likely to fit as a bigger solution, there may still be target opportunities and niches that might be predisposed to wireless solutions and applications.

Examples of the benefits of wireless broadband on large industrial project sites are explained in more detail in the following paragraphs.

Untethered Workforce
A wireless broadband communications system can benefit a project by creating an untethered workforce that can access voice communications, e-mail, and important data at any time or place on the project site. For example, a wireless system would enable project personnel to operate with tablets and laptops anywhere in the field in real-time collaboration with engineers at the home office or any other location available on the company network. Think about wireless handheld devices and how prevalent they have become over the years. The untethered workforce could be the most visible and most immediate benefit of implementing a wireless broadband system on a project.


Table 3. Technology Evaluation

Technology (Cost / Spectrum (1) / Equipment Maturity / CPE Availability / Coverage-Range / Latency / Performance-Throughput):
Wireless Fidelity: Low / High / High / High / Low / Low / High
Wireless Fidelity Mesh: Low/Med / High / High / High / Medium / Medium / Med/High
WiMAX (2): Medium / Med/High / Medium / Medium / Med/High / Medium / Medium
HSPA: Med/High (3) / Low / High / High / Med/High / High / Low
EV-DO: Med/High (4) / Low / High / High / Med/High / High / Low
LTE: High (5) / Low / Low / Low / Med/High / Medium / Med/High

Ratings indicate relative desirability, from most desirable to least desirable.

(1) High = spectrum widely available; low = licensed spectrum required
(2) IEEE 802.16e-2005 standard
(3) Medium for small-scale systems (network in a box), high for commercial-grade equipment
(4) Medium for small-scale systems (network in a box), high for commercial-grade equipment
(5) Equipment not yet available


Sustainable Development
Wireless broadband networks deployed in third-world countries or territories that previously had none can be left behind as a sustainable development initiative. Local economies can greatly benefit from access to basic communication systems. Students can attend online classes at universities they could not otherwise travel to, and businesses can instantly become globally visible, dramatically increasing their customer base. A sustainable development project such as this helps broaden project acceptance by the local communities.

Retention and Employee Satisfaction
The availability of broadband services and wireless broadband networks is playing an increasing role in attracting and retaining employees, particularly for projects in remote locations. Regardless of where the workforce is from, employee expectations are on the rise when it comes to onsite perks or benefits. One of these is the ability to remain connected with friends, family, and the outside world. Given the choice between two projects, one with and one without wireless access, it is not difficult to envision which would have the most difficulty attracting and keeping its workforce.

Advanced Applications

Safety and Security
When a business office or project site is being deployed in a foreign country where hostilities are prevalent, safety and security are the two most important considerations. Here, a wireless broadband solution provides many improvement opportunities:

• The speed at which information can be relayed is very important and, in certain situations, can save lives. For example, simple text messaging allows the dissemination of safety messages prompting urgent action to all employees.

• Ubiquitous coverage allows personnel to remain in constant communication in the presence of dangerous situations.

• Global positioning system (GPS)-enabled mobile devices give a multitude of options for tracking and communicating with large numbers of people.

• In the near future, mobile devices may even be used to “detect” harmful threats, with a network of cell phones sensing and tracking chemical, biological, and radiological events.

Rapid Deployment
The speed at which a communications network can be deployed may be the most important factor, at least initially. Wireless broadband solutions provide a way in which to deploy onsite communications very quickly, especially when other alternatives are not available or require additional outside resources or construction efforts. Today, it is possible to have a complete communications system set up and ready to go within hours of arriving on site, using a rapid-deployment approach colloquially referred to as the “network in a box” (NIB).

Extending a coverage area in a traditional wired network involves digging another trench and laying more cable. The time and money required to do this are high compared with a wireless solution. Various telecommunications vendors provide rapid deployment or NIB systems that can be quickly set up and made operational to provide immediate coverage when emergency or merely temporary communication is needed. Examples of NIB systems include Cisco’s Aironet® 1524 Lightweight Outdoor Mesh Access Point [4] and Tecore’s rapidly deployable mobile networks [5]. NIB systems typically provide both radio and core network functions, yet are small enough to fit in the back of an SUV.

Survivability
If a project network is going to be active for a long time or carry business-critical information, its survivability may become more important than its speed or scalability. Survivability refers to a network’s ability to withstand disruptions. For example, fiber, once installed, provides very high data rates; however, depending on its location, it may be prone to downtime caused by cuts (both accidental and intentional). Cables are susceptible to being cut during construction digging or even deliberately by malevolent forces. Wireless solutions can be used to provide backup or protection from such events. They can also be combined with other solutions to provide more robust communications.

Asset Tracking
Tracking assets is an important task that all businesses face. Wireless broadband LANs provide suitable environments for implementing asset tracking solutions like radio frequency identification (RFID) systems. RFID uses paper-thin transponder tags that can be placed on equipment, products, and materials. Each tag emits a unique signal that positively identifies the object, as well as storing descriptive information.


RFID eliminates the need for an employee to get close enough to an item to physically scan a barcode to retrieve or write information on a particular piece of inventory. RFID also dramatically speeds the process of logging when assets are on the move during shipping and receiving. Zones can be set up to automatically scan and track items whenever they move in or out of a zone. Tags can hold several fields of data about a particular object, including name and/or catalog number. Wireless asset tracking technologies are particularly useful where a large inventory is dispersed across a large area.
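The zone behavior described above amounts to reducing raw tag reads to enter/exit events. The following Python sketch illustrates that reduction; the tag IDs, zone names, and event format are hypothetical and do not correspond to any particular RFID product.

```python
# Hypothetical sketch of zone-based RFID tracking: time-ordered reader events
# are reduced to ENTER/EXIT transitions per tag.
def track_zone_transitions(read_events):
    """read_events: iterable of (timestamp, tag_id, zone) tuples, time-ordered."""
    last_zone = {}     # tag_id -> zone of the most recent read
    log = []           # (timestamp, tag_id, action, zone)
    for ts, tag, zone in read_events:
        prev = last_zone.get(tag)
        if prev != zone:
            if prev is not None:
                log.append((ts, tag, "EXIT", prev))
            log.append((ts, tag, "ENTER", zone))
            last_zone[tag] = zone
    return log

# Example: a tagged valve skid moves from the laydown yard to the work face.
events = [
    ("08:00", "TAG-0417", "laydown_yard"),
    ("09:30", "TAG-0417", "laydown_yard"),
    ("10:15", "TAG-0417", "work_face"),
]
for entry in track_zone_transitions(events):
    print(entry)
```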

Sensors and Video Surveillance
Sensors of many kinds are used throughout a large project site, from perimeter surveillance to vehicle telemetry to monitoring water levels in dams and retention ponds. A large-scale wireless broadband network increases the practicality of such sensors and encourages innovative ways of solving unique problems. Video is a prime area where a wireless system dramatically increases flexibility. Without the wireless network, video camera locations either need to be determined in advance and prewired, or are limited to areas where existing wired interfaces are present. Of course, the wired infrastructure can always be expanded, but only at a cost. With wireless video capabilities, cameras can be installed anywhere within the coverage area and automatically be connected to the proper surveillance network. Simply put, without such a wireless network, the use of many of these sensors would be impractical or cost-prohibitive.

In summary, many applications can be deployed over a wireless network that add to or improve the network's features, and these features may prove crucial to overall project success and warrant careful consideration. The examples mentioned in this section represent only a few possible options. Many more exist, some of which have yet to be fully investigated.

IDENTIFYING APPLICATIONS AND REQUIREMENTS

It is important to consider the types of applications and services required on project sites that may be delivered more effectively via a wireless broadband network (see Table 4 for examples). As shown in Figure 2, applications and services have minimum performance requirements that must be met before they can operate properly and satisfy user expectations. For example, wireless coverage that supports only voice services or very limited data will not add sufficient large-scale value. Likewise, large bandwidths but high latencies are unacceptable because many specialized applications have strict performance requirements.
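As a rough illustration of this kind of screening, the Python sketch below checks which applications a modeled link could carry acceptably. The bandwidth and latency thresholds are approximate figures in the spirit of Figure 2, not standards or vendor requirements, and the application names are illustrative.

```python
# Illustrative check of whether a modeled wireless link can support an application.
# Thresholds are rough, illustrative values, not taken from any specification.
APP_REQUIREMENTS = {             # app: (min downlink kb/s, max one-way latency ms)
    "voice_telephony":    (64,    100),
    "video_conferencing": (1000,  100),
    "streaming_video":    (5000,  1000),
    "email_web":          (200,   1000),
}

def supported_apps(link_kbps, link_latency_ms):
    """Return the applications the modeled link can carry acceptably."""
    return [app for app, (kbps, latency) in APP_REQUIREMENTS.items()
            if link_kbps >= kbps and link_latency_ms <= latency]

# Example: a 2 Mb/s link with 150 ms latency handles e-mail and Web traffic
# under these thresholds, but not voice or video conferencing.
print(supported_apps(link_kbps=2000, link_latency_ms=150))
```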

IMPEDIMENTS TO A WIRELESS BROADBAND SOLUTION

Broadband wireless networks have been tested, used, and deployed in different fashions on different projects on multiple occasions. In many cases, however, these networks have been less than successful. Lower-than-expected performance, poor coverage, unproven technology, lack of interoperability with required applications, and general underutilization are some of the more common causes of failure. Wireless deployment on a large-scale project (beyond hotspot Internet-café-style and office environments) has yet to be fully proven, and the initial attempts to do so have often led to distrust of vendor marketing ploys and the technology itself. These factors are compounded by the fact that projects are extremely cost and schedule (risk) sensitive and often do not have the luxury of implementing new technologies without a well-thought-out and proven business case. In most cases, wireless does not fit within this framework.

In addition to the more intangible perceptual impediments, spectrum, performance, and interoperability are the three most tangible impediments to successfully deploying wireless broadband on large project sites. Each is discussed in the following paragraphs.


Table 4. Application Examples

Generic:
• Two-Way Radios (PTT)
• Mobile Voice
• Mobile Data
• E-Mail
• Streaming Video
• Messaging (SMS, Multimedia)

Specific:
• Project Database
• Asset and Inventory Tracking
• Timecards
• Other Corporate Applications

Other:
• Sensors
• Location-Based Services
• Emergency Services
• Survivability/Backup



Spectrum
Spectrum remains the most critical piece of the puzzle for wireless broadband technology and its deployment. Any wireless solution requires a finite amount of spectrum; therefore, the availability of spectrum is the first consideration on any project and is crucial to the overall success of the wireless solution.

Because licensed spectrum is usually expensive and involves dealing with an often unknown bureaucracy, unlicensed spectrum is initially very appealing. However, it is important to consider the potential risks associated with using unlicensed spectrum and to plan accordingly by developing a robust network, periodically scanning the spectrum, etc.

Every country has its own rules and regulations controlling the use of wireless networks, and complying with them is generally the initial hurdle to deploying networks at many of the remote project sites throughout the world. Up to this point, the most widely regulated wireless spectrum has been standard ultra high frequency (UHF) and very high frequency (VHF).

To further complicate matters, as wireless technologies evolve, their bandwidth and spectrum requirements generally increase.

For example, future fourth-generation (4G) technologies will rely on scalable, very wide channel bandwidths, such as 20 MHz and even 40 MHz. Even though current WiMAX specifications allow for 20 MHz bandwidths, profiles have yet to be defined beyond 10 MHz channels. Without this wideband spectrum, performance metrics and target data rates will be unachievable.
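One way to see why channel width matters is the Shannon capacity bound, C = B log2(1 + SNR), which scales linearly with bandwidth for a fixed signal-to-noise ratio. The short Python sketch below compares the theoretical ceiling for several channel widths; the 15 dB SNR is an arbitrary illustrative value, and real system throughputs fall well below this bound.

```python
# Why wider channels matter: the Shannon bound C = B * log2(1 + SNR) grows
# linearly with channel bandwidth. The 15 dB SNR is an illustrative assumption.
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_mhz * math.log2(1 + snr_linear)   # MHz x bits/s/Hz = Mb/s

for bw in (5, 10, 20, 40):
    print(f"{bw:>2} MHz channel: theoretical ceiling ~{shannon_capacity_mbps(bw, 15):.0f} Mb/s")
```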

Performance Measurements and Considerations
On the most basic level, wireless communications systems on jobsites need to provide connectivity (quick, reliable, and often temporary) to construction trailers and other site locations not easily serviced via traditional cable due to time or cost constraints. In reality, due to bandwidth demands and security and reliability issues, high-bandwidth cable is, and will continue to be, used in highly concentrated and other key areas where replacing it with wireless is not an option.

Without a proper understanding of performance requirements and considerations, wireless broadband cannot really be thought of as a replacement for traditional LAN cables inside buildings or as a replacement for fiber or other cables installed for high bandwidth applications.

Interoperability of Specialized Applications
In many cases, specialized or company-specific applications are neither optimized nor geared for use in a wireless environment. Instead, they are usually designed for use on corporate LANs.

[Figure 2 plots network latency (from more than 1 second down to 20 ms) against bandwidth (from 64 kb/s to more than 5 Mb/s) for services ranging from SMS, voice mail, and FTP through voice telephony, mobile office/e-mail, MMS and Web browsing, and audio/video streaming and download, to video conferencing, real-time gaming, and M2M remote control.]


(Source: IST-2003-507581 WINNER, D1.3 Version 1.0, Final Usage Scenarios. 30/06/2005; “Parameters for Tele-traffic Characterization in Enhanced UMTS2,” University of Beira, Portugal, 2003)

Figure 2. Latency and Bandwidth Requirements for Various Services [6]


Because such applications are not designed for wireless constraints, their performance could suffer substantially and even reduce the performance of applications that have been optimized for wireless systems.

An ongoing effort is needed to further test and develop the applications typically used on large industrial sites to improve their bandwidth and latency efficiencies or even to optimize them for use on mobile devices.

CONSIDERATIONS AND RECOMMENDATIONS

Commercial versus Enterprise Solutions
Even though a certain project may seem to be an ideal candidate to benefit from the deployment of an onsite wireless broadband network, one or more specific negative attributes may weigh more heavily. Depending on the project’s configuration and location, numerous solutions could be used to provide onsite wireless access. In a remote location, there may be no choice but to install and operate a standalone enterprise network. However, at a project site in or near a city or for a short-term project, especially in the United States and other developed countries, operating an independent wireless network may be impractical for a number of reasons, such as spectrum availability. For these projects, leasing the services of an existing wireless provider might be more attractive, offering good coverage and predictable performance.

In some cases, time, money, number of users, etc., make the best solution the use of existing commercial services. The point is to use the best solution for each project. Building a wireless network on every project is not an automatic solution for project communication needs.

The graph in Figure 3 plots the results of an analysis of using a commercial or an enterprise wireless (WiMAX) solution on a sample project. Based on conservative assumptions, there is a slightly greater than 5½-year breakeven point for using a dedicated enterprise wireless solution instead of an existing commercially available solution. This outcome does not take into account additional supportability issues. The assumptions used in the analysis are listed in Table 5 and include a minimum monthly lease cost of $500 for each site.

[Figure 3 plots total cost, in thousands of dollars, against years 1 through 10 for the commercial and WiMAX options.]

Figure 3. Commercial versus Enterprise Analysis


Table 5. Commercial versus Enterprise Assumptions

Commercial System:
Users: 100
Cost per User (CPE): $200
Cost per User (Monthly): $50
Total Initial Cost: $20,000
Total Monthly Cost: $5,000
Total Annual Cost: $60,000

WiMAX System:
Required Distance (Miles): 11
Number of Miles per Base Transceiver Station: 4
Estimated Number of Sites: 3
Miscellaneous Costs (Antennas, Cables, etc.): $3,000/Site; $9,000 Total
Radio Equipment: $25,000/Site; $75,000 Total
Customer Premises Equipment: $250/User; $25,000 Total
Core Network Equipment: $150,000 Total
Site Lease Cost: $500/Month; $6,000/Year
Total Lease Cost: $1,500/Month; $18,000/Year
Total Initial Cost: $259,000
Total Monthly Cost: $1,500
Total Yearly Cost: $18,000


This recurring site lease expense adds to the increase in total cost over time. If, for example, no leasable sites existed and new sites had to be constructed, the initial cost would be higher, but the cost over time would remain relatively flat.
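A minimal sketch of the Figure 3 comparison, using only the initial and recurring totals from Table 5, is shown below in Python; it reproduces the roughly 5.7-year crossover point described above.

```python
# Sketch of the Figure 3 comparison using the Table 5 totals: cumulative cost
# of leasing commercial service versus building a dedicated WiMAX network.
def cumulative_cost(initial, annual, years):
    return initial + annual * years

COMMERCIAL = {"initial": 20_000, "annual": 60_000}    # 100 users, CPE plus $50/month each
WIMAX      = {"initial": 259_000, "annual": 18_000}   # 3 sites, core network, site leases

for year in range(1, 11):
    c = cumulative_cost(COMMERCIAL["initial"], COMMERCIAL["annual"], year)
    w = cumulative_cost(WIMAX["initial"], WIMAX["annual"], year)
    print(f"Year {year:2}: commercial ${c:,}, WiMAX ${w:,}")

# Breakeven is where the two cumulative cost lines cross (about 5.7 years).
breakeven = (WIMAX["initial"] - COMMERCIAL["initial"]) / \
            (COMMERCIAL["annual"] - WIMAX["annual"])
print(f"Breakeven after about {breakeven:.1f} years")
```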

Network Planning
The value of proper planning and engineering cannot be overstated, and network planning for a proposed wireless broadband solution is no exception. The ease with which wireless access points can be deployed in a home or around a building or campus does not translate into ease of implementing a wireless network on a large project site. Networks deployed without this consideration are destined to face challenges and issues.

Coverage Required
The actual layout of a project site must be taken into account in determining optimal equipment locations and analyzing expected coverage. This analysis should consider geographic features and clutter in the desired coverage area. Are there hills or trees that could obstruct coverage? Is indoor coverage required? What types of buildings are there and how are they constructed? What kinds of towers (existing, rooftop, new construction) are available and where are they? What sorts of customer premises equipment (CPE) will be used (fixed outdoor, fixed indoor, mobile, etc.)? Answering these and similar questions and preparing accordingly will help address coverage and performance issues in the design. The results of this analysis should appear similar to the coverage plot shown in Figure 4.

Traffic Supported: Capacity
A traffic analysis will need to be performed for the target project based on the expected number and types of users. This analysis will determine user requirements and help to appropriately size the wireless network. Just because coverage is available does not mean the resources are adequate to provide service to the desired number of users. Capacity planning should address not only the number of users, but also the demands they make on the network. Some users may be voice only, some may be heavy data users, some may use video cameras or other sensors, etc.

Number of Sites Required
The results of the coverage analysis and traffic study determine the expected number of sites and radio resources required to adequately cover the project site. This may include specialized sites for indoor coverage or enhanced coverage for potential trouble areas.
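The Python sketch below combines the two preceding steps: the site count is driven by whichever constraint, coverage or busy-hour capacity, needs more sites. The coverage figures echo Table 5 (11 miles, 4 miles per base station); the user mix, per-user demands, and per-site capacity are illustrative assumptions, not project data.

```python
# Rough sizing sketch combining the coverage analysis and traffic study.
# All per-user and per-site figures are illustrative assumptions.
import math

def sites_required(route_miles, miles_per_site, user_mix, site_capacity_mbps):
    coverage_sites = math.ceil(route_miles / miles_per_site)
    # Busy-hour offered load: users of each type times their average demand (Mb/s).
    demand_mbps = sum(count * per_user for count, per_user in user_mix.values())
    capacity_sites = math.ceil(demand_mbps / site_capacity_mbps)
    return max(coverage_sites, capacity_sites), coverage_sites, capacity_sites

# Hypothetical user mix: (number of users, average demand per user in Mb/s).
user_mix = {
    "voice_only":   (60, 0.05),
    "data_users":   (35, 0.50),
    "video_camera": (5,  2.00),
}
total, by_coverage, by_capacity = sites_required(11, 4, user_mix, site_capacity_mbps=15)
print(f"coverage-driven: {by_coverage}, capacity-driven: {by_capacity}, required: {total}")
```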

Supportability—Operating Expenditures
While this paper primarily considers the capital expenditures (CAPEX) of purchase and deployment, the operating expenditures (OPEX) should not be forgotten. Supportability is a major concern (and for good reason), especially within the function responsible for operating the network, usually the information technology (IT) function. Initial costs are only a small portion of overall costs. Operating and supporting costs play a critical role in the successful implementation of wireless broadband communications across multiple projects within a company. Hence, vendor and process standardization is crucial not only to lower OPEX, but also to prevent each project from individually “reinventing the wheel.”

CONCLUSIONS

Wireless broadband is currently being used (to different degrees) on many of today’s large project sites. Current implementations are usually extensions of basic WLAN systems—extending wireless access points from an indoor office environment to limited outdoor and wide area coverage. Success has also been limited because little network planning is usually involved. In addition, wireless bridging for a LAN extension is becoming very popular for providing connectivity throughout a project site (office to trailer or trailer to trailer).

There is keen interest on projects in continuing to improve performance by deploying (where appropriate) advanced wireless broadband technologies. End users need and want wireless connectivity and technologies, and the expertise needed to provide it is readily available.


Figure 4. Typical Wireless Broadband Access Coverage Plot


One of the biggest hurdles to wireless broadband deployments is not technical, but cultural, namely, risk avoidance. Project sensitivity to cost and schedule factors necessitates the preparation of a well-thought-out and proven business case, i.e., a well-defined need for a particular wireless solution. Projects are simply reluctant to change their tried and true, well-known methods of operation for something new.

There is an underlying need to raise awareness of what constitutes realistic capabilities of, and expectations for, wireless networks. Unfortunately, wireless broadband networks have sometimes been sold as a solution to all communication needs. Even though this is not the case, there are certain areas where wireless is decidedly useful.

ACKNOWLEDGMENTS

The contents of this paper were adapted from work performed under a 2008 Bechtel Technology Grant titled “Broadband Wireless Technology for Bechtel Project Sites.” [7] The author would like to thank the co-recipients of the grant as well as Larry Bartsch, all of whom contributed to the effort.

TRADEMARKS

3GPP is a trademark of the European Telecommunications Standards Institute (ETSI) in France and other jurisdictions.

Alcatel-Lucent is a trademark of Alcatel-Lucent, Alcatel, and Lucent Technologies.

Alvarion is a registered trademark of Alvarion Ltd.

Cisco Systems and Aironet are registered trademarks of Cisco Systems, Inc., and/or its affiliates in the United States and certain other countries.

Huawei is a registered trademark of Huawei Corporation or its subsidiaries in the People’s Republic of China and other countries (regions).

Motorola is registered in the U.S. Patent and Trademark Office by Motorola, Inc.

Nortel is a trademark of Nortel Networks.

Tecore Networks is a registered trademark with the U.S. Patent and Trademark Office.

WiMAX and WiMAX Forum are trademarks of the WiMAX Forum.

REFERENCES

[1] Bechtel website, <http://www.bechtel.com>.

[2] Motorola; see, e.g.: “Motorola Broadens Portfolio of Communications Solutions for the Mining Industry” <http://www.motorola.com/mediacenter/news/detail.jsp?globalObjectId=10166_10095_23>; “Case Study: In the Coal Fields of Wyoming, wi4 Mesh Solutions are Mining Enhanced Efficiency, Productivity and Profitability” <http://www.motorola.com/staticfiles/Business/Solutions/Industry%20Solutions/Manufacturing/MOTOwi4/_Document/_Static%20files/International-Mining[1].pdf?pLibItem=1&keywords=Manufacturing+Education+Case%20Studies>; and MOTOMESH™ Solo Networks – “Enduring Broadband Performance in Challenging RF Environments” <http://www.motorola.com/staticfiles/Business/Products/Wireless%20Broadband%20Networks/Mesh%20Networks/MOTOMESH%20Solo/_Documents/Static%20files/MOTOMESH%20Solo_Brochure_9.21.08.pdf>.

[3] “Alvarion: Pumping Up Productivity,” Alvarion <http://www.alvarion.com/upload/contents/291/Oil_and_gas_Brochure_LR[1].pdf>.

[4] “Cisco Aironet 1524 Lightweight Outdoor Mesh Access Point,” Cisco Systems <http://www.cisco.com/en/US/products/ps8731/index.html>.

[5] “Rapidly Deployable Mobile Networks,” Tecore Networks <http://www.tecore.com/solutions/rapid.cfm>.

[6] “Charting the Course for Mobile Broadband: Heading Towards High-Performance All-IP with LTE/SAE,” Nokia Siemens Networks white paper, 2008 <http://www.nokiasiemensnetworks.com/NR/rdonlyres/AB092948-6281-4452-8D59-90B7A310B5BA/0/broadband_lte_sae_update_intranet.pdf>.

[7] J. Centi, J. Owens, and N. Youell, “Broadband Wireless Technology for Bechtel Project Sites,” Bechtel Technology Grant, 2008.


BIOGRAPHY

Nathan Youell joined Bechtel in 2001 and is currently a systems engineer with the Strategic Infrastructure Group. He is the resident subject matter expert for wireless systems and is responsible for testing and evaluating telecommunications equipment, as well as modeling and simulating critical infrastructure, with a primary focus on telecommunications systems.

Previously, as a staff scientist/engineer and the manager of the Bechtel Telecommunications Laboratory, Nathan gained and then provided expertise in developing and implementing test plans and procedures. He was instrumental in creating the Bechtel Training, Demonstration, and Research (TDR) Laboratory in Frederick, Maryland, and the Bechtel Wireless Test Bed (BWTB) in Idaho Falls, Idaho. He also tested a wide range of telecommunications equipment and technologies, including TMA, 802.11, 802.16, GSM, DAS, DWDM, FSO, microwave and millimeter wave radio, and wireless repeaters. Earlier, Nathan was an RF engineer in the New York and Washington, DC, markets as part of Bechtel’s nationwide build-out contract with AT&T Wireless.

Nathan has authored three papers and co-authored three more in the Bechtel Communications Technical Journal (formerly, Bechtel Telecommunications Technical Journal) since its inception in December 2002; his most recent paper — “4G: Fact or Fiction?”— appeared in the September 2008 issue.

Nathan received his MS and BS degrees, both in Electrical Engineering, from Clemson University, South Carolina.


DESKTOP VIRTUALIZATION AND THIN CLIENT OPTIONS

Brian Coombe

[email protected]

Issue Date: December 2008

Abstract—Global engineering, procurement, and construction (EPC) firms face an increasingly difficult environment—projects are growing in risk, complexity, and size, while customers expect delivery on or ahead of schedule, at or below budget, with exacting safety and quality standards. EPC firms must leverage advances in technology to improve their workers’ efficiency and effectiveness and utilize the opportunities presented by a globally educated workforce. The unique demands of the flexible, mobile, and distributed employees of these firms can be addressed by virtualization and thin client architectures. This paper outlines the challenges faced by the EPC firm and the advantages and disadvantages to consider when evaluating both desktop and application virtualization, as well as client architecture choices.

Keywords—client, data center, desktop, hypervisor, infrastructure, paravirtualization, remote, server, streaming, terminal services, thick, thin, virtual, virtualization

INTRODUCTION

Modern businesses spend much time, money, and effort to maintain their desktop computing infrastructures. Along with the costs of deploying and maintaining hardware, information technology (IT) departments are saddled with the burden of addressing software updates, application installations, client-issue drivers, patches, license administration, malware and viruses, and general troubleshooting. While remote administration tools have eased this burden, many issues arising within the context of the traditional computing architecture (see Figure 1) cannot be solved remotely, requiring direct interaction with the actual desktop. [1]

Figure 1. Traditional Computing Architecture

[Figure 1 diagram: a data center hosting Exchange, CAD, Applications A through D, storage, and a proxy, connected over the LAN, WAN, and Internet to desktops running applications and the desktop OS locally.]


This problem is particularly complicated in the global engineering, procurement, and construction (EPC) industry, where users may be remotely deployed, may be located at disparate sites, and may perform multiple roles that need different applications for each. At the same time, EPC projects are growing in risk, complexity, and size, while customers continue to expect delivery on or ahead of schedule, at or below budget, with exacting safety and quality standards. These circumstances make it vital for EPC firms to leverage advances in technology to improve their workers’ efficiency and effectiveness and to utilize the opportunities presented by a globally educated workforce. Fortunately, these distinct demands can be addressed by virtualization and thin client architectures.

VIRTUALIZATION

Managing the licenses, Java™ programming language clients, database clients, drivers, and other potentially conflicting elements of a large computer network often requires a visit from a technician. An employee changing roles may need to request new applications or services, requiring multiple installs. Trying to patch or install a new application on multiple desktops—whether 30 or 3,000—poses a challenge whether this is done remotely or one at a time by service technicians.

Virtualization moves the computing hardware and the applications residing on that hardware from the desktop personal computer (PC) to the data center. This allows the IT department to manage the applications and resources from a centralized, single location, often with the assistance of flexible configurations and software tools.

A forerunner of virtualization was terminal services. Unlike traditional client/server computing, where a dedicated, per-application client with processing and local memory storage is required, terminal services use an application to present a client interface to the user.

ABBREVIATIONS, ACRONYMS, AND TERMS

BIOS basic input/output system

CAD computer-aided design

DLL dynamic link library

EPC engineering, procurement, and construction

IT information technology

LAN local area network

OS operating system

PC personal computer

RAM random access memory

TCP transmission control protocol

USB universal serial bus

VDI virtual desktop infrastructure

WAN wide area network

Figure 2. Terminal Services


The client application only presents a video display from the server (for this reason, the server is often called a presentation server) and only receives and forwards input from the mouse and keyboard. The same presentation server can be used to create multiple sessions with various applications. [2] Figure 2 illustrates a typical terminal services configuration with presentation server and client interfaces.

Desktop virtualization builds on the concept of terminal services. In its simplest form, desktop virtualization uses the data center environment to host the client applications, streaming the client presentation over the network to a thick or thin client1 (see Figure 3). This approach, while effective, does not satisfy the desire of many thin client users to have their own personal machines and also limits the execution of certain types of applications. New solutions have emerged to address these challenges.

While native virtualization was possible on higher-end IBM® machines and other hardware typical to UNIX® computer operating system (OS) environments, it was not possible, until 2005, to natively virtualize an OS on hardware that uses the x86 instruction set architecture widely implemented in processors from many computer and chipset manufacturers. Within the last few years, both Intel® and AMD™ have included virtualization capabilities directly in their chipsets. [3]

Hybrid approaches leverage a thick client desktop’s power to perform processing. One approach, known as desktop streaming, involves transferring the code required by applications to the user’s terminal and memory just for the session, with no permanent storage on the user’s machine. Another approach, application streaming, allows the server to send the runtime cache for each application to the terminal, potentially limiting the total number of application licenses required. Chipset manufacturers are also now building new methods of virtualization, allowing virtualized OSs to run natively. The architectures, advantages, and challenges of each are explored later in this paper. Finally, because desktop virtualization is not an independent vector, the impacts to the data center of moving to a virtualized environment are discussed as well. [4]

THIN CLIENT ARCHITECTURES

Thin client virtualization, often referred to as virtual desktop infrastructure (VDI), makes use of a full desktop environment operating remotely.


Figure 3. Desktop Virtualization

[Figure 3 diagram: desktop environments (Exchange, Applications A through C, storage, and a proxy) hosted in the data center and delivered over the LAN, WAN, and Internet to a thin client.]

1 Thick and thin refer to the configuration of hardware and applications at the client interface. A thick client has a hard disk drive and can use installed applications to perform most of the processing itself; there is no need for continuous communication with the server because the client primarily communicates information for storage. A thin client, on the other hand, is designed to be compact, with most of the data processing done by the server; typically lacking a hard disk drive, the thin client acts as a simple terminal requiring constant communication with the server.


While the client machine could be thick, the full advantages of VDI are obtained when thin clients are used. As stated earlier, this type of virtualization is very similar to terminal services; users have already had the ability to access a full desktop remotely via a standard terminal server program bundled with the typical OS. The primary differentiator is how the user environment is provisioned. In traditional terminal services, multiple users access the same resources. Environments can be tailored for the individual user, but resources are not dedicated, often causing performance problems. Furthermore, certain applications often could not be run via terminal services due to issues with executing certain instructions. Additionally, multiple users sharing an application can cause issues with temporary files, and applications that interface with network traffic protocols or listen for particular transmission control protocol (TCP) ports may encounter conflicts.

In contrast, desktop virtualization uses the ability to run an OS in a virtual environment and moves that virtual environment to the data center. This then enables legacy, low-performance, or thin client workstations to be deployed to users. Virtualization is realized by deploying an independent software layer that provides abstraction between the computer hardware and the OS kernel. This layer separates the OS from the hardware and passes machine instructions from the OS through to the hardware. Often, this layer is known as a hypervisor. Virtualization can be a complete simulation—known as full virtualization, where any OS can run in an unmodified form—or take a more limited form known as partial virtualization. In partial virtualization, various resources such as address space can be virtualized and simulated, and, therefore, isolated and reused. Partial virtualization allows applications to run in isolation but does not work for all applications, particularly ones that need to access particular hardware that may not be virtualized. [5]

Paravirtualization is a hybrid of full and partial virtualization; it simulates the instructions of the underlying hardware but has a software interface that is slightly different from the actual hardware, requiring OSs that reside on the virtualized hardware to be slightly modified. Paravirtualization often provides better performance than full virtualization.

Whether running the native OS in full virtualization mode or in a modified partial or paravirtualization mode, desktop virtualization uses management infrastructure to create individual user images, complete with each user’s suite of applications software. Multiple vendors offer different desktop virtualization configurations, which typically include server software, client software, and management infrastructure. Various protocols can be used, generally specific to the vendor implementation. Extreme examples use a diskless workstation that loads its boot image to random access memory (RAM) over the network from the server or that uses virtualized disk operations translated into instructions executed over the network protocols. [6]

Diskless workstations have significant advantages for EPC firms. Areas of high radiation, extreme temperature, heavy dust, or other potential contaminants are well suited for thin clients. Because it has no fans or other moving parts, a thin client machine damaged in a harsh environment is less costly to replace, and replacing it does not entail significant work to set up a new machine. Thin client machines also offer much better security: there is no data to steal from the machine, and an opportunistic worker or burglar is less likely to pilfer a device of no value by itself.

Both in the office and in the field, there are several other advantages to using thin client hardware in a virtualized environment. Users who move from office to office or from the office to the field encounter an identical environment, data, and applications wherever they use the network. Client machines—whether fully thin or thick—using virtualized desktop streaming are less dependent on the latest hardware technology and need to be refreshed less often. Clients that fail can be refreshed immediately, or a user can be moved to another client seamlessly.

Desktop virtualization also results in less network traffic, since the only traffic sent to and from the client is the visual display of information and the inputs from the mouse and keyboard; most of the network file- and information-processing traffic is isolated to the data center. Through efficient planning and design, network traffic can be isolated to particular server and storage arrays, reducing the overall load on the local and wide area networks (LANs and WANs).

Finally, moving the processing to the data center results in overall processor savings, along with better power and cooling usage. Typically, each desktop user is given processor, storage, and memory consistent with peak requirements, but at any given time, only a small percentage of users require peak processing or memory.


Furthermore, many applications only require asynchronous processing and offer limited differences in user experience if the execution of those operations is resource-leveled.

In contrast, the processing, storage, and memory allocations of a consolidated server running images of workstations in use can be sized to be greater than the sum of the average demands of the users but much less than the sum of everyone’s peak demands together. Furthermore, the larger-scale processing, memory, and storage of servers take much better advantage of economies of scale, resulting in a lower capital expense. In addition to the capital savings, the cost of operating equipment within the data center can be significantly lower because the mass concentration of equipment means shorter power runs, less power distribution equipment to maintain, and less overall, more concentrated cooling equipment.
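The sizing argument can be made concrete with a small sketch. In the Python example below, the consolidated server is provisioned somewhere between the sum of average demands and the sum of peak demands; the user counts, per-user figures, and the headroom factor are illustrative assumptions rather than recommended values.

```python
# Sketch of the consolidation sizing argument: provision between the sum of
# average demands and the sum of peak demands, since users rarely peak together.
# The headroom factor and per-user demands are assumptions for illustration.
def consolidated_sizing(users, avg_per_user, peak_per_user, headroom=1.3):
    sum_of_averages = users * avg_per_user
    sum_of_peaks = users * peak_per_user
    provisioned = min(sum_of_peaks, sum_of_averages * headroom)
    return sum_of_averages, provisioned, sum_of_peaks

# Example: 200 users averaging 0.3 CPU cores each but peaking at 2 cores each.
avg, prov, peak = consolidated_sizing(200, avg_per_user=0.3, peak_per_user=2.0)
print(f"sum of averages: {avg:.0f} cores, provisioned: {prov:.0f} cores, "
      f"sum of peaks: {peak:.0f} cores")
```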

Even with all of these advantages, enterprises can be reluctant to move to a totally thin client infrastructure. Users often feel an attachment to, and have a comfort level with, “their” PC and do not want to give up the box under the desk for a fully thin client. Furthermore, mobile and traveling users often prefer to have a laptop so they can work out of hotel rooms and on airplanes, perform troubleshooting on equipment in the field, and give demonstrations or presentations at customer sites.

Using thin clients can also pose a challenge for certain types of users. Thin clients can be designed to support particular types of peripherals, including those connected via universal serial bus (USB), but they do not have the proper drivers to support every make and model. Multimedia-heavy applications, particularly those with full-motion video and sound, can strain network resources and cause performance problems. Certain applications require access to video or sound cards; addressing this in a totally thin client environment can prove challenging, if possible at all.

A simple solution that addresses these concerns is to use a thick client for those users—mobile, bandwidth-intensive, and the like—for whom the thin client solution just does not work, while perhaps virtualizing the OS, but only for some or even none of the applications. By nature, this solution is inefficient, providing additional and potentially wasted hardware distributed throughout the enterprise. It also brings back some of the problems that thin client virtualization was supposed to fix. As a result, newer, hybrid approaches have been developed and are being deployed that more efficiently use the distributed processing and storage while maintaining some of the key benefits of a fully deployed thin client infrastructure. These hybrid approaches are categorized as desktop streaming or application streaming; each is discussed in the following sections.

DESKTOP STREAMING

In desktop streaming (see Figure 4), each user has a complete OS and images of the applications being used, all running on a virtual machine resident in the desktop PC. A hypervisor manages the virtual machine and the applications that it executes, as well as providing the interface to the images hosted in the data center. Both the OS and applications are streamed to the PC; updates, changes, and patches are only done once, in the image resident in the server. [7]

Desktop streaming can provide better performance in demanding environments, since it leverages local processing and memory. Diskless workstations can be used, executing the streamed OS from local RAM. Alternatively, the client can be a full-blown terminal or laptop where hard drive space is used for swap files, and local storage can be provisioned as well. In one such implementation, a user’s primary (i.e., C:\) drive resides on the server and is accessible from any login, while the local drive becomes the D:\ drive. Users of desktop streaming are generally pleased with the much faster boot via basic input/output system (BIOS) and streamed OS versus the traditional boot of the locally loaded OS.

The security and reliability of this implementation offer several advantages. Centralized data permits only images of the information required by the application to be streamed, preserving control of sensitive and proprietary information. For EPC firms sharing information with a third party or someone overseas, this control offers significant reassurance. Machines that malfunction (whether due to a virus or malware or to a user- or machine-driven issue) can be easily restarted and repaired simply by terminating the virtual OS image and activating a new one.

A challenge with desktop streaming is the integration of multiple software manufacturers’ products and various environments. One provider may be used for the hypervisor and desktop client, and another may be used to provide desktop images or to virtualize applications in the server environment. While IT managers can build a “best-of-breed” implementation, careful execution of integration is required.


Figure 4. Desktop Streaming
[Figure 4 diagram: clients connect through a management client and Active Directory to the virtualization infrastructure hosting centralized virtual desktops.]

Figure 5. Application Streaming
[Figure 5 diagram: a sequencer, packaging server, application servers, and management console in the data center deliver applications to clients over the LAN.]

Page 107: BTJ Book V1 N1 2008 Final

December 2008 • Volume 1, Number 1 97


Even with desktop streaming, the data center must be sized to support the OS images for each user receiving a virtual desktop, along with the associated applications. It is possible to configure the implementation to decentralize OS images so that they reside on the clients, but this approach gives up some of the advantages of centralized virtualization.

APPLICATION STREAMING

In application streaming (see Figure 5), applications are configured on a server, then virtualized, sequenced, and packaged for deployment over the network. Application streaming can use either a virtualized OS at the desktop or a traditionally installed OS. Because the applications run in containerized, virtualized spaces, they are not subject to the number of conflicts and deployment scenario permutations that traditional applications running on a standard client may encounter.

Application streaming leverages the fact that many software features are seldom used; only 15%–20% of the core code is generally required by standard users. The code required to launch and provide the basic functionality of the application is streamed to the user, and the other pieces are either sent “on demand” over the network or loaded in their entirety in the background while the user works. These pieces of code required to execute the application are known as the runtime cache. In a well-designed environment with robust connectivity and high-performance machines, users see no difference between a streamed and a locally executed application. [8]
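The streaming behavior just described can be sketched conceptually as launching from a small runtime cache and fetching the rest of the package on demand. The Python example below illustrates the idea; the block names, sizes, and class structure are hypothetical and do not mirror any vendor's packaging format.

```python
# Conceptual sketch of application streaming: stream the launch-critical blocks
# (the runtime cache) first, then fetch remaining blocks on demand.
# Block names and sizes are made up for illustration.
class StreamedApplication:
    def __init__(self, name, blocks, launch_blocks):
        self.name = name
        self.blocks = blocks                    # block name -> size in MB on the server
        self.launch_blocks = launch_blocks      # subset needed to start the application
        self.local_cache = set()

    def launch(self):
        for block in self.launch_blocks:        # runtime cache streamed up front
            self.local_cache.add(block)
        streamed = sum(self.blocks[b] for b in self.local_cache)
        total = sum(self.blocks.values())
        print(f"{self.name} launched after streaming {streamed} of {total} MB")

    def use_feature(self, block):
        if block not in self.local_cache:       # remaining code fetched on demand
            self.local_cache.add(block)
            print(f"fetched '{block}' on demand ({self.blocks[block]} MB)")

app = StreamedApplication(
    "cad_viewer",
    blocks={"core_ui": 30, "file_io": 15, "render_2d": 20,
            "render_3d": 180, "simulation": 90, "plugins": 65},
    launch_blocks={"core_ui", "file_io", "render_2d"},
)
app.launch()                 # streams roughly 16% of the package in this made-up example
app.use_feature("render_3d")
```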

With applications centrally managed, migrations to new OSs or standard hardware suites can be easily accomplished; the application only needs to be modified once to address the changes, and then repackaged. The applications’ use of their own memory spaces prevents them from conflicting with each other, creating memory holes, or causing dynamic link library (DLL) issues. In fact, it is possible for two or more different versions of the same application to run at the same time.

Application streaming, while offering the significant advantages outlined above, does require each application to be sequenced and packaged. This can be a burden on the IT department, since it is an additional step in deploying virtualized applications.

CONCLUSIONS

Multiple options are available to EPC (and other) firms for deploying a virtualized desktop infrastructure. While each has subtle advantages, the challenge for a firm’s IT department is how to best leverage the multiple hardware, software, and management solutions and to best understand the requirements of its unique users and infrastructure in order to deploy the correct architecture. Table 1 summarizes the benefits of each approach. The coming years will see even more options and, as firms migrate to this environment, the proffered advantages of each type of virtualization implementation will be validated.

TRADEMARKS

AMD is a trademark of Advanced Micro Devices, Inc.

IBM is a registered trademark of International Business Machines Corporation in the United States.

Intel is a registered trademark of Intel Corporation in the US and other countries.

Java is a trademark of Sun Microsystems, Inc., in the United States and other countries.

UNIX is a registered trademark of The Open Group.


Table 1. Benefits of Different Virtualization Approaches

Benefit (Non-Virtualized Thick Client / Virtualized Thin Client / Desktop Streaming / Application Streaming):
Performance: High / Medium / Medium / Medium
Security: Medium / High / High / High
First Cost: Medium / High / Medium / Medium
Total Cost: Low / High / High / High
Flexibility: Medium / Medium / High / High


REFERENCES

[1] “SO31 The Virtual Desktop—A Computer Support Model that Saves Money,” California Government Performance Review <http://cpr.ca.gov/CPR_Report/Issues_and_Recommendations/Chapter_7_Statewide_Operations/Information_Technology/SO31.html>.

[2] G. Gruman, “Desktop Virtualization: Making PCs Manageable,” Infoworld, September 11, 2006 <http://www.infoworld.com/article/06/09/11/37FEvirtcasedesk_1.html?s=feature>.

[3] K. Adams and O. Agesen, “A Comparison of Software and Hardware Techniques for x86 Virtualization,” Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS XII), San Jose, California, October 21–25, 2006, pp. 2–13, access original paper via <http://www.vmware.com/pdf/asplos235_adams.pdf>.

[4] T. Olzak, “Desktop Application Virtualization and Application Streaming: Function and Security Benefits,” Erudio Security white paper, August 2007 <http://adventuresinsecurity.com/Papers/Desktop_Virtualization_Aug_2007.pdf>.

[5] “Desktop Virtualization Comes of Age: The Data Center is the Desktop,” Virtualization, November 27, 2007 <http://virtualization.sys-con.com/node/466375>.

[6] “Why Software Vendors Need to Care About Virtualization,” Intel® Software Network, October 2, 2008, retrieved November 3, 2008 <http://software.intel.com/en-us/articles/why-software-vendors-need-to-care-about-virtualization>.

[7] S. Higginbotham, “Desktop Virtualization: Where Thin Clients Meet the Cloud,” GigaOM, August 27, 2008, retrieved November 3, 2008 <http://gigaom.com/2008/08/27/desktop-virtualization-where-thin-clients-meet-the-cloud/>.

[8] R.L. Mitchell, “Streaming the Desktop: Application Streaming Creates a Virtualized Desktop That Can Be Managed Centrally, Yet Offers the Speed of Local Execution,” Computerworld, November 21, 2005 <http://www.computerworld.com/softwaretopics/software/apps/story/0,10801,106354,00.html>.

BIOGRAPHY

Brian Coombe, a program manager in the Bechtel Federal Telecoms organization, oversees the provision of project management and systems engineering expertise to the Department of Defense in support of a major facility construction, information technology integration, and mission transition effort.

Prior to holding this position, Brian served as the program manager of the Strategic Infrastructure Group, where he oversaw work involving telecommunications systems and critical infrastructure modeling, simulation, analysis, and testing. In addition, he evaluated government telecommunications markets, formulated requirements for telecommunications and water infrastructure work, and developed the Strategic Infrastructure Group’s scope.

As Bechtel’s technical lead for all optical networking issues, Brian draws on his extensive knowledge of wireless and fiber optic networks. In his first position with the company, he engineered configurations to allow for capacity expansion of the AT&T Wireless GSM network in New York as part of a nationwide buildout contract. Later, he was the lead engineer for planning, designing, and documenting a fiber-to-the-premises network serving more than 20,000 homes. He was the Bechtel Communications Training, Demonstration, and Research (TDR) Laboratory’s resident expert for optical network planning, evaluation, and modeling.

Before joining Bechtel in 2003, Brian was a systems engineer at Tellabs®, where he launched the company’s dense wavelength-division multiplexing services and managed network design and testing. He developed solutions to complex network issues involving echo cancellation, optical networking, Ethernet, TCP/IP, transmission, and routing applications.

Brian is a member of the IEEE; the Project Management Institute; SAME; ASQ; NSPE; MSPE; INSA; AFCEA; the Order of the Engineers; and Eta Kappa Nu, the national electrical engineering honor society. He has authored six papers and co-authored one in the Bechtel Communications Technical Journal (formerly Bechtel Telecommunications Technical Journal) since August 2005; his most recent paper, “GPON Versus EP2P,” appeared in the September 2008 issue. His tutorial on Micro-Electromechanical Systems and Optical Networking was presented by the International Engineering Consortium.

Brian earned an MS in Telecommunications at the University of Maryland and a BS with honors in Electrical Engineering at The Pennsylvania State University. He is a licensed professional engineer in the state of Maryland.


TECHNOLOGY PAPERS

Bechtel M&M Technology Papers

Computational Fluid Dynamics Modeling of the Fjarðaál Smelter Potroom Ventilation
Jon Berkoe, Philip Diwakar, Lucy Martin, Bob Baxter, C. Mark Read, Patrick Grover, and Don Ziegler, PhD

Long-Distance Transport of Bauxite Slurry by Pipeline
Terry Cunningham

M&M — Fjarðaál Aluminum Smelter: This project, Bechtel’s first in Iceland, is one of the world’s safest, most sustainable, and technologically advanced aluminum production facilities.

COMPUTATIONAL FLUID DYNAMICS MODELING OF THE FJARÐAÁL SMELTER POTROOM VENTILATION

Jon Berkoe ([email protected])
Philip Diwakar ([email protected])
Lucy Martin ([email protected])
Bob Baxter ([email protected])
C. Mark Read ([email protected])

Alcoa
Patrick Grover ([email protected])
Don Ziegler, PhD ([email protected])

Originally Issued: February 2005
Updated: December 2008

Abstract—The potrooms of the Fjarðaál Aluminum Smelter in East Iceland use a system based on natural convection to ventilate heat and fugitive emissions. The design of the potlines—a series of potrooms—is based on one used at Alcoa’s Deschambault Smelter in Canada. However, the Fjarðaál potlines are longer and are situated on a sloping site that is adjacent to a fjord, surrounded by rough terrain, and subject to high winds from multiple directions. Because the complex terrain and wind interactions made ventilation system performance difficult to predict, Bechtel and Alcoa conducted a state-of-the-art computational fluid dynamics (CFD) analysis to help guide system design by simulating the velocities, temperatures, pressures, and emissions concentrations inside and outside the potrooms. CFD modeling—employed for similar studies—produced a potroom ventilation model that was validated against Deschambault smoke test measurements. The analysis confirmed that Fjarðaál potroom airflow patterns, ambient temperatures, and emissions concentrations are relatively unaffected by the terrain and wind conditions. This result indicates that the Fjarðaál ventilation system design is highly effective.

Keywords—CFD modeling, claustra wall, computational fluid dynamics, modeling, natural convection, potline, potrooms, smelter, ventilation

INTRODUCTION

The Fjarðaál Aluminum Smelter in East Iceland is located adjacent to a fjord, surrounded by rough terrain, and subject to high wind speeds. The smelter potrooms—buildings that house the large steel vessels, or pots, used for electrolytic reduction of alumina—are designed to ventilate heat and fugitive emissions, such as hydrogen fluoride (HF), using a system based on natural convection. The design of the Fjarðaál potlines—a series of potrooms—is based on the design used at Alcoa’s Deschambault Smelter, located in Deschambault, Quebec, Canada. However, the Fjarðaál Smelter potlines are longer, situated on a sloping site adjacent to a fjord, and subject to high winds from multiple directions. A rendering of the smelter is shown in Figure 1.

Wind effects around the potrooms are especially important to consider in designing a ventilation system; this is because infiltration of winds into the potrooms can disrupt the “chimney” effect plume rise required for proper ventilation, and potentially cause drifting and re-entrainment of heated air and emissions from the outside back into the potroom. A ventilation system must keep fresh air flowing into the worker areas of the potroom and ventilate heat and emissions generated by the pots to the outside.

Because the complex terrain and wind interactions made potroom ventilation system performance difficult to predict for the Fjarðaál site, Bechtel and Alcoa conducted a state-of-the-art computational fluid dynamics (CFD) analysis to help guide system design by simulating the velocities, temperatures, pressures, and emissions concentrations inside and outside the potrooms. The purpose of the analysis was to confirm that local environmental conditions, particularly wind speed, wind direction, and air temperature, would not be detrimental to the final system design.

Figure 1. CAD Model of Fjarðaál Aluminum Smelter, Iceland (potrooms and potlines indicated)



Data for the analysis included the plot plan for the site potline layout and potrooms, wind rose data, topographic data of the nearby terrain, detailed vent and louver parameters, and heat release design data for the pots. The analysis used CFD modeling based on the commercial software program FLUENT® 6.2. Such modeling has been employed for similar studies [1, 2]. The unstructured mesh was generated from computer-aided design (CAD) models using the ANSYS® ICEM CFD™ meshing software program.

The objectives of the CFD analysis were to:

• Determine:
  – Whether sufficient airflow for personnel cooling and ambient air quality is achieved
  – If there are significant HF release gradients along the length of the potroom
  – Whether the terrain is influencing the degree of re-entrainment and intake air velocity profiles
  – Where to locate emissions-monitoring equipment to ensure reliable readings

• Confirm that:
  – The roof ventilator performs adequately
  – The required cooling of the pot and busbar is achieved
  – The potroom ventilation achieves vertical evacuation of fumes, with minimal drifting along the length of the potroom

• Evaluate:
  – The impact of adding basement panels to restrict inlet airflow (Because of the windy conditions in Iceland, basement panels are used to prevent blowing snow from entering the potroom, and to minimize heat loss from the pots during the winter.)
  – The dispersion modeling assumption that uniform release of HF along the length of the potroom is not too unrealistic

The CFD model produced during the analysis was validated by comparing the computed airflow patterns and temperatures with those measured and observed during smoke tests conducted by the Bechtel team at the Deschambault Smelter.

This model validation did not apply to the wind conditions prevailing at the Fjarðaál Smelter, but it did confirm the CFD model’s accuracy in simulating both the potroom airflow patterns and the effects of thermal buoyancy.

PHYSICAL DESCRIPTION OF POTROOM VENTILATION

Ventilation system designs based on natural convection are advantageous because they offer lower life-cycle cost in terms of capital investment and operating and maintenance costs, compared to powered ventilation. In aluminum potline facilities, a turbulent natural convection occurs in the potroom because the heat dissipated from the pots generates a buoyancy-driven plume that draws outside air into and through the potroom side openings, and exhausts the air through a roof ventilator. A schematic diagram of the potroom ventilation is shown in Figure 2.

Thermal buoyancy is virtually impossible to replicate accurately in a scale model because of the nature of the convection scaling laws, i.e., natural convection in large spaces does not scale accurately to small-scale experiments. Because thermal buoyancy is the driver for flow through the potroom, the heat flux from the pots and the potroom dimensions must be prescribed as close to full scale as possible. Otherwise, the flow patterns and rates determined by the CFD model will most likely be incorrect.
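To make the buoyancy argument concrete, the short sketch below estimates the stack-driven flow through a potroom segment using the classic stack-draft relation for natural ventilation. The opening area, inlet-to-vent height, and temperature rise are hypothetical values chosen only for illustration; they are not Fjarðaál design figures.

```python
# Illustrative order-of-magnitude check of buoyancy-driven (stack) ventilation.
# All geometry and temperature values below are hypothetical, not design data.
import math

g = 9.81                 # m/s^2
T_out = 275.0            # outdoor air temperature, K (about 2 deg C)
dT = 20.0                # assumed indoor-outdoor temperature difference, K
H = 20.0                 # assumed height from sidewall inlets to roof vent, m
A_eff = 200.0            # assumed effective opening area per potroom segment, m^2
Cd = 0.6                 # typical discharge coefficient for sharp-edged openings

# Classic stack-draft relation for flow through openings separated by height H
Q = Cd * A_eff * math.sqrt(2.0 * g * H * dT / (T_out + dT))   # m^3/s

rho, cp = 1.2, 1005.0    # air density (kg/m^3) and specific heat (J/kg.K)
heat_removed_MW = rho * cp * Q * dT / 1e6

print(f"ventilation flow ~ {Q:,.0f} m^3/s, heat removal ~ {heat_removed_MW:.1f} MW")
```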

Wind roses—graphic tools that provide a succinct view of how wind speed and direction are typically distributed at a particular location—were used to model the wind conditions at the Fjarðaál Smelter site. A detailed study of the wind rose data revealed that certain wind speeds and directions were representative of the conditions at the Fjarðaál Smelter site throughout most of the year. These wind speeds and directions were used as boundary conditions for the CFD analysis.
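A minimal sketch of how representative boundary-condition cases might be distilled from wind rose records is shown below. The records, sector choices, and percentile thresholds are hypothetical placeholders, not the actual Fjarðaál data.

```python
# Hypothetical sketch: reduce hourly wind records to a few representative CFD cases.
from collections import Counter

# (direction sector, speed in m/s) pairs, e.g., parsed from a meteorological file
records = [("E", 3), ("E", 10), ("ENE", 10), ("NW", 8), ("W", 10), ("WSW", 8),
           ("E", 3), ("E", 4), ("NW", 7), ("E", 9), ("WSW", 8), ("W", 11)]

sector_counts = Counter(d for d, _ in records)
dominant = [d for d, _ in sector_counts.most_common(3)]

cases = []
for sector in dominant:
    speeds = sorted(s for d, s in records if d == sector)
    median = speeds[len(speeds) // 2]             # typical condition for the sector
    high = speeds[int(0.9 * (len(speeds) - 1))]   # near-worst-case condition
    cases.append((sector, median))
    if high != median:
        cases.append((sector, high))

print("candidate boundary-condition cases (sector, speed m/s):", cases)
```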

CFD ANALYSIS METHODOLOGY

CFD is typically applied to modeling continuous processes and systems. CFD software uses a CAD-like model-building interface combined with advanced computational methods that solve the fundamental conservation equations. CFD eliminates the need for many typical assumptions, such as those required for equilibrium, plug flow, averaged quantities, etc., because the physical domain is replicated in the form of a computerized prototype.

ABBREVIATIONS, ACRONYMS, AND TERMS

CAD computer-aided design

CFD computational fluid dynamics

HF hydrogen fluoride

ICEM integrated computer-aided engineering and manufacturing



A typical CFD simulation begins with a CAD rendering of the geometry, adds physical and fluid properties, and then specifies natural system boundary conditions. By appropriately changing these parameters, countless “what if?” questions can be quickly addressed.

The CFD model of the Fjarðaál Smelter was based on a fully three-dimensional steady-state simulation. The model incorporated the key factors that influence ventilation system performance. They include wind speed, wind direction, thermal buoyancy, and local terrain, as well as geometry features such as side-by-side pot arrangement, a tulip-shaped roof ventilator, floor grills between the pots, and claustra wall details. (A claustra wall is a building sidewall inlet air plenum used to prevent the ingress of rain and snow. The plenum discharge to the potroom is equipped with a lattice structure to direct the air horizontally into the potroom, allowing for maximum penetration of fresh air into the process area.)

The model was configured to include local terrain surrounding the smelter, as well as nearby structures of significant size. It was also configured so that any number of basement panels could be closed, and the effects of closing them during the winter months could be studied.

The following assumptions and simplifications were used for the CFD analysis:

• The analysis assumed a steady-state wind condition and an ambient temperature of 2 °C.

• No sources of heat, other than heat dissipated from the pots and busbar, were accounted for in the simulation; 100 kW of heat was assumed to reenter the pots after being sucked into the pot gas exhaust system through small gaps between the pot hoods.

• The floor grills were modeled, but gaps in the floor between concrete slabs were not modeled.

• Concentration profiles of pot emissions such as HF were modeled either as line sources or as inlets emanating from the sides, ends, or tops of the pots.

As shown in Figure 3, the heat dissipated from the pots was modeled using a prescribed heat flux assigned to various surfaces on the pot.

Figure 2. Schematic Diagram of Potroom Ventilation (claustra walls, pot hoods, aluminium electrolysis pots, below-grade basement, airflow directed below and across the pots, internal recirculating airflow, and heated airflow exiting the roof vent)

Figure 3. Prescribed Pot Surface Heat Flux in CFD Model (superstructure 0.44 kW/m2; hoods 1.31 kW/m2; upper side wall 9.91 kW/m2, area = 13.6 m2; lower side wall 6.30 kW/m2, area = 5.6 m2; upper end wall 5.68 kW/m2, area = 7.85 m2; lower end wall 1.31 kW/m2, area = 3.23 m2; bottom 1.39 kW/m2; one surface with no prescribed heat flux used as the source for HF evolution; small area representing pot ventilation and exhaust to gas treatment centers at 2.4 Nm3/s per pot)
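As a rough cross-check of the prescribed boundary conditions, the snippet below tallies the per-surface fluxes and areas listed in Figure 3. Only the surfaces with stated areas are included, and each entry is treated as a per-pot total because the figure does not give surface multiplicities, so the result is indicative only.

```python
# Partial tally of the prescribed pot heat fluxes from Figure 3.
# Only surfaces with areas stated in the figure are included; each entry is
# assumed to be a per-pot total, which is an interpretation, not a given.
surfaces = {                      # name: (flux kW/m2, area m2)
    "upper side wall": (9.91, 13.6),
    "lower side wall": (6.30, 5.6),
    "upper end wall":  (5.68, 7.85),
    "lower end wall":  (1.31, 3.23),
}
per_pot_kW = sum(q * a for q, a in surfaces.values())
print(f"listed wall surfaces dissipate ~{per_pot_kW:.0f} kW per pot")
print(f"two potrooms x 168 pots -> ~{2 * 168 * per_pot_kW / 1000:.0f} MW from these surfaces alone")
```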


Due to the complex geometries of the roof vents, floor grills, and claustra walls, each of these features was first modeled as a submodel. The individual pressure-drop values for each feature were compared against vendor test data and then incorporated in the full CFD model using a simplified feature known as porous media.
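The sketch below shows one way vendor pressure-drop data for a feature such as a roof vent or claustra-wall lattice could be converted into porous-media coefficients, assuming the quadratic loss law used by FLUENT-style porous-media models. The test points and feature thickness are hypothetical values for illustration only.

```python
# Sketch: fit porous-media coefficients from hypothetical vendor test data.
# Loss law per unit thickness (quadratic form used by FLUENT-style porous media):
#     dP/L = (mu/alpha) * v + C2 * 0.5 * rho * v**2
import numpy as np

v = np.array([0.5, 1.0, 2.0, 3.0, 4.0])        # face velocity, m/s (vendor test points)
dP = np.array([4.0, 10.0, 30.0, 62.0, 105.0])  # measured pressure drop, Pa

L = 0.3            # assumed flow-wise thickness of the feature, m
rho, mu = 1.2, 1.8e-5

# Least-squares fit of dP/L = a*v + b*v^2
A = np.column_stack([v, v**2])
a, b = np.linalg.lstsq(A, dP / L, rcond=None)[0]

alpha = mu / a               # permeability, m^2
C2 = 2.0 * b / rho           # inertial resistance factor, 1/m
print(f"1/alpha = {1/alpha:.3e} 1/m^2, C2 = {C2:.1f} 1/m")
```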

Creating a computational grid for a model like this one is extremely challenging and requires highly sophisticated meshing tools and expertise. To accurately simulate heat transfer effects adjacent to the pot, the grid thickness needed to be about a millimeter. A coarser grid was used to accommodate convection in the potroom. The terrain in the model domain extended at least 2 km in all directions. To conserve the total mesh size within manageable limits, grids with large control volumes, on the order of 1 m3, were required.

The final CFD model of the Fjarðaál Smelter included two potrooms, each with 168 pots, and the surrounding terrain, buildings, and silos. The mesh was fully hexahedral and unstructured. The most effective use of computer resources was achieved by limiting the total mesh size, resulting in a model consisting of 2.8 million control volumes. Figure 4 presents a model of a single 42-pot potroom segment showing the computational grid.
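The arithmetic below illustrates why such a strongly graded mesh is unavoidable. The outer-domain extents used here are assumptions made only for this estimate; the paper states only that the terrain extended at least 2 km in every direction.

```python
# Back-of-the-envelope mesh budget, assuming a 4 km x 4 km footprint and 500 m
# height for the outer domain (assumed extents, for illustration only).
domain_volume = 4000.0 * 4000.0 * 500.0      # m^3

cells_uniform_1m = domain_volume / 1.0       # if even the "coarse" 1 m^3 cells were used everywhere
budget = 2.8e6                               # control volumes actually used in the model

print(f"uniform 1 m^3 mesh would need ~{cells_uniform_1m:.1e} cells,"
      f" ~{cells_uniform_1m / budget:,.0f}x over the {budget:.1e}-cell budget")
# Hence the mesh must be strongly graded: millimetre-scale cells only in the thermal
# boundary layers next to the pots, metre-scale cells in the potroom interior, and
# much larger cells over the surrounding terrain.
```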

Closing the basement panels during the winter makes the potroom warmer but has a detrimental effect on the ventilation. Basement panels were modeled as thin walls so that panel sections could be simulated as open, partially open, and closed. Opening and closing basement panels effectively regulates the air entering the potroom. The opening and closing of the basement panels was modeled to study the effects of emissions spread at worker level under conditions of reduced airflow entry into the building, such as when basement panels are partially closed. Figure 5 shows a section of the completed CFD model configured with the basement panels 60 percent closed.

VALIDATION OF THE POTROOM CFD MODEL

The detail design for the Fjarðaál Smelter had not been finalized at the time of the CFD analysis. However, because the potroom design is similar to the design used at the Deschambault Smelter, the main focus of the potroom CFD model validation was to confirm that the thermal buoyancy effects simulated by the model were realistic. This validation was done by comparing the model’s airflow patterns and vectors with those observed and photographed during the smoke tests.

The smoke tests at the Deschambault Smelter were carried out on April 27, 2004, at 4 a.m. Velocity and temperature measurements were obtained using handheld thermal anemometers, rotating vane anemometers, and thermocouples. The test locations were selected so that the influence of localized changes in the flow direction and magnitude could be qualified. Velocity and temperature measurements were taken from the walkway near the roof vent, and at the floor grill, basement inlets, and claustra wall. Data was also obtained from the meteorological station local to the plant.

The following benchmark comparisons were used to validate the potroom CFD model:

• Pictures and videos of the smoke tests against velocity vectors at corresponding sections in the CFD model
• Velocities obtained using both rotating vane and thermal anemometers
• Temperatures obtained using handheld thermocouples
• Temperatures and velocities measured at the roof monitor
• Heat and mass balances inside the potroom (Heat balance was obtained from pot external surface heat flux.)


Figure 4. Surface Grid for Section of CFD Model

Figure 5. Section of CFD Model Showing Potline Sections with Partially Closed Basement Panels (roof vents, claustra walls, pots, ground level, and basement floor indicated)




The first part of the model validation involved comparing the flow field obtained from the CFD analysis with measurements, observations, and videotaped flow patterns from the smoke tests. The observations showed that, on the wide-aisle side of the potroom, the smoke entering the claustra wall travels at least two-thirds of the pot length, parallel to the ground and the pot, before thermal buoyancy effects from the hot pots and the flow from the narrow aisle force the flow to move upward. However, the flow entering the claustra wall on the narrow-aisle side of the potroom travels only a small distance along the pots before being driven upward. Subsequently, the upward-moving hot air takes a curved path before encountering the roof vent. A still photograph of this airflow movement is shown in Figure 6.

The potroom CFD model produced the flow pattern indicated by the streamlines shown in Figure 7. This flow pattern is very similar to the one observed during the smoke tests. Note that a secondary flow, driven by the natural convection, forms a recirculating flow that drops back down to the top of the claustra wall. What is most important is that the fresh incoming airflow has more energy than the recirculating flow and does not allow the recirculating flow to reach floor level. This flow pattern ensures that the air at the worker level remains fresh ambient air.

Qualitatively, this result demonstrates that the potroom CFD model reproduced the flow patterns observed during the smoke tests. As shown in Figure 8, potroom temperatures calculated by the model also showed good agreement with the temperature of the air at the roof vent intake at the Deschambault Smelter, which is typically about 20 °C above ambient.

KEY FINDINGS FROM THE FULL CFD MODEL OF THE FJARÐAÁL SMELTER

The CFD analysis showed that flow patterns around the potlines can be influenced by local terrain, wind direction, and nearby buildings. Lower wind speeds (i.e., calm conditions) showed results typical for potlines sited at low-wind locations. Higher wind speeds from the east direction, which is aligned with the potline layout, were found to have minimal impact on potroom internal temperatures, as shown in Figure 9.

However, as shown in Figure 10, which presents the velocity patterns around the potlines at an elevation 12 m above ground for the northwest wind case, there are several areas of low velocity, referred to as dead zones, both between the potrooms and in the wake of the potrooms. In particular, there are pronounced dead zones downwind of the passageways that extend sideways from the potrooms.

Figure 6. Still Photograph from Videotape of Smoke Test Conducted at Deschambault Smelter

Figure 7. Flow Streamlines Through Cross-Section of Potroom Generated from Potroom CFD Model

Figure 8. Temperature (°C) Through Cross-Section of Potroom Calculated by Potroom CFD Model (color scale: 2.0 °C to 32.0 °C)


Figure 11 presents the pressure contours around the potlines at an elevation 12 m above grade for the northwest wind case. As shown in the figure, the dead zones tend to produce a slight pressure buildup on the windward side of the potline and a negative pressure on the leeward side. This creates local regions of pressure gradient that can cause the internal ambient flow to either be pushed away from or pulled toward the dead zones. Although such effects increased with increasing wind speed, they were not significant in the final analysis.

The primary findings from the CFD analysis are summarized below (Figures 12, 13, and 14 apply to all cases without basement panels).

• As shown in Figure 12, the standard deviation of the roof vent exhaust HF concentration is fairly low for all wind conditions without basement panels, and moderate for east and west wind conditions with closed basement panels (not shown). This result indicates that the variation in exhaust concentration is reasonably stable along the potroom length.

• As shown in Figure 13, temperature levels in the potroom at medium and high elevations are consistent with measured data from the Deschambault Smelter smoke tests. These temperature levels are also reasonably uniform over the range of wind conditions evaluated. This result implies that heat removal from the pots is effectively accomplished under all wind conditions.

• Velocity magnitudes in the potroom are as expected, with stronger inflow in the wider aisle; demonstrate some cross-flow recirculation; and are dominated by the central vertical plume. This data is consistent with observations made during the Deschambault Smelter smoke tests.

• HF concentrations are acceptable in the potroom worker aisles and outside the potrooms. For high-wind-speed cases without basement panels, the accumulated HF in the aisles is not significantly higher than it is for the low-wind-speed cases. Note that when the basement panels are closed, the accumulated HF in the worker aisles can increase by 50 to 100 percent.

• The flow patterns in the potroom can be affected by locally varying outside pressure gradients. Specifically, as shown in Figure 14, prevailing winds parallel to the potrooms may cause airflow drift to occur in some areas, such as near the roof.


Figure 9. Temperature (°C) Through Cross-Sections of Potroom, with Fully Open Basement Panels, for East Wind Case at 10 m/s (color scale: 2.0 °C to 32.0 °C)

Figure 10. Velocity Contours (m/s) Around Potlines for Northwest Wind Case (color scale: 0.0 m/s to 10.0 m/s)

Figure 11. Pressure Contours (Pascal [Pa]) Around Potlines for Northwest Wind Case (color scale: 300.0 Pa to 320.0 Pa)


• Higher wind speeds do not appear to increase the velocity of the cross-flow patterns near the roof, i.e., patterns induced by the asymmetric airflow in the potroom. The cross-flow velocities are similar for low- and high-wind-speed cases, both with open and closed basement panels. Therefore,

wind speed should not affect the location of emissions monitoring equipment.

• Re-entrainment of roof vent exhaust is not a concern. Some minor dead zones exist between the buildings; however, HF buildup in these areas is negligible.

Figure 12. HF Concentration (mg/m3) Through Roof Vent: Average (Blue) and Standard Deviation (Red), for the North and South Potrooms Under Six Wind Cases (E 3 m/s, WSW 8 m/s, W 10 m/s, NW 8 m/s, ENE 10 m/s, and E 10 m/s)

Figure 13. Potroom Temperature (°C) Just Above Hoods (Blue) and at Roof Vent Intake (Red), for the North and South Potrooms Under Six Wind Cases (E 3 m/s, WSW 8 m/s, W 10 m/s, NW 8 m/s, ENE 10 m/s, and E 10 m/s)


CONCLUSIONS

The results of the CFD analysis of the Fjarðaál Smelter potroom ventilation system confirmed that its final design would provide adequate overall performance under all expected wind conditions. The claustra wall design is highly effective in dampening the effects of outside winds on the internal potroom environment, and in drawing in sufficient amounts of fresh air to prevent HF and heat buildup in the main worker areas. The potroom ventilation system design generally accommodates the secondary flows while preventing such recirculating air from penetrating the worker level in the wide aisles. Additionally, vertical thrust of the plume over the pots is facilitated by sufficient potroom height, roof pitch, and roof vent dimensions.

ACKNOWLEDGMENTS

The authors would like to thank the staff at the Deschambault Smelter for their superb efforts in supporting the smoke tests in a safe and reliable manner, and for producing quality photographs and videotape for use in the model validation effort.

The authors would also like to recognize Aluminium Pechiney, the original technology supplier of the ventilation system at the Deschambault Smelter, for its permission to publish this paper.

TRADEMARKS

ANSYS and FLUENT are registered trademarks of ANSYS, Inc., or its subsidiaries in the United States or other countries. [ICEM CFD is a trademark used by ANSYS, Inc., under license.]

REFERENCES

[1] M. Dupuis, “3D Modeling of the Ventilation Pattern in an Aluminium Smelter ‘Potroom’ Building Using CFX-4,” CFD 2001 Conference, Waterloo, Ontario, Canada, May 2001 <http://www.genisim.com/website/cfd2001.htm>.

[2] J. Bos et al., “Numerical Simulation, Tools to Design and Optimize Smelting Technology,” TMS Light Metals, 1998, pp. 393–401.


Figure 14. Potroom Air Velocity (m/s) in Axial (Drift) Direction (Blue) and Vertical (Thrust) Direction (Red) Below Roof Vent Intake, for the North and South Potrooms Under Six Wind Cases (E 3 m/s, WSW 8 m/s, W 10 m/s, NW 8 m/s, ENE 10 m/s, and E 10 m/s)

The original version of this paper was published in Light Metals 2005, edited by Halvor Kvande, TMS (The Minerals, Metals & Materials Society), February 2005.


BIOGRAPHIES

Jon Berkoe is a senior principal engineer and manager for Bechtel Systems & Infrastructure, Inc.’s Advanced Simulation and Analysis Group. He oversees a team of 20 technical specialists in the fields of CFD, finite element structural analysis, virtual reality, and dynamic simulation in supporting Bechtel projects across all global business sectors. Jon is an innovative team leader with industry-recognized expertise in the fields of CFD and heat transfer. During his 20-year career with Bechtel, he has pioneered the use of advanced engineering simulation on large, complex projects encompassing a wide range of challenging technical issues and complex physical conditions.

Jon has presented and published numerous papers for a wide variety of industry meetings and received several prominent industry and company awards. They include the National Academy of Engineering’s Gilbreth Lecture Award, the Society of Mining Engineers’ Henry Krumb Lecturer Award, and three Bechtel Outstanding Technical Paper awards.

Jon holds MS and BS degrees in Mechanical Engineering from the Massachusetts Institute of Technology, Cambridge, Massachusetts, and is a licensed mechanical engineer in the state of California.

Philip Diwakar is a senior engineering specialist for Bechtel Systems & Infrastructure, Inc.’s Advanced Simulation and Analysis Group. He employs state-of-the-art technology to resolve a wide range of complex engineering problems on large-scale projects. Philip has more than 15 years of experience in CFD and finite element analysis for structural mechanics. His more recent experience includes work on projects involving fluid-solid interaction and explosion dynamics.

During his 6-year tenure with Bechtel, Philip has received two full technical grants. One was used to determine the effects of blast pressure on structures at liquefied natural gas plants, with a view toward an advanced technology for designing less costly, safer, and more blast-resistant buildings. The other grant was used to study the effects of soil and fluids on building structures and vessels during seismic events. Philip has also received three Bechtel Outstanding Technical Paper awards, as well as two awards for his exhibit on the applications of fluid-solid interaction technology at the 2006 Engineering Leadership Conference in Frederick, Maryland.

Before joining Bechtel, Philip was a project engineer with Caterpillar, Inc., where he served as part of a Six Sigma team. He applied his CFD expertise to determine the best approach for solving issues involving the cooling of Caterpillar heavy machinery.

Philip holds an M.Tech degree in Aerospace Engineering from the Indian Institute of Science, Bengaluru; a B.Tech degree in Aeronautics from the Madras Institute of Technology, India; and a BS degree in Mathematics from Loyola College, Baltimore, Maryland. He is a licensed professional engineer in Mechanical Engineering and is Six Sigma Yellow Belt certified.

Lucy Martin is chief environmental engineer for Bechtel’s Mining & Metals (M&M) global business unit. She is functionally responsible for the environmental engineering executed from Bechtel offices in Montreal, Canada; Brisbane, Australia; and Santiago, Chile. Lucy also provides environmental expertise to major aluminum smelter projects.

As lead environmental engineer for the Fjarðaál Aluminum Smelter Project, Lucy was responsible for permitting, wastewater management, and energy reduction and recovery. She also coordinated the CFD modeling study of the ventilation pattern in the potroom and carbon area to ensure environmental compliance. On the Alba Aluminum Smelter Potline 5 Expansion Project in Bahrain, Lucy served as the environmental engineer responsible for developing environmental design criteria and ensuring environmental compliance with legislative and client requirements.

Lucy began her career with Bechtel as a process engineer with the Bechtel Water organization and transitioned to Bechtel Oil, Gas and Chemicals, Inc., before joining M&M in 2003.

Lucy holds a BS degree in chemical engineering from the University of Sheffield, England, and is a registered professional engineer in Ontario, Canada.

Bob Baxter is a technology manager and technical specialist in the Bechtel Mining & Metals Aluminum Center of Excellence. He provides expertise in the development of lean plant designs and materials handling and environmental air emission control systems for aluminum smelter development projects, as well as smelter expansion and upgrade studies. He is one of Bechtel’s technology leads for the Az Zabirah, Massena, and Kitimat aluminum smelter studies.

Bob has 25 years of experience in the mining and metals industry, including 20 years of experience in aluminum electrolysis. He is a recognized specialist in smelter air emission controls and alumina handling systems.

Before joining Bechtel, Bob was senior technical manager for Hoogovens Technical Services, where he was responsible for the technical development and execution of lump-sum turnkey projects for the carbon and reduction areas of aluminum smelters.

Bob holds an MAppSc degree in Management of Technology from the University of Waterloo, and a BS degree in Mechanical Engineering from Lakehead University, both in Ontario, Canada.


C. Mark Read is senior specialist, primary aluminum processes, in the Bechtel Mining & Metals Aluminum Center of Excellence. He serves as engineering manager for the Kitimat Smelter Modernization Project in British Columbia. The project will increase the smelter’s production capacity by 40 percent, making it one of the largest smelters in North America.

Mark has 28 years of technology management experience in the mining and metals industry, including leadership of advanced simulation and analysis groups with substantial CFD capability. He has expertise in the application of CFD to primary processes, including smelter potroom ventilation; magneto-hydrodynamics in Hall-Héroult cells; solidification, fluid flow, and stress distribution in direct chill casting of aluminum ingot; and combustion in coke calcination kilns and anode baking furnaces.

Mark has provided technology support to smelter studies in Iceland, Russia, North America, the Middle East, and the Caribbean. His expertise in mining and metals covers Hall-Héroult cell design and operation, prebaked carbon products processing and performance, and aluminum casting operations.

Before joining Bechtel, Mark served as director, Technology and Processing, for the Elkem Metals Company.

Mark holds an MSc degree in Industrial Metallurgy and a BSc degree in Metallurgical Engineering, both from Sheffield Hallam University, Sheffield, South Yorkshire, England.

Patrick Grover is director of Environmental, Health and Safety (EHS) for Alcoa’s Global Primary Products Growth, Energy, Bauxite, and Africa Business Unit. He leads all aspects of EHS and related government/community consultation for aluminum mining, refining, smelting, and power generation megaprojects. Patrick has 17 years of experience in the aluminum industry. He is currently working on the development of aluminum projects in Greenland and North Iceland.

Before joining Alcoa, Patrick served as area manager with the Virginia Department of Environmental Quality.

Patrick holds a BS in Chemistry from Virginia Commonwealth University, Richmond, Virginia.

Don Ziegler is program manager for Modeling and Simulation for Alcoa Primary Metals. He leads a team that provides CFD, magnetic-field, thermoelectric, and structural analyses support to Alcoa’s aluminum smelter projects. Since joining Alcoa 23 years ago, Dr. Ziegler has specialized in CFD and magneto-hydrodynamics. He has also developed a wide variety of process models for various aspects of aluminum processing. In addition, he has devised several novel approaches to micro-scale models of aluminum to simulate structural evolution.

Before joining Alcoa, Dr. Ziegler served as a research engineer at St. Joe Minerals Corporation in Monaca, Pennsylvania.

Dr. Ziegler holds PhD and MS degrees in Metallurgical Engineering from the University of California, Berkeley, and a BS degree in Metallurgy from Pennsylvania State University, University Park.

LONG-DISTANCE TRANSPORT OF BAUXITE SLURRY BY PIPELINE

Terry Cunningham ([email protected])

Issue Date: December 2008

Abstract—The traditional methods of transporting bauxite from mine to alumina refinery over long distances are by rail or conveyor. However, slurrying bauxite and pumping it through a pipeline is a viable option that warrants serious consideration. This is especially true in rugged terrain where rail and conveyor construction can become expensive and time consuming. Furthermore, a pipeline can offer the benefits of being unobtrusive, more environmentally friendly, and less subject to interference from local populations. The world’s first long-distance bauxite slurry pipeline was commissioned in Brazil in May 2007. Owned by Mineração Bauxita Paragominas (MBP), it is pumping up to 4.5 million tonnes of dry bauxite per year 245 km to the Alunorte Refinery. This paper discusses the advantages and disadvantages of a bauxite slurry pipeline and notes major issues in considering such an option. It compares the costs of the rail and pipeline transport options over a range of long distances traversing rugged terrain and identifies the point where economics begin to favor the pipeline.

Keywords—bauxite, breakeven point, economic pipeline length, long-distance pumping, pipeline, rail, slurry

INTRODUCTION

The traditional methods of transporting bauxite from mine to alumina refinery over long distances are by rail or conveyor. Bauxite is delivered to the refinery relatively dry as run-of-mine ore ready for feed into the plant crushing/grinding circuit.

However, an alternative means of transport gaining attention is to prepare a slurry from the bauxite ore at the mine and pump it through a pipeline to the refinery, where it is dewatered in high-pressure filters. This option warrants consideration when traversing rugged terrain where rail and conveyor construction can become expensive and time consuming. Furthermore, a pipeline can offer the benefits of being unobtrusive, more environmentally friendly, and less likely to suffer interference from local populations.

This paper discusses the advantages and disadvantages of the rail and pipeline transport options. The shorter-haul options of truck and conveyor are also discussed but discounted as not appropriate for long-distance transportation. The paper’s focus is on the pipeline transport option and when this option is likely to become feasible in terms of cost and operation. Major factors requiring consideration when evaluating the pipeline option are outlined.

Costs are compared for rail versus pipeline, showing unit costs against distance. A cost breakeven point (BEP) is established to show under what conditions and over what distance pipeline becomes more cost-effective than rail.
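The breakeven logic is simple enough to sketch directly: each option is represented by a fixed cost plus a distance-proportional cost, and the BEP is the distance at which the two totals are equal. The figures below are placeholders for illustration, not costs from this study.

```python
# Minimal sketch of the rail-versus-pipeline breakeven point (BEP) described above.
# Fixed and per-kilometre figures are hypothetical placeholders (million USD).
def total_cost(distance_km, fixed_musd, variable_musd_per_km):
    """Total cost for a given transport distance: fixed plus distance-proportional."""
    return fixed_musd + variable_musd_per_km * distance_km

rail_fixed, rail_var = 120.0, 2.0   # loading/unloading stations; track, earthworks, crossings
pipe_fixed, pipe_var = 260.0, 1.2   # slurry preparation + dewatering; pipe and pumps

# Costs are equal where pipe_fixed + pipe_var*d = rail_fixed + rail_var*d
bep_km = (pipe_fixed - rail_fixed) / (rail_var - pipe_var)
print(f"pipeline becomes cheaper than rail beyond ~{bep_km:.0f} km (with these placeholder costs)")

for d in (100, 175, 250, 400):
    print(d, "km: rail", total_cost(d, rail_fixed, rail_var),
          "vs pipeline", total_cost(d, pipe_fixed, pipe_var))
```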

BACKGROUND

Transporting minerals by pumping slurry through a pipeline has a perceived advantage throughout the transport system. However, many important issues need to be addressed when assessing a pipeline option.

Pipeline transport of minerals became of interest in 1967 after Bechtel constructed the world’s first iron ore slurry pipeline: the 85-km-long Savage River project in Tasmania. Since then, long-distance slurry pipeline transport has become quite commonly adopted for mineral concentrates.

Interest has increased in the alumina industry, where the world’s first long-distance bauxite slurry pipeline was commissioned in Brazil in September 2007. Owned by Mineração Bauxita Paragominas (MBP) and 245 km long, the pipeline is used to pump up to 4.5 million tonnes (dry weight) of bauxite per year across remote and rugged terrain to the Alunorte Refinery. [1]




Reviews to date show that pipeline transport becomes more viable over long distances in rugged terrains with steep gullies, peaks, and ridges where rail and conveyor solutions can be difficult and costly to implement.

TRANSPORT SYSTEM OPTIONS

Options used to transport bauxite vary according to a number of factors, particularly distance and terrain. Transport options include short-haul truck, long-haul truck, overland conveyor, rail, and pipeline.

The two truck options are generally best suited for relatively short distances (up to 75 km) between mine and refinery. As distance between mine and refinery increases, the more-permanent overland conveyor, rail, and pipeline transport modes are generally employed to reduce the operating costs of hauling to the refinery. These three options require the establishment of mining and ore preparation hubs at each system’s feed end; these hubs are typically fed from the mine face by short-haul trucks.

Each of the five options is briefly discussed in this section.

Short-Haul Truck
Large-capacity trucks (generally ranging from 100–250 tonnes) are cost-effective for hauls up to 20 km. They allow operational flexibility when mining occurs from multiple pits. They require costly heavy-duty haul roads (due to their high axle loads and width) that require ongoing maintenance to achieve optimum operations. Road costs are heavily affected by the terrain, rising rapidly as it steepens and as the roads are lengthened to maintain safe design parameters.

Long-Haul Truck
Specially designed large-capacity road-train trucks (generally ranging from 250–350 tonnes) increase the cost-effective haul distance to 50–75 km, depending on the terrain. They are designed to carry run-of-mine ore and can be loaded directly at the mine face. These trucks also allow mining from multiple facings while allowing a larger incremental extension of the haul distance. The haul roads are less costly than those for short-haul trucks because multiple axles reduce pavement load, but they are similarly affected by terrain.

Overland Conveyor
Overland conveyor systems are specifically designed to transport large quantities of bauxite and are cost-effective over distances of 15–100 km. A conveyor can have a single flight of 15–50 km, depending on the type chosen (e.g., cable belts). Longer distances require several conveyors in series. Conveyors occupy a smaller footprint than haul roads and accommodate steeper grades of up to 10%. They have horizontal alignment restrictions but can accommodate horizontal curves ranging from 2–4 km radius, depending on the tonnage being transported. Conveyor costs are affected by terrain, but the conveyors can be elevated in steel galleries to maintain the required vertical grades at creek or other crossings.

Rail
Rail is feasible for transport distances exceeding 100 km and excels in gentle terrain with only a few stream crossings. Rail requires a generally flat vertical alignment with about 1% maximum grade. Horizontal alignment can accommodate tight bends down to 300 m radius. Rail is severely affected by terrain because it must follow the contours to maintain the low maximum grade and minimize cuttings. Therefore, in rugged terrain, rail length can become up to 50% longer than the length of a direct route unless large cuttings and embankments are constructed, with associated high costs. Stream crossings are usually expensive, requiring bridges and/or large culvert structures to accommodate high axle loads and flood flows.

Pipeline
A pipeline uses water as the transport medium, in contrast to the four dry options just described. The ore must be slurried with water to approximately 50% solids for pumping and then dewatered at the refinery to return it to the required moisture content for refinery feed. Because of this additional requirement, a pipeline becomes cost-effective only when long routes are being considered—usually well in excess of 100 km in rugged terrain.

ABBREVIATIONS, ACRONYMS, AND TERMS

BEP breakeven point

MBP Mineração Bauxita Paragominas

MPa megapascal

Mt/y million tonnes per year

w/w weight-to-weight

μm micrometer (micron)



Slurry pipeline alignments, both vertical and horizontal, are considerably more flexible than those of rail or conveyor. Horizontally, pipe may weave around obstacles with a minimum radius of just a few meters. Vertically, pipe may rise and fall with grades of up to about 16%, which is a limiting factor to prevent settled particles from sliding downward when flow ceases. Pipelines are generally buried and follow existing terrain without substantial cut and fill earthworks. At creek crossings, the pipe is usually buried under the creek bed. Substantial rivers are crossed in conjunction with an access bridge.

TRANSPORT SYSTEM COMPONENTS

Overview
To further understand the different options, the entire transport system needs to be considered; that is, from mine face to refinery grinding circuit. This approach properly represents the total cost of a system on an equitable basis.

Table 1 shows the key components of each transport system, highlighting, for each transport mode, the fixed components (F) and the component that varies (V) with transport distance. Also highlighted are the components common (C) to all systems regardless of transport mode.

The information in Table 1 indicates that the pipeline option has substantially more fixed components than the other options because of its slurrying and dewatering requirements. Thus, pipelines are feasible only for transporting bauxite over long distances, where these costs can be amortized.

The remainder of this paper discusses only the two long-distance transport options of rail and pipeline. To make this discussion meaningful, a better understanding is required of the features of each system. Therefore, the following paragraphs provide more detailed information about their key features.

Rail
Rail is a straightforward, well-understood, bulk materials transport system. Robust in construction and operation, it offers great flexibility in its carrying capacity. Furthermore, it does not require altering the properties of the bauxite that feeds the refinery.

As described in Table 1, the rail system consists of a rail loading facility, the rail line, and a rail unloading facility. The requirement for loading and unloading facilities is independent of the transport length and so represents a fixed cost. The most significant variables affecting the cost of rail are transport distance, associated terrain, and number of significant crossings. Also, adverse geotechnical conditions such as hard rock in cuttings and access through swamps can significantly affect cost.

Table 1. System Components for Each Transport Mode
(Material flow runs left to right. C = common to all systems; F = fixed component of that mode; V = component that varies with transport distance.)

Short-Haul Truck: Mine Supply (C) → Short-Haul Road (V) → Ore Crushing (C) → Storage (C) → Grinding (C) → Refinery Process (C)
Long-Haul Truck: Mine Supply (C) → Long-Haul Road (V) → Ore Crushing (C) → Storage (C) → Grinding (C) → Refinery Process (C)
Overland Conveyor: Mine Supply (C) → Ore Crushing (C) → Overland Conveyor (V) → Storage (C) → Grinding (C) → Refinery Process (C)
Rail: Mine Supply (C) → Ore Crushing (C) → Rail Loading Station (F) → Rail Line (V) → Rail Unloading Station, Conveying (F) → Storage (C) → Grinding (C) → Refinery Process (C)
Pipeline: Mine Supply (C) → Ore Crushing (C) → Grinding (C) → Cycloning, Slurry Storage, Pumping (F) → Pipeline (V) → Slurry Storage, Dewatering, Water Disposal, Conveying, Power (F) → Solids Storage (F) → Re-slurry (F) → Refinery Process (C)

Notes: Mine Supply = truck loading and local hauling; Ore Crushing = truck dump, crushing, and local conveying.


In rugged terrain, rail length can become up to 50% longer than the length of a direct route as the route follows contours to maintain the low maximum grade and minimize cuttings. If a more direct route is required, large cuttings and embankments must be constructed, with associated high costs. Stream crossings are usually expensive, necessitating bridges and/or large culvert structures to accommodate high axle loads.

Rail system capacity is generally limited by the amount of rolling stock (wagons and locomotives) available. Typically, the smallest amount of rolling stock practicable is employed because rail requires such a large initial investment in rail loading and unloading facilities and formation earthworks. System capacity can be expanded only by incurring the expense of adding more rolling stock.
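A rough illustration of how capacity is tied to rolling stock follows; every number below is hypothetical and serves only to show the proportionality.

```python
# Hypothetical illustration of how rail capacity scales with rolling stock.
bauxite_mtpa = 12.0          # annual tonnage to move, million tonnes
wagon_payload_t = 100.0      # tonnes of bauxite per wagon
cycle_time_h = 14.0          # load + haul + unload + return per round trip
availability = 0.85          # fraction of the year a wagon is actually in service

trips_per_wagon = 8760.0 * availability / cycle_time_h
wagons_needed = bauxite_mtpa * 1e6 / (wagon_payload_t * trips_per_wagon)
print(f"~{wagons_needed:.0f} wagons needed; adding capacity means buying more rolling stock")
```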

Rail does create a small barrier across the landscape, but pedestrian and/or vehicle access can be provided relatively easily and safely, commonly via large box-culvert access ways or crossover bridges.

Pipeline
The significant difference in transport via slurry pipeline is that the bauxite must be slurried for pumping and then dewatered to the equivalent moisture (12%–14%) found in the as-mined bauxite, before it is fed into the refinery. Dewatering is necessary because once the bauxite is in the refinery circuit, additional water content beyond the normal range incurs additional capital and power costs to evaporate the excess moisture.

As indicated in Table 1, the pipeline system has three major parts: slurry preparation facility, pipeline and pumps, and dewatering facility. Slurry preparation and dewatering are generally independent of pipeline length.

Slurry Preparation Facility
The slurry preparation facility includes crushing, grinding, water supply, agitator tanks, and slimes removal. To ensure that slurry pumping is performed as designed, a consistent product with the right characteristics must be fed into the pipeline. Achieving the correct particle size distribution during slurry preparation requires a well designed and controlled plant. The ideal particle distribution for bauxite slurry has a top size of 250 μm, with about 50% of the particles less than 45 μm. [1] Grinding circuits are generally designed to target this grading, but the ore feed from mining also plays an important role. It may be necessary to extensively practice and optimize the selective mining and blending of ores, because this part of the process affects the sizing of the end product considerably.

Pipeline and Pumps
The transport system from slurry preparation area to refinery consists of a long steel pipeline and high-pressure pumps, plus ancillary items. Pumping large quantities of bauxite requires special considerations to maintain a high level of reliability in this single “life line.” Of paramount importance are the characteristics of the bauxite particles in the slurry. Solids degradation by particle attrition through the pumps and pipeline could change the fluid rheology, which, in turn, could affect pumping by necessitating a higher hydraulic head and more power. Opportunities for the occurrence of plugs and leaks need to be mitigated by a design that reduces the risk of such events. The design must also maintain a pipeline velocity high enough to prevent solids from dragging on the bottom of the pipe in a situation called “incipient sanding,” which can increase pipe wear, reduce pipeline life, and lead to a higher pumping head requirement as a result of the reduced flow cross-section.

Dewatering Facility
Once the slurry arrives at the refinery, significant processing is required to dewater it to about 12%–14% water content. This process minimizes the dilution of the refinery liquor in the circuit. Dewatering is typically achieved via a bank of hyperbaric high-pressure filters. [2] The filtration process uses large amounts of compressed air, and some processes use pressurized steam. The amount of power required to operate the filters and compressors and to produce steam is significant, typically about 12 MW for a refinery of 3.5 million tonnes of annual alumina capacity. Water from the filters is usually clarified before its release downstream. In some cases, it may be returned to the mine via a water pipeline, but this latter option is not normally practiced because of its high cost. The refinery also uses some of the water.
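The water balance behind this dewatering duty can be sketched per dry tonne of bauxite, assuming the roughly 50% solids slurry and the 12%–14% cake moisture quoted in the text (13% is used here), with the annual rate taken from the MBP figure cited earlier. This is illustrative only.

```python
# Water balance per dry tonne of bauxite: slurried at ~50% solids (w/w) in the
# pipeline, filtered back to a cake of ~13% moisture (within the 12%-14% range).
solids_fraction_slurry = 0.50    # w/w solids in the pipeline
cake_moisture = 0.13             # w/w water in the filter cake

water_in_slurry = (1.0 - solids_fraction_slurry) / solids_fraction_slurry   # t water / t solids
water_in_cake = cake_moisture / (1.0 - cake_moisture)                       # t water / t solids
water_removed = water_in_slurry - water_in_cake

annual_bauxite_mt = 4.5          # Mt/y, the MBP rate quoted earlier in the paper
print(f"filters must remove ~{water_removed:.2f} t of water per dry tonne of bauxite")
print(f"at {annual_bauxite_mt} Mt/y that is ~{water_removed * annual_bauxite_mt:.1f} Mt of water per year")
```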

Expansion
The capacity of a pipeline and any expansion requirements must be decided upon during the initial design and construction phase because there is little opportunity to do so later. Sizing the pipeline only for the initial bauxite demand leaves little leeway to increase capacity. Additional pump capacity may produce only an extra 10%–20% of tonnage.




To allow for additional future demand, the pipeline must be oversized until that demand is met. This means that batch pumping is required, which is operationally expensive because significantly more water than ore is delivered to the refinery. If the maximum future bauxite demand is met early, this could be an acceptable option because the high operating cost associated with pumping large quantities of water is limited in time.

COMPARISON OF RAIL AND PIPELINE SYSTEMS

Table 2 compares the attributes and differences of the rail and pipeline long-distance transport systems.

Many factors may make a pipeline more favorable. For example, if a transport site is long, located in rugged terrain, has safety and security concerns, and/or is located in environmentally sensitive areas or crosses numerous creeks, then the case for pipeline could be strong.

PIPELINE SYSTEM DESIGN CONSIDERATIONS

This section presents an overview of the key elements to be considered when designing a pipeline transport system. These elements are focused on water supply, slurry properties, and pipeline design.

Water Supply
A reliable water supply is essential. The water resource should be located near the slurry preparation area, with continuous supply to meet the demand. The water supply scheme generally involves an off-take weir from a local stream, with a pump station and a pipeline to a storage dam. The storage dam is needed to accommodate seasonal variations in stream flow. A pump transfers water to the grinding circuit for slurry preparation.

If water resources are scarce, a return water pipeline may be necessary from the dewatering facility at the refinery back to the mine, at increased project cost.

Table 2. Comparison of Rail and Pipeline Attributes and Differences

Selection Factor | Rail | Pipeline
Distance | Suited to 100+ km; cut and fill quantities important | Suited to 100+ km; potentially higher cost BEP
Constructability – Rugged Terrain | Heavy equipment for sleeper plant (in country), track, and bridge construction; high earthworks | Pipeline simpler to construct, with minimal earthworks
River Crossings | High cost for bridges and culverts | Stream crossings simpler and cheaper (buried under stream bed)
Water Requirements | Minor | Water supply required for slurry preparation
Future Expansion | Additional rolling stock; sidings | No expansion unless initially undersized or designed as batch transfer; large diameter for future use means significant extra quantities of water
Security and Interference | Exposed to risk from human and environmental influences | Better protected and thus more secure because buried
Safety – Local Population | Exposure to moving equipment | Bauxite is enclosed in buried pipeline
Environmental – Habitats | Can restrict fauna movement | Low impact – lower requirement for clearing because footprint is small; habitat not cut off or isolated
Environmental – Noise and Dust | Moderate to high impact | Low impact – no noise or dust issues except during construction
Community – Impact of Route | High impact; larger deviations result in longer route | Low impact – can deviate easily; fewer resettlement issues


Slurry Properties
• Rheology—The rheology of the slurry influences a number of important factors in long-distance pumping. It is important to have sufficient slurry viscosity to prevent solids from settling out during short flow interruptions. To obtain the proper viscosity, the slurry must have a sufficient proportion of ultra-fine (less than 10 μm) particles. However, while the presence of these particles serves to increase slurry viscosity, it also increases pumping costs because of the increased head requirement. During normal flow, pipeline velocity must be above the minimum settling velocity to prevent larger particles from dragging on the bottom of the pipeline. The design velocity is usually above laminar flow, just inside the turbulent flow region. This compromise keeps particles in suspension and the pumping head as low as possible. A higher velocity may be used in the design to ensure that no particles settle even if larger particles appear due to change of grind or ore body. Pump test loops are used to confirm the hydraulic properties of the slurry before finalizing design. (A short numerical sketch of these slurry transport quantities follows this list.)

• Slurry density—The higher the slurry density, the better from a transport viewpoint, because water is the carrying medium only and is otherwise of no value. Typically, 50% solids weight-to-weight (w/w) is used; a higher density means that less water is transported. However, increasing the pumping density beyond 55% increases the pumping requirements in terms of both head and power, and higher concentrations have not been accepted for long-distance pumping because the operational risk is currently considered too high.

• Grading—The slurry grading is very important in achieving the right performance during slurry pumping. For bauxite slurry, the ideal grading has a top size of 250 μm, with about 50% less than 45 μm. A percentage of ultra-fine particles is desirable to improve slurry characteristics. However, if desliming is required to remove highly reactive silica, this may not be achieved. In this case, higher velocity pumping is required to keep the coarser particles in suspension.

• Particle degradation—Much has been reported about the potential of bauxite to break down because the ore tends to be soft and could suffer particle size degradation due to turbulent flow and the mechanical impacts of pumping. Such a breakdown can increase the viscosity and the pumping head required.

This is especially true if the proportion of ultra-fine particles is increased. However, MBP in Brazil has reported no particle attrition or increase in viscosity in more than 12 months since the initial operation of its bauxite slurry pipeline. [1]
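As a rough illustration of how solids concentration and particle size feed into the hydraulic design, the sketch below computes slurry density from the weight fraction of solids and applies a Durand-type minimum (deposition) velocity check. The bauxite solids density and the Durand coefficient used here are illustrative assumptions, not values from this paper; in practice these parameters come from the pump test loops mentioned above.

```python
import math


def slurry_density(cw, rho_s=2400.0, rho_w=1000.0):
    """Slurry density (kg/m3) at weight fraction solids cw (e.g., 0.50 for 50% w/w).

    1/rho_slurry = cw/rho_s + (1 - cw)/rho_w.
    rho_s is an assumed bauxite solids density, not a value from the paper.
    """
    return 1.0 / (cw / rho_s + (1.0 - cw) / rho_w)


def durand_deposition_velocity(d_pipe_m, rho_s=2400.0, rho_w=1000.0, f_l=0.5):
    """Durand-type limiting (deposition) velocity, m/s.

    V_L = F_L * sqrt(2 * g * D * (S - 1)), with S = rho_s/rho_w and F_L an
    empirical coefficient (an assumed ~0.5 here for fine slurries).  The design
    line velocity is kept above this value so solids stay in suspension.
    """
    g = 9.81
    s = rho_s / rho_w
    return f_l * math.sqrt(2.0 * g * d_pipe_m * (s - 1.0))


if __name__ == "__main__":
    print(f"50% w/w slurry density: {slurry_density(0.50):.0f} kg/m3")
    print(f"deposition velocity, 0.6 m pipe: {durand_deposition_velocity(0.6):.2f} m/s")
```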

Pipeline Design

Performance and Security Considerations
A pipeline's long-term performance and security are paramount for a successful slurry transport system. The following concerns, all of which bear on the selection of pipe wall thickness, require serious attention (an illustrative wall-thickness calculation follows this list):

• Wear—Erosion of the inside surface of the pipe is inevitable when a mineral slurry is being pumped. To deal with this issue, the pipe is either lined or provided with additional (sacrificial) wall thickness. However, experience has shown that bauxite slurry is not very abrasive and causes negligible erosion. Furthermore, most of the solids are in suspension because of the fine particles.

• Corrosion—Interior corrosion is difficult to predict but must be allowed for if no lining is used. An allowance for bauxite is typically 0.2 mm/year. [1] The outside of the pipe is protected using a cathodic protection system coupled with a commercial paint system.

• Transients—Pressure waves, or transients, are induced by velocity changes inside the pipeline. Sudden loss of power, valve closure, and column separation all induce transients. In long pipelines, transient pressures can be significant and must be assessed. Slurry behaves differently from water under these conditions, and there are occasions when the line is filled with water or only partially filled with slurry, so both cases need to be considered.

• Steel grade—Higher grade steel yields a lower wall thickness, per hoop stress calculations. The availability of various grades and their costs affect the choice.
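The following minimal sketch combines these factors: Barlow's hoop-stress formula for pressure containment, a corrosion allowance of the kind quoted above (about 0.2 mm/year for bauxite), an optional sacrificial wear allowance, and a Joukowsky-type estimate of transient surge pressure. The design pressure, pipe diameter, allowable stress, design life, wave speed, and velocity change shown are illustrative assumptions, not values from the paper.

```python
def wall_thickness_mm(p_mpa, d_out_mm, allow_stress_mpa,
                      corr_rate_mm_yr, design_life_yr, wear_allow_mm=0.0):
    """Barlow hoop-stress thickness plus corrosion and sacrificial wear allowances."""
    t_pressure = p_mpa * d_out_mm / (2.0 * allow_stress_mpa)   # t = p*D / (2*S)
    t_corrosion = corr_rate_mm_yr * design_life_yr             # e.g., ~0.2 mm/yr for bauxite
    return t_pressure + t_corrosion + wear_allow_mm


def joukowsky_surge_mpa(rho_kg_m3, wave_speed_m_s, delta_v_m_s):
    """Pressure rise (MPa) for a sudden velocity change: delta_p = rho * a * delta_v."""
    return rho_kg_m3 * wave_speed_m_s * delta_v_m_s / 1.0e6


if __name__ == "__main__":
    # Illustrative values only (assumed, not from the paper): 15 MPa design
    # pressure, 610 mm OD, allowable stress of 72% of a 415 MPa yield grade,
    # 0.2 mm/yr corrosion over a 25-year design life.
    t = wall_thickness_mm(15.0, 610.0, 0.72 * 415.0, 0.2, 25.0)
    print(f"indicative wall thickness: {t:.1f} mm")

    # Surge for a 1,400 kg/m3 slurry, an assumed 1,000 m/s wave speed,
    # and a 1.8 m/s sudden stoppage.
    print(f"indicative Joukowsky surge: {joukowsky_surge_mpa(1400.0, 1000.0, 1.8):.1f} MPa")
```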

Pumping and Size Considerations
Pumping high-pressure slurry over long distances in a pipeline requires a number of serious considerations in the design process (an illustrative sizing calculation follows this list):

• Pumps—The pumps used to drive the slurry along the pipeline must be robust, reliable, and of proven performance. For long pipelines, the pump of choice is the positive-displacement type. Whereas centrifugal pumps are limited by the maximum casing pressure, typically about 7 MPa or 700 m of head, positive-displacement pumps can provide pressures up to 25 MPa. Piston wear, once a problem, has been addressed by the piston diaphragm pump, in which a diaphragm protects the piston and liner from abrasive sliding contact wear.

• Pipeline internal diameter—The pipeline internal diameter is crucial for the long-term performance of the system. A diameter should be used that achieves the minimum slurry velocity needed to prevent solids settling. This usually results in flow in the turbulent region. The penalty for using a smaller diameter and higher line velocity is higher pumping heads. However, this lower-risk strategy also decreases wear and reduces the chance of plugging.

• Valves and fittings—A number of fittings are required for pipeline operation. These include isolating valves, scour valves, and air valves. Also, pressure monitors and magnetic flow recorders are installed as part of the monitoring/leak detection system. Air valves and scour valves facilitate emptying sections of pipeline and capturing the associated slurry in dump ponds. If these valves are not provided, managing line breaks or blockages could be challenging, particularly in very long pipelines.

• Dump ponds—Dump ponds may be needed to trap and contain slurry if it becomes necessary to manage breaks and line blockages. The requirement to contain slurry depends on the environmental regulations of the country of operation.

• Power and controls—Power supply and distribution are required, along with a feed control system. Commonly, a power distribution system of about 15 MW is required to deliver power to the pumps. A complex control system is required to link feed into the pipeline with demand at the refinery end. In addition, some long pipelines require leak detection monitoring, which involves recording slurry pressure at intervals along the pipeline and sending this information to the control room.
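To show how annual throughput, internal diameter, line velocity, and pump power tie together, the sketch below works through an assumed 10 Mt/y case at 50% w/w solids. The slurry density, operating hours, candidate diameters, pump efficiency, and the lumped friction-plus-static head are illustrative assumptions standing in for the slurry loop tests and hydraulic modeling a real design would rely on.

```python
import math


def line_velocity(q_m3_h, d_pipe_m):
    """Mean slurry velocity (m/s) for flow q (m3/h) in a pipe of internal diameter d (m)."""
    area = math.pi * d_pipe_m ** 2 / 4.0
    return (q_m3_h / 3600.0) / area


def hydraulic_power_mw(q_m3_h, head_m, rho_slurry=1400.0, pump_eff=0.85):
    """Shaft power (MW) to pump flow q against a total head, at an assumed pump efficiency."""
    g = 9.81
    q = q_m3_h / 3600.0
    return rho_slurry * g * q * head_m / pump_eff / 1.0e6


if __name__ == "__main__":
    # Assumed case (not from the paper): 10 Mt/y of bauxite at 50% w/w solids is
    # roughly 20 Mt/y of slurry; at ~1,400 kg/m3 and 8,000 operating hours per
    # year that is about 1,790 m3/h.
    q = 20.0e6 / 1.4 / 8000.0
    for d in (0.45, 0.55, 0.65):
        print(f"D = {d:.2f} m -> velocity = {line_velocity(q, d):.2f} m/s")

    # Power for an assumed 2,000 m of total (friction + static) head.
    print(f"hydraulic power ~ {hydraulic_power_mw(q, 2000.0):.1f} MW")
```

With these assumed numbers the result is of the same order as the roughly 15 MW power distribution mentioned above, which is the point of the sketch: diameter sets velocity, and velocity and head together set the installed pumping power.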

RISKS IN A PIPELINE TRANSPORT SYSTEM

A number of risks need to be addressed in adopting a pipeline slurry transport system.

These are increased hydraulic pressure, shortened pipeline life, plugging, excessive slimes, and dewatering plant startup.

Increased Hydraulic Pressure
A change in slurry rheology as a result of altered particle size distribution changes the pump head requirements. For positive-displacement pumps, this means applying more power to the piston stroke and consequently drawing more power from the motor. To cover this possibility, a larger motor may be required.

Shortened Pipeline Life
The pipeline may be subject to greater wear if larger particles are generated in the grinding circuit due to a change in ore characteristics. Larger particles may drag as a shifting bed, thereby increasing erosion. In addition, a change in slurry chemistry may accelerate corrosion. Pipe inspections are needed to determine the extent of wall thickness reduction.

Plugging
The slurry should be within the design grading, with a top size of 250 μm and about 50% at less than 45 μm. With this grading and a pipeline slope of no greater than 16%, the risk of plugging is low. However, if there is a change in sizing, plugging is possible, especially if the pumps stop for more than 30 minutes.

Excessive Slimes
Slimes are detrimental to dewatering plant filter operation because they blind the filter cloths. Failure to remove slimes at the slurry preparation area increases filter usage and maintenance. This situation could be costly in terms of additional capital costs and lost refinery production.

Dewatering Plant Startup
It is possible that dewatering plant startup and commissioning could take at least 3 months. [3] Efficient plant operation may be further delayed if the initial slurry particle size distribution is too fine and/or there is a significant presence of slimes. Thus, control of particle size and slimes at the slurry preparation end is important from the outset of operations.

TRANSPORT SYSTEM ECONOMICS: RAIL VERSUS PIPELINE

Bauxite transportation system study data comparing rail and pipeline provides a range of costs for projects in a number of countries. The data covers rugged terrain sites in Cameroon, Guinea, Brazil, and Vietnam for production rates of 10 million tonnes per year (Mt/y) and 20 Mt/y.


The system components used to develop the costs are listed in Table 3. Total capital and operating costs are summarized in Table 4 as cost per tonne against distance.

Figures 1 and 2 show the cost BEPs for bauxite transportation by rail and by pipeline at capacities of 10 Mt/y (Figure 1) and 20 Mt/y (Figure 2) over rugged terrain in Cameroon, Guinea, Brazil, and Vietnam. In both cases, Vietnam is an outlier compared with the other countries, which are closely aligned. This is because the Vietnamese wage rates are lower than the others. Discounting Vietnam, it is seen that for both the 10 Mt/y and 20 Mt/y capacities, the BEPs are similar, ranging from 450 to 600 km and 450 to 650 km, respectively. This shows that the BEP is largely independent of transport capacity. It should be noted that the BEPs in the two figures pertain only to the case studies selected and will vary for other locations.

Figures 1 and 2 suggest that pipeline transport does not become economical until the transport distance reaches about 450 km. The capital cost of components is the determining factor. For rail, the greater the distance, the more rolling stock (to maintain unit-train cycle time and bauxite delivery rate), sleepers, and rail are needed. For pipeline, the greater the distance, the more pumps (to maintain bauxite delivery rate) and pipe are needed. Cost-wise, pipeline components are cheaper to buy, replace, and augment than are rail components.

It should be noted that the rail and pipeline unit-cost-versus-distance curves intersect at a shallow angle at the BEP. This means that a small change in cost for either rail or pipeline results in a large shift in the BEP. For example, if the pipeline cost varies by ±10% of the estimated cost, the BEP range changes as shown in Table 5. Thus, the BEP could range from 350 to 800 km, a significant departure from the base case range of 450 to 600 km.
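As a concrete check of this breakeven behavior, the following sketch linearly interpolates the Cameroon 10 Mt/y unit costs from Table 4 to locate the distance at which the rail and pipeline curves cross. It reproduces a crossover inside the 450–600 km range quoted above and is intended only to show the mechanics of the comparison, not to replace a project-specific estimate.

```python
def crossover_km(distances, rail_cost, pipe_cost):
    """Locate where piecewise-linear rail and pipeline unit-cost curves intersect."""
    for i in range(len(distances) - 1):
        d0, d1 = distances[i], distances[i + 1]
        gap0 = rail_cost[i] - pipe_cost[i]
        gap1 = rail_cost[i + 1] - pipe_cost[i + 1]
        if gap0 <= 0.0 <= gap1:  # rail cheaper at d0, pipeline cheaper at d1
            return d0 + (d1 - d0) * (-gap0) / (gap1 - gap0)
    return None


# Cameroon, 10 Mt/y, US$/tonne at 200/400/800 km (from Table 4)
distances = [200.0, 400.0, 800.0]
rail = [3.37, 6.60, 12.06]
pipeline = [5.75, 7.55, 10.97]
print(f"indicative rail/pipeline breakeven: {crossover_km(distances, rail, pipeline):.0f} km")
```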

MBP BAUXITE SLURRY PIPELINE IN BRAZIL [1]

At the MBP facility in Brazil, bauxite is pumped through a 245 km pipeline across remote and rugged terrain to the Alunorte Refinery.


Table 3. System Components — Rail versus Pipeline

Rail | Pipeline
Land Acquisition – easements for rail | Raw Water System – pipeline, pump, and pond
Earthworks – cuttings, embankments | Slurry Thickener and Slurry Tank with Agitator
Stream Crossings – culverts, bridges | Slurry Pipeline
Trackwork – rail and rolling stock | Slurry Pumps – two pump stations
Signaling and Telecommunications | Slurry Tank with Agitator
Workshops – maintenance facilities | Pressure Filters
Crossings – level or grade separation | Water Clarifier (from pressure filters)

Table 4. Bauxite Rail and Pipeline Transportation Costs, 2006 (US$/Tonne)

Country | 10 Mt/y, 200 km | 10 Mt/y, 400 km | 10 Mt/y, 800 km | 20 Mt/y, 200 km | 20 Mt/y, 400 km | 20 Mt/y, 800 km
RAIL
Cameroon | 3.37 | 6.60 | 12.06 | 2.41 | 4.57 | 8.72
Guinea | 3.83 | 7.08 | 13.77 | 2.90 | 5.45 | 10.54
Brazil | 4.08 | 7.17 | 13.48 | 2.85 | 5.30 | 10.06
Vietnam | 3.04 | 4.86 | 10.72 | 2.21 | 4.24 | 8.06
PIPELINE
Cameroon | 5.75 | 7.55 | 10.97 | 4.77 | 5.82 | 7.91
Guinea | 5.74 | 7.51 | 10.87 | 4.76 | 5.80 | 7.49
Brazil | 5.47 | 7.32 | 10.74 | 4.51 | 5.57 | 7.62
Vietnam | 4.27 | 5.80 | 8.71 | 3.49 | 4.31 | 6.30

Table 5. Sensitivity Range of Cost BEPs, 10 Mt/y System

Base Case | Base Case –10% | Base Case +10%
450–600 km | 350–450 km | 600–800 km


Figure 1. Bauxite Rail and Pipeline Cost Breakeven Points, 10 Mt/y System (unit cost, US$/tonne, versus distance, km, for the 10 Mt/y train and 10 Mt/y pipeline cases; breakeven cost range 450–600 km, with a separate breakeven point marked for Vietnam)

Figure 2. Bauxite Rail and Pipeline Cost Breakeven Points, 20 Mt/y System (unit cost, US$/tonne, versus distance, km, for the 20 Mt/y train and 20 Mt/y pipeline cases; breakeven cost range 450–650 km, with a separate breakeven point marked for Vietnam)


This pipeline system became an economical transport mode because of factors favorable to a pipeline solution but not to a rail alternative, including location in a remote area; location in rugged terrain; use of an existing, cleared right-of-way; and the need for many stream crossings.

The MBP bauxite transportation system has the following slurry preparation, pipeline, and dewatering features.

Slurry Preparation Features
Slurry is prepared by crushing and grinding mined ore taken from stockpiles. Cyclones are used to adjust particle size. Tailings are separated and sent to a waste thickener. The prepared slurry at 50% solids is sent to agitated storage tanks next to the pipeline pump station. The pump station consists of six mainline crankshaft-driven piston diaphragm pumps connected in parallel. The pumps are Geho TZPM 2000 units from Weir Minerals Netherlands, each with a design capacity of 356 m³/h and a maximum discharge pressure of 13.7 MPa.
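Taken together, these figures imply an installed hydraulic duty of roughly

\[
P_{\mathrm{hyd}} \;\approx\; \dot{V}\,\Delta p \;=\; \frac{356\ \mathrm{m^3/h}}{3600\ \mathrm{s/h}} \times 13.7\ \mathrm{MPa} \;\approx\; 1.35\ \mathrm{MW\ per\ pump}, \qquad 6 \times 1.35 \;\approx\; 8.1\ \mathrm{MW},
\]

a rough upper bound that assumes all six pumps run simultaneously at full design flow and maximum discharge pressure, which the operation does not normally do.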

Slurry Pipeline Features
The pipeline is designed for a future maximum annual production of 12.6 million tonnes of bauxite, but the initial capacity is only 4.5 million tonnes. The ultimate capacity will be reached after a few years, when the 610 mm (24 in.) diameter pipeline is fully used. Currently, the operation pumps slurry-water batches, a requirement to maintain the slurry's minimum design velocity. A second pump station will be needed to meet the additional pumping requirements for the future maximum production.

The pipeline has a leak detection system featuring five intermediate pressure-monitoring stations to indicate continuous pressure data to the pipeline operator. The pressure data is transmitted via a fiber-optic cable. The pipeline shares right-of-way with two existing kaolin slurry pipelines.

Slurry Dewatering Features
At the terminal station, agitator tanks receive the slurry to provide feed to the filter plant. The slurry is dewatered using hyperbaric pressure filters to produce a filter cake, containing about 12% moisture, that is the bauxite feed to the refinery.

SUMMARY OF PIPELINE TRANSPORT SYSTEM ATTRIBUTES

Table 6 summarizes the key advantages and disadvantages that need to be taken into account when considering a pipeline transport system.


Table 6. Advantages and Disadvantages of a Pipeline Transport System

ADVANTAGES
Unobtrusive | Line buried, low visibility, low environmental impact, lower footprint
More Secure | Better protected, less likely to be vandalized
Safer | Local population better protected, no moving parts to clash with
Continuous Flow | No stop/start operation, less likely to experience product delay at refinery
Low Maintenance | None on pipeline, minor on pumps, high on filters
Flexible Alignment | Easily adjustable around villages or obstacles
Shorter Route | Fewer vertical and horizontal alignment constraints, resulting in a more direct route
Easier Stream Crossings | Can pass buried under streams without bridging
Environmentally Friendly | Lower footprint, less clearing, does not isolate habitat, no noise/dust

DISADVANTAGES
High Capital and Operating Costs | Slurry preparation: high capital and operating costs from mine to pipeline for slurry preparation (crushing, grinding, water supply, etc.). Slurry dewatering: high capital and operating costs for slurry receiving, dewatering filters and associated compressors, cake storage and re-slurrying, and water disposal
Water Usage | Large water requirement for slurry transport (approximately 1 tonne of water per tonne of bauxite)
Rheology Change | Change in ore characteristics can change particle distribution, leading to possible increase in pumping head and increased filtering
Blockages | May be difficult to locate and remove
Dewatering Management | Expensive to return filter water to mine; disposal at refinery may require treatment before release; downstream issues, including environmental and community, may occur
Pipeline Life | Long-term pipeline performance; higher-than-expected internal corrosion and erosion


CONCLUSIONS

Long-distance transport of bauxite by pumping slurry through a pipeline is in its infancy, with only one operation at present worldwide. This operation is at the MBP facility in Brazil, which has been pumping bauxite slurry 245 km across rugged terrain for only about 1 year. To date, there have been no reported problems.

Transporting bauxite slurry by pipeline may be the preferred solution over transporting dry bauxite by rail if a number of the favorable selection criteria listed in Table 6 are met. Even so, pipeline transport is likely to be used only in special cases, and selection must proceed with caution. There are a number of risks associated with pumping bauxite, as outlined in this paper; however, if the risks are addressed and properly managed, an economic advantage can be realized.

Compared with the long-distance rail transport of bauxite, slurry pumping becomes more economical over distances beyond 450 km for the cases presented in this paper. However, in contemplating the transportation of bauxite slurry by pumping it through a pipeline, it is essential to completely understand the properties and characteristics of the bauxite.

In the final analysis, each project is different and site dependent. Thus, even if the distance is long, the terrain is rugged, and the bauxite properties are known, much detailed analysis is still required before an informed decision can be made.

REFERENCES

[1] R. Gandhi, M. Weston, M. Talavera, G.P. Brittes, and E. Barbosa, “Design and Operation of the World’s First Long Distance Bauxite Slurry Pipeline,” in Light Metals 2008, edited by D.H. DeYoung, The Minerals, Metals & Materials Society, March 2008, pp. 95–100, access publication via <http://iweb.tms.org/Purchase/ProductDetail.aspx?Product_code=08-7100-G>.

[2] R. Bott, T. Langeloh, and J. Hahn, “Filtration of Bauxite After Pipeline Transport: Big Challenges – Proper Solutions,” presented at the 8th International Alumina Quality Workshop, Darwin, NT, Australia, September 7–12, 2008, access technical program via <http://www.aqw.com.au/Portals/32/AQW%202008%20final%20program.pdf>.

[3] M. Santa Ana, J. Morales, R. Prader, J. Kappel, and H. Heinzle, “Hyperbaric Bauxite Filtration: New Ways in Bauxite Transportation,” presented at the 8th International Alumina Quality Workshop, Darwin, NT, Australia, September 7–12, 2008, access technical program via <http://www.aqw.com.au/Portals/32/AQW%202008%20final%20program.pdf>.

BIOGRAPHY

Terry Cunningham is a senior civil engineer with 39 years of experience in planning, investigating, designing, and supervising the construction of a diverse range of large-scale infrastructure works, particularly in the mining and metals sector. He has spent the last 14 years with Bechtel and is currently based in Brisbane, Queensland, Australia, in the Alumina and Bauxite Centre of Excellence. His areas of expertise include earthworks, tailings disposal, water management, water resources, haul roads, dams, mine dewatering, drainage, flood mitigation, sewerage treatment, pumping systems, feasibility studies, and report writing.

Terry has worked throughout Australia and in Oman; Bahrain; Montreal, Quebec; Indonesia; and Papua New Guinea. Representative Bechtel assignments include serving as lead civil engineer on the Sohar aluminum smelter project in Oman, managing the design requirements for the complex licensing process required by the Queensland Government for the Pasminco Century zinc tailings dam, and designing and managing an important environmental cleanup project for the Comalco Bell Bay aluminum smelter in Tasmania. He presented a paper on this award-winning project to an international conference hosted by the Minerals Council of Australia at Newcastle near Sydney.

Before joining Bechtel, Terry worked for BHP Engineering on many coal infrastructure projects and on a major upgrade of the Brisbane-to-Cairns railway. The railway work involved replacing more than 300 old timber bridges (built in 1906) with large box culverts and bridges.

Terry qualified in Civil Engineering at Swinburne University of Technology, Melbourne, Australia. He is a member of the Institution of Engineers Australia and a registered professional engineer. Terry lectured part-time for 2 years at Royal Melbourne Institute of Technology.



TECHNOLOGY PAPERS

Bechtel OG&C Technology Papers

World's First Application of Aeroderivative Gas Turbine Drivers for the ConocoPhillips Optimized Cascade® LNG Process
Cyrus B. Meher-Homji, Tim Hattenbach, Dave Messersmith, Hans P. Weyermann, Karl Masani, and Satish Gandhi, PhD

Innovation, Safety, and Risk Mitigation via Simulation Technologies
Ramachandra Tekumalla and Jaleel Valappil, PhD

Optimum Design of Turbo-Expander Ethane Recovery Process
Wei Yan, PhD; Lily Bai, PhD; Jame Yao, PhD; Roger Chen, PhD; Doug Elliot, PhD; and Stanley Huang, PhD

OG&C — Jamnagar Export Refinery: With the construction of this refinery in northwest India, the second refinery on the site, Jamnagar has now become the world's largest oil-refining operation.


WORLD'S FIRST APPLICATION OF AERODERIVATIVE GAS TURBINE DRIVERS FOR THE CONOCOPHILLIPS OPTIMIZED CASCADE® LNG PROCESS

Cyrus B. Meher-Homji, [email protected]
Tim Hattenbach, [email protected]
Dave Messersmith, [email protected]
Hans P. Weyermann, [email protected]
Karl Masani, [email protected]
Satish Gandhi, PhD, [email protected]

Originally Issued: April 2007; Updated: December 2008

Abstract—Market pressures for new thermally efficient and environmentally friendly liquefied natural gas (LNG) plants, coupled with the need for high plant availability, have resulted in the world's first application of high-performance PGT25+ aeroderivative gas turbines for the 3.7 MTPA Darwin LNG plant in Australia's Northern Territory. The plant was operational several months ahead of contract schedule and exceeded its production target for 2006. This paper describes the philosophy leading to this first-of-a-kind aeroderivative gas turbine plant and future potential for the application of larger aeroderivative drivers, which are an excellent fit for the ConocoPhillips Optimized Cascade® Process.

Keywords—aeroderivative, gas turbine, greenhouse gas, LNG, LNG liquefaction, thermal efficiency

1 ConocoPhillips Optimized Cascade Process services are provided by ConocoPhillips Company and Bechtel Corporation via a collaborative relationship with ConocoPhillips Company.

INTRODUCTION

Aeroderivative engines fit the ConocoPhillips Optimized Cascade® Process1 because of the two-trains-in-one design concept that facilitates the use of such engines. Further, the application of a range of larger aeroderivative engines that are now available allows for a flexible design fit for this process. Benefits of aeroderivative engines over large heavy-duty single- and two-shaft engines include significantly higher thermal efficiency and lower greenhouse gas emissions, the ability to start up without the use of large helper motors, and improved production efficiency due to modular engine change-outs. For instance, the Darwin liquefied natural gas (LNG) plant is able to operate at reduced rates of 50% to 70% in the event that one refrigeration compressor is down.

Several practical aspects of the application of aeroderivative gas turbines as refrigeration drivers, along with design and implementation considerations, are discussed below. The selection of aeroderivative engines and their configurations for various train sizes, as well as emission considerations, are also covered.

OVERVIEW OF THE DARWIN LNG PROJECT

On February 14, 2006, the Darwin LNG plant was successfully commissioned and the first LNG cargo was supplied to the buyers, Tokyo Electric and Tokyo Gas. The plant represents an innovative benchmark in the LNG industry as the world's first facility to use high-efficiency aeroderivative gas turbine drivers. This benchmark follows another landmark innovation by ConocoPhillips: the first application of gas turbine drivers at the Kenai LNG plant in Alaska built in 1969.

The Darwin plant is a nominal 3.7 million tonnes per annum (MTPA) capacity LNG plant at Wickham Point, located in Darwin Harbour, Northern Territory, Australia. The plant is connected via a 500 km, 26-inch-diameter subsea pipeline to the Bayu-Undan offshore facilities. The Bayu-Undan field was discovered in 1995 approximately 500 km northwest of Darwin in the Timor Sea (see Figure 1). Delineation drilling over the next 2 years determined the field to be of world-class quality with 3.4 trillion cubic feet (tcf) of gas and 400 million barrels (MMbbl) of recoverable condensate and liquefied petroleum gas (LPG). The Bayu-Undan offshore facility began operating in February 2004; current production averages 70,000 bbl of condensate and 40,000 bbl of LPG per day.


ABBREVIATIONS, ACRONYMS, AND TERMS

ASME American Society of Mechanical Engineers

bbl barrel

CC combined cycle

CIT compressor inlet temperature

CNG compressed natural gas

CO2 carbon dioxide

DBT dry bulb temperature

DLE dry low emissions

FEED front-end engineering design

FOB free on board

GENP General Electric Nuovo Pignone

GT gas turbine

HHV higher heating value

HPT high-pressure turbine

HSPT high-speed power turbine

ISO International Organization for Standardization

kg/sec kilograms per second

LM2500+G4 gas generator manufactured by GE Industrial Aeroderivative group

LNG liquefied natural gas

LPG liquefied petroleum gas

LSTK lump-sum, turnkey

MDEA methyldiethanolamine

MMbbl million barrels

MMBtu million British thermal units

MTPA million tonnes per annum

NGL natural gas liquid

NOx nitrogen oxide

NPV net present value

PGT25+ GENP designation of the LM2500 engine with HSPT

ppm parts per million

RH relative humidity

rpm revolutions per minute

SAC single annular combustor

SC simple cycle

SHP shaft horsepower

tcf trillion cubic feet

TMY2 typical meteorological year 2

VFD variable-frequency drive


Figure 1. Bayu-Undan Field Location and the Darwin LNG Plant


The Darwin project was developed through a lump-sum, turnkey (LSTK) contract with Bechtel Corporation that was signed in April 2003, with notice to proceed for construction issued in June 2003. An aerial photo of the completed plant is shown in Figure 2. Details regarding the development of the Darwin LNG project have been provided by Yates. [1, 2]

Not only has the Darwin plant established a new benchmark in the LNG industry by being the first LNG plant to use aeroderivative gas turbines as refrigerant compressor drivers, it also is the first to use evaporative coolers. The GE PGT25+2 is comparable in power output to the GE Frame 5D gas turbine but has an ISO thermal efficiency of 41% compared to 29% for the Frame 5D. This improvement in thermal efficiency results in a reduction of required fuel, which reduces greenhouse gas in two ways. First, CO2 emissions are reduced due to a lower quantum of fuel burned, and second, the total feed gas required for the same LNG production also is reduced. The feed gas coming to the Darwin LNG facility contains carbon dioxide, which is removed in an amine system before LNG liquefaction and is released to the atmosphere. The reduction in the feed gas (due to the lower fuel gas requirements) results in a reduction of carbon dioxide or greenhouse gas emissions from the unit.

The Darwin plant incorporates several other design features to reduce greenhouse gas emissions. They include the use of waste heat recovery on the PGT25+ turbine exhaust that is used for a variety of heating requirements within the plant. The facility also contains ship vapor recovery equipment. Both of these features not only reduce emissions that would have been produced from fired equipment and flares, but they also lead to reduced plant fuel requirements, which reduce the carbon dioxide released to the atmosphere.

Gas turbine nitrogen oxide (NOx) control is achieved by water injection, which allows the plant to control NOx emissions while maintaining the flexibility to accommodate the fuel gas compositions needed for various plant operating conditions. At the same time, there is no need for costly fuel treatment facilities for dry low NOx combustors.

The Darwin plant uses a single LNG storage tank with a working capacity of 188,000 m3, one of the largest aboveground LNG tanks. A ground flare is used instead of a conventional stack to minimize visual effects from the facility and any intrusion on aviation traffic in the Darwin area. The plant also uses vacuum jacketed piping in the storage and loading system to improve thermal efficiency and reduce insulation costs.


2 This engine uses a LM2500+ gas generator, coupled with a two-stage high-speed power turbine developed by GE Oil & Gas.

Figure 2. Aerial View of 3.7 MTPA Darwin LNG Plant — 188,000 m3 Storage Tank, 1,350 m Jetty, and Loading Dock


Methyldiethanolamine (MDEA) with a proprietary activator is used for acid gas removal. This amine selection lowers the required regeneration heat load, and for an inlet gas stream containing more than 6% carbon dioxide, this lower heat load leads to reduced equipment size and a corresponding reduction in equipment cost.

Plant Design
The Darwin LNG Plant uses the ConocoPhillips Optimized Cascade Process, which was first used in the Kenai LNG plant in Alaska and more recently in Trinidad (four trains), Egypt (two trains), and a train in Equatorial Guinea. A simplified process flow diagram of the Optimized Cascade Process is shown in Figure 3.

Thermal Efficiency Considerations
Several fundamental conditions in today's marketplace make aeroderivative engines an excellent solution:

• Sizes of available aeroderivative engines ideally fit the two-trains-in-one concept of the ConocoPhillips LNG Process.

• Aeroderivative engines are variable-speed drivers, which facilitate the flexibility of the process and allow startup without the use of large variable-frequency drive (VFD) starter motors commonly used on single-shaft gas turbines. Aeroderivative engines also allow startup under settle-out pressure conditions, with no need to depressurize the compressor as is common for single-shaft drivers.

• High efficiency results in a greener train with a significant reduction in greenhouse gas emissions.

• Several LNG projects are gas constrained due to a lack of available supplies. This situation occurs both on potential new projects and at existing LNG facilities. Under such constraints, any fuel saved through higher gas turbine thermal efficiency can be converted to LNG.

• Gas supplies are also constrained due to greater national oil company control of the sources. Gas supplies are no longer available at low cost to LNG plants, and the notion that fuel is free is now a thing of the past.



Figure 3. Simplified Process Flow Diagram of the Optimized Cascade Process


Several current projects and front-end engineering design (FEED) studies have encountered fuel valued much higher than a decade ago. Host governments also are requiring more gas for domestic use, increasing the shortfalls for LNG plants.

Given this situation and the fact that fuel not consumed can be converted to LNG, use of high-efficiency aeroderivative engines delivers significant benefits with a net present value (NPV) of hundreds of millions of dollars. Because NPV is a strong function of feed gas costs and LNG sales price, it is highly affected by a plant’s thermal efficiency, especially when the free on board (FOB) LNG costs are high, as in the current market.

The present value of converting fuel into LNG for a nominal 5.0 MTPA plant is shown in Figure 4 for a range of driver efficiencies between 33% and 50%, as compared to a base case of 30%. Results are provided for FOB LNG prices ranging from $1 to $5 per million British thermal units (MMBtu). The present value of the gross margin (defined as LNG revenue minus feed gas cost) is calculated over a 20-year life at a discount rate of 12%. The graph shows the strong influence of driver efficiency.
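A minimal sketch of the arithmetic behind such a figure is shown below. The feed gas cost, discount rate, and project life follow the assumptions noted with Figure 4, while the total refrigeration driver power (roughly scaled from the six drivers at the 3.7 MTPA Darwin plant to a nominal 5.0 MTPA train) and the operating hours are illustrative assumptions; the availability and capital-cost adjustments included in Figure 4 are ignored here.

```python
def pv_of_fuel_savings(p_drivers_mw, eff_base, eff_new, lng_price_per_mmbtu,
                       hours_per_year=8400.0, discount_rate=0.12, years=20):
    """Present value (US$) of converting driver fuel savings into LNG sales.

    With a fixed feed gas flow, fuel not burned in the refrigeration drivers is
    available as additional LNG, so the incremental gross margin is roughly the
    saved fuel energy times the FOB LNG price.  Shaft energy is converted to
    fuel energy via the driver efficiency (1 MWh = 3.412 MMBtu).
    """
    mmbtu_per_mwh = 3.412
    fuel_base = p_drivers_mw * hours_per_year * mmbtu_per_mwh / eff_base
    fuel_new = p_drivers_mw * hours_per_year * mmbtu_per_mwh / eff_new
    annual_margin = (fuel_base - fuel_new) * lng_price_per_mmbtu
    annuity = (1.0 - (1.0 + discount_rate) ** -years) / discount_rate
    return annual_margin * annuity


# Illustrative assumptions (not from the paper): ~250 MW of refrigeration driver
# power for a nominal 5.0 MTPA train, 8,400 operating hours per year; efficiency
# cases and the LNG price axis follow Figure 4.
for lng_price in (1.0, 3.0, 5.0):
    pv = pv_of_fuel_savings(250.0, 0.30, 0.40, lng_price)
    print(f"LNG at ${lng_price:.2f}/MMBtu: PV of 30% -> 40% efficiency step ~ ${pv/1e6:.0f} million")
```

With these inputs, the 30% to 40% efficiency step is worth on the order of a hundred million dollars or more at LNG prices of a few dollars per MMBtu, which is why driver efficiency dominates the comparison.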

The thermal efficiency of an LNG facility depends on numerous factors such as gas composition, inlet pressure and temperature, and even more obscure factors such as the location of the loading dock relative to the site of the liquefaction process. Higher thermal efficiency is typically a tradeoff between capital and life cycle costs. Gas turbine selection, the use of waste heat recovery and ship vapor recovery, and self-generation versus purchased power all have a significant effect on the overall thermal efficiency of the process. Process flexibility and stability of operation are of paramount importance and must be incorporated into the considerations regarding thermal efficiency because the value of a highly efficient process is diminished if plant reliability and availability are sacrificed.

Yates [3] has provided a detailed treatment of the design life cycle and environmental factors that affect plant thermal efficiency, such as feed gas characteristics, feed gas conditioning, and the LNG liquefaction cycle itself. Some of the key elements of this discussion are provided below, leading into the discussion of the selection of high-efficiency aeroderivative engines.

A common consideration in evaluating competing LNG technologies is the difference in thermal efficiency. The evaluation of thermal efficiency tends to be elusive and subjective in that each project introduces its own unique characteristics that determine its optimum thermal efficiency based on the project’s strongest economic and environmental merits. Different technologies or plant designs cannot be compared on thermal efficiency without understanding and compensating for such unique differences of each project.


Figure 4. Present Value of Gross Margin as a Function of Driver Thermal Efficiency, for a Range of LNG FOB Prices (value of converting fuel savings into LNG for a 5.0 MTPA LNG plant; power cycle efficiency cases of 33%, 37%, 40%, and 50% versus a base power cycle efficiency of 30%; fixed feed gas flow with gas cost = $0.75/MMBtu; present value calculated at a 12% discount rate over a 20-year life; availability adjusted for aeroderivatives (+1%) and combined cycle (–2%); capital cost adjusted for incremental capacity (SC: $150/tonne, CC: $300/tonne))


The definition of thermal efficiency also has proven to be subjective depending on whether an entire plant, an isolated system, or an item of equipment is being compared. Thermal efficiency, or train efficiency, has been expressed as the ratio of the total higher heating value (HHV) of the products to the total HHV of the feed. This definition fails to recognize the other forms of thermodynamic work or energy consumed by the process. For example, the definition would not account for the work of purchased power and electric motors if they are used for refrigeration and flashed gas compression. When evaluating the benefits of achieving high thermal efficiency with a specific LNG plant design, a true accounting of all of the energy being consumed in the process must be considered.
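Expressed symbolically, this train efficiency definition is

\[
\eta_{\mathrm{train}} \;=\; \frac{\sum \mathrm{HHV}_{\mathrm{products}}}{\sum \mathrm{HHV}_{\mathrm{feed}}}\,,
\]

and one way to capture the point made above is to add any other energy inputs, such as purchased power for refrigeration or flash gas compression, to the denominator in fuel-equivalent terms so that all energy consumed by the process is accounted for.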

Turndown capabilities of an LNG process also need to be considered when thermal efficiency and life-cycle cost comparisons are being made. Thermal efficiency comparisons are typically based on the process operating at design conditions. In an actual plant environment, this design point is elusive, and an operator is always trying to attain a “sweet spot” where the plant will operate at its peak performance under prevailing conditions. As the temperature changes during the day, affecting the performance of the air coolers, turbines, or process fluid and equipment, the operator needs to continually adjust plant parameters to achieve optimal performance. Designing a plant to allow an operator to continually achieve this optimum performance will affect the plant’s overall thermal efficiency and life cycle costs.

The efficiency of an LNG process depends on many features. The two most significant ones are the efficiency of heat exchange and the turbomachinery efficiency. The heat exchange efficiency is a function of the process configuration and selection of the individual heat exchangers, which sets temperature approaches. The turbomachinery efficiency depends on the compressor and turbine efficiencies.

Cooling Curve PerformanceThe liquefaction cooling curve3 performance is another benchmark reviewed in LNG technology comparisons and is often misunderstood or incorrectly applied. Recent analyses by Ransbarger et al. [4] have comprehensively evaluated the issue of cooling curve performance with respect to overall thermal efficiency.

A liquefaction cooling curve plot depicts the temperature change of the heat sink and the heat source as a function of the heat transferred. Frequently, cooling curves are shown with only the feed gas as a heat source and then used as a means to compare different liquefaction processes. Cooling curves should include all duty that is transferred at a given temperature, which includes cooling and condensing of the refrigerants as well as the feed gas. The composite cooling curve analysis seeks to optimize the area or temperature difference between the heat source and the heat sink in a cost-effective manner. Each of the available liquefaction processes attempts to optimize this temperature difference in a different way.

Very often, process efficiencies of LNG technologies have been compared with the classical cascade process. It is important to note that the ConocoPhillips Optimized Cascade Process encompasses two major modifications:

• The addition and optimization of heat recovery schemes

• Where appropriate, the conversion of the traditional closed-loop methane refrigeration system to an open-loop system

The plate fin heat exchangers used in this process are also recognized for their ability to achieve an exceptionally close temperature approach. The use of pure refrigerants allows continually accurate prediction of refrigerant performance during plant operation without the need for on-line refrigerant monitoring. Therefore, for a given feed gas composition range, the cascade liquefaction technology provides the plant designer with flexibility in cooling stage locations, heat exchanger area, and operating pressure ranges during each stage, resulting in a process that can achieve high thermal efficiency under a wide range of feed conditions.

When using cooling curves, incorrect conclusions can be drawn if only the feed gas is used as a heat source. It is imperative that heat transfer associated with cooling and condensing refrigerants also be included4, so that a "complete cooling curve" can be derived. Complete cooling curves of the classical and Optimized Cascade Process are depicted in Figure 5.


3 Also known as a temperature-duty curve.

4 The Optimized Cascade Process would include heat transfer associated with the propane refrigerant loads necessary to cool and condense ethylene, as well as the ethylene refrigeration loads necessary to cool and condense methane flash vapors.


The average temperature approach of the classical cascade is 16 °F (8.89 °C), while the average approach temperature of the Optimized Cascade is 12 °F (6.67 °C), i.e., a reduction of 25%, which represents a 10% to 15% reduction in power.

The liquefaction processes have matured to the point at which further changes to the cooling (duty) curve no longer offer the greatest potential improvement. Two developments that have a significant impact on efficiency are the improvement in liquefaction compressor efficiency5 and the availability of high-efficiency gas turbine drivers.

A comparison of LNG technologies at a single design condition does not address plant performance during variations in operating conditions. A two-shaft gas turbine such as the PGT25+ used at Darwin, with its ability to control compressor performance without the need for recycle, can deliver significant improvements in thermal efficiency during turndown operations. Due to significant production swings during the day as a result of changes in ambient temperature, described earlier, the performance of the gas turbine and compressor package needs to be considered in any comparison of plant thermal efficiency.

SELECTION OF AERODERIVATIVE ENGINES

The earlier discussion demonstrated that the selection of the gas turbine plays an important role in efficiency, greenhouse gas emissions, and flexibility under various operating conditions. The gas turbine selection for Darwin LNG was based on the economic merits that the turbine would deliver for the overall life cycle cost of the project.

When high fuel costs are expected, the selection of a high-efficiency driver becomes a strong criterion in the life cycle cost evaluation. Historically, however, LNG projects were developed to monetize stranded gas reserves, and the resulting low-cost fuel favored industrial gas turbines. This situation is changing, and the value of gas is growing. Further, when the gas supply is pipeline or otherwise constrained, there is a clear benefit to consuming less fuel for a given amount of refrigeration power. In such cases, a high-efficiency gas turbine solution through which the saved fuel can be converted into LNG production can reap large benefits.

Figure 6 shows that aeroderivative gas turbines achieve significantly higher thermal efficiencies than industrial gas turbines. The figure illustrates the engines' thermal efficiency vs. specific work (kW per unit air mass flow).

Figure 5. Comparison of Cooling Curves for Classical Cascade Process and ConocoPhillips Optimized Cascade Process (complete cooling curves of temperature, °F, versus enthalpy change for the heat source, heat sink, and average; average approach = 16 °F for the classical cascade and 12 °F for the Optimized Cascade)

5 Compressor polytropic efficiencies now exceed 80% and high-efficiency gas turbines are available with simple-cycle thermal efficiencies of approximately 40%.

6 Based on Frame 5C, 5D, 7EA, and 9E frame type drivers and GE PGT25+, LM6000, RR 6761, and RR Trent aeroderivative units.

Figure 6. Map of ISO Thermal Efficiency vs. Specific Work of Commonly Used Frame Drivers and Aeroderivative Engines (thermal efficiency, %, versus GT-specific work, kW/kg/sec; aeroderivative engines exhibit higher specific work and thermal efficiency)


The higher efficiency of an aeroderivative gas turbine can result in a 3% or greater increase in overall plant thermal efficiency. Further, plant availability significantly improves because a gas generator (or even a complete turbine) can be completely changed out within 48 hours, compared to the 14 or more days required for a major overhaul of a heavy-duty gas turbine.

The GE PGT25+ aeroderivative gas turbine is used as the refrigerant compressor driver at Darwin. The PGT25+ is comparable in power output to the GE Frame 5D but has a significantly higher thermal efficiency of 41.1%. This improvement in thermal efficiency results directly in a reduction of specific fuel required per unit of LNG production. This reduction in fuel consumption in turn results in decreased CO2 emissions, as depicted in Figure 7, which shows relative CO2 emissions for various drivers.
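Using the efficiencies quoted above, the first-order fuel (and CO2) intensity ratio per unit of shaft power is

\[
\frac{\text{fuel}_{\mathrm{PGT25+}}}{\text{fuel}_{\mathrm{Frame\ 5D}}} \;\approx\; \frac{\eta_{\mathrm{Frame\ 5D}}}{\eta_{\mathrm{PGT25+}}} \;=\; \frac{0.29}{0.411} \;\approx\; 0.71\,,
\]

i.e., roughly 29% less fuel burned, and correspondingly less CO2 emitted, per unit of refrigeration power, before crediting waste heat recovery.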

A similar beneficial greenhouse gas reduction comes from the use of waste heat recovery on the PGT25+ turbine exhaust used for various heating requirements within the plant. The use of this heat recovery eliminates greenhouse gas emissions that would have been released had gas-fired equipment been used. The result is an approximately 9.3% reduction in total greenhouse gases.

Advantages of Aeroderivative Engines over Heavy-Duty Gas Turbines
Several advantages of using aeroderivative engines, some of which have been discussed, include:

• Much higher efficiency that leads to reduced fuel consumption and greenhouse emissions

• Ability to rapidly swap engines and modules, thus improving maintenance flexibility

• High starting torque capacity; excellent torque-speed characteristics, allowing large trains to start up under settle-out pressure conditions

• Essentially zero-timed after 6 years; maintenance can also be done “on condition,” allowing additional flexibility

• Dry-low-emissions (DLE) technology, available and proven on several engines

• Relatively easy installation due to low engine weight

Implementation of the PGT25+ in the Darwin Plant − Gas Turbine and Compressor Configurations
The Darwin LNG compressor configuration encompasses the hallmark two-in-one design of the Optimized Cascade Process, with a total of six refrigeration compressors configured as shown below in a 2+2+2 configuration. All of the turbomachinery was supplied by GE Oil & Gas (Nuovo Pignone).

Propane: 2 X PGT25+ + GB + 3MCL1405
Ethylene: 2 X PGT25+ + GB + 2MCL1006
Methane: 2 X PGT25+ + MCL806 + MCL806 + BCL608

Both the propane and ethylene trains have speed reduction gearboxes. All compressors are horizontally split except for the last casing of the methane string, which is a barrel design. The gas turbines and compressors are mezzanine mounted as shown in Figure 8, which facilitates a down-nozzle configuration for the compressors. A view of the six strings from the gas turbine inlet side is shown in Figure 9. The four once-through steam generators are on the four turbine exhausts to the left. The LM2500+ gas generator is shown in Figure 10.

AERODERIVATIVE ENGINE TECHNOLOGY FOR DARWIN LNG

The PGT25+ engine used at the Darwin plant has a long heritage, starting from the TF39 GE aeroengine, as shown in Figure 11. This highly successful aeroengine resulted in the industrial LM2500 engine, which was then upgraded to the LM2500+. The PGT25+ is essentially the LM2500+ gas generator coupled to a 6,100 revolution-per-minute (rpm) high-speed power turbine (HSPT). The latest variant of this engine is the G4, rated at 34 MW.

The first LM2500+ design, based on the successful history of the LM2500 gas turbine, was completed in December 1996.


Figure 7. Relative CO2 Emissions from Different Classes of Gas Turbines (GE Frame 5C, Frame 5D, Frame 6B, and Frame 7EA; GE LM2500+ and LM6000PD; Rolls-Royce 6761; and Trent 60 DLE)


The LM2500+ was originally rated at 27.6 MW, with a nominal 37.5% ISO thermal efficiency. Since that time, its ratings have grown to the current 31.3 MW and 41% thermal efficiency.

The LM2500+ has a revised and upgraded compressor section with an added zero stage for a 23% increased airflow and pressure ratio, and revised materials and design in the high-pressure and power turbines. Details can be found in Wadia et al. [5]

Description of the PGT25+ Gas Turbine
The PGT25+ consists of the following components:

• Axial flow compressor — The compressor is a 17-stage axial-flow design with variable-geometry compressor inlet guide vanes that direct air at the optimum flow angle, and variable stator vanes to ensure ease of starting and smooth, efficient operation over the entire engine operating range. The axial flow compressor operates at a pressure ratio of 23:1 and has a transonic blisk as the zero stage7. As reported by Wadia et al. [5], the airflow rate is 84.5 kg/sec at a gas generator speed of 9,586 rpm. The axial compressor has a polytropic efficiency of 91%.

• Annular combustor — The engine is provided with a single annular combustor (SAC) with coated combustor dome and liner similar to those used in flight applications. The SAC features a through-flow, venturi swirler to provide a uniform exit temperature profile and distribution.

Figure 8. Compressor Trains at Darwin LNG Plant

Figure 9. Compressor Block Viewed from Gas Turbine Filter House End (Note Four Once-Through Steam Generators on Gas Turbines)

Figure 11. LM2500 Engine Evolution (Source: GE Energy). Derived from the TF39/CF6-6 aeroengines (C-5 and DC-10 aircraft): LM2500/PGT25, 23 MW / 32,000 SHP, 38% thermal efficiency; LM2500+/PGT25+, 31.3 MW / 42,000 SHP, 39%–41%; LM2500+G4/PGT25+G4, 34.3 MW / 46,000 SHP, 39%–41%.

Figure 10. Installation of LM2500+ Gas Generator

7 The zero stage operates at a stage pressure ratio of 1.43:1 and an inlet tip relative Mach number of 1.19.


This combustor configuration features individually replaceable fuel nozzles, a full-machined-ring liner for long life, and an yttrium-stabilized zirconium thermal barrier coating to improve hot corrosion resistance. The engine is equipped with water injection for NOx control.

• High-pressure turbine (HPT) — The PGT25+ HPT is a high-efficiency air-cooled, two-stage design. The HPT section consists of the rotor and the first- and second-stage HPT nozzle assemblies. The HPT nozzles direct the hot gas from the combustor onto the turbine blades at the optimum angle and velocity. The high-pressure turbine extracts energy from the gas stream to drive the axial flow compressor to which it is mechanically coupled.

• High-speed power turbine — The PGT25+ gas generator is aerodynamically coupled to a high-efficiency HSPT with a cantilever-supported two-stage rotor design. The power turbine is attached to the gas generator by a transition duct that also serves to direct the exhaust gases from the gas generator into the stage-one turbine nozzles. Output power is transmitted to the load by means of a coupling adapter on the aft end of the power turbine rotor shaft. The HSPT operates at a speed of 6,100 rpm with an operating speed range of 3,050 to 6,400 rpm. The high-speed two-stage power turbine can be operated over a cubic load curve for mechanical drive applications.

• Engine-mounted accessory gearbox driven by a radial drive shaft — The PGT25+ has an engine-mounted accessory drive gearbox for starting the unit and supplying power for critical accessories. Power is extracted through a radial drive shaft at the forward end of the compressor. Drive pads are provided for accessories, including the lube and scavenge pump, starter, and variable-geometry control. An overview of the engine, including the HSPT, is shown in Figure 12.

Maintenance
A critical factor in any LNG operation is the life-cycle cost, influenced in part by the maintenance cycle and engine availability. Aeroderivative engines have several features that facilitate "on condition" maintenance, rather than strict time-based maintenance. Numerous boroscope ports allow on-station, internal inspections to determine the condition of internal components, thereby increasing the interval between scheduled, periodic removal of engines. When the condition of the internal components of the affected module has deteriorated to such an extent that continued operation is not practical, the maintenance program calls for exchange of that module.

The PGT25+ is designed to allow for rapid on-site exchange of major modules within the gas turbine. Component removal and replacement can be accomplished in less than 100 hours, and the complete gas generator unit can be replaced and be back online within 48 hours. The hot-section repair interval for the aeroderivative is 25,000 hours on natural gas; however, water injection for NOx control shortens this interval to between 16,000 hours and 20,000 hours, depending on the NOx target level8.

Performance Deterioration and Recovery
Gas turbine performance deterioration is of great concern to any LNG operation (see [6, 7, and 8]). Total performance loss is attributable to a combination of recoverable (by washing) and non-recoverable (recoverable only by component replacement or repair) losses. Recoverable performance loss is caused by airborne contaminant fouling of airfoil surfaces. The magnitude of recoverable performance loss and the frequency of washing are determined by site environment and operational profile. Generally, compressor fouling is the predominant cause of this type of loss. Periodic washing of the gas turbine, using online and crank-soak wash procedures, will recover 98% to 100% of these losses. The objective of online washing is to increase the time interval between crank washes. The best approach is to couple online and offline washing.

The cooldown time for an aeroderivative engine is much less than that for a heavy-duty frame machine due to the lower casing mass. Crank washes can therefore be done with less downtime.

Figure 12. PGT25+ Gas Turbine (Source: GE Energy)


8 The level of water injection is a function of the NOx target level.


Upgrades of the PGT25+
Another advantage of using aeroderivative engines for LNG service is that they can be uprated to newer variants, generally within the same space constraints—a useful feature for future debottlenecking. The Darwin LNG plant is implementing this upgrade.

The LM2500+G4 is the newest member of GE’s LM2500 family of aeroderivative engines. The engine, shown in Figure 13, retains the basic design of the LM2500+ but increases the power capability by approximately 10% without sacrificing hot-section life. The modification increases the engine’s power capability by increasing the airflow, improving the materials, and increasing the internal cooling. The number of compressor and turbine stages and the majority of the airfoils and the combustor designs remain unchanged from the LM2500. Details on the LM2500+G4 can be found in [9].

The increased power of this variant compared to the base engine is shown in Figure 14.

POWER AUGMENTATION BY EVAPORATIVE COOLING

LNG production is highly dependent on the power capability of the gas turbine drivers of the propane, ethylene, and methane compressors. Industrial gas turbines lose approximately 0.7% of their power for every 1 °C rise in ambient temperature. This effect is more pronounced in aeroderivative gas turbines because of their higher specific work; their sensitivity can exceed 1% per °C. The impact of ambient temperature on PGT25+ power and airflow is depicted in Figure 15.
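To make these sensitivity figures concrete, the short Python sketch below applies a simple linear derating to an assumed 32 MW rating at a 15 °C reference temperature, using the 0.7% per °C value quoted above for an industrial frame and an assumed 1.2% per °C for an aeroderivative; the rating, reference temperature, and the aeroderivative coefficient are illustrative assumptions, not vendor performance data.

# Illustrative sketch only: linear derating of gas turbine shaft power with ambient
# temperature. The rating, reference temperature, and the aeroderivative sensitivity
# are assumptions for illustration, not vendor data.

def derated_power_kw(rated_kw, t_amb_c, t_ref_c=15.0, loss_per_deg_c=0.007):
    """Approximate available power assuming a linear loss (fraction of rating per degC)."""
    return rated_kw * (1.0 - loss_per_deg_c * (t_amb_c - t_ref_c))

if __name__ == "__main__":
    for t_amb in (15.0, 25.0, 35.0):
        frame = derated_power_kw(32_000, t_amb, loss_per_deg_c=0.007)  # industrial frame, ~0.7%/degC
        aero = derated_power_kw(32_000, t_amb, loss_per_deg_c=0.012)   # aeroderivative, assumed ~1.2%/degC
        print(f"{t_amb:4.1f} degC  frame: {frame:7.0f} kW   aero: {aero:7.0f} kW")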

Because aeroderivative machines are more sensitive to ambient temperature, they benefit significantly from inlet air cooling. Darwin LNG uses media-type evaporative coolers—another first for LNG refrigeration drivers. Details on media-based evaporative cooling can be found in Johnson [10].

Figure 13. Uprated LM2500+G4 Engine — DLE Variant (Source: GE Energy)

Figure 14. Comparative Power Output of LM2500+G4 Variant (shaft power output versus ambient temperature for the LM2500, LM2500+, and LM2500+G4 SAC engines) (Source: GE Energy)

Figure 15. Variations in Power Output and Airflow Rate for PGT25+ Gas Turbine (power, kW, and air mass flow rate, kg/sec, versus ambient temperature, °C)

Power augmentation offers two key advantages:

• Greater LNG production, because reducing the gas turbine compressor inlet air temperature increases the air mass flow rate and power

• Improved thermal efficiency of the gas turbine, resulting in lower CO2 emissions

There is considerable evaporative cooling potential in Darwin, especially during periods of high ambient temperature, because the relative humidity tends to drop as the temperature rises. The average daily temperature profile at Darwin is shown in Figure 16, and the relationship of relative humidity to dry bulb temperature for the month of September is shown in Figure 17.⁹ Details regarding the climatic analysis of evaporative cooling potential can be found in [11].

Media-based evaporative coolers use corrugated media over which water is passed. The media material is placed in the gas turbine airflow path within the air filter house and is wetted via water distribution headers. The construction of the media allows water to penetrate through it, and any non-evaporated water returns to a catch basin. The material also provides sufficient airflow channels for efficient heat transfer and minimal pressure drop. As the gas turbine airflow passes over the media, the airstream absorbs moisture (evaporated water); heat in the airstream is given up to the wetted media, resulting in a lower compressor inlet temperature. Typical evaporative cooler effectiveness is 85% to 90% and is defined as follows:


9 Data is for Darwin Airport, from the typical meteorological year (TMY2) database.

Figure 16. Darwin Temperature Profile Based on Time of Day over 12-Month Period

Figure 17. RH vs. DBT at Darwin Airport for the Month of September (Considerable Evaporative Cooling Potential Is Available During Hot Daytime Hours)


Effectiveness = (T1DB – T2DB) / (T1DB – T2WB)

where:

T1DB = entering-air dry bulb temperature
T2DB = leaving-air dry bulb temperature
T2WB = leaving-air wet bulb temperature

Effectiveness is a measure of how closely the evaporative cooler lowers the inlet-air dry bulb temperature to the coincident wet bulb temperature. Drift eliminators are used to protect the downstream inlet system components from water damage caused by carryover of large water droplets.

The presence of a media-type evaporative cooler inherently creates a pressure drop, which reduces turbine output. For most gas turbines, a media thickness of 12 inches results in a pressure drop of approximately 0.5 in. to 1 in. of water. Increases in inlet duct differential pressure cause a reduction in compressor mass flow and engine operating pressure. However, the large inlet temperature drop derived from evaporative cooling more than compensates for the small performance penalty due to the additional pressure drop.

Inlet temperature drops of approximately 10 °C have been achieved at Darwin LNG, resulting in a power boost of approximately 8% to 10%. Figure 18 shows calculated compressor inlet temperatures (CITs) with the evaporative cooler for a typical summer month of January.
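The effectiveness relationship and the reported 8% to 10% benefit can be tied together with a rough calculation; the sketch below is a minimal illustration assuming a 38 °C dry bulb, a 28 °C wet bulb, 90% effectiveness, and roughly 1% power recovery per °C of inlet cooling. These numbers are assumptions, not Darwin LNG design data.

# Illustrative sketch only: evaporative cooler outlet temperature and an approximate
# power benefit. The dry bulb/wet bulb pair, effectiveness, and power sensitivity
# are assumed values, not Darwin LNG design data.

def cooled_inlet_temp_c(t_db_c, t_wb_c, effectiveness=0.90):
    """Leaving-air dry bulb from Effectiveness = (T1DB - T2DB) / (T1DB - T2WB),
    taking the leaving-air wet bulb as approximately the entering-air wet bulb."""
    return t_db_c - effectiveness * (t_db_c - t_wb_c)

if __name__ == "__main__":
    t_db, t_wb = 38.0, 28.0                 # assumed hot, relatively dry afternoon
    cit = cooled_inlet_temp_c(t_db, t_wb)
    delta_t = t_db - cit
    power_gain_pct = 1.0 * delta_t          # assumed ~1% power per degC of cooling
    print(f"CIT with cooler: {cit:.1f} degC (drop of {delta_t:.1f} degC, ~{power_gain_pct:.0f}% more power)")

With these assumed conditions, the inlet temperature drop is about 9 °C, which is of the same order as the 10 °C drop and 8% to 10% power boost reported above.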

FUTURE POTENTIAL OF AERODERIVATIVE ENGINES USING THE OPTIMIZED CASCADE PROCESS

Several factors must be considered in choosing an optimal train size, including:

• Gas availability from the field
• Market demand and LNG growth profile (which would also dictate the buildup and timing between subsequent trains)
• Overall optimization of production, storage, and shipping logistics
• Operational flexibility, reliability, and maintenance of the refrigeration block (Flexibility is of extreme importance in today's operational market environment, which has seen some departure from long-term LNG supply contracts.)

Because the Optimized Cascade Process uses a two-train-in-one concept, in which two parallel compressor strings are used for each refrigeration service, the application of larger aeroderivative engines is an ideal fit. With the Optimized Cascade Process, the loss of any refrigeration string does not shut down the train but only necessitates a reduction in plant feed, with overall LNG production remaining between 60% and 70% of full capacity.¹⁰

The significant benefits of aeroderivative engines compared to large single-shaft gas turbines make large aeroderivative units an attractive proposition for high-efficiency, high-output LNG plants. Larger LNG plant sizes can be achieved by adding gas turbines, as shown in Table 1.

Figure 18. Calculated CITs due to Evaporative Cooling During the Summer Month of January (DBT and CIT with evaporative cooler; media evaporative cooler effectiveness = 90%; TMY2 data)


10 Obtained by shifting refrigerant loads to the other drivers.



While overall output with one driver down in a 2+2+2 configuration is approximately 60% to 70% of full capacity, it would be even higher with a larger number of drivers.
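A simple bookkeeping sketch shows why additional drivers raise the output available after a single trip. It assumes production scales with the remaining parallel strings in the affected service, and therefore understates the 60% to 70% achieved in practice for the 2+2+2 case, where refrigerant loads are also shifted to the other drivers.

# Illustrative sketch only: rough production fraction with one refrigeration driver
# down, assuming output tracks the remaining parallel strings in the affected
# service. Real plants do better (e.g., 60%-70% for two strings per service)
# because refrigerant loads are shifted to the other drivers.

def production_fraction_one_driver_down(strings_per_service):
    """Lower-bound fraction of full production if one of the parallel strings
    in a single refrigeration service is lost."""
    return (strings_per_service - 1) / strings_per_service

if __name__ == "__main__":
    for n in (2, 3):
        frac = production_fraction_one_driver_down(n)
        print(f"{n} strings per service: at least ~{frac:.0%} of full production")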

As split-shaft industrial gas turbines are not available in the power class of large aeroderivative gas turbines, the application of aeroderivative engines offers the significant advantage of not requiring costly and complex large starter motors and their associated power generation costs.

For example, the LM6000 depicted in Figure 19 is a 44 MW driver¹¹ with a thermal efficiency of 42%, operating at a pressure ratio of 30:1 and with an exhaust mass flow rate of 124 kg/sec. This engine is a two-spool gas turbine with the load driven by the low-speed spool, which is mounted inside the high-speed spool, enabling the two spools to turn at different speeds. The output speed of this machine is 3,400 rpm.

The LM6000 gas turbine makes extensive use of variable geometry to achieve a large operating envelope. The variable geometry includes the variable inlet guide vanes, variable bypass valves, and variable stator vanes in the engine compressor, with each system independently controlled. The gas turbine consists of five major components: a 5-stage low-pressure compressor, a 14-stage high-pressure compressor, an annular combustor, a 2-stage high-pressure turbine, and a 5-stage low-pressure turbine. The low-pressure turbine drives the low-pressure compressor and the load. The engine is available in both water-injected and DLE configurations, with a DLE capability of 15 parts per million (ppm) NOx.

The importance of high thermal efficiency and the details on the implementation and operating experience of aeroderivatives at Darwin LNG have been presented by Meher-Homji et al. [12]

CONCLUSIONS

In 1969, the ConocoPhillips-designed Kenai LNG plant in Alaska became the first LNG plant to use gas turbines as refrigeration drivers. This plant has operated without a single missed shipment. Another groundbreaking step was taken 38 years later with the world's first successful application of high-efficiency aeroderivative gas turbines at the Darwin LNG plant. This efficient plant has shown how technology can be integrated into a reliable LNG process to minimize greenhouse gases and provide the high flexibility, availability, and efficiency of the Optimized Cascade Process. The plant, engineered and constructed by Bechtel, started up several months ahead of schedule and has exceeded its production targets. It has operated successfully for close to 3 years and will shortly be upgraded with PGT25+G4 engines as part of a debottlenecking effort.

The new generation of highly efficient, high-power aeroderivative engines in the 40 MW to 50 MW range available today is ideally suited to the Optimized Cascade Process because of its two-trains-in-one concept. The ConocoPhillips-Bechtel LNG collaboration will offer these engines for future LNG projects. In the meantime, the ConocoPhillips-Bechtel LNG Product Development Center continues to design and develop new, highly efficient plant designs that can be used for 5.0–8.0 MTPA train sizes.

11 For comparison of power-to-weight ratio, the LM6000 core engine weighs 7.2 tons, compared to 67 tons for a 32 MW Frame 5D engine (core engine only).

Figure 19. LM6000 Gas Turbine (Source: GE Energy)

Table 1. Configuration/Size of LNG Plants Using Aeroderivative Engines

Aeroderivative Engines   | Configuration (Propane/Ethylene/Methane) | Approximate Train Size, MTPA
6 x LM2500+              | 2/2/2                                     | 3.5
8 x LM2500+G4 DLE        | 3/3/2                                     | 5
6 x LM6000 DLE           | 2/2/2                                     | 5
9 x LM6000 DLE           | 3/3/3                                     | 7.5



TRADEMARKS

ConocoPhillips Optimized Cascade is a registered trademark of ConocoPhillips.

REFERENCES

[1] D. Yates and C. Schuppert, “The Darwin LNG Project,” 14th International Conference and Exhibition on Liquefied Natural Gas (LNG 14), Doha, Qatar, March 21–24, 2004 <http://www.lng14.com.qa/lng14.nsf/attach/$file/PS6-1.ppt>.

[2] D. Yates and D. Lundeen, “The Darwin LNG Project,” LNG Journal, 2005.

[3] D. Yates, “Thermal Efficiency – Design, Lifecycle, and Environmental Considerations in LNG Plant Design,” GASTECH, 2002 <http://lnglicensing.conocophillips.com/NR/rdonlyres/8467A499-F292-48F8-9745-1F7AC1C57CAB/0/thermal.pdf>.

[4] W. Ransbarger et al., “The Impact of Process and Equipment Selection on LNG Plant Efficiency,” LNG Journal, April 2007.

[5] A.R. Wadia, D.P. Wolf, and F.G. Haaser, “Aerodynamic Design and Testing of an Axial Flow Compressor with Pressure Ratio of 23.3:1 for the LM2500+ Engine,” ASME Transactions, Journal of Turbomachinery, Vol. 124, Issue 3, July 2002, pp. 331−340, access via <http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JOTUEI000124000003000331000001&idtype=cvips&gifs=yes>.

[6] C.B. Meher-Homji, M. Chaker, and H. Motiwalla, “Gas Turbine Performance Deterioration,” Proceedings of the 30th Turbomachinery Symposium, Houston, Texas, September 17−20, 2001.

[7] C.B. Meher-Homji and A. Bromley, “Gas Turbine Axial Compressor Fouling and Washing,” Proceedings of the 33rd Turbomachinery Symposium, Houston, Texas, September 20−23, 2004, pp. 163-192 <http://turbolab.tamu.edu/pubs/Turbo33/T33pg163.pdf>.

[8] G.H. Badeer, “GE Aeroderivative Gas Turbines − Design and Operating Features,” GE Power Systems Reference Document GER-3695E, October 2000 <http://gepower.com/prod_serv/products/tech_docs/en/downloads/ger3695e.pdf>.

[9] G.H. Badeer, “GE’s LM2500+G4 Aeroderivative Gas Turbine for Marine and Industrial Applications,” GE Energy Reference Document GER-4250, September 2005 <http://gepower.com/prod_serv/products/tech_docs/en/downloads/ger4250.pdf>.

[10] R.S. Johnson, “The Theory and Operation of Evaporative Coolers for Industrial Gas Turbine Installations,” ASME International Gas Turbine and Aeroengine Congress and Exposition, Amsterdam, Netherlands, June 5−9, 1988, Paper No. 88-GT-41 <http://www.muellerenvironmental.com/documents/100-020-88-GT-41.pdf>.

[11] M. Chaker and C.B. Meher-Homji, “Inlet Fogging of Gas Turbine Engines: Climatic Analysis of Gas Turbine Evaporative Cooling Potential of International Locations,” Journal of Engineering for Gas Turbines and Power, Vol. 128, No. 4, October 2006, pp. 815–825, access via <http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JETPEZ000128000004000815000001&idtype=cvips&gifs=yes> (see also Proceedings of ASME Turbo Expo 2002, Amsterdam, the Netherlands, June 3–6, 2002, Paper 2002-GT-30559 <http://www.meefog.com/downloads/30559_International_Cooling.pdf>).

[12] C.B. Meher-Homji, D. Messersmith, T. Hattenbach, J. Rockwell, H. Weyermann, and K. Masani, “Aeroderivative Gas Turbines for LNG Liquefaction Plants − Part 1: The Importance of Thermal Efficiency” and “Part 2: World’s First Application and Operating Experience,” Proceedings of ASME International Gas Turbine and Aeroengine Conference, Turbo Expo 2008, Paper Nos. GT2008-50839 and GT2008-50840, Berlin, Germany, June 9−13, 2008 (see <http://www.asmeconferences.org/TE08//pdfs/TE08_FinalProgram.pdf>, p. 86).

BIOGRAPHIES

Cyrus B. Meher-Homji is a Bechtel Fellow and senior principal engineer assigned to the Houston, Texas-based Bechtel-ConocoPhillips LNG Product Development Center as a turbomachinery specialist. His 29 years of industry experience cover gas turbine and compressor design, engine development, and troubleshooting. Cyrus works on the selection, testing, and application of gas turbines and compressors for LNG plants. His areas of interest are turbine and compressor aerothermal analysis, gas turbine deterioration, and condition monitoring.

Cyrus is a Fellow of ASME and past chair of the Industrial & Cogeneration Committee of ASME’s International Gas Turbine Institute. He also is a life member of the American Institute of Aeronautics and Astronautics (AIAA) and is on the Advisory Committee of the Turbomachinery Symposium. Cyrus has more than 80 publications in the area of turbomachinery engineering.

Cyrus has an MBA from the University of Houston, Texas, an ME from Texas A&M University, College Station, and a BS in Mechanical Engineering from Shivaji University, Maharashtra, India. He is a registered professional engineer in the state of Texas.

Tim Hattenbach has worked in the oil and gas industry for 36 years, 30 of which have been with Bechtel. He is the team leader of the Compressor group in Bechtel’s Houston office and has worked on many LNG projects and proposals as well as a variety of gas plant and refinery projects.

A modification of the original version of this paper was presented at LNG 15, held April 24–27, 2007, in Barcelona, Spain.


Tim is Bechtel’s voting representative on the American Petroleum Institute (API) Subcommittee on Mechanical Equipment and is a member of its Steering Committee. He is Taskforce Chairman of two API standards (API 616 – Gas Turbines and API 670 – Machinery Protection Systems).

Tim has an MS and a BS in Mechanical Engineering from the University of Houston, Texas.

Dave Messersmith is deputy manager of the LNG and Gas Center of Excellence, responsible for the LNG Technology Group and Services, for Bechtel’s Oil, Gas & Chemicals Global Business Unit, located in Houston, Texas. He has served in various lead roles on LNG projects for 14 of the past 17 years, including work on the Atlantic LNG project from conceptual design through startup as well as many other LNG studies, FEED studies, and projects. Dave’s experience includes various LNG and ethylene assignments over 17 years with Bechtel and, previously, 10 years with M.W. Kellogg, Inc.

Dave holds a BS degree in Chemical Engineering from Carnegie Mellon University, Pittsburgh, Pennsylvania, and is a registered professional engineer in the state of Texas.

Hans P. Weyermann is a principal rotating equipment engineer in the Drilling and Production Department of the ConocoPhillips Upstream Technology Group. He supports all aspects of turbomachinery for business units and grassroots capital projects and is also responsible for overseeing corporate rotating machinery technology development initiatives within the ConocoPhillips Upstream Technology Group.

Before joining ConocoPhillips, Hans was the supervisor of rotating equipment at Stone & Webster, Inc., in Houston, Texas. Earlier, he was an application/design engineer in the Turbo Compressor Department at Sulzer Escher Wyss Turbomachinery in Zurich, Switzerland.

Hans is a member of ASME, the Texas A&M University Turbomachinery Advisory Committee, and the API SOME, and, in addition, serves on several API task forces.

Hans has a BS degree in Mechanical Engineering from the College of Engineering in Brugg-Windisch, Switzerland.

Karl Masani is a director for LNG Licensing & Technology in the Global Gas division of ConocoPhillips, where he is responsible for LNG project business development and project supervision. Previously, he held various managerial positions at General Electric Company, Duke Energy Corporation, and Enron Corporation.

Karl holds an MBA in Finance from Rice University, Houston, Texas, and a BS degree in Aerospace Engineering from the University of Texas at Austin.

Satish Gandhi is LNG Product Development Center (PDC) director and manages the center for the ConocoPhillips-Bechtel Corporation LNG Collaboration. He is responsible for establishing the work direction for the PDC to implement strategies and priorities set by the LNG Collaboration Advisory Group.

Dr. Gandhi has more than 34 years of experience in technical computing and process design, as well as troubleshooting of process plants in general and LNG plants in particular. He was previously process director in the Process Technology & Engineering Department at Fluor Daniel with responsibilities for using state-of-the-art simulation software for the process design of gas processing, CNG, LNG, and refinery facilities. He also was manager of the dynamic simulation group at M.W. Kellogg, Ltd., responsible for technology development and management and implementation of dynamic simulation projects in support of LNG and other process engineering disciplines.

Dr. Gandhi received a PhD from the University of Houston, Texas; an MS from the Indian Institute of Technology, Kanpur, India; and a BS from Laxminarayan Institute of Technology, Nagpur, India, all in Chemical Engineering.


INNOVATION, SAFETY, AND RISK MITIGATION VIA SIMULATION TECHNOLOGIES

Ramachandra Tekumalla
[email protected]

Jaleel Valappil, PhD
[email protected]

Issue Date: December 2008

Abstract—Developments in hardware and software have made simulation technologies available to the design engineer. Engineering companies are adapting the latest computer-aided design (CAD) tools such as SmartPlant® 3D to improve work processes and the final design. However, many companies still rely on traditional design methods, especially in mitigating the safety and operational risks associated with a project’s design.

Bechtel is an industry leader in successfully applying advanced simulation technologies, such as computational fluid dynamics (CFD), finite element analysis (FEA), and life-cycle dynamic simulation, to identify and mitigate safety and operational risks associated with plant design and to improve the design over the project’s life cycle. Bechtel is leveraging the concept of life-cycle dynamic simulation to develop advanced applications, such as the operator training simulator (OTS) and advanced process control (APC), to enhance plant operations for clients. This paper presents case studies on the use of such simulation technologies and applications to improve the design and operation of liquefied natural gas (LNG) plants.

Keywords—APC, CFD, dynamic process modeling, FEA, life-cycle dynamic simulation, LNG, OTS, simulation-based design, simulation technologies, simulator

INTRODUCTION

Over the past 15 years, rapid developments in computer technology have led to significant advancement in simulation technologies. Published papers provide numerous examples of the application of simulation tools to enhance design and operations. For example, simulation tools have been used to enhance plant design [1], maximize plant profits through real-time process optimization [2], and analyze complex operational problems involving scheduling, sequencing, and material handling [3]. Simulation tools are especially useful for visualizing the development of a plant during design.

Bechtel routinely uses computer-aided design (CAD) tools such as SmartPlant® 3D to visualize how a physical plant design (e.g., piping, plot layout) will develop throughout the engineering phase. In fact, 3D model reviews are now considered necessary milestones during design. Evaluating a plant’s physical structure is much easier than identifying and analyzing the safety and operational risks associated with a plant’s design. The ability to conduct such an analysis once depended on a design engineer’s experience. However, today’s simulation tools allow design engineers at all experience levels to simulate a plant’s design and to analyze it for safety and operational risks. When performed early in the design phase, such an analysis allows design engineers to identify and avert potential problems that could occur during plant startup, when they would be expensive to resolve.

The case studies presented in this paper describe how the Bechtel Oil, Gas & Chemicals Global Business Unit (OG&C GBU) is applying three key simulation technologies—computational fluid dynamics (CFD), finite element analysis (FEA), and life-cycle dynamic simulation—to mitigate safety and operational risks during the design and commissioning of ConocoPhillips Optimized Cascade® Liquefied Natural Gas (LNG) Process plants. This paper also describes how OG&C is innovatively leveraging these technologies to provide added value to global clients.

CFD, FEA, and Life-Cycle Dynamic Simulation
CFD is a technique used for modeling the dynamics of fluid flows in processes and systems, such as those in an LNG plant. CFD enables design engineers to build a virtual model of a system or process so they can simulate what will happen when fluids and gases flow and interact with the complex surfaces used in engineering.



CFD software performs the millions of calculations required to simulate fluid flows and produces the data and images necessary to predict the performance of a system or process design. This technique makes it easier for engineers to identify, analyze, and solve design-related problems that involve fluid flows.

FEA is a technique for simulating and evaluating the strength and behavior of components, equipment, and structures under various loading conditions, such as applied forces, pressures, and temperatures. FEA enables design engineers to produce detailed visualizations of where structures bend or twist, to pinpoint the distribution of stresses and displacements, and to minimize weights, materials, and costs. Bechtel engineers also use FEA techniques to analyze scenarios related to design optimization and code compliance.

CFD and FEA can be used to determine how flow phenomena, heat and mass transfer, and various stresses affect process equipment. Studying these effects in a virtual environment enables engineers to identify and mitigate safety and operational risks associated with a design and to design plant equipment that can operate under a wide range of conditions.

Life-cycle dynamic simulation allows design engineers to build a dynamic model of an LNG plant that can evolve over the project’s life cycle. This dynamic model can then be used for a variety of purposes, such as evaluating safety and operating procedures, supporting startup, and training plant personnel.

The ConocoPhillips Optimized Cascade LNG Process
Over the past decade, Bechtel has built eight ConocoPhillips Optimized Cascade LNG trains of varying capacities. These projects were based on a lump-sum, turnkey approach whereby Bechtel, with assistance from ConocoPhillips, was responsible for commissioning, startup, and operation of the plants until the performance requirements were met. The case studies presented in this paper apply to the ConocoPhillips Optimized Cascade LNG Process. A brief description of the process follows.

As shown in the schematic in Figure 1, feed gas is first processed in the feed-treatment section of the plant. Diglycolamine, or a similar solvent, is typically used in the gas sweetening process to remove H2S and CO2. Next, the treated gas is fed to the molecular sieve dehydrators, where water vapor is removed, and is then processed through activated carbon beds to remove any remaining mercury. The treated gas is then fed to the liquefaction unit, where it is cooled in stages and condensed prior to entering the LNG tanks.

The liquefaction process includes three refrigeration circuits consisting of predominantly pure component refrigerants: propane, ethylene, and methane. Although not required, each refrigeration circuit typically uses parallel compressors (up to two or three per refrigerant service) combined with common process equipment. The feed gas passes through multiple stages of chilling in the propane, ethylene, and open-loop methane circuits. Each successive stage is at a lower temperature and pressure. The resulting LNG product is pumped to storage, where it is stored at near-atmospheric pressure and –161 ºC.

ABBREVIATIONS, ACRONYMS, AND TERMS

AISC American Institute of Steel Construction

APC advanced process control

ASCC Australian Safety and Compensation Council

ASME American Society of Mechanical Engineers

CAD computer-aided design

CFD computational fluid dynamics

DCS distributed control system

EPC engineering, procurement, and construction

FAT factory acceptance test

FEA finite element analysis

FEED front-end engineering design

FSI fluid-solid interaction

HYSYS® AspenTech computer software program

LNG liquefied natural gas

NGL natural gas liquid

OG&C GBU [Bechtel] Oil, Gas & Chemicals Global Business Unit

OPT optimization

OTS operator training simulator

PSV pressure safety valve

RT real-time

SIS safety instrumented system


CASE STUDIES IN THE USE OF CFD AND FEA

The following case studies describe how Bechtel’s design engineers used CFD and FEA to identify and mitigate safety and operational risks associated with the design of LNG plant equipment.

Case Study: Design of a Wet Ground Flare System
A gas flare is typically an elevated vertical stack or chimney found on oil wells or oil rigs, and in landfills, refineries, and chemical plants. Gas flares are used to burn off unwanted gas or flammable gas and liquids released by pressure relief valves during an unplanned over-pressuring of plant equipment. On one of Bechtel’s LNG plant construction projects, the local airport was situated close to the plant site, and the aircraft landing path was over the LNG plant. To ensure that landing aircraft would not be affected by the plant’s ground flare system, local authorities required Bechtel to ensure that the system’s design minimized the heat impact to the atmosphere. Bechtel was required to use a wet ground flare for material flows warmer than –29.5 ºC.

The Problem
To comply with the Australian Safety and Compensation Council (ASCC) National Standard for Control of Major Hazard Facilities, the design of the plant’s wet ground flare system required an assessment of the impact of radiant heat release on the surrounding area during an excess propane release, as well as an assessment of the effects of continuous exhaust from the gas turbines. The surroundings included buildings, terrain, and vegetation.

The assessment required Bechtel engineers to study individual flames from each burner, with nozzles on the order of 1 mm, and the effect of 180 burners on a large area and surrounding terrain, with length scales of several meters. This required managing the complexity of varying length scales. [4] Calculations spanned 750 m in the lengthwise direction of the flare position on the ground, 700 m in width, and 500 m in height.
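For a sense of scale, a single burner can be screened with a point-source radiation estimate of the type often used in hand calculations. The sketch below is explicitly not the multi-burner CFD analysis described in this case study, and the heat release, radiant fraction, and atmospheric transmissivity are assumed values.

# Illustrative sketch only: a point-source screening estimate of radiant flux from a
# single burner, q = tau * F * Q / (4 * pi * r^2). This is NOT the multi-burner CFD
# analysis described in the text; heat release, radiant fraction, and transmissivity
# are assumed values.
import math

def radiant_flux_kw_m2(heat_release_kw, distance_m, radiant_fraction=0.2, transmissivity=1.0):
    return transmissivity * radiant_fraction * heat_release_kw / (4.0 * math.pi * distance_m ** 2)

if __name__ == "__main__":
    q_burner_kw = 5_000.0                   # assumed heat release of one burner
    for r_m in (10.0, 50.0, 100.0):
        print(f"r = {r_m:5.1f} m  ->  {radiant_flux_kw_m2(q_burner_kw, r_m):6.2f} kW/m^2")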

The Solution
The complexity of the assessment would have required massive parallel computing efforts if engineers had handled the above calculations in their entirety. Instead, Bechtel used CFD to implement the following more manageable three-phase approach:

• Phase 1—The combustion model used in a single flare study was validated.

• Phase 2—The interaction of two to three neighboring burners was studied for velocity, temperature, and composition.

• Phase 3—The interaction study was extended from neighboring burners to the overall flare, and the propagation of the plume was studied.

This approach enabled Bechtel to break down the complex problem into smaller-scale sub-problems that worked within the limitations of the CFD technology.


Figure 1. Schematic of ConocoPhillips Optimized Cascade® LNG Process (Reprinted with permission from ConocoPhillips Company)


Each phase provided the information required for the succeeding phase. As a result, the assessment ensured that the design of the wet ground flare system met the requirements of the ASCC National Standard for the Control of Major Hazard Facilities.

During the assessment, CFD was used to study:

• Flux on the incoming pipe rack to the flare
• Ground-level flux at buildings near the flare
• The probability of a fire and the resulting damage to vegetation/nearby mangroves
• The effects of the flare event on personnel and buildings

Figures 2, 3, and 4 were generated from the CFD model. Figure 2 shows the flare region with the wet and dry flare arrays and the corresponding computational grid. The temperature contours and surface-incident radiation on the full terrain are shown in Figure 3. The temperature contours on the ground below the burner risers are shown in Figure 4.

Case Study: Mitigation of Temperature-Driven Bending Stress and the Failure of a Multiphase Flare Line under Cryogenic Conditions
Bechtel engineers used CFD and FEA to perform a flow and stress analysis of a flare line. This case study describes how, when applied together, these simulation technologies led to an understanding of the thermal stresses induced by the Joule-Thomson effect within a flare line. This understanding led to the design of effective mitigation measures.

The Problem
Flare lines in an LNG plant carry a mixture of methane, propane, and ethylene. Several laterals bring these constituents to the flare header, and the constituents flow out to burn during a flaring event. Pressure safety valves and depressurizing valves are located in the laterals.

Figure 2. Draft of Flare Region Showing Wet and Dry Flare Arrays and Corresponding Computational Grid (domain dimensions are 750 m x 700 m x 500 m)


As a result of Joule-Thomson effects that occur when the valves are operating, liquid can form and flow into the flare header. This leads to large temperature differentials and bending that cause cracks in the welded joints at the support shoes and tees.

This case study [5] involves a flare header system with the following:

• 24- to 36-in. main header to the flare stack
• Laterals that enter perpendicular to the main header, with sizes up to 24 in.
• Shoe spacing of 6 m
• Pipe supports with stitch-welded repads
• A slope of 5 mm every 6 m
• Saddle-type shoes resting on an I-beam

The Solution
Bechtel engineers undertook a two-phase approach to solve the design problem. In the first phase, CFD was used to predict:

• The liquid accumulation upstream of the flare header

• Temperature differentials across the cross section of the flare header due to liquid accumulation

• The occurrence of any high-temperature differentials during the use of 45-degree laterals

In the second phase, Bechtel engineers applied predictions from the CFD analysis to an FEA analysis to determine whether upward bowing occurred in the pipe and to assess any subsequent stress impacts. They also studied various mitigation measures to ensure that the pipe met American Society of Mechanical Engineers (ASME) B31.3 code requirements.


Figure 3. Temperature Contours and Surface-Incident Radiation on Full Terrain

Figure 4. Temperature Contours on Ground Below Burner Risers



10" Line from PSV

8" Line from PSV

24" Line from PSV

24" Line from PSV

8" Line from PSV

6" Line from PSV

11,000 mm Two Supports

Flow Outlet

Flow Inlet for This Case

17,170 mm Three Supports

14,000 mm Three Supports

Slope Modeled 5 mm Every 6 m

90-Degree Lateral and 45-Degree Lateral

Supports withPadded Plate Resting on I-Beam

3,000 mm to Support

9,000 mm to Support

Z X

Figure 5. Geometry and Grid of Supports, Laterals, and Pipe Connections for CFD and FEA Analyses

Figure 6. Volume Fraction and As-Designed Temperature Contours (maximum temperature difference between top and bottom of 170 K [306 °F]; liquid fraction shown after 20 seconds)


Figure 5 shows the geometry and grid of the supports, laterals, and pipe connections used for the CFD and FEA analyses.

Through the CFD analysis, Bechtel engineers determined that when the flow entered the main header through the 90-degree lateral, liquid accumulated on the upstream side even though there was a slope. This cold liquid stagnation could extend as far as 15 m downstream, causing a significant temperature differential across the cross section. The 45-degree lateral did not produce a similar effect and was therefore determined to be preferable to a 90-degree lateral.

The as-designed stresses were determined to be 61,000 psi at the pipe-to-shoe weld. The allowable stresses were 36,000 psi for the structural steel support and 60,000 psi for the piping itself, based on American Institute of Steel Construction (AISC) and ASME B31.3 standards.
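As a sanity check on the magnitude of such numbers (and not a substitute for the FEA described here), a fully restrained differential-expansion estimate, sigma ≈ E·α·ΔT, can be computed for the 170 K top-to-bottom temperature difference noted in Figure 6; the steel properties below are generic assumed values.

# Illustrative sketch only: order-of-magnitude thermal stress if a top-to-bottom
# temperature difference were fully restrained (sigma ~ E * alpha * dT). Generic
# steel properties are assumed; this is not a substitute for the FEA in the text.

E_STEEL_PA = 200e9       # assumed Young's modulus, Pa
ALPHA_PER_K = 1.2e-5     # assumed thermal expansion coefficient, 1/K
PA_PER_PSI = 6894.76

def restrained_thermal_stress_psi(delta_t_k):
    return E_STEEL_PA * ALPHA_PER_K * delta_t_k / PA_PER_PSI

if __name__ == "__main__":
    dt_k = 170.0  # top-to-bottom temperature difference reported for the as-designed case
    print(f"Fully restrained thermal stress for dT = {dt_k:.0f} K: "
          f"~{restrained_thermal_stress_psi(dt_k):,.0f} psi")

The result, on the order of 60,000 psi, is comparable in magnitude to the as-designed weld stresses quoted above, although the actual distribution depends on the support details that only the FEA captures.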

Further analysis of the mitigation measures determined that continuously stitched support pads with an overhang of 100 mm would bring the stresses within allowable limits. Figure 6 shows the volume fraction and as-designed temperature contours. Figure 7 shows the stress contour predictions generated by the FEA analysis. Finally, Figure 8 presents the recommendations for stress reduction.

LIFE-CYCLE DYNAMIC SIMULATION

Bechtel uses life-cycle dynamic simulation to improve the design and operability of LNG plants. As shown in Figure 9, this technology enables design engineers to produce a dynamic model of a plant that evolves with the various stages of a project’s life cycle. This dynamic model can also be tailored to various applications throughout the project’s life cycle.

Additionally, design engineers can use the dynamic model to perform engineering, process, and control system studies.


Figure 7. Stress Contour Predictions Generated by FEA Analysis (stresses of about 25,000 psi at tee joints and temperature gradient locations; bending stresses of about 50,000 psi and about 60,000 psi at supporting shoes; maximum displacement in a maximum-liquid case may be 3 in., with four shoes lifted; maximum vertical bending displacement with 45-degree laterals is less than 1/4 in.; padding is required at tees since stresses can be high at tee welds)

Figure 8. Recommendations for Stress Reduction (reinforcing the tees significantly reduces the stresses)


For example, a dynamic model is typically used in engineering studies associated with refrigeration compressor system design and critical-valve sizing. Dynamic simulation can be used during control system design to ensure that a plant has sufficient design margins to handle disturbances. It can also be used to assess a plant’s optimal operation in the face of changing conditions, such as ambient temperature, and to evaluate a plant’s distributed control system prior to commissioning to reduce the time required for onsite commissioning during startup.

Life-cycle dynamic simulation enables engineers to consider all aspects of the process and control system design during the early stages of a plant’s design, helping eliminate or reduce the need for costly rework later. In addition, dynamic simulation can be used to:

• Study the effects that extensive heat integration (typical of LNG plants) will have on plant stability and startup

• Analyze the effects of external factors such as ambient conditions and compositional changes on a plant’s future operation

• Improve a plant’s day-to-day operations

Descriptions of LNG process dynamic modeling and the LNG plantwide dynamic model validation follow. A number of case studies are also presented. The first case study describes how dynamic simulation was used early in the design stage to ensure a plant’s operability. The remaining case studies highlight the benefits of using dynamic simulation to perform both engineering and control studies.

Information on extending the dynamic model to an operator training simulator (OTS) is also provided at the end of this section.

LNG Process Dynamic Modeling
LNG process dynamic modeling uses a plantwide dynamic simulation model that is fundamental (first-principles based) and built on rigorous thermodynamics. The model’s level of detail and fidelity depend on the application it is used for and the information available for modeling. This discussion assumes a model with the level of detail required to perform engineering studies. Unlike steady-state simulation, dynamic simulation requires an accurate pressure profile, equipment volumes, and other specific information.

Two main components in the LNG process are heat exchangers and turbine-driven refrigeration system compressors. The compressors are modeled with design performance curves. A transfer function-based model is used for the gas turbine. The speed governor dynamics, turbine dynamics, and turbine power output are computed based on a model derived from vendor-provided data. A balance between power supply and demand is used to compute turbine speed. The anti-surge control and performance control systems from Compressor Controls Corporation are modeled in the simulation environment. Another key piece of equipment in the LNG process is the brazed aluminum heat exchanger. Application of the pressure profile to the dynamic simulation model (i.e., calculations of conductance coefficients for piping and equipment) is based on the isometric information, whenever available.
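The compressor-string speed calculation described above amounts to integrating a power balance between the turbine and its load. The sketch below shows that structure with a made-up inertia and a simple cube-law compressor demand; it is not the vendor-derived transfer-function model used in the actual simulation.

# Illustrative sketch only: turbine-compressor string speed from a power balance,
# d(omega)/dt = (P_turbine - P_load) / (I * omega), with an assumed inertia and a
# simple cube-law load. Not the vendor-derived transfer-function model in the text.

def simulate_speed(p_turbine_w, inertia_kg_m2, omega_design_rad_s, t_end_s, dt_s=0.01):
    """Integrate rotor speed; the load coefficient balances the turbine at design speed."""
    k_load = p_turbine_w / omega_design_rad_s ** 3   # cube-law power demand coefficient
    omega = 0.9 * omega_design_rad_s                 # start slightly below design speed
    t = 0.0
    while t < t_end_s:
        p_load = k_load * omega ** 3
        omega += (p_turbine_w - p_load) / (inertia_kg_m2 * omega) * dt_s
        t += dt_s
    return omega

if __name__ == "__main__":
    omega_design = 2 * 3.141592653589793 * 3600 / 60    # 3,600 rpm in rad/s
    omega_end = simulate_speed(30e6, 8_000.0, omega_design, t_end_s=20.0)
    print(f"speed after 20 s: {omega_end * 60 / (2 * 3.141592653589793):.0f} rpm")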


Figure 9. Life-Cycle Dynamic Simulation Approach (the dynamic model evolves with the plant through FEED, EPC, startup, and operations, supporting process and control studies, anti-surge and relief valve studies, DCS checkout and FAT, startup support, operator training, APC, and real-time optimization)


Validation of the LNG Plantwide Dynamic Model
Validating the performance of the dynamic model against field data was critical for establishing confidence in the model. The predictions generated by the dynamic model under transient conditions were checked against appropriate data from an operating LNG plant. The dynamic model was also based on design information from the LNG plant’s engineering, procurement, and construction (EPC) stage. No model fitting with the plant data was done prior to the validation.

The model validation was performed using plant data that represented a major change in plant feed rate and, therefore, LNG production. The response of the model variables was compared with the plant variables. Actual plant data for certain variables, such as plant feed rate, ambient temperature, and compressor speeds, was externally specified at 1-minute intervals to drive the model. The model generated predictions about other process variables, which were then analyzed.
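A minimal sketch of that validation loop is shown below: recorded plant values (synthetic placeholders here) drive the model's boundary conditions at fixed intervals, and the model's predictions of other variables are compared against the measurements. The one-line "model" is a stand-in, not the plantwide dynamic model described in this paper.

# Illustrative sketch only: the shape of a validation loop in which recorded plant
# data drive the model at fixed intervals and predictions are compared against
# measurements. The records and the one-line "model" are made-up placeholders.

def model_step(feed_rate, ambient_temp_c):
    """Stand-in for one dynamic-model step; returns a predicted discharge pressure (bar)."""
    return 10.0 + 5.0 * feed_rate - 0.02 * (ambient_temp_c - 25.0)

def mean_absolute_error(plant_records):
    errors = [abs(model_step(r["feed_rate"], r["ambient_temp_c"]) - r["discharge_pressure_bar"])
              for r in plant_records]
    return sum(errors) / len(errors)

if __name__ == "__main__":
    synthetic_records = [                   # one record per minute of plant data
        {"feed_rate": 1.00, "ambient_temp_c": 28.0, "discharge_pressure_bar": 15.0},
        {"feed_rate": 0.60, "ambient_temp_c": 30.0, "discharge_pressure_bar": 12.9},
        {"feed_rate": 0.95, "ambient_temp_c": 31.0, "discharge_pressure_bar": 14.7},
    ]
    print(f"mean absolute error: {mean_absolute_error(synthetic_records):.2f} bar")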

The information presented in Figure 10 shows that the model effectively captured the LNG plant’s behavior. The plant feed rate during the validation period went to about 60 percent of the normal operating rate and then returned to a rate close to the original plant feed rate. The scaled values of the feed are shown in the upper-left-hand box. During the validation period, the ambient temperature ramped up, and the refrigeration compressors slowed down because of the reduced load. This led to a temporary decrease in the compressor discharge pressure, as shown in the upper-right-hand box. The anti-surge valves for the compressors also opened up to protect the machines from surge, as shown in the lower-left-hand box. A comparison between the output from a plant level controller and the output from the model level controller is shown in the lower-right-hand box.

Case Study: Using Dynamic Simulation to Improve Plant Operability During the Front-End Engineering Design Stage
This case study describes how dynamic simulation was used to verify the operability and controllability of a proposed large-capacity LNG plant. [6] It highlights the value of using life-cycle dynamic simulation during the front-end engineering design (FEED) stage. The LNG industry has experienced rapid growth in recent years, and larger plants are being built around the world. Bechtel and ConocoPhillips have collaborated in designing and building several LNG plants with a capacity in the range of 3 million tons per annum and greater.

Figure 10. Comparison of Model Predictions (Blue) with Plant Data (Green) (plant feed rate, propane discharge pressure, ethylene recycle, and exchanger level control valve, each plotted against time in minutes)



The increasing number and size of LNG plants make the use of dynamic simulation to prevent operability problems after plant commissioning more important than ever.

The Problem
The process design for the proposed large-capacity LNG plant was different from that of existing LNG plants based on the ConocoPhillips Optimized Cascade LNG Process. First, a single driver was used for both the propane and ethylene refrigeration compressors. In addition, gas turbines with a limited speed range (95 to 101 percent) were used as drivers for the refrigeration compressors. Consequently, the control schemes in the refrigeration systems were modified to suit the design. A helper motor was used with the gas turbine to provide additional power. These factors raised concerns about the plant’s potential for safe and reliable startup and operations. As a result, the design required verification.

The Solution
Dynamic simulation was used to address concerns about the design during the FEED stage of the project and to ensure the plant’s overall operability. The specific objectives of the simulation were to:

• Verify startup of the gas turbine drivers, which included determining the correct starting pressures and the adequacy of starter motor sizes for the turbines

• Develop a procedure for pressurizing and loading the refrigeration systems, including a procedure for loading and balancing the two parallel refrigeration systems

• Study the effect of the gas turbine trip on the parallel turbine and refrigeration units, and determine the configuration and size of the anti-surge valves required to prevent compressor surge for the turbine that tripped

• Study the performance of the control scheme of the frame-type gas turbines and identify any improvements

Further details about the simulation follow.

Trip of Gas Turbines
A major advantage of the ConocoPhillips Optimized Cascade LNG Process is that it can maintain plant production at half rate or more with the trip of one gas turbine driver. The applicability of this advantage to the large-capacity train design required verification.

The trip of a single-shaft gas turbine is a matter of concern because it causes the parallel turbine to become overloaded and trip. Reducing the feed rate does not prevent this occurrence because the piping and equipment volumes cause a time lag in the system. Therefore, special procedures such as throttling the suction valves were necessary to prevent the overload trip of the parallel turbine. Such procedures momentarily reduce the load on the running turbine, keeping it online. Another method for preventing turbine trip was to use the novel anti-bog-down gas turbine control scheme. This scheme detects an impending overload trip and throttles back all suction valves to ensure that the second running gas turbine stays online.

Dynamic simulation was used to study the impact of the gas turbine trip on other process equipment. The simulation showed that the process equipment stabilized normally after the trip, and the intermediate variation in pressures and other variables was reasonable. Figure 11 shows the speed and power responses for the two GE Frame 7 gas turbines after the trip of one of the turbines. As the figure shows, the speed of the tripped gas turbine (solid blue line) coasts down to zero. The gas turbine that remains online (solid red line) initially slows down and then recovers and stabilizes at normal speed. The power of the tripped gas turbine (dotted red line) instantaneously goes to zero. The power of the operating turbine (dotted green line) remains close to the normal operating value.

Plant Startup
Startup procedures for previous trains were not directly applicable to this LNG plant because of its unique process design, so dynamic simulation was used to develop and verify new startup procedures for the plant. Specifically, engineers used the simulation to:

• Verify that the sizes of the starter motors were sufficient to bring the turbine-compressor string to full speed

Figure 11. Speed and Power Responses for Two Gas Turbine Drivers During Trip



• Determine the correct starting pressures for the refrigeration loops for the selected starter motor sizes

• Identify the various process conditions and operating points required for the compressor during startup to ensure that the compressor stages do not go into surge or stonewall during startup

• Define the procedure for loading and balancing the parallel trains so that, when the second train is brought online, the two trains share the load equally

• Estimate the time required for starting each individual train and the total plant startup time (This data is important for the reliability, availability, and maintainability analysis.)

Using dynamic simulation, the following three phases were identified for plant startup:

• Phase 1—Turbine startup involves bringing the compression trains to the point of stable recycle at a minimum of 95 percent speed. The refrigeration systems must be depressurized to the correct pressures before turbine startup can occur. The motor size required for proper startup and acceleration was verified using dynamic simulation. The starting pressure is critical for ensuring proper acceleration of the gas turbine and for preventing it from becoming bogged down during startup.

• Phase 2—Pressurization prepares the compressor system for loading by bringing the pressure to the right values.

• Phase 3—Loading begins with the introduction of refrigerant vapor from the process systems, and proceeds to fully integrated and normal parallel train operation. The procedures for pressurization and loading were verified and refined using the dynamic simulation model.

In the above case study, the objective of dynamic simulation was to verify operability and controllability of the proposed LNG plant during the early FEED stage. The simulation enabled design engineers to address concerns about the plant’s design, including verifying the plant’s response to compressor trips and determining the required startup procedures, compressor starter motor sizes, and starting pressures.

Case Study: The Application of Dynamic Simulation for Detailed Engineering Studies
The dynamic simulation model used for engineering studies is typically based on information from the EPC stage of a project. To be effective, this model must also include sufficient detail about the issue under study. A dynamic simulation model is run offline and therefore does not need to run in real time. However, the simulation time and model complexity must still be within reasonable limits. The size of the model depends on the scope of the study and the process units involved.

A lumped parameter modeling approach is normally used for plantwide simulation for engineering studies. Actual equipment sizes are used for vessels, heat exchangers, and other equipment. Control valves are modeled with appropriate actuator dynamics, control characteristics, and valve sizes. The process control aspects of this model include only those details that are relevant to the study. Therefore, the dynamic simulation model used for engineering studies is highly dependent on the study scope and should represent but not replicate the plant.
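The structure of such a lumped-parameter model can be illustrated with a single gas-filled vessel whose pressure integrates the imbalance between an inlet valve flow and an outlet flow. The gas properties, volume, and valve coefficients in the sketch below are made-up values chosen only to show the pressure-flow-conductance bookkeeping, not data from any project model.

# Illustrative sketch only: the structure of a lumped-parameter model -- a gas-filled
# vessel whose pressure integrates the mass imbalance between an inlet valve and an
# outlet valve. Volumes, valve coefficients, and gas properties are made-up values.
import math

R_SPECIFIC = 518.0      # J/(kg*K), roughly methane (assumed)
TEMP_K = 300.0

def valve_flow_kg_s(cv, p_up_pa, p_down_pa):
    """Simple (non-choked) valve law: flow proportional to sqrt(dP * upstream density)."""
    dp = max(p_up_pa - p_down_pa, 0.0)
    rho_up = p_up_pa / (R_SPECIFIC * TEMP_K)
    return cv * math.sqrt(dp * rho_up)

def step_vessel(p_pa, volume_m3, m_in_kg_s, m_out_kg_s, dt_s):
    """Integrate vessel pressure from the mass imbalance (ideal gas, isothermal)."""
    dm = (m_in_kg_s - m_out_kg_s) * dt_s
    return p_pa + dm * R_SPECIFIC * TEMP_K / volume_m3

if __name__ == "__main__":
    p, p_src, p_dst = 8e5, 12e5, 5e5           # Pa
    for _ in range(1000):                      # 0.1 s steps, 100 s of simulated time
        m_in = valve_flow_kg_s(2e-4, p_src, p)
        m_out = valve_flow_kg_s(3e-4, p, p_dst)
        p = step_vessel(p, volume_m3=50.0, m_in_kg_s=m_in, m_out_kg_s=m_out, dt_s=0.1)
    print(f"settled vessel pressure: {p / 1e5:.2f} bar")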

Dynamic simulations have been used to perform many engineering studies related to the LNG process. These studies have involved:

• Analyzing compressor anti-surge systems for the refrigeration systems

• Observing the effects of upset conditions for a feed gas compressor providing gas for three LNG trains

• Mitigating pressure relief scenarios
• Supporting the design of individual pieces of equipment

The following case study highlights the value of applying life-cycle dynamic simulation to a detailed engineering study.

The Problem
An engineering study was required to analyze the compressor anti-surge system for a plant’s refrigeration systems to ensure that:

• Anti-surge valves were sized adequately
• Stroke times of the anti-surge valves and the compressor suction and discharge isolation valves were fast enough

The Solution
To meet the objectives of the engineering study, a dynamic simulation was performed for each of the three refrigeration systems. Various process upsets were modeled to determine which one governed the size of the anti-surge valves. These upsets included closing each of the suction isolation valves, closing the discharge isolation valve, and tripping the compressor.



The upset condition that showed the greatest impact on the sizing of the anti-surge valves was the tripped compressor. This upset condition included a compressor/turbine train’s coast down to a complete stop after a trip. The dynamic plant model used for this study included refrigeration compressors as well as other units used in the LNG process.

Figure 12 shows the results of the dynamic simulation of a refrigeration compressor trip, as well as the speed response of the tripped compressor. The surge margins for three compressor stages are shown in the left plot. The right plot shows the speed response of the tripped compressor (blue line) and the parallel compressor (red line) that is still in operation. One of the key parameters monitored in this simulation was the surge margin, which is defined as the difference between actual flow through the compressor and the surge flow at the corresponding speed. This parameter has to be positive during the trip to prevent compressor surge. The anti-surge valve sizes and speeds were selected so that the surge margin was adequate during the trip.
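A toy version of that surge-margin bookkeeping is sketched below for a post-trip coast-down, comparing a fast and a slow anti-surge valve stroke. The coast-down constants, surge line, and flow numbers are invented for illustration and are not results from the study.

# Illustrative sketch only: surge margin (flow minus surge flow at current speed)
# during a post-trip coast-down, for two anti-surge valve stroke times. All
# constants are invented for illustration, not results from the study.
import math

def surge_flow_m3_hr(speed_rpm, k_surge=7.0):
    """Assumed surge line: surge flow proportional to speed."""
    return k_surge * speed_rpm

def speed_after_trip_rpm(t_s, speed0_rpm=5_000.0, tau_s=6.0):
    """Assumed exponential speed decay after the driver trips."""
    return speed0_rpm * math.exp(-t_s / tau_s)

def compressor_flow_m3_hr(t_s, stroke_time_s):
    """Process flow decaying after the trip, plus recycle flow as the anti-surge
    valve strokes open (commanded fully open at the instant of the trip)."""
    process = 40_000.0 * math.exp(-t_s / 1.5)
    recycle = 45_000.0 * min(t_s / stroke_time_s, 1.0)
    return process + recycle

def minimum_surge_margin(stroke_time_s, horizon_s=10.0, dt_s=0.1):
    steps = int(horizon_s / dt_s) + 1
    return min(compressor_flow_m3_hr(i * dt_s, stroke_time_s)
               - surge_flow_m3_hr(speed_after_trip_rpm(i * dt_s)) for i in range(steps))

if __name__ == "__main__":
    for stroke_s in (2.0, 8.0):
        print(f"stroke time {stroke_s:3.1f} s -> minimum surge margin "
              f"{minimum_surge_margin(stroke_s):8.0f} m3/hr")

In this toy model, the fast stroke keeps the margin positive while the slow stroke lets it go negative, which mirrors the qualitative finding about valve and controller speed described below.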

One of the findings of the dynamic simulation was that, in some cases, due to the configuration of the refrigeration system, the traditional method of sizing the anti-surge valves does not provide enough capacity to protect the compressor from surging during a compressor trip and coast down to stop. The results of the dynamic simulation were used to determine how much additional capacity each anti-surge valve needed to protect the compressor from surging during a trip.

The dynamic simulation also revealed that, if left in automatic control mode, most anti-surge controllers do not react quickly enough to protect the compressors during a trip. To mitigate this issue, the emergency control strategy was modified to deactivate the controllers and to command the valves to open immediately. The dynamic simulation also indicated that reducing the stroke times of the suction isolation valves significantly aided in protecting the compressors from surging during coast down. Reducing these stroke times also helped to reduce the anti-surge valve sizes to some extent. Dynamic simulation was an invaluable tool in providing the data necessary to optimize the anti-surge system design. A steady-state analysis could not have provided this data.

Case Study: The Application of Dynamic Simulation for Control Studies

During the early stages of an LNG plant project, dynamic simulation can be used to determine the right process control structure. [7] There are systematic methods for analyzing process controllability using the dynamic simulation model. This case study describes how dynamic simulation was used to study alternate control strategies for an LNG plant. It highlights the benefits of applying this technology to the control system design.

The Problem

In the original plantwide control scheme for an LNG plant, the plant feed was controlled along with the LNG condensation pressure. A disadvantage of this control scheme was that operators had to manipulate the plant feed rate to account for changes in operating parameters, such as variations in ambient temperature. The variations in ambient temperature changed the refrigeration capacity and, therefore, the LNG production rate. In addition, with this control scheme, the plant feed had to be externally reduced during a compressor trip to account for the reduced available refrigeration.

Figure 12. Response of Refrigeration Compressor in Trip that Occurs at 2-Second Point (left: surge margin, m3/hr, vs. time, sec, for Stages 1–3; right: speed, rpm, vs. time, sec, for the tripped and parallel compressors)


The Solution

A plantwide control scheme that eliminated the need for operators to change the plant feed rate was studied using dynamic simulation. This scheme used the temperature control downstream of the refrigeration and front-end pressure control to indirectly control the plant feed rate. Using this scheme in the simulation, the plant was designed to utilize the available refrigeration capacity. Therefore, the plant feed rate was automatically reduced to maintain the LNG temperature with varying ambient conditions. Figure 13 presents a comparison of the LNG plant production using the original plantwide control scheme (red line) with the production from the dynamic simulation with no operator manipulation of the plant feed rate (blue line).

Some operational problems with this modified control scheme were noticed during test runs with certain plant scenarios. For example, one of the key operating pressures varied unacceptably during the refrigeration compressor trip. To prevent this occurrence, an override scheme was tested on the model, and other control schemes were explored. In addition, the test runs showed that a supervisory scheme was necessary to prevent the gas turbines from running in their temperature override mode under certain conditions. As a result, a supervisory program was developed that runs the turbines at their limits and, therefore, the plant at full capacity without operator intervention. Supervisory schemes that maintain the operation of the heavies removal column were also found to be useful. A plantwide dynamic simulation model was used to study these schemes and to test their effectiveness under various upset conditions, such as plant trips and changes in ambient temperature and plant feed conditions.

Dynamic simulation was also used to select the appropriate tuning parameters for the plant. Typically, tuning parameters from a similar, previously constructed plant are used, or the control loops are tuned onsite during the commissioning phase of the project. Both methods prolong the startup and commissioning time, making them economically inefficient. A dynamic simulation model makes it possible to tune the control loops offline in advance using established tuning procedures. The control loops can then be quickly fine-tuned onsite during commissioning.
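The paper does not prescribe a particular tuning procedure; one common offline approach is to fit a first-order-plus-dead-time (FOPDT) model to step tests run on the dynamic simulation and then apply an IMC (lambda) tuning rule, as in this hedged sketch (the example loop and its parameters are hypothetical):

    # Hedged example: PI tuning from an FOPDT model fitted to a step test run on
    # the dynamic simulation, using one common form of the IMC (lambda) rules.
    # The loop, gain, and time constants below are hypothetical.

    def imc_pi(gain, tau, dead_time, closed_loop_tau=None):
        """Return (Kc, Ti) for a PI controller from FOPDT parameters.

        gain            : process gain (PV units per unit of controller output)
        tau             : process time constant, s
        dead_time       : apparent dead time, s
        closed_loop_tau : desired closed-loop time constant (lambda); a common
                          conservative default is max(tau, 3 * dead_time)
        """
        lam = closed_loop_tau if closed_loop_tau is not None else max(tau, 3.0 * dead_time)
        kc = tau / (gain * (lam + dead_time))
        ti = tau
        return kc, ti

    if __name__ == "__main__":
        # Hypothetical loop: a rundown temperature responding to a valve opening.
        kc, ti = imc_pi(gain=-1.8, tau=240.0, dead_time=30.0)
        print(f"Kc = {kc:.3f}, Ti = {ti:.0f} s")

The fitted gain, time constant, and dead time come from step tests on the dynamic model, so only the final fine-tuning is left for the field.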

Use of a Plantwide Dynamic Model for the OTS

A plantwide dynamic model can be integrated with the control system emulations for use as an OTS. The resulting simulator is a true replica of the plant, reflecting both the process and control elements. Control system emulations are used to replicate the plant control strategy. The shutdown/interlock logic of the plant is also implemented in a virtual plant simulator. This logic can be implemented in separate script files or by using commercially available programmable logic control emulators.
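As a minimal illustration of script-based interlock emulation (tag names, limits, and delays below are hypothetical, and a real OTS would evaluate such logic against the emulated DCS/SIS database every solver step), a latched trip with a time delay might be coded as follows:

    # Minimal sketch of script-based shutdown/interlock emulation for an OTS.
    # Limits and delays are hypothetical placeholders.

    class LowLowLevelTrip:
        """Latched trip: fires if the level stays below the LL limit for `delay` seconds."""

        def __init__(self, limit, delay):
            self.limit = limit
            self.delay = delay
            self.timer = 0.0
            self.tripped = False

        def update(self, level, dt):
            if self.tripped:
                return True
            if level < self.limit:
                self.timer += dt
                if self.timer >= self.delay:
                    self.tripped = True      # latch the trip until reset
            else:
                self.timer = 0.0
            return self.tripped

        def reset(self):
            self.tripped = False
            self.timer = 0.0

    if __name__ == "__main__":
        trip = LowLowLevelTrip(limit=10.0, delay=3.0)   # percent level, seconds
        for step, level in enumerate([15, 12, 9, 8, 8, 9, 8, 7, 6, 6]):
            fired = trip.update(level, dt=1.0)
            print(f"t = {step:2d} s  level = {level:4.1f} %  tripped = {fired}")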

Figure 13. Comparison of Plant Production with Production from Dynamic Simulation with Modified Control Scheme (scaled LNG production vs. time, sec)

The OTS allows for complete control system checkout prior to plant commissioning. It enables engineers to resolve any issues regarding graphics or control logic coding early in the project. Operating personnel can also use the OTS for training purposes to ensure a smooth plant startup. Feedback on the OTS from operating personnel also helps to provide the framework for future OTS development. A schematic architecture of the OTS is presented in Figure 14.

Bechtel has provided OTS solutions to Bechtel LNG clients and is currently developing OTS solutions for other projects.

Improving the Profitability of LNG Plants Using Advanced Process Control

Advanced process control (APC) is a technology designed to stabilize processes in LNG plants so they can operate as close to full capacity as possible. The application of APC enables LNG plant owners to maximize plant performance and increase operating profits. Other benefits of APC include increased process efficiency and natural gas liquid (NGL) production and reduced operator intervention. [8] The use of APC also results in a smoother and more consistent plant operation.

The objective of the LNG APC controller is to maximize gas feed rate subject to the various process operating constraints. The APC controller can also incorporate other objectives, such as minimizing power usage or maximizing NGL production and liquefaction thermal efficiency.
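Commercial APC packages are typically multivariable model predictive controllers built on models identified from plant tests. As a highly simplified stand-in for the steady-state economic objective only, maximizing feed subject to linearized constraints can be posed as a small linear program; the handles, constraints, and coefficients below are invented for illustration:

    # Highly simplified stand-in for the APC economic objective: maximize feed rate
    # subject to linearized constraints on refrigeration power and a gas turbine
    # exhaust-temperature limit. All coefficients are invented; a real APC
    # application uses a multivariable dynamic model identified from plant tests.
    from scipy.optimize import linprog

    # Decision variables: x = [feed_rate_increase, jt_valve_move]   (scaled units)
    c = [-1.0, 0.0]                      # maximize feed  ->  minimize -feed

    A_ub = [
        [0.8, -0.3],                     # refrigeration power usage per unit move
        [0.5,  0.2],                     # exhaust temperature sensitivity
    ]
    b_ub = [
        4.0,                             # available refrigeration power margin
        3.0,                             # available exhaust temperature margin
    ]
    bounds = [(0.0, 6.0), (-2.0, 2.0)]   # move limits on each handle

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print("feed rate increase:", round(res.x[0], 2), "jt valve move:", round(res.x[1], 2))

The real controller solves a dynamic version of this problem every execution cycle, which is how it holds the plant against its active constraints without operator intervention.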

The increase in LNG production achieved by using APC depends on an operator’s skill level and assigned activities in the plant. For a highly skilled operator who makes frequent plant adjustments, a production increase of about 1 percent is achievable with APC. For an average operator who makes less frequent changes during a day, a production increase of between 2 and 3 percent is achievable. This is sufficient economic incentive to justify the implementation of APC in an LNG plant.

CONCLUSIONS

The case studies presented in this paper illustrate the crucial role simulation technologies and concepts play throughout an LNG project's life cycle, especially in quantifying the risks associated with design. Quantifying these risks using traditional design methods is not always possible. With the rapid developments in hardware and software and the increasing awareness of the power of simulation technologies, future projects will involve an enormous amount of simulation-based design. This advancement will lead to a paradigm shift in current design methods. However, simulation technology will never completely replace traditional design methods; instead, it will complement the experience design engineers bring to a project.

Figure 14. Schematic Architecture of OTS (stand-alone OTS network linking operator stations, an instructor station, HYSYS® dynamic model stations, a field operator station, a DCS database server, and simulated DCS and SIS controls that mimic the control room environment and the LNG plant response)



This paper also highlights how Bechtel has innovatively leveraged the life-cycle concept to create applications of value for clients. The concepts and applications described in the case studies have also been successfully applied to other processes such as gas processing and to facilities such as refineries.

TRADEMARKS

Aspen HYSYS is a registered trademark of Aspen Technology, Inc.

ConocoPhillips Optimized Cascade is a registered trademark of ConocoPhillips.

SmartPlant is a registered trademark of Intergraph Corporation.

REFERENCES

[1] D.P. Sly, “Plant Design for Efficiency Using AutoCAD and FactoryFLOW,” Proceedings of the 1995 Winter Simulation Conference, Arlington, Virginia, December 1995, pp. 437–444, access via <http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=478771&isnumber=10228>.

[2] C.R. Cutler and R.T. Perry, “Real-Time Optimization with Multivariable Control is Required to Maximize Profits,” Computers and Chemical Engineering, Vol. 7, No. 5, 1983, pp. 663–667, access via <http://www.sciencedirect.com/science/journal/00981354>.

[3] E.J. Williams and R. Narayanswamy, “Application of Simulation to Scheduling, Sequencing, and Material Handling,” Proceedings of the 1997 Winter Simulation Conference, Atlanta, Georgia, December 7–10, 1997, pp. 861–865, see <http://portal.acm.org/citation.cfm?doid=268437.268666>.

[4] P. Diwakar, V. Mehrotra, R. Vallavanatt, and T.J. Maclean, “Challenges in Modeling Ground Flares Using Computational Fluid Dynamics,” 5th International Bi-Annual ASME/JSME Symposium on Computational Technology for Fluid/Thermal/Chemical/Stressed Systems with Industrial Applications, San Diego, California, July 25–29, 2004, access via <http://catalog.asme.org/ConferencePublications/PrintBook/2004_Computational_2.cfm>.

[5] P. Diwakar, V. Mehrotra, and F. Richardson, “Mitigation of Bending Stress and Failure Due to Temperature Differentials in Piping Systems Carrying Multiphase Fluids: Using CFD and FEA,” Proceedings of the 2005 ASME International Mechanical Engineering Congress and Exposition, Orlando, Florida, November 5–11, 2005, access via <http://store.asme.org/product.asp?catalog_name=Conference%20Papers&category_name=Recent%20Advances%20in%20Solids%20and%20Structures_IMECE2005TRCK-33&product_id=IMECE2005-79969>.

[6] M. Wilkes, S. Gandhi, J. Valappil, V. Mehrotra, D. Messersmith, and M. Bellamy, “Large Capacity LNG Trains: Focus on Improving Operability During the Design Stage,” 15th International Conference & Exhibition on Liquified Natural Gas (LNG 15), Barcelona, Spain, April 24–27, 2007, PO-32, access via <http://kgu.or.kr/admin/data/P-000/d29fdfc6a6d5dd3b6f8b0c0c8b1b3fa3.pdf>.

[7] J. Valappil, V. Mehrotra, D. Messersmith, and P. Bruner, “Virtual Simulation of LNG Plant,” LNG Journal, January/February 2004, <http://www.lngjournal.com/articleJanFeb04p35-39.htm>.

[8] J. Valappil, S. Wale, V. Mehrotra, R. Ramani, and S. Gandhi, “Improving the Profitability of LNG Plants Using Advanced Process Control,” 2007 AIChE Spring National Meeting, Houston, Texas, April 22–26, 2007, see <http://aiche.confex.com/aiche/s07/preliminaryprogram/abstract_79719.htm>.

BIOGRAPHIES

Ramachandra Tekumalla is chief engineer for OG&C's Advanced Simulation Group, located in Houston, Texas. He leads a group of 10 experts in Bechtel's Houston and New Delhi offices in developing advanced applications for various simulation technologies, such as APC, CFD, FEA, OTS, dynamic simulation, and virtual reality. Ram has more than 10 years of experience in applying these technologies, as well as in real-time optimization, to ensure the successful completion of projects worldwide.

Prior to joining Bechtel, Ram was an applications engineer with the Global Solutions Group at Invensys Process Systems, where he developed applications for refineries and power plants, including real-time control, performance monitoring, and optimization.

Ram holds an MS degree from the University of Massachusetts, Amherst, and a BE degree from the Birla Institute of Technology & Science, Pilani, India, both in Chemical Engineering.

Jaleel Valappil is a senior engineering specialist with OG&C's Advanced Simulation Group, located in Houston, Texas. His expertise covers dynamic simulation, real-time optimization, operator training simulation, and advanced process control. Dr. Valappil is also highly experienced in developing and deploying life-cycle modeling concepts for applications in design, engineering, and operations. He currently applies advanced simulation and optimization technology to improve the design and operation of plants being built by Bechtel.

Before joining Bechtel, Dr. Valappil was a senior consulting engineer with the Advanced Control Services Group of Aspen Technology, Inc., where he was responsible for developing and implementing advanced control and optimization solutions for operating facilities. He also took part in identifying and analyzing the economic benefits of advanced control.

Dr. Valappil holds a PhD degree from Lehigh University, Pennsylvania, and a BS degree from the Indian Institute of Technology, Kharagpur, India, both in Chemical Engineering.


OPTIMUM DESIGN OF TURBO-EXPANDER ETHANE RECOVERY PROCESS

IPSI LLC: Wei Yan, PhD ([email protected]); Lily Bai, PhD ([email protected]); Jame Yao, PhD ([email protected]); Roger Chen, PhD ([email protected]); Doug Elliot, PhD ([email protected])

Chevron Energy Technology Company: Stanley Huang, PhD ([email protected])

Abstract—This paper explores methods for determining the optimum design of turbo-expander ethane (C2) recovery processes, focusing on constrained maximum recovery (C-MAR), a new methodology. C-MAR—successor to the system intrinsic maximum recovery (SIMAR) methodology introduced recently—uses a set of curves developed to benchmark C2 recovery applications based on the popular gas sub-cooled process (GSP) and external propane (C3) refrigeration (–35 °C). Using the C-MAR curves, a process engineer can quickly determine the optimum design and estimate the performance and cost of various C2 recovery opportunities without performing time-consuming simulations. Moreover, the C-MAR curves enable alternative process configurations to be compared against GSP performance.

Keywords—C-MAR, compressor, ethane, expander, refrigeration, SIMAR, turbo-expander

Originally Issued: March 2007; Updated: December 2008

INTRODUCTION

Since its acceptance by the industry in the 1970s, the expander-based process has become the mainstay technology in ethane (C2) recovery applications. [1] Despite the great technical and commercial success of this technology, a systematic methodology for determining the optimal system design has remained elusive until recently. Design optimization was approached as an art to be mastered; to this end, a new process engineer would typically spend several years gaining experience and acquiring the necessary expertise. The steep and frustrating learning curve was not conducive to extending this art beyond the province of process specialists to general engineers.

Recently, a methodology called SIMAR, which stands for system intrinsic maximum recovery, was described in papers presented at a key technical conference. [2, 3] These works and subsequent follow-up papers [4, 5] identified a systematic approach to arrive at the optimal design for a given feed stream.

Although SIMAR greatly facilitates the design procedures by reducing a two-dimensional (2-D) search to a single dimension, its reference case is a hypothetical scenario in which infinite amounts of refrigeration are available to the system. In many real cases, the refrigeration supply is limited and costly. Therefore, it is necessary to move to sub-SIMAR operations, and additional steps are required.

This paper presents a new approach to eliminate the aforementioned shortcomings of SIMAR. The new method is called C-MAR, which stands for constrained maximum recovery. C-MAR redefines the reference case by adopting the gas sub-cooled process (GSP), a well-known industrial design [6], as the benchmark case and by incorporating a fixed refrigeration temperature of –35 °C, the practical lower bound of propane (C3) refrigeration circuits. Since this new reference case is a realistic industrial design, its results are more readily transferable to industrial applications (for example, cost estimates).

The technical background for the development of C-MAR is described in some detail in this paper. SIMAR methodology is discussed and illustrated. C-MAR's usefulness and applications are demonstrated in real cases using the Enhanced Natural Gas Liquid (NGL) Recovery ProcessSM (ENRP) [1, 7, 8] (employing a stripping gas system) and the lean reflux process. [9]

TECHNICAL BACKGROUND FOR DEVELOPMENT OF C-MAR

Following a general categorization and discussion of expander-based C2 recovery processes, SIMAR methodology is explored in this section. A scenario in which liquefied natural gas (LNG) is used as feed is described. Since LNG contains abundant refrigeration, the SIMAR reference case can be approximated well. SIMAR curves are compared for the expander (XPDR) 1, XPDR 2, and XPDR 3 process configuration categories. The discussion then examines typical results when the feed is shifted from LNG to natural gas (NG), based on the XPDR 3 category.

C2 Recovery Processes

Figure 1 shows a generalized scheme for C2 recovery based on expander technology. The process is intended to strip the inlet NG of its heavier components. The residue gas is recompressed and returned to the pipeline. Sweet, dry inlet NG flows through an inlet chilling section, where the gas is chilled to a suitable level before entering the heart of the plant: an expander-based C2 recovery system.

ABBREVIATIONS, ACRONYMS, AND TERMS

1-D one-dimensional

2-D two-dimensional

C1 methane

C2 ethane

C3 propane

C-MAR constrained maximum recovery

DeCl demethanizer column

ENRP Enhanced NGL Recovery ProcessSM

GPA Gas Processors Association

GPM gallons per Mscf

GSP gas sub-cooled process

JT Joule-Thomson

LNG liquefied natural gas

LRP lean reflux process

MMscfd million scf per day

Mscf thousand scf

NG natural gas

NGL natural gas liquid

SB side reboiler

scf standard cubic feet

SIMAR system intrinsic maximum recovery

VF vapor fraction

XPDR expander; expressed as XPDR in conjunction with XPDR 1, 2, or 3 process configuration categories

Figure 1. Generalized Gas Processing Scheme for C2 Recovery (feed gas, inlet chiller with refrigeration integration, separator, expander, subcooler, and DeCl yielding C2+ product; LP residue gas returned via air cooler to the pipeline)

Figure 2. Categorizing Expander-Based Schemes (XPDR 1: separator top not rectified; XPDR 2: full rectification using recycled chilled HP residue gas; XPDR 3: self-rectification via a subcooler; heat pump not shown)


The refrigeration of the inlet chilling section is mainly provided by the returning residue gas and is supplemented by side-draws from the demethanizer column (DeCl) (side-draws not shown in Figure 1). Depending on actual requirements, external refrigeration may be required (not shown in Figure 1).

The main components of the expander section include a separator, an expander, and a DeCl. A subcooler is usually provided for improved refrigeration integration in the low temperature regions. The exact design of this section is an art.

Figure 2 provides further details on the expander section. Following earlier practice, the expander schemes are grouped into three process configuration categories: separator top not rectified (XPDR 1), separator top fully rectified (XPDR 2), and separator top self-rectified (XPDR 3). A fourth category, heat pumps, is not shown but is described shortly. The XPDR 3 is the well-known GSP in the industry, which uses a small portion of the non-condensed vapor as the top reflux to the demethanizer, after substantial condensation and sub-cooling. The main portion, typically in the range of 65%–70%, is subjected to turbo expansion as usual.

Configurations with heat pumps are discussed separately because: (1) a heat pump can be an enhancement to any of the aforementioned three categories, and (2) a heat pump moves heat from low to high temperature and changes the temperature distribution in its base configuration. Its working principle is different from that of the three categories. A heat pump design can be recognized by the use of a compressor; a cooler for rejecting heat to a high temperature sink, a Joule-Thomson (JT) valve, or a second expander; and, optionally, a second exchanger to take heat from the low temperature source. Figure 3 depicts the ENRP as an example.

In the Figure 3 configuration, a side-draw liquid stream from the bottom of the demethanizer is expanded to generate refrigeration. This stream is then heated by indirect heat exchange with inlet gas to generate a two-phase stream. The two-phase stream is flashed in a separator. The flashed vapor is compressed and recycled to the demethanizer as a stripping gas. The flashed liquid stream can be mixed with other NGL product streams or returned to the column. This heat pump effectively moves heat from the inlet stream to the bottom of the column.

The main features of this novel design center on the fact that the stripping gas (1) enhances relative volatility ratios and NGL recovery levels, and (2) lowers the column temperature profile and makes heat integration easier.

The lean reflux process, which belongs to the XPDR 3 category, was developed to achieve high recovery levels of C2 in an NG feed without adding substantial amounts of recompression and/or external refrigeration power.


Figure 3. Enhanced NGL Recovery Process (inlet gas, cold separator, expander, demethanizer with side reboilers, residue gas compressor, liquid product, and the IPSI stripping gas package with its own expander and compressor)


This process uses a slipstream from the cold separator or feed gas to generate an essentially C2-free stream as a lean reflux to the demethanizer (see Figure 4).

Introducing a lean reflux considerably reduces equilibrium loss, thereby leading to high C2 recovery while maintaining the demethanizer at a relatively high operating pressure. The process overcomes deficiencies in the commonly used gas sub-cooled reflux process in which C2 recovery levels are ultimately restricted to approximately 90% due to equilibrium loss, or otherwise demand a lower demethanizer pressure and a higher recompression and/or refrigeration horsepower.

SIMAR Methodology

C2 Recovery with LNG as Feed

For the sake of easy visualization, Figure 5 depicts a scenario wherein the NG feed shown in Figure 1 is replaced by LNG. The major difference resulting from this change is the fact that the entire inlet pre-chilling section can be eliminated when LNG is used as the feed. The refrigeration in the residue gas can be retained, thus dramatically reducing the recompression power.

A SIMAR curve can be constructed following a few simple steps. The process starts from a relatively high temperature at a reasonable pressure level, as shown in Figure 6. The track of testing temperatures penetrates through the two-phase region and ends at an arbitrarily chosen level of –100 °C. The fluid remains liquid at and below this temperature level. Once the temperature reaches a certain point, the column’s operating limits are exceeded and the column no longer converges.
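The construction just described is a one-dimensional search over separator temperature at a fixed DeCl pressure. The sketch below mimics that search with a toy recovery surrogate standing in for the converged flowsheet simulation; the surrogate and its coefficients are illustrative, not thermodynamic results:

    # Sketch of the 1-D search behind a SIMAR point: sweep the separator temperature
    # at a fixed DeCl pressure, record the C2 recovery, and take the maximum.
    # The quadratic "surrogate" below merely stands in for a rigorous simulation.

    def c2_recovery_surrogate(t_sep_c, decl_pressure_bar=22.0):
        """Toy response with a single maximum, loosely mimicking the XPDR 3 trend."""
        t_opt = -70.0 - 0.2 * (decl_pressure_bar - 22.0)
        peak = 0.97 - 0.004 * (decl_pressure_bar - 17.0)
        return max(0.0, peak - 2.0e-5 * (t_sep_c - t_opt) ** 2)

    def simar_point(decl_pressure_bar, t_start=-60.0, t_end=-100.0, step=-0.5):
        best_t, best_rec = None, -1.0
        t = t_start
        while t >= t_end:
            rec = c2_recovery_surrogate(t, decl_pressure_bar)
            if rec > best_rec:
                best_t, best_rec = t, rec
            t += step
        return best_t, best_rec

    if __name__ == "__main__":
        for p in (17.0, 22.0, 32.0, 42.0):
            t_opt, rec = simar_point(p)
            print(f"DeCl P = {p:4.1f} bar  ->  SIMAR at {t_opt:6.1f} C, recovery = {rec:.3f}")

Repeating the search over the range of DeCl pressures of interest and collecting the maxima yields the SIMAR curve.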


Figure 5. Generalized Processing Scheme for C2 Recovery with LNG as Feed (LNG from the primary booster pump through the recondenser and LNG preheater to the separator, expander, subcooler, and DeCl; LP residue gas through an NG trim heater or cooler to the pipeline)

Figure 6. Phase Envelope of Inlet Gas (pressure, bar, vs. temperature, °C, for the lean and rich cases, showing the track of testing separator temperatures)

Figure 4. Lean Reflux Process (inlet gas, cold separator, expander, demethanizer with side reboilers, residue gas compressor, liquid product, and the LRP package)


Figure 7 plots the C2 recovery level against the test separator temperature for the three categories of process configurations. The DeCl operating pressure is 22 bar. Both XPDR 1 and XPDR 2 show a monotonic trend of improvement as the temperature decreases. This trend continues even after the inlet gas is totally liquefied, when the expander is replaced by a JT valve and no expansion work is recovered. XPDR 3 shows a different trend, however. As the separator temperature decreases, the C2 recovery reaches a maximum value and then decreases. In other words, too much refrigeration at the separator may hurt the C2 recovery.

For XPDR 3, the SIMAR is defined as the maximum of the curve. For XPDR 1 and XPDR 2, the SIMAR is defined at the temperature level where the separator fluid has 30% vapor fraction (VF). The choice of 30% is based on a practical consideration that no expanders would be installed if the gas flow is below this level.

A SIMAR curve is the collection of all the SIMAR conditions that cover the entire range of DeCl operations of interest. Figure 8 depicts typical SIMAR curves corresponding to the three XPDR categories. The characteristics of the three categories can be observed. The reflux stream makes XPDR 2 more efficient than XPDR 1 throughout the entire range, which corresponds to the operating pressures of DeCl from 17 to 42 bar. The efficiencies of XPDR 2 and XPDR 3 are comparable, while each has its advantages over a certain span.

C2 Recovery with NG as Feed

Figure 9 shows typical results, based on the XPDR 3 category, when the feed is shifted from LNG to NG. Since the refrigeration in the residue gas must be recovered to cool the inlet gas, the recompression power increases significantly with this shift in feed. A big gap is apparent between the two thin curves on the left.

In addition to the recovered refrigeration in the residue gas, external refrigeration may also be needed. When this is true, the compression power required by the external refrigeration should be added to the aforementioned recompression power to form the total compression power. The two thick curves on the right represent the total compression duties using one or two side reboilers (SBs). The gap between the total compression curves and the recompression curves on the left in Figure 9 represents the external refrigeration. Using two SBs reduces the external refrigeration because of improved refrigeration integration in the pre-chilling section.

The gap narrows in Figure 9 when the DeCl operating pressure decreases, indicating the decreased demands for external refrigeration. As can be observed in Figure 8 as well, when the DeCl operating pressure decreases or C2 recovery level increases, the need for external refrigeration also decreases. The expander provides more refrigeration for process needs at lower DeCl operating pressures.


Figure 7. Two Types of Behavior for Different Categories (C2 recovery vs. separator temperature, °C, at DeCl pressure = 22 bar, for XPDR 1, XPDR 2, and XPDR 3; vapor fraction markers at 0%, 50%, and 90%, with the liquid, JT, and expander regimes indicated)

Figure 8. Comparing SIMAR Curves for Three XPDR Categories (C2 recovery vs. total power, MW per 100 MMscfd LNG inlet, for XPDR 1, XPDR 2, and XPDR 3)

Figure 9. Comparing Compression Duties and Impact of Side Reboilers Based on XPDR 3 (C2 recovery vs. compression power, MW per 100 MMscfd NG inlet, showing the SIMAR curve, NG delivery recompression, and NG overall compression with one or two SBs)


FEED GAS COMPOSITION AND SIMULATION PARAMETERS

Table 1 lists the two feed gas compositions used in this paper, rich case and lean case. They represent different richness in C2+ components. The richness of a gas sample is reflected in its C2+ or C3+ components, expressed in gallons per Mscf (GPM). The GPM value for the rich case is 5.71 and for the lean case is 2.87. The phase envelopes corresponding to the two compositions are shown in Figure 6. The richer the gas, the wider its envelope becomes. The raw gas supply is 300 MMscfd (dry basis).
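For reference, GPM can be computed directly from the molar composition and the standard liquid volumes of the C2+ components. The sketch below reproduces the Table 1 values to within rounding; the gallons-per-pound-mole factors are approximate GPSA-type values and should be checked against the data book before use:

    # GPM (gallons of liquefiable C2+ per Mscf of gas) from molar composition.
    # Liquid volume factors (gal per lb-mol) are approximate GPSA-type values;
    # compositions are the Table 1 rich and lean cases (mole %).

    GAL_PER_LBMOL = {          # approximate standard liquid volumes, gal/lb-mol
        "C2": 10.13, "C3": 10.43, "iC4": 12.39, "nC4": 11.94,
        "iC5": 13.85, "nC5": 13.71, "nC6": 15.57, "nC7": 17.46,
        "nC8": 19.39, "nC9": 21.3,
    }
    LBMOL_PER_MSCF = 1000.0 / 379.5   # ideal-gas molar volume basis at 60 F, 14.696 psia

    RICH = {"C2": 10.600, "C3": 5.470, "iC4": 0.926, "nC4": 1.690, "iC5": 0.468,
            "nC5": 0.478, "nC6": 0.295, "nC7": 0.132, "nC8": 0.060, "nC9": 0.020}
    LEAN = {"C2": 4.950, "C3": 3.090, "iC4": 0.442, "nC4": 0.894, "iC5": 0.224,
            "nC5": 0.221, "nC6": 0.300}

    def gpm(composition_mole_pct):
        gal_per_lbmol_gas = sum(pct / 100.0 * GAL_PER_LBMOL[c]
                                for c, pct in composition_mole_pct.items())
        return gal_per_lbmol_gas * LBMOL_PER_MSCF

    if __name__ == "__main__":
        print(f"rich case GPM = {gpm(RICH):.2f}")   # about 5.7, vs. 5.71 in Table 1
        print(f"lean case GPM = {gpm(LEAN):.2f}")   # about 2.9, vs. 2.87 in Table 1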

All simulations in this paper are performed using Aspen HYSYS® 3.2. Table 2 lists pertinent parameters. The delivery pressure to the pipeline is similar to the inlet pressure. Two temperature levels of heat sinks are defined: the high temperature represents air coolers, and the low temperature represents the external refrigeration temperature supplied by two-stage C3 compressor loops.

C-MAR METHODOLOGY

Principal Elements and Assumptions

C-MAR methodology includes two major elements:

• XPDR 3 process configuration as the benchmark model (the GSP, which is well-known in the industry)

• Fixed refrigeration temperature of –35 °C

For purposes of conceptual discussions, the pre-chiller is simulated using one integrated exchanger, which handles all streams including inlet gas, returning residue, SBs, and external refrigeration. Only the minimum amount of refrigeration is added to satisfy the refrigeration balances. The intent is to minimize the additional compression work. Unless specified otherwise, two SBs in an integrated exchanger are assumed. External refrigeration implies closed-loop designs of C3 circuits.
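The refrigeration bookkeeping implied by this assumption can be stated very simply: external refrigeration is whatever chilling duty the residue gas and side reboilers cannot supply. The numbers in the sketch below are placeholders, not case results:

    # Sketch of the "minimum external refrigeration" bookkeeping implied by the
    # integrated pre-chiller assumption. Numbers are placeholders.

    def external_refrigeration_mw(inlet_chilling_duty, residue_gas_cold, side_reboiler_duty):
        """All duties in MW; returns the external (C3 loop) refrigeration required."""
        shortfall = inlet_chilling_duty - residue_gas_cold - side_reboiler_duty
        return max(shortfall, 0.0)

    if __name__ == "__main__":
        # Illustrative: 18 MW of chilling needed, 11 MW recovered from residue gas,
        # 4 MW from two side reboilers -> 3 MW must come from the C3 circuit.
        print(external_refrigeration_mw(18.0, 11.0, 4.0), "MW of external refrigeration")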

The results of C-MAR methodology, including the characteristics of resultant curves and their relation to SIMAR, are examined below. The paper concludes with a discussion of the C-MAR curve in relation to the ENRP and the lean reflux process.

C-MAR Methodology Results, Curves, and Relation to SIMAR

Figure 10 shows the C2 recovery versus separator temperature for the lean case. As the separator temperature decreases, the C2 recovery shows a maximum at about –57 °C for all curves.


Figure 10. Determining Maximum C2 Recovery Using C-MAR Methodology (Lean Case) (C2 recovery, %, vs. high-pressure separator temperature, °C, at inlet pressure = 69 bara for column pressures of 17, 26, 32, and 37 bara)

Table 2. Simulation Parameters Used in This Paper

Parameter                                               Value
Inlet Temperature, °C                                   27
Inlet Pressure, bar                                     69 or 55
Send-Out Residual Gas Temperature, °C                   38
Send-Out Residual Gas Pressure, bar                     74 or 55
Number of Trays in DeCl                                 16
DeCl Operating Pressure, bar                            17 to 37
Composition Ratio of C1 to C2 in DeCl Bottom Product    0.015
High-Temperature Sink, °C                               38
Low-Temperature Sink, °C                                –35

Table 1. Feed Gas Compositions

Component       Rich Case, mole %    Lean Case, mole %
Nitrogen        0.315                0.750
CO2             0.020                0.217
Methane         79.550               88.910
Ethane          10.600               4.950
Propane         5.470                3.090
i-Butane        0.926                0.442
n-Butane        1.690                0.894
i-Pentane       0.468                0.224
n-Pentane       0.478                0.221
n-Hexane        0.295                0.300
n-Heptane       0.132                0.000
n-Octane        0.060                0.000
n-Nonane        0.020                0.000
GPM for C2+     5.710                2.870


This pattern bears similarities to the XPDR 3 curve in Figure 7, indicating the existence of the maximum behavior for the GSP configuration under different constraints, e.g., DeCl pressure and refrigeration availability. Physically, separator temperatures that are too cold result in C1 condensation. The DeCl reboiler would input extra heat to prevent excessive C1 loss from the bottom. The net result is the increased C2 loss in the residue gas. To calculate the power requirement for the external refrigeration, this discussion assumes that the external refrigeration is from two-stage C3 compressor circuits with evaporator temperature at –41 °C and refrigerant condensing temperature at 49.5 °C. From the Gas Processors Association (GPA) data book [9], the value for the power can be obtained.
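As a rough cross-check only (the paper reads the power from the GPA data book charts), the C3 compressor power can be estimated from the refrigeration duty and an assumed fraction of the Carnot coefficient of performance at the stated evaporator and condensing temperatures; the efficiency fraction and duties below are illustrative assumptions:

    # Rough cross-check of C3 refrigeration compressor power from duty and an assumed
    # coefficient of performance (COP). Illustrative only; not a substitute for the
    # GPA data book values used in the paper.

    T_EVAP_C = -41.0        # evaporator temperature, deg C (from the paper)
    T_COND_C = 49.5         # refrigerant condensing temperature, deg C (from the paper)
    CARNOT_FRACTION = 0.60  # assumed overall efficiency relative to Carnot (illustrative)

    def estimated_power_mw(refrigeration_duty_mw):
        t_evap_k = T_EVAP_C + 273.15
        t_cond_k = T_COND_C + 273.15
        cop_carnot = t_evap_k / (t_cond_k - t_evap_k)
        cop = CARNOT_FRACTION * cop_carnot
        return refrigeration_duty_mw / cop

    if __name__ == "__main__":
        for duty in (2.0, 5.0, 10.0):   # MW of refrigeration duty (illustrative)
            print(f"duty = {duty:4.1f} MW  ->  est. compressor power = {estimated_power_mw(duty):4.1f} MW")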

It should be noted that the maximum C2 recovery using C-MAR occurs at a higher temperature (–57 °C) than that of SIMAR (about –70 °C). Using C-MAR, the constraint in refrigeration prevents the separator temperature from decreasing further. Using SIMAR, the constraint is imposed last by forcing the selection into the sub-SIMAR region. Either approach would lead to similar results.

Figure 11 shows trends for the rich case similar to those described above. With the separator temperature further decreasing below some point, the C2 recovery decreases due to C1 condensation.

Figures 12 and 13 depict operation curves determined by C-MAR methodology for the lean and rich cases. As anticipated, the trends are similar to those of SIMAR shown in Figure 9. The gap between recompression and total power represents the external refrigeration. Again, as anticipated, the rich case demands more refrigeration duty than the lean case at the same inlet pressure. With the decrease of DeCl pressure, C2 recovery increases and less external refrigeration is needed because the relative volatility is greater at lower column pressure.

Figures 14 and 15 show C-MAR curves at two inlet pressure levels and two inlet gas GPM values. Figure 14 is for total power, and Figure 15 is for recompression power only. In Figure 14, rich feed gas needs more total power than lean feed gas because more external refrigeration is needed to condense the heavy components of the rich feed into liquid product in the DeCl. At high C2 recovery levels, however, the total power requirements of the lean and rich cases are about the same, and the lean case can even require more power than the rich case, because more recompression power is needed for lean feed gas to handle the larger residual gas flow.

As can be seen from Figure 15, for both the 69 bara and 55 bara inlet pressure cases, rich feed gas requires more recompression power at low C2 recovery levels than lean feed gas. But at high C2 recovery levels, lean feed gas needs more recompression power than rich feed gas.

Figure 11. Determining Maximum C2 Recovery Using C-MAR Methodology (Rich Case) (C2 recovery, %, vs. high-pressure separator temperature, °C, at inlet pressure = 69 bara for column pressures of 17, 26, 32, and 37 bara)

Figure 12. Operation Curves Determined by C-MAR Methodology (Lean Case) (C2 recovery vs. power, MW/100 MMscfd, at inlet pressure = 69 bara; total power and recompression power at column pressures of 17, 26, 32, and 37 bara; higher column pressure needs more external refrigeration)

Figure 13. Operation Curves Determined by C-MAR Methodology (Rich Case) (C2 recovery vs. power, MW/100 MMscfd, at inlet pressure = 69 bara; total duty and recompression duty at column pressures of 17, 22, 26, and 37 bara; higher column pressure needs more external refrigeration)



At low C2 recovery levels or high DeCl pressures, to obtain the same C2 recovery, rich feed gas needs a lower DeCl pressure to create higher relative volatility, which leads to a higher recompression power requirement. But at high C2 recovery or low DeCl pressure, either lean or rich feed gas has high relative volatility, while lean feed gas requires a greater flow rate to achieve the same C2 recovery. This explains the larger recompression power requirement of the lean case at high C2 recovery. It is easy to understand that high-pressure feed gas (69 bara) needs more recompression power than low-pressure feed gas (55 bara) because of the assumption that the inlet pressure is the same as the delivery pressure. As mentioned earlier, the external refrigeration requirement can be deduced from the curves in Figures 14 and 15 because it is simply the difference between the total power and the recompression power.

Examining the regularities between the rich and lean cases in Figure 14 leads to an important conclusion. At a given feed gas pressure and a given richness of inlet gas (i.e., GPM value), it is possible to develop general correlations to interpolate required duties for different feed gases. Since the curves in Figure 14 represent the maximum C2 recoveries achievable by the GSP with realistic refrigeration supplies, the interpolated results provide expedient estimates in feasibility investigations.
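Such a correlation can be as simple as interpolating along each C-MAR curve in recovery and then blending between curves in feed richness (GPM). The sample points in the sketch below are invented for illustration and are not the published Figure 14 data:

    # Sketch of interpolating required total power from C-MAR-type curves: first along
    # each curve in C2 recovery, then linearly between curves in feed richness (GPM).
    # The sample points are invented placeholders, NOT the published Figure 14 data.
    import numpy as np

    RECOVERY = np.array([0.75, 0.80, 0.85, 0.90, 0.95])     # C2 recovery, fraction

    # Total power, MW per 100 MMscfd, at 69 bara inlet (illustrative numbers)
    POWER_LEAN = np.array([3.1, 3.4, 3.8, 4.3, 5.0])         # GPM about 2.9
    POWER_RICH = np.array([4.0, 4.3, 4.7, 5.1, 5.6])         # GPM about 5.7

    GPM_LEAN, GPM_RICH = 2.87, 5.71

    def interpolated_power(recovery, gpm):
        p_lean = np.interp(recovery, RECOVERY, POWER_LEAN)
        p_rich = np.interp(recovery, RECOVERY, POWER_RICH)
        w = (gpm - GPM_LEAN) / (GPM_RICH - GPM_LEAN)         # linear blend in GPM
        return (1.0 - w) * p_lean + w * p_rich

    if __name__ == "__main__":
        print(f"estimated power at 90% recovery, GPM = 4.5: "
              f"{interpolated_power(0.90, 4.5):.2f} MW per 100 MMscfd")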

In addition, since the GSP has practically become a benchmark configuration in this field, the curves in Figure 14 acquired by C-MAR methodology can be used to evaluate different process configurations. The following subsection provides an illustration using the ENRP and the lean reflux process as examples.

C-MAR Curve and the ENRP and Lean Reflux Process

In Figure 16, the ENRP and lean reflux process are compared with the C-MAR curve. Either of the two processes, or both in combination, can achieve higher C2 recovery at lower power than the C-MAR curve, that is, than the highest recovery achievable by the GSP. The ENRP can expend less power to achieve higher C2 recovery than the GSP, and the lean reflux process can achieve high C2 recovery with less power than the GSP. Combining the ENRP and the lean reflux process is better still, because the combination can achieve high C2 recovery with even less power.

In Figure 17, the C-MAR curve is compared with the recovery and power for the Pascagoula NGL plant, which uses the GSP. The point plotted for Pascagoula falls on the right side of the C-MAR curve and is quite close to it. This shows that the design of this plant can achieve a C2 recovery close to the maximum achievable by the GSP.

Another example shown in Figure 18 is the Neptune II NGL plant, which uses the ENRP in its design. For comparison, the point for the GSP without refrigeration is also marked.


Figure 14. Impact of Inlet Pressure and Richness of Feed Gas on C-MAR Curves (Total Power) (C2 recovery vs. total power, MW/100 MMscfd, for lean and rich feeds at 69 bara and 55 bara)

Figure 15. Impact of Inlet Pressure and Richness of Feed Gas on C-MAR Curves (Recompression Power) (C2 recovery vs. recompression power, MW/100 MMscfd, for lean and rich feeds at 69 bara and 55 bara)

Figure 16. Comparison of ENRP and Lean Reflux Process with C-MAR Curve (C2 recovery vs. total power, MW/100 MMscfd, at inlet pressure = 69 bara, rich case, showing the C-MAR curve, IPSI stripping gas refrigeration, lean reflux, and IPSI stripping gas refrigeration plus lean reflux)


The point for the ENRP is on the left side of the C-MAR curve and shows the improvement realized from use of the ENRP over the GSP. The GSP without refrigeration is some distance away from the C-MAR curve on the right side; its C2 recovery is limited because no external refrigeration is supplied.

CONCLUSIONS

In this paper, the authors explored the technical background for developing design optimization methodologies for turbo-expander C2 recovery processes. Methods and processes to optimize design and improve system performance were examined and illustrations presented. The discussion and data support the following conclusions:

• C-MAR is a valuable tool and a new approach that eliminates the shortcomings of the SIMAR methodology by expediently determining the maximum C2 recovery and compression power based on use of the well-known GSP.

• Using the C-MAR curves enables optimum design to be determined and initial cost estimates to be prepared for project scoping, avoiding the need to perform intricate simulations.

• Separately, use of the stripping gas process (ENRP) and the lean reflux process can significantly improve the system performance.

• A combination of the aforementioned two processes further improves the system performance.

TRADEMARKS

Aspen HYSYS is a registered trademark of Aspen Technology, Inc.

Enhanced NGL Recovery Process is a service mark of IPSI LLC (Delaware Corporation).

REFERENCES

[1] R.J. Lee, J. Yao, and D. Elliot, “Flexibility, Efficiency to Characterize Gas-Processing Technologies in the Next Century,” Oil & Gas Journal, Vol. 97, Issue 50, December 13, 1999, p. 90, access as IPSI technical paper via <http://www.ipsi.com/Tech_papers/paper2.htm>.

[2] S. Huang, R. Chen, J. Yao, and D. Elliot, “Processes for High C2 Recovery from LNG – Part II: Schemes Based on Expander Technology,” 2006 AIChE Spring National Meeting, Orlando, Florida, April 23–27, 2006, access via <http://aiche.confex.com/aiche/s06/techprogram/P43710.HTM>.

[3] S. Huang, R. Chen, D. Cook, and D. Elliot, “Processes for High C2 Recovery from LNG – Part III: SIMAR Applied to Gas Processing,” 2006 AIChE Spring National Meeting, Orlando, Florida, April 23–27, 2006, see <http://aiche.confex.com/aiche/s06/preliminaryprogram/abstract_43672.htm>.

[4] J. Trinter and S. Huang, “SIMAR Application 1: Evaluating Expander-Based C2+ Recovery in Gas Processing,” 2007 AIChE Spring National Meeting, Houston, Texas, April 22–26, 2007, see <http://aiche.confex.com/aiche/s07/preliminaryprogram/abstract_81508.htm>.

[5] C. McMullen and S. Huang, “SIMAR Application 2: Optimal Design of Expander-Based C2+ Recovery in Gas Processing,” 2007 AIChE Spring National Meeting, Houston, Texas, April 22–26, 2007, see <http://aiche.confex.com/aiche/s07/preliminaryprogram/abstract_81509.htm>.


Figure 17. C-MAR with Pascagoula GSP (C2 recovery vs. total power, MW/100 MMscfd, at inlet pressure = 69 bara and GPM = 2.18, showing the C-MAR curve and the Pascagoula GSP point)

Figure 18. C-MAR with Neptune II (C2 recovery vs. total power, MW/100 MMscfd, at inlet pressure = 72 bara and GPM = 4.50, showing the C-MAR curve, the IPSI stripping gas refrigeration point, and the GSP without refrigeration)


[6] R.N. Pitman, H.M. Hudson, and J.D. Wilkinson, “Next Generation Processes for NGL/LPG Recovery,” 77th GPA Annual Convention Proceedings, Dallas, Texas, March 1998, access via <http://www.gpaglobal.com/nonmembers/catalog/index.php?cPath=45&sort=1a&page=2> and <http://www.gasprocessors.com/dept.asp?dept_id=7077#>.

[7] L. Bai, R. Chen, J. Yao, and D. Elliot, “Retrofit for NGL Recovery Performance Using a Novel Stripping Gas Refrigeration Scheme,” Proceedings of the 85th GPA Annual Convention, Grapevine, Texas, March 2006, access via <http://www.gasprocessors.com/product.asp?sku=P2006.10>.

[8] P. Nasir, Enterprise Products Operating, LP; W. Sweet, Marathon Oil Company; and D. Elliot, R. Chen, and R.J. Lee, IPSI LLC, “Enhanced NGL Recovery ProcessSM Selected for Neptune Gas Plant Expansion,” Oil & Gas Journal, Vol. 101, Issue 28, July 21, 2003, access as IPSI technical paper via <http://www.ipsi.com/Tech_papers/neptune_gas_REV1.pdf>.

[9] GPSA Engineering Data Book, 12th edition, Gas Processors Association, Tulsa, Oklahoma, 2004, access via <http://www.gasprocessors.com/gpsa_book.html>.

BIOGRAPHIES

Wei Yan has more than 10 years of experience in the oil and gas industry. He joined IPSI LLC 1 as a senior process engineer in 2006 to work on design and technology development for LNG and natural gas processing projects.

Before joining IPSI LLC, Dr. Yan worked at Tyco Flow Control Co. as an application engineer focused on new flow-control product development. He also served as a process engineer for China Huanqiu Chemical Engineering Corp., where he worked on the process design of petrochemical projects. Previously, as a research assistant at Rice University, Dr. Yan focused on the foam-aided alkaline-surfactant- enhanced oil recovery process.

Dr. Yan is a member of the Society of Petroleum Engineers and the American Institute of Chemical Engineers.

Dr. Yan holds a PhD from Rice University, Houston, Texas, and a Bachelor’s degree from Tianjin University, China, both in Chemical Engineering.

Lily Bai, a senior process engineer with IPSI LLC, has more than 10 years of experience in research, process design, and development in chemicals, petrochemicals, gas processing, and LNG. Dr. Bai works on the Wheatstone LNG project and is responsible for process simulation. The Wheatstone facility, to be located on the northwest coast of mainland Australia, will have an initial capacity of at least one 5 million-ton-per-annum LNG production train.

Before her current assignment, Dr. Bai worked on projects such as Angola LNG, Santos Gladstone LNG, and Atlantic LNG (Train 4) reliability. Her responsibilities included process simulation and preparation of process flow diagrams and equipment datasheets.

Dr. Bai holds a PhD from Rice University, Houston, Texas, and MS and BS degrees from Tianjin University, China, all in Chemical Engineering.

Jame Yao has 28 years of experience in the development of gas processing and LNG technologies. As vice president of IPSI LLC, Dr. Yao is responsible for all IPSI/Bechtel process design/simulation and development in cryogenic gas processing, nitrogen rejection, and LNG technology. He holds several patents in the field.

Dr. Yao joined International Process Services, Inc., the predecessor to IPSI LLC, in 1986 as a senior process engineer. During his tenure with IPSI, he has co-invented several processes for the cryogenic separation and liquefaction of N2 , He, LNG (methane), and other light hydrocarbons. Previously, Dr. Yao worked as a member of the worldwide Technology Center for Gas Processing of DM International (Davy McKee) in Houston, Texas.

Dr. Yao performed graduate study/research at Purdue University related to the measurement and prediction of thermodynamic properties of cryogenic gas mixtures. This work enabled him to co-invent several processes for the separation and processing of natural gas. He also contributed to the design of gas plants in Australia, New Zealand, Venezuela, the UK, North Sea, Norway, and the United States.

Dr. Yao is the author of more than 20 technical publications and holds more than 15 patents. He is a member of the American Institute of Chemical Engineers.

Dr. Yao holds PhD and MS degrees from Purdue University, West Lafayette, Indiana, and a BS degree from National Taiwan University, Taipei, all in Chemical Engineering.

The original version of this paper was presented at the 86th Annual Gas Processors Association Convention, held March 11–14, 2007, in San Antonio, Texas, USA.

1 Bechtel affiliate IPSI LLC, based in Houston, Texas, was formed in 1986 to develop technology and provide conceptual/front-end design services for oil and gas production and processing facilities as well as for engineering, procurement, and construction companies.

Roger Chen has more than 30 years of experience in research, process design, and development in gas processing and in oil and gas production facilities. As a senior vice president of IPSI LLC, he is responsible for process design and development for gas processing facilities.

Dr. Chen designed the Enterprise Neptune II natural gas plant in Louisiana, constructed to match the capacity of Neptune I. He used an IPSI patent process in the design. Dr. Chen also has served as the technical auditor for several LNG projects, including Darwin in Australia, Zaire Province in Angola, and BG Egyptian in Idku, Egypt.

Previously, Dr. Chen was senior process engineer for IPSI. In this role, he initiated the process design for BG’s Hannibal gas processing plant located near Sfax, Tunisia. Dr. Chen also has served as a chief process engineer for IPSI, with a focus on the BG Pascagoula liquid recovery facility, part of the 1.5-billion-cubic-feet-per-day Pascagoula natural gas processing plant in Mississippi. His activities included process design and startup assistance.

Dr. Chen has been a member of the American Institute of Chemical Engineers and the American Chemical Society for more than 40 years, and the Gas Processors Association Research Steering Committee for 8 years. He holds 10 patents and has authored more than 30 technical publications.

Dr. Chen holds PhD and MS degrees from Rice University, Houston, Texas, and a BS degree from National Taiwan University, Taipei, all in Chemical Engineering.

Doug Elliot, a Bechtel Fellow and a fellow of the American Institute of Chemical Engineers, has more than 40 years of experience in the oil and gas business, devoted to the design, technology development, and direction of industrial research. He is president, chief operations officer, and co-founder (with Bechtel Corporation) of IPSI LLC.

Before helping establish IPSI, Dr. Elliot was vice president of Oil and Gas for DM International (Davy McKee). He started his career with McDermott Hudson Engineering in 1971 following a post-doctoral research assignment under Professor Riki Kobayashi at Rice University, where he developed an interest in oil and gas thermophysical properties research and its application.

Dr. Elliot has authored or co-authored more than 65 technical publications and holds 12 patents. He served on the Gas Processors Association Research Steering Committee from 1972 to 2001 and as chairman of the Gas Processors Suppliers Association Data Book Committee on Physical Properties. Dr. Elliot also served as chairman of the South Texas Section and director of the Fuels and Petrochemical Division of the American Institute of Chemical Engineers and is currently a member of the PETEX Advisory Board.

Dr. Elliot holds PhD and MS degrees from the University of Houston, Texas, and a BS degree from Oregon State University, Corvallis, all in Chemical Engineering.

Stanley Huang is a staff LNG process engineer with Chevron Energy Technology Company in Houston, Texas. His specialty is cryogenics, particularly as applied to LNG and gas processing. Since 1996, Dr. Huang has worked on many LNG baseload plants and receiving terminals. He has also fostered process and technology improvements by contributing more than 20 publications and corporate reports.

Before joining Chevron, Dr. Huang worked for IPSI LLC and for KBR, a global engineering, construction, and services company supporting the energy, petrochemicals, government services, and civil infrastructure sectors.

By training, Dr. Huang is an expert in thermodynamics, in which he still maintains a keen interest. After leaving school, he worked for Exxon Research and Engineering Company as a post-doctorate research associate. Dr. Huang then worked for DB Robinson and Associates Ltd. in Alberta, Canada, a company that provides phase behavior and fluid property technology to the petroleum and petrochemical industries. He contributed more than 30 papers and corporate reports before 1996, including one on a molecularly based equation of state called SAFT, which is still popular in polymer applications today.

Dr. Huang holds PhD and MS degrees in Chemical Engineering and an MS in Physics, all from Purdue University, West Lafayette, Indiana, and a BS in Chemical Engineering from National Taiwan University, Taipei, Taiwan. He is a registered professional engineer in the state of Texas.


TECHNOLOGY PAPERS

Bechtel Power

171  Controlling Chemistry During Startup and Commissioning of Once-Through Supercritical Boilers
     Kathi Kirschenheiter, Michael Chuk, Colleen Layman, Kumar Sinha

181  CO2 Capture and Sequestration Options — Impact on Turbomachinery Design
     Justin Zachary, PhD; Sara Titus

201  Recent Industry and Regulatory Developments in Seismic Design of New Nuclear Power Plants
     Sanj Malushte, PhD; Orhan Gürbüz, PhD; Joe Litehiser, PhD; Farhang Ostadan, PhD

Power — Springerville Power Expansion: Springerville Unit 3, a 400 MW, pulverized-coal-fired generating station in Arizona, was named “2006 Plant of the Year” by Power magazine.


CONTROLLING CHEMISTRY DURING STARTUP AND COMMISSIONING OF ONCE-THROUGH SUPERCRITICAL BOILERS

Kathi Kirschenheiter ([email protected]); Michael Chuk ([email protected]); Colleen Layman ([email protected]); Kumar Sinha ([email protected])

Abstract—As new power plants commit to once-through supercritical boilers and rush to come on line, engineering, procurement, and construction (EPC) turnkey contractors face both a short-term and long-term chemistry dilemma related to oxygenated treatment (OT) during normal long-term operation. Since most industry experience is based on converting existing once-through boilers from all volatile treatment (AVT) to OT, relatively little information exists on newer boilers operating on OT. Electric Power Research Institute (EPRI) all volatile treatment oxidizing (AVT[O]) and all volatile treatment reducing (AVT[R]) startup guidelines facilitating conversion to OT are sound but untested on new boilers and do not address considerations like cycles without deaerators, which must be treated on a case-by-case basis. The startup and commissioning cycle, including startup on AVT and quick conversion to OT, is the EPC turnkey contractor's responsibility. To ensure efficient startup and commissioning of once-through supercritical boilers, the EPC turnkey contractor must address these chemistry issues and develop a practical approach to achieving steam purity and specified feedwater chemistry requirements.

Keywords—action level; all volatile treatment (AVT); all volatile treatment (oxidizing) (AVT[O]); all volatile treatment (reducing) (AVT[R]); American Society of Mechanical Engineers (ASME); coal-fired power plants; chemistry guidelines; commercial operation; condensate polishers; Electric Power Research Institute (EPRI); engineering, procurement, and construction (EPC) contractor; lump-sum, turnkey (LSTK); once-through boilers; oxygenated treatment (OT); risk assessment; startup; steam and cycle chemistry; steam purity; supercritical

Originally Issued: October 2007; Updated: December 2008

INTRODUCTION

The engineering, procurement, and construction (EPC) contractor may ensure efficient once-through supercritical boiler startup and commissioning by developing practical steam purity chemistry limits and a timely, workable approach to meeting these limits. Once-through supercritical boiler chemistry is uncontrollable by boiler blowdown; therefore, constant, stringent chemistry control is required. Significant operation outside boiler and turbine manufacturer chemistry limits may void the warranty, leaving the owner/EPC contractor solely responsible for all costs associated with repairs required within the warranty period.

In addition to boiler and turbine supplier warranty-related water quality and steam purity limits, various industry groups (e.g., American Society of Mechanical Engineers [ASME], VGB PowerTech, and Electric Power Research Institute [EPRI]) have developed standards that represent industry consensus on good, prudent practice for cycle chemistry control. Within the past 15 years, supplier and industry group chemistry limits have been re-evaluated and revised for once-through supercritical boilers. Revisions include operating under various control modes, such as oxygenated treatment (OT) and all volatile treatment (AVT). Operators, engineers, and turnkey contractors have also reviewed chemistry limit guidelines. Further examination of revised chemistry guidelines shows that specified chemistry constraints can be achieved during operation using full-flow, online condensate polishers with timely regenerations. However, during commissioning, it is difficult to ensure that these stringent limits are met without allowing for an uncharacteristically long startup time.

Most chemistry control guidelines developed by industry groups address plant operation after commissioning and initial startup. These guidelines include action levels outlining acceptable chemistry deviations based on hours of operation outside recommended chemistry limits, and are valuable tools for operators. Action levels and allowable hours of chemistry excursion are implemented to protect power plant components from corrosion; however, controlling chemistry during startup and commissioning of once-through supercritical boilers and steam-related systems is a completely different scenario.

CHEMISTRY CONTROL PHILOSOPHY FOR ACHIEVING STEAM PURITY

The EPC contractor may ensure that chemistry limits are met during and after once-through supercritical boiler commissioning by implementing the following steps:

• Control system component cleanliness during shop fabrication
• Control system component cleanliness during construction
• Flush system components prior to startup
• Implement stringent water quality requirements for hydrotesting
• Perform boiler and feedwater system chemical cleaning
• Flush system components thoroughly following chemical cleaning
• Perform steamblows to obtain steam cycle cleanliness
• Implement time-based, progressively improving feedwater and steam chemistry targets

Boiler and feedwater system startup cleaning issues depend on boiler type, heat exchanger equipment type and metallurgy, and the success of pre-commissioning cleanliness measures. Without industry standard guidelines for power plant component cleaning methods, the EPC contractor must implement its own methods to quickly achieve and control boiler feedwater and steam chemistry. Although increasing blowdown and makeup demineralized water to the cycle is effective for cleaning drum boilers, these simple cleaning methods are not practical for once-through supercritical boilers. As demonstrated in numerous startups, full-load turbine roll will dilute concentrated pockets of impurities in the feedwater system and uniformly mix feedwater with condensate. Therefore, the sooner full-load turbine roll is reached, the sooner target feedwater chemistry and steam purity may be achieved.

Before awarding a contract (boiler or turbine), the EPC contractor should negotiate startup and commissioning feedwater quality and steam purity guidelines with the boiler and turbine suppliers to ensure that long-term warranties are not voided during startup and commissioning activities. In addition, industry standard steam purity guidelines for operation should be relaxed to the most practical limits feasible during commissioning, while considering the owner’s long-term warranty interests.

EPC CONTRACTOR’S CHEMISTRY CONTROL PROGRAM

The EPC contractor’s chemistry control program must start at the equipment manufacturer’s fabricating facilities, where cleanliness methods for boiler tubes and other system components are initiated. To ensure that system components have been kept clean during fabrication, contract-negotiated cleaning and inspection procedures should be implemented. Hydrotesting components in the shop using pH-adjusted demineralized water to maintain component cleanliness, followed by a final rinse with a silica-free, vapor phase corrosion inhibitor (VPCI) to reduce corrosion from residual moisture after shop cleaning or hydrotesting, is recommended.

ABBREVIATIONS, ACRONYMS, AND TERMS

ACC air-cooled condensers

ASME American Society of Mechanical Engineers

AVT all volatile treatment

AVT(O) all volatile treatment (oxidizing)

AVT(R) all volatile treatment (reducing)

EPC engineering, procurement, and construction

EPRI Electric Power Research Institute

LSTK lump-sum, turnkey

ORP oxidation-reduction potential

OT oxygenated treatment

VPCI vapor phase corrosion inhibitor



Next, the EPC contractor must implement cleanliness methods during field fabrication to ensure that all construction debris is removed from the system upon completion, and that cleanliness is maintained during component installation by capping pipe ends and cleaning weld areas. pH-adjusted demineralized water is recommended for flushing and hydrotesting boiler, condensate, and feedwater system components and piping to eliminate potential scaling and deposits. Potable quality water is acceptable for flushing and hydrotesting if a thorough chemical cleaning of components follows flushing and hydrotesting, and if a pH-adjusting chemical or silica-free VPCI is added to the flush or hydrotest water.

Following flushing and hydrotesting, boiler, condensate and feedwater systems should be chemically cleaned using demineralized water as the chemical dilution medium. After cleaning, boiler and feedwater systems should be flushed with pH-adjusted demineralized water treated with a suitable, silica-free VPCI. Chemical cleaning is essential to the chemistry control program as it improves boiler chemistry stability by safely removing all deposits from inside boiler tubes (including organics from manufacturing; rust, mill scale, and welding slag from construction; and residual contaminants from hydrotesting).

Finally, the EPC contractor must meet agreed-upon chemistry limits in a timely fashion and complete system steamblows. Steamblows, which clear final debris and surface scale from the steam side of the system through thermal cycling and physical force of steam through the components, are the final step in ensuring steam chemistry meets required limits. Steamblows should be conducted using pH-adjusted demineralized water.

Bechtel contends that startup chemistry guidelines should primarily focus on main steam chemistry targets, including cation conductivity, silica, and sodium, as they are easily and reliably measured using relatively inexpensive online instrumentation. Targets for chlorides, sulfates, and organic compounds should be deferred until the end of the commissioning cycle. Degassed cation conductivity is the preferred conductivity to be measured during commissioning since system air leaks are still being discovered and sealed during the startup and commissioning phase. The measurement of degassed cation conductivity will aid in differentiating between air leaks and other contamination sources.

ONCE-THROUGH BOILER STARTUP CHEMISTRY TRENDS

Most once-through supercritical boilers in the US have been converted from previously predominant AVT to OT, with new facilities almost exclusively using OT. This chemistry change requires all-ferrous metallurgy in the feedwater train, and precludes copper or copper-based alloy feedwater heat exchangers in system design and bronze impellers in condensate pumps and valve trims in the condensate system. The benefits from operating a once-through supercritical boiler on OT include:

• Lowering overall corrosion rates by forming a protective, double-oxide layer with a controlled amount of oxygen present in the condensate (This protective layer is considered to be more stable than the oxide layer formed using AVT.)

• Decreasing boiler chemical cleaning frequency due to reduced amounts of iron transport and deposition

• Allowing quicker, cleaner startups and reduced corrosion product transport rates during cold and hot startups

• Allowing boiler operation at lower pH with overall objective of minimizing chemical costs

• Eliminating feeding, handling, and storage of oxygen scavenger products

To achieve these overall short- and long-term objectives, chemistry controls must be tightened during startup and commissioning. However, tighter chemistry controls add extra time to the already tight startup schedule, and longer startup time equates to lost revenue.

Some once-through supercritical boiler manufacturers have instituted penalties against the allowable pressure drop during initial boiler performance testing, an additional complication that may impact startup and commissioning activities. These penalties are based on extended operation on all volatile treatment reducing (AVT[R]) during startup and commissioning. The reducing environment (negative oxidation-reduction potential [ORP]) present when operating on AVT(R) may contribute to increased iron transport, subsequently increasing the pressure drop through the boiler. These pressure drop correction penalties will be fervently debated by the EPC contractor during commissioning and challenged by both owners and plant operators.

Page 184: BTJ Book V1 N1 2008 Final

Bechtel Technology Journal 174

EPRI OT CHEMISTRY GUIDELINES

During steam-side startup and commissioning, the EPC contractor is mostly interested in main and reheat steam chemistry. Table 1 lists EPRI recommendations for once-through boilers operating under OT, including the normal target value and action levels 1, 2, and 3.

Table 1 includes three parameters requiring special consideration during commissioning: cation conductivity, silica, and sodium. Monitoring cation conductivity is essential since it warns of salts and acids that may cause turbine corrosion. Controlling silica levels in the steam is important as silicate scaling may contribute to turbine capacity and efficiency losses. Monitoring sodium is critical for avoiding corrosion because uncontrolled sodium hydroxide concentrations are known to cause corrosion damage failures in boiler tubes. [1]

Recommendations listed in Table 1 are based on stringent steam quality and feedwater requirements for long-term corrosion control and for reducing forced outages caused by water quality. Most boiler and turbine manufacturers have either agreed to the chemistry limits outlined in Table 1 or have proposed similar limits. From Bechtel’s perspective, these recommendations are acceptable during operation.

Although recommendations listed in Table 1 are acceptable for targeted chemistry limits during operation, EPC contractors would like to see the following two columns added to this table:

• Allowable chemistry excursions during hot startup

• Allowable chemistry excursions during cold startup

Feedwater chemistry control is also essential for successful OT. Table 2 specifies EPRI feedwater chemistry guidelines.

The two most important parameters in Table 2 are feedwater cation conductivity and pH. Cation conductivity should be maintained below 0.15 μS/cm during operation on OT. Normal pH range for feedwater under OT is 8.0 to 8.5. The EPC contractor is challenged with controlling pH when feedwater cation conductivity increases to concentration levels listed in Table 1, action levels 1, 2, and 3 (≤0.3 μS/cm, ≤0.6 μS/cm, >0.6 μS/cm, respectively). [1] Controls for pH and cation conductivity are discussed in boiler and steam turbine startup documents; however, these chemistry control guidelines are not always consistent. The pH/conductivity relationship is crucial for once-through cycles on OT; thus, the EPC contractor implements the chemistry control at its own risk.
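As an illustrative aid only (not part of the original paper), the sketch below shows how the action-level bands of Table 1 might be encoded to screen online analyzer readings. The threshold values are taken from Table 1; the dictionary keys, function names, and overall structure are hypothetical assumptions.

# Illustrative sketch: classify a main/reheat steam sample against the EPRI
# targets reproduced in Table 1. Threshold values come from Table 1; the
# parameter names and functions are hypothetical.
TABLE_1_LIMITS = {
    # parameter: (normal target, action level 1, action level 2) upper bounds;
    # anything above the action level 2 bound falls into action level 3
    "cation_conductivity_uS_cm": (0.15, 0.3, 0.6),
    "silica_ppb": (10, 20, 40),
    "sodium_ppb": (2, 4, 8),
    "chloride_ppb": (2, 4, 8),
    "sulfate_ppb": (2, 4, 8),
}

def action_level(parameter, value):
    """Return 0 if the normal target is met, otherwise action level 1, 2, or 3."""
    normal, level1, level2 = TABLE_1_LIMITS[parameter]
    if value <= normal:
        return 0
    if value <= level1:
        return 1
    if value <= level2:
        return 2
    return 3

def worst_action_level(sample):
    """Worst (highest) action level across all measured parameters."""
    return max(action_level(p, v) for p, v in sample.items())

# Example: sodium slightly above its target puts the sample in action level 1.
sample = {"cation_conductivity_uS_cm": 0.12, "sodium_ppb": 3.1, "silica_ppb": 8}
print(worst_action_level(sample))  # prints 1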

Important issues to be addressed when implementing OT include:

• At what point during the startup and commissioning process should the chemistry regime be switched from AVT to OT to prevent frequent switching back and forth between a reducing and an oxidizing environment?

• What would be the “detrimental effects” of going from an oxidizing atmosphere to a reducing (or close to reducing) atmosphere, for temporary periods?

• How can these “detrimental effects” be quantified and addressed during design and equipment procurement?

ROLE OF CONDENSATE POLISHERS DURING COMMISSIONING

Once-through supercritical boilers are commonly installed with full-flow condensate polishers to control the concentration of corrosive impurities in the condensate and feedwater systems. The presence of impurities in feedwater will significantly affect feedwater chemistry, potentially exceeding boiler supplier feedwater limits and turbine supplier steam purity specifications.


Table 1. EPRI Recommendations for Main and Reheat Steam Chemistry for Once-Through Boilers on OT [1]

  Parameter                     Target (N)   Action Level 1   Action Level 2   Action Level 3
  Cation Conductivity, μS/cm    ≤0.15        ≤0.3             ≤0.6             >0.6
  Silica, ppb                   ≤10          ≤20              ≤40              >40
  Sodium, ppb                   ≤2           ≤4               ≤8               >8
  Chloride, ppb                 ≤2           ≤4               ≤8               >8
  Sulfate, ppb                  ≤2           ≤4               ≤8               >8
  TOC, ppb                      ≤100         >100 (single action threshold)

Table 2. EPRI Feedwater Chemistry Control Guidelines for Once-Through Boilers on OT [1]

  Parameter                                   Normal Limit
  Cation Conductivity, μS/cm                  ≤0.15
  pH, STU                                     8.0 to 8.5
  Dissolved Oxygen at Economizer Inlet, ppb   30 to 150
  Iron, ppb                                   ≤2
  Ammonia, ppm                                0.02 to 0.07


Although chemistry control with full-flow condensate polishers makes startup and commissioning activities progress smoothly, a certain degree of boiler cleanliness must be achieved before placing condensate polishers in operation. If condensate polishers are operated before that level of cleanliness is achieved, the frequency of chemical regenerations (for deep-bed condensate polishers) or resin replacements (for precoat condensate polishers) will increase, leading to higher operations and maintenance costs during commissioning.

The EPC contractor should evaluate both precoat and deep-bed condensate polishers for use during commissioning and startup. Leak-tight, properly installed condenser tubes cannot be confirmed during startup without extensive condenser tube testing and installation of an expensive leak detection system. Therefore, the EPC contractor must use its own experience in selecting one type of polisher over the other, weighing the cost/benefit of each type. Generally, when circulating water is brackish or seawater, a deep-bed polisher is required without exception. Bechtel’s design standard is to use deep-bed condensate polishers on all cycles designed to operate on OT.

Impurities shaken loose during startup may cause a chemistry hold, where plant load increases are temporarily halted until these impurities are removed from the system. For a once-through supercritical boiler, impurities are removed exclusively by condensate polishers subsequent to chemical cleaning and boiler flush. Once impurities are removed, the chemistry hold is lifted and the plant is allowed to continue to ramp up to full load without exceeding allowable boiler or turbine chemistry limits. Operation at low or reduced loads during startup is frequently insufficient to eliminate these chemistry holds. Installation of polishers allows the plant to reach full power more quickly, resulting in substantial cost savings and increased revenue production. The cost of condensate polisher regenerations should be accounted for in the overall commissioning costs. To minimize the number of condensate polisher regenerations, polishers should be operated beyond ammonia break, if feasible. However, the presence of a full-flow condensate polisher does not make the unit immune to chemistry problems. A major condenser leak during commissioning will still lead to severe chemistry excursions, even with the aid of a condensate polisher.

CASE HISTORIES

Case History 1: Once-Through Supercritical Boiler Commissioning With ACC

This case study discusses a power plant currently in full operation. It has a once-through supercritical boiler, commissioned in early 2000, and an air-cooled condenser (ACC). Startup followed a chemistry control program similar to what is now classified as all volatile treatment oxidizing (AVT[O]).

Feedwater chemistry and steam purity control is a challenge on ACC-equipped units. During commissioning, the EPC contractor faced numerous difficulties controlling oxygen, pH, and cation conductivity due to the ACC’s large condensing surface and the frequent regenerations required by precoat condensate polishers at ammonia break. In addition, turbine supplier silica requirements could not be relaxed because this provision had not been negotiated with the turbine supplier before turbine contract award.

The EPC contractor mitigated chemistry and steam purity control issues using a membrane contactor to remove dissolved gases, particularly dissolved oxygen, from the makeup water to the cycle. Membrane contactors containing microporous hydrophobic membranes were used to bring gas and liquids into direct contact without mixing. The contactors lowered gas pressure and created a driving force that removed dissolved gases from the water. Installed at the optimum location, membrane contactors are highly efficient and compact. Using a membrane contactor allowed the EPC contractor to reduce makeup water impurities, resulting in improved feedwater quality control.

Challenges of Commissioning a Once-Through Supercritical Boiler with an ACC

Chemistry control during commissioning of a once-through supercritical boiler with an ACC is complicated by the ACC’s large condensing surfaces, on which high-purity steam must condense. To meet steam quality requirements, these surfaces must be contaminant free. However, ACCs cannot be chemically cleaned or efficiently flushed with water; therefore, the EPC contractor must rely on cleanliness controls implemented during shop fabrication and site installation. In addition, the large surface areas dramatically increase iron content in condensate. Full-flow condensate polishers help to remove iron; however, pressure drop through the polishers increases rapidly, compared to a system operating with a traditional steam surface condenser, and requires frequent regenerations or polisher bed cleanings.



Additional precoat filters or cartridge filters upstream of the main condensate polishers should be considered, at a minimum, for initial startup and commissioning to provide additional cleaning, to supplement online condensate polishers in crud removal, and to speed up the plant startup process.

Case History 2: Once-Through Supercritical Boiler Commissioning

This case history discusses a unit in the final stages of construction. The preliminary startup and commissioning chemistry control philosophy has been developed. The unit will start up on AVT(O) and will normally operate on OT.

The EPC contractor and the boiler and turbine suppliers negotiated target chemistry guidelines to be used during plant commissioning. After cleaning and flushing contaminants from condensate, feedwater, and boiler systems and completing steamblows, the EPC contractor will initiate startup in turbine bypass mode until startup steam chemistry limits listed in Table 3 are met.

After demonstrating that startup steam chemistry limits have been met, turbine roll will be initiated and the turbine startup process will continue, including loading the turbine to full load. The steam chemistry will be monitored to ensure that chemistry is continually improving from the startup steam chemistry limits listed in Table 3 to the balance-of-commissioning-period chemistry limits listed in Table 4. A negotiated period of 168 operating hours will be allowed to achieve steam chemistry below the balance-of-commissioning-period chemistry limits.

If the chemistry limits in Table 4 are not met within the allotted 168 operating hours, the EPC contractor and turbine supplier shall mutually agree to an approach for demonstrating balance-of-commissioning-period chemistry limits while operating in the bypass mode.
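As an illustrative aid only (not a procedure from the project described), the following sketch shows one way the negotiated 168-operating-hour allowance could be tracked against the Table 4 limits. The limit values come from Table 4, with the strict/inclusive bounds simplified to inclusive; the function names and bookkeeping are hypothetical.

# Illustrative sketch: find the first operating hour at which all Table 4
# balance-of-commissioning-period limits are met and compare it with the
# negotiated 168-hour allowance. Missing readings are treated conservatively
# as out of limit.
BALANCE_OF_COMMISSIONING_LIMITS = {
    "degassed_cation_conductivity_uS_cm": 0.30,
    "sodium_ppb": 3,
    "silica_ppb": 20,
    "chloride_ppb": 3,
    "toc_ppb": 100,
    "sulfate_ppb": 3,
}

def meets_limits(sample):
    """True when every Table 4 parameter in the sample is within its limit."""
    return all(sample.get(p, float("inf")) <= limit
               for p, limit in BALANCE_OF_COMMISSIONING_LIMITS.items())

def hours_to_compliance(hourly_samples):
    """First operating hour (1-based) at which every limit is met, else None."""
    for hour, sample in enumerate(hourly_samples, start=1):
        if meets_limits(sample):
            return hour
    return None

def within_negotiated_window(hourly_samples, allowance_hours=168):
    """True if compliance is reached within the negotiated operating-hour window."""
    hour = hours_to_compliance(hourly_samples)
    return hour is not None and hour <= allowance_hours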

The negotiation of relaxed turbine steam purity limits during startup confirms that an additional allowance can be given to the EPC contractor for impurities that could impact startup and delay the overall commissioning schedule.

Recommended Startup Feedwater Chemistry for Once-Through Boilers When Implementing OT

AVT(O) and AVT(R) are the two best-known methods referenced by EPRI for startup of once-through boilers implementing OT during operation. Operational chemistry control guidelines for these methods are summarized in Table 5 and Table 6, respectively.

During commissioning, the EPC contractor must develop a chemistry implementation program to meet the guidelines specified in Table 5 and Table 6. For startup and commissioning of a once-through supercritical boiler with a deaerator, the feedwater chemistry control guidelines specified under AVT(O) and AVT(R) are readily achievable. However, for cycles without a deaerator, it is more difficult to achieve the AVT(O) and AVT(R) feedwater chemistry guidelines (particularly the dissolved oxygen and iron limits), even if oxygen is removed from makeup water prior to introduction into the cycle through, for example, membrane contactors.


Table 3. Startup Steam Chemistry Limits

  Parameter                              Limit
  Degassed Cation Conductivity, μS/cm    <0.45
  Sodium, ppb                            <12
  Silica, ppb                            <40
  Chloride, ppb                          <12
  TOC, ppb                               <100
  Sulfate, ppb                           <12

Table 4. Balance-of-Commissioning-Period Chemistry Limits

  Parameter                              Limit
  Degassed Cation Conductivity, μS/cm    <0.30
  Sodium, ppb                            ≤3
  Silica, ppb                            <20
  Chloride, ppb                          ≤3
  TOC, ppb                               ≤100
  Sulfate, ppb                           ≤3

Table 5. AVT(R) Feedwater Chemistry Control Guidelines [1]

  Parameter                                   Normal Limit
  Cation Conductivity, μS/cm                  ≤0.2
  pH, STU                                     9.2 to 9.6
  Dissolved Oxygen at Economizer Inlet, ppb   <5
  Iron, ppb                                   <2
  Copper, ppb                                 <2
  ORP, mV                                     –300 to –350

Table 6. AVT(O) Feedwater Chemistry Control Guidelines [1]

  Parameter                                   Normal Limit
  Cation Conductivity, μS/cm                  ≤0.2
  pH, STU                                     9.2 to 9.6
  Dissolved Oxygen at Economizer Inlet, ppb   <10
  Iron, ppb                                   <2
  Copper, ppb                                 <2


Elimination of noncondensable gases from the system is limited by the efficiency and capacity of the condenser air removal system when no deaerator is included in the cycle design.

Suggested chemistry guidelines for cycles without deaerators are listed in Table 7. These proposed guidelines are based on Bechtel’s startup experience, taking into account that oxygen removal to the low levels proposed for AVT(R) and AVT(O) operation is an important, but not crucial, requirement in the absence of a deaerator.

Case History 3: Once-Through Supercritical Boiler Commissioning Without Deaerator

This case history discusses a once-through supercritical unit, without a deaerator, that is in the project development phase. A preliminary plant startup and commissioning plan has been developed. The unit will be operated on OT during normal operation. Because there is no deaerator, reaching the EPRI-recommended cation conductivity, iron, and dissolved oxygen limits will be a greater challenge.

The commissioning and startup plan includes unit startup on AVT, as recommended by EPRI. [1] However, to minimize high iron transport and deposition, the plan calls for unit startup on AVT(O). During startup on AVT(O), pH will be controlled by adding ammonia, dissolved oxygen concentration will be reduced by raising temperature through use of an auxiliary boiler and sparging in the hotwell, and cation conductivity will be reduced by treating water through full-flow condensate polishers. Additional startup schedule time, compared to the time normally allotted for a unit with a deaerator, has been included because reaching the dissolved oxygen and cation conductivity limits is not anticipated to be a quick and easy task. Additional schedule time was also added for condenser and feedwater sparging with steam from the auxiliary boiler, helium sweep of condenser and vacuum areas, and unit inspection for vacuum leaks. If it is impossible to meet the dissolved oxygen and cation conductivity limits within a reasonable timeframe, the option for startup on AVT(R) is available. Once cation conductivity has reached the required 0.15 μS/cm level, the unit will be switched from AVT to OT in accordance with the EPRI guidelines. [1]

EPC STARTUP CHALLENGES

In addition to the stringent steam quality limits implemented by steam turbine suppliers, boiler manufacturers have tightened limits on feedwater chemistry. OT guidelines call for consistent feedwater quality with cation conductivity ≤0.15 μS/cm (see Table 2) before and during OT chemistry program implementation. AVT guidelines call for similar chemistry limits (<0.2 μS/cm). Since meeting these chemistry limits during startup and commissioning is extremely difficult, the EPC contractor requires the standards to be relaxed during commissioning to permit timely unit startup.

One of the EPC contractor’s biggest dilemmas during commissioning is determining the appropriate time to switch from AVT to OT, even when considering EPRI and boiler supplier guidelines. Once cation conductivity levels are stable below 0.15 μS/cm, EPRI recommends operation on OT with oxygen injection in a pH range of 8.0 to 8.5. EPRI guidelines also state that oxygen injection into feedwater may continue with pH controlled between 9.2 and 9.6 and cation conductivity between 0.15 μS/cm and 0.3 μS/cm. However, at cation conductivity levels greater than 0.3 μS/cm, EPRI recommends that oxygen injection be terminated and AVT resumed. [1] Upsets in cation conductivity may lead to serious corrosion problems if oxygen is continuously fed during upset conditions. Defining stable operation, given the many factors in play and pieces of equipment still being tested during typical unit startup, is the true challenge to the EPC contractor. Preventing system chemistry switching back and forth between AVT and OT is extremely important. Detrimental effects caused by system chemistry switching between AVT and OT include increased iron transport through dissolution of magnetite and protective hematite (developed during OT operation) layers, boiler deposits, and increased boiler pressure drop.
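For illustration only, the EPRI decision points quoted above can be reduced to a simple screening rule. The 0.15 and 0.3 μS/cm thresholds and the associated pH bands are those cited in the text; the notion of a “stable” reading and the function itself are hypothetical simplifications of what the text describes as the real judgment call during startup.

# Illustrative sketch: suggest a chemistry regime from the EPRI decision
# points cited above. The stable_below_015 flag stands in for the harder
# engineering judgment of what counts as stable operation during startup.
def recommended_chemistry_mode(cation_conductivity_uS_cm, stable_below_015):
    if cation_conductivity_uS_cm > 0.3:
        return "AVT (terminate oxygen injection)"
    if cation_conductivity_uS_cm > 0.15:
        return "oxygen injection may continue, pH 9.2 to 9.6"
    return ("OT with oxygen injection, pH 8.0 to 8.5" if stable_below_015
            else "hold on AVT until cation conductivity is stable below 0.15")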

WARRANTY IMPLICATIONS

Steam turbine suppliers are also setting limits, in the equipment contract, on the number of hours a turbine can be operated with out-of-specification chemistry.


Table 7. Suggested Startup Feedwater Chemistry Guidelines for Once-Through Cycles Without Deaerators

  Parameter                                   Limit
  Cation Conductivity, μS/cm                  <0.2
  pH, STU                                     9.2 to 9.6
  Dissolved Oxygen at Economizer Inlet, ppb   <100
  Iron, ppb                                   <5
  Copper, ppb                                 <2


These limits are typically listed in an action-level format in which minor chemistry excursions are allowable for predetermined time periods without violating the equipment warranty. The more severe the chemistry excursion, the shorter the operating time period the supplier will allow while maintaining the equipment warranty. These action levels impose additional restraints on steam turbine operation. The limit on the number of hours of operation in each action level is very difficult to meet without adversely impacting the equipment warranty should delays arise during unit startup. During startup and commissioning, steam chemistry is expected to be degraded compared to full-load, steady-state operation because of the numerous cold and hot startups experienced in a short timeframe. Therefore, during each startup, the turbine will operate with degraded steam purity (within the specified action levels). From Bechtel’s perspective, hours of operation under each of the different action levels accumulated during the commissioning and startup phase should not count against allowable hours for warranty purposes.

CONCLUSIONS

The EPC contractor’s ultimate goal is to perform an efficient once-through supercritical boiler and turbine startup and commissioning. Stringent operational chemistry guidelines applied to startup and commissioning activities negatively impact quick and efficient startup. To meet chemistry guidelines, rigorous cleaning and inspection procedures must be adhered to during all fabrication, construction, and installation phases. The success of any cleaning program is ultimately judged by the ease with which acceptable feedwater and steam chemistry is achieved.

Practical startup chemistry guidelines should be established by consensus among the turbine manufacturer, boiler manufacturer, and EPC contractor early on in project development and outlined in equipment contracts. These startup guidelines should be based on the EPC contractor’s startup experience, the manufacturers’ desire to prevent corrosion and deposition in equipment components, and the EPC contractor’s and owner’s desire for efficient and timely unit startup. Table 8 provides practical chemistry limits suitable for startup and commissioning activities for once-through supercritical boilers.

Using the practical chemistry limits provided in Table 8, typical operation duration would be about one week and would be outlined in the equipment contracts. After this, the normal chemistry limits, as recommended by EPRI and equipment manufacturers, would be met and maintained. Time-based chemistry limits and cumulative hours under action levels would be started after commissioning.

For once-through supercritical boilers without deaerators, startup chemistry guidelines should be developed allowing the EPC contractor as much allowance on dissolved oxygen as practical.

Finally, it is important to develop a relationship of trust among the EPC contractor, turbine and boiler suppliers, and owners/operators. For it is through trust that the combined chemistry knowledge of these parties can be integrated to complement plant startup and bring the unit online more quickly, resulting in economic rewards for all.

REFERENCES

[1] “Cycle Chemistry Guidelines for Fossil Plants: Oxygenated Treatment,” Electric Power Research Institute (EPRI) Technical Report 1004925, March 9, 2005 <http://my.epri.com/portal/serverpt?space=CommunityPage&cached=true&parentname=ObjMgr&parentid=2&control=SetCommunity&CommunityID=277&PageID=0&RaiseDocID=000000000001004925&RaiseDocType=Abstract_id>.

Table 8. Steam Purity Limits During Startup—EPC Contractor Recommendation for Once-Through Boilers

  Parameter                              Frequency              Limit
  Degassed Cation Conductivity, μS/cm    Continuous Sampling    <0.45
  Sodium, ppb                            Grab Daily             <12
  Silica, ppb                            Grab Daily             <40
  Chloride, ppb                          Grab Daily             <12
  TOC, ppb                               Grab Weekly            <200
  Sulfate, ppb                           Grab Daily             <12


This paper was originally presented at the 68th Annual International Water Conference (IWC), held October 21–25, 2007, in Orlando, Florida.


BIOGRAPHIES

Kathi Kirschenheiter has worked for Bechtel Power for more than 7 years. Her experience has focused mainly on engineering design of water and wastewater treatment systems, including equipment specification and procurement. She has worked with several different Bechtel GBUs in various locations, including Power in Alabama with the Tennessee Valley Authority’s Browns Ferry Nuclear Plant Unit 1 Restart, BNI in Washington with the Hanford Waste Treatment Plant Project, and OG&C in London with the Jamnagar Export Refinery Project.

Kathi holds a BS in Chemical Engineering from Purdue University, West Lafayette, Indiana, and is currently pursuing an ME in Environmental Engineering from the Johns Hopkins University, Baltimore, Maryland. She is a registered professional engineer in the state of Maryland in Environmental Engineering and is currently a Black Belt Candidate in Bechtel Power’s Six Sigma program.

Michael Chuk has worked for Bechtel’s Power business for more than 3 of his 4 years in the industry. His experience includes the engineering, design, and procurement of water and wastewater treatment systems for power plant projects. Michael is part of the mechanical engineering group, and he has most recently been working on the Prairie State 1,600 MW supercritical coal-fueled plant project. Mike’s work has included awarding water treatment packages and completing system design and sizing calculations for all of the project’s water treatment equipment. Previously, he worked in Bechtel Power’s water treatment engineering group on several other fossil, nuclear, and integrated gasification combined-cycle (IGCC) plant projects. His responsibilities included water balance calculations and water characterization calculations, and early procurement activities such as preparation of material requisitions and specifications.

Michael holds a BS in Chemical Engineering from Worcester Polytechnic Institute, Worcester, Massachusetts. He is an engineer-in-training in the state of Maryland.

Colleen Layman, manager of water treatment, has more than 15 years of experience in water and wastewater treatment for power generating facilities. Her wide variety of experience includes engineering design, construction, and startup of power generating facilities; field service and engineering of water and wastewater treatment equipment and water quality control programs; and the day-to-day operations of a power plant burning waste anthracite coal. Currently, as manager of Bechtel Power’s water treatment group, she is responsible for the conceptual design, process engineering, startup and operational support of the water/wastewater treatment systems, and the steam/water chemistry issues for Bechtel’s power projects.

Colleen is an active member of both the American Society of Mechanical Engineers and the Society of Women Engineers. She currently serves as a member of ASME PTC 31 – High-Purity Water Treatment Systems, as a member of the ASME Research and Technology Committee on Water and Steam in Thermal Systems, and as President of the Baltimore-Washington Section of the Society of Women Engineers.

Colleen holds an MS in Water Resources and Environmental Engineering from Villanova University, Pennsylvania, and a BS in Mechanical Engineering Technology from Thomas Edison State College, Trenton, New Jersey. She is a registered professional engineer in the state of Ohio.

Kumar Sinha, principal engineer, water and wastewater, on Bechtel’s mechanical engineering staff, has 40 years of experience (30 years of which have been with Bechtel) dealing with water and wastewater for power plants, refineries, and other industries. He has held increasingly responsible positions, including senior water treatment engineer, water treatment supervisor, senior engineer and project engineer with Bechtel Civil, supervisor and principal engineer with the Fossil Technology Group, and principal engineer for the Mechanical Project Acquisition Group. His experience includes project and process engineering, licensing, construction support, startup, and hands-on operation of water and wastewater systems in the US and abroad. Areas of expertise include boiler water and steam chemistry, pretreatment and demineralization, water desalination, treatment of cooling water, and wastewater disposal, including water recycle and zero discharge.

Kumar has been an executive committee member of the International Water Conference since 2004 and was general chair in 2007 and 2008, was a member of the American Society of Mechanical Engineers Subcommittee on Water Technology and Chemistry, was a member and director of the Engineers Society of Western Pennsylvania in 2007 and 2008, and is a retired member of the American Institute of Chemical Engineers.

Kumar received an MS in Energy Engineering from the University of Illinois, Chicago Circle campus, and has completed various business courses at Bechtel and Eastern Michigan University. He holds a BS in Chemical Engineering from the University of Ranchi, India, and is a registered professional engineer in the state of Illinois.


© 2008 Bechtel Corporation. All rights reserved.

CO2 CAPTURE AND SEQUESTRATION OPTIONS—IMPACT ON TURBOMACHINERY DESIGN

Justin Zachary, PhD ([email protected])
Sara Titus ([email protected])

Originally Issued: June 2008; Updated: December 2008

Abstract—In today’s climate of uncertainty about carbon dioxide (CO2) emissions legislation, owners and power plant planners are evaluating the possibility of accommodating “add-on” carbon capture and sequestration (CCS) solutions in their current plant designs. The variety of CCS technologies under development makes the task very challenging. This paper discusses the more mature post-combustion CCS technologies, such as chemical absorption, and the associated equipment requirements in terms of layout, integration into the generating plant, and auxiliary power consumption. It also addresses supercritical coal-fired as well as combined cycle plants; evaluates plant configuration details and various operational scenarios; and discusses the issues related to balance-of-plant systems, including water treatment, availability, and redundancy criteria. The paper then presents several options for pre-combustion processes such as oxy-fuel combustion and integrated gasification combined cycle (IGCC) water-shift carbon monoxide (CO) conversion to CO2. The impacts of several processes that only partially capture carbon are also evaluated from an engineering, procurement, and construction (EPC) contractor’s perspective as plant designer and integrator. Finally, the paper presents several examples of studies in development by Bechtel in which a neutral but proactive technical approach was applied to achieve the best and most cost-effective solution.

Keywords—chemical looping, CO2 capture, gas turbine, Graz cycle, oxy-fuel combustion, steam turbine, turbomachinery

INTRODUCTION

CO2 Capture Options

In the past few years, the major thrust to lower greenhouse gases (GHG) emitted into the atmosphere has been directed toward increasing the thermal efficiency of plant equipment, in particular the turbomachinery. The other option for lowering GHG emissions is to capture the CO2, a process associated with significant thermal efficiency losses. In the power generation industry, the most common CO2 capture technologies fall under these categories:

• Post-combustion capture of CO2 from the plant exhaust flue gases using chemical absorption.

• Capture of CO2 before the combustion process. In this arrangement, the fuel is synthesis gas (syngas) containing mostly hydrogen and CO. The CO is converted to CO2 in a water-shift reactor; then, the CO2 is removed by a physical absorbent and the hydrogen (H2) is used as fuel in the gas turbine.

• CO2 capture from a number of different processes such as oxy-fuel combustion and chemical looping.

These technologies all have relatively low efficiency and high cost. Other than the post-combustion amine-based processes, all are in various stages of concept validation or small-scale demonstration. [1] The available CO2 capture options are summarized in Figure 1.

Meaning of CO2 Capture Ready

CO2 capture and sequestration (CCS) from power generation sources will eventually be required in one form or another. Today, the timing and the extent of regulations governing the process are only speculative. So far, none of the existing technologies has emerged as the dominant solution, and many new and innovative alternatives are in various stages of research, development, or testing. For plants in the planning or design stage, owners; engineering, procurement, and construction (EPC) contractors; and equipment suppliers are trying to determine which features need to be applied today for future CCS implementation.



In this context, the term “CO2 capture ready” requires careful consideration. Beyond the technical challenges, the commercial investment in the specific features aimed at future CCS must be justified. Selecting a specific CCS technology poses a significant risk because the technology could become obsolete and result in a stranded asset. At this juncture, a pragmatic approach requires evaluating all known factors in existing carbon capture technologies, considering the additional space for the carbon capture facility, and laying out the plant to incorporate and modify existing hardware at a later date. This paper addresses the impact various methods of carbon capture have on the turbomachinery and the gas and steam turbines, focusing mainly on “add-on” features that do not require substantially modifying existing equipment. Because carbon capture is an energy-intensive process, the discussion also covers the impact on plant performance of large steam extractions for CO2 capture processes and the use of electrical power for CO2 compression. Cost estimates of various technologies are not included, however, due to the high volatility of labor, material, and equipment prices.
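As a purely illustrative order-of-magnitude sketch (the specific compression energy used below is an assumed, typical value, not a figure from this paper), the auxiliary load for CO2 compression can be estimated as follows.

# Illustrative sketch: rough auxiliary electrical load for CO2 compression.
# The default specific energy of 100 kWh per tonne of CO2 is an assumed,
# typical-order value; actual values depend on delivery pressure,
# intercooling arrangement, and machine selection.
def co2_compression_load_mw(co2_captured_tonnes_per_hr,
                            specific_energy_kwh_per_tonne=100.0):
    return co2_captured_tonnes_per_hr * specific_energy_kwh_per_tonne / 1000.0

# Example: capturing 400 t/h of CO2 would require roughly 40 MW of
# compression power at the assumed specific energy.
print(co2_compression_load_mw(400))  # prints 40.0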

BRIEF REVIEW OF CO2 CAPTURE TECHNOLOGIES

Post-Combustion

The post-combustion capture of CO2 from flue gases can be done by various methods: distillation, membranes, adsorption, and physical and chemical absorption. Absorption in chemical solvents such as amines is a proven technology performed consistently and reliably in many applications. It is used in natural gas sweetening and hydrogen production. The reaction between CO2 and amines currently offers the most cost-effective solution for directly obtaining high-purity CO2. [2]

ABBREVIATIONS, ACRONYMS, AND TERMS

AC amine cooler

AGR acid gas removal

AQCS air quality control systems

ASU air separation unit

AZEP advanced zero emission plant

B&W The Babcock & Wilcox Company

CAR ceramic auto-thermal recovery

CC combined cycle

CCS carbon capture and sequestration

CEDF Clean Environmental Development Facility

CFB circulating fluidized bed

CFD computational fluid dynamics

CLC chemical looping combustion

DOE US Department of Energy

ENCAP enhanced capture CO2 program

EPC engineering, procurement, and construction

Fg the force exerted by gravity

FG flue gas

FGD FG desulfurization

FGR FG recirculation

GHG greenhouse gases

GJ gigajoule, an SI unit of energy equal to 109 joules

GTH2 gas turbine burning hydrogen

GTsyn gas turbine burning syngas

HHV higher heating value

HP high pressure

HPT HP turbine

HRSG heat recovery steam generator

HTT high-temperature turbine

HX heat exchanger

ICFB internally CFB

ID induced draft

IP intermediate pressure

IGCC integrated gasification CC

ITM ion transport membrane

LP low pressure

LPST LP steam turbine

MEA monoethanolamine

MPT measuring point for mass, pressure, and temperature

MWe megawatt electrical

OTM oxygen transport membrane

PC pulverized coal

ppm parts per million

SCOC-CC semi-closed oxy-fuel combustion CC

STG steam turbine generator

syngas synthesis gas

USC ultra-supercritical


In the post-combustion capture process, the flue gases from the power plant are cooled and treated to reduce particulates and sulfur oxide (SOx) and nitrogen oxide (NOx). Then, boosted by a fan to overcome pressure drops in the system, the flue gases pass through an absorber. A lean amine solution, typically monoethanolamine (MEA), counter-currently interacts with the flue gases to absorb the CO2. The clean flue gases continue to the stack. The CO2-rich amine solution is then pumped into a stripper (regenerator) to separate the amine from the CO2. The energy to desorb the CO2 from the solution is provided by steam. The CO2-rich solution at the top of the stripper is condensed, and the CO2 is sent for further drying and compression.
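For reference, and as an illustrative addition rather than an equation reproduced from this paper, the governing absorption step for a primary amine such as MEA is the standard carbamate reaction:

\[ \mathrm{CO_2 + 2\,RNH_2 \;\rightleftharpoons\; RNHCOO^- + RNH_3^+} \]

where R is HOCH2CH2 for MEA. The reaction is reversed in the stripper by heat supplied from steam, which is the regeneration energy demand noted as a limitation in Table 1.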

A schematic is given in Figure 2, and Table 1 summarizes the advantages and limitations of this process. Several commercial solvents are available. Fluor’s Econamine FG(SM), using a 30% aqueous solution of MEA solvent, is the most widely deployed process, with more than 20 plants located in the United States, China, and India. Yet none of the plants capture CO2 from coal-derived flue gas. KS-1 solvent produced by Mitsubishi Heavy Industries, Ltd., is in operation in four commercial-scale units capturing between 200 and 450 metric tons of CO2 per day. The main development effort aims at lower heat of reaction, lower sensible heat duty, and lower heat of vaporization. Bechtel, in cooperation with HTC Purenergy, conducted an engineering study for a 420 MW gross power combined-cycle CCS system in Karsto, Norway, applying a proprietary solvent. See [3] for details on the solvent properties and regeneration process.

Post-Combustion Chilled Ammonia

Ammonia is the simplest form of amine.

[Figure 1. Available CO2 Capture Options: diagram showing CO2 capture branching into post-combustion (amine-based solvent, chilled ammonia), pre-combustion (water shift with Selexol), oxy-fuel combustion (ASU with CO2 + H2O condenser), and chemical looping (carbonate looping, NO-based looping), each associated with plant types such as PC, USC PC, retrofit PC, combined cycle with CO2 + H2O turbine, semi-open CC, Graz cycle, USC ICFB, pressurized CC, air-blown/oxygen-blown/hybrid IGCC, and H2 turbine]

Table 1. Amine Scrubbing Advantages and Limitations

Advantages:
• Proven technology
• Known chemical process
• Ability to predict performance and solvent consumption

Limitations:
• High energy consumption for solvent regeneration
• High rate of process equipment corrosion
• High solvent losses due to fast evaporation
• High degradation in presence of oxygen


Like other amines, ammonia can absorb CO2 at atmospheric pressure, but at a slower rate than MEA. The chilled ammonia system uses a CO2 absorber similar to sulfur dioxide (SO2) absorbers and is designed to operate with slurry.

The process requires the flue gas to be cooled before entering the cleanup system. The cooled flue gas flows upward, counter-current to a slurry containing a mix of dissolved and suspended ammonium carbonate (AC) and ammonium bicarbonate (ABC). More than 90% of the CO2 from the flue gas is captured in the absorber. The CO2-rich spent ammonia is regenerated under pressure to reduce the CO2 liquefaction compression energy requirement. The remaining low concentration of ammonia in the clean flue gas is captured by cold-water wash and returned to the absorber. The clean flue gas, which now contains mainly nitrogen, excess oxygen, and a low concentration of CO2, flows to the stack.
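For reference, and as an illustrative addition rather than an equation reproduced from this paper, the absorption step can be represented by the standard ammonium carbonate/bicarbonate reaction:

\[ \mathrm{(NH_4)_2CO_3 + CO_2 + H_2O \;\rightleftharpoons\; 2\,NH_4HCO_3} \]

Absorption drives the slurry toward ammonium bicarbonate (ABC); heating in the pressurized regenerator reverses the reaction, releasing CO2 and restoring the lean ammonium carbonate (AC) solution described above.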

A schematic of this process is given in Figure 3, and Table 2 summarizes the advantages and limitations of this process. Alstom owns the process rights. Several participants, including Alstom, have started tests on a sidestream pilot plant at the Pleasant Prairie Power Plant in Wisconsin. This pilot will be able to capture CO2 emissions from a slipstream of less than 1% from one of the two boilers operating at the plant.

Additionally, a non-chilled ammonia scrubbing process is being planned by Powerspan Corp. for demonstration at FirstEnergy’s Burger Power Station in Ohio.


CHEMICAL ELEMENTS

ABC   ammonium bicarbonate
AC    ammonium carbonate
Ar    argon
C     carbon
CaCO3 calcium carbonate
CaO   calcium oxide
CaS   calcium sulfide
CaSO4 calcium sulfate
CO    carbon monoxide
CO2   carbon dioxide
H2    hydrogen
H2O   water
Hg    mercury
HX    hydrogen halide
Me    metal
MeO   metal oxide
N2    nitrogen gas
NG    natural gas
NH3   ammonia
NiO   nickel oxide
Ni    nickel
NO    nitric oxide
NO2   nitrogen dioxide
NOX   nitrogen oxide
O2    oxygen
SO2   sulfur dioxide
SOX   sulfur oxide

[Figure 2. Post-Combustion Amine Process: flue gas from the power plant passes through an absorber (vent to reheat/stack); rich solvent is pumped through a cross exchanger to a regenerator (stripper) with condenser and reflux drum, sending CO2 to compression/dehydration, while lean solvent returns to the absorber via filtration and storage]


Pre-Combustion IGCC

The main advantage of IGCC pre-combustion CO2 capture is that the amount of fluid to be processed is much smaller than for post-combustion capture at a coal-fired or combined cycle plant. For IGCC, only the syngas is treated, whereas for post-combustion capture the entire exhaust flue gas flow must be processed. For oxygen-blown IGCC, the main syngas components are H2 and CO, with some CO2, steam, nitrogen gas (N2), and traces of other elements.

The raw syngas produced by the gasifier must be cleaned of contaminants, including mercury, sulfur, and fluorides. The chemical cleaning processes for acid gas removal (AGR), such as Rectisol® or Selexol™, are able to remove a certain amount of CO2. However, the actual conversion of the carbon monoxide (CO) into CO2 and H2 occurs in a water-shift process in which steam and syngas are mixed in the presence of a catalyst to convert the CO to CO2 in an exothermic reaction. The shift stage can be integrated into the process either before (sour shift) or after (sweet shift) the sulfur removal stage. A high CO2 capture level (90%) will require two stages of shift in addition to the enhanced AGR. The fuel to be burned in the gas turbine is mainly H2 with diluent.
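For reference, and as an illustrative addition rather than an equation reproduced from this paper, the standard water-gas shift reaction is:

\[ \mathrm{CO + H_2O \;\rightarrow\; CO_2 + H_2} \qquad \Delta H \approx -41\ \mathrm{kJ/mol} \]

The negative heat of reaction reflects the exothermic conversion described above.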

[Figure 3. Chilled Ammonia Schematic [4]: flue gas from the existing FGD is cooled and cleaned (stage-two cooling and chilling from about 120 °F to 35 °F), passes through the CO2 absorber and a wash stage to the existing stack; rich ABC slurry is pumped through a heat exchanger to a regenerator with reboiler, which releases CO2 while lean AC returns to the absorber]

Table 2. Chilled Ammonia Advantages and Limitations

Advantages:
• Compared to using amine, the regeneration energy is potentially lower.
• Applicable for new plants and for retrofit of existing coal-fired power plants.
• AC and ABC are stable over a wide range of temperatures.
• Solvent is oxygen and sulfur tolerant.
• The heat of the absorption reaction is lower.
• Potential to capture 90% of CO2 emissions.

Limitations:
• Ammonia volatility can be an issue.
• Absorption rate is slower than that of MEA and requires as much as three times more packing to achieve the same CO2 removal performance.
• Several absorber vessels are required, increasing capital cost and affecting plant layout.
• Large-scale process experience is limited; experience should increase knowledge of feasibility.


The amount of shift will determine the concentration of H2, which could vary from 35% to 90%. A process schematic is shown in Figure 4.

Oxy-Fuel Combustion

In an oxy-fuel combustion-based power plant, oxygen rather than air is used to combust the fuel, resulting in a highly pure CO2 exhaust that can be captured at relatively low cost and sequestered. Often, the oxygen is mixed with flue gas to regulate burning, as well as to achieve a high CO2 level in the flue gas. For the Rankine steam cycle, the volume of flue gas leaving the boiler is considerably smaller than the conventional air-fired volume: because the N2 in the combustion air no longer passes into the flue gas, the amount of flue gas is approximately 80% less for combustion with oxygen than with air. The flue gas consists primarily of CO2. The schematic is given in Figure 5, and Table 3 summarizes the advantages and limitations of this process.
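As a simple worked illustration (using methane fuel and an assumed air composition of about 3.76 mol N2 per mol O2; this example is not from the paper), the stoichiometry shows why removing N2 shrinks the flue gas stream:

\[ \mathrm{CH_4 + 2\,O_2 \;\rightarrow\; CO_2 + 2\,H_2O} \]
\[ \mathrm{CH_4 + 2\,O_2 + 7.52\,N_2 \;\rightarrow\; CO_2 + 2\,H_2O + 7.52\,N_2} \]

The first equation is the oxygen-fired case and the second the air-fired case. Per mole of fuel, the flue gas drops from roughly 10.5 moles to 3 moles, and once the water is condensed the oxygen-fired flue gas is essentially CO2, broadly consistent with the approximately 80% reduction cited above.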

Theoretical and experimental research on this technology has intensified in the past 2 years.


[Figure 5. Oxy-Fuel Process Block Flow Diagram: an air separation unit supplies O2 (with N2 vented) to the coal-fired boiler; flue gas passes through cleaning (particle and SOx removal), with most of the CO2 recycled to the boiler and an extraction stream sent to CO2 compression; combustion heat drives the steam power cycle (HP, IP, and LP turbines), and noncondensable gases are rejected]

[Figure 4. IGCC with Two-Stage Acid Gas Removal: coal handling and an ASU feed the gasification module; raw syngas passes through a particle scrubber, shift reactor, Hg removal, and two-stage acid gas removal (sulfur removal and CO2 removal with CO2 compression); the cleaned H2-rich fuel, with N2 diluent, is burned in the gas turbine, whose exhaust feeds the HRSG and STG]


A 30 MWe oxy-fuel plant is under construction near Schwarze Pumpe, Germany, to demonstrate the technology using an Alstom-supplied boiler. In the United States, the Babcock and Wilcox Company (B&W) and Air Liquide have converted B&W’s 30 MW Clean Environment Development Facility (CEDF) in Alliance, Ohio, to an oxy-combustion system. IHI Corporation has a 1.2 MW pilot-scale testing facility in Japan.

The oxy-fuel combustion process uses an air separation unit (ASU), a device requiring high electricity consumption. To reduce the auxiliary load, new, less energy-intensive oxygen separation technologies are in development, including ion transport membrane (ITM), oxygen transport membrane (OTM), and BOC’s ceramic auto-thermal recovery (CAR) oxygen production process.

Oxy-fuel combustion is also associated with promising combined cycles involving gas and steam turbines. [5] The Graz cycle and the semi-closed oxy-fuel combustion combined cycle are two examples now under theoretical investigation. The oxy-fuel combustion concept is applicable to a variety of fuels, including methane, syngas, and biomass gasification. In the Graz cycle, the working fluid following the combustion process is a mixture of steam (approximately 75%) and CO2 (approximately 24%), with small amounts of N2 and O2. The turbomachinery needed due to the unusual working fluid is discussed below. The expected cycle efficiency is in the range of 50%. A 50 MW oxy-fuel demonstration plant using methane fuel is being planned in Norway. In the United States, the Department of Energy (DOE), in cooperation with Siemens, has instituted a program to develop a high-temperature turbine for these types of plants. A pilot demonstration plant is expected to be operational in about 2015.

Chemical Looping

Chemical looping combustion (CLC) employs a circulating solid such as a metal oxide or calcium oxide to carry oxygen from the combustion air to the fuel. Direct contact between the combustion air and fuel is avoided, and the flue gas is free of N2. The metal oxide is reduced in a fuel reactor by a mixture of natural gas and steam and oxidized in an air reactor. The oxidation reaction is exothermic, and the resulting heat production in the air reactor is used to maintain the oxygen carrier particles at the high temperature necessary for the typically endothermic reaction in the fuel reactor. The fuel could be syngas from coal gasification, natural gas, or refinery gas.

The net heat evolved is the same as for normal combustion with direct contact. Because the fuel has no direct contact with air, the flue gas is free of N2. The N2-free flue gas contains moisture, SOx, CO2, and particulates. For coal fuel, after the particulates are removed by a baghouse and moisture is removed by cooling, the remaining gas is essentially high-purity CO2. The high-purity CO2 is compressed and cooled in stages to yield a liquid ready for sequestration. A schematic representation is given in Figure 6, and Table 4 summarizes the advantages and limitations of this process.

Figure 6. Generic CLC Process [6]

Table 4. Chemical Looping Advantages and Limitations

Advantages:
• No energy penalty for oxygen production and CO2 separation.
• No need for an energy-intensive, high-cost ASU (assuming that coal gasification is not needed).
• Potential exists for greater than 98% CO2 capture.
• Potential exists to use a variety of fuels (natural gas, coal, residual oil, etc.).
• Possible to retrofit a conventional air-blown CFB to a CLC CFB with limited modifications.
• Alstom bench-scale tests suggest potential to meet ultra-clean low emissions targets, including CO2 capture, at about the same cost and efficiency as today's power plants.

Limitations:
• Most work performed to date has used methane as fuel; only limited studies with oxygen carriers used to react with coal or gasified coal.
• Carbon deposition (formation of solid carbon) can occur.
• For combustion of solid fuels, a separate energy-intensive (due to use of ASU) gasification process is required.
• Metal oxide must have high affinity for reaction with fuel but must not decompose at high temperatures or over a large number of cycles.
• CO can also be produced; some mechanism to control it is possible based on circulation rate.
• Process has not been demonstrated on a large scale.

Intensive academic research programs now under way concentrate mainly on finding appropriate metal oxides for different fuels. Alstom has completed engineering studies and bench-scale tests on the chemical looping process, and some pilot-scale process testing is proceeding. Chalmers' 10 kW CLC testing concluded the following for nickel (Ni)-based particles:

• No CO2 is released from the air reactor.
• No significant carbon formation yields 100% CO2 capture.
• Sand tests show low leakage from the air reactor to the fuel reactor: almost pure CO2 is possible.
• 99.5% conversion of fuel occurs at 800 ºC.

Because this method is still undergoing research, further details will not be discussed here.

CASES

This section discusses the impact of each CO2 capture technology on the cycle, and particularly on the turbomachinery.

Coal-Fired Supercritical With Post-Combustion CCS

Power Plant Site Location, Available Space, and Other Requirements

Before discussing in detail how the impact on the turbomachinery is evaluated, it is worthwhile to briefly mention the other implications of equipping a plant with a CCS system. Primarily, the suitability of the CO2 sequestration site needs to be considered. If the plant is situated far from an adequate geological storage location or an enhanced oil recovery site, the cost of constructing a pipeline and the additional loads for pumping CO2 must be accounted for. Space requirements and plant layout should also be considered. By itself, the CO2 capture hardware has a large footprint. For amine scrubbing, the CCS plant components (absorber, stripper, compression stations, and various cooling and storage tanks) occupy a significant area. The plant layout also has to accommodate large ducts for the flue gases, which need to be routed from the exit of the air quality control system (AQCS) block, downstream of the induced-draft (ID) fan, to the amine scrubber without interfering with roads, buildings, and other infrastructure.

As discussed in greater detail below, large low-pressure (LP) pipes are needed to transfer the steam from the steam turbine generator (STG) to the amine scrubber, which requires pipe racks with adequate support. The entire balance of plant equipment must also be augmented to cater to the CCS requirements. The electrical system design for transformers, transmission cables, motor control centers, and other elements needs to be enhanced. Particularly when existing plants are being retrofitted with CCS capabilities, the ripple effect of adding a CO2 plant requires detailed and careful review. One other consideration applies to the plant heat sink, which should be sized to allow the condenser and cooling tower to accommodate the additional steam when the post-combustion capture system is not in operation.

Steam Turbine Generator

A significant amount of steam is required for solvent regeneration. The steam consumption for a representative amine-based post-combustion capture system is shown in Table 5. Typical steam conditions are 3 bar and 270 ºC. The amount of steam for 90% CO2 recovery from the flue gas may be as high as 1.6 kg of steam per kilogram of CO2, which is more than 50% of the LP steam turbine flow. Therefore, it is imperative in all plant operational scenarios to consider the possibility that the CO2 capture plant might not be able to receive part or all of the extraction steam. This consideration is especially important if the steam turbine were permanently configured to operate with a reduced LP steam flow. Because venting such large quantities of steam is not an option in this case, any design must offer rapid configuration changes that allow the LP modules to operate under zero extraction conditions. The available options to extract the steam from the system are throttle LP, floating-pressure LP, LP spool with clutched LP turbine, and backpressure turbine.

• Throttle LP—This configuration keeps the crossover pressure constant despite the large amount of steam extracted. The arrangement requires a throttling valve downstream of the solvent steam extraction point. Despite the significant throttling losses that occur, this setup offers flexibility to extract any amount of steam needed (i.e., for less than 90% CO2 capture scenarios) and the capability to restore full power generation rapidly when the CO2 capture system is not in operation. The throttling valves are commercially available for current LP crossover pipe sizes. A schematic is provided in Figure 7.

Figure 7. Throttling LP Turbine

• Floating-pressure LP—In this arrangement, the turbine intermediate-pressure (IP) module must be designed to operate with a variable backpressure. When the CO2 capture plant operates, the crossover pressure is lower; the loading on the last stages of the LP module increases and the exit losses are higher. For retrofits, the IP last stages can be replaced to match the desired operating conditions at both high and low steam flows, depending on the CO2 capture steam demand. Additional valves in the extraction line and downstream of the extraction point in the crossover pipe facilitate operational control for the different steam flows required by variable CO2 capture rates (e.g., 30%, 50%, 90%) from the flue gas stream. A schematic is provided in Figure 8.

Figure 8. Floating-Pressure LP Turbine

• LP spool with clutched LP turbine—In this scheme, one of the LP modules is connected via a clutch to the generator in an arrangement similar to that used in a single shaft combined cycle in which a clutch is situated between the generator and the steam turbine. In this case, when the CO2 capture plant is in operation, only one LP module is operating and the other is disconnected. The inlet flow and pressure of the operating LP module have to be designed to accommodate the steam conditions at the anticipated CO2 capture levels.

This option is costly, requiring additional structural pedestals and a longer turbine hall, and offers little flexibility for various CO2 capture rates. However, restoring the full capacity of the module when the CO2 plant is not functional is not a complex activity. A schematic is provided in Figure 9. The LP module that remains in operation performs at the design conditions, thus achieving a higher efficiency than with the other options. A variant of this arrangement could even operate without a clutch. In this setup, the second LP must rotate; thus, a minimum amount of steam flow (between 5% and 10% of the LP flow) must pass through this module to prevent overheating or mechanical vibrations under these minimal flow conditions. Extracting this additional steam without producing real power is an added loss for the system. A more permanent solution for the second LP module is to replace the bladed rotor with a dummy. In this scenario, when the post-combustion capture is not operational, the steam cannot be returned to the cycle to produce power and must be either vented or condensed.

Figure 9. LP Spool with Clutched LP Turbine

• Backpressure turbine—If the steam extraction for the post-combustion capture plant is taken from the IP/LP crossover pipe, the pressure and the temperature are too high for direct use in the sorbent regeneration process. One solution to exploit the available energy is to generate electric power through a noncondensing turbine, and use the power to reduce the auxiliary loads. A schematic is provided in Figure 10.

Table 5 offers an example of the impact of a CO2 capture plant on the overall performance of a nominal 800 MW net power plant without post-combustion capture. It should be emphasized that each project must conduct its own evaluation based on the specific site conditions, the selected capture technology, and the type of sorbent used. Because each steam turbine vendor has a different cycle design with dissimilar IP module exhaust pressures, the output power of the noncondensing turbine varies accordingly.

Table 5. Performance Comparison for Plants With and Without CO2 CCS Capabilities

Performance Element | Without CCS | With CCS | Delta, %
Gross Power, MW | 865 | 702 | 19
Net Power, MW | 800 | 542 | 32.3
Steam Turbine Gross Power, MW | 865 | 662 | 23.5
Auxiliary Loads, MW | 65 | 160 | 145
Noncondensing Turbine, MW | N/A | 40 | N/A
Crossover Steam Extraction, % of IP Exhaust Flow | 0 | 62 | N/A

In the given example, it can be seen that the steam extraction for the post-combustion capture plant reduces the steam turbine output by almost 23%. Because of the post-combustion capture plant, which in this example also contains the CO2 compression section, the auxiliary loads increase by almost 95 MW. The noncondensing turbine produces 40 MW of power; without this turbine, the auxiliary loads would be even higher.
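As a rough cross-check of the Table 5 figures, the short Python sketch below reproduces the net-power and auxiliary-load deltas. It is purely illustrative; the dictionary layout and variable names are not part of the referenced study.

# Illustrative cross-check of the Table 5 deltas for the nominal 800 MW (net)
# plant retrofitted with amine-based post-combustion capture.
cases = {
    "without_ccs": {"gross_mw": 865, "aux_mw": 65},
    "with_ccs":    {"gross_mw": 702, "aux_mw": 160},
}

def net_power(case):
    """Net power = gross generation minus auxiliary (parasitic) loads."""
    return case["gross_mw"] - case["aux_mw"]

base, ccs = cases["without_ccs"], cases["with_ccs"]
net_base, net_ccs = net_power(base), net_power(ccs)

print(f"Net power without CCS: {net_base} MW")   # 800 MW
print(f"Net power with CCS:    {net_ccs} MW")    # 542 MW
print(f"Net power penalty:     {100 * (net_base - net_ccs) / net_base:.1f}%")   # ~32.3%
print(f"Auxiliary load growth: {100 * (ccs['aux_mw'] - base['aux_mw']) / base['aux_mw']:.0f}%")   # ~146%; Table 5 rounds to 145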

Summary of Options Impact

Figure 11 depicts the relative efficiency loss for each option. [7] This comparison of plant output does not account for the auxiliary power loss due to CO2 compression loads. As expected, the setup with an additional noncondensing turbine offers the lowest power loss (7%), followed by the clutch arrangement, which has the least steam throttling and lowest LP turbine losses. However, both options require additional hardware or significant modifications to plant arrangement. For a retrofit, these alternatives require substantial pre-investment and site preparation.


Figure 10. Additional Noncondensing Turbine


Oxy-Fuel Combustion with Post-Combustion Capture System

Steam Cycle

• Location, additional equipment, and space requirements—As with the other CCS options, proximity to a CO2 sequestration site should be considered for the oxy-fuel option, and adequate feasibility studies must be conducted. The distance from a geological storage location or an enhanced oil recovery site influences not only the plant's economics, but also its configuration and auxiliary power requirements. Compared to a conventional coal-fired plant, an oxy-fuel plant requires a number of new components besides the CO2 capture hardware. Examples include an ASU, additional flue gas treatment modules, several heat exchangers to extract low-grade heat, and fans and ducts for flue gas recirculation (FGR).

The optimal FGR ratio is still a topic of investigation. A commonly used value is 0.7, where zero corresponds to pure oxygen combustion with no recirculation. [8] Adequate space must be allocated not only for the equipment, but also for the interconnecting pipes, electrical cables, and controls. While SO2 and other elements can be removed in the CO2 capture plant, the quality of the recirculated flue gas must be controlled in supplementary or modified sulfur-removal devices to avoid long-term corrosion of the boiler. Another aspect to be considered is the increased cooling duty of the plant required by the ASU, flue gas condenser, and CO2 compression unit. When the heat sink is a cooling tower, the plant layout needs to account for additional cells capable of coping with a cooling load larger than that of a conventional plant without CCS.

• Steam turbine generator—In principle, the steam turbine configuration for this CCS option is the same as for a conventional steam plant without carbon capture. However, the cycle energy balance indicates that several sources of low-grade heat, such as the ASU and the CO2 compressor, could be recovered, allowing a substantial reduction in bleed flows for condensate and feedwater heating. As a result, the LP flow through the turbine increases. If the LP module last-stage blade system and the generator are sized properly to handle the additional flow, the steam turbine gross power output increases; according to [7], the gain could be an estimated 4.5% or more. This arrangement also yields an efficiency improvement of between 0.3 and 1.3 percentage points. Some LP steam extractions are required for oxygen preheating and the ASU plant dryers.


Figure 11. Power Loss for Various LP Turbine Arrangements


Combined Cycle

• Configuration—Two combined cycle examples are presented (see [5]). The first is the semi-closed oxy-fuel combustion combined cycle (SCOC-CC), which consists of a high-temperature Brayton cycle with a compressor for CO2, a high-temperature turbine (HTT), a combustion chamber, a heat recovery steam generator (HRSG), and a conventional bottoming cycle. The combustion process occurs at 40 bar with a nearly stoichiometric mass ratio of fuel and oxygen. The flue gas composition is mainly CO2 (92.5%), with steam (7.1%) and some residual N2 and O2. The expansion from almost 1,400 ºC takes place in the HTT, which is cooled with recycled CO2 pressurized by the compressor. [5] The second example, the Graz cycle, has a more complicated layout with two loops. The first loop is a high-temperature cycle comprising two compressors with intercooling, an HTT, an HRSG, and a high-pressure (HP) turbine. The LP cycle has a low-temperature turbine, two compressors for CO2, and a condenser. The two-compressor configuration allows water (H2O) segregation after the first compressor, thus reducing the power demand for the second compressor. See Figure 12 for the Graz cycle schematic.

Figure 12. Principal Flow Schematic of Modified Graz Cycle Power with Condensation/Evaporation

• Development considerations—While the expected thermal efficiency of both the Graz and SCOC-CC cycles is close to 50%, including the CO2 compression auxiliary load, the unusual working fluids require a massive development effort in design, testing, and validation of the high-temperature turbomachinery. Using the working fluid for cooling poses the risk of blocking the internal cooling passages with soot and ash from the combustion process. Additional critical items are the LP turbine, which works with a mixture of steam and CO2, and the condenser, where operation at very low pressures may lead to severe metal corrosion. In the power generation industry, where reliability and simplicity of operation are paramount, these promising cycles are unlikely to see practical implementation within the next decade.

Natural Gas Combined Cycle with Post-Combustion CCS

Configuration

The use of chemical solvents for post-combustion CO2 capture is a proven technology. The real challenge is to identify the most efficient conversion technology in terms of steam consumption for solvent regeneration and electricity use for CO2 compression. To exemplify the impact of CCS on turbomachinery, Table 6 provides information about a 1 x 1 combined cycle consisting of one F-class gas turbine and one steam turbine. The amount of CO2 capture was arbitrarily set at 85%. There are many designer formulas for solvents, consisting of several primary, secondary, and tertiary amines (including MEA) and other reactive ingredients. Therefore, providing an absolute ratio of steam or electrical consumption per ton of CO2 captured is not representative. Figures 13 and 14 show how changing the target CO2 percentage affects the steam and electricity required for capture and compression; the basis for evaluation is 95% CO2 removal from the flue gas.

Table 6. Typical Plant CO2 Capture Parameters for a Combined Cycle Plant

Parameter | Value
Plant Output, nominal | 450 MW
Exhaust Flue Gas Flow | 65,000 tons/day
CO2 Capture at 85% Rate | 3,200 tons/day
Reboiler Steam Consumption | 4,500 tons/day
Electricity Consumption | 200 kW/ton CO2

Figure 13. Steam Consumption at Various CO2 Capture Rates (percentage change, kg steam/kg CO2, versus CO2 recovery percentage)

Figure 14. Electricity Consumption at Various CO2 Capture Rates (percentage change, kW/ton CO2, versus CO2 recovery percentage)

It can be seen that reducing the CO2 capture rate from 95% to 80% reduces steam consumption by 20% and electricity consumption by 5%.
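For a sense of the absolute quantities behind these percentages, the sketch below derives two simple ratios from the Table 6 values for the nominal 450 MW combined cycle. It is illustrative arithmetic only; the variable names are not from the reference.

# Illustrative ratios derived from the Table 6 values (85% capture case).
co2_captured_tpd = 3_200      # tons/day of CO2 captured
reboiler_steam_tpd = 4_500    # tons/day of LP steam sent to the reboiler
capture_rate = 0.85

# Specific reboiler steam demand, tons of steam per ton of CO2 captured
# (~1.41 t/t, comparable to the 1.6 kg/kg upper bound quoted for the coal case)
specific_steam = reboiler_steam_tpd / co2_captured_tpd

# Total CO2 in the flue gas implied by an 85% capture rate (~3,765 tons/day)
total_co2_tpd = co2_captured_tpd / capture_rate

print(f"Specific steam demand: {specific_steam:.2f} t steam per t CO2")
print(f"Implied CO2 in flue gas: {total_co2_tpd:,.0f} tons/day")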

Impact on Gas Turbine

In a gas turbine, the nature of the premix combustion system decreases the concentration of CO2 in the exhaust flue gas to half of that in a coal-fired boiler. Recirculating part of the exhaust gases achieves a higher CO2 concentration. Among the thermal NOx reduction techniques developed in recent decades, internal FGR is being used as a very effective method to lower peak flame temperatures. However, the FGR rate can only be increased to a certain value for stable operation. Particularly interesting to note for this mode of operation is the fact that the NOx emission levels and combustion system acoustics are substantially improved.

However, the process could affect combustion stability and heat transfer properties. Theoretically, the amount of recirculated flow could be close to 40% of the exhaust gases. For CCS, FGR takes place at the compressor inlet. It should be noted that the amount of cooling necessary to bring the flue gases from exhaust conditions (at least 40 ºC) to ambient temperature adds a substantial parasitic load. Due to the high sensitivity of gas turbine output to the compressor inlet temperature, a mixed stream of external air and recirculated gas above ambient temperature would certainly reduce the power generated. Large gas turbine manufacturers are conducting extensive studies not only on the operational impact of FGR on various components, but also on the technical and economic optimization of the amount of recirculated flue gas.
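To illustrate why an above-ambient recirculation temperature matters, the sketch below mixes a recirculated stream with ambient air and applies a generic output sensitivity to compressor inlet temperature. All numbers (recirculation fraction, temperatures, and the assumed sensitivity of roughly 0.5%–0.7% of output per ºC) are illustrative assumptions, not values from the studies cited.

# Illustrative only: effect of recirculating warm flue gas on compressor inlet
# temperature and, via an assumed sensitivity, on gas turbine output.
recirc_fraction = 0.40      # assumed FGR fraction (upper end quoted in the text)
t_ambient_c = 15.0          # assumed ambient temperature, deg C
t_recirc_c = 40.0           # assumed temperature of the cooled, recirculated gas
sensitivity_per_c = 0.006   # assumed ~0.6% output loss per deg C of inlet warming

# Simple mass-weighted mixing (ignoring property differences between air and flue gas)
t_mixed_c = recirc_fraction * t_recirc_c + (1 - recirc_fraction) * t_ambient_c
delta_t = t_mixed_c - t_ambient_c

print(f"Mixed compressor inlet temperature: {t_mixed_c:.1f} C (+{delta_t:.1f} C)")
print(f"Indicative output reduction: {100 * sensitivity_per_c * delta_t:.1f}%")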

IGCC with CCS

The main impact on IGCC with CCS is the use of H2-rich fuel in the gas turbine. At 90% carbon capture, the expected hydrogen concentration in the fuel may vary from 30% to 78%. Hydrogen is an excellent fuel with a high heating value (52,000 Btu/lb); for comparison, natural gas has only 21,000 Btu/lb. The flame temperature is higher (producing more NOx), and flame propagation is faster, requiring modified combustor cooling schemes. The current proposal for an H2 burner is based on the diffusion flame, which is more stable and less prone to combustion oscillations than the premix lean combustion flame. At this time, the state-of-the-art process for premix combustion of fuels with high H2 concentrations (greater than 50%) is only in the experimental phase. The main unresolved issues continue to be premature ignition (flashback) and combustion noise.


In diffusion combustion, the H2 is mixed with N2 before entering the system. The N2 dilution is used to meet the NOx emission limit (15 ppm). Because the firing temperature of a gas turbine operating on H2 is approximately 28 ºC lower than that of a turbine operating with conventional IGCC syngas, the additional mass of the inert gas N2 expanding in the turbine section helps compensate for the power loss associated with the lower firing temperature.

As discussed earlier, an IGCC with no CCS uses syngas, a fuel with much lower H2 content and a significant amount of CO. The syngas combustion process has been known for many years and is used extensively in conventional IGCC applications using E- and F-class turbines. High H2 operating experience indicates that in some process gas applications, the concentration of H2 could be 60%–70% and even reach 90%. For example, GE reports [9] that an MS6000B gas turbine is burning refinery gas with a 70% H2 concentration at the San Roque site in Cadiz, Spain, and that at the Daesan, Korea, site, the H2 percentage can be as high as 95% for this model. The F-class operational experience indicates combustion with lower levels of H2 concentration, about 44%. However, flame dynamics are such that combustion can occur safely only in the diffusion mode.

Despite the number of dilution additives (steam, N2), current technology has high NOx emissions. In diffusion mode, combustion of H2-rich fuel is limited to about 50% H2 [10] to meet emissions targets and control flame stability. If N2 is the dilution agent, then the percentage of H2 cannot exceed 35%. The commonly used diluents are pure steam, pure N2, and different mixtures of both. In Europe, many initiatives and research activities, such as the Enhanced Capture of CO2 (ENCAP) program, aim at developing premix burners capable of burning high percentages of H2.

The development of the burners is only the first step of the integration process. The impact on the combustion system, either annular or can, must also be evaluated. Major equipment suppliers, including GE, Siemens, and MHI, are conducting extensive combustion tests to demonstrate lower than 15 ppm NOx. As part of the first phase of DOE’s Advanced Hydrogen Turbine Development Program, Siemens investigated several promising premix combustion configurations for H2 concentrations up to 60%.

There are also other differences between a gas turbine burning conventional syngas (GTsyn) and one burning fuel with a high concentration of H2 (GTH2). In order to maintain the compressor pressure ratio, the first-stage turbine nozzle area must be sized properly to account for the larger flow of the GTsyn relative to the GTH2. An increased percentage by volume of H2 affects the life of the turbine hot sections as a result of the higher moisture content of the combustion products. [11] The GTH2 exhaust gas moisture content (12.4%) is higher than that of the GTsyn (8.4%); thus, the heat transfer properties and behavior of the hot gas path are different. More work is needed to redefine the computational fluid dynamics (CFD) boundary conditions and to conduct durability and life expectancy analyses.

Ultimately, the metal temperature increases due to the higher moisture content accompanying a higher H2 content, resulting in a significant reduction of hot path component life. The practical solution recommended by gas turbine suppliers is to reduce the firing temperature, which reduces power output and efficiency. To protect the hot gas path components, the initial GTH2 firing temperature is approximately 28 ºC less than that of a GTsyn. A relationship for the reduction of firing temperature as a function of H2 percentage [11] is:

Tf = 13.312 × (volume % H2)^0.69      (1)
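As a quick illustration, the sketch below simply evaluates Equation (1) over the range of fuel H2 contents quoted above for IGCC with 90% capture. The function name is illustrative, and the result carries whatever temperature units reference [11] uses for the correlation.

# Illustrative evaluation of Equation (1); units follow the correlation in [11].
def firing_temp_reduction(h2_vol_percent: float) -> float:
    """Equation (1): Tf = 13.312 * (volume % H2) ** 0.69."""
    return 13.312 * h2_vol_percent ** 0.69

for h2 in (30, 45, 60, 78):   # span of H2 contents quoted for IGCC with CCS
    print(f"{h2:3d} vol% H2 -> firing temperature reduction of {firing_temp_reduction(h2):6.1f}")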

In order to increase the firing temperature and reduce the NOx emissions, several options are under investigation: new blade cooling concepts, advanced materials, high-temperature thermal barrier coatings, and hybrid component design. The hybrid component is superior to a monolithic component because it allows expensive materials to be incorporated only in the high-temperature areas of the airfoils. Only intensive and continuous development efforts will make it possible to meet the ambitious goals set by DOE for IGCC plants with CCS capability: 2 ppm NOx emissions, 3%–5% improved cycle efficiency, and the capability to burn high-H2 fuel.

Chemical Looping Combustion

Combined Cycle with Gaseous Fuel

• Cycle configuration—In CLC, the combustion occurs without any direct contact between the air and the fuel. As previously described, the CCS energy penalty is lower for CLC than for either pre- or post-combustion methods, because the CO2 is not diluted with other combustion products. [12] A combined cycle with chemical looping is depicted in Figure 15. The principles of chemical looping were presented earlier in the paper (see Figure 6). This section provides details for a specific combined cycle application.

Pressurized air from the compressor enters the air reactor (oxidizer), where it reacts with the metal to create a metal oxide. This oxidation process is exothermic. The oxygen-depleted air exiting the oxidizer at high temperature and pressure is available for further expansion in a turbine to generate power. The air exiting the turbine passes through an HRSG, where its remaining energy is extracted for use in a conventional steam bottoming cycle. In the other system element, the fuel reactor, the fuel and the metal oxide react, stripping the oxygen from the metal and creating a CO2- and H2O-rich stream. This stream can be further expanded in a turbine and then condensed to separate the H2O and the CO2. For this application, the most promising substance is NiO. The metal oxide requires an inert stabilizer to improve chemical reactivity and ensure mechanical stability.
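As an illustration of the loop chemistry (an example only, assuming natural gas fuel and the NiO carrier mentioned above), the two reactors exchange oxygen roughly as follows:

Fuel reactor (reduction, typically endothermic for NiO): CH4 + 4NiO → CO2 + 2H2O + 4Ni
Air reactor (oxidation, exothermic): 2Ni + O2 → 2NiO

The nickel circulates between the reactors, so the overall heat release matches that of burning the methane directly, while the fuel-reactor products remain undiluted by N2.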

• Turbomachinery—The gas turbine industry has sufficient technical knowledge to convert existing products for this application. The air compressor [12] has a moderate pressure ratio (18) and an air flow close to 800 kg/sec. The turbine inlet temperature is not above current gas turbine values (1,140 ºC), and compressor bleed air can be used for cooling duty. The turbine exhaust temperature (500 ºC) is typical for HRSG applications.

The steam bottoming cycle is no different from the one used in a conventional combined cycle. According to [12], a typical cycle efficiency, including CO2 compression plant load, is close to 50%.

Better cycle efficiency can be achieved if the hot stream of CO2 and steam exiting the fuel reactor passes through an additional turbine before being condensed. While cycle efficiency improves by 2%, this new turbine adds complexity and requires a full development program. An additional challenge facing the cycle is imposed by the requirement to maintain equal pressure at the exits of the two reactors under all operating conditions, to avoid gas leakages.

Steam Cycle with Solid Fuel

Intensive studies are being conducted to identify appropriate substances for the chemical looping process with solid fuels. One option is to use calcium sulfide and sulfate (CaS and CaSO4) reactions in one reaction loop between the oxidizer and the reducer. [13] The fuel reactor (reducer) uses coal, steam, and calcium carbonate (CaCO3) as input. The heat of the exothermic reaction in the oxidizer is transferred to steam and from there to a conventional steam cycle.

Another option (see [14]) uses calcium compounds to carry O2 and heat between two reaction loops. The first loop uses CaS/CaSO4 reactions to gasify the coal. In a water-shift reactor, CO is then combined with steam to generate CO2 and H2. The second loop uses calcium oxide (CaO) and CaCO3. Thermal energy is transferred from one loop to the other using bauxite as the heat transfer medium. Ultimately, the products of the process are concentrated streams of CO2 for sequestration and H2 for consumption as fuel. An experimental pilot unit of this hybrid gasification and chemical looping process is currently under development by the process owner, Alstom. A successful full-scale demonstration of the process is expected in the 2016–2020 time frame.


Figure 15. CLC in Combined Cycle Configuration


Turbomachinery for the Cycle

In the first option, the steam generated by the process is used in a conventional steam turbine. However, the steam temperature and pressure values could exceed the current level of ultra-supercritical steam turbine conditions. An additional concern is part-load operation. The chemical looping process is not yet sufficiently developed to allow more detailed steam turbine design and operation.

In the second option, the turbomachinery is identical to that of an IGCC gas turbine using a highly concentrated stream of H2 as the operating fuel. (A detailed discussion is provided in the previous section, IGCC with CCS.)

CO2 Compression Issues

A typical CO2 processing system includes compression, dehydration, and purification/liquefaction. As described earlier, this process is one of the major contributors to auxiliary power consumption and higher costs for the power plant.

The compression process [15] includes at least two compressors, intercoolers, water separators, dehydrators, and purifiers. The amount of impurities in the CO2 stream has a major impact on the process. The presence of SO2 and H2O may decrease the amount of compression work, while the presence of N2, O2, and argon (Ar) may increase it. In the selection process, the intercooler temperature must be higher than the condensing temperature of the mixture. Additionally, CO2 compression equipment requires stainless steel construction due to the presence of water vapor and the potential for corrosion.

A discussion of turbomachinery for CCS plants would not be complete without mentioning CO2 compression technology. [16] The major effort in this area is dedicated to identifying processes capable of reducing power consumption, which represents 40% of the auxiliary loads. In some cases it represents 8%–12% of plant power output. [17]
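For a rough sense of why compression is such a large load, the sketch below estimates the specific work of multistage, intercooled CO2 compression using a simple ideal-gas stage model. The stage count, pressures, efficiency, and property values are illustrative assumptions; real CO2 behavior near its critical point, and the actual machines described in [15] and [16], differ.

# Rough, illustrative estimate of multistage intercooled CO2 compression work.
# Ideal-gas stage model with equal pressure ratios; all inputs are assumptions.
cp = 0.846          # kJ/(kg*K), approximate cp of CO2 at moderate temperature
gamma = 1.28        # approximate ratio of specific heats for CO2
eta_isen = 0.80     # assumed isentropic efficiency per stage
t_inlet_k = 313.0   # K, assuming intercooling back to ~40 C before each stage

p_in_bar, p_out_bar = 1.5, 150.0   # assumed suction and delivery pressures
n_stages = 6
r_stage = (p_out_bar / p_in_bar) ** (1.0 / n_stages)   # equal pressure ratio per stage

exponent = (gamma - 1.0) / gamma
w_stage = cp * t_inlet_k * (r_stage ** exponent - 1.0) / eta_isen   # kJ/kg per stage
w_total = n_stages * w_stage                                        # kJ/kg overall

print(f"Pressure ratio per stage: {r_stage:.2f}")
print(f"Specific work: {w_total:.0f} kJ/kg (~{w_total / 3.6:.0f} kWh per ton of CO2)")

# Average load for a capture rate of 3,200 tons/day (the Table 6 example):
co2_kg_per_s = 3_200_000 / 86_400
print(f"Average compression power: {w_total * co2_kg_per_s / 1000:.1f} MW")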

For CO2 compression applications, the traditional approach has been to use high-speed reciprocating compressors. However, centrifugal compressors offer an attractive alternative: better efficiency, oil-free compression, and less maintenance. Given the importance of intercooling capability, it is worthwhile to mention the integral-gear design for centrifugal compressors, which offers more flexibility for intercooling after each stage, optimization of flow coefficients through selection of the most favorable rotating speed for each pair of impellers, and a choice of drivers (either motors or steam/gas turbines).

A novel technology supported by DOE funding is the Ramgen supersonic shockwave CO2 compressor. [17] Following a process similar to the one occurring in the air intake for aircraft engines at supersonic speeds, the device is able to achieve compression ratios of 10:1 in a single stage. With stage discharge temperatures of approximately 230 ºC, the energy removed in the intercooler can be recovered and used in the solvent regeneration process. According to the details provided by Ramgen, 70% of the electrical input energy for compression can be recovered as useful heat. The second phase of this promising program will include detailed design specifications.

Thermal Performance Comparison

It is clear that the CCS processes discussed in this paper yield lower thermal efficiency than conventional systems without CCS. To quantify this phenomenon, the following examples are provided:

• Chilled ammonia—A study to evaluate the energy consumption and the cost of a full-scale CO2 capture system was conducted by Alstom (see [4]) and the results were compared to those of a study of an MEA system performed by Parsons in 2000 and 2002 (see [4]). The base power plant is a supercritical pulverized coal (PC) boiler firing 333,542 lb/hr of Illinois No. 6 coal, operating at 40.5% net efficiency (at higher heating value [HHV]), and generating 462 MW of net power. Plant energy performance with and without CO2 capture is summarized in Table 7.

Table 7. Chilled Ammonia Plant Energy Performance

Parameter | Supercritical PC Without CO2 Removal | SCPC With MEA CO2 Removal (Parsons Study) [4] | SCPC With NH3 CO2 Removal (Alstom Study) [4]
Net Power Output, kW | 462,058 | 329,524 | 421,717
Net Efficiency, % HHV | 40.5 | 28.9 | 37.0


• Oxy-fuel combustion—Thermal performance is better for air-fired units than for oxy-combustion, as shown in Table 8. The unburned carbon content in fly ash is similar in both processes; however, additional coal needs to be burned for oxy-fuel combustion to achieve the same net output.

Table 8. Oxy-Fuel Combustion Thermal Performance

Process | Efficiency, % HHV
Air-Fired PC Boiler | 39.1
Oxy-PC with CO2 Capture | 29.9
Oxygen Transport Membrane Process with CO2 Capture | 34.5
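To translate these efficiency figures into fuel impact, the short sketch below applies a back-of-the-envelope relationship, fuel input proportional to 1/(net efficiency), to the values quoted in Tables 7 and 8. It is illustrative only and is not taken from [4] or the oxy-fuel studies; the case labels are shorthand.

# Illustrative: additional fuel burned to hold net output constant, estimated
# as (eta_reference / eta_case - 1). Efficiencies are the % HHV values in
# Tables 7 and 8.
cases = {
    "SCPC with MEA capture (Table 7)":          (40.5, 28.9),
    "SCPC with chilled NH3 capture (Table 7)":  (40.5, 37.0),
    "Oxy-PC with CO2 capture (Table 8)":        (39.1, 29.9),
    "OTM process with CO2 capture (Table 8)":   (39.1, 34.5),
}

for name, (eta_ref, eta_case) in cases.items():
    extra_fuel = eta_ref / eta_case - 1.0
    print(f"{name}: ~{extra_fuel:.0%} more coal for the same net output")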

CONCLUSIONS

In anticipation of future greenhouse gas regulations, the power industry has embarked on a major effort to develop alternative capture and compression technologies, mainly for CO2. The proposed processes all require substantial amounts of energy, which negatively affects plant net power output and efficiency. This paper reviewed the current leading options and addressed their impacts on turbomachinery. Due to the uncertainties associated with CCS legislation, many developers, EPC contractors, and equipment manufacturers are looking at options to make plants currently in the design stage capture-ready, thus minimizing the future costs of CO2 capture retrofits. Apart from addressing the technical implications of the various CO2 capture processes, the engineering community should make a collective effort to inform and educate the public about the direct impact of CCS on electricity production and cost. For their part, by providing a long-term framework for CCS, policymakers could stimulate the deployment of low-carbon-footprint technologies and encourage the development of more cost-effective concepts.

Post-Combustion Capture Plants

The main impact on the steam turbine for either amine- or ammonia-based CCS technology is attributed to the large steam extractions needed for solvent regeneration. A range of solutions, standalone or in combination, is available to cope with various amounts of extracted steam. The options presented can be implemented with limited effect on steam turbine efficiencies. These post-combustion capture methods are the most suitable for future capture-ready power plants, requiring minimal pre-investment for steam turbines and only a few later modifications. Current turbine designs and plant layouts can also accommodate more efficient future post-combustion CO2 capture technologies as they become available. While the technical and economic penalties for CO2 capture are high, post-combustion technology represents one of the most probable short-term solutions. The use of ammonia in place of traditional amines may eventually reduce the parasitic electrical and heat loads.

Oxy-Fuel Combustion Combined Cycles

Several cycles have shown theoretical promise of high efficiency. The use of unconventional working fluids demands extensive development efforts before any large-scale implementation can occur. This methodology does not lend itself to the concept of capture-ready because the turbomachinery is process specific. Under current legislative conditions and given the status of competitive technologies, a 2015 or 2020 implementation date is reasonable.

Other Methods

Other capture methods, such as chemical looping, are in the initial development or pilot demonstration stage. Their chances of being implemented in full-scale applications ultimately depend on future legislative and environmental policies.

TRADEMARKS

Econamine FG is a service mark of Fluor Corporation.

Rectisol is a registered trademark of Linde AG.

Selexol is a trademark owned by UOP LLC, a Honeywell Company.

ACKNOWLEDGMENTS

Justin Zachary wishes to express his gratitude to his co-author, Sara Titus, for preparing the background information on various CO2 capture methods used in the development of this paper.



REFERENCES

[1] J.-Ch. Haag, A. Hildebrandt, H. Hönen, M. Assadi, and R. Kneer, “Turbomachinery Simulation in Design Point and Part-Load Operation for Advanced CO2 Capture Power Plant,” Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14−17, 2007, Paper GT2007-27488, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27488>.

[2] S. Chakravarti, A. Gupta, and Balazs Hunek, “Advanced Technology for the Capture of Carbon Dioxide from Flue Gases,” Proceedings of First National Conference on Carbon Sequestration, Washington, DC, May 15–17, 2001 <http://www.netl.doe.gov/publications/proceedings/01/carbon_seq/p9.pdf>.

[3] A. Veawab, A. Aroonwilas, A. Chakma, and P. Tontiwachwuthikul, “Solvent Formulation for CO2 Separation from Flue Gas Streams,” paper from the Faculty of Engineering, University of Regina, Regina, Saskatchewan, Canada, Proceedings of First National Conference on Carbon Sequestration, Washington, DC, May 15–17, 2001 <http://www.netl.doe.gov/publications/proceedings/01/carbon_seq/2b4.pdf>.

[4] S. Black, “Chilled Ammonia Process for CO2 Capture,” Alstom position paper, November 29, 2006 <http://www.icac.com/files/public/Alstom_CO2_CAP_pilot_psn_paper_291106.pdf>.

[5] W. Sanz, H. Jericha, B. Bauer, and E. Göttlich, “Qualitative and Quantitative Comparison of Two Promising Oxy-Fuel Power Cycles for CO2 Capture,” Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14−17, 2007, Paper GT2007-27375, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27375>).

[6] Q. Zafar, T. Mattisson, M. Johansson, and A. Lyngfelt (Chalmers University of Technology), “Chemical-Looping Combustion—A New CO2 Management Technology,” Proceedings of First Regional Symposium on Carbon Management, Dhahran, Saudi Arabia, May 22–24, 2006 <http://www.entek.chalmers.se/~anly/co2/54Dharan.PDF> and <http://www.co2management.org/proceedings/Chemical_Looping_Combustion_Qamar_Zafar.pdf>.

[7] “CO2 Capture Ready Plants,” International Energy Agency (IEA), Greenhouse Gas R&D Programme, Technical Study – Report Number 2007/4, May 2007 <http://www.iea.org/textbase/papers/2007/CO2_capture_ready_plants.pdf>.

[8] E. Rubin, A.B. Rao, and M.B. Berkenpas, “Development and Application of Optimal Design Capability for Coal Gasification Systems: Oxygen-based Combustion Systems (Oxyfuels) with Carbon Capture and Storage (CCS),” Carnegie Mellon University Contract DE-AC21-92MC29094 (final report submitted to U.S. DOE in May 2007) <http://www.iecm-online.com/PDF%20files/2007/2007rd%20Rao%20et%20al,%20IECM%20Oxy%20Tech.pdf>.

[9] B. Jones, “Gas Turbine Fuel Flexibility for a Carbon Constrained World,” Workshop on Gasification Technologies, Bismarck, North Dakota, June 28−29, 2006 <http://www.gasification.org/Docs/Workshops/2006/Bismarck/03RJones.pdf>.

[10] G. Rosenbauer, N. Vortmeyer, F. Hannemann, and M. Noelke, “Siemens Power Generation Approach to Carbon Capture and Storage,” Power-Gen Europe 2007 (Madrid, Spain) Conference Proceedings, June 26−28, 2007, access via <http://www.pennwellbooks.com/poeuandporee.html>.

[11] E. Oluyede and J.N. Phillips, “Fundamental Impact of Firing Syngas in Gas Turbines,” Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14–17, 2007, Paper GT2007-27385, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27385>.

[12] R. Naqvi and O. Bolland, “Optimal Performance of Combined Cycles with Chemical Looping Combustion for CO2 Capture,” Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14−17, 2007, Paper GT2007-27271, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27271>.

[13] G. Jukkola, “Combustion Road Map and Chemical Looping,” CURC Technical Subcommittee Meeting presentation, Pittsburgh, Pennsylvania, October 2007.

[14] G.J. Stiegel, R. Breault, and H.E. Andrus, Jr., “Hybrid Combustion-Gasification Chemical Looping Coal Power Technology Development,” Project Facts – Gasification Technologies, U.S. DOE, Office of Fossil Energy, NETL, 10/2008 <http://www.netl.doe.gov/publications/factsheets/project/Proj293.pdf>.

[15] H. Li and J. Yan, “Preliminary Study on CO2 Processing in CO2 Capture from Oxy-Fuel Combustion,” Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14−17, 2007, Paper GT2007-27845, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27845>.

[16] P. Bovon and R. Habel, “CO2 Compression Challenges,” CO2 Compression panel presentation at ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14−17, 2007, see <http://www.netl.doe.gov/technologies/coalpower/turbines/refshelf/asme/ASME_TURBO_EXPO_CO2_Panel_MAN_TURBO_presentation.pdf>.

[17] P. Baldwin, “Ramgen’s Novel CO2 Compressor,” Ramgen Document 0800-00153, August 2007 <http://www.ramgen.com/files/Ramgen%20CO2%20Compressor%20Technology%20Summary%2008-21-07.pdf>.

The original version of this paper was presented at ASME Turbo Expo 2008, held June 9–13, 2008, in Berlin, Germany.


BIOGRAPHIES

Justin Zachary is currently assistant manager of technology for Bechtel Power Corporation. He oversees the technical assessment of major equipment used in Bechtel’s power plants worldwide. Additionally, he is engaged in the evaluation and integration of integrated gasification combined cycle power island technologies. He also actively participates in Bechtel’s CO2 capture and sequestration studies, as well as the application of other advanced power generation technologies, including renewables.

Dr. Zachary has more than 30 years of experience with electric power generation technologies, particularly those involving the thermal design and testing of gas and steam turbines. He has special expertise in gas turbine performance, combustion, and emissions for simple and combined cycle plants worldwide. Before coming to Bechtel, he designed, engineered, and tested steam and gas turbine machinery while employed with Siemens Power Corporation and General Electric Company. He is a well-recognized international specialist in turbomachinery and has authored more than 72 technical articles on this and related topics. He also owns patents in combustion control and advanced thermodynamic cycles.

Dr. Zachary is an ASME Fellow and a member of a number of ASME performance test code committees.

Dr. Zachary holds a PhD in Thermodynamics and Fluid Mechanics from Western University in Alberta, Canada. His MS degree in Thermal and Fluid Dynamics is from Tel-Aviv University, and his BS in Mechanical Engineering is from Technion — Israel Institute of Technology, Haifa, both in Israel.

Sara Titus is a mechanical engineer on the Edwardsport Integrated Gasification Combined Cycle project. In her 2 years with Bechtel Power, she has already contributed to a variety of projects as a control systems engineer, in nuclear operating plant services; a mechanical engineer, in the environmental group; and an air quality control systems engineer, in the fossil technology group.

Before her current position, Sara supported a front-end engineering and design study for a CO2 test center in Norway. She was the responsible engineer for the test center’s rich amine reclaimer and regenerator systems, and also helped to evaluate optional functionalities being considered for the project.

Earlier, Sara worked as a control systems engineer on nuclear power projects for the Tennessee Valley Authority (TVA) and Southern Nuclear Operating Company, where her duties included performing safety-related equipment qualification assessments and preparing design change packages. In addition, she worked on the Holcomb Station power plant expansion and the IGCC plant for American Electric Power.

Sara has authored several technical papers on the topic of CO2 capture technologies and economics. Her most recent, “CO2 Capture and Sequestration Option: Impact on Turbo Machinery,” was presented at the ASME Turbo Expo conference in Berlin, Germany, in June 2008.

Sara holds MS and BS degrees in Chemical Engineering from the University of Maryland, Baltimore County, and is a member of the Society of Women Engineers, Women in Nuclear, and North American Young Generation Nuclear.


RECENT INDUSTRY AND REGULATORY DEVELOPMENTS IN SEISMIC DESIGN OF NEW NUCLEAR POWER PLANTS

Sanj Malushte, PhD
Orhan Gürbüz, PhD
Joe Litehiser, PhD
Farhang Ostadan, PhD

Issue Date: December 2008

Abstract—This paper provides a review of the evolution of seismic safety-related developments concerning safety-related nuclear power plant (NPP) facilities during the past 15 to 20 years and describes how these developments have shaped the recent changes in seismic regulations and the associated implementation rules.

Keywords—design certification, dynamic soil properties, ground motion attenuation, ground motion high-frequency content, ground motion incoherency, nuclear power plant, operating basis earthquake, performance-based spectrum, probabilistic seismic hazard analysis, safe-shutdown earthquake, seismic hazard, seismic risk, site response analysis, soil-structure interaction, standard plant, uniform-hazard spectrum

INTRODUCTION

The safe-shutdown earthquake (SSE) ground motion for nuclear power plants (NPPs) in operation before January 10, 1997, or for those plants whose construction permits were issued before then, is governed by the requirements of 10 CFR 100 Subpart A. [1] These requirements are less rigorous than those for the new generation NPPs (licensed after January 10, 1997). For new nuclear plants, the governing requirements are specified in 10 CFR 100 Subpart B [2] and 10 CFR 50 Appendix S [3]. The current regulation specifies that a more rigorous approach applying the probabilistic seismic hazard assessment (PSHA) method be used to determine the design ground motion.

This development, along with a host of subsequent developments, has had a significant impact on the procedures for not only the seismic ground motion determination, but also the downstream geotechnical and structural analyses and seismic design. The new procedures are often difficult to implement, and unlike before, both the nuclear industry and the Nuclear Regulatory Commission (NRC) have much less experience in their implementation and regulatory review. With this situation in mind, the Nuclear Energy Institute (NEI) has been working with the NRC to help clarify and streamline the implementation rules, most of which have by now been addressed.

The new procedures for seismic ground motion, analysis, and design have had a significant impact on the relevant NPP design services performed by geologists, seismologists, geotechnical engineers, and structural engineers. Additionally, the following two factors have had a further impact on the scope and challenges faced during the seismic design work process:

• Use of the latest seismicity data and ground motion models leads to increased estimates of seismic hazard, particularly in the high-frequency (HF) range for the Central and Eastern United States (CEUS). This change has a direct downstream impact on seismic qualification of equipment and the degree of refinement needed in the seismic analysis models used by structural and geotechnical engineers.

• The advent of the standard plant concept along with the associated revised licensing process (10 CFR 52 [4]; see [5] for further information) has altered the roles of seismic engineers and specialists working for the owners (nuclear utilities), nuclear steam supply system (NSSS) vendors (who are now the standard plant suppliers), and engineering and construction (E&C) companies such as Bechtel.

As a result, many seismic design and analysis issues have been identified during the recent design certification document (DCD) reviews for standard plants, as well as in early site permit (ESP) and combined operating license (COL) applications for the various candidate sites. NEI has facilitated addressing these issues by appointing a Seismic Issues Task Force (SITF) comprising senior seismic specialists from the nuclear industry. The task force has been working with the NRC to help resolve the issues for the past 3 years. This paper provides the background behind the current developments and discusses the outcomes and status of many of the issues that have recently been addressed.

BACKGROUND

The past practice for determining design seismic ground motion for NPP sites can best be described as applying a method called deterministic seismic hazard assessment (DSHA), as delineated in Standard Review Plan (SRP, NUREG-0800) Section 2.5.2, Rev. 2. [6] This method entailed identifying all seismic sources that posed a seismic hazard at the site, defining the maximum credible earthquake magnitude and distance from the site for each source, and using an appropriate ground motion attenuation relationship to determine the peak ground acceleration (PGA) caused at the site by each seismic source. The design value for PGA at the site was taken as the maximum of the various PGA values due to the individual seismic sources.
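As a purely illustrative sketch of the DSHA bookkeeping (not any specific attenuation relationship; the functional form, coefficients, and source parameters below are hypothetical), each source's maximum credible event is converted to a site PGA and the largest value governs:

# Hypothetical illustration of the DSHA logic: evaluate each source's maximum
# credible earthquake with a generic attenuation form and keep the maximum PGA.
import math

def site_pga_g(magnitude: float, distance_km: float) -> float:
    """Generic attenuation form ln(PGA) = a + b*M - c*ln(R + r0); the
    coefficients are invented for illustration, not a published model."""
    a, b, c, r0 = -3.5, 0.9, 1.2, 10.0
    return math.exp(a + b * magnitude - c * math.log(distance_km + r0))

# Hypothetical seismic sources: (name, maximum credible magnitude, distance in km)
sources = [("Source A", 6.5, 40.0), ("Source B", 7.2, 120.0), ("Source C", 5.8, 15.0)]

pga_by_source = {name: site_pga_g(m, r) for name, m, r in sources}
design_pga = max(pga_by_source.values())

for name, pga in pga_by_source.items():
    print(f"{name}: {pga:.3f} g")
print(f"Design (governing) PGA: {design_pga:.3f} g")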

For some existing sites (typically in the CEUS), the SRP allowed use of less sophisticated approaches (e.g., review of past seismic activity, recorded ground motions at nearby locations, historical records of damage data, and best judgments of experts in local geology and seismology) because of insufficient data and seismic ground motion models for these regions (unlike for the Western United States, where fault characteristics were better known and understood).

Regardless of whether the PGA was determined using the DSHA or even simpler approaches, the shape of the resulting seismic response spectrum for rock sites was typically assigned using prescriptive rules, as in NRC Regulatory Guide (RG) 1.60. [7] Thus, given the PGA value, simple rules were used for constructing both horizontal and vertical response spectra. For soil sites, the soil amplification effects (whereby the propagating seismic waves from the crystalline bedrock below the site are amplified as they pass through the intervening soil layers) were captured by performing site amplification studies. However, 10 CFR 100 Appendix A, RG 1.60, and SRP 2.5.2, Rev. 2, did not prescribe rigorous rules for performing such studies, especially for accounting for the uncertainties about the soil stratification and its properties.

ABBREVIATIONS, ACRONYMS, AND TERMS

ASCE American Society of Civil Engineers

CAV cumulative absolute velocity

CEUS Central and Eastern United States

CFR Code of Federal Regulation

COL combined operating license

CSDRS certified seismic design response spectrum

CT cyclic triaxial (test)

DCD design certification document

DOE US Department of Energy

DSHA deterministic seismic hazard assessment

E&C engineering and construction

EPRI Electric Power Research Institute

ESP early site permit

FIRS foundation input response spectrum

g spectral acceleration

GMRS ground motion response spectrum

HF high-frequency

IPEEE individual plant examination for external events

ISG Interim Staff Guidance (NRC)

ISRS in-structure response spectrum/spectra

ITAAC inspections, tests, analyses, and acceptance criteria

LLNL Lawrence Livermore National Laboratory (US DOE)

NEI Nuclear Energy Institute

NGA next generation attenuation (ground motion attenuation model)

NPP nuclear power plant

NRC US Nuclear Regulatory Commission

NSSS nuclear steam supply system

NUREG Regulatory Guide

OBE operating-basis earthquake

PBS performance-based spectrum

PGA peak ground acceleration

PSHA probabilistic seismic hazard assessment

RC/TS resonant column/torsional shear (test)

RG (NRC) Regulatory Guide

SECY Office of the Secretary

SITF Seismic Issues Task Force (of NEI)

SRP Standard Review Plan (for nuclear power plants, NUREG-0800)

SSC structures, systems, and components

SSE safe-shutdown earthquake

SSHAC Senior Seismic Hazard Analysis Committee

SSI soil-structure interaction

UHS uniform-hazard spectrum



The probabilistic seismic hazard assessment (PSHA) method was first formulated by Prof. Allin Cornell in 1968 [8]; however, it was not immediately embraced by the NRC and industry because of insufficient understanding within the profession as well as insufficient seismologic data. During the mid-to-late 1980s, motivated by a desire to better understand the available seismic margins at the existing NPPs, the NRC became interested in getting a better grasp of the seismic hazard at the existing sites by using the PSHA method. This initiative led to two studies: NUREG/CR-5250 [9], commissioned by the NRC, and the Electric Power Research Institute (EPRI) report EPRI-NP-4726 [10], supported by the industry.

While the two studies produced similar hazard curves for each site and generally similar representations of relative hazards at various sites, the absolute hazard estimates differed significantly for several sites. This variation raised concern about the viability of the PSHA technique in terms of its ability to produce consistent hazard estimates, independent of the analysts. The NRC therefore decided to supplement the Lawrence Livermore National Laboratory (LLNL) study by improving the elicitation of data and its associated uncertainty among the experts to better capture the gaps in their knowledge. The results of this study were published in NUREG-1488. [11] Although the PSHA results in NUREG-1488 showed reasonable agreement with regard to the return periods associated with plant-specific SSEs, the LLNL seismic hazard estimates in the 10⁻⁴ to 10⁻⁶ annual probability of exceedance range (the range of most interest in terms of seismic risk to NPPs) were systematically higher than the EPRI hazard results.

To address the foregoing concern, a working group called the Senior Seismic Hazard Analysis Committee (SSHAC) was created during the mid-1990s. The group was sponsored by the NRC, the

US Department of Energy (DOE), and EPRI. The SSHAC’s charge was to provide an up-to-date procedure for obtaining reproducible results from the application of PSHA principles, not to advance the basic foundations of PSHA or develop a new methodology. This focus led to an emphasis on procedures for eliciting and aggregating data and models for performing a hazard analysis.

In 1997, the SSHAC issued a report entitled “Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts.” [12] At the request of the NRC, the report was reviewed by the National Research Council, which issued its own assessment in a report entitled “Review of Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts.” [13] These efforts helped establish PSHA as a rational method for seismic hazard characterization.

SEISMIC DEVELOPMENTS FROM THE MID-1990s TO 2004

With the viability of the PSHA approach thus established, 10 CFR 100 Subpart B (10 CFR 100.23 in particular) was written to specifically invoke PSHA as a method for determining the design ground motion for new generation NPPs. EPRI-NP-4726 had established that, on a median basis, the SSE spectra for a group of existing nuclear power plants corresponded to an annual exceedance probability of about 10⁻⁵ (see Figure 1). Given this finding, the NRC introduced RG 1.165 [14] in 1997 to require that the new generation of nuclear power plants be designed for a seismic hazard of 10⁻⁵ per year (referred to as the reference probability in RG 1.165) using the PSHA method.

Figure 1. Composite Probability of Exceedance for SSEs at Existing Nuclear Power Plants [14] (cumulative distribution versus composite probability of exceeding the SSE; median ≈ 1E-5)


The thinking was that the new plants thus designed would be at least as "safe" as the median level among the existing fleet of plants, a somewhat misleading premise because seismic safety is a function of both the seismic hazard and the seismic fragility of the plant structures, systems, and components (SSC), not the seismic hazard alone.
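This distinction can be stated more explicitly. In the risk convolution commonly used in seismic probabilistic assessments (a generic formulation given here for orientation, not an equation taken from the cited references), the annual frequency of seismically induced failure of an SSC combines the hazard curve with the component fragility:

\[
P_F \;=\; \int_{0}^{\infty} P_f(a)\,\left|\frac{d\lambda_H(a)}{da}\right|\,da
\]

where \(\lambda_H(a)\) is the mean annual frequency of exceeding ground motion level \(a\) at the site (the hazard curve) and \(P_f(a)\) is the conditional probability of failure given that level (the fragility curve). Two plants exposed to identical hazard can therefore present very different seismic risk if their SSC fragilities differ, which is why a hazard-only reference probability is an incomplete measure of safety.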

RG 1.165, with its requirement to produce design response spectra for a 100,000-year return period, went relatively unnoticed until 2003, when site studies for new plant licenses began. Use of the latest seismicity data and ground motion models showed a general increase in the predicted ground motions, especially for large return periods and in the HF range for rock sites. (Both the EPRI and LLNL studies had already confirmed that the seismic ground motion for the CEUS regions has significant HF content, an issue that generated much debate about whether and how SSCs could be accurately analyzed or tested for HF excitation.)

Besides the cost implications, the increase in hazard estimates also meant that, if the latest data/models were used, the median annual probability of exceeding the design spectra for the existing fleet of plants would be higher than 10⁻⁵ per year, thus bringing into question the very basis for using the 10⁻⁵ per-year value as the reference probability. Because seismic-related cost and schedule impacts are significant for the nuclear power industry, there was a desire to identify an alternative rational basis that would lead to more reasonable design ground motion estimates.

With the above concerns in mind, the industry sought to use seismic risk, rather than seismic hazard, as the more appropriate basis for the seismic design of new NPPs. Certain previous and parallel developments influenced the course of the more recent seismic regulatory guidance and implementation criteria:

• In the early 1990s, the NRC asked all nuclear plant licensees to conduct an individual plant examination for external events (IPEEE) program to evaluate the plant risks associated with seismic events, high winds, internal fire events, etc. For reporting seismic assessment results, the licensees were given a choice to use either the seismic margins approach (i.e., maximum ground motion that could be resisted versus the PGA value that the plant was designed for) or a more comprehensive annual seismic risk approach. In all, licensees of 25 existing plants conducted the

seismic risk-based evaluations. The results proved to be a better indicator of each plant’s seismic safety because the studies addressed both the hazard and fragility aspects of the controlling SSCs.

The NRC published a summary report, NUREG-1742, in 2002 based on the IPEEE results reported by each utility. [15] With either EPRI or LLNL data [10, 11] as the hazard basis, the report provided a good perspective on the seismic risk (or seismic safety, depending on one's perspective) at the existing NPPs, unlike [9] and [10], which provided information on the seismic hazard only. Similar to the concept of reference probability introduced by the NRC in RG 1.165 (defined in terms of the median annual seismic hazard level corresponding to the SSE spectra for the existing nuclear plants), NUREG-1742 enabled the nuclear industry to think in terms of a reference risk, determined as the median annual seismic risk for the existing NPP units, as an indicator of their seismic safety. Figure 2 shows that the reference risk value for seismically induced failure is about 1.2 × 10⁻⁵ per year. As this value is based on the seismicity data and ground motion models from the early 1990s, the annual seismic risk estimate would be higher if one used the current data/models (which result in increased hazard estimates).

• The industry's desire to use seismic safety (rather than seismic hazard, as stipulated in RG 1.165) as the basis for seismic design of new NPPs was shaped by the "risk-based" design approach first introduced in DOE Standard 1020-02. [16] The case for using a risk-based approach received a further boost from its incorporation in ASCE Standard 43-05. [17] Both of these standards specify that the seismic design of NPPs be based on a performance goal of 10⁻⁵ per year (i.e., the probability of failure of any SSC due to a seismic event must be less than 10⁻⁵ per year). This was a serendipitous development for the nuclear industry because the acceptable risk goal in ASCE 43-05 happened to be about the same as the median seismic risk level for existing NPPs (as Figure 2 shows).

Using conservative fragility characteristics for typical nuclear SSCs as the fragility basis, ASCE 43-05 provides a simple method for deriving the "performance-based" spectrum (PBS) for a risk of 10⁻⁵ per year. One method for developing the PBS is to use



the PSHA-based mean uniform-hazard spectra for 10,000-year and 100,000-year return periods. The uniform-risk PBS thus developed lies between the two mean hazard spectra (see Figure 3 for a comparison).

Figure 3. Comparison of the Mean 10⁻⁴ and 10⁻⁵ UHS and the Performance-Based GMRS [19] (spectral acceleration, g, versus frequency, Hz)

Figure 2. Annual Seismic Core Damage Frequency at Existing Nuclear Power Plants [18] (cumulative distribution versus seismic core damage frequency)


The PBS also turns out to be smaller than the 100,000-year median uniform-hazard spectrum (UHS) per RG 1.165 (Figure 4 provides a sample comparison and includes the RG 1.60-based spectrum). The spectrum reduction is especially significant in the HF range, which was a matter of concern for the nuclear industry. Thus, it was clear that the performance-based approach per ASCE 43-05 would be an attractive option in place of the uniform-hazard-based approach per RG 1.165. (The case for using the performance-based [i.e., uniform-risk] spectrum per ASCE 43-05 was made in [18].)

• The potential concern about design and testing of SSCs to address the HF content in CEUS ground motion had been acknowledged in the nuclear industry since the early 1990s. The general consensus was that HF excitation is inconsequential to structures and most systems (e.g., piping). The concern about chattering response of old electrical relays subjected to HF excitation could be easily addressed by simply replacing them with relays that use solid-state electronics (and preventing use of the old relays in future NPPs). An EPRI report [21] sought to address these issues in the early

1990s; however, further progress and NRC consensus were not achieved at the time because of insufficient regulatory impetus (i.e., no new plants were being built or licensed). In any case, these early developments provided a framework for further discussions with the NRC during the past few years.

• The advent of the standard plant concept also had implications for NPP seismic analysis and design. The standard plant is designed a priori for a site-independent seismic spectrum (called the certified seismic design response spectrum [CSDRS]) and an array of possible soil conditions, with the expectation that the selected seismic design parameters will envelop the site-specific design spectra and soil characteristics at the candidate sites. While attractive in principle, this goal has been somewhat elusive because most standard plant designs have employed design spectra that do not contain sufficient HF content to envelop the site-specific spectra for many CEUS hard-rock sites (see Figure 5). Furthermore, none of the standard plant designs would be able to withstand the expected design ground motion for West Coast sites (especially in the low-to-medium frequency range).


Figure 4. Sample Comparison of RG 1.60 0.30 g Spectrum, RG 1.165 10⁻⁵ Median UHS, and ASCE 43-05 10⁻⁵ PBS [20] (spectral acceleration, g, versus frequency, Hz)


As a result, the standard plant designs often would need to be assessed against site-specific soil and ground motion parameters, and rules for conducting such assessments needed to be developed to address this new situation for the industry.

• The operating-basis earthquake (OBE) is another ground motion concept that has evolved over the past 15 years, especially as NSSS suppliers developed their respective standard plant concepts. Experience from seismic design/qualification tests for existing plants had shown that, compared to the SSE, the OBE rarely governed the final design of SSCs (especially the structures). This determination meant that the significant design, analysis, and testing effort expended for the OBE was not worthwhile, so the industry lobbied the NRC during the early 1990s to make the OBE an "inspection level" earthquake rather than an explicit design-level earthquake. The NRC's earlier concurrence on this subject was first documented in SECY (Office of the Secretary) 93-087 [22] and later reflected in 10 CFR 50 Appendix S.

The OBE concept, however, has needed some revisiting during the past few years as it became clear that not all of an NPP’s seismic category I structures would be standardized

(e.g., intake structure configuration can vary from site to site, a reason why it is usually not part of the standard plant offering). Another factor is that the free-field motion corresponding to the foundation elevations of various structures differs because not all structures have the same embedment in soil. As the OBE and SSE are both meant to represent site-dependent but structure-independent ground motions, it became necessary to define them clearly and to clarify their usage for both standard and nonstandard plant structures.

• The phenomenon of ground motion incoherence, attributable to the wave passage effect as well as random incoherency, had been recognized for a long time. The wave passage effect reflects the fact that a time lag is associated with passage of a given wave such that identical particle motion at two different locations cannot happen at any given instant of time (the wave passage effect is captured in most commercial software for SSI analysis). Random incoherency, which is the more significant source of incoherency, relates to the fact that the particle motion at any location within the soil is a result of many reflected and refracted waves that pass through at any given time (i.e., waves do not arrive in an orderly fashion; many waves are incident at a given time due to multiple reflections in the soil layers).


Figure 5. Comparison of Design Motion Developed for a CEUS Rock Site, and Design Motions Used for Design Certification [23] (5% damping; spectral acceleration, g, versus frequency, Hz; curves: augmented RG 1.60 [0.30 g], RG 1.60 [0.30 g], and CEUS rock)



It has been known that the incoherency increases with frequency and spatial separation between observation points. While some empirical models existed for describing ground motion incoherency, there was no proper technique to account for its effect on the seismic response of structures. ASCE 4-98 [24] permitted an ad hoc reduction of the design ground motion spectrum based on the foundation expanse and frequency range (i.e., larger reductions were permitted for HF range and large foundation sizes). However, because of the lack of rigor in this scheme, the NRC never fully endorsed it. Nonetheless, it was always recognized that the use of a good incoherency model, combined with its proper implementation into seismic SSI analysis (whereby the structure is analyzed for the incident incoherent motion), would result in a reduced significance of HF excitation. The trend toward using a very large common nuclear island basemat also meant that the resulting foundation sizes were ripe for realizing the benefit of incoherency. There was thus a clear impetus to develop a consensus on incoherency models and their treatment in the seismic analysis schemes.

• RG 1.138 [25] requires field measurements of seismic shear wave velocity and subsequent laboratory testing to determine the dynamic soil properties (i.e., the strain dependence of soil damping and shear modulus) that characterize the seismic behavior of the underlying soil strata. Such tests need to be performed both for the in situ layers above the bedrock and for the backfill material to be placed on top of the uppermost competent in situ material.

A recent development in this regard is the so-called resonant column/torsional shear (RC/TS) test. This increasingly used method, developed at the University of Texas at Austin, enables both damping and modulus characteristics to be determined from a single test, with multiple uses of the same specimen (compared to the separate cyclic triaxial [CT] tests and RC tests mentioned in RG 1.138). Although not mentioned directly in RG 1.138, the RC/TS test is considered superior because it yields a consistent set of test results while saving time. For these reasons, the nuclear industry became interested in seeking regulatory approval of this method.

The seismic-related activities in support of recent ESP and COL applications further highlighted the importance of the foregoing developments in terms of seismic ground motion development, site geotechnical studies, and subsequent seismic analysis and design. These developments thus formed the bases for the industry's desire to influence the seismic requirements.

SEISMIC ISSUES ADDRESSED SINCE 2004

Recognizing the importance of the issues discussed earlier, both the NRC and the NEI

formed senior-level seismic task forces to achieve consensus and resolutions. These task forces have been meeting regularly since late 2004 and are now at a point at which they have resolved most of the issues, which are captured in RG 1.208 [19], the NRC’s May 2008 Seismic Interim Staff Guidance (ISG) document [26], and the latest revisions of SRP Sections 2.5.2 [27] and 3.7.1 [28]. The following sections discuss some of the more important issues and their resolutions.

Option to Use Uniform-Hazard Spectrum (UHS) or Performance-Based Spectrum (PBS)

The industry (NEI) succeeded in persuading the NRC to allow the use of the PBS. The NRC issued a new regulatory guide (RG 1.208 [19]) that allows the use of the PBS in lieu of the UHS per RG 1.165. Figures 3 and 4 illustrate why the PBS approach is attractive to the nuclear industry; most of the recent applicants have been opting for it. Use of either method requires rigorous PSHA work and site amplification studies with support from top-notch geology and seismology experts from boutique firms and architectural/engineering companies, as described below (a computational sketch of the PBS derivation follows this list):

• Geology and seismology experts performing detailed geological, seismological, and geophysical investigations to identify and characterize regional/local seismic sources.

• Seismology/probability experts performing PSHA work for developing UHS (and subsequently PBS, if desired) by considering the seismicity data for the sources and appropriate ground motion attenuation models. The spectra thus obtained correspond to the crystalline bedrock below the site, which can be a few tens of feet to many hundreds of feet below grade.

• Geotechnical experts performing site amplification studies to determine the free-field ground motion at various locations within the site soil profile (wherever structure foundations are to be located). This work


entails using test data for the dynamic properties of the soil layers at the site (from the bedrock to grade) and accounting for the associated uncertainties. Because the soil near the grade elevation is typically not of the "competent" caliber, the properties of the structural backfill (or lean concrete layer) have to be accurately determined and accounted for in the site amplification studies. As a result, the design ground motion is initially established only at the uppermost competent soil layer because the backfill properties are not known upfront.
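To make the PBS construction concrete, the following sketch follows the two-point design-factor approach associated with RG 1.208/ASCE 43-05 as summarized above, in which each PBS ordinate is derived from the mean 10⁻⁴ and 10⁻⁵ UHS ordinates at that frequency. The constants shown (0.6 and 0.8), the function names, and the sample values are stated here as assumptions for illustration; the governing documents should be consulted for the exact procedure.

# Sketch of a performance-based spectrum (PBS/GMRS) calculation in the spirit of
# RG 1.208 / ASCE 43-05: for each spectral frequency, the PBS ordinate is derived
# from the mean 1E-4 and 1E-5 uniform-hazard spectral accelerations.
# The design-factor constants (0.6, 0.8) follow the commonly cited formulation;
# treat them as assumptions to be verified against the guide itself.

def pbs_ordinate(sa_1e4: float, sa_1e5: float) -> float:
    """Return the performance-based spectral acceleration (g) at one frequency.

    sa_1e4 -- mean UHS ordinate at 1E-4/yr exceedance (10,000-yr return period)
    sa_1e5 -- mean UHS ordinate at 1E-5/yr exceedance (100,000-yr return period)
    """
    a_r = sa_1e5 / sa_1e4                  # slope of the hazard curve at this frequency
    design_factor = max(1.0, 0.6 * a_r ** 0.8)
    return design_factor * sa_1e4          # lies between the 1E-4 and 1E-5 UHS


if __name__ == "__main__":
    # Hypothetical 5%-damped UHS ordinates for a hard-rock site (illustrative only)
    frequencies_hz = [1.0, 5.0, 10.0, 25.0, 50.0]
    uhs_1e4 = [0.10, 0.30, 0.35, 0.40, 0.30]
    uhs_1e5 = [0.25, 0.80, 1.00, 1.20, 0.90]

    for f, s4, s5 in zip(frequencies_hz, uhs_1e4, uhs_1e5):
        print(f"{f:5.1f} Hz:  UHS(1E-4)={s4:.2f} g  UHS(1E-5)={s5:.2f} g  "
              f"PBS={pbs_ordinate(s4, s5):.2f} g")

For the sample values shown, the computed PBS ordinates fall between the two UHS curves, consistent with the comparison in Figure 3.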

NRC Interim Staff Guidance − Definitions of Key Ground Motion Terms, Including Interpretations of SSE and OBE

The NRC requires that the design ground motions be established at (1) the top of the uppermost competent in situ soil layer under the site (generally defined as a layer in which the seismic shear wave velocity is at least 1,000 ft/sec), and (2) various elevations corresponding to the bottoms of the foundations of all safety-related structures. The latter is necessary because the foundations are often located on top of structural fill materials that are used to replace the unsuitable upper layers at most sites.

To ensure that the standard plant design is adequate for the site-specific ground motion, the site-independent design ground motion (which is applied as free-field ground motion either at the grade level or at the foundations of the concerned structures) used for design of the standard plant structures is compared with the site-specific spectra. With the advent of the standard plant concept, it also has become necessary to clarify what the OBE spectrum means for SSCs that are not part of the standard plant. The following new terms and definitions have thus been introduced in the recent NRC seismic ISG [26]:

• Certified seismic design response spectrum (CSDRS)—Site-independent seismic design response spectrum approved by the NRC for a certified standard plant design.

• Ground motion response spectrum (GMRS)—Site-specific ground motion response spectrum (horizontal and vertical) determined as free-field outcrop motions on the uppermost in situ competent material, determined using RG 1.165 or RG 1.208.

• Foundation input response spectra (FIRS)—As the GMRS is established at the uppermost in situ competent layer, the resulting motion has to be “transferred” to the base elevations

of each seismic category I foundation. These site-specific (amplified) ground motion response spectra at the foundation levels in the free field are referred to as FIRS and are derived as free-field outcrop spectra.

• Safe-shutdown earthquake (SSE)—The SSE for the site is the performance-based design motion defined at the ground surface. Given this definition, any deviant as-found conditions can be evaluated against this spectrum, provided that the condition is subsequently restored to the design basis (e.g., the CSDRS for the standard plant). Also, the slope stability and soil liquefaction analyses need to be performed using the site-specific SSE.

This definition poses a dilemma because it is difficult to locate (and maintain) seismic monitoring instrumentation at the GMRS elevation. Therefore, the subject of ground motion monitoring requirements still needs to be sorted out. The NRC plans to publish a revised version of RG 1.12 [29] and possibly RG 1.166 [30] to address this issue with industry input. For now, the NRC's seismic ISG [26] states that the applicant's monitoring and instrumentation plan will be reviewed on a case-by-case basis.

• Operating-basis earthquake (OBE)—For license applications for the use of a certified standard plant design, the OBE ground motion is defined as follows:
– For the standard plant structures, the OBE ground motion is one-third of the CSDRS.
– For the safety-related structures that are not part of the certified design, the OBE ground motion is one-third of the design motion response spectra, as stipulated in the design certification conditions specified in the DCD (which could be the CSDRS or GMRS).
– It is noted that, for situations when the DCD specifies the GMRS as the design motion response spectrum for site-specific (non-standard) structures, the OBE for such structures need only match (or exceed) one-third of the GMRS. As the GMRS is often lower than the CSDRS, this would result in a lower OBE for the non-standard SSCs. However, selection of a rather low OBE level poses an economic risk to the plant in that a potential occurrence of an earthquake that exceeds the low OBE threshold would trigger an extended plant shutdown for post-earthquake inspections.


Outcome of Initiatives for Reduction of HF Content and Its Impact

During recent years, the nuclear industry has mounted a multi-pronged effort to minimize the significance of HF content in CEUS hard-rock ground motions. (The HF content can still persist for medium-hard sites because their soil layers may not fully filter out the HF content.) This effort consisted of the following initiatives and outcomes:

• Use of PBS is now permitted in lieu of the UHS approach, which results in an effectively smaller return period for the HF range of the response spectrum and correspondingly reduced spectral values (see Figure 3). This approach complies with RG 1.208.

• Use of the cumulative absolute velocity (CAV) is permitted as a filter to help weed out seismic hazard contributions from low-magnitude earthquake events. Such events produce lower levels of ground shaking that usually do not damage most structures, let alone nuclear structures; they also contribute more significantly to the HF content because the HF waves incur less attenuation compared with low-frequency waves. With this in mind, the industry proposed the use of CAV (a parameter closely correlated with earthquake magnitude) as a filter for eliminating seismic hazard contributions from low-magnitude events. The NRC endorsed this approach in RG 1.208 with the stipulation that a conservative CAV threshold of 0.16 g-sec be used in the hazard calculation. (A computational sketch of CAV appears at the end of this subsection.)

• Truncation of the number of standard deviations included in defining the ground motion model, which can have a significant impact on the hazard estimates, is not permitted. An industry study to determine whether such models could be truncated at a limited number of standard deviations found that there was no rational basis for such truncation (except as implied by the inherent strength limit of the geologic materials through which the motion is transmitted). While RG 1.208 does not permit such truncation, it does acknowledge that the magnitude of the standard deviation need not be too conservative. That is, if better ground motion models are developed using more data, the conservatism in the standard deviation magnitude can be reduced.

• Incorporation of ground motion incoherence is now permitted in the seismic SSI analysis.

Several sample studies have confirmed that the application of incoherent ground motion into SSI analysis results in reduced impact of the HF content (i.e., reduced HF acceleration levels in the in-structure response spectra and reduced response of walls and floors to HF excitation). Many researchers have proposed incoherency models, and the NEI in particular advocated those proposed by Abrahamson. [31] The NRC approved this model for rock sites and its incorporation into the SASSI program by Ostadan [32], among other similar implementations.

All of these industry initiatives, except for truncation of the number of standard deviations used in the ground motion model, have been successful in reducing the significance of HF content in the design ground motion. However, it is important to note that the wave incoherence benefit can be realized only when the unreduced (high-frequency-rich) spectrum is applied as (incoherent) free-field ground motion during the SSI analysis.
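To illustrate the CAV screening parameter referenced above: in its basic form, CAV is the time integral of the absolute ground acceleration over the duration of the record. The sketch below computes that basic quantity for a digitized acceleration history and compares it with the 0.16 g-sec threshold. The standardized CAV used in regulatory screening applies additional windowing rules that are not reproduced here, and the function names and sample record are illustrative assumptions.

# Sketch: cumulative absolute velocity (CAV) of an acceleration time history.
# CAV = integral of |a(t)| dt over the record duration. This is the basic
# definition; the "standardized" CAV used in regulatory screening applies
# additional windowing rules not reproduced here (illustrative assumption).
import math

def cumulative_absolute_velocity(accel_g, dt):
    """Return CAV in g-sec for accelerations sampled at a constant step dt (sec)."""
    # Trapezoidal integration of the absolute acceleration
    cav = 0.0
    for a0, a1 in zip(accel_g[:-1], accel_g[1:]):
        cav += 0.5 * (abs(a0) + abs(a1)) * dt
    return cav


if __name__ == "__main__":
    dt = 0.01  # 100 samples per second
    # Hypothetical 4-second decaying sinusoid standing in for a small-magnitude record
    accel = [0.05 * math.exp(-0.5 * i * dt) * math.sin(2 * math.pi * 10 * i * dt)
             for i in range(400)]
    cav = cumulative_absolute_velocity(accel, dt)
    print(f"CAV = {cav:.3f} g-sec "
          f"({'above' if cav >= 0.16 else 'below'} the 0.16 g-sec screening threshold)")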

Impact of HF Content on Structural Modeling and Seismic Qualification

Despite the aforementioned improvements, there is still significant HF content in the ground motion for CEUS rock sites (and very stiff sites). Consequently, new rules have been introduced to ensure that structural analyses and seismic qualifications are conducted properly to capture the response to HF excitation.

• Seismic analysis models must be refined enough to accurately capture response to the HF content (at least up to 50 Hz) of the horizontal and vertical GMRS/FIRS. The NRC considered this requirement to be important in developing accurate in-structure response spectra (ISRS) for walls and floors (as well as in accurately capturing any significant HF response of wall and floor panels). The ISG [26] further requires that the ISRS be developed for frequencies up to 100 Hz. The spectra thus developed will enable proper seismic qualification of HF-sensitive equipment and systems supported by the structures.

• Use of screening techniques to identify and screen out electrical and mechanical equipment not sensitive to HF excitation is outlined in the ISG. [26] This screening helps narrow down the number of items that have to be qualified for the HF excitation. It is expected that a large number of SSCs could be screened out using the screening criteria


provided in the ISG document. For the remaining SSCs, which are deemed to be HF sensitive, the seismic qualification method (whether by test, analysis, or a combination thereof) will need to be suitable to capture the response (and any vulnerability) to HF excitation.

Determination of Dynamic Soil Properties and Engineered Backfill

The RC/TS method has now been accepted by the NRC and is being widely used by most of the COL applicants. For some time, the University of Texas at Austin has been the only facility capable of conducting these tests for nuclear applications (i.e., with the requisite quality assurance program). While relatively efficient, an RC/TS test still takes about 1 week per granular sample and 2 weeks for cohesive samples (it is not needed for hard-rock sites), which has created a bottleneck for the COL applicants. Soil testing also becomes a significant challenge for deep-soil sites, where hard rock is not reached even at depths of several hundred feet. Careful planning and coordination with other applicants are therefore needed at the outset of an ESP or COL project to ensure that the application submittal schedule is realistic.

Reconciliation of Site Parameters with Standard Plant Design

The standard design is based on a site-independent ground motion (represented by the CSDRS) and a variety of potential soil profiles to arrive at an SSI response. Generally speaking, a standard plant supplier considers several soil profiles deemed representative of candidate sites and provides a design that considers the maximum response from all such profiles. Therefore, from a seismic design standpoint, the following site-specific characteristics have to be considered in assessing whether the standard plant design envelops the site conditions:

• Reconciliation with site-specific seismic design spectrum—The CSDRS must exceed the site GMRS (or structure-specific FIRS, if the structure is not founded at the GMRS elevation) at all frequencies across the spectrum. As noted earlier, this is often not the case for CEUS hard-rock sites because most suppliers did not choose a sufficiently HF-rich design spectrum for their standard design. (A simple sketch of this enveloping check follows this list.)

• Reconciliation with site soil profile—The applicant must demonstrate that the site soil profile is bounded by or very close to one of the generic profiles considered in the standard

plant design. This condition can also be elusive because the presence of a soft soil lens by way of an engineered fill layer on top of a medium hard or hard rock layer (a not-so-uncommon scenario) has generally not been considered by most standard plant suppliers. The presence of such layers can in fact be problematic because of increased shaking levels due to reflection of seismic waves from the underlying hard layer. Most ancillary structures are not as deeply embedded as the nuclear island structure (which may be directly founded on rock at hard-rock sites). One remedy for such situations is the use of lean concrete as a backfill (in lieu of the usual compacted structural backfill), which can help align the resulting soil profile with one of the generic profiles considered in the standard plant design.

• Inclination (“dip”) of the top surface of the uppermost competent in situ soil layer—The standard plant design typically factors in some dip (say, up to 20 degrees) for the soil layer where CSDRS is applied. The applicant must verify that the dip at the site, if any, does not exceed the limit prescribed by the standard plant supplier.

• Dynamic soil-bearing capacity and friction coefficient at soil-foundation interface—The standard plant suppliers often prescribe a minimum value for the dynamic bearing capacity for the soil directly below the foundation and for the friction coefficient at the soil-foundation interface. These limits ensure that (1) the supporting soil at the site can withstand potentially large transient bearing pressures caused during design level shaking, and (2) there is enough friction resistance at the foundation interface to prevent sliding of the structure. Of these two parameters, the dynamic bearing capacity can be problematic because there are no prescribed tests to determine it. Consequently, the applicant (and/or the E&C company representing the applicant) must somehow justify that the soil in question has sufficient dynamic bearing capacity.
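The enveloping check referenced in the first bullet is, computationally, a frequency-by-frequency comparison once both spectra are defined on a common basis. The sketch below (with hypothetical function names, sample spectra, and a log-log interpolation convention assumed for illustration) flags any frequency at which the site-specific GMRS or FIRS exceeds the CSDRS:

# Sketch: check whether a certified design spectrum (CSDRS) envelops a
# site-specific spectrum (GMRS or FIRS). Spectra are given as parallel lists of
# frequencies (Hz) and spectral accelerations (g); interpolation is log-log,
# a common convention for response spectra (assumption).
import math

def interp_loglog(freqs, sas, f):
    """Log-log interpolate a spectrum at frequency f (f assumed within range)."""
    for (f0, s0), (f1, s1) in zip(zip(freqs, sas), zip(freqs[1:], sas[1:])):
        if f0 <= f <= f1:
            w = (math.log(f) - math.log(f0)) / (math.log(f1) - math.log(f0))
            return math.exp((1 - w) * math.log(s0) + w * math.log(s1))
    raise ValueError("frequency outside spectrum range")

def exceedances(csdrs, site_spectrum):
    """Return (frequency, site Sa, CSDRS Sa) triples where the site spectrum governs."""
    c_freqs, c_sas = csdrs
    out = []
    for f, sa in zip(*site_spectrum):
        c_sa = interp_loglog(c_freqs, c_sas, f)
        if sa > c_sa:
            out.append((f, sa, c_sa))
    return out

if __name__ == "__main__":
    # Hypothetical spectra: an RG 1.60-shaped CSDRS and an HF-rich hard-rock GMRS
    csdrs = ([0.1, 2.5, 9.0, 33.0, 100.0], [0.08, 0.78, 0.78, 0.30, 0.30])
    gmrs = ([0.5, 5.0, 25.0, 50.0, 100.0], [0.05, 0.40, 0.60, 0.45, 0.25])
    for f, sa, c_sa in exceedances(csdrs, gmrs):
        print(f"GMRS exceeds CSDRS at {f:.1f} Hz: {sa:.2f} g vs. {c_sa:.2f} g")

For the hypothetical spectra shown, the exceedances occur only in the HF range, mirroring the pattern described for CEUS hard-rock sites.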

Site-specific reconciliation is warranted if one or more of the aforementioned conditions cannot be satisfied. As more COL applicants move forward, many applications are requiring such reconciliation, which consists of demonstrating that the critical sections of structures, as reported in the DCD, remain adequate for the forces and moments resulting from site-specific conditions. (This evaluation often works out favorably because most standard plants have been designed


very conservatively, using the envelope of design requirements stemming from several generic soil profiles.) The reconciliation leads to either (1) demonstrating that the site-specific ISRS are enveloped by those reported in the DCD, or (2) generating a new (higher) set of ISRS for the applicant’s use during the detailed design phase. So, even with the advent of the standard plant concept, industry observers are noting that significant seismic analysis and design activities are taking place and are expected to continue.

STILL-TO-BE-RESOLVED SEISMIC ISSUES

Most major seismic issues have been addressed during the past year or two. However,

several issues, discussed in the following subsections, are still being resolved.

New Ground Motion Models for the CEUS

As mentioned earlier, the magnitude of the standard deviation in the ground motion model can contribute significantly to a high hazard estimate. The problem has been especially significant for the CEUS because, unlike the Western United States, there has been a general lack of specific ground motion attenuation models for this region. To address this issue, the NRC recently sponsored [33] a research project called NGA-East (next generation attenuation models for the region), which is patterned after a similar NGA-West project that was concluded in 2007. It is expected that the NGA-East work will lead to some reduction in seismic hazard estimates for the CEUS, in line with similar benefits realized from the results of the NGA-West project. While the final results from this project are a few years away, it is likely that the nuclear industry will start using some of the research data and formulations as they become available.

Application of FIRS for SSI Analysis

So far, the NRC has indicated that an SSI analysis should be conducted by applying the amplified ground motion at the grade level, whereby the input motion on the embedded portion of the structure (i.e., basement walls and basemat) is determined through a de-convolution analysis. The industry has been arguing against this approach. The NEI issued a white paper in September 2008 (see [34]) and reached an agreement with the NRC to use the performance-based design motion at the foundation level for SSI analysis. The NRC is in the process of documenting this agreement in a new ISG.

Need for Refined Seismic Modeling

The task of generating refined analytical models to accurately capture seismic response for at least 50 Hz excitation is not easy. The following considerations apply for the size of concrete and soil elements:

• For concrete elements, the element size depends on the shear wave velocity through concrete as well as on the vibration frequencies of individual wall and floor panels. Consideration of these factors leads to element sizes in the range of 10 ft by 10 ft (sometimes slightly smaller for thinner floor slabs). The analytical models used for static analysis often have this degree of refinement. The key difference is that static analysis is not nearly as computationally intensive as seismic analysis; as a result, this level of mesh refinement can be considerably challenging for seismic analysis in terms of software and hardware limits.

• For rock sites (with a shear wave velocity of at least 5,000 ft/sec), the maximum size of the soil elements can be as much as 20 ft by 20 ft while still ensuring faithful transmission of 50 Hz waves. This in itself is not a big problem in terms of the potential model size for the SSI analysis. The real problem arises in a common scenario wherein the safety-related structures are underlain by a lens of backfill material with low shear wave velocity (often 800 ft/sec to 1,000 ft/sec). In this situation, the element size must be limited to 3 ft to 4 ft in these soil layers, which in turn balloons the total number of soil elements used for the SSI analysis (the number of soil elements increases quickly because the soil volume modeled for SSI analysis is often quite large). (A sketch of this element-sizing rule follows this list.)
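The element sizes quoted in these bullets follow from the common meshing rule of thumb that an element should not exceed roughly one-fifth of the shortest shear wavelength to be transmitted (wavelength = shear wave velocity divided by the cutoff frequency). The one-fifth fraction is stated here as an assumption for illustration; individual SSI codes and project criteria may adopt a different fraction.

# Sketch: maximum soil element size for transmitting waves up to a cutoff
# frequency, using the rule of thumb h_max = Vs / (n * f_cutoff) with n = 5
# (one-fifth of the shortest wavelength). The divisor n is an assumption;
# some criteria use 4 or another fraction.

def max_element_size_ft(vs_fps: float, f_cutoff_hz: float, n: int = 5) -> float:
    """Maximum element dimension (ft) for shear wave velocity vs_fps (ft/sec)."""
    wavelength = vs_fps / f_cutoff_hz  # shortest wavelength of interest
    return wavelength / n

if __name__ == "__main__":
    for label, vs in [("hard rock", 5000.0), ("soft backfill", 800.0)]:
        h = max_element_size_ft(vs, 50.0)
        print(f"{label}: Vs = {vs:.0f} ft/sec -> max element size ~ {h:.1f} ft")

For a 50 Hz cutoff, this reproduces the roughly 20 ft elements cited for hard rock (Vs = 5,000 ft/sec) and the 3 ft to 4 ft elements cited for soft backfill (Vs = 800 ft/sec to 1,000 ft/sec).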

The total number of elements dictates the computation time required to solve the SSI problem. Making models work for 50 Hz frequency is a challenge for the whole nuclear industry. This latest NRC requirement has yet to be fully addressed by any of the NSSS vendors and E&C companies. SASSI is the most common and versatile SSI program in the industry; however, both the Bechtel version of the program and the commercially available SASSI version have difficulty meeting the challenge, especially for large structures (i.e., whereas the total number of nodes in SSI analyses was previously less than 100, the number may now reach several tens of thousands for even small structures).


Improvements are therefore needed in terms of both software enhancements and increased hardware capacity using many parallel processors. Before such an effort is undertaken (or in parallel with it), one possible option is to demonstrate the acceptability of a lower cutoff frequency (such as 25 Hz) by showing that the wall and floor responses and associated ISRS remain essentially the same whether the cutoff is 25 Hz or 50 Hz. It is also possible to show that the cutoff frequency can be smaller for soft-to-medium-stiff soil sites, because such soil layers may sufficiently filter out HF transmission into the superstructure. In any case, such demonstrations would likely be structure and/or soil profile specific, rather than grounds for an outright exemption from the current 50 Hz NRC requirement.

Engineered Backfill

Backfill properties are important for foundation design and for developing site-specific seismic responses of the plant structures. However, the fact that the backfill properties can only be measured once backfill is placed during plant construction imposes a condition on the license (inspections, tests, analyses, and acceptance criteria [ITAAC] on backfill) to be met after backfill is placed. This is not a desirable development for COL applicants. The NEI has formed a task force to develop a solution and reach an agreement with the NRC.

Moisture Barrier

All standard designs require a moisture barrier with a defined frictional capacity between the concrete and the barrier. A barrier design that meets the requirements has not been fully developed and is subject to future testing and performance assessment. The COL applications continue to accept a condition on the license (ITAAC) for a satisfactory design of a moisture barrier.

CONCLUSIONS

Several challenging seismic issues in designing nuclear power plants have already been dealt

with, and more are still being addressed. Bechtel has remained engaged with other industry players in successful resolution of these issues and has continued to assist customers (standard plant suppliers and utilities) with the difficult implementation process during the ESP and COL application phases. Thus, while its role has changed with the advent of the standard plant concept, Bechtel remains a key player in the nuclear industry. Bechtel's technical leadership in the seismic arena continues to be vital to maintaining its prominent industry role.

REFERENCES

[1] 10 CFR 100 Subpart A – Evaluation Factors for Stationary Power Reactor Site Applications Before January 10, 1997 and for Testing Reactors.

[2] 10 CFR 100 Subpart B – Evaluation Factors for Stationary Power Reactor Site Applications on or After January 10, 1997.

[3] 10 CFR 50 Appendix S – Earthquake Engineering Criteria for Nuclear Power Plants.

[4] 10 CFR 52 – Licenses, Certifications, and Approvals for Nuclear Power Plants.

[5] Bechtel Power Corporation Technical Bulletin TB 30H-G01G-TB031, Issues and Challenges with the 10 CFR Part 52 Licensing Process, Rev. 0, January 2007.

[6] NUREG-0800, “Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants,” Section 2.5.2, Vibratory Ground Motion, Rev. 2, August 1989.

[7] USNRC Regulatory Guide 1.60, “Design Response Spectra for Seismic Design of Nuclear Power Plants,” Rev. 1, December 1973.

[8] C.A. Cornell, “Engineering Seismic Risk Analysis,” Bulletin of the Seismological Society of America, Volume 58, Issue 5, 1968, pp. 1583–1606, access via <http://bssa.geoscienceworld.org/cgi/reprint/58/5/1583>.

[9] NUREG/CR-5250, “Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains,” Volumes 1–8, January 1989.

[10] EPRI-NP-4726, “Probabilistic Seismic Hazard Evaluations at Nuclear Power Plant Sites in the Central and Eastern United States,” All Volumes, 1989–1991.

[11] NUREG-1488, “Revised Livermore Seismic Hazard Estimates for 69 Nuclear Power Plant Sites East of the Rocky Mountains,” April 1994.

[12] NUREG/CR-6372, “Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts,” 1997.

[13] National Research Council Report, “Review of Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts,” 1997, access via <http://books.nap.edu/openbook.php?record_id=5487>.

[14] USNRC Regulatory Guide 1.165, “Identification and Characterization of Seismic Sources and Determination of Safe Shutdown Earthquake Ground Motion,” Rev. 0, March 1997.

[15] NUREG-1742, “Perspectives Gained from Individual Plant Examination of External Events (IPEEE) Program,” Final Report, All Volumes, April 2002.

[16] DOE Standard 1020-02, “Natural Phenomena Hazards Design and Evaluation Criteria for Department of Energy Facilities,” Department of Energy, January 2002.

[17] ASCE Standard 43-05, “Seismic Design Criteria for Structures, Systems, and Components in Nuclear Facilities,” American Society of Civil Engineers, January 2005.

[18] S.R. Malushte and R.P. Kennedy, “Implications From Past Seismic Safety Assessments on Development of a Risk-Based Seismic Design Philosophy,” 18th International Conference on


Structural Mechanics in Reactor Technology (SMiRT 18), Beijing, China, August 7–12, 2005, Paper SMiRT18-K01-3 <http://www.iasmirt.org/iasmirt-2/SMiRT18/K01_3.pdf>.

[19] USNRC Regulatory Guide 1.208, “A Performance-Based Approach to Define the Site-Specific Earthquake Ground Motion,” Rev. 0, March 2007.

[20] G. Hardy, “Recent Seismic Research Programs for New Nuclear Power Plants and Performance Based Design,” 19th International Conference on Structural Mechanics in Reactor Technology (SMiRT 19), Toronto, Canada, August 12–17, 2007.

[21] EPRI Report TR-102470, “Analysis of High-Frequency Seismic Effects,” prepared by Jack R. Benjamin and Associates, Inc., and RPK Structural Mechanics Consulting, October 1993.

[22] SECY 93-087, “Policy, Technical and Licensing Issues Pertaining to Evolutionary and Advanced Light-Water Reactor (ALWR) Designs,” April 1993.

[23] EPRI Report 1015108 (Technology Innovation Deliverable), “The Effects of High-Frequency Ground Motion on Structures, Components, and Equipment in Nuclear Power Plants,” Technical Update, June 2007.

[24] ASCE Standard 4, “Seismic Analysis of Safety-Related Nuclear Structures and Commentary,” American Society of Civil Engineers, 1998.

[25] USNRC Regulatory Guide 1.138, “Laboratory Investigations of Soils and Rocks for Engineering Analysis and Design of Nuclear Power Plants,” Rev. 2, December 2003.

[26] COL/DC-ISG-1, “Interim Staff Guidance on Seismic Issues Associated with High Frequency Ground Motion,” May 2008.

[27] NUREG-0800, “Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants,” Section 2.5.2, Vibratory Ground Motion, Rev. 4, March 2007.

[28] NUREG-0800, “Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants,” Section 3.7.1, Seismic Design Parameters, Rev. 3, March 2007.

[29] USNRC Regulatory Guide 1.12, “Nuclear Power Plant Instrumentation for Earthquakes,” Rev. 2, March 1997.

[30] USNRC Regulatory Guide 1.166, “Pre-Earthquake Planning and Immediate Nuclear Power Plant Operator Post-Earthquake Actions,” Rev. 0, March 1997.

[31] EPRI Report, “Hard-Rock Coherency Functions Based on the Pinyon Flat Array Data,” prepared by N. Abrahamson, July 2007.

[32] F. Ostadan and N. Deng, “SASSI-SRSS Approach for SSI Analysis with Incoherent Ground Motions,” Bechtel National report to Nuclear Energy Institute, August 2008.

[33] NRC Seismic Research Program Plan FY 2008–2011, by Structural, Geotechnical & Seismic Engineering Branch, Division of Engineering, Office of Nuclear Regulatory Research, US Nuclear Regulatory Commission, January 2008 <http://peer.berkeley.edu/ngaeast/assets/nrc_research_plan_public.pdf>.

[34] R.P. Kennedy and F. Ostadan, “Consistent Site-Response/Soil-Structure Interaction Calculations,” Workshop on Seismic Issues, presentation to US Nuclear Regulatory Commission, Joint NRC-NEI Seismic Issues Meeting, Palo Alto, California, September 2008.

BIOGRAPHIES

Sanj Malushte, senior principal engineer and Bechtel Fellow, has more than 25 years of varied experience as a practicing civil/structural engineer, engineering supervisor, resident engineer, assistant chief engineer, project engineer, researcher, adjunct (part-time) faculty member,

and technical specialist. During his 19 years at Bechtel Power, Dr. Malushte has served on many US and international fossil and nuclear power projects in various roles during all phases of projects. He has also provided technical consultation to several other projects for Bechtel’s Mining & Metals, OG&C, and Bechtel National divisions.

Outside of Bechtel, Dr. Malushte spent a year designing structures for chemical and industrial process facilities. He also has 9 years of experience as an adjunct faculty member at The Johns Hopkins University, Baltimore, Maryland, where he has taught graduate level structural engineering courses in structural dynamics, advanced steel design, and earthquake engineering/seismic design—subjects that he has taught extensively in-house at Bechtel as well.

Prior to joining Bechtel, Dr. Malushte worked for 6 years as a research/teaching assistant, research associate, and post-doctoral research scientist at the Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg, Virginia, while completing his graduate studies. At Virginia Tech and later at Bechtel, he worked on several research projects funded by the National Science Foundation (NSF), United Technologies Research Center, and Bechtel, in the fields of earthquake engineering, structural dynamics, computational fluid mechanics, and modular walls/floors using steel-concrete composite construction.

Dr. Malushte’s primary expertise is in the fields of earthquake structural engineering, structural mechanics, and analysis for impact loads. He also has significant expertise in interpretation and application of US/international structural codes/standards, steel/concrete/composite design, and use of the finite element method. Dr. Malushte is an active member of several key American Society of Civil Engineers (ASCE) and American Institute of Steel Construction (AISC) code/standard committees related to nuclear and non-nuclear structures. He is also a member of the National Earthquake Hazards Reduction Program (NEHRP) Task Committee for Nonbuilding Structures and Nonstructural Components and the Nuclear Energy Institute (NEI) Seismic Issues Taskforce. Dr. Malushte has been an invited member on peer review panels for the National Institute of Standards and Technology (NIST) and the NSF, and is a past associate editor of ASCE’s Journal of Structural Engineering. He currently serves as an adviser to the Korean Society of Steel Construction (KSSC) for their research


and standardization program on modular composite construction for nuclear facilities, and is the chair of an associated AISC standard committee.

Dr. Malushte has presented several invited seminars worldwide, authored or co-authored more than 25 journal and conference papers, and has served as a peer reviewer for numerous technical journals. He is a fellow of ASCE and UK’s Institution of Civil Engineers, and in 2005, he was elected a Bechtel Fellow—the highest technical recognition conferred within Bechtel.

Dr. Malushte received a PhD and an MS in Engineering Mechanics, and also an MS in Civil Engineering, all from Virginia Tech; a master’s degree in Engineering Management from George Washington University, Washington, DC; and a bachelor’s degree with Honors from the University of Bombay, India. Dr. Malushte is a licensed civil, mechanical, and structural engineer in the United Kingdom (UK), and in several US states, including California.

Orhan Gürbüz is a Bechtel Fellow and senior principal engineer with over 35 years of experience in structural and earthquake engineering. As a Fellow, he is an advisor to senior management on technology issues, and represents Bechtel in technical societies and at industry associations.

As a senior principal engineer, he provides support to various projects and performs design reviews and independent peer reviews. The scope of work includes development of design criteria, seismic evaluations, structural evaluations and investigations, technical review and approval of design, serving as Independent Peer Reviewer for special projects, investigation and resolution of design and construction issues, and supervision of special analyses.

Dr. Gürbüz is a member of the American Society of Civil Engineers’ Dynamic Analysis of Nuclear Structures Committee and the American Concrete Institute 349 Committee. These committees develop and update standards and codes used for the nuclear safety-related structures, systems, and components.

Dr. Gürbüz received a PhD and an MS in Structural Engineering, and a BS in Civil Engineering, all from Iowa State University, Ames, Iowa.

Joe Litehiser, Jr., joined Bechtel straight out of graduate school. During his 38 years with the company, he has worked as a senior geologist and engineering specialist (seismologist) in what is now the Geotechnical & Hydraulic Engineering Services (G&HES) group. During this time,

Dr. Litehiser has developed, reviewed, and/or approved site-specific earthquake design criteria under foreign and domestic regulatory provisions for more than 700 projects across multiple GBUs. He has also published, with Bechtel colleagues, an empirical attenuation relationship for vertical ground motion as well as a probabilistic earthquake shaking hazard algorithm to estimate

probabilistic liquefaction potential; defended seismic design load choices to projects, foreign consultants, and before licensing panels; managed the Seismology/Geophysics Technical Working Group; and developed a white paper on regional seismicity and faulting for the power plant projects in Turkey following a nearby damaging earthquake.

Dr. Litehiser has been quite active in various organizations. He is a member of (and for 20 years was the secretary of) the Seismological Society of America and the Earthquake Engineering Research Institute; is chairman of ANS 2.30, a committee charged with maintaining the nuclear industry standard for characterization of neotectonic features; was a past member and chairman of the San Francisco Seismic Investigations and Hazards Survey Advisory Committee (SIHSAC), a group of technical specialists charged with keeping the San Francisco Board of Supervisors apprised of appropriate earthquake hazard preparedness measures; and is a past member and chairman of the policy advisory board of the Bay Area Regional Earthquake Preparedness Project (BAREPP).

Dr. Litehiser has authored more than 25 papers, reports, and presentations in technical journals or conference proceedings, and has also been the editor for one book. In 2001, he was elected a Bechtel Fellow in recognition of his substantial technical achievement on behalf of Bechtel.

Dr. Litehiser received a PhD in Seismology and an MA in Geophysics, both from the University of California, Berkeley, and an AB in Geology from Indiana University, Bloomington, Indiana. He is a registered geologist in the state of California.

Farhang Ostadan, a Bechtel Fellow, has more than 25 years of experience in geotechnical and geotechnical earthquake engineering and foundation design. As chief soils engineer for Bechtel, he has overall responsibility for this discipline and manages the efforts of a large and diverse

group of geotechnical specialists in locations across the US and around the globe. His project oversight responsibilities range from major transportation projects to petrochemical, nuclear, and power- and energy-related projects.

Dr. Ostadan has published more than 30 technical papers on topics relating to geotechnical earthquake engineering. He co-developed a method for dynamic soil-structure interaction analysis currently in use by the industry worldwide. Dr. Ostadan is a frequent lecturer at universities and research organizations.

Dr. Ostadan is currently a member of the American Society of Civil Engineers (ASCE), Geotechnical Division; the Earthquake Engineering Research Institute (EERI); and the National Earthquake Hazard Reduction Program (NEHRP) Foundation Committee, and is a past member of California’s Seismic Safety Commission.

Dr. Ostadan received a PhD in Civil Engineering from the University of California, Berkeley; an MS in Civil Engineering from the University of Michigan, Ann Arbor; and a BS in Civil Engineering from the University of Tehran, Iran.


Bechtel Fellows

Chosen for their substantial technical achievement over the years, the Bechtel Fellows advise senior management on questions related to their areas of expertise, participate in strategic planning, and help disseminate new technical ideas and findings throughout the organization.

Prem Attanayake, PhD; Amos Avidan, PhD; August Benz; Siv Bhamra, PhD; Peter Carrato, PhD; Doug Elliot, PhD; Angelos Findikakis, PhD; Benjamin Fultz; Orhan Gürbüz, PhD; William Imrie; Joe Litehiser, PhD; Jake MacLeod; Sanj Malushte, PhD; Cyrus B. Meher-Homji; Ram Narula; Farhang Ostadan, PhD; Stew Taylor, PhD; Linda Trocki, PhD; Ping Wan, PhD; Fred Wettling

Bechtel Global Business Units

Bechtel Systems & Infrastructure, Inc. (BSII)
BSII (US Government Services) engages in a wide range of government and civil infrastructure development, planning, program management, integration, design, procurement, construction, and operations work in defense, demilitarization, energy management, telecommunications, and environmental restoration and remediation.

Civil
Civil Infrastructure is a global leader in developing, managing, and constructing everything from airport, rail, and highway systems to regional development programs; from ports, bridges, and office buildings to theme parks and resorts.

Communications
Communications integrates mobilization speed and a variety of disciplines, experience, and scalable resources to quickly and efficiently deliver end-to-end deployment services for wireless, wireline, and other telecommunications facilities around the world.

Mining & Metals (M&M)
Mining & Metals excels at completing logistically challenging projects—often in remote areas—involving ferrous, nonferrous, precious, and light metals, as well as minerals and industrial metals, on time and within budget.

Oil, Gas & Chemicals (OG&C)
Oil, Gas & Chemicals has the experience with a broad range of technologies and optimized plant designs that sets us apart as a worldwide leader in constructing oil, gas, petrochemical, LNG, pipeline, and industrial facilities.

Power
Power is helping the world to meet—in ways no other company can match—an ever-greater energy demand by designing and constructing electric generation facilities burning fossil and nuclear fuels and by providing services for existing plants.
