
The Tower Undergraduate Research Journal Volume II

The Tower is Georgia Tech's Undergraduate Research Journal, where undergraduate researchers showcase their research in a peer-reviewed, on-campus journal. Print copies are available on newsstands around Georgia Tech. In this volume of The Tower, we introduce a new type of submission called the perspective article, in which students give an overview of why their field of research is important. The Tower is an interdisciplinary research journal for undergraduate students at the Georgia Institute of Technology. The goals of our publication are to showcase undergraduate achievements in research, inspire academic inquiry, and promote Georgia Tech's commitment to undergraduate research endeavors. Visit our website, gttower.org, for more information.


Mission Statement

The Tower is an interdisciplinary research journal for undergraduate students at the Georgia Institute of Technology.

The goals of our publication are to:

■ showcase undergraduate achievements in research;
■ inspire academic inquiry; and
■ promote Georgia Tech's commitment to undergraduate research endeavors.

The Editorial Board

Chuyong Yi, Editor-in-Chief 2008-2010, [email protected]
Michael Chen, Editor-in-Chief 2010-2011, [email protected]
Weigel, Associate Editor for Submission & Review, [email protected]
Valenti, Associate Editor for Production, [email protected]
Keller, Business Manager, [email protected]
Dr. Karen Harwell, Faculty Advisor, [email protected]

Copyright Information

© 2010 The Tower at Georgia Tech. Office of Student Media, 353 Ferst Drive, Atlanta, GA 30332-0290.

Staff

Undergraduate Reviewers
Michael Chen, Helen Xu, Andrew Ellet, Shereka Banton, Zane Blanton, Alexander Caulk, Azeem Bande-Ali, Katy Hammersmith, Ian Guthridge

Graduate Reviewers
Yogish Gopala, Michelle Schlea, Dongwook Lim, Rachel Horak, Lisa Vaughn, Rick McKeon, Shriradha Sengupta, Hilary Smith, Nikhilesh Natraj, Partha Chakraborty, David Miller

Production
J.B. Sloan, Grace Bayona, Matthew Postema

Webmaster
Andrew Ash

Business
Jen Duke


Acknowledgements

Special Thanks
The Tower would not have been possible without the assistance of the following people:

Dr. Karen Harwell, Faculty Advisor, Undergraduate Research Opportunities Program (UROP)
Dr. Tyler Walter, Library
Dr. Lisa Yasek, Literature, Communication & Culture
Dr. Richard Meyer, Library
Dr. Kenneth Knoespel, Literature, Communication & Culture
Dr. Steven Girardot, Success Programs
Dr. Rebecca Burnett, Literature, Communication & Culture
Dr. Thomas Orlando, Chemistry & Biochemistry
Dr. Anderson Smith, Office of the Provost
Dr. Dana Hartley, Undergraduate Studies
Mr. John Toon, Enterprise Innovation Institute
Dr. Milena Mihail, Computing Science & Systems
Dr. Pete Ludovice, Chemical & Biomolecular Engineering
Dr. Han Zhang, College of Management
Mr. Michael Nees, Psychology
Mr. Jon Bodnar, Library
Ms. Marlit Hayslett, Georgia Tech Research Institute (GTRI)
Mr. Mac Pitts, Student Publications

Faculty Reviewers
Dr. Suzy Beckham, Chemistry & Biochemistry
Dr. Tibor Besedes, Economics
Dr. Wayne Book, Mechanical Engineering
Dr. Monica Halka, Honors Program & Physics
Dr. Melissa Kemp, Biomedical Engineering, GT/Emory
Dr. Narayanan Komerath, Aerospace Engineering
Mr. Michael Nees, Psychology
Dr. Manu Platt, Biomedical Engineering, GT/Emory
Dr. Lakshmi Sankar, Aerospace Engineering
Dr. Franca Trubiano, College of Architecture

Review Advisory Board
Dr. Ron Broglio, Literature, Communication & Culture
Dr. Amy D'Unger, History, Technology & Society
Dr. Pete Ludovice, Chemical & Biomolecular Engineering
Mr. Michael Laughter, Electrical & Computer Engineering
Dr. Lakshmi Sankar, Aerospace Engineering
Dr. Jeff Davis, Electrical & Computer Engineering
Mr. Michael Nees, Psychology
Dr. Han Zhang, College of Management
Dr. Franca Trubiano, College of Architecture
Dr. Milena Mihail, Computing Science & Systems
Dr. Rosa Arriaga, Interactive Computing
Mr. Jon Bodnar, Library
Ms. Marlit Hayslett, Georgia Tech Research Institute (GTRI)
Dr. Monica Halka, Honors Program & Physics

The Tower would also like to thank the Georgia Tech Board of Student Publications, the Undergraduate Research Opportunities Program, the Georgia Tech Library and Information Center, and the Student Government Association for their support and contributions.


Letters from the Editors

Seneca once said, "Every new beginning comes from some other beginning's end," and as I pen this last letter as the Editor-In-Chief of The Tower, the quote seems truer by the minute. Reflecting on working with The Tower for the last three years, I am flooded with fond memories.

I remember the weekly Friday meetings where Henry (former Associate Editor for Production who had a knack for design), Joseph (former Webmaster who seemed to be able to translate any idea into code), and I worked until midnight in our corner office laying out articles with an occasional Taco Bell break. Then there was the occasional lunch break where Martha, Henry, Erin, and I, despite the name of the meeting, skipped our lunch to have a board meeting. I remember, of course, the second editorial board and our couch meetings where we shared our personal moments and grew closer than we ever would have by simply working together.

This, I believe, is why The Tower saw such success in its first three years. Everyone who was involved in the journal had a personal connection either to the journal or to those working on it. Such an intimate working environment allowed each member's passion and motivation for the journal to propagate freely when the team needed it most. We performed not for the position that we each filled but for each other.

The Tower and I had a rather symbiotic relationship: as it grew, I grew. I sincerely hope that Michael, the new Editor-In-Chief, and his new editorial board will enjoy learning from the journal as much as I did. I have full faith that, upon my departure, Michael and his team will bring forth a new beginning in which they take the journal to a higher level of success.

I would like to thank each and every individual reviewer (faculty and student), production team member, and business team member who made this journal possible. I would especially like to thank our faculty advisor, Dr. Karen Harwell, for her indispensable wisdom that helped me grow into the editor that I needed to be, and our Director of Student Media, Mr. Mac Pitts, and his assistants, Ms. Nyla Hughes and Ms. Marlene Beard-Smith, for their support and guidance. Lastly, I would like to thank the Georgia Tech Board of Student Publications for its immense support, the Georgia Tech Library for providing us with an online platform that is integral to the workings of the journal, SGA for providing generous funding for our print journals, and the Omega Phi Alpha service sorority for providing us with manpower when we needed it the most.

Thank you to all who were involved in The Tower, and I wish much luck to Michael and his new editorial board! I believe in you guys!

Best,

Chuyong Yi

Editor-In-Chief, 2008-2010


The first time I met Chuyong Yi, our outgoing Editor-In-Chief, she was interviewing me at the Barnes & Noble Starbucks. Chu was full of words and highly energetic, so I naturally thought it was the caffeine talking. After our next couple of encounters, I realized that this was actually her personality. During my time at The Tower, I saw that The Tower was a reflection of our Editor-In-Chief's personality: bursting with enthusiasm.

Chu presided over the publication of our first two print journals. I would like to congratulate her for being our leader during the greatest expansion our journal, The Tower, has ever seen. Under Chu, The Tower stayed true to its mission of showcasing undergraduate research and providing a learning tool to aspiring scientists. Our journal, initially started by a small group of undergraduates with a tremendous passion for academic research, now has nearly 40 staff members. Without her hard work and the hard work of each of our staff, our goal of providing an outlet to undergraduate researchers would not be possible.

Sadly, another member of our family is leaving. Known for her tremendous enthusiasm for undergraduate research, our faculty advisor, Dr. Karen Harwell, is leaving Georgia Tech to pursue other career aspirations. A mentor to all of us here at The Tower, she will be sorely missed.

I hope to carry on the work of Chu, Dr. Harwell, and our founding members by increasing the publication frequency of our print journal and broadening the material that we publish. I plan to increase our collaboration with student publications dedicated to college-specific research news. I will work to increase our internet presence so that our journal is accessible anywhere and at any time. With the hard work of our top-notch staff and the advice of the Student Publications Board, I hope to further expand The Tower to greater heights.

Special thanks to our staff reviewers who endured reading submission after submission without rest, our production team who spent countless hours designing the journal, and our business team members. Thanks to the editorial board for their repeated commitment to The Tower, the Library for providing us an online journal system, and the Student Government Association for making The Tower's print journal possible. Finally, I especially thank Mr. Mac Pitts, the Director of Student Media, for advising us in tough times, and his assistants, Marlene Beard-Smith and Nyla Hughes.

Keep on the lookout for new material from The Tower, whether online or in print. I encourage you to visit our website, gttower.org, for more information about what we do and how you can get involved.

Cheers,

Michael Chen

Editor-In-Chief, 2010-2011


Getting Involved

Call for Papers
The Tower is seeking submissions for our Fall 2009 issue. Papers may be submitted in the following categories:

Article: the culmination of an undergraduate research project; the author addresses a clearly defined research problem.

Dispatch: reports recent progress on a research challenge; narrower in scope than an article.

Perspective: provides a personal viewpoint and invites further discussion through literature synthesis and/or logical analysis.

If you have questions, please e-mail [email protected]. For more information, including detailed submission guidelines and samples, visit gttower.org.

Cover Design Contest
This year's cover was designed by Esther Chung.

The Tower is looking for its next cover design. The submission deadline is February 5, 2011. The top design will win $50 and a t-shirt. Get creative! The template is available at gttower.org. Please avoid copyrighted images. Final designs should be submitted to [email protected].

Staff
Want to be involved with The Tower behind the scenes? Become a member of the staff! The Tower is always accepting applications for new staff members. Positions in business, production, review, and web development are available. Visit gttower.org or email [email protected] for more information on staff position availability.


Table of Contents

Perspectives
Value Sensitive Programming Language Design (Nicholas Marquez)
Synthetic Biology: Approaches and Applications of Engineered Biology (Robert Louis Fee)

Articles
Network Forensics Analysis Using Piecewise Polynomials (Sean Marcus Sanders)
Characterization of the Biomechanics of the GPIbα-vWF Tether Bond Using von Willebrand Disease Causing Mutations R687E and WT vWF A1A2A3 (Venkata Sitarama Damaraju)
Moral Hazard and the Soft Budget Constraint: A Game-Theoretic Look at the Primal Cause of the Sub-Prime Mortgage Crisis (Akshay Kotak)
Compact Car Regenerative Drive Systems: Electrical or Hydraulic (Quinn Lai)
Switchable Solvents: A Combination of Reaction and Separations (Georgina W. Schaefer)


A programming language is a user interface. In designing a system's user interface, it is not controversial to assert that a thoughtful consideration of the system's users is paramount; indeed, consideration of users has been a primary focus of Human-Computer Interaction (HCI) research. General-purpose programming language design has not had much need for disciplined HCI methodology because programming languages have been designed by programming language users themselves. But what happens when programmers design languages for non-programmers? In this paper we claim that the application of a particular design methodology from HCI, Value Sensitive Design, will be valuable in designing languages for non-programmers.

Value Sensitive Programming Language Design
Nicholas Marquez
School of Computer Science, Georgia Institute of Technology

Advisor: Charles L. Isbell
School of Computer Science, Georgia Institute of Technology


Introduction
A programming language is a user interface. In designing a system's user interface, it is not controversial to assert that a thoughtful consideration of the system's users is paramount. Though there is a large body of research from the Human-Computer Interaction (HCI) research community studying just how best to consider a system's users in the design of its interface, there is little history of applying any of these methodologies from HCI to the design of general-purpose programming languages. Ken Arnold has argued that, since programmers are human, programming language design should employ techniques from HCI (Arnold 2005). While there has been some work in applying HCI to the design of languages for non-programmers, for example, for children's programming environments (Pane et al. 2002), general purpose programming languages have not suffered much from a lack of HCI methodology in their design because programming languages have been designed by programmers, for programmers. In other words, programming language design has not had much need for disciplined HCI methodology because programming languages have been designed by programming language users themselves. But what happens when programmers design languages for non-programmers? How does the language designer know which design decisions to make? We claim that these questions can and should be answered with the help of a disciplined application of design methodologies developed in the HCI community.

We are designing a language for non-programmers who use computational models in the conduct of their non-programming work, in particular social scientists and game developers who write intelligent agent-based programs. Agent-based programming has, as one of its primary abstractions, "agents" that interact with each other and their environment asynchronously, maintain their own state, and are generally analogous to individual beings within an environment. In designing this language, we believe that working closely with our intended users is crucial to the development of tools that will meet their needs and be adopted. To guide our design interactions with our users we are applying the Value Sensitive Design (VSD) (Friedman et al. 2006; Le Dantec et al. 2009) methodology from HCI. In this paper we give a short description of VSD and discuss how it may be applied to the design of our programming language. This work is currently at an early stage, and our understanding and application of VSD is evolving. Nevertheless we believe that the application of HCI methodologies in general, and VSD in particular, will be extremely valuable in the development of languages and software tools that are intended for non-programmers, that is, professionals for whom programming is an important activity but not the primary focus of their work.

In the next section we provide a brief description of Value Sensitive Design (VSD); we then propose a way of applying VSD to programming language design and conclude with a discussion of how we are applying it in our own language design project.

Value Sensitive Design
In this section we briefly describe VSD as detailed in Friedman et al. (2006). We begin with their definition of VSD:

Value Sensitive Design is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner.

In this context, a value is something that a person considers important in life. Values cover a broad spectrum from the lofty to the mundane, encompassing things like accountability, awareness, privacy, and aesthetics: anything a user considers important. While VSD uses a broader meaning of value than that used in economics, it is important to rank values so that conflicts can be resolved when competing values suggest different design choices.

VSD employs an iterative, interleaved, tripartite methodology that includes conceptual, empirical, and technical investigations. In the following sections we describe each of these types of investigations.

Conceptual Investigations
We think of conceptual investigations as analogous to domain modeling. In conceptual investigations we specify the components of particular values so that they can be analyzed precisely. We specify what a value means in terms useful to a programming language designer. Conceptual investigations may be done before significant interaction with the target audience takes place. As is characteristic of VSD, however, conceptualizations are revisited and augmented as the design process proceeds in an iterative and integrative fashion.

An important additional part of conceptual investigation is stakeholder identification. Direct stakeholders are straightforward: they are the people who will be writing code in your language using the tools you provide. However, it is important to consider indirect stakeholders as well. For example, working programs may need to be delivered by your direct stakeholders to third parties; these third parties constitute indirect stakeholders. The characteristics of indirect stakeholders will implicate values that must be supported in the design of your language. If the indirect stakeholders are technically unsophisticated, for example, then the language must support the delivery of code that is easy to install and run.

Empirical Investigations
Empirical investigations include direct observations of the human users in the context in which the technology to be developed will be situated. In keeping with the iterative and integrative nature of VSD, empirical investigations will refine and add to the conceptualizations specified during conceptual investigations.

Because empirical investigations involve the observation and analysis of human activity, a broad range of techniques from the social sciences may be applied. Of all the aspects of VSD, empirical investigation is perhaps the most foreign to the typical technically focused programming language designer. However, as computational tools and methods reach deeper into realms not previously considered, we believe empirical investigations are crucial to making these new applications successful.

Technical Investigations
Technical investigations interleave with conceptual and empirical investigations in two important ways. First, technical investigations discover the ways in which users' existing technology supports or hinders the values of the users. While these investigations are similar to empirical investigations, they are focused on technological artifacts rather than humans. The second important mode of technical investigations is proactive in nature: determining how systems may be designed to support the values identified in conceptual investigations.

Applying VSD to Programming Language Design
In this section we discuss the ways in which we are applying VSD to the design of a programming language. First we discuss the language itself and the target audience of our language.

AFABL: A Friendly Adaptive Behavior Language
AFABL (which is the evolution of Adaptive Behavior Language) integrates reinforcement learning into the programming language itself to enable a paradigm that we call partial programming (Simpkins et al. 2008). In partial programming, part of the behavior of an agent is left unspecified, to be adapted at run-time. Reinforcement learning is an area of machine learning focused on having an agent perform actions in its environment that optimize (usually maximize) some notion of reward. Using the reinforcement learning model, the programmer defines elements of behaviors (states, actions, and rewards) and leaves the language's runtime system to handle the details of how particular combinations of these elements determine the agent's behavior in a given state. AFABL allows an agent programmer to think at a higher level of abstraction, ignoring details that are not relevant to defining an agent's behavior. When writing an agent in AFABL, the primary task of the programmer is to define the actions that an agent can take, define whatever conditions are known to invoke certain behaviors, and define other behaviors as "adaptive," that is, to be learned by the AFABL runtime system.
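To make the partial programming idea concrete, the sketch below shows how an agent definition along these lines might look in a Scala-flavored embedded DSL. The paper does not give AFABL's actual syntax, so every name here (the bunny-style domain, Behavior, the arbitration function) is an illustrative assumption rather than AFABL code; the point is only that explicit behaviors carry a reward signal while the arbitration among them is left for a learned policy to fill in.

```scala
// Hypothetical sketch only: AFABL's actual syntax is not shown in this paper.
// It illustrates partial programming in a Scala-like embedded DSL: the
// programmer names states, actions, and rewards, writes some behaviors
// explicitly, and leaves the arbitration among them "adaptive" for the
// runtime's reinforcement learner to fill in.
object PartialProgrammingSketch {

  // World state as the agent perceives it (an invented example domain).
  case class WorldState(distanceToFood: Int, distanceToPredator: Int)

  // Actions the agent may take.
  sealed trait Action
  case object MoveTowardFood   extends Action
  case object FleeFromPredator extends Action

  // A behavior pairs an action-selection rule with a reward signal that a
  // reinforcement learner could use to score it.
  case class Behavior(choose: WorldState => Action, reward: WorldState => Double)

  // Fully specified behaviors: the programmer writes these rules explicitly.
  val eat = Behavior(
    choose = _ => MoveTowardFood,
    reward = s => if (s.distanceToFood == 0) 1.0 else 0.0
  )
  val avoid = Behavior(
    choose = _ => FleeFromPredator,
    reward = s => if (s.distanceToPredator > 3) 1.0 else -1.0
  )

  // "Adaptive" arbitration: which behavior wins in a given state is left
  // unspecified; a learned policy (a placeholder here) fills it in at run-time.
  def act(policy: WorldState => Behavior)(s: WorldState): Action =
    policy(s).choose(s)

  def main(args: Array[String]): Unit = {
    // Stand-in for the policy the runtime system would learn.
    val learnedPolicy: WorldState => Behavior =
      s => if (s.distanceToPredator < 2) avoid else eat
    println(act(learnedPolicy)(WorldState(distanceToFood = 5, distanceToPredator = 1)))
    // prints: FleeFromPredator
  }
}
```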

We are designing AFABL for social scientists and other agent modelers who are not programmers per se but who employ programming as a basic part of their methodological toolkit. We also hope to encourage greater use of agent modeling and simulation among practitioners who currently do not use agent modeling, and among agent modelers who would write more complex models if given the tools to do so more easily. Since these kinds of users have very different backgrounds from programmers, it is important to understand their needs and values in designing tools intended for their use. We believe that VSD will be one methodological tool among perhaps many that will help us understand our target audience and truly incorporate their input into the design process. In the next section we discuss how this process is taking place in the design of our language.

Using VSD in the Design of AFABL
We are working with two different groups who are currently using agent modeling in their work. The first group, the OrgSim group (Bodner and Rouse 2009), is a team of industrial engineers who are studying organizational decision-making using agent-based simulations as well as other more traditional forms of simulation. The OrgSim group wants to model the human elements of organization in order to create richer, more realistic models that can account for human biases, personality, and other factors that can be simulated only coarsely, if at all, using traditional simulation techniques. The second group is a team of computer game researchers creating autonomous intelligent agents that are characters in interactive narratives (Riedl et al. 2008; Riedl and Stern 2006).

Both of these groups are using the most advanced agent modeling language available to them: ABL (A Behavior Language) (Mateas and Stern 2004). ABL was created in the course of a Computer Science PhD by a games researcher to meet his own needs in creating a first-of-its-kind game, an interactive drama. ABL was not designed with the help of, or with the goal of assisting, non-programmer expert agent modelers. Naturally, both groups have met with difficulty in using ABL. By using VSD in working with these groups we hope to meet their needs with AFABL.

Conceptual Investigations of Agent Modelers
As described earlier, conceptual investigations yield working definitions of values that can be used in the design of technological artifacts, in our case the AFABL programming language. In our conceptual investigations thus far we have identified several values, whose conceptualizations as we currently understand them are listed below.

• Simplicity. Here simplicity has two essential features. First, AFABL must be consistent in its design, both internally and in the extent to which it exploits the users' current knowledge of programming. Internal consistency means that when users encounter a new language construct for the first time, they should be able to apply their knowledge of analogous constructs they already know. External consistency means that AFABL should use programming constructs that users already know from other languages and require users to learn as few new language constructs as possible.

• Power. A language is sufficiently powerful if it allows its programmers to reasonably and easily write all the programs they want to write in the language. If a language makes it hard to write certain types of programs, then those programs will usually not be written, thus limiting the scope of use of the language. Naturally, power trades off with simplicity, but simplicity at the expense of essential power is unacceptable to our target audience. In the design implications section below we discuss strategies for dealing with the power versus simplicity issue.

• Participation. Our user communities are eager to contribute to the design of AFABL and to its documentation and the development of best practices. We welcome this participation and believe that it will positively impact adoption, both with the users with whom we are already working and with new users who will be influenced by our early adopters. VSD directly supports and encourages this participation in the design process.

• Growth. The language we develop and the theoretical models of intelligent and believable agents that we employ today may not be the last word. It is important that AFABL be able to accommodate new models and applications.

• Modeling Support. A modeling tool imposes a structure on the way an agent modeler thinks about agents. AFABL should do so in a helpful way, if possible, but should certainly not hinder particular ways of thinking about agents.

Empirical Investigations of Agent Modelers
Solving a problem requires an understanding of the problem. The problem in our case is the experience of agent modelers in using the computational tools at their disposal. To understand the problems agent modelers face and their desiderata for computational tools, we are joining their teams and using their existing tools alongside them. In doing so we hope to gain an appreciation for the goals of their work, the expertise they bring to the task, and the difficulties they have in using existing tools to accomplish their goals. We hope to gain a level of empathy that will help us develop a language and tools that will meet their needs very well.

Technical Investigations of Agent Modelers
What do they already use? How do their existing tools support or hinder their values? What technology choices do we have at our disposal to support their values? These are the kinds of questions we address in technical investigations. In our case, there is a rich tapestry of software tools already in use by our users. These tools include virtual worlds (simulation platforms and game engines) and editing tools for the programs they currently write. Some of these tools are essential to their work and some may be replaced with tools we develop. One overriding value that stems from our users' existing tool base is interoperability. Any language or tool we develop must support interoperability with their essential tools.

Implications of Values on Programming Language Design
We are already familiar with many values supported by the general purpose programming languages we use. C supports values like efficiency and low-level access. Lisp supports values like extensibility and expressiveness. Python supports simplicity. In this section we discuss how some of the values we identified above may impact the design of our language.


Interoperability. It is essential that our language and tools support interoperability with the virtual worlds currently in use. In our technical investigations we have found that the simulation platforms and game engines in use support Java-based interfaces, and many of them run on the Java Virtual Machine (JVM). Since these projects also use ABL, they have existing code bases that use the ABL run-time system and bridge libraries that enable communication between ABL agents and virtual worlds. These technical investigations lead us to the following design decisions for AFABL:

• AFABL will run on the JVM. Currently, we are planning to implement AFABL as an embedded domain specific language (EDSL) written in the Scala programming language (Odersky et al. 2008). This will allow us to interoperate well with Java programs and ABL while providing advanced language features in the design and implementation of our language.

• AFABL will use the ABL run-time system. While we have decided to depart from the syntax and language implementation of ABL, the agent model and run-time system of ABL represent a tremendous amount of valuable work that we wish to build on, not reinvent. Additionally, using the ABL run-time system will allow us to make use of the existing bridges between ABL agents and virtual worlds.

Simplicity and Power. Simplicity and power often oppose each other when making design decisions, so we discuss these values here together. We hope to maximize both power and simplicity with the following language features:

• Provide a simple, consistent set of primitives and syntax while providing expressive power through first-class functions, closures, objects, and modules. Languages like Ruby and Python can be used by programming novices as well as by expert programmers who use advanced expressive features such as iterators, comprehensions, closures, and metaprogramming. We intend to employ the same strategies in the design of AFABL (a brief illustration follows this list).

• Feel free to make presumptions / Optimize for the common case. A great majority of the time, modelers will be using similar methods and approaches. There should be as little friction between the modeler's thoughts and the compiler's input as possible. Being able to make sound prejudgments about the programming language's users and the patterns of programming they exhibit is key to opening up a whole class of optimizations and simplifications that can help both the user and the compiler. In the context of AFABL, this means that we need to evaluate our design at every step with our target user base and should employ, e.g., VSD in doing so.

• Do not assume anything / Keep uncommon and unforeseen cases possible. Only close off the language where leaving it open would create great disparity in future implementations or where necessary optimizations demand it. In the latter case (should the common case be in use), the alternate, optimized, but less extensible implementation can be used. One should not outright assume anything about the user (because this would restrict future ways in which the language could be used), and should take care to properly document and account for any presumptions. We must be careful not to focus AFABL too narrowly on our VSD-driven presumptions, lest we unintentionally restrict the ease of use of the language for other types of modelers and users.
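As a small illustration of the first point in the list above (illustrative only, not AFABL code; the names and the toy computation are assumptions for the example), one language surface can serve a novice writing a plain loop and an expert leaning on first-class functions and closures:

```scala
// Illustrative only: the same small language surface can serve a novice
// writing a plain loop and an expert using first-class functions and
// closures for the same kind of computation.
object SimplicityAndPower {
  def main(args: Array[String]): Unit = {
    // Novice style: a straightforward imperative loop.
    var total = 0
    for (x <- List(1, 2, 3, 4)) total += x
    println(total) // 10

    // Expert style: a first-class function passed to foldLeft...
    val sum = List(1, 2, 3, 4).foldLeft(0)(_ + _)
    println(sum) // 10

    // ...and a closure capturing a configurable scaling factor.
    def scaledSum(factor: Int): List[Int] => Int = xs => xs.map(_ * factor).sum
    println(scaledSum(2)(List(1, 2, 3))) // 12
  }
}
```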

Participation. Our users have expressed strong interest in contributing to the design, documentation, and practices of AFABL. To accommodate our users' desire for participation we anticipate the following features of AFABL:

• Iterative language development. By designing AFABL around a small set of primitives, we hope to get the language into the hands of users early in its development. That way users can experiment with the language and provide feedback throughout its development. Put another way, AFABL will be developed with agile software development practices.

• User-accessible documentation system. Many languages already provide programmers with the means to automatically generate documentation from source code. Many language communities also provide user-accessible documentation systems, such as wikis and web forums, whereby users can share their knowledge and contribute directly to the documentation base of the language. We will employ similar mechanisms for AFABL.

Growth. New theories of agent modeling and new virtual worlds will be created in the future. To accommodate these changes, we will design AFABL for growth in the following two ways:

• Support new run-time systems. The ABL run-time system represents a particular way of modeling behavioral agents. It may be possible to support new agent theories by connecting AFABL with new run-time systems.

• Support the full range of operating system and JVM intercommunication. By providing a full set of intercommunication mechanisms, such as pipes, sockets, file system access, and JVM interoperability, AFABL should be able to accommodate new virtual world environments.

Conclusion
In this paper we have taken the position that design methodologies from the HCI research community can be of great benefit in the development of programming languages. Among the design processes we are employing, we have singled out Value Sensitive Design and described how it can be used in the design of programming languages and tools for a non-traditional population of programmers, in our case agent modelers such as social scientists and game designers.

Acknowledgements
I wish to thank David Roberts for suggesting the use of Value Sensitive Design, and Doug Bodner and Mark Riedl for allowing us to participate in their projects and for their help in designing AFABL.


References

Arnold, K. Programmers are people, too. Queue, 3(5):54-59, 2005.

Bodner, D. A. and Rouse, W. B. Handbook of Systems Engineering and Management, chapter Organizational Simulation. Wiley, 2009.

Friedman, B., Kahn, P. H., Jr., and Borning, A. Human-Computer Interaction in Management Information Systems: Foundations, chapter 16. M.E. Sharpe, Inc, NY, 2006.

Le Dantec, C. A., Poole, E. S., and Wyche, S. P. Values as lived experience: Evolving value sensitive design in support of value discovery. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2009), Boston, MA, USA, April 2009.

Mateas, M. and Stern, A. Life-like Characters: Tools, Affective Functions and Applications, chapter A Behavior Language: Joint Action and Behavioral Idioms. Springer, 2004.

Odersky, M., Spoon, L., and Venners, B. Programming in Scala. Artima, 1st edition, 2008.

Pane, J. F., Myers, B. A., and Miller, L. B. Using HCI techniques to design a more usable programming system. In Symposium on Empirical Studies of Programmers (ESP02), Proceedings of 2002 IEEE Symposia on Human Centric Computing Languages and Environments (HCC 2002), Arlington, VA, September 2002.

Riedl, M. O. and Stern, A. Believable agents and intelligent scenario direction for social and cultural leadership training. In Proceedings of the 15th Conference on Behavior Representation in Modeling and Simulation, Baltimore, Maryland, 2006.

Riedl, M. O., Stern, A., Dini, D., and Alderman, J. Dynamic experience management in virtual worlds for entertainment, education, and training. International Transactions on Systems Science and Applications, Special Issue on Agent Based Systems for Human Learning, 4(2), 2008.

Simpkins, C., Bhat, S., and Isbell, C. L., Jr. Towards adaptive programming: Integrating reinforcement learning into a programming language. In OOPSLA '08: ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, Onward! Track, Nashville, TN, USA, October 2008.


Synthetic biology is expected to change how we understand and engineer biological systems. Lying at the intersection of molecular biology, physics, and engineering, the applications of this exploding field will both draw from and add to many existing disciplines. In this perspective, the recent advances in synthetic biology towards the design of complex, artificial biological systems are discussed.

Synthetic Biology: Approaches and Applications of Engineered Biology
Robert Louis Fee
School of Chemistry and Biochemistry, Georgia Institute of Technology

Advisor: Friedrich C. Simmel
School of Physics, Technical University Munich


The Rise of Synthetic Biology
Several remarkable hurdles in the life sciences have been cleared during the last half of the 20th century, from the discovery of the structure of DNA in 1953, to the deciphering of the genetic code, the development of recombinant DNA techniques, and the mapping of the human genome. Scientists have routinely tinkered with genes for the last 30 years, even inserting a cold-water fish gene into wheat to improve weather resistance; thus, synthetic biology is by no means a new science. Synthetic biology is a means to harness the biosynthetic machinery of organisms on the level of an entire genome, to make organisms do things in ways nature has never done before.

Synthetic biology, despite its long history, is still in the early stages of development. The first international conference devoted to the field was held at M.I.T. in June 2004. The leaders sought to bring together "researchers who are working to design and build biological parts, devices, and integrated biological systems; develop technologies that enable such work; and place this scientific and engineering research within its current and future social context" (Synthetic Biology 101, 2004). The field is growing quickly, as evidenced by the rapidly increasing number of genetic discoveries, the exploding number of research teams exploring the field, and the funding from government and industrial sources.

Akin to the descriptive-to-synthetic transformation of chemistry in the 1900s, biological synthesis forces scientists to pursue a "man-on-the-moon" goal that demands they discard erroneous theories and solve problems not encountered in observation. Data contradicting a theory can sometimes be excluded for the sake of argument, but doing the same while building a lunar shuttle would be disastrous. Synthetic biology comes at an important time; by creating analogous "man-on-the-moon" engineering goals in the form of synthetic bioorganisms, it is similarly driving scientists towards a deeper understanding of biology.

Applications of Engineered Organisms
Advances in synthetic biology are expected to create important advances in applications too diverse and numerous to imagine. Applications of bioengineered microorganisms include detecting toxins in air and water, breaking down pollutants and dangerous chemicals, producing pharmaceuticals, repairing defective genes, targeting tumors, and more. In 2008, genomics pioneer Dr. Craig Venter secured a $600 million grant from ExxonMobil to develop hydrocarbon-producing microorganisms as an alternative to crude oil (Borrell 2009).

Scientists are engineering microbes to perform complex multi-step syntheses of natural products. Jay Keasling, a professor at the University of California, Berkeley, recently demonstrated genetically engineered yeast cells (Saccharomyces cerevisiae) that manufacture the immediate precursor of artemisinin, an antimalarial drug widely used in developing countries (Ro et al., 2006). Previously, this compound was chemically extracted from the sweet wormwood herb. Since the extraction is expensive and the wormwood herb is prone to drought, the availability of the drug is reduced in poorer countries. Once the engineered yeast cells were fine-tuned to produce high amounts of the artemisinin precursor, the compound was made quickly and cheaply. The same method could be applied to the mass production of other drugs currently limited by natural sources, such as the anti-HIV drug prostratin and the anti-cancer drug taxol (Tucker & Zilinskas, 2006).

The most far-sighted effort in synthetic biology is the drive towards standardized biological parts and circuits. Just as other engineering disciplines rely on parts that are well described and universally used, like transistors and resistors, biology needs a toolbox of standardized genetic parts with characterized performance. The Registry of Standard Biological Parts comprises many short pieces of DNA that encode multiple functional genetic elements called "BioBricks" (Registry of Standard Biological Parts). In 2008, the Registry contained over 2,000 basic parts comprising sensors, input/output devices, regulatory operators, and composite parts of varying complexity (Greenwald, 2005). The M.I.T. group made the registry free and public (http://parts.mit.edu/) and has invited researchers to contribute to the growing library.

Genetic parts include promoters that begin the transcription of DNA into mRNA, repressors that encode proteins blocking the transcription of another gene, reporter genes that encode a readout signal, terminator sequences that halt RNA transcription, and ribosome binding sites that begin protein synthesis. The goal is to develop a discipline-wide standard and source for creating, testing, and combining BioBricks into increasingly complicated functions while reducing unintended interactions.

To date, BioBricks have been assembled into a few simple genetic circuits (McMillen & Collins, 2004). One creates a film of bacteria that is sensitive to light so it can capture images (Levskaya et al., 2005). Another operates as a type of battery, producing a weak electric current. BioBricks have been combined into logic gate devices that execute Boolean operations, such as AND, NOT, OR, NAND, and NOR. An AND operator creates an output signal when it gets a biochemical signal from both inputs; an OR operator generates an output if it gets a signal from either input; and a NOT operator changes a weak signal into a strong one, and vice versa. This would allow cells to act as small programmable machines whose operations can be controlled through light or various chemical signals (Atkinson et al., 2003).
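The Boolean behavior described above can be stated compactly in software. The following sketch is an illustrative abstraction only (it is not a BioBricks interface, and the signal levels and threshold are assumed values): each gate maps input signal levels to an output signal, and gates compose just as the genetic devices are intended to.

```scala
// Illustrative abstraction only, not a BioBricks interface: logic gates as
// functions from input signal levels (e.g., normalized reporter expression)
// to an output signal, mirroring the AND / OR / NOT behavior described above.
object GeneticGatesSketch {
  type Signal = Double
  val threshold = 0.5 // assumed level at which an input counts as "on"

  def isOn(s: Signal): Boolean = s >= threshold

  // AND: output only when both biochemical inputs are present.
  def andGate(a: Signal, b: Signal): Signal = if (isOn(a) && isOn(b)) 1.0 else 0.0
  // OR: output when either input is present.
  def orGate(a: Signal, b: Signal): Signal = if (isOn(a) || isOn(b)) 1.0 else 0.0
  // NOT: turn a weak signal into a strong one, and vice versa.
  def notGate(a: Signal): Signal = if (isOn(a)) 0.0 else 1.0
  // Composing gates gives NAND, analogous to wiring simple devices together.
  def nandGate(a: Signal, b: Signal): Signal = notGate(andGate(a, b))

  def main(args: Array[String]): Unit = {
    println(andGate(0.9, 0.8))  // 1.0
    println(orGate(0.1, 0.8))   // 1.0
    println(nandGate(0.9, 0.8)) // 0.0
  }
}
```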

Despite the enormous progress seen in the last five years and some highly publicized and heavily funded feats, the systematic and widespread design of biological systems remains a formidable task.

Current Challenges

Standardization
Standards underlie engineering disciplines: measurements, gasoline formulation, machining parts, and so on. Certain biotechnology standards have taken hold in cases such as protein crystallography and enzyme nomenclature, but engineered biology lacks a universal standard for most classes of functions and system characterization. One research group's genetic toggle switch may work in a certain strain of Escherichia coli in a certain type of broth, while another's oscillatory function may work in a different strain when cells are grown in supplemented minimal media (Endy, 2005). It is unclear whether the two biological functions can be combined despite the different operating parameters. The Registry of Standard Biological Parts and new Biofab facilities have recently emerged to begin addressing this issue, and a growing consensus is emerging on the best way to reliably build and describe the function of new genetic components.

Abstraction
Drawing again from other engineering disciplines, and specifically from the semiconductor industry, synthetic biology must manage the enormous complexity of natural biological systems through abstraction hierarchies. After all, writing "code" with DNA letters is comparable to creating operating systems by inputting 1's and 0's. Levels could be defined as DNA (genetic material), Parts (basic functions, such as a terminating sequence for an action), Devices (combinations of parts), and Systems (combinations of devices). Scientists should be able to work independently at each hierarchy level, so that device-level workers would not need to know anything about phosphoramidite chemistry, genetic oscillators, etc. (Canton, 2005).


Figure 1. The Registry of Standard Biological Parts. This registry offers free access to basic biological functions that are used to create new biological systems. Pictured is a standard data sheet on a gene regulating transcription, with normal performance and compatibility measurements, plus an extra biological concern: system performance during evolution and cell reproduction. The registry is part of a conscious effort to standardize gene parts in the hopes of creating interchangeable components with well-characterized functions when implanted in cells. The project is open source; anybody can freely use and add information to the Registry.


Engineered Simplicity and Evolution
The rapid progress made by mechanical engineering in the last century was made possible by creating easily understandable machines. Engineered simplicity is helpful not only for repairs but also for future upgrades and redesigns. While a modern automobile may seem complex, its level of complexity pales in comparison to that of a living cell, which has far more interconnected pathways and interactions. Cells evolved in response to a multitude of evolutionary pressures, and their mechanisms developed to be efficient, not necessarily easy to understand (Alon, 2003). A related problem is that other engineered systems do not evolve. Organisms such as E. coli reproduce and acquire genetic mutations within hours. While this offers possibilities to the biological engineer (for instance, human-directed evolution for fine-tuning organism behavior), it also increases the complexity of designing and predicting the function of these new genetic systems (Haseltine, 2007).

Risks Associated with Biological Engineering

Accidental Release
Researchers first raised concerns at the Asilomar Conference in California during the summer of 1975 and concluded that the genetic experiments of the time carried minimal risk. The past 30 years of experience with genetically manipulated crops have demonstrated that engineered organisms are less fit than their wild counterparts, and they either die or eject their new genes without constant assistance from humans. However, researchers concluded that the abilities to replicate and evolve required special precautions. It was recommended that all researchers work with bacterial strains that are specially designed to be metabolically deficient so they cannot survive in the wild. Still, some have suggested that an incomplete understanding of, and emergent properties arising from, unforeseen interactions between new genes could be problematic. Such dangers have given rise to fears of a dystopian takeover by super-rugged plants that overwhelm local ecosystems.

Bioterrorism
Research in synthetic biology may generate "dual-use" findings that could enable bioterrorists to develop new biological warfare tools that are easier to obtain and far more lethal than today's military bioweapons. The most commonly cited examples are the resurrection synthesis of the 1918 pandemic influenza strain by CDC researchers (Tumpey et al., 2005) and the possibility of recreating smallpox from easily ordered DNA (Venter, 2005). There is a growing consensus that not all sequences should be made publicly available, but the fact remains that such powerful recombinant DNA technologies could be used for harm.

Attempts to limit access to DNA synthesis technology would be counterproductive, and a sensible approach might include some selective regulation while allowing research to continue. Now, as SARS, avian influenza, and other infectious diseases emerge, these recombinant DNA techniques enhance our ability to manage such threats compared to what was possible just 30 years ago. The revolution in synthetic biology is nothing less than a push on all fronts of biology, whether the impact is on environmental cleanup, chemical synthesis using bacteria, or human health.

Conclusion
At present, synthetic biology's myriad implications can be glimpsed only dimly. The field clearly has the potential to bring about epochal changes in medicine, agriculture, industry, and politics. Some critics consider the idea of creating artificial organisms in the laboratory to be an example of scientific hubris, evocative of Faust or Dr. Frankenstein. However, the move from understanding biology to designing it for our requirements has always been a part of the biological enterprise, used to produce chemicals and biopharmaceuticals. Synthetic biology represents an ambitious new paradigm for building new biosystems with rapidly increasing complexity, versatility, and applications. These tools for engineering biology are being developed and distributed, and a societal framework is needed not only to help create a global community that celebrates biology but also to guide the enormously constructive invention of biological technologies.

Figure 2. Abstraction Hierarchy. Abstraction levels are important for managing complexity and are used extensively in engineering disciplines. As biological parts and functions become increasingly complex, writing "code" with individual nucleotides is rapidly becoming more difficult. Currently, researchers spend considerable time learning the intricacies of every step of the process, and stratification would allow for specialization and faster development. Ideally, individuals could work at a single level: one researcher could focus on part design without worrying about how genetic oscillators work, while others could string together parts to construct whole systems for possible biosensor applications. Image originally made by Drew Endy.


References

Synthetic Biology 1.0: The First International Meeting on Synthetic Biology. (2004). Papers presented at the meeting, Massachusetts Institute of Technology.

Borrell, B. (2009, July 14). Clean dreams or pond scum? ExxonMobil and Craig Venter team up in quest for algae-based biofuels. Scientific American. Retrieved from http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=clean-dreams-or-pond-scum-exxonmobi-2009-07-14

Ro, D.-K., Paradise, E. M., Ouellet, M., Fisher, K. J., Newman, K. L., Ndungu, J. M., . . . Keasling, J. D. (2006). Production of the antimalarial drug precursor artemisinic acid in engineered yeast. Nature, 440(7086), 940-943. doi:10.1038/nature04640

Tucker, J. B., & Zilinskas, R. A. (2006, Spring). The promise and perils of synthetic biology. The New Atlantis, 12, 25-45.

Registry of Standard Biological Parts. Retrieved from http://parts.mit.edu/

Morton, O. (2005, January). How a BioBrick works. Wired, 13(01). Retrieved from http://www.wired.com/wired/archive/13.01/mit.html?pg=5

Hasty, J., McMillen, D., & Collins, J. J. (2002). Engineered gene circuits. Nature, 420(6912), 224-230. doi:10.1038/nature01257

Levskaya, A., Chevalier, A. A., Tabor, J. J., Simpson, Z. B., Lavery, L. A., Levy, M., . . . Voigt, C. A. (2005). Synthetic biology: Engineering Escherichia coli to see light. Nature, 438(7067), 441-442. doi:10.1038/nature04405

Atkinson, M. R., Savageau, M. A., Myers, J. T., & Ninfa, A. J. (2003). Development of genetic circuitry exhibiting toggle switch or oscillatory behavior in Escherichia coli. Cell, 113(5), 597-607.

Endy, D. (2005). Foundations for engineering biology. Nature, 438(7067), 449-453. doi:10.1038/nature04342

Canton, B. (2005). Engineering the interface between cellular chassis and integrated biological systems. Ph.D. thesis, Massachusetts Institute of Technology.

Alon, U. (2003). Biological networks: The tinkerer as an engineer. Science, 301(5641), 1866-1867. doi:10.1126/science.1089072

Haseltine, E. L., & Arnold, F. H. (2007). Synthetic gene circuits: Design with directed evolution. Annual Review of Biophysics and Biomolecular Structure, 36(1), 1-19. doi:10.1146/annurev.biophys.36.040306.132600

Tumpey, T. M., Basler, C. F., Aguilar, P. V., Zeng, H., Solorzano, A., Swayne, D. E., . . . Garcia-Sastre, A. (2005). Characterization of the reconstructed 1918 Spanish influenza pandemic virus. Science, 310(5745), 77-80. doi:10.1126/science.1119392

Venter, J. C. (2005). Gene synthesis technology. Paper presented at the State of the Science, National Science Advisory Board on Biosecurity. Retrieved from http://www.webconferences.com/nihnsabb/july_1_2005.html


The information transferred over computer networks is vulnerable to attackers. Network forensics deals with the capture, recording, and analysis of network events to determine the source of security attacks and other network-related problems. Electronic devices send communications across networks by sending network data in the form of packets. Networks are typically represented using discrete statistical models, which are computationally expensive and utilize a significant amount of memory. A continuous piecewise polynomial model is proposed to address the shortcomings of discrete models and to further aid forensic investigators. Piecewise polynomial approximations are beneficial because sophisticated statistics are easier to perform on smooth continuous data than on unpredictable discrete data. Polynomials, moreover, utilize roughly six times less memory than a collection of individual data points, making this approach storage-friendly. A variety of networks have been modeled, and it is possible to distinguish network traffic using a piecewise polynomial approach.

These preliminary results show that representing network traffic as piecewise polynomials can be applied to the area of network forensics for the purpose of intrusion analysis. This type of analysis will consist not only of identifying an attack, but also of discovering details about the attack and other suspicious network activity by comparing and distinguishing archived piecewise polynomials.

Network Forensics Analysis Using Piecewise Polynomials
Sean Marcus Sanders
School of Electrical and Computer Engineering, Georgia Institute of Technology

Advisor: Henry L. Owen
School of Electrical and Computer Engineering, Georgia Institute of Technology


Introduction

Problem. Network forensics deals with the capture, recording, and analysis of network events to determine the source of security attacks and other network-related problems (Corey, 2002). One must differentiate malicious traffic from normal traffic based on the patterns in the data transfers. Network communication is ubiquitous, and the information transferred over these networks is vulnerable to attackers who may corrupt systems, steal valuable information, and alter content. Network forensics is a critical area of research because, in the digital age, information security is vital. With sensitive information such as social security numbers, credit card information, and government records stored on networks, the potential threat of identity theft, credit fraud, and national security breaches increases. During July of 2009, North Korea was the main suspect behind a campaign of cyber attacks that paralyzed the websites of US and South Korean government agencies, banks, and businesses (Parry, 2009). As many as 10 million Americans a year are victims of identity theft, and it takes anywhere from 3 to 5,840 hours to repair the damage done by this crime (Sorkin, 2009). In order to effectively prosecute network attackers, investigators must first identify the attack and then gather evidence on the attack.

The process of identifying an attack on a network is known as intrusion detection. The two most popular methods of intrusion detection are signature and anomaly detection (Mahoney, 2008). Signature detection is a technique that compares an archive of known attacks on a network with current network traffic to discern whether or not there is malicious traffic. This technique is reliable on known attacks but is at a great disadvantage on novel attacks. Although this disadvantage exists, signature detection is well understood and widely applied. Anomaly detection, on the other hand, is a technique that identifies network attacks through abnormal activity. Anomaly detection is more difficult to implement than signature detection because it must both flag traffic as abnormal and discern the intent of that traffic; abnormal traffic does not necessarily imply malicious traffic.

Electronic devices such as notebooks and cellular phones communicate by transferring data across the Internet using packets. A packet is an information block that the Internet uses to transfer data. In most cases, the data being transferred across the Internet must be divided into hundreds, even thousands, of packets to be completely transferred. Similar to letters in a postal system, packets have parameters for delivery such as a source address and a destination address. Packets include other parameters such as the amount of data being sent in the packet and a checking parameter to ensure that the data sent was not corrupted. The Internet is modeled as a discrete collection of individual data points because the Internet uses individual packets to transfer data. Discrete processes are more difficult to model and analyze than continuous processes because there is not a definite link between two similar events. For example, the concept of a derivative in calculus can only give a meaningful result if the data are continuous. In many cases, experimental results are given as discrete values. Scientists, engineers, and mathematicians sometimes use the least squares approximation to give a continuous model of the data. Continuous models that represent discrete data are often preferred because they can be used for different types of analysis such as interpolation and extrapolation.

Many forensic investigators use graphs and statistical methods, such as clustering, to model network traffic (Thonnard, 2008). These graphs and statistics help classify complex networks into patterns. These patterns are typically stored and represented in a discrete fashion because networks transfer data in a discrete manner.


These patterns are used in combination with signature and anomaly detection techniques to identify network attacks (Shah, 2006). In many cases these network patterns are archived and kept for extended periods of time. This storage of packets is needed to compare past network traffic with current network traffic, in order to effectively classify network events. Despite this necessity, the storage of packet captures is not desired because packet captures use a significant amount of memory storage, a limited and costly resource. After a variable amount of time, the archived network data is deleted to free memory for future network patterns to be archived (Haugdahl, 2007). Detailed records of network patterns can be stored for longer periods of time by increasing the amount of free memory or decreasing the amount of archived traffic.

A continuous polynomial representation of a network is preferred to a discrete representation because discrete representations limit the types of analysis and statistics that can be performed. Polynomial approximations of data have limitations as well, such as failing to represent exact behavior, which can be vital depending on the system being modeled. In order to effectively differentiate traffic, a continuous polynomial approximation must retain enough detail about the network traffic. Polynomial representations of data should require less memory storage than discrete representations. For instance, the polynomial y = x² could represent a million data points but take up little memory. This observation is important because, in the area of network forensics, memory storage space is a critical factor.

Related Work
Shah et al. (2008) applied dynamic modeling techniques to detect intrusions using anomaly detection. This particular form of modeling was only used for identifying intrusions and not for analyzing them or conducting a forensic investigation. Ilow et al. (2000) and Wang et al. (2007) both used modeling techniques to try to predict network traffic. Wang et al. took a polynomial approach that utilized Newton's Forward Interpolation method to predict and model the behavior of network traffic. This technique used interpolation polynomials of arbitrary order to approximate the dynamic behavior of previous network traffic. Wang et al.'s technique is useful for modeling general network behavior, but using the polynomial approach for intrusion analysis is another issue. Wang et al.'s technique proved that general network behavior can be predicted and modeled using polynomials, but did not prove whether individual network events can be distinguished and categorized through the use of polynomials.

Proposed Solution
Network data is discrete, scattered, and difficult to approximate; however, approximation and modeling techniques are necessary to define networks and to perform important statistics on the network data. Such statistics include the average amount of data each packet carries, the average rate at which packets arrive at a computer, and how many packets are lost before delivery. These values are used to adequately classify network traffic as normal or malicious. When a system is approximated as a polynomial, it is faster to perform basic mathematical operations and statistics such as derivatives, integrals, standard deviation, and variance. The ease of computing a parameter allows for a more efficient analysis of the data. Networks send an enormous amount of data each day, and precious time is required to process this data. While the polynomial approximation is fairly accurate, forming a long, complex approximated polynomial is not practical for the purposes of network forensics since a network will seldom have identical behavior in each session. Assuming each of the five segments of points shown in Figure 1 represents a network event (i.e., web sites visited), investigators can approximate and classify the network activity.

The network traffic modeled in all plots in this paper represents the same parameters. The x-axis represents the packet capture time, where the unit of time is not represented in seconds (i.e., real time) but rather a time relative to the order in which the packets were captured. In other words, time in the context of this paper does not represent real time, but serves as a parameter for the data being modeled, the approximated packet data length. This parameter is referred to as time because the data being modeled is time dependent. The y-axis represents the data length of the captured packet in bytes. Throughout this paper the terms packet, capture time, and time will be used interchangeably. In reality, different network events require different amounts of time and numbers of packets. For simplicity, all network events plotted in this paper are scaled so that each network event is modeled by an equal time interval.

If these segments were in a different order (i.e., the same web sites were visited in a different order), then the single polynomial in Figure 2 would not be able to compensate for these changes and would be unable to efficiently classify similar network traffic. Essentially, if this single polynomial method were applied, one would need 120 (5!) different polynomials to represent visiting five different websites in every possible order. To counter this issue, the idea of approximating network traffic by using a piecewise polynomial is proposed. A single polynomial defines one function to represent data for all units of time, while a piecewise polynomial defines separate functions at distinct time intervals and connects these respective pieces to form a single continuous data representation. This property makes a piecewise polynomial better suited than a single polynomial for modeling network traffic, because many different types of network events can occur. A piecewise polynomial can isolate and model the behavior of a single network event, while a single polynomial is limited to modeling clusters of events.

Figure 1. Plot of random discrete data.

Figure 2. Single polynomial approximation of data represented in Figure 1.


The modeling of event clusters is not desired because it will increase the difficulty of differentiating network traffic based on a single event. Such a scenario will result in a malicious event being clustered with a normal event, which could lead to failure in identifying an attack. A piecewise polynomial approximation should effectively classify every network event that has transpired using a unique piecewise approximation. The piecewise polynomial approximation of the data shown in Figure 1 is shown in Figure 3.

It is clear that while both polynomial approximations in Figure 2 and Figure 3 can model the data represented in Figure 1, the piecewise polynomial (Figure 3) is more accurate and robust than a single polynomial. A single polynomial should not be used to model more than one network event, because it cannot represent the individual network events of which it is composed. This example is meant to emphasize that if a sequence of 100 network events were defined using one single polynomial, it would be difficult to identify which network events behaved in a certain way. A piecewise polynomial model addresses this issue by modeling each network event as an individual polynomial. If the order of the network events (segments) were changed, the individual polynomials would simply occur at different time intervals, but each segment would remain the same. In other words, in a piecewise polynomial approximation each segment is represented by a distinct polynomial.

The basic concept is that while the network will not behave the same all the time, it will behave the same in certain pieces. If network traffic can be quantified using piecewise polynomials, investigators can apply signature and anomaly detection techniques to identify and investigate events from a forensics perspective. Piecewise polynomial approximations will be effective because they should approximate the behavior pattern of a network with enough resolution to differentiate network traffic.

The primary goal is to test whether or not a piecewise polynomial approach can approximate network data with enough precision to distinguish network traffic. If there are no distinct differences in piecewise polynomial approximated network traffic, then this approach will not be valid for this application. Conversely, if a piecewise polynomial approximation can effectively differentiate network traffic, then it can be applied to intrusion analysis, because intrusion analysis is primarily focused on classifying traffic. This application is beneficial because polynomial-represented data should occupy less memory storage than discrete data, and polynomial data have fewer limitations on the type of analysis that can be performed.

Methodology

Tools and Algorithms
Wireshark was used to capture network traffic in packet capture files.

Figure 3. Piecewise polynomial plot of data represented in Figure 1.


A packet capture is a collection of the network traffic that has made contact with a computer and is stored in a packet capture file (.pcap file). Wireshark is an effective tool for capturing and filtering network traffic, but does not allow for a custom analysis of network traffic. The Libpcap library, which is used by Wireshark, was investigated in order to use the captured network traffic as an input to a custom parsing algorithm. This algorithm opens .pcap files that were saved using Wireshark and extracts the source address, destination address, packet data length, and packet protocol into a format that can be used for custom processing. After these aspects of the packet were extracted they were saved in a .csv (comma-separated values) file for processing in MATLAB. Although the parameters initially extracted (source address, destination address, packet data length, and packet protocol) are not sufficient to analyze and detect all malicious activity, these parameters are a good starting point for a proof of concept implementation and analysis of this approach.
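The exact .csv layout produced by the parser is not reproduced in this paper; the following is a minimal MATLAB sketch of loading such a file, assuming (hypothetically) one row per packet with the relative capture time in the first column and the packet data length in bytes in the second:

pkts = csvread('httpclose.csv');   % hypothetical file name and column order
t    = pkts(:, 1);                 % relative capture time (packet order)
len  = pkts(:, 2);                 % packet data length in bytes
plot(t, len, '.');                 % discrete view of the traffic, as in Figure 1
xlabel('Packet capture time (relative)');
ylabel('Packet data length (bytes)');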

MATLAB was chosen for its versatility, variety of functions, and computing speed in processing large vectors. MATLAB has two built-in functions, polyfit() and polyval(), that respectively compute polynomial coefficients and evaluate polynomials using input data. In MATLAB, the input and output data of polyfit() and polyval() are represented as vectors. polyfit() uses the least squares approximation to estimate the coefficients of a best-fit, Nth-order polynomial for the given vectors of data, X and B. In statistics, the least squares approximation is used for estimating polynomial coefficients based on discrete parameters. polyval() can best be viewed as a support function for polyfit(): it gives the approximated numerical values, Y, of the polynomial computed by polyfit(). The relationship between polyfit() and polyval() is shown in equation 1.

(1)    P = polyfit(X, B, N)
       Y = polyval(P, X)
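For concreteness, a minimal usage sketch (with illustrative values, not data from the experiments described here):

t   = (1:10)';                                   % relative capture times
len = [90 95 120 160 150 140 130 135 150 170]';  % example packet lengths (bytes)
P    = polyfit(t, len, 2);                       % coefficients of a best-fit 2nd-order polynomial
yhat = polyval(P, t);                            % continuous approximation evaluated at the capture times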

Piecewise.m is a custom-developed script, written in MATLAB. Essentially, Piecewise.m uses polyfit() and polyval() to create piecewise polynomials. This script was designed to use packet data lengths as the parameter on the y-axis, and packet capture time as the parameter on the x-axis.
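The Piecewise.m script itself is not reproduced here; the following is a minimal sketch of the general approach it describes, under the simplifying assumption (used for the plots in this paper) that each network event occupies an equal-length time interval. The function name, segment scheme, and variable names are illustrative only.

function [coeffs, yhat] = piecewise_fit(t, len, nSegments, order)
% Fit a separate polynomial of the given order to each equal-length segment
% of the packet-length series and evaluate it over that segment.
t = t(:);  len = len(:);                             % work with column vectors
n = numel(t);
edges = round(linspace(1, n + 1, nSegments + 1));    % segment boundaries
coeffs = zeros(nSegments, order + 1);
yhat = zeros(n, 1);
for k = 1:nSegments
    idx = edges(k):edges(k + 1) - 1;                 % indices belonging to segment k
    coeffs(k, :) = polyfit(t(idx), len(idx), order); % least squares fit for this piece
    yhat(idx) = polyval(coeffs(k, :), t(idx));       % evaluate the piece over its interval
end
end

Storing only the coeffs matrix (one row of order + 1 coefficients per segment) rather than every packet is what yields the memory savings discussed later.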

Important Decisions and Causes for Error
An important parameter used to approximate the data is the order of the polynomial. Typically, the higher the order of the polynomial, the more accurate the approximation; in an approximation of network behavior and patterns, though, modeling exact behavior is unnecessarily complex, whereas approximating behavior is more useful. Thus, the orders of the piecewise polynomials are manually chosen based on the predicted complexity of the network traffic. More complex traffic should be approximated with a higher order polynomial than less complex traffic. This assumption is used to designate the order of a polynomial given the type of network being modeled. Network traffic was also modeled using different orders to determine the effect(s) that changing orders have on the approximation of traffic.

When approximating polynomials, ensuring that there are enough data points to create a reliable approximation is important. For example, with only one data point, a first-order polynomial would give an inaccurate approximation, because at least two points are needed to define a line. The general rule is that the accuracy of the polynomial approximation depends directly on the order of the polynomial and the number of data points used to define the polynomial. The number of data points must be at least one more than the desired order to yield an accurate polynomial approximation. In most cases, the higher the order of the polynomial, the more accurate the approximation is. On the other hand, a polynomial of too high an order may yield unrealistic results. Thus, finding a polynomial order that yields both accurate and realistic results is important.
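As a small illustration of this rule (hypothetical values, not data from the experiments): fitting a fifth-order polynomial to three points is under-determined, whereas six points are enough.

t1 = [1 2 3];  len1 = [100 140 120];
P_bad = polyfit(t1, len1, 5);   % under-determined: polyfit warns that the fit is not unique
t2 = 1:6;      len2 = [100 140 120 130 150 145];
P_ok  = polyfit(t2, len2, 5);   % exactly determined: six points for a fifth-order fit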

Experiments

Closed/Controlled Network Behavior
The first step to determine whether a polynomial can accurately approximate and differentiate network behavior is to analyze the behavior of a closed/controlled network. As opposed to open networks, closed networks are not connected to the Internet. The designed closed network was composed of two Macbooks, with four virtual machines operating on the separate Macbooks. Figure 4 gives a visual representation of the designed closed network.

A virtual machine is a software implementation of a machine that executes programs like a physical machine. Virtual machines operate on a separate partition of a computer and utilize their own operating system. Due to the hardware limitations of physical machines, virtual machines and physical machines do not execute commands simultaneously. From a networking perspective the execution of commands is not a problem, because once connected, networks utilize protocols to send and sometimes regulate the flow of network traffic. In other words, the network does not know that there is a virtual machine operating on a physical machine and thus supports multiple simultaneous network connections.

Packet captures were performed using Wireshark on the Macbook operating with three virtual machines on the Ethernet interface. A variety of packet captures were made to compare and contrast network behavior using web pages. If the resulting piecewise polynomials could effectively compare and contrast network traffic based on various behaviors, then the polynomial approximation would be considered a success. The descriptions of these packet capture files are listed below.

• Idleclosed.pcap— a .pcap file that captures the random noise present when the network is idle.

• Icmpclose.pcap— a .pcap file that is composed primarily of ping commands from one Macbook to the other. Ping commands are used to test whether a particular computer is reachable across a network. This test is performed by sending packets of the same length to a computer, and waiting to receive a reply from that computer.

• Httpclose.pcap— a .pcap file that includes a brief ping command being sent from one Macbook to the other Macbook, but is dominated by HTTP traffic (basic website traffic). This file also includes a period of idle behavior where the network is at rest.

• Packet Capture A— a .pcap file that contains the network data for visiting a specific site hosted on one Macbook.

Figure 4. Visual representation of the designed closed network with virtual machines (VMs circled).


• Packet Capture B— a separate .pcap file that contains the network data for visiting the same site visited in Packet Capture A, hosted on the same Macbook, at a different time.

Idleclosed.pcap and Icmpclose.pcap yield piecewise polynomials that model the behavior of idle and ping traffic respectively. These piecewise polynomials should identify both the idle and ping behavior found in Httpclose.pcap. The piecewise polynomials that model two separate .pcap files going to the same pages (i.e., Packet Capture A and Packet Capture B) should resemble each other in behavior. A second order piecewise polynomial is used for the closed network analysis because it is assumed that closed network events should not be extremely complex. Higher orders are avoided wherever possible due to reasons explained in Important Decisions and Causes for Error.

Open/Internet Network Behavior
While experimenting with a controlled network is useful, a network that is connected to the Internet will behave differently from one that is not. To investigate a more realistic scenario, one Macbook was utilized to make different packet captures under similar conditions to those in Closed/Controlled Network Behavior, but with contact to the Internet. The details of the packet capture files are listed below.

• Internet.pcap— a .pcap file that contains network data captured while actively browsing the Internet.

• Packet Capture C— a .pcap file that contains the network data for visiting a sequence of three web sites on the Internet in a particular order (google.com, gatech.edu, and facebook.com).

• Packet Capture D— a separate .pcap file that contains the network data for visiting the same web sites as Packet Capture C but in a different order (gatech.edu, facebook.com, and google.com).

Internet.pcap was used to show the effect the order of a polynomial has on the approximation because it contains the most complex network traffic. Packet Capture C and Packet Capture D were used to determine if different web sites exhibit distinguishable behavior using piecewise and single polynomials. These models test the theorized benefit of piecewise polynomials over single polynomials, similar to the example in the Proposed Solution. Fourth order piecewise and single polynomials are used for the open network analysis, as opposed to second order, because it is assumed that open network events should be more complex than closed network events.

Results

Closed Network Analysis
Ping Analysis. In the closed network case, as defined in Closed/Controlled Network Behavior, Httpclose.pcap and Icmpclose.pcap both contained the same type of ping traffic going through the network, but in different packet captures and at different times. The resulting piecewise polynomial that described this traffic in both packet captures was the constant 98. This constant value of 98 indicates that every packet captured had a packet data length of 98 bytes. A constant piecewise polynomial is an acceptable value because the ping command constantly sends packets of identical lengths to a single destination.
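This constant fit can be sanity-checked directly (a quick illustrative sketch, not the captured data itself): a zero-order least squares fit to a series of identical 98-byte packet lengths recovers the constant 98.

t   = (1:200)';
len = 98 * ones(size(t));    % every packet 98 bytes long, as with ping traffic
p0  = polyfit(t, len, 0);    % returns 98, the constant piecewise polynomial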

Traffic Analysis. Packet Capture A and Packet Capture B are two different .pcap files that captured the same network activity at approximately the same time interval, and are represented as second order piecewise polynomials. According to Figure 5, the two packet captures are represented in a very similar manner. This result is interesting because, while the results are similar, they are not exact. This mismatch is not damaging, as Figure 5 shows the relationship of the two data files. The relationship of the first segments of data is that they are constant around the same value, while the second segments of the data are both decreasing, concave down, and share similar values.

Traffic Analysis of Open Networks
The similar packet capture files, Packet Capture C (the upper plot) and Packet Capture D (the lower plot), were plotted in Figure 6 using a fourth order single polynomial. Figure 7 shows the plot of Packet Captures C and D using a fourth order piecewise polynomial.

Packet Capture C visits google.com first, followed by gatech.edu, and ends with facebook.com, while Packet Capture D visits gatech.edu first, followed by facebook.com, and ends with google.com. Figure 7 shows that each piecewise polynomial gives each website visited a unique behavior that can be identified with visual inspection.

Google.com behaves in a sinusoidal manner, gatech.edu is represented as a concave-down parabola, and facebook.com exhibits a strong linear behavior with a small positive slope. Although the three web sites visited can be clearly identified in Figure 7, this does not seem to be the case in Figure 6.

In the single polynomial approximation the data look relatively similar, and it is difficult to discern which part of the polynomial represents which website. This result shows that different network events can be approximated and distinguished using a piecewise polynomial approach, whereas a single polynomial approximation is not sufficient to distinguish network events.

Significance of Order
Internet.pcap was plotted using zero, second, and fifth orders to discern the effect order has on the approximation of a polynomial.

Figure 8 shows that the higher the order of the polynomial, the more detail is shown about the network.

Figure 5. Second order relationship of similar packet capture files.

Figure 6. Single polynomial comparison plots of similar out of order traffic.


Despite revealing more details of a network, Figure 8 does not show which order of the polynomial yields better results. Figure 8 is shown to illustrate the effect order has on the approximation of network traffic. More details are not necessarily better, because an approximation with too many details may not be robust enough to identify similar future network traffic and is difficult to interpret.

Memory Savings
Internet.pcap was saved in two separate files. One file was saved using Internet.pcap's polynomial representation, and a separate file was saved using Internet.pcap's representation as a collection of individual data points (i.e., packets). The polynomial file was 12 KB, while the file containing the collection of individual data points was 72 KB. This size difference indicates that saving network traffic as polynomials instead of as a collection of individual points saves memory.

Figure 7. Piecewise polynomial comparison of similar out of order traffic.

Figure 8. Internet.pcap plots of varying orders.


Discussion of Results
The plots in Results are intended to show whether piecewise polynomials can effectively differentiate and link network traffic. The ping traffic analyzed in Ping Analysis was approximated by piecewise polynomials that exhibited constant behavior. Although this result is desired, ping traffic is the simplest type of network traffic and is not sufficient to prove the validity of a piecewise polynomial approach. Traffic analysis of the closed network yielded similar results to the ping analysis, by successfully differentiating and linking network traffic. Although the closed network analysis was a success, in reality most network traffic occurs on the Internet. Thus the open network results are of primary interest.


The open network single polynomial approximation was unable to differentiate and link network events, as shown in Figure 6. The plot given in Figure 6 shows two similar curves for differently ordered network traffic. Although this result is not desired, it was expected that a single polynomial approximation would not be able to classify out of order traffic effectively. Conversely, Figure 7 shows that a piecewise polynomial approximation was able to distinguish each section of the network traffic that was captured. These results show that a piecewise polynomial approximation can be used to classify and differentiate network traffic.

Memory storage is also of primary concern when modeling network data. The Internet packet capture shows that the discrete representation of the data utilized 72 KB of memory storage, while the polynomial representation utilized 12 KB of memory storage. This result shows that the polynomial representation utilizes roughly one-sixth the memory storage of the discrete representation. This size difference indicates that storing network traffic as polynomials instead of as a collection of individual points significantly saves memory. This outcome is important in network forensics because network events can be archived for a longer amount of time than before. This extra storage allows for more extensive and detailed investigations.

Conclusion
Networks can be approximated using piecewise polynomials with enough detail to aid forensic investigators. The precision of the approximation depends directly on the order of the polynomial used to approximate the data. In general, the higher the order, the more details are revealed. Networks behave differently, and therefore every network analyzed needs its own set of polynomials to approximate its respective network events. The use of piecewise polynomials is also beneficial because polynomials use roughly one-sixth the memory of individual data points.

Future Work
Piecewise polynomials will be applied to the area of network forensics for intrusion analysis. This analysis will require collection of known data that are classified as either malicious or normal. Also, more information about packets will have to be quantified in order to further classify and distinguish network traffic, because approximating packet lengths and protocols is not sufficient to perform a thorough analysis. The malicious data will be modeled as piecewise polynomials and used for signature detection. The normal network traffic will also be modeled as piecewise polynomials and used for anomaly detection.

Future research also includes identifying what certain traffic patterns represent, such as web browsing traffic, video streaming traffic, or file downloading traffic. This classification of network events will enhance a forensics investigator’s ability to quickly determine what events have transpired on a network.

Acknowledgements
This research was conducted with the guidance of Kevin Fairbanks and Henry Owen and supported in part by a Georgia Tech President's Undergraduate Research Award as a part of the Undergraduate Research Opportunities Program. This research was also supported in part by the Georgia Tech Department of Electrical and Computer Engineering's Opportunity Research Scholars Program.


References
Corey V, Peterman C, Shearin S, Greenberg MS, Bokkelen J (2002) Network Forensics Analysis, IEEE Internet Computing, http://computer.org/internet/, December 2002

Haugdahl JS (2007) Network Forensics: Methods, Requirements, and Tools, www.Bitcricket.com, November 2007

Ilow J (2000) Forecasting Network Traffic Using FARIMA Models with Heavy Tailed Innovations, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume 6, IEEE Computer Society, Washington DC, pp. 3814-3817

Mahoney MV, Chan PK (2008) Learning Models of Network Traffic for Detecting Novel Attacks, Florida Institute of Technology, www.cs.fit.edu/~mmahoney/paper5.pdf

Parry RL (2009) North Korea Launches Massive Cyber Attack on Seoul, The Times, http://www.timesonline.co.uk/tol/news/world/asia/article6667440.ece, July 2009

Shah K, Jonckheere E, Bohacek S (2006) Dynamic Modeling of Internet Traffic for Intrusion Detection, EURASIP Journal on Advances in Signal Processing, Volume 2007, Hindawi Publishing Corporation, May 2006

Sorkin, (2009) Identity Theft Statistics, spamlaws.com

Thonnard O, Dacier M (2008) A Framework for Attack Pattern Discovery in Honeynet Data, The International Journal of Digital Forensics and Incident Response, Science Direct, Baltimore, Maryland, August 2008

Wang J (2007) A Novel Associative Memory System Based Modeling and Prediction of TCP Network Traffic, Advances in Neural Networks, Springer Berlin, July 2007


Platelet aggregation plays an important role in controlling bleeding by forming a hemostatic plug in response to vascular injuries. GPIbα is the platelet receptor that mediates the initial response to vascular injuries by tethering to the von Willebrand factor (vWF) on exposed subendothelium. When this occurs, platelets roll and then firmly adhere to the surface through the GPIIb-IIIa integrin present on the platelet surface. A hemostatic plug then forms by the aggregation of bound and free platelets, which then seals the injury site.

vWF is a multimer of many monomers, with each containing eleven domains. In this experiment, the biomechanics of two of the eleven domains, gain of function (GOF) R687E vWF-A1 and wild type (wt) vWF-A1A2A3, were studied using videomicroscopy under varying shear stresses. This experiment used a parallel flow chamber coated along one surface with the vWF ligand. A solution containing platelets or Chinese Hamster Ovary (CHO) cells was perfused at varying shear stresses (0.5 dynes/cm² to 512 dynes/cm²) and cell-ligand interactions were recorded.

Results showed that GOF R687E vWF exhibited slip bond behavior with increasing shear stress, whereas wt A1A2A3 vWF displayed a catch-slip bond transition with varying shear stresses. Interestingly, wt A1A2A3 vWF displayed two complete cycles of catch-slip bond behavior, which could be attributed to the structural complexity of the vWF ligand. However, more experiments need to be performed to further substantiate these claims. Information on the bonding behavior of each vWF can aid understanding of the biomechanics of the entire vWF molecule and associated diseases.

Characterization of the Biomechanics of the GPIbα-vWF Tether Bond Using von Willebrand Disease Causing Mutations R687E and wt vWF A1A2A3
Venkata Sitarama Damaraju
School of Biomedical Engineering, Georgia Institute of Technology

Advisor: Larry V. McIntire
School of Biomedical Engineering, Georgia Institute of Technology


Introduction
Circulating platelets have an important role in healing vascular injuries by tethering, rolling, and adhering to the vascular surface in response to a vascular injury. Under normal physiological conditions, platelets respond to a series of signaling events that cause bound platelets to aggregate and spread across the exposed surface to form a hemostatic plug (Andrews, 1997). These responses are mediated by receptor-ligand interactions between the platelet and the molecules exposed on the surface. GPIbα is the platelet receptor that mediates this initial response to vascular injuries. In arteries this response is initiated when the platelet receptor GPIbα tethers to von Willebrand factor, a blood glycoprotein, on exposed subendothelium—the surface between the endothelium and the artery membrane. When GPIbα initially tethers to von Willebrand factor (vWF), platelets first roll and then firmly adhere to the surface through the GPIbα and GPIIb-IIIa integrins present on the platelet. GPIbα and GPIIb-IIIa are the first two platelet integrins to interact with the vWF molecule (Kroll, 1996). Aggregation of bound platelets with additional platelets from the plasma forms a hemostatic plug that seals the injury site (Ruggeri, 1997).

Mutations in either of these binding partners can result in changes in the initial step of the vascular healing process. Diseases associated with these mutations are called von Willebrand diseases, which can either decrease (loss of function) or enhance (gain of function) the binding activity between the GPIbα and vWF molecules. von Willebrand diseases (VWD) result in a platelet dysfunction that can cause nose bleeding, skin bruises and hematomas, prolonged bleeding from trivial wounds, oral cavity bleeding, and excessive menstrual bleeding. Though rare, severe deficiencies in vWF can have symptoms characteristic of hemophilia, such as bleeding into joints or soft tissues including muscle and brain (Sadler, 1998).

vWF is a multimer of many monomers, with each containing eleven (11) domains (Figure 1) (Berndt, 2000). In this experiment, the biomechanics of two of the 11 domains, in particular gain of function (GOF) R687E vWF-A1 and wild type (wt) vWF-A1A2A3, were studied. The biomechanics of the GPIbα-vWF tether bond of these molecules was studied using videomicroscopy in parallel plate flow chamber experiments. One of the two surfaces of the flow chamber was a 35-mm tissue culture dish coated with the vWF ligand (Figure 2). Fluid containing either platelets or Chinese Hamster Ovary cells was perfused at varying shear stresses across this ligand-coated surface, and the interactions were recorded using high speed videomicroscopy.

Figure 1. The vWF molecule. It is a multimer of many monomers, with each containing 11 domains. Image adapted from Sadler.


Analysis of these interactions with cell tracking software allowed insight into the bond lifetime of the cells and helped suggest the type of bond present (Yago, 2004).

Studying the biomechanics of individual vWF domains allows a better understanding of the whole vWF molecule and, more importantly, of VWD. With this enhanced understanding of vWF, better and more accurate treatments for VWD can be designed in the future. This knowledge can also be used in studying and preventing life-threatening thrombosis and embolism.

Materials and Methods
All materials were obtained from the McIntire laboratory stock room. Proper sterile techniques and precautions were used for each of the following procedures.

Cells Used
Either Chinese Hamster Ovary (CHO) cells or fresh platelets were used to study the vWF ligand interactions. Fresh platelets were isolated from blood donors an hour before the experiment. For CHO cells, two specific lineages, αβ9 and β9, were used. CHO αβ9 cells contain a specific integrin that interacts with the vWF ligand, whereas β9 cells do not. Hence, β9 cells served as a control group when CHO cells were used instead of platelets.

Preparation of Growth Media
Two types of growth media were prepared for the two types of CHO cells: αβ9 and β9. Both media formulations consisted of alpha-Minimum Essential Medium (α-MEM) solution (with 2 mM L-glutamine and NaHCO3), 10% Fetal Bovine Serum (FBS) solution, penicillin solution (50X), streptomycin solution (50X), G-418 (Geneticin) solution (50 mg/mL), and methotrexate powder. The only difference between the two media types was the addition of hygromycin B solution (50 mg/mL) in the αβ9 media.

Passaging Cells
Proper sterile techniques and precautions were used while passaging CHO αβ9 and CHO β9 cells. CHO cells were cultured in 75 cm² flasks and incubated at 37° Celsius and 5% CO2 using the growth media prepared. These cells were passaged every 2-3 days in order to maintain 80-90% confluency at all times.

Hepes-Tyrode Buffer Formulation
Hepes-Tyrode buffer (also referred to as 0% Ficoll) was prepared by mixing the following chemicals in pure deionized water until completely solvated: sodium chloride (135 mM), sodium bicarbonate (12 mM), potassium chloride (2.9 mM), sodium phosphate monobasic (0.34 mM), hepes (5 mM), glucose D (5 mM), and BSA (1% weight per volume). CHO cells and platelets were suspended in this buffer for the flow chamber experiments. This solution consisting of cells and buffer was pumped through the flow chamber at various shear stresses.

Figure 2. Parallel plate flow chamber setup (schematic labels: upper flow chamber surface; flow chamber floor coated with vWF ligand; fluid flow; platelets expressing GPIbα; interacting platelets; non-interacting platelets). The floor (bottom) plate in the setup was a 35-mm tissue culture dish coated with vWF ligand. Fluid containing either CHO cells or platelets was perfused at varying shear stresses (0.5 dynes/cm² to 512 dynes/cm²) across the ligand-coated surface.



Ficoll Solution
A more viscous Hepes-Tyrode buffer was prepared by adding 6% Ficoll (weight per volume). The final viscosity was 1.8 times that of Hepes-Tyrode buffer. This Ficoll solution is also referred to as 6% Ficoll.

Parallel Plate Flow Chamber Experiments
A parallel plate flow chamber was used in this experiment. One of the two surfaces of the flow chamber was a 35-mm tissue culture dish coated with the vWF ligand. Fluid containing either CHO cells or platelets was perfused at varying shear stresses (0.5 dynes/cm² to 512 dynes/cm²) across the ligand-coated surface (Figure 2). The interactions were recorded as 4-second videos at 250 frames/second using high speed videomicroscopy (Figure 3). The parallel plate flow chamber set-up was maintained at 37° Celsius for all experiments.

Tracking Cell Interactions
MetaMorph Offline software was used to track the interactions captured with videomicroscopy. Each 4-second video was opened using this software and a square was drawn around the CHO cell or platelet (hereafter referred to as "cell") of interest. Each cell was tracked for at least 250 continuous frames (1 second). In addition, it was ensured, by observing the video, that no other cell bumped into the cell of interest while it was being tracked.

Data Analysis
The tracked results from MetaMorph Offline were saved as multiple number strings in Microsoft Excel and processed through MATLAB to compute mean rolling velocities for each shear stress. The mean rolling velocity indicates how fast the cell rolls while interacting with the vWF ligand at each individual shear stress. These rolling velocities were graphed in Microsoft Excel for each shear stress on a logarithmic scale.
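The actual analysis script is not reproduced here; the following is a minimal MATLAB sketch of the kind of computation described, with assumed variable names and a hypothetical pixel calibration. xPos is taken to be an nFrames-by-nCells matrix of tracked x-coordinates (in pixels) for the cells observed at one shear stress, recorded at 250 frames/s.

umPerPixel = 0.5;                                % hypothetical calibration (um per pixel)
fps = 250;                                       % recording rate from the methods
v = diff(xPos) * umPerPixel * fps;               % instantaneous rolling velocities (um/s)
cellMeans = mean(v, 1);                          % mean rolling velocity of each tracked cell
meanV = mean(cellMeans);                         % mean rolling velocity at this shear stress
semV = std(cellMeans) / sqrt(numel(cellMeans));  % standard error of the mean (SEM)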

Results
GPIbα and von Willebrand factor (vWF) interactions were recorded using videomicroscopy in parallel plate flow chamber experiments. These interactions were then tracked using MetaMorph Offline, the tracking software, for at least 250 continuous frames (1 second).

Figure 3. A snapshot of the videomicroscopy recording of CHO cells interacting on a vWF-coated surface (labels: non-rolling cells; rolling cells). The non-rolling cells flow freely over the surface without any interactions. In contrast, the rolling cells visibly flow more slowly as they interact with the vWF ligand.


Figure 4. Mean rolling velocity of platelets on gain of function (GOF) R687E A1. The x-axis represents the logarithmic shear stress (dynes/cm²) while the y-axis represents the mean rolling velocity (µm/s). The error bars represent the standard error of the mean (SEM). The increasing velocity suggests slip bond behavior of GOF R687E.

Figure 5. Mean rolling velocity of platelets on gain of function (GOF) R687E A1, from a second donor. The x-axis represents the logarithmic shear stress (dynes/cm²) while the y-axis represents the mean rolling velocity (µm/s). Error bars represent the standard error of the mean (SEM). The increasing velocity suggests slip bond behavior of GOF R687E.


The results from MetaMorph Offline were processed through MATLAB to compute the mean rolling velocities for each shear stress used (0.5 dynes/cm² to 512 dynes/cm²). In order to learn about the GPIbα-vWF tether bond interaction, mean rolling velocities were plotted versus shear stress for each individual experiment.

Plotting the results for platelets interacting on the gain of function (GOF) mutant R687E vWF-A1 molecule revealed a trend of increasing mean rolling velocities with increasing shear stress (Figure 4). The x-axis represents the logarithmic shear stress (dynes/cm²) while the y-axis represents the mean rolling velocity (µm/s). The error bars are the standard error of the mean (SEM), which is calculated by dividing the standard deviation by the square root of the number of samples (stdev/√N).

Intuitively, with increasing shear stress the bond lifetime decreases for each individual bond (the one-to-one molecular interaction between GPIbα and the vWF ligand); consequently, the mean rolling velocity increases at higher shear stresses. This increase in mean rolling velocity is characteristic of a slip bond interaction, because the molecules tend to "slip off" the ligand more readily at higher shear stress than at lower shear stress.

A separate experiment performed with platelets from a different donor on GOF R687E vWF showed a similar slip bond interaction (Figure 5). Although fewer data points were collected in this experiment, it showed a similar increase in mean rolling velocity with increasing shear stress. A statistical analysis of these two data sets revealed a Pearson correlation factor of 0.98 and a p-value greater than 0.05 for a paired t-test. Therefore, the reproducibility of this trend affirmed the slip bond characteristic of the GOF R687E vWF-A1 molecule.

Outliers at high shear stress are attributed to the fact that bond lifetime significantly decreases at higher shear stress. As a result, fewer platelets interact at those shears, and thus fewer data points were collected at higher shear stresses compared to lower shear stresses. This is reflected in the large SEM bars for data points at the higher shear stress end. Similarly, mean rolling velocities at the lowest shear stress are also variable because of the difficulty in distinguishing interacting platelets from non-interacting platelets. For both experiments, platelets were suspended in Hepes-Tyrode buffer.

Figure 6a and 6b. Mean rolling velocity of platelets on wt A1A2A3 vWF. The y-axes in both 6a and 6b represent the mean rolling velocity (µm/s), whereas the x-axis in 6a represents the logarithmic shear stress (dynes/cm²) and in 6b represents the logarithmic shear rate (s⁻¹). The error bars represent the standard error of the mean (SEM). This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand.



In contrast, platelets interacting on the wild type (wt) A1A2A3 vWF molecule (Figures 6a and 6b) showed a different trend compared to GOF R687E vWF. The y-axes in Figures 6a and 6b represent the mean rolling velocity (µm/s), whereas the x-axis in 6a represents the logarithmic shear stress (dynes/cm²) and in 6b represents the logarithmic shear rate (s⁻¹). The error bars represent the SEM. As illustrated by the graphs, the mean rolling velocity initially decreased, then increased and decreased, only to increase again with increasing shear stress (and shear rate). This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand.

A decrease in mean rolling velocity correlates with an increase in the lifetime of the individual bond, indicating a catch bond because the platelet is "caught" by the ligand. Likewise, an increasing mean rolling velocity implies a decrease in the lifetime of the individual bond, indicating a slip bond interaction. Platelets on wt A1A2A3 vWF exhibited two complete cycles of catch-slip bond interaction over the range of shear stress measured (0.5 dynes/cm² to 512 dynes/cm²). For this particular experiment, platelets were suspended in Hepes-Tyrode buffer (0% Ficoll) and in 6% Ficoll solution. Suspending platelets in a more viscous solution was used to verify whether the catch-slip bond was force dependent or transport dependent.

A similar catch-slip bond interaction was illustrated with Chinese Hamster Ovary (CHO) cells interacting on wt A1A2A3 vWF (Figure 7). Although fewer data points were collected for this experiment, it still demonstrated two cycles of decreasing and then increasing mean rolling velocity with increasing shear stress. No statistical analysis was performed between these two results because CHO cells contain isolated GPIbα receptors, whereas platelets have many molecules on their surface. Thus, the mean rolling velocities of the two cell types are not directly comparable.

Discussion
Fresh platelets and wild type (wt) Chinese Hamster Ovary (CHO) cells were used on gain of function (GOF) R687E vWF or wt A1A2A3 vWF in order to study some aspects of the GPIbα-vWF tether bond. Parallel plate flow chamber experiments were the same for each vWF molecule; the only difference was whether the fluid passing through contained platelets or wt CHO cells. All rolling interactions were observed at 250 frames per second.

It was previously found that wild type-wild type (wt GPIbα on wt vWF) interactions differ from wt-GOF (wt GPIbα on GOF vWF) interactions. An additional experiment (Appendix A, Figure A1) shows platelets on wt vWF-A1. This graph shows a transition of bonding behavior from a region of decreasing rolling velocity to an increasing rolling velocity as the shear stress increases. This trend is indicative of a catch-slip bond transition because the rolling velocity decreases (catch behavior) and then increases (slip behavior) with increased shear stress.

However, results from platelets on GOF R687E vWF (Figures 4 and 5) showed an increase in rolling velocities with increased shear stress, indicating only slip bond behavior. This suggests that a catch bond governs low force binding behavior between wt GPIbα and wt vWF-A1, whereas a slip bond governs binding of GOF R687E at high shear stresses. One possible reason for this could be the differential force response of the bond lifetime.

Results of platelets rolling on wt A1A2A3 vWF (Figures 6a and 6b) showed two complete cycles of bonding behavior from a region of decreasing rolling velocity to increasing rolling velocity as shear stress increased.


This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand. When rolling velocity decreases, it is indicative of a catch bond, suggesting the bonds are stuck or caught on the ligand and hence slowing the cell's velocity. Likewise, when the rolling velocity increases, it indicates a slip bond because the bond comes off or slips off much more quickly and consequently increases the rolling velocity. Based on previous knowledge (Figure A1), this catch-slip bond behavior can be identified with the presence of the wt A1 domain in the wt A1A2A3 vWF ligand. However, the two observed complete cycles of catch-slip bond behavior might be due to the structural complexity of the complete A1A2A3 vWF ligand.

Also, the viscosity of the fluid in which platelets were suspended was increased by 1.8 times. Comparing the results from these two different solutions (0% Ficoll and 6% Ficoll) helped determine whether this catch-slip bond interaction was force dependent or transport dependent. Figures 6a and 6b show a boxed region where the data points for the two different solutions (with 0% Ficoll and 6% Ficoll) align or overlap when plotted together.

Figure 7. Mean rolling velocity of CHO cells on wt A1A2A3 vWF. The x-axis represents the logarithmic shear stress (dynes/cm²) while the y-axis represents the mean rolling velocity (µm/s). The error bars represent the standard error of the mean (SEM). This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand.

Figure A1. Results of platelets on wt-A1 vWF. The y-axes in both the left and right plots represent the mean rolling velocity (µm/s), whereas the x-axis in the left plot represents the logarithmic shear stress (dynes/cm²) and in the right plot represents the logarithmic shear rate (s⁻¹). These plots show a catch-slip bond interaction, as the rolling velocity decreases and then increases with increasing shear stress (and shear rate).


Since the shear stress data align better than the shear rate data, this indicates that force, which regulates shear stress, is probably what governs this catch-slip bond interaction.

A similar catch-slip bond interaction was observed between wt CHO cells on wt A1A2A3 vWF (Figure 7). The bond behavior transitions from a region of decreasing rolling velocity to a region of increasing rolling velocity. CHO cells have isolated GPIbα receptors, which allows for the isolation of the GPIbα receptor's contribution to the rolling velocity parameter, since platelets have many molecules on their surface. Thus, this trend could be attributed to the GPIbα receptor's interactions with the vWF molecules and the A1A2A3 structure.

Overall, the bond behaviors of the two vWF domains, GOF R687E and wt A1A2A3, were successfully characterized. Although the bonding trends of the vWF ligand appear very obvious, more testing will help further substantiate these claims. Based on the results, the next step would be assessing how these bond types adversely affect platelet aggregation in the presence of a vascular injury. Determining the adverse effects of the different bond types in each vWF domain will further help in understanding VWD and its causes, and could potentially lead to a treatment.

Conclusion
Some valuable information on tether bonding between GPIbα and vWF, specifically GOF R687E vWF and wt A1A2A3 vWF, was acquired from the four sets of experiments performed. Results from two experiments revealed pure slip bond behavior for platelets rolling on GOF R687E (Figures 4-5). Statistical analysis also showed a strong correlation and a p-value greater than 0.05 between the two experiments involving GOF R687E vWF, hence confirming the reproducibility of the slip bond behavior. This slip bond behavior is attributed to the differential force response of bond lifetime between GPIbα and the GOF vWF ligand with increasing shear stress.

In addition, studying wt A1A2A3 vWF on platelets and wt CHO cells revealed two complete cycles of catch-slip bond behavior (Figures 6-7). Based on previous knowledge, this catch-slip bond behavior can be identified with the presence of the wt A1 domain in the A1A2A3 ligand. However, having two cycles of catch-slip bond behavior can be due to the structural complexity of the A1A2A3 vWF ligand.

In future studies, more experiments need to be performed with wt A1A2A3 vWF on platelets and CHO cells in order to confirm the reproducibility of the results achieved. More data are needed to support the claim that the two cycles of catch-slip bond behavior can be attributed to the structural complexity of the A1A2A3 vWF ligand. Similarly, more experiments involving GOF R687E vWF on platelets and CHO cells will further substantiate the slip bond behavior of GOF vWF. Studying the biomechanics and bond behavior of each domain of the vWF molecule will allow a better understanding of vWF and VWD.


References
Andrews RK, Lopez JA, Berndt MC (1997) Molecular Mechanisms of Platelet Adhesion and Activation. International Journal of Biochemistry and Cell Biology 29: 91-115.

Berndt M, Ward CM (2000) Platelets, Thrombosis, and the Vessel Wall. Vol. 6. Harwood Academic.

Kroll M, Hellums D, McIntire L, Schafer A, Moake J (1996) Platelets and Shear Stress. The Journal of The American Society of Hematology 88(5): 1525-1541.

Ruggeri Z (1997) Von Willebrand Factor - Cell Adhesion in Vascular Biology. The American Society for Clinical Investigation 99: 559-564.

Sadler J (1998) Biochemistry and genetics of von Willebrand factor. Annual Review of Biochemistry 67: 395-424.

Yago T, Wu J, Wey C, Klopocki A, Zhu C, McEver R (2004) Catch Bonds Govern Adhesion Through L-Selectin At Threshold Shear. The Journal of Cell Biology 166: 913-924.


This paper addresses one of the major causes of the sub-prime mortgage crisis: the practices prevalent in large American mortgage houses by the end of 2006. The moral hazard scenario and consequent malpractices are addressed with respect to the soft budget constraint. This analysis is done by first looking at the Dewatripont and Maskin model (1995), and then suitably modifying it to model the scenario at a typical mortgage lender. This simplistic model provides useful insight into how heightened bailout expectations, caused by precedent actions by the Federal Reserve, fueled risky behavior at banks that thought themselves to be "too-large-to-fail."

Moral Hazard and the Soft Budget Constraint: A Game-Theoretic Look at the Primal Cause of the Sub-Prime Mortgage Crisis
Akshay Kotak
School of Economics and School of Industrial & Systems Engineering, Georgia Institute of Technology

Advisor: Emilson C. Silva
School of Economics, Georgia Institute of Technology


Introduction
Over the last two decades there has been considerable interest in the study of financial crises and instability, owing largely to the prevalence of financial crises in the recent past. As Alan Greenspan observed, after the collapse of the Soviet Bloc at the end of the Cold War, market capitalism spread rapidly through the developing world, largely displacing the discredited doctrine of central planning (Greenspan 2007). This abrupt transition led to explosive growth that was at times too hot to handle and inadequately controlled, causing several crises in the Third World, most notably in East Asia in 1997 and Russia in 1998. Additionally, there have been periods of economic tumult in the developed world, including the near collapse of Japan in the 1990s, the bailout of Long Term Capital Management by the Federal Reserve in 1998, and most recently, the subprime mortgage crisis of 2007-08.

As Dimitrios Tsomocos highlights in his paper on finan-cial instability, “[t]he difficulty in analyzing financial instability lies in the fact that most of the crises mani-fest themselves in a unique manner and almost always require different policies for their tackling” (Tsomocos 2003). Most explanations, however, are modeled on a game-theoretic framework involving a moral hazard scenario brought about by asymmetric information. This choice of framework has been popular because of its ability to predict equilibrium behavior (under rea-sonable assumptions) for a given scenario and explain qualitatively and mathematically why and when devia-tions from this behavior occur.

This paper aims to perform a similar introductory analysis of one of the underlying causes of the current global economic crisis, subprime mortgage lending activity in the US from 2001-07, in light of the soft budget constraint (SBC). The soft budget constraint syndrome, identified by János Kornai in his study of the economic behavior of centrally planned economies (1986), has been used to explain several phenomena and crises in the capitalist world. While initially used to explain shortage in socialist economies, the SBC has since been used to provide explanations for the Mexican crisis of 1994, the collapse of the banking sector of East Asian economies in the 1990s, and the collapse of the Long Term Credit Bank of Japan.

The soft budget constraint syndrome is said to arise when a seemingly unprofitable enterprise is bailed out by the government or its creditors. This injection of capital in dire situations ‘softens’ the budget constraint for the enterprise – the amount of capital it has to work with is no longer a hard, fixed amount. There is a host of literature, primarily developed from a model designed by Mathias Dewatripont and Eric Maskin, which fo-cuses on the moral hazard issues brought about when a government or central bank acts as the lender of last resort to financial institutions (Kornai et al. 2003).

Background

The subprime mortgage crisis of 2007 was marked by a sharp rise in United States home foreclosures at the end of 2006 and became a global financial crisis during 2007 and 2008. The crisis began with the bursting of the speculative bubble in the US housing market and high default rates on subprime adjustable rate mortgages made to higher-risk borrowers with lower incomes or shorter credit histories than prime borrowers.

Several causes for the proliferation of this crisis to all sectors of the economy have been delineated, including excessive speculative investment in the US real estate market, the overly risky bets investment houses placed on mortgage backed securities and credit swaps, inac-curate credit ratings and valuation of these securities, and the inability of the Securities and Exchange Com-mission to monitor and audit the level of debt and risk borne by large financial institutions. It would be fair to

say, however, that one of the most fundamental causes of the entire debacle was the lending practices prevalent in mortgage houses in the US by the end of 2006 and the free hand given to these lenders to continue their practices. While securitization produced complex de-rivatives from these mortgages that were incorrectly val-ued and risk-appraised, it was ultimately the misguided decisions made by mortgage lenders that caused default rates to rise when the housing bubble burst, eroding the value of the underlying assets and setting off a chain re-action in the financial sector.

With housing prices on the rise since 2001, borrowers were encouraged to assume adjustable-rate mortgages (ARM) or hybrid mortgages, believing they would be able to refinance at more favorable terms later. How-ever, once housing prices started to drop moderately in 2006-2007 in many parts of the U.S., refinancing be-came more difficult. Defaults and foreclosures increased dramatically as ARM interest rates reset higher. During 2007, nearly 1.3 million U.S. housing properties were subject to foreclosure activity, up 75% versus 2006. (US Foreclosure Activity 2007).

Primary mortgage lenders had passed a lot of the de-fault risk of subprime loans to third party investors through securitization, issuing mortgage-backed securi-ties (MBS) and collateralized debt obligations (CDO). Therefore, as the housing market soured, the effects of higher defaults and foreclosures began to tell sig-nificantly on financial markets and especially on major banks and other financial institutions, both domesti-cally and abroad. These banks and funds have reported losses of more than U.S. $500 billion as of August 2008 (Onaran 2008). This heavy setback to the financial sec-tor ultimately led to a stock markets decline. This dou-ble downturn in the housing and stock markets fuelled recession fears in the US, with spillover effects in other economies, and prompted the Federal Reserve to cut down short term interest rates significantly, from 5.25%

in August ’07 to 3.0% in February ’08 and subsequently down all the way to 0.25% in December ’08 (Historical Changes, 2008).

As the single largest mortgage financing institution in the US, Countrywide Financial felt the heat of the subprime crisis more than a lot of the other affected fi-nancial institutions. Faced with the double whammy of a housing market crash and the stiff credit crunch, the company found itself in a downward spiral, with a rise in readjusted mortgage rates increasing the number of foreclosures which eroded profits.

In the case of Countrywide Financial and other large finance corporations that considered themselves "too-large-to-fail," the expectation of downside risk coverage was raised to a level that promoted substantial risk-taking. This expectation was based on precedent actions by the Federal Reserve in bailing out distressed large firms, dubbed the Greenspan (and now, the Bernanke) put. Thomas Walker (2008), in his article in The Wall Street Journal, aptly says,

There is tremendous irony, and common sense, in the realization that multiple successful rescues of the financial system by the Fed over several decades will eventually create a risk-taking cul-ture that even the Fed will no longer be able to single-handedly save, at least not without serious inflationary consequences or help from foreign-ers to avoid a dollar collapse. Eventually the cul-ture will overwhelm the ability of the authorities to make it all better.

Ethan Penner of Nomura Capital provides a succinct and veracious definition of the moral hazard dilemma in saying that, “Consequences not suffered from bad de-cisions lead to lessons not learned, which leads to bigger failings down the road (Penner 2008).”

The Dewatripont Maskin (DM) Model

Mathias Dewatripont and Eric Maskin developed a model in 1995 to explain the softening of the budget constraint under centralized and decentralized credit markets (Dewatripont et al. 1995). The simplest version of their model is a two-period model, with the key players being a banker that serves as the source of capital to each of a set of entrepreneurs that require funding to undertake projects. At the beginning of period 1, each of the entrepreneurs chooses a project to submit for financing, and projects may be of one of two types: good or poor. The probability of a project being good is α. The asymmetry in information lies in the fact that, once selected, only the entrepreneur knows the type of the project, i.e. the banker is unable to monitor the project beforehand. The entrepreneur has no bargaining power and the banker, if he decides to finance the project, makes a take-it-or-leave-it offer.

Set-up funding costs 1 unit of capital. The banker is able to learn the nature of a project once he funds its set-up during period 1. A good project, once funded, yields a monetary return Rg (>0) and a private benefit Bg (>0) for the entrepreneur by the beginning of period 2; the private benefits may include intangibles such as reputation enhancement. A poor project, on the other hand, yields a monetary return of 0 by the beginning of period 2. If the banker ends up dealing with a poor project, he has, at the beginning of period 2, the option of liquidating the project's assets to obtain a liquidation value L (>0), in which case the entrepreneur earns a private benefit BL (<0), since liquidation would imply a loss in reputation. The other option the banker has is to refinance the project, which would require the injection of another unit of capital at the beginning of period 2. In that case the gross return is Rp and the private benefit to the entrepreneur is Bp (>0).

A graphical representation of the timing and structure of the DM model is shown in Figure 1.

The fairly simple model proposed by Dewatripont and Maskin, when suitably tweaked, may be used to explain a number of phenomena in both capitalist and socialist economies. The model was originally designed to assess how decentralizing the credit market (under some fairly reasonable assumptions about the comparative nature of Rg and Rp) will harden the budget constraint, making markets more efficient, by adding an incentive for entrepreneurs not to submit poor projects for financing.
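To make the two-period timing concrete, the sketch below tallies the banker's payoffs in the set-up described above. It covers only the banker's side of the game, and the numerical values, as well as the rule that the banker refinances a poor project whenever refinancing beats liquidation ex post, are illustrative assumptions rather than results stated in the paper.

```python
# Illustrative sketch of the banker's payoffs in the two-period Dewatripont-Maskin set-up.
# Parameter values and the ex-post refinancing rule are assumptions for illustration only.

def banker_payoff_good(Rg):
    """Net return from a good project: gross return Rg minus the set-up unit of capital."""
    return Rg - 1

def banker_payoff_poor(Rp, L, refinance):
    """Poor project: either liquidate for L, or inject a second unit of capital and collect Rp."""
    return (Rp - 2) if refinance else (L - 1)

def expected_banker_payoff(alpha, Rg, Rp, L):
    """Expected payoff when a fraction alpha of submitted projects is good and the banker
    follows the ex-post-optimal rule for poor projects (the soft-budget-constraint case
    is the one where refinancing beats liquidation after the fact)."""
    refinance = (Rp - 2) > (L - 1)
    value = alpha * banker_payoff_good(Rg) + (1 - alpha) * banker_payoff_poor(Rp, L, refinance)
    return value, refinance

if __name__ == "__main__":
    payoff, softened = expected_banker_payoff(alpha=0.6, Rg=2.5, Rp=1.6, L=0.4)
    print(f"expected banker payoff: {payoff:.2f}, budget constraint softened: {softened}")
```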

For specific application to this study, the model will be used to study the moral hazard scenario that comes about when financial institutions consider themselves "too-large-to-fail." These institutions, such as Long Term Capital Management in the late 1990s and, more recently, Countrywide Financial, are insured to some measure in the sense that their multi-billion-dollar positions can affect financial markets so heavily that, in the case of a

Figure 1. The Structure of the Dewatripont Maskin Model (Source: Kornai et al., 2003)


downturn, large private banks, the central bank, or the government would be forced to bail them out to avoid a financial meltdown. This insurance against downside risk stimulates the moral hazard scenario and gives in-centive to these financial institutions to make much riskier bets with higher potential return.

Methodology

The game-theoretic model used in this study has two key players: the borrowing entity ("borrower") and the lending entity ("the bank"). Additionally, the study looks at the effects of the presence of a lender of last resort. Borrowers, assumed to be identical, can choose from two types of loans offered by the bank: a fixed rate loan with principal Lf and an adjustable rate loan with principal La. Customer utility U(x) takes the typical concave functional form, increasing with decreasing marginal returns (i.e. U'>0, U''<0), and is simplified in this model to be the natural logarithm function.

The fixed rate loan has an interest rate rf . The adjustable rate loan is assumed to have an initial low fixed interest rate r0a which is readjusted after a period λ. The remain-der of the adjustable rate loan is paid off at the rate de-termined at the end of period λ. If market conditions are good at this time, the interest rate is adjusted to r1g, and if they are bad, the rate is adjusted to r1b. Market condi-tions are represented in the model by an exogenous vari-able θ, which is the probability of the market conditions being good, i.e. of the interest rate reset to being r1g.

The bank and customer convene before a loan is offered to discuss the terms of the ARM. Based on the bank's expectations about the economy (i.e. θ) and about the values of r1b and r1g, the bank and the customer decide on a fixed initial rate r0a and a period λ for which the loan is kept fixed. The computation for λ also involves a parameter, δ, which reflects the increase in the default rate under bad market conditions. This revenue shrinkage factor (δ) can be thought of as an indicator of the bank's downside risk coverage. In the current framework, it is affected by two key factors:

1. Collateral requirements: Higher collateral would imply more downside risk coverage (i.e. higher δ) but would also reduce the quantity of loans demanded, since fewer people would be able to pay the required collateral for the same loan. The bank would therefore weigh the benefit (potential revenue) of additional loans against the cost (increased risk) to choose the ideal collateral requirement for the ARM. This cost-benefit analysis is, however, outside the scope of this study, and δ is therefore assumed to be exogenous.

2. Bail-out expectations: Increased expectations of a bail-out (i.e. a cash injection in case of bad market con-ditions) would also raise the value of δ, but without shrinkage in loan demand.

The game is played between borrowers and the bank with equilibrium being reached by the bank setting λ such that borrowers are indifferent to either of the two loans, and the borrowers opting for a mixed strategy. The indifferent borrower chooses a fixed rate loan with a probability α such that the expected payoff from either loan is the same for the bank.

This study analyzes the equilibrium of this game under two scenarios – with and without the presence of a lend-er of last resort. The presence of a lender of last resort who is expected to bail the lender out with a cash injec-tion increases the (perceived) value of δ even though the level of protection offered to the bank through collateral remains the same. So, in this case, the revenue shrinkage for the second collection period is reduced (Figure 2).

The optimal loan amount for a fixed rate loan (L*f) maximizes net utility for the borrower. Net utility is the difference between the utility gained from the loan amount and the total interest paid over the lifetime of the loan. The borrower therefore solves

\[ \max_{L_f}\ \left\{ U(L_f) - r_f \cdot L_f \right\} \]

i.e.

\[ \max_{L_f}\ \left\{ \ln(L_f) - r_f \cdot L_f \right\} \]

which yields

\[ L_f^{*} = \frac{1}{r_f} \quad (1) \]

With an adjustable rate loan, the interest payment for the average borrower would be

\[ L_a \cdot \left[ \lambda \cdot r_{0a} + (1-\lambda) \cdot \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right) \right] \]

Therefore, the optimal loan amount for an adjustable rate loan is

\[ L_a^{*} = \frac{1}{\lambda \cdot r_{0a} + (1-\lambda) \cdot \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right)} \quad (2) \]

Figure 2. Readjustment of adjustable rate loans after the initial period, λ.

In order to ensure that a mixed strategy is employed at equilibrium, i.e. to have 0 < α < 1, the bank sets λ such that borrowers are indifferent between fixed and adjustable rate loans:

\[ U(L_f^{*}) - r_f \cdot L_f^{*} = U(L_a^{*}) - \left[ \lambda \cdot r_{0a} + (1-\lambda) \cdot \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right) \right] \cdot L_a^{*} \]

Substituting values from (1) and (2), we obtain

\[ -\ln r_f - 1 = -\ln\left[ \lambda \cdot r_{0a} + (1-\lambda) \cdot \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right) \right] - 1 \]

i.e.

\[ r_f = \lambda \cdot r_{0a} + (1-\lambda) \cdot \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right) \]

giving us

\[ \lambda^{*}(\theta) = \frac{r_f - \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right)}{r_{0a} - \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right)} \quad (3) \]
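As a quick numerical check of equations (1)-(3), the sketch below computes the optimal loan sizes and the λ that makes the borrower indifferent, and verifies that the two net utilities coincide. All rate values are illustrative placeholders, not figures taken from the paper.

```python
# Numerical check of equations (1)-(3) and the indifference condition derived above.
# The rate values below are illustrative placeholders, not calibrated to the paper.
import math

r_f, r_0a = 0.062, 0.045      # fixed rate and initial (teaser) ARM rate
r_1g, r_1b = 0.072, 0.10      # reset rates under good and bad market conditions
theta, delta = 0.7, 0.6       # probability of good conditions, revenue shrinkage factor

reset = theta * r_1g + (1 - theta) * delta * r_1b   # expected post-reset collection rate

L_f_star = 1 / r_f                                  # equation (1)
lam_star = (r_f - reset) / (r_0a - reset)           # equation (3)

def arm_rate(lam):
    """Effective per-unit interest factor of the ARM for a given fixed-rate period lambda."""
    return lam * r_0a + (1 - lam) * reset

L_a_star = 1 / arm_rate(lam_star)                   # equation (2)

net_fixed = math.log(L_f_star) - r_f * L_f_star
net_arm   = math.log(L_a_star) - arm_rate(lam_star) * L_a_star
print(f"lambda* = {lam_star:.3f}")
print(f"net utility, fixed = {net_fixed:.4f}, ARM = {net_arm:.4f}")   # equal at equilibrium
```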

Analysis

In deriving equation (3) above, we also find that, at equilibrium, the net interest rate charged for a fixed loan and an adjustable loan are the same, i.e.

\[ r_f = \lambda \cdot r_{0a} + (1-\lambda) \cdot \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right) \]

Since all interest rates and parameters are positive, and since r0a is assumed to be less than rf, the above can only hold true if

\[ r_{0a} < r_f < \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \quad (4) \]

Also, it must hold that

\[ r_{0a} < r_f < r_{1g} < r_{1b} \quad (5) \]

This is derived from equation (4) and from the fact that, as market conditions worsen, liquidity becomes harder to obtain and therefore the cost of debt increases.

The equilibrium behavior of interest is the nature of the change in λ with changes in the exogenous parameters, θ and δ. The rate of change of λ with respect to θ is

\[ \frac{\partial \lambda}{\partial \theta} = \frac{(r_f - r_{0a}) \cdot (r_{1g} - \delta \cdot r_{1b})}{\left( r_{0a} - \left( \theta \cdot r_{1g} + (1-\theta) \cdot \delta \cdot r_{1b} \right) \right)^{2}} \quad (6) \]

Given condition (4), the sign of the above expression depends on the sign of (r1g - δ·r1b). Therefore, if

\[ \delta < \frac{r_{1g}}{r_{1b}} \]

then the right hand side of equation (6) is positive, implying that an increase in the probability of good market conditions causes an increase in the amount of time that the loan is kept at the low fixed rate r0a. This makes intuitive sense because if

\[ \delta < \frac{r_{1g}}{r_{1b}} \]

then the bank is not adequately covered against downside risk, so even though the probability of good market conditions increases, the bank keeps the loan at the fixed low rate longer and decreases the length of the period of uncertain collection, which is subject to downside risk. One concern that arises is why the bank takes any risk in the first place by offering an adjustable rate loan even though its payoff is the same as that of the less risky fixed rate loan. The reasoning here is that adjustable rate loans earn higher commissions, which compensates to some extent for this risk. Additionally, ARMs are preferred by more customers and therefore add intangible value in terms of higher volumes, which may lead to lower costs, better customer satisfaction, and a broader clientele. Also, since the function for λ is a rational function in θ (see equation 3), the values of r0a, r1b, and r1g need to fall within a certain range to ensure that an ARM is feasible, i.e. that λ lies between 0 and 1.

Conversely, if

\[ \delta > \frac{r_{1g}}{r_{1b}} \]

then the bank is covered against downside risk. The right hand side of equation (6) is now negative, so an increase in the probability of good market conditions extends the length of the period of uncertain collection. Additionally, if

\[ \delta = \frac{r_{1g}}{r_{1b}} \]

then the bank is independent of the nature of market conditions, i.e. independent of θ. However, since δ is not set arbitrarily by the bank, it cannot always pursue this strategy of hedging against market risk (Figure 3).

As mentioned earlier, the presence of a lender of last resort inflates the value of δ without any increase in collateral. From Figure 3 we see that as δ increases, there are three ways in which the bank begins to take on more risk. A rise in the value of δ increases the feasibility of adjustable rate loans: loans that were not feasible for a given economic outlook (i.e. θ value) now start becoming feasible even though the bank is not adequately covered against this higher level of risk by its collateral collection. Additionally, a rise in δ decreases the sensitivity of λ with respect to θ. A higher δ therefore prompts less vigilant observation of market conditions, as small changes in market outlook now mandate less significant changes in loan structure. Finally, if δ is raised to a high enough value,

\[ \delta > \frac{r_{1g}}{r_{1b}} \]

then the bank begins to make counter-intuitive decisions, and a decrease in the probability of good market conditions now actually brings about an increase in the period of uncertain collection.
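These regimes can be tabulated directly from equation (3). The sketch below evaluates λ*(θ) for a few values of δ using the rates quoted in the Figure 3 caption (rf = 6.2%, r0a = 4.5%, r1b = 10%, r1g = 7.2%); the grid of θ values is arbitrary, and entries outside [0, 1] are marked as infeasible.

```python
# Sketch tabulating lambda*(theta) from equation (3), using the rates quoted in the
# Figure 3 caption. The theta grid and the chosen delta values are arbitrary.
r_f, r_0a, r_1g, r_1b = 0.062, 0.045, 0.072, 0.10

def lambda_star(theta, delta):
    reset = theta * r_1g + (1 - theta) * delta * r_1b   # expected post-reset collection rate
    return (r_f - reset) / (r_0a - reset)                # equation (3)

# delta = 0.72 equals r_1g / r_1b, the knife-edge case where lambda* is independent of theta
for delta in (0.0, 0.3, 0.72, 1.0):
    row = []
    for theta in (0.25, 0.5, 0.75, 1.0):
        lam = lambda_star(theta, delta)
        row.append(f"{lam:6.2f}" if 0.0 <= lam <= 1.0 else "  n/a ")
    print(f"delta = {delta:.2f}: lambda* at theta 0.25/0.50/0.75/1.00 -> {' '.join(row)}")
```

Running this shows the pattern described above: for low δ the ARM is only feasible near θ = 1, higher δ widens the feasible range, and at δ = r1g/r1b the fixed-rate period stops responding to θ altogether.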

Conclusion

This model, albeit simplistic, provides interesting results. The model is designed to mimic the basic setup at a typical mortgage house offering a fixed rate loan and a two-step adjustable rate loan. We are able to show,

Figure 3. Period of fixed rate collection, λ v/s probability of good conditions, θ; rf = 6.2%, r0a = 4.5%, r1b = 10%, and r1g = 7.2%; δ = 0 to 1, in increments of 0.1.

mathematically, that an increase in the expectation of a bailout by a lender of last resort tends to encourage risky behavior in such mortgage offering agencies in multiple ways. That being said, there is plenty of scope for further elaboration and sophistication of the model. The market structure currently under investigation is both simplistic and insular, but a more elaborate structure of markets and corresponding interactions could be designed. For instance, a good example of possible market stratification is illustrated in Tsomocos (2003). Also, the current loan structure is a two period model with loans changing rates at the end of period one to a new fixed rate for period two. A more complex, multi-period loan structure could be investigated with the adjustable rate set as a random variable and a Markov chain approach used to study the equilibrium behavior in this scenario.

In their investigation of "federalism," Qian and Roland (1998) observe that giving fiscal authority to local governments instead of the central government works to limit the effects of the soft budget syndrome. They propose a three-tiered structure with local governments working between the central government and state and non-state enterprises. The competition among local governments to attract enterprises forces funds to be diverted into infrastructure development, increasing the opportunity cost of a bailout and thereby hardening the budget constraint for enterprises. A similar scenario could be envisioned in which the Federal Reserve distributes the decision-making authority (and funds) to bail out corporations among the twelve regional Federal Reserve Banks; it would be of interest to study the subsequent change in the behavior of the lending banks.

References

Demyanyk, Y. & Van Hemert, O. (2008). Understanding the Subprime Mortgage Crisis. Retrieved December 9, 2008, <http://ssrn.com/abstract=1020396>

Dewatripont, M. & Maskin, E. (1995). Credit and Ef-ficiency in Centralized and Decentralized Economies. The Review of Economic Studies, 62(4), 541-555.

Greenspan, A. (2007, December 12). The Roots of the Mortgage Crisis. The Wall Street Journal, p. A19

Historical Changes of the Target Federal Funds and Discount Rates: 1971 - present. (2008). Retrieved December 12, 2008, <http://www.newyorkfed.org/markets/statistics/dlyrates/fedrate.html>

Kornai, J. (1979). Resource-Constrained versus Demand-Constrained Systems. Econometrica, 47(4), 801-819.

Kornai, J. (1980). Economics of Shortage. Amsterdam: North-Holland.

Kornai, J. (1986). The Soft Budget Constraint. Kyklos, 39(1), 3-30.

Kornai, J., Maskin, E & Roland, G. (2003). Under-standing the Soft Budget Constraint. Journal of Economic Literature, 41(4), 1095-1136.

Onaran, Y. (2008, August 12). Banks’ Subprime Losses Top $500 Billion on Writedowns. Retrieved December 12, 2008, <http://www.bloomberg.com/apps/news?pid=20670001&sid=a8sW0n1Cs1tY>

Penner, E. (2008, April 11). Our Financial Bailout Culture. The Wall Street Journal, p. A17

Qian, Y. and Roland, G. (1998). Federalism and the Soft Budget Constraint. American Economic Review, 88(5), 1143-62.

Tsomocos, D. (2003). Equilibrium analysis, bank-ing and financial instability. Journal of Mathematical Economics, 39, 619-655.

U.S. Foreclosure Activity up 75 Percent in 2007. (2007). Retrieved March 14, 2008, <http://www.realtytrac.com/ContentManagement/RealtyTracLibrary.aspx?a=b&ItemID=4118&accnt=64953>

Walker, T. (2008, April 23). Our Bailout Culture Creates a Huge Moral Hazard. The Wall Street Journal, p. A16

The objective of the research is to address the power density issue of electric hybrids and the energy density issue of hydraulic hybrids by designing a drive system. The drive system will utilize new enabling technologies such as the INNAS Floating Cup pump/pump motors and the Toshiba Super Charge Ion Batteries (SCiB). The proposed architecture initially included a hydraulic-electric system, in which the high braking power is absorbed by the hydraulic system while energy is slowly transferred from both the Internal Combustion Engine (ICE) drive train and the hydraulic drive train to the electric accumulator for storage. Simulations were performed to demonstrate the control method for the hydraulic system with in-hub pump motors. Upon preliminary analysis it is concluded that the electric system alone is sufficient. The final design is an electric system that consists of four in-hub motors. Analysis is performed on the system and MATLAB Simulink is used to simulate the full system. It is concluded that the electric system has no need for a frictional braking system if Toshiba SCiBs are used. The regenerative braking system is able to provide an energy saving of 25% to 30% under the simulated conditions.

Compact Car Regenerative Drive Systems: Electrical or Hydraulic

Quinn Lai
School of Mechanical Engineering
Georgia Institute of Technology

Advisor: Wayne J. Book
School of Mechanical Engineering
Georgia Institute of Technology

Introduction

With around 247 million on-road vehicles traveling around 3 trillion miles every year (Highway Statistics, 2009), the efficiency of on-road vehicles is of major concern. As a result, hybrid drive trains, which dramatically increase the urban driving efficiency of vehicles, have been developed and implemented in vehicles. Existing on-the-road hybrids have their secondary regenerative systems (electric motors and batteries) installed on their primary drive trains (ICE drive train) to provide the regenerative braking capability. Recently, efforts have been put into designing drive train systems that have either hydraulic or electric components as integral parts of the systems. For example, in the Chevy Volt, a series electric hybrid system, the ICE is used to charge the electric accumulator, which in turn drives the electric motor (Introducing Chevrolet, 2009).

Electric hybrid drive trains have been implemented in passenger vehicles while hydraulic hybrids have been implemented in commercial vehicles. Since electric hybrid systems can operate quietly, enhancing passen-ger comfort, this system is implemented in passenger vehicles. However, current battery technologies in the market prevent high power charging and thus prevent the electric system from replacing frictional brakes. As a result a significant amount of braking energy is lost to the surroundings through heat. Hydraulic hybrids, in contrast, have the ability to capture most of the braking power. However, due to the characteristics of hydraulic components, the hydraulic systems suffer from the accu-mulator’s low energy density; the Noise, Vibration and Harshness (NVH) also significantly affect the driving experience.

In an attempt to address the charging power density challenge faced by electrical hybrids and the energy density challenge faced by hydraulic hybrids, different drive systems were designed. The Braking Power Analysis section of the report presents a simple braking power analysis as the foundation of further calculations in the other sections. The initial approach to solve the problem was to incorporate an electrical system in an existing hydraulic hybrid system. The Hydraulic Hybrid Drive System section presents the engineering-level analysis of the hydraulic hybrid system, and the Hydraulic Accumulator Analysis section investigates the hydraulic accumulator. It was confirmed that the hydraulic accumulator does not have a sufficient energy density for braking energy capture, and therefore electrical accumulators were introduced to capture the excess energy. The Battery Analysis section investigates the electrical accumulators. Upon the completion of the analysis, it was concluded that the electrical system alone is sufficient. As a result, an in-hub motor driven electric drive system (Figure 7) was chosen, and the analysis and simulations are presented in the Electrical Systems section of the paper.

Braking Power Analysis

Braking power analysis is performed to serve as a foundation for the accumulator analyses in the Hydraulic Accumulator Analysis and Battery Analysis sections. The analysis is conducted with an assumption of negligible rolling friction, air drag, and other losses. The driving analysis is performed on a mid-size passenger sedan, such as a Honda FCX Clarity. The Honda FCX Clarity fuel cell car was selected because the weight of the components in the car closely resembles the weight of the suggested drive system. The assumed vehicle mass is 1625 kg (Honda at the Geneva, 2009). The ECE-15 Driving Schedule is shown in Figure 1.

A 6 second 35 mph to 0 mph deceleration is assumed. The assumed braking slope resembles a rapid urban braking and is more rapid than the ECE-15 Urban Driving Schedule braking. A rapid 60 mph to 0 mph deceleration is also assumed. Under normal driving conditions, a passenger vehicle will take about 200 ft to decelerate (2009 Driver's Manual, 2009). The deceleration time involved can be obtained using Equation (1) and Equation (2)

\[ v_f^{2} = v_i^{2} + 2as \quad (1) \]

where v_f is the final velocity of the vehicle, v_i is the initial velocity of the vehicle, a is the acceleration of the vehicle, and s is the displacement involved in the acceleration. The time is obtained using Equation (2)

\[ v_f = v_i + at \quad (2) \]

where t is the deceleration time. Using the provided equations, we obtain 11 seconds as the deceleration time. The energy dissipated can be obtained using Equation (3)

\[ KE = \frac{1}{2} m v^{2} \quad (3) \]

where KE is the kinetic energy of the vehicle, m is the mass of the vehicle, and v is the velocity of the vehicle. The braking power can then be determined using Equation (4)

\[ P = \frac{d(KE)}{dt} = m v \frac{dv}{dt} = m v a \quad (4) \]

where P is the power involved with the braking. The deceleration details calculated are tabulated in Table 1.
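As a quick check of equations (1)-(4), the sketch below reproduces the 35 mph urban braking entry of Table 1, under the interpretation (not stated explicitly in the text) that the tabulated braking power is the peak value m·v_i·a at the start of a constant deceleration.

```python
# Minimal check of equations (2)-(4) for the 6 s, 35 mph-to-0 mph braking case in Table 1.
# Assumes constant deceleration and that the tabulated power is the peak value m*v_i*a.
MPH_TO_MS = 0.44704
m   = 1625.0                 # assumed vehicle mass (kg)
v_i = 35 * MPH_TO_MS         # initial speed (m/s)
t   = 6.0                    # braking time (s)

a = v_i / t                              # equation (2) with v_f = 0
kinetic_energy = 0.5 * m * v_i**2        # equation (3)
peak_power     = m * v_i * a             # equation (4) evaluated at the start of braking

print(f"deceleration              = {a:.2f} m/s^2")
print(f"kinetic energy dissipated = {kinetic_energy/1e3:.1f} kJ")
print(f"peak braking power        = {peak_power/1e3:.1f} kW")   # roughly 66 kW, as in Table 1
```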

Hydraulic Hybrid Drive System

The initial approach to solve the problem was to incorporate an electrical system in an existing hydraulic hybrid system. The selected hydraulic system is the INNAS HyDrid. The architecture of the HyDrid system (Achten, 2007) is presented in Figure 2.

INNAS claimed that 77 Miles Per Gallon (MPG) is possible for the HyDrid system (HyDrid, 2009) because

Figure 1. ECE-15 Driving Schedule; x-axis: time (s); y-axis: vehicle velocity (mph).

the secondary power plant allows engine off operation, and the Infinitely Variable Transmission (IVT) allows the engine to rotate at optimum RPM for efficiency. The INNAS HyDrid utilizes the INNAS Hydraulic Trans-formers (IHT) (Achten, 2002) in a Common Pressure Rail (CPR) (Vael & Acten, 2000). The IHT is claimed to have unmatched efficiency due to the Floating Cup Principle that it utilizes. The starting torque efficiency, according to Achten, is up to 90% (Achten, 2002) or above. The control method of the HyDrid is not pub-lished; therefore, a possible control method is presented to demonstrate how the IHT functions as an IVT and thus converting the varying pressure from the accumu-lator into the desired pressure for the in wheel constant displacement motor/pumps. When accelerating, ei-ther the pressure accumulator or the ICE will provide the required pressure in the CPR which will in turn be transmitted by the IHT to drive the in-wheel constant displacement motor/pump. The IHT is assumed to be a variable pump coupled with a variable pump/motor. A possible method of controlling the acceleration is to vary the stroke of the variable pump in the IHT while keeping the pump motor stroke and the ICE RPM con-

stant. During braking, the pump (CPR side) stroke is kept constant while the pump motor stroke is varied to charge up the accumulator. The presented control method is presented in Figure 3.

A simulation is performed to demonstrate the control method. The system assumes that the vehicle has a 4 cyl-inder gasoline engine, a 0.85 volumetric efficiency for variable pump/motor, and a 0.92 volumetric efficiency for constant displacement pumps and inactive pressure accumulators. It is also assumed ideal pipe lines and no force is involved in the varying of the pump stroke. The simulation shows how by varying the IHT pump stroke the vehicle speed can closely follow a desired trajectory with minimal ICE rpm variation. The ICE rpm and the pump stroke variation are shown in Figure 4 and 5

Figure 2. HyDrid Drive system adapted from Achten.

vi (mph)   vf (mph)   t (s)   Braking Power (kW)
35         0          6       66.0
60         0          11      99.8

Table 1. Deceleration details for the assumed vehicle.

respectively. The resulting vehicle velocity is shown in Figure 6.

The simulated vehicle velocity closely matches with the desired velocity trajectory, which is the ECE-15 driving schedule (Figure 1). The simulated velocity trajectory is idealized because of the idealized assumptions made in creating the simulation model. The pressure values pro-vided from the simulation are also observed to be faulty. This simulation’s values cannot be used for quantitative purposes. However, it is sufficient for the demonstra-tion of the variation between the stroke of the pump in the IHT and the vehicle velocity.

Hydraulic Accumulator Analysis

The hydraulic accumulator has sufficient power density but a low energy density. An attempt was made to quantify the energy storage capacity of a typical size hydraulic accumulator for a hydraulic hybrid vehicle so that the proposed additional battery pack can be correctly sized. A 38 L EATON hydraulic accumulator (Product Literature, 2009) is assumed (used in CCEFP Test Bed 3: Highway Vehicles). The parameters used for energy calculations are tabulated in Table 2.

Figure 3. Suggested Control Method for HyDrid system.

Volume (m3)                                  0.038
Precharge Pressure (MPa)                     10.7
Precharge Nitrogen Volume (m3)               0.038
Maximum Nitrogen Pressure (MPa)              20.6
Nitrogen Volume at Maximum Pressure (m3)     0.0176

Table 2. EATON 38 L hydraulic accumulator.

Figure 4. Engine RPM for HyDrid simulation; x-axis: time (s); y-axis: ICE rpm.

Figure 5. Pump stroke variation for HyDrid simulation; x-axis: time (s); y-axis stroke (m).

The assumed relationship between pressure and volume is shown in Equation (5)

\[ pV^{n} = \text{constant} \quad (5) \]

where p is the pressure of the nitrogen in the accumulator, V is the volume of nitrogen in the accumulator, and n is an empirical constant. Using this relationship, the total energy involved in completely pressurizing or depressurizing the accumulator is shown in Equation (6)

\[ W = \frac{p_f V_f - p_i V_i}{n - 1} \quad (6) \]

where p_i is the initial pressure, p_f is the final pressure, V_i is the initial volume, V_f is the final volume, and W is the energy involved. Using Equation (5) and Equation (6) we can calculate the total energy storage of the EATON 38 L pressure accumulator, which is 293.6 kJ. Using Equation (3) and assuming a vehicle with the weight of 1625 kg (from Braking Power Analysis), a 38 L accumulator is sufficient for acceleration from 0 mph to 42.5 mph. It is assumed that no energy is lost due to friction, drag, and inertia changes. As the main purpose of the hydraulic system in a hydraulic hybrid is to capture urban braking energy and to accelerate the car to a velocity where the ICE can be started, 293.6 kJ is sufficient. However, if the vehicle is braking from a speed higher than 42.5 mph, or the duration of braking is long, the hydraulic system will not be able to capture all of the braking energy. Therefore an electrical system is introduced to capture the excess energy.

Figure 6. Vehicle velocity variation for HyDrid simulation; x-axis: time (s); y-axis: velocity (mph).
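The sketch below recovers the polytropic exponent n from the Table 2 end states, evaluates equation (6), and computes the vehicle speed whose kinetic energy matches the stored energy (equation (3) with the 1625 kg mass assumed earlier). Small differences from the 293.6 kJ and 42.5 mph quoted in the text come from rounding.

```python
# Sketch evaluating equations (5)-(6) with the EATON accumulator values from Table 2.
import math

p_i, V_i = 10.7e6, 0.038     # precharge pressure (Pa) and nitrogen volume (m^3)
p_f, V_f = 20.6e6, 0.0176    # maximum pressure (Pa) and corresponding nitrogen volume (m^3)

n = math.log(p_f / p_i) / math.log(V_i / V_f)     # from p_i*V_i**n == p_f*V_f**n, equation (5)
W = (p_f * V_f - p_i * V_i) / (n - 1)             # equation (6)

m = 1625.0                                        # vehicle mass assumed in Braking Power Analysis (kg)
v = math.sqrt(2 * abs(W) / m)                     # speed with the same kinetic energy, equation (3)

print(f"polytropic exponent n     = {n:.3f}")
print(f"stored energy W           = {abs(W)/1e3:.1f} kJ")      # close to the 293.6 kJ quoted above
print(f"equivalent vehicle speed  = {v / 0.44704:.1f} mph")    # close to the 42.5 mph quoted above
```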

Battery Analysis

The batteries are proposed to serve as secondary energy storage which will capture excess energy that cannot be captured by the hydraulic system. The two electric accumulators analyzed are the Sony Olivine-type Lithium Iron Phosphate (LFP) cells (Sony Launches High-Power, 2009) and the Toshiba SCiB cells (Toshiba to Build, 2009). Both cells exhibit impressive recharge cycles and high charging power density. Cell specifications are shown in Table 3.

                        SCiB Cell             LFP
Nominal Voltage (V)     2.4                   3.2
Nominal Capacity (Ah)   4.2                   1.1
Size (mm)               approx 62 x 95 x 13   d=18, h=65
Weight (g)              approx 150            40
Charging time           90% in 5 min          99% in 30 min

Table 3. Sony LFP and Toshiba SCiB cell spec.

The charging power density can be obtained using Equation (7)

\[ \text{Charging power density} = \frac{(\%\,\text{Charge}) \cdot (\text{Energy density})}{t_{charging}} \quad (7) \]

where t_charging is the charging time for one cell. Energy density can be described using Equation (8)

\[ \text{Energy density} = \frac{C \cdot V \cdot 60 \cdot 60}{m_{cell}} \quad (8) \]

where C is the cell capacity in Ah, V is the nominal voltage, and m_cell is the mass of one cell. The energy density and charging power density values obtained are tabulated in Table 4.

                                SCiB Cell   LFP
Energy Density (kJ/kg)          242         316.8
Charging Power Density (W/kg)   726         198

Table 4. Sony LFP and Toshiba SCiB cell spec (charging power density and energy density).

As shown in Table 4, the Sony LFP outperforms the Toshiba SCiB in terms of energy density by a factor of 1.3, while the SCiB outperforms the LFP in terms of power density by a factor of 3. As the major limitation in electric hybrid systems is the charging power density of batteries, the SCiB cell is used for further analysis.

As calculated in Braking Power Analysis, 66.0 kW is the maximum braking power that occurs in a 6 second 35 mph to 0 mph city braking. Assuming that all the SCiB cells used are arranged in such a way that the power load is shared equally among the cells, and assuming ideal electronic components, 90.8 kg of SCiB cells is required. 90.8 kg of SCiB has a total capacity of 22,000 kJ. According to the Braking Power Analysis calculations, each city braking involves 197.7 kJ of braking energy, which is 0.9% of the battery pack's capacity. According to Toshiba, the capacity loss after 3,000 cycles of rapid charge and discharge is less than 10% (Toshiba to Build, 2009). Using the assumptions in Braking Power Analysis and treating the 10% capacity loss after 3,000 cycles as negligible, we can obtain approximately 334 thousand regenerative braking and driving cycles allowed before the capacity of the SCiB drops below 90%. Using the same assumptions, we can also find that 137.6 kg of SCiB is sufficient for the maximum charging power involved in the 11 second 60 mph to 0 mph highway accident braking. The weight of the battery pack required is slightly heavier than the 70 kg battery pack in a Toyota Prius electric hybrid vehicle.
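The cell-level and pack-level numbers above can be checked with the short sketch below, using the SCiB values from Table 3 and the same ideal-electronics assumption as the text.

```python
# Check of equations (7)-(8) and the SCiB pack sizing discussed above (ideal electronics assumed).
C, V     = 4.2, 2.4          # SCiB nominal capacity (Ah) and voltage (V), Table 3
m_cell   = 0.150             # SCiB cell mass (kg), Table 3
t_charge = 5 * 60            # 90% charge in 5 minutes, Table 3
pct      = 0.90

energy_density = C * V * 60 * 60 / m_cell          # equation (8), J/kg
power_density  = pct * energy_density / t_charge   # equation (7), W/kg

pack_66kw   = 66.0e3 / power_density               # cell mass needed to absorb 66.0 kW
pack_99kw   = 99.8e3 / power_density               # cell mass needed to absorb 99.8 kW
pack_energy = pack_66kw * energy_density / 1e3     # capacity of the 66 kW pack, kJ

print(f"energy density    = {energy_density/1e3:.0f} kJ/kg")   # ~242 kJ/kg, as in Table 4
print(f"power density     = {power_density:.0f} W/kg")         # ~726 W/kg, as in Table 4
print(f"pack for 66.0 kW  = {pack_66kw:.1f} kg, capacity ~{pack_energy:.0f} kJ")
print(f"pack for 99.8 kW  = {pack_99kw:.1f} kg")
```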

Electrical Systems

As shown in the Battery Analysis calculations, the Toshiba SCiBs have a power density that is more than sufficient for regenerative braking. As a result, neither the hydraulic system nor the frictional braking system is necessary in an electric vehicle equipped with the Toshiba SCiBs. A mechanical emergency brake should be installed to prevent accidents in case of regenerative braking system failure.

The simplest possible design is a plug-in electric or fuel cell vehicle with four in-hub motors. The simplified system is shown in Figure 7. As shown in Figure 7, the four in-hub wheel motors are directly connected to the wheels. With mechanical components such as the ICE, differentials, and the transmission removed, the vehicle weight can be reduced, and the efficiency of the whole drive train can be increased by at least a factor of 3 (Clean Urban Transport, 2009). Some efficiency values

(Achten, 2009; Valøen & Shoesmith, 2009) are provided in Table 5 for comparison. The 4 in-hub motor design also allows the vehicle to enjoy a very small turning radius and other advantages of 4WD vehicles, such as increased traction performance and precision handling. To validate the design, a simulation is done for the suggested system. Because of the mechanical components removed, a lighter car is selected for simulation. The selected vehicle is a Honda Civic, with a vehicle mass of 1246 kg (Complete Specifications, 2009) and a CdA value of 0.682 m2. The air drag of the vehicle can be calculated using Equation (9) (Larminie & Lowry, 2003)

\[ F_{ad} = \frac{1}{2} \rho \cdot C_d \cdot A \cdot v^{2} \quad (9) \]

where F_ad is the drag force, ρ is the air density, C_d is the drag coefficient, A is the cross-sectional area of the vehicle facing the front, and v is the velocity of the vehicle. The rolling friction of the vehicle can be obtained using Equation (10) (Larminie & Lowry, 2003).

\[ F_{rr} = \mu_{rr} \cdot m \cdot g \quad (10) \]

Figure 7. Selected Electric Car architecture.

Component                  Approx. Efficiency
ICE                        20%
Transmission (automatic)   85%
Transmission (manual)      92% to 97%
Differential               90%
Motor                      90%
Battery recharge           80% to 90%

Table 5. Efficiency values of integral components of ICE drive train and electric drive train.

where F_rr is the rolling friction force and μ_rr is the rolling friction coefficient, which is assumed to be 0.015 (for a radial ply tire) (Larminie & Lowry, 2003); m is the mass of the vehicle, and g is the acceleration due to gravity. As in-hub motor specifications are not readily available, the four in-hub motors are assumed to resemble the Tesla Roadster drive train system (2010 Tesla Roadster, 2009), which only involves a motor and a fixed gear set with a gear train ratio of 8.28. The 375 V AC motor has a 215 kW peak power and 400 Nm of torque. All the parameters, including Equation (9) and Equation (10), are included in the simulation.
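For reference, the sketch below evaluates equations (9) and (10) as a steady-speed road-load power for the simulated Honda Civic parameters (m = 1246 kg, CdA = 0.682 m2, μrr = 0.015). The air density value is an assumption, since it is not quoted in the text.

```python
# Road-load sketch using equations (9)-(10) with the Honda Civic parameters given above.
rho   = 1.2      # air density (kg/m^3) -- assumed, not stated in the text
CdA   = 0.682    # drag coefficient times frontal area (m^2)
m     = 1246.0   # vehicle mass (kg)
g     = 9.81     # gravitational acceleration (m/s^2)
mu_rr = 0.015    # rolling resistance coefficient (radial-ply tire)

def road_load_power(v):
    """Steady-speed power (W) needed to overcome drag and rolling resistance at speed v (m/s)."""
    f_drag = 0.5 * rho * CdA * v**2      # equation (9)
    f_roll = mu_rr * m * g               # equation (10)
    return (f_drag + f_roll) * v

for kph in (30, 50, 90, 120):
    v = kph / 3.6
    print(f"{kph:3d} km/h: road-load power ~ {road_load_power(v)/1e3:.1f} kW")
```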

The model simulates the vehicle attempting to follow the ECE Driving Schedule shown in Figure 1. The con-troller assumed is a PID controller with a Kp of 10.4, Ki of 0.546, and a Kd of -0.386 (MATLAB Simulink tuned for 2.06 second response time). The resultant ve-locity trajectory and the power variation are shown in Figure 8 and Figure 9 respectively

As expected, the power stays below a value of 66kW, and there are negative values in the time versus power plot as deceleration is involved in the ECE 15 Driving Schedule. Because of the losses, the negative peaks that correspond to braking power have a magnitude that is relatively smaller than the positive peaks that corre-spond to accelerating power. From the simulation, the highest braking power observed is a little over 10 kW, which can easily be fully captured using the 70kg of SCiB battery packs (refer to Battery Analysis for cal-culations). Simulink scopes are added to the system to observe the energy change with and without regenera-tive braking systems. The results are shown in Figure 10 and Figure 11.

Comparing Figures 10 and 11, we can observe a 25% energy saving for the system with regenerative braking. Tuning the PID controller can increase the energy savings value up to 30%. Using a better controller has the potential to increase the energy savings value further.
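A rough sense of where savings of this order come from can be had without the full Simulink model. The toy sketch below integrates the traction energy of a single accelerate-cruise-brake cycle with and without recovery of the negative (braking) power; the speed profile, time step, and 85% recovery efficiency are illustrative assumptions, and the resulting percentage depends strongly on the cycle chosen.

```python
# Toy estimate of regenerative-braking savings for one accelerate-cruise-brake cycle.
# This is not the paper's Simulink model; the profile and the 85% recovery efficiency
# are illustrative assumptions.
rho, CdA, m, g, mu_rr = 1.2, 0.682, 1246.0, 9.81, 0.015
dt, regen_eff = 0.1, 0.85

def speed(t):
    """Piecewise-linear profile: 0 -> 50 km/h in 15 s, cruise 60 s, brake to 0 in 10 s."""
    v_cruise = 50 / 3.6
    if t < 15:
        return v_cruise * t / 15
    if t < 75:
        return v_cruise
    if t < 85:
        return v_cruise * (85 - t) / 10
    return 0.0

e_no_regen = e_regen = 0.0
t = 0.0
while t < 85:
    v = speed(t)
    a = (speed(t + dt) - v) / dt
    force = m * a + 0.5 * rho * CdA * v**2 + mu_rr * m * g    # traction force demand
    power = force * v
    e_no_regen += max(power, 0.0) * dt                        # braking energy discarded
    e_regen    += (power if power > 0 else regen_eff * power) * dt
    t += dt

print(f"energy without regen: {e_no_regen/1e3:.0f} kJ")
print(f"energy with regen:    {e_regen/1e3:.0f} kJ")
print(f"saving: {100 * (1 - e_regen / e_no_regen):.0f}%")
```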

Figure 8. Vehicle Velocity Variation for Electric Drive System; x-axis: time (s); y-axis: velocity (mph).

Figure 9. X-axis: time (s); y-axis: power(W) plot for Electric Drive System Simulation.

Figure 10. Energy required to complete the ECE 15 driving cycle without regenerative braking; x-axis: time(s); y-axis: energy( J).

Recommended Future Work

The hydraulic drive system may deserve a more in-depth analysis. If the accumulator energy density can be improved, the hydraulic drive system may be a more environmentally friendly option than its electrical counterparts, as the production and disposal of batteries have detrimental effects on the environment. It is recommended that the simulation model be improved to better model the actual response of the HyDrid system. For the electrical system, efforts should be invested in the research of in-hub motors, which produce significantly less torque than a regular AC motor coupled with an 8.28 to 1 gear ratio gear train. Parameters within the Simulink model should be selected to better represent in-hub motors, and the batteries should be modeled in greater detail, as different arrangements of the cells will result in different power densities. Losses involved with the electrical components should also be investigated. One challenge that the electric drive system should overcome is the energy density issue of the batteries. The energy density of a battery is significantly lower than that of gasoline. Efforts should also be invested in technologies related to battery recycling.

Conclusion

An attempt was made to design a compact car drive system to address the charging power density challenge faced by electric hybrid vehicles and the energy density challenge faced by hydraulic hybrid vehicles. The initial approach to solve the problem was to incorporate an electrical system in an existing hydraulic hybrid system. The INNAS HyDrid was used as the foundation architecture for analysis. Simulations were performed to understand the control dynamics of the HyDrid. After performing quantitative analysis on hydraulic accumulators it was confirmed that hydraulic accumulators cannot provide a sufficient energy density for braking

Figure 11. Energy required to complete the ECE driving cycle with regenerative braking; x-axis: time(s); y-axis: energy( J).

energy storage. In one of the intermediate designs, electrical accumulators were introduced into the system to capture excess energy that cannot be captured by the hydraulic system. The Sony LFP and the Toshiba SCiB were considered. The Toshiba SCiB was chosen as a result of its superior charging power density performance. Upon further analysis, it was concluded that the batteries have a sufficient charging power density to capture braking power. It was then suggested that the electric system can fully replace the hydraulic components, the ICE drive train, and the frictional braking system. With the convoluted hybrid system, which consists of many inefficient components, replaced by a simple electric-only drive train, the vehicle drive train efficiency can be increased. An electrical system was simulated. The simulated models showed energy savings of around 25-30% with regenerative braking. The final drive system design consists of an electric/fuel cell vehicle with four in-hub motors.

References

Highway Statistics. (2007). Washington, D.C.: Federal Highway Administration. Retrieved from http://www.fhwa.dot.gov/policyinformation/statistics/2007/.

Introducing the Chevrolet Volt. (2010) Retrieved December 5, 2009, from http://www.chevrolet.com/pages/open/default/future/volt.do

Honda at the Geneva Motor Show. Retrieved December 5, 2009, from http://world.honda.com/news/2009/c090303Geneva-Motor-Show/

Larminie, J., & Lowry, J. (2003). Electric Vehicle Tech-nology Explained. Chichester: John Wiley & Sons Ltd.

Driver’s Manual. (2009). Government of Georgia. Retrieved from http://www.dds.ga.gov/docs/forms/FullDriversManual.pdf.

Achten, P.A.J. (2007). Changing the Paradigm. Paper presented at the Proc. of the Tenth Scandinavian Int. Conf. on Fluid Power, SICFP'07, Tampere, Finland.

HyDrid. Retrieved December 5, 2009, from http://www.innas.com/HyDrid.html

Achten, P.A.J. (2002). Dedicated design of the hydrau-lic transformer. Paper presented at the Proc. IFK.3, IFAS Aachen.

Vael, G.E.M., Achten, P.A.J., & Fu, Z. (2000). The Innas Hydraulic Transformer, the Key to the Hydro-static Common Pressure Rail. Paper presented at the International Off-Highway & Powerplant Congress & Exposition, Milwaukee, WI, USA.

Accumulators Catalog. (2005). In Eaton (Ed.), Vickers.

Sony Launches High-power, Long-life Lithium Ion Secondary Battery Using Olivine-type Lithium Iron Phosphate as the Cathode Material. (2009). News Re-leases Retrieved December 5, 2009, from http://www.sony.net/SonyInfo/News/Press/200908/09-083E/index.html

Toshiba to Build New SCiB Battery Production Facility. (2008). News Releases Retrieved Decem-ber 5, 2009, from http://www.toshiba.co.jp/about/press/2008_12/pr2401.htm

Berdichevsky, G., Kelty, K., Straubel, J.B., & Toomre, E. (2006). The Tesla Roadster Battery System. In Tesla Motors (Ed.).

Electric Vehicles. (2009). Retrieved from http://ec.europa.eu/transport/urban/vehicles/road/electric_en.htm.

Valøen, L.O., & Shoesmith, M.I. (2007). The ef-fect of PHEV and HEV duty cycles on battery and battery pack performance. Paper presented at the Plug-in Hybrid Electric Vehicle 2007 Conference, Winnipeg, Manitoba. http://www.pluginhighway.ca/PHEV2007/proceedings/PluginHwy_PHEV2007_PaperReviewed_Valoen.pdf

Complete Specifications. Civic Sedan Retrieved De-cember 5, 2009, from http://automobiles.honda.com/civic-sedan/specifications.aspx

Performance Specifications. (2010). Tesla Roadster Re-trieved December 5, 2009, from http://www.teslamo-tors.com/performance/perf_specs.php

A switchable solvent is a solvent capable of reversing its properties between a non-ionic liquid and an ionic liquid, which is highly polar and viscous. Switchable solvents have applications for the Heck reaction, which is the chemical reaction of an unsaturated halide with an alkene in the presence of a palladium catalyst to form a substituted alkene. The objective of this research was to apply a switchable solvent system to the Heck reaction in order to optimize reaction and separations by eliminating multiple reaction steps. Switchable solvents reduce the need to add and remove multiple solvents because they are capable of switching properties and dissolving both the inorganic and organic components of the reaction. This reversal of chemical properties by a switchable solvent provides for easier separation of the product, minimizes cost by eliminating the need for multiple solvents, and reduces the overall environmental impact of the industrial process. Specifically, the cost is lowered by the ability of the catalyst and solvent to be recycled from the system. In addition, the "switch" that initiates the formation of the ionic liquid is carbon dioxide, which is cheap and nontoxic. In conclusion, we were able to use a switchable solvent system to obtain good product yields of E-stilbene, the desired product of the Heck reaction, and to recycle the remaining catalyst and solvent, which also produced good product yields at a lower economic and environmental cost.

Switchable Solvents: A Combination of Reaction & Separations

Georgina W. Schaefer
School of Chemical and Biomolecular Engineering
Georgia Institute of Technology

Advisor: Charles A. Eckert
School of Chemical and Biomolecular Engineering
Georgia Institute of Technology

Introduction

A common problem for chemical synthesis is the reaction of an inorganic salt with an organic substrate, which is an important reaction in the production of many industrial chemicals and pharmaceutical products. Typically, a phase transfer catalyst (PTC), such as a quaternary ammonium salt, is used and must subsequently be separated from the product after the reaction has proceeded. However, the separation of a PTC from the product is very difficult. In fact, solvents such as dimethyl sulfoxide (DMSO) or ionic liquids, liquid salts at or near room temperature, that are capable of dissolving both the organic and inorganic components of the reaction still inhibit simple separation of the product from the catalyst (Heldebrant et al., 2005).

Now imagine a smart solvent that can reversibly change its properties on command through a built in “switch”. Our goal in designing such a solvent is to minimize the economic and environmental impact of such industrial processes while creating a solvent that remains highly polar. These solvents are able to dissolve both the or-ganic and inorganic components of the reaction while highly polar and then change properties for easier sepa-ration and effective product isolation after the reaction is complete.

Switchable solvent systems are capable of doing just that. These systems involve a non-ionic liquid, an alco-hol and amine base, which can be converted to an ionic liquid upon exposure to a “switch”. The switch chosen to induce this change in solvent properties is carbon dioxide. CO2 reacts with the alcohol-amine mixture to form an ammonium carbonate. Furthermore, it is cheap, readily available, benign, and easily removed by heating and purging with nitrogen or argon. Switchable solvent systems therefore should facilitate chemical syn-theses involving reactions of inorganic salts and organic substrates by eliminating the need to add and remove different solvents after each synthetic step in order to

achieve different solvent properties (Heldebrant et al., 2005; Phan et al., 2007).

Industrial chemical production usually requires multiple reaction and separation steps, each of which usually requires the addition and subsequent removal of a different solvent. For example, the synthesis of Vitamin B12 is achieved in 45 steps. The application of switchable solvent systems to industrial production processes of major chemicals and pharmaceuticals would significantly lower the associated pollution and cost of these processes by eliminating the need to add and remove multiple solvents for each reaction step (Heldebrant et al., 2005; Phan et al., 2007).

Project Description

Switchable solvents convert between a non-ionic liquid, which has varying polarity, and an ionic liquid whose properties include higher polarity and higher viscosity. As discussed in previous research, the ideal properties of the solvent as a reaction medium include a usable liquid range, chemical stability, and the ability to dissolve both organic species and inorganic salts. In terms of the solvent's role in separations, the solvent should be decomposable at moderate conditions with a reasonable reaction rate, the decomposition products should have very high or very low vapor pressures, and recombination to form the solvent should be relatively easy. Our principal aims in designing a switchable solvent system to optimize reactions and separations were to eliminate multiple reaction steps, reverse solvent properties to facilitate better separations, and minimize the cost and environmental impact by optimizing catalyst and solvent recycle (Heldebrant et al., 2005; Xiao, Twamley, & Shreeve, 2004; Phan et al., 2007).

Ionic liquids have gained popularity in their technolog-ical applications as electrolytes in batteries, photoelec-trochemical cells, and many other wet electrochemical

devices. They are particularly attractive solvents because of dramatic changes in properties such as polarity, which may be elicited through a “switch”. On the other hand, changes in conditions such as temperature and pressure usually can only elicit negligible to moderate changes in a conventional solvent’s properties making the use of multiple solvents for a single process necessary. In addi-tion, ionic liquids have low vapor pressures essentially eliminating the risk of inhalation. In particular, guani-dinium-based ionic liquids have low melting points and good thermal stability, properties which make these high nitrogen materials attractive alternatives for ener-getic materials. (Xiao, Twamley, & Shreeve, 2004; Gao, Arritt, Twamley, & Shreeve, 2005).

Our research focused on the application of switchable solvents for the Heck reaction in order to optimize the reaction and separation. Specifically, we studied the reaction of bromobenzene and styrene in the presence of a palladium catalyst (PdCl2(TPP)2) and base to form E-stilbene, an important pharmaceutical intermediate in the production of many anti-inflammatories (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Xiao, Twamley, & Shreeve, 2004).
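As an aside on how the yields reported later are typically computed, the sketch below gives a simple percent-yield helper for this coupling. The molar masses are standard values; the 2.0 mmol scale and 0.30 g product mass are illustrative numbers, not the actual experimental charges, which are not listed here.

```python
# Percent-yield helper for the Heck coupling of bromobenzene and styrene to E-stilbene.
# The scale and isolated mass below are illustrative, not the actual experimental values.
MW_BROMOBENZENE = 157.01   # g/mol
MW_E_STILBENE   = 180.25   # g/mol

def percent_yield(mmol_limiting, product_mass_g):
    """Yield relative to the limiting reagent (bromobenzene), assuming 1:1 stoichiometry."""
    theoretical_g = mmol_limiting / 1000 * MW_E_STILBENE
    return 100 * product_mass_g / theoretical_g

print(f"theoretical E-stilbene at 2.0 mmol scale: {2.0 / 1000 * MW_E_STILBENE:.3f} g")
print(f"percent yield for 0.30 g isolated:        {percent_yield(2.0, 0.30):.0f}%")
```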

As illustrated in Figure 3, the non-ionic liquid can be "switched" to an ionic liquid by exposure to carbon dioxide and reversed back to a non-ionic liquid by exposure to argon or nitrogen. The reaction of bromobenzene and styrene in the presence of palladium catalyst is run in the highly polar ionic liquid, which is able to dissolve both the organic and inorganic components of this system. The ionic liquid is a particularly effective medium for this reaction in that it is able to immobilize the palladium catalyst while preserving the overall product yield. In addition, ionic liquids are nonvolatile, non-flammable, and thermally stable, making them an attractive replacement for volatile, toxic organic solvents (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Xiao, Twamley, & Shreeve, 2004; Phan et al., 2007).

Figure 1. The Heck reaction of bromobenzene and styrene in the presence of a palladium catalyst and ionic liquid.

Figure 2. Synthesis of the palladium catalyst used in the Heck reaction (Figure 1).

Figure 3. Ionic liquid formation; note the reversibility of this reaction under argon.

Figure 4. Switchable solvent system composed of 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU) and hexanol.


In a previous study, a guanidine acid-base ionic liquid was determined to be an effective medium for the palladium-catalyzed Heck reaction of bromobenzene and styrene. The guanidine acid-base ionic liquid served simultaneously as solvent, ligand, and base in this reaction, with the guanidine acting as a strong organic base able to complex with the Pd(II) salt and to replace a phosphine ligand. High activity, selectivity, and reusability were all observed under moderate conditions, and thus we expect our system to function best at similar conditions (Li, Lin, Xie, Zhang, & Xu, 2006; Gao, Arritt, Twamley, & Shreeve, 2005).

Once the reaction has been run in the ionic liquid, the homogeneous product/ionic liquid solution is extracted, separating the system into two phases: a product phase and a catalyst/ionic liquid phase. After the product is removed by extraction with heptane, the system is exposed to argon and the ionic liquid is reversibly switched back to a non-ionic liquid, whereupon the system again separates into two phases: a precipitated byproduct salt and a catalyst/non-ionic liquid solution. In this final stage the catalyst and solvent may be recovered and recycled back into the process.

Our studies of the palladium-catalyzed Heck reaction of bromobenzene and styrene were performed in a 10 mL windowed autoclave.

Figure 5. Process diagram for the Heck reaction performed in a switchable solvent system.


First, the catalyst solution was added and its solvent removed under vacuum. Next, the ionic liquid, bromobenzene, and styrene were added, and the system was pressurized. The autoclave was then left stirring and heating for three days until the reaction was complete. After three days, the autoclave was allowed to cool and depressurize. Once the system had returned to room temperature and atmospheric pressure, the homogeneous ionic liquid/product/catalyst solution was extracted from the autoclave with heptane under carbon dioxide in order to maintain the ionic liquid form. The product-containing heptane phase was then removed, and the remaining ionic liquid/catalyst phase was exposed to argon and heat. Upon exposure to argon and heating, the byproduct salt precipitated out of the catalyst/reversed non-ionic solvent solution. The catalyst and solvent solution was then separated from the salt byproduct and recycled. Any product remaining in the autoclave after the heptane extraction was extracted with dichloromethane (DCM) for later mass balance calculations.
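The percent yields reported in the next section follow the usual definition, written out below for this workup. The split of recovered product between the heptane and DCM extracts follows the procedure above; treating bromobenzene as the limiting reagent is our assumption, since the text does not state the charge ratio.

\[
\%\,\text{yield} \;=\;
\frac{n_{E\text{-stilbene (heptane)}} \;+\; n_{E\text{-stilbene (DCM)}}}{n_{\mathrm{PhBr,\ charged}}}
\times 100\%
\]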

results and ConClusions
The Heck reaction was optimized by running it under various temperature and pressure conditions. In order to assess the success of each system, the conversion percentages were compared.

Based on Figure 6, the optimal conditions for product formation and catalyst and solvent recovery were a temperature of 115˚C and a pressure of 30 bar. The reactions run at these conditions gave the highest observed yields, 83% and 84%, and these results were repeatable. However, product yields at these conditions also showed high variability, which can be attributed to the shortcomings of the extraction method for this system. Extracting with large amounts of heptane (greater than 50 mL) led to higher product yields (greater than 60%), whereas extracting with less heptane (10-50 mL) led to lower product yields at these conditions because product was left behind in the system. There was therefore a tradeoff between the amount of heptane used in the extraction to recover the product, which adds to the overall cost of the process, and the amount of product we were able to recover from the system.

Figure 6. Palladium (PdCl2(TPP)2)-catalyzed Heck reaction of bromobenzene and styrene in a switchable ionic liquid system. The catalyst and ionic liquid solution used in the 55%-yield reaction run at T=115˚C and P=30 bar was recycled from the 83%-yield reaction run at the same conditions, demonstrating that the catalyst in the ionic liquid remained active and that the recycle was successful.



The catalyst in solvent was also extracted from the reaction that had an 83% product yield and recycled for a second reaction performed at the same conditions. The observed yield of 55% from the recycle reaction demonstrates that the recycled catalyst's activity was preserved and that the solvent was able to "switch" back to an ionic liquid a second time to run the reaction successfully. In general, the reactions with lower percent yields suffered from overly harsh reaction conditions, such as temperatures and/or pressures that were too high. We predict that the palladium catalyst loses activity, and perhaps decomposes, at higher temperatures and pressures, which accounts for the lower yields under harsher conditions. In the reactions with higher percent yields, the ionic liquid and catalyst extracted from the system were usually light yellow in color, whereas in the reactions with lower percent yields they were brown or black, indicating catalyst decomposition. In particular, two of the reactions run at T=140˚C and P=50 bar, which had product yields of 6% and 9% respectively, contained black particles within the catalyst and solvent mixture. Given these extremely low percent yields, we believe the palladium catalyst complex decomposed and that the black particles were possibly palladium nanoparticles (palladium black).

In order to improve this system, further investigation of the extraction methods should be performed to find a work-up method that minimizes product loss by maximizing the amount of product recovered in the extraction. The variability of the product yield results is a reflection of this difficult extraction procedure. In our work, we extracted with 10 to 50 mL of heptane because of its low boiling point, which made it easy to remove, by heating, any heptane left in the system after the product had been extracted. To improve the extraction method, different solvents could also be tried for this extraction. In addition, it was often difficult to extract all of the products and reactants from the autoclave due to the viscosity of the ionic liquid; a better extraction solvent would facilitate this step and minimize product, catalyst, and solvent loss. Another difficulty we encountered with our system was evaporating the solvent out of the catalyst solution before adding the other reactants. Changing this solvent from chloroform to toluene was attempted, but toluene was even more difficult to evaporate from the system. The solution we found was to dissolve the catalyst directly in 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU); once the catalyst and DBU solution was completely homogeneous, hexanol was added and the mixture was exposed to carbon dioxide to convert it to an ionic liquid, which was then added directly to the system. NMR spectra of the catalyst in DBU and hexanol were taken to verify that the catalyst's activity was preserved in this solution by confirming that its molecular structure remained unchanged. In addition, the percent yields of the reactions run with this method confirm that the palladium catalyst retains its activity in the DBU-hexanol solution. This method also eliminates the addition of an extra solvent and its difficult removal from the system.

In conclusion, the application of switchable solvents to the Heck reaction of bromobenzene and styrene was successful in optimizing product (E-stilbene) yield and catalyst and solvent recovery, giving a readily recyclable system. E-stilbene is an important intermediate in the synthesis of some pharmaceuticals, such as anti-inflammatories, and is used in the manufacture of dyes and optical brighteners. A switchable solvent system could therefore be implemented for the chemical synthesis of Heck reaction products in order to reduce the economic and environmental impact of this industrial process (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Heldebrant et al., 2005).


referenCes
Heldebrant, D. J.; Jessop, P. G.; Thomas, C. A.; Eckert, C. A.; Liotta, C. L. The Reaction of 1,8-Diazabicyclo[5.4.0]undec-7-ene (DBU) with Carbon Dioxide. Journal of Organic Chemistry 2005, 70, 5335-5338.

Heldebrant, D. J.; Jessop, P. G.; Li, X.; Eckert, C. A.; Liotta, C. L. Green Chemistry: Reversible Nonpolar-to-Polar Solvent. Nature 2005, 436, 1102.

Xiao, J.-C.; Twamley, B.; Shreeve, J. M. An Ionic Liquid-Coordinated Palladium Complex: A Highly Efficient and Recyclable Catalyst for the Heck Reaction. Organic Letters 2004, 6 (21), 3845-3847.

Li, S.; Lin, Y.; Xie, H.; Zhang, S.; Xu, J. Brønsted Guanidine Acid-Base Ionic Liquids: Novel Reaction Media for the Palladium-Catalyzed Heck Reaction. Organic Letters 2006, 8 (3), 391-394.

Phan, L.; Chiu, D.; Heldebrant, D. J.; Huttenhower, H.; John, E.; Li, X.; Pollet, P.; Wang, R.; Eckert, C. A.; Liotta, C. L.; Jessop, P. G. Switchable Solvents Consisting of Amidine/Alcohol or Guanidine/Alcohol Mixtures. American Chemical Society, 2007.

Gao, Y.; Arritt, S. W.; Twamley, B.; Shreeve, J. M. Guanidinium-Based Ionic Liquids. Inorganic Chemistry 2005, 44 (6), 1704-1712.


sUbmission gUidelines
The Tower accepts papers from all disciplines offered at Georgia Tech. Research may discuss empirical or theoretical work, including, but not limited to, experimental, historical, ethnographic, literary, and cultural inquiry. The journal strives to appeal to readers in academia and industry. Articles should be easily understood by bachelor's-educated individuals of any discipline. Although The Tower will review submissions of highly technical research for potential inclusion, submissions must be written to educate the audience rather than simply report results to experts in a particular field. Original research must be well supported by evidence, arguments must be clear, and conclusions must be logical. More specifically, The Tower welcomes submissions under the following three categories: articles, dispatches, and perspectives.

forMatting
Articles
An article represents the culmination of an undergraduate research project, in which the author addresses a clearly defined research problem from one, or sometimes multiple, approaches.

A properly formatted article must: be between 1500 and 3000 words (not including title page, abstract, and references); include an abstract of 250 words or less; and have at least the following sections:

• Introduction/Background Information
• Methods & Materials/Procedures
• Results
• Discussion/Analysis
• Conclusion
• Acknowledgements

Dispatches
A dispatch is a manuscript in which the author reports recent progress on a research challenge that is relatively narrow in scope but critical toward his or her overall research aim.

A dispatch should: not be more than 1500 words (not including title page and references); and have at least the following sections:

• Introduction/Background Information
• Methods & Materials/Procedures
• Results
• Discussion/Analysis
• Future Work
• Acknowledgements, as a separate page

Perspectives
A perspective reflects active scholarly thinking in which the author provides personal viewpoints and invites further discussion on a topic of interest through literature synthesis and/or logical analysis.

A perspective should: not be more than 1500 words (not including title page and references); and address some of the following questions: Why is the topic important? What are the implications (scientific, economic, cultural, etc.) of the topic or problem? What is known about this topic? What is not known about this issue? What are possible methods to address this issue?

General
The following formatting requirements apply to all types of submissions and must all be satisfied before a submission will be reviewed. All papers must: adhere to APA formatting guidelines as specified in the Publication Manual of the American Psychological Association, 5th ed. (Washington, DC: American Psychological Association, 2001); be submitted in Microsoft Word format; be set in 12-point Times New Roman font, double-spaced; not include identifying information (name, professor, department) in the text, reference section, or on the title page (papers will be tracked by special software that keeps author information separate from the paper itself); be written in standard U.S. English; utilize standard scientific nomenclature; define new terms, abbreviations, acronyms, and symbols at their first occurrence; acknowledge any funding, collaborators, and mentors; not use footnotes (if footnotes are absolutely necessary to the integrity of the paper, please contact the AESR at [email protected]); reference all tables, figures, and references within the text of the document; adhere to the Georgia Institute of Technology Honor Code regarding plagiarism and proper referencing of sources; and keep direct quotations to an absolute minimum, paraphrasing unless a direct quote is absolutely necessary.

deadlines
Submissions are accepted on a rolling basis throughout the year. The Tower publishes one issue per semester. Due to the review and production process, submissions must be received before the publicized deadline, which can be found at gttower.org, to be considered for a given issue. Submissions received after this deadline will be considered for the following issue. If a submission's quality would be compromised in the attempt to meet the deadline, authors are encouraged to further develop their work and submit it only once it is fully realized.

eligiBility
Submitters must be enrolled as undergraduate students at the Georgia Institute of Technology to be eligible for publication. Authors have up to three months after graduation to submit papers regarding research completed as an undergraduate. The principal investigator may not be included among the co-authors.

suBMitting
To submit a paper, authors must register on our Online Journal System (OJS) at http://ejournals.library.gatech.edu/tower. Once the author fills out the required information and registers as an author, he or she will have access to the submission page to begin the multi-step submission process.

For more detailed submission guidelines, as well as current deadlines and news, please visit gttower.org.

