
conference reports

14th USENIX Security Symposium

Baltimore, Maryland
July 31–August 5, 2005

Keynote Address

Computer Security in the Real World

Butler W. Lampson

Summarized by Stefan Kelm

As in the past, this year’s keynote was given by someone well versed in dealing with security issues. Butler Lampson opened his talk by comparing real-world security to computer security. Real-world security is not usually about locking things (or people) up but, rather, is about risk, locks, and deterrence. Risk management, Lampson argued, is important there, since the main issue often is how to recover from an incident at an acceptable cost. Part of this is accountability: unless you can identify the bad guy, you will not be able to deter him. Accountability needs to be enforced at the “end nodes,” i.e., “all trust is local.”

Senders of network packets need to be held accountable for their actions. ISPs, for example, should cooperate when trying to stop DDoS attacks. “How much security?” Lampson asked, and argued that the main goal should be feasible security, stating that “perfect security is the worst enemy of real security.” Applications or operating systems must not become unusable due to bad user interfaces.

Lampson then began a lengthy and fairly technical discussion on access control. His main example was that of someone wanting to access a Web page securely. Authentication and authorization are very often confused, he said, but need to be clearly differentiated. He said that fine-grained access control was a mistake. Moreover, there is a need for solid auditing mechanisms, which one especially needs for deterrence.

He also discussed secure channels, which in his usage do not refer to physical network channels or paths but to a more general concept. He provided a few examples, such as SDSI/SPKI and ACLs. Closely related is the issue of securely authenticating programs upon loading. Being with Microsoft, Lampson brought up NGSCB/TPM and surprised the audience by saying that “it’s been put on the shelf” (“I do not believe in the DRM stuff at all,” he said), especially since nobody has figured out how to keep the TCB small, a key requirement.

Some of the questions and answers focused on access control and the problems of humans giving away their identity. Curiously enough, Lampson’s reply to one question, “If you want your machine to be moderately secure you need some form of remote administration,” seems to contradict his earlier “all trust is local” statement. To a question about being sure one’s configuration is correct, Lampson replied, laughing, “You want perfection and you’re not gonna get it!”

His talk can be downloaded at http://www.usenix.org/events/sec05/tech/lampson.pdf.

For more information, see his home page at http://research.microsoft.com/lampson.

THANKS TO THE SUMMARIZERS

Kevin Butler, Ming Chow, Jonathon Duerig, Serge Egelman, Boniface Hicks, Francis Hsu, Stefan Kelm, and Mohan Rajagopalan

CONTENTS OF SUMMARIES

Keynote Address; Refereed Papers and Panels (Wednesday, Thursday, Friday); Invited Talks (Wednesday, Thursday, Friday); Best Paper Winners (Best Paper, Best Student Paper); Work-in-Progress Reports

Refereed Papers

SECURING REAL SYSTEMS

Summarized by Kevin Butler

An Analysis of a Cryptographically Enabled RFID Device

Steve Bono, Matthew Green, Adam Stubblefield, and Avi Rubin, Johns Hopkins University; Ari Juels and Michael Szydlo, RSA Laboratories

Awarded Best Student Paper!

Steve Bono presented his group’s work on analyzing Texas Instruments’ (TI) Digital Signature Transponder (DST). This is a passively powered device used in vehicle immobilizers by automobile manufacturers such as Ford. It is also used in the ExxonMobil Speedpass, a device that can be used in lieu of cash or credit cards at gas pumps. The DST provides security based on a challenge-response protocol: a 40-bit challenge is issued from the reader to the transponder, and a 24-bit response is returned by the transponder, along with its 24-bit serial number. The serial number can only be written by the manufacturer, and the response is computed from the challenge under a 40-bit secret key.

Bono outlined the methodology used to examine the security of the DST system. The team set out to discover whether it was possible to recover the proprietary secret algorithm used by the device, purchasing an evaluation kit from TI and testing the device with structured bit patterns for the challenges issued. A diagram published by TI on the protocol was used as a general schematic to verify against; through experimentation, the group verified the diagram and made tables outlining the operation of the substitution boxes therein. In this manner, the entire cipher was uncovered.

The 40-bit key used was found to be small enough to be vulnerable to a brute-force attack. While general-purpose CPUs proved to be slow, requiring about 31 days to uncover the key, the JHU team put together 16 FPGAs in parallel and were able to uncover the key in about 35 minutes. Real-world applications were shown by using the evaluation kit in a briefcase and getting close enough to a person to retrieve the response from a challenge, effectively making it possible to scan victims for the RFIDs. Additionally, the team built a transponder to circumvent an engine immobilizer and spoofed a Speedpass signal to purchase gasoline. To their surprise, there was little pushback from Ford, who made some phone calls but no legal threats, or TI, who did not want proprietary information published but did not threaten to sue.

It was noted during the Q&A thatthe cost of the FPGAs used in theattack have dropped to $150 each,making this even more economi-cally feasible. Bono expanded onthis by observing that the decoderchip itself cost a mere $12. Rik Far-row asked how much cryptanalysiswas performed to uncover the algo-rithm, and Bono responded thatbecause the key was so weak, nocryptanalysis was necessary at thetime, although it was performedformally when the protocol wasbroken.

Stronger Password Authentication Using Browser Extensions

Blake Ross, Collin Jackson, Nick Miyake, Dan Boneh, and John C. Mitchell, Stanford University

Collin Jackson presented the password “phishing” problem, where users cannot reliably identify fake sites set up for purposes of stealing credit card and other identity data. In particular, the problem of protecting passwords used in multiple venues was addressed. Some passwords are used for low-security sites, such as high school reunions, while others, oftentimes the same password, are used for sites requiring high security, such as banks, where revelation of the password has drastic consequences. If the same password is used at both types of sites, breaking a low-security site could reveal the password to a high-security site. Jackson and his group investigated ways, as transparent to the end user as possible, to ensure that high-security passwords were not revealed.

The solution proposed, called PwdHash, is a lightweight browser extension. It generates a unique per-site password that is a hash of the password employed and the domain name of the Web site visited. This provides a modicum of protection against phishing, as the HMAC computed for a spoofed site will differ from the one sent to the real site, due to the different domain names. While other password-hashing schemes exist, Jackson asserted that PwdHash was the only one that remained invisible to the user. One particular problem not addressed by many solutions, however, is the spoofing problem, where a malicious site employs JavaScript or Flash in such a manner that the user thinks he is entering information into an encrypted password field, but the password is sent in the clear, circumventing the hashing mechanism. To handle this, the tool is set up so that the original password never touches the Web site itself: keystrokes are intercepted by the browser extension, and the hashed result is sent to the site. A password prefix (in this case, “@@”) is used to activate the browser extension. This is the best method for securing users, as they never have to decide when to make a trust decision.
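The core idea fits in a few lines. The following is a minimal sketch of the hashing step only, under the assumption that the extension computes an HMAC of the master password over the site’s domain; the real extension also adapts its output to per-site password policies and uses a more careful notion of “domain.”

import base64
import hashlib
import hmac
from urllib.parse import urlparse

def site_password(master_password: str, url: str) -> str:
    # Reduce the URL to a domain, then derive a site-specific password.
    domain = urlparse(url).hostname or url
    digest = hmac.new(master_password.encode(), domain.encode(),
                      hashlib.sha1).digest()
    # Encode and truncate so the result is typable as a password.
    return base64.b64encode(digest).decode()[:12]

# The same master password yields unrelated passwords per domain, so a
# phishing domain receives a hash that is useless at the real site:
print(site_password("hunter2", "https://www.bank.com/login"))
print(site_password("hunter2", "https://bank-login.example.net/"))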

Challenges in this scheme include password resets, use in Internet cafes, and dictionary attacks. Jackson clarified that this tool does not protect against spyware or DNS poisoning. To allow password resets, the user must enter the unhashed password into a change page. Use of the password prefix facilitates this, however, as the prefix ensures that old passwords will not be hashed and new ones automatically will be. Because users cannot install the software at Internet cafes, an interim solution set up by the authors is to create the hashes from a secure Web page (http://www.pwdhash.com). It was asserted that dictionary attacks work about 15% of the time, so even if the hashed password was retrieved from a low-security site and the attacker knew the domain name, the odds of recovering the password are much lower than the 100% rate currently achievable. The ultimate solution would be to use a better authentication protocol.

In the Q&A period, a question wasraised about how to handle policyrequirements for different sites(e.g., minimum number of pass-word characters and use of num-bers or caps). Jackson respondedthat the best way would be to createa policy repository for all sites.Another way is to look at the userpassword itself, but this gives upsome security. A following questionraised concerns about Javascriptfocus-stealing attacks, where a usercould think they are using theextension but the keystrokes arebeing hijacked by a script. This is adifficult problem to solve, but, the-oretically, one could find all ways inwhich focus-stealing may occurand eliminate them; using longerpasswords is also beneficial.Another question had to do withuser-interface issues. The groupfound that, above all, end usersfavored simplicity and ease of useover any other factor.

Cryptographic Voting Protocols: A Systems Perspective

Chris Karlof, Naveen Sastry, and David Wagner, University of California, Berkeley

An analysis of two new cryptographic voting schemes was presented by Chris Karlof. DRE (direct recording electronic) voting machines are popular for a variety of reasons, such as their ability to display multiple languages, their allowance for disabled people to vote more easily, and their provision of quick counts. However, the software and hardware must be fully trusted and the process transparent, none of which is guaranteed by current DREs. A voter-verified audit trail (VVAT) can be provided by issuing a paper receipt. Election officials can use these to verify recounts, but individual voters cannot verify their vote. David Chaum and Andrew Neff have each proposed verifiably cast-as-intended protocols, where the voter can later check that their vote was recorded as they cast it. The ballot is encrypted but can be verified later on public bulletin boards by the voter. The analysis from Karlof’s group focused on Neff’s scheme but is applicable to Chaum’s as well. The DRE makes a pledge that the row chosen by the voter on the ballot (where a row consists of a certain pattern of 1s and 0s) is the one they chose, and later the voter can match the candidate openings with the pledge made. To circumvent vote-buying, all candidate rows on the ballot are opened, not just the one corresponding to the chosen selection.

Karlof explained that both protocols were subject to information leakage through subliminal channels. The DRE can embed information within the pledge values, constructing a ballot where a certain bit pattern indicates the user’s choice. Someone knowing the encoding pattern could then look at a ballot and know whom the voter selected, threatening privacy. An analysis of the attack found that, in the worst case, it was possible to encode up to 51KB per ballot through a subliminal channel, enough to provide plentiful information on the voter. The only solution appears to be making the ballot preparation more deterministic. Another possible attack is to use humans as cryptographic agents. Humans are not generally good at detecting subtle deviations, and a DRE can produce a false ballot that looks essentially similar to what the voter would expect. Because the protocols specify that the user makes his pledge before the DRE offers a challenge, the DRE is susceptible to cheating, as it can offer a receipt with small differences that the user will ignore. There is no clear mitigation strategy other than user education and testing during elections.

Finally, these schemes can only detect DoS attacks, not mitigate them, though that is still better than what DREs are capable of doing today. A simple attack from which recovery is impossible is to plant a trojan horse in every DRE, such that, nationwide, the machines selectively delete ballots and perform ballot stuffing. Alternately, a machine can deny service selectively, such as only when a chosen candidate is losing. Such activities would be enough to cast entire elections in doubt, representing a threat to the entire voting system. Flexible recovery strategies, including the use of VVATs, are required. In summary, while the protocols examined are a large improvement over current implementations in DREs, some issues remain to be ironed out.

Invited Talk

Human-Computer Interaction Opportunities for Improving Security

Ben Shneiderman, University of Maryland

Summarized by Ming Chow

Professor Ben Shneiderman first reminded the audience that the goals of user interface design are to be cognitively comprehensible and effectively acceptable, not to be adaptive, autonomous, or anthropomorphic. The scientific approach to designing user interfaces includes specifying users and tasks, accommodating individual differences, and predicting and measuring learning, performance, errors, and human retention.

Professor Shneiderman stressed the importance of usability in controlling security and privacy, as put forth by the Computing Research Association (CRA) and the 2005 President’s Information Technology Advisory Committee (PITAC) report. One of the grand challenges established by the CRA in 2003 was “to give end users security they can understand and privacy they can control,” and usability is increasingly important in areas such as patient health records, law enforcement databases, and financial management. The 2005 PITAC report noted similar challenges for end users and operators. Professor Shneiderman listed five goals of security and privacy: availability, confidentiality, data integrity, control, and auditability.

Professor Shneiderman presented the security and privacy settings interface in Microsoft Internet Explorer, which, he noted, is riddled with usability problems, from the tedious online help to the challenge of setting up a Virtual Private Network (VPN). He also mentioned the emerging research in the area of usability and security/privacy.

Professor Shneiderman offered several valuable strategies for improving the usability of security/privacy: use a multi-layer interface that ties complexity to control and permits evolutionary learning; use a cleaner cognitive model with fewer objects and actions; show the consequences of decisions; and show activity dynamics with a viewable log. He urged improving commercial practices by putting more emphasis on usability engineering and testing, which will lead to improved product quality, reduced costs, improved organizational reputation, and higher morale. Using his suggestions and insights, he presented a sample design of File-sharing On-web with Realistic Tailorable Security (FORTS), which uses the multi-layer interface approach.

Finally, Professor Shneiderman presented information visualization for security and repeated the mantra of information visualization: overview, zoom-and-filter, details-on-demand. Human perceptual skills are remarkable, and human storage is fast and vast. He suggested using information visualization as a valuable opportunity for security/privacy: for linking relationships, profiling users and traffic, and understanding hostile events. A number of commercial and academic visualization tools were demonstrated, including Spotfire, a rich and powerful commercial visualization package.

Panel

National ID Cards

Niels Provos (moderator), Google; Drew Dean, SRI International; Carl Ellison, Microsoft; Daniel Weitzner, World Wide Web Consortium

Summarized by Serge Egelman

With the passing into law of the REAL ID Act (P.L. 109-13), many Americans have started to become aware of the concerns that come with a national identity system. It was only fitting that this year’s USENIX Security Symposium featured a panel to discuss such concerns. In his opening remarks, moderator Niels Provos pointed out that most European countries have had national identity cards for quite some time. He has had his card for his entire life and uses it regularly, without any hassle, for such activities as crossing borders and voting. He quite likes his national identity card, in fact. But Germany has strong laws regulating the collection and sharing of personal data. The United States has no such laws, and that is why there is a legitimate concern regarding what a national identity system would do to personal privacy in this country.

Carl Ellison, an expert on authentication and authorization systems who currently holds the title of Security Architect at Microsoft, laid out the arguments for and against national identity cards. He went on to say that both sides are wrong: the opponents, in that the defeat of such a system will not in fact end data privacy problems, and the proponents, because they do not understand that a national identity card will not achieve the security goals for which it was intended (i.e., the card will never be a “not a terrorist” card). To elucidate these arguments, Ellison went over the process of making a security decision: a channel is opened, an identifier is offered, and authentication occurs. Authentication involves proving that the client has a right to the given identifier and is authorized to access the requested resource. Thus, such a security decision cannot simply be based on a name or identifier; it must also involve determining whether the person has appropriate permission. This problem can clearly be seen with the proposed national identity system in this country: it aims to prevent terrorism, but knowing a name alone says very little about whether someone is a terrorist and what their intentions may be.

Ellison then brought up the example of Walton’s Mountain, a fictional place where all of the residents are born and eventually die; everyone knows each other. Thus, when a security decision needs to be made, any resident just needs a name and can then recall memories about the person. National identity cards are trying to accomplish the same thing through what Ellison calls “faith-based security.” Through the use of biometrics and identity documents, the government is trying to make assurances about names so that it can recall “memories” about a person from a nationwide database. Unfortunately, such a database does not exist, and even if it did, we would not know anything about a person we had never interacted with before. This is not a proper security decision; we are doing authentication but not authorization. Urbanization made this a very difficult task, and the Internet has made it impossible.

Drew Dean’s interest in the issue of national identity cards can be seen in his involvement in two separate National Research Council studies on authentication and national identity systems. He mentioned that in getting to the conference he had to show two different forms of identification: a passport to get on the airplane and a state driver’s license to rent a car. In this country, a state driver’s license is recognized by every state (although there is no federal law mandating this, every state has passed its own law to recognize out-of-state licenses for the purpose of comity). Outside of the U.S., however, it varies. One of the NRC studies that he referred to brought up the fact that a national identity system needs to cover more than just U.S. citizens. This and other problems are often failures of the system, not just the card. But before such a system can be fixed (or properly implemented), a few questions need to be answered: What will the purpose be? Who will be enrolled? What information is stored? Who has access to the information? What are the implications with regard to identity theft? While it is clear that existing credentials are very weak, it is even clearer that a single nationwide system would create a single point of failure.

Daniel Weitzner has also been involved with National Research Council studies on national identity systems. He started by mentioning that the Washington, D.C., sniper and the 9/11 hijackers have been the biggest motivators for creating a national identity system. It was largely the terrorist hijackings that motivated the passage of the REAL ID Act, which mandates that states create uniform identity cards within the next three years. The law defines what is to be included on the cards and what is to be stored in the national database, but it makes no mention of how the data can be accessed or used, and by whom. It is also unclear whether it will solve the problems it intends to.

Regarding the sniper case, the license plate number was recorded at least ten times near the sites of the crimes, but the car wasn’t associated with the crime; as Weitzner put it, they were “looking for a white truck with white people instead of a blue car with black people.” Had each license-spotting been stored in a database shared by all of the police forces, they could have correlated the fact that this car was spotted at the scene of many of the shootings. But at the same time, this challenges our current privacy model. Many intrusive practices arise from drawing inferences rather than from data collection alone. Credit card transactions lead to profiling, Web logs lead to user patterns, and location-based systems lead to discovering travel patterns. What we need right now from a technical standpoint is enforcement of rules, as well as secure audit systems. From a policy standpoint we need to shift from limits on data collection to limits on data usage, where we can require accountability and auditing. The current threats to privacy come not from the information itself but from the inferences. Thus, by increasing exposure of how collected personal information is used, we can actually advance personal privacy.

The question on everyone’s mind for the panel was whether there would be a benefit to being a national identity cardholder. While they differed in their reasoning, all of the panel members agreed that the costs would greatly outweigh the benefits. Carl Ellison referred to Walton’s Mountain again, reminding everyone that implementing authorization on the cheap is still an unsolved problem; issuing cards in no way achieves authorization. Daniel Weitzner drove home that point, saying that when confronted with a new technology it does not understand, government treats it as a panacea. Such systems are expensive to implement and do not provide the solution their proponents claim. Drew Dean mentioned that one of the biggest privacy concerns is with regard to secondary uses of personal information. Originally, Social Security numbers were to be used only by the Social Security Administration, just as a driver’s license was originally meant to be a license to drive. But since these systems exist, private industries have used them for other purposes rather than spending money to create their own systems. All of these systems undergo function creep, and privacy concerns abound.

Invited Talk

Homeland Security: Networking, Security, and Policy

Douglas Maughan, DHS, HSARPA

Summarized by Ming Chow

Douglas Maughan, program manager at the Department of Homeland Security Science and Technology Directorate, discussed some of the issues and tools the department is currently working on. Maughan provided an overview of the organization of the DHS and discussed its research and development priorities. He also explained the differences between research and development funding at DARPA and at the DHS: at the DHS, 85–90% of funds are tied to requirements, and 10–15% of funds are dedicated to research. The five priorities of cybersecurity in the department are testing and evaluating threats, critical infrastructure, customer service, coordinating research among agencies, and creating partnerships. Maughan engaged the audience in discussion about two policy issues: DNS, and securing protocols for the routing infrastructure. He acknowledged that people are unhappy with ICANN’s model of managing DNS, which is a key part of the global Internet, and asked the audience several questions, including: What incentives should be put in place for industries to use DNSSEC? Should the root key be managed using threshold cryptography or as a single root key? Unlike DNS, there is no governance for the routing infrastructure. Maughan acknowledged that ISPs are doing the bare minimum to protect networks, and he asked the audience what incentives should be provided to industries to encourage their adoption of a standard and development of solutions for deployment.

Next, Maughan presented two DHS projects, DETER and PREDICT. DETER is a shared testbed infrastructure for medium-scale security research, supporting repeatable experiments, especially experiments that may involve “risky” code. The Protected Repository for Defense of Infrastructure against Cyber Threats (PREDICT) is a repository of defense infrastructure data, where the aim is to have private corporations donate real incident data for security researchers and academia to use. The goal of these projects is to provide an experimental infrastructure to aid the development and large-scale deployment of security technology sufficient to protect our vital infrastructures. These projects are not without controversy. Maughan asked the audience to consider a number of other questions, including: Which industries should be involved with DETER, and how? What level of anonymization should the data have? What should be the level of institutional sponsorship of PREDICT, and what happens if one violates the terms of agreement?

Refereed Papers

DIAGNOSING THE NET

Summarized by Mohan Rajagopalan

Empirical Study of Tolerating Denial-of-Service Attacks with a Proxy Network

Ju Wang, Xin Liu, and Andrew A. Chien, University of California, San Diego

Denial-of-service (DoS) attacks are a key problem as Internet service applications become an important part of the enterprise. This work focused on infrastructure-level DoS attacks and was based on two key ideas: enforced mediation and the notion of distributed front ends. Since theoretical models cannot capture the dynamics of network and application behavior as observed in large networks, the authors addressed these challenges and performed a realistic study using MicroGrid, a large-scale packet-level online simulator that they argue offers more realism than NS2 or PlanetLab.

The experiments produced three results: first, the approach performed well in terms of baseline performance; second, the proxy network was effective against both “spread attacks” and “concentrated attacks”; finally, the results showed that the system was scalable.

The first questioner asked Ju to compare their MicroGrid-based approach to a simpler one based on NS2. Ju replied that scale is important for realism and NS2 could not provide a realistic approximation; he referred to the paper for further details on what realism meant. When asked to comment on the switchover time, he replied that while they did not consider it, it was something that would be seen in a real system.

Robust TCP Stream Reassembly in the Presence of Adversaries

Sarang Dharmapurikar, Washington University; Vern Paxson, International Computer Science Institute, Berkeley

Sarang Dharmapurikar described the growing interest in higher-level packet processing. The motivating question for this work was whether it is possible to reassemble TCP packets at high speed. Previously, systems either did not have a buffer, and so would drop out-of-order packets (causing TCP instability), or would guess the amount of buffer required. The primary contribution of this work was to analyze TCP traces in order to measure buffer requirements that could then be used to improve the system. The objective was to optimize for the average case by introducing an inline hardware device that could kill connections and allow normalization while preserving TCP dynamics.

This work presented three fundamental measurements: first, up to 15% of the connections may have out-of-order packets; second, the maximum buffer required is small; and, finally, 60% of the holes lasted for less than 1ms. This indicated that reordering, not dropping, was the right strategy. In order to deal with adversarial connections, they proposed a policy-based defense: to prevent the attacker from filling the buffer with a single connection, each connection’s buffer use is restricted to a preset threshold, and multiple connections from a single host are disallowed so that the adversary cannot simply open more connections. The final policy evicts a page randomly and kills the corresponding connection in case of an overflow. The talk mentioned “zombie” equations that would be used to improve connection eviction. In conclusion, this work showed that TCP reassembly is important for security and that trace-driven analysis can be used to design and tune the system.
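The per-connection cap and random eviction described above can be sketched in a few lines. This is a toy software illustration only, with assumed limit values and names; the paper’s design is an inline hardware device with quite different mechanics.

import random

POOL_LIMIT = 64 * 1024       # total reassembly buffer pool (assumed value)
PER_CONN_LIMIT = 4 * 1024    # per-connection cap (assumed value)

buffers = {}                 # connection id -> bytes buffered out of order

def buffer_segment(conn: str, nbytes: int) -> bool:
    # One adversarial connection cannot exceed its preset threshold.
    if buffers.get(conn, 0) + nbytes > PER_CONN_LIMIT:
        return False
    # On overflow of the shared pool, evict a random page and kill
    # that connection (eviction-as-defense, per the talk).
    while sum(buffers.values()) + nbytes > POOL_LIMIT:
        del buffers[random.choice(list(buffers))]
    buffers[conn] = buffers.get(conn, 0) + nbytes
    return True

print(buffer_segment("10.0.0.1:1234", 1500))  # True: segment buffered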

The first question dealt with an adversary who would send a bunch of holes and then a bunch of small packets to fill the holes, thus flooding the analyzer. Sarang replied that this could be treated as an anomaly. The second question concerned the use of multi-path for group resiliency; the response was that it would be difficult to handle.

Countering Targeted File Attacks Using LocationGuard

Mudhakar Srivatsa and Ling Liu, Georgia Institute of Technology

Mudhakar Srivatsa presented LocationGuard, which provides location hiding to protect against DoS and host-compromise attacks. There are two major problems this work tries to address: access control in a wide-area file-storage system, and defending against targeted attacks. The authors’ approach tries to hide files, locate them for known users, and prevent inference attacks. A location key is used to hide the location of the file: (file, location key) → location. The implementation stores files in a distributed hash table.

Their approach uses a probabilistic lookup scheme, which builds on a “safe obfuscation” algorithm for secure routing that never discloses the file ID. In order to prevent inference attacks based on observing file accesses and their frequency, files are divided into chunks. Periodically, the location key is changed, and this rekeying nullifies all past file inferences. The actual implementation is based on Chord, using AspectJ. The authors found that their approach effectively defended against DoS, DDoS, and host-compromise attacks while incurring minimal overhead.
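A minimal sketch of the location-key idea follows. The construction is assumed for illustration (a keyed hash choosing a Chord-style identifier); the paper’s obfuscation and lookup protocol are more involved.

import hashlib
import hmac

RING_BITS = 32  # assumed ring size for illustration

def file_location(name: str, loc_key: bytes) -> int:
    # The storage location is a keyed hash of the file name, so only
    # holders of the location key can compute, or even recognize, it.
    digest = hmac.new(loc_key, name.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (2 ** RING_BITS)

key = b"secret location key"
print(file_location("payroll.db", key))           # lookup with the key
print(file_location("payroll.db", b"wrong key"))  # a different, useless location
# Rekeying (choosing a new loc_key) moves the file and invalidates any
# inferences an observer drew from past accesses.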

Invited Talk

Electronic Voting in the United States: An Update

Avi Rubin, Johns Hopkins University

Summarized by Ming Chow and Jonathon Duerig

Avi Rubin began by discussing his recent experiences at an annual conference of state chief justices held in South Carolina, where he served on a panel about electronic voting. Surprisingly, most of the chief justices were not aware of the electronic voting problem, and most do not even buy into the idea of trojan horses. However, Rubin’s talk pointed out some of the problems that result when voting technology loses transparency. It is important to educate the chief justices in this area, since they will increasingly be the arbiters of who wins elections, as was seen in a recent election in Washington state. Rubin noted that it was difficult to explain the technical issues of electronic voting to a mostly nontechnical group at the conference. Several chief justices (of Pennsylvania, Washington, Puerto Rico, and Florida) praised Rubin’s talk for making them believers regarding the electronic voting problem and for stressing the importance of a paper trail.

Rubin reviewed the background of the electronic voting problem. Shortly after the debacle of the 2000 presidential election, Congress passed the Help America Vote Act (HAVA), which established a program to provide funds to states to replace punchcard voting. In 2003, $1.4 billion was given to states to buy electronic voting systems. Members of Congress approved of the idea of electronic voting and, rebuking Rubin, found no problems with the systems. However, before the 2004 presidential election, the controversy surrounding electronic voting escalated. Rubin noted numerous problems, including weak requirements from independent testing authorities (ITAs), no source code review of systems, controversies over the lack of a paper trail, lack of accommodation for blind people, and the fact that some people do not even look at their receipts.

Rubin noted that there is still a disconnect between Congress and the computer science community and that the HAVA money is almost gone: $4 billion has been spent. Maryland commissioned several studies to figure out how to retrofit new voting safeguards onto the old technology; the finding is that things are being done wrong, but there is no money to fix them. Rubin recalled a trip to the Carter Center in Atlanta, where he found people very concerned about the fact that there is no way to observe electronic voting. In Oregon, everyone votes by mail; there, voter coercion and vote resale are problems. Except for Baltimore, Maryland is still using the highly controversial Diebold electronic voting machines. In New Jersey, legal battles over voting continue to rage. Politicians in Washington do not seem worried about these problems, and people in positions of power are invested in voting-machine companies. Although progress is being made in confronting the problems in existing voting technology, the overall picture is mixed. And the difficulty of disseminating information on the problem of electronic voting means that many people in this country still do not believe there even is a problem.

Refereed Papers

MANAGING SECURE NETWORKS

Summarized by Stefan Kelm

An Architecture for Generating Semantics-Aware Signatures

Vinod Yegneswaran, Jonathon T. Giffin, Paul Barford, and Somesh Jha, University of Wisconsin, Madison

In this talk Jonathon described both the architecture and the implementation of Nemean, a system for automatic IDS signature generation. One of the objectives of Nemean is to take the human out of the signature-generation loop in order to reduce errors (both false positives and false negatives). He said that current solutions do not make use of application-level protocol semantics, whereas Nemean operates on the application layer, working with what he called semantics-aware signatures. In doing so, it is able to aggregate TCP flows, generate signatures for attacks where the exploit is only a small part of the payload, and produce generalized signatures. The resulting signatures are easy to understand and, importantly, to validate.

Nemean’s architecture consists of data collection, flow aggregation, service normalization, and clustering. The data collection component takes its input from a honeynet; the current implementation captures HTTP and NetBIOS. The main task of the flow aggregation component is to manually assign weights to individual data packets, which are subsequently used for automatic signature generation. Service normalization takes care of possible problems within the data flow. Finally, the clustering component is divided into session clustering and connection clustering.

Jonathon then presented some very impressive results from an experiment that ran over two days: they trained Nemean using captured honeynet data and achieved a detection effectiveness of about 99%, with zero false alarms. Their research suggested that, depending on the attack, connection-level clustering makes sense at times and session-level clustering seems appropriate at others. For more information, see http://www.cs.wisc.edu/~giffin/.

MulVAL: A Logic-Based Network Security Analyzer

Xinming Ou, Sudhakar Govindavajhala, and Andrew W. Appel, Princeton University

Xinming Ou presented MulVAL, a new approach to network security analysis. The motivation behind this approach is to find possible security weaknesses in software and/or network configurations before running a particular service. An administrator, Xinming argued, should be able to put questions to a so-called “reasoning engine”—for example, is there an attack path that could lead to exposure of confidential data?

Input from sources such as CVE is converted into a form that can subsequently be used in logic programming. The authors chose Datalog, a subset of Prolog. Xinming gave two examples: network and machine configurations are expressed as Datalog tuples—“serviceRunning(webserver, httpd, tcp, 80, apache)”—whereas the reasoning logic is specified as Datalog rules—“networkAccess(Attacker, Host2, Protocol, Port, ...)”. Standard Prolog engines then conduct the analysis of configurations.
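To make the flavor of such rules concrete, here is a toy illustration of the kind of inference such an engine performs. It is written in Python rather than Datalog, and the predicate names are modeled on the examples above rather than taken from MulVAL’s actual rule set.

# Facts, in the spirit of the Datalog tuples quoted above:
services = {("webserver", "httpd", "tcp", 80, "apache")}   # serviceRunning
vulns = {("webserver", "httpd", "CVE-XXXX-YYYY")}          # placeholder vuln id
reach = {("attacker", "webserver", "tcp", 80)}             # networkAccess

def exec_code(attacker: str, host: str) -> bool:
    # Rule: the attacker can execute code on `host` if a vulnerable
    # service on `host` is reachable over the network.
    for (h, prog, proto, port, _sw) in services:
        vulnerable = any(vh == h and vp == prog for (vh, vp, _id) in vulns)
        reachable = (attacker, h, proto, port) in reach
        if h == host and vulnerable and reachable:
            return True
    return False

print(exec_code("attacker", "webserver"))  # True: a one-step attack path exists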

The basic idea behind the architecture is to have a small scanner running on each host within a network and an analyzer that looks for new information sent by the scanners. Xinming described various reasoning rules, covering possible exploitation of known vulnerabilities, OS semantics, and attack techniques. He then presented some real-world results and argued that the system scales quite well, mainly because of Prolog engine optimizations. They used MulVAL to check their department’s network configuration and immediately found a potential two-stage attack path due to multiple vulnerabilities that existed on a single server.

Xinming said that future work involves testing the system on more networks and that reasoning rules for Windows systems are needed, too. He concluded that logic programming is a good approach to network security analysis.

For more information, go to http://www.cs.princeton.edu/~xou/.

Detecting Targeted Attacks Using Shadow Honeypots

K. G. Anagnostakis, University of Pennsylvania; S. Sidiroglou and A. D. Keromytis, Columbia University; P. Akritidis, K. Xinidis, and E. Markatos, Institute of Computer Science–FORTH

Stelios Sidiroglou presented shadow honeypots, a security architecture combining rule-based intrusion detection systems (such as Snort), which are good at detecting known attacks, with honeypots and other anomaly detection systems, which are good at detecting zero-day attacks. By taking “the best of both worlds,” one should be able to minimize both false positives and false negatives.

Unlike the traditional approach, shadow honeypots allow for two modes of operation: client-side and server-side. The basic idea is to have a filtering component as well as anomaly detection sensors. Sitting behind those sensors is the shadow honeypot, which is an instance of the system or software to be protected. It is basically a modified version of the software itself, with various hooks introduced throughout the source code.

The prototype implementation presented by Stelios introduces a few new system calls, such as transaction() and shadow_enable(): if the shadow honeypot classifies input as malicious, the corresponding packets are discarded; if the packets are regarded as okay, they will be handled correctly and transparently by the system.
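In outline, the decision flow looks something like the sketch below. All function names other than the system calls mentioned above are invented for illustration; the real hooks operate inside the instrumented application, not at this level.

ANOMALY_THRESHOLD = 0.8  # assumed sensor score cutoff

def anomaly_score(request: bytes) -> float:
    # Stand-in for the anomaly detection sensors in front of the service.
    return 1.0 if b"\x90" * 16 in request else 0.0

def shadow_executes_safely(request: bytes) -> bool:
    # Stand-in for replaying the input against the hooked, instrumented
    # build (inside a transaction()) and checking for memory violations.
    return len(request) < 4096

def handle(request: bytes) -> None:
    if anomaly_score(request) >= ANOMALY_THRESHOLD:
        if not shadow_executes_safely(request):
            print("malicious: packets discarded, filter updated")
            return
    print("clean: handled transparently by the production system")

handle(b"GET / HTTP/1.0\r\n\r\n")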

Stelios presented two widely used applications modified with those shadow honeypot system calls: the Apache Web server and the Firefox browser. In this implementation the authors focused on memory violations such as buffer overflows. Although benchmarking the modified versions showed overheads of 20% and 35%, respectively, Stelios said that the ability to significantly reduce the rate of false positives is a good reason to keep improving shadow honeypots.

For more information, see http://www1.cs.columbia.edu/~ss1759/.

Invited Talk

Cybersecurity: Opportunity and Challenges

Pradeep K. Khosla, CyLab, Carnegie Mellon University

Summarized by Boniface Hicks, OSB

Pradeep Khosla discussed various elements of CMU’s CyLab (http://www.cylab.cmu.edu/), of which he is the director. CyLab not only studies the technological aspects of computer security, but also integrates efforts with the Tepper School of Business and the Heinz School of Public Policy. It extends internationally and includes the efforts of 150 security professionals and more than 50 industrial affiliate member companies. It is an ambitious and wide-reaching research center, embracing both short- and long-term projects.

Khosla himself is helping to build survivable storage systems. In hopes of making storage perpetually available, even in the face of failure or compromise of some disk arrays, the team, led by Greg Ganger, is using redundancy in a novel way. A naive approach would be merely to break up a file into a thousand pieces, like a jigsaw puzzle, and store the pieces on different disk arrays. In this way, if one piece were compromised, no information would be gained. An improvement is to duplicate the storage and break it up into four 1000-piece puzzles; then even the failure of a disk will cause minimal damage, and the degradation will be graceful over the failure of multiple disks. Furthermore, their system is self-healing, recognizing what has been lost and recovering it by using redundant information. In this way, they have been able to build a robust system using only non-robust components. As expected, however, increased safety is paid for with slower access rates.
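One simple way to get the “one piece reveals nothing” property is an n-of-n XOR split, sketched below. This is an illustration of that secrecy property only, assumed for the example; the CyLab system uses more sophisticated threshold and erasure-coding schemes, and its availability comes from the replication described above, which a plain XOR split does not provide.

import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int) -> list[bytes]:
    # n-1 shares are random; the last is the data XORed with all of them.
    # Any n-1 shares are indistinguishable from random noise.
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, data))
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor, shares)

shares = split(b"patient record", 4)  # spread across four disk arrays
assert combine(shares) == b"patient record"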

Another CyLab project is the Grey System. Khosla showed a demo of this system, which is already being deployed in the computer science buildings at CMU. A person can get into his own office using a cell phone with Bluetooth. Furthermore, a person can remotely give authority for someone else to enter his office over the cell phone. The system allows one cell phone to provide a certificate to another cell phone, which can then use the certificate to authenticate with the door. The logic for this delegation system is handled using automated theorem-proving software developed by Pfenning and Lee some ten years earlier. This novel application is one they never expected; it demonstrates how pure research produces unexpected results, even a decade after it has been developed. Khosla used this opportunity to petition for government agencies to be willing to provide funds for the sake of long-term results.

Using the Grey System, what prevents someone from stealing a cell phone and breaking into that person’s office? Ideally, the cell phone would authenticate its user—using biometrics, for example. Khosla recognized that no biometric is perfect, but perhaps a combination of face, fingerprint, voice, and iris recognition would make a robust system. One group in the CyLab has been making great progress in face recognition. Although there is an impossible number of variables (pose, illumination, expression, occlusion, time lapse, etc.), the lab has made significant progress in attaining excellent accuracy with the help of very few training images. Their software has produced far better results than current commercial software. There is still the challenge, however, of incorporating this resource-intensive technology into resource-constrained devices such as cell phones or PDAs. Also, there is need for better user input for these devices, such as voice recognition. Furthermore, as this technology becomes more advanced and more broadly trusted (biometrics will be required on passports by the year 2010), there are various business and policy issues which must be explored. It may be desirable to encrypt the biometrics on a passport, for example.

The last significant area covered by Khosla was education. Using examples from his own experiences with his son, he described the need for children to be made “cyberaware.” Since it is so easy for a teenager to get a malicious script from the Internet and cause great damage, it is important to educate children in ethics and norms for Internet use. CyLab has taken on this social responsibility by forming a program that seeks to educate 20,000 young people in the Pittsburgh area, with hopes of educating 10 million in the future. They’re trying to reach kids aged 5 to 10 by incorporating ethics into an interactive game, available at http://mysecurecyberspace.com, in which the player interacts with characters such as Elvirus and MC Spammer. A scientific study of the 20,000 children who will be required to play the game is being conducted, with a long-term evaluation of the effectiveness of this approach.

Throughout his presentation, Khosla made some observations about open areas and the waves of the future. He claimed that Human-Computer Interaction (HCI) is now the hot field in computer science. He identified the emerging field of resource-constrained devices such as mobile phones and even RFIDs, and believes they will be ubiquitous in the near future. Mobile access is the new wave, he said; it holds the promise of providing telephony in developing nations—77% of the world is already within range of a mobile network. At the same time, privacy, security, and capture resilience are needed for mobile technologies. Finally, there were comments about the reduction in funding for these projects. Khosla reiterated how important it is that there be ongoing funding for security—it is a problem that will never simply be solved. He also challenged DARPA not to require so many projects to be classified, since that leads to duplication of effort. Finally, an audience comment encouraged incentives to get kids involved in bug reporting as well as in reporting malicious activity. Khosla welcomed this idea.


Panel

Sniffing Conference Networks: Is It Legal? Is It Right?

Abe Singer, San Diego Supercomputer Center; Bill Cheswick, Lumeta Corp.; Paul Ohm, U.S. Department of Justice; Michael Scher, Nexum, Inc.

Summarized by Serge Egelman

At many security conferences, intercepting wireless network traffic has become commonplace. DefCon is at the extreme—passwords are annually written down and taped to a “wall of shame.” But USENIX Security has not been very different. Often the motivation claimed is that of educating users about poor security habits, but one thing is fairly certain: this behavior is illegal. This panel examined both the ethical and the legal impact of sniffing wireless conference networks.

Bill Cheswick, currently the chief scientist at Lumeta Corporation, is well known for his 1991 paper “An Evening with Berferd,” in which he lures a hacker to a machine that is being monitored. For months Cheswick watched as this cracker would attack other machines from the honeypot he had set up. At one point during this study, Army Intelligence came to Cheswick and read him his Miranda rights; they saw attacks coming from his machine and assumed that he had something to do with them. He was let off the hook after explaining the project. While Cheswick learned many of this particular cracker’s techniques, he had received little ethical guidance on how to proceed. Was what he was doing illegal? Was it ethical? After all, it was his own machine, and this cracker had accessed it without authority (which certainly is illegal). While this example falls into a gray area, Cheswick mentioned how he used to sniff conference networks for plaintext passwords so that he could educate people about insecure protocols. Upon finding out that this activity was illegal, he restricted his sniffing to networks that he owns.

Paul Ohm, an attorney for the United States Department of Justice, gave an overview and history of the various federal computer crime laws. Starting in 1968, Congress passed regulations about eavesdropping after being outraged by the egregious activities of the FBI in monitoring citizens without any legal oversight. This law, commonly referred to as the Wiretap Act (Title III of the Omnibus Crime Control and Safe Streets Act of 1968), made it illegal both to tap phone lines without a warrant and to plant bugs. In 1986 Congress passed the Electronic Communications Privacy Act (ECPA, 18 U.S.C. §§ 2510–2521). With a few exceptions, this law made it illegal to intercept electronic communications. Monitoring one’s own network to protect rights and property is permissible, as is monitoring a network with consent from the users. Ohm pointed out that some might argue that by broadcasting passwords through the air in plaintext, the user is essentially “asking for it.” Although it is entirely possible that this argument might eventually win in court, such a victory would come at the cost of thousands of dollars in legal fees for the defendant. At the same time, the likelihood of someone being arrested at a conference for sniffing traffic is very small; Ohm explained that when deciding whether to prosecute such a case, intent is crucial. But sniffing conference traffic also raises many ethical questions: even if it were legal, would this behavior be acceptable for someone not in attendance at the conference? What is the difference between a conference attendee sniffing traffic and an FBI agent sniffing traffic? The laws are in place to protect everyone equally.

Abe Singer of the San Diego Supercomputer Center chose to concentrate on the ethical questions. Some of the common justifications range from “It’s not a wiretap if there’s no wire” to “I’m protecting the network.” Of course, this raises the question of what exactly is being protected by acquiring someone else’s passwords. Another justification, “The user deserved it for using plaintext passwords,” is similar to “She deserved it for walking down a dark alley alone.” This sort of behavior embarrasses those who are subjected to it. These are often new users who do not know any better; instead of alienating them, our time would be better spent educating them. Of course, one way around this would be to force all conference attendees to sign waivers of consent. Just imagine an ISP requiring this of all its customers.

Mike Scher, general counsel and compliance architect for Nexum, Inc., chose to focus on enforcing normative behavior. Before a law is passed, there is always some consensus that the law serves to prohibit behavior in violation of ethical norms. We will often tell colleagues when they are behaving improperly. But in this community, sniffing is an ethical gray area, and it is therefore very difficult to become watchdogs. On the one hand, there are security luminaries who are using plaintext passwords, and on the other, there are other security luminaries who are sniffing. As a community, we need to reach a consensus as to whether this is unethical.

Invited Talk

Treacherous or Trusted Computing: Black Helicopters, an Increase in Assurance, or Both?

William Arbaugh, University of Maryland

Summarized by Kevin Butler

The debate about trusted computing is passionate and pointed; as Arbaugh put it, it is good when people debate issues, but bad when people make unsubstantiated claims. Arbaugh presented an overview of trusted computing and spoke of its positive effects and possible negative ramifications. Much of the debate centers on who controls one’s computing and one’s information. There is a tension between owners and users of information. Owners want to control information (e.g., patient data), and while this seems laudable, there are scenarios where data leakage is important, such as with whistleblowers in a company (e.g., tobacco companies wanting to keep their documents secret). Trusted computing is inherently a “dual-use” technology, which can be used for good purposes or ill. A user’s expectations of trusted computing will differ in many ways from a large company’s. An object is trusted and trustworthy if and only if it operates, and can be expected to operate, as expected. Therefore, one definition is that trusted computing is when your computer operates as expected. Note that the expectations themselves are not included in this definition.

What is a trusted computing base (TCB)? It’s the totality of components responsible for enforcing security policy, including hardware, firmware, and software. A key component of a TCB, the reference monitor, mediates all access to objects from subjects. The implementation of a reference monitor is known as a reference validation mechanism (RVM); it should be tamper-proof and unable to be bypassed, but small enough to be well analyzed and tested. The reference monitor acts as a base case; i.e., if the base case fails, the proof falls apart. These concepts and others were codified in 1983 in the “Orange Book,” which provided good definitions and theory but was unwieldy in practice. Trusted computing was not seriously considered again until 2002, when the Trusted Computing Group (TCPA/TCG) went public. There has been a flurry of recent activity: next year may bring virtualization software from Intel, secure execution mode from AMD, and other efforts.

The TCG features as its core element the Trusted Platform Module (TPM), a passive device that only does something if commanded over the system bus. This means it can’t perform actions such as raising an interrupt to stop processing, can’t take over a machine, and can’t delete files. It’s essentially a smartcard soldered to the computer, so it has lots of interesting crypto functions implemented in hardware, including random number generation and symmetric and asymmetric encryption. Storage is protected through on- and off-device shielded locations, and protected execution provides an environment for protected crypto functions to execute without modifications or exposure of key information. A key function of the TPM is attestation, in which the current status of both the TPM and the machine on which it resides is attested to by the TPM. Platform configuration registers (PCRs) are held in volatile storage in the TPM, and can be initialized to zero but not directly written to. The other operation permissible is extension, in which an extended value is hashed with the old value of the PCR to create a new value.
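
The extend operation is easy to state precisely. As a minimal illustrative sketch (not TPM vendor code; the function name is invented), the following Python mimics how a 160-bit PCR absorbs a sequence of measurements:

    import hashlib

    PCR_SIZE = 20  # a TPM 1.2 PCR holds a 160-bit (20-byte) SHA-1 value

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # New PCR value = SHA-1(old PCR value || SHA-1(measured data)).
        digest = hashlib.sha1(measurement).digest()
        return hashlib.sha1(pcr + digest).digest()

    pcr = bytes(PCR_SIZE)  # PCRs start out as all zeros at reset
    for stage in (b"BIOS image", b"boot loader", b"kernel"):
        pcr = extend(pcr, stage)
    print(pcr.hex())  # order-dependent: any change or reordering alters the result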

Arbaugh suggested that trusted computing can be broken into two phases: getting started (the pre-boot phase) and the operational, or post-boot, phase, where the system must remain trustworthy. Authenticated boot can be performed by the TPM; it ensures that at boot time the system is in a secure initial state, assuming that the measured software is trustworthy. This latter concept is problematic, as nobody to this point is capable of making such a guarantee. Authenticated boot is a passive method; if the bootstrap process detects malicious activity, it cannot stop the system from booting, and it might not even be able to detect that there is malice. Briefly, the operation breaks bootstrapping into several steps, where a hash is taken at each step and the PCR extended. Integrity measures are stored in a write-once register, so the hashes can be securely compared. While the configuration can be proven to another authority, there is no way to prove to the user that they are in a trusted configuration, due to the lack of a trusted path between the hardware and the user (e.g., an OS can spoof values as displayed to the monitor). Secure boot, by contrast, is an active process that can prevent malice from executing. It proceeds similarly to authenticated boot, but execution is halted if the hashes do not match, so reaching the next stage itself demonstrates a correct configuration. However, it cannot prove a trusted configuration to a third party. Arbaugh suggested that what is needed is a trusted boot, combining authenticated and secure boot. There are times when being able to provide the system configuration to a third party is helpful, though this is open to abuse. However, malice should never be executed if it can be detected, no matter how good the protection is. The addition of a trusted path to the user is the only way to implement this.
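
To make the distinction concrete, here is a small sketch (the stage names and images are invented) showing both styles of bootstrap: authenticated boot only records measurements, while secure boot compares and halts:

    import hashlib

    def measure(image: bytes) -> str:
        return hashlib.sha1(image).hexdigest()

    # Hypothetical known-good images, measured ahead of time.
    KNOWN_GOOD = {"loader": measure(b"loader v1"), "kernel": measure(b"kernel v1")}

    def authenticated_boot(images):
        # Passive: record every measurement, boot regardless.
        return [(name, measure(img)) for name, img in images]  # reported later

    def secure_boot(images):
        # Active: halt as soon as a stage fails to match its known-good hash.
        for name, img in images:
            if measure(img) != KNOWN_GOOD[name]:
                raise SystemExit(f"refusing to run tampered stage: {name}")

    boot_chain = [("loader", b"loader v1"), ("kernel", b"kernel v1")]
    print(authenticated_boot(boot_chain))
    secure_boot(boot_chain)  # silent success; a modified image would halt here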

Post-boot methods include IBM’sextension of the TCG into runtimeoperation and software to use theTCG past boot virtualization, suchas Vanderpool and Pacifica. InIBM’s work, presented at the 2004USENIX Security Symposium, allobjects are measured and a list ismaintained in kernel data, withmeasured values going into a PCR.This only works if all software istrustworthy, meaning that muchmore software than just the BIOSand boot routine must be verified.Virtualization modifications areproposed by Intel and AMD; how-ever, previous work showed thatsome instructions in the x86instruction set cannot be virtual-ized without breaking the virtual-ization itself. Domain managerssuch as VMware and Xen act likereference monitors, where each OS

; LO G I N : D E C E M B E R 2 0 0 5 S U M M A R I E S : 1 4 TH U S E N I X S E C U R IT Y SYM P O S I U M 79

runs in a partition, firewalled fromeach other. Multi-level securitycould be implemented effectivelythrough this scheme, but there isstill a problem of moving informa-tion between partitions, and partic-ularly of covert channels betweenthe virtualized OSes. The Vander-pool specification includes thehighly problematic virtualization ofI/O. Lagrande includes processorand I/O modifications to increasesecurity and has trusted I/O pathsto the video and keyboard plus pro-tected execution and additionalmemory protection.

The main thesis of the talk was that trusted computing can be used in good and bad ways, and Arbaugh considered examples of each. Electronic voting is a particularly good application, as attestations with a trusted boot are what one wants from a voting machine. However, digital rights management (DRM) restrictions can be brought into place, thanks to the configuration attestations. The ability to lock files and protect crypto keys with the TPM prevents key escrow, and the police cannot access your keys. However, files can be locked to applications to limit competition. Strong authentication can be provided to the platform, which can help parental controls but could mean a loss of anonymity. The only way to get lawmakers to do the right thing is either through generous campaign donations or by explaining things without extremism in a way that they will understand. Arbaugh put forth the idea that, contrary to current claims, the TCG could be beneficial to GNU software: evaluation and certification on an approved platform might eliminate government resistance to its use.

Arbaugh made some predictions for trusted computing. Improvements will come from virtualization, but LaGrande will not survive, as the market will not understand the need for trusted paths, nor will it be willing to spend the money. The TCG will be hacked; looking at the Xbox as an example shows that hardware hacking is just a different skill set from software, though some tools are more expensive. In conclusion, all technology is essentially dual use, and while laws and policies attempt to limit evil uses, they cannot be completely eliminated. One has to decide for oneself if the good provided by trusted computing outweighs the bad.

Refereed Papers

ATTACKS

Summarized by Mohan Rajagopalan

Where’s the FEEB?: The Effectivenessof Instruction Set Randomization

Ana Nora Sovarel, David Evans, and Nathanael Paul, University of Virginia

The authors’ objective in this paper, presented by Ana Nora Sovarel, was to evaluate whether an attacker could determine the randomization key remotely and then spread a worm on a network of instruction-set-randomized machines. Their attack guessed the key incrementally, injecting short instruction sequences whose behavior reveals whether a guess was correct; they concentrated on a two-byte sequence used for a jump attack. A prime assumption in this work was that the same key would be used each time the application was randomized. Experiments were performed on Fedora without Address Space Layout Randomization. In particular, the experiments evaluated whether it would be practical to spread a worm in such a deployment.
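
The incremental-guessing idea can be illustrated with a toy simulation (everything here is invented for illustration: a byte-at-a-time XOR key, and an oracle standing in for the remote crash-or-hang signal the attack actually observes):

    import os

    KEY = os.urandom(4)  # the secret per-process randomization key (attacker's target)

    def victim_executes(injected: bytes, offset: int) -> bool:
        # Stand-in for the remote observation: the injected byte "decodes"
        # to a short jump (observable as a hang rather than a crash) only
        # if it was encoded with the right key byte.
        return injected[0] ^ KEY[offset] == 0xEB  # 0xEB = x86 short JMP opcode

    def recover_key() -> bytes:
        recovered = bytearray()
        for offset in range(len(KEY)):
            for guess in range(256):  # at most 256 probes per key byte
                if victim_executes(bytes([0xEB ^ guess]), offset):
                    recovered.append(guess)
                    break
        return bytes(recovered)

    assert recover_key() == KEY  # incremental: each byte is confirmed independently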

Comments generally targeted the assumption that the same randomization key would be used each time. It was pointed out that ISR schemes re-randomize on each fork operation and that re-randomization is performed at load time, so the underlying assumption was incorrect.

Automating Mimicry Attacks Using Static Binary Analysis

Christopher Kruegel and Engin Kirda, Technical University Vienna; Darren Mutz, William Robertson, and Giovanni Vigna, University of California, Santa Barbara

This paper, presented by Chris Kruegel, discussed automating control flow attacks by analyzing applications to identify locations that an attacker could exploit.

In particular, the authors hoped to defeat host-based intrusion detection systems through mimicry attacks, such as hijacking PLT entries. The goal was to set up an environment in which the attacker could regain control after executing the first system call. Symbolic execution was used to perform static analysis. They identified several instances where the attack would succeed on real programs.

Someone asked whether this technique would work for non-buffer-overflow attacks. Chris replied that all that matters is the ability to inject code.

Non-Control-Data Attacks Are Realistic Threats

Shuo Chen, Prachi Gauriar, and Ravishankar K. Iyer, University of Illinois at Urbana-Champaign; Jun Xu and Emre C. Sezer, North Carolina State University

Shuo Chen from UIUC presented the last paper of this session, which explored how data flow can be exploited in order to compromise systems.

The premise of this work was that several types of data, such as configuration inputs and user inputs, are security-critical and can be used to drive exploits. While it has been known that such attacks exist, the extent to which they are applicable had not yet been assessed. The authors showed that many non-control vulnerabilities exist and that the extent of damage is comparable to traditional attacks. Their experiments indicated that several real-world programs, such as FTP, SSH, and Web servers, were vulnerable to such attacks. They were evaluated along two dimensions: the type of security-critical data, and the specific memory vulnerability that can be used to access the data.
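
A toy illustration of the idea (an invented example, simulating adjacent memory with a bytearray rather than a real C process): corrupting a security-critical data value changes the program’s decision without ever diverting its control flow.

    # Simulated process memory: a 16-byte input buffer followed by a
    # privilege flag, as the two might be laid out in a C stack frame.
    memory = bytearray(17)
    FLAG = 16

    def unsafe_copy(user_input: bytes):
        # Mimics an unchecked strcpy: writes past the 16-byte buffer.
        memory[0:len(user_input)] = user_input

    def is_admin() -> bool:
        return memory[FLAG] != 0  # control flow itself is never hijacked

    unsafe_copy(b"A" * 16 + b"\x01")  # one byte too many overwrites the flag
    print(is_admin())  # True: the privilege decision is driven by corrupted data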

Several defenses to protect against control data tampering were presented, ranging from the enforcement of non-executable pages to using low-level hardware infrastructure to protect control data. In general, memory corruption attacks remain a difficult problem.

Invited Talk

How to Find Serious Bugs in Real Code

Dawson Engler, Stanford University

Summarized by Francis Hsu

Dawson Engler shared his experiences using two dynamic techniques, implementation-level model checking and execution-generated testing, to find as many serious bugs as possible in real code. His earlier experiences with static techniques proved effective at checking surface-visible properties like proper locking semantics. Since no code needed to be run or even compiled in static checking, and since it scaled well, it worked well in finding thousands of errors in code. Dawson successfully commercialized this work two years ago by founding Coverity, a self-funded company with over 70 customers. However, this talk was not about his static analysis successes. While his dynamic techniques required all the code to run and took hours to diagnose a single bug when found, they did address a failing of static techniques: checking properties implied by code.

Implementation-level model checking is a mutation of formal method techniques, adapted for real code. Model checking is like testing on steroids, where every possible action is done to every possible system state. Since model checking makes low-probability events as common as high-probability events by exhausting the state space, corner-case errors can be found quickly. Dawson had several years of mixed results, but finally had a breakthrough success in checking three heavily used Linux file systems. He ran the entire Linux kernel with a virtual formatted disk in the model checker, applied each possible operation to the file system with failures at any point, and checked for proper crash recovery. Although the file systems would normally recover correctly after a crash, Dawson discovered that they usually broke when crashes occurred during the crash recovery process. In the end, he found 32 errors, including 10 places where a poorly timed crash would result in complete data loss.

An attendee wanted to get a handle on how much human and computational time was needed to apply the model checking for bug finding. Dawson said he wouldn’t be surprised if it took a couple of weeks up front, since it’s hard to figure out the correct behavior of the code and understand any discovered bugs. The computational time could be infinite for a run and would also require lots of memory for searching the large state space, but in his experience Dawson usually found useful results in seconds or minutes of a run. Not finding any results in that time would likely be caused by a problem in the testing and not because the code was bug-free.

Another person asked if Dawson had seen cases in his testing where the disk was not trusted to write the data it was given, and if he had seen any differences between brands. Dawson responded that he had tested the file system on RAM disks for performance reasons, but it could have been done on physical disks. A third attendee asked if Dawson had model-checked fsck. Dawson confirmed that he did perform an end-to-end check of all the components of the file system, including fsck.

In the second half of the talk, Dawson described his more recent work with execution-generated testing, or “how to make code blow itself up.” Creating good test cases for system code is hard work. Manual construction of test cases is laborious, and automated random “fuzz” testing may not hit corner cases or errors that require structured inputs. Execution-generated testing solves these problems by running the code to generate its own input test cases. Starting with an initial value of anything for the input, the program execution generates constraints for the values at fork points in the code. The collection of these constraints can then be used to generate inputs which, in turn, are used to test the code. With this technique Dawson generated format strings to test printf and network input to test an MP3 server, and discovered bugs in both.
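
A drastically simplified sketch of the idea (illustrative only; real execution-generated testing tracks symbolic constraints and uses a constraint solver rather than the brute-force search shown here):

    def parse(packet: bytes):
        # Code under test, with a corner-case bug behind structured input.
        if len(packet) >= 4 and packet[:2] == b"ID" and packet[3] == 0xFF:
            return 1 // packet[2]  # crashes when byte 2 is zero
        return None

    # Each branch above constrains the input; enumerating inputs that satisfy
    # the recorded path conditions (here, by brute force over a tiny space)
    # drives execution down every path, including the crashing one.
    for b2 in range(256):
        candidate = b"ID" + bytes([b2, 0xFF])
        try:
            parse(candidate)
        except ZeroDivisionError:
            print(f"crash found with generated input: {candidate!r}")
            break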

Dawson has made the slides of his talk available at http://www.stanford.edu/~engler/usenix-security05.pdf.

Refereed Papers

PROTECTING THE NETWORK

Summarized by Kevin Butler

Mapping Internet Sensors with Probe Response Attacks

John Bethencourt, Jason Franklin, and Mary Vernon, University of Wisconsin, Madison

Awarded Best Paper!

Internet sensor networks are collections of systems monitoring the Internet, producing statistics related to traffic patterns and anomalies. Examples include collaborative intrusion detection systems and worm monitoring centers. Network integrity is based on the assumption that the IP addresses of the systems serving as sensors are secret; otherwise the integrity of the produced data is reduced. Attempts to maintain anonymity include hashing or eliminating sensitive report fields (e.g., the IP address where an attack arrived), prefix-preserving permutations, and Bloom filters. However, John Bethencourt presented a new class of attacks discovered by his group, called probe response attacks, which are capable of compromising the anonymity and privacy of Internet sensors.

Using the SANS Internet Storm Center (ISC) as an example, Bethencourt showed that given an IP address, one can send a probe to that address and then wait for the sensor network to report activity; if it does, the address is monitored. With the ISC, only one TCP packet is necessary to initiate a probe connection, as incomplete SYNs are monitored. It is possible to send packets to every potential address, though not in a serial manner, given that most participants make only hourly reports and there are 2.1 billion routable addresses. Checking in parallel, however, is feasible. Starting with the full list of addresses, the search space is divided into intervals. After sending a series of probes and waiting two hours, the reports can be checked for activity, and intervals reporting none are discarded. For the others, a divide-and-conquer strategy can be used to further subdivide the intervals and make probes until, ultimately, all monitored IP addresses are found. Simulation results show that an attacker using a T3 can complete the attack in five days. With this information, an attacker can avoid monitored addresses in malicious activities such as port scanning or propagating worms, avoiding detection. Sensors can also be flooded with errant data. While the ISC was primarily considered, similar attacks are possible against other sensor networks, such as Symantec’s DeepSight site.
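
The divide-and-conquer search is easy to sketch. In this toy simulation (an invented address space, with a function standing in for probing an interval and reading the public hourly report), each round keeps only the intervals that show activity:

    import random

    SPACE = range(0, 1 << 16)  # toy "Internet" of 65,536 addresses
    monitored = set(random.sample(SPACE, 50))  # the sensors' secret addresses

    def report_shows_activity(interval) -> bool:
        # Stand-in for probing every address in the interval, then reading
        # the aggregate public report two hours later.
        return any(addr in monitored for addr in interval)

    def find_sensors(interval):
        if not report_shows_activity(interval):
            return set()  # whole interval unmonitored: discard it
        if len(interval) == 1:
            return {interval[0]}
        mid = len(interval) // 2
        return find_sensors(interval[:mid]) | find_sensors(interval[mid:])

    assert find_sensors(SPACE) == monitored

In the real attack, all intervals at a given level are probed in parallel within one reporting period, which is what makes the five-day figure attainable.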

While hashing, encryption, and omitting certain report fields can make attacks more difficult, they are still possible. Private reports would be effective but would severely limit utility. Top lists could publish only the most significant events, providing some useful information but not a complete picture, allowing attackers to avoid detection by keeping activity below threshold levels. Puzzles, CAPTCHAs, and random log sampling are other techniques to prevent information attacks. One question posed was whether sensing in the core would be more useful than at the edge. This is more difficult to implement, as was mentioned in other papers. Another questioner asked about biasing data, as clever attackers can attack sensors from a variety of locations. More investigation into these forms of attack is needed.

Vulnerabilities of Passive Internet Threat Monitors

Yoichi Shinoda, Japan Advanced Institute of Science and Technology; Ko Ikai, National Police Agency of Japan; Motomu Itoh, Japan Computer Emergency Response Team Coordination Center (JPCERT/CC)

Yoichi Shinoda described still other methods of finding vulnerabilities in threat-monitoring networks. Passive threat monitors were inspired by the successes of Internet telescopes; results have been published in graph and table form. Determining where sensors are can compromise the monitoring network’s integrity and can be done by looking for feedback to induced input. By propagating a number of UDP packets at four /24 address blocks, they graphed the monitoring system, showing a spike four hours afterward. By targeting a particular system and looking at information such as company white papers and handouts, the basic system properties can be determined. Combined with packet-marking algorithms, which can be customized to the type of feedback from the network, sensors can be found efficiently. This was backed up by case studies.

Protecting the monitors is not easy. Methods include throttling information flow, providing less information, and, in particular, detecting marking activity by looking for statistical anomalies where flurries of similar messages are sent. While system protection methods have been proposed, their effectiveness and completeness have not yet been verified, and unknown attacks may yet exist. Information leaks can still occur even with protection, and continuous assessment is necessary to study attacks and protection methods.

A question about correlating sensor information was posed during the Q&A session. If sensor output is normalized as a countermeasure, based on sensors looking at different networks, could similar patterns still be observed? Shinoda responded that while this was explored in the paper, the problem is that different monitors have different sets of sensors providing different results, and knowing why different results are provided is still a work in progress.

On the Effectiveness of Distributed Worm Monitoring

Moheeb Abu Rajab, Fabian Monrose, and Andreas Terzis, Johns Hopkins University

To protect against threats, monitoring active networks, and the routable unused IP address space in particular, is attractive, since no legitimate traffic should occur in these areas. With a single monitor, backscatter patterns can be found if a DoS attack is initiated; a single monitor is also useful for worm detection. However, a single monitor’s view is too limited, as worm scans that hit other parts of the network will be missed. Moheeb Abu Rajab presented methods of monitoring for worms using multiple, distributed monitors, concentrating on the fact that non-uniform distributions more accurately model the real world. For an extended worm propagation model, the model must incorporate the population density distribution, and especially non-uniform worm propagation.

Equations were derived for the number of infected hosts in a /16 subnet, with the total infection being the sum of infected hosts across subnets. Abu Rajab presented simulations showing that while non-uniform scanning worms propagated slightly more slowly than uniform scanning worms over uniformly distributed hosts, they spread much faster when a real data set was used. Based on this, better worm detection can be implemented by concentrating on different evaluation metrics. System detection time (the time for the monitoring system to detect a new scanner with a particular level of confidence) is important. Deploying distributed monitors with smaller address blocks, giving a finer level of granularity, produced optimal response times. Even partial knowledge of the population distribution was found to improve detection times by a factor of 30.
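
The effect of population density on spread can be seen in a small simulation (all parameters invented: a discrete-time scanning worm over a toy address space, comparing uniform scanning against a local-preference scanner on a clustered population):

    import random

    SPACE = 1 << 16  # toy address space
    # Clustered population: vulnerable hosts packed into three distant blocks.
    HOSTS = {b * (1 << 13) + random.randrange(1 << 10)
             for b in (0, 3, 5) for _ in range(400)}

    def steps_to_half(local_bias: float) -> int:
        """Time steps until half the hosts are infected, for a given scan bias."""
        infected = {next(iter(HOSTS))}
        steps = 0
        while len(infected) < len(HOSTS) / 2:
            steps += 1
            for src in list(infected):
                for _ in range(10):  # scans per infected host per time step
                    if random.random() < local_bias:  # prefer nearby addresses
                        target = (src & ~0x3FF) | random.randrange(1 << 10)
                    else:  # uniform random scan of the whole space
                        target = random.randrange(SPACE)
                    if target in HOSTS:
                        infected.add(target)
        return steps

    print("uniform scanning:     ", steps_to_half(0.0))
    print("local-preference scan:", steps_to_half(0.7))  # far fewer steps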

In the Q&A, an audience member asked whether the worm will take longer to propagate if it starts in very sparse populations under a skewed population distribution. Abu Rajab responded that because worms have a random component to their dissemination, even if some start in sparse areas, at some point they will target heavily populated subnets and propagate much faster from there onward. Another question concerned the speed of detection as a metric: has the communication overhead between probes been considered as a factor reducing the speed at which the worm can be detected? This is a good question, agreed Abu Rajab. The research to this point concentrated on evaluating space requirements and assumed that an infrastructure was in place; for distributed systems, an adaptive routing system that minimized overhead would have to be implemented.

Invited Talk

Open Problems with Certifying Compilation

Greg Morrisett, Harvard University

Summarized by Mohan Rajagopalan

Greg began the talk by stating that mobile code is not the basic security problem. The real difficulty lies in understanding the semantic properties of code rather than its syntactic properties; for example, even simple policies are undecidable. Proof-carrying code (PCC) is an approach where each program is accompanied by a proof. The advantage here is that functionality is moved from the trusted computing base to the proof checker. Certifying compilers are programs that systematically transform proofs along with source. The question now is how to derive initial proofs.

One approach is to use type systems in such a way that they map to policies. Citing Microsoft Research’s Singularity project as an example, he mentioned that some high-level language-based approaches have suggested eliminating C altogether. Software fault isolation is another approach; it checks that all memory accesses are to valid locations within a program’s address space. The idea here is to track the mapping from source to target address. Control flow isolation was mentioned as an implementation for the x86 platform. This approach meant that policies were relatively simple and easy to enforce, for example, by rewriting the binary.
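
The classic software-fault-isolation trick is a sandbox mask applied to every address before it is used. A minimal sketch (invented region layout; real SFI rewrites machine code, while this just shows the arithmetic):

    SANDBOX_BASE = 0x2000_0000   # hypothetical region granted to untrusted code
    SANDBOX_MASK = 0x000F_FFFF   # 1 MB region: only these offset bits survive

    def sandboxed_address(addr: int) -> int:
        # Force the high bits to the sandbox base: two cheap ALU operations
        # per store, instead of a compare-and-branch that could be jumped over.
        return SANDBOX_BASE | (addr & SANDBOX_MASK)

    # An out-of-sandbox pointer is silently redirected inside the region,
    # so a fault-isolated module can corrupt only its own memory.
    assert sandboxed_address(0xDEAD_BEEF) == 0x200D_BEEF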

The remainder of the talk dealt with C and type safety, focusing on two approaches: CCured (Necula et al.) and Cyclone (Morrisett et al.). The first idea proposed was to insert code to box all values and tag them at runtime in order to check the right types. This approach was rejected due to the excessive overhead it imposed. A better idea is to enforce soft typing: do type inference at compile time, so that any statically inferred code need not be checked. CCured is based on this principle and introduces three pointer kinds: T_safe, which corresponds to a single value that need not be checked at runtime; T_seq, which points to a sequence of values and is tracked using fat pointers (which carry the bounds needed for runtime checks); and T_wild, which indicates a pointer to a tagged value. Security constraints are generated based on how pointers should work. A disadvantage of this approach is that the compiler may insert undesirable checks, within inner loops, for example.
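
A fat pointer in the CCured sense pairs the raw pointer with its bounds so that dereferences can be checked at runtime. A rough sketch of the idea (illustrative Python; CCured itself generates C):

    class FatPtr:
        """A pointer carrying base and bounds, in the spirit of CCured's SEQ kind."""
        def __init__(self, backing: list, base: int = 0):
            self.backing, self.base = backing, base

        def __add__(self, offset: int):
            # Pointer arithmetic moves the cursor but keeps the bounds.
            return FatPtr(self.backing, self.base + offset)

        def deref(self):
            if not 0 <= self.base < len(self.backing):  # the inserted bounds check
                raise MemoryError("fat pointer bounds check failed")
            return self.backing[self.base]

    p = FatPtr([10, 20, 30])
    print((p + 2).deref())  # 30
    (p + 3).deref()         # raises instead of reading out of bounds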

Cyclone, on the other hand, aims to be the type-safe language that CCured maps to. Programmers control where and when to tag values, allocate memory, etc. The downside is that much more information is required from the programmer. For example, there are two ways to do bounds checks, either through the fat keyword or by placing an assertion. Floyd-Hoare logic is used for verification, and the key challenges that need to be addressed are scalability and soundness. For example, when translating diamond-shaped control flow there is an exponential blowup. Loop invariants pose another problem, and the solution here is to rely on iterative fixed-point computations.

A further challenge is coping with unsound assumptions. Current work is targeted at increasing trustworthiness (mismatches in assertions), extensibility, and completeness. Extensibility deals with the problem of using a variety of techniques to check the verification conditions (VCs) that are generated. Greg mentioned three key domain-specific problems in terms of completeness: first-order logic does not work; concurrency; and, finally, substructural languages.

PCC is a powerful principle: it minimizes the TCB and places the burden on the code producer. Certifying compilers are a good step in that direction, but they are weak and their theorems are loose. In response to questions, Greg mentioned that Cyclone is currently available and that software maintenance is an interesting direction to explore with VCs.

Refereed Papers

DEFENSES

Summarized by Francis Hsu

Protecting Against Unexpected System Calls

C.M. Linn, M. Rajagopalan, S. Baker, C. Collberg, S.K. Debray, and J.H. Hartman, University of Arizona

Mohan Rajagopalan presented work on a collection of host-based techniques to limit the scope of remote code injection attacks by denying a remote attacker use of the system calls.

By recording in an Interrupt Address Table the addresses of all the legal system calls of an executable before it is run, the technique prevents the use of any newly inserted system calls from injected code. To deter mimicry attacks of injected code using the legitimate system calls in the program, the actual syscall instruction is disguised as other instructions that trap into the kernel. Additional binary obfuscation techniques, such as dead code insertion and layout randomization, make it more difficult to scan for the system calls. To thwart scanning attacks against the code, a pocketing technique splits the code section into noncontinuous segments and unmaps the unused regions of process address space.
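
In outline, the kernel-side check is a set-membership test on the call site. A sketch with invented addresses (the real system also verifies the disguised instructions, which this omits):

    # Legal syscall sites, extracted from the binary before execution begins.
    INTERRUPT_ADDRESS_TABLE = {0x0804_8F00, 0x0804_9A24}

    def dispatch(number: int) -> None:
        print(f"executing syscall {number}")  # normal handling (placeholder)

    def handle_trap(call_site: int, syscall_number: int) -> None:
        if call_site not in INTERRUPT_ADDRESS_TABLE:
            raise PermissionError("system call from unregistered address")
        dispatch(syscall_number)

    handle_trap(0x0804_8F00, 4)     # legitimate site: allowed
    # handle_trap(0xBFFF_0000, 11)  # injected code's site: rejected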

The authors implemented these techniques with a binary rewriting tool that analyzed executables and embedded a new ELF section, and a modified OS kernel that made the checks. The techniques worked to protect an executable subjected to synthetic attacks written by the authors, while imposing less than 15% overhead in performance and an increased memory cost of 25%.

Efficient Techniques for Comprehensive Protection from Memory Error Exploits

Sandeep Bhatkar, R. Sekar, and Daniel C. DuVarney, Stony Brook University

Exploitation of memory errors has been responsible for 80% of CERT advisories over the last two years. Although prior work in address space randomization removes the predictability of memory locations, it still allows attacks using existing pointers to calculate relative addresses, and it does not prevent data overwriting or leakage. Sandeep Bhatkar presented a way to address this problem with a set of transformations on the stack, static data, code, and heap to randomize the absolute locations and relative distances of all objects.

The authors produced a modified compiler and loader to rewrite the C source of existing programs to support the randomization. The actual randomization of the program’s objects then occurs only at runtime, enabling the same binary produced by the compiler to be distributed to all users. Experiments have shown that the transformations add an average overhead of 11%, which is comparable to previous address space randomization techniques that did not address all the other attacks mentioned above.

Finding Security Vulnerabilities in Java Applications with Static Analysis

V. Benjamin Livshits and Monica S. Lam, Stanford University

While Java has addressed the problem of buffer overruns from unchecked input, Java Web applications are still vulnerable when data in the input buffer is not properly validated. Ben Livshits listed the many sources of injected data to such a Web application, such as parameter manipulation, hidden field manipulation, header manipulation, and cookie poisoning. Once the injected data is in the program, it can be used to exploit the application through SQL command injections, cross-site scripting, and arbitrary command injections. To address the multitude of injection and exploit techniques, Livshits presented a framework for formalizing the vulnerabilities and a static analysis tool to discover vulnerabilities in these applications.

Vulnerabilities such as SQL injection caused by parameter manipulation can be described at a high level in a Program Query Language (PQL), and these specifications are automatically transformed into a static analysis. The static analysis is both sound and precise, guaranteed to find all the vulnerabilities described in such a specification while limiting the number of false positives. More precision is gained through use of both a context-sensitive analysis and an improved object-naming scheme to help with pointer analysis.

The authors have collected a set of open source Web applications to form Stanford SecuriBench, a benchmark on which their and others’ security tools can be evaluated. Livshits reported that static analysis of this code found a total of 29 security vulnerabilities, with only 12 false positives under their most precise analysis.

OPUS: Online Patches and Updates for Security

Gautam Altekar, Ilya Bagrak, Paul Burstein, and Andrew Schultz, University of California, Berkeley

While software vendors may race to provide patches after a discovered security vulnerability, users frequently do not respond with the same urgency. Gautam Altekar suggested that the current patching mechanism is responsible, since patches are unreliable, irreversible, and disruptive. Altekar introduced OPUS as a practical dynamic patching system to address the problem of patches, making the patch safer and removing the need for a user to restart the patched application.

OPUS consists of three components: a static analysis tool to address the safety of dynamic patches, a dynamic patch generation tool integrated with the GNU build environments, and a runtime patch installation tool. The static analysis identifies a patch’s unsafe side effects (e.g., writes to non-local data such as the heap or return values). To install the patch, the new, modified function is copied to memory and a forwarding jump is added to the start of the old function. To ensure that the old and new code are not mixed, the redirection is done only after the old function is no longer on the call stack.
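
The installation step can be sketched in a few lines (illustrative Python; OPUS itself rewrites native code, and the invented old_function_on_stack check stands in for its inspection of the native call stack):

    import inspect

    def old_function_on_stack() -> bool:
        # Stand-in for walking the native call stack: refuse to redirect
        # while any activation of the unpatched function is still live.
        return any(f.function == "parse_request" for f in inspect.stack())

    def parse_request(data):          # vulnerable original
        return data.split(":", 1)

    def parse_request_fixed(data):    # vendor's patched version
        if ":" not in data:
            raise ValueError("malformed request")
        return data.split(":", 1)

    def install_patch(namespace) -> bool:
        if old_function_on_stack():
            return False  # try again later; old and new code must not mix
        # The "forwarding jump": future calls reach the patched code.
        namespace["parse_request"] = parse_request_fixed
        return True

    install_patch(globals())
    print(parse_request("GET:/index"))  # now runs the patched version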

To date, the authors have generated dynamic patches for 30 vulnerabilities from vendor-supplied patches without modification. Altekar reported that they could not generate dynamic patches in some instances, such as modifications to global values, input configuration files, functions at the top of the call stack, and inline functions.

An attendee asked if restarting applications was such a large problem that online patching would be necessary. Altekar responded that they address a usability issue: patching has gotten to be so annoying that users are ignoring patches. Another attendee suggested that online patching is useful in situations where the administrator patching the system isn’t the one sitting at the computer. Such an administrator would not want to disrupt the users and might need to wait for the users to restart the applications on their own.

More information about OPUS is available at http://patch.cs.berkeley.edu.

Invited Talk

What Are We Trying to Prove? Confessions About Certified Code

Peter Lee, Carnegie Mellon University

Summarized by Boniface Hicks, OSB

Peter Lee gave an excellent overview of the work that has been done in proof-carrying code (PCC) and outlined the challenges that remain. PCC developed as a way to say something concrete about a software artifact (e.g., mobile code) without the use of a third party or the heavy overheads of execution monitoring, while still maintaining a small Trusted Computing Base (TCB). Peter Lee and George Necula accomplished this by providing proofs of safety properties, which can be small even for large programs. A proof for the theorem “There are no buffer overflows” would be an example. These proofs are tied into the program text in such a way that they are tamper-proof (one can’t change the proof without changing the program). Furthermore, because the burden of proof is placed on the software producer, they are lightweight to check. Lee gave the example of a maze. For an infinite-width maze, it might be impossible automatically to find a path from start to finish, but given a path, it is trivial to verify it. For real programs, the “path” can be expressed as an ML program which can be verified merely by ensuring that it typechecks. At this point Lee rhapsodized on the sheer beauty of this simple yet powerful solution.

Unfortunately, the proofs get oppressively large. As an optimization, the proofs can be turned into “oracle strings.” To return to the maze analogy, an oracle string would provide only the answers to queries about which way to go at each intersection. Thus, the oracle string, which would express only “Left,” “Right,” “Right,” for example, could be encoded as a binary string. This gives the proof a very compact form, requiring only slightly more work on the part of the automatic verifier. In a real program, the oracle strings are tied to the program itself. The verifier iterates through the program text, and when it finds a dangerous command (STORE, for instance), it queries the oracle string about whether this command is safe. The oracle string provides the needed evidence. This turns out to be very effective. The checker is less than 52KB, and the proofs are generally 0-10% of the program size. In some tests the oracle strings were much smaller than the checksum for the programs. The SpecialJ compiler, which compiles Java class files with oracle strings into x86 binaries, using heavy optimizations justified by proofs, outperformed Java, JavaML, and the JIT compiler. The TCB for PCC is only approximately 100KB.
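
The maze analogy translates almost directly into code. In this sketch (an invented grid maze; the oracle is a bit string consumed one decision at a time), verification is a single linear pass, with no searching:

    MAZE = ["S.#",
            "#.#",
            "#.E"]  # toy maze: 'S' start, 'E' exit, '#' wall

    MOVES = {"0": (0, 1), "1": (1, 0)}  # oracle bits: 0 = right, 1 = down

    def verify(oracle: str) -> bool:
        """Check a proposed path without ever searching the maze."""
        r, c = 0, 0  # start position
        for bit in oracle:
            dr, dc = MOVES[bit]
            r, c = r + dr, c + dc
            if not (0 <= r < 3 and 0 <= c < 3) or MAZE[r][c] == "#":
                return False  # the "proof" tried to walk through a wall
        return MAZE[r][c] == "E"

    print(verify("0110"))  # right, down, down, right -> True
    print(verify("1000"))  # immediately hits a wall  -> False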

Unfortunately, the picture is not all so rosy. Lee made his confessions during the second part of the talk. The first major obstacle is that the module that checks the code (VCgen+) is rather beastly. The core of VCgen is 20,000 lines of C code, designed specifically for x86 code output from a Java compiler with a specific policy. To change the policy, one must change the VCgen code. Andrew Appel et al. came up with another solution to alleviate this problem. By finding the right global invariant (a long, complicated thing) and proving that the start state and each future state obeys it, one can use PCC to prove safety properties about programs. They call this Foundational PCC. Other variants of this approach, including TALT and TL-PCC, have been developed as well. Unfortunately, none of the foundational systems are practical yet, because of large proof size or slow proof-checking times.

Another confession Lee made concerned the safety policy. What is the “right” safety policy, and how can it be specified? Currently, the two key properties that have been used are type safety and memory safety. This is certainly valuable; it eliminates one of the most often exploited security vulnerabilities, buffer overflows. On the other hand, as one member of the audience pointed out, this kind of bug accounts for only 50% of security failures. PCC is fundamentally limited to safety properties. Although safety properties can be used to approximate liveness and information flow properties, this approximation leaves something to be desired. When specifying policy, one really wants to say something direct: that no program should write to the kernel, for example. In PCC such a property can only be expressed in an indirect way, by specifying structural rules on programs that imply this condition. Some promising directions for developing solutions to this problem are the use of first-order temporal logic and model checking.

In conclusion, Lee asserted that certified code is a great way to ensure safe code. Proof-carrying code is able to eliminate the most basic program flaws exploited in security attacks. Engineering PCC into a practical system, however, is challenging. Furthermore, some attacks are not (yet!) able to be addressed by PCC. For example, one would like to guard against trojan horses. It is usually the case, however, that trojan horses are safe and live. In this case, PCC may not be very useful, because it may only verify that the trojans won’t crash. Vergil Gligor asked a question about the limitations of approximating information flow policies with safety policies. He noted, for example, that Bell-LaPadula and Biba are both approximations of information flow policies. Each eliminates a different covert channel. Their composition, however, introduces a new covert channel. This goes to show that one of the hard problems in certifying code is getting the security policy right; hopefully, PCC can make some headway on this.

Refereed Papers

BUILDING SECURE SYSTEMS

Summarized by Francis Hsu

Fixing Races for Fun and Profit: How to Abuse atime

Nikita Borisov, Rob Johnson, Naveen Sastry, and David Wagner, University of California, Berkeley

In “Fixing Races for Fun and Profit: How to Use access(2)” at last year’s USENIX Security Symposium, Dean and Hu presented a countermeasure to a race condition attack, where an adversary is required to win k races instead of just one for an attack to succeed. They accomplish that by making the access and open calls in a loop, so that an attacker would need to change the symbolic links to point to the correct files many times. This year Naveen Sastry presented an attack on such a defense: constructing a filesystem maze to win the races against the loop and synchronizing with the access and open system calls.

Filesystem mazes ensnare the victim process making the access and open checks, forcing the process to block for I/O and allowing the attacker to win the race. The attack was constructed by creating chains of deep directory trees and placing the target at the end of them. If one of the directories was not in the buffer cache, the victim process would need to block and incur disk I/O. To reliably detect when each access or open call began, the authors monitored the atime of a symbolic link in the path given to the victim process. Even against a k-race algorithm where k=100, the authors’ attack succeeded 100 out of 100 trials on one of the platforms tested.
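
A filesystem maze is simple to build. The sketch below (invented sizing; it creates paths under /tmp) shows the core trick of chaining deep directory trees through symbolic links, so that resolving one path touches many uncached directory entries:

    import os

    def build_maze(root: str, chains: int = 10, depth: int = 100) -> str:
        """Chain deep directory trees with symlinks; returns the maze entrance."""
        os.makedirs(root, exist_ok=True)
        for i in range(chains):
            # One deep chain: root/chain3/d/d/d/.../d
            chain = os.path.join(root, f"chain{i}", *["d"] * depth)
            os.makedirs(chain)
            # The bottom of each chain links to the top of the next one.
            os.symlink(os.path.join(root, f"chain{i + 1}") if i + 1 < chains
                       else "/etc/passwd",  # final hop: the real target
                       os.path.join(chain, "next"))
        return os.path.join(root, "chain0")

    entrance = build_maze("/tmp/maze")
    # A victim resolving entrance/d/.../d/next/d/.../d/next/... must walk
    # every component, blocking on disk I/O for any directory not cached.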

An attendee observed that the order of the access and open calls was built into the assumptions of the attack and asked what would happen if the order were randomized. Sastry deftly advanced to a backup slide that described the attack on a randomized k-race using system call distinguishers. He explained that the information on the system call being made can be gathered from the process ID under the /proc file system. Another attendee noted that having deep directories of hundreds or thousands of directories for the attack might be detected as unusual behavior. Sastry reported that while mazes of size 800 were used in the attacks, he speculated that much smaller mazes of 10 or 20 might work if an effective strategy for flushing the buffer cache at the same time were used.

Building an Application-Aware IPSec Policy System

Heng Yin and Haining Wang, College of William and Mary

Heng Yin began his presentation by describing the security benefits of IPSec, but noted that the transport mode of IPSec is not widely used because of the lack of PKI deployment and poor application support. IPSec policy support lacks knowledge about application context, disallowing the fine-grained policy that might be needed by applications such as peer-to-peer systems, which deal with unpredictable remote hosts and dynamic port usage. Additionally, the application API support of IPSec is inferior to that of the more popular SSL/TLS.

The authors addressed these weaknesses of IPSec by creating an application-aware IPSec policy system, and they implemented it on a Linux 2.6 system. Evaluation of the system revealed that IPSec could counter network-level attacks such as SYN flooding using fewer CPU cycles than other mechanisms such as SYN cookies. The authors also secured the FTP protocol with an IPSec policy to provide privacy for the communications, and they observed that files could be transferred faster under the secured FTP than with sftp, a protocol secured at the application level.

Shredding Your Garbage: Reducing Data Lifetime Through Secure Deallocation

Jim Chow, Ben Pfaff, Tal Garfinkel, and Mendel Rosenblum, Stanford University

Jim Chow began his presentation by posing a rhetorical question: How good are computers at keeping secrets? To gauge the lifetime of data in a computer’s memory, the authors ran a small program that filled several megabytes of memory with markers, then freed it. They then continued to use their machines normally. At the end of each day, they would observe the memory contents. Surprisingly, they were able to recover kilobytes to megabytes of the data weeks afterward, even after the machines were rebooted.

Chow noted that good application programmers may remember to properly overwrite memory that may have contained sensitive data. However, he argued that protecting sensitive data is a whole-system property, since data in memory may be copied by the system to many different buffers or flushed to a swapfile on disk. To remedy this, the authors propose secure deallocation of memory: explicitly clearing the contents of any memory whenever it is freed by the system. Chow reported that such a system incurred 0-7% performance overhead even in the worst cases. He explained that this overhead is minimal because while data is freed at kilobytes or megabytes per second, the system can zero out memory at gigabytes per second.
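
The heart of the proposal fits in a few lines. As a sketch (illustrative Python over a toy heap; the paper’s implementation zeroes memory in the kernel and the C allocator):

    HEAP = bytearray(1 << 16)  # toy heap
    free_list = [(0, len(HEAP))]

    def allocate(size: int) -> int:
        offset, length = free_list.pop()
        if length > size:
            free_list.append((offset + size, length - size))
        return offset

    def secure_free(offset: int, size: int) -> None:
        # The whole idea: a secret's lifetime ends at deallocation,
        # not whenever this region happens to be reused.
        HEAP[offset:offset + size] = bytes(size)
        free_list.append((offset, size))

    addr = allocate(16)
    HEAP[addr:addr + 16] = b"hunter2-password"
    secure_free(addr, 16)
    assert b"hunter2" not in HEAP  # nothing left for a later scan to find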

The talk was followed by a lively Q&A session. An attendee asked if the authors had looked at the latency overhead of their system. Chow replied that the paper did not specifically address latency, but noted that applications don’t normally batch free operations, so the overhead was spread out. Another attendee noticed that some benchmarks reportedly ran faster with the secure deallocation system, which Chow attributed to noise in the benchmarks, since the overheads were very small. When asked about the half-life of data in their marking experiment, Chow said that it was very short, within seconds, but the time to live for some bits was very long. An attendee wondered if the experiments demonstrated that the buffer caches on the tested operating systems were inefficient, since the unified virtual memory should have allowed the regular memory to be used for buffering I/O. Chow explained that holes in the pages used were responsible for allowing data to survive usage by the buffer cache. While pages were reused and reclaimed, they were not completely overwritten in the process.

Invited Talk

Six Lightning Talks (and a long one)

Ben Laurie, The Bunker

Summarized by Stefan Kelm

Ben opened his remarks by admitting that he thought he’d have to give a one-hour presentation until he was told that his session would cover 90 minutes. He therefore added some topics and changed the title of his talk from “Four Lightning Talks.” (Ben impressed the audience with some extremely fancy animations throughout his talk, most of which he apparently hadn’t seen before himself.)

Ben delved into the first of his six (plus one) topics: “Why open source vendors are bad for security.” He argued that vendors of open source software often cause security problems because they change default installation directories, split software packages, fail to change version numbers correctly, and sometimes even introduce security flaws during packaging. This in turn makes it hard for the user or administrator to apply security patches. Ben stated that vendors create the myth that they are needed for reliability, which, in his opinion, is not true. Ben then talked about the much-discussed issue of full disclosure and argued that the role of coordinating bodies such as CERT or NISCC in practice is reduced to protecting their stakeholders. With respect to the open source vendors, the solution he proposed was that “packagers should make themselves redundant.”

His next topic was an almost ancient rule of thumb, first defined in RFC 760: “An implementation should be conservative in its sending behavior and liberal in its receiving behavior.” He gave some examples of servers which, in his opinion, are way too liberal in what they accept as an incoming connection. He cited HTTP Request Smuggling, a real-life attack scenario that has not garnered much public discussion. He concluded that being liberal in what a server accepts is bad for security.

DNSSec, which Ben covered next, has been in the IETF standards for quite some time but is not being used by anyone, due to several (mostly organizational) problems. Ben described some of those problems: the size of DNSSec packets, islands of trust, the key-rollover problem, and issuing DNSSec-secured negative responses without allowing what is called “zone walking” of DNS servers. He pointed out solutions to those problems, even though some of the problems remain in the standards.

Next, Ben discussed privacy-enhanced identity management (PEIM) and a library he and a colleague are currently writing to implement a bit-commitment scheme related to zero-knowledge (ZK) proofs. As an example, he mentioned the infamous “Where’s Waldo?” question, in which I want to prove I know where Waldo is without revealing Waldo’s location. The library they’re writing implements several ZK proofs and provides low-level functions to do the necessary crypto operations, but no protocols.

Ben moved to his next sub-talk, the focus of which was an OpenPGP SDK he is currently writing. The SDK will be a BSD-licensed free C implementation of OpenPGP which aims to be complete, flexible, storage agnostic, protocol agnostic, and correct (in contrast to being too liberal, as proposed in RFC 760). Since an end-user application already exists with gpg, all they’ll be providing is a library, not an application.

Before starting with his “real” talk, Ben briefly discussed anonymous presence, another solution for secretly communicating with others. In this example a so-called “rendezvous server” allows Alice (who else?) to rendezvous with (guess who) Bob. The two main objectives are that Alice doesn’t want anyone to know she’s talking to Bob, and Alice and Bob don’t want their conversations to be linked, even in the presence of a global passive adversary. Even though the rendezvous server is not regarded as trusted, the protocol allows for these goals to be achieved. Apres, an anonymous-presence implementation, is a Perl library written by Ben and implemented for plain TCP and IRC.

After these short talks Ben tried to squeeze his remaining “long talk” into the final minutes but failed to do so. His final talk was on another implementation of his called CaPerl, which implements capabilities in the Perl programming language. If one wants to run possibly hostile code safely, traditional approaches such as sandboxes and jails often fail for several reasons: they often are either too restrictive or too lax; moreover, there’s no easy way to specify access to a file by a certain program while disallowing access by any other program.

A solution to this problem is capabilities (not to be confused with POSIX capabilities), nicely described by Ben as “an opaque thing that represents the ability to do something.” Using capabilities, an environment can choose exactly what the visiting code can do. He went on to talk about how to implement capabilities in different programming languages and, finally, presented CaPerl, his “surprisingly small” implementation: CaPerl is able to convert standard Perl into a capabilities language, and it compiles into standard Perl, the main modification being the introduction of trusted vs. untrusted code within CaPerl. (Ben’s explanation of trusted vs. untrusted code was way too short, so the interested user should check both his slides and his Web site for further information.) On using CaPerl, the output is Perl, which one runs the normal way, with the CaPerl libraries in the path.

For more information, have a look at Ben’s home page at http://www.apache-ssl.org/ben.html.

Work-in-Progress Reports

Summarized by Jonathon Duerig

The Program Counter Security Model: Automatic Detection and Removal of Control-Flow Side Channel Attacks

David Molnar, Matt Piotrowski, David Schultz, and David Wagner

In a regular cryptographic attack model, the adversary has access to a box with a key and an arbitrary mechanism. The adversary sees output given known inputs. In the real world, other characteristics can be used, such as time or power usage. This WiP is about preventing attacks based on side channels that leak control flow information. Suppose that the adversary can track the program counter as a given algorithm is executed. Given this model, a system is secure if the adversary learns nothing in spite of this extra information. The authors are developing a system to automatically detect and fix algorithms (in C) that are insecure in the face of a leaked program counter. The cost of modifying an algorithm to resist an attack using the program counter is a fivefold increase in time and a twofold increase in space. They are also developing a static analyzer for assembler code based on taint. This can detect insecurities introduced by an optimizing compiler.
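
The flavor of the transformation can be shown with the classic fix for a secret-dependent branch (an invented example; the authors’ tool works on C, not Python): make the executed instruction sequence identical on both paths, so the program counter reveals nothing.

    def select_leaky(secret_bit: int, a: int, b: int) -> int:
        # The program counter itself betrays the secret: a different
        # branch (and possibly timing) is taken for each key bit.
        if secret_bit:
            return a
        return b

    def select_constant_pc(secret_bit: int, a: int, b: int) -> int:
        # Same instructions execute regardless of the secret: the choice
        # is made by arithmetic masking, not by control flow.
        mask = -secret_bit  # 0 -> all-zeros mask, 1 -> all-ones mask
        return (a & mask) | (b & ~mask)

    assert select_constant_pc(1, 7, 9) == select_leaky(1, 7, 9) == 7
    assert select_constant_pc(0, 7, 9) == select_leaky(0, 7, 9) == 9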

Implementing N-Variant Systems

Benjamin Cox, University of Virginia

Benjamin Cox is developing a system to protect vulnerable Web services. An input replicator splits input from the user to several variants of a Web server. These variants are artificially diverse, running in disjoint address spaces and with potentially different instruction sets. A monitor detects when system call parameters disagree and shuts all the Web servers down if they do. A simultaneous attack is required to compromise the system as a whole, and the artificial diversity makes simultaneity more difficult. He has thwarted an attack on a vulnerable Web server (a format string attack). Open questions remain: What kinds of variations work well? What classes of attacks can be prevented? Can the system perform acceptably? There are two current problems with the system. First, some input and output can be done without resorting to system calls. The monitor may therefore be bypassed by such methods. Second, while the server is harder to compromise, it is easier to kill. The long-term goal is to get some provable security that doesn’t rely on secrets: for instance, a system where even if the variations were known, the system would still be secure.

88 ; L O G I N : V O L . 3 0 , N O . 6

Effortless Secure Enrollment for Wireless Networks Using Location-Limited Channels

Jeff Shirley, University of Virginia

How do you enroll temporary users into wireless networks? Such a system must be easy and provide mutual authentication, ensuring both that the enrollee is an authorized user and that the wireless network is trusted. The solution is location-limited channels. The author proposes audio tones as such a channel. Audio is human-evident, its range is limited, and it is available on all systems. Previously authorized users act as intermediaries. They verify through the audio property that the prospective new users are at the same place. This leverages the relationship between the current user and the prospective user. The author has a working implementation. There are several open issues: How should the client software be distributed? How can interoperability be ensured? Can the reliability and transmission speed of the channel be improved?

Revamping Security Patching with Virtual Patches

Gautam Altekar, University of California, Berkeley

Patching is ineffective because it is unreliable, disruptive, and irreversible. No extant work addresses all of these issues. Many kinds of patches have two basic parts: a check and a fix. The check is a test added to the original code to determine if the vulnerability will be triggered. The fix is the code to handle the anomalous situation. The author presents the notion of a virtual patch, where the developer denotes which part of the patch is the check and which part is the fix. The check is sandboxed to prevent a side effect from affecting the rest of the program unless the vulnerability is triggered. Each check and fix can be represented as a nested C function. Much of the overhead can be optimized away. Virtual patches are nondisruptive, because they are simple additions to the program and can be inserted dynamically. The limitation is that the programmer must explicitly annotate the code to indicate which part of the patch is the check and which part is the fix. Is there a virtual patch that is equivalent to any conventional one? If so, conversion is possible. Given a patch for some bug, is there some way to change the behavior of the program to allow a single check and fix?

Automatically Hardening Web Applications Using Precise Tainting

Salvatore Guarnieri, University of Virginia

The goal of the system is to prevent PHP and SQL injection attacks. An example of the relevance of this problem is the recent attack on phpBB, which was based on PHP injection: the programmer called “http-decode” one too many times, which allowed code to be inserted. The solution is dynamic fine-grained taint analysis. All user-supplied data is marked as dangerous. Taint is determined at a character granularity rather than the coarser string granularity. The system is implemented in PHP. It modifies taint info in the same way that the string is modified. It prevents tainted data from being used for system state. The system detects what the tainted information will be interpreted as, so dangerous tokens, such as unexpected delimiters, can be detected. Server administrators can install this system merely by switching the version. Application developers need do nothing.
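
Character-granularity tainting can be sketched with a string wrapper (an invented miniature; the real system modifies the PHP interpreter itself):

    class TaintedStr:
        """A string plus one taint bit per character."""
        def __init__(self, text: str, taint=None):
            self.text = text
            self.taint = taint if taint is not None else [True] * len(text)

        def __add__(self, other: "TaintedStr"):
            # Concatenation propagates taint character by character.
            return TaintedStr(self.text + other.text, self.taint + other.taint)

    def literal(text: str) -> TaintedStr:
        return TaintedStr(text, [False] * len(text))  # programmer-written, trusted

    def run_query(query: TaintedStr):
        # Refuse any SQL delimiter whose characters came from user input.
        for ch, tainted in zip(query.text, query.taint):
            if tainted and ch in "'\";":
                raise ValueError(f"tainted delimiter {ch!r} in query")
        print("executing:", query.text)

    user = TaintedStr("x' OR '1'='1")  # attacker-supplied, fully tainted
    try:
        run_query(literal("SELECT * FROM t WHERE name='") + user + literal("'"))
    except ValueError as e:
        print("blocked:", e)  # the injected quote characters are caught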

Automatic IP Address Assignment for Efficient, Correct Firewalls

Jonathon Duerig, Robert Ricci, John Byers, and Jay Lepreau

Having worked on optimizing the assignment of IP addresses to nodes in a network so as to minimize the size of routing tables, the authors are now looking at extending this work to minimizing firewall rule sets. Firewalls typically match IP addresses using subnets, but this approach scales poorly if the sets of hosts protected by a particular firewall rule have discontinuous subnets. In addition to efficiency concerns, this produces correctness problems. The more firewall rules there are, the more likely it is that one of them is incorrect (i.e., does not express the desired policy). Given a complex topology with a large number of hosts and policies, an organization can end up with a huge number of rules. The authors’ work on routing-table minimization uses a metric called Routing Equivalent Sets (RES), which quantifies the extent to which routes to sets of destinations can be aggregated. Using this metric, they achieve a two- or threefold decrease in the number of routes. There are two basic approaches to adapting RES to firewall rule sets, depending on how much information is supplied. If the only information is the firewall locations as annotations, then when evaluating RES, count only the firewalls. If the firewall rule sets are also provided, then the algorithm can assign addresses using sets of nodes covered by a common policy. Both of these approaches look promising but need to be evaluated.

Turtle: Safe and Private Data Sharing

Bogdan C. Popescu, Bruno Crispo, and Andrew S. Tanenbaum, Vrije Universiteit, Amsterdam, The Netherlands; Petr Matejka, Charles University, Prague, Czech Republic

The goal of Turtle is to use a peer-to-peer network for safe sharing of sensitive data that cannot be censored by an adversary. The best current example of this kind of system is Freenet, but even it fails to provide complete protection: its connectivity model is open, so good nodes can interact with censored nodes when exchanging data, and when a good node is so exposed, its owner is open to legal harassment. Turtle instead creates a peer-to-peer overlay network based on social links. Communication between links is encrypted, key distribution is completely decentralized, and messages travel hop by hop across the overlay. To start a virtual connection, a flood query is used to find the endpoint. Only parties that trust each other communicate directly, and there is no direct link between the source and the destination of the virtual circuit. This means that even if the destination is compromised, there is no way to find out which node the source is, and vice versa. Node compromises cause only local damage and the system is immune to Sybil attacks, but it remains susceptible to a subpoena attack.
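The hop-by-hop structure is simple to sketch. The toy Python below is purely illustrative (per-link encryption and the hop-by-hop reply path are elided, and Node is an invented name); it floods a query only along direct trust links, so no node ever learns more than its immediate neighbors:

    class Node:
        # One Turtle participant; `friends` are directly trusted peers.
        def __init__(self, name, data=()):
            self.name, self.friends, self.data = name, [], set(data)

        def query(self, key, ttl, seen=None):
            seen = set() if seen is None else seen
            if self.name in seen or ttl <= 0:
                return False
            seen.add(self.name)
            if key in self.data:
                return True                   # the reply would relay back
            return any(f.query(key, ttl - 1, seen)  # along the same hops
                       for f in self.friends)

    # alice trusts bob, bob trusts carol; alice never contacts carol.
    alice, bob, carol = Node("alice"), Node("bob"), Node("carol", {"doc"})
    alice.friends, bob.friends = [bob], [carol]
    print(alice.query("doc", ttl=3))          # True, found two hops away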

Towards an Online Flow-Level Anomaly/Intrusion Detection System for High-Speed Networks

Yan Chen, Northwestern University

Most intrusion detection systems are end-host-based, but rapidly and accurately identifying attacks is critical for large network operators. The author therefore proposes a system that detects network anomalies at the routers. The system stores data-streaming computations in reversible sketches, which allows millions of flows to be recorded. So far, the author has focused on TCP SYN scanning, for which existing detection schemes have high false-positive rates. The system infers key characteristics of malicious flows for mitigation, and it is the first flow-level intrusion detection system that can sustain tens of gigabytes per second. The input streams are summarized, and values are forecast for the next intervals; if an incoming value differs from the forecast, an anomaly has been detected. The system was evaluated on 239 million hosts with worst-case traffic.
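The summarize-then-forecast loop can be sketched briefly. The Python below is a toy, not the reversible sketch from the talk: it hashes keys into counter buckets, forecasts each bucket with an exponentially weighted moving average, and flags buckets whose observed count overshoots the forecast (a real reversible sketch can additionally recover the offending keys from the flagged buckets):

    import hashlib

    class ForecastSketch:
        # Toy hash-bucketed counters with EWMA forecasting.
        def __init__(self, width=1024, alpha=0.5):
            self.width, self.alpha = width, alpha
            self.counts = [0] * width
            self.forecast = [0.0] * width

        def update(self, key, value=1):   # e.g., key = source IP
            h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
            self.counts[h % self.width] += value

        def end_interval(self, threshold):
            # Flag buckets that overshoot the forecast, then fold this
            # interval into the forecast and reset the counters.
            flagged = [i for i in range(self.width)
                       if self.counts[i] - self.forecast[i] > threshold]
            for i in range(self.width):
                self.forecast[i] = (self.alpha * self.counts[i] +
                                    (1 - self.alpha) * self.forecast[i])
                self.counts[i] = 0
            return flagged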

Mitigating DoS Through Basic TPM Operations

William Enck, SIIS Lab, Penn State University

Denial of service (DoS) attacks are an ever-increasing problem. One way of avoiding them is to require clients to solve computational puzzles, which slows the rate at which a client can make requests. This approach is inherently unfair, because some computers are orders of magnitude faster than others. One way to level the playing field is to require the puzzles to be calculated by the Trusted Platform Module (TPM), the hardware processor behind trusted computing. Two fundamental characteristics of the TPM matter here: accessing it is slow, and it cannot execute arbitrary code. The slow access can be used as a rate limiter: the puzzle the client must solve can involve accessing the TPM a certain number of times, providing a constant delay regardless of CPU speed. TPMs will be ubiquitous; therefore they can be used as an efficient and effective resource limit.
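To make the constant-delay idea concrete, here is a toy Python sketch in which the TPM is simulated by a deliberately slow keyed hash. TPM_LATENCY, tpm_op, and the key are all invented for illustration; a real scheme would bind the chain to actual TPM operations and attestation rather than a software stand-in:

    import hashlib, time

    TPM_LATENCY = 0.02          # assumed per-operation round-trip, seconds

    def tpm_op(data):
        # Stand-in for one TPM operation; real TPM access is slow and
        # serialized, which is exactly what the scheme exploits.
        time.sleep(TPM_LATENCY)
        return hashlib.sha256(b"tpm-bound-secret" + data).digest()

    def solve_puzzle(nonce, n):
        # Chain n TPM operations: each step needs the previous digest.
        digest = nonce
        for _ in range(n):
            digest = tpm_op(digest)
        return digest

    print(solve_puzzle(b"server-nonce", 10).hex())   # ~0.2s on any client

Because each step depends on the previous digest, the client cannot parallelize the chain; n operations cost at least n times the TPM's access latency no matter how fast the client's CPU is.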

PorKI: Making PKI Portable in Enterprise Environments

Sara Sinclair and Sean Smith, Dartmouth College PKI/Trust Lab

The goal of PorKI is to attack the problem of usability in public key infrastructures. Users need their keys to be portable: whether they actually move from one computer to another or run a number of virtual machines on the same physical workstation, they want to use their standard key pairs everywhere. One solution is a key dongle, but dongles require special software. PorKI instead puts the key pairs on a Palm Pilot and transfers them via Bluetooth (though it does not rely on the Bluetooth security model). The Palm Pilot can generate short-lived keys, and these can interact with keys on the workstations themselves. The information can be used to customize the user experience, for instance by not authenticating sensitive data on a public computer (notifying the user appropriately), and some trust information can be stored on the machine without requiring user effort. There are many other applications. Open issues include protecting the key repository, finding a good way to establish trust between the workstation and the PDA, and extending the key-transfer protocol beyond Bluetooth.
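The short-lived-key idea can be sketched with the pyca/cryptography package. This is only an illustration of issuing a temporary key pair certified by a long-term identity key, not PorKI's actual protocol; issue_short_lived and the 30-minute lifetime are invented here:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def issue_short_lived(identity_key, identity_name, minutes=30):
        # Generate a fresh temporary key pair and certify it with the
        # long-term identity key for a short lifetime.
        temp_key = rsa.generate_private_key(public_exponent=65537,
                                            key_size=2048)
        now = datetime.datetime.utcnow()
        name = lambda cn: x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, cn)])
        cert = (x509.CertificateBuilder()
                .subject_name(name(identity_name + " (temporary)"))
                .issuer_name(name(identity_name))
                .public_key(temp_key.public_key())
                .serial_number(x509.random_serial_number())
                .not_valid_before(now)
                .not_valid_after(now + datetime.timedelta(minutes=minutes))
                .sign(identity_key, hashes.SHA256()))
        return temp_key, cert

A workstation that receives only the temporary pair can act for the user for half an hour; compromising it never exposes the long-term key, which stays on the PDA.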

DETER

Terry V. Benzel, University of Southern California

In the past, most network security research has been done in small or isolated labs. DETER aims to provide more objective, scientific, and reproducible measurements. It provides a secure infrastructure with networks, tools, methodologies, and supporting processes, plus reusable libraries for conducting realistic experiments, taking from science and math the principle that results should be reproducible. DETER, which is accessible over the wide-area network, also supports canned topologies and attacks and quick runs of different experiments. Based on Emulab, DETER has 201 nodes of four different types; it contains a control plane and various types of PCs and switches, and each node can run virtualized. Clients can run FreeBSD and Linux, and soon will be able to run Windows. DETER is hosting an upcoming workshop; more information about DETER and the workshop can be found at http://www.isi.edu/deter/.

Minimizing the TCB

David Lie, University of Toronto

The Trusted Computing Base (TCB) is the set of components of a system that a segment of code must trust in order to function correctly and securely. The operating system, libraries, and other applications are all part of the TCB, so for most systems the TCB is millions of lines of code. The author shows how to minimize the TCB for a particular security-critical section of code by running that code in its own virtual machine with a custom operating system. Since that operating system is single-threaded and need not optimize heavily, it can be much simpler than a general-purpose operating system. This can reduce the size of the TCB from millions of lines of code to around ten thousand. At that scale, it becomes feasible to run static analysis tools and gain even more confidence in the correctness of the code, and the security-critical section can even be implemented in a safer language. The only remaining issue is that the developer has to select the portion of the program that is security-critical, which may be nontrivial.

Strider HoneyMonkeys: Active Client-Side Honeypots for Finding Web Sites That Exploit Browser Vulnerabilities

Yi-Ming Wang, Microsoft Research (Strider Research Group)

A user visits a URL with a Web browser. Since Web sites can transparently redirect the browser, a malicious URL can send the browser to many different intermediate URLs, and each intermediate URL can try a different exploit on the browser. HoneyMonkeys are programs that emulate a human using a browser; they seek out Web sites with various versions of the browser software, trying to get infected. A HoneyMonkey runs inside a virtual machine for quick reset after an infection. Infections are detected because their payload compromises the host by modifying the registry or the file system; HoneyMonkeys use previously developed software (Strider GateKeeper and Strider GhostBuster) to determine whether a payload has been delivered. Because HoneyMonkeys detect the payload rather than the vulnerability, they can detect an exploit even when the vulnerability is unknown (zero-day). Several versions of the browser are used: an unpatched version to detect all malicious URLs, partially patched versions to measure how effective patching is, and fully patched versions to detect zero-day exploits. The HoneyMonkey crawls further when it detects a site with many exploits. Malicious sites tend to be well connected with each other: the sites hosting the original URLs redirect to the spyware sites that pay them, and information is frequently embedded in the redirect URLs, including vulnerability names and account names. Many malicious sites are among the top click-through links from a search engine, and they are most likely to occur on sites about celebrities, game cheats, song lyrics, and wallpaper. Because HoneyMonkeys detect zero-day exploits, they can be used to discourage such exploits.
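The detection loop is easy to sketch. In the toy Python below, vm, restore_snapshot, persistent_state, and browse are hypothetical stand-ins for the virtual-machine controls and the Strider state-diffing tools the real system uses:

    def honeymonkey(urls, vm, wait_seconds=120):
        # Visit each URL from a clean snapshot and treat any
        # persistent-state change (files, registry) as evidence
        # that an exploit's payload ran.
        malicious = []
        for url in urls:
            vm.restore_snapshot("clean")
            before = vm.persistent_state()    # files + registry baseline
            vm.browse(url, wait_seconds)      # emulate a human visit
            changes = vm.persistent_state() - before
            if changes:                       # payload detected, even for
                malicious.append((url, changes))  # an unknown vulnerability
        return malicious

Note that the loop never needs a signature for the vulnerability itself: any unexpected persistent change after a visit marks the URL, which is what makes zero-day detection possible.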

Making Intrusion Detection Systems Interactive and Collaborative

Scott Campbell and Steve Chan, Lawrence Berkeley National Laboratory, NERSC

Most open source applications are controlled by text configuration files and are often non-interactive; this applies to security monitoring and response software as well. The lack of interactivity makes adaptive changes more difficult and makes it much harder to teach or train new operators. The presented work improves upon Bro, a stateful network intrusion detection system, in two ways. First, the authors added an interactive command-line interface, which allows state such as memory or CPU usage or host characteristics to be queried and enables, among other things, additional monitoring of particular connections. Second, they turned the command-line interface into a Jabber bot, so the system can be monitored and controlled through an instant-messenger conference. This allows many interactive sessions to run simultaneously: each bot can join the same conference and be controlled and monitored in tandem, logs can be saved easily in any chat program, and new operators can observe firsthand the interactions of more experienced administrators. It also allows the network intrusion detection system to be run easily from anywhere using any Jabber client.
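The bot pattern itself is tiny. In this hedged Python sketch, chat stands in for an XMPP conference connection and ids for Bro's command interface; both are invented names, and a real deployment would use an actual XMPP library and Bro's own CLI:

    def bot_loop(chat, ids, prefix="!"):
        # Relay prefixed chat lines to the IDS command interface and
        # broadcast the replies, so every operator in the conference
        # sees both the command and its result.
        for sender, line in chat.messages():      # blocking message stream
            if not line.startswith(prefix):
                continue                          # ignore ordinary chatter
            reply = ids.run(line[len(prefix):])   # e.g., "!mem", "!watch host"
            chat.broadcast("%s> %s\n%s" % (sender, line, reply))

Broadcasting both command and reply is what makes the conference double as a shared log and a live training session for new operators.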
