22.69 • Double-barrelled surname costs disabled mother • Nigel Metheringham <[email protected]>, 14 Apr 2003 10:54:08 +0100

A disabled mother of three has been barred from receiving tax credits worth 190 pounds a week because she is among hundreds of claimants whose double-barrelled surnames are not recognised by Government computers. Sue Evan-Jones has fought for more than three months to persuade the Inland Revenue that her surname has two parts after she was told the system was confused by hyphens. The fact that obvious input validation problems, and properly specifying the valid forms of input in the original design, are still being got horribly wrong in 2003 fills me with despair. Source: http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2003/04/14/ncred14.xml
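The article does not describe the Inland Revenue's actual validation rule, but the failure mode it describes is avoidable with a name rule that admits hyphens from the start. The following is a minimal illustrative sketch, not the real system's logic; the character classes and the helper name are assumptions.

```python
import re

# Hypothetical sketch: a surname rule that accepts hyphens and apostrophes
# alongside letters, rather than rejecting anything non-alphabetic outright.
# Separators may not repeat or dangle, so "Evan-Jones" passes but "Evan--"
# does not. This is illustrative only, not the Inland Revenue's rule.
SURNAME_RE = re.compile(r"^[A-Za-z]+(?:[-' ][A-Za-z]+)*$")

def is_valid_surname(name: str) -> bool:
    """Accept single and multi-part surnames such as 'Evan-Jones'."""
    return bool(SURNAME_RE.match(name.strip()))
```

Specifying the valid forms of input up front, as the item urges, is exactly this kind of one-line decision made in the original design rather than patched in later.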

17.79 • Deep Blue - Deep Trouble • Erik Hollnagel <[email protected]>, Thu, 22 Feb 1996 08:34:06 +0100

In the latest battle between man and machine, the ACM Chess Challenge between Garry Kasparov and Deep Blue, which took place in Philadelphia, 10-17 Feb 1996, the machine apparently ran into some unexpected problems. Many of these were due to improper handling of the input and output, rather than to the software itself. In the 22nd move of game four, the following happened. The description is taken from the transcript, which can be found on www.chess.ibm.park.org/deep/blue/commgm6.html; since this is a stenographic record, there may be some technical inaccuracies, but the essentials are correct. "DR. FENG: I went back and Garry is thinking I should wait there until I finish the moves. The monitor that we are using is an energy saving monitor so it went blank. So I went back and I am typing the move, A3 and somehow the A did not get recognised, it recognised it as 3. And it was a command to process number 3 and then I type F3 and the machine say I see a repetition and then say there is a beep. And it stopped beating [beeping?] after that. And then I check with the guy in the back room. They are saying that could cause the program harm and I checked with Valvo and we restart the program. Mr. Seirawan: So they had to restart the program and get it back up to speed. How much time did that cost the computer on its clock? DR. FENG: About ten minutes. And those are lost. Mr. Seirawan: Just to be clear, stay with us please, ten minutes to reboot everything and then ten minutes of lost thought time, so DEEP BLUE got hit with a 20 minute -- DR. FENG: It would have played the same any way. Mr. Seirawan: It got hit with a 20 minute penalty, it seems to me." From the risk perspective, the situation is that we have a highly developed and very sophisticated parallel computer which is supposed to represent the state of the art, at least in chess computing. But apparently all the efforts have been lavished on the chess playing part and none on the interface or input checking.
Checking the syntactical correctness of the input should be elementary, even - or particularly - in complex software systems. Woe to the student who forgets that! But what would have happened if the computer had not been playing chess, but been applied to share trading, controlling a train signalling system, a satellite, a nuclear power plant or something else where the consequences had been more severe for a third party? Erik Hollnagel, Ph.D., Principal Advisor, OECD Halden Reactor Project P. O. Box 173, N-1751 Halden, Norway +47.6918.3100 [email protected] [In a language in which dairy is pronounced daily, I suppose this might be Deep Yogurt -- as they say in California. PGN]
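Deep Blue's actual operator-console grammar is not public, so the following is only a sketch of the elementary syntax check the item calls for: refuse anything that is not a well-formed move token, rather than silently reinterpreting a mistyped "a3" as command number 3. The regular expression and function name are assumptions for illustration.

```python
import re

# Illustrative only: accept a bare destination square like "a3" or "f3",
# reject everything else loudly. The real console grammar was richer;
# the point is that "3" alone should never be treated as a valid command.
MOVE_RE = re.compile(r"^[a-h][1-8]$")

def parse_move(text: str) -> str:
    token = text.strip().lower()
    if not MOVE_RE.match(token):
        raise ValueError(f"not a legal move token: {text!r}")
    return token
```

With a check like this, the dropped "A" produces an immediate, visible rejection instead of a twenty-minute penalty.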


21.26 • Beware assumptions about keyboard layouts... • "Perry Pederson" <[email protected]>, Fri, 2 Mar 2001 08:30:00 -0800

I recently started a checking account at Hewlett-Packard's credit union, and as part of the process of obtaining a VISA debit card attached to the account, I needed to create a four-digit PIN for the card. After the card was initialized at the credit union, it failed to work at ATM machines, giving me an "invalid PIN number" error. I re-initialized the card three different times with credit union personnel, to no avail. Finally, after several calls to the credit union's main office to determine why my PIN wasn't taking, I noticed that the keypad that I used at the credit union to set the PIN had the rows appearing in the opposite order of a "normal" PC keyboard -- the topmost row of keys had the numbers "123", the second row "456", and the third row "789". When I was generating my PIN, I was automatically pressing keys that had the same pattern as an "old" PIN that I had used at a previous bank, without checking the numeric values associated with the keys. Once I entered the "correct" numbers, the card worked fine at ATM machines. The RISKs here should be obvious -- one should observe the input hardware being used, regardless of how similar it may look to other input devices.


22.75 • Re: Modern Computers, Unsafe at any speed? • Bill Stewart <[email protected]>, Thu, 29 May 2003 02:31:35 -0700

I was startled by "Len Spyker" <[email protected]>'s assertion in RISKS-22.74 that "all that software now wasting CPU time checking for overflows is no longer needed" because hardware can protect us against overflows. Hardware can't protect you against wrong answers, and while it can detect some kinds of overflows and halt a program rather than let it dangerously stomp on other space, that isn't always the right way to respond to a problem -- you might want to do other things like giving the user or administrator an error message rather than stopping. Also, hardware protection against stack overflows is easier than protection against overflows of individual arrays that don't go outside the segment, and setting up protection for arrays, at least on most hardware, is a lot more work. Yes, this will generally stop many kinds of potential security attack. But back in the mid-70s, when I was learning to program well in college (as opposed to learning to program haphazardly in high school), one of the first and most critical lessons was to always check your program's input and *never* trust it. It might be bad input by accident, or malicious input on purpose, and the input data we had to run our class programs on was always malicious, particularly designed to catch off-by-one errors, which are a common problem with arrays. Empty-input errors are fun too, and are often caused by input data that's out of sync, or by input data that's the wrong type (e.g. letters when you need numbers). Some computer languages will help a lot with bounds checking, while others, like C, will let you shoot yourself in the foot, though they make it hard to shoot somebody else in the foot. Cornell's PL/C compiler (for their dialect of PL/I) not only detected syntax errors, it tried to correct them. Sometimes it did it right, sometimes it did it wrong, but it at least let you try to run the program so you could find as many bugs per keypunch exercise as possible.


6.14 • A second Sun clock error: no sanity checking • John Bruner <[email protected]>, Sun, 17 Jan 88 18:53:39 PST

The recent incident with the Sun leap-year clock problem illustrates a RISK which no one has mentioned yet: software which blindly trusts hardware without performing sanity checks on the data received therefrom. There were two coding errors in the Sun clock code. The first was the use of a side effect in a macro argument, which caused the hardware time of day register (TODR) to be loaded with garbage. The second error was the use of the contents of the TODR without any range checking. Classically, the time in UNIX has been maintained by software in response to interrupts from an interrupt source (line clock or programmable timer). This is true on the Sun as well, except that every 30 seconds the Sun kernel also compares the software-maintained time to the contents of the hardware TODR. If the two values differ, provisions are made to synchronize the software-maintained time to the hardware TODR. The apparent assumption here is that the TODR will be more accurate, and usually that assumption is justified. The system call "settimeofday" changes both the software-maintained time and the TODR. When the unfortunate leap-year bug manifested itself, "settimeofday" correctly changed the software-maintained time but trashed the TODR. Within 30 seconds the kernel detected that the two values were different and started trying to "correct" the software-maintained time to match the garbage in the TODR. A simple range check applied to the difference between these two values could have detected that the TODR was trashed and suppressed this "feature." John Bruner (S-1 Project, Lawrence Livermore National Laboratory) [email protected] (415) 423-4848
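The "simple range check" Bruner describes can be sketched directly. The 30-second sync interval is from the item; the drift tolerance below is a made-up illustrative value, and the function name is an assumption.

```python
# Sketch of the missing sanity check: before trusting the hardware
# time-of-day register (TODR), bound its disagreement with the
# software-maintained clock. A TODR that differs wildly has probably
# been trashed and should be ignored rather than "corrected" toward.
MAX_PLAUSIBLE_DRIFT = 120.0  # seconds; hypothetical threshold

def reconcile_clock(software_time: float, todr_time: float) -> float:
    """Return the time to adopt at the periodic sync point."""
    if abs(todr_time - software_time) > MAX_PLAUSIBLE_DRIFT:
        return software_time  # TODR failed the range check: keep software time
    return todr_time          # small drift: trust the usually-more-accurate TODR
```

The point is not the particular threshold but that the kernel's assumption "the TODR is more accurate" is itself checkable before it is acted on.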


23.51 • Lack of sanity checking in Web shopping cart software • Richard Kaszeta <[email protected]>, Fri, 27 Aug 2004 14:25:19 -0500

The Lack of Sanity Checking in Web Shopping Cart Software, or "The Story of the 1.1 Cocktail Shakers". Recently, I was browsing the web site of a large Burlington, NJ-based retailer, and decided to add a cocktail shaker to my shopping cart. Due to some slightly twitchy fingers resulting from my morning coffee, I accidentally entered the number 1.1 (instead of 1) in the "quantity desired" box, and found myself with a shopping cart containing 1.1 cocktail shakers at $9.99/each, for a grand total of $10.99 plus shipping of $5 (shipping is $5/item, for a total of $5.50 for 1.1 items). At this point curiosity got the best of me, and I decided to check out. To my surprise, the site's shopping cart software never did a sanity check on the data, and simply confirmed my order for 1.1 cocktail shakers, and I also received an email confirmation for "Qty: 1.1." My credit card was charged for $16.49. Due to the atomic nature of cocktail shakers, it's obvious that at some point something was going to have to give, and this apparently happened in the shipping department: my "Shipping Confirmation Notice" listed the quantity shipped as "1", but confirmed that the total charges were still those for 1.1 shakers ($16.49) instead of the appropriate charges for a single shaker ($14.99). Indeed, as expected, I received a single cocktail shaker in the mail, with a receipt for "Cocktail Shaker, Qty 1", also listing the inappropriate price. It was relatively easy to square the charges away, but the company's customer service representative had to get a supervisor involved, as they apparently hadn't seen this before. The RISK is obvious: a lack of sanity checking on input data resulted in a spurious order being sent through the system, with an additional lack of double-checking resulting in a discrepancy between what was shipped and what was billed.

Months later, the error remains uncorrected, and you can still order fractional items, with the additional risk that a dishonest customer may be able to get a discount by ordering slightly less than a single item and hoping for a "roundup" when it gets shipped. Really, it's too bad, because I was really thinking that my cocktail shaker is a bit small, and could use another 10% of volume. :) That, or perhaps I should buy 0.9 shakers to go with my 1.1 shakers to make a matched pair. Richard W Kaszeta <[email protected]> http://www.kaszeta.org/rich [On the other hand, a round-down would be more consistent: Suppose you had ordered .99 shakers. You probably would have been billed for .99 shakers and received none. Shake-ri-la. PGN]
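The retailer's code is unknown, so this is only a sketch of the missing check: a cart quantity must be a positive whole number, so "1.1" is rejected at entry rather than propagating through billing and shipping. The function name and the upper limit are illustrative assumptions.

```python
# Validate a quantity field as it arrives from the form, before it can
# reach the billing or shipping systems. str.isdigit rejects "1.1",
# "-1", "", and "two" in one stroke; the range check then rejects 0
# and implausibly large orders.
def parse_quantity(raw: str, max_qty: int = 99) -> int:
    text = raw.strip()
    if not text.isdigit():
        raise ValueError(f"quantity must be a whole number: {raw!r}")
    qty = int(text)
    if not 1 <= qty <= max_qty:
        raise ValueError(f"quantity {qty} outside 1..{max_qty}")
    return qty
```

One such check at the boundary would also have prevented the shipped/billed discrepancy downstream, since every later stage could then safely assume an integral quantity.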


12.31 • Risks of assumptions? (Re: Chase, RISKS-12.28) • R. Cage <[email protected]>, 11 Sep 91 21:58:31 GMT

>People don't compute the crash-safety of new automobiles (well, I'm sure that >they do at some early stage), they run them into walls to see what happens. As it turns out, this is almost exactly backwards. Running a car, especially a hand-built prototype car, into a wall is horrendously expensive. Exercising a FEA model inside a Cray is very cheap in comparison, and it takes a lot less work to reconstruct a computer model after a crash, or modify it to work better. About the only crash-testing we do these days is to confirm the results of the computer models. The sanity-checking is done; we have no chance of GIGO resulting in bad products getting out. The effectiveness of the models is a result of a great deal of work in building and testing them. It's a good thing that the properties of sheet metal are not very difficult to determine. Having people just assume that climate models, or drug models, or population models are just as reliable is, IMHO, a big RISK. Russ Cage [email protected] russ%[email protected]


13.81 • Airliners playing chicken • "David Wittenberg" <[email protected]>, Tue, 22 Sep 92 14:22:52 EDT

In November (presumably 1991), a Fokker 100, flight 1163, landed on runway 22L at O'Hare. Winds were from 240 at 25 kts. Shortly after landing, they discovered that the thrust reversers weren't working, but the multi-function display unit showed no problems. They then found out that the brakes weren't working either. The stick shaker was on. (A stick shaker literally shakes the yoke to warn that a stall may be imminent.) They took the high speed turnoff onto a taxiway, and then turned back onto runway 22L (going in the other direction, so it could also be called 4R), just as a United 737 landed on the far end of 22L. Denny Cunningham described it: "The UAL 737 had already touched down on 22L and was rolling head on toward the Fokker. [The Controller] immediately issued a go-around to the next arrival, then started a persuasive campaign to convince the pilot of the 737 on rollout that it would be in the best interest of aviation safety to make the highspeed taxiway without delay. With the radome of the Fokker starting to fill his windshield, the 737 pilot concurred in a tone of amazement not usually heard on ATC frequencies. He managed to clear the runway a few seconds before the Fokker flashed by going in the opposite direction." The Fokker pilot kept one engine running to provide hydraulic power to the steering. At the end of 22L, he turned onto runway 27L, which was being used for take-offs. The planes which were waiting to take off were unable to make any room for the Fokker on the taxiway. At this point there were 3 jets rolling on runway 27L. The tower said that it looked like Oshkosh for airliners. The plane just starting its takeoff roll rushed its takeoff to get out of the way. The Fokker finally stopped in the middle of runway 27L, and was towed off safely. No one was hurt, and there was no damage to any of the airplanes.
It turns out that the "squat" switch which determines if the plane is in the air had jammed, so the plane "thought" it was in the air, and safety switches prevented the brakes or thrust reversers from working while the plane was in the air. Shortly after this incident, a captain attended school on Fokker 100s and asked what the appropriate procedure was in the event of malfunctioning ground/flight switches. He was told that there were no such procedures, because it couldn't happen. This is excerpted from two articles in "IFR: The Magazine for the Accomplished Pilot", Vol. 8, number 9 (Sept. 92), published under the title "EEK! No Brakes! Ho Hum, just another day at O'Hare; Two airliners playing chicken on runway 22L": "Cockpit View" by Joseph J Poset, taken from the May issue of "Airline Pilot", and "From the Tower" by Denny Cunningham. This incident was not directly caused by a computer. Switches are used in all sorts of safety devices, both with and without computers. The danger from computers is that they tempt us to add many more such switches, which will eventually fail. In case anyone is tempted to say that safety features such as the one which prevented the brakes from working should be removed, remember that they are often crucial. The opposite kind of accident happened on 5 July 1970 near Malton Airport in Toronto, where a DC-8 crew accidentally deployed the aircraft's spoilers in flight, killing all aboard. The (US) FAA then required a placard reading "DEPLOYMENT IN FLIGHT PROHIBITED" over the spoiler lever. A Canadian official called this ridiculous, and instead proposed a placard reading "DO NOT CRASH THIS PLANE". In fact the placard did not prevent a similar (but non-fatal) accident on 23 June 1973 at JFK. So, placards don't work, and we install safety devices to prevent people from doing stupid things. Then the safety devices fail and cause crashes.
All one can do is to try to only add safety devices which help more often than they do damage, and not panic when a safety device does cause damage. We know that will happen, despite all attempts to reduce the frequency.


17.48 • Re: Solid code (RISKS-17.45) and solid buildings (17.16) • Steve Branam - Hub Products Engineering <[email protected]>, Wed, 22 Nov 95 12:16:04 EST

[This message is a resubmission of comments I originally made regarding an item in RISKS 17.16. While the context is different, I feel they are apropos, since people are debating the safety of removing runtime assertions that check for programming errors.]

• In RISKS DIGEST 17.16, Andy Huber <[email protected]> says "One of the conclusions is that many buildings fail due to a lack of redundancy, which I find very interesting since very few operating systems (or software of any kind) have any kind of redundancy designed or built into it."

• On the contrary, I do see quite a bit of software with redundancy built in (but not nearly enough). We often think of redundant systems as meaning side-by-side system/subsystem duplication, but a practice that should be more widely promoted is that of "sanity checking", where redundant (i.e., apparently unnecessary) code is introduced to check information generated by another part of the software. The idea is that one part of the system checks up on the other part, to determine if the other part has "gone insane".

• For example, a routine may compute a value intended to be in a certain range, then pass it to another routine that first verifies the value is in range before actually trying to use it. Especially when this is all within a single subsystem, one could argue that checking the value is unnecessary, since it was just generated by a routine known to be generating correct data; thus compute cycles and memory (for the additional instructions) are being wasted. This argument is generally the one used to skip sanity checking (assuming anyone bothers to consider it in the first place). However, the further apart the producer and consumer of the value are, the greater the opportunity for something to corrupt it.
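The producer/consumer pattern in the paragraph above can be sketched in a few lines. The names and the 0..100 range are illustrative assumptions; the point is only the structure: the consumer re-validates a value even though the producer is "known" to generate it correctly.

```python
# Producer: computes a value intended to lie in 0..100.
def produce_percentage(numerator: int, denominator: int) -> float:
    return 100.0 * numerator / denominator

# Consumer: the redundant-looking check catches a corrupted value or a
# mis-"enhanced" producer before the bad data is actually used.
def consume_percentage(pct: float) -> str:
    if not 0.0 <= pct <= 100.0:
        raise ValueError(f"percentage {pct} out of range -- producer gone insane?")
    return f"{pct:.1f}%"
```

When producer and consumer sit side by side the check looks wasteful; the argument in the text is that distance, reuse, and corruption make it pay for itself.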

• Besides, what happens when someone makes an incorrect "enhancement" to the producing routine, so that now it does produce bad data? Or what if the consuming code is reused on another project, called from different code than that for which it was originally written? Or what if the stack gets corrupted due to a bad interrupt service routine? Or there is a read error pulling the value in from the database on the disk? Or... ad infinitum until you can be convinced that it is worthwhile to do a little more rigorous checking of data before using it.

• The point of all this is to detect data corruption or faulty code and try to do something about it before the situation gets worse. For instance, some operating systems incorporate sanity checks on internal data structures, and generally respond to detected faults by crashing the system with a "bugcheck", the logic being that once a problem is detected, the system should be shut down before the problem propagates further and possibly corrupts user data. So while users might complain about the system going down, bugchecks are their friends.

• The difference between software and buildings is that a building cannot take itself down and bring itself back up to correct its problems. The same may be said of critical realtime control software, since final approach is not a good time for the flight control system to reboot itself. So in reality, detecting a problem is the easy part; dealing with the problem with the least undesirable consequences is the hard part!

• Steve Branam


17.48 • Re: Writing solid code (Wolff, RISKS-17.46) • Marcus Marr <[email protected]>, Tue, 21 Nov 1995 16:18:15 GMT

• Roger Wolff and others recommend leaving checks for programming errors in production code to ensure the consistency of internal data representations, but also to ``catch lots of errors earlier on''.

• This has the `disadvantage' of clearly identifying problems in the code, which:
  (a) the users are more likely to report, and
  (b) the users are going to want fixed.

• The cost of telephone support would increase, as would the workload of the programming departments.

• ``There are no bugs in Microsoft products that the majority of users want fixed''

• Make sure that the majority of users don't know about the bugs (the ``luddites'' will probably blame themselves for ``not using the software properly''), and the above marketing statement will stand.

• The risks? `Luddites' expecting higher-quality software.

• Marcus


17.48 • Re: Writing solid code (Beatty, RISKS 17.45) • David Phillip Oster <[email protected]>, Tue, 21 Nov 1995 17:22:54 GMT

• I seem to have read a different "Writing Solid Code" than the other posters to this list. The version I read supported the following coding style: add two blocks of checks to significant pieces of your algorithm:

  <1> ASSERT
  <2> fixup algorithm

• The ASSERT checks are designed for use in a run-time environment where all the source code is available. If an ASSERT check fails, it does so showing the user the line in the source where a precondition violation has been detected. It is intended for programmers who are clients of an interface, and its goal is to get programmers to call the interface correctly. ("Writing Solid Code" also recommends that the ASSERT re-compute the return value by another, slower algorithm, and compare the two values. Surely such checks can be removed in the production version.)

• The "fixup" checks often check the exact same preconditions that the ASSERT checks check. These checks are intended to be executed in all environments, but since they check the same things as the ASSERTs, they generally don't fire in the development environment, since the ASSERT will already have fired. The "fixup" checks' job is to handle the error in a way appropriate for a production environment. Often this means throwing an exception, which cleanly backs out of a partially completed operation, and displays an error message to the user explaining in the user's terminology what went wrong.

• Sometimes it just means fixing up the precondition and continuing. For example, if you pass a bad pointer for one of three output arguments, should the program die or should it compute the two outputs it can, and not touch the third? The "Writing Solid Code" answer to this question is: The program should die during development so that the developer will discover and fix the problem. The program should continue after deployment, because immediate termination is less useful to the customer.
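The two-block ASSERT/fixup pattern described above can be transposed to Python as a sketch (the book's examples were not in Python, and the function here is hypothetical): the `assert` fires loudly during development and vanishes under `python -O`, while the fixup check handles the same precondition gracefully in production.

```python
# <1> ASSERT fires in development, pointing the calling programmer at the
#     violated precondition; it is stripped when Python runs with -O.
# <2> The fixup check guards the same precondition in all environments,
#     backing out cleanly with an explanatory error instead of corrupting
#     a partially completed operation.
def compute_stats(data, out):
    assert isinstance(out, dict), "compute_stats: 'out' must be a dict"   # <1>

    if not isinstance(out, dict):                                         # <2>
        raise TypeError("compute_stats needs a dict to write results into")

    out["count"] = len(data)
    out["total"] = sum(data)
    if data:                      # fix up rather than divide by zero
        out["mean"] = out["total"] / out["count"]
    return out
```

As the text notes, the fixup rarely fires in development because the ASSERT has already caught the problem; its value is in deployment, where termination is less useful to the customer.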

• If you want to bash Microsoft, just quote the section of "Writing Solid Code" where the author says he wrote the book because he had observed a decline in code quality inside Microsoft, that the lessons of the past that his book was supposed to teach were being ignored within Microsoft.

• Re: Writing solid code (da Silva, RISKS 17.46) • Edward Reid <[email protected]>, Tue, 21 Nov 95 09:39:56 -0500


> This is much like the old Burroughs boxes, where the compiler guaranteed > that no operation that violated the security policy was generated.

• Presumably Peter is speaking of the Burroughs B-6700 -- and its modern descendant, the Unisys A-Series, which has all the same attributes. In fact, the A-Series is still virtually code-compatible with the defunct B-6700.

• To say the compiler guaranteed that no operation violated security is a vast oversimplification. Security, in the broad sense, is a joint responsibility of the compilers, the instruction set architecture (ISA), and the operating system (MCP). The compilers do not generate code which unconditionally violates security, but often the generated code is further checked by the ISA or the MCP. For example, the ISA checks array bounds and prevents the code from accessing memory outside that allocated to the task. The MCP manages file access security. And so on. This division of responsibilities is a sound method, as evidenced by the high security level and reliability of A-Series systems. Doing all the checks at run time is not necessary.

• The A-Series actively supports the paradigm that all assertion checks should remain in the code. The most obvious such support is the ISA checking for array bounds, as it can do so in parallel with the actual memory addressing operations at no cost in performance and a very small cost in the price of the CPU. Numerous similar examples exist in the architecture.

• Edward Reid


17.48 • Re: Writing solid code (Beatty, RISKS 17.45) • Thomas Lawrence <[email protected]>, 21 Nov 1995 19:07:49 GMT

• >There are probably a few assertions which belong only in development code, >because they are so expensive to check that they make the product unusable. >In my experience, this is surely *not* the common case.

• I'm not sure this experience is that widespread. In my own experience, there are a great many assertions which are unacceptably expensive to check. Here are a few examples:

• Heap/Pointer Integrity: Many (most?) programmers use pointer checking techniques to catch dangling pointer uses and memory leaks. This typically involves using a special tool (e.g. purify) if you're lucky enough to run on a platform that supports it, or using a special malloc library along with routines called by the user at appropriate points (e.g. CheckPtr, CheckRefWithinRange, etc.) Most of these schemes cause programs to run 3 to 10 times slower than without the checks. That's generally not acceptable in a production application, so such checks are omitted from the final version. I would estimate that at least 50% of the assertions I put in my code are calls to such utilities.

• Algorithmic Redundancy: It is common to run multiple independent algorithms in a debug version and compare their results. For instance, in the Microsoft book, they say the debugging spreadsheet engine uses 2 algorithms to recompute values. One is the fancy dataflow based one which only recomputes stuff that depends on changed values. The other recomputes everything. Then the results are compared. If they are different, then there is a bug somewhere. This approach is useful anytime one uses a very complex algorithm to try to improve performance. Applications which use many such algorithms (spreadsheets, databases, and compilers, perhaps?) may have a significant amount of code devoted to these checks. Since the verification algorithm is usually much slower, you wouldn't want to include it in the production version.
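The debug-only algorithmic redundancy described above can be sketched as follows; the class, the flag name, and the running-total example are illustrative assumptions, standing in for the spreadsheet engine's two recomputation algorithms.

```python
# A fast incremental running total is cross-checked against a naive
# full recompute -- but only when DEBUG_CHECKS is on, so the slow
# reference algorithm never ships in the production build.
DEBUG_CHECKS = True  # would be False in the production version

class RunningTotal:
    def __init__(self):
        self.values = []
        self.total = 0

    def add(self, x):
        self.values.append(x)
        self.total += x              # fast incremental update
        if DEBUG_CHECKS:
            # slow reference algorithm: recompute everything from scratch
            assert self.total == sum(self.values), "incremental total diverged"
        return self.total
```

If the two algorithms ever disagree, there is a bug somewhere; the divergence is caught at the moment it appears rather than when a user notices a wrong cell.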

• Structural Checks: In programs that use complex internal data structures (like compilers), it is common to write routines which verify the integrity of the structure. Such routines are usually expensive since they traverse the entire structure. I certainly have a tendency to call these checks anytime I modify the data structure in some way. This usually results in serious slowdowns, so they can't be included in the final version.

• Simple checks such as array bounds, NULL pointer, and unexpected switch/enum value checks can also be very expensive. I tend not to be that great a programmer, so my code is really peppered with these checks. (Some modules have more than 50% of the lines devoted to this stuff.) Even after turning off the fancy memory and algorithm checks, these alone may cause slowdowns of 2 or 3, and sometimes as much as 5 times (always in the compute-intensive sections of the code, too, wouldn't ya know it!). You might want to keep some checks in the final version, but you may have to turn off a significant number of them.

• The main reason for this approach to programming is the reality that programmers don't like to put stuff in the code that makes it run slower. If you can convince a programmer that "your checks will not be in the final version", then he'll be much more willing to put assertions into the code. By having so many more assertions, you can catch many bugs much more quickly than by putting in only as many assertions as you can get away with given the program performance you are targeting. By catching more problems, you may be able to produce a program that has fewer bugs by the time it reaches the user. Although bugs that do slip through might be harder to detect, and perhaps more damaging, the tradeoff may still favor turning off many of the assertions.

• I'd like to suggest a different reason why Microsoft's products may not be very good. It's because of their complexity. The approach advocated in the book is based on the idea that bugs will be caught during the beta-test stage. In many situations, this is reasonable (and many solid programs have been written using the above approach; it's not just Microsoft's idea, although they published the book). However, I suspect that Microsoft's applications are becoming so complex that it is no longer possible to thoroughly test the programs, so bugs start slipping through. Perhaps a rethinking of Microsoft's design philosophy is what's required.

• Perhaps the best solution is to give 2 versions of the program to the end user. One with debugging, to satisfy people who are paranoid about losing all their data due to internal program corruption. The other without any debugging, for situations where you need maximum performance. Then let the user decide which to use.

• > (no, I haven't reported these to Microsoft. Why should I? They're not going to fix it... we're going to have to upgrade to 3.51 just to get around this bug, for all that Bill Gates claimed "A bug fix is the worst possible reason to release an upgrade" or words to that effect).

• The bugs will be in the code regardless of whether or not you remove the assertions (excepting the tradeoff I mentioned above). Whether or not the company is willing to release bug fixes for the bugs (however detected) is an entirely different matter -- a marketing matter. I'd say the programmers are well aware of the reported bugs, and probably have versions of the code with the bugs fixed. It's management's decision not to provide these fixes to the end user, for whatever reason (saving money, etc).

Page 14: 22.69 Double-barrelled surname costs disabled mother Nigel Metheringham 14 Apr 2003 10:54:08 +0100 A disabled mother of three has been barred from receiving.

21.87 • Automated Debit: "There's nothing we can do to stop it." • Carl Fink <carlf@dm.net>, Wed, 16 Jan 2002 14:02:08 -0500

A Georgetown, TX man who had arranged for his water bill to be automatically debited from his bank account alertly noticed that his monthly bill was for over $21,000. (If he hadn't noticed, the debit would have happened, causing him to bounce multiple checks before the error was corrected.) When he told the city of the problem, "They said there was absolutely nothing they could do to stop the automated debit, and it was out of their hands." Their solution was to send a city employee with a check for $21,000 to reimburse their customer! http://www.austin360.com/statesman/editions/tuesday/metro_state_1.html Risks? Lack of sanity checking on a new billing system springs to mind. Lack of any way to correct errors is also quite prominent. Carl Fink, Manager, Dueling Modems Computer Forum http://dm.net/ carlf@dm.net

21.87 • Buffer overflows and other stupidities • Earl Boebert <boebert@swcp.com>, Mon, 14 Jan 2002 10:07:55 -0700

"I used to be disgusted, now I try to be amused." -- Elvis Costello
"What a stupid robot." -- Marvin the Paranoid Android

In my view, attempts to close the buffer overflow vulnerability through software or compiler tricks are doomed to one degree of failure or another because you're trying to program around a stupid processor design. Certain contemporary processors actually host a Pantheon of stupidities, consisting of a Greater Stupidity and two handmaiden Lesser Stupidities.

Greater Stupidity: Read access implying execute access. Any piece of data that the processor can be tricked into loading into the command register immediately becomes code. This is a stupidity of such breadth and depth that it comes with an event horizon.

Lesser Stupidity I: Segmented addressing that isn't. Let's say you have an addressing scheme consisting of segment number plus offset. This raises the question of what to do when, in executing code, block moves, etc., the offset gets counted up to maximum length plus one. Smart answer: take a fault. Dumb answer: set offset to zero and count up one in segment number.

Lesser Stupidity II: Brain-dead stack design. If you enumerate the design space of dynamic storage management, you may realize that one actually has to *work* to produce a stack design so dumb that overflow attacks are possible. Here are four classes of designs that are immune to the vulnerability:

1. Descriptor stacks. The only thing that goes in the stack are addresses, preferably with a bounds value attached. Overflow a buffer and at worst you clobber the heap. Penalty: one level of indirection, which (The Horror! The Horror!) may cause your dancing pigs to dance slower than the other guy's. Possibility: can be fitted transparently to existing processor designs, assuming anybody cared.
2. Stack per protection domain. This assumes you can find the perimeters of your protection domains. Also slows down dancing pig displays because of copying parameters from stack to stack.
3. Separate control and data stacks. CALL/RETURN works the control stack, PUSH/POP works the data stack. Doh.
4. Error-checking stacks. A whole raft of options, including "shadow stacks" with checksums, return addresses protected with trap bits, etc. etc.

So, if it's all so straightforward and well known, why hasn't some vendor or other fixed it? Answer: the dancing pigs have won.

[Ah, yes. Earl is tacitly recalling the good old days of Multics (beginning in 1965) and its progenies (SRI's object-oriented Provably Secure Operating System design 1973--1980, and the Honeywell/Secure Computing Corporation type-enforced systems), all of which took care of this problem and so many others so long ago. But with today's badly designed bloatware, the dancing pigs are increasingly becoming 700-pound porkers that can barely move around the pigsty without massive memory and processing power, and whose pigpen could not even contain them if they were in reality Trojan pigs. PGN]

22.28 • Bear Stearns' bare sterns: erroneous order • David Lesher <wb8foz@nrk.com>, Wed, 2 Oct 2002 23:34:42 -0400 (EDT)

> Bear Stearns placed an erroneous order to sell $4 billion worth of stock
> late Wednesday at the New York Stock Exchange, but most of the order was
> canceled before it was executed. The NYSE said a clerical error caused
> the brokerage house to enter the order to sell $4 billion worth of
> Standard & Poor's securities at about 3:40 p.m. -- 20 minutes before the
> stock market closed. The order should have been for $4 million. All but
> $622 million of the $4 billion transaction was canceled prior to
> execution, the NYSE said in a statement. The NYSE had no further
> comment. Officials at Bear Stearns were not immediately available for
> comment. [AP item]

We have talked about sanity checking time after time. You'd think that a major move would require MULTIPLE management approvals.....but.. We have met the enemy and he is us...

22.28 • Address change blocked by online entry validation • "George N. White III" <gnw3@acm.org>, Thu, 3 Oct 2002 22:16:48 -0300 (ADT)

Canada Post recently changed my home mailing address. Previously my address involved a rural route number and mail was addressed to the town in which the post office was situated. The new address has the same street and number, but omits the rural route designation and has a different town and postal code. This change was first announced over a year ago, but the new postal codes were only announced a few weeks ago, and are "official" on Oct. 21, 2002.

BC (before computers) I would simply have mailed change-of-address cards that take only minutes to fill out. Now I have a choice. I can spend minutes online trying to find an actual mailing address, or minutes filling out an online form, only to find that the new address fails the online entry validation when I submit the form. Many of the companies I deal with, including well-known online retailers, allow customers to update their personal information online. In one case, when I clicked "submit", the result was an error page stating that my postal code was not valid for my street address. After contacting customer support, I was told that I could bypass the checks by submitting the form a second time.

The risks here are from data validation systems which assume that there is a unique mapping (e.g., between street address and postal code) and can only be updated at a single point in time, so users will be making updated entries before the database has been updated, or will fail to make the update so their records become "invalid" when the mapping is updated. During a transaction, a mailing address is required when the order is placed. Credit card companies may check the shipping address when the charge is applied, hopefully not long before when the item is ready to ship.

My new postal code is interesting, as it consists entirely of pairs of easily confused letters and numbers: "2Z", "3B", and "6G". Was this error-prone code rejected when postal codes were first issued, and then pressed into service when a new code was required?
It will be interesting to observe how often errors are made by people manually transcribing the values I entered in WWW address forms into their mailing databases. George N. White III <[email protected]> Head of St. Margarets Bay, Nova Scotia, Canada

9.39 • Even COBOL programmers need to know about range checking. • Bryce Nesbitt <bryce@cbmvax.commodore.com>, Fri, 3 Nov 89 17:40:53 EST

Last week I received this letter from my bank:

GREAT NEWS FOR THE HOLIDAYS! Dear Bryce C. Nesbitt: You are important to us. And, because of the excellent way you've handled your finances, we are pleased to increase the credit limit on your Meridian Open Line of Credit to $0. Now you have more buying power when you need it most - in time for the holidays. ...

Thanks a lot. Before the promotion my credit limit was $5,000.00. The rest of the letter talked about the free Mini-Vac that could be mine if I'd just borrow $1,000 (funny, there was no mention of the over-limit penalty :-). The bank had little to say about the event. I assume the calculation was based on a number of factors, including the "high credit" on the account. Since I have never drawn on this account, high credit would be zero.

9.96 • Bank deposits huge amount in account and blames owner! • Richard Muirden, A Star Trek Fan <rmuirden@axe.xx.rmit.oz.au>, Mon, 28 May 90 13:58:27 +1100

I thought this personal story might be of interest to RISKS readers: In mid 1988 I had an interesting experience with my bank account - I had had $87,889,984 (or some such random value in the $87 million range!) added to my account!! On asking the bank concerned if they could fix the problem they blamed me for "Keying in the amount at an ATM!" Of course I protested my innocence - where would I get that sort of money from?! :-)

Now I would have thought that surely: a) The ATM software would check for such obvious erroneous data if I had in fact entered such an amount as a deposit. (ever heard of range checking?!) b) With such large sums of money would the computer not alert an operator to check to see that it was valid (considering that I do not hold a corporate account). The problem was fixed after several weeks (!) and although rather amusing {and if only I got the interest on that money :-( } to do an account balance and see a nice amount for a change :-) but it still leaves me wondering just what happened and why they should blame *me* for such an obvious computer error! Maybe it was because I am a student! I wonder if this kind of error has occurred to anyone else. -Richard Muirden rmuirden@axe.xx.rmit.oz.au

10.1 • ATM range checking • ZENITH <SAC.HQ-SX@E.ISI.EDU>, Fri, 01 Jun 90 07:07 PDT

In RISKS 9.96, Richard Muirden writes of his experience with $89m showing up in his bank account; his bank blamed him for keying in the amount at the ATM. He went on to wonder about the range checking that might or might not be employed at the ATM to catch "such obvious erroneous data". It is my experience that there is, indeed, NO such checking performed--at least at one institution, at one time. A few years back, my credit union installed an ATM machine; as part of the hoopla surrounding the event, they had a demonstration where members could "practice" on the machine, using a card provided by the demonstrator. I, being the obnoxious sort, made use of the opportunity to determine an empirical answer to the question of range checks. I cheerfully deposited the amount of $99,999,999 in the account. The demonstrator was rather worried when I showed him the receipt (oops--I meant "transaction record"); it seems that they were using a live account for the demo, which meant that all these phoney transactions would show up on the balance sheets at the end of the day! I did hear later that the trouble caused was minimal, but they did have to jump through some hoops to make sure there were no ripple effects caused by that $99m.

P.S.--My personal feeling is that any non-zero deposit is an "obviously erroneous value"; I don't like giving my money to a machine in exchange for a worthless transaction record. Andy

10.3 • Re: ATM range-checking (RISKS-10.01) • Jim Horning <horning@src.dec.com>, 1 Jun 1990 1336-PDT (Friday)

It's pretty clear that different banks have different practices, as well as diverse equipment. My bank (Wells Fargo) advertises that they will credit you with an extra $10 if the ATM makes any mistake on a deposit (and, indeed, I've never detected one). They also do some range-checking. I haven't conducted extensive experiments, but I recently deposited a check for an order of magnitude more than my usual deposit, and was asked to confirm an extra time before the transaction was completed. I thought that this was a very sensible precaution.

In a related vein: When I first got my ATM card it was limited to $200/day of cash withdrawal, which is not unreasonable. However, after a decade of modest inflation, there were times (like just before trips) when a larger sum would have been convenient. One day it occurred to me to try to withdraw more, and what do you know? It disbursed $300 without complaint. So my trips to the ATM became less frequent. Some time later, I noticed that years of carrying the card in my wallet had cracked it, right across the magnetic stripe. So I asked for a new one. Now I'm limited to $200/day again. I infer that it was a fault on the stripe that let me withdraw more. I would have hoped that the limit was enforced by something less subject to decay and/or tampering. Jim H.

12.20 • The story of O • Jerry Leichter <leichter@lrw.com>, Thu, 29 Aug 91 09:00:58 EDT

A recent RISKS mentions the problems of one Stephan O in getting computers to accept his single-letter last name. This is an OLD problem. "Ng" is a moderately common Chinese name (well, to be more accurate, it's a moderately common rendering of an underlying Chinese name probably more often written as Eng or Ing, and undoubtedly pronounced using a phoneme not present in English). I recall at least one report, probably in Datamation, many years ago - probably early '70's - of the trials and travails of a programmer whose last name was Ng. It seems the payroll computer just would not accept that as a valid name. As I recall, his paychecks were eventually made out to one Damn U Acceptit.

The underlying issue here - and one we haven't gotten any better in dealing with in 20 or more years of trying - is that of "unreasonable" data. A common complaint is that computers accept everything literally; with no knowledge of real-world reasonableness, they are perfectly happy to accept that a homeowner use a million kilowatt-hours in a month (because of a small error in transcription), or what have you. The usual prescription is "Check for reasonableness". Unfortunately, the world is sometimes "unreasonable"! The "robust" software that avoids accepting random junk produced by line noise for names has problems with Ng and O. The range-checking software that discards "impossible" values suppresses all data about the ozone hole over the Antarctic.

As Mr. O's story illustrates, it's not just computers that run into this problem. A "dumb" program, with no recourse to "common sense", would accept the name with no problems. A "smarter" program, embodying the programmer's model of what names look like, rejects it just as Mr. O's teachers did. The only difference is that, with the teachers, he could convince them that O it was. The program has no escape hatch. However, people sometimes have no escape hatch either.
Everyone has had to deal with bureaucrats who just would not bend "procedure", even when it was clear that "procedure" just was not working. Everyone has also run into at least one pig-headed individual, operating entirely without the excuse of organizational inertia, who would not bend from his belief in some particular way of doing things, evidence to the contrary notwithstanding. Probably the most significant effects of this phenomenon are in the many examples of intelligence organizations which ignore what in retrospect are "clear warnings" of problems because the evidence is "unreasonable" in terms of their theory of the world. Or consider the Challenger disaster, and the effects of deliberate blindness to evidence. -- Jerry

12.20 • The Story of O • Stuart I Feldman <sif@research.att.com>, Wed, 28 Aug 91 15:04:47 -0400

If I remember an NPR item on the problems of Stephen O, he has particular difficulties because programs that launder names to fix up entry errors assume that a single O is part of an Irish name (as in PGN's O'O). An example of the risks either of ethnocentric (Eurocentric?) computer programming or of excessive cleverness. stu feldman

12.20 • A number is no name • "Clifford Johnson" <GA.CJJ@Forsythe.Stanford.edu>, Wed, 28 Aug 91 16:44:19 PDT

In addition to the story about the computer-related inconvenience of a person having the name "O", it is worth mentioning a California judge's ruling (Marin county, 1984) refusing to permit the name "3", or even its romanized form "III". The person in question had been called "3" since his childhood, being the third child, but the judge ruled that a number cannot be a legal name. Only the spelling "Three" was permissible. Social security fought the name change, arguing that the case presented an exception that would cost them too much to program for.

[Having just seen on PBS a rerun of the old Victor Borge equivalent of the young people's guide to pronunciation, one would assume that if they permit "O" and "3" that someone might try for "!" (Jack Splat?) or "#" (they make calculators!) or "&" (Georges Amper Sand?) or even "~" (ma hatma tilde?). An opportunity to circumflex your imagination! PGN]

12.20 • The need for utilities to deal with non-standard situations • Tom Lincoln <lincoln%rand.org@usc.edu>, Thu, 29 Aug 91 17:41:45 PDT

Koenig in RISKS-12.18 states: It's practically impossible to keep two separate databases in step for any length of time. That's true even when one of the `databases' is reality itself. It is **particularly** true when reality is to match some formal data structure, because reality is full of all sorts of non-standard situations. The story of (Stephen) O the following day illustrates how pervasive the problem is. See Spafford's contribution to RISKS-12.19, where numerous systems could not accept a letter as a last name. What if he had to be admitted to a hospital with an automated registration and admission system?

The real problem does not lie in the particular cases... those already submitted to the RISKS FORUM are too numerous to count... but rather with the general lack of utilities and procedures to manage non-standard situations wherever they arise in online computing. The data model will never be completely correct, and the real world is a moving target. Very commonly, the person at the terminal can see the absurdity, but has no override to do something about it.

Take the case of a nearby hardware store: They have tried to order some power tools from Black & Decker. However, the order has been rejected because there is a non-zero balance of over 60 days. In this case, however, it is not a debit, but an $8.49 credit! B&D does not send out checks to adjust a credit balance, but rather applies the credit to the next order... But in this case... And there is no override... Of course this is a bug. The test should be for a balance less than zero. There should be an exception sequence managed on paper by a supervisor.... but there isn't. Clearly, exceptions have not been anticipated. But there are always exceptions. These must be resolved by the direct user (often a clerk) where the transactions are made. At the very least the user must be able to put non-standard material in an exception queue to be resolved by higher authority.

Take the case of a physician submitting a missing (?lost) prescription for Medicare patient reimbursement. The instructions are to back date it to the original date. However, the physician, wishing to be accurate, puts down both the original date and the date that the prescription was rewritten, noting that this is a resubmission for a lost document. It is rejected. There is no way to submit a non-standard document.... The only way is to pretend that it is an original. Clearly, the problem is with procedures first, and only subsequently with the computer implementation. Managing non-standard situations needs to be an integral part of all software that must deal with unstructured aspects of the real world. The idea of managing non-standard situations should be incorporated in the operating system and in the structure of commercial databases. When this advanced day arrives, life will be much easier, and there will be fewer funny examples in the RISKS FORUM. TOM LINCOLN lincoln@rand.org

21.85 • Re: "Buffer Overflow" security problems (PGN, RISKS-21.84) • Dan Franklin <dan@dan-franklin.com>, Sun, 6 Jan 2002 11:40:50 -0500

> Perhaps in defense of Ken Thompson and Dennis Ritchie, C (and Unix, for
> that matter) was created not for masses of incompetent programmers, but
> for Ken and Dennis and a few immediate colleagues.

Which only serves to emphasize Henry's point. The code that those "few immediate colleagues" wrote also suffered from buffer overflow problems. Not only did many ordinary commands written at Bell Labs fail given long enough lines, but in one early version of UNIX, the (written in C) login command had a buffer overflow problem that permitted anyone to login by providing sufficiently long input. In other words, C buffer overflows have caused security problems ever since the language was created; and even the earliest users of C have been caught by it. If software were really an engineering field, we would learn as engineers do to avoid tools and methods that persistently lead to serious problems. Note that gcc, the very popular GNU C Compiler, has experimental extensions to support bounds checking; see http://gcc.gnu.org/extensions.html. Let us hope that one of these extensions makes its way out of the laboratory soon. If it became a standard gcc option, the current sorry situation might begin to improve.

21.84 • Security problems in Microsoft and Oracle software • "NewsScan" <newsscan@newsscan.com>, Fri, 21 Dec 2001 08:47:58 -0700

Two top companies have issued new statements acknowledging security flaws in their products: Microsoft (Windows XP) and Oracle (the 9i application server, which the company had insisted was "unbreakable"). Resulting from a vulnerability called "buffer overflow," both problems could have allowed network vandals to take over a user's computer from a remote location. Microsoft and Oracle have released software patches to close the security holes, and a Microsoft executive says: "Although we've made significant strides in the quality of the software, the software is still being written by people and it's imperfect. There are mistakes. This is a mistake." (San Jose Mercury News 21 Dec 2001; NewsScan Daily, 21 December 2001) http://www.siliconvalley.com/docs/news/svfront/secur122101.htm

21.84 • "Buffer Overflow" security problems • Henry Baker <hbaker1@pipeline.com>, Wed, 26 Dec 2001 21:19:22 -0800

I'm no fan of lawyers or litigation, but it's high time that someone defined "buffer overflow" as being equal to "gross criminal negligence". Unlike many other software problems, this problem has had a known cure since at least PL/I in the 1960's, where it was called an "array bounds exception". In my early programming days, I spent quite a number of unpaid overtime nights debugging "array bounds exceptions" from "core dumps" to avoid the even worse problems which would result from not checking the array bounds. I then spent several years of my life inventing "real-time garbage collection", so that no software -- including embedded systems software -- would ever again have to be without such basic software error checks.

During the subsequent 25 years I have seen the incredible havoc wreaked upon the world by "buffer overflows" and their cousins, and continue to be amazed by the complete idiots who run the world's largest software organizations, and who hire the bulk of the computer science Ph.D.'s. These people _know_ better, but they don't care! I asked the CEO of a high-tech company whose products are used by a large fraction of you about this issue and why no one was willing to spend any money or effort to fix these problems, and his response was that "the records of our customer service department show very few complaints about software crashes due to buffer overflows and the like". Of course not, you idiot! The software developers turned off all the checks so they wouldn't be bugged by the customer service department!

The C language (invented by Bell Labs -- the people who were supposed to be building products with five 9's of reliability -- 99.999%) then taught two entire generations of programmers to ignore buffer overflows, and nearly every other exceptional condition, as well. A famous paper in the Communications of the ACM found that nearly every Unix command (all written in C) could be made to fail (sometimes in spectacular ways) if given random characters ("line noise") as input.
And this after Unix became the de facto standard for workstations and had been in extensive commercial use for at least 10 years. The lauded "Microsoft programming tests" of the 1980's were designed to weed out anyone who was careful enough to check for buffer overflows, because they obviously didn't understand and appreciate the intricacies of the C language. I'm sorry to be politically incorrect, but for the ACM to then laud "C" and its inventors as a major advance in computer science has to rank right up there with Chamberlain's appeasement of Hitler. If I remove a stop sign and someone is killed in a car accident at that intersection, I can be sued and perhaps go to jail for contributing to that accident. If I lock an exit door in a crowded theater or restaurant that subsequently burns, I face lawsuits and jail time. If I remove or disable the fire extinguishers in a public building, I again face lawsuits and jail time. If I remove the shrouding from a gear train or a belt in a factory, I (and my company) face huge OSHA fines and lawsuits. If I remove array bounds checks from my software, I will get a raise and additional stock options due to the improved "performance" and decreased number of calls from customer service. I will also be promoted, so I can then make sure that none of my reports will check array bounds, either. The most basic safeguards found in "professional engineering" are cavalierly and routinely ignored in the software field. Software people would never drive to the office if building engineers and automotive engineers were as cavalier about buildings and autos as the software "engineer" is about his software. I have been told that one of the reasons for the longevity of the Roman bridges is that their designers had to stand under them when they were first used. It may be time to put a similar discipline into the software field. 
If buffer overflows are ever controlled, it won't be due to mere crashes, but due to their making systems vulnerable to hackers. Software crashes due to mere incompetence apparently don't raise any eyebrows, because no one wants to fault the incompetent programmer (and his incompetent boss). So we have to conjure up "bad guys" as "boogie men" in (hopefully) far-distant lands who "hack our systems", rather than noticing that in pointing one finger at the hacker, we still have three fingers pointed at ourselves. I know that it is my fate to be killed in a (real) crash due to a buffer overflow software bug. I feel like some of the NASA engineers before the Challenger disaster. I'm tired of being right. Let's stop the madness and fix the problem -- it's far worse, and caused far more damage than any Y2K bug, and yet the solution is far easier. Cassandra, aka Henry Baker <[email protected]>

21.84 • Sometimes high-tech isn't better • "Laura S. Tinnel" <ltinnel@teknowledge.com>, Sat, 29 Dec 2001 17:19:30 -0500

We're all aware that many companies have buried their heads in the sand on the security issues involved with moving to high-tech solutions in the name of convenience, among other things. When we're talking about on-line sales, educational applications, news media, and the like, the repercussions of such are usually not critical to human life, and therefore the trade-off is made. However, I've just encountered something that is, well, disconcerting at best.

Earlier today as I sat unattended in an examination room for a half hour waiting on the doctor to show up, I carefully studied the new computer systems they had installed in each patient room. Computers that access ALL patient records on a centralized server located elsewhere in the building, all hooked up using a Windows 2000 domain on an ethernet based LAN. Computers that contained accessible CD and floppy drives and that could be rebooted at the will of the patient. Computers hooked up to a hot LAN jack (oh for my trusty laptop instead of that Time magazine...) Big mistake #1 - the classic insider problem.

Once the doctor arrived and we got comfy, I started asking him about the computer system. (I just can't keep my big mouth shut.) Oh he was SO proud of their new fangled system. So I asked the obvious question - what would prevent me from accessing someone else's records while I sat here unattended for a half hour waiting for you to show up? With a big grin on his face, he said "Lots of people ask that question. We have security here; let me show you." Big mistake #2 - social engineering.

Then he proceeded to show me that the system is locked until a password is entered. Of course, he said, if someone stole the password, then they could get in, but passwords are changed every 3 months. And, he continued, that's as secure as you can get unless you use retinal scans. (HUH?) I know all about this stuff, for you see "my dear", I have a masters degree in medical information technology, and I'm in charge of the computer systems at XXXX hospital. OK. Time to fess up.
Doc, I do this for a living, and you've got a real problem here.

1. Have you thought about the fact that you have a machine physically in this room that anyone could reboot and install trojan software on? A: Well, that's an issue.

2. Have you thought about the fact that there's a live network connection in this room and anyone could plug in and have instant access to your network? A: You can really do that??? There's a guy who brings his laptop in here all the time.

3. I assume you are using NTFS (yes). Have you locked down the file system and set the security policies properly? You do understand that it is wide open out of the box. A: I don't know what was done when the computers were set up.

4. Have you thought beyond just the patient privacy issue to the issue of unauthorized modification of patient records? What are you doing to prevent this? What could someone do if they modified someone else's records? Make them very ill? Possibly kill them? A: That's a big concern. (Well, duh?)

Then there was a big discussion about access to their prescription fax system, which could allow people to illegally obtain medication. I didn't bother to ask whether they were in any way connected to the Internet; they have either that or modems to fax out the prescriptions. At least he said he'd talk to his vendor to see how they have addressed the other issues. Perhaps they have addressed some of these things and the doctor I was chatting with simply didn't know.

I'm not trying to come down on these doctors, as I'm sure they have very good intentions. I personally think having medical records on-line is a good idea in the long term, as it can speed access to records and enable remote and collaborative diagnoses, potentially saving lives. But I'm not convinced that today we can properly secure these systems to protect the lives they are intended to help save. (Other opinions are welcome.)
And with the state of medical malpractice lawsuits and insurance, what could a breach in a computer system that affects patient health do to the medical industry if it becomes reliant on computer systems for storage and retrieval of all patient records?

A couple of things. First, I'm not up on the state of cyber security in medical applications. I was wondering if anyone out there is up on these things, or if anyone else has seen stuff like this.

Second, if the computer system were breached and someone was mistreated as a result, who could be held liable? The doctors, for sure. What about the vendor that sold and/or set up the system for them? Does "due diligence" enter in? If so, what is "due diligence" in cyber security for medical applications?

Third, does anyone know whether the use of computers for these purposes in a physician's office changes the cost of malpractice insurance? Is this just too new and not yet addressed by the insurance industry? Is there any set of criteria for "certification" of the system for medical insurance purposes, possibly similar to that required by the FDIC for the banking industry? If so, are the criteria really of any value?

[This is reproduced here from an internal e-mail group, with Laura's permission. A subsequent response noted the relative benignness of past incidents and the lack of vendor interest in good security -- grounds that we have been over many times here. However, Laura seemed hopeful that the possibility of unauthorized modification of patient data by anyone at all might stimulate some greater concerns. PGN]


21.85• Re: "Buffer Overflow" security problems (Baker, RISKS-21.84)• Jerrold Leichter <[email protected]>Mon, 7 Jan 2002 12:00:29 -0500 (EST)

Henry Baker complains about the continuing stream of problems due to buffer overflows, and blames the C language. PGN repeats a number of common defenses for C:

- It's perfectly possible to write bad, buggy code in the best languages;
- It's perfectly possible to write good code in the worst languages;
- It's wrong to blame Ken Thompson and Dennis Ritchie (whom, BTW, Mr. Baker did not blame), because they never intended for C and Unix to be used the way they are today;
- Expanding on this, spreading the blame for the use of inappropriate Microsoft systems in life- and mission-critical applications to just about everyone who's ever touched a computer.

I've been a C programmer for some 20 years, and a C++ programmer for 6. I know well the advantages of the languages. But I'm really tired of the excuses. No, Thompson and Ritchie are not to blame. Anyone who actually reads what they've written over the years - papers or code - will know that they understand the tradeoffs and make them very carefully. I wish my code could be as good as theirs!

Unfortunately, I can't say the same about much of the C and C++ culture that grew up around their inventions over the years. A programming community develops its own standards and styles, its own notions of what is important and what isn't. These standards, styles, and notions are extraordinarily influential. Some of the influence is transmitted through teaching; much is transmitted through the code the community shares. The most pernicious influences in the C/C++ community include:

- An emphasis on performance as the highest goal. For the most recent manifestation of this, you need only look to the C++ Standard Template Library (STL). It has many brilliant ideas in it, but among the stated goals, from the first experiments, was to produce code "as efficient as the best hand-tuned code". "As *safe*" or "as *reliable*" were simply not on the table. The STL has attained its stated goals.
Yes, there are debugging versions with things like bounds checking, but "everyone knows" that these are for testing; no real C++ programmer would think of shipping with them.

- A large body of code that provides bad examples. Why are there so many buffer overflows in C code? The C libraries are, to this day, full of routines that take a pointer to a buffer "that must be large enough to contain the result". No explicit size is passed. I'm told that the guys at AT&T long ago removed gets(), a routine like that which reads input, from their own library. It persists in the outside world - an accident waiting to happen. Some routines have only very recently even appeared in alternative versions that have buffer-length arguments - like sprintf() and its relatives. Until snprintf() became widespread (no more recently than the last 5 years), it was extremely difficult to write code that safely wrote arbitrary data to an in-memory buffer. (If you think it's easy, here's a quick question: how large must a buffer be to hold the result of formatting an IEEE double in f format with externally specified precision? Hint: the answer is *much* larger than the "about 16" that most people will initially guess.)

As part of a C++ system I work on, I have a vector-like data structure. The index operation using [] notation is range-checked. For special purposes, there's an UnsafeAt() index operation which is not. Compare this to the analogous data structure in the C++ library, where [] is *not* range-checked and at() is. When the choice is between a[10] and a.at(10), which operation will the majority of programmers think they are supposed to use? Which data structure would you rather see taught to the programmers who will develop a system your life will depend on? (BTW, extensive profiling has yet to point to []'s range checking as a bottleneck, with the possible exception of the implementation of a hash table, where UnsafeAt() could be used in a provably correct way.)
- A vicious circle between programmers and compiler developers. C and C++ programmers are taught to write code that uses pointers, not indices, to walk through arrays. (The C++ STL actually builds its data structures on the pointer style.) So why should C/C++ compiler developers put a lot of effort into generating good code for index-based loops? C/C++ programmers are taught not to expect the compiler to do much in the way of common sub-expression elimination, code hoisting, and so on - the earliest C compilers ran on small machines and couldn't afford to. Instead, C/C++ programmers are taught to do it themselves - and the C language allows them to. So why should C/C++ compiler developers bother to put much effort here?

Put this together and you can see that checking your array accesses for out-of-range accesses can be a really bad idea: your check code could run every time around the loop, instead of being moved out to the beginning as a FORTRAN programmer would expect. I'm sure there are some - perhaps many - C/C++ compilers today that would provide such optimizations. Given the generality of C and C++, it can be a challenge, but the techniques exist. However, it's an ingrained belief of C/C++ programmers - and a well-founded one - that they can't *rely* on the availability of such optimizations. (A FORTRAN programmer can't point to a standard in his reliance on such optimizations, but no one today would accept a FORTRAN compiler that didn't do them.)

I haven't even touched on the closely related issue of the dangers of manual memory management, and the continuing refusal of the C/C++ community to accept that most programs, and certainly most programmers, would be better off along every significant dimension with even a second-rate modern memory allocator and garbage collector -- especially in the multi-threaded code that's so common today.

Is it *possible* to write reliable, safe code in C or C++?
Absolutely -- just as it's *possible* to drive cross-country safely in a 1962 Chevy. Does that mean the seat belts, breakaway steering columns, disc brakes, air bags, and many other safety features we've added since then are unnecessary frills?

Programming languages matter, but even more to the point, programming *culture* matters. It's the latter, even more than the former, that's given us, and will continue to give us, so much dangerous code. Until something makes it much more expensive than it is now to ship bad code -- and I believe that Mr. Baker is right, and the only thing that will do it is a few big liability judgments -- nothing is likely to change. Unfortunately, liability judgments will bring other changes to the programming world that may not be nearly so beneficial.

