
Computer Fraud & Security Bulletin January 1992

INSURING COMPUTING RISKS

A Fresh Start

David Davies

Hogg Robinson Gardner Mountain

The only similarity between what we call computers today and what we (or our predecessors) called computers twenty years ago is the name. The power of today's computer, the scope of its applications, its integration into the very fabric of business, the nature of the risks that it introduces... all of these things have changed beyond recognition. However, neither the insurance industry nor the insurance buyer appears to have noticed the change: the policies that protect most companies today are far more pertinent to the risks of computing in the 1970s than to computing in the 1990s.

The original thought processes of the insurance industry twenty years ago, mostly quite justified at the time, were:

• Computers cost a lot of money, so we will design a computer policy that relates almost exclusively to repairing or replacing the hardware in the event of loss or damage.

• The risks of a computing process are in direct proportion to the cost of the hardware on which it is run.

• Computers have moving parts, so we will insure them in our engineering department, i.e. in the same way that we insure boilers, pressure vessels and industrial machines.

• If computer equipment is damaged the computer user can take his punch cards to a computer bureau or to a friendly computer user down the road, and get him to process his cards on an overnight run. Therefore the only cover that is needed for consequential losses is the extra payments that may have to be made to the bureau, to staff who work overtime, etc.

• Because all the data is backed up and stored (often on site) in fireproof safes, the only cover that we need for data is the cost of keying in the one or two days' work lost since the last backup was taken.

Ninety percent of companies, both big and small, still have computer insurance, and almost all insurers still offer computer insurance that reflects precisely these thought processes, even down to the engineering department's continuing involvement. A 1990 'state-of-the-art' German computer policy excludes "operating media, e.g. lubricating oil, fuel, chemicals"!

When it comes to computer crime, matters are even more complex. Computer crime is one of those phrases that is used glibly, but there is no common understanding as to its meaning. Computer fraud? Computer virus? Hacking? Stealing computers? Malicious damage? All of these? Fraud by employees, non-employees or quasi-employees?

The most well known policy to use the title 'computer crime' is the Lloyd's Electronic and Computer Crime Policy. This policy, aimed primarily at banks and financial institutions, has been available for some years. But a new version, dated January 1991, has recently been released and, if nothing else, illustrates the point about the roots of early computing being very evident in 'state-of-the-art' insurance covers today. To take one example, the definition of electronic data processing media begins, "punched cards, magnetic tapes, punched tapes or magnetic disks...".

It is possible that there is still the odd punched card or punched tape user out there somewhere, but probably not within the financial sector. As the wording then goes on to refer to "other bulk media on which electronic data are recorded", even that backward user would not be prejudiced by the removal of reference to this long obsolete bit of computing history, whereas the user of read/write optical discs, which are at the forefront of technology, may not be insured.

© 1992 Elsevier Science Publishers Ltd


The main core of the wording covers fraud by non-employees made possible by the use of electronic equipment, including telex and facsimile machines, and even fraudulent instructions given electronically. In the 1991 revision there is an attempt to include the computer virus for the first time, but serious errors in the wording negate most of the cover the underwriter, presumably, intended to provide.

Like so many policies, the logic of what is included in and what is excluded from the Lloyd's Electronic and Computer Crime Policy only becomes apparent when one takes into account:

• the other policies that this is intended to supplement;

• the demands for cover in specific areas where insurance buyers have become concerned, having read often over-sensational press reports (e.g. of computer viruses);

• the underwriter's reluctance to become involved in risks that he does not understand, or that he feels could cause him to lose his shirt (e.g. software bugs); and

• the way in which computer crime insurance has developed, by gradual advances from pre-computer concepts rather than by a complete rethink.

The final problem with existing computer policies is the methods used by insurers to judge the quality of the risk. Traditionally the insurer will require the insured to complete a proposal form, the answers to which then become the basis of the insurance, i.e. if it is discovered at the time of a claim that a statement made in the proposal form is not correct, the insurer is entitled to decline the claim.

In addition, insurance case law requires that the insured declares to the insurer all facts that may affect the underwriter's decision on whether to accept the risk and, if so, the terms upon which it is accepted. Unfortunately many proposal forms were drafted in the 1970s and, particularly in areas such as fraud, concentrate on questions that relate to the risks of the fortress mainframe rather than those of open accessibility via LAN, EDI or EFT. Many proposal forms still contain questions such as, "Are programmers denied access to the computer room?" and yet contain no questions regarding logical access controls. When asked why they had not consulted their own very powerful DP department to design a more modern proposal form, one major insurer replied, "we tried to do that; we spent two days with our DP people but could not understand a word they said. In the end we decided to stick with the proposal form we knew."

Given that virtually no relevant questions are asked on the proposal form, and that the underwriter would probably not understand the information anyway, the insured is left with the problem of what to disclose to the insurer.

Some insurers have sought to combat this problem by imposing warranties that require certain things to happen. Unfortunately, however, the insurers understand neither the technology nor the terminology, and impose requirements such as "data is not stored for a longer period than the maker's instructions", warranties that the insured has an "archival filing system" or, some insurers' answer to the virus risk, warranting that the insured does not "violate software licence conditions".

A more acceptable option is to have a bridge between the insurer and the insured, i.e. a specialist consultant to evaluate the security standards of the risk and either recommend that it be insured (occasionally subject to certain risk improvements being carried out) or declined.

Unfortunately, on many occasions the insurer has viewed the risk audit as replacing the proposal form and has warranted information contained in the risk audit into the policy.

This means, in effect, that the snapshot of the risk that existed at the time of the audit, down to its minute detail (a typical audit report being in excess of forty pages), must be totally preserved for cover to remain effective. If any aspects are changed the insurer has the right to decline any claim.

To conclude, it is time for the insurance industry to start again. The remainder of this paper will outline the thought processes that have led to the design from scratch of a totally new computer crime insurance and the cover that has resulted.

A New Approach

Let us clear the decks and try to ignore everything we know about insurances that have been historically available. As we have seen, most of what is available today is irrelevant and can only clutter our thinking. Firstly we should ignore completely the value of the computer equipment itself. There are three reasons for this:

1. It allows us to concentrate on the risks that really matter today.

2. It removes the red herring that the size or value of the computing equipment has any relevance to the real risks with which we should be concerned.

3. There are in any event many policies available to cover computing equipment, either specifically or as part of general cover for all physical assets.

We are therefore left with three types of risk which we must contemplate for possible inclusion within our cover, and each of which should also be considered when reviewing the desirability of any risk management measures for particular computing functions or applications.

1. The assets controlled by the computing system or application.

2. The integrity of the system.

3. The consequences of the non-availability of the system.

Each of these will be considered.

The Assets Controlled by the System

In turn this category can be divided into the tangible (i.e. money, title to property, goods, stock, etc., the traditional subject of fraud insurance) and the intangible (i.e. the information, or data, that is held within the system).

Tangible

The phrase 'computer fraud' became popular in the days when computer systems were discrete, as the treasury management systems (TMSs) in most corporations today still are, and there was therefore little difficulty in demarcating what was a computer fraud and what was not. This is very pertinent if computer fraud is covered but not other fraud, or indeed if computer fraud and non-computer fraud are covered by different insurers. Today the situation becomes far more complex as, apart from special instances such as TMSs, computers are so integrated into most companies' activities that every financial transaction involves a computer at some point. The employee who fiddles his expense account has not committed computer fraud in the conventional sense, but if that simple act of data diddling results in a payment that is made by a computer, is that a computer fraud for insurance purposes? If not, where is the demarcation line, and how close to the computer must the fraud be in order to qualify? Certainly instances of data diddling as far removed from the computer as the expenses fraud have been included within computer fraud casebooks, with resultant distortion of the statistical analysis of those cases.

This really begs the question: why insure computer fraud as a separate entity? The answer appears to be that computer fraud is an emotive subject, and insurance buyers are far more readily convinced that they should buy cover for this dramatic and exciting new risk than that they should buy cover for traditional manual fraud.


Computerization has three potential effects on the fraud risk:

1. In certain circumstances it can put control of the company’s assets into the hands of third parties or quasi third parties (e.g. ex-employees, contract operators) who would not be insured by a standard fraud policy.

2. It can put control of the company’s assets into the hands of quite low level employees who are able to manipulate the computer system but whose pedigree and honesty may not have been as thoroughly checked as would be the case for the traditional high risk employees.

3. In the hands of the right person the computer can be used to bypass the traditional division of responsibilities and the systems of controls and audit trails upon which management have traditionally relied.

The first two of these exposures can be addressed by arranging computer fraud cover for third parties and quasi third parties to supplement existing employee fraud insurance, and by insuring all employees, not just those having direct control over corporate assets. The third effect can only be addressed by proper risk management controls.

Intangible (Data)

Few companies appreciate the value of the data stored within their computer system or, indeed, contemplate the possibility either that all of their data could be lost or that their valuable trade secrets could be accessed by a third party. They invariably arrange their insurance accordingly.

The value of data can be calculated according to one of several formulae: simple re-keying time since the last off-site back-up, the cost of re-keying a substantial part of the database if all is lost, the additional work that is necessary to re-acquire the data if it is not readily available in hard copy format, and finally, the intrinsic value of the data if it simply cannot be recreated. All of these risks should be insurable, with sums insured that increase dramatically as each of the categories escalates, but with premium rates that reduce accordingly to reflect the increasing remoteness of the risk in a well controlled system.
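The relationship between the four categories can be sketched numerically. All of the figures below are invented for illustration (the article quotes no actual sums or rates); the point is only that as the loss category becomes more remote, the sum insured rises while the premium rate falls, so the premium itself need not escalate in step:

```python
# Hypothetical illustration of the four data-loss categories described above.
# Sums insured and premium rates are invented for this sketch.
categories = [
    ("Re-keying since the last off-site backup",      50_000, 0.010),
    ("Re-keying a substantial part of the database",  500_000, 0.004),
    ("Re-acquiring data not held in hard copy",     2_000_000, 0.002),
    ("Intrinsic value of unrecreatable data",      10_000_000, 0.001),
]

for name, sum_insured, rate in categories:
    premium = sum_insured * rate  # premium = sum insured x rate
    print(f"{name}: sum insured {sum_insured:,}, rate {rate:.1%}, premium {premium:,.0f}")
```

With these invented numbers, a two-hundredfold increase in sum insured produces only a twentyfold increase in premium, reflecting the increasing remoteness of each risk in a well controlled system.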

The risk of industrial espionage is virtually uninsurable and must remain so for the foreseeable future. The reason for this is the challenge of demonstrating, to any insurer's satisfaction, the financial impact of a known incident.

Imagine this scenario, which has happened to one of our clients: there is a theft; computing equipment and valuable contents are ignored, but disks are stolen containing the client's product recipes, developed and formulated over many years. The company has back-up copies of the recipes so has suffered no immediately apparent loss, but clearly the theft was targeted. Which of their competitors, in which country, now has that information? How will it be used, and how will our client suffer, possibly many years into the future?

Substitute for theft a hacker downloading information and covering up evidence of his intrusion, or indeed a competitor receiving Tempest emanations, and the event itself may be impossible to prove; but even if it were, the consequences would not be. Industrial espionage must therefore be considered uninsurable.

System Integrity

One of the double-edged swords of computing is the ease with which data can be amended. On the positive side, no retyping or reprinting of records, no Tippex, no rubbers. On the negative, all the problems of computer-generated evidence and, indeed, of knowing whether to trust the information that is held by the system once you know, or strongly suspect, that someone with possibly hostile or mischievous intentions may have wandered around it with the opportunity to alter it.

Confirming the integrity of the system under such circumstances could involve the diversion of a considerable amount of resource, together with the non-availability of the computing system during the integrity verification process. As long as the intrusion is proved, this risk should be within the capabilities of the insurance industry. However the risk has not, to the author's knowledge, been addressed by any insurance cover.

System Availability

There are three possible causes of system unavailability:

1. Physical (breakdown of or damage to hardware).

2. Intrusion.

3. Errors.

Physical

Cover for lost revenue caused by unavailability of a computer system owing to physical damage to hardware, power supply or telecommunications links and, in some policies, for inability to process data that has been lost or corrupted, is freely available in most countries. This is the domain of the standard computer policies to which reference has already been made, and it need not be discussed further.

System Intrusion

There are three types of system intrusion:

1. A 'real time' intrusion, in which an unauthorized user (hacker or employee) gives a direct command that causes the system to fall over.

2. An indirect intrusion, e.g. a logic or time bomb, planted either by means of a virus or directly by the attacker.

3. The consequences of the system being deliberately taken down whilst a logic or time bomb is searched for (for example after an extortion threat) or the system's integrity is verified.

The interruption to the business caused by system non-availability for these reasons should be insurable, although there are no known policies to cover such risks.

Errors

The consequences of errors are insurable in certain restricted situations, i.e. the error that causes physical damage or breakdown, or that causes data to be lost or corrupted.

Other errors, such as operating errors (an operator loading the wrong tape, resulting in the double payment of financial transactions) or programming errors (defective software causing a system to fall over or transactions to be made incorrectly), are currently uninsurable. Restricted (Lloyd's) cover was available during 1989/90 for system unavailability due to software bugs; however, the cover was subject to so many restrictions that it was difficult to envisage circumstances in which a claim would be paid. The cover was ultimately withdrawn through lack of sales; whether the low sales resulted from the cover's restrictions being fully appreciated or from other reasons is not known.

System non-availability owing to programming errors is theoretically insurable. Both cause and losses should be relatively easy to prove, but the exposure only becomes serious for those companies whose tolerance period, the time for which they can be without their computer systems, is measured in hours or minutes rather than in days or weeks. Whilst the number of companies falling within this category continually increases, there is no evidence that there are sufficient potential buyers to make this form of insurance attractive to underwriters.
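Why the tolerance period dominates the exposure can be sketched with invented figures: if lost revenue grows roughly linearly with outage length beyond the period the business can absorb, the same outage costs an hours-tolerant firm a great deal and a days-tolerant firm nothing. The hourly revenue figure and the function name below are hypothetical, chosen purely for illustration:

```python
# Hypothetical business-interruption arithmetic: revenue lost beyond the
# tolerance period grows linearly with outage length, so firms with an
# hours-scale tolerance face a serious exposure that days-tolerant firms
# can absorb entirely.
HOURLY_REVENUE = 20_000  # invented figure

def interruption_loss(outage_hours: float, tolerance_hours: float) -> float:
    """Revenue lost beyond the period the business can tolerate."""
    return max(0.0, outage_hours - tolerance_hours) * HOURLY_REVENUE

# The same 48-hour outage seen by two different businesses:
print(interruption_loss(48, tolerance_hours=2))   # hours-scale tolerance
print(interruption_loss(48, tolerance_hours=72))  # days-scale tolerance: no loss
```

Under this sketch, the hours-tolerant firm loses 46 hours of revenue while the days-tolerant firm loses nothing, which is why only the former group would seriously consider buying such cover.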

Indirect Consequences

Many of the risks above could have indirect consequences. For example, the damage to the reputation of a financial institution if it became known how easily it had been defrauded could result in future loss of business, although precisely how much business would be virtually impossible to prove. Such indirect losses are currently uninsurable and will probably remain uninsurable for the foreseeable future as:

1. The consequences are difficult to predict and so the risk is difficult to appraise and premium difficult to calculate.

2. There is a strong feeling amongst insurers that risks of this nature are a commercial trading risk rather than falling within the category of being fortuitous and therefore insurable.

The Challenge of Risk Appraisal

There is no doubt that the risk appraisal can only be undertaken by a risk consultant conversant in DP terminology:

1. A proposal form is inadequate because, even if the right questions were asked, the answers could not be understood by the insurance underwriter. There is no known instance of an insurance underwriter being sufficiently conversant with computing technology and terminology, particularly that applicable to mainframes.

2. The non-DP insurance surveyor concentrates on things he is comfortable with, e.g. counting the fire extinguishers. He may try to overcome his lack of knowledge by using a checklist or questionnaire prepared by someone else, but he will be easily hoodwinked by technical gobbledegook used in reply to conceal weaknesses, which he rarely has the courage to admit he does not understand.

3. At some point in the future interrogation by intelligent software may be viable, but in the writer's experience there is nothing to beat the clues that may come from the body language of the respondent, the hesitation in the voice, the atmosphere that can be immediately sensed or other factors that tell the experienced consultant when to ask more questions and when not to. None of these can be replicated by software.

So, we will use a real human being, with DP risk expertise, to investigate and produce the following reports:

1. A full report for record purposes.

2. An executive summary report for non-DP management, possibly telling them for the first time, in language that they can understand, the nature of the risks that they are exposed to in their DP operations.

3. An executive report for the underwriter (who rarely will have time to read lengthy reports) similar to the management report and, like the management report, giving prioritized risk improvement recommendations. The underwriter’s report should however be coded to indicate those recommendations which, in the consultant’s belief, are critical to the underwriter granting cover and those which, if completed, should beneficially affect the premium that is charged.

4. A risk statement, similar to the executive summaries, stating the chief characteristics of computer usage, the key risk factors and the key risk protection elements. This statement, preferably on a single sheet of paper, should be the only thing that is warranted into the policy cover and should therefore be capable of frequent review by the insured and periodic audit by the consultant or by other external auditors.

By adopting this method all parties are treated fairly; the insurer is given relative risk information, and the insured should be able to avoid the possibility of claims being declined because of non-disclosure of material facts.

Theory into Practice

All of the thought processes, the risks identified as being insurable (many of which are currently not catered for by existing policies), and the method of risk appraisal suggested above have been incorporated into a new wording that Hogg are launching on the London insurance market later this year. Amazingly, it represents the first complete rethink of computer-related risks for many years. If insurance buyers take the cover it will undoubtedly encourage a general move by the insurance market towards addressing the computing risks of the 1990s rather than of the 1970s. However if, like many of its predecessors, it does not sell well because the average insurance buyer is not DP literate and cannot understand the cover on offer, the insurance industry may continue to sell policies for the risks of the 1970s well into the next millennium.

© David Davies, 1991

ESPIONAGE VIA

The Future Risk

Scott Magruder and Stanley Lewis, Jr.

University of Southern Mississippi

Viruses are in the news a great deal today and are a danger to worldwide business, indeed to any user of a computer. However, as bad as the situation is, it will get worse. The more that users of computers and information systems are unaware of and unprepared for virus attacks today, the more vulnerable they will be in the future. This paper describes some possible uses for viruses that are not currently known to exist. These situations will show the danger is far greater than was once thought.

A virus is a program that has the ability to replicate itself. It may do so by attaching itself to other programs or to specific portions of disks [White, et al.]. The virus can cause immediate damage to data or programs, but usually it waits a specific period of time before it does any damage. However, it is constantly looking for opportunities to make copies of itself in other programs or on the boot sector of floppy disks. This process of duplication is referred to as infecting the file or disk. Some viruses do not cause any major damage; they are more-or-less just nuisances. But if a virus erases data or programs, the damage may be non-repairable. A more insidious virus may change data so the user does not know anything has happened, but data used in decision-making may now be corrupt. Variations on the virus theme are given next.

A worm is a program which replicates itself and consumes computing power, thus preventing the owners of the computing power from using it as they desire. McAfee states that a worm does not replicate, but destroys data. This illustrates the fact that there are no generally accepted definitions of some of these terms. However, this does not decrease their potential to damage data and programs, and cause harm to the firm.

A Trojan horse is a program which is supposed to be used for a specific, known function. This function may be actually performed. However, there is another function in the program which also is performed every time the original program is executed, which may steal information or cause other harm.

A logic bomb or time bomb is a program which waits for a specific event or time period to occur and then it triggers, causing the damage it was programmed to perform.
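The replication ability described for viruses above compounds quickly. A toy counting model makes the point; it contains no infection logic at all, and the assumption that every infected program manages one successful copy per run is invented purely for illustration:

```python
# Toy model of viral replication: if each infected program copies itself into
# one clean program per generation, the infected count doubles each run until
# the pool of programs is exhausted. Purely a counting exercise, not a virus.
def infected_after(generations: int, total_programs: int,
                   initially_infected: int = 1) -> int:
    infected = initially_infected
    for _ in range(generations):
        # Each infected program infects one clean program, capped at the pool.
        infected = min(total_programs, infected * 2)
    return infected

print(infected_after(5, total_programs=1000))   # 32 infected after 5 runs
print(infected_after(12, total_programs=1000))  # entire pool of 1000 infected
```

Even under this simple doubling assumption, a single infected program overruns a thousand-program population in well under a dozen generations, which is why an unprepared user population will find itself so much more vulnerable in the future.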

The Robert Morris Jr. Worm, which infected the Internet/Arpanet network, gives a starting point upon which we can extrapolate about future problems with these types of programs. Internet/Arpanet are two major telecommunications networks that allow researchers to disseminate research results and discuss these results. Arpanet in addition allows civilian researchers to exchange information with some military researchers.

A great deal has already been written about this worm [Clinton White, Jr; Phillip Gardner; McAfee and Haynes; etc.]. This information will not be repeated here; however, the following specifics are useful in our extrapolation:
