
(C) Anton Setzer 2003 (except for pictures)

CS 411 Critical Systems: http://www-compsci.swan.ac.uk/~csetzer/lectures/critsys/02/index.html

Dr. Anton Setzer

http://www-compsci.swan.ac.uk/~csetzer/index.html

Lent Term 2003


A0. Introduction, Overview

(a) A case study of a safety-critical system failing.

(b) Two aspects of critical systems.

(c) Administrative Issues.

(d) Plan.

(e) Literature


(a) A Case Study of a Critical System Failing

Definition: A critical system is a

• computer, electronic or electromechanical system

• the failure of which may have serious consequences, such as

  – substantial financial losses,
  – substantial environmental damage,
  – injuries or death of human beings.


Three Kinds of Critical Systems

• Safety-critical systems.

  – Failure may cause injury or death to human beings.
  – Main topic of this module.

• Mission-critical systems.

  – Failure may result in the failure of some goal-directed activity.

• Business-critical systems.

  – Failure may result in the failure of the business using that system.


Examples of Critical Systems

• Safety-Critical

  – Medical Devices.
  – Aerospace
    ∗ Civil aviation.
    ∗ Military aviation.
    ∗ Manned space travel.
  – Chemical Industry.
  – Nuclear Power Stations.
  – Traffic control.
    ∗ Railway control system.
    ∗ Air traffic control.
    ∗ Road traffic control (esp. traffic lights).
  – Other military equipment.


Examples of Critical Systems (Cont.)

• Mission-critical

– Navigational system of a space probe.


Examples of Critical Systems (Cont.)

• Business-critical

  – Customer account system in a bank.
  – Online shopping cart.
  – Areas where secrecy is required.
    ∗ Defense.
    ∗ Secret service.
    ∗ Sensitive areas in companies.


Åsta Train Accident (January 5, 2000)

Report from November 6, 2000: http://odin.dep.no/jd/norsk/publ/rapporter/aasta/


[Diagram: single-track line between Rena and Rudstad; express train with 75 persons on board at Rena, local train with 10 persons on board at Rudstad, a road crossing in between; exit signal green or red?]


Rena: Train 2302, 75 passengers.
Rudstad: Train 2369, 10 passengers.

Sequence of Events:

• Railway with one track only. Therefore trains can only cross at stations.

• According to the timetable, the trains were to cross at Rudstad.


• Train 2302 is 21 minutes behind schedule. When it reaches Rena, the delay is reduced to 8 minutes. It leaves Rena after a stop, with a green exit signal, at 13:06:15, in order to cross 2369 at Rudstad.

• Train 2369 leaves Rudstad after a brief stop at 13:06:17, 3 minutes ahead of the timetable, probably in order to cross 2302 at Rena.


• The local train shouldn’t have had a green signal.

• 13:07:22: Alarm signaled to the rail traffic controller (no audible signal).

• The rail traffic controller sees the alarm at approx. 13:12.

• The traffic controller couldn’t warn the trains, because they had to be reached by mobile telephone and the correct number hadn’t been passed on to him.

• The trains collide at 13:12:35; 19 persons are killed.


Investigations

• No technical faults of the signals found.

• Train driver was not blinded by sun.

• Four incidents of wrong signals with similar signaling systems reported:

  – Exit signal green, then it suddenly turns red. The traffic controller says he didn’t give exit permission.

  – Hanging green signal.

  – Distant signal green, main signal red; the train drives over the main signal and pulls back. The traffic controller is surprised about the green signal.

  – 18 April 2000: A train has a green exit signal. When the driver looks again, he notices that the signal has turned red. The traffic controller hasn’t given exit permission.


• Several safety-critical deficiencies in the software found (some known before!)

• The software used was completely replaced.


• SINTEF (Foundation for Scientific and Industrial Research at the Norwegian Institute of Technology) found no mistake leading directly to the accident.
  Conclusion of SINTEF: no indication of abnormal signal status.
  ⇒ Mistake of the train driver, who died in the accident (human error).

• Assessment of the report by Railcert:

  – Criticism: SINTEF was only looking for single-cause faults, not for multiple causes.


Conclusion

• It is possible that the train driver of the local train was driving against a red signal.

• The fact that he stopped and left almost at the same time as the other train, 3 minutes ahead of time, makes it likely that he received an erroneous green exit signal due to some software error.

  It could be that the software, under certain circumstances, when giving an entrance signal into a block, for a short moment also gives the entrance signal for the other side of the block (a sketch of this kind of fault follows below).

  One possible reason for that could be the level road crossing in between, combined with race conditions.

• Even if this particular accident was not due to a software error, this software apparently has several safety-critical errors.
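
To make the suspected failure mode concrete, here is a minimal sketch of the kind of check-then-act race that can hand out a green exit signal at both ends of a single-track block at the same time. This is my own illustration in Haskell, not the actual interlocking software; all names (grantExit, blockFree, and so on) are invented, and the artificial delay only widens the race window for the demonstration.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Data.IORef (IORef, newIORef, readIORef, writeIORef, atomicModifyIORef')

-- One controller end tries to give a green exit signal into the shared block.
-- Check (step 1) and claim (step 2) are deliberately NOT one atomic action.
grantExit :: IORef Bool -> IORef [String] -> String -> IO ()
grantExit blockFree granted end = do
  free <- readIORef blockFree        -- step 1: check that the block is free
  threadDelay 1000                   -- widen the race window for the demo
  if free
    then do
      writeIORef blockFree False                          -- step 2: claim the block
      atomicModifyIORef' granted (\gs -> (end : gs, ()))  -- record the green signal
    else putStrLn (end ++ ": exit signal stays red")

main :: IO ()
main = do
  blockFree   <- newIORef True       -- shared state: is the single-track block free?
  granted     <- newIORef []
  doneRena    <- newEmptyMVar
  doneRudstad <- newEmptyMVar
  _ <- forkIO (grantExit blockFree granted "Rena"    >> putMVar doneRena    ())
  _ <- forkIO (grantExit blockFree granted "Rudstad" >> putMVar doneRudstad ())
  takeMVar doneRena
  takeMVar doneRudstad
  ends <- readIORef granted
  putStrLn ("green exit signal granted at: " ++ show ends)
```

Because the check and the claim are separate steps, both ends frequently appear in the list, i.e. both trains see green. Wrapping check-and-claim into a single atomic operation (for example one atomicModifyIORef', or a critical section protected by an MVar) removes the race; the point of the sketch is only how easily unsynchronised code can violate the one-train-per-block invariant.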


Conclusion (Cont.)

• In the protocol of an extremely simple installation (Brunna, 10 km from Uppsala), which was established in 1957 and exists in this form in 40 installations in Sweden, a safety-critical error was found when it was formally verified with a theorem prover in 1997.

• Lots of other errors in the Swedish railway system were found during formal verification.


Causal Factors.

We consider a three-level model (Leveson, pp. 48 – 51).

• Level 1: Chain of events.

– Described above.

• Level 2: Conditions that allowed the events on Level 1 to occur.

• Level 3: Conditions and constraints

  – that allowed the conditions on the second level to cause the events at the first level, e.g.
    ∗ Technical and physical conditions.
    ∗ Social dynamics and human actions.
    ∗ Management system, organizational culture.
    ∗ Governmental or socioeconomic policies and conditions.


Root Causes

• Problems found in a Level 3 analysis form the root causes.

  – Root causes: weaknesses in general classes of accidents, which contributed to the current accident but might also affect future accidents.
  – If the problem behind a root cause is not fixed, almost inevitably an accident will happen again.
  – There are many examples in which, despite a thorough investigation, the root cause was not fixed and the accident happened again.
  – Example: the DC-10 cargo-door saga.
    ∗ Faulty closing of the cargo door could cause the cabin floor to collapse.
    ∗ In 1972 the cargo-door latch system of a DC-10 failed, the cabin floor collapsed, and only by chance was the plane not lost.
    ∗ As a consequence, a fix to the cargo doors was applied.
    ∗ The root cause, namely that the cabin floor collapses when the cargo door opens, wasn’t fixed.
    ∗ In 1974 a DC-10 crashed, killing 346 people.


Level 2 Conditions

• The driver left the station although the signal should have been red.

  – The local train probably had a green light for a short period, maybe caused by a software error.

  – Incidents had happened before, but were not investigated.
  – The software wasn’t written according to the highest standards.

• The local train left early.

  – The train driver was relying on his watch (which might go wrong) and not on an official clock.

• The local train drove over a light that was possibly red at that moment.

  – There was no ATC (automatic train control) installed, which stops trains that drive over a red light.


Level 2 Conditions (Cont.)

• The traffic controller didn’t see the control light.

  – The control panel was badly designed.
  – A visual warning signal is not enough when the system detects a possible collision of trains.

• The rail controller couldn’t warn the driver, since he didn’t know the mobile telephone number.

  – To rely on a mobile telephone network in a safety-critical system is extremely careless.
    ∗ Mobile phones often fail.
    ∗ The connections might be overcrowded.
    ∗ Connection to mobiles might not work in certain areas of the railway network.
  – The procedure for passing on the mobile phone numbers was badly managed.


Level 2 Conditions (Cont.)

• The fire safety of the train engines was not very good.


Level 3 Constraints and Conditions

• Cost-cutting precedes many accidents.

  – It is difficult to maintain such a small railway line.
  – The resulting cheap solutions might be dangerous.

• Flaws in the software.

  – The control of railway signals is a safety-critical system and should be designed with a high level of integrity.

  – It is very difficult to write correct protocols for distributed algorithms.
  – Need for verified design of such software.


Level 3 Constraints and Conditions (Cont.)

• Poor human-computer interface at the control panel.

  – Typical for lots of control rooms (a problem in nuclear power stations); the criticality of such a design is not yet sufficiently acknowledged (see the problems with the UK air traffic control system, where displays are sometimes difficult to read).


Level 3 Constraints and Conditions (Cont.)

• The railway controller was overworked.

• Overconfidence in ICT.

  – Otherwise one wouldn’t have used such badly designed software.
  – Otherwise one wouldn’t have simply relied on mobile phones; at least a special agreement with the mobile phone companies should have been set up.

• Flaws in management practices.

  – No protocol for dealing with mobile phone numbers.
  – No mechanism for dealing with incidents.
    ∗ A mechanism should have been established to thoroughly investigate them.
    ∗ Most accidents are preceded by incidents, which are not taken seriously enough.


Lessons to be Learned

• Safety-critical systems are very complex – the system aspect.

  – Software, which includes parallelism.
  – Hardware.
    ∗ Might fail (the light bulb of a signal might burn through; relays age).
    ∗ Has to operate under adverse conditions (low temperatures, rain, snow).
  – Human-computer interaction.
  – Protocols the operators have to follow.
  – Training of operators.
  – Cultural habits.


Lessons to be Learned (Cont.)

• A sequence of events had to happen in order for the accident to take place.

  – Preliminary events
    = events which influence the initiating event; without them the accident cannot advance to the next step (the initiating event).
    In the main example:
    ∗ The express train is late. Therefore the crossing of the trains is first moved from Rudstad to Rena.
    ∗ The delay of the express train is reduced. Therefore the crossing of the trains is moved back to Rudstad.

  – Initiating event, trigger event.
    The mechanism that causes the accident to occur.
    In the main example:
    ∗ Both trains leave their stations on a collision course, maybe caused by both trains having green signals.


Lessons to be Learned (Cont.)

  – Intermediate events.
    Events that may propagate or ameliorate the accident.
    ∗ Ameliorating events can prevent the accident or reduce its impact.
    ∗ Propagating events have the opposite effect.

  – When designing safety-critical systems, one should
    ∗ avoid triggering events,
      · if possible by using several independent safeguards,
    ∗ add additional safeguards, which prevent a triggering event from causing an accident or reduce its impact.


Lessons to be Learned (Cont.)

• Usually, at the end of an investigation, the conclusion is “human error”.

  – The architecture of the software was investigated, but no detailed search for a bug was done.

• Afterwards, concentration is on trigger events, with not much attention to preliminary and intermediate events – the root cause is often not fixed.

• Most failures of safety-critical systems were caused by multiple failures.


(b) Two Aspects of Critical Systems

• (1) Software engineering aspect.

  – System aspect.
    ∗ Computer system.
    ∗ Hardware connected (hardware failure in sensors, relays and computers).
    ∗ Interaction with environment.
    ∗ Human-machine interaction.

  – Methods for identifying hazards and measuring risk (HAZOP, FMTA, etc.).

  – Standards.
  – Documentation (requirements, specification etc.).
  – Validation and verification.
  – Based on techniques used in other engineering disciplines (esp. chemical industry, nuclear power industry, aviation).


Two aspects (Cont.)

• (2) Tools for writing correct software.

  – Software bugs cannot be avoided by careful design alone.
    ∗ Especially with distributed algorithms.

  – Need for verification techniques using formal methods.
  – Different levels of rigour:

    (1) Application of formal methods by hand, without machine assistance.
    (2) Use of formalized specification languages with some mechanized support tools.
    (3) Use of fully formal specification languages with machine-assisted or fully automated theorem proving.

    (A toy illustration of mechanized checking follows below.)

  – However, such methods don’t replace software engineering techniques.
    ∗ Formal methods idealize a system and ignore aspects like hardware failures.
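
To give a flavour of what mechanized checking buys, in the spirit of level (3) but vastly simpler than real theorem proving, the toy sketch below (my own example, not part of the course material) enumerates every reachable state of a hypothetical two-ended single-track block and checks the safety invariant "the exit signal is never green at both ends".

```haskell
-- Toy model of one single-track block with an exit signal at each end.
-- Purely illustrative: the transition relation is invented, not taken
-- from any real interlocking.
data Signal = Red | Green deriving (Eq, Show)

data State = State
  { renaExit    :: Signal
  , rudstadExit :: Signal
  , blockFree   :: Bool
  } deriving (Eq, Show)

-- An exit may only turn green while the block is free; a third transition
-- models the train clearing the block and everything resetting.
step :: State -> [State]
step s =
  [ s { renaExit    = Green, blockFree = False } | blockFree s ] ++
  [ s { rudstadExit = Green, blockFree = False } | blockFree s ] ++
  [ State Red Red True ]

-- Safety invariant: never green at both ends of the single-track block.
safe :: State -> Bool
safe s = not (renaExit s == Green && rudstadExit s == Green)

-- All states reachable from the initial state (naive search, fine for 8 states).
reachable :: [State]
reachable = go [State Red Red True] []
  where
    go []             visited = visited
    go (s : frontier) visited
      | s `elem` visited = go frontier visited
      | otherwise        = go (frontier ++ step s) (s : visited)

main :: IO ()
main = print (all safe reachable)   -- True: the invariant holds in this toy model
```

A real installation has far more state and genuine concurrency, which is exactly why hand inspection breaks down and machine support (model checkers, proof assistants) is needed.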


Two Streams in this Module

• Stream A

  – Software engineering aspect + a general overview of formal methods.
  – Industrial practices.


Two Streams in this Module (Cont.)

• Stream B (interleaved with Stream A):

  – A closer look at one prototypical example of a tool which allows one to write 100% correct software.

  – Programming with dependent types, based on Martin-Löf type theory (a small taste follows below).
  – Still an area of research, with a few successful industrial applications (using the theorem prover Coq).
  – Use of the theorem prover Agda.
    ∗ This part will be heavily machine based.
    ∗ Experimental system.
    ∗ Interesting: a theorem prover which is used like a programming language.
    ∗ Application of its ideas is not limited to safety-critical software.
      · Dependent types will sooner or later be used in ordinary programming languages. (Templates in C++, or soon in Java (??), are one approximation.)

  – The goal is that students have been in contact with one proof assistant.
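
As a first taste of what programming with dependent types means, here is a tiny sketch. It is written in Haskell (using GADTs and promoted data kinds) purely as an approximation, because that is easier to reproduce here than Agda's own notation; none of the names below come from the course material. The idea borrowed from Martin-Löf type theory is that the length of a vector is part of its type, so taking the head of an empty vector is rejected by the type checker instead of failing at run time.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Natural numbers, promoted to the type level.
data Nat = Zero | Succ Nat

-- A vector whose length is recorded in its type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- Only vectors whose type guarantees at least one element are accepted,
-- so no case for Nil is needed (and none would type-check).
safeHead :: Vec ('Succ n) a -> a
safeHead (Cons x _) = x

-- Mapping preserves the length, and the type says so.
vmap :: (a -> b) -> Vec n a -> Vec n b
vmap _ Nil         = Nil
vmap f (Cons x xs) = Cons (f x) (vmap f xs)

main :: IO ()
main = print (safeHead (vmap (+ 1) (Cons 1 (Cons 2 Nil))))
-- Prints 2.  `safeHead Nil` would be a compile-time type error.
```

In a language like Agda one can go much further: arbitrary propositions about a program become types, and a proof is just another program checked by the same type checker.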


(c) Administrative Issues

Address:
Dr. A. Setzer
Dept. of Computer Science
University of Wales Swansea
Singleton Park
SA2 8PP
UK

Room 211, Faraday Building

Tel. (01792) 513368

Fax. (01792) 295651

Email [email protected]


Assessment

• 80% Exam.

  – One question concerning Stream A.
  – One question that is a mixture of Streams A and B.
  – One question concerning Stream B.

• 20% coursework:

  – 4 small assignments, each counting 5% (plan; might be changed).
    ∗ Handed out approx. every 2nd week.
    ∗ Due two weeks later.
    ∗ Mainly associated with Stream B.


Timetable, Course Material

• Two lectures per week.

  – Monday, 13:00, Robert Recorde Room.
  – Thursday, 12:00, Robert Recorde Room.

• The web page contains the overhead slides from the lectures. Course material will be continually updated.


(d) Plan

• Learning outcome:

  – Familiarity with issues surrounding safety-critical systems.
  – Understanding of techniques for specifying and verifying high-integrity software.
  – Experience with one proof assistant.


Plan Stream A

A0. Introduction, overview.
A1. Safety criteria.
A2. Hazard and risk analysis.
A3. Programming languages for writing safety-critical software.
A4. Fault tolerance.
A5. The development cycle of safety-critical systems.
A6. System reliability.
A6. Design methods and tools.
A7. Formal methods.
A8. Verification, validation, testing.
A9. Case studies.

• Probably not all topics covered (last year only A3 was reached).


Plan Stream B

B1. Introduction.
B2. The logical framework.
B3. Data types.
B4. Interactive programs in dependent type theory.
B5. Case studies.


(e) Literature

• In general, the module is self-contained.

• In the following, a list of books which might be of interest if you later have to study critical systems more intensively.


Books Relevant for Stream A

• Main course book:

– Neil Storey: Safety-critical computer systems. Addison-Wesley, 1996.

• Supplementary books on software-engineering aspects:

  – Nicolas J. Bahr: System safety and risk assessment: a practical approach. Taylor & Francis, 1997.
    Intended as a short book for engineers of many disciplines.

  – Nancy G. Leveson: Safeware. System safety and computers. Addison-Wesley, 1995.
    Concentrates mainly on human and sociological aspects.

  – Parts 4 and 5 in Ian Sommerville: Software Engineering. 6th Edition, Addison-Wesley, 2001.

  – Peter G. Neumann: Computer-related risks. Addison-Wesley, 1995.
    A report on a lot of (100?) accidents and incidents of critical errors in software.


Books Relevant for Stream A (Cont.)

• Some books on general formal methods:

  – Jonathan Jacky: The way of Z. Practical programming with formal methods. Cambridge University Press, 1997.
    Practical application of the specification language Z in medical software. Probably not to be treated here.

  – Steve Schneider: The B-method. Palgrave, 2001.
    The B-method is a specification language which might replace Z in the future. It has industrial applications.

  – John Barnes: High integrity Ada. The SPARK approach.
    Use of a subset of Ada with proof annotations in order to develop secure software. Developed and used in industry.

  – Michael R. A. Huth, Mark D. Ryan: Logic in Computer Science. Modelling and reasoning about systems. Cambridge University Press, 2000.
    An introduction to logic. Covers model checking, an industrial method, very well, but probably not to be treated here.


Literature Relevant for Stream B

• B. Nordström, K. Petersson, J. M. Smith: Programming in Martin-Löf’s type theory. Available via http://www.cs.chalmers.se/Cs/Research/Logic/book/.
  Course book for this stream, although a little bit too high level.

• B. Nordström, K. Petersson, J. M. Smith: Martin-Löf’s type theory. Handbook of Logic in Computer Science, Vol. 5, pp. 1–37. Oxford Univ. Press, 2000.
  Available via ftp://ftp.cs.chalmers.se/pub/cs-reports/papers/smith/hlcs.ps.gz

• Aarne Ranta: Type-theoretic grammar. Clarendon Press, 1995.
  Use of type theory in linguistics and for translation between languages. Supposed to contain a good and simple introduction to type theory.
