
Proceedings 19th Triennial Congress of the IEA, Melbourne 9-14 August 2015


Assessing Diagnostic Expertise in Practice

Mark Wiggins

Department of Psychology and Centre for Elite Performance, Expertise, and Training, Macquarie University, Sydney, NSW, AUSTRALIA

1. Introduction

As technology improves and operators become further ‘distanced’ from the process systems over which they exercise control, they become increasingly reliant upon the interpretability of user interfaces in diagnosing the system state. Diagnosis is a term normally associated with medical practice. However, diagnostic skills are evident in other domains, although they may be framed around terms such as situation assessment (Klein, Calderwood, & Clinton-Cirocco, 2010) or sensemaking (Wu, Convertino, Ganoe, Carroll, & Zhang, 2013).

Fundamentally, accurate and timely diagnosis is dependent upon the capacity to recognise environmental features and interpret the pattern as meaningful. In turn, this capacity is dependent upon a repertoire of feature-event/object relationships in memory in the form of cues (Juslin, 2000). Evidence for the utilisation of cues comes from expert-novice comparisons, where the superior response latency and accuracy of experts is explained by the rapid activation of cue-based associations in memory (Reischman & Yarandi, 2002).

2. Practice Innovation

The successful utilisation of cues requires at least four components to be satisfied: (a) the capacity to identify key features from an array; (b) the capacity to associate features and events/objects in memory; (c) the capacity to distinguish relevant from less relevant features; and (d) the capacity to prioritise the acquisition of feature-related information during situation assessment (Wiggins, 2014). Together, performance on these components has differentiated the accuracy and reliability of operators’ situation assessment in paediatrics (Loveday, Wiggins, Searle, Festa, & Schell, 2013), power control systems, software engineering (Loveday, Wiggins, & Searle, 2014), and aviation (Wiggins, Azar, Hawken, Loveday, & Newman, 2014). This paper illustrates an approach to norming performance on these types of tasks, and shows how the information arising from this assessment can assist in the selection and assessment of operators, the development and evaluation of training systems, comparative analyses between organisations, and the calculation of system-wide reliability assessments.

3. Sources of Information

In developing the components that enable the differentiation of diagnostic skills, it was important to ensure that each component, in isolation, demonstrated a capacity to differentiate skilled from less skilled operators. For example, in associating features and events/objects in memory, a paired association task was developed based on research involving expert and novice forensic investigators (Morrison, Wiggins, Bond, & Tyler, 2013). The task involves the presentation of a feature followed by an event/object, and an assessment on the part of the operator as to the extent to which the feature and event/object are related. Morrison et al. (2013) demonstrated that experts were more consistent than non-experts in their assessments of the associations that were presented.
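The paper does not specify how consistency was scored, but one simple way to operationalise it is the variance of an operator's repeated relatedness ratings for the same feature-event pair, with lower variance indicating more consistent assessments. The following sketch is purely illustrative — the data, pair names, and scoring rule are assumptions, not the published procedure:

```python
from statistics import pvariance

def consistency_score(ratings_by_pair):
    """Mean within-pair variance of repeated relatedness ratings.

    ratings_by_pair maps a (feature, event) pair to the list of ratings
    the operator gave across its repeated presentations. Lower scores
    indicate more consistent assessments of the same association.
    """
    variances = [pvariance(r) for r in ratings_by_pair.values() if len(r) > 1]
    return sum(variances) / len(variances)

# Hypothetical operators: one rates repeated pairs near-identically,
# the other drifts between presentations of the same pair.
expert = {("smoke", "fire"): [9, 9, 8], ("noise", "leak"): [3, 3, 3]}
novice = {("smoke", "fire"): [9, 4, 7], ("noise", "leak"): [2, 8, 5]}

assert consistency_score(expert) < consistency_score(novice)
```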

In distinguishing relevant from less relevant features, the design of the task was drawn from the Cochran-Weiss-Shanteau (CWS) index, in which experts differ from non-experts in their categorisation of relationships (Weiss & Shanteau, 2003). Specifically, when asked to assess which of a series of feature-event associations was important in a preceding situation assessment task, experts tended to be more categorical than non-experts (see Figure 1).
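The CWS index is typically computed as a ratio of discrimination (how differently a judge rates different stimuli) to inconsistency (how differently a judge rates repeats of the same stimulus). A minimal sketch of that ratio follows; the ratings and cue names are hypothetical, and this is not the specific scoring used in the task described above:

```python
from statistics import mean, pvariance

def cws_index(ratings_by_stimulus):
    """Cochran-Weiss-Shanteau index: discrimination / inconsistency.

    ratings_by_stimulus maps each stimulus to the judge's ratings across
    its repeated presentations. Higher values suggest more expert-like
    judgement: different stimuli are rated differently, while repeats of
    the same stimulus are rated the same.
    """
    groups = list(ratings_by_stimulus.values())
    # Discrimination: variance of the mean rating across different stimuli
    discrimination = pvariance([mean(g) for g in groups])
    # Inconsistency: average variance within repeats of the same stimulus
    inconsistency = mean(pvariance(g) for g in groups)
    return discrimination / inconsistency if inconsistency else float("inf")

# A categorical judge separates stimuli sharply and repeats ratings exactly;
# a diffuse judge barely separates stimuli and drifts on repeats.
categorical = {"cue_A": [9, 9], "cue_B": [1, 1], "cue_C": [8, 8]}
diffuse = {"cue_A": [6, 4], "cue_B": [5, 7], "cue_C": [4, 6]}

assert cws_index(categorical) > cws_index(diffuse)
```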


Figure 1. Example of the rating stage of the Feature Discrimination Task

Finally, in establishing the prioritisation of features, a process tracing task was developed that assessed the sequence of information acquisition in response to a problem against the sequence in which the task-related information was presented (see Figure 2). Based on work in the aviation context (Wiggins & O’Hare, 1995), experts tend to be less sequential than non-experts in their acquisition of task-related information.

4. Findings

The combination of tasks has been used successfully to differentiate the performance of experts and non-experts. Referred to as the EXPERT Intensive Skill Evaluation (EXPERTise 2.0), it is a situation judgement test that is customised to the particular interface or industrial setting within which it is to be administered. It can be completed on-line, and yields summary data against a normed dataset.

5. Discussion

Normed data are currently being compiled in a variety of industries; these will enable ongoing assessments of performance and, ideally, provide a platform for the assessment of prospective process control operators during selection. EXPERTise 2.0 also has the potential to enable the evaluation of training initiatives that target the development of diagnostic skills. This includes the capacity to identify precisely any deficiencies in cue utilisation, thereby enabling the introduction of targeted and potentially more cost-effective training interventions.
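A comparison against normed data can be pictured as positioning a candidate's task score within a reference distribution, for example as a z-score. The sketch below is a generic illustration under that assumption — the figures and variable names are invented, not EXPERTise norms:

```python
from statistics import mean, stdev

def z_score(score, norm_sample):
    """Position a candidate's task score within a normed reference sample,
    in standard-deviation units from the reference mean."""
    return (score - mean(norm_sample)) / stdev(norm_sample)

# Hypothetical normed scores for one diagnostic task in one industry
norm_sample = [52, 61, 58, 49, 63, 55, 60, 57]
candidate = 66

z = z_score(candidate, norm_sample)
print(f"candidate is {z:+.2f} SD from the reference mean")
```

A positive z-score places the candidate above the reference group; repeating the comparison per task would yield the kind of summary profile described above.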

Finally, EXPERTise 2.0 provides an evidence-based approach to the assessment of newly developed interfaces, so that it becomes clear whether the introduction of new technologies enhances or degrades the existing capacity for diagnosis. This information may be used to initiate a change in the design. Alternatively, targeted training interventions might be developed that can reduce the potential for diagnostic errors.

Figure 2. Example of the Transition Task

Acknowledgements

This innovation would not have been possible without the support of the Australian Research Council, TransGrid, PowerLink, Transpower New Zealand, and the Australian Energy Market Organisation.

References

Juslin, P. N. 2000. Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26(6), 1797.

Klein, G., Calderwood, R., & Clinton-Cirocco, A. 2010. Rapid decision making on the fire ground: The original study plus a postscript. Journal of Cognitive Engineering and Decision Making, 4(3), 186-209.

Loveday, T., Wiggins, M., & Searle, B. 2014. Cue utilization and broad indicators of workplace expertise. Journal of Cognitive Engineering and Decision Making, 8(1), 98-113.

Loveday, T., Wiggins, M. W., Searle, B. J., Festa, M., & Schell, D. 2013. The capability of static and dynamic features to distinguish competent from genuinely expert practitioners in pediatric diagnosis. Human Factors, 55(1), 125-137.

Morrison, B. W., Wiggins, M. W., Bond, N. W., & Tyler, M. D. 2013. Measuring relative cue strength as a means of validating an inventory of expert offender profiling cues. Journal of Cognitive Engineering and Decision Making, 7, 211-226.

Reischman, R. R., & Yarandi, H. N. 2002. Critical care cardiovascular nurse expert and novice diagnostic cue utilization. Journal of Advanced Nursing, 39(1), 24-34.

Weiss, D. J., & Shanteau, J. 2003. Empirical assessment of expertise. Human Factors, 45(1), 104-116.

Wiggins, M. W. 2014. The role of cue utilisation and adaptive interface design in the management of skilled performance in operations control. Theoretical Issues in Ergonomics Science, 15(3), 283-292.

Wiggins, M. W., Azar, D., Hawken, J., Loveday, T., & Newman, D. 2014. Cue-utilisation typologies and pilots’ pre-flight and in-flight weather decision-making. Safety Science, 65, 118-124.

