APPROPRIATE TOUCH INTERACTIONS FOR
NAVIGATING 3D OBJECTS
IAN BERESFORD
SUPERVISOR:
DR. CHRIS BOWERS
BSC (HONS) COMPUTING
UNIVERSITY OF WORCESTER
MAY 2015
ABSTRACT

The discovery of a medieval ship in a Newport riverbed has presented not only an exciting historical find, but also the potential for a bespoke, educational exhibit, encompassing a digital solution to understand more about this unusual vessel.

This study explores the potential for public, multi-touch technology as part of the exhibit, in the form of a 3D simulated model of the Newport ship, focusing primarily on the effective implementation of an appropriate touch interaction system.
Following research into this area, and a usability study involving the Shipshape application, this study finds that whilst neither of the tested direct and relative control systems is currently ready for deployment in a public space, the usability investigation found that direct input is the more usable interaction system, and received a more positive response from users.

With the refinements, alterations and recommendations covered in this report, an effective interaction system for the Shipshape simulation should be ready for implementation and installation.
TABLE OF CONTENTS

Abstract
i. Acknowledgements
1. Introduction
   1.1. Human-Computer Interaction
   1.2. Aims and Objectives
2. Literature Review
   2.1. Multi-Touch Technology in Public Spaces
        2.1.1. Encouraging Engagement
   2.2. Touch Interaction for Social Learning
   2.3. Defining Direct & Relative Input
        2.3.1. Gesture Control for Direct Input
   2.4. Supporting User Experience for Touch Interaction
3. Research Methodology
   3.1. Usability Testing
   3.2. Measuring Usability
   3.3. Research Session Structure
   3.4. Observational Techniques
   3.5. Pilot Testing
4. Pilot Study Findings
   4.1. Pilot Study Setup
   4.2. Issues Raised Through the Pilot Study
   4.3. Methodology Revision
5. Results & Analysis
   5.1. Usability Surveying
   5.2. Performance Data
   5.3. Observations and Video Recording
6. Discussion
   6.1. Usability Survey Feedback
        6.1.1. Support Requirements
        6.1.2. System Appeal
        6.1.3. Ease of Use
   6.2. Relative Control Expectations
   6.3. Direct Input Gestures
7. Reflection
   7.1. Survey Concerns
   7.2. Item Locations
8. Conclusion
   8.1. Recommended System
   8.2. Recommended System Revisions and Modifications
   8.3. A Successful Methodology?
9. Reference List
10. Appendices
    Sample SUS Survey
I. ACKNOWLEDGEMENTS

I would like to thank a number of people for their help and support throughout the completion of this study.
Firstly, Dr Chris Bowers, without whom I would not have been introduced to this
interesting topic. In addition to providing the foundation for this project, he has also
offered many hours of support in ensuring the completion of this project, for which I
am immensely grateful.
Second, I wish to acknowledge and thank all of the participants who have taken the time to contribute to the project. Without their valuable insight and feedback, this study would not have been possible.

Finally, I thank my family for all of their support, not just throughout this study, but also throughout my entire time at the University of Worcester.
1. INTRODUCTION
The discovery of the Newport ship represents a unique and interesting historical find. The ship is believed to be the only known vessel of its kind, and an extensive recovery and preservation operation has been underway since its discovery in 2002, in an attempt to reconstruct and study the ship. If successful, this could prove a valuable asset and provide a wealth of information about seafaring craft of its age.
The recovery is a time-consuming and delicate process, which involves unearthing and extracting thousands of individual pieces of timber from the Usk riverbed, before cataloguing and analysing them, and allowing them to be treated, to preserve them for reconstruction.
As each piece was documented, a three-dimensional scan was taken, which has allowed historians and conservationists to build a picture of how the vessel might look when re-assembled.
However, the preservation process is a long, drawn-out procedure, and at this time it is expected that the first timbers will not be ready for reassembly until at least 2016 (Friends of the Newport Ship, 2013). Re-building the vessel and the proposed museum structure will then undoubtedly take many more years to complete.
Owing to the historical significance, and unique nature of this ship, historians have
been keen to study it in greater detail, in order to better understand its purpose, its
construction, and the reasons for its unique design. Unfortunately, the time and labour
requirements of the reconstruction and preservation process have the potential to
hamper this research.
In response to this, the 3D scans of each piece of timber have been digitally
reassembled, to produce an accurate three-dimensional model of the vessel. This has
been packaged into an application, referred to as Shipshape. It is hoped that this application will not only provide an invaluable tool with which to study the Newport ship, but also allow findings about the ship to be prepared and documented, ready for public consumption in a proposed exhibition.
Whilst previous projects have been undertaken to develop and construct the
Shipshape application, it remains incomplete. Until now, there has been no agreed way to interact with the application. It is imperative that an effective interaction system is implemented, one that can be used by both experts and the general public as part of the exhibition of the ship.
1.1. HUMAN-COMPUTER INTERACTION
Over the past ten years the way we, as humans, interact with technology has evolved
dramatically. No longer do we rely solely on simple technologies such as the keyboard and mouse. Instead, there is a broad range of human-computer interaction (HCI) methods available, from speech interpretation to eye-tracking software.

However, there has been one piece of technology which, in recent years, has become ever more commonplace in day-to-day life: touch interaction.
In reality, touch interaction through screens and pads has been available since the 1970s (CERN, 2010), but it is only really since the late 2000s that we have become accustomed to encountering these systems on a regular basis, with touch-based mobile devices, including phones and tablets, experiencing a surge in popularity.
The range of applications for touch technology is vast, and it provides an interesting
alternative to more traditional controller-based solutions.
1.2. AIMS AND OBJECTIVES
Following on from previous research and development of the Newport ship project,
this report addresses the issue of usability when designing and implementing an
interactive, multi-touch walk-up exhibition, simulating a navigable 3D object.
The primary objective of this study is to determine the more effective and user-friendly of two different touch control systems: direct and relative input. It is hoped that the results of this investigation will allow an informed decision to be made as to the most appropriate interaction system.
2. LITERATURE REVIEW
Research undertaken before the study got underway will be used to build an understanding of the implications of creating a multi-touch based system for public use, and of how effective design steps can be used to maximise both usability and performance, ensuring a beneficial and rewarding experience for the user.
2.1. MULTI-TOUCH TECHNOLOGY IN PUBLIC SPACES
Compared to alternative solutions in public places, multi-touch interfaces offer a variety of benefits. According to Jacucci et al. (2010: p.2), many institutions desire exhibitions which are capable of offering “a variety of content, themes and categories”. This makes multi-touch an ideal solution, combining a digital display with ample potential for user interaction.
As systems such as multi-touch allow more than one user to interact with the display at the same time, these exhibits can also stimulate social learning, fostering what Brignull and Rogers (2003: p.4) refer to as the “honeypot effect”: a phenomenon where bystanders and passers-by are drawn to the display by interest in the ongoing interaction, thus opening the potential for discussion and socialisation between users. Of course, many other external factors may determine the success of an exhibit of this nature, including positioning and footfall, to name a few, but the core elements needed to create this “honeypot” are already well established with the use of multi-touch (Hornecker et al. 2007: p.5).
However, the use of multi-touch technology in this walk-up environment presents obstacles, in particular in the ways users are able to take control of the exhibit.
Owing to the nature of these displays, it is important that users are able to approach
an interactive installation and discover how to control and manipulate it in a matter of
seconds. Throughout the course of the day one display may be passed by any
number of potential users, and it is important that as many as possible have the
chance to explore it. Therefore, great consideration should be taken when designing
and implementing the software to be deployed at these exhibitions.
It is important that users are able to quickly understand these installations and feel a sense of progress, to ensure complete engagement with the exhibit. If the user is not drawn in by the activity, and finds using the system arduous and unrewarding, it is likely they will not persevere with the activity, and will instead move on (Hornecker 2008: p.4).
2.1.1. ENCOURAGING ENGAGEMENT
Design and installation of interactive technologies in public locations must take into
account a variety of factors which may not be present in other technology areas, such
as the web.
Perhaps most prominent is the ability for a display to be able to draw in new users,
and promote engagement. Whilst the honeypot effect is a desired situation, it is
important to understand the social factors that should be accounted for if this effect is
to be achieved.
Brignull and Rogers (2003: p.8) point out that for a user to participate in the activity
and begin to engage, they must first be willing to invest their time and effort, likening
this to a currency, which will only be invested should the user envisage themselves
benefiting.
In order to encourage the user to take the time, they must first be able to easily understand what interaction is required from them, and how they will pick up and use the system. Therefore, it is important that the system is transparent and easy to picture from the outset (Brignull and Rogers 2003). Although the honeypot effect dictates
that users will learn from watching others using the system before them, it should be
understood that the presence of this initial actor is not always guaranteed.
Consequently, an element of independent discovery and learning may be necessary to
act as a catalyst for the honeypot.
Another fundamental aspect in determining successful user engagement is the ease of
recovery when encountering issues or making a mistake. Brignull and Rogers (2003)
again highlight this issue, suggesting that users are less likely to engage with the
system if they encounter difficulties. The fact that the user is being watched by others
is suggested to heighten pressure upon the user, thus resulting in embarrassment if
they are unable to achieve their objective, or encounter issues.
Brignull and Rogers (p.6) also quote one of their participants as saying “...there was pressure to formulate something not too dumb”, indicating pressure upon the
participant to respond in a manner which would be socially accepted by those
observing.
In order to combat this effect, it is important that the system is designed in such a way as to either minimise the noticeable effect of errors, or allow them to be easily corrected.
2.2. TOUCH INTERACTION FOR SOCIAL LEARNING
Hornecker (2008: p.4) indicated, in a study of a touch table installed at the Berlin Museum of Natural History, that many users were seen to observe others interacting with the setup before participating themselves. As a result, they were able to quickly understand the basic functionality of the system.
As multi-touch is an incredibly versatile technology, it is possible to produce scalable systems which allow many users to interact with the system at once, such as using a large projected image with several interaction points located around the screen's edges (Shen et al., 2009). This means that users are able not only to learn interaction techniques from each other, but also to collaborate in the interest of completing their objective.
Within education, collaborative learning has been of interest for a number of years, with a great number of studies being conducted around it. Barron's (2009) research has suggested that groups which maintain an open forum and share ideas are generally able to solve problems more effectively, learning from each other along the way.
Higgins et al. (2012) conducted further research, building on Barron's work and focusing specifically on the use of multi-touch collaboration in comparison to traditional, paper-based alternatives. Using a sample of school children, the study concluded that those who engaged using multi-touch technology were able to begin making collaborative decisions and discussing opinions in greater depth at an earlier stage than those using paper-based alternatives. Due to the affordances available to multi-touch technology compared to its traditional counterparts, Higgins et al. suggest that groups were able to share a joint attention, which allowed problem solving to get underway much faster, encouraging the participants to explore alternative solutions.
In an exhibition space it is important that the users of the system are able to extract
the desired information within a reasonable timeframe. Not only does collaboration
allow for multiple users to be interacting with the system at once, increasing
availability, but in light of the above research, may also reduce the time taken for a
user to absorb the required information. Not only would this allow users to distribute their time across multiple exhibits more evenly, but it is also possible that overall engagement with the system would improve, given Brignull and Rogers' suggestion that potential users will weigh up the time and effort against the benefits of interacting.
2.3. DEFINING DIRECT & RELATIVE INPUT
Li et al. (2014: p.2) describe direct input as “allowing users to directly manipulate
objects on the interface, without intermediates such as a mouse”. This is generally
achieved by tracking the movement and location of multiple fingers across a touch
sensitive surface, and using software to interpret these motions into software
commands.
Owing to the number of devices that include multi-touch interaction technology, the number of implementations of direct input is ever growing. The ability to map the motion of several fingers to a single function allows users to navigate systems with ease, requiring far fewer additional input controls than previously. For example, a traditional mobile phone may include nine numeric keys, answer and reject keys, power, and volume control buttons, to name just a few.
By contrast, modern smartphones which make use of multi-touch screens, such as Apple's iPhone, may have as few as four physical buttons, to control volume and power and to return to the device home screen.
This simplification not only allows for hardware costs to be reduced, but also
significantly reduces the number of control inputs required for the system, making it
theoretically simpler for users to understand (Nielsen and Budiu 2013: p.25).
Relative input, or mapping, is the principle of providing a separate control system,
which, whilst still a part of the same touch screen, allows the user to perform
operations without directly interacting with the subject itself (Hoober and Berkman
2011: p.315).
This may be thought of as an on-screen controller, which has multiple functions
mapped to different ‘soft’ buttons or hotspots, which when touched will initiate the
desired operation.
In mobile technology an example of the difference between direct and relative input
may be given using the Android and iOS approaches to multi-tasking.
Whilst on the Android platform manufacturers have the freedom to alter this function, it is generally found that the on-screen soft keys are used to open the multi-tasking window (Figure 1), where currently running applications are found. This is an example of relative control.
Figure 1: The Android soft keys, with multi-tasking highlighted (Source: Android Legend, 2014)
In contrast, iOS uses a gesture to open the multi-tasking window, which involves swiping four fingers upward (Figure 2). This would be a direct input, using a gesture with no visual cue to open the window.
Each interaction technology has its own merits, with many citing that direct input allows for a more natural and fun user experience (Hoober and Berkman 2011: p.318; Hornecker 2008: p.7).

Relative controls, on the other hand, allow greater overall control for the system developer, allowing constraints to be put in place where necessary (Hoober and Berkman 2011: p.316).
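The distinction can be illustrated with a minimal sketch. All names below are hypothetical and not taken from the Shipshape application: direct input maps a finger drag straight onto the model's rotation, while relative input routes the same rotation through named soft buttons, each constrained to a fixed step by the developer.

```python
# Illustrative sketch only; class and function names are invented.

class Model3D:
    """A minimal stand-in for a rotatable 3D model."""
    def __init__(self):
        self.yaw = 0.0   # rotation about the vertical axis, in degrees

    def rotate(self, degrees):
        self.yaw = (self.yaw + degrees) % 360


def direct_drag(model, dx_pixels, degrees_per_pixel=0.5):
    """Direct input: a horizontal finger drag maps straight onto rotation."""
    model.rotate(dx_pixels * degrees_per_pixel)


def relative_button(model, button, step=15.0):
    """Relative input: the same rotation is triggered via named soft
    buttons, each press constrained to a fixed rotation step."""
    if button == "rotate_left":
        model.rotate(-step)
    elif button == "rotate_right":
        model.rotate(step)


model = Model3D()
direct_drag(model, dx_pixels=90)       # drag 90 px right -> 45 degrees
relative_button(model, "rotate_left")  # one button press -> -15 degrees
print(model.yaw)                       # 30.0
```

Note how the relative variant gives the developer the constraint point the literature describes: the step size is fixed in code, whereas the direct variant hands continuous control to the user.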
2.3.1. GESTURE CONTROL FOR DIRECT INPUT
Nielsen and Budiu (2013: p.59) note that there are a number of frequently used gestures which have become so common across devices that they are almost considered a standard. These include gestures which are implemented within the application, such as pinch-to-zoom, which has become widely used due to its association with the physical action of stretching an object to enlarge it (Lü and Li, 2012).
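Pinch-to-zoom reduces to a simple ratio: the scale factor applied to the object is the current distance between the two fingers divided by their distance when the gesture began. A minimal sketch (the function name is an illustrative assumption, not a standard API):

```python
import math

def pinch_scale(start_touches, current_touches):
    """Return the zoom factor implied by a two-finger pinch gesture.

    Each argument is a pair of (x, y) finger positions. The factor is
    the ratio of the current finger separation to the starting one, so
    spreading the fingers apart yields a factor > 1 (zoom in) and
    pinching them together yields a factor < 1 (zoom out).
    """
    def separation(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    return separation(current_touches) / separation(start_touches)


# Fingers start 100 px apart and spread to 150 px apart: zoom in by 1.5x.
factor = pinch_scale([(0, 0), (100, 0)], [(0, 0), (150, 0)])
print(factor)  # 1.5
```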
As touch-enabled devices are becoming increasingly common in day-to-day life, perhaps most notably due to smartphones and tablet computers, we are encountering multi-touch gestures ever more frequently. As a result, it may be that some users who are more accustomed to using multi-touch devices are able to adapt more quickly to the challenge, and take less time to learn and familiarise themselves with the technique needed to control the object.
Figure 2: An example of how the iOS multi-tasking gesture is performed (Source: WikiHow, no date)
Nielsen and Budiu (2013: p.142) cite the Apple iPad as an example of familiarity with gestures, noting that whilst some gestures, such as one-finger tapping and dragging, come reasonably naturally to users, upon entering the realms of multi-touch it is not uncommon to find users experiencing difficulty in discovering these gestures independently.
They suggest that much of gesture control comes down to memorability. The two
stress the importance of retaining ‘standard’ gestures, and avoiding defining new
gestures, which are unique to a particular application or system.
2.4. SUPPORTING USER EXPERIENCE FOR TOUCH INTERACTION
As stated above, it is important that users are well informed of the required controls
needed to operate an application. If the interaction system is not clear, problems are
likely to arise when users attempt to take control of the system.
Whilst memorability does have a role to play in direct input, it is important for both direct and relative control that users are supported throughout their experience with the system, in order to ensure they are able to achieve the desired outcomes.
If the user is unable to use the system to its full potential there is the risk that the
system is not fulfilling its objective, rendering it ineffective.
Hoober and Berkman (2011: p.269), as well as Nielsen and Budiu (2013: p.60), both point to progressive disclosure as a potential remedy for user experience issues that may arise. Progressive disclosure is a design principle whereby information or tools are revealed to the user as they progress through the system.
In the beginning, the user will only have access to, and an explanation of, the basic tools required at that point in the system. As they delve deeper into more advanced and sophisticated territory, more expert controls are uncovered and explained (Nielsen 2000).
By staggering the introduction of features in this manner, the user is less likely to find themselves overwhelmed with excessive information, and is therefore able to absorb the information displayed to them more effectively (Spillers, 2014).
Overall, this should make the application easier to learn, and thus allow users to reap greater benefits from it (Nielsen 2006).
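Progressive disclosure can be modelled as a simple gate on which controls are visible at each stage. In this hypothetical sketch (the stage layout and control names are invented for illustration), only the basic control is exposed at first, and further tools appear as the user advances:

```python
# Hypothetical progressive-disclosure sketch: controls are revealed in
# stages rather than all at once. Stage and control names are invented.
CONTROLS_BY_STAGE = [
    ["rotate"],                   # stage 0: the single basic control
    ["rotate", "zoom"],           # stage 1: zoom revealed after rotating
    ["rotate", "zoom", "slice"],  # stage 2: advanced tools for confident users
]

def visible_controls(stage):
    """Return every control the user should see at the given stage."""
    stage = min(stage, len(CONTROLS_BY_STAGE) - 1)  # cap at the final stage
    return CONTROLS_BY_STAGE[stage]

print(visible_controls(0))  # ['rotate']
print(visible_controls(5))  # ['rotate', 'zoom', 'slice']
```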
Another way in which user experience can be improved is by implementing cues and
providing ‘signposts’ where necessary. Hess (2010) defines signposts as a method of
giving users a glimpse of the path ahead, whilst making their way through a system.
Hess suggests that these are particularly useful when there are multiple paths which
could be taken to achieve a goal, such as in a website with many layers.
As well as showing progress ahead, Hess also points out the benefits of informing the
user where it is they have come from previously. By informing the user of each
available route open to them, they are able to feel more content and relaxed, and
focus on enjoying and benefiting from the experience.
3. RESEARCH METHODOLOGY
In order for this project to fulfil its objective of determining the appropriate touch interaction system to control the 3D object (the vessel), it is important to understand how users of the system respond to differing approaches. The following methodology highlights the testing points for this study, and how research will be conducted throughout.
3.1. USABILITY TESTING
Usability testing is the general term attributed to any number of tests, which seek to
identify and resolve any issues with how people use and interact with computer and
electronic systems. One of its main purposes is to inform system development and
refinement, in an attempt to produce an effective and usable end product which users
will find both easy and fulfilling to learn and use (Rubin 1994: p.26).
Dumas and Redish (1993: p.23) highlight the importance in usability testing of selecting participants who are representative of real users, citing that unless participants are a true reflection of the average user, it is not possible to see what will happen when the product is in the hands of a real user.
Due to the nature and location of the exhibit in this scenario it is difficult to create a
specific user profile and demographic, as theoretically it is open to the general public,
and accessible by all. Therefore, various participants have been selected, of various ages and backgrounds. It is expected that this will provide a broader overview of usage habits, which may influence the outcome of this project.
During the study all participants will have their data anonymised, in order to protect their privacy, and personal data such as participant age will not be collected.
3.2. MEASURING USABILITY
The System Usability Scale (SUS), devised by Brooke in 1986, is considered an industry standard, owing to its simple, easily administered approach, which yields informative and reliable results even with small samples (Usability.gov, no date).

The survey itself poses ten set questions, which the participant answers on a scale of 1 to 5, where 5 is Strongly Agree and 1 is Strongly Disagree.
Owing to its numeric nature, SUS allows for quantitative data to be collected and
analysed, making patterns easily identifiable, and clear statistical analysis possible,
something which may prove harder to achieve by relying purely on a participant’s
subjective opinion (Brooke 1996).
A sample survey, which was issued for this project, can be found in appendix section
1 of this report.
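Scoring a completed SUS survey follows Brooke's published formula: odd-numbered items (which are positively worded) contribute their score minus 1, even-numbered items (negatively worded) contribute 5 minus their score, and the total is multiplied by 2.5 to give a result out of 100. The calculation can be sketched as:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (1st, 3rd, ...) contribute (score - 1);
    even-numbered items contribute (5 - score). The sum is then
    scaled by 2.5 onto a 0-100 range (Brooke 1996).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# 'Strongly Agree' on every positive item and 'Strongly Disagree' on
# every negative item gives the maximum score of 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Because the result is a single number per participant per system, scores for the direct and relative conditions can be compared directly, which is what makes the quantitative analysis described above straightforward.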
3.3. RESEARCH SESSION STRUCTURE
Participants will be required to make their way around the 3D vessel using one of the two interaction systems (direct or relative input). Around the ship are a number of objects, placed in various locations, intended to encourage the participant to explore the model. These locations include deck level, below deck, and atop the mast.
Only one item will be visible at a given time, and the user is informed which item to
find via an on-screen message. Once the user has read which object they are looking
for, they are asked to strike a key and begin searching. This keystroke will start a timer
in the background, which records the time taken for the participant to locate the item,
and press another key, confirming their selection.
Upon finding an item, the ship will return to its default starting position, the user is given the name of the next item to locate, and the process repeats. The order in which the items appear on the ship is determined at random at the beginning of each test. However, each item will only appear once.
In total there are eight items to identify, after which the application will inform the user that they have completed the test, and to report to the supervisor.
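The session logic described above, a randomised item order, one appearance per item, and a per-item timer started and stopped by keystrokes, can be sketched as follows. The item names and the `find_item` callback are illustrative assumptions; the real Shipshape test may differ.

```python
import random
import time

# Illustrative item names; the actual objects on the ship are not listed here.
ITEMS = ["anchor", "barrel", "compass", "lantern",
         "rope", "rudder", "sail", "tiller"]

def run_session(items, find_item):
    """Present each item once, in random order, timing each search.

    `find_item` stands in for the participant's search: it is called
    with the item name and returns when the participant confirms their
    selection. Returns a list of (item, seconds_taken) pairs.
    """
    order = random.sample(items, k=len(items))  # random order, no repeats
    results = []
    for item in order:
        start = time.perf_counter()  # timer starts on the 'begin' keystroke
        find_item(item)              # participant searches the 3D model
        results.append((item, time.perf_counter() - start))
    return results

results = run_session(ITEMS, find_item=lambda item: None)
print(len(results))  # 8
```

Using `random.sample` rather than repeated random draws guarantees the "each item appears exactly once" property directly, with no need to track which items have already been shown.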
Once the user has completed this test, they will be asked to complete a standard
system usability scale (SUS) survey, consisting of ten questions, after which they will
repeat the test using the other interaction technique.
3.4 OBSERVATIONAL TECHNIQUES
One of the key reasons for undertaking this usability test is to shed light on how users
interact with the touch interaction system. Whilst data will be collected to understand
how efficient and effective each approach is, as well as the user opinion on each
method, it may not be possible to deduce user habits from this numeric data.
There are multiple approaches to participant observation during research sessions,
each with their own benefits.
Overt research observation is a regularly used approach, in which a researcher monitors a participant's behaviour, documenting any observations made in real time. Whilst this is one of the most straightforward approaches, it does present a number of issues.
Firstly, when undertaking a study with a small research team there is a greater reliance on the observations of each researcher conducting the observation. In this instance only one researcher will be present at any one time. This poses the concern of crucial behaviours passing unnoticed and undocumented, which may affect the overall reliability of results. An example of this would be where the researcher is occupied recording notes on one observation whilst another, more significant, event is taking place. Because these events all take place in real time, it is also impossible to later review data that was not documented, which can hinder deeper research, should it be needed.
Another issue raised by Grey (2009: p.293) is the effect of observation upon
participants. When knowingly observed, it is possible that pressure can affect the
judgment and behaviour of the participant, thus impacting upon the validity of
research conducted. Grey later (p.421) indicates the importance of building rapport
with participants prior to commencing the session. By building a sense of trust at a
more ‘emotional and personal’ (Bailey 1996: p.60) level, the participant may feel more
at ease, and likely to respond in a more natural manner, though this cannot be
guaranteed.
Whilst covert research observation may alleviate concerns of false behaviour, it is surrounded by ethical concerns regarding the lack of disclosure given to participants. In addition, as observations must be made in secret, it is not always possible to monitor developments up close – something which can be very important in certain scenarios.
Video coding involves recording participants during the research session in order to observe their interactions and create an additional data source, which can be reviewed at a later date, once the test has been completed (Ravden and Johnson 1989: p.89). These recordings can reveal new information about how users respond to systems which may otherwise pass unnoticed.
For example, it may be possible to observe multiple users experiencing issues with a
particular function, such as pressing a button. It may be possible to observe the
subject trying multiple approaches to overcome the issue, and a trend could become
visible across either all users, or even a particular demographic. This discovery could
potentially be incorporated into the solution developed in response, the development
of which has been informed unknowingly by the users themselves.
There is no silver bullet solution when determining which observational techniques to
employ. Each has its merits and shortcomings, and also its suitability to an individual
project. Grey (2009: p.293) therefore suggests a mixture of methods, in order to collect reliable, well-documented data, with techniques seeking to compensate for each other's drawbacks.
A mixture of overt observation and video coding will be employed in this study, to ensure a clear understanding of user expectations and interactions, particularly with gesture-based input systems. While it may not be necessary to review all video footage captured during these sessions, it will be possible to flag a particular event for review at a later stage. In addition, it may also allow any anomalies shown in the timer data to be explained, should a participant have experienced particular difficulties at any point. This may then be taken into consideration when discussing the results of the research.
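Cross-referencing flagged video events with the task timer can be kept simple: record a timestamp for each flagged observation, then match it against the window of the task being attempted at that moment. A hypothetical sketch (the data layout and names are invented for illustration):

```python
def events_during_task(task_windows, flagged_events):
    """Match flagged observation timestamps to the task being attempted.

    `task_windows` maps a task name to its (start, end) times in seconds
    from the start of the session; `flagged_events` is a list of
    (timestamp, note) pairs recorded while reviewing the video. Returns
    a mapping from task name to the notes flagged during that task.
    """
    matched = {task: [] for task in task_windows}
    for timestamp, note in flagged_events:
        for task, (start, end) in task_windows.items():
            if start <= timestamp < end:
                matched[task].append(note)
    return matched

# Invented example data: two task windows and two flagged observations.
windows = {"find_anchor": (0, 42), "find_barrel": (42, 130)}
events = [(95, "repeated failed pinch gesture"), (12, "hesitated at start")]
print(events_during_task(windows, events))
```

A flagged difficulty that falls inside an unusually long task window is exactly the kind of timer anomaly the paragraph above anticipates explaining.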
3.5. PILOT TESTING
Rubin (1994: p.95) stresses the importance of conducting a pilot test prior to undertaking the main study. Running a test session allows any issues to be highlighted and amended before they can impact upon the results of the study, and can also save a great deal of time and inconvenience for both the research team and the participants.
A pilot session allows users to complete a sample of the survey, which can prove beneficial in determining whether there are any issues that may skew data, such as biased questions.
In addition to this, Dumas and Redish (1993: p.264) offer examples of more technical issues within research methods which may arise; one example being that users cannot save a document because the name is already in use.
As discussed earlier, the SUS survey that participants will be required to complete is an industry standard, so no issues are expected to arise as a direct result of this.
However, as participants will be required to complete the survey twice, there is
potential for survey fatigue to set in, or for participants to have difficulty comparing
and contrasting the two systems.
Additionally, from a technical perspective, it is important that both the hardware and the software application used for this investigation are fit for purpose. These are two fundamental components of the study, and any issues arising from them could prove detrimental to the validity of the data collected.
In light of the dependency on these elements to produce reliable and informative
results, a pilot study shall be conducted, following the session structure detailed
above. It is hoped that this will allow the usability testing to deliver accurate data,
which will enable a well-informed decision to be made as to the final interaction
system to be implemented, and any alterations or adaptations arising as a result.
Findings of the pilot study and details of any revisions to the research methodology
will follow later in this report.
4. PILOT STUDY FINDINGS
A pilot study was conducted using a sample of five participants. Once individually
briefed, each was asked to complete the study, as stated earlier in the research
methodology.
Throughout the study, in addition to the research data collected, the participants were
overtly observed from beginning to end, in order to detect any issues or unforeseen
problems arising from the methodology. These could then be taken into consideration
when making any revisions to the methodology, prior to the main study getting under
way.
4.1 PILOT STUDY SETUP
Figure 3 shows the setup used to conduct this
pilot. The touch table was positioned against a
plain background.
Initial setup tests revealed that the surface the camera was mounted to was highly reflective, and therefore caused significant camera glare, owing to the brightness of the touch table screen. To alleviate this, two sheets of white paper were secured to the surface, covering the reflective areas and improving image quality.
Figures 4 and 5 show still captures from the video footage recorded during each session. In order to protect the privacy of each participant and retain anonymity, only their hands, and occasionally the tops of their heads, are visible.
Figure 3: The setup used for the pilot study, with the touch table positioned below the video camera.
The recorded footage was favourable, and allowed not only on-screen activity to be clearly recorded, but also the motions and gestures of the participants' hands, which may provide valuable insight into the reasoning behind the numerical results.
Whilst the recording equipment used possesses the ability to capture audio, owing to the mounting equipment used to secure the camera, much of the audio is distorted or inaudible. Fortunately, as the participants were already under overt observation, any audible comments made could be recorded in note form for future analysis, should they prove significant.
Figures 4 and 5: Still frames from the video footage obtained during the pilot study.
4.2. ISSUES RAISED THROUGH THE PILOT STUDY
One participant pointed out during their second run-through of the test that the objects appear in the same positions, in spite of the order being changed. This raises serious concerns regarding data validity.
It is possible that if participants realise this in the early stages of the second test they
will navigate to the objects from memory, rather than roaming around the ship
attempting to locate them. This could have a significant impact on performance data
recorded by the application.
The performance data is intended to provide information as to the fastest interaction
system, by recording the time taken to find each item. However, as the participant
may be able to recall item locations from memory, these times will potentially be lower
as a result, whilst failing to consider the actual effectiveness of the system.
4.3. METHODOLOGY REVISION
Due to this concern, it has been decided that all participants in the study will continue to use both the direct and relative input systems, as originally planned. Their feedback from both SUS surveys will be taken into consideration, in order to assess how usable participants found each system and to determine whether there is a preference.
However, whilst performance data will still be captured for both exercises, the user ID
for each test will be recorded, including the order in which the test was undertaken
(Direct first, followed by relative, or vice-versa). This will allow for performance data to
be omitted from the final results, should it appear that data is skewed.
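Recording the condition order alongside each participant's ID makes this later filtering straightforward. The sketch below illustrates how such records might be kept and filtered; the field names and data are invented for illustration, and are not taken from the Shipshape logs:

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    participant_id: int
    system: str        # "direct" or "relative"
    order: int         # 1 = performed first, 2 = performed second
    times_ms: list     # time to locate each object, in milliseconds

def first_run_only(records):
    """Keep only each participant's first exercise, discarding
    second runs that may be skewed by remembered item locations."""
    return [r for r in records if r.order == 1]

# Hypothetical records for two participants
records = [
    TestRecord(1, "direct", 1, [42000, 51000]),
    TestRecord(1, "relative", 2, [30000, 28000]),  # possibly memorised
    TestRecord(2, "relative", 1, [47000, 44000]),
]
kept = first_run_only(records)
assert [r.system for r in kept] == ["direct", "relative"]
```

Keeping the order field in every record means skewed data can be omitted at analysis time without discarding the survey responses collected in the same session.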
By making this change, the core elements and tasks for each session can remain unaffected, as the other aspects of the test proved successful, whilst simultaneously ensuring the validity of the data collected and of any recommendations informed as a result.
5. RESULTS & ANALYSIS
A total of 29 participants were recruited to take part in the study, each testing the two systems, forming a within-subjects study. Their brief was simply to navigate the vessel using the control system supplied, identify the objects listed on screen, and complete the SUS survey at the end.
The results presented are broken down into three categories:
• Usability Surveying – Total scores for each system, obtained from a total of 58
surveys – 29 per system
• Performance Data – Analysis of times taken to locate each object within the
vessel – 13 per system, owing to omissions resulting from pilot study
• Observations and Video Recording – Behaviours and common occurrences
noted during research sessions, which may provide information to explain any
significant numerical data points
5.1. USABILITY SURVEYING
In SUS, any score over 68 is considered to be above average, and anything under is
below average.
As can be seen in Figure 6, both direct and relative controls received very mixed results. However, direct input obtained higher scores overall, with the uppermost result falling one point short of the maximum score of 100.
Figure 6: A comparative box plot, illustrating SUS responses, with direct input on the left, and relative on the right.
With an average score of 76, direct input can be considered a usable system, whereas relative control averaged 64, placing it below average for usability.
As participants completed two surveys each, the results were paired by their test ID
number, allowing comparison of scores to take place. When compared, it was found
that 24 participants of 29 surveyed rated direct input higher than relative, representing
82.76% of participants.
By contrast only 4 (13.79%) considered relative control to be the more usable system,
with the remaining 1 participant (3.45%) scoring both systems equally.
In light of this, it would be preferable to implement direct input into the system, purely from a usability perspective. However, the fact that relative controls scored lowly (as low as 37 in one case) would suggest that there are failings in the design and implementation of that system, which warrants further investigation.
5.2. PERFORMANCE DATA
Performance data was recorded throughout each exercise, in the form of the time
taken to locate each object. These were recorded in milliseconds, and have been
converted to seconds for ease of reading.
Table 1: Analysis of performance data results for direct input. Participants located 8 objects during each test. The time to locate each object is represented in seconds.
The data obtained is highly mixed in its outcomes, with no clear pattern showing. It was initially hypothesised that participants would find both systems easier to use as they spent more time using them, which would be represented through decreasing times in the performance data. It was also expected that direct control would see a more dramatic change in times, on account of users needing to become accustomed to the gestures required to control the ship.
However, upon collating the results, it appears that this decrease is not as
pronounced as initially expected, with many data values failing to correspond to the
trend.
Table 2: Analysis of performance data results for relative input. As above, times are recorded in seconds.
In order to interpret this data, the time taken to complete each task in order from the
first to the last object has been recorded. From these times it is possible to calculate
an average task completion time for each participant.
By comparing these average times, it is possible not only to produce an average task length for each control system as a whole, but also to draw comparisons between how quickly participants were able to locate items.
From Table 1 it can be deduced that direct input exhibited a lower overall average time of 44.7 seconds, compared with 45.4 seconds for relative control. However, in both cases results were heavily mixed, with some participants completing tasks far quicker than others, and the overall margin between the two averages is minimal.
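The averaging described above can be expressed simply: convert each millisecond timing to seconds, take the mean per participant, then average those means for the system. A minimal sketch of this calculation, using invented timings rather than the study's actual data:

```python
def average_times(times_ms_per_participant):
    """Per-participant and overall mean task times, in seconds.

    `times_ms_per_participant` maps a participant ID to the list of
    millisecond timings recorded for the objects they located.
    """
    per_participant = {
        pid: sum(ms) / len(ms) / 1000.0  # mean, converted ms -> s
        for pid, ms in times_ms_per_participant.items()
    }
    overall = sum(per_participant.values()) / len(per_participant)
    return per_participant, overall

# Hypothetical timings for two participants (ms per object found)
means, system_mean = average_times({
    "P01": [42000, 51000, 38000],
    "P02": [47000, 44000, 50000],
})
print(round(system_mean, 1))  # 45.3
```

Averaging per participant first, rather than pooling all timings, prevents a participant who found more objects from dominating the system-wide figure.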
5.3. OBSERVATIONS AND VIDEO RECORDING
During the research sessions participants were overtly observed, in order to identify any particular patterns in behaviour, or any specific issues that would not be apparent within the performance data.
As expected, when using the direct input control system, participants would spend the initial period familiarising themselves with the gestures required to control the ship.
Some participants also appeared to have difficulty using the relative controls initially. Users frequently pressed one button expecting a different response to the one they received; they could then be observed to double take and try the opposite function. For some participants this behaviour was only exhibited during the earlier stages of the session, and they quickly adapted to the control implementation. However, others continued to experience issues throughout the session, possibly suggesting a deeper, underlying issue with the control system.
6. DISCUSSION
Having completed the research and analysis phase of this study and presented the results gathered, this section seeks to offer explanations and reasoning, based upon the various types of data obtained, as well as the initial research conducted for the literature review portion of the study.
6.1. USABILITY SURVEY FEEDBACK
Whilst scores from the usability surveys completed for each system have identified
that direct input was found by more participants to be the more usable system, further
details can be extracted from the surveys, beyond their overall score. These provide
many interesting points for discussion, and suggestions for further enhancements.
As stated previously, SUS comprises 10 questions, with odd-numbered questions focusing on positive aspects, such as "I think that I would like to use this system frequently", whilst even-numbered questions are more negative in tone; "I found the system unnecessarily complex". Each question is answered on a 1 to 5 scale. This allows specific areas to be targeted, and focus to be applied to these areas when making any revisions to the system.
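The standard SUS scoring procedure (Brooke, 1996) converts these ten 1-5 responses into a single 0-100 score: each odd-numbered (positive) item contributes its response minus 1, each even-numbered (negative) item contributes 5 minus its response, and the raw sum is multiplied by 2.5. A minimal sketch of this calculation, for illustration only (this is not the analysis code used in the study, and the example responses are invented):

```python
def sus_score(responses):
    """Compute a SUS score from ten 1-5 responses (Brooke, 1996).

    Odd-numbered items are positively worded and even-numbered items
    negatively worded, so their contributions are reversed before
    scaling the 0-40 raw sum up to the familiar 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a broadly positive hypothetical respondent
print(sus_score([5, 2, 4, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

Because the even items are reverse-scored, a participant who ticks "agree" on every question does not receive a high score; consistent, considered answers are required.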
Research conducted earlier in this study has highlighted the importance of an easily learnt system, in order to ensure the user is engaged and able to retrieve the information they desire quickly, and with minimal effort (Hornecker, 2008).
6.1.1. SUPPORT REQUIREMENTS
Question 4 of the survey asked participants whether they would need the support of a
technical person to be able to use the system.
Table 3: A summary of responses to SUS question 4, showing the number of responses awarded in each category for direct and relative control.
System    Disagree/Strongly disagree    Indifferent    Agree/Strongly agree
Direct    15    2    12
Relative    23    0    6
As can be seen above, of the 29 people surveyed, 12 (41.38%) agreed or strongly agreed with this statement for direct input. By contrast, only 6 (20.69%) of participants answered in this manner when assessing relative controls.
A result of this nature is not entirely unexpected. When using the relative controls, users are presented with 10 clearly labelled buttons, which indicate the movement they will initiate.
However, in this study, when presented with the direct input system, participants were
given no form of instruction from the system. Although a demonstration was given to
each participant prior to the study commencing, it was observed that many struggled
to remember these, and often asked for support, in order to be reminded of a specific
gesture.
Due to the nature of this exhibit, it may not be possible to have someone familiar with
the system stationed alongside it to offer support. As a result, usage may be reduced
due to the lack of instruction users are given on how to manipulate the ship, thus
reducing the effectiveness and educational value of the model.
In light of this development, it would be advisable to produce a tutorial component to accompany the exhibit. Whilst this could take the form of accompanying signage explaining each gesture and its resulting action, a preferable solution would be to implement interactive signposting throughout the system.
Hess' (2010) research, discussed earlier in this study, suggests that users will experience greater benefits through the inclusion of signposting and cues. These may be prompted if a delay in interaction is detected, serving as a quick reminder to the user should they have forgotten how to perform an action.
This may take the form of a simple graphic,
which appears close to where the user has last
touched the screen, to illustrate the movement
(Figure 7).
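A delay-triggered cue of this kind amounts to a simple idle timer: record the time of the last touch, and surface a hint once a threshold elapses without input. A minimal, framework-agnostic sketch of the idea; the threshold value and class name are illustrative assumptions, not details of the Shipshape application:

```python
import time

class GestureHintTimer:
    """Shows a gesture reminder after a period of touch inactivity."""

    def __init__(self, idle_threshold_s=8.0, clock=time.monotonic):
        self.idle_threshold_s = idle_threshold_s
        self.clock = clock  # injectable for deterministic testing
        self.last_touch = self.clock()
        self.hint_visible = False

    def on_touch(self):
        """Called by the touch layer on every interaction."""
        self.last_touch = self.clock()
        self.hint_visible = False  # hide the cue as soon as the user acts

    def tick(self):
        """Called each frame; returns True if the hint should be drawn."""
        if self.clock() - self.last_touch >= self.idle_threshold_s:
            self.hint_visible = True
        return self.hint_visible

# Deterministic usage example driven by a fake clock
t = [0.0]
timer = GestureHintTimer(idle_threshold_s=8.0, clock=lambda: t[0])
t[0] = 5.0
assert timer.tick() is False   # still within the idle threshold
t[0] = 9.0
assert timer.tick() is True    # 9s idle: draw the gesture cue
timer.on_touch()
assert timer.tick() is False   # interaction resets the timer
```

Hiding the cue immediately on the next touch keeps the overlay from obscuring the model once the user has resumed interacting.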
As well as the above responses to question 4, 13.79% of users surveyed felt that they would need to learn a lot before beginning to use the direct input system, answering agree or strongly agree to question 10 of the survey, which addresses this point. This further supports the case for an instructional guide to the system.
Upon first approaching the exhibit, a simple
tutorial may be given, consisting of an
interface overlay, as demonstrated in many
mobile applications (Figure 8). This will briefly
explain the controls, and highlight any other
points of interest.
By implementing such a system, users should
feel more comfortable using the Shipshape
application, whilst having no immediate impact
upon the operational requirements of the
exhibit itself. This would allow for the overall
more usable direct input system to be
implemented, whilst addressing the critical
issue of system uptake.
Figure 7: An example of a basic gesture cue graphic (Source: Mjanja Tech, 2011)
Figure 8: A sample walkthrough screen seen in many mobile applications upon first launch (Source: Tech Crunch, 2012)
6.1.2. SYSTEM APPEAL
Survey question 1 requires the participant to state how much they would like to use
the system frequently. This information can be used to build a picture of any
preference users may have, or any particular aversion to one system.
Table 4: A summary of responses to SUS question 1, showing the number of responses awarded in each category for direct and relative control.
System    Disagree/Strongly disagree    Indifferent    Agree/Strongly agree
Direct    0    6    23
Relative    10    4    15
Whilst neither system achieved a 100% positive response, direct input is generally favourable, with no participants actively disliking the system. In contrast, 10 (34.48%) people stated they would not like to use relative controls frequently.
In spite of Shipshape not being actively intended to have reuse value, the fact that participants stated they would not wish to use relative controls frequently suggests that they had a more negative user experience than they did with direct input. If true, this could hamper the effectiveness of the exhibit.
6.1.3. EASE OF USE
Question 3 poses the simple question of how easy the system was to use. This simple question, however, is key in determining the success of the Shipshape control system.
Table 5: A summary of responses to SUS question 3, showing the number of responses awarded in each category for direct and relative control.
System    Disagree/Strongly disagree    Indifferent    Agree/Strongly agree
Direct    2    6    21
Relative    8    4    17
21 (72.41%) of users indicated that, despite issues surrounding support and guidance,
they found direct input to be easy to use.
By comparison relative control scored lower, with over a quarter (27.59%) of
participants stating they did not find the controls easy to use.
6.2. RELATIVE CONTROL EXPECTATIONS
One point raised by a number of participants was that the on-screen buttons did not respond as expected. This behaviour could also be observed in many of the recordings taken during the sessions.
It appears that when a button is pressed, in this instance the down arrow, the expected reaction is that the camera moves down, causing the 3D model to travel up the screen to reveal content outside the bottom of the frame.
However, in this study, pressing the down key results in the 3D object itself moving down, whilst the camera angle remains fixed. This causes the content outside the top of the frame to be revealed instead (Figure 9).
As this simulation and its control systems are designed to manipulate the 3D model of the ship, it would appear appropriate that the buttons respond by moving the ship around, as opposed to the camera. However, in light of this discovery from the research sessions, it would appear that not all users view the application in this way.
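The two conventions are related by a simple inversion: translating the camera by some vector is visually equivalent to translating the model by its negation. A sketch of the mismatch, assuming a screen-space convention where positive y is up; the function names are hypothetical, invented for illustration:

```python
def model_screen_delta(button):
    """Model-relative controls, as implemented in the application:
    the 'down' button moves the 3D model itself down the screen,
    revealing content that was above the top of the frame."""
    return {"up": +1, "down": -1}[button]

def camera_screen_delta(button):
    """Camera-relative controls, as many participants expected:
    moving the camera down makes the model travel UP the screen,
    revealing content that was below the bottom of the frame."""
    return -model_screen_delta(button)

# Pressing 'down' moves the model in opposite on-screen directions
# under the two interpretations, which would explain the double
# takes observed in the video footage.
assert model_screen_delta("down") == -1   # model travels down
assert camera_screen_delta("down") == +1  # model travels up
```

The sign flip is the entire difference between the two mappings, so either convention could be offered; the usability question is which one matches users' mental model of the buttons.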
Example footage of this behavior can be observed in the accompanying DVD
appendix of this report.
Figure 9: Movement of the 3D model as found in the application (top) versus the movement participants expected (bottom)
6.3. DIRECT INPUT GESTURES
One of the initial hypotheses stated at the beginning of this study was that participants using direct interaction techniques, manipulating the object through gestures, would experience slower results at the beginning of the test whilst they familiarised themselves with the gestures and techniques required, but would ultimately find navigation faster and more efficient once they had overcome this initial learning period.
Evidence seen in Table 1 of this document (Page 29) would suggest that this is not true.
Whilst the controls implemented make use of common gestures such as pinch-to-
zoom, participants still took time to understand how to precisely manipulate the
model, and in some cases, looked to the research supervisor for assistance.
However, as stated previously by Nielsen and Budiu (2013), straying from the 'standard' gestures may introduce further complications. This was exhibited by several users, who had difficulty using the three-fingered drag to move the vessel around the table without rotating it.
These problems could be alleviated as discussed earlier in this discussion section, by
the implementation of signposting, visual cues and progressive disclosure. This would
provide a fundamental guide on how to interact with the application, without limiting
the playful and fun nature of the touch table, as described by Hornecker (2008: p.7).
Owing to the diverse user demographic required for the Shipshape application, it is important that as many users as possible are able to use the system effectively, regardless of previous technical ability.
7. REFLECTION
Being predominantly focused on design, particularly the web, this project has offered an interesting insight into both user experience design and usability testing.
The research undertaken to complete the literature review for this project has provided
a fascinating look at the complexities of interaction design, and the effect that this can
have upon both a system and its users respectively.
In addition, the first-hand experience of planning and conducting a usability testing study has proved invaluable, demonstrating the vast amount of consideration which must go into constructing a valid and meaningful research methodology. Details such as understanding the user demographic, and the influence it may have upon the design, are vital factors in determining the successful delivery of a product, and it is this experience which can be taken away and applied to future projects or studies.
However, whilst this project has generally run smoothly and delivered satisfying results, upon reflection there are some changes which would benefit this project, or similar ones, were it to be repeated.
7.1. SURVEY CONCERNS
Firstly, whilst the issuing of a standardised survey like SUS was beneficial from an analysis perspective, as discussed in the methodology, on its own it did not appear to yield a clear result, as shown earlier in this report (Page 27). Although the responses did suggest that direct input was the preferred choice of users, the margin remains fairly small.
It is possible that the questions used in the survey were not applicable to this
situation, as SUS is intended to assess the usability of a particular system. However,
aside from the point raised in respect of the way button input and camera/model
control are connected, both the direct and relative touch input systems scored highly.
Whilst performance data from the two tests suggests that direct input would be a
preferable approach owing to tasks being completed in less time, it appears that both
systems are usable, with participants awarding high scores to both.
In hindsight, the use of the System Usability Scale was an inappropriate choice for this study. A survey of this nature would be better administered during the development of the application, whilst interaction systems are still being created and programmed. At that stage, it would be vital to determine how functional each system is, and whether or not users can operate it.
However, when this project was undertaken, development of the 3D model,
application, and interaction systems had already been completed and tested, with the
intention of determining which interaction system would be most suitable in this
instance. Therefore, the testing of individual usability of the two systems may not have
been completely relevant to this project.
This data should by no means be disregarded, however. As highlighted in the results
of the surveys, it was found that participants considered the relative control system
used in this instance to be below average in terms of usability. This information could
be used to produce a revised system, which could then be re-tested to measure
performance.
Also, it was possible to break down the analysis of the SUS survey into individual questions, which has yielded many valuable insights that have influenced the final results of this study.
7.2. ITEM LOCATIONS
Another concern raised, which would perhaps warrant consideration in a repeated experiment, is the location of the items to be found around the vessel.
As has already been discussed in this report, items hidden around the ship are placed in pre-determined locations; however, their order is randomised between tests. The pilot study highlighted the issue of participants remembering where items are located and navigating directly to them, thus potentially providing unreliable data.
Whilst a solution to this problem was implemented prior to the main study getting underway, item location presented another issue during the analysis stage. It is believed that this was caused by certain items being hidden in more obscure places, such as atop the mast of the ship. Upon reviewing the video footage, it was observed that users frequently struggled to locate these items, often requiring guidance or, in some cases, completely abandoning the task and moving to the next item. These cases can be observed in the performance data as anomalies, completely out of line with the general pattern of results.
Due to the randomised nature in which the objects appear, and the fact that the performance data does not log which item each time is tracked against, it is difficult to single out a particular item and omit it from the results, in order to prevent this information from affecting them.
To combat this in a repeated or similar study, items could be placed in pre-determined locations and called in a set order. A second set of objects in different, fixed positions could then be created, to be used in the repeated exercise for the second interaction system.
Not only would this allow a specific, problematic object to be highlighted within the performance data, but it would also overcome the issue identified during the pilot session of locations being recalled from memory, as the locations would differ. This would allow all data collected from each participant to be taken into consideration in the final results.
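One way to realise this is to pair each exercise with its own fixed, ordered item set, and to log the item identifier alongside each timing so that a problematic object can be excluded later. A sketch under these assumptions; the item names and structure are invented for illustration:

```python
# Two fixed, ordered item sets: one per exercise, so locations
# learnt during the first run cannot be recalled in the second.
ITEM_SETS = {
    "first_run":  ["barrel", "anchor", "coin", "lantern"],
    "second_run": ["rope", "chest", "oar", "bell"],
}

def log_find(log, run, item_index, elapsed_ms):
    """Record a timing against the specific item it belongs to,
    so that an obscurely placed object can later be omitted
    from the performance data without discarding whole sessions."""
    item = ITEM_SETS[run][item_index]
    log.append({"run": run, "item": item, "ms": elapsed_ms})

log = []
log_find(log, "first_run", 1, 38000)
assert log == [{"run": "first_run", "item": "anchor", "ms": 38000}]
```

With the item identifier attached to every timing, filtering out a single hard-to-find object becomes a one-line operation at analysis time, rather than a reason to discard a participant's entire data set.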
8. CONCLUSION
As outlined in the introduction of this report, this study was intended to identify and make recommendations as to the most appropriate touch interaction system to control the Shipshape application.
8.1. RECOMMENDED SYSTEM
In light of the evidence produced from the investigation conducted as part of this
study, and the significance of these findings against the research carried out in the
early stages of the project, it is recommended that direct input be implemented.
The research has found not only that the direct input system is considered more usable by participants, but also that they found it both easier to use and more appealing than the alternative relative system.
As has been stated numerous times throughout this report, ease of use and
enjoyment play a significant role in ensuring a successful public installation.
Ultimately, the success of an exhibit such as the Shipshape application will be
determined by its popularity amongst users, and how much they are able to take away
from their encounter with the model.
By selecting a system which has been proven to provide a usable experience, as well
as supporting collaborative and social learning as indicated through research, the
system will hopefully deliver a positive experience, and be effective in its objective to
educate visitors.
8.2. RECOMMENDED SYSTEM REVISIONS AND MODIFICATIONS
Prior to implementing and installing the Shipshape system with the recommended
direct input system, attention should first be paid to the issues which have arisen as a
result of this study.
In particular, consideration should be given to the types of gesture used, and to the support provided by the application.
As has already been suggested, the use of signposting, tutorial overlays and progressive disclosure would greatly enhance the usability of the system, and ensure that as wide a group of users as possible are able not only to make use of the simulation, but also to actively engage with it, boosting their learning experience.
It goes without saying that future revisions to the application would benefit from undergoing similar testing to this study, in order to ensure maximum effectiveness.
8.3. A SUCCESSFUL METHODOLOGY?
Whilst the subject of this study is very specific in nature, the rationale for undertaking and structuring usability testing remains relevant to any number of projects.
The methodology presented in this report is in itself a valuable output of the work, and one that would most certainly benefit other projects.
Although some concerns around the methodology have been raised in the reflection
section of this report, with these amendments the methodology may prove useful in a
repeat study, such as to test the suggested alterations to the system.
In conclusion, this study has highlighted the importance of thorough usability testing,
and has applied this to create an appropriate and informed recommendation for the
Newport ship Shipshape application.
Usability testing is far from a dying trend. Wherever we desire intuitive and functional
systems, we will have a need for comprehensive usability tests.
9. REFERENCE LIST
Android Legend (no date) How to Enable Soft Keys (Navigation Bar) on Any Android
Phone (4.0+). [Online] Available from: http://www.androidlegend.com/wp-
content/uploads/2014/02/hqdefault.jpg [Accessed 14 December 2014].
Barron, B. (2003) When Smart Groups Fail. Journal of the Learning Sciences. [Online]
12 (3) 307-359. Available from:
http://www.tandfonline.com/doi/pdf/10.1207/S15327809JLS1203_1 [Accessed: 19
March 2015].
Brignull, H. & Rogers, Y. (2003) Enticing People to Interact with Large Public Displays
in Public Places. Human-Computer Interaction -- Interact '03. [Online] 17-24. Available
from:
http://www.idemployee.id.tue.nl/g.w.m.rauterberg/conferences/interact2003/INTERAC
T2003-p17.pdf [Accessed: 02 December 2014].
Brooke, J. (1996) SUS - A quick and dirty usability scale. Usability evaluation in
industry. [Online] 4 (7) 189-194 . Available from: http://cui.unige.ch/isi/icle-
wiki/_media/ipm:test-suschapt.pdf [Accessed: 10 January 2015].
CERN. (2010) Another of CERN's Many Inventions. CERN Bulletin. [Online] (12)
Available from: http://cds.cern.ch/record/1248908 [Accessed: 04 December 2014].
Dumas, J. & Redish, J. (1993) A Practical Guide to Usability Testing. 2nd edition.
Exeter, Intellect Books.
Friends of the Newport Ship. (2013) Excavation and Conservation. [Online] Available
from: http://newportship.org/excavation-and-conservation.aspx [Accessed 16
November 2014].
Grey, D. (2009) Doing Research in the Real World. 2nd edition. [ebook] London, SAGE.
Available from:
https://books.google.co.uk/books?id=l4dEQTQl1NYC&pg=PA397&lpg=PA397&dq=O
vert+observational+research+behaviour+changes&source=bl&ots=1VzsGxPxJK&sig=
pznvzzxcIOR6DzPcajiIFIahSJw&hl=en&sa=X&ei=rPc-
VbHLBofaOPaGgcAI&ved=0CFsQ6AEwCA#v=snippet&q=Overt&f=false [Accessed 02
February 2015].
Hess, W. (Wednesday 10 March 2010) Guiding Principles for UX Designers. UX
Magazine. [Online] Available from: http://uxmag.com/articles/guiding-principles-for-
ux-designers [Accessed 27 March 2015].
Higgins, S. Mercier, E. Burd, E. & Joyce-Gibbons, A. (2012) Multi-touch tables and
collaborative learning. British Journal of Educational Technology . [Online] 43 (6)
1041–1054. Available from:
http://onlinelibrary.wiley.com.proxy.worc.ac.uk/doi/10.1111/j.1467-
8535.2011.01259.x/epdf [Accessed: 13 March 2015].
Hoober, S. & Berkman, E. (2011) Designing Mobile Interfaces. Patterns for Interaction
Design Sebastopol, CA, O'Reilly Media, Inc.
Hornecker, E. (2008) “I don’t understand it either, but it is cool” Visitor Interactions
with a Multi-Touch Table in a Museum. IEEE Tabletops and Interactive Surfaces
(Tabletop 08). [Online] 121-128. Available from:
http://www.ehornecker.de/Papers/BerlinMuseumTabletop08.pdf [Accessed: 05
December 2014].
Hornecker, E. Marshall, P. & Rogers, Y. (2007) From entry to access - how shareability
comes about. DPPI 2007. [Online] 328-342. Available from:
http://mcs.open.ac.uk/pervasive/pdfs/horneckerDPPI07.pdf [Accessed: 01 December
2014].
Jacucci, G. Morrison, A. Richard, G. Kleimola, J. Peltonen, P. Parisi, L. & Laitinen, T.
(2010) Worlds of Information: Designing for Engagement at a Public Multi-touch
Display. CHI 2010. [Online] 10 (4) 2267-2276. Available from:
http://citeseerx.ist.psu.edu/viewdoc/download?rep=rep1&type=pdf&doi=10.1.1.186.1
214 [Accessed: 01 December 2014].
Li, Y., Lu, H. & Zhang, H. (2014) Optimistic Programming of Touch Interaction. ACM
Transactions on Computer-Human Interaction (TOCHI). [Online] 21 (4). Available from:
http://dl.acm.org.proxy.worc.ac.uk/citation.cfm?id=2631914 [Accessed 14 March
2015].
Miller, R. (2000) Jakob Nielsen Answers Usability Questions. [Online] Available from:
http://slashdot.org/story/00/03/03/096223/jakob-nielsen-answers-usability-questions
[Accessed 09 April 2015].
Mjanja Tech (2011) Pinch zoom multi-touch gesture. [Online] Available from:
https://mjanja.ch/wordpress/wp-content/uploads/2011/08/Pinch_zoom.png
[Accessed 03 April 2015].
Nielsen, J. (2006) Progressive Disclosure. [Online] Available from:
http://www.nngroup.com/articles/progressive-disclosure/ [Accessed 18 March 2015].
Nielsen, J. & Budiu, R. (2013) Mobile Usability. Berkeley, CA: Pearson.
Ravden, S. & Johnson, G. (1989) Evaluating Usability of Human-Computer Interfaces:
A Practical Method. Ellis Horwood Books in Information Technology. Chichester: Ellis
Horwood Limited.
Rubin, J. (1994) Handbook of Usability Testing. Wiley Technical Communication
Library. New York: John Wiley & Sons, Inc.
Shen, C., Ryall, K., Forlines, C., Esenther, A., Vernier, F., Everitt, E., Wu, M., Wigdor, D.,
Ringel Morris, M., Hancock, M. & Tse, E. (2010) Collaborative Tabletop Research and
Evaluation. Computer-Supported Collaborative Learning Series 10. [ebook] Springer
US. Available from: http://link.springer.com/chapter/10.1007%2F978-0-387-77234-9_7
[Accessed 11 March 2015].
Spillers, F. (2014) Progressive Disclosure. [Online] Available from:
https://www.interaction-design.org/encyclopedia/progressive_disclosure.html
[Accessed 17 April 2015].
Tech Crunch (2012) Rethinking the Mobile App “Walkthrough”. [Online] Available from:
https://tctechcrunch2011.files.wordpress.com/2012/12/pulse1-320x480.jpg?w=320
[Accessed 03 April 2015].
Usability.gov (no date) System Usability Scale (SUS). [Online] Available from:
http://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
[Accessed 06 January 2015].
WikiHow (no date) Use Multi-Task Gesture on the iPad version 2. [Online] Available
from: http://pad2.whstatic.com/images/thumb/3/30/Use-Multitasking-Gestures-on-
the-iPad-Step-2-Version-2.jpg/900px-Use-Multitasking-Gestures-on-the-iPad-Step-2-
Version-2.jpg [Accessed 14 December 2014].
10. APPENDICES
3.2. SAMPLE SUS SURVEY
Each statement is rated on a five-point scale from Strongly disagree (1) to Strongly
agree (5):

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.
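SUS responses are conventionally scored by converting each item to a 0–4 contribution: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the summed contributions are multiplied by 2.5 to yield a 0–100 score. A minimal sketch of that calculation follows; the function name and example responses are illustrative, not part of the survey instrument itself.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent who strongly agrees (5) with every positive item and
# strongly disagrees (1) with every negative item scores the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Note that the resulting figure is not a percentage: a score of 68 is generally taken as the average across studies, so scores should be interpreted relative to that benchmark rather than as a proportion of users satisfied.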