1
DEGREE PROJECT IN TECHNOLOGY AND LEARNING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020
Promoting conceptual understanding in high-school physics
Exploring the effects of using an audience response system
Diana Diez
KTH ROYAL INSTITUTE OF TECHNOLOGY SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT
MASTER OF SCIENCE IN ENGINEERING AND IN EDUCATION
Title in English: Promoting conceptual understanding in high-school physics: Exploring the effects of using an audience response system.
Title in Swedish: Att främja konceptuell förståelse inom gymnasiefysik: Undersökning av effekterna av att använda ett publiksvarssystem.
Supervisor: Kristina Edström, School of Industrial Engineering and Management.
Co-supervisor: Linda Kann, School of Electrical Engineering and Computer Science.
Commissioned by: Niklas Ingvar, Mentimeter AB.
Examiner: Cecilia Kozma, School of Industrial Engineering and Management.
Abstract
Research shows that students may be proficient in solving physics problems mathematically but still lack a fundamental understanding of the phenomena in question. One reason may be that a traditional approach to physics instruction emphasises the instructor's transfer of material to the students and problem solving, sometimes at the expense of conceptual understanding.
This master's thesis combines socio-cultural and behaviouristic perspectives to analyse the effects of audience response systems in learning environments, in particular physics instruction. An audience response system is a tool that collects responses from the participants. It is commonly used to create interaction, thus moderating the approach of pure transmission of information. The current state of research shows that the effects of audience response systems depend on how they are used by the instructor.
Audience response systems have been popular in peer instruction in physics, and part of this study was to evaluate the design of conceptual problems. Using a mixed-methods approach with interviews, observations, and tests, this thesis explores teachers' experiences of using audience response systems to stimulate thinking and discussion on conceptual questions. Different modalities of systems are also compared. The study was affected by the school closure due to the COVID-19 pandemic; however, the remote teaching situation also makes the topic even more important.
The findings confirm what has previously been established about the role of the instructor and that the effects depend on their intentions. This study demonstrates that an audience response system can be used to support formative assessment, initiate discussions, engage multiple participants simultaneously, prompt instructors to reconsider their methods, and sustain a productive learning environment. Important features of an audience response system are ease of use, clear display of responses, synchronous participation, and anonymity.
Keywords: Audience response system, Mentimeter, formative assessment, peer
instruction, physics
Sammanfattning
Previous research shows that students may be skilled at solving physics problems mathematically yet lack a fundamental understanding of the phenomena in question. One possible explanation is that physics instruction has traditionally focused on the transfer of material from teacher to student, with an emphasis on problem solving, sometimes at the expense of conceptual understanding.
This degree project combines socio-cultural and behaviouristic perspectives to analyse the effects of audience response systems in learning environments, primarily in physics instruction. An audience response system is a tool that collects responses from the participants. It is commonly used to create interaction, thereby reducing the focus on pure transmission of material in teaching. The current state of research shows that the effects of audience response systems depend on how they are applied by the teacher.
Audience response systems have been popular in peer instruction in physics, and one part of this study has been to evaluate the design of conceptual questions. Using qualitative and quantitative methods (interviews, observations and tests), this work examines teachers' experiences of using audience response systems to stimulate thinking and discussion about conceptual questions. Furthermore, the modalities of different systems are compared. The design of the study was affected by the school closures caused by the COVID-19 pandemic; however, the conditions that come with remote teaching give the topic further relevance.
The results confirm what has previously been established about the teacher's role and that the effects depend on their intentions. The study shows that an audience response system can be used for formative assessment, initiate discussions, engage several participants simultaneously, encourage teachers to reconsider their methods and help create the conditions for a productive learning environment. Important features of an audience response system are ease of use, clear presentation of responses, the possibility of synchronous participation for many people, and anonymity.
Keywords: Audience response system, Mentimeter, formative assessment, peer instruction, high-school physics.
Preface
Little did I know when I attended the Armada banquet way back in 2018, that I would
stumble upon my thesis project, or at least run into my supervisor-to-be, Niklas Ingvar.
Thank you for welcoming me to Mentimeter one year later and helping me set out the
direction of the project. You have always made time for me, no matter how busy you were.
Your thirst for knowledge is so inspiring and I have truly enjoyed our sessions when we tried
to make sense of the science.
Many thanks to my supervisors, Kristina Edström and Linda Kann. Without you, this would have been twice as hard and half as good. Kristina, thank you for your wise insights into the challenges and the charm of research work. You have always had good advice and a fitting reference up your sleeve, not to mention how you magically know how to make a paragraph take off. Linda, your positive attitude and great calm have been invaluable; it shows that you have done this before! You have a wonderful ability to make hopeless things feel easy.
Writing this report has been a long and winding process, and I am glad that my examiner Cecilia Kozma and my opponent Holly Lindkvist were there to help sort out the final parts. Thank you for your valuable comments and feedback.
Patrik, thank you for helping me bring this home. Many times I have wanted to give up, but I could not, because you were always there. I think everyone doubts themselves while writing their degree project, and I am certainly no exception. Yet you have always made me feel competent and convinced me that I would make it in the end. Thank you for believing in me, and thank you for the coffee.
There is one person without whom this work would not have been possible: the teacher who embraced my idea and let me collect data in abundance. Despite a spring that was unusual and turbulent, to say the least, you took the time to fit the project in and spent a great deal of time planning and re-planning. It has been both fun and instructive to sneak into your physical and digital classrooms. Thank you for granting me that!
Diana
Table of contents
1 Introduction
1.1 Purpose and objective
1.2 Research questions
2 Background
2.1 Mentimeter
2.2 Misconceptions in physics
2.3 Quality education for everyone
3 Theoretical framework
3.1 Pedagogical theory
3.2 Previous research
4 Method
4.1 Research design
4.2 Research context
4.3 Data collection
4.4 Data analysis
4.5 Ethical considerations
5 Results
5.1 Results from the pre-study
5.2 Results from the main study
6 Discussion
7 Conclusions
7.1 Future research
References
Appendix A – Observation scheme
Appendix B – Guidelines on writing multiple-choice questions
Appendix C – Interview questions
Appendix D – Presentation slides
1 Introduction
Traditional education refers to customs that have been established over time in a society’s
educational tradition, and these vary depending on cultural and historical context.
Traditional education is often described as adopting a transmissive instructional model,
which is based on the standpoint that learning happens as a result of transfer of knowledge
(Xie et al., 2018). This makes for a teacher-centred approach, where lecturing is a main part
of the instruction and the students’ role is to memorise and demonstrate that they have
registered the content.
The passive nature of absorbing information spurs students towards surface learning as
opposed to deep learning. Therefore, including more student-centred teaching is beneficial for promoting a deeper understanding of the fundamental principles of a subject (McCarthy & Anderson, 2000). Furthermore, the process of retrieval is essential for consolidating
knowledge, which is why it is helpful to design learning activities that incorporate
reconstruction of facts (Karpicke, 2012). Letting students take an active part in their learning process is based on the idea that an individual constructs their knowledge by interacting with their surroundings and making meaning of what they perceive (Piaget, 2008).
Piaget’s constructivism theory can be expanded by adding a social dimension, emphasizing
how social interactions with other people and use of tools shape cognitive development
(Vygotsky, 1999). The contrast between deep and surface learning and importance of
interaction is well-known among educators, but the question is how it should be done in
practice to achieve the desired effects.
The term interaction implies a dynamic process of exchange between several parties, i.e.
communication between people or the use of a computerised media device. The type of
interaction that occurs between a presenter and their audience is called audience response.
This interaction is typically created using an audience response system. Audience response
systems have been around since the 1960s, starting with eliciting responses from movie and
television show audiences. The first patents for voting machines were granted in the 1970s
(Gordon & Becker, 1973; Simmons & Marquis, 2010).
The audience response system technology evolved over time and a typical setup consisted of
specialised hardware combined with presentation software. The audience used handheld
wireless devices called clickers to record their answers to multiple-choice questions and the
results were displayed on a screen for the presenter or through a projector to the audience.
The recent development in the technology has made a shift from hardware-based audience
response systems to web-based applications. A software-based audience response system
utilises personal computing devices that the audience typically has access to such as
smartphones or laptops. This has resulted in a significant reduction in cost for the
institutions that utilised audience response systems. Furthermore, instructors find that abandoning clickers saves the time and hassle that come with setting up the system and distributing the devices.
Along with the practical aspects, software-based audience response systems open the
possibility of including a broader range of features beyond multiple choice questions. One
example of a software-based audience response system is Mentimeter. Mentimeter is an
interactive presentation tool that incorporates an audience response system to create
audience engagement in lectures, workshops and meetings in both educational and
corporate settings. This thesis will explore the effects of using this audience response system
in a learning environment and how it supports instructors in their pedagogical
considerations.
The practice of using an audience response system embodies the behaviourist principle of
stimulus-response patterns (Fies & Marshall, 2006). Posing a question (stimulus) induces
changes in behaviour of the audience (response), along with feedback when the results are
displayed. This means that audience response systems can be understood from a
behaviouristic point of view and this thesis will use this framework along with the socio-
cultural perspective on learning.
1.1 Purpose and objective
The aim of this thesis is to explore the implementation of an audience response system in a
learning environment, namely a high-school physics class. The objective is twofold. First, given that an audience response system is merely a tool in the hands of the instructor, it is interesting to analyse how instructors use such a tool in their teaching. Second, the study assesses the effects on the audience.
Conceptual understanding of science is crucial to master more advanced theory and
applications. However, conventional physics instruction often focuses on problem solving.
This is reflected by the design of textbooks and tests that are usually heavy on calculations
and emphasise a strictly mathematical approach to physics. While students may become
proficient in setting up and solving equations, they can still lack fundamental understanding
of the phenomena in question. There is a risk that this consolidates serious misconceptions
about physics. A popular method to foster greater conceptual understanding is peer
instruction, which is typically supported by an audience response system. Part of this thesis
is to evaluate conceptual questions used in physics instruction on motion of objects.
1.2 Research questions
This study aims to answer the following research questions:
1. How can an audience response system be used to facilitate learning in a Swedish high-school physics class?
2. What are the effects of using an audience response system in a learning environment?
2 Background
2.1 Mentimeter
Mentimeter is an interactive presentation platform that collects responses from the audience
using smart devices. Mentimeter supports a range of features, including content slides and 13 interactive question types (see some examples in fig. 1). During a presentation, the audience can interact anonymously with the questions using a laptop or smartphone (see fig. 2). The responses are visualised in real time and the data can be exported and analysed after the presentation. Running Mentimeter requires a web browser and an Internet connection for both the presenter and the audience. Presenters need to create an account on mentimeter.com. Mentimeter has several pricing options, among them a freemium plan. This means that the product is free of charge but full access to all features requires payment. Mentimeter also offers discounted educational plans for teachers and students. The software includes functions such as multiple-choice questions, quizzes, word clouds and questions from the audience (Mentimeter, n.d.). A few examples are presented below:
Figure 1. Examples of slides and question types in Mentimeter. From upper left and
clockwise: Content slide, Multiple choice, Word cloud, Scales. (Mentimeter, n.d.)
Figure 2. The smartphone user interface in Mentimeter. (Mentimeter, n.d.)
2.2 Misconceptions in physics
Understanding fundamental concepts is crucial for building knowledge and mastering a
certain discipline. If a conceptual misunderstanding is allowed to propagate over time, it will inevitably create obstacles to future learning. It is important to learn the basics properly, as a misconception can be difficult to correct once consolidated and can sometimes derive from the early stages of schooling (Liu & Fang, 2019).
A typical example of a misconception in physics is the idea that for an object to move, it must be acted upon by a continuous force. This persists despite students being able to recite Newton's first law: that an object will remain at rest or continue moving at a constant velocity unless a force acts upon it. This means that a book sliding across a table will continue moving after it is released, but it will also slow down because of friction.
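The sliding-book example can be made concrete with a short calculation. The sketch below uses illustrative values for the friction coefficient and initial speed (chosen for this sketch, not taken from the thesis) to show how kinetic friction alone brings the book to rest:

```python
# Deceleration of a sliding book under kinetic friction alone.
# The friction coefficient and initial speed are illustrative values
# chosen for this sketch; they are not taken from the thesis.
MU = 0.30   # coefficient of kinetic friction (assumed)
G = 9.82    # gravitational acceleration in m/s^2
V0 = 1.5    # speed of the book just after release, in m/s (assumed)

def speed(t: float) -> float:
    """Speed at time t: v(t) = v0 - mu*g*t, floored at zero once the book stops."""
    return max(0.0, V0 - MU * G * t)

def stop_time() -> float:
    """Time until the book comes to rest: v0 / (mu*g)."""
    return V0 / (MU * G)

print(f"The book stops after {stop_time():.2f} s")  # about 0.51 s
```

Once the friction force has consumed the book's momentum, no further force is needed to keep it at rest, exactly as Newton's first law states.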
Another misconception about motion, illustrated by the conceptual question in figure 3, is the confusion between speed and velocity. Speed is a scalar quantity that represents the rate
of change of position of an object. Velocity, being a vector quantity, has both magnitude and
direction. A car rounding a curve may have constant speed but since it is changing its
direction, the velocity is not constant. The implication of Newton’s first law in this case is
therefore that there is a net force on the car.
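The distinction can also be shown numerically. In the sketch below (with speed, mass and radius as illustrative values chosen for this sketch, not from the thesis), a car keeps a constant speed while turning through a quarter of a circular curve, yet its velocity vector changes, so by Newton's first law a net (centripetal) force must be acting on it:

```python
import math

# A car rounds a circular curve at constant speed.
# Speed, mass and radius are illustrative values (not from the thesis).
SPEED = 20.0    # m/s, constant throughout the turn
MASS = 1200.0   # kg (assumed)
RADIUS = 50.0   # m (assumed)

def velocity(angle_deg: float) -> tuple:
    """Velocity vector (vx, vy) after the car has turned through angle_deg."""
    a = math.radians(angle_deg)
    return (SPEED * math.cos(a), SPEED * math.sin(a))

v_start = velocity(0)   # (20.0, 0.0)
v_end = velocity(90)    # approximately (0.0, 20.0)

# The speed (magnitude of the velocity vector) is unchanged...
assert math.isclose(math.hypot(*v_start), math.hypot(*v_end))

# ...but the velocity vector has changed direction, so by Newton's first law
# a net force acts on the car: the centripetal force F = m * v^2 / r.
centripetal_force = MASS * SPEED**2 / RADIUS  # 9600.0 N
```

The assertion makes the conceptual point explicit: the speedometer reading never changes, yet the motion is not force-free.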
Figure 3. A conceptual problem about Newton’s first law. Figure by the author.
In addition to this, there is also a linguistic aspect of the mix-up between speed and velocity
in Swedish. In physics instruction, teachers often use cars to exemplify principles about
motion. Speed, for instance, is the value shown by the speedometer. However, in Swedish
the speedometer is called hastighetsmätare, which means velocity meter. This adds another
dimension of confusion to a concept that the students already perceive as tricky.
A common instrument to assess students' understanding of concepts is the so-called concept inventory. A concept inventory consists of a series of problems about the subject, designed to evaluate understanding and pick up common misconceptions. The problems are typically multiple-choice questions for testing content knowledge and identifying the number of students that hold a certain conception of the topic. Depending on the curriculum, the
inventories are used by teachers across different educational levels. Some examples in
physics are the Force Concept Inventory, the Mechanics Baseline Test and the Dynamics
Concept Inventory that target student misconceptions about force and motion (Liu & Fang,
2019).
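Scoring a concept inventory is mechanically simple: each multiple-choice item has one keyed answer, and both the per-student score and the per-item distribution of chosen options are of interest, since the latter reveals which wrong conceptions are common. The sketch below uses made-up items and responses (not taken from any real inventory such as the Force Concept Inventory):

```python
from collections import Counter

# Made-up answer key and student responses; not from any real concept inventory.
KEY = {1: "B", 2: "A", 3: "C"}
responses = {
    "student1": {1: "B", 2: "C", 3: "C"},
    "student2": {1: "B", 2: "A", 3: "A"},
    "student3": {1: "D", 2: "A", 3: "C"},
}

def score(answers: dict) -> int:
    """Number of correctly answered items for one student."""
    return sum(answers[q] == KEY[q] for q in KEY)

def item_distribution(q: int) -> Counter:
    """How many students chose each option on item q (reveals common wrong answers)."""
    return Counter(ans[q] for ans in responses.values())

scores = {name: score(ans) for name, ans in responses.items()}
# Each student here scores 2 of 3, but the distributions differ per item,
# e.g. item_distribution(1) shows two "B" votes and one "D" vote.
```

Looking at the distribution rather than only the total score is what turns a plain test into a diagnostic instrument.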
2.3 Quality education for everyone
One of the 17 United Nations sustainable development goals is to “ensure inclusive and
equitable quality education and promote lifelong learning opportunities for all” (United
Nations, n.d.). The target is to guarantee equal access to quality education for all girls and
boys, on all levels. Equal access to quality education can also mean being able to take an active part in the learning process, on equal terms with other students. It is
therefore interesting to explore methods of instruction that aim to create engagement and
interaction in the classroom.
As of 2020, this goal has taken a serious hit due to the spread of COVID-19. The pandemic
has had a significant impact on the education of children and young people, 1.5 billion of whom have been affected by the measures taken to slow the spread (United Nations, n.d.). School closures
and limited social contacts inhibit the development of children, especially those who live in
vulnerable conditions. This study was conducted during the early stages of the pandemic
and the transition to remote teaching is therefore incorporated in this thesis, adding further
relevance with respect to the UN sustainability goal of quality education.
Not everyone can take part in remote learning, and there is a risk of widening the already existing digital divide. For instance, less than half of primary and secondary schools in sub-Saharan Africa have access to necessary services and facilities such as electricity, the Internet and computers. The use of information and communication technologies in education must therefore be reviewed in the light of these facts (United Nations, 2020).
3 Theoretical framework
3.1 Pedagogical theory
3.1.1 Socio-cultural perspectives on learning
From a socio-cultural perspective, learning is a result of how people interact with each other
and how they act within their cultural environment. What people think or know is
understood by analysing their speech and actions (Jakobsson, 2012). These actions are
supported by cultural tools, so-called artefacts. Artefacts are objects that humans have created and that in turn shape how we think and act (Vygotsky, 1999). Physical tools, such as hammers, needles or computers, and representations, such as the numerical system, calendars and musical notation, are all examples of artefacts. An audience response
system is also an artefact in this sense. The human interaction with artefacts is called
mediation, which means that an individual acquires mental abilities using tools – the
abilities are mediated by artefacts (ibid).
One of the most important cultural artefacts that humans have developed is speech.
Communication through speech is essential for describing and discussing the surrounding
environment and shaping our understanding of the world. Thus, the socio-cultural
perspectives put emphasis on interaction between people as the key component for
acquiring knowledge. This implies that an individual’s learning is not necessarily restricted
by their inherent capabilities, since these can be expanded with help from others. Vygotsky
(1999) describes this in a model called the zone of proximal development (ZPD). The ZPD is
the distance between what an individual can learn by themselves and what they can learn
with help from a more knowledgeable other (see fig. 4). For that individual, articulating
thoughts and explaining concepts is also positive for consolidating their own understanding.
Figure 4. Zone of proximal development. Figure by the author.
3.1.2 Behaviouristic perspectives on learning
Behaviourism is empiricist by nature, emphasising what is observable and measurable. It
does not seek to understand cognitive processes – from a behaviouristic point of view, it is
more interesting to analyse observable human behaviour. The basic premise is that every
action is followed by a consequence, which moderates our behaviour. This leads to the
concept of stimulus, which is an object or event that triggers a response from the exposed
individual. This is an associative learning process named operant conditioning, where the
behaviour can be moderated by additional reinforcement. (Skinner, 2008)
The instructions at the top of a Mentimeter slide (see fig. 5) constitute a stimulus that prompts the participant to visit a web page for that presentation. When responding to that stimulus by
following the instructions, the user is met with another stimulus – a Multiple choice
question for instance. This stimulus may trigger a more elaborate change in behaviour;
considering the question, evaluating the options and submitting an answer. Finally, as the
votes are displayed, the feedback can act as a reinforcement or further moderate the
behaviour of the audience.
Figure 5. Instructions on top of a Mentimeter slide.
A behaviouristic perspective is rooted in the belief that an individual is shaped by their
environment. The ability to learn is affected by their acquired experiences and can be
facilitated with directed instruction. (Skinner, 2008)
3.1.3 Formative assessment
There is no unambiguous definition of formative assessment, and the theoretical principles
behind it can be dated to the 1930s (Hirsh & Lindberg, 2015). The concept has been
popularised at the dawn of the 21st century thanks to Black and Wiliam (1998), who base
their review on the following interpretation:
“All those activities undertaken by teachers, and/or by their students, which
provide information to be used as feedback to modify the teaching and learning
activities in which they are engaged.”
Based on this definition, the standpoint in this thesis is that formative assessment is an
action undertaken by teachers and students to gather information that helps them make
relevant considerations regarding the instruction.
In a report on 21st century international and Swedish research on formative assessment,
Hirsh and Lindberg (2015) note a rise of information and communication technologies (ICT)
for formative assessment. They acknowledge that formative assessment demands much
effort from the teacher and that ICT tools can streamline the process. As the definition of
formative assessment dictates, there is not one distinct way of using ICT for that purpose.
The assessment is therefore dependent on the tool in question and particularly on how it is
used. Audience response systems are a category of ICT, and this thesis will focus on how they can support formative assessment.
3.1.4 Peer instruction
Peer learning is a method where discussions between students are part of the instruction.
This approach utilises the fact that students, thanks to their similarity in age and previous
knowledge, are good at helping each other understand the content and explain difficult parts
of the material. This method has been linked to better student performance in terms of
higher grades and learning outcome, compared to traditional methods of instruction such as
lectures (Caldwell, 2007).
There are some variations of peer learning. In this study, we will focus on the method that
has gained popularity thanks to Harvard professor Eric Mazur, who put it into practice in his
introductory physics classes (Mazur, 2014). In Mazur’s model of peer instruction, the
lessons consist of several short lectures on core concepts, each followed by a ConcepTest in this format:
1. Question posed
2. Students given time to think
3. Students record their individual answers (optional)
4. Students convince their neighbor (peer instruction)
5. Students record revised answers (optional)
6. Feedback to teacher: tally of answers
7. Explanation of correct answer
The posed question should be purely conceptual, as opposed to a computational problem.
According to Mazur’s studies (2014), students that receive conventional physics instruction
have more trouble solving conceptual questions than problems that require several
calculations. This highlights the difference between memorisation and understanding, as
many students prioritise learning step-by-step strategies for problem solving. These kinds of problems typically appear on physics tests, and although it can be rewarding to focus on memorising the recipes, gaps in the understanding of fundamental concepts can make it difficult to handle unexpected cases.
The answers are typically recorded using an audience response system. A show of hands or holding up flashcards labelled with each option are common alternatives that are simple to implement but vulnerable in terms of accuracy and lack of anonymity. A digital
audience response system that displays the outcome of the voting gives immediate feedback
to the teacher and provides anonymity to the students.
The outcome of the second voting (step 5) provides the instructor with insights about how
well the class has grasped the concept in question, thus introducing an element of formative
assessment. The teacher can then decide whether the topic requires further lecturing, or if it
is time to proceed to the next concept. This procedure limits the time that the teacher mediates information in favour of peer learning (Mazur, 2014).
Students find that the discussions help them learn and better understand the material and
they feel engaged while working in groups (Caldwell, 2007). This is also reflected by
improved academic achievement (ibid). When comparing the results, the second round of responses shows an increase in the number of correct answers. The students also report a higher degree of confidence when voting for the second time (Mazur, 2014). Mazur did his
research on university physics classes, but there are studies that show positive outcomes for
high-school students as well (Cummings & Roberts, 2008).
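One common way to quantify the improvement between two voting rounds, or between pre- and post-tests, is the normalized gain popularised by Hake: g = (post − pre) / (100 − pre), i.e. the fraction of the possible improvement that was actually achieved. The vote counts in the sketch below are made up for illustration and are not data from this study:

```python
# Normalized gain between two ConcepTest voting rounds.
# The vote counts below are made up for illustration; not data from this study.

def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain: fraction of the possible improvement achieved."""
    if pre_pct >= 100.0:
        return 0.0  # no room left for improvement
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Round 1: 12 of 30 students correct; round 2 (after peer discussion): 24 of 30.
pre = 100.0 * 12 / 30    # 40.0 % correct before discussion
post = 100.0 * 24 / 30   # 80.0 % correct after discussion
g = normalized_gain(pre, post)  # (80 - 40) / (100 - 40) = 2/3
```

Because the gain is normalized by the room for improvement, a class that starts near the ceiling is not unfairly compared with one that starts low.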
Orienting physics instruction towards conceptual understanding does not mean that the development of other important abilities suffers; in fact, there is evidence of improved problem-solving skills thanks to this approach (Mazur, 2014). There is also some concern about reduced content coverage in favour of working with conceptual questions, but the reviewed literature implies that active learning is still more effective (Caldwell, 2007).
3.2 Previous research
3.2.1 Current state of research on audience response systems
A literature review by Fies and Marshall (2006) claims that most literature on audience response systems up until then is anecdotal or based on unfair premises. The comparisons
between audience response system-supported learning and other environments differ in
several aspects. Some studies compare traditional practices without any elements of
interaction with interactive audience response system-lessons, thus creating vastly different
starting points and making it difficult to isolate the effects of the system itself. The lack of
randomized controlled studies is still apparent in the 2012 literature review by Boscardin
and Penuel, which indicates that it is troublesome to evaluate the magnitude of impact of audience response systems. On the other hand, introducing an audience response system to a learning environment seems to result in the adoption of more interactive methodologies.
Apart from the lack of rigorous studies, the authors point out a shortage of research on
audience response systems that support complete anonymity where the identity of the
respondent is hidden.
Fies and Marshall’s review covers 24 publications on pedagogical theory and use of audience
response systems. Throughout the literature, there is consensus that audience response
systems are successful when used with proper pedagogical methodologies. This conclusion is
supported by Kay and LeSage’s review of 52 papers on audience response systems (Kay &
LeSage, 2009). There are indications of benefits of audience response systems in peer
instruction, and some results show that students value audience response system-supported
discussions (ibid). Using audience response systems in class is perceived as interactive,
engaging and enjoyable. Boscardin and Penuel (2012) found several studies that report
significant knowledge gains in lessons with audience response systems compared to
traditional lecture formats, but these results are attributed to the formative assessment and
student engagement stimulated by the tool. Fies and Marshall (2006) conclude that
audience response systems should be viewed as pedagogical tools with multiple modes of use
and propose studies with more tightly controlled variables, where only the use of the
audience response system is varied.
Hunsu et al. (2015) aim to address the concerns raised in both Fies and Marshall’s (2006)
and Boscardin and Penuel’s (2012) papers. They conducted a meta-analysis to compare
cognitive and non-cognitive outcomes in audience response system-supported and
traditional environments. A total of 53 studies were included, based on an experimental
research design and the possibility of computing effect sizes from original data. Studies with
multiple interventions were rejected to isolate the effects of the audience response system
itself. They found small but significant positive effects on cognitive outcomes (retention,
knowledge transfer and achievement) and larger positive effects on non-cognitive outcomes
such as engagement, participation, self-efficacy and interest. The effects varied with class
size and were greater in smaller groups. When comparing audience response system
questions with similar question-driven instruction without audience response systems, the
effects were negligible. This indicates that it is not the audience response system itself but
rather the questions designed for the instruction that lead to a positive outcome. The
authors conclude from the overall result of the meta-analysis that emphasis must be put on
strategic preparation and facilitation during the lessons. Developing effective questions,
encouraging peer discussion and providing feedback are important for optimising the
outcome of audience response system use. Finally, the authors suggest more randomised
studies with observation of a pre-intervention baseline for future research (Hunsu et al.,
2015).
The meta-analysis by Castillo-Manzano et al. (2016) yields similar conclusions about small
but favourable effects of audience response systems and implies that underlying factors,
such as educational context, influence these results. In addition, they found some evidence
of greater impact at lower levels of education (i.e. high school and elementary school),
although the sample was considerably smaller than that of the university-level studies.
Unlike the reviews referred to above, the authors do not comment on how pedagogical
strategies synergise with audience response systems. Instead, they emphasise the practical
aspects of implementing audience response systems in teaching, with focus on the
accessibility of modern systems and gradual introduction while continuously evaluating the
tool (Castillo-Manzano et al., 2016).
3.2.2 Pedagogical goals of using audience response systems
Caldwell (2007) has published a summary of current research with examples of
implementations of audience response systems across a variety of disciplines and subjects.
This paper is a contemporary of Fies and Marshall’s (2006) review but also includes best-
practice tips on how to use audience response systems in teaching. Several purposes for
using audience response systems emerged across the reviewed studies: increasing
interaction, assessing student preparation, formative assessment, tests, making lectures
more fun, and prompting discussion. These goals are not solely dependent on using an
audience response system; in fact, teachers have used interactive questions for a long time,
originating from Socrates’ method of questioning. Caldwell’s analysis is therefore in line
with Fies and Marshall’s (2006) findings: an audience response system is a tool that
enables different instruction methods. Apart from creating a conversation about the course
material, the questioning approach helps draw the students’ attention towards important
topics and stimulates their meta-cognitive processes when they reflect upon their knowledge
(Boscardin & Penuel, 2012).
A review by Kay and LeSage (2009) unveils similar motives associated with the use of
audience response systems. They divide them into the following categories:
• motivational strategies to enhance participation and engagement
• assessment-based strategies for contingent teaching and formative or summative
assessment
• learning-based strategies for stimulating attention, preparation and discussion
There is consensus across the literature that audience response systems enhance
participation, which can be considered a crucial condition for any learning to take place
(ibid). Audience response system usage is sometimes linked to a portion of the course grade
by giving credit for scores on audience response system tests. This naturally has a significant
impact on student participation (Caldwell, 2007; Kay & LeSage, 2009). The increased
engagement can also be explained by the adoption of interactive teaching that comes with
audience response systems, or by the novelty of the technology itself (ibid). Boscardin and
Penuel (2012) state that there is a positive correlation between active participation and
learning gains, and there is evidence of high-quality interactions in audience response
system-supported activities: verbalisation of students’ thoughts, focus on relevant areas and
probing questions (Kay & LeSage, 2009).
3.2.3 Formative assessment with audience response systems
One of the main features of an audience response system is the ability to easily assess the
students’ understanding and adjust the mode of instruction accordingly, i.e. formative
assessment. This kind of intervention leads to high learning gains and requires continuous
monitoring and adjustment by the instructor (Boscardin & Penuel, 2012). A digital
audience response system gives a quick overview and helps to identify misconceptions that
need to be addressed. The ability to conduct contingent instruction may depend on the
instructor’s experience and confidence in giving up some of the control that naturally comes
with a completely pre-planned lesson (Kay & LeSage, 2009). Boscardin and Penuel (2012)
state that successful implementation of formative assessment requires both deep content
knowledge and pedagogical skill, which implies that these factors are crucial to exploit the
full potential of the audience response system.
Caldwell (2007) agrees with Fies and Marshall (2006) that it is difficult to determine
whether it is the audience response system itself or its combination with interactive
methods that is responsible for the positive outcomes of the reviewed studies. Audience
response systems have a neutral or positive effect on learning outcomes but seem to be
particularly powerful in cooperative learning such as peer instruction (Boscardin & Penuel,
2012; Caldwell, 2007; Fies & Marshall, 2006; Kay & LeSage, 2009). However, Caldwell
points out that interactive questioning becomes more difficult to adopt in larger lectures and
that assessing smaller samples of the class is often misleading. There are other ways to
collect responses from the whole group, although low-tech methods such as a show of
hands are more difficult to estimate and do not keep the responses anonymous. A digital
audience response system can provide anonymous voting and quick display of responses,
and the records can often be saved for further analysis (Caldwell, 2007). Anonymous
audience response systems also reduce the pressure to vote with the majority (Boscardin
& Penuel, 2012). Like Fies and Marshall (2006), Caldwell concludes that using an audience
response system prompts teachers to rethink their instruction and lowers the threshold
for adopting interactive methods.
3.2.4 Students’ attitudes towards audience response systems
Students’ attitudes towards audience response systems are in general favourable: Caldwell
(2007) found positive ratings of 70 % and above, or 4 out of 5 on a Likert scale. The students
express that the tools are fun and helpful and are particularly happy with the anonymity
and the opportunity to compare their answers with their classmates’. They also recognise
the importance of collaborative learning. Some disadvantages that students note are
technical problems, costs associated with clickers and uncertainty about the learning value
of audience response systems. Technological advances help overcome these obstacles:
smartphones, Wi-Fi and the necessary technical equipment are more available than ever
(Castillo-Manzano et al., 2016). There are some concerns about using an audience response
system just for the sake of having a digital tool and not because it fits the course material
(Caldwell, 2007). Furthermore, some students find little value in deviating from traditional
teaching, think that an audience response system is distracting, or do not feel comfortable
answering or discussing questions. On the other hand, there is some indication that
audience response system use reduces the influence of the most vocal students and gives
shyer students an opportunity to participate on equal terms (Kay & LeSage, 2009).
Throughout the literature, this effect is attributed to the anonymity component of the
audience response system (Boscardin & Penuel, 2012).
3.2.5 Best practices for using audience response systems
Given the consensus that an audience response system is more of a tool than a teaching
approach, it is important to have a clear purpose for using it and to keep that purpose in
mind when designing audience response system questions (Boscardin & Penuel, 2012). The
purpose should also be explained to the students so that they understand the expected gains
of the audience response system activities (Caldwell, 2007; Kay & LeSage, 2009).
Well-designed questions are essential for successful use of audience response systems,
according to Caldwell’s review (2007). Conceptual questions are more useful than questions
that require calculations or recall of facts, because they focus on understanding rather than
memorisation. Questions that yield a wide distribution of responses and expose
misconceptions are beneficial for spurring discussion. The voting options should include
common mistakes that students make. Preparing good questions can be challenging and
requires the teacher to invest considerable time in the task (Boscardin & Penuel, 2012;
Caldwell, 2007; Kay & LeSage, 2009). Collegial work, or using concept inventories that have
been developed in different subjects, can be a way of managing the initial workload (ibid).
As for the number of questions during one session, there are indications that 2-5 questions
per lecture hour is a reasonable amount (Caldwell, 2007; Kay & LeSage, 2009). Attention
drops occur after approximately 15-20 minutes, and an audience response system question
can help regain focus (Boscardin & Penuel, 2012; Caldwell, 2007; Kay & LeSage, 2009).
The number of options suggested in the literature is four to five, and the wait time before
closing the voting is up to the teacher’s judgement based on topic, difficulty and pedagogical
goals (Kay & LeSage, 2009). When a peer discussion precedes the voting, the noise level in
the class seems to be a common indicator that it is time to move on (Beatty et al., 2006).
After the display of responses, the students should be allowed to discuss the outcome and
explain their thoughts on the concepts. Small-group discussions are preferred to class-wide
discussions. The wrong options should be explored as well, to ensure that the students
understand why they are wrong. It is also important to summarise the discussions
afterwards and add further explanations, or lecture more if needed (Caldwell, 2007).
3.2.6 Research specifically focusing on Mentimeter
There are a few published papers on Mentimeter, most of them taking a descriptive
approach. Little (2016), Rudolph (2018), and Moorhouse and Kohnke (2020) have
published technological reviews that cover some use cases, benefits and drawbacks of
Mentimeter in educational contexts. Case studies further illustrate these points, as seen in
publications by Skoyles and Bloxsidge (2017) on lectures on reference systems, Mayhew
(2018) in political science, Vallely and Gibson (2018) in a teacher education department,
and Hill (2019) on large lectures at undergraduate, master’s and doctoral levels.
All the above papers point out the benefit of the bring-your-own-device (BYOD)
characteristic, as it reduces costs for both institutions and students. Traditional audience
response systems require access to specific hardware, and sometimes the students
must purchase the clickers themselves. Another advantage is that Mentimeter is device-
agnostic with no installation needed, and the students can use any device that has an
internet connection (Little, 2016; Mayhew, 2018; Rudolph, 2018). This frees up time for
more teaching and learning during class (Little, 2016; Skoyles & Bloxsidge, 2017).
It is not uncommon to raise concerns about cell phone use in classrooms. Some teachers
find it disruptive, and students can quickly lose their attention during class. Rudolph
recognises the negative effects of the off-task use of cell phones on knowledge retention and
students’ performance. However, he argues for using devices for educational purposes,
transforming the problem into an opportunity (Rudolph, 2018). According to him,
“technology is a mere enabler of best practices in teaching and learning”, further stressing
the importance of the facilitator for positive outcomes (ibid). The case study by Skoyles and
Bloxsidge (2017) exemplifies this approach; the authors felt frustrated by the non-interactive
traditional lecture style that merely engaged the front two rows of students. They noted that
many students already used their phones and computers during class and decided to take
advantage of that.
Unlike hardware-based audience response systems, Mentimeter provides several question
formats beyond traditional multiple choice, giving more freedom to practitioners and their
judgement (Little, 2016; Mayhew, 2018; Skoyles & Bloxsidge, 2017). Mayhew (2018)
recognises the potential of combining an audience response system and presentation
software in Mentimeter, which provides further flexibility for educators; the tool is also
user-friendly because of its familiar look and features.
Hill (2019) states that, because of their scalability, large lectures continue to dominate
higher education. Large lectures are cost-effective arrangements; however, they fail to
engage students in active learning. She proposes, referring to Habel and Stubbs (2014), that
an audience response system is a useful tool for engaging students and presents some use
cases for Mentimeter in large lectures. The Quiz feature tests students’ understanding, Word
cloud is suitable for ice breakers and for initiating further discussion, and Open-ended
questions collect arguments on a topic and visualise them for the audience. This shows, as
previously stated by Little, the range of possibilities for educators to utilise Mentimeter for
didactic purposes. The reviews agree with previous research on audience response systems
that the crucial part is how educators choose to use the tool and how it supports their
pedagogical intentions (Mayhew, 2018; Skoyles & Bloxsidge, 2017; Wood, 2017).
The research indicates some downsides due to the design of the Mentimeter presentation
tool. The character limitation on slides might force the educator to abandon their intended
activity and reconsider the question format (Hill, 2019; Skoyles & Bloxsidge, 2017).
Furthermore, verbal discussions risk being reduced in favour of digital responses, a risk
that can be mitigated by giving the participants time for group discussion (Moorhouse &
Kohnke, 2020). Some concerns are raised about the need for a personal device and access
to a reliable internet connection (Mayhew, 2018; Hill, 2019) and about the lack of certain
question types or features (Hill, 2019; Mayhew, 2018). In Skoyles and Bloxsidge’s (2017)
experience, students spontaneously share devices if needed, and still benefit from seeing the
voting results displayed.
Vallely and Gibson (2018) discuss three possible applications of Mentimeter: gauging
opinion, engaging discussion, and voicing concerns. Gauging opinion can be done with the
Scales feature, for instance to identify gaps in students’ knowledge and adjust the
curriculum accordingly. The authors state that “the tool has proved useful for
asynchronously collecting student responses and using these to shape future
teaching” (Vallely & Gibson, 2018, p. 2), thus demonstrating the opportunity of using
Mentimeter for formative assessment. Skoyles and Bloxsidge draw the same conclusion:
“The variety of question styles moves beyond simple yes or no answers, which enables
deeper learning and provides a range of formative assessment options.”
Word cloud or Open-ended questions can be used as prompts for engaging discussion and
initiating debate (Vallely & Gibson, 2018; Hill, 2019; Moorhouse & Kohnke, 2020). The
instructors can also give students opportunities to anonymously voice concerns and collect
real-time feedback, which further supports dialogic teaching (Vallely & Gibson, 2018;
Moorhouse & Kohnke, 2020).
Beyond the technological reviews, some studies evaluate the impact of Mentimeter on
student engagement and efficiency in large lectures (Wood, 2019), creative mathematical
thinking (Andriani et al., 2019) and productive skills in English (Puspa & Imamyartha,
2019). Furthermore, a study by Hill and Fielden investigated the participation in quizzes
and students’ perceptions of the anonymity provided by Mentimeter (Hill & Fielden, 2017).
Andriani, Dewi and Sagala (2019) used Mentimeter to develop “blended learning media” to
improve students’ mathematical creative thinking skills. The authors define blended
learning as a process that combines face-to-face learning with computer-assisted learning,
and learning media as “technologies that are utilised to deliver lecture material” (Andriani
et al., 2019). The effect is measured by the normalised gain of the test scores before and
after the intervention. The result is a gain of 0.2, which is indeed a low score according to
Hake’s (1998) definition of the normalised gain: the actual average gain divided by the
maximum possible average gain. The authors conclude that “this media is not effective to
improve the ability of creative mathematical thinking” (Andriani et al., 2019).
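For scores expressed in percent, Hake’s normalised gain can be written as

\[
\langle g \rangle = \frac{\langle S_{\text{post}} \rangle - \langle S_{\text{pre}} \rangle}{100\,\% - \langle S_{\text{pre}} \rangle},
\]

so a gain of 0.2 means that the students, on average, achieved 20 % of the improvement that their pre-test scores left room for.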
However, the paper appears to be over-ambitious in its claims. The main weakness of the
study is the failure to give a comprehensive overview of the methodology. There is no
information on the content of the learning material or how it was used in the intervention,
and no discussion of pedagogical implications. This would be particularly important to
include, given what is already established about technology primarily being a tool in the
hands of educators.
Puspa and Imamyartha (2019) have explored students’ attitudes towards introducing
Mentimeter in the English classroom by gathering data from a cross-sectional survey
involving 120 students at the university level. The survey covered three categories:
employment of a web-based application, the use of Mentimeter in English education and
the impact of Mentimeter on the students’ productive skills. A majority of the respondents
were first-time users of Mentimeter. They show a positive attitude towards web-based tools
and Mentimeter in particular: the respondents agree that it is comfortable to use and helps
with motivation. At the same time, the students report that they do not have enough mobile
data for using online apps and that they expect the authorities, presumably their university,
to provide the necessary internet access. They also recognise that there is a character
limitation for responses, which is similar to Hill’s (2019) remarks on the slide format from a
teacher’s perspective. The survey section about Mentimeter’s impact on productive skills
reveals one of the limitations in this study’s method. The questions posed are not necessarily
an accurate operationalisation of the “impact on productive skills”, which makes the
conclusions somewhat weak.
The ratio of respondents to the number of attendees in Mentimeter activities varies across
the reviews and studies, from 50-75 % (Vallely & Gibson, 2018) and the two thirds observed
by Skoyles and Bloxsidge (2017) to the 80 % estimated by Hill (2019). Hill and Fielden
(2017) measured the participation in Mentimeter quizzes in a series of university lectures on
ecology, finding an engagement of 79.3 % (n = 17.4 on average) over five sessions with three
quizzes each. Finally, Wood (2019) conducted a study over three academic years, posing
questions with Mentimeter in each lecture. The participation percentages varied from a
minimum of 40 % to a maximum of 84 % (n = 101 in the first year, n = 110 in the second
year, n = 150 in the third year).
Both Skoyles and Bloxsidge (2017) and Mayhew (2018) discuss how using an audience
response system impacts the teacher’s attitude and role during a learning session.
Mentimeter prompts the teacher to rethink their way of delivering the content and take
advantage of the interactive features (Skoyles & Bloxsidge, 2017). This enables a switch from
a traditional, teacher-centred lecture style to a less passive approach. Mayhew argues that
the teacher needs to be confident in handling that transition and accept a lower level of
control in return for more student-centred instruction (Mayhew, 2018). However, findings
from Wood’s (2019) study suggest that “in theory, staff say that they want contingent
teaching but when presented with the opportunity, it can be overly demanding.”
Mayhew (2018) further stresses the importance of investing time in developing
pedagogically sound audience response system-activities. Questions should have a clear
learning purpose, encourage interaction and discussion and explore links between concepts
and ideas, as stated by Beatty (2004). It is also essential to allow time for discussion of the
voting results and be prepared to explore any issues that may arise and explain why an
audience response system is used, claims Mayhew (2018) with reference to Caldwell (2007).
The first part of Hill & Fielden’s (2017) study aimed to explore the Quiz feature in
Mentimeter with focus on students’ perceptions and engagement. In a group of 22 students,
they found that although lecture attendance decreased over time, the proportion of
attending students who participated in the quizzes did not. The findings from the subsequent
questionnaire suggest that students think interactive online quizzes are a fun way to break
up lectures and help them consolidate learning, which is consistent with data
from Wood’s (2019) study. A majority (76.5 %) of students responded to the questionnaire,
and they all said they would recommend Mentimeter for other lectures.
In the second part of the study, Hill & Fielden used Mentimeter for gathering anonymous
questions from students during a Q&A session. Students’ perceptions of this feature were
investigated in a series of closed and open questions, with 6 of 14 students participating in
the questionnaire. Three themes emerged from the analysis: creating a voice and being
heard, the learning context, and Mentimeter’s ease of use. The respondents recognise that
their confidence in posing questions depends on the setting (i.e. smaller groups creating fear
of embarrassment) and appreciate that Mentimeter allows them to be heard in a safe way.
One of the students expresses the following:
Yes, I feel it’s important that the least vocal of us are given the opportunity to have
our voice heard as some members of the group can often dominate classes with
questions and information that isn’t really relevant to the topic (Hill & Fielden,
2017, p. 21).
The anonymity provided by Mentimeter is a recurring theme across multiple reviews and
studies. Rudolph points out that it is usually a small group of active students who answer the
oral questions posed by the teacher, and that more reluctant participants can respond
thanks to Mentimeter (Rudolph, 2018). His claim is supported by Hill (2019), Vallely and
Gibson (2018), Wood (2019) and Mayhew (2018). Their studies establish that anonymity
creates a safe environment for students to interact with the material without fear of
judgement. In particular, Mayhew proposes that anonymity is “…being of particular value
to less confident students who might otherwise remain silent because they fear being
wrong or they fear asking ‘silly’ questions.” (Mayhew, 2018, p. 549).
4 Method
4.1 Research design
This study aims to explore the use of an audience response system (Mentimeter) and
examine the effects of using it. The goal was to do a randomised case study with a
mixed-methods approach, collecting qualitative data from observations and interviews as
well as quantitative test scores. Gathering data on a phenomenon from several standpoints
is in line with the principle of data triangulation (Denscombe, 2008). Triangulation helps in
achieving concurrent validity by demonstrating whether different data-collecting
instruments are aligned in their results (Cohen et al., 2007). On the other hand, gathering
data from multiple sources can increase the complexity of the analysis, and contradictory
results can compromise the ambition to get a consistent picture of the studied phenomenon
(Denscombe, 2008).
A pre-study preceded the main study. The goal of the pre-study was to identify potential
topics for the main study and test some of the data gathering methods that would be used
later. The pre-study included two lecture observations followed by interviews with the
teachers, along with a literature study. The results from the pre-study helped to set the scope
for the main study and are also presented in 5.1 Results from the pre-study. An overview of
the whole process is shown in figure 6.
Previous research has established that there is a lack of rigorous studies on the effects of
audience response systems with equal instruction methods in both the test group and the
control group (Boscardin & Penuel, 2012; Fies & Marshall, 2006). Thus, the original aim of
this study was to create a randomised controlled design, only varying the presence of the
audience response system in the test group. Many studies also noted the lack of observations
before the intervention, so a baseline observation was planned. Furthermore, most studies
on audience response systems seem to be at university level, making the high-school
perspective relevant. As there is consensus among the meta-reviews that the effects of an
audience response system depend on the instructor’s pedagogical approach and experience
(Caldwell, 2007; Fies & Marshall, 2006), this was considered when designing the interview
questions and during the planning session with the teacher.
4.1.1 Complications following the 2020 outbreak of COVID-19
The study was scheduled for mid-March 2020, with the lessons for the test group and
control group scheduled the same week, starting with the control group. However, this
coincided with the Swedish government issuing a recommendation to close the upper
secondary schools to limit the outbreak of COVID-19. By this time, the control group
observation had been performed, but the test group lesson had to be cancelled. This meant
that the original controlled design could not be followed, as all teaching was done remotely
from there on, thus introducing a fundamental change in the setup of the lessons. Lesson 2
was postponed as the teacher and I adjusted the plan. We tried to keep lessons 1 and 2 as
similar as possible, while acknowledging that the comparison would be skewed because of
the new circumstances. Furthermore, we added a third lesson with the whole group, where
we made some adjustments based on our experience from lesson 2. The data from lesson 3
is not included in this thesis due to time constraints. Section 4.3.5 Main study describes the
final design of each part of the study.
Figure 6: A flowchart showing the working process. The first part consisted of a literature
study and two case studies and the second part consisted of planning and observation of two
lessons for the main study.
4.2 Research context
The study was performed in a Swedish high-school class in the first physics course (Physics
1a) of the national science program with specialisation in social science. The physics subject
was chosen due to its tradition of audience response system-supported peer instruction, as
well as being one of the author’s majors. The selection of the class resulted from convenience
sampling, where previously established contact with the teacher was the main reason. The
preconditions in the class matched the scope of this research, and the teacher was interested
in incorporating the study in the planning for the semester.
After reviewing the curriculum and matching it to the timeline of the thesis, we decided to
design a lesson on motion and distance-time graphs, including concepts of displacement
and distance travelled. The curriculum for Physics 1a describes what the students are
expected to learn in the course along with core content and knowledge requirements for
grading. The following core content was covered by the material in the lessons in the study
(Skolverket, n.d.):
• “Speed, momentum and acceleration to describe motion.”
• “Identifying and studying problems using reasoning from physics and mathematical modelling covering linear equations, power and exponential equations, functions and graphs, and trigonometry and vectors.”
As for the knowledge requirements, the goal was to give the students the opportunity to
“give an account […] of the meaning of concepts, models, theories and working methods
from each of the course's different areas. Students use these […] to look for answers to
issues, and to describe […] the phenomena and relationships of physics.” (Skolverket, n.d.)
The ellipses […] indicate the phrases used for distinguishing between different grades.
4.3 Data collection
4.3.1 Literature study
This study began with a systematic literature study to get an overview of the current state of
research on the topics of this thesis. The first attempts to search for “audience response
system” produced a substantial number of publications spanning several decades. In order
to grasp the vast range of research, I prioritised looking for meta-reviews and highly cited
articles in four databases: Scopus, Web of Science, ERIC and IEEE.
Additionally, I performed a search adding relevant keywords related to the research
questions and ideas from the initial skimming of publications. The following query was used
for searching in titles, abstracts and keywords:
(“clickers” OR ((“audience” OR “personal” OR “mobile” OR “student” OR
“classroom”) AND (“response-system*” OR “participation-system*”))) AND
“physics” AND (“peer-instruction” OR “anonymity” OR “formative-assessment”)
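To make the intended grouping of the boolean operators explicit, the query logic can be sketched in code. This is an illustration only: the `matches` helper and its plain substring checks are my own simplification, ignoring the databases’ field handling and the expansion of wildcard patterns such as “response-system*”.

```python
def matches(text: str) -> bool:
    """Illustrative reading of the literature-search query.

    `text` is assumed to be the lower-cased concatenation of a record's
    title, abstract and keywords; hyphenated terms stand in for the
    wildcard patterns that the databases expand.
    """
    t = text.lower()
    # "clickers" OR ((an audience-type word) AND (a system-type phrase))
    ars = "clickers" in t or (
        any(w in t for w in ("audience", "personal", "mobile", "student", "classroom"))
        and any(w in t for w in ("response-system", "participation-system"))
    )
    # ... AND "physics" AND (one of the topical keywords)
    return ars and "physics" in t and any(
        w in t for w in ("peer-instruction", "anonymity", "formative-assessment")
    )
```

A record mentioning, say, “student response-systems” together with “physics” and “peer-instruction” would match, while a physics paper on anonymity with no audience response system terms would not.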
Up until now, there have been a few publications specifically on Mentimeter, most of them
from 2018 onwards. A total of 11 articles on Mentimeter were found by February 2020 and
all of them are included in the literature study of this thesis. Finally, some literature on peer
instruction, formative assessment and research methodology was reviewed in the process.
4.3.2 Interviews
Interviews can be used for understanding complex and deep issues and to collect qualitative
data from a certain group of people. An unstructured interview is suitable when the
researcher needs to understand what is not known and to identify topics of interest. The
explorative approach is therefore good for gathering hypotheses rather than facts, so that
the interviewer gains insights about the topic with the interviewee's help (Cohen et al.,
2007). A semi-structured interview is a method to explore the informant’s motivations and
thoughts in depth and requires more preparation of questions and topics, while having the
flexibility to deviate into interesting topics that come up (Denscombe, 2008).
The pre-study interviews had dual purposes: to understand what teachers think of using an
audience response system and to gather ideas for the main study. For this reason, the
interviews had a certain level of structure, with some open-ended questions prepared
beforehand, but otherwise the respondents could freely elaborate on their thoughts.
According to Denscombe (2008) it is important to have a list of issues to address and some
knowledge of the area to be able to pose relevant questions and prompt the interviewee to
elaborate further. Therefore, I read some of the literature on audience response systems in
advance and discussed the interviews with my supervisors. See appendix C for the interview
questions.
The aim of the main study is to gain understanding of the reasoning behind pedagogical
strategies, which is why semi-structured interviews were chosen as one of the methods of
data gathering. A weakness in using interviews is that the number of informants is usually
limited due to the time and effort needed for organising the meetings (Denscombe, 2008).
This can raise doubts about the reliability of the results, which is why I also conducted
observations, for data triangulation.
4.3.3 Observations
An observation is a more direct way of extracting data from a natural setting instead of
relying on what the informants say about the situation. The data gathered from an
observation describes what happened in a certain setting and it is important not to disturb it
(Denscombe, 2008). In every observation, I sat at the back of the classroom so that I
could get an overview while not being in the group's visual field. When using a recorder, I
informed the group about the recording but made sure that the equipment was not too
visible to make the participants feel more comfortable. I also took notes in my observation
schemes to register what I saw and my interpretation of the situation. The process of
observation is prone to being compromised by memory flaws and selective perception. The
systematic note-taking helps keep the recording on the right track and mitigates the
impact of personal factors (Denscombe, 2008).
The degree of structure in an observation depends on how the information is registered. The
scale spans from free-form notes without any categories to systematic
registration of events minute-by-minute. When the observation has a specific focus, the
registration of data should be more structured. An open and explorative focus, on the other
hand, goes well with an unstructured approach (Björndal, 2005). Just like with the
interviews, the pre-study observations had an explorative approach and some level of
structure in the recording. The main study observations were mostly focused on capturing
the events surrounding the audience response system-sessions and were recorded in an
observation schedule as seen in Appendix A. In case study A, I took notes by hand without a
specific template, while case study B was recorded in the observation schedule shown in
figure 7.
4.3.4 Pre-study
Two exploratory case studies were conducted to understand how an audience response
system can be used in practice and to explore the educators' opinions on using the tool.
Exploratory case studies can be useful for generating hypotheses for further research (Cohen
et al., 2007). The selection of informants was based on recommendations from the
supervisors and their availability for participation within the time span of the pre-study.
Both case studies included observation of a lecture, followed by an interview with the
lecturer. Although the lectures were at university level whereas the main study in this thesis
is about high-school physics, they were found to provide relevant insights about
incorporating audience response systems in the instruction. Large-lecture teaching is a
common method at university level due to its scalability but is ineffective when it comes to
engaging students (L. Hill, 2019). As the number of students in a high school class is seldom
over 30, one can assume that it is less challenging to create interactivity in that setting.
However, this does not mean that the problem does not occur: teacher-oriented
instruction methods still dominate high-school physics instruction (Cummings & Roberts,
2008).
4.3.4.1 Case study A
The observed lecture was part of a first programming course for first-year university
students, where the teacher used a clicker-based audience response system for posing
multiple-choice questions. This first observation was unstructured and recorded by taking
computer-written notes on what happened in each audience response system-session during
the lecture. The following topics were identified prior to the observation and prioritised
during the note-taking: introduction of clickers, presentation of each question, time for
consideration and voting, evaluation of voting results and transition to the next part of the
lecture. The selection of topics was based on the literature study and discussions with the
supervisors. The purpose was to narrow down the scope of the observation to the parts of
the lecture most interesting to the topic of this thesis.
A semi-structured interview with the lecturer followed immediately after the class, using
pre-written open questions combined with discussion of observations from the lecture. The
interview covered the teacher’s purposes in using an audience response system, creating
multiple choice questions, posing questions during a lecture, effects on the audience and
advantages and limitations of the tool. Instead of recording the interview, I took notes on
the computer and summarised them the same afternoon. The interview questions are found
in Appendix C.
4.3.4.2 Case study B
Case study B was a rhetoric class at university. This time, the teacher used Mentimeter.
The observation in case study B had a somewhat higher degree of structure than in case study A.
This observation focused more on exploring how different functions in Mentimeter worked
in practice. For this purpose, an observation protocol template was prepared, as seen in
figure 7. Each Mentimeter feature (e.g. Multiple choice, Open ended, Content slide, etc.) was
assigned a code that was noted in the leftmost column whenever it came up during the
lecture.
This observation was audio-recorded, mainly to practise capturing audio in a classroom
and distinguishing between voices when listening to the recording, since part of the pre-study
was to get familiar with different types of data gathering. The audio from the follow-up
interview was recorded and auto-transcribed by the transcription software Vocalmatic.
Figure 7: Observation schedule for case study B.
The case studies provided insights into what audience response system-supported instruction
looks like in practice and how teachers experience working with different tools; the
results are presented in section 5.1. The main takeaways from the case studies regarding the
pedagogical aspects were the importance of having a purpose for each question and using
the opportunities for formative assessment. Furthermore, the informants discussed how
using an audience response system affects their own role and that it helps them rethink the
way they deliver content and create interaction in the classroom. To further explore these
insights, I formulated research questions about the use of an audience response system, with
an underlying focus on understanding the pedagogical intentions of the teacher.
4.3.5 Main study
4.3.5.1 Planning and writing guidelines
The teacher and I met to discuss the subject content, methodology and to prepare the
lessons in the study. At the same time, I did a short unstructured interview to assess the
teacher’s previous experience with interactive instruction. The aim of the planning session
was to gather data on the teacher's reasoning about the intended activities, to compare it
to the coming observations and the follow-up interview. Another purpose of joint planning
was to give the teacher autonomy over the process while providing support from the findings
and best practice tips from the literature, as well as helping to get started with Mentimeter.
The planning session was supposed to be audio-recorded for further analysis, but the
equipment failed, so I had to summarise what I remembered as soon as possible.
Additionally, I could discuss further with the teacher over chat, where I received help
with clarifying the parts that I did not recall. We had ongoing contact before and after the
planning session, where we exchanged ideas on the planning and useful material.
During the literature study, I found a number of useful articles on how to work with
audience response systems in the classroom. As a part of the planning I summarised some of
them to create guidelines for the teacher to facilitate the implementation of conceptual
questions in the instruction. The material is based on best practices for using audience
response systems by Martyn (2007), the revised taxonomy of multiple-choice item-writing
guidelines by Haladyna et al. (2002), and a question-driven instruction method formulated
by Beatty et al. (2006). Beatty et al.'s work focuses on physics instruction while Haladyna et
al. and Martyn provide general advice on both pedagogical and stylistic matters. I
summarised the main points of these works and revised them so that they would be more
relevant for this study, the purpose being to provide pedagogical recommendations to the
teacher. We discussed the guidelines to make sure that they were useful. As a result of the
discussion, I made some additional clarifications to make the recommendations more
comprehensible. The guidelines are found in Appendix B.
I gave the guidelines to the teacher along with links to relevant concept inventories from
Physport.org, a physics education resource developed by the American Association of
Physics Teachers (McKagan, 2020). The teacher used this material to create lecture slides
with concept questions, which we discussed and refined before carrying out the lessons. We
also used Eric Mazur's book on peer instruction to identify useful problems.
4.3.5.2 Baseline observation – lesson 0
Before splitting up the class in two groups for the study, I observed one of their regular
lessons. Hunsu et al. (2015) recommend that studies on audience response systems should
compare audience response system-supported instruction to pedagogical methods used pre-
intervention. Furthermore, the teacher reported that the class typically worked with
diagnostic questioning, where the teacher poses multiple choice questions to check on
students' understanding during the lesson. Therefore, it would be valuable to see how the
teacher’s instruction is affected by introducing a different method of interactive instruction.
The observation was audio-recorded and noted in an observation scheme, as seen in
Appendix A. The same scheme was used in the following observations as well.
4.3.5.3 Control group observation – lesson 1
The class was randomly split into two equally sized groups for the parallel lessons. The control
group was scheduled before the test group. The reason for this was to create a natural
progression from the teacher’s regular style of instruction (diagnostic questioning with show
of fingers) to peer-instruction with show of fingers, before introducing Mentimeter as the
method of voting.
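The random split described above can be sketched in Python; this is an illustrative snippet with a hypothetical function name and invented student labels, not the procedure actually used in the study:

```python
import random

def split_class(students, seed=None):
    """Shuffle the class and split it into two equally sized groups
    (the second group gets the extra student if the count is odd)."""
    rng = random.Random(seed)  # seeded RNG makes the split reproducible
    shuffled = list(students)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical class of 30 students, matching the class size discussed earlier.
control_group, test_group = split_class([f"student_{i}" for i in range(30)], seed=42)
print(len(control_group), len(test_group))  # 15 15
```

Seeding the random number generator is a common way to make such an assignment auditable after the fact, since the same seed reproduces the same split.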
Lesson 1 was held in a classroom designed for doing lesson studies. The classroom was
equipped with multiple microphones that recorded the sounds across the room, along with
cameras in both front and back. The students sat in pairs so that they could discuss the
concept questions with each other. The teacher alternated between lecturing using the
Google Slides presentation, drawing and showing examples on the whiteboard and posing
the questions from the presentation. The lesson was both video- and audio-recorded, along
with my observation notes.
The lesson outline alternated short bursts of lecturing with conceptual questions. Each
conceptual question followed the ConcepTest structure described in 3.1.4 Peer instruction. A
total of three multiple-choice questions were posed, along with an open question about the
possible interpretations of a graph that the students got to discuss before the teacher called
them out to present.
Figure 8. The classroom used in lesson 1
4.3.5.4 Test group observation – lesson 2
The test group lesson was postponed due to the restrictions introduced after the COVID-19
outbreak and needed to be done remotely. The teacher used Google Meet for setting up the
virtual classroom and shared his screen to alternate between a Mentimeter presentation and
Microsoft Whiteboard. The initial goal was to keep lesson 2 as similar as possible to lesson 1.
This was compromised by the fundamental difference between the remote and the physical
classroom. Furthermore, we had to abandon the peer instruction-element and limit each
conceptual question to one round of voting. The teacher and I considered some alternatives
for the students to discuss their answers with each other but could not find a suitable way to
incorporate it given the remote conditions. This meant that we kept the following from the
ConcepTest steps:
1. Question posed
2. Students given time to think
3. Students record their individual answers via Mentimeter
4. Live display of answers
The teacher had the liberty to decide on how to follow up each question, which was part of
the subject that this study aimed to investigate. In addition to observing the teacher’s
pedagogical considerations during the lesson, the focus was to understand the interaction in
a remote setting. Apart from voting on Mentimeter questions, the students could use either
the chat in Google Meet or create a private chat with the teacher to ask questions. They could
also unmute their microphones to speak to the class. The number and type of questions
posed were the same as in lesson 1 – three Multiple choice and one Open Ended, where the
students recorded their answers in a free text field.
Figure 9. The virtual whiteboard from lesson 2.
4.3.5.5 Test
At the end of each lesson, the group got to do a small test to assess their
comprehension of the whole lesson. The test provided some quantitative data from all the
participants and was efficient to administer. On the other hand, it does not capture how the
students reason, which could have been assessed by a focus group. The test consisted of one
conceptual question that was designed to cover all the different themes that were discussed
in class. The control group did the test in writing, while the remote test group got an
additional question in Mentimeter. The test is presented in the results section.
4.3.5.6 Interview with the teacher
The follow-up interview took place right after lesson 2 and was done remotely in a semi-
structured way. The aim of the interview was to understand the teacher’s intentions with the
instruction, how the questions worked in practice, differences between the physical and the
remote lessons, explore the decisions based on the feedback from the audience response
system and the user experience of Mentimeter. The interview was captured with a screen
recorder. The interviews were transcribed, and any quotes used in this report are translated
from Swedish by the author, thereby paraphrased in English. The interview questions are
found in Appendix C.
4.4 Data analysis
Transcription is time-consuming and laborious, but there are several benefits in converting
audio recordings into written form. Transcription makes it easier to get an overview of the
data and to analyse it, as well as to extract quotes for illustrative purposes (Denscombe,
2008). Therefore, all the interviews and observations in the main study – except for the
baseline observation – were transcribed, along with the interview in case study B. Using
automatic transcription software saved some time, although I still needed to go through the
recordings to double check the quality of the transcriptions. At the same time, I took notes
on relevant issues that I identified. Proofreading the transcripts was the first step of the
thematic analysis that followed, based on the process described by Kuckartz (2014). The
thematic categories were determined by an inductive approach using the themes that
emerged in the texts. The themes differ between the pre-study and the main study and are
accordingly presented in the result sections 5.1 and 5.2. In the next step, I went through the
data a second time and assigned relevant parts to the themes.
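Purely as an illustration, the two coding passes described above can be sketched in Python with invented example data; no such script was used in the thesis, and the segments and theme labels below are hypothetical:

```python
# Illustrative sketch of two-pass inductive thematic coding:
# pass 1 lets the theme set emerge from annotated segments,
# pass 2 assigns every segment to its theme(s).
from collections import defaultdict

# Invented (segment text, emergent themes) pairs standing in for transcript data.
annotated_segments = [
    ("The students voted anonymously and seemed more honest.", ["anonymity"]),
    ("The teacher skipped two slides after seeing the results.", ["formative assessment"]),
    ("Shyer students submitted answers via their phones.", ["anonymity", "engagement"]),
]

# Pass 1: collect the themes that emerged during annotation.
themes = sorted({theme for _, theme_list in annotated_segments for theme in theme_list})

# Pass 2: go through the data again and assign relevant parts to each theme.
coded = defaultdict(list)
for text, theme_list in annotated_segments:
    for theme in theme_list:
        coded[theme].append(text)

print(themes)  # ['anonymity', 'engagement', 'formative assessment']
```

The point of the two passes is that the category system is fixed only after the whole corpus has been read once, which is what distinguishes the inductive approach from coding against a predefined scheme.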
4.5 Ethical considerations
There are multiple ethical aspects to consider when conducting observations on human
subjects. The Swedish Research Council (Vetenskapsrådet, 2002) states four ethical
principles for research in humanities and social sciences.
• Information – the researcher is obliged to inform the participants about the purpose
of the study. The information must state their role in the project, how the study will
proceed, and that the participation is voluntary and can be cancelled at any time.
The participants need to receive information before the study.
• Consent – the subject has the right to make an independent decision about their
participation in the study. The researcher needs to obtain informed consent, and the
participant can withdraw their consent at any time, without penalty.
• Confidentiality – research data must be handled with confidentiality and protected
from unauthorized access. Personal information must be recorded in a way that
prevents the identification of individuals and staff members that handle sensitive
data should sign a confidentiality agreement.
• The use of data – the gathered data may only be used for research purposes, not
commercial. It is not allowed to use the research to make any decisions or take
measures that affect the individual without their consent.
The following measures were taken in order to meet the above criteria: The participants in
all the interviews and observations received information about the purpose of the study and
how the data would be used. The interviewees gave their explicit consent to their
participation and allowed me to record the audio. I ensured that they understood that they
could withdraw at any time and that any data that they had provided by then would be
discarded. The students in the observed lessons were offered the option of being excluded
from the notes in the observation schemes and the transcriptions of the recordings if they did not
want to participate, since they were required to attend their class. All the data was
anonymised so that the identity of the informants and students would not be revealed. Only
the author and the supervisors had access to the data and the participants were informed
that it would be stored for a maximum of one year after the end of this study.
5 Results
This section presents the results in the following order: first the results from the pre-study
(section 5.1) and second the results from the main study (section 5.2), including the
questions used in the lessons.
5.1 Results from the pre-study
During the interviews in case study A and case study B, the informants discussed the
following aspects of using an audience response system:
• Creating interaction and engaging students
• Inspiring and prompting discussion
• Formative assessment
• Purposeful use of technology
• Social aspects
• Technological aspects
• Prompting the teacher to reconsider their instruction
These themes emerged during the data analysis and are elaborated below.
5.1.1 Creating interaction and engaging students
The interviewee in case study A pointed out that there are several ways to create interaction,
but clickers usually have a high response rate. Posing a multiple-choice question and asking
the students to vote by show of hands yields less activity than using a digital audience
response system. In case study B, the teacher reported that the students are used to hands
up-answering and there are typically just a few that participate that way. The teacher
explained further how a discussion about participation in the classroom was initiated.
“I noticed that there were a few students who were not really participating actively in
class and because that doesn't have an exam it is really important to me that the
students not only are prepared to the lessons but that they show that they are prepared
as well […] I said (to a student) you know, you have some really good ideas – why do
you not show them more frequently or openly? And they said, ‘well you know, I do not
really like to speak up and that is why I do not think it is the best way of learning.’ “
Introducing an audience response system to the class helped the teacher get input from the shyer
students and follow up with activities that engaged more people. The interviewee also noted
that with a voluntary raise of hands, it sometimes feels like the students are responding with what
they think the teacher wants to hear. With Mentimeter, the teacher felt that all the students
submitted more honest responses.
5.1.2 Inspiring and prompting discussion
In both cases, the displayed responses inspired conversation between students, especially
small group discussions where everyone felt more comfortable to participate and articulate
thoughts that might be perceived as unconventional. The teacher in case study B noted
comments from the students that they believed would not have come up if the students had
to raise their hands to speak. Using Open ended questions in Mentimeter was particularly
useful in order to get creative responses.
“We don't want students talking over one another but what we do is, we do want
them to feel that they can come up with half an idea and another one can run with it
and then you build this idea up and that's something that's great about the tool.”
Although sceptical about using cell phones and computers in class because of
potential distraction, the teacher found that the students worked more closely together in
Mentimeter-supported activities and that the quality of interaction increased.
“But what I was absolutely amazed about was that it forced them to communicate
more with one another […] it's very open-ended and the students are forced to talk
about their own experiences.”
5.1.3 Formative assessment
The teacher in case study A reported that the outcome of the voting helped with the planning
of the next step of the lecture. Normally, the teacher can anticipate how the students will
vote but sometimes the results are surprising, which makes it important to adapt to them. This
can mean that the teacher goes back a few slides to repeat something or skips some of the
next slides because there is no need for further explanation. The feedback received via the
audience response system steers the pacing of the lecture and adding a feedback question at
the end of the session helped the teacher make changes to the instruction between lectures.
5.1.4 Purposeful use of technology
A range of different interactive activities were observed in the case studies, audience
response systems and interactive slides being some of them. The teacher in case study B
explained that Mentimeter was not always used during class, as different situations required
different approaches. When talking about using Mentimeter in different subjects, the
teacher reasoned that it would be good if instructors would not use it all the time and that
they should use it in different ways for the sake of variety. Both informants reasoned that
using an audience response system can be done for several reasons, depending on the
situation. A few examples came up: checking students’ understanding, repeating a concept,
identifying common misconceptions and posing easy questions to boost students’
confidence.
The interviewee in case study B was a bit sceptical beforehand because the students already
use their phones and computers a great deal. However, flipping the classroom so that the students
do more learning at home and then use the class for problematising and discussion proved
to be effective.
5.1.5 Social aspects
Dependence on the group – The lecturer in case study A experienced that there is a higher
acceptance for active participation in lectures in higher grades. In some classes, the student-
teacher interaction is limited to a small group of students, who typically have high self-
confidence and tend to speak more. This may isolate other, less confident students.
Increased focus and boosted energy – The informants reported that the students are more
focused on the task when an audience response system question is posed, unlike during the
regular lecturing. Adding a few questions throughout the session helps to activate the class
and boost the energy in the room.
Boosting self-confidence – One of the interviewees argued that it is important for the
students to see that they are not alone in their assumptions about the topic. The teacher
believed that the students were more prone to ask for clarifications if they knew that their
peers also had trouble understanding. Sometimes, the teacher also posed easy questions to
show the students that they already have some knowledge.
5.1.6 Technological aspects
There were a few obstacles when using the clicker-based audience response system in case
study A; compatibility with different operating systems and difficulties with posing
questions spontaneously. The teacher, who preferred clickers, explained that this was to avoid
the students getting distracted by other notifications on their phones. In case study B, the
interviewee reported missing some typographical features in Mentimeter compared to
other presentation tools. On the other hand, the teacher had started to put less information
on the slides and talk more about the content. At the same time, the teacher reported that
the tool was intuitive to use; “I don't feel like a technologist at all […] but this is the closest
thing I have found that does work […] I got it right just by my hunch”.
5.1.7 Prompting the teacher to reconsider their instruction
The teacher in case study B described that they typically have lots of interaction in their class
and considered themselves to be quite good at that. However, using an audience response
system provided new insights into how the teacher usually managed interactions and
prompted them to rethink those methods.
“The biggest advantage for me is that it has made me reconsider interaction in the
classroom […] When I used Mentimeter for the first time it made me completely
rethink things - it made me realise the limitation of hands up. It made me realise the
limitation of calling out – and I knew that not all students like to be called out.”
5.2 Results from the main study
This section presents data from the main study, starting by describing the teacher’s previous
experience with interactive instruction and the planning of the lessons in this study. This is
followed by themes regarding pedagogical considerations and reflections that emerged in
the data analysis.
• Transition to remote teaching
• Introducing the element of formative assessment
• Practical aspects of audience response system modality
• Supporting interaction between the students
• Interface and user experience
• Giving every student a voice
• Using an audience response system to support didactic choices.
The section concludes with an overview of the conceptual questions used in the study. The
table below shows a summary of the different data sources that were analysed in the main
study.
Table 1. Overview of data from the main study, presented in chronological order.
Source | Data format | Description
Lesson planning & interview | Written notes | Planning session with the teacher. Discussion of methods. Interview about the prior experience of interactive techniques.
Lesson 0 | Audio recording; observation protocol | Observation of pre-intervention instruction.
Lesson 1 | Video recording; Google Slides; test results; observation protocol | Group 1: Peer instruction with analogue audience response. Physical classroom.
Lesson 2 | Screen recording; chat log; Mentimeter slides; test results; observation protocol | Group 2: Synchronous remote teaching with digital audience response. Online classroom.
Interview | Video recording | Interview with the teacher about the pedagogical considerations and reflections on both lessons.
5.2.1 Previous experience with interactive teaching
During the lesson planning and the initial interview, the teacher described several
techniques that they have used for creating a more interactive learning environment. The
first is that the teacher themself selects a student to answer, rather than letting the students raise
their hands voluntarily. This allows more students to contribute, but at the same time may make
them more anxious. The teacher stressed the importance of having a good relationship with
the students, believing that it boosts their willingness to learn. To avoid the intrusive
questioning, the teacher implemented other interactive methods. One method is to
frequently encourage the students to discuss with the person sitting next to them, which
allows the teacher to pick up how they argue on the topic.
In the teacher's experience, when students respond to the posed questions, they sometimes
just try to say what they believe that the teacher wants to hear. That is not the point of
questioning, as the teacher’s goal is to understand what the students struggle with and help
them with that or adjust the instruction. The teacher reports communicating the purpose of
questioning to the students and reminding them that it is to support their learning and not
to judge their performance. The overall goal of interactive teaching is to help students learn
more. The teacher also points out the importance of assessing their understanding and
planning the teaching accordingly, as well as encouraging active learning and contribution
from everyone.
Another method described by the teacher is diagnostic questioning, which is a way to assess
the students’ understanding of key concepts and identify their misconceptions. The
questions are posed to the whole class and have several response options. The students
answer the questions simultaneously by raising the number of fingers that corresponds to
each alternative. This way, all students can respond as opposed to one student at a time. The
teacher mentioned one disadvantage of this method: the students in the back of the
classroom can see the hands of those in front of them. This leads to an instant change of
responses based on what other students believe.
Diagnostic questioning was part of the instruction in the baseline observation (lesson 0). On
several occasions, the teacher posed a question and asked the students to vote. A few times,
the teacher encouraged the students to discuss with their neighbours and sometimes the
students initiated the group discussions themselves.
5.2.2 Planning the lessons
The teacher planned the first pair of lessons with my support. We used the lesson plans from
the previous school year and modified them to introduce the element of peer instruction.
The teacher and I reviewed examples of concept questions that covered the physics content
in question and the meaning of different response options. We also discussed the
implementation of the questions in class and the differences between using Mentimeter and
the analogue counterpart, i.e. show of fingers. The limitations of multiple-choice questions
were discussed; the teacher felt somewhat limited by the inability of an audience response
system to capture more open questions and qualitative responses. However, this concern
was mitigated when we explored the features in Mentimeter, where the teacher agreed that
Open ended was a suitable option. Please refer to section 4.3.5.1 for a full description of the
planning process. This is how the teacher reflected upon the effort that must be put into
designing good conceptual questions:
It takes some time to create these questions and figure out the options. It was good
to have a book [Peer Instruction: A User's Manual] with some suggestions, but I
think that to get the best result, one must think about which options to include.
Two separate presentations were created for each lesson: lesson 1 in Google Slides and
lesson 2 in Mentimeter. Slides are found in Appendix D. The presentations are equivalent –
the only difference is how the questions were framed.
5.2.3 Pedagogical considerations and reflections
5.2.3.1 Transition to remote teaching
The original study design aimed to keep lessons 1 and 2 as similar as possible, varying only
the modality of the audience response system. The sudden shift to remote teaching triggered
a deviation from that plan, meaning that the peer instruction element had to be removed.
Apart from that, the teacher reported experiencing little difference in the lesson planning
itself. Both lessons were planned simultaneously and had the same lecture outline and
presentation slides. As for the necessary adjustments for the remote lesson 2, the main
concern was finding a combination of programs for the intended activities, i.e. Mentimeter
and Microsoft Whiteboard. The teacher pointed out that a big part of the verbal and
non-verbal communication that normally goes on in the classroom was missing.
Teachers have very big ears when the students discuss in the classroom, or at least
I do, and I absorb as much as I can. This way, I can check how much the students
have grasped without them understanding that I am listening. This is something
that is missing [in a digital setting], you also miss a lot of facial expressions and
things that I need to know if I should proceed or go faster or slower when I teach ...
The lack of interaction in lesson 2 was somewhat mitigated by using Mentimeter: the
teacher had experienced a palpable loneliness in other remote lessons that did not
have the element of an audience response system. Talking in an empty classroom without
students physically present added insecurity to the teaching. The teacher stated that the
interactive slides helped regain a sense of participation from the class and allowed every
student to speak. With Mentimeter, the small interactions helped the teacher understand
the audience and make minor adjustments. In contrast, in remote lessons without an
audience response system, the teacher had trouble getting responses from the students.
And now that everyone had to open the app and vote and I know that everyone has
reflected upon the question and what they think about it, or at least the majority of
the students. And it felt like a huge benefit to me - to feel that I am not actually
sitting alone and talking in an empty hall without any students, but rather that
they are with me and they think and listen.
The teacher expressed some concerns about whether the weaker students in the class were able to follow the remote lessons properly. Normally, those students refrain from asking questions in the classroom, and the teacher believed it was even less likely that they would write something in the group chat. In a physical setting, the teacher can approach each student directly and check on them.
But it is very difficult for the weaker students, I feel it is hard to make sure that
they keep up. How do I know that they listen and understand? They hardly dare to
ask questions in the classroom and now ...
The teacher felt that the audience response system compensated for some aspects that were missing in the remote classroom, such as the small interactions – facial expressions, affirmation from the students – that help the teacher determine the pace of the lesson. Continuous check-ups using Mentimeter provided some support in that matter. An audience response system in a remote lesson might give the students a better opportunity to follow along and raise concerns anonymously.
5.2.3.2 Introducing the element of formative assessment
Using conceptual questions in both lessons provides several opportunities for formative
assessment. Please refer to section 5.2.4 Conceptual questions for a list of the problems.
Most students voted for the correct alternative in question 1, lesson 2 (see fig. 11). However,
the teacher explained the decision to go through every option anyway in order to repeat the
main concept in the question and emphasize the meaning of the slope of the distance-time
graph, i.e. the velocity of the object. As this is a core concept for further understanding of the
material, the teacher stated that it probably would have been repeated anyway, regardless of
how the voting went.
There are examples of how the voting outcome helped the teacher to determine the next step
of the lesson. The results of the voting on question 3, lesson 2 (see fig. 13), indicated that the
vast majority of the students could tell the difference between displacement and distance. In
this case, the teacher did not find it useful to lecture further on the subject. Voting on
question 2 in the same lesson led to a different decision made by the teacher. This time, the
votes were more evenly spread among the options, with 50% of the students picking the
correct alternative and 42% voting for the second most popular. The teacher put more effort
into exploring how the students reasoned and into explaining the correct answer and why the
other alternatives were wrong. This method can help the students to use the flipped
approach on their own and solve problems by exploring why some alternatives do not work.
This way, they know that some options are wrong and then they become a bit
more attentive and try to detect why something might be wrong […] I felt when I
saw the response rate that I wanted to emphasise this question. I thought that this
is the way to go to get the students to reflect more upon it.
5.2.3.3 Practical aspects of audience response system modality
Show of fingers was chosen as the analogue audience response system for practical
reasons – the teacher had used that method before, and the class was used to it. The
teacher referred to Craig Barton’s argument for show of fingers, that the students can always
simply raise their hands. This method was also easier than distributing coloured papers for
voting, according to the teacher. Ease of use was also in favour of the digital audience
response system – the teacher noted that the students always have their phones. The teacher
also stressed that the instant display of votes makes it easy for the students to see each
other’s opinions.
Furthermore, the teacher expressed some concerns regarding the limitation in the number
of response options using just one hand. When designing the questions for lessons 1 and 2,
they intended to have six options per question but had to cut one of them so that the
students would be able to vote with up to five fingers. Effects of this manipulation can be
seen in question 3 (see fig. 13) about displacement and distance, where six options would
have covered every possibility, but one of them had to be cut. When
reflecting upon question 1 (see fig. 11), the teacher stated that they would probably have
made another exclusion. The question was about the motion of a car that slows down before
a red light, and the excluded option was a graph that illustrated the belief that slowing down
means a negative velocity. This is a common misunderstanding about how velocity works.
5.2.3.4 Supporting interaction between the students
Lesson 1 incorporated elements of peer instruction, which enabled more interaction between
students as they were discussing the questions and explaining their thoughts to each other
before a second round of voting. Most students voted for the correct answer after the
discussion, which is also recognised by the teacher. In the interview, the teacher reflected
upon why that happened. There could be some influence from seeing other students vote,
but the teacher felt confident that the improvement was mostly due to the discussion. The
teacher also stated that the peer discussion reduced the need for additional explanation by
the teacher. The teacher's explanation played a bigger role in lesson 2, because it was difficult
to properly simulate a peer instruction-session in the remote classroom. The teacher
considered splitting up the students in smaller meeting rooms, but it was not an easy
solution, and they would not be able to hear the students discussing.
5.2.3.5 Interface and user experience
When planning the lessons and preparing the Mentimeter presentation, the teacher
reported that they wanted to be able to share the presentation with others for collaborative
editing. This feature is not yet available in Mentimeter. The teacher also had some trouble
fitting the question text in the template due to the character limitations of the fields. The
workaround was to rephrase the questions, and in one case (the test question) the teacher
had to create an image and paste it into the presentation.
The teacher also had some trouble with adding a picture to the Multiple choice question
type. The image and the votes are not displayed at the same time (see figure below), and the
presenter must hover over the votes to show the picture in their place. The teacher would
have preferred to be able to display both at the same time and point directly at the graph in
the presentation during the explanation after voting. However, the graph was displayed in
the voter interface on the phones. During the lesson, the teacher kept an eye on their own
phone to make sure that everything worked properly for the students. The teacher
encountered the same issue while exporting the presentation as a PDF file, where the image
was layered over the voting results.
Figure 10: A multiple choice question that contains a picture of a graph; the image and the
votes are not presented simultaneously.
As for the live display of votes, the teacher thought that it might have influenced how the
students voted. The teacher recognised that it would have been a good idea to hide the
answers until all students had submitted their votes, which the teacher realised was possible
to do in Mentimeter. The votes typically converged around one or two options, which in
practice eliminates all the other alternatives from consideration. On the other hand, the
teacher pointed out that it is not necessarily bad that the students see which options are
more justifiable than others.
Overall, the teacher had a positive experience using Mentimeter and reports that it was easy
to quickly gather responses from the students. The teacher acknowledged that it had been
difficult to get any interaction with the students in the remote lessons, especially when
posing questions. The teacher had trouble getting any responses at all, which was also
observed during lesson 2 whenever an open question was posed to the whole group. On the
other hand, getting responses on the conceptual questions via Mentimeter was notably
easier and all students participated in the voting.
5.2.3.6 Giving every student a voice
Whole-class voting on conceptual questions differs significantly from asking a question and waiting for the students to volunteer to answer. The teacher believed that one can easily be tricked into thinking that everyone is on the same page if one student gives the correct answer. Seeing the distribution of votes helps avoid this pitfall.
As a teacher, one believes that one has a gut feeling about the students’ progress
and what they think. I think that one can often be mistaken or accidentally apply a
broad-brush approach. As if now that I have listened to 5 out of 32 students, it
would be a good indicator for what everyone thinks – this is not really the case.
The teacher further pointed out the efficiency of synchronous voting and appreciated the possibility of seeing all Open ended submissions to question 4 simultaneously and exploring various explanations of the graph. Getting qualitative input from all students would not have been possible in a setting without Mentimeter. Normally, the teacher only has time to let a few students present their thoughts, and it is possible that other students get away with not thinking about the topic or vocalising their own reasoning. Anonymity was believed to be of great importance for getting the students to participate in the interactive elements of the lesson. The teacher thought that it provided a sense of safety for the students and helped them get comfortable with responding to the questions posed. When Mentimeter was introduced during lesson 2, one of the students asked in the chat whether or not they would respond anonymously, which the teacher thought was an indicator of how important it was to the students to avoid being judged.
It was nice to get a quick overview of what everyone thinks, even those who barely dare to speak up in the classroom.
The teacher noted that some students do not want to or dare not speak up in the classroom, but given the opportunity to do so anonymously, most of them did indeed respond (12/13 during lesson 2). The response rate did not seem to differ from lesson 1 with show of fingers, although it could be observed that some students showed signs of worry (e.g. not displaying their hands clearly, looking around the classroom at other students' votes) when they voted.
5.2.3.7 Using an audience response system to support didactic choices
One major aim of the lessons was for the students to understand the connection between
motion of objects and distance-time graphs. This concept was introduced in lesson 0, which
preceded the pair of audience response system-lessons. The teacher wanted to repeat the
material from lesson 0 and create a smooth transition to the next part of the curriculum. We
designed question 1 to emphasize the meaning of the slope of the distance-time graph. This
was easily done using the Image choice feature in Mentimeter in lesson 2, where each
picture of a graph was automatically assigned to a response option. For lesson 1, the images
were manually added to the slide and labelled with numbers.
The teacher stated that the multiple-choice format is not suitable for all purposes of questioning. Question 4 was about interpretation of graphs – instead of connecting the motion of an object to a graph as in question 1, the students were supposed to derive possible events depicted by a given graph. This allows for many possible responses, which means that a multiple-choice question is not a meaningful way to assess the students' understanding. In lesson 1, the students got to discuss with their neighbours and then some of the groups presented their explanations. Sharing was part of the teacher's intention, so that the students would hear different interpretations of the same graph. The discussion was omitted in lesson 2, but this time, every student was able to share their opinion and view the input from their peers in an Open ended question in Mentimeter. Their responses were shown on the shared screen, giving both the teacher and the students an overview. The teacher reported that using Mentimeter had helped them rethink some of their ways of teaching. During the planning, the teacher at first assumed that it would not be possible to collect qualitative answers and therefore dismissed question 4 about graph interpretation. Exploring all the features did, however, give a different perspective on how certain activities could be done and inspired a different approach.
I thought that this was very inspiring, that the limitation is rather how I think the teaching should be, and that I had to rethink it a bit.
5.2.4 Conceptual questions
The selection of conceptual questions is based on the current area in the curriculum, which
is motion of objects. The teacher and I defined the main learning objectives and concepts
that were of importance. As part of the preparation, we looked through concept inventories
linked on Physport (2020) and Peer Instruction: A User’s Manual (2014) for examples of
question formats and inspiration for content.
During the process, we discussed the style and phrasing of the questions, as well as the
answer options and their meaning. Catching common pitfalls was of particular interest, as
well as prompting discussion between students. The questions were also checked against the
course curriculum and the content in the textbook that was used in class. For more
background on the methodology, please refer to section 4.3.5.1.
There are a couple of concepts that are important to understand regarding motion that these
questions aim to cover, as well as several misconceptions to address. One concept is the
distance-time graph and its derivative, i.e. the velocity. A positive slope of the graph
means that the velocity is positive and vice versa, meaning that the velocity is different from
speed because it considers the direction of motion. The steepness of the graph is also
important to consider, as it indicates the magnitude of the velocity. Finally, the difference
between displacement and distance is addressed. The displacement between two points in
the graph that have the same value on the distance axis is zero, but to get the distance
travelled, one must add up the lengths of each rise and fall in the graph. This means that the
distance travelled will always be equal to or greater than the displacement of an object.
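These relations can be summarised compactly in standard kinematics notation; the symbols below (position $s(t)$ along the axis of motion and the instants $t_0$, $t_1$) are introduced here for illustration and do not appear in the questions themselves:

```latex
% Velocity is the derivative (slope) of the distance-time graph:
v(t) = \frac{\mathrm{d}s}{\mathrm{d}t}

% Displacement depends only on the endpoints:
\Delta s = s(t_1) - s(t_0)

% Distance travelled accumulates the magnitude of the velocity,
% so it can never be smaller than the displacement:
d = \int_{t_0}^{t_1} \lvert v(t) \rvert \,\mathrm{d}t \;\geq\; \lvert \Delta s \rvert
```

For example, walking 3 m forward and 3 m back gives a distance travelled of 6 m but a displacement of zero, which is exactly the situation probed in question 3.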
5.2.4.1 Question 1
Figure 11: Which graph is the best representation of a car stopping before a red light?
The main concept in this question is that the slope of a distance-time graph is the velocity of
the object. The correct answer is E. It is possible to misunderstand the meaning of the origin
of the graph and believe that the object should stop when the graph hits zero on the distance
axis. This is addressed by option F. Furthermore, one might believe that a decrease in
velocity means that the graph has a negative slope, as seen in options C and F. Option C
also represents the confusion between distance-time and velocity-time graphs and the belief
that the braking is continuous.
5.2.4.2 Question 2
Figure 12: The graph shows two trains running on parallel tracks. Which of the following
options is true?
The main concept in this question is that the slope of a distance-time graph is the velocity of
the object. The correct answer is 3) – the trains will have the same velocity at some point
before tB. It is important to understand that two objects have the same velocity if their
graphs have the same slope. A common misconception is that the graph itself represents
velocity, which is implied by option 1) that states that both trains have the same velocity at
time tB. Option 2), saying that the velocity of both trains is constantly increasing, addresses
the idea that a positive derivative of the graph means that the velocity increases. The fourth
option is that the trains will never be next to each other, which further checks the students'
understanding of what the distance-time graph shows, i.e. information about the
objects' position at a given time.
5.2.4.3 Question 3
Figure 13: Victor walks from one place to another. After he stops, his displacement is [pick
option, e.g. always bigger or equal to] the distance travelled.
The main concept in this question is the difference between displacement and distance. The
correct answer is the fourth option. This question is designed to cover every possible answer
that can be given to check whether the students have grasped an important definition. The
displacement is a change in position while the path between the initial and the final
positions can look different. One of the alternatives had to be omitted in lesson 1 to keep the
number of options to 5.
5.2.4.4 Question 4
Figure 14: What event can be described using this distance-time graph?
The main concept in this question is interpretation of graphs and drawing parallels between
the information given in the graph and real-life events. This question is open to allow the
students to formulate their own understanding and illustrate the variety of possible
explanations to the same graph.
5.2.4.5 Question 5 – test question
This is the test question that was used to evaluate the students’ understanding at the end of
the lesson. The question was designed to cover multiple topics that were discussed during
the lesson, thus being more complex and covering several concepts: the slope of the graph
representing velocity and the difference between displacement and distance travelled.
For instance, the difference between 2) and 4) is the steepness of the graph. The student
needs to understand that faster movement means a higher velocity and therefore a steeper
graph in that area. 94 % (16/17) of the students in lesson 1 and 67 % (8/12) in lesson 2
provided a correct answer to the question.
A person starts at point P in the image below, stays there for a while and then moves along
the axis to Q, where they stop. Then the person runs quickly to R, stops again and then walks
slowly back to P. The unit in the graph is arbitrary.
Which distance-time graph represents this person's motion?
6 Discussion
The aim of this thesis was to understand the effects of using an audience response system in
a learning environment, with focus on physics instruction. This section will discuss the
results from the study with respect to the theoretical framework and address how the
findings contribute to the field of research on audience response systems. Methodological
issues are also discussed.
Both lessons ended with a conceptual test to assess the learning outcome of each
session. As stated in section 5.2.4.5, 94 % of the students in lesson 1 and 67 % in lesson 2
answered correctly. Attributing these results solely to the modality of the audience response
system is impossible due to the vast difference in the setting and instruction of each lesson
necessitated by the pandemic. In the light of previous research, the outcome of audience
response system-supported teaching is more dependent on the type of instruction rather
than the tool. There can be several explanations to the higher proportion of the students
passing in lesson 1. A simple reason is that lesson 2 might have been perceived as more
difficult, even though the content is the same. It is impossible to recreate the exact same
setting, as learning is dependent on the social context and hence the participants in that
situation. No teacher acts the same on two occasions, even if the content of the lesson is
identical. From a behaviouristic point of view, the actions and responses of the individuals
are essential for the outcome.
Nevertheless, there is support for peer instruction having a positive impact on learning, as
presented in section 3.1.4. This indicates that the first lesson might have incorporated a
more efficient teaching strategy than the second, which only had the element of question-
driven instruction. Furthermore, the modality of the lesson might have played a part –
although no research on remote learning has been reviewed in this study, the nature of the
setting can be assumed to have an impact. From a socio-cultural point of view, the obvious
lack of interpersonal communication between the students is a clear disadvantage.
Furthermore, the results themselves may have been affected by the modality of the test. In
lesson 1, the students got the test on paper. The graphs were printed clearly and some of the
students used them to draw and make notes as they solved the problem. In the second
lesson, the students could only watch the graph on their screens, where the graphs had a
smaller size and lower resolution. The intention in the original design of the study was to
examine the content in the same way, to avoid this particular issue.
The use of an audience response system for facilitating learning depended on the modality.
There are a number of relevant characteristics. Both the analogue (show of fingers) and the
digital audience response system were reported to be easy to use. Show of fingers was
deemed flexible and quick – there was not necessarily a need for pre-writing questions and
raising hands usually went fast. However, coming up with questions on the go was not
common practice, as the teachers in the pre-study and the main study agreed that creating
pedagogically sound problems required an effort. This notion recurs throughout the
research and is advocated for by Mazur (2014) when it comes to fostering conceptual
understanding in physics.
The ease of use of the web-based audience response system was explained in a similar way as
the convenience of show of fingers – the students always have their phones and it is easy to
ask them to vote on a question. This raises the question whether the different modalities are
interchangeable from a simplicity aspect. It is important to bear in mind that a web-based
audience response system such as Mentimeter has some technical requirements. These
requirements were easy to meet in a Swedish high-school environment, but in a global
context, there are still many areas where lack of proper IT infrastructure is a problem. On
the other hand, smartphones can be more accessible and affordable than hardware-based
audience response systems.
Once set up, Mentimeter grants access to a broad range of features, such as the possibility to
collect open-ended responses simultaneously. It also comes with the option to show the
responses in a clear way, thus giving the instructor a more accurate overview of the state of the class. As
one of the informants pointed out, it could be helpful for the students to see each other’s
responses. This way, the feedback does not necessarily have to come from the teacher. The
zone of proximal development is defined as the distance between what a person can learn on
their own and what they can achieve with help from a more knowledgeable individual. This
person does not necessarily have to be the teacher, as the students may be able to discuss
something with their peers and sort out any issues among themselves. Furthermore, the
teacher may happen to provide an explanation on a level that is not comprehensible for the
students at all. Display of answers can therefore be seen as a way to put the students in their
zones of proximal development, as they are exposed to a variety of input and can thus find
plausible ways of understanding the material.
It was observed in lesson 1 that not all students were interested in the voting as some
refrained from participating or did not raise their hand clearly, which can be interpreted as a
sign of reluctance or lack of time to make up their minds. In lesson 2, the number of
registered votes was shown on the slides and the teacher insisted that everyone should vote
before closing the poll, which resulted in a constant level of participation. The conclusion is
that a digital audience response system creates a better overview and provides the students
with a higher sense of accountability. It also helps the teacher to ensure that the students
can take the time they need for consideration, although this can prove to be time-inefficient
and the last respondents can still just hurry through the voting.
As for the live display of votes, there can be several effects that occur. As mentioned above, it
can on one hand serve as a push in the right direction when the students consider the
problem. On the other hand, the responses may converge around the wrong option or the
students may quit thinking for themselves and just go with the majority. With show of
fingers, the voting was done completely simultaneously as the teacher counted down. This
may have resulted in more honest votes, but it was also observed that some students in the
back of the room instantly changed their responses when they saw how their peers voted. In
lesson 2, the teacher realised they could disable the live display of responses in
Mentimeter, thus avoiding the priming effect of the early votes.
The informants discussed how an audience response system was beneficial for social effects
such as motivation and engagement that they deemed important for a fruitful learning
process. This reasoning was based on their experience of audience response system as they
usually got high response rates and could therefore conclude that the students took an active
part in the instruction. This notion is in line with the belief that active participation is
necessary to consolidate knowledge on a deeper level. Previous research has also established
that audience response systems have positive effects on participation and interest and that
they are often used for motivational purposes.
The evidence for whether an increase of positive learning outcomes can be attributed to an
audience response system itself is weak, but there is also consensus that using a tool should
be a conscious decision on the part of the instructor. Hence, a teacher may deem it
valuable to support student engagement with an audience response system and thus
indirectly facilitate the instruction. The conclusion is that an audience response system can
be part of a pedagogical strategy, but not stand alone.
In the case of the remote teaching, the mere presence of an interactive tool seemed to play
an important role. The teacher reported that they missed the verbal and non-verbal
communication that normally goes on in the classroom. In a physical setting, the teacher
gets some response from the students through their facial expressions and body language. It
is difficult to get an overview in a conference call, which makes it even more important to
receive direct feedback from the participants.
The audience response system was used to elicit answers from the students and the teachers
reported that they used the information to decide what to do next. In a way, the answers
served as a stimulus for the teacher, prompting them to give relevant response back to the
students. Some of the observed strategies were repetition, rephrasing, initiating peer
discussion, asking a student to explain or elaborate their thoughts, giving an example,
confirmation and praise and proceeding to the next part. These are all examples of a
teacher using an audience response system for formative assessment and for adjusting their
instruction. At one point, the teacher still went with their original intention despite what the
feedback from the class indicated, because they still deemed it necessary to put extra emphasis
on the concept in question. Making a decision like this after the inquiry is always a
possibility, as the teacher always weighs their professional experience into their judgement.
Formative assessment does not only cover the role of the teacher, it is also an action
undertaken by the students to provide feedback for modification of instruction. In the case
of an audience response system, it is therefore completely necessary that the students take
an active part in voting, not only for the sake of the teacher, but also to gain important
insights about themselves and figure out what they can do to improve their own learning.
This strengthens the importance of the stimulus to vote, as well as reinforcement and the
response provided by display of answers.
Due to the remote setting, it is not possible to comment on how the students acted upon
seeing the tally of votes in Mentimeter. However, some reactions were observed in lesson 1
after the show of fingers, as the students looked around and sometimes changed their vote
or conferred with the person next to them. The instant change of vote may be due to a
sudden revelation thanks to seeing others vote or out of pure conformity – this was not
further explored in this study. Assessing the cognitive processes is difficult, but the
behaviour clearly changed. Although we do not know why these changes occurred, it can be
assumed that the audience response system had something to do with them. The change of
votes after a
discussion might be thanks to the opportunity to articulate one’s thoughts, as dialogue
(speech) is one of the most important artefacts for mediating knowledge, and the interaction
with a peer puts the student in the zone of proximal development.
Audience response systems were shown to prompt the teachers to incorporate interactive
elements, lowering the threshold for introducing conceptual questions. The teachers
themselves reported that they found the tool helpful and inspiring. The findings in this study
support the standpoint that the tool serves as a catalyst for interactive teaching. There is a
risk that the instruction becomes completely technology-driven, which makes it important
that the teacher is mindful of this pitfall. Furthermore, there are always limitations to a tool
– as seen in the show of fingers limiting the number of possible responses, character limits
affecting the wording on the slides, and the inability to display both an image and the options
at the same time. This is why a pedagogical mindset is required in any activity in the
classroom and the research agrees that the learning goals should be put first. The
informants in this study agreed with this and reported that they use the audience response
system when they have a clear purpose for doing so. At the same time, the teachers
acknowledged that a web-based audience response system offered possibilities that they had
not thought of before, making the technology an enabler rather than a limitation. Examples
of this are reducing the domination of a few vocal students and using open-ended questions
to gauge the students’ understanding on a deeper level.
7 Conclusions
This thesis aimed to address the topics of how audience response systems can be used to
facilitate learning in physics and the effects of using them in a learning environment. Several
examples of the use of audience response systems in education were found over the course of
this study, and the main findings concern designing and evaluating physics instruction in
high school. Below is a summary of the findings and the answers to the research questions of
this study.
1. How can an audience response system be used to facilitate learning in a Swedish high-school physics class?
• Audience response systems can indirectly support active participation and deep learning, as teachers who come across such tools are inclined to use them for more interactive instruction. Both interviews and observations showed that the audience response system inspired teachers to reconsider the methods they use. This implies that the tool can contribute to the development of innovative ideas and supports the previous conclusion about the importance of the user.
• As for using an audience response system to facilitate learning, the tool proved efficient for simultaneously assessing the understanding of the whole class. This helped the teacher make more informed choices about their instruction – i.e. formative assessment. A clear display of answers by a software-based audience response system helps the presenter get more accurate feedback from the audience. Several examples emerged of how the teacher’s subsequent actions were affected by the response from the audience: repetition, initiation of peer discussion, asking a student to explain or elaborate their thoughts, confirmation and praise, and proceeding to the next concept.
• The study demonstrated how teachers intentionally used an audience response system to create a productive learning environment. Among the purposes were boosting students’ self-confidence, increasing the engagement of participants who would not otherwise speak out, unveiling common misconceptions in the audience, reducing the influence of a small but vocal crowd, increasing focus and boosting energy in the room. The anonymity of software-based audience response systems was reported as a particularly important feature for these purposes.
• An audience response system proved to be useful in the process of peer instruction, thus supporting an evidence-based method for teaching physics. The interface of a software-based audience response system makes it easy to administer conceptual problems and prompt discussion among students, as seen in case study B. The creation of the questions itself, however, depends largely on pedagogical considerations and knowledge of the subject. These deliberations also constitute opportunities for the teacher to exercise and strengthen their pedagogical content knowledge.
2. What are the effects of using an audience response system in a learning
environment?
• The results imply that the effects of using an audience response system depend on why and how it is used, which emphasises the role of the user and suggests that an effective audience response system should be developed with some of the following aspects in mind: ease of use, variety of features, inspiration and flexibility in supporting a broad range of use cases.
• The display of student responses by an audience response system helps to initiate conversations in the audience and shapes the social context for interaction between participants. Software-based audience response systems excel in this respect, because of the quick and comprehensive overview that is shown to the audience.
• Although this study originally aimed to evaluate the impact of audience response system modality on learning outcomes, it was impossible to assess this properly. When the school switched to remote teaching due to the pandemic, other factors could not be kept constant so as to isolate the effects of using the audience response system.
7.1 Future research
This study shows a number of applications of audience response systems in both physical
classroom and remote settings. The changed conditions for the study led to an unexpected
focus on remote teaching, which turned out to be of great interest because of the rapid
expansion of distance education due to the COVID-19 pandemic. While research on
e-learning and massive open online courses can be helpful, the extensive closures of schools
globally exposed an urgent need for pedagogical recommendations for remote teaching
across all educational stages.
This study concludes that audience response systems can be used to support engagement in
learning – an issue that many teachers, children and families worldwide struggle with given
the circumstances of distance education. It would be fruitful to further study remote
learning that implements software-based audience response systems. Furthermore, as
previous research points out a lack of studies comparing otherwise equivalent instruction
with and without an audience response system, it remains interesting to recreate the study
design originally proposed in this thesis.
As one of the main conclusions of this thesis is the importance of the role of the instructor
and how and why they use an audience response system, it is of great interest to further
explore the actual behaviours and motives that prove to be effective. The precise process by
which teachers evaluate student responses on the fly during a lesson, and their subsequent
actions, remain to be unravelled. Finally, as this thesis did not
investigate students’ perception of software-based audience response systems, further work
focusing on their perspective is needed for a more complete understanding of the effects.
References
Andriani, A., Dewi, I., & Sagala, P. N. (2019). Development of blended learning media using the mentimeter application to improve mathematics creative thinking skills. Journal of Physics: Conference Series, 1188(1). https://doi.org/10.1088/1742-6596/1188/1/012112
Beatty, I. D., Leonard, W. J., Gerace, W. J., & Dufresne, R. J. (2006). Question driven instruction: Teaching science (Well) with an audience response system. In Audience Response Systems in Higher Education: Applications and Cases. IGI Global. https://doi.org/10.4018/978-1-59140-947-2.ch007
Beatty, Ian D., Gerace, W. J., Leonard, W. J., & Dufresne, R. J. (2006). Designing effective questions for classroom response system teaching. American Journal of Physics, 74(1), 31–39. https://doi.org/10.1119/1.2121753
Björndal, C. R. P. (2005). Det värderande ögat: observation, utvärdering och utveckling i undervisning och handledning (1st ed.). Liber.
Boscardin, C., & Penuel, W. (2012). Exploring benefits of audience-response systems on learning: A review of the literature. Academic Psychiatry, 36(5), 401–407. https://doi.org/10.1176/appi.ap.10080110
Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE Life Sciences Education, 6(1), 9–20. https://doi.org/10.1187/cbe.06-12-0205
Castillo-Manzano, J. I., Castro-Nuño, M., López-Valpuesta, L., Sanz-Díaz, M. T., & Yñiguez, R. (2016). Measuring the effect of ARS on academic performance: A global meta-analysis. Computers and Education, 96, 109–121. https://doi.org/10.1016/j.compedu.2016.02.007
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). Routledge/Taylor & Francis Group. https://doi.org/10.1080/19415257.2011.643130
Cummings, K., & Roberts, S. G. (2008). A study of peer instruction methods with high school physics students. AIP Conference Proceedings, 1064, 103–106. https://doi.org/10.1063/1.3021227
Denscombe, M. (2008). The Good Research Guide: For Small-Scale Social Research Projects (Vol. 15, Issue 2). https://doi.org/10.7748/nr.15.2.88.s4
Fies, C., & Marshall, J. (2006). Classroom response systems: A review of the literature. Journal of Science Education and Technology, 15(1), 101–109. https://doi.org/10.1007/s10956-006-0360-1
Gordon, T., & Becker, H. (1973). United States Patent (19). 541(19).
Hill, D. L., & Fielden, K. (2017). Use of Mentimeter to promote student engagement and inclusion (Issue December 2017). http://insight.cumbria.ac.uk/id/eprint/3473/
Hill, L. (2019). Resource Review: Mentimeter – A Tool for Actively Engaging Large Lecture Cohorts. In Academy of Management Learning & Education.
Hirsh, Å., & Lindberg, V. (2015). Formativ bedömning på 2000-talet – en översikt av svensk och internationell forskning.
Hunsu, N. J., Adesope, O., & Bayly, D. J. (2015). A meta-analysis of the effects of audience response systems (clicker-based technologies) on cognition and affect. Computers and Education, 94, 102–119. https://doi.org/10.1016/j.compedu.2015.11.013
Jakobsson, A. (2012). Sociokulturella perspektiv på lärande och utveckling: Lärande som begreppsmässig precisering och koordinering. 3, 152–170.
Karpicke, J. D. (2012). Retrieval-Based Learning: Active Retrieval Promotes Meaningful Learning. Current Directions in Psychological Science, 21(3), 157–163. https://doi.org/10.1177/0963721412443552
Kay, R. H., & LeSage, A. (2009). A strategic assessment of audience response systems used in higher education. Australasian Journal of Educational Technology, 25(2), 235–249.
Liu, G., & Fang, N. (2019). Student Misconceptions about Force and Acceleration in Physics and Engineering Mechanics Education. January 2016.
Mayhew, E. (2018). No Longer a Silent Partner: How Mentimeter Can Enhance Teaching and Learning Within Political Science. Journal of Political Science Education, 15(4), 546–551. https://doi.org/10.1080/15512169.2018.1538882
Mazur, E. (2014). Peer Instruction: A User’s Manual.
McCarthy, J. P., & Anderson, L. (2000). Active learning techniques versus traditional teaching styles: Two experiments from history and political science. Innovative Higher Education, 24(4), 279–294. https://doi.org/10.1023/b:ihie.0000047415.48495.05
McKagan, S. (2020). Where can I find good questions to use with clickers or Peer Instruction? https://www.physport.org/recommendations/Entry.cfm?ID=93637
Mentimeter. (n.d.). Mentimeter for Education, Schools and Universities. Retrieved August 1, 2020, from https://www.mentimeter.com/
Piaget, J. (2008). Barnets själsliga utveckling (2nd ed.). Norstedts akademiska förlag.
Rudolph, J. (2018). A brief review of Mentimeter – A student response system. In Journal of Applied Learning & Teaching (Vol. 1, Issue 1).
Simmons, W., & Marquis, J. (2010). United States Patent: 5861366. New York, 2(12), 1–29.
Skinner, B. F. (2008). Undervisningsteknologi (2nd ed.). Norstedts Akademiska Förlag.
Skolverket. (n.d.). Ämne - Fysik. Retrieved July 26, 2020, from https://www.skolverket.se/undervisning/gymnasieskolan/laroplan-program-och-amnen-i-gymnasieskolan/gymnasieprogrammen/amne?url=1530314731%2Fsyllabuscw%2Fjsp%2Fsubject.htm%3FsubjectCode%3DFYS%26lang%3Dsv%26tos%3Dgy%26p%3Dp&sv.url=12.5dfee44715d35a5cdfa92a3
Skoyles, A., & Bloxsidge, E. (2017). Have You Voted? Teaching OSCOLA with Mentimeter. Legal Information Management.
United Nations. (n.d.). Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all. Retrieved August 1, 2020, from https://sdgs.un.org/goals/goal4
Vallely, K. S. A., & Gibson, P. (2018). Engaging students on their devices with Mentimeter. In Compass: Journal of Learning and Teaching (Vol. 11, Issue 2). https://doi.org/10.21100/compass.v11i2.843
Vetenskapsrådet. (2002). Forskningsetiska principer. 1–17. http://www.codex.vr.se/texts/HSFR.pdf
Vygotsky, L. S. (1999). Tänkande och språk (4th ed.). Bokförlaget Daidalos AB.
Xie, C., Wang, M., & Hu, H. (2018). Effects of constructivist and transmission instructional models on mathematics achievement in mainland China: A meta-analysis. Frontiers in Psychology, 9(OCT), 1–18. https://doi.org/10.3389/fpsyg.2018.01923
Appendix B – Guidelines on writing multiple-choice questions
Question-driven instruction
Figure 15: The question cycle for question-driven instruction (Beatty et al., 2006)
Beatty et al. (2006) describe this model in their paper Question driven instruction:
Teaching science (Well) with an audience response system. Question-driven instruction
differs from Mazur’s peer instruction; instead of mini-lectures combined with conceptual
questions, the authors advocate the question cycle in the figure above. The questions
cover not only conceptual understanding but also target cognitive skills and metacognition
about physics. The authors also state that instructors need a better understanding of
audience response system questions and their underlying potential, which they claim is
limited by the design of concept inventory questions.
Every question needs to have an explicit pedagogic purpose.
• Content goal - what piece of the material is covered in the question?
• Process goal - what cognitive skills are important to promote?
• Metacognitive goal - what beliefs about physics does this question reinforce?
The tables below describe four categories of tactics that help to meet the above goals.
Tactics for directing students’ attention
• Removing nonessentials - Avoid distracting features and unnecessary steps. If the question is about conceptual understanding, do not add calculations.
• Compare and contrast - When presenting two things for comparison, attention is drawn to the differences. Create a sequence of questions on the same situation with slight differences.
• Extending the context - Ask familiar questions about an unfamiliar situation by adding or changing some features (e.g. additional forces, angles or curves).
• Reusing familiar question scenarios - Avoid interpretations of the question that require too much effort, so that students can focus on understanding the point instead.
• Oops-go-back - A sequence of two related questions. The first question is a trap for common misconceptions, and the second sheds new light on the problem, making students go back and rethink the previous one.
Tactics for stimulating specific cognitive processes
• Compare and contrast / Extend the context - As described above. Use these tactics to develop corresponding habits of mind.
• Interpret representations - Ask questions that help students become more familiar with different representations, such as graphs and verbal descriptions. Many students are attached to algebraic representations in physics.
• Constrain the solution - Include instructions about which approach to use or avoid, pushing students to seek alternative methods.
• Reveal a better way - Ask a question that students usually solve in a complicated way and demonstrate a more elegant approach during the discussion.
• Strategize only - Ask the students to present an approach to a solution without actually solving the problem.
• Include extraneous information and omit necessary information - These questions help students consider what information is needed to solve a problem.
Tactics for formative use of response data
• Answer choices reveal likely student difficulties - This helps the instructor know whether something needs to be addressed further. Make sure that the spectrum of choices is broad.
• Include “none of the above” - This allows for alternative responses that were not included. Make it the right answer sometimes, so that students can be comfortable with not agreeing with any of the anticipated alternatives.
Tactics for facilitating productive discussion
• Qualitative questions - Qualitative questions are better than quantitative questions for promoting discussion of ideas, concepts and general relationships.
• Analysis and reasoning questions - Ask questions that require decision making rather than calculation or memory recall.
• Multiple defensible answers - Create questions that require unstated assumptions, or questions where the answer depends on the interpretation.
• Catching misconceptions - Design questions that deliberately catch common misconceptions to make students aware of them.
• Emphasize reasoning over correctness - When moderating the discussion, make sure that it is focused on the articulation of ideas rather than on the correct answer.
Checklist for writing multiple-choice questions
The checklist is based on the revised taxonomy of multiple-choice item-writing guidelines by Haladyna et al. (2002).
Content concerns
1. Every question should reflect specific content and a specific mental behaviour
2. Focus on important concepts and avoid trivial content
3. Use novel material and language to test higher-level learning and avoid memory recall
4. Avoid overly specific and overly general content
5. Keep the vocabulary simple for the group
Style concerns
1. Use correct grammar, punctuation, capitalization and spelling
2. Minimize the amount of reading in each item
Writing the question
1. Ensure that the directions in the question are very clear.
2. Include the central idea in the question instead of the options.
3. Avoid excessive wordiness.
4. Word the question positively; avoid negatives such as NOT or EXCEPT.
Writing the options
1. Develop as many choices as you need, but research suggests three is adequate.
2. Vary the location of the right answer according to the number of choices.
3. Place choices in logical or numerical order.
4. Keep choices homogeneous in content and grammatical structure.
5. Keep the length of choices about equal.
6. Avoid “all of the above”.
7. Phrase choices positively; avoid negatives such as NOT.
8. Avoid giving clues to the right answer, such as
a. Specific determiners including always, never, completely, and absolutely.
b. Clang associations, choices identical to or resembling words in the stem.
c. Grammatical inconsistencies that cue the test-taker to the correct choice.
d. Conspicuous correct choice.
e. Pairs or triplets of options that clue the test-taker to the correct choice.
f. Blatantly absurd, ridiculous options.
9. Make all distractors plausible.
10. Use typical errors of students to write your distractors.
11. Use humour if it is compatible with the teacher and the learning environment.
Best practices for using audience response systems
The list comes from Martyn’s (2007, p. 73) compilation of recommendations on
implementing clickers in the classroom.
1. Keep slides short to optimize legibility.
2. Keep the number of answer options to five.
3. Do not make the questions overly complex.
4. Keep voting straightforward and simple.
5. Allow enough time for students to answer questions. Some general guidelines:
● Classes of fewer than 30 students: 15–20 seconds per question
● Classes of 30 to 100 students: 30 seconds per question
● Classes of more than 100 students: 1 minute per question
6. Allow time for discussion between questions.
7. Encourage active discussion with the audience.
8. Do not ask too many questions; use them for the key points.
9. Position the questions at periodic intervals throughout the presentation.
10. Include an “answer now” prompt to differentiate between lecture slides and
interactive polling slides.
11. Use a “correct answer” indicator to visually identify the appropriate answer.
12. Include a “response grid” so that students know their responses have registered.
13. Increase responsiveness by using a “countdown timer” that will close polling after a
set amount of time.
14. Test the system in the proposed location to identify technical issues (lighting, signal
interference, etc.).
15. On the actual day of the session, allow time to start the ARS.
16. Rehearse actual presentation to make sure it will run smoothly.
17. Provide clear instructions on how to use the ARS to the audience.
18. Do not overuse the system or it will lose its “engagement” potential.
Links to physics concept inventories
“Where can I find good questions to use with clickers or Peer Instruction?” from Physport (2020).
https://www.physport.org/recommendations/Entry.cfm?ID=93637
Appendix C – interview questions
Case study A
The interview was in Swedish; the questions are therefore translated.
• How long have you been using an audience response system in your instruction?
• What is the purpose of using an audience response system?
• How do you formulate the questions in your presentation?
• Could you please describe how you act when a question is posed?
• How do the student responses affect your next actions in class?
• What immediate effects can you see among the students when you pose a question?
• Can you see any long-term effects?
• Can you please name the three biggest advantages of using an audience response
system?
• What drawbacks have you experienced using the audience response system?
• Is there any functionality that you miss?
Case study B
• How long have you been using an audience response system in your instruction?
• What is the purpose of using Mentimeter?
• How did you formulate the Mentimeter questions in the presentation?
• What immediate feedback did you get from the students’ answers?
• How did that feedback affect your next actions in class?
• Can you please name the three biggest advantages of using Mentimeter?
• What drawbacks have you experienced with Mentimeter so far?
• Is there any functionality that you miss?
Main study
The interview was in Swedish; the questions are therefore translated.
Planning
• If you compare the classroom lesson and the digital lesson: How did you plan each
lesson? Can you name any differences in the planning itself?
• In lesson 1, you used finger pointing as the response system. Why did you choose
this method? What purposes does it serve?
• Which features of Mentimeter did you use when planning lesson 2? Why did you
choose these features? What were your intentions with the interactive slides?
• How did the features of Mentimeter inspire you?
• Can you name any limitation you experienced in Mentimeter?
• Can you name any limitation in planning the classroom lesson?
Implementation
• What are the biggest challenges of remote teaching?
• Could you describe any challenges in switching lesson 2 to remote at such short
notice?
• How do you create interaction in your other remote lessons?
• How did you feel about teaching remotely with Mentimeter?
• How was the process of asking questions and receiving responses from the students
through Mentimeter?
• How did you use the feedback from the class in the next part of the lesson? Do you
have any examples of decisions you could make using the information you received?
Reflection
• Can you name three things that went well in the remote lesson?
• Can you name three things that you would like to improve in the remote lesson?
• What would you do differently if you were to teach another remote lesson with
Mentimeter now that you have this experience?