Types of research I
Historical
Generates descriptions, and sometimes tentative
explanations, of conditions, situations, and events that occurred in the past. For example, a study that
documents the evolution of teacher training programs
since the turn of the century, with the aim of explaining the historical origins of the content and processes of
current programs.
Descriptive
Provides information about conditions,
situations, and events that occur in the present. For
example, a survey of the physical
condition of school buildings in order to establish a descriptive profile of the facilities that exist in a typical
school.
Correlational
Involves the search for relationships between
variables through the use of various measures of
statistical association. For example, an investigation of
the relationship between teachers’ satisfaction with
their job and various factors describing the provision and quality of teacher housing, salaries, leave entitlements,
and the availability of classroom supplies.
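As a minimal sketch of one such measure of statistical association, the Pearson product-moment correlation can be computed directly. The data below are invented for illustration only; a real study would use properly sampled measurements.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: job-satisfaction rating (1-10) and monthly salary (hundreds)
satisfaction = [4, 6, 5, 8, 7, 9, 3, 6]
salary = [20, 25, 22, 30, 28, 33, 18, 26]

r = pearson_r(satisfaction, salary)
print(round(r, 3))  # a value close to +1 indicates a strong positive association
```

A correlation close to +1 or -1 indicates a strong association; a value near 0 indicates little or no linear relationship. Note that a strong correlation alone does not establish causation.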
Types of research II
Causal
Aims to suggest causal linkages between
variables by observing existing phenomena and
then searching back through available data in
order to try to identify plausible causal
relationships. For example, a study of
factors related to student ‘drop out’ from secondary school using data obtained from school records over
the past decade.
Experimental
Is used in settings where variables defining one or
more ‘causes’ can be manipulated
in a systematic fashion in order to discern ‘effects’ on
other variables. For example, an investigation of the
effectiveness of two new textbooks using
random assignment of teachers
and students to three groups – two groups for each of the
new textbooks, and one group as a ‘control’ group to
use the existing textbook.
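Random assignment of the kind described above can be sketched in a few lines of Python. The group names and sample size are illustrative, not taken from any actual study.

```python
import random

def assign_to_groups(participants, groups, seed=None):
    """Randomly assign participants to groups of (near-)equal size."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, p in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(p)
    return assignment

students = [f"student_{i:02d}" for i in range(30)]
groups = ["textbook_A", "textbook_B", "control"]

assignment = assign_to_groups(students, groups, seed=42)
for g, members in assignment.items():
    print(g, len(members))  # 10 students in each of the three groups
```

Because each student has an equal chance of landing in any group, pre-existing differences between students are (on average) spread evenly across the three groups, which is what allows observed differences in outcomes to be attributed to the textbooks.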
Case Study I
Generally refers to two distinct research approaches. The first consists
of an in-depth study of a particular student, classroom, or school with the
aim of producing a nuanced description of the pervading cultural setting that affects education, and an account of the interactions that take place between students and other
relevant persons. For example, an in-depth exploration of the patterns of
friendship between students in a single class.
Types of research III
Case Study II
The second approach to Case Study Research involves the application of quantitative research methods to non-probability samples, which yield results that are not necessarily
generalisable to wider populations. For example, a
survey of the reading achievements of the students in one rural region of a particular
country.
Ethnographic
usually consists of a description of events that occur within the life of a group – with particular reference to the interaction of individuals in the context of the sociocultural norms, rituals, and
beliefs shared by the group. The researcher generally participates in some part of the normal life of the group and uses what he or she learns
from this participation to understand the interactions between group members: For
example, a detailed account of the daily tasks and interactions encountered by a school principal
using observations gathered by a researcher who is placed in the position of ‘Principal’s Assistant’ in order to become fully involved in the daily life
of the school.
A spectrum of paradigms
Predict – Positivism
Understand – Interpretative / Ethnographic
Emancipate – Criticality
Deconstruct – Radical Change
Across this spectrum, evidence may come from the scientific method, from wider data, from our own practice, or from interacting with people.
You know how people are bound by the laws of gravity? Science has tested and shown that gravity is a natural force that works on human bodies. Positivism looks at sociology the same way that scientists look at gravity; it says that there are certain natural forces that work on societies, and they should be studied in a scientific manner. So, you're an educationalist and you want to study why poor people are not doing so well in education. As a positivist, you might look for an external force that causes this to happen.
Is it because poorer people have less access to books and other resources in their households? This is your hypothesis you want to test. You want to test this hypothesis and will probably look to undertake a controlled, experimental approach - possibly using tools such as Randomised Controlled Trials (RCTs).
This is the essence of positivist educational research; you develop a hypothesis about the forces at work in education and test it using scientific tools (like surveys, experiments and statistical analysis). Positivist research is most commonly aligned with quantitative methods of data collection and analysis.
Positivism studies the rules that govern behaviour in society through a scientific lens. Sometimes referred to as 'scientific method' or 'science research', it is based on the rationalistic, empiricist philosophy that originated with Aristotle, Francis Bacon, John Locke, Auguste Comte, and Immanuel Kant. If you are a positivist educationalist, you are interested in the science of education. You want to apply the scientific method and scientific tools to your studies to find the natural laws of human behaviour within society.
Interpretivist/constructivist approaches to research have the intention of understanding "the world of human experience" (Cohen & Manion, 1994), suggesting that "reality is socially constructed" (Mertens, 2005, p.12). The interpretivist/ constructivist researcher tends to rely upon the "participants' views of the situation being studied" (Creswell, 2003, p.8) and recognises the impact on the research of their own background and experiences.
Constructivists do not generally begin with a theory (as with positivist research); rather, they "generate or inductively develop a theory or pattern of meanings" (Creswell, 2003, p.9) throughout the research process. The constructivist researcher is most likely to rely on qualitative data collection methods (surveys, interviews, questionnaires, discussions, observations) and analysis, or on a combination of qualitative and quantitative methods (mixed methods). So, in our question about poorer people, access to resources, and performance, we would not start off with any hypothesis. Instead, we would observe poorer pupils in school and at home, develop questionnaires about their lives and lifestyles (around access to resources), and interview them (and possibly their parents) about the resources they have; the focus of our question might well develop and change as a result of the questions we are asking. Quantitative data may be utilised in a way which supports or expands upon qualitative data and effectively deepens the description.
The interpretivist/constructivist paradigm grew out of the philosophy of Edmund Husserl's phenomenology and Wilhelm Dilthey's and other German philosophers' study of interpretive understanding called hermeneutics.
Criticality or Transformative Research according to Mertens (2005) arose during the 1980s and 1990s partially due to dissatisfaction with the existing and dominant research paradigms and practices but also because of a realisation that much sociological, educational and psychological theory which lay behind the dominant paradigms "had been developed from the white, able-bodied male perspective and was based on the study of male subjects" (Mertens, 2005 p.17).
Critical researchers felt that the interpretivist/constructivist approach to research did not adequately address issues of social justice and marginalised peoples (Creswell, 2003, p.9). Transformative researchers "believe that inquiry needs to be intertwined with politics and a political agenda" (Creswell, 2003) and that research should contain an action agenda for reform "that may change the lives of the participants, the institutions in which individuals work or live, and the researcher's life" (Creswell, 2003).
Critical researchers may utilise qualitative and quantitative data collection and analysis methods in much the same way as the interpretivist/constructivists. However, a mixed methods approach provides the transformative researcher with a structure for the development of "more complete and full portraits of our social world through the use of multiple perspectives and lenses" (Somekh & Lewin, 2005, p.275), allowing for an understanding of "greater diversity of values, stances and positions" (Somekh & Lewin, 2005). Going back to our question about poorer pupils and access to resources, we might start to question whether there is a political or policy underpinning which is impacting on the question. Who are we defining as poor? What methods are we using to determine success? What kinds of resources are dominant, and why are these considered "better" or "worse"?
Bias is defined as any tendency which prevents unprejudiced consideration of a question. In research, bias occurs when “systematic error [is] introduced into sampling or testing by selecting or encouraging one outcome or answer over others” (Merriam-Webster).
Bias can occur at any phase of research, including study design or data collection, as well as in the process of data analysis and publication. Bias is not a dichotomous variable. Interpretation of bias cannot be limited to a simple inquisition: is bias present or not? Instead, reviewers of the literature must consider the degree to which bias was prevented by proper study design and implementation.
As some degree of bias is nearly always present in a published study, readers must also consider how bias might influence a study's conclusions.
Objectivity cannot be equated with mental blankness; rather, objectivity resides in recognizing your preferences and then subjecting them to especially harsh scrutiny – and also in a willingness to revise or abandon your theories when the tests fail (as they usually do).
Gould (2000: 104-5)
http://www.mmiweb.org.uk/hull/site/pt/evidence_bias.html
Reliability refers to the repeatability of findings. If the study were to be done a second time, would it yield the same results? If so, the data are reliable. If more than one person is
observing behavior or some event, all observers should agree on what is being recorded in order to claim that the data are reliable.
Reliability also applies to individual measures. When people take a vocabulary test two times,
their scores on the two occasions should be very similar. If so, the test can then be described
as reliable. To be reliable, an inventory measuring self-esteem should give the same
result if given twice to the same person within a short period of time. IQ tests should not give different results over time (as intelligence is
assumed to be a stable characteristic).
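Inter-observer reliability of the sort described above is often summarised as simple percent agreement. A minimal sketch, with invented observation codes:

```python
def percent_agreement(ratings_a, ratings_b):
    """Proportion of observations on which two observers recorded the same code."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical codes recorded by two observers watching the same six events
observer_1 = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"]
observer_2 = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task"]

print(round(percent_agreement(observer_1, observer_2), 2))  # 5 of 6 codes match: 0.83
```

High agreement between independent observers supports the claim that the data are reliable; in practice, researchers often also report chance-corrected measures such as Cohen's kappa, since some agreement occurs by chance alone.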
Validity refers to the credibility or believability of the research. Are the findings genuine? Is hand strength a valid
measure of intelligence? Almost certainly the answer is "No, it is not." Is a score on an entrance examination a valid
predictor of success during the first year of college? The answer depends on the amount of research support for
such a relationship. There are two aspects of validity:
Internal validity - the instruments or procedures used in the research measured what they were supposed to
measure. Example: As part of a stress experiment, people are shown photos of war atrocities. After the study, they are asked how the pictures made them feel, and they respond
that the pictures were very upsetting. In this study, the photos have good internal validity as stress producers.
External validity - the results can be generalised beyond the immediate study. In order to have external validity, the claim that spaced study (studying in several sessions ahead of time) is better than cramming for exams should apply to more than one subject (e.g., to math as well as history). It
should also apply to people beyond the sample in the study.
Methodology
The methods section describes actions to be taken to investigate a research
problem and the rationale for the application of specific procedures or
techniques used to identify, select, process, and analyze information applied
to understanding the problem, thereby, allowing the reader to critically
evaluate a study’s overall validity and reliability. The methodology section of a
research paper answers three main questions:
(1) Why was the particular research approach taken?
(2) How was the data collected or generated?
(3) How was the data analysed (to answer the research questions)?
Definitions
Paradigm: a world view underlying the theories and methodology of a particular scientific subject - an overarching idea or worldview.
Axiom: a statement or proposition which is regarded as being established, accepted, or self-evidently true, e.g. "the axiom that sport builds character".
Methodology: the reasons and justification for an approach to the research and the ways in which the data will be collected
Methods: The particular instruments for data collection
Quantitative: pertaining to those things which are measurable
Qualitative: pertaining to those things which are non-measurable