CHAPTER 1: Experimental Psychology and the Scientific Method

Psychological science: research about the psychological processes underlying behavior

                Content of science: what we know (facts learned)

                Process of science: systematically gathering data, explanations, etc

 

Methodology: scientific techniques used to collect and evaluate psychological data

 Commonsense psychology:  everyday, nonscientific data gathering that shapes our expectations and beliefs and directs our behavior toward others (don’t ask roommate for a favor when she is in a bad mood)

  -ability to gather data in a systematic and impartial way is constrained due to 1) the sources of psychological information and 2) our inferential strategies

  Nonscientific Sources of Data

-conclusions drawn have limited accuracy and usefulness due to biases

-sources such as friends, family, etc are not always good for obtaining valid information about behavior; because the information is offered by people we like and trust, it is often accepted without question

-more likely to accept information from someone who is attractive, popular, etc

 Nonscientific Inference

                -accurate predictions are increased with length of acquaintanceship

                -can lead to overestimating behaviors

                -traits are more useful when predicting how someone will behave over the long term

                -situations are better predictors of momentary behaviors

-overconfidence bias: compounding inferential shortcomings; predictions feel more correct than they actually are

-these biases are the brain's way of coping with massive amounts of information

 Scientific Mentality

                -behavior must follow a natural order so that it can be predicted; Alfred North Whitehead

-determinism: the belief that there are specifiable causes for the way people behave that can be discovered through research

 Gathering Data

                -empirical data: data that are observable or experienced

                -empirical data collected in a systematic way is ideal; still not guaranteed to be correct

 General Principles

                -laws: principles that have the generality to apply to all situations

                -theories: advanced understanding but not enough information to be declared a law

 


Good thinking: collecting and interpreting data systematically and objectively with no personal biases or beliefs; includes being open to new ideas even when they contradict prior beliefs

-parsimony: “Occam’s razor”; entities should not be multiplied unless necessary; if two explanations are equally believable, the simpler one is preferred

 Self-Correction

-“Weight-of-evidence” approach: the more evidence that accumulates to support a particular explanation or theory, the more confidence we have that the theory is correct

-theories are best tested through falsification instead of verification; challenging existing explanations rather than trying to prove them

 Replication: repeat procedures multiple times to verify results; multiple researchers should verify the experiment

 4 Major Objectives of research conducted in psychology:

1)      Description: initial step toward understanding; a systematic and unbiased account of the observed characteristics of behaviors; good descriptions allow greater knowledge of behaviors

2)      Prediction: the capability of knowing in advance when certain behaviors would be expected to occur because we have identified other conditions with which the behaviors are linked

3)      Explanation: understanding what causes something to occur

4)      Control: the application of what has been learned about behavior; once the knowledge about a behavior is learned, it is possible to use that knowledge to effect change or improve behavior

a.       Applied research: research designed to solve real-world problems

b.      Basic research: research designed to test theories or to explain psychological phenomena in humans and animals

 Observation: systematic recording of events

 Measurement: assignment of numerical values to objects or events or their characteristics according to conventional rules; physical dimensions, intelligence tests, etc

Experimentation: process undertaken to test a prediction that particular behavioral events will occur reliably in certain, specifiable situations

                -testable: predictions must be testable to perform an experiment

                                -must have procedures for manipulating the setting

                                -the predicted outcome must be observable

 

Antecedent conditions: the circumstances that come before the event or behavior that we want to explain

 Treatments: set of antecedent conditions

 Psychology experiment: controlled procedure in which at least two different treatment conditions are applied to subjects

 Cause and effect relationship: the greatest value of a psych experiment; the relationship between antecedent conditions and the resulting behavior

Research Ethics: research must consider safety and welfare of all animal and human subjects

            Three basic ethical principles established after WWII: Respect for persons (every person has the right to make their own decisions), Beneficence (obligation to minimize risk of harm and maximize possible benefits), and Justice (fairness in both the burdens and benefits of research)


            IRB: institutional review board; a review committee that evaluates whether proposed studies are ethical before they are conducted

-at risk: more likely to be harmed in some way by participating in research; the IRB evaluates whether risks are outweighed by benefits; called the risk/benefit analysis

-informed consent: safeguards the rights of individuals; subjects sign a consent form agreeing to participate after being fully informed about the nature of the study; consent must be given freely, subjects must be free to drop out at any time, researchers have to give a full explanation of procedures, risks and benefits of the experiment must be made clear, data must be kept confidential, and subjects may not be asked to release the researchers from liability or waive their legal rights in the case of negligence

-consent should be in writing

Minimal risk: risk that is no greater in probability and severity than that ordinarily encountered in daily life or during routine physical and psychological exams

Debriefing: explaining the true nature of the study after deceiving the subjects

Fraud: data falsification

    -plagiarism

Animal welfare: humane care of animals

    Institutional Animal Care and Use Committee (IACUC): IRB for animals

    animal rights: all sensate species (those who feel pain) have equal rights

CHAPTER 3: Alternatives to Experimentation: nonexperimental designs

Nonexperimental: study behaviors in natural settings, explore unique or rare occurrences, or sample personal information

internal validity: degree to which a research design allows us to make causal statements

external validity: generalizability or applicability to people and situations outside the research setting

research can be described along two major dimensions:

1) the degree of manipulation of antecedent conditions; varies from low to high (from letting whatever happens happen naturally to controlling it); experiments → high manipulation, nonexperiments → low manipulation

2) the degree of imposition of units; the extent to which the researcher limits the responses a subject may contribute to the data (ex: watching whatever teens do, low imposition; counting how often teens listen to rap, high imposition)

5 nonexperimental approaches:

 1. phenomenology: the description of an individual's immediate experience; instead of looking at behaviors external to us, beginning with personal experience as a source of data (no constraints)

    -Purkinje effect: noticed that during twilight reds seem black but blues stay within their hues → eventually led to understanding of sensitivity to colors of different wavelengths

-the process of observing may differ from person to person, making replication hard (ex: if Purkinje was colorblind his perception of the sunset would be different from everyone else’s)

-describes behavior but does not explain

  2. case study: descriptive record of a single individual's experiences or behaviors or both, kept by an outside observer

    -often used in clinical psychology

    -five major purposes of a case study:

        1) they are a source of inferences, hypotheses, and theories; ex: by watching kids, researchers made descriptions of normal development

        2) they are a source for developing therapy techniques

        3) they allow the study of rare phenomena

        4) they provide exceptions, or counterinstances, to accepted ideas, theories, or practices

        5) they have persuasive and motivational value


deviant case analysis: an extension of the evaluative case study; deviant and normal individuals are compared for significant differences (ex: looking at the development of normal and schizophrenic kids → schizophrenic kids have differences in autonomic systems and this discovery may help figure out who will become schizophrenic)

Disadvantages: working with few subjects, people evaluated may not be representative of general population, cannot inspect an individual at ALL times, & subjects may neglect to mention certain information

retrospective data: data collected in the present that are based on recollections of past events; a severe problem in case studies because case studies rely heavily on such data

 3. Field studies: nonexperimental approaches used in the field or in real-life settings; degree of constraint on responses varies from study to study

    -naturalistic observation: the technique of observing behaviors as they occur spontaneously in natural settings; few constraints; researchers stay out of the way so behaviors are not altered and settings are kept as natural as possible

    -systematic observation: the researcher uses a prearranged strategy for recording observations in which each observation is recorded using specific rules or guidelines so that observations are more objective; they would give the same results to different researchers

 -Reactivity: the tendency of subjects to alter their behavior or responses when they are aware of an observer's presence

-unobtrusive measures: behavioral indicators that can be observed without the subject's knowledge; ex: wear and tear on your textbook can indicate how long you study

-Participant-observer study: special kind of field observation; the researcher actually becomes part of the group being studied; may be problematic if the observer gets attached in any way

4. Archival study: a descriptive method in which already existing records are reexamined for a new purpose.

5. Qualitative research: relies on words rather than numbers; relies on self-reports, personal narratives, and expression of ideas; study of phenomena that are contextual (they cannot be understood without the context that they appear in)

-paradigm: set of attitudes, values, beliefs, methods, and procedures that are generally accepted within a particular discipline at a certain point in time

empirical phenomenology: "contemporary phenomenology"; might rely on the researcher's own experiences or on experimental data provided by other sources; relies on one or more of the following sources of data:

    -the researcher's self-reflection on experiences relevant to the phenomenon of interest

    -participants' oral or written descriptions of their experiences of the phenomenon

    -accounts of the phenomenon obtained from literature, poetry, visual art, TV, theatre, and previous phenomenology (and other) research

CHAPTER 4: Alternatives to Experimentation: Surveys and Interviews

Survey research: useful way of obtaining information about people’s opinions, attitudes, preferences, and behaviors simply by asking; examples: telephone surveys, election polls, television ratings, and customer satisfaction surveys

-gather data about experiences, feelings, thoughts, and motives that are hard to observe directly

-useful for collecting data about sensitive subjects because surveys can be given anonymously so people will answer more honestly

-useful for making inferences about behavior but they do not allow for testing hypotheses about causal relationships directly

-used in conjunction with other research designs


-gather large amounts of data efficiently

-low in manipulation; range from low to high in imposition of units

-responses can be limited (yes or no questions) or free response

Two most common types of surveys:

            -Written questionnaires: handed out or sent through mail

            -Interviews: face-to-face or on the phone; in person interviews can be individual or group

*generalizability of surveys depends on how subjects were selected

Constructing Surveys

            -Step 1: map out your research objectives, making them as specific as possible; (ex: objective is to measure the attitudes of psych students toward animal research in psychology; ask specific questions about things like animal rights, animal welfare, benefits to humanity, etc); to get ideas for objectives, look at previous research

            -Step 2: design the survey items; decide how you are going to address the imposition of units (do you want long, free responses or a limited number of alternatives)

-closed questions (structured questions): ex: do you want to smoke? or rate on a scale of 1-10; must be answered by one of a limited number of alternatives; closed questions are easier to quantify (easier to give a percent or number of how many children chose each of the four possible answers to a question about cartoons)

-open-ended questions (open questions): solicit information about opinions and feelings by asking the question in such a way that the person must respond with more than a yes, no, or 1-10 rating (ex: what are your feelings about airport security); can be used to clarify or expand answers to closed questions (combination questions --> ex: how much time do you spend watching cartoons, less than an hour, 1-2 hours, 2-4 hours, 5+ hours --> followed by why do you watch? What do you think about the characters who hit each other? Etc)

-content analysis: the process of quantifying open question answers; similar to coding behaviors using systematic observation techniques; ex: for the question about characters hitting each other, a content analysis might ask "what kinds of things may cause you to hit someone?" --> divide those responses into categories (someone said something to me, someone looked at me funny, etc)
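A minimal sketch of that kind of coding in Python, with made-up responses and an assumed keyword-based coding scheme (none of this comes from the text; a real content analysis would use the researcher's codebook):

```python
from collections import Counter

# Hypothetical open-ended responses to "What kinds of things may cause you to hit someone?"
responses = [
    "someone said something mean to me",
    "someone looked at me funny",
    "a kid said something about my mom",
    "he looked at me weird",
]

# Assumed coding scheme: assign each response to a category by keyword.
# In real content analysis, the categories and rules come from the researcher's codebook.
def code_response(text):
    if "said" in text:
        return "someone said something to me"
    if "looked" in text:
        return "someone looked at me funny"
    return "other"

counts = Counter(code_response(r) for r in responses)
for category, n in counts.items():
    print(f"{category}: {n}")
```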

                             *Tips for constructing questions: keep it simple and keep people involved

 Double-barreled questions (compound questions): questions that ask for responses about two (or more) different ideas in the same question; should be avoided; ex: Do you like strawberries and ice cream? --> if you like strawberries but not ice cream you could not answer this question

Exhaustive: response choices need to contain all possible options; ex: "what exercise do you do the most? play a sport, walk, or jog" fails if your favorite is yoga and that is not listed

            -you can use “other” as an option but do so only if it would be chosen rarely because then it is harder to interpret results (it would be difficult to interpret answers with the option of "other" because there would be too many different responses); if a question requires 6 or more response options, using an open-ended question would be better

Level of measurement: the kind of scale used to measure a response for a closed question; different statistical tests are required for different levels of measurement; four kinds of scales:

            -nominal: simplest level of measurement; classifies response items into two or more distinct categories (that can be named) on the basis of a common feature; cannot be quantified; ex: true-false test, the answer is only one of those; lowest level of measurement because it provides no information about magnitude (ex: political affiliation --> you belong to one party but no party is better than the other)

            -ordinal scale: rank ordering of response items; magnitude of each value is measured in the form of ranks; ex: ranking presidential candidates; gives a relative order of preference but is not precise (with presidential polls, it tells who is most/least popular but not exactly how popular they are)


            -interval scale: measures the magnitude or quantitative size using measures with equal intervals between the values; no true zero point (no point representing the true absence of the measured quantity); ex: temperature in Fahrenheit; even though the intervals between values are equal, 40 degrees is not twice as hot as 20 degrees because there is no true zero

            -ratio: highest level of measurement; equal intervals between all values and a true zero point; measurements of physical characteristics like height and weight can be measured with ratio scales

*The best type of scale to use will depend on two things: the nature of the variable you are studying and how much measurement precision you desire* (presidential candidate example: you may only want to know the candidate's marital status (nominal) or how many years the candidate has been married (ratio))

*SCALING TECHNIQUES*

-semantic differential: evaluating a variable on a number of dimensions; two adjectives (ex: positive and negative) separated by a scale (usually consisting of 7 blanks)

-Likert: presents positively and negatively worded statements that respondents rate on an agreement scale (strongly agree to strongly disagree)
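As a rough illustration (the items and numbers are assumptions, not from the text), Likert items are often scored by reverse-coding the negatively worded statements before summing, so higher totals always point in the same attitudinal direction:

```python
# Hypothetical 5-point Likert items (1 = strongly disagree ... 5 = strongly agree).
# Negatively worded items are reverse-scored so higher totals mean a more positive attitude.
items = [
    {"text": "Animal research benefits humanity.", "reverse": False},
    {"text": "Animal research should be banned.", "reverse": True},
]

def likert_score(answers, items, scale_max=5):
    total = 0
    for answer, item in zip(answers, items):
        total += (scale_max + 1 - answer) if item["reverse"] else answer
    return total

# A respondent who agrees with the first item (4) and disagrees with the second (2)
# gets 4 + (6 - 2) = 8, a fairly pro-research total.
print(likert_score([4, 2], items))
```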

Continuous dimension: when variables lend themselves to different levels of measurement; traits, attitudes, and preferences are all continuous; ex: trait of sociability can range from very unsociable to very sociable (each person falls somewhere on that dimension)

*when several levels of measurement all "fit" equally well, choose the highest level possible because it provides more information about the response*

Important considerations for Survey items:

            -get subjects involved right away by asking interesting questions

            -the first question should be something that people will not mind answering; should have these characteristics:

                        -relevant to the central topic

                        -easy to answer

                        -interesting

                        -answerable by most respondents

                        -closed format

            -the first few questions should be ones that subjects do not have to think about (no open ended), are able to answer without saying “I don’t know”, and will think are relevant to the topic of the survey

*make sure questions are not value laden --> do not word your questions in ways that would make a positive (or negative) response seem embarrassing or undesirable; ex: "do you believe doctors should be able to kill unborn babies in the first trimester" vs. "do you believe doctors should be able to terminate a pregnancy in the first trimester" --> the first question is difficult to say yes to due to the negative wording

Response styles: tendencies to respond to questions or test items in specific ways, regardless of the content; ex: people differ in response styles, such as willingness to answer, position preferences, and yea-saying or nay-saying

            -willingness to answer: comes into play whenever questions require specific knowledge about facts or issues; when unsure, people leave questions blank or guess which makes results harder to interpret

            -position preference: occurs with multiple choice questions; ex: when in doubt you always choose b; to avoid, vary the arrangement of correct responses (ex: in a survey with questions about attitudes towards abortion, do not always put "pro-choice" as option B)

            -manifest content: the plain meaning of the words that actually appear on the page; ex: have you ever visited another country literally means have you ever visited another country

            -yea-sayers: apt to agree with a question regardless of its manifest content


            -nay-sayers: tend to disagree no matter what they are asked

^^can be avoided by designing questions that force the subject to think more about the answer; ex: "do you agree or disagree that the cost of living has gone up in the last year?" vs. "in your opinion, have prices gone up, gone down, or stayed about the same the past year, or don't you know?" --> building specific content into the options, as in the second question, makes people think harder about their choice

-to avoid yea- and nay-sayers, you can use the unfounded optimism inventory (underline the optimistic answer, which can be yes or no; this forces yea/nay-sayers to choose between the options); ex: I know that everything will be alright: YES NO; I always stand in the slowest line at the bank: YES NO

-once the questions have been designed they need to be pretested

            -context effects: (caught through pretesting); sometimes the position of a question, or where it falls within the question order, can influence how the question is interpreted; likely when two questions are related

            Buffer items: used to separate questions that are similar; questions that are unrelated to both of the related questions

           latent content: the way people interpret what you are trying to ask; subjects may not fully understand

 Collecting Survey Data:

Questionnaires: (if written) instructions should be simple and clear; let subjects fill them out in private or anonymously if possible

Mail surveys: include a cover letter, make sure the questionnaire and return procedure protect anonymity, include a return envelope and stamp; holding a prize drawing or offering compensation for returns can increase return rates; keep track of who does not return questionnaires; send a second survey to people who did not respond (it can increase response rate)

Telephone surveys: most widely used method, may not get completely forthright answers; usually not open ended

Internet surveys

Interviews: one of the best ways to get high-quality survey data; expensive; take twice as long to conduct

    -structured interview: the same questions are asked in the same way each time; provide more usable, quantifiable data

    -unstructured interview: more free flowing; interviewer is free to explore issues as they come up; info may not be usable for statistics

Focus groups: face to face technique used less often for data collection; good for pretesting; groups have similar characteristics (all women, all black, etc); group is brought together by an interviewer called a "facilitator"; facilitator wants group to answer a set of open-ended questions but the discussion is not limited

-response rate and representativeness are affected by each different method

*Evaluating surveys and survey data

Reliability: the extent to which the survey is consistent and repeatable; a survey is reliable if responses to similar questions within the survey are consistent, if it generates very similar responses when administered by different survey-givers, and if it generates very similar responses when given to the same person more than once

Validity: the extent to which a survey actually measures the intended topic; does the survey measure what you want it to measure? does performance on the survey predict actual behavior? does it give the same results as other surveys designed to measure similar topics? do the individual survey items fairly capture all the important aspects of the topic?; pretesting questions increases validity

Sampling: deciding who or what the subjects will be and then selecting them


Population: all people, animals, or objects that have at least one characteristic in common; ex: all undergraduate students

Sample of subjects: a group that is a subset of the population of interest

Representativeness: how closely the sample mirrors the large population

Probability sampling: selecting subjects in such a way that the odds of their being in the study are known or can be calculated; begin by defining the sample you want to study (ex: women born in 1975 now living in Seattle), then choose an unbiased method for selecting the subjects (random selection: any member of the population has an equal opportunity to be selected)

    -Simple random sampling: most basic form of probability sampling; a portion of the whole population is selected in an unbiased way; all members of the population being studied must have an equal chance of being selected

    -Systematic random sampling: all members of the population are known and can be listed in an unbiased way; the researcher picks every nth person; n is determined by the size of the population and the desired sample size

    -Stratified random sampling: used when populations have distinct subgroups; obtained by randomly sampling people from each subgroup in the same proportions as they exist in the population (see the sketch after this list)

        -example: majors at Clemson; 50% of the students are engineering majors --> our sample of the population has to have 50% engineers to represent the general population

    -Cluster sampling: sample entire clusters or naturally occurring groups that exist within the population; used if individual sampling is impossible due to cost or too large of a population; less reliable (example of clusters: zip code areas, school districts, etc)

        -example: if you want to sample students from the college of business and behavioral science, you just give everyone who takes intro to psych a survey
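A minimal sketch of simple random and stratified random sampling from the list above, using a made-up population of 1,000 students that is 50% engineering majors (the numbers are assumptions, not from the text):

```python
import random

# Hypothetical population: 1,000 students, half tagged as engineering majors.
population = [{"id": i, "major": "engineering" if i < 500 else "other"} for i in range(1000)]

# Simple random sampling: every member has an equal chance of being selected.
simple_sample = random.sample(population, 100)

# Stratified random sampling: sample each subgroup in the same proportion it holds in the population.
def stratified_sample(pop, key, n):
    groups = {}
    for person in pop:
        groups.setdefault(person[key], []).append(person)
    sample = []
    for members in groups.values():
        k = round(n * len(members) / len(pop))  # proportional allocation
        sample.extend(random.sample(members, k))
    return sample

strat_sample = stratified_sample(population, "major", 100)
print(sum(p["major"] == "engineering" for p in strat_sample))  # 50, mirroring the 50% in the population
```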

Nonprobability sampling: subjects are not chosen at random

Quota sampling: select samples through predetermined quotas that reflect the makeup of the population

Convenience sampling: using any groups who happen to be available

Purposive sampling: when nonrandom samples are selected because the individuals reflect a specific purpose of the study

    -ex: comparing new training program for employees in two departments --> select the employees of those two departments

Snowball sampling: researcher locates one or a few people who fit the criteria and asks these people to find more people

*Reporting Samples

    -the way a sample is chosen influences what can be concluded from the results

    -must explain the type of sample used & how subjects were recruited

    -details that may have influenced the type of subject need to be reported (ex: if they got paid, fulfilled a course requirement, etc)

CHAPTER 5: (Part I): Alternatives to Experimentation: correlational and quasi-experimental designs

**Two categories of nonexperimental research methods:

    *correlational designs

        -used to establish relationships among preexisting behaviors and can be used to predict one set of behaviors from others (ex: predicting your college grades from your entrance exam)

        -can show relationships between sets of antecedent conditions and behavioral effects (ex: relationship between smoking and lung cancer)

        -antecedents are preexisting; conditions are not manipulated or controlled by researchers


        -harder to establish a cause and effect relationship  

  *quasi-experimental designs

        -conditions cannot be manipulated or controlled

        -seem like real experiments but lack one or more essential elements (manipulation of antecedents, random assignment, etc)

        -subjects are selected on a basis of preexisting conditions

        -used to compare behavioral differences associated with different types of subjects (ex: normal or schizophrenic children), observe naturally occurring situations (raised in a one or two parent home), unusual events (surviving a hurricane), etc

Researcher studies a set of preexisting conditions

        -can increase understanding of biological, environmental, cognitive, and genetic attributes

Gives researcher more systematic control over the situation than other nonexperimental designs

Used when subjects cannot be assigned at random to different manipulations or treatments (ex: effect of lighting on working productivity in two different companies; differences could be from workers’ abilities or the lighting)

**Both tend to be higher in external validity**

**"treatments": selected life events or preexisting characteristics of individuals**

Both methods rely on statistical data analyses

**Correlational study: one that is designed to determine the correlation (degree of relationship) between two traits, behaviors, or events; when two things are correlated, changes in one are associated with changes in another

    -used often to explore behaviors that are not yet understood

    -asking how well the measures go together

    -once the correlation is known, predictions can be made; higher correlation means more accurate predictions

Researcher measures events without attempting to alter the antecedent conditions in any way

    -simple correlations: relationships between pairs of scores from each subject; the Pearson Product Moment Correlation Coefficient (r) is used to compute; when r is computed, the outcome can only show a positive, negative, or no relationship; use a General Linear Model for statistical formulas (assumes the relationship between X and Y is generally the same)

    *values of a correlation coefficient can only vary between -1.00 and 1.00*

    -scatterplots (or scattergraphs): visual representations of the scores belonging to each subject in the study; dots show if the pattern is positive, negative, or nonexistent

    -regression lines: lines of best fit; lines drawn on the scatterplot; direction of line corresponds to the direction of the relationship; the line represents a mathematical equation describing the linear relationship between two scores

    -positive correlation: value of r is positive; also called a direct relationship

    -negative correlation: value of r is negative; aka inverse relationship

    **the direction of the relationship (positive or negative) does not affect ability to predict scores** (ex: you could predict vocabulary just as well from negative scores as you could positive ones)

    -if r is near zero there is no relationship

Correlation coefficients can be affected by a nonlinear trend, range truncation, and outliers

    -range truncation: artificial restriction of the range of values of X or Y (ex: shoe size increasing as children age)

    -outliers: extreme scores that can affect correlation coefficients
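A minimal sketch of computing r and r2 from paired scores, using only Python's standard library (the scores are made up for illustration, not from the text):

```python
import math

# Hypothetical paired scores for six subjects (e.g., hours of TV per week and vocabulary score).
x = [2, 4, 5, 7, 9, 10]
y = [60, 55, 50, 42, 38, 30]

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(x, y)
print(round(r, 2))       # close to -1: a strong negative (inverse) relationship
print(round(r ** 2, 2))  # coefficient of determination: proportion of variability explained
```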

**CORRELATION DOES NOT MEAN CAUSATION**

    -causal direction between two variables cannot be determined by simple correlations

-bidirectional causation: behaviors affecting each other

-third variable problem: a third agent making the two behaviors seem like they are related

coefficient of determination (r2): estimates the amount of variability in scores on one variable that can be explained by the other variable (ex: firm handshake and first impressions experiment; r = .56, r2 = .31 --> 31% of fluctuations in positivity scores could be explained by firmness of handshake); people argue that anything over .25 is considered a strong association


linear regression analysis: when two behaviors are strongly related, the researcher estimates a score on one of the measured behaviors from a score on the other (ex: watching TV and vocab test scores were correlated --> substitute viewing time into the equation for the regression line so we can estimate a person's score on the vocab test); the stronger the correlation, the better the prediction
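A minimal sketch of that kind of prediction, fitting a least-squares regression line to made-up viewing-time and vocabulary scores and then substituting a new viewing time into the equation (the numbers are assumptions, not from the text):

```python
# Hypothetical data: hours of TV watched per week (X) and vocabulary test score (Y).
x = [2, 4, 5, 7, 9, 10]
y = [60, 55, 50, 42, 38, 30]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Least-squares slope and intercept for the regression line Y' = a + bX.
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / sum((xi - mean_x) ** 2 for xi in x)
a = mean_y - b * mean_x

# Substitute a new viewing time into the equation to estimate that person's vocabulary score.
new_hours = 6
predicted = a + b * new_hours
print(round(predicted, 1))
```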

multiple correlation (R): a measure predicted by multiple other measured behaviors; test the relationship of several predictor variables (X1, X2, X3...) with a criterion variable;  similar to r but R allows us to use information provided by two or more measured behaviors to predict another measured behavior when we have that info available; R2 can be used the same way as r2

-third variable: another agent that may cause the two behaviors to appear related (ex: amt. of TV watched, age, and vocabulary; TV time and vocabulary are each related to age, so age may be a third variable producing the apparent relationship between them)

partial correlation: allows the statistical influence of one measured variable to be held constant while computing the correlation between the other two; ex: if age is a third variable that is largely responsible, controlling the contribution of age should decrease the correlation between TV time and vocabulary → making this correlation less significant shows that age was the initial factor making the correlation so high

multiple regression analysis: used when you want to predict the score on one behavior from the score on the other when two or more related behaviors are correlated
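A minimal sketch of a first-order partial correlation, using the standard formula for removing a third variable's contribution from a pairwise correlation (the example r values are assumptions, not from the text):

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """Correlation between X and Y with Z statistically held constant."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# Hypothetical pairwise correlations: X = TV time, Y = vocabulary, Z = age.
r_xy = 0.60  # TV time and vocabulary
r_xz = 0.70  # TV time and age
r_yz = 0.75  # vocabulary and age

# If the partial correlation is much smaller than r_xy, the third variable (age)
# accounts for much of the original TV-vocabulary relationship.
print(round(partial_r(r_xy, r_xz, r_yz), 2))  # about 0.16, far below the original 0.60
```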

CHAPTER 5: (Part II)

Causal Modeling: creating and testing models that may suggest cause and effect relationships among behaviors

    -path analysis: method that can be used when subjects are measured on several related behaviors; the researcher creates models of possible causal sequences

        -generates more information for prediction and can generate experimental hypotheses

        -limited in the sense that models can only be constructed using the behaviors that have been measured (if the researcher omits an important behavior, it will be missing in the model too)

        -low in internal validity because it is based on correlational data; the direction of cause to effect can't be established with certainty/third variables can't be ruled out completely

        -useful for making hypotheses for future research and predicting potential causal sequences in instances where experimentation is not feasible

        -ex: IV (ice cream) and DV (crime); as ice cream consumption increases, crime increases by .6; because this seems so unreal you try to think of other variables that mediate the path from ice cream to crime; turns out temperature affects ice cream consumption and also causes crime to rise

        -causation is something the researcher decides

    -cross-lagged panel design: uses relationships measured over time to suggest the causal path

        -subjects are measured at 2 separate points in time on the same pair of related behaviors or characteristics --> then scores from these measurements are correlated in a particular way and the pattern of correlations is used to infer the causal path

            -most famous one done was about violent TV and aggressiveness in kids as they grew from young to old

        -correlations along the two diagonals are the most important for determining the causal path because they represent the effects across the time lag (the largest diagonal correlation is taken to indicate the causal direction)

        -the researcher can infer causation

Quasi-Experimental Designs: aka "natural experiments"; seem like real experiments, but they lack one or more essential elements such as manipulation of antecedents or random assignment to treatment conditions

        -explore the effects of different treatment conditions on preexisting groups of subjects or investigate the same kinds of naturally occurring events, characteristics, and behaviors that we measure in correlational studies

        -goal of a quasi-experiment is to compare different groups of subjects, looking for differences between them, or looking for changes over time in the same group of subjects

        -quasi-treatment groups: formed in the simplest quasi-experiments; based on a particular event, characteristic, or behavior whose influence we want to investigate


        -low in internal validity because causes of effects observed are never known for certain

        -sometimes subjects are exposed to different treatments but the experimenter cannot control who receives which treatment because random assignment is not possible

    **AN IMPORTANT DIFFERENCE BETWEEN EXPERIMENTS AND QUASI-EXPERIMENTS IS THE AMOUNT OF CONTROL THE RESEARCHER HAS OVER SUBJECTS WHO RECEIVE TREATMENT**

        -kinds of quasi-experimental designs:

                -ex post facto studies: the researcher systematically examines the effects of subject characteristics (SUBJECT VARIABLES) but without actually manipulating them (has no direct control); the researcher forms treatment groups by selecting subjects on the basis of differences that already exist; deals with things that occur (like a correlational study) but it also allows a researcher to zero in on those occurrences in a more systematic way --> instead of studying a whole range of people along a dimension, the focus can be on a carefully chosen subset (usually the two most extreme) --> increases the chance that the effects of change will be seen more clearly; low in internal validity; enables researchers to explore dimensions that they cannot/would not choose to study experimentally (ex: personality disorders); ex: knowing subjects' variables and asking them to describe their experience on the job

            -nonequivalent group designs: when the researcher compares the effects of different treatment conditions on preexisting groups of participants (ex: the lighting being tested in two different companies); measure subjects on any attributes that may threaten internal validity so that you can demonstrate statistically that the nonrandom groups did not differ in an important way; show that groups are "equivalent" beforehand so that the causes for results are plausible; random sampling is not an option

            -longitudinal design: long term studies used to measure the behaviors of subjects at different points in time to see how things have changed; time consuming and hard to conduct because retaining subjects is difficult over a long period of time

            -cross-sectional studies: used instead of longitudinal studies; instead of tracking the same group over a long span of time, subjects who are already at different stages are compared at a single point in time; requires more subjects, statistical tests are less powerful, and using different groups carries the risk that subjects have different characteristics that influence the results; imposition of units is generally high

            -pretest/posttest design: used to assess the effects of naturally occurring events (like approval ratings before and after a presidential speech) when a true experiment is not possible; one problem is that if a study extends past a single session, there are too many other factors that can influence improvement --> possibility of "PRACTICE EFFECTS" (aka pretest sensitization); lacks internal validity; sometimes this design is used along with one or more comparison groups that attempt to control for internal validity issues

                    -Solomon 4-group design: a number of comparison groups are needed --> a nonequivalent control group (took both pre and post tests but was not exposed to the "treatment"), a group that received the treatment and took only the posttest, and a posttest-only group

CHAPTER 6: Formulating the Hypothesis

Hypothesis: represents the end of the long process of thinking about a research idea; it is the same thing as a thesis; statement that predicts the relationship between at least two variables; statement is designed to fit the type of research design; every experiment has at least one

-nonexperimental hypothesis: a statement of your predictions of how events, traits, or behavior might be related; NOT a statement about cause and effect

-experimental hypothesis: tentative explanation of an event or behavior; explains the effects of specified antecedent conditions on a measured behavior

 

*HYPOTHESES MUST BE SYNTHETIC, TESTABLE, FALSIFIABLE, PARSIMONIOUS, AND (HOPEFULLY) FRUITFUL*

Synthetic statements: can either be true or false; experimental hypotheses must be a synthetic statement; If-then form; ex: “hungry students read slowly” → could be proven true or false

 


Nonsynthetic statements should be avoided

                -analytic statement: one that is always true; ex: “I am or am not pregnant”

-contradictory statement: statements with elements that oppose each other; they are always false; ex: “I do or do not have a brother”

^^neither needs to be tested because the outcome is already known

Testable statements: means for manipulating antecedent conditions and measuring the resulting behavior must exist; ex: if dogs’ muscles twitch when they sleep, then they are dreaming

Falsifiable statements: “disprovable” by research findings; need to be worded so that failures to find the predicted effect must be considered evidence that the hypothesis is false; example about reading the whole text book carefully and being able to design a good experiment

Parsimonious statements: simplest explanation is preferred; all research hypotheses should be parsimonious

Fruitful: when a hypothesis leads to new studies; ideally hypotheses are fruitful; hard to know in advance if a hypothesis will be fruitful; ex: a loud noise that makes a kid cry is paired with a harmless animal, and the kid comes to cry whenever he sees the harmless animal → led to other classical conditioning experiments

 Inductive model: used when formulating a hypothesis; process of reasoning from specific cases to more general principles; basic tool of theory building; ex: you see people in athletic clothes cut in line for food and no one challenges them, so you conclude that being an athlete allows privileges that non-athletes don’t have

Theory: set of general principles used to explain and predict behavior; through induction, theories are built by taking bits of empirical data and making general explanations about those facts

Deductive model: converse of the induction model; process of reasoning from general principles to make predictions about specific instances; possible to deduce predictions about what should happen in new situations; ex: equity theory used to predict behavior

            -both induction and deduction are used to formulate hypotheses; induction devises general principles used to organize/explain/predict behavior until better principles are found, and then deduction tests their implications

*building on research and nonexperimental studies can help form a hypothesis; ex: cigarettes causing lung cancer done on rats not people; looking at past research can also help you see what problems may arise

Serendipity: knack of finding things that are not being sought; ex: Pavlov’s dogs (his original research was on dogs’ stomach secretions, but he ended up doing a classical conditioning experiment)

            -also a matter of knowing enough to use an opportunity (are observations interpretable, do they explain something previously unexplained, do they suggest a new way of looking at a problem, etc)

Intuition: knowing without reasoning; guides what we choose to study; review the experimental literature to avoid pointless experiments (ex: prior work says dogs cannot see color, so don’t do an experiment to check if they can); most accurate if it comes from experts; intuitive knowledge cannot be interpreted as right until tested and should not destroy objectivity

Help in generating a hypothesis:

                -pick a psychology journal and read through an issue to help pick a topic of interest

                -try observation; watch people’s behavior in public places

-turn attention to a real-world problem and try to figure out what causes it; benefit of this is that once the cause is determined, a solution often suggests itself

-be realistic about time frames

Research Literature:


Psychological journals: periodicals that publish individual research reports and integrative research reviews (up-to-date summaries of what is known about a specific topic)

-can help you find good ideas for how to develop your hypothesis

 Meta-analysis: good source of information; found in journals or edited volumes; statistical reviewing procedure that uses data from many similar studies to summarize research findings about individual topics

 Introduction: selective review of relevant, recent research; only articles that are directly related to the research hypothesis are included; provides empirical background to guide the reader about the hypothesis

 Discussion: implications of findings; what the results mean

 CHAPTER 7: THE BASICS OF EXPERIMENTATION

Experiments are preferred to other research methods because, if properly conducted, they allow us to draw causal inferences about behavior

                -when well conducted, an experiment is high in internal validity

Two treatment conditions are required so that we can make statements about the impact of different sets of antecedents; if only one is used, there would be no way to evaluate what happens to behaviors as the conditions change

 IV and DV’s

 Independent Variable: the dimension that the experimenter intentionally manipulates; antecedent that is chosen to vary; often aspects of the physical environment

                -must be given at least two possible values in every experiment

                -levels of the IV: researcher varies the levels of the IV by creating different treatment conditions

 Ex Post Facto study: researcher can explore the way behavior changes as a function of changes in variables outside the researcher’s control; typically subject variables/characteristics of the subjects themselves that cannot be manipulated

 In a true experiment, we test the effects of a manipulated independent variable—not the effects of different kinds of subjects; we have to make certain that our treatment groups do not consist of people who are different on a preexisting characteristic

 Dependent variable: measure the DV to determine whether the IV had an effect; it is the particular behavior we expect to change because of our experimental intervention; a variable is dependent if its values are assumed to depend on the values of the IV: as the IV changes value, the DV value will change

 Schachter: shock therapy treatment experiment and anxiety (experiment on pg 189)

 Hess: tested that large pupils make people more attractive (pg 191)

Operational Definition: specifies the precise meaning of a variable within an experiment: it defines a variable in terms of observable operations, procedures, and measurements; it clearly describes the operations involved in manipulating or measuring the variables in an experiment

 Experimental Operational Definitions: explain the precise meaning of the independent variables; describe exactly what was done to create the various treatment conditions of the experiment

 Measured operational definitions: describe exactly what procedures we follow to assess the impact of different treatment conditions; include exact descriptions of the specific behaviors or responses recorded and explain how those responses are scored

 Hypothetical constructs (concepts): unseen processes postulated to explain behaviour.

Many variables can be measured in more than one way

 Level of measurement: the kind of scale used to measure a variable

 Reliability: meaning consistency and dependability

 Interrater reliability: the agreement between measurements

 Interitem reliability: the extent to which different parts of a questionnaire, test, or other instruments designed to assess the same variable attain consistent results

Validity: refers to the principle of actually studying the variables that we intend to study

                -face validity: self-evident way of measuring

-content validity: does the content of our measure fairly reflect the content of the thing we are measuring?

 Predictive validity: do our procedures yield info that enables us to predict future performance/behavior

                Concurrent validity: compares scores on the measuring instrument with an outside criterion

Construct validity: most important aspect; deals with the transition from theory to research application

Internal validity: the degree to which a researcher is able to state a causal relationship between antecedent conditions and the observed behavior

Extraneous variables: variables other than the IV that may change during the experiment and affect the results

Confounding: when the value of an extraneous variable changes systematically across different conditions of the experiment

 Instrumentation: when some feature of the measuring instrument itself changes during the experiment

 Statistical regression: “regression toward the mean”; occurs whenever subjects are assigned to conditions on the basis of extreme scores

QUESTIONS

2. a) IV—absence, DV—the heart

b) IV—direction of photo, DV—photo


c) IV—color of room, DV—mood

d) IV—people present, DV—level of anxiety

4. a) period of absence

b) photo right side up or upside down

c) pink or blue room

d) number of people

5. a) feelings in the heart

b) the person in the photograph

c) degree of sadness

d) how anxious person is

6. a) prior feelings before absence; how long/far absence is

b) if person identifying photo knows person in picture

c) shade of color of room

d) What people are waiting for?

 7. a) interrater reliability: the degree of agreement among different observers or raters; if researchers agree on the results

b) test-retest reliability: consistency between an individual’s scores on the same test taken at two or more different times; taking a test after already taking it and re-studying

c) Interitem reliability: the degree to which different items measuring the same variable attain consistent results; weighing something with two different units of measure

d) Content validity: the degree to which the content of a measure reflects the content of what is being measured; questions students have about an exam

e) Predictive validity: the degree to which a measuring instrument yields information allowing prediction of actual behaviour or performance; Schachter’s experiment

f) Concurrent validity: the degree to which scores on the measuring instrument correlate with another known standard for measuring the variable being studied; compare anxiety test scores

g) Construct validity: the degree to which an operational definition accurately represents the construct it is intended to manipulate or measure; IQ and intelligence tests

 8. the certainty that the changes in behavior observed across treatment conditions in the experiment were actually caused by the independent variable; it is important because if an experiment is not internally valid you cannot identify the effect of the independent variable.

CHAPTER 8: SOLVING PROBLEMS: CONTROLLING EXTRANEOUS VARIABLES

Physical Variables: aspects of the testing conditions that need to be controlled; ex: day of the week, testing room, noise, etc

                Ways to control Physical variables:

1)      Elimination: take out the condition (i.e. sound proof a noisy room)

2)      Constancy of Conditions: done when you can’t eliminate; keep all aspects of the treatment conditions as similar as possible (i.e. can’t take paint off walls → put all subjects in same room); physical variables and mechanical procedures (written instructions to ensure consistency) are usually constant with little effort

3)      Balancing: when elimination or constancy can’t be used; distributing the effects of an extraneous variable across the different treatment conditions of the experiment (i.e. random assignment)

 **Control improves through block randomization**
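A minimal sketch of block randomization, with hypothetical condition labels: each block contains every treatment condition exactly once in a freshly shuffled order, so conditions stay evenly spread across the session:

```python
import random

def block_randomize(conditions, n_blocks):
    """Build a run order in which each block contains every condition once, shuffled independently."""
    order = []
    for _ in range(n_blocks):
        block = list(conditions)
        random.shuffle(block)
        order.extend(block)
    return order

# Hypothetical experiment with three treatment conditions run in 4 blocks (12 slots total).
print(block_randomize(["A", "B", "C"], 4))
```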

More controlled extraneous variables → increase in internal validity, decrease in external validity

Social variables: qualities of the relationships between subjects and experimenters that can influence results

1)      Demand characteristics: aspects of the experimental situation that demand that people behave in a particular way; ex: strangers in elevators don’t make eye contact; experimenters want subjects to be as naïve as possible

a.       Controlling demand characteristics

        i. Single-Blind Experiments: subjects do not know which treatment they are getting; placebo effect: thinking there is an effect when there is not

        ii. Cover stories: a plausible but false explanation for the procedures used in the study; not always used because they deviate from fully informed consent

b.      Experimenter Bias: the experimenter does something that creates confounding in the experiment

        i. Rosenthal effect: “Pygmalion effect”; experimenters treating subjects differently depending on what they expect from their subjects

        ii. Double-Blind experiments: used to eliminate experimenter effects; subjects and experimenters do not know which treatment is being given

Personality Variables: personal characteristics the experimenter brings to the experimental setting (nice, friendly experimenter vs. cold, rude experimenter); important to maintain consistency across subjects and treatments


*not as important with volunteer subjects*

Context Variables: variables that come about from procedures created by the environment, or context, of the research setting

                -ex: subject recruitment, selection, assignment procedures, etc

                -names of experiments are kept neutral to avoid bias

                -note which experimental conditions cause dropouts

 QUESTIONS

3.       Keep all aspects of conditions as similar as possible; ex: physical variables (color of room) and mechanical procedures (write out instructions)

4.       Distributing the effects of an extraneous variable across different treatment conditions; ex: time of day and day of the week

5.       Randomly assign equal numbers of subjects receiving treatment A and treatment B to each of the two rooms

7A. explain experimenter bias and how it can lead to false results

7B. double-blind: subject and experimenter don’t know to prevent experimenter bias in part A

 8A. it can lead to the Rosenthal effect; results can be different if rats are treated differently

8B. break groups of rats into smaller groups and do it over a number of days

9. it should be used when providing an explanation for the experiment without hinting at the hypothesis; should not be used if you have internal validity without a cover story; cover stories can lead to deception, which is a departure from a subject's full consent

CHAPTER 9: BASIC BETWEEN-SUBJECTS DESIGNS

Experimental Design: the general structure of the experiment; the design is made up of the number of treatment conditions

                Determining the Design:

1)      Number of IVs

2)      Number of treatment conditions needed to make a fair test

3)      Whether the same or different subjects are used in each of the treatment conditions

Between-Subjects Designs: different subjects take part in each condition of the experiment; draw conclusions by making comparisons between the behaviors of the different groups of subjects; more than 1 or 2 subjects per condition are necessary

*the more the sample resembles the whole population, the more likely it is that the behavior of the sample mirrors that of the population*

 Encouraging volunteers by making experiment seem:

                -appealing/interesting

                -nonthreatening


                -meaningful

*If individuals in the population are all very similar on the DV → a small sample size can be adequate

*If individuals are likely to be different → a large sample size is needed

Effect size: an estimate of the size of the treatment effect; the larger the effect size, the fewer subjects are necessary to detect the treatment effect; advisable to have at least 15-20 subjects because a small number makes an effect hard to detect
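*Illustration (not from the textbook; the scores are made up): one common way to express effect size is Cohen's d, the standardized difference between two group means; the larger d is, the fewer subjects are needed to detect the effect.*

```python
# Hypothetical sketch: estimating effect size (Cohen's d) from two groups' DV scores.
# A larger d means the treatment effect is easier to detect with fewer subjects.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized difference between two group means, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

treatment = [12, 15, 14, 16, 13]   # made-up DV scores
control = [10, 11, 9, 12, 10]
print(round(cohens_d(treatment, control), 2))
```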

 Two Group design: when only two treatment conditions are needed, the experimenter may choose to form two separate groups of subjects

-Two independent groups design: randomly picked subjects are placed in each of the two treatment conditions through random assignment (if no way to randomly assign subjects, there will be less external validity in conclusions/how well findings can be applied to other situations)

-random assignment: every subject has an equal chance of being placed in any of the treatment conditions; this method controls for subject variables; NOT THE SAME AS RANDOM SELECTION (is possible to select a random sample from the population but then assign the subjects to groups in a biased way)
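*Illustration (a minimal sketch, not from the textbook; the subject labels are invented): random assignment just means shuffling the recruited subjects and splitting them between conditions, so every subject has an equal chance of ending up in either group.*

```python
# Hypothetical sketch of random assignment to two independent groups.
# Note: this is NOT random selection; it assumes the subjects are already recruited.
import random

subjects = [f"S{i}" for i in range(1, 21)]   # 20 recruited subjects (assumed)
random.shuffle(subjects)                     # put them in a random order
half = len(subjects) // 2
experimental_group = subjects[:half]         # receives the IV manipulation
control_group = subjects[half:]              # no-treatment / placebo condition
print(experimental_group)
print(control_group)
```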

 Experimental condition: apply a particular value of the IV to the subjects and measure the DV; subjects in this condition are called “experimental group”

 Control Condition: used to determine the value of the DV without a manipulation of the IV; subjects in this condition are called the "control group"; same as experimental without the manipulation; aka "no-treatment" condition; the closer the control group is to a placebo group → fewer chances of accidental confounding and more internal validity

 Two Experimental Groups Design: used to look at behavioral differences that occur when subjects are exposed to different values or levels of the IV; used once it has been established that an IV produces some effect not ordinarily present; no control group (ex: people hearing good or bad news study) → no way of verifying a conclusion without a control group: the conclusions that may be drawn from an experiment are restricted by the scope and representativeness of the treatment conditions

-use two independent groups if there is only one IV and it can be tested with 2 treatment conditions

 *more subjects → better chances that randomization will lead to equivalent groups of subjects*

 Two Matched Groups: two groups of subjects but the researcher assigns them to groups by matching or equating them on a characteristic that will probably affect the DV; used when randomization doesn’t guarantee that the treatment groups will be comparable on all the relevant extraneous subject variables

                3 ways to match subjects across the two groups on the matching variable:

1)      Precision matching: insist that the members of the matched pairs have identical scores

2)      Range matching: more common; requires that the members of a pair fall within a previously specified range of scores; advised to sample as many subjects as possible when using range matching

3)      Rank-ordered matching: subjects are ranked by their scores on the matching variable; all subjects are typically used unless there is an odd number of scores and one must be thrown out; the downside is that there may be unacceptably large differences between members of pairs

 *by matching on a variable that is likely to have a strong effect on the DV, we can eliminate one possible source of confounding & can make effect on IV easier to see*

 Matching is especially useful when there is a very small number of subjects because there is a greater chance that randomization will produce groups that are dissimilar
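*Illustration (not from the textbook; the pretest scores are invented): rank-ordered matching can be sketched by ranking subjects on the matching variable, pairing adjacent ranks, and then randomly assigning one member of each pair to each group.*

```python
# Hypothetical sketch of rank-ordered matching followed by random assignment within pairs.
import random

pretest = {"S1": 55, "S2": 82, "S3": 60, "S4": 79, "S5": 64, "S6": 90}  # matching variable
ranked = sorted(pretest, key=pretest.get)            # rank subjects by their scores
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

group_a, group_b = [], []
for pair in pairs:
    random.shuffle(pair)                             # random assignment within each matched pair
    group_a.append(pair[0])
    group_b.append(pair[1])
print("Group A:", group_a)
print("Group B:", group_b)
```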

 Multiple groups design: there are more than two groups of subjects and each group is run through a different treatment condition; used when the amount or degree of the IV is important (i.e. drug dosage)

-Multiple independent groups design: most commonly used multiple groups design; the subjects are assigned to the different treatment conditions at random

                -consider practical limits

Page 19: I did it

 Pilot study: pretest selected levels of an IV before running the actual experiment; keeps practical limits in mind

 QUESTIONS

4,6,7,10,11,12

4. I do not accept that conclusion because the jogging that the control group did is a confounding variable on their health state.

6. I would test this hypothesis by having two different rooms: one with people standing close together and the other with people spread apart. There would be two treatment conditions. You could test this hypothesis with more than one design, for example, have different sized rooms.

7. I would use a two matched groups design, probably range matching in particular, to pair up people who have known each other for similar amounts of time to avoid confounding variables

CHAPTER 10: BETWEEN-SUBJECTS FACTORIAL DESIGN

Factorial designs: studying 2 or more independent variables at the same time; provide more information than experiments with one IV
    -factors: the independent variables in the designs
    -two-factor experiment: simplest factorial design; has only two factors
    -data from a factorial experiment give us:

1) Information about the effects of each IV in the experiment ("main effects")

2) Answer the question: how does the influence of one IV affect the influence of another?

Main effect: the action of a single IV in an experiment; how much the change in one IV changed the subjects' behavior; the change in behavior associated with a change in the value of a single IV within the experiment
    -an experiment with one IV has only one possible main effect (the term is usually used for factorial experiments with more than one IV)
    -more than one IV → each one has a main effect
    -as many main effects as there are factors
    -main effects may/may not be statistically significant; to tell if they are, run statistical tests

    -statistical tests on factorial experiments also tell whether the two factors operate ind. or not

            -ex: plant study with music and talking; is it the conversation or the music that has an effect on the plant, or do music and conversation have an effect on each other?

Interaction: the effect of one IV changes across the levels of another IV; ex: a little alcohol is okay, a sleeping pill is okay, but if taken together they are deadly --> this is an interaction; the effects of one factor will change depending on the levels of the other

-example: drowsiness from alcohol alone = 4, drowsiness from a sleeping pill alone = 3, so with no interaction the combined effect = 7; with an interaction the effects are not simply additive, e.g., the combined effect INCREASES to 9

-when there is an interaction, you cannot picture complete results without considering both factors because the effects of one factor will CHANGE depending on the levels of the other factors
    -an interaction tells us that there could be limits to the effects of one or more factors, so an interaction QUALIFIES the main effects
    -number of interactions depends on the number of IVs
    -two IVs → one interaction
    -high-order interactions: more than 2 IVs; ex: driver experience, alcohol, and degree of darkness all interact in causing car accidents

-also measured with statistical tests

*POSSIBLE TO HAVE AN INTERACTION WITH NO MAIN EFFECTS AND POSSIBLE FOR MAIN EFFECTS WITH NO INTERACTION*
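*Illustration (a sketch using the made-up drowsiness numbers above): with the cell means of a 2 x 2 design, a main effect is the average effect of one factor across the levels of the other, and an interaction shows up as a "difference of differences."*

```python
# Hypothetical 2 x 2 cell means based on the drowsiness example in the notes.
means = {("no_alcohol", "no_pill"): 0,
         ("alcohol", "no_pill"): 4,
         ("no_alcohol", "pill"): 3,
         ("alcohol", "pill"): 9}

# Main effect of alcohol: average effect of alcohol across the two pill levels.
alcohol_effect = ((means[("alcohol", "no_pill")] - means[("no_alcohol", "no_pill")]) +
                  (means[("alcohol", "pill")] - means[("no_alcohol", "pill")])) / 2

# Interaction: does the effect of alcohol change across the levels of the pill factor?
interaction = ((means[("alcohol", "pill")] - means[("no_alcohol", "pill")]) -
               (means[("alcohol", "no_pill")] - means[("no_alcohol", "no_pill")]))

print("main effect of alcohol:", alcohol_effect)  # (4 + 6) / 2 = 5
print("difference of differences:", interaction)  # 6 - 4 = 2, nonzero -> interaction present
```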

EX2: name length and nicknames 

design matrix: a simple diagram of the experiment's conditions; makes it easier to understand what you are testing, what kind of design you are using, and how many treatment conditions are required


shorthand notation: tells the number of factors involved (numerical value of each number tell how many levels each factor has --> 2 x 2 design has two factors and each factor has two levels); also know that the experiment has four different conditions; ex: 2 x 2 (type of Name x Length of name)

EXAMPLE: 2 x 3 x 2 factorial design → three factors; the numerical value of each digit tells the number of levels of each factor (2 levels, 3 levels, 2 levels); 12 separate conditions; called a three-factor experiment
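*Illustration (a sketch; the factor names are invented placeholders): the shorthand notation maps directly onto the full set of treatment conditions, which you can enumerate by crossing the levels of every factor.*

```python
# Hypothetical sketch: listing the conditions of a 2 x 3 x 2 factorial design.
from itertools import product

name_type = ["common", "unusual"]           # 2 levels (assumed factor)
name_length = ["short", "medium", "long"]   # 3 levels (assumed factor)
list_position = ["first", "second"]         # 2 levels (assumed factor)

conditions = list(product(name_type, name_length, list_position))
print(len(conditions))    # 2 * 3 * 2 = 12 separate treatment conditions
for condition in conditions:
    print(condition)
```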

crossover interaction: the effects of each factor completely reverse at each level of the other factor; maximum interaction possible

 # of significant effects: factors A, B, and C can each have a main effect; A and B can interact, B and C, A and C, and A, B, and C → 7 possible significant effects

*to choose which between subjects design to use, think of how many IVs you have and the number of treatment conditions needed to test your hypothesis

CHAPTER 11: WITHIN-SUBJECTS DESIGN

Within-subjects design: each subject participates in more than one condition of the experiment
            -this increases power → greater chance of detecting a genuine effect of the IV
            -also known as a repeated-measures design
    -make comparisons of the behavior of the same subjects under different conditions
    -more likely to find if the IV has an effect using a within-subjects design

 EX: priming use of homophones (words that sound alike) on the ability of old/young people to recall a famous person's name → "center of a cherry" (a pit), filler picture, "a __ a day keeps the doctor away", Brad Pitt; predicted that naming the cherry's center would help recall the actor's name

Within-subjects factorial design: a factorial design where subjects receive all conditions in the experiment

Ex: showing people different slides of facial expressions (IV1) and using male and female faces on the slides (IV2)
    -may require fewer subjects than a between-subjects design

Mixed design: combines within and between subject variables in one experiment

Ex: using a within-subjects factor (the four types of facial expressions) and a between-subjects factor that cannot be manipulated (gender or age)
    -more complex but common

Advantages of Within-Subjects Design
    -helps when there are not many subjects available
    -saves time when actually running the experiment (don't need to train twice as many people)
    -best chance of detecting the effect of the IV
    -controls for extraneous subject variables because subjects do not differ between conditions
    -most perfect form of matching
    -subjects are measured more than once on the DV (measured after each condition)
    -practical and methodological gains; an ongoing record of subjects' performance over time

    -increased power

Disadvantages of Within-Subjects Design
    -generally require subjects to spend more time in the experiment
    -more time setting up equipment for individual subjects
    -subjects can often only be in one condition because it is either impossible, useless, or would change effects if they weren't (ex: people driving cars for the first time can't experience their first time ever driving twice)

-subjects get bored or tired
    -interference between conditions is the biggest drawback; if the treatments clash too much you have to use a between-subjects design

-most of the problems with within-subjects designs are linked to the IV

order effect: when a subject's response differs due to position, order, or series of treatments; a potential


confound; ex: ask people to watch TV commercials and rank how much they liked them → a commercial that comes on first may get a higher ranking because it is paid the most attention, rather than a third or fourth commercial when the viewer tunes out more
Fatigue effects: cause performance to decline as the experiment goes on because subjects get tired
Practice effects: different factors may lead to improvement as the experiment goes on; as subjects become more familiar, they get better
    ^^progressive error: these positive and negative changes; includes any changes in subjects' responses that are caused by testing in multiple treatment conditions; increases as the experiment goes on; ex: try a new cola after waiting for 2 hours, then try the old brand of cola after waiting for 2 more hours

Controlling for Within-Subjects Designs:
    -make sure an extraneous variable affects all treatment conditions in the same way, using elimination, constancy, or balancing
*Cannot eliminate or hold constant order effects in a within-subjects design
    -counterbalancing: distribute progressive error across the different treatment conditions of the experiment; guarantees that order effects won't be the reason for changes in the DV
    -Subject-by-subject counterbalancing: present all treatment conditions more than once to each subject; the idea is to redistribute progressive error so that there is an equal amount of it in each condition; two types:
            -reverse counterbalancing: present all treatment conditions twice, first in one order, then in the reverse order; makes progressive error the same (new cola = brand A, old cola = brand B: give subjects cola in the order ABBA; progressive error in trial 1 = 1, trial 2 = 2, trial 3 = 3, and trial 4 = 4; progressive error for brand A = 5 and for brand B = 5); DOESN'T ALWAYS WORK
            -progressive error may be linear, curvilinear (inverted-U shape), or nonmonotonic (changing direction)
    -Block randomization: used when progressive error is nonlinear; each treatment must be presented several times; ex: treatments ABCD would be broken into blocks such as BDCA, DBAC, ACDB, CABD, and BADC, and each condition is repeated as many times as there are blocks (ex: 20); typically used when treatment conditions are short
    -a drawback of subject-by-subject counterbalancing is that you have to present each condition to each subject more than once
    -across-subjects counterbalancing: distribute the effects of progressive error so that, if the subjects are averaged, the effects will be the same for all conditions in the experiment; 2 types:
            -complete counterbalancing: controls for order effects by using all possible sequences of the conditions and using every sequence the same number of times; gets harder when more conditions are added

*to decide how many possible orders in which you can present the treatment conditions, take the factorial of the number of conditions, N! (3 conditions: 3 x 2 x 1 = 6 possible orders)*
            -partial counterbalancing: use these procedures when we cannot do complete counterbalancing but still want some control over progressive error across subjects
            -randomized partial counterbalancing: simplest partial counterbalancing; randomly select as many sequences as we have subjects for the experiment; ex: 120 possible sequences (5 treatment conditions) and 30 subjects; randomly select 30 sequences and assign each to a subject at random; doesn't control for order effects as effectively as complete counterbalancing
            -another option is to use as many randomly selected sequences as there are experimental conditions
    -Latin square counterbalancing: uses a square or matrix where each treatment appears only once in any order position in the sequences; provides protection against order effects but cannot control for other systematic interference between 2 treatment conditions (ABCD, BADC, CDAB, DCBA → A comes before B twice)
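*Illustration (not from the textbook; the condition labels are arbitrary): complete counterbalancing uses every possible sequence of conditions, while randomized partial counterbalancing samples only as many sequences as there are subjects.*

```python
# Hypothetical sketch of complete vs. randomized partial counterbalancing.
from itertools import permutations
import random

conditions = ["A", "B", "C", "D", "E"]        # 5 treatment conditions (assumed)
all_orders = list(permutations(conditions))   # complete counterbalancing: 5! = 120 sequences
print(len(all_orders))

# Randomized partial counterbalancing: with only 30 subjects, randomly draw 30 of the
# 120 possible sequences and give one sequence to each subject.
n_subjects = 30
chosen_orders = random.sample(all_orders, n_subjects)
print(chosen_orders[0])
```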

-carryover effects: the effects of some treatments will persist (carry over) after the treatments are removed; ex: identifying three scents → lilac, gasoline, and perfume; after smelling gasoline, the subject cannot correctly identify the perfume because the smell of gas is still there (Ex2: a funny scene in a movie won't seem as funny as it usually would if a subject just watched a really sad scene)

-subject-by-subject counterbalancing and complete counterbalancing can somewhat help by balancing carryover effects over the entire experiment
    -balanced Latin squares: each treatment condition appears once in each position in the order sequence and precedes and follows every other condition an equal number of times; ex: ABDC, BCAD, CDBA, DACB → no condition immediately precedes another more than once
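*Illustration (a sketch checking the square given above): in a balanced Latin square every condition should immediately precede every other condition exactly once.*

```python
# Check the "balanced" property of the Latin square from the notes:
# count how often each condition immediately precedes each other condition.
from collections import Counter

square = ["ABDC", "BCAD", "CDBA", "DACB"]
precedes = Counter()
for row in square:
    for left, right in zip(row, row[1:]):
        precedes[(left, right)] += 1

print(precedes)                                         # every ordered pair appears once
print(all(count == 1 for count in precedes.values()))   # True -> the square is balanced
```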

 *every experiment with a within-subjects condition needs some kind of counterbalancing

            -1 IV or factorial design → counterbalance all conditions

            -Within-subjects factorial → counterbalance all conditions (multiply the levels of each factor together to get


total number of conditions)

            -ex: 4 x 2 within-subjects design has 8 conditions to be counterbalanced

            -in a mixed design, only the within-subjects have to be counterbalanced

-counterbalance for each subject when expecting large differences in the pattern of progressive error from subject to subject (ex: in a weight-training experiment, everyone will fatigue at different rates)

-counterbalance all subjects when differences won’t be large

 *Treatment order is ALWAYS a between-subjects factor*

CHAPTER 12: WITHIN-SUBJECTS DESIGN (small N)

Large N designs: (N is the number of subjects needed in an experiment); require manipulating or selecting IV’s and testing a number of subjects

Small N designs: test only one or very few subjects

-researchers who choose this way feel that large N designs lack precision because they combine the data from different subjects to determine the effects of the IVs

-ex: children’s attention span to cartoons with/without violence; large sample of children may vary too much to tell if there is an effect (more fearful kids may not watch the screen as long during violent scenes and a large N design isn't sensitive enough to detect this error)

-behavior is studied much more intensely in a small n design experiment; behavior is measured many times

-used in labs and in field studies; human and animal behavior

-used for practical reasons; ex: if researchers want to study animal brains or tissue, the animal has to be sacrificed --> makes more sense to use as few subjects as possible in cases like this

-most often used in experimentation with operant conditioning; also used in clinical psychopathology and psychophysics (how we sense and perceive physical stimuli)

-B.F. Skinner studied positive and negative reinforcement --> his approach, "the experimental analysis of behavior," holds that it is better to use careful, continuous measurements than statistical tests

Baseline: a measure of behavior as it normally occurs without the experimental manipulation

ex: compare the growth of a plant when you do/do not talk to it; for a time you do not talk to the plant, and after measuring it every Monday for three months you establish a baseline
        -in the second part of the experiment you introduce talking and measure it every Monday for three months; if the plant grew, talking had an effect; if it did not grow, talking had no effect

*to make sure maturation is not a confounding threat, small n designs remove the IV and return to the original control condition after completing the experimentation manipulation (goes back to measuring the plant every Monday for three months without talking to it)

ABA Designs: refers to the order of the conditions of the experiment; A (baseline condition) followed by B (experimental condition) returns back to A

            -may be used only if the treatment conditions are reversible

            -also called reversal designs

            -can be used for large N designs too

-Variations of ABA: you can do it more than once (ex: the ABABA design with the husband and wife leaving clothes in the living room)

-you can also extend the conditions in a small N experiment (ex: ABACADA where B, C, and D represent 3 different treatment conditions)
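*Illustration (not from the textbook; the weekly growth numbers are invented): summarizing an ABA (reversal) design usually just means comparing behavior in each phase; if the DV changes when the treatment is introduced and moves back toward baseline when it is withdrawn, the treatment is the most plausible explanation.*

```python
# Hypothetical sketch of summarizing an ABA (reversal) design with phase means.
phases = {
    "A1 (baseline, no talking)": [1.0, 1.1, 1.0, 1.2],
    "B  (treatment, talking)":   [1.4, 1.7, 1.9, 2.1],
    "A2 (reversal, no talking)": [1.2, 1.1, 1.1, 1.0],
}
for phase, growth in phases.items():
    print(phase, "mean weekly growth =", round(sum(growth) / len(growth), 2))
# A change in B that reverses in A2 argues against maturation or history as explanations.
```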

-researchers often try to change behaviors by implementing punishments or rewards; ex: the little boy scared of


crickets, and it affected his math scores --> they didn't return to the baseline immediately because the effects were so small, so they used an A-B-BC-A-B-BC design where A was the baseline, B was the increase in cricket exposure, and C was the reward for doing math problems right

-sometimes researchers sacrifice some precision for ethical reasons (ex: the boy kicking and hitting himself for attention) and can only use an AB design --> if the AB design works then the goal of helping patients was successful

Small N designs are often used to test the effects of positive or negative reinforcement on individuals with behavioral problems; ex: positive or improved behavior is rewarded and negative behavior is punished, which causes changes in the behavior (the DV)

Multiple baseline design: a series of baselines and treatments are compared within the same person, but once a treatment is established, it is not withdrawn

        -used when it is not desirable to reverse treatment conditions, when the researcher wants to test a treatment across multiple settings, or when the researcher wants to assess the effects of a treatment on several behaviors

    -EX: a boy watches cartoons before and after school; parents reward him with toys whenever he does something before and after school besides watching TV; once giving him toys has changed his TV viewing in the morning, the parents can do the same thing for after school and prove it was successful treatment

Statistics are not usually used in small n designs; you can look at the data to see changes in the DV rather than running statistical tests

-the role statistics do play is to infer things about the population from sample data because making generalizations based on one subject isn’t reasonable

Changing criterion design: behavior is modified in increments, and the criterion for success (reinforcement) is intentionally changed as the behavior is modified; used when the behavior being modified cannot be changed all at once; the criterion is set on the DV, and the reward changes along with the criterion

-ex: weight lifting program; lift light weights and get support from the trainer first and as the weights get heavier you still get the same amount of support from the trainer

-useful when the eventual, desired behavior must be shaped

Discrete trials design: does not rely on baselines; instead it relies on presenting and averaging across many applications of different treatment conditions and comparing performance on the DV across the treatment conditions; repeated presentation over many trials can show reliable effects of the IV; sensory systems among people are similar so it is reliable to generalize for the whole population

When to use/advantages of small N designs:

            -when you are studying a particular subject (ex: a disturbed child)

            -when very few subjects are available

            -you get a more accurate picture of results or effects (because you measure the effects multiple times and observe them more closely)

-low in external validity; it may be hard to generalize to the entire population based on the results of a few subjects; EX: reactions may be different for individuals in different situations (ex: we all get startled at a loud gunshot noise, but some people may laugh with relief that it was the researcher who shot the gun, or be angry about it)

-history threats are a problem with small N designs so it is important to replicate findings before generalizing them; ex: someone nice could come give the plant fertilizer right as researchers begin talking to it

When to use large N designs:

            -when population cannot be generalized from one or few subjects

CHAPTER 13: WHY WE NEED STATISTICS?


Statistics: quantitative measurements of samples; allows us to evaluate objectively with the data; carry out statistical tests to see if the IV PROBABLY caused changes in the DV

statistical inference: answers the question: are the differences between groups significantly greater than would be expected between any samples drawn from the same population?

Variability: with 2+ groups, expect variability
    -the amount of change or fluctuation seen
    -statistics set up objective standards so results do not reflect individual bias

null hypothesis
    -we don't test the research hypothesis directly
    -the null states that the groups are so similar that the differences mean nothing (the samples come from the same population)
    -the null is assumed true until proven wrong

statistical significance: reject the null and conclude there is a real difference between the 2 groups

alternative hypothesis: no way to test it directly
    -the research hypothesis (the data came from different populations)
    -can never prove it is correct (can only show the null is PROBABLY wrong)
Process of Statistical Inference
    1. consider the population to be sampled: because of variability, individual scores on the DV will differ
    2. consider different random samples within the population: their scores on the DV will also differ because of normal variability; assume the null hypothesis is correct
    3. apply the treatment conditions to randomly selected, randomly assigned samples
    4. after the treatment, the samples now appear to belong to different populations: reject the null hypothesis

directional hypothesis: predicts the way the difference between groups will go

normal curve: bell-shaped; most scores fall close to the center

significance level --> criterion for deciding whether to reject the null or not

experimental errors: variations in subject scores produced by uncontrolled extraneous variables, experimenter bias, etc

Type I error: reject the null hypothesis when the null is true
Type II error: fail to reject a false null hypothesis

Critical regions: parts of the distribution that make up the most extreme 5% of the mean differences (p<.05)

2-tailed test (nondirectional): the 5% critical region is divided onto either side of the distribution (i.e. 2.5% on the left and right sides of the center)
1-tailed test (directional): the 5% is not distributed to both sides; it can only go one way
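*Illustration (not from the textbook; the scores are invented and a reasonably recent SciPy is assumed): a two-group comparison can be tested against the null hypothesis with an independent-samples t test, either nondirectional (two-tailed) or directional (one-tailed), using the p < .05 criterion.*

```python
# Hypothetical sketch: testing the null hypothesis with an independent-samples t test.
from scipy import stats

group_a = [12, 15, 14, 16, 13, 15, 14]   # made-up DV scores, treatment group
group_b = [10, 11, 9, 12, 10, 11, 12]    # made-up DV scores, control group

t, p_two_tailed = stats.ttest_ind(group_a, group_b)                          # nondirectional
t, p_one_tailed = stats.ttest_ind(group_a, group_b, alternative="greater")   # directional

alpha = 0.05                              # significance level
print(round(p_two_tailed, 4), round(p_one_tailed, 4))
print("reject the null (two-tailed)?", p_two_tailed < alpha)
```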

inferential statistics: used as indicators; also called "test statistics"

measures of central tendency: summary statistics that describe the typical distribution of scores
mean: average of the scores
median: middle number
range: difference between the highest and lowest score (a measure of variability rather than central tendency)

CHAPTER 14: ANALYSIS OF VARIANCE (ANOVA)

Analysis of Variance (ANOVA): statistical procedure used to evaluate differences among three or more treatment means; divides all the variance in the data into component parts and then compares/evaluates them for statistical significance

Simplest ANOVAs:
    -Within-groups variability: degree to which the scores of subjects in the same treatment group differ from one another (how much subjects vary from others in their own group)
    -Between-groups variability: degree to which the scores of subjects in different treatment groups differ from one another

Sources of Variability
    -individual differences
    -making small mistakes within the experiment (i.e. errors in measuring lines that subjects drew) or other errors with the procedure

Factors of variability can be lumped into one category called:

error: individual differences, undetected mistakes in recording data, variations in testing conditions, and a host of extraneous variables that can be sources of variability; can lead to variability between different groups or within the same group
    -experimental manipulation: the other source of variability; applies only in a between-subjects design because subjects are tested under different conditions; it produces variability only between the responses of different treatment groups; subjects still vary because of individual differences or error, but now they also vary because of differences in treatment conditions

*VARIABILITY WITHIN GROUPS COMES FROM ERROR; VARIABILITY BETWEEN GROUPS COMES FROM ERROR AND TREATMENT EFFECTS* (if IV had an effect, between groups variability should be larger than within)

******F ratio: variability between groups / variability within groups
    -if the IV had no effect, the F ratio should = 1
    -the larger the effect of the IV, the larger the F ratio should be
    -if the amount of between-groups variability is large compared with the amount of within-groups variability, then the F ratio is statistically significant (reject the null)

One-way between-subjects analysis of variance
    -treatment groups must be independent
    -only one IV
    -samples must be randomly selected
    -normally distributed on the DV and the variances are equal (homogeneous)

mean square: the average of the squared deviations; used when discussing an ANOVA

            MSw: represents the portion of the variability in the data that is produced by the combination of sources called “error”

            MSb: the amount of variability produced by both error and treatment effects in the experiment

F ratio = mean square between / mean square within
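*Illustration (not from the textbook; the three groups of scores are invented and SciPy is assumed to be available): the F ratio can be computed by hand as MS-between / MS-within and checked against a library one-way ANOVA.*

```python
# Hypothetical sketch: one-way between-subjects ANOVA computed two ways.
from scipy import stats

groups = [[4, 5, 6, 5], [7, 8, 6, 7], [9, 10, 9, 11]]          # three treatment groups
n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
df_between = len(groups) - 1
df_within = n_total - len(groups)

f_by_hand = (ss_between / df_between) / (ss_within / df_within)   # MSb / MSw
f_scipy, p_value = stats.f_oneway(*groups)
print(round(f_by_hand, 2), round(f_scipy, 2), round(p_value, 4))
```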

Summary table: includes all basic info needed to compute F and the actual computed value; example:

Source        df        SS         MS        F
Between        2        10.55      5.28      MSb/MSw = 7.23
Within        12         8.80      0.73
Total         14        19.35

Graphing the results: line or bar graph to help summarize findings; IV on horizontal axis, DV on vertical axis; data points represent group means

Interpreting Results

Page 26: I did it

-when testing F, you are testing only the overall pattern of treatment means

-if you are using an ANOVA test with more than 2 treatment groups, you cannot tell WHERE the significant difference is, but you can tell that there IS one (ex: no, brief, and long ownership of mugs)

-two types of follow-up tests:
    -post hoc tests: tests done after the overall analysis indicates a significant difference; used to make pair-by-pair comparisons of the different groups to see where the difference is; can increase the chance of a Type II error; more conservative, less powerful
    -a priori comparisons: tests between specific treatment groups that were anticipated or planned before the experiment was conducted; aka "planned comparisons"; the chance of a Type I error is not increased if the # of planned comparisons is less than the # of treatment groups; less conservative, more powerful

-use post hoc tests rather than a priori comparisons if there are too many predictions or you want to explore group differences that were not already predicted

-can’t just use a bunch of t-tests because that increases the chance of a type I error

Moderating variable: one that can moderate, or change, the influence of the IV
    -analysis of covariance (ANCOVA) can control statistically for potential moderating variables; used to accomplish 2 things:

1)      Refine estimates of experimental error

2)      Adjust treatment effects for any differences between the treatment groups that existed before the start of the experiment

One way repeated measures ANOVA: analyze the effects in a multiple group experiment testing one IV that uses a within subjects design

main effects: impact of each IV 

Two way ANOVA: treatment groups are independent from each other and the observations are randomly sampled; assume population from each group is normally distributed on the DV

QUESTIONS
13. a. you can determine which of the 3 groups is different from the others and how they vary
    b. you would need to conduct a post hoc test or a priori comparison to see what the relationship is between the means, because conducting a bunch of t-tests increases the chance of a Type I error.

14. a. .03
    b. yes it is significant
    e. yes you do need to run follow-up tests

CHAPTER 15: Drawing conclusions: the search for the elusive bottom line

*Evaluating the experiment from the inside: Internal Validity

-when evaluating an experiment, start from the inside; the only studies that have questionable internal validity but are still published are ones that have heuristic value (they would lead other researchers to conduct more experiments)

to guarantee internal validity:
    -plan ahead
    -be sure procedures incorporate appropriate control techniques


    -use standard techniques (random assignment, constancy of conditions, counterbalancing, etc)
    -be sure the experimental setup created the conditions you wanted
            -manipulation check: verifies how successfully the experimenter produced the situation he or she intended to produce; ex: making sure subjects felt sad after watching the sad film, or checking that the low-anxiety condition really had less anxious subjects than the high-anxiety condition

Informal interviews (or written questionnaires): given after the experiment to ask subjects their thoughts and feelings toward the experiment; the researcher tries to get a sense of whether subjects guessed the hypothesis
    -pact of ignorance: between subject and researcher; subjects may be aware that their data will be discarded if they guess the right hypothesis, so they don't reveal that they know -OR- experimenters may not ask subjects too many questions so that all of their data is usable

statistical conclusion validity: the validity of drawing conclusions about a treatment effect from the statistical results that were obtained; be certain that any inferences made about the relationship between the IV and the DV have statistical conclusion validity

    -if the chance of a Type I error has increased, statistical conclusion validity has been lowered
    -if there is a barely statistically significant effect with a large sample, be cautious with conclusions
    -findings can be statistically significant without being meaningful; ex: with a barely significant result, don't make sweeping conclusions

*Taking a Broader Perspective: the problem of external validity
-making sure your results are generalizable/fit in other situations besides your experimental setting
-not an either/or matter; it is a continuum
-some experiments are more externally valid than others
-use induction to make generalizations based on a specific set of findings
-accuracy of generalizations depends on factors that affect the external validity of an experiment

externally valid experiments must:
    -have internal validity; demonstrate a cause-and-effect relationship free of confounding
    -be replicated by other researchers

Generalizing Across Subjects:
-experiments may have different outcomes when they are run on different samples, so try to get samples from the population you are discussing
-practical problems prevent getting truly random samples; there is usually bias in the way human subjects are chosen because the subjects are typically volunteers
-trouble finding subjects will lead to trouble finding samples that are typical of a population and will limit external validity
-if subject variables are likely to affect the results, treat them as an IV

Generalizing from Procedures to Concepts: Research Significance

-attempts to generalize across procedures raise issues that are hard to resolve; ex: variables like anxiety that can have multiple operational definitions
-risky to generalize findings of a single experiment using induction
-when formulating conclusions, you move farther away from actual observations, which is a risk
-findings have some degree of generality if they are consistent with prior research studying the same variables
    -such findings have "research significance"; ex: are the findings consistent with prior findings?
-contradictions between your study and prior studies could be caused by small differences in operational definitions; these may lead to new studies which build more theories
-this is all part of the process of theory building

Generalizing beyond the lab:
-the lab is the most precise tool for measuring the effect of an IV as it varies under controlled conditions, but this causes a problem for real-life situations that cannot be perfectly controlled

Increasing External Validity:
    -Aggregation: grouping together and averaging data that are gathered in various ways; similar to meta-analysis; four types of aggregation:
            1. aggregation over subjects: using larger samples to better generalize to the entire population


            2. aggregation over stimuli or situations: stimuli/contexts must be sampled as effectively as the subjects; ex: using 20 colors does not let you draw conclusions about 7,500 different colors
            3. aggregation over trials or occasions: using many trials and combining multiple testing sessions minimizes the effects associated with specific trials; helps in cases where the researcher unknowingly gives a cue about the right answer during one trial
            4. aggregation over measures: use multiple measuring procedures; measuring in more than one way can offset the errors made when using just one instrument
    -Multivariate Designs: deal with multiple DVs; let us look at many DVs in combination
            -multivariate analysis of variance (MANOVA): can measure the effects of IVs as they affect sets of DVs; tests the effects of the IVs on the whole set of measures at once; more representative of reality
    -Nonreactive Measurements: minimize reactivity in an experiment; don't give subjects unnecessary cues; control demand characteristics; use single- or double-blind experiments; control for personality and social variables
    -Unobtrusive measures: not influenced by subjects' reactions; researchers have procedures for measuring subjects' behavior without them knowing they are being measured; many depend on physical aspects
    -Field experiments: the most obvious way of dealing with external validity problems is to take the experiment out of the lab; meets the basic requirements of an experiment (manipulating antecedent conditions and observing effects on the DV) but in a natural setting; one problem is having little control over the choice of participants, so the sample may not be random; also more difficulty identifying subject variables; also used to validate lab findings
    -Naturalistic Observation: a descriptive, nonexperimental method of observing behaviors as they occur spontaneously in natural settings

Handling a Nonsignificant Outcome:
-asks, why didn't things go as expected?
    -Faulty procedures: look out for confounding; numerous uncontrolled variables increasing the amount of variability between subjects' scores; unreliable measuring instrument; IV with a weak effect/not a powerful enough manipulation; sample may not be large enough; manipulation was inadequate (the IV might be powerful if the treatment levels were defined better)
    -Faulty hypothesis: reasoning got confused; key factors from prior studies were overlooked; draw conclusions cautiously, even from statistically significant results

CHAPTER 16: WRITING THE RESEARCH REPORT

Research Reports: introduction, method section, results section, and a discussion
    -also have an abstract (summary) and a list of references

Primary purpose of a written report is communication; telling others what we found and did

written in scientific writing style:
    -goal is to present objective information; avoid seeming opinionated about your topic
    -parsimonious: the author gives complete information in as few words as possible
    -use unbiased language, free of gender and ethnic bias (use the preferred term when talking about race, like "White" or "Black")
    -use nonsexist language; "he or she", "all people" instead of "mankind"
    -avoid language with negative overtones; do not say "homosexuals"; say "persons diagnosed with mental illness" instead of "mentally ill person"

Major Components of a Research Report:

-Title: gives readers an idea of what the report is about; include both the IV and the DV; 10-12 words is recommended
-Abstract: summary of the report; about 120 words; written in past tense; should contain a statement of the problem studied, the method, the results, and the conclusions; leave out citations unless replicating a published experiment; include what kinds of subjects and the experiment's design; easier to write after the entire report is done
-Introduction: what you are doing and why; states the hypothesis and how you will test it
-Method: how you went about doing the experiment; should be detailed enough to be replicated; subsections include:


    -participants: "subjects" or "sample"; How many participants? Relevant characteristics (age, sex, species, etc)? How were they recruited? Were they compensated?; characteristics of subjects are crucial to assess external validity
    -materials ("apparatus" if the experiment relies on mechanical or electronic equipment): description of the equipment and materials used; if you built your own equipment, show figures and give details; use standard metric units (centimeters and meters) to describe physical information about equipment
    -procedure: clear description of all the procedures followed; explain experimental manipulations and procedures used for controlling extraneous variables; easiest to report everything in chronological order (step by step)
-Results: what statistical procedures you used and what you found; do not report individual scores unless it was a small n design
-Discussion: used to evaluate your experiment and interpret the results; explain what was accomplished (Was your hypothesis supported? How do the findings fit in with prior research in the area? etc.); explain what you think your results mean; confounds or potential problems should be reported
-References: at the end; lists any articles or books mentioned

running head: an abbreviated title (max 50 characters) printed above pages of articles to help readers identify it in a journal containing other articles

Title page: title of the experiment, name, and affiliation; show a running head in capital letters above the title

number pages in the top right; create a page header containing the first few words of your title

type body in block form (no paragraph indent) and subheadings go in italics.
