Learning Outcomes Assessment: A Step-by-Step Approach
2012 Assessment Institute
Pre-Institute Workshop
Sunday, October 28, 2012
9:00 a.m. – 4:30 p.m.
Compose appropriate course, program, and institution level learning outcomes
Apply S.M.A.R.T.A. criteria to outcome statements
Explain a menu of indirect and direct assessment methods and measures
Check for alignment and apply M.A.T.U.R.E. criteria to assessment measures
Identify appropriate sampling and data collection techniques
Analyze results and identify ways to use the assessment results to improve teaching and learning
Workshop Outcomes
Writing Appropriate Learning Outcomes
Module I
Dr. Laurie Gach
Differentiate between college learning outcomes, program outcomes, and course outcomes
Apply S.M.A.R.T.A. criteria to program learning outcome statements
Apply ABCD criteria to student learning outcome statements
Compose appropriate student learning outcome statements
Contrast levels of student learning outcomes
Module Objectives
Learning outcomes tell students what we expect them to be able to do.
Learning outcomes guide faculty in planning the instruction and assessment of student learning
Learning outcomes provide a standard against which student work can be assessed, so faculty can measure how well each student has met the outcome
Learning outcomes provide faculty with valuable information about how to improve student learning and classroom instruction
Why Learning Outcomes Matter
Institutional Outcomes (Gen Ed)
Program/Discipline Outcomes
Course Outcomes
Alignment
College Learning Outcome: Solve problems using critical and creative thinking and scientific reasoning.
Program Outcome (Political Science): Evaluate political issues and potential solutions from different theoretical perspectives.
Course Outcome (Intro to US Govt.): Describe the many points of view, including libertarian, anarchist, conservative, liberal, socialist, and autocratic, held by the American people.
Levels of Outcomes
The student will…
Focus on the student, not the professor
Focus on the learning that results from the course or program/discipline
Integrate the knowledge, skills, and attitudes essential to the discipline/program
Help students develop real-world, lifelong competency
Huba and Freed, Learner-Centered Assessment on College Campuses, 2000
Effective Learning Outcomes
SUBJECT: The Student will…
VERB: See Webb’s Depth of Knowledge or Bloom’s Taxonomy for verbs, e.g., demonstrate, create, and so on.
OBJECT: The skill or content knowledge to be demonstrated (see course outcomes for ideas)
Learning Outcomes Simplified
Webb’s Depth of Knowledge
Bloom’s Taxonomy
A – Audience (The student will...)
B – Behavior (The verb that describes what the audience will be able to do after the lesson is taught.)
C – Condition (The circumstances under which the outcome will be carried out.)
D – Degree (The criterion of acceptable performance: how well the learner must perform for the performance to be considered acceptable.)
(Heinich et al., 1996)
ABCDs of Writing Outcomes
Example 1- Given a set of common rocks and minerals, the student enrolled in the Mineralogy and Petrology courses will provide the name and classification of each and explain how/where each typically occurs with high accuracy.
Example 2- University Center visitors that use dining services will be highly satisfied with the condition of the physical facilities.
Example 3- Resident assistants participating in Resident Assistant training will recognize most policy violations.
Adapted from Rebecca Lewis, University of Texas at Arlington
ABCD Writing Outcomes Model
•Specific: clearly stated and observable
•Measurable: directly or indirectly
•Attainable: feasible to accomplish
•Results-oriented: produces a measurable end product
•Time-bound: sets a realistic expectation for completion
•Aligned: with college-wide, program, and course outcomes
S.M.A.R.T.A. Criteria
Adapted from a presentation by Paula Krist, Ph.D., at MDC, 2009
Do these outcomes meet the S.M.A.R.T.A. criteria?
1. A commitment to life-long learning to enhance critical thinking and professional values and competence in clinical psychology.
2. The student will be aware of the impact of major communication events in society.
3. Gain an appreciation for classical music and its historical connotations.
S.M.A.R.T.A. Scenario
1. By the end of the clinical psychology program, the student will be able to determine what theories and research shaped his/her professional values.
2. By the end of the course, the student will demonstrate an awareness of the impact of major 21st-century communication events.
3. By the end of the workshop, the student will identify the characteristics and historical connotations of classical music.
S.M.A.R.T.A. Scenario Answers
Associate in Arts: one program drawing on many disciplines
• Disciplines are components within the AA program
• Each discipline identifies/develops student learning outcomes
Associate in Science: discrete, workforce-motivated programs/certifications
EAP (English for Academic Purposes)
College Prep (Reading, Writing)
Baccalaureate degree programs
MDC Defines a Program as
Communication
Quantitative
Critical / Creative Thinking & Scientific Reasoning
Information Literacy
Personal, Civic, Social Responsibility
Ethical Thinking
Computer and Technology Use
Aesthetic Appreciation
Natural Systems and Environment
Global Perspectives
Sources of Outcomes
[Diagram: several discipline-specific outcome sets feed into a single PROGRAM]
Susan Hatfield, Ph.D., presentation to MDC, 2010
Measures and Methods
Module II
Dr. Miriam Frances Abety
Differentiate between direct and indirect measures
Gain competence in developing measures to assess student learning
Differentiate between formative and summative assessments
Apply the "MATURE" criteria to measures
Workshop Objectives:
Measures (or instruments) are the specific tools or means of collecting the desired information; measures can be direct or indirect (e.g., an exam or a survey)
Methods refer to how the outcome is assessed; the method describes generally how the information or data will be collected (e.g., use of a rubric)
Measures and Methods
Direct measures gauge the student's performance firsthand.
Some examples include:
Standardized tests
Case studies
Capstone projects
Internships/practicums
Theses
Exhibitions
What are Direct Measures?
Self-reports
Course-embedded questions and assignments
Pre-and post-test evaluations
Portfolios
Activity logs
Video and audio taped presentations
Direct Measures (cont.)
Indirect measures gather reflections about learning or secondary evidence that learning occurred.
Examples include:
Surveys
Interviews
Questionnaires
Information from graduates
Information from employers
What are Indirect Measures?
Analysis of student transcripts, program curriculum, and course syllabi
Graduation rates
Transfer rates and persistence
Existing Reports
Indirect Measures (cont.)
Formative Assessments
• Conducted during the course or program
• Often used for improvement
• Students receive feedback
• Changes made affect current students in the course or program
• Most student-centered
• Often low stakes
• Examples: submission of a paper draft for feedback; a minute paper
Summative Assessments
• Conducted at the end of a course or at completion of a program
• Often used for accountability
• Feedback is limited or students may not receive feedback
• Changes made affect subsequent students in the course or program
• Least student-centered
• Often high stakes
• Examples: standardized tests; a mid-term or final; a capstone project
Formative vs. Summative Assessments
Rubrics help both the student and the assessor focus on what is important in the task.
Rubrics should:
• Describe the performance the rubric is designed to evaluate
• Define varying levels of performance against the objectives of the task
Common rubric types (a brief scoring sketch follows this slide):
• Analytic: provides specific feedback about several categories
• Holistic: provides a single rating or score based on an overall impression of a student's performance on a task
• Checklist: determines whether or not the task was completed
Popular Method for Assessment
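To make the analytic/holistic distinction concrete, here is a minimal Python sketch; the rubric categories, descriptors, and four-point scale are hypothetical illustrations, not part of the workshop materials.

# A minimal sketch, assuming hypothetical categories and a 4-point scale,
# of how an analytic rubric (feedback per category) differs from a
# holistic score (one overall rating).
ANALYTIC_RUBRIC = {
    "thesis":       {1: "missing", 2: "unclear", 3: "clear", 4: "compelling"},
    "evidence":     {1: "none", 2: "thin", 3: "adequate", 4: "strong"},
    "organization": {1: "disorganized", 2: "uneven", 3: "logical", 4: "seamless"},
}

def analytic_feedback(ratings):
    """Return the descriptor for each category (specific feedback)."""
    return {cat: ANALYTIC_RUBRIC[cat][level] for cat, level in ratings.items()}

def holistic_score(ratings):
    """Collapse the category ratings into a single overall score."""
    return sum(ratings.values()) / len(ratings)

student = {"thesis": 3, "evidence": 2, "organization": 4}
print(analytic_feedback(student))  # {'thesis': 'clear', 'evidence': 'thin', 'organization': 'seamless'}
print(holistic_score(student))     # 3.0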
The "MATURE" criteria allows for congruence between the assessment method and the outcome by:
M=Matches: Explains how the method is directly related to the outcome.
A=Appropriate: Justifies the appropriateness of the methods.
T=Targets: Identifies the desired level of performance.
U=Useful: Illustrates how you will use the method to improve student learning.
R=Reliable: Measures whether the method is trustworthy and consistent in its results.
E=Effective: Provides results that demonstrate the effectiveness of the method used.
The M.A.T.U.R.E. Criteria
Student learning outcomes should be aligned with institutional, program, and course level learning outcomes (an alignment-map sketch follows this slide):
1. Begin by developing your learning outcomes.
2. Select program outcomes that may be gleaned from the institution or that may be course-specific.
3. Map the selected course to ensure the course outcomes align with the selected learning outcomes.
Aligning Learning Outcomes
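As a concrete illustration of step 3, here is a minimal Python sketch of an alignment map; it reuses the political-science example from the Levels of Outcomes slide, and the dictionary structure is an assumption for illustration only.

# A sketch of an outcome-alignment map: each course outcome points to the
# program outcome it supports, and each program outcome points to a
# college-wide outcome (example text from the Levels of Outcomes slide).
course_to_program = {
    "Describe points of view held by the American people":
        "Evaluate political issues from different theoretical perspectives",
}
program_to_college = {
    "Evaluate political issues from different theoretical perspectives":
        "Solve problems using critical and creative thinking and scientific reasoning",
}

# Walk the map to confirm each course outcome aligns all the way up.
for course_outcome, program_outcome in course_to_program.items():
    college_outcome = program_to_college.get(program_outcome)
    status = "aligned" if college_outcome else "NOT aligned"
    print(course_outcome, "->", program_outcome, "->", college_outcome, ":", status)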
1. Break into groups of 4-5 members each.
2. Select a writer (to jot notes) and a reporter (to present what you accomplished to the other participants in the room).
3. Select two learning outcomes.
4. Brainstorm ideas of an assessment task you could develop to "directly" measure these two learning outcomes.
5. For example, the BP oil spill affected the Gulf of Mexico off the coast of Florida. Perhaps we could pose a scenario with a similar oil spill and measure the learning outcomes of "ethics" and "environment".
Workshop Activity
Sampling and Data Collection Module III
John Frederick, PhD
Population is the universe of students/artifacts from which a sample is drawn.
Sample is a subset of individuals or artifacts drawn from the population.
Sampling is the process of selecting subsets of individuals/artifacts from within a population to estimate characteristics of the whole population.
What is Sampling?
Size
Time
Resources
Available Technology
Measure (Instrument) and Method
Factors Affecting Sampling Choices
Sampling makes sense when you have a large program and not enough time or resources to assess all participants
Why Sample?
Forms of Sampling
Random
Random: A simple random sample is selected so that every individual/artifact in the population has an equal chance of being chosen.
Stratified Random: The population is divided into representative subgroups (strata), and a sample is drawn from each.
Non-random
Convenience: Selecting the students you have.
Purposive: Knowing who and what you want, and selecting students based on this.
Snowball: Asking participants to refer others; useful for students or artifacts that are difficult to reach.
A sketch of random versus stratified selection follows below.
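The sketch below contrasts simple random and stratified random selection using only Python's standard library; the student roster and the "section" stratum are hypothetical placeholders.

import random
from collections import defaultdict

# Hypothetical roster: 100 student records spread across three sections.
students = [
    {"id": i, "section": section}
    for i, section in enumerate(["A", "B", "B", "C"] * 25)
]

# Simple random: every student has an equal chance of selection.
simple_sample = random.sample(students, k=20)

# Stratified random: group by section, then sample proportionally from
# each subgroup so every section is represented.
by_section = defaultdict(list)
for s in students:
    by_section[s["section"]].append(s)

stratified_sample = []
for members in by_section.values():
    k = round(len(members) / len(students) * 20)
    stratified_sample.extend(random.sample(members, k))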
Making Data Collection Manageable
Data Collection
Take stock of your existing technology, structures, resources, and conditions that will make data collection manageable.
What technology, resources, structures and conditions do you need to make data collection manageable?
Collecting Data
Scantrons
E-Portfolios
Course Artifacts
Survey Software
Video Capture
Learning Management Systems
Assessment Software
Interpreting and Using Assessment Results
Module IV
Barbara June Rodriguez
Identify difficulties with interpreting results
Review appropriate questions to solicit analysis of results
Identify strategies to enhance teaching and learning
Apply information in hands-on activities
Workshop Objectives
Interpreting Results
Programs need to be aware of typical problems:
1. "Perfect Data" Fallacy: Because most assessment methods have limitations, critics treat those limitations as solid grounds for opposing assessment.
2. “Single Indicator” Fallacy: There is a belief that a single approach exists that can answer all questions.
3. “Face Validity” Problem: This occurs when assessment results are communicated to and used by nontechnical stakeholders. Results must be valid and appear valid.
4. Power of Negative Evidence: Programs fear the impact of negative findings, but findings can induce positive action.
Adapted from Peter T. Ewell
Interpretation Difficulties
Questions that frame how to analyze and summarize assessment results:
Why was the assessment conducted?
Who is the audience for the results?
What are the audience’s needs?
Questions for Interpretation
Common Ways to Summarize Results (a short sketch follows this slide):
Tallies: count how many students earned each rating or chose each option
Percentages: easier to understand than raw numbers
Aggregates: summarize results in overall scores
Averages: summarize the central tendency
Common Ways to Display Results:
Tables can summarize tallies succinctly
Line graphs can summarize ordered or scaled results
Bar graphs can summarize virtually any type of assessment results
Adapted from Linda Suskie
Summarizing Results
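As a concrete illustration of the summary techniques above, here is a minimal Python sketch that computes tallies, percentages, and an average from a set of rubric ratings; the ratings themselves are made-up placeholders.

from collections import Counter
from statistics import mean

ratings = [4, 3, 3, 2, 4, 1, 3, 4, 2, 3]  # one rubric score per student

tallies = Counter(ratings)  # how many students earned each rating
percentages = {
    score: round(count / len(ratings) * 100, 1)
    for score, count in tallies.items()
}
average = mean(ratings)  # central tendency

print(tallies)      # Counter({3: 4, 4: 3, 2: 2, 1: 1})
print(percentages)  # {4: 30.0, 3: 40.0, 2: 20.0, 1: 10.0}
print(average)      # 2.9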
Using Assessment Results
Use of data from a direct measure will answer:
1. What does the student know versus what the program intends the student to know? (Cognitive)
2. What can the student do versus what the program expects the student to be able to do? (Skills)
3. What does the student care about versus what the program intends the student to care about? (Affective)
Using Direct Measure Results
Use of data from an indirect measure will answer:
1. What does the student report she knows versus what the program thought the student’s perception would be? (Cognitive)
2. What does the student report that he can do versus what the program intended for the student to be able to do? (Skills)
3. How does the student respond to questions dealing with program impact on the student’s values versus how the program intended to impact the student’s values? (Affective)
Using Indirect Measure Results
Assign grades and give feedback to students
Improve what is being done:
Curricula
Teaching
Support programs and infrastructure
Make sure quality is not slipping
Share the story of our success with key stakeholders (accountability)
Adapted from Linda Suskie
Uses of Assessment Results
Based on the results, programs may:
Address gaps in curriculum
Vary teaching strategies and methods
Review course(s) learning outcomes
Increase student exposure to the area addressed in the outcome
Revise the assessment process
Offer professional training and development
Closing the Loop
Interpretations and Use of Assessment Results Should:
Indicate how the program will use what it has learned about the assessment process or the learning outcome of interest
Provide a timetable for implementing changes and then following up to see if the change had the intended effect
Describe why the changes will lead to improvements in student learning or the assessment process
Describe the program’s focus for the next assessment cycle
Adapted from Peggy Maki
Recap
For Further Information
Miriam Frances Abety
(305) 237-6564
John Frederick
(305) 237-7068
Laurie Gach
(305) 237-2451
Barbara June Rodriguez
(305) 237-7481