Effective Assessment in Competency-Based Education (CBE)
CACTE Summit | Breckenridge, CO | July 2018
Brian Tinker, EdD, MFA
Professor, Digital Media | Graphic Design
Degree Program Director
Isaacson School of Communication, Arts & Media
Colorado Mountain College
The Digital Media and Graphic Design degree programs at the
Isaacson School are largely based on CBE, as is common in CTE
programs nationwide.
The topic of the presenter's doctoral dissertation was CBE assessment, and he has been the instructional designer for multiple CBE courses at Colorado Mountain College, The Art Institutes, and American Public University, including design for face-to-face, hybrid, and online courses.
Clarifying the Lexicon
Competency-based education (competency-based learning) refers to systems of instruction, assessment, grading, and academic reporting that are based on students demonstrating that they have learned the knowledge and skills they are expected to learn as they progress through their education.
In most public schools, CBE uses state, district, and individual
school learning standards to determine academic expectations and
define “competency” or “proficiency” in a given course, subject
area, or grade level.
The general goal of CBE is to ensure students are acquiring the
knowledge and skills that are deemed to be essential to success in
school, higher education, careers, and adult life.
If students fail to meet expected learning standards, they
typically receive additional instruction, practice time, and
academic support to help them achieve competency or meet the
expected standards.
Defining CBE is complicated by the fact that educators not only use a wide variety of terms for the general approach, but the terms may or may not be used synonymously from place to place. Among the more common synonyms are proficiency-, outcome-, mastery-, performance-, and standards-based education.
In practice, CBE can take a wide variety of forms from state to state or school to school — there is no single model or universally used approach. While schools often create their own CBE systems, they may also use systems, models, or strategies created by state education agencies or outside educational organizations. CBE is more widely used at the elementary level, although more middle schools and high schools are adopting the approach. As with any educational strategy, some CBE systems may be better designed or more effective than others.
Recently, the term CBE (and related synonyms) has become widely
used by:
• Online schools or companies selling online learning
programs,
• Colleges and universities, particularly those offering online
degree programs.
“CBE” as it is typically designed and implemented in K–12 public
schools can differ significantly from the forms of “CBE” being
offered and promoted by these other entities.
At the collegiate level, for example, CBE may entail prospective
adult students receiving academic credit for knowledge and skills
they acquired in their former careers — “credit for prior
learning”. When exploring CBE, it is important to determine
precisely how the terms are being used in a specific context.
Why CBE?
One challenge with traditional grading is that averages mask areas of misunderstanding: students can receive passing grades but lack skills that will be needed later in their coursework (or after graduation).
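A quick arithmetic sketch (with invented unit scores, not data from the source) shows how an average can hide a failed competency:

    # Hypothetical unit scores: unit_3 is clearly not mastered,
    # yet the course average still earns a passing grade.
    unit_scores = {"unit_1": 95, "unit_2": 90, "unit_3": 40}

    average = sum(unit_scores.values()) / len(unit_scores)
    print(f"course average: {average:.0f}%")                  # 75% -- a passing grade
    print(f"unit_3 mastered: {unit_scores['unit_3'] >= 70}")  # False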
Traditional grading also produces high levels of variability in
what it means to be proficient.
Within schools, there is variability between teachers, who each use their own grading system, weighting students' performance on assignments and tests, and even their behavior, in their own unique way.
Within districts and across states, some schools have much lower
expectations, and students learn at much lower achievement levels
than others.
The high variability results in credits having questionable meaning: the high school diploma is largely meaningless in terms of indicating what students know and are able to do, and the GPA, considered a predictor of college success, is at best an indicator of work habits and the ability to navigate the school environment rather than of what students know and can do.
Efficient Delivery of Curriculum vs. Actual Learning
Most of us have the shared experience of traditional education: desks in straight rows; quiet when the bell rings; following the teacher's instructions; trying to stay focused during 50 minutes of lecture; studying (often pure memorization) for quizzes and tests throughout the semester.
The traditional school has been designed for teachers to
efficiently deliver the curriculum and assess students. However,
actual learning is often a messy process. Students bring different
skills, interests, and life experiences to the classroom. They have
misconceptions, they make mistakes, and they can become frustrated
or disengage.
Ideally, instruction is designed based on what we know about child
development and learning.
Education systems should focus on effectiveness, not efficiency, taking into consideration research on learning, engagement, and motivation.
CBE isn't a panacea, but it does offer some reasonable opportunities for improved learning effectiveness.
CBE Assessment Design in Practice
CBE assessment can take a variety of formats:
• Objectively scored assessments (for example, multiple-choice or true-false questions)
• Performance-based assessments (for example, essays, group projects, or simulated environments)
• Observations
Regardless of format, the credibility of inferences drawn from assessment results depends on evidence of their validity. This makes careful creation of assessments crucial to CBE success.
Imagine a test developed to measure a student's knowledge of public-health laws, regulations, and policies.
Students with higher scores should exhibit a greater level of knowledge of public-health concepts. Their level of knowledge, as evidenced by their test scores, could be used to determine whether they are awarded competency credits in this area and, by extension, whether they are prepared for future endeavors in public health.
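As a minimal sketch (the cut score and student scores below are assumptions for illustration, not from the source), the competency-credit decision reduces to comparing each score against a predefined cut score:

    # Award competency credit when a score meets the cut score;
    # otherwise route the student to additional instruction and support.
    CUT_SCORE = 80  # assumed threshold on a 100-point scale

    scores = {"student_a": 91, "student_b": 74, "student_c": 83}
    for student, score in scores.items():
        decision = "award credit" if score >= CUT_SCORE else "additional instruction"
        print(f"{student}: {score} -> {decision}")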
Interpreting test scores may seem intuitive, but scientifically valid inferences from assessment results rest on specific assumptions.
For example, can it be demonstrated that knowledge of public-health laws, regulations, and policies — and not some other trait — explains test score variability? Moreover, do higher scores relate to higher levels of subsequent job performance?
Validation is the process of accumulating evidence to answer these
fundamental questions.
According to the Standards for Educational and Psychological
Testing (2014), validity evidence can come from five sources:
1) Test content
2) Response processes
3) Internal test structure
4) Relations to other variables
5) Test consequences
Sources 1–3 generally reflect the test instrument itself, whereas 4 and 5 rely on data external to the assessment.
Although not all sources of validity evidence may be present for
every assessment, a stronger validity argument is made by
integrating evidence from multiple sources.
For example, while it is important to show that a CBE assessment tests the knowledge and skills associated with the specified competency (evidence based on test content), it is just as important to show that students who score higher on the assessment also do well on other tasks, such as job performance, that require that competency (evidence based on relations to other variables).
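A minimal sketch of that second kind of evidence (the numbers are invented; a real validation study would use actual score and performance data) correlates assessment scores with a later measure of job performance for the same students:

    # Requires Python 3.10+ for statistics.correlation (Pearson's r).
    from statistics import correlation

    assessment_scores = [72, 85, 90, 64, 78, 95]      # assumed CBE test scores
    job_performance = [3.1, 4.0, 4.4, 2.8, 3.5, 4.7]  # assumed supervisor ratings

    r = correlation(assessment_scores, job_performance)
    print(f"Pearson r = {r:.2f}")  # a strong positive r supports the validity argument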
Using all five sources of validity evidence is especially valuable when implementing initiatives that are controversial or otherwise likely to attract skepticism.
CBE Assessment Exemplars
Objectively scored assessments
One misconception is that CBE eschews the traditional assessment method of written tests. Many categories of CBE can be effectively assessed using objectively scored assessments, the distinction being the avoidance of measuring rote memory.
Answers should demonstrate student understanding of concepts, processes, and methods. This does not mean that students aren't expected to recall factual minutiae. It means that those facts should be crucial to understanding the larger issues.
For example, questions intended to confirm understanding of the causes of the U.S. Civil War would not require recalling specific dates beyond the very general (say, the fact that it occurred in the mid-19th century, between the Mexican-American War and the Indian Wars). However, questions intended to confirm understanding (including the broader implications) of the Battle of Gettysburg would expect students to know more specific dates.
Performance-based assessments
Performance-based assessment (PBA) is commonplace in CBE. Examples of PBA include portfolios, essays, presentations, and models. Typically, PBA challenges students to use their higher-order thinking skills to create a product or complete a process.
Arguably, the most genuine assessments require students to complete
a task that closely mirrors the responsibilities of a professional
in practice (artist, engineer, laboratory technician, financial
analyst, agriculturalist, etc.).
Although PBA tools vary greatly, the majority of them share six
characteristics:
1) Accurately measures one or more specific course standards
2) Complex
3) Authentic
4) Process/product-oriented
5) Open-ended
6) Time-bound
Often, students are presented with an open-ended question that may produce several different correct outcomes (Chun, 2010; McTighe, 2015). In the higher-level tasks, there is a sense of urgency for the product to be developed or the process to be determined, as in most real-world situations.
PBA Example I
The backward design process plan (Reader's Digest® version) for a high school math PBA for a unit on probability:
Step 1. Identify goals of the PBA:
• Challenge students to use critical thinking and problem-solving skills
• Exhibit less codependence and more individuality while completing the assessment
• Complete each step autonomously, without assistance from the instructor
Step 2. Select the appropriate course standards: The Common Core standards addressed by this PBA: understanding of conditional probability and the rules of probability.
Step 3. Review assessments and identify learning gaps: Considered
current worksheets students were completing for the unit. Two-way
frequency tables were a large component. Identified that plan was
missing a relevant real-world application, so created a
reality-based PBA requiring students to analyze two-way frequency
tables along with other charts and graphs.
Step 4. Design the scenario: Settled on a scenario where students must decide whether an inmate should be granted parole or remain in prison. Scenario components:
• Setting
• Role
• Audience
• Time frame
• Product
Step 5. Gather or create materials: Depending on the scenario, this step may not be needed. For the scenario to calculate the probability of inmate recidivism, statistics from government agencies, including the Federal Bureau of Prisons and the Bureau of Justice Statistics, were provided.
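For a sense of the math at the heart of the task (the counts below are invented for illustration, not actual government statistics), the students' core move is reading a conditional probability out of a two-way frequency table:

    # Two-way frequency table (hypothetical counts):
    #                reoffended   no reoffense
    # nonviolent         30           170
    # violent            45            55
    table = {
        ("nonviolent", "reoffended"): 30,
        ("nonviolent", "no_reoffense"): 170,
        ("violent", "reoffended"): 45,
        ("violent", "no_reoffense"): 55,
    }

    nonviolent_total = table[("nonviolent", "reoffended")] + table[("nonviolent", "no_reoffense")]
    p = table[("nonviolent", "reoffended")] / nonviolent_total
    print(f"P(reoffend | nonviolent) = {p:.2f}")  # 30 / 200 = 0.15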
Step 6. Develop a learning plan:
• Avoid "teach to the test" by striking a balance between teaching the content (probability given two independent events) and preparing students for the task (interpreting the validity of a media resource)
• Six formative PBAs
• Summative PBA: Parole Board Hearing
Scenario: "Ashley" is serving three to five years for embezzlement and assault. After three years, she is up for parole.
Task: You are Ashley's former probation officer. You have been asked to review documents and present your opinion about a potential early release for Ashley at a parole board hearing. You've been granted three to five minutes. Your presentation must be short but detailed, with strong evidence to support your decision.
Documents for review:
• Criminal history report
• Article announcing a new web series on embezzlement
• Blog post about nurseries in prisons
• Letter to the parole board from the inmate’s mother and son
• Newsletter about the incarceration rates in the state
• Press release about a prison-work program
• Research brief on recidivism rates of nonviolent offenders
PBA Example II
The backward design process plan for a unit on proficiency with the Adobe Photoshop® software application.
Step 1. Identify goals of the PBA:
• Confirm proficiency with selective image adjustments
• Confirm baseline proficiency with image compositing
Step 2. Select the appropriate course standards:
• Entry-level professional standard
Step 3. Review assessments and identify learning gaps: Proficiencies can be clearly and simply demonstrated with visual artifacts (a Photoshop computer file). No gaps identified.
Step 4. Design the scenario: Students will be provided with three different images:
• Seacoast background scene
• Figure of a woman holding a cell phone
• Sailboat
Students must combine the three images into a single realistic image portraying the woman in the foreground and the sailboat in the sea off the coast.
Step 5. Gather or create materials: The three source images described in Step 4.
Step 6. Develop a learning plan: Five formative PBAs:
• Navigating the workspace
• Non-selective image adjustment
• Achieving effective selections
• QuickMask mode
• Selective color replacement
Summative PBA with rubric
Grading Rubric (five categories, 1–5 points each):

Shadows
5 pts: Shadows matched to professional standard.
4 pts: Shadows matched to nearly professional standard.
3 pts: Shadows matched with obvious, but non-fatal flaws.
2 pts: Shadows matched with significant flaws and/or largely fails to approach professional standard.
1 pt: Shadows not matched.

Layers
5 pts: Layers utilized to professional standard.
4 pts: Layers utilized to nearly professional standard.
3 pts: Layers utilized with obvious, but non-fatal flaws.
2 pts: Layers utilized with significant flaws and/or largely fails to approach professional standard.
1 pt: Layers not utilized.

Impression of Reality
5 pts: Composite image achieves the impression of reality to a professional standard.
4 pts: Composite image achieves the impression of reality to nearly professional standard.
3 pts: Composite image achieves the impression of reality with obvious, but non-fatal flaws.
2 pts: Composite image achieves the impression of reality with significant flaws and/or largely fails to approach professional standard.
1 pt: Composite image fails to achieve the impression of reality.

Process Statement
5 pts: Statement unequivocally communicates the intended message without grammatical or spelling errors or missing content.
4 pts: Statement communicates the intended message with no more than two (2) grammatical or spelling errors, and/or no more than one (1) missing content element.
3 pts: Statement largely communicates the intended message, and contains no more than four (4) grammatical or spelling errors, and/or no more than two (2) missing content elements.
2 pts: Statement evidences significant ambiguities and/or more than four (4) grammatical or spelling errors, and/or more than two (2) missing content elements.
1 pt: Statement largely fails to communicate effectively and/or contains more than six (6) grammatical or spelling errors, and/or more than … missing content elements.

Compliance
5 pts: Student followed all project instructions accurately, including size, content, and inclusion limitations.
4 pts: Student followed all project instructions accurately, including size, content, and inclusion limitations, with no more than a single minor infraction.
3 pts: Student followed many of the project instructions accurately, but failed to comply in more than two instances.
2 pts: Student largely failed to follow the project instructions accurately (more than three instances).
1 pt: Student failed to follow the project instructions accurately.
25 pts = 100%  24 pts = 96%  23 pts = 92%  22 pts = 88%  21 pts = 84%
20 pts = 80%  19 pts = 76%  18 pts = 72%  17 pts = 68%  16 pts = 64%  15 pts = 60%
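The conversion is simple enough to automate; a minimal sketch, assuming five categories at one to five points each (25 points maximum):

    # Percent is points / 25 * 100, so each rubric point is worth 4%.
    MAX_POINTS = 25

    def to_percent(points: int) -> int:
        return round(points / MAX_POINTS * 100)

    for pts in range(25, 14, -1):  # reproduces the 25-to-15 point scale above
        print(f"{pts} pts = {to_percent(pts)}%")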
Observations From a Decade in CBE
• Employ backward design. Where you want students to end up drives everything else
• Use rubrics. This makes it easier to convert results into data, which is necessary to establish validity (see the sketch after this list)
• Formative assessments are crucial for student success
• Stress real-world scenarios. Performing well in an academic setting is often a very different challenge than performing well in a professional practice environment. Introducing factors like time, money, and clients will generate relevance
• Employ disruption. "Throwing a grenade" can enable students to shine (and can also reveal unknown deficits)
• Time spent in planning and preparation pays big dividends
• If CBE isn't working for you, you're probably not doing it right
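As a minimal sketch of the rubrics-to-data point above (the student IDs and scores are invented), rubric results recorded as structured records can be exported and summarized to support a validity analysis:

    import csv

    CATEGORIES = ["shadows", "layers", "impression", "statement", "compliance"]
    results = [
        {"student": "s01", "shadows": 5, "layers": 4, "impression": 5, "statement": 4, "compliance": 5},
        {"student": "s02", "shadows": 3, "layers": 3, "impression": 2, "statement": 4, "compliance": 5},
    ]

    # Export one row per student for later analysis.
    with open("rubric_scores.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["student"] + CATEGORIES)
        writer.writeheader()
        writer.writerows(results)

    # Per-category means make class-wide weak spots visible.
    for cat in CATEGORIES:
        mean = sum(r[cat] for r in results) / len(results)
        print(f"{cat}: mean {mean:.1f}")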
References
Chun, M. (2010, March). Taking teaching to (performance) task: Linking pedagogical and assessment practices. Change: The Magazine of Higher Learning.
Darling-Hammond, L., & Adamson, F. (2013). Developing assessments of deeper learning: The costs and benefits of using tests that help students learn.
McTighe, J. (2015, April). What is a performance task?
Palm, T. (2008). Performance assessment and authentic assessment: A conceptual analysis of the literature. Practical Assessment, Research & Evaluation, 13(4).
McClarty, K. L., & Gaertner, M. N. (2015). Measuring mastery: Best practices for assessment in competency-based education. American Enterprise Institute.
American Educational Research Association. (2014). Standards for educational and psychological testing. Washington, DC: AERA.
Fin