
2011

David Santamaria

[College of Sound & Music Production]

Certificate IV in Training and Assessment (TAE40110)

Module 4 – Assessment Design

TAE40110 Online Learning Program – Reading Program

Each week lists its topic, assessment task, and numbered learning resources.

Week 1 – What is Vocational Training (who is involved; what are the responsibilities for each body) – Assessment: TAEAS401
Learning resources:
10 Back to basics edition 4.pdf
2 The National Vocational Education & Training System
5 National Quality Council
7 Industry Skills Council
3 The Australian Qualifications Framework
4 The Australian Quality Training Framework
6 National Centre for Vocational Education Research
12 VET Glossary

Week 2 – Components of a Training Package – Assessment: TAEAS402
Learning resources:
14 Training Packages
15 Components of a Training Package
20 Example of Qualification Rules
16 A Unit of Competency
18 Endorsed components of a Unit of Competency
19 Implementation & use of training packages
23 Employability skills

Week 3 – Vocational Training in schools (requirements for Senior Certificate credits; nominal hours) – Assessment: TAEAS403
Learning resources:
8 Vocational Education & Training in Schools (VETiS)
11 The Victorian Purchasing Guide
17 VCAA CUS09 Extract

Week 3 – Developing a learning program
Learning resources:
24 Contextualising and packaging of training packages
25 Learner design
26 Reasonable adjustment
29 The Unit of Competency list

Week 3 – Organisational requirements for a learning program
Learning resources:
27 The learner handbook cover
28 The learner handbook body
30 A student enrolment form
22 Skill recognition
21 Skill sets

Week 4 – Designing Assessment – Assessment: TAEAS404
Learning resources:
34 Guide for developing assessment tools
35 The NQC Assessor guide
31 Designing assessment – critical questions

Week 5 – Developing assessment tools – Assessment: TAEAS406
Learning resources:
38 Assessor instructions
39 Learner instructions
40 Assessment mapping

Week 5 – Assessing student work
Learning resources:
32 Assessment fact sheet 2
33 Assessment fact sheet 4

Week 6 – Assessment Validation – Assessment: TAEAS407
Learning resources:
36 Assessment & moderation
37 Assessment validation
13 Why be a VET teacher

Program delivery
This program is designed for teachers in schools who are not able to attend the preferred program: a 2 day intensive, followed by 6 weeks of project work in the classroom at school, followed by a final 2 day intensive. It should be noted that all teachers who have undertaken our TAE40110 program with a facilitator have found it to be rewarding. This is a significant study requiring much explanation, as the concept of Vocational Training, the process of designing a program and assessment tools, and the process of assessment validation are all very different to what the majority of secondary teachers are used to.

Consequently, wherever possible COSAMP and ACAS promote our standard option of studying with a facilitator as the preferred method. However, it is recognised that due to remoteness or particular circumstances, this may not be the best option for some teachers.

The following program has been designed for teachers whose circumstances preclude them from participating in our standard program. It is organised and presented as closely as possible around the process of delivery with a facilitator. Participants in this program have phone and email support from their online facilitator.

Certificate IV in Training and Assessment
The Certificate IV in Training and Assessment requires 10 Units of Competency: 7 core Units plus 3 elective Units.

The Core Units are:
TAEDES401A Design and develop learning programs
TAEDES402A Use training packages and accredited courses to meet client needs
TAEDEL401A Plan, organise and deliver group-based learning
TAEDEL402A Plan, organise and facilitate learning in the workplace
TAEASS401A Plan assessment activities and processes
TAEASS402A Assess competence
TAEASS403A Participate in assessment validation

The Elective Units COSAMP has selected, which match the requirements of teachers in schools, are:
TAETAS401A Maintain training and assessment information
TAEASS502A Design and develop assessment tools
BSBCMM401A Make a presentation

Module 4 – Assessment Design.

Assessment is the process of collecting evidence and making judgements on whether competency has been achieved, to confirm that an individual can perform to the standard expected in the workplace. The assessment process is used to determine whether people are either ‘competent’ or ‘not yet competent’ against the agreed industry standards. A person who is not yet competent against any standard can undertake further study or training and be assessed again.

Who can conduct assessments?
Only qualified assessors working with a registered training organisation can conduct assessments leading to a national (Australian Qualifications Framework) qualification or statement of attainment. The Australian Quality Training Framework Essential Standards for Registration identify that a person conducting assessment must:
• Have the necessary training and assessment competencies as determined in the National Quality Council Policy on Training and Assessment Competencies to be held by Trainers and Assessors;
• Have the relevant vocational competencies at least to the level being assessed; and
• Continue developing their vocational, and training and assessment competencies.

If a person does not hold both the assessment competencies defined in the National Quality Council policy determination and the relevant vocational competencies at least to the level being assessed, assessments may be conducted jointly: one person with all the listed assessment competencies works together with one or more persons who have the relevant vocational competencies at least to the level being assessed.

In a recent research study (Bateman & Giles, 2010 for NCVER) it was identified that there is an endemic non-compliance problem in the design of assessment tasks throughout Australia. A number of interesting comments emerged from this study:

• Although some teachers will be experienced with validating school-based assessments, this is different to validating assessments against industry standards.

• Assessment should be against industry standards, not curriculum or academic standards (i.e. there is no place for the traditional bell-shaped curve; for example, 100% of your class can achieve competency, or a high level of competency).

• Assessment is criterion referenced: student performance is interpreted by making comparisons directly against pre-established criteria.

How are assessments conducted?
Assessment under the national vocational education and training system is quite different from the formal examinations and tests most people remember from their school days. Evidence is gathered to demonstrate competence in the skills and knowledge required by the units of competency. Common types of assessment methods used by assessors to gather evidence include:
• Answers to questions
• Training records
• Work records
• Samples of work
• Observation
• Demonstrations
• Simulations and role plays

The evidence used in assessment depends on the requirements of the particular units of competency and the preferences or needs of the person being assessed. Each case is unique. The registered training organisation responsible for assessing people’s competencies should devise an assessment plan for each student, apprentice or trainee. Students and, where appropriate, industry should be involved in the development of the assessment plan. The plan should incorporate recognition of prior learning and any reasonable adjustment that may be required. Because work activities draw on the skills described in a number of units of competency, teachers and trainers can utilise holistic assessment methods to assess a range of units simultaneously.

WHAT IS THE PRIMARY PURPOSE OF RPL (Recognition of Prior Learning)?
Assessment for RPL purposes requires an assessor to review candidate evidence and make an assessment decision as to whether there is sufficient, valid, reliable and current evidence to recognise candidate achievement against the units of competency. RPL can therefore be classified as a summative assessment, as its primary purpose is to inform the credentialing process. However, like any well designed assessment tool, the evidence arising from the assessment could also be used to identify the strengths and weaknesses of the candidate, and could therefore also serve a diagnostic function.

For the classroom teacher delivering and assessing VET in Schools (VETiS) as part of the Senior Certificate, the teacher needs to simulate the work environment as much as possible. While individual assessment, RPL and reasonable adjustment are important components of VET assessment, 15–18 year olds may have very little industry experience on which to base such allowances. Consequently, this component of assessment tool design, while still needing to be addressed and evaluated, is of less significance than it would otherwise be for adult students with experience in the relevant industry.

What is meant by assessing a learner in a simulated work environment? Assessments conducted in a simulated work environment must replicate the conditions and outcomes encountered in a workplace, as noted in the unit(s) of competency. In some cases this may refer to the actual physical resources, such as assessing an apprentice changing engine oil in a motor vehicle. In other cases this may refer to the interpersonal dynamics of the workplace, such as assessing a manager’s ability to conduct an interview to review work performance. It may refer to a combination of both.

The assessor will need to fully analyse the unit(s) of competency being assessed and make a judgment regarding the adequacy of a simulated work environment. Content within the evidence guide and the range statement of each unit of competency will help the assessor determine the conditions of a valid simulated work environment.

Assessment can take place on the job or off the job. However, as applying skills in the workplace is a key facet of vocational education and training, most evidence should ideally be gathered as the student performs work duties, whether in the workplace or in the simulated work environment.

Reasonable adjustment
Reasonable adjustment refers to measures or actions taken to provide a student with a disability the same educational opportunities as everyone else. To be reasonable, adjustments must be appropriate for that person, must not create undue hardship for the registered training organisation, and must be allowable within rules defined by the training package.

Engaging in reasonable adjustment activities, such as assisting students to identify their learning needs or offering a wide variety of course options and delivery modes, exemplifies good teaching practice.

The Disability Discrimination Act 1992 makes it unlawful for an education service provider to discriminate against someone because the person has a disability. The Disability Standards for Education 2005 provide greater clarity on areas where reasonable adjustment can be applied. If a person with a disability meets the necessary course entry requirements of a registered training organisation, he or she should have just as much chance to study there as anyone else.

Participants in vocational education and training could have a range of disabilities. In many situations the person with the disability (or guardian) will be able to tell educators what adjustments he or she needs to be able to study. If necessary, educators should also seek advice from government agencies or support organisations to determine what needs to be done to accommodate an individual’s needs. Reasonable adjustment activities could involve:
• Modifying or providing equipment
• Changing assessment procedures
• Changing course delivery
• Modifying premises

The determination of ‘reasonableness’ requires judgement that must take into account the impact on the organisation and the need to maintain the integrity of the unit of competency. Where the qualification outcome is specifically related to an ‘occupational’ outcome, any reasonable adjustment may only be accommodated in the assessment process if the workplace can be similarly ‘adjusted’ to accommodate the needs of the applicant/employee.

What is an Assessment Tool?
An assessment tool is made up of a number of components. The COSAMP method for assessment tool design requires trainers to develop the following:
• An Assessor Guide
• A Learner Guide
• A Mapping matrix
• A Criteria assessment sheet in Total VET

Assessment tool developers need to:
• Choose the assessment methods that reflect the job task(s) embedded in the competency (face validity).
• Specify, for each assessment method, the information that will be given to the student to get them to say, do, write and/or build something (this becomes the tasks to be performed by the candidate).
• Ensure that the tasks developed for each assessment method reflect the scope of the competency. This will ensure that the assessment has content and face validity.
• Ensure that the tasks developed enable sufficient evidence to be collected to determine competence (sufficiency).
• Ensure that there is clear documentation describing how the candidate will be expected to respond to each task. This should include instructions to the candidate on how s/he should build, say, write and/or do something for each task (e.g. what should be included in a portfolio? How should they prepare and present their essay?).
• Ensure that there is clear documentation describing the expected response to the task (for example, answer guides, performance indicators, a description of the attributes of a completed product). This should never be simply a listing of performance criteria.
• Ensure that there are clear instructions as to how the assessor is to judge the evidence and make an overall judgement of competence.

Ideal Characteristics of an Assessment Tool

The context
The target group and purpose of the tool should be described. This should include a description of the background characteristics of the target group that may impact on candidate performance (e.g. literacy and numeracy requirements, workplace experience, age, gender etc.).

Competency Mapping
The components of the Unit(s) of Competency that the tool should cover should be described. This could be as simple as a mapping exercise between the components of the task (e.g. each structured interview question) and components within a Unit or cluster of Units of Competency. The mapping will help to determine the sufficiency of the evidence to be collected.
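The mapping exercise above can be sketched programmatically. This is an illustrative sketch only — the unit elements and assessment item names are invented for the example, not drawn from this program. Each assessment item is mapped to the element(s) it gathers evidence for, and any element with no mapped item is flagged as a sufficiency gap.

```python
# Elements of a hypothetical unit of competency (invented for illustration)
elements = ["1.1 Interpret the brief", "1.2 Select equipment",
            "2.1 Record the session", "2.2 Review the recording"]

# Mapping matrix: assessment item -> elements it covers
mapping = {
    "Interview Q1": ["1.1 Interpret the brief"],
    "Interview Q2": ["1.2 Select equipment"],
    "Observation task A": ["2.1 Record the session"],
}

# Collect every element that at least one assessment item evidences
covered = {e for items in mapping.values() for e in items}

# Any element left uncovered is a sufficiency gap in the tool
gaps = [e for e in elements if e not in covered]
print("Elements with no evidence source:", gaps)
```

Run against this sample mapping, the check flags element 2.2 as having no evidence source, which is exactly the kind of gap the mapping exercise is meant to expose before the tool is used.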

The information to be provided to the candidate
Outlines the task(s) to be provided to the candidate that will provide the opportunity for the candidate to demonstrate the competency. It should prompt them to say, do, write or create something.

The evidence to be collected from the candidate
Provides information on the evidence to be produced by the candidate in response to the task.

Decision making rules
The rules to be used to:
• Check evidence quality (i.e. the rules of evidence);
• Judge how well the candidate performed according to the standard expected (i.e. the evidence criteria); and
• Synthesise evidence from multiple sources to make an overall judgement.
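As a hedged illustration of these decision making rules — the evidence fields and criterion names below are hypothetical, not prescribed by the module or the NQC — the three steps can be sketched as: filter evidence by the rules of evidence, judge what remains against the evidence criteria, and synthesise an overall judgement.

```python
def quality_ok(piece):
    # Rules of evidence: the piece must be valid, sufficient,
    # authentic and current to count at all
    return all(piece[rule] for rule in ("valid", "sufficient", "authentic", "current"))

def overall_judgement(evidence, required_criteria):
    # Step 1: discard evidence that fails the rules of evidence
    usable = [p for p in evidence if quality_ok(p)]
    # Step 2: pool the evidence criteria met by the usable evidence
    met = {c for p in usable for c in p["criteria_met"]}
    # Step 3: synthesise - every required criterion must be covered
    return "competent" if set(required_criteria) <= met else "not yet competent"

evidence = [
    {"valid": True, "sufficient": True, "authentic": True, "current": True,
     "criteria_met": ["safe work practice", "correct technique"]},
    {"valid": True, "sufficient": True, "authentic": False, "current": True,
     "criteria_met": ["communication"]},  # fails authenticity: excluded
]
print(overall_judgement(evidence, ["safe work practice", "correct technique"]))
```

Note how the second piece of evidence is excluded before judgement: a criterion met only by untrustworthy evidence does not count toward competence, which is the point of applying the rules of evidence first.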

Range and conditions
Outlines any restrictions or specific conditions for the assessment, such as the location, time restrictions, assessor qualifications, currency of evidence (e.g. for portfolio based assessments), amount of supervision required to perform the task (which may assist with determining the authenticity of evidence) etc.

Materials/resources required
Describes access to materials, equipment etc. that may be required to perform the task.

Assessor intervention
Defines the amount (if any) of support provided.

Reasonable adjustments (for enhancing fairness and flexibility)
This section should describe the guidelines for making reasonable adjustments to the way in which evidence of performance is gathered (e.g. in terms of the information to be provided to the candidate and the type of evidence to be collected from the candidate) without altering the expected performance standards (as outlined in the decision making rules).

Validity evidence
Evidence of validity (such as face, construct, predictive, concurrent, consequential and content) should be provided to support the use of the assessment evidence for the defined purpose and target group of the tool.

Reliability evidence
If using a performance based task that requires the professional judgement of the assessor, evidence of reliability could include providing evidence of:
• The level of agreement between two different assessors who have assessed the same evidence of performance for a particular candidate (i.e. inter-rater reliability); and
• The level of agreement of the same assessor who has assessed the same evidence of performance of the candidate, but at a different time (i.e. intra-rater reliability).
If using objective test items (e.g. multiple choice tests), then other forms of reliability should be considered, such as the internal consistency of a test (i.e. internal reliability) as well as the equivalence of two alternative assessment tasks (i.e. parallel forms).
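As a worked illustration — the judgement data below are invented, and the text names these reliability types without prescribing a formula — simple percent agreement is one common way to quantify inter-rater and intra-rater reliability:

```python
def percent_agreement(ratings_a, ratings_b):
    # Proportion of candidates on whom the two sets of judgements agree
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Inter-rater: two assessors judging the same ten candidates,
# recorded as "C" (competent) / "NYC" (not yet competent)
assessor_1 = ["C", "C", "NYC", "C", "C", "C", "NYC", "C", "C", "C"]
assessor_2 = ["C", "C", "NYC", "C", "NYC", "C", "NYC", "C", "C", "C"]
print(percent_agreement(assessor_1, assessor_2))  # 0.9

# Intra-rater reliability uses the identical computation on the same
# assessor's judgements of the same evidence made at two different times.
```

Percent agreement is the simplest such statistic; it does not correct for agreement expected by chance, so more cautious analyses sometimes use chance-corrected measures instead.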

Recording requirements
The type of information that needs to be recorded and how it is to be recorded and stored, including duration.

Reporting requirements
For each key stakeholder, the reporting requirements should be specified and linked to the purpose of the assessment.

Assessment Types
The National Quality Council has developed guidelines for assessors which document the requirements for each assessment type:
• Portfolio
• Observation
• Product based methods
• Interview methods

Four examples have been included to illustrate how the assessment tool framework could be applied to the development of assessment tools. It should be acknowledged that the examples provided are not assessment tools but rather guidance as to what key features should be in an assessment tool based on the specific assessment method. These four examples encapsulate methods that require candidates to do (observation), say (interview), write (portfolio) or create/build (product) something. In fact, any assessment activity can be classified according to these four broad methods.

PORTFOLIO

A portfolio is defined as a purposeful collection of samples of annotated and validated pieces of evidence (e.g. written documents, photographs, videos, audio tapes). The tool should provide instructions to the candidate for how the portfolio should be put together. For example, the candidate may be instructed to:

• Select the pieces of evidence to be included;
• Provide explanations of each piece of evidence;
• Include samples of evidence only if they take on new meaning within the context of other entries;
• Include evidence of self-reflection;
• Include evidence of growth or development; and
• Include a table of contents for ease of navigation.

Decision making rules
This section should outline the procedures for checking the appropriateness and trustworthiness of the evidence included within the portfolio, such as the:

• Currency of evidence within the portfolio – is the evidence relatively recent? The rules for determining currency would need to be specified here (e.g. less than five years);
• Authenticity – is there evidence included within the portfolio that verifies that the evidence is that of the candidate and/or, if part of a team contribution, what aspects were specific to the candidate (e.g. testimonial statements from colleagues, opportunity to verify qualifications with the issuing body etc.);
• Sufficiency – is there enough evidence to demonstrate to the assessor competence against the entire unit of competency, including the critical aspects of evidence described in the Evidence Guide (e.g. evidence of consistency of performance across time and contexts); and
• Content validity – does the evidence match the unit of competency (e.g. relevance of evidence and justification by candidate for inclusion, as well as annotations and reflections).

Once the evidence within the portfolio has been determined to be trustworthy and appropriate, the evidence will need to be judged against evidence criteria such as:

• Profile descriptions of varying levels of achievement (e.g. competent versus not yet competent performance);
• Behaviourally Anchored Rating Scales (BARS) that describe typical performance from low to high (also referred to as analytical rubrics); and
• The Unit of Competency presented in some form of a checklist.

Range and conditions
It should be explained to candidates (preferably in written format, prior to the preparation of the portfolio) that the portfolio should not be just an overall collection of the candidate’s work, past assessment outcomes, checklists and the information commonly kept in candidates’ cumulative folders. The candidate should be instructed to include samples of work only if they take on new meaning within the context of other entries. Consideration of evidence across time and varying contexts should be emphasised to the candidate. Candidates should also be instructed to include only recent evidence (preferably less than five years old); more dated evidence can be used, but its inclusion should be defended. Such information should be provided in written format to the candidate prior to preparing the portfolio.

Materials/resources required
Materials to be provided to the candidate to assist with preparing his/her portfolio may include access to: photocopier, personal human resource files etc., if required.

Assessor intervention
Clarification of portfolio requirements is permitted.

Reasonable adjustments
An electronic and/or product based version of the portfolio may be prepared by the candidate. The portfolio may include videos, photographs etc.

Validity
Evidence of the validity of the portfolio tool may include:
• Detailed mapping of the evidence used to judge the portfolio with the Unit(s) of Competency (content validity);
• Inclusion of documents produced within the workplace and/or that have direct application to the workplace (face validity); and
• Evidence that the tool was panelled with subject matter experts (face and content validity).

Reliability
Evidence of the reliability of the portfolio tool may include:
• Detailed scoring and/or evidence criteria for the content to be judged within the portfolio (inter-rater reliability); and
• A recording sheet to record judgements in a consistent and methodical manner (intra-rater reliability).

Recording requirements
The following information should be recorded and maintained:
• The Portfolio tool (for validation and/or moderation purposes);
• Samples of candidate portfolios of varying levels of quality (for moderation purposes); and
• Summary results of each candidate’s performance on the portfolio, as well as recommendations for future assessment and/or training etc., in accordance with the organisation’s record keeping policy.

OBSERVATION METHODS

Workplace Observation, Simulation Exercise, Third Party Report
This may be part of a real or simulated workplace activity. Prior to the assessment event, the candidate must be informed that they will be assessed against the Observation Form and should be provided with a copy of the Form. Details about the conditions of the assessment should also be communicated to the candidate as part of these instructions (e.g. announced versus unannounced observations, period of observation).

The assessor observes the candidate performing a series of tasks and activities as defined by the information provided to the candidate. The performance may be:
• Part of his/her normal workplace activities;
• A result of a structured activity set by the observer in the workplace setting; or
• A result of a simulated activity set by the assessor/observer.

This section should outline what evidence of performance the assessor should be looking for during the observation of the candidate. The evidence required should be documented and presented in an Observation Form.

Decision making rules
To enhance the inter-rater reliability of the observation (i.e. increasing the likelihood that another assessor would make the same judgement, based upon the same evidence), an Observation Form should be developed and used to judge and record candidate observations. The observer should record his/her observations of the candidate’s performance directly onto the Observation Form. The observer should be instructed as to whether to record his/her observations on the Observation Form during and/or after the observation. The Observation Form may have a series of items in which each key component within the Unit of Competency is represented by a number of items; each item would be the evidence criteria. Each item may be presented as:

• a Behaviourally Anchored Rating Scale (BARS);
• a standard referenced framework (or profile);
• a checklist; and/or
• open ended statements to record impressions/notes by the observer.

Instructions on how to make an overall judgement of the competence of the candidate would need to be documented (e.g. do all items have to be observed by the assessor?). The form should also provide the opportunity for the observer to record that s/he has not had the opportunity to observe the candidate applying these skills and knowledge. Again, instructions on how to treat not observed items on the checklist would need to be included within the tool. The form should also be designed to record the number of instances and/or period of observation (this will help determine the level of sufficiency of the evidence to be collected), as well as the signature of the observer and the date of observation(s) (to authenticate the evidence and to determine the level of currency).
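One way to make such instructions unambiguous is to express the decision rule itself. The sketch below is illustrative only — the checklist items and the particular rule for handling not observed items are assumptions, not taken from the NQC guide: here any "no" fails, and any "not observed" item defers the judgement pending further observation.

```python
def judge_observation(form):
    # form maps each checklist item to "yes", "no", or "not observed"
    # (hypothetical decision rule, for illustration only)
    if any(v == "no" for v in form.values()):
        return "not yet competent"
    if any(v == "not observed" for v in form.values()):
        return "insufficient evidence - further observation required"
    return "competent"

form = {"follows safety procedure": "yes",
        "sets up equipment correctly": "yes",
        "completes workplace documentation": "not observed"}
print(judge_observation(form))
```

Writing the rule down this explicitly is what lets a second assessor reach the same judgement from the same completed form, which is the inter-rater reliability goal described above.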

Range and conditions
Assessors need to provide the necessary materials to the candidate, as well as explain or clarify any concerns/questions. The period of observation should be communicated to the observer and candidate, and would need to be negotiated and agreed to by workplace colleagues to minimise interruptions to the everyday activities and functions of the workplace environment.

Materials/resources required
The tool should also specify the materials required to record the candidate’s performance. For example:
• A copy of the Unit(s) of Competency;
• The Observation Form;
• Pencil/paper; and
• A video camera.

In addition, any specific equipment required by the candidate to perform the demonstration and/or simulation should be specified.

Assessor intervention
In cases where observations are to be made by an internal staff member and are to be unannounced, the candidate needs to be warned that s/he will be observed over a period of time for the purposes of formal assessment against the Unit(s) of Competency. If the observer is external to the workplace (e.g. a teacher or trainer), s/he will need to ensure that the time and date of the visit to the candidate’s workplace is confirmed and agreed to by the candidate and the workplace manager. The external observer will need to inform the candidate and his/her immediate supervisor of his/her presence on the worksite as soon as possible. At all times, the external observer will need to avoid hindering the activities of the workplace.

Reasonable adjustments
If the candidate does not have access to the workplace, then suitable examples of simulated activities may be used. This section would outline any requirements and/or conditions for the simulated activity.

Validity
Evidence of the validity of the observation tool may include:
• Detailed mapping of the Observation Form with the Unit(s) of Competency (content validity);
• Direct relevance and/or use within a workplace setting (face validity);
• A report of the outcomes of the panelling exercise with subject matter experts (face and content validity);
• Observing a variety of performance over time (predictive validity);
• The tool clearly specifying the purpose of the tool, the target population, the evidence to be collected, decision making rules, reporting requirements as well as the boundaries and limitations of the tool (consequential validity); and
• Evidence of how the literacy and numeracy requirements of the Unit(s) of Competency have been adhered to (construct validity).

Reliability
Evidence of the reliability of the observation tool may include:
• Detailed evidence criteria for each aspect of performance to be observed (inter-rater reliability); and
• A recording sheet to record observations in a timely manner (intra-rater reliability).

Recording requirements
The following information should be recorded and maintained:
• The Observation Form (for validation and/or moderation purposes);
• Samples of completed forms of varying levels of quality (for moderation purposes);
• Summary results of each candidate’s performance on the Observation Forms, as well as recommendations for future assessment and/or training etc., in accordance with the organisation’s record keeping policy; and
• The overall assessment result, recorded electronically on the organisation’s candidate record keeping management system.

PRODUCT BASED METHODS

Reports, Displays, Work Samples
The instructions for building/creating the product need to be clearly specified and preferably provided to the candidate in writing prior to formal assessment. The evidence criteria to be applied to the product should also be clearly specified and communicated (preferably in writing) to the candidate prior to the commencement of the formal assessment. Details about the conditions of the assessment should also be communicated to the candidate as part of these instructions (e.g. access to equipment/resources, time restrictions, due date etc.).

Evidence from candidate
The tool needs to specify whether the product only will be assessed, or whether it will also include the process. If it is product based assessment only, then the candidate needs to be instructed on what to include in the product. The conditions for producing the product should be clearly specified in the ‘information to be provided to the candidate’, which will directly influence the type of response to be produced by the candidate (e.g. whether they are to draw a design, produce a written policy document, build a roof etc.). If the tool also incorporates assessing the process of building the product, then the observations of the process would also need to be judged and recorded. The candidate would need to be instructed on how to present his/her product, for example:

• Portfolio (possibly containing written documents, photos, videos etc.);
• Display or exhibition of work; or
• Written document etc.

Decision making rules
This section should outline the procedures for checking the appropriateness and trustworthiness of the product evidence, such as its:

• Currency – is the product relatively recent? The rules for determining currency would need to be specified here (e.g. less than five years);
• Authenticity – is there evidence included within the product that verifies that the product has been produced by the candidate and/or, if part of a team contribution, what aspects were specific to the candidate (e.g. testimonial statements from colleagues); and
• Sufficiency – is there enough evidence to demonstrate to the assessor competence against the entire Unit of Competency, including the critical aspects of evidence described in the Evidence Guide (e.g. evidence of consistency of performance across time and contexts).

To enhance the inter-rater reliability of the assessment of the product, the criteria to be used to judge the quality of the product should be developed. Such criteria (referred to hereon as evidence criteria) should be displayed in a Product Form to be completed by the assessor. The assessor should record his/her judgements of the product directly onto a Product Form. There are many different ways in which the form could be designed. For example:

the form may have broken down the task into key components to be performed by the candidate to produce the product.

Each key component may be assessed individually using analytical rubrics (also referred to as behaviourally anchored rating scales (BARS)), or

the product overall may be compared to a holistic rubric describing varying levels of performance (also referred to as standard referenced frameworks or profiles) or

it simply may be judged using a checklist approach.The candidate should be provided with the evidence criteria prior to commencing building his/her product.
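The contrast between the checklist approach and an analytical rubric can be sketched as follows. This is an illustrative sketch only, not from the source: the checklist items, rating levels and pass level are all hypothetical.

```python
# Hypothetical checklist for a product (all item names invented for illustration).
CHECKLIST = ["meets design brief", "safe construction", "finished to standard"]

def checklist_judgement(ticked):
    """Checklist approach: the product is acceptable only if every item is ticked."""
    return all(item in ticked for item in CHECKLIST)

# A hypothetical 4-point behaviourally anchored rating scale (BARS) for one
# key component of the task.
BARS = {
    1: "not demonstrated",
    2: "partially demonstrated",
    3: "demonstrated to standard",
    4: "demonstrated with high independence",
}

def rubric_judgement(rating, pass_level=3):
    """Analytical rubric: the component is acceptable at or above the pass level."""
    return rating >= pass_level

print(checklist_judgement({"meets design brief", "safe construction"}))  # False: one item unticked
print(rubric_judgement(4))  # True
```

The checklist gives a binary yes/no per item, whereas the BARS approach records where on a scale the performance sits, which supports finer moderation decisions.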

Range and conditions

Assessors need to provide the necessary materials to the candidate, as well as explain or clarify any concerns or questions. The time allowed to build the product should be communicated to the candidate, and any restrictions on where and when the product can be developed also need to be clearly specified.

Materials/resources required

The tool should specify the materials required by the candidate to build the product, as well as the materials required by the assessor to complete the form. For example:

- The Product Form;
- Pencil/paper; and
- Specific technical manuals/workplace documents etc.

Assessor intervention

The amount of support permitted from the assessor, workplace supervisor and/or trainers needs to be clearly documented.

Reasonable adjustments

If the creation of the product requires access to the workplace, then suitable simulated activities may be used to produce the product if the candidate does not have access to the workplace.

Validity

Evidence of the validity of the product tool may include:

- Detailed mapping of the key components within the task to the Unit(s) of Competency (content validity);
- Direct relevance and application to the workplace (face validity);
- A report of the outcomes of the panelling exercise with subject matter experts (face and content validity);
- The tool clearly specifying its purpose, the target population, the evidence to be collected, decision making rules, reporting requirements, as well as the boundaries and limitations of the tool (consequential validity); and
- Evidence of how the literacy and numeracy requirements of the Unit(s) of Competency have been adhered to (construct validity).

Reliability

Evidence of the reliability of the product tool may include:

- Detailed evidence criteria for each aspect of the product to be judged (inter-rater reliability); and
- A recording sheet to record judgements in a consistent and methodical manner (intra-rater reliability).

Recording requirements

The following information should be recorded and maintained:

- The Product Form (for validation and/or moderation purposes);
- Samples of completed forms of varying levels of quality (for moderation purposes); and
- Summary results of each candidate’s performance on the Product Forms, as well as recommendations for future assessment and/or training, in accordance with the organisation’s record keeping policy.

INTERVIEW METHODS

structured, semi-structured, unstructured interviews, written answers

The interview schedule may be structured, semi-structured or unstructured. If using structured and/or semi-structured interview techniques, each question to be asked in the interview session should be listed and presented within the interview schedule. The types of questions that could be asked include open-ended; diagnostic; information-seeking; challenge; action; prioritisation; prediction; hypothetical; extension; and/or generalisation questions. When designing the interview schedule, the assessor will need to decide whether to:

- Provide the candidate with the range of questions prior to the assessment period;
- Provide the candidate with written copies of the questions during the interview;
- Allow prompting;
- Place restrictions on the number of attempts;
- Allow access to materials etc. throughout the interview period; and
- Allow the candidate to select his/her preferred response format (e.g. oral versus written).

Evidence from candidate

This section should provide instructions on how the candidate is expected to respond to each question (e.g. orally, in writing etc.). It should also outline how responses will be recorded (e.g. audio taped, written summaries by the interviewer etc.).

Decision making rules

This section should set out the procedures for judging the quality and acceptability of the responses. For each question, the rubric may outline:

- Typical, acceptable and/or model responses; and
- BARS that describe typical responses of increasing cognitive sophistication, linked to separate points on a rating scale (usually 3 to 4 points).

The tool should outline the administration procedures for asking each question. For example, not all questions may need to be asked if they are purely an indication of what may be asked; in such circumstances, the schedule should specify whether an assessor needs to ask a certain number of questions per category (as determined in the mapping exercise). The tool should also provide guidelines to the assessor on how to combine the evidence from the interview with other forms of evidence to make an overall judgement of competence (to ensure sufficiency of evidence). As the interview is administered by the assessor and conducted in real time, there will be evidence of both the currency and authenticity of the evidence. However, if the candidate refers within the interview to past activities that s/he has undertaken as evidence of competence, then decision making rules need to be established to check the currency and authenticity of such claims.

Range and conditions

The tool should also specify any restrictions on the number of attempts to answer the interview questions and/or time restrictions (if applicable).

Materials/resources required

The interview schedule should specify the type of materials provided to the candidate, which may include:

- Written copies of the questions prior to or during the assessment;
- Access to materials (e.g. reference materials, policy documents, workplace documents) to refer to during the interview; and
- Access to an interpreter/translator if the candidate is from a non-English speaking background (NESB).

The interview schedule should also specify the materials required by the interviewer to record the candidate’s responses, for example paper, pencil, video camera, audio tape etc.

Assessor intervention

The tool should specify the extent to which the assessor may assist the candidate to understand the questions.

Reasonable adjustments

Candidates may be given the option of responding to the interview questions in writing rather than orally. Access to an interpreter during the interview may also be permitted if the competency is not directly related to oral communication skills in English. Similarly, candidates from a NESB may be provided with a copy of the interview schedule in their native language prior to the interview.

Validity

Evidence of the validity of the interview tool may include:

- Detailed mapping of the questions included within the interview schedule to the Unit(s) of Competency (content validity);
- Direct relevance to the workplace setting (face validity);
- Evidence of panelling the questions with industry representatives during the tool development phase (face validity);
- The tool clearly specifying its purpose, the target population, the evidence to be collected, decision making rules, reporting requirements, as well as the boundaries and limitations of the tool (consequential validity); and
- Evidence of how the literacy and numeracy requirements of the Unit(s) of Competency have been adhered to (construct validity).

Reliability

Evidence of the reliability of the interview tool may include:

- Detailed scoring and/or evidence criteria for each key question within the interview schedule (inter-rater reliability);
- A recording sheet to record responses in a timely, consistent and methodical manner (intra-rater reliability); and
- Audio taping responses and having them double marked blindly by another assessor (i.e. where each assessor is not privy to the judgements made by the other) during the development phase of the tool (inter-rater reliability).

Recording requirements

The following information should be recorded and maintained:

- The Interview Schedule (for validation and/or moderation purposes);
- Samples of candidate responses to each item, as well as examples of varying levels of responses (for moderation purposes); and
- Summary results of each candidate’s performance on the interview, as well as recommendations for future assessment and/or training, in accordance with the organisation’s record keeping policy.

NOW IT’S TIME TO START ASSESSMENT TASK 4 (TAEAS404)

- Open the file named “TAEAS404_AssessmentDesign-TAEASS401A_LNR”
- Open the file “TV for TAA”
- Open the pdf file of one of your Units of Competency

In Module 2 you analysed each Unit of Competency and paraphrased the endorsed sections of the Unit in the spreadsheet ‘Program Analysis’ in Total VET. Re-read the Unit of Competency alongside the paraphrasing you created in Total VET.

Your TAEAS404 Assessment Task may be designed in a couple of ways:
1. You may create a separate tool for each Unit of Competency you have analysed; or
2. You may create a combined tool if you intend to deliver and assess both Units concurrently.

Your Assessment Tool(s) ought to have more than one assessment type (portfolio, observation, product, interview), as some endorsed components of a Unit of Competency may be better assessed by one assessment type than another.

Designing your assessment tool – The Assessor Guide

a. Use the COSAMP Assessor Instruction for Assessment template to build the content for your assessment tool.
   i. On the front page, type in the Training Package Qualification and Code in the space provided.
   ii. On the front page, give your assessment tool a name that will capture the imagination of your students, preferably with some reference to a work task in the workplace. Give your tool a ‘Task Code’.
   iii. On the front page, type in the name(s) and code(s) of the Unit(s) of Competency being assessed.
   iv. On the front page, type in the names and codes of the Elements of the Unit(s) of Competency.
   v. Your assessment tool should be based around a work or simulated work activity. Provide a description of the workplace or industry context in which this assessment tool would be used; write this description up briefly on the front page.

b. Decide on the various Assessment Types first. List the types you are using and give a short description of the task associated with each type. Put this in the section called ‘Assessment Components’.

c. Using the assessment type guides on pages 8–16 of this document:
   i. Identify the task criteria that students will be required to complete.
   ii. After each criterion, put in brackets the number of the Unit Performance Criterion that the task will be assessing (for reference later on, to assist with your mapping document).
   iii. Ensure that each Performance Criterion has at least one activity by which it will be assessed across all your assessment types.
   iv. The assessment task must not rely on your personal ability to assess the client’s performance. It must have the capacity for another assessor to use and replicate the assessment procedures without any need for further clarification by you.
   v. If RPL is available, describe the conditions that would need to be demonstrated for RPL to be awarded to any of your students.
   vi. As a whole, the assessment criteria need to demonstrate:
      1. currency of the content in relation to the relevant industry;
      2. authenticity – that it is the client’s own work;
      3. sufficiency – that there is enough evidence produced to determine competency; and
      4. reasonable adjustments for specific contexts that do not compromise the validity of the assessment task itself for the Unit of Competency.
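The coverage check in step iii can be sketched programmatically. This is a hypothetical illustration: the criterion numbers and activity names are invented, and stand in for the bracketed Performance Criteria numbers you record after each task criterion in the Assessor Guide.

```python
# All Performance Criteria of the Unit(s) being assessed (invented numbers).
performance_criteria = ["1.1", "1.2", "2.1", "2.2", "3.1"]

# Each assessment activity mapped to the Performance Criteria it assesses
# (the bracketed numbers recorded after each task criterion).
activities = {
    "Portfolio: design folio":   ["1.1", "1.2"],
    "Observation: studio setup": ["2.1"],
    "Interview: follow-up":      ["2.2", "1.1"],
}

# Collect every criterion covered by at least one activity, then list the gaps.
covered = {pc for pcs in activities.values() for pc in pcs}
unmapped = [pc for pc in performance_criteria if pc not in covered]
print(unmapped)  # ['3.1'] -- this criterion still needs an assessment activity
```

Any criterion left in `unmapped` signals that another task activity must be added before the tool can be considered sufficient.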

Designing your assessment tool – Student Results Recording page setup

a. Open the file “TV for TAA.xls”
   i. From the Index sheet, select the option “Assessment Task List” in the LHS navigation list. This will hyperlink you to the sheet titled “Assessment Task List”. Previously you entered the names of your Units of Competency in the Unit Selector; you will now notice that the Unit Name and Code appear in columns B and F.
   ii. Type the name of your newly-designed Assessment Task into Column E.
      - If you have designed a single Assessment Tool for both Units combined, type the name of the task in row 8 and add the suffix “- Part A”. Repeat the name in row 9 and add the suffix “- Part B”.
      - If you have designed a task for one Unit only, you will eventually have the names of 2 separate tasks in rows 8 and 9.

      Give your Assessment Tool a code, or leave it as the default “AS00x”.
   iii. In Column D there is a hyperlink, such as “AT1”. Click on this hyperlink, which will take you to the recording sheet for your assessment tool.
      - In Row 4 you will see the name of your assessment tool appear.
      - In Cell G6, type in the first Assessment Type.
      - In Row 7, type a 2-5 word paraphrase of each criterion for that assessment type which you will be using to assess the relevant components of competency.
      - Highlight across Row 6 the columns where the criteria refer to the Assessment Type, and put a border around this group of cells in Row 6 (right click / Format Cells / Border / select style / select Outline / click OK).
      - Repeat this process for each Assessment Type.
   iv. If you have created a single Assessment Tool for both Units of Competency, only record the criteria assessing the specific Unit on the AT1 sheet or AT2 sheet respectively.

An explanation of the functionality of these assessment sheets:

Columns A & B contain the students’ names, which are populated from the sheet called “Student list”. If you click on a student’s name, you will be hyperlinked to that student’s summary page.

Column C has a circle with an arc through it by default, indicating that the task has not been completed. When there is a tick for each criterion for a specific student, this symbol will change to a tick.

Column D will indicate the amount of the task the student has completed, as you gradually assess each criterion.

Column E will count down the number of Criteria that the student still has to complete.

Column F enables you to rate the student’s work on a scale of 1-5. The number ‘1’ indicates that the student is competent but needs support; the number ‘5’ indicates the student is highly competent and independent.

Columns ‘G’ through to ‘AE’ are set for you to select either the letter ‘a’ or the letter ‘x’. The cell is formatted with the Webdings font. ‘a’ returns a tick. If the student has satisfactorily demonstrated competence in each of the criteria, type the letter ‘a’, otherwise, leave the cell blank
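The behaviour of Columns C, D and E can be modelled from the per-criterion ticks in Columns G–AE. The sketch below is a hypothetical illustration of that logic, not the spreadsheet’s actual formulas; the function name and dictionary keys are invented.

```python
def summarise(criteria_ticks):
    """criteria_ticks: one boolean per criterion (True = 'a' typed, i.e. a tick)."""
    total = len(criteria_ticks)
    done = sum(criteria_ticks)
    return {
        "complete": done == total,        # Column C: tick once every criterion is ticked
        "proportion_done": done / total,  # Column D: how much of the task is done
        "remaining": total - done,        # Column E: criteria still to complete
    }

print(summarise([True, True, False, True]))
# {'complete': False, 'proportion_done': 0.75, 'remaining': 1}
```

In other words, the sheet derives all three summary columns from the same row of ticks, so marking one more criterion updates the completion symbol, the proportion and the countdown together.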

Designing your assessment tool – The Learner Guide

a. Use the COSAMP Learner Instructions for Assessment template to build the content for your Learner Guide.
   i. On the front page, copy the front cover information from the Assessor Guide.
   ii. In the section “Task Focus”, elaborate on the industry work context for the assessment tool, making the student the subject of the description.
   iii. In the body of the document, copy and paste the criteria you developed in the Assessor Guide. Edit the text of each criterion so that the task components are directed towards the student in the second person (‘you’). You may expand on the instructions to provide students with as much support as is required.
   iv. Delete all references to the Performance Criteria codes in brackets.
b. Copy and paste the Learner Guide content into the Assessor Guide document you have already created.

Designing your assessment tool – Assessment Tool Mapping

The Assessment Tool Mapping document demonstrates the linkages between your assessment task and the Endorsed Components of the Unit of Competency. This is the document which ensures that your Assessment Task is valid, reliable, fair, reasonable and fully compliant with the AQTF requirements.

There is no one specific way to organise the information in this table; the organisation depends on the way you have designed your Assessment Tool and spread it across the various Assessment Types. You may find that if you start organising the information according to Performance Criteria, you end up doing a lot of repetition. For an exemplar of one method of completing the mapping document, open the file “MAP_MusicIndustryResearch.doc” (Resource Folder/Templates).

a. Open the document “Assessment Mapping Template.doc” (Resource Folder/Templates)
   i. Type in the Qualification code and name by replacing the text.
   ii. Type in the Unit of Competency code and name by replacing the text.
   iii. Type in the name of your Assessment Tool and a code (if applicable).
   iv. Copy the task description/Task Focus from the Assessor Guide.
   v. Copy the Assessment Components from the Assessor Guide.

b. The first table to complete is the mapping to the Performance Criteria. Ensure you include all Performance Criteria somewhere in this table. The numbers you entered after each of the assessment criteria in the Assessor Guide will assist you in creating this table.

c. The second table refers to the relevant knowledge the students ought to have acquired during your delivery/teaching program.

   i. You may find that no Performance Criterion is relevant to some specific knowledge requirements. You may need to determine another Task Activity to include in your Assessment Task to ensure that all knowledge is being assessed.

d. The third table refers to the relevant skills the students ought to have acquired during your delivery/teaching program.

   i. You may find that no Performance Criterion is relevant to some specific skill requirements. You may need to determine another Task Activity to include in your Assessment Task to ensure that all skills are being assessed.

e. The fourth table refers to the relevant Employability Skills the students ought to have acquired during your delivery/teaching program.

An Assessment task does not need to cover every Employability Skill, so just nominate the Task components which relate to relevant Employability Skills.

f. The last table refers to the Evidence Guide which lists the Methods of Assessment and Evidence Generated by the student to determine the student’s competency in a Unit.

   i. List the Critical Aspects of Assessment and the Evidence Required as nominated in the Unit of Competency, and in the first 2 columns identify the relevant methods you used and the evidence the student ought to have generated to demonstrate competence.

