
Measuring the impact of instant high quality feedback

Date posted: 12-Aug-2015
  1. Measuring the impact of instant, high-quality feedback. Stephen Nutbrown: psxsn6@nottingham.ac.uk, Su Beesley: susan.beesley@ntu.ac.uk, Colin Higgins: colin.higgins@nottingham.ac.uk. University of Nottingham & Nottingham Trent University.
  2. Background
     - Automated assessment
     - Why do we assess work? Assessment of learning vs assessment for learning
     - Example: teaching programming, a practical subject where assessment for learning is vital
     - Feedback is a core part of assessment for learning
  3. Overview
     - Study of 141 Computer Science students
     - According to the NSS, students are unhappy with their feedback relative to other questions
     - What is good feedback?
     - Introduction of TMA (an automated marking system) to produce feedback in line with good feedback
     - Measure student performance on the same exercise with multiple submissions
     - Measure student performance on following exercises without multiple submissions
  4. Aim: improve feedback, measure the result
     - Help students learn!
     - General properties of good feedback:
       - Informative (how to improve!) and specific
       - Reliable and consistent
       - Clearly communicated
       - Timely
     - The assessment (and feedback) should also be useful for teachers
  5. TMA (The Marker's Apprentice): a brief introduction
     - An electronic framework for combining automated, semi-automated, and manual marking
     - The mark scheme is broken into many parts, with a tool for each part, which may be:
       - Automated: e.g. spell checks, word counts, and grammar checks for reports; functionality tests and convention tests for programming
       - Semi-automated (helping a human marker): e.g. selecting options from a rubric or a bank of feedback, each with actions on how to improve
       - Manual (entirely up to the marker): e.g. free-text entry
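The tool-per-part structure described on this slide can be sketched in Java. This is a hypothetical illustration, not TMA's actual API: the names `MarkingTool`, `MarkResult`, and `WordCountTool`, and the mark values, are all invented for this example.

```java
// Hypothetical sketch of a TMA-style marking framework: each part of the
// mark scheme is handled by one tool, and every tool returns marks plus
// feedback. Names and values are illustrative, not TMA's real interface.
interface MarkingTool {
    MarkResult mark(String submissionText);
}

class MarkResult {
    final double marksAwarded;
    final String feedback;

    MarkResult(double marksAwarded, String feedback) {
        this.marksAwarded = marksAwarded;
        this.feedback = feedback;
    }
}

// An example of a fully automated tool: a simple word-count check for reports.
class WordCountTool implements MarkingTool {
    private final int minWords;

    WordCountTool(int minWords) { this.minWords = minWords; }

    public MarkResult mark(String submissionText) {
        String trimmed = submissionText.trim();
        int words = trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
        if (words >= minWords) {
            return new MarkResult(5.0, "Word count OK (" + words + " words).");
        }
        return new MarkResult(0.0,
                "Too short: " + words + " words, minimum " + minWords + ".");
    }
}
```

Because every tool produces the same `MarkResult` shape, automated, semi-automated, and manual tools could be mixed freely within one mark scheme.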
  6. Instant?
     - If all of the tools for an assignment are automated, feedback can be given instantly
     - This also allows marking of drafts, or running a subset of tools on a pre-submission to give the student an idea of how they are doing
  7. Use in Computer Science
     - Several thousand submissions so far at the University of Nottingham UK and the University of Nottingham China
     - A range of automated tools have been developed which work with TMA for the assessment of programming
  8. A typical programming exercise
     - Given an input, does the program produce the correct output? Functionality is worth X% of the total grade, plus feedback
     - Is the source code of good quality? Code quality is worth Y% of the grade, plus feedback
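The X%/Y% split above amounts to a weighted sum of the two component scores. A minimal sketch, assuming each component is scored out of 100 and the weights sum to 1.0; the class name `GradeCombiner` and the 70/30 example weights are invented, not taken from the slides.

```java
// Illustrative only: combine a functionality score and a code-quality
// score into an overall grade using the X%/Y% weights from the slide.
class GradeCombiner {
    // Each score is out of 100; functionalityWeight + qualityWeight == 1.0.
    static double overallGrade(double functionalityScore, double qualityScore,
                               double functionalityWeight, double qualityWeight) {
        return functionalityScore * functionalityWeight
             + qualityScore * qualityWeight;
    }
}
```

For example, with invented weights of 0.7/0.3, a submission scoring 80 on functionality and 60 on quality would receive an overall grade of 74.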
  9. Rule-based conventions tool
     - One of several tools, but the one we will focus on today
     - 127 rules, based on PMD (http://pmd.sourceforge.net)
     - Searches for particular patterns in the source code
     - Can identify common problems and their exact line number
     - For each violation, marks are deducted and feedback is given
     - Difficulties arise from repeat violations and from weighting rules by difficulty
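The kind of pattern-plus-line-number reporting described above can be sketched with a toy rule. Note the hedges: real PMD rules operate on a parsed abstract syntax tree, not regular expressions, and the rule name, regex, and one-mark deduction here are all invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A toy, regex-based approximation of one convention rule in the spirit
// of the PMD-based tool: flag single-letter variable names and report the
// exact line number. PMD itself works on the AST; this is illustrative.
class ShortVariableNameRule {
    private static final Pattern DECLARATION =
            Pattern.compile("\\b(?:int|double|boolean|char|long)\\s+([a-zA-Z])\\b");

    static List<String> check(String source) {
        List<String> violations = new ArrayList<>();
        String[] lines = source.split("\n");
        for (int i = 0; i < lines.length; i++) {
            Matcher m = DECLARATION.matcher(lines[i]);
            while (m.find()) {
                // Report the violation with its 1-based line number,
                // mirroring the deduction-plus-feedback style of the tool.
                violations.add("Line " + (i + 1) + ": variable '" + m.group(1)
                        + "' has a non-descriptive single-letter name (-1 mark)");
            }
        }
        return violations;
    }
}
```

Running such a rule over the "bad code" example on the next slide would flag `a`, `b`, and `c`, while the "good code" version with descriptive names would pass.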
  10. Tool example: conventions
      Extremely difficult to mark manually: imagine thousands of lines of code per submission, for 140+ students.

      Bad code (example with common issues):

          int a = 99;
          int b = 10;
          int c;
          if (a < b)
              c = b - a;
          else if (a > b)
              c = a - b;
          else
              c = 0;

      Good code (example without common issues):

          int first = 99;
          int second = 10;
          int calculatedDifference;
          if (first < second) {
              calculatedDifference = second - first;
          } else if (first > second) {
              calculatedDifference = first - second;
          } else {
              calculatedDifference = 0;
          }
  11. Linking back to good feedback
      - Good example (how it should be done)
      - Bad example (helps identify the problem)
      - Reason for the rule
      - Hyperlink back to a learning resource (lecture slides)
      - Feedback given instantly
      - Perfectly consistent
      - Feedback sessions to discuss results keep communication open
      - Used in combination with functionality tests
  12. Screenshots: overview
  13. Further help
  14. Programming is just an example. The same is possible for reports, or other assignment types.
  15. Programming Coursework 0 (first measurement)
      - Does not count towards the final grade; students may make as many submissions as they wish while getting started with Java
      - Instant (automated) feedback makes feedback on drafts possible
      - Assessed using the functionality and conventions tools from before
      - Measure performance between submissions
  16. Measuring improvements within one exercise
      - CW0 does not count towards the final grade
      - 346 submissions from 60 unique students (avg. 5.7 each)
      - [Chart: average grade rose from 60.8 on the first submission to 77.2 on the last submission]
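The first-versus-last comparison behind the chart can be sketched as follows: for each student, take the grade of their earliest and latest submission, then average each across the cohort. The class name `CohortStats` and the sample data are invented; the slide does not describe the actual analysis code.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the first-vs-last measurement: each student's
// grades are listed in chronological submission order, and the cohort
// averages of first and last grades are compared.
class CohortStats {
    static double[] firstLastAverages(Map<String, List<Double>> gradesByStudent) {
        double firstSum = 0, lastSum = 0;
        for (List<Double> grades : gradesByStudent.values()) {
            firstSum += grades.get(0);                // first submission
            lastSum += grades.get(grades.size() - 1); // last submission
        }
        int students = gradesByStudent.size();
        return new double[] { firstSum / students, lastSum / students };
    }
}
```

Averaging per student first, rather than pooling all 346 submissions, keeps a student who resubmitted many times from dominating the cohort figures.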
  17. The results raise some questions
      - Did they read the feedback and engage with it?
      - Did they just mechanically fix problems?
      - Did they learn anything?
      - [Diagram: submission 1 receives specific feedback such as "Should fix line 7. See lecture 2"; by submission n the feedback is just "Well done", with nothing left to fix]
  18. Survey of students (35 responses)
      - Did you read the instant feedback? Yes (100%)
      - Did the feedback highlight areas which could be improved? Yes (91%), No (9%)
      - Did you improve the quality of your submission based on your feedback? Yes (77%), No (23%)
      - Do you feel the feedback assisted in your learning and will help you in future work? Yes (68%), No (32%)
  19. Coursework 1
      - Counts towards the final grade (100%!)
      - The same coursework was set 2 years ago
      - Made up of 3 parts:
        - Part 1: ensures every student has received specific, instant feedback from TMA at least once, using the functionality and convention tools. Two submissions allowed.
        - Part 2: no pre-submissions; compare results to previous years (this part focused on students testing their own work)
        - Part 3: one pre-submission [a subset of tests] containing very few tests, for peace of mind
  20. Average number of convention mistakes
      - [Chart: average convention violations per cohort fell 35.4% from the previous year (n=118) to this year (n=141)]
  21. Useful for teachers
      - [Chart: average grade for each section of the mark scheme]
  22. Analysis
      - Same course material
      - Same lecturer
      - Same assignment
      - Improved feedback: specific, instant, clearly communicated, timely, and useful for teachers too
  23. Conclusions
      - The feedback generally helped students to learn (assessment for learning)
      - The feedback, being specific and broken down into sections, is useful for teachers
      - The assessment technique is vitally important to the student experience and has a huge impact on student learning
      - Similar findings in China, not yet formally analysed
  24. General: for other disciplines
      - This study strongly highlights the importance of good feedback and well-considered assessment techniques
      - Students created submissions of much higher quality than before
      - TMA almost forces good feedback practices, even for semi-automated tools: feedback must require action, the mark scheme is split up and specific, and it saves time
  25. To take away
      - Feedback is extremely important; considering all of the guidelines is a good start, even if feedback isn't automated
      - Automated assessment may help, but is not a replacement for human interaction
      - Can you do anything to improve the turnaround time or detail of your feedback?
      - TMA will be made generally available soon; if you would like a demo, please feel free to ask