

Assessment & Evaluation in Higher Education
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/caeh20

Interpretations of formative assessment in the teaching of English at two Chinese universities: a sociocultural perspective
Qiuxian Chen (a), Margaret Kettle (b), Val Klenowski (b) & Lyn May (b)

(a) Foreign Language School, Shanxi University, Taiyuan, China; (b) Faculty of Education, Queensland University of Technology, Brisbane, Australia.
Published online: 28 Sep 2012.

To cite this article: Qiuxian Chen, Margaret Kettle, Val Klenowski & Lyn May (2013) Interpretations of formative assessment in the teaching of English at two Chinese universities: a sociocultural perspective, Assessment & Evaluation in Higher Education, 38:7, 831-846, DOI: 10.1080/02602938.2012.726963

To link to this article: http://dx.doi.org/10.1080/02602938.2012.726963



Interpretations of formative assessment in the teaching of English at two Chinese universities: a sociocultural perspective

Qiuxian Chen (a), Margaret Kettle (b), Val Klenowski (b)* and Lyn May (b)

(a) Foreign Language School, Shanxi University, Taiyuan, China; (b) Faculty of Education, Queensland University of Technology, Brisbane, Australia
*Corresponding author. Email: [email protected]

Formative assessment is increasingly being implemented through policy initiatives in Chinese educational contexts. As an approach to assessment, formative assessment derives many of its key principles from Western contexts, notably through the work of scholars in the UK, the USA and Australia. The question for this paper is the ways that formative assessment has been interpreted in the teaching of College English in Chinese Higher Education. The paper reports on a research study that utilised a sociocultural perspective on learning and assessment to analyse how two Chinese universities – an urban-based Key University and a regional-based Non-Key University – interpreted and enacted a China Ministry of Education policy on formative assessment in College English teaching. Of particular interest for the research were the ways in which the sociocultural conditions of the Chinese context mediated understanding of Western principles and led to their adaptation. The findings from the two universities identified some consistency in localised interpretations of formative assessment which included emphases on process and student participation. The differences related to the specific sociocultural conditions contextualising each university, including geographical location, socioeconomic status, and teacher and student roles, expectations and beliefs about English. The findings illustrate the sociocultural tensions in interpreting, adapting and enacting formative assessment in Chinese College English classes and the consequent challenges to and questions about retaining the spirit of formative assessment as it was originally conceptualised.

Keywords: formative assessment; English language teaching (ELT); Chinese Higher Education; sociocultural perspective

Introduction

In 2007, the Ministry of Education (CMoE) in China issued a guideline document for College English, that is, the English language teaching (ELT) programme for Chinese university undergraduates majoring in subject areas other than English. The document, titled College English Curriculum Requirements (CECR), advocated the incorporation of formative assessment into College English assessment: ‘assessment of College English learning should include both formative assessment and summative assessment …’ (China Ministry of Education [CMoE] 2007, 5).





The move to formative assessment resonated with a global trend in educational assessment that foregrounds formative assessment for the purpose of improved learning outcomes (Chen 2009). Yet, it is notable that much of the available research and theory on formative assessment is Western and English-speaking based. While the influence of sociocultural factors on assessment practices is well acknowledged (Black and Wiliam 2005; Carless 2011; Pryor and Crossouard 2008), the initiative by the CMoE makes it highly relevant to investigate how formative assessment is operating in the Chinese context. Indeed, what are the alignments and tensions as formative assessment is mandated and implemented in a context with social, cultural, historical, political and geographical conditions markedly different from those in which it was conceived? This paper draws on a research study of the formative assessment initiative as it was realised at two Chinese universities in their institutional policies and assessment practices. The study analysed interpretations of formative assessment drawing on policy documents and teachers’ accounts as manifestations of the sociocultural conditions of the respective university contexts.

Literature review

The evolution of formative assessment principles

The notion of formative assessment can be traced to Scriven’s (1967) concept of formative evaluation. It was incorporated by Bloom, Hastings, and Madaus (1971) into their theory of mastery learning. The definition of formative assessment at that time prioritised the timing (during the process rather than at the end) and the function (to help improve rather than to summarise).

The understanding of formative assessment evolved particularly in the UK, where it is known as assessment for learning, through the work of the Assessment Reform Group (ARG). Black and Wiliam (1998) argued that assessment performs a formative function by which evidence can be used to inform learning improvement. The ARG defined formative assessment as:

The process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there. (ARG 2002, 2)

Ten principles were also proposed to guide classroom assessment practices. These principles foreground the critical role of assessment in classroom practice and its integral relationship to teaching and learning through the use of effective and timely feedback for the purpose of learning enhancement, and in the learner’s active engagement for the development of learning capability (ARG 2002). The principles are widely accepted and form the basis of formative assessment initiatives in Western English-speaking countries and other contexts.

Formative assessment in diverse contexts

Research shows that the enactment of formative assessment in classroom practices has been less than straightforward. Some UK teachers, for example, even when supported with training, have been found to implement the principles of assessment for learning to the ‘letter’ rather than the ‘spirit’ (Marshall and Drummond 2006). That is, the teachers followed the prescribed procedures rather than organising their assessment practice using the principles. On other occasions, the principles were distorted so that ‘real and sustained learning was sacrificed to performance on a test’ (Klenowski 2009, 263). In these cases it was found that feedback was given to students in order to target better performance on future tests.



In New Zealand, cases were reported in which the meaning of formative assessment was narrowed so that assessment was used in a mechanistic way, instead of as intended, as Assessment for Learning (Hume and Coll 2009). Teachers’ use of formative assessment was procedural in that students were expected to comply with criteria to the ‘letter’ to improve performance rather than to understand the qualities as reflected in the criteria to improve the standard of their learning.

While some researchers view these practices as distorting the principles of formative assessment, others maintain that deliberate appropriation for political or cultural ends is also a problem (Klenowski 2009). Studies exploring the feasibility of the appropriation of formative assessment have identified how variations occur in practice. For example, Torrance and Pryor (1998) identify two types of formative assessment: ‘convergent’, which is described as behaviourist, and ‘divergent’, which aligns with a constructivist approach and is characterised by ‘helping questions’ as opposed to ‘testing questions’. ‘Divergent’ formative assessment engages students through probing questions in scaffolded conversations and sustained dialogues.

Pryor and Crossouard (2008, 6) describe formative assessment as ‘complex and tricky practices for both teachers and learners’, and suggest that teachers can move between ‘convergent’ and ‘divergent’ variations in practices. Carless (2011) too proposes that formative assessment in varied contexts can range from ‘restricted’ to ‘extended’ forms, or from the ideal as prescribed in the international literature to less ideal but more contextually feasible variations.

Davison and colleagues (Davison 2008; Davison and Leung 2009) from the field of ELT propose a typology of possibilities for formative assessment practice in classrooms (Figure 1).

The typology extends the understanding of formative assessment and illustrates how it can be flexible in practice. Carless (2011, 2), utilising a sociocultural perspective, argues for the necessity of this flexibility: ‘formative assessment needs to take different forms in different contexts’. He proposes a ‘contextually grounded approach’, especially for Confucian Heritage Culture (CHC) contexts such as China that are dominated by an examination culture. He posits an educational innovation such as formative assessment in terms of mutual adaptation, whereby the development of a suitable form of formative assessment involves retaining its pedagogical focus while adapting to locally and nationally contextualised factors (Carless 2011).

Figure 1. Formative assessment in the classroom: A typology of possibilities (source: Davison and Leung 2009, 400). The typology comprises in-class contingent formative assessment-while-teaching; more planned integrated formative assessment; more formal mock or trial assessments modelled on summative assessments but used for formative purposes; and prescribed summative assessments whose results are also used formatively.


A sociocultural perspective

A sociocultural perspective views values, models of social relations and practices entrenched in traditions as ‘cultural tools for thinking’ (Rogoff 2003, 258). Together, they mediate the construction of meaning by agents within a shared context or ‘community of practice’ (Wenger 1998, 2000). Understanding education policy and its implementation from this perspective is a complex process. It involves recognising that policy implementation is mediated. As noted by Wenger (1998, 80):

The enterprise is never fully determined by an outside mandate, by a prescription or by an individual participant. Even when a community of practice arises in response to that mandate, the practice evolves into the community’s own response to that mandate.

This means that the making of meaning about a policy by local agents such as policy-makers, teachers and students involves appropriation ‘in situated locales’ and in communities of practice (Levinson, Sutton, and Winstead 2009, 768). These insights, in conjunction with the work of Carless (2011), Davison (2008), and Davison and Leung (2009) as discussed above, present the possibility that formative assessment as a concept and practice coined ‘elsewhere’ may be appropriated and practised within Chinese university communities in adapted and evolved forms. This possibility raises questions about what form these practices take and whether they inhere and retain the recognised principles of formative assessment. These questions underpinned the research presented in this paper.

Context

Within a sociocultural perspective, context is a foundational concept because it is the amalgam of factors that situate particular practices. The context in which the CECR policy was implemented was marked by dramatic economic, social and cultural change in China that influenced and in turn was influenced by institutional, disciplinary and cultural factors.

Institutional factors: Higher Education and university categorisation

A key institutional consideration is the nature of the Chinese Higher Education system. China has become the largest provider of Higher Education (HE) in the world (Wang 2011), with English the most dominant foreign language, being taught in universities, colleges and schools (Hu 2001; United Nations Educational, Scientific and Cultural Organization [UNESCO] 2011). The demand for English language proficiency is linked to China’s economic reforms in the 1990s and developments in international business, trade, technology and information. Increased urbanisation has meant that people with a high level of proficiency in English have access to greater academic success, better employment prospects and improved living conditions in urban areas (Qiang and Kang 2011). Within the HE sector, concerted moves towards mass education were initiated at the beginning of the twenty-first century (Wang 2011). Four categories of HE institution evident in the sector are: national Key, ordinary Key, Non-Key and local. Before the 1990s, Chinese universities were classified as either Key University (KU) or Non-Key University (NKU), that is, prestigious or ordinary (Zhao and Campbell 1995).



Various initiatives have been undertaken within Chinese Higher Education. For example, Projects 985 and 211 are two Chinese Government programmes aimed at developing world-class universities (CMoE 1999). Project 985 was a policy launched by the Chinese Government in 1998 to develop a small group of elite universities capable of achieving world-class standing; the policy for Project 211 was launched in 1985 to strengthen about 100 HE institutions and key disciplinary areas as a national priority for the twenty-first century (Gong and Li 2010; Wang 2011). The concept of the Key and Non-Key categorisation persists, with most of the Key universities participating in both Projects 985 and 211. For these reasons the Key and Non-Key categorisation is maintained in this paper. In China, KUs have priority in receiving funding and support from the central government. They also enjoy a good reputation and have the privilege of recruiting highly qualified teachers and enrolling high-achieving students from around the country. This factor is influential for the status and profile of the universities.

The KU selected for the study was situated in a major city in a developed eastern region of China. It was well-funded, enrolled high-achieving students from all over China and recruited highly qualified teachers. The NKU was located in a provincial city in West China, which is not as developed economically and educationally. The NKU received less funding than the KU, enrolled students of an average level of academic performance, and recruited teachers with lower levels of qualifications. The urban–rural divide between eastern and western parts of China is significant. It accounts for differences in social conditions such as access to HE, levels of wealth, life opportunities, benefits structures, lifestyle and social rights (Wang 2011).

Cultural factors: Confucian heritage culture and education

The orientation to examinations is firmly embedded in educational contexts in China (Deng and Carless 2010; Qi 2005). Examinations originated in Ancient China and have been used summatively in China for over 2000 years (Meng et al. 1961; Stobart 2008). The examination is valued for its ‘one-off result’ (Han and Yang 2001) and continues to influence Chinese education including College English (Chen 2009; Cheng and Curtis 2009).

Within a domain such as education, social relations are influenced by social and cultural understandings and expectations. Confucian culture prioritises the responsibilities of the individual and the importance of morality and social connections (Yan 2010). In terms of HE, Chinese tradition has emphasised individual cultivation and growth in knowledge and morality on the one hand, and the pursuit of knowledge in the service of the state and society, on the other (Gong and Li 2010).

The relationship of teachers to students has been traditionally presented as one of authority (Ho, Peng, and Chan 2001; Zhu 1992), with teachers placed in a hierarchical relationship with students (Biggs 1996). The teacher is positioned as the only credible judge or assessor of learning, while students have little sanction to judge or assess each other’s work (Hu 2002). Changes have been noted in these relationships, however. For example, Zhang (2004) notes that, compared to their counterparts in the 1980s and early 1990s in China, students in the 2000s are more independent and creative in their thinking, and less likely to be satisfied with the answers of their teachers.



A number of studies have investigated the impact of CHC factors on assessment. Davison (2005) found that the examination culture was responsible for differences in Hong Kong and Australian teachers’ attitudes to and practices of formative assessment. The CHC influence has also been identified in comparative studies conducted in Canada, Hong Kong and mainland China (Cheng, Rogers, and Hu 2004; Cheng, Rogers, and Wang 2008; Cheng and Wang 2007). Chen (2009) analysed the key values embedded in the Chinese context and identified the presence of a hierarchical teacher/student relationship, an examination tradition and a measurement orientation to assessment that appears incompatible with the principles of formative assessment. It has been argued that the Chinese context may not be conducive to the practice of formative assessment as understood and carried out in Western, Anglophone contexts (Chen 2009). In light of these arguments, it follows that the formative assessment initiative in China needs investigation for its particularities and local interpretations.

Disciplinary factors: College English

The College English programme in Chinese universities involves non-English major undergraduate students completing English language studies as part of their undergraduate course. Over the past 25 years, College English assessment has largely reflected the features of psychometric tests (Kunnan 2005). The College English Test (CET) is a large-scale standardised test used since the mid-1980s. It was a compulsory component of Chinese non-English major students’ successful graduation from their degree courses until its mandatory nature was removed in early 2005 (CMoE 2005). Nonetheless, a testing orientation is still prevalent within College English teaching and assessment (Tang 2005; Wang 2007). The high-stakes use of College English test results, particularly for employment purposes, has reinforced the continued use of tests and examinations (Chen 2009).

Formative assessment in the CECR

The CECR policy introduced by the CMoE in 2007 provided a general statement about assessment in its introduction:

Assessment is an important part of College English curriculum. Comprehensive, objective, scientific and accurate measurement is crucial for teaching objectives to be fulfilled. (CMoE 2007, 5)

The statement defines assessment as having a comprehensive reach, objectivity and accuracy. The reference to scientific measurement suggests an orientation to assessment as a quantitative endeavour. With regard to formative assessment, the policy defines it as follows:

Formative assessment is the procedural and developmental assessment conducted during the process of teaching and learning … It is a means to adapt various assessment approaches and means to follow up the teaching and learning process, and to provide timely feedback information so as to enhance students’ overall development … Formative assessment includes self-assessment, peer-assessment, teachers’ and the administration’s assessment of students’ learning … It is used to observe, evaluate and monitor the learning process for the purpose of enhancing effective learning. (CMoE 2007, 5)



This latter policy statement is elaborative and presents the Ministry’s definition of formative assessment. The emphasis is on the timing and process of formative assessment: ‘during the process of teaching and learning’. The use of timely feedback is advocated and modes other than testing are suggested. The involvement of students in assessment is advocated via the use of self- and peer-assessment. The policy intent for better learning outcomes through reformed assessment is clearly articulated.

Taken together, this explanation of formative assessment appears to align with the key principles accepted in Western contexts where this approach to assessment had its genesis. Yet questions have been raised about the impact of government-mandated reforms on Chinese classroom practice and calls made for research that investigates the enactment of policy in classrooms (Deng and Carless 2010). Drawing on a study by Xu and Liu (2009), Carless (2011) maintains that there is little pre- or in-service training in formative assessment for teachers in China and that teachers view assessment as a stand-alone addition rather than integral to teaching and learning. The research reported here was directed at the ways in which personnel in two socioculturally differentiated universities interpreted the definitions of formative assessment in CECR policy in their respective institutional CET policies and teaching practices.

Methodology

The research study adopted a qualitative case study approach which enabled deep exploration (Yin 2003) to address two questions:

(1) How is formative assessment interpreted in the College English policies in the two Chinese universities?

(2) How do teachers understand and take up formative assessment in practice?

Table 1. Data: Methods and participants.
Individual interview (administration): KU source: senior administrator; NKU source: senior administrator; tool: senior administrator interview schedule.
Individual interview (teachers): KU sources: KU-T1, KU-T2; NKU sources: NKU-T1, NKU-T2; tool: teacher interview schedule.
Focus group interview (teachers): KU sources: KU-T3, KU-T4, KU-T5, KU-T6; NKU sources: NKU-T3, NKU-T4, NKU-T5, NKU-T6, NKU-T7; tool: teacher interview schedule.


The questions orient to the localised responses of the two universities to the government-level policy initiative. Two major data-sets were generated to address the questions: (i) interviews with a senior CET administrator at each university that were transcribed and notated and (ii) interviews with two individual teachers and a teacher focus group at each university that were also transcribed and notated. To preserve anonymity, the teachers were coded with their university category and a number: KU-T1 to KU-T6 and NKU-T1 to NKU-T7. The KU teachers consisted of two females and four males with teaching experience ranging from 5 to 25 years. The NKU teachers were five females and two males, also with teaching experience ranging from 5 to 25 years. The administrator interviews covered key questions about the College English assessment policy within each university and changes that had been made in response to the CECR formative assessment initiative. The teacher interviews focused mainly on teachers’ understanding of the institutional policy and their uptake of formative assessment in classrooms. The interviews were conducted in Chinese, the first language of the participants. The interviews were transcribed and translated by the author-researcher Chen. Table 1 details the data sources.

The coding process was descriptive, analytic and iterative (Richards 2005). NVivo 8 was used to facilitate the coding: the imported data were firstly coded as nodes, an NVivo expression of the basic codes. The coded nodes were then clustered and categorised in accordance with the emergent themes. For instance, the coding at the institutional level, as informed by the senior administrator interviews, generated themes related to the institutional responses to the formative assessment initiative. The coding of teacher interview data produced themes such as the teachers’ views on formative assessment and their classroom responses to the policy initiative. The themes that emerged from the analysis of the interview data from the senior administrator and the teachers were synthesised and analysed. A constant comparative method, which involves constant induction and comparison (Merriam 2009), was used to identify similarities and differences within the data and emergent themes of the two cases. The findings are presented as follows: (1) the interpretations of formative assessment in College English policies at the respective universities and (2) the understandings and uptake of formative assessment by teachers in their classroom practice at the two universities.
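The node-to-theme structure described above can be pictured with a minimal sketch. This is not the authors' NVivo 8 procedure; it is a hypothetical Python illustration in which the node labels, theme names and excerpt paraphrases are invented solely to show how coded segments are grouped under emergent themes.

```python
# Illustrative sketch only (not the authors' NVivo 8 workflow): interview
# segments carry basic codes ("nodes"), and nodes are clustered under emergent
# themes. All labels and excerpts below are hypothetical paraphrases.
from collections import defaultdict

# (speaker, excerpt paraphrase, node) triples standing in for coded segments.
coded_segments = [
    ("KU-T1", "alterations made within unified proportions", "teacher flexibility"),
    ("NKU-T4", "ten percent is a token gesture", "weighting concerns"),
    ("NKU-T7", "the administration does not trust teachers", "institutional control"),
]

# Node-to-theme clustering; in the study this step was an interpretive,
# iterative judgement rather than anything automatic.
theme_of_node = {
    "teacher flexibility": "classroom responses to the policy initiative",
    "weighting concerns": "teachers' views on formative assessment",
    "institutional control": "teachers' views on formative assessment",
}

themes = defaultdict(list)
for speaker, excerpt, node in coded_segments:
    themes[theme_of_node[node]].append((speaker, excerpt))

for theme, excerpts in sorted(themes.items()):
    print(theme, excerpts)
```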

Figure 2. KU assessment framework: College English assessment (100%) comprises process assessment (60%: classroom participation 20%, assignment/quiz 20%, attendance 20%) and a final term exam (40%).


University interpretations of formative assessment

In response to the CECR initiative, KU introduced a formative assessment element that was conducted during the semester. This element was referred to as process assessment (guòchéng píngjià 过程评价), which is a literal translation of the term used interchangeably with formative assessment by the senior administrator and the teachers. Process assessment together with the end-of-semester achievement test or final term exam (期终考试 qīzhōng kǎoshì) comprised the College English assessment at KU. According to the senior administrator, process assessment accounted for 60% of the overall assessment for College English at KU and was calculated as follows: classroom participation (20%), assignments or quizzes (20%) and attendance (20%) (Figure 2).

Students’ performances in these categories were recorded. A final grade was allocated by teachers and used for end-of-term reporting. There were strict institutional requirements regarding attendance: three absences and the student automatically failed. Assessing classroom participation and assignments was the teacher’s responsibility.

At NKU, the term process assessment also emerged and was used interchangeably with formative assessment. The College English assessment framework at NKU comprised process assessment and a final term exam. This framework was a significant change, as previously NKU had relied solely on the CET (Bands 4 or 6) or an achievement test at the end of each term. At the time of the study, 10% of the overall College English assessment was allocated to process assessment, which was further divided as follows: classroom participation (4%), assignments (3%) and attendance (3%) (Figure 3).

To achieve the 4% for classroom participation, students needed to perform well in class; to achieve the 3% for assignments, they needed to submit a minimum of three assignments. Attendance at 3% was calculated through subtraction of a point for each recorded absence. The combined result formed a process grade (平时成绩 píngshí chéngjì) and represented 10% of the final assessment; the final exam constituted 90% of marks. The total grade was calculated from the outcomes of the process grade and the final exam and used for reporting purposes.

Figure 3. NKU assessment framework: College English assessment (100%) comprises process assessment (10%: classroom participation 4%, assignment/quiz 3%, attendance 3%) and a final term exam (90%).
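To make the contrast between the two weighting schemes concrete, the sketch below combines the reported percentages into a single reported grade. It is a hypothetical illustration, not part of the study: the component names, the 0-100 scale and the example scores are assumptions.

```python
# Hypothetical illustration of how the reported weightings combine into the
# grade used for end-of-term reporting; scores are assumed to be out of 100.
KU_WEIGHTS = {"participation": 0.20, "assignment_quiz": 0.20,
              "attendance": 0.20, "final_exam": 0.40}
NKU_WEIGHTS = {"participation": 0.04, "assignment_quiz": 0.03,
               "attendance": 0.03, "final_exam": 0.90}

def total_grade(scores, weights):
    """Weighted sum of component scores."""
    return sum(weights[name] * score for name, score in scores.items())

# A student with full process marks but a modest exam result (60/100):
student = {"participation": 100, "assignment_quiz": 100,
           "attendance": 100, "final_exam": 60}
print(total_grade(student, KU_WEIGHTS))   # 84.0: process work shifts the grade substantially
print(total_grade(student, NKU_WEIGHTS))  # 64.0: the exam still dominates the reported grade
```

Under the NKU scheme the process components can move the reported grade by at most ten points, which is consistent with the teachers' concern, reported below, that the 10% weighting was a token gesture.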


In both universities the interpretation of formative assessment, although termed process assessment, resulted in a grade or a mark. The universities’ interpretation and realisation of formative assessment in terms of weightings and grades, and the inclusion of criteria such as attendance, point to a shift in the understandings of formative assessment away from the original principles as defined by the ARG. They also highlight the powerful influence of summative assessment as a major barrier to the implementation of formative assessment for teaching and learning purposes (Carless 2011).

Comparative analysis

A salient difference in the weightings allocated to process assessment (60% at KU as compared to 10% at NKU) emerged in the comparative analysis. This could be closely related to the individual institutional cultures of the two universities. The following quote from a KU senior administrator is illustrative of the institutional culture of KU: ‘Process assessment in our institution has been increased over the years to 60% …’. At KU, teachers were entrusted with opportunities to take control of their classroom practice more than ever before. KU appeared to promote an institutional culture that was more open to and supportive of change.

In contrast, the institutional culture at NKU seemed to be predicated on a concern about and lack of trust in teachers’ professional judgement. This was evident when the NKU senior administrator explained the minor weighting apportioned to process assessment (10%):

… We have argued for more weighting for the process assessment. Some teachers raised this question too. We did try, but the Teaching Administrative Office rejected the proposal. The rejection was grounded in the finding that some teachers did not take the matter seriously, like giving all his or her students full scores …

The different responses of the universities to the roles and responsibilities of teachers in relation to assessment are indicators of institutional culture. They are embedded in complex meshes of understandings and practices, including teachers’ level of expertise and training, expectations of students and understandings about the functions and implementation of assessment. This finding aligns with the contentions that culture and context internalise each other (Kettle 2011) and are influential in the adoption of formative assessment options that appear most feasible (Carless 2011).

Despite the differences in weightings, a striking similarity surfaced in the comparative analysis between the teachers’ accounts at the two universities. The similarity manifested in the ways that the teachers presented formative assessment as process assessment. Their use of the term process assessment foregrounded timing and frequency. That is, for them formative assessment was defined as continuous and ongoing, as opposed to a one-off event at the end of a certain period. There was, however, very little articulation of the procedures and purposes of formative assessment in relation to teaching and learning, nor explicit articulation of the role of defining features such as the use of feedback, which was a key principle in the national policy and in original theorisations.

A second similarity in the data from the two universities was the specification that process assessment should include student participation in classroom activities, assignments and attendance. Rather than the sole use of tests, alternative modes of assessment were presented in policies that relied on teachers’ judgement and decision-making. Student involvement in assessment, however, was not emphasised in either the respective policy frameworks or in the senior administrators’ interviews. A third similarity was the requirement within both institutional policies that records be made of students’ performances in the three areas of process assessment, that is, class participation, assignments and attendance. The purpose was to generate a process grade which was combined with the final exam result and used for reporting. In both universities, the so-called process assessment had a summative purpose.



These similarities indicate that process assessment as interpreted in the institutional policies, though used interchangeably with formative assessment, did not convey the same meaning of formative assessment as intended in the CECR national policy. Consistent localised interpretations across the policies of both universities included a shift from the one-off summation of learning to the collection of multiple sources of evidence during the teaching and learning process. Also evident was a shift from sole reliance on objective testing to the incorporation of teachers’ professional judgements. However, these shifts did not necessarily mean that a shift had occurred involving the summative and formative purposes of assessment: indeed, the ‘objective’ measurement function remained the focus.

Classroom uptake

Analysis of the teacher interview data indicated the strong influence of the institutional assessment policy on the teachers’ assessment practices in both universities. Every teacher recognised a need to adhere to the prescribed assessment framework and reported procedures that reflected the institutional policies. Specifically, the teachers assigned tasks to students or conducted quizzes, referred to the students’ participation in classroom activities, and checked attendance. Moreover, in accordance with institutional policies, they made records of the students’ performances in the three required areas and generated a process grade for summative use.

Despite the quite consistent uptake of policy stipulations, teachers from both universities also acknowledged the existence of flexibility, particularly with regard to specific assessment tasks and how to proceed with them in the classroom. For instance, the task types and their frequency varied between teachers within and across the two contexts. This flexibility was also reflected in the ways that teachers monitored students’ attendance; for example, some teachers at both institutions asked students to answer questions while others assessed students’ weekly assignments. While teachers took up the stipulated model of assessment, they nonetheless had the freedom to interpret how these requirements could be met through task types of their own choosing. Some of their views include:

NKU-T1: We must first of all do whatever the institutional policy requires. But as a teacher, you can make the best of the room [flexibility] allowed.

NKU-T7: I think it is more of a personal choice. A process grade is required by (the institutional assessment policy) though, isn’t it?

NKU-T3, NKU-T4 and NKU-T5: Yes, it is.


KU-T1: … we have unified arrangements on the respective proportions each item accounts for. Based on that, I have made some alterations as I think the best … This is not strictly regulated.

In relation to conscious change in response to the institutional policy initiatives around process/formative assessment, teachers at both universities reported little or no change to their assessment practices. Even two teachers who reported giving constructive feedback to students for the purposes of learning and improvement commented, respectively: ‘We have been doing like this all along’ (KU-T4) and ‘not much change at all’ (NKU-T4). Given that ‘finding new tools and changed classroom practices’ are the necessary symbols of the development of formative assessment (Pryor and Crossouard 2008, 2), the largely unchanged status of assessment practices pointed to the possibility that the essence and potential of the nationally mandated formative assessment initiative was not fully understood by teachers.

While similarities in the institutional responses of the two universities were found, differences were also evident between the teacher groups. One difference was the teachers’ responses to the differentiated weightings of process assessment. At the KU, where the university policy allocated 60% to process assessment, the teachers seemed to be satisfied with the policy. In contrast, at the NKU, where the weighting for process assessment was 10%, concerns were expressed by some teachers who felt the percentage was a token gesture. NKU-T4 commented: ‘Actually I am thinking we don’t have formative assessment for its own sake. Ten percent doesn’t mean anything; it can’t possibly play the role it is supposed to’. This tension, mentioned by teachers in each of the four interviews conducted at NKU, had a detrimental impact on the teachers. The quote below is illustrative:

… The authority does not seem to trust us teachers. They are afraid we will abuse this power. On this point, I strongly disagree. I think since you distrust me, I will not bother to differentiate between a nine and a ten. (NKU-T7)

The exercise of institutional control seems to have adversely affected the teachers’ assessment practices and commitment and aroused resistance to the institution. This finding aligns with other research (e.g. Carless 2009; Wang and Cheng 2005) and is one of the factors that can undermine reform.

Implications

The findings of the study have implications for future assessment policy change and practices in the Chinese context. It is apparent that the narrowed meaning of formative assessment in the institutional policy represents a response to the ‘mixed message’ inherent in the communication conveyed in the CECR. For the message to be taken up in the local settings, a borrowed policy needs to be delivered in a way that policy-makers at all levels understand, being aware of the significance of the policy and the rationale for change. Otherwise, the local policy may be adopted in an unintended manner or practices remain largely unchanged.

The appropriation of the national policy at the institutional level is crucial in that the resultant institutional policy constitutes the guidelines to be enacted within the particular local context. It prescribes the procedures for the translation of policy meaning into classroom practice as well (Levinson, Sutton, and Winstead 2009).


It is this local policy that, to a great extent, informs teachers’ actual implementation of the policy in their classrooms.

The different weightings allocated to process assessment in the two Chinese universities resulted from the constraints and affordances of the respective institutional cultures and conditions. However, despite the multiple differences, both university policies interpreted formative assessment as process assessment but prioritised the summative uses of assessment results. The influence of the Chinese cultural, institutional and historical conditions was evident. The focus on a summative grade, for instance, paralleled the Chinese assessment tradition that values the product of learning more than the process (Han and Yang 2001). It also aligned with the value attributed to English language scores in uses beyond the university, notably in workplaces. Associated with the need for outcomes was the ongoing centrality of the test, particularly at the more conservative and less prosperous NKU. The 90% allocation of student assessment to a final ‘objective’ exam was linked to administrator mistrust of teacher judgements, a point recognised and resented by teachers.

Findings from this study indicate that assessment practices in classrooms are complicated. On the one hand, obliged to follow the institutional assessment framework and procedures, the teachers took up the locally interpreted meaning of formative assessment. That is, process assessment was practised in classrooms. The strong influence of the institutional policy was evident. On the other hand, flexibility enabled teachers to conduct assessment in their preferred ways. Overall, the constraints of institutional impositions; student, teacher and external expectations; and cultural–historical traditions meant that there was some, albeit limited, change to classroom practice.

A sociocultural perspective regards response to mandates such as policy as a communal practice (Wenger 1998). Process assessment, as prescribed in the institutional policy and the teachers’ uptake, was situated and thereby mediated by the complex sociocultural factors of the Chinese context. The two universities similarly implemented formative assessment as process assessment despite the differences in institutional situations. Process assessment appears therefore to be a culturally situated interpretation of formative assessment. It embodies cultural values, and reflects contextual reality and history. It seems the values shared within the respective university communities functioned as cultural tools for thinking and mediating the meaning of formative assessment at the institutional level (e.g. Levinson, Sutton, and Winstead 2009).

Conclusion

To conclude, formative assessment as defined in the CECR espouses many of the principles derived from the Western Anglophone context (e.g. ARG 2002; Black et al. 2003). However, the Chinese addition of a measurement function to formative assessment illustrates the ongoing historical and cultural influence of the summative orientation to assessment in China. The analysis of the two universities’ attempts to adopt more formative approaches to assessment can be described as a demonstration of the ‘contextually-grounded approach’ (Carless 2011). Such research on formative assessment is now an important priority ‘in CHC where summative assessment is so dominant’ (Carless 2011, 4). Prescribing dual functions for formative assessment can be regarded as the negotiation and adaptation that policies make as the uptake of formative assessment is mediated by the social, historical and cultural conditions of the Chinese context.



Notes on contributors

Qiuxian Chen is a lecturer in the Foreign Language School at Shanxi University in Taiyuan, China, and has research interests in English language learning and assessment.

Margaret Kettle lectures in second language teaching methodology and sociolinguistics in the Faculty of Education at Queensland University of Technology, with research interests in international education, global English and discourse analysis.

Val Klenowski is a professor in education in the Faculty of Education at Queensland University of Technology, with major research interests in curriculum reform, assessment and evaluation.

Lynette May lectures in second language assessment and language teaching methodology in the Faculty of Education at Queensland University of Technology, with major research interests in assessing second language speaking.

References

Assessment Reform Group (ARG). 2002. Assessment for learning: 10 principles. http://www.assessment-reform-group.org/CIE3.PDF (accessed January 11, 2007).
Biggs, J.B. 1996. Western misconceptions of the Confucian-heritage learning culture. In The Chinese learner: Cultural, psychological and contextual influences, ed. D.A. Watkins and J.B. Biggs, 45–68. Hong Kong: CERC and ACER.
Black, P., C. Harrison, C. Lee, B. Marshall, and D. Wiliam, eds. 2003. Assessment for learning: Putting it into practice. Maidenhead: Open University Press.
Black, P., and D. Wiliam. 1998. Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice 5: 7–75.
Black, P., and D. Wiliam. 2005. Lessons from around the world: How policies, politics and cultures constrain and afford assessment practices. Curriculum Journal 16, no. 2: 249–61.
Bloom, B.S., J.T. Hastings, and G.F. Madaus. 1971. Handbook on the formative and summative evaluation of student learning. New York, NY: McGraw-Hill.
Carless, D. 2009. Trust, distrust and their impact on assessment reform. Assessment & Evaluation in Higher Education 34, no. 1: 79–89.
Carless, D. 2011. From testing to productive student learning: Implementing formative assessment in Confucian-heritage settings. New York, NY: Routledge.
Chen, Q. 2009. The potential barriers to College English assessment policy change in China: A sociocultural perspective. In Educational planet shapers: Researching, hypothesising, dreaming the future, ed. B. Garrick, S. Poed, and J. Skinner, 115–26. Brisbane: Post Pressed.
Cheng, L., and A. Curtis. 2009. English language assessment and the Chinese learner. New York, NY: Routledge.
Cheng, L., T. Rogers, and H. Hu. 2004. ESL/EFL instructors’ classroom assessment practices: Purposes, methods, and procedures. Language Testing 21, no. 3: 360–89.
Cheng, L., W.T. Rogers, and X. Wang. 2008. Assessment purposes and procedures in ESL/EFL classrooms. Assessment & Evaluation in Higher Education 33, no. 1: 9–32.
Cheng, L., and X. Wang. 2007. Grading, feedback, and reporting in ESL/EFL classrooms. Language Assessment Quarterly 4, no. 1: 85–107.
China Ministry of Education (CMoE). 1999. On 211 Project. http://www.moe.edu.cn/edoas/website18/level3.jsp?tablename=724&infoid=5607 (accessed December 1, 2009).
CMoE. 2005. Second press conference. http://www.moe.gov.cn/publicfiles/business/htmlfiles/moe/moe_746/200506/9506.html (accessed October 30, 2007).


CMoE. 2007. College English Curriculum Requirements [daxue yingyu kecheng yaoqiu]. http://www.moe.edu.cn/edoas/website18/info34295.htm [in Chinese] (accessed September 27, 2007).
Davison, C. 2005. The contradictory culture of teacher-based assessment: ESL teacher assessment practices in Australian and Hong Kong secondary schools: Erratum. Language Testing 22: 120–33.
Davison, C. 2008, March. Assessment for learning: Building inquiry-oriented assessment communities. Paper presented at the 42nd annual TESOL Convention and Exhibition, International TESOL, April 2–5, 2008, in New York.
Davison, C., and C. Leung. 2009. Current issues in English language teacher-based assessment. TESOL Quarterly 43: 393–415.
Deng, C., and D. Carless. 2010. Examination preparation or effective teaching: Conflicting priorities in the implementation of a pedagogic innovation. Language Assessment Quarterly 7: 285–302.
Gong, F., and J. Li. 2010. Seeking excellence in the move to a mass system: Institutional responses of key Chinese comprehensive universities. Frontiers of Education in China 5, no. 4: 477–506.
Han, M., and X. Yang. 2001. Educational assessment in China: Lessons from history and future prospects. Assessment in Education 8, no. 1: 5–10.
Ho, D.Y., S. Peng, and F.S. Chan. 2001. Authority and learning in Confucian-heritage education: A relational methodological analysis. In Multiple competencies and self-regulated learning: Implications for multicultural education, ed. C. Chiu, F. Salili, and Y. Hong, 29–48. Greenwich: IAP.
Hu, D. 2001. The globalisation of the English language: Reflections on the teaching of English in China. International Education Journal 2, no. 4: 126–33.
Hu, G. 2002. Potential cultural resistance to pedagogical imports: The case of communicative language teaching in China. Language, Culture and Curriculum 15, no. 2: 93–105.
Hume, A., and R.K. Coll. 2009. Assessment of learning, for learning, and as learning: New Zealand case studies. Assessment in Education: Principles, Policy & Practice 16, no. 3: 269–90.
Kettle, M. 2011. Academic practice as explanatory framework: Reconceptualising international student academic engagement and university teaching. Discourse: Studies in the Cultural Politics of Education 32, no. 1: 1–14.
Klenowski, V. 2009. Assessment for learning revisited: An Asia-Pacific perspective. Assessment in Education: Principles, Policy & Practice 16, no. 3: 263–8.
Kunnan, A.J. 2005. Language assessment from a wider context. In Handbook of research in second language teaching and learning, ed. E. Hinkel, 779–94. London: Lawrence Erlbaum.
Levinson, B., and M. Sutton. 2001. Introduction: Policy as/in practice – A sociocultural approach to the study of educational policy. In Policy as practice: Toward a comparative sociocultural analysis of educational policy, ed. B.A.U. Levinson and M. Sutton, 1–22. Westport, CT: Ablex.
Levinson, B., M. Sutton, and T. Winstead. 2009. Education policy as a practice of power. Educational Policy 23, no. 6: 767–95.
Marshall, B., and M.J. Drummond. 2006. How teachers engage with assessment for learning: Lessons from the classroom. Research Papers in Education 21, no. 2: 133–49.
Meng, X., X. Cheng, R. Zhang, and Z. Chou. 1961. Book of Rites: Examples in China’s ancient educational history. Beijing: People’s Education Press.
Merriam, S.B. 2009. Qualitative research: A guide to design and implementation. 2nd ed. San Francisco, CA: Jossey-Bass.
Pryor, J., and B. Crossouard. 2008. A socio-cultural theorisation of formative assessment. Oxford Review of Education 34, no. 1: 1–20.
Qi, L. 2005. Stakeholders’ conflicting aims undermine the washback function of a high-stakes test. Language Testing 22: 142–73.
Qiang, H., and Y. Kang. 2011. English immersion in China as a case of educational transfer. Frontiers of Education in China 6, no. 1: 8–36.
Richards, L. 2005. Handling qualitative data: A practical guide. London: Sage.
Rogoff, B. 2003. The cultural nature of human development. Oxford: Oxford University Press.


Scriven, M. 1967. The methodology of evaluation. In Perspectives of curriculum evaluation, ed. R.W. Tyler, 39–85. Chicago, IL: Rand McNally.
Stobart, G. 2008. Testing times: The uses and abuses of assessment. New York, NY: Routledge.
Tang, X. 2005. CET 4/6 and college English curriculum evaluation. Foreign Language Education (waiyu jiaoxue) 26, no. 1: 56–60.
Torrance, H., and J. Pryor. 1998. Investigating formative assessment: Teaching, learning and assessment in the classroom. Buckingham: Open University Press.
United Nations Educational, Scientific and Cultural Organization (UNESCO). 2011. World data on education 2010/2011: People's Republic of China. http://www.ibe.unesco.org/fileadmin/user_upload/Publications/WDE/2010/pdf-versions/China.pdf (accessed September 10, 2012).
Wang, J. 2007. The college English test in China: Challenges and suggestions. Asian Journal of English Language Teaching 17: 137–44.
Wang, H. 2011. Access to Higher Education in China: Differences in opportunity. Frontiers of Education in China 6, no. 2: 227–47.
Wang, H., and L. Cheng. 2005. The impact of curriculum innovation on the cultures of teaching. The Asian EFL Journal Quarterly 7, no. 4: 7–32.
Wenger, E. 1998. Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press.
Wenger, E. 2000. Communities of practice and social learning systems. Organization 7, no. 2: 225–46.
Xu, Y., and Y. Liu. 2009. Teacher assessment knowledge and practice: A narrative inquiry of a Chinese college EFL teacher's experience. TESOL Quarterly 43, no. 3: 493–513.
Yan, F. 2010. Tensions within the changing Chinese Higher Education system. Frontiers of Education in China 5, no. 4: 473–6.
Yin, R.K. 2003. Case study research: Design and methods. 3rd ed. Vol. 5. London and New Delhi: Sage.
Zhang, R. 2004. Using the principles of exploratory practice to guide group work in an extensive reading class in China. Language Teaching Research 8, no. 3: 331–45.
Zhao, Y., and K. Campbell. 1995. English in China. World Englishes 14, no. 3: 377–90.
Zhu, W.Z. 1992. Confucius and traditional Chinese education: An assessment. In Education and modernization: The Chinese experience, ed. R. Hayhoe, 3–22. Oxford: Pergamon.
