

Language Assessment Quarterly
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hlaq20

Survey to Investigate Expectations of Achievement in Written English on English Language Degree Programmes in Europe
Carole Sedgwick, Roehampton University
Published online: 05 Dec 2007.

To cite this article: Carole Sedgwick (2007) Survey to Investigate Expectations of Achievement in Written English on English Language Degree Programmes in Europe, Language Assessment Quarterly, 4:3, 235-256

To link to this article: http://dx.doi.org/10.1080/15434300701462879



LANGUAGE ASSESSMENT QUARTERLY, 4(3), 235–256
Copyright © 2007, Lawrence Erlbaum Associates, Inc.


ARTICLE

Survey to Investigate Expectations of Achievement in Written English on English Language Degree Programmes in Europe

Carole Sedgwick
Roehampton University

From June to November 2003 an exploratory survey was made of the assessment of written English in a small sample of English language degrees in Europe. All programmes had similar components, the study of English language and literature, cultural or area studies, and English language development. Some involved the study of another language. Theoretical perspectives on test validity informed the construction of a questionnaire to collect quantitative and qualitative data from the programmes in the sample in order to investigate expectations about attainment of proficiency in written English. Responses were obtained from 32 English language degree programmes in 14 European countries: Austria, Belgium, Denmark, France, Germany, Italy, Latvia, the Netherlands, Poland, Portugal, Romania, Spain, Switzerland, and the United Kingdom. The sample was not large enough to be representative of such degrees throughout Europe, but this exploratory study revealed a wide variation between countries, within the same country and, in some cases, within the same department with regard to what was measured, the contexts sampled, and the assessment instrument adopted and measures taken to ensure the reliability of the assessment. The study revealed variable evidence to support the outcomes of the assessment across the sample, leading to the conclusion that there was wide variation in what is attained in proficiency in written English on these English language degree programmes, which is indicative of similar degrees across Europe. The framework used for data collection yielded themes, practices, and concerns that would inform the development of common guidelines for assessment.

Correspondence should be addressed to Carole Sedgwick, Roehampton University, Digby Stuart College, Roehampton Lane, London SW1 5PH, United Kingdom. E-mail: [email protected]


INTRODUCTION

The Bologna Declaration (June 19, 1999) signed by 29 European countries was a landmark commitment to the creation of a pan-European zone for Higher Education to become a force for quality in the international market. The movement has expanded, and there were 40 signatories to the Bologna agreement at the Berlin conference in September 2003. A key feature of the agreement is to establish comparable standards with common systems of quality assurance to enable the free mobility of students, teachers, and researchers. To achieve these aims, the signatories have agreed to a common 3-year undergraduate and 2-year postgraduate cycle with a European Credit Transfer System to be in place by 2010.

The Council of Europe views cohesion in education as a vital factor in the interest of maintaining human rights and freedoms in Europe. It is particularly concerned not only to foster the linguistic and cultural diversity of its member states but also to promote plurilingualism and pluriculturalism among its citizens. It supports the Bologna Process and the European Centre for Modern Languages, and through the Language Policy division in Strasbourg, it has sponsored the development of the 2001 Common European Framework of Reference (CEFR)1 and the development of a European Language Portfolio (Council of Europe, n.d.).

To develop a harmonious approach to language teaching based on common principles by pooling, through international co-operation, member States' experience and expertise in this area. The aim is to promote a coherent learner-centred approach to language teaching, integrating aims, content, learning experiences and assessment. (Council of Europe Language Policy)

Hasselgren (1999) reported a wide variation in the importance attributed to language assessment and the methods used for assessment, in a survey to investigate exit testing of languages on undergraduate language and nonlanguage majors in several European countries; the assessments were undertaken by a testing sub-project of the Thematic Networks of the European Language Council. The study presented here undertook to build on the findings of that project by systematically investigating attainment in a language skill on language degree programmes over a broad geographical range in Europe, following the initiation of the Bologna Process and the publication of the Common European Framework.

1A reference guide for teachers, learners, and assessors that includes descriptions of language ability at six levels: A1, A2, B1, B2, C1, C2 (A1 = basic user and C2 = proficient user).


The Common European Framework has made an important contribution to quality assurance by developing language descriptors that can be used as a reference for the specification of objectives for learning, teaching, and assessment. It seemed timely to investigate what students were expected to achieve and the extent to which there was agreement with regard to reports of attainment in English language proficiency on English language majors in European universities. English is an important language in the European Union and is, therefore, a language that would be studied as a language major in a large number of universities across Europe. Writing was chosen as the skill that was most likely to be assessed on a language programme at university level.

METHOD

Analytical Framework

Given that the interpretation, value, or meaning that can be attributed to the results of an assessment is generally seen as being the concern of validity studies, in this study I have used current ideas about validity as a basis for the collection of data. This was done to ensure systematicity in the data collection and to allow for meaningful observations to be made regarding the assessment instruments reviewed.

Messick (1989, 1995) advocated a unitary concept of validity, incorporating content, criterion, and consequential validities, which had previously been treated as separate considerations in the validation process:

Validity is broadly defined as nothing less than an evaluative summary of both the evidence for and the actual—as well as the potential—consequences of score interpretation and use (i.e. construct validity conceived comprehensively). This comprehensive view of validity integrates considerations of content, criteria and consequences into a comprehensive framework of empirically testing rational hypotheses about score meaning and utility. (Messick, 1995, p. 742)

There is current agreement that a multiplicity of evidence should be used to support the interpretation of the score on an assessment.

Validation can be seen as a form of evaluation where a variety of quantitative and qualitative methodologies are used to generate evidence to support inferences from test scores. The validity of a test score does not lie in what the test designer claims, rather they need to produce evidence to support such claims. (Weir, 2005a, p. 15)


The evidence collected should relate to all the factors in the assessment that contribute to the final score value. Five key types of validity evidence are currently recognised in the testing literature as contributing to the validity of an assessment. Weir uses the following descriptors for these key types: context, theory-based, score-based, criterion-related, and consequential (Weir, 2005a):

• Context validity is the extent to which test performance is representative of the performance domain that is specified for the test.

• Theory-based validity is the extent to which the test measures the underlying cognitive abilities that are specified for the test. The specification of these underlying abilities is based on a theory of language and language processing that informs assessment design and evaluation.

• Score-based validity relates to the reliability of an assessment—the measures that are taken to ensure that the score obtained on an assessment task is a reliable measure of what is being assessed. The reliability of an assessment was usually considered independently from validity. However, it has increasingly been perceived as inseparable from validity in its contribution to the value of the final score (Alderson, Clapham, & Wall, 1995; Bachman, 1990). Weir (2005a) considered it as one of the validities that contributes to the overall validity of an assessment.

• Criterion-related validity is the extent to which the score on an assessment relates to a score on another assessment that purports to measure the same criterion, or the extent to which the score correlates with a measure of the same criterion in a future target language situation.

• Consequential validity is regarded as the perceived value of the assessment for stakeholders: teachers, learners, parents, governments, and official bodies and the marketplace (Hamp-Lyons & Kroll, 1997).

All of these aspects of validity would interrelate in the assessment situation. Weir (2005a) advocated the collection of evidence to support each in order to justify the meaning, interpretation, and value of the results of an assessment. In the study presented here, quantitative and qualitative data were collected in relation to the five types of validity evidence just described to investigate the meaning of the score on the assessment of written English at the end of the language development component on English language majors in European universities.


Instrument

A questionnaire was constructed to collect five types of validity evidence as described by Weir (2005a).2 The subsections of the questionnaire listed next were designed to elicit the different types of validity evidence: Section 3A—context validity, Section 3B—theory-based validity, Section 3C—scoring validity, Section 2 (3)—criterion-related validity, and Section 3D—consequential validity.
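For readers who want to tabulate responses by the type of evidence each section was designed to elicit, the section-to-evidence mapping above can be held in a small lookup table. The following minimal Python sketch is illustrative only and is not part of the study; the helper name evidence_type_for is hypothetical.

# Minimal sketch: map questionnaire sections to the validity evidence they
# were designed to elicit, as listed above. Illustrative only; the helper
# name `evidence_type_for` is hypothetical and not from the study.
SECTION_TO_EVIDENCE = {
    "3A": "context validity",
    "3B": "theory-based validity",
    "3C": "scoring validity",
    "2(3)": "criterion-related validity",
    "3D": "consequential validity",
}

def evidence_type_for(section: str) -> str:
    """Return the type of validity evidence a questionnaire section targets."""
    return SECTION_TO_EVIDENCE.get(section, "unknown section")

for section in ("3A", "3B", "3C", "2(3)", "3D"):
    print(section, "->", evidence_type_for(section))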

The questionnaire was posted on the Roehampton University Web site for ease of response. Closed-response questions were used to collect context and scoring validity evidence, where the categories could be predetermined. Open-response questions were used to collect richer contextual and theory-based validity data, where categories could not be predetermined, and to enable respondents to provide further clarification on the closed responses and the writing assessment process as a whole. Respondents were asked to give their names and contact details in case clarification was needed for the open-ended responses and to enable circulation of the findings and invitations to follow-up studies. All participants were assured that the details given would be strictly confidential.

Participants

Academics responsible for the final assessment of students' written English at the end of an English language major programme in 64 universities representing a broad geographical range in Europe were invited to participate. The cascade method of sampling was adopted, as updated lists of relevant contacts in each country were not available.3 E-mail messages were sent to 64 heads of department identified in 18 countries inviting them to participate. In cases where the head of department was not responsible for the assessment of written English at the required level, the appropriate member of the department was requested to respond. Responses were received from academics representing 32 English language majors in 14 European countries:4 Austria (1), Belgium (3), Denmark (1), France (1), Germany (3), Italy (8), Latvia (1), the Netherlands (4), Poland (1), Portugal (2), Romania (1), Spain (3), Switzerland (1), and the United Kingdom (2).5

2The questionnaire was developed following interviews with five informants in different European countries about their assessment practices and piloted with five respondents. Please contact the author to obtain this questionnaire.

3Initial contact was made with European colleagues of members of the modern languages department in the author's university, the British Council in each European country, and members of relevant international and European organisations (the International Association of Teachers of English as a Foreign Language, the European Language Council, the European Centre for Modern Languages), and university Web sites were investigated in countries in which the languages are spoken by the author (France and Italy).

4The number of responses for each country is given in brackets.

5English language majors for nonnative speakers of English.


All the programmes had similar components: the study of English language and literature, cultural or area studies, and English language development. Some involved the study of another language.

Research Questions

The research questions, which focused on the final assessment of written English on English language majors across a range of countries, were as follows:

RQ1: What is assessed in terms of proficiency in written English?
RQ2: To what extent are reports of attainment comparable?

RESULTS AND DISCUSSION

Responses from academics representing 32 English language majors in 14 European countries (Austria, Belgium, Denmark, France, Germany, Italy, Latvia, the Netherlands, Poland, Portugal, Romania, Spain, Switzerland, the United Kingdom) were analysed with reference to the five types of validity evidence collected. Respondents had been instructed to answer only those questions relevant to their programme and invited to use the “Additional information” prompt at the end of each section to support responses or supply an alternative description if the questions or prompts were not relevant to their particular assessment. Responses were not made by all of the respondents to all of the questions, but no substantive questions elicited fewer than 27 responses.

Theory-Based Evidence

The answers to the open-response questions, designed to elicit the aims and rationale of the courses, were collated in a matrix for each university in each country and were colour coded according to the different conceptual approaches to writing identified:

• Focus on the product of writing (cohesion, structure, accuracy).

• Focus on the ability to produce a range of genres other than academic.

• Focus on the ability to write an academic essay or an academic essay type (so significant in the data that it was recorded separately).



• Focus on the ability to compose or process text (formative approach to assessment).

• No specific rationale known.

Thirty-five tokens of these five categories, identified in 32 responses, are summarised in Figure 1. A sample of the coded data is given in the appendix. The results indicate a range of theoretical perspectives across those universities that make up the sample. The most common aim was to assess the ability to write for academic purposes. A focus on the ability to compose in the assessment is much less prominent. However, Table 1 gives a clearer picture of the range of conceptual approaches to writing identified in relation to each university and the extent to which more than one conceptual perspective was indicated for an assessment.
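The collation just described (35 tokens of the five categories across 32 responses, with some programmes indicating more than one approach) amounts to a frequency count over coded responses. The minimal Python sketch below illustrates that tally; the three coded programmes are invented for illustration and are not the study's data.

# Minimal sketch: tally the conceptual-approach codes assigned to each
# programme's open responses. The example codes are invented, not survey data.
from collections import Counter

CATEGORIES = ("product", "genre", "academic", "process", "none specified")

coded_responses = {
    "University A": ["academic", "product"],  # hypothetical
    "University B": ["process"],              # hypothetical
    "University C": ["none specified"],       # hypothetical
}

token_counts = Counter(code for codes in coded_responses.values() for code in codes)
multi_coded = [name for name, codes in coded_responses.items() if len(codes) > 1]

print("Tokens per category:", {c: token_counts.get(c, 0) for c in CATEGORIES})
print("Programmes indicating more than one approach:", multi_coded)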

Product-based goals either were highlighted as the only concern or, in five cases, were a goal in the assessment of academic essay writing. Although process-oriented goals were explicitly stated in five responses, all but two of the respondents who used coursework as a mode of assessment responded affirmatively to the question, “Do students receive support/feedback during the production of coursework?” It may be inferred that process writing was an important consideration in choice of performance conditions in these cases also.

FIGURE 1 Summary of conceptual approaches. Note: N = 27.
[Bar chart omitted: number of universities by aims/rationale category (Product, Genre, Academic, Process, None specified).]


The approaches to writing in Figure 1 reflect guiding concepts identified in interviews with experienced teachers of writing in six countries6 by Cumming (2001). Moreover, they correspond closely to three broad theoretical approaches described by Hyland (2002). The first approach is that of writing as text production. An ability to write in this case would be the ability to produce a coherent and cohesive text with accurate grammar, spelling, and punctuation and appropriate vocabulary. The second approach takes a cognitive perspective.

6Australia, Canada, Hong Kong, Japan, New Zealand, and Thailand.

TABLE 1
Conceptual Approaches of Respondents

Location   Product   Process   Academic   Other Genre   None Known
Austria 1 √
Belgium Fr √
Belgium Fl 1 √ √
Denmark √
France √
Germany 1 √ √
Germany 2 √ √
Germany 3 √ √
Italy 1 √
Italy 2 √
Italy 3 √
Italy 6 √
Italy 7 √
Latvia √
Netherlands 1 √ √
Netherlands 2 √
Netherlands 4 √
Poland √ √
Portugal 1 √
Portugal 2 √
Romania √
Spain 1 √
Spain 2 √
Spain 3 √ √
Switzerland √ √
UK 1 √
UK 2 √

Note. N = 27. Fr = France; Fl = Flanders; UK = United Kingdom.


The ability to write, in this case, is the ability to compose, involving a process of generating ideas, revision, redrafting, and editing. The third approach focuses on the writer's ability to interact with the reader and to write appropriately for target discourse communities. The first and third of these approaches are reflected in Weigle's (2002) account of weak and strong approaches to the assessment of writing, based on McNamara's (1996) perspectives on performance testing. The weak version focuses on the linguistic aspects of texts and the strong version focuses on the effectiveness of the text to achieve its purpose.

The criteria for assessment should reflect the preferred approach. According to Weigle (2002):

As McNamara (1996) notes, the scale that is used in assessing performance tasks such as writing tests represents, implicitly or explicitly, the theoretical basis upon which the test is founded: that is, it embodies the test (or scale) developer's notion of what skills or abilities are being measured by the test. (p. 109)

However, the closed-response answers to questions designed to elicit the scoring criteria used did not reveal these different perspectives because the responses had been predetermined by the researcher. Qualitative data collection methods, such as open-response questions via questionnaire or interview, rating scales, and verbal protocols from raters, would yield more valid and reliable data. Further investigation may reveal regional or national trends only weakly evident in these data, for example, a focus on academic writing and product in Germany, academic writing and process in the Netherlands, or process in Spain.

Context Evidence

Context evidence was collected in terms of, first, the characteristics of the tasks that students were required to perform, which would reflect the domain sampled, and second, the performance conditions associated with the context of the assessment itself, mode of assessment, time allowed, length and number of texts, and so on. The data were collated and analysed to identify the extent to which the writing tasks that students were expected to perform were comparable.

Task characteristics. The closed-response answers to questions asking for information regarding task characteristics were collated according to whether each characteristic is “included” or “not included” in the assessment.7 These data were organised into sets of task characteristics: text types, functional range, and topic. Where fewer options were possible—for example, information type (abstract/concrete), input and response mode (paper based, electronic)—the information was collated into matrices. Figures 2 and 3 illustrate the range of responses.

7The questionnaire had originally been designed to elicit three categories of response (must be included, may be included, not included). However, the first two categories were collapsed into one for the final analysis because the distinction complicated the analysis and did not make a significant contribution to the results with regard to the research question.


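As footnote 7 explains, the three original response options (must be included, may be included, not included) were collapsed into a binary included/not included count for each task characteristic. A minimal Python sketch of that collapse follows; the sample answers are invented for illustration, not survey data.

# Minimal sketch: collapse "must be included" and "may be included" into a
# single "included" category and count answers per task characteristic.
# The sample answers are invented, not the survey's data.
from collections import defaultdict

raw_answers = [
    ("essay", "must be included"),   # hypothetical
    ("essay", "may be included"),    # hypothetical
    ("report", "not included"),      # hypothetical
    ("summary", "may be included"),  # hypothetical
]

counts = defaultdict(lambda: {"included": 0, "not included": 0})
for characteristic, answer in raw_answers:
    key = "not included" if answer == "not included" else "included"
    counts[characteristic][key] += 1

for characteristic, tally in sorted(counts.items()):
    print(characteristic, tally)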

Formal task characteristics are prevalent, in particular, academic text types, especially the ubiquitous academic essay. Information functions, with a focus on argumentation, reflect the salience of academic writing. Nevertheless, approaches that relate to genres other than academic are reflected in the informal writing and interactional functions sampled by some programmes.

Although there was a wide variation in topics sampled, academic and current affairs predominated. Topic (Jacobs, Zingraf, Wormuth, Hartfiel, & Hughey, 1981; Reid, 1990), audience, purpose, and task make different processing demands on the writer and generate different responses from different raters.

In short, although a good deal of research has been done on effects of task variation, the picture is not clear at this point in terms of which specific differences in writing prompts affect examinee performance and in what ways. What is clear, however, is that individuals use different cognitive, rhetorical and linguistic strategies when they are faced with tasks that vary according to topic, purpose and audience, and that raters' responses across task types vary as well. If nothing else, the literature on task variability reinforces the limitations of using a single writing task as a measure of general writing ability. (Weigle, 2002, p. 69)

FIGURE 2 Summary of text types. Note: N = 30.
[Bar chart omitted: number of respondents including/not including each text type (summary, dissertation, essay, report, informal letter, formal letter).]



A third of the respondents always included stimulus texts as input for students to respond to. The length and number of texts for input cannot be determined from the survey data, but written input can affect performance. Lewcowicz, as cited in Weigle (2002), found that students who were given written input tended to rely heavily on the language of the source text, whereas those given instructions only generated more ideas in their writing. In addition, if written input needs to be processed for the writing task, then what is assessed is not the same as that requiring a written response based on the student's own ideas (Moore & Morton, 2005).

The mode of response expected was still predominantly handwritten as opposed to computer drafted, which could affect performance and measurement of performance. Although research suggests that student performance is not affected by the medium used (Green & Maycock, 2004), it could influence the approach to scoring (Shaw, 2003). The mode of response would be strongly associated with the method of assessment, coursework, or examination.

FIGURE 3 Summary of functional ranges. Note: N = 30. Interactional functions: Interex = expressing thanks, requirements, apology, and so on; Interel = eliciting information, direction, service, and so on; Interdir = directing, ordering, instructing, persuading, and so on. Informational functions: Infdes ideas = describing phenomena and ideas (e.g., definition, classification, comparison); Infdes proc = describing process (e.g., sequential description, instruction, summary); Infargue = argumentation (e.g., stating a proposition, assumptions).
[Bar chart omitted: number of respondents including/not including each functional category.]


Performance conditions. Performance conditions are summarised in Tables 2 and 3. Examination (Table 2) seems to be the preferred method of assessment. Forty-three percent of the sample set were examination only, 20% were coursework only, and 37% were exam plus coursework. There is considerable variation in the amount of writing sampled and the time allowed. Up to three writing tasks could be required in examinations, with one task as the most common, selected by half of the sample. Length of text required varied from 200 to 2,500 words, with a range of 200 to 600 words expected per hour. Although just under half of the respondents specified 200 to 250 words per hour, an equal number specified 300 to 350 words. Time allowed varied from 1 hr to as much as 7 hr of examination on one programme. The number of tasks required for coursework (Table 3) ranged from one to four.

TABLE 2
Performance Conditions: Examination

Location        Total Time (min)   No. of Tasks   Words per Hr    Plus Coursework
Austria         60                 1              250–400         No
Belgium Fr      120                3              500             Yes
France          420                1              300–400         Yes
Germany 1       240                1              300             Yes
Germany 2       180                1              200             No
Germany 3       240                3              150             Yes
Italy 1         240                1              250             No
Italy 2         90                 2              Unspecified     No
Italy 3         120                1              250             No
Italy 4         60                 1              350–400         No
Italy 5         180                1              Unspecified     No
Italy 6         120                2              250             No
Italy 7         Unspecified        1              Unspecified     No
Italy 8         180                1              200–250         No
Latvia          90                 1              500–600         Yes
Netherlands 2   180                1              300–350         Yes
Poland 1        180                1              300–400         No
Portugal 1      80                 1              200–250         Yes
Portugal 2      60                 2              300–550         Yes
Spain 1         120                3              300             Yes
Spain 2         Unspecified        1              Unspecified     Yes
Spain 3         150                2              200             Yes
Switzerland     60                 1              Unspecified     No
UK 2            60                 2              300–350         No

Note. N = 24. Fr = France; UK = United Kingdom.

TABLE 3
Performance Conditions: Coursework

Location        Total Time     No. of Tasks   No. of Words    Plus Exam
Belgium Fl      7 days         3              1,400           No
Belgium Fr      Unspecified    3              1,100           Yes
France          Unspecified    1              10 pp.          Yes
Germany 1       4 weeks        4              4,000–6,000     Yes
Germany 3       Unspecified    3              15–18 pp.       Yes
Latvia          20 weeks       ports+class    ports+class     Yes
Netherlands 1   Unspecified    2              2,200           No
Netherlands 2   8 hr hmk       4              1,000–2,000     Yes
Netherlands 3   Unspecified    1              1,500           No
Netherlands 4   Unspecified    1              2,500           No
Portugal 1      Unspecified    port           port            Yes
Portugal 2      Unspecified    2              3,000           Yes
Romania         8 months       1              60–80 pp.       No
Spain 1         Unspecified    port           portfolio       Yes
Spain 2         Unspecified    3              1,800–2,400     Yes
Spain 3         6 weeks        4              900             Yes
UK 1            Unspecified    1              5,000           No

Note. N = 17. Fl = Flanders; Fr = France; port = portfolio; class = classwork; hmk = homework; UK = United Kingdom.
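The 43%/20%/37% split reported above can be reproduced by cross-referencing the two tables: programmes appearing only in Table 2 are examination only, those only in Table 3 are coursework only, and those in both use both methods. The Python sketch below shows the calculation on a few rows only; entering all rows of Tables 2 and 3 the same way yields the reported percentages.

# Minimal sketch: derive the examination/coursework split from the two
# performance-conditions tables. Only a handful of rows are entered here as
# an illustration; the full calculation would use every row of Tables 2 and 3.
exam_programmes = {"Austria", "Belgium Fr", "Germany 2", "Italy 1", "Spain 1"}
coursework_programmes = {"Belgium Fr", "Belgium Fl", "Romania", "Spain 1"}

exam_only = exam_programmes - coursework_programmes
coursework_only = coursework_programmes - exam_programmes
both = exam_programmes & coursework_programmes
total = len(exam_only) + len(coursework_only) + len(both)

for label, group in (("examination only", exam_only),
                     ("coursework only", coursework_only),
                     ("examination plus coursework", both)):
    print(f"{label}: {len(group)}/{total} = {100 * len(group) / total:.0f}%")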


Three respondents assess portfolios, which would provide a broader and more representative sample of student work (Grabe & Kaplan, 1996; Hamp-Lyons & Condon, 2000; Hyland, 2002; Weigle, 2002). Apart from portfolios, the total amount students are required to write varies from 900 words to 60 to 80 pages. Time allocated for completion of coursework ranged from 8 hr to 8 months.

Time allowed and length of text would be expected to influence performance outcomes. More time allowed would enable the writer to complete the task under closer to normal time constraints and allow time for the writer to review, revise, and improve writing. Although Kroll (1990, pp. 140–154) reported that performance on essays written in class and on those completed at home over a 10- to 14-day period did not differ, it could be that longer tasks, completed as coursework, would affect performance. With more time available, the production of longer texts would enable writers to demonstrate their organisational skills more fully and their ability to develop an argument.

Scoring Evidence

The majority of respondents, 21 of 29, reported subjective scoring as opposed to using rating scales (Table 4).


In the absence of precise scoring descriptors, scorers may differ as to what they regard as important for a successful writing performance. For example, some emphasise style, content, and organisation, whereas others favour grammar and vocabulary. Some may regard mechanical errors (spelling and punctuation) as important factors to take into account in their assessment of writing. Raters may interpret the criteria in different ways or be more lenient in their interpretation of certain criteria than others. Some may avoid the extremes in a rating scale, whereas others may avoid the middle. Raters may be inconsistent with regard to all of these factors (McNamara, 1996).

There was considerable variation in rating procedures (Table 5). Almost one-half of the respondents adopted procedures to ensure reliability, such as standardisation of marks and review and revision of the assessment process, whereas only one-third double-marked assignments, and about one-fourth stated that they made a statistical analysis of the results. Only 8 of 29 used external moderation, though the majority (22 of 29) scrutinised the results to look for any unexpected outcomes.

Scorer reliability is a particularly problematic issue in the assessment of writing (Hamp-Lyons, 2003; McNamara, 1996).

TABLE 4
Scoring Validity Evidence: Scoring Method

Scoring Method        No. of Responses
Subjective            21
Analytic              5
Subjective/analytic   1
Not stated            2

Note. N = 29.

TABLE 5
Scoring Validity Evidence: Rating Procedures

Rating Procedure                             Yes   No   Sometimes   Not Applicable
Marks standardised                           12    15   0           2
Raters trained                               16    11   1           1
Double marking                               10    19
External moderation                          8     20   1
Statistical analysis                         7     22
Scrutiny to examine unexpected results       22    7
Review and revision of assessment process    17    12

Note. N = 29.


Even when the same abilities are measured and the same contexts sampled, if there is a wide variation in measures taken to ensure reliability of scores, it will affect the meaning that can be attributed to the results.

Criterion-Related Evidence

Twelve respondents stated that they did not equate a pass on the final assessment with one of the levels on the CEFR, whereas 13 stated that they equated a pass with C1 on the CEFR, 4 with Level C2, and 1 with B1. However, only one of the universities in the sample had completed a benchmarking exercise, which it had conducted with Cambridge ESOL, to equate its assessment with the CEFR levels. Due to the variation in the ability measured, the contexts sampled and methods taken to ensure the reliability of the measurement, it is not clear what C1 means in terms of proficiency in written English for these programmes. This could reflect the shortcomings in the CEFR if it is used for assessment design and to define levels of language ability (Alderson et al., 1995; Weir, 2005b).

None of the respondents were able to report the results of any predictive validity studies with regard to the relationship between performance in writing on the language development component and performance in other subject areas in the degree programme. However, as previously stated, some universities reported that their aim in the writing assessment was to assess students' ability to perform academic writing tasks, which would be required for the other subject courses on the programme.

Consequential Evidence

Although determining the value of the assessment for stakeholders was largely outside the scope of this study and no impact studies were reported, some evidence with regard to its value for teachers and the potential value of the assessment to the programme as a whole was collected. Thirteen respondents stated that the assessment method had a positive effect on teaching and learning, though none mentioned any specific washback studies. However, six respondents reported the positive effect of continuous assessment and feedback on performance in writing. Particular mention was made of the beneficial effects this assessment method had on learning strategies: “Students take writing seriously and plan for tasks,” “Students work regularly,” “Cannot leave everything till the last moment,” “Students get their score after each essay and have an incentive to improve.” Also, the respondent for the programme that had benchmarked the language assessment with Cambridge ESOL stated that the assessment “was perceived as fair”; the teachers felt that they were “working towards something tangible” and the marking had improved. These comments indicate aspects of the assessment that could be explored in further studies.


In addition to the variation just reported with reference to the five validities, there was variation not only between countries and within the same country but also within the same department. In response to the question, “Who is responsible for the implementation and evaluation of the assessments in the English language component? Course team, course tutor, class tutor?” 19 stated either the course team or the course tutor, usually in conjunction with the course team. However, 6 out of 27 respondents stated that the class tutor was responsible, and in one case, a team within the department designed their own assessments.

Respondents used the comments section at the end of the questionnaire to voice their concern to improve and develop common standards with regard to writing and to make recommendations for improvement: “It is generally accepted that students' writing could do with improvement. But how this might be effected is a matter of both genuine perplexity and departmental politics.” “Our subject colleagues are involved in a fruitless discussion about whose fault these writing problems are and if it is the job of a university to do anything about them.” Two respondents from courses that focused on academic writing expressed the need to focus on other genres “to reflect the realities of the professional world”; “it is possible for a student to graduate and not to know how to write a formal letter.” More particularly, the need to establish standards was highlighted: “Nothing is codified”; “All members of our department should have some basic training in marking productive skills. I also consider that the adoption of common practices and common scales should constitute a policy in our department.”

CONCLUSION

In response to Research Questions 1 and 2: What is assessed in terms of proficiency in written English? To what extent are reports of attainment comparable?

For this relatively small sample of similar degree programmes with similar components and similar career prospects, there was wide variation in the amount and range of writing sampled, how it was sampled, the reliability of the responses, and the potential impact of writing on the rest of the programme. The meaning or interpretation that can be attributed to the final score for this sample is, therefore, different between countries, within the same country, and even within the same department, in such cases where tutors design, administer, and score their own assessments.8

8This reflects the level of variation in assessment practices identified by Cumming (2001, p. 222), leading him to conclude that “it is difficult to consider the goals or processes for learning ESL/EFL writing to be similar or even comparable across these courses.” Students could be achieving something different according to the preferences and values of the individual instructor.


This variability suggests that a degree in English language will not be useful for employers, students, or postgraduate programmes that require proficiency in written English locally, nationally, or within Europe because there are no comparable and reliable standards. Similarly, if the assessment outcome is described in terms of a level on the Common European Framework but not supported by a statement of what is achieved at that level or a guarantee of the reliability of the statement made, then it will have no currency in Europe.

This study was exploratory, using a single method of data collection with a relatively small sample, though from a broad geographical area. Therefore, the results could only be described as indicative. As with any survey-based study, there are a number of issues related to reliability; for instance, respondents might report what they think they ought to report rather than what they actually do; the questions could be misinterpreted, especially when respondents come from different cultural and linguistic backgrounds. Last, the range of possible answers to closed-response questions is necessarily predetermined by the researcher. Direct collection of tasks could have produced more reliable information about task characteristics (Carson, Chase, & Gibson, 1992; Hale et al., 1996; Horowitz, 1986a, 1986b; Moore & Morton, 2005) but would be more appropriate for a needs analysis to develop specifications for assessment. My survey enabled an exploration of current practice with reference to the research questions across a broad geographical range in Europe. The questionnaire, informed by Weir's (2005a) framework for the collection of validity evidence, generated a rich source of data, which provided different perspectives with which to infer and interpret score meaning across the university programmes in the sample and enabled the identification of common themes, practices, and concerns to inform future developments.

It was understood that there would be variation, given the different educational traditions and systems in Europe. The European Language Council study reported in Hasselgren (1999) found variation in assessment practices on language degrees in Europe. However, the findings of the study presented here not only highlighted variation but indicated where the variation that would have an influence on score meaning occurred and identified intracultural, as well as intercultural, variation. Common patterns were identified in relation to theoretical perspectives adopted, domains sampled, and level of attainment required in terms of the CEFR, which would form the basis for research and debate to decide what a language major should achieve in terms of writing proficiency. Factors affecting score reliability that were identified in the data—the prevalence of subjective scoring, variation in rating procedures, and statistical checks on rating—highlighted areas that would need to be addressed in future training if writing assessment scores are to have currency for stakeholders in Europe. Practices that had a beneficial effect on teaching and learning were highlighted—for example, portfolio assessment, an emerging solution to the issue of ensuring a representative sample of student work and greater student involvement in the assessment process, was used by a number of respondents. Variation in the potential impact of the assessment was identified that would warrant further investigation in the development of an assessment argument.



The relatively high response rate to the lengthy and somewhat detailed questionnaire reflects the will to communicate, share practices, and to improve practice, and the concerns for fairness and common standards reflect the broader picture in Europe. The Bologna process has created an impetus that has gathered momentum. Since this project was completed, a number of initiatives have contributed to the aims of transparency, mobility, and quality assurance in Europe. The pilot CEFR Manual (Council of Europe, 2003) and Reference Supplement (Council of Europe, 2005) have been developed to enable test developers to link their tests to the CEFR (2005).9 The European Association of Language Testing and Assessment has been constituted. It has an annual conference and Web discussion forum with a special interest group for the universities. The Thematic Network Project 3 of the European Language Council (2003–2006) has been conducting a needs analysis with employers to ensure the relevance and usefulness of language programmes to stakeholders. These initiatives would inform and support future developments.

Guidelines for assessment are needed that include a specification of the underpinning theory; the contexts to be sampled; levels of attainment, referenced to the CEFR with sample texts at the different levels; and the assessment context (task characteristics and performance conditions), including a minimum sample that would meet the criteria of writing required. Measures to ensure scoring validity and a posteriori measures to investigate the validity of claims made should be stipulated. This would constitute a validation framework for the assessment of writing on European English language degrees that could be used to develop guidelines for the assessment of other skills and other languages. Focused, qualitative studies of task demands (Carson et al., 1992; Samraj, 2002; Zhu, 2004) across different cultural contexts in Europe would provide valuable information for assessment design.
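The elements listed in the preceding paragraph could be captured in a simple specification template. The Python sketch below is one possible representation, offered as an illustration only; the field names are hypothetical and the example values are placeholders, not recommendations from the study.

# Minimal sketch: a template for the assessment guidelines outlined above.
# Field names are hypothetical; the example values are placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WritingAssessmentGuidelines:
    underpinning_theory: str            # conceptual approach to writing assessed
    contexts_sampled: List[str]         # domains and text types
    attainment_levels: List[str]        # CEFR levels, with sample texts
    task_characteristics: List[str]     # topic, audience, functions, input
    performance_conditions: List[str]   # mode, time allowed, length, number of tasks
    scoring_measures: List[str]         # scale, rater training, double marking
    post_hoc_validation: List[str] = field(default_factory=list)  # a posteriori checks

example = WritingAssessmentGuidelines(
    underpinning_theory="academic writing (product and process)",
    contexts_sampled=["academic essay", "report"],
    attainment_levels=["C1"],
    task_characteristics=["argumentation", "abstract and concrete topics"],
    performance_conditions=["examination plus coursework", "300-350 words per hour"],
    scoring_measures=["analytic rating scale", "double marking", "external moderation"],
    post_hoc_validation=["rater reliability statistics", "impact study"],
)
print(example)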

If stakeholders (students, lecturers, and employers) are to recognise and value the outcomes of the assessment, they need to be involved in determining what a language major should be expected to achieve in terms of proficiency in writing that would be useful across Europe. Bachman (2005) outlined steps to build a case for an assessment use argument that starts with the stakeholder and aims to ensure that the assessment is useful, relevant, fair, and sufficient for its intended purpose.

9See Figueras, North, Takala, Verhelst, and Van Avermaet (2005) for a critical evaluation of the process of linking examinations to the CEFR.


There are major barriers to achieving common standards in Europe. The International Association for the Evaluation of Educational Achievement study (Gorman, Purves, & Degenhart, 1988; Purves, 1992), in its aim to develop an internationally appropriate set of writing tasks and a common framework for assessment that would be applicable across school systems and languages, found that it was not, in the end, possible to make comparisons of writing ability across countries because, although the rating criteria were used “consistently” by each national team, they were interpreted differently in different systems of education.10 “All these findings suggest that performance in writing is part of a culture and that schools tend to foster rhetorical communities” (Purves, 1992, p. 200).

However, the findings of the study presented here suggest that there are intracultural as well as intercultural differences, not only between universities in the same country but within the same department. Common understanding can be reached if assessment developers are trained and work together (Purves, 1992). Training would then be required for local implementation, and local interpretations would need to be investigated through impact studies. However, political and institutional support is required to enable standards to be implemented, evaluated, and maintained. Inadequate funding in this regard was reported as a problem in establishing standards for national exams linked to the CEFR (Eckes et al., 2005). The low status of language teaching and consequent lack of funding to support the research required to develop language assessments in universities are reported in Duguid (2001). Brindley (2001) advised the need for adequate funding for training, assessment development, and monitoring of the quality of assessment procedures for outcomes-based assessment to be viable.

Language education is key to the development of an integrated zone for learning in Europe, yet these results indicate that what real, practical language skills the holder of an English language degree qualification actually has cannot be assured for stakeholders. Practitioners have been identified with common concerns to improve writing on English language majors and to establish common standards. This survey highlights the need for practitioners to develop a common framework to ensure the validity of a language qualification at degree level, which is imperative for the guarantee of quality of any qualification in a language.

ACKNOWLEDGMENT

I thank all those who responded to the questionnaire survey, also Bernard Hrusa-Marlow, Barry O'Sullivan, and anonymous reviewers for their careful reading and comments on earlier drafts of this article.

10This study was undertaken in non-European as well as European countries.


REFERENCES

Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation. Cambridge, UK: Cambridge University Press.

Bachman, L. F. (2005). Building and supporting a case for test use. Language Assessment Quarterly, 2, 1–34.

Brindley, G. (2001). Outcomes-based assessment in practice. Language Testing, 18, 393–407.

Carson, J. G., Chase, N. D., & Gibson, S. U. (1992). Literacy demands of the undergraduate curriculum. Reading Research and Instruction, 31(4), 25–50.

Council of Europe. (2003). Relating language examinations to the Common European Framework of Reference for Languages: Learning, teaching, assessment (Manual: Preliminary pilot version. DGIV/EDU/LANG 2003, 5). Strasbourg, France: Language Policy Division.

Council of Europe. (2005). Reference supplement to the preliminary version of the manual for relating language examinations to the Common European Framework of Reference for Languages: Learning, teaching, assessment (DGIV/EDU/LANG 2005, 13). Strasbourg, France: Language Policy Division.

Council of Europe. (n.d.). European language policy. Retrieved August 2006 from http://www.coe.int/t/dg4/linguistic/default_EN.asp

Cumming, A. (2001). EFL/ESL instructors' practices for writing assessment. Language Testing, 18, 207–224.

Duguid, A. (2001). Anatomy of a context: English language teaching in Italy. London: Granville.

Eckes, T., Ellis, M., Kalnberzina, V., Pižorn, K., Springer, C., Szollás, K., et al. (2005). Progress and problems in reforming public language examinations in Europe: Cameos from the Baltic States, Greece, Hungary, Poland, Slovenia, France and Germany. Language Testing, 22, 355–377.

Figueras, N., North, B., Takala, S., Verhelst, N., & Van Avermaet, P. (2005). Relating examinations to the Common European Framework: A manual. Language Testing, 22, 261–279.

Gorman, T. P., Purves, A. C., & Degenhart, R. E. (Eds.). (1988). The IEA study of written composition I. London: Pergamon.

Grabe, W., & Kaplan, R. B. (1996). Theory and practice of writing. Harlow, England: Pearson Education.

Green, T., & Maycock, L. (2004). Computer-based and paper-based versions of IELTS. Cambridge ESOL Research Notes, 18, 3–6.

Hale, G., Taylor, C., Bridgeman, B., Carson, J., Kroll, B., & Kantor, R. (1996). A study of writing tasks assigned in academic degree programmes (TOEFL Research Report RR-95-44). Princeton, NJ: Educational Testing Service.

Hamp-Lyons, L. (2003). Writing teachers as assessors of writing. In B. Kroll (Ed.), Exploring the dynamics of second language writing (pp. 71–92). Cambridge, UK: Cambridge University Press.

Hamp-Lyons, L., & Condon, W. (2000). Assessing the portfolio. Cresskill, NJ: Hampton.

Hamp-Lyons, L., & Kroll, B. (1997). Composition, community and assessment (TOEFL Monograph Series). Princeton, NJ: Educational Testing Service.

Hasselgren, A. (Ed.). (1999, September). Assessing the proficiency of modern language (under)graduates: Report on a survey and feasibility study (Thematic network project in the area of languages, sub-project 10: Testing). Available from http://userpage.fu-berlin.de/~elc/tnp1/SP10rep.pdf

Horowitz, D. (1986a). Essay examination prompts and the teaching of academic writing. English for Specific Purposes, 5, 107–120.

Horowitz, D. (1986b). What professors actually require: Academic tasks for the ESL classroom. TESOL Quarterly, 20, 445–462.

Hyland, K. (2002). Teaching and researching writing. London: Pearson Education.

Jacobs, H., Zingraf, A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition. Rowley, MA: Newbury House.

Kroll, B. (1990). What does time buy? In B. Kroll (Ed.), Second language writing: Research insights for the language classroom (pp. 140–154). New York: Cambridge University Press.

Little, D. (2005). The Common European Framework and the European Language Portfolio: Involving learners and their judgements in the assessment process. Language Testing, 22, 321–336.

McNamara, T. (1996). Measuring second language performance. London: Longman.

Messick, S. A. (1989). Validity. In R. Linn (Ed.), Educational measurement (pp. 13–109). New York: Macmillan.

Messick, S. A. (1995). Validity of psychological assessment: Validation of inferences from person's responses and performances as scientific enquiry into score meaning. American Psychologist, 50, 741–749.

Moore, T., & Morton, J. (2005). Dimensions of difference: A comparison of university writing and IELTS writing. Journal of English for Academic Purposes, 4, 43–66.

Purves, A. C. (1992). IEA study of written composition. London: Pergamon.

Reid, J. (1990). Responding to different topic types: A quantitative analysis from a contrastive rhetoric perspective. In B. Kroll (Ed.), Second language writing: Research insights for the classroom (pp. 191–210). New York: Cambridge University Press.

Samraj, B. (2002). Textual and contextual layers: Academic writing in content courses. In A. M. Johns (Ed.), Genre in the classroom: Multiple perspectives (pp. 163–176). Mahwah, NJ: Lawrence Erlbaum Associates.

Shaw, S. (2003). Legibility and the rating of second language writing: The effect on examiners when assessing handwritten and word-processed scripts. Cambridge ESOL Research Notes, 11, 7–10.

Weigle, S. C. (2002). Assessing writing (Cambridge Language Assessment). Cambridge, UK: Cambridge University Press.

Weir, C. J. (2005a). Language testing and validation: An evidence-based approach. Basingstoke, UK: Palgrave Macmillan.

Weir, C. J. (2005b). Developing comparable examinations and tests. Language Testing, 22, 281–300.

Zhu, W. (2004). Writing in business courses: An analysis of assignment types, their characteristics and required skills. English for Specific Purposes, 23, 111–135.

APPENDIX

TABLE A1
Sample Analysis of Responses to Section 3b: Theory-based Validity Evidence

Netherlands 1 a) The task that produces the final mark is the essay. The aim is to assess the student’s progress towards being able to write a long academic text (final dissertation)

b) The writing programme is a step-by-step preparation for writing a dissertation. When students get to the final year, they also get a short course in how to write a dissertation (no ects involved) so we are interested in the whole process of selecting topics that are interesting for some reason, finding info., planning a piece of written communication, writing drafts and doing editing, so that is why we do not have an exam-like assessment.

Netherlands 2 The course is geared towards writing an academic essay. Essays are assessed on Language, Content and Structure.

Netherlands 3 Not time to answer.


Netherlands 4 Within each course, assessment takes place after students have revised their work. Each assignment is graded; no final assessment. We try to emphasise that writing is a process of writing and revising; essay writing under exam conditions does not fit that aim very well.

Poland No specific aims have been stated. Students are supposed to be able to produce essays which are up to the agreed standard from the point of view of content, composition and language.

Romania I am not sure what the rationale and the aims behind this final assessment procedure are. I suspect it is mainly connected to tradition and the fact that all the language departments in the Faculty have to have the same final assessment procedures. In other words, this is how the things are done and the English department has no control over the form of this final assessment procedure.

Spain 1 a) To serve as a guide for learners; b) To be as transparent as possible and to encourage learners to become fully active partners in their learning.

Switzerland a) To assess students' ability to produce clear, correct English prose of a type suitable for academic purposes; b) To make the test as “objective” and examiner-friendly as possible, consistent with the level of language competence expected and the need for active language production – both of which rule out multiple-choice formats. Text-based comprehension is felt to be preferable to, e.g., an essay, since the latter may (directly or indirectly) amount to a test of students' ideas rather than their language.

Note. Samples selected demonstrate the range of conceptual approaches identified, coded for this article as follows: Text production (Times italic), Genre (Futura Bk), Academic (Helvetica), Process (Palatino), none specified (Times bold).
