
    Research Trends Issue 29 July 2012 Page 01

    Powered by Scopus

Welcome to the 29th issue of Research Trends.

As we continue to develop new ways to make our content more engaging and interactive, we want to be sure that you are kept informed of important changes we make. Two months ago we launched our new website, enabling you to comment on our articles and link them to your Facebook and Twitter feeds. It is with great interest that we have seen these conversations and sharing options being embraced.

We now have over 600 tweets and Facebook likes, allowing you to see which articles were most enjoyed and most shared by our readers. The Research Trends team thanks you for taking the time to share these articles with others, and for sharing your thoughts and opinions with us.

Recently we started a Research Trends Twitter account, @researchtrendy, where we highlight content that we think will be of interest to you. With this we say hello to all our current Twitter followers and welcome anyone who would like to join the community. We look forward to seeing you there.

Yours sincerely,
Iris Kisjes


Page 03
Merit, expertise and measurement: a new research program at CWTS
Professor Paul Wouters discusses a new research program at the Centre for Science and Technology Studies of Leiden University.

Page 05
The Integrated Impact Indicator (I3), the top-10% Excellence Indicator, and the use of non-parametric statistics
We present an explanation of an innovative new approach to using citation data in research evaluation.

Page 09
Bibliometrics and Urban Research Part II: Mapping author affiliations
Research Trends takes another look at the field of urban research, with maps of the author community.

Page 11
Identifying emerging research topics in Wind Energy research using author given keywords
Research Trends examines how two types of keywords have different properties and serve different purposes in the search process.

Page 15
Research Evaluation in Practice: Interview with Linda Butler
Linda Butler speaks with Research Trends about changing trends in research evaluation and the use of bibliometrics.

Page 17
Did you know?
...that scientometrics is explained in a recent novel?


Section 1: The Value of Bibliometrics

Merit, expertise and measurement: a new research program at CWTS

Prof. Paul Wouters

    Introduction

The Centre for Science and Technology Studies at Leiden University has developed a new research program focusing on monitoring and analyzing knowledge flows and on research evaluation. The program, which will be published this Fall, introduces new approaches to these well-established goals of scientometric research. With the development of this new program, first, we move from data-centric methods justified by ad-hoc reasoning towards a systematic theory-based framework for developing bibliometric and scientometric indicators. Second, in interpreting and applying performance indicators we increasingly base ourselves on the systematic analysis of current scientific and scholarly practices rather than only on general statistical arguments. Specific attention is paid to the humanities and social sciences because of the variety of their research and publication practices. We also analyze the impact of research assessment exercises, and the performance criteria applied, on the primary process of knowledge production. Third, we explore the possibilities and problems in assessing the societal impact of research (social quality). Increasingly, this dimension is becoming the second pillar of research evaluation next to scientific impact and is creating a new challenge for science evaluation and assessment.

To sum up, we maintain the tried and trusted CWTS focus on bibliometrics for research evaluation, but we deepen our theoretical work and increase our empirical scope. Our new research agenda is a response to the widespread use of bibliometrics in performance-based research management. We hope it will help prevent abuse of performance measures and thereby contribute to the development of good evaluation practices. We aim to bring scientometrics to a new level of quality in close collaboration with our colleagues in the field. This should also lead to new international standards of quality for assessments and science & technology indicators.

    Research question

How can we improve our understanding of the dynamics of science, technology, and innovation by the measurement and assessment of the scientific and scholarly system, in particular of scientific products, communication processes and scholarly performance? This is the overarching theme of the new research program. In response, two specific research questions are in focus:

1. How do scientific and scholarly practices interact with the social technology of research evaluation and monitoring knowledge systems?

2. What are the characteristics, possibilities and limitations of advanced metrics and indicators of science, technology and innovation?

Key research themes

The first research theme in the program is the methodology of bibliometrics. Both at CWTS and elsewhere, the development of bibliometric indicators for research assessment has long been done in a pragmatic way. Indicators were developed without explicitly incorporating them in a broader mathematical or statistical framework. Indicators were justified mainly using empirical arguments. This resulted in a data-centric approach where the interpretation of the chosen indicators was developed in an ad-hoc fashion. In the new program we move towards a theory-oriented approach; indicator development will become more and more based on explicit theoretical models of the scientific publication and citation process. In this framework, the indicators will be judged on their mathematical and statistical properties. These models will for instance allow us to distinguish between observable and non-observable features of the publication and citation process (e.g., between the observable concept of citation impact and non-observable concepts such as scientific influence or quality). Model-based indicator development has the advantage of making an explicit distinction between what one intends to measure and what one is in fact measuring. This helps us to study the properties of bibliometric indicators (e.g., validity and reliability, or bias and variance) in a more formalized way. The limitations of the indicators should be made explicit as well. For example, a complex concept such as scientific impact cannot be measured by


Figure 1: Paul Wouters at a workshop of the Russian Academy of Sciences in St Petersburg, titled Career Development in Academia, 5-6 June 2012.


one indicator. This is the reason we have moved from emphasizing one indicator (e.g. the crown indicator) towards a portfolio approach to performance indicators.

The new program also pays increasing attention to bibliometric network analysis and science maps. Bibliometric networks are networks of, for instance, publications, journals, researchers, or keywords. Instead of focusing on the properties of individual entities in a network, bibliometric network analysis concentrates on the way in which relations between entities give rise to larger structures, such as clusters of related publications or keywords. In this sense, bibliometric network analysis is closely related to the analysis of complex systems. The main objective of our research into bibliometric network analysis will be to provide content and context for research assessment purposes. Science maps enable us to analyze both the citation impact of a research group and its relationships with other groups. They also enable the analysis of interdisciplinary research without having to rely on predefined subject classifications. An interesting application is the visualization of the actual field profiles of research groups and scientific journals. We can also map the citation networks of journals at all levels of aggregation (see Figure 2).

The second research theme in the program relates to the way evaluation processes configure the primary process of knowledge creation. The key question is that of the relationship between peer-review-based and indicator-based evaluation. In the past, CWTS has dealt with this tension in a pragmatic way, using indicators to provide useful information to supplement peer review. As explained earlier, we will move towards a more systematic, theory-based approach in which we will probe in much more detail how expertise develops in particular scientific fields in relation to the bibliometric insights of those fields. We will not assume that the two ways of evaluating the quality of scientific and scholarly work are diametrically opposed: this would amount to setting up a straw man. In practice, peer review and bibliometrics are combined in a variety of ways. But how these combinations are developed, by both evaluating institutions and the researchers that are being evaluated, is not self-evident. Because it is exactly this interplay where the criteria for scientific quality and impact are being developed, we zoom in on this aspect of research evaluation.

Research evaluation may take different forms: annual appraisal interviews, institutional research assessment exercises, and global assessments of national science systems. Evaluation is a more complex interaction than simply the measurement of the performance of the researcher. We see it as a communication process in which both evaluators and the researcher under evaluation define what the proper evaluation criteria and materials should be. Therefore, we are especially interested in the intermediate effects of the process of evaluation on the researcher, the evaluator, and on the development of assessment protocols.

Within this theme specific attention is paid to the constructive effects of research evaluation (including perverse effects). Evaluation systems inevitably produce (construct) quality and relevance as much as they measure it. This holds both for indicator-based evaluation and for qualitative peer review evaluation systems. Evaluation systems have these effects because they shape the career paths of researchers and because they form the quality and relevance criteria that researchers entertain. These feedback processes also produce strategic behavior amongst researchers which potentially undermines the validity of the evaluation criteria. We therefore place focus on how current and new forms of peer review and indicator systems, as main elements of the evaluation process, will define different quality and relevance criteria in research assessment, on the short term as well as on the longer term. The recent anxiety about perverse effects of indicators such as the Hirsch-index will also be an important topic in this research theme. This theme will also encompass a research program about the development of scientific and scholarly careers and academic leadership.

Questions regarding the socio-economic and cultural relevance of scientific research form our third research theme. From the perspective of the knowledge-based society, policy makers stress the importance of knowledge valorisation. This term is used for the transfer of knowledge from one party to another with the aim of creating (economic and societal) benefits. However, the use of the word is often limited: only describing the transfer of knowledge to the commercial sector. The value in other domains, for example in professional or public domains, is often not taken into account. Also, the term valorisation is often used to describe a one-way interaction, the dissemination of scientific knowledge to society, while in practice we often observe more mutual, interactive processes.

Within this research theme, we will therefore use the concept of societal quality in analyzing the societal impact of research. Societal quality is described as the value that is created by connecting research to societal practice, and it is based on the notion that knowledge exchange between research and its related professional, public and economic domain strengthens the research involved. This definition explicitly encompasses more than economic value creation only. It also entails research that connects to societal issues and interactions with users in not-for-profit sectors such as health and education, as well as to the lay public. In the program we focus on the development of robust data sets, as well as the analysis of these datasets, in the context of specific pioneering projects in which the interaction between research and society can be well defined. This will create the possibility to construct, measure, and test potential indicators of societal impact.


Figure 2: A map of journals based on citation relations. More maps can be found at http://www.vosviewer.com.

Section 2: Behind the data

The Integrated Impact Indicator (I3), the top-10% Excellence Indicator, and the use of non-parametric statistics

Prof. Loet Leydesdorff & Dr Lutz Bornmann

    Introduction

Competitions generate skewed distributions. For example, a few papers are highly cited, but the majority is not or hardly cited. The skewness in bibliometric distributions is reinforced by mechanisms which have variously been called the Matthew effect (1), cumulative advantages (2) and preferential attachment (3). These mechanisms describe the rich-get-richer phenomenon in science. Skewed distributions should not be studied in terms of central-tendency statistics such as arithmetic means (4). Instead, one can use non-parametric statistics, such as the top-1%, top-10%, etc.

In Figure 1, for example, the 2009 citation distributions of citable items in 2007 and 2008 in two journals from the field of nanotechnology (Nano Letters and Nature Nanotechnology) are compared using a logarithmic scale. The Impact Factor (IF) 2009 of the latter journal is almost three times as high as that of the former because the IF is a two-year average. Using the number of publications in the previous two years (N) in the respective denominators erroneously suggests that Nano Letters had less impact than Nature Nanotechnology. If one instead considers the citation distributions in terms of six classes (top-1%, top-5%, etc.; Figure 2), Nano Letters outperforms Nature Nanotechnology in all classes.

These six classes have been used by the US National Science Board (e.g., 6) for the Science and Engineering Indicators for a decade. By attributing a weight of six to each paper in the first class (top-1%) and five to each paper in the second class, etc., the stepwise function of six so-called percentile-rank classes (PR6) in Figure 2 can be integrated using the following formula: I3 = Σi xi·f(xi). In this formula, x represents the percentile value and f(x) the frequency of this rank. For example, i = 6 in the case above, or i = 100 when using 100 equal classes such as top-1%, top-2%, etc.
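As a concrete illustration of this weighting scheme, the following sketch (not from the original article; the class boundaries and paper percentiles are assumed for illustration) maps each paper's citation percentile to its PR6 class weight and sums the weights:

```python
# Illustrative sketch of the six-class PR6 weighting described above.
# Assumed class boundaries: top-1% -> weight 6, top-5% -> 5, top-10% -> 4,
# top-25% -> 3, top-50% -> 2, bottom-50% -> 1.

def pr6_weight(percentile):
    """Map a citation percentile (0 = least cited, 100 = most cited) to a PR6 weight."""
    if percentile >= 99:
        return 6
    if percentile >= 95:
        return 5
    if percentile >= 90:
        return 4
    if percentile >= 75:
        return 3
    if percentile >= 50:
        return 2
    return 1

def pr6_score(percentiles):
    """Integrated impact as the sum of class weights over all papers,
    i.e. the sum of x_i * f(x_i) over the percentile-rank classes."""
    return sum(pr6_weight(p) for p in percentiles)

# Five papers: one top-1%, one top-10%, one median, two below median.
papers = [99.5, 91.0, 50.0, 40.0, 10.0]
print(pr6_score(papers))  # 6 + 4 + 2 + 1 + 1 = 14
```

Summing weights rather than averaging citations is what makes the indicator "integrated": a large set of moderately cited papers can outscore a small set of highly cited ones.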

Measuring integrated impact with I3 and/or PR6

Under the influence of using impact factors, scientometricians have confused impact with average impact: a research team as a group has more impact than one leading researcher, but the leading researcher him/herself can be expected to have more average impact, that is, citations per publication (c/p). Existing bibliometric indicators such as IF and SNIP are based on central-tendency statistics, with the exception of the excellence indicator of the top-10% most-highly cited papers, which is increasingly used in university rankings (7,8; cf. 9,10). An excellence indicator can be considered as the specification of two classes: excellent papers are counted as ones and the others as zeros.

Leydesdorff* & Bornmann** called this scheme of percentile-based indicators I3, as an abbreviation of integrated impact indicator (11). I3 is extremely flexible because one can sum across journals and/or across nations by changing the systems of reference. Unlike using the arithmetic mean as a parameter, the percentile-normalized citation ranks can be tested using non-parametric statistics such as chi-square or Kruskal-Wallis because an expectation can also be specified. In the case of hundred percentile rank classes, 50 is the expectation, but because of the non-linearity involved this expectation is 1.91 for the six classes used above (12). Various tests allow for comparing the resulting proportions with the expectation in terms of their statistical significance (e.g., 7,13).
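As a rough illustration of such a significance test (a sketch, not the authors' code; the counts are invented), one can compare a unit's observed number of top-10% papers with the 10% expectation using a chi-square statistic over the two classes "top-10%" and "other":

```python
# Chi-square test of an observed top-10% share against the 10% expectation.
# Two classes ("top-10%" and "other") give one degree of freedom.

def chi_square_top10(n_top, n_total):
    """Chi-square statistic for observed vs expected counts in the
    two classes; larger values mean a larger deviation from 10%."""
    expected_top = 0.10 * n_total
    expected_rest = 0.90 * n_total
    observed_rest = n_total - n_top
    return ((n_top - expected_top) ** 2 / expected_top
            + (observed_rest - expected_rest) ** 2 / expected_rest)

# 25 of 100 papers in the top-10%: well above expectation.
stat = chi_square_top10(25, 100)
print(round(stat, 2), stat > 3.84)  # 3.84 is the 5% critical value for df = 1
```

A unit whose statistic exceeds the critical value deviates significantly from expectation; the sign of the deviation (above or below 10%) then tells whether it performs above or below par.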


*Amsterdam School of Communication Research, University of Amsterdam, Kloveniersburgwal 48, NL-1012 CX, Amsterdam, The Netherlands; [email protected]

**Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, D-80539 Munich, Germany; [email protected]


Figure 1: Citation distributions for Nature Nanotechnology (N = 199 publications) and Nano Letters (N = 1,506). Source: (5).

Figure 2: Frequency distribution of six percentile rank classes of publications in Nano Letters and Nature Nanotechnology, with reference to the 58 journals of the WoS Subject Category nanoscience & nanotechnology. Source: (5).


The outcome of evaluations using non-parametric statistics can be very different from using averages. Figure 3, for example, shows citation profiles of two Principal Investigators (PIs) of the Academic Medical Center of the University of Amsterdam (using the journals in which these authors published as the reference sets). In this academic hospital the averaged c/p ratios are used in a model to allocate funding, raising the stakes for methods of assessing impact and inciting the researchers to question the exactness of the evaluation (15). The average impact (c/p ratio) of PI1, for example, is 70.96, but it is only 24.28 for PI2; the PR6 values as a measure of integrated impact, however, show a reverse ranking: 65 and 122, respectively (14). This difference is statistically significant.

I3 quantifies the skewed citation curves by normalizing the documents first in terms of percentiles (or the continuous equivalent: quantiles). The scheme used for the evaluation can be considered as the specification of an aggregation rule for the binning and weighting of these citation impacts; for example, as above, in terms of six percentile rank classes. However, policy makers may also wish to consider quartiles or the top-10%, as in the case of an excellence indicator. Bornmann & Leydesdorff, for example, used top-10% rates for showing cities with research excellence as overlays to Google Maps, using green circles for cities ranked statistically significantly above expectation and red circles for ones below expectation (9).

Figure 3: Citation distributions and percentile ranks for 23 publications of PI 1 and 65 publications of PI 2, respectively. Source: (14).


Conclusions and implications

The use of quantiles and percentile rank classes improves impact measurement when compared with using averages. First, one appreciates the skewness of the distribution. Second, the confusion between impact and average impact can be solved: averages over skewed distributions are not informative and the error can be large. Using I3 with 100 percentiles, a paper in the 39th percentile can be counted as half the value of one in the 78th percentile. Using PR6, alternatively, one would rate the latter with a 4 and the former with a 6. Thus, the use of I3 allows thirdly for the choice of normative evaluation schemes such as the six percentile ranks used by the NSF or the excellence indicator of the top-10%. Fourth, institutional and document-based evaluations (such as journal evaluations) can be brought into an encompassing framework (5). These indicators are finally well suited for significance testing, so that one can also assess whether excellent can be distinguished from good research, and indicate error bars. Different publication and citation profiles (such as between PI1 and PI2 in Figure 3) can thus be compared and uncertainty specified.

References:

1. Merton, R.K. (1968) The Matthew Effect in Science, Science, 159, 56-63.
2. Price, D.S. (1976) A general theory of bibliometric and other cumulative advantage processes, Journal of the American Society for Information Science, 27(5), 292-306.
3. Barabási, A.-L. (2002) Linked: The New Science of Networks. Cambridge, MA: Perseus Publishing.
4. Seglen, P.O. (1992) The Skewness of Science, Journal of the American Society for Information Science, 43(9), 628-638.
5. Leydesdorff, L. (in press) An Evaluation of Impacts in Nanoscience & nanotechnology: Steps towards standards for citation analysis, Scientometrics; http://www.springerlink.com/content/6082p65177r04425/.
6. National Science Board (2012) Science and Engineering Indicators. Washington DC: National Science Foundation; http://www.nsf.gov/statistics/seind12/.
7. Bornmann, L., de Moya-Anegón, F., & Leydesdorff, L. (2012) The new excellence indicator in the World Report of the SCImago Institutions Rankings 2011, Journal of Informetrics, 6(3), 333-335.
8. Leydesdorff, L., & Bornmann, L. (in press) Testing Differences Statistically with the Leiden Ranking, Scientometrics; http://www.springerlink.com/content/8g2t2324v0677w86/.
9. Bornmann, L., & Leydesdorff, L. (2011) Which cities produce excellent papers worldwide more than can be expected? A new mapping approach, using Google Maps, based on statistical significance testing, Journal of the American Society for Information Science and Technology, 62(10), 1954-1962.
10. Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E.C.M., Tijssen, R.J.W., van Eck, N.J., ... Wouters, P. (2012) The Leiden Ranking 2011/2012: data collection, indicators, and interpretation; http://arxiv.org/abs/1202.3941.
11. Leydesdorff, L., & Bornmann, L. (2011) Integrated Impact Indicators (I3) compared with Impact Factors (IFs): An alternative design with policy implications, Journal of the American Society for Information Science and Technology, 62(11), 2133-2146. doi: 10.1002/asi.21609
12. Bornmann, L., & Mutz, R. (2011) Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization, Journal of Informetrics, 5(1), 228-230.
13. Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011) Turning the tables in citation analysis one more time: Principles for comparing sets of documents, Journal of the American Society for Information Science and Technology, 62(7), 1370-1381.
14. Wagner, C.S., & Leydesdorff, L. (2012, in press) An Integrated Impact Indicator (I3): A New Definition of Impact with Policy Relevance, Research Evaluation; http://arxiv.org/abs/1205.1419.
15. Opthof, T., & Leydesdorff, L. (2010) Caveats for the journal and field normalizations in the CWTS (Leiden) evaluations of research performance, Journal of Informetrics, 4(3), 423-430.


    From one discipline to another

The other two branches of urban research are those published in Social Science and in Science journals, respectively. These can be compared using the same approach as that used above, but instead here we alter the approach to look at only the authors of the top-cited papers in each discipline. As we are including both articles and reviews in the analysis, but these types of papers have different expected numbers of citations, we rank the articles and reviews separately, and take the top 10% of each according to citations. This allows us to map the distribution of the authors of the highest-impact articles and reviews together. Figure 2 shows the resulting distributions in the Social Sciences and Science clusters, plotted in different colors. Differences are apparent through a comparison of red (Social Science) and cyan (Science) authors. Some regions, such as South Africa and Australia, have more prominence in the Social Sciences; others, such as continental Europe, show a greater presence in the Sciences.
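The selection step described above, ranking articles and reviews separately and taking the top 10% of each document type by citations, can be sketched as follows; the record structure and citation counts are illustrative stand-ins, not the actual Scopus data:

```python
# Sketch of per-document-type top-decile selection: rank articles and
# reviews separately so their different citation expectations do not mix.

def top_decile_by_type(papers):
    """papers: list of dicts with 'type' ('article'/'review') and 'citations'.
    Returns the top 10% most-cited papers of each type, pooled."""
    selected = []
    for doc_type in ("article", "review"):
        group = sorted((p for p in papers if p["type"] == doc_type),
                       key=lambda p: p["citations"], reverse=True)
        cutoff = max(1, round(0.10 * len(group)))  # keep at least one per type
        selected.extend(group[:cutoff])
    return selected

# 20 toy articles (0-19 citations) and 10 toy reviews (0-9 citations).
papers = ([{"type": "article", "citations": c} for c in range(20)]
          + [{"type": "review", "citations": c} for c in range(10)])
top = top_decile_by_type(papers)
print([(p["type"], p["citations"]) for p in top])
# [('article', 19), ('article', 18), ('review', 9)]
```

Ranking within each type first means a modestly cited review is not crowded out by heavily cited articles, which is exactly the imbalance the separate ranking is meant to correct.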

The maps of author affiliations show a finer level of detail than any aggregated country data can provide, and they allow for much more immediate interpretation of the affiliation data. We looked at the distributions of authors, whether including all authors or only highly-cited authors, in the three identified branches of urban research.

There are two elements that may improve this approach further. The first is to include impact data more directly in the mapping process. The second would be to look at collaboration; here papers are duplicated for each affiliation, and there is no sense of the partnerships that go into that creation. A comparison of the collaborative trends in the various urban research clusters would add even deeper insight into their natures.

Figure 1: Distribution of urban studies authors in 2010. Following the method described by Bornmann et al. (3), circles are sized and colored according to the number of papers originating from each location. Data source: Scopus

Figure 2: Distribution of highly-cited Social Science (red) and Science (cyan) urban research authors in 2010. Where authors in the different disciplines are from the same location, this is shown by a darker red or darker cyan than where there is no overlap. Data source: Scopus


References:

1. Kirby, A., & Kamalski, J. (2012) Bibliometrics and Urban Research, Research Trends, No. 28.
2. Kamalski, J., & Kirby, A. (2012, in press) Bibliometrics and urban knowledge transfer, Cities; http://dx.doi.org/10.1016/j.cities.2012.06.012.
3. Bornmann, L. et al. (2011) Mapping excellence in the geography of science: An approach based on Scopus data, Journal of Informetrics, 5(4), 537-546.



Section 4: Behind the data

Identifying emerging research topics in Wind Energy research using author given keywords

Gali Halevi, MLS, PhD and Dr Henk Moed

The value of well-constructed thesauri as a means for effective searching and structuring of information is something a seasoned searcher is very familiar with. Thesauri are useful for numerous information management objectives, such as grouping, defining and linking terms, and identifying synonyms and near-synonyms as well as broader and narrower terms. Searches based on thesauri terms are considered better in terms of both recall and precision (1,2,3).

Yet the construction of a comprehensive thesaurus is a laborious task which often requires the intervention of an indexer who is expert in the subject. Terms incorporated in a thesaurus are selected carefully and examined for their capability to describe content accurately while keeping the integrity of the thesaurus as a whole. Terms incorporated in a thesaurus are referred to as controlled vocabulary or terms. Uncontrolled vocabulary, on the other hand, consists of freely assigned keywords which the authors use to describe their work. These terms can usually be found as a part of an abstract, and appear in most databases as author keywords or uncontrolled vocabularies. In today's fast-moving world of science, where new discoveries and technologies develop rapidly, the pace by which thesauri capture new areas of research may be questioned, and so the value of now using author keywords in retrieving new, domain-specific research should be examined.

This study sought to examine the manners by which thesauri keywords and author keywords manage to capture new and emerging research in the field of Wind Energy. The research questions were as follows:

1. Do author keywords include new terms that are not to be found in a thesaurus?

2. Can new areas of research be identified through author keywords?

3. Is there a time lapse between the appearance of a keyword assigned by an author and its appearance in a thesaurus?

Methodology

In order to answer these questions, we analyzed controlled and uncontrolled terms of 4,000 articles grouped under the main heading Wind Power in Compendex, captured between the years 2005-2012. Compendex is a comprehensive bibliographic database of scientific and technical engineering research, covering all engineering disciplines. It includes millions of bibliographic citations and abstracts from thousands of engineering journals and conference proceedings. When combined with the Engineering Index Backfile (1884-1969), Compendex covers well over 120 years of core engineering literature.

In each Compendex record a list of controlled and uncontrolled terms is given and can be searched on. Over 17,000 terms were extracted from the Compendex records and sorted by frequency. Two separate files were created: one depicting all the controlled terms and the second depicting the author-given keywords (i.e. uncontrolled terms). For each term, a count of the number of times it appears in each year from 2005-2012 and the total number of articles in which it appears was recorded. In addition, a simple trend analysis compared the number of times each term appears on average in papers published during the years 2009-2012 with the same measure calculated for 2005-2008. This trend analysis allowed for a view of terms that increased in usage in the past 3 years, compared to the overall time period.
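The trend analysis described above can be sketched as follows. The yearly counts are invented for illustration (only the 2011 peak of 757 "wind farm" articles is taken from the findings below):

```python
# Sketch of the trend comparison: mean yearly term frequency in 2009-2012
# divided by the mean in 2005-2008; a ratio above 1 signals rising usage.

def trend_ratio(counts_by_year):
    """counts_by_year: dict mapping year -> number of papers using the term."""
    early = [counts_by_year.get(y, 0) for y in range(2005, 2009)]
    late = [counts_by_year.get(y, 0) for y in range(2009, 2013)]
    early_mean = sum(early) / len(early)
    late_mean = sum(late) / len(late)
    return late_mean / early_mean if early_mean else float("inf")

# Hypothetical counts for the author keyword "wind farm".
wind_farm = {2005: 50, 2006: 80, 2007: 120, 2008: 200,
             2009: 350, 2010: 500, 2011: 757, 2012: 400}
print(round(trend_ratio(wind_farm), 2))  # 4.46
```

Sorting all extracted terms by this ratio surfaces the vocabulary that is growing fastest, which is the signal of an emerging topic the study is after.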

To answer the research questions, the following steps were taken:

- All author keywords that appear 100 times or more were collected.
- The author keywords were searched in the Compendex Thesaurus: if an author keyword appeared, the year in which it was introduced was recorded.
- The author keyword was then searched for in Compendex across all years and the year in which it first appeared was recorded.
- The author keywords that appeared more than 100 times were grouped into themes. In addition, these author keywords were searched for in Compendex in order to identify their corresponding articles and the topics they cover.
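The thesaurus-lag comparison in the steps above can be sketched as follows. The lookup tables are stand-ins for the Compendex data: the 1985 and 1993 first-appearance years come from the findings below, while the "smart grid" term and both of its years are purely hypothetical:

```python
# Sketch of the lag computation: years between a term's first use as an
# author keyword and its adoption into the thesaurus (None if never adopted).

first_as_author_keyword = {"wind farms": 1985,
                           "offshore wind farms": 1993,
                           "smart grid": 2007}       # hypothetical entry
year_added_to_thesaurus = {"smart grid": 2011}       # hypothetical entry

def keyword_lag(term):
    """Return the adoption lag in years, or None if the term never
    entered the thesaurus."""
    if term not in year_added_to_thesaurus:
        return None
    return year_added_to_thesaurus[term] - first_as_author_keyword[term]

for term in first_as_author_keyword:
    print(term, keyword_lag(term))
# wind farms None
# offshore wind farms None
# smart grid 4
```

Terms with a `None` lag, like "wind farms" in the findings below, are exactly the cases where author keywords capture research areas that the controlled vocabulary never did.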

Findings

Table 1 shows the most recurring uncontrolled terms. The terms were categorized in 4 groups as follows:


Looking at the corresponding literature within Compendex, there were three main topics that emerged from the author keywords which indicate specialized areas of research within the overall wind power main heading. These terms did not appear in the Compendex thesaurus.

    Wind Farms: This term rst appeared as anuncontrolled term (i.e. Author keywords) in1985 in an article by NASA researchers (4).

The term refers to large areas of land on which wind turbines are grouped. Some examples of such wind farms are the Alta Wind Energy Center (AWEC), which is located in the Tehachapi-Mojave Wind Resource Area in the USA, and the Dabancheng Wind Farm in China. This research includes a wide variety of topics ranging from agriculture and turbine mechanics to effects on the atmosphere and power grid integration. The term has shown substantial growth in use as an author keyword between 2006 and 2012, with a peak of 757 articles in 2011 (see Figure 1).

In the thesaurus, however, this term is included under "Farm buildings", which also contains livestock buildings and other structures that are to be found in farms.

Offshore wind farms: This term first appeared as an uncontrolled term in 1993 (5) and refers to the construction of wind farms in deep waters. Some examples of such wind farms include Lillgrund Wind Farm in Sweden and Walney in the UK. In the thesaurus, articles with this keyword are assigned to the term "Ocean structures". This of course includes other structures such as ocean drilling, gas pipelines and oil wells. The use of this term has been steadily growing (see Figure 2), with a substantial increase between 2008 and 2011.


Topic groups: Environment, Mechanics, Integration, Computerization

Uncontrolled terms: Renewable energies; Renewable energy source; Wind energy; Wind speed; Wind resources; Doubly-fed induction generator; Offshore wind farms; Permanent magnet synchronous generator; Wind farm(s); Wind turbine generators; Wind generators; Wind generation; Wind energy conversion system; Control strategies; Power grids; Power output; Grid-connected; Simulation result

Table 1 Most recurring uncontrolled terms in the retrieved articles, categorized into four topic groups. Source: Engineering Village

Figure 1 Use of keyword "Wind Farm" by authors. Source: Engineering Village


Most surprising, however, is the fact that the term "Wind energy" itself doesn't appear in the thesaurus at all. The topic as a whole appears under "Wind power", which also applies to damage caused by wind, wind turbulence, wind speed and so forth. The term has been used by authors since 1976, first appearing in an article by the Department of the Environment, Building Research Establishment of the UK Government (6), and has seen constant growth between 2006 and 2012 (see Figure 3).

Other emerging topics include: wind energy integration into power grids, effects of wind farms on the atmosphere, wind farm and turbine computer simulations, and control software. In addition, comparing the uncontrolled and controlled terms that appeared most commonly, there are apparent differences in foci as they emerge from the vocabulary. While the uncontrolled vocabulary highlights wind speed and farms, the controlled vocabulary features "Wind power", "Electric utilities", and "Turbomachine blades". This could be due to the fact that the Compendex thesaurus is engineering focused, thus giving the mechanics of wind power conversion prominent descriptors. In this case, the author-given keywords are valuable, and they provide a supplementary view on these topics by depicting the environmental aspects of these research articles. Table 2 illustrates the different foci of the keywords.

    Discussion

Wind energy is by no means a new area of exploration, yet in the past 4 to 5 years this area has seen considerable growth in research output, especially in wind turbine technology and wind harvesting. Although the data sample analyzed is small and covers one subject field only, our findings illustrate that author keywords may indeed include new terms that are not to be found in the thesaurus. The use of thesaurus terms is usually recommended as part of a precision strategy in searching. Yet, in our case, controlled terms have a more general scope. Table 3 below summarizes some of our major conclusions as they pertain to the properties of using author-given keywords and controlled terms in the search process. Our findings show that the use of author-given keywords as a search strategy will be beneficial when one searches for more specific technologies and applications, or new research areas within the overall topic (see Table 3).

Figure 2 Use of keyword "Offshore Wind Farms" by authors. Source: Engineering Village

Table 2 Most common controlled and uncontrolled terms in the search. Source: Engineering Village

Uncontrolled terms                 Controlled terms
Wind speed (43 papers, 10%)        Wind power (172 papers, 41%)
Wind farm (37, 9%)                 Wind turbines (171, 41%)
Wind farms (22, 5%)                Computer simulation (74, 18%)
Wind turbine blades (17, 4%)       Mathematical models (73, 18%)
Fatigue loads (12, 3%)             Aerodynamics (72, 17%)
Wind energy (12, 3%)               Electric utilities (63, 15%)
Wind turbine wakes (12, 3%)        Turbomachine blades (58, 14%)
Control strategies (11, 3%)        Wind effects (49, 12%)
Offshore wind farms (11, 3%)       Rotors (48, 12%)
Power systems (11, 3%)             Wakes (45, 11%)


Our analysis showed, for example, that strongly emerging areas identified in our sample are wind farms and offshore wind farms. These terms, although appearing in the author-given keywords for over 20 years, have not entered the Compendex thesaurus. This could be due to the fact that the Compendex database is engineering-focused and built to serve engineers, therefore grouping these articles under terms that are mechanical in nature. However, this might hinder a broader understanding of the topics in context.

In this case, using the thesaurus as a basis for searching wind energy articles would create broader result sets. Depending on the purpose of the search, this could be viewed as a positive or negative outcome. Our analysis shows that the two types of terms have different properties and serve different purposes in the search process. In the analysis of emerging topics, author-given keywords are useful tools, as they enable one to specify a topic in a way that seems difficult to carry out when one uses only terms from a controlled thesaurus.

Table 3 Evaluation of the impact of controlled and uncontrolled terms on search.

Figure 3 Use of keyword "Wind Energy" by authors. Source: Engineering Village

- Recall (controlled terms ✓): Using controlled terms retrieves a larger number of articles, since they are lumped under broader descriptors.
- Precision (uncontrolled terms ✓): Uncontrolled terms are very specific and enable retrieval of detailed topics.
- Discoverability (both ✓): Uncontrolled terms enable the discovery of new topics and can serve as indicators of the latest discoveries made in this field. Controlled terms enable the clustering of such topics, thus enabling connections between larger numbers of articles and topics.
- Serendipity (controlled terms ✓): Controlled terms are broader, thus retrieving a larger number of articles and enabling serendipity through browsing.
- State of the art (uncontrolled terms ✓): Uncontrolled terms depict the latest descriptors of methods, applications and processes in a certain topic.

References:

1. Sihvonen, A. & Vakkari, P. (2004) Subject knowledge, thesaurus-assisted query expansion and search success, Proceedings of the RIAO 2004 Conference, pp. 393-404.
2. Sihvonen, A. & Vakkari, P. (2004) Subject knowledge improves interactive query expansion assisted by a thesaurus, Journal of Documentation, 60(6), 673-690.
3. Shiri, A.A., Revie, C. & Chowdhury, G. (2002) Thesaurus-enhanced search interfaces, Journal of Information Science, 28(2), 111-122.
4. Neustadter, H.E. & Spera, D.A. (1985) Method for Evaluating Wind Turbine Wake Effects on Wind Farm Performance, Journal of Solar Energy Engineering, Transactions of the ASME, 107(3), 240-243.
5. Olsen, F. & Dyre, K. (1993) Vindeby off-shore wind farm - construction and operation, Wind Engineering, 17(3), 120-128.
6. Rayment, R. (1976) Wind Energy in the UK, Building Services Engineer, (44), 63-69.


    Section 5:

Expert Opinion
Research Evaluation in Practice: Interview with Linda Butler

    Gali Halevi, PhD

    Powered by Scopus


During your career, you have taken part in government-driven research projects using bibliometrics methodologies. Could you give an example or two of the outcomes of these research projects and the way they informed scientific funding?

The most influential body of research I have undertaken relates to analyses of the way Australian academics responded to the introduction of a sector-wide funding scheme that distributes research funding to universities on the basis of a very blunt formula. The formula is based on data on research students, success in obtaining competitive grant income, and the number of research outputs produced. For research outputs, a simple count is used. It does not matter where a publication appeared; the rewards are the same. By looking in detail at the higher education sector, and after eliminating other possible causal factors, I was able to demonstrate that the introduction of the formula led to Australian academics significantly increasing their productivity above long-term trend lines. While the increase was welcome, what was of major concern to policy makers were the findings that the increase in output was particularly high in lower-impact journals, and that Australia's relative citation impact had fallen below that of a number of its traditional OECD comparators.

These findings were part, though not all, of the driver for Australia to introduce a new funding system for research. The same blunt formula is still being used, but it is anticipated that much of the funding it distributes will before long be based on the results of the Excellence in Research for Australia (ERA) initiative, the second exercise of which will be conducted in 2014 (the first was held in 2012). The same research has also been influential in Norway and other Scandinavian countries, where governments sought to avoid the pitfalls of simple publication counts by introducing a tiered system of outputs, with those in more prestigious journals or from more prestigious publishers receiving a higher weighting and therefore resulting in greater funding.

See also: Powerful Numbers: Interview with Dr. Diana Hicks

Examining the literature, there appear to be far more research evaluation studies focusing on life and medical sciences. Why, in your opinion, are these not as prevalent in the social sciences?

I believe this is primarily because quantitative indicators are seen as fairly robust in the biomedical disciplines and are therefore, on the whole, reasonably well accepted by researchers in those fields. This is not the case for the social sciences. There is nothing surprising in this. The biomedical literature is well covered by major bibliometric databases. In addition, sociological studies have given us much evidence on the meaning of citations in the life sciences, and this, together with evaluative studies that have been shown to correlate well with peer review, means researchers have some confidence that measures based on the data are reasonably robust, though always with the proviso that they are not used as a blunt instrument in isolation from peer or expert interpretation of the results.

The same can't be said for the social sciences (or the humanities and arts). There is some evidence that a citation in these disciplines has a different meaning; their scholarship does not build on past research in the same way that it does in the life sciences. It is also well known that coverage of the social sciences is very poor in many disciplines, and only moderate in the best cases. Evaluative studies that use only the indexed journal literature have sometimes demonstrated poor correlation to peer review assessments, and there is understandably little confidence in the application of the standard measures used in the life sciences.

What can be done to measure arts & humanities as well as social sciences better?

I think the most promising initiatives are those coming out of the European Science Foundation, which has for a number of years been investigating the potential for a citation index specifically constructed to cover these disciplines. The problem is that, as it would need to cover books and many journals not indexed by the major citation databases, it is a huge undertaking. Given the current European financial climate, I don't have much confidence that this initiative will progress very far in the short term. It is also an initiative fraught with problems, as seen in the ESF's first foray into this domain with its journal classification scheme (http://www.esf.org/research-areas/humanities.html). Discipline and national interest groups have been very vocal in their criticisms of the initial lists, and a citation index is likely to be just as controversial.

Linda Butler
Web: http://www.lindabutler52.com.au/
Email: [email protected]

Many scholars in these disciplines pin their hopes on Google Scholar (GS) to provide measures that take account of all their forms of scholarship. The problem with GS is that it is not a static database, but rather a search engine. As GS itself clearly points out, if a website disappears, then all the citations from publications found solely in that website


will also disappear, so over time there can be considerable variability in results, particularly for individual papers or researchers. In addition, it has to date been impossible to obtain data from GS that would enable world benchmarks to be calculated: essential information for any evaluative studies.

Do you think that open access publishing will have an effect on journals' content quality, citation tracking and general impact?

The answers to these questions depend on what open access publishing means. If it refers to making articles in the journal literature that are currently only accessible through paid subscription services publicly available, I would expect the journal gatekeepers (the editors and reviewers) to continue with the same quality control measures that currently exist. If all (or most) literature becomes open access, then the short-term citation advantage that is said to exist for those currently in open access form will disappear, but general impact could increase as all publications will have the potential to reach a much wider audience than was previously possible.

But if open access publishing is interpreted in its broadest sense (the publishing of all research output irrespective of whether or not it undergoes any form of peer review), then there is potential for negative impact on quality. There is so much literature in existence that researchers need some form of assessment to allow them to identify the most appropriate literature and avoid the all too real danger of being swamped by the sheer volume of what is available. Some form of peer validation is absolutely essential. That is not to say that peer validation must take the same form as that used by journals; it may be in the form of online commentary, blogs, or the like, but it is essential in some format.

Any new mode of publication presents its own challenges for citation tracking. On the one hand, open access publishing presents huge possibilities in a much more comprehensive coverage of the literature, and potential efficiencies in harvesting the data. But on the other hand, it presents problems for constructing benchmarks against which to judge performance: how is "the world" to be defined? Will we be able to continue using existing techniques for delineating fields? Will author or institutional disambiguation become so difficult that few analysts will possess the knowledge and computer power required to do this?

What forms of measurement, other than citations, should be applied when evaluating research quality and output impact, in your opinion (e.g. usage, patents)?

It is important to use a suite of indicators that is as multi-dimensional as possible. In addition to citation-based measures, other measures of quality that may be relevant include those based on journal rankings, publisher rankings, journal impact measures (e.g. SNIP, SJR) and success in competitive funding schemes. Any indicator chosen must be valid, must actually relate to the quality of research, must be transparent, and must enable the construction of appropriate field-specific benchmarks. Even then, no single indicator, nor even a diverse suite of indicators, will give a definitive answer on quality; the data still need to be interpreted by experts in the relevant disciplines who understand the nuances of what the data is showing.

Choosing indicators of wider impact is a much more fraught task. Those that are readily available are either limited in their application (e.g. patents are not relevant for all disciplines), or refer merely to engagement rather than demonstrated achievement (e.g. data on giving non-academic presentations, or meetings with end-users attended). And perhaps the biggest hurdle is attribution: which piece (or body) of work led to a particular outcome? For this reason, the current attempts to assess the wider impact of academic research are focussing on a case study approach rather than being limited to quantitative indicators. The assessment of impact in the UK's Research Excellence Framework is the major example of such an approach currently being undertaken, and much information on this assessment approach can be found on the website of the agency overseeing this process, the Higher Education Funding Council for England.
See also: Research Impact in the broadest sense: REF 14

During your years as a university academic, did you notice a change among university leaders and research managers in the perception and application of bibliometrics?

From a global perspective, the biggest change has occurred since the appearance of university rankings such as the Jiao Tong and THE rankings. Prior to this, few senior administrators had much knowledge of the use of bibliometrics in performance assessments, other than the ubiquitous journal impact factor. The weightings given to citation data in the university rankings now ensure that bibliometrics are at the forefront of universities' strategic thinking, and many universities have signed up to obtain the data that relates to their own university and use it internally for performance assessment.

In Australia, most university research managers had at least a passing knowledge of the use of bibliometrics in evaluation exercises by the 1990s, through the analyses undertaken by the unit I headed at The Australian National University, the Research Evaluation and Policy Project. However, their interest increased with the announcement that bibliometrics were to form an integral part of a new performance assessment system for Australian universities, the Research Quality Framework, which was ultimately superseded by the ERA framework. This interest was further heightened by the appearance of the institutional rankings mentioned above. While ERA is not currently linked to any substantial funding outcomes, it is expected to have financial implications by the time the results have been published from the second exercise, to be held in 2014. Australian universities are now acutely aware of the citation performance of their academics' publications, and many monitor that performance internally through their research offices.

The downside of all this increased interest in, and exposure to, bibliometrics is the proliferation of what some commentators have labelled "amateur bibliometrics": studies undertaken by those with little knowledge of existing sophisticated techniques, nor any understanding of the strengths and weaknesses of the underlying data. Sometimes the data is seriously misused, particularly in its application to assessing the work of individuals.

What are your thoughts about using social media as a form of indication about scientific trends and researchers' impact?

I have deep reservations about the use of data from social media to construct performance indicators. They relate more to popularity than to the inherent quality of the underpinning research, and at this point in time are incredibly easy to manipulate. They may be able to be used to develop some idea of the outreach of a particular idea, or a set of research outcomes, but are unlikely to provide much indication of any real impact on the broader community. As with many of the new Web 2.0 developments, the biggest challenge is determining the meaning of any data that can be harvested, and judging whether any of it relates to real impact on either the research community, on policy, on practice, or on other end-users of that research.


    Section 6:

Did you know?
That scientometrics is explained in a recent novel

    Matthew Richardson

In Michael Frayn's latest novel Skios, a cast of characters descend on a Greek island for two days of crossed identities and mislaid messages, in the best farcical tradition.

The climax of the affair is the Fred Toppler Lecture, which is set to be delivered this year on the topic of "Innovation and Governance: the Promise of Scientometrics".

As Dr Norman Wilfred explains: "The results of scientific research are scientifically measurable. We have developed a discipline for this. It's called scientometrics. And on the basis of scientometrics science can be scientifically managed."

The ironic response: "This is your lecture, is it? ... I see why you don't want people to miss it."

Is this the first novel to mention scientometrics?


References:

Frayn, M. (2012) Skios. London: Faber & Faber, p. 132.


    Research Trends:

Editorial Board
Henk Moed

    Judith KamalskiIris KisjesAndrew PlumeSarah HuggettMatthew RichardsonGali Halevi

You can find more information on www.researchtrends.com or contact us at [email protected]
