EDITORIAL DECISIONS & DETERMINANTS OF IMPACT

Ayeh Bandeh-Ahmadi
Department of Economics, University of Maryland & hypochondri.ac

Goals/Outcomes of This Study

• Better design of innovative organizations
• Case study of knowledge commons use
• Insights for designing knowledge commons
• Development of a broader set of metrics, including text-based metrics
• Beyond publications: applications to grants, prizes, fellowships, society elections

Background

• Journal publications are broadly used for job placement, tenure, grants, and prizes
• The landscape is changing with:
  – the advent of electronic publications (e.g., PLoS)
  – financial pressure on publishers
  – blogs and working papers

• Opportunities to build new institutions and tweak existing ones

More Context

“a system in which commercial publishers make profits based on the free labor of mathematicians and subscription fees from their institutions’ libraries, for a service that has become largely unnecessary”

— Signed by 34 mathematicians, February 2012

• Ongoing realities:
  – Elsevier boycott
  – Opportunities for design of a Scientific Commons

• Research questions:
  – What are the incentives for editors, referees, authors, and publishers?
  – What does quality actually mean? Citations?

• Learning experiences with proprietary arrangements
• Insights for designing a Scientific Commons

Stakeholder Framework

• Interactions between stakeholders: publishers, editors, referees, authors, libraries, and readers
• Researchers: students, professors, academic organizations, research funding agencies, others?

Existing Theories/Related Literature

• Abrevaya & Hamermesh (2009)
  – No evidence of gender bias among economics referees

• Cherkashin et al. (Type I/II errors in accept/reject decisions at the Journal of International Economics)
  – Co-editor standards seem to vary significantly
  – Type I error is small (rejected papers are less cited)
  – Type II error is large (poorly cited accepted papers)

Existing Theories/Related Literature

• Ellison (2002)
  – A theory of editorial incentives and referee communication explains increasing publication lags

• Laband (1990)
  – Survey data characterize critical referee roles, with editors feeling they must take marginal work

• Laband & Piette (1994)
  – Work by editors’ peers, when selected by editors, often yields greater citations

Theory: Factors affecting editorial decisions, referee recommendations, and citations

• Editors select papers to publish based on:
  – fit within the journal’s niche
  – accuracy
  – future impact
  – pressure from publishers to fill pages
  – perhaps also personal benefit (citations, etc.)

• What role do referees play?
  – providing feedback to improve the quality of mediocre papers; rich information content in referee letters
  – evaluating fit, accuracy, future impact, personal benefit

• What drives citations?
  – hot topics, famous authors, publication in journals

Data: Editorial Databases

• Manuscript ID
• Manuscript received date
• JEL codes
• Country
• Co-editor ID
• Co-editor evaluation:
  – Summary Reject, No Referee Input
  – Summary Reject, Referee Input
  – Reject
  – Withdrawn
  – Returned for Revision
  – Conditionally Accepted (Minor Revisions)
  – Accept
• Revision
• Manuscript text
• Referee review text
• Referee ID
• Referee evaluation:
  – No Recommendation (i.e., no review submitted)
  – Definite Reject
  – Reject
  – Weak Revise & Resubmit
  – Revise & Resubmit
  – Strong Revise & Resubmit
  – Accept with Revisions
  – Accept

Data from five journals: 5,372 submissions, 2004–2010

Building Meaningful Metrics for Testing Theories

• Textual content of manuscripts:
  – similarity to past accepted articles in the journal (see the sketch below)
  – presence of the co-editor’s name in the paper’s references (first and last versions)
  – paper length
• Textual content of referee reviews
• Referee scores (past and current)
• Editorial scores (past and current)
• Citation commons (IDEAS, Google Scholar)
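As a rough illustration of the similarity metric, here is a minimal sketch assuming a plain TF-IDF bag-of-words representation; the study’s exact preprocessing is not specified in the slides, and the placeholder texts stand in for real manuscripts. A submission’s fit with the journal’s niche could be scored as its mean cosine similarity to past accepted articles:

```python
# Sketch of the textual-similarity metric: TF-IDF vectors plus cosine
# similarity. Vectorizer settings and the mean-based "fit" score are
# assumptions, not the study's documented pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_accepted = ["...full text of accepted article one...",
                 "...full text of accepted article two..."]
submission = "...full text of the new submission..."

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(past_accepted + [submission])

# Similarity of the submission (last row) to each past accepted article.
sims = cosine_similarity(X[-1], X[:-1]).ravel()
niche_fit = sims.mean()  # one possible "fit with journal niche" score
```

The same vector space can also feed the submission-similarity network described on the next slide.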

Textual similarity between submissions

• Edges show relationships between submissions with textual cosine similarity of at least 18%.

• Green nodes are accepted submissions; pink nodes are all other submissions.

• Layout uses a spring-embedder algorithm (Kamada & Kawai, 1989).
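A minimal sketch of how such a network could be built and laid out; the random vectors and the 20% acceptance rate below are stand-ins, not the study’s data:

```python
# Sketch: similarity network with an 18% cosine threshold and a
# Kamada-Kawai (spring-embedder) layout. Replace X and `accepted`
# with real TF-IDF vectors and acceptance flags.
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.random((60, 40))                       # stand-in document vectors
norms = np.linalg.norm(X, axis=1)
S = (X @ X.T) / np.outer(norms, norms)         # pairwise cosine similarity
accepted = rng.random(60) < 0.2                # stand-in acceptance flags

G = nx.Graph()
G.add_nodes_from(range(len(X)))
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        if S[i, j] >= 0.18:                    # edge threshold from the slide
            G.add_edge(i, j)

pos = nx.kamada_kawai_layout(G)                # spring-embedder layout
nx.draw(G, pos, node_size=30,
        node_color=["green" if a else "pink" for a in accepted])
plt.show()
```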

Models

• LASSO model selection: predict citations of journal submissions from referee-report language, whether or not the submissions are published (a sketch follows below)

• Ordered probit model of referee decisions

• Ordered probit model of editorial decisions
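A minimal sketch of the LASSO step under stated assumptions: referee reports reduced to TF-IDF n-grams, with cross-validated LassoCV choosing the penalty. The slides do not specify the feature set or penalty selection, and the report snippets and citation counts below are simulated:

```python
# Sketch: LASSO-selected citation model on referee-report language.
# `reports` and `citations` are illustrative stand-ins for the real data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LassoCV

reports = ["the debate over wage policy from 1990 to 2000 ...",
           "the counterfactual exercise is not convincing ...",
           "government data from 1980 to 1995 support ..."] * 10
citations = np.random.default_rng(0).poisson(20, len(reports))

vec = TfidfVectorizer(ngram_range=(1, 3), min_df=2)
X = vec.fit_transform(reports)

model = LassoCV(cv=5).fit(X.toarray(), citations)
kept = np.flatnonzero(model.coef_)             # terms LASSO retains
print([vec.get_feature_names_out()[k] for k in kept])
```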

Referee-Language Determinants of Score and Citations

• Little correlation between the referee-language coefficients for referee score and those for eventual citations

[Figure: two rank–rank scatter plots of referee-language term coefficients. Left panel: rank(cite_coeffs_pvalue) against rank(score_coeffs_pvalue), fitted line f(x) = 0.0042x + 2850.9 with R² ≈ 0.00002. Right panel: rank(cite_coeff) against rank(score_coeff), fitted line f(x) = 0.158x + 2411.9 with R² ≈ 0.025.]

LASSO Model Findings: Determinants of Impact

• Terms of significance in referee reports include:
  – (+) “debate”
  – (+) “government”
  – (+) “from [years] to [years]”
  – (–) “counterfactual”
  – (+) “wage”

• Could identify the types of impact valued by different individuals and different fields

Basic Model

Journals/editors face a separable paper payoff function over accuracy a, impact i, and fit f, assuming fit is ex ante observable:

$\Pi_p(a, i, f) = a + i + f, \qquad a, i, f \in [0, 1]$

Basic Model

Journals/editors maximize expected payoff, selecting a threshold based on available space and making decisions D accordingly:

$E\big(\Pi_p \mid s_a, s_i\big) = \iint_{a,i} \Pi_p(a, i, f)\, A(s_a, a)\, I(s_i, i)\, da\, di$

$D = \begin{cases} \text{accept} & \text{if } E(\Pi_p) \geq \text{threshold} \\ \text{reject} & \text{if } E(\Pi_p) < \text{threshold} \end{cases}$
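A numerical sketch of this decision rule, assuming the additive payoff above and Gaussian signal densities truncated to [0, 1]; both are assumptions, since the slides leave A and I unspecified:

```python
# Sketch: expected payoff E(Pi_p | s_a, s_i) by numerical integration,
# assuming Pi_p(a, i, f) = a + i + f and Gaussian signal likelihoods
# A(s_a, a), I(s_i, i) normalized over [0, 1].
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def expected_payoff(s_a, s_i, f, sigma=0.1):
    grid = np.linspace(0.0, 1.0, 201)
    a, i = np.meshgrid(grid, grid, indexing="ij")
    A = norm.pdf(a, loc=s_a, scale=sigma)
    I = norm.pdf(i, loc=s_i, scale=sigma)
    A /= trapezoid(norm.pdf(grid, loc=s_a, scale=sigma), grid)  # truncate to [0,1]
    I /= trapezoid(norm.pdf(grid, loc=s_i, scale=sigma), grid)
    payoff = a + i + f
    inner = trapezoid(payoff * A * I, grid, axis=1)   # integrate over i
    return trapezoid(inner, grid)                     # integrate over a

threshold = 1.8                                       # set by available space
E = expected_payoff(s_a=0.7, s_i=0.8, f=0.5)
print("accept" if E >= threshold else "reject", round(E, 3))
```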

Methods

• Econometric analysis to understand the form of the payoff function Π_p

• Dig deeper to develop metrics for a, i, f:
  – measure submissions’ similarity to past accepted articles (fit within the journal’s niche)
  – develop an estimator of submission impact based on referee language (potential impact)
  – referee scores seem a good proxy for accuracy
  – also, count mentions of journal editors’ names in submissions (see the sketch below)
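A minimal sketch of the editor-mention count; the surnames and regex here are hypothetical illustrations, not the study’s actual procedure:

```python
# Sketch: count mentions of journal editors' names in a submission,
# e.g., to compare the first vs. final version of the manuscript.
import re

editor_surnames = ["Smith", "Nakamura"]        # hypothetical editor names

def editor_mentions(text: str) -> dict:
    """Whole-word mention counts for each editor surname."""
    return {name: len(re.findall(rf"\b{re.escape(name)}\b", text))
            for name in editor_surnames}

first_version = "... as shown by Smith (2003) and Smith and Jones (2005) ..."
print(editor_mentions(first_version))          # {'Smith': 2, 'Nakamura': 0}
```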

Findings: Editor & Referee Preferences (Ordered Probits)

• Editors show a preference for:
  – (+) referee score
  – (+) fit of the submission with the journal’s niche
  – (+) potential impact
  – (++) interaction between potential impact and referee score

• Referee scores show:
  – (+) a very small taste for potential impact
  – no significant effect of fit with the niche

• Terms of significance in referee reports (“debate”, “from [years] to [years]”, etc.)
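A minimal sketch of an ordered probit with the interaction term, using statsmodels on simulated data; variable names and coefficients are illustrative assumptions, not the study’s estimates:

```python
# Sketch: ordered probit of the editorial decision on the metrics above.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "referee_score": rng.normal(size=n),
    "niche_fit": rng.normal(size=n),
    "potential_impact": rng.normal(size=n),
})
df["interaction"] = df.referee_score * df.potential_impact

# Simulated latent editorial quality, cut into ordered decision categories.
latent = df.to_numpy() @ np.array([1.0, 0.3, 0.2, 0.5]) + rng.normal(size=n)
decision = pd.Series(pd.cut(latent, [-np.inf, -1, 0, 1, np.inf],
                            labels=["reject", "revise", "cond_accept", "accept"]))

model = OrderedModel(decision, df, distr="probit")
print(model.fit(method="bfgs", disp=False).summary())
```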

More Findings

• Evidence that published papers closer to the journal’s niche get more impact/citations

• Papers mentioning an editor in the first version are more likely to be eventually accepted

• Editors are harder on papers that mention them

• Evidence that editors differentiate/select higher-impact papers among those that mention them (some do, others don’t; editors can be scored on this)

• Some editors choose harder referees (or are less likely to desk-reject lower-quality papers)

Summary

• Supports Laband’s hypothesis that there is a great deal of rich information content in referee reviews

• The appearance of favoritism among editors may reflect better selection (as in Laband & Piette)

• Evidence of an interaction between accuracy and potential citation impact suggests it may be much riskier to produce and submit papers expected to receive high citation counts at certain journals

• Characterizes (binding) budget constraints on referee capital

• Characterizes (low) publication-space limits on high-impact publications

Insights for Development of Scientific Commons on Journals

• Referee reviews:
  – include full texts if possible
  – alternatively, include check boxes for referees to indicate the types of merit relevant to each journal:
    • unique data sources or experimental setups
    • contributions to ongoing debates or policy
  – computational methods for the types of merit relevant to each journal

• Anonymized referee scores
• Measures of textual similarity to past accepted work
• Anonymous flags on citations of referees’ or editors’ own work in a paper
  – can be used to score referees and editors if desired

All of these are simple to compute automatically given modern software technologies, and are relevant to many types of scientific review (e.g., society elections, grant awards, tenure decisions) and other processes where text accompanies data on preferences (employee promotion, appraisals, product reviews, health evaluations).

Insights for Development of Scientific Commons on Journals

• Balance between breadth of access and data detail, e.g.:
  – referee/author identities
  – review/manuscript texts
  – the ability to connect datasets is critical!

• Text-based metrics can be powerful:
  – LASSO for model selection on text
  – recognition of editor names in first and last versions
  – textual similarities (to past publications/submissions)
  – automatically computable given modern software technologies

• Relevant to many types of scientific review:
  – society elections
  – grant awards
  – tenure decisions
  – other processes where text accompanies data on preferences (employee promotion, legal filings, appraisals, product reviews, health evaluations)

DISCUSSION/QUESTIONS

Ayeh Bandeh-Ahmadi
[email protected]

