Explore the literature.
-
The Tenure Review Process Must Evolve
Innovative faculty members can lead the way, argue Andrew McKinney and Amanda Coolidge, by encouraging the inclusion of open educational resources work in tenure and promotion portfolios.
-
Coalition for Advancing Research Assessment
Our vision is that the assessment of research, researchers and research organisations recognises the diverse outputs, practices and activities that maximise the quality and impact of research. This requires basing assessment primarily on qualitative judgement, for which peer review is central, supported by responsible use of quantitative indicators.
-
ASM Journals Eliminate Impact Factor Information from Journal Websites
Many scientists attempt to publish their work in a journal with the highest possible journal impact factor (IF). Despite widespread condemnation of the use of journal IFs to assess the significance of published work, these numbers continue to be widely misused in publication, hiring, funding, and promotion decisions.
-
ScholComm Lab's RPT Project
This project examined the RPT process in the US and Canada in ways that can directly inform actions likely to translate into behavioural change and a greater opening of research.
The team collected and analyzed more than 850 RPT guidelines from research institutions across Canada and the US and assessed the degree to which they included guidelines specific to open access, open data, and open education. They also explored the use of the Journal Impact Factor in these documents, analyzing how often the controversial metric was mentioned and how it was defined. Finally, they investigated faculty members’ perceptions of the RPT process, including which publishing strategies they believe will be rewarded and how this influences their dissemination choices.
-
Scan of Promising Efforts to Broaden Faculty Reward Systems to Support Societally Impactful Research
This white paper results from a scan of promising reforms to faculty reward systems that was commissioned by participants in the Transforming Evidence Funders Network (TEFN), facilitated by The Pew Charitable Trusts. These systems, including promotion and tenure (P&T) policies, often focus primarily on faculty members’ scholarly impact measured by citation counts and publication metrics and fail to sufficiently recognize their contributions to policy outcomes, community development, and technological innovation.
Read Pew Charitable Trust’s white paper on efforts to broaden faculty reward systems
-
The h-index is no longer an effective correlate of scientific reputation
The impact of individual scientists is commonly quantified using citation-based measures. The most common such measure is the h-index. A scientist’s h-index affects hiring, promotion, and funding decisions, and thus shapes the progress of science. Our results suggest that the use of the h-index in ranking scientists should be reconsidered, and that fractional allocation measures such as h-frac provide more robust alternatives.
-
Games academics play and their consequences: how authorship, h-index and journal impact factors are shaping the future of academia
Research is a highly competitive profession where evaluation plays a central role; journals are ranked and individuals are evaluated based on their publication number, the number of times they are cited and their h-index. Yet such evaluations are often done in inappropriate ways that are damaging to individual careers, particularly for young scholars, and to the profession. Furthermore, as with all indices, people can play games to better their scores. This has resulted in the incentive structure of science increasingly mimicking economic principles, but rather than a monetary gain, the incentive is a higher score.
-
"What Universities Owe Democracy" pgs 168-186
President Ronald Daniels (JHU) writes, in describing the two critical lessons drawn from the pandemic “The first is that it is now almost undeniable that a more open approach to science, one that makes the barriers between scientists and between scientists and the public more permeable, presents a historic opportunity to renew the promises of knowledge creation and diffusion that have inhered in the modern university since 1876. This openness is both an accelerant of discovery and a salve for liberal democracy, helping to address the frailties of the reproducibility crisis and bringing science closer to the citizens it aims to serve. Second, COVID-19 has revealed a pressing need for the research enterprise and the institutions that house it to establish clear guardrails that will safeguard the legacy of academic fact creation and discovery—the knitted virtues of training, credentialing, rigor, and independence—that have defined that enterprise over the years and made it such an indelible asset to democracy.”
-
Value dissonance in research(er) assessment: individual and perceived institutional priorities in review, promotion, and tenure
There are currently broad moves to reform research assessment, especially to better incentivize open and responsible research and avoid problematic use of inappropriate quantitative indicators. This study adds to the evidence base for such decision-making by investigating researcher perceptions of current processes of research assessment in institutional review, promotion, and tenure processes. Analysis of an international survey of 198 respondents reveals a disjunct between personal beliefs and perceived institutional priorities (‘value dissonance’), with practices of open and responsible research, as well as ‘research citizenship’ comparatively poorly valued by institutions at present. Our findings hence support current moves to reform research assessment.
-
Why NASA and federal agencies are declaring this the Year of Open Science
In this piece, NASA’s Chelle Gentemann describes how NASA is incentivizing open science, and how you can too. “Open-science innovation is being driven by a global community with diverse perspectives. The scientific questions are more interesting and nuanced, the solutions better.”
-
DORA's Project TARA
Project TARA will help DORA identify, understand, and make visible the criteria and standards universities use to make hiring, promotion, and tenure decisions. This information will be used to create resources and practical guidance on research assessment reform for academic and scholarly institutions.
-
Modernizing Scholarship for the Public Good: An Action Framework for Public Research Universities
APLU’s Modernizing Scholarship for the Public Good Action Framework (2023) offers guidance to public research universities on ways that they can support scholars to advance public impact research, Cooperative Extension, civic science, community-engaged research, and other forms of public engagement, with particular attention to the ways that diversity, equity, inclusion, and justice are integral to this work. The Modernizing Scholarship for the Public Good initiative was led by Elyse Aurbach, the APLU Civic Science Fellow, and supported by the Rita Allen Foundation, the Kavli Foundation, the Burroughs Wellcome Fund, APLU’s FANR, and the University of Michigan.
-
DOERS3 Case Studies: Including OER in Tenure
A critical part of sustaining Open Educational Resources (OER) in higher education is recognizing the contributions of instructors who create and improve them as part of their professional work. To aid this effort, DOERS3 has developed an adaptable advisory model, the OER Contributions Matrix, to help guide faculty as they attempt to include their OER work in their tenure and promotion portfolios. Building on this work, with generous support from the Hewlett Foundation, DOERS3 has funded 28 authors for a book-length project centered on valuing open education work in the tenure, promotion, and reappointment process. Coming soon in January 2024, these case studies, written by faculty, staff, and administrators, will detail their experiences trying to appropriately value OER and open educational work in that process.
Read more about the Driving OER Sustainability for Student Success (DOERS3) Collaborative and view the OER Contributions Matrix
-
Make Data Count
Make Data Count is an initiative that promotes open data metrics to enable the evaluation and reward of research data reuse and impact.
While interest in research data and the importance of data sharing has grown in recent years, we lack standardized, widely adopted ways to evaluate the impact of open data across the research ecosystem. We thus lack the means to conduct meaningful evaluations that can lead to credit for individual researchers, incentives for data sharing, and an understanding of how open data advances discoveries.
To enable the evaluation of data usage, Make Data Count drives the development of community-led, transparent, and meaningful open data metrics.
Read a perspective piece about open data metrics, drawing on discussions at the Make Data Count Summit
-
10 Simple Rules for Recognizing Data and Software Contributions in the Promotion and Tenure Process
Align the evaluation process to institutional values
Learn from the experience at other institutions
Tailor the call for applications
Select an appropriate committee
Ensure a dedicated review
Articulate the metrics that will be used
Leverage existing tools
Account for departmental needs
Develop clear guidelines for committee members
Share with the community
-
UK Research Network Open and Responsible Researcher Reward and Recognition (OR4)
The program’s aim is to accelerate the uptake of high-quality open research practices. The Center for Open Science theory of change highlights five types of action that can promote change. This project is focused on making open research rewarded, in particular via the recruitment, promotion, and other recognition activities at institutions that employ researchers.
This project will reform the ways in which institutions recruit, promote, and appraise their staff in order to better reward open research practices.
-
A Framework for Values-Based Assessment in Promotion, Tenure, and Other Academic Evaluations
In this preprint, the authors explore a common theme of centering values, focusing on how incentives could be designed to better reward aspects such as collaboration, equity, rigor, and transparency. The authors present a framework, developed in part through workshops they ran with faculty and department chairs at three professional society convenings. The framework includes 14 values (e.g. creativity, inclusivity, engagement, public good), and for each value it offers scoping considerations, representative academic activities or scholarly outputs, and possible behavioral indicators that could be incorporated into promotion and tenure evaluations.