8th workshop on “Perspectives on Scientific Error” with Finn Luebber

Finn Luebber from our team was invited to present his PhD research on "A Psychometric Analysis and Primer for Decision Making in Third-Party Funding Allocation".


Finn Luebber (Department of Psychiatry and Psychotherapy, University of Lübeck); Sören Krach (Department of Psychiatry and Psychotherapy, University of Lübeck); Daniel Leising (Chair of Assessment and Intervention, Technische Universität Dresden); Frieder M. Paulus (Department of Psychiatry and Psychotherapy, University of Lübeck); Lena Rademacher (Department of Psychiatry and Psychotherapy, University of Lübeck); Rima-Maria Rahal (Institute for Cognition and Behavior, Vienna University of Economics)


Research at universities is increasingly funded through competitive, project-based grants. Applications for these grants, and their subsequent reviews, consume substantial resources (time and money), and most applications are ultimately rejected. Grant applications thus strain the system as a whole, and errors of any type can be very costly. In addition, grant peer review has been criticized on several grounds, e.g., the poor reliability of reviewer scores, a bias against novel ideas, a bias against certain groups of applicants, and a shift of the incentive landscape toward short-term, highly marketable research results.

However, one of the core questions has rarely, if ever, been addressed: Does peer review actually pick projects that end up delivering high value, and does it do so better than other processes of allocating funding? Put differently: Is the core assumption that a cost-intensive, competitive peer review process is needed to find and fund the best research actually true? The justification for the current process of project-based grant funding rests on this assumption, namely that peer review is valid and worth its costs, yet the assumption has not been tested.
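To make the question concrete, here is a minimal simulation sketch (our own illustration, not the model from the talk): reviewer scores are treated as a noisy predictor that correlates with true project value at some validity r, the top-scored applications are funded, and the average true value of the funded projects is reported. Setting r = 0 corresponds to a pure funding lottery; all names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def mean_funded_value(validity, n_applicants=1_000, fund_rate=0.2, n_sims=2_000):
    """Average true (standardized) value of funded projects when reviewer
    scores correlate with true project value at the given validity."""
    n_funded = int(n_applicants * fund_rate)
    means = []
    for _ in range(n_sims):
        value = rng.standard_normal(n_applicants)   # latent project value (z-scores)
        noise = rng.standard_normal(n_applicants)
        # Reviewer score: signal plus noise, with corr(score, value) = validity
        score = validity * value + np.sqrt(1 - validity**2) * noise
        funded = np.argsort(score)[-n_funded:]      # fund the top-scored applications
        means.append(value[funded].mean())
    return float(np.mean(means))

for r in (0.0, 0.2, 0.5, 0.8):  # r = 0.0 is equivalent to a lottery
    print(f"validity r = {r:.1f} -> mean value of funded projects: {mean_funded_value(r):+.2f} SD")
```

Under these assumptions, the gain over a lottery scales linearly with the validity of reviewer scores, which is precisely the quantity that, as argued below, is rarely measured.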

To address this gap, we use a formalized theoretical model from psychometric and diagnostic research to establish how these assumptions could be tested, and we delineate the conditions under which such a process would be worth its costs. We show that two key types of data for this evaluation are usually unavailable: first, an assessment of research quality and its connection to value for society; and second, the validity of peer review, i.e., how well peer review scores predict actual outcomes.
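One classical way to formalize "worth its costs" in psychometric selection research is the Brogden-Cronbach-Gleser utility model: the expected monetary gain of a selection procedure over random selection is the number of funded projects, times the validity r of the selection scores, times the monetary standard deviation of project value, times the mean standardized score of those selected, minus the cost of running the process. Whether this is the exact model used in the talk is our assumption; the sketch below only solves that expression for the break-even validity, and all numbers are purely hypothetical.

```python
from statistics import NormalDist

_nd = NormalDist()

def mean_selected_z(selection_ratio):
    """Mean standardized score among the funded top fraction of a normal
    distribution: pdf(z_cut) / selection_ratio."""
    z_cut = _nd.inv_cdf(1 - selection_ratio)
    return _nd.pdf(z_cut) / selection_ratio

def breakeven_validity(review_cost, n_funded, sd_value, selection_ratio):
    """Validity r at which the Brogden-Cronbach-Gleser gain
    n_funded * r * sd_value * mean_selected_z equals the review cost."""
    return review_cost / (n_funded * sd_value * mean_selected_z(selection_ratio))

# Hypothetical numbers, for illustration only: a call that funds 100 of 500
# applications, a project-value SD of 50,000 (money units), and a review
# process costing 2,000,000 in applicant and reviewer time.
r_min = breakeven_validity(review_cost=2_000_000, n_funded=100,
                           sd_value=50_000, selection_ratio=100 / 500)
print(f"Peer review beats a free lottery only if its validity exceeds r = {r_min:.2f}")
```

The point is not the particular numbers but the structure of the condition: without estimates of both the validity of peer review and the value distribution of funded research, the break-even condition cannot even be evaluated.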

We argue that more research on research funding is needed, including experimental designs, together with assessments of the validity of peer review and of the quality and value of funded research. We also call for a shift towards greater transparency and data accessibility on the part of funding agencies. Combining these with regular evaluations of funded projects and funding mechanisms can support better evidence-based policy design for allocating research funding.

