Open Research Programme
A five-year programme supported by Research England, UKRN institutional members and project partners
Workstream 2: Evaluation
The lead for this workstream in 2022-23 is Malcolm Macleod, University of Edinburgh.
Aim and objectives
We will develop and use effective ways to assess the changes the programme seeks to make among both researchers and institutions.
We will do this by:
- Developing and regularly deploying a high-quality survey of researchers’ open research practices, and their views on the support they have in adopting them. This survey serves three purposes:
  - It provides a baseline for the programme, and a regular measure of progress
  - It gives participating institutions reliable insight to inform their planning
  - It informs the programme in both the training and the reward/recognition areas
- Developing effective ways to assess the effects of specific programme interventions, notably those on training and reward/recognition
Practically, the steps we are taking are as follows:
Evaluation of open research practices overall
We have designed and piloted a survey instrument covering 14 specific categories of open research practice; for each practice it asks how important it is, how common it is, and how well it is supported. We are now planning how best to deploy the survey to institutions in the programme in a way that ensures the findings are as representative as possible. The next step will be to analyse survey responses both at an aggregated level and at the level of each individual institution. We will disseminate anonymised, aggregated results openly, and share each institution’s results with that institution. The results will inform the training programme (workstream 1), as well as a discussion in early 2023 with professional and technical staff on how they support open research. We plan to repeat the survey, perhaps twice, during the programme.
Evaluation of specific interventions
This work will begin shortly. We will design non-survey methods for evaluating interventions, including evaluation of training at both individual and institutional levels. Working with partner institutions, we will put the necessary arrangements in place ahead of the training roll-out in 2023. We will also explore novel evaluation methods that may provide better evidence of change than those currently in use.