UKRN, through our Open Research Programme, is enabling a set of pilot projects that will run in 2024 to explore how best to monitor aspects of open research in support of institutional planning and evaluation. Note that this monitoring is aggregate and anonymous; it is distinct from the individual-level indicators that might (or might not) support researcher assessment, and that might therefore be the focus of CoARA working groups. Some of the background to this work is outlined in this project plan. In brief, both the Open Research Programme and the partner institutions need evidence on the uptake and nature of open research practices, in order to plan and evaluate the support that we each provide to the research community.

Following a webinar launching this work, we asked the UK academic sector which institutions wanted to pilot indicators of open research (15 opted in), and which specific open research practices should be priorities for monitoring; we wrote up the results in a working paper. The current priorities turned out to be open/FAIR data, the use of data availability statements, preregistration, and the use of the CRediT taxonomy. We then asked a wide group of solution providers – from small academic-led initiatives to large commercial operations – to tell us whether and how they might work with interested institutions to help them monitor those priorities. We received 15 responses, some of which noted that, because the pilots were unfunded, the providers would struggle to engage with them. The 15 pilot institutions then discussed and agreed which solution providers they wished to work with: OpenAIRE, CORE, Elsevier, Digital Science and PLOS/DataSeer, with some institutions also wanting to work with the Center for Open Science to monitor preregistration.

The four sets of pilots – one for each of the four priorities – are now being designed. However, we thought this was a good moment to take stock of the work and to reflect on lessons we might learn. The exclusion of smaller, academic-led, often open-source initiatives is a concern. It is likely to have arisen for a number of reasons, including that:

  • the lack of funding for the pilots favours large operations with the capital resources to invest strategically in activities without an immediate payback;
  • institutions seem to favour partners that can offer staff time, a degree of brand recognition and familiarity, and large datasets.

This introduces a strongly conservative bias into the process, posing a high barrier to entry, and may reduce diversity within the innovation community.

Our next steps will seek, as far as possible within the parameters of the pilots, to address this concern. We will therefore:

  1. Rigorously implement the agreed set of partnership principles outlined in the plan: promoting interoperability in data and services, transparency, inclusion and collaboration; favouring FAIR and open sector-wide / system-wide data about research; and enabling data portability.
  2. Set up regular opportunities for any interested party to hear about progress in the pilots, to see draft outputs, and to inform their plans.
  3. Pro-actively engage with and support international initiatives and actors that are leading, promoting and enabling collective action toward a more open infrastructure for monitoring open research, including the work of UNESCO.

Beyond this, there appears to be a strong case for the academic sector to reflect on the opportunities and risks of sourcing indicators relating to its core business from third parties, and on how those risks might be managed to benefit research. This discussion is starting in the USA, and in Europe there are also strong signals from some very significant institutions such as the Sorbonne and Utrecht University. Could the academic sector be about to take more control over the ways in which its research is monitored?

Photo credit: Dominika Roseclay, Pexels