Open Access Week has traditionally focussed on Open Access to publications, which has acted as a catalyst for transforming scholarly communication more broadly.
Anyone who has set foot in a university or research environment knows that a great deal of multifaceted work goes into producing the 500-10,000 words of a research paper or monograph. One example of Open Research beyond publications is Open Data. Studies have shown that making the underlying research data open, in addition to the long-form written output, increases citations by an average of 30%-65% - on top of the citation advantage already gained from making the paper itself Open Access. The figures, of course, are discipline dependent.
Citations aside, Open Data has wider benefits for the research community, not least helping to avoid the following:
- overemphasis of results from small samples;
- selective reporting;
- statistical errors skewing final results;
- insufficient documentation of research methods.
Other benefits include:
- greatly improved reproducibility of results, and
- much greater efficiency by discouraging redundant data collection.
Data that is made Open Access via recognised data repositories, such as the University of Leicester's Figshare repository, will be assigned a Digital Object Identifier (DOI). The DOI is a key component of Open Access infrastructure because it ensures the data can be linked consistently from reference lists and picked up by citation and altmetric tools. In other words, by adopting identifiers such as the DOI and, as previously covered in this blog, ORCiD across all research administration systems, researchers gain the potential to evidence more of their research practices.
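To make the mechanism concrete, here is a minimal sketch in Python of how a dataset DOI underpins a consistent, resolvable citation. It follows the DataCite recommended citation format (Creator (PublicationYear): Title. Publisher. Identifier); the author names, dataset title and DOI below are hypothetical placeholders, not a real deposited dataset.

```python
def cite_dataset(authors, year, title, publisher, doi):
    """Build a DataCite-style citation string ending in a resolvable DOI URL.

    Any DOI can be resolved by prefixing it with https://doi.org/,
    which is what lets reference lists and altmetric tools link to
    the dataset consistently.
    """
    author_str = "; ".join(authors)
    return f"{author_str} ({year}): {title}. {publisher}. https://doi.org/{doi}"


# Hypothetical example dataset (placeholder metadata and DOI)
example = cite_dataset(
    authors=["Smith, J.", "Jones, A."],
    year=2019,
    title="Survey responses on data-sharing practices",
    publisher="University of Leicester",
    doi="10.25392/leicester.data.0000000",  # hypothetical DOI
)
print(example)
```

Because the DOI at the end of the citation always resolves to the repository record, citation-tracking services can match mentions of the dataset across papers regardless of where it is cited.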
[Image: Bridging the Boxes by Islam Elsedoudi for opensource.com]
Developments in Open Research can draw heavily upon an open publishing framework; take research evaluation as one example. Methods of research assessment have received substantial criticism, including an overemphasis on metrics limited to one aspect of scholarship: journal articles. Another valid criticism of the use of metrics in evaluation is that it diverts attention from qualitative assessment and peer review. One proposed solution is Open Evaluation / Open Peer Review, i.e. an ongoing post-publication process of transparent peer review and rating of papers. It is thought that Open Peer Review could enable an academic to receive credit for their contributions reviewing a significant paper in their field.
Another example deserving more standardised recognition is expertise in a particular resource, collection or piece of equipment. ORCiD have expanded the data collected in an ORCiD profile to include a research resources section, which aims to make researchers' resource expertise discoverable by employers, funders and future collaborators.
The tools covered in this short series of OA Week blog posts are designed with a bigger picture in mind: an Open Research infrastructure that can really come into its own once those tools are applied to all aspects of scholarship.