
How effective is our research data management (RDM) training?

Benchmarking RDM Training
The University of Leicester research data service is involved in an international collaborative project which aims to assess and benchmark the quality of RDM training across institutions. This blog post reports on the progress of the international project so far; it originally appeared on the project blog on 6th October 2017.
Remember, you can sign up for one of our generic or discipline-specific 2017/2018 introduction to RDM training sessions here. We look forward to seeing you.


How effective is your RDM training?
Collaborators (in alphabetical order by surname): Cadwallader Lauren, Higman Rosie, Lawler Heather, Neish Peter, Peters Wayne, Schwamm Hardy, Teperek Marta, Verbakel Ellen, Williamson Laurian, Busse-Wicher Marta
When developing new training programmes, one often asks about the quality of the training. Is it good? How good is it? Trainers often develop feedback questionnaires and ask participants to evaluate their training. However, feedback gathered from participants attending a course does not answer the question of how good that training was compared with similar training available elsewhere. As a result, improvement and innovation become difficult. So how can the quality of training be assessed objectively?
In this blog post we describe how, by working collaboratively, we created tools for objective assessment of RDM training quality.
Crowdsourcing
In order to assess something objectively, objective measures need to exist. Being unaware of any objective measures for benchmarking a training programme, we asked Jisc’s Research Data Management mailing list for help. It turned out that plenty of resources with useful advice and guidance on creating informative feedback forms were readily available, and we gathered all the information received in a single document. However, none of the answers provided the information we were looking for. On the contrary, several people said they would be interested in such metrics. This meant that objective metrics for assessing the quality of RDM training either did not exist, or the community was not aware of them. Therefore, we decided to create RDM training evaluation metrics.
Cross-institutional and cross-national collaboration
For metrics to be objective, and to allow benchmarking and comparison of various RDM courses, they need to be developed collaboratively by a community willing to use them. Therefore, the next question we asked Jisc’s Research Data Management mailing list was whether people would be willing to work together to develop and agree on a joint set of RDM training assessment metrics and a system that would allow cross-comparison and training improvement. Thankfully, the RDM community tends to be very collaborative, and this was the case here too: more than 40 people were willing to take part in the exercise, and a dedicated mailing list was created to facilitate collaborative working.
Agreeing on the objectives
To ensure effective working, we first needed to agree on common goals and objectives. We agreed that the purpose of creating a minimal set of questions for benchmarking is to identify what works best in RDM training. We worked on the assumption that this was for ‘basic’ face-to-face RDM training for researchers or support staff, but the approach can be extended to other types and formats of training session. We reasoned that using the same set of questions in feedback forms across institutions, combined with sharing training materials and contextual information about sessions, should facilitate the exchange of good practice and ideas. As an end result, this should allow constant improvement and innovation in RDM training. We therefore had joint objectives, but how could we achieve them in practice?
Methodology
Deciding on common questions to be asked in RDM training feedback forms
In order to establish joint metrics, we first had to decide on a joint set of questions that we would all agree to use in our participant feedback forms. To do this we organised a joint catch-up call during which we discussed the various questions we were asking in our feedback forms, and why we thought these were important and should be mandatory in the agreed metrics. There were lots of good ideas and valuable suggestions. However, by the end of the call, and after eliminating all the non-mandatory questions, we still ended up with a list of thirteen questions which we thought were all important. This was too many to ask participants to fill in, especially as many institutions would also need to add their own institution-specific feedback questions.
In order to bring down the number of questions to be made mandatory in feedback forms, a short survey was created and sent to all collaborators, asking respondents to judge how important it was for each question to be mandatory (on a scale of 1-5, with 1 being ‘not important at all that this question is mandatory’ and 5 being ‘this should definitely be mandatory’). Twenty people participated in the survey. The total score received from all respondents was calculated for each question, and the six questions with the highest scores were selected to be made mandatory.
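To make the tallying step concrete, here is a minimal sketch of how the totals and the top-six selection can be computed. The question labels, ratings and Python tooling are our own illustration, not the survey platform actually used:

```python
# Hypothetical sketch of the prioritisation step: sum each candidate
# question's 1-5 importance ratings across respondents and keep the
# six highest-scoring questions. Labels and ratings are invented.
from collections import defaultdict

# One dict per respondent (20 in the actual survey), rating each of the
# thirteen candidate questions from 1 (need not be mandatory) to
# 5 (should definitely be mandatory). Only a few are shown here.
responses = [
    {"Q01": 5, "Q02": 3, "Q03": 4, "Q04": 2},
    {"Q01": 4, "Q02": 2, "Q03": 5, "Q04": 3},
    {"Q01": 5, "Q02": 4, "Q03": 5, "Q04": 1},
]

totals = defaultdict(int)
for respondent in responses:
    for question, rating in respondent.items():
        totals[question] += rating

# Rank questions by total score and keep the top six as mandatory.
ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
mandatory = [question for question, _score in ranked[:6]]
print(mandatory)
```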
Ways of sharing responses and training materials
We next had to decide how we would share feedback responses from our courses, as well as the training materials themselves. We unanimously decided that the Open Science Framework (OSF) supports the goals of openness, transparency and sharing, and allows collaborative working, so it was a good home for the project. We therefore created a dedicated space for the project on the OSF, with separate components for the joint resources developed, for sharing training materials, and for sharing anonymised feedback responses.
Next steps
With the benchmarking questions agreed and a space created for sharing anonymised feedback and training materials, we were ready to start collecting the first feedback for the collective training assessment. We also thought this was a good opportunity to re-iterate our short-, mid- and long-term goals.
Short-term goals
Our short-term goal is to revise our existing training materials to incorporate the agreed feedback questions into RDM training courses starting in autumn 2017. This would allow us to obtain the first comparative metrics at the beginning of 2018 and to evaluate whether the methodology and tools we have designed are working and fit for purpose. It would also allow us to iterate over our materials and methods as needed.
Mid-term goals
Our mid-term goal is to see whether the metrics, combined with shared training materials, allow us to identify the parts of RDM training that work best and to collectively improve the quality of our training as a whole. This should be possible in mid- to late 2018, allowing time to adapt training materials as a result of the comparative feedback gathered at the beginning of 2018 and to assess whether those adaptations resulted in better participant feedback.
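As a rough illustration of what such comparative metrics might look like, the sketch below summarises anonymised feedback from several institutions as mean ratings per agreed question. The file names and column names ("institution", "question", "rating") are assumptions made for the example, not a format the group has agreed:

```python
# Hypothetical sketch: combine anonymised feedback shared by several
# institutions and compare mean ratings per agreed question.
# File and column names are assumptions for illustration only.
import pandas as pd

files = ["institution_a_feedback.csv", "institution_b_feedback.csv"]
feedback = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

# Mean rating per agreed question, broken down by institution, so that
# stronger and weaker parts of each course stand out for comparison.
summary = (
    feedback.groupby(["institution", "question"])["rating"]
    .mean()
    .unstack("question")
    .round(2)
)
print(summary)
```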
Long-term goals
Our long-term goal is to collaboratively investigate and develop metrics which could allow us to measure and monitor the long-term effects of our training. Feedback forms and satisfaction surveys completed immediately after training are useful and help to assess the overall quality of the sessions delivered. However, the ultimate goal of any RDM training should be the improvement of researchers’ day-to-day RDM practice. Is our training really having any effect on this? In order to assess this, different kinds of metrics are needed, coupled with long-term follow-up with participants. We decided that any ideas developed on how best to address this will also be gathered on the OSF, and we have created a dedicated space for this work in progress.
Reflections
Reflecting on the work we did together, we all agreed that we were quite efficient. We started in June 2017, and it took us two joint catch-up calls and a couple of email exchanges to develop and agree on joint metrics for the assessment of RDM training. Time will tell whether the resources we have created help us meet our goals, but we all felt that during the process we had already learned a lot from each other by sharing good practice and experience. Collaboration turned out to be an excellent solution for us. Our discussions are open to everyone, so if you are reading this blog post and would like to collaborate with us (or simply to follow our conversations), just sign up to the mailing list.
Resources
Mailing list for RDM Training Benchmarking: http://bit.ly/2uVJJ7N
Project space on the Open Science Framework: https://osf.io/nzer8/
Mandatory and optional questions: https://osf.io/pgnse/
Space for sharing training materials: https://osf.io/tu9qe/
Anonymised feedback: https://osf.io/cwkp7/
Space for developing ideas on measuring long-term effects of training: https://osf.io/zc623/

