By Amy West | April 16, 2014
The Thomson Reuters Data Citation Index (DCI) has been up and running for just over a year. The University of Minnesota (UMN) had a trial of the database when the DCI first launched, but because so little of the database was populated at that time, it was hard to assess it completely. We felt that after a year, it would be worthwhile to revisit it. The initial annual subscription prices were significant: the UMN’s initial quote in late 2012 was over $20,000/year. Like many universities, when we consider new acquisitions, it’s always in light of what we’d have to cancel, since at best our budgets remain flat. Therefore, a potential new acquisition has to be not just good, but a better use of our funds than what we already have.
What makes the DCI interesting is that it puts datasets and journal literature into a single platform, namely Thomson Reuters’ Web of Science (WoS). Within the overall WoS, there is a core collection that constitutes the default search for subscribers. We know from our own statistics, as well as vendor-supplied statistics, that our users do indeed go to WoS. We know that there is use of specialized databases for datasets, like ICPSR’s archive, but the volume of use is much lower. If we had a single tool that made it easy for researchers to just search - without having to worry about what they’re searching - we believe that we’d see an increase in the use of datasets as primary research inputs and greater acceptance of them as primary research outputs. So that became our standard for measuring the DCI: how has Thomson Reuters integrated DCI content into the WoS platform, and is that integration strong enough to allow researchers to just search? Is the DCI part of the core collection? If not, do the links at the record level between datasets and articles provide an adequate substitute? So far, for the UMN, the answer is no: the integration isn’t strong enough to make the DCI a compelling subscription. It’s not part of the core collection, nor do the links at the record level appear robust enough to compensate for its absence from the core collection.
All of that said, the database itself has a number of nice features and shows tremendous potential. I gathered my notes from the UMN’s first trial in late 2012 and our current trial at “Thomson Reuters Data Citation Index”. They are informal notes, and it’s possible I may have gotten some things wrong. Working with trial versions of databases can be difficult, so I welcome comments and corrections!