Conference Presentations 2016

  • IASSIST 2016 - Embracing the 'Data Revolution': Opportunities and challenges for research, Bergen
    Host Institution: NSD - Norwegian Centre for Research Data

3A: Opening up open data (Wed, 2016-06-01)
Chair: Jenny Muilenburg

  • Open data and citizen empowerment: Opening National Food Survey data
    Sharon Bolton (UK Data Archive)

    [abstract]

    During 2016, the UK Data Service has been collaborating with a UK government department on an initiative to open National Food Survey data. What are the rewards and challenges of repurposing previously safeguarded data? This presentation will cover elements such as negotiation, re-licensing, privacy and disclosure review, and the upgrade of legacy data to improve the experience for users old and new.

  • Infrastructures for the Data Revolution: How OpenAIRE supports the EC's Open Access and Open Data Policies
    Tony Ross-Hellauer (OpenAIRE/Uni Goettingen)
    Alen Vodopijevec (OpenAIRE/Institut Ruder Boskovic)

    [abstract]

    OpenAIRE2020 is an Open Access (OA) infrastructure for research which supports open scholarly communication and access to the research output of European funded projects. With over five years' experience of supporting the European Commission's OA policies, OpenAIRE now has a key role in supporting the EC's Horizon 2020 Open Data Pilot.

    OpenAIRE's community network works to gather research outputs, highlight the OA mandate, and advance open access initiatives at national levels. It has National Open Access Desks in over 30 countries, and operates a European Helpdesk system for all matters concerning open access, copyright and repository interoperability. At the same time, OpenAIRE harvests metadata from a network of Open Access repositories, data repositories, aggregators and OA journals. It then enriches this metadata by linking people, publications, datasets, projects and funding streams. This interlinked information, which currently encompasses more than 13 million publications and 12 thousand datasets from more than 6 thousand data sources, helps optimise the research process, increasing research visibility, facilitating data sharing and reuse, and enabling the monitoring of research impact. This presentation will outline how an infrastructure like OpenAIRE can help turn OA policy into successful implementation.
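
    As a rough illustration of how a developer might draw on this interlinked information, the sketch below queries OpenAIRE's public search API for publications. The endpoint and the keywords/size parameters follow OpenAIRE's documented HTTP search API; the exact response schema may differ from what is assumed here, so the title extraction is deliberately loose.

    import requests
    import xml.etree.ElementTree as ET

    BASE = "https://api.openaire.eu/search/publications"

    def search_publications(keywords, size=10):
        """Return publication titles matching the given keywords."""
        resp = requests.get(BASE, params={"keywords": keywords, "size": size})
        resp.raise_for_status()
        root = ET.fromstring(resp.content)
        # Titles appear as <title> elements in the result metadata;
        # match on the local tag name to stay independent of namespaces.
        return [el.text for el in root.iter()
                if el.tag.split("}")[-1] == "title" and el.text]

    for title in search_publications("open access infrastructure"):
        print(title)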

  • 101 cool things to do with open data - running an App challenge
    Louise Corti (UK Data Archive)

    [abstract]

    In the summer of 2015 the UK Data Service and a small company, AppChallenge.net, collaborated to launch a developer contest using open data about the quality of life of European citizens. The project involved creating an open dataset, certified at the 'Expert' level by the UK's Open Data Institute, and making it available via a new test open API (Application Programming Interface). In this paper I will set out how we opened up and richly documented two years of the European Quality of Life Survey (EQLS) data, carried out by Eurofound, through detailed disclosure review (using an SDC tool in R) and the harmonisation of variables across years.

    These data were made available via our new pilot public API, with weights added at the point of making a call. The project used crowdsourcing to generate innovative apps and services from developers who may not otherwise have discovered the UK Data Service. Developers from across the world took part in our EULife AppChallenge competition, with an 18-year-old Polish man winning the contest with his EULife Quizzes and scooping the largest cash prize. I'll share with you how we got this Challenge off the ground, some of the lessons we learned and some of the great winning ideas. One lesson: don't assume that app developers will read any of your beautiful archive documentation - they won't - they just want rich, self-documenting data through a single API.
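
    To make the "single API" point concrete, here is a hypothetical sketch of the kind of call an app developer might make against a survey-data API such as the pilot described above. The host, path, parameter names and response shape are placeholders invented for illustration, not the actual UK Data Service pilot API.

    import requests

    API_ROOT = "https://api.example.org/eqls"  # placeholder host, not the real pilot service

    def fetch_distribution(variable, year, weighted=True):
        """Fetch the (optionally weighted) distribution of one harmonised EQLS variable."""
        resp = requests.get(
            f"{API_ROOT}/distributions",
            params={"variable": variable, "year": year, "weighted": weighted},
        )
        resp.raise_for_status()
        return resp.json()  # e.g. a list of {"category": ..., "weighted_count": ...} records

    # Weighted distribution of a hypothetical life-satisfaction item for the 2012 wave
    data = fetch_distribution("life_satisfaction", 2012)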

3F: Technical data infrastructure frameworks (Wed, 2016-06-01)
Chair: Bhojaraju Gunjal

  • MMRepo - Storing qualitative and quantitative data into one big data repository
    Ingo Barkow (HTW Chur)

    [abstract]

    In recent years the storage of qualitative data has been a challenge for data archives using repositories based on relational databases, as large files cannot be represented well in these structures. In most cases two or more structures have to be in place, e.g. a file server with versioning for large files and a relational database for the tabular information, which means handling multiple systems at the same time. With the arrival of Hadoop and other big data technologies there is now the possibility to store qualitative and quantitative data as mixed-mode data in the same structures. This paper will discuss our findings in developing an early prototype version of MMRepo at HTW Chur. MMRepo is planned as a combination of the Invenio portal solution from CERN with a Hadoop 2.0 cluster, using the DDI 3.3 beta metadata schema for data documentation.
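
    A minimal sketch of the mixed-mode idea, assuming a Hadoop cluster reachable over WebHDFS and using the third-party Python hdfs client: large qualitative files, the tabular quantitative export and the DDI metadata all land in one repository namespace. The host name, paths and layout are illustrative, not MMRepo's actual design.

    from hdfs import InsecureClient  # third-party WebHDFS client (pip install hdfs)

    client = InsecureClient("http://namenode.example.org:50070", user="archive")

    # Qualitative object: the raw interview recording, versioned by path
    client.upload("/mmrepo/qual/interview_001/v1/interview_001.wav", "interview_001.wav")

    # Quantitative object: the tabular survey export stored in the same cluster
    client.upload("/mmrepo/quant/survey_2016/v1/survey_2016.csv", "survey_2016.csv")

    # DDI 3.3 (beta) metadata describing both objects sits alongside the data
    client.upload("/mmrepo/metadata/study_001/ddi33.xml", "ddi33.xml")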

  • The CESSDA Technical Framework - what is it and why is it needed?
    John Shepherdson (UK Data Archive)

    [abstract]

    A delivery capability is required to provide the compute power, storage and bandwidth needed to host the various products and services that will be developed as components of the forthcoming CESSDA Research Infrastructure (CRI), which will make high-quality European research data more readily available to researchers. Alongside this, the provision of a development and test environment with a common, shared toolchain will reap many benefits. The ambition of the Technical Framework is to promote good software development practice across the CESSDA member (aka "Service Provider") community, in respect of the delivery of software-based products and services for the CRI. The publication of architectural guidelines and basic standards for source code quality will ensure Service Providers know what is expected of them, whilst the shared development infrastructure will help them achieve the required standards without a lot of upfront cost and effort. That is to say, the goal is to lower the entry barriers for Service Providers. In summary, modern data curation techniques are rooted in sophisticated IT capabilities, and the CRI needs to have such capabilities at its core in order to better serve its community. The CESSDA Technical Framework is a key enabler for this.

  • Archonnex at ICPSR - Data Science Management for All
    Thomas Murphy (ICPSR - University of Michigan)
    Harsha Ummerpillai (ICPSR - University of Michigan)

    [abstract]

    Archonnex is a Digital Asset Management System (DAMS) architecture defined to transition to a newer technology stack that meets the core and emerging business needs of the organization and the industry. It aims to build a digital technology platform that leverages ICPSR expertise and open source technologies that are proven and well supported by strong open source communities. This component-based design identifies re-usable, self-contained services as components. These components will be integrated and orchestrated using an Enterprise Service Bus and Message Broker to deliver complex business functions. All components start as a Minimum Viable Product (MVP) and are improved in iterative development phases. This presentation will identify the various operational components, and the associated technology counterparts, involved in running a data science repository. It will consider the process of upfront integration with the researcher to allow better-managed data collection, dissemination and management (see the SEAD poster proposal) during research, and will follow the workflow technologically from the ingestion of data into the repository through curation, archiving, publication and re-use of the research data, including citation and bibliography management along the way. The integration of data management plans, and their impact on this workflow, should become apparent with this ground-up architecture designed for the data science industry. Conference participants will leave with an understanding of how the Archonnex architecture at ICPSR is strengthening the data services offered to new researchers as well as data re-use, and how repository brokering may be leveraged.
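
    As an illustration of the component-orchestration pattern the abstract describes, the sketch below has an ingest component publish an event to a message broker so that downstream curation, archiving and publication components can react independently. It uses RabbitMQ via the pika client as a stand-in; the broker, queue names and message schema actually used by Archonnex are not given in the abstract and are assumed here.

    import json
    import pika  # RabbitMQ client, used here as a stand-in broker

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="ingest.completed", durable=True)

    # Hypothetical event emitted when a new deposit has been ingested
    event = {
        "deposit_id": "dep-0001",
        "action": "ingest.completed",
        "files": ["data.csv", "codebook.pdf"],
    }
    channel.basic_publish(
        exchange="",
        routing_key="ingest.completed",
        body=json.dumps(event),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()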

1G: Data services: Setting up and evaluating (Wed, 2016-06-01)
Chair: Chubing Tripepi

  • Maturity Model for Assessing Data Infrastructures - CESSDA as Example.
    Marion Wittenberg (DANS)
    Mike Priddy (DANS)
    Trond Kvamme (NSD)
    Maarten Hoogerwerf (DANS)

    [abstract]

    CESSDA, the consortium of European Social Science Data Archives, aims to provide an infrastructure that enables the research community to conduct high-quality research within the social sciences. Developing such an infrastructure requires all service providers to participate to the best of their ability: some partners have a long history, high ambitions and secure funding, whereas other partners are in the process of setting up their archives, sometimes with limited funding. Rather than setting fixed requirements for each partner or service, CESSDA must both define the desired state AND provide effective guidance for partners on how to improve their services gradually towards the minimal/desired state. Within the SaW project a maturity model will be developed which helps (aspiring) CESSDA members to assess their services and determine the gap(s) between the current and desired state for each individual partner. In this presentation we will show the model and explain how it could be used for assessments.

  • Roper@Cornell: The Complexities of Moving the World's Largest Archive of Public Opinion Data to Its New Home at Cornell University
    William Block (Cornell University)
    Tim Parsons (Roper Center)
    Brett Powell (Roper Center)

    [abstract]

    In November of 2015 the Roper Center for Public Opinion Research moved from its home of 38 years, the University of Connecticut, to new digs at Cornell University. This presentation will discuss the complexities of moving an archive, especially the decision-making processes involved (paper records, administrative information, data producer agreements, etc.). Key decisions have to be driven by the preservation policy and by commitments to the membership and society.

  • Emerging data archives: Providing Data Services with Limited Resources
    Aleksandra Bradic-Martinovic (Institute of Economic Sciences)
    Marijana Glavica (University of Zagreb)
    Vipavc Brvar (Slovenian Social Science Data Archive)

    [abstract]

    The establishment of data services is a long and challenging process. There are two possible approaches to realizing the establishment: top-down and bottom-up. The bottom-up approach is more common and involves the development of services within one institution, often funded by projects with limited resources and duration. During the initial phase a potential service provider (SP) is able to gain the necessary knowledge and experience, but after that the provider has to offer some services to its users. The problem is that a newly established SP is often not yet fully operational, and it has to be decided which services can and which cannot be offered to potential users.
    In this paper we will analyze different pathways for providing data services with limited resources. Our main focus will be on two cases of emerging data archives, one in Croatia and one in Serbia. We will offer a systematic review of data services and argue which of them could and should be provided. We will identify a minimum set of services and the way in which they must be delivered in order to build trust with users and to provide long-term preservation and availability of deposited data.

1I: Teaching data (Wed, 2016-06-01)
Chair: Laine Ruus

  • A Proposed Scaffolding for Data Management Skills from Undergraduate Education through Post Graduate Training and Beyond
    Megan Sapp Nelson (Purdue University)

    [abstract]

    Initial work in identifying data management or data information literacy skills went as far as identifying a list of proposed competencies without further differentiation between those competencies, whether by discipline, complexity, or use case. This presentation proposes an evolution of existing competencies by identifying a scaffolding, built upon those competencies, that moves students progressively from undergraduate training through postgraduate coursework and research to post-doctoral work and even into the early years of data stewardship. The scaffolding ties together existing research on research data management skills and data information literacy with research into the outcomes that are desirable for individuals to demonstrate in data management at each level of education. As a result of this presentation, competencies will be aligned according to application (personal, small group, large group) in such a way that the skills attained at the undergraduate level give students moving on to graduate work greater familiarity with data management, and therefore a greater likelihood of success at the graduate, post-graduate and data steward levels.
