Call for Workshops

Don't forget to propose that extra special workshop for IASSIST 2012. Deadline is Jan 16. You can also propose Pecha Kuchas, posters, and roundtable discussions until Jan 16.

Call for Workshops

Data Science for a Connected World: Unlocking and Harnessing the Power of Information

The 38th International Association for Social Science Information Services and Technology (IASSIST) annual conference will be hosted by NORC at the University of Chicago and will be held at the George Washington University in Washington DC, June 4 - 8, 2012.

The theme of this year's conference is Data Science for a Connected World: Unlocking and Harnessing the Power of Information. This theme reflects the growing desire of research communities, government agencies and other organizations to build connections and benefit from the better use of data through practicing good management, dissemination and preservation techniques. Submissions are encouraged that offer improvements for creating, documenting, submitting, describing, disseminating, and preserving scientific research data.

Workshop details:
The conference committee seeks workshops that highlight this year's theme, Data Science for a Connected World: Unlocking and Harnessing the Power of Information. Below is a sample of possible workshop topics that may be considered:

  • Innovative/disruptive technologies for data management and preservation
  • Infrastructures, tools and resources for data production and research
  • Linked data: opportunities and challenges
  • Metadata standards enhancing the utility of data
  • Challenges and concerns with inter-agency / intra-governmental data sharing
  • Privacy, confidentiality and regulation issues around sensitive data
  • Roles, responsibilities, and relationships in supporting data
  • Facilitating data exchange and sharing across boundaries
  • Data and statistical literacy
  • Data management plans and funding agency requirements
  • Norms and cultures of data in the sciences, social sciences and the humanities
  • Collaboration on research data infrastructure across domains and communities
  • Addressing the digital/statistical divide and the need for trans-national outreach
  • Citation of research data and persistent identifiers
  • The evolving data librarian profession

Successful workshop proposals will blend lecture and active learning techniques.  The conference planning committee will provide the necessary classroom space and computing supplies for all workshops.  For previous examples of IASSIST workshops, please see our 2010 workshops and our 2011 workshops. Workshops can be a half-day or full-day in length.

Procedure: Please submit the proposed title and an abstract of no longer than 200 words to Lynda Kellam (lmkellam@uncg.edu). With your submission please include a preliminary list of requirements including:

  • computer lab or classroom
  • software and hardware requirements
  • any additional expected requirements

Deadline for submission: January 16, 2012
Notification of acceptance: March 2, 2012

Please contact Lynda Kellam, IASSIST Workshop Coordinator, at lmkellam@uncg.edu if you have any questions regarding workshop submissions.

IASSIST is an international organization of professionals working in and with information technology and data services to support research and teaching in the social sciences.  Typical workplaces include data archives/libraries, statistical agencies, research centers, libraries, academic departments, government departments, and non‐profit organizations.  Visit iassistdata.org  for further information.

IASSIST 2012
June 4 - 8, 2012
Washington DC, USA

-IASSIST 2012 Program Chairs: Jake Carlson, Pascal Heus and Oliver Watteler

IQ Special Quadruple Issue: The Book of the Bremen Workshop

Welcome to this very special IASSIST Quarterly issue. We now present volume 34 (3 & 4) of 2010 and volume 35 (1 & 2) of 2011. Normally we have about three papers in a single issue. In this super-mega-special issue we have fourteen papers from fourteen countries: Finland, Ireland, the United Kingdom, Austria, the Czech Republic, Denmark, Germany, Norway, Slovenia, Belarus, Hungary, Lithuania, Poland, and Switzerland. This will be known in IASSIST as "The Book of the Bremen Workshop".

The workshop took place in April 2009 at the University of Bremen. It was hosted by the Archive for Life Course Research at Bremen and funded by the Timescapes Initiative with support from CESSDA. The background and context of the workshop, as well as short introductions to the many papers, are found in the Editorial Introduction by the guest editors Bren Neale and Libby Bishop. The many papers are the result of the efforts of numerous authors who were instrumental in the development and fulfillment of the workshop's many outcomes. The introduction by the guest editors shows impressive lists of short-term activities, agreed goals, and strategies for development. There are future initiatives, and the future looks bright and interesting.

The focus of the Bremen Workshop is on "qualitative (Q) and qualitative longitudinal (QL) research and resources across Europe". I would have called that a qualitative workshop, but you can see from the introduction and the papers that this subject is often referred to as "qualitative and QL data". The "and QL" emphasizes that the longitudinal aspect is the special and important issue. In the beginning of IASSIST, data was equivalent to quantitative data. In the next wave, however, digital archives found that qualitative data were also of great value when made available for secondary research. The "longitudinal" aspect further accentuates that value creation.

This is a growing subject area. During processing, one of the authors wanted to update her paper and asked us to replace the sentence "80 archived qualitative datasets and yearly around 30-40 datasets are ordered for re-use" with "115 archived qualitative datasets and yearly around 50-60 datasets are ordered for re-use". Yes, we do have a somewhat long processing time, but this is still a very fast growth rate. I want to thank Libby Bishop for not being annoyed when I persistently reminded her of the IQ special issues. I'm sure the guest editors contacted the authors with similar persistence. It was worth it.

As in Sherlock Holmes, we might look for what is not there, as when curiosity is raised by the fact that "the dog did not bark". IASSIST has had, and continues to have, a majority of its membership in North America, so it is remarkable that we here present the initiative on "qualitative (Q) and qualitative longitudinal (QL) research" from a European angle. Hopefully the rest of the world will enjoy these papers, and there will probably be more papers from Europe as well as from the other regions covered by the IASSIST membership.

Articles for the IQ are always very welcome. They can be papers from IASSIST conferences or other conferences and workshops, from local presentations, or papers written especially for the IQ. If you don't have anything to offer right now, then please prepare yourself for the next IASSIST conference and start planning for participation in a session there. Chairing a conference session with the purpose of aggregating and integrating papers for a special issue of the IQ is much appreciated, as the information in the form of an IQ issue reaches many more people than the session participants and will be readily available on the IASSIST website at http://www.iassistdata.org.

Authors are very welcome to take a look at the description for layout and sending papers to the IQ:
http://iassistdata.org/iq/instructions-authors
Authors can also contact me via e-mail: kbr@sam.sdu.dk. Should you be interested in compiling a special issue for the IQ as guest editor (or editors), I would also be delighted to hear from you.

Karsten Boye Rasmussen
Editor August 2011

Image Credit: by mitko-denev on flickr

IASSIST 2012 - Call for Workshops


The Call for Papers for IASSIST 2012 is closed, but proposals for workshops are now being accepted. The Call for Workshops is listed below:

Call for Workshops

Data Science for a Connected World:
Unlocking and Harnessing the Power
of Information

The 38th International Association for Social Science Information Services and Technology (IASSIST) annual conference will be hosted by NORC at the University of Chicago and will be held at the George Washington University in Washington DC, June 4 - 8, 2012. 

The theme of this year's conference is Data Science for a Connected World: Unlocking and Harnessing the Power of Information. This theme reflects the growing desire of research communities, government agencies and other organizations to build connections and benefit from the better use of data through practicing good management, dissemination and preservation techniques. Submissions are encouraged that offer improvements for creating, documenting, submitting, describing, disseminating, and preserving scientific research data.

Workshop details:
The conference committee seeks workshops that highlight this year's theme, Data Science for a Connected World: Unlocking and Harnessing the Power of Information. Below is a sample of possible workshop topics that may be considered:

  • Innovative/disruptive technologies for data management and preservation
  • Infrastructures, tools and resources for data production and research
  • Linked data: opportunities and challenges
  • Metadata standards enhancing the utility of data
  • Challenges and concerns with inter-agency / intra-governmental data sharing
  • Privacy, confidentiality and regulation issues around sensitive data
  • Roles, responsibilities, and relationships in supporting data
  • Facilitating data exchange and sharing across boundaries
  • Data and statistical literacy
  • Data management plans and funding agency requirements
  • Norms and cultures of data in the sciences, social sciences and the humanities
  • Collaboration on research data infrastructure across domains and communities
  • Addressing the digital/statistical divide and the need for trans-national outreach
  • Citation of research data and persistent identifiers
  • The evolving data librarian profession

Successful workshop proposals will blend lecture and active learning techniques.  The conference planning committee will provide the necessary classroom space and computing supplies for all workshops.  For previous examples of IASSIST workshops, please see our 2010 workshops and our 2011 workshops. Workshops can be a half-day or full-day in length.

Procedure: Please submit the proposed title and an abstract of no longer than 200 words to Lynda Kellam (lmkellam@uncg.edu). With your submission please include a preliminary list of requirements including:

  • computer lab or classroom
  • software and hardware requirements
  • any additional expected requirements

Deadline for submission: January 16, 2012
Notification of acceptance: March 2, 2012

Please contact Lynda Kellam, IASSIST Workshop Coordinator, at lmkellam@uncg.edu if you have any questions regarding workshop submissions.

IASSIST is an international organization of professionals working in and with information technology and data services to support research and teaching in the social sciences.  Typical workplaces include data archives/libraries, statistical agencies, research centers, libraries, academic departments, government departments, and non‐profit organizations.  Visit iassistdata.org  for further information. 

IASSIST 2012
June 4 - 8, 2012
Washington DC, USA

-IASSIST 2012 Program Chairs: Jake Carlson, Pascal Heus and Oliver Watteler

Council of Professional Associations on Federal Statistics (COPAFS) meeting notes

I was lucky enough to sit in on the most recent COPAFS meeting in place of our regular liaison, Judith Rowe. While the topics were very different from the issues I usually deal with at work, I found the presentations really interesting. Here's an abridged version of my notes.


News:

Ed Spar will be stepping down as Executive Director at the end of 2012.  The board will be launching a search and will be engaging a search firm.

Director's update:

The budgetary situation ranges from grim to worse, and the outlook isn't any better. Every agency will wish it had last year's budget. Census numbers reflect a very bad year coming up. The meeting dates for next year are: March 16, June 1, September 14, and December 7.

Update on National Center for Education Statistics (NCES)- Marilyn Seastrom
NCES is the statistical agency within the Dept. of Education. They have a small staff but lots of contractors, and may be lucky enough to be level-funded next year.

Assessment: it was the busiest year in the history of national assessment. They are ready to release the state mapping report, which compares assessment measures across states by mapping state assessments to the National Assessment of Educational Progress (NAEP). For example, there is only one state (MA) where a 4th grader who is deemed proficient on the state exam is also proficient at the national level. There are many states where students are proficient at the state level but don't even make the "basic" cut for the national assessment. They are also ready to release the Reading and Mathematics report card.


Elementary and Secondary update: They've expanded the NCES geo-mapping application, which works with the ACS to provide data by school district boundaries.


Miscellaneous: there's a new OECD adult literacy study (PIAAC, the first international assessment done on laptops in the home), and the National Household Education Survey (what goes on outside of school) is no longer a random-digit-dial sample due to deterioration in response rates; it is now an address-based (mail) sample.
There's new stuff on the horizon: a middle school study, and a NAEP-TIMSS (Trends in International Mathematics and Science Study) link, which will be an ambitious study using 8th-grade achievement in math and science.

American Demographic History: Campbell Gibson (demographer retired from Census)
Website of demographic history : www.demographicchartbook.com
Developed over a few years with David Kennedy and Herbert Kline (Stanford) - about 130 graphics through 2000, with both state and national charts, which are freely available and can be downloaded.
Source: all decennial censuses - some drawn from compendia of IPUMS files.
He showed a variety of slides - all of which are available on the website and most of which were fascinating. Can you guess the changes in the set of the top five languages spoken in the homes of non-US-born residents?

Rural Statistical Areas: Mike Radcliffe, Geography Division, Census
The presentation described a three-year joint research project with 23 states. The goal was to define Rural Statistical Areas (RSAs) - geographic areas defined using counties, county subdivisions, and census tracts as building blocks - so that ACS 1-year estimates could be tabulated for areas of 65K+ people. These areas would have a rural focus - unlike PUMAs, which use a 100K threshold but are mostly urban areas. They started with the most rural parts and built from there - urban is really the residual.


RSA delineation process: counties with 65K+ people would be standalone RSAs if they have a rural focus. They used the urban influence codes (UIC, from USDA) to measure "ruralness" and grouped counties, with some boundary tweaks made by the State Data Center Steering Committee. He showed maps of UIC ratings, then discussed how to aggregate counties: they created an aggregation net using state boundaries, interstate highways, and rivers as a latticework for thinking about how to group counties. They started with UIC category 12 and aggregated up by county until they hit the 65K+ threshold. It's an imperfect measure; there were some problems with adjacent-county differences, and they sometimes had to sacrifice resolution.
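As a rough illustration, the "aggregate most-rural counties first until the 65K threshold is hit" step above can be sketched in a few lines of Python. Note that the county names, populations, and UIC codes below are invented, and the real procedure also used the highway/river lattice and state review, which this toy sketch omits:

```python
# Toy sketch: greedily merge rural counties (most rural first, by UIC code)
# until each group reaches the 65,000-person ACS publication threshold.
# All data below are invented for illustration.

counties = [
    # (name, population, urban_influence_code) -- higher UIC = more rural
    ("Alpha", 12_000, 12),
    ("Beta", 18_000, 12),
    ("Gamma", 25_000, 11),
    ("Delta", 15_000, 10),
    ("Epsilon", 70_000, 3),   # already 65K+ -> standalone area
]

THRESHOLD = 65_000

def build_rsas(counties, threshold=THRESHOLD):
    # Counties that clear the threshold on their own stand alone.
    standalone = [[c] for c in counties if c[1] >= threshold]
    # Aggregate the rest, most rural (highest UIC) first.
    rest = sorted((c for c in counties if c[1] < threshold),
                  key=lambda c: -c[2])
    groups, current, total = [], [], 0
    for county in rest:
        current.append(county)
        total += county[1]
        if total >= threshold:        # group is now large enough
            groups.append(current)
            current, total = [], 0
    if current:                       # leftover: fold into the last group
        if groups:
            groups[-1].extend(current)
        else:
            groups.append(current)
    return standalone + groups

for rsa in build_rsas(counties):
    print([c[0] for c in rsa], sum(c[1] for c in rsa))
```

This mirrors the "imperfect measure" caveat in the notes: a purely greedy pass can be forced to sacrifice resolution (the leftover fold-in) exactly as described.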


The resulting definitions for RSAs by state were sent to the state and they were able to move things around a bit to help smooth out some of the initial classification imperfections. Some states suggested alternative definitions; for example, Vermont wanted to use their planning regions.


Questions on the table:

  • Should RSAs be contiguous? Census has a preference for yes but states disagree - eg Alabama might have similar demographics between north and south counties that would match better for an RSA than using geography.
  • Can a variety of building blocks be used to form RSAs?  Initial proposal was counties but they may not be the best units to start with.  States found that in some cases sub-county divisions or census tracts worked better.
  • Why not cross state lines? This makes sense for some questions, but State Data Centers need to address rural areas within their own states.
  • Should counties of 65K+ be split into multiple areas?

Next steps:
State Data Centers have asked Census to define these as statistical areas, but Census has said that in some cases (like Los Angeles) you just can't call them rural. What do you call them? The project needs wider review, including public comment through a Federal Register notice.


Research on measuring same sex couples - Nancy Bates - Census
Motivation: the definition of marriage has changed; there are new terms, different state recognition, and no federal recognition of same-sex couples. According to the 2008 ACS, there are about 150,000 self-described same-sex married couples but only around 32,000 same-sex legally married couples.

Possible causes:

  • Classification error:  maybe people think of themselves as married even if they aren't.
  • First response: on the ACS, the husband/wife category is first in the list, but unmarried partner is 13th
  • Errors elsewhere: false positives due to incorrect gender response

Research: some based on focus groups - 18 groups in 8 different areas with different legal recognition of same-sex marriage, mostly gay couples but some unmarried straight couples. Most people interpreted the question on the federal form as indicating "legal status". Some thought it meant "legally married anywhere". Many groups noted there were missing categories for civil unions or domestic partnerships. And there is the "functional equivalence" problem: couples had the equivalent of a marriage but nowhere to put themselves.


Research: some based on cognitive interviews - 40 interviews with both gays and straights across different legal jurisdictions. Participants filled out forms, were debriefed afterwards, and were then shown an alternative form and asked for their preference.
Results: most survey results aligned with "true" legal status. Specifically calling out same-sex or opposite-sex in the marital status question was preferred, but was also flagged as potentially sensitive. Would this delineation increase unit non-response? Also, there was some confusion about the definition of civil union/domestic partnership. Most people found it useful to have a cohabitation question.
Next steps: interagency group review; piggyback on an ACS test for a larger trial, which is mail-only - they need to test in other modes and would love to have a re-interview component.

Research on measuring same sex couples - Martin O'Donnell - showing some data
Showed a comparison of ACS data and Census data - but comparability may not be perfect.
Changes in ACS forms and editing caused a drop in self-reported same-sex spouses from 350K+ to 150K+.

2010 Census results showed a much higher level of same-sex households than the 2010 ACS. There was a huge difference between mail forms and non-mail forms: approximately 3 times as many households reported themselves as same-sex households on mail forms as on non-mail forms for the ACS, where the non-mail forms were nonresponse follow-up (NRFU). On the pre-2008 ACS and 2010 Census NRFU forms, the matrix format didn't yield consistent results. The ACS 2008+ and 2010 Census forms had a person-based column format, which produced much more consistent responses. This is truly non-sampling error at population scale: you only need 4 errors per 1,000 opposite-sex households to generate the 250K+ error in same-sex spouses, because there are 60 million of them.
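The closing arithmetic is easy to verify; using the approximate 60 million figure from the notes, a tiny per-record error rate on the sex item produces a very large absolute count of spurious same-sex couples:

```python
# Back-of-the-envelope check of the non-sampling-error argument:
# a 4-per-1,000 sex-item error rate across ~60 million opposite-sex
# couples yields hundreds of thousands of false "same-sex" records.
opposite_sex_couples = 60_000_000
error_rate = 4 / 1000                      # 4 mismarked records per 1,000

false_same_sex = opposite_sex_couples * error_rate
print(f"{false_same_sex:,.0f}")            # prints 240,000
```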


Problem: the bad matrix form was approved and printed before these results were available. Now the short-form data (wave 1) have been published, including one table about same-sex couples, and they can't stop the processing of the entire 2010 Census to allow for the correction of one table. So how do they fix it?


They tested the quality of the reporting on sex. They used a name index giving the probability that a person's name is associated with a male (John or Thomas has a very high index; Virginia or Elizabeth is very low), with state controls for cultural differences (Jean may be more likely to be male in French areas). Index values of 0-50 were likely female, and values of 950-1000 were likely male. Couples with a female partner whose name fell at the highest index values, or a male partner whose name fell at the lowest index values, were considered to have incorrectly marked the sex item, and they were dropped from the same-sex couples category. Example: 9,000 of 31,000 male-male couples in Texas have names indicating they are probably male-female couples - nearly one third of the same-sex marriage stats in American FactFinder may be incorrect.
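A minimal sketch of that name-index edit: the cutoffs (0-50 likely female, 950-1000 likely male) follow the notes, but the index table here is invented for illustration, and the real procedure's state-level controls are omitted:

```python
# Toy version of the first-name sex-index edit: each first name gets an
# index from 0 (almost always female) to 1000 (almost always male).
# A couple recorded as same-sex whose names strongly indicate opposite
# sexes is flagged as a likely sex-item error. Index values are invented.

NAME_INDEX = {
    "john": 995, "thomas": 990,          # very likely male
    "virginia": 10, "elizabeth": 5,      # very likely female
    "pat": 500,                          # ambiguous
}

FEMALE_MAX, MALE_MIN = 50, 950           # cutoffs from the notes

def likely_sex(name):
    """Infer sex from the name index, or None if unknown/ambiguous."""
    idx = NAME_INDEX.get(name.lower())
    if idx is None:
        return None
    if idx <= FEMALE_MAX:
        return "F"
    if idx >= MALE_MIN:
        return "M"
    return None                          # middle of the range: don't edit

def flag_sex_error(couple):
    """couple = [(first_name, reported_sex), (first_name, reported_sex)];
    True if a name strongly contradicts the reported sex."""
    for name, reported in couple:
        inferred = likely_sex(name)
        if inferred is not None and inferred != reported:
            return True
    return False

print(flag_sex_error([("John", "M"), ("Thomas", "M")]))     # consistent
print(flag_sex_error([("John", "M"), ("Elizabeth", "M")]))  # likely M-F couple
```

Only records at the extremes of the index trigger the edit, which matches the conservative design described in the notes: ambiguous names leave the reported sex untouched.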


Geographic distribution of inconsistent name reporting: a swath from Florida northwest to North Dakota - matching the high rate of NRFU forms.
Summary: they reissued the numbers, which matched the 2010 ACS better once the name-mismatched records were thrown out. The spousal household estimate is most improved. The American FactFinder page shows people where to go to get the preferred estimate. Census PUMS is based on edited data. They aren't recalculating the entire Census, but they are publishing the edited data, and there will be a flag on data that are affected.

Reminder - Submit your paper and panel proposals to IASSIST 2012

Topic:

Just a reminder that the deadline to submit an individual paper or a panel session to IASSIST 2012 is Friday December 9th.  The Submission Form can be found at: http://www.iassist2012.org/index.php/CPMS/submissions2012.html  

Call for Papers

Data Science for a Connected World: Unlocking and Harnessing the Power of Information

The theme of this year's conference is Data Science for a Connected World: Unlocking and Harnessing the Power of Information. This theme reflects the growing desire of research communities, government agencies and other organizations to build connections and benefit from the better use of data through practicing good management, dissemination and preservation techniques.

The theme is intended to stimulate discussions on building connections across all scholarly disciplines, governments, organizations, and individuals who are engaged in working with data.  IASSIST as a professional organization has a long history of bringing together those who provide information technology and data services to support research and teaching in the social sciences.  What can we as data professionals with shared interests and concerns learn from others going forward and what can they learn from us?  How can data professionals of all kinds build the connections that will be needed to address shared concerns and leverage strengths to better manage, share, curate and preserve data?

We welcome submissions on the theme outlined above, and encourage conference participants to propose papers and sessions that would be of interest to a diverse audience. Any paper related to the conference theme will be considered; below is a sample of possible topics.

Topics:

  • Innovative/disruptive technologies for data management and preservation
  • Infrastructures, tools and resources for data production and research
  • Linked data: opportunities and challenges
  • Metadata standards enhancing the utility of data
  • Challenges and concerns with inter-agency / intra-governmental data sharing
  • Privacy, confidentiality and regulation issues around sensitive data
  • Roles, responsibilities, and relationships in supporting data
  • Facilitating data exchange and sharing across boundaries
  • Data and statistical literacy
  • Data management plans and funding agency requirements
  • Norms and cultures of data in the sciences, social sciences and the humanities
  • Collaboration on research data infrastructure across domains and communities
  • Addressing the digital/statistical divide and the need for trans-national outreach

Papers will be selected from a wide range of subjects to ensure a broad balance of topics.

The Program Committee welcomes proposals for:
- Individual presentations (typically 15-20 minutes)
- Complete sessions, which could take a variety of formats (e.g. a set of three to four individual presentations on a theme, a discussion panel, a discussion with the audience, etc.)
- Posters/demonstrations for the poster session
- Pecha Kucha (a presentation of 20 slides shown for 20 seconds each, with a heavy emphasis on visual content) http://www.wired.com/techbiz/media/magazine/15-09/st_pechakucha
- Round table discussions (as these are likely to have limited spaces, an explanation of how the discussion will be shared with the wider group should form part of the proposal).
[Note: A separate call for workshops is forthcoming].

Session formats are not limited to the ideas above and session organizers are welcome to suggest other formats.

Proposals for complete sessions should list the organizer or moderator and possible participants; the session organizer will be responsible for securing both session participants and a chair.

All submissions should include the proposed title and an abstract no longer than 200 words (note: longer abstracts will be returned to be shortened before being considered). Abstracts submitted for complete sessions should provide titles and a brief description of each of the individual presentations. Abstracts for complete session proposals should be no longer than 300 words if information about individual presentations is needed.

Please note that all presenters are required to register and pay the registration fee for the conference; registration for individual days will be available.

  • Deadline for submission of individual presentations and sessions: 9 December 2011.
  • Deadline for submission of posters, Pecha Kucha sessions and round table discussions: 16 January 2012.
  • Notification of acceptance for individual presentations and sessions: 10 February 2012.
  • Notification of acceptance for posters, Pecha Kucha sessions and round table discussions: 2 March 2012.

We ask those we invite to present to confirm their acceptance within two weeks of notification.

Stephen S. Clark Library for Maps, Government Information, and Data Services is open for business!

Three cheers for Jen Green!!! 

When not keeping IASSIST finances in check as the IASSIST Treasurer, Jennifer Green, director of the new Stephen S. Clark Library for Maps, Government Information, and Data Services at the University of Michigan, has been busy getting the library in shape for the recent opening day!

Check out the announcement of the grand opening festivities in the Record Update (a publication of the Office of the Vice President for Communications at the University of Michigan), and don't miss the brand-new website of the Stephen S. Clark Library.

Green says the new library’s unique combination of collections, government information expertise, and data services will provide scholars and researchers with unprecedented opportunities for exploration, discovery, and collaboration.

“Before the Clark, there was a large degree of interaction among these three units,” Green says. “Our new proximity, in a purposefully designed and equipped space, means that we can more effectively collaborate with each other, which in turn really enhances our ability to creatively collaborate with students, faculty, and researchers.”

From the Record Update

IASSIST 2012 - Conference website

The IASSIST 2012 conference website is now live and ready to receive submissions:  http://www.iassist2012.org/index.html

Call for Papers

Data Science for a Connected World: Unlocking and Harnessing the Power of Information

The theme of this year's conference is Data Science for a Connected World: Unlocking and Harnessing the Power of Information. This theme reflects the growing desire of research communities, government agencies and other organizations to build connections and benefit from the better use of data through practicing good management, dissemination and preservation techniques.

The theme is intended to stimulate discussions on building connections across all scholarly disciplines, governments, organizations, and individuals who are engaged in working with data.  IASSIST as a professional organization has a long history of bringing together those who provide information technology and data services to support research and teaching in the social sciences.  What can we as data professionals with shared interests and concerns learn from others going forward and what can they learn from us?  How can data professionals of all kinds build the connections that will be needed to address shared concerns and leverage strengths to better manage, share, curate and preserve data?

We welcome submissions on the theme outlined above, and encourage conference participants to propose papers and sessions that would be of interest to a diverse audience. Any paper related to the conference theme will be considered; below is a sample of possible topics.

Topics:  
  • Innovative/disruptive technologies for data management and preservation
  • Infrastructures, tools and resources for data production and research
  • Linked data: opportunities and challenges
  • Metadata standards enhancing the utility of data
  • Challenges and concerns with inter-agency / intra-governmental data sharing
  • Privacy, confidentiality and regulation issues around sensitive data
  • Roles, responsibilities, and relationships in supporting data
  • Facilitating data exchange and sharing across boundaries
  • Data and statistical literacy
  • Data management plans and funding agency requirements
  • Norms and cultures of data in the sciences, social sciences and the humanities
  • Collaboration on research data infrastructure across domains and communities
  • Addressing the digital/statistical divide and the need for trans-national outreach

Papers will be selected from a wide range of subjects to ensure a broad balance of topics.

The Program Committee welcomes proposals for:
- Individual presentations (typically 15-20 minutes)
- Complete sessions, which could take a variety of formats (e.g. a set of three to four individual presentations on a theme, a discussion panel, a discussion with the audience, etc.)
- Posters/demonstrations for the poster session
- Pecha Kucha (a presentation of 20 slides shown for 20 seconds each, heavy emphasis on visual content) http://www.wired.com/techbiz/media/magazine/15-09/st_pechakucha
- Round table discussions (as these are likely to have limited spaces, an explanation of how the discussion will be shared with the wider group should form part of the proposal).
[Note: A separate call for workshops is forthcoming].

Session formats are not limited to the ideas above and session organizers are welcome to suggest other formats.

Proposals for complete sessions should list the organizer or moderator and possible participants; the session organizer will be responsible for securing both session participants and a chair.

All submissions should include the proposed title and an abstract no longer than 200 words (note: longer abstracts will be returned to be shortened before being considered). Abstracts submitted for complete sessions should provide titles and a brief description of each of the individual presentations. Abstracts for complete session proposals should be no longer than 300 words if information about individual presentations is needed.

Please note that all presenters are required to register and pay the registration fee for the conference; registration for individual days will be available.

  • Deadline for submission of individual presentations and sessions: 9 December 2011.
  • Deadline for submission of posters, Pecha Kucha sessions and round table discussions: 16 January 2012.
  • Notification of acceptance for individual presentations and sessions: 10 February 2012.
  • Notification of acceptance for posters, Pecha Kucha sessions and round table discussions: 2 March 2012.

We ask those invited to present to confirm their acceptance within two weeks of notification.

Open Access to Federally Funded Research

Got something to say about "ensuring long-term stewardship and encouraging broad public access to unclassified digital data that result from federally funded scientific research"?


The White House Office of Science and Technology Policy (OSTP) released two public consultations today: one on open access (OA) for data and one on OA for publications arising from publicly funded research. Responses are due in early January. Please spread the word. Submit your own comments and/or work with colleagues to submit comments on behalf of your institution.

(1) "[T]his Request for Information (RFI) offers the opportunity for interested individuals and organizations to provide recommendations on approaches for ensuring long-term stewardship and encouraging broad public access to unclassified digital data that result from federally funded scientific research....Response Date: January 12, 2012...."
http://goo.gl/L1jn3

(2) "[T]his Request for Information (RFI) offers the opportunity for interested individuals and organizations to provide recommendations on approaches for ensuring long-term stewardship and broad public access to the peer-reviewed scholarly publications that result from federally funded scientific research....Response Date: January 2, 2012...."
http://goo.gl/vTP18

IASSIST Latin Engagement Action Group

The Latin Engagement Action Group has developed a number of outreach activities aimed at supporting data professionals at Spanish- and Portuguese-speaking educational institutions, namely:

1. Research Data Management Webinars (complete with IASSIST contribution) for Spanish/Portuguese data specialists (http://www.recolecta.net/buscador/webminars.jsp)

Stuart Macdonald and Luis Martínez-Uribe, in collaboration with Alicia López Medina (UNED, Spain), the Spanish Agency of Science and Technology (FECYT) and the network of Spanish repositories RECOLECTA, have organised a programme of webinars in three strands, starting in October, to discuss RDM issues:

Strand 1 - Research Data Management Strategy (presentations from FECYT, RedIRIS, and Simon Hodson, JISC Managing Research Data (MRD) Programme Manager)

Strand 2 - RDM Tools and Models (presentations from Sarah Jones (DCC) on DAF/DMP Online, and Stuart Macdonald (EDINA) on IASSIST Latin Engagement, RDM at Edinburgh, and Research Data MANTRA)

Strand 3 - Research Data Management Experiences (presentations from Kate McNeil-Harmen (MIT), Luis Martínez-Uribe (Instituto Juan March), and colleagues from the University of Porto)

Several IASSIST members have been invited, and the group's work will be presented in order to continue promoting the organization to colleagues in Spain, Portugal and Latin America.

2. Preparation of a Latin American session at the next IASSIST annual conference, in collaboration with the Outreach Committee

Organise another Latin American session at IASSIST 2012 (complete with NGO representation), led by Bobray Bordelan (Princeton). Liaise with the Outreach Committee to fund and invite data professional colleagues from Latin America to participate in this session.

3. Spanish and Portuguese translation of the main pages of the IASSIST site - May 2012

Working with the IASSIST web editor Robin Rice to scope and implement (voluntary) translation of the main landing pages on the IASSIST website (e.g. Home page, About page, Becoming a Member of IASSIST, FAQ, IASSIST at a Glance, About IQ, Instructions for Authors).

Image: Toledo by Pat Barker on Flickr, CC-BY-NC licence

86 helpful tools for the data professional PLUS 45 bonus tools

I have been working on this (mostly) annotated collection of tools and articles that I believe would be of help to both the data dabbler and professional. If you are a data scientist, data analyst or data dummy, chances are there is something in here for you. I included a list of tools, such as programming languages and web-based utilities, data mining resources, some prominent organizations in the field, repositories where you can play with data, events you may want to attend and important articles you should take a look at.

The second segment (BONUS!) of the list includes a number of art and design resources that infographic designers might like, including color palette generators and image searches. There are also some invisible web resources (if you're looking for something data-related on Google and not finding it) and metadata resources so you can appropriately curate your data. This is in no way a complete list, so please contact me here with any suggestions!

Data Tools

  1. Google Refine - A power tool for working with messy data (formerly Freebase Gridworks)
  2. The Overview Project - Overview is an open-source tool to help journalists find stories in large amounts of data, by cleaning, visualizing and interactively exploring large document and data sets. Whether from government transparency initiatives, leaks or Freedom of Information requests, journalists are drowning in more documents than they can ever hope to read.
  3. Refine, reuse and request data | ScraperWiki - ScraperWiki is an online tool to make acquiring useful data simpler and more collaborative. Anyone can write a screen scraper using the online editor. In the free version, the code and data are shared with the world. Because it's a wiki, other programmers can contribute to and improve the code.
  4. Data Curation Profiles - This website is an environment where academic librarians of all kinds, special librarians at research facilities, archivists involved in the preservation of digital data, and those who support digital repositories can find help, support and camaraderie in exploring avenues to learn more about working with research data and the use of the Data Curation Profiles Tool.
  5. Google Chart Tools - Google Chart Tools provide a perfect way to visualize data on your website. From simple line charts to complex hierarchical tree maps, the chart gallery provides a large number of well-designed chart types. Populating your data is easy using the provided client- and server-side tools.
  6. 22 free tools for data visualization and analysis
  7. The R Journal - The R Journal is the refereed journal of the R project for statistical computing. It features short to medium length articles covering topics that might be of interest to users or developers of R.
  8. CS 229: Machine Learning - A widely referenced course by Professor Andrew Ng, CS 229: Machine Learning provides a broad introduction to machine learning and statistical pattern recognition. Topics include supervised learning, unsupervised learning, learning theory, reinforcement learning and adaptive control. Recent applications of machine learning, such as to robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing are also discussed.
  9. Google Research Publication: BigTable - Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.
  10. Scientific Data Management - An introduction.
  11. Natural Language Toolkit - Open source Python modules, linguistic data and documentation for research and development in natural language processing and text analytics, with distributions for Windows, Mac OSX and Linux.
  12. Beautiful Soup - Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping.
  13. Mondrian: Pentaho Analysis - An open source OLAP analysis server written in Java, enabling interactive analysis of very large datasets stored in SQL databases without writing SQL.
  14. The Comprehensive R Archive Network - R is `GNU S', a freely available language and environment for statistical computing and graphics which provides a wide variety of statistical and graphical techniques: linear and nonlinear modelling, statistical tests, time series analysis, classification, clustering, etc. Please consult the R project homepage for further information. CRAN is a network of ftp and web servers around the world that store identical, up-to-date, versions of code and documentation for R. Please use the CRAN mirror nearest to you to minimize network load.
  15. DataStax - Software, support, and training for Apache Cassandra.
  16. Machine Learning Demos
  17. Visual.ly - Infographics & Visualizations. Create, Share, Explore
  18. Google Fusion Tables - Google Fusion Tables is a modern data management and publishing web application that makes it easy to host, manage, collaborate on, visualize, and publish data tables online.
  19. Tableau Software - Fast Analytics and Rapid-fire Business Intelligence from Tableau Software.
  20. WaveMaker - WaveMaker is a rapid application development environment for building, maintaining and modernizing business-critical Web 2.0 applications.
  21. Visualization: Annotated Time Line - Google Chart Tools - An interactive time series line chart with optional annotations. The chart is rendered within the browser using Flash.
  22. Visualization: Motion Chart - Google Chart Tools - A dynamic chart to explore several indicators over time. The chart is rendered within the browser using Flash.
  23. PhotoStats - Create gorgeous infographics about your iPhone photos.
  24. Ionz - Ionz will help you craft an infographic about yourself.
  25. Chart Builder - Powerful tools for creating a variety of charts for online display.
  26. Creately - Online diagramming and design.
  27. Pixlr Editor - A powerful online photo editor.
  28. Google Public Data Explorer - The Google Public Data Explorer makes large datasets easy to explore, visualize and communicate. As the charts and maps animate over time, the changes in the world become easier to understand. You don't have to be a data expert to navigate between different views, make your own comparisons, and share your findings.
  29. Fathom - Fathom Information Design helps clients understand and express complex data through information graphics, interactive tools, and software for installations, the web, and mobile devices. Led by Ben Fry. Enough said!
  30. healthymagination | GE Data Visualization - Visualizations that advance the conversation about issues that shape our lives; visitors are encouraged to download, post and share them.
  31. ggplot2 - ggplot2 is a plotting system for R, based on the grammar of graphics, which tries to take the good parts of base and lattice graphics and none of the bad parts. It takes care of many of the fiddly details that make plotting a hassle (like drawing legends) as well as providing a powerful model of graphics that makes it easy to produce complex multi-layered graphics.
  32. Protovis - Protovis composes custom views of data with simple marks such as bars and dots. Unlike low-level graphics libraries that quickly become tedious for visualization, Protovis defines marks through dynamic properties that encode data, allowing inheritance, scales and layouts to simplify construction. Protovis is free and open-source, provided under the BSD License. It uses JavaScript and SVG for web-native visualizations; no plugin required (though you will need a modern web browser)! Although programming experience is helpful, Protovis is mostly declarative and designed to be learned by example.
  33. d3.js - D3.js is a small, free JavaScript library for manipulating documents based on data.
  34. MATLAB - The Language of Technical Computing - MATLAB® is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran.
  35. OpenGL - The Industry Standard for High Performance Graphics - OpenGL.org is a vendor-independent and organization-independent web site that acts as a one-stop hub for developers and consumers for all OpenGL news and development resources. It has a very large and continually expanding developer and end-user community that is very active and vested in the continued growth of OpenGL.
  36. Google Correlate - Google Correlate finds search patterns which correspond with real-world trends.
  37. Revolution Analytics - Commercial Software & Support for the R Statistics Language - Revolution Analytics delivers advanced analytics software at half the cost of existing solutions. By building on open source R, the world's most powerful statistics software, with innovations in big data analysis, integration and user experience, Revolution Analytics meets the demands and requirements of modern data-driven businesses.
  38. 22 Useful Online Chart & Graph Generators
  39. The Best Tools for Visualization - Visualization is a technique to graphically represent sets of data. When data is large or abstract, visualization can help make the data easier to read or understand. There are visualization tools for search, music, networks, online communities, and almost anything else you can think of. Whether you want a desktop application or a web-based tool, many specific tools are available on the web that let you visualize all kinds of data.
  40. Visual Understanding Environment - The Visual Understanding Environment (VUE) is an open source project based at Tufts University. The VUE project is focused on creating flexible tools for managing and integrating digital resources in support of teaching, learning and research. VUE provides a flexible visual environment for structuring, presenting, and sharing digital information.
  41. Bime - Cloud Business Intelligence | Analytics & Dashboards - Bime is a revolutionary approach to data analysis and dashboarding. It allows you to analyze your data through interactive data visualizations and create stunning dashboards from the Web.
  42. Data Science Toolkit - A collection of data tools and open APIs curated by Pete Warden. You can use it to extract text from a document, learn the political leanings of a particular neighborhood, find all the names of people mentioned in a text and more.
  43. BuzzData - BuzzData lets you share your data in a smarter, easier way. Instead of juggling versions and overwriting files, use BuzzData and enjoy a social network designed for data.
  44. SAP - SAP Crystal Solutions: Simple, Affordable, and Open BI Tools for Everyday Use
  45. Project Voldemort
  46. ggplot - had.co.nz
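Several of the tools above (ScraperWiki, Beautiful Soup) are built around screen-scraping. As a minimal sketch of the idea, here is a link extractor using only Python's standard-library html.parser; the markup is invented for illustration, and a real project would likely reach for Beautiful Soup's far more forgiving API instead.

```python
# A minimal screen-scraping sketch using only Python's standard library.
# The page markup below is invented; Beautiful Soup (item 12) handles
# messy real-world HTML far more gracefully than this.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/data.csv">data</a> <a href="/docs">docs</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # → ['/data.csv', '/docs']
```

The same pattern (subclass, override the handler callbacks, feed the document) scales to tables and other structures, though hand-rolled parsers get brittle quickly, which is exactly the gap the dedicated tools fill.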

Data Mining

  1. Weka - Weka is a collection of machine learning algorithms for data mining tasks. The algorithms can either be applied directly to a dataset or called from your own Java code. Weka contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. It is also well-suited for developing new machine learning schemes. Weka is open source software issued under the GNU General Public License.
  2. PSPP - PSPP is a program for statistical analysis of sampled data. It is a free replacement for the proprietary program SPSS, and appears very similar to it with a few exceptions. The most important of these is that there are no “time bombs”: your copy of PSPP will not “expire” or deliberately stop working in the future. Neither are there any artificial limits on the number of cases or variables which you can use. There are no additional packages to purchase in order to get “advanced” functions; all functionality that PSPP currently supports is in the core package. PSPP can perform descriptive statistics, T-tests, linear regression and non-parametric tests. Its backend is designed to perform its analyses as fast as possible, regardless of the size of the input data. You can use PSPP with its graphical interface or the more traditional syntax commands.
  3. Rapid-I - Rapid-I provides software, solutions, and services in the fields of predictive analytics, data mining, and text mining. The company concentrates on automatic intelligent analyses on a large-scale base, i.e. for large amounts of structured data like database systems and unstructured data like texts. The open-source data mining specialist Rapid-I enables other companies to use leading-edge technologies for data mining and business intelligence. The discovery and leverage of unused business intelligence from existing data enables better informed decisions and allows for process optimization. The main product of Rapid-I, the data analysis solution RapidMiner, is the world-leading open-source system for knowledge discovery and data mining. It is available as a stand-alone application for data analysis and as a data mining engine which can be integrated into other products. By now, thousands of applications of RapidMiner in more than 30 countries give their users a competitive edge. Among the users are well-known companies such as Ford, Honda, Nokia, Miele, Philips, IBM, HP, Cisco, Merrill Lynch, BNP Paribas, Bank of America, mobilkom austria, Akzo Nobel, Aureus Pharma, PharmaDM, Cyprotex, Celera, Revere, LexisNexis, Mitre and many medium-sized businesses benefitting from the open-source business model of Rapid-I.
  4. R Project - R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control. R is available as Free Software under the terms of the Free Software Foundation's GNU General Public License in source code form. It compiles and runs on a wide variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows and MacOS.

Organizations

  1. Data.gov
  2. SDM group at LBNL
  3. Open Archives Initiative
  4. Code for America | A New Kind of Public Service
  5. The # DataViz Daily
  6. Institute for Advanced Analytics | North Carolina State University | Professor Michael Rappa · MSA Curriculum
  7. BuzzData | Blog, 25 great links for data-lovin' journalists
  8. MetaOptimize - Home - Machine learning, natural language processing, predictive analytics, business intelligence, artificial intelligence, text analysis, information retrieval, search, data mining, statistical modeling, and data visualization
  9. had.co.nz
  10. Measuring Measures - Measuring Measures

Repositories

  1. Repositories | DataCite
  2. Data | The World Bank
  3. Infochimps Data Marketplace + Commons: Download, Sell or Share Databases, statistics, datasets for free | Infochimps
  4. Factual Home - Factual
  5. Flowing Media: Your Data Has Something To Say
  6. Chartsbin
  7. Public Data Explorer
  8. StatPlanet
  9. ManyEyes
  10. 25+ more ways to bring data into R

Events

  1. Welcome | Visweek 2011
  2. O'Reilly Strata: O'Reilly Conferences
  3. IBM Information On Demand 2011 and Business Analytics Forum
  4. Data Scientist Summit 2011
  5. IBM Virtual Performance 2011
  6. Wolfram Data Summit 2011—Conference on Data Repositories and Ideas
  7. Big Data Analytics: Mobile, Social and Web

Articles

  1. Data Science: a literature review | (R news & tutorials)
  2. What is "Data Science" Anyway?
  3. Hal Varian on how the Web challenges managers - McKinsey Quarterly - Strategy - Innovation
  4. The Three Sexy Skills of Data Geeks « Dataspora
  5. Rise of the Data Scientist
  6. dataists » A Taxonomy of Data Science
  7. The Data Science Venn Diagram « Zero Intelligence Agents
  8. Revolutions: Growth in data-related jobs
  9. Building data startups: Fast, big, and focused - O'Reilly Radar

BONUS! Art Design

  1. Periodic Table of Typefaces
  2. Color Scheme Designer 3
  3. Color Palette Generator Generate A Color Palette For Any Image
  4. COLOURlovers
  5. Colorbrewer: Color Advice for Maps

Image Searches

  1. American Memory from the Library of Congress - The home page for the American Memory Historical Collections from the Library of Congress. American Memory provides free access to historical images, maps, sound recordings, and motion pictures that document the American experience. American Memory offers primary source materials that chronicle historical events, people, places, and ideas that continue to shape America.
  2. Galaxy of Images | Smithsonian Institution Libraries
  3. Flickr Search
  4. 50 Websites For Free Vector Images Download
  5. Design weblog for designers, bloggers and tech users. Covering useful tools, tutorials, tips and inspirational photos.
  6. Google Images - The most comprehensive image search on the web.
  7. Trade Literature - a set on Flickr
  8. Compfight / A Flickr Search Tool
  9. morgueFile - Free photos for creatives by creatives.
  10. stock.xchng - the leading free stock photography site
  11. The Ultimate Collection Of Free Vector Packs - Smashing Magazine
  12. How to Create Animated GIFs Using Photoshop CS3 - wikiHow
  13. IAN Symbol Libraries (Free Vector Symbols and Icons) - Integration and Application Network
  14. Usability.gov
  15. best icons
  16. Iconspedia
  17. IconFinder
  18. IconSeeker

Invisible Web

  1. 10 Search Engines to Explore the Invisible Web - Like the header says...
  2. Scirus - For scientific information. The most comprehensive scientific research tool on the web. With over 410 million scientific items indexed at last count, it allows researchers to search for not only journal content but also scientists' homepages, courseware, pre-print server material, patents and institutional repository and website information.
  3. TechXtra: Engineering, Mathematics, and Computing - TechXtra is a free service which can help you find articles, books, the best websites, the latest industry news, job announcements, technical reports, technical data, full text eprints, the latest research, theses & dissertations, teaching and learning resources and more, in engineering, mathematics and computing.
  4. Welcome to INFOMINE: Scholarly Internet Resource Collections - INFOMINE is a virtual library of Internet resources relevant to faculty, students, and research staff at the university level. It contains useful Internet resources such as databases, electronic journals, electronic books, bulletin boards, mailing lists, online library card catalogs, articles, directories of researchers, and many other types of information.
  5. The WWW Virtual Library - The WWW Virtual Library (VL) is the oldest catalogue of the Web, started by Tim Berners-Lee, the creator of HTML and of the Web itself, in 1991 at CERN in Geneva. Unlike commercial catalogues, it is run by a loose confederation of volunteers, who compile pages of key links for particular areas in which they are expert; even though it isn't the biggest index of the Web, the VL pages are widely recognised as being amongst the highest-quality guides to particular sections of the Web.
  6. Intute - Intute is a free online service that helps you to find web resources for your studies and research. With millions of resources available on the Internet, it can be difficult to find useful material. We have reviewed and evaluated thousands of resources to help you choose key websites in your subject. The Virtual Training Suite can also help you develop your Internet research skills through tutorials written by lecturers and librarians from universities across the UK.
  7. CompletePlanet - Discover over 70,000+ databases and specialty search engines. There are hundreds of thousands of databases that contain Deep Web content. CompletePlanet is the front door to these Deep Web databases on the Web and to the thousands of regular search engines — it is the first step in trying to find highly topical information. By tracing through CompletePlanet's subject structure or searching Deep Web sites, you can go to various topic areas, such as energy or agriculture or food or medicine, and find rich content sites not accessible using conventional search engines. BrightPlanet initially developed the CompletePlanet compilation to identify and tap into many hundreds and thousands of search sources simultaneously to automatically deliver high-quality content to its corporate and enterprise customers. It then decided to make CompletePlanet available as a public service to the Internet search public.
  8. Infoplease: Encyclopedia, Almanac, Atlas, Biographies, Dictionary, Thesaurus - Information Please has been providing authoritative answers to all kinds of factual questions since 1938—first as a popular radio quiz show, then starting in 1947 as an annual almanac, and since 1998 on the Internet at www.infoplease.com. Many things have changed since 1938, but not our dedication to providing reliable information, in a way that engages and entertains.
  9. DeepPeep: discover the hidden web - DeepPeep is a search engine specialized in Web forms. The current beta version tracks 45,000 forms across 7 domains. DeepPeep helps you discover the entry points to content in Deep Web (aka Hidden Web) sites, including online databases and Web services. Advanced search allows you to perform more specific queries. Besides specifying keywords, you can also search for specific form element labels, i.e., the description of the form attributes.
  10. IncyWincy: The Invisible Web Search Engine - IncyWincy is a showcase of Net Research Server (NRS) 5.0, a software product that provides a complete search portal solution, developed by LoopIP LLC. LoopIP licenses the NRS engine and provides consulting expertise in building search solutions.

Metadata

  1. Metadata Object Description Schema: MODS (Library of Congress) and Outline of Elements and Attributes in MODS Version 3.4 - This document contains a listing of elements and their related attributes in MODS Version 3.4 with values or value sources where applicable. It is an "outline" of the schema. Items highlighted in red indicate changes made to MODS in Version 3.4. All top-level elements and all attributes are optional, but you must have at least one element. Subelements are optional, although in some cases you may not have empty containers. Attributes are not in a mandated sequence and not repeatable (per XML rules). "Ordered" below means the subelements must occur in the order given. Elements are repeatable unless otherwise noted. "Authority" attributes are either followed by codes for authority lists (e.g., iso639-2b) or "see" references that link to documents that contain codes for identifying authority lists. For additional information about any MODS elements (version 3.4 elements will be added soon), please see the MODS User Guidelines.
  2. wiki.dbpedia.org : About - DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link other data sets on the Web to Wikipedia data. We hope this will make it easier for the amazing amount of information in Wikipedia to be used in new and interesting ways, and that it might inspire new mechanisms for navigating, linking and improving the encyclopaedia itself.
  3. Semantic Web - W3C - In addition to the classic “Web of documents” W3C is helping to build a technology stack to support a “Web of data,” the sort of data you find in databases. The ultimate goal of the Web of data is to enable computers to do more useful work and to develop systems that can support trusted interactions over the network. The term “Semantic Web” refers to W3C’s vision of the Web of linked data. Semantic Web technologies enable people to create data stores on the Web, build vocabularies, and write rules for handling data. Linked data are empowered by technologies such as RDF, SPARQL, OWL, and SKOS.
  4. RDA: Resource Description & Access | www.rdatoolkit.org - Designed for the digital world and an expanding universe of metadata users, RDA: Resource Description and Access is the new, unified cataloging standard. The online RDA Toolkit subscription is the most effective way to interact with the new standard. More on RDA.
  5. Cataloging Cultural Objects - Cataloging Cultural Objects: A Guide to Describing Cultural Works and Their Images (CCO) is a manual for describing, documenting, and cataloging cultural works and their visual surrogates. The primary focus of CCO is art and architecture, including but not limited to paintings, sculpture, prints, manuscripts, photographs, built works, installations, and other visual media. CCO also covers many other types of cultural works, including archaeological sites, artifacts, and functional objects from the realm of material culture.
  6. Library of Congress Authorities (Search for Name, Subject, Title and Name/Title) - Using Library of Congress Authorities, you can browse and view authority headings for Subject, Name, Title and Name/Title combinations; and download authority records in MARC format for use in a local library system. This service is offered free of charge.
  7. Search Tools and Databases (Getty Research Institute) - Use these search tools to access library materials, specialized databases, and other digital resources.
  8. Art & Architecture Thesaurus (Getty Research Institute) - Learn about the purpose, scope and structure of the AAT. The AAT is an evolving vocabulary, growing and changing thanks to contributions from Getty projects and other institutions. Find out more about the AAT's contributors.
  9. Getty Thesaurus of Geographic Names (Getty Research Institute) - Learn about the purpose, scope and structure of the TGN. The TGN is an evolving vocabulary, growing and changing thanks to contributions from Getty projects and other institutions. Find out more about the TGN's contributors.
  10. DCMI Metadata Terms
  11. The Digital Object Identifier System
  12. The Federal Geographic Data Committee — Federal Geographic Data Committee
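To make the metadata resources above a little more concrete, here is a minimal sketch of building a record using the Dublin Core elements namespace (see DCMI Metadata Terms, item 10) with Python's standard-library ElementTree. The element choices and values are invented for illustration and do not constitute a complete application profile.

```python
# Build a tiny Dublin Core-flavoured metadata record with the standard
# library. The record contents below are hypothetical examples.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)  # serialize with the conventional dc: prefix

record = ET.Element("record")
for term, value in [
    ("title", "Survey of Data Sharing Practices"),
    ("creator", "Example, A."),
    ("date", "2011-12-01"),
    ("type", "Dataset"),
]:
    el = ET.SubElement(record, f"{{{DC}}}{term}")
    el.text = value

xml = ET.tostring(record, encoding="unicode")
print(xml)
```

Richer schemas such as MODS work the same way mechanically (namespaced elements and attributes), but carry far more structure; the standards linked above define which elements are required, repeatable, and ordered.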