https://iassistquarterly.com/index.php/iassist/issue/feed IASSIST Quarterly 2018-11-14T00:22:35-07:00 Karsten Boye Rasmussen kbr@sam.sdu.dk Open Journal Systems <p class="p1">The <strong>IASSIST Quarterly</strong> represents an international cooperative effort on the part of individuals managing, operating, or using machine-readable data archives, data libraries, and data services. The&nbsp;<strong>IASSIST Quarterly </strong>reports on activities related to the production, acquisition, preservation, processing, distribution, and use of machine-readable data carried out by its members and others in the international social science community.&nbsp;</p> https://iassistquarterly.com/index.php/iassist/article/view/933 Metadata is key - the most important data after data 2018-11-14T00:22:34-07:00 Karsten Boye Rasmussen kbr@sam.sdu.dk 2018-08-14T07:37:40-06:00 https://iassistquarterly.com/index.php/iassist/article/view/923 Flexible DDI storage 2018-11-14T00:22:35-07:00 Oliver Hopt oliver.hopt@gesis.org Claus-Peter Klas claus-peter.klas@gesis.org Alexander Mühlbauer alexander.muehlbauer@gesis.org <p>The current usage of DDI is heterogeneous. It varies over different versions of DDI, different grouping, and unequal interpretation of elements. Therefore, providers of services based on DDI implement complex database models for each application they develop, resulting in high costs and in application-specific, non-reusable models.</p> <p>This paper shows how to model the binding of DDI to applications so that it works independently of most version changes and interpretative differences in a standard like DDI, without continuous reimplementation. Based on our DDI-FlatDB approach, first shown at EDDI 2015 &amp; 2016, we present a complete implementation along the use case of a web-based questionnaire editor, including a sustainable solution for version management and efficient handling of large DDI structures. 
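The version-independent binding described above can be pictured as a flat store whose records never change with the DDI version, while a per-version configuration maps logical field names to DDI element names. The following is a hypothetical sketch only; the record layout, URNs, and function names are ours, not part of the DDI-FlatDB implementation.

```python
# Hypothetical sketch of a "flat" DDI store: each element is a record
# keyed by its URN, and applications bind to logical field names via a
# per-version mapping, so the storage model survives version changes.

# Flat storage: URN -> raw element payload (version-agnostic).
store = {
    "urn:ddi:int.example:q1:1": {"tag": "QuestionText", "value": "Your age?"},
    "urn:ddi:int.example:q2:1": {"tag": "QuestionItemName", "value": "AGE"},
}

# Per-version binding: logical field -> DDI element name in that version.
bindings = {
    "3.1": {"text": "QuestionText", "name": "QuestionItemName"},
    "3.2": {"text": "QuestionText", "name": "QuestionItemName"},
}

def resolve(field: str, version: str) -> list[str]:
    """Return the stored values for a logical field under a DDI version."""
    tag = bindings[version][field]
    return [rec["value"] for rec in store.values() if rec["tag"] == tag]

print(resolve("text", "3.2"))
```

When a new DDI version changes an element name or its placement, only the `bindings` table is extended; the stored records and the application code reading logical fields stay untouched.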
The user interface is adaptable to future usage scenarios. The application supports DDI-Lifecycle from the first question draft onwards, hands over structured (meta)data to survey institutes and data archives, and has supported the collaborative questionnaire development for the Pre-election Cross-Section of the German Longitudinal Election Study since early 2017.</p> 2018-07-18T00:00:00-06:00 https://iassistquarterly.com/index.php/iassist/article/view/924 Elaborating a Crosswalk Between Data Documentation Initiative (DDI) and Encoded Archival Description (EAD) for an Emerging Data Archive Service Provider 2018-11-14T00:22:35-07:00 Benjamin Peuch benjamin.peuch@arch.be <p>Belgium has recently decided to join the Consortium of European Social Science Data Archives (CESSDA). The Social Sciences Data Archive (SODA) project aims to tackle the challenges entailed by setting up a new research infrastructure in the form of a data archive. The SODA project involves an archival institution, the State Archives of Belgium, which, like most other large archival repositories around the world, work with Encoded Archival Description (EAD) for managing their metadata. The State Archives maintain a large pipeline of programs and procedures that processes EAD documents and channels their content through different applications, such as the institution's online catalog. Because the future Belgian data archive may become part of the State Archives, and because DDI is the most widespread metadata standard in the social sciences as well as a requirement for joining CESSDA, the State Archives have developed a DDI-to-EAD crosswalk in order to reuse their infrastructure for the needs of the future Belgian service provider. 
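In essence, such a crosswalk is a mapping table from DDI elements to EAD elements, applied to incoming DDI documents so the existing EAD pipeline can process them. The following minimal sketch is illustrative only: the two element pairings are simplified examples chosen by us, not the State Archives' actual mapping.

```python
# Hypothetical illustration of a DDI-to-EAD crosswalk: a mapping table
# from DDI Codebook paths to EAD elements, applied with the standard
# library. The pairings below are simplified examples, not the State
# Archives' real crosswalk.
import xml.etree.ElementTree as ET

# DDI path -> (EAD parent: <did> or the <archdesc> itself, EAD element).
CROSSWALK = {
    "stdyDscr/citation/titlStmt/titl": ("did", "unittitle"),
    "stdyDscr/stdyInfo/abstract": (None, "scopecontent"),
}

ddi = ET.fromstring(
    "<codeBook><stdyDscr><citation><titlStmt>"
    "<titl>Election Survey 2017</titl>"
    "</titlStmt></citation>"
    "<stdyInfo><abstract>Pre-election study.</abstract></stdyInfo>"
    "</stdyDscr></codeBook>"
)

# Build the EAD fragment by walking the mapping table.
archdesc = ET.Element("archdesc")
did = ET.SubElement(archdesc, "did")
for ddi_path, (parent, ead_tag) in CROSSWALK.items():
    src = ddi.find(ddi_path)
    if src is not None:
        target = did if parent == "did" else archdesc
        ET.SubElement(target, ead_tag).text = src.text

print(ET.tostring(archdesc, encoding="unicode"))
```

A real crosswalk would of course cover many more elements, attributes, and namespaces, and handle the structural mismatches between the two standards rather than one-to-one element copies.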
Technical illustrations highlight the conceptual differences between DDI and EAD and show how these can be reconciled or worked around for the purposes of a social science data archive.</p> 2018-07-18T13:16:40-06:00 https://iassistquarterly.com/index.php/iassist/article/view/925 Research Data Management Tools and Workflows: Experimental Work at the University of Porto 2018-11-14T00:22:34-07:00 Cristina Ribeiro mcr@fe.up.pt João Rocha da Silva joao.rocha.silva@inesctec.pt João Aguiar Castro joao.a.castro@inesctec.pt Ricardo Carvalho Amorim ricardo.c.amorim@inesctec.pt João Correia Lopes joao.c.lopes@inesctec.pt Gabriel David gtd@fe.up.pt <p>Research datasets include all kinds of objects, from web pages to sensor data, and originate in every domain. Concerns with data generated in large projects and well-funded research areas center on their exploration and analysis. For data in the long tail, the main issues are still how to make data visible, satisfactorily described, preserved, and searchable.</p> <p>Our work aims to promote data publication in research institutions, considering that researchers are the core stakeholders and need straightforward workflows, and that multi-disciplinary tools can be designed and adapted to specific areas with reasonable effort. For small groups with interesting datasets but little time or funding for data curation, we focus on engaging researchers in the process of preparing data for publication, while providing them with measurable outputs. In larger groups, solutions have to be customized to satisfy the requirements of more specific research contexts.</p> <p>We describe our experience at the University of Porto in two lines of enquiry. For the work with long-tail groups, we propose general-purpose tools for data description and the interface to multi-disciplinary data repositories. 
For areas with larger projects and more specific requirements, namely wind infrastructure, sensor data from concrete structures, and marine data, we define specialized workflows. In both cases, we present a preliminary evaluation of results and an estimate of the effort required to keep the proposed infrastructures running.&nbsp;</p> <p>The tools available to researchers can be decisive for their commitment. We focus on data preparation, namely on dataset organization and metadata creation. For groups in the long tail, we propose Dendro, an open-source research data management platform, and explore automatic metadata creation with LabTablet, an electronic laboratory notebook. For groups demanding a domain-specific approach, our analysis has resulted in the development of models and applications to organize the data and support some of their use cases. Overall, we have adopted ontologies for metadata modeling, with a view to disseminating metadata as Linked Open Data.</p> 2018-07-18T13:34:55-06:00
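Dissemination as Linked Open Data, as mentioned in the last abstract, amounts to serializing each dataset description as RDF triples under shared vocabularies. The sketch below is a hypothetical illustration using Dublin Core terms; the dataset URI, field names, and helper function are ours, not part of the Dendro platform.

```python
# Hypothetical sketch of publishing dataset metadata as Linked Open
# Data: a flat metadata record is serialized as N-Triples using
# Dublin Core terms. URIs and values are illustrative only.
DCT = "http://purl.org/dc/terms/"

def to_ntriples(subject: str, metadata: dict) -> str:
    """Serialize a flat metadata dict as N-Triples (string literals only)."""
    lines = []
    for prop, value in metadata.items():
        lines.append(f'<{subject}> <{DCT}{prop}> "{value}" .')
    return "\n".join(lines)

record = {
    "title": "Sensor readings, concrete structure A",
    "creator": "University of Porto",
}
print(to_ntriples("https://example.org/dataset/42", record))
```

A production pipeline would use an RDF library and richer ontologies (domain vocabularies, typed literals, links to other resources), but the principle is the same: one triple per metadata statement, addressable by URI.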