
Search tips:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (word-order tolerance)
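These operators can be combined in a single query. The sample queries below are illustrative only; the search terms are invented for this sketch and not drawn from the results on this page:

```
climat*                   wildcard: matches climate, climatology, …
"neutron monitor"         exact phrase
corpus + german           both terms must match (AND, the default)
ocean | marine            either term may match (OR)
biodiversity -marine      biodiversity, but not marine
(ocean | marine) + data   parentheses group the OR before the AND applies
linguistcs~1              fuzzy match within edit distance 1
"spoken german"~2         phrase match tolerating a slop of 2 positions
```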
Found 64 result(s)
The Language Archive Cologne (LAC) is a research data repository for linguistics and all humanities disciplines working with audiovisual data. The archive forms a cluster of the Data Center for Humanities in cooperation with the Institute of Linguistics of the University of Cologne. The LAC is an archive for language resources that is freely available via web-based access. In addition, concrete technical and methodological advice is offered throughout the research data cycle - from data collection, preparation and archiving to publication and reuse.
NMDB is a real-time database for high-resolution Neutron Monitor measurements, providing access to data from stations around the world. The goal of NMDB is to make all Neutron Monitor measurements available through an easy-to-use interface, covering real-time as well as historical data.
The Network for the Detection of Atmospheric Composition Change (NDACC), a major contributor to the worldwide atmospheric research effort, consists of a set of globally distributed research stations providing consistent, standardized, long-term measurements of atmospheric trace gases, particles, spectral UV radiation reaching the Earth's surface, and physical parameters, centered around a set of shared research priorities.
SeaDataNet is a standardized system for managing the large and diverse data sets collected by oceanographic fleets and automatic observation systems. The SeaDataNet infrastructure networks and enhances the currently existing infrastructures, namely the national oceanographic data centres of 35 countries active in data collection. The networking of these professional data centres into a unique virtual data management system provides integrated data sets of standardized quality online. As a research infrastructure, SeaDataNet contributes to building research excellence in Europe.
The domain of the IDS repository is the German language, mainly in its current form (contemporary New High German). Its designated community are national and international researchers in German and general linguistics. As an institutional repository, it provides long-term archiving for two major IDS projects: the Deutsches Referenzkorpus (‘German Reference Corpus’, DeReKo), which curates a large corpus of written German, and the Archiv für Gesprochenes Deutsch (‘Archive of Spoken German’, AGD), which curates several corpora of spoken German. In addition, the repository enables researchers in German studies from the IDS and from other research facilities and universities to deposit data and metadata arising from their research projects for long-term archiving.
The International Ocean Discovery Program (IODP) is an international marine research collaboration that explores Earth's history and dynamics using ocean-going research platforms to recover data recorded in seafloor sediments and rocks and to monitor subseafloor environments. IODP depends on facilities funded by three platform providers with financial contributions from five additional partner agencies. Together, these entities represent 26 nations whose scientists are selected to staff IODP research expeditions conducted throughout the world's oceans. IODP expeditions are developed from hypothesis-driven science proposals aligned with the program's science plan, Illuminating Earth's Past, Present, and Future, which identifies 14 challenge questions in the four areas of climate change, deep life, planetary dynamics, and geohazards. Until 2013 the program operated under the name Integrated Ocean Drilling Program.
The IWH Research Data Centre provides external scientists with data for non-commercial research. The research data centre of the IWH was accredited by RatSWD.
The Language Archive at the Max Planck Institute in Nijmegen provides a unique record of how people around the world use language in everyday life. It focuses on collecting spoken and signed language materials in audio and video form along with transcriptions, analyses, annotations and other types of relevant material (e.g. photos, accompanying notes).
<<<!!!<<< This repository is no longer available. This record is outdated. >>>!!!>>> Science3D is an Open Access project to archive and curate scientific data and make them available to everyone interested in scientific endeavours. Science3D focuses mainly on 3D tomography data from biological samples, simply because these objects make it comparatively easy to understand the concepts and techniques. The data come primarily from the imaging beamlines of the Helmholtz Center Geesthacht (HZG), which make use of the uniquely bright and coherent X-rays of the Petra3 synchrotron. Petra3 - like many other photon and neutron sources in Europe and worldwide - is a fantastic instrument for investigating the microscopic detail of matter and organisms. Experiments at photon science beamlines hence provide unique insights into all kinds of scientific fields, ranging from medical applications to plasma physics. The success of these experiments demands enormous effort from the scientists and considerable investment.
The repository of the Hamburg Centre for Speech Corpora (HZSK) is used for archiving, maintaining, distributing and developing spoken language corpora. These usually consist of audio and/or video recordings, transcriptions, other data and structured metadata. The corpora focus on multilingualism and are generally freely available for research and teaching. Most of the corpora maintained by the HZSK were created in the years 2000-2011 within the framework of the SFB 538 "Multilingualism" at the University of Hamburg. The HZSK, however, also strives to take in linguistic data from other projects and contexts and to make them available to the scientific community for research and teaching, provided they are compatible with the current focus of the HZSK, i.e. especially spoken language and multilingualism.
The Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) is a CLARIN partner institution and has been an officially certified CLARIN service center since June 20th, 2013. The CLARIN center at the BBAW focuses on historical text corpora (predominantly provided by the 'Deutsches Textarchiv'/German Text Archive, DTA) as well as on lexical resources (e.g. dictionaries provided by the 'Digitales Wörterbuch der Deutschen Sprache'/Digital Dictionary of the German Language, DWDS).
The Survey of Health, Ageing and Retirement in Europe (SHARE) is a multidisciplinary and cross-national panel database of micro data on health, socio-economic status and social and family networks of more than 140,000 individuals (approximately 530,000 interviews) aged 50 or over from 28 European countries and Israel.
The Bavarian Archive for Speech Signals (BAS) is a public institution hosted by the University of Munich. This institution was founded with the aim of making corpora of current spoken German available to both the basic research and the speech technology communities via a maximally comprehensive digital speech-signal database. The speech material will be structured in a manner allowing flexible and precise access, with acoustic-phonetic and linguistic-phonetic evaluation forming an integral part of it.
The aim of the Freshwater Biodiversity Data Portal is to integrate and provide open and free access to freshwater biodiversity data from all possible sources. To this end, we offer tools and support for scientists interested in documenting/advertising their dataset in the metadatabase, in submitting or publishing their primary biodiversity data (i.e. species occurrence records), or in having their dataset linked to the Freshwater Biodiversity Data Portal. This information portal serves as a data discovery tool and allows scientists and managers to complement, integrate, and analyse distribution data to elucidate patterns in freshwater biodiversity. The Freshwater Biodiversity Data Portal was initiated under the EU FP7 BioFresh project and continued through the Freshwater Information Platform (http://www.freshwaterplatform.eu). To ensure the broad availability of biodiversity data and its integration in the global GBIF index, we strongly encourage scientists to submit any primary biodiversity data published in a scientific paper to national nodes of GBIF or to thematic initiatives such as the Freshwater Biodiversity Data Portal.
The European Social Survey (the ESS) is a biennial multi-country survey covering over 30 nations. The first round was fielded in 2002/2003, the fifth in 2010/2011. The questionnaire includes two main sections, each consisting of approximately 120 items: a 'core' module which remains relatively constant from round to round, plus two or more 'rotating' modules repeated at intervals. The core module aims to monitor change and continuity in a wide range of social variables, including media use; social and public trust; political interest and participation; socio-political orientations; governance and efficacy; moral, political and social values; social exclusion; national, ethnic and religious allegiances; well-being; health and security; human values; demographics and socio-economics.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm. Frequently requested lines are also kept alive, as well as a selection of wildtype strains. Several thousand mutations in protein-coding genes, generated by TILLING in the Stemple lab of the Sanger Centre, Hinxton, UK, and by ENU mutagenesis in the Nüsslein-Volhard lab, are available in addition to transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. By association with the ComPlat platform, we can also support chemical screens and offer libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
The main function of the GGSP (Galileo Geodetic Service Provider) is to provide a terrestrial reference frame, in the broadest sense of the word, to both the Galileo Core System (GCS) as well as to the Galileo User Segment (all Galileo users). This implies that the GGSP should enable all users of the Galileo System, including the most demanding ones, to access and realise the GTRF with the precision required for their specific application. Furthermore, the GGSP must ensure the proper interfaces to all users of the GTRF, especially the geodetic and scientific user groups. In addition the GGSP must ensure the adherence to the defined standards of all its products. Last but not least the GGSP will play a key role to create awareness of the GTRF and educate users in the usage and realisation of the GTRF.
The Database explores the interactions of chemicals and proteins. It integrates information about interactions from metabolic pathways, crystal structures, binding experiments and drug-target relationships. Inferred information from phenotypic effects, text mining and chemical structure similarity is used to predict relations between chemicals. STITCH further allows exploring the network of chemical relations, also in the context of associated binding proteins.
The FAIRDOMHub is built upon the SEEK software suite, an open source web platform for sharing scientific research assets, processes and outcomes. FAIRDOM will establish a support and service network for European Systems Biology. It will serve projects in standardizing, managing and disseminating data and models in a FAIR manner: Findable, Accessible, Interoperable and Reusable. FAIRDOM is an initiative to develop a community and establish an internationally sustained Data and Model Management service for the European Systems Biology community. FAIRDOM is a joint action of ERA-Net EraSysAPP and the European Research Infrastructure ISBE.
DEPOD - the human DEPhOsphorylation Database (version 1.1) is a manually curated database collecting human active phosphatases, their experimentally verified protein and non-protein substrates and dephosphorylation site information, and pathways in which they are involved. It also provides links to popular kinase databases and protein-protein interaction databases for these phosphatases and substrates. DEPOD aims to be a valuable resource for studying human phosphatases and their substrate specificities and molecular mechanisms; phosphatase-targeted drug discovery and development; connecting phosphatases with kinases through their common substrates; completing the human phosphorylation/dephosphorylation network.
The project is set up in order to improve the infrastructure for text-based linguistic research and development by building a huge, automatically annotated German text corpus and the corresponding tools for corpus annotation and exploitation. DeReKo constitutes the largest linguistically motivated collection of contemporary German texts: it contains fictional, scientific and newspaper texts as well as several other text types, contains only licensed texts, is encoded with rich meta-textual information, is fully annotated morphosyntactically (three concurrent annotations), is continually expanded with a focus on size and stratification of data, may be analyzed free of charge via the query system COSMAS II, and serves as a 'primordial sample' from which users may draw specialized sub-samples (so-called 'virtual corpora') to represent the language domain they wish to investigate. !!! Access to data of Das Deutsche Referenzkorpus is also provided by: IDS Repository https://www.re3data.org/repository/r3d100010382 !!!
<<<!!!<<< The pages were merged. Please use "Forschungsdaten- und Servicezentrum der Bundesbank" https://www.re3data.org/repository/r3d100012252 >>>!!!>>>
<<<!!!<<< stated 13.02.2020: the repository is offline >>>!!!>>> Data.DURAARK provides a unique collection of real-world datasets from the architectural profession. The repository is unique in that it provides several different datatypes, such as 3D scans, 3D models, and classifying metadata and geodata, for real-world physical buildings. Many of the datasets stem from architectural stakeholders and thus give the community insights into the range of working methods which the practice employs on large and complex building data.