Filter (available facets): Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning
  • * at the end of a keyword allows wildcard searches
  • " quotation marks can be used to search for phrases
  • + represents an AND operation (the default)
  • | represents an OR operation
  • - represents a NOT operation
  • ( and ) indicate precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
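To make the operators concrete, the sketch below assembles a few illustrative query strings. It is a minimal example: the endpoint URL and the query parameter name are placeholders for illustration, not documented details of this registry's API.

```python
from urllib.parse import urlencode

# Illustrative query strings exercising each operator described above.
queries = [
    'climat*',                      # wildcard: climate, climatology, ...
    '"research data"',              # exact phrase
    'ocean + seismic',              # AND (also the default between terms)
    'biobank | biorepository',      # OR
    'corpus - parliamentary',       # NOT
    '(gaia | galaxy) + astronomy',  # parentheses set precedence
    'metadta~2',                    # fuzzy match within edit distance 2
    '"marine geoscience"~3',        # phrase with a slop of up to 3 positions
]

BASE_URL = "https://www.example.org/search"  # placeholder endpoint

for q in queries:
    # urlencode percent-escapes the operators and spaces for use in a URL
    print(f"{BASE_URL}?{urlencode({'query': q})}")
```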
Found 122 result(s)
CLARIN.SI is the Slovenian node of the European CLARIN (Common Language Resources and Technology Infrastructure) network of centres. The CLARIN.SI repository is hosted at the Jožef Stefan Institute and offers long-term preservation of deposited linguistic resources, along with their descriptive metadata. The integration of the repository with the CLARIN infrastructure gives the deposited resources wide exposure, so that they can be known, used and further developed beyond the lifetime of the projects in which they were produced. Among the resources currently available in the CLARIN.SI repository are the multilingual MULTEXT-East resources, the CC version of the Slovenian reference corpus Gigafida, the morphological lexicon Sloleks, the IMP corpora and lexicons of historical Slovenian, as well as many other resources for a variety of languages. Furthermore, several REST-based web services are provided for various corpus-linguistic and NLP tasks.
Sound and Vision has one of the largest audiovisual archives in Europe. The institute manages over 70 percent of the Dutch audiovisual heritage. The collection contains more than a million hours of television, radio, music and film, from the beginnings in 1898 until today. All programs of the Dutch public broadcasters come in digitally every day. Individuals and institutions entrust their collections to Sound and Vision as well. The institute ensures that the material is optimally preserved for (re)use. Broadcasters, producers and editors use the archive to create new programs. The collection is also used to develop products and services for a wide audience, such as exhibitions, iPhone applications, DVD box sets and various websites. The collection of Sound and Vision contains the complete radio and television archives of the Dutch public broadcasters; films by virtually every leading Dutch documentary maker; newsreels; the national music depot; various audiovisual corporate collections; and advertising, radio and video material of cultural and social organizations, scientific institutes and all kinds of educational institutions. There are also collections of images and articles from the history of Dutch broadcasting itself, such as the extensive collection of historical television sets.
The focus of PolMine is on texts published by public institutions in Germany. Corpora of parliamentary proceedings are at the heart of the project: parliamentary proceedings are available for long stretches of time, cover a broad set of public policies and are in the public domain, making them a valuable text resource for political science. The project develops repositories of textual data in a sustainable fashion to suit the research needs of political science. On the data side, the focus is on converting text issued by public institutions into a sustainable digital format (TEI/XML).
The Korea Polar Data Center (KPDC) is an organization dedicated to managing the different types of data acquired during the scientific research that South Korea carries out in the Antarctic and Arctic regions. South Korea, as an Antarctic Treaty Consultative Party (ATCP) and an accredited member of the Scientific Committee on Antarctic Research (SCAR), established the Center in 2003 as part of its effort to join international Antarctic research.
Decades of hydrological and ecological practice, and our activities on the German federal waterways, have produced a valuable inventory of hydrological information that embodies our experience and knowledge base. Almost all environmental information has a direct or indirect spatial reference, and this inventory of spatial data continues to grow. It is both the basis and the result of our scientific work. With GGInA, the hydrological Geographical Information and Analysis System of the BfG, you can search this inventory yourself. Much of the information and data can be accessed directly via the metadata catalogue search and specialist applications. The geoportal also opens up databases of our partners in the transport and environment sectors. At the same time, selected data can be integrated into other environmental portals.
The purpose of the Dataset Catalogue is to enhance discovery of GNS Science datasets. At a minimum, users will be able to determine whether a dataset on a specific topic exists and then whether it pertains to a specific place and/or a specific date or period. Some datasets include a web link to an online resource. In addition, contact details are provided for the custodian of each dataset as well as conditions of use.
REDU is the institutional open research data repository of the University of Campinas, Brazil. It contains research data produced by all research groups of the University, in a wide range of scientific domains, indexed with DataCite DOIs. Created at the end of 2020, it is coordinated by a scientific and technical committee composed of data librarians, IT professionals, and scientists representing user groups. Implemented on top of Dataverse, it exports metadata via OAI-PMH. Files with sensitive content (due to ethical or legal constraints) are not stored in the repository; only their metadata is recorded in REDU, along with contact information so that interested researchers can reach the persons responsible for the files and arrange conditional access. The repository is being populated gradually, following the University's Open Science policies.
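Since REDU runs on Dataverse, its metadata can in principle be harvested over OAI-PMH. The sketch below is a minimal harvest, assuming a typical Dataverse /oai endpoint; the URL is a guess for illustration, not a documented address.

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint; Dataverse installations usually expose one
# under /oai, but the exact REDU URL here is an assumption.
OAI_URL = "https://redu.unicamp.br/oai"

resp = requests.get(OAI_URL, params={
    "verb": "ListRecords",        # standard OAI-PMH verb
    "metadataPrefix": "oai_dc",   # Dublin Core, supported by all OAI-PMH servers
}, timeout=30)
resp.raise_for_status()

# Dataset titles live under dc:title in each record's Dublin Core metadata
ns = {"dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(resp.content)
for title in root.iterfind(".//dc:title", ns):
    print(title.text)
```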
The US Department of Energy's Atmospheric Radiation Measurement (ARM) Data Center is a long-term archive and distribution facility for various ground-based, aerial and model data products in support of atmospheric and climate research. The ARM facility currently operates over 400 instruments at various observatories (https://www.arm.gov/capabilities/observatories/). The ARM Data Center (ADC) archive currently holds over 11,000 data products, with total holdings of over 3 petabytes of data dating back to 1993; these include data from instruments, value-added products, model outputs, field campaign data and PI-contributed data. The archive also includes data collected by ARM from related programs (e.g., external data such as NASA satellite data).
GitHub is the best place to share code with friends, co-workers, classmates, and complete strangers. Over three million people use GitHub to build amazing things together. With the collaborative features of GitHub.com, our desktop and mobile apps, and GitHub Enterprise, it has never been easier for individuals and teams to write better code, faster. Originally founded by Tom Preston-Werner, Chris Wanstrath, and PJ Hyett to simplify sharing code, GitHub has grown into the largest code host in the world.
The Institute of Ocean Sciences (IOS)/Ocean Sciences Division (OSD) data archive contains the holdings of oceanographic data generated by the IOS and other agencies and laboratories, including the Institute of Oceanography at the University of British Columbia and the Pacific Biological Station. The contents include data from B.C. coastal waters and inlets, B.C. continental shelf waters, open ocean North Pacific waters, Beaufort Sea and the Arctic Archipelago.
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10¹⁰ particles to follow the dark matter distribution in a cubic region 500 h⁻¹ Mpc on a side, and has a spatial resolution of 5 h⁻¹ kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10⁷ galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures, for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and haloes, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with a volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
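The sketch below shows the kind of SQL query a user might test on the small, openly accessible database. Table and column names are modeled on published examples and are not guaranteed to match the current schema.

```python
# A sample query against the small open Millennium database. The table and
# column names below are illustrative (modeled on published examples), not
# guaranteed to match the current schema.
query = """
SELECT TOP 10
       galaxyId, snapnum, stellarMass, mag_b
FROM   millimil..DeLucia2006a      -- hypothetical galaxy catalogue table
WHERE  snapnum = 63                -- the z = 0 snapshot
  AND  mag_b < -20                 -- brighter than M_B = -20
ORDER BY stellarMass DESC
"""
print(query)  # paste into the database's web query form
```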
York Digital Library (YODL) is a University-wide digital library service for multimedia resources used in or created through teaching, research and study at the University of York. YODL complements the University's research publications, held in White Rose Research Online and PURE, and the digital teaching materials in the University's Yorkshare Virtual Learning Environment. YODL contains a range of collections, including images, past exam papers, master's dissertations and audio. Some of these are available only to members of the University of York, whilst other material is available to the public. YODL is expanding, with more content being added all the time.
The Marine Geoscience Data System (MGDS) is a trusted data repository that provides free public access to a curated collection of marine geophysical data products and complementary data related to understanding the formation and evolution of the seafloor and sub-seafloor. Developed and operated by domain scientists and technical specialists with deep knowledge of the creation, analysis and scientific interpretation of marine geoscience data, the system makes available a digital library of data files described by a rich curated metadata catalog. MGDS provides tools and services for the discovery and download of data collected throughout the global oceans. Primary data types are geophysical field data, including active-source seismic data, potential field data, bathymetry, sidescan sonar, near-bottom imagery and other seafloor sensor data, as well as a diverse array of processed data and interpreted data products (e.g., seismic interpretations, microseismicity catalogs, geologic maps and interpretations, photomosaics and visualizations). Our data resources support scientists working broadly on solid earth science problems, ranging from mid-ocean ridge, subduction zone and hotspot processes to geohazards, continental margin evolution, and sediment transport at glaciated and unglaciated margins.
Data products developed and distributed by the National Institute of Standards and Technology span multiple disciplines of research and are widely used in research and development programs by industry and academia. NIST's publicly available data sets showcase its commitment to providing accurate, well-curated measurements of physical properties, exemplified by the Standard Reference Data program, as well as its commitment to advancing basic research. In accordance with the U.S. Government Open Data Policy and the NIST Plan for providing public access to the results of federally funded research, NIST maintains a publicly accessible listing of available data, the NIST Public Dataset List (json). Additionally, these data are assigned Digital Object Identifiers (DOIs) to increase discovery of and access to research output; these DOIs are registered with DataCite and provide globally unique persistent identifiers. The NIST Science Data Portal provides a user-friendly discovery and exploration tool for publicly available datasets at NIST. The portal is designed and developed according to data.gov Project Open Data standards and principles, and the portal software is hosted in the usnistgov GitHub repository.
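As a sketch, the Public Dataset List can be read programmatically. The catalogue URL below is an assumption about where the data.json file lives; the "dataset" array and its fields follow the Project Open Data schema.

```python
import requests

# The Open Data Policy has agencies publish a machine-readable data.json
# catalogue; this URL is an assumed location for NIST's copy.
CATALOG_URL = "https://data.nist.gov/data.json"

catalog = requests.get(CATALOG_URL, timeout=30).json()

# Each entry in "dataset" follows the Project Open Data schema
for ds in catalog.get("dataset", [])[:10]:
    doi = ds.get("identifier", "n/a")  # often the DataCite DOI
    print(f"{ds.get('title')}  ->  {doi}")
```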
In collaboration with other centres in the Text+ consortium and in the CLARIN infrastructure, CLARIND-UdS enables eHumanities by providing a service for hosting and processing language resources (notably corpora) for members of the research community. The CLARIND-UdS centre thus contributes to reducing the fragmentation of language resources by assisting members of the research community in preparing language materials in such a way that easy discovery is ensured, interchange is facilitated and preservation is enabled, by enriching such materials with meta-information, transforming them into sustainable formats and hosting them. We have an explicit mission to archive language resources, especially multilingual corpora (parallel, comparable) and corpora of specific registers, collected both by associated researchers and by researchers who are not affiliated with us.
This genome browser is an interactive website offering access to genome sequence data from a variety of vertebrate and invertebrate species and major model organisms, integrated with a large collection of aligned annotations. The browser is a graphical viewer optimized for fast interactive performance and is an open-source, web-based tool suite built on top of a MySQL database for rapid visualization, examination, and querying of the data at many levels.
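Because the underlying store is MySQL, annotation tables can also be queried directly with any MySQL client. The sketch below uses the third-party PyMySQL driver; the host, credentials, database, and table names are placeholders to be replaced with the values in the browser's documentation.

```python
import pymysql  # third-party driver: pip install pymysql

# Hostname, credentials, and table/column names are placeholders modeled on
# public genome-browser MySQL mirrors; check the browser's documentation for
# the real values before connecting.
conn = pymysql.connect(
    host="genome-mysql.example.org",  # placeholder public mirror
    user="genome",                    # typical read-only account name
    database="hg38",                  # a genome assembly database
)

with conn.cursor() as cur:
    # Fetch a handful of gene annotations from a hypothetical track table
    cur.execute(
        "SELECT name, chrom, txStart, txEnd FROM refGene LIMIT %s", (5,)
    )
    for name, chrom, start, end in cur.fetchall():
        print(f"{name}: {chrom}:{start}-{end}")

conn.close()
```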
BBMRI-ERIC is a European research infrastructure for biobanking. We bring together all the main players from the biobanking field – researchers, biobankers, industry, and patients – to boost biomedical research. To that end, we offer quality management services, support with ethical, legal and societal issues, and a number of online tools and software solutions. Ultimately, our goal is to make new treatments possible. The Directory is a tool for sharing aggregate information about biobanks that are open to external collaboration. It is based on the MIABIS 2.0 standard, which describes the samples and data in the biobanks at an aggregated level.
Launched in December 2013, Gaia is destined to create the most accurate map yet of the Milky Way. By making accurate measurements of the positions and motions of stars in the Milky Way, it will answer questions about the origin and evolution of our home galaxy. The first data release (2016) contains three-dimensional positions and two-dimensional motions of a subset of two million stars. The second data release (2018) increases that number to over 1.6 billion. Gaia's measurements are as precise as planned, paving the way to a better understanding of our galaxy and its neighborhood. The AIP hosts the Gaia data as one of the external data centers, alongside the main Gaia archive maintained by ESAC, and provides access to the Gaia data releases as part of the Gaia Data Processing and Analysis Consortium (DPAC).
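As an illustration of programmatic access, the sketch below queries a Gaia data release with ADQL via the astroquery package. It targets the ESA archive rather than the AIP mirror, purely as an assumption for the example.

```python
# Requires: pip install astroquery. The table name follows the ESA archive
# schema for Gaia DR2; adapt it for other releases or mirrors.
from astroquery.gaia import Gaia

# Ten of the brightest sources in Gaia DR2, by G-band magnitude
job = Gaia.launch_job(
    "SELECT TOP 10 source_id, ra, dec, parallax, phot_g_mean_mag "
    "FROM gaiadr2.gaia_source "
    "ORDER BY phot_g_mean_mag ASC"
)
print(job.get_results())
```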
The Universidad del Rosario Research Data Repository is an institutional initiative launched in 2019 to preserve, provide access to, and promote the use of data resulting from Universidad del Rosario research projects. The repository aims to consolidate an online, collaborative working space and data-sharing platform to support Universidad del Rosario researchers and their collaborators, and to ensure that research data are available to the community, in order to support further research and contribute to the democratization of knowledge. The repository is the heart of an institutional strategy that seeks to ensure the generation of Findable, Accessible, Interoperable and Reusable (FAIR) data, with the aim of increasing their impact and visibility. This strategy follows the international principle of making research data “as open as possible and as closed as necessary”, in order to foster the expansion, valuation, acceleration and reusability of scientific research while safeguarding the privacy of research subjects. The platform stores, preserves and facilitates the management of research data from all disciplines, generated by researchers across all the schools and faculties of the University, who work together to ensure research with the highest standards of quality and scientific integrity, encouraging innovation for the benefit of society.
EIDA, an initiative within ORFEUS, is a distributed data centre established to (a) securely archive seismic waveform data and related metadata, gathered by European research infrastructures, and (b) provide transparent access to the archives by the geosciences research communities. EIDA nodes are data centres which collect and archive data from seismic networks deploying broad-band sensors, short period sensors, accelerometers, infrasound sensors and other geophysical instruments. Networks contributing data to EIDA are listed in the ORFEUS EIDA networklist (http://www.orfeus-eu.org/data/eida/networks/). Data from the ORFEUS Data Center (ODC), hosted by KNMI, are available through EIDA. Technically, EIDA is based on an underlying architecture developed by GFZ to provide transparent access to all nodes' data. Data within the distributed archives are accessible via the ArcLink protocol (http://www.seiscomp3.org/wiki/doc/applications/arclink).
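A minimal waveform request over ArcLink is sketched below, using the client that shipped with older ObsPy releases (newer releases have dropped it in favour of FDSN web services); the gateway host and the requested stream are illustrative values.

```python
# Sketch using ObsPy's ArcLink client (available in older ObsPy releases
# only). Host, port, and the requested stream are illustrative values.
from obspy import UTCDateTime
from obspy.clients.arclink import Client

client = Client(user="you@example.org",      # ArcLink expects a contact email
                host="webdc.eu", port=18002) # an example ArcLink gateway

# Ten minutes of vertical-component broadband data from one station
st = client.get_waveforms(
    network="GE", station="APE", location="", channel="BHZ",
    starttime=UTCDateTime("2015-01-01T00:00:00"),
    endtime=UTCDateTime("2015-01-01T00:10:00"),
)
print(st)
```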
The COSYNA observatory measures key physical, sedimentary, geochemical and biological parameters at high temporal resolution in the water column and at the sediment and atmospheric boundaries. COSYNA achieves spatial coverage through a set of fixed and mobile platforms, such as tidal-flat poles, FerryBoxes, gliders, ship surveys, towed devices and remote sensing. New technologies such as underwater nodes, benthic landers and automated sensors for biogeochemical water parameters are being further developed and tested. A great variety of parameters is measured, processed, stored, analyzed, assimilated into models and visualized.