Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
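The operators above can be combined into a single query string. A minimal sketch, in which the operators come from the list but the search terms themselves are made-up examples:

```python
# Illustrative query strings for the search syntax described above.
# Only the operators are from the documentation; the terms are hypothetical.
examples = {
    "wildcard":  "climat*",             # matches climate, climatic, ...
    "phrase":    '"soil function"',     # exact phrase match
    "and":       "soil + water",        # both terms required (AND is the default)
    "or":        "archive | repository",
    "not":       "data - software",
    "grouping":  "(soil | water) + quality",
    "fuzziness": "metadata~2",          # edit distance of 2
    "slop":      '"open data"~3',       # up to 3 intervening words
}

for name, query in examples.items():
    print(f"{name:10s} {query}")
```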
Found 28 result(s)
The BonaRes Repository stores, manages, and publishes soil and agricultural research data from research projects, agricultural long-term field experiments, and soil profiles that contribute significantly to the analysis of long-term changes in soil and soil functions. Research data are described by metadata following the BonaRes Metadata Schema (DOI: 10.20387/bonares-5pgg-8yrp), which combines internationally recognized standards for the description of geospatial data (INSPIRE Directive) and research data (DataCite 4.0). Metadata includes AGROVOC keywords. Within the BonaRes Repository, research data are provided for free reuse under a CC license and can be discovered by advanced text and map search via a number of criteria.
<<<!!!<<< The repository is offline >>>!!!>>> A collection of open content name datasets for Information Centric Networking. The "Content Name Collection" (CNC) lists and hosts open datasets of content names. These datasets are either derived from URL link databases or web traces. The names are typically used for research on Information Centric Networking (ICN), for example to measure cache hit/miss ratios in simulations.
The purpose of this central repository is to gather all the research data created by Greek researchers and academics from Greek universities, and to make them available in the most open and secure way possible. HARDMIN has been developed with the open-source software CKAN and, together with HELIX, constitutes the national digital research infrastructure (eInfrastructure) for data cataloguing services and research data deposit, part of the Open Access infrastructure of HEAL-Link. The repository provides the capability to connect to already established repositories and extract data from existing collections.
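Since the repository is built on CKAN, its datasets can in principle be queried through CKAN's standard action API. A sketch under that assumption: the `package_search` path and the `q`/`rows` parameters are standard CKAN, but the host below is a placeholder, not HARDMIN's real address, and the request is only built, not sent.

```python
from urllib.parse import urlencode

# Placeholder host; substitute the actual CKAN instance address.
BASE = "https://example-ckan-instance.gr"

def package_search_url(query: str, rows: int = 10) -> str:
    """Build a CKAN package_search request URL (not executed here)."""
    params = urlencode({"q": query, "rows": rows})
    return f"{BASE}/api/3/action/package_search?{params}"

print(package_search_url("soil"))
# e.g. https://example-ckan-instance.gr/api/3/action/package_search?q=soil&rows=10
```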
SILVA is a comprehensive, quality-controlled web resource for up-to-date aligned ribosomal RNA (rRNA) gene sequences from the Bacteria, Archaea and Eukaryota domains alongside supplementary online services. In addition to data products, SILVA provides various online tools such as alignment and classification, phylogenetic tree calculation and viewer, probe/primer matching, and an amplicon analysis pipeline. With every full release a curated guide tree is provided that contains the latest taxonomy and nomenclature based on multiple references. SILVA is an ELIXIR Core Data Resource.
The Open Energy Family aims to ensure quality, transparency, and reproducibility in energy system research. It is a collection of tools and information that help with working on energy-related data. It is a collaborative community effort; everything is openly developed and therefore constantly evolving. The main module is the Open Energy Platform (OEP), a web interface for accessing most of the modules, especially the community database. It provides a way to publish data with proper documentation (metadata) and link them to source code and underlying assumptions. The Open Energy Database is an open community database for energy, climate, and modelling data.
ArkeoGIS is a unified scientific data publishing platform. It is a multilingual Geographic Information System (GIS), initially developed to pool archaeological and paleoenvironmental data of the Rhine Valley. Today, it allows the pooling of spatialized scientific data concerning the past, from prehistory to the present day. The databases come from the work of institutional researchers, doctoral students, master's students, private companies, and archaeological services. They are stored on the TGIR Huma-Num service grid and archived as part of the Huma-Num/CINES long-term archiving service. Because of their sensitive nature, which could lead to the looting of archaeological deposits, access to the tool is reserved for archaeological professionals from research institutions or non-profit organizations. Each user can query online all or part of the available databases and export the results of their queries to other tools.
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in new and interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
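The "sophisticated queries" mentioned above are typically expressed in SPARQL against DBpedia's public endpoint. A minimal sketch: the endpoint URL and SELECT syntax are standard, the specific resource and filter are illustrative, and the request is only constructed here, not sent.

```python
from urllib.parse import urlencode

# DBpedia's public SPARQL endpoint.
ENDPOINT = "https://dbpedia.org/sparql"

# Illustrative query: fetch the English label of a Wikipedia-derived resource.
QUERY = """
SELECT ?label WHERE {
  <http://dbpedia.org/resource/Wikipedia> rdfs:label ?label .
  FILTER (lang(?label) = "en")
}
"""

def sparql_request_url(query: str) -> str:
    """Build a GET request URL for the endpoint (not executed here)."""
    return ENDPOINT + "?" + urlencode({"query": query, "format": "json"})

url = sparql_request_url(QUERY)
print(url[:60], "...")
```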
The Open Archive for Miscellaneous Data (OMIX) database is a data repository developed and maintained by the National Genomics Data Center (NGDC). The database specializes in descriptions of biological studies, including genomic, proteomic, and metabolomic studies, as well as data that do not fit in the structured archives at other databases in NGDC. It can accept various types of studies described via a simple format and enables researchers to upload supplementary information and link to it from the publication.
CaltechDATA is an institutional data repository for Caltech. The Caltech Library runs the repository to preserve the accomplishments of Caltech researchers and share their results with the world. Caltech-associated researchers can upload data, link data with their publications, and assign a permanent DOI so that others can reference the data set. The repository also preserves software and has automatic GitHub integration. All files present in the repository are open access or embargoed, and all metadata is always available to the public.
The Arctic Permafrost Geospatial Centre (APGC) is an Open Access Circum-Arctic Geospatial Data Portal that promotes, describes, and visualizes geospatial permafrost data. A data catalogue and a WebGIS application make it easy to discover and view data and metadata. Data can be downloaded directly via a link to the publishing data repository.
The purpose of the Dataset Catalogue is to enhance discovery of GNS Science datasets. At a minimum, users will be able to determine whether a dataset on a specific topic exists and then whether it pertains to a specific place and/or a specific date or period. Some datasets include a web link to an online resource. In addition, contact details are provided for the custodian of each dataset as well as conditions of use.
PetDB, the Petrological Database, is a web-based data management system that provides on-line access to geochemical and petrological data. PetDB is a global synthesis of chemical, isotopic, and mineralogical data for rocks, minerals, and melt inclusions. PetDB's current content focuses on data for igneous and metamorphic rocks from the ocean floor, specifically mid-ocean ridge basalts and abyssal peridotites and xenolith samples from the Earth's mantle and lower crust. PetDB is maintained and continuously updated as part of the EarthChem data collections.
SureChemOpen is a free resource for researchers who want to search, view and link to patent chemistry. For end-users with professional search and analysis needs, we offer the fully-featured SureChemPro. For enterprise users, SureChemDirect provides all our patent chemistry via an API or a data feed. The SureChem family of products is built upon the Claims® Global Patent Database, a comprehensive international patent collection provided by IFI Claims®. This state-of-the-art database is normalized and curated to provide unprecedented consistency and quality.
The Norwegian Meteorological Institute supplies climate observations, weather data, and forecasts for the country and surrounding waters (including the Arctic). In addition, commercial services are provided to fit customers' requirements. Data are served through a number of subsystems (information provided in the repository link) and cover data from internal services of the institute, from external services operated by the institute, and from research projects in which the institute participates. Further information is provided on the landing page, which also contains entry points to some of the data portals operated by the institute.
BOARD (Bicocca Open Archive Research Data) is the institutional data repository of the University of Milano-Bicocca. BOARD is an open, free-to-use research data repository, which enables members of the University of Milano-Bicocca to make their research data publicly available. By depositing their research data in BOARD, researchers can:
  • Make their research data citable
  • Share their data privately or publicly
  • Ensure long-term storage for their data
  • Keep access to all versions
  • Link their article to their data
The public MorpheusML model repository collects, curates, documents, and tests computational models for multi-scale and multicellular biological systems. Models must be encoded in the model description language MorpheusML. Subsections of the repository distinguish published models from contributed non-published and example models. New models are simulated in Morpheus or Artistoo independently of the authors, and results are compared to published results. Successful reproduction is documented on the model's webpage. Models in this repository are included in the CI and test pipelines for each release of the model simulator Morpheus to check and guarantee reproducibility of results across future simulator updates. The model's webpage provides a History link to all past model versions and edits, which are automatically tracked via Git. Each model is registered with a unique and persistent ID of the format M..... The model description page (incl. the biological context and key results of that model), the model's XML file, the associated paper, and all further files (often simulation result videos) connected with that model can be retrieved via a persistent URL of the format https://identifiers.org/morpheus/M.....
  • For technical details on the citable ModelID, see https://registry.identifiers.org/registry/morpheus
  • For the model definition standard MorpheusML, see https://doi.org/10.25504/FAIRsharing.78b6a6
  • For the model simulator Morpheus, see https://morpheus.gitlab.io
  • For the model simulator Artistoo, see https://artistoo.net/converter.html
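The persistent URL scheme above can be assembled mechanically from a model ID. A minimal sketch: the prefix https://identifiers.org/morpheus/ comes from the description, but the assumption that an ID is "M" followed by digits (and the sample ID M0001) is mine; check the identifiers.org registry entry for the authoritative pattern.

```python
import re

# Assumed ID pattern: "M" followed by digits. The exact digit count is
# not specified in the description, so this pattern is an assumption.
MODEL_ID = re.compile(r"^M\d+$")

def morpheus_url(model_id: str) -> str:
    """Build the identifiers.org persistent URL for a Morpheus model ID."""
    if not MODEL_ID.match(model_id):
        raise ValueError(f"not a Morpheus model ID: {model_id!r}")
    return f"https://identifiers.org/morpheus/{model_id}"

print(morpheus_url("M0001"))  # hypothetical ID, for illustration only
```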
The MG-RAST server is an open source system for annotation and comparative analysis of metagenomes. Users can upload raw sequence data in fasta format; the sequences will be normalized and processed and summaries automatically generated. The server provides several methods to access the different data types, including phylogenetic and metabolic reconstructions, and the ability to compare the metabolism and annotations of one or more metagenomes and genomes. In addition, the server offers a comprehensive search capability. Access to the data is password protected, and all data generated by the automated pipeline is available for download in a variety of common formats. MG-RAST has become an unofficial repository for metagenomic data, providing a means to make your data public so that it is available for download and viewing of the analysis without registration, as well as a static link that you can use in publications. It also requires that you include experimental metadata about your sample when it is made public to increase the usefulness to the community.
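MG-RAST expects uploads of raw sequence data in FASTA format, as noted above. A minimal sketch of serializing reads into that format; the read IDs and sequences are made up, and the sample metadata MG-RAST asks for is handled separately, not shown here.

```python
# Serialize {id: sequence} pairs into FASTA text of the kind MG-RAST
# accepts for upload. Sequences are wrapped at a fixed line width.
def to_fasta(records: dict[str, str], width: int = 60) -> str:
    lines = []
    for seq_id, seq in records.items():
        lines.append(f">{seq_id}")          # header line
        for i in range(0, len(seq), width): # wrapped sequence lines
            lines.append(seq[i:i + width])
    return "\n".join(lines) + "\n"

# Toy reads, purely illustrative.
reads = {"read_1": "ACGTACGTAC", "read_2": "TTGACCA"}
print(to_fasta(reads))
```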
eLaborate is an online work environment in which scholars can upload scans, transcribe and annotate text, and publish the results as an online text edition that is freely available to all users. Brief information about, and links to, already published editions is presented on the Editions page under Published. Information about editions currently being prepared is posted on the Ongoing projects page. The eLaborate work environment for the creation and publication of online digital editions is developed by the Huygens Institute for the History of the Netherlands of the Royal Netherlands Academy of Arts and Sciences. Although the institute considers itself primarily a research facility and does not maintain a public collection profile, Huygens ING actively maintains almost 200 digitally available resource collections.
---<<< This repository is no longer available. This record is outdated. >>>--- The ONS challenge contains open solubility data: experiments with raw data from different scientists and institutions. It is part of The Open Notebook Science wiki community, which is ideally suited for community-wide collaborative research projects involving mathematical modeling and computer simulation work, as it allows researchers to document model development in a step-by-step fashion, then link model predictions to experiments that test the model, and in turn use feedback from experiments to evolve the model. By making our laboratory notebooks public, the evolutionary process of a model can be followed in its totality by the interested reader. Researchers from laboratories around the world can now follow the progress of our research day to day, borrow models at various stages of development, comment or advise on model developments, discuss experiments, ask questions, provide feedback, or otherwise contribute to the progress of science in any manner possible.
The Data Catalogue is a service that allows University of Liverpool researchers to create records of information about their finalised research data, and to save those data in a secure online environment. The Data Catalogue provides a good means of making those data available in a structured way, in a form that can be discovered by both general search engines and academic search tools. There are two types of record that can be created in the Data Catalogue:
  • A discovery-only record – the research data may be held somewhere else, but a record is created that alerts users to the existence of the data and provides a link to where those data are held.
  • A discovery and data record – a record is created to help people discover that the data exist, and the data themselves are deposited into the Data Catalogue. This process creates a unique Digital Object Identifier (DOI) which can be used in citations to the data.
<<<!!!<<< This repository is no longer available. >>>!!!>>> See https://beta.ukdataservice.ac.uk/datacatalogue/studies/study?id=7021#!/details and https://ota.bodleian.ox.ac.uk/repository/xmlui/discover?query=germanc&submit=Search&filtertype_1=title&filter_relational_operator_1=contains&filter_1=&query=germanc
This is the KONECT project, a project in the area of network science with the goal of collecting network datasets, analysing them, and making all analyses available online. KONECT stands for Koblenz Network Collection, as the project has roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software, and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as code to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed, and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks, and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots, and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
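To make the "network statistics" concrete: the simplest of them, node degree and mean degree, can be computed directly from an edge list, which is how such datasets are commonly distributed. A sketch on a toy undirected, unweighted graph (the edges below are made up; KONECT's own toolbox is written for GNU Octave, so this Python version is an illustration, not its code):

```python
from collections import Counter

# Toy undirected, unweighted edge list.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]

# Each undirected edge contributes to the degree of both endpoints.
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

n_nodes = len(degree)
n_edges = len(edges)
mean_degree = 2 * n_edges / n_nodes  # handshake lemma: sum of degrees = 2|E|

print(n_nodes, n_edges, mean_degree)  # 4 4 2.0
```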
Etsin is a research data finder that contains descriptive information – that is, metadata – on research datasets. In the service you can search and find data from various fields of research. A researcher, research group, or organisation can use Etsin to publish information on their datasets and offer them for wider use. The metadata contained in Etsin makes it easy for anyone to discover the datasets. Etsin assigns a permanent URN identifier to datasets, making it possible to link to the dataset and gather merit through its publication and use. The metadata enables users to search for datasets and evaluate the potential for reuse. Etsin includes a description of the dataset, keywords, and various dataset identifiers. The dataset information includes, for example, its subject, language, author, owner, and how it is licensed for reuse. Good description of data plays an important role in its discoverability and visibility. Etsin encourages comprehensive descriptions by adopting a common set of discipline-independent metadata fields and by making it easy to enter metadata. Etsin only collects metadata on datasets, not the data themselves. Anyone may browse and read the metadata. Etsin can be used with a browser or through an open interface (API). The service is discipline-independent and free to use. Etsin is a service provided by the Ministry of Education and Culture to actors in the Finnish research system. The service is produced by CSC – IT Center for Science (CSC). Customer service contacts and feedback are available through servicedesk@csc.fi. The service maintenance window is on the fourth Monday of every month between 4 and 6 PM (EET). During that time, the service will be out of use.