  • * at the end of a keyword enables wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
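The AND/OR/NOT operators above can be illustrated with a toy matcher. This is a sketch only: the real search engine also supports wildcards, fuzziness, slop, and grouping, none of which are handled here, and the function and variable names are hypothetical.

```python
# Illustrative only: a toy evaluator for the + (AND), | (OR), and - (NOT)
# operators described above, applied to a set of repository keywords.
# Single-word terms only; no wildcards, phrases, grouping, or fuzziness.

def matches(query: str, keywords: set[str]) -> bool:
    """Evaluate a flat query of space-separated terms against keywords.

    '+term' and bare 'term' must be present (AND, the default),
    '-term' must be absent (NOT), and '|term' may satisfy the query (OR).
    """
    required, forbidden, optional = [], [], []
    for token in query.split():
        if token.startswith("-"):
            forbidden.append(token[1:])
        elif token.startswith("|"):
            optional.append(token[1:])
        else:
            required.append(token.lstrip("+"))
    if any(term in keywords for term in forbidden):
        return False
    if not all(term in keywords for term in required):
        return False
    # A query made up only of OR terms needs at least one of them to match.
    if optional and not required:
        return any(term in keywords for term in optional)
    return True

kw = {"climate", "ocean", "atmosphere"}
print(matches("+climate -biology", kw))   # True: climate present, biology absent
print(matches("|ocean |genomics", kw))    # True: at least one OR term matches
print(matches("+climate +genomics", kw))  # False: genomics missing
```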
Found 334 result(s)
Swedish National Data Service (SND) is a research data infrastructure designed to assist researchers in preserving, maintaining, and disseminating research data in a secure and sustainable manner. The SND Search function makes it easy to find, use, and cite research data from a variety of scientific disciplines. Together with an extensive network of almost 40 Swedish higher education institutions and other research organisations, SND works for increased access to research data, nationally as well as internationally.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
The SURF Data Repository is a user-friendly web-based data publication platform that allows researchers to store, annotate and publish research datasets of any size to ensure long-term preservation and availability of their data. The service allows any dataset to be stored, independent of volume, number of files and structure. A published dataset is enriched with complex metadata, unique identifiers are added and the data is preserved for an agreed-upon period of time. The service is domain-agnostic and supports multiple communities with different policy and metadata requirements.
Kadi4Mat instance for use at the Karlsruhe Institute of Technology (KIT) and for cooperations, including the Cluster of Competence for Solid-state Batteries (FestBatt), the Battery Competence Cluster Analytics/Quality Assurance (AQua), and more. Kadi4Mat is the Karlsruhe Data Infrastructure for Materials Science, an open source software for managing research data. It is being developed as part of several research projects at the Institute for Applied Materials - Microstructure Modelling and Simulation (IAM-MMS) of the Karlsruhe Institute of Technology (KIT). The goal of this project is to combine the ability to manage and exchange data, the repository, with the possibility to analyze, visualize and transform said data, the electronic lab notebook (ELN). Kadi4Mat supports a close cooperation between experimenters, theorists and simulators, especially in materials science, to enable the acquisition of new knowledge and the development of novel materials. This is made possible by employing a modular and generic architecture, which makes it possible to cover the specific needs of different scientists, each utilizing unique workflows. At the same time, this opens up the possibility of covering other research disciplines as well.
Academic Torrents is a distributed data repository. The academic torrents network is built for researchers, by researchers. Its distributed peer-to-peer library system automatically replicates your datasets on many servers, so you don't have to worry about managing your own servers or file availability. Everyone who has data becomes a mirror for those data so the system is fault-tolerant.
Tropicos® was originally created for internal research but has since been made available to the world’s scientific community. All of the nomenclatural, bibliographic, and specimen data accumulated in MBG’s electronic databases during the past 30 years are publicly available here.
The mission of World Data Center for Climate (WDCC) is to provide central support for the German and European climate research community. The WDCC is member of the ISC's World Data System. Emphasis is on development and implementation of best practice methods for Earth System data management. Data for and from climate research are collected, stored and disseminated. The WDCC is restricted to data products. Cooperations exist with thematically corresponding data centres of, e.g., earth observation, meteorology, oceanography, paleo climate and environmental sciences. The services of WDCC are also available to external users at cost price. A special service for the direct integration of research data in scientific publications has been developed. The editorial process at WDCC ensures the quality of metadata and research data in collaboration with the data producers. A citation code and a digital identifier (DOI) are provided and registered together with citation information at the DOI registration agency DataCite.
CINES is the French national long-term preservation service provider for Higher Education and Research: more than 20 institutions (universities, libraries, labs) archive their digital heritage at CINES so that it is preserved over time in a secure, dedicated environment. This includes documents such as PhD theses and publications, digitized ancient and rare books, satellite imagery, 3D models, videos, image galleries, datasets, etc.
IAGOS aims to provide long-term, regular and spatially resolved in situ observations of the atmospheric composition. The observation systems are deployed on a fleet of 10 to 15 commercial aircraft measuring atmospheric chemistry concentrations and meteorological fields. The IAGOS Data Centre manages and gives access to all the data produced within the project.
WikiPathways was established to facilitate the contribution and maintenance of pathway information by the biology community. WikiPathways is an open, collaborative platform dedicated to the curation of biological pathways. WikiPathways thus presents a new model for pathway databases that enhances and complements ongoing efforts, such as KEGG, Reactome and Pathway Commons. Building on the same MediaWiki software that powers Wikipedia, we added a custom graphical pathway editing tool and integrated databases covering major gene, protein, and small-molecule systems. The familiar web-based format of WikiPathways greatly reduces the barrier to participate in pathway curation. More importantly, the open, public approach of WikiPathways allows for broader participation by the entire community, ranging from students to senior experts in each field. This approach also shifts the bulk of peer review, editorial curation, and maintenance to the community.
Strasbourg astronomical Data Center (CDS) is dedicated to the collection and worldwide distribution of astronomical data and related information. Alongside data curation and service maintenance responsibilities, the CDS undertakes R&D activities that are fundamental to ensure the long term sustainability in a domain in which technology evolves very quickly. R&D areas include informatics, big data, and development of the astronomical Virtual Observatory (VO). CDS is a major actor in the VO with leading roles in European VO projects, the French Virtual Observatory and the International Virtual Observatory Alliance (IVOA). The CDS hosts the SIMBAD astronomical database, the world reference database for the identification of astronomical objects; VizieR, the catalogue service for the CDS reference collection of astronomical catalogues and tables published in academic journals; and the Aladin interactive software sky atlas for access, visualization and analysis of astronomical images, surveys, catalogues, databases and related data.
FLOSSmole is a collaborative collection of free, libre, and open source software (FLOSS) data. FLOSSmole contains nearly 1 TB of data from 2004 to the present, covering more than 500,000 different open source projects.
Human Protein Reference Database (HPRD) has been established by a team of biologists, bioinformaticians and software engineers. It is a joint project between the Pandey Lab at Johns Hopkins University and the Institute of Bioinformatics, Bangalore. HPRD is a definitive repository of human proteins. This database should serve as a ready reckoner for researchers in their quest for drug discovery and the identification of disease markers, and promote biomedical research in general. Human Proteinpedia (www.humanproteinpedia.org) is its associated data portal.
Central data management of the USGS for water data that provides access to water-resources data collected at approximately 1.5 million sites in all 50 States, the District of Columbia, Puerto Rico, the Virgin Islands, Guam, American Samoa and the Commonwealth of the Northern Mariana Islands. Includes data on water use and quality, groundwater, and surface water.
The central mission of the NACJD is to facilitate and encourage research in the criminal justice field by sharing data resources. Specific goals include providing computer-readable data for the quantitative study of crime and the criminal justice system through the development of a central data archive, supplying technical assistance in the selection of data collections and computer hardware and software for data analysis, and providing training in quantitative methods of social science research to facilitate secondary analysis of criminal justice data.
ScienceBase provides access to aggregated information derived from many data and information domains, including feeds from existing data systems, metadata catalogs, and scientists contributing new and original content. ScienceBase architecture is designed to help science teams and data practitioners centralize their data and information resources to create a foundation needed for their work. ScienceBase, both original software and engineered components, is released as an open source project to promote involvement from the larger scientific programming community both inside and outside the USGS.
>>> !!! The repository is offline !!! <<< The MyTARDIS repository at ANSTO is used to:
  • Store metadata for all experiments conducted at ANSTO
  • Provide access and download of metadata and data to authorised users of experiments
  • Provide search, access, and download of public metadata and data to the general scientific community
<<<!!!<<< This repository is no longer available. >>>!!!>>> BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility to either use ready-made workflows or create one's own. BioVeL workflows are stored in MyExperiment - BioVeL Group http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. You can find the Web Services catalogued in the BiodiversityCatalogue.
The European Data Portal harvests the metadata of Public Sector Information available on public data portals across European countries. Information regarding the provision of data and the benefits of re-using data is also included.
The datacommons@psu was developed in 2005 to provide a resource for data sharing, discovery, and archiving for the Penn State research and teaching community. Access to information is vital to the research, teaching, and outreach conducted at Penn State. The datacommons@psu serves as a data discovery tool, a data archive for research data created by PSU for projects funded by agencies like the National Science Foundation, and a portal to data, applications, and resources throughout the university. The datacommons@psu facilitates interdisciplinary cooperation and collaboration by connecting people and resources and by:
  • Acquiring, storing, documenting, and providing discovery tools for Penn State based research data, final reports, instruments, models, and applications
  • Highlighting existing resources developed or housed by Penn State
  • Supporting access to project/program partners via collaborative map or web services
  • Providing metadata development, citation information, Digital Object Identifiers (DOIs), and links to related publications and project websites
Members of the Penn State research community and their affiliates can easily share and house their data through the datacommons@psu. The datacommons@psu will also develop metadata for your data and provide information to support your NSF, NIH, or other agency data management plan.
Trent University Dataverse is a research data repository for our faculty, students and staff. Files are held in a secure environment on Canadian servers. The platform makes it possible for researchers to deposit data, create appropriate metadata, and version documents as they work. Researchers can choose to make content available publicly, to specific individuals, or to keep it locked.
The Australian National University collects and publishes metadata about research data held by ANU and, in four discipline areas (Earth Sciences, Astronomy, Phenomics, and Digital Humanities), develops pipelines and tools to enable the publication of research data using a common and repeatable approach. Aims and outcomes: to identify and describe research data held at ANU; to develop a consistent approach to the publication of metadata on the University's data holdings; to identify and curate significant orphan data sets that might otherwise be lost or inadvertently destroyed; and to develop a culture of data sharing and data re-use.
CLARIN-LV is a national node of CLARIN ERIC (Common Language Resources and Technology Infrastructure). The mission of the repository is to ensure the availability and long-term preservation of language resources. The data stored in the repository are being actively used and cited in scientific publications.
Arquivo.pt is a research infrastructure that preserves millions of files collected from the web since 1996 and provides a public search service over this information. It contains information in several languages. Periodically it collects and stores information published on the web. It then processes the collected data to make it searchable, providing a "Google-like" service that enables searching the past web (English user interface available at https://arquivo.pt/?l=en). This preservation workflow is performed through a large-scale distributed information system and can also be accessed through an API (https://arquivo.pt/api).
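A minimal sketch of querying the Arquivo.pt full-text search API mentioned above. The `textsearch` endpoint and the `q`/`maxItems` parameters follow the public documentation at https://arquivo.pt/api, but the response field names used in the commented-out parsing code are assumptions to verify against the current docs.

```python
# Sketch of building a request to the Arquivo.pt TextSearch API
# (https://arquivo.pt/api). Endpoint and parameter names follow the public
# documentation; verify before relying on them.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_textsearch_url(query: str, max_items: int = 10) -> str:
    """Compose a TextSearch request URL for a given query string."""
    params = urlencode({"q": query, "maxItems": max_items})
    return f"https://arquivo.pt/textsearch?{params}"

url = build_textsearch_url("research data", max_items=5)
print(url)

# Fetching and decoding the JSON response would look like this
# (commented out to avoid a live network call; the "response_items",
# "title", and "linkToArchive" field names are assumptions):
# with urlopen(url) as resp:
#     results = json.load(resp)
# for item in results.get("response_items", []):
#     print(item.get("title"), item.get("linkToArchive"))
```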
The Linguistic Linked Open Data cloud is a collaborative effort pursued by several members of the OWLG, with the general goal of developing a Linked Open Data (sub-)cloud of linguistic resources. The diagram is inspired by the Linking Open Data cloud diagram by Richard Cyganiak and Anja Jentzsch, and the resources included are chosen according to the same criteria of openness, availability and interlinking. Although not all resources are yet available, we actively work towards this goal, and subsequent versions of this diagram will be restricted to openly available resources. Until that point, please refer to the diagram explicitly as a "draft".