Filter: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
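For example (illustrative queries demonstrating the syntax above; they do not appear on the original page):
  • climate* (wildcard: matches climate, climatology, climatic)
  • "sea level" +satellite (the exact phrase plus the term satellite)
  • genome -human (matches genome but excludes human)
  • (soil | water) +quality (quality together with either soil or water)
  • oceon~1 (terms within an edit distance of 1, e.g. ocean)
  • "ocean temperature"~2 (the two words within a slop of 2)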
Found 1,151 results
The Health and Medical Care Archive (HMCA) is the data archive of the Robert Wood Johnson Foundation (RWJF), the largest philanthropy devoted exclusively to health and health care in the United States. Operated by the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan, HMCA preserves and disseminates data collected by selected research projects funded by the Foundation and facilitates secondary analyses of the data. Our goal is to increase understanding of health and health care in the United States through secondary analysis of RWJF-supported data collections.
The EBiSC Catalogue is a collection of human iPS cells being made available to academic and commercial researchers for use in disease modelling and other forms of preclinical research. The initial collection has been generated from a wide range of donors representing specific disease backgrounds and healthy controls. As the collection grows, more isogenic control lines will become available which will add further to the collection’s appeal.
On this server you will find 127 items of primary research data from the University of Munich (LMU). Scientists and students of all LMU faculties, and of institutions that cooperate with the LMU, are invited to deposit their research data on this platform.
The mission of the GO Consortium is to develop a comprehensive, computational model of biological systems, ranging from the molecular to the organism level, across the multiplicity of species in the tree of life. The Gene Ontology (GO) knowledgebase is the world’s largest source of information on the functions of genes. This knowledge is both human-readable and machine-readable, and is a foundation for computational analysis of large-scale molecular biology and genetics experiments in biomedical research.
The National Earthquake Database (NEDB) comprises a number of separate databases that together act as the national repository for all raw seismograph data, measurements, and derived parameters arising from the Canadian National Seismograph Network (CNSN), the Yellowknife Seismological Array (YKA), previous regional telemetered networks in eastern and western Canada (ECTN, WCTN), local telemetered networks (CLTN, SLTN), the Regional Analogue Network, and the former Standard Seismograph Network (CSN). It supports the efforts of Earthquakes Canada in Canadian seismicity monitoring, global seismic monitoring, verification of the Comprehensive Nuclear-Test-Ban Treaty, and international data exchange. It also supports the Nuclear Explosion Monitoring project.
OMIM is a comprehensive, authoritative compendium of human genes and genetic phenotypes that is freely available and updated daily. OMIM is authored and edited at the McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, under the direction of Dr. Ada Hamosh. Its official home is omim.org.
Seafloor Sediments Data Collection is a collection of more than 14,000 archived marine geological samples recovered from the seafloor. The inventory includes long, stratified sediment cores, as well as rock dredges, surface grabs, and samples collected by the submersible Alvin.
DaSCH is the trusted platform and partner for open research data in the Humanities. DaSCH develops and operates a FAIR long-term repository and a generic virtual research environment for open research data in the humanities in Switzerland. We provide long-term direct access to the data, enable their continuous editing and allow for precise citation of single objects within a dataset. We ensure interoperability with tools used by the Humanities and Cultural Sciences communities and foster the use of standards. The development of our platform happens in close cooperation with these communities. We provide training and advice in the area of research data management and promote open data and the use of standards. DaSCH is the coordinating institution and representative of Switzerland in the European Research Infrastructure Consortium ‘Digital Research Infrastructure for the Arts and Humanities’ (DARIAH ERIC). Within this mandate, we actively engage in community building within Switzerland and abroad. DaSCH cooperates with national and international organizations and initiatives in order to provide services that are fit for purpose within the broader Swiss open research data landscape and that are coordinated with other institutions such as FORS. We base our actions on the values of reliability, flexibility, appreciation, curiosity, and persistence. Furthermore, DARIAH’s activities in Switzerland are coordinated by DaSCH, which acts as the DARIAH-CH Coordination Office.
EBRAINS offers one of the most comprehensive platforms for sharing brain research data, ranging in type as well as spatial and temporal scale. We provide the guidance and tools needed to overcome the hurdles associated with sharing data. The EBRAINS data curation service ensures that your dataset will be shared with maximum impact, visibility, reusability, and longevity (https://ebrains.eu/services/data-knowledge/share-data). Find data, the user interface of the EBRAINS Knowledge Graph, allows you to easily find data of interest. EBRAINS hosts a wide range of data types and models from different species. All data are well described and can be accessed immediately for further analysis.
Research data from the University of Pretoria. This data repository facilitates data publishing, sharing and collaboration of academic research, allowing UP to manage and in some cases showcase its data to the wider research community. Previously, UPSpace (https://repository.up.ac.za/) was used for both datasets and research outputs; the UP Research Data Repository is now dedicated to datasets.
The European Database of Seismogenic Faults (EDSF) was compiled in the framework of the EU Project SHARE, Work Package 3, Task 3.2. EDSF includes only faults that are deemed to be capable of generating earthquakes of magnitude equal to or larger than 5.5 and aims at ensuring a homogeneous input for use in ground-shaking hazard assessment in the Euro-Mediterranean area. Several research institutions participated in this effort with the contribution of many scientists (see the Database section for a full list). The EDSF database and website are hosted and maintained by INGV.
The Digital Archaeological Record (tDAR) is an international digital repository for the digital records of archaeological investigations. tDAR’s use, development, and maintenance are governed by Digital Antiquity, an organization dedicated to ensuring the long-term preservation of irreplaceable archaeological data and to broadening access to these data.
BExIS is the online data repository and information system of the Biodiversity Exploratories Project (BE). The BE is a German network of biodiversity-related working groups from areas such as vegetation and soil science, zoology and forestry. Up to three years after data acquisition, the data use is restricted to members of the BE. Thereafter, the data is usually publicly available (https://www.bexis.uni-jena.de/ddm/publicsearch/index).
The World Ocean Database (WOD) is a collection of scientifically quality-controlled ocean profile and plankton data that includes measurements of temperature, salinity, oxygen, phosphate, nitrate, silicate, chlorophyll, alkalinity, pH, pCO2, TCO2, Tritium, Δ13Carbon, Δ14Carbon, Δ18Oxygen, Freon, Helium, Δ3Helium, Neon, and plankton. WOD contains all data of "World Data Service Oceanography" (WDS-Oceanography).
Note: This repository is no longer available. The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation and development, by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
The World Ocean Atlas (WOA) contains objectively analyzed climatological fields of in situ temperature, salinity, oxygen, and other measured variables at standard depth levels for various compositing periods for the world ocean. Regional climatologies were created from the Atlas, providing a set of high resolution mean fields for temperature and salinity. A new version of the WOA is released in conjunction with each major update to the WOD, the largest collection of publicly available, uniformly formatted, quality controlled subsurface ocean profile data in the world.
NED is a comprehensive database of multiwavelength data for extragalactic objects, providing a systematic, ongoing fusion of information integrated from hundreds of large sky surveys and tens of thousands of research publications. The contents and services span the entire observed spectrum from gamma rays through radio frequencies. As new observations are published, they are cross-identified or statistically associated with previous data and integrated into a unified database to simplify queries and retrieval. Seamless connectivity is also provided to data in NASA astrophysics mission archives (IRSA, HEASARC, MAST), to the astrophysics literature via ADS, and to other data centers around the world.
Note: This repository is no longer available. TeachingWithData.org is a portal where faculty can find resources and ideas to reduce the challenges of bringing real data into post-secondary classes. It allows faculty to introduce and build students' quantitative reasoning abilities with readily available, user-friendly, data-driven teaching materials.
The Atlantic Canada Conservation Data Centre (ACCDC) maintains comprehensive lists of plant and animal species. The Atlantic CDC has geo-located records of species occurrences and records of extremely rare to uncommon species in the Atlantic region, including New Brunswick, Nova Scotia, Prince Edward Island, Newfoundland, and Labrador. The Atlantic CDC also maintains biological and other types of data in a variety of linked databases.
Stanford Network Analysis Platform (SNAP) is a general purpose network analysis and graph mining library. It is written in C++ and easily scales to massive networks with hundreds of millions of nodes and billions of edges. It efficiently manipulates large graphs, calculates structural properties, generates regular and random graphs, and supports attributes on nodes and edges. SNAP is also available through NodeXL, a graphical front-end that integrates network analysis into Microsoft Office and Excel. The SNAP library has been actively developed since 2004 and has grown organically out of our research on the analysis of large social and information networks. The largest network we have analyzed so far using the library is the Microsoft Instant Messenger network from 2006, with 240 million nodes and 1.3 billion edges. The datasets available on the website were mostly collected (scraped) for the purposes of our research. The website was launched in July 2009.
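As a brief illustration of the kind of analysis SNAP supports, the following is a minimal sketch using the Snap.py Python bindings; the graph sizes, printed metrics, and file name are illustrative assumptions rather than anything described in the entry above.

import snap  # Snap.py, the Python bindings of the C++ SNAP library

# Generate a random directed graph with 1,000 nodes and 5,000 edges
# (small illustrative sizes; SNAP itself scales to billions of edges).
G = snap.GenRndGnm(snap.PNGraph, 1000, 5000)

# Basic structural properties.
print("nodes:", G.GetNodes())
print("edges:", G.GetEdges())
print("average clustering coefficient:", snap.GetClustCf(G, -1))
print("fraction of nodes in largest WCC:", snap.GetMxWccSz(G))

# One of the datasets distributed on the SNAP website could be loaded
# from an edge list instead (the file name here is hypothetical):
# G = snap.LoadEdgeList(snap.PNGraph, "soc-network.txt", 0, 1)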
ARCHE (A Resource Centre for the HumanitiEs) is a service aimed at offering stable and persistent hosting as well as dissemination of digital research data and resources for the Austrian humanities community. ARCHE welcomes data from all humanities fields. ARCHE is the successor of the Language Resources Portal (LRP) and acts as Austria’s connection point to the European network of CLARIN Centres for language resources.
The HUGO Gene Nomenclature Committee (HGNC) has assigned unique gene symbols and names to over 35,000 human loci, of which around 19,000 are protein-coding. This curated online repository of HGNC-approved gene nomenclature and associated resources includes links to genomic, proteomic and phenotypic information, as well as dedicated gene family pages.