
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
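To make the operators above concrete, here is a minimal sketch (plain Python, standard library only) that prints a few illustrative query strings and shows how each would look once URL-encoded. The search terms and the q= parameter name are assumptions for illustration, not taken from this page.

```python
from urllib.parse import quote_plus

# Illustrative query strings built from the operators listed above.
queries = [
    'ocean*',                       # wildcard at the end of a keyword
    '"climate data record"',        # quoted phrase search
    'ocean + temperature',          # AND (also the default)
    'geology | geophysics',         # OR
    'data - software',              # NOT
    '(ocean | marine) + salinity',  # parentheses set priority
    'oceanografy~2',                # fuzzy term, edit distance 2
    '"sea ice"~3',                  # phrase match with a slop of 3
]

for q in queries:
    # Show how each query would look as a URL-encoded parameter value.
    print(f'{q:30} -> q={quote_plus(q)}')
```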
Found 43 result(s)
LEPR is a database of results of published experimental studies involving liquid-solid phase equilibria relevant to natural magmatic systems. TraceDs is a database of experimental studies involving trace element distribution between liquid, solid and fluid phases.
The World Ocean Database (WOD) is a collection of scientifically quality-controlled ocean profile and plankton data that includes measurements of temperature, salinity, oxygen, phosphate, nitrate, silicate, chlorophyll, alkalinity, pH, pCO2, TCO2, Tritium, δ13Carbon, δ14Carbon, δ18Oxygen, Freon, Helium, δ3Helium, Neon, and plankton. WOD contains all data of "World Data Service Oceanography" (WDS-Oceanography).
This repository is no longer available. The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation and development, by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
The Cary Institute data repository allows researchers to store, share and publish their research data, supplementary information and associated metadata. Each published item is assigned a Digital Object Identifier (DOI), which makes the data citable and sustainable. This repository is a member node of DataONE.
Climate Data Record (CDR) is a time series of measurements of sufficient length, consistency and continuity to determine climate variability and change. The fundamental CDRs include sensor data, such as calibrated radiances and brightness temperatures, that scientists have improved and quality-controlled along with the data used to calibrate them. The thematic CDRs include geophysical variables derived from the fundamental CDRs, such as sea surface temperature and sea ice concentration, and they are specific to various disciplines.
The Global Hydrology Resource Center (GHRC) provides both historical and current Earth science data, information, and products from satellite, airborne, and surface-based instruments. GHRC acquires basic data streams and produces derived products from many instruments spread across a variety of instrument platforms.
Specification Patterns is an online repository for information about property specification for finite-state verification. The intent of this repository is to collect patterns that occur commonly in the specification of concurrent and reactive systems.
Merritt is a curation repository for the preservation of and access to the digital research data of the ten-campus University of California system and external project collaborators. Merritt is supported by the University of California Curation Center (UC3) at the California Digital Library (CDL). While Merritt itself is content-agnostic, accepting digital content regardless of domain, format, or structure, it is being used for management of research data, and it forms the basis for a number of domain-specific repositories, such as the ONEShare repository for earth and environmental science and the DataShare repository for life sciences. Merritt provides persistent identifiers, storage replication, fixity audits, complete version history, a REST API, a comprehensive metadata catalog for discovery, ATOM-based syndication, and curatorially defined collections, access control rules, and data use agreements (DUAs). Merritt content upload and download may each be curatorially designated as public or restricted. Merritt DOIs are provided by UC3's EZID service, which is integrated with DataCite. All DOIs and associated metadata are automatically registered with DataCite and are harvested by Ex Libris PRIMO and the Thomson Reuters Data Citation Index (DCI) for high-level discovery. Merritt is also a member node in the DataONE network; curatorially designated data submitted to Merritt are automatically registered with DataONE for additional replication and federated discovery through the ONEMercury search/browse interface.
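Since the description above notes that Merritt DOIs and their metadata are registered with DataCite, one way to inspect such a record programmatically is through DataCite's public REST API. The sketch below is a minimal illustration, not Merritt's own interface; the DOI value is a placeholder, and the fields shown follow DataCite's JSON:API response format.

```python
import json
import urllib.request

# Placeholder DOI for illustration; substitute a real Merritt-minted DOI.
doi = "10.5072/FK2EXAMPLE"

# DataCite's public REST API returns registered DOI metadata as JSON:API.
url = f"https://api.datacite.org/dois/{doi}"

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

attrs = record["data"]["attributes"]
print("Titles:   ", attrs.get("titles"))
print("Publisher:", attrs.get("publisher"))
print("Year:     ", attrs.get("publicationYear"))
```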
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
In keeping with the open data policies of the U.S. Agency for International Development (USAID) and the Bill & Melinda Gates Foundation, the Cereal Systems Initiative for South Asia (CSISA) has launched the CSISA Data Repository to ensure public accessibility to key data sets, including crop cut data (directly observed crop yield estimates), on-station and on-farm research trial data and socioeconomic surveys. CSISA is a science-driven and impact-oriented regional initiative for increasing the productivity of cereal-based cropping systems in Bangladesh, India and Nepal, thus improving food security and farmers’ livelihoods. CSISA generates data that is of value and interest to a diverse audience of researchers, policymakers and the public. CSISA’s data repository is hosted on Dataverse, an open source web application developed at Harvard University to share, preserve, cite, explore and analyze research data. CSISA’s repository contains rich datasets, including on-station trial data from 2009–17 about crop and resource management practices for sustainable future cereal-based cropping systems. Collection of this data occurred during the long-term, on-station research trials conducted at the Indian Council of Agricultural Research – Research Complex for the Eastern Region in Bihar, India. The data include information on agronomic management for the sustainable intensification of cropping systems, mechanization, diversification, futuristic approaches to sustainable intensification, long-term effects of conservation agriculture practices on soil health and the pest spectrum. Additional trial data in the repository includes nutrient omission plot technique trials from Bihar, eastern Uttar Pradesh and Odisha, India, covering 2012–15, which help determine the indigenous nutrient-supplying ability of the soil. This data helps develop precision nutrient management approaches that would be most effective in different types of soils. CSISA’s most popular dataset thus far includes crop cut data on maize in Odisha, India and rice in Nepal. Crop cut datasets provide ground-truthed yield estimates, as well as valuable information on relevant agronomic and socioeconomic practices affecting production practices and yield. A variety of research data on wheat systems are also available from Bangladesh and India. Additional crop cut data will also be coming online soon. Cropping system-related data and socioeconomic data are in the repository, some of which are cross-listed with a Dataverse run by the International Food Policy Research Institute. The socioeconomic datasets contain baseline information that is crucial for technology targeting, as well as for assessing the adoption and performance of CSISA-supported technologies under smallholder farmers’ constrained conditions, representing the ultimate litmus test of their potential for change at scale. Other highly interesting datasets include farm composition and productive trajectory information, based on a 20-year panel dataset, and numerous wheat crop cut and maize nutrient omission trial data from across Bangladesh.
Our mission is to provide the data services, tools, and cyberinfrastructure leadership that advance earth-system science, enhance educational opportunities, and broaden participation. Unidata's main RAMADDA server contains access to a variety of datasets including the full IDD feed, Case Studies and other project data. RAMADDA is a content management system for Earth Science data. More information is available here: https://ramadda.org/?
The Hive is the University of Utah's institutional data repository. All datasets are associated with a current or former University of Utah faculty member or student. They are also all freely available under Creative Commons licenses. The repository spans disciplines and currently holds over 80 datasets.
The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is a publicly accessible earth science data repository created to curate, publicly serve (publish), and archive digital data and information from biological, chemical and biogeochemical research conducted in coastal, marine, Great Lakes and laboratory environments. The BCO-DMO repository works closely with investigators funded through the NSF OCE Division’s Biological and Chemical Sections and the Division of Polar Programs Antarctic Organisms & Ecosystems. The office provides services that span the full data life cycle, from data management planning support and DOI creation to archiving with appropriate national facilities.
Bioinformatics.org serves the scientific and educational needs of bioinformatics practitioners and the general public. We develop and maintain computational resources to facilitate worldwide communication and collaboration between people of all educational and professional levels. We provide and promote open access to the materials and methods required for, and derived from, research, development and education.
The Google Code Archive contains the data found on the Google Code Project Hosting Service, which was turned down in early 2016. This archive contains over 1.4 million projects, 1.5 million downloads, and 12.6 million issues. Google Project Hosting powered Project Hosting on Google Code and Eclipse Labs. It provided a fast, reliable, and easy open source hosting service with the following features: instant project creation on any topic; Git, Mercurial and Subversion code hosting with 2 gigabytes of storage space and download hosting support with 2 gigabytes of storage space; integrated source code browsing and code review tools to make it easy to view code, review contributions, and maintain a high-quality code base; an issue tracker and project wiki that are simple, yet flexible and powerful, and can adapt to any development process; and starring and update streams that make it easy to keep track of projects and developers that you care about.
The Harvard Dataverse is open to all scientific data from all disciplines worldwide. It includes the world's largest collection of social science research data. It is hosting data for projects, archives, researchers, journals, organizations, and institutions.
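Harvard Dataverse (like the Dataverse-based CSISA repository above) exposes the publicly documented Dataverse Search API, so published datasets can be queried without logging in. The sketch below is a minimal example under that assumption; the search term is arbitrary, and the response fields used (total_count, items, global_id) follow the Search API's documented JSON output.

```python
import json
import urllib.request
from urllib.parse import urlencode

# Query Harvard Dataverse for published datasets matching an arbitrary keyword.
params = urlencode({"q": "soil", "type": "dataset", "per_page": 5})
url = f"https://dataverse.harvard.edu/api/search?{params}"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)["data"]

print("Total matches:", data["total_count"])
for item in data["items"]:
    # global_id is the dataset's persistent identifier (usually a DOI).
    print(item["name"], "-", item.get("global_id", ""))
```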
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only a thousand cells, it solves basic problems such as feeding, mate-finding and predator avoidance. Despite being extremely well studied, a deep, principled understanding of its biology still eludes us. We are using a bottom-up approach, aimed at observing the worm's behaviour emerge from a simulation of data derived from scientific experiments carried out over the past decade. To do so we are incorporating the data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to be able to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is open source and available on GitHub.
Mountain Scholar is an open access repository service that collects, preserves, and provides access to digitized library collections and other scholarly and creative works from several academic entities within the state of Colorado. Colorado State University research data from the fall of 2022 and forward is available in Dryad; CSU legacy research data prior to fall 2022 is in Mountain Scholar.
OSGeo's mission is to support the collaborative development of open source geospatial software, in part by providing resources for projects and promoting freely available geodata. The Public Geodata Repository is a distributed repository and registry of data sources free to access, reuse, and re-distribute.
UltraViolet is part of a suite of repositories at New York University that provide a home for research materials, operated as a partnership of the Division of Libraries and NYU IT's Research and Instruction Technology. UltraViolet provides faculty, students, and researchers within our university community with a place to deposit scholarly materials for open access and long-term preservation. UltraViolet also houses some NYU Libraries collections, including proprietary data collections.
The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership based at the NASA Goddard Space Flight Center in Greenbelt, Maryland and a component of the National Space Weather Program. The CCMC provides, to the international research community, access to modern space science simulations. In addition, the CCMC supports the transition to space weather operations of modern space research models.