Search tips:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) implies priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
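The operators above can be combined in a single query string. The following is an illustrative query invented for this list, not one taken from the site:

    +"surface radiation" (ocean* | atmospher*) -model climat~1

Read left to right, it requires the exact phrase "surface radiation", accepts either the ocean* or the atmospher* wildcard, excludes records containing "model", and allows one edit of fuzziness on "climat".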
Found 102 result(s)
caNanoLab is a data sharing portal designed to facilitate information sharing in the biomedical nanotechnology research community to expedite and validate the use of nanotechnology in biomedicine. caNanoLab provides support for the annotation of nanomaterials with characterizations resulting from physico-chemical and in vitro assays and the sharing of these characterizations and associated nanotechnology protocols in a secure fashion.
<<<!!!<<< intrepidbio.com has expired. >>>!!!>>> Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data – but can’t spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time consuming manual processes, shortens workflow, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data, which is generated for a wide range of purposes from disease tracking and animal breeding to medical diagnosis and treatment.
BSRN is a project of the Radiation Panel (now the Data and Assessment Panel) from the Global Energy and Water Cycle Experiment (GEWEX) under the umbrella of the World Climate Research Programme (WCRP). It is the global baseline network for surface radiation for the Global Climate Observing System (GCOS), contributing to the Global Atmosphere Watch (GAW) and forming a cooperative network with the Network for the Detection of Atmospheric Composition Change (NDACC).
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously-published results, with strong collaborations between computational and empirical biologists to develop, test and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
The nationally recognized National Cancer Database (NCDB)—jointly sponsored by the American College of Surgeons and the American Cancer Society—is a clinical oncology database sourced from hospital registry data that are collected in more than 1,500 Commission on Cancer (CoC)-accredited facilities. NCDB data are used to analyze and track patients with malignant neoplastic diseases, their treatments, and outcomes. Data represent more than 70 percent of newly diagnosed cancer cases nationwide and more than 34 million historical records.
The OpenMadrigal project seeks to develop and support an on-line database for geospace data. The project has been led by MIT Haystack Observatory since 1980, but now has active support from Jicamarca Observatory and other community members. Madrigal is a robust, World Wide Web-based system capable of managing and serving archival and real-time data, in a variety of formats, from a wide range of ground-based instruments. Madrigal is installed at a number of sites around the world. Data at each Madrigal site is locally controlled and can be updated at any time, but shared metadata between Madrigal sites allows searching of all Madrigal sites at once from any Madrigal site. Data is local; metadata is shared.
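The "data is local; metadata is shared" arrangement described above can be illustrated with a short conceptual sketch. The class, method, and site names below are hypothetical and are not the Madrigal software or its API; the sketch only shows the idea that each site keeps its own data files while replicating every site's metadata, so a search issued anywhere covers the whole network.

    # Conceptual sketch of "data is local; metadata is shared" federation.
    # Class, method, and site names are hypothetical, not the Madrigal API.
    class FederatedSite:
        def __init__(self, name):
            self.name = name
            self.local_data = {}        # experiment id -> data files (never leave this site)
            self.shared_metadata = []   # metadata entries replicated from all sites

        def add_experiment(self, exp_id, instrument, data):
            self.local_data[exp_id] = data                        # stays local
            self.shared_metadata.append(                          # gets shared
                {"exp_id": exp_id, "instrument": instrument, "site": self.name})

        def sync_metadata_from(self, other):
            # Only metadata crosses site boundaries, never the data itself.
            for entry in other.shared_metadata:
                if entry not in self.shared_metadata:
                    self.shared_metadata.append(entry)

        def search(self, instrument):
            # A query at this site sees every site's holdings via shared metadata.
            return [e for e in self.shared_metadata if e["instrument"] == instrument]

    haystack = FederatedSite("Haystack")
    jicamarca = FederatedSite("Jicamarca")
    haystack.add_experiment("mlh-2024-001", "incoherent scatter radar", b"...")
    jicamarca.add_experiment("jro-2024-007", "incoherent scatter radar", b"...")
    haystack.sync_metadata_from(jicamarca)
    print(haystack.search("incoherent scatter radar"))  # lists both experiments; the Jicamarca data stays at Jicamarca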
Sharing and preserving data are central to protecting the integrity of science. DataHub, a Research Computing endeavor, provides tools and services to meet scientific data challenges at Pacific Northwest National Laboratory (PNNL). DataHub helps researchers address the full data life cycle for their institutional projects and provides a path to creating findable, accessible, interoperable, and reusable (FAIR) data products. Although open science data is a crucial focus of DataHub’s core services, we are interested in working with evidence-based data throughout the PNNL research community.
Museum explorers travel to ocean depths, the peaks of the Andes, Africa's Rift Valley, the rainforests of South America, and the deserts of Central Asia. Perhaps even to a field site or research institution in your own state, territory or country. In each area, researchers collect specimens: fossils, minerals, and rocks, plants and animals, tools and artworks. Collections care professionals have meticulously preserved, labeled, cataloged, and organized items of this kind for more than 150 years. Taken together, the NMNH collections form the largest, most comprehensive natural history collection in the world. By comparing items gathered in different eras and regions, scientists learn how our world has varied across time and space.
<<<!!!<<< This repository is no longer available. >>>!!!>>> The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation and development, by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
This Animal Quantitative Trait Loci (QTL) database (Animal QTLdb) is designed to house all publicly available QTL and trait mapping data (i.e. trait and genome location association data; collectively called "QTL data" on this site) on livestock animal species for easily locating and making comparisons within and between species. New database tools are continually added to align the QTL and association data to other types of genome information, such as annotated genes, RH / SNP markers, and human genome maps. Besides the QTL data from the species listed below, the QTLdb is open to housing QTL/association data from other animal species where feasible. Note that JAS, along with other journals, now requires that new QTL/association data be entered into a QTL database as part of their publication requirements.
The PAIN Repository is a recently funded NIH initiative, which has two components: an archive for already collected imaging data (Archived Repository), and a repository for structural and functional brain images and metadata acquired prospectively using standardized acquisition parameters (Standardized Repository) in healthy control subjects and patients with different types of chronic pain. The PAIN Repository provides the infrastructure for storage of standardized resting state functional, diffusion tensor imaging and structural brain imaging data and associated biological, physiological and behavioral metadata from multiple scanning sites, and provides tools to facilitate analysis of the resulting comprehensive data sets.
The Infrared Space Observatory (ISO) is designed to provide detailed infrared properties of selected Galactic and extragalactic sources. The sensitivity of the telescopic system is about one thousand times superior to that of the Infrared Astronomical Satellite (IRAS), since the ISO telescope enables integration of infrared flux from a source for several hours. Density waves in the interstellar medium, its role in star formation, the giant planets, asteroids, and comets of the solar system are among the objects of investigation. ISO was operated as an observatory with the majority of its observing time being distributed to the general astronomical community. One of the consequences of this is that the data set is not homogeneous, as would be expected from a survey. The observational data underwent sophisticated data processing, including validation and accuracy analysis. In total, the ISO Data Archive contains about 30,000 standard observations, 120,000 parallel, serendipity and calibration observations and 17,000 engineering measurements. In addition to the observational data products, the archive also contains satellite data, documentation, historical data, and externally derived products, for a total of more than 400 GBytes stored on magnetic disks. The ISO Data Archive is constantly being improved both in contents and functionality throughout the Active Archive Phase, ending in December 2006.
The VDC is a public, web-based search engine for accessing worldwide earthquake strong ground motion data. While the primary focus of the VDC is on data of engineering interest, it is also an interactive resource for scientific research and government and emergency response professionals.
The National Science Digital Library provides high quality online educational resources for teaching and learning, with current emphasis on the sciences, technology, engineering, and mathematics (STEM) disciplines, both formal and informal, institutional and individual, in local, state, national, and international educational settings. The NSDL collection contains structured descriptive information (metadata) about web-based educational resources held on other sites by their providers. These providers have contributed this metadata to NSDL for organized search and open access to educational resources via this website and its services.
The UC San Diego Library Digital Collections website gathers two categories of content managed by the Library: library collections (including digitized versions of selected collections covering topics such as art, film, music, history and anthropology) and research data collections (including research data generated by UC San Diego researchers).
UCLA Library is adopting Dataverse, the open source web application designed for sharing, preserving and using research data. UCLA Dataverse will allow data, text, software, scripts, data visualizations, etc., created from research projects at UCLA to be made publicly available, widely discoverable, linkable, and, ultimately, reusable.
The Cooperative Association for Internet Data Analysis (CAIDA) is a collaborative undertaking among organizations in the commercial, government, and research sectors aimed at promoting greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It is an independent analysis and research group with a particular focus on: collection, curation, analysis, visualization, and dissemination of the best available Internet data sets; providing macroscopic insight into the behavior of Internet infrastructure worldwide; improving the integrity of the field of Internet science; improving the integrity of operational Internet measurement and management; and informing science, technology, and communications public policies.
>>>!!!<<< On June 1, 2020, the Academic Seismic Portal repositories at UTIG were merged into a single collection hosted at Lamont-Doherty Earth Observatory. Content here was removed July 1, 2020. Visit the Academic Seismic Portal @LDEO! https://www.marine-geo.org/collections/#!/collection/Seismic#summary (https://www.re3data.org/repository/r3d100010644) >>>!!!<<<
The Cancer Genome Atlas (TCGA) Data Portal provides a platform for researchers to search, download, and analyze data sets generated by TCGA. It contains clinical information, genomic characterization data, and high level sequence analysis of the tumor genomes. The Data Coordinating Center (DCC) is the central provider of TCGA data. The DCC standardizes data formats and validates submitted data.
The Precipitation Processing System (PPS) evolved from the Tropical Rainfall Measuring Mission (TRMM) Science Data and Information System (TSDIS). The purpose of the PPS is to process, analyze and archive data from the Global Precipitation Measurement (GPM) mission, partner satellites and the TRMM mission. The PPS also supports TRMM by providing validation products from TRMM ground radar sites. All GPM, TRMM and partner public data products are available to the science community and the general public from the TRMM/GPM FTP Data Archive. Please note that you need to register to be able to access these data. Registered users can also search for GPM, partner and TRMM data, order custom subsets, and set up subscriptions using our PPS Data Products Ordering Interface (STORM).
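As a rough illustration of the FTP route mentioned above, a registered user could fetch a single product with Python's standard ftplib. The host name, directory, file name, and credentials below are placeholders, not actual PPS values; consult the PPS registration instructions for the real ones.

    # Minimal sketch: download one product from an FTP data archive.
    # Host, path, file name, and credentials are placeholders, not real PPS values.
    from ftplib import FTP

    HOST = "ftp.example.org"                    # placeholder for the archive host
    USER = "registered.user@example.org"        # placeholder registered account
    PASSWORD = "credential-from-registration"   # placeholder credential
    REMOTE_DIR = "/data/2024/01/01/"            # placeholder product directory
    FILENAME = "example_product.HDF5"           # placeholder product file

    with FTP(HOST) as ftp:
        ftp.login(user=USER, passwd=PASSWORD)
        ftp.cwd(REMOTE_DIR)
        with open(FILENAME, "wb") as fh:
            ftp.retrbinary(f"RETR {FILENAME}", fh.write)
    print(f"Downloaded {FILENAME}")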
GLOBE (Global Collaboration Engine) is an online collaborative environment that enables land change researchers to share, compare and integrate local and regional studies with global data to assess the global relevance of their work.
Our Frozen Zoo® is the largest and most diverse collection of its kind in the world. It contains over 10,000 living cell cultures, oocytes, sperm, and embryos representing nearly 1,000 taxa, including one extinct species, the po’ouli. Located at the Beckman Center for Conservation Research, the collection is also duplicated for safekeeping at a second site. The irreplaceable living cell lines, gametes, and embryos stored in the Frozen Zoo® provide an invaluable resource for conservation, assisted reproduction, evolutionary biology, and wildlife medicine.