  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
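For illustration only, the minimal Python sketch below composes a few queries with these operators and URL-encodes them; the example terms and the search URL pattern are assumptions, not part of the documented syntax above.

from urllib.parse import quote_plus

# Example query strings built from the operators documented above
# (the terms and the URL pattern are illustrative assumptions).
queries = [
    'genom*',                        # wildcard: genome, genomics, ...
    '"functional genomics"',         # exact phrase
    'glacier + climate',             # AND search (also the default)
    'proteomics | metabolomics',     # OR search
    'sequencing - clinical',         # NOT operation
    '(pollen | diatoms) + climate',  # parentheses control precedence
    'genomcs~1',                     # fuzzy term, edit distance 1
    '"gene set"~2',                  # phrase with slop 2
]

for q in queries:
    # Assumed keyword-search URL pattern; adjust to the actual search form.
    print('https://www.re3data.org/search?query=' + quote_plus(q))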
Found 73 result(s)
The Agricultural and Environmental Data Archive (AEDA) is the direct result of a project managed by the Freshwater Biological Association in partnership with the Centre for e-Research at King's College London, and funded by the Department for Environment, Food & Rural Affairs (Defra). This project ran from January 2011 until December 2014 and was called the DTC Archive Project, because it was initially related to the Demonstration Test Catchments Platform developed by Defra. The archive was also designed to hold data from the GHG R&D Platform (www.ghgplatform.org.uk). After the DTC Archive Project was completed, the finished archive was renamed AEDA to reflect its broader remit to archive data from any and all agricultural and environmental research activities.
GeneWeaver combines cross-species data and gene entity integration, scalable hierarchical analysis of user data with a community-built and curated data archive of gene sets and gene networks, and tools for data-driven comparison of user-defined biological, behavioral and disease concepts. GeneWeaver allows users to integrate gene sets across species, tissue and experimental platform. It differs from conventional gene set over-representation analysis tools in that it allows users to evaluate intersections among all combinations of a collection of gene sets, including, but not limited to, annotations to controlled vocabularies. There are numerous applications of this approach. Sets can be stored, shared and compared privately, among user-defined groups of investigators, and across all users.
The National Genomic Resources Repository has been established as an institutional framework for methodical and centralized efforts to collect, generate, conserve and distribute genomic resources for agricultural research.
The National Genomics Data Center (NGDC), part of the China National Center for Bioinformation (CNCB), advances the life and health sciences by providing open access to a suite of resources, with the aim of translating big data into big discoveries and supporting worldwide activities in both academia and industry.
Harmonized, indexed, searchable large-scale human functional genomics (FG) data collection with extensive metadata. Provides a scalable, unified way to easily access massive FG and annotation data collections curated from large-scale genomic studies. Direct integration (API) with custom and high-throughput genetic and genomic analysis workflows.
QSAR DataBank (QsarDB) is a repository for (Quantitative) Structure-Activity Relationships ((Q)SAR) data and models. It also provides open, domain-specific digital data exchange standards and associated tools that enable research groups, project teams and institutions to share and represent predictive in silico models.
CryptoDB is an integrated genomic and functional genomic database for the parasite Cryptosporidium and other related genera. CryptoDB integrates whole genome sequence and annotation along with experimental data and environmental isolate sequences provided by community researchers. The database includes supplemental bioinformatics analyses and a web interface for data-mining.
The Electron Microscopy Data Bank (EMDB) is a public repository for electron microscopy density maps of macromolecular complexes and subcellular structures. It covers a variety of techniques, including single-particle analysis, electron tomography, and electron (2D) crystallography.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
The MDR harvests metadata on data objects from a variety of sources within clinical research (e.g. trial registries, data repositories) and brings it together in a single searchable portal. The metadata is concerned with the discoverability, access and provenance of the data objects (which, because the data may be sensitive, will often be available under a controlled access regime). At the moment (01/2021) the MDR obtains study data from ClinicalTrials.gov (CTG), the European Clinical Trials Registry (EUCTR), ISRCTN, and the WHO ICTRP.
DataON is Korea's National Research Data Platform. It provides integrated metadata search across KISTI's research data and domestic and international research data, with links to the raw data. DataON allows users (researchers, policy makers, etc.) to easily search for various types of research data in all scientific fields; register research results so that research data can be posted and cited; build a community among researchers and enable collaborative research; and use a data analysis environment that allows one-stop analysis of discovered research data.
Provides quick, uncluttered access to information about Heliophysics research data that have been described with SPASE resource descriptions.
The Sequence Read Archive stores raw sequencing data from sequencing platforms such as the Roche 454 GS System, the Illumina Genome Analyzer, the Applied Biosystems SOLiD System, the Helicos HeliScope, and Complete Genomics. It archives the sequencing data associated with RNA-Seq, ChIP-Seq, genomic and transcriptomic assemblies, and 16S ribosomal RNA data.
PHI-base is a web-accessible database that catalogues experimentally verified pathogenicity, virulence and effector genes from fungal, oomycete and bacterial pathogens, which infect animal, plant, fungal and insect hosts. PHI-base is therefore an invaluable resource in the discovery of genes in medically and agronomically important pathogens, which may be potential targets for chemical intervention. In collaboration with the FRAC team, PHI-base also includes antifungal compounds and their target genes.
The CALIPSO satellite provides new insight into the role that clouds and atmospheric aerosols play in regulating Earth's weather, climate, and air quality. CALIPSO combines an active lidar instrument with passive infrared and visible imagers to probe the vertical structure and properties of thin clouds and aerosols over the globe. CALIPSO was launched on April 28, 2006, with the CloudSat satellite. CALIPSO and CloudSat are highly complementary and together provide new, never-before-seen 3D perspectives of how clouds and aerosols form, evolve, and affect weather and climate. CALIPSO and CloudSat fly in formation with three other satellites in the A-train constellation to enable an even greater understanding of our climate system.
Research Data Australia is the data discovery service of the Australian Research Data Commons (ARDC). The ARDC is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy Program. Research Data Australia helps you find, access, and reuse data for research from over one hundred Australian research organisations, government agencies, and cultural institutions. We do not store the data itself here but provide descriptions of, and links to, the data from our data publishing partners.
The Immunology Database and Analysis Portal (ImmPort) archives clinical study and trial data generated by NIAID/DAIT-funded investigators. Data types housed in ImmPort include subject assessments, i.e., medical history, concomitant medications and adverse events, as well as mechanistic assay data such as flow cytometry, ELISA, ELISPOT, etc. You won't need an ImmPort account to search for compelling studies or peruse study demographics, interventions and mechanistic assays. But why stop there? What you really want to do is download the study and look at each experiment in detail, including individual ELISA results and flow cytometry files. Perhaps you want to take those flow cytometry files for a test drive using FLOCK in the ImmPort flow cytometry module. To download all that interesting data you will need to register for ImmPort access.
The National Forest Inventory (NFI) is a collaborative effort involving federal, provincial and territorial government agencies. They monitor a network of twenty thousand sampling points across Canada on an ongoing basis to provide information on the state of Canada's forests and a continuous record of forest change. They provide data and products to forest science researchers, forest policy decision-makers and interested stakeholders.
The British Ocean Sediment Core Research Facility (BOSCORF) is based at the Southampton site of the National Oceanography Centre and is Britain’s national deep-sea core repository. BOSCORF is responsible for long-term storage and curation of sediment cores collected through UKRI-NERC research programmes. We promote secondary usage of sediment core samples and analytical data relating to the sample collection.
Welcome to the District of North Vancouver's Open Data portal. Here you have access to many datasets which you can use in your printed products or online services, completely free of charge. Our datasets are updated automatically and refreshed each week. Every dataset comes with its own metadata providing valuable information on the origin, history, accuracy and completeness of the dataset.
ODC-TBI is a community platform to share data, publish data with a DOI, and get citations, advancing traumatic brain injury research through the sharing of data from basic and clinical research.
The WDC is concerned with the collection, management, distribution and utilization of data from Chinese provinces, autonomous regions and counties, including: Resource data: management, distribution and utilization of land, water, climate, forest, grassland, minerals, energy, etc. Environmental data: pollution, environmental quality, change, natural disasters, soil erosion, etc. Biological resources: animals, plants, wildlife. Social economy: agriculture, industry, transport, commerce, infrastructure, etc. Population and labor. Geographic background data on scales of 1:4M, 1:1M, 1:(1/2)M, 1:2500, etc.
Neotoma is a multiproxy paleoecological database that covers the Pliocene-Quaternary, including modern microfossil samples. The database is an international collaborative effort among individuals from 19 institutions, representing multiple constituent databases. There are over 20 data-types within the Neotoma Paleoecological Database, including pollen microfossils, plant macrofossils, vertebrate fauna, diatoms, charcoal, biomarkers, ostracodes, physical sedimentology and water chemistry. Neotoma provides an underlying cyberinfrastructure that enables the development of common software tools for data ingest, discovery, display, analysis, and distribution, while giving domain scientists control over critical taxonomic and other data quality issues.
The CDAWeb data system enables improved display and coordinated analysis of multi-instrument, multi-mission databases of the kind whose analysis is critical to meeting the science objectives of the ISTP program and the InterAgency Consultative Group (IACG) Solar-Terrestrial Science Initiative. The system combines the client-server user interface technology of the World Wide Web with a powerful set of customized IDL routines to leverage the data format standards (CDF) and guidelines for implementation adopted by ISTP and the IACG. The system can be used with any collection of data granules following the extended set of ISTP/IACG standards. CDAWeb is being used both to support coordinated analysis of public and proprietary data and to provide better functional access to specific public data such as the ISTP-precursor CDAW 9 database that is formatted to the ISTP/IACG standards. Many data sets are available through the Coordinated Data Analysis Web (CDAWeb) service and the data coverage continues to grow. These are largely, but not exclusively, magnetospheric data and nearby solar wind data of the ISTP era (1992-present) at time resolutions of approximately a minute. The CDAWeb service provides graphical browsing, data subsetting, screen listings, file creation and downloads (ASCII or CDF). It covers public data from current (1992-present) space physics missions (including Cluster, IMAGE, ISTP, FAST, IMP-8, SAMPEX and others), public data from missions before 1992 (including IMP-8, ISIS 1/2, Alouette 2, Hawkeye and others), and public data from all current and past space physics missions. CDAWeb is part of the "Space Physics Data Facility" (https://www.re3data.org/repository/r3d100010168).
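As a minimal sketch of programmatic access to the service described above, the following assumes the third-party cdasws Python client (pip install cdasws) for the CDAS web services behind CDAWeb; the dataset and variable identifiers are illustrative assumptions, and the exact return structure can vary with client version.

from cdasws import CdasWs  # assumed client for the CDAS web services

cdas = CdasWs()

# Illustrative dataset/variable IDs; look up real ones in the CDAWeb catalog.
dataset = 'AC_H0_MFI'
variables = ['Magnitude']

# Request a short time span; get_data is expected to return (status, data).
status, data = cdas.get_data(dataset, variables,
                             '2009-06-01T00:00:00Z',
                             '2009-06-01T01:00:00Z')

print(status)
if data is not None:
    print(type(data))  # dict-like or xarray structure, depending on version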