Filter

Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
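For example, a query combining several of these operators (the search terms themselves are only illustrative) might look like:
  ("climate data" | ocean*) -proprietary
  genome~1 + "open access"~2
The first matches records containing the exact phrase "climate data" or any keyword starting with "ocean" while excluding records mentioning "proprietary"; the second allows one character of fuzziness on "genome" and a slop of two words within the phrase "open access".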
Found 109 result(s)
TPA is a database that contains sequences built from the existing primary sequence data in GenBank. TPA records are retrieved through the Nucleotide Database and feature information on the sequence, how it was cataloged, and the proper way to cite the sequence information.
Europeana is the trusted source of cultural heritage brought to you by the Europeana Foundation and a large number of European cultural institutions, projects and partners. It is a real piece of teamwork. Ideas and inspiration can be found within the millions of items on Europeana. These objects include: Images (paintings, drawings, maps, photos and pictures of museum objects); Texts (books, newspapers, letters, diaries and archival papers); Sounds (music and spoken word from cylinders, tapes, discs and radio broadcasts); and Videos (films, newsreels and TV broadcasts). All texts are CC BY-SA; images and media are licensed individually.
IEDA2 is currently undergoing a website reconstruction and will be back soon. IEDA is a community-based facility that serves to support, sustain, and advance the geosciences by providing data services for observational geoscience data from the Ocean, Earth, and Polar Sciences. IEDA welcomes and encourages investigators to contribute their data to the IEDA collections so that the data can be discovered and reused by a diverse community now and in the future. The IEDA collections are: EarthChem, Geochron, the System for Earth Sample Registration (SESAR), the Marine Geoscience Data System (MGDS), and the USAP Data Center. A meta-search across the collections is provided on the portal through the IEDA Data Browser (http://www.iedadata.org/databrowser).
The Progenetix database provides an overview of copy number abnormalities in human cancer from currently 32548 array and chromosomal Comparative Genomic Hybridization (CGH) experiments, as well as Whole Genome or Whole Exome Sequencing (WGS, WES) studies. The cancer profile data in Progenetix was curated from 1031 articles and represents 366 different cancer types, according to the International Classification of Diseases for Oncology (ICD-O).
IRSA is chartered to curate the calibrated science products from NASA's infrared and sub-millimeter missions, including five major large-area/all-sky surveys. IRSA exploits a reusable architecture to deploy cost-effective archives for customers, including the Spitzer Space Telescope, the 2MASS and IRAS all-sky surveys, and multi-mission datasets such as COSMOS, WISE, and Planck.
The Bavarian Archive for Speech Signals (BAS) is a public institution hosted by the University of Munich. This institution was founded with the aim of making corpora of current spoken German available to both the basic research and the speech technology communities via a maximally comprehensive digital speech-signal database. The speech material will be structured in a manner allowing flexible and precise access, with acoustic-phonetic and linguistic-phonetic evaluation forming an integral part of it.
OpenKIM is an online suite of open source tools for molecular simulation of materials. These tools help to make molecular simulation more accessible and more reliable. Within OpenKIM, you will find an online resource for standardized testing and long-term warehousing of interatomic models and data, and an application programming interface (API) standard for coupling atomistic simulation codes and interatomic potential subroutines.
The Solar Dynamics Observatory (SDO) studies the solar atmosphere on small scales of space and time, in multiple wavelengths. This is a searchable database of all SDO data, including citizen scientist images, space weather and near real time data, and helioseismology data.
The GHO data repository is WHO's gateway to health-related statistics for its 194 Member States. It provides access to over 1000 indicators on priority health topics including mortality and burden of disease, the Millennium Development Goals (child nutrition, child health, maternal and reproductive health, immunization, HIV/AIDS, tuberculosis, malaria, neglected diseases, water and sanitation), noncommunicable diseases and risk factors, epidemic-prone diseases, health systems, environmental health, violence and injuries, and equity, among others. In addition, the GHO provides online access to WHO's annual summary of health-related data for its Member States: the World Health Statistics.
RepOD is a general-purpose repository for open research data, offering all members of the academic community in Poland the possibility to deposit their work. It is intended for scientific data from all disciplines of knowledge and in all formats. The purpose of RepOD is to create a place where research data can be safely stored and openly shared with others.
The Met Office is the UK's National Weather Service. We have a long history of weather forecasting and have been working in the area of climate change for more than two decades. As a world leader in providing weather and climate services, we employ more than 1,800 people at 60 locations throughout the world. We are recognised as one of the world's most accurate forecasters, using more than 10 million weather observations a day, an advanced atmospheric model and a high-performance supercomputer to create 3,000 tailored forecasts and briefings a day. These are delivered to a huge range of customers, from the Government to businesses, the general public, the armed forces, and other organisations.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. As a result, we end up with one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
The CosmoSim database provides results from cosmological simulations performed within different projects: the MultiDark and Bolshoi projects, and the CLUES project. The CosmoSim webpage provides access to several cosmological simulations, with a separate database for each simulation. A simulations overview is available at https://www.cosmosim.org/cms/simulations/simulations-overview/ . CosmoSim is a contribution to the German Astrophysical Virtual Observatory.
This is CSDB version 1, merged from the Bacterial (BCSDB) and Plant&Fungal (PFCSDB) Carbohydrate Structure Databases. The database aims to provide structural, bibliographic, taxonomic, NMR-spectroscopic and other information on glycan and glycoconjugate structures of prokaryotic, plant and fungal origin. The key points of this service are:
  • High coverage. The coverage for bacteria (up to 2016) and archaea (up to 2016) is above 80%. Similar coverage for plants and fungi is expected in the future; the database is close to complete up to 1998 for plants and up to 2006 for fungi.
  • Data quality. High data quality is achieved by manual curation using original publications, assisted by multiple automatic procedures for error control. Errors present in publications are reported and corrected when possible. Data from other databases are verified on import.
  • Detailed annotations. Structural data are supplied with an extended bibliography, assigned NMR spectra, taxon identification including strains and serogroups, and other information if available in the original publication.
  • Services. CSDB serves as a platform for a number of computational services tuned for glycobiology, such as NMR simulation, automated structure elucidation, taxon clustering, 3D molecular modeling, and statistical processing of data.
  • Integration. CSDB is cross-linked to other glycoinformatics projects and NCBI databases. The data are exportable in various formats, including the most widespread encoding schemes and records using the GlycoRDF ontology.
  • Free web access. Users can access the database for free via its web interface (see Help).
The main source of data is retrospective literature analysis. About 20% of the data were imported from CCSD (CarbBank, University of Georgia, Athens; structures published before 1996) with subsequent manual curation and approval. The current coverage is displayed in red at the top of the left menu. The time lag between the publication of new data and their deposition into CSDB is about one year. In the scope of bacterial carbohydrates, CSDB covers nearly all structures of this origin published up to 2016. "Prokaryotic, plant and fungal" means that a glycan was found in organism(s) belonging to these taxonomic domains or was obtained by modification of glycans found in them. "Carbohydrate" means a structure composed of any residues linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds in which at least one residue is a sugar or its derivative.
The Paris Astronomical Data Centre aims to provide VO access to its data collections, to participate in international standards development, and to implement VO-compliant simulation codes, data visualization and analysis software. The centre hosts high-level permanent activities for tools and data distribution in the form of reference services. These sustainable services are recognised at the national level as CNRS-labelled services. The various activities are organised as portals whose functions are to provide visibility and information on the projects and to encourage collaboration.
The ZFMK Biodiversity Data Center is aimed at hosting, archiving, publishing and distributing data from biodiversity research and zoological collections. The Biodiversity Data Center handles and curates data on:
  • the specimens of the institute's collection, including provenance, distribution, habitat and taxonomic data;
  • observations, recordings and measurements from field research, monitoring and ecological inventories;
  • morphological measurements and descriptions of specimens;
  • genetic barcode libraries; and
  • genetic and molecular research data associated with specimens or environmental samples.
For this purpose, suitable software and hardware systems are operated and the required infrastructure is further developed. Core components of the software architecture are the DiversityWorkbench suite for managing all collection-related information, the digital asset management system easyDB for multimedia assets, and the description database Morph·D·Base for morphological data sets and character matrices.
With more than 60 years of experience, Toronto and Region Conservation Authority (TRCA) is one of 36 Conservation Authorities in Ontario, created to safeguard and enhance the health and well-being of watershed communities through the protection and restoration of the natural environment and the ecological services the environment provides. At TRCA, we are working towards providing free and open access to our data and information, in both accessible and machine readable formats, to ensure it’s available and easy to consume. Improving access to TRCA’s data and information will provide transparency into the decision making process and will improve accountability while increasing the public’s understanding and engagement with the organization.
Remote Sensing Systems is a world leader in processing and analyzing microwave data from satellite microwave sensors. We specialize in algorithm development, instrument calibration, ocean product development, and product validation. We have worked with more than 30 satellite microwave radiometer, sounder, and scatterometer instruments over the past 40 years. Currently, we operationally produce satellite retrievals for SSMIS, AMSR2, WindSat, and ASCAT. The geophysical retrievals obtained from these sensors are made available in near-real-time (NRT) to the global scientific community and general public via FTP and this web site.
Note: The demand for high-value environmental data and information has dramatically increased in recent years. To improve our ability to meet that demand, NOAA's former three data centers (the National Climatic Data Center, the National Geophysical Data Center, and the National Oceanographic Data Center, which includes the National Coastal Data Development Center) have merged into the National Centers for Environmental Information (NCEI). The National Oceanographic Data Center includes the National Coastal Data Development Center (NCDDC) and the NOAA Central Library, which are integrated to provide access to the world's most comprehensive sources of marine environmental data and information. NODC maintains and updates a national ocean archive with environmental data acquired from domestic and foreign activities and produces products and research from these data which help monitor global environmental changes. These data include physical, biological and chemical measurements derived from in situ oceanographic observations, satellite remote sensing of the oceans, and ocean model simulations.
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10^10 particles to follow the dark matter distribution in a cubic region 500 h^-1 Mpc on a side, and has a spatial resolution of 5 h^-1 kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10^7 galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and haloes, as well as to the spatial and temporal relations between them. Information is output in a table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with a volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
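As a rough sketch of how such an SQL interface might be queried programmatically, the Python snippet below submits a query over HTTP. The endpoint URL, the credentials, and the table and column names are hypothetical placeholders chosen for illustration, not the actual Millennium database schema; the real names are given in the database documentation.

# Illustrative sketch only: the URL, table and column names below are
# hypothetical placeholders, not the verified Millennium database schema.
import requests

QUERY = """
SELECT TOP 10 galaxyId, snapnum, stellarMass, mvir
FROM Galaxies                      -- placeholder galaxy-catalogue table
WHERE snapnum = 63                 -- hypothetical z = 0 snapshot number
  AND stellarMass > 1.0            -- units as defined by the catalogue
ORDER BY stellarMass DESC
"""

response = requests.get(
    "https://example.org/millennium/query",    # placeholder endpoint
    params={"action": "doQuery", "SQL": QUERY},
    auth=("username", "password"),             # accounts are granted on request
    timeout=60,
)
response.raise_for_status()
print(response.text)  # the server returns results as a table (e.g. CSV or VOTable)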
The Stanford Digital Repository (SDR) is Stanford Libraries' digital preservation system. The core repository provides "back-office" preservation services (data replication, auditing, media migration, and retrieval) in a secure, sustainable, scalable stewardship environment. Scholars and researchers across disciplines at Stanford use SDR repository services to provide ongoing, persistent, reliable access to their research outputs.
The United States Census Bureau (officially the Bureau of the Census, as defined in Title 13 U.S.C. § 11) is the government agency that is responsible for the United States Census. It also gathers other national demographic and economic data. As a part of the United States Department of Commerce, the Census Bureau serves as a leading source of data about America's people and economy. The most visible role of the Census Bureau is to perform the official decennial (every 10 years) count of people living in the U.S. The most important result is the reallocation of the number of seats each state is allowed in the House of Representatives, but the results also affect a range of government programs received by each state. The agency director is a political appointee selected by the President of the United States.
Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets.
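As a sketch of programmatic access, the catalogue behind Data.gov exposes a standard CKAN search API; the snippet below assumes the usual CKAN package_search endpoint at catalog.data.gov (the query term and number of rows are arbitrary examples) and prints the titles of matching datasets.

# Hedged sketch: query Data.gov's dataset catalogue via the standard CKAN
# package_search action; the search term is an arbitrary example.
import requests

resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "water quality", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    # each CKAN package record carries a title, a description and its resources
    print(dataset["title"])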
The Marine Geoscience Data System (MGDS) is a trusted data repository that provides free public access to a curated collection of marine geophysical data products and complementary data related to understanding the formation and evolution of the seafloor and sub-seafloor. Developed and operated by domain scientists and technical specialists with deep knowledge about the creation, analysis and scientific interpretation of marine geoscience data, the system makes available a digital library of data files described by a rich curated metadata catalog. MGDS provides tools and services for the discovery and download of data collected throughout the global oceans. Primary data types are geophysical field data including active source seismic data, potential field, bathymetry, sidescan sonar, near-bottom imagery, and other seafloor sensor data, as well as a diverse array of processed data and interpreted data products (e.g. seismic interpretations, microseismicity catalogs, geologic maps and interpretations, photomosaics and visualizations). Our data resources support scientists working broadly on solid earth science problems ranging from mid-ocean ridge, subduction zone and hotspot processes, to geohazards, continental margin evolution, and sediment transport at glaciated and unglaciated margins.