Filter
  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (example queries are sketched below)
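These operators can be combined in a single query string. The sketch below is purely illustrative: only the operators come from the help text above, while the search terms themselves are invented examples.

```python
# Illustrative query strings using the operators described above.
# Only the operators come from the help text; the terms are invented examples.
example_queries = [
    "earthqu*",                      # trailing * : wildcard (earthquake, earthquakes, ...)
    '"flow cytometry"',              # quotes : exact phrase
    "genome + sequencing",           # + : AND (also the default between terms)
    "seismology | geomagnetism",     # | : OR
    "genomics - cancer",             # - : NOT
    "(arctic | antarctic) + data",   # ( ) : grouping / priority
    "seismolgy~2",                   # ~N after a word : edit distance (fuzziness) of 2
    '"earthquake data"~3',           # ~N after a phrase : slop of 3
]

for query in example_queries:
    print(query)
```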
Found 10 results
The Southern California Earthquake Data Center (SCEDC), operated at the Caltech Seismological Laboratory, is the primary archive of seismological data for southern California. The 1932-to-present Caltech/USGS catalog maintained by the SCEDC is the most complete archive of seismic data for any region in the United States. Its mission is to maintain an easily accessible, well-organized, high-quality, searchable archive for research in seismology and earthquake engineering.
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
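Both centers distribute seismic waveform and event data over the web. As a hedged illustration only: assuming the NCEDC exposes standard FDSN web services (ObsPy's FDSN client lists "NCEDC" among its known providers), fetching an hour of waveform data might look like the sketch below. The network, station, location, and channel codes are illustrative placeholders, not a statement of what the archive holds.

```python
# Minimal sketch: requesting waveform data from the NCEDC, assuming it exposes
# standard FDSN web services (ObsPy includes an "NCEDC" provider shortcut).
# The network/station/channel codes below are illustrative placeholders.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("NCEDC")

start = UTCDateTime("2020-01-01T00:00:00")
end = start + 3600  # one hour of data

# "BK.CMB.00.BHZ" is used here only as an example channel descriptor.
stream = client.get_waveforms(network="BK", station="CMB",
                              location="00", channel="BHZ",
                              starttime=start, endtime=end)
print(stream)
```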
mzCloud is an extensively curated database of high-resolution tandem mass spectra that are arranged into spectral trees. MS/MS and multi-stage MSn spectra were acquired at various collision energies, precursor m/z, and isolation widths using collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD). Each raw mass spectrum was filtered and recalibrated, giving rise to additional filtered and recalibrated spectral trees that are fully searchable. Besides the experimental and processed data, each database record contains the compound name with synonyms, the chemical structure, computationally and manually annotated fragments (peaks), identified adducts and multiply charged ions, molecular formulas, predicted precursor structures, detailed experimental information, peak accuracies, mass resolution, InChI, InChIKey, and other identifiers. mzCloud is a fully searchable library that allows spectra searches, tree searches, structure and substructure searches, monoisotopic mass searches, peak (m/z) searches, precursor searches, and name searches. mzCloud is free and available for public use online.
The Department of Energy (DOE) Joint Genome Institute (JGI) is a national user facility with massive-scale DNA sequencing and analysis capabilities dedicated to advancing genomics for bioenergy and environmental applications. Beyond generating tens of trillions of DNA bases annually, the Institute develops and maintains data management systems and specialized analytical capabilities to manage and interpret complex genomic data sets, and to enable an expanding community of users around the world to analyze these data in different contexts over the web. The JGI Genome Portal provides a unified access point to all JGI genomic databases and analytical tools. A user can find all DOE JGI sequencing projects and their status, search for and download assemblies and annotations of sequenced genomes, and interactively explore those genomes and compare them with other sequenced microbes, fungi, plants or metagenomes using specialized systems tailored to each particular class of organisms. Databases: Genomes Online Database (GOLD), Integrated Microbial Genomes (IMG), MycoCosm, Phytozome
FactGrid is a Wikibase instance designed to be used by historians, with a focus on international projects. The database is hosted by the University of Erfurt and coordinated at the Gotha Research Centre. Partners include Wikimedia Germany as the software provider and the German National Library, in a joint project to open the GND to international research.
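Because FactGrid runs on Wikibase, its contents can in principle be queried with SPARQL like other Wikibase instances. The snippet below is a sketch under that assumption; the endpoint URL is an unverified placeholder chosen for illustration.

```python
# Sketch of a SPARQL request against a Wikibase query service, as one would
# expect for a Wikibase instance such as FactGrid. The endpoint URL below is
# an assumption taken for illustration, not a verified FactGrid address.
import requests

ENDPOINT = "https://database.factgrid.de/sparql"  # assumed; check the project site

query = """
SELECT ?subject ?predicate ?object WHERE {
  ?subject ?predicate ?object .
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "factgrid-sketch/0.1 (example only)"},
)
response.raise_for_status()

# Standard SPARQL 1.1 JSON results format: results -> bindings -> variable -> value.
for row in response.json()["results"]["bindings"]:
    print(row["subject"]["value"], row["predicate"]["value"], row["object"]["value"])
```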
VertNet is an NSF-funded collaborative project that makes biodiversity data free and available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Ultimately, VertNet is the aggregate of all of the information it mobilizes: all of these things and more.
TreeGenes is a genomic, phenotypic, and environmental data resource for forest tree species. The TreeGenes database and Dendrome project provide custom informatics tools to manage the flood of information. The database contains several curated modules that support the storage of data and provide the foundation for web-based searches and visualization tools. GMOD GUI tools such as CMAP for genetic maps and GBrowse for genome and transcriptome assemblies are implemented here. A sample tracking system, known as the Forest Tree Genetic Stock Center, sits at the forefront of most large-scale projects. Barcode identifiers assigned to the trees during sample collection are maintained in the database to identify an individual through DNA extraction, resequencing, genotyping and phenotyping. DiversiTree, a user-friendly desktop-style interface, queries the TreeGenes database and is designed for bulk retrieval of resequencing data. CartograTree combines geo-referenced individuals with relevant ecological and trait databases in a user-friendly map-based interface. ---- The Conifer Genome Network (CGN) is a virtual nexus for researchers working in conifer genomics. The CGN web site is maintained by the Dendrome Project at the University of California, Davis.
The Magnetics Information Consortium (MagIC) improves research capacity in the Earth and Ocean sciences by maintaining an open community digital data archive for rock magnetic, geomagnetic, archeomagnetic (archaeomagnetic) and paleomagnetic (palaeomagnetic) data. Different parts of the website allow users to archive, search, visualize, and download these data. MagIC supports international rock magnetism, geomagnetism, archeomagnetism (archaeomagnetism), and paleomagnetism (palaeomagnetism) research and endeavors to bring data out of private archives, making them accessible to all and (re-)usable for new, creative, collaborative scientific and educational activities. The data in MagIC are used for many types of studies including tectonic plate reconstructions, geomagnetic field models, paleomagnetic field reversal studies, magnetohydrodynamical studies of the Earth's core, magnetostratigraphy, and archeology. MagIC is a domain-specific data repository and is directed by PIs who are both producers and consumers of rock magnetic, geomagnetic, and paleomagnetic data. Funded by NSF since 2003, MagIC forms a major part of https://earthref.org which integrates four independent cyber-initiatives rooted in various parts of the Earth, Ocean and Life sciences and education.
The Immunology Database and Analysis Portal (ImmPort) archives clinical study and trial data generated by NIAID/DAIT-funded investigators. Data types housed in ImmPort include subject assessments (i.e., medical history, concomitant medications, and adverse events) as well as mechanistic assay data such as flow cytometry, ELISA, and ELISPOT. --- You won't need an ImmPort account to search for compelling studies or to peruse study demographics, interventions, and mechanistic assays. But why stop there? What you really want to do is download the study and look at each experiment in detail, including individual ELISA results and flow cytometry files. Perhaps you want to take those flow cytometry files for a test drive using FLOCK in the ImmPort flow cytometry module. To download all that interesting data you will need to register for ImmPort access.
The Arctic Data Center is the primary data and software repository for the Arctic section of NSF Polar Programs. The Center helps the research community to reproducibly preserve and discover all products of NSF-funded research in the Arctic, including data, metadata, software, documents, and provenance that links these together. The repository is open to contributions from NSF Arctic investigators, and data are released under an open license (CC-BY or CC0, depending on the choice of the contributor). All science, engineering, and education research supported by the NSF Arctic research program is included, such as Natural Sciences (Geoscience, Earth Science, Oceanography, Ecology, Atmospheric Science, Biology, etc.) and Social Sciences (Archeology, Anthropology, Social Science, etc.). Key to the initiative is the partnership between NCEAS at UC Santa Barbara, DataONE, and NOAA’s NCEI, each of which brings critical capabilities to the Center. Infrastructure from the successful NSF-sponsored DataONE federation of data repositories enables data replication to NCEI, providing both offsite and institutional diversity that are critical to long term preservation.