
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
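The operators above can be combined in a single query. A few illustrative (hypothetical) queries, assuming the syntax described in the list:

```
genom*                  wildcard: matches genome, genomic, genomics, …
"gene expression"       exact phrase
proteome | proteomics   OR: either term
biodiversity -marine    NOT: biodiversity, excluding marine
(cancer | tumor) +genome  grouping with AND
sequenc~1               fuzzy: terms within edit distance 1 of "sequenc"
"gene expression"~2     phrase with a slop of up to 2 words
```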
Found 240 result(s)
GenBase is a genetic sequence database that accepts user submissions (mRNA, genomic DNAs, ncRNA, or small genomes such as organelles, viruses, plasmids, phages from any organism) and integrates data from INSDC.
NCBI Datasets is a continually evolving platform designed to provide easy and intuitive access to NCBI’s sequence data and metadata. NCBI Datasets is part of the NIH Comparative Genomics Resource (CGR). CGR facilitates reliable comparative genomics analyses for all eukaryotic organisms through an NCBI Toolkit and community collaboration.
The mission of the GO Consortium is to develop a comprehensive, computational model of biological systems, ranging from the molecular to the organism level, across the multiplicity of species in the tree of life. The Gene Ontology (GO) knowledgebase is the world’s largest source of information on the functions of genes. This knowledge is both human-readable and machine-readable, and is a foundation for computational analysis of large-scale molecular biology and genetics experiments in biomedical research.
The National Biodiversity Information System (SNIB) of Mexico is maintained by the National Commission for the Knowledge and Use of Biodiversity (CONABIO). The SNIB is of strategic importance in a megadiverse country like Mexico, so it was clear to CONABIO from the beginning that the SNIB should rely on the work of the many national and foreign institutions and experts that have long been dedicated to studying the biodiversity of Mexico. The creation of this system was mandated for CONABIO in the General Law of Ecological Balance and Environmental Protection (LGEEPA Art. 80, fraction V). Specialists collaborate with the SNIB in various ways, including by generating data and information for it: an information system that allows the country to make informed decisions about its biodiversity must be built on data and information supported by a broad network of experts.
bio.tools is a software registry for bioinformatics and the life sciences.
The ZFMK Biodiversity Data Center hosts, archives, publishes and distributes data from biodiversity research and zoological collections. The Biodiversity Data Center handles and curates data on:
  • the specimens of the institute's collections, including provenance, distribution, habitat, and taxonomic data;
  • observations, recordings and measurements from field research, monitoring and ecological inventories;
  • morphological measurements and descriptions of specimens;
  • genetic barcode libraries; and
  • genetic and molecular research data associated with specimens or environmental samples.
For this purpose, suitable software and hardware systems are operated and the required infrastructure is continuously developed. Core components of the software architecture are: the DiversityWorkbench suite for managing all collection-related information; the Digital Asset Management system easyDB for multimedia assets; and the description database Morph·D·Base for morphological data sets and character matrices.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access to, and sharing of, datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
DataON is Korea's national research data platform. It provides integrated metadata search across KISTI's research data as well as domestic and international research data, with links to the raw data. DataON allows users (researchers, policy makers, etc.) to: easily search for various types of research data in all scientific fields; register research results so that research data can be posted and cited; and build a community among researchers to enable collaborative research. It also provides a data analysis environment that allows one-stop analysis of discovered research data.
ProteomicsDB started as a protein-centric in-memory database for the exploration of large collections of quantitative mass spectrometry-based proteomics data. The data types and contents grew over time to include RNA-Seq expression data, drug-target interactions and cell line viability data.
WikiPathways was established to facilitate the contribution and maintenance of pathway information by the biology community. WikiPathways is an open, collaborative platform dedicated to the curation of biological pathways. WikiPathways thus presents a new model for pathway databases that enhances and complements ongoing efforts, such as KEGG, Reactome and Pathway Commons. Building on the same MediaWiki software that powers Wikipedia, we added a custom graphical pathway editing tool and integrated databases covering major gene, protein, and small-molecule systems. The familiar web-based format of WikiPathways greatly reduces the barrier to participate in pathway curation. More importantly, the open, public approach of WikiPathways allows for broader participation by the entire community, ranging from students to senior experts in each field. This approach also shifts the bulk of peer review, editorial curation, and maintenance to the community.
The Cellosaurus is a knowledge resource on cell lines. It attempts to describe all cell lines used in biomedical research. Its scope includes: immortalized cell lines; naturally immortal cell lines (example: stem cell lines); finite life cell lines when those are distributed and used widely; vertebrate cell lines, with an emphasis on human, mouse and rat cell lines; and invertebrate (insect and tick) cell lines. Its scope does not include: primary cell lines (with the exception of the finite life cell lines described above) and plant cell lines. Cellosaurus was initiated as a cell line controlled vocabulary in the context of the neXtProt knowledgebase, but it quickly became apparent that there was a need for a cell line knowledge resource that would serve the needs of individual researchers, cell line distributors and bioinformatic resources. This led to an increase in the scope and depth of the content of the Cellosaurus. The Cellosaurus is a participant of the Resource Identification Initiative and contributes actively to the work of the International Cell Line Authentication Committee (ICLAC). It is a Global Core Biodata Resource, an ELIXIR Core Data Resource and an IRDiRC Recognized Resource.
FaceBase is a collaborative NIDCR-funded project that houses comprehensive data in support of advancing research into craniofacial development and malformation. It serves as a community resource by curating large datasets of a variety of types from the craniofacial research community and sharing them via this website. Practices emphasize a comprehensive and multidisciplinary approach to understanding the developmental processes that create the face. The data offered spotlights high-throughput genetic, molecular, biological, imaging and computational techniques. One of the missions of this project is to facilitate cooperation and collaboration between the central coordinating center (i.e., the Hub) and the craniofacial research community.
Born of the desire to systematize analyses from The Cancer Genome Atlas pilot and scale their execution to the dozens of remaining diseases to be studied, GDAC Firehose now sits atop terabytes of analysis-ready TCGA data and reliably executes thousands of pipelines per month. More information: https://broadinstitute.atlassian.net/wiki/spaces/GDAC/
Genomic Expression Archive (GEA) is a public database of functional genomics data such as gene expression, epigenetics and genotyping SNP arrays. Both microarray- and sequence-based data are accepted in the MAGE-TAB format, in compliance with the MIAME and MINSEQE guidelines, respectively. GEA issues accession numbers: E-GEAD-n for experiments and A-GEAD-n for array designs. Data exchange between GEA and EBI ArrayExpress is planned.
A collection of high quality multiple sequence alignments for objective, comparative studies of alignment algorithms. The alignments are constructed based on 3D structure superposition and manually refined to ensure alignment of important functional residues. A number of subsets are defined covering many of the most important problems encountered when aligning real sets of proteins. It is specifically designed to serve as an evaluation resource to address all the problems encountered when aligning complete sequences. The first release provided sets of reference alignments dealing with the problems of high variability, unequal repartition and large N/C-terminal extensions and internal insertions. Version 2.0 of the database incorporates three new reference sets of alignments containing structural repeats, trans-membrane sequences and circular permutations to evaluate the accuracy of detection/prediction and alignment of these complex sequences. Within the resource, users can look at a list of all the alignments, download the whole database by ftp, get the "c" program to compare a test alignment with the BAliBASE reference (The source code for the program is freely available), or look at the results of a comparison study of several multiple alignment programs, using BAliBASE reference sets.
ArachnoServer is a manually curated database containing information on the sequence, three-dimensional structure, and biological activity of protein toxins derived from spider venom. Spiders are the largest group of venomous animals and they are predicted to contain by far the largest number of pharmacologically active peptide toxins (Escoubas et al., 2006). ArachnoServer has been custom-built so that a wide range of biological scientists, including neuroscientists, pharmacologists, and toxinologists, can readily access key data relevant to their discipline without being overwhelmed by extraneous information.
BaAMPs is the first database dedicated to antimicrobial peptides (AMPs) specifically tested against microbial biofilms. The aim of this project is to provide, in an open-access framework, useful resources for the study of AMPs against biofilms to the microbiologists, bioinformatics researchers and medical scientists working in this field.
This is CSDB version 1, merged from the Bacterial (BCSDB) and Plant&Fungal (PFCSDB) databases. The database provides structural, bibliographic, taxonomic, NMR spectroscopic and other information on glycan and glycoconjugate structures of prokaryotic, plant and fungal origin. The key points of this service are:
  • High coverage. Coverage for bacteria and archaea is above 80% (up to 2016). Similar coverage for plants and fungi is expected in the future; the database is close to complete up to 1998 for plants and up to 2006 for fungi.
  • Data quality. High data quality is achieved by manual curation from the original publications, assisted by multiple automatic procedures for error control. Errors present in publications are reported and corrected, when possible. Data from other databases are verified on import.
  • Detailed annotations. Structural data are supplied with extended bibliography, assigned NMR spectra, taxon identification (including strains and serogroups), and other information where available in the original publication.
  • Services. CSDB serves as a platform for a number of computational services tuned for glycobiology, such as NMR simulation, automated structure elucidation, taxon clustering, 3D molecular modeling, and statistical processing of data.
  • Integration. CSDB is cross-linked to other glycoinformatics projects and NCBI databases. The data are exportable in various formats, including the most widespread encoding schemes and records using the GlycoRDF ontology.
  • Free web access. Users can access the database for free via its web interface (see Help).
The main source of data is retrospective literature analysis. About 20% of the data were imported from CCSD (Carbbank, University of Georgia, Athens; structures published before 1996), with subsequent manual curation and approval.
The current coverage is displayed in red at the top of the left menu. The time lag between the publication of new data and their deposition into CSDB is about one year. In the scope of bacterial carbohydrates, CSDB covers nearly all structures of this origin published up to 2016. "Prokaryotic, plant and fungal" means that a glycan was found in organisms belonging to these taxonomic domains or was obtained by modification of glycans found in them. "Carbohydrate" means a structure composed of any residues linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds, in which at least one residue is a sugar or a sugar derivative.
Note: Pfam data and new releases are available through InterPro (https://www.re3data.org/repository/r3d100010798). The Pfam website now serves as a static page with no data updates. All links below redirect to the closest alternative page on the InterPro website.
Note: this repository is no longer available. The Deep Carbon Observatory (DCO) is a global community of multi-disciplinary scientists unlocking the inner secrets of Earth through investigations into life, energy, and the fundamentally unique chemistry of carbon. The Deep Carbon Observatory Digital Object Registry ("DCO-VIVO") is a centrally-managed digital object identification, object registration and metadata management service for the DCO. Digital object registration includes DCO-ID generation based on the global Handle System infrastructure and metadata collection using VIVO. Users can deposit their data into the DCO Data Repository and have that data discoverable and accessible by others.