Filter
  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database access restrictions
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
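For illustration, a hypothetical query combining these operators: "disease surveillance" + genom* - clinical matches entries containing the exact phrase "disease surveillance" and any word starting with "genom" while excluding entries that mention "clinical"; similarly, immunolgy~2 still matches "immunology" by allowing up to two character edits.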
Found 66 result(s)
WFCC Global Catalogue of Microorganisms (GCM) is expected to be a robust, reliable and user-friendly system that helps culture collections manage, disseminate and share information related to their holdings. It also provides a uniform interface through which the scientific and industrial communities can access comprehensive microbial resource information.
Project Tycho is a repository for global health data, particularly disease surveillance data. Project Tycho currently includes data for 92 notifiable disease conditions in the US, and up to three dengue-related conditions for 99 countries. Project Tycho has compiled data from reputable sources such as the US Centers for Disease Control, the World Health Organization, and national health agencies around the world. Project Tycho datasets are highly standardized and have rich metadata to improve access, interoperability, and reuse of global health data for research and innovation.
BEI Resources was established by the National Institute of Allergy and Infectious Diseases (NIAID) to provide reagents, tools and information for studying Category A, B, and C priority pathogens, emerging infectious disease agents, non-pathogenic microbes and other microbiological materials of relevance to the research community. BEI Resources acquires, authenticates, and produces reagents that scientists need to carry out basic research and develop improved diagnostic tests, vaccines, and therapies. By centralizing these functions within BEI Resources, access to and use of these materials in the scientific community is monitored and quality control of the reagents is assured.
The COVID-19 Data Portal was launched in April 2020 to bring together relevant datasets for sharing and analysis in an effort to accelerate coronavirus research. It enables researchers to upload, access and analyse COVID-19 related reference data and specialist datasets as part of the wider European COVID-19 Data Platform.
The N3C Data Enclave is a secure portal containing a very large and extensive set of harmonized COVID-19 clinical electronic health record (EHR) data. The data can be accessed through a secure cloud Enclave hosted by NCATS and cannot be downloaded due to regulatory control. Broad access is available to investigators at institutions that have signed a Data Use Agreement, via Data Use Requests submitted by individual investigators. The N3C is a unique open, reproducible, transparent, collaborative team-science initiative to leverage sensitive clinical data to expedite COVID-19 discoveries and improve health outcomes.
The Bremen Core Repository (BCR) for International Ocean Discovery Program (IODP), Integrated Ocean Drilling Program (IODP), Ocean Drilling Program (ODP), and Deep Sea Drilling Project (DSDP) cores from the Atlantic Ocean, Mediterranean and Black Seas and Arctic Ocean is operated at the University of Bremen within the framework of the German participation in IODP. It is one of three IODP repositories (besides the Gulf Coast Repository (GCR) in College Station, TX, and the Kochi Core Center (KCC), Japan). One of the scientific goals of IODP is to research the deep biosphere and the subseafloor ocean. IODP has deep-frozen microbiological samples from the subseafloor available for interested researchers and will continue to collect and preserve geomicrobiology samples for future research.
The Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Data and Specimen Hub (DASH) is a centralized resource that allows researchers to share and access de-identified data from studies funded by NICHD. DASH also serves as a portal for requesting biospecimens from selected DASH studies.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest sense. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data, as well as the software used to create and manage those data. Thanks to the rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
BBMRI-ERIC is a European research infrastructure for biobanking. We bring together all the main players from the biobanking field – researchers, biobankers, industry, and patients – to boost biomedical research. To that end, we offer quality management services, support with ethical, legal and societal issues, and a number of online tools and software solutions. Ultimately, our goal is to make new treatments possible. The Directory is a tool to share aggregate information about the biobanks that are open to external collaboration. It is based on the MIABIS 2.0 standard, which describes the samples and data in the biobanks at an aggregated level.
LSHTM Data Compass is a curated digital repository of research outputs that have been produced by staff and students at the London School of Hygiene & Tropical Medicine and their collaborators. It is used to share outputs intended for reuse, including: qualitative and quantitative data, software code and scripts, search strategies, and data collection tools.
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices. Document and archive studies: move the organization and management of study materials from the desktop into the cloud, so labs can organize, share, and archive study materials among team members; web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or simply forgetting where they were put. Share and find materials: with a click, make study materials public so that other researchers can find, use and cite them, and find materials from other researchers to avoid reinventing something that already exists. Detail individual contribution: assign citable contributor credit to any research material, including tools, analysis scripts, methods, measures, and data. Increase transparency: make as much of the scientific workflow public as desired, as it is developed or after publication of reports. Registration: registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle such as manuscript submission or the onset of data collection. Manage scientific workflow: a structured, flexible system can provide efficiency gains in workflow and clarity in project objectives.
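OSF projects ("nodes") can also be read programmatically. The sketch below is illustrative only: it assumes OSF's public REST API at https://api.osf.io/v2/, a JSON:API-style response structure, and a hypothetical node ID; consult the OSF API documentation for the exact endpoints and fields.

import requests

# "abcde" is a hypothetical OSF node (project) ID used only for illustration.
node_id = "abcde"

# Fetch the node's public metadata (assumed endpoint and payload structure).
resp = requests.get(f"https://api.osf.io/v2/nodes/{node_id}/")
resp.raise_for_status()
node = resp.json()["data"]
print(node["attributes"]["title"], node["attributes"]["public"])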
It is an interactive website offering access to genome sequence data from a variety of vertebrate and invertebrate species and major model organisms, integrated with a large collection of aligned annotations. The Browser is a graphical viewer optimized to support fast interactive performance and is an open-source, web-based tool suite built on top of a MySQL database for rapid visualization, examination, and querying of the data at many levels.
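As a rough illustration of querying such a MySQL-backed annotation database directly, the sketch below assumes this entry describes the UCSC Genome Browser and its publicly documented read-only MySQL server; the host, user, database (hg38) and table (refGene) named here are assumptions drawn from that service, not from the entry above.

import pymysql  # third-party driver: pip install pymysql

# Connect to the assumed public, read-only annotation server (no password).
conn = pymysql.connect(host="genome-mysql.soe.ucsc.edu", user="genome", database="hg38")
try:
    with conn.cursor() as cur:
        # Fetch a few gene models overlapping an arbitrary region of chromosome 1.
        cur.execute(
            "SELECT name2, chrom, txStart, txEnd FROM refGene "
            "WHERE chrom = %s AND txEnd > %s AND txStart < %s LIMIT 5",
            ("chr1", 1_000_000, 2_000_000),
        )
        for gene, chrom, start, end in cur.fetchall():
            print(gene, chrom, start, end)
finally:
    conn.close()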
Synapse is an open source software platform that clinical and biological data scientists can use to carry out, track, and communicate their research in real time. Synapse enables co-location of scientific content (data, code, results) and narrative descriptions of that work.
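Programmatic access to Synapse is typically done through its Python client. The sketch below is a minimal example assuming the synapseclient package, a personal access token, and a hypothetical file entity ID; it is not the only way to work with Synapse.

import synapseclient  # pip install synapseclient

syn = synapseclient.Synapse()
syn.login(authToken="<personal-access-token>")  # token generated in account settings

# "syn1234567" is a hypothetical Synapse ID, assumed here to point at a File entity.
entity = syn.get("syn1234567")  # downloads the file to the local Synapse cache
print(entity.name, entity.path)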
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, it is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The GISAID Initiative promotes the international sharing of all influenza virus sequences, related clinical and epidemiological data associated with human viruses, and geographical as well as species-specific data associated with avian and other animal viruses, to help researchers understand how the viruses evolve, spread and potentially become pandemics. GISAID does so by overcoming the disincentives, hurdles and restrictions that discouraged or prevented the sharing of influenza data prior to formal publication. The Initiative ensures that open access to data in GISAID is provided free of charge to everyone, provided individuals identify themselves and agree to uphold the GISAID sharing mechanism governed through its Database Access Agreement. GISAID calls on all users to uphold scientific etiquette: acknowledge the originating laboratories providing the specimens and the submitting laboratories that generate the sequence data, ensure fair exploitation of results derived from the data, and agree that no restrictions shall be attached to data submitted to GISAID, so as to promote collaboration among researchers on the basis of open sharing of data and respect for all rights and interests.
From April 2020 to March 2023, the COVID-19 Immunity Task Force (CITF) supported 120 studies to generate knowledge about immunity to SARS-CoV-2. The subjects addressed by these studies include the extent of SARS-CoV-2 infection in Canada, the nature of immunity, vaccine effectiveness and safety, and the need for booster shots among different communities and priority populations in Canada. The CITF Databank was developed to further enhance the impact of CITF-funded studies by allowing additional research using the data collected from CITF-supported studies. The CITF Databank centralizes and harmonizes individual-level data from CITF-funded studies that have met all ethical requirements to deposit data in the CITF Databank and have completed a data sharing agreement. The CITF Databank is an internationally unique resource for sharing epidemiological and laboratory data from studies about SARS-CoV-2 immunity in different populations. The types of research that are possible with data from the CITF Databank include observational epidemiological studies, mathematical modelling research, and comparative evaluation of surveillance and laboratory methods.
The world’s largest collection of TCR and BCR sequences. Easily incorporate millions of sequences' worth of public data into your next papers and projects using the immunoSEQ Analyzer. Construct your own projects, draw your own conclusions, and freely publish new discoveries.
The MG-RAST server is an open source system for annotation and comparative analysis of metagenomes. Users can upload raw sequence data in fasta format; the sequences will be normalized and processed and summaries automatically generated. The server provides several methods to access the different data types, including phylogenetic and metabolic reconstructions, and the ability to compare the metabolism and annotations of one or more metagenomes and genomes. In addition, the server offers a comprehensive search capability. Access to the data is password protected, and all data generated by the automated pipeline is available for download in a variety of common formats. MG-RAST has become an unofficial repository for metagenomic data, providing a means to make your data public so that it is available for download and viewing of the analysis without registration, as well as a static link that you can use in publications. It also requires that you include experimental metadata about your sample when it is made public to increase the usefulness to the community.
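Beyond the web interface, MG-RAST data can also be retrieved programmatically. The sketch below is a minimal, illustrative example: the REST base URL (api.mg-rast.org), the "metagenome" resource, the verbosity parameter and the accession used are assumptions for illustration only; check the MG-RAST API documentation for the exact endpoints and parameters.

import requests

BASE = "https://api.mg-rast.org"
accession = "mgm4447943.3"  # placeholder metagenome accession, for illustration only

# Retrieve basic metadata for one public metagenome (assumed endpoint and parameter).
resp = requests.get(f"{BASE}/metagenome/{accession}", params={"verbosity": "minimal"})
resp.raise_for_status()
record = resp.json()
print(record.get("name"), record.get("status"))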