Filters: Subjects, Content Types, Countries, AID systems, API, Data access, Data access restrictions, Database access, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

Search syntax (a sample query follows this list):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping (priority)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
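As an illustration, a hypothetical query (not one taken from this results page) such as

    "functional genomics"~2 +(microarray | sequenc*) -retired

matches records containing the quoted phrase with up to two words of slop, requires at least one of "microarray" or any term beginning with "sequenc", and excludes records containing "retired".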
Found 18 result(s)
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only about a thousand cells, the worm solves basic problems such as feeding, mate-finding, and predator avoidance. Despite being extremely well studied, this organism still eludes a deep, principled understanding of its biology. We are using a bottom-up approach, aimed at observing worm behaviour emerge from a simulation built on data derived from scientific experiments carried out over the past decade. To do so, we are incorporating data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to be able to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is open source and available on GitHub.
caArray Retirement Announcement: The National Cancer Institute (NCI) Center for Biomedical Informatics and Information Technology (CBIIT) instance of the caArray database was retired on March 31st, 2015. All publicly accessible caArray data and annotations have been archived and remain available via FTP download ( https://wiki.nci.nih.gov/x/UYHeDQ ) and through GEO ( http://www.ncbi.nlm.nih.gov/geo/ ). While NCI is not able to provide technical support for the caArray software after the retirement, the source code is available on GitHub ( https://github.com/NCIP/caarray ), and continued community development is encouraged. The Molecular Analysis of Brain Neoplasia (Rembrandt fine-00037) gene expression data has been loaded into ArrayExpress: http://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-3073 . caArray is an open-source, web- and programmatically-accessible microarray data management system that supports the annotation of microarray data using MAGE-TAB and web-based forms. Data and annotations may be kept private to the owner, shared with user-defined collaboration groups, or made public. The NCI instance of caArray hosts many cancer-related public datasets available for download.
The Bacterial and Viral Bioinformatics Resource Center (BV-BRC) is an information system designed to support research on bacterial and viral infectious diseases. BV-BRC combines two long-running BRCs: PATRIC, the bacterial system, and IRD/ViPR, the viral systems.
Funded by the National Science Foundation (NSF) and proudly operated by Battelle, the National Ecological Observatory Network (NEON) program provides open, continental-scale data across the United States that characterize and quantify complex, rapidly changing ecological processes. The Observatory’s comprehensive design supports greater understanding of ecological change and enables forecasting of future ecological conditions. NEON collects and processes data from field sites located across the continental U.S., Puerto Rico, and Hawaii over a 30-year timeframe. NEON provides free and open data that characterize plants, animals, soil, nutrients, freshwater, and the atmosphere. These data may be combined with external datasets or data collected by individual researchers to support the study of continental-scale ecological change.
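For programmatic access, here is a minimal hedged sketch in Python, assuming NEON's public Data API v0 and its /products endpoint; the endpoint path and the productCode/productName fields are assumptions drawn from the public API documentation, not from the description above:

    import requests

    # List NEON data products via the public Data API (v0 endpoint assumed).
    resp = requests.get("https://data.neonscience.org/api/v0/products", timeout=30)
    resp.raise_for_status()

    # Print a few product codes and names (field names assumed).
    for product in resp.json().get("data", [])[:5]:
        print(product.get("productCode"), "-", product.get("productName"))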
EarthWorks is a discovery tool for geospatial (a.k.a. GIS) data. It allows users to search and browse the GIS collections owned by Stanford University Libraries, as well as data collections from many other institutions. Data can be searched spatially by manipulating a map, by keyword, by selecting search-limiting facets (e.g., limiting to a given format type), or by combining these options.
A harmonized, indexed, searchable collection of large-scale human functional genomics (FG) data with extensive metadata. It provides a scalable, unified way to easily access massive FG and annotation data collections curated from large-scale genomic studies, and offers direct integration (via API) with custom and high-throughput genetic and genomic analysis workflows.
Note: intrepidbio.com has expired. Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but cannot spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
NIAID’s TB Portals Program is a multi-national collaboration for TB data sharing and analysis to advance TB research. Backed by a global consortium of clinicians, scientists, and IT professionals from 40 sites in 16 countries throughout eastern Europe, Asia, and sub-Saharan Africa, the TB Portals Program provides a web-based, open-access repository of multi-domain TB data and tools for its analysis. Researchers can find linked socioeconomic/geographic, clinical, laboratory, radiological, and genomic data from over 7,500 published international TB patient cases, with an emphasis on drug-resistant tuberculosis.
VertNet is an NSF-funded collaborative project that makes biodiversity data freely available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Yet, VertNet is still the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.
Weed Images is a project of the University of Georgia’s Center for Invasive Species and Ecosystem Health and one of the four major parts of BugwoodImages. Its focus is on weeds and the damage they cause. It provides an easily accessible archive of high-quality images for use in educational applications. In most cases, the images found in this system were taken by, and loaned to us by, photographers other than ourselves. Most are in the realm of public-sector images. The photographs are in this system to be used.
GNPS is a web-based mass spectrometry ecosystem that aims to be an open-access knowledge base for community-wide organization and sharing of raw, processed, or identified tandem mass spectrometry (MS/MS) data. GNPS aids in identification and discovery throughout the entire life cycle of data, from initial data acquisition and analysis to post-publication.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental, and earth science research in the broadest sense. For scientists, the KNB Data Repository is an efficient way to share, discover, access, and interpret complex ecological, environmental, earth science, and sociological data, along with the software used to create and manage those data. Thanks to the rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB also supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so that datasets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
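For programmatic retrieval, a minimal hedged sketch in Python, assuming KNB exposes the standard DataONE member-node (v2) REST interface; the base URL, the /object/{pid} route, and the identifier below are assumptions or placeholders rather than details stated above:

    import requests
    from urllib.parse import quote

    # Base URL assumed for KNB's DataONE member-node (v2) REST interface.
    MN_BASE = "https://knb.ecoinformatics.org/knb/d1/mn/v2"

    # Placeholder identifier; real datasets are addressed by DOIs or other PIDs.
    pid = "doi:10.5063/EXAMPLE"

    # MNRead.get: fetch the object's bytes (data or metadata) by identifier.
    resp = requests.get(f"{MN_BASE}/object/{quote(pid, safe='')}", timeout=30)
    if resp.ok:
        print(f"Retrieved {len(resp.content)} bytes for {pid}")
    else:
        print(f"Object {pid} not available (HTTP {resp.status_code})")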
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously-published results, with strong collaborations between computational and empirical biologists to develop, test and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
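As a hedged illustration of the project's programmatic services, the sketch below assumes the Open Tree of Life v3 web API and its tnrs/match_names method; the endpoint path, payload shape, and response fields are assumptions, not details stated in the description above:

    import requests

    # Match a taxon name against the Open Tree taxonomy (v3 TNRS service assumed).
    resp = requests.post(
        "https://api.opentreeoflife.org/v3/tnrs/match_names",
        json={"names": ["Caenorhabditis elegans"]},
        timeout=30,
    )
    resp.raise_for_status()

    # Each result lists candidate matches with Open Tree taxon ids (fields assumed).
    for result in resp.json().get("results", []):
        for match in result.get("matches", []):
            taxon = match.get("taxon", {})
            print(taxon.get("name"), taxon.get("ott_id"))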
eBird is among the world’s largest biodiversity-related science projects, with more than 1 billion records, more than 100 million bird sightings contributed annually by eBirders around the world, and an average participation growth rate of approximately 20% year over year. A collaborative enterprise with hundreds of partner organizations, thousands of regional experts, and hundreds of thousands of users, eBird is managed by the Cornell Lab of Ornithology. eBird data document bird distribution, abundance, habitat use, and trends through checklist data collected within a simple, scientific framework. Birders enter when, where, and how they went birding, and then fill out a checklist of all the birds seen and heard during the outing. Data can be accessed from the Science tab on the website.
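Observations can also be pulled programmatically; the hedged sketch below assumes the eBird API 2.0 recent-observations endpoint, its X-eBirdApiToken header, and a personal API key issued by the Cornell Lab (the region code and field names are illustrative assumptions):

    import requests

    region = "US-NY"                 # example region code
    api_key = "YOUR_EBIRD_API_KEY"   # personal key from the eBird website

    # Recent observations for a region (eBird API 2.0 endpoint assumed).
    resp = requests.get(
        f"https://api.ebird.org/v2/data/obs/{region}/recent",
        headers={"X-eBirdApiToken": api_key},
        timeout=30,
    )
    resp.raise_for_status()

    # Each record includes a common name, species code, and observation date.
    for obs in resp.json()[:5]:
        print(obs.get("comName"), obs.get("speciesCode"), obs.get("obsDt"))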
GeneWeaver combines cross-species data and gene entity integration, scalable hierarchical analysis of user data with a community-built and curated data archive of gene sets and gene networks, and tools for data-driven comparison of user-defined biological, behavioral, and disease concepts. GeneWeaver allows users to integrate gene sets across species, tissues, and experimental platforms. It differs from conventional gene set over-representation analysis tools in that it allows users to evaluate intersections among all combinations of a collection of gene sets, including, but not limited to, annotations to controlled vocabularies. There are numerous applications of this approach. Sets can be stored, shared, and compared privately, among user-defined groups of investigators, and across all users.
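The "all combinations" comparison described above can be illustrated with a short, self-contained Python sketch; the toy gene sets are made up for illustration and this is not GeneWeaver's own code:

    from itertools import combinations

    # Toy gene sets standing in for curated or user-uploaded sets.
    gene_sets = {
        "set_A": {"BDNF", "COMT", "DRD2", "MAOA"},
        "set_B": {"COMT", "DRD2", "SLC6A4"},
        "set_C": {"BDNF", "DRD2", "SLC6A4", "TPH2"},
    }

    # Evaluate intersections among all combinations of two or more sets.
    for size in range(2, len(gene_sets) + 1):
        for combo in combinations(sorted(gene_sets), size):
            shared = set.intersection(*(gene_sets[name] for name in combo))
            print(" & ".join(combo), "->", sorted(shared))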
NKN is now Research Computing and Data Services (RCDS)! We provide data management support for UI researchers and their regional, national, and international collaborators. This support keeps researchers at the cutting edge of science and increases our institution's competitiveness for external research grants. Quality data and metadata developed in research projects and curated by RCDS (formerly NKN) are a valuable, long-term asset upon which to develop and build new research and science.
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or just forgetting where you put the damn thing.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contributions. Assign citable contributor credit to any research material: tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired, either as it is developed or after publication of reports. Public projects can be browsed on the OSF website (see the programmatic sketch after this list).
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection. Public registrations can likewise be discovered on the OSF website.
  • Manage scientific workflow. A structured, flexible system can provide efficiency gains to the workflow and clarity to project objectives.
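For programmatic discovery of public projects, a minimal hedged sketch, assuming the OSF JSON-API at api.osf.io/v2 and its /nodes/ collection (the endpoint and field names are assumptions, not part of the description above):

    import requests

    # List public OSF projects ("nodes" in the OSF APIv2, endpoint assumed).
    resp = requests.get("https://api.osf.io/v2/nodes/", timeout=30)
    resp.raise_for_status()

    # JSON-API response: each node exposes its title under "attributes" (assumed).
    for node in resp.json().get("data", [])[:5]:
        print(node.get("id"), "-", node.get("attributes", {}).get("title"))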