
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control evaluation order
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
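For example, combining the operators above (all search terms are illustrative):
    genom*                     wildcard: matches genome, genomics, genomic
    "research data"            exact phrase
    data + software            both terms required (AND, the default)
    neuroscience | genomics    either term (OR)
    repository - commercial    excludes results containing "commercial"
    (data | code) + open       parentheses group the OR before the AND
    nuroscience~1              fuzzy match within edit distance 1
    "data repository"~2        phrase match allowing a slop of 2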
Found 53 result(s)
Academic Commons provides open, persistent access to the scholarship produced by researchers at Columbia University, Barnard College, Jewish Theological Seminary, Teachers College, and Union Theological Seminary. Academic Commons is a program of the Columbia University Libraries. Academic Commons accepts articles, dissertations, research data, presentations, working papers, videos, and more.
Cary Institute data repository allows researchers to store, share, and publish their research data, supplementary information, and associated metadata. Each published item is assigned a Digital Object Identifier (DOI), which makes the data citable and sustainable. This repository is a member node of DataONE.
CaltechDATA is an institutional data repository for Caltech. The Caltech Library runs the repository to preserve the accomplishments of Caltech researchers and share their results with the world. Caltech-associated researchers can upload data, link data with their publications, and assign a permanent DOI so that others can reference the data set. The repository also preserves software and has automatic GitHub integration. All files present in the repository are open access or embargoed, and all metadata is always available to the public.
Bioconductor provides tools for the analysis and comprehension of high-throughput genomic data. Bioconductor uses the R statistical programming language, and is open source and open development. It has two releases each year, and an active user community. Bioconductor is also available as an AMI (Amazon Machine Image) and a series of Docker images.
The UC San Diego Library Digital Collections website gathers two categories of content managed by the Library: library collections (including digitized versions of selected collections covering topics such as art, film, music, history and anthropology) and research data collections (including research data generated by UC San Diego researchers).
ModelDB is a curated database of published models in the broad domain of computational neuroscience. It addresses the need for access to such models in order to evaluate their validity and extend their use. It can handle computational models expressed in any textual form, including procedural or declarative languages (e.g. C++, XML dialects) and source code written for any simulation environment. The model source code doesn't even have to reside inside ModelDB; it just has to be available from some publicly accessible online repository or WWW site.
The Duke Research Data Repository is a service of the Duke University Libraries that provides curation, access, and preservation of research data produced by the Duke community. Duke's RDR is a discipline-agnostic institutional data repository that is intended to preserve and make public data related to the teaching and research mission of Duke University, including data linked to a publication, research project, and/or class, as well as supplementary software code and documentation used to provide context for the data.
Bioinformatics.org serves the scientific and educational needs of bioinformatics practitioners and the general public. We develop and maintain computational resources to facilitate world-wide communications and collaborations between people of all educational and professional levels. We provide and promote open access to the materials and methods required for, and derived from, research, development and education.
The WashU Research Data repository accepts any publishable research data set, including textual, tabular, geospatial, imagery, computer code, or 3D data files, from researchers affiliated with Washington University in St. Louis. Datasets include metadata and are curated and assigned a DOI to align with FAIR data principles.
Merritt is a curation repository for the preservation of and access to the digital research data of the ten-campus University of California system and external project collaborators. Merritt is supported by the University of California Curation Center (UC3) at the California Digital Library (CDL). While Merritt itself is content agnostic, accepting digital content regardless of domain, format, or structure, it is being used for management of research data, and it forms the basis for a number of domain-specific repositories, such as the ONEShare repository for earth and environmental science and the DataShare repository for life sciences. Merritt provides persistent identifiers, storage replication, fixity auditing, complete version history, a REST API, a comprehensive metadata catalog for discovery, ATOM-based syndication, and curatorially defined collections, access control rules, and data use agreements (DUAs). Merritt content upload and download may each be curatorially designated as public or restricted. Merritt DOIs are provided by UC3's EZID service, which is integrated with DataCite. All DOIs and associated metadata are automatically registered with DataCite and are harvested by Ex Libris PRIMO and the Thomson Reuters Data Citation Index (DCI) for high-level discovery. Merritt is also a member node in the DataONE network; curatorially designated data submitted to Merritt are automatically registered with DataONE for additional replication and federated discovery through the ONEMercury search/browse interface.
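Because Merritt DOIs are registered with DataCite, their metadata can be retrieved programmatically from DataCite's public REST API. A minimal sketch in Python (the DOI below is a placeholder on the DataCite test prefix, not a real Merritt identifier):

    import json
    import urllib.request

    doi = "10.5072/FK2XXXXXX"  # placeholder; substitute a real Merritt DOI

    # DataCite's REST API returns the registered metadata for any DataCite DOI.
    with urllib.request.urlopen(f"https://api.datacite.org/dois/{doi}") as resp:
        meta = json.load(resp)

    print(meta["data"]["attributes"]["titles"])  # registered title(s)
    print(meta["data"]["attributes"]["url"])     # landing page in Merritt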
Neuroimaging Tools and Resources Collaboratory (NITRC) is a free, one-stop environment for science researchers who need resources such as neuroimaging analysis software, publicly available data sets, and computing power. Since its debut in 2007, NITRC has helped the neuroscience community reuse software and data produced by research that, before NITRC, were routinely lost or disregarded, to make further discoveries. NITRC provides free access to data and pay-per-use, cloud-based access to computing power, enabling worldwide scientific collaboration with minimal startup effort and cost. With NITRC and its components, the Resources Registry (NITRC-R), the Image Repository (NITRC-IR), and the Computational Environment (NITRC-CE), a researcher can obtain pilot or proof-of-concept data to validate a hypothesis for a few dollars.
VectorBase provides data on arthropod vectors of human pathogens. Sequence data, gene expression data, images, population data, and insecticide resistance data for arthropod vectors are available for download. VectorBase also offers a genome browser, a gene expression and microarray repository, and BLAST searches for all VectorBase genomes. VectorBase genomes include Aedes aegypti, Anopheles gambiae, Culex quinquefasciatus, Ixodes scapularis, Pediculus humanus, and Rhodnius prolixus. VectorBase is one of the Bioinformatics Resource Centers (BRC) projects funded by the National Institute of Allergy and Infectious Diseases (NIAID).
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
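As an illustration of the competition workflow, a minimal sketch using the official kaggle Python package (this assumes a Kaggle account with an API token stored in ~/.kaggle/kaggle.json; the competition slug is illustrative):

    from kaggle.api.kaggle_api_extended import KaggleApi

    api = KaggleApi()
    api.authenticate()  # reads the API token from ~/.kaggle/kaggle.json

    # Download all data files for one competition into a local folder.
    api.competition_download_files("titanic", path="data/")

Competitors then train models locally on the downloaded data and submit predictions through the same API or the website.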
In keeping with the open data policies of the U.S. Agency for International Development (USAID) and the Bill & Melinda Gates Foundation, the Cereal Systems Initiative for South Asia (CSISA) has launched the CSISA Data Repository to ensure public accessibility to key data sets, including crop cut data (directly observed crop yield estimates), on-station and on-farm research trial data, and socioeconomic surveys. CSISA is a science-driven and impact-oriented regional initiative for increasing the productivity of cereal-based cropping systems in Bangladesh, India and Nepal, thus improving food security and farmers’ livelihoods. CSISA generates data that is of value and interest to a diverse audience of researchers, policymakers and the public. CSISA’s data repository is hosted on Dataverse, an open source web application developed at Harvard University to share, preserve, cite, explore and analyze research data. CSISA’s repository contains rich datasets, including on-station trial data from 2009–17 about crop and resource management practices for sustainable future cereal-based cropping systems. Collection of this data occurred during the long-term, on-station research trials conducted at the Indian Council of Agricultural Research – Research Complex for the Eastern Region in Bihar, India. The data include information on agronomic management for the sustainable intensification of cropping systems, mechanization, diversification, futuristic approaches to sustainable intensification, long-term effects of conservation agriculture practices on soil health and the pest spectrum. Additional trial data in the repository include nutrient omission plot technique trials from Bihar, eastern Uttar Pradesh and Odisha, India, covering 2012–15, which help determine the indigenous nutrient-supplying ability of the soil. This data helps develop precision nutrient management approaches that would be most effective in different types of soils. CSISA’s most popular dataset thus far includes crop cut data on maize in Odisha, India and rice in Nepal. Crop cut datasets provide ground-truthed yield estimates, as well as valuable information on relevant agronomic and socioeconomic practices affecting production practices and yield. A variety of research data on wheat systems are also available from Bangladesh and India. Additional crop cut data will also be coming online soon. Cropping system-related data and socioeconomic data are in the repository, some of which are cross-listed with a Dataverse run by the International Food Policy Research Institute. The socioeconomic datasets contain baseline information that is crucial for technology targeting, as well as for assessing the adoption and performance of CSISA-supported technologies under smallholder farmers’ constrained conditions, representing the ultimate litmus test of their potential for change at scale. Other highly interesting datasets include farm composition and productive trajectory information, based on a 20-year panel dataset, and numerous wheat crop cut and maize nutrient omission trial data from across Bangladesh.
LibraData is a place for UVA researchers to share data publicly. It is UVA's local instance of Dataverse. LibraData is part of the Libra Scholarly Repository suite of services which includes works of UVA scholarship such as articles, books, theses, and data.
The Hive is the University of Utah's institutional data repository. All datasets are associated with a current or past University of Utah faculty member or student. They are also all freely available under Creative Commons licenses. The repository spans disciplines and currently holds over 80 datasets.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data, and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses), embedded in a cloud computing environment and referenced by a single digital object identifier (DOI). The platform is unique because of its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as "Open Services."
FLOSSmole is a collaborative collection of free, libre, and open source software (FLOSS) data. FLOSSmole contains nearly 1 TB of data, covering the period from 2004 to the present and more than 500,000 different open source projects.
The Harvard Dataverse is open to all scientific data from all disciplines worldwide. It includes the world's largest collection of social science research data. It is hosting data for projects, archives, researchers, journals, organizations, and institutions.
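Harvard Dataverse, like other Dataverse installations (e.g., LibraData above), exposes the standard Dataverse Search API, so its holdings can be queried programmatically. A minimal sketch in Python (the query term is illustrative; treat exact field names as assumptions drawn from the documented API):

    import json
    import urllib.request
    from urllib.parse import urlencode

    base = "https://dataverse.harvard.edu/api/search"
    params = urlencode({"q": "social science", "type": "dataset", "per_page": 5})

    # The Search API returns matching records as JSON.
    with urllib.request.urlopen(f"{base}?{params}") as resp:
        results = json.load(resp)

    for item in results["data"]["items"]:
        print(item["name"], "-", item.get("global_id", ""))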
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only a thousand cells, it solves basic problems such as feeding, mate-finding and predator avoidance. Despite being extremely well studied in biology, this organism still eludes a deep, principled understanding of its biology. We are using a bottom-up approach, aimed at observing the worm's behaviour emerge from a simulation of data derived from scientific experiments carried out over the past decade. To do so, we are incorporating the data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to be able to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is Open Source and available on GitHub.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset, website, and available resources over the past five years, and the project has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
Reference anatomies of the brain and corresponding atlases play a central role in experimental neuroimaging workflows and are the foundation for reporting standardized results. The choice of such references (i.e., templates) and atlases is one relevant source of methodological variability across studies, which has recently been brought to attention as an important challenge to reproducibility in neuroscience. TemplateFlow is a publicly available framework for human and nonhuman brain models. The framework combines an open database with software for access, management, and vetting, allowing scientists to distribute their resources under FAIR (findable, accessible, interoperable, reusable) principles. TemplateFlow supports a multifaceted insight into brains across species, and enables multiverse analyses testing whether results generalize across standard references, scales, and, in the long term, species, thereby contributing to increasing the reliability of neuroimaging results.
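TemplateFlow resources are typically retrieved with its Python client, which downloads and caches files on demand. A minimal sketch (the template identifier and query fields follow the client's documented usage, but treat the exact arguments as assumptions):

    from templateflow import api as tflow

    # Fetch the 2 mm, skull-stripped T1-weighted image of a standard
    # human template; the file is downloaded and cached on first access.
    t1w = tflow.get("MNI152NLin2009cAsym", resolution=2, desc="brain", suffix="T1w")
    print(t1w)  # local path to the retrieved NIfTI file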