Filter: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) can be used to group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
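A minimal sketch (in Python, purely illustrative) of how the operators listed above combine into query strings; the example terms are invented for demonstration and are not taken from this results page.

    # Illustrative query strings using the search operators described above.
    # The terms themselves (genom*, seismology, etc.) are made-up examples.
    examples = {
        "wildcard":  "genom*",                         # matches genome, genomics, ...
        "phrase":    '"molecular interaction"',        # exact phrase
        "and":       "earthquake + seismology",        # both terms (the default)
        "or":        "earthquake | seismology",        # either term
        "not":       "earthquake - engineering",       # exclude a term
        "grouping":  "(earthquake | seismology) + data",
        "fuzziness": "seismolgy~1",                    # within 1 edit of the typo
        "slop":      '"open data"~2',                  # phrase terms within 2 positions
    }
    for name, query in examples.items():
        print(f"{name:>9}: {query}")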
Found 32 result(s)
>>>!!!<<< As of 2017-06-27, the website http://researchcompendia.org is no longer available; the repository software is archived on GitHub: https://github.com/researchcompendia >>>!!!<<< The ResearchCompendia platform is an attempt to use the web to enhance the reproducibility and verifiability, and thus the reliability, of scientific research. We provide the tools to publish the "actual scholarship" by hosting data, code, and methods in a form that is accessible, trackable, and persistent. Some of our short-term goals include: to expand and enhance the platform, including adding executability for a greater variety of coding languages and frameworks and enhancing output presentation; to expand usership and to test the ResearchCompendia model in a number of additional fields, including computational mathematics, statistics, and biostatistics; and to pilot integration with existing scholarly platforms, enabling researchers to discover relevant Research Compendia websites when looking at online articles, code repositories, or data archives.
>>>!!!<<< intrepidbio.com has expired >>>!!!<<< Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but cannot spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
The FishNet network is a collaborative effort among fish collections around the world to share and distribute data on specimen holdings. There is an open invitation for any institution with a fish collection to join.
The Wilson Center Digital Archive contains once-secret documents from governments all across the globe, uncovering new sources and providing fresh insights into the history of international relations and diplomacy. It contains newly declassified historical materials from archives around the world—much of it in translation and including diplomatic cables, high level correspondence, meeting minutes and more. It collects the research of three Wilson Center projects which focus on the interrelated histories of the Cold War, Korea, and Nuclear Proliferation.
<<<!!!<<< CRAWDAD has moved to IEEE DataPort: https://www.re3data.org/repository/r3d100012569 . The datasets in the Community Resource for Archiving Wireless Data at Dartmouth (CRAWDAD) repository are now hosted as the CRAWDAD Collection on IEEE DataPort. After nearly two decades as a stand-alone archive at crawdad.org, the migration of the collection to IEEE DataPort provides permanence and new visibility. >>>!!!>>>
IntAct provides a freely available, open source database system and analysis tools for molecular interaction data. All interactions are derived from literature curation or direct user submissions and are freely available.
The Southern California Earthquake Data Center (SCEDC) operates at the Seismological Laboratory at Caltech and is the primary archive of seismological data for southern California. The 1932-to-present Caltech/USGS catalog maintained by the SCEDC is the most complete archive of seismic data for any region in the United States. Our mission is to maintain an easily accessible, well-organized, high-quality, searchable archive for research in seismology and earthquake engineering.
TERN provides open data, research and management tools, data infrastructure, and site-based research equipment. The open-access ecosystem data is provided via the TERN Data Discovery Portal; see https://www.re3data.org/repository/r3d100012013
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
This site is dedicated to making high value health data more accessible to entrepreneurs, researchers, and policy makers in the hopes of better health outcomes for all. In a recent article, Todd Park, United States Chief Technology Officer, captured the essence of what the Health Data Initiative is all about and why our efforts here are so important.
The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is a publicly accessible earth science data repository created to curate, publicly serve (publish), and archive digital data and information from biological, chemical, and biogeochemical research conducted in coastal, marine, Great Lakes, and laboratory environments. The BCO-DMO repository works closely with investigators funded through the NSF OCE Division's Biological and Chemical Sections and the Division of Polar Programs Antarctic Organisms & Ecosystems. The office provides services that span the full data life cycle, from data management planning support and DOI creation to archiving with appropriate national facilities.
The NCBI Short Genetic Variations database, commonly known as dbSNP, catalogs short variations in nucleotide sequences from a wide range of organisms. These variations include single nucleotide variations, short nucleotide insertions and deletions, short tandem repeats, and microsatellites. Short genetic variations may be common, thus representing true polymorphisms, or they may be rare. Some rare human entries have additional information associated with them, including disease associations, genotype information, and allele origin, as some variations are somatic rather than germline events. ***NCBI will phase out support for non-human organism data in dbSNP and dbVar beginning on September 1, 2017***
<<<!!!<<< As of 2023, support for maintaining the www.modencode.org and intermine.modencode.org sites has been retired following the end of funding. To access data from the modENCODE project, or for questions regarding the data it makes available, please visit these databases:
  • Fly data: ModENCODE data at FlyBase: https://wiki.flybase.org/wiki/FlyBase:ModENCODE_data_at_FlyBase ; FlyBase: https://www.re3data.org/repository/r3d100010591
  • Worm data: WormBase: https://www.re3data.org/repository/r3d100010424
Data, including modENCODE and modERN project data, is also available at the ENCODE Portal: https://www.re3data.org/repository/r3d100013051 (search metadata and view datasets for Drosophila and Caenorhabditis: https://www.encodeproject.org/matrix/?type=Experiment&control_type!=*&status=released&replicates.library.biosample.donor.organism.scientific_name=Drosophila+melanogaster&replicates.library.biosample.donor.organism.scientific_name=Caenorhabditis+elegans&replicates.library.biosample.donor.organism.scientific_name=Drosophila+pseudoobscura&replicates.library.biosample.donor.organism.scientific_name=Drosophila+mojavensis). >>>!!!>>>
NeuroMorpho.Org is a centrally curated inventory of digitally reconstructed neurons associated with peer-reviewed publications. It contains contributions from over 80 laboratories worldwide and is continuously updated as new morphological reconstructions are collected, published, and shared. To date, NeuroMorpho.Org is the largest collection of publicly accessible 3D neuronal reconstructions and associated metadata which can be used for detailed single cell simulations.
HITRAN is an acronym for the HIgh-resolution TRANsmission molecular absorption database. The HITRAN compilation, maintained by the SAO, is used for predicting and simulating the transmission and emission of light in atmospheres. It is the world-standard database in molecular spectroscopy, and the journal article describing it is the most cited reference in the geosciences. There are presently about 5,000 HITRAN users worldwide. Its associated database HITEMP (high-temperature spectroscopic absorption parameters) is accessible via the HITRAN website.
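As a sketch of programmatic use, the snippet below assumes the HITRAN Application Programming Interface (HAPI) Python package; the molecule choice, wavenumber range, and line-shape function are illustrative assumptions, not details taken from the description above.

    # Hedged sketch: fetch H2O lines from HITRANonline via HAPI and compute an
    # air-broadened absorption coefficient. Requires the "hitran-api" package
    # and network access; the wavenumber range (cm^-1) is an arbitrary example.
    from hapi import db_begin, fetch, absorptionCoefficient_Lorentz

    db_begin("hitran_data")                 # local cache directory (assumed name)
    fetch("H2O", 1, 1, 3400, 4100)          # molecule ID 1, isotopologue 1
    nu, coef = absorptionCoefficient_Lorentz(SourceTables="H2O")
    print(len(nu), "spectral points computed")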
BEA produces economic accounts statistics that enable government and business decision-makers, researchers, and the American public to follow and understand the performance of the Nation's economy. To do this, BEA collects source data, conducts research and analysis, develops and implements estimation methodologies, and disseminates statistics to the public.
The Coastal Data Information Program (CDIP) is an extensive network for monitoring waves and beaches along the coastlines of the United States. Since its inception in 1975, the program has produced a vast database of publicly-accessible environmental data for use by coastal engineers and planners, scientists, mariners, and marine enthusiasts. The program has also remained at the forefront of coastal monitoring, developing numerous innovations in instrumentation, system control and management, computer hardware and software, field equipment, and installation techniques.
This web site is provided by the United States Geological Survey’s (USGS) Earthquake Hazards Program as part of our effort to reduce earthquake hazard in the United States. We are part of the USGS Hazards Mission Area and are the USGS component of the congressionally established, multi-agency National Earthquake Hazards Reduction Program (NEHRP).
<<<!!!<<< USHIK was archived because some of the metadata are maintained by other sites and there is no need for duplication. The USHIK metadata registry was a neutral repository of metadata from an authoritative source, used to promote interoperability and reuse of data. The registry did not attempt to change the metadata content but rather provided a structured way to view the data for the technical or casual user. For complete information, see: https://www.ahrq.gov/data/ushik.html >>>!!!>>>
The Registry of Open Data on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
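As an illustrative sketch of the access pattern described above, the following assumes the boto3 SDK and anonymous (unsigned) requests against a publicly readable bucket; the bucket name is an example and should be replaced with the bucket listed on the relevant registry entry.

    # Hedged sketch: list a few objects in a public Registry of Open Data bucket
    # using anonymous S3 access. "noaa-ghcn-pds" is an illustrative bucket name;
    # substitute the bucket named on the dataset's registry page.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    resp = s3.list_objects_v2(Bucket="noaa-ghcn-pds", MaxKeys=5)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])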
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or just forgetting where you put the damn thing.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contribution. Assign citable contributor credit to any research material: tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired, as it is developed or after publication of reports. Find public projects here.
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of its lifecycle, such as manuscript submission or the onset of data collection. Discover public registrations here.
  • Manage scientific workflow. A structured, flexible system can provide efficiency gains to workflow and clarity to project objectives.
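As a hedged sketch of finding public projects programmatically, the snippet below assumes the public OSF v2 REST API at https://api.osf.io/v2/; the endpoint, filter parameter, and response fields are assumptions based on that API, not details stated in the description above.

    # Hedged sketch: list a handful of public OSF projects (nodes) via the
    # assumed OSF v2 API. Requires the "requests" package and network access.
    import requests

    resp = requests.get(
        "https://api.osf.io/v2/nodes/",
        params={"filter[public]": "true", "page[size]": 5},
        timeout=30,
    )
    resp.raise_for_status()
    for node in resp.json()["data"]:
        print(node["id"], node["attributes"]["title"])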
>>>!!!<<< caArray Retirement Announcement >>>!!!<<< The National Cancer Institute (NCI) Center for Biomedical Informatics and Information Technology (CBIIT) instance of the caArray database was retired on March 31st, 2015. All publicly accessible caArray data and annotations have been archived and remain available via FTP download ( https://wiki.nci.nih.gov/x/UYHeDQ ) and at GEO ( http://www.ncbi.nlm.nih.gov/geo/ ). >>>!!!<<< While NCI will not be able to provide technical support for the caArray software after the retirement, the source code is available on GitHub ( https://github.com/NCIP/caarray ), and we encourage continued community development. Molecular Analysis of Brain Neoplasia (Rembrandt fine-00037) gene expression data has been loaded into ArrayExpress: http://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-3073 >>>!!!<<< caArray is an open-source, web- and programmatically accessible microarray data management system that supports the annotation of microarray data using MAGE-TAB and web-based forms. Data and annotations may be kept private to the owner, shared with user-defined collaboration groups, or made public. The NCI instance of caArray hosts many cancer-related public datasets available for download.
Remote Sensing Systems is a world leader in processing and analyzing microwave data from satellite microwave sensors. We specialize in algorithm development, instrument calibration, ocean product development, and product validation. We have worked with more than 30 satellite microwave radiometer, sounder, and scatterometer instruments over the past 40 years. Currently, we operationally produce satellite retrievals for SSMIS, AMSR2, WindSat, and ASCAT. The geophysical retrievals obtained from these sensors are made available in near-real-time (NRT) to the global scientific community and general public via FTP and this web site.