
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (a few example queries follow below)
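For illustration, the snippet below prints a few queries combining these operators; all search terms are made up.

```python
# Illustrative query strings using the syntax described above.
queries = [
    'climat*',                      # wildcard: climate, climatology, ...
    '"research data"',              # exact phrase
    'ocean + temperature',          # AND (also the default)
    'genome | proteome',            # OR
    'data - software',              # NOT: "data" but not "software"
    '(ocean | marine) + salinity',  # parentheses set precedence
    'anotation~1',                  # fuzzy term, edit distance 1
    '"data repository"~2',          # phrase with slop 2
]
for q in queries:
    print(q)
```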
Found 29 result(s)
The Arctic Data archive System (ADS) collects observation data and modeling products obtained by various Japanese research projects and gives researchers access to the results. By centrally managing a wide variety of Arctic observation data, ADS promotes the use of data across multiple disciplines. Researchers use these integrated databases to clarify the mechanisms of environmental change in the atmosphere, ocean, land surface and cryosphere. ADS is also expected to provide opportunities for collaboration between modelers and field scientists.
GLOBE (Global Collaboration Engine) is an online collaborative environment that enables land change researchers to share, compare and integrate local and regional studies with global data to assess the global relevance of their work.
Science Data Bank is an open generalist data repository developed and maintained by the Computer Network Information Center of the Chinese Academy of Sciences (CNIC). It promotes the publication and reuse of scientific data. Researchers and journal publishers can use it to store, manage and share science data.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task, and that it is impossible to know beforehand which technique or analyst will be most effective.
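As a sketch of how competitors typically obtain data, the official kaggle Python client (pip install kaggle, with an API token in ~/.kaggle/kaggle.json) can download competition files; "titanic" is Kaggle's long-running introductory competition.

```python
# A minimal sketch using the official `kaggle` client; it assumes an
# API token has already been placed in ~/.kaggle/kaggle.json.
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads credentials from ~/.kaggle/kaggle.json

# Download all files of the introductory "titanic" competition
# into the current directory as a zip archive.
api.competition_download_files('titanic', path='.')
```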
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full texts are indexed linguistically, and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. Digitisation was based on the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice ('double keying'). To represent the structure of the text, the electronic full text was encoded in conformity with the XML standard TEI P5. Subsequent stages complete the linguistic analysis: the text is tokenised, lemmatised, and the parts of speech are annotated. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA corpus, it also offers valuable source texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
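To illustrate the kind of markup this implies, the sketch below parses a tiny TEI-style fragment with token-level annotation; the element and attribute names are simplified stand-ins, not the DTA's actual schema.

```python
# Parse a simplified, TEI-like fragment with token-level annotation.
# The <w> elements and @lemma/@pos attributes are illustrative only.
import xml.etree.ElementTree as ET

fragment = """
<s>
  <w lemma="der" pos="ART">Die</w>
  <w lemma="Sonne" pos="NN">Sonne</w>
  <w lemma="scheinen" pos="VVFIN">scheinet</w>
</s>
"""

for w in ET.fromstring(fragment).iter("w"):
    print(f"{w.text:10s} lemma={w.get('lemma'):10s} pos={w.get('pos')}")
```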
CalSurv provides comprehensive information on West Nile virus, plague, malaria, Lyme disease, trench fever and other vector-borne diseases in California: where they are, where they have been, where they may be headed and what new diseases may be emerging. The CalSurv website serves as a portal, a single interface to all surveillance-related websites in California.
depositar (the name is taken from the Portuguese/Spanish verb for "to deposit") is an online repository for research data. The site is built by researchers, for researchers. You are free to deposit, discover, and reuse datasets on depositar for all your research purposes.
The Argo observational network consists of a fleet of 3000+ profiling autonomous floats deployed by about a dozen teams worldwide. WHOI has built about 10% of the global fleet. The mission lifetime of each float is about 4 years. During a typical mission, each float reports a profile of the upper ocean every 10 days. The sensors on board record fundamental physical properties of the ocean: temperature and conductivity (a measure of salinity) as a function of pressure. The depth range of the observed profile depends on the local stratification and the float's mechanical ability to adjust its buoyancy. The majority of Argo floats report profiles from depths of 1 to 2 km. At each surfacing, measurements of temperature and salinity are relayed back to shore via satellite. Telemetry is usually received every 10 days, but high-latitude floats that are iced over accumulate their data and transmit the entire record the next time satellite contact is established. With current battery technology, the best-performing floats last 6+ years and record over 200 profiles.
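The profile counts quoted above follow directly from the 10-day cycle; a back-of-envelope check:

```python
# Back-of-envelope check of the profile counts quoted above.
CYCLE_DAYS = 10  # one profile every 10 days

for mission_years in (4, 6):
    profiles = mission_years * 365 / CYCLE_DAYS
    print(f"{mission_years}-year mission: ~{profiles:.0f} profiles")

# Output:
# 4-year mission: ~146 profiles
# 6-year mission: ~219 profiles  (consistent with "over 200")
```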
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access to, and sharing of, datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using existing published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
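As one example of ENA's data access tools, the ENA Browser exposes a simple REST endpoint that returns records in FASTA format by accession; the accession below is a placeholder, not a specific record.

```python
# A minimal sketch of fetching a record from the ENA Browser REST API.
import requests

accession = "AB000001"  # placeholder accession
url = f"https://www.ebi.ac.uk/ena/browser/api/fasta/{accession}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.text[:200])  # first lines of the FASTA record
```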
Biological collections are replete with taxonomic, geographic, temporal, numerical, and historical information. This information is crucial for understanding and properly managing biodiversity and ecosystems, but is often difficult to access. Canadensys, operated from the Université de Montréal Biodiversity Centre, is a Canada-wide effort to unlock the biodiversity information held in biological collections.
CORA. Repositori de dades de Recerca is a repository of open, curated and FAIR data that covers all academic disciplines. It is a shared service provided by participating Catalan institutions (universities and CERCA research centres). The repository is managed by CSUC, and its technical infrastructure is based on the Dataverse application, developed by an international community of developers and users led by Harvard University (https://dataverse.org).
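Because CORA runs on Dataverse, it inherits Dataverse's standard Search API; the sketch below assumes the base URL, which may differ from the actual installation.

```python
# A sketch against the standard Dataverse Search API; the base URL
# is an assumption and may differ from CORA's actual endpoint.
import requests

BASE = "https://dataverse.csuc.cat"  # assumed endpoint
resp = requests.get(f"{BASE}/api/search",
                    params={"q": "climate", "type": "dataset"},
                    timeout=30)
resp.raise_for_status()
for item in resp.json()["data"]["items"]:
    print(item.get("global_id"), "-", item.get("name"))
```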
The GSA Data Repository is an open file in which authors of articles in our journals can place information that supplements and expands on their article. These supplements will not appear in print but may be obtained from GSA.
Arca Data is Fiocruz's official repository for archiving, publishing, disseminating, preserving and sharing digital research data produced by the Fiocruz community or in partnership with other research institutes or bodies, with the aim of promoting new research, ensuring the reproducibility and replicability of existing research, and promoting open and citizen science. Its objective is to stimulate the wide circulation of scientific knowledge, strengthening the institutional commitment to Open Science and free access to health information, while providing transparency and fostering collaboration among researchers, educators, academics, managers and graduate students for the advancement of knowledge and the creation of solutions that meet the demands of society.
ChemSpider is a free chemical structure database providing fast access to over 58 million structures, properties and associated information. By integrating and linking compounds from more than 400 data sources, ChemSpider enables researchers to discover the most comprehensive view of freely available chemical data from a single online search. It is owned by the Royal Society of Chemistry. ChemSpider builds on the collected sources by adding additional properties, related information and links back to original data sources. ChemSpider offers text and structure searching to find compounds of interest and provides unique services to improve this data by curation and annotation and to integrate it with users’ applications.
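For programmatic text searches, a common route is chemspipy, a third-party Python client for the ChemSpider API; an RSC API key is required, and the placeholder below must be replaced with a real one.

```python
# A minimal sketch using chemspipy (pip install chemspipy), a
# third-party client for the ChemSpider API.
from chemspipy import ChemSpider

cs = ChemSpider('YOUR_API_KEY')  # key from developer.rsc.org

# Text search: iterate over matching compound records.
for compound in cs.search('glucose'):
    print(compound.record_id, compound.molecular_formula)
```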
The Registry of Open Data on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
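A minimal sketch of the access pattern described above, using boto3 with anonymous (unsigned) requests; the bucket name is hypothetical, not a specific registry entry.

```python
# Anonymous access to a public dataset bucket on AWS via boto3.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List the first few objects in a (hypothetical) public bucket.
resp = s3.list_objects_v2(Bucket="example-open-dataset", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```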
DataStream is an open access platform for sharing information on freshwater health. It currently allows users to access, visualize, and download full water quality datasets collected by Indigenous Nations, community groups, researchers and governments throughout five regional hubs: Atlantic Canada, the Great Lakes and Saint Lawrence region, the Lake Winnipeg Basin, the Mackenzie River Basin and the Pacific region. DataStream was developed by The Gordon Foundation and is carried out in collaboration with regional monitoring networks.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and meet customized, energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts and tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and collaborating with a variety of organizations and institutions in a secure environment through EDX's Collaborative Workspaces.
The China National GeneBank database (CNGBdb) is a unified platform for biological big data sharing and application services. CNGBdb integrates a large amount of internal and external biological data from resources such as CNGB, NCBI, and EBI. It comprises several sub-databases covering literature, variation, gene, genome, protein, sequence, organism, project, sample, experiment, run, and assembly. Built on big data and cloud computing technologies, it provides services including archiving, analysis, knowledge search, and management authorization of biological data. CNGBdb adopts the data structures and standards of international omics, health, and medicine initiatives, such as the International Nucleotide Sequence Database Collaboration (INSDC), the Global Alliance for Genomics and Health (GA4GH), the Global Genome Biodiversity Network (GGBN), and the American College of Medical Genetics and Genomics (ACMG), and constructs standardized, widely compatible data structures. All public data and services provided by CNGBdb are freely available to all users worldwide. The CNGB Sequence Archive (CNSA) is CNGBdb's repository for multi-omics data in the life sciences, providing a convenient and efficient archiving system for raw sequencing reads and downstream analysis results. CNSA follows international data standards for omics data and supports online and batch submission of multiple data types such as Project, Sample, Experiment/Run, Assembly, Variation, Metabolism, Single cell, and Sequence. On some projects, CNSA has also achieved the correlation of sample entities, sample information, and analyzed data. Its data submission service can be used as a supplement to the literature publishing process to support early data sharing.
REDU is the institutional open research data repository of the University of Campinas, Brazil. It contains research data produced by all research groups of the University across a wide range of scientific domains, indexed with DataCite DOIs. Created at the end of 2020, it is coordinated by a scientific and technical committee composed of data librarians, IT professionals, and scientists representing user groups. Implemented on top of Dataverse, it exports metadata via OAI-PMH. Files with sensitive content (due to ethical or legal constraints) are not stored there; only their metadata is recorded in REDU, along with contact information so that interested researchers can contact the persons responsible for the files for conditional access. It is being populated little by little, following the University's Open Science policies.
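Since Dataverse installations expose their metadata over OAI-PMH, harvesting REDU's records can be sketched as below; the endpoint URL is an assumption and may differ.

```python
# A sketch of harvesting metadata over OAI-PMH, the standard protocol
# exposed by Dataverse installations; the endpoint is assumed.
import requests
import xml.etree.ElementTree as ET

OAI = "https://redu.unicamp.br/oai"  # assumed endpoint
resp = requests.get(OAI, params={"verb": "ListRecords",
                                 "metadataPrefix": "oai_dc"}, timeout=30)
resp.raise_for_status()

# Print the Dublin Core title of each harvested record.
DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"
for title in ET.fromstring(resp.content).iter(DC_TITLE):
    print(title.text)
```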
ZENODO builds and operates a simple and innovative service that enables researchers, scientists, EU projects and institutions to share and showcase multidisciplinary research results (data and publications) that are not part of existing institutional or subject-based repositories of the research communities. ZENODO enables them to: easily share the long tail of small research results in a wide variety of formats, including text, spreadsheets, audio, video, and images, across all fields of science; display their research results and get credited, by making the results citable and integrating them into existing reporting lines to funding agencies like the European Commission; and easily access and reuse shared research results.
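Zenodo also provides a public REST API for searching and retrieving records; a minimal sketch (no authentication is needed for public records):

```python
# Search Zenodo's public REST API for records.
import requests

resp = requests.get("https://zenodo.org/api/records",
                    params={"q": "machine learning", "size": 5},
                    timeout=30)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["metadata"]["title"])
    print("   DOI:", hit.get("doi"))
```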