  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
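
The operators listed above combine in the same way as a Lucene-style query language. The following short Python sketch is purely illustrative: the search terms are invented, and only the way the operators compose matters.

# Illustrative query strings for the search syntax described above.
# The search terms themselves are invented; only the operators matter.
example_queries = [
    'climat*',                     # trailing * enables a wildcard search
    '"high energy physics"',       # quotes search for an exact phrase
    'genome + annotation',         # + is an AND search (also the default)
    'hydrology | oceanography',    # | is an OR search
    'physics -astroparticle',      # - excludes a term (NOT)
    '(survey | census) + health',  # parentheses set precedence
    'oceanograhpy~2',              # ~N after a word: fuzziness (edit distance 2)
    '"data archive"~3',            # ~N after a phrase: slop of up to 3 positions
]
for query in example_queries:
    print(query)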
Found 87 result(s)
The National High Energy Physics Science Data Center (NHEPSDC) is a repository for high-energy physics. In 2019, it was designated a national-level scientific data center by the Ministry of Science and Technology of China (MOST). NHEPSDC is constructed and operated by the Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences (CAS). It consists of a main data center in Beijing, a branch center in the Guangdong-Hong Kong-Macao Greater Bay Area, and a branch center in the Huairou District of Beijing. The mission of NHEPSDC is to provide services for data collection, archiving, long-term preservation, access and sharing, software tools, and data analysis, mainly for high-energy physics and related scientific research activities. The data collected can be roughly divided into two categories: raw data from large scientific facilities, and data generated by general scientific and technological projects (usually supported by government funding), hereafter referred to as generic data. More than 70 people work at NHEPSDC, including 18 in high-energy physics, 17 in computer science, 15 in software engineering and 20 in data management, along with other operations engineers. NHEPSDC is equipped with a hierarchical storage system, high-performance computing power, high-bandwidth domestic and international network links, and a professional service support system. Over the past three years, data holdings have grown by about 10 PB per year. By integrating these data resources with the IT environment, NHEPSDC provides users with a state-of-the-art data processing platform for scientific research; more than 400 PB of data are accessed each year across more than 10 million visits.
<<<!!!<<< 2017-06-02: We recently suffered a server failure and are working to bring the full ORegAnno website back online. In the meantime, you may download the complete database here: http://www.oreganno.org/dump/ ; Data are also available through the UCSC Genome Browser (e.g., hg38 -> Regulation -> ORegAnno) https://genome.ucsc.edu/cgi-bin/hgTrackUi?hgsid=686342163_2it3aVMQVoXWn0wuCjkNOVX39wxy&c=chr1&g=oreganno >>>!!!>>> The Open REGulatory ANNOtation database (ORegAnno) is an open database for the curation of known regulatory elements from the scientific literature. Annotation is collected from users worldwide for various biological assays and is automatically cross-referenced against PubMed, Entrez Gene, EnsEMBL, dbSNP, the eVOC cell type ontology, and the Taxonomy database, where appropriate, with information regarding the original experimentation performed (evidence). ORegAnno further provides an open validation process for all regulatory annotation in the public domain. Assigned validators receive notification of new records in the database and are able to cross-reference the citation to ensure record integrity. Validators can modify any record (deprecating the old record and creating a new one) if an error is found. Furthermore, any contributor to the database can comment on any annotation by marking errors or adding special reports on function as they see fit. These features of ORegAnno ensure that the collection is of the highest quality and uniquely provide a dynamic view of our changing understanding of gene regulation in the various genomes.
eLaborate is an online work environment in which scholars can upload scans, transcribe and annotate text, and publish the results as an online text edition that is freely available to all users. Brief information about, and links to, already published editions is presented on the Editions page under Published. Information about editions currently being prepared is posted on the Ongoing projects page. The eLaborate work environment for the creation and publication of online digital editions is developed by the Huygens Institute for the History of the Netherlands of the Royal Netherlands Academy of Arts and Sciences. Although the institute considers itself primarily a research facility and does not maintain a public collection profile, Huygens ING actively maintains almost 200 digitally available resource collections.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
The central data management system of the USGS for water data, providing access to water-resources data collected at approximately 1.5 million sites in all 50 states, the District of Columbia, Puerto Rico, the Virgin Islands, Guam, American Samoa and the Commonwealth of the Northern Mariana Islands. It includes data on water use and quality, groundwater, and surface water.
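
The entry above does not name a specific access method, but data of this kind is commonly retrieved through the publicly documented USGS water services. Below is a minimal Python sketch, assuming the NWIS Instantaneous Values endpoint at https://waterservices.usgs.gov/nwis/iv/ with its format, sites and parameterCd parameters; the site number (01646500) and parameter code (00060, discharge) are examples only.

import requests

# Minimal sketch: query the NWIS Instantaneous Values service for one site.
# The site number (01646500) and parameter code (00060, discharge) are
# placeholders for this example; substitute the site and parameter you need.
resp = requests.get(
    "https://waterservices.usgs.gov/nwis/iv/",
    params={"format": "json", "sites": "01646500", "parameterCd": "00060"},
    timeout=30,
)
resp.raise_for_status()
# Walk the returned time series defensively, since the layout may change.
for series in resp.json().get("value", {}).get("timeSeries", []):
    site_name = series.get("sourceInfo", {}).get("siteName", "unknown site")
    readings = series.get("values", [{}])[0].get("value", [])
    print(site_name, "-", len(readings), "readings")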
Arquivo.pt is a research infrastructure that preserves millions of files collected from the web since 1996 and provides a public search service over this information. It contains information in several languages. It periodically collects and stores information published on the web, then processes the collected data to make it searchable, providing a “Google-like” service for searching the past web (English user interface available at https://arquivo.pt/?l=en). This preservation workflow is performed through a large-scale distributed information system, which can also be accessed through an API (https://arquivo.pt/api).
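
A minimal Python sketch of querying the archive programmatically, assuming the TextSearch endpoint (https://arquivo.pt/textsearch) and the q and maxItems parameters described in the API documentation at https://arquivo.pt/api; the response field names are read defensively because they may vary between API versions and should be verified against the current documentation.

import requests

# Minimal sketch: full-text search of the archived web via the Arquivo.pt
# TextSearch service. Endpoint and parameter names follow the documentation
# at https://arquivo.pt/api as understood here; verify before relying on them.
resp = requests.get(
    "https://arquivo.pt/textsearch",
    params={"q": "biblioteca nacional", "maxItems": 5},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json().get("response_items", []):
    # Field names are read defensively in case they differ between versions.
    print(item.get("title"), "->", item.get("linkToArchive") or item.get("originalURL"))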
The Polar Data Catalogue is an online database of metadata and data that describes, indexes and provides access to diverse data sets generated by polar researchers. These records cover a wide range of disciplines from natural sciences and policy, to health, social sciences, and more.
The aim of the KCDC project (KASCADE Cosmic Ray Data Centre) is the installation and establishment of a public data centre for high-energy astroparticle physics based on the data of the KASCADE experiment. KASCADE was a very successful large detector array that recorded data for more than 20 years on the site of KIT Campus North, Karlsruhe, Germany (formerly Forschungszentrum Karlsruhe) at 49.1°N, 8.4°E, 110 m a.s.l. Over its lifetime, KASCADE collected more than 1.7 billion events, of which some 433 million survived all quality cuts. Initially, about 160 million events are available here for public use.
The National Archives makes Denmark's largest collection of questionnaire-based research data available to researchers and students. Users can order quantitative research data, conduct analyses online, and access register data and international survey data. Formerly known as the Danish Data Archive (DDA), it served as the national social science data archive.
The Geoscience Data Exchange (GDEX) mission is to provide public access to data and other digital research assets related to the Earth and its atmosphere, oceans, and space environment. GDEX fulfills federal and scientific publication requirements for open data access by providing long-term curation and stewardship of research assets; enabling scientific transparency and traceability of research findings in digital formats; complementing existing NCAR community data management and archiving capabilities; and facilitating openness and accessibility for the public to leverage the research assets and thereby benefit from NCAR's historical and ongoing scientific research. This mission intentionally supports and aligns with those of NCAR and its sponsor, the National Science Foundation (NSF).
GigaDB primarily serves as a repository to host data and tools associated with articles published by GigaScience Press; GigaScience and GigaByte (both are online, open-access journals). GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support a unit-of-work (article or study). GigaDB allows the integration of manuscript publication with supporting data and tools.
CORA. Repositori de dades de Recerca (Research Data Repository) is a repository of open, curated and FAIR data that covers all academic disciplines. It is a shared service provided by the participating Catalan institutions (universities and CERCA research centres). The repository is managed by CSUC, and its technical infrastructure is based on the Dataverse application, developed by an international community of developers and users led by Harvard University (https://dataverse.org).
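
Because the repository runs on Dataverse, its holdings can in principle be queried through the standard Dataverse Search API (GET /api/search). A minimal Python sketch follows; the base URL is a placeholder, since the entry does not state the installation's own host, and the query term is only illustrative.

import requests

# Minimal sketch using the standard Dataverse Search API (GET /api/search).
# BASE_URL is a placeholder: substitute the CORA installation's own address,
# or keep https://demo.dataverse.org to try the call against the public demo.
BASE_URL = "https://demo.dataverse.org"

resp = requests.get(
    f"{BASE_URL}/api/search",
    params={"q": "climate", "type": "dataset", "per_page": 5},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json().get("data", {}).get("items", []):
    print(item.get("name"), "-", item.get("global_id"))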
The Language Archive at the Max Planck Institute in Nijmegen provides a unique record of how people around the world use language in everyday life. It focuses on collecting spoken and signed language materials in audio and video form along with transcriptions, analyses, annotations and other types of relevant material (e.g. photos, accompanying notes).
<<<!!!<<< The repository is no longer available. >>>!!!>>> The information is accessible through PubChem: https://pubchem.ncbi.nlm.nih.gov/. Help for HSDB users in PubChem (PDF): https://www.nlm.nih.gov/toxnet/Accessing_HSDB_Content_from_PubChem.pdf Help for HSDB users in PubChem (web page): https://www.nlm.nih.gov/toxnet/Accessing_HSDB_Content_from_PubChem.html
<<<!!!<<< This site is no longer maintained and is provided for reference only. Some functionality or links may not work. For all enquiries please contact the Ensembl Helpdesk http://www.ensembl.org/Help/Contact >>>!!!>>> PhytoPath is a bioinformatics resource that integrates genome-scale data from important plant pathogen species with literature-curated information about the phenotypes of host infection. Using the Ensembl Genomes browser, it provides access to complete genome assemblies and gene models of priority crop and model fungal, oomycete and bacterial phytopathogens. PhytoPath also links genes to disease progression using data from the curated PHI-base resource. The PhytoPath portal is a joint project bringing together Ensembl Genomes with PHI-base, a community-curated resource describing the role of genes in pathogenic infection. PhytoPath provides access to genomic and phenotypic data from fungal and oomycete plant pathogens, and has enabled a considerable increase in the coverage of phytopathogen genomes in Ensembl Fungi and Ensembl Protists. PhytoPath also provides enhanced searching of the PHI-base resource as well as of the fungi and protists in Ensembl Genomes.
Tethys is an Open Access Research Data Repository of GeoSphere Austria, which publishes and distributes georeferenced geoscientific research data generated at and in cooperation with GeoSphere Austria. The research data publications and the associated metadata are predominantly provided in German or English; the abstracts are provided in both languages. Tethys aims to provide published data sets as open data and in accordance with the FAIR Data Principles: findable, accessible, interoperable and reusable.
This site provides access to complete, annotated genomes from bacteria and archaea (present in the European Nucleotide Archive) through the Ensembl graphical user interface (genome browser). Ensembl Bacteria contains genomes from annotated INSDC records that are loaded into Ensembl multi-species databases, using the INSDC annotation import pipeline.
In the framework of the Collaborative Research Centre/Transregio 32 ‘Patterns in Soil-Vegetation-Atmosphere Systems: Monitoring, Modelling, and Data Assimilation’ (CRC/TR32, www.tr32.de), funded by the German Research Foundation from 2007 to 2018, a research data management (RDM) system was designed and implemented in-house. The so-called CRC/TR32 project database (TR32DB, www.tr32db.de) has been operating online since early 2008. The TR32DB handles all data, including metadata, created by the project participants from the institutions involved (e.g. the Universities of Cologne, Bonn and Aachen, and the Research Centre Jülich) and research fields (e.g. soil and plant sciences, hydrology, geography, geophysics, meteorology, remote sensing). The data result from several field measurement campaigns, meteorological monitoring, remote sensing, laboratory studies and modelling approaches. Furthermore, outcomes of the scientists, such as publications, conference contributions, PhD reports and corresponding images, are collected in the TR32DB.