  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (how far apart the words may be)
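These operators can be combined. The queries below are hypothetical examples illustrating the syntax (the terms themselves are made up):

```
climat*                       matches climate, climatic, climatology, …
"social survey"               exact phrase
genome +human                 genome AND human (+ is the default)
genome | proteome             genome OR proteome
aerosol -nuclear              aerosol but NOT nuclear
(virus | viral) +interaction  grouping applied before the AND
protein~1                     tolerates an edit distance of 1 (e.g. "protien")
"data repository"~2           the two words may be up to 2 positions apart
```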
Found 139 result(s)
The European Social Survey (ESS) is a biennial multi-country survey covering over 30 nations. The first round was fielded in 2002/2003, the fifth in 2010/2011. The questionnaire includes two main sections, each consisting of approximately 120 items: a 'core' module, which remains relatively constant from round to round, plus two or more 'rotating' modules repeated at intervals. The core module aims to monitor change and continuity in a wide range of social variables, including media use; social and public trust; political interest and participation; socio-political orientations; governance and efficacy; moral, political and social values; social exclusion; national, ethnic and religious allegiances; well-being; health and security; human values; and demographics and socio-economics.
The 1000 Genomes Project is an international collaboration to produce an extensive public catalog of human genetic variation, including SNPs and structural variants, and their haplotype contexts. This resource will support genome-wide association studies and other medical research studies. The genomes of about 2500 unidentified people from about 25 populations around the world will be sequenced using next-generation sequencing technologies. The results of the study will be freely and publicly accessible to researchers worldwide. The International Genome Sample Resource (IGSR) has been established at EMBL-EBI to continue supporting data generated by the 1000 Genomes Project, supplemented with new data and new analysis.
virus mentha archives evidence about viral interactions collected from different sources and presents these data in a comprehensive way. Its data come from manually curated protein-protein interaction databases that adhere to the IMEx consortium. virus mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles. However, no single database has sufficient literature coverage to offer a complete resource for investigating "the interactome". virus mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns each interaction a reliability score that takes into account all the supporting evidence. virus mentha offers direct access to viral families such as Orthomyxoviridae, Orthoretrovirinae and Herpesviridae; in addition, it offers the unique possibility of searching by host organism. The website and the graphical application are designed to make the data stored in virus mentha accessible and analysable by all users. virus mentha supersedes VirusMINT. The source databases are: MINT, DIP, IntAct, MatrixDB, BioGRID.
The European Soil Data Centre (ESDAC) is the thematic centre for soil related data in Europe. Its ambition is to be the single reference point for and to host all relevant soil data and information at European level. It contains a number of resources that are organized and presented in various ways: datasets, services/applications, maps, documents, events, projects and external links.
The UniProt Reference Clusters (UniRef) provide clustered sets of sequences from the UniProt Knowledgebase (including isoforms) and selected UniParc records in order to obtain complete coverage of the sequence space at several resolutions while hiding redundant sequences (but not their descriptions) from view.
STITCH is a database of interactions between chemicals and proteins. It integrates information about interactions from metabolic pathways, crystal structures, binding experiments and drug-target relationships. Inferred information from phenotypic effects, text mining and chemical structure similarity is used to predict relations between chemicals. STITCH further allows exploring the network of chemical relations, also in the context of associated binding proteins.
DEPOD - the human DEPhOsphorylation Database (version 1.1) is a manually curated database collecting human active phosphatases, their experimentally verified protein and non-protein substrates and dephosphorylation site information, and pathways in which they are involved. It also provides links to popular kinase databases and protein-protein interaction databases for these phosphatases and substrates. DEPOD aims to be a valuable resource for studying human phosphatases and their substrate specificities and molecular mechanisms; phosphatase-targeted drug discovery and development; connecting phosphatases with kinases through their common substrates; completing the human phosphorylation/dephosphorylation network.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm. Frequently requested lines are also kept alive, as well as a selection of wildtype strains. Several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab of the Sanger Centre, Hinxton, UK, and lines generated by ENU mutagenesis in the Nüsslein-Volhard lab are available, in addition to transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. By association with the ComPlat platform, we can also support chemical screens and offer libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz Repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
The main function of the GGSP (Galileo Geodetic Service Provider) is to provide a terrestrial reference frame, in the broadest sense of the word, to both the Galileo Core System (GCS) and the Galileo User Segment (all Galileo users). This implies that the GGSP should enable all users of the Galileo System, including the most demanding ones, to access and realise the GTRF with the precision required for their specific application. Furthermore, the GGSP must ensure the proper interfaces to all users of the GTRF, especially the geodetic and scientific user groups. In addition, the GGSP must ensure adherence to the defined standards in all its products. Last but not least, the GGSP will play a key role in creating awareness of the GTRF and educating users in its usage and realisation.
<<<!!!<<< The repository is no longer available. You can find the data via https://www.re3data.org/repository/r3d100010199 by searching for WATCH. For further information see: https://catalogue.ceh.ac.uk/documents/ba6e8ddd-22a9-457d-acf4-d63cd34f2dda >>>!!!>>>
The European Space Agency's Planetary Science Archive (PSA) is the central repository for all scientific and engineering data returned by ESA's Solar System missions, currently including Giotto, Huygens, Mars Express, Venus Express, Rosetta, SMART-1 and ExoMars 2016, as well as several ground-based cometary observations. Future missions hosted by the PSA will include BepiColombo, the ExoMars Rover and Surface Platform, and JUICE.
<<<!!!<<< Stated 13.02.2020: the repository is offline >>>!!!>>> Data.DURAARK provides a unique collection of real-world datasets from the architectural profession. The repository is unique in that it provides several different data types, such as 3D scans, 3D models, classifying metadata and geodata, for real-world physical buildings. Many of the datasets stem from architectural stakeholders and in this way give the community insights into the range of working methods that the practice employs on large and complex building data.
CLARIN-UK is a consortium of centres of expertise involved in research and resource creation involving digital language data and tools. The consortium includes the national library, and academic departments and university centres in linguistics, languages, literature and computer science.
The focus of PolMine is on texts published by public institutions in Germany. Corpora of parliamentary protocols are at the heart of the project: Parliamentary proceedings are available for long stretches of time, cover a broad set of public policies and are in the public domain, making them a valuable text resource for political science. The project develops repositories of textual data in a sustainable fashion to suit the research needs of political science. Concerning data, the focus is on converting text issued by public institutions into a sustainable digital format (TEI/XML).
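A parliamentary protocol converted to TEI/XML might look like the following minimal sketch. This is an illustrative fragment only; the element choices and the hypothetical speaker are assumptions, and PolMine's actual encoding scheme may differ:

```xml
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Plenary protocol (illustrative sample)</title></titleStmt>
      <publicationStmt><p>Converted from the official printed record.</p></publicationStmt>
      <sourceDesc><p>Hypothetical session, for illustration only.</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <!-- one division per agenda item; one sp element per speech -->
      <div type="agenda_item">
        <sp who="#speaker_1">
          <speaker>Dr. Example (Fraktion A)</speaker>
          <p>Sehr geehrte Damen und Herren, ...</p>
        </sp>
      </div>
    </body>
  </text>
</TEI>
```

Encoding speeches as `sp` elements with speaker attribution is what makes such corpora queryable by speaker, party and period, which is the research need the project's sustainable TEI/XML format serves.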
IoT Lab is a European research project and platform exploring the potential of crowdsourcing to extend Internet of Things (IoT) testbed infrastructure for multidisciplinary experiments with more end-user interaction. It addresses topics such as: crowdsourcing mechanisms and tools; "crowdsourcing-driven research"; virtualisation of crowdsourcing and testbeds; ubiquitous interconnection and cloudification of testbeds; a Testbed-as-a-Service platform; multidisciplinary experiments; end-user and societal value creation; and privacy and personal data protection.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated in today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
BioDare2 is currently in beta. All new users should try the new service, as we no longer provide training for the classic BioDare. BioDare stands for Biological Data Repository; its main focus is data from circadian experiments. BioDare is an online facility to share, store, analyse and disseminate timeseries data, focussing on circadian clock data, with browser and web service interfaces. Toolbox features include an improved, faster FFT-NLLS routine and ROBuST's Spectrum Resampling tool for analysing rhythmic time series data.
The DNA Bank Network was established in spring 2007 and was funded until 2011 by the German Research Foundation (DFG). The network was initiated by GBIF Germany (Global Biodiversity Information Facility). It offers a worldwide unique concept. DNA bank databases of all partners are linked and are accessible via a central web portal, providing DNA samples of complementary collections (microorganisms, protists, plants, algae, fungi and animals). The DNA Bank Network was one of the founders of the Global Genome Biodiversity Network (GGBN) and is fully merged with GGBN today. GGBN agreed on using the data model proposed by the DNA Bank Network. The Botanic Garden and Botanical Museum Berlin-Dahlem (BGBM) hosts the technical secretariat of GGBN and its virtual infrastructure. The main focus of the DNA Bank Network is to enhance taxonomic, systematic, genetic, conservation and evolutionary studies by providing: • high quality, long-term storage of DNA material on which molecular studies have been performed, so that results can be verified, extended, and complemented, • complete on-line documentation of each sample, including the provenance of the original material, the place of voucher deposit, information about DNA quality and extraction methodology, digital images of vouchers and links to published molecular data if available.
The World Data Centre for Aerosols (WDCA) is the data repository and archive for microphysical, optical, and chemical properties of atmospheric aerosol within the World Meteorological Organisation's (WMO) Global Atmosphere Watch (GAW) programme. The goal of the GAW programme is to ensure long-term measurements in order to detect trends in the global distributions of chemical constituents in air and the reasons for them. With respect to aerosols, the objective of GAW is to determine the spatio-temporal distribution of aerosol properties related to climate forcing and air quality on multi-decadal time scales and on regional, hemispheric and global spatial scales.
InterPro collects information about protein sequence analysis and classification, providing access to a database of predictive protein signatures used for the classification and automatic annotation of proteins and genomes. Sequences in InterPro are classified at the superfamily, family, and subfamily levels. InterPro predicts the occurrence of functional domains, repeats, and important sites, and adds in-depth annotation, such as GO terms, to the protein signatures.
The platform hosts the critical edition of the letters written to Jacob Burckhardt, reconstructing in open access one of the most important European correspondences of the 19th century. Save for a few exceptions, these letters are all unpublished. At a later stage, the project aims to publish Jacob Burckhardt's own letters as well. The editing process has been carried out using the Muruca semantic digital library framework, which was modified over the course of the project as the requirements of the philological researchers emerged more clearly. The results are stored in, and accessible from, the front-end of the platform.