Filter

  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database access restrictions
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
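As an illustration of how these operators combine, the sketch below prints a few made-up query strings; the search terms are hypothetical and only the operators reflect the syntax described above.

```python
# Illustrative query strings for the search syntax described above.
# The terms are invented examples; only the operators matter here.
examples = {
    "wildcard":  "linguist*",                           # matches linguistics, linguistic, ...
    "phrase":    '"language resources"',                # exact phrase
    "and":       '+treebank +"syntactic annotation"',   # both terms required (default)
    "or":        "proteomics | metabolomics",           # either term
    "not":       "corpus -newspaper",                   # exclude a term
    "grouping":  '(corpus | treebank) +german',         # parentheses set precedence
    "fuzziness": "circadian~2",                         # word may be up to 2 edits away
    "slop":      '"clinical data"~3',                   # phrase words up to 3 positions apart
}

for name, query in examples.items():
    print(f"{name:10s} {query}")
```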
Found 142 result(s)
CLARINO Bergen Center repository is the repository of CLARINO, the Norwegian infrastructure project. Its goal is to implement the Norwegian part of CLARIN. The ultimate aim is to make existing and future language resources easily accessible for researchers and to bring eScience to humanities disciplines. The repository includes INESS, the Norwegian Infrastructure for the Exploration of Syntax and Semantics. This infrastructure provides access to treebanks, which are databases of syntactically and semantically annotated sentences.
Polish CLARIN node – CLARIN-PL Language Technology Centre – is being built at Wrocław University of Technology. The LTC is aimed at scholars in the humanities and social sciences. Registered users are granted free access to digital language resources and advanced tools to explore them. They can also archive and share their own language data (in written, spoken, video or multimodal form).
The European Social Survey (the ESS) is a biennial multi-country survey covering over 30 nations. The first round was fielded in 2002/2003, the fifth in 2010/2011. The questionnaire includes two main sections, each consisting of approximately 120 items: a 'core' module which remains relatively constant from round to round, plus two or more 'rotating' modules, repeated at intervals. The core module aims to monitor change and continuity in a wide range of social variables, including media use; social and public trust; political interest and participation; socio-political orientations; governance and efficacy; moral, political and social values; social exclusion; national, ethnic and religious allegiances; well-being; health and security; human values; demographics and socio-economics.
virus mentha archives evidence about viral interactions collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. virus mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles. However, no database alone has sufficient literature coverage to offer a complete resource to investigate "the interactome". virus mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns to each interaction a reliability score that takes into account all the supporting evidence. virus mentha offers direct access to viral families such as Orthomyxoviridae, Orthoretrovirinae and Herpesviridae and, in addition, it offers the unique possibility of searching by host organism. The website and the graphical application are designed to make the data stored in virus mentha accessible and analysable to all users. virus mentha supersedes VirusMINT. The source databases are: MINT, DIP, IntAct, MatrixDB, BioGRID.
The UniProt Reference Clusters (UniRef) provide clustered sets of sequences from the UniProt Knowledgebase (including isoforms) and selected UniParc records in order to obtain complete coverage of the sequence space at several resolutions while hiding redundant sequences (but not their descriptions) from view.
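As a rough sketch of programmatic access, UniRef clusters can be retrieved through UniProt's public REST interface; the endpoint, query fields and JSON keys below are assumptions based on the current UniProt website and may need adjusting against the official documentation.

```python
# Sketch: search UniRef clusters via UniProt's REST API.
# Endpoint, query syntax and result fields are assumptions.
import requests

url = "https://rest.uniprot.org/uniref/search"
params = {
    "query": "insulin AND identity:0.9",  # assumed syntax: 90%-identity clusters
    "format": "json",
    "size": 5,
}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

for cluster in response.json().get("results", []):
    print(cluster.get("id"), "-", cluster.get("name"))
```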
STITCH explores the interactions of chemicals and proteins. It integrates information about interactions from metabolic pathways, crystal structures, binding experiments and drug-target relationships. Inferred information from phenotypic effects, text mining and chemical structure similarity is used to predict relations between chemicals. STITCH further allows exploring the network of chemical relations, also in the context of associated binding proteins.
The FAIRDOMHub is built upon the SEEK software suite, which is an open source web platform for sharing scientific research assets, processes and outcomes. FAIRDOM will establish a support and service network for European Systems Biology. It will serve projects in standardizing, managing and disseminating data and models in a FAIR manner: Findable, Accessible, Interoperable and Reusable. FAIRDOM is an initiative to develop a community and establish an internationally sustained Data and Model Management service for the European Systems Biology community. FAIRDOM is a joint action of the ERA-Net EraSysAPP and the European Research Infrastructure ISBE.
The goal of the Center of Estonian Language Resources (CELR) is to create and manage an infrastructure to make Estonian digital language resources (dictionaries, corpora – both text and speech – and various language databases) and language technology tools (software) available to everyone working with digital language materials. CELR coordinates and organises the documentation and archiving of the resources as well as develops language technology standards and draws up necessary legal contracts and licences for different types of users (public, academic, commercial, etc.). In addition to collecting language resources, a system will be launched to introduce the resources to potential users and to inform and educate them. The main users of CELR are researchers from Estonian R&D institutions and Social Sciences and Humanities researchers all over the world via the CLARIN ERIC network of similar centers in Europe. Access to data is provided through different sites: Public Repository https://entu.keeleressursid.ee/public-document, Language resources https://keeleressursid.ee/en/resources/corpora, and MetaShare CELR https://metashare.ut.ee/.
The project is set up in order to improve the infrastructure for text-based linguistic research and development by building a huge, automatically annotated German text corpus and the corresponding tools for corpus annotation and exploitation. DeReKo constitutes the largest linguistically motivated collection of contemporary German texts: it contains fictional, scientific and newspaper texts, as well as several other text types; contains only licensed texts; is encoded with rich meta-textual information; is fully annotated morphosyntactically (three concurrent annotations); is continually expanded, with a focus on size and stratification of data; may be analyzed free of charge via the query system COSMAS II; and serves as a 'primordial sample' from which users may draw specialized sub-samples (so-called 'virtual corpora') to represent the language domain they wish to investigate. Access to data of Das Deutsche Referenzkorpus is also provided by the IDS Repository: https://www.re3data.org/repository/r3d100010382
B2SAFE is a robust, safe and highly available service which allows community and departmental repositories to implement data management policies on their research data across multiple administrative domains in a trustworthy manner. It provides an abstraction layer that virtualizes large-scale data resources, guards against data loss in long-term archiving and preservation, optimizes access for users from different regions, and brings data closer to powerful computers for compute-intensive analysis.
CLARIN.SI is the Slovenian node of the European CLARIN (Common Language Resources and Technology Infrastructure) Centers. The CLARIN.SI repository is hosted at the Jožef Stefan Institute and offers long-term preservation of deposited linguistic resources, along with their descriptive metadata. The integration of the repository with the CLARIN infrastructure gives the deposited resources wide exposure, so that they can be known, used and further developed beyond the lifetime of the projects in which they were produced. Among the resources currently available in the CLARIN.SI repository are the multilingual MULTEXT-East resources, the CC version of Slovenian reference corpus Gigafida, the morphological lexicon Sloleks, the IMP corpora and lexicons of historical Slovenian, as well as many other resources for a variety of languages. Furthermore, several REST-based web services are provided for different corpus-linguistic and NLP tasks.
CLARIN-UK is a consortium of centres of expertise involved in research and resource creation involving digital language data and tools. The consortium includes the national library, and academic departments and university centres in linguistics, languages, literature and computer science.
The focus of PolMine is on texts published by public institutions in Germany. Corpora of parliamentary protocols are at the heart of the project: Parliamentary proceedings are available for long stretches of time, cover a broad set of public policies and are in the public domain, making them a valuable text resource for political science. The project develops repositories of textual data in a sustainable fashion to suit the research needs of political science. Concerning data, the focus is on converting text issued by public institutions into a sustainable digital format (TEI/XML).
The ACEnano Knowledge Infrastructure facilitates access and sharing of methodology applied in nanosafety, starting with nanomaterials characterisation protocols developed or optimised within the ACEnano project.
The GRSF, the Global Record of Stocks and Fisheries, integrates data from three authoritative sources: FIRMS (Fisheries and Resources Monitoring System), RAM (RAM Legacy Stock Assessment Database) and FishSource (Program of the Sustainable Fisheries Partnership). The GRSF content publicly disseminated through this catalogue is distributed as a beta version to test the logic for generating unique identifiers for stocks and fisheries. Access to and review of collated stock and fishery data is restricted to selected users. This beta release can contain errors, and we welcome feedback on content and software performance, as well as on overall usability. Beta users are advised that information on this site is provided on an "as is" and "as available" basis. The accuracy, completeness or authenticity of the information on the GRSF catalogue is not guaranteed. The GRSF reserves the right to alter, limit or discontinue any part of this service at its discretion. Under no circumstances shall the GRSF be liable for any loss, damage, liability or expense suffered that is claimed to result from the use of information posted on this site, including without limitation, any fault, error, omission, interruption or delay. The GRSF is an active database; updates and additions will continue after the beta release. For further information, or to use the GRSF unique identifiers as a beta tester, please contact FIRMS-Secretariat@fao.org.
IoT Lab is a research platform exploring the potential of crowdsourcing and Internet of Things for multidisciplinary research with more end-user interactions. IoT Lab is a European Research project which aims at researching the potential of crowdsourcing to extend IoT testbed infrastructure for multidisciplinary experiments with more end-user interactions. It addresses topics such as: - Crowdsourcing mechanisms and tools; - “Crowdsourcing-driven research”; - Virtualization of crowdsourcing and testbeds; - Ubiquitous Interconnection and Cloudification of testbeds; - Testbed as a Service platform; - Multidisciplinary experiments; - End-user and societal value creation; - Privacy and personal data protection.
HELIX DATA is an integral component of the Hellenic Data Service "HELIX" supporting knowledge management and scholarly communication in Greece. HELIX DATA is the data catalogue and repository, with a dual role to store and preserve data that are self-deposited by researchers as well as to harvest data records from other national data sources and catalogues.
ZENODO builds and operates a simple and innovative service that enables researchers, scientists, EU projects and institutions to share and showcase multidisciplinary research results (data and publications) that are not part of the existing institutional or subject-based repositories of the research communities. ZENODO enables researchers, scientists, EU projects and institutions to:
  • easily share the long tail of small research results in a wide variety of formats including text, spreadsheets, audio, video, and images across all fields of science;
  • display their research results and get credited by making the research results citable and integrating them into existing reporting lines to funding agencies like the European Commission;
  • easily access and reuse shared research results.
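For example, publicly shared ZENODO records can be retrieved through its REST API; the sketch below assumes the standard /api/records search endpoint and common metadata field names, and is not an official example.

```python
# Sketch: search public ZENODO records via its REST API (endpoint assumed).
import requests

resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": 'title:"circadian"', "size": 3, "sort": "mostrecent"},
    timeout=30,
)
resp.raise_for_status()

for hit in resp.json().get("hits", {}).get("hits", []):
    meta = hit.get("metadata", {})
    print(meta.get("doi"), "-", meta.get("title"))
```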
The DARIAH-DE repository is a digital long-term archive for research data from the humanities and cultural sciences. Each object described and stored in the DARIAH-DE Repository has a unique and lasting Persistent Identifier (DOI), with which it is permanently referenced, cited, and kept available for the long term. In addition, the DARIAH-DE Repository enables the sustainable and secure archiving of data collections. The DARIAH-DE Repository is open not only to research projects associated with DARIAH-DE, but also to individual researchers as well as research projects that want to store their research data persistently, in a referenceable form, archived for the long term, and to make them available to third parties. The main focus is simple and user-oriented access to long-term storage of research data. To ensure its long-term sustainability, the DARIAH-DE Repository is operated by the Humanities Data Centre.
ChEMBL is a database of bioactive drug-like small molecules; it contains 2-D structures, calculated properties (e.g. logP, molecular weight, Lipinski parameters, etc.) and abstracted bioactivities (e.g. binding constants, pharmacology and ADMET data). The data are abstracted and curated from the primary scientific literature, and cover a significant fraction of the SAR and discovery of modern drugs. We attempt to normalise the bioactivities into a uniform set of end-points and units where possible, and also to tag the links between a molecular target and a published assay with a set of varying confidence levels. Additional data on the clinical progress of compounds are currently being integrated into ChEMBL.
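As a hedged illustration, a single compound record with its calculated properties can be pulled from the ChEMBL web services; the URL pattern and JSON field names below are assumptions and may differ between ChEMBL releases, so consult the web services documentation for the authoritative interface.

```python
# Sketch: retrieve one ChEMBL molecule record with calculated properties.
# URL pattern and field names are assumptions, not an official example.
import requests

chembl_id = "CHEMBL25"  # example identifier (aspirin)
url = f"https://www.ebi.ac.uk/chembl/api/data/molecule/{chembl_id}.json"

record = requests.get(url, timeout=30).json()
props = record.get("molecule_properties") or {}

print(record.get("pref_name"))
print("MW:", props.get("full_mwt"), "logP:", props.get("alogp"))
```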
The repository is part of the National Research Data Infrastructure initiative Text+, in which the University of Tübingen is a partner. It is housed at the Department of General and Computational Linguistics. The infrastructure is maintained in close cooperation with the Digital Humanities Centre, which is a core facility of the university, collaborating with the library and computing center of the university. Integration of the repository into the national CLARIN-D and international CLARIN infrastructures gives it wide exposure, increasing the likelihood that the resources will be used and further developed beyond the lifetime of the projects in which they were developed. Among the resources currently available in the Tübingen Center Repository, researchers can find widely used treebanks of German (e.g. TüBa-D/Z), the German wordnet (GermaNet), the first manually annotated digital treebank (Index Thomisticus), as well as descriptions of the tools used by the WebLicht ecosystem for natural language processing.
BioDare2 is currently in beta; all new users should try the new service, as training is no longer provided for the classic BioDare. BioDare stands for Biological Data Repository; its main focus is data from circadian experiments. BioDare is an online facility to share, store, analyse and disseminate timeseries data, focussing on circadian clock data, with browser and web service interfaces. Toolbox features include an improved, speedier FFT-NLLS routine and ROBuST's Spectrum Resampling tool for analysing rhythmic time series data.
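To illustrate the kind of analysis such period-estimation routines perform (this is not BioDare2's FFT-NLLS implementation, only the underlying idea), a minimal FFT-based period estimate of a simulated rhythmic time series might look like the sketch below.

```python
# Minimal illustration of FFT-based period estimation for a rhythmic
# time series; a simplified stand-in for dedicated routines like FFT-NLLS.
import numpy as np

# Simulated circadian-like signal: 24 h period, sampled hourly for 5 days.
t = np.arange(0, 120, 1.0)                       # hours
signal = np.cos(2 * np.pi * t / 24.0) + 0.2 * np.random.randn(t.size)

signal = signal - signal.mean()                  # remove the DC component
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1.0)           # cycles per hour

peak = freqs[np.argmax(spectrum[1:]) + 1]        # skip the zero-frequency bin
print(f"Estimated period: {1.0 / peak:.1f} h")
```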