  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
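For illustration, a few example queries combining these operators (all keywords are purely illustrative and not tied to any specific repository entry):

    climate*                     wildcard: matches climate, climatology, climatic, ...
    "sea level" + tides          phrase search combined with an AND term
    glacier | permafrost         OR search
    epigenetics -cancer          NOT: excludes results mentioning cancer
    (ocean | marine) + biology   parentheses set precedence
    geomagnetism~2               fuzzy term match within edit distance 2
    "data center"~3              phrase match with a slop of 3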
Found 31 result(s)
As stated on 2017-06-27, the website http://researchcompendia.org is no longer available; the repository software is archived on GitHub at https://github.com/researchcompendia. The ResearchCompendia platform is an attempt to use the web to enhance the reproducibility and verifiability, and thus the reliability, of scientific research. We provide the tools to publish the "actual scholarship" by hosting data, code, and methods in a form that is accessible, trackable, and persistent. Some of our short-term goals include: expanding and enhancing the platform, including adding executability for a greater variety of coding languages and frameworks and enhancing output presentation; expanding usership and testing the ResearchCompendia model in a number of additional fields, including computational mathematics, statistics, and biostatistics; and piloting integration with existing scholarly platforms, enabling researchers to discover relevant Research Compendia websites when looking at online articles, code repositories, or data archives.
The World Data Centre section provides software, data catalogue information, and data produced by IPS Radio and Space Services over the past few decades. You can download data files, plot graphs from data files, check data availability, and retrieve data sets and station information.
The department specializes in developing complex distributed systems for satellite data processing. Its main task is the development, validation and implementation of various satellite data processing methods in the form of information services and specific systems.
We developed a method, ChIP-sequencing (ChIP-seq), combining chromatin immunoprecipitation (ChIP) and massively parallel sequencing to identify mammalian DNA sequences bound by transcription factors in vivo. We used ChIP-seq to map STAT1 targets in interferon-gamma (IFN-gamma)-stimulated and unstimulated human HeLa S3 cells, and compared the method's performance to ChIP-PCR and to ChIP-chip for four chromosomes. Data cover chromatin immunoprecipitation of both transcription factors and histone modifications. Sequence files and the associated probability files are also provided.
The EUDAT project aims to contribute to the production of a Collaborative Data Infrastructure (CDI). The project's target is to provide a pan-European solution to the challenge of data proliferation in Europe's scientific and research communities. The EUDAT vision is to support a Collaborative Data Infrastructure which will allow researchers to share data within and between communities and enable them to carry out their research effectively. EUDAT aims to provide a solution that will be affordable, trustworthy, robust, persistent and easy to use. EUDAT comprises 26 European partners, including data centres, technology providers, research communities and funding agencies from 13 countries. B2FIND is the EUDAT metadata service allowing users to discover what kind of data is stored through the B2SAFE and B2SHARE services, which collect a large number of datasets from various disciplines. EUDAT will also harvest metadata from communities that have stable metadata providers to create a comprehensive joint catalogue to help researchers find interesting data objects and collections.
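As a rough sketch of how the B2FIND catalogue could be queried programmatically, the snippet below assumes that B2FIND exposes the standard CKAN action API at https://b2find.eudat.eu; both that assumption and the search term are illustrative, not taken from the description above.

    # Minimal sketch: keyword search against a CKAN-style catalogue (assumed for B2FIND).
    # The base URL and the query term "climate" are assumptions for illustration.
    import requests

    response = requests.get(
        "https://b2find.eudat.eu/api/3/action/package_search",
        params={"q": "climate", "rows": 5},
        timeout=30,
    )
    response.raise_for_status()
    for dataset in response.json()["result"]["results"]:
        print(dataset["title"])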
GLOBE (Global Collaboration Engine) is an online collaborative environment that enables land change researchers to share, compare and integrate local and regional studies with global data to assess the global relevance of their work.
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in new and interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
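As a minimal sketch of such a "sophisticated query", the snippet below sends a SPARQL query to the public DBpedia endpoint (https://dbpedia.org/sparql); the particular query (cities in Germany and their population) is an illustrative example only.

    # Minimal sketch: issue a SPARQL query to the public DBpedia endpoint over HTTP.
    # The query itself is an illustrative example, not part of the registry description.
    import requests

    query = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?city ?population WHERE {
      ?city a dbo:City ;
            dbo:country dbr:Germany ;
            dbo:populationTotal ?population .
    }
    LIMIT 5
    """

    response = requests.get(
        "https://dbpedia.org/sparql",
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    response.raise_for_status()
    for row in response.json()["results"]["bindings"]:
        print(row["city"]["value"], row["population"]["value"])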
Merritt is a curation repository for the preservation of and access to the digital research data of the ten-campus University of California system and external project collaborators. Merritt is supported by the University of California Curation Center (UC3) at the California Digital Library (CDL). While Merritt itself is content agnostic, accepting digital content regardless of domain, format, or structure, it is being used for management of research data, and it forms the basis for a number of domain-specific repositories, such as the ONEShare repository for earth and environmental science and the DataShare repository for life sciences. Merritt provides persistent identifiers, storage replication, fixity audit, complete version history, a REST API, a comprehensive metadata catalog for discovery, ATOM-based syndication, and curatorially defined collections, access control rules, and data use agreements (DUAs). Merritt content upload and download may each be curatorially designated as public or restricted. Merritt DOIs are provided by UC3's EZID service, which is integrated with DataCite. All DOIs and associated metadata are automatically registered with DataCite and are harvested by Ex Libris PRIMO and Thomson Reuters Data Citation Index (DCI) for high-level discovery. Merritt is also a member node in the DataONE network; curatorially designated data submitted to Merritt are automatically registered with DataONE for additional replication and federated discovery through the ONEMercury search/browse interface.
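Because Merritt DOIs and their metadata are registered with DataCite, the metadata for any Merritt-issued DOI can be retrieved from the public DataCite REST API. The sketch below uses a placeholder DOI, since no specific Merritt object is named in the description.

    # Minimal sketch: look up DOI metadata via the public DataCite REST API.
    # "10.xxxx/example" is a placeholder; substitute a real Merritt-issued DOI.
    import requests

    doi = "10.xxxx/example"  # hypothetical DOI
    response = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=30)
    if response.ok:
        attributes = response.json()["data"]["attributes"]
        print(attributes["titles"], attributes["publisher"], attributes["publicationYear"])
    else:
        print("DOI not found:", response.status_code)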
The Candida Genome Database (CGD) provides online access to genomic sequence data and manually curated functional information about genes and proteins of the human pathogen Candida albicans and related species. CGD is based on the Saccharomyces Genome Database. C. albicans is the best studied of the human fungal pathogens. It is a common commensal organism of healthy individuals, but can cause debilitating mucosal infections and life-threatening systemic infections, especially in immunocompromised patients. C. albicans also serves as a model organism for the study of other fungal pathogens.
The NCBI Short Genetic Variations database, commonly known as dbSNP, catalogs short variations in nucleotide sequences from a wide range of organisms. These variations include single nucleotide variations, short nucleotide insertions and deletions, short tandem repeats and microsatellites. Short genetic variations may be common, thus representing true polymorphisms, or they may be rare. Some rare human entries have additional information associated with them, including disease associations, genotype information and allele origin, as some variations are somatic rather than germline events. Note: NCBI will phase out support for non-human organism data in dbSNP and dbVar beginning on September 1, 2017.
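Individual dbSNP records can be retrieved through NCBI's E-utilities. The sketch below assumes the esummary endpoint with db=snp and uses rs7412 purely as an illustrative identifier (E-utilities takes the numeric part of the rs ID).

    # Minimal sketch: fetch a dbSNP record summary via NCBI E-utilities (esummary).
    # rs7412 is an illustrative example; the numeric part of the rs ID is passed as the UID.
    import requests

    rs_number = "7412"  # i.e. rs7412
    response = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi",
        params={"db": "snp", "id": rs_number, "retmode": "json"},
        timeout=30,
    )
    response.raise_for_status()
    record = response.json()["result"][rs_number]
    print(record)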
B2SHARE is a user-friendly, reliable and trustworthy way for researchers, scientific communities and citizen scientists to store and share small-scale research data from diverse contexts and disciplines. B2SHARE adds value to your research data via (domain-tailored) metadata and by assigning citable persistent identifiers (PIDs, Handles) to ensure long-lasting access and references. B2SHARE is one of the B2 services developed via EUDAT; long-tail data deposits are free of charge. Special arrangements, such as branding and special metadata elements, can be made on request.
depositar, which takes its name from the Portuguese/Spanish verb meaning "to deposit", is an online repository for research data. The site is built by researchers, for researchers. You are free to deposit, discover, and reuse datasets on depositar for all your research purposes.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access and sharing of datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results including a link to the original dataset. For more information on CONP, please visit https://conp.ca
The long-term goal of this project is to implement a new strategy for preserving and providing access to the astrophysical data heritage. IA2 is an ambitious Italian astrophysical research infrastructure project that aims at coordinating different national initiatives to improve the quality of astrophysical data services and at facilitating access to these data for research purposes. The first working target is the implementation of the TNG Long-Term Archive (LTA). Its feasibility was demonstrated by the LTA pilot project prototype, funded by CNAA in 2001 and completed successfully in July 2002. The implementation of the TNG archive implies: interfacing with the Centro "Galileo Galilei" (CGG) for the acquisition of TNG data; long-term storage of scientific, technical and auxiliary data from the TNG; providing accessibility by the CGG staff and by the scientific community to original and derived data; and providing tools to support the life cycle of observing proposals. The second target of the proposal aims at ensuring harmonization with other projects related to archiving of data of astrophysical interest, with particular reference to projects involving the Italian astronomical community (LBT, VST, GSC-II, DPOSS, …), the Italian Solar and Solar System Physics community (SOLAR, SOLRA and ARTHEMIS, which form SOLARNET, a future node of EGSO), and the national and international coordination efforts fostering the idea of a multiwavelength Virtual Astronomical Observatory and the use of the archived data through the Italian Astronomical Grid.
Biological collections are replete with taxonomic, geographic, temporal, numerical, and historical information. This information is crucial for understanding and properly managing biodiversity and ecosystems, but is often difficult to access. Canadensys, operated from the Université de Montréal Biodiversity Centre, is a Canada-wide effort to unlock the biodiversity information held in biological collections.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
CORA. Repositori de dades de Recerca is a repository of open, curated and FAIR data that covers all academic disciplines. It is a shared service provided by participating Catalan institutions (universities and CERCA research centres). The repository is managed by CSUC, and its technical infrastructure is based on the Dataverse application, developed by an international community of developers and users led by Harvard University (https://dataverse.org).
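Because the repository runs on Dataverse, its holdings can in principle be queried through the standard Dataverse Search API. In the sketch below, the base URL for the CORA installation is an assumption and the search term is illustrative only.

    # Minimal sketch: query a Dataverse installation's Search API.
    # The base URL for CORA is assumed; adjust it to the actual installation.
    import requests

    base_url = "https://dataverse.csuc.cat"  # assumed CORA base URL
    response = requests.get(
        f"{base_url}/api/search",
        params={"q": "climate", "type": "dataset", "per_page": 5},
        timeout=30,
    )
    response.raise_for_status()
    for item in response.json()["data"]["items"]:
        print(item["name"], item.get("global_id", ""))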
CEEHRC represents a multi-stage funding commitment by the Canadian Institutes of Health Research (CIHR) and multiple Canadian and international partners. The overall aim is to position Canada at the forefront of international efforts to translate new discoveries in the field of epigenetics into improved human health. The two sites will focus on sequencing human reference epigenomes and developing new technologies and protocols; they will also serve as platforms for other CEEHRC funding initiatives, such as catalyst and team grants. The complementary reference epigenome mapping efforts of the two sites will focus on a range of common human diseases. The Vancouver group will focus on the role of epigenetics in the development of cancer, including lymphoma and cancers of the ovary, colon, breast, and thyroid. The Montreal team will focus on autoimmune / inflammatory, cardio-metabolic, and neuropsychiatric diseases, using studies of identical twins as well as animal models of human disease.
As a World Data Center for Geomagnetism, the centre's task is to collect geomagnetic data from all over the globe and to distribute those data to researchers and data users.
USHIK was archived because some of the metadata are maintained by other sites and there is no need for duplication. The USHIK metadata registry was a neutral repository of metadata from an authoritative source, used to promote interoperability and reuse of data. The registry did not attempt to change the metadata content but rather provided a structured way to view data for the technical or casual user. For complete information, see: https://www.ahrq.gov/data/ushik.html
Europeana is the trusted source of cultural heritage brought to you by the Europeana Foundation and a large number of European cultural institutions, projects and partners. It is a true team effort. Ideas and inspiration can be found within the millions of items on Europeana. These objects include: images (paintings, drawings, maps, photos and pictures of museum objects); texts (books, newspapers, letters, diaries and archival papers); sounds (music and spoken word from cylinders, tapes, discs and radio broadcasts); and videos (films, newsreels and TV broadcasts). All texts are CC BY-SA; images and media are licensed individually.
The Registry of Open Data on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
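As an illustration of how such public data sets are typically accessed, the sketch below lists objects in a public S3 bucket anonymously; the bucket name is a placeholder, since the registry entry above does not name a specific data set.

    # Minimal sketch: anonymous (unsigned) access to a public S3 bucket with boto3.
    # "example-open-data-bucket" is a placeholder bucket name.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    response = s3.list_objects_v2(Bucket="example-open-data-bucket", MaxKeys=10)
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"])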
As the third World Data Centre for oceanography, following WDC-A in the United States and WDC-B in Russia, WDC-D for Oceanography boasts long-term and stable sources of domestic marine basic data. The State Oceanic Administration now holds long-term observations obtained from fixed coastal ocean stations, offshore and oceanic research vessels, and moored and drifting buoys. More and more marine data have become available from Chinese-foreign marine cooperative surveys, analysis and measurement of laboratory samples, reception by the satellite ground station, aerial telemetry and remote sensing, the GOOS program, global ships-of-opportunity reports, and other sources; further marine data are being and will be obtained from the ongoing “863” program (one of the state key projects of the Ninth Five-Year Plan) and from the Seasat No. 1 satellite, which is scheduled to be launched next year. Through many years’ effort, WDC-D for Oceanography has established formal marine data exchange relationships with over 130 marine institutions in more than 60 countries and maintains close data exchange with over 30 major national oceanographic data centers. The established China Oceanic Information Network has joined the international marine data exchange system via the Internet. Through these channels, a large amount of data has been acquired through international exchange which, together with the marine data collected at home over many years, has given WDC-D for Oceanography more than 100 years of global marine data amounting to more than 10 billion bytes. In the meantime, a vast amount of work has been done on the standardized and normalized processing and management of the data, and a series of national and professional standards have been formulated and implemented successively. Moreover, appropriate standards and norms are being formulated as required.