  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
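Combined, these operators form a small boolean query language. A hypothetical example (the search terms are illustrative only, not taken from an actual query):

```
"climate model"~3 + (archive | repository) - offline + geo*
```

Under the syntax above, this would match records containing the phrase "climate model" with a slop of up to 3, require either "archive" or "repository", exclude records containing "offline", and match any keyword beginning with "geo".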
Sharing and preserving data are central to protecting the integrity of science. DataHub, a Research Computing endeavor, provides tools and services to meet scientific data challenges at Pacific Northwest National Laboratory (PNNL). DataHub helps researchers address the full data life cycle for their institutional projects and provides a path to creating findable, accessible, interoperable, and reusable (FAIR) data products. Although open science data is a crucial focus of DataHub’s core services, we are interested in working with evidence-based data throughout the PNNL research community.
Within the RESIF-EPOS observation research infrastructure and the Action Spécifique RESIF-GNSS, the Réseau National GNSS permanent (RENAG) is the network of GNSS observation stations of French universities and research organizations. It currently comprises 85 GNSS stations (Global Navigation Satellite System stations, such as GPS, GLONASS, and Galileo). The scientific objectives of RESIF-RENAG range from quantifying slow deformation in France to sounding the atmosphere (troposphere and ionosphere), along with measuring sea-level variations and characterizing transient movements related to loading. Data production is carried out in a distributed way by the laboratories and organizations that manage the stations. Twelve teams are specifically in charge of station maintenance and of accurately filling in the metadata files. A single data center, RENAG-DC, hosted at the Observatoire de la Côte d'Azur (OCA) within the Geoazur laboratory, is in charge of data management, from collection to distribution in the standard RINEX format (http://renag.resif.fr).
The National Science Digital Library (NSDL) provides high-quality online educational resources for teaching and learning, with a current emphasis on the science, technology, engineering, and mathematics (STEM) disciplines, both formal and informal, institutional and individual, in local, state, national, and international educational settings. The NSDL collection contains structured descriptive information (metadata) about web-based educational resources held on other sites by their providers. These providers have contributed this metadata to NSDL for organized search and open access to educational resources via this website and its services.
GTS AI is an artificial intelligence company that offers data services to its clients. We use high-definition images and high-quality data to support analysis and machine learning. We are a dataset provider and collect data related to artificial intelligence.
Open Power System Data is a free-of-charge data platform dedicated to electricity system researchers. We collect, check, process, document, and publish data that are publicly available but currently inconvenient to use. The project is a service provider to the modeling community: a supplier of a public good. Learn more about its background or just go ahead and explore the data platform.
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository is a set of corpora of the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia, and web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and of the NFDI consortium Text+. It is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
The Alternative Fuels Data Center (AFDC) is a comprehensive clearinghouse of information about advanced transportation technologies. The AFDC offers transportation decision makers unbiased information, data, and tools related to the deployment of alternative fuels and advanced vehicles. The AFDC launched in 1991 in response to the Alternative Motor Fuels Act of 1988 and the Clean Air Act Amendments of 1990. It originally served as a repository for alternative fuel performance data. The AFDC has since evolved to offer a broad array of information resources that support efforts to reduce petroleum use in transportation. The AFDC serves Clean Cities stakeholders, fleets regulated by the Energy Policy Act, businesses, policymakers, government agencies, and the general public.
-----<<<<< The repository is no longer available. This record is outdated. >>>>>----- GEON is an open collaborative project that is developing cyberinfrastructure for the integration of 3- and 4-dimensional earth science data. GEON will develop services for data integration and model integration, and associated model execution and visualization. The Mid-Atlantic test bed will focus on tectonothermal, paleogeographic, and biotic history from the late Proterozoic to the mid-Paleozoic. The Rockies test bed will focus on the integration of data with dynamic models, to better understand deformation history. GEON will develop the most comprehensive regional datasets in the test bed areas.
>>>!!!<<< 2018-01-18: no data or programs can be found >>>!!!<<< These archives contain public domain programs for calculations in physics, as well as other programs expected to be helpful in computing work. Physical constants and experimental or theoretical data, such as cross sections, rate constants, and swarm parameters, that are necessary for physical calculations are stored here, too. The programs are mainly intended for IBM PC-compatible computers. Programs that do not use graphics units can be run on other computers as well; otherwise, the graphics parts of the programs must be reprogrammed.
<<<!!!<<< The repository is offline >>>!!!>>> A collection of open content name datasets for Information Centric Networking. The "Content Name Collection" (CNC) lists and hosts open datasets of content names. These datasets are either derived from URL link databases or web traces. The names are typically used for research on Information Centric Networking (ICN), for example to measure cache hit/miss ratios in simulations.
Lab Notes Online presents historic scientific data from the Caltech Archives' collections in digital facsimile. Beginning in the fall of 2008, the first publication in the series is Robert A. Millikan's notebooks for his oil drop experiments to measure the charge of the electron, dating from October 1911 to April 1912. Other laboratory, field, or research notes will be added to the archive over time.
A machine learning data repository with interactive visual analytic techniques. This project is the first to combine the notion of a data repository with real-time visual analytics for interactive data mining and exploratory analysis on the web. State-of-the-art statistical techniques are combined with real-time data visualization, enabling researchers to seamlessly find, explore, understand, and discover key insights in a large number of publicly donated data sets. This large, comprehensive collection is useful both for making significant research findings and as benchmark data sets for a wide variety of applications and domains; it includes relational, attributed, heterogeneous, streaming, spatial, and time series data, as well as non-relational machine learning data. All data sets are easily downloaded in a standard, consistent format. We have also built a multi-level interactive visual analytics engine that allows users to visualize and interactively explore the data in a free-flowing manner.
NKN is now Research Computing and Data Services (RCDS)! We provide data management support for UI researchers and their regional, national, and international collaborators. This support keeps researchers at the cutting-edge of science and increases our institution's competitiveness for external research grants. Quality data and metadata developed in research projects and curated by RCDS (formerly NKN) is a valuable, long-term asset upon which to develop and build new research and science.
CRAN is a network of ftp and web servers around the world that store identical, up-to-date, versions of code and documentation for R. R is ‘GNU S’, a freely available language and environment for statistical computing and graphics which provides a wide variety of statistical and graphical techniques: linear and nonlinear modelling, statistical tests, time series analysis, classification, clustering, etc. Please consult the R project homepage for further information.
<<<!!!<<< All user content from this site has been deleted. Visit the SeedMeLab (https://seedmelab.org/) project as a new option for data hosting. >>>!!!>>> SeedMe is the result of a decade of onerous experience in preparing and sharing visualization results from supercomputing simulations with many researchers at different geographic locations using different operating systems. It has been a labor-intensive process, unsupported by useful tools and procedures for sharing information. SeedMe provides secure and easy-to-use functionality for efficiently and conveniently sharing results that aims to create transformative impact across many scientific domains.
The research data centre at the Federal Motor Transport Authority provides anonymised microdata on drivers, vehicles, and road freight transport free of charge for non-commercial and independent scientific research.
Ocean Networks Canada maintains several observatories installed in three different regions of the world's oceans. All three observatories are cabled systems that can provide power and high-bandwidth communication paths to sensors in the ocean. The infrastructure supports near real-time observations from multiple instruments and locations distributed across the Arctic, NEPTUNE, and VENUS observatory networks. These observatories collect data on physical, chemical, biological, and geological aspects of the ocean over long time periods, supporting research on complex Earth processes in ways not previously possible.
cIRcle is an open access digital repository for published and unpublished material created by the UBC community and its partners. In BIRS there are thousands of mathematics videos, which are primary research data. Our repository is the largest source of mathematics data with more than 10TB of primary research by the best mathematicians in the world, coming from more than 600 institutions.
The Data Bank operates a computer program service related to nuclear energy applications. The software library collects programs, compiles and verifies them in an appropriate computer environment, ensuring that the computer program package is complete and adequately documented. This collection of material contains more than 2000 documented packages and group cross-section data sets. We distribute these codes on CD-ROM, DVD and via electronic transfer to about 900 nominated NEA Data Bank establishments (see the rules for requesters). Standard software verification procedures are used following an ANSI/ANS standard.
The Informatics Research Data Repository is a Japanese data repository that collects data on disciplines within informatics. Sub-categories include topics such as consumerism and information diffusion. The primary data within these data sets come from experiments run by IDR on how one group is linked to another.
The long term goal of the Software Heritage initiative is to collect all publicly available software in source code form together with its development history, replicate it massively to ensure its preservation, and share it with everyone who needs it. The Software Heritage archive is growing over time as we crawl new source code from software projects and development forges.