ScienceBase Catalog: Community for Data Integration (CDI)
This metadata record describes the materials contained in stake folder 2952. Stake 2952 is located at latitude 36.393, longitude -112.629. This location was photographed in the following years: 1954 and 1995. The materials associated with this item include original best quality images from each repeat date (preserved as digitized film images or, in some cases, digitized print photographs, depending on availability), scanned film envelopes with camera metadata, records of repeat photography sheets, and all field notes and/or camera notes associated with this stake. All attachments are named according to the convention stake_date_material_type_Kanab. Some stakes will have multiple materials from one repeat date (e.g.,...
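To make the naming convention concrete, here is a minimal Python sketch that splits an attachment name into its parts. The example filename and the grouping of the material-type tokens are illustrative assumptions, not part of the collection's documentation.

    def parse_attachment_name(filename):
        """Split a stake attachment name of the form stake_date_material_type_Kanab."""
        stem = filename.rsplit(".", 1)[0]             # drop any file extension
        parts = stem.split("_")
        return {
            "stake": parts[0],                        # e.g. "2952"
            "date": parts[1],                         # repeat-photography year
            "material_type": "_".join(parts[2:-1]),   # e.g. film image, field notes
            "collection": parts[-1],                  # "Kanab" for this collection
        }

    # Hypothetical example:
    print(parse_attachment_name("2952_1954_film_image_Kanab.tif"))
    # {'stake': '2952', 'date': '1954', 'material_type': 'film_image', 'collection': 'Kanab'}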
This folder contains information on projects supported by the Community for Data Integration (CDI) in FY2013. Learn more about CDI and our proposal process on our website.
As a proof of concept, and in coordination with FEMA, a GIS service feed for detected events was implemented. An example heatmap produced from the ArcGIS service shows the density of locations from which reporting tweets originated.
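A hedged sketch of how such a feed can be consumed: the standard ArcGIS REST query operation returns the event features whose point density a heatmap displays. The service URL and layer below are hypothetical placeholders.

    import requests

    # Hypothetical detected-events layer exposed as an ArcGIS feature service.
    SERVICE_URL = "https://example.usgs.gov/arcgis/rest/services/TwitterEvents/FeatureServer/0/query"

    params = {
        "where": "1=1",            # all detected events
        "outFields": "*",
        "returnGeometry": "true",
        "f": "json",
    }
    features = requests.get(SERVICE_URL, params=params, timeout=30).json()["features"]

    # Collect (x, y) points; a heatmap weights map areas by point density.
    points = [(f["geometry"]["x"], f["geometry"]["y"]) for f in features]
    print(len(points), "reporting-tweet locations retrieved")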
The Community for Data Integration (CDI) Risk Map Project is developing modular tools and services to benefit a wide group of scientists and managers who deal with various aspects of risk research and planning. Risk is the potential that exposure to a hazard will lead to a negative consequence for an asset such as human or natural resources. This project builds upon a Department of the Interior project that is developing geospatial layers and other analytical results that visualize multi-hazard exposure to various DOI assets. The CDI Risk Map team has developed the following: a spatial database of hazards and assets, an API (application programming interface) to query the data, web services with Geoserver (an open-source...
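As an illustration of the web-service layer, a minimal sketch of a GeoServer WFS GetFeature request. The endpoint and layer name are hypothetical; the request parameters are standard OGC WFS 2.0.

    import requests

    WFS_URL = "https://example.usgs.gov/geoserver/wfs"  # hypothetical endpoint

    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "risk:hazards",         # hypothetical hazard layer
        "outputFormat": "application/json",  # ask for GeoJSON
        "count": 100,
    }
    hazards = requests.get(WFS_URL, params=params, timeout=30).json()
    print("Returned", len(hazards["features"]), "hazard features")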
Executive Summary: Traditionally in the USGS, data are processed and analyzed on local researcher computers, then moved to centralized, remote computers for preservation and publishing (ScienceBase, Pubs Warehouse). This approach requires each researcher to have the necessary hardware and software for processing and analysis, and to bring all external data required for the workflow over the internet to their local computer. In search of a more efficient and effective scientific workflow, we explored an alternate model: storing scientific data remotely and performing data analysis and visualization close to the data, using only a local web browser as an interface. Although this environment was not a good fit...
Land-use researchers need the ability to rapidly compare multiple land-use scenarios over a range of spatial and temporal scales, and to visualize spatial and nonspatial data; however, land-use datasets are often distributed in the form of large tabular files and spatial files. These formats are not ideal for the way land-use researchers interact with and share these datasets, and the datasets can quickly balloon in size. For example, land-use simulations for the Pacific Northwest, at 1-kilometer resolution, across 20 Monte Carlo realizations, can produce over 17,000 tabular and spatial outputs. A more robust management strategy is to store scenario-based, land-use datasets within a generalized...
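A minimal sketch of the generalized-database idea, assuming a hypothetical SQLite schema keyed by scenario, Monte Carlo realization, year, and grid cell; queries then replace the wrangling of thousands of separate files.

    import sqlite3

    con = sqlite3.connect("landuse.db")
    con.execute("""
        CREATE TABLE IF NOT EXISTS landuse (
            scenario    TEXT,     -- named land-use scenario
            realization INTEGER,  -- Monte Carlo realization number
            year        INTEGER,
            cell_id     INTEGER,  -- 1-km grid cell identifier
            class       TEXT,     -- land-use class for that cell and year
            PRIMARY KEY (scenario, realization, year, cell_id)
        )
    """)

    # Example query: the full land-use history of one cell under one scenario.
    rows = con.execute(
        "SELECT realization, year, class FROM landuse "
        "WHERE scenario = ? AND cell_id = ?",
        ("baseline", 12345),
    ).fetchall()
    print(len(rows), "rows for cell 12345")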
USGS scientists often face computationally intensive tasks that require high-throughput computing capabilities. Several USGS facilities use HTCondor to run their computational pools but are not necessarily connected to the larger USGS pool. This project demonstrated how to connect HTCondor pools by flocking, or coordinating, within the USGS. In addition to flocking the Upper Midwest Environmental Sciences Center and the Wisconsin Water Science Center, we have flocked with the USGS Advanced Research Computing Yeti supercomputing cluster and other water science centers. We also developed tutorials on how to sandbox code using Docker within the USGS environment for use with high-throughput computing. A main accomplishment...
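A hedged sketch of the Docker sandboxing approach using the HTCondor Python bindings; the container image and script names are hypothetical, while the submit keywords are standard HTCondor submit-description commands.

    import htcondor

    job = htcondor.Submit({
        "universe": "docker",
        "docker_image": "usgs/analysis:latest",  # hypothetical sandbox image
        "executable": "run_analysis.py",         # hypothetical analysis script
        "arguments": "$(ProcId)",
        "output": "out.$(ProcId).txt",
        "error": "err.$(ProcId).txt",
        "log": "job.log",
        "request_cpus": "1",
        "request_memory": "2GB",
    })

    schedd = htcondor.Schedd()             # local submit node's job queue
    result = schedd.submit(job, count=10)  # ten independent sandboxed tasks
    print("Submitted cluster", result.cluster())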
In this age of rapidly developing technology, scientific information is constantly being gathered across large spatial scales. Yet, our ability to coordinate large-scale monitoring efforts depends on development of tools that leverage and integrate multiple sources of data. North American bats are experiencing unparalleled population declines. The North American Bat Monitoring Program (NABat), a multi-national, multi-agency coordinated monitoring program, was developed to better understand the status and trends of North American bats. Similar to other large-scale monitoring programs, the ultimate success of NABat relies on a unified web-based data system. Our project successfully developed a program interface...
Spatial data on landslide occurrence across the U.S. varies greatly in quality, accessibility, and extent. This problem of data variability is common across USGS Mission Areas; it presents an obstacle to developing national-scale products and to identifying areas with relatively good or bad data coverage. We compiled available data of known landslides into a national-scale, searchable online map, which greatly increases public access to landslide hazard information. Additionally, we held a workshop with landslide practitioners and sought broader input from the CDI community; based on those recommendations, we identified a limited subset of essential attributes for inclusion in our product. We also defined a quantitative metric...
Fighting wildfires and reducing their negative effects on natural resources costs billions of dollars annually in the U.S. We will develop the Wildfire Trends Tool (WTT), a data visualization and analysis tool that will calculate and display wildfire trends and patterns for the western U.S. based on user-defined regions of interest, time periods, and ecosystem types. The WTT will be publicly available via a web application that will retrieve fire data and generate graphically compelling maps and charts of fire activity. For an area of interest, users will be able to ask questions such as: Is the area burned by wildfire each year increasing or decreasing over time? Are wildfires becoming larger? Are fire seasons becoming...
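As a sketch of the first trend question, a least-squares fit to annual area burned; the numbers are invented for illustration and are not WTT output.

    import numpy as np

    years = np.arange(2000, 2010)
    area_burned_km2 = np.array([120, 95, 160, 140, 210, 180, 250, 230, 300, 280])

    # Degree-1 polynomial fit: a positive slope means area burned is increasing.
    slope, intercept = np.polyfit(years, area_burned_km2, 1)
    print(f"Trend: {slope:+.1f} km2 per year")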
A number of monitoring method and protocol libraries already exist. Although these systems have been tailored to certain disciplines or research foci, the underlying principles, mechanisms, and processes have commonalities that could facilitate synthesizing content and information. The Protocol Library project modifies and thereby extends the capabilities of the existing NEMI methods compendium. To incorporate a broad array of protocols, NEMI developers have gathered user requirements from protocol owners, who have also been asked how they would prefer to access the data. Input forms have been created to accommodate the desires of protocol...
Wetland soils are vital to the Nation because of their role in sustaining water resources, supporting critical ecosystems, and sequestering significant concentrations of biologically produced carbon. The United States has the world’s most detailed continent-scale digital datasets for soils and wetlands, yet scientists and land managers have long struggled with the challenge of integrating these datasets for applications in research and in resource assessment and management. The difficulties include spatial and temporal uncertainties, inconsistencies among data sources, and inherent structural complexities of the datasets. This project’s objective was to develop and document a set of methods to impute wetland...
Large amounts of data are being generated that require hours, days, or even weeks to analyze using traditional computing resources. Innovative solutions must be implemented to analyze the data in a reasonable timeframe. The program HTCondor (https://research.cs.wisc.edu/htcondor/) combines the processing capacity of individual desktop computers and dedicated computing resources into a single, unified pool. This unified pool allows HTCondor to process large amounts of data quickly by breaking the work into smaller tasks distributed across many computers. This project team implemented HTCondor at the USGS Upper Midwest Environmental Sciences Center (UMESC) to leverage existing computing...
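A minimal sketch of that split-into-tasks pattern, assuming hypothetical file names and chunk size; each chunk file would become the input of one queued HTCondor job.

    from pathlib import Path

    CHUNK_SIZE = 10_000  # records per task (illustrative)

    records = Path("big_input.txt").read_text().splitlines()
    for task_id, start in enumerate(range(0, len(records), CHUNK_SIZE)):
        chunk = records[start:start + CHUNK_SIZE]
        Path(f"task_{task_id:04d}.txt").write_text("\n".join(chunk))
    # Each task_NNNN.txt is then processed by one job in the HTCondor pool.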
This project will assess the accuracy of climate drivers (precipitation and temperature) from different sources for current and future conditions. The impact of these drivers on hydrologic response will be evaluated using the monthly water balance model (MWBM). The processing and analysis of these datasets will be automated so they can be rerun whenever new climate datasets become available on the USGS Geo Data Portal (http://cida.usgs.gov/climate/gdp/ - content no longer available). This will ensure continued relevancy of project results, future opportunities for research and assessment of potential climate change impacts on hydrologic resources, and comparison between generations of climate data. To share and distribute the...
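A highly simplified sketch of one monthly step of the kind of water-balance calculation MWBM performs: partition precipitation into rain or snow by temperature, track soil-moisture storage, and route surplus to runoff. The thresholds, melt and PET approximations, and storage capacity are illustrative assumptions, not MWBM's actual parameterization.

    SOIL_CAPACITY = 150.0  # mm, illustrative soil-moisture capacity
    SNOW_THRESHOLD = 0.0   # deg C; at or below this, precipitation falls as snow

    def step(precip_mm, temp_c, storage, snowpack):
        """Advance the balance one month; return (runoff, storage, snowpack)."""
        if temp_c <= SNOW_THRESHOLD:
            snowpack += precip_mm
            water_in = 0.0
        else:
            melt = min(snowpack, 0.5 * temp_c * 30)  # crude degree-day melt
            snowpack -= melt
            water_in = precip_mm + melt
        pet = max(0.0, 5.0 * temp_c)                 # crude temperature-based PET
        storage += water_in - min(pet, storage + water_in)
        runoff = max(0.0, storage - SOIL_CAPACITY)   # surplus becomes runoff
        storage = min(storage, SOIL_CAPACITY)
        return runoff, storage, snowpack

    print(step(precip_mm=80.0, temp_c=4.0, storage=100.0, snowpack=20.0))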
How can the public discover opportunities for participation in USGS scientific research? What citizen science projects are currently active within the USGS? How may principal investigators (PIs) increase public engagement in and awareness of their citizen science projects? To address these questions, a web application leveraging existing Community for Data Integration (CDI) and USGS work was created to allow unprecedented public access to USGS citizen science project metadata and highlights of key science outcomes. Such an application enables, for the first time, high-visibility, unified open access to information about projects and practices related to citizen participation in USGS research. The need for such information was identified...
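The ScienceBase catalog that hosts these records exposes a REST search API that an application like this could draw on; a minimal sketch with an illustrative query term:

    import requests

    resp = requests.get(
        "https://www.sciencebase.gov/catalog/items",
        params={"q": "citizen science", "format": "json", "max": 5},
        timeout=30,
    )
    for item in resp.json().get("items", []):
        print(item.get("title"))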
This presentation is a product of the 2012 CDI Project: Citizen Science Observation Platform - Using Curated Twitter and GeoRSS Enabled Feeds. It was presented on Sept. 5, 2012 at a CDI-sponsored webinar.
This poster is a product of the 2012 CDI project: Data Management Website. It was presented at the CDI Data Blast Poster Presentation 2012 in Reston, VA.
This poster is a product of the 2012 CDI project: NWIS Web Services Snapshot for ArcGIS. It was presented at the CDI Data Blast Poster Presentation 2012 in Reston, VA.
This poster is a product of the 2010 CDI project: Data Upload, Registry and Access Project. It was presented at the CDI Data Blast Poster Presentation 2012 in Reston, VA.
The U.S. Geological Survey (USGS) Geology, Geophysics and Geochemistry Science Center (GGGSC) collaborated with the USGS Data at Risk (DaR) team to preserve and release a subset of magnetotelluric data from the San Andreas Fault in Parkfield, California. The San Andreas Fault data were collected by the Branch of Geophysics, a precursor to the current GGGSC, between 1989 and 1994. The magnetotelluric data selected for this preservation project were collected in 1990 using USGS portable truck-mounted systems that measure the distribution of electrical conductivity beneath the surface of the earth. Truck-mounted systems of this era output data to 3.5” discs, from which data were recovered and transformed to binary or ASCII...
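A sketch of that recovery step: read a binary record layout and write it back out as ASCII. The record format here (pairs of little-endian 32-bit floats) is a hypothetical stand-in; a real conversion would follow the instrument's documented format.

    import struct

    RECORD = struct.Struct("<2f")  # hypothetical (electric, magnetic) sample pair

    with open("site01.bin", "rb") as src, open("site01.asc", "w") as dst:
        dst.write("E_field\tH_field\n")
        while chunk := src.read(RECORD.size):
            if len(chunk) < RECORD.size:
                break  # ignore a truncated trailing record
            e, h = RECORD.unpack(chunk)
            dst.write(f"{e:.6g}\t{h:.6g}\n")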

