Remote sensing of invasive species is a critical component of conservation and management efforts, but reliable methods for the detection of invaders have not been widely established. In Hawaiian forests, we recently found that invasive trees often have hyperspectral signatures distinct from those of native trees, but mapping based on spectral reflectance properties alone is confounded by issues of canopy senescence and mortality, intra- and inter-canopy gaps and shadowing, and terrain variability. We deployed a new hybrid airborne system combining the Carnegie Airborne Observatory (CAO) small-footprint light detection and ranging (LiDAR) system with the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) to map the three-dimensional spectral and structural properties of Hawaiian forests. The CAO-AVIRIS systems and data were fully integrated using in-flight and post-flight fusion techniques, facilitating an analysis of forest canopy properties to determine the presence and abundance of three highly invasive tree species in Hawaiian rainforests.
The LiDAR sub-system was used to model forest canopy height and top-of-canopy surfaces; these structural data allowed for automated masking of forest gaps, intra- and inter-canopy shadows, and vegetation below a minimum height in the AVIRIS images. The remaining sunlit canopy spectra were analyzed using spatially-constrained spectral mixture analysis. The results of the combined LiDAR-spectroscopic analysis highlighted the location and fractional abundance of each invasive tree species throughout the rainforest sites. Field validation studies demonstrated error rates of < 6.8% and < 18.6% in the detection of invasive tree species at 7 m² and 2 m² minimum canopy cover thresholds, respectively. Our results show that full integration of imaging spectroscopy and LiDAR measurements provides enormous flexibility and analytical potential for studies of terrestrial ecosystems and the species they contain.
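The mask-then-unmix workflow described above can be sketched as a linear spectral mixture analysis. Everything numeric below is illustrative: the endmember reflectances, band choices, height threshold, and shading flags are invented, and a plain non-negative least-squares solver stands in for the paper's spatially-constrained unmixing.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember reflectances (rows: bands, columns: canopy species).
# These values are illustrative only, not AVIRIS measurements.
endmembers = np.array([
    [0.05, 0.04, 0.06],   # visible band
    [0.10, 0.30, 0.12],   # red-edge band
    [0.45, 0.50, 0.20],   # NIR band
    [0.40, 0.35, 0.25],   # SWIR band
])

def unmix(pixel, E):
    """Solve E @ f ~= pixel with f >= 0, then normalize fractions to sum to one."""
    f, _ = nnls(E, pixel)
    return f / f.sum()

# LiDAR-derived masking: keep only sunlit canopy above a minimum height.
canopy_height = np.array([12.0, 0.5, 9.0])   # m, from the LiDAR canopy height model
sunlit = np.array([True, True, False])       # from a top-of-canopy shading model
valid = (canopy_height > 2.0) & sunlit       # only the first pixel survives

# A sunlit-canopy pixel simulated as a 70/30 mixture of species 1 and 2.
pixel = endmembers @ np.array([0.7, 0.3, 0.0])
fractions = unmix(pixel, endmembers)         # recovers roughly [0.7, 0.3, 0.0]
```

The masking step runs first so that only spectra of sunlit, sufficiently tall canopy ever reach the unmixing solver, which is the role the LiDAR structural data play in the abstract.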
Image differencing between Shuttle Radar Topography Mission (SRTM) data and other Digital Elevation Models (DEMs) is often performed either for accuracy assessment or for estimating vegetation height across the landscape. It has been widely assumed that the effect of sub-pixel misregistration between the two models on the resultant image differences is negligible, yet this had not previously been tested in detail. The aim of this study was to determine the impact that various levels of misregistration have on image differences between SRTM data and DEMs. First, very accurate image co-registration was performed at two study sites between higher-resolution DEMs and SRTM data; image differences (SRTM − DEM) were then computed after various levels of misregistration were systematically introduced into the SRTM data. It was found that: (1) misregistration caused an erroneous and dominant correlation between elevation difference and aspect across the landscape; (2) the direction of the misregistration defined the direction of this erroneous and systematic elevation difference; (3) for sub-pixel misregistration, the error due solely to misregistration was greater than or equal to the true difference between the two models for substantial proportions of the landscape (e.g., more than 33% of the area for a half-pixel misregistration); and (4) the strength of the erroneous relationship with aspect increased with steeper terrain. Spatial comparisons of DEMs were thus found to be sensitive to even sub-pixel misregistration between the two models, which resulted in a strong erroneous correlation with aspect. This misregistration-induced correlation with aspect is likely not specific to SRTM data; we expect it to be a generic relationship present in any DEM image-difference analysis.
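The core of the experiment, introducing a known shift and then differencing, can be demonstrated on synthetic terrain. The surface below is a toy sinusoidal hillslope standing in for a real DEM, and the correlation shows how a one-pixel shift alone produces a difference image that tracks slope (and hence aspect) in the shift direction.

```python
import numpy as np

# Toy hillslope surface standing in for a real DEM (values are illustrative).
y, x = np.mgrid[0:100, 0:100]
dem = 50.0 * np.sin(x / 15.0) + 30.0 * np.cos(y / 20.0)

# Systematically introduce a one-pixel misregistration in x, then difference
# the "shifted SRTM" against the reference DEM.
shifted = np.roll(dem, 1, axis=1)
diff = shifted - dem

# East-west slope component; for a shift along x, the aspect dependence of
# the error reduces to a dependence on this slope component.
slope_x = np.gradient(dem, axis=1)

# Drop the wrap-around column, then measure the spurious correlation.
r = np.corrcoef(diff[:, 1:].ravel(), slope_x[:, 1:].ravel())[0, 1]
```

Because the two surfaces here are identical apart from the shift, the entire difference image is misregistration error, and `r` comes out strongly negative: the error is essentially the negated slope in the shift direction, which is the aspect-dependent artefact the study describes.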
Change detection based on the comparison of independently classified images (i.e. post-classification comparison) is well known to be negatively affected by the classification errors of the individual maps. Incorporating spatial-temporal contextual information in the classification helps to reduce these classification errors, thus improving change detection results. In this paper, spatial-temporal Markov Random Field (MRF) models were used to integrate spatial-temporal information with spectral information for multi-temporal classification, in an attempt to mitigate the impact of classification errors on change detection. One important component of spatial-temporal MRF models is the specification of transition probabilities. Traditionally, a global transition probability model is used that assumes spatial stationarity of transition probabilities across an image scene; this assumption may be invalid if different areas have varying transition probabilities. By relaxing the stationarity assumption, we developed two local transition probability models that make the transition model locally adaptive to spatially varying transition probabilities. The first model, the locally adjusted global transition model, adapts to local variation by multiplying a pixel-wise probability of change with the global transition model. The second model, the pixel-wise transition model, is a fully local model based on the estimation of pixel-wise joint probabilities. When applied to forest change detection in Paraguay, the two local models showed significant improvements in the accuracy of identifying the change from forest to non-forest compared with the traditional model. This indicates that local transition probability models can represent temporal information more accurately in change detection algorithms based on spatial-temporal classification of multi-temporal images.
The comparison between the two local transition models showed that the fully local model better captured the spatial heterogeneity of the transition probabilities and achieved more stable and consistent results over different regions of a large image scene.
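The first local model, multiplying a pixel-wise probability of change with the global transition matrix, can be sketched as follows. The class set, transition values, and the row renormalization are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

# Illustrative global transition matrix between two classes:
# forest (0) and non-forest (1); rows index the time-1 class.
global_T = np.array([[0.90, 0.10],
                     [0.05, 0.95]])

def locally_adjusted_transition(global_T, p_change):
    """Locally adjusted global transition model (sketch): scale the
    off-diagonal (change) entries of the global matrix by a pixel-wise
    probability of change, then renormalize each row so it remains a
    valid probability distribution."""
    T = global_T.copy()
    off_diagonal = ~np.eye(T.shape[0], dtype=bool)
    T[off_diagonal] *= p_change
    T /= T.sum(axis=1, keepdims=True)
    return T

T_stable = locally_adjusted_transition(global_T, 0.1)    # stable interior pixel
T_frontier = locally_adjusted_transition(global_T, 1.0)  # deforestation frontier pixel
```

In a stable area the adjusted matrix suppresses forest-to-non-forest transitions relative to the global model, while at a high-change pixel it reduces to the global model, which is how the local model adapts to spatially varying transition probabilities.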
Floodplain roughness parameterization is one of the key elements of hydrodynamic modeling of river flow, which is directly linked to exceedance levels of the embankments of lowland fluvial areas. The present approach to roughness mapping is based on manually delineated floodplain vegetation types, schematized as cylindrical elements whose height (m) and vertical density (the projected plant area in the direction of the flow per unit volume, m⁻¹) have to be assigned using a lookup table. This paper presents a novel method of automated roughness parameterization. It delivers a spatially distributed roughness parameterization for an entire floodplain by fusion of CASI multispectral data with airborne laser scanning (ALS) data. The method consists of three stages: (1) pre-processing of the raw data; (2) image segmentation of the fused data set and classification into the dominant land cover classes (KHAT = 0.78); and (3) determination of hydrodynamic roughness characteristics for each land cover class separately. In stage three, a lookup table provides numerical values that enable roughness calculation for the classes water, sand, paved area, meadow, and built-up area. For forest and herbaceous vegetation, the ALS data enable spatially detailed analysis of vegetation height and density. The hydrodynamic vegetation density of forest is mapped using a calibrated regression model. Herbaceous vegetation cover is further subdivided into single trees and non-woody vegetation. Single trees were delineated using a novel iterative cluster-merging method, and their heights were predicted (R² = 0.41, rse = 0.84 m). The vegetation density of single trees was determined in the same way as for forest. The vegetation height and density of non-woody herbaceous vegetation were likewise determined using calibrated regression models. A 2D hydrodynamic model was applied with the results of this novel method and compared with a traditional roughness parameterization approach.
The modeling results showed that the new method provides accurate output data. The new method offers a faster, repeatable, and more accurate way of obtaining floodplain roughness, enabling regular updating of river flow models.
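Stage 3, the per-class roughness assignment, can be sketched as a lookup with a regression branch for vegetated classes. Every numeric value and coefficient below is hypothetical, since the paper's calibrated tables and regression models are not reproduced here.

```python
# Hypothetical lookup: (vegetation height in m, vertical density in m^-1)
# per land cover class. The paper's actual table values are not given here.
LOOKUP = {
    "water":    (0.00, 0.0),
    "sand":     (0.01, 0.0),
    "paved":    (0.01, 0.0),
    "meadow":   (0.10, 0.0),
    "built-up": (2.00, 0.0),
}

def roughness(cover, als_height=None):
    """Return (height m, vertical density m^-1) for one pixel.
    Forest and herbaceous pixels use the ALS-derived height together with
    a hypothetical calibrated regression for density; all other classes
    come straight from the lookup table."""
    if cover in ("forest", "herbaceous"):
        a, b = 0.02, 0.05  # hypothetical regression coefficients
        return als_height, a * als_height + b
    return LOOKUP[cover]

height_m, density_per_m = roughness("forest", als_height=12.0)
```

Splitting the logic this way mirrors the abstract: classes with fairly uniform roughness are served by the table, while forest and herbaceous vegetation get spatially detailed values driven by the ALS measurements.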
Proximity queries such as closest point computation and collision detection have many applications in computer graphics, including computer animation, physics-based modelling, and augmented and virtual reality. We present efficient algorithms for proximity queries between a closed rigid object and an arbitrary, possibly deformable, polygonal mesh. Using graphics hardware to densely sample the distance field of the rigid object over the arbitrary mesh, we compute minimal proximity and collision response information on the graphics processing unit (GPU) using blending and depth buffering, as well as parallel reduction techniques, thus minimizing the readback bottleneck. Although limited to image-space resolution, our algorithm provides high and steady performance compared with other similar algorithms. Proximity queries between arbitrary meshes with hundreds of thousands of triangles and detailed distance fields of rigid objects are computed in a few milliseconds at high sampling resolution, even in situations with large overlap.
We perform continuous collision detection (CCD) for articulated bodies whose motion is governed by an adaptive dynamics simulation. Our algorithm is based on a novel hierarchical set of transforms that represent the kinematics of an articulated body recursively, as described by an assembly tree. The performance of our CCD algorithm improves significantly as the number of active degrees of freedom in the simulation decreases.
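The recursive transform hierarchy can be sketched as world-transform caching over an assembly tree, so that subtrees frozen by the adaptive dynamics are not recomputed. The node layout and caching policy here are a simplification for illustration, not the paper's algorithm.

```python
import numpy as np

class Node:
    """One link in the assembly tree; local_T is the joint transform
    relative to the parent link."""
    def __init__(self, local_T, children=()):
        self.local_T = local_T
        self.children = list(children)
        self.cached_world = None   # reused while this subtree is inactive
        self.active = True

def world_transforms(node, parent_T=None, out=None):
    """Recursively compose transforms down the tree. When a node's degrees
    of freedom are inactive (frozen by the adaptive simulation) and a
    cached world transform exists, reuse it instead of recomputing."""
    if parent_T is None:
        parent_T = np.eye(4)
    if out is None:
        out = []
    if not node.active and node.cached_world is not None:
        T = node.cached_world
    else:
        T = parent_T @ node.local_T
        node.cached_world = T
    out.append(T)
    for child in node.children:
        world_transforms(child, T, out)
    return out

def translation(tx):
    T = np.eye(4)
    T[0, 3] = tx
    return T

# Two-link chain: each link sits 1 unit further along x than its parent,
# so the child's world x-offset is 2.
chain = Node(translation(1.0), [Node(translation(1.0))])
Ts = world_transforms(chain)
```

The cost saving in the real algorithm comes from exactly this pattern: the fewer joints are active, the more of the hierarchy is served from cached transforms rather than recomputed, so CCD cost scales with the active degrees of freedom.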
Many software systems would perform significantly better if they could adapt to the emotional state of the user. For example, if Intelligent Tutoring Systems (ITSs), ATMs, and ticketing machines could recognise when users were confused, frustrated, or angry, they could guide the user to remedial help systems, so improving the service. Many researchers now feel strongly that ITSs would be significantly enhanced if computers could adapt to the emotions of students. This idea has spawned the developing field of affective tutoring systems (ATSs): ATSs are ITSs that are able to adapt to the affective state of students. The term "affective tutoring system" can be traced back as far as Rosalind Picard's book Affective Computing in 1997. This paper presents research leading to the development of Easy with Eve, an ATS for primary school mathematics. The system utilises a network of computer systems, mainly embedded devices, to detect student emotion and other significant bio-signals. It then adapts to students and displays emotion via a lifelike agent called Eve. Eve's tutoring adaptations are guided by a case-based method for adapting to student states; this method uses data generated by an observational study of human tutors. This paper presents the observational study, the case-based method, the ATS itself and its implementation on a distributed computer system for real-time performance, and finally the implications of the findings for Human-Computer Interaction in general and e-learning in particular. Web-based applications of the technology developed in this research are discussed throughout the paper.
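The case-based adaptation step can be sketched as nearest-neighbour retrieval over observed student states. The feature names, case entries, and action labels below are invented for illustration; the real case base came from the observational study of human tutors.

```python
# Hypothetical case base: (student state, tutor action) pairs.
CASES = [
    ({"frustration": 0.8, "correctness": 0.2}, "offer_hint_with_encouragement"),
    ({"frustration": 0.1, "correctness": 0.9}, "advance_to_harder_problem"),
    ({"frustration": 0.5, "correctness": 0.5}, "ask_guiding_question"),
]

def retrieve_action(state):
    """Return the tutor action of the closest stored case, using squared
    Euclidean distance over the shared state features."""
    def dist(case_state):
        return sum((state[k] - case_state[k]) ** 2 for k in state)
    return min(CASES, key=lambda case: dist(case[0]))[1]

# A highly frustrated, mostly incorrect student retrieves the supportive case.
action = retrieve_action({"frustration": 0.9, "correctness": 0.1})
```

A retrieval step like this is what lets the tutor's behaviour be driven directly by recorded human-tutor responses rather than by hand-written rules.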