Similar Documents
20 similar documents found (search time: 218 ms)
1.
Remote sensing has traditionally been done with satellites and manned aircraft. While these methods can yield useful scientific data, satellites and manned aircraft have limitations in data frequency, processing time, and real-time re-tasking. Small low-cost unmanned aerial vehicles (UAVs) can bridge the gap for personal remote sensing of scientific data. Precision aerial imagery and sensor data require an accurate dynamics model of the vehicle for controller development. One method of developing a dynamics model is system identification (system ID). The purpose of this paper is to provide a survey and categorization of current methods and applications of system ID for small low-cost UAVs. This paper also provides background information on the process of system ID, with in-depth discussion of practical implementation for UAVs. The survey divides the summaries of system ID research into five UAV groups: helicopter, fixed-wing, multirotor, flapping-wing, and lighter-than-air. The research literature is tabulated into the five corresponding UAV groups for further reference.
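As a minimal illustration of the system-ID process this survey covers, the sketch below fits a discrete-time linear model x[k+1] = A x[k] + B u[k] to logged state and input data by least squares. The dynamics matrices and the simulated noise-free "flight log" are invented for the example and are not taken from the paper.

```python
import numpy as np

# Assumed "true" dynamics used only to generate a synthetic log.
A_true = np.array([[0.95, 0.10], [0.00, 0.90]])
B_true = np.array([[0.0], [0.1]])

# Simulate a short, noise-free flight log under random inputs.
rng = np.random.default_rng(0)
N = 200
x = np.zeros((N + 1, 2))
u = rng.standard_normal((N, 1))
for k in range(N):
    x[k + 1] = A_true @ x[k] + (B_true @ u[k]).ravel()

# Stack regressors z[k] = [x[k]; u[k]] and solve x[k+1] ~ Theta^T z[k].
Z = np.hstack([x[:-1], u])                      # N x 3
Theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]
print(np.round(A_hat, 3))
```

With noise-free data the least-squares estimates recover the generating matrices exactly; real flight logs would need excitation design and noise handling, which is where the surveyed methods differ.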

2.
Provenance information in eScience is metadata that is critical to effectively managing the exponentially increasing volumes of scientific data from industrial-scale experiment protocols. Semantic provenance, based on domain-specific provenance ontologies, lets software applications unambiguously interpret data in the correct context. The semantic provenance framework for eScience data comprises expressive provenance information and domain-specific provenance ontologies and applies this information to data management. The authors' "two degrees of separation" approach advocates the creation of high-quality provenance information using specialized services. In contrast to workflow engines generating provenance information as a core functionality, the specialized provenance services are integrated into a scientific workflow on demand. This article describes an implementation of the semantic provenance framework for glycoproteomics.

3.
FlowVR is a distributed framework that provides dataflow support for VR [1]; it also has great advantages for building powerful distributed scientific data-analysis systems, but FlowVR itself has no mathematical modelling capability. Scilab, in contrast, offers powerful mathematical modelling and simulation features. Combining the strengths of FlowVR and Scilab makes it possible to build high-performance scientific data-analysis systems for a wide range of domains. This article describes the design and implementation of a Scilab Toolbox for FlowVR. By developing a function programming interface for Scilab, the system enables data exchange between the individual modules of the FlowVR framework and Scilab; drawing on the latter's scientific computing capabilities, it provides a powerful modelling tool for developing data-analysis systems with FlowVR.

4.
To improve the efficiency of research administration in universities and to facilitate the aggregation and sharing of research data, an implementation of a research management system based on the ESMSH framework is proposed. Easy UI is used to build the presentation layer, Spring MVC implements the control layer, Spring integrates Spring MVC with Hibernate, and Hibernate implements the data persistence layer. The execution flow of the architecture is analysed and the key technologies of the system implementation are described. Practice shows that, through a clear layered structure and loose coupling, the ESMSH framework gives the system high reusability and extensibility.

5.
Trajectory analysis of land cover change in arid environment of China
Remotely sensed data have been utilized for environmental change study over the past 30 years. Large collections of remote sensing imagery have made it possible to conduct spatio-temporal analyses of the environment and the impact of human activities. This research attempts to develop both a conceptual framework and a methodological implementation for land cover change detection based on medium and high spatial resolution imagery and temporal trajectory analysis. Multi-temporal and multi-scale remotely sensed data have been integrated from various sources with a monitoring time frame of 30 years, including historical and state-of-the-art high-resolution satellite imagery. Based on this, spatio-temporal patterns of environmental change, which are largely represented by changes in land cover (e.g., vegetation and water), were analysed for the given timeframe. Multi-scale and multi-temporal remotely sensed data, including Landsat MSS, TM, ETM and SPOT HRV, were used to detect changes in land cover over the past 30 years along the Tarim River, Xinjiang, China. The study shows that by using the auto-classification approach an overall accuracy of 85-90% with a Kappa coefficient of 0.66-0.78 was achieved for the classification of individual images. The temporal trajectory of land-use change was established and its spatial pattern was analysed to gain a better understanding of the human impact on the fragile ecosystem of China's arid environment.
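The overall accuracy and Kappa coefficient quoted above are both derived from a classification confusion matrix; the sketch below shows the standard formulas applied to a made-up 3-class matrix (the numbers are illustrative, not from the study).

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                    # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2      # chance agreement
    return po, (po - pe) / (1 - pe)

# Toy 3-class confusion matrix.
cm = [[50, 3, 2], [4, 40, 6], [1, 5, 39]]
oa, kappa = accuracy_and_kappa(cm)
print(round(oa, 3), round(kappa, 3))
```

Kappa discounts the agreement expected by chance, which is why it is routinely reported alongside overall accuracy in land cover mapping.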

6.
Scientific workflows have emerged as an important tool for combining computational power with data analysis across scientific domains in e-science, especially in the life sciences. They help scientists to design and execute complex in silico experiments. However, with rising complexity it becomes increasingly impractical to optimize scientific workflows by trial and error. To address this issue, we propose to insert a new optimization phase into the common scientific workflow life cycle. This paper describes the design and implementation of an automated optimization framework for scientific workflows to implement this phase. Our framework was integrated into Taverna, a life-science-oriented workflow management system, and offers a versatile application programming interface (API), which enables easy integration of arbitrary optimization methods. We have used this API to develop an example plugin for parameter optimization that is based on a Genetic Algorithm. Two use cases taken from the areas of structural bioinformatics and proteomics demonstrate how our framework facilitates setup, execution, and monitoring of workflow parameter optimization in high-performance computing e-science environments.
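A parameter-optimization plugin of the kind described can be sketched as a plain Genetic Algorithm over a workflow's numeric parameters. The `fitness` function below is a stand-in for one workflow execution, and all population settings are illustrative; none of this is Taverna's actual API.

```python
import random

def fitness(params):
    """Hypothetical objective standing in for one workflow run (minimize)."""
    return sum((p - 0.7) ** 2 for p in params)

def ga(n_params=3, pop=30, gens=60, seed=1):
    rnd = random.Random(seed)
    # Random initial population of parameter vectors in [0, 1].
    P = [[rnd.random() for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[: pop // 4]                         # selection (elitism)
        children = []
        while len(elite) + len(children) < pop:
            a, b = rnd.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            child[rnd.randrange(n_params)] += rnd.gauss(0, 0.05)  # mutation
            children.append(child)
        P = elite + children
    return min(P, key=fitness)

best = ga()
print([round(p, 2) for p in best])
```

In a real deployment each fitness evaluation is a full workflow run, so the framework's value lies in automating the setup, scheduling, and monitoring of these runs rather than in the optimizer itself.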

7.
In recent years, satellite imagery has greatly improved in both spatial and spectral resolution. A major unsolved problem in exploiting such highly resolved remote sensing imagery is the manual selection and combination of appropriate features according to spectral and spatial properties. A deep learning framework can learn global and robust features from the training data set automatically, and it has achieved state-of-the-art classification accuracies over different image classification tasks. In this study, a technique is proposed which attempts to classify hyperspectral imagery by incorporating deep learning features. Firstly, deep learning features are extracted by a multiscale convolutional auto-encoder. Then, based on the learned deep learning features, a logistic regression classifier is trained for classification. Finally, parameters of the deep learning framework are analysed and potential developments are discussed. Experiments are conducted on the well-known Pavia data set, which was acquired by the reflective optics system imaging spectrometer sensor. It is found that the deep learning-based method provides a more accurate classification result than the traditional ones.

8.
Data analysis is an important part of the scientific process carried out by domain experts in data-intensive science. Despite the availability of several software tools and systems, using them in combination to conduct complex types of analyses is a very difficult task for non-IT experts. The main contribution of this paper is to introduce an open architectural framework based on service-oriented computing (SOC) principles, called the Ad-hoc DAta Grid Environment (ADAGE) framework, that can be used to guide the development of domain-specific problem-solving environments or systems to support data analysis activities. Through an application of the ADAGE framework and a prototype implementation that supports the analysis of financial news and market data, this paper demonstrates that systems developed based on the framework allow users to effectively express common analysis processes. This paper also outlines some limitations as well as avenues for future research.

9.
A problem with NOAA AVHRR imagery is that the intrinsic scale of spatial variation in land cover in the U.K. is usually finer than the scale of sampling imposed by the image pixels. The result is that most NOAA AVHRR pixels contain a mixture of land cover types (sub-pixel mixing). Three techniques for mapping the sub-pixel proportions of land cover classes in the New Forest, U.K. were compared: (i) artificial neural networks (ANN); (ii) mixture modelling; and (iii) fuzzy c-means classification. NOAA AVHRR imagery and SPOT HRV imagery, both for 28 June 1994, were obtained. The SPOT HRV images were classified using the maximum likelihood method, and used to derive the 'known' sub-pixel proportions of each land cover class for each NOAA AVHRR pixel. These data were then used to evaluate the predictions made (using the three techniques and the NOAA AVHRR imagery) in terms of the amount of information provided, the accuracy with which that information is provided, and the ease of implementation. The ANN was the most accurate technique, but its successful implementation depended on accurate co-registration and the availability of a training data set. Supervised fuzzy c-means classification was slightly more accurate than mixture modelling.
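Of the three techniques compared, linear mixture modelling is the most compact to sketch: each pixel spectrum is modelled as a proportion-weighted sum of class endmember spectra and solved by least squares, here with the sum-to-one constraint enforced via a heavily weighted extra equation. The endmember values below are invented for illustration.

```python
import numpy as np

# Made-up endmember spectra: 3 bands (rows) x 3 land cover classes (columns).
E = np.array([[0.10, 0.60, 0.30],
              [0.20, 0.50, 0.80],
              [0.05, 0.40, 0.60]])
p_true = np.array([0.2, 0.5, 0.3])
pixel = E @ p_true                      # perfectly mixed, noise-free pixel

# Append the sum-to-one constraint as a heavily weighted equation.
w = 1e3
A = np.vstack([E, w * np.ones((1, 3))])
b = np.append(pixel, w * 1.0)
p_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(p_hat, 3))
```

Real unmixing would add noise, a non-negativity constraint, and endmember spectra derived from the imagery, but the linear structure is the same.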

10.
This paper presents a new unmixing-based retrieval system for remotely sensed hyperspectral imagery. The need for this kind of system is justified by the exponential growth in the volume and number of remotely sensed data sets from the surface of the Earth. This is particularly the case for hyperspectral images, which comprise hundreds of spectral bands at different (almost contiguous) wavelength channels. To deal with the high computational cost of extracting the spectral information needed to catalog new hyperspectral images in our system, we resort to efficient implementations of spectral unmixing algorithms on commodity graphics processing units (GPUs). Spectral unmixing is a very popular approach for interpreting hyperspectral data with sub-pixel precision. This paper particularly focuses on the design of the proposed framework as a web service, as well as on the efficient implementation of the system on GPUs. In addition, we present a comparison of spectral unmixing algorithms available in the system on both CPU and GPU architectures.

11.
This Letter presents a new methodological framework for a hierarchical data fusion system for vegetation classification using multi-sensor and multi-temporal remotely sensed imagery. The uniqueness of the approach is that the overall structure of the fusion system is built upon a hierarchy of vegetation canopy attributes that can be remotely detected by sensors. The framework consists of two key components: an automated multi-source image registration system and a hierarchical model for multi-sensor and multi-temporal data fusion.

12.
As the amount of noisy, unorganized, linked data on the Internet increases dramatically, how to efficiently analyze such data becomes a challenging research problem. In this paper, we propose a framework, iOLAP, that offers functionalities for analyzing networked data from the Internet, social networks, scientific paper citations, etc. We first identify four main data dimensions that are common to most networked data, namely people, relation, content, and time. Motivated by the fact that the various dimensions of data jointly affect each other, we propose a polyadic factorization approach to directly model all the dimensions simultaneously in a unified framework. We provide detailed theoretical analysis of the new modeling framework. In addition to the theoretical framework, we also present an efficient implementation of the algorithm that takes advantage of the sparseness of data and has time complexity linear in the number of data records in a dataset. We then apply the proposed models to analyzing the blogosphere and personalizing recommendation in paper citations. Extensive experimental studies showed that our framework is able to provide deep insights jointly obtained from the various dimensions of networked data.

13.
The large volume of data and computational complexity of algorithms limit the application of hyperspectral image classification to real-time operations. This work addresses the use of different parallel processing techniques to speed up the Markov random field (MRF)-based method to perform spectral-spatial classification of hyperspectral imagery. The Metropolis relaxation labelling approach is modified to take advantage of multi-core central processing units (CPUs) and to adapt it to massively parallel processing systems like graphics processing units (GPUs). The experiments on different hyperspectral data sets revealed that the implementation approach has a huge impact on the execution time of the algorithm. The results demonstrated that the modified MRF algorithm produced classification accuracy similar to conventional methods with greatly improved computational performance. With modern multi-core CPUs, good computational speed-up can be achieved even without additional hardware support. The CPU-GPU hybrid framework rendered the otherwise computationally expensive approach suitable for time-constrained applications.
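A rough serial sketch of the MRF spectral-spatial idea, using deterministic ICM updates rather than the paper's Metropolis relaxation: each pixel takes the label minimizing its spectral (unary) cost plus a penalty for disagreeing with its 4-neighbours. The synthetic unary costs below are invented; the point is only the energy structure that the parallel implementations accelerate.

```python
import numpy as np

def icm(unary, beta=1.0, sweeps=5):
    """Iterated conditional modes on a grid MRF with Potts smoothing."""
    H, W, K = unary.shape
    labels = unary.argmin(axis=2)            # per-pixel spectral-only guess
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [labels[a, b] for a, b in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W]
                costs = [unary[i, j, k] + beta * sum(k != n for n in nbrs)
                         for k in range(K)]
                labels[i, j] = int(np.argmin(costs))
    return labels

# Synthetic 2-class scene: left half class 0, right half class 1.
rng = np.random.default_rng(0)
true = np.zeros((8, 8), dtype=int)
true[:, 4:] = 1
unary = rng.normal(0.0, 0.4, size=(8, 8, 2))
rows, cols = np.indices((8, 8))
unary[rows, cols, true] -= 1.0               # make the true label cheaper
smoothed = icm(unary)
print((smoothed == true).mean())
```

ICM's per-pixel update is the unit of work that the multi-core and GPU variants parallelize (typically with a checkerboard schedule so neighbouring pixels are not updated simultaneously).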

14.
15.
One obstacle to successful modeling and prediction of crop yields using remotely sensed imagery is the identification of image masks. Image masking involves restricting an analysis to a subset of a region's pixels rather than using all of the pixels in the scene. Cropland masking, where all sufficiently cropped pixels are included in the mask regardless of crop type, has been shown to generally improve crop yield forecasting ability, but it requires the availability of a land cover map depicting the location of cropland. The authors present an alternative image masking technique, called yield-correlation masking, which can be used for the development and implementation of regional crop yield forecasting models and eliminates the need for a land cover map. The procedure requires an adequate time series of imagery and a corresponding record of the region's crop yields, and involves correlating historical, pixel-level imagery values with historical regional yield values. Imagery used for this study consisted of 1-km, biweekly AVHRR NDVI composites from 1989 to 2000. Using a rigorous evaluation framework involving five performance measures and three typical forecasting opportunities, yield-correlation masking is shown to have comparable performance to cropland masking across eight major U.S. region-crop forecasting scenarios in a 12-year cross-validation study. Our results also suggest that 11 years of time series AVHRR NDVI data may not be enough to estimate reliable linear crop yield models using more than one NDVI-based variable. A robust, but sub-optimal, all-subsets regression modeling procedure is described and used for testing, and historical United States Department of Agriculture crop yield estimates and linear trend estimates are used to gauge model performance.
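The core of yield-correlation masking can be sketched in a few lines: correlate each pixel's NDVI time series with the regional yield record and keep the best-correlated pixels as the mask. All data below are synthetic, with 100 "cropped" pixels whose NDVI is constructed to track yield.

```python
import numpy as np

rng = np.random.default_rng(3)
years, npix = 12, 500
yields = rng.normal(50, 5, years)              # regional yield record
ndvi = rng.normal(0.4, 0.05, (npix, years))    # pixel NDVI time series
ndvi[:100] += 0.02 * (yields - 50)             # first 100 pixels track yield

# Correlate every pixel's series with the yield record, keep the top 100.
r = np.array([np.corrcoef(p, yields)[0, 1] for p in ndvi])
mask = np.argsort(r)[-100:]
hit_rate = np.mean(mask < 100)                 # fraction of true crop pixels
print(round(hit_rate, 2))
```

In the paper the selected pixels then feed a regression-based yield forecast; the sketch only shows why no land cover map is needed: the yield record itself identifies the informative pixels.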

16.
Contextual statistical decision rules for classification of lattice-structured data such as pixels in multispectral imagery are developed. Their recursive implementation is shown to have a strong resemblance to relaxation algorithms. Experimental evaluation of the proposed algorithms demonstrates their effectiveness.

17.
We describe ncWMS, an implementation of the Open Geospatial Consortium's Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a "bridging" tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.

18.
Grant W. Petty, Software, 2001, 31(11): 1067–1076
Physical dimensions and units form an essential part of the specification of constants and variables occurring in scientific programs, yet no standard compilable programming language implements direct support for automated dimensional consistency checking and unit conversion. This paper describes a conceptual basis and prototype implementation for such support within the framework of the standard Fortran 90 language. This is accomplished via an external module supplying appropriate user data types and operator interfaces. Legacy Fortran 77 scientific software can be easily modified to compile and run as 'dimension-aware' programs utilizing the proposed enhancements. Copyright © 2001 John Wiley & Sons, Ltd.
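A Python analogue of the paper's idea (the original is a Fortran 90 module using derived types and operator interfaces) can carry dimension exponents alongside values and refuse dimensionally inconsistent addition. This is a minimal sketch, not the paper's implementation.

```python
class Q:
    """A quantity carrying SI dimension exponents (m, kg, s)."""
    def __init__(self, val, dim):
        self.val, self.dim = val, tuple(dim)
    def __add__(self, other):
        # Addition only makes sense between like dimensions.
        if self.dim != other.dim:
            raise ValueError(f"dimension mismatch: {self.dim} vs {other.dim}")
        return Q(self.val + other.val, self.dim)
    def __mul__(self, other):
        # Multiplication adds dimension exponents.
        return Q(self.val * other.val,
                 tuple(a + b for a, b in zip(self.dim, other.dim)))

metre = Q(1.0, (1, 0, 0))
second = Q(1.0, (0, 0, 1))
speed = Q(3.0, (1, 0, -1))
dist = speed * Q(4.0, (0, 0, 1))   # 3 m/s * 4 s = 12 m
print(dist.val, dist.dim)
```

The Fortran version does the equivalent through overloaded `+` and `*` on a derived type, which is what lets legacy code become "dimension-aware" with minimal edits.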

19.
A new approach to texture discrimination is described. This approach is based upon an assumed stochastic model for texture in imagery and is an approximation to the statistically optimum maximum likelihood classifier. The construction and properties of the stochastic texture model are described and a digital filtering implementation of the resulting maximum likelihood texture discriminant is provided. The efficacy of this approach is demonstrated through experimental results obtained with simulated texture data. A comparison is provided with more conventional texture discriminants under identical conditions. The implications for texture discrimination in real-world imagery are discussed.

20.
In this paper, we propose a parallel convolution algorithm for estimating the partial derivatives of 2D and 3D images on distributed-memory MIMD architectures. Exploiting the separable characteristics of the Gaussian filter, the proposed algorithm consists of multiple phases such that each phase corresponds to a separated filter. Furthermore, it exploits both task and data parallelism, and reduces communication through data redistribution. We have implemented the proposed algorithm on the Intel Paragon and obtained a substantial speedup using more than 100 processors. The performance of the algorithm is also evaluated analytically. The analytical results, which agree with the experimental results, indicate that the proposed algorithm scales very well with the problem size and the number of processors. We have also applied our algorithm to the design and implementation of an efficient parallel scheme for the 3D surface tracking process. Although our focus is on 3D image data, the algorithm is also applicable to 2D image data, and can be useful for a myriad of important applications including medical imaging, magnetic resonance imaging, ultrasonic imagery, scientific visualization, and image sequence analysis.
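The separability the algorithm exploits can be checked in a few lines: a 2D Gaussian convolution equals a row-wise 1D pass followed by a column-wise 1D pass, which is what allows each phase to be distributed independently. This serial NumPy check is illustrative only; the paper's contribution is the distributed scheduling of these passes.

```python
import numpy as np

def gauss1d(sigma, radius=3):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def sep_blur(img, sigma=1.0):
    """Two 1D passes: along rows, then along columns."""
    k = gauss1d(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

rng = np.random.default_rng(0)
img = rng.random((16, 16))

# Reference: direct 2D convolution with the outer-product kernel.
k = gauss1d(1.0)
K2 = np.outer(k, k)
pad = np.pad(img, 3)
ref = np.zeros_like(img)
for i in range(16):
    for j in range(16):
        ref[i, j] = (pad[i:i + 7, j:j + 7] * K2[::-1, ::-1]).sum()
print(np.allclose(sep_blur(img), ref))
```

For an n x n image and kernel width w, separability reduces the work per pixel from w^2 multiplies to 2w, and each 1D pass decomposes naturally over rows or columns across processors.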

