Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
3.
To address the inaccuracy of ranging based on the phase information of radio-frequency signals caused by multipath propagation, a two-step ranging method based on dual tags is proposed, with two tags attached to each target to be localized. Under single-frequency subcarrier amplitude modulation, the method first extracts the wrapped phase of the carrier signal and computes each tag's distance to the reader within half a carrier wavelength, yielding a fine ranging estimate. Second, it extracts the unwrapped phase of the subcarrier signal and uses it to estimate the integer number of carrier half-wavelengths contained in the tag-reader distance. Third, the integer multiples obtained for the two tags are averaged, and this average number of carrier half-wavelengths is taken as the coarse ranging estimate of the distance between the two tags and the reader. Finally, the coarse and fine estimates are summed to obtain the final dual-tag ranging estimate. In addition, to reduce hardware cost, a geometric localization method based on a single reader and dual tags is proposed. Simulation results show that, in complex multipath environments, the dual-tag two-step method reduces the mean ranging error by about 35% compared with ranging directly from the subcarrier phase, with a final mean localization error of about 0.43 m and a maximum error of about 1 m, effectively improving the accuracy of phase-based localization while lowering hardware cost.
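As a rough illustration of how the coarse and fine estimates combine, the sketch below (Python; carrier and subcarrier frequencies are hypothetical, phases are noise-free, and the averaging of the two tags' integer multiples follows the description above) reconstructs two tag-reader distances:

```python
import numpy as np

C = 3e8                       # propagation speed, m/s
F_CARRIER = 920e6             # hypothetical UHF carrier frequency, Hz
F_SUB = 1e6                   # hypothetical subcarrier frequency, Hz
LAM = C / F_CARRIER
HALF_WL = LAM / 2             # ~16.3 cm ambiguity interval of the carrier phase

def fine_distance(carrier_phase):
    # backscatter round trip: phase = (4*pi*d/lambda) mod 2*pi,
    # so the wrapped carrier phase fixes d only within lambda/2
    return (carrier_phase % (2 * np.pi)) * LAM / (4 * np.pi)

def half_wavelength_count(sub_phase_unwrapped):
    # the unwrapped subcarrier phase gives an absolute but coarser distance;
    # quantize it to whole carrier half-wavelengths
    d_sub = sub_phase_unwrapped * (C / F_SUB) / (4 * np.pi)
    return np.floor(d_sub / HALF_WL)

def dual_tag_ranges(carrier_phases, sub_phases):
    fines = np.array([fine_distance(p) for p in carrier_phases])
    counts = np.array([half_wavelength_count(p) for p in sub_phases])
    n_avg = counts.mean()             # average the two tags' integer multiples
    return n_avg * HALF_WL + fines    # shared coarse estimate + per-tag fine estimate

# two tags at 3.20 m and 3.22 m from the reader (close enough to share a count)
d_true = np.array([3.20, 3.22])
carrier_ph = (4 * np.pi * d_true / LAM) % (2 * np.pi)
sub_ph = 4 * np.pi * d_true / (C / F_SUB)         # already unwrapped
print(dual_tag_ranges(carrier_ph, sub_ph))         # ~ [3.20, 3.22]
```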

4.
Generalized discriminant analysis using a kernel approach
Baudat G, Anouar F. Neural Computation, 2000, 12(10): 2385-2404
We present a new method that we call generalized discriminant analysis (GDA) to deal with nonlinear discriminant analysis using a kernel function operator. The underlying theory is close to that of support vector machines (SVM) insofar as the GDA method provides a mapping of the input vectors into a high-dimensional feature space. In the transformed space, linear properties make it easy to extend and generalize classical linear discriminant analysis (LDA) to nonlinear discriminant analysis. The formulation is expressed as the resolution of an eigenvalue problem. By choosing different kernels, one can cover a wide class of nonlinearities. For both simulated data and alternate kernels, we give classification results as well as the shape of the decision function. The results are confirmed using real data to perform seed classification.
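GDA itself is formulated as an eigenvalue problem on the centered kernel matrix; as a lightweight stand-in with the same spirit, the sketch below applies classical LDA in an empirical RBF-kernel feature space (the data set and gamma are illustrative, not from the paper):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

# empirical kernel map: row i holds k(x_i, .) against all training points,
# i.e. the nonlinear feature space in which the discriminant is linear
K = rbf_kernel(X, X, gamma=2.0)

lda = LinearDiscriminantAnalysis()
lda.fit(K, y)
print("training accuracy:", lda.score(K, y))

# new points are mapped through the same kernel before prediction
K_new = rbf_kernel(X[:5], X, gamma=2.0)
print(lda.predict(K_new), y[:5])
```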

5.
COSMOS (Campaign for validating the Operation of Soil Moisture and Ocean Salinity) and NAFE (National Airborne Field Experiment) were two airborne campaigns held in the Goulburn River catchment (Australia) at the end of 2005. These airborne measurements are being used as benchmark data sets for validating the SMOS (Soil Moisture and Ocean Salinity) ground segment processor over prairies and crops. This paper presents results of soil moisture inversions and brightness temperature simulations at different resolutions from dual-polarisation and multi-angular L-band (1.4 GHz) measurements obtained from two independent radiometers. The aim of the paper is to provide a method that could overcome the limitations of unknown surface roughness for soil moisture retrievals from L-band data. For that purpose, a two-step approach is proposed for areas with low to moderate vegetation. First, a two-parameter inversion of surface roughness and optical depth is used to obtain a roughness correction dependent on land use only; this step is conducted over small areas with known soil moisture. The roughness correction is then used in the second step, where soil moisture and optical depth are retrieved over larger areas including mixed pixels. This approach produces soil moisture retrievals with root mean square errors between 0.034 m³ m⁻³ and 0.054 m³ m⁻³ over crops, prairies, and mixtures of these two land uses at different resolutions.
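A minimal sketch of the two-step logic, assuming a deliberately simplified tau-omega forward model (single scattering albedo set to zero, a toy linear dielectric mixing rule, and hypothetical incidence angles and effective temperature) rather than the model used in the paper:

```python
import numpy as np
from scipy.optimize import least_squares

ANGLES = np.deg2rad([25, 35, 45, 55])   # multi-angular look geometry (hypothetical)
T_EFF = 295.0                            # effective temperature, K (hypothetical)

def forward_tb(sm, tau, h, theta=ANGLES):
    """Toy simplified tau-omega model at H polarisation."""
    eps = 3.0 + 25.0 * sm                # crude linear dielectric stand-in
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    r_h = np.abs((cos_t - np.sqrt(eps - sin_t**2)) /
                 (cos_t + np.sqrt(eps - sin_t**2)))**2   # smooth Fresnel reflectivity
    gamma = np.exp(-tau / cos_t)         # vegetation attenuation
    return T_EFF * (1 - r_h * np.exp(-h)) * gamma + T_EFF * (1 - gamma)

# Step 1: calibration area with known soil moisture -> fit (tau, roughness h)
sm_cal = 0.25
tb_cal = forward_tb(sm_cal, tau=0.15, h=0.3)           # stand-in observations
res1 = least_squares(lambda p: forward_tb(sm_cal, p[0], p[1]) - tb_cal,
                     x0=[0.05, 0.1], bounds=([0, 0], [2, 1]))
h_landuse = res1.x[1]                                   # roughness per land use

# Step 2: freeze the land-use roughness, retrieve (sm, tau) over the wider area
tb_new = forward_tb(0.18, tau=0.20, h=0.3)
res2 = least_squares(lambda p: forward_tb(p[0], p[1], h_landuse) - tb_new,
                     x0=[0.2, 0.05], bounds=([0, 0], [0.6, 2]))
print(f"retrieved soil moisture: {res2.x[0]:.3f} m^3/m^3")
```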

6.
The early diagnosis of lymphatic system tumors heavily relies on the computerized morphological analysis of blood cells in microscopic specimen images. Automating this analysis necessarily requires an accurate segmentation of the cells themselves. In this paper, we propose a robust method for the automatic segmentation of microscopic images. Cell segmentation is achieved following a coarse-to-fine approach, which primarily consists in the rough identification of the blood cell and, then, in the refinement of the nucleus contours by means of a neural model. The method proposed has been applied to different case studies, revealing its actual feasibility. This article was submitted by the authors in English.

Sara Colantonio, M.Sc. honors degree in Computer Science from the University of Pisa in 2004, PhD student in Information Engineering at the Dept. of Information Engineering, Pisa University, is a research fellow at the Institute of Information Science and Technologies of the Italian National Research Council, in Pisa. She has a grant from Finmeccanica for studies in the field of image categorization with applications in medicine and quality control. Her main interests include neural networks, machine learning, industrial diagnostics, and medical imaging. She is a coauthor of more than fifteen scientific papers. At present, she is involved in a number of European research projects regarding image mining, information technology, and medical decision support systems.

Ovidio Salvetti, director of research at the Institute of Information Science and Technologies (ISTI) of the Italian National Research Council (CNR), in Pisa, is working in the field of theoretical and applied computer vision. His fields of research are image analysis and understanding, pictorial information systems, spatial modeling, and intelligent processes in computer vision. He is a coauthor of four books and monographs and more than three hundred technical and scientific articles; he also possesses ten patents regarding systems and software tools for image processing. He has been a scientific coordinator of several national and European research and industrial projects, in collaboration with Italian and foreign research groups, in the fields of computer vision and high-performance computing for diagnostic imaging. He is a member of the editorial boards of the international journals Pattern Recognition and Image Analysis and G. Ronchi Foundation Acts. He is at present the CNR contact person in ERCIM (the European Research Consortium for Informatics and Mathematics) for the Working Group on Vision and Image Understanding, a member of IEEE, and a member of the steering committee of a number of EU projects. He is head of the ISTI Signals and Images Laboratory.

Igor B. Gurevich. Born 1938. Dr. Eng. [Diploma Engineer (Automatic Control and Electrical Engineering), 1961, Moscow Power Engineering Institute, Moscow, USSR]; Dr. (Theoretical Computer Science/Mathematical Cybernetics), 1975, Moscow Institute of Physics and Technology, Moscow, USSR. Head of department at the Dorodnicyn Computing Center of the Russian Academy of Sciences, Moscow; assistant professor at the Computer Science Faculty, Moscow State University. He has worked from 1960 to the present as an engineer and researcher in industry, medicine, and universities and in the Russian Academy of Sciences. Area of expertise: image analysis, image understanding, mathematical theory of pattern recognition, theoretical computer science, pattern recognition and image analysis techniques for applications in medicine, nondestructive testing, process control, knowledge bases, knowledge-based systems. Two monographs (in coauthorship) and 135 papers on pattern recognition, image analysis, theoretical computer science, and applications in peer-reviewed international and Russian journals and conference and workshop proceedings; one patent of the USSR and four patents of the RF. Executive Secretary of the Russian Federation Association for Pattern Recognition and Image Analysis, member of the International Association for Pattern Recognition Governing Board (representative from the Russian Federation), IAPR fellow. He has been the PI of many research and development projects as part of national research (applied and basic research) programs of the Russian Academy of Sciences, the Ministry of Education and Science of the Russian Federation, the Russian Foundation for Basic Research, the Soros Foundation, and INTAS. Vice Editor-in-Chief of Pattern Recognition and Image Analysis, International Academic Publishing Company “Nauka/Interperiodica”, Pleiades Publishing.

7.
With the present gap between CAD and CAE, designers are often hindered in their efforts to explore design alternatives and ensure product robustness. This paper describes the multi-representation architecture (MRA), a design-analysis integration strategy that views CAD-CAE integration as an information-intensive mapping between design models and analysis models. The MRA divides this mapping into subproblems using four information representations: solution method models (SMMs), analysis building blocks (ABBs), product models (PMs), and product model-based analysis models (PBAMs). A key distinction is the explicit representation of design-analysis associativity as PM-ABB idealization linkages that are contained in PBAMs. The MRA achieves flexibility by supporting different solution tools and design tools, and by accommodating analysis models of diverse discipline, complexity, and solution method. Object and constraint graph techniques provide modularity and rich semantics. Priority has been given to the class of problems termed routine analysis: the regular use of established analysis models in product design. Representative solder joint fatigue case studies demonstrate that the MRA enables highly automated routine analysis for mixed formula-based and finite element-based models. Accordingly, one can employ the MRA and associated methodology to create specialized CAE tools that utilize both design information and general purpose solution tools. Nomenclature: MRA, multi-representation architecture; SMM, solution method model; ABB, analysis building block; PM, product model; PBAM, product model-based analysis model; ABB-SMM transformation; idealization relation between design and analysis attributes; PM-ABB associativity linkage indicating usage of one or more …
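The idealization-linkage idea can be caricatured in a few lines; all class and attribute names below are hypothetical, loosely echoing the solder joint case study:

```python
from dataclasses import dataclass, field

@dataclass
class SolderJoint:                  # product model (PM) fragment
    height_mm: float
    shear_area_mm2: float

@dataclass
class ExtensionalRod:               # analysis building block (ABB)
    length_mm: float = 0.0
    area_mm2: float = 0.0

@dataclass
class SolderJointDeformationPBAM:   # product model-based analysis model
    pm: SolderJoint
    abb: ExtensionalRod = field(default_factory=ExtensionalRod)

    def idealize(self):
        # explicit PM -> ABB idealization linkages: the associativity
        # between design attributes and analysis attributes is data,
        # not code buried in a one-off translator
        self.abb.length_mm = self.pm.height_mm        # joint height -> rod length
        self.abb.area_mm2 = self.pm.shear_area_mm2    # shear area  -> rod area
        return self.abb

pbam = SolderJointDeformationPBAM(SolderJoint(height_mm=0.5, shear_area_mm2=1.2))
print(pbam.idealize())   # ABB instance ready to hand to a solution method model
```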

8.
The rolling process is a strategic industrial and economic activity with a large impact on worldwide commercial markets. Typical operating conditions during the rolling process involve extreme mechanical situations, including large forces and tensions. In some cases, these scenarios can lead to several kinds of faults, which might result in large economic losses. A proper assessment of the process condition is therefore a key aspect, not only as a fault detection mechanism, but also as an economic saving system. In the rolling process, a remarkable kind of fault is the so-called chatter, a sudden powerful vibration that affects the quality of the rolled material. In this paper, we propose a visual approach for the analysis of the rolling process. Following physical principles, we characterize the exit thickness and the rolling forces by means of a high-dimensional feature vector that contains the energies at specific frequency bands. Afterwards, we use a dimensionality reduction technique, t-SNE, to project all feature vectors onto a visual 2D map that describes the vibrational states of the process. The proposed methodology provides a way to perform an exploratory analysis of the dynamic behaviors in the rolling process and makes it possible to find relationships between these behaviors and the chatter fault. Experimental results from real data of a cold rolling mill are described, showing the application of the proposed approach.
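A compact sketch of the pipeline, assuming hypothetical frequency bands and stand-in force signals; in the paper the features come from measured exit thickness and rolling forces:

```python
import numpy as np
from scipy.signal import welch
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
FS = 2000                       # sampling rate, Hz (hypothetical)
BANDS = [(0, 100), (100, 200), (200, 400), (400, 600), (600, 1000)]  # hypothetical

def band_energies(signal):
    """Feature vector of spectral energies in fixed frequency bands."""
    f, pxx = welch(signal, fs=FS, nperseg=512)
    return np.array([pxx[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS])

# stand-in force signals: broadband normal operation vs. a chatter-like tone
t = np.arange(4096) / FS
normal = [rng.normal(size=t.size) for _ in range(40)]
chatter = [np.sin(2 * np.pi * 550 * t + rng.uniform(0, 2 * np.pi))
           + 0.5 * rng.normal(size=t.size) for _ in range(40)]
X = np.array([band_energies(s) for s in normal + chatter])

# project the feature vectors onto a 2-D map of vibrational states
emb = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(X)
print(emb.shape)   # (80, 2); chatter windows cluster away from normal ones
```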

9.
Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such “overfitting” by generalizing the model to allow for more behavior. This generalization is often driven by the representation language and very crude assumptions about completeness. As a result, parts of the model are “overfitting” (allow only for what has actually been observed) while other parts may be “underfitting” (allow for much more behavior without strong support for it). None of the existing techniques enables the user to control the balance between “overfitting” and “underfitting”. To address this, we propose a two-step approach. First, using a configurable approach, a transition system is constructed. Then, using the “theory of regions”, the model is synthesized. The approach has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches.
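A minimal sketch of the first, configurable step (the region-based synthesis step is omitted); the toy log and the set-based state abstraction below are illustrative:

```python
from collections import defaultdict

log = [                      # toy event log, one trace per case
    ["register", "check", "decide", "pay"],
    ["register", "check", "decide", "reject"],
    ["register", "decide", "check", "pay"],
]

def state_of(prefix):
    # the state of a case is the *set* of activities seen so far; other
    # abstractions (full prefix, multiset, bounded window) plug in here,
    # which is what makes the construction configurable
    return frozenset(prefix)

transitions = defaultdict(set)
for trace in log:
    for i, activity in enumerate(trace):
        src = state_of(trace[:i])
        dst = state_of(trace[:i + 1])
        transitions[src].add((activity, dst))

for src, outs in transitions.items():
    for activity, dst in sorted(outs, key=lambda x: x[0]):
        print(sorted(src), f"--{activity}-->", sorted(dst))
```

Coarser abstractions merge more states and thus generalize more; finer ones stay closer to the observed log, which is exactly the overfitting/underfitting dial the abstract describes.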

10.
Given a user-specified minimum correlation threshold θ and a market-basket database with N items and T transactions, an all-strong-pairs correlation query finds all item pairs with correlations above the threshold θ. However, when the number of items and transactions is large, the computation cost of this query can be very high. The goal of this paper is to provide computationally efficient algorithms to answer the all-strong-pairs correlation query. Indeed, we identify an upper bound of Pearson's correlation coefficient for binary variables. This upper bound is not only much cheaper to compute than Pearson's correlation coefficient, but also exhibits special monotone properties which allow pruning of many item pairs even without computing their upper bounds. A two-step all-strong-pairs correlation query (TAPER) algorithm is proposed to exploit these properties in a filter-and-refine manner. Furthermore, we provide an algebraic cost model which shows that the computation savings from pruning are independent of, or improve with, the number of items in data sets with Zipf-like or linear rank-support distributions. Experimental results from synthetic and real-world data sets exhibit similar trends and show that the TAPER algorithm can be an order of magnitude faster than brute-force alternatives. Finally, we demonstrate that the algorithmic ideas developed in the TAPER algorithm can be extended to efficiently compute negative correlation and uncentered Pearson's correlation coefficient.
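A sketch of the filter-and-refine idea, using the standard upper bound of the φ coefficient for a pair (a, b) with supp(a) ≥ supp(b): upper(φ) = sqrt(supp(b)/supp(a) · (1−supp(a))/(1−supp(b))). The data set here is synthetic:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
T, N = 5000, 30
D = (rng.random((T, N)) < rng.uniform(0.05, 0.5, N)).astype(int)  # toy baskets
theta = 0.3

supp = D.mean(axis=0)       # single-item supports: all the filter step needs
strong = []
for i, j in combinations(range(N), 2):
    a, b = (i, j) if supp[i] >= supp[j] else (j, i)
    upper = np.sqrt(supp[b] / supp[a] * (1 - supp[a]) / (1 - supp[b]))
    if upper < theta:
        continue                          # filter: pruned without scanning D
    supp_ab = (D[:, a] & D[:, b]).mean()  # refine: exact phi coefficient
    phi = (supp_ab - supp[a] * supp[b]) / np.sqrt(
        supp[a] * supp[b] * (1 - supp[a]) * (1 - supp[b]))
    if phi >= theta:
        strong.append((a, b, phi))
print(len(strong), "strong pairs")
```

The bound depends only on single-item supports, so the expensive pair support count is paid only for survivors of the filter.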

11.
Rapid advances in computer and geospatial technology have made it increasingly possible to design and develop urban models to efficiently simulate spatial growth patterns. An approach commonly used in geography and urban growth modelling is based on cellular automata theory and the GIS framework. However, the behaviour of cellular automaton (CA) models is affected by uncertainties arising from the interaction between model elements, structures, and the quality of data sources used as model input. The uncertainty of CA models has not been sufficiently addressed in the research literature. The objective of this study is to analyze the behaviour of a GIS-based CA urban growth model using sensitivity analysis (SA). The proposed SA approach has both qualitative and quantitative components. These components were operationalized using the cross-tabulation map, KAPPA index with coincidence matrices, and spatial metrics. The research focus was on the impacts of CA neighbourhood size and type on the model outcomes. A total of 432 simulations were generated and the results suggest that CA neighbourhood size and type configurations have a significant influence on the CA model output. This study provides insights about the limitations of CA model behaviour and contributes to enhancing existing spatial urban growth modelling procedures.
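A toy constrained-growth CA illustrating the sensitivity question: the same seed map is grown under two Moore-neighbourhood sizes and the outputs cross-compared. The transition rule and parameters are hypothetical, and the agreement score is a crude stand-in for the KAPPA/coincidence-matrix analysis:

```python
import numpy as np

def grow(seed, radius, steps=15, frac=0.1):
    """Urbanize a cell when >= frac of its Moore neighbours are urban."""
    g = seed.copy()
    size = (2 * radius + 1) ** 2 - 1
    for _ in range(steps):
        n = np.zeros_like(g, dtype=float)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dy or dx:
                    n += np.roll(np.roll(g, dy, 0), dx, 1)
        g = g | (n >= frac * size)      # toy transition rule
    return g

rng = np.random.default_rng(0)
seed = (rng.random((100, 100)) < 0.01).astype(int)   # sparse urban seeds
a = grow(seed, radius=1)                 # 3x3 Moore neighbourhood
b = grow(seed, radius=2)                 # 5x5 Moore neighbourhood
agreement = (a == b).mean()              # crude cell-by-cell coincidence
print(f"map agreement between configurations: {agreement:.2%}")
```

Even with identical seeds and rule, changing only the neighbourhood size shifts the simulated growth pattern substantially, which is the kind of configuration effect the study quantifies over 432 runs.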

12.
A novel manifold learning approach is presented to efficiently identify low-dimensional structures embedded in high-dimensional MRI data sets. These low-dimensional structures, known as manifolds, are used in this study for predicting brain tumor progression. The data sets consist of a series of high-dimensional MRI scans for four patients with tumor and progressed regions identified. We attempt to classify tumor, progressed and normal tissues in low-dimensional space. We also attempt to verify if a progression manifold exists—the bridge between tumor and normal manifolds. By identifying and mapping the bridge manifold back to MRI image space, this method has the potential to predict tumor progression. This could be greatly beneficial for patient management. Preliminary results have supported our hypothesis: normal and tumor manifolds are well separated in a low-dimensional space. Also, the progressed manifold is found to lie roughly between the normal and tumor manifolds.
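The manifold separation can be mocked up as follows, with synthetic stand-ins for voxel feature vectors; Isomap is used here as a generic manifold learner, and the paper's algorithm may differ:

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
d = 200                                        # voxel-feature dimension (toy)
normal = rng.normal(0.0, 1.0, (100, d))
tumor = rng.normal(3.0, 1.0, (100, d))
progressed = rng.normal(1.5, 1.0, (60, d))     # should land between the two
X = np.vstack([normal, progressed, tumor])

# embed all samples into a low-dimensional space and inspect the class layout
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
for name, sl in [("normal", slice(0, 100)),
                 ("progressed", slice(100, 160)),
                 ("tumor", slice(160, 260))]:
    print(name, "centroid:", emb[sl].mean(axis=0).round(2))
# the progressed centroid falls roughly between normal and tumor,
# mirroring the "bridge manifold" hypothesis
```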

13.
A formulation for shape optimization of elastic structures subject to multiple load cases is presented. The problem is solved using a homogenization method. When compared to the single load solution strategy, it is shown that the more general formulation can produce more stable designs while it introduces little additional complexity.
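For reference, a standard weighted-compliance multi-load formulation of this kind (the paper's exact statement may differ) reads:

```latex
\min_{\rho}\; \sum_{k=1}^{K} w_k\,\mathbf{f}_k^{\mathsf T}\mathbf{u}_k
\quad\text{s.t.}\quad
\mathbf{K}(\rho)\,\mathbf{u}_k=\mathbf{f}_k,\; k=1,\dots,K,
\qquad \int_\Omega \rho\,\mathrm{d}\Omega \le V,
\qquad 0<\rho_{\min}\le\rho\le 1,
```

where each load case k gets its own equilibrium solve and weight w_k; designs optimized for a single case can be unstable under the other loads, which the weighted sum guards against.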

14.
Computers & Structures, 1987, 26(3): 499-512
A general analysis of the static and dynamic responses of suspension bridges is presented. The Cullmann ‘elastic weight’ theory is revisited and shown to be a powerful discretization tool. According to this method, the structure is reduced to a system of lumped masses and elastic springs, so that it can be considered as a classic holonomic system of rigid elements (a Lagrange system). The elasticity is reduced to reactive conservative forces in the springs, and the second-order components of the potential energy of the applied forces can be detected by means of the work of these reactive forces. This is an advantage, at least from the theoretical point of view, over any finite element method. Another advantage is the substantial reduction in the number of redundants, which allowed us to use a portable personal computer. Finally, this discretization method is quite general, but it seems particularly suitable for suspension bridges, in which the springs are only on the bridge deck and must be considered as simple bending cells. Some numerical results are shown, in which we examine one of the possible suspension bridges to be built across the Straits of Messina.
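The lumped-mass/elastic-spring reduction leads to a small generalized eigenproblem for the natural frequencies; a generic sketch with hypothetical deck values (not the Messina model):

```python
import numpy as np
from scipy.linalg import eigh

n = 10                          # number of lumped masses along the deck
m = 2.0e5                       # lumped mass, kg (hypothetical)
k = 1.0e8                       # spring stiffness, N/m (hypothetical)

M = m * np.eye(n)               # diagonal lumped mass matrix
K = np.zeros((n, n))            # chain of springs, fixed at both ends
for i in range(n):
    K[i, i] = 2 * k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

# generalized eigenproblem K x = omega^2 M x for the dynamic response
w2, _ = eigh(K, M)
print("first natural frequencies (Hz):", np.sqrt(w2[:3]) / (2 * np.pi))
```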

15.
We provide sufficient conditions for the semilocal convergence of a family of two-step Steffensen-type iterative methods on a Banach space. The main advantage of this family is that it needs to evaluate neither a Fréchet derivative nor a bilinear operator, while retaining a high speed of convergence. Some numerical experiments are also presented.
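One representative derivative-free two-step scheme of Steffensen type, in which the divided difference built at x_n is frozen and reused for the second step (the family in the paper may differ), looks like this for a scalar equation:

```python
def two_step_steffensen(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        # first-order divided difference [x, x + f(x); f]: a stand-in
        # for the derivative, so no Fréchet derivative is evaluated
        dd = (f(x + fx) - fx) / fx
        y = x - fx / dd              # Steffensen step
        x_new = y - f(y) / dd        # second step reuses the frozen difference
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = two_step_steffensen(lambda t: t**3 - 2*t - 5, x0=2.0)
print(root, root**3 - 2*root - 5)    # root near 2.0946, residual near 0
```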

16.
Although an important component of natural scenes, the representation of skyscapes is often relatively simplistic. This can be largely attributed to the complexity of the thermodynamics underpinning cloud evolution and wind dynamics, which make interactive simulation challenging. We address this problem by introducing a novel layered model that encompasses both terrain and atmosphere, and supports efficient meteorological simulations. The vertical and horizontal layer resolutions can be tuned independently, while maintaining crucial inter-layer thermodynamics, such as convective circulation and land-air transfers of heat and moisture. In addition, we introduce a cloud-form taxonomy for clustering, classifying and upsampling simulation cells to enable visually plausible, finely-sampled volumetric rendering. As our results demonstrate, this pipeline allows interactive simulation followed by up-sampled rendering of extensive skyscapes with dynamic clouds driven by consistent wind patterns. We validate our method by reproducing characteristic phenomena such as diurnal shore breezes, convective cells that contribute to cumulus cloud formation, and orographic effects from moist air driven upslope.

17.
An approach is presented for the determination of solution sensitivity to changes in problem domain or shape. A finite element displacement formulation is adopted, and the point of view is taken that the finite element basis functions and grid are fixed during the sensitivity analysis; therefore, the method is referred to as a “fixed basis function” finite element shape sensitivity analysis. This approach avoids the requirement of explicit or approximate differentiation of finite element matrices and vectors and the difficulty or errors resulting from such calculations. Effectively, the sensitivity to boundary shape change is determined exactly; thus, the accuracy of the solution sensitivity is dictated only by the finite element mesh used. The evaluation of sensitivity matrices and force vectors requires only modest calculations beyond those of the reference problem finite element analysis; that is, certain boundary integrals and reaction forces on the reference location of the moving boundary are required. In addition, the formulation provides the unique family of element domain changes which completely eliminates the inclusion of grid sensitivity from the shape sensitivity calculation. The work is illustrated for some one-dimensional beam problems and is outlined for a two-dimensional C0 problem; the extension to three-dimensional problems is straightforward. Received December 5, 1999; revised manuscript received July 6, 2000.

18.
Failure mode and effects analysis (FMEA) is a methodology for evaluating a system, design, process, or service for possible ways in which failures (problems, errors, risks, and concerns) can occur. It is a group decision function and cannot be done on an individual basis. Because of its cross-functional and multidisciplinary nature, FMEA team members often hold different opinions and knowledge and produce different types of assessment information: complete and incomplete, precise and imprecise, known and unknown. These different types of information are very difficult to incorporate into the FMEA using the traditional risk priority number (RPN) model or fuzzy rule-based approximate reasoning methodologies. In this paper we present an FMEA using the evidential reasoning (ER) approach, a newly developed methodology for multiple attribute decision analysis. The proposed FMEA is then illustrated with an application to a fishing vessel. As illustrated by the numerical example, the proposed FMEA can well capture FMEA team members' diverse opinions and prioritize failure modes under different types of uncertainties.
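The ER algorithm performs weighted, recursive aggregation with explicit treatment of unassigned belief; as a much-simplified stand-in, the sketch below merges two team members' belief distributions over risk grades with plain Dempster's rule (grades, weights, and numbers are all hypothetical, and this is not the ER algorithm itself):

```python
GRADES = ["low", "medium", "high"]

def dempster(m1, m2):
    """Combine two basic probability assignments over singleton grades."""
    conflict = sum(m1[a] * m2[b] for a in GRADES for b in GRADES if a != b)
    return {g: m1[g] * m2[g] / (1 - conflict) for g in GRADES}

expert_a = {"low": 0.1, "medium": 0.6, "high": 0.3}   # complete assessment
expert_b = {"low": 0.0, "medium": 0.4, "high": 0.6}   # another member's view
print(dempster(expert_a, expert_b))
# {'low': 0.0, 'medium': 0.571..., 'high': 0.428...}
```

Unlike a single RPN score, the combined belief distribution retains how much of the team's evidence points to each risk grade, which is what lets the full ER approach handle incomplete and imprecise assessments.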

19.
Jia Zhen, Zhao Jianwei, Wang Hongcheng, Xiong Ziyou, Finn Alan. Multimedia Tools and Applications, 2015, 74(6): 1845-1862
In this paper we propose a novel face hallucination algorithm to synthesize a high-resolution face image from several low-resolution input face images. Face...

20.
A neural network trained with clustered data has been applied to the extraction of temperature from vibrational Coherent Anti-Stokes Raman (CARS) spectra of nitrogen. CARS is a non-intrusive thermometry technique applied in practical combustors in industry. The advantages of clustering the training data over training with unprocessed calculated spectra are described. The method is applied to CARS data from an isothermal furnace and a liquid-kerosene-fuelled aeroengine combustor sector rig. The resulting temperatures have been compared with values extracted from the data using conventional least squares fitting and, where possible, with mean temperatures measured by a pyrometer and a blackbody cavity probe. The main advantage of the neural network method is speed, with the potential for online temperature extraction at the spectral acquisition rate of 10 Hz using standard PC hardware.
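A toy version of the train-on-clusters idea, with synthetic temperature-dependent "spectra"; the spectral model, network size, and clustering granularity are all illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_spectrum(temp, n=64):
    # hypothetical spectral model: peak position and width drift with temperature
    x = np.linspace(0, 1, n)
    return np.exp(-((x - 0.3 - temp / 8000) ** 2) / (0.01 + temp / 4e4))

temps = rng.uniform(300, 2500, 3000)
spectra = np.array([toy_spectrum(t) for t in temps])
spectra += rng.normal(0, 0.01, spectra.shape)         # measurement-like noise

# cluster the calculated spectra and train only on the cluster prototypes,
# giving the network a compact, representative training set
km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(spectra)
proto_temps = np.array([temps[km.labels_ == k].mean() for k in range(200)])

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(km.cluster_centers_, proto_temps / 1000.0)    # scaled targets, kK

pred = net.predict(toy_spectrum(1500.0).reshape(1, -1))[0] * 1000.0
print(f"true 1500 K, predicted {pred:.0f} K")
```

Once trained, a forward pass is microseconds per spectrum, which is the speed property that makes online extraction at the 10 Hz acquisition rate plausible.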

