Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
For active airborne moving targets, a passive velocity-measurement method based on a dual-ground-station reconnaissance system is proposed. The method uses the geometric relationship between the two stations and the target, together with the Doppler frequency difference induced by their relative velocity, to construct dynamic equations and solve for the target velocity. Using several sets of realistic parameters, the relationship between velocity-measurement accuracy and the accuracies of frequency and angle measurement is analyzed in detail. Simulation results show that the method achieves high velocity-measurement accuracy over a wide range of azimuth angles and is not highly sensitive to frequency- or angle-measurement accuracy.
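The dual-station geometry above can be illustrated with a minimal 2-D sketch: each station observes a Doppler shift proportional to the projection of the target velocity onto its line of sight, and the two resulting equations determine the velocity vector. The planar simplification and all names are illustrative assumptions, not the paper's actual formulation.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def los_unit(station, target):
    """Unit line-of-sight vector from station to target (2-D)."""
    dx, dy = target[0] - station[0], target[1] - station[1]
    r = math.hypot(dx, dy)
    return (dx / r, dy / r)


def solve_velocity(st1, st2, target, f1, f2, f0):
    """Invert two Doppler shifts f_i = -(f0/C) * (v . u_i) for the velocity v."""
    u1, u2 = los_unit(st1, target), los_unit(st2, target)
    b1, b2 = -f1 * C / f0, -f2 * C / f0   # radial velocity toward each station
    det = u1[0] * u2[1] - u1[1] * u2[0]   # Cramer's rule on the 2x2 system
    vx = (b1 * u2[1] - b2 * u1[1]) / det
    vy = (u1[0] * b2 - u2[0] * b1) / det
    return vx, vy
```

In the paper the two stations exploit a frequency *difference*; the sketch treats the two shifts separately, which is equivalent once a common reference frequency is fixed.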

3.
Estimation and compensation of Doppler radar installation bias and velocity-measurement accuracy   Total citations: 1 (self-citations: 0, others: 1)
This paper addresses optimization of aircraft navigation and positioning accuracy. Velocity-measurement errors in the fixed-antenna Doppler radar, the attitude-and-heading reference system, and the navigation system introduce navigation errors. To reduce velocity error and improve navigation accuracy, a compensation method based on estimating the installation bias and the velocity-measurement accuracy is proposed. The working principle of the Doppler radar and the factors affecting its errors are analyzed and an error model is established; a Kalman filter is then combined with the error model to estimate the installation bias and velocity-measurement accuracy as state variables, and compensation is applied according to the estimates. Simulation results show that, during take-off, landing, and level flight, the improved method accurately estimates the installation bias and velocity-measurement accuracy, reduces velocity error, and improves the aircraft's navigation accuracy over both land and sea.
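The estimation step described above can be reduced to a toy one-dimensional version: the installation bias is modelled as a (nearly) constant state observed through noisy measurements and tracked with a scalar Kalman filter. The noise variances and data below are invented for illustration; the paper's filter carries a full multi-state error model.

```python
def estimate_bias(measurements, r=0.25, q=1e-6):
    """Scalar Kalman filter for a constant bias: z = x + noise (variance r)."""
    x, p = 0.0, 1.0          # initial state estimate and its variance
    for z in measurements:
        p += q               # predict: bias drifts only very slowly
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # measurement update
        p *= 1.0 - k         # posterior variance
    return x
```

With q near zero the estimate converges toward the sample mean of the measurements, which is exactly the behaviour wanted for a constant bias.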

4.
Doppler shift is an important GPS receiver observable with high accuracy, real-time availability, and immunity to cycle slips; this paper studies the real-time determination of vehicle velocity and acceleration from Doppler observations. Since the timeliness and accuracy of the satellite velocity and acceleration are the key constraints on computing vehicle velocity and acceleration from Doppler data, the equations for computing satellite velocity and acceleration in real time from the broadcast ephemeris are derived. After establishing a broadcast-ephemeris-based model for computing vehicle velocity and acceleration from GPS Doppler, static and vehicle-mounted dynamic experiments were carried out. The results show that the algorithm meets the accuracy requirements of vehicular navigation.
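The satellite velocity and acceleration needed above come from differentiating the broadcast-ephemeris position model; the paper derives the analytic expressions, but the idea can be sketched with central differences applied to any position function (function names and step size are illustrative):

```python
def velocity_acceleration(pos, t, h=0.5):
    """Approximate velocity and acceleration of pos(t) -> (x, y, z) by
    central differences; exact for quadratic motion."""
    p0, pm, p1 = pos(t - h), pos(t), pos(t + h)
    vel = [(b - a) / (2.0 * h) for a, b in zip(p0, p1)]
    acc = [(b - 2.0 * m + a) / h ** 2 for a, m, b in zip(p0, pm, p1)]
    return vel, acc
```

The analytic route in the paper avoids the truncation error of differencing, which matters because ephemeris positions change by kilometres per second.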

5.
An in-bore Doppler velocity-measurement correction method   Total citations: 1 (self-citations: 1, others: 0)
To address the poor repeatability and low accuracy of in-bore velocity measurement in complex electromagnetic environments, a non-contact, high-accuracy in-bore velocity-measurement method is proposed. After the in-bore Doppler signal is extracted and denoised, the preprocessed signal is analyzed with S-transform time-frequency analysis; a spectral-peak search then yields the instantaneous frequency, and the energy-centroid correction method improves the accuracy of the result. Finally, the velocity is obtained from the Doppler principle. Both simulated and measured signals are used to verify the feasibility of the method. The results show that the method achieves a velocity-measurement accuracy of 0.0432% and smaller errors than the extremum and STFT methods.
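The spectral-peak-search step can be illustrated on a single frame with a plain DFT: the bin of maximum power gives the instantaneous-frequency estimate for that frame. The paper uses the S-transform plus energy-centroid correction; this sketch, with made-up sampling parameters, shows only the peak search.

```python
import math


def peak_freq(frame, fs):
    """Frequency (Hz) of the strongest DFT bin of one signal frame."""
    n = len(frame)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(frame))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(frame))
        p = re * re + im * im   # power in bin k
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n
```

The resolution of this raw peak is fs/n per bin, which is why the paper adds energy-centroid correction to interpolate between bins.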

6.
Time synchronization is a key technology in underwater sensor networks. Because underwater acoustic communication suffers long propagation delays and Doppler shifts, terrestrial time-synchronization algorithms designed for radio communication cannot be applied directly underwater. Based on the principle of Doppler velocity measurement and the mobility of underwater nodes, a new time-synchronization algorithm, CD-Sync, is proposed. A clustering model is used to select suitable cluster-head nodes, which perform intra-cluster synchronization with surface beacon nodes; during the synchronization process, ...

7.
A new method has been developed for calibration of the depolarization measurement of an atmospheric lidar. The technique requires a simple polarizer, and can be performed without interfering with the measurement set-up. The theoretical background of the procedure will be given and results of this calibration procedure on the tropospheric stratospheric aerosol lidar in McMurdo will be presented.

8.
9.
In this paper we present an inversion algorithm for ill-posed problems arising in atmospheric remote sensing. The proposed method is an iterative Runge-Kutta-type regularization method. Such methods are better known for solving differential equations; we adapt them to solving inverse ill-posed problems. The numerical performance of the algorithm is studied by means of simulations concerning the retrieval of aerosol particle size distributions from lidar observations.

10.
We develop and validate an automated approach to determine canopy height, an important metric for global biomass assessments, from micro-pulse photon-counting lidar data collected over forested ecosystems. Such a lidar system is planned to be launched aboard the National Aeronautics and Space Administration’s follow-on Ice, Cloud and land Elevation Satellite mission (ICESat-2) in 2017. For algorithm development purposes in preparation for the mission, the ICESat-2 project team produced simulated ICESat-2 data sets from airborne observations of a commercial micro-pulse lidar instrument (developed by Sigma Space Corporation) over two forests in the eastern USA. The technique derived in this article is based on a multi-step mathematical and statistical signal-extraction process which is applied to the simulated ICESat-2 data set. First, ground and canopy surfaces are approximately extracted using statistical information derived from the histogram of elevations for accumulated photons in 100 footprints. Second, a signal probability metric is generated to help identify the location of ground, canopy-top, and volume-scattered photons. According to the signal probability metric, the ground surface is recovered by locating the lowermost high-photon-density clusters in each simulated ICESat-2 footprint. Thereafter, the canopy surface is retrieved by finding the elevation at which the 95th percentile of the above-ground photons lies. The remaining noise is reduced by cubic spline interpolation in an iterative manner. We validate the results of the analysis against the full-resolution airborne photon-counting lidar data, digital terrain models (DTMs), and canopy height models (CHMs) for the study areas. With ground-surface residuals ranging from 0.2 to 0.5 m and canopy-height residuals ranging from 1.6 to 2.2 m, our results indicate that the algorithm performs very well over forested ecosystems with canopy closure of as much as 80%.
Given the method’s success in the challenging case of canopy height determination, it is readily applicable to retrieval of land ice and sea ice surfaces from micro-pulse lidar altimeter data. These results will advance data processing and analysis methods to help maximize the ability of the ICESat-2 mission to meet its science objectives.
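The canopy-top step above, finding the 95th percentile of above-ground photons, can be sketched as a nearest-rank percentile over photon heights. The 0.5 m noise buffer and the percentile convention are assumptions for illustration, not the authors' exact implementation.

```python
def canopy_height(photon_elev, ground_elev, pct=95, noise_buffer=0.5):
    """Height of the pct-th percentile photon above the recovered ground."""
    heights = sorted(e - ground_elev for e in photon_elev
                     if e - ground_elev > noise_buffer)
    idx = max(0, round(pct / 100.0 * len(heights)) - 1)  # nearest-rank percentile
    return heights[idx]
```

Using a high percentile rather than the maximum above-ground photon makes the estimate robust to isolated solar-background noise photons above the canopy.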

11.
Winds play a very important role in the dynamics of the lower atmosphere, and there is a need to obtain the vertical distribution of winds at high spatio-temporal resolution for various observational and modelling applications. Profiles of wind speed and direction obtained at two tropical Indian stations using a Doppler wind lidar during the Indian southwest monsoon season were inter-compared with those obtained simultaneously from GPS upper-air sounding (radiosonde). Mean wind speeds at Mahbubnagar (16.73° N, 77.98° E, 445 m above mean sea level) compare well in magnitude over the entire height range from 100 m to 2000 m. The mean difference in wind speed between the two techniques ranged from −0.81 m s⁻¹ to +0.41 m s⁻¹, and the standard deviation of wind-speed differences ranged between 1.03 m s⁻¹ and 1.95 m s⁻¹. Wind direction by both techniques compared well up to about 1200 m height and then deviated slightly at heights above, with a standard deviation in difference of 19°–48°. At Pune (18°32′ N, 73°51′ E, 559 m above mean sea level), wind speed by both techniques matched well throughout the altitude range, but with a constant difference of about 1 m s⁻¹. The root mean square deviation in wind speed ranged from 1.0 to 1.6 m s⁻¹ and that in wind direction from 20° to 45°. The bias and spread in both wind speed and direction for the two stations were computed and are discussed. The study shows that the inter-comparison of wind profiles obtained by the two independent techniques is very good under conditions of low wind speeds; the profiles show larger deviation when wind speeds are large, probably due to the drift of the radiosonde balloon away from the location.

12.
13.
Most of the common techniques in text retrieval are based on statistical analysis of terms (words or phrases). Statistical analysis of term frequency captures the importance of a term within a document only. Thus, to achieve a more accurate analysis, the underlying model should indicate terms that capture the semantics of text. In this case, the model can capture terms that represent the concepts of a sentence, which leads to discovering the topic of the document. In this paper, a new concept-based retrieval model is introduced. The proposed concept-based retrieval model consists of a conceptual ontological graph (COG) representation and a concept-based weighting scheme. The COG representation captures the semantic structure of each term within a sentence. All terms are then placed in the COG representation according to their contribution to the meaning of the sentence. The concept-based weighting analyzes terms at both the sentence and document levels, in contrast to the classical approach of analyzing terms at the document level only. The weighted terms are then ranked, and the top concepts are used to build a concept-based document index for text retrieval. The concept-based retrieval model can effectively discriminate between terms that are unimportant to the sentence semantics and terms that represent the concepts capturing the sentence meaning. Experiments using the proposed concept-based retrieval model on different data sets in text retrieval were conducted. The experiments compare traditional approaches with the concept-based retrieval model obtained by combining the conceptual ontological graph and the concept-based weighting scheme. The evaluation is performed using three quality measures: the preference measure (bpref), precision at 10 documents retrieved (P(10)), and mean uninterpolated average precision (MAP). All three quality measures improve when the newly developed concept-based retrieval model is used, confirming that the model enhances the quality of text retrieval.
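A toy version of weighting a term at both the sentence and the document level, in the spirit of the scheme described above (the combination formula here is invented for illustration; the paper's COG-based weighting is considerably richer):

```python
def concept_weight(term, sentences):
    """Combine document-level and strongest sentence-level term frequency.

    sentences: list of token lists making up one document.
    """
    doc_tokens = [t for s in sentences for t in s]
    tf_doc = doc_tokens.count(term) / len(doc_tokens)
    tf_sent = max(s.count(term) / len(s) for s in sentences)
    return tf_doc * (1.0 + tf_sent)  # boost terms that dominate some sentence
```

The point of the sketch is only the two-level analysis: a term frequent in the document *and* salient within at least one sentence outranks a term that is merely frequent overall.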

14.
Through an extensive literature review, the results of previous lidar investigations in Australia, Canada, Sweden, the U.S.A. and West Germany have been evaluated. Based on these findings, estimates are given for the anticipated depth capability, measurement accuracy, and operational constraints of a laser system in U.K. waters. Consideration is also given to the possibility of deploying a depth-sounding lidar for non-bathymetric purposes such as depth-resolved turbidity mapping.

15.
Ranking functions are an important component of information retrieval systems. Recently there has been a surge of research in the field of “learning to rank”, which aims at using labeled training data and machine learning algorithms to construct reliable ranking functions. Machine learning methods such as neural networks, support vector machines, and least squares have been successfully applied to ranking problems, and some are already being deployed in commercial search engines.

Despite these successes, most algorithms to date construct ranking functions in a supervised learning setting, which assumes that relevance labels are provided by human annotators prior to training the ranking function. Such methods may perform poorly when human relevance judgments are not available for a wide range of queries. In this paper, we examine whether additional unlabeled data, which is easy to obtain, can be used to improve supervised algorithms. In particular, we investigate the transductive setting, where the unlabeled data is equivalent to the test data.

We propose a simple yet flexible transductive meta-algorithm: the key idea is to adapt the training procedure to each test list after observing the documents that need to be ranked. We investigate two instantiations of this general framework: the Feature Generation approach is based on discovering more salient features from the unlabeled test data and training a ranker on this test-dependent feature set. The Importance Weighting approach is based on ideas in the domain adaptation literature, and works by re-weighting the training data to match the statistics of each test list. We demonstrate that both approaches improve over supervised algorithms on the TREC and OHSUMED tasks from the LETOR dataset.
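The importance-weighting instantiation can be sketched as a one-feature density-ratio estimate: each training value is re-weighted by how common its histogram bin is in the test list relative to the training set. The binning scheme is a deliberate simplification assumed here for illustration.

```python
def importance_weights(train, test, nbins=5):
    """Weight each training value by the test/train histogram density ratio."""
    lo, hi = min(train + test), max(train + test)
    span = (hi - lo) or 1.0

    def bin_of(x):
        return min(nbins - 1, int((x - lo) / span * nbins))

    tr, te = [0] * nbins, [0] * nbins
    for x in train:
        tr[bin_of(x)] += 1
    for x in test:
        te[bin_of(x)] += 1
    # tr[bin_of(x)] >= 1 for every training x, so the ratio is well defined
    return [(te[bin_of(x)] / len(test)) / (tr[bin_of(x)] / len(train))
            for x in train]
```

Training points that look like the current test list get weights above 1 and dominate the re-trained ranker; points unlike the test list are down-weighted.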

16.
A conception of a matrix that is a rectangular cellular array is provided. Applications of the matrix to data sorting, retrieval, and inverted-file implementation are presented. Retrieval time does not depend on the size of the set being searched, and sorting time is directly proportional to the number of ordered elements. Simulation of the matrix is reported, and remarks on the possibilities of relational database implementation in the matrix are included.

17.
Content-based retrieval for trademark registration   Total citations: 1 (self-citations: 0, others: 1)
With an ever increasing number of registered trademarks, it is becoming increasingly difficult for trademark offices to ensure the uniqueness of all registered trademarks. Trademarks are complex patterns consisting of various image and text patterns, called the device mark and the word-in-mark respectively. Owing to the diversity and complexity of the image patterns occurring in trademarks, and to multi-lingual word-in-marks, no very successful operational computerized trademark registration system exists. We have tackled the key technical issues: multiple feature-extraction methods to capture shape, similarity of multi-lingual word-in-marks, matching of device-mark interpretations using a fuzzy thesaurus, and fusion of multiple feature measures for conflicting-trademark retrieval. A prototype System for Trademark Archival and Registration (STAR) has been developed. An initial test run was conducted on 3000 trademarks, and the results satisfied trademark officers and specialists.
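The fusion step above can be as simple as a normalized weighted sum of per-feature similarity scores; the weights below are illustrative and not the values used in STAR.

```python
def fuse_scores(scores, weights):
    """Normalized weighted sum of per-feature similarity scores in [0, 1]."""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

For example, a shape-similarity score of 0.8 weighted 3 and a word-in-mark score of 0.4 weighted 1 fuse to a single retrieval score of 0.7.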

18.
Pattern Recognition, 2002, 35(8): 1705–1722
In today's computer networks, the number of digital images is increasing rapidly and enormously. However, images may be distorted by different types of processing, such as histogram equalization, quantization, smoothing, compression, noise corruption, geometric transformation, and changes of illumination. It is imperative to develop an effective method to retrieve the original images from very large image databases, because only the original images are stored for economy. In this study, a new image-normalization method is first proposed to solve the problem of varying illumination. A complementary retrieval method is then proposed to resist various types of processing. According to the type of distortion, all processing is classified into three distortion categories: low frequency, high frequency, and geometric transformation. Different features are resistant to different distortion categories; however, the distortion by which a query image is corrupted is usually unknown. Hence, a complementary analysis is proposed to determine the distortion category of each query image, and the feature resistant to the estimated category is used to retrieve the desired original image. As a result, an effective retrieval method is achieved. The feasibility and effectiveness of the method are demonstrated by experimental results.

19.
An effective information retrieval model   Total citations: 1 (self-citations: 0, others: 1)
An information retrieval model based on user query behavior and query expansion is proposed, and its design rationale, algorithms, and key implementation techniques are presented. Experimental results show that the model effectively improves information retrieval performance and has high practical value and broad application prospects.

20.
Citation-based retrieval for scholarly publications   Total citations: 1 (self-citations: 0, others: 1)
Scholarly publications are available online and in digital libraries, but existing search engines are mostly ineffective for these publications. The proposed publication retrieval system is based on Kohonen's self-organizing map and offers fast retrieval speeds and high precision in terms of relevance.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号