Similar Documents
20 similar documents found (search time: 437 ms)
1.
Point feature matching based on shape context descriptors   Cited 2 times in total (self-citations: 1; citations by others: 1)
冯晓伟  田裕鹏 《光电工程》2008,35(3):108-111
To address the problem of matching feature points between two images, this paper proposes a new point feature matching method based on shape context descriptors. The method first uses curvature scale space (CSS) corner detection to obtain the corners in both images together with the curves on which they lie. It then computes a shape context descriptor for every corner in each image and scores descriptor pairs with the chi-square (χ²) test statistic; if the resulting matching score exceeds a threshold, the initial match is accepted. Finally, for the initially matched pairs, a semi-local constraint completes the final matching between the two point sets. Experimental results show that the proposed algorithm achieves high point feature matching accuracy.
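For illustration, the χ² acceptance test described above can be sketched in a few lines. The function names, the greedy pairing, and the threshold value are illustrative assumptions, not details from the paper:

```python
def chi2_cost(h1, h2):
    """Chi-square statistic between two normalized shape-context histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def initial_matches(descs1, descs2, threshold=0.25):
    """Greedy initial matching: pair corner i with its lowest-cost candidate j
    and accept the pair only if the chi-square cost is below the threshold."""
    matches = []
    for i, d1 in enumerate(descs1):
        cost, j = min((chi2_cost(d1, d2), j) for j, d2 in enumerate(descs2))
        if cost < threshold:
            matches.append((i, j, cost))
    return matches
```

The semi-local verification stage that follows the initial matching is omitted here.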

2.
A stereo matching method using neighborhood-difference feature templates   Cited 5 times in total (self-citations: 1; citations by others: 4)
顾征  苏显渝 《光电工程》2005,32(10):39-42
A stereo matching method based on neighborhood-difference feature templates is proposed. The method takes the pixel-neighborhood difference between two points a fixed distance apart in one image as a feature template, then computes the corresponding neighborhood difference for points the same distance apart on the same scan line in the other image; the deviation from the feature template serves as the matching criterion, and the point with the smallest deviation is taken as the match. Under the same conditions, the method runs three times faster than area-correlation matching while still producing an accurate, dense disparity map. Experiments show that the algorithm is simple in structure, easy to implement, able to handle complex scenes, and of practical value.
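A minimal sketch of the matching criterion described above, assuming images as lists of pixel rows; the neighborhood radius, gap, and SAD deviation measure are illustrative choices:

```python
def neighborhood(img, x, y, r=1):
    """Flattened (2r+1) x (2r+1) pixel neighborhood around (x, y)."""
    return [img[y + dy][x + dx] for dy in range(-r, r + 1) for dx in range(-r, r + 1)]

def diff_template(img, x, y, gap, r=1):
    """Feature template: element-wise difference of the neighborhoods of two
    points on the same row, separated by `gap` pixels."""
    a = neighborhood(img, x, y, r)
    b = neighborhood(img, x + gap, y, r)
    return [p - q for p, q in zip(a, b)]

def match_on_scanline(left, right, x, y, gap, search, r=1):
    """Return the disparity whose right-image difference template deviates
    least (sum of absolute differences) from the left-image template."""
    t = diff_template(left, x, y, gap, r)
    return min(range(search),
               key=lambda d: sum(abs(u - v) for u, v in
                                 zip(t, diff_template(right, x - d, y, gap, r))))
```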

3.
This paper discusses the laws governing illumination intensity and its distribution under a given light-source design, and on that basis analyzes approaches to optimizing the design of an automobile headlamp line source. Methodologically, we emphasize computer simulation supported by theoretical derivation, and propose a series of techniques to overcome the difficulties that arise when a continuous problem is solved by discrete means. In the basic approach, the line source is approximated by discrete point sources and the paraboloid is discretized into a mesh; starting from the rate of change of the normal vector, we partition the surface by curvature rather than by length alone, which greatly improves simulation accuracy. To handle point-line intersections under discretization, we introduce the important notion of a connected set, which resolves the multi-valuedness of the (screen point, source) → (paraboloid point) mapping and the multiple light paths created by subdividing the paraboloid. In the implementation, a breadth-first linear-complexity algorithm raises computational efficiency by a factor of nearly 5000; for multi-point sampling, a parallel connected-set algorithm eliminates the repeated computation of the serial version and speeds up the calculation by a factor of nearly 2000. In validating the model and analyzing the data, Part 1 focuses on convergence and shows that the model is stable against errors; Part 2 analyzes the reasonableness of the intensity distribution and, combined with the image analysis required by Part 3, concludes that the design specification in the problem meets practical needs, with near-optimal power consumption and a reasonable intensity distribution.

4.
Given a point and an expanding map on the unit interval, we consider the set of points for which the forward orbit under this map is bounded away from the given point. It is well-known that in many cases such sets have full Hausdorff dimension. We prove that such sets have a large intersection property, i.e. countable intersections of such sets also have full Hausdorff dimension. This result applies to a class of maps including multiplication by integers modulo 1 and x ↦ 1/x modulo 1. We prove that the same properties hold for multiplication modulo 1 by a dense set of non-integer numbers between 1 and 2.
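In the notation of the abstract (symbols mine), the set with the large intersection property can be written as:

```latex
\[
  B(x_0) \;=\; \bigl\{\, x \in [0,1] :\ \liminf_{n \to \infty}\, \lvert T^{n}x - x_0 \rvert > 0 \,\bigr\},
\]
where $T$ is the expanding map, e.g. $Tx = bx \bmod 1$ for an integer $b \ge 2$,
and $x_0$ is the given point. The result states that $\dim_H \bigcap_k B(x_0^{(k)})
= 1$ for any countable family of such sets.
```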

5.
By exploiting the meshless property of kernel-based collocation methods, we propose a fully automatic numerical recipe for solving interpolation/regression and boundary value problems adaptively. The proposed algorithm is built upon a least squares collocation formulation on quasi-random point sets with low discrepancy. A novel strategy is proposed to ensure that the fill distances of data points in the domain and on the boundary are of the same order of magnitude. To circumvent the potential problem of ill-conditioning due to extremely small separation distance in the point sets, we add an extra dimension to the data points for generating shape parameters, so that nearby kernels have distinctive shapes. This effectively eliminates the need for shape parameter identification. The resulting linear systems are then solved by a greedy trial space algorithm to improve the robustness of the algorithm. Numerical examples are provided to demonstrate the efficiency and accuracy of the proposed methods.
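The extra-dimension trick for separating nearby kernel centres can be sketched as follows. Using a one-dimensional Halton (van der Corput) sequence for the added coordinate is my illustrative choice, not the paper's construction:

```python
import math

def halton(i, base=2):
    """One-dimensional van der Corput / Halton low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def lift_points(pts):
    """Append a low-discrepancy extra coordinate so that even coincident
    2-D data points become well-separated kernel centres."""
    return [(x, y, halton(i + 1)) for i, (x, y) in enumerate(pts)]

def separation(pts):
    """Minimum pairwise distance of a point set."""
    return min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
```

After lifting, a distinct shape parameter can be read off each point's extra coordinate.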

6.
《Materials Letters》2004,58(3-4):507-512
The indentation cycle obtained from hardness testing of a material has two singular points. The first is the right end of the definition interval of the loading/unloading curves; it corresponds to the cycle tip and poses no difficulty for mathematical analysis. The second is the last contact point between the material and the indenter tip during withdrawal; it lies inside the interval of definition, more towards the left- or right-hand side depending on the degree of elasticity of the material. This second point is impractical for analytical modelling of the cycle, as the unloading curve loses there all mathematical properties of derivability and differentiability. This difficulty has encouraged empirical models built from experimental results, which, like any empirical laws, suffer from some lack of precision. It has also led to an intense focus on the unloading curve, to which the main nanomechanical and structural properties have been connected, sometimes overlooking the loading curve and the valuable information it can provide. In this article, we work out an analytical model that represents the two curves of the indentation cycle as accurately as possible. In this step-by-step modelling we use functional analysis and force the model curves to fit the interval of definition, the direction of concavity, and the material's energetic properties.

7.
A 3-D seismic survey is usually achieved by recording a parallel profile network. The 3-D data thus obtained are sampled and processed in a cubic grid for which the sampling requirements are generally derived from the usual 1-D viewpoint. The spectrum of 3-D seismic data has a support (the region of the Fourier space in which the spectrum is not zero) that can be approximated by a domain bounded by two cones. Considering the particular shape of this support, we use a 3-D sampling theory to obtain results applicable to the recording and processing of 3-D seismic data. This naturally leads to weaker sampling requirements than the 1-D viewpoint does. We define the hexagonal non-cubic sampling grid and the triangular non-cubic sampling grid and show that fewer sample points are needed to represent 3-D seismic data with the same degree of accuracy. Using the hexagonal non-cubic sampling grid, the maximum value of the spatial sampling interval along the profiles is 15.6% larger than that of the cubic sampling grid. We also point out that the triangular non-cubic sampling grid requires only half the number of sample points required by a cubic sampling grid.
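A quick consistency check of the quoted figures (my arithmetic, under the simplifying assumption of a circularly band-limited spectrum; the conical support considered in the paper gives a slightly different constant):

```python
import math

# For a circularly band-limited 2-D spectrum, hexagonal sampling permits an
# interval 2/sqrt(3) times the rectangular one, i.e. about 15.5% larger,
# close to the 15.6% quoted above for the cone-bounded support.
hex_gain = 2 / math.sqrt(3) - 1
print(f"hexagonal interval gain: {hex_gain:.1%}")

# The triangular grid's stated saving corresponds to halving the sample count.
cubic_samples = 1_000_000
triangular_samples = cubic_samples // 2
```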

8.
Coordinate measurement systems based on multi-beam laser-tracking interferometric length measurement offer significant advantages for large-scale metrology, and accurate calibration of the system parameters is the key to high-precision coordinate measurement. To overcome the drawbacks of conventional self-calibration methods, an improved self-calibration algorithm using reference lengths is proposed. The algorithm first sets fixed points on a stable base and generates, in the X, Y, and Z directions, reference lengths measured precisely by laser interferometry; these reference lengths are then used to construct the self-calibration objective function. Increasing the weight of the corresponding terms in the objective function further improves coordinate measurement accuracy. Simulations confirm the feasibility of the algorithm. An independent laser interferometer was used to verify the measurement accuracy over a large volume: with measurement points distributed 7.0-8.3 m from the origin of the system coordinate frame, the errors of both experimental runs fell within [-9.5 μm, 4.6 μm], showing that the proposed self-calibration algorithm significantly improves large-scale spatial coordinate measurement accuracy.

9.
We propose a fast surface-profiling algorithm based on white-light interferometry by use of sampling theory. We first provide a generalized sampling theorem that reconstructs the squared-envelope function of the white-light interferogram from sampled values of the interferogram and then propose the new algorithm based on the theorem. The algorithm extends the sampling interval to 1.425 μm when an optical filter with a center wavelength of 600 nm and a bandwidth of 60 nm is used. The sampling interval is 6-14 times wider than those used in conventional systems. The algorithm has been installed in a commercial system that achieved the world's fastest scanning speed of 80 μm/s. The height resolution of the system is of the order of 10 nm for a measurement range of greater than 100 μm.
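Two back-of-envelope consequences of the quoted numbers, under the assumption (mine, not stated in the abstract) of one camera frame per axial sample:

```python
# Scanning at 80 um/s with a 1.425 um sampling interval implies roughly
# 56 frames per second of camera acquisition.
interval_um = 1.425
speed_um_s = 80.0
frames_per_s = speed_um_s / interval_um
print(f"{frames_per_s:.0f} frames/s")

# Conventional intervals, 6-14 times finer, would need proportionally higher
# frame rates at the same scan speed (or a proportionally slower scan).
conventional_range_um = (interval_um / 14, interval_um / 6)
```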

10.
Z. Wang  M. Vo  H. Kieu  T. Pan 《Strain》2014,50(1):28-36
A challenging task that has hampered the fully automatic processing of the digital image correlation (DIC) technique is the initial guess when large deformation and rotation are present. In this paper, a robust scheme combining the concepts of a scale-invariant feature transform (SIFT) algorithm and an improved random sample consensus (iRANSAC) algorithm is employed to conduct an automated fast initial guess for the DIC technique. The scale-invariant feature transform algorithm can detect a certain number of matching points from two images even when the corresponding deformation and rotation are large or the images have periodic and identical patterns. After removing the wrong matches with the improved random sample consensus algorithm, the three pairs of closest, non-collinear matching points serve as the basis for the initial guess calculation. The validity of the technique is demonstrated by both computer simulation and real experiment.
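The final initial-guess step can be sketched as follows: given three non-collinear matched point pairs (assumed here to be already filtered, standing in for the SIFT/iRANSAC stages), an affine transform is solved by Cramer's rule. All names are illustrative:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def affine_from_three(src, dst):
    """Solve (x', y') = A (x, y) + t from three point pairs.
    Returns [[a11, a12, tx], [a21, a22, ty]]."""
    M = [[x, y, 1.0] for x, y in src]
    d = det3(M)
    if abs(d) < 1e-12:
        raise ValueError("matching points are collinear")
    rows = []
    for k in (0, 1):                 # solve separately for x' and y'
        b = [p[k] for p in dst]
        coeffs = []
        for col in range(3):         # Cramer's rule: replace column `col` by b
            Mc = [row[:] for row in M]
            for i in range(3):
                Mc[i][col] = b[i]
            coeffs.append(det3(Mc) / d)
        rows.append(coeffs)
    return rows
```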

11.
Moving-object detection in dynamic image sequences is an active topic in image processing, but the limited recognition range of such sequences constrains detection. To address this, this paper proposes a method that uses image stitching to extend the recognition range and then performs moving-object detection on the stitched result. Stitching uses the SURF image matching algorithm, and moving objects are detected by background subtraction; the experiments were carried out on the open-source Linux operating system with the OpenCV development library, which supplies a large collection of image processing algorithms and functions. Experiments on images captured at different resolutions and viewing angles show that the method satisfactorily extends the recognition range while clearly detecting the relevant information about moving objects. The proposed stitching-based extension of moving-object detection meets real-time requirements and increases image clarity; however, problems remain when there is relative motion between the camera and the scene, which will be the focus of future work.
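The background-subtraction step can be sketched in pure Python (in practice OpenCV routines would be used on the stitched panorama; this minimal mask function and its threshold are illustrative):

```python
def foreground_mask(background, frame, thresh=25):
    """Binary foreground mask: 1 where the absolute difference between the
    current frame and the background model exceeds the threshold."""
    return [[1 if abs(f - b) > thresh else 0 for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

Connected regions of the mask would then be grouped into moving-object candidates.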

12.
The rise of intelligent technological devices (ITDs)—wearables and insideables—provides the possibility of enhancing human capabilities and skills. This study contributes to the literature on the impact of ethical judgements on the acceptance of ITDs by using a multidimensional ethical scale (MES) proposed by Shwayer and Sennetti. The novelty of this study resides in using fuzzy set qualitative comparative analysis (fsQCA) instead of correlational methods to explain human behaviour (in this case, attitudes towards ITDs) from an ethical perspective. fsQCA evaluates the influence of ethical variables on the intention to use ITDs (and the non-use of these technologies). Positive ethical evaluations of technology do not always ensure ITD acceptance—unfavourable ethical perceptions may lead to its rejection. We find that for wearables: (1) positive perceptions from a utilitarian perspective are key in explaining their acceptance. Likewise, we identify configurations leading to acceptance where positive judgements on moral equity, egoism and contractualism are needed. Surprisingly, only the relativism dimension participates in configurations that cause acceptance when it is negated; (2) We found that a single unfavourable perception from a contractualism or relativism perspective causes non-use. Likewise, we found that coupling of negative judgements on moral equity, utilitarianism and egoism dimensions also produce resistance to wearables. For insideables, we notice that: (1) an MES has weak explanatory power for the intention to use ITDs but is effective in understanding resistance to use; (2) A negative perception of any ethical dimension leads to resistance towards insideables.

13.
蔡鹏飞  李扬波  段湘煜  孙挺 《包装工程》2017,38(19):206-212
Objective: Current image matching algorithms mainly measure extracted feature attribute vectors and match points by the largest correlation coefficient, which yields many false matches and large matching errors. Methods: An image matching algorithm combining regional gray-level distribution with a similarity-judgement strategy is proposed. First, the Förstner operator extracts feature points; with each feature point as the centre, a polar coordinate system determines the point's dominant orientation, and the gray-level features of the point's neighborhood generate a low-dimensional descriptor. A normalized cross-correlation (NCC) function is then introduced to evaluate the similarity between feature points, and a bidirectional matching rule over rectangular windows completes the matching, improving matching accuracy and algorithm robustness. Finally, exploiting the similarity of the triangles formed by correctly matched points, a similarity-judgement strategy removes false matches to refine matching precision. Results: Experiments show that, compared with current image matching techniques, the proposed algorithm achieves higher matching precision and efficiency and effectively reduces the false-match rate of feature points. Conclusion: The proposed image matching technique achieves high registration precision and has application value in areas such as image forgery detection and package barcode recognition.
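The NCC scoring and the bidirectional matching rule can be sketched as follows; the helper names and the acceptance threshold are illustrative, and patches are flattened gray-level vectors:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patch vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    denom = math.sqrt(sum(x * x for x in da) * sum(x * x for x in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0

def mutual_matches(patches1, patches2, thresh=0.8):
    """Bidirectional rule: keep (i, j) only if each patch is the other's best
    match and their NCC exceeds the acceptance threshold."""
    best12 = [max(range(len(patches2)), key=lambda j: ncc(p, patches2[j]))
              for p in patches1]
    best21 = [max(range(len(patches1)), key=lambda i: ncc(patches1[i], q))
              for q in patches2]
    return [(i, j) for i, j in enumerate(best12)
            if best21[j] == i and ncc(patches1[i], patches2[j]) >= thresh]
```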

14.
We propose a motion estimation system that uses stereo image pairs as the input data. To perform experimental work, we also obtain a sequence of outdoor stereo images taken by two metric cameras. The system consists of four main stages: (1) determination of point correspondences on the stereo images, (2) correction of distortions in image coordinates, (3) derivation of 3D point coordinates from 2D correspondences, and (4) estimation of motion parameters based on 3D point correspondences. For the first stage, we use a four-way matching algorithm to obtain matched points on two stereo image pairs at two consecutive time instants (ti and ti+1). Since the input data are stereo images taken by cameras, they contain two types of distortion: (i) film distortion and (ii) lens distortion. Both must be corrected before any processing can be applied to the matched points. To accomplish this, we use (i) a bilinear transform for film distortion correction and (ii) lens formulas for lens distortion correction. After correcting the distortions, the results are 2D coordinates of each matched point from which 3D coordinates can be derived. However, due to data noise, the calculated 3D coordinates do not usually represent a consistent rigid structure suitable for motion estimation; we therefore suggest a procedure to select good 3D point sets as the input for motion estimation. The procedure exploits two constraints: rigidity between different time instants and uniform point distribution across the object in the image. For the last stage, we use an algorithm to estimate the motion parameters. We also wish to know the effect of quantization error on the estimated results; therefore, an error analysis based on quantization error is performed on the estimated motion parameters. To test our system, eight sets of stereo image pairs were extracted from an outdoor stereo image sequence and used as the input data. The experimental results indicate that the proposed system provides reasonable estimated motion parameters.
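The bilinear film-distortion correction of stage (2) can be sketched under the simplifying assumption (mine, for illustration) that the four fiducials sit at the corners of a unit square in measured coordinates:

```python
def bilinear_correction(corners):
    """Given the true coordinates of the four fiducials located at
    (0,0), (1,0), (0,1), (1,1) in measured space, return a function that
    maps a measured point to its distortion-corrected coordinates via
    x' = a0 + a1*x + a2*y + a3*x*y (and likewise for y')."""
    def coeffs(k):
        t00, t10, t01, t11 = (c[k] for c in corners)
        return t00, t10 - t00, t01 - t00, t11 - t10 - t01 + t00
    cx, cy = coeffs(0), coeffs(1)
    def correct(x, y):
        return (cx[0] + cx[1] * x + cx[2] * y + cx[3] * x * y,
                cy[0] + cy[1] * x + cy[2] * y + cy[3] * x * y)
    return correct
```

For fiducials at arbitrary measured positions, the same four coefficients per axis would be obtained by solving a 4x4 linear system instead.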

15.
Takane  Yoshio 《Behaviormetrika》1996,23(2):153-167

An item response model, similar to that in test theory, was proposed for multiple-choice questionnaire data. In this model both subjects and item categories are represented as points in a multidimensional Euclidean space. The probability of a particular subject choosing a particular item category is stated as a decreasing function of the distance between the subject point and the item category point. The subject point is assumed to follow a certain distribution, and is then integrated out to derive marginal probabilities of response patterns. A marginal maximum likelihood (MML) method was developed to estimate coordinates of the item category points as well as distributional properties of the subject point. Bock and Aitkin's EM algorithm was adapted to the MML estimation of the proposed model. Examples were given to illustrate the method, which we call MAXMC.


16.
Quantitative expert judgements are used in reliability assessments to inform critically important decisions. Structured elicitation protocols have been advocated to improve expert judgements, yet their application in reliability is challenged by a lack of examples or evidence that they improve judgements. This paper aims to overcome these barriers. We present a case study where two world-leading protocols, the IDEA protocol and the Classical Model, were combined and applied by the Australian Department of Defence for a reliability assessment. We assess the practicality of the methods and the extent to which they improve judgements. The average expert was extremely overconfident, with 90% credible intervals containing the true realisation 36% of the time. However, steps contained in the protocols substantially improved judgements. In particular, an equal weighted aggregation of individual judgements and the inclusion of a discussion phase and revised estimate helped to improve calibration, statistical accuracy, and the Classical Model score. Further improvements in precision and information were made via performance weighted aggregation. This paper provides useful insights into the application of structured elicitation protocols for reliability and the extent to which judgements are improved. The findings raise concerns about existing practices for utilising experts in reliability assessments and suggest greater adoption of structured protocols is warranted. We encourage the reliability community to further develop examples and insights.

17.
Given its importance in parametrizing eddies, we consider the orientation of eddy flux of potential vorticity (PV) in geostrophic turbulence. We take two different points of view, a classical ensemble- or time-average point of view and a second scale decomposition point of view. A net alignment of the eddy flux of PV with the appropriate mean gradient or the large-scale gradient of PV is required. However, we find this alignment to be very weak. A key finding of our study is that in the scale decomposition approach, there is a strong correlation between the eddy flux and a nonlinear combination of resolved gradients. This strong correlation is absent in the classical decomposition. This finding points to a new model to parametrize the effects of eddies in global ocean circulation.

18.
Patent mapping is an important method for analyzing technological patterns both for scientific research and strategic tasks in companies. In this paper we focus on a specific type of technological pattern, namely the analysis of patents' positions in relation to predefined positions of application fields. For this purpose we use an anchoring approach. We apply semantic patent measurement and discuss RadViz as a powerful method to visualize the measurement's results and to provide insightful motion patterns for monitoring technology change. Moreover, we present an algorithm to define so-called anchor points as high-dimensional reference points by using textual elements of patents. Using the example of carbon fiber reinforcements, we demonstrate the usefulness of our approach. Thus, our approach enables academics to analyze important types of technological patterns, such as convergence or divergence, by means of a new instrument and gives practitioners, such as the R&D management of companies, the opportunity to build reliable strategic business decision support.

19.
Wolff's law states that bone remodels/grows so that trabecular orientation tends to align with the principal stress directions, continually adapting to its mechanical environment. Based on Wolff's law, a new criterion method for topology optimization is established. Its basic ideas are: (1) the structure to be optimized is regarded as a piece of bone growing according to Wolff's law, and the bone remodeling process serves as the search for the optimal topology of a 3-D continuum structure; (2) a fabric tensor describes the elastic constitutive behaviour of the orthotropic material; (3) the remodeling law is the update rule for the material in the structure. By introducing a reference strain interval, the material update rule can be interpreted as follows: if the absolute value of the principal strain at a point in the design domain lies outside this interval, the fabric tensor at that point changes; otherwise it does not, and the point is in a state of remodeling equilibrium; (4) topology optimization terminates when every point in the design domain is in remodeling equilibrium. An isotropic constitutive model, obtained by making the second-order fabric tensor proportional to the second-order unit tensor, is used to analyze 3-D structural topology optimization. Examples further verify the correctness and feasibility of the Wolff's-law-based continuum structural optimization method.
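The reference-strain update rule can be sketched as follows. The interval bounds, step size, and the crude strain = load / scale surrogate for the omitted finite-element re-analysis are all illustrative assumptions, not the paper's values:

```python
def update_point(scale, strain, low=1e-4, high=3e-4, step=0.05):
    """One remodeling step for the scalar multiplier of the isotropic fabric
    tensor at a point: grow when |principal strain| exceeds the reference
    interval, resorb when it falls below, hold in equilibrium otherwise."""
    e = abs(strain)
    if e > high:
        return scale * (1 + step)
    if e < low:
        return scale * (1 - step)
    return scale                      # remodeling equilibrium

def remodel(loads, scale0=1.0, iters=200):
    """Iterate until every point is in remodeling equilibrium, using the
    surrogate strain = load / scale in place of an FE re-analysis."""
    scales = [scale0] * len(loads)
    for _ in range(iters):
        new = [update_point(s, L / s) for s, L in zip(scales, loads)]
        if new == scales:             # all points in equilibrium: converged
            break
        scales = new
    return scales
```

Overloaded points stiffen until their surrogate strain re-enters the reference interval; underloaded points resorb toward zero stiffness, mimicking material removal in topology optimization.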

20.
This paper presents an algorithm that provides an order of magnitude gain in the computational performance of the numerical integration of the boundary integral equations for three dimensional analysis. Existing algorithms for numerical integration have strategically clustered integration sample points based on the relative proximity of the load points to the boundary element being integrated using element subdivision or element co-ordinate transformation. The emphasis in these techniques has been on minimizing the number of sample points required to obtain a given level of accuracy. The present algorithm, while closely following the spirit of these earlier approaches, employs a discrete number of sets of predetermined, customized, near-optimum, sample point quantities associated with the intrinsic boundary element. The ability created by this approach to reuse sample point geometric information of the actual element allows for the realization of substantive computational economy. This algorithm provides accurate and efficient numerical results both when load points are far from, and when they are on the boundary element being integrated. Numerical results are provided to demonstrate the substantial economy achieved through the use of the present algorithm.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号