1.
We consider a restricted version of the art gallery problem within simple polygons in which the guards are required to lie on a given one-dimensional object, a watchman route. We call this problem the vision point problem. We prove the following:
• The original art gallery problem is NP-hard for the very restricted class of street polygons.
• The vision point problem can be solved efficiently for the class of street polygons.
• A linear-time algorithm for the vision point problem exists for the subclass of street polygons called straight walkable polygons.
Received June 6, 1996; revised September 12, 1997.
2.
Detecting the edges of a polygon visible from a point is a fundamental computation in computational geometry; this paper proposes an accelerated algorithm for it. The polygon is first decomposed into convex pieces, so that the visible edges within each convex piece can be computed quickly; a grid structure is then used to process the pieces from near to far, avoiding occluded convex pieces entirely. The algorithm parallelizes naturally on the grid structure and handles polygons with and without holes uniformly. Preprocessing takes O(n) time and a low O(n) space, and a query adapts between O(log n) and O(n) time, where n is the number of polygon edges.
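The building block the decomposition exploits is that visibility against a convex piece is cheap: from an outside point, a whole edge of a convex polygon is visible exactly when the point lies on the edge's outer side, one orientation test per edge. A minimal sketch of that test (function names are mine, not from the paper):

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def visible_edges_convex(poly, p):
    """Edges of a counter-clockwise convex polygon visible from point p,
    as indices i of edges (poly[i], poly[(i+1) % n]).  From an outside
    point, edge a->b is visible exactly when p lies strictly on its
    outer (right) side; from an inside point, every edge is visible."""
    n = len(poly)
    outer = [i for i in range(n) if cross(poly[i], poly[(i + 1) % n], p) < 0]
    return outer if outer else list(range(n))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(visible_edges_convex(square, (3, 1)))   # [1]: only the right edge
print(visible_edges_convex(square, (1, 1)))   # [0, 1, 2, 3]: p is inside
```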
3.
A building and object recognition method combining interest points and edges
A building and object recognition method combining multiple image features is proposed. Images are represented by Harris-Laplace interest points described with the scale-invariant feature descriptor, together with edge features described by an edge color histogram. Edges and interest points carry important image information, and the two kinds of features are extracted simultaneously: edge features are obtained directly from the Harris detector, while Harris interest points are detected at multiple scales and the Harris-Laplace interest points among them are selected with the Laplacian. During recognition, the similarity weights of the interest-point and edge features are adapted according to the number of interest points. Compared with methods of the same kind, the approach achieves a higher recognition rate and performs well under viewpoint and illumination changes.
4.
We present the first sublinear-time algorithms for computing order statistics in the Farey sequence and for the related problem of ranking. Our algorithms achieve running times of nearly O(n^{2/3}), a significant improvement over the previous algorithms taking time O(n).
We also initiate the study of a more general problem: counting primitive lattice points inside planar shapes. For rational polygons containing the origin, we obtain a running time proportional to D^{6/7}, where D is the diameter of the polygon.
This work represents a merging of [19] and [21], with additional extensions.
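For orientation, the ranking problem the paper speeds up can be solved straightforwardly in O(n log n) time by inclusion-exclusion over denominators; the sketch below (naming is mine) is this baseline, not the paper's near-O(n^{2/3}) algorithm.

```python
from fractions import Fraction
from math import floor

def farey_rank(x, n):
    """Rank of x in the Farey sequence F_n: the number of reduced
    fractions p/q with 0 < p/q <= x and 1 <= q <= n.
    Pass x as a Fraction to keep the floors exact."""
    # count[q]: fractions p/q <= x with p >= 1, reduced or not.
    count = [0] * (n + 1)
    for q in range(1, n + 1):
        count[q] = floor(q * x)
    # Sieve away non-reduced fractions: afterwards count[q] holds only
    # the reduced fractions with denominator exactly q.
    for q in range(1, n + 1):
        for m in range(2 * q, n + 1, q):
            count[m] -= count[q]
    return sum(count[1:n + 1])

print(farey_rank(Fraction(1, 2), 5))  # 5: the fractions 1/5, 1/4, 1/3, 2/5, 1/2
```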
5.
Existing facial expression recognition algorithms are easily affected by irrelevant factors such as image background and non-expression content. In addition, the small inter-class differences of some expressions (for example fear, anger, and sadness) limit performance. To address these two problems, an expression recognition algorithm is proposed that fuses facial landmarks with a weight-allocating residual network. Facial landmarks are used to crop the largest expression region, eliminating interference from background and non-expression content; the preprocessed expression image is fed into a deep residual network, and a weight-allocation mechanism infers attention weights along the channel and spatial dimensions, assigning different weights to different regions and thereby guiding the residual network to learn local features discriminative for expressions. The algorithm reaches recognition accuracies of 74.14% on FER2013 and 98.99% on CK+, clearly improving accuracy on expressions with small inter-class differences such as anger, sadness, and fear.
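The abstract does not give the exact form of the weight-allocation mechanism; a common way to realize channel-plus-spatial attention inside a residual network is a CBAM-style module. The sketch below is that generic pattern under my own naming, not the paper's implementation:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Infer attention weights along channel and spatial dimensions
    (a CBAM-style sketch; the paper's exact mechanism may differ)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: pool spatially, re-weight each channel.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # (b, c) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # (b, c) from max pooling
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)  # (b, 2, h, w)
        return x * torch.sigmoid(self.conv(stats))
```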
6.
Luigi Lavazza, Empirical Software Engineering, 2014, 19(4):1075-1110
Since the introduction of COSMIC Function Points, the problem of converting historical data measured using traditional Function Points into COSMIC measures has arisen. To this end, several researchers have investigated the possibility of identifying the relationship between the two measures by means of statistical methods. This paper aims at improving the statistical convertibility of Function Points into COSMIC Function Points by improving previous work with respect to aspects (outlier identification and exclusion, model non-linearity, applicability conditions, etc.) that were not adequately considered up to now, with the purpose of confirming, correcting or enhancing current models. Available datasets including software sizes measured both in Function Points and COSMIC Function Points were analyzed. The role of outliers was studied; non-linear models and piecewise linear models were derived, in addition to linear models. Models based on transactions only were also derived. Confidence intervals were used throughout the paper to assess the values of the models' parameters. The dependence of the ratio between Function Points and COSMIC Function Points on size was studied. The union of all the available datasets was also studied, to overcome problems due to the relatively small size of the datasets. It is shown that outliers do affect the linear models, typically increasing the slope of the regression lines; however, this happens mostly in small datasets: in the union of the available datasets there is no outlier that can influence the model. Conditions for the applicability of the statistical conversion are identified, in terms of relationships that must hold among the basic functional components of Function Point measures. Non-linear models are shown to represent the relationship between the two measures well, since the ratio between COSMIC Function Points and Function Points appears to increase with size. In general, it is confirmed that convertibility can be modeled by different types of models. This is a problem for practitioners, who have to choose one of these models; nonetheless, a few practical suggestions can be derived from the results reported here. The model assuming that one FP is equal to one CFP causes the biggest conversion errors observed and is not generally supported. All the considered datasets are characterized by a ratio of transaction to data functions that is fairly constant throughout each dataset: this can be regarded as a condition for the applicability of current models; under this condition, non-linear (log-log) models perform reasonably well. The fact that in Function Point Analysis the size of a process is limited, while it is not in the COSMIC method, seems to be the cause of the non-linearity of the relationship between the two measures. In general, it appears that the conversion can be successfully based on transaction functions alone, without losing precision.
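As an illustration of the kind of model fitting the paper evaluates, a log-log (non-linear) FP-to-CFP conversion model can be fitted with ordinary least squares in a few lines. The data values below are placeholders, not the paper's datasets:

```python
import numpy as np

# Placeholder measurements (FP, CFP) -- illustrative only.
fp  = np.array([120.0, 250.0, 400.0, 640.0, 900.0])
cfp = np.array([110.0, 260.0, 450.0, 780.0, 1150.0])

# Fit log(CFP) = a + b*log(FP); equivalently CFP = exp(a) * FP**b.
b, a = np.polyfit(np.log(fp), np.log(cfp), 1)

def fp_to_cfp(x):
    """Convert an FP size to an estimated CFP size with the fitted model."""
    return np.exp(a) * x ** b

print(f"CFP ~ {np.exp(a):.3f} * FP^{b:.3f}; fp_to_cfp(500) = {fp_to_cfp(500):.1f}")
```

A fitted exponent b > 1 would reproduce the paper's observation that CFP grows more than proportionally with FP.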
7.
Past research on art gallery problems has concentrated almost exclusively on bounds on the numbers of guards needed in the worst case in various settings. On the complexity side, fewer results are available. For instance, it has long been known that placing a smallest number of guards for a given input polygon is NP-hard. In this paper we initiate the study of the approximability of several types of art gallery problems.
Motivated by a practical problem, we study the approximation properties of the three art gallery problems VERTEX GUARD, EDGE GUARD, and POINT GUARD. We prove that if the input polygon has no holes, there is a constant δ > 0 such that no polynomial-time algorithm can achieve an approximation ratio of 1+δ for each of these guard problems. We obtain these results by proposing gap-preserving reductions from 5-OCCURRENCE-MAX-3-SAT.
We also prove that if the input polygons are allowed to contain holes, then these problems cannot be approximated by a polynomial-time algorithm with ratio ((1-ε)/12) ln n for any ε > 0, unless NP ⊆ TIME(n^{O(log log n)}), where n is the number of vertices of the input polygon. We obtain these results by proposing gap-preserving reductions from the SET COVER problem.
We show that this inapproximability for the POINT GUARD problem for polygons with holes carries over to the problem of covering
a 2.5-dimensional terrain with a minimum number of guards that are placed at a certain fixed height above the terrain. The
same holds if the guards may be placed on the terrain itself.
Received September 22, 1999; revised December 5, 1999.
8.
We show that vertex guarding a monotone polygon is NP-hard and construct a constant-factor approximation algorithm for interior guarding monotone polygons. Using this algorithm we obtain an approximation algorithm for interior guarding rectilinear polygons that has an approximation factor independent of the number of vertices of the polygon. If the size of the smallest interior guard cover is OPT for a rectilinear polygon, our algorithm produces a guard set of size O(OPT^2).
9.
A new discrete 3D line and 3D polygon, called Supercover 3D line and Supercover 3D polygon, are introduced. Analytical definitions are provided. The Supercover 3D polygon is a tunnel-free plane segment defined by vertices and edges. An edge is a Supercover 3D line segment. Two different polygons can share a common edge and if they do, the union of both polygons is tunnel-free. This definition of discrete polygons has the "most" properties in common with the continuous polygons. It is particularly interesting for modeling of discrete scenes, especially using tunnel-free discrete polyhedra. Algorithms for computing Supercover 3D Lines and Polygons are given and illustrated.
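The flavour of the analytical definitions is easiest to see one dimension down: in 2D, the supercover of a straight line consists of every pixel whose closed unit square the line meets, which collapses to a single inequality per pixel. A minimal 2D sketch of that idea (my naming; the paper's 3D line and polygon definitions are the analytic extension of this to voxels):

```python
def supercover_line_2d(a, b, c, x_range, y_range):
    """Pixels (i, j), viewed as closed unit squares centered at integer
    points, that the line a*x + b*y + c = 0 meets.  The square centered
    at (i, j) meets the line exactly when |a*i + b*j + c| <= (|a|+|b|)/2."""
    half_width = (abs(a) + abs(b)) / 2.0
    return [(i, j)
            for i in range(*x_range)
            for j in range(*y_range)
            if abs(a * i + b * j + c) <= half_width]

# Pixels of the supercover of x - 2*y = 0 inside an 8x8 window:
print(supercover_line_2d(1, -2, 0, (0, 8), (0, 8)))
```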
10.
This paper presents a prefixed tableaux calculus for Propositional Dynamic Logic with Converse based on a combination of different techniques such as prefixed tableaux for modal logics and model checkers for μ-calculus. We prove the correctness and completeness of the calculus and illustrate its features. We also discuss the transformation of the tableaux method (naively NEXPTIME) into an EXPTIME algorithm.
11.
Dempster–Shafer (D–S) evidence theory is a very efficient and widely used mathematical tool for fusing uncertain and imprecise information in decision making. The D–S combination rule is criticised by many researchers for giving illogical and counterintuitive results, especially when the pieces of evidence provided by the various experts are in a high degree of conflict. Various attempts have been made and several alternatives proposed to this rule. In this paper, a new alternative is proposed which considers the possibility of an error made by the experts while providing evidence, calculates that error, and incorporates it in the revised masses. The validity and efficiency of the proposed approach are demonstrated with numerous examples, and the results are compared with existing methods.
Highlights:
• An alternative method is proposed to handle conflicting evidence.
• An error in judgement made while gathering evidence is estimated and incorporated before combining evidence.
• The method is simple and gives better, more reasonable results than previous methods when evidence conflicts.
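For reference, the classical Dempster combination rule that the paper revises: masses of intersecting focal elements are multiplied, and the result is renormalized by the non-conflicting mass. A minimal sketch with focal elements as frozensets (naming is mine), ending on Zadeh's classic example of the counterintuitive behaviour the abstract mentions:

```python
def dempster_combine(m1, m2):
    """Classical Dempster rule: m(C) is the sum of m1(A)*m2(B) over all
    A intersect B = C, C nonempty, renormalized by 1 - K, where K is the
    total conflicting mass.  Mass functions map frozensets to masses."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence: rule undefined")
    return {c: w / (1.0 - conflict) for c, w in combined.items()}

# Zadeh's example: two experts almost fully disagree, yet the rule
# assigns all mass to the option both supported only weakly.
m1 = {frozenset("a"): 0.99, frozenset("b"): 0.01}
m2 = {frozenset("c"): 0.99, frozenset("b"): 0.01}
print(dempster_combine(m1, m2))   # {frozenset({'b'}): ~1.0}
```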
12.
Background
COSMIC Function Points and traditional Function Points (i.e., IFPUG Function Points and more recent variations of Function Points, such as NESMA and FISMA) are probably the best known and most widely used Functional Size Measurement methods. The relationship between the two kinds of Function Points still needs to be investigated. If traditional Function Points could be accurately converted into COSMIC Function Points and vice versa, then, by measuring one kind of Function Points, one would be able to obtain the other kind, and one might measure one or the other kind interchangeably. Several studies have been performed to evaluate whether a correlation or a conversion function between the two measures exists. Specifically, it has been suggested that the relationship between traditional Function Points and COSMIC Function Points may not be linear, i.e., the value of COSMIC Function Points seems to increase more than proportionally to an increase of traditional Function Points.
Objective
This paper aims at verifying this hypothesis using available datasets that collect both FP and CFP size measures.
Method
Rigorous statistical analysis techniques are used, specifically Piecewise Linear Regression, whose applicability conditions are systematically checked. The Piecewise Linear Regression curve is a series of interconnected segments; in this paper, we focus on curves composed of two segments. We also used Linear and Parabolic Regressions, to check if and to what extent Piecewise Linear Regression may provide an advantage over other regression techniques. We used two categories of regression techniques: Ordinary Least Squares regression, based on the usual minimization of the sum of squares of the residuals (equivalently, of the average squared residual), and Least Median of Squares regression, a robust technique based on the minimization of the median squared residual. Using a robust regression technique helps filter out the excessive influence of outliers.
Results
The analysis of the relationship between traditional Function Points and COSMIC Function Points based on the aforementioned data analysis techniques yields valid, significant models. However, different results are achieved for the various available datasets. In practice, we obtained statistically valid linear, piecewise linear, and non-linear conversion formulas for several datasets. In general, none of these is better than the others in a statistically significant manner.
Conclusions
Practitioners interested in the conversion of FP measures into CFP (or vice versa) cannot just pick a conversion model and be sure that it will yield the best results. All the regression models we tested provide good results with some datasets. In practice, all the models described in the paper, in particular both linear and non-linear ones, should be evaluated in order to identify the ones that are best suited for the specific dataset at hand.
13.
A Unified Gradient-Based Approach for Combining ASM into AAM
The Active Appearance Model (AAM) framework is a very useful method that can fit the shape and appearance model to the input image for various image analysis and synthesis problems. However, since the goal of the AAM fitting algorithm is to minimize the residual error between the model appearance and the input image, it often fails to converge accurately to the landmark points of the input image. To alleviate this weakness, we have combined Active Shape Models (ASM) into AAMs, in which ASMs try to find correct landmark points using the local profile model. Since the original objective function of the ASM search is not appropriate for combining these methods, we derive a gradient-based iterative method by modifying the objective function of the ASM search. Then, we propose a new fitting method that combines the objective functions of both ASM and AAM into a single objective function in a gradient-based optimization framework. Experimental results show that the proposed fitting method reduces the average fitting error when compared with existing fitting methods such as ASM, AAM, and Texture Constrained-ASM (TC-ASM) and significantly improves the performance of facial expression recognition.
14.
A planar polygon morphing method based on triangle decomposition and reconstruction
To handle morphing between fairly complex 2D shapes with different topological structures, a planar polygon morphing method based on triangle decomposition and reconstruction is proposed. The method decomposes a shape into multiple layers of triangles, recording the edge-length and angle information of each layer during decomposition; the details and frame of the intermediate polygons are then reconstructed by linearly interpolating the edge-length ratios and angles of each layer, combined with a rigid transformation, which achieves the morph. The method applies to polygons with any number of vertices and is therefore general. Experimental results show that it avoids the shrinkage problem in morphing sequences, prevents self-intersection even for complex elongated shapes, and produces natural morphs.
15.
Antonia Redondo Buitrago, Nexus Network Journal, 2007, 9(2):321-326
This article furthers the study of the Metallic Means and investigates the question of whether or not there exists a polygon corresponding to the Bronze Mean, as the pentagon and the octagon correspond respectively to the Golden and Silver Means.
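For context, the metallic means are the positive roots of x² = nx + 1; the golden and silver cases and their polygons are classical, and the bronze case (n = 3) is the one whose polygonal counterpart the article investigates:

```latex
\sigma_n = \frac{n+\sqrt{n^{2}+4}}{2}, \qquad \sigma_n^{2} = n\,\sigma_n + 1,
\]
\[
\sigma_1 = \frac{1+\sqrt{5}}{2}\ \text{(golden: pentagon)},\quad
\sigma_2 = 1+\sqrt{2}\ \text{(silver: octagon)},\quad
\sigma_3 = \frac{3+\sqrt{13}}{2}\ \text{(bronze)}.
```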
16.
郝向阳, 中国图象图形学报 (Journal of Image and Graphics), 1999, 4(6):481-485
In building a GIS, acquiring geographic information from existing topographic maps of various scales is indispensable work. Block-style residential areas are one of the main categories of map features on large-scale topographic maps. Working from binarized map images and exploiting the structural characteristics of hatched polygons, a series of image transformation techniques is used to achieve, for the first time, automatic recognition and extraction of block-style residential areas.
17.
In Computer-Aided Design applications there is often a need to compute the union, intersection and difference of two polygons or polyhedra. The sequential algorithms for this problem are characterized by poor speed of response and large computational complexity. In order to remove these defects, an algorithm amenable to implementation on a parallel architecture is proposed. The parallel architecture designed is a systolic one which forms a dedicated subsystem to perform set-theoretic operations on polygons. The improvement in speed gained by using the systolic array as compared to a uniprocessor has been evaluated using simulation techniques. Extensions of this architecture to perform the same operations on polyhedra are also discussed.
18.
Several recently introduced and studied planar curve evolution equations turn out to be iterative smoothing procedures that are invariant under the actions of the Euclidean and affine groups of continuous transformations. This paper discusses possible ways to extend these results to the projective group of transformations. Invariant polygon evolutions are also investigated.
19.
The layered representation of planar polygons (L-REP) is a triangle-based polygon representation model that is simple to construct, robust, and useful in many applications. This paper extends that work so it applies to extended planar polygons whose edges may be conic-curve segments, proposing a layered representation for extended planar polygons (CL-REP) together with a complete mathematical model and two typical construction algorithms. Several simple applications of the method are given, mainly Boolean operations and containment tests, showing that CL-REP solves these problems simply and effectively.
20.
A construction method for circumscribing polygons and convex hulls of image objects
After applying the Hough transform to a binary image, a set of boundary-corresponding points is selected in (ρ, θ) space; by computing the intersections of the image-space lines corresponding to these boundary points, a circumscribing polygon of the image object is constructed. By testing whether the lengths of projection intervals π/2 rad apart are equal, or whether the product of the interval lengths is minimal, the circumscribing square and the minimum-area circumscribing rectangle of the shape are obtained. An approximate convex hull of the shape is constructed with the same circumscribing-polygon method by increasing the number of edges. Experiments and theoretical analysis show that the algorithm is noise-resistant and widely applicable.
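The construction can be sketched directly: for each of k directions θ, the supporting line of the object is x·cosθ + y·sinθ = ρ(θ), with ρ(θ) the maximum projection of the object's pixels, and intersecting consecutive supporting lines yields a circumscribing k-gon that approaches the convex hull from outside as k grows. A minimal numpy sketch of that idea over a point set (naming is mine, not from the paper):

```python
import numpy as np

def circumscribing_polygon(points, k=16):
    """Vertices of a k-sided circumscribing polygon of a 2D point set,
    built by intersecting supporting lines x*cos(t) + y*sin(t) = rho(t)
    for k evenly spaced directions; as k grows, the polygon approximates
    the convex hull from the outside."""
    thetas = np.arange(k) * 2.0 * np.pi / k
    normals = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # (k, 2)
    rhos = (points @ normals.T).max(axis=0)                        # (k,)
    verts = []
    for i in range(k):
        j = (i + 1) % k
        A = np.array([normals[i], normals[j]])   # two adjacent line normals
        b = np.array([rhos[i], rhos[j]])
        verts.append(np.linalg.solve(A, b))      # intersection of the lines
    return np.array(verts)

pts = np.random.default_rng(0).normal(size=(200, 2))
print(circumscribing_polygon(pts, k=8))
```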