Similar Articles
20 similar articles found (search time: 15 ms).
1.
A new iterative polyexponential curve stripping technique for the provision of initial pharmacokinetic parameter estimates has been developed. Such estimates are required for nonlinear least-squares curve fitting. In contrast to conventional curve stripping, this new technique does not make any assumption about the relative magnitudes of the exponential rate constants. Hence, the parameter estimates which it provides are free of the bias which may arise in conventional curve stripping. A BASIC program called JANA has been developed to implement the new curve stripping procedure.
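As a rough illustration of what conventional curve stripping does (the baseline this paper improves on), the sketch below fits a two-exponential model y(t) = A1·e^(-a1·t) + A2·e^(-a2·t) by fitting the slow phase to the tail of the data and the fast phase to the stripped residuals. The function names, the tail size, and the residual threshold are illustrative choices, not the JANA procedure itself.

```python
import math

def loglinear_fit(ts, ys):
    """Least-squares line through (t, ln y): returns (A, rate) for y ~ A*exp(-rate*t)."""
    n = len(ts)
    logs = [math.log(y) for y in ys]
    st, sl = sum(ts), sum(logs)
    stt = sum(t * t for t in ts)
    stl = sum(t * g for t, g in zip(ts, logs))
    slope = (n * stl - st * sl) / (n * stt - st * st)
    return math.exp((sl - slope * st) / n), -slope

def strip_two_exponentials(times, conc, n_tail=4):
    """Conventional curve stripping for y(t) = A1*e^(-a1*t) + A2*e^(-a2*t), a1 >> a2:
    fit the slow phase to the tail, subtract it, fit the fast phase to what remains."""
    A2, a2 = loglinear_fit(times[-n_tail:], conc[-n_tail:])
    # keep only residuals that clearly exceed the subtracted slow phase
    pts = [(t, y - A2 * math.exp(-a2 * t)) for t, y in zip(times, conc)]
    pts = [(t, r) for t, r in pts if r > 0.1 * A2 * math.exp(-a2 * t)]
    A1, a1 = loglinear_fit([t for t, _ in pts], [r for _, r in pts])
    return A1, a1, A2, a2
```

Note that the tail fit implicitly assumes the rate constants are well separated, which is exactly the assumption the iterative technique above removes.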

2.
In this paper, a simple, efficient, and parameter-free algorithm, DISCUR, is proposed to reconstruct curves from unorganized sample points. The proposed algorithm can reconstruct multiple simple curves that may be open, closed, and/or have sharp corners. The criteria for curve reconstruction are based on two observations concerning the human visual system: (1) the two closest neighbors tend to be connected, and (2) sample points tend to be connected into a smooth curve. To realize these observations, we use a neighborhood feature to connect nearest neighbors, and we present a statistical criterion to determine when two sample points should not be connected even if they are nearest neighbors. Finally, a necessary and sufficient condition is proposed for the sampling of curves so that they can be reconstructed by the present algorithm. Numerous examples show that the algorithm is effective.
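The first observation, that closest neighbors tend to be connected, can be illustrated with a minimal greedy sketch. This is a toy, not DISCUR itself, which additionally applies a statistical criterion before accepting each connection:

```python
def chain_nearest_neighbors(points, start=0):
    """Order unorganized 2-D sample points into a polyline by repeatedly
    walking from the current point to its nearest unvisited neighbor.
    Returns the list of point indices in visiting order."""
    unvisited = set(range(len(points)))
    order = [start]
    unvisited.discard(start)
    while unvisited:
        x0, y0 = points[order[-1]]
        nxt = min(unvisited,
                  key=lambda i: (points[i][0] - x0) ** 2 + (points[i][1] - y0) ** 2)
        order.append(nxt)
        unvisited.discard(nxt)
    return order
```

On densely sampled smooth curves this recovers the sampling order; without a rejection test it will happily bridge gaps between distinct curves, which is the failure mode DISCUR's statistical criterion guards against.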

3.
In this paper, a new algorithm named VICUR is presented for the curve reconstruction problem. From a set of unorganized points, the proposed algorithm can construct curves that look natural to human vision. The VICUR algorithm is based on two connectivity criteria, proximity and good continuation, drawn from the prominent Gestalt principles of perception. Experimental results are presented to show the effectiveness of VICUR.

4.
We describe a new type of parametrically defined space curve. The parameters which define these curves allow convenient control over local shape attributes while maintaining global second-order geometric continuity. The coordinate functions are defined by piecewise segments of rational functions, each segment being the ratio of a cubic polynomial and a common quadratic polynomial. Each curve segment is a planar curve, and where two segments meet the curvature is zero. This simple mathematical representation permits these curves to be efficiently manipulated and displayed.

5.
In this article we consider the problem of automatic detection of curves, as opposed to straight lines, in a noisy image. We develop a two-step model selection procedure based on a contourlet expansion of the image and prove that the method is consistent in probability. The first step is based on the usual thresholding methods for frames. The second step selects pixels that spread energy over several simultaneous directions, a known property of curve-like figures. We apply the proposed method to synthetic images and show its ability to separate curves from a noisy background and from a random collection of small straight lines. A practical application to seismic grids is also considered.

6.
In scientific research, engineering, and related fields, one often needs to study how certain variables behave along the time axis. Sampled data are interpolated and fitted so that smooth, continuous curves can be drawn for expert analysis. Curve construction from discrete points takes two forms, interpolation and approximation. An interpolating curve passes through the data points; typically the corresponding control points are obtained by inverse computation, and the B-spline curve is then derived from them, which becomes slow when there are many data points. Targeting this computational cost of curve design, this paper proposes a new algorithm that improves the efficiency of curve computation and addresses the problem of curve modeling from discrete points in engineering applications.
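Once the control points have been obtained by inverse computation, each span of a uniform cubic B-spline is evaluated directly from four consecutive control points. A minimal sketch (the function name and the 2-D tuple representation are illustrative choices):

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one span of a uniform cubic B-spline at t in [0, 1]
    from four consecutive control points, each an (x, y) tuple."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0  # the four basis weights always sum to 1
    x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0]
    y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]
    return x, y
```

At t = 0 the span starts at (P0 + 4·P1 + P2)/6, so the curve does not generally pass through the control points, which is why interpolation requires solving for control points first.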

7.
This paper introduces an optimization-based approach to the curve reconstruction problem, in which piecewise linear approximations are computed from sets of points sampled from target curves. The problem is formulated as an optimization problem: first the Delaunay triangulation of the sample points is computed and a weight is assigned to each Delaunay edge; the reconstruction then amounts to minimizing or maximizing the total weight of the edges that constitute it. The paper proposes one exact method and two approximate methods, and shows that the results obtained are improved both theoretically and empirically. In addition, the optimization-based approach is extended to three dimensions, where surfaces are to be reconstructed, and the quality of the reconstructions is examined.

8.
We study the characteristics of the chromaticity-coordinate data of monochromatic light over the 380-830 nm wavelength range, and describe the analysis, construction principles, and development of a mathematical model that converts discrete R, G, B measurements of incident light into wavelength values. Computational models that derive continuous functions from the discrete data using different piecewise-segmentation schemes are proposed, and the data-conversion method based on these models is explained. The study shows that wavelengths computed by piecewise curve fitting of the discrete chromaticity-diagram data have small errors and satisfy the requirements of spectral data measurement and analysis.

9.
We present a local method for the computation of the intersections of plane algebraic curve segments. The conventional method of intersection is global, because it must first find all of the intersections between two curves before it can restrict the segments in question; hence, it cannot take advantage of situations dealing with the intersection of short curve segments on complex curves. Our local method, on the other hand, will directly find only those intersections that lie on the segments, as it is based upon an extension of methods for tracing along a curve.

One author's research was supported by the National Science Foundation under Grant IRI-8910366; the other's under Grant CCR-8810568.

10.
A new approach for cubic B-spline curve approximation is presented. The method produces an approximation cubic B-spline curve tangent to a given curve at a set of selected positions, called tangent points, in a piecewise manner starting from a seed segment. A heuristic method is provided to select the tangent points. The first segment of the approximation cubic B-spline curve can be obtained using an inner point interpolation method, least-squares method or geometric Hermite method as a seed segment. The approximation curve is further extended to other tangent points one by one by curve unclamping. New tangent points can also be added, if necessary, by using the concept of the minimum shape deformation angle of an inner point for better approximation. Numerical examples show that the new method is effective in approximating a given curve and is efficient in computation.

11.
Hierarchical part-type segmentation using voxel-based curve skeletons
We present an effective framework for segmenting 3D shapes into meaningful components using the curve skeleton. Our algorithm identifies a number of critical points on the efficiently computed curve skeleton, either fully automatically as the junctions of the curve skeleton, or based on user input. We use these points to construct a partitioning of the object surface using geodesics. Because the segmentation is based on the curve skeleton, it intrinsically reflects the shape symmetry and articulation, and can handle shapes with tunnels. We describe a voxel-based implementation of our method which is robust and noise resistant, able to handle shapes of complex articulation and topology, produces smooth segment borders, and delivers hierarchical level-of-detail segmentations. We demonstrate the framework on various real-world 3D shapes. Additionally, we discuss the use of both curve and surface skeletons to produce part-type and patch-type, respectively, segmentations of 3D shapes.

12.
Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three-dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest.

13.
We propose a new image registration scheme for remote sensing images. This scheme includes three steps in sequence. First, a segmentation process is performed on the input image pair. Then the boundaries of the segmented regions in two images are extracted and matched. These matched regions are called confidence regions. Finally, a non-linear optimization is performed in the matched regions only to obtain a global set of transform parameters. Experiments show that this scheme is more robust and converges faster than registration of the original image pair. We also develop a new curve-matching algorithm based on curvature scale space to facilitate the second step.

14.
Fuzzy cell Hough transform for curve detection
In this paper a new variation of the Hough Transform is proposed. It can be used to detect shapes or contours in an image with better accuracy, especially in noisy images. The parameter space of the Hough Transform is split into fuzzy cells, which are defined as fuzzy numbers. This fuzzy split makes it possible to exploit the uncertainty of contour-point locations, which is increased in noisy images. With fuzzy cells, each contour point in the spatial domain contributes to more than one fuzzy cell in the parameter space. The array created by the fuzzy voting process is smoother than in the crisp case, the effect of noise is reduced, and curves can be detected with better accuracy. The computation time, which is slightly increased by this method, can be kept below that of the classical Hough Transform by applying the fuzzy voting process recursively on a coarsely split parameter space, yielding a multiresolution, fuzzily split parameter space.
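A rough sketch of the fuzzy voting idea for straight lines (ρ = x·cosθ + y·sinθ): each point's vote is spread over neighbouring ρ-cells with triangular weights rather than incrementing a single crisp cell. This is a deliberate simplification with made-up parameter names, not the paper's exact fuzzy-number formulation:

```python
import math

def fuzzy_hough_lines(points, n_theta=90, n_rho=100, rho_max=100.0, spread=1):
    """Accumulate fuzzy Hough votes for lines rho = x*cos(theta) + y*sin(theta).
    Each point spreads its vote over the rho-cells within `spread` of the exact
    value, weighted by a triangular membership function."""
    acc = [[0.0] * n_rho for _ in range(n_theta)]
    d_rho = 2 * rho_max / n_rho
    for x, y in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            centre = (rho + rho_max) / d_rho  # position in cell units
            for ri in range(int(centre) - spread, int(centre) + spread + 1):
                if 0 <= ri < n_rho:
                    w = max(0.0, 1.0 - abs(ri + 0.5 - centre) / (spread + 1))
                    acc[ti][ri] += w
    return acc
```

Because neighbouring cells share each vote, the accumulator peak for a noisy line is broader but more stable than with crisp voting.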

15.
The Hilbert curve describes a method of mapping between one and n dimensions. Such mappings are of interest in a number of application domains including image processing and, more recently, the indexing of multi-dimensional data. Relatively little work, however, has been devoted to techniques for mapping in more than 2 dimensions. This paper presents a technique for constructing state diagrams to facilitate such mappings; it is a specialization of an incomplete generic process described by Bially. Although the storage requirements for state diagrams increase exponentially with the number of dimensions, they are useful in up to about 9 dimensions.
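For the 2-dimensional case, the mapping from distance along the curve to coordinates can be written with a few bit operations; the widely known iterative form below sketches the idea that the state-diagram technique generalizes to n dimensions:

```python
def hilbert_d2xy(order, d):
    """Map distance d along a 2-D Hilbert curve over a 2^order x 2^order grid
    to the (x, y) cell it visits. At each scale, the quadrant is chosen from
    two bits of d and the sub-curve is rotated/reflected to match."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # rotate/reflect the quadrant just built
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Walking d from 0 to 4^order - 1 visits every cell exactly once, and consecutive cells are always grid neighbours, which is the locality property that makes the curve useful for indexing.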

16.
This work presents the design and implementation of a syntax analyzer for microcomputers. The classical tools of high-level language analysis have been adapted so that a machine-independent analyzer is provided. The architecture as well as the structural details of the analyzer are given. It consists of two phases: a scanner, which produces tokens by means of lexical analysis, and a parser, which groups these tokens into syntactic structures that can be used by the subsequent phase (code generation).

The main target throughout all the design steps is to achieve portability and compatibility for microcomputers. Therefore each phase consists of a number of table-based modules, whose essential characteristic is the ease with which any table can be modified or even regenerated. Appropriate interfacing has also been provided between phases.

The modular design leads to storage minimization as well as system reliability and maintainability.

17.
Spline curves built from degree-n trigonometric-polynomial uniform B-spline bases can represent straight lines, parabolas, ellipses, and helices. This paper introduces trigonometric-polynomial uniform B-splines with a shape parameter, and then uses the special case of a zero shape parameter to draw ellipses and helices, demonstrating the effectiveness of this family of curves for curve drawing in CAGD.

18.
We introduce a new topology-preserving 3D thinning procedure for deriving the curve voxel skeleton from 3D binary digital images. Based on a rigorously defined classification procedure, the algorithm consists of sequential thinning iterations each characterized by six parallel directional sub-iterations followed by a set of sequential sub-iterations. The algorithm is shown to produce concise and geometrically accurate 3D curve skeletons. The thinning algorithm is also insensitive to object rotation and only moderately sensitive to noise. Although this thinning procedure is valid for curve skeleton extraction of general elongated objects, in this paper, we specifically discuss its application to the orientation modeling of trabecular biological tissues.

19.
There are important classes of programming errors that are hard to diagnose, both manually and automatically, because they involve a program's dynamic behavior. This article describes a compile-time analyzer that detects these dynamic errors in large, real-world programs. The analyzer traces execution paths through the source code, modeling memory and reporting inconsistencies. In addition to avoiding false paths through the program, this approach provides valuable contextual information to the programmer who needs to understand and repair the defects. Automatically-created models, abstracting the behavior of individual functions, allow inter-procedural defects to be detected efficiently. A product built on these techniques has been used effectively on several large commercial programs. Copyright © 2000 John Wiley & Sons, Ltd.

20.
P. Ferrara, Software, 2013, 43(6): 663-684
In this paper, we present Checkmate, the first generic static analyzer of multithreaded Java programs based on abstract interpretation. Checkmate can be tuned at different levels of precision and efficiency in order to prove various properties (e.g., absence of divisions by zero and data races), and it is sound for multithreaded programs. It supports all the most relevant features of Java multithreading, such as dynamic thread creation, runtime creation of monitors, and dynamic allocation of memory. The experimental results demonstrate that Checkmate is accurate and efficient enough to analyze programs with some thousands of statements and a potentially infinite number of threads. Copyright © 2012 John Wiley & Sons, Ltd.
