1.
To decompose and reconstruct arbitrary NURBS curves exactly, a new algorithm for semi-orthogonal B-spline wavelet decomposition and reconstruction is proposed. Wavelet decomposition and reconstruction algorithms at non-integer resolution levels are also given for non-uniform B-spline curves, realizing a multi-resolution representation of arbitrary non-uniform B-spline curves. Any non-uniform B-spline or NURBS curve, no matter how many control points it has, can be semi-orthogonally decomposed and reconstructed, free of the restriction that the number of control points equal 2^j + 3. In this sense, the method not only enables non-uniform B-spline curve modeling at continuous resolution levels but also supports the exact decomposition and reconstruction of non-uniform B-spline and NURBS curves, which is of significant practical value for the multi-resolution modeling and display of B-spline curves and surfaces.
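For orientation, here is a minimal sketch of the decompose/reconstruct pattern the abstract describes, with one loud simplification: plain Haar averaging and differencing filters stand in for the paper's semi-orthogonal B-spline wavelet filters, and the number of control points is assumed even, whereas removing exactly this kind of counting restriction is the paper's contribution.

```python
import numpy as np

def decompose(c):
    """One wavelet analysis step on a sequence of control-point coordinates.

    Haar filters are a stand-in: with the paper's semi-orthogonal B-spline
    filters, `coarse` would be the control points of a lower-resolution
    curve and `detail` the wavelet coefficients needed for exact
    reconstruction.
    """
    c = np.asarray(c, dtype=float)
    even, odd = c[0::2], c[1::2]
    return (even + odd) / 2.0, (even - odd) / 2.0   # low-pass, high-pass

def reconstruct(coarse, detail):
    """Exact inverse of `decompose` (perfect reconstruction)."""
    c = np.empty(2 * len(coarse))
    c[0::2] = coarse + detail
    c[1::2] = coarse - detail
    return c

ctrl = np.random.rand(16)                 # one coordinate of 16 control points
lo, hi = decompose(ctrl)
assert np.allclose(reconstruct(lo, hi), ctrl)   # exact round trip
```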
2.
An efficient algorithm for image segmentation, based on a multi-resolution application of a wavelet transform and feature distributions, is presented. The original feature space is transformed into a lower resolution with a wavelet transform, enabling fast computation of the optimum threshold value in the feature space. Based on this lower-resolution version of the given feature space, a single feature value or multiple feature values are determined as the optimum threshold values. The optimum feature values, found at the lower resolution, are then projected onto the original feature space; at this step a refinement procedure may be added to sharpen the optimum threshold value. Experimental results indicate that the proposed algorithm is feasible and reliable for fast image segmentation.
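A rough sketch of the coarse-to-fine idea, under stated assumptions: the grey-level histogram of an 8-bit image plays the role of the feature space, a Haar low-pass stands in for the wavelet transform, Otsu's criterion stands in for the optimum-threshold computation, and the `radius` refinement window is a hypothetical parameter rather than the paper's refinement procedure.

```python
import numpy as np

def otsu_threshold(hist):
    """Otsu's criterion on a histogram; returns the best split index."""
    p = hist / hist.sum()
    bins = np.arange(len(p))
    best_t, best_var = 0, -1.0
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (bins[:t] * p[:t]).sum() / w0          # class means
        m1 = (bins[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2              # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def coarse_to_fine_threshold(image, levels=3, radius=4):
    """Pick a threshold on a low-resolution histogram, then refine it
    in the full-resolution histogram around the projected value."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)[:256]
    coarse = hist.copy()
    for _ in range(levels):                         # Haar low-pass + decimate
        coarse = (coarse[0::2] + coarse[1::2]) / 2.0
    t = otsu_threshold(coarse) << levels            # project to original space
    lo, hi = max(t - radius, 1), min(t + radius, 255)
    return lo + otsu_threshold(hist[lo:hi + 1])     # local refinement
```

The payoff is that the exhaustive threshold scan runs over 32 bins instead of 256, and only a small window is re-examined at full resolution.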
3.
Leila De Floriani Enrico Puppo 《Computer aided design》2004,36(2):141-159
We address the problem of representing and processing, at different levels of detail, 3D objects described by simplicial meshes that consist of parts of mixed dimensions and have a non-manifold topology. First, we describe a multi-resolution model, which we call a non-manifold multi-tessellation (NMT), and we consider the selective refinement query, which is at the heart of several analysis operations on multi-resolution meshes. Next, we focus on a specific instance of an NMT, generated by simplifying simplicial meshes based on vertex-pair contraction, and we describe a compact data structure for encoding such a model. We also propose a new data structure for two-dimensional simplicial meshes, capable of representing both connectivity and adjacency information with a small memory overhead, which is used to describe the mesh extracted from an NMT through selective refinement. Finally, we present algorithms to efficiently perform updates on such a data structure.
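In hypothetical Python form, the two ingredients the abstract combines might look as follows; the field layout and the list-based dependency encoding are our own assumptions for illustration, while the paper's actual structure is considerably more compact.

```python
from dataclasses import dataclass, field

@dataclass
class Update:
    """One vertex-pair contraction stored in the multi-resolution model."""
    v_kept: int                 # surviving vertex of the contracted pair
    v_gone: int                 # vertex removed by the contraction
    error: float = 0.0          # approximation error the contraction introduced
    parents: list = field(default_factory=list)   # updates this one depends on

def selective_refinement(updates, tolerance):
    """Select the contractions to undo so the extracted mesh meets `tolerance`.

    Walks the dependency DAG: a contraction can only be undone after every
    update it depends on has been undone, mirroring the selective
    refinement query on an NMT.
    """
    selected = set()
    stack = [u for u in updates if u.error > tolerance]
    while stack:
        u = stack.pop()
        if id(u) in selected:
            continue
        selected.add(id(u))
        stack.extend(u.parents)          # dependencies get refined first
    return [u for u in updates if id(u) in selected]
```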
4.
In this paper, we introduce the general architecture of an image-search engine based on pre-attentive similarities. Local features are computed at key points to represent local properties of the images, and the choice of the locations at which they are computed is discussed. We present two new key point detectors designed for image retrieval, both based on multi-resolution: a contrast-based point detector and a wavelet-based point detector. Four different local features are used in our system: differential invariants, texture, shape and colour. The local information computed at each key point is stored in 2D histograms to allow fast querying. We study the choice of key point detector depending on the feature used, for different test sets. The Harris corner detector is used for benchmarking. Uniformly distributed points are also used, and we conclude for which applications they are effective. Finally, we show that point detector and feature efficiency depend upon the test set studied.
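As an illustration of a multi-resolution, contrast-based detector, here is a small sketch; the paper's actual detectors are not reproduced here, so the contrast measure (absolute difference from Gaussian-smoothed copies at several scales) and all parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def contrast_keypoints(img, sigmas=(1, 2, 4), n_points=200):
    """Return up to `n_points` (row, col) interest points.

    Contrast at each scale is the absolute difference between the image
    and its Gaussian-smoothed copy; keypoints are local maxima of the
    contrast summed across scales.
    """
    img = img.astype(float)
    contrast = sum(np.abs(img - gaussian_filter(img, s)) for s in sigmas)
    peaks = contrast == maximum_filter(contrast, size=5)   # 5x5 local maxima
    ys, xs = np.nonzero(peaks)
    order = np.argsort(contrast[ys, xs])[::-1][:n_points]  # strongest first
    return list(zip(ys[order], xs[order]))
```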
5.
6.
In forest dynamics models, the computational load involved in simulating seed dispersal can become prohibitive for large-scale forest analysis. To solve this problem, we propose a multi-resolution algorithm for computing seed dispersal on the GPU. Exploiting the parallelism inherent in seed dispersal, the computation for the whole forest plot is divided among multiple small plot cells, which are processed independently by parallel GPU threads. To further improve efficiency given the limited number of GPU threads, we propose a hierarchical method that clusters the plot cells into a multi-resolution form according to the biological curves of tree seed dispersal. Experimental results show that our algorithm not only greatly reduces computation time but also produces results comparable to those of the naive GPU algorithm, making it especially suitable for large-scale forest modeling.
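A toy, single-threaded sketch of the multi-resolution idea (on the GPU, each target plot cell would be one thread): nearby source trees are summed exactly, while far-field trees are pooled into coarse blocks, since the dispersal kernel is nearly flat at long range. The kernel shape and the `near` and `block` parameters are hypothetical, and coordinates are assumed non-negative.

```python
import numpy as np

def kernel(d, alpha=0.01):
    """Hypothetical dispersal kernel, decaying smoothly with distance d."""
    return 1.0 / (1.0 + alpha * d * d) ** 2

def seed_input(target, trees, near=50.0, block=8.0):
    """Seed rain at `target` from an (n, 3) array of (x, y, fecundity) trees."""
    tx, ty = target
    xs, ys, f = trees[:, 0], trees[:, 1], trees[:, 2]
    d = np.hypot(xs - tx, ys - ty)
    close = d < near
    exact = np.sum(f[close] * kernel(d[close]))        # near field: exact sum
    fx, fy, ff = xs[~close], ys[~close], f[~close]     # far field: pool by block
    keys = (np.floor(fx / block) * 1_000_000 + np.floor(fy / block)).astype(int)
    _, inv = np.unique(keys, return_inverse=True)
    pf = np.bincount(inv, weights=ff)                  # fecundity per coarse cell
    px = np.bincount(inv, weights=ff * fx) / pf        # weighted cell centres
    py = np.bincount(inv, weights=ff * fy) / pf
    return exact + np.sum(pf * kernel(np.hypot(px - tx, py - ty)))
```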
7.
Sunil Prabhakar Divyakant Agrawal Amr El Abbadi Ambuj Singh Terence Smith 《Multimedia Systems》2003,8(6):459-469
With rapid advances in computer and communication technologies, there is an increasing demand to build and maintain large image repositories. To reduce the demands on I/O and network resources, multi-resolution representations are being proposed for the storage organization of images. Image decomposition techniques such as wavelets can be used to provide these multi-resolution images: the original image is represented by several coefficients, one of which is visually similar to the original image but at a lower resolution. These visually similar coefficients can be thought of as thumbnails or icons of the original image. This paper addresses the problem of storing these multi-resolution coefficients on disks so that thumbnail browsing as well as image reconstruction can be performed efficiently. Several strategies for storing the image coefficients on parallel disks are evaluated. These strategies fall into two broad classes, depending on whether the access pattern of the images is used in the placement. Disk simulation is used to evaluate the performance of these strategies; the simulation results are validated against experiments with real disks and are found to be in good qualitative agreement. The results indicate that significant performance improvements can be achieved with as few as four disks by placing image coefficients based upon browsing access patterns.
Work supported by a research grant from NSF/ARPA/NASA IRI-9411330, NSF instrumentation grant CDA-9421978, NSF CAREER grant IIS-9985019, and NSF grant CCR-0010044.
8.
A new framework for wavelet neural networks, termed the multi-resolution wavelet neural network (MRWNN), is constructed based on the theory of multi-resolution wavelet analysis and orthogonal multi-scale spaces. The hidden layer of the network is divided into two parts: neurons with the Meyer scaling activation function, and neurons with the Meyer wavelet activation function, which is orthogonal to the scaling function. Neurons with the scaling function approximate the contour of the target function, owing to the scaling function's smoothness, while neurons with the wavelet function approximate its details, owing to the wavelet's sensitivity to local variation. Hidden neurons are mapped to different resolution spaces by redefining the network frame according to multi-resolution wavelet analysis theory. By incorporating gradient descent, the network can be optimized with less interaction among hidden neurons, and it thus reaches a better error-convergence state when the corresponding parameters are adjusted in different resolution spaces. When applied to fouling forecasting for a plate heat exchanger, the MRWNN achieved better performance in simulations than other neural networks (NNs), showing that the MRWNN is effective for nonlinear function approximation.
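A minimal numeric sketch of the two-part hidden layer. Loud assumptions: a Gaussian stands in for the Meyer scaling function and a Mexican-hat wavelet for the Meyer wavelet (the Meyer pair has no convenient closed form), and only the output weights are trained by gradient descent, whereas the paper also adjusts parameters within each resolution space.

```python
import numpy as np

def scaling(u):                      # smooth stand-in: captures the contour
    return np.exp(-u * u / 2.0)

def wavelet(u):                      # oscillatory stand-in: captures detail
    return (1.0 - u * u) * np.exp(-u * u / 2.0)

def train_mrwnn(x, y, n_hidden=8, lr=0.05, epochs=20000, seed=0):
    """Fit y ~ sum_i w_i * act_i((x - t_i) / s_i); the first half of the
    hidden units use the scaling activation, the second half the wavelet."""
    rng = np.random.default_rng(seed)
    half = n_hidden // 2
    t = rng.uniform(x.min(), x.max(), n_hidden)          # translations
    s = np.full(n_hidden, (x.max() - x.min()) / half)    # dilations
    u = (x[:, None] - t) / s
    h = np.hstack([scaling(u[:, :half]), wavelet(u[:, half:])])
    w = np.zeros(n_hidden)
    for _ in range(epochs):
        w -= lr * h.T @ (h @ w - y) / len(x)             # gradient descent
    return t, s, w, half
```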
9.
10.
Multi-resolution techniques are required for rendering large volumetric datasets that exceed the size of the graphics card's memory or even of main memory. The cut through the multi-resolution volume representation is defined by selection criteria based on error metrics. For GPU-based volume rendering, this cut has to fit into the graphics card's memory and needs to be continuously updated as the user interacts with the volume, for instance by changing the area of interest, the transfer function or the viewpoint. We introduce a greedy cut-update algorithm based on split-and-collapse operations for updating the cut on a frame-to-frame basis. This approach is guided by a global metric derived from the distortion of classified voxel data, and it takes into account a limited download budget for transferring data from main memory to the graphics card, to avoid large frame-rate variations. Our out-of-core support for handling very large volumes also uses split-and-collapse operations to generate an extended cut in main memory. Finally, we introduce an optimal polynomial-time cut-update algorithm that maximizes the error reduction between consecutive frames; it is used to verify how close to the optimum our greedy split-and-collapse algorithm performs.
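The greedy split pass might be organized as in the sketch below; the `Node` layout, the `error` and `cost` callables, and the budget handling are assumptions, the symmetric collapse pass (merging children back to respect GPU memory) is only indicated, and the paper's optimal polynomial-time variant is not shown.

```python
import heapq
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    id: int
    children: tuple = ()            # empty for finest-resolution nodes

def update_cut(cut, error, cost, budget):
    """One frame-to-frame greedy update of the multi-resolution cut.

    `cut` is the set of nodes currently rendered. `error(n)` is the
    data-based distortion of rendering node n as-is; `cost(n)` is the
    bytes needed to download n's children to the GPU. Splits are applied
    highest-error-first until the per-frame download budget is spent; a
    collapse pass would then merge the lowest-error sibling groups
    whenever GPU memory is exceeded.
    """
    heap = [(-error(n), n.id, n) for n in cut if n.children]
    heapq.heapify(heap)
    spent = 0
    while heap:
        _, _, n = heapq.heappop(heap)
        if spent + cost(n) > budget:
            break                    # budget exhausted: stop refining
        spent += cost(n)
        cut.discard(n)               # split: replace the node ...
        cut.update(n.children)       # ... by its higher-resolution children
    return cut
```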
11.
Zhan Gao 《Computer aided design》2006,38(6):661-676
In this paper, we first propose a haptic interface between implicit surfaces and B-spline surfaces, which provides both force and torque feedback. We then present a new haptic sculpting system for B-spline surfaces with shaped tools defined by implicit surfaces. In the physical world, people touch or sculpt with their fingers or tools, rather than by manipulating points; shaped virtual sculpting tools help users relate the virtual modeling process to physical-world experience. Various novel haptic sculpting operations are developed to make the sculpting of B-spline surfaces more intuitive. Wavelet-based multi-resolution tools let modelers adjust the resolution of sculptured surfaces, so that the scale of deformation can be easily controlled. Moreover, sweep editing and 3D texturing have been implemented by taking advantage of both the wavelet technique and the haptic sculpting tools.
12.
One of the main difficulties in video analysis is tracking moving objects through a video sequence, especially in the presence of occlusions. Unfortunately, almost all existing approaches work on a pixel-by-pixel basis, which makes them unusable in real-time settings and not very expressive at the semantic level of the video. In this paper we present a novel method for tracking objects through occlusions that exploits the wealth of information arising from the spatial coherence between pixels, using a graph-based, multi-resolution representation of the moving regions. The experimental results show that the approach is promising.
13.
Video-on-demand (VOD) service requires balanced use of system resources, such as disk bandwidth and buffer, to accommodate more clients. The data retrieval size and the data rates of the video streams directly affect the utilization of these resources. Given data rates that vary widely in multi-resolution video servers, we need to determine the appropriate data retrieval size to balance the buffer against the disk bandwidth; otherwise, the server may be unable to admit new clients even though one of the two resources is still available. To address this problem, we propose the following new schemes that work together: (1) a replication scheme called Splitting Striping units by Replication (SSR), which defines two sizes of striping unit that allow data to be stored on the primary and backup copies in different ways, increasing the number of admitted clients; (2) a retrieval scheduling method that combines the merits of the existing SCAN and grouped sweeping scheme (GSS) algorithms to balance buffer and disk bandwidth usage; and (3) admission control algorithms that decide whether to read data from the primary or the backup copy. The effectiveness of the proposed schemes is demonstrated through simulations, whose results show that our schemes cope with various workloads efficiently and thus enable the server to admit a much larger number of clients.
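To make the buffer-versus-bandwidth trade-off concrete, here is a deliberately simplified round-based admission model; it is not the paper's SSR/GSS formulation, and all numbers are illustrative.

```python
def admitted_clients(rates, disk_rate, seek, buffer_cap, period):
    """Count admissible clients for a given service round length.

    Per round of `period` seconds a client consumes `rate * period`
    bytes: one disk access (seek plus transfer time, charged against the
    round) and roughly twice that in buffer (double buffering).
    Admission stops when either resource is exhausted.
    """
    disk_left, buf_left, n = period, buffer_cap, 0
    for r in sorted(rates):
        chunk = r * period
        if seek + chunk / disk_rate <= disk_left and 2 * chunk <= buf_left:
            disk_left -= seek + chunk / disk_rate
            buf_left -= 2 * chunk
            n += 1
    return n

# Short rounds waste disk time on seeks; long rounds waste buffer.
# Sweeping the round length exposes the balance point the abstract targets.
rates = [1.5e6] * 40 + [6e6] * 20          # bytes/s, mixed resolutions
for T in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(T, admitted_clients(rates, disk_rate=40e6, seek=0.012,
                              buffer_cap=256e6, period=T))
```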
14.
The basic theory of wavelet analysis and reconstruction is outlined, wavelet decomposition is applied to the multi-resolution editing of B-spline curves, and a new algorithm for wavelet analysis and reconstruction is proposed. Exploiting the fact that the augmented matrix of the linear system is band-like or sparse, the algorithm uses simple elementary row operations to bring the band-like or sparse matrix into a convenient row-reduced form before solving the system. This makes the wavelet decomposition and reconstruction process fast and accurate, and easier for practitioners in the field to understand and adopt.
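A compact sketch of the linear-algebra core, assuming the synthesis filter matrices P (scaling) and Q (wavelet) are given: decomposition solves the band-like augmented system [P | Q | C] that the abstract reduces by elementary row operations; `np.linalg.solve` stands in for that reduction.

```python
import numpy as np

def wavelet_analyze(C, P, Q):
    """Split control points C into coarse points c and details d by
    solving [P | Q] @ [c; d] = C.

    [P | Q | C] is exactly the band-like augmented matrix that the
    row-reduction operates on; production code would exploit the band
    structure (e.g. scipy.linalg.solve_banded) instead of a dense solve.
    """
    M = np.hstack([P, Q])             # square n x n synthesis matrix
    x = np.linalg.solve(M, C)
    k = P.shape[1]
    return x[:k], x[k:]               # coarse control points, details

def wavelet_reconstruct(c, d, P, Q):
    """Exact reconstruction of the original control points."""
    return P @ c + Q @ d
```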
15.
In this paper, we present a placement algorithm that interleaves multi-resolution video streams on a disk array and enables a video server to efficiently support playback of these streams at different resolution levels. We then combine this placement algorithm with a scalable compression technique to efficiently support interactive scan operations (i.e., fast-forward and rewind). We present an analytical model for evaluating the impact of the scan operations on the performance of disk-array-based servers. Our experiments demonstrate that: (1) employing our placement algorithm substantially reduces seek and rotational latency overhead during playback; and (2) exploiting the characteristics of video streams and human perceptual tolerances enables a server to support interactive scan operations without any additional overhead.
16.
A multiscale system identification methodology is presented and discussed that extends, in a systematic way, the classical body of single-scale system identification tools to a multiscale context. The proposed approach is built upon a wavelet-based multiscale decomposition in a receding-horizon sliding window that always includes the last measured values, making it suitable for on-line use. Several examples illustrate different features of the multiscale modeling framework, such as its improved ability to predict output variables whose energy is concentrated at intermediate or coarser time scales than that of the input variables, and its intrinsic smoothing capability.
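A sketch of the receding-horizon multiscale pipeline, under stated assumptions: Haar details replace the paper's wavelet family, the window length is a power of two ending at the latest sample, and a plain least-squares map from multiscale input features to the output stands in for the identified model.

```python
import numpy as np

def haar_features(window):
    """Multiscale features of one sliding window: detail coefficients
    at every scale plus the final coarse mean. The window always ends
    at the most recent measurement (receding horizon)."""
    feats, c = [], np.asarray(window, dtype=float)
    while len(c) > 1:
        even, odd = c[0::2], c[1::2]
        feats.append((even - odd) / 2.0)     # details at this scale
        c = (even + odd) / 2.0               # coarser approximation
    feats.append(c)                          # overall mean
    return np.concatenate(feats)

def fit_multiscale_model(u, y, win=16):
    """Least-squares map from multiscale features of the input u to y."""
    X = np.array([haar_features(u[i - win:i]) for i in range(win, len(u))])
    theta, *_ = np.linalg.lstsq(X, y[win:len(u)], rcond=None)
    return theta
```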
17.
Kelvin K.W. Law John C.S. Lui Leana Golubchik 《The VLDB Journal The International Journal on Very Large Data Bases》1999,8(2):133-153
Advances in high-speed networks and multimedia technologies have made it feasible to provide video-on-demand (VOD) services to users. However, it is still a challenging task to design a cost-effective VOD system that can support a large number of clients (who may have different quality-of-service (QoS) requirements) and, at the same time, provide different types of VCR functionalities. Although it has been recognized that VCR operations are important functionalities in providing VOD service, techniques proposed in the past for providing VCR operations may require additional system resources, such as extra disk I/O, additional buffer space, as well as network bandwidth. In this paper, we consider the design of a VOD storage server that has the following features: (1) provision of different levels of display resolutions to users who have different QoS requirements; (2) provision of different types of VCR functionalities, such as fast forward and rewind, without imposing additional demand on the system buffer space, I/O bandwidth, and network bandwidth; and (3) guarantees of the load-balancing property across all disks during normal and VCR display periods. The above-mentioned features are especially important because they simplify the design of the buffer space, I/O, and network resource allocation policies of the VOD storage system. The load-balancing property also ensures that no single disk will be the bottleneck of the system. In this paper, we propose data block placement, admission control, and I/O-scheduling algorithms, as well as determine the corresponding buffer space requirements of the proposed VOD storage system. We show that the proposed VOD system can provide VCR and multi-resolution services to the viewing clients and at the same time maintain the load-balancing property.
Received June 9, 1998 / Accepted April 26, 1999
18.
One of the key problems in using B-splines successfully to approximate an object contour is determining good knots. In this paper, the knots of a parametric B-spline curve are treated as variables, and the initial location of every knot is generated using the Monte Carlo method in its solution domain. The best km knot vectors among the initial candidates are selected according to fitness. Based on initial parameters estimated by an improved k-means algorithm, a Gaussian mixture model (GMM) is built for every knot from the best km knot vectors. The next generation of the population is then sampled from these Gaussian mixture probabilistic models, and the procedure is iterated until a termination criterion is met. The GMM-based continuous optimization algorithm determines appropriate knot locations automatically. A set of experiments was carried out to evaluate the performance of the new algorithm; the results show that the proposed method achieves better approximation accuracy than methods based on artificial immune systems, genetic algorithms, or squared distance minimization (SDM).
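A simplified, runnable sketch of the estimation-of-distribution loop, with two loud simplifications: a single Gaussian per knot replaces the per-knot Gaussian mixture model (and the k-means initialization), and SciPy's `LSQUnivariateSpline` supplies the least-squares cubic B-spline fit whose residual serves as the fitness.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def optimize_knots(x, y, n_knots=6, pop=60, elite=15, iters=30, seed=0):
    """Search interior knot locations for fitting (x, y); x must be increasing."""
    rng = np.random.default_rng(seed)
    lo, hi = x[1], x[-2]                     # knots stay inside the domain
    knots = rng.uniform(lo, hi, (pop, n_knots))
    for _ in range(iters):
        knots.sort(axis=1)                   # knot vectors must be increasing
        fitness = []
        for t in knots:
            try:
                fitness.append(LSQUnivariateSpline(x, y, t).get_residual())
            except ValueError:               # Schoenberg-Whitney violated
                fitness.append(np.inf)
        best = knots[np.argsort(fitness)[:elite]]   # the elite knot vectors
        mu = best.mean(axis=0)                      # one Gaussian per knot
        sd = best.std(axis=0) + 1e-6
        knots = rng.normal(mu, sd, (pop, n_knots)).clip(lo, hi)
    return best[0]                           # best knot vector found
```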
19.
Approximate merging of B-spline curves via knot adjustment and constrained optimization
Chiew-Lan Tai Qi-Xing Huang 《Computer aided design》2003,35(10):893-899
This paper addresses the problem of approximate merging of two adjacent B-spline curves into one B-spline curve. The basic idea of the approach is to find the conditions for precise merging of two B-spline curves, and perturb the control points of the curves by constrained optimization subject to satisfying these conditions. To obtain a merged curve without superfluous knots, we present a new knot adjustment algorithm for adjusting the end k knots of a kth order B-spline curve without changing its shape. The more general problem of merging curves to pass through some target points is also discussed.
20.
Interactions between financial time series are complex and changeable in both the time and frequency domains. To reveal the evolution of the time-varying relations between bivariate time series from a multi-resolution perspective, this study introduces an approach combining wavelet analysis and complex networks. In addition, to reduce the influence of phase lag between the time series on the correlations, we propose dynamic time warping (DTW) correlation coefficients to quantify the degree of correlation between bivariate time series. Unlike previous studies that symbolized the time series based only on correlation strength, the second-level symbol is set according to the correlation length during the coarse-graining process. This study presents a novel method for analyzing bivariate time series and provides more information for investors and decision makers investing in the stock market. We choose the closing prices of two stocks in China's market as the sample and explore the evolutionary behavior of correlation modes at different resolutions. Furthermore, we perform experiments to discover the critical correlation modes between the bull market and the bear market on the high-resolution scale, the clustering effect during the financial crisis on the middle-resolution scale, and a potential pseudo-period on the low-resolution scale. The experimental results match reality closely, which is strong evidence that our method is effective for financial time series analysis.
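The abstract does not give the DTW correlation coefficient in closed form, so the sketch below implements one natural reading: align the two series with classical dynamic time warping, then compute the Pearson correlation over the aligned index pairs.

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic-programming DTW; returns the optimal alignment path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, (i, j) = [], (n, m)                 # backtrack from the corner
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin((D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]))
        i, j = (i - 1, j - 1) if step == 0 else \
               (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

def dtw_correlation(a, b):
    """Pearson correlation over DTW-aligned samples, so a phase lag
    between the series no longer depresses the coefficient."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    idx = np.array(dtw_path(a, b))
    return np.corrcoef(a[idx[:, 0]], b[idx[:, 1]])[0, 1]
```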