Similar Documents
20 similar documents found (search time: 93 ms)
1.
We present a compression scheme that is useful for interactive video applications such as browsing a multimedia database. The focus of our approach is the development of a compression scheme (and a corresponding retrieval scheme) that is optimal for any data rate. To browse a multimedia database, such a compression scheme is essential. We use a multiresolution setting, but eliminate the need for wavelets. This results in much better compression. We show experimental results and explain in detail how to extend our approach to multidimensional data.
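As a rough illustration of a wavelet-free multiresolution setting, the sketch below builds a mean pyramid by pairwise averaging; any prefix of the coarse-to-fine levels serves as a preview at a lower data rate. The construction is an assumption for illustration, not the authors' actual codec.

```python
# Hypothetical sketch of a multiresolution pyramid built by simple pairwise
# averaging -- a wavelet-free multiresolution setting, not the paper's codec.

def build_pyramid(signal):
    """Return coarse-to-fine levels; each finer level doubles the resolution."""
    levels = [list(signal)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        coarse = [(prev[i] + prev[i + 1]) / 2 for i in range(0, len(prev), 2)]
        levels.append(coarse)
    return levels[::-1]  # coarsest first, so any prefix is a valid preview

pyr = build_pyramid([1, 3, 5, 7, 2, 4, 6, 8])
print(pyr[0])   # coarsest level: the overall mean
print(pyr[-1])  # finest level: the original signal
```

Transmitting the levels coarsest-first is what makes the stream usable at any cut-off data rate.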

2.
Nowadays, the part-based representation of a given shape plays a significant role in shape-related applications, such as content-based retrieval and object recognition. In this paper, to represent both 2-D and 3-D shapes as a relational structure, i.e. a graph, a new shape decomposition scheme, which recursively performs constrained morphological decomposition (CMD), is proposed. The CMD method adopts the opening operation with a ball-shaped structuring element, and weighted convexity to select the optimal decomposition. To provide a compact representation, a merging criterion based on the weighted convexity difference is applied; the proposed scheme thus follows a split-and-merge approach. Finally, we present experimental results for various modified 2-D shapes, as well as 3-D shapes represented by triangular meshes. Based on these results, the decomposition of a given shape coincides with human intuition for both 2-D and 3-D shapes, and is robust to scaling, rotation, noise, shape deformation, and occlusion.
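The opening operation at the heart of CMD can be illustrated in one dimension: erosion (a min filter) followed by dilation (a max filter) removes features narrower than the structuring element. The flat window below stands in for the paper's ball-shaped element and is purely illustrative.

```python
# Tiny 1-D illustration of morphological opening: erosion then dilation.
# A flat window replaces the paper's ball-shaped structuring element.

def erode(sig, k):
    r = k // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, k):
    r = k // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def opening(sig, k):
    return dilate(erode(sig, k), k)

sig = [0, 0, 5, 0, 0, 3, 3, 3, 0, 0]   # a 1-wide spike and a 3-wide plateau
print(opening(sig, 3))  # the spike is removed, the plateau survives
```

Features narrower than the window (the spike) vanish, which is why opening with increasingly large elements peels a shape into parts.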

3.
In a fingerprint recognition system, templates are stored in the server database. To address privacy concerns in case the database is compromised, many approaches to securing biometric templates, such as biometric encryption, salting, and noninvertible transformation, have been proposed to enhance privacy and security. However, a single approach may not meet all application requirements, including security, diversity, and revocability. In this paper, we present a hybrid scheme for securing fingerprint templates that integrates our novel algorithms for biometric encryption and noninvertible transformation. During biometric encryption, we implement a fingerprint fuzzy vault using a linear equation and chaff points. During noninvertible transformation, we perform a regional transformation for every minutia-centered circular region. The hybrid scheme can provide high security, diversity, and revocability. Experimental results show the comparative performance of these approaches. We also present a strength analysis and discuss threats to our scheme.
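The lock/unlock steps of a fuzzy vault built on a linear equation and chaff points can be sketched as follows; the field size, point counts, and minutiae values are illustrative assumptions, not the paper's parameters.

```python
# Toy fuzzy-vault-like sketch: genuine points lie on a secret line
# y = a*x + b (mod P); chaff points do not.  All parameters are illustrative.
import random

P = 97  # small prime field, for illustration only

def lock(minutiae, a, b, n_chaff=5, seed=0):
    rng = random.Random(seed)
    vault = [(x, (a * x + b) % P) for x in minutiae]
    used = set(minutiae)
    while len(vault) < len(minutiae) + n_chaff:
        x, y = rng.randrange(P), rng.randrange(P)
        if x not in used and y != (a * x + b) % P:  # chaff must miss the line
            used.add(x)
            vault.append((x, y))
    rng.shuffle(vault)
    return vault

def unlock(vault, query):
    """Recover (a, b) from two vault points whose x-values match the query."""
    (x1, y1), (x2, y2) = [(x, y) for x, y in vault if x in query][:2]
    a = (y2 - y1) * pow(x2 - x1, -1, P) % P
    b = (y1 - a * x1) % P
    return a, b

vault = lock([10, 20, 30], a=7, b=13)
print(unlock(vault, {10, 30}))  # (7, 13): the secret line is recovered
```

A matching fingerprint selects genuine points and recovers the secret coefficients; an impostor's minutiae mostly hit chaff and fail.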

4.
5.
The self-organizing knowledge representation aspects of heterogeneous information environments involving object-oriented databases, relational databases, and rulebases are investigated. The authors consider a facet of self-organizability that sustains the structural semantic integrity of an integrated schema regardless of the dynamic nature of local schemata. To achieve this objective, they propose an overall scheme for schema translation and schema integration with an object-oriented data model as the common data model, and it is shown that integrated schemata can be maintained effortlessly by propagating updates in local schemata to integrated schemata unambiguously.

6.
A hybrid receding-horizon control scheme for nonlinear discrete-time systems is proposed. Whereas a set of optimal feedback control functions is defined at the continuous level, a discrete-event controller chooses the best control action, depending on the current conditions of a plant and on possible external events. Such a two-level scheme is embedded in the structure of abstract hybrid systems, thus making it possible to prove a new asymptotic stability result for the hybrid receding-horizon control approach.
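A toy one-dimensional sketch of the receding-horizon principle, assuming a linear plant x' = 0.9x + u, a three-step horizon, and a small candidate control set (all assumptions for illustration): at each step the controller searches over the horizon, applies only the first control, and re-plans.

```python
# Minimal receding-horizon loop: plan over a short horizon, apply the first
# control, re-plan.  Plant, horizon and candidate set are illustrative.

def receding_horizon(x, steps=5, horizon=3, candidates=(-1.0, 0.0, 1.0)):
    def cost(state, depth):
        # accumulated quadratic cost of the best control sequence from here
        if depth == 0:
            return state * state
        return min(cost(0.9 * state + u, depth - 1) + state * state
                   for u in candidates)

    traj = [x]
    for _ in range(steps):
        u = min(candidates, key=lambda u: cost(0.9 * x + u, horizon - 1))
        x = 0.9 * x + u          # apply only the first control, then re-plan
        traj.append(x)
    return traj

traj = receding_horizon(5.0)
print(traj)  # the state is driven toward the origin
```

The paper's discrete-event layer would additionally switch between such feedback laws in response to external events.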

7.
8.
For multicast communication, authentication is a challenging problem, since it requires that a large number of recipients be able to verify the data originator. Many multicast applications run over IP networks, in which packet losses can occur; multicast authentication protocols must therefore resist packet loss. Other requirements are real-time authentication and low communication and computation overheads. In the present paper, a hybrid scheme for authenticating real-time data applications, in which a low delay at the sender is acceptable, is proposed. To provide authentication, the proposed scheme uses both public-key signatures and hash functions. It is based on the idea of dividing the stream into blocks of m packets. A chain of hashes then links each packet to the one preceding it. To resist packet loss, the hash of each packet is also appended at another place in the stream. Finally, the first packet is signed. The proposed scheme resists packet loss and is joinable at any point. Compared to other multicast authentication protocols, it has the following advantages: first, low computation and communication overheads; second, reasonable buffer requirements; third, a low delay at the sender side and no delay at the receiver side, assuming no loss occurs; finally, its latency is zero, assuming no loss occurs.
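The block-and-hash-chain construction described above can be sketched as follows; HMAC stands in for the public-key signature purely to keep the example self-contained, and the chain layout is a simplification of the paper's loss-resistant placement.

```python
# Sketch: split the stream into blocks, link each packet to its predecessor
# by a hash, sign only the first packet.  HMAC replaces the real signature.
import hashlib, hmac

KEY = b"demo-signing-key"  # placeholder for the sender's private key

def authenticate_block(packets):
    """Return (signature, chained) where chained[i] = (payload, hash link)."""
    chained, nxt_hash = [], b""
    for payload in reversed(packets):        # build the chain back-to-front
        chained.append((payload, nxt_hash))
        nxt_hash = hashlib.sha256(payload + nxt_hash).digest()
    chained.reverse()
    signature = hmac.new(KEY, nxt_hash, hashlib.sha256).digest()
    return signature, chained

def verify_block(signature, chained):
    # each stored hash must equal the recomputed hash of the following packet,
    # and the head hash must match the signed value
    expect = b""
    for payload, h in reversed(chained):
        if h != expect:
            return False
        expect = hashlib.sha256(payload + h).digest()
    return hmac.compare_digest(
        signature, hmac.new(KEY, expect, hashlib.sha256).digest())

sig, block = authenticate_block([b"p1", b"p2", b"p3"])
print(verify_block(sig, block))  # True
```

One signature amortized over m packets is what keeps the per-packet computation overhead low.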

9.
The Journal of Supercomputing - Efficient task scheduling is required to attain high performance in both homogeneous and heterogeneous computing systems. An application can be considered as a task...

10.
Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on the spectral characteristics of thematic classes, whose statistical distributions (class-conditional probability densities) often overlap. The spectral response distributions of thematic classes depend on many factors, including elevation, soil type, and ecological zone. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 × |dimensions|), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracy even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing the large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on Landsat satellite image datasets, and our new hybrid approach shows a 24% to 36% improvement in overall classification accuracy over conventional classification schemes.
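A minimal self-training loop in the spirit of semi-supervised parameter estimation (not the paper's algorithm): a one-dimensional nearest-centroid classifier labels its most confident unlabeled samples and re-estimates the class means. The confidence threshold and data are illustrative.

```python
# Self-training sketch: confidently classified unlabeled samples are folded
# back into the training set to refine the class means.  Illustrative only.

def self_train(labeled, unlabeled, rounds=3, margin=1.0):
    """labeled: list of (x, y) with y in {0, 1}; unlabeled: list of x."""
    means = [0.0, 0.0]
    for _ in range(rounds):
        means = [
            sum(x for x, y in labeled if y == c)
            / max(1, sum(1 for _, y in labeled if y == c))
            for c in (0, 1)
        ]
        remaining = []
        for x in unlabeled:
            d0, d1 = abs(x - means[0]), abs(x - means[1])
            if abs(d0 - d1) > margin:            # confident: adopt the label
                labeled.append((x, 0 if d0 < d1 else 1))
            else:
                remaining.append(x)
        unlabeled = remaining
    return means

means = self_train([(0.0, 0), (10.0, 1)], [0.5, 1.0, 9.0, 9.5])
print(means)  # means re-estimated using the unlabeled samples
```

The same idea, with multivariate Gaussians and EM in place of this nearest-centroid rule, underlies many semi-supervised remote sensing classifiers.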

11.
Two fundamental problems exist in the use of quantum computation to process an image or signal. The first is how to represent giant data, such as image data, using a quantum state without losing information. The second is how to load a colossal volume of data into the quantum registers of a quantum CPU from classical electronic memory. Research on these two questions is rarely reported. Here an entangled state is used to represent an image (or vector), in which two entangled registers store a vector component and its classical address. Using this representation, n1+n2+8 qubits suffice to store the whole information of a gray image of size 2^n1 × 2^n2 in a superposition of states, a feat not possible with a classical computer. A unitary operation designed to load data, such as a vector (or image), into the quantum registers of a quantum CPU from electronic memory is defined herein as a quantum loading scheme (QLS). In this paper, a QLS with time complexity O(log2 N) is presented, where N denotes the number of vector components, a solution that would break through the efficiency bottleneck of loading data. A QLS would enable a quantum CPU to be compatible with electronic memory and make possible quantum image compression and quantum signal processing with classical input and output.

12.
For many years, major drawbacks of knowledge-based systems have been their lack of structure-based knowledge access, high consumption of storage space, long response times, and inability to trace items related to each other, unlike a data processing system manipulating data structured as a doubly linked list. This article presents a bi-level knowledge representation scheme that addresses these problems, along with several others. The scheme is based on a new knowledge access method in which the expert system communicates with the knowledge base through sub-knowledge sources rather than knowledge sources (facts and rules), i.e., through distinguished objects and relations. This method is used to design the scheme by building the knowledge base from a set of individual objects and relations. This individuality is shown to make it possible to construct highly efficient indices for these objects and relations.

It is shown that although the physical structure of the knowledge achieves high performance by implementing a concise and structured version of the knowledge at its low level, it is organized to serve efficiently all the tasks carried out by the accompanying knowledge-based system. Furthermore, although the low-level knowledge is highly abstracted, it is easily browsed in its full-text mode, just like many existing knowledge bases. One of the important issues that this scheme addresses is the optional incorporation of certainty degrees, which are used by an appropriate reasoning strategy. It is also shown that fuzziness manipulation can be carried out or halted without having to rewrite the physically stored knowledge.


13.
We propose the PRDC (Pattern Representation based on Data Compression) scheme for media data analysis. PRDC is composed of two parts: an encoder that translates input data into text and a set of text compressors to generate a compression-ratio vector (CV). The CV is used as a feature of the input data. By preparing a set of media-specific encoders, PRDC becomes widely applicable. Analysis tasks - both categorization (class formation) and recognition (classification) - can be realized using CVs. After a mathematical discussion on the realizability of PRDC, the wide applicability of this scheme is demonstrated through the automatic categorization and/or recognition of music, voices, genomes, handwritten sketches and color images
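A minimal PRDC-style feature extractor can be written with stock compressors: compress the input with each and use the compression ratios as the feature vector. The particular compressor set below is an assumption; the paper uses a bank of text compressors chosen per task.

```python
# Compression-ratio vector (CV) sketch: one ratio per general-purpose
# compressor.  The compressor bank here is illustrative.
import bz2, lzma, zlib

def compression_vector(data: bytes):
    raw = len(data)
    return [len(comp(data)) / raw
            for comp in (zlib.compress, bz2.compress, lzma.compress)]

cv_repetitive = compression_vector(b"abab" * 200)
cv_mixed = compression_vector(bytes(range(256)) * 4)
print(cv_repetitive)  # small ratios: highly compressible input
print(cv_mixed)
```

Inputs with similar structure compress similarly, so distances between CVs can drive both clustering (class formation) and nearest-neighbor recognition.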

14.
Temporal considerations play a key role in the planning and operation of a manufacturing system. The development of a temporal reasoning mechanism would facilitate effective and efficient computer-aided process planning and dynamic scheduling. We feel that a temporal system that makes use of the expressive power of the interval language and the computational ease of the point language will be best suited to reasoning about time within the manufacturing system. The concept of a superinterval, or a collection of intervals, is used to augment a hybrid point-interval temporal system. We have implemented a reasoning algorithm that can be used to aid temporal decision making within the manufacturing environment. Using the quantitative results obtained by measuring our program's performance, we show how the superinterval can be used to partition large temporal systems into smaller ones to facilitate distributed processing of the smaller systems. The distributed processing of large temporal systems helps achieve real-time temporal decision-making capabilities. Such a reasoning system will facilitate automation of the planning and scheduling functions within the manufacturing environment and provide the framework for an autonomous production facility.
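The point-based side of such a hybrid system can be sketched in a few lines: "before" facts propagate by transitive closure, and a superinterval is simply the span of a collection of intervals. Task names and facts below are illustrative.

```python
# Point-algebra sketch: transitive closure of 'a before b' facts, plus a
# superinterval as the span of an interval collection.  Illustrative only.

def closure(before):
    """Transitively close a set of (a, b) meaning 'a before b'."""
    before = set(before)
    changed = True
    while changed:
        changed = False
        for a, b in list(before):
            for c, d in list(before):
                if b == c and (a, d) not in before:
                    before.add((a, d))
                    changed = True
    return before

def superinterval(intervals):
    """Smallest interval covering a collection of intervals."""
    return (min(s for s, _ in intervals), max(e for _, e in intervals))

facts = closure({("cut", "drill"), ("drill", "polish")})
print(("cut", "polish") in facts)        # True: inferred transitively
print(superinterval([(1, 4), (3, 9)]))   # (1, 9)
```

Reasoning about superintervals first, and only descending into their member intervals when needed, is what allows a large system to be partitioned for distributed processing.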

15.
A novel haptic rendering technique using a hybrid surface representation addresses conventional limitations in haptic displays. A haptic interface lets the user touch, explore, paint, and manipulate virtual 3D models in a natural way using a haptic display device. A haptic rendering algorithm must generate a force field to simulate the presence of these virtual objects and their surface properties (such as friction and texture), or to guide the user along a specific trajectory. We can roughly classify haptic rendering algorithms according to the surface representation they use: geometric haptic algorithms for surface data, and volumetric haptic algorithms based on volumetric data including implicit surface representation. Our algorithm is based on a hybrid surface representation - a combination of geometric (B-rep) and implicit (V-rep) surface representations for a given 3D object, which takes advantage of both surface representations.
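The implicit-surface (V-rep) side of such a hybrid renderer can be sketched with a sphere: penetration depth and the surface normal yield a spring-like restoring force F = k · depth · normal. The geometry and stiffness below are illustrative assumptions, not the paper's force model.

```python
# Penalty-force sketch for an implicitly defined sphere: inside the surface,
# push the probe back along the outward normal.  Parameters are illustrative.
import math

def sphere_force(p, center=(0.0, 0.0, 0.0), radius=1.0, k=200.0):
    d = [p[i] - center[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in d))
    depth = radius - dist
    if depth <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)          # outside the surface: no force
    n = [c / dist for c in d]           # outward surface normal
    return tuple(k * depth * c for c in n)

print(sphere_force((0.0, 0.0, 0.9)))    # pushed back along +z
print(sphere_force((0.0, 0.0, 1.5)))    # (0.0, 0.0, 0.0): free space
```

An implicit representation makes the inside/outside test and the normal cheap, which is why V-reps suit the high update rates haptic loops require.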

16.
L. Lopez, D. Trigiante. Calcolo, 1982, 19(4): 379-395
A hybrid scheme is proposed for the numerical solution of a class of hyperbolic PDEs describing the growth process of a population model. We study the stability of this method and the asymptotic behaviour of the numerical solution. Finally, we show some numerical results.
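As a generic illustration of the kind of hyperbolic problem involved (not the authors' scheme), the sketch below applies an explicit upwind step to a McKendrick-type age-structured transport equation u_t + u_a = -mu·u; the grid sizes and mortality rate mu are assumptions.

```python
# Explicit upwind step for u_t + u_a = -mu*u on a uniform age grid.
# dt/da <= 1 keeps the scheme stable (CFL condition).  Illustrative setup.

def step(u, dt, da, mu=0.1):
    """One upwind step; u[0] is held as the boundary (birth) cell."""
    new = u[:]
    for i in range(1, len(u)):
        new[i] = u[i] - dt / da * (u[i] - u[i - 1]) - dt * mu * u[i]
    return new

u = [1.0] * 10
for _ in range(5):
    u = step(u, dt=0.05, da=0.1, mu=0.1)
print(u[1])   # decayed below the initial value 1.0
```

Stability analysis for such schemes amounts to checking that the CFL ratio dt/da and the mortality term keep the update a contraction.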

17.
A hybrid TDMA/random-access multiple-access (HTRAMA) scheme is introduced to provide access-control coordination for a multi-access communication channel. Such a scheme is applicable to a large spectrum of computer communication network applications. Under this hybrid scheme, the system's sources are divided into groups. Sources in different groups are allocated disjoint time slots for their transmissions. Sources within a group share their allocated time slots by transmitting according to a tree random-access policy. The number of groups (and their sizes) is dynamically adjusted to properly (and optimally) match the underlying channel traffic characteristics. In this fashion the hybrid scheme adapts to a random-access structure at lower traffic throughput levels and to a TDMA structure at higher throughput levels. We carry out a detailed delay-throughput analysis of these hybrid schemes under both limited and unlimited source buffer capacities. The hybrid scheme is demonstrated to yield very good delay-throughput performance curves under wide ranges of network traffic statistical fluctuations and spans.
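The grouping idea can be sketched directly: N sources are split into g groups owning disjoint slot sets, so g = 1 degenerates to pure random access and g = N to pure TDMA. The round-robin mapping below is an illustration, not the paper's optimal grouping rule.

```python
# Sketch of HTRAMA-style grouping: each group owns a disjoint slot set;
# sources inside a group contend via random access.  Mapping is illustrative.

def assign_groups(n_sources, n_groups):
    """Round-robin sources into disjoint groups (one slot set per group)."""
    groups = [[] for _ in range(n_groups)]
    for s in range(n_sources):
        groups[s % n_groups].append(s)
    return groups

print(assign_groups(6, 3))  # [[0, 3], [1, 4], [2, 5]]
print(assign_groups(6, 1))  # one group: all sources contend (random access)
print(assign_groups(6, 6))  # one source per group (pure TDMA)
```

Adapting g to the measured traffic load is what moves the scheme along the spectrum between the two extremes.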

18.
Main-memory indices are built on the assumption that RAM is large enough to hold the data. Due to the volatility and high unit price of main memory, indices on secondary storage such as SSDs and HDDs are widely used; however, I/O between main memory and disk remains the bottleneck for query efficiency. In this paper, we propose a self-tuning indexing scheme called Tide-tree for RAM/disk-based hybrid storage systems. Tide-tree aims to overcome the obstacles that main-memory and disk-based indices face, and performs like the tide to achieve a double win in space and performance, self-adapting to the running environment. In particular, Tide-tree delaminates the tree structure adaptively and efficiently based on storage awareness, and applies an effective self-tuning algorithm to dynamically load various nodes into main memory. We employ memory-mapping technology to solve the persistence problem of main-memory indices, improving the efficiency of data synchronization and pointer translation. To further enhance the independence of Tide-tree, we employ an index head and a level address table to manage the whole index. With the index head, three efficient operations are proposed, namely index rebuild, index load, and range search. We have conducted extensive experiments comparing Tide-tree with several state-of-the-art indices, and the results validate its high efficiency, reusability, and stability.

19.
Energy saving is a critical issue in many sensor-network-based applications. Among these, surveillance applications have attracted extensive attention; object tracking in sensor networks (OTSNs) is a typical example. Previous studies on energy saving for OTSNs follow two main approaches: (1) improvements in hardware design to lower the energy consumption of attached components and (2) improvements in software to predict the movement of objects. In this paper, we propose a novel scheme, the hybrid tracking scheme (HTS), for tracking objects energy-efficiently. The scheme consists of two parts: (1) adaptive schedule monitoring and (2) a recovery mechanism that integrates seamless temporal movement patterns with seeding-based flooding to relocate missing objects while saving energy. Furthermore, we propose a frequently-visited-periods mining algorithm, which efficiently discovers the frequently visited periods for adaptive schedule monitoring from the visitation information of sensor nodes. To decrease the number of sensor nodes activated in flooding, a seeding-based flooding mechanism is first proposed in our work. Empirical evaluations under various simulation conditions and on real datasets show that the proposed HTS delivers excellent performance in terms of energy efficiency and low missing rates.

20.
The asynchronous nature of the dataflow model of computation allows the exploitation of maximum inherent parallelism in many application programs. However, before the dataflow model of computation can become a viable alternative to the control flow model, one has to find practical solutions to some problems, such as the efficient handling of data structures. The paper introduces a new model for handling data structures in a dataflow environment. The proposed model combines the constant-time access capabilities of vectors with the flexibility inherent in the concept of pointers. This allows a careful balance between copying and sharing to optimize the storage and processing overhead incurred during operations on data structures. The model is compared by simulation to other data structure models proposed in the literature, and the results are favorable.
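The copying-versus-sharing balance described above can be illustrated with a toy copy-on-write vector: reads are constant-time, sharing is O(1), and an update copies the storage only when it is actually shared. This is a sketch of the general technique, not the paper's exact model.

```python
# Toy copy-on-write vector: constant-time reads like a vector, pointer-like
# sharing, and a copy deferred until a shared instance is written.

class COWVector:
    def __init__(self, data, shared=None):
        self._data = data
        self._refs = shared if shared is not None else [1]  # shared counter

    def share(self):
        self._refs[0] += 1
        return COWVector(self._data, self._refs)   # O(1): shares the storage

    def get(self, i):
        return self._data[i]                       # O(1) access

    def set(self, i, v):
        if self._refs[0] > 1:                      # shared: copy before write
            self._refs[0] -= 1
            self._data = list(self._data)
            self._refs = [1]
        self._data[i] = v

a = COWVector([1, 2, 3])
b = a.share()
b.set(0, 99)
print(a.get(0), b.get(0))  # 1 99 -- the write to b did not disturb a
```

In a dataflow setting this deferral matters because tokens carrying the same structure can be forwarded without eager copies, paying for a copy only on mutation.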
