20 similar documents found (search time: 0 ms)
2.
Multirate systems are abundant in industry. In this paper, we study the design of a residual generator for fault detection based on multirate sampled data. The key new feature of such a residual generator is that it operates at a fast rate for prompt fault detection. The design is based on optimizing a performance index to obtain an optimal parity-space-based residual generator. The lifting technique is used to convert the time-varying multirate design problem into a time-invariant one with a causality constraint for implementability. A procedure for computing an explicit optimal, causal solution is proposed. The advantages of this design are shown through an example.
3.
Message-logging protocols are an integral part of a popular technique for implementing processes that can recover from crash failures. All message-logging protocols require that, when recovery is complete, there be no orphan processes, which are surviving processes whose states are inconsistent with the recovered state of a crashed process. We give a precise specification of the consistency property "no orphan processes". From this specification, we describe how different existing classes of message-logging protocols (namely optimistic, pessimistic, and a class that we call causal) implement this property. We then propose a set of metrics to evaluate the performance of message-logging protocols, and characterize the protocols that are optimal with respect to these metrics. Finally, starting from a protocol that relies on causal delivery order, we show how to derive optimal causal protocols that tolerate f overlapping failures and recoveries for a parameter f (1 ≤ f ≤ n).
4.
Reversible data hiding techniques reduce transmission cost: secret data is embedded into a cover image without increasing its size, in such a way that at the receiving end both the secret data and the cover image can be extracted and recovered to their original form. To further reduce the transmission cost, the secret data can be embedded in compression codes, as done by several popular reversible data hiding schemes. One popular and important reversible data hiding method is the high-performance data-hiding Lempel–Ziv–Welch (HPDH-LZW) scheme, which hides the secret data in LZW codes. In this paper, the HPDH-LZW scheme is modified to increase its hiding capacity and compression ratio. First, the proposed work modifies the Move-to-Front (MTF) encoding technique to hide the secret data and to increase the similarity among the elements of the cover media. Then, LZW encoding is applied to the resultant cover data to obtain LZW codes, which are used to hide further secret data. Experimental results show that the proposed scheme significantly increases the data hiding capacity and has good embedding and extraction speed in comparison to other state-of-the-art schemes.
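To illustrate the Move-to-Front step mentioned in this abstract, here is a minimal MTF encoder/decoder sketch. This is the generic MTF transform, not the authors' modified data-hiding variant:

```python
def mtf_encode(data, alphabet):
    """Move-to-Front: emit each symbol's current table index, then move the
    symbol to the front. Runs of similar symbols map to small indices,
    which improves the compressibility of the subsequent LZW pass."""
    table = list(alphabet)
    out = []
    for sym in data:
        idx = table.index(sym)
        out.append(idx)
        table.pop(idx)
        table.insert(0, sym)
    return out

def mtf_decode(codes, alphabet):
    """Inverse transform: look up each index, emit the symbol, move to front."""
    table = list(alphabet)
    out = []
    for idx in codes:
        sym = table[idx]
        out.append(sym)
        table.pop(idx)
        table.insert(0, sym)
    return out
```

Because MTF is exactly invertible, it preserves the reversibility required by reversible data hiding.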
5.
In this paper, we propose an optimal cache replacement policy for data access applications in wireless networks where data updates are injected from all the clients. The goal of the policy is to increase effective hits in the client caches and, in turn, make efficient use of the network bandwidth in wireless environments. To serve the applications with the most updated data, we also propose two enhanced cache access policies that make copies of data objects strongly consistent. We analytically prove that a cache system, with a combination of our cache access and replacement policies, guarantees the optimal number of effective cache hits and optimal cost (in terms of network bandwidth) per data object access. Results from both analysis and extensive simulations demonstrate that the proposed policies outperform the popular Least Frequently Used (LFU) scheme in terms of both effective hits and bandwidth consumption. Our flexible system model makes the proposed policies equally applicable to applications for the existing 3G, as well as upcoming LTE, LTE Advanced and WiMAX wireless data access networks.
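For reference, the Least Frequently Used baseline that the proposed policies are compared against can be sketched as follows. This is a generic LFU cache, not the paper's optimal policy; breaking ties by recency is an assumption of this sketch:

```python
from collections import defaultdict

class LFUCache:
    """Least-Frequently-Used replacement: on overflow, evict the entry with
    the fewest accesses (ties broken by least recent access)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}               # key -> value
        self.freq = defaultdict(int)  # key -> access count
        self.last_used = {}           # key -> logical time of last access
        self.clock = 0

    def _touch(self, key):
        self.clock += 1
        self.freq[key] += 1
        self.last_used[key] = self.clock

    def get(self, key):
        if key not in self.store:
            return None
        self._touch(key)
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # evict the least frequently used key, least recently used on ties
            victim = min(self.store, key=lambda k: (self.freq[k], self.last_used[k]))
            del self.store[victim], self.freq[victim], self.last_used[victim]
        self.store[key] = value
        self._touch(key)
```

Note that plain LFU ignores update broadcasts entirely, which is why an update-aware policy can achieve more effective hits.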
6.
An eigenproblem-based technique is given for computing the stabilizing scalar static output feedback gain that minimizes the standard linear quadratic performance index in the presence of a Gaussian white-noise disturbance. The method adapts recent work on eigenproblem-based computation of H∞-norms.
7.
An adequate level of trust must be established between prospective partners before an interaction can begin. In asymmetric trust relationships, one of the interacting partners is stronger. The weaker partner can gain a higher level of trust by disclosing private information, and dissemination of sensitive data owned by the weaker partner starts at this moment. The stronger partner can propagate data to others, who may then choose to spread the data further. The proposed scheme for privacy-preserving data dissemination enables control of data by their owner (such as a weaker partner). It relies on the ideas of bundling sensitive data with metadata, an apoptosis of endangered bundles, and an adaptive evaporation of bundles in suspect environments. Possible applications include interactions among patients and healthcare providers, customers and businesses, and researchers and suppliers of their raw data. Such schemes will contribute to providing privacy guarantees, which are indispensable for the realization of the promise of pervasive computing.
8.
An algorithm for optimizing data clustering in feature space is studied in this work. Using the graph Laplacian and an extreme learning machine (ELM) mapping technique, we develop an optimal weight matrix W for feature mapping. This work explicitly maps the original data into an optimal feature space for clustering, which further increases the separability of the original data while points in the same cluster remain closely grouped. Our method, which can be easily implemented, obtains better clustering results than several popular clustering algorithms (k-means on the original data, kernel clustering, spectral clustering, and ELM k-means) on three UCI real-data benchmarks: the Iris, Wisconsin breast cancer, and Wine databases.
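The "k-means on the original data" baseline mentioned above can be sketched in a few lines. This is plain Lloyd's-style k-means; the paper's ELM feature mapping and graph-Laplacian weighting are not reproduced here:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-center assignment and mean updates.
    `points` is a list of equal-length numeric tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # update step: move each center to the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters
```

A learned feature mapping would be applied to `points` before this loop; the clustering step itself stays unchanged.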
9.
Earthnet's plan for a coordinated Advanced Very High Resolution Radiometer (AVHRR) data service in Europe, as outlined by Fusco and Muirhead (1987), is beginning to take shape. Three High Resolution Picture Transmission (HRPT) stations are contributing to the data acquisition (providing coverage from West Africa to the Middle East and over the North Pole), the AVHRR catalogue is on-line, and an autonomous workstation for AVHRR acquisition, processing and archiving to optical disc is in an advanced state of development.
10.
Physical data layout is a crucial factor in the performance of queries and updates in large data warehouses. Data layout enhances and complements other performance features such as materialized views and dynamic caching of aggregated results. Prior work has identified that the multidimensional nature of large data warehouses imposes natural restrictions on the query workload. A method based on a "uniform" query class approach has been proposed for data clustering and shown to be optimal. However, we believe that realistic query workloads will exhibit data access skew. For instance, if time is a dimension in the data model, then more queries are likely to focus on the most recent time interval. The query class approach does not adequately model the possibility of multidimensional data access skew. We propose the affinity graph model for capturing workload characteristics in the presence of access skew and describe an efficient algorithm for physical data layout. Our proposed algorithm considers declustering and load balancing issues which are inherent to the multidisk data layout problem. We demonstrate the validity of this approach experimentally.
11.
The high density of coexisting networks in the Industrial, Scientific and Medical (ISM) band leads to static and self-interference among different communication entities. The inevitability of this interference demands interference avoidance schemes to ensure reliable network operation. This paper proposes a novel Diversified Adaptive Frequency Rolling (DAFR) technique for frequency hopping in Bluetooth piconets. DAFR employs intelligent hopping procedures to mitigate self-interference, weeds out static interferers efficiently, and ensures sufficient frequency diversity. We compare the performance of our proposed technique with the widely used existing frequency hopping techniques, namely Adaptive Frequency Hopping (AFH) and Adaptive Frequency Rolling (AFR). Simulation studies validate the significant improvement in goodput and hopping diversity of our scheme compared to the other schemes and demonstrate its potential benefit in real-world deployment.
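The basic idea behind adaptive hopping schemes of this kind can be sketched as follows: hop pseudo-randomly over the channel map while excluding channels classified as interfered. This is a generic illustration, not DAFR itself (whose hopping rules are not specified in the abstract), and the no-immediate-repeat rule is an assumption of the sketch:

```python
import random

def adaptive_hop_sequence(n_hops, channels, bad_channels, seed=0):
    """Generate a hop sequence over `channels`, skipping channels marked as
    interfered (`bad_channels`) and never dwelling on the same channel twice
    in a row, to preserve frequency diversity."""
    good = [c for c in channels if c not in bad_channels]
    if len(good) < 2:
        raise ValueError("need at least two usable channels")
    rng = random.Random(seed)
    seq, prev = [], None
    for _ in range(n_hops):
        ch = rng.choice([c for c in good if c != prev])
        seq.append(ch)
        prev = ch
    return seq
```

In a real piconet, `bad_channels` would be refreshed continuously from channel-quality measurements rather than fixed up front.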
12.
In this paper, we present an optimal compact finite difference scheme for solving the 2D Helmholtz equation. A convergence analysis is given to show that the scheme is sixth-order in accuracy. Based on minimizing the numerical dispersion, a refined optimization rule for choosing the scheme’s weight parameters is proposed. Numerical results are presented to demonstrate the efficiency and accuracy of the compact finite difference scheme with refined parameters.
13.
Data preparation is an important and critical step in neural network modeling for complex data analysis, and it has a huge impact on the success of a wide variety of complex data analysis tasks, such as data mining and knowledge discovery. Although data preparation is important in neural network data analysis, the existing literature on it is scattered, and there has been no systematic study of data preparation for neural network data analysis. In this study, we first propose an integrated data preparation scheme as a systematic study for neural network data analysis. Within the integrated scheme, a survey of data preparation, focusing on problems with the data and corresponding processing techniques, is provided. Meanwhile, intelligent data preparation solutions to important issues and dilemmas within the integrated scheme are discussed in detail. Subsequently, a cost-benefit analysis framework for this integrated scheme is presented to analyze the effect of data preparation on complex data analysis. Finally, a typical example of complex data analysis from the financial domain is provided to show the application of data preparation techniques and to demonstrate the impact of data preparation on complex data analysis.
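One routine data-preparation step for neural networks is standardizing each input feature, since networks train poorly when features have wildly different scales. A minimal sketch of column-wise z-score standardization (one typical technique, not tied to the paper's specific scheme):

```python
def zscore_columns(rows):
    """Standardize each numeric column to zero mean and unit variance.
    `rows` is a list of equal-length numeric records."""
    cols = list(zip(*rows))
    out_cols = []
    for col in cols:
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        std = var ** 0.5 or 1.0  # guard against constant columns
        out_cols.append([(x - mean) / std for x in col])
    return [list(r) for r in zip(*out_cols)]
```

In practice the mean and standard deviation are computed on the training split only and reused for validation and test data, so no information leaks across splits.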
14.
A video stream consists of bits of imagery and is thus difficult for a machine to perceive at the content level. To access video content, a suitable organization of the video data is critical. This paper proposes a hierarchical structure and a processing scheme for organizing video data to facilitate indexing, browsing and querying. Four layers are distinguished: video program, episode, shot and frame. This hierarchy provides an efficient and flexible structure as well as a compact and meaningful abstraction of the video program. To achieve such an organization, not only the boundary detection of shots and episodes but also the extraction of key frames for shots and the selection of representative shots and frames for episodes are important. Suitable criteria and methods for the above tasks are proposed, and these techniques have been integrated into a workable system. A number of organization experiments using real video data were performed and some results are presented, which show the effectiveness of the proposed organization scheme and techniques.
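Shot-boundary detection, the lowest layer of the hierarchy above, is commonly done by thresholding the change in gray-level histograms between consecutive frames. A minimal sketch of that heuristic (the bin count and threshold are illustrative assumptions, not values from the paper):

```python
def detect_shot_boundaries(frames, bins=8, threshold=0.5):
    """Mark a cut wherever the gray-level histogram changes sharply between
    consecutive frames. `frames` is a list of 2-D lists of pixel
    intensities in [0, 256). Returns indices where a new shot begins."""
    def hist(frame):
        h = [0] * bins
        n = 0
        for row in frame:
            for px in row:
                h[px * bins // 256] += 1
                n += 1
        return [c / n for c in h]

    hists = [hist(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        # L1 distance between normalized histograms lies in [0, 2]
        d = sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i]))
        if d > threshold:
            cuts.append(i)
    return cuts
```

Hard cuts produce a single large spike in this distance; gradual transitions need windowed variants of the same idea.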
15.
Computer architects have constantly looked for new approaches to designing high-performance machines. Data flow and VLSI offer two mutually supportive approaches towards a promising design for future supercomputers. When very high speed computation is needed, data flow machines may be relied upon as an adequate solution in which extremely parallel processing is achieved. This paper presents a formal analysis of data flow machines. Three machines are considered: (1) the MIT static data flow machine; (2) TI's DDP static data flow machine; (3) the LAU data flow machine. These machines are investigated by making use of a reference model. The contributions of this paper include: (1) developing a Data Flow Random Access Machine model (DFRAM), for the first time, to serve as a formal modeling tool; using this model, one can calculate the time cost of various static data flow machines, as well as their performance; (2) constructing a practical Data Flow Simulator (DFS) on the basis of the DFRAM model. The DFS is modular, portable, and straightforward to implement. It is used not only to study the performance of the underlying data flow machines but also to verify the DFRAM model.
16.
Precise global measurements of sea surface temperature (SST) are of great importance for climate research and our ability to model the ocean/atmosphere system. The ATSR instrument is an Announcement of Opportunity experimental package on the ERS-1 satellite, designed to measure global SST with the accuracy (better than 0.5 K) required by modern climate models. The ATSR instrument's ability to meet its demanding performance objectives depends critically upon a number of novel design features. The way in which these features enable ATSR to achieve its measurement objectives is outlined, and the main tasks of the data processing scheme developed for the U.K. Earth Observation Data Centre are described, including, in particular, the ways in which the telemetry data are decoded, the brightness temperature images are geolocated, and the scientific products are derived.
17.
A heuristic static data race detection method for interrupt-driven embedded software is proposed, and a prototype tool, H-RaceChecker, has been developed. Given a program's source code or object code, H-RaceChecker automatically infers information such as interrupt priority state, interrupt enable state, and memory access state, and on that basis identifies possible data races at each program point. A heuristic refinement strategy then ranks the raw analysis results by level of risk, improving the efficiency of manual confirmation. Experiments verify the effectiveness of the method.
19.
We propose an incremental technique for discovering duplicates in large databases of textual sequences, i.e., syntactically different tuples that refer to the same real-world entity. The problem is approached from a clustering perspective: given a set of tuples, the objective is to partition them into groups of duplicate tuples. Each newly arrived tuple is assigned to an appropriate cluster via nearest-neighbor classification. This is achieved by means of a suitable hash-based index that maps any tuple to a set of indexing keys and assigns tuples with high syntactic similarity to the same buckets. Hence, the neighbors of a query tuple can be efficiently identified by simply retrieving those tuples that appear in the same buckets as the query tuple itself, without completely scanning the original database. Two alternative schemes for computing indexing keys are discussed and compared. An extensive experimental evaluation on both synthetic and real data shows the effectiveness of our approach.
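The hash-based indexing idea can be illustrated with character n-grams as indexing keys. This is one plausible key scheme for exposition, not necessarily either of the two schemes the paper compares, and `min_shared` is an illustrative parameter:

```python
from collections import defaultdict

def ngrams(text, n=3):
    """Set of character n-grams of `text`, lowercased."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def build_index(tuples, n=3):
    """Map each n-gram (indexing key) to the bucket of tuple ids containing
    it, so that syntactically similar tuples share many buckets."""
    index = defaultdict(set)
    for tid, text in enumerate(tuples):
        for g in ngrams(text, n):
            index[g].add(tid)
    return index

def candidate_neighbors(query, index, n=3, min_shared=2):
    """Ids of tuples sharing at least `min_shared` buckets with the query:
    the candidates for nearest-neighbor duplicate classification, found
    without scanning the whole database."""
    counts = defaultdict(int)
    for g in ngrams(query, n):
        for tid in index.get(g, ()):
            counts[tid] += 1
    return {tid for tid, c in counts.items() if c >= min_shared}
```

Only the tuples returned by `candidate_neighbors` need a full similarity comparison, which is what makes the approach incremental and scalable.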
20.
A new fourth-order dissipative scheme on a compact 3 × 3 stencil is presented for solving 2D hyperbolic problems. It belongs to the family of previously developed residual-based compact (RBC) schemes and can be considered optimal, since it offers the maximum achievable order of accuracy on the 3 × 3-point stencil. The computation of 2D scalar problems demonstrates the excellent accuracy and efficiency of this new RBC scheme with respect to existing second- and third-order versions.