1.
Mirrored disk systems provide high reliability by multiplexing disks. Performance is improved with parallel reads and shorter read seeks. However, writes must be performed by both disks, limiting performance. We introduce distorted mirrors, a mirroring system which combines write anywhere semantics with traditional database-specified block locations. This technique radically reduces the cost of small writes, making it attractive for random access applications such as OLTP, while retaining the ability to efficiently perform large sequential accesses. Distorted mirrors also scale better than traditional mirrors in terms of both disk caching and large mirrored sets. We show the effectiveness of distorted mirrors on the TP1 benchmark.
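As a rough illustration of the write-anywhere idea (not the paper's actual implementation), the sketch below keeps one copy at the database-specified address and places the mirror copy at whatever free slot is nearest the disk head; the class, names, and free-slot policy are invented for illustration.

```python
# Toy sketch of a distorted mirror: the master copy goes to its fixed,
# database-specified block; the mirror copy is written "anywhere" (here,
# the free slot nearest the head), with a map recording where it lives.
class DistortedMirror:
    def __init__(self, nblocks):
        self.fixed = [None] * nblocks     # traditional, address-ordered copy
        self.anywhere = [None] * nblocks  # write-anywhere copy
        self.mirror_of = {}               # logical block -> slot in anywhere

    def write(self, block, data, head_pos):
        self.fixed[block] = data          # master copy at its home address
        # mirror copy: nearest candidate slot (assumes a free slot exists)
        slot = min((s for s, d in enumerate(self.anywhere)
                    if d is None or self.mirror_of.get(block) == s),
                   key=lambda s: abs(s - head_pos))
        old = self.mirror_of.get(block)
        if old is not None and old != slot:
            self.anywhere[old] = None     # release the stale mirror slot
        self.anywhere[slot] = data
        self.mirror_of[block] = slot

    def read(self, block):
        return self.fixed[block]          # either copy may serve reads
```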
2.
In this paper, we describe an approach for solving the general class of energy-optimal task graph scheduling problems using priced timed automata. We provide an efficient zone-based algorithm for minimum-cost reachability. Furthermore, we show how the simple structure of the linear programs encountered during symbolic minimum-cost reachability analysis of priced timed automata can be exploited in order to substantially improve the performance of the current algorithm. The idea is rooted in duality of linear programs and we show that each encountered linear program can be reduced to the dual problem of an instance of the min-cost flow problem. Experimental results using Uppaal show a 70–80 percent performance gain. We provide priced timed automata models for the scheduling problems and provide experimental results illustrating the potential competitiveness of our approach compared to existing approaches such as mixed integer linear programming. This research was conducted in part at Aalborg University, where the author was supported by a CISS Faculty Fellowship.
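As a concrete illustration of the kind of dual problem each linear program is reduced to, here is a toy min-cost flow instance solved with networkx; the graph, capacities, and costs are illustrative, not drawn from the paper.

```python
# Toy min-cost flow instance of the kind each encountered LP reduces to.
import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-2)   # supply of 2 units (negative demand)
G.add_node("t", demand=2)    # demand of 2 units
G.add_edge("s", "a", capacity=2, weight=3)
G.add_edge("s", "b", capacity=1, weight=1)
G.add_edge("a", "t", capacity=2, weight=1)
G.add_edge("b", "t", capacity=1, weight=4)

flow = nx.min_cost_flow(G)           # dict of dicts: flow[u][v]
cost = nx.cost_of_flow(G, flow)      # total cost: 8 (both units via "a")
print(flow, cost)
```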
3.
Urban traffic control in China currently relies mostly on signal lights with manually configured timings that release traffic on a fixed schedule. This approach cannot monitor and control the actual traffic flow at each intersection in real time, so a new kind of model is needed for real-time monitoring of intersection traffic flow. Applying computer modeling helps design optimized management models, so that signal timings are optimized automatically, traffic is monitored in real time, and congestion is dispersed automatically according to actual conditions, thereby improving traffic congestion to the greatest extent possible.
4.
Conceptual modeling involves the understanding and communication between system analysts and end-users. Many factors may affect the quality of conceptual modeling processes as well as the models per se. Human cognition plays a pivotal role in understanding these factors and cognitive mapping techniques are effective tools to elicit and represent human cognition. In this paper, we look at the use of cognitive mapping techniques to improve the quality of conceptual modeling. We review frameworks on quality in conceptual modeling and examine the role of human cognition in conceptual modeling. The paper also discusses how human cognition is related to quality in conceptual modeling, the various cognitive mapping techniques, and how these cognitive mapping techniques can be used in conceptual modeling. Through a case study, the paper describes ways of incorporating cognitive mapping techniques into a popular systems development methodology—Soft Systems Methodology—to improve the quality of conceptual modeling.
5.
Disease mapping is an important technique for analyzing and monitoring disease occurrence. It is the process of turning geographically distributed disease observations into continuously distributed disease maps using suitable spatial interpolation techniques. This paper introduces ESDA and Kriging techniques and, using real data, demonstrates their combined application to mapping hepatitis B data in China. The analysis shows the following geographic pattern of hepatitis B incidence in China: the northwest around Inner Mongolia and Gansu, parts of the central-south and east China regions, and some northeastern provinces and cities are high-incidence areas; incidence south of the Yangtze River is higher than north of it, and the eastern coast is higher than the western frontier.
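A minimal sketch of the Kriging step, assuming the PyKrige library; the station coordinates and incidence values are synthetic placeholders, not the hepatitis B data analyzed in the paper.

```python
# Ordinary kriging of point observations onto a regular grid,
# in the spirit of the paper's interpolation step.
import numpy as np
from pykrige.ok import OrdinaryKriging

lon = np.array([102.7, 116.4, 121.5, 106.5, 126.6])   # station longitudes
lat = np.array([25.0, 39.9, 31.2, 29.5, 45.7])        # station latitudes
inc = np.array([78.0, 55.0, 62.0, 70.0, 88.0])        # incidence per 100k

grid_lon = np.linspace(100.0, 128.0, 50)
grid_lat = np.linspace(20.0, 48.0, 50)

ok = OrdinaryKriging(lon, lat, inc, variogram_model="spherical")
surface, variance = ok.execute("grid", grid_lon, grid_lat)  # interpolated map
```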
6.
Gesture recognition in computer vision plays an important role in daily life, work, and technology products. In this paper, we use finger valleys, distance mapping, and a triangular method (FVDMTM) to precisely recognize the fingers. FVDMTM adopts three novel ideas. First, we use the finger valleys to distinguish each finger, which is robust against intentional deformation of the fingers. Second, we employ a distance mapping method to effectively detect the valleys between the fingers. Third, we use the center of gravity of the palm as the origin for the angle calculation of each finger by the triangular method, which lets us obtain precise finger angles from the test image. The experimental results demonstrate that our scheme is an effective and correct method for finger recognition.
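A sketch of the angle-calculation step described above, assuming the palm's center of gravity and a fingertip location have already been detected; the coordinates are illustrative.

```python
# Angle of a finger relative to the palm centroid, as in the triangular
# method's angle-calculation step. Valley/tip detection is not reproduced.
import math

def finger_angle(palm_cx, palm_cy, tip_x, tip_y):
    """Angle (degrees) of the fingertip relative to the palm centroid,
    measured counterclockwise from the positive x-axis."""
    return math.degrees(math.atan2(tip_y - palm_cy, tip_x - palm_cx))

print(finger_angle(100, 100, 120, 160))  # ~71.6 degrees
```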
8.
OBJECTIVE: This study examined the effectiveness of rear-end collision warnings presented in different sensory modalities as a function of warning timing in a driving simulator. BACKGROUND: The proliferation of in-vehicle information and entertainment systems threatens driver attention and may increase the risk of rear-end collisions. Collision warning systems have been shown to improve inattentive and/or distracted driver response time (RT) in rear-end collision situations. However, most previous rear-end collision warning research has not directly compared auditory, visual, and tactile warnings. METHOD: Sixteen participants in a fixed-base driving simulator experienced four warning conditions: no warning, visual, auditory, and tactile. The warnings activated when the time-to-collision (TTC) reached a critical threshold of 3.0 or 5.0 s. Driver RT was measured from warning onset to brake initiation. RESULTS: Drivers with a tactile warning had the shortest mean RT. Drivers with a tactile warning had significantly shorter RT than drivers without a warning and had a significant advantage over drivers with visual warnings. CONCLUSION: Tactile warnings show promise as effective rear-end collision warnings. APPLICATION: The results of this study can be applied to the future design and evaluation of automotive warnings designed to reduce rear-end collisions.
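A minimal sketch of the TTC trigger described above: TTC is the gap divided by the closing speed, and a warning fires when it drops below the critical threshold (3.0 or 5.0 s in the study). The values are illustrative.

```python
# Time-to-collision trigger: warn when TTC = gap / closing speed
# falls below the critical threshold.
def time_to_collision(gap_m, closing_speed_mps):
    if closing_speed_mps <= 0:        # not closing: no collision course
        return float("inf")
    return gap_m / closing_speed_mps

def should_warn(gap_m, closing_speed_mps, threshold_s=3.0):
    return time_to_collision(gap_m, closing_speed_mps) <= threshold_s

print(should_warn(40.0, 15.0))  # TTC ~2.7 s -> True at the 3.0 s threshold
```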
9.
This paper is concerned with the implementation of parallel programs on networks of processors. In particular, we study the use of the network augmenting approach as an implementation tool. According to this approach, the capabilities of a given network of processors can be increased by adding some auxiliary links among the processors. We prove that the minimum set of edges needed to augment a line-like network so that it can accommodate a parallel program is determined by an optimal path cover of the graph representation of the program. An optimal path cover of a simple graph G is a set of vertex-disjoint paths that cover all the vertices of G and has the maximum possible number of edges. We present a linear time optimal path covering algorithm for a class of sparse graphs. This algorithm is of special interest since the optimal path covering problem is NP-complete for general graphs. Our results suggest that a cover and augment scheme can be used for optimal implementation of parallel programs in line-like networks. A preliminary version of this paper was presented at the 6th IEEE Conference on Computer Communications (INFOCOM '87). This research is supported in part by National Semiconductor (Israel), Ltd.
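To fix ideas, here is a standard greedy construction of an optimal path cover on a tree, one simple member of the sparse-graph class; this is an illustrative textbook construction, not the paper's algorithm for line-like networks.

```python
# Greedy optimal path cover on a tree: in post-order, each vertex joins
# at most two children whose paths are still extendable (a path can use
# at most two edges at any vertex), maximizing the number of edges used.
def tree_path_cover_edges(adj, root=0):
    order, parent = [], {root: None}
    stack = [root]
    while stack:                        # iterative DFS (pre-order)
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v != parent[u]:
                parent[v] = u
                stack.append(v)
    used_edges = 0
    extendable = {}                     # node -> can its path grow upward?
    for u in reversed(order):           # children are processed before u
        open_children = [v for v in adj[u]
                         if v != parent[u] and extendable[v]]
        k = min(2, len(open_children))  # a path uses <= 2 edges at u
        used_edges += k
        extendable[u] = (k < 2)         # joining 2 children closes the path
    return used_edges                   # paths in cover = n - used_edges

adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(tree_path_cover_edges(adj))       # 3 edges -> 5 - 3 = 2 paths
```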
10.
The ability to present high priority warning information in both the auditory and visual modalities simultaneously has required consideration to be given to their integration. This paper describes an experiment that examined whether performance gains could be achieved by the presentation of up to four sources of concurrent, congruent information, within the applied setting of aircraft Missile Approach Warnings (MAW). It was found that four sources of information produced significantly faster responses than three sources (p < 0.01). Three sources of information produced significantly faster responses than two sources (p < 0.01). Two sources of information, in turn, produced significantly faster responses (p < 0.01) than single sources. The implications of these results for the design of time-critical warning systems are discussed.
11.
An example is provided from biostatistics to show that transformation to a unit normal deviate can be inadvisable. This is used to motivate a definition of commensurable units of measurement. An argument is presented to show that a unit cell size can be determined that is termed the 'least distinguishable difference'. The recommended commensurable unit is obtained by rescaling the least distinguishable difference to unity. Thus, the data are transformed to integer-valued variates. Since the transformed data are integer-valued and optimally ranged for integer accumulation, integer accumulation is recommended. On most computers integer overflow is detectable; therefore, in the absence of overflow, one is assured that the intervening computation is correct. A few concluding remarks are appended.
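A minimal sketch of the recommended transformation, assuming an illustrative least distinguishable difference (LDD) of 0.05.

```python
# Rescale the least distinguishable difference to unity, yielding
# integer-valued variates that can be accumulated exactly.
data = [1.25, 3.70, 2.15, 5.90]        # raw measurements
ldd = 0.05                             # least distinguishable difference

ints = [round(x / ldd) for x in data]  # commensurable integer variates
total = sum(ints)                      # exact integer accumulation

# Python ints never overflow; in fixed-width languages an overflow check
# here would certify, as the paper notes, that the accumulation is correct.
print(ints, total, total * ldd)        # recover the sum in original units
```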
12.
The use of reversible contrast mapping (RCM) on pairs of pixels in images for reversible watermarking (RW) offers a flexible integer transform that leads to a high embedding rate at low mathematical complexity and without requiring additional data compression. Mathematically, RCM may be interpreted as a form of adaptive linear transformation on pairs of pixels that controls distortion and leads to the retention of the structural information of the watermarked image. A generalized form of RCM, analogous to M-ary modulation in communication, is developed here for a set of points, and the corresponding embedding spaces are shown geometrically with and without distortion control. Then an optimized distortion control framework, adaptive to the choice of operating points, is considered to improve data hiding capacity under an embedding distortion constraint. Simulation results show that the combination of different M-ary approaches, i.e., using points that represent different transformation functions, outperforms the existing RCM, difference expansion (DE), and prediction error expansion (PEE) methods in embedding rate, visual quality, and security of the hidden information during over-embedding. Numerical results show that an average improvement of 13% in visual quality and 25% in security of the hidden data is achieved at a 0.8 bpp embedding rate over the existing PEE work. Robustness against common signal processing operations, namely noise addition, smoothing filtering, and some geometric operations such as the random bending attack, is also studied. All these studies and the effectiveness of the method are demonstrated with a large set of simulation results.
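For reference, the basic RCM transform from the literature maps a pixel pair (x, y) to (2x - y, 2y - x); since 2x' + y' = 3x and x' + 2y' = 3y, the inverse is exact. The sketch below checks this invertibility, omitting the range control and bit-embedding protocol.

```python
# Core RCM integer transform on a pixel pair and its exact inverse;
# range checks keeping values in [0, 255] and the actual embedding
# protocol are omitted from this sketch.
def rcm_forward(x, y):
    return 2 * x - y, 2 * y - x

def rcm_inverse(xp, yp):
    # 2*xp + yp == 3x and xp + 2*yp == 3y, so the division is exact
    return (2 * xp + yp) // 3, (xp + 2 * yp) // 3

pair = (120, 100)
assert rcm_inverse(*rcm_forward(*pair)) == pair  # perfectly reversible
```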
13.
This paper investigates the use of the so-called up to techniques for bisimulation in the framework of verification. We first introduce a method to exploit these techniques, which originate in theoretical work, in the applied context of verification. We then apply it to the π-calculus, in order to design an up to bisimulation verification algorithm for π-calculus terms. The outcome of such an effort is evaluated on two examples that were run on a prototype implementation. Published online: 18 July 2001
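For orientation, a plain (not "up to") bisimilarity check on a finite labelled transition system can be computed as a greatest fixpoint by iterated refinement, as in the naive sketch below; the paper's contribution is the far more economical up-to variant for π-calculus terms, which is not reproduced here.

```python
# Naive greatest-fixpoint computation of bisimilarity on a finite LTS.
def bisimilar(states, trans):
    """trans: iterable of (source, label, target) triples."""
    succ = {}                                  # (state, label) -> targets
    for s, a, t in trans:
        succ.setdefault((s, a), set()).add(t)
    labels = {a for _, a, _ in trans}
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:                             # refine until a fixpoint
        changed = False
        for (p, q) in list(rel):
            for a in labels:
                ps = succ.get((p, a), set())
                qs = succ.get((q, a), set())
                # every move of p must be matched by a related move of q,
                # and vice versa
                if (any(all((t, u) not in rel for u in qs) for t in ps) or
                        any(all((u, t) not in rel for u in ps) for t in qs)):
                    rel.discard((p, q))
                    changed = True
                    break
    return rel

# Two processes that each do 'a' once and stop: they are bisimilar.
states = {"p0", "p1", "q0", "q1"}
trans = [("p0", "a", "p1"), ("q0", "a", "q1")]
print(("p0", "q0") in bisimilar(states, trans))  # True
```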
14.
We study limits for the detection and estimation of weak sinusoidal signals in the primary part of the mammalian auditory system using a stochastic FitzHugh-Nagumo model and an action-recovery model for synaptic depression. Our overall model covers the chain from a hair cell to a point just after the synaptic connection with a cell in the cochlear nucleus. The information processing performance of the system is evaluated using so-called phi-divergences from statistics that quantify "dissimilarity" between probability measures and are intimately related to a number of fundamental limits in statistics and information theory (IT). We show that there exists a set of parameters that can optimize several important phi-divergences simultaneously and that this set corresponds to a constant quiescent firing rate (QFR) of the spiral ganglion neuron. The optimal value of the QFR is frequency dependent but is essentially independent of the amplitude of the signal (for small amplitudes). Consequently, optimal processing according to several standard IT criteria can be accomplished for this model if and only if the parameters are "tuned" to values that correspond to one and the same QFR. This offers a new explanation for the QFR and can provide new insight into the role played by several other parameters of the peripheral auditory system.
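A minimal Euler-Maruyama simulation of a stochastic FitzHugh-Nagumo neuron driven by a weak sinusoid, in the spirit of the model class studied above; the parameter values are illustrative, not the paper's, and the synaptic-depression stage is omitted.

```python
# Stochastic FitzHugh-Nagumo neuron with weak sinusoidal drive,
# integrated by the Euler-Maruyama method.
import numpy as np

def fhn_em(T=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08,
           amp=0.05, freq=0.1, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v, w = -1.0, -0.5
    vs = np.empty(n)
    for i in range(n):
        stim = amp * np.sin(2 * np.pi * freq * i * dt)  # weak sinusoid
        dv = v - v**3 / 3 - w + stim                    # fast variable
        dw = eps * (v + a - b * w)                      # recovery variable
        v += dv * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        w += dw * dt
        vs[i] = v
    return vs

v = fhn_em()
spikes = int(np.sum(np.diff((v > 1.0).astype(int)) == 1))
print(spikes)   # number of upward threshold crossings ("spikes")
```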
15.
Two line-detection algorithms using best-first and depth-first search techniques, and another algorithm that detects lines by joining line segments based on the principle of A*, are proposed. The algorithms are suitable for detecting lines in low-resolution, low-contrast images. Shortcomings of conventional approaches, including tracking, are discussed, and the advantage of the proposed techniques over tracking is illustrated. Results of the proposed algorithms are compared with one another. The advantage of these AI-based techniques is that lines of different confidence value (strength) and with at least a certain length can be detected, hence the effect of noise is greatly reduced.
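A generic skeleton of the best-first strategy the algorithms build on: partial line hypotheses are expanded in order of confidence. The confidence, successor, and completion tests are placeholders for the image-dependent logic described in the paper.

```python
# Best-first (A*-style) expansion of partial line hypotheses.
import heapq
import itertools

def best_first_line(start, successors, confidence, is_complete, min_length):
    tie = itertools.count()                  # avoids comparing states on ties
    heap = [(-confidence(start), next(tie), start)]
    while heap:
        neg_conf, _, line = heapq.heappop(heap)
        if is_complete(line) and len(line) >= min_length:
            return line, -neg_conf           # detected line and its strength
        for nxt in successors(line):         # extend by joining segments
            heapq.heappush(heap, (-confidence(nxt), next(tie), nxt))
    return None, 0.0
```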
16.
Virtualization is the cornerstone of the developing third-party compute industry, allowing cloud providers to instantiate multiple virtual machines (VMs) on a single set of physical resources. Customers utilize cloud resources alongside unknown and untrusted parties, creating the co-resident threat—unless perfect isolation is provided by the virtual hypervisor, there exists the possibility for unauthorized access to sensitive customer information through the exploitation of covert side channels. This paper presents co-resident watermarking, a traffic analysis attack that allows a malicious co-resident VM to inject a watermark signature into the network flow of a target instance. This watermark can be used to exfiltrate and broadcast co-residency data from the physical machine, compromising isolation without reliance on internal side channels. As a result, our approach is difficult to defend against without costly underutilization of the physical machine. We evaluate co-resident watermarking under a large variety of conditions, system loads and hardware configurations, from a local laboratory environment to production cloud environments (Futuregrid and the University of Oregon's ACISS). We demonstrate the ability to initiate a covert channel of 4 bits per second, and we can confirm co-residency with a target VM instance in less than 10 s. We also show that passive load measurement of the target and subsequent behavior profiling is possible with this attack. We go on to consider the detectability of co-resident watermarking, extending our scheme to create a subtler watermarking attack by imitating legitimate cloud customer behavior. Our investigation demonstrates the need for the careful design of hardware to be used in the cloud.
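A toy simulation of the watermarking idea: the flooder modulates shared traffic in fixed bit periods and the observer decodes by thresholding per-period throughput. Rates, noise, and thresholds are invented for illustration; real decoding must contend with cross-traffic.

```python
# On-off watermark injected into a throughput trace and decoded
# by per-period mean thresholding.
import random

def inject(bits, samples_per_bit=10, base=100.0, boost=40.0):
    trace = []
    for b in bits:
        for _ in range(samples_per_bit):
            noise = random.gauss(0, 5.0)               # cross-traffic noise
            trace.append(base + (boost if b else 0.0) + noise)
    return trace

def decode(trace, samples_per_bit=10, base=100.0, boost=40.0):
    bits = []
    for i in range(0, len(trace), samples_per_bit):
        period = trace[i:i + samples_per_bit]
        mean = sum(period) / len(period)
        bits.append(1 if mean > base + boost / 2 else 0)
    return bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
print(decode(inject(msg)) == msg)   # True with high probability
```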
17.
Analogies with molecular biology are frequently used to guide the development of artificial evolutionary search. A number of assumptions are made in using such reasoning, chief among these is that evolution in natural systems is an optimal, or at least best available, search mechanism, and that a decoupling of search space from behaviour encourages effective search. In this paper, we explore these assumptions as they relate to evolutionary algorithms, and discuss philosophical foundations from which an effective evolutionary search can be constructed. This framework is used to examine grammatical evolution (GE), a popular search method that draws heavily upon concepts from molecular biology. We identify several properties in GE that are in direct conflict with those that promote effective evolutionary search. The paper concludes with some recommendations for designing representations for effective evolutionary search.
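For readers unfamiliar with GE, its core genotype-to-phenotype mapping picks, for each integer codon, a production for the leftmost nonterminal modulo the number of alternatives; the toy grammar below is illustrative.

```python
# Grammatical evolution's genotype-to-phenotype mapping:
# codon % (number of productions) selects each derivation step.
grammar = {
    "<expr>": [["<expr>", "+", "<expr>"], ["x"], ["1"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    seq, i, wraps = [start], 0, 0
    while any(s in grammar for s in seq):
        if i >= len(genome):                 # wrap the genome if exhausted
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None                  # mapping failed
        idx = next(k for k, s in enumerate(seq) if s in grammar)
        options = grammar[seq[idx]]
        choice = options[genome[i] % len(options)]
        seq[idx:idx + 1] = choice            # expand leftmost nonterminal
        i += 1
    return " ".join(seq)

print(ge_map([0, 1, 2]))  # <expr> -> <expr>+<expr> -> x+<expr> -> "x + 1"
```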
18.
Grip and push forces, also called coupling forces, influence the transmission of vibration into the upper limb. The assessment of vibration exposure with powered tools thus requires that these man/machine coupling parameters be controlled and monitored. To date, no reliable metrological system enables their precise measurement. This study first investigated how much precision could be expected from the pressure mapping technique for the determination of coupling forces by means of numerical integration. Then a specific procedure was worked out and validated to instrument hand-held tools and measure the coupling forces in accordance with the appropriate current standards. The proposed method was applied as a case study on an ordinary breaker and an anti-vibration breaker.
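A sketch of the numerical-integration step: the coupling force is approximated by summing per-cell pressure times cell area over the sensor map. The 4x4 map and cell size are illustrative.

```python
# Coupling force from a pressure map: F ~= sum(p_ij) * cell_area.
import numpy as np

pressure_kpa = np.array([[12.0, 15.0, 14.0, 9.0],
                         [18.0, 25.0, 22.0, 11.0],
                         [16.0, 23.0, 20.0, 10.0],
                         [8.0, 12.0, 11.0, 7.0]])    # per-cell pressure, kPa
cell_area_m2 = (5e-3) ** 2                           # 5 mm x 5 mm cells

force_n = np.sum(pressure_kpa * 1e3) * cell_area_m2  # kPa -> Pa, F = p * A
print(f"{force_n:.2f} N")                            # ~5.83 N for this map
```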
19.
Psychophysical research on text legibility has historically investigated factors such as size, colour and contrast, but there has been relatively little direct empirical evaluation of typographic design itself, particularly in the emerging context of glance reading. In the present study, participants performed a lexical decision task controlled by an adaptive staircase method. Two typefaces, a 'humanist' and 'square grotesque' style, were tested. Study I examined positive and negative polarities, while Study II examined two text sizes. Stimulus duration thresholds were sensitive to differences between typefaces, polarities and sizes. Typeface also interacted significantly with age, particularly for conditions with higher legibility thresholds. These results are consistent with previous research assessing the impact of the same typefaces on interface demand in a simulated driving environment. This simplified methodology of assessing legibility differences can be adapted to investigate a wide array of questions relevant to typographic and interface designs. Practitioner Summary: A method is described for rapidly investigating relative legibility of different typographical features. Results indicate that during glance-like reading induced by the psychophysical technique and under the lighting conditions considered, humanist-style type is significantly more legible than a square grotesque style, and that black-on-white text is significantly more legible than white-on-black.
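A sketch of an adaptive staircase of the kind used to estimate duration thresholds; the simple 1-up/1-down rule, step size, and reversal-averaging here are illustrative, since the abstract does not specify the study's exact rule.

```python
# Adaptive staircase: stimulus duration decreases after correct lexical
# decisions and increases after errors; the threshold is estimated from
# the durations at the final reversals.
def staircase(respond, start_ms=300.0, step_ms=20.0, trials=40):
    duration, reversals, last_dir = start_ms, [], 0
    for _ in range(trials):
        correct = respond(duration)            # run one trial
        direction = -1 if correct else +1      # harder after success
        if last_dir and direction != last_dir:
            reversals.append(duration)         # track reversal points
        duration = max(10.0, duration + direction * step_ms)
        last_dir = direction
    tail = reversals[-6:]                      # average the last reversals
    return sum(tail) / max(1, len(tail))

# Deterministic "subject" who reads correctly above 120 ms:
print(staircase(lambda d: d > 120.0))          # converges near 130 ms
```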
20.
Traces are everywhere, from information systems that store their continuous executions to any type of health-care application that records each patient's history. The transformation of a set of traces into a mathematical model that can be used for formal reasoning is therefore of great value. The discovery of process models from traces is an interesting problem that has received significant attention in recent years. This is a central problem in Process Mining, a novel area that tries to close the cycle between system design and validation by resorting to methods for the automated discovery, analysis and extension of process models. In this work, algorithms for the derivation of a Petri net from a set of traces are presented. The methods are grounded in the theory of regions, which maps a model in the state-based domain (e.g., an automaton) into a model in the event-based domain (e.g., a Petri net). When dealing with large examples, a direct application of the theory of regions suffers from two problems. The first is the state-explosion problem: the resources required by algorithms that work at the state level are sometimes prohibitive. This paper introduces decomposition and projection techniques to alleviate the complexity of the region-based algorithms for Petri net discovery, thus extending their applicability to handle large inputs. The second problem is known as the overfitting problem for region-based approaches, which informally means that, in order to represent the trace set with high accuracy, the models obtained are often spaghetti-like. By focusing on a special class of processes called conservative, for which an elegant theory and efficient algorithms can be devised, the techniques presented in this paper alleviate the overfitting problem and moreover incorporate structure into the models generated.
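As a minimal illustration of the state-based side of this pipeline, the sketch below folds a set of traces into a prefix-tree transition system, the kind of object on which the theory of regions is then applied; region computation itself is not reproduced.

```python
# Fold traces into a prefix-tree transition system (states = prefixes,
# arcs = events), the state-based input to region-based discovery.
def prefix_transition_system(traces):
    states, trans = {(): 0}, []             # prefix -> state id, labeled arcs
    for trace in traces:
        prefix = ()
        for event in trace:
            nxt = prefix + (event,)
            if nxt not in states:
                states[nxt] = len(states)
            trans.append((states[prefix], event, states[nxt]))
            prefix = nxt
    return states, set(trans)

traces = [("a", "b", "c"), ("a", "c", "b")]
states, trans = prefix_transition_system(traces)
print(len(states), sorted(trans))   # 6 states, 5 distinct labeled arcs
```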