Found 20 similar documents; search took 15 ms.
1.
The radiometric response of a camera governs the relationship between the light incident on the camera sensor and the output pixel values that are produced. This relationship, which is typically unknown and nonlinear, needs to be estimated for applications that require accurate measurement of scene radiance. Various camera response recovery algorithms have been proposed, each with different merits and drawbacks, but no evaluation study comparing these algorithms has been presented. In this work, we aim to fill this gap by conducting a rigorous experiment that evaluates the selected algorithms with respect to three metrics: consistency, accuracy, and robustness. In particular, we seek answers to the following four questions: (1) Which camera response recovery algorithm gives the most accurate results? (2) Which algorithm produces the camera response most consistently for different scenes? (3) Which algorithm performs better under varying degrees of noise? (4) Does the sRGB assumption hold in practice? Our findings indicate that Grossberg and Nayar's (GN) algorithm (2004 [1]) is the most accurate; Mitsunaga and Nayar's (MN) algorithm (1999 [2]) is the most consistent; and Debevec and Malik's (DM) algorithm (1997 [3]), together with MN, is the most resistant to noise. We also find that the studied algorithms are not statistically better than one another in terms of accuracy, although all of them statistically outperform the sRGB assumption. By answering these questions, we aim to help researchers and practitioners in the high dynamic range (HDR) imaging community make better-informed choices when selecting an algorithm for camera response recovery.
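The sRGB assumption tested in question (4) amounts to substituting a fixed, known nonlinearity for the camera's true response. As a minimal sketch (an illustration of that baseline, not any of the evaluated recovery algorithms), the inverse of the standard sRGB encoding maps a pixel value in [0, 1] back to an estimate of linear radiance:

```python
def srgb_to_linear(c: float) -> float:
    """Invert the standard sRGB encoding for a normalized value in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92                      # linear segment near black
    return ((c + 0.055) / 1.055) ** 2.4       # power-law segment

```

For example, an encoded mid-grey value of 0.5 decodes to roughly 0.21 in linear radiance, which is why applying the wrong response curve distorts radiance estimates so strongly.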
2.
3.
In a hot-standby replication system, the system can no longer process its tasks once all replicated nodes have failed. The remaining live nodes should therefore be well protected against failure when some of the replicated nodes have already failed. Design faults and system-specific weaknesses may cause chain reactions of common faults on identical replicated nodes in replication systems. These can be alleviated by replicating diverse hardware and software. Going one step further, failures on the remaining nodes can be suppressed by predicting and preventing the same fault once it has occurred on a replicated node. In this paper, we propose a fault avoidance scheme that increases system dependability by avoiding common faults on the remaining nodes when some nodes fail, and we analyze the resulting system dependability.
4.
Recent advances in public sector open data and online mapping software are opening up new possibilities for interactive mapping in research applications. Increasingly there are opportunities to develop advanced interactive platforms with exploratory and analytical functionality. This paper reviews tools and workflows for the production of online research mapping platforms, alongside a classification of the interactive functionality that can be achieved. A series of mapping case studies from government, academia and research institutes are reviewed. The conclusions are that online cartography's technical hurdles are falling due to open data releases, open source software and cloud services innovations. The data exploration functionality of these new tools is powerful and complements the emerging fields of big data and open GIS. International data perspectives are also increasingly feasible. Analytical functionality for web mapping is currently less developed, but promising examples can be seen in areas such as urban analytics. For more presentational research communication applications, there has been progress in story-driven mapping drawing on data journalism approaches that are capable of connecting with very large audiences.
5.
Murphy A.L., Roman G.-C., Varghese G. 《IEEE Transactions on Software Engineering》, 2002, 28(5): 433-448
As computing components get smaller and people become accustomed to having computational power at their disposal at any time, mobile computing is developing as an important research area. One of the fundamental problems in mobility is maintaining connectivity through message passing as the user moves through the network. One approach is to have a single home node constantly track the current location of the mobile unit and forward messages to this location. A problem with this approach is that, during the update to the home agent after movement, messages are often dropped, especially in the case of frequent movement. In this paper, we present a new algorithm which uses a home agent but maintains information regarding a subnet within which the mobile unit must be present. We also present a reliable message delivery algorithm which is superimposed on the region maintenance algorithm. Our strategy is based on ideas from diffusing computations as first proposed by Dijkstra and Scholten. Finally, we present a second algorithm which limits the size of the subnet by keeping only a path from the home node to the mobile unit.
6.
A fault-tolerant architectural approach for dependable systems
A system's structure enables it to generate its intended behavior from its components' behavior. A well-structured system simplifies relationships among components, which can increase dependability. With software systems, the architecture is an abstraction of the structure. Architectural reasoning about dependability has become increasingly important because emerging applications are increasingly complex. We've developed an architectural approach for effectively representing and analyzing fault-tolerant software systems. The proposed solution relies on exception handling to tolerate faults associated with component and connector failures, architectural mismatches, and configuration faults. Our approach, a specialization of the peer-to-peer architectural style, hides inside the architectural elements the complexities of exception handling and propagation. Our goal is to improve a system's overall reliability and availability by making it tolerant of nonmalicious faults.
7.
Digital management of multiple robust identities is a crucial issue in developing the next generation of distributed applications. Our daily activities increasingly rely on remote resources and services - specifically, on interactions between different, remotely located parties. Because these parties might (and sometimes should) know little about each other, digital identities - electronic representations of individuals' or organizations' sensitive information - help introduce them to each other and control the amount of information transferred. In its broadest sense, identity management encompasses definitions and life-cycle management for digital identities and profiles, as well as environments for exchanging and validating such information. Digital identity management - especially support for identity dependability and multiplicity - is crucial for building and maintaining trust relationships in today's globally interconnected society. We investigate the problems inherent in identity management, emphasizing the requirements for multiplicity and dependability. We enable a new generation of advanced MDDI services on the global information infrastructure.
8.
Cicirello V., Peysakhov M., Anderson G., Naik G., Tsang K., Regli W., Kam M. 《IEEE Intelligent Systems》, 2004, 19(5): 39-45
A mobile ad hoc network (MANET) is a wireless network of mobile devices - such as PDAs, laptops, cell phones, and other lightweight, easily transportable computing devices - in which each node can act as a router for network traffic rather than relying on fixed networking infrastructure. As mobile computing becomes ubiquitous, MANETs become increasingly important. As a design paradigm, multiagent systems (MASs) can help facilitate and coordinate ad hoc scenarios that might include security personnel, rescue workers, police officers, firefighters, and paramedics. On this network, mobile agents perform critical functions that include delivering messages, monitoring resource usage on constrained mobile devices, assessing network traffic patterns, analyzing host behaviors, and revoking access rights for suspicious hosts and agents. Agents can operate effectively in such environments if they are environment aware - if they can sense and reason about their complex and dynamic environments. Altogether, agents living on a MANET must be network, information, and performance aware. This article fleshes out how we apply this approach to populations of mobile agents on a live MANET.
9.
Enrico Natalizio, Pasquale Pace, Francesca Guerriero, Antonio Violi 《Journal of Parallel and Distributed Computing》, 2010
In the last few years, several different mesh network architectures have been conceived by both industry and academia; however, many issues concerning the deployment of efficient and fair transport protocols remain open. One of these issues is rate adaptation, that is, how to allocate network resources among multiple flows while minimizing performance overhead. To address this problem, we first define an analytical framework for a very simple topology. The model allows us to study the performance of an adaptive and responsive transport protocol when the effects of the lower layers are ignored. The mathematical approach alone does not represent a feasible solution, but it helps determine the strengths and weaknesses of our proposal. The main novelty of the proposed solution is that the congestion control approach is based on a hop-by-hop mechanism, which allows nodes to adapt their transmission rates in a distributed way and to track dynamic multi-hop network characteristics in a responsive manner. This contrasts with classical solutions in the literature, which are founded on end-to-end support. Nevertheless, to ensure reliability, a coarse-grained end-to-end algorithm is integrated with the proposed hop-by-hop congestion control mechanism to provide packet-level reliability at the transport layer. Performance evaluation, via extensive simulation experiments, shows that the proposed protocol achieves high network throughput.
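The hop-by-hop idea can be illustrated with a toy back-pressure model (an assumption for illustration only, not the authors' protocol): each node on a chain additively increases its sending rate, but never exceeds its own link capacity or the rate its downstream neighbour currently sustains.

```python
def hop_by_hop_rates(capacities, rounds=100, alpha=0.1):
    """Toy hop-by-hop rate adaptation on a chain of nodes.

    Each round, every node (processed downstream-first) additively
    increases its rate, clamps it to its own link capacity, and then
    clamps it to its downstream neighbour's current rate (back-pressure).
    """
    n = len(capacities)
    rates = [0.0] * n
    for _ in range(rounds):
        for i in range(n - 1, -1, -1):
            rates[i] = min(rates[i] + alpha, capacities[i])  # additive increase
            if i + 1 < n:
                rates[i] = min(rates[i], rates[i + 1])       # back-pressure
    return rates
```

On a chain with link capacities [5, 2, 3], the source settles at the bottleneck rate of 2 without any end-to-end signalling, which is the distributed behaviour the abstract describes.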
10.
11.
There has been significant progress in automated verification techniques based on model checking. However, scalable software model checking remains a challenging problem. We believe that this problem can be addressed using a design for verification approach based on design patterns that facilitate scalable automated verification. In this paper, we present a design for verification approach for highly dependable concurrent programming using a design pattern for concurrency controllers. A concurrency controller class consists of a set of guarded commands defining a synchronization policy, and a stateful interface describing the correct usage of the synchronization policy. We present an assume-guarantee style modular verification strategy which separates the verification of the controller behavior from the verification of the conformance to its interface. This allows us to execute the interface and behavior verification tasks separately using specialized verification techniques. We present a case study demonstrating the effectiveness of the presented approach.
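A guarded-command concurrency controller of this kind can be sketched as follows (a hypothetical miniature for a single mutex, not the paper's actual framework or its stateful-interface machinery): each action runs only when its guard holds, with guard evaluation and the action executed atomically under a condition variable.

```python
import threading

class MutexController:
    """Minimal guarded-command controller: guard/action pairs are
    evaluated atomically; callers block until their guard holds."""

    def __init__(self):
        self._busy = False
        self._cond = threading.Condition()

    def _guarded(self, guard, action):
        with self._cond:
            while not guard():          # wait until the guard holds
                self._cond.wait()
            action()                    # run the command atomically
            self._cond.notify_all()     # re-evaluate other guards

    def acquire(self):
        self._guarded(lambda: not self._busy,
                      lambda: setattr(self, "_busy", True))

    def release(self):
        self._guarded(lambda: self._busy,
                      lambda: setattr(self, "_busy", False))
```

The correct-usage interface the paper verifies against would, in this miniature, be the rule that each thread alternates acquire and release calls.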
12.
The Journal of Supercomputing - Both data shuffling and cache recovery are essential parts of the Spark system, and they directly affect Spark parallel computing performance. Existing dynamic...
13.
Visual measurements of modeled 3-D landmarks provide strong constraints on the location and orientation of a mobile robot. To make the landmark-based robot navigation approach widely applicable, it is necessary to automatically build the landmark models. A substantial amount of effort has been invested by computer vision researchers over the past 10 years on developing robust methods for computing 3-D structure from a sequence of 2-D images. However, robust computation of 3-D structure, with respect to even small amounts of input image noise, has remained elusive. The approach adopted in this article is one of model extension and refinement. A partial model of the environment is assumed to exist and this model is extended over a sequence of frames. As will be shown in the experiments, prior knowledge of the small partial model greatly enhances the robustness of the 3-D structure computations. The initial 3-D model may have errors and these are also refined over the sequence of frames. © 1992 John Wiley & Sons, Inc.
14.
Neeraj Suri, Arshad Jhumka, András Pataricza, Constantin Sârbu 《Journal of Systems and Software》, 2010, 83(10): 1780-1800
Embedded systems increasingly entail complex issues of hardware-software (HW-SW) co-design. As the number and range of SW functional components typically exceed the finite HW resources, a common approach is resource sharing (i.e., the deployment of diverse SW functionalities onto the same HW resources). Consequently, a meaningful co-design solution needs to factor in processing capability, power, communication bandwidth, precedence relations, real-time deadlines, space, and cost. As SW functions of diverse criticality (e.g., brake control and infotainment functions) get integrated, an explicit integration requirement is to plan resource sharing carefully so that faults in low-criticality functions do not affect higher-criticality functions. Against this background, the main contribution of this paper is a dependability-driven framework that helps conduct the integration of SW components onto HW resources such that the maintenance of system dependability over the integration of diverse-criticality components is assured by design. We first develop a clustering strategy for SW components into Fault Containment Modules (FCMs) such that error propagation via interaction is minimized. Subsequently, the rules of composition for FCMs with respect to error propagation are developed. To allocate the resulting FCMs to the existing HW resources we provide several heuristics, each optimizing particular attributes thereof. Further, a framework for assessing the goodness of the achieved HW-SW composition as a dependable embedded system is presented. Two new techniques for quantifying the goodness of the proposed mappings are introduced by examples, both based on a multi-criteria decision-theoretic approach.
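The allocation step can be illustrated with a toy first-fit heuristic (an assumption for illustration, not one of the paper's heuristics): an FCM fits on a HW node only if capacity remains and the node does not already host FCMs of a different criticality level, enforcing the criticality separation described above.

```python
def allocate_fcms(fcms, nodes):
    """Toy first-fit allocation of Fault Containment Modules to HW nodes.

    fcms:  list of (name, resource_demand, criticality) tuples
    nodes: dict mapping node name -> resource capacity
    A node may only host FCMs of a single criticality level.
    """
    placement = {}
    load = {n: 0.0 for n in nodes}
    crit = {n: None for n in nodes}
    for name, size, level in fcms:
        for n in nodes:
            if load[n] + size <= nodes[n] and crit[n] in (None, level):
                placement[name] = n        # first node that fits wins
                load[n] += size
                crit[n] = level
                break
        else:
            raise ValueError(f"no node can host {name}")
    return placement
```

With two nodes of capacity 1.0, two high-criticality functions share one node while a low-criticality function is forced onto the other, so a fault in the latter cannot propagate via shared hardware.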
15.
Daniël Reijsbergen, Pieter-Tjerk de Boer, Werner Scheinhardt, Boudewijn Haverkort 《Performance Evaluation》, 2012, 69(7-8): 336-355
Probabilistic model checking has been used recently to assess, among other things, dependability measures for a variety of systems. However, the numerical methods employed, such as those supported by the model-checking tools PRISM and MRMC, suffer from the state-space explosion problem. The main alternative is statistical model checking, which uses standard Monte Carlo simulation, but this performs poorly when small probabilities need to be estimated. Therefore, we propose a method based on importance sampling to speed up the simulation process in cases where the failure probabilities are small due to the high speed of the system's repair units. This setting arises naturally in Markovian models of highly dependable systems. We show that our method compares favourably to standard simulation, to existing importance sampling techniques, and to the numerical techniques of PRISM.
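The core idea of importance sampling - draw samples from a tilted distribution under which the rare event is frequent, then correct each sample with a likelihood ratio - can be sketched on a toy rare-event problem (an illustration only, not the paper's method for Markovian repair models):

```python
import math
import random

def estimate_tail(t=10.0, n=100_000, tilt=0.1, seed=0):
    """Importance-sampling estimate of P(X > t) for X ~ Exp(1).

    Samples are drawn from the tilted Exp(tilt) density, under which
    exceeding t is common, and each hit is reweighted by the likelihood
    ratio f(x)/g(x) = exp(-x) / (tilt * exp(-tilt * x)).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(tilt)
        if x > t:
            total += math.exp(-x) / (tilt * math.exp(-tilt * x))
    return total / n
```

For t = 10 the true probability is e^-10 (about 4.5e-5); a plain Monte Carlo run of the same size would observe the event only a handful of times, while the tilted estimator hits it in roughly a third of the samples.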
16.
《Theoretical Computer Science》, 2003, 290(2): 1223-1251
Dependability is a qualitative term referring to a system's ability to meet its service requirements in the presence of faults. The types and number of faults covered by a system play a primary role in determining the level of dependability that the system can potentially provide. Given the variety and multiplicity of fault types, to simplify the design process, system algorithm design often focuses on specific fault types, resulting in either over-optimistic (all faults permanent) or over-pessimistic (all faults malicious) dependable system designs. A more practical and realistic approach is to recognize that faults of varied severity levels and of differing occurrence probabilities may appear in combination rather than as the assumed single fault type. The ability to let the user select and customize a particular combination of fault types of varied severity characterizes the proposed customizable fault/error model (CFEM). The CFEM organizes diverse fault categories into a cohesive framework by classifying faults according to the effect they have on the required system services rather than by targeting the source of the fault condition. In this paper, we develop (a) the complete framework for the CFEM fault classification, (b) the voting functions applicable under the CFEM, and (c) the fundamental distributed services of consensus and convergence under the CFEM on which dependable distributed functionality can be supported.
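As a point of reference for such voting functions, a plain strict-majority voter over replica outputs looks like the following (illustrative only; the paper's CFEM voters are parameterized by the selected fault combination, which this sketch ignores):

```python
from collections import Counter

def majority_vote(values, quorum=None):
    """Return the value reported by a strict majority of replicas,
    or None if no value reaches the quorum (default: more than half)."""
    if quorum is None:
        quorum = len(values) // 2 + 1
    value, count = Counter(values).most_common(1)[0]
    return value if count >= quorum else None
```

A strict majority tolerates value faults in a minority of replicas; when no value reaches the quorum, returning None signals that the vote is inconclusive rather than guessing.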
17.
For discrete-time LQG optimal regulators using current estimators, it has been shown that perfect recoveries of the loop transfer matrices at the input or at the output are possible for square minimum-phase systems. However, current estimators cannot be used when the processing time of the controller is significant. In this note, the loop transfer recoveries using prediction estimators for square minimum-phase systems are considered. It is shown that, although the perfect recoveries are impossible, the feedback properties obtained by the recovery techniques are those that can be recovered best in the presence of the delay in the controller.
18.
Location aware, dependable multicast for mobile ad hoc networks
This paper introduces dynamic source multicast (DSM), a new protocol for multi-hop wireless (i.e., ad hoc) networks for the multicast of a data packet from a source node to a group of mobile nodes in the network. The protocol assumes that, through the use of positioning system devices, each node knows its own geographic location and the current (global) time, and that it is able to efficiently spread these measures to all other nodes. When a packet is to be multicast, the source node first locally computes a snapshot of the complete network topology from the collected node measures. A Steiner (i.e., multicast) tree for the addressed multicast group is then computed locally based on the snapshot, rather than maintained in a distributed manner. The resulting Steiner tree is optimally encoded using its unique Prüfer sequence and is included in the packet header, as in source routing (unicast) techniques, extending the header length by no more than a source-routing header would. We show that all the local computations are executed in polynomial time. More specifically, the time complexity of the local operation of finding a Steiner tree, and of the encoding/decoding procedures of the related Prüfer sequence, is proven to be O(n²), where n is the number of nodes in the network. The protocol has been simulated in ad hoc networks with 30 and 60 nodes and with different multicast group sizes. We show that DSM delivers packets to all the nodes in a destination group in more than 90% of the cases. Furthermore, compared to flooding, DSM achieves improvements of up to 50% in multicast completion delay.
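The Prüfer encoding that DSM relies on is the classical bijection between labelled trees on n nodes and sequences of length n-2. A sketch of both directions (assuming node labels 0..n-1, simpler than the O(n²) procedure analysed in the paper would need to be for arbitrary labels):

```python
import heapq

def prufer_encode(n, edges):
    """Prüfer sequence of a labelled tree on nodes 0..n-1: repeatedly
    remove the smallest-labelled leaf and record its neighbour."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seq = []
    for _ in range(n - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)
        neighbour = next(iter(adj[leaf]))
        seq.append(neighbour)
        adj[neighbour].remove(leaf)
        del adj[leaf]
    return seq

def prufer_decode(seq):
    """Rebuild the unique tree (as an edge list) from its Prüfer sequence."""
    n = len(seq) + 2
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)   # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges
```

Because the sequence has exactly n-2 entries, shipping it in the packet header costs no more than a unicast source route, which is the property the abstract emphasizes.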
19.
20.
Nicola Mazzocca, Stefano Russo, Valeria Vittorini 《Journal of Systems Architecture》, 1997, 43(10): 671-685
This paper describes a real-world case study in the specification and analysis of dependable distributed systems. The case study is an automated transport system with safety requirements. In order to manage the complexity of the problem of specifying the dynamic behavior of the whole system, a compositional approach is used, based on the integration of the trace logic of the Communicating Sequential Processes (CSP) theory, and stochastic Petri nets (SPNs). It is argued that the integration of different formal methods is a useful approach in the definition of practical engineering methodologies for the specification, design and analysis of complex dependable distributed systems.