Similar Articles
 20 similar articles found (search time: 15 ms)
1.
The radiometric response of a camera governs the relationship between the light incident on the camera sensor and the output pixel values that are produced. This relationship, which is typically unknown and nonlinear, needs to be estimated for applications that require accurate measurement of scene radiance. Various camera response recovery algorithms have been proposed, each with different merits and drawbacks; however, an evaluation study that compares these algorithms has not been presented. In this work, we aim to fill this gap by conducting a rigorous experiment that evaluates the selected algorithms with respect to three metrics: consistency, accuracy, and robustness. In particular, we seek answers to the following four questions: (1) Which camera response recovery algorithm gives the most accurate results? (2) Which algorithm produces the camera response most consistently for different scenes? (3) Which algorithm performs better under varying degrees of noise? (4) Does the sRGB assumption hold in practice? Our findings indicate that Grossberg and Nayar's (GN) algorithm (2004 [1]) is the most accurate; Mitsunaga and Nayar's (MN) algorithm (1999 [2]) is the most consistent; and Debevec and Malik's (DM) algorithm (1997 [3]), together with MN, is the most resistant to noise. We also find that the studied algorithms are not statistically better than each other in terms of accuracy, although all of them statistically outperform the sRGB assumption. By answering these questions, we aim to help researchers and practitioners in the high dynamic range (HDR) imaging community make better-informed choices when selecting an algorithm for camera response recovery.
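As a toy illustration of the response-recovery problem (not any of the GN, MN, or DM algorithms above), the sketch below assumes a simple power-law response M = E**(1/gamma) and estimates gamma from corresponding pixel values of the same scene points captured at two known exposures; all names are illustrative.

```python
import math

def estimate_gamma(pairs, exposure_ratio):
    """Estimate gamma of an assumed power-law response M = E**(1/gamma).

    `pairs` holds corresponding pixel values (m1, m2) of the same scene
    point captured at exposures differing by `exposure_ratio` (e2/e1).
    Under the model, m2/m1 = exposure_ratio**(1/gamma), so each pair
    yields one estimate of gamma; we average over all usable pairs.
    """
    estimates = []
    for m1, m2 in pairs:
        if m1 <= 0 or m2 <= 0 or m1 == m2:
            continue  # degenerate pair: no information about gamma
        # log(m2/m1) = (1/gamma) * log(ratio)  =>  gamma = log(ratio) / log(m2/m1)
        estimates.append(math.log(exposure_ratio) / math.log(m2 / m1))
    return sum(estimates) / len(estimates)
```

Real recovery algorithms fit far more flexible response models (polynomials, non-parametric curves) and handle noise and saturation; this one-parameter version only shows how multiple exposures constrain the response.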

2.
Increasingly, for extensibility and performance, special purpose application code is being integrated with database system code. Such application code has direct access to database system buffers, and as a result, the danger of data being corrupted due to inadvertent application writes is increased. Previously proposed hardware techniques to protect from corruption require system calls, and their performance depends on details of the hardware architecture. We investigate an alternative approach which uses codewords associated with regions of data to detect corruption and to prevent corrupted data from being used by subsequent transactions. We develop several such techniques which vary in the level of protection, space overhead, performance, and impact on concurrency. These techniques are implemented in the Dali main-memory storage manager, and the performance impact of each on normal processing is evaluated. Novel techniques are developed to recover when a transaction has read corrupted data caused by a bad write and gone on to write other data in the database. These techniques use limited and relatively low-cost logging of transaction reads to trace the corruption and may also prove useful when resolving problems caused by incorrect data entry and other logical errors.
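A minimal sketch of the codeword idea described above, assuming CRC32 as the codeword function and an in-memory bytearray as the protected buffer; the class and method names are illustrative, not the Dali API.

```python
import zlib

class ProtectedRegion:
    """Codeword-based corruption detection: each data region keeps a
    checksum (the 'codeword'). Legitimate writes go through write(),
    which refreshes the codeword; stray writes that bypass it are
    caught by verify() before the data is used by a transaction."""

    def __init__(self, data: bytearray):
        self.data = data
        self._codeword = zlib.crc32(bytes(self.data))

    def write(self, offset: int, payload: bytes) -> None:
        """A sanctioned update: mutate the region, then refresh the codeword."""
        self.data[offset:offset + len(payload)] = payload
        self._codeword = zlib.crc32(bytes(self.data))

    def verify(self) -> bool:
        """Return True iff the region still matches its codeword."""
        return zlib.crc32(bytes(self.data)) == self._codeword
```

An inadvertent write that mutates `data` directly leaves the codeword stale, so the next `verify()` flags the region as corrupted, which is the detection half of the paper's scheme (the recovery half, tracing reads of corrupted data, is not modelled here).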

3.
Much of the development of model-based design and dependability analysis in the design of dependable systems, including software intensive systems, can be attributed to the application of advances in formal logic and its application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines these two techniques, schematically founded on the two pillars of formal logic and biology, from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules.

4.
5.
Aided by advanced digital photography and digital image-processing software, digital images are far easier to tamper with and forge than traditional photographs. Forged digital images are not only widespread but have also had a strongly negative impact on journalism, entertainment, academia, and the legal system. Since the human eye alone can hardly detect or authenticate digital images, using the computing power of machines to verify image authenticity has become a major research focus. This paper surveys current progress in passive digital image authenticity detection from several perspectives, including statistical features, sensor noise patterns, color filter array interpolation inconsistencies, projective geometry, and lighting analysis, and then discusses the current shortcomings of the field and directions for future work.

6.
In a hot-standby replication system, the system can no longer process its tasks when all replicated nodes have failed. Thus, the remaining live nodes should be well protected against failure when some of the replicated nodes have failed. Design faults and system-specific weaknesses may cause chain reactions of common faults on identical replicated nodes in replication systems. These can be alleviated by replicating diverse hardware and software. Going one step further, failures on the remaining nodes can be suppressed by predicting and preventing the same fault once it has occurred on a replicated node. In this paper, we propose a fault avoidance scheme which increases system dependability by avoiding common faults on the remaining nodes when some nodes fail, and we analyze the resulting system dependability.

7.
A fault-tolerant architectural approach for dependable systems
A system's structure enables it to generate its intended behavior from its components' behavior. A well-structured system simplifies relationships among components, which can increase dependability. With software systems, the architecture is an abstraction of the structure. Architectural reasoning about dependability has become increasingly important because emerging applications are increasingly complex. We've developed an architectural approach for effectively representing and analyzing fault-tolerant software systems. The proposed solution relies on exception handling to tolerate faults associated with component and connector failures, architectural mismatches, and configuration faults. Our approach, a specialization of the peer-to-peer architectural style, hides inside the architectural elements the complexities of exception handling and propagation. Our goal is to improve a system's overall reliability and availability by making it tolerant of nonmalicious faults.

8.
As computing components get smaller and people become accustomed to having computational power at their disposal at any time, mobile computing is developing as an important research area. One of the fundamental problems in mobility is maintaining connectivity through message passing as the user moves through the network. An approach to this is to have a single home node constantly track the current location of the mobile unit and forward messages to this location. One problem with this approach is that, during the update to the home agent after movement, messages are often dropped, especially in the case of frequent movement. In this paper, we present a new algorithm which uses a home agent, but maintains information regarding a subnet within which the mobile unit must be present. We also present a reliable message delivery algorithm which is superimposed on the region maintenance algorithm. Our strategy is based on ideas from diffusing computations as first proposed by Dijkstra and Scholten. Finally, we present a second algorithm which limits the size of the subnet by keeping only a path from the home node to the mobile unit.
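The baseline home-agent scheme and its message-loss problem can be sketched as follows (all names hypothetical; this models only the baseline approach the paper improves on, not its region-maintenance or diffusing-computation algorithms):

```python
class HomeAgent:
    """Toy model of home-agent forwarding for a mobile unit.

    The home node records the unit's last reported location and forwards
    messages there. While the unit is mid-handoff (its location update
    has not yet reached the home agent), forwarded messages are lost,
    which is exactly the window the paper's region-based scheme closes.
    """

    def __init__(self):
        self.location = None  # current subnet of the mobile unit, if known
        self.dropped = []     # messages lost during handoff windows

    def update(self, subnet: str) -> None:
        """Location update sent by the mobile unit after it moves."""
        self.location = subnet

    def forward(self, message: str):
        """Forward a message to the unit's last known subnet, or drop it."""
        if self.location is None:
            self.dropped.append(message)
            return None
        return (self.location, message)
```

With frequent movement the `location is None` window recurs often, so the drop list grows; the paper's approach instead tracks a whole subnet region that is guaranteed to contain the unit.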

9.
Digital management of multiple robust identities is a crucial issue in developing the next generation of distributed applications. Our daily activities increasingly rely on remote resources and services - specifically, on interactions between different, remotely located parties. Because these parties might (and sometimes should) know little about each other, digital identities - electronic representations of individuals' or organizations' sensitive information - help introduce them to each other and control the amount of information transferred. In its broadest sense, identity management encompasses definitions and life-cycle management for digital identities and profiles, as well as environments for exchanging and validating such information. Digital identity management - especially support for identity dependability and multiplicity - is crucial for building and maintaining trust relationships in today's globally interconnected society. We investigate the problems inherent in identity management, emphasizing the requirements for multiplicity and dependability. We enable a new generation of advanced MDDI services on the global information infrastructure.

10.
Recent advances in public sector open data and online mapping software are opening up new possibilities for interactive mapping in research applications. Increasingly there are opportunities to develop advanced interactive platforms with exploratory and analytical functionality. This paper reviews tools and workflows for the production of online research mapping platforms, alongside a classification of the interactive functionality that can be achieved. A series of mapping case studies from government, academia and research institutes are reviewed. The conclusions are that online cartography's technical hurdles are falling due to open data releases, open source software and cloud services innovations. The data exploration functionality of these new tools is powerful and complements the emerging fields of big data and open GIS. International data perspectives are also increasingly feasible. Analytical functionality for web mapping is currently less developed, but promising examples can be seen in areas such as urban analytics. For more presentational research communication applications, there has been progress in story-driven mapping drawing on data journalism approaches that are capable of connecting with very large audiences.

11.
A mobile ad hoc network (MANET) is a wireless network of mobile devices (such as PDAs, laptops, cell phones, and other lightweight, easily transportable computing devices) in which each node can act as a router for network traffic rather than relying on fixed networking infrastructure. As mobile computing becomes ubiquitous, MANETs become increasingly important. As a design paradigm, multiagent systems (MASs) can help facilitate and coordinate ad hoc scenarios that might include security personnel, rescue workers, police officers, firefighters, and paramedics. On this network, mobile agents perform critical functions that include delivering messages, monitoring resource usage on constrained mobile devices, assessing network traffic patterns, analyzing host behaviors, and revoking access rights for suspicious hosts and agents. Agents can effectively operate in such environments if they are environment aware, that is, if they can sense and reason about their complex and dynamic environments. Altogether, agents living on a MANET must be network, information, and performance aware. This article fleshes out how we apply this approach to populations of mobile agents on a live MANET.

12.
Protecting its own security and providing trustworthy services are the dual security properties of a cryptographic system. Starting from the views put forward in survey papers by two Turing Award winning computer scientists and from the focal issues of the U.S. federal government's security programs, this paper introduces trusted cryptographic systems, a current hot topic in information security. It explains the meaning of a trusted cryptographic system, clarifies how the related concepts differ from and relate to one another, analyzes the research content and current state of the field in depth, and concludes by summarizing the significance of research on trusted cryptographic systems.

13.
In the last few years, several different mesh network architectures have been conceived by both industry and academia; however, many issues in the deployment of efficient and fair transport protocols are still open. One of these issues is rate adaptation, that is, how to allocate the network resources among multiple flows while minimizing the performance overhead. To address this problem, in this paper we first define an analytical framework for a very simple topology. The model allows us to study the performance of an adaptive and responsive transport protocol when the effects of the lower layers are ignored. The mathematical approach alone does not represent a feasible solution, but it contributes to determining the strengths and weaknesses of our proposal. The main novelty of the proposed solution is that the congestion control approach is based on a hop-by-hop mechanism, which allows nodes to adapt their transmitting rates in a distributed way and to keep track of dynamic multi-hop network characteristics in a responsive manner. This is in contrast with classical solutions in the literature, which are founded on end-to-end support. To ensure reliability, a coarse-grained end-to-end algorithm is integrated with the proposed hop-by-hop congestion control mechanism to provide packet-level reliability at the transport layer. Performance evaluation, via extensive simulation experiments, shows that the proposed protocol achieves high performance in terms of network throughput.
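The hop-by-hop idea, each node bounding its sending rate by what its downstream neighbour advertises, can be sketched on a simple chain topology (a toy steady-state model, not the paper's protocol):

```python
def hop_by_hop_rates(capacities):
    """Toy hop-by-hop rate adaptation on a chain of forwarding nodes.

    `capacities[i]` is the outgoing link capacity of node i. Working
    backwards from the destination, each node advertises to its
    upstream neighbour the minimum of its own capacity and the rate
    advertised from downstream, so the source's rate converges to the
    path bottleneck without any end-to-end signalling.
    """
    advertised = []
    downstream = float("inf")  # the destination imposes no limit
    for cap in reversed(capacities):
        downstream = min(cap, downstream)  # local guard: never exceed own link
        advertised.append(downstream)
    return list(reversed(advertised))
```

The first element is the rate the source may use; note that intermediate nodes can still run faster than the source where their local links allow, which is the distributed behaviour an end-to-end scheme cannot express.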

14.
Complex real-time system design needs to address dependability requirements such as safety, reliability, and security. We introduce a modelling and simulation based approach which allows for the analysis and prediction of dependability constraints. Dependability can be improved by making use of fault tolerance techniques. The de facto example in the real-time systems literature, a pump control system in a mining environment, is used to demonstrate our model-based approach. In particular, the system is modelled using the Discrete EVent system Specification (DEVS) formalism, and then extended to incorporate fault tolerance mechanisms. The modularity of the DEVS formalism facilitates this extension. The simulation demonstrates that the employed fault tolerance techniques are effective; that is, the system performs satisfactorily despite the presence of faults. This approach also makes it possible to make an informed choice between different fault tolerance techniques. Performance metrics are used to measure the reliability and safety of the system, and to evaluate the dependability achieved by the design. In our model-based development process, modelling, simulation and eventual deployment of the system are seamlessly integrated.

15.
The sixth-generation (6G) wireless communication system is envisioned to be capable of providing highly dependable services by integrating native reliability and trustworthiness functionalities. Zero-trust vehicular networks are one of the typical scenarios for 6G dependable services. Under the technical framework of vehicle-roadside collaboration, more and more on-board devices and roadside infrastructures will communicate to exchange information. The reliability and security of this vehicle-roadside collaboration will directly affect transportation safety. Considering a zero-trust vehicular environment, to prevent malicious vehicles from uploading false or invalid information, we propose a malicious vehicle identity disclosure approach based on the Shamir secret sharing scheme. Meanwhile, a two-layer consortium blockchain architecture and smart contracts are designed to protect the identity and privacy of benign vehicles as well as the security of their private data. After that, in order to improve the efficiency of vehicle identity disclosure, we present an inspection policy based on zero-sum game theory and a roadside unit incentive mechanism that jointly uses contract theory and a subjective logic model. We verify the performance of the entire zero-trust solution through extensive simulation experiments. On the premise of protecting vehicle privacy, our solution is demonstrated to significantly improve the reliability and security of 6G vehicular networks.
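The Shamir primitive the identity-disclosure approach builds on can be sketched as a textbook (t, n) scheme over a small prime field (the field size and function names here are illustrative, not the paper's construction):

```python
import random

PRIME = 2_147_483_647  # Mersenne prime 2**31 - 1; the field for all shares

def make_shares(secret, threshold, n):
    """(threshold, n) Shamir secret sharing: pick a random polynomial of
    degree threshold-1 with constant term `secret`, and hand out its
    values at x = 1..n. Any `threshold` shares reconstruct the secret;
    fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is den's modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

In the paper's setting, an identity would only be disclosed once enough roadside units pool their shares, so no single unit can deanonymize a benign vehicle on its own.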

16.
There has been significant progress in automated verification techniques based on model checking. However, scalable software model checking remains a challenging problem. We believe that this problem can be addressed using a design for verification approach based on design patterns that facilitate scalable automated verification. In this paper, we present a design for verification approach for highly dependable concurrent programming using a design pattern for concurrency controllers. A concurrency controller class consists of a set of guarded commands defining a synchronization policy, and a stateful interface describing the correct usage of the synchronization policy. We present an assume-guarantee style modular verification strategy which separates the verification of the controller behavior from the verification of the conformance to its interface. This allows us to execute the interface and behavior verification tasks separately using specialized verification techniques. We present a case study demonstrating the effectiveness of the presented approach.
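The guarded-command half of the concurrency-controller pattern can be sketched as a classic bounded buffer, where each action is a guard paired with an atomic update and callers block until the guard holds (names are illustrative; the paper's controllers also carry a stateful interface specification, omitted here):

```python
import threading

class BoundedBufferController:
    """Guarded-command concurrency controller for a bounded buffer.

    Each public method is one guarded command: the caller blocks until
    the guard is true, then the update runs atomically under the lock.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.count = 0
        self._cond = threading.Condition()

    def produce(self):
        with self._cond:
            # guard: count < capacity  ->  update: count += 1
            while not self.count < self.capacity:
                self._cond.wait()
            self.count += 1
            self._cond.notify_all()

    def consume(self):
        with self._cond:
            # guard: count > 0  ->  update: count -= 1
            while not self.count > 0:
                self._cond.wait()
            self.count -= 1
            self._cond.notify_all()
```

Keeping the policy in guard/update form is what makes the behavior amenable to the separate, specialized verification the abstract describes: the guards and updates form a small transition system independent of the client code.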

17.
18.
The Journal of Supercomputing - Both data shuffling and cache recovery are essential parts of the Spark system, and they directly affect Spark parallel computing performance. Existing dynamic...

19.
Visual measurements of modeled 3-D landmarks provide strong constraints on the location and orientation of a mobile robot. To make the landmark-based robot navigation approach widely applicable, it is necessary to automatically build the landmark models. A substantial amount of effort has been invested by computer vision researchers over the past 10 years on developing robust methods for computing 3-D structure from a sequence of 2-D images. However, robust computation of 3-D structure, with respect to even small amounts of input image noise, has remained elusive. The approach adopted in this article is one of model extension and refinement. A partial model of the environment is assumed to exist and this model is extended over a sequence of frames. As will be shown in the experiments, prior knowledge of the small partial model greatly enhances the robustness of the 3-D structure computations. The initial 3-D model may have errors and these are also refined over the sequence of frames. © 1992 John Wiley & Sons, Inc.

20.
Embedded systems increasingly entail complex issues of hardware-software (HW-SW) co-design. As the number and range of SW functional components typically exceed the finite HW resources, a common approach is that of resource sharing (i.e., the deployment of diverse SW functionalities onto the same HW resources). Consequently, to arrive at a meaningful co-design solution, one needs to factor in the issues of processing capability, power, communication bandwidth, precedence relations, real-time deadlines, space, and cost. As SW functions of diverse criticality (e.g., brake control and infotainment functions) get integrated, an explicit integration requirement is to carefully plan resource sharing such that faults in low-criticality functions do not affect higher-criticality functions. On this background, the main contribution of this paper is a dependability-driven framework that helps to conduct the integration of SW components onto HW resources such that the maintenance of system dependability over the integration of diverse-criticality components is assured by design. We first develop a clustering strategy for SW components into Fault Containment Modules (FCMs) such that error propagation via interaction is minimized. Subsequently, the rules of composition for FCMs with respect to error propagation are developed. To allocate the resulting FCMs to the existing HW resources we provide several heuristics, each optimizing particular attributes thereof. Further, a framework for assessing the goodness of the achieved HW-SW composition as a dependable embedded system is presented. Two new techniques for quantifying the goodness of the proposed mappings are introduced by examples, both based on a multi-criteria decision-theoretic approach.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)