Weighted Max-SAT is the optimization version of SAT, and many important problems can be naturally encoded as such. Solving weighted Max-SAT is important from both a theoretical and a practical point of view. In recent years, there has been considerable interest in finding efficient solving techniques. Most of this work focuses on computing good-quality lower bounds to be used within a branch-and-bound, DPLL-like algorithm. Most often, these lower bounds are described procedurally, which makes it difficult to grasp the logic behind them. In this paper we introduce an original framework for Max-SAT that stresses the parallelism with classical SAT. We then extend the two basic SAT solving techniques, search and inference, to this framework. We show that many algorithmic tricks used in state-of-the-art Max-SAT solvers can be expressed in logical terms in a unified manner using our framework. We also introduce an original search algorithm that performs a restricted amount of weighted resolution at each visited node. We empirically compare our algorithm with a variety of solving alternatives on several benchmarks. Our experiments, which constitute, to the best of our knowledge, the most comprehensive Max-SAT evaluation reported to date, demonstrate the practical usability of our approach.
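To make the optimization objective concrete, the following sketch evaluates weighted Max-SAT by brute force: each clause carries a weight, and the goal is to find the assignment that minimises the total weight of falsified clauses. This is only an illustrative baseline (exponential in the number of variables), not the branch-and-bound or weighted-resolution algorithm the paper describes; the clause encoding (signed 1-indexed integers for literals) is an assumption.

```python
from itertools import product

def max_sat_cost(clauses, assignment):
    """Sum the weights of clauses falsified by an assignment.
    Each clause is (weight, [literals]); literal i means variable i
    is true, -i means variable i is false (variables are 1-indexed)."""
    cost = 0
    for weight, literals in clauses:
        if not any((lit > 0) == assignment[abs(lit)] for lit in literals):
            cost += weight
    return cost

def brute_force_max_sat(clauses, n_vars):
    """Exhaustively find the assignment of minimum total falsified weight."""
    best_cost, best = float("inf"), None
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        cost = max_sat_cost(clauses, assignment)
        if cost < best_cost:
            best_cost, best = cost, assignment
    return best_cost, best
```

A branch-and-bound solver explores the same assignment tree but prunes a branch as soon as a lower bound on its falsified weight exceeds the best cost found so far.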
IT systems pervade our society more and more, and we have become heavily dependent on them. At the same time, these systems are increasingly targeted by cyberattacks, making us vulnerable. Those responsible for enterprise IT and cybersecurity face the problem of defining techniques that raise the level of security. They need to decide which mechanism provides the most effective defense with limited resources; essentially, the risks need to be assessed to determine the best cost-to-benefit ratio. One way to achieve this is through threat modeling; however, threat modeling is not commonly used in the enterprise IT risk domain, and the existing threat modeling methods have shortcomings. This paper introduces a metamodel-based approach named Yet Another Cybersecurity Risk Assessment Framework (Yacraf). Yacraf aims to enable comprehensive risk assessment for organizations with more decision support. The paper includes a formalization of the risk calculation and an example showing how an organization can use and benefit from Yacraf.
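The cost-to-benefit decision mentioned above can be sketched as ranking candidate defenses by risk reduction per unit cost. This is a generic expected-loss formulation for illustration only; Yacraf's own risk formalization in the paper is richer, and the names and tuple shape here are assumptions.

```python
def best_defense(defenses, baseline_risk):
    """Pick the defense with the best cost-to-benefit ratio.
    Each defense is (name, cost, residual_risk), where residual_risk
    is the expected loss with that defense in place; baseline_risk is
    the expected loss with no defense. Benefit = risk reduction."""
    def ratio(defense):
        _name, cost, residual_risk = defense
        return (baseline_risk - residual_risk) / cost
    return max(defenses, key=ratio)[0]
```

For example, a cheap control that removes a modest amount of risk can outrank an expensive one that removes slightly more.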
Supercapacitors, also known as electrochemical capacitors, have evolved rapidly in recent years, but challenges remain. This review covers the fundamentals and state-of-the-art developments of supercapacitors. Conventional and novel electrode materials are described, including high-surface-area porous carbons for electrical double layer capacitors (EDLCs), and transition metal oxides, carbides, nitrides and their various nanocomposites for pseudocapacitors. The latest characterization techniques help to better understand the charge storage mechanisms in such supercapacitors and to recognize their current limitations, while recently proposed synthesis approaches are enabling breakthroughs in this field.
The correction of atmospheric perturbations acting upon land surface reflectance measurements recorded by a space-based sensor is an important topic in remote sensing, as it is required to obtain high-quality data. For many years the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative transfer model and the Simplified Method for Atmospheric Correction (SMAC) codes have been used for this atmospheric correction, but previous studies have shown that in a number of situations the quality of correction provided by the SMAC is low. This paper describes a method designed to improve the quality of the SMAC atmospheric correction algorithm at the price of a slight increase in its computational complexity. Data gathered by the SEVIRI instrument aboard Meteosat Second Generation (MSG) are used to validate the additions to the SMAC, both by comparison with simulated data corrected using the highly accurate 6S method and by comparison with in-situ and 6S-corrected SEVIRI data gathered for two field sites in Africa. The additions to the SMAC are found to greatly increase the quality of the atmospheric correction and to broaden the range of atmospheric conditions under which the SMAC can be applied. When examining the Normalised Difference Vegetation Index (NDVI), the relative difference between SMAC and in-situ values decreases by 1.5% with the improvements in place. Similarly, the mean relative difference between SMAC and 6S reflectance values decreases by 13%, 14.5% and 8.5% for Channels 1, 2 and 3 respectively. Furthermore, the processing speed of the SMAC remains largely unaffected, with only a small increase in the time taken to process a full SEVIRI scene. Whilst the method described in this paper is applied only to SEVIRI data, a similar approach can be applied to other data sources and should yield a similar accuracy improvement regardless of which instrument supplies the original data.
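The NDVI comparison above uses the standard index definition, which can be computed directly from corrected reflectances. The formula below is the conventional NDVI, not anything specific to the paper's SMAC modifications; the relative-difference helper mirrors how such percentage comparisons are typically reported.

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index from surface reflectances:
    NDVI = (NIR - RED) / (NIR + RED), in [-1, 1] for non-negative inputs."""
    denom = nir + red
    if denom == 0:
        return 0.0
    return (nir - red) / denom

def relative_difference_pct(corrected, reference):
    """Relative difference (%) of a corrected value against a reference,
    the kind of metric used to compare SMAC output with in-situ or 6S data."""
    return 100.0 * abs(corrected - reference) / abs(reference)
```

Because NDVI is a ratio, it is sensitive to residual atmospheric effects in either band, which is why correction quality shows up clearly in this index.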
This paper presents a new method for three-dimensional object tracking that fuses information from stereo vision and stereo audio. From the audio data, directional information about an object is extracted by the Generalized Cross Correlation (GCC), and the object's position in the video data is detected using the Continuously Adaptive Mean shift (CAMshift) method. The obtained localization estimates, combined with confidence measurements, are then fused to track an object using Particle Swarm Optimization (PSO). In our approach the particles move in 3D space and iteratively evaluate their current position against the localization estimates of the audio and video modules and their confidences, which allows the object's three-dimensional position to be determined directly. The technique has low computational complexity, and, unlike classical methods, its tracking performance does not depend on any model, statistics, or assumptions. The confidence measurements further increase the robustness and reliability of the entire tracking system and allow an adaptive, dynamic fusion of heterogeneous sensor information.
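The audio side of such a system rests on estimating the inter-microphone time delay that best aligns the two channels. The sketch below uses a plain time-domain cross-correlation as a stand-in for GCC (which the paper uses, typically computed in the frequency domain with a weighting such as PHAT); the function names and the sample-domain search are assumptions for illustration.

```python
def cross_correlation_delay(left, right, max_lag):
    """Estimate the inter-channel delay (in samples) that maximises the
    cross-correlation of two microphone signals. The delay maps to a
    direction of arrival given the microphone spacing and sound speed."""
    best_lag, best_score = 0, float("-inf")
    n = len(left)
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(n)
                    if 0 <= i + lag < len(right))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

In the fusion stage, each PSO particle's fitness would combine how well its 3D position agrees with this audio bearing and with the CAMshift image position, each weighted by its confidence.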
Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes it difficult to distinguish between different people or objects of interest within the same scene. By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images in a person tracking system (two early fusion methods, one mid fusion method and one late fusion method) in order to determine the most appropriate way to fuse multiple modalities. We also evaluate two of these approaches for abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system. We compare tracking and abandoned object detection performance for the proposed fusion schemes and for the visual and thermal domains on their own. Testing uses the OTCBVS database to evaluate object tracking and data captured in-house to evaluate abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is the most effective.
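One way a condensation (particle) filter can adapt its particle count is by monitoring the effective sample size of the particle weights: a low value signals uncertainty and motivates more particles, a high value permits fewer. The rule and thresholds below are a generic illustration of this idea, not the paper's actual modified filter.

```python
def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalised particle weights; ranges
    from 1 (all weight on one particle) to len(weights) (uniform)."""
    return 1.0 / sum(w * w for w in weights)

def adapt_particle_count(weights, n_min=50, n_max=500):
    """Grow the particle set when the effective sample size is poor
    (the tracker is uncertain) and shrink it when tracking is confident.
    The doubling/halving rule and 0.3/0.8 thresholds are assumptions."""
    n = len(weights)
    n_eff = effective_sample_size(weights)
    if n_eff < 0.3 * n:
        return min(n_max, n * 2)
    if n_eff > 0.8 * n:
        return max(n_min, n // 2)
    return n
```

The same monitoring signal could also drive the choice of features, spending the richer (and costlier) feature set only when the filter is uncertain.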
Recent advances in algorithms for the multidimensional multiple-choice knapsack problem have enabled us to solve rather large problem instances. However, these algorithms have been evaluated on very limited benchmark instances. In this study, we propose new methods to systematically generate comprehensive benchmark instances. Some instances with special correlation properties between parameters are found to be several orders of magnitude harder than those currently used for benchmarking the algorithms. Experiments on an existing exact algorithm and two generic solvers show that instances whose weights are uncorrelated with the profits are easier than weakly or strongly correlated cases. Instances whose classes contain similar sets of profits for items, and whose weights are strongly correlated with the profits, are the hardest among all instance groups investigated. These hard instances deserve further study, and understanding their properties may shed light on better algorithms.
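The weight-profit correlation classes discussed above follow a pattern familiar from classical knapsack benchmarking, sketched below for a single dimension. The exact generators, parameter ranges, and class structure in the paper differ; this is only an illustration of what "uncorrelated", "weakly correlated" and "strongly correlated" mean for item parameters.

```python
import random

def generate_items(n, correlation="uncorrelated", seed=0):
    """Generate (weight, profit) pairs for knapsack-style benchmarks.
    'uncorrelated': profit drawn independently of weight;
    'weak': profit = weight plus bounded noise;
    'strong': profit = weight + constant -- the classically hard case,
    where profit ranks give the solver no pruning leverage over weights."""
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        w = rng.randint(1, 1000)
        if correlation == "uncorrelated":
            p = rng.randint(1, 1000)
        elif correlation == "weak":
            p = max(1, w + rng.randint(-100, 100))
        else:  # "strong"
            p = w + 100
        items.append((w, p))
    return items
```

Strong correlation is hard precisely because every item has nearly the same profit density, so bound-based pruning discards very little of the search tree.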