1.
2.
Total variation models for variable lighting face recognition  (Cited by: 1; self-citations: 0; citations by others: 1)
Chen T, Yin W, Zhou XS, Comaniciu D, Huang TS. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(9): 1519-1524
In this paper, we present the logarithmic total variation (LTV) model for face recognition under varying illumination, including natural lighting conditions, where we rarely know the strength, direction, or number of light sources. The proposed LTV model can factorize a single face image and obtain the illumination-invariant facial structure, which is then used for face recognition. Our model is inspired by the SQI model but has better edge-preserving ability and simpler parameter selection. A merit of this model is that it requires neither a lighting assumption nor any training. The LTV model reaches very high recognition rates in tests on both the Yale and CMU PIE face databases, as well as on a face database containing 765 subjects under outdoor lighting conditions.
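A minimal sketch of the kind of log-domain total variation factorization described above, using scikit-image's TV denoiser as a stand-in for the paper's exact LTV solver; the weight and eps values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def ltv_structure(face, weight=0.5, eps=1.0):
    """Split a grayscale face image into a smooth illumination field u and a
    small-scale structure v = log(f) - u that is largely illumination invariant."""
    log_f = np.log(face.astype(np.float64) + eps)      # work in the log domain
    u = denoise_tv_chambolle(log_f, weight=weight)     # large-scale (illumination-like) part
    v = log_f - u                                      # small-scale facial structure
    return v, u
```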
3.
Software exploits, especially zero-day exploits, are major security threats. Every day, security experts discover and collect numerous exploits from honeypots, malware forensics, and underground channels. However, no easy methods exist to classify these exploits into meaningful categories and to accelerate diagnosis as well as detailed analysis. To address this need, we present SeismoMeter, which recognizes both control-flow hijacking and data-only attacks by combining approximate control-flow integrity, fast dynamic taint analysis, and API sandboxing schemes. Once it detects an exploit incident, SeismoMeter generates a succinct data representation, called an exploit skeleton, to characterize the captured exploit. SeismoMeter then classifies the captured exploits into different exploit families by performing distance computations on the extracted skeletons. To evaluate the efficiency of SeismoMeter, we conduct a field test using exploit samples from public exploit databases, such as Metasploit, as well as wild-captured exploits. Our experiments demonstrate that SeismoMeter is a practical system that successfully detects and correctly classifies all these exploit attacks.
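The family assignment step ("distance computations on the extracted skeletons") can be illustrated with a nearest-centroid sketch over skeleton feature vectors; the 4-dimensional encoding, family names, and threshold below are invented for illustration and are not SeismoMeter's actual representation.

```python
import numpy as np

def classify_skeleton(skeleton, family_centroids, new_family_threshold=2.0):
    """Assign an exploit skeleton (encoded as a feature vector) to the nearest
    known family centroid, or flag it as unknown if every centroid is far away."""
    dists = {name: np.linalg.norm(skeleton - c) for name, c in family_centroids.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= new_family_threshold else "unknown-family"

# Toy usage with made-up skeleton encodings.
centroids = {"rop-chain": np.array([1.0, 0.0, 3.0, 0.0]),
             "data-only": np.array([0.0, 2.0, 0.0, 1.0])}
print(classify_skeleton(np.array([0.9, 0.1, 2.8, 0.2]), centroids))   # -> "rop-chain"
```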
4.
Techniques that aid the realistic rendering of lighting effects achieved from linear (1-D) and area (2-D) light sources are presented. They are based on a radiosity model that can be inserted into any traditional ray tracer. The approach is applied to both a 1-D light, analogous to a fluorescent tube, and a 2-D light, analogous to a light set into the ceiling.
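A minimal sketch of how direct illumination from a linear (1-D) light source can be estimated inside a ray tracer by sampling points along the tube; the Lambertian shading and the stubbed visibility test are simplifying assumptions rather than the paper's radiosity formulation.

```python
import numpy as np

def linear_light_direct(point, normal, p0, p1, radiance, n_samples=16,
                        visible=lambda shading_pt, light_pt: True):
    """Average the unoccluded Lambertian contribution of n_samples points
    placed along the light segment from p0 to p1 (the fluorescent tube)."""
    total = 0.0
    for t in (np.arange(n_samples) + 0.5) / n_samples:
        s = p0 + t * (p1 - p0)                 # sample position on the tube
        if not visible(point, s):              # shadow-ray test (stub here)
            continue
        d = s - point
        r2 = float(d @ d)
        cos_theta = max(0.0, float(normal @ d) / np.sqrt(r2))
        total += radiance * cos_theta / r2     # inverse-square falloff
    return total / n_samples

# Toy usage: a point below a tube running along the x axis at height 2.
print(linear_light_direct(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                          np.array([-1.0, 0.0, 2.0]), np.array([1.0, 0.0, 2.0]),
                          radiance=10.0))
```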
5.
Gilmore S, Hillston J, Ribaudo M. IEEE Transactions on Software Engineering, 2001, 27(5): 449-464
Performance Evaluation Process Algebra (PEPA) is a formal language for performance modeling based on process algebra. It has previously been shown that, by using the process algebra apparatus, compact performance models can be derived which retain the essential behavioral characteristics of the modeled system. However, no efficient algorithm for this derivation was given. We present an efficient algorithm which recognizes and takes advantage of symmetries within the model and avoids unnecessary computation. The algorithm is illustrated by a multiprocessor example.
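The central idea, collapsing states that differ only by a permutation of identical components, can be sketched by canonicalising each state of a bank of identical processors as a sorted tuple; this toy aggregation illustrates the symmetry exploitation but is not the paper's PEPA-specific algorithm.

```python
from itertools import product

def aggregate_states(n_procs, local_states=("idle", "busy", "waiting")):
    """Map each ordered state of n identical processors to a canonical
    representative (a sorted tuple), so that symmetric states collapse."""
    classes = {}
    for state in product(local_states, repeat=n_procs):
        classes.setdefault(tuple(sorted(state)), []).append(state)
    return classes

classes = aggregate_states(4)
print(len(classes), "aggregated states instead of", 3 ** 4)   # 15 instead of 81
```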
6.
Organizations, such as federally funded medical research centers, must submit de-identified data on their consumers to publicly accessible repositories to adhere to regulatory requirements. Many repositories are managed by third parties, and it is often unknown whether records received from disparate organizations correspond to the same individual. Failure to resolve this issue can lead to biased (e.g., double counting of identical records) and underpowered (e.g., unlinked records of different data types) investigations. In this paper, we present a secure multiparty computation protocol that enables record joins via consumers' encrypted identifiers. Our solution is more practical than prior secure join models in that data holders need to interact with the third party only once per data submission. Though technically feasible, the speed of the basic protocol scales quadratically with the number of records. Thus, we introduce an extended version of our protocol in which data holders append k-anonymous features of their consumers to their encrypted submissions. These features facilitate a more efficient join computation, while providing a formal guarantee that each record is linkable to no fewer than k individuals in the union of all organizations' consumers. Beyond a theoretical treatment of the problem, we provide an extensive experimental investigation with data derived from the US Census to illustrate the significant gains in efficiency such an approach can achieve.
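The paper's protocol is a secure multiparty computation; the simplified sketch below only illustrates how k-anonymous features can restrict the join to within-bucket comparisons, using keyed hashes (HMAC) as a stand-in for the encrypted identifiers. The shared key, the ZIP-prefix feature, and the record values are all hypothetical.

```python
import hmac, hashlib
from collections import defaultdict

SHARED_KEY = b"key-agreed-by-data-holders"   # hypothetical; never sent to the third party

def submit(records):
    """A data holder sends (pseudonym, coarse feature) pairs; the raw identifier
    never leaves the holder in the clear."""
    return [(hmac.new(SHARED_KEY, rid.encode(), hashlib.sha256).hexdigest(), feat)
            for rid, feat in records]

def join(sub_a, sub_b):
    """The third party buckets submissions by the k-anonymous feature and only
    compares pseudonyms within matching buckets, instead of all pairs."""
    buckets = defaultdict(set)
    for pseud, feat in sub_a:
        buckets[feat].add(pseud)
    return [(pseud, feat) for pseud, feat in sub_b if pseud in buckets[feat]]

matches = join(submit([("alice-1970", "ZIP:981**"), ("bob-1985", "ZIP:100**")]),
               submit([("alice-1970", "ZIP:981**"), ("carol-1990", "ZIP:100**")]))
print(matches)   # only the record shared by both holders is linked
```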
7.
8.
Gang Sun, Shuhui Wang, Xuehui Liu, Qingming Huang, Yanyun Chen, Enhua Wu. The Visual Computer, 2013, 29(6-8): 565-575
Cross-domain visual matching aims at finding visually similar images across a wide range of visual domains and has shown practical impact in a number of applications. Unfortunately, the state-of-the-art approach, which estimates the relative importance of the dimensions of a single feature, still suffers from low matching accuracy and high time cost. To this end, this paper proposes a novel cross-domain visual matching framework leveraging multiple feature representations. To integrate the discriminative power of multiple features, we develop a data-driven, query-specific feature fusion model, which estimates the relative importance of the individual feature dimensions as well as the weight vector among multiple features simultaneously. Moreover, to alleviate the computational burden of an exhaustive subimage search, we design a speedup scheme, which employs hyperplane hashing for rapidly collecting the hard negatives. Extensive experiments carried out on various matching tasks demonstrate that the proposed approach outperforms the state of the art in both accuracy and efficiency.
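The speedup scheme relies on hyperplane hashing; a random-hyperplane (sign) hashing sketch in NumPy shows the idea of retrieving pool items whose binary codes are within a small Hamming distance of the query, as candidate hard negatives. The code length, feature dimensionality, and radius below are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def hyperplane_codes(X, n_bits=32, planes=None):
    """Binary codes given by the signs of projections onto random hyperplanes."""
    if planes is None:
        planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8), planes

def candidate_hard_negatives(query, pool, n_bits=32, radius=4):
    """Indices of pool items whose codes lie within a small Hamming radius of
    the query code: visually close, hence likely hard negatives."""
    q_code, planes = hyperplane_codes(query[None, :], n_bits)
    p_codes, _ = hyperplane_codes(pool, n_bits, planes)
    hamming = (p_codes != q_code).sum(axis=1)
    return np.where(hamming <= radius)[0]

pool = rng.standard_normal((1000, 128))
query = pool[0] + 0.05 * rng.standard_normal(128)   # a near-duplicate of pool[0]
print(candidate_hard_negatives(query, pool))
```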
9.
Phrase pattern recognition (phrase chunking) refers to automatic approaches for identifying predefined phrase structures in a stream of text. Support vector machine (SVM)-based methods have shown excellent performance in many sequential text pattern recognition tasks, such as protein name finding and noun phrase (NP) chunking. Even though they yield very accurate results, they are not efficient for online applications, which need to handle hundreds of thousands of words in a limited time. In this paper, we first re-examine five typical multiclass SVM methods and their adaptation to phrase chunking; however, most of them are inefficient as the number of phrase types scales. We therefore introduce two new multiclass SVM models that make the system substantially faster in terms of training and testing while keeping the SVM accurate. The two methods can also be applied to similar tasks such as named entity recognition and Chinese word segmentation. Experiments on the CoNLL-2000 chunking and Chinese base-chunking tasks show that our method achieves very competitive accuracy and is at least 100 times faster than the state-of-the-art SVM-based phrase chunking method. We also analyze the computational time complexity and the time cost of our methods.
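For orientation, a toy one-vs-rest linear SVM chunk tagger with word-window features is sketched below; scikit-learn's LinearSVC is a generic stand-in, not either of the paper's two proposed multiclass SVM models, and the tiny training sentence is invented.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def window_features(tokens, i):
    """Simple word features over a window of one token to each side."""
    return {"w0": tokens[i],
            "w-1": tokens[i - 1] if i > 0 else "<s>",
            "w+1": tokens[i + 1] if i + 1 < len(tokens) else "</s>"}

train = [(["He", "reckons", "the", "current", "deficit"],
          ["B-NP", "B-VP", "B-NP", "I-NP", "I-NP"])]
X = [window_features(toks, i) for toks, tags in train for i in range(len(toks))]
y = [tag for _, tags in train for tag in tags]

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X), y)   # one-vs-rest multiclass under the hood
test = ["the", "trade", "figures"]
print(clf.predict(vec.transform([window_features(test, i) for i in range(len(test))])))
```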
10.
O. Rabaza, A. Peña-García, F. Pérez-Ocón, D. Gómez-Lorente. Expert Systems with Applications, 2013, 40(18): 7305-7315
New relationships among public lighting design parameters (average illuminance, luminaire spacing, and mounting height) were calculated from a large sample of data sets optimized with a multi-objective evolutionary algorithm. The optimization criteria were maximum energy efficiency and overall uniformity. The relationships thus derived provide a simple and elegant method for designing any type of public lighting installation without the need for complex, expensive, or unavailable software. It would therefore be desirable for manufacturers to include such parameters in their product datasheets in order to make the calculation easier.
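The multi-objective selection step, keeping only installations that are non-dominated in energy efficiency and overall uniformity, can be sketched with a simple Pareto filter; the candidate designs and both metric values below are fabricated placeholders, not data from the paper.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows, with every objective to be maximised."""
    keep = []
    for i, oi in enumerate(objectives):
        dominated = any(np.all(oj >= oi) and np.any(oj > oi)
                        for j, oj in enumerate(objectives) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidates: luminaire spacing [m], mounting height [m], efficiency, uniformity.
designs = np.array([[30.0, 8.0, 0.72, 0.40],
                    [35.0, 9.0, 0.78, 0.35],
                    [25.0, 7.0, 0.65, 0.48],
                    [30.0, 9.0, 0.70, 0.38]])
print(designs[pareto_front(designs[:, 2:])])   # the last design is dominated and dropped
```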
11.
In this paper, it is assumed that the rates of return on assets can be expressed by possibility distributions rather than probability distributions. We propose two kinds of portfolio selection models based on lower and upper possibilistic means and possibilistic variances, respectively, and introduce the notions of lower and upper possibilistic efficient portfolios. We also present an algorithm which can derive the explicit expression of the possibilistic efficient frontier for the possibilistic mean-variance portfolio selection problem dealing with lower bounds on asset holdings.
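As a small illustration of lower and upper possibilistic means, the sketch below uses the Carlsson-Fuller formulas for triangular fuzzy returns (an assumed representation; the paper's own definitions may differ) and evaluates them for a two-asset portfolio with nonnegative weights.

```python
import numpy as np

def possibilistic_means(center, left, right):
    """Lower/upper possibilistic means of a triangular fuzzy number (a, alpha, beta):
    M_lower = a - alpha/3 and M_upper = a + beta/3 (Carlsson-Fuller definitions)."""
    return center - left / 3.0, center + right / 3.0

def portfolio_possibilistic_means(weights, centers, lefts, rights):
    """For nonnegative weights the portfolio return is again triangular, so its
    possibilistic means are the weighted sums of the asset-level means."""
    lo, up = possibilistic_means(np.asarray(centers), np.asarray(lefts), np.asarray(rights))
    w = np.asarray(weights)
    return float(w @ lo), float(w @ up)

# Two assets with triangular fuzzy returns (center, left spread, right spread).
print(portfolio_possibilistic_means([0.6, 0.4],
                                    centers=[0.08, 0.12],
                                    lefts=[0.03, 0.06],
                                    rights=[0.04, 0.09]))
```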
12.
Computer architects usually evaluate new designs using cycle-accurate processor simulation. This approach provides detailed insight into processor performance, power consumption, and complexity. However, only configurations in a subspace can be simulated in practice, due to long simulation times and limited resources, leading to suboptimal conclusions that might not apply to the larger design space. In this paper, we propose a performance prediction approach which employs state-of-the-art techniques from experiment design, machine learning, and data mining. According to our experiments on single- and multi-core processors, our prediction model generates highly accurate estimates for unsampled points in the design space and is robust for worst-case prediction. Moreover, the model provides quantitative interpretation tools that help investigators efficiently tune design parameters and remove performance bottlenecks.
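A sketch of the general surrogate-modelling idea: simulate only a sample of configurations, fit a regressor on them, and predict the rest of the design space. RandomForestRegressor stands in for the paper's particular experiment-design and learning techniques, and the "simulator" below is a synthetic stand-in for a cycle-accurate simulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical design space: (cores, L2 size in KB, issue width).
space = np.array([[c, l2, w] for c in (1, 2, 4, 8)
                             for l2 in (256, 512, 1024)
                             for w in (2, 4)], dtype=float)
fake_simulator = lambda x: 1e6 / (x[:, 0] * x[:, 2]) + 5e5 / np.log2(x[:, 1])

sampled = rng.choice(len(space), size=10, replace=False)      # only a subspace is simulated
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(space[sampled], fake_simulator(space[sampled]))

unsampled = np.setdiff1d(np.arange(len(space)), sampled)
predictions = model.predict(space[unsampled])                 # estimates for unsimulated points
print(np.c_[space[unsampled][:3], predictions[:3]])
```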
13.
Georghiades A.S., Belhumeur P.N., Kriegman D.J. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(6): 643-660
We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.
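The recognition rule, assigning a test image to the identity whose approximated illumination cone (a low-dimensional linear subspace) leaves the smallest residual, can be sketched with an SVD-fitted subspace per identity; the random "rendered" images below are placeholders for images synthesized by the generative model.

```python
import numpy as np

def subspace_basis(images, dim=9):
    """Orthonormal basis (columns) of the best dim-dimensional subspace spanned
    by the stacked image vectors, computed via SVD."""
    U, _, _ = np.linalg.svd(np.stack(images, axis=1), full_matrices=False)
    return U[:, :dim]

def recognize(test_image, bases):
    """Return the identity whose subspace approximates the test image best."""
    def residual(B):
        return np.linalg.norm(test_image - B @ (B.T @ test_image))
    return min(bases, key=lambda name: residual(bases[name]))

rng = np.random.default_rng(0)
# Placeholder data: 30 synthesized 32x32 images per identity, flattened to 1024 pixels.
bases = {name: subspace_basis([rng.standard_normal(1024) for _ in range(30)])
         for name in ("subject-01", "subject-02")}
print(recognize(rng.standard_normal(1024), bases))
```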
14.
Most simulations of colloidal suspensions treat the solvent implicitly or as a continuum. However, as particle size decreases to the nanometer scale, this approximation fails and one needs to treat the solvent explicitly. Due to the large number of smaller solvent particles, such simulations are computationally challenging. Additionally, as the ratio of nanoparticle size to solvent size increases, commonly used molecular dynamics algorithms for neighbor finding and parallel communication become inefficient. Here we present modified algorithms that enable fast single-processor performance and reasonable parallel scalability for mixtures with a wide range of particle size ratios. The methods developed are applicable to any system with widely varying force distance cutoffs, independent of particle sizes and of the interaction potential. As a demonstration of the new algorithms' effectiveness, we present results for the pair correlation function and diffusion constant for mixtures where colloidal particles interact via integrated potentials. In these systems, with nanoparticles 20 times larger than the surrounding solvent particles, our parallel molecular dynamics code runs more than 100 times faster using the new algorithms.
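The core of the modified neighbour finding, a cutoff that depends on the pair of particle types instead of one global cutoff sized for the largest pair, can be sketched as below; the brute-force double loop stands in for the cell-list and Verlet-list machinery of a production MD code, and the type names and cutoff values are illustrative.

```python
import numpy as np

def neighbor_lists(positions, types, cutoffs):
    """Build neighbour lists using a per-pair-type distance cutoff, e.g.
    cutoffs[('colloid', 'solvent')], rather than a single global cutoff."""
    n = len(positions)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            rc = cutoffs[tuple(sorted((types[i], types[j])))]
            if np.linalg.norm(positions[i] - positions[j]) < rc:
                neighbors[i].append(j)
                neighbors[j].append(i)
    return neighbors

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 20.0, size=(200, 3))
kinds = ["colloid"] * 10 + ["solvent"] * 190
cut = {("colloid", "colloid"): 6.0, ("colloid", "solvent"): 3.5, ("solvent", "solvent"): 1.2}
print(sum(len(nb) for nb in neighbor_lists(pos, kinds, cut)) // 2, "pairs within cutoff")
```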
15.
Slicing is a program analysis technique which can be used to reduce the size of a model and avoid state-space explosion in model checking. In this work, a static slicing technique is proposed for reducing Rebeca models with respect to a property. For applying the actor-based slicing techniques, the Rebeca control flow graph (RCFG) and the Rebeca dependence graph (RDG) are introduced. We propose two different approaches for constructing the RDG, where each approach can be more effective under certain conditions. As static slicing usually produces large slices, two other slicing-based reduction techniques, step-wise slicing and bounded slicing, are proposed as simple novel ideas. Step-wise slicing first generates slices that overapproximate the behavior of the original model and then refines them, while bounded slicing is based on the semantics of nondeterministic assignments in Rebeca. We also propose a static slicing algorithm for deadlock detection (in the absence of any particular property). The efficiency of these techniques is demonstrated by applying them to several case studies included in this paper. Similar techniques can be applied to other actor-based languages.
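Static slicing with respect to a property boils down to backward reachability over the dependence graph: keep only the statements the slicing criterion transitively depends on. The toy graph below is an assumed encoding for illustration, not Rebeca's actual RDG construction.

```python
from collections import deque

def backward_slice(dependences, criterion):
    """dependences[n] lists the nodes that n depends on (data/control dependences).
    Returns every node the criterion transitively depends on, i.e. the slice."""
    in_slice, work = set(criterion), deque(criterion)
    while work:
        n = work.popleft()
        for m in dependences.get(n, ()):
            if m not in in_slice:
                in_slice.add(m)
                work.append(m)
    return in_slice

# Toy dependence graph: statement ids mapped to the statements they depend on.
rdg = {"s5": ["s3", "s4"], "s4": ["s2"], "s3": ["s1"], "s2": [], "s1": [], "s6": ["s2"]}
print(sorted(backward_slice(rdg, {"s5"})))   # s6 is sliced away: the property never reads it
```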
16.
17.
As shown in [1], the problem of routing a flow subject to a worst-case end-to-end delay constraint in a packet-based network can be formulated as a Mixed-Integer Second-Order Cone Program and solved with general-purpose tools in real time on realistic instances. However, that result only holds for one particular class of packet schedulers, Strictly Rate-Proportional ones, and implicitly considers each link to be fully loaded, so that the reserved rate of a flow coincides with its guaranteed rate. These assumptions make latency expressions simpler and enforce perfect isolation between flows, i.e., admitting a new flow cannot increase the delay of existing ones. Other commonplace schedulers both yield more complex latency formulae and do not enforce flow isolation. Furthermore, the delay actually depends on the guaranteed rate of the flow, which can be significantly larger than the reserved rate if the network is unloaded. In this paper, we extend the result to other classes of schedulers and to a more accurate representation of the latency, showing that, even when admission control needs to be factored in, the problem is still efficiently solvable for realistic instances, provided that the right modeling choices are made.
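A compact sketch of the kind of mixed-integer second-order cone model referred to above, written with CVXPY: binary variables select the arcs of the path, the reserved rate r is minimised, and each selected arc contributes a rate-dependent delay term c_i/r expressed through a hyperbolic (second-order cone) constraint. The tiny graph, the per-arc constants, and the single-term latency shape are illustrative assumptions, not any scheduler's actual latency formula; solving requires a MISOCP-capable solver such as ECOS_BB, SCIP, or MOSEK.

```python
import cvxpy as cp
import numpy as np

# Toy network: arcs (tail, head) with per-arc latency constants c_i (delay_i ~ c_i / r).
arcs = [(0, 1), (1, 3), (0, 2), (2, 3)]
c = np.array([40.0, 40.0, 25.0, 70.0])
deadline, r_max, src, dst = 10.0, 100.0, 0, 3

x = cp.Variable(len(arcs), boolean=True)    # arc-selection variables
r = cp.Variable(nonneg=True)                # reserved rate of the flow
t = cp.Variable(len(arcs), nonneg=True)     # per-arc delay bounds

cons = [r <= r_max, cp.sum(t) <= deadline]
# Flow conservation: one unit of flow leaves src and enters dst.
for v in range(4):
    out_v = sum(x[i] for i, (u, _) in enumerate(arcs) if u == v)
    in_v = sum(x[i] for i, (_, w) in enumerate(arcs) if w == v)
    cons.append(out_v - in_v == (1 if v == src else -1 if v == dst else 0))
# t_i * r >= c_i * x_i  (x_i binary, so x_i**2 == x_i), written as a second-order cone:
# ||(2*sqrt(c_i)*x_i, t_i - r)||_2 <= t_i + r.
for i in range(len(arcs)):
    cons.append(cp.SOC(t[i] + r, cp.hstack([2 * np.sqrt(c[i]) * x[i], t[i] - r])))

prob = cp.Problem(cp.Minimize(r), cons)
prob.solve(solver=cp.ECOS_BB)               # any MISOCP-capable solver works here
print(prob.status, "reserved rate:", r.value, "selected arcs:", np.round(x.value))
```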
18.
Non-visual biological effect of lighting and the practical meaning for lighting for work  (Cited by: 2; self-citations: 0; citations by others: 2)
van Bommel WJ. Applied Ergonomics, 2006, 37(4): 461-466
The effects of good lighting extend much further than we used to think. Recent medical and biological research has consistently shown that light entering the human eyes has, apart from a visual effect, also an important non-visual biological effect on the human body. As a consequence, good lighting has a positive influence on health, well-being, alertness, and even sleep quality. Our better understanding of the diversity of lighting effects teaches us that new rules governing the design of good and healthy lighting installations are required. Thanks to the recent discovery of a novel photoreceptor in the eye and its probable distribution within the eye, we can now begin to define these new rules. They will guide us towards dynamic lighting installations: that is to say, dynamic in lighting level and in the tint of whiteness of the lighting colour.
19.