Similar Documents
20 similar documents found (search time: 15 ms).
1.
This work introduces a new scheme for action unit (AU) detection in 3D facial videos. Sets of features that define action unit activation in a robust manner are proposed. These features are computed from eight facial landmarks detected on each facial mesh and involve angles, areas and distances. Support vector machine classifiers are then trained on these features to perform action unit detection. The proposed AU detection scheme is used in a dynamic 3D facial expression retrieval and recognition pipeline, highlighting the AUs that carry the most facial expression information while, at the same time, achieving better performance than state-of-the-art methodologies.
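A minimal sketch of the general idea (not the paper's exact landmark indices or feature set): derive distance, angle and area features from 3D landmark coordinates and train a binary SVM per action unit. The landmark choices, data and labels below are placeholders.

```python
# Minimal sketch (illustrative, not the paper's exact features): geometric
# features from 3D facial landmarks feeding a per-AU SVM classifier.
import numpy as np
from sklearn.svm import SVC

def geometric_features(landmarks):
    """landmarks: (8, 3) array of 3D landmark coordinates for one facial mesh."""
    feats = []
    # pairwise distances between all landmarks
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            feats.append(np.linalg.norm(landmarks[i] - landmarks[j]))
    # angle at landmark 0 formed by landmarks 1 and 2 (illustrative choice)
    v1 = landmarks[1] - landmarks[0]
    v2 = landmarks[2] - landmarks[0]
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    feats.append(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    # area of the triangle spanned by landmarks 0, 1, 2
    feats.append(0.5 * np.linalg.norm(np.cross(v1, v2)))
    return np.array(feats)

# X: one feature vector per facial mesh; y: AU active (1) / inactive (0) -- placeholders
X = np.stack([geometric_features(np.random.rand(8, 3)) for _ in range(200)])
y = np.random.randint(0, 2, size=200)
clf = SVC(kernel="rbf").fit(X, y)        # one binary SVM per action unit
print(clf.predict(X[:5]))
```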

2.
To address the issue of malware detection through large sets of applications, researchers have recently started to investigate the capabilities of machine-learning techniques for proposing effective approaches. So far, several promising results have been recorded in the literature, with many approaches assessed under what we call in-the-lab validation scenarios. This paper revisits the purpose of malware detection to discuss whether such in-the-lab validation scenarios provide reliable indications of the performance of malware detectors in real-world settings, a.k.a. in the wild. To this end, we have devised several machine-learning classifiers that rely on a set of features built from applications' CFGs. We use a sizeable dataset of over 50 000 Android applications collected from sources where state-of-the-art approaches have selected their data. We show that, in the lab, our approach outperforms existing machine-learning-based approaches. However, this high performance does not translate into high performance in the wild. The performance gap we observed (F-measures dropping from over 0.9 in the lab to below 0.1 in the wild) raises one important question: how do state-of-the-art approaches perform in the wild?
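A minimal sketch of the evaluation protocol being contrasted, assuming CFG-derived feature vectors have already been extracted; the classifier choice, feature matrices and labels are placeholders, not the paper's pipeline. It shows how an "in the lab" F-measure (random split of one corpus) is compared against an "in the wild" F-measure (a disjoint corpus).

```python
# Placeholder data: compare the F-measure on a random split of the training
# corpus ("in the lab") with the F-measure on apps from a different source
# ("in the wild").
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_lab = rng.random((1000, 64))            # CFG feature vectors (placeholder)
y_lab = rng.integers(0, 2, 1000)          # 1 = malware, 0 = goodware
X_wild = rng.random((500, 64))            # apps drawn from another source
y_wild = rng.integers(0, 2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X_lab, y_lab, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("in-the-lab F-measure:", f1_score(y_te, clf.predict(X_te)))
print("in-the-wild F-measure:", f1_score(y_wild, clf.predict(X_wild)))
```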

3.
We propose an optical scheme to prepare large-scale maximally entangled W states by fusing arbitrary-size polarization-entangled W states via a polarization-dependent beam splitter. Most of the currently existing fusion schemes suffer from the qubit loss problem, that is, the number of output entangled qubits is smaller than the sum of the numbers of input entangled qubits, which inevitably decreases the fusion efficiency and increases the number of fusion steps as well as the requirement for quantum memories. In our scheme, we design an effective fusion mechanism to generate a \(W_{m+n}\) state from an n-qubit W state and an m-qubit W state without any qubit loss. As this fusion mechanism clearly increases the final size of the obtained W state, it is more efficient and feasible. In addition, our scheme can also generate a \(W_{m+n+t-1}\) state by fusing a \(W_m\), a \(W_n\) and a \(W_t\) state. This is a great improvement over the current scheme, which has to lose at least two particles in the fusion of three W states. Moreover, it can be generalized to the case of fusing k different W states, and all the fusion schemes proposed here can also start from Bell states.
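For reference, the W state fused here is the standard n-qubit state (this definition is general background, not taken from the paper):

```latex
% Standard n-qubit W state: a uniform superposition of all basis states
% with exactly one qubit in |1>.
\[
|W_n\rangle \;=\; \frac{1}{\sqrt{n}}\bigl(|10\cdots 0\rangle + |01\cdots 0\rangle + \cdots + |00\cdots 1\rangle\bigr)
\]
```

Under the lossless fusion described above, a \(W_n\) and a \(W_m\) state yield the full \(W_{m+n}\) state, and three states \(W_m\), \(W_n\), \(W_t\) yield \(W_{m+n+t-1}\), i.e. only one qubit is consumed.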

4.
In this paper, a steganographic scheme adopting the concept of generalized \(K_d\)-distance N-dimensional pixel matching is proposed. The generalized pixel matching embeds a B-ary digit (B is a function of K and N) into a cover vector of length N, where the embedding distortion measured by the order-d Minkowski distance is no larger than K. In contrast to other pixel-matching-based schemes, an N-dimensional reference table is used. By choosing d, K, and N adaptively, an embedding strategy suitable for arbitrary relative capacity can be developed. Additionally, an optimization algorithm, namely the successive iteration algorithm (SIA), is proposed to optimize the codeword assignment in the reference table. Benefiting from the high-dimensional embedding and the optimization algorithm, nearly maximal embedding efficiency is achieved. Compared with other content-free steganographic schemes, the proposed scheme provides better image quality and statistical security. Moreover, the proposed scheme performs comparably to state-of-the-art content-based approaches after being combined with image models.
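A toy sketch of reference-table pixel matching under a Minkowski distortion bound. It uses a simple weighted-sum-modulo-B table (not the paper's SIA-optimized table) and brute-force search over small offsets, purely to make the embedding constraint concrete; B, K, d, the weights and the cover pixels are illustrative.

```python
# Toy reference-table pixel matching: embed one B-ary digit into an N-pixel
# cover vector by picking the least-distorted modification whose table value
# equals the digit, subject to an order-d Minkowski distance bound K.
from itertools import product

def minkowski(offsets, d):
    return sum(abs(o) ** d for o in offsets) ** (1.0 / d)

def table_value(vec, B, weights):
    # simple EMD-style reference table: weighted sum modulo B (illustrative)
    return sum(w * v for w, v in zip(weights, vec)) % B

def embed(cover, digit, B, K, d, weights):
    best = None
    # per-pixel change of at most K is necessary, so this enumeration is exhaustive
    for offsets in product(range(-K, K + 1), repeat=len(cover)):
        if minkowski(offsets, d) > K:
            continue
        stego = [c + o for c, o in zip(cover, offsets)]
        if table_value(stego, B, weights) == digit:
            dist = minkowski(offsets, d)
            if best is None or dist < best[0]:
                best = (dist, stego)
    return best[1] if best else None

weights = [1, 2, 4]                     # illustrative table weights
cover = [120, 97, 201]                  # N = 3 cover pixels
stego = embed(cover, digit=5, B=9, K=2, d=2, weights=weights)
print(stego, table_value(stego, B=9, weights=weights))   # table value equals the digit
```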

5.
We propose techniques for processing SPARQL queries over a large RDF graph in a distributed environment. We adopt a “partial evaluation and assembly” framework. Answering a SPARQL query Q is equivalent to finding subgraph matches of the query graph Q over the RDF graph G. Based on properties of subgraph matching over a distributed graph, we introduce local partial matches as partial answers in each fragment of the RDF graph G. For assembly, we propose two methods: centralized and distributed assembly. We analyze our algorithms both theoretically and experimentally. Extensive experiments over both real and benchmark RDF repositories of billions of triples confirm that our method is superior to state-of-the-art methods in both performance and scalability.

6.
Existing spatiotemporal indexes suffer from either large update cost or poor query performance, except for the \(B^x\)-tree (the state of the art), which consists of multiple \(B^+\)-trees indexing the 1D values transformed from the (multi-dimensional) moving objects based on a space-filling curve (Hilbert, in particular). This curve, however, does not consider object velocities, and as a result, query processing with a \(B^x\)-tree retrieves a large number of false hits, which seriously compromises its efficiency. It is natural to wonder “can we obtain better performance by capturing also the velocity information, using a Hilbert curve of a higher dimensionality?”. This paper provides a positive answer by developing the \(B^{dual}\)-tree, a novel spatiotemporal access method leveraging pure relational methodology. We show, with theoretical evidence, that the \(B^{dual}\)-tree indeed outperforms the \(B^x\)-tree in most circumstances. Furthermore, our technique can effectively answer progressive spatiotemporal queries, which are poorly supported by \(B^x\)-trees.
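A minimal sketch of the 1D-transformation idea behind such indexes, using Z-order (Morton) bit interleaving as a simple stand-in for the Hilbert curve the paper uses: quantize each dimension and interleave the bits into one integer key that an ordinary B+-tree can index. The bit widths and value ranges are assumptions for illustration.

```python
# Z-order (Morton) stand-in for the Hilbert-curve transformation: map a
# quantized multi-dimensional point to a single 1D key.
def interleave(values, bits=8):
    """Bit-interleave several quantized coordinates into one integer key."""
    key = 0
    for b in range(bits - 1, -1, -1):        # most to least significant bit
        for v in values:
            key = (key << 1) | ((v >> b) & 1)
    return key

def quantize(x, lo, hi, bits=8):
    """Map a real coordinate in [lo, hi) to an integer grid cell."""
    cell = int((x - lo) / (hi - lo) * (1 << bits))
    return max(0, min((1 << bits) - 1, cell))

# location-only key (B^x style) vs. location-plus-velocity key (B^dual style)
loc_key  = interleave([quantize(512.3, 0, 1024), quantize(77.9, 0, 1024)])
dual_key = interleave([quantize(512.3, 0, 1024), quantize(77.9, 0, 1024),
                       quantize(-3.2, -50, 50),  quantize(12.8, -50, 50)])
print(loc_key, dual_key)
```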

7.
Efficiently answering reachability queries, which check whether one vertex can reach another in a directed graph, has been studied extensively in recent years. However, the graphs that people are facing and generating nowadays are growing so rapidly in size that simple algorithms, such as BFS and DFS, are no longer feasible. Although Refined Online Search algorithms can scale to large graphs, they all suffer from the false-positive problem. In this paper, we analyze the cause of false positives and propose an efficient High-Dimensional coordinate generating method based on Graph Dominance Drawing (HD-GDD) to answer reachability queries in linear or even constant time. We conduct experiments on different graph structures and different graph sizes to fully evaluate the performance and behavior of our proposal. Empirical results demonstrate that our method outperforms state-of-the-art algorithms and can handle very large graphs.
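For concreteness, this is the baseline linear-time query that such index-based methods avoid: a plain BFS reachability check over an adjacency-list digraph. It is shown only to make the query semantics explicit, not as part of HD-GDD.

```python
# Baseline BFS reachability on a directed graph given as adjacency lists.
from collections import deque

def reachable(graph, src, dst):
    """Return True if dst can be reached from src in the directed graph."""
    if src == dst:
        return True
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v == dst:
                return True
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(reachable(g, "a", "d"), reachable(g, "c", "a"))   # True False
```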

8.
Based on a unitary phase shift operation on a single qubit in association with Shamir's (t, n) secret sharing, a (t, n) threshold quantum secret sharing scheme (or (t, n)-QSS) is proposed to share both classical information and quantum states. The scheme uses decoy photons to prevent eavesdropping and employs the secret in Shamir's scheme as the private value to guarantee the correctness of secret reconstruction. Analyses show it is resistant to the typical intercept-and-resend attack, the entangle-and-measure attack, and participant attacks such as the entanglement swapping attack. Moreover, it is physically easier to realize and more practical in applications than related schemes. By the method in our scheme, new (t, n)-QSS schemes can be easily constructed using other classical (t, n) secret sharing schemes.
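A sketch of the classical Shamir (t, n) component that such schemes build on (the quantum phase-shift encoding itself is not modelled here): shares are points of a random degree-(t-1) polynomial over a prime field, and the secret is recovered by Lagrange interpolation at x = 0. The prime modulus below is an illustrative choice.

```python
# Classical Shamir (t, n) secret sharing over GF(p) -- background sketch.
import random

P = 2**31 - 1                                   # an illustrative prime modulus

def make_shares(secret, t, n, p=P):
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, p=P):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        # pow(den, p-2, p) is the modular inverse of den (p is prime)
        secret = (secret + yi * num * pow(den, p - 2, p)) % p
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))                  # any 3 of the 5 shares suffice
```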

9.
Using lattice basis delegation in a fixed dimension, we propose an efficient lattice-based hierarchical identity-based encryption (HIBE) scheme in the standard model whose public key size is only \((dm^2 + mn)\log q\) bits and whose message-ciphertext expansion factor is only \(\log q\), where d is the maximum hierarchical depth and (n, m, q) are public parameters. In our construction, a novel public key assignment rule is used to assign one random public matrix to every two identity bits, which implies that d random public matrices are enough to build the proposed HIBE scheme in the standard model, compared with the 2d such public matrices needed in the scheme proposed at Crypto 2010, whose public key size is \((2dm^2 + mn + m)\log q\). To reduce the message-ciphertext expansion factor of the proposed scheme to \(\log q\), the encryption algorithm of this scheme is built on Gentry's encryption scheme, by which \(m^2\) bits of plaintext are encrypted into \(m^2\log q\) bits of ciphertext in a one-time encryption operation. Hence, the presented scheme has advantages with respect to not only the public key size but also the message-ciphertext expansion factor. Based on the hardness of the learning with errors problem, we demonstrate that the scheme is secure under selective-identity and chosen-plaintext attacks.

10.
A (t, n) threshold quantum secret sharing (QSS) scheme is proposed based on a single d-level quantum system. It enables the (t, n) threshold structure based on Shamir's secret sharing and requires only sequential communication in the d-level quantum system to recover the secret. Besides, the scheme provides a verification mechanism that employs an additional qudit to detect cheating and eavesdropping during secret reconstruction, and allows a participant to use the share repeatedly. Analyses show that the proposed scheme is resistant to typical attacks. Moreover, the scheme is scalable in the number of participants and easier to realize than related schemes. More generally, our scheme also presents a generic method to construct new (t, n) threshold QSS schemes based on a d-level quantum system from other classical threshold secret sharing schemes.

11.
Disjunctive Temporal Problems (DTPs) with Preferences (DTPPs) extend DTPs with piecewise-constant preference functions associated with each constraint of the form \(l \le x - y \le u\), where x, y are (real or integer) variables and l, u are numeric constants. The goal is to find an assignment to the variables of the problem that maximizes the sum of the preference values of the satisfied DTP constraints, where such values are obtained by aggregating the preference functions of the satisfied constraints under a “max” semantics. The state-of-the-art approach in the field, implemented in the native DTPP solver Maxilitis, extends the approach of the native DTP solver Epilitis. In this paper we present alternative approaches that translate DTPPs to Maximum Satisfiability of a set of Boolean combinations of constraints of the form \(l \prec x - y \prec u\), with \(\prec\, \in \{<, \le\}\), extending previous work dealing with constant preference functions only. We prove correctness and completeness of the approaches. Results obtained with the Satisfiability Modulo Theories (SMT) solvers Yices and MathSAT on randomly generated DTPPs and DTPPs built from real-world benchmarks show that one of our translations is competitive with, and can be faster than, Maxilitis (this is an extended and revised version of Bourguet et al. 2013).
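A toy evaluator of the DTPP objective under the “max” semantics, to make the optimization target concrete: each DTP constraint is a disjunction of bounds on x − y, each bound carrying a piecewise-constant preference function, and a satisfied DTP constraint contributes the maximum preference among its satisfied disjuncts. This only scores a given assignment; it is not the MaxSAT/SMT translation of the paper, and the data structures are assumptions.

```python
# Score an assignment of a DTPP under the "max" aggregation semantics.
def pref_value(pieces, v):
    """pieces: list of (lo, hi, value) intervals of a piecewise-constant function."""
    return max((val for lo, hi, val in pieces if lo <= v <= hi), default=0)

def dtpp_value(assignment, dtp_constraints):
    total = 0
    for disjuncts in dtp_constraints:            # one DTP constraint = list of disjuncts
        best = 0
        for (x, y, l, u, pieces) in disjuncts:
            diff = assignment[x] - assignment[y]
            if l <= diff <= u:                   # disjunct satisfied
                best = max(best, pref_value(pieces, diff))
        total += best
    return total

constraints = [
    [("x", "y", 0, 10, [(0, 5, 3), (6, 10, 1)])],                 # prefer x - y in [0, 5]
    [("y", "z", -4, 4, [(-4, 4, 2)]), ("x", "z", 8, 20, [(8, 20, 5)])],
]
print(dtpp_value({"x": 12, "y": 7, "z": 3}, constraints))          # 3 + 5 = 8
```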

12.
A Multi-Secret Sharing (MSS) scheme is an efficient method of transmitting more than one secret securely. In an (n, n)-MSS scheme, n secrets are used to create n shares, and all n shares are required for reconstruction. In state-of-the-art schemes, n secrets are used to construct n or n + 1 shares, but partial secret information can be recovered from fewer than n shares. There is therefore a need for an efficient and secure (n, n)-MSS scheme that satisfies the threshold property. In this paper, we propose three different (n, n)-MSS schemes. In the first and second schemes, Boolean XOR is used, and in the third scheme we use modular arithmetic. For quantitative analysis, similarity metrics and structural and differential measures are considered. The proposed scheme using modular arithmetic performs better than the Boolean XOR-based schemes. The proposed (n, n)-MSS schemes outperform existing techniques in terms of security, time complexity, and randomness of shares.
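A minimal XOR-based (n, n) sketch, shown only to make the threshold property concrete: each secret is split independently into n additive (XOR) pieces and share j collects the j-th piece of every secret, so all n shares are required and any n−1 reveal nothing. This is textbook additive sharing, not the paper's (more compact) constructions.

```python
# Textbook XOR-based (n, n) multi-secret sharing for byte-string secrets.
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret, n):
    pieces = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for p in pieces:
        last = xor(last, p)
    return pieces + [last]                       # XOR of all pieces == secret

def share_secrets(secrets, n):
    piece_lists = [split(s, n) for s in secrets]
    # share j = tuple of the j-th piece of every secret
    return [tuple(pl[j] for pl in piece_lists) for j in range(n)]

def reconstruct(shares):
    out = []
    for i in range(len(shares[0])):              # one entry per secret
        s = shares[0][i]
        for sh in shares[1:]:
            s = xor(s, sh[i])
        out.append(s)
    return out

secrets = [b"secret-1", b"secret-2", b"secret-3"]
shares = share_secrets(secrets, n=3)
print(reconstruct(shares))                       # all three shares -> all secrets
```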

13.
Recent advances have demonstrated substantial benefits from learning with both generative and discriminative parameters. On the one hand, generative approaches address the estimation of the parameters of the joint distribution \(\mathrm{P}(y,\mathbf{x})\), which for most network types is very computationally efficient (a notable exception being Markov networks); on the other hand, discriminative approaches address the estimation of the parameters of the posterior distribution and are more effective for classification, since they fit \(\mathrm{P}(y|\mathbf{x})\) directly. However, discriminative approaches are less computationally efficient, as the normalization factor in the conditional log-likelihood precludes closed-form parameter estimation. This paper introduces a new discriminative parameter learning method for Bayesian network classifiers that combines, in an elegant fashion, parameters learned using both generative and discriminative methods. The proposed method is discriminative in nature but uses estimates of generative probabilities to speed up the optimization process. A second contribution is a simple framework to characterize the parameter learning task for Bayesian network classifiers. We conduct an extensive set of experiments on 72 standard datasets and demonstrate that our proposed discriminative parameterization provides an efficient alternative to other state-of-the-art parameterizations.
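A tiny naive Bayes illustration of the generative side only: closed-form frequency estimates of P(y) and P(x_i | y), and the posterior P(y | x) they induce via Bayes' rule. The data are placeholders, and the paper's actual contribution, the discriminative optimization seeded by such estimates, is not reproduced here.

```python
# Generative parameter estimates for a naive Bayes classifier and the
# resulting posterior P(y | x).
import numpy as np

X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 0], [0, 1]])   # binary features
y = np.array([1, 1, 0, 0, 1, 0])

classes = np.unique(y)
prior = {c: np.mean(y == c) for c in classes}
# Laplace-smoothed conditional probabilities P(x_i = 1 | y = c)
cond = {c: (X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2) for c in classes}

def posterior(x):
    joint = {}
    for c in classes:
        p = prior[c]
        for i, xi in enumerate(x):
            p *= cond[c][i] if xi == 1 else 1 - cond[c][i]
        joint[c] = p                          # proportional to P(y = c, x)
    z = sum(joint.values())
    return {c: joint[c] / z for c in classes}  # normalized P(y = c | x)

print(posterior([1, 0]))
```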

14.
Nowadays, many real-world problems are encoded into SAT instances and efficiently solved by modern SAT solvers. These solvers, usually known as Conflict-Driven Clause Learning (CDCL) SAT solvers, include a variety of sophisticated techniques, such as clause learning, lazy data structures, conflict-based adaptive branching heuristics, or random restarts, among others. However, the reasons for their efficiency in solving real-world, or industrial, SAT instances are still unknown. The common wisdom in the SAT community is that these techniques exploit some hidden structure of real-world problems.

In this thesis, we characterize some important features of the underlying structure of industrial SAT instances, namely the community structure and the self-similar structure. We observe that most industrial SAT formulas, viewed as graphs, have these two properties. This means that (i) in a graph with a clear community structure, i.e. having high modularity, we can find a partition of its nodes into communities such that most edges connect nodes of the same community; and (ii) in a graph with a self-similar pattern, i.e. being fractal, its shape is kept after re-scalings, i.e., grouping sets of nodes into a single node. We also analyze how these structures are affected by CDCL techniques during the search.

Using the previous structural studies, we propose three applications. First, we face the problem of generating pseudo-industrial random SAT instances using the notion of modularity. Our model generates instances similar to (classical) random SAT formulas when the modularity is low, but when this value is high, our model is also adequate to model realistic pseudo-industrial problems. Second, we propose a method based on the community structure of the instance to detect relevant learnt clauses. Our technique augments the original instance with this set of relevant clauses, which results in an overall improvement of the efficiency of several state-of-the-art CDCL SAT solvers. Finally, we analyze the classification of industrial SAT instances into families using the previously analyzed structural features, and we compare them to other classifiers commonly used in portfolio SAT approaches.

In summary, this dissertation extends the understanding of the structure of SAT instances, with the aim of better explaining the success of CDCL techniques and possibly improving them, and proposes a number of applications based on this analysis of the underlying structure of SAT formulas.
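For reference, the modularity measure invoked above is Newman's standard quantity for a node partition; the sketch below computes it on a toy graph (the toy graph and community labels are made up for illustration, not a SAT instance).

```python
# Modularity Q of a node partition (Newman's formula):
#   Q = sum over communities c of ( e_c / m  -  (d_c / (2m))^2 )
# where m = |E|, e_c = edges inside c, d_c = total degree of nodes in c.
def modularity(edges, community):
    m = len(edges)
    intra, degree = {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if community[u] == community[v]:
            intra[community[u]] = intra.get(community[u], 0) + 1
    q = 0.0
    for c in set(community.values()):
        e_c = intra.get(c, 0)
        d_c = sum(d for node, d in degree.items() if community[node] == c)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# toy graph: two triangles joined by a single edge -> fairly high modularity
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
community = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
print(modularity(edges, community))          # about 0.36
```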

15.
Let \(G=(V,E)\) be an unweighted undirected graph with n vertices and m edges, and let \(k>2\) be an integer. We present a routing scheme with a poly-logarithmic header size that, given a source s and a destination t at distance \(\varDelta \) from s, routes a message from s to t on a path whose length is \(O(k\varDelta +m^{1/k})\). The total space used by our routing scheme is \(mn^{O(1/\sqrt{\log n})}\), which is almost linear in the number of edges of the graph. We also present a routing scheme with \(n^{O(1/\sqrt{\log n})}\) header size and the same stretch (up to constant factors). In this routing scheme, the routing table of every \(v\in V\) has size at most \(kn^{O(1/\sqrt{\log n})}deg(v)\), where deg(v) is the degree of v in G. Our results are obtained by combining a general technique of Bernstein (2009), presented in the context of dynamic graph algorithms, with several new ideas and observations.

16.
In this paper, an innovative framework labeled cooperative cognitive maritime big data systems (CCMBDSs) is developed to provide opportunistic channel access and secure communication at sea. A two-phase frame structure is applied to let secondary users (SUs) fully utilize the transmission opportunities for a portion of time as a reward for cooperation with primary users (PUs). Amplify-and-forward (AF) relaying is exploited at the SU nodes, and a backward-induction-based Stackelberg game is employed to optimally determine the SU, the power consumption and the time portion of cooperation, for both the non-secure and the secure communication scenario. Specifically, a jammer-based secure communication scheme is developed to maximize the secure utility of the PU, countering the situation in which the eavesdropper could overhear the signals from SU\(_i\) and the jammer. Closed-form solutions for the best access time portion as well as the power for SU\(_i\) and the jammer are derived to realize the Nash equilibrium. Simulation results validate the effectiveness of our proposed strategy.

17.
We propose a scheme of cyclic quantum teleportation for three unknown qubits using a six-qubit maximally entangled state as the quantum channel. Suppose there are three observers, Alice, Bob and Charlie, each of whom has been given a quantum system such as a photon or a spin-\(\frac{1}{2}\) particle, prepared in a state unknown to them. We show how to implement cyclic quantum teleportation in which Alice can transfer her single-qubit state of qubit a to Bob, Bob can transfer his single-qubit state of qubit b to Charlie, and Charlie can transfer his single-qubit state of qubit c to Alice. We can also implement cyclic quantum teleportation with \(N\geqslant 3\) observers by constructing a 2N-qubit maximally entangled state as the quantum channel. By changing the quantum channel, we can change the direction of teleportation. Therefore, our scheme can realize teleportation in quantum information networks with N observers in different directions, and the security of our scheme is also investigated at the end of the paper.

18.
This paper presents a Gaussian sparse representation cooperative model for tracking a target in heavily occluded video sequences by combining sparse coding and locality-constrained linear coding algorithms. Different from the usual method of using an \(\ell_1\)-norm regularization term in the framework of particle filters to form the sparse collaborative appearance model (SCM), we employ the \(\ell_1\)-norm and \(\ell_2\)-norm to perform feature selection and then encode the candidate samples to generate the sparse coefficients. Consequently, our method not only easily obtains sparse solutions but also reduces the reconstruction error. Compared to state-of-the-art algorithms, our scheme achieves better performance in tracking a target through heavily occluded video sequences. Extensive target tracking experiments were carried out to compare the results of our proposed algorithm with various other target tracking methods.
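A minimal sparse-coding step for context, assuming a dictionary of template feature vectors: represent a candidate sample as a sparse combination of templates via ℓ1-regularized least squares (Lasso). The dictionary and signal are synthetic, and this shows only the sparse-coefficient step, not the full collaborative appearance model or the locality-constrained (ℓ2) part.

```python
# l1-regularized sparse coding of a candidate against a template dictionary.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 20))          # dictionary: 20 template atoms of dim 64
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [0.8, -0.5, 0.3]      # candidate built from 3 templates
y = D @ w_true + 0.01 * rng.standard_normal(64)

lasso = Lasso(alpha=0.01, max_iter=10000).fit(D, y)
coef = lasso.coef_                          # sparse coefficients
print(np.nonzero(coef)[0])                  # mostly the indices 2, 7, 11
print("reconstruction error:", np.linalg.norm(y - D @ coef - lasso.intercept_))
```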

19.
This paper addresses the open problem, posed by Gagné et al. at Pairing 2012, of designing attribute-based signature (ABS) schemes with a constant number of bilinear pairing operations for signature verification, or with short signatures, for more general policies. Designing constant-size ABS for expressive access structures is a challenging task. We design two key-policy ABS schemes with constant-size signatures for expressive linear secret-sharing scheme (LSSS)-realizable monotone access structures. Both schemes use only 3 pairing operations in the signature verification process. The first scheme is a small-universe construction, while the second scheme supports large universes of attributes. The signing key is computed according to an LSSS-realizable access structure over the signer's attributes, and the message is signed with an attribute set satisfying the access structure. Our ABS schemes provide existential unforgeability in the selective attribute set security model and preserve signer privacy. We also propose a new attribute-based signcryption (ABSC) scheme for LSSS-realizable access structures that uses only 6 pairings and makes the ciphertext size constant. Our scheme is significantly more efficient than existing ABSC schemes. While the secret key (signing key or decryption key) size increases by a factor of the number of attributes used in the system, the number of pairing evaluations is reduced to a constant. Our protocol achieves (a) ciphertext indistinguishability under adaptive chosen ciphertext attacks assuming the hardness of the decisional Bilinear Diffie–Hellman Exponent problem and (b) existential unforgeability under adaptive chosen message attack assuming the hardness of the computational Diffie–Hellman Exponent problem. The security proofs are in the selective attribute set security model without any random oracle heuristic. In addition, our ABSC achieves public verifiability of the ciphertext, enabling any party to verify its integrity and validity.

20.
In state-of-the-art methods for (large) image transmission, no user interaction behaviors (e.g., user tapping) can be actively involved to affect the transmission performance (e.g., higher image transmission efficiency at relatively lower image quality). So, to effectively and efficiently reduce large-image transmission costs in resource-constrained mobile wireless networks (MWN), we design a content-based and bandwidth-aware interactive large image transmission method in MWN, called IIT. To the best of our knowledge, this is the first study on interactive image transmission. The transmission process of IIT works as follows: before transmission, a preprocessing step computes the optimal initial image block (IB) replicas based on the image content and the current network bandwidth at the sender node. During transmission, in case of unsatisfactory transmission efficiency, the user's anxiety to preview the image is implicitly indicated by the frequency of tapping the screen. In response, the transmission resolutions of the candidate IB replicas are dynamically adjusted based on the user anxiety degree (UAD). Finally, the candidate IB replicas are transmitted with different priorities to the receiver for reconstruction and display. The experimental results show that our approach is both efficient and effective, minimizing the response time by decreasing the network transmission cost while improving the user experience.
