Similar Literature
20 similar documents found (search time: 15 ms)
1.
Concurrent techniques have been widely adopted in software systems, and data races have become a serious threat to the stability and security of concurrent systems. Previous precise race detection techniques either miss many races or are only suitable for specific programs, such as programs executed in a virtual machine rather than on actual hardware. To solve these problems, this paper introduces a dynamic predictive race detector, called LayDetect, which detects predictable races in C/C++ programs. LayDetect applies an innovative layering technique, which allows it to detect more races than other detectors such as FastTrack. We have implemented and evaluated LayDetect with well-known benchmarks and real-world applications. LayDetect detected 3.7 M races at run time, two orders of magnitude more than FastTrack, while its average slowdown (3.0\(\times \)) and space overhead (34.1 MB) are similar to those of FastTrack.
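LayDetect's layering technique is not described here in enough detail to reproduce. As a minimal sketch of the dynamic happens-before core that detectors such as FastTrack refine, the code below tracks per-thread vector clocks and reports conflicting accesses that are not ordered by lock synchronization; all class, thread and variable names are illustrative.

```python
# Minimal happens-before sketch (not LayDetect itself): vector clocks per thread,
# races reported when two conflicting accesses are unordered.
from collections import defaultdict

class VC(dict):
    """Vector clock: thread id -> logical time."""
    def leq(self, other):
        return all(v <= other.get(t, 0) for t, v in self.items())
    def join(self, other):
        for t, v in other.items():
            self[t] = max(self.get(t, 0), v)

class HBDetector:
    def __init__(self):
        self.clocks = defaultdict(VC)    # current vector clock of each thread
        self.locks = {}                  # lock -> clock snapshot at last release
        self.writes = defaultdict(list)  # addr -> [(thread, clock snapshot)]
        self.reads = defaultdict(list)   # addr -> [(thread, clock snapshot)]

    def _now(self, t):
        self.clocks[t][t] = self.clocks[t].get(t, 0) + 1
        return VC(self.clocks[t])

    def write(self, t, addr):
        now = self._now(t)
        for u, c in self.writes[addr] + self.reads[addr]:
            if u != t and not c.leq(now):        # concurrent conflicting access
                print(f"race on {addr}: thread {u} vs write by {t}")
        self.writes[addr] = [(t, now)]

    def read(self, t, addr):
        now = self._now(t)
        for u, c in self.writes[addr]:
            if u != t and not c.leq(now):
                print(f"race on {addr}: write by {u} vs read by {t}")
        self.reads[addr].append((t, now))

    def release(self, t, lock):
        self.locks[lock] = VC(self.clocks[t])    # publish the releaser's clock

    def acquire(self, t, lock):
        if lock in self.locks:
            self.clocks[t].join(self.locks[lock])

d = HBDetector()
d.write("T1", "x"); d.write("T2", "x")           # unsynchronized: race reported
d.write("T1", "y"); d.release("T1", "m")
d.acquire("T2", "m"); d.write("T2", "y")         # ordered by lock m: silent
```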

2.
3.
The notion of contact algebra is one of the main tools in the region-based theory of space. It is an extension of Boolean algebra with an additional relation C called contact. The elements of the Boolean algebra are considered as formal representations of spatial regions, as analogues of physical bodies, and the Boolean operations are considered as operations for constructing new regions from given ones and also for defining some mereological relations between regions, such as part-of, overlap and underlap. The contact relation is one of the basic mereotopological relations between regions, expressing their topological nature. It is also used to define some other important mereotopological relations such as non-tangential inclusion, dual contact, external contact and others. Most of these definitions are given by means of the operation of Boolean complementation. There are, however, some problems related to the motivation of the operation of Boolean complementation. In order to avoid these problems we propose a generalization of the notion of contact algebra by dropping the operation of complement and replacing the Boolean part of the definition by that of a distributive lattice. First steps in this direction were made in (Düntsch et al. Lect. Notes Comput. Sci. 4136, 135–147, 2006; Düntsch et al. J. Log. Algebraic Program. 76, 18–34, 2008), presenting the notion of distributive contact lattice based on the contact relation as the only mereotopological relation. In this paper we consider as non-definable primitives the relations of contact, non-tangential inclusion and dual contact, extending considerably the language of distributive contact lattices. Part I of the paper is devoted to a suitable axiomatization of the new language, called extended distributive contact lattice (EDC-lattice), by means of universal first-order axioms true in all contact algebras. EDC-lattices may also be considered as an algebraic tool for a certain subarea of mereotopology, called in this paper distributive mereotopology. The main result of Part I of the paper is a representation theorem, stating that each EDC-lattice can be isomorphically embedded into a contact algebra, showing in this way that the presented axiomatization preserves the meaning of mereotopological relations without considering Boolean complementation. Part II of the paper is devoted to topological representation theory of EDC-lattices, transferring into the distributive case important results from the topological representation theory of contact algebras. It is shown that under minor additional assumptions on distributive lattices, such as extensionality of the definable relations of overlap or underlap, one can preserve the good topological interpretations of regions as regular closed or regular open sets in a topological space.

4.
Binary relations play an important role in rough set theory. This paper investigates the similarity of binary relations based on L-fuzzy topologies, where L is a Boolean algebra. First, rough approximations based on a Boolean algebra are proposed through successor neighborhoods on binary relations. Next, L-fuzzy topologies induced by binary relations are investigated. Finally, similarity of binary relations is introduced by using the L-fuzzy topologies, and it is proved that every binary relation is similar to exactly one preorder relation. It is worth mentioning that the similarity of binary relations originates from the L-fuzzy topology and yet is independent of it.
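The paper works over an arbitrary Boolean algebra L; the sketch below only illustrates the classical crisp special case (L = {0, 1}) of rough approximations defined through successor neighborhoods, on hypothetical example data.

```python
# Crisp special case (L = {0, 1}) of rough approximations via successor neighborhoods.
def successor(R, x):
    """Successor neighborhood R(x) = {y : (x, y) in R}."""
    return {y for (a, y) in R if a == x}

def lower_approx(U, R, X):
    """{x in U : R(x) is a subset of X}."""
    return {x for x in U if successor(R, x) <= set(X)}

def upper_approx(U, R, X):
    """{x in U : R(x) meets X}."""
    return {x for x in U if successor(R, x) & set(X)}

U = {1, 2, 3, 4}
R = {(1, 1), (1, 2), (2, 3), (3, 3), (4, 1)}   # an arbitrary binary relation on U
X = {1, 2}
print(lower_approx(U, R, X))   # {1, 4}: all of their successors lie in X
print(upper_approx(U, R, X))   # {1, 4}: their successors meet X
```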

5.
In this paper, an S-mixing entropy of quantum channels is introduced as a generalization of Ohya’s S-mixing entropy, and several of its properties are investigated. Moreover, certain relations between the S-mixing entropy and the existing map and output entropies of quantum channels are established. These relations allow us to find connections between separable states and the introduced entropy, which yields a sufficient condition for detecting entangled states. Finally, the entropies of qubit and phase-damping channels are calculated.

6.
Stochastic discrete event systems (SDES) are systems whose evolution is described by the occurrence of a sequence of events, where each event has a defined probability of occurring from each state. The diagnosability problem for SDES is the problem of determining the conditions under which occurrences of a fault can be detected in finite time with arbitrarily high probability. Earlier work (IEEE Trans Autom Control 50(4):476–492, 2005) introduced a class of SDES, proposed two definitions of stochastic diagnosability for SDES called A- and AA-diagnosability, and reported a necessary and sufficient condition for A-diagnosability, but only a sufficient condition for AA-diagnosability. In this paper, we provide a condition that is both necessary and sufficient for determining whether or not an SDES is AA-diagnosable. We also show that verification of AA-diagnosability is equivalent to verification of the termination of the cumulative sum (CUSUM) procedure for hidden Markov models, and that, for a specific class of SDES called fault-immediate systems, the sequential probability ratio test (SPRT) minimizes the expected number of observable events required to distinguish between the normal and faulty modes.
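The fault-immediate construction of the paper is not reproduced here; the sketch below is only the generic Wald SPRT between a hypothetical "normal" and "faulty" observation model, to illustrate the test the abstract refers to.

```python
# Generic Wald SPRT sketch (illustrative, not the paper's construction): decide
# between a normal and a faulty observation model from a stream of observable events.
import math

def sprt(observations, p_normal, p_fault, alpha=0.01, beta=0.01):
    """p_normal/p_fault: dicts mapping each event to its probability under the
    normal and faulty modes; alpha/beta: bounds on the two error probabilities."""
    upper = math.log((1 - beta) / alpha)   # accept "faulty" above this threshold
    lower = math.log(beta / (1 - alpha))   # accept "normal" below this threshold
    llr = 0.0                              # cumulative log-likelihood ratio
    for n, e in enumerate(observations, 1):
        llr += math.log(p_fault[e] / p_normal[e])
        if llr >= upper:
            return "faulty", n
        if llr <= lower:
            return "normal", n
    return "undecided", len(observations)

p_n = {"a": 0.8, "b": 0.2}
p_f = {"a": 0.3, "b": 0.7}
print(sprt("b b a b b b".split(), p_n, p_f))   # ('faulty', 6)
```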

7.
Continuous visible nearest neighbor query processing in spatial databases
In this paper, we identify and solve a new type of spatial query, called continuous visible nearest neighbor (CVNN) search. Given a data set P, an obstacle set O, and a query line segment q in a two-dimensional space, a CVNN query returns a set of \({\langle p, R\rangle}\) tuples such that \({p \in P}\) is the nearest neighbor to every point r along the interval \({R \subseteq q}\) and p is visible to r. Note that p may be NULL, meaning that all points in P are invisible to all points in R due to the obstruction of some obstacles in O. In contrast to the existing continuous nearest neighbor query, CVNN retrieval considers the impact of obstacles on visibility between objects, which is ignored by most spatial queries. We formulate the problem, analyze its unique characteristics, and develop efficient algorithms for exact CVNN query processing. Our methods (1) utilize conventional data-partitioning indices (e.g., R-trees) on both P and O, (2) tackle the CVNN search by performing a single query for the entire query line segment, and (3) only access the data points and obstacles relevant to the final query result by employing a suite of effective pruning heuristics. In addition, several interesting variations of CVNN queries have been introduced, and they can be supported by our techniques, which further demonstrates the flexibility of the proposed algorithms. A comprehensive experimental evaluation using both real and synthetic data sets has been conducted to verify the effectiveness of our proposed pruning heuristics and the performance of our proposed algorithms.
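The CVNN algorithms in the paper index P and O with R-trees and answer an entire query segment at once; the sketch below only illustrates the visibility ingredient with a brute-force visible nearest neighbor for a single query point, on hypothetical coordinates and obstacles simplified to line segments.

```python
# Brute-force visible nearest neighbor for one query point (illustrative only;
# the paper's CVNN algorithms index P and O with R-trees and answer a whole segment).
import math

def ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, r, s):
    """Proper intersection test between segments pq and rs."""
    return (ccw(p, q, r) * ccw(p, q, s) < 0) and (ccw(r, s, p) * ccw(r, s, q) < 0)

def visible(p, r, obstacles):
    """p is visible from r iff the segment pr crosses no obstacle segment."""
    return not any(segments_cross(p, r, a, b) for (a, b) in obstacles)

def visible_nn(P, obstacles, r):
    candidates = [p for p in P if visible(p, r, obstacles)]
    return min(candidates, key=lambda p: math.dist(p, r), default=None)  # None = all invisible

P = [(1, 1), (4, 0), (0, 3)]
O = [((2, -1), (2, 2))]            # one obstacle segment
print(visible_nn(P, O, (3, 0)))    # (4, 0): (1, 1) and (0, 3) are blocked by the obstacle
```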

8.
Textual requirements are very common in software projects. However, this format often keeps relevant concerns (e.g., performance, synchronization, data access) out of the analyst’s view because their semantics are implicit in the text. Thus, analysts must carefully review requirements documents in order to identify key concerns and their effects. Concern mining tools based on NLP techniques can help in this activity. Nonetheless, existing tools cannot always detect all the crosscutting effects of a given concern on different requirements sections, as this detection requires a semantic analysis of the text. In this work, we describe an automated tool called REAssistant that supports the extraction of semantic information from textual use cases in order to reveal latent crosscutting concerns. To enable the analysis of use cases, we apply a tandem of advanced NLP techniques (e.g., dependency parsing, semantic role labeling, and domain actions) built on the UIMA framework, which generates different annotations for the use cases. Then, REAssistant allows analysts to query these annotations via concern-specific rules in order to identify all the effects of a given concern. The REAssistant tool has been evaluated with several case studies, showing good results when compared to a manual identification of concerns and a third-party tool. In particular, the tool achieved a remarkable recall regarding the detection of crosscutting concern effects.

9.
In social tagging systems such as Delicious and Flickr, users collaboratively manage tags to annotate resources. Naturally, a social tagging system can be modeled as a (user, tag, resource) hypernetwork, where there are three different types of nodes, namely users, resources and tags, and each hyperedge has three end nodes, connecting a user, a resource and a tag that the user employs to annotate the resource. How, then, can we automatically cluster related users, resources and tags, respectively? This is a problem of community detection in a 3-partite, 3-uniform hypernetwork. More generally, given a K-partite K-uniform (hyper)network, where each (hyper)edge is a K-tuple composed of nodes of K different types, how can we automatically detect communities for nodes of different types? In this paper, by turning this problem into a problem of finding an efficient compression of the (hyper)network’s structure, we propose a quality function for measuring the goodness of partitions of a K-partite K-uniform (hyper)network into communities, and develop a fast community detection method based on optimization. Our method overcomes the limitations of state-of-the-art techniques and has several desirable properties: it is comprehensive, parameter-free, and scalable. We compare our method with existing methods on both synthetic and real-world datasets.

10.
To address the issue of malware detection over large sets of applications, researchers have recently started to investigate the capabilities of machine-learning techniques for proposing effective approaches. So far, several promising results have been recorded in the literature, many approaches being assessed with what we call "in the lab" validation scenarios. This paper revisits the purpose of malware detection to discuss whether such in-the-lab validation scenarios provide reliable indications of the performance of malware detectors in real-world settings, a.k.a. "in the wild". To this end, we have devised several machine-learning classifiers that rely on a set of features built from applications’ CFGs. We use a sizeable dataset of over 50 000 Android applications collected from sources where state-of-the-art approaches have selected their data. We show that, in the lab, our approach outperforms existing machine-learning-based approaches. However, this high performance does not translate into high performance in the wild. The performance gap we observed (F-measures dropping from over 0.9 in the lab to below 0.1 in the wild) raises one important question: how do state-of-the-art approaches perform in the wild?

11.
A 3D binary image I can be naturally represented by a combinatorial-algebraic structure called a cubical complex and denoted by Q(I), whose basic building blocks are vertices, edges, square faces and cubes. In Gonzalez-Diaz et al. (Discret Appl Math 183:59–77, 2015), we presented a method to “locally repair” Q(I) to obtain a polyhedral complex P(I) (whose basic building blocks are vertices, edges, specific polygons and polyhedra), homotopy equivalent to Q(I) and such that its boundary surface is a 2D manifold. P(I) is called a well-composed polyhedral complex over the picture I. Besides, we developed a new codification system for P(I), encoding geometric information of the cells of P(I) in the form of a 3D grayscale image, and the boundary face relations of the cells of P(I) in the form of a set of structuring elements. In this paper, we build upon Gonzalez-Diaz et al. (2015) and prove that, to retrieve topological and geometric information of P(I), it is enough to store just one 3D point per polyhedron, and hence neither the grayscale image nor the set of structuring elements is needed. From this “minimal” codification of P(I), we finally present a method to compute the 2-cells in the boundary surface of P(I).
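As a small illustration of the cubical complex Q(I) mentioned above (not of the repaired complex P(I) or its codification), the following sketch enumerates the vertices, edges, squares and cubes spanned by the foreground voxels of a tiny hypothetical binary volume.

```python
# Sketch: enumerate the cells of the cubical complex Q(I) of a 3D binary image,
# one unit cube per foreground voxel plus all of its faces. The "local repair"
# producing the well-composed complex P(I) is not reproduced here.
from itertools import product

def cubical_cells(voxels):
    """voxels: set of (x, y, z) foreground coordinates.
    Returns dict: dimension -> set of cells, each cell a frozenset of its vertices."""
    cells = {0: set(), 1: set(), 2: set(), 3: set()}
    for (x, y, z) in voxels:
        corners = [(x + dx, y + dy, z + dz) for dx, dy, dz in product((0, 1), repeat=3)]
        # every face of the unit cube is obtained by fixing a subset of the axes
        for fixed in product((None, 0, 1), repeat=3):
            face = frozenset(c for c in corners
                             if all(f is None or c[i] == (x, y, z)[i] + f
                                    for i, f in enumerate(fixed)))
            dim = sum(1 for f in fixed if f is None)
            cells[dim].add(face)
    return cells

img = {(0, 0, 0), (1, 0, 0)}                  # two face-adjacent voxels
c = cubical_cells(img)
print({d: len(s) for d, s in c.items()})      # {0: 12, 1: 20, 2: 11, 3: 2}
```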

12.
Mining co-occurrence frequency patterns from multiple sequences is a hot topic in bioinformatics. Many seemingly disorganized constituents appear repetitively under different biological scoring matrices, such as PAM250 and BLOSUM62, and are considered hidden frequent patterns (FPs). A hidden FP involving both gaps and flexible approximation operations (replacement, deletion or insertion) is considerably more difficult to discover in its true occurrences. To effectively discover co-occurrence FPs (Co-FPs) under these conditions, we design a mining algorithm (co-fp-miner) with the following steps: (1) a biological approximation scoring matrix is designed to discover various deformations of a single FP pattern; (2) a data-driven intersection tactic is used to generate candidate Co-FPs; (3) a deterministic Apriori-like rule is proposed to prune unnecessary Co-FPs; and (4) finally, we employ a backtracking matching scheme to validate true Co-FPs. The co-fp-miner algorithm is a unified framework for both exact and approximate mining over multiple sequences. Experiments on DNA and protein sequences demonstrate that co-fp-miner is more efficient than its peers in terms of solutions, time and memory consumption.
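The scoring matrix, gap handling and backtracking matching of co-fp-miner are not modeled here; the sketch below only illustrates the Apriori-like pruning idea of step (3) on exact, contiguous patterns over hypothetical sequences.

```python
# Generic Apriori-style sketch: a length-(k+1) candidate is kept only if both of
# its maximal sub-patterns are already frequent. Gaps, the approximation scoring
# matrix and backtracking matching of co-fp-miner are not modeled.
def support(pattern, sequences):
    """Number of sequences containing `pattern` as a contiguous substring."""
    return sum(1 for s in sequences if pattern in s)

def mine_exact_fps(sequences, min_sup):
    alphabet = sorted(set("".join(sequences)))
    current = {a for a in alphabet if support(a, sequences) >= min_sup}
    all_fps = set(current)
    while current:
        candidates = {p + a for p in current for a in alphabet}
        # Apriori-like pruning: drop a candidate whose prefix or suffix is infrequent
        candidates = {c for c in candidates if c[:-1] in all_fps and c[1:] in all_fps}
        current = {c for c in candidates if support(c, sequences) >= min_sup}
        all_fps |= current
    return all_fps

seqs = ["ACGTACG", "TACGA", "GGACGT"]
print(sorted(mine_exact_fps(seqs, min_sup=3)))   # ['A', 'AC', 'ACG', 'C', 'CG', 'G', 'T']
```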

13.
In this paper, we propose a novel two-stage algorithm for the detection and removal of random-valued impulse noise using sparse representations. The main aim of the paper is to demonstrate the strength of the image inpainting technique for the reconstruction of images corrupted by random-valued impulse noise at high noise densities. First, impulse locations are detected by applying a combination of sparse denoising and thresholding, based on sparse and overcomplete representations of the corrupted image. This stage optimally selects threshold values so that the sum of the number of false alarms and missed detections at a particular noise level is minimized. In the second stage, the impulses detected in the first stage are treated as missing pixels or holes, and these holes are subsequently filled in using an image inpainting method. Extensive simulation results on standard grayscale images show that the proposed method successfully removes random-valued impulse noise with better preservation of edges and other details than the existing techniques at high noise ratios.
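The detector below is not the paper's sparse-representation stage; it is a deliberately simplified stand-in (median-based detection followed by neighborhood filling) that only illustrates the two-stage detect-then-inpaint structure on synthetic data.

```python
# Simplified two-stage stand-in (NOT the paper's sparse-representation detector):
# 1) flag pixels that deviate strongly from a median-filtered image,
# 2) fill the flagged pixels ("holes") from their unflagged neighbours.
import numpy as np
from scipy.ndimage import median_filter

def remove_impulses(img, thresh=40):
    med = median_filter(img.astype(float), size=3)
    mask = np.abs(img.astype(float) - med) > thresh       # stage 1: detection
    out = img.astype(float).copy()
    for y, x in zip(*np.nonzero(mask)):                    # stage 2: hole filling
        y0, y1 = max(0, y - 1), min(img.shape[0], y + 2)
        x0, x1 = max(0, x - 1), min(img.shape[1], x + 2)
        patch, pmask = out[y0:y1, x0:x1], mask[y0:y1, x0:x1]
        good = patch[~pmask]
        out[y, x] = good.mean() if good.size else med[y, x]
    return out.round().clip(0, 255).astype(np.uint8), mask

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = clean.copy()
hits = rng.random(clean.shape) < 0.2                       # 20% random-valued impulses
noisy[hits] = rng.integers(0, 256, size=hits.sum(), dtype=np.uint8)
restored, detected = remove_impulses(noisy)
print(np.abs(restored.astype(int) - clean.astype(int)).mean())   # small residual error
```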

14.
The paper studies three fundamental problems in graph analytics: computing connected components (CCs), biconnected components (BCCs), and 2-edge-connected components (ECCs) of a graph. With the recent advent of big data, developing efficient distributed algorithms for computing CCs, BCCs and ECCs of a big graph has received increasing interest. As with the existing research efforts, we focus on the Pregel programming model, while the techniques may be extended to other programming models including MapReduce and Spark. The state-of-the-art techniques for computing CCs and BCCs in Pregel incur \(O(m\times \#\text {supersteps})\) total costs for both data communication and computation, where m is the number of edges in a graph and #supersteps is the number of supersteps. Since the network communication speed is usually much slower than the computation speed, communication costs dominate the total running time of the existing techniques. In this paper, we propose a new paradigm based on graph decomposition to compute CCs and BCCs with O(m) total communication cost. The total computation costs of our techniques are also smaller than those of the existing techniques in practice, though theoretically almost the same. Moreover, we also study the distributed computation of ECCs. We are the first to study this problem, and we propose an approach with O(m) total communication cost. Comprehensive empirical studies demonstrate that our approaches can outperform the existing techniques by one order of magnitude in terms of total running time.
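The graph-decomposition paradigm itself is not reproduced here; the sketch below simulates the classical hash-min connected-components algorithm in the Pregel model, i.e., the O(m × #supersteps) baseline the paper improves on, with a toy graph.

```python
# Toy superstep simulation of the classical "hash-min" connected-components
# algorithm in the Pregel model (the O(m x #supersteps) baseline); the paper's
# graph-decomposition technique is not shown.
def hash_min_cc(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    label = {v: v for v in vertices}           # each vertex starts with its own id
    active, supersteps = set(vertices), 0
    while active:                              # one iteration = one superstep
        supersteps += 1
        messages = {}
        for u in active:                       # send current label to neighbours
            for v in adj[u]:
                messages[v] = min(messages.get(v, label[u]), label[u])
        active = set()
        for v, m in messages.items():          # adopt the smallest label received
            if m < label[v]:
                label[v] = m
                active.add(v)
    return label, supersteps

V = [1, 2, 3, 4, 5, 6]
E = [(1, 2), (2, 3), (4, 5)]                   # components {1,2,3}, {4,5}, {6}
print(hash_min_cc(V, E))                       # ({1: 1, 2: 1, 3: 1, 4: 4, 5: 4, 6: 6}, 3)
```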

15.
We present the first approach to automatically deduce lower bounds for the (worst-case) runtime complexity of term rewrite systems (TRSs). Inferring lower runtime bounds is useful for detecting bugs and complements existing methods that compute upper complexity bounds. Our approach is based on two techniques: the induction technique generates suitable families of rewrite sequences and uses induction proofs to find a relation between the length of a rewrite sequence and the size of the first term in the sequence; the loop detection technique searches for “decreasing loops”. Decreasing loops generalize the notion of loops for TRSs and allow us to detect families of rewrite sequences with linear, exponential, or infinite length. We implemented our approach in the tool AProVE and evaluated it in extensive experiments.

16.
The goal of this paper is to focus on the notions of merotopy and merotopology in the soft universe. First, we propose L-soft merotopic (nearness) spaces and L-soft guilds. Then, we study binary, contigual and regular merotopic spaces as well as the relations between them. We show that the category of binary L-soft nearness spaces is bireflective in the category of L-soft nearness spaces. Later, we define L-approach soft merotopological (nearness) spaces, giving several examples. Finally, we define a simpler characterization of L-approach soft grill merotopological spaces, called grill-determined L-approach soft merotopological spaces. We investigate the categorical structures of these notions; for example, we prove that the category of grill-determined L-approach soft merotopological spaces is a topological category over the category of L-soft topological spaces. At the end, we define a partial order on the family of all L-approach soft grill merotopologies and show that this family is a completely distributive complete lattice with respect to the defined partial order.

17.
Finding dense subgraphs is an important problem in graph mining and has many practical applications. At the same time, while large real-world networks are known to have many communities that are not well separated, the majority of the existing work focuses on the problem of finding a single densest subgraph. Hence, it is natural to consider the question of finding the top-k densest subgraphs. One major challenge in addressing this question is how to handle overlaps: eliminating overlaps completely is one option, but this may lead to extracting subgraphs not as dense as would be possible by allowing a limited amount of overlap. Furthermore, overlaps are desirable, as in most real-world graphs there are vertices that belong to more than one community and thus to more than one densest subgraph. In this paper we study the problem of finding the top-k overlapping densest subgraphs, and we present a new approach that improves over the existing techniques, both in theory and in practice. First, we reformulate the problem definition in a way that lets us obtain an algorithm with a constant-factor approximation guarantee. Our approach relies on techniques for solving the max-sum diversification problem, which, however, we need to extend in order to make them applicable to our setting. Second, we evaluate our algorithm on a collection of benchmark datasets and show that it convincingly outperforms the previous methods, both in terms of quality and efficiency.
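The constant-factor algorithm via max-sum diversification is not reproduced here; the sketch below only illustrates the classical greedy peeling heuristic for a single densest subgraph (density |E|/|V|), iterated on the residual graph to obtain k disjoint dense subgraphs, on a hypothetical toy graph.

```python
# Sketch of the classical greedy peeling heuristic for a single densest subgraph,
# iterated k times on the residual graph. The paper's algorithm allows controlled
# overlaps via max-sum diversification; that part is not shown.
def peel_densest(adj):
    """adj: dict vertex -> set of neighbours. Returns (best vertex set, its density)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    best, best_density = set(adj), 0.0
    edges = sum(len(ns) for ns in adj.values()) / 2
    while adj:
        d = edges / len(adj)
        if d >= best_density:
            best, best_density = set(adj), d
        v = min(adj, key=lambda u: len(adj[u]))      # remove a minimum-degree vertex
        edges -= len(adj[v])
        for u in adj.pop(v):
            adj[u].discard(v)
    return best, best_density

def top_k_disjoint(adj, k):
    adj = {v: set(ns) for v, ns in adj.items()}
    result = []
    for _ in range(k):
        if not adj:
            break
        s, d = peel_densest(adj)
        result.append((s, d))
        adj = {v: ns - s for v, ns in adj.items() if v not in s}
    return result

G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5}}
print(top_k_disjoint(G, 2))    # [({1, 2, 3}, 1.0), ({4, 5, 6}, 0.666...)]
```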

18.
A Multi-Secret Sharing (MSS) scheme is an efficient method of transmitting more than one secret securely. In an (n, n)-MSS scheme, n secrets are used to create n shares, and all n shares are required for reconstruction. In state-of-the-art schemes, n secrets are used to construct n or n + 1 shares, but partial secret information can be recovered from fewer than n shares. There is therefore a need for an efficient and secure (n, n)-MSS scheme that satisfies the threshold property. In this paper, we propose three different (n, n)-MSS schemes: the first and second use Boolean XOR, and the third uses modular arithmetic. For quantitative analysis, similarity metrics as well as structural and differential measures are considered. The proposed scheme using modular arithmetic performs better than those based on Boolean XOR. The proposed (n, n)-MSS schemes outperform the existing techniques in terms of security, time complexity, and randomness of shares.
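The paper's three (n, n)-MSS constructions are not reproduced here; the sketch below shows only the basic Boolean-XOR (n, n) sharing of a single secret that such schemes build on: n − 1 random shares plus one share chosen so that all n XOR back to the secret, with any proper subset revealing nothing.

```python
# Basic Boolean-XOR (n, n) sharing of one secret -- the primitive underlying
# XOR-based MSS constructions; the paper's multi-secret schemes are not shown.
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int):
    """Split `secret` into n shares; all n are needed to reconstruct."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)       # last share = secret XOR all random shares
    return shares + [last]

def reconstruct(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

msg = b"top secret"
sh = share(msg, 4)
print(reconstruct(sh) == msg)      # True: all 4 shares recover the secret
print(reconstruct(sh[:3]) == msg)  # False (except with negligible probability)
```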

19.
Frequent subgraph mining from a tremendous number of small graphs is a primitive operation for many data mining applications. Existing approaches mainly focus on centralized systems and suffer from scalability issues. Considering the increasing volume of graph data and the fact that mining frequent subgraphs is a memory-intensive task, it is difficult to tackle this problem efficiently on a centralized machine. In this paper, we therefore propose an efficient and scalable solution, called MRFSE, using MapReduce. MRFSE adopts the breadth-first search strategy to iteratively extract frequent subgraphs, i.e., all frequent subgraphs with \(i+1\) edges are generated based on frequent subgraphs with i edges at the ith iteration. In our design, existing frequent subgraph mining techniques for centralized systems can be easily extended and integrated. More importantly, new frequent subgraphs are generated without performing any isomorphism tests, which are costly and unavoidable in existing frequent subgraph mining techniques. Besides, various optimization techniques are proposed to further reduce the communication and I/O costs. Extensive experiments conducted on our in-house clusters demonstrate the superiority of our proposed solution in terms of both scalability and efficiency.
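MRFSE's embedding-based extension and its distributed implementation are not reproduced here; the sketch below is a single-machine imitation of one MapReduce round that counts frequent 1-edge subgraphs (the seeds of the breadth-first extension) over hypothetical labeled graphs.

```python
# Single-machine sketch of one MapReduce round: count frequent 1-edge subgraphs,
# the seeds of the breadth-first extension. MRFSE's extension step, which avoids
# isomorphism tests, is not reproduced here.
from collections import defaultdict
from itertools import chain

def map_graph(graph_id, graph):
    """Emit (canonical edge label, graph_id) for every edge of one small graph."""
    labels, edges = graph
    for u, v in edges:
        key = tuple(sorted((labels[u], labels[v])))
        yield key, graph_id

def reduce_counts(pairs, min_sup):
    """Group by key and keep keys supported by at least min_sup distinct graphs."""
    support = defaultdict(set)
    for key, gid in pairs:
        support[key].add(gid)
    return {key: len(gids) for key, gids in support.items() if len(gids) >= min_sup}

graphs = {
    "g1": ({0: "C", 1: "O", 2: "H"}, [(0, 1), (0, 2)]),
    "g2": ({0: "C", 1: "O"},         [(0, 1)]),
    "g3": ({0: "C", 1: "H"},         [(0, 1)]),
}
pairs = chain.from_iterable(map_graph(g, spec) for g, spec in graphs.items())
print(reduce_counts(pairs, min_sup=2))   # {('C', 'O'): 2, ('C', 'H'): 2}
```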

20.
The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg’s uncertainty relation and Schrödinger’s uncertainty relation. In this paper, we prove a Schrödinger-type uncertainty relation in terms of generalized metric adjusted skew information and correlation measure by using operator monotone functions, which reads,
$$\begin{aligned} U_\rho ^{(g,f)}(A)U_\rho ^{(g,f)}(B)\ge \frac{f(0)^2l}{k}\left| \mathrm {Corr}_\rho ^{s(g,f)}(A,B)\right| ^2 \end{aligned}$$
for some operator monotone functions f and g, all n-dimensional observables A, B, and a non-singular density matrix \(\rho \). As applications, we derive some new uncertainty relations for the Wigner–Yanase skew information and the Wigner–Yanase–Dyson skew information.
