Similar Literature
20 similar documents found (search time: 15 ms)
1.
802.11 wireless local area networks are notorious for their many critical security flaws. Last year, the world's first 802.11 wireless driver vulnerabilities were publicly disclosed, making them a recent and critical threat. In this paper, we present our research on 802.11 driver vulnerabilities, focusing on the design and implementation of a fully featured 802.11 fuzzer that enabled us to find several critical, potentially exploitable implementation bugs. Finally, we detail the successful exploitation of the first 802.11 remote kernel stack overflow under Linux (in the madwifi driver).

2.
On exploiting task duplication in parallel program scheduling   Total citations: 1 (self: 0, others: 1)
One of the main obstacles in obtaining high performance from message-passing multicomputer systems is the inevitable communication overhead which is incurred when tasks executing on different processors exchange data. Given a task graph, duplication-based scheduling can mitigate this overhead by allocating some of the tasks redundantly on more than one processor. In this paper, we focus on the problem of using duplication in static scheduling of task graphs on parallel and distributed systems. We discuss five previously proposed algorithms and examine their merits and demerits. We describe some of the essential principles for exploiting duplication in a more useful manner and, based on these principles, propose an algorithm which outperforms the previous algorithms. The proposed algorithm generates optimal solutions for a number of task graphs. The algorithm assumes an unbounded number of processors. For scheduling on a bounded number of processors, we propose a second algorithm which controls the degree of duplication according to the number of available processors. The proposed algorithms are analytically and experimentally evaluated and are also compared with the previous algorithms.
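The benefit of duplication can be seen on a toy two-processor example. This is a hedged sketch, not any of the paper's five algorithms, and all task and communication costs below are made up for illustration: duplicating a parent task on both processors removes the inter-processor communication delay its children would otherwise wait for.

```python
# Toy duplication-based scheduling example (hypothetical costs).
# Task A feeds tasks B and C; sending A's result across processors
# costs 4 time units, recomputing A locally costs only 2.

comp = {"A": 2, "B": 3, "C": 3}                # computation costs
comm = {("A", "B"): 4, ("A", "C"): 4}          # cross-processor message costs

def makespan(duplicate_a: bool) -> int:
    if duplicate_a:
        # A is duplicated on both P0 and P1; B and C read A's result locally.
        finish_b = comp["A"] + comp["B"]                     # P0: A then B
        finish_c = comp["A"] + comp["C"]                     # P1: A then C
    else:
        # A runs only on P0; C (on P1) must wait for A's message.
        finish_b = comp["A"] + comp["B"]                     # P0: A then B
        finish_c = comp["A"] + comm[("A", "C")] + comp["C"]  # P1: wait, then C
    return max(finish_b, finish_c)

print(makespan(duplicate_a=False))  # 9
print(makespan(duplicate_a=True))   # 5
```

Here duplication trades 2 extra units of redundant computation for the elimination of a 4-unit communication delay, shortening the schedule from 9 to 5.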

3.
Minds and Machines

4.
User Interest Discovery in Personalized Intelligent Information Extraction   Total citations: 12 (self: 0, others: 12)
1. Introduction. The World Wide Web (WWW) appeared in 1990 and developed at an unprecedented pace in the years that followed. The amount of information on the Internet has grown exponentially, and the Internet has become a vast, ocean-like source of information. However, because the Internet is an open, dynamic, and heterogeneous globally distributed network, its resources are widely dispersed, and …

5.
Aspect-oriented software development has focused on the software life cycle's implementation phase: developers identify and capture aspects mainly in code. But aspects are evident earlier in the life cycle, such as during requirements engineering and architecture design. Early aspects are concerns that crosscut an artifact's dominant decomposition, or base modules derived from the dominant separation-of-concerns criterion, in the early stages of the software life cycle. In this article, we describe how to identify and capture early aspects in requirements and architecture activities and how they are carried over from one phase to another. We focus on requirements and architecture design activities to illustrate these points, but the same ideas apply in other phases as well, such as domain analysis or the fine-grained design activities that lie between architecture and implementation.

6.
7.
Process mining is a tool to extract non-trivial and useful information from process execution logs. These so-called event logs (also called audit trails, or transaction logs) are the starting point for various discovery and analysis techniques that help to gain insight into certain characteristics of the process. In this paper we use a combination of process mining techniques to discover multiple perspectives (namely, the control-flow, data, performance, and resource perspective) of the process from historic data, and we integrate them into a comprehensive simulation model. This simulation model is represented as a colored Petri net (CPN) and can be used to analyze the process, e.g., evaluate the performance of different alternative designs. The discovery of simulation models is explained using a running example. Moreover, the approach has been applied in two case studies; the workflows in two different municipalities in the Netherlands have been analyzed using a combination of process mining and simulation. Furthermore, the quality of the CPN models generated for the running example and the two case studies has been evaluated by comparing the original logs with the logs of the generated models.
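A minimal sketch of the control-flow perspective mentioned above: a standard first step in process discovery is to count the directly-follows relation over an event log. The log format and activity names below are made up for illustration, not taken from the paper's case studies:

```python
from collections import Counter

# Hypothetical event log: one list of activity names per case (trace).
log = [
    ["register", "check", "decide", "notify"],
    ["register", "check", "check", "decide", "notify"],
    ["register", "decide", "notify"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

df = directly_follows(log)
print(df[("register", "check")])  # 2
print(df[("check", "decide")])    # 2
```

Discovery algorithms build the control-flow model from relations like this one; the data, performance, and resource perspectives are mined from additional event attributes (timestamps, originators) in the same log.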

8.
This paper describes a general code-improving transformation that can coalesce conditional branches into an indirect jump from a table. Applying this transformation allows an optimizer to exploit indirect jumps for many other coalescing opportunities besides the translation of multiway branch statements. First, dataflow analysis is performed to detect a set of coalescent conditional branches, which are often separated by blocks of intervening instructions. Second, several techniques are applied to reduce the cost of performing an indirect jump operation, often requiring the execution of only two instructions on a SPARC. Finally, the control flow is restructured using code duplication to replace the set of branches with an indirect jump. The transformation thus essentially provides early resolution of conditional branches that may originally have been some distance from the point where the indirect jump is inserted. The transformation can frequently be applied, often with significant reductions in the number of instructions executed, total cache work, and execution time. In addition, we show that with branch target buffer support, indirect jumps improve branch prediction, since they cause fewer mispredictions than the set of branches they replace. Copyright © 1999 John Wiley & Sons, Ltd.
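The shape of the transformation can be sketched by analogy. The paper works at the machine-code level; the Python function table below merely mimics replacing a chain of conditional branches with one range test plus an indirect jump through a table (all names are hypothetical):

```python
# Analogy only: "before" is a chain of conditional branches,
# "after" is a bounds check plus an indirect jump through a table.

def case_a(): return 1
def case_b(): return 2
def case_c(): return 3
def fallback(): return 0

def branchy(x):
    # Before: one conditional branch per case, resolved sequentially.
    if x == 0: return case_a()
    if x == 1: return case_b()
    if x == 2: return case_c()
    return fallback()

TABLE = [case_a, case_b, case_c]

def coalesced(x):
    # After: one range test, then a single indirect jump via the table.
    if 0 <= x < len(TABLE):
        return TABLE[x]()
    return fallback()

assert all(branchy(x) == coalesced(x) for x in range(-1, 4))
```

The two versions are semantically equivalent, but the "after" form resolves all cases with a single indirect transfer, which is the property the paper exploits for early branch resolution.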

9.
The production of analytic datasets is a significant big data trend and has gone well beyond the scope of traditional IT-governed dataset development. Analytic datasets are now created by data scientists and data analysts using big data frameworks and agile data preparation tools. However, despite the profusion of available datasets, it remains quite difficult for a data analyst to start from a dataset at hand and customize it with additional attributes coming from other existing datasets. This article describes a model and algorithms that exploit automatically extracted and user-defined semantic relationships for extending analytic datasets with new atomic or aggregated attribute values. Our framework is implemented as a REST service in SAP HANA and includes a careful theoretical analysis and practical solutions for several complex data quality issues.

10.
11.
While much has changed in product modularity research in the 18 years since the independence axiom, some basic questions remain unanswered. Perhaps the most fundamental of those questions is whether increasing modularity actually saves money. The goal of the research behind this paper was to clearly define the fundamental relationship between product modularity and product cost. Our previous work in modular product design provided a complete package of a product modularity measure and a modular design method. The “best” measure was created and verified after correcting common performance problems among the seven measures, finally subtracting the averaged relationships external to modules from the averaged relationships within modules. After comparing and finding better design elements among four representative modular design methods, the “best” method was developed, which includes product decomposition, multi-component reconfiguration and elimination, and an extended limiting-factor identification. The “best” method/measure package quickly yields redesigned products with higher modularity. To seek out relationships between product life-cycle modularity and product life-cycle cost, modular product design experiments were conducted on four off-the-shelf products using the new measure/method package, applied to increase both functional and retirement modularity. The modularity data recorded for each redesign included retirement modularity, manufacturing modularity, and assembly modularity. Each redesign's life-cycle cost was also obtained based on several classical cost models. The cost data recorded for each redesign included retirement cost, manufacturing cost, and assembly cost. The best relationships came from the retirement viewpoint. However, there is no significant relationship between any life-cycle modularity and any life-cycle cost unless the modularity changes are significantly large.
Life-cycle modularity-cost relationships are more likely to exist in data pools generated from that life cycle's redesign viewpoint. The beginning of modular redesign, where greater modularity improvements are seen, is more effective at reducing costs. Cost savings depend on the appropriateness of the modularity matrix's product architecture representation from a cost-savings viewpoint.

12.
Discovering Blog Spam Comments Based on the LDA Model   Total citations: 1 (self: 0, others: 1)
As an emerging online medium, blogs have greatly increased the openness of the Internet and have become one of its major information sources. This has also caused spam comments in the blogosphere to multiply, so identifying spam comments has become an important problem. Drawing on methods for handling e-mail spam and taking the characteristics of blogs into account, this paper first uses rules to perform a preliminary filtering of spam comments. For the remaining comments, Latent Dirichlet Allocation (LDA), a generative model capable of extracting the latent topics of a text, is used to extract topics from the blog posts, and the topic information is then combined into the judgment, thereby identifying spam comments in the blog space. Experiments show that this method can detect most spam comments and achieves good results, making blog information more accurate and useful to readers.

13.
14.
Data mining-based analysis methods are increasingly being applied to data sets derived from science and engineering domains that model various physical phenomena and objects. In many of these data sets, a key requirement for their effective analysis is the ability to capture the relational and geometric characteristics of the underlying entities and objects. Geometric graphs, by modeling the various physical entities and their relationships with vertices and edges, provide a natural method to represent such data sets. In this paper we present gFSG, a computationally efficient algorithm for finding frequent patterns corresponding to geometric subgraphs in a large collection of geometric graphs. gFSG is able to discover geometric subgraphs that can be rotation, scaling, and translation invariant, and it can accommodate inherent errors on the coordinates of the vertices. We evaluated its performance using a large database of over 20,000 chemical structures, and our results show that it requires relatively little time, can accommodate low support values, and scales linearly with the number of transactions.

15.
Discovering High-Order Periodic Patterns   Total citations: 2 (self: 2, others: 0)
Discovery of periodic patterns in time series data has become an active research area with many applications. These patterns can be hierarchical in nature, where a higher-level pattern may consist of repetitions of lower-level patterns. Unfortunately, the presence of noise may prevent these higher-level patterns from being recognized, in the sense that two portions (of a data sequence) that support the same (high-level) pattern may have different layouts of occurrences of basic symbols. There may not exist any common representation in terms of raw symbol combinations; hence such a (high-level) pattern cannot be expressed by any previous model (defined on raw symbols or symbol combinations) and would not be properly recognized by any existing method. In this paper, we propose a novel model, namely the meta-pattern, to capture these high-level patterns. As a more flexible model, the number of potential meta-patterns could be very large, and a substantial difficulty lies in identifying the proper pattern candidates. The well-known Apriori property does not provide sufficient pruning power. A new property, namely the component location property, is identified and used to conduct candidate generation so that an efficient computation-based mining algorithm can be developed. Last, but not least, we apply our algorithm to some real and synthetic sequences, and some interesting patterns are discovered.
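The low-level patterns that meta-patterns build on can be sketched in a few lines. This toy example only illustrates the basic notion of periodic-pattern support with wildcard positions; it is not the paper's meta-pattern model, and the sequence and pattern are made up:

```python
# Count how many consecutive periods of `pattern` a symbol sequence
# matches; "*" marks a don't-care position in the pattern.

def supports(seq: str, pattern: str) -> int:
    p = len(pattern)
    hits = 0
    for start in range(0, len(seq) - p + 1, p):
        window = seq[start:start + p]
        if all(q == "*" or c == q for c, q in zip(window, pattern)):
            hits += 1
    return hits

# "ab*" with period 3 matches every period of this sequence,
# even though the third symbol varies (c, d, c).
print(supports("abcabdabc", "ab*"))  # 3
```

A meta-pattern, in contrast, would treat entire lower-level patterns like "ab*" as its symbols, allowing repetitions with different raw-symbol layouts to support the same higher-level structure.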

16.
17.
Discovering the underlying structure of a given graph is one of the fundamental goals in graph mining. Given a graph, we can often order the vertices so that neighboring vertices have a higher probability of being connected to each other. This implies that the edges form a band around the diagonal of the adjacency matrix. Such structure may arise, for example, if the graph was created over time: each vertex had an active time interval during which it was connected with other active vertices. The goal of this paper is to model this phenomenon. To this end, we formulate an optimization problem: given a graph and an integer \(K\), we want to order the graph's vertices and partition the ordered adjacency matrix into \(K\) bands such that bands closer to the diagonal are denser. We measure the goodness of a segmentation using the log-likelihood of a log-linear model, a flexible family of distributions containing many standard distributions. We divide the problem into two subproblems: finding the order and finding the bands. We show that discovering the bands can be done in polynomial time with isotonic regression, and we also introduce a heuristic iterative approach. For discovering the order we use the Fiedler order accompanied by a simple combinatorial refinement. We demonstrate empirically that our heuristic works well in practice.

18.
This paper describes the development and evaluation of CAL software developed for the Open University course “S271—Discovering Physics”. The programs are used by students at week-long summer schools which combine the practical experience of doing physics experiments with remedial tutorial sessions. A wide range of computer tutorials, simulated experiments and tutorial simulations, as well as programs to enhance traditional physics experiments, were produced. The software was implemented on the TERAK 8510/a graphics microcomputer. The project incorporated the development of an “implementation system” (called OASIS) based on UCSD Pascal to simplify such aspects as text layout, response parsing and graphics. An important part of the project was a program of testing and evaluation by staff and students at a week-long summer school in 1981. Reactions to the programs were assessed using log sheets, interviews and questionnaires (both written and interactive, using a microcomputer) as well as direct observation. The software and the associated printed material were completely redesigned on the basis of the results of this study. Another outcome of the evaluation was the acquisition of knowledge of how students use the CAL software in the environment of a physics summer school.

19.
20.
We propose a new stepsize for the gradient method. It is shown that this new stepsize converges to the reciprocal of the largest eigenvalue of the Hessian when Dai and Yang's asymptotically optimal gradient method (Computational Optimization and Applications, 2006, 33(1): 73–88) is applied to minimizing quadratic objective functions. Based on this spectral property, we develop a monotone gradient method that takes a certain number of steps using the asymptotically optimal stepsize of Dai and Yang and then takes some short steps associated with the new stepsize. By retarding the asymptotically optimal stepsize by one step, a nonmonotone variant of this method is also proposed. Under mild conditions, R-linear convergence of the proposed methods is established for minimizing quadratic functions. In addition, by combining gradient projection techniques with an adaptive nonmonotone line search, we further extend these methods to general bound-constrained optimization. Two variants of gradient projection methods combined with the Barzilai-Borwein stepsizes are also proposed. Our numerical experiments on both quadratic and bound-constrained optimization indicate that the newly proposed strategies and methods are very effective.
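The Barzilai-Borwein stepsizes mentioned above can be sketched on a small quadratic. This is the classical BB1 rule, not the paper's new stepsize or its projection variants, and the problem data are made up for illustration:

```python
import numpy as np

# Gradient method with the Barzilai-Borwein (BB1) stepsize on the
# convex quadratic f(x) = 0.5*x^T A x - b^T x, whose gradient is Ax - b.
A = np.diag([1.0, 10.0])     # illustrative Hessian (condition number 10)
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

x = np.zeros(2)
g = grad(x)
x_new = x - 0.01 * g         # one small fixed first step to seed s and y
for _ in range(50):
    g_new = grad(x_new)
    if np.linalg.norm(g_new) < 1e-12:   # converged; avoid a 0/0 stepsize
        break
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)           # BB1 stepsize
    x, g = x_new, g_new
    x_new = x - alpha * g

print(np.allclose(x_new, np.linalg.solve(A, b)))  # True
```

For quadratics, s @ y equals s^T A s, so the BB1 stepsize is a Rayleigh-quotient-like approximation to an inverse eigenvalue of the Hessian; this is the same spectral viewpoint the abstract uses for its new stepsize.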
