20 similar documents found; search took 15 ms.
1.
Yichen Xie Engler D. 《IEEE Transactions on Software Engineering》2003,29(10):915-928
Programmers generally attempt to perform useful work. If they performed an action, it was because they believed it served some purpose. Redundant operations violate this belief. However, in the past, redundant operations have typically been regarded as minor cosmetic problems rather than serious errors. This paper demonstrates that, in fact, many redundancies are as serious as traditional hard errors (such as race conditions or null-pointer dereferences). We experimentally test this idea by writing and applying five redundancy checkers to a number of large open-source projects, finding many errors. We then show that, even when redundancies are harmless, they strongly correlate with the presence of traditional hard errors. Finally, we show how flagging redundant operations gives a way to detect mistakes and omissions in specifications. For example, a locking specification that binds shared variables to their protecting locks can use redundancies to detect missing bindings by flagging critical sections that include no shared state.
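The idea of treating redundant operations as error signals is easy to illustrate. The following Python sketch is an illustration only; the paper's checkers analyze C code with a much richer static analysis, and every name below is made up. It flags one of the simplest redundancies, a self-assignment such as x = x, which does no useful work and often signals a typo:

```python
"""Toy redundancy checker in the spirit of the paper: flag self-assignments."""
import ast

class SelfAssignChecker(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_Assign(self, node):
        # Flag `x = x`: the statement does no work, which usually signals
        # a mistake (e.g. `x = y` was intended) rather than a harmless quirk.
        if (len(node.targets) == 1
                and isinstance(node.targets[0], ast.Name)
                and isinstance(node.value, ast.Name)
                and node.targets[0].id == node.value.id):
            self.findings.append((node.lineno, node.targets[0].id))
        self.generic_visit(node)

source = """
def update(a, b):
    a = a
    return a + b
"""

checker = SelfAssignChecker()
checker.visit(ast.parse(source))
for lineno, name in checker.findings:
    print(f"line {lineno}: self-assignment of '{name}' performs no work")
```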
2.
Component libraries are the dominant paradigm for software reuse, but they suffer from a lack of tools that support the problem-solving process of locating relevant components. Most retrieval tools assume that retrieval is a simple matter of matching well-formed queries to a repository. But forming queries can be difficult. A designer's understanding of the problem evolves while searching for a component, and large repositories often use an esoteric vocabulary. CodeFinder is a retrieval system that combines retrieval by reformulation (which supports incremental query construction) and spreading activation (which retrieves items related to the query) to help users find information. I designed it to investigate the hypothesis that this design makes for a more effective retrieval system. My study confirmed that it was more helpful than other query systems to users seeking relevant information when tasks were ill-defined and vocabularies mismatched. The study supports the hypothesis that combining these techniques effectively satisfies the kind of information needs typically encountered in software design.
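As a rough illustration of the spreading-activation half of CodeFinder, the sketch below propagates activation from query terms over a small term-component association network; the graph, weights, decay factor, and names are invented for this example and are not taken from the paper:

```python
"""Toy spreading activation: query terms activate associated components."""
import numpy as np

# Nodes 0-2 are query terms, nodes 3-5 are components; A[i, j] = association strength.
labels = ["sort", "array", "order", "quicksort()", "heapify()", "shuffle()"]
A = np.array([
    [0.0, 0.0, 0.0, 0.9, 0.6, 0.0],
    [0.0, 0.0, 0.0, 0.7, 0.8, 0.5],
    [0.0, 0.0, 0.0, 0.4, 0.3, 0.2],
    [0.9, 0.7, 0.4, 0.0, 0.0, 0.0],
    [0.6, 0.8, 0.3, 0.0, 0.0, 0.0],
    [0.0, 0.5, 0.2, 0.0, 0.0, 0.0],
])

def spread(query_nodes, steps=2, decay=0.5):
    act = np.zeros(len(labels))
    act[query_nodes] = 1.0
    for _ in range(steps):
        act = act + decay * (A @ act)   # activation flows along associations
        act /= act.max()                # keep values bounded
    return act

if __name__ == "__main__":
    act = spread([0, 1])                # query: "sort array"
    for idx in np.argsort(-act[3:]) + 3:  # rank the component nodes only
        print(f"{labels[idx]:<12} activation={act[idx]:.2f}")
```

Components related to the query terms end up highly activated even when they do not literally match the query text, which is the behavior the abstract describes.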
3.
Using symbolic computation to find algebraic invariants  Cited by 4 (0 self-citations, 4 by others)
Implicit polynomials have proved to have excellent representation power for complicated objects, and their use in computer vision, graphics, and CAD is growing. Every system that recognizes objects from their implicit-polynomial representation needs invariants: quantities assigned to polynomials that do not change under coordinate transformations. In the recognition system developed at Brown University's Laboratory for Engineering Man-Machine Studies (LEMS), it became necessary to use invariants that are explicit and simple functions of the polynomial coefficients. A method for finding such invariants is described and the new invariants are presented. This work addresses only the problem of finding the invariants; their stability is studied in another paper.
4.
Using Web search engines to find and refind information  Cited by 1 (0 self-citations, 1 by others)
To inform the design of next-generation Web search tools, researchers must better understand how users find, manage, and refind online information. Synthesizing results from one of their studies with related work, the authors propose a search engine use model based on prior task frequency and familiarity.
5.
This paper analyzes how LKM (loadable kernel module) backdoors hide processes. Exploiting a weakness in such backdoor designs, and drawing on the characteristics of the /proc filesystem, it proposes a method that traverses all PID directories in sequence to enumerate every process. Comparing this result with the output of an ordinary process listing reveals the hidden processes. A flowchart of a Perl implementation of the search is given. Experiments show that the method accurately and effectively detects processes hidden by LKM backdoors.
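A minimal sketch of this cross-check is shown below in Python rather than the paper's Perl; the PID range and the use of ps for the "ordinary" listing are assumptions. Probing every candidate /proc/PID directory directly sidesteps a backdoor that filters directory listings, and the difference against the ordinary listing exposes hidden processes:

```python
"""Sketch: find processes hidden from readdir()-based tools by probing /proc."""
import os
import subprocess

PID_MAX = 32768  # assumed default; real systems expose /proc/sys/kernel/pid_max

def pids_by_probing():
    """Probe /proc/<pid> for every candidate PID instead of reading the
    /proc directory listing, which a hooked readdir() could filter."""
    return {pid for pid in range(1, PID_MAX + 1) if os.path.isdir(f"/proc/{pid}")}

def pids_from_ps():
    """PIDs according to an ordinary process listing (what the backdoor shows)."""
    out = subprocess.run(["ps", "-eo", "pid="], capture_output=True, text=True)
    return {int(line) for line in out.stdout.split()}

if __name__ == "__main__":
    hidden = pids_by_probing() - pids_from_ps()
    for pid in sorted(hidden):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except OSError:
            name = "?"
        print(f"possibly hidden process: pid={pid} comm={name}")
```

Short-lived processes can appear in one listing but not the other, so a real tool would re-check candidates before reporting them.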
6.
We present a unified framework for applying iteration reordering transformations. This framework is able to represent traditional transformations such as loop interchange, loop skewing and loop distribution, as well as compositions of these transformations. Using a unified framework rather than a sequence of ad hoc transformations makes it easier to analyze and predict the effects of these transformations. Our framework is based on the idea that all reordering transformations can be represented as a mapping from the original iteration space to a new iteration space. An optimizing compiler would use our framework by finding a mapping that both corresponds to a legal transformation and produces efficient code. We present the mapping selection problem as a search problem by decomposing it into a sequence of smaller choices. We then characterize the set of all legal mappings by defining a search tree. As part of this process we use a new operation called affine closure.
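The framework's central idea, that a reordering transformation is a mapping between iteration spaces, can be illustrated with a toy example. The sketch below (illustrative names only; it is not the paper's representation) expresses loop interchange on a 2-deep nest as the mapping (i, j) -> (j, i) and replays the iteration points in the order induced by that mapping:

```python
"""Toy view of a reordering transformation as an iteration-space mapping."""
from itertools import product

def original_iterations(n, m):
    # for i in 0..n-1: for j in 0..m-1: body(i, j)
    return list(product(range(n), range(m)))

def interchange(point):
    i, j = point
    return (j, i)

def transformed_order(n, m, mapping):
    """Execute the same iteration points, but in the order induced by `mapping`
    (lexicographic order in the new iteration space). Legality, i.e. not
    reversing any dependence, is not checked in this toy."""
    return sorted(original_iterations(n, m), key=mapping)

if __name__ == "__main__":
    print("original    :", original_iterations(2, 3))
    print("interchanged:", transformed_order(2, 3, interchange))
```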
7.
《Information Fusion》2008,9(1):120-133
We describe an ensemble approach to learning from arbitrarily partitioned data. The partitioning comes from the distributed processing requirements of a large scale simulation. The volume of the data is such that classifiers can train only on data local to a given partition. As a result of the partition reflecting the needs of the simulation, the class statistics can vary from partition to partition. Some classes will likely be missing from some partitions. We combine a fast ensemble learning algorithm with probabilistic majority voting in order to learn an accurate classifier from such data. Results from simulations of an impactor bar crushing a storage canister and from facial feature recognition show that regions of interest are successfully identified in spite of the class imbalance in the individual training sets.
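A minimal sketch of probabilistic majority voting over partition-local classifiers is given below; the paper's fast ensemble learner is replaced here with scikit-learn decision trees, and the data, partitioning, and names are invented for illustration. Each model's class-probability estimates are summed, with classes a model never saw contributing zero:

```python
"""Sketch of probabilistic majority voting over partition-local classifiers."""
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_on_partitions(partitions):
    """Train one classifier per data partition; a partition may miss classes."""
    return [DecisionTreeClassifier(max_depth=5).fit(X, y) for X, y in partitions]

def probabilistic_vote(models, X, n_classes):
    """Sum each model's class-probability estimates, padding classes a model
    never saw with zero probability, then take the argmax per sample."""
    votes = np.zeros((len(X), n_classes))
    for clf in models:
        proba = clf.predict_proba(X)              # columns follow clf.classes_
        votes[:, clf.classes_.astype(int)] += proba
    return votes.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 2] > 1.0).astype(int)  # classes 0..2
    order = np.argsort(X[:, 0])                   # partition along one coordinate,
    parts = [(X[idx], y[idx]) for idx in np.array_split(order, 3)]  # so class stats differ
    models = train_on_partitions(parts)
    pred = probabilistic_vote(models, X, n_classes=3)
    print("agreement with true labels:", (pred == y).mean())
```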
8.
Using tangent balls to find plane sections of natural quadrics  Cited by 2 (0 self-citations, 2 by others)
9.
Sebastian Schmerl Michael Vogel Hartmut König 《International Journal on Software Tools for Technology Transfer (STTT)》2011,13(1):89-106
Most intrusion detection systems deployed today use misuse detection as their analysis method. Misuse detection searches for attack traces in the recorded audit data using predefined patterns; the matching rules are called signatures. Defining signatures has so far been an empirical process based on expert knowledge and experience. The success of the analysis, and consequently the acceptance of intrusion detection systems in general, depends essentially on how up to date the deployed signatures are. Methods for systematic signature development have scarcely been reported, so modeling a new signature is a time-consuming, cumbersome, and error-prone process. Modeled signatures have to be validated and corrected to improve their quality, and so far only signature testing has been applied for this. Signature testing is itself a rather empirical and time-consuming way to detect modeling errors. In this paper, we present the first approach for verifying signature specifications using the Spin model checker. The signatures are modeled in the specification language EDL, which leans on colored Petri nets. We show how a signature specification is transformed into a Promela model and how characteristic specification errors can be found by Spin.
10.
Using DEA to find the best partner for a horizontal cooperation  Cited by 1 (0 self-citations, 1 by others)
In this paper, Data Envelopment Analysis (DEA) is used to select, from among different potential partners for a joint venture, the one that best fits the strategic goal of a horizontal cooperation. Since each potential partner has a different technology, the partner whose technology best complements ours is the one that will bring the greatest synergy to the joint venture's technology. Models are presented for the cases in which the joint venture plans to open one or several facilities. A priori and ex-post measures of synergy between the partners are proposed. A simple way of sharing the costs of the horizontal cooperation, based on cooperative game theory, is also presented. The proposed approach is quite flexible and can be extended to include multiple-partner joint ventures as well as a multi-period planning horizon.
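The partner-selection and synergy models themselves are not reproduced here, but the DEA building block they rest on can be sketched. The code below computes a standard CCR (constant-returns-to-scale) efficiency score in multiplier form with scipy's linear-programming solver; the toy inputs, outputs, and all names are assumptions:

```python
"""Sketch of a CCR DEA efficiency score in multiplier form (scipy assumed)."""
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Efficiency of DMU o.
    Decision variables are output weights u and input weights v, both >= 0."""
    n, m = X.shape
    _, s = Y.shape
    c = np.concatenate([-Y[o], np.zeros(m)])             # maximize u . y_o
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v . x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                            # u . y_j - v . x_j <= 0
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

if __name__ == "__main__":
    # Toy data: 4 candidate partners, 2 inputs (capital, labour), 1 output.
    X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
    Y = np.array([[1.0], [1.0], [1.0], [1.0]])
    for o in range(len(X)):
        print(f"partner {o}: CCR efficiency = {ccr_efficiency(X, Y, o):.3f}")
```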
11.
Given a parametrization of a rational surface, the absence of base points is shown to be a necessary and sufficient condition for the auxiliary resultant to be a power of the implicit polynomial. The method of resultants also reveals other important properties of rational surface representations, including the coefficients of the implicit equation, the relationship between the implicit and parametric degrees, the degree of each coordinate variable of the implicit equation, and the number of correspondences of the parametrization.
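A one-parameter analogue of this resultant construction is easy to show with a computer algebra system. The sketch below implicitizes a plane curve, so it only illustrates the idea rather than the paper's surface construction: for x = f(t)/h(t), y = g(t)/h(t) with no base points, the resultant of x*h - f and y*h - g with respect to t yields (a power of) the implicit polynomial.

```python
"""Curve analogue of resultant-based implicitization (sympy assumed)."""
import sympy as sp

t, x, y = sp.symbols("t x y")

# Parametrization of the cuspidal cubic: x = t**2, y = t**3 (h = 1, no base points).
f, g, h = t**2, t**3, sp.Integer(1)
implicit = sp.resultant(x*h - f, y*h - g, t)
print(sp.factor(implicit))   # x**3 - y**2 (up to sign), the implicit equation
```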
12.
Relan Devanjali Ballerini Lucia Trucco Emanuele MacGillivray Tom 《Multimedia Tools and Applications》2019,78(10):12783-12803
Multimedia Tools and Applications - Automatically classifying retinal blood vessels appearing in fundus camera imaging into arterioles and venules can be problematic due to variations between...
13.
This paper shows that the Blankinship algorithm, originally proposed to find the greatest common divisor of several integers and a solution of the associated linear Diophantine equation, can be used to find the general solution of that equation. This yields a more efficient method for finding the general solution than the one proposed by Bond. A modification of Blankinship's algorithm that avoids generating vectors with huge component values is also proposed.
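A compact sketch of Blankinship's algorithm is shown below (illustrative Python, assuming positive integer inputs): the matrix [a | I] is reduced with integer row operations until a single nonzero entry remains in the first column. That entry is gcd(a), its row carries a particular solution of a . x = gcd(a), and the remaining rows generate the homogeneous solutions, which together give the general solution discussed above.

```python
"""Sketch of Blankinship's algorithm on the augmented matrix [a | I]."""

def blankinship(a):
    n = len(a)
    # Augment the column vector a with the n x n identity matrix.
    M = [[a[i]] + [1 if j == i else 0 for j in range(n)] for i in range(n)]
    while sum(1 for row in M if row[0] != 0) > 1:
        # Pivot on the row with the smallest nonzero |first entry|.
        p = min((i for i in range(n) if M[i][0] != 0), key=lambda i: abs(M[i][0]))
        for i in range(n):
            if i != p and M[i][0] != 0:
                q = M[i][0] // M[p][0]
                M[i] = [x - q * y for x, y in zip(M[i], M[p])]
    p = next(i for i in range(n) if M[i][0] != 0)
    g = M[p][0]
    particular = M[p][1:]                                # a . particular == g
    null_basis = [M[i][1:] for i in range(n) if i != p]  # a . v == 0 for each v
    return g, particular, null_basis

if __name__ == "__main__":
    g, x0, basis = blankinship([12, 8, 20])
    print("gcd:", g, "particular solution:", x0)
    print("homogeneous generators:", basis)
```

For [12, 8, 20] this prints gcd 4 with particular solution (1, -1, 0); adding integer combinations of the generators gives every solution of 12x + 8y + 20z = 4.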
14.
Using a Euclid distance discriminant method to find protein coding genes in the yeast genome  Cited by 2 (0 self-citations, 2 by others)
The Euclid distance discriminant method is used to find protein coding genes in the yeast genome, based on the single-nucleotide frequencies at the three codon positions in the ORFs. The method is extremely simple and may be extended to find genes in prokaryotic genomes or in eukaryotic genomes with fewer introns. Six-fold cross-validation tests have demonstrated that the accuracy of the algorithm is better than 93%. Based on this, it is found that the total number of protein coding genes in the yeast genome is at most 5579, about 3.8-7.0% less than the currently widely accepted figure of 5800-6000. The base compositions at the three codon positions are analyzed in detail using a graphic method. The result shows that the preference codons adopted by yeast genes are of the RGW type, where R, G and W indicate a purine, a non-G base and A/T respectively, whereas the 'codons' in the intergenic sequences are of the form NNN, where N denotes any base. This fact constitutes the basis of the algorithm for distinguishing between coding and non-coding ORFs in the yeast genome. The names of putative non-coding ORFs are listed in detail.
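The discriminant itself is simple enough to sketch. The code below builds the 12 single-nucleotide frequencies (4 bases at 3 codon positions) for an ORF and assigns it to whichever class centroid is nearer in Euclidean distance; the centroids here are placeholders, whereas in the paper they are estimated from training ORFs:

```python
"""Sketch of a Euclidean-distance discriminant on codon-position base frequencies."""
import numpy as np

BASES = "ACGT"

def codon_position_frequencies(orf):
    """12-dim feature: frequency of each base at codon positions 1, 2, 3."""
    counts = np.zeros((3, 4))
    for i, base in enumerate(orf.upper()):
        if base in BASES:
            counts[i % 3, BASES.index(base)] += 1
    return (counts / counts.sum(axis=1, keepdims=True)).ravel()

def classify(orf, coding_centroid, noncoding_centroid):
    f = codon_position_frequencies(orf)
    d_cod = np.linalg.norm(f - coding_centroid)
    d_non = np.linalg.norm(f - noncoding_centroid)
    return "coding" if d_cod < d_non else "non-coding"

if __name__ == "__main__":
    # Placeholder centroids (toy numbers, not the paper's estimates).
    coding = np.full(12, 0.25)
    coding[[0, 2]] += 0.05    # slightly purine-rich first codon position
    coding[[1, 3]] -= 0.05
    noncoding = np.full(12, 0.25)
    print(classify("ATGGCTGCTAAAGGTGCTTAA", coding, noncoding))
```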
15.
Luo Jianqiao He Biao Ou Yang Li Bailin Wang Kai 《Neural computing & applications》2021,33(23):16181-16196
Neural Computing and Applications - One of the greatest challenges for scene classification is the lack of sufficient training samples. Label distribution learning (LDL) is proven to be effective...
16.
Using multi-population intelligent genetic algorithm to find the pareto-optimal parameters for a nano-particle milling process  Cited by 1 (0 self-citations, 1 by others)
Nano-particle materials have been widely applied in many industries, and the wet-type mechanical milling process is a popular powder technology for producing nano-particles. Since the milling process involves a number of process parameters and multiple quality criteria, it is very important to set the optimal milling process parameters in order to achieve the desired quality criteria. In this study, a new multi-objective evolutionary algorithm (MOEA), called the multi-population intelligent genetic algorithm (MPIGA), is proposed to find the optimal process parameters for the nano-particle milling process. In the new method, an orthogonal array (OA) experiment is first applied to obtain the analytic data of the milling process. The response surface method (RSM) is then applied to model the nano-particle milling process and to determine the objective (fitness) values. The generalized Pareto-based scale-independent fitness function (GPSIFF) is used to evaluate the Pareto solutions. Finally, the MPIGA is used to find the Pareto-optimal solutions. The results show that the integrated MPIGA approach can generate the Pareto-optimal solutions the decision maker needs to determine the optimal parameters and achieve the desired product qualities for a nano-particle milling process.
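As a rough illustration of the Pareto-based fitness used to rank candidate parameter settings, the sketch below implements a GPSIFF-like score. The formulation here (solutions dominated minus solutions dominating, plus the population size) is an assumption for illustration; the paper's exact definition and the OA/RSM/MPIGA machinery are not reproduced.

```python
"""Sketch of a Pareto-based scale-independent fitness over a small population."""
import numpy as np

def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return np.all(a <= b) and np.any(a < b)

def pareto_fitness(objectives):
    n = len(objectives)
    fitness = np.zeros(n)
    for i in range(n):
        p = sum(dominates(objectives[i], objectives[j]) for j in range(n) if j != i)
        q = sum(dominates(objectives[j], objectives[i]) for j in range(n) if j != i)
        fitness[i] = p - q + n
    return fitness

if __name__ == "__main__":
    # Toy bi-objective milling trade-off: (particle size, processing time), both minimized.
    pop = np.array([[120, 3.0], [90, 4.5], [150, 2.0], [100, 5.0], [95, 4.0]])
    print(pareto_fitness(pop))
```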
17.
Stefanie Harbich Marc Hassenzahl 《International journal of human-computer studies》2011,69(7-8):496-508
We hypothesized that users show different behavioral patterns at work when using interactive products, namely execute, engage, evolve and expand. These patterns refer to task accomplishment, persistence, task modification and creation of new tasks, each contributing to the overall work goal. By developing a questionnaire measuring these behavioral patterns we were able to demonstrate that these patterns do occur at work. They are not influenced by the users alone, but primarily by the product, indicating that interactive products indeed are able to support users at work in a holistic way. Behavioral patterns thus are accounted for by the interaction of users and product.
18.
This investigation shows that, in most cases, the floor cleaning procedure of typical restaurants could be improved, resulting in better cleaning efficiency and better floor friction. This simple approach could help reduce slips and falls in the workplace. Food safety officers visited ten European-style restaurants in the London Borough of Bromley (UK) to identify their floor cleaning procedure in terms of the cleaning method, the concentration and type of floor cleaner, and the temperature of the wash water. For all 10 restaurants visited, the cleaning method was damp mopping. Degreasers were used in three sites while neutral floor cleaners were used in seven sites. Typically, the degreasers were over-diluted and the neutral cleaners were overdosed. The wash water temperature ranged from 10 to 72 degrees C. The on-site cleaning procedures were repeated in the laboratory for the removal of olive oil from new and sealed quarry tiles, fouled and worn quarry tiles, and new porcelain tiles. It is found that in 24 out of 30 cases, cleaning efficiency can be improved by simple changes in the floor cleaning procedure and that these changes result in a significant improvement of the floor friction. The nature of the improved floor cleaning procedure depends on the flooring type. New and properly sealed flooring tiles can be cleaned by damp mopping with a degreaser diluted as recommended by the manufacturer in warm or hot water (24 to 50 degrees C). But as the tiles become worn and fouled, a more aggressive floor cleaning procedure is required, such as two-step mopping with a degreaser diluted as recommended by the manufacturer in warm water (24 degrees C).
19.
20.