Similar Documents
20 similar documents found.
1.
In this paper, we extend the classical notion of quasi-implication (“when a_i is present, then a_j is usually also present”) to R-rules (rules of rules), whose premises and conclusions may themselves be rules. A new statistical measure, based on the implicative intensity defined by Gras for quasi-implications, is introduced to assess the significance of R-rules on a data set. We show how to organize R-rules in a new combinatorial structure, the directed hierarchy, which is inspired by classical hierarchical classification. An incremental algorithm is developed to find the most significant R-rule “amalgamation”. An illustration is presented on a real data set stemming from a recent survey by the French Public Education Mathematical Teacher Society on the mathematics level of pupils in the final year of secondary education and their perception of the subject.

2.
Consideration was given to the one-dimensional bin packing problem under two conditions: heterogeneity of the items put into bins, and contiguity, that is, identical items must be chosen together for the next bin. A branch-and-bound method based on the “next fit” principle and a “linear programming” method were proposed to solve it. The problem and its solution may be used to construct an improved lower bound for the two-dimensional packing problem.
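For reference, a minimal sketch of the classical “next fit” principle mentioned above (illustrative only; it implements neither the authors' branch-and-bound nor the contiguity condition):

    def next_fit(items, capacity):
        # Classical next-fit heuristic: keep a single open bin and start
        # a new one whenever the current item does not fit. Runs in O(n).
        bins, current, free = [], [], capacity
        for size in items:
            if size > free:
                bins.append(current)
                current, free = [], capacity
            current.append(size)
            free -= size
        if current:
            bins.append(current)
        return bins

    # Example: next_fit([4, 3, 5, 2, 6], capacity=8) -> [[4, 3], [5, 2], [6]]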

3.
Two new modeling and simulation approaches for Simultaneous Switching Noise (SSN) are described and compared to “brute force” simulation in SPICE, considering both simulation accuracy and simulation run-time. The two new approaches are: 1) the “effective inductance” method, in which an approximate, very efficient method of extracting an SSN L_eff is used; and 2) the “macromodel” method, in which the complex inductance network responsible for SSN is represented by only a few dominant poles in the frequency domain, and the time-domain response is obtained by an efficient convolution algorithm. Both approaches are shown to be accurate and fast, but only the effective inductance algorithm is robust in numerical convergence. Received: 19 March 1997 / Accepted: 25 March 1997
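For orientation, the standard first-order estimate that motivates an effective inductance (a textbook approximation, not a formula from this paper) is

    \[ \Delta V_{\mathrm{SSN}} \;\approx\; N \, L_{\mathrm{eff}} \, \frac{dI}{dt} \]

where N is the number of simultaneously switching drivers and dI/dt is the per-driver current slew rate; the paper's contribution is an efficient way to extract L_eff from the full inductance network.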

4.
A new fourth-order box scheme for the Poisson problem in a square with Dirichlet boundary conditions is introduced, extending the approach in Croisille (Computing 78:329–353, 2006). The design is based on a “Hermitian box” approach, combining the approximation of the gradient by the fourth-order Hermitian derivative with a conservative discrete formulation on boxes of length 2h. The goal is twofold: first, to show that fourth-order accuracy is obtained both for the unknown and the gradient; second, to describe a fast direct algorithm based on the Sherman–Morrison formula and the Fast Sine Transform. Several numerical results in a square are given, indicating an asymptotic O(N^2 log_2(N)) computing complexity.
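To illustrate the Fast Sine Transform diagonalization on which such fast direct solvers rest, here is a minimal 1D sketch for the standard second-order three-point scheme (the paper's fourth-order box scheme and its Sherman–Morrison step are not reproduced here):

    import numpy as np
    from scipy.fft import dst, idst

    def poisson_1d_dirichlet(f, h):
        # Solve -u'' = f on a uniform interior grid with u = 0 at both ends.
        # The 3-point matrix (1/h^2)*tridiag(-1, 2, -1) is diagonalized by
        # the type-I discrete sine transform, giving O(N log N) work.
        n = len(f)
        k = np.arange(1, n + 1)
        lam = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2
        return idst(dst(f, type=1) / lam, type=1)

    # Example: f = pi^2 sin(pi x) on (0, 1) should return u ~ sin(pi x).
    n = 127; h = 1.0 / (n + 1); x = h * np.arange(1, n + 1)
    u = poisson_1d_dirichlet(np.pi**2 * np.sin(np.pi * x), h)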

5.
We present polylogarithmic approximations for the R|prec|C_max and R|prec|∑_j w_j C_j problems when the precedence constraints are “treelike”—i.e., when the undirected graph underlying the precedences is a forest. These are the first non-trivial generalizations of the job shop scheduling problem to scheduling with precedence constraints that are not just chains. These are also the first non-trivial results for the weighted completion time objective on unrelated machines with precedence constraints of any kind. We obtain improved bounds for the weighted completion time and flow time for the case of chains with restricted assignment—this generalizes the job shop problem to these objective functions. We use the same lower bound of “congestion + dilation” as other job shop scheduling approaches (e.g., Shmoys, Stein and Wein, SIAM J. Comput. 23, 617–632, 1994). The first step in our algorithm for the R|prec|C_max problem with treelike precedences uses the algorithm of Lenstra, Shmoys and Tardos to obtain a processor assignment whose congestion + dilation value is within a constant factor of the optimal. We then show how to generalize the random-delays technique of Leighton, Maggs and Rao to the case of trees. For the special case of chains, we show a dependent rounding technique which leads to a bicriteria approximation algorithm for minimizing the flow time, a notoriously hard objective function. A preliminary version of this paper appeared in the Proc. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), pages 146–157, 2005. V.S. Anil Kumar supported in part by NSF Award CNS-0626964; part of this work was done while at the Los Alamos National Laboratory, supported in part by the Department of Energy under Contract W-7405-ENG-36. M.V. Marathe supported in part by NSF Award CNS-0626964; part of this work was done while at the Los Alamos National Laboratory, supported in part by the Department of Energy under Contract W-7405-ENG-36. Part of this work by S. Parthasarathy was done while at the Department of Computer Science, University of Maryland, College Park, MD 20742, and in part while visiting the Los Alamos National Laboratory; research supported in part by NSF Award CCR-0208005 and NSF ITR Award CNS-0426683. Research of A. Srinivasan supported in part by NSF Award CCR-0208005, NSF ITR Award CNS-0426683, and NSF Award CNS-0626636.
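The “congestion + dilation” lower bound referred to above is the standard one: any schedule takes at least as long as the heaviest machine load (congestion C) and at least as long as the longest chain of operations (dilation D), so

    \[ C_{\max} \;\ge\; \max(C, D) \;\ge\; \tfrac{1}{2}\,(C + D) \]

and an algorithm with makespan polylogarithmically close to C + D is therefore polylogarithmically close to optimal.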

6.
Video scene retrieval with interactive genetic algorithm
This paper proposes an emotion-based video scene retrieval algorithm. First, abrupt/gradual shot boundaries are detected in a video clip representing a specific story. Then, five video features, namely “average color histogram,” “average brightness,” “average edge histogram,” “average shot duration,” and “gradual change rate,” are extracted from each of the videos, and an interactive genetic algorithm maps these features to the emotional space that a user has in mind. After the proposed algorithm selects the videos containing the corresponding emotion from the initial population of videos, their feature vectors are regarded as chromosomes and a genetic crossover is applied to them. Next, the new chromosomes produced by crossover are compared with the feature vectors of the database videos using a similarity function, and the most similar videos are taken as solutions of the next generation. By iterating this process, a new population of videos matching what the user has in mind is retrieved. To show the validity of the proposed method, six example categories, “action,” “excitement,” “suspense,” “quietness,” “relaxation,” and “happiness,” are used as emotions in the experiments. The method achieves 70% retrieval effectiveness on average over 300 commercial videos.
Sung-Bae Cho
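As a minimal sketch of the crossover step described above (the five feature names come from the abstract, but the flat-vector encoding and the similarity function are illustrative assumptions, not the authors' exact design):

    import random

    # Chromosome = 5-dimensional feature vector, one slot per feature:
    # [avg_color_histogram, avg_brightness, avg_edge_histogram,
    #  avg_shot_duration, gradual_change_rate]
    def one_point_crossover(parent_a, parent_b):
        # Swap the tails of two parent feature vectors at a random cut.
        cut = random.randint(1, len(parent_a) - 1)
        return (parent_a[:cut] + parent_b[cut:],
                parent_b[:cut] + parent_a[cut:])

    def similarity(v, w):
        # Illustrative similarity: negative Euclidean distance, so
        # larger values mean more similar videos.
        return -sum((a - b) ** 2 for a, b in zip(v, w)) ** 0.5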

7.
A simple averaging argument shows that, given a randomized algorithm A and a function f such that for every input x, Pr[A(x) = f(x)] ≥ 1 − ρ (where the probability is over the coin tosses of A), there exists a non-uniform deterministic algorithm B “of roughly the same complexity” such that Pr[B(x) = f(x)] ≥ 1 − ρ (where the probability is over a uniformly chosen input x). This implication is often referred to as “the easy direction of Yao's lemma” and can be thought of as “weak derandomization” in the sense that B is deterministic but only succeeds on most inputs. The implication follows because there exists a fixed value r′ for the random coins of A such that “hardwiring r′ into A” produces a deterministic algorithm B. However, this argument does not give a way to explicitly construct B.
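The averaging step, written out (a standard derivation consistent with the abstract, not quoted from the paper): since every x is answered correctly with probability at least 1 − ρ over the coins r,

    \[ \mathbb{E}_{r}\bigl[\Pr_{x}[A(x;r) = f(x)]\bigr] \;=\; \Pr_{x,r}[A(x;r) = f(x)] \;\ge\; 1 - \rho \]

so some fixed coin sequence r′ attains at least the average, i.e., Pr_x[A(x;r′) = f(x)] ≥ 1 − ρ, and B(x) = A(x;r′) is the required deterministic algorithm.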

8.
Conclusion. Theorems 4.1, 4.2 and 6.1, 6.2 respectively admit a natural specialization for the problem of constructing the reachability region of the linear controlled system from Sec. 1. Informally, this specialization has the following form. If the vector functions b(·) and S(·) from Sec. 1 are “not too discontinuous” (admit a uniform approximation by piecewise-constant, right-continuous maps on [t_0, ϑ_0]), then, given a common resource constraint c, the controlled analogues of reachability regions are identical for the class of controls with an integral constraint (on the total pulse) and the class of “pure pulse” shock controls, whereas the “ordinary” reachability regions corresponding to unperturbed conditions (see, e.g., the y-constraint in (1.2)) may be different. This is illustrated by the examples of Secs. 1 and 5. The regularized version of the problem of constructing the reachability region of a linear system is thus insensitive to a change of the class of admissible controls. The study was supported by the Russian Foundation for Basic Research (94-01-00350). Translated from Kibernetika i Sistemnyi Analiz, No. 3, pp. 3–17, May–June, 1998.

9.
We show that the space of polygonizations of a fixed planar point set S of n points is connected by O(n^2) “moves” between simple polygons. Each move is composed of a sequence of atomic moves called “stretches” and “twangs,” which walk between weakly simple “polygonal wraps” of S. These moves show promise as a basis for generating random polygons.

10.
Optimal design of truss structures using ant algorithm
An ant algorithm, consisting of the Ant System and API (after “apicalis” in Pachycondyla apicalis) algorithms, was proposed in this study to find optimal truss structures achieving the minimum-weight objective under stress, deflection, and kinematic stability constraints. A two-stage approach was adopted: first, the topology of the truss structure was optimized from a given ground structure employing the Ant System algorithm, owing to its discrete character, and then the size and/or shape of the members was optimized utilizing the API algorithm. The effectiveness of the proposed ant algorithm was evaluated on numerous 2-D and 3-D truss-structure problems. The proposed algorithm was observed to find truss structures better than those reported in the literature. Moreover, multiple truss topologies with almost equal overall weights can be found simultaneously.

11.
We describe a mechanism called SpaceGlue for adaptively locating services based on the preferences and locations of users in a distributed and dynamic network environment. In SpaceGlue, services are bound to physical locations, and a mobile user accesses local services depending on the space he/she is currently visiting. SpaceGlue dynamically identifies the relationships between different spaces and links or “glues” spaces together depending on how previous users moved among them and used their services. Once spaces have been glued, users receive recommendations of remote services (i.e., services provided in a remote space) reflecting the preferences of the crowd of users visiting the area. The strengths of bonds are implicitly evaluated by users and adjusted by the system on the basis of their evaluation. SpaceGlue is an alternative to existing schemes such as data mining and recommendation systems, and it is suitable for distributed and dynamic environments. The bonding algorithm for SpaceGlue incrementally computes the relationships or “bonds” between different spaces in a distributed way. We implemented SpaceGlue using the distributed network application platform Ja-Net and evaluated it by simulation to show that it adaptively locates services reflecting trends in user preferences. Using “Mutual Information (MI)” and “F-measure” as measures of the level of such trends and the accuracy of service recommendation, the simulation results showed that (1) in SpaceGlue, the F-measure increases with the level of MI (i.e., the more significant the trends, the greater the F-measure values); (2) SpaceGlue achieves better precision and F-measure than the “flooding” case (i.e., every piece of service information is broadcast to everybody) and the “no glue” case by narrowing down the appropriate partners to send recommendations to on the basis of bonds; and (3) SpaceGlue achieves a better F-measure with large numbers of spaces and users than the other cases (“flooding” and “no glue”). Tomoko Itao is an alumna of NTT Network Innovation Laboratories.
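For reference, the conventional definitions behind the two measures named above (the paper presumably uses these standard forms):

    \[ F \;=\; \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, \qquad I(X;Y) \;=\; \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} \]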

12.
A new dynamic clustering approach (DCPSO), based on particle swarm optimization, is proposed and applied to image segmentation. The proposed approach automatically determines the “optimum” number of clusters and simultaneously clusters the data set with minimal user interference. The algorithm starts by partitioning the data set into a relatively large number of clusters to reduce the effect of the initial conditions. Using binary particle swarm optimization, the “best” number of clusters is selected, and the centers of the chosen clusters are then refined via the K-means clustering algorithm. The proposed approach was applied to both synthetic and natural images. The experiments show that it generally finds the “optimum” number of clusters in the tested images. Genetic algorithm and random search versions of dynamic clustering are also presented and compared to the particle swarm version.
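A minimal sketch of the select-then-refine idea (the binary-mask encoding and all names are illustrative assumptions, not the authors' exact DCPSO):

    import numpy as np
    from sklearn.cluster import KMeans

    def refine_selected_clusters(data, candidate_centers, mask):
        # 'mask' plays the role of one binary-PSO particle: a 0/1 vector
        # saying which candidate cluster centers survive. K-means then
        # refines the survivors; the inertia can serve as a fitness signal.
        chosen = candidate_centers[mask.astype(bool)]
        km = KMeans(n_clusters=len(chosen), init=chosen, n_init=1).fit(data)
        return km.cluster_centers_, km.inertia_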

13.
We study the problem of efficiently extracting the K entities in a temporal database that are most similar to a given search query. This problem is well studied in relational databases, where each entity is represented as a single record and a variety of methods exist to define the similarity between a record and the search query. In temporal databases, however, each entity is represented as a sequence of historical records, and how to properly define the similarity of an entity remains an open problem. The main challenge is that, when a user issues a search query for an entity, he or she is prone to mix up information about the same entity from different time points. As a result, the record-granularity methods used in relational databases no longer work. Instead, we regard each entity as a set of “virtual records,” where the attribute values of a “virtual record” can come from different records of the same entity. In this paper, we propose a novel evaluation model with which the similarity between each “virtual record” and the query can be effectively quantified; the maximum similarity over an entity's “virtual records” is taken as the similarity of the entity. Because the number of “virtual records” of an entity is exponentially large, calculating this similarity is challenging. We therefore propose a Dominating Tree Algorithm (DTA), based on a bounding-pruning-refining strategy, to efficiently extract the K entities with the greatest similarities. We conduct extensive experiments on both real and synthetic datasets. The encouraging results show that our model for defining the similarity between an entity and the search query is effective, and that the proposed DTA performs at least two orders of magnitude better than the naive approach.
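To make the “virtual record” notion concrete, here is a toy sketch under the illustrative assumption that similarity is a sum of per-attribute scores; in that special case the maximum over the exponentially many virtual records decomposes attribute by attribute, whereas the paper's DTA targets the general case:

    def entity_similarity(history, query, attr_sim):
        # history: the entity's records over time, each a dict of attributes.
        # query:   dict mapping attribute -> queried value.
        # A virtual record picks each attribute value from ANY record of the
        # entity; with an additive similarity the best virtual record simply
        # takes, per attribute, the historical value closest to the query.
        return sum(max(attr_sim(rec[a], q) for rec in history)
                   for a, q in query.items())

    # Example with a hypothetical numeric closeness score:
    history = [{"price": 10, "stock": 5}, {"price": 12, "stock": 2}]
    query = {"price": 11, "stock": 5}
    sim = entity_similarity(history, query, lambda v, q: -abs(v - q))
    # Best virtual record mixes price from one record with stock from another.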

14.
We introduce nondeterministic graph searching with a controlled amount of nondeterminism and show how this new tool can be used in algorithm design and combinatorial analysis, applying to both pathwidth and treewidth. We prove the equivalence between this game-theoretic approach and graph decompositions called q-branched tree decompositions, which can be interpreted as a parameterized version of tree decompositions. Path decomposition and (standard) tree decomposition are the two extreme cases of q-branched tree decompositions. The equivalence between nondeterministic graph searching and q-branched tree decomposition enables us to design an exact (exponential-time) algorithm computing q-branched treewidth for all q ≥ 0, which is thus valid for both treewidth and pathwidth. This algorithm performs as fast as the best known exact algorithm for pathwidth. Conversely, this equivalence also enables us to derive a lower bound on the amount of nondeterminism required to search a graph with the minimum number of searchers. Additional support of F.V. Fomin by the Research Council of Norway. Additional support of P. Fraigniaud from the INRIA Project “Grand Large” and from the Project PairAPair of the ACI “Masse de Données”. Additional support of N. Nisse from the Project Fragile of the ACI “Sécurité & Informatique”.

15.
Conclusion. The proposed method for polynomial expansion of SBF, based on construction of the triangular table T_n(π(F)) of local codes of its derivatives, has the lowest computational complexity among known methods. Once the table is constructed, the method easily determines all the “residual” functions ϑ_rl^km for various expansion parameters k and m. Another advantage of the method is its applicability to polynomial expansion of arbitrary BF and partially symmetric BF. In this case, the base of the “triangle” is the truth table of the arbitrary BF or the local code (including the convolved local code) of the partially symmetric BF. The method can be successfully used for the synthesis of a wide class of digital networks. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 59–71, November–December, 1996.

16.
17.
18.
For contractible regions ω in ℝ³ with generic smooth boundary, we determine the global structure of the Blum medial axis M. We give an algorithm for decomposing M into “irreducible components” which are attached to each other along “fin curves”. The attaching cannot be described by a tree structure as in the 2D case. However, a simplified but topologically equivalent medial structure M̂ with the same irreducible components can be described by a two-level tree structure. The top level describes the simplified form of the attaching, and the second-level tree structure for each irreducible component specifies how to construct the component by attaching smooth medial sheets to the network of Y-branch curves. The conditions for these structures are complete in the sense that any region whose Blum medial axis satisfies the conditions is contractible.

19.
The theory of average-case complexity studies the expected complexity of computational tasks under various specific distributions on the instances, rather than their worst-case complexity. Thus, this theory deals with distributional problems, defined as pairs each consisting of a decision problem and a probability distribution over the instances. While for applications utilizing hardness, such as cryptography, one seeks an efficient algorithm that outputs random instances of some problem that are hard for any algorithm with high probability, the resulting hard distributions in these cases are typically highly artificial and do not establish the hardness of the problem under “interesting” or “natural” distributions. This paper studies the possibility of proving generic hardness results (i.e., for a wide class of NP-complete problems) under “natural” distributions. Since it is not clear how to define a class of “natural” distributions for general NP-complete problems, one possibility is to impose some strong computational constraint on the distributions, with the intention that this constraint force the distributions to “look natural”. Levin, in his seminal 1984 paper on average-case complexity, defined such a class of distributions, which he called P-computable distributions. He then showed that the NP-complete Tiling problem, under some P-computable distribution, is hard for the complexity class of distributional NP problems (i.e., NP with P-computable distributions). However, since then very few NP-complete problems (coupled with P-computable distributions), and in particular “natural” problems, were shown to be hard in this sense. In this paper we show that all natural NP-complete problems can be coupled with P-computable distributions such that the resulting distributional problem is hard for distributional NP.

20.
Association rule mining algorithms operate on a data matrix (e.g., customers × products) to derive association rules [AIS93b, SA96]. We propose a new paradigm, namely Ratio Rules, which are quantifiable in that we can measure the “goodness” of a set of discovered rules. We also propose the “guessing error” as a measure of this “goodness”: the root-mean-square error of the reconstructed values of the cells of the given matrix when we pretend that they are unknown. Another contribution is a novel method to guess missing/hidden values from the Ratio Rules that our method derives. For example, if somebody bought $10 of milk and $3 of bread, our rules can “guess” the amount spent on butter. Thus, unlike association rules, Ratio Rules can perform a variety of important tasks such as forecasting, answering “what-if” scenarios, detecting outliers, and visualizing the data. Moreover, we show that we can compute Ratio Rules in a single pass over the data set with small memory requirements (a few small matrices), in contrast to association rule mining methods, which require multiple passes and/or large memory. Experiments on several real data sets (e.g., basketball and baseball statistics, biological data) demonstrate that the proposed method (a) leads to rules that make sense, (b) can find large itemsets in binary matrices even in the presence of noise, and (c) consistently achieves a “guessing error” up to 5 times smaller than that of straightforward column averages. Received: March 15, 1999 / Accepted: November 1, 1999
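A toy sketch of the guessing idea in the spirit of Ratio Rules, using a plain least-squares projection onto the top-k principal directions of the data matrix (an illustrative stand-in, not the paper's single-pass construction):

    import numpy as np

    def guess_hidden_cell(X, k, row, col):
        # Fit k ratio-rule-like directions to the customer-product matrix,
        # then reconstruct one hidden cell of a row from its known entries.
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        V = Vt[:k].T                                   # (n_products, k)
        known = np.arange(X.shape[1]) != col
        coef, *_ = np.linalg.lstsq(V[known], X[row, known] - mu[known],
                                   rcond=None)
        return mu[col] + V[col] @ coef

    # The "guessing error" is then the RMSE over all cells, hiding one cell
    # at a time and comparing the guess with the true value.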
