Similar documents
Found 20 similar documents (search time: 31 ms)
1.
The simulated annealing optimization algorithm is considered. A class of problem instances and ‘neighbor selection’ matrices is presented for which no monotone decreasing temperature sequence is optimal with respect to certain natural performance criteria. An algorithm called ‘threshold random search’ is introduced, and use is made of the fact that simulated annealing is a randomized version of threshold random search with deterministic thresholds.
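The relationship between the two algorithms can be sketched as follows. This is a toy illustration, not the paper's construction: the objective function, neighbor rule, and parameter schedules are our own assumptions. The key fact it demonstrates is that drawing the acceptance threshold as −T·ln(U) with U uniform reproduces the familiar exp(−Δ/T) acceptance probability of simulated annealing.

```python
import math
import random

def threshold_random_search(f, x0, neighbor, thresholds, rng):
    # Accept a candidate whenever the cost increase is at most the
    # current deterministic threshold t_k (t_k = 0 is pure descent).
    x, fx = x0, f(x0)
    for t in thresholds:
        y = neighbor(x, rng)
        fy = f(y)
        if fy - fx <= t:
            x, fx = y, fy
    return x, fx

def simulated_annealing(f, x0, neighbor, temps, rng):
    # Simulated annealing = threshold random search with the *random*
    # threshold t_k = -T_k * ln(U_k), U_k uniform on (0, 1]:
    # P(delta <= -T ln U) = exp(-delta / T) for delta > 0.
    x, fx = x0, f(x0)
    for T in temps:
        y = neighbor(x, rng)
        fy = f(y)
        if fy - fx <= -T * math.log(1.0 - rng.random()):
            x, fx = y, fy
    return x, fx

# Toy instance: a rugged function on the integers 0..20.
f = lambda x: (x - 17) ** 2 + 5.0 * math.sin(3.0 * x)
neighbor = lambda x, rng: min(20, max(0, x + rng.choice((-1, 1))))
rng = random.Random(0)
x_trs, f_trs = threshold_random_search(
    f, 3, neighbor, [8.0] * 300 + [0.0] * 300, rng)
x_sa, f_sa = simulated_annealing(
    f, 3, neighbor, [6.0 * 0.98 ** k for k in range(600)], rng)
```

The threshold sequence here is monotone only for illustration; the paper's point is precisely that monotone schedules need not be optimal.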

2.
Certain problems, notably in computer vision, involve adjusting a set of real-valued labels to satisfy certain constraints. They can be formulated as optimisation problems, using the ‘least-disturbance’ principle: the minimal alteration is made to the labels that will achieve a consistent labelling. Under certain linear constraints, the solution can be achieved iteratively and in parallel, by hill-climbing. However, where ‘weak’ constraints are imposed on the labels (constraints that may be broken at a cost), the optimisation problem becomes non-convex; a continuous search for the solution is no longer satisfactory. A strategy is proposed for this case, by construction of convex envelopes and by the use of ‘graduated’ non-convexity.
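A crude continuation in the spirit of graduated non-convexity can be sketched for 1-D weak-string denoising. This is not the paper's convex-envelope construction: we simply start the truncated-quadratic smoothness cost with a huge truncation level (making the energy essentially convex) and lower it toward its true value, warm-starting each stage. All parameters and the test signal are our own assumptions.

```python
import numpy as np

def energy(x, d, lam, a):
    # Data term plus weak (truncated-quadratic) smoothness term.
    t = np.diff(x)
    return float(np.sum((x - d) ** 2) + np.sum(np.minimum(lam * t * t, a)))

def descend(x, d, lam, a, iters=400, lr=0.05):
    # Gradient descent on the current member of the family; links whose
    # smoothness cost is truncated exert no smoothing force.
    for _ in range(iters):
        t = np.diff(x)
        active = (lam * t * t < a).astype(float)
        g = 2.0 * (x - d)
        g[:-1] -= 2.0 * lam * t * active
        g[1:] += 2.0 * lam * t * active
        x = x - lr * g
    return x

def gnc(d, lam=1.0, schedule=(1e6, 1.5, 1.0)):
    # Shrink the truncation level toward the true weak-constraint cost,
    # warm-starting each stage from the previous solution.
    x = d.copy()
    for a in schedule:
        x = descend(x, d, lam, a)
    return x

# Noisy piecewise-constant signal with one genuine discontinuity.
d = np.array([0.0, 0.1, -0.1, 0.05, 5.0, 4.9, 5.1, 4.95])
x = gnc(d)
```

On this example the continuation smooths each plateau while leaving the large jump intact, because the link across the jump ends up truncated and therefore unpenalised beyond the fixed cost.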

3.
An important question in constraint satisfaction is how to restrict the problem to ensure tractability (since the general problem is NP-hard). The use of disjunctions has proven to be a useful method for constructing tractable constraint classes from existing classes; the well-known ‘max-closed’ and ‘ORD-Horn’ constraints are examples of tractable classes that can be constructed this way. Three sufficient conditions (the guaranteed satisfaction property, 1-independence and 2-independence) that each ensure the tractability of constraints combined by disjunctions have been proposed in the literature. We show that these conditions are both necessary and sufficient for tractability in three different natural classes of disjunctive constraints. This suggests that deciding this kind of property is a very important task when dealing with disjunctive constraints. We provide a simple, automatic method for checking the 1-independence property; this method is applicable whenever the consistency of the constraints under consideration can be decided by path-consistency. Our method builds on a connection between independence and refinements (which is a way of reducing one constraint satisfaction problem to another).

4.
This paper considers the problem of assigning pieces of luggage to trucks in a transportation system consisting of a depot, a fixed service area and two types of luggage; we call this the scheduling problem. The main purpose of this paper is to propose a procedure for the problem subject to keeping the balance of work loads among truck drivers. The procedure is based on three heuristic rules: replacing the address of each piece of luggage with a ‘conventional address’, converting the size of each piece into a ‘weight’, and introducing a measure to keep the work loads balanced. The procedure consists of three stages according to the ‘priority’ of the types of luggage. A case study is presented that demonstrates the practical usefulness of the procedure.
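The load-balancing step can be illustrated with a standard greedy heuristic. This is not the paper's three-stage procedure: it is the classic longest-processing-time rule, shown here only to make the balance objective concrete, with made-up weights.

```python
import heapq

def balance_loads(weights, n_trucks):
    # Longest-processing-time heuristic: take luggage 'weights' in
    # decreasing order and always give the next piece to the truck
    # with the smallest current load (kept in a min-heap).
    heap = [(0, k, []) for k in range(n_trucks)]
    heapq.heapify(heap)
    for w in sorted(weights, reverse=True):
        load, k, items = heapq.heappop(heap)
        items.append(w)
        heapq.heappush(heap, (load + w, k, items))
    # One (load, truck_id, items) triple per truck.
    return sorted(heap, key=lambda t: t[1])

trucks = balance_loads([9, 7, 6, 5, 4, 3, 2], 3)
```

For these weights the heuristic yields per-truck loads of 11, 12 and 13, i.e. a maximum imbalance of 2 units, which is the kind of driver-workload balance the procedure aims at.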

5.
The problem of controlling a partially observed diffusion process is studied when the cost structure has the form of an integral up to the first exit time from a bounded domain. A modified Zakai equation and the associated separated control problem are derived. An existence result for an optimal wide sense admissible control rule is sketched by analogy with the known ‘finite time horizon’ case.

6.
We present a domain decomposition finite element technique for efficiently generating lower and upper bounds to outputs which are linear functionals of the solutions to symmetric or nonsymmetric second-order coercive linear partial differential equations in two space dimensions. The method is based upon the construction of an augmented Lagrangian, in which the objective is a quadratic ‘energy’ reformulation of the desired output, and the constraints are the finite element equilibrium equations and intersubdomain continuity requirements. The bounds on the output for a suitably fine ‘truth-mesh’ discretization are then derived by appealing to a dual max min relaxation evaluated for optimally chosen adjoint and hybrid-flux candidate Lagrange multipliers generated by a K-element coarser ‘working-mesh’ approximation. Independent of the form of the original partial differential equation, the computation on the truth mesh is reduced to K decoupled subdomain-local, symmetric Neumann problems. The technique is illustrated for the convection-diffusion and linear elasticity equations.

7.
A procedure is proposed leading to an optimal detailed design of reinforced concrete members. The general scheme followed is that of a multi-criterion design optimization. The method synthesizes the information from the cross-sectional level, to the member level and finally, to a group of members where the detailed design of the whole group is decided. At the cross-sectional level, the required reinforcement in cm2 is converted into reinforcing bars of all possible diameters. At the member and/or group of members level, a number of lists that contain different design scenarios are generated. The method of objective weighting is applied to a multi-criterion objective function, which represents a compromise of the ‘minimum weight’, ‘maximum uniformity’ and ‘minimum number of reinforcing bars’ criteria. By varying the weighting factors, it is possible to generate the Pareto optimal set of the problem. The performance of the system is illustrated with a number of examples. The same building is designed using different weighting factors in the multi-criterion objective. The variation of the detailed designs is discussed and certain values for the weighting factors are recommended.
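The objective-weighting idea can be sketched in miniature: convert a required steel area into candidate bar layouts, scalarise two of the criteria with weighting factors, and sweep the weights to trace Pareto-optimal choices. The bar sizes, required area, and the use of excess steel area as a weight proxy are our own assumptions, and only two of the paper's three criteria are shown.

```python
import math

BAR_DIAMETERS_MM = [10, 12, 16, 20, 25]          # assumed available sizes

def bar_area(d):
    return math.pi * d * d / 4.0                  # mm^2

def candidate_layouts(required_mm2, max_bars=12):
    # For each diameter, the smallest bar count providing at least
    # the required reinforcement area.
    out = []
    for d in BAR_DIAMETERS_MM:
        for n in range(2, max_bars + 1):
            if n * bar_area(d) >= required_mm2:
                out.append((d, n))
                break
    return out

def weighted_choice(layouts, required_mm2, w_weight, w_bars):
    # Objective weighting: scalarise 'minimum weight' (excess steel
    # area as a proxy) and 'minimum number of bars' into one score.
    def score(layout):
        d, n = layout
        excess = n * bar_area(d) - required_mm2
        return w_weight * excess + w_bars * n
    return min(layouts, key=score)

def pareto_sweep(required_mm2, steps=20):
    # Varying the weighting factors traces (part of) the Pareto set.
    layouts = candidate_layouts(required_mm2)
    chosen = set()
    for k in range(steps + 1):
        w = k / steps
        chosen.add(weighted_choice(layouts, required_mm2, w, 1.0 - w))
    return sorted(chosen)

sols = pareto_sweep(1200.0)
```

At the extreme weights the sweep recovers the two single-criterion optima (fewest bars vs. least excess steel); intermediate weights can expose further compromise designs.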

8.
A classical problem of Pattern Recognition consists in looking for an operator of classification (a ‘classifier’) induced from a learning set on which classes are known. A problem frequently encountered in practice is that of looking for an operator of clustering (a ‘clusterfier’, in opposition to ‘classifier’) from a learning set on which clusters are also known. In the first case, we have to find an operator which allocates each new object to one of the classes defined by the learning set. In the second case, we have to find an operator which detects classes in the complete population, taking into account as well as possible the information given by the classes on the learning set. We propose a new approach that induces an aggregation index from the knowledge acquired on the learning set; the aggregation index thus obtained induces a hierarchy which infers the desired classes on the whole population.

A nearest-neighbours algorithm with validity constraints has been implemented to induce the final hierarchy. The CPU time obtained is clearly shorter than with the classical hierarchical ascending classification algorithm, which does not use inference.

This program has made it possible to find aggregation indices adapted to particular learning sets (elongated classes, spherical classes with a central kernel, half-spherical classes with a central kernel, noisy elongated classes…), and some of the new indices make it possible to recognize more specific classes than the usual indices.


9.
Current state-of-the-art security systems incorporate ‘passive’ and/or ‘human’ elements, the effectiveness of which can only be measured by their ability to ‘deter’ intruders. However, rapidly changing economic and cultural conditions have weakened the strengths associated with such systems. In the not too distant future, the need for an ‘active’ security system will become necessary in order to reduce the onslaught of crime.

This paper presents a conceptual basis for the incorporation of artificial intelligence concepts in the design and implementation of ‘active’ security systems. Specifically, the paper discusses issues pertaining to a real-time model for visual perception and tracking of possible intruders.


10.
Distributed hard real-time systems are characterized by communication messages associated with timing constraints, typically in the form of deadlines. A message should be received at the destination before its deadline expires. Carrier sense multiple access with collision detection (CSMA/CD) appears to be one of the most common communication network access schemes that can be used in distributed hard real-time systems. In this paper, we propose a new real-time network access protocol which is based on the CSMA/CD scheme. The protocol classifies the messages into two classes as ‘critical’ and ‘noncritical’ messages. The messages close to their deadlines are considered to be critical. A critical message is given the right to access the network by preempting a noncritical message in transmission. Extensive simulation experiments have been conducted to evaluate the performance of the protocol. It is shown that the protocol can provide considerable improvement over the virtual time CSMA/CD protocol proposed for hard real-time communication by Zhao et al. [1].
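The classification and preemption rule can be sketched as follows. The slack-based criticality test, the field names, and earliest-deadline-first tie-breaking are our own assumptions; the paper's actual protocol operates at the network-access level.

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: int
    deadline: float   # absolute deadline
    tx_time: float    # time needed to transmit the message

def is_critical(msg, now, window):
    # A message becomes 'critical' when its slack (time to deadline,
    # less its own transmission time) falls below a chosen window.
    return (msg.deadline - now - msg.tx_time) < window

def next_to_transmit(ready, now, window, in_transit=None):
    # Earliest-deadline-first among ready messages; a critical message
    # may preempt a noncritical message currently being transmitted.
    if not ready:
        return in_transit
    best = min(ready, key=lambda m: m.deadline)
    if in_transit is None:
        return best
    if is_critical(best, now, window) and not is_critical(in_transit, now, window):
        return best   # preemption
    return in_transit
```

Note that a critical message never preempts another critical message here; resolving contention among critical messages is exactly where the CSMA/CD arbitration of the paper comes in.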

11.
This paper presents a comparatively general method for specifying a ‘data constraint’ on a parameterized data type (i.e., specifying just which category of algebras it is supposed to be defined or correct on), and shows that there is a simple canonical form for such constraint specifications. We also show how such constraints may be employed to give ‘loose’ specifications of data types.

12.
This paper is concerned with an optimization-satisfaction problem to determine an optimal solution such that a certain objective function is minimized, subject to satisfaction conditions against uncertainties of any disturbances or opponents' decisions. Such satisfaction conditions require that several performance criteria are always less than specified values against any disturbances or opponents' decisions. Therefore, this problem is formulated as a minimization problem with constraints which include max operations with respect to the disturbances or the opponents' decision variables. A new computational method is proposed in which a series of approximate problems, transformed by applying a penalty function method to the max operations within the satisfaction conditions, are solved by usual nonlinear programming. It is proved that a sequence of approximated solutions converges to a true optimal solution. The proposed algorithm may be useful for systems design under unknown parameters, process control under uncertainties, general approximation theory, and strategic weapons allocation problems.
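The penalty transformation can be illustrated on a one-dimensional toy problem: minimize (x − 3)² subject to x + w ≤ 2 for every disturbance w ∈ [−1, 1], whose true optimum is x = 1. The grid discretisation of both x and w, and the quadratic penalty form, are our own simplifications of the paper's nonlinear-programming approach.

```python
import numpy as np

def solve_penalized(rho, w_grid, x_grid):
    # Approximate problem: the max over disturbances w is evaluated on
    # a grid and any constraint violation is penalised quadratically.
    best_x, best_val = None, float("inf")
    for x in x_grid:
        worst = max(x + w for w in w_grid)        # max_w g(x, w)
        val = (x - 3.0) ** 2 + rho * max(0.0, worst - 2.0) ** 2
        if val < best_val:
            best_x, best_val = float(x), val
    return best_x

w_grid = np.linspace(-1.0, 1.0, 101)
x_grid = np.linspace(-2.0, 4.0, 601)
# Increasing the penalty parameter drives the approximate solutions
# toward the true constrained optimum x = 1.
sols = [solve_penalized(rho, w_grid, x_grid) for rho in (1.0, 10.0, 100.0, 1000.0)]
```

Analytically the penalized minimizer is x = (3 + ρ)/(1 + ρ), which converges to 1 as ρ grows, matching the paper's convergence result in this simple setting.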

13.
There is a growing information gap between the development of advanced human-machine systems, and the availability of human factors design criteria that can be applied during their design process. Despite increased interest in the development of human factors design guidelines, there also remains considerable uncertainty and concern regarding the actual utility of such information. Indeed, many existing human factors reference materials have been criticized by designers for being ‘too wordy’, ‘too general’, and ‘too hard to understand’. The development of clear, relevant, and useful human factors guidelines requires a judicious mix of science and art to overcome such criticisms. Specifically, while a number of empirical and systematic methods can be productively applied to their development, the final design guidelines will always represent a subjective integration of user requirements, design constraints, available information, and expert judgement. This paper summarizes procedures and heuristics associated with both the science and the art components of human factors design guideline development.

14.
In order to perform business modelling as a part of information systems development, there is a need for frameworks and methods. The paper proposes a framework for business interaction based on a language/action perspective. The framework is an architecture of five generic layers. The first layer concept is ‘business act’, which functions as the basic unit of analysis. The following four layer concepts are ‘action pair’, ‘exchange’, ‘business transaction’, and ‘transaction group’. The framework is inspired by a similar framework constructed by Weigand et al. The paper makes a critical examination of this framework as a basis for the proposed framework.

15.
Buried stormwater pipe networks play a key role in surface drainage systems for urban areas of Australia. The pipe networks are designed to convey water from rainfall and surface runoff only and do not transport sewage. The deterioration of stormwater pipes is commonly graded into structural and serviceability condition using CCTV inspection data in order to recognize two different deterioration processes and consequences. This study investigated the application of neural networks modelling (NNM) in predicting serviceability deterioration, which is associated with reductions of pipe diameter up to a complete blockage. The outcomes of the NNM are the predicted serviceability condition for individual pipes, which is essential for planning proactive maintenance programs, and a ranking of pipe factors that potentially contribute to the serviceability deterioration. In this study, Bayesian weight estimation using Markov chain Monte Carlo simulation was used for calibrating the NNM on a case study in order to account for the uncertainty often encountered in NNM calibration using conventional back-propagation weight estimation. The performance and the ranked factors obtained from the NNM were also compared against a classical model using multiple discrimination analysis (MDA). The results showed that the predictive performance of the NNM using Bayesian weight estimation is better than that of the NNM using conventional back-propagation and the MDA model. Furthermore, among nine input factors, ‘pipe age’ and ‘location’ appeared insignificant whilst ‘pipe size’, ‘slope’, ‘the number of trees’ and ‘climatic condition’ were found consistently important over both models for the serviceability deterioration process. The remaining three factors, namely ‘structure’, ‘soil’ and ‘buried depth’, might be redundant factors. A better and more consistent data collection regime may help to improve the predictive performance of the NNM and identify the significant factors.

16.
When it comes time to write some computer code, the hardware of choice today is a workstation running the Unix operating system. This paper considers the problem of new users faced with Unix. It presents a number of possible solutions and elaborates on one which provides a graphical help system. This help system may be accessed via a pointing device which leads initially to a picture of a help structure; nodes in the structure may be selected to display the associated help text. The texts are structured to address the needs of differing users, ‘new’ through to ‘competent’.

17.
The project described is concerned with the development of a practical system for interpreting carotid angiograms. The general requirements of such a system are identified and some of the design considerations are discussed. The analysis strategies and types of knowledge used by an expert in recognizing and naming vessels and categorizing abnormalities on a single-plane angiogram are described. Two distinct types of knowledge are identified: ‘facts’ knowledge which is drawn from various areas of science, and ‘strategy’ knowledge which determines how the facts should be used. It is suggested that confidence parameters should be associated with each type of knowledge and these should be used in the control of the analysis of an angiogram. The ‘facts’ and ‘strategy’ knowledge bases have a hierarchical structure which, if exploited, would enable the system to be easily adapted for use in other application areas.

18.
The use of thresholding to separate objects in an image is analyzed in the light of a notion that ‘objects are in the eye of the beholder’, i.e. the existence of objects in an image is subjective rather than objective. Interpretation of the term ‘semantic’ as an association of an observed entity with an (other) entity which can be used for communication, is combined with Whorf's hypothesis on influence of language on reasoning, to yield a thresholding method. The method is based on minimizing the ambiguity which results from the different contexts (local and global) in which grey levels are classified as dark or bright. Examples are supplied and compared to thresholds which are optimal in the L2 norm.
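One plausible reading of the ambiguity criterion can be sketched as follows: classify each pixel globally (against a candidate threshold) and locally (against the mean of its neighbourhood), and pick the threshold minimizing the number of disagreements. This is our own stand-in for the paper's local/global contexts, not its actual definition, and the test image is made up.

```python
import numpy as np

def local_mean(img, win=3):
    # Mean over a win x win neighbourhood (edge-padded).
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + win, j:j + win].mean()
    return out

def ambiguity(img, t, win=3):
    # Pixels whose global classification (against threshold t) disagrees
    # with their local-context classification (against the local mean).
    glob = img > t
    loc = img > local_mean(img, win)
    return int(np.sum(glob != loc))

def min_ambiguity_threshold(img, win=3):
    candidates = np.unique(img)[:-1]   # a threshold at the maximum is useless
    return min(candidates, key=lambda t: ambiguity(img, t, win))

img = np.array([
    [40, 40, 40, 200, 200, 200],
    [40, 60, 40, 200, 220, 200],
    [40, 40, 40, 200, 200, 200],
])
t_best = min_ambiguity_threshold(img)
```

Even this crude version exhibits the paper's theme: the chosen threshold depends on how local context classifies grey levels, not only on the global histogram.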

19.
A one-dimensional simulation procedure is developed for use in estimating structural reliability in multi-dimensional load and resistance space with the loads represented as stochastic processes. The technique employed is based on the idea of using ‘strips’ of points parallel to each other and sampled on the limit state hyperplanes. The ‘local’ outcrossing rate and the zero time failure probability Pf(0) associated with the narrow strips are derived using the conditional reliability index. When the domain boundary consists of a set of limit states, second order bounds are used to obtain a lower bound approximation of the outcrossing rate and Pf(0) associated with the union of a set of λ strips. It is shown by examples that for high reliability problems, λ may be much less than the number of limit states without significant loss of accuracy and with considerable saving in computation time. It was also found that the rate of convergence of the simulations is quite fast even without using importance sampling.

20.
Simple association rules (SAR) and the SAR-based rule discovery
Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rules explosion is a problem of concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Instead, this paper concentrates on a smaller set of rules, namely, a set of simple association rules each with its consequent containing only a single attribute. Such a rule set can be used to derive all other association rules, meaning that the original rule set based on conventional algorithms can be ‘recovered’ from the simple rules without any information loss. The number of simple rules is much less than the number of all rules. Moreover, corresponding algorithms are developed such that certain forms of rules (e.g. ‘P?’ or ‘?Q’) can be generated in a more efficient manner based on simple rules.
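The notion of a simple association rule (single-attribute consequent) can be sketched with a brute-force miner on a toy transaction set. The thresholds and data are our own; the paper's algorithms are far more efficient than this exhaustive enumeration.

```python
from itertools import combinations

def support(itemset, transactions):
    s = frozenset(itemset)
    return sum(1 for t in transactions if s <= t) / len(transactions)

def simple_rules(transactions, min_sup=0.4, min_conf=0.7):
    # Mine 'simple' association rules: antecedent -> single attribute.
    # Brute force over all itemsets, fine for a toy dataset.
    items = sorted(set().union(*transactions))
    rules = []
    for r in range(2, len(items) + 1):
        for iset in combinations(items, r):
            if support(iset, transactions) < min_sup:
                continue
            for c in iset:                      # single-attribute consequent
                ante = tuple(i for i in iset if i != c)
                conf = support(iset, transactions) / support(ante, transactions)
                if conf >= min_conf:
                    rules.append((ante, c, round(conf, 3)))
    return rules

tx = [frozenset(t) for t in (
    {"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"},
)]
rules = simple_rules(tx)
```

Every rule produced has a one-attribute consequent; rules with larger consequents (e.g. a → bc) can be derived from combinations of such simple rules, which is the compression the paper exploits.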
