Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
Although analytic versions of the Popov criterion are available for multiple-loop feedback systems, it is difficult to get a graphical representation which is as useful as the single-loop one. This paper presents a graphical representation which provides a sufficient condition for the Popov criterion to hold in the multiple-loop case. Two versions are given: the first is suited to design methods based on diagonal dominance and the second is suited to those based on characteristic loci. When the linear part of the system is normal, the second version is necessary as well as sufficient for the criterion to hold.
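For reference, the classical single-loop Popov criterion that the paper generalises can be stated as follows (standard textbook form; the notation is assumed rather than taken from the paper):

```latex
% Single-loop Popov criterion: for a linear part G(s) in feedback with a
% memoryless nonlinearity confined to the sector [0, k], absolute stability
% is guaranteed if there exists q >= 0 such that
\operatorname{Re}\!\left[(1 + j\omega q)\,G(j\omega)\right] + \frac{1}{k} > 0
\quad \text{for all } \omega \ge 0 .
```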

2.
We investigate whether accent identification is more effective for English utterances embedded in a different language as part of a mixed code than for English utterances that are part of a monolingual dialogue. Our focus is on Xhosa and Zulu, two South African languages for which code-mixing with English is very common. In order to carry out our investigation, we extract English utterances from mixed-code Xhosa and Zulu speech corpora, as well as comparable utterances from an English-only corpus by Xhosa and Zulu mother-tongue speakers. Experiments using automatic accent identification systems show that identification is substantially more accurate for the utterances originating from the mixed-code speech. These findings are supported by a corresponding set of perceptual experiments in which human subjects were asked to identify the accents of recorded utterances. We conclude that accent identification is more successful for these utterances because accents are more pronounced for English embedded in mother-tongue speech than for English spoken as part of a monolingual dialogue by non-native speakers. Furthermore, we find that this is true for human listeners as well as for automatic identification systems.
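As a concrete illustration of what an automatic accent identification system can look like, here is a minimal GMM-based sketch. The feature extraction, model sizes, and accent labels are assumptions for illustration; the abstract does not specify the systems used:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_accent_models(features_by_accent, n_components=32):
    """Fit one GMM per accent over acoustic feature vectors (e.g. MFCCs).
    `features_by_accent` maps an accent label to a list of utterance
    feature matrices (frames x dims)."""
    return {accent: GaussianMixture(n_components=n_components).fit(np.vstack(mats))
            for accent, mats in features_by_accent.items()}

def identify_accent(models, utterance_features):
    """Label an utterance with the accent whose model gives the highest
    average per-frame log-likelihood."""
    return max(models, key=lambda a: models[a].score(utterance_features))
```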

3.
In optimization routines used for on-line Model Predictive Control (MPC), linear systems of equations are solved in each iteration. This is true for both Active Set (AS) and Interior Point (IP) solvers, and for linear, nonlinear, and hybrid MPC alike. The main computational effort is spent solving these linear systems of equations, and hence it is of great interest to solve them efficiently. In high-performance solvers for MPC, this is done using Riccati recursions or generic sparsity-exploiting algorithms. To obtain this performance gain, the problem has to be formulated in a sparse way, which introduces more variables. The alternative is a smaller formulation in which the objective function Hessian is dense. In this work, it is shown that the structure can be exploited also when using the dense formulation. More specifically, it is shown that a standard Cholesky factorization can be computed efficiently for the dense formulation. This results in a computational complexity that grows quadratically in the prediction horizon length, instead of cubically as for the generic Cholesky factorization.
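For context, a generic dense Cholesky factorization costs O(n³) flops in the matrix dimension; since the dense MPC Hessian's dimension grows linearly in the horizon length, that baseline is cubic in the horizon, which is the growth the paper's structured factorization reduces to quadratic. A minimal textbook Cholesky sketch of that baseline (not the paper's structured algorithm):

```python
import numpy as np

def cholesky(A):
    """Textbook Cholesky factorization A = L L^T for a symmetric positive
    definite matrix A; O(n^3) flops, the generic baseline discussed above."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # diagonal entry: subtract squares of the already-computed row
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L
```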

4.
Various problems associated with localization during curved plate fabrication are discussed. Localization is a necessary step in automating curved plate fabrication: it aligns the designed shape with the fabricated one as closely as possible so that their shapes can be compared. On top of this localization, various conditions are introduced to reflect requirements that arise during fabrication, such as minimum cutting length, maintenance of cutting length, localization for non-penetration, and data types for localization. Each condition is formulated as a constraint that is provided as input to the optimization problem for localization. Iterative algorithms for localization under each constraint are proposed, and examples are used to demonstrate their performance.
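The core unconstrained step in such localization is rigid registration of the designed shape to the fabricated one. A minimal least-squares sketch of that step using the standard Kabsch/SVD method, assuming 3-D point samples of both shapes; the paper's fabrication constraints (minimum cutting length, non-penetration, and so on) are not reproduced:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t mapping 3-D points P onto Q
    (Kabsch method). P and Q are (n, 3) arrays of corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t                              # aligned point: R @ p + t
```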

5.
Ergonomics, 2012, 55(11): 1537-1538
The motivation for this paper is to review the status of Hierarchical Task Analysis (HTA) as a general framework for examining tasks, including those for which cognitive task analysis methods might be assumed to be necessary. HTA is treated as a strategy for examining tasks, aimed at refining performance criteria, focusing on constituent skills, understanding task contexts and generating useful hypotheses for overcoming performance problems. A neutral and principled perspective avoids bias and enables the analyst to justify using different analytical methods and develop hypotheses as information is gained about the task. It is argued that these considerations are equally valid when examining tasks that are assumed to contain substantial cognitive elements. Moreover, examining cognition within the context of a broader task helps to situate cognition within the network of actions and decisions that it must support, as well as helping to establish where effort in cognitive task analysis is really justified.

6.
This paper proposes a deterministic heuristic algorithm (DHA) for the two-dimensional strip packing problem in which 90° rotations of pieces are allowed and there is no guillotine packing constraint. The objective is to place all pieces into a strip of given width, without overlap, so as to minimize the total height of the packing. Based on the definition of action space, a new sorting rule for candidate placements is proposed such that the position of the current piece is as low as possible, the distance between the current piece and the pieces already placed is as small as possible, and the adverse impact on further placements is as small as possible. Experiments on four groups of benchmarks show that the proposed DHA achieves highly competitive results compared with state-of-the-art algorithms in the literature. Moreover, as a deterministic algorithm, the DHA obtains high-quality solutions in a single run on both small-scale and large-scale problem instances, and its results are repeatable.
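To make the placement idea concrete, here is a toy bottom-left packing sketch for integer-sized rectangles. It only illustrates the "as low, then as left, as possible" baseline; the paper's action-space sorting rule and piece rotations are omitted:

```python
def bottom_left_pack(pieces, strip_width):
    """Toy bottom-left heuristic: each (w, h) piece (with w <= strip_width)
    is placed at the lowest, then leftmost, integer position where it does
    not overlap anything already placed."""
    placed = []  # tuples (x, y, w, h); y is measured from the strip bottom

    def fits(x, y, w, h):
        return all(x + w <= px or px + pw <= x or
                   y + h <= py or py + ph <= y
                   for px, py, pw, ph in placed)

    for w, h in pieces:
        x, y = 0, 0
        while not fits(x, y, w, h):
            x += 1
            if x + w > strip_width:   # row exhausted: move one unit up
                x, y = 0, y + 1
        placed.append((x, y, w, h))

    return placed, max(py + ph for _, py, _, ph in placed)  # layout, height
```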

7.
8.
Production management as a constraint satisfaction problem
Production management problems can be presented quite straightforwardly as constraint satisfaction problems, in which values for some variables are searched for under a set of constraints. The combination of an operation and a resource is usually interpreted as the variable, and a time window as the value to be searched for. This convention is challenged here. A case is considered in which the most appropriate interpretation treats the combination of a resource and a time window as the variable, and an operation as the value. A third possible interpretation is also briefly covered, in which the combination of an operation and a time window is the variable, and the resource is the value.
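The two formulations contrasted in the abstract can be sketched as follows (the operation and resource names are hypothetical, and a real model would add the constraints):

```python
# Conventional formulation: variable = (operation, resource),
# value = a time window in which that operation runs on that resource.
domains = {
    ("cut_plate", "machine_A"): [(0, 4), (4, 8), (8, 12)],
    ("weld_seam", "machine_B"): [(4, 8), (8, 12)],
}

# Inverted formulation considered in the paper: variable = (resource, time
# window), value = the operation assigned to that slot (None means idle).
domains_inverted = {
    ("machine_A", (0, 4)): ["cut_plate", "weld_seam", None],
    ("machine_A", (4, 8)): ["cut_plate", "weld_seam", None],
}
```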

9.
On the Internet, information is largely in text form, and it often contains errors such as spelling mistakes. These errors complicate natural language processing because most NLP applications are not robust and assume that the input data is noise-free. Preprocessing is necessary to deal with these errors and to meet the growing need for automatic text processing. One kind of such preprocessing is automatic word spacing. This process decides the correct boundaries between words in a sentence containing spacing errors, which are a type of spelling error. Except for some Asian languages such as Chinese and Japanese, most languages have explicit word spacing; in these languages, word spacing is crucial for readability and for accurately communicating a text's meaning. Automatic word spacing plays an important role not only as a spell-checker module but also as a preprocessor for a morphological analyzer, which is a fundamental tool for NLP applications. Furthermore, automatic word spacing can serve as a postprocessor for optical character recognition and speech recognition systems.
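A minimal dictionary-based sketch of automatic word spacing follows. Real systems score candidate boundaries statistically rather than requiring exact lexicon hits, and the lexicon here is an assumption for illustration:

```python
def respace(text, lexicon, max_word_len=20):
    """Recover word boundaries in space-less `text` by dynamic programming
    over a lexicon; returns the input unchanged if no segmentation exists."""
    best = {0: []}                       # best[i] = word list covering text[:i]
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - max_word_len), i):
            if j in best and text[j:i] in lexicon:
                best[i] = best[j] + [text[j:i]]
                break
    return " ".join(best[len(text)]) if len(text) in best else text

print(respace("automaticwordspacing", {"automatic", "word", "spacing"}))
# -> "automatic word spacing"
```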

10.
In this study, the scheduling of truck-loading operations in automated storage and retrieval systems is investigated. The problem extends previous ones in that a pallet can be retrieved from a set of alternative aisles. It is modelled as a flexible job shop scheduling problem in which the loads are the jobs, the pallets of a load are the operations, and the forklifts that move retrieved pallets to the trucks are the machines. Minimizing the maximum loading time is used as the objective, so as to minimize the throughput time of orders and maximize the efficiency of the warehouse. A priority-based genetic algorithm is presented to sequence the pallet retrievals: permutation coding is used for encoding, and a constructive algorithm that generates active schedules for the flexible job shop scheduling problem is applied for decoding. The proposed methodology is applied to a real problem arising in a warehouse installed by a leading supplier of automated materials handling and storage systems.
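The permutation decoding can be illustrated with a simplified sketch: pallets are processed in chromosome order and each is assigned to the earliest-available forklift. The aisle alternatives and the paper's active-schedule construction are omitted, so this is only a greedy stand-in for the decoder:

```python
import heapq

def decode(perm, durations, num_forklifts):
    """Toy decoder for a pallet permutation: assign each pallet, in
    chromosome order, to the forklift that becomes free first and return
    the resulting maximum loading time. `durations` maps pallet -> time."""
    available = [(0.0, k) for k in range(num_forklifts)]  # (free_time, id)
    heapq.heapify(available)
    finish = 0.0
    for pallet in perm:
        t, k = heapq.heappop(available)
        t_end = t + durations[pallet]
        finish = max(finish, t_end)
        heapq.heappush(available, (t_end, k))
    return finish
```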

11.
A case study of a UK metropolitan council is presented which examines, over an 11-year period and using document analysis and staff interviews, the approach the council has evolved for developing a strategic information systems plan. The approach is analyzed using a planning reference model. The existence of a political, as opposed to a textbook, approach to planning is identified and the reasons for it are described; suggestions are then made both for research to further the understanding of planning and for modifications to textbook planning approaches.

12.
Retrieving relevant information is a crucial component of case-based reasoning systems for Internet applications such as search engines. The task is to use user-defined queries to retrieve useful information according to certain measures. Even though techniques exist for locating exact matches, finding relevant partial matches can be a problem. It may also not be easy to specify query requests precisely and completely, resulting in a situation known as fuzzy querying. This is usually not a problem for small domains, but for large repositories such as the World Wide Web, request specification becomes a bottleneck. Thus, a flexible retrieval algorithm is required, one that allows for imprecise or fuzzy query specification and search.
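A minimal illustration of flexible retrieval with a graded rather than exact match; the string-ratio similarity is a stand-in, and a real system would compute fuzzy membership over richer features:

```python
from difflib import SequenceMatcher

def fuzzy_retrieve(query, documents, threshold=0.5):
    """Rank documents by a similarity score in [0, 1] and keep those above
    a threshold, so imprecise queries still return partial matches."""
    scored = ((SequenceMatcher(None, query, doc).ratio(), doc)
              for doc in documents)
    return sorted(((s, d) for s, d in scored if s >= threshold), reverse=True)

print(fuzzy_retrieve("case based reasoning",
                     ["case-based reasoning systems", "search engines"]))
```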

13.
This paper presents mixed model regression mapping (MMRM) as a method for mapping quantitative trait loci (QTL) in backcross and F2 data arising from crosses of inbred lines. It is related to interval mapping, composite interval mapping and other regression approaches but differs in that it tests for QTL presence in each linkage group before conditionally modeling QTL location. The three key ideas presented are: to promote the use of a likelihood-ratio type of test for the presence of QTL in linkage groups, before searching for QTL, as a method of controlling the false discovery rate; to present an alternative QTL profile to the LOD score for identifying the possible location of a QTL; and to promote the use of a local smoother to identify turning points in a profile based on evaluation at marker points, rather than directly predicting all intermediate points. MMRM requires fitting a short series of models to locate and then evaluate putative QTL. Assuming marker covariates are allocated to linkage groups, MMRM first fits all the markers as independent random effects with common variance within the linkage groups. If there is no significant variance component associated with a linkage group, there is no evidence for a QTL associated with that group. Otherwise a QTL profile is predicted as a weighted sum of the marker BLUPs, from which to postulate the most likely position of the QTL. A putative QTL covariate for that position is then calculated from flanking markers and added to the model. If this does not explain all the marker variance, the model is refined. Since MMRM is based on a linear mixed model, it is easily extended to include extraneous sources of variation such as spatial variation in field experiments, to handle multiple QTL, and to test for genotype-by-environment interactions. It is expounded using two simple examples analysed in the ASReml linear models software. Two simulation studies show that MMRM identifies QTL as reliably as, but more directly than, other common methods.
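The linkage-group test at the heart of MMRM can be sketched as a standard variance-component formulation (the notation is assumed, not copied from the paper):

```latex
% Markers of linkage group g enter as independent random effects with a
% common variance; absence of a QTL on group g corresponds to a zero
% variance component, assessed with a likelihood-ratio-type test.
y = X\beta + Z_g u_g + e, \qquad
u_g \sim N(0, \sigma_g^2 I), \qquad e \sim N(0, \sigma_e^2 I),
\qquad H_0 : \sigma_g^2 = 0 .
```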

14.
15.
An overview of the ELSA (European large SIMD array) project, which uses a two-level strategy to achieve defect tolerance for wafer-scale architectures implemented in silicon, is presented. The target architecture is a 2-D array of processing elements (PEs) for low-level image processing. An array is divided into subarrays called chips. At the chip level, defect tolerance is provided by an extra column of PEs and by bypassing techniques. At the wafer level, a double-rail connection network is used to construct a target array of defect-free chips that is as large and as fast as possible; its main advantage is that it is independent of chip defects, as it is controlled from the I/O pads. An algorithm for constructing an optimized two-dimensional array on a wafer containing a given number of defect-free PEs and connections, a method to program the switches for the target architecture found by the algorithm, and software for programming the switches using laser cuts are discussed.

16.
Fault diagnosis is analysed here as a decision between alternative hypotheses, based on uncertain evidence. We consider a severe lack of information, and perceive the uncertainty as an information gap between what is known and what needs to be known for a perfect decision. This uncertainty is quantified with info-gap models of uncertainty, which require less information than probabilistic models. Previous work with convex set-models is extended to linear info-gap models, which are not necessarily convex, as well as to more general info-gap models with arbitrary expansion properties. We define a decision algorithm based on info-gap models and prove three theorems: one establishes the connection with the earlier work on convex models, and the other two show that the algorithm is maximally robust for linear info-gap models as well as for general info-gap models of uncertainty. An illustrative example shows how these results can be used to optimize the design of a model-based fault diagnosis algorithm.
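For readers unfamiliar with info-gap theory, a common nested form of the model and the associated robustness function are sketched below in generic textbook notation; the paper's linear and general expansion-property variants differ from this simplest case:

```latex
% Nested family of uncertainty sets around the best estimate \tilde u:
U(\alpha, \tilde u) = \{\, u : \| u - \tilde u \| \le \alpha \,\}, \qquad \alpha \ge 0 .
% Robustness of decision q: the greatest uncertainty horizon at which the
% performance requirement r_c is still met for every u in the set.
\hat{\alpha}(q) = \max \left\{ \alpha :
  \min_{u \in U(\alpha, \tilde u)} R(q, u) \ge r_c \right\} .
```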

17.
A new method of image understanding for forms, based on model matching, is proposed in this paper as the basis of an OCR system that can read a variety of forms. The method proceeds as follows. First, ruled lines are extracted from the input image of a form. Several lines are then grouped together and recognised as data corresponding to a sub-form. These lines and sub-forms are both used for understanding the form, taking into account their feature attributes and the relationships between them. Each feature in the input image of a form is expected to correspond to a feature in one of the model forms, which are described as structured features. This correspondence is represented by a node in an association graph, where an arc represents compatible correspondences established on the basis of feature relationships. The best match is found as the largest maximal clique in the association graph. Experimental results show the method is robust and effective for document images of poor quality, and for various styles of forms.
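The association-graph matching step can be sketched as follows; the compatibility predicate, which would encode the feature relationships, is a hypothetical user-supplied function, and networkx is used only as a convenient clique finder:

```python
import itertools
import networkx as nx

def best_match(correspondences, compatible):
    """Toy association-graph matcher: nodes are (input_feature, model_feature)
    correspondence pairs, edges join pairwise-compatible correspondences,
    and the best match is the largest maximal clique."""
    G = nx.Graph()
    G.add_nodes_from(correspondences)
    G.add_edges_from(
        (a, b) for a, b in itertools.combinations(correspondences, 2)
        if compatible(a, b))
    return max(nx.find_cliques(G), key=len)   # largest maximal clique
```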

18.
Real-life vehicle routing problems generally have both routing and scheduling aspects to consider. Although this fact is well acknowledged, few heuristic methods exist that address both of these complicated aspects simultaneously. We present a graph-theoretic heuristic to determine an efficient service route for a single service vehicle through a transportation network that requires a subset of its edges to be serviced, each a specified (and potentially different) number of times. The times at which each of these edges is serviced should additionally be spaced as evenly as possible over the scheduling time window, which introduces a scheduling consideration into the problem. Our heuristic is based on the tabu search method, used in conjunction with well-known graph-theoretic algorithms such as those of Floyd (for determining shortest routes) and Frederickson (for solving the rural postman problem). This heuristic forms the backbone of a decision support system that prompts the user for parameters describing the physical situation (such as the service frequencies and travel times for each network link, as well as bounds on the acceptability of results), after which a service routing schedule is suggested as output. The decision support system is applied to a case study in which a service routing schedule is sought for the South African national railway system by Spoornet (the semi-privatised South African national railways authority and service provider) as part of its rationalisation effort to remain a profitable company.
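The tabu search backbone can be sketched generically; the graph-theoretic subroutines of Floyd and Frederickson that the heuristic calls, and the routing-specific move definitions, are not reproduced:

```python
def tabu_search(initial, neighbours, cost, iterations=1000, tenure=20):
    """Generic tabu-search skeleton: move to the best non-tabu neighbour at
    each step, keep recent moves tabu for `tenure` iterations, and allow a
    tabu move only if it improves the best solution found (aspiration).
    `neighbours(sol)` yields (move_key, new_solution) pairs."""
    current = best = initial
    best_cost = cost(initial)
    tabu_until = {}                       # move_key -> iteration it expires
    for it in range(iterations):
        candidates = [(cost(sol), key, sol)
                      for key, sol in neighbours(current)
                      if tabu_until.get(key, 0) <= it or cost(sol) < best_cost]
        if not candidates:
            break
        c, key, current = min(candidates, key=lambda t: t[0])
        tabu_until[key] = it + tenure     # forbid reversing this move for a while
        if c < best_cost:
            best, best_cost = current, c
    return best
```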

19.
In the proposed advanced computing environment, known as the HoneyBee Platform, various computing devices using single or multiple interfaces and technologies/standards need to communicate and cooperate efficiently with a certain level of security and safety measures. These computing devices may be supported by different types of operating systems with different features and levels of security support. In order to ensure that all operations within the environment can be carried out seamlessly in an ad-hoc manner, there is a need for a common mobile platform to be developed. The purpose of this long-term project is to investigate and implement a new functional layered model of the common mobile platform with secured and trusted ensemble computing architecture for an innovative Digital Economic Environment in the Malaysian context. This mobile platform includes a lightweight operating system to provide a common virtual environment, a middleware for providing basic functionalities of routing, resource and network management, as well as to provide security, privacy and a trusted environment. A generic application programming interface is provided for application developers to access underlying resources. The aim is for the developed platform to act as the building block for an ensemble environment, upon which higher level applications could be built. Considered as the most essential project in a series of related projects towards a more digital socio-economy in Malaysia, this article presents the design of the target computational platform as well as the conceptual framework for the HoneyBee project.

20.
Conditions for which linear MPC converges to the correct target
This paper considers the efficacy of disturbance models for ensuring offset-free control and the determination of the optimum feasible steady-state target within linear model predictive control (MPC). Previously proposed methods for steady-state target determination can address model error, disturbances, and output target changes when the desired steady state is feasible, but may fail to achieve a feasible target that is as close as possible to the desired steady-state target when the desired target is unreachable due to active constraints. Under certain conditions, the resulting ‘feasible steady-state target’ can converge to a point that is not as close as possible to the optimal feasible target. By considering the Karush–Kuhn–Tucker (KKT) conditions of optimality for the steady-state target optimizer, sufficient multi-variable conditions are established for which convergence to the optimal feasible target is guaranteed and, conversely, when convergence to a sub-optimal feasible target is expected.
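A generic form of the steady-state target problem whose KKT conditions the paper analyses can be sketched as follows (the notation is assumed; the disturbance model enters through the estimate \hat d):

```latex
% Steady-state target optimisation with an estimated disturbance \hat d:
\min_{x_s,\, u_s} \; \| C x_s - y_{\mathrm{sp}} \|_Q^2
\quad \text{s.t.} \quad
x_s = A x_s + B u_s + B_d \hat d, \qquad
u_{\min} \le u_s \le u_{\max} .
% Its KKT conditions combine stationarity of the Lagrangian, primal and
% dual feasibility, and complementary slackness on the active input
% constraints; the paper's sufficient conditions are derived from these.
```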
