Similar Documents
20 similar documents found (search time: 515 ms)
1.
It has been observed that identification of state-space models with inputs may lead to unreliable results in certain experimental conditions, even when the input signal excites the system well within its bandwidth. This may be due to ill-conditioning of the identification problem, which occurs when the state space and the future input space are nearly parallel. We have shown in particular in the companion papers (Automatica 40(4) (2004) 575; Automatica 40(4) (2004) 677) that, under these circumstances, subspace methods operating on input-output data may be ill-conditioned, quite independently of the particular algorithm that is used. In this paper, we indicate that the cause of ill-conditioning can sometimes be cured by using orthogonalized data and by recasting the model into a certain natural block-decoupled form consisting of a “deterministic” and a “stochastic” subsystem. The natural subspace algorithm for the identification of the deterministic subsystem is then a weighted version of the PI-MOESP method of Verhaegen and Dewilde (Int. J. Control 56 (1993) 1187-1211). The analysis shows that, under certain conditions, methods based on the block-decoupled parametrization and orthogonal decomposition of the input-output data perform better than traditional joint-model-based methods in the circumstance of nearly parallel regressors.

2.
A distributed XML document is an XML document that spans several machines. We assume that a distribution design of the document tree is given, consisting of an XML kernel document T[f1,…,fn] in which some leaves are “docking points” for external resources providing XML subtrees (f1,…,fn, standing, e.g., for Web services or peers at remote locations). The top-down design problem consists in, given a type (a schema document that may vary from a DTD to a tree automaton) for the distributed document, “propagating” this type locally into a collection of types, which we call a typing, while preserving desirable properties. We also consider the bottom-up design problem, which consists in, given a type for each external resource, exhibiting a global type that is enforced by the local types, again with natural desirable properties. In the article, we lay out the fundamentals of a theory of distributed XML design, analyze problems concerning typing issues in this setting, and study their complexity.

3.
In this paper, we develop an EPQ (economic production quantity) inventory model to determine the optimal buffer inventory for stochastic market demand during preventive maintenance or repair of a manufacturing facility in an imperfect production system. Preventive maintenance, an essential element of the just-in-time structure, may cause shortages, which are reduced by buffer inventory. The products are sold with a free minimal repair warranty (FRW) policy. The production system may shift from an “in-control” state to an “out-of-control” state after a certain time that follows a probability density function, and the defective (non-conforming) items produced in either state are reworked at a cost just after the regular production time. Finally, an expected cost function comprising the inventory cost, unit production cost, preventive maintenance cost and shortage cost is minimized analytically. We also develop a second case in which both the buffer inventory and the production rate are decision variables and the expected unit cost under the above cost structure is optimized. Numerical examples are provided to illustrate the behaviour and application of the model, and a sensitivity analysis with respect to key parameters of the system is carried out.

4.
With the increasing burden of chronic diseases on the health care system, Markov-type models are becoming popular for predicting the long-term outcomes of early intervention and guiding disease management. However, statisticians have not been actively involved in the development of these models. Typically, the models are developed by using secondary data analysis to find a single “best” study from which to estimate each transition in the model. However, due to the nature of secondary data analysis, there are frequently discrepancies between the theoretical model and the design of the studies being used. This paper illustrates a likelihood approach to correctly model the design of clinical studies under the conditions where (1) the theoretical model may include an instantaneous state of distinct interest to the researchers, and (2) the study design may be such that the study data cannot be used to estimate a single parameter in the theoretical model of interest; for example, a study may ignore intermediary stages of disease. Using our approach, not only can we accommodate the two conditions above, but more than one study may be used to estimate each model parameter. In the spirit of “If life gives you lemons, make lemonade”, we call this the “Lemonade Method”. Simulation studies are carried out to evaluate the finite-sample properties of this method, and the method is demonstrated through application to a model of heart disease in diabetes.
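The design mismatch described above can be made concrete with a toy Markov chain: a study that only records patient status every second cycle observes entries of P², not P, so its likelihood must be written in terms of matrix powers of the theoretical transition matrix. A minimal sketch (the three-state progression model and its probabilities are hypothetical, not from the paper):

```python
import numpy as np
from numpy.linalg import matrix_power

# hypothetical three-state progression model (healthy -> ill -> dead);
# the transition probabilities are made up for illustration
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

def k_step_prob(P, k, i, j):
    """Likelihood contribution of observing state j exactly k cycles after
    state i when the intermediate visits are not recorded: [P^k]_{ij}."""
    return matrix_power(P, k)[i, j]
```

A study with two-cycle follow-up thus contributes `k_step_prob(P, 2, i, j)` terms to the likelihood, which is maximized over the entries of P itself; several studies with different follow-up intervals can share the same P.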

5.
There has been substantial research on the errors-in-variables (EIV) identifiability problem for dynamic systems, spread across a significant volume of literature. Here, we present a single theorem which compactly summarizes many of the known results and also covers several cases which we believe to be novel. We analyze single-input single-output systems using second-order properties, and extend the results to a class of multivariable systems.

6.
When working with large-scale models or numerous small models, there can be a temptation to rely on the default settings of proprietary software to derive solutions. In this paper we show that, for the solution of non-linear dynamic models, this approach can be inappropriate. Alternative linear and non-linear specifications of a particular model are examined: one version, expressed in levels, is highly non-linear; a second version, expressed in logarithms, is linear. The dynamic solution of each version has a combination of stable and unstable eigenvalues, so that any dynamic solution requires the calculation of appropriate “jumps” in the endogenous variables. We derive a closed-form solution of the model, which we use as our “true” benchmark for comparison with computational solutions of both the linear and non-linear models. Our approach is to compare the “goodness of fit” of reverse-shooting solutions for both models against this benchmark. Under the basic solution method with default settings, there is a significant difference between the computational solution of the non-linear model and the benchmark closed-form solution; we show that this result can be substantially improved by modifying the solver and parameter settings.

7.
Finite impulse-response “input shapers” may be used to prefilter inputs to eliminate or minimize residual vibration when maneuvering flexible structures. Vibration is reduced at the expense of adding a delay equal to the length of the shaper. This paper first presents results that aid in the design of “single-input shapers” in the discrete domain. Next, MIMO shaping is considered. Given certain a priori information about the inputs, shorter shapers (which introduce less delay) can be derived. A time-optimal scheme is developed here, and existence of solutions is proved for both SISO and MIMO cases. The method is direct, general, easy to implement, and allows for the addition of derivative constraints to improve robustness. Finally, traditional optimal control and input shaping are compared, and it is shown that under certain conditions optimal discrete-time input shaping and discrete time-optimal control are equivalent.
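The best-known single-input shaper of this kind is the two-impulse zero-vibration (ZV) shaper, whose delay is half the damped period of the mode being cancelled; it illustrates the vibration-versus-delay trade-off the abstract mentions. A minimal continuous-time sketch (the 1 Hz, 5%-damped mode is made up for illustration):

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) shaper for one flexible mode.

    wn: undamped natural frequency [rad/s]; zeta: damping ratio.
    Returns impulse amplitudes A (summing to 1) and times t; the shaper
    delay t[1] is half the damped period of the mode."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    A = np.array([1.0, K]) / (1.0 + K)
    t = np.array([0.0, np.pi / wd])
    return A, t

# hypothetical flexible mode: 1 Hz, 5% damping
wn, zeta = 2.0 * np.pi, 0.05
A, t = zv_shaper(wn, zeta)

# residual vibration of the shaped impulse train (vanishes for this mode)
wd = wn * np.sqrt(1.0 - zeta ** 2)
resid = abs(np.sum(A * np.exp(zeta * wn * t) * np.exp(1j * wd * t)))
```

Convolving any command with these two impulses removes the modeled mode's residual vibration at the cost of the half-period delay `t[1]`.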

8.
This paper introduces a mechanism for testing multivariable models employed by model-based controllers. Although external excitation is not necessary, the data collection includes a stage where the controller is switched to open-loop operation (manual mode). The main idea is to measure a certain “distance” between the closed-loop and the open-loop signals, and then trigger a flag if this “distance” is larger than a threshold level. Moreover, a provision is made for accommodating model uncertainty. Since no hard bounds are assumed with respect to the noise amplitude, the model invalidation mechanism works in a probabilistic framework.
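The shape of such a trigger can be sketched in a few lines. The plant, noise level, normalized distance, and threshold below are all our own illustrative choices, not the paper's construction: predict the open-loop response from the model, measure a distance to the recorded output, and raise a flag when it exceeds the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(u, a):
    """First-order plant y[k] = a*y[k-1] + u[k-1] (illustrative stand-in)."""
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + u[k - 1]
    return y

def invalidation_flag(u, y, model, threshold=0.1):
    """Flag the model if the normalized distance between the measured
    open-loop output y and the model's prediction exceeds threshold."""
    d = np.linalg.norm(y - model(u)) / np.linalg.norm(y)
    return d > threshold, d

u = rng.standard_normal(200)
y = simulate(u, 0.9) + 0.01 * rng.standard_normal(200)   # measured data

flag_good, _ = invalidation_flag(u, y, lambda u: simulate(u, 0.9))
flag_bad, _ = invalidation_flag(u, y, lambda u: simulate(u, 0.5))
```

In the probabilistic setting of the paper the threshold is calibrated from the noise statistics rather than fixed a priori as here.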

9.
In this article, we attempt to formally study two very intuitive physical models: sealed envelopes and locked boxes, often used as illustrations for common cryptographic operations. We relax the security properties usually required from locked boxes [such as in bit-commitment (BC) protocols] and require only that a broken lock or torn envelope be identifiable to the original sender. Unlike the completely impregnable locked box, this functionality may be achievable in real life, where containers having this property are called “tamper-evident seals”. Another physical object with this property is the “scratch-off card”, often used in lottery tickets. We consider three variations of tamper-evident seals, and show that under some conditions they can be used to implement oblivious transfer, BC and coin flipping (CF). We also show a separation between the three models. One of our results is a strongly fair CF protocol with bias bounded by O(1/r) (where r is the number of rounds); this was a stepping stone towards achieving such a protocol in the standard model (in subsequent work).

10.
Expressive mobile narratives are in the early stages of design and participative formation. We ask what relations and articulations can be brought to play in the design and enactment of a mobile fictional space in which identity is a core motivation and design feature. Through a design case, we share experiences in the building and trial process of an experimental, mobile “social fiction” called NarraHand. We introduce a new category of expressive mobile narrative, the “reflexive place-space narrative,” along with the term Markers of Identity (MOI).

11.
We present a fast algorithm for slope detection in gray-scale images, based on the 2D Fourier transform and standard filters; it may be used for line or edge detection. Our approach is based on the calculation of the “energy” of the image per direction, thus obtaining the “energy spectrum on slope” E(θ). This spectrum exhibits local maxima at the points where θ equals the slopes of linear or quasi-linear segments within the image, yet it is not affected by their position within it. The process thus outlined has certain advantages, as regards its efficiency of linear-structure detection, over the Radon and Hough transforms. It was motivated by the study of astrophysical images (solar dynamic radio spectra), which necessitated a method for fast extraction of “drifting structures”, since these appear as linear or quasi-linear segments on the spectra.
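The idea can be sketched directly with NumPy: compute the 2-D power spectrum, bin the energy by the polar angle of each wave vector, and read off the peak. Since a spatial line concentrates its Fourier energy along the perpendicular direction, the detected slope is the peak angle rotated by 90°. This is only an illustration of the principle, not the paper's filtering pipeline, and the synthetic test image is ours, not one of the cited radio spectra:

```python
import numpy as np

def energy_spectrum_on_slope(img, n_bins=180):
    """Bin the 2-D FFT power of img by wave-vector angle (1-degree bins)."""
    E = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = img.shape
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    E[ny // 2, nx // 2] = 0.0                       # drop the DC term
    theta = np.degrees(np.arctan2(ky, kx)) % 180.0  # direction of each wave vector
    bins = np.minimum((theta * n_bins / 180.0).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=E.ravel(), minlength=n_bins)

img = np.zeros((64, 64))
img[32, :] = 1.0                      # a horizontal line (slope 0 degrees)

spec = energy_spectrum_on_slope(img)
k_peak = int(np.argmax(spec))         # Fourier energy lies perpendicular to the line
line_angle = (k_peak + 90) % 180
```

Because the spectrum depends only on direction, translating the line inside the image leaves `spec` unchanged, which is the position-invariance property claimed above.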

12.
Algorithms for deterministic balanced subspace identification
New algorithms for identification of a balanced state-space representation are proposed. They are based on a procedure for estimating the impulse response and sequential zero-input responses directly from data. The proposed algorithms are more efficient than the existing alternatives, which compute the whole Hankel matrix of Markov parameters. It is shown that the computations can be performed on Hankel matrices of the input-output data of various dimensions; by choosing wider matrices, persistency of excitation of smaller order is needed, which moreover leads to computational savings and improved statistical accuracy when the data is noisy. Using a finite amount of input-output data, the existing algorithms compute a finite-time balanced representation, and the identified models have a lower bound on the distance to an exact balanced representation; the proposed algorithm can approximate an exact balanced representation arbitrarily closely. Moreover, the finite-time balancing parameter can be selected automatically by monitoring the decay of the impulse response. We also show which partition of the data into “past” and “future” is optimal in terms of the minimal identifiability condition.
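For intuition, the classical SVD route from impulse-response data to a balanced realization (the Ho-Kalman/Kung construction that algorithms of this family refine) can be sketched as follows; the first-order test system is made up, and this sketch deliberately builds the full Hankel matrix that the paper's algorithms avoid:

```python
import numpy as np

def hankel(g, rows, cols, shift=0):
    return np.array([[g[i + j + shift] for j in range(cols)] for i in range(rows)])

def balanced_realization(g, n, q):
    """Balanced (A, B, C) of order n from Markov parameters g[0]=CB, g[1]=CAB, ...

    Requires len(g) >= 2*q, where q is the Hankel block size."""
    H = hankel(g, q, q)                 # Hankel matrix of Markov parameters
    Hs = hankel(g, q, q, shift=1)       # shifted Hankel matrix
    U, s, Vt = np.linalg.svd(H)
    sq = np.sqrt(s[:n])
    Ob = U[:, :n] * sq                  # balanced observability factor
    Cb = sq[:, None] * Vt[:n]           # balanced controllability factor
    A = np.linalg.pinv(Ob) @ Hs @ np.linalg.pinv(Cb)
    return A, Cb[:, :1], Ob[:1, :]

# test system x+ = 0.8 x + u, y = x, so the Markov parameters are 0.8**k
g = [0.8 ** k for k in range(12)]
A, B, C = balanced_realization(g, n=1, q=5)
g_hat = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(5)]
```

With exact data the realization reproduces the impulse response; the singular values `s[:n]` are the (finite-time) Hankel singular values, equal controllability and observability Gramians being what "balanced" means here.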

13.
The development of information technology has a significant influence on social structure and norms, and also impacts upon human behavior. In order to achieve stability and social harmony, people need to respect various norms and have their rights protected. Students’ information ethics values are of critical importance in achieving this goal. Using a qualitative approach, the present study utilizes Kohlberg’s CMD model to measure improvement in students’ “information ethics values” through “technology-mediated learning (TML)” models, and to assess the extent to which it is influenced by gender and Chinese guanxi culture. We find that while e-learning improves female students’ “respect rules,” “privacy,” “accessibility” and “intellectual property” values more than male students’, the percentages relating to “intellectual property” for females in the higher stages remain lower than for males. These results are interpreted from a Chinese guanxi culture perspective. In light of them, educators should take account of such improvements when designing effective teaching methods and incentives.

14.
The development and assessment of spectral turbulence models requires knowledge of the spectral turbulent kinetic energy distribution as well as an understanding of the terms which determine the energy distribution in physical and wave number space. Direct numerical simulation (DNS) of turbulent channel flow yields numerical “data” that can be, and was, analyzed using a spatial Fast Fourier Transform (FFT) to obtain the various spectral turbulent kinetic energy balance terms, including the production, dissipation, diffusion, and the non-linear convective transfer terms.
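The first step of such an analysis, turning a velocity signal into a one-sided kinetic-energy spectrum with an FFT, can be sketched in one dimension; the two-mode synthetic signal below is ours, not DNS data, and Parseval's identity provides the consistency check that the spectrum sums back to the mean kinetic energy:

```python
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(5 * x) + 0.3 * np.sin(12 * x)   # synthetic "velocity" signal

uhat = np.fft.rfft(u) / N                  # normalized one-sided FFT
E = np.abs(uhat) ** 2                      # energy per wavenumber
E[1:] *= 2.0                               # fold in the negative wavenumbers

k_peak = int(np.argmax(E))                 # dominant wavenumber
```

In the channel-flow setting the same transform is applied plane by plane to each velocity component, and products of transformed fields yield the production, dissipation, diffusion, and transfer terms of the spectral balance.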

15.
16.
In an attempt to enhance the neural network technique so that it can evolve from a “black box” tool into a semi-analytical one, we propose a novel modeling approach of imposing “generalized constraints” on a standard neural network. We redefine approximation problems by use of a new formalization with the aim of embedding prior knowledge explicitly into the model to the maximum extent. A generalized-constraint neural network (GCNN) model has therefore been developed, which basically consists of two submodels. One is constructed by the standard neural network technique to approximate the unknown part of the target function. The other is formed from partially known relationships to impose generalized constraints on the whole model. Three issues arising after combination of the two submodels are discussed: (a) the better approximation provided by the GCNN model compared with a standard neural network, (b) the identifiability of parameters in the partially known relationships, and (c) the discrepancy in the approximation due to removable singularities in the target function. Numerical studies of three benchmark problems show important findings that have not previously been reported in the literature. Significant benefits were observed from using the GCNN model in comparison with a standard neural network.
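The benefit of keeping the known part of the target outside the flexible approximator can be illustrated with a deliberately simplified stand-in: below, a linear least-squares basis plays the role of the "standard neural network" submodel, and exp(x) plays the partially known relationship; the target function and all numbers are ours, not the paper's benchmarks. Fitting only the residual (the generalized-constraint arrangement) beats fitting the whole target with the same flexible part:

```python
import numpy as np

# target: f(x) = exp(x) + 0.5*x, where exp(x) is the "partially known" part
x = np.linspace(0.0, 2.0, 50)
y = np.exp(x) + 0.5 * x
basis = np.vstack([np.ones_like(x), x]).T   # linear stand-in for the NN submodel

# plain fit: approximate all of f with the flexible part alone
c_plain, *_ = np.linalg.lstsq(basis, y, rcond=None)
err_plain = np.max(np.abs(basis @ c_plain - y))

# generalized-constraint fit: keep the known exp(x) term, fit only the residual
c_gc, *_ = np.linalg.lstsq(basis, y - np.exp(x), rcond=None)
err_gc = np.max(np.abs(np.exp(x) + basis @ c_gc - y))
```

The constrained model spends its limited flexibility only on what is genuinely unknown, which is the mechanism behind issue (a) above.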

17.
The framework of differential algebra, especially Ritt’s algorithm, has turned out to be a useful tool for analyzing the identifiability of certain nonlinear continuous-time model structures. This framework provides conceptually interesting means to analyze complex nonlinear model structures via the much simpler linear regression models. One difficulty when working with continuous-time signals is dealing with white noise in nonlinear systems. In this paper, difference-algebraic techniques, which mimic the differential-algebraic ones, are presented. Besides making it possible to analyze discrete-time model structures, this opens up the possibility of dealing with noise. Unfortunately, the corresponding discrete-time identifiability results are not as conclusive as in continuous time. In addition, an alternative elimination scheme to Ritt’s algorithm is formalized, and the resulting algorithm is analyzed when applied to a special form of the NFIR model structure.

18.
In this paper we examine in detail the algorithm of Simon [H.A. Simon, Causal ordering and identifiability, in: W.C. Hood, T.C. Koopmans (Eds.), Studies in Econometric Method, Cowles Commission for Research in Economics, Monograph No. 14, John Wiley & Sons, Inc., New York, 1953, pp. 49-74, Chapter III], called the causal ordering algorithm (COA), which constructs the “causal ordering” of a system given a complete specification of the system in terms of a set of “structural” equations that govern its variables. The algorithm produces a graphical characterization of the model in a form that we call a partial causal graph. Simon argued in that work and in subsequent papers that a graph so generated explicates the causal structure among the variables in the model. We formalize this claim further by proving that any causal model based on a one-to-one correspondence between equations and variables must be consistent with the COA.
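The core of Simon's construction can be sketched by brute force for small systems: repeatedly find a minimal self-contained subset (k equations whose as-yet-undetermined variables number exactly k), assign those variables, and continue with the rest. This is a naive sketch of the idea, not the paper's formulation, and the three-equation example is hypothetical:

```python
from itertools import combinations

def causal_ordering(eqs):
    """Simon-style causal ordering by brute force (small systems only).

    eqs: dict mapping equation name -> set of variable names it contains.
    Returns a list of (equations, variables) strata in causal order."""
    remaining = {e: set(vs) for e, vs in eqs.items()}
    determined, order = set(), []
    while remaining:
        found = None
        for k in range(1, len(remaining) + 1):
            for combo in combinations(remaining, k):
                free = set().union(*(remaining[e] for e in combo)) - determined
                if len(free) == k:          # minimal self-contained subset
                    found = (set(combo), free)
                    break
            if found:
                break
        if found is None:
            raise ValueError("system is not self-contained")
        es, vs = found
        order.append((es, vs))
        determined |= vs
        for e in es:
            del remaining[e]
    return order

# hypothetical structural system: e1 fixes x, then e2 yields y, then e3 yields z
order = causal_ordering({"e1": {"x"}, "e2": {"x", "y"}, "e3": {"y", "z"}})
```

Each stratum's variables causally precede those of later strata; drawing edges from each stratum's variables to the later variables that appear with them yields the partial causal graph.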

19.
A detailed computational study is presented of the flow pattern around the Esso Osaka with rudder in simple maneuvering conditions: “static rudder” and “pure drift”. The objectives are: (1) apply RANS to maneuvering simulation; (2) perform verification and validation on field quantities; (3) characterize the flow pattern; and (4) correlate the behavior of the integral quantities with the flow field. The general-purpose code CFDSHIP-IOWA is used; the free surface is neglected and the two-equation k-ω turbulence model is used. The verification levels of the velocity components for the “straight-ahead”, “static rudder” and “pure drift” conditions range from 5.5% to 28.3% of the free stream U0 for the axial velocity U, and from 2.5% to 29.1% of U0 for the cross flow (V, W). Qualitative validation against limited experimental data shows encouraging results with respect to trends and levels. The flow pattern is characterized by fore- and aft-body bilge and side vortices, which are similar for the “straight-ahead” and “static rudder” conditions except in the close vicinity of the rudder. The “pure drift” condition shows strong asymmetry between the windward and leeward sides and a more complex vortex system with additional bilge vortices. Similarities and differences with data for other tanker, container, and surface-combatant hulls, and the relation between the flow pattern and the forces and moments, are discussed. Future work focuses on the influence of the propeller.

20.
Large-scale landslide prediction is typically based on numerical modeling, with computer codes generally involving a large number of input parameters. Addressing the influence of each of them on the final result and providing a ranking procedure may be useful for risk-management purposes. This can be performed by a variance-based global sensitivity analysis. Nevertheless, such an analysis requires a large number of computer-code runs, which is impracticable for computationally demanding simulations with computation times ranging from several hours to several days. To overcome this difficulty, we propose a “meta-model”-based strategy consisting in replacing the complex simulator by a “statistical approximation” provided by a Gaussian-process (GP) model, which allows the computation of sensitivity measures from a limited number of simulations. For illustrative purposes, the proposed methodology is used to rank in terms of importance the properties of the elastoplastic model describing the complex behavior of the slip surface in the La Frasse landslide (Switzerland). One limitation of the GP-based methodology is that the computed sensitivity measures are themselves uncertain, because the simulator is approximated using a training sample of small size, i.e., limited knowledge of the “true” simulator. This source of uncertainty can be taken into account by treating the GP model from a Bayesian perspective, which provides the full posterior probability distribution of the sensitivity measures; this can be summarized by a confidence interval to outline the regions where the GP model is “unsure.” We show that this methodology is able to provide useful guidelines for the practical decision-making process and to suggest further site investigations.
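A compressed sketch of the workflow: fit a GP to a small training set, then estimate first-order Sobol indices from many cheap surrogate evaluations. A two-parameter analytic function stands in for the landslide simulator, and the kernel length-scale, sample sizes, and pick-freeze estimator are our illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(X):
    """Cheap stand-in for the expensive landslide code (illustrative only)."""
    return 3.0 * X[:, 0] + 1.0 * X[:, 1]

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# --- train the GP surrogate on a small design (the "limited simulations")
Xt = rng.uniform(size=(50, 2))
yt = simulator(Xt)
alpha = np.linalg.solve(rbf(Xt, Xt) + 1e-6 * np.eye(len(Xt)), yt)
predict = lambda X: rbf(X, Xt) @ alpha      # GP posterior mean

# --- first-order Sobol indices via pick-freeze on the cheap surrogate
N = 4000
A, B = rng.uniform(size=(N, 2)), rng.uniform(size=(N, 2))
yA, yB = predict(A), predict(B)
S = []
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                     # freeze input i at A's values
    S.append((np.mean(yA * predict(ABi)) - yA.mean() * yB.mean()) / yA.var())
```

For this additive stand-in the analytic indices are 0.9 and 0.1, so the ranking is recovered; the Bayesian treatment described above would additionally propagate the surrogate's own uncertainty into an interval around each index.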


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号