20 similar documents were found (search time: 15 ms)
1.
Lefteri H. Tsoukalas, Mamoru Ishii, Ye Mi 《Engineering Applications of Artificial Intelligence》1997,10(6):545-555
A neurofuzzy methodology for flow identification based on signals obtained from an impedance void meter is presented. The methodology combines the filtering and interpolative capabilities of neural networks with the representational advantages of fuzzy systems for the purpose of mapping idiosyncratic area-averaged impedance measurements to multiphase flow regimes. It has been shown that electrical signals representing the conductance of the intervening medium can be used to infer crucial flow parameters, and that area-averaged signals contain sufficient information about the flow regime and the structure of its two-phase constituents. The neurofuzzy approach offers a means of reconstructing the visual imagery of flow in a process, analogous to tomography, and holds considerable promise for multiphase flow diagnostic and measurement applications in the nuclear industry as well as in the petroleum, biomedical, and food-processing industries.
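As a rough illustration of the fuzzy side of such a mapping, the sketch below classifies an area-averaged void-fraction signal into flow-regime memberships with triangular membership functions. The regime breakpoints are illustrative assumptions, and the neural-network filtering stage described in the abstract is omitted.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def classify_regime(signal):
    """Map an area-averaged impedance (void-fraction) signal to fuzzy
    flow-regime memberships.  Breakpoints are illustrative only."""
    mean_void = float(np.mean(signal))
    return {
        "bubbly":  triangular(mean_void, 0.0, 0.10, 0.30),
        "slug":    triangular(mean_void, 0.2, 0.45, 0.70),
        "annular": triangular(mean_void, 0.6, 0.85, 1.00),
    }

# Example: a noisy signal whose mean void fraction is ~0.4 (slug-like)
rng = np.random.default_rng(0)
signal = 0.4 + 0.05 * rng.standard_normal(1000)
print(classify_regime(signal))
```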
2.
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
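The sketch below illustrates, under simplifying assumptions, one generic way of mitigating non-normal residuals in least-squares inversion: iteratively reweighted least squares with Huber-type weights. It is not iTOUGH2's implementation, only a minimal linearized example.

```python
import numpy as np

def huber_weights(residuals, delta=1.345):
    """Huber weights: 1 inside the delta band, down-weighted outside.
    delta is expressed in units of a robust (MAD-based) residual scale."""
    scale = 1.4826 * np.median(np.abs(residuals - np.median(residuals))) + 1e-12
    r = np.abs(residuals) / scale
    return np.where(r <= delta, 1.0, delta / r)

def irls(X, y, n_iter=20):
    """Iteratively reweighted least squares for a linear(ized) model y = X @ p."""
    p = np.linalg.lstsq(X, y, rcond=None)[0]          # ordinary least-squares start
    for _ in range(n_iter):
        w = huber_weights(y - X @ p)
        p = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * y, rcond=None)[0]
    return p

# Example: linear model with one gross (systematic) error in the data
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), np.linspace(0, 1, 50)])
y = X @ np.array([2.0, -1.0]) + 0.05 * rng.standard_normal(50)
y[10] += 5.0                                          # gross outlier
print(irls(X, y))                                     # close to [2, -1]
```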
3.
Needs for increased product quality, reduced pollution, and reduced energy and material consumption are driving enhanced process integration. This increases the number of manipulated and measured variables required by the control system to achieve its objectives. This paper addresses the question of whether processes tend to become increasingly more difficult to identify and control as the process dimension increases. Tools and results of multivariable statistics are used to show that, under a variety of assumed distributions on the elements, square processes of higher dimension tend to be more difficult to identify and control, whereas the expected controllability and identifiability of nonsquare processes depend on the relative numbers of measured and manipulated variables. These results suggest that the procedure of simplifying the control problem so that only a square process is considered is a poor practice for large-scale systems.
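A quick numerical illustration of the dimension effect, assuming process gain matrices with i.i.d. standard normal entries (one of several possible element distributions): the median condition number of random square gain matrices grows with dimension, which is one possible proxy for identification and control difficulty.

```python
import numpy as np

def median_condition_number(n, trials=200, seed=0):
    """Median 2-norm condition number of n x n gain matrices with
    i.i.d. standard normal entries."""
    rng = np.random.default_rng(seed)
    conds = [np.linalg.cond(rng.standard_normal((n, n))) for _ in range(trials)]
    return float(np.median(conds))

for n in (2, 4, 8, 16, 32):
    print(n, round(median_condition_number(n), 1))
# The median condition number grows with n, i.e. random square processes
# tend to become harder to identify and control as the dimension increases.
```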
4.
This paper describes the design and modelling of ultrasonic tomography for two-component high-acoustic-impedance mixtures such as liquid/gas and oil/gas flows, which are commonly found in chemical columns and industrial pipelines. The information obtained through this research has proven to be useful for further development of ultrasonic tomography. This includes acquiring and processing ultrasonic signals from the transducers to obtain information on the spatial distributions of liquid and gas in an experimental column. Analysis of the transducers’ signals has been carried out to distinguish between the observation time and the Lamb waves. The information obtained from the observation time is useful for further development of the image reconstruction.
5.
Intercalibration of vegetation indices from different sensor systems
Michael D. Steven, Timothy J. Malthus, Frédéric Baret, Hui Xu, Mark J. Chopping 《Remote Sensing of Environment》2003,88(4):412-422
Spectroradiometric measurements were made over a range of crop canopy densities, soil backgrounds and foliage colours. The reflected spectral radiances were convolved with the spectral response functions of a range of satellite instruments to simulate their responses. When Normalised Difference Vegetation Indices (NDVI) from the different instruments were compared, they varied by a few percent, but the values were strongly linearly related, allowing vegetation indices from one instrument to be intercalibrated against another. A table of conversion coefficients is presented for AVHRR, ATSR-2, Landsat MSS, TM and ETM+, SPOT-2 and SPOT-4 HRV, IRS, IKONOS, SEAWIFS, MISR, MODIS, POLDER, Quickbird and MERIS (see Appendix A for a glossary of acronyms). The same set of coefficients was found to apply, within the margin of error of the analysis, for the Soil Adjusted Vegetation Index (SAVI). The relationships for SPOT vs. TM and for ATSR-2 vs. AVHRR were directly validated by comparison of atmospherically corrected image data. The results indicate that vegetation indices can be interconverted to a precision of 1-2%. This result offers improved opportunities for monitoring crops through the growing season and the prospect of better continuity in long-term monitoring of vegetation responses to environmental change.
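A minimal sketch of how such an intercalibration is applied in practice: compute the NDVI from red and near-infrared reflectances, then apply a linear gain/offset conversion between sensors. The coefficients in the example are placeholders, not values from the paper's table.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def intercalibrate(ndvi_src, gain, offset):
    """Linear conversion NDVI_target ~= gain * NDVI_source + offset.
    The gain/offset pair would come from a published conversion table;
    the values used below are placeholders, not the paper's coefficients."""
    return gain * np.asarray(ndvi_src, float) + offset

ndvi_sensor_a = ndvi(nir=[0.45, 0.50], red=[0.08, 0.06])
print(intercalibrate(ndvi_sensor_a, gain=0.98, offset=0.01))
```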
6.
Mathematical modeling of microsegregation in multicomponent alloys is a considerable challenge, since solidification generally implies the formation of many different solid phases, each one changing the dynamics of the phase transformation as the morphology becomes very complex and the kinetic phenomena more diverse. Before sophisticated models based on the resolution of partial differential mass conservation equations can give reliable predictions in multiphase alloys, there is a need to calculate solidification paths based on the incremental mass conservation equation and the back-diffusion parameters. The incremental mass balance proposed in this contribution was written without considering a specific migration mechanism. This decoupling allowed an easy integration of a microsegregation model enabling the evaluation of back-diffusion parameters. In this paper, Ohnaka’s microsegregation model was chosen because it allows a simple and elegant inclusion of the cross-interdiffusion coefficients in the calculation of back-diffusion parameters. The model assumes that complete mixing prevails in the liquid phase and that equilibrium conditions can be applied for a sub-system having a composition defined according to the mobility of species and a set of empirical parameters. A large range of solidification paths lying between Scheil and global equilibrium conditions can be calculated and used to explain experimental observations. The model was applied to a ternary Al–Mg–Mn alloy for comparison with the software DICTRA. An excellent agreement between the two models was obtained using similar assumptions. The scheme was also applied to AA6111 and allowed us to understand the large deviations observed between the amounts of secondary phases obtained with DSC experiments and those predicted by Scheil or equilibrium conditions.
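For reference, the sketch below evaluates the classical Scheil solidification path for a binary alloy, one of the two limiting cases bracketing the back-diffusion model described above; the alloy composition and partition coefficient are illustrative.

```python
import numpy as np

def scheil_solid_composition(c0, k, fs):
    """Solid composition at the interface under Scheil conditions
    (complete mixing in the liquid, no back-diffusion in the solid):
        C_s* = k * C0 * (1 - f_s)**(k - 1)
    This is only the limiting path; the paper itself handles
    multicomponent back-diffusion via Ohnaka's model."""
    fs = np.asarray(fs, float)
    return k * c0 * (1.0 - fs) ** (k - 1.0)

# Illustrative binary alloy: C0 = 1 wt%, partition coefficient k = 0.3
fs = np.linspace(0.0, 0.99, 5)
print(scheil_solid_composition(c0=1.0, k=0.3, fs=fs))
```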
7.
Stencil adaptive diffuse interface method for simulation of two-dimensional incompressible multiphase flows
The diffuse interface method is becoming an increasingly popular approach for the simulation of multiphase flows. Compared to other solvers, it is easy to implement and conserves mass and momentum. In the diffuse interface method, the interface is not treated as a sharp discontinuity; instead, it is represented as a diffuse layer with a small thickness. This treatment is similar to the shock-capturing method. To obtain a fine resolution around the interface, one has to use a very fine mesh in the computational domain, and as a consequence a large computational effort is needed. To improve the computational efficiency, this paper incorporates the efficient 5-point stencil adaptive algorithm [1] into the diffuse interface method with local refinement around the interface, and then applies the developed method to simulate two-dimensional incompressible multiphase flows. Three cases are chosen to test the performance of the method: the Young-Laplace law for a 2D drop, drop deformation in shear flow, and viscous finger formation. The method is well validated through comparison with theoretical analysis and earlier results available in the literature. It is shown that the method can obtain accurate results at much lower cost, even for problems with moving contact lines. The improvement in computational efficiency brought by the stencil adaptive algorithm is clearly demonstrated.
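The Young-Laplace validation case mentioned above has a simple analytic target: for a 2D (cylindrical) drop, the pressure jump across the interface is sigma/R. A minimal check of that target, independent of any particular diffuse-interface solver:

```python
def laplace_pressure_2d(sigma, radius):
    """Young-Laplace pressure jump across a 2D (cylindrical) drop interface:
        dp = sigma / R
    (in 3D the jump would be 2*sigma/R).  Diffuse-interface solvers are
    commonly validated by comparing the computed jump against this value."""
    return sigma / radius

# Example: sigma = 0.07 N/m, R = 1 mm  ->  dp = 70 Pa
print(laplace_pressure_2d(sigma=0.07, radius=1e-3))
```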
8.
Problem solving using multi-agent robotic systems has received significant attention in recent research. Complex strategies are required to organize and control these systems. Biologically-inspired methodologies, such as self-organization, are often employed to bypass this complexity. However, another line of research is to understand the relationship between low-level behaviors and complex high-level strategies. In this paper, we focus on understanding the interference caused in multi-robotic systems for the problem of search and tagging. Given a set of targets that must be found and tagged by a set of robots, what are the effects of scaling the number of robots and the sensor ranges? Intuitively, increasing the number of robots or the sensor strength would seem beneficial. However, experience suggests that path and sensor interference caused by more robots, more targets, and larger sensor ranges will be harmful. The following investigation uses several abstract models to elucidate the issues of robot scaling and sensor noise.
9.
Andrea Bracciali, Antonio Brogi, Franco Turini 《The Journal of Logic and Algebraic Programming》2005,63(2):215
Coding no longer represents the main issue in developing software applications. It is the design and verification of complex software systems that need to be addressed at the architectural level, following methodologies which permit us to clearly identify and design the components of a system, to understand precisely their interactions, and to formally verify the properties of the systems. Moreover, this process is made even more complicated by the advent of the “network-centric” model of computation, where open systems dynamically interact with each other in a highly volatile environment. Many of the techniques traditionally used for closed systems are inadequate in this context. We illustrate how the problem of modeling and verifying behavioural properties of open systems is addressed by different research fields and how their results may contribute to a common solution. Building on this, we propose a methodology for modeling and verifying behavioural aspects of open systems. We introduce the IP-calculus, derived from the π-calculus process algebra, to describe behavioural features of open systems. We define a notion of partial correctness, acceptability, in order to deal with the intrinsic indeterminacy of open systems, and we provide an algorithmic procedure for its effective verification.
10.
F. A. Stowell 《Information Systems Journal》1991,1(3):173-189
Abstract. With the incorporation of Information Technology into most areas of modern life, the methods used by the Computer System Analyst (CSA) need to be reconsidered. To suppose Systems Analysis to be concerned solely with computing is to understate the task, since an information system is greater than a computer system. As such, the information system designer needs to be able to 'appreciate' the wider implications of a client's information needs. An argument is put forward that Information System Design should be undertaken by the client, with the CSA acting as facilitator. This paper attempts to provide a re-appraisal of the role of the CSA and, arising from this re-appraisal, suggests that ideas originating from organizational analysis could be usefully embodied in the design process for Information Systems.
11.
In this work, we focus on output feedback control of nonlinear systems subject to sensor data losses. We initially construct an output feedback controller based on a combination of a Lyapunov-based controller with a high-gain observer. We then study the stability and robustness properties of the closed-loop system in the presence of sensor data losses for both the continuous and sampled-data systems. We state a set of sufficient conditions under which the closed-loop system is guaranteed to be practically stable. The theoretical results are demonstrated using a chemical process example.
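A minimal sketch of the overall scheme under illustrative assumptions (a double-integrator plant, simple state feedback, Bernoulli packet losses with the last received output held); this is not the paper's chemical process example. The high-gain observer reconstructs the state from the possibly stale measurement, and the controller acts only on the estimate.

```python
import numpy as np

dt, T = 1e-3, 5.0
eps, a1, a2 = 0.05, 2.0, 1.0         # high-gain observer parameters (illustrative)
k1, k2 = 4.0, 4.0                    # state-feedback gains (illustrative)
p_loss = 0.3                         # probability that a sample is lost

rng = np.random.default_rng(2)
x = np.array([1.0, 0.0])             # true plant state (double integrator)
xh = np.zeros(2)                     # observer state
y_held = x[0]                        # last successfully received output

for _ in range(int(T / dt)):
    if rng.random() > p_loss:        # measurement arrives this sample
        y_held = x[0]
    u = -k1 * xh[0] - k2 * xh[1]     # controller uses the estimate only
    e = y_held - xh[0]
    xh = xh + dt * np.array([xh[1] + (a1 / eps) * e,
                             u + (a2 / eps**2) * e])
    x = x + dt * np.array([x[1], u])

print("final |x| =", np.linalg.norm(x))   # small here, i.e. practically stable
```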
12.
A testing-based faster-than relation has previously been developed that compares the worst-case efficiency of asynchronous systems. This approach reveals that pipelining does not improve efficiency in general; whether it does so in practice depends on assumptions about the user behaviour. Accordingly, the approach was adapted to a setting where the user behaviour is known to belong to a specific, but often occurring, class of request–response behaviours; some quantitative results on the efficiency of the respective so-called response processes were given. In particular, it was shown that in the adapted setting a very simple case of a pipelined process with two stages is faster than a comparable atomic processing of the two stages. In this paper, we determine the performance of general pipelines, which is not so easy in an asynchronous setting. Pipelines are built with a chaining operator; we also study whether the adapted faster-than relation is compatible with chaining and two other parallel composition operators, and give results on the performance of the respective compositions. These studies also demonstrate how rich the request–response setting is.
13.
This paper discusses four algorithms for detecting anomalies in logs of process-aware systems. One algorithm simply marks traces that are infrequent in the log as potential anomalies. The other three algorithms (threshold, iterative, and sampling) are based on mining a process model from the log, or from a subset of it. The algorithms were evaluated on a set of 1500 artificial logs with different profiles in terms of the number of anomalous traces and the number of times each anomalous trace was present in the log. The sampling algorithm proved to be the most effective solution. We also applied the algorithm to a real log and compared the resulting detected anomalous traces with the ones detected by a different procedure that relies on manual choices.
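A sketch of the simplest of the four approaches, frequency-based flagging of infrequent traces; the threshold value is an assumption, and the three model-based algorithms are not reproduced here.

```python
from collections import Counter

def infrequent_traces(log, threshold=0.02):
    """Mark traces whose relative frequency in the log falls below a
    threshold as potential anomalies.  This mirrors only the simple
    frequency-based algorithm; the threshold value is an assumption."""
    counts = Counter(tuple(trace) for trace in log)
    total = len(log)
    return {trace for trace, c in counts.items() if c / total < threshold}

# Example event log: each trace is a sequence of activity labels
log = [["a", "b", "c"]] * 60 + [["a", "c", "b"]] * 39 + [["a", "x", "c"]]
print(infrequent_traces(log))   # {('a', 'x', 'c')}
```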
14.
We propose a method for setting up PI and PID controllers based on a stable FOPDT (first-order plus dead-time) process model, where the dead-time dynamics is handled without approximation. The main idea is a partial compensation of the system dynamics, which makes it possible to obtain simple tuning rules. The remaining unknown controller parameters are determined on the basis of the modulus optimum and the minimum ISE criteria. Besides the performance indices, the quality of the settings is also evaluated by the stability margin. Although the optimal values of the parameters are valid for the reference tracking problem, a compensation of the disturbance lag that preserves the stability margin is proposed for the disturbance rejection problem.
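The paper's own tuning rules are not reproduced here; as a stand-in, the sketch below applies the widely used SIMC PI rule to a stable FOPDT model, to show the kind of mapping from model parameters (K, tau, theta) to controller settings (Kc, Ti) that such methods produce.

```python
def simc_pi(K, tau, theta, tau_c=None):
    """PI settings for a stable FOPDT model  G(s) = K e^{-theta s} / (tau s + 1)
    using the well-known SIMC rule (Skogestad).  This is an illustrative
    stand-in; the paper derives its own settings from the modulus optimum
    and minimum-ISE criteria with exact dead-time handling."""
    if tau_c is None:
        tau_c = theta                      # common default tuning choice
    Kc = tau / (K * (tau_c + theta))
    Ti = min(tau, 4.0 * (tau_c + theta))
    return Kc, Ti

# Example FOPDT model: K = 2, tau = 10 s, theta = 2 s
print(simc_pi(K=2.0, tau=10.0, theta=2.0))   # (1.25, 10.0)
```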
15.
The problem of dynamic sensor activation for event diagnosis in partially observed discrete event systems is considered. Diagnostic agents are able to activate sensors dynamically during the evolution of the system. Sensor activation policies for diagnostic agents are functions that determine which sensors are to be activated after the occurrence of a trace of events. The sensor activation policy must satisfy the property of diagnosability for centralized systems or codiagnosability for decentralized systems. A policy is said to be minimal if there is no other policy, with strictly less sensor activation, that achieves diagnosability or codiagnosability. To compute minimal policies, we propose language partition methods that lead to efficient computational algorithms. Specifically, we define “window-based” language partitions for scalable algorithms to compute minimal policies. By refining partitions, one is able to refine the solution space over which minimal solutions are computed, at the expense of more computation. Thus a compromise can be achieved between the fineness of the solution and the complexity of the computation.
16.
Andrew R. Dalton 《Science of Computer Programming》2009,74(7):446-469
TinyOS is an effective platform for developing lightweight embedded network applications. But the platform’s lean programming model and power-efficient operation come at a price: TinyOS applications are notoriously difficult to construct, debug, and maintain. The development difficulties stem largely from a programming model founded on events and deferred execution. In short, the model introduces non-determinism in the execution ordering of primitive actions, an issue exacerbated by the fact that embedded network systems are inherently distributed and reactive. The resulting set of possible execution sequences for a given system is typically large and can swamp developers’ unaided ability to reason about program behavior. In this paper, we present a visualization toolkit for TinyOS 2.0 to aid in program comprehension. The goal is to assist developers in reasoning about the computation forest underlying a system under test and the particular branches chosen during each run. The toolkit supports comprehension activities involving both local and distributed runtime behavior. The constituent components include (i) a full-featured static analysis and instrumentation library, (ii) a selection-based probe insertion system, (iii) a lightweight event recording service, (iv) a trace extraction and reconstruction tool, and (v) three visualization front-ends. We demonstrate the utility of the toolkit using both standard and custom system examples and present an analysis of the toolkit’s resource usage and performance characteristics.
17.
Comprehensive and elaborate systems analysis techniques have been developed in the past for routine and operational information systems. Developing support systems for organizational decision-making requires new tools and methodologies. We present a new framework for data collection and decision analysis which is useful for developing decision support systems. This task analysis methodology encompasses (1) event analysis, (2) participant analysis, and (3) decision content analysis. With a proper coding manual, it provides a framework for collecting the relevant and detailed information required for decision support design and implementation. Further research is suggested for the application and evaluation of the methodology in real-life DSS environments.
18.
P. J. Lewis 《Information Systems Journal》1993,3(3):169-186
Abstract. Where the soft systems methodology (SSM) is used in the development of organizational information systems, a clear division exists between the use of SSM to identify what information systems are required and conventional development activities in which it is decided how those information systems will be supplied. Discussion of how SSM might be more closely linked to conventional information systems development methodologies has concentrated upon process-focused approaches to information systems development. This has been partly due to a perceived mismatch between the underlying philosophies of SSM and the alternative data-focused development methodologies. This paper argues that this perception may be mistaken: not only do the existing forms of data analysis have a large though unacknowledged subjective component, but the SSM concept of appreciation may provide a model of human sense-making that the data-focused approaches currently lack and from which they may benefit. The idea of appreciation also allows that an alternative, interpretative form of data analysis might be used within SSM. It is therefore the conclusion of this paper that some closer integration of SSM with data-focused approaches to information systems development is theoretically feasible and may be practically desirable. A number of possible advantages of such integration are described.
19.
This work considers the problem of sensor fault isolation and fault-tolerant control for nonlinear systems subject to input constraints. The key idea is to design fault detection residuals and fault isolation logic by exploiting model-based sensor redundancy through a state observer. To this end, a high-gain observer is first presented, for which the convergence property is rigorously established, forming the basis of the residual design. A bank of residuals is then designed using a bank of observers, each driven by a subset of the measured outputs. A fault is isolated by checking which residuals breach their thresholds according to a logic rule. After the fault is isolated, the state estimate generated using measurements from the healthy sensors is used in closed-loop to maintain nominal operation. The implementation of the fault isolation and handling framework subject to uncertainty and measurement noise is illustrated using a chemical reactor example.
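As an illustration of the kind of logic rule involved (not the paper's exact rule), the sketch below assumes each residual is driven by all outputs except one sensor, so that under a single sensor fault the only residual that stays below its threshold is the one excluding the faulty sensor.

```python
def isolate_faulty_sensor(residuals, thresholds):
    """Generic threshold-based isolation logic (an illustration only).
    residuals[i] is assumed to be driven by all outputs except sensor i,
    so with a single sensor fault the only residual that stays below its
    threshold is the one that excludes the faulty sensor."""
    healthy = [i for i, (r, th) in enumerate(zip(residuals, thresholds)) if r <= th]
    if len(healthy) == 1:
        return healthy[0]          # index of the isolated faulty sensor
    return None                    # no fault, or fault not isolable yet

# Example: only residual 2 stays below its threshold, so sensor 2 is isolated.
print(isolate_faulty_sensor(residuals=[0.9, 1.1, 0.02],
                            thresholds=[0.1, 0.1, 0.1]))   # 2
```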
20.
A component that has an optimized combination of different materials (including homogeneous materials and different types of heterogeneous materials) in its different portions for a specific application is considered to be made of a multiphase perfect material. To manufacture such components, a hybrid layered manufacturing technology was proposed. Since it would be risky and very expensive to build such a physical machine without further study and optimization, manufacturing simulation is adopted for further research so as to provide a reliable foundation for future practical manufacturing. This paper describes the virtual manufacturing technologies involved and the modeling of the virtually manufactured component. Such a model can be used to evaluate the errors of the virtual manufacturing process. Finally, an example of simulating the manufacturing process and generating the model of the virtually manufactured component is presented in more detail.