Similar Documents
20 similar documents found
1.
2.
In this paper, a new delay-derivative-dependent sliding mode observer (SMO) design for a class of linear uncertain time-varying delay systems is presented. Based on this observer, a robust actuator fault reconstruction method is developed. The considered uncertainty is bounded, and the time-varying delay affects the system state. The dynamic properties of the observer are analyzed and the reachability condition is shown to be satisfied. Applying the developed SMO, the \(H_\infty \) concept and a delay-derivative-dependent bounded real lemma (BRL), a robust actuator fault reconstruction is obtained in which the effect of the uncertainty is minimized. Since both the SMO and the BRL are delay-derivative-dependent, the conservatism introduced by the time-varying delay is reduced in both the state estimation and the fault reconstruction. A diesel engine system example illustrates the validity and applicability of the proposed approaches.
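As rough orientation only (generic notation, not the paper's specific design), the structure of a sliding mode observer for a delayed system and the way an actuator fault is recovered from it can be sketched as
\[
\dot{\hat{x}}(t) = A\hat{x}(t) + A_d\hat{x}(t-\tau(t)) + Bu(t) + G_l e_y(t) + G_n \nu(t), \qquad e_y(t) = y(t) - C\hat{x}(t),
\]
\[
\nu(t) = \rho\,\frac{e_y(t)}{\lVert e_y(t)\rVert}\ \ (e_y \neq 0), \qquad \hat{f}_a(t) \approx W\,\nu_{eq}(t),
\]
where \(\nu\) is the discontinuous switching term, \(\nu_{eq}\) its equivalent (low-pass filtered) value once sliding is reached, and \(W\) a suitable reconstruction gain. In a design of the kind described above, delay-derivative-dependent conditions would fix the gains \(G_l, G_n\) so that the uncertainty's effect on \(\hat{f}_a\) is minimized in the \(H_\infty\) sense.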

3.
We express the performance of the N-class "guessing" observer in terms of the N²-N conditional probabilities which make up an N-class receiver operating characteristic (ROC) space, in a formulation in which sensitivities are eliminated in constructing the ROC space (equivalent to using false-negative fraction and false-positive fraction in a two-class task). We then show that the "guessing" observer's performance in terms of these conditional probabilities is completely described by a degenerate hypersurface with only N-1 degrees of freedom (as opposed to the N²-N-1 required, in general, to achieve a true hypersurface in such a ROC space). It readily follows that the hypervolume under such a degenerate hypersurface must be zero when N > 2. We then consider a "near-guessing" task; that is, a task in which the N underlying data probability density functions (pdfs) are nearly identical, controlled by N-1 parameters which may vary continuously to zero (at which point the pdfs become identical). With this approach, we show that the hypervolume under the ROC hypersurface of an observer in an N-class classification task tends continuously to zero as the underlying data pdfs converge continuously to identity (a "guessing" task). The hypervolume under the ROC hypersurface of a "perfect" ideal observer (in a task in which the N data pdfs never overlap) is also found to be zero in the ROC space formulation under consideration. This suggests that hypervolume may not be a useful performance metric in N-class classification tasks for N > 2, despite the utility of the area under the ROC curve for two-class tasks.
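The counting argument behind the N-1 degrees of freedom can be sketched as follows (a paraphrase of the general idea, not a quotation from the paper): a guessing observer assigns any observation to class j with some fixed probability \(p_j\), independent of the data, so
\[
P(\text{decide } j \mid \text{true class } i) = p_j \quad \text{for every } i, \qquad \sum_{j=1}^{N} p_j = 1 .
\]
All N²-N off-diagonal conditional probabilities entering the ROC space are therefore fixed by the N-1 free values \(p_1,\dots,p_{N-1}\), which is why the resulting operating surface is degenerate and encloses zero hypervolume for N > 2.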

4.
Linear model observers based on statistical decision theory have been used successfully to predict human visual detection of aperiodic signals in a variety of noisy backgrounds. However, some models have included nonlinearities such as a transducer or nonlinear decision rules to handle intrinsic uncertainty. In addition, masking models used to predict human visual detection of signals superimposed on one of two identical backgrounds (masks) usually include a number of nonlinear components in the channels that reflect properties of the firing of cells in the primary visual cortex (V1). The effect of these nonlinearities on the ability of linear model observers to predict human signal detection in real patient structured backgrounds is unknown. We evaluate the effect of including different nonlinear human visual system components in a linear channelized Hotelling observer (CHO) using a signal known exactly but variable (SKEV) task. In particular, we evaluate whether the rank order of two compression algorithms (JPEG versus JPEG 2000) and two compression encoder settings (JPEG 2000 default versus JPEG 2000 optimized), based on model observer signal detection performance in X-ray coronary angiograms, is altered by the inclusion of nonlinear components. The results show that: 1) the simpler linear CHO model observer outperforms the CHO model with the nonlinear components; and 2) the rank order of model observer performance for the compression algorithms/parameters does not change when the nonlinear components are included. For the present task and images, the results suggest that adding the nonlinearities to a channelized Hotelling model may increase model complexity without greatly affecting rank-order evaluation of image processing and/or acquisition algorithms.
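For readers unfamiliar with the baseline model, a minimal numerical sketch of the linear channelized Hotelling observer (CHO) decision variable follows; the function name, channel choice and covariance estimate are illustrative assumptions, not details from the paper.

    import numpy as np

    def cho_statistic(image, channels, mean_sig, mean_bkg, channel_cov):
        # image:        flattened test image, shape (n_pixels,)
        # channels:     channel templates, shape (n_pixels, n_channels)
        # mean_sig/bkg: mean channel outputs for signal-present / signal-absent training images
        # channel_cov:  covariance of the channel outputs (pooled over the two classes)
        v = channels.T @ image                                   # channel outputs
        w = np.linalg.solve(channel_cov, mean_sig - mean_bkg)    # Hotelling template in channel space
        return float(w @ v)                                      # scalar decision variable

Larger values of the statistic favour "signal present", and performance is summarized over many images, e.g. by the area under the ROC curve; the nonlinear variants discussed in the abstract insert additional components within the channel responses before this linear template is applied.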

5.
We examined the application of an iterative penalized maximum likelihood (PML) reconstruction method for improved detectability of microcalcifications (MCs) in digital breast tomosynthesis (DBT). Localized receiver operating characteristic (LROC) psychophysical studies with human observers and 2-D image slices were conducted to evaluate the performance of this reconstruction method and to compare its performance against the commonly used Feldkamp FBP algorithm. DBT projections were generated using rigorous computer simulations that included accurate modeling of the noise and detector blur. Acquisition dose levels of 0.7, 1.0, and 1.5 mGy in a 5-cm-thick compressed breast were tested. The defined task was to localize and detect MC clusters consisting of seven MCs. The individual MC diameter was 150 μm. Compressed-breast phantoms derived from CT images of actual mastectomy specimens provided realistic background structures for the detection task. Four observers each read 98 test images for each combination of reconstruction method and acquisition dose. All observers performed better with the PML images than with the FBP images. With the acquisition dose of 0.7 mGy, the average areas under the LROC curve (A_L) for the PML and FBP algorithms were 0.69 and 0.43, respectively. For the 1.0-mGy dose, the values of A_L were 0.93 (PML) and 0.7 (FBP), while the 1.5-mGy dose resulted in areas of 1.0 and 0.9, respectively, for the PML and FBP algorithms. A 2-D analysis of variance applied to the individual observer areas showed statistically significant differences (at a significance level of 0.05) between the reconstruction strategies at all three dose levels. There were no significant differences in observer performance for any of the dose levels.

6.
Coefficient quantization has peculiar qualitative effects on representations of vectors in \(\mathbb{R}^N\) with respect to overcomplete sets of vectors. These effects are investigated in two settings: frame expansions (representations obtained by forming inner products with each element of the set) and matching pursuit expansions (approximations obtained by greedily forming linear combinations). In both cases, based on the concept of consistency, it is shown that traditional linear reconstruction methods are suboptimal, and better consistent reconstruction algorithms are given. The proposed consistent reconstruction algorithms were in each case implemented, and experimental results are included. For frame expansions, results are proven that bound distortion as a function of frame redundancy r and quantization step size for linear, consistent, and optimal reconstruction methods. Taken together, these suggest that optimal reconstruction methods yield O(1/r²) mean-squared error (MSE), and that consistency is sufficient to ensure this asymptotic behavior. A result on the asymptotic tightness of random frames is also proven. The applicability of quantized matching pursuit to lossy vector compression is explored. Experiments demonstrate the likelihood that a linear reconstruction is inconsistent, the MSE reduction obtained with a nonlinear (consistent) reconstruction algorithm, and generally competitive performance at low bit rates.
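The gap between linear and consistent reconstruction can be illustrated with a small numerical experiment (a sketch under simple assumptions — random frame, uniform scalar quantization, alternating-projection consistent reconstruction — not the algorithms of the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n, r, delta = 4, 8, 0.05                 # signal dimension, redundancy, quantization step
    F = rng.standard_normal((n * r, n))      # random frame (rows are frame vectors)
    x = rng.standard_normal(n)
    q = delta * np.round(F @ x / delta)      # quantized frame coefficients

    x_lin = np.linalg.pinv(F) @ q            # linear (pseudoinverse) reconstruction

    x_con = x_lin.copy()                     # consistent reconstruction via alternating projections
    for _ in range(200):
        y = np.clip(F @ x_con, q - delta / 2, q + delta / 2)   # project onto the quantization cells
        x_con = np.linalg.pinv(F) @ y                          # project back onto the signal space
    print(np.sum((x - x_lin) ** 2), np.sum((x - x_con) ** 2))  # squared errors of the two estimates

On average over many draws the consistent estimate has lower MSE, and increasing the redundancy r exhibits the faster error decay that the abstract attributes to consistent and optimal reconstruction.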

7.
Mechatronics, 2007, 17(7): 368–380
This paper presents robust sensor fault reconstruction applied in real time to an inverted pendulum. A linear observer is used to generate an estimate of the system states. The state estimates, inputs, and outputs are then used to generate a reconstruction of the fault. The observer was designed to make the reconstructions robust to system disturbances arising from mismatches between the linear model and the actual system. Two design methods were tested: the Bounded Real Lemma and Right Eigenstructure Assignment. Both methods produced excellent real-time results in which the reconstruction is visually identical to the fault.

8.
Image compression is indispensable in medical applications where inherently large volumes of digitized images are produced. JPEG 2000 has recently been proposed as a new image compression standard. Current recommendations on the choice of JPEG 2000 encoder options were based on non-task-based metrics of image quality applied to nonmedical images. We used the performance of a model observer [non-prewhitening matched filter with an eye filter (NPWE)] in a visual detection task with varying signals [signal known exactly but variable (SKEV)] in X-ray coronary angiograms to optimize JPEG 2000 encoder options through a genetic algorithm procedure. We also obtained the performance of other model observers (Hotelling, Laguerre-Gauss Hotelling, channelized Hotelling) and of human observers to evaluate the validity of the NPWE-optimized JPEG 2000 encoder settings. Compared to the default JPEG 2000 encoder settings, the NPWE-optimized encoder settings improved the detection performance of humans and of the other three model observers for the SKEV task. The performance also improved for a more clinically realistic task in which the signal varied from image to image but was not known a priori to observers [signal known statistically (SKS)]. The highest performance improvement for humans occurred at a high compression ratio (e.g., 30:1), which resulted in approximately a 75% improvement for both the SKEV and SKS tasks.
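For reference, the non-prewhitening matched filter with an eye filter computes its decision variable by correlating the eye-filtered expected signal with the eye-filtered image data (the standard form of this observer, independent of the encoder study):
\[
\lambda_{\mathrm{NPWE}} = (E\,s)^{\mathsf T}\,(E\,g),
\]
where \(g\) is the image, \(s\) the expected signal profile, and \(E\) the eye-filter operator, usually applied as a radially symmetric contrast-sensitivity weighting in the spatial-frequency domain. In an optimization of the kind described above, the genetic algorithm searches encoder settings that maximize detection performance computed from this statistic.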

9.
The likelihood ratio, or ideal observer, decision rule is known to be optimal for two-class classification tasks in the sense that it maximizes expected utility (or, equivalently, minimizes the Bayes risk). Furthermore, using this decision rule yields a receiver operating characteristic (ROC) curve which is never above the ROC curve produced using any other decision rule, provided the observer's misclassification rate with respect to one of the two classes is chosen as the dependent variable for the curve (i.e., an "inversion" of the more common formulation in which the observer's true-positive fraction is plotted against its false-positive fraction). It is also known that for a decision task requiring classification of observations into N classes, optimal performance in the expected utility sense is obtained using a set of N-1 likelihood ratios as decision variables. In the N-class extension of ROC analysis, the ideal observer performance is describable in terms of an (N²-N-1)-parameter hypersurface in an (N²-N)-dimensional probability space. We show that the result for two classes holds in this case as well, namely that the ROC hypersurface obtained using the ideal observer decision rule is never above the ROC hypersurface obtained using any other decision rule (where in our formulation performance is given exclusively with respect to between-class error rates rather than within-class sensitivities).
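The two-class rule referred to here is the standard likelihood-ratio test, and the N-class generalization uses N-1 such ratios (standard decision-theoretic forms, shown for orientation):
\[
\Lambda(\mathbf{x}) = \frac{p(\mathbf{x}\mid \text{class } 2)}{p(\mathbf{x}\mid \text{class } 1)} \ \mathop{\gtrless}_{\text{class }1}^{\text{class }2}\ \lambda_c ,
\qquad
\Lambda_i(\mathbf{x}) = \frac{p(\mathbf{x}\mid \text{class } i)}{p(\mathbf{x}\mid \text{class } N)},\quad i = 1,\dots,N-1,
\]
with the threshold \(\lambda_c\) (and, in the N-class case, the decision boundaries in the \((\Lambda_1,\dots,\Lambda_{N-1})\) space) set by the class priors and the utility structure so that expected utility is maximized.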

10.
Magnetic resonance imaging (MRI) reconstruction techniques are often validated with signal-to-noise ratio (SNR), contrast-to-noise ratio, and mean-to-standard-deviation ratio measured on example images. We present human and model observers as a novel approach to evaluating reconstructions of low-SNR magnetic resonance (MR) images. We measured human and channelized Hotelling observer performance in a two-alternative forced-choice signal-known-exactly detection task on synthetic MR images. We compared three reconstructions: magnitude, wavelet-based denoising, and phase-corrected real. Human observers performed approximately equally with all three reconstructions. The model observer showed very close agreement with the humans over the range of images. These results contradict previous predictions in the literature based on SNR. We therefore propose that human observer studies are important for validating MRI reconstructions; the model's performance indicates that it may provide an alternative to human studies.
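In a two-alternative forced-choice (2AFC) signal-known-exactly task, both the human and the model observer simply select the alternative with the larger decision variable; under the usual Gaussian assumptions the proportion correct relates to detectability as
\[
P_C = \Phi\!\left(\frac{d_A}{\sqrt{2}}\right),
\]
where \(\Phi\) is the standard normal cumulative distribution function. This standard relation (not specific to this study) is what allows channelized Hotelling scores to be compared directly with human percent-correct data.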

11.
Mean opinion scores obtained from subjective quality assessment are widely used as ground truth for the development of predictive quality models. The underlying variance between observer ratings is typically quantified using confidence intervals, which do not provide any direct insight into the underlying causes of the disagreement. For a better understanding of human visual quality perception, and to develop more accurate models, it is important to identify the factors that drive the variation in quality ratings. This work considers one such factor: observer confidence. This consideration is motivated by the view that quality assessment is a difficult task and hence quality ratings are provided with varying levels of confidence. The first goal of this paper is to analyse the results of an experiment designed to determine the association between observer confidence and image quality judgement. Secondly, models are developed that aim to predict mean observer confidence as a complementary measure to the widely used mean opinion scores. It is shown that there is indeed a strong interrelation between quality perception and confidence, resulting in predictive models of high accuracy.

12.
Ideal observer approximation using Bayesian classification neural networks
It is well understood that the optimal classification decision variable is the likelihood ratio or any monotonic transformation of the likelihood ratio. An automated classifier which maps from an input space to one of the likelihood ratio family of decision variables is an optimal classifier or "ideal observer." Artificial neural networks (ANNs) are frequently used as classifiers for many problems. In the limit of large training sample sizes, an ANN approximates a mapping function which is a monotonic transformation of the likelihood ratio, i.e., it estimates an ideal observer decision variable. A principal disadvantage of conventional ANNs is the potential over-parameterization of the mapping function, which results in a poor approximation of an optimal mapping function for smaller training samples. Recently, Bayesian methods have been applied to ANNs in order to regularize training and improve the robustness of the classifier. The goal of training a Bayesian ANN with finite sample sizes is, as with unlimited data, to approximate the ideal observer. We have evaluated the accuracy of Bayesian ANN models of ideal observer decision variables as a function of the number of hidden units used, the signal-to-noise ratio of the data, and the number of features or dimensionality of the data. We show that when enough training data are present, excess hidden units do not substantially degrade the accuracy of Bayesian ANNs. However, the minimum number of hidden units required to best model the optimal mapping function varies with the complexity of the data.
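The regularization referred to here is, in its simplest form, maximum a posteriori training with a Gaussian prior on the network weights, i.e. minimizing (a generic sketch of the Bayesian-ANN idea, not the paper's exact formulation)
\[
E(\mathbf{w}) = -\sum_{n} \log p(t_n \mid \mathbf{x}_n, \mathbf{w}) \; + \; \frac{\alpha}{2}\,\lVert \mathbf{w}\rVert^2 ,
\]
where the first term is the usual cross-entropy data fit and the second is the weight-decay penalty induced by the prior, with the hyperparameter \(\alpha\) handled within the Bayesian framework rather than by ad hoc tuning. The trained network output then estimates the posterior class probability, which is a monotonic transformation of the likelihood ratio.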

13.
A corner-based velocity estimation approach is proposed for use in a vehicle's traction and stability control systems. This approach incorporates internal tire states within the vehicle kinematics and enables the velocity estimator to work for a wide range of maneuvers without road friction information. Tire models have not been widely implemented in velocity estimators because of uncertain road friction and varying tire parameters, but the current study utilizes a simplified LuGre model with a minimum number of tire parameters and estimates velocity robustly against model uncertainties. The proposed observer uses longitudinal forces, updates the states by minimizing the longitudinal force estimation error, and provides accurate outcomes at each tire. The estimator structure is shown to be robust to road conditions and rejects disturbances and model uncertainties effectively. Taking into account that the vehicle dynamics are time-varying, the stability of the observer for the linear parameter-varying model is proved, time-varying observer gains are designed, and the performance is studied. Road test experiments have been conducted and the results are used to validate the proposed approach.
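For orientation, a lumped LuGre tire model in its standard form (the paper's simplified parameterization may differ) is
\[
\dot z = v_r - \frac{\sigma_0\,\lvert v_r\rvert}{g(v_r)}\, z, \qquad
F_x = \big(\sigma_0 z + \sigma_1 \dot z + \sigma_2 v_r\big) F_n ,
\]
where \(z\) is the internal friction (bristle) state, \(v_r\) the slip velocity at the tire contact, \(F_n\) the normal load, \(\sigma_0,\sigma_1,\sigma_2\) friction parameters, and \(g(v_r)\) a Stribeck-type curve. A state of this kind is an example of the internal tire state that the abstract describes incorporating into the corner-wise velocity estimator.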

14.
Recent designs for brake-by-wire systems use "resolvers" to provide accurate and continuous measurements of the absolute position and speed of the rotor of the electric actuators in brake callipers (permanent magnet DC motors). Resolvers are absolute-angle transducers that are integrated with estimator modules called "angle tracking observers", which together provide position and speed measurements. Current designs for angle-tracking observers are unstable in applications with high acceleration and/or speed. In this paper, we introduce a new angle-tracking observer in which a closed-loop linear time-invariant (LTI) observer is integrated with a quadrature encoder. Finite-gain stability of the proposed design and its robustness to three different kinds of parameter variations are proven based on theorems of input-output stability in nonlinear control theory. In our experiments, we examined the performance of our observer and of two other methods (a well-known LTI observer and an extended Kalman filter) in estimating the position and speed of a brake-by-wire actuator. The results show that, because of the very high speed and acceleration of the actuator in this application, the LTI observer and the Kalman filter cannot track the rotor position and diverge. In contrast, with a properly designed open-loop transfer function and a suitably selected switching threshold, our proposed angle-tracking observer is stable and highly accurate in a brake-by-wire application.
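For context, the conventional angle-tracking observer that such designs build on can be sketched as a phase-locked PI loop driven by the resolver's sine/cosine outputs (a textbook structure with illustrative gains and names, not the proposed closed-loop LTI/encoder scheme):

    import numpy as np

    def angle_tracking_observer(sin_m, cos_m, dt, kp=200.0, ki=40000.0):
        # sin_m, cos_m: sampled resolver outputs sin(theta), cos(theta)
        theta_hat, integrator = 0.0, 0.0
        estimates = []
        for s, c in zip(sin_m, cos_m):
            err = s * np.cos(theta_hat) - c * np.sin(theta_hat)  # ~ sin(theta - theta_hat)
            integrator += ki * err * dt                          # integral part of the PI loop
            omega_hat = integrator + kp * err                    # speed estimate
            theta_hat += omega_hat * dt                          # angle estimate
            estimates.append((theta_hat, omega_hat))
        return estimates

Because the loop bandwidth is fixed by kp and ki, the tracking error grows with acceleration, which is consistent with the divergence at high speed and acceleration reported in the abstract.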

15.
Previous studies have evaluated the effect of the new still-image compression standard JPEG 2000 using non-task-based image quality metrics, i.e., peak signal-to-noise ratio (PSNR), for nonmedical images. In this paper, the effect of JPEG 2000 encoder options was investigated using the performance of human and model observers (non-prewhitening matched filter with an eye filter, square-window Hotelling, Laguerre-Gauss Hotelling, and channelized Hotelling model observers) for clinically relevant visual tasks. Two tasks were investigated: the signal known exactly but variable (SKEV) task and the signal known statistically (SKS) task. Test images consisted of real X-ray coronary angiograms with simulated filling defects (signals) inserted in one of the four simulated arteries. The signals varied in size and shape. Experimental results indicated that the dependence of task performance on the JPEG 2000 encoder options was similar for all model and human observers. Model observer performance in the more tractable and computationally economical SKEV task can be used to reliably estimate performance in the complex but clinically more realistic SKS task. JPEG 2000 encoder settings different from the default ones resulted in greatly improved model and human observer performance in the studied clinically relevant visual tasks using real angiography backgrounds.

16.
Time-of-flight (TOF) positron emission tomography (PET) scanners offer the potential for significantly improved signal-to-noise ratio (SNR) and lesion detectability in clinical PET. However, fully 3D TOF PET image reconstruction is a challenging task due to the huge data size. One solution to this problem is to rebin TOF data into a lower dimensional format. We have recently developed Fourier rebinning methods for mapping TOF data into non-TOF formats that retain substantial SNR advantages relative to sinograms acquired without TOF information. However, mappings for rebinning into non-TOF formats are not unique and optimization of rebinning methods has not been widely investigated. In this paper we address the question of optimal rebinning in order to make full use of TOF information. We focus on FORET-3D, which approximately rebins 3D TOF data into 3D non-TOF sinogram formats without requiring a Fourier transform in the axial direction. We optimize the weighting for FORET-3D to minimize the variance, resulting in H(2)-weighted FORET-3D, which turns out to be the best linear unbiased estimator (BLUE) under reasonable approximations and furthermore the uniformly minimum variance unbiased (UMVU) estimator under Gaussian noise assumptions. This implies that any information loss due to optimal rebinning is as a result only of the approximations used in deriving the rebinning equation and developing the optimal weighting. We demonstrate using simulated and real phantom TOF data that the optimal rebinning method achieves variance reduction and contrast recovery improvement compared to nonoptimized rebinning weightings. In our preliminary study using a simplified simulation setup, the performance of the optimal rebinning method was comparable to that of fully 3D TOF MAP.
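The variance-minimizing weighting is an instance of the general BLUE principle: when several (approximately) unbiased rebinned estimates \(s_k\) of the same sinogram bin are available with variances \(\sigma_k^2\), the best linear unbiased combination is
\[
\hat s = \sum_k w_k s_k, \qquad w_k = \frac{\sigma_k^{-2}}{\sum_j \sigma_j^{-2}} ,
\]
and under Gaussian noise this combination is also the UMVU estimator. This is shown here only as the generic principle; the paper's H(2) weighting is derived specifically for FORET-3D and is not reproduced.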

17.
This paper addresses the problem of correlation estimation in sets of compressed images. We consider a framework where the images are represented under the form of linear measurements due to low complexity sensing or security requirements. We assume that the images are correlated through the displacement of visual objects due to motion or viewpoint change and the correlation is effectively represented by optical flow or motion field models. The correlation is estimated in the compressed domain by jointly processing the linear measurements. We first show that the correlated images can be efficiently related using a linear operator. Using this linear relationship we then describe the dependencies between images in the compressed domain. We further cast a regularized optimization problem where the correlation is estimated in order to satisfy both data consistency and motion smoothness objectives with a Graph Cut algorithm. We analyze in detail the correlation estimation performance and quantify the penalty due to image compression. Extensive experiments in stereo and video imaging applications show that our novel solution stays competitive with methods that implement complex image reconstruction steps prior to correlation estimation. We finally use the estimated correlation in a novel joint image reconstruction scheme that is based on an optimization problem with sparsity priors on the reconstructed images. Additional experiments show that our correlation estimation algorithm leads to an effective reconstruction of pairs of images in distributed image coding schemes that outperform independent reconstruction algorithms by 2–4 dB.
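The compressed-domain formulation described above can be summarized schematically (generic notation, an illustrative sketch rather than the paper's exact problem): if the second image is related to the first by a warping operator \(A(m)\) built from the motion/optical-flow field \(m\), and each image \(x_i\) is observed only through linear measurements \(y_i = \Phi_i x_i\), then the correlation can be estimated as
\[
\hat m = \arg\min_{m}\ \big\lVert\, y_2 - \Phi_2 A(m)\, \hat x_1 \,\big\rVert_2^2 \; + \; \lambda\, R(m),
\]
where the first term enforces consistency with the measurements, \(R(m)\) is a motion-smoothness regularizer, and the minimization over a discrete label set is the kind of problem a Graph Cut algorithm can solve.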

18.
Several statistical methods of image reconstruction are described and objectively compared through the use of receiver operating characteristic (ROC) analysis based on a specified detection task performed by a human observer. The simulated imaging system is a multiple-pinhole coded-aperture system for dynamic cardiac imaging, and the objects represent cross sections of the left ventricle at end systole. The task is detection of a protrusion representing an akinetic wall segment. Thirteen different reconstruction algorithms are considered. Human observers perform the specified task on this set of reconstructions, and the results are analyzed through the use of ROC analysis. The results show that the methods that utilize the largest amount of (accurate) prior information tend to perform the best.

19.
Computer vision tasks are often expected to be executed on compressed images. Classical image compression standards like JPEG 2000 are widely used. However, they do not account for the specific end task at hand. Motivated by work on recurrent neural network (RNN)-based image compression and three-dimensional (3D) reconstruction, we propose unified network architectures to solve both tasks jointly. These joint models provide image compression tailored to the specific task of 3D reconstruction. Images compressed by our proposed models yield 3D reconstruction performance superior to that obtained with JPEG 2000 compression. Our models significantly extend the range of compression rates for which 3D reconstruction is possible. We also show that this can be done highly efficiently, at almost no additional cost on top of the computation already required for performing the 3D reconstruction task.

20.
A stabilizing observer-based control algorithm for an in-wheel-motored vehicle is proposed, which generates direct yaw moment to compensate for state deviations. The control scheme is based on a fuzzy rule-based body slip angle (β) observer. In the design strategy of the fuzzy observer, the vehicle dynamics are represented by Takagi-Sugeno-like fuzzy models. Initially, local equivalent vehicle models are built using linear approximations of the vehicle dynamics for the low and high lateral acceleration operating regimes, respectively. The optimal β observer is then designed for each local model using Kalman filter theory. Finally, the local observers are combined to form the overall control system by means of fuzzy rules. These fuzzy rules represent the qualitative relationships among the variables associated with the nonlinear and uncertain nature of vehicle dynamics, such as tire force saturation and the influence of road adherence. An adaptation mechanism for the fuzzy membership functions has been incorporated to improve the accuracy and performance of the system. The effectiveness of this design approach has been demonstrated in simulations and in a real-time experimental setting.
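The observer combination follows the usual Takagi-Sugeno blending structure (generic form, shown for orientation): with normalized membership functions \(\mu_i(z)\) of the scheduling variables \(z\) (e.g. lateral acceleration),
\[
\hat\beta = \sum_{i} \mu_i(z)\,\hat\beta_i, \qquad \sum_i \mu_i(z) = 1,
\]
where each local estimate \(\hat\beta_i\) is produced by the Kalman-filter observer designed for the corresponding linearized vehicle model; the adaptation mechanism mentioned above adjusts the membership functions online.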
