Similar Documents
20 similar documents found (search time: 31 ms).
1.
It may be possible to estimate a nominal pulmonary blood flow (Q) during an exercise stress test via the algorithm used to estimate breath-by-breath alveolar CO2 production. Recently it has been demonstrated that by relating breath-to-breath fluctuations in alveolar CO2 production to breath-to-breath fluctuations in end-tidal CO2, an optimizing parameter related to Q can be determined that can be used to process the CO2 production fluctuations and minimize their variation. However, the reported values of Q using this procedure appear to be biased low. Using a computer simulation of gas exchange, we demonstrate that the estimate of Q is biased low when the nominal lung volume used in the alveolar gas exchange algorithm is too large. Furthermore, alveolar CO2 transport is determined by an integral of alveolar CO2 over the breath time and is thus a path-dependent quantity. The use of end-tidal CO2 fluctuations to approximate fluctuations in this integral contributes to an error in the estimation of Q that yields estimates biased low. Alternatively, the use of mean alveolar CO2 fluctuations yields more appropriate Q estimates. These results suggest practical implications for estimating effective pulmonary blood flow during an exercise stress test by using breath-to-breath estimates of mean alveolar CO2.
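The variance-minimizing step can be sketched in a few lines. The sketch below is only schematic: it assumes, as a simplification of the paper's algorithm, that the blood-storage correction enters linearly through a CO2 solubility slope `beta`, and all names and values are illustrative.

```python
import numpy as np

def estimate_q(dvco2, dpaco2, beta=0.0057, q_grid=np.linspace(1.0, 15.0, 141)):
    """Schematic variance-minimizing search for pulmonary blood flow Q.

    dvco2: breath-to-breath fluctuations in CO2 output (L/min, mean removed)
    dpaco2: fluctuations in mean alveolar PCO2 (mmHg, mean removed)
    beta: assumed CO2 solubility slope (L CO2 per L blood per mmHg)
    Returns the Q (L/min) whose storage correction leaves the corrected
    CO2-output fluctuations with the smallest variance.
    """
    variances = [np.var(dvco2 - q * beta * dpaco2) for q in q_grid]
    return float(q_grid[int(np.argmin(variances))])
```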

2.
Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel, flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically using P-splines with an estimated smoothing parameter. To avoid overfitting, we describe a relatively simple in-sample cross-validation method that can be used to estimate the optimal boosting iteration and that has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data of forced 1-second lung expiratory volume (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data.
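The idea of picking a stopping iteration from staged predictions can be illustrated with an ordinary univariate gradient-boosting model. This is a stand-in, not the paper's multivariate tree booster or its in-sample cross-validation, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic data standing in for longitudinal measurements
rng = np.random.default_rng(0)
X = rng.random((300, 5))
y = np.sin(4 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

# Fit many trees, then choose the iteration minimizing validation error
gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05)
gbm.fit(X_tr, y_tr)
val_mse = [np.mean((p - y_va) ** 2) for p in gbm.staged_predict(X_va)]
print("chosen boosting iteration:", int(np.argmin(val_mse)) + 1)
```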

3.
4.
Miao X, Rao RP. Neural Computation, 2007, 19(10): 2665-2693
A fundamental problem in biological and machine vision is visual invariance: How are objects perceived to be the same despite transformations such as translations, rotations, and scaling? In this letter, we describe a new, unsupervised approach to learning invariances based on Lie group theory. Unlike traditional approaches that sacrifice information about transformations to achieve invariance, the Lie group approach explicitly models the effects of transformations in images. As a result, estimates of transformations are available for other purposes, such as pose estimation and visuomotor control. Previous approaches based on first-order Taylor series expansions of images can be regarded as special cases of the Lie group approach, which utilizes a matrix-exponential-based generative model of images and can handle arbitrarily large transformations. We present an unsupervised expectation-maximization algorithm for learning Lie transformation operators directly from image data containing examples of transformations. Our experimental results show that the Lie operators learned by the algorithm from an artificial data set containing six types of affine transformations closely match the analytically predicted affine operators. We then demonstrate that the algorithm can also recover novel transformation operators from natural image sequences. We conclude by showing that the learned operators can be used to both generate and estimate transformations in images, thereby providing a basis for achieving visual invariance.
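The matrix-exponential generative view is easy to see for a known operator. Below, the analytic generator of in-plane rotation (one of the six affine operators the algorithm recovers) is exponentiated to produce a finite transformation; this coordinate-level demo is an illustration, not the paper's image-domain implementation.

```python
import numpy as np
from scipy.linalg import expm

# Infinitesimal generator (Lie operator) of in-plane rotation
A_rot = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

def apply_transform(points, A, t):
    """Apply the one-parameter transformation exp(t*A) to 2-D points."""
    return points @ expm(t * A).T

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
print(apply_transform(pts, A_rot, np.pi / 2))  # ~90-degree rotation
```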

5.
6.
7.
Software component size estimation is an important task in software project management. For a component-based approach, two steps may be used to estimate the overall size of object-oriented (OO) software: a designer uses metrics to predict the size of the software components and then uses those sizes to estimate the overall project size. Drawing on the OO software metrics literature, we identified factors that may affect the size of an OO software component. Using real-life data from 152 software components, we then determined the effect of the identified factors on the prediction of OO software component size. The results indicated that certain factors, as well as the type of OO software component, play a significant role in the estimate. We show how a regression tree data mining approach can be used to learn decision rules to guide future estimates.
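A regression tree over component metrics yields exactly the kind of readable decision rules described. The snippet below uses synthetic data with hypothetical features (method count, attribute count, a component-type code); the study's actual factor set is not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
X = rng.integers(0, 40, size=(152, 3)).astype(float)          # 152 components
y = 20 + 8 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 10, 152)   # size (LOC)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["methods", "attributes", "type_code"]))
```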

8.

9.
In the context of the analysis of measured data, one is often faced with the task of differentiating data numerically. Typically, this occurs when measured data are concerned or when data are evaluated numerically during the evolution of partial or ordinary differential equations. Usually, one pays little attention to the accuracy of the resulting derivative estimates because modern computers are assumed to be accurate to many digits. But measurements carry intrinsic errors that are often far larger than the precision limit of the machine used, and there is the effect of “loss of significance”, well known in numerical mathematics and computational physics. The problem occurs primarily in numerical subtraction, and clearly, the estimation of derivatives involves the approximation of differences. In this article, we discuss several techniques for the estimation of derivatives. As a novel aspect, we divide the techniques into local and global methods and explain their respective shortcomings. We have developed a general scheme for global methods, and we illustrate our ideas with spline smoothing and spectral smoothing. The results from these less-known techniques are compared with those from local methods; as typical representatives of the latter, we chose Savitzky-Golay filtering and finite differences. Two basic quantities are used to characterize the results: the variance of the difference between the true derivative and its estimate and, as an important new characteristic, the smoothness of the estimate. We apply the different techniques to numerically produced data and demonstrate the application to data from an aeroacoustic experiment. We find that global methods are generally preferable when a smooth process is considered; for rough estimates, local methods work acceptably well.
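Both families are available off the shelf. A minimal comparison on noisy data might look like the following, where the variance of the error against the known derivative plays the role of the first characteristic above; the test signal and noise level are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(0)
y = np.sin(x) + rng.normal(scale=0.05, size=x.size)   # noisy measurement
dx = x[1] - x[0]

# Local method: Savitzky-Golay filtering with first-derivative output
dy_local = savgol_filter(y, window_length=21, polyorder=3, deriv=1, delta=dx)

# Global method: smoothing spline, differentiated analytically
dy_global = UnivariateSpline(x, y, k=4, s=x.size * 0.05**2).derivative()(x)

true = np.cos(x)
print("local error variance: ", np.var(dy_local - true))
print("global error variance:", np.var(dy_global - true))
```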

10.
In this paper we develop a comprehensive framework for the study of decentralized estimation problems. The approach embeds a decentralized estimation problem into an equivalent scattering problem and uses the superposition principle to relate local and centralized estimates. Decentralized filtering and smoothing algorithms are obtained for a simple estimation structure consisting of a central processor and two local processors. The case in which the local processors exchange some information is considered, as well as the case in which the local state-space models differ from the central model.
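In the simplest static case, the superposition idea reduces to information-weighted fusion of the two local estimates. The scalar example below illustrates that principle only; it is not the paper's scattering-based filter.

```python
# Local processors report (estimate, variance) for the same scalar state
x1, p1 = 2.1, 0.5
x2, p2 = 1.8, 0.2

# Central estimate: combine by inverse-variance (information) weighting,
# assuming the two local errors are independent
p_c = 1.0 / (1.0 / p1 + 1.0 / p2)
x_c = p_c * (x1 / p1 + x2 / p2)
print(x_c, p_c)   # fused estimate is pulled toward the more certain local one
```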

11.
Efficiently Querying Large XML Data Repositories: A Survey
Extensible markup language (XML) is emerging as a de facto standard for information exchange among various applications on the World Wide Web. There has been a growing need for developing high-performance techniques to query large XML data repositories efficiently. One important problem in XML query processing is twig pattern matching, that is, finding in an XML data tree D all matches that satisfy a specified twig (or path) query pattern Q. In this survey, we review, classify, and compare major techniques for twig pattern matching. Specifically, we consider two classes of major XML query processing techniques: the relational approach and the native approach. The relational approach directly utilizes existing relational database systems to store and query XML data, which enables the use of all important techniques that have been developed for relational databases, whereas in the native approach, specialized storage and query processing systems tailored for XML data are developed from scratch to further improve XML query performance. As implied by existing work, XML data querying and management are developing in the direction of integrating the relational approach with the native approach, which could result in higher query processing performance and also significantly reduce system reengineering costs.
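A twig pattern is a small tree of element conditions; in practice, the simplest way to run one is an XPath query. The toy document and query below are purely illustrative.

```python
from lxml import etree

doc = etree.fromstring(
    "<lib><book><title>XML</title><author>Chen</author></book>"
    "<book><title>DB</title></book></lib>")

# Twig pattern: book elements having BOTH a title child and an author child
for book in doc.xpath("//book[title and author]"):
    print(book.findtext("title"), "-", book.findtext("author"))
```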

12.
Wearable human movement measurement systems are increasingly popular as a means of capturing human movement data in real-world situations. Previous work has attempted to estimate segment kinematics during walking from foot acceleration and angular velocity data. In this paper, we propose a novel neural network [GRNN with Auxiliary Similarity Information (GASI)] that estimates joint kinematics by taking account of proximity and gait trajectory slope information through adaptive weighting. Furthermore, multiple kernel bandwidth parameters are used that can adapt to the local data density. To demonstrate the value of the GASI algorithm, hip, knee, and ankle joint motions are estimated from acceleration and angular velocity data for the foot and shank, collected using commercially available wearable sensors. Reference hip, knee, and ankle kinematic data were obtained using externally mounted reflective markers and infrared cameras for subjects while they walked at different speeds. The results provide further evidence that a neural net approach to the estimation of joint kinematics is feasible and shows promise, but other practical issues must be addressed before this approach is mature enough for clinical implementation. Furthermore, they demonstrate the utility of the new GASI algorithm for making estimates from continuous periodic data that include noise and a significant level of variability.
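A plain GRNN is a kernel-weighted average of training targets. The sketch below shows that baseline with a single fixed bandwidth; the GASI variant additionally reweights by proximity and gait-slope similarity and adapts multiple bandwidths to local data density, none of which is reproduced here.

```python
import numpy as np

def grnn_predict(X_train, y_train, x_query, bandwidth=0.1):
    """Baseline GRNN (Nadaraya-Watson) prediction with one fixed bandwidth."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return float(np.sum(w * y_train) / np.sum(w))

# Toy usage: predict a joint angle from a 2-D sensor feature vector
X = np.array([[0.1, 0.2], [0.4, 0.1], [0.3, 0.5]])
y = np.array([10.0, 25.0, 17.0])
print(grnn_predict(X, y, np.array([0.35, 0.2])))
```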

13.
In this paper, we consider the problem of placing alternatives that are defined by multiple criteria into preference-ordered categories. We consider a method that estimates an additive utility function and demonstrate that it may misclassify many alternatives even when substantial preference information is obtained from the decision maker (DM) to estimate the function. To resolve this difficulty, we develop an interactive approach. Our approach occasionally requires the DM to place some reference alternatives into categories during the solution process and uses this information to categorize other alternatives. The approach is guaranteed to place all alternatives correctly for a DM whose preferences are consistent with any additive utility function. We demonstrate that the approach works well using data derived from ranking global MBA programs as well as on several randomly generated problems.
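For intuition, an additive utility model scores an alternative as a weighted sum of criterion values and thresholds the score into ordered categories. The weights, thresholds, and scores below are all hypothetical.

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])      # hypothetical criterion weights
thresholds = [0.4, 0.7]                  # utility cut-offs between categories

def categorize(criteria):
    """Place an alternative (criteria scaled to [0, 1]) into category 0..2."""
    u = float(weights @ criteria)
    return sum(u >= t for t in thresholds)

print(categorize(np.array([0.9, 0.8, 0.6])))   # high utility -> top category
```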

14.
This paper proposes an effective higher order statistics method to address subpixel image registration. Conventional power spectrum-based techniques employ second-order statistics to estimate subpixel translation between two images. They are, however, susceptible to noise, thereby leading to significant performance deterioration in low signal-to-noise ratio environments or in the presence of cross-correlated channel noise. In view of this, we propose a bispectrum-based approach to alleviate this difficulty. The new method utilizes the characteristics of bispectrum to suppress Gaussian noise. It develops a phase relationship between the image pair and estimates the subpixel translation by solving a set of nonlinear equations. Experimental results show that the proposed technique provides performance improvement over conventional power-spectrum-based methods under different noise levels and conditions.
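The conventional second-order baseline this paper improves upon is phase correlation; scikit-image ships an upsampled version that reaches subpixel resolution. The example below recovers a known synthetic translation; the bispectrum method itself is not implemented here.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(4)
ref = rng.random((128, 128))
moving = nd_shift(ref, (2.37, -1.18))    # apply a known subpixel translation

est, error, _ = phase_cross_correlation(ref, moving, upsample_factor=100)
print(est)   # recovers the (2.37, -1.18) translation, up to sign convention
```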

15.
To enhance the efficiency of regression parameter estimation by modeling the correlation structure of correlated binary error terms in quantile regression with repeated measurements, we propose a Gaussian pseudolikelihood approach for estimating correlation parameters and selecting the most appropriate working correlation matrix simultaneously. The induced smoothing method is applied to estimate the covariance of the regression parameter estimates, which can bypass density estimation of the errors. Extensive numerical studies indicate that the proposed method performs well in selecting an accurate correlation structure and improving regression parameter estimation efficiency. The proposed method is further illustrated by analyzing a dental dataset.

16.
We have developed a computer model of the dog lung based upon an asymmetrically branching network of tubes to describe gas exchange during high-frequency oscillations. Impedances to oscillatory flows were calculated for each airway segment and used to determine flow distributions in all airway generations. Gas exchange was assumed to occur by convective and augmented dispersive mechanisms. Also included in the model were features commonly found in animal studies including the effects of bias flow of fresh gas at the airway opening and equipment dead-space volume. The magnitude of CO2 elimination predicted by the model closely resembled actual experimental data. Specifically, it predicted that CO2 elimination is proportional to frequency to the 0.82 power and tidal volume to the 1.25 power. It also indicated that the bias flow rate and equipment dead-space volume influence gas exchange characteristics and thus these variables should be considered when comparing data from different studies.
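The fitted exponents translate directly into scaling rules: under the model's power law, doubling frequency raises CO2 elimination by about 2^0.82, roughly 1.77x, while doubling tidal volume raises it by about 2^1.25, roughly 2.38x. A one-function check:

```python
def co2_elimination(freq, vt, k=1.0):
    """Relative CO2 elimination under the model's fitted power law."""
    return k * freq**0.82 * vt**1.25

print(co2_elimination(2, 1) / co2_elimination(1, 1))   # ~1.77
print(co2_elimination(1, 2) / co2_elimination(1, 1))   # ~2.38
```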

17.
We propose the generalized profiling method to estimate the multiple regression functions in the framework of penalized spline smoothing, where the regression functions and the smoothing parameter are estimated in two nested levels of optimization. The corresponding gradients and Hessian matrices are worked out analytically, using the Implicit Function Theorem if necessary, which leads to fast and stable computation. Our main contribution is developing the modified delta method to estimate the variances of the regression functions, which include the uncertainty of the smoothing parameter estimates. We further develop adaptive penalized spline smoothing to estimate spatially heterogeneous regression functions, where the smoothing parameter is a function that changes along with the curvature of regression functions. The simulations and application show that the generalized profiling method leads to good estimates for the regression functions and their variances.
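The two nested levels, fitting a penalized spline for a given smoothing parameter and then choosing that parameter by an outer criterion, can be sketched compactly. The truncated-power basis below is a stand-in for the paper's splines, and GCV stands in for its profiling criterion; this is not the authors' implementation.

```python
import numpy as np

def pspline_gcv(x, y, n_knots=20, lam_grid=np.logspace(-4, 4, 41)):
    """Penalized-spline fit with the smoothing parameter chosen by GCV."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    B = np.column_stack([np.ones_like(x), x, x**2, x**3,
                         *(np.maximum(x - k, 0.0) ** 3 for k in knots)])
    D = np.diag([0.0] * 4 + [1.0] * n_knots)   # penalize only the knot terms
    best = (np.inf, None, None)
    for lam in lam_grid:
        H = B @ np.linalg.solve(B.T @ B + lam * D, B.T)
        resid = y - H @ y
        gcv = x.size * (resid @ resid) / (x.size - np.trace(H)) ** 2
        if gcv < best[0]:
            best = (gcv, lam, H @ y)
    return best[1], best[2]   # chosen lambda, fitted curve
```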

18.
In medical image filtering, it is important that the filtered image preserve the edges and fine details of the original as much as possible. During conventional filtering, however, removing noise tends to blur important structural information in the image. In recent years, scale-based filtering methods have been applied effectively to gray-scale images. Here, the scale-based approach is extended to the filtering of vector (color) images. Building on three traditional filtering methods (the vector median filter, the basic vector directional filter, and the directional-distance filter), three corresponding ball-scale-based vector filters are proposed. Guided by the scale information of each pixel, the new filters apply less smoothing near edges and details, i.e., at region boundaries, and more smoothing within region interiors, thereby controlling the filtering process adaptively. Experimental results show that, compared with the traditional methods, the proposed filters better preserve the edges and details in the image while removing noise.
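Of the three traditional baselines, the vector median filter is the simplest to state: in each window, output the pixel vector with the smallest summed distance to all other vectors in the window. This sketch implements only that classic baseline, not the proposed ball-scale adaptation.

```python
import numpy as np

def vector_median_filter(img, radius=1):
    """Classic vector median filter for an RGB image of shape (H, W, 3)."""
    h, w, _ = img.shape
    out = img.copy()
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            win = img[i - radius:i + radius + 1,
                      j - radius:j + radius + 1].reshape(-1, 3).astype(float)
            # Summed Euclidean distance from each vector to all others
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2).sum(1)
            out[i, j] = win[np.argmin(d)]
    return out
```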

19.
Communication between organizations is formalized as process choreographies in daily business. While the correct ordering of exchanged messages can be modeled and enacted with current choreography techniques, no approach exists to describe and automate the exchange of data between processes in a choreography using messages. This paper describes an entirely model-driven approach for BPMN, introducing a few concepts that suffice to model data retrieval, data transformation, message exchange, and correlation – four aspects of data exchange. For automation, this work uses a recent concept to enact data dependencies in internal processes. We present a modeling guideline for deriving local process models from a given choreography; their operational semantics allows the entire choreography, including the exchange of data, to be enacted correctly from the derived models alone. Targeting successful interactions, we discuss means to ensure correct process choreography modeling. Finally, we implemented our approach by extending the camunda BPM platform and show its feasibility by realizing all service interaction patterns using only model-based concepts.

20.
Forest information over a landscape is often represented as a spatial mosaic of polygons, separated by differences in species composition, height, age, crown closure, productivity, and other variables. These polygons are commonly delineated on medium-scale photography (e.g., 1:15,000) by a photo-interpreter familiar with the inventory area, and displayed and stored as a forest cover map in a Geographic Information System (GIS) layer. Forest cover maps are used for multiple purposes, including timber and habitat supply analyses and carbon inventories at a regional or management unit level, and for parks planning, operational planning, and selection of stands at a local level. Attribute data for each polygon commonly include the variables used to delineate the polygon and other variables that can be measured or estimated using these medium-scale photographs. Additional measures that can only be obtained via expensive ground measurements or possibly from high-resolution photographs (e.g., volume per unit area, biomass components per unit area, a tree-list of species and diameters) are available only for a sample of polygons, or may have been gathered independently using a sample survey over the land area. Improved linkages across a variety of data sources may help to support landscape-level analyses. This study presents an approach that combines information from a systematic (grid) ground survey, forest cover (polygon) data, and Landsat Thematic Mapper (TM) imagery using variable-space nearest neighbor methods to estimate (i) mean ground-measured attributes for each polygon, in particular volume per ha (m3/ha), stems per ha, and quadratic mean diameter; and (ii) the variation of these ground attributes within polygons. The approach was initially evaluated using Monte Carlo simulations with known measures of these attributes. Nearest neighbor methods were then applied to an area of approximately 5000 ha (about 1000 polygons) of high-productivity, mountainous forest located near the Pacific Coast of British Columbia, Canada. Based on the simulation results, the use of Landsat pixel reflectances to estimate volume per ha, average tree size (i.e., quadratic mean diameter), and stems per ha did not show great promise in improving per-polygon estimates over using forest cover data alone. However, in application, the use of remotely sensed data provided estimates of within-polygon variability. At the same time, the estimated means of these three imputed variables over the entire study area were very similar to the representative sample estimates using the ground data only. Extensions to other variables, such as ranges of diameters and numbers of snags, may also be possible, providing useful data for habitat and forest growth analysis.
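The variable-space nearest-neighbor step amounts to imputing each polygon's ground attributes from the sampled polygon whose spectral signature is closest. A minimal single-neighbor sketch with synthetic reflectances (hypothetical band values, not the study's data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
X_sampled = rng.random((100, 6))     # mean TM band reflectances, ground plots
y_sampled = 200 + 300 * X_sampled[:, 3] + rng.normal(0, 20, 100)  # m3/ha
X_all = rng.random((1000, 6))        # every polygon in the cover map

# k=1 keeps each imputed value equal to an actually observed one
knn = KNeighborsRegressor(n_neighbors=1).fit(X_sampled, y_sampled)
print(knn.predict(X_all)[:5])
```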
