Similar Articles
1.
There is a perceived tradeoff between the ease of measuring alcohol in the body and the accuracy of the result. Direct tests of blood alcohol concentration are considered the most accurate; desktop stationary breath testers based on electrochemical infrared technology are slightly less accurate but accepted for evidentiary purposes in most jurisdictions; and quick portable breath testers based on fuel-cell technology are the easiest to administer but not acceptable in many courts. This study compared the accuracy of an evidentiary portable breath tester and an evidentiary desktop breath tester relative to blood alcohol concentrations. Inverse regressions were used to obtain confidence limits for the alcohol levels read by the breath testers that trade off false positives against false negatives at three levels of confidence: 95%, 96%, and 98%, corresponding to false-positive rates of 2.5%, 2%, and 1%, respectively. A decision-tree model is offered for the optimal use of the three measures: portable breath testers are sufficient at high BrAC levels, stationary breath testers at medium BrAC levels, and blood tests are recommended at still lower BrACs. The model provides quantitative BrAC threshold levels for the two most common BAC levels used to define DWI: 50 mg/dl and 80 mg/dl.
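The inverse-regression step can be illustrated with a small calibration sketch. The paired readings below are hypothetical, and a simple linear calibration with a one-sided prediction bound stands in for the authors' procedure; only the 50 mg/dl limit and the confidence level are taken from the abstract.

    import numpy as np
    from scipy import stats

    # Illustrative paired measurements (hypothetical data): breath reading vs. blood BAC, mg/dl
    brac = np.array([20, 35, 48, 55, 62, 70, 81, 95, 110, 130], float)
    bac  = np.array([24, 37, 50, 58, 60, 75, 85, 92, 115, 128], float)

    # Simple linear calibration of blood BAC on the breath reading
    n = len(brac)
    slope, intercept, r, p, se = stats.linregress(brac, bac)
    resid = bac - (intercept + slope * brac)
    s = np.sqrt(np.sum(resid**2) / (n - 2))

    def lower_prediction_bound(x0, conf=0.975):
        """One-sided lower prediction bound for blood BAC at breath reading x0."""
        t = stats.t.ppf(conf, n - 2)
        sx = np.sqrt(1 + 1 / n + (x0 - brac.mean())**2 / np.sum((brac - brac.mean())**2))
        return intercept + slope * x0 - t * s * sx

    # Smallest breath reading whose lower bound already exceeds the 50 mg/dl legal limit
    grid = np.arange(40, 150, 0.5)
    threshold = grid[np.argmax([lower_prediction_bound(x) > 50 for x in grid])]
    print("BrAC threshold for 50 mg/dl at ~97.5% one-sided confidence:", threshold)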

2.
The application of finite mixture regression models has recently gained interest among highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study used observed and simulated data to investigate the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference, measured by the percentage deviation in ranking orders, was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than those from the NB model because of the better model specification. This finding was also supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated a trade-off between the false discovery rate (increasing) and the false negative rate (decreasing). Since the costs associated with false positives and false negatives are different, it is suggested that the optimal threshold value be chosen by weighing these two costs against each other so that unnecessary expenses are minimized.
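As a point of reference for the ranking comparison, the sketch below fits a traditional NB regression to hypothetical segment data and ranks segments by expected crash frequency; the finite-mixture fit itself (typically estimated by EM) is not shown, and the column names and simulated data are illustrative only.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical segment data: crash counts with exposure covariates
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "aadt": rng.uniform(2000, 40000, n),    # traffic volume
        "length": rng.uniform(0.1, 5.0, n),     # segment length, miles
    })
    mu = np.exp(-6.0 + 0.8 * np.log(df["aadt"]) + 1.0 * np.log(df["length"]))
    df["crashes"] = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

    # Traditional NB regression; a finite-mixture alternative would replace this fit
    X = sm.add_constant(np.column_stack([np.log(df["aadt"]), np.log(df["length"])]))
    nb = sm.NegativeBinomial(df["crashes"], X).fit(disp=0)
    df["nb_rank"] = (-nb.predict(X)).argsort().argsort() + 1   # 1 = highest expected crashes

    def pct_deviation(rank_a, rank_b, k=20):
        """Percentage deviation between two top-k hotspot lists (e.g., NB vs. mixture ranks)."""
        top_a, top_b = set(np.where(rank_a <= k)[0]), set(np.where(rank_b <= k)[0])
        return 100.0 * len(top_a - top_b) / k

    print(df.sort_values("nb_rank").head())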

3.
Objective: To improve on the traditional Harris corner detection operator so as to reduce its missed-detection rate and strengthen its ability to reject false corners. Methods: Taking the identification of materials on an automated packaging line as an example, the acquired image is first preprocessed into a grayscale image. The grayscale image is then rotated to four different angles with a steerable filter, corner detection is performed at each angle, and true and false corners are finally distinguished by combining the results with logical operations. Results: Corner detection with the improved Harris operator on the preprocessed image data was compared with the classical operator; the results show that the improved operator has a much stronger ability to distinguish true corners from false ones. Conclusion: Experiments show that the method effectively improves the recognition accuracy of the corner detection operator, reducing the false-detection rate to 1.3% and the missed-detection rate to 2.9%.
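A rough sketch of the multi-angle idea is given below, using plain image rotations in place of the steerable filter and a logical AND to keep only corners confirmed at every orientation; the four angles follow the abstract, while the thresholds and OpenCV-based implementation are assumptions.

    import cv2
    import numpy as np

    def multi_angle_harris(gray, angles=(0, 45, 90, 135), thresh_ratio=0.01):
        """Detect corners at several orientations and keep only points flagged at every angle.

        A rough stand-in for the steerable-filter step: the image is rotated, the Harris
        response is computed, the corner mask is rotated back, and the masks are combined
        with a logical AND to reject false corners."""
        h, w = gray.shape
        center = (w / 2.0, h / 2.0)
        combined = np.ones((h, w), dtype=bool)
        for a in angles:
            M = cv2.getRotationMatrix2D(center, a, 1.0)
            rot = cv2.warpAffine(gray, M, (w, h))
            resp = cv2.cornerHarris(np.float32(rot), 2, 3, 0.04)
            mask = (resp > thresh_ratio * resp.max()).astype(np.uint8)
            # rotate the corner mask back to the original frame
            Minv = cv2.getRotationMatrix2D(center, -a, 1.0)
            combined &= cv2.warpAffine(mask, Minv, (w, h)) > 0
        return np.argwhere(combined)   # (row, col) coordinates of accepted corners

    # Usage on a grayscale image loaded elsewhere:
    # gray = cv2.imread("package.png", cv2.IMREAD_GRAYSCALE)
    # corners = multi_angle_harris(gray)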

4.
This article investigates computation of pointwise and simultaneous tolerance limits under the logistic regression model for binary data. The data consist of n binary responses, where the probability of a positive response depends on covariates via the logistic regression function. Upper tolerance limits are constructed for the number of positive responses in m future trials for fixed as well as varying levels of the covariates. The former provides pointwise upper tolerance limits, and the latter provides simultaneous upper tolerance limits. The upper tolerance limits are obtained from upper confidence limits for the probability of a positive response, modeled using the logistic function. To compute pointwise upper confidence limits for the logistic function, likelihood-based asymptotic methods, small sample asymptotics, as well as bootstrap methods are investigated and numerically compared. To compute simultaneous upper tolerance limits, a bootstrap approach is investigated. The problems have been motivated by an application of interest to the U.S. Army, dealing with the testing of ballistic armor plates for protecting soldiers from projectiles and shrapnel, where the success probability depends on covariates such as the projectile velocity, size of the armor plate, etc. Such an application is used to illustrate the tolerance interval computations in the article. We provide the R codes used for the calculations presented in the examples in the article as supplementary material, available online.
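The construction of a pointwise upper tolerance limit can be sketched along the standard likelihood-based (Wald) route: an upper confidence limit on the success probability at a fixed covariate value, followed by the corresponding binomial quantile for m future trials. The data and the single velocity covariate below are hypothetical stand-ins for the armor-plate application.

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    # Hypothetical test data: y = 1 for a positive response, covariate = projectile velocity
    rng = np.random.default_rng(1)
    velocity = rng.uniform(800, 1200, 150)
    p_true = 1 / (1 + np.exp(-(-20 + 0.02 * velocity)))
    y = rng.binomial(1, p_true)

    X = sm.add_constant(velocity)
    fit = sm.Logit(y, X).fit(disp=0)

    def upper_tolerance_limit(x0, m=50, conf=0.95):
        """Wald upper confidence limit on p at covariate x0, then the binomial
        quantile for the number of positive responses in m future trials."""
        xvec = np.array([1.0, x0])
        eta = xvec @ fit.params
        se = np.sqrt(xvec @ fit.cov_params() @ xvec)
        p_upper = 1 / (1 + np.exp(-(eta + stats.norm.ppf(conf) * se)))
        return stats.binom.ppf(conf, m, p_upper)

    print(upper_tolerance_limit(x0=1000.0, m=50))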

5.
This paper discusses approximate statistical estimates of the limiting errors associated with a single differential phase measurement of the time delay (phase difference) between two reflectors of a passive surface acoustic wave (SAW) sensor. The remote wireless measurement is performed at an ideal coherent receiver using the maximum-likelihood approach. Approximate estimates of the mean error, mean square error, estimate variance, and Cramér-Rao bound are derived, along with the probability that the error exceeds a threshold, over a wide range of signal-to-noise ratio (SNR) values. The von Mises/Tikhonov distribution is used as an approximation for the phase difference and the differential phase diversity. Simulation of the random phase difference and the limiting errors is also presented.
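A Monte Carlo sketch of the exceedance probability is shown below, with von Mises-distributed phase noise on each reflector; mapping the concentration parameter directly to the linear SNR is a simplifying assumption for illustration, not the paper's derivation.

    import numpy as np
    from scipy import stats

    def phase_error_exceedance(snr_db, threshold_rad=np.pi / 8, n_trials=200_000, seed=0):
        """Monte Carlo estimate of P(|phase-difference error| > threshold).

        Assumption for this sketch: each reflector phase is von Mises distributed around
        its true value with concentration kappa set to the linear SNR, so the error in the
        difference is the wrapped difference of two von Mises draws."""
        rng = np.random.default_rng(seed)
        kappa = 10 ** (snr_db / 10.0)
        e1 = stats.vonmises.rvs(kappa, size=n_trials, random_state=rng)
        e2 = stats.vonmises.rvs(kappa, size=n_trials, random_state=rng)
        err = np.angle(np.exp(1j * (e1 - e2)))     # wrap the difference to (-pi, pi]
        return np.mean(np.abs(err) > threshold_rad)

    for snr in (5, 10, 15, 20):
        print(snr, "dB ->", phase_error_exceedance(snr))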

6.
A method of automated baseline correction has been developed and applied to Raman spectra with a low signal-to-noise ratio and to surface-enhanced infrared absorption (SEIRA) spectra with bipolar bands. Baseline correction begins by dividing the raw spectrum into equally spaced segments in which regional minima are located. Following identification, the minima are used to generate an intermediate second-derivative spectrum in which points are assigned to the baseline if they reside within a locally defined threshold region. The threshold region is similar to a confidence interval in statistics. To restrict the discrimination of baseline and band points to the local level, the calculation of the confidence region employs only a predefined number of already-accepted baseline minima as the sample set. Statistically based threshold criteria allow the procedure to make an unbiased assessment of baseline points regardless of the behavior of the vibrational bands. Furthermore, the threshold region is adaptive in that it is further modified to accommodate abrupt changes in the baseline. The procedure is model-free insofar as it makes no assumption about the precise nature of the perturbing baseline and requires no treatment of the spectra prior to execution.
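The segment-minima stage can be sketched as follows; the second-derivative screening and the adaptive, locally defined threshold region that accept or reject candidate points are deliberately omitted, so this is only the first pass of the procedure under simplifying assumptions.

    import numpy as np

    def rough_baseline(wavenumber, intensity, n_segments=32):
        """Split the spectrum into equal segments, take each regional minimum as a
        candidate baseline point, and interpolate a baseline through them.

        Only the first stage of the procedure described in the abstract; the
        second-derivative test and adaptive threshold region are not reproduced."""
        idx = np.array_split(np.arange(len(intensity)), n_segments)
        anchors = [seg[np.argmin(intensity[seg])] for seg in idx]
        baseline = np.interp(wavenumber, wavenumber[anchors], intensity[anchors])
        return intensity - baseline, baseline

    # x, y = loaded Raman spectrum (wavenumber axis, counts)
    # corrected, baseline = rough_baseline(x, y)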

7.
Communications networks are highly reliable and almost never experience widespread failures. But from time to time performance degrades and the probability that a call is blocked or fails to reach its destination jumps from nearly 0 to an unacceptable level. High but variable blocking may then persist for a noticeable period of time. Extended periods of high blocking, or events, can be caused by congestion in response to natural disasters, fiber cuts, equipment failures, and software errors, for example. Because the consequences of an event depend on the level of blocking and its persistence, lists of events at specified blocking and duration thresholds, such as 50% for 30 minutes or 90% for 15 minutes, are often maintained. Reliability parameters at specified blocking and duration thresholds, such as the mean number of events per year and mean time spent in events, are estimated from the lists of reported events and used to compare network service providers, transmission facilities, or brands of equipment, for example. This article shows how data obtained with two-stage sampling can be used to estimate blocking probabilities as a function of time. The estimated blocking probabilities are then used to detect and characterize events and to estimate reliability parameters at specified blocking and duration thresholds. Our estimators are model-free, except for one step in a sampling bias correction, and practical even if there are hundreds of millions of observations. Pointwise confidence intervals for reliability parameters as a function of blocking and duration thresholds are built using a kind of “partial bootstrapping” that is suitable for very large sets of data. The performance of the algorithm for event detection and the estimators of reliability parameters are explored with simulated data. An application to comparison of two network service providers is given in this article, and possible adaptations for other monitoring problems are sketched.
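Given a time series of estimated blocking probabilities, the event-detection step at a blocking/duration threshold pair reduces to run detection, as sketched below; the two-stage sampling estimator and the partial bootstrap are not shown, and the simulated estimates are placeholders.

    import numpy as np

    def detect_events(block_prob, minutes_per_sample=5, blocking=0.5, duration_min=30):
        """Return (start, end) index pairs of runs where the estimated blocking
        probability stays at or above `blocking` for at least `duration_min` minutes."""
        above = block_prob >= blocking
        events, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if (i - start) * minutes_per_sample >= duration_min:
                    events.append((start, i))
                start = None
        if start is not None and (len(above) - start) * minutes_per_sample >= duration_min:
            events.append((start, len(above)))
        return events

    # Example: one year of 5-minute estimates, then two reliability parameters
    rng = np.random.default_rng(2)
    p_hat = np.clip(rng.beta(0.2, 20, size=105_120), 0, 1)   # hypothetical estimates
    ev = detect_events(p_hat)
    print("events per year:", len(ev),
          "mean minutes in events:", 5 * np.mean([e - s for s, e in ev]) if ev else 0.0)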

8.
Owing to computer floating-point errors, a singular phenomenon is observed in GM(1,1) (the first-order, one-variable grey differential equation model) when the grey development coefficient equals zero, causing fatal and meaningless prediction errors in subsequent calculations. To prevent this phenomenon from leading to erroneous prediction results, discriminants for judging this case should be developed. In this study, the method of symbolic operations is adopted to develop general discriminants. Two discriminants, for even and odd numbers of raw data respectively, are developed and play an important role in validating the feasibility of GM(1,1). Two practical examples are used to demonstrate the importance of the discriminants.
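A minimal GM(1,1) fit makes the singular case concrete: the development coefficient a is estimated by least squares, and a near-zero value is exactly the situation the paper's symbolic discriminants are designed to detect. The simple numerical tolerance below is an assumption for illustration, not a substitute for those discriminants.

    import numpy as np

    def gm11_coefficients(x0):
        """Least-squares estimate of the GM(1,1) development coefficient a and grey input b."""
        x0 = np.asarray(x0, float)
        x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
        z = 0.5 * (x1[1:] + x1[:-1])             # background values
        B = np.column_stack([-z, np.ones_like(z)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        return a, b

    def gm11_forecast(x0, steps=1, tol=1e-12):
        a, b = gm11_coefficients(x0)
        if abs(a) < tol:
            # The singular case the discriminants guard against: with a ~= 0 the response
            # x1_hat(k) = (x0[0] - b/a) * exp(-a k) + b/a is undefined/meaningless.
            raise ValueError("development coefficient is (numerically) zero; GM(1,1) not applicable")
        n = len(x0)
        k = np.arange(n + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(x1_hat, prepend=0.0)[n:]  # inverse AGO gives the forecasts

    print(gm11_forecast([2.87, 3.28, 3.34, 3.62, 3.90], steps=2))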

9.
Road crashes have an unquestionably hierarchical crash-car-occupant structure. Multilevel models are used with correlated data, but their application to crash data can be difficult. The number of sub-clusters per cluster is small, with fewer than two cars per crash and fewer than two occupants per car, whereas the number of clusters can be high, with several hundred or thousand crashes. Application of the Monte Carlo method to observed and simulated French road crash data from 1996 to 2000 allows comparison of the estimates produced by multilevel logistic models (MLM), Generalized Estimating Equation models (GEE) and logistic models (LM). On the strength of a bias study, MLM is the most efficient model, while both GEE and LM underestimate parameters and confidence intervals. MLM is used as a marginal model and not as a random-effect model, i.e. only fixed effects are taken into account. Random effects allow risks to be adjusted on the hierarchical structure, conferring an interpretative advantage on MLM over GEE. Nevertheless, great care is needed in data coding, and quite a large number of crashes is necessary to avoid problems and errors with the estimates and the estimation process. On balance, MLM should be used when the number of vehicles per crash or the number of occupants per vehicle is high, when the LM results are questionable because they are not in line with the literature, or when the p-values associated with risk measures are close to 5%. In other cases, LM remains a practical analytical tool for modelling crash data.

10.
It is generally accepted that the method detection limit or MDL (defined in 40 CFR 136, Appendix B) provides protection against false positives 99% of the time. This is correct, but only for the next single measurement performed after the MDL is determined. Subsequent measurements are not protected against false positives with the same degree of confidence, and there is no protection for false negatives. This paper provides a simple cost-effective approach for estimating the "reliable detection limit." Unlike the MDL, the statistic may be used for an indefinite number of future measurements and minimizes false negatives.
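For context, the classical MDL computation is sketched below; the final line applies the common convention of doubling the critical level so that false negatives are also controlled (a Currie-style rule of thumb, not necessarily the paper's "reliable detection limit" statistic). The replicate values are hypothetical.

    import numpy as np
    from scipy import stats

    def mdl(replicates, alpha=0.01):
        """Method detection limit per 40 CFR 136 App. B: MDL = t_(1-alpha, n-1) * s,
        with s the standard deviation of n replicate low-level spiked measurements."""
        r = np.asarray(replicates, float)
        t = stats.t.ppf(1 - alpha, len(r) - 1)
        return t * r.std(ddof=1)

    # Seven replicate spikes (hypothetical concentrations, ug/L)
    reps = [1.02, 0.96, 1.10, 0.89, 1.05, 0.98, 1.01]
    lc = mdl(reps)
    print("MDL (false-positive protection only):", round(lc, 3))
    print("~Detection limit with false-negative protection (alpha = beta):", round(2 * lc, 3))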

11.
For real-time measurement of the small dimensions of multiple ramp-edge targets in grayscale images, a detection algorithm based on two-pass threshold segmentation is proposed. A pre-segmentation extracts the targets and determines their transition regions; a second segmentation then searches the transition regions for the true target edges so that the dimensions can be measured. Experiments show that this two-pass threshold segmentation algorithm has a clear advantage for targets smaller than 0.5 mm, achieving an accuracy better than 0.001 mm and reducing the influence of noise on the number of true edge points. In terms of real-time performance, the average measurement time is 60 ms, 40% shorter than that of a fitting-based algorithm, which meets the 2000 packages/h processing rate required of the inspection system's lower-level computer.

12.
《成像科学杂志》2013,61(4):200-210
This paper is an extension of previous work on the image segmentation of electronic structures on patterned wafers to improve the defect detection process on optical inspection tools. Die-to-die wafer inspection is based upon the comparison of the same area on two neighbouring dies. The dissimilarities between the images are a result of defects in this area of one of the dies. The noise level can vary from one structure to another within the same image; therefore, segmentation is needed to create a mask and apply an optimal threshold in each region. Contrast variation in the texture can affect the response of the parameters used for segmentation. This paper shows a method of anticipating these variations with a limited number of training samples and modifying the classifier accordingly to improve the segmentation results.

13.
A wavelet-transform-based method for detecting small infrared targets against a sea-surface background
Multiresolution analysis based on orthogonal wavelet decomposition is used to select frequency bands and suppress noise and background clutter; the horizontal and vertical edges of the image are detected to locate the sea-sky line and the potential target region; edges in different directions are fused to remove most of the background interference and obtain candidate target points; and the grey values of the candidate points are examined to set a threshold that rejects false-alarm points and segments the targets. Experimental results show that the method can detect small infrared ship targets against a complex sea-surface background.

14.
The visualization of computed tomography brain images is typically done by window setting, which stretches an image from the Digital Imaging and Communications in Medicine (DICOM) format into the standard grayscale format. However, the standard window setting does not provide good contrast for highlighting the hypodense area needed to detect ischemic stroke. Conventional histogram equalization and other proposed enhancement schemes not only enhance the contrast insufficiently but may also introduce unwanted artifacts into the so-called “enhanced image.” In this article, a new adaptive method is proposed that improves the image contrast markedly without causing such defects. The method first decomposes the image into equal-sized non-overlapping sub-blocks. The extreme grey levels in each sub-block's histogram are then eliminated, and the corresponding pixels are redistributed equally over the other grey levels, subject to a threshold limit. Finally, a grey-level reallocation function is defined, and bilinear interpolation is used to estimate the best value for each pixel and remove the potential blocking effect. © 2012 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 22, 153–160, 2012
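A much-simplified, block-wise sketch of the idea is given below: each sub-block's histogram is clipped at a threshold and the clipped counts are redistributed equally before equalization. The paper's specific rule for eliminating extreme grey levels and the bilinear interpolation that removes the blocking effect are not reproduced here.

    import numpy as np

    def blockwise_equalize(img, block=64, clip_limit=4.0):
        """Contrast enhancement per non-overlapping sub-block of an 8-bit image.

        Stand-in for the abstract's scheme: within each block the histogram is clipped
        at a threshold and the clipped counts are redistributed equally over all grey
        levels before equalization.  The bilinear interpolation step that removes the
        blocking effect is omitted in this sketch."""
        out = img.copy()
        h, w = img.shape
        for r in range(0, h, block):
            for c in range(0, w, block):
                tile = img[r:r + block, c:c + block]
                hist = np.bincount(tile.ravel(), minlength=256).astype(float)
                limit = clip_limit * hist.mean()
                excess = np.sum(np.clip(hist - limit, 0, None))
                hist = np.minimum(hist, limit) + excess / 256.0   # redistribute equally
                cdf = np.cumsum(hist)
                mapping = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
                out[r:r + block, c:c + block] = mapping[tile]
        return out

    # ct_slice = 8-bit grey-level CT image (after window setting), e.g. loaded with OpenCV
    # enhanced = blockwise_equalize(ct_slice)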

15.
This paper examines the effect of measurement errors and learning on the monitoring of processes with individual Bernoulli observations. A cumulative sum control chart is considered to evaluate the possible impacts of measurement errors and learning. We propose a time-dependent learning-effect model along with measurement errors and incorporate both into the Bernoulli CUSUM control chart statistic. The performance of the Bernoulli CUSUM chart is then assessed by comparing the average number of observations to signal (ANOS) under the two proposed conditions with the error-free condition. ANOS values are obtained for different proportions of non-conforming items, once considering only the inspectors' measurement errors and once considering measurement errors and the learning effect together. The experimental results show that the ability of the control chart to detect assignable causes deteriorates in the presence of measurement errors and improves when learning affects operators' performance. The proposed approach has potential for monitoring high-quality Bernoulli processes as well as for disease diagnosis and other health-care applications with Bernoulli observations.
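The ANOS comparison can be reproduced in outline by simulation, as sketched below; the exponential decay of the misclassification probabilities is an assumed stand-in for the paper's learning-effect model, and all numerical settings are illustrative.

    import numpy as np

    def anos_bernoulli_cusum(p_true, p0=0.01, p1=0.05, h=4.0,
                             err_fp=0.02, err_fn=0.05, learn_rate=0.0,
                             n_runs=2000, max_obs=100_000, seed=3):
        """Monte Carlo ANOS for a Bernoulli CUSUM when items are classified with error.

        err_fp / err_fn are the inspector's misclassification probabilities, and
        learn_rate is an assumed exponential decay of those probabilities over time,
        standing in for the paper's learning-effect model."""
        rng = np.random.default_rng(seed)
        # log-likelihood-ratio scores for a non-conforming / conforming classification
        s1 = np.log(p1 / p0)
        s0 = np.log((1 - p1) / (1 - p0))
        run_lengths = []
        for _ in range(n_runs):
            s = 0.0
            for t in range(1, max_obs + 1):
                fp = err_fp * np.exp(-learn_rate * t)
                fn = err_fn * np.exp(-learn_rate * t)
                x = rng.random() < p_true                      # true item state
                y = (x and rng.random() >= fn) or (not x and rng.random() < fp)
                s = max(0.0, s + (s1 if y else s0))
                if s >= h:
                    run_lengths.append(t)
                    break
        return np.mean(run_lengths)

    print("ANOS, no error:", anos_bernoulli_cusum(0.05, err_fp=0.0, err_fn=0.0))
    print("ANOS, with measurement error:", anos_bernoulli_cusum(0.05))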

16.
In this study, a simple screening algorithm was developed to prevent Type II errors, i.e. samples with high prediction error that are not detected as outliers. The method is used to distinguish "good" from "bad" spectra and to prevent the false-negative condition in which poorly predicted samples appear to lie within the calibration space yet have inordinately large residual or prediction errors. The detection and elimination of this type of sample, which is a true outlier but not easily detected, is extremely important in medical decisions, since such erroneous data can lead to considerable mistakes in clinical analysis and medical diagnosis. The algorithm is based on a cross-correlation comparison between sample spectra measured over the 4160-4880 cm−1 region. The correlation values are converted using Fisher's z-transform, and a z-test of the transformed values is performed to screen out the outlier spectra. The approach provides a tuning parameter that can be used to decrease the percentage of samples with high analytical (residual) errors. The algorithm was tested using a dataset with known reference values to determine the number of false-negative and false-positive samples. Its performance was evaluated on several hundred blood samples prepared at different hematocrit (24 to 48%) and glucose (30 to 500 mg/dL) levels using blood component materials from thirteen healthy human volunteers. Experimental results illustrate the effectiveness of the proposed algorithm in finding and screening out Type II outliers in terms of sensitivity and specificity, and its ability to ensure lower prediction error on future or validation datasets. To our knowledge this is the first paper to introduce a statistically useful screening method based on spectral cross-correlation for detecting Type II outliers (false-negative samples) in routine analysis for a clinically relevant medical-diagnosis application.
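A compact sketch of the screening rule: correlate each spectrum with the mean calibration spectrum over the 4160-4880 cm−1 window, apply Fisher's z-transform, and reject spectra whose z falls too far below the calibration distribution. The choice of reference spectrum and the cutoff value are assumptions standing in for the paper's exact formulation.

    import numpy as np

    def screen_spectra(cal_spectra, new_spectra, wn, lo=4160, hi=4880, z_cut=3.0):
        """Flag probable Type II outliers by spectral cross-correlation.

        Each new spectrum is correlated with the mean calibration spectrum over the
        lo-hi wavenumber window; Pearson r is mapped through Fisher's z-transform and
        compared with the calibration population.  z_cut plays the role of the
        tuning parameter mentioned in the abstract."""
        band = (wn >= lo) & (wn <= hi)
        ref = cal_spectra[:, band].mean(axis=0)

        def fisher_z(spec):
            r = np.corrcoef(spec[band], ref)[0, 1]
            return np.arctanh(np.clip(r, -0.999999, 0.999999))

        z_cal = np.array([fisher_z(s) for s in cal_spectra])
        mu, sd = z_cal.mean(), z_cal.std(ddof=1)
        z_new = np.array([fisher_z(s) for s in new_spectra])
        return (z_new - mu) / sd < -z_cut          # True = screen out as an outlier

    # cal, new = arrays of shape (n_samples, n_wavenumbers); wn = wavenumber axis
    # reject = screen_spectra(cal, new, wn)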

17.
This paper can be considered an extension of the work of Tran et al on monitoring compositional data with a multivariate exponentially weighted moving average compositional-data (MEWMA-CoDa) chart, taking into account potential measurement errors, which are known to strongly affect production processes. A linearly covariate error model with a constant error variance is used to study the impact of measurement errors on the MEWMA-CoDa control chart. In particular, the influence of the device parameters (σM, b), the number of independent observations m, and the number of variables p is investigated in terms of the MEWMA optimal couples (r, H) as well as their corresponding ARLs. A comparison between the Hotelling-CoDa T2 chart and the proposed chart shows that the MEWMA-CoDa chart is more efficient at detecting shifts in the presence of measurement errors. A real-life example of muesli production, using multiple measurements for each composition, is used to estimate the parameters and to demonstrate how the MEWMA-CoDa chart can handle measurement errors when detecting shifts in the process.

18.
Modern products frequently feature monitors designed to detect actual or impending malfunctions. False alarms (Type I errors) or excessive delays in detecting real malfunctions (Type II errors) can seriously reduce monitor utility. Sound engineering practice includes physical evaluation of error rates. Type II error rates are relatively easy to evaluate empirically. However, adequate evaluation of a low Type I error rate is difficult without accelerated testing concepts: false alarms are induced with artificially low thresholds, and production thresholds are then selected by appropriate extrapolation, as outlined here. This acceleration methodology allows informed determination of detection thresholds and confidence in monitor performance, with substantial reductions, relative to current alternatives, in the time and cost required for monitor development. Copyright © 2006 John Wiley & Sons, Ltd.
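The extrapolation step can be sketched as follows: false-alarm counts collected at artificially low thresholds are fitted with a parametric tail model, and the alarm rate is extrapolated to the production threshold. The Gaussian-noise tail model and all of the counts below are assumptions for illustration, not the paper's model or data.

    import numpy as np
    from scipy import stats, optimize

    # False-alarm counts observed at artificially low thresholds (hypothetical data)
    thresholds = np.array([1.0, 1.5, 2.0, 2.5])           # monitor threshold (signal units)
    hours      = np.array([100.0, 100.0, 200.0, 400.0])   # test exposure at each threshold
    alarms     = np.array([240, 60, 18, 3])                # false alarms observed

    # Assumed model: alarm opportunities arrive at a fixed (unknown) rate and the monitored
    # noise is Gaussian, so rate(th) = rate0 * P(N(mu, sigma) > th); fit by Poisson likelihood.
    def neg_log_lik(params):
        log_rate0, mu, log_sigma = params
        lam = np.exp(log_rate0) * stats.norm.sf(thresholds, mu, np.exp(log_sigma)) * hours
        return -np.sum(stats.poisson.logpmf(alarms, lam))

    fit = optimize.minimize(neg_log_lik, x0=[np.log(5.0), 0.0, 0.0], method="Nelder-Mead")
    log_rate0, mu, log_sigma = fit.x

    production_threshold = 4.0
    rate = np.exp(log_rate0) * stats.norm.sf(production_threshold, mu, np.exp(log_sigma))
    print("extrapolated false alarms per 1000 h at threshold 4.0:", 1000 * rate)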

19.
We propose a new approach to the analysis of limb-scanning measurements of the atmosphere that are continuously recorded from an orbiting platform. The retrieval is based on the simultaneous analysis of observations taken along the whole orbit. This approach accounts for the horizontal variability of the atmosphere, thereby avoiding the errors caused by assuming horizontal homogeneity along the line of sight of the observations. A computer program implementing the proposed approach has been designed; its performance is shown with a simulated retrieval analysis based on a satellite experiment planned to fly during 2001. The program has also been used to determine the size and character of the errors associated with the assumption of horizontal homogeneity. A computational strategy that reduces the large amount of computing resources apparently demanded by the proposed inversion algorithm is also described.
