Similar articles
Found 20 similar articles (search time: 15 ms)
1.
How might the application of analytical procedures be improved given the inherent shortcomings of traditional analytic techniques and the apparent difficulties auditors have in combining all critical cues when evaluating the results of the analytical procedures? This research attempts to improve analytical methods by applying a new technology, Artificial Neural Networks (ANNs), to perform pattern recognition of the investigation signals generated by analytical procedures. ANNs, a type of artificial intelligence technology, are able to recognize patterns in data even when the data are noisy, ambiguous, distorted or variable. Four years of audited financial data from a medium-sized distributor were used to calculate five commonly applied financial ratios. The performance of these ratios, applied independently and in combinations, was evaluated using a presumed lack of actual errors and certain seeded material errors. The ANN method evaluated the information content of the combinations of financial ratios using an entropy cost function derived from information theory. This exploratory study suggests that using an ANN to analyze patterns of related fluctuations across numerous financial ratios provides a more reliable indication of the presence of material errors than either traditional analytic procedures or pattern analysis, while also providing insight into the plausible causes of the error. Preliminary results suggest that using pattern analysis methods as a supplement to traditional analytical procedures will offer improved performance in recognizing material misstatements within the financial accounts.

2.
A maximum-likelihood interpretation for slow feature analysis   (cited by: 1; self-citations: 0; external citations: 1)
Turner R  Sahani M 《Neural computation》2007,19(4):1022-1038
The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses "slowness" as a heuristic by which to extract semantic information from multidimensional time series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
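The slowness heuristic described above can be illustrated with a minimal linear-SFA sketch (the abstract's probabilistic formulation is not reproduced here): whiten the signal so all projections have unit variance, then pick the directions along which the temporal derivative has the least variance. The function name and test signal below are illustrative, not from the paper.

```python
import numpy as np

def linear_sfa(x, n_components=1):
    """Linear slow feature analysis: return the projections of the
    whitened signal whose temporal derivative has minimal variance."""
    x = x - x.mean(axis=0)                       # center the data
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    w = eigvec / np.sqrt(eigval)                 # whitening matrix
    z = x @ w                                    # whitened signal
    # Slowest features = eigenvectors of the derivative covariance
    # with the smallest eigenvalues (eigh returns ascending order).
    dcov = np.cov(np.diff(z, axis=0), rowvar=False)
    _, deigvec = np.linalg.eigh(dcov)
    return z @ deigvec[:, :n_components]

# A slow sine mixed with a faster one: SFA should recover the slow source.
t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(11 * t)
x = np.column_stack([slow + fast, slow - fast])
s = linear_sfa(x, n_components=1).ravel()
corr = abs(np.corrcoef(s, slow)[0, 1])           # sign is arbitrary
print(corr)
```

The recovered feature matches the slow source up to sign and scale, which is all SFA promises: it identifies the slow subspace, not the original amplitudes.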

3.
4.
Describes how to estimate 3D surface models from dense sets of noisy range data taken from different points of view, i.e., multiple range maps. The proposed method uses a sensor model to develop an expression for the likelihood of a 3D surface, conditional on a set of noisy range measurements. Optimizing this likelihood with respect to the model parameters provides an unbiased and efficient estimator. The proposed numerical algorithms make this estimation computationally practical for a wide variety of circumstances. The results from this method compare favorably with state-of-the-art approaches that rely on the closest-point or perpendicular distance metric, a convenient heuristic that produces biased solutions and fails completely when surfaces are not sufficiently smooth, as in the case of complex scenes or noisy range measurements. Empirical results on both simulated and real ladar data demonstrate the effectiveness of the proposed method for several different types of problems. Furthermore, the proposed method offers a general framework that can accommodate extensions to include surface priors, more sophisticated noise models, and other sensing modalities, such as sonar or synthetic aperture radar.

5.
State estimation is addressed for a class of discrete-time systems that may switch among different modes taken from a finite set. The system and measurement equations of each mode are assumed to be linear and perfectly known, but the current mode of the system is unknown. Moreover, additive, independent, normally distributed noises are assumed to affect the dynamics and the measurements. First, relying on a well-established notion of mode observability developed "ad hoc" for switching systems, an approach to system mode estimation based on a maximum-likelihood criterion is proposed. Second, such a mode estimator is embedded in a Kalman filtering framework to estimate the continuous state. Under the unique assumption of mode observability, stability properties in terms of boundedness of the mean square estimation error are proved for the resulting filter. Simulation results showing the effectiveness of the proposed filter are reported.

6.
A time-efficient method for evaluating the maximum-likelihood classifier for LANDSAT MSS data is described and its extension to the case of unequal prior probabilities is summarized, following Shlien (1975) and Strahler (1980). The use of unequal prior probabilities is demonstrated by example, and it is shown that where classes are well separated, the effect of including prior probability estimates is negligible, but where classes are closely related, the choice of prior probability estimate can have a considerable effect.
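The role of unequal priors described above amounts to adding a log-prior term to each class's Gaussian log-likelihood before taking the maximum: for well-separated classes the likelihood gap dwarfs the prior term, while for overlapping classes the prior decides borderline samples. A minimal sketch with illustrative one-dimensional values (not the paper's LANDSAT data):

```python
import numpy as np

def discriminant(x, mean, cov, prior):
    """Gaussian maximum-likelihood discriminant with a class prior:
    log p(x|c) + log P(c).  With equal priors the prior term is a
    constant shift and the rule reduces to the classical ML one."""
    d = len(mean)
    diff = x - mean
    inv = np.linalg.inv(cov)
    logdet = np.linalg.slogdet(cov)[1]
    return (-0.5 * (diff @ inv @ diff)
            - 0.5 * logdet - 0.5 * d * np.log(2 * np.pi)
            + np.log(prior))

# Two closely related classes; the test point sits exactly between them.
m1, m2 = np.array([0.0]), np.array([1.0])
cov = np.array([[1.0]])
x = np.array([0.5])
equal = discriminant(x, m1, cov, 0.5) - discriminant(x, m2, cov, 0.5)
skewed = discriminant(x, m1, cov, 0.9) - discriminant(x, m2, cov, 0.1)
print(equal, skewed)
```

With equal priors the two discriminants tie at the midpoint; skewing the priors toward class 1 breaks the tie in its favor, which is exactly the "closely related classes" effect the abstract reports.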

7.
We deal with the parameter estimation problem for probability density models with latent variables. Traditionally, the expectation maximization (EM) algorithm has been widely used for this problem. However, it suffers from bad local maxima, and the quality of the estimator is sensitive to the initial model choice. Recently, an alternative density estimator has been proposed that is based on matching sample-averaged and model-averaged moments. This moment-matching estimator is typically used as the initial iterate for the EM algorithm for further refinement. However, there is no guarantee that the EM-refined estimator still yields moments close to the sample-averaged ones. Motivated by this issue, in this paper we propose a novel estimator that combines the merits of both approaches: we perform likelihood maximization, but use the moment discrepancy score as a regularizer that prevents the model-averaged moments from straying away from those estimated from data. On some crowd-sourcing label prediction problems, we demonstrate that the proposed approach yields more accurate density estimates than the existing estimators.

8.
A new maximum-likelihood phase estimation method for X-ray pulsar signals   (cited by: 1; self-citations: 0; external citations: 1)
X-ray pulsar navigation (XPNAV) is an attractive method for future autonomous deep-space navigation. Current techniques for estimating the phase of X-ray pulsar radiation involve maximizing generally non-convex objective functions based on the average profile from the epoch-folding method. This results in the suppression of useful information and highly complex computation. In this paper, a new maximum-likelihood (ML) phase estimation method that directly utilizes the measured times of arrival (TOAs) is presented. The X-ray pulsar radiation is treated as a cyclo-stationary process, and the TOAs of the photons in a period are redefined as a new process whose probability distribution function is the normalized standard profile of the pulsar. We demonstrate that the new process is equivalent to the generally used Poisson model. The phase estimation problem is then recast as cyclic-shift parameter estimation under ML estimation, and we also put forward a parallel ML estimation method to improve the ML solution. Numerical simulation results show that the estimator described here achieves higher precision and reduces the computational complexity compared with currently used estimators.
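The cyclic-shift view of the TOA likelihood can be sketched as follows: if photon phases are drawn from a shifted copy of the normalized profile, the ML estimate is the shift that maximizes the summed log-profile evaluated at the observed phases. This toy uses a cosine profile and a plain grid search in place of the paper's parallel estimator; all names and values are illustrative. (The profile's normalization constant does not depend on the shift, so it can be dropped from the likelihood.)

```python
import numpy as np

rng = np.random.default_rng(0)

def log_profile(phase):
    """Un-normalized log of a toy pulse profile peaked at phase 0.5."""
    return 5.0 * np.cos(2 * np.pi * (phase - 0.5))

def ml_phase(toa_phases, n_grid=1000):
    """Grid search for the cyclic shift maximizing the log-likelihood
    sum_i log f((phi_i - shift) mod 1): the ML phase estimate."""
    shifts = np.arange(n_grid) / n_grid
    ll = [log_profile((toa_phases - s) % 1.0).sum() for s in shifts]
    return shifts[int(np.argmax(ll))]

# Simulate photon phases clustered around the shifted profile peak.
true_shift = 0.3
phases = (0.5 + true_shift + 0.05 * rng.standard_normal(2000)) % 1.0
est = ml_phase(phases)
print(est)
```

Because the likelihood is evaluated directly at the measured phases, no epoch-folded average profile is formed, which is the information-preserving point the abstract makes.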

9.
10.
Modifications are offered for some algorithms previously established by the authors for the maximum-likelihood estimation of the parameters of mixed exponential and Weibull distributions. The prior work assumed that the likelihood function was "well behaved" and that good starting points were available. We now provide enhancements of the methods to permit the handling of any alternative possibility.

11.
《Parallel Computing》1986,3(3):187-192
The community of people using vector processors is growing rapidly. First, within the United States, the National Science Foundation has established several vector supercomputer centers, and a large number of scientists in academe will be using these resources. Second, IBM has added a vector capability to its high-end mainframe system, and the widespread use of these systems will dramatically increase the community of people using vector processors. Finally, a host of minicomputer manufacturers have added vector capability to their latest systems. As a result, there will likely be a revival of interest in vectorization and some exciting additions to the associated technology. The intent of this paper is to provide many of these new users of vector processors with a high-level discussion of some of the fundamental aspects of vector processing.

12.
Mackey  Steve 《ITNOW》2003,45(5):22-23

13.
In machine learning and statistics, kernel density estimators are rarely used on multivariate data due to the difficulty of finding an appropriate kernel bandwidth to overcome overfitting. However, recent advances in information-theoretic learning have revived interest in these models. With this motivation, in this paper we revisit the classical statistical problem of data-driven bandwidth selection by cross-validation maximum likelihood for Gaussian kernels. We find a solution to the optimization problem under both the spherical case and the general case where a full covariance matrix is considered for the kernel. The fixed-point algorithms proposed in this paper obtain the maximum-likelihood bandwidth in a few iterations, without performing an exhaustive bandwidth search, which is infeasible in the multivariate case. The convergence of the proposed methods is proved. A set of classification experiments is performed to demonstrate the usefulness of the obtained models in pattern recognition.
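The cross-validated likelihood criterion above can be illustrated for a single spherical bandwidth: maximize the leave-one-out log-likelihood of a Gaussian KDE. A plain grid search stands in here for the paper's fixed-point iteration (which is what makes the full-covariance case tractable); the data and search range are illustrative.

```python
import numpy as np

def loo_log_likelihood(x, h):
    """Leave-one-out log-likelihood of a spherical Gaussian KDE with
    bandwidth h on data x of shape (n, d)."""
    n, d = x.shape
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    k = np.exp(-0.5 * sq / h**2) / ((2 * np.pi) ** (d / 2) * h**d)
    np.fill_diagonal(k, 0.0)               # exclude each point's own kernel
    dens = k.sum(axis=1) / (n - 1)
    return np.log(dens).sum()

rng = np.random.default_rng(1)
x = rng.standard_normal((200, 2))
hs = np.linspace(0.05, 2.0, 40)
best = hs[int(np.argmax([loo_log_likelihood(x, h) for h in hs]))]
print(best)
```

Zeroing the diagonal is what makes this leave-one-out: without it, the likelihood grows without bound as h shrinks, which is precisely the overfitting the abstract mentions.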

14.
Zhong  Weilin  Jiang  Linfeng  Zhang  Tao  Ji  Jinsheng  Xiong  Huilin 《Multimedia Tools and Applications》2020,79(31-32):22525-22549
Multimedia Tools and Applications - Person re-identification (re-id) is the task of recognizing images of the same pedestrian captured by different cameras with non-overlapping views. Person re-id...

15.
《Computers & Education》1986,10(1):145-148
How do you integrate Information Technology and education? This is the question addressed by the article. Trying to keep IT in its place as the educator's apprentice vexes many, but well-designed curricular activity can achieve a great deal and keep the educator in control of the technological tool. The paper first describes the activity and then examines the pedagogical, technological, and curriculum issues. The societal context for wishing to develop information and IT skills is also exposed through the activity described. Finally, the perspectives of the teacher, the trainer and the curriculum designer are examined. The main strength of the activity, which is itself an example of others of a similar nature, is that it represents a philosophy of education and a strategic approach to its achievement. The philosophy is that curriculum development is best achieved through staff development; the strategy is to produce material for teacher education which can subsequently be applied by those teachers to their students. In both cases, the vehicle consists of good strategy backed by good, content-free software.

16.
17.
A visual attention model for robot object tracking   (cited by: 1; self-citations: 0; external citations: 1)
Inspired by human behaviors, a robot object tracking model is proposed on the basis of a visual attention mechanism, consistent with the theory of topological perception. The model integrates image-driven, bottom-up attention and object-driven, top-down attention, whereas previous attention models have mostly focused on either bottom-up or top-down attention alone. Through the bottom-up component, the whole scene is segmented into the ground region and the salient regions. Guided by the top-down strategy, which is achieved by a topological graph, the object regions are separated from the salient regions; the salient regions other than the object regions are the barrier regions. To evaluate the model, a mobile robot platform was developed, on which several experiments were implemented. The experimental results indicate that processing an image with a resolution of 752×480 pixels takes less than 200 ms and that the object regions remain intact. A comparison of the proposed model with the existing model demonstrates that the proposed model has advantages in robot object tracking in terms of speed and efficiency.

18.
The satellite image deconvolution problem is ill-posed and must be regularized. Herein, we use an edge-preserving regularization model based on a φ function, involving two hyperparameters. Our goal is to estimate the optimal parameters in order to automatically reconstruct images. We propose to use the maximum-likelihood estimator (MLE), applied to the observed image. This requires sampling from the prior and posterior distributions. Since the convolution prevents the use of standard samplers, we have developed a modified Geman-Yang algorithm, using an auxiliary variable and a cosine transform. We present a Markov chain Monte Carlo maximum-likelihood (MCMCML) technique which is able to simultaneously achieve the estimation and the reconstruction.

19.
A method for suppressing chaos is proposed. By adding a positive-feedback control term, based on a system variable, to a chaotic dynamical system, the system is successfully and rapidly guided from chaotic motion to low-period motion. The Melnikov method is used to analyze the mechanism by which this scheme suppresses chaos in the Duffing oscillator, leading to the conclusion that the positive-feedback term can eliminate the transversal intersection of the stable and unstable manifolds of the chaotic system. Simulations show that the method is simple and widely applicable.

20.
Few-shot learning is a challenging problem in computer vision that aims to learn a new visual concept from very limited data. A core issue is that a large amount of uncertainty is introduced by the small training set. For example, the few images may include cluttered backgrounds or different scales of objects. Existing approaches mostly address this problem from either the original image space or the embedding space by using meta-learning. To the best of our knowledge, none of them tackle this problem from both spaces jointly. To this end, we propose a fusion spatial attention approach that performs spatial attention in both image and embedding spaces. In the image space, we employ a Saliency Object Detection (SOD) module to extract the saliency map of an image and provide it to the network as an additional channel. In the embedding space, we propose an Adaptive Pooling (Ada-P) module tailored to few-shot learning that introduces a meta-learner to adaptively fuse local features of the feature maps for each individual embedding. The fusion process assigns different pooling weights to the features at different spatial locations. Weighted pooling can then be conducted over an embedding to fuse local information, which avoids losing useful information by considering the spatial importance of the features. The SOD and Ada-P modules can be used as plug-and-play components and incorporated into various existing few-shot learning approaches. We empirically demonstrate that designing spatial attention methods for few-shot learning is a nontrivial task and that our method is effective for it. We evaluate our method using both shallow and deeper networks on three widely used few-shot learning benchmarks, miniImageNet, tieredImageNet and CUB, and demonstrate very competitive performance.

