Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Semantic scene classification is an open problem in computer vision, especially when information from only a single image is employed. In applications involving image collections, however, images are clustered sequentially, allowing surrounding images to be used as temporal context. We present a general probabilistic temporal context model in which the first-order Markov property is used to integrate content-based and temporal context cues. The model uses elapsed time-dependent transition probabilities between images to enforce the fact that images captured within a shorter period of time are more likely to be related. This model is generalized in that it allows arbitrary elapsed time between images, making it suitable for classifying image collections. In addition, we derived a variant of this model to use in ordered image collections for which no timestamp information is available, such as film scans. We applied the proposed context models to two problems, achieving significant gains in accuracy in both cases. The two algorithms used to implement inference within the context model, Viterbi and belief propagation, yielded similar results with a slight edge to belief propagation.

Matthew Boutell received the BS degree in Mathematical Science from Worcester Polytechnic Institute, Massachusetts, in 1993, the MEd degree from University of Massachusetts at Amherst in 1994, and the PhD degree in Computer Science from the University of Rochester, Rochester, NY, in 2005. He served for several years as a mathematics and computer science instructor at Norton High School and Stonehill College and as a research intern/consultant at Eastman Kodak Company. Currently, he is Assistant Professor of Computer Science and Software Engineering at Rose-Hulman Institute of Technology in Terre Haute, Indiana. His research interests include image understanding, machine learning, and probabilistic modeling.
Jiebo Luo received his PhD degree in Electrical Engineering from the University of Rochester, Rochester, NY in 1995. He is a Senior Principal Scientist with the Kodak Research Laboratories. He was a member of the Organizing Committee of the 2002 IEEE International Conference on Image Processing and 2006 IEEE International Conference on Multimedia and Expo, a guest editor for the Journal of Wireless Communications and Mobile Computing Special Issue on Multimedia Over Mobile IP and the Pattern Recognition journal Special Issue on Image Understanding for Digital Photos, and a Member of the Kodak Research Scientific Council. He is on the editorial boards of the IEEE Transactions on Multimedia, Pattern Recognition, and Journal of Electronic Imaging. His research interests include image processing, pattern recognition, computer vision, medical imaging, and multimedia communication. He has authored over 100 technical papers and holds over 30 granted US patents. He is a Kodak Distinguished Inventor and a Senior Member of the IEEE.

Chris Brown (BA Oberlin 1967, PhD University of Chicago 1972) is Professor of Computer Science at the University of Rochester. He has published in many areas of computer vision and robotics. He wrote COMPUTER VISION with his colleague Dana Ballard, and influential work on the "active vision" paradigm was reported in two special issues of the International Journal of Computer Vision. He edited the first two volumes of ADVANCES IN COMPUTER VISION for Erlbaum and (with D. Terzopoulos) REAL-TIME COMPUTER VISION, from Cambridge University Press. He is the co-editor of VIDERE, the first entirely on-line refereed computer vision journal (MIT Press). His most recent PhD students have done research in infrared tracking and face recognition, features and strategies for image understanding, augmented reality, and three-dimensional reconstruction algorithms.
He supervised the undergraduate team that twice won the AAAI Host Robot competition (and came third in the Robot Rescue competition in 2003).
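The elapsed-time-dependent transition idea can be sketched as a small Viterbi decoder. This is a toy sketch, not the paper's model: the two class labels, the exponential decay with scale `tau`, and the content likelihoods below are all hypothetical.

```python
import math

def transition_prob(same_class: bool, elapsed_s: float, tau: float = 600.0) -> float:
    """P(stay in / leave a class) given elapsed time; stay -> 1.0 as the gap -> 0."""
    stay = 0.5 + 0.5 * math.exp(-elapsed_s / tau)
    return stay if same_class else 1.0 - stay

def viterbi(classes, timestamps, content_likelihoods):
    """content_likelihoods[i][c]: P(observation_i | class c) from a content model."""
    n = len(timestamps)
    score = {c: math.log(content_likelihoods[0][c]) for c in classes}
    back = []
    for i in range(1, n):
        gap = timestamps[i] - timestamps[i - 1]
        new_score, ptr = {}, {}
        for c in classes:
            best_prev = max(
                classes,
                key=lambda p: score[p] + math.log(transition_prob(p == c, gap)),
            )
            new_score[c] = (score[best_prev]
                            + math.log(transition_prob(best_prev == c, gap))
                            + math.log(content_likelihoods[i][c]))
            ptr[c] = best_prev
        score, back = new_score, back + [ptr]
    # Trace back the best label sequence.
    last = max(classes, key=lambda c: score[c])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With a 5-second gap, an ambiguous second image is smoothed toward the first image's confident label, which is exactly the effect the temporal context model exploits.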

2.
In this paper, we introduce a new approach to deadlock-free routing in wormhole-routed networks called the message flow model. This method may be used to develop deterministic, partially adaptive, and fully adaptive routing algorithms for wormhole-routed networks with arbitrary topologies. We first establish a necessary and sufficient condition for deadlock-free routing, based on an analysis of the message flow on each channel. We then use the model to develop new adaptive routing algorithms for 2D meshes.
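For context, a minimal sketch of the classical check that the message-flow model generalizes: a routing function is deadlock-free when its channel dependency graph is acyclic. This is not the paper's condition itself, and the channel names are hypothetical.

```python
def has_cycle(deps):
    """deps: dict channel -> set of channels it may wait on. DFS back-edge test."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in deps}

    def dfs(c):
        color[c] = GRAY
        for nxt in deps.get(c, ()):
            if color.get(nxt, WHITE) == GRAY:
                return True          # back edge: cyclic channel dependency
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[c] = BLACK
        return False

    return any(color[c] == WHITE and dfs(c) for c in deps)
```

A ring of channels each waiting on the next is deadlock-prone, while XY-style dependencies (x-channels feeding y-channels, never back) are acyclic.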

3.
To achieve on-demand communication in message distribution systems in distributed environments, this paper improves Gelernter's tuple space model, designs the structure of tuple-space communication for a message distribution system, defines a feature model of the tuple space, and proposes a space decomposition algorithm for tuple-space communication based on the principle of locality. Exploiting the differing match frequencies of different elements of different tuples in actual communication, the algorithm decomposes the tuple space into a set of buffered subspaces built on abstract relations among the feature space, feature tuples, and feature elements. A communicating process performing a match operation can then fetch matching tuples directly from a buffered subspace, reducing the computational cost of communication.
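A minimal sketch of the buffered-subspace idea, under simplifying assumptions of my own (a single bucketing field standing in for the feature model, and `None` as the match wildcard): tuples are pre-bucketed by a frequently matched element so a match scans one small subspace instead of the whole space.

```python
class TupleSpace:
    def __init__(self, feature_index):
        self.feature_index = feature_index   # tuple position used for bucketing
        self.buffers = {}                    # feature value -> buffered subspace

    def out(self, tup):
        """Deposit a tuple into the subspace keyed by its feature element."""
        self.buffers.setdefault(tup[self.feature_index], []).append(tup)

    def rd(self, pattern):
        """pattern: tuple with None as wildcard; the feature field must be bound."""
        key = pattern[self.feature_index]
        for tup in self.buffers.get(key, []):
            if all(p is None or p == t for p, t in zip(pattern, tup)):
                return tup
        return None
```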

4.
This paper proposes APHMM, a combined model of an artificial neural network (ANN), a hidden Markov model (HMM), and particle swarm optimization (PSO) for stock price prediction. In APHMM, the ANN transforms each day's opening, high, low, and closing prices into mutually independent quantities that serve as the HMM's inputs. PSO is then used to optimize the initial values of the HMM's parameters, which are trained with the Baum-Welch algorithm. The trained HMM finds, in the historical data, the group of days whose four indicators are most similar to today's; the difference between each such day's closing price and its next day's closing price is averaged with weights, and today's closing price plus this weighted average difference is the predicted closing price. Experimental results show that APHMM has good predictive performance.
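A minimal sketch of the final forecasting step only (not the ANN/HMM/PSO pipeline): find the k historical days most similar to today's four indicators, then add the weighted average next-day close change to today's close. Euclidean distance and inverse-distance weights are my own simplifying assumptions.

```python
def forecast_close(today, history, k=3):
    """today: (open, high, low, close); history: list of ((o, h, l, c), next_close)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(history, key=lambda rec: dist(rec[0], today))[:k]
    weights = [1.0 / (1e-9 + dist(day, today)) for day, _ in nearest]
    diffs = [nc - day[3] for day, nc in nearest]        # next-day close change
    avg_diff = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    return today[3] + avg_diff
```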

5.
Breast cancer is one of the leading causes of death among women worldwide. In many cases, misinterpretation of the medical diagnosis contributes to the increased fatality rate. Breast cancer can be diagnosed by classifying tumors, which are of two types: malignant and benign. Identifying the type of tumor is a tedious task, even for experts, so an automated diagnosis is necessary. Machine learning plays a prominent role in medical diagnosis, as it provides more accurate results in classifying and predicting diseases. In this paper, we propose a deep ensemble network (DEN) method for classifying and predicting breast cancer. This method uses a stacked convolutional neural network, an artificial neural network and a recurrent neural network as the base classifiers in the ensemble, with the random forest algorithm as the meta-learner providing the final prediction. Experimental results show that the proposed DEN technique outperforms the existing approaches in terms of accuracy, sensitivity, specificity, F-score and area under the curve (AUC). An analysis-of-variance test shows that the proposed DEN model performs statistically significantly better than the other classification models; the approach may thus aid the early detection and diagnosis of breast cancer in women and, in turn, the development of early treatment techniques to increase the survival rate.

6.
7.
A theoretical framework is laid out in which a stock exchange is represented as a process under decentralized control. Attention is devoted to a specific case in which the trading activity is described by a second-order dynamical system. Three economically significant modes of behavior are identified: the stock market can (1) adjust to a stable equilibrium, (2) approach a stable limit cycle, or (3) diverge to infinity. The transition from mode (1) to mode (2) is a supercritical Hopf bifurcation, whereas the transition from mode (2) to mode (3) is a homoclinic bifurcation.
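Modes (1) and (3) can be illustrated with a hypothetical linear second-order model p'' + 2ζp' + p = 0 integrated by explicit Euler; the sign of the damping term decides whether trading adjusts to equilibrium or diverges. (The limit-cycle mode needs a nonlinearity this toy model does not have.)

```python
def simulate(zeta, p0=1.0, v0=0.0, dt=0.01, steps=5000):
    """Return the distance of the state (p, v) from equilibrium after `steps` steps."""
    p, v = p0, v0
    for _ in range(steps):
        a = -2.0 * zeta * v - p          # acceleration from the linear model
        p, v = p + dt * v, v + dt * a    # explicit Euler update
    return (p * p + v * v) ** 0.5
```

Positive damping (ζ > 0) shrinks the deviation toward the stable equilibrium; negative damping amplifies it without bound.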

8.
A strictly hierarchical message transfer scheme requires that a message follow a specified referral path until it is either rejected or satisfied at one of the information centers of the network. Thus at each node in the network three decisions can be made: satisfy, reject, or refer the message to the succeeding node in the hierarchy. By associating probabilities and costs with each of these decisions, we develop a Markovian model for the total network cost and derive the mean and variance of the total cost. The applicability of the model is discussed with respect to the problems of estimating the necessary parameters. In particular, a queue-theoretic model is developed for estimating the response time for a message at an information center.
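A minimal sketch of the mean-cost computation, with hypothetical per-node parameters: at node i the message is satisfied with probability s[i], rejected with r[i], and referred otherwise, incurring a processing cost c[i] at the node plus a referral cost t[i] when it moves on.

```python
def expected_cost(s, r, c, t):
    """Mean total cost over a strict referral chain; the last node cannot refer."""
    n = len(s)
    total, p_reach = 0.0, 1.0            # p_reach: prob. the message reaches node i
    for i in range(n):
        total += p_reach * c[i]
        refer = 1.0 - s[i] - r[i]
        if i < n - 1:
            total += p_reach * refer * t[i]
            p_reach *= refer
    return total
```

The variance follows the same recursion over second moments; only the mean is sketched here.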

9.
10.
We describe an exact model for the two-dimensional cutting stock problem with two stages and the guillotine constraint. It is an integer linear programming (ILP) arc-flow model, formulated as a minimum flow problem, which extends a model proposed by Valério de Carvalho for the one-dimensional case. In this paper, we explore the behavior of this model when it is solved with commercial software, explicitly considering all its variables and constraints. We also derive a new family of cutting planes and a new lower bound, and consider some variants of the original problem. The model was tested on a set of real instances from the wood industry, with very good results. Furthermore, the lower bounds provided by the linear programming relaxation of the model compare favorably with those provided by models based on assignment variables.

11.
A stock index forecasting model based on a structurally pruned neural network
The stock market is a nonlinear system with a complex internal structure and volatile external factors. On the basis of index prices and trading volume, macroeconomic indicators are introduced to jointly build the model's system of predictive indicators, and the long-run equilibrium and causal relationships among the indicators are analyzed. Building on Bayesian analysis, a penalty term representing network complexity is added to the model's error function, and hidden-layer neurons insensitive to the stock market are pruned by dynamically adjusting the penalty factor, simplifying the network structure while preserving the model's generalization ability. Taking the Shanghai Composite Index as an example, a structurally pruned neural network forecasting model based on the BP algorithm is built, the market's behavior is learned under different predictive indicator systems, and the index is forecast by simulation. Finally, comparisons with other neural network forecasting models …

12.
13.
Md. Rafiul, Neurocomputing, 2009, 72(16-18): 3439
This paper presents a novel combination of the hidden Markov model (HMM) and fuzzy models for forecasting stock market data. In a previous study we used an HMM to identify similar data patterns from historical data and then used a weighted average to generate a one-day-ahead forecast. This paper uses a similar approach to identify data patterns with the HMM and then uses fuzzy logic to obtain a forecast value. The HMM's log-likelihood for each input data vector is used to partition the dataspace, and each partition is then used to generate a fuzzy rule. The fuzzy model developed from this approach is tested on stock market data drawn from different sectors. Experimental results show a clear improvement in forecasting accuracy over other models such as ARIMA, an artificial neural network (ANN) and another HMM-based forecasting model.

14.
Knowledge-based vector space model for text clustering
This paper presents a new knowledge-based vector space model (VSM) for text clustering. In the new model, semantic relationships between terms (e.g., words or concepts) are included in representing text documents as a set of vectors. The idea is to calculate the dissimilarity between two documents more effectively so that text clustering results can be enhanced. In this paper, the semantic relationship between two terms is defined by the similarity of the two terms. Such similarity is used to re-weight term frequency in the VSM. We consider and study two different similarity measures for computing the semantic relationship between two terms based on two different approaches. The first approach is based on the existing ontologies like WordNet and MeSH. We define a new similarity measure that combines the edge-counting technique, the average distance and the position weighting method to compute the similarity of two terms from an ontology hierarchy. The second approach is to make use of text corpora to construct the relationships between terms and then calculate their semantic similarities. Three clustering algorithms, bisecting k-means, feature weighting k-means and a hierarchical clustering algorithm, have been used to cluster real-world text data represented in the new knowledge-based VSM. The experimental results show that the clustering performance based on the new model was much better than that based on the traditional term-based VSM.
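A minimal sketch of similarity re-weighted term frequencies, assuming a hypothetical term-similarity function (the paper derives one from WordNet/MeSH edge counting or from corpus statistics): each term's weight absorbs a similarity-scaled share of related terms' frequencies, so documents using synonyms move closer together.

```python
import math

def reweight(tf, vocab, sim):
    """tf: term -> frequency; sim(t1, t2) in [0, 1]. Returns weights over vocab."""
    return {t: sum(sim(t, u) * f for u, f in tf.items()) for t in vocab}

def cosine_dissimilarity(a, b):
    terms = set(a) | set(b)
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in terms)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb)
```

Under plain term-based TF, a document saying "car" and one saying "automobile" are maximally dissimilar; with sim("car", "automobile") high, the re-weighted vectors become nearly identical.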

15.
Parameter estimation for LTI state-space models
Three methods are applied to estimate the unknown parameters of LTI (linear time-invariant) state-space models: the Metropolis-Hastings algorithm draws a sample of a given size from the posterior distribution and reports its mean and standard deviation; an evolutionary algorithm minimizes the negative log-likelihood to obtain the global optimum; and simulated annealing maximizes the likelihood to obtain the global optimum. Finally, numerical experiments verify and compare the effectiveness of the three estimation algorithms.

16.
PC-based system for classifying dysmorphic syndromes in children
A system is described, running on any IBM-compatible personal computer under MS-DOS, that permits an expert in pediatric dysmorphology to formulate dysmorphic features and syndromes in a knowledge base, and a user to enter, analyze, store and retrieve case data. For each case, the system calculates the Bayesian probability of the presence of each syndrome in the knowledge base and lists as the differential diagnosis those syndromes whose probability exceeds 90%. For explanatory purposes, the syndrome definitions can be displayed and compared with the case data. The system has been tested and used in the Department of Clinical Genetics at Uppsala University Hospital. The current scope of the system's knowledge was found to be of valuable assistance to the pediatric practitioner, but it must be expanded and refined for use by a clinical geneticist. This study has shown the validity of the system's design and structure, and further work is under way to extend its knowledge capability.
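A minimal sketch of the Bayes computation, with hypothetical syndromes, priors, and feature probabilities: naive-Bayes posteriors over the knowledge base, then a differential diagnosis of the syndromes whose posterior exceeds 90%.

```python
def differential_diagnosis(case_features, knowledge_base, priors, threshold=0.9):
    """knowledge_base: syndrome -> {feature: P(feature | syndrome)}."""
    scores = {}
    for syn, feat_probs in knowledge_base.items():
        p = priors[syn]
        for f in case_features:
            p *= feat_probs.get(f, 0.01)     # small default for unlisted features
        scores[syn] = p
    total = sum(scores.values())
    posteriors = {s: p / total for s, p in scores.items()}
    return [s for s, p in posteriors.items() if p > threshold], posteriors
```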

17.
This paper describes an approach to providing software fault tolerance for future deep-space robotic National Aeronautics and Space Administration missions, which will require a high degree of autonomy supported by an enhanced on-board computational capability. We focus on introspection-based adaptive fault tolerance guided by the specific requirements of applications. Introspection supports monitoring of the program execution with the goal of identifying, locating, and analyzing errors. Fault tolerance assertions for the introspection system can be provided by the user, by domain-specific knowledge, or via the results of static or dynamic program analysis. This work is part of an on-going project at the Jet Propulsion Laboratory in Pasadena, California. Copyright © 2011 John Wiley & Sons, Ltd.

18.
Recently, many fuzzy time series models have been used to solve nonlinear and complex problems. However, first-order fuzzy time series models have proven insufficient for these problems, so many researchers have proposed high-order models, focusing on three main issues: fuzzification, fuzzy logical relationships, and defuzzification. This paper presents a novel high-order fuzzy time series model that overcomes these drawbacks. First, it uses entropy-based partitioning to define the linguistic intervals more accurately in the fuzzification procedure. Second, it applies an artificial neural network to compute the complicated fuzzy logical relationships. Third, it uses the adaptive expectation model to adjust the forecast during the defuzzification procedure. To evaluate the proposed model, we used the Taiwanese stock index from 2000 to 2003 and the student enrollment records of the University of Alabama. The results show that the proposed model obtains accurate forecasts without encountering the conventional fuzzy time series issues.

19.
Starting from the Gordon model and applying multiple-criteria decision making (MCDM), this research explores the influential factors and relative weights of the dividend, the discount rate, and the dividend growth rate. The purpose is to establish an investment decision model that gives investors a reference for selecting the stocks most suitable for investment, so as to achieve the greatest returns. To account fully for the interrelations among the variables of the decision model, this paper introduces the analytic network process (ANP) and had experts examine leading electronics companies spanning the then-hottest sectors of lenses, solar, and handsets. Empirical findings indicate that the dividend was affected by the industry outlook, earnings, operating cash flow, and the dividend payout rate; the discount rate was affected by market β and the risk-free rate; and the dividend growth rate was affected by the earnings growth rate and the dividend payout growth rate. Also, according to the literature, the discount rate possessed a self-effect relationship. Among the eight evaluation criteria, market β was the most important factor influencing investment decisions, followed by the dividend growth rate and the risk-free rate. In the stock evaluations, leading companies in the solar industry outperformed those in handsets and lenses, making them investors' favorite stock group at the time this research was conducted.
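The Gordon growth valuation underlying the decision model can be stated in one line: intrinsic price = next dividend / (discount rate - dividend growth rate), valid only when the discount rate exceeds the growth rate. A minimal sketch with illustrative numbers:

```python
def gordon_price(dividend_next, discount_rate, growth_rate):
    """Gordon model: P = D1 / (r - g), defined only for r > g."""
    if discount_rate <= growth_rate:
        raise ValueError("discount rate must exceed growth rate")
    return dividend_next / (discount_rate - growth_rate)
```

For example, a $2.00 next-year dividend discounted at 8% with 3% perpetual growth values the stock at $40.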

20.
International Journal of Computer Mathematics, 2012, 89(11): 1697-1707
This study presents a new hybrid model that combines the grey forecasting model with GARCH to improve variance forecasting ability relative to the traditional GARCH. Because the true underlying volatility process is not observed, a range-based measure of ex post volatility is employed as a proxy for it when evaluating forecasting ability. Overall, the results show that the new hybrid model enhances the volatility forecasting ability of the traditional GARCH.
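A minimal sketch of the grey GM(1,1) forecaster that such hybrids pair with a GARCH component (the GARCH side is omitted here): accumulate the series, fit the whitening equation x⁰(k) + a·z¹(k) = b by least squares, and forecast by differencing the fitted accumulated series.

```python
import math

def gm11_forecast(x, steps=1):
    """Fit GM(1,1) to the positive series x and forecast `steps` values ahead."""
    n = len(x)
    agos = [sum(x[: i + 1]) for i in range(n)]                  # accumulated (AGO) series
    z = [0.5 * (agos[i] + agos[i + 1]) for i in range(n - 1)]   # background values
    # Least squares for a, b in x[k] + a*z[k-1] = b (2x2 normal equations).
    m = n - 1
    szz, sz = sum(v * v for v in z), sum(z)
    sy, szy = sum(x[1:]), sum(v * y for v, y in zip(z, x[1:]))
    det = szz * m - sz * sz
    a = -(szy * m - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):                                              # fitted AGO value at index k
        return (x[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(n - 1 + s) - x1_hat(n - 2 + s) for s in range(1, steps + 1)]
```

On a short geometric series (10% growth per period), the one-step forecast lands close to the true next value.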


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)