1.
We study a general algorithm that improves the accuracy of cluster analysis by applying the James-Stein shrinkage effect to k-means clustering. The cluster centroids are shrunk toward the overall mean of the data using a James-Stein-type adjustment, and the resulting shrinkage estimators serve as the new centroids in the next clustering iteration, repeating until convergence. We compare the shrinkage results with those of the traditional k-means method. A Monte Carlo simulation shows that the magnitude of the improvement depends on the within-cluster variance and, especially, on the effective dimension of the covariance matrix. Using the Rand index, we demonstrate that accuracy increases significantly on both simulated data and a real-data example.
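The iteration described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact estimator: the positive-part James-Stein factor and the pooled within-cluster variance estimate `sigma2` are plausible assumptions about the "James-Stein-type adjustment" the abstract mentions.

```python
import numpy as np

def js_shrink_centroids(X, labels, centroids, sigma2):
    """Shrink each centroid toward the grand mean of all data with a
    positive-part James-Stein-type factor (illustrative form)."""
    grand_mean = X.mean(axis=0)
    p = X.shape[1]                          # dimension of the data
    shrunk = centroids.copy()
    for k, c in enumerate(centroids):
        n_k = np.sum(labels == k)           # cluster size
        dist2 = np.sum((c - grand_mean) ** 2)
        if n_k == 0 or dist2 == 0:
            continue
        # sigma2 / n_k approximates the variance of the centroid estimate
        factor = max(0.0, 1.0 - (p - 2) * (sigma2 / n_k) / dist2)
        shrunk[k] = grand_mean + factor * (c - grand_mean)
    return shrunk

def js_kmeans(X, k, n_iter=50, seed=0):
    """k-means in which each update step is followed by shrinkage."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assignment step: nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: ordinary cluster means, then shrink toward the grand mean
        means = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])
        sigma2 = np.mean((X - means[labels]) ** 2)  # pooled within-cluster variance
        centroids = js_shrink_centroids(X, labels, means, sigma2)
    return labels, centroids
```

Note how the shrinkage factor grows with the dimension `p` and shrinks with the cluster size `n_k`, which matches the abstract's observation that the improvement depends on the within-cluster variance and the effective dimension.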
2.
Bo, Chew Lim. Pattern Recognition, 2005, 38(12): 2333-2350
Skew estimation for textual document images is a well-researched topic, and numerous methods have been reported in the literature. One of the major challenges is the presence of interfering non-textual objects of various types and quantities in the document images. Many existing methods require proper separation of the textual objects, which are well aligned, from the non-textual objects, which are mostly non-aligned. Some comparative evaluations of existing methods use only the text zones of the test image database. The object filtering or zoning stage is therefore crucial to the skew detection stage. However, it is difficult, if not impossible, to design general-purpose filters that can discriminate noise from textual components. This paper presents a robust, general-purpose skew estimation method that needs no filtering or zoning preprocessing. In fact, the method does apply filtering, but not on the input components at the beginning of the detection process; rather, it filters the output spectrum at the end of the detection process. The problem of finding a textual-component filter is thus transformed into finding a convolution filter on the output accumulator array. The method consists of three steps: (1) calculate the slopes of the virtual lines that pass through the centroids of all unique pairs of connected components in an image, and quantize the arctangents of the slopes into a 1-D accumulator array covering the range from -90° to +90°; (2) apply a special convolution to the resulting histogram, after which only the prominent peaks that possibly correspond to the skew angles of the image remain; (3) verify the detection result. The method's computational complexity and detection precision are uncoupled, unlike projection-profile-based or Hough-transform-based methods, whose speed drops when higher precision is demanded. Speedup measures on the baseline implementation are also presented.
The University of Washington English Document Image Database I (UWDB-I) contains a large number of scanned document images with a significant amount of non-textual objects. It is therefore a good image database for evaluating the proposed method.
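Steps (1) and (2) of the method can be sketched as below. This is a toy version under stated assumptions: the connected-component centroids are taken as given, and the simple difference-of-boxes kernel is only a stand-in for the paper's special convolution on the accumulator array.

```python
import numpy as np
from itertools import combinations

def skew_angle(centroids, bins_per_degree=1):
    """Estimate document skew from connected-component centroids:
    accumulate the arctangents of the slopes of all unique centroid
    pairs in a 1-D histogram over [-90, 90) degrees, sharpen it with
    a convolution, and read off the prominent peak."""
    angles = []
    for (x1, y1), (x2, y2) in combinations(centroids, 2):
        # slope of the virtual line through this pair of centroids
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    # fold all directions into [-90, 90)
    angles = (np.asarray(angles) + 90.0) % 180.0 - 90.0
    n_bins = 180 * bins_per_degree
    hist, edges = np.histogram(angles, bins=n_bins, range=(-90.0, 90.0))
    # peak-sharpening convolution on the accumulator (difference-of-boxes
    # kernel, standing in for the paper's special convolution)
    kernel = np.array([-1, -1, 2, 4, 2, -1, -1], dtype=float)
    response = np.convolve(hist.astype(float), kernel, mode="same")
    peak = int(response.argmax())
    return 0.5 * (edges[peak] + edges[peak + 1])   # bin center in degrees
```

Because text components on the same line (and, for long documents, components on different lines far apart horizontally) all contribute pair angles near the true skew, the histogram peaks there even without filtering out non-textual components, whose pair angles scatter across the accumulator.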