Found 20 similar documents; search took 0 ms
1.
Accurate segmentation of apple fruit under natural illumination conditions helps growers plan applications of nutrients and pesticides, and it also plays an important role in monitoring fruit growth. However, segmentation of apples throughout their growth stages has achieved only limited success so far, owing to the color changes of the fruit as it matures as well as occlusion and the non-uniform backgrounds of apple images acquired in an orchard environment. To segment apples of different colors under various illumination conditions across the whole growth period, a color-independent segmentation method was investigated. The algorithm combines image saliency and contour features to remove background and extract apples. The saliency using natural statistics (SUN) visual attention model was used for background removal and was combined with a threshold segmentation algorithm to extract a salient binary region from each apple image. The centroids of the obtained salient binary regions were then extracted as initial seed points. Image sharpening, the globalized probability of boundary-oriented watershed transform-ultrametric contour map (gPb-OWT-UCM) algorithm, and Otsu thresholding were applied to detect saliency contours. With the seed points and the extracted saliency contours, a region growing algorithm was performed to segment apples accurately, retaining as many fruit pixels and removing as many background pixels as possible. A total of 556 apple images captured under natural conditions were used to evaluate the proposed method. An average segmentation error (SE), false positive rate (FPR), false negative rate (FNR), and overlap index (OI) of 8.4%, 0.8%, 7.5%, and 90.5%, respectively, were achieved, and the proposed method outperformed the six other methods it was compared against. The method developed in this study provides an effective way to segment green, red, and partially red apples without changing any features or parameters, and it is therefore also applicable to monitoring the growth status of apples.
2.
The unique characteristics of data in telecom-industry customer relationship management systems are analyzed, and a customer churn prediction model based on customer segmentation is proposed. First, a fuzzy kernel C-means clustering algorithm is applied to customer segmentation, and the segmentation results are analyzed to identify the group characteristics of high-value customers. A churn prediction model is then built from the enterprise's historical data using SAS data mining technology. Finally, the high-value customers are fed into the model as target data to predict which of them are inclined to churn. Experimental results show that the method is effective and feasible and can provide enterprises with an accurate list of customers at risk of churning.
3.
By analyzing various approaches to customer value segmentation based on customer lifetime value, a simple and practical segmentation scheme based on AHP (the Analytic Hierarchy Process) is presented, offering a new way of thinking about customer value segmentation. The scheme introduces AHP to compute the weights of the RFM variables (recency, frequency, and monetary value) through group decision-making by domain experts, and then performs cluster analysis on the customer base using the weighted RFM variables. The concrete implementation process of the scheme is discussed, and its key methods are analyzed in detail. Finally, a case study shows that the clustering results of this scheme can effectively segment the customer base.
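The weighted-RFM clustering step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight vector `AHP_WEIGHTS` is a hypothetical placeholder for values that would come from expert pairwise comparisons, and a small Lloyd's-style K-means is written inline to keep the example self-contained.

```python
import numpy as np

# Hypothetical AHP-derived weights for Recency, Frequency, Monetary;
# in practice these come from expert pairwise-comparison matrices.
AHP_WEIGHTS = np.array([0.26, 0.33, 0.41])

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's K-means (inline so the sketch has no dependencies)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def segment_customers(rfm, weights=AHP_WEIGHTS, k=3):
    """Cluster customers on min-max-normalized, AHP-weighted RFM variables."""
    rfm = np.asarray(rfm, float)
    span = np.ptp(rfm, axis=0)
    norm = (rfm - rfm.min(axis=0)) / np.where(span == 0, 1, span)
    labels, _ = kmeans(norm * weights, k)
    return labels
```

Weighting before clustering simply stretches each RFM axis in proportion to its AHP importance, so Euclidean distances in K-means reflect the experts' priorities.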
4.
Segmentation has attracted immense attention and has been used extensively in strategic marketing. The vast majority of research in this area focuses on the use or development of different techniques. Thanks to the internet and database technologies, huge amounts of data about markets and customers have become available for exploitation, enabling researchers and practitioners to apply sophisticated data analysis techniques beyond the traditional multivariate statistical tools. These sophisticated techniques come from the data mining and machine learning literatures. Recent research shows a tendency to apply them to different business and marketing problems, particularly segmentation. Soft computing, as a family of data mining techniques, has recently begun to be exploited for segmentation and stands out as an area that may shape the future of segmentation research. This article reviews current applications of soft computing techniques to the segmentation problem against certain critical factors, including those related to segmentation effectiveness, that every segmentation study should take into account. A critical analysis of 42 empirical studies reveals that the use of soft computing for segmentation is still in its early stages and that the ability of these studies to generate knowledge may not be sufficient. Given these findings, there is still much to explore in order to obtain more managerially interpretable and acceptable results in further studies. Recommendations are also made for other potential uses of soft computing in segmentation research.
5.
To help enterprises better understand consumer behavior and preferences, support decision-making, and develop customer relationships, a multi-indicator customer segmentation model is proposed that builds on existing segmentation methods. Traditional indicators are refined from both macro and micro perspectives to construct an RFMPA multi-indicator customer system; the entropy method is used for objective weighting; factor analysis is used for dimensionality reduction; and an improved K-means algorithm completes the segmentation. An empirical study on customer consumption data from a large supermarket chain shows that, compared with the baseline experiments, the model solves the customer segmentation problem better and improves the quality of enterprise customer relationship management and decision-making.
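The entropy method mentioned above assigns objective weights to indicators: an indicator whose values vary more across customers carries more information (lower entropy) and therefore receives a larger weight. A minimal sketch, assuming a nonnegative indicator matrix with one column per indicator and positive column sums:

```python
import numpy as np

def entropy_weights(X):
    """Objective indicator weights via the entropy method.

    X: (n_samples, n_indicators) nonnegative matrix with positive column sums.
    Indicators with lower entropy (more discriminating power) get larger weights.
    """
    X = np.asarray(X, float)
    n = len(X)
    P = X / X.sum(axis=0)                      # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)         # entropy of each indicator, in [0, 1]
    d = 1 - e                                  # degree of divergence
    return d / d.sum()                         # normalized weights
```

A constant indicator has maximal entropy (e = 1) and so contributes nothing; the weights would then multiply the indicators before the clustering step.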
6.
In this paper, we propose a general methodology for face-color modeling and segmentation. One of the major difficulties in face detection and retrieval is partial face extraction due to highlights, shadows, and lighting variations. We show that a mixture-of-Gaussians model of the color space provides a robust representation that can accommodate large color variations as well as highlights and shadows. Our method enables segmentation of within-face regions, associates semantic meaning with them, and provides statistical analysis and evaluation of the dominant variability within a given archive.
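The core idea of per-region Gaussian color models can be sketched as follows. This is a simplified stand-in for the paper's mixture-of-Gaussians approach: each region (e.g., skin, shadow) is modeled by a single Gaussian fitted to sample pixels, and new pixels are assigned to the nearest component by Mahalanobis distance rather than by full EM-fitted mixture responsibilities.

```python
import numpy as np

def fit_gaussian(pixels):
    """Fit one Gaussian color model (mean, inverse covariance) to sample pixels."""
    pixels = np.asarray(pixels, float)
    mu = pixels.mean(axis=0)
    # Small ridge keeps the covariance invertible for near-degenerate samples.
    cov = np.cov(pixels.T) + 1e-6 * np.eye(pixels.shape[1])
    return mu, np.linalg.inv(cov)

def classify_pixels(pixels, models):
    """Assign each pixel to the model with the smallest Mahalanobis distance."""
    pixels = np.asarray(pixels, float)
    d = np.stack([np.einsum('nj,jk,nk->n', pixels - mu, icov, pixels - mu)
                  for mu, icov in models], axis=1)
    return d.argmin(axis=1)
```

Because the covariance is estimated per region, an elongated highlight or shadow distribution stretches its own distance metric, which is what lets such regions absorb large color variation.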
7.
In this paper we address the difficult problem of parameter-finding in image segmentation. We replace a tedious manual process that is often based on guess-work and luck by a principled approach that systematically explores the parameter space. Our core idea is the following two-stage technique: We start with a sparse sampling of the parameter space and apply a statistical model to estimate the response of the segmentation algorithm. The statistical model incorporates a model of uncertainty of the estimation which we use in conjunction with the actual estimate in (visually) guiding the user towards areas that need refinement by placing additional sample points. In the second stage the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high. In our exploration we rely on existing ground-truth images in order to evaluate the "goodness" of an image segmentation technique. We evaluate its usefulness by demonstrating this technique on two image segmentation algorithms: a three parameter model to detect microtubules in electron tomograms and an eight parameter model to identify functional regions in dynamic Positron Emission Tomography scans.
8.
As depth cameras become more popular, pixel depth information becomes easier to obtain. This information can clearly enhance many image processing applications. However, combining depth and color information is not straightforward as these two signals can have different noise characteristics, differences in resolution, and their boundaries do not generally agree. We present a technique that combines depth and color image information from real devices in synergy. In particular, we focus on combining them to improve image segmentation. We use color information to fill and clean depth and use depth to enhance color image segmentation. We demonstrate the utility of the combined segmentation for extracting layers and present a novel image retargeting algorithm for layered images.
9.
Computer-aided automatic analysis of microscopic leukocytes is a powerful diagnostic tool in biomedical fields that can reduce the effects of human error, improve diagnostic accuracy, and save manpower and time. However, segmenting entire leukocyte populations is challenging because the features extracted from leukocyte images vary, and this task remains an unsolved issue in blood cell image segmentation. This paper presents an efficient strategy for constructing a segmentation model for any leukocyte image using simulated visual attention and learning by online sampling. In the sampling stage, two types of visual attention, "bottom-up" and "top-down", together with the movement of the human eye, are simulated. We focus on a few regions of interest and sample high-gradient pixels to form training sets. In the learning stage, an SVM (support vector machine) model is trained in real time to simulate the visual neuronal system, and it then classifies pixels and extracts leukocytes from the image. Experimental results show that the proposed method performs better than marker-controlled watershed algorithms with manual intervention and thresholding-based methods.
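The "train an SVM on sampled pixels, then classify all pixels" step can be illustrated with a small linear SVM. This is a sketch only: the paper does not specify the kernel or solver, so a Pegasos-style subgradient trainer is used here for self-containment; pixel feature vectors and labels are assumed to come from the sampling stage.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent for a linear SVM.

    X: (n, d) feature vectors (e.g., sampled pixel features); y: labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    X = np.hstack([X, np.ones((len(X), 1))])   # append bias term
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            if y[i] * (X[i] @ w) < 1:          # point violates the margin
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    """Classify feature vectors with the trained hyperplane."""
    X = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(X @ w)
```

Once trained on the sampled high-gradient pixels, `predict` would be applied to every pixel's feature vector to separate leukocyte from background.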
10.
It is challenging to segment fine-grained objects owing to appearance variations and cluttered backgrounds. Most existing segmentation methods can hardly separate small parts of an instance from its background with sufficient accuracy. However, such small parts usually contain important semantic information that is crucial for fine-grained categorization. Observing that fine-grained objects share almost the same configuration of parts, we present a novel part-aware segmentation method that explicitly detects semantic parts and preserves them during segmentation. We first design a hybrid part localization method that generates accurate part proposals with moderate computation. We then iteratively update the segmentation outputs and the part proposals, obtaining better foreground segmentation results. Experiments demonstrate the superiority of the proposed method over state-of-the-art segmentation approaches for fine-grained categorization.
11.
Spatial attributes are important factors for predicting customer behavior. However, thorough studies on this subject have never been carried out. This paper presents a new idea that incorporates spatial predicates describing the spatial relationships between customer locations and surrounding objects into customer attributes. More specifically, we developed two algorithms in order to achieve spatially enabled customer segmentation. First, a novel filtration algorithm is proposed that can select more relevant predicates from the huge amounts of spatial predicates than existing filtration algorithms. Second, since spatial predicates fundamentally involve some uncertainties, a rough set-based spatial data classification algorithm is developed to handle the uncertainties and therefore provide effective spatial data classification. A series of experiments were conducted and the results indicate that our proposed methods are superior to existing methods for data classification.
13.
In cluster analysis, determining the number of clusters is an important issue because information about the most appropriate number of clusters does not exist in real-world problems. Automatic clustering is an approach that can automatically find the most suitable number of clusters and divide the instances into the corresponding clusters. This study proposes a novel automatic clustering algorithm using a hybrid of an improved artificial bee colony optimization algorithm and the K-means algorithm (iABC). The proposed iABC algorithm improves the onlooker bee exploration scheme by directing their movements toward a better location. Instead of using a random neighborhood location, the improved onlooker bee considers the data centroid to find a better initial centroid for the K-means algorithm. To increase the efficiency of the improvement, the updating process is applied only to the worst cluster centroid. The proposed iABC algorithm is verified on benchmark datasets, and the computational results indicate that it outperforms the original ABC algorithm for automatic clustering. Furthermore, the iABC algorithm is applied to the customer segmentation problem, where it yields better and more stable results than the original ABC algorithm.
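The "update only the worst cluster centroid toward the data centroid" idea can be sketched in isolation. This is a loose interpretation of the abstract, not the paper's algorithm: the step size `alpha` is a hypothetical parameter, and the full ABC machinery (employed bees, scouts, fitness-proportional selection) is omitted.

```python
import numpy as np

def refine_worst_centroid(X, centers, alpha=0.5):
    """Move the centroid of the worst cluster (highest within-cluster SSE)
    a fraction alpha of the way toward the global data centroid.

    Sketch of the iABC onlooker step described in the abstract; alpha is
    a hypothetical step-size parameter.
    """
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    sse = np.array([((X[labels == j] - centers[j]) ** 2).sum()
                    for j in range(len(centers))])
    worst = int(np.argmax(sse))
    new = centers.copy()
    new[worst] = centers[worst] + alpha * (X.mean(axis=0) - centers[worst])
    return new, worst
```

Restricting the update to the worst centroid concentrates the search effort where the current K-means solution fits the data least well, which is the efficiency argument made in the abstract.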
14.
An overview is given of Q+, an interactive tool for performance modeling that uses graphical input and visual output. Two major enhancements are a subnetwork capability for structuring models hierarchically and an integrated expression capability. New capabilities include custom icons and temporal browsing. With the Q+ icon palette, users can draw their own icons and manipulate existing ones. The browser allows browsing, editing, and updating of Q+ information, which can be textual or graphical. Automatic model building, operations management, and experimental design with Q+ are also discussed.
15.
In recent years, gene regulatory networks (GRNs) have been proposed to work as reliable and robust control mechanisms for robots. Because recurrent neural networks (RNNs) have the unique characteristic of presenting system dynamics over time, we thus adopt such kind of network structure and the principles of gene regulation to develop a biologically and computationally plausible GRN model for robot control. To simulate the regulatory effects and to make our model inferable from time-series data, we also implement an enhanced network-learning algorithm to derive network parameters efficiently. In addition, we present a procedure of programming-by-demonstration to collect behavior sequence data of the robot as expression profiles, and then employ our network-modeling framework to infer controllers. To verify the proposed approach, experiments have been conducted, and the results show that our regulatory model can be inferred for robot control successfully.
17.
Segmentation using an ensemble of classifiers (or committee machine) combines multiple classifiers' results to increase performance compared to single classifiers. In this paper, we propose new concepts for combining rules. They are based (1) on the uncertainties of the individual classifiers, (2) on combining the results of existing combining rules, (3) on combining local class probabilities with the existing segmentation probabilities of each individual segmentation, and (4) on using uncertainty-based weights for the weighted majority rule. The results show that the proposed local-statistics-aware combining rules can reduce the effect of noise in the individual segmentation results and consequently improve the performance of the final (combined) segmentation. Combining existing combining rules and using the proposed uncertainty-based weights can further improve performance.
18.
This paper presents results from computer experiments with an algorithm to perform scene disposition and motion segmentation from visual motion or optic flow. The maximum a posteriori (MAP) criterion is used to formulate what the best segmentation or interpretation of the scene should be, where the scene is assumed to be made up of some fixed number of moving planar surface patches. The Bayesian approach requires, first, specification of prior expectations for the optic flow field, which here is modeled as spatial and temporal Markov random fields; and, secondly, a way of measuring how well the segmentation predicts the measured flow field. The Markov random fields incorporate the physical constraints that objects and their images are probably spatially continuous, and that their images are likely to move quite smoothly across the image plane. To compute the flow predicted by the segmentation, a recent method for reconstructing the motion and orientation of planar surface facets is used. The search for the globally optimal segmentation is performed using simulated annealing.
19.
The main content of a web page is usually buried in large amounts of irrelevant text and HTML markup, which makes it harder for applications to retrieve that content quickly. A block-based method for extracting the main text of web pages is proposed. The method first identifies and extracts the content blocks that hold the page's main text, then uses regular expressions and simple decision rules to filter the HTML tags and irrelevant text out of those blocks. Experiments show that the method extracts the main text of web pages accurately, generalizes well, and is easy to implement.
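The regular-expression filtering step described above can be sketched as follows. This is a minimal illustration of tag stripping on an already-identified content block; the paper's block-identification rules are not reproduced here.

```python
import re

# Remove <script>/<style> elements with their contents first, then any remaining tags.
SCRIPT_STYLE = re.compile(r"<(script|style)[^>]*>.*?</\1>", re.S | re.I)
TAG = re.compile(r"<[^>]+>")

def extract_text(block_html):
    """Filter HTML tags and irrelevant embedded code out of a content block."""
    text = SCRIPT_STYLE.sub("", block_html)
    text = TAG.sub("", text)
    return re.sub(r"\s+", " ", text).strip()   # collapse leftover whitespace
```

Note that regex-based stripping is adequate for well-formed content blocks but, unlike a real HTML parser, does not handle comments, CDATA, or malformed markup.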
20.
This correspondence is concerned with a method for image segmentation on the visual principle. The inconsistency between the conventional discriminating criterion and the human vision mechanism in perceiving an object and its background is analyzed and an improved discriminating criterion with visual nonlinearity is defined. A new model and an algorithm for image segmentation calculation are proposed based on the spatially adaptive principle of human vision and the relevant hypotheses about object recognition. This is a two-stage process of image segmentation. First, initial segmentation is realized with the bottom-up segmenting algorithm, followed by the goal-driven segmenting algorithm to improve the segmentation results concerning certain regions of interest. Experimental results show that, compared with some conventional and gradient-based segmenting methods, the new method has the excellent performance of extracting small objects from the images of natural scenes with a complicated background.