The efficient search behavior of ant-miner gives it an advantage over several other rule-mining approaches. Fuzzy ant-miner, an extension of ant-miner, provides a fuzzy mining framework for the automatic extraction of fuzzy rules from labeled numerical data. However, it is easily trapped in local optima, especially in medical applications, where real-world accuracy is elusive and the interpretation and integration of medical knowledge are necessary. To relieve this difficulty, this paper proposes OMFAM, which applies simulated annealing to optimize the fuzzy set parameters of a modified fuzzy ant-miner (MFAM); MFAM itself employs attribute and training-case weighting. OMFAM was evaluated on six critical medical cases for developing efficient medical diagnosis systems. Performance was measured by both the accuracy and the interpretability of the mined rules, and OMFAM was compared with MFAM, fuzzy ant-miner (FAM), and other classification methods. The results indicate the superiority of the OMFAM algorithm over the others.
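The abstract does not specify OMFAM's annealing schedule or fuzzy-set encoding, but the escape-from-local-optima idea can be illustrated with a generic simulated annealing loop. The cooling parameters and the toy objective (tuning one hypothetical fuzzy-set center toward a data mean) are illustrative assumptions, not the paper's setup:

```python
import math
import random

def simulated_annealing(cost, initial, neighbor,
                        t0=1.0, t_min=1e-4, alpha=0.95, steps=100):
    """Generic simulated annealing: worse states are accepted with
    probability exp(-delta/T), letting the search escape local optima."""
    random.seed(0)                      # reproducible for illustration
    state = best = initial
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = neighbor(state)
            delta = cost(cand) - cost(state)
            if delta < 0 or random.random() < math.exp(-delta / t):
                state = cand
            if cost(state) < cost(best):
                best = state
        t *= alpha                      # geometric cooling schedule
    return best

# Toy stand-in for fuzzy-parameter tuning: move one membership-function
# center c toward the data mean by minimising squared error.
data_mean = 5.0
result = simulated_annealing(
    cost=lambda c: (c - data_mean) ** 2,
    initial=0.0,
    neighbor=lambda c: c + random.uniform(-0.5, 0.5),
)
print(round(result, 2))
```

At high temperature the loop explores broadly; as `t` decays it behaves greedily, which is what lets the annealed search refine parameters an ant-miner-style heuristic would leave stuck.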
Most well-known classifiers can predict a balanced data set efficiently, but they misclassify an imbalanced one. To overcome this problem, this research proposes a new impurity measure, called minority entropy, that uses information from the minority class. It applies Shannon's entropy over the local range of minority-class instances on a selected numeric attribute. This range defines a subset of instances, concentrated on the minority class, from which the decision tree is induced. On 24 imbalanced data sets from the UCI repository, a decision tree algorithm using minority entropy improves the geometric mean and F-measure over C4.5, the distinct class-based splitting measure, asymmetric entropy, a top–down decision tree, and the Hellinger distance decision tree.
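The core idea, restricting the entropy computation to the value range spanned by the minority class, can be sketched as follows. The exact definition in the paper may differ; this is a minimal reading of "Shannon's entropy over the local range of minority instances", with made-up data:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Standard Shannon entropy of a label multiset."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def minority_entropy(values, labels, minority_label):
    """Entropy computed only over the interval of the numeric attribute
    that the minority class occupies (sketch of the paper's idea)."""
    mins = [v for v, y in zip(values, labels) if y == minority_label]
    lo, hi = min(mins), max(mins)
    in_range = [y for v, y in zip(values, labels) if lo <= v <= hi]
    return shannon_entropy(in_range)

values = list(range(1, 11))
labels = ['maj'] * 8 + ['min'] * 2      # imbalanced: minority at high values
full = shannon_entropy(labels)          # global entropy is dominated by 'maj'
local = minority_entropy(values, labels, 'min')
print(round(full, 3), round(local, 3))
```

Here the global entropy is nonzero, but within the minority's own range the region is pure, so a split isolating that range looks attractive, exactly the bias toward the minority class the measure is designed to add.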
Summary: The effects of PEO concentration, addition of PEG of various molecular weights (1 000–35 000 g·mol⁻¹), inorganic salt of various types (i.e., NaCl, LiCl, KCl, MgCl2, and CaCl2), or SDS, and the solvent system (i.e., mixed solvents of distilled water and methanol, ethanol, or 2‐propanol) on the bead formation and/or morphological appearance of electrospun PEO fibers were investigated using SEM. The formation of beaded fibers upon addition of low‐molecular‐weight PEGs into the PEO solution suggested that the very short relaxation time and/or the plasticizing effect of these low‐molecular‐weight PEGs contributed to the formation of the bead‐on‐string morphology of the as‐spun fibers. On the other hand, the observed improvement in the electro‐spinnability of the PEO solution with increasing PEO concentration and upon addition of NaCl and SDS suggested that the observed increase in the viscosity and conductivity and the observed decrease in the surface tension of the solution were indispensable for total suppression of the beads. However, when the conductivity of the solution increased only marginally, beads could still be obtained.
This paper demonstrates how the p-recursive piecewise polynomial (p-RPP) generators and their derivatives are constructed. Using these functions as activation functions reduces the feedforward computation time of a multilayer feedforward network. Three modifications of the training algorithm are proposed. First, we use a modified error function that eliminates the sigmoid prime factor from the updating rule of the output units. Second, we normalize the input patterns to balance the dynamic range of the inputs. Third, we add a new penalty function to the hidden layer, yielding anti-Hebbian rules that provide information when the activation functions have a zero sigmoid prime factor. The three modifications are combined with two versions of the Rprop (resilient propagation) algorithm. The proposed procedures achieve excellent results without careful selection of the training parameters. Not only the algorithm but also the shape of the activation function has an important influence on training performance.
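Since the modifications are layered on Rprop, the per-weight update rule is worth recalling. The sketch below is the generic iRprop⁻ variant on a one-dimensional quadratic, not the paper's specific version or network; the hyperparameters are the textbook defaults:

```python
def rprop_step(grad, prev_grad, step, weight,
               eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    """One Rprop update for a single weight: the step size adapts to the
    sign of successive gradients, never to their magnitude."""
    if grad * prev_grad > 0:        # same direction: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:      # sign flip: overshot, back off
        step = max(step * eta_minus, step_min)
        grad = 0.0                  # iRprop- style: skip this update
    if grad > 0:
        weight -= step
    elif grad < 0:
        weight += step
    return weight, step, grad       # returned grad becomes next prev_grad

# Minimise f(w) = w**2 (gradient 2w) starting from w = 3.0
w, step, prev = 3.0, 0.1, 0.0
for _ in range(60):
    w, step, prev = rprop_step(2 * w, prev, step, w)
print(round(w, 3))
```

Because only the gradient's sign matters, a vanishing sigmoid prime factor shrinks the gradient's magnitude but not its sign, which is part of why Rprop pairs well with the error-function modification described above.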
Summary: In the present contribution, polyamide‐6 (PA‐6) solutions were prepared in various pure and mixed‐solvent systems and later electrospun with the polarity of the emitting electrode being either positive or negative. The PA‐6 concentration in the as‐prepared solutions was fixed at 32% w/v. Some of the solution properties, i.e., shear viscosity, surface tension, and conductivity, were measured. Irrespective of the polarity of the emitting electrode, only the electrospinning of PA‐6 solution in formic acid (85 wt.‐% aqueous solution) produced uniform electrospun fibers, while solutions of PA‐6 in m‐cresol or sulfuric acid (either 20 or 40 wt.‐% aqueous solution) did not. In the mixed‐solvent systems, formic acid (85 wt.‐% aqueous solution) was blended with m‐cresol, sulfuric acid (either 20 or 40 wt.‐% aqueous solution), acetic acid, or ethanol in the compositional range of 10–40 vol.‐% (based on the amount of the minor solvent). Generally, the average fiber diameter increased with increasing amount of the minor solvent or liquid. Interestingly, the diameters of the fibers obtained under the negative electrode polarity were larger than those obtained under the positive one.
This paper proposes a new method for extracting the invariant features of an image based on principal component analysis and a competitive learning algorithm. The proposed algorithm can be applied to binary, gray-level, or colored-texture images larger than 256 × 256 pixels. In addition to translation-, scaling-, and rotation-invariant extraction, the method can also extract features invariant to color intensity. In our experiments, the proposed method can differentiate images having the same shape but different colored textures. The experimental results demonstrate the effectiveness of this technique, measured by recognition accuracy and computation time, and are compared with those obtained by classical techniques.
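The PCA part of such invariance can be illustrated on a 2-D point set: centering removes translation, rotating onto the principal axis removes rotation, and dividing by the RMS radius removes scale. This is a generic sketch of the principle, not the paper's algorithm (which also involves competitive learning and color), and PCA alignment alone still leaves a reflection/axis-sign ambiguity in general:

```python
import math

def pca_align(points):
    """Normalise a 2-D point set for translation, rotation, and scale:
    centre it, rotate onto the principal axis, scale to unit RMS radius."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    pts = [(x - mx, y - my) for x, y in points]          # translation removed
    # entries of the 2x2 covariance matrix
    sxx = sum(x * x for x, _ in pts) / n
    syy = sum(y * y for _, y in pts) / n
    sxy = sum(x * y for x, y in pts) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)         # principal-axis angle
    c, s = math.cos(theta), math.sin(theta)
    pts = [(c * x + s * y, -s * x + c * y) for x, y in pts]  # rotation removed
    scale = math.sqrt(sum(x * x + y * y for x, y in pts) / n)
    return sorted((round(x / scale, 6), round(y / scale, 6)) for x, y in pts)

shape = [(0, 0), (4, 0), (4, 2), (0, 2)]                 # a 4 x 2 rectangle
rotated = [(-y, x) for x, y in shape]                    # same shape, rotated 90 deg
same = pca_align(shape) == pca_align(rotated)
print(same)
```

After alignment the two point sets coincide, so any feature computed from the aligned coordinates is automatically invariant to where and at what angle the shape appeared.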
Software effort estimation plays an important role in software project management. Accurate estimation helps reduce cost overruns and eventual project failure. Unfortunately, many existing estimation techniques rely on the total project effort, which is often determined from the project life cycle. As the project moves on, the course of action deviates from what was originally planned, despite close monitoring and control. This leads to re-estimating software effort so as to improve project operating costs and budgeting. Recent research explores phase-level estimation, which uses known information from prior development phases to predict the effort of the next phase with different learning techniques. This study investigates the influence of preprocessing prior-phase data on the learning techniques used to re-estimate the effort of the next phase. The proposed approach preprocesses prior-phase effort with statistical techniques to select a set of input features for learning, which in turn are used to generate the estimation models. These models then re-estimate next-phase effort through four processing steps, namely data transformation, outlier detection, feature selection, and learning. An empirical study is conducted on 440 estimation models generated from combinations of 5 data transformation, 5 outlier detection, 5 feature selection, and 5 learning techniques. The experimental results show that suitable preprocessing significantly helps build proper learning models and boosts re-estimation accuracy. However, no single learning technique outperforms the others over all phases. The proposed re-estimation approach yields more accurate estimates than the proportion-based estimation approach.
It is envisioned that the proposed re-estimation approach can help researchers and project managers re-estimate software effort so as to finish the project on time and within the allotted budget.
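The four-step shape of such a pipeline can be sketched end to end. The step names come from the abstract, but every concrete choice below is an assumption for illustration: a log transform, a z-score outlier filter, index-based feature selection, and a trivial mean predictor standing in for the paper's learning techniques, applied to made-up phase-effort data:

```python
import math
import statistics

def log_transform(rows):
    """Step 1 (data transformation): compress skewed effort values."""
    return [[math.log1p(v) for v in row] for row in rows]

def drop_outliers(rows, targets, z_max=1.5):
    """Step 2 (outlier detection): drop rows whose first feature lies
    beyond z_max standard deviations, keeping X and y aligned."""
    col = [r[0] for r in rows]
    mu, sd = statistics.mean(col), statistics.pstdev(col)
    kept = [(r, t) for r, t in zip(rows, targets)
            if sd == 0 or abs(r[0] - mu) / sd <= z_max]
    return [r for r, _ in kept], [t for _, t in kept]

def select_features(rows, keep):
    """Step 3 (feature selection): retain only the chosen indices."""
    return [[r[i] for i in keep] for r in rows]

def fit_mean_model(rows, targets):
    """Step 4 (learning): a trivial mean predictor as a placeholder."""
    mean_t = statistics.mean(targets)
    return lambda row: mean_t

# Hypothetical data: [requirements, design] hours; target = coding hours
X = [[10, 20], [12, 25], [11, 22], [500, 21]]   # last row is an outlier
y = [30, 38, 33, 900]
X = log_transform(X)
X, y = drop_outliers(X, y)
X = select_features(X, keep=[0])
model = fit_mean_model(X, y)
print(round(model(X[0]), 1))
```

Swapping any one stage (say, a different transform or a real regressor) while keeping the others fixed is exactly how the 5 × 5 × 5 × 5 combinations in the study would be enumerated.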
This study examines the uniqueness of an individual's brain wave signals (electroencephalography, EEG) for personal authentication. The brain is the most complex biological structure known, and its wave signals are very difficult to mimic or steal. EEG signals can be measured at many locations, but too many signals degrade recognition speed and accuracy. A practical technique is proposed that combines independent component analysis for signal cleaning with a supervised neural network for authenticating signals. A new process, called homogeneous identity filtering, is introduced to identify persons inside and outside the considered group. From 16 EEG signal locations, the SOBIRO algorithm selected four truly relevant locations at 1,000 data points (F4, C4, P4, O2), 1,500 data points (F8, F3, C3, P4), and 3,000 data points (Fp1, F4, P4, O2). This selection identified 20 persons with high accuracy within the test group. The most significant location for authentication is position P4, over the parietal lobe of the brain.
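The enroll-then-verify flow common to such systems can be sketched with stand-ins: mean channel power as the feature and a distance threshold in place of the paper's ICA cleaning and supervised neural network. Channel names, signals, and the tolerance are all invented for illustration:

```python
import math

def channel_power(signal):
    """Mean power of one EEG channel (a simple stand-in feature)."""
    return sum(s * s for s in signal) / len(signal)

def enroll(sessions):
    """Average the per-channel feature vector over enrollment sessions."""
    feats = [[channel_power(ch) for ch in sess] for sess in sessions]
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(len(feats[0]))]

def authenticate(template, probe, tol=0.5):
    """Accept if the probe's features lie within tol (Euclidean distance)
    of the template; a threshold rule standing in for the supervised NN."""
    probe_f = [channel_power(ch) for ch in probe]
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(template, probe_f)))
    return dist <= tol

# Toy 'channels' in the F4, C4, P4, O2 roles, as short synthetic signals
alice = [[[1, -1, 1, -1], [2, -2, 2, -2], [1, 1, -1, -1], [0, 1, 0, -1]]]
template = enroll(alice)
genuine = authenticate(template, alice[0])
imposter_probe = [[3, -3, 3, -3], [0, 0, 0, 0], [2, 2, -2, -2], [1, 1, 1, 1]]
imposter = authenticate(template, imposter_probe)
print(genuine, imposter)
```

Reducing 16 channels to the four most discriminative ones, as SOBIRO does in the study, shortens the feature vector each `authenticate` call must compare, which is the speed/accuracy trade-off the abstract highlights.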