Similar documents
20 similar documents found (search time: 31 ms)
1.
Classification is an important technique in data mining. The decision trees built by most existing classification algorithms commonly suffer from over-branching, which leads to poor efficiency in the subsequent classification phase. In this paper, we present a new value-oriented classification method that aims to build accurate, properly sized decision trees while reducing over-branching as much as possible, based on the concepts of frequent-pattern-node and exceptive-child-node. The experiments show that, with relevance analysis used as pre-processing, our classification method can greatly reduce over-branching in decision trees, without loss of accuracy, more effectively and efficiently than other algorithms.

2.
Mining streaming data is a hot topic in data mining. When performing classification on data streams, traditional classification algorithms based on decision trees, such as ID3 and C4.5, have relatively poor time and space efficiency due to the characteristics of streaming data. Random decision trees offer some advantages in both time and space. This paper proposes SRMTDS (Semi-Random Multiple decision Trees for Data Streams), an incremental algorithm for mining data streams based on random decision trees. SRMTDS uses the Hoeffding bound inequality to choose the minimum number of split examples, a heuristic method to compute the information gain for obtaining the split thresholds of numerical attributes, and a naive Bayes classifier to estimate the class labels of tree leaves. Our extensive experimental study shows that SRMTDS improves on time, space, accuracy and anti-noise capability in comparison with VFDTc, a state-of-the-art decision-tree algorithm for classifying data streams.
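
A hedged, minimal sketch of the Hoeffding-bound step is given below: it computes the bound for n observations and the smallest number of examples a node must accumulate before a split decision based on observed information gain can be trusted. The function names and the example values of delta and epsilon are illustrative and not taken from SRMTDS.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """With probability at least 1 - delta, the observed mean of n samples of a
    variable with range `value_range` lies within this bound of the true mean."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def min_split_examples(value_range, delta, epsilon):
    """Smallest n for which the Hoeffding bound drops below epsilon, i.e. the
    minimum number of examples to see at a node before trusting a split choice."""
    return math.ceil((value_range ** 2) * math.log(1.0 / delta) / (2.0 * epsilon ** 2))

# Information gain over two classes ranges over [0, log2(2)] = [0, 1].
print(min_split_examples(value_range=1.0, delta=1e-6, epsilon=0.05))
```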

3.
An optimal adaptive H-infinity tracking control design via wavelet network (total citations: 1; self-citations: 1; citations by others: 0)
In this paper, an optimal adaptive H-infinity tracking control design method via wavelet network is proposed for a class of uncertain nonlinear systems with external disturbances, in order to achieve H-infinity tracking performance. First, an alternate tracking error and a performance index with respect to the tracking error and the control effort are introduced to obtain better performance, especially in reducing the cost of the control effort for small attenuation levels. Next, H-infinity tracking performance, which attenuates the influence of both the wavelet network approximation error and external disturbances on the modified tracking error, is formulated. Our results indicate that a small attenuation level does not lead to a large control signal. The proposed method ensures an optimal trade-off between the amplitude of the control signals and the tracking error performance. An example is given to illustrate the design efficiency.

4.
The decision tree (DT) is one of the classical machine learning models, valued for its simplicity and effectiveness in applications. However, compared to the DT model, probability estimation trees (PETs) give a better estimate of the class probability. To obtain a good probability estimate, we usually need large trees, which are undesirable with respect to model transparency. The linguistic decision tree (LDT) is a PET model based on label semantics. Fuzzy labels are used for building the tree, and each branch is associated with a probability distribution over classes. If there is no overlap between neighboring fuzzy labels, these fuzzy labels become discrete labels, and an LDT with discrete labels becomes a special case of the PET model. In this paper, two hybrid models combining the naive Bayes classifier and PETs are proposed in order to build a model with good performance without losing too much transparency. The first model uses naive Bayes estimation given a PET, and the second model uses a set of small-sized PETs as estimators by assuming independence between these trees. Empirical studies on discrete and fuzzy labels show that the first model outperforms the PET model at shallow depth, and the second model is equivalent to naive Bayes and the PET.
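
The sketch below illustrates the idea behind the second hybrid model: class-probability estimates from several small trees are combined naive-Bayes style under the assumption that the trees are independent given the class. The function name, the combination rule and the toy numbers are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def combine_tree_estimates(leaf_probs, priors):
    """Combine per-tree class-probability estimates under a naive-Bayes-style
    independence assumption:
        P(c | x) proportional to  P(c)^(1 - T) * prod_t P(c | leaf_t(x))
    leaf_probs: (n_trees, n_classes) leaf distributions, assumed smoothed (> 0)
    priors:     (n_classes,) class priors"""
    n_trees = leaf_probs.shape[0]
    log_post = (1 - n_trees) * np.log(priors) + np.log(leaf_probs).sum(axis=0)
    post = np.exp(log_post - log_post.max())   # normalise in log space for stability
    return post / post.sum()

priors = np.array([0.5, 0.5])
leaf_probs = np.array([[0.8, 0.2],   # tree 1
                       [0.6, 0.4],   # tree 2
                       [0.7, 0.3]])  # tree 3
print(combine_tree_estimates(leaf_probs, priors))
```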

5.
The on-chip memory performance of embedded systems directly affects system designers' decisions about how to allocate expensive silicon area. A novel memory architecture, flexible sequential and random access memory (FSRAM), is investigated for embedded systems. To realize sequential accesses, small "links" are added to each row in the RAM array to point to the next row to be prefetched. The potential cache pollution is ameliorated by a small sequential access buffer (SAB). To evaluate the architecture-level performance of the FSRAM, we ran the Mediabench benchmark programs on a modified version of the SimpleScalar simulator. Our results show that the FSRAM improves the performance of a baseline processor with a 16KB data cache by up to 55%, with an average of 9%; furthermore, the FSRAM reduces the data cache miss count by 53.1% on average due to its prefetching effect. We also designed RTL and SPICE models of the FSRAM, which show that the FSRAM significantly improves memory access time while reducing power consumption, with negligible area overhead.
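
As a purely illustrative aid, the toy Python model below mimics the linked-row prefetching idea: each row carries a link to the row to fetch next, and prefetched rows are held in a small sequential access buffer. The class name, the eviction policy and the buffer size are assumptions and do not reflect the RTL or SPICE designs.

```python
class FSRAMModel:
    """Toy behavioural model of linked-row prefetching with a small SAB."""

    def __init__(self, rows, links, sab_size=4):
        self.rows = rows            # row index -> row data
        self.links = links          # row index -> next row to prefetch (or None)
        self.sab = {}               # small prefetch buffer (insertion-ordered)
        self.sab_size = sab_size
        self.hits = self.misses = 0

    def read(self, row):
        if row in self.sab:         # prefetched earlier: avoids a cache fill
            self.hits += 1
            data = self.sab.pop(row)
        else:
            self.misses += 1
            data = self.rows[row]
        nxt = self.links.get(row)
        if nxt is not None and nxt not in self.sab:
            if len(self.sab) >= self.sab_size:
                self.sab.pop(next(iter(self.sab)))   # evict the oldest entry
            self.sab[nxt] = self.rows[nxt]           # follow the link and prefetch
        return data
```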

6.
This paper presents a parallel implementation of the hybrid BiCGStab(2) (bi-conjugate gradient stabilized) iterative method on a GPU (graphics processing unit) for the solution of large and sparse linear systems. This implementation uses the CUDA-Matlab integration, in which the method's operations are performed on a GPU using Matlab built-in functions. The goal is to show that exploiting parallelism with this technology can provide significant computational performance gains. To validate the work, we compared the proposed implementation with sequential and parallel BiCGStab(2) implementations in the C and CUDA-C languages. The results showed that the proposed implementation is more efficient and viable for carrying out simulations accurately and in a timely manner. The gains in computational efficiency were 76x and 6x compared with the implementations in C and CUDA-C, respectively.
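
For readers who want a CPU-side reference point, the sketch below solves a sparse test system with SciPy's standard BiCGStab solver; it is not the hybrid BiCGStab(2) of the paper and does not use the CUDA-Matlab path, and the system size and density are only illustrative.

```python
import numpy as np
from scipy.sparse import random as sparse_random, eye as sparse_eye
from scipy.sparse.linalg import bicgstab   # classic BiCGStab, not the hybrid BiCGStab(2)

# Sparse, diagonally dominant test system A x = b.
n = 2000
A = sparse_random(n, n, density=1e-3, format="csr", random_state=0)
A = (A + A.T + 10.0 * sparse_eye(n)).tocsr()
b = np.random.default_rng(0).standard_normal(n)

x, info = bicgstab(A, b)                   # info == 0 means the iteration converged
print(info, np.linalg.norm(A @ x - b))
```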

7.
Due to the well-known curse of dimensionality, search in a high-dimensional space is considered a "hard" problem. In this paper, a novel composite distance transformation method, called CDT, is proposed to support fast k-nearest-neighbor (k-NN) search in high-dimensional spaces. In CDT, all n data points are first grouped into clusters by a k-Means clustering algorithm. Then a composite distance key of each data point is computed. Finally, the index keys of the n data points are inserted into a partition-based B+-tree. Thus, given a query point, its k-NN search in the high-dimensional space is transformed into a search in a single-dimensional space with the aid of the CDT index. Extensive performance studies are conducted to evaluate the effectiveness and efficiency of the proposed scheme. Our results show that this method outperforms state-of-the-art high-dimensional search techniques such as the X-Tree, VA-file, iDistance and NB-Tree.
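
A simplified sketch of the composite-key idea follows: points are clustered with k-Means, every point is mapped to a one-dimensional key built from its cluster id and its distance to the cluster centroid, and the sorted key array stands in for the paper's partition-based B+-tree. The key formula, the search radius and the pruning rule are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_cdt_index(X, n_clusters=16, stretch=1e3):
    """key = cluster_id * stretch + distance_to_centroid (stretch must exceed the
    largest within-cluster distance so keys of different clusters never mix)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    keys = km.labels_ * stretch + dists
    order = np.argsort(keys)
    return km, keys[order], order

def knn_query(q, X, km, sorted_keys, order, k=5, radius=2.0, stretch=1e3):
    """Scan only points whose keys fall near the query's key in each cluster,
    then refine the candidates by exact distance."""
    cands = []
    for cid, centroid in enumerate(km.cluster_centers_):
        d = np.linalg.norm(q - centroid)
        lo = np.searchsorted(sorted_keys, cid * stretch + max(d - radius, 0.0))
        hi = np.searchsorted(sorted_keys, cid * stretch + d + radius)
        cands.extend(order[lo:hi])
    cands = np.unique(np.asarray(cands, dtype=int))
    exact = np.linalg.norm(X[cands] - q, axis=1)
    return cands[np.argsort(exact)[:k]]
```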

8.
It is known that latent semantic indexing (LSI) takes advantage of implicit higher-order (or latent) structure in the association of terms and documents. Higher-order relations in LSI capture "latent semantics". These findings inspired a novel Bayesian framework for classification named Higher-Order Naive Bayes (HONB), introduced previously, which can explicitly make use of these higher-order relations. In this paper, we present a novel semantic smoothing method named Higher-Order Smoothing (HOS) for the naive Bayes algorithm. HOS is built on a graph-based data representation similar to that of HONB, which allows semantics in higher-order paths to be exploited. We take the concept one step further in HOS and exploit the relationships between instances of different classes. As a result, we move beyond not only instance boundaries but also class boundaries to exploit the latent information in higher-order paths. This approach improves parameter estimation when dealing with insufficient labeled data. Results of our extensive experiments demonstrate the value of HOS on several benchmark datasets.

9.
In this paper, an important question is raised: can a small language model be practically accurate enough? The purpose of a language model, the problems that a language model faces, and the factors that affect the performance of a language model are then analyzed. Finally, a novel method for language model compression is proposed, which makes a large language model usable for applications in handheld devices such as mobiles, smart phones, personal digital assistants (PDAs), and handheld personal computers (HPCs). The proposed language model compression method includes three aspects. First, the language model parameters are analyzed, and a criterion based on an importance measure of n-grams is used to determine which n-grams should be kept and which removed. Second, a piecewise linear warping method is proposed to compress the uni-gram count values in the full language model. Third, a rank-based quantization method is adopted to quantize the bi-gram probability values. Experiments show that with this compression method the language model can be reduced dramatically to only about 1 MB while the performance barely decreases. This provides good evidence that a language model compressed by a well-designed compression technique is practically accurate enough, and it makes the language model usable in handheld devices.
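
The sketch below illustrates only the rank-based quantization step for the bi-gram probabilities: values are sorted, split into equally populated rank bins, and each value is replaced by its bin mean, so only a small codebook plus one code per entry needs to be stored. The number of levels and the binning rule are assumptions; the importance criterion and the piecewise linear warping of uni-gram counts are not reproduced.

```python
import numpy as np

def rank_based_quantize(probs, n_levels=256):
    """Rank-based quantization of bi-gram probabilities (assumes len(probs) >= n_levels).
    Returns one small integer code per probability and a codebook of bin means."""
    probs = np.asarray(probs, dtype=np.float64)
    order = np.argsort(probs)                    # ranking of the probabilities
    codes = np.empty(len(probs), dtype=np.uint16)
    bins = np.array_split(order, n_levels)       # equally populated rank bins
    codebook = np.empty(len(bins))
    for level, idx in enumerate(bins):
        codebook[level] = probs[idx].mean()      # representative value of the bin
        codes[idx] = level
    return codes, codebook                       # reconstruction: codebook[codes[i]]
```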

10.
11.
This paper explores a tree kernel based method for semantic role labeling (SRL) of Chinese nominal predicates via a convolution tree kernel. In particular, a new parse tree representation structure, called the dependency-driven constituent parse tree (D-CPT), is proposed to combine the advantages of both constituent and dependency parse trees. This is achieved by directly representing various kinds of dependency relations in a CPT-style structure, which employs dependency relation types instead of phrase labels in the CPT (constituent parse tree). In this way, D-CPT not only keeps the dependency relationship information of the dependency parse tree (DPT) structure but also retains the basic hierarchical structure of the CPT style. Moreover, several schemes are designed to extract various kinds of necessary information from D-CPT, such as the shortest path between the nominal predicate and the argument candidate, the support verb of the nominal predicate, and the head argument modified by the argument candidate. This largely reduces the noisy information inherent in D-CPT. Finally, a convolution tree kernel is employed to compute the similarity between two parse trees. Besides, we also implement a feature-based method based on D-CPT. Evaluation on the Chinese NomBank corpus shows that our tree kernel based method on D-CPT performs significantly better than other tree kernel-based ones and achieves comparable performance with the state-of-the-art feature-based ones. This indicates the effectiveness of the novel D-CPT structure in representing various kinds of dependency relations in a CPT-style structure and of our tree kernel based method in exploring the novel D-CPT structure. It also illustrates that kernel-based methods are competitive and complementary with feature-based methods on SRL.

12.
Model predictive control (MPC) is an optimal control method that predicts the future states of the system being controlled and estimates the optimal control inputs that drive the predicted states to the required reference. The MPC computations are performed at pre-determined sample instances over a finite time horizon. The number of sample instances and the horizon length determine the performance of the MPC and its computational cost. A long horizon with a large sample count allows the MPC to better estimate the inputs when the states change rapidly over time, which results in better performance but at the expense of high computational cost. However, this long horizon is not always necessary, especially for slowly varying states. In that case, a short horizon with a smaller sample count is preferable, as the same MPC performance can be obtained at a fraction of the computational cost. In this paper, we propose an adaptive regression-based MPC that predicts the best minimum horizon length and sample count from several features extracted from the time-varying changes of the states. The proposed technique builds a synthetic dataset using the system model and uses the dataset to train a support vector regressor that performs the prediction. The proposed technique is experimentally compared with several state-of-the-art techniques on both linear and non-linear models. The proposed technique shows a superior reduction in computational time of about 35–65% compared with the other techniques, without introducing a noticeable loss in performance.
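
A minimal sketch of the regression step is shown below: an SVR is trained on synthetic (feature, best-horizon) pairs and then predicts a horizon for a new reference segment. The feature set, the toy labelling rule and all parameter values are assumptions for illustration; in the paper the labels come from offline simulations of the system model.

```python
import numpy as np
from sklearn.svm import SVR

def reference_features(ref, dt=0.01):
    """Illustrative features describing how quickly the reference/state changes."""
    d1 = np.diff(ref) / dt
    d2 = np.diff(d1) / dt
    return [np.abs(d1).mean(), np.abs(d1).max(), np.abs(d2).mean(), ref.std()]

# Synthetic stand-in dataset: fast-changing references get long horizons,
# slowly varying ones get short horizons (toy labelling rule).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
segments, horizons = [], []
for _ in range(200):
    freq = rng.uniform(0.2, 8.0)
    segments.append(np.sin(2 * np.pi * freq * t))
    horizons.append(int(np.clip(5 * freq, 5, 40)))

X = np.array([reference_features(s) for s in segments])
model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, horizons)

def predict_horizon(ref_segment):
    h = model.predict([reference_features(ref_segment)])[0]
    return int(np.clip(np.rint(h), 5, 40))      # keep the horizon in a sensible range

print(predict_horizon(np.sin(2 * np.pi * 1.0 * t)))
```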

13.
Intra coding in H.264/AVC can significantly improve compression efficiency, at the cost of high computational complexity due to the use of rate-distortion optimization. To reduce the complexity of intra prediction, we propose a feature-based mode decision algorithm for 4×4 intra prediction. This algorithm is motivated by the fact that a good prediction mode usually has a small residue block, which is the difference between the original block and the predicted block. The sum of the absolute transformed coefficients and the deviation of the residue block are used to measure the distortion of a prediction mode and the smoothness of the residue block, respectively. According to the ranking of these two features over all possible modes, together with the most probable mode, a small number of candidate modes can be determined. Moreover, the candidates are further reduced to only one by using two early termination rules based on the correlation between the rank and the most probable mode. Simulation results demonstrate that the proposed mode decision method reduces the total encoding time of intra coding by about 60% with negligible loss of coding performance.
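
The sketch below shows how the two features could be computed for a 4×4 residue block: an SATD-style sum of absolute Hadamard-transformed coefficients and the deviation of the residue, followed by a simple rank-based candidate selection. The scaling, the way the two rankings are combined and the number of kept candidates are assumptions, not the paper's exact rules.

```python
import numpy as np

# 4x4 Hadamard matrix used for the transformed-coefficient cost.
H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]])

def satd_4x4(orig, pred):
    """Sum of absolute Hadamard-transformed coefficients of the residue block."""
    residue = orig.astype(np.int32) - pred.astype(np.int32)
    return np.abs(H4 @ residue @ H4.T).sum() / 2   # the 1/2 scaling is conventional

def residue_deviation(orig, pred):
    """Standard deviation of the residue block, used as the smoothness feature."""
    residue = orig.astype(np.int32) - pred.astype(np.int32)
    return float(residue.std())

def candidate_modes(orig, predictions, most_probable_mode, n_keep=3):
    """Rank the intra modes by the two features and keep a few candidates plus
    the most probable mode. `predictions` maps mode -> predicted 4x4 block."""
    satd_rank = sorted(predictions, key=lambda m: satd_4x4(orig, predictions[m]))
    dev_rank = sorted(predictions, key=lambda m: residue_deviation(orig, predictions[m]))
    combined = sorted(predictions, key=lambda m: satd_rank.index(m) + dev_rank.index(m))
    return set(combined[:n_keep]) | {most_probable_mode}
```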

14.
Non-blocking message total ordering protocol (total citations: 1; self-citations: 0; citations by others: 1)
Message total ordering is a critical part of active replication for maintaining consistency among the members of a fault-tolerant group. This paper proposes a non-blocking message total ordering protocol (NBTOP) for distributed systems. The non-blocking property means that the members of a fault-tolerant group keep running independently, without waiting to install the same group view when the group evolves, even when decision messages collide. NBTOP uses a token ring as its logical control structure. Members adopt a re-requesting mechanism (RR) to obtain their lost decisions. A forward acknowledgement mechanism (FA) is put forth to resolve decision collisions. The paper further proves that NBTOP satisfies the properties of total order, agreement, and termination. NBTOP is implemented and its performance is tested. Compared with Totem, the results show that NBTOP has a better total ordering delay, which demonstrates that the non-blocking property helps to improve protocol efficiency.

15.
A cognitive radio system allows higher data transmission rates due to efficient spectrum utilization. Spectrum sensing plays a substantial role in such a cognition scenario. In this paper, a novel multiple-antenna sensing algorithm is proposed for detecting the presence or absence of the primary user signal. The scheme is called CRABWISE (Cognitive RAdio sensing Based on the joint distribution of pseudo WIShart matrix Eigenvalues). It turns out that, without prior information about the PU (primary user) signal, CRABWISE performs close to the optimal sensing performance observed for energy detection equipped with perfect prior information about the PU signal. The performance of CRABWISE is investigated using the receiver operating characteristic for signals transmitted over a delay-dispersive channel. Moreover, we study how to find the optimum threshold for the proposed test numerically. The achievable performance is considered for increasing lengths of the received signal frame in terms of both the probability of detection and the probability of a wrong decision.
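
Because the actual CRABWISE statistic is derived from the joint distribution of pseudo-Wishart eigenvalues, the sketch below only illustrates the general family with a simpler maximum-to-minimum eigenvalue ratio detector applied to the sample covariance matrix; the threshold and the toy signal model are assumptions.

```python
import numpy as np

def eigenvalue_ratio_detector(Y, threshold):
    """Multi-antenna eigenvalue detector: compare the ratio of the largest to the
    smallest eigenvalue of the sample covariance with a threshold.
    Y: received samples, shape (n_antennas, n_samples)."""
    R = (Y @ Y.conj().T) / Y.shape[1]          # sample covariance matrix
    eigvals = np.linalg.eigvalsh(R)            # real, ascending
    statistic = eigvals[-1] / eigvals[0]
    return statistic > threshold, statistic    # True -> primary user declared present

# Toy check: noise only vs. a rank-one PU signal plus noise on 4 antennas.
rng = np.random.default_rng(1)
noise = rng.standard_normal((4, 1000)) + 1j * rng.standard_normal((4, 1000))
h = rng.standard_normal((4, 1))                # channel vector
s = rng.standard_normal((1, 1000))             # PU signal
print(eigenvalue_ratio_detector(noise, threshold=2.0)[1])
print(eigenvalue_ratio_detector(noise + 2 * h @ s, threshold=2.0)[1])
```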

16.
In real applications of inductive learning for classification, labeled instances are often deficient, and labeling them by an oracle is often expensive and time-consuming. Active learning on a single task aims to select only informative unlabeled instances for querying, to improve classification accuracy while decreasing the querying cost. However, an inevitable problem in active learning is that the informative measures for selecting queries are commonly based on initial hypotheses sampled from only a few labeled instances. In such circumstances, the initial hypotheses are not reliable and may deviate from the true distribution underlying the target task. Consequently, the informative measures will possibly select irrelevant instances. A promising way to compensate for this problem is to borrow useful knowledge from other sources with abundant labeled information, which is called transfer learning. However, a significant challenge in transfer learning is how to measure the similarity between the source and target tasks. One needs to be aware of different distributions or label assignments from unrelated source tasks; otherwise, they will lead to degraded performance during transfer. Also, how to design an effective strategy to avoid selecting irrelevant samples to query is still an open question. To tackle these issues, we propose a hybrid algorithm for active learning with the help of transfer learning, adopting a divergence measure to alleviate the negative transfer caused by distribution differences. To avoid querying irrelevant instances, we also present an adaptive strategy that can eliminate unnecessary instances in the input space and models in the model space. Extensive experiments on both synthetic and real data sets show that the proposed algorithm queries fewer instances with a higher accuracy and converges faster than state-of-the-art methods.
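
A hedged sketch of the overall idea is given below: source data are down-weighted by a task-level divergence before training, and the most uncertain pool instance is queried. The symmetric-KL divergence over per-feature Gaussians, the exponential weighting and the logistic-regression base learner are stand-ins chosen for illustration, not the paper's actual components.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaussian_symmetric_kl(a, b, eps=1e-6):
    """Crude per-feature symmetric KL divergence between two samples, averaged
    over features, used here as a task-similarity score."""
    mu_a, mu_b = a.mean(0), b.mean(0)
    va, vb = a.var(0) + eps, b.var(0) + eps
    kl_ab = 0.5 * (va / vb + (mu_b - mu_a) ** 2 / vb - 1.0 + np.log(vb / va))
    kl_ba = 0.5 * (vb / va + (mu_a - mu_b) ** 2 / va - 1.0 + np.log(va / vb))
    return float((kl_ab + kl_ba).mean())

def select_query(X_src, y_src, X_lab, y_lab, X_pool):
    """Train on target labels plus divergence-weighted source data, then query
    the pool instance whose prediction has the highest entropy."""
    div = gaussian_symmetric_kl(X_src, np.vstack([X_lab, X_pool]))
    w_src = np.exp(-div)                        # large divergence -> little transfer
    X = np.vstack([X_src, X_lab])
    y = np.concatenate([y_src, y_lab])
    w = np.concatenate([np.full(len(y_src), w_src), np.ones(len(y_lab))])
    clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
    p = clf.predict_proba(X_pool)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return int(np.argmax(entropy)), clf
```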

17.
Active vibration control is used instead of passive solutions to increase low-frequency performance in a variety of engineering systems, and it has improved performance noticeably. This paper proposes a proportional-difference (PD-type) iterative learning control algorithm to deal with periodic disturbance sources and investigates the active solution for a three-degree-of-freedom mass-spring-damper mount. Simulations show that this method achieves better tracking performance and that the displacement converges to zero quickly.
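
A minimal sketch of a PD-type iterative learning control update on a toy first-order plant is shown below. The plant, the gains and the update law u_{k+1}(t) = u_k(t) + kp*e_k(t+1) + kd*(e_k(t+1) - e_k(t)) are generic textbook choices and do not model the paper's three-degree-of-freedom mass-spring-damper mount.

```python
import numpy as np

def pd_type_ilc(plant, reference, n_iterations=30, kp=0.2, kd=0.1):
    """PD-type ILC for a repetitive task: the same input sequence is refined
    from trial to trial using the previous trial's tracking error."""
    u = np.zeros(len(reference))
    for _ in range(n_iterations):
        e = reference - plant(u)
        e_next = np.roll(e, -1)
        e_next[-1] = 0.0                         # one-step-ahead error
        u = u + kp * e_next + kd * (e_next - e)  # learning update
    return u, np.abs(reference - plant(u)).max()

def plant(u, a=0.9, b=0.5):
    """Toy first-order plant y[t+1] = a*y[t] + b*u[t]."""
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
u, err = pd_type_ilc(plant, ref)
print(err)                                       # remaining tracking error after learning
```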

18.
In a modern electrical drive, the rotor field oriented control (RFOC) method is used to achieve good performance and an appropriate transient response. In this method, the space vector of the rotor flux is readily obtained from the rotor resistance value. The rotor resistance is one of the important parameters, and it varies with motor speed and ambient temperature. In this paper, a new on-line estimation method is used to obtain the rotor resistance in the Walsh function domain. Walsh functions are among the most applicable piecewise constant basis functions (PCBF) for solving dynamic equations. In addition, an integral operational matrix is used to simplify the computation and speed up the algorithm. The simulation results show that the proposed method can solve the dynamic equations of an electrical machine over a time interval and robustly estimates the rotor resistance in the presence of injected noise.

19.
The moving mirror's speed evenness and the distance travelled at even speed determine the spectrogram quality and resolution of a Fourier transform spectrometer (FTS). To improve the performance of the FTS, a precise control system is designed to realize the moving mirror (MM)'s reciprocating motion at even speed. A laser reference measurement interferometer with phase shifting through polarization is introduced, which brings the position measurement resolution to half the laser wavelength. At the moment the MM changes direction, the interference signal becomes complicated, which causes measurement count errors with a common direction judgment method. In this paper, an improved direction judgment method is proposed based on an analysis of the interference signal while the MM changes direction, and the corresponding logic circuits are designed in a Field Programmable Gate Array (FPGA). The MM is driven by a moving-coil direct current (DC) linear motor, and its mathematical model is described. Based on an analysis of the system characteristics and requirements, a fuzzy-PID control strategy is proposed, and the fuzzy-PID control algorithm and its digital realization are studied. To reduce the computational load, the PID parameters for different inputs are calculated in advance by computer and stored in memory as tables, so the main work of the fuzzy-PID digital control algorithm is a simple table look-up, which keeps the computation very small and easy to realize on a Digital Signal Processing (DSP) chip. The control system is realized, and the experimental results show that the moving mirror reaches an even speed within 0.1 s, almost without overshoot, after changing direction.
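
The sketch below illustrates only the table-lookup idea: fuzzy gain adjustments are precomputed offline over quantized (error, error-change) levels so that the on-line control step reduces to a lookup plus the PID law. The rule surface, the quantization ranges and the gain values are assumptions for illustration and are not the paper's fuzzy rule base.

```python
import numpy as np

# Quantization levels for the error e and its change de (illustrative ranges).
E_LEVELS = np.linspace(-1.0, 1.0, 7)
DE_LEVELS = np.linspace(-0.5, 0.5, 7)

def fuzzy_gain_rule(e, de):
    """Toy gain-scheduling surface: large error raises Kp, large error change
    raises Kd, small error raises Ki."""
    kp = 1.0 + 0.8 * abs(e)
    ki = 0.1 + 0.1 * (1.0 - abs(e))
    kd = 0.05 + 0.2 * abs(de)
    return kp, ki, kd

# Precompute the gain table offline, as the DSP implementation stores it.
GAIN_TABLE = np.array([[fuzzy_gain_rule(e, de) for de in DE_LEVELS] for e in E_LEVELS])

def fuzzy_pid_step(e, de, integral, dt=1e-3):
    """On-line step: quantize (e, de), look the gains up, apply the PID law.
    `de` is the change of the error over one sample period."""
    i = int(np.abs(E_LEVELS - np.clip(e, -1.0, 1.0)).argmin())
    j = int(np.abs(DE_LEVELS - np.clip(de, -0.5, 0.5)).argmin())
    kp, ki, kd = GAIN_TABLE[i, j]
    integral += e * dt
    return kp * e + ki * integral + kd * de / dt, integral
```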

20.
This paper proposes a human control model for teleoperation rendezvous on the basis of human information processing (perception, judgment, inference, decision and response). A predictive display model is introduced to provide the human operator with predictive information about the relative motion. Using this information, longitudinal and lateral control models for the operator are presented based on the phase plane control method and the fuzzy control method, and human handling qualities are analyzed. The integration of these two models constitutes the human control model. Such a model can be used to simulate the control process of a human operator who teleoperates the rendezvous with the aid of the predictive display. Human-in-the-loop experiments are carried out on a semi-physical simulation system to verify this human control model. The results show that this model can emulate human operators' performance effectively and provides an excellent basis for the analysis, evaluation and design of teleoperation rendezvous systems.
