Similar Documents
20 similar documents found (search time: 31 ms)
1.
Mixture of Experts is one of the most popular ensemble methods in pattern recognition systems. Although diversity among the experts is a necessary condition for the success of combining methods, ensemble systems based on Mixture of Experts suffer from a lack of diversity among the experts, caused by unfavorable initial parameters. In the conventional Mixture of Experts, each expert receives the whole feature space. To increase diversity among the experts, resolve structural issues of Mixture of Experts such as the zero-coefficient problem, and improve the efficiency of the system, we propose a model, entitled Mixture of Feature Specified Experts, in which each expert receives a different subset of the original feature set. To this end, we first select a set of feature subsets that lead to a set of diverse and efficient classifiers. The initial parameters are then injected into the system by training classifiers on the selected feature subsets. Finally, we train the expert and gating networks using the learning rule of the classical Mixture of Experts, to organize collaboration between the members of the system and to aid the gating network in finding the best partitioning of the problem space. To evaluate the proposed method, we use six datasets from the UCI repository. In addition, the generalization capability of the proposed method is assessed on a real-world EEG-based Brain-Computer Interface database. The performance of the method is evaluated with various appraisal criteria, and a significant improvement in recognition rate is observed in all practical tests.
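As a rough illustration of the combination rule described in this abstract, a gating network weighting experts that each see only their own feature subset, the following NumPy sketch shows the forward pass only; the feature subsets, network sizes, and random weights are hypothetical, and the paper's training procedure is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical setup: 3 experts, each seeing a different feature subset.
n_features, n_classes = 6, 2
subsets = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
experts = [rng.normal(size=(len(s), n_classes)) for s in subsets]  # expert weights
gate_W = rng.normal(size=(n_features, len(subsets)))               # gating weights

def predict_proba(x):
    """Combine expert posteriors weighted by the gating network."""
    g = softmax(x @ gate_W)                       # gating coefficients, sum to 1
    probs = np.stack([softmax(x[s] @ W) for s, W in zip(subsets, experts)])
    return g @ probs                              # mixture of expert posteriors

x = rng.normal(size=n_features)
p = predict_proba(x)
assert np.isclose(p.sum(), 1.0)
```

Because the gating coefficients and each expert's posterior both sum to one, the mixture is itself a valid class distribution.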

2.
Negative Correlation Learning (NCL) is a popular combining method that employs a special error function for the simultaneous training of base neural network (NN) experts. In this article, we propose an improved version of the NCL method in which the capability of a gating network, the combining part of the Mixture of Experts method, is used to combine the base NNs in the NCL ensemble. The special error function of the NCL method encourages each NN expert to learn different parts or aspects of the training data, so the local competence of the experts should be considered in the combining approach. The gating network provides the functionality needed to combine the NCL experts; the proposed method is therefore called Gated NCL. The improved ensemble method is compared with previous approaches used for combining NCL experts, including winner-take-all (WTA) and average (AVG) combining techniques, on several classification problems from the UCI machine learning repository. The experimental results show that our proposed ensemble method significantly improves performance over the previous combining approaches.
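The NCL error function mentioned above is commonly written as e_i = ½(f_i − y)² + λ(f_i − f̄)Σ_{j≠i}(f_j − f̄); a minimal sketch of computing it for one sample (λ and the predictions are made-up values, not from the article) might look like:

```python
import numpy as np

def ncl_loss(preds, target, lam=0.5):
    """Negative Correlation Learning loss for one sample.

    preds: array of each expert's output f_i; target: scalar y.
    Each expert's squared error gets a penalty correlating its deviation
    from the ensemble mean with the other experts' deviations.
    """
    fbar = preds.mean()
    losses = []
    for i, fi in enumerate(preds):
        others = np.delete(preds, i)
        penalty = (fi - fbar) * np.sum(others - fbar)
        losses.append(0.5 * (fi - target) ** 2 + lam * penalty)
    return np.array(losses)

preds = np.array([0.9, 1.1, 1.4])
print(ncl_loss(preds, 1.0))
```

Since Σ_{j≠i}(f_j − f̄) = −(f_i − f̄), the penalty rewards each expert for moving away from the ensemble mean, which is what produces the diversity the abstract describes.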

3.

High-Efficiency Video Coding (HEVC) is the emerging video coding standard of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). The HEVC standard provides a significant improvement in compression efficiency over existing standards such as H.264/AVC, at the cost of greater complexity. In this paper we examine several HEVC optimizations based on image analysis that reduce the heavy CPU, resource, and memory demands of its encoding process. The proposed algorithms optimize the HEVC quad-tree partitioning procedure, intra/inter prediction, and mode decision by means of H.264-based methods and a spatial and temporal homogeneity analysis applied directly to the original video. The validation of these approaches took the human visual system (HVS) into account. The adopted solution makes it possible to perform real-time HEVC encoding of HD sequences on a low-cost processor with negligible quality loss. Moreover, the frame pre-processing leverages the logic units and embedded hardware available on an Intel GPU, so the execution time of these stages is negligible for the encoding processor.


4.
Mixture of experts (ME) models comprise a family of modular neural network architectures aiming at distilling complex problems into simple subtasks. This is done by deploying a separate gating module that softly divides the input space into overlapping regions, each assigned to one or more expert networks. Conversely, support vector machines (SVMs) are kernel-based, neural-network-like models that constitute an approximate implementation of the structural risk minimization principle. Such learning machines follow the simple but powerful idea of nonlinearly mapping input data into high-dimensional feature spaces in which a linear decision surface discriminating different regions is designed. In this work, we formally characterize and empirically evaluate a novel approach, named Mixture of Support Vector Machine Experts (MSVME), whose main purpose is to combine the complementary properties of SVM and ME models. In the formal characterization, an algorithm based on a maximum likelihood criterion is considered for MSVME training, and we demonstrate that each expert can be trained from an SVM perspective. Regarding the empirical evaluation, simulation results involving nonlinear dynamic system identification problems are reported, contrasting the performance of the MSVME approach with that of conventional SVM and ME models.

5.
We study on-line decision problems where the set of actions available to the decision algorithm varies over time. With a few notable exceptions, such problems have remained largely unaddressed in the literature, despite their applicability to a large number of practical problems. Departing from previous work on this “Sleeping Experts” problem, we compare algorithms against the payoff obtained by the best ordering of the actions, which is a natural benchmark for this type of problem. We study both the full-information (best expert) and partial-information (multi-armed bandit) settings and consider both stochastic and adversarial reward models. For all settings we give algorithms achieving (almost) information-theoretically optimal regret bounds (up to a constant or a sub-logarithmic factor) with respect to the best-ordering benchmark.
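One way to picture the sleeping-experts setting is a multiplicative-weights learner that only considers the currently available actions. The sketch below is an assumed simplification for illustration, not the paper's algorithm or its best-ordering benchmark:

```python
import numpy as np

def sleeping_hedge(reward_fn, awake_fn, n_experts, T, eta=0.1):
    """Follow-the-awake-leader style sketch: keep multiplicative weights
    and, each round, play the highest-weight expert among those available.
    (Assumed simplification, not the paper's exact procedure.)"""
    w = np.ones(n_experts)
    total = 0.0
    for t in range(T):
        awake = awake_fn(t)                    # boolean mask of available actions
        choice = max(np.flatnonzero(awake), key=lambda i: w[i])
        r = reward_fn(t)                       # rewards in [0, 1] for all experts
        total += r[choice]
        w[awake] *= np.exp(eta * r[awake])     # update only awake experts
    return total, w

# Toy run: expert 2 has the best reward but sleeps on odd rounds.
rewards = lambda t: np.array([0.2, 0.5, 0.9])
awake = lambda t: np.array([True, True, t % 2 == 0])
total, w = sleeping_hedge(rewards, awake, 3, 100)
```

Only awake experts are updated or played, which is the defining feature of the setting; comparing against the best fixed ordering (rather than the best fixed expert) is the paper's contribution and is not captured here.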

6.
7.
Accurate high-resolution leaf area index (LAI) reference maps are necessary for the validation of coarser-resolution satellite-derived LAI products. In this article, we propose an efficient method based on the Bayesian Maximum Entropy (BME) paradigm to combine field observations and Landsat Enhanced Thematic Mapper Plus (ETM+)-derived LAI surfaces in order to produce more accurate LAI reference maps. This method takes into account the uncertainties associated with field observations and with the regression relationship between ETM+-derived LAI and field measurements to perform a non-linear prediction of LAI, the variable of interest. To demonstrate the difference between soft data and hard data, we estimate the LAI reference maps by three BME interpolation methods: BME1, BME2, and BME3. BME1 and BME2 perform maximum estimation and mean estimation, respectively, taking the ETM+-derived LAI as interval soft data and the field LAI measurements as hard data. BME3 is used when the ETM+-derived LAI surfaces are processed as uniform-probability soft data and the field measurements as Gaussian-probability soft data. Three study sites are selected from the BigFoot project (NASA's Earth Observing System validation programme) (http://www.fsl.orst.edu/larse/bigfoot/index.html). Regarding the mean and standard deviation of the LAI surfaces, the standard deviation predicted by the BME methods is lower than that derived from ETM+. The mean value of the BME-predicted LAI, which takes into account the uncertainties of the field measurements, is lower than that of the ETM+-derived LAI at each study site. A comparison with field measurements shows that BME1, BME2, and BME3 have root mean square errors (RMSE) of 0.455, 0.485, and 0.517 and average biases of −0.017, −0.010, and −0.304, respectively. The RMSEs and biases of the predicted LAI surfaces are smaller than those of the ETM+-derived LAI, which has an average RMSE of 0.642 and an average bias of −0.080.
When the field measurements are processed as soft data, the LAI predicted by BME3 has more bias than the predictions of BME1 and BME2, but its RMSE is lower than that of the ETM+-derived LAI by 0.125. In summary, BME is capable of incorporating the spatial autocorrelation and the uncertainties in the field LAI measurements into the LAI surface estimation, producing a more accurate LAI surface with lower RMSE in validation. The maximum estimation has relatively better accuracy than the mean estimation. The results indicate that BME is a promising method for fusing point-scale and area-scale data.

8.

The main purpose of this investigation is to develop an interpolating meshless numerical procedure for solving stochastic parabolic interface problems. The numerical algorithm is constructed from the interpolating moving least squares (ISMLS) approximation. First, the space variable is discretized using the ISMLS approximation, reducing the PDE to a system of nonlinear ODEs. Next, to achieve a high-order numerical formula, we employ a fourth-order time-discretization scheme known as the explicit fourth-order exponential time differencing Runge-Kutta method (ETDRK4). This method is simple and has acceptable accuracy for the considered problems. Several examples of sufficient complexity are examined to verify the new numerical procedure.


9.
This paper presents a face detection method that makes use of a modified mixture of experts. To improve face detection accuracy, a novel structure is introduced that uses multilayer perceptrons (MLPs) as the expert and gating networks and employs a new learning algorithm adapted to the MLPs. We call this model Mixture of MLP Experts (MMLPE). Experiments using images from the CMU-130 test set demonstrate the robustness of our method in detecting faces with wide variations in pose, facial expression, illumination, and complex backgrounds. The MMLPE achieves a promising detection rate of 98.8% with ten false positives.

10.

Structural engineering is focused on the safe and efficient design of infrastructure. Projects range in size and complexity, many requiring massive amounts of materials and carrying expensive construction and operational costs. Therefore, one of the primary objectives for structural engineers is a cost-effective design. Incorporating optimality criteria into the design procedure introduces additional complexities that result in problems that are nonlinear, nonconvex, and have a discontinuous solution space. Population-based optimization algorithms (known as metaheuristics) have been found to be very efficient approaches to these problems. Many researchers have developed and applied state-of-the-art metaheuristics to automate and optimize the design of real-world civil engineering problems. While there is a large body of published papers in this area, there are few comprehensive reviews that list, summarize, and categorize metaheuristic optimization in structural engineering. This paper provides an extensive survey of a wide range of metaheuristic techniques applied to structural engineering optimization problems. Information is also provided on available structural engineering benchmark problems, the formulation of different objective functions, and the handling of various types of constraints. The performance of different optimization techniques is compared on many benchmark problems.
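As a concrete taste of the metaheuristics surveyed, here is a toy simulated-annealing sizing problem with a penalized stress constraint; the bars, loads, stress limit, and penalty factor are all invented for illustration and come from no particular benchmark:

```python
import math
import random

random.seed(0)

# Toy sizing problem: choose discrete cross-sectional areas for 3 bars to
# minimize weight, subject to a stress limit F / A <= 150 on each bar.
areas = [1.0, 1.5, 2.0, 2.5, 3.0]          # available sections (cm^2)
loads = [200.0, 300.0, 450.0]              # axial force per bar
lengths = [100.0, 120.0, 90.0]             # bar lengths (cm)

def weight(x):
    """Penalized objective: material volume plus a stiff constraint penalty."""
    w = sum(a * L for a, L in zip(x, lengths))
    penalty = sum(max(0.0, F / a - 150.0) for a, F in zip(x, loads))
    return w + 1000.0 * penalty

def anneal(T0=50.0, cooling=0.95, iters=2000):
    x = [random.choice(areas) for _ in loads]
    best, T = list(x), T0
    for _ in range(iters):
        y = list(x)
        y[random.randrange(len(y))] = random.choice(areas)  # perturb one bar
        d = weight(y) - weight(x)
        if d < 0 or random.random() < math.exp(-d / T):     # Metropolis rule
            x = y
        if weight(x) < weight(best):
            best = list(x)
        T *= cooling
    return best

best = anneal()
```

The penalty formulation is the simplest of the constraint-handling schemes the survey discusses; the feasible optimum here is the lightest section set whose stresses all stay under the limit.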


11.
Hu, Jia; Guo, Tiande; Zhao, Tong. Applied Intelligence (2022) 52(12): 14233-14245

Inspired by the fact that certain randomization schemes incorporated into stochastic (proximal) gradient methods allow for a large reduction in computational time, we incorporate such a scheme into the stochastic alternating direction method of multipliers (ADMM), yielding a faster stochastic alternating direction method (FSADM) for solving a class of large-scale convex composite problems. In the numerical experiments, we observe a reduction in computational time compared to previous methods. More importantly, we unify the stochastic ADMM for solving general convex and strongly convex composite problems (i.e., the iterative scheme does not change when the problem goes from strongly convex to general convex). In addition, we establish the convergence rates of FSADM for these two cases.


12.

In this paper, we present several important details of the process of legacy code parallelization, mostly related to the problem of maintaining the numerical output of a legacy code while obtaining a balanced workload for parallel processing. Since we retain the non-uniform mesh imposed by the original finite element code, we had to develop a specially designed data distribution among processors so that the data restrictions of the finite element method are met. In particular, we introduce a data distribution method, initially used in shared-memory parallel processing, that obtains better performance than the previous parallel program version; the method can also be extended to other parallel platforms such as distributed-memory parallel computers. We present results on several problems related to performance profiling on different (development and production) parallel platforms. New and old parallel computing architectures lead to different behavior of the same code, which in all cases performs better on multiprocessor hardware.


13.
14.
One of the key problems in successfully using B-splines to approximate an object contour is determining good knots. In this paper, the knots of a parametric B-spline curve were treated as variables, and the initial location of every knot was generated using the Monte Carlo method in its solution domain. The best km knot vectors among the initial candidates were selected according to fitness. Based on the initial parameters estimated by an improved k-means algorithm, the Gaussian Mixture Model (GMM) for every knot was built from the best km knot vectors. Then, the new generation of the population was sampled from the Gaussian mixture probabilistic models. An iterative procedure repeating these steps was carried out until a termination criterion was met. The GMM-based continuous optimization algorithm can determine the appropriate location of knots automatically. A set of experiments was implemented to evaluate the performance of the new algorithm. The results show that the proposed method achieves better approximation accuracy than methods based on artificial immune systems, genetic algorithms, or squared distance minimization (SDM).
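The generate-evaluate-refit loop described above can be sketched as a simplified estimation-of-distribution algorithm. This stand-in deliberately differs from the paper: it uses one Gaussian per knot instead of a full GMM and scores a piecewise-linear fit instead of a B-spline, so it only illustrates the loop structure:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(knots, data_x, data_y):
    """Negative squared error of a piecewise-linear fit using these knots."""
    xs = np.concatenate(([0.0], np.sort(np.clip(knots, 0.0, 1.0)), [1.0]))
    ys = np.interp(xs, data_x, data_y)          # curve heights at the knots
    return -np.sum((np.interp(data_x, xs, ys) - data_y) ** 2)

data_x = np.linspace(0.0, 1.0, 200)
# Target curve with corners at 0.3 and 0.7, so the best knots are known.
data_y = np.interp(data_x, [0.0, 0.3, 0.7, 1.0], [0.0, 1.0, 1.0, 0.0])

pop = rng.uniform(0.0, 1.0, size=(60, 2))       # Monte Carlo initial knots
for _ in range(30):
    scores = np.array([fitness(p, data_x, data_y) for p in pop])
    elite = np.sort(pop[np.argsort(scores)[-15:]], axis=1)  # best candidates
    mu, sd = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    pop = rng.normal(mu, sd, size=pop.shape)    # sample the next generation

best = pop[np.argmax([fitness(p, data_x, data_y) for p in pop])]
```

Sorting the elite knot vectors before fitting the per-knot Gaussians avoids the label-switching problem when averaging candidate vectors.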

15.
Classic aggregation operators in group decision making, such as the ordered weighted averaging (OWA), induced ordered weighted averaging (IOWA), C‐IOWA, P‐IOWA, and I‐IOWA operators, have proven to be successful tools for providing flexibility in the aggregation of preferences. However, these operators do not take advantage of information about the interaction between experts. Experts involved in a group decision‐making problem may have developed opinions about the reliability of other experts' judgments, either because they have a history of interaction with each other or because they have knowledge of how reliably colleagues in the group have solved decision‐making problems in the past. In this paper, within the framework of social network decision making, we present three new social network analysis based IOWA operators that take advantage of the linguistic trustworthiness information gathered from the experts' social network to aggregate the social group preferences. Their use is analysed with simple but illustrative examples.
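An induced OWA aggregation with trust as the inducing variable can be sketched as follows; the preference values, trust scores, and OWA weights are illustrative, and the mapping from linguistic trust to numbers is assumed rather than taken from the paper:

```python
def trust_iowa(preferences, trust, weights):
    """Induced OWA: order the experts' preference values by a trust-based
    inducing variable (most trusted first), then apply the OWA weights.
    Names and setup are illustrative, not the paper's exact operators."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = [p for _, p in sorted(zip(trust, preferences),
                                    key=lambda t: -t[0])]
    return sum(w * p for w, p in zip(weights, ordered))

# Four experts rate an alternative in [0, 1]; trust induces the ordering.
prefs   = [0.6, 0.9, 0.4, 0.7]
trust   = [0.9, 0.3, 0.8, 0.5]   # linguistic trust already mapped to numbers
weights = [0.4, 0.3, 0.2, 0.1]   # OWA weights favouring the most trusted
print(trust_iowa(prefs, trust, weights))
```

Note that, unlike a plain weighted average, the weights attach to trust ranks rather than to fixed experts, which is the defining property of induced OWA operators.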

16.

The existence of probability misconceptions at various educational levels has been well documented. Furthermore, these misconceptions have been shown to be widespread and highly resistant to change. The author's previous research has shown considerable success in overcoming misconceptions in the short term by basing the knowledge reconstruction process on problems that draw out beliefs held by students that are in agreement with accepted theory and that are therefore expected to receive correct responses. Such problems are referred to as anchoring situations or anchors.

In this study, anchoring probability situations that are conceptually analogous to misconception‐prone target probability situations were generated and tested with secondary mathematics students. The testing showed that probability misconceptions were common, but also that the generated anchors were effective in reconstructing misconception‐laden probability knowledge. A follow‐up test showed that 65% of the reconstructed knowledge was retained after six months.

17.
Accurate monitoring of spatial and temporal variation in tree cover provides essential information for steering management practices in orchards. In this light, the present study investigates the potential of Hyperspectral Mixture Analysis. Specific focus lies on a thorough study of non-linear mixing effects caused by multiple photon scattering. In a series of experiments the importance of multiple scattering is demonstrated, while a novel conceptual Nonlinear Spectral Mixture Analysis approach is presented and successfully tested on in situ measured mixed pixels in Citrus sinensis L. orchards. The rationale behind the approach is the redistribution of nonlinear fractions (i.e., virtual fractions) among the actual physical ground cover entities (e.g., tree, soil). These ‘virtual’ fractions, which account for the extent and nature of multiple photon scattering, have a physical meaning only at the spectral level and cannot be interpreted as an actual physical part of the ground cover. Results illustrate that the effect of multiple scattering on Spectral Mixture Analysis is significant, as the linear approach yields a mean relative root mean square error (RMSE) of 27% for tree cover fraction estimates. While traditional nonlinear approaches only slightly reduce this error (RMSE = 23%), important improvements are obtained with the novel Nonlinear Spectral Mixture Analysis approach (RMSE = 12%).

18.

For a system of linear equations Ax = b, the following natural questions arise:

- does this system have a solution?

- if it does, what are the possible values of a given objective function f(x1, ..., xn) (e.g., of a linear function f(x) = c1x1 + ... + cnxn) over the system's solution set?

We show that for several classes of linear equations with uncertainty (including interval linear equations) these problems are NP-hard. In particular, we show that these problems are NP-hard even if we consider only systems of n+2 equations with n variables that have integer positive coefficients and finitely many solutions.
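The combinatorial flavour of these questions can be seen in a brute-force sketch that enumerates every endpoint combination of a tiny interval system, exactly the exponential scan that the NP-hardness result says cannot in general be avoided; the interval bounds and the objective f(x) = Σ xᵢ are invented for illustration:

```python
import itertools

import numpy as np

# Brute force is feasible only for tiny systems, which is the point of the
# NP-hardness result: for an interval linear system [A_lo, A_hi] x = b, scan
# all endpoint combinations of the uncertain coefficients and collect the
# values of the objective f(x) = sum(x) over the resulting crisp solutions.
A_lo = np.array([[1.0, 0.5], [0.5, 1.0]])
A_hi = np.array([[1.2, 0.7], [0.7, 1.2]])
b = np.array([1.0, 1.0])

values = []
for mask in itertools.product([0, 1], repeat=A_lo.size):
    pick = np.array(mask).reshape(A_lo.shape)
    A = np.where(pick == 0, A_lo, A_hi)     # one crisp endpoint matrix
    x = np.linalg.solve(A, b)
    values.append(x.sum())

print(min(values), max(values))             # objective range over endpoints
```

With k uncertain coefficients this loop runs 2^k solves, so even modest systems are out of reach, matching the hardness result for the general problem.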


19.

The use of autonomous wheeled mobile robots (AWMRs) that perform diverse tasks in numerous applications, without human intervention and in unknown environments, is thriving nowadays. An AWMR can explore the environment, build an adequate map, and localize itself within this map by interpreting the environment autonomously. FastSLAM is a framework for simultaneous localization and mapping (SLAM) for an AWMR. The correctness and efficiency of FastSLAM's estimation often depend on accurate prior knowledge of the control and measurement noise covariance matrices; inaccurate prior knowledge may seriously degrade its performance. Another major cause of losing the particle manifold in FastSLAM is sample impoverishment. These are its main problems. This paper presents a robust new method, called Hybrid filter SLAM, to solve them. In this method, an Intuitionistic Fuzzy Logic System (IFLS) is utilized to learn the measurement and control noise covariance matrices, increasing correctness and consistency, and Cuckoo Search (CS) is used to optimize sampling efficiency. Simulation and experimental results show that Hybrid filter SLAM is more efficient than FastSLAM, requiring fewer computations and performing well in larger environments.


20.
Ergonomics (2012) 55(11): 993-1001
Anthropometric data concerning British civilian adults is incomplete with respect both to the samples of people investigated and the measurements taken. The purpose of the present paper is to review the currently available sources and to provide (by estimation) a data set which is sufficiently comprehensive and accurate for general application in workspace design.

The method of estimation employed required a knowledge of the mean and standard deviation of stature in the target population. Statistical parameters of other body dimensions were obtained by scaling these down according to ratios previously determined for other reference populations. A previous study had indicated the magnitude of the errors to be anticipated in this procedure.

Anthropometric tables are provided for the adult male and female populations of Great Britain (aged 16-64 years). Percentile values are given for fifty bodily dimensions.
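The ratio-scaling estimation described in this abstract can be sketched numerically: scale the target population's stature statistics by a reference ratio for a body dimension, then read off percentiles under a normality assumption. All figures below are invented placeholders, not values from the paper's tables:

```python
from statistics import NormalDist

# Assumed target-population stature statistics (mm) and an assumed
# reference ratio of sitting height to stature -- placeholder numbers.
stature_mean, stature_sd = 1740.0, 70.0
ratio = 0.52

dim_mean = ratio * stature_mean      # estimated mean of the body dimension
dim_sd = ratio * stature_sd          # SD scaled by the same ratio

def percentile(p):
    """p-th percentile of the estimated dimension, assuming normality."""
    z = NormalDist().inv_cdf(p / 100.0)
    return dim_mean + z * dim_sd

print(round(percentile(5), 1), round(percentile(50), 1), round(percentile(95), 1))
```

The paper notes that a previous study quantified the errors introduced by this ratio-scaling procedure; the sketch makes no attempt to model those errors.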


Copyright©北京勤云科技发展有限公司  京ICP备09084417号