Multi-stream automatic speech recognition (MS-ASR) has been shown to improve recognition performance in noisy conditions. In such a system, the generation and the fusion of the streams are the essential parts and must be designed to reduce the effect of noise on the final decision. This paper shows how to improve the performance of MS-ASR by addressing two questions: (1) how many streams should be combined, and (2) how should they be combined. First, we propose a novel approach based on stream reliability for selecting the number of streams to be fused. Second, a fusion method based on Parallel Hidden Markov Models is introduced. Applying the method to two datasets (TIMIT and RATS) with different noise conditions, we show an improvement in MS-ASR performance.
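As a rough illustration of reliability-based stream selection, the sketch below scores each stream by the entropy of its frame posteriors and fuses the most reliable ones with a weighted geometric mean. The scoring rule, the fusion rule, and all function names are assumptions for illustration; the paper's actual reliability criterion and its Parallel-HMM fusion are not reproduced here.

```python
import numpy as np

def stream_reliability(posteriors: np.ndarray) -> float:
    """Score a stream by the negative mean entropy of its frame posteriors:
    confident (low-entropy) streams are treated as more reliable."""
    eps = 1e-12
    entropy = -(posteriors * np.log(posteriors + eps)).sum(axis=1).mean()
    return -entropy

def select_and_fuse(streams: list[np.ndarray], n_keep: int) -> np.ndarray:
    """Keep the n_keep most reliable streams and fuse their frame posteriors
    with a reliability-weighted geometric mean (a log-linear combination).
    Each stream is a (frames, classes) array of posterior probabilities."""
    scores = np.array([stream_reliability(p) for p in streams])
    keep = np.argsort(scores)[::-1][:n_keep]
    weights = np.exp(scores[keep] - scores[keep].max())  # softmax weights
    weights /= weights.sum()
    log_fused = sum(w * np.log(streams[i] + 1e-12)
                    for w, i in zip(weights, keep))
    fused = np.exp(log_fused)
    return fused / fused.sum(axis=1, keepdims=True)
```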
Ultra-high-performance concrete (UHPC) is a recent class of concrete with improved rheological, mechanical and durability properties compared to traditional concrete. The production cost of UHPC is considerably high due to the large amount of cement used, as well as the high price of other required constituents such as quartz powder, silica fume, fibres and superplasticisers. To achieve specific requirements such as the desired production cost, strength and flowability, the proportions of UHPC's constituents must be well adjusted. The traditional mixture design of concrete requires a cumbersome, costly and extensive experimental programme. Therefore, mathematical optimisation, design of experiments (DOE) and statistical mixture design (SMD) methods have been used in recent years, particularly for meeting multiple objectives. In traditional methods, simple regression models such as multiple linear regression are used as objective functions according to the requirements. Once the model is constructed, mathematical programming and simplex algorithms are usually used to find optimal solutions. However, a more flexible procedure is required, one enabling the use of high-accuracy nonlinear models and the definition of different scenarios for multi-objective mixture design, particularly when the data are not structured well enough to fit simple regression models such as multiple linear regression. This paper demonstrates a procedure that integrates machine learning (ML) algorithms, such as Artificial Neural Networks (ANNs) and Gaussian Process Regression (GPR), to develop high-accuracy models, with a metaheuristic optimisation algorithm, Particle Swarm Optimisation (PSO), for multi-objective mixture design and optimisation of UHPC reinforced with steel fibres. A reliable experimental dataset is used to develop the models and to justify the final results. The comparison of the obtained results with the experimental results validates the capability of the proposed procedure for multi-objective mixture design and optimisation of steel-fibre-reinforced UHPC. The proposed procedure not only reduces the effort spent on the experimental design of UHPC but also leads to optimal mixtures when the designer faces strength-flowability-cost trade-offs.
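The following sketch shows the PSO side of such a procedure: a basic global-best PSO minimising a scalarised objective over mixture proportions within box bounds. The objective here is a placeholder; in the paper's setting it would query the trained ANN/GPR surrogates for strength, flowability and cost, and the hyperparameters are generic defaults rather than the authors' settings.

```python
import numpy as np

def pso_minimise(objective, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over the box `bounds` with global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T          # bounds: list of (low, high) pairs
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()   # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)       # keep particles inside the box
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical use with trained surrogate models (names are placeholders):
# cost = lambda x: w1 * price(x) - w2 * ann_strength(x) - w3 * gpr_flow(x)
# best_mix, best_f = pso_minimise(cost, bounds=[(0.0, 1.0)] * 6)
```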
In this paper a new method for handling occlusion in face recognition is presented. In this method the faces are partitioned into blocks and a sequential recognition structure is developed. Then, a spatial attention control strategy over the blocks is learned using reinforcement learning. The outcome of this learning is a list of blocks sorted by their average importance to the face recognition task. In the recall mode, the sorted blocks are employed sequentially until a confident decision is made. The results of various experiments on the AR face database demonstrate the superior performance of the proposed method, compared with that of the holistic approach, in the recognition of occluded faces.
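A minimal sketch of the recall stage follows: blocks are consumed in the learned importance order, evidence for each identity is accumulated, and recognition stops once one identity is sufficiently dominant. The confidence rule and the normalisation are placeholder choices, not the paper's exact decision criterion.

```python
import numpy as np

def sequential_recognition(block_scores: np.ndarray, block_order, threshold=0.9):
    """Recall stage: accumulate per-block similarity scores in the learned
    importance order and stop once one identity clearly dominates.

    block_scores: (n_blocks, n_identities) non-negative similarities of the
    probe's blocks to each gallery identity; block_order: block indices
    sorted by the importance learned offline (assumed given here)."""
    evidence = np.zeros(block_scores.shape[1])
    for used, b in enumerate(block_order, start=1):
        evidence += block_scores[b]
        total = evidence.sum()
        if total > 0 and evidence.max() / total >= threshold:
            return int(evidence.argmax()), used   # early, confident decision
    return int(evidence.argmax()), len(block_order)  # fell through: used all
```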
With the rapid growth of laser applications and the introduction of high-efficiency lasers (e.g. fiber lasers), laser material processing has gained increasing importance in a variety of industries. Among the applications of laser technology, laser cladding has received significant attention due to its high potential for material processing tasks such as metallic coating, high-value component repair, prototyping, and even low-volume manufacturing. In this paper, two optimization methods are applied to obtain optimal operating parameters of the Laser Solid Freeform Fabrication (LSFF) process as a real-world engineering problem. First, the Particle Swarm Optimization (PSO) algorithm was implemented for real-time prediction of melt pool geometry. Then, a hybrid evolutionary algorithm called the Self-organizing Pareto-based Evolutionary Algorithm (SOPEA) was proposed to find the optimal process parameters. To further verify the performance of the proposed optimization technique, it was compared with well-known vector optimization algorithms such as the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Strength Pareto Evolutionary Algorithm (SPEA 2). Thereafter, it was applied to the simultaneous optimization of clad height and melt pool depth in the LSFF process. Since there is no exact mathematical model for the clad height (deposited layer thickness) and the melt pool depth, the authors developed two Adaptive Neuro-Fuzzy Inference Systems (ANFIS) to estimate these two process parameters. Once the optimization procedure was complete, the archived non-dominated solutions were surveyed to find the ranges of process parameters with acceptable dilutions. Finally, the selected optimal ranges were used to find a case with the minimum rapid prototyping time. The results indicate the strong potential of evolutionary strategies for the control and optimization of the LSFF process as a complicated engineering problem.
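Central to all of the Pareto-based algorithms mentioned (SOPEA, NSGA-II, SPEA 2) is the archive of non-dominated solutions. The sketch below extracts the Pareto front from a set of objective vectors under minimisation; the two-objective example in the trailing comment (clad height and melt pool depth errors) is only illustrative.

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows of `points` (minimisation on
    every objective), i.e. the archive a Pareto-based EA would keep."""
    n = len(points)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # j dominates i if it is no worse everywhere and strictly better somewhere
        dominated = ((points <= points[i]).all(axis=1) &
                     (points < points[i]).any(axis=1))
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

# e.g. objectives = np.column_stack([clad_height_error, melt_pool_depth_error])
# archive = pareto_front(objectives)
```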
Data Grid is a geographically distributed environment that deals with large-scale data-intensive applications. Effective scheduling in the Grid can reduce the amount of data transferred among nodes by submitting a job to a node where most of the requested data files are available. Data replication is another key optimization technique for reducing access latency and managing large data volumes by storing data judiciously. In this paper two algorithms are proposed. The first is a novel job scheduling algorithm called the Combined Scheduling Strategy (CSS), which uses hierarchical scheduling to reduce the search time for an appropriate computing node. It considers the number of jobs waiting in the queue, the location of the data required by each job, and the computing capacity of the sites. The second is a dynamic data replication strategy, called the Modified Dynamic Hierarchical Replication Algorithm (MDHRA), that improves file access time. This strategy is an enhanced version of the Dynamic Hierarchical Replication (DHR) strategy. Data replication must be used judiciously because the storage capacity of each Grid site is limited; it is therefore important to design an effective replica replacement strategy. MDHRA replaces replicas based on the time a replica was last requested, its number of accesses, and its size. It selects the best replica location from among the many replicas based on response time, which is determined by the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. The simulation results demonstrate that the proposed replication and scheduling strategies give better performance than the other algorithms.
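As a toy version of the replacement idea, the sketch below ranks replicas by last request time, access count and size, evicting the replica that scores worst. The weighting formula is an illustrative stand-in, not the published MDHRA formula.

```python
import time
from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    size_mb: float
    accesses: int
    last_request: float  # unix timestamp of the most recent request

def eviction_score(r: Replica, now: float | None = None) -> float:
    """Higher score = better eviction candidate: long-unrequested, rarely
    accessed, large replicas go first. Illustrative weighting only."""
    if now is None:
        now = time.time()
    age = now - r.last_request
    return age * r.size_mb / (r.accesses + 1)

def choose_victim(replicas: list[Replica]) -> Replica:
    """Pick the replica to replace when a site's storage is full."""
    return max(replicas, key=eviction_score)
```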
The mapping quality of self-organising maps (SOMs) is sensitive to the map topology and the initialisation of neurons. In this article, in order to improve the convergence of the SOM, an algorithm that initialises neurons by splitting and merging clusters is introduced. The initialisation algorithm speeds up the learning process on large high-dimensional data sets. We also develop a topology based on this initialisation to optimise the vector quantisation error and topology preservation of the SOMs. Such an approach yields more accurate data visualisation and, consequently, better clustering. Numerical results on eight small-to-large real-world data sets are reported to demonstrate the performance of the proposed algorithm in terms of vector quantisation, topology preservation and CPU time requirement.
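The sketch below illustrates the general idea of data-driven SOM initialisation: start the codebook at cluster centroids rather than random weights. Plain k-means stands in for the article's split-and-merge scheme, which is not reproduced here; both aim to place neurons near dense regions of the data before training begins.

```python
import numpy as np

def init_som_from_clusters(data: np.ndarray, grid_shape: tuple[int, int],
                           n_iter: int = 10, seed: int = 0) -> np.ndarray:
    """Initialise a SOM codebook of shape (rows, cols, dim) from cluster
    centroids instead of random weights (k-means as a stand-in scheme)."""
    rng = np.random.default_rng(seed)
    k = grid_shape[0] * grid_shape[1]           # one centroid per neuron
    centroids = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each sample to its nearest centroid, then recompute means
        d = ((data[:, None, :] - centroids[None]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = data[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids.reshape(*grid_shape, data.shape[1])
```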
We propose novel techniques to find the optimal location, size, and power factor of distributed generation (DG) to achieve the maximum loss reduction for distribution networks. Determining the optimal DG location and size is achieved simultaneously using the energy loss curves technique for a pre-selected power factor that gives the best DG operation. Based on the network's total load demand, four DG sizes are selected. They are used to form energy loss curves for each bus and then to determine the optimal DG options. The study shows that, with energy loss minimization defined as the objective function, the time-varying load demand significantly affects the sizing of DG resources in distribution networks, whereas taking power loss as the objective function leads to inconsistent interpretation of loss reduction and other calculations. The devised technique was tested on two test distribution systems of varying size and complexity and validated by comparison with the exhaustive iterative method (EIM) and recently published results. Results showed that the proposed technique can provide an optimal solution with less computation.
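A schematic version of the energy-loss-curve search is sketched below. Here `power_loss(bus, size, demand)` is a hypothetical stand-in for a full load-flow evaluation of total network loss; the point is that losses are integrated over the time-varying load profile (energy) rather than evaluated at a single demand level (power).

```python
import numpy as np

def energy_loss_curve(power_loss, bus: int, dg_sizes_mw: list[float],
                      load_profile: np.ndarray) -> list[float]:
    """Integrate time-varying losses over a load profile for each candidate
    DG size at one bus (MWh if the profile is hourly)."""
    return [sum(power_loss(bus, size, demand) for demand in load_profile)
            for size in dg_sizes_mw]

def best_dg_option(power_loss, buses, dg_sizes_mw, load_profile):
    """Scan every bus/size pair and return the one with least energy loss."""
    options = [(bus, size, loss)
               for bus in buses
               for size, loss in zip(dg_sizes_mw,
                                     energy_loss_curve(power_loss, bus,
                                                       dg_sizes_mw,
                                                       load_profile))]
    return min(options, key=lambda option: option[2])
```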
In our previous work, "Robust transmission of scalable video stream using modified LT codes", an LT code with an unequal packet protection property was proposed. Applying the proposed code to any importance-sorted input data was seen to increase the probability of early decoding of the most important parts when a sufficient number of encoded symbols is available at the decoder's side. In this work, the performance of the proposed method is assessed in the general case, over a wide range of loss rates, even when there are not enough encoded symbols at the decoder's side. The degree distribution of the input nodes is also investigated in more detail. We illustrate that sorting the input nodes in the encoding graph, as done in our previous work, has a clear advantage over the unequal input-node selection method used in traditional rateless codes with the unequal error protection property.
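For orientation, the sketch below shows a generic LT encoder over an importance-sorted symbol list. The degree distribution is a toy stand-in, not the modified distribution analysed in the paper; the encoder records which inputs each output XORs together, which is exactly the graph structure an unequal-protection decoder exploits.

```python
import random

def lt_encode(symbols: list[bytes], n_encoded: int, seed: int = 0):
    """Generic LT encoder: each output symbol XORs a random subset of the
    (importance-sorted) fixed-length input symbols."""
    rng = random.Random(seed)
    degrees, weights = [1, 2, 3, 4], [0.1, 0.5, 0.3, 0.1]  # toy distribution
    out = []
    for _ in range(n_encoded):
        d = rng.choices(degrees, weights)[0]
        neighbours = rng.sample(range(len(symbols)), min(d, len(symbols)))
        payload = bytes(len(symbols[0]))     # zero block of the symbol length
        for i in neighbours:
            payload = bytes(a ^ b for a, b in zip(payload, symbols[i]))
        out.append((neighbours, payload))    # the decoder needs both parts
    return out
```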
Cops and Robbers is a pursuit-and-evasion game played on graphs that has received much attention. We consider an extension of Cops and Robbers, distance k Cops and Robbers, in which the cops win if at least one of them is at distance at most k from the robber in G. The cop number of a graph G is the minimum number of cops needed to capture the robber in G. The distance k analogue of the cop number, written c_k(G), equals the minimum number of cops needed to win at a given distance k. We study the parameter c_k from algorithmic, structural, and probabilistic perspectives. We supply a classification result for graphs with bounded c_k(G) values and develop an O(n^{2s+3}) algorithm for determining whether c_k(G) ≤ s for fixed s. We prove that if s is not fixed, then computing c_k(G) is NP-hard. Upper and lower bounds are found for c_k(G) in terms of the order of G. We prove that
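The capture condition itself reduces to a bounded-depth breadth-first search: the cops win at distance k as soon as some cop's vertex is within graph distance k of the robber's. A minimal check, with the graph given as an adjacency dict, might look like this (the O(n^{2s+3}) decision algorithm itself is a game-state computation beyond this sketch):

```python
from collections import deque

def within_distance_k(adj: dict, cop: int, robber: int, k: int) -> bool:
    """BFS from the cop's vertex, stopping at depth k: returns True exactly
    when the robber is at graph distance <= k from the cop."""
    seen, frontier = {cop}, deque([(cop, 0)])
    while frontier:
        v, d = frontier.popleft()
        if v == robber:
            return True
        if d == k:
            continue                      # do not expand past depth k
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return False
```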