Similar Documents (20 results)
1.
This study analyzes the effect of degradation on human and automatic speaker verification (SV) tasks. The perceptual test is conducted by subjects having knowledge of speaker verification. An automatic SV system is developed using Mel-frequency cepstral coefficients (MFCC) and a Gaussian mixture model (GMM). Human and automatic speaker verification performances are compared for clean training and different degraded test conditions. Speech signals are reconstructed in clean and degraded conditions by highlighting different speaker-specific information and compared through the perceptual test. The perceptual cues that the human subjects used as speaker-specific information are investigated, and their importance in degraded conditions is highlighted. The difference in the nature of the human and automatic SV tasks is investigated in terms of falsely accepted and falsely rejected speech pairs. A discussion on human vs. automatic speaker verification is carried out, and possibilities for improving the performance of automatic speaker verification under degraded conditions are suggested.
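As a rough illustration of the automatic side of such a comparison, the sketch below builds a minimal MFCC + GMM verifier. It assumes librosa and scikit-learn; the threshold, model sizes, and file-based interface are placeholders for illustration, not the authors' setup.

```python
# Minimal MFCC + GMM speaker-verification sketch (not the authors' exact
# system): enroll a speaker model, score a test utterance against it and
# a universal background model (UBM), and threshold the log-likelihood ratio.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=16000)
    # Frames as rows: (n_frames, n_mfcc)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_gmm(feature_list, n_components=32):
    X = np.vstack(feature_list)
    return GaussianMixture(n_components=n_components,
                           covariance_type='diag').fit(X)

def verify(test_path, speaker_gmm, ubm, threshold=0.0):
    X = mfcc_features(test_path)
    # Average per-frame log-likelihood ratio: speaker model vs. background.
    llr = speaker_gmm.score(X) - ubm.score(X)
    return llr > threshold, llr
```

Degraded test conditions would enter this sketch simply as noisier inputs to `mfcc_features`, which is where the performance gap studied in the paper appears.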

2.

Requirements communication plays a vital role in development projects by coordinating the customers, the business roles, and the software engineers. Communication gaps represent a significant source of project failures and overruns. For example, misunderstood or uncommunicated requirements can lead to software that does not meet the customers' requirements, with subsequent low sales or additional cost to redo the implementation. We propose that requirements engineering (RE) distance measures are useful for locating gaps in requirements communication and for improving development practice. In this paper, we present a case study of one software development project to evaluate this proposition. Thirteen RE distances were measured, including geographical and cognitive distances between project members, and semantic distances between requirements and testing artefacts. The findings confirm that RE distances impact requirements communication and project coordination. Furthermore, the concept of distances was found to enable constructive group reflection on communication gaps and improvements to development practices. The insights reported in this paper can provide practitioners with an increased awareness of distances and their impact. Furthermore, the results provide a stepping stone for further research into RE distances and methods for improving software development processes and practices.


3.
The complexity of constraints is a major obstacle for constraint-based software verification. Automatic constraint solvers are fundamentally incomplete: input constraints often build on some undecidable theory or some theory the solver does not support. This paper proposes and evaluates several randomized solvers to address this issue. We compared the effectiveness of a symbolic solver (CVC3), a random solver, two heuristic search solvers, and seven hybrid solvers (i.e., mixes of random, symbolic, and heuristic solvers). We evaluated the solvers on a benchmark generated with a concolic execution of 9 subjects. The performance of each solver was measured by its precision: the fraction of constraints for which the solver finds a solution, out of all constraints that at least one solver can solve. As expected, symbolic solving subsumes the other approaches for the 4 subjects that only generate decidable constraints. For the remaining 5 subjects, which contain undecidable constraints, the hybrid solvers achieved the highest precision. We also observed that the solvers were complementary, which suggests that one should alternate their use in iterations of a concolic execution driver.
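To make the idea of a randomized fallback concrete, here is a minimal sketch of a random solver over boxed real-valued variables. The predicate interface, variable ranges, and retry budget are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def random_solver(constraint, var_ranges, tries=10000, seed=0):
    """Try random assignments; return a satisfying one or None.
    `constraint` is a predicate over a dict of variable values."""
    rng = random.Random(seed)
    for _ in range(tries):
        assignment = {v: rng.uniform(lo, hi)
                      for v, (lo, hi) in var_ranges.items()}
        if constraint(assignment):
            return assignment
    return None

# Example: a nonlinear constraint a symbolic solver may not support.
sat = random_solver(lambda a: math.sin(a['x']) * a['y'] > 0.9,
                    {'x': (0.0, 3.14), 'y': (0.0, 2.0)})
```

A hybrid solver in the paper's sense would attempt symbolic solving first and fall back to a randomized strategy like this one when the theory is unsupported.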

4.
Virtual Reality - Fiducial markers are a cost-effective solution for solving labeling and monocular localization problems, making them valuable tools for augmented reality (AR), robot navigation,...

5.
In this work, algorithms for segmenting handwritten digits based on different concepts are compared by evaluating them under the same implementation conditions. A robust experimental protocol based on a large synthetic database is used to assess each algorithm in terms of correct segmentation and computational time. Results on a real database are also presented. In addition to the overall performance of each algorithm, we show the performance for different types of connections, which provides an interesting categorization of each algorithm. Another contribution of this work concerns the complementarity of the algorithms. We have observed that each method is able to segment samples that cannot be segmented by any other method, independently of its individual performance. Based on this observation, we conclude that combining different segmentation algorithms may be an appropriate strategy for improving the correct segmentation rate.
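One simple way to exploit that complementarity is a cascade that tries each segmenter in turn and keeps the first hypothesis a verifier accepts. This is a hedged sketch of the general idea, not a scheme from the paper; `segmenters` and `verifier` are hypothetical callables.

```python
def combined_segmentation(image, segmenters, verifier):
    """Run candidate segmentation algorithms in turn and keep the first
    hypothesis that the verifier (e.g. a digit classifier's confidence
    on the resulting pieces) accepts."""
    for segment in segmenters:
        parts = segment(image)
        if parts is not None and verifier(parts):
            return parts
    return None  # no algorithm produced an acceptable segmentation
```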

6.
We report on a case study in applying different formal methods to model and verify an architecture for administrating digital signatures. The architecture comprises several concurrently executing systems that authenticate users and generate and store digital signatures by passing security relevant data through a tightly controlled interface. The architecture is interesting from a formal-methods perspective as it involves complex operations on data as well as process coordination and hence is a candidate for both data-oriented and process-oriented formal methods. We have built and verified two models of the signature architecture using two representative formal methods. In the first, we specify a data model of the architecture in Z that we extend to a trace model and interactively verify by theorem proving. In the second, we model the architecture as a system of communicating processes that we verify by finite-state model checking. We provide a detailed comparison of these two different approaches to formalization (infinite state with rich data types versus finite state) and verification (theorem proving versus model checking). Contrary to common belief, our case study suggests that Z is well suited for temporal reasoning about process models with complex operations on data. Moreover, our comparison highlights the advantages of proving theorems about such models and provides evidence that, in the hands of an experienced user, theorem proving may be neither substantially more time-consuming nor more complex than model checking.

7.

Computational modeling of visual saliency has become an important research problem in recent years, with applications in video quality estimation, video compression, object tracking, retargeting, summarization, and so on. While most visual saliency models for dynamic scenes operate on raw video, several models have been developed for use with compressed-domain information such as motion vectors and transform coefficients. This paper presents a comparative study of eleven such models as well as two high-performing pixel-domain saliency models on two eye-tracking datasets using several comparison metrics. The results indicate that highly accurate saliency estimation is possible based only on a partially decoded video bitstream. The strategies that have shown success in compressed-domain saliency modeling are highlighted, and certain challenges are identified as potential avenues for further improvement.
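For a flavor of what compressed-domain features afford, the toy sketch below turns a frame's motion-vector field into a saliency map by removing the median (camera) motion and normalizing the residual magnitudes. It is an illustration of the kind of data the surveyed models consume, not one of the eleven models compared.

```python
import numpy as np

def mv_saliency(mv_x, mv_y):
    """Toy compressed-domain saliency: motion-vector magnitude after
    removing the global (camera) motion, estimated as the median vector.
    mv_x, mv_y are 2-D arrays of per-block motion-vector components."""
    dx = mv_x - np.median(mv_x)
    dy = mv_y - np.median(mv_y)
    mag = np.sqrt(dx**2 + dy**2)
    return mag / (mag.max() + 1e-9)  # normalized to [0, 1]
```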


8.
Generative model-based document clustering: a comparative study
This paper presents a detailed empirical study of 12 generative approaches to text clustering, obtained by applying four types of document-to-cluster assignment strategies (hard, stochastic, soft and deterministic annealing (DA) based assignments) to each of three base models, namely mixtures of multivariate Bernoulli, multinomial, and von Mises-Fisher (vMF) distributions. A large variety of text collections, both with and without feature selection, are used for the study, which yields several insights, including (a) showing situations wherein the vMF-centric approaches, which are based on directional statistics, fare better than multinomial model-based methods, and (b) quantifying the trade-off between increased performance of the soft and DA assignments and their increased computational demands. We also compare all the model-based algorithms with two state-of-the-art discriminative approaches to document clustering based, respectively, on graph partitioning (CLUTO) and a spectral coclustering method. Overall, DA and CLUTO perform the best but are also the most computationally expensive. The vMF models provide good performance at low cost while the spectral coclustering algorithm fares worse than vMF-based methods for a majority of the datasets.
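For intuition, the hard-assignment vMF case with a shared concentration parameter reduces to spherical k-means on L2-normalized document vectors. The sketch below shows that special case only; it is a simplification, not the paper's full EM for movMF mixtures.

```python
import numpy as np

def spherical_kmeans(X, k, iters=20, seed=0):
    """Hard-assignment vMF mixture with a shared concentration parameter
    reduces to spherical k-means: documents and centroids live on the
    unit sphere and similarity is the dot product (cosine)."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), k, replace=False)]      # initial centroids
    for _ in range(iters):
        labels = (X @ C.T).argmax(axis=1)            # hard assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)         # re-normalized mean
    return labels, C
```

The soft and DA variants studied in the paper replace the argmax with probabilistic (temperature-controlled) assignments, which is the source of their extra computational cost.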

9.
Filtering for texture classification: a comparative study
In this paper, we review most major filtering approaches to texture feature extraction and perform a comparative study. Filtering approaches included are Laws masks (1980), ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, wavelet packets and wavelet frames, quadrature mirror filters, discrete cosine transform, eigenfilters, optimized Gabor filters, linear predictors, and optimized finite impulse response filters. The features are computed as the local energy of the filter responses. The effect of the filtering is highlighted, keeping the local energy function and the classification algorithm identical for most approaches. For reference, comparisons with two classical nonfiltering approaches, co-occurrence (statistical) and autoregressive (model-based) features, are given. We present a ranking of the tested approaches based on extensive experiments.
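The shared pipeline — filter, rectify, smooth into a local energy map — can be sketched as follows for a small Gabor bank. It assumes scikit-image and SciPy; the squared-response-plus-averaging energy function is one common choice, not necessarily the paper's exact one.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter
from skimage.filters import gabor_kernel

def gabor_energy_features(image, frequencies=(0.1, 0.2, 0.3),
                          thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4),
                          window=15):
    """Local-energy texture features: filter, rectify, then average over
    a local window -- the pipeline held fixed across filter banks."""
    feats = []
    for f in frequencies:
        for t in thetas:
            k = gabor_kernel(frequency=f, theta=t)
            resp = convolve(image.astype(float), np.real(k))
            feats.append(uniform_filter(resp**2, size=window))
    return np.stack(feats, axis=-1)   # (H, W, n_filters) feature maps
```

Each pixel's feature vector would then be fed to the fixed classifier, so that differences in accuracy can be attributed to the filter bank alone.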

10.
This article has been retracted at the request of the editor. Reason: The article published in Volume 28, Issue 11, pages 1131–1139 was retracted as, subsequent to publication, it was found to be substantially the same as "Shortest Paths with Euclidean Distances: An Explanatory Model", by B. L. Golden and M. Ball, published in Networks, Volume 8 (1978), pages 297–314. We apologise to Professor B. L. Golden and Professor M. Ball for not discovering this instance of plagiarism prior to publication. This Notice was published in CAOR, Volume 29, Issue 7 (June 2002).

11.
The differential evolution (DE) algorithm suffers from high computational time due to the slow nature of its evaluations. Micro-DE (MDE) algorithms utilize a very small population size, which can converge faster to a reasonable solution. Such algorithms are, however, vulnerable to premature convergence and a high risk of stagnation. This paper proposes an MDE algorithm with a vectorized random mutation factor (MDEVM), which retains the benefit of a small population size while strengthening the exploratory power of the mutation factor by randomizing it at the decision-variable level. The idea is supported by analyzing the mutation factor using Monte Carlo-based simulations. To facilitate the use of MDE algorithms with very small population sizes, a new mutation scheme for population sizes of less than four is also proposed. Furthermore, comprehensive comparative simulations and analyses of the performance of the MDE algorithms over various mutation schemes, population sizes, problem types (i.e., uni-modal, multi-modal, and composite), problem dimensionalities, and mutation factor ranges are conducted, with population diversity analysis for stagnation and premature convergence. The MDEVM is implemented using a population-based parallel model, and studies are conducted on the 28 benchmark functions provided for the IEEE CEC-2013 competition. Experimental results demonstrate the high convergence speed of the proposed MDEVM algorithm.
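The core twist — drawing the mutation factor per decision variable instead of per vector — can be sketched on top of the classic DE/rand/1 mutation. The F range and generator details below are illustrative assumptions.

```python
import numpy as np

def mde_vm_mutation(pop, F_range=(0.4, 1.0), rng=None):
    """DE/rand/1 mutation with a vectorized random mutation factor:
    F is drawn per decision variable rather than once per mutant vector
    (a sketch of the MDEVM idea)."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                3, replace=False)
        F = rng.uniform(*F_range, size=d)   # one F per dimension
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return mutants
```

With a scalar F the difference vector is merely scaled; the per-dimension F also rotates it, which is what boosts exploration for tiny populations.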

12.
In this paper, we examine pricing strategies in the U.S. online book industry over two time periods, with the aim of understanding whether and how the driving factors of price dispersion change over time. Our empirical results show that dispersion in prices remained substantial over the period 2001–2006, but the driving factors behind these variations in price evolved. In 2001, online book retailers generally engaged in obfuscation, frustrating consumer search by manipulating shipping options. As documented in prior literature and revealed in our 2001 data, higher prices charged by retailers were positively related to longer shipping times. This strategy has since been abandoned, as shown by our results for a 2006 sample. Online retailers now compete to ship items more quickly than rivals and to pass fewer or no shipping costs on to consumers. The impact of trust assurance seals (e.g., seals of online security and privacy) on price materialized over the period 2001–2006. This is because, as more consumers become security conscious, the effect of assurance seals on price becomes better recognized. Moreover, although retailers are roughly clustered into three cohorts, they set prices strategically across different product items within each cohort.

13.
Classification of adaptive memetic algorithms: a comparative study
Adaptation of parameters and operators represents one of the most important and promising recent areas of research in evolutionary computation; it is a form of designing self-configuring algorithms that acclimatize to suit the problem at hand. Here, our interest is in a recent breed of hybrid evolutionary algorithms known as adaptive memetic algorithms (MAs). One unique feature of adaptive MAs is the choice of local search methods, or memes, and recent studies have shown that this choice significantly affects the performance of problem searches. In this paper, we present a classification of meme adaptation in adaptive MAs on the basis of the mechanism used and the level of historical knowledge of the memes employed. The asymptotic convergence properties of the adaptive MAs considered are then analyzed according to this classification. Subsequently, empirical studies on representatives of adaptive MAs for different type-level meme adaptations, using continuous benchmark problems, indicate that global-level adaptive MAs exhibit better search performance. Finally, we conclude with some promising research directions in the area.

14.
Active learning for on-road vehicle detection: a comparative study
In recent years, active learning has emerged as a powerful tool in building robust systems for object detection using computer vision. Indeed, active learning approaches to on-road vehicle detection have achieved impressive results. While active learning approaches for object detection have been explored and presented in the literature, few studies have been performed to comparatively assess costs and merits. In this study, we provide a cost-sensitive analysis of three popular active learning methods for on-road vehicle detection. The generality of active learning findings is demonstrated via learning experiments performed with detectors based on histogram of oriented gradient features and SVM classification (HOG–SVM), and Haar-like features and Adaboost classification (Haar–Adaboost). Experimental evaluation has been performed on static images and real-world on-road vehicle datasets. Learning approaches are assessed in terms of the time spent annotating, data required, recall, and precision.
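For concreteness, one widely used query strategy in this setting is uncertainty sampling with an SVM, which picks the unlabeled pool samples closest to the decision boundary. The sketch below assumes scikit-learn and is a generic illustration, not necessarily one of the three methods compared in the study.

```python
import numpy as np
from sklearn.svm import SVC

def uncertainty_query(clf, X_pool, batch=50):
    """Return indices of the pool samples with the smallest absolute
    SVM decision value, i.e. the ones the detector is least sure about."""
    margins = np.abs(clf.decision_function(X_pool))
    return np.argsort(margins)[:batch]

# Active-learning loop (illustrative): fit on the labeled set, query the
# most uncertain pool samples, have an annotator label them, and repeat.
# clf = SVC(kernel='linear').fit(X_labeled, y_labeled)
# idx = uncertainty_query(clf, X_pool)
```

The annotation-time and data-required metrics in the paper correspond to how many such query rounds, and how many labeled samples, are needed to reach a target recall/precision.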

15.
The purpose of this work is to introduce the reader to an add-in implementation, Decom. This implementation provides the complete processing capabilities required for the analysis of dimeric spectra. General linear and nonlinear decomposition algorithms were integrated as an Excel add-in for easy installation and usage. In this work, the results of several sample investigations were compared with those obtained by Datan.

16.
This paper addresses the question of the extent to which structured design methods contribute to the quality of a software product. Comparative data were obtained from the development of a commercial real-time embedded system, for which two versions of the product were produced. Version A was developed informally; by contrast, version B used a structured design method. Maintenance effort for A was high compared with that for B. The case study was set up to measure the effect of using structured design on the resulting internal code structure, whose metrics were captured by a static analysis tool. Results show that version B has less component coupling than version A. The component size results show that the distribution of B is shifted with respect to A, with more smaller components and fewer large ones. In respect of the detailed code structure within components, the results indicate that B is better structured than A. Only the fully structured components of A and B could be measured for testability, with no significant difference being apparent for the specified test case strategies. Overall, the evidence of this comparative study points to modest advantages of the structured method over the informal development method in this case. Caution must be exercised, however, against sweeping generalizations of these results.

17.
In this paper, two variations of the simulated annealing method are proposed and tested on minimum-makespan job shop scheduling problems. In conventional simulated annealing, the temperature declines steadily, giving the search a higher transition probability at the beginning and a lower probability toward the end. In the first proposed method, an adaptive temperature control scheme is used that changes the temperature based on the number of consecutive improving moves. In the second method, a tabu list is added to the adaptive simulated annealing algorithm in order to avoid revisits. The performance of these two algorithms is evaluated and compares favorably with conventional simulated annealing.
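A hedged sketch of the two variants combined is given below. The cooling/reheating constants, tabu length, and acceptance rule are assumptions for illustration; the abstract does not specify the paper's exact update rules.

```python
import math
import random

def adaptive_sa(schedule0, neighbor, cost, iters=10000, tabu_size=50):
    """Simulated annealing with (a) temperature adapted to the run of
    consecutive improving moves and (b) a tabu list to avoid revisits --
    a sketch of the two proposed variants, not the paper's exact rules."""
    current = best = schedule0
    T, improving_run, tabu = 1.0, 0, []
    for _ in range(iters):
        cand = neighbor(current)
        if cand in tabu:                       # skip recently visited moves
            continue
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / max(T, 1e-9)):
            current = cand
            tabu.append(cand)
            if len(tabu) > tabu_size:
                tabu.pop(0)
            improving_run = improving_run + 1 if delta < 0 else 0
            # Adaptive rule (assumed): cool faster while moves keep
            # improving, reheat slightly when progress stalls.
            T *= 0.95 if improving_run > 0 else 1.01
        if cost(current) < cost(best):
            best = current
    return best
```

For job shop scheduling, `schedule0` would be an operation ordering, `neighbor` a swap of adjacent operations on a machine, and `cost` the makespan.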

18.
Intelligent systems for ground vibration measurement: a comparative study
This paper deals with the application of the genetic algorithm (GA) optimization technique to predict peak particle velocity (PPV). PPV is one of the important parameters to be determined in order to minimize the damage caused by ground vibration. A number of previous researchers have tried to use different empirical methods to predict PPV, but these empirical methods have limitations owing to their limited versatility. In this paper, the GA technique is used for the prediction of PPV by incorporating blast design and explosive parameters, and the suitability of one technique over another is analyzed based on the results. Datasets were obtained from one of the Kurasia mines. 127 datasets were used to establish the GA architecture, and 10 datasets were used to validate the GA model and observe its prediction capability. The results obtained were compared with different traditional vibration predictors, multivariate regression analysis, and an artificial neural network, and the superiority of the GA over the previous methodologies is discussed. The mean absolute percentage error of the proposed architecture is very low (0.08) compared with the other predictors.
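As a hedged illustration of GA-based PPV fitting, the sketch below tunes the two coefficients of the classical scaled-distance predictor PPV = K · (D/√Q)^(−B) with a tiny GA minimizing mean absolute percentage error. The paper's GA also encodes blast-design and explosive parameters; the search ranges and GA settings here are assumptions.

```python
import numpy as np

def fit_ppv_ga(D, Q, ppv, pop_size=40, gens=200, seed=0):
    """Fit PPV = K * (D/sqrt(Q))**(-B) to measured data with a minimal GA:
    D = distance, Q = charge per delay, ppv = measured peak particle
    velocity. Returns the best (K, B) found."""
    rng = np.random.default_rng(seed)
    sd = D / np.sqrt(Q)                                  # scaled distance
    pop = rng.uniform([1, 0.5], [2000, 3.0], size=(pop_size, 2))

    def mape(ind):                                       # fitness: MAPE
        pred = ind[0] * sd ** (-ind[1])
        return np.mean(np.abs((ppv - pred) / ppv))

    for _ in range(gens):
        fitness = np.array([mape(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]   # selection
        children = parents + rng.normal(0, 0.05, parents.shape) * parents
        pop = np.vstack([parents, children])                 # mutation
    return pop[np.argmin([mape(ind) for ind in pop])]
```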

19.
Vowel classification accuracy is studied using a generalized maximum likelihood ratio method. It is shown that two simplifying assumptions can reduce computation times by as much as a factor of five while producing practically no change in recognition accuracy. The two simplifying assumptions remove cross-correlation terms and produce a Euclidean distance discriminant function. The vowels are taken from 350 multisyllabic isolated words spoken by five male speakers. The vowels occur in a variety of pre- and postconsonantal contexts. The recognition score obtained for vowels is 83 percent. The effect of grouping similar vowels on recognition scores is found to be marginal. The high back and high front vowels show better recognition scores (92–94 percent). In general, recognition performance for individual vowels follows a definite trend with respect to the vowel diagram. A reasonable similarity is observed between the confusion matrix and the distribution of vowels in the first and second formant frequency (F1–F2) plane.
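The simplified discriminant amounts to a nearest-class-mean rule in feature space, sketched below on hypothetical formant features. The feature choice and training interface are illustrative, not the paper's exact setup.

```python
import numpy as np

def train_means(X, y):
    """Per-vowel mean feature vectors (e.g. formants F1, F2)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def classify_euclidean(x, classes, means):
    """Simplified discriminant: with cross-correlation terms dropped,
    the likelihood ratio reduces to picking the class whose mean is
    nearest in Euclidean distance."""
    d = np.linalg.norm(means - x, axis=1)
    return classes[np.argmin(d)]
```

The factor-of-five speedup in the paper comes precisely from replacing the full quadratic discriminant (covariance cross-terms included) with this distance computation.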

20.
The effect of the non-orthogonality of an entangled non-orthogonal-state-based quantum channel is investigated in detail in the context of the teleportation of a qubit. Specifically, the average fidelity, minimum fidelity, and minimum assured fidelity (MASFI) are obtained for teleportation of a single-qubit state using all the Bell-type entangled non-orthogonal states known as quasi-Bell states. Using the Horodecki criterion, it is shown that the teleportation scheme obtained by replacing the quantum channel (a Bell state) of the usual teleportation scheme with a quasi-Bell state is optimal. Further, the performance of various quasi-Bell states as teleportation channels is compared in an ideal situation (i.e., in the absence of noise) and under different noise models (e.g., amplitude and phase damping channels). It is observed that the best choice of quasi-Bell state depends on the amount of non-orthogonality, in both the noisy and the noiseless cases. A specific quasi-Bell state, found to be maximally entangled under ideal conditions, is shown to be less efficient as a teleportation channel than other quasi-Bell states in particular cases when subjected to noisy channels. It is also observed that the value of the average fidelity usually falls with an increase in the number of qubits exposed to noisy channels (viz., Alice's, Bob's, and the to-be-teleported qubits), but the converse may be observed in some particular cases.
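To see how non-orthogonality shapes the channel, the sketch below builds a quasi-Bell state N(|a⟩|b⟩ ± |b⟩|a⟩) from a parameterized single-qubit overlap and computes its concurrence; the parameterization of |a⟩ and |b⟩ is an assumption for illustration. It reproduces the noiseless observation above: the antisymmetric combination stays maximally entangled for any nonzero overlap angle, while the symmetric one degrades as the states approach each other.

```python
import numpy as np

def quasi_bell_concurrence(theta, sign=-1):
    """Concurrence of the quasi-Bell state N(|a>|b> +/- |b>|a>) built from
    non-orthogonal qubit states |a> = |0>, |b> = cos(theta)|0> + sin(theta)|1>
    (so <a|b> = cos(theta) parameterizes the non-orthogonality)."""
    a = np.array([1.0, 0.0])
    b = np.array([np.cos(theta), np.sin(theta)])
    psi = np.kron(a, b) + sign * np.kron(b, a)
    norm = np.linalg.norm(psi)
    if norm < 1e-12:              # identical states: no entangled channel
        return 0.0
    psi /= norm
    sy = np.array([[0, -1j], [1j, 0]])
    # Pure-state concurrence: C = |<psi| sigma_y x sigma_y |psi*>|.
    return abs(psi.conj() @ np.kron(sy, sy) @ psi.conj())

# sign=-1 gives C = 1 for any theta != 0 (maximally entangled);
# sign=+1 gives C = sin^2(theta) / (1 + cos^2(theta)), vanishing as the
# two single-qubit states merge.
```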
