Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
《Digital Signal Processing》2000,10(1-3):93-112
Dunn, Robert B., Reynolds, Douglas A., and Quatieri, Thomas F., Approaches to Speaker Detection and Tracking in Conversational Speech, Digital Signal Processing 10 (2000), 93–112. Two approaches to detecting and tracking speakers in multispeaker audio are described. Both approaches use an adapted Gaussian mixture model, universal background model (GMM-UBM) speaker detection system as the core speaker recognition engine. In one approach, the individual log-likelihood ratio scores, which are produced on a frame-by-frame basis by the GMM-UBM system, are used first to partition the speech file into speaker-homogeneous regions and then to create scores for these regions. We refer to this approach as internal segmentation. Another approach uses an external segmentation algorithm, based on blind clustering, to partition the speech file into speaker-homogeneous regions. The adapted GMM-UBM system then scores each of these regions as in the single-speaker recognition case. We show that the external segmentation system outperforms the internal segmentation system for both detection and tracking. In addition, we show how different components of the detection and tracking algorithms contribute to the overall system performance.
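The core scoring idea can be sketched in a few lines. This is a toy illustration only: the target model here is trained independently on 1-D synthetic "features" (in a real GMM-UBM system the target model is MAP-adapted from the UBM and the features are cepstral vectors), and all data and model sizes are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy 1-D "features": a universal background model fit on pooled background
# data, and a target-speaker model fit on data shifted away from it.
background = rng.normal(0.0, 1.0, size=(500, 1))
target = rng.normal(3.0, 1.0, size=(200, 1))

ubm = GaussianMixture(n_components=2, random_state=0).fit(background)
spk = GaussianMixture(n_components=2, random_state=0).fit(target)

# Frame-by-frame log-likelihood ratio, the quantity the internal-segmentation
# approach thresholds: positive frames are attributed to the target speaker.
frames = np.vstack([rng.normal(3.0, 1.0, size=(50, 1)),   # target speech
                    rng.normal(0.0, 1.0, size=(50, 1))])  # background speech
llr = spk.score_samples(frames) - ubm.score_samples(frames)
print(llr[:50].mean() > llr[50:].mean())  # target frames score higher
```

In the paper these per-frame scores are smoothed and used both to segment the file and to score the resulting regions.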

2.
On the basis of a recent paper (Engeman, R. M., and Swanson, G. D., Comput. Biomed. Res. 24, 599 (1991)) on the choice of appropriate methods of 2 × 2 contingency table analysis, experimental results are reported and discussed in order to further clarify the real limits of the methods most commonly used in small-sample-size situations.
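The comparison at issue can be reproduced on any sparse 2 × 2 table. The table below is an invented example (not the paper's data) with expected counts well under the usual rule-of-thumb of 5, the regime where the asymptotic chi-square approximation becomes questionable and the exact test differs.

```python
from scipy.stats import chi2_contingency, fisher_exact

# A sparse 2x2 table: small expected counts, where chi-square and
# Fisher's exact test can disagree.
table = [[0, 5], [8, 2]]

# Pearson chi-square with Yates continuity correction (asymptotic)...
chi2_stat, p_chi2, dof, expected = chi2_contingency(table, correction=True)
# ...versus Fisher's exact test (conditional, exact for small samples).
odds_ratio, p_fisher = fisher_exact(table)
print(round(p_fisher, 4), round(p_chi2, 4), round(expected.min(), 2))
```

Comparing `p_fisher` with `p_chi2` across such tables is exactly the kind of small-sample experiment the paper reports.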

3.
The study is devoted to a concept and algorithmic realization of nonlinear mappings aimed at increasing the effectiveness of a problem solving method. Given the original input space X and a certain problem solving method M, a nonlinear mapping φ is designed so that the method operating in the transformed space, M(φ(X)), becomes more efficient. The nonlinear mappings realize a transformation of X through contractions and expansions of selected regions of the original space. In particular, we show how a piecewise linear mapping is optimized by using particle swarm optimization (PSO) and a suitable fitness function quantifying the objective of the problem. Several families of problems are investigated and illustrated through experimental results.
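The optimizer used here is standard global-best PSO. A minimal sketch follows; in the paper the particles would encode the breakpoints of the piecewise-linear mapping φ and the fitness would measure the performance of M on the transformed space, whereas this sketch simply minimizes a sphere function, and all parameter values (inertia 0.7, acceleration 1.5) are conventional assumptions, not the paper's settings.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO; particles encode the parameters being tuned
    (in the paper, the parameters of the piecewise-linear mapping)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                  # update personal bests
        pbest[better] = x[better]
        pbest_f[better] = fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=3)
print(best_f)
```

Swapping the sphere objective for a fitness that evaluates M(φ(X)) yields the scheme the abstract describes.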

4.
In this paper, we explore the difference between preemption and activity splitting in the resource-constrained project scheduling problem (RCPSP) literature and identify a new class of RCPSPs that allows only non-preemptive activity splitting. Each activity can be processed in multiple modes, and both renewable and non-renewable resources are considered. Renewable resources have time-varying resource constraints and vacations. The multi-mode RCPSP (MRCPSP) with non-preemptive activity splitting is shown to be a generalization of the RCPSP with calendarization. Activity ready times and due dates are considered to study their impact on project makespan. Computational experiments compare optimal makespans under three problem settings: RCPSPs without activity splitting (P1), RCPSPs with non-preemptive activity splitting (P2), and preemptive RCPSPs (P3). A precedence-tree-based branch-and-bound algorithm is modified as an exact method to find optimal solutions. Resource constraints are incorporated into the general time-window rule, and priority-rule-based simple heuristics are proposed to find good initial solutions that tighten the bounding rules. Results indicate that significant makespan reductions are possible when non-preemptive activity splitting or preemption is allowed. The wider the range of the time-varying renewable resource limits and the tighter those limits are, the larger the resulting makespan reduction can be.

5.
For fast binary addition, a carry-lookahead (CLA) design is the obvious choice [1, 3]. However, the direct implementation of a CLA adder in VLSI faces some undesirable limitations. Either the design lacks regularity, increasing the design and implementation costs, or the interconnection wires are too long, causing area-time inefficiency and limiting the size of the addition. R. P. Brent and H. T. Kung (IEEE Trans. Comput. C-31 (Mar. 1982)) solved the regularity problem by reformulating the carry-chain computation. They showed that an n-bit addition can be performed in time O(log n), using area O(n log n), with maximum interconnection wire length O(n). In this paper, we give an alternative log n stage design which is nearly optimum with respect to regularity, area-time efficiency, and maximum interconnection wire length.
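The carry-chain reformulation rests on per-bit generate and propagate signals, g_i = a_i AND b_i and p_i = a_i XOR b_i, combined by the associative operator (g, p) ∘ (g', p') = (g OR (p AND g'), p AND p'). The sketch below checks the recurrence serially for clarity; the O(log n) designs evaluate the same operator as a parallel prefix tree.

```python
# Carry-lookahead arithmetic check: generate g_i = a_i & b_i, propagate
# p_i = a_i ^ b_i; the carry recurrence c_{i+1} = g_i | (p_i & c_i) is the
# serial form of the associative prefix operator used in tree adders.
def cla_add(a, b, n=8):
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(n)]
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(n)]
    carry, s = 0, 0
    for i in range(n):
        s |= (p[i] ^ carry) << i          # sum bit
        carry = g[i] | (p[i] & carry)     # carry into the next position
    return s | (carry << n)               # include the carry-out

print(cla_add(0b1011, 0b0110))  # 11 + 6 = 17
```

Because the operator is associative, the n serial steps can be regrouped into a depth-O(log n) tree, which is the regular layout Brent and Kung exploit.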

6.
This paper focuses on the challenging problem of integrating real-time traffic with data traffic on the same network while providing predictable quality of service (QoS) guarantees. Predictable QoS guarantees mean both deterministic and probabilistic (with a known probability distribution) functions of loss, delay, and jitter. To date there are only limited solutions to this problem. This work presents, to the best of our knowledge, the first coherent system solution for combining, on the one hand, bursty data traffic with deterministically no loss due to congestion and, on the other hand, periodic real-time traffic with deterministic bandwidth guarantees, constant jitter, and bounded delay. The principles of this architecture facilitate the implementation of a scalable multimedia system that combines or integrates distributed/parallel computing (e.g., over a network of PCs/workstations) with real-time applications (e.g., interactive video teleconferencing).

7.
To improve the performance of standard particle swarm optimization (PSO), which suffers from premature convergence and slow convergence speed, many PSO variants introduce stochastic or aimless strategies to overcome the convergence problem. However, mutual learning between elite particles is omitted, although it might benefit convergence speed and prevent premature convergence. In this paper, we introduce DSLPSO, which integrates three novel strategies (tabu detecting, shrinking, and local learning) into PSO to overcome the aforementioned shortcomings. In DSLPSO, the search space of each dimension is divided into many equal subregions. The tabu detecting strategy, which has good ergodicity over the search space, helps the globally best historical particle detect a more suitable subregion and thus jump out of a local optimum. The shrinking strategy enables DSLPSO to optimize within a smaller search space and obtain a higher convergence speed. In the local learning strategy, a differential between two elite particles is used to increase solution accuracy. The experimental results show that DSLPSO outperforms several other PSO variants on most of the tested functions, offering faster convergence, higher solution accuracy, and stronger reliability.

8.
Automatic verbal wayfinding aids for blind pedestrians in simple, structured urban areas are claimed to rely on specific database features and guidance functions (i.e., instructions and spatial information provided at specific places). This paper reports an experiment in which these requirements, for such areas and for complex, unstructured urban areas, were tested by seven cane users and three guide-dog users. Very few hesitations and errors were found, along with the need for only minor modifications of the verbal guidance rules and for a one-meter-accuracy localization device for traversing crosswalks. Further research issues for designing a localized verbal navigational aid are also presented (i.e., extension/diversification of the population and gradual introduction of the interface).

9.
The ability to control the flow of particles (e.g., droplets and cells) in microfluidic environments can enable new methods for the synthesis of biomaterials (Mann and Ozin in Nature 382:313–318, 1996), biocharacterization, and medical diagnosis (Pipper et al. in Nat Med 13:1259–1263, 2007). Understanding the factors that affect particle passage can improve control over particle flow through microchannels (Vanapalli et al. in Lab Chip 9:982, 2009). The first step in understanding particle passage is to measure the resulting flow rate, the induced pressure drop across the channel, and other parameters. Flow rates and pressure drops during passage of a particle through microchannels are typically measured using microfluidic comparators. Since the first microfluidic comparators were reported, a few design factors have been explored experimentally and theoretically, e.g., sensitivity (Vanapalli et al. in Appl Phys Lett 90:114109, 2007). Nevertheless, there is still a gap in the understanding of the temporal and spatial resolution limits of microfluidic comparators. Here we explore, theoretically and experimentally, the factors that affect the spatial and temporal resolution. We determined that the comparator sensitivity is defined by the device geometry adjacent to and upstream of the measuring point in the comparator. Further, we determined that, in order of importance, the temporal resolution is limited by the convective timescale, the capacitive timescale due to channel expansion, and the unsteady timescale due to flow inertia. Finally, we explored the flow velocity limits by characterizing the transition from low to moderate Reynolds numbers (Re ≪ 1 to Re ≈ 50). The present work can guide the design of microfluidic comparators and clarify the limits of this technique.

10.
A mathematical basis is given for comparing the relative merits of various techniques used to reduce the order of large linear and nonlinear dynamics problems during their numerical integration. In such techniques as Guyan-Irons, path derivatives, selected eigenvectors, Ritz vectors, etc., the nth-order initial value problem [ẏ = f(y) for t > 0, y(0) given] is typically reduced to the mth-order (m ≪ n) problem [ż = g(z) for t > 0, z(0) given] by the transformation y = Pz, where P changes from technique to technique. This paper gives an explicit approximate expression for the reduction error e_i in terms of P and the Jacobian of f. It is shown that: (a) reduction techniques are more accurate when the time rate of change of the response y is relatively small; (b) the change in response between two successive stations contributes to the errors at future stations after the change in response is transformed by a filtering matrix H, defined in terms of P; (c) the error committed at a station propagates to future stations through a mixing and scaling matrix G, defined in terms of P, the Jacobian of f, and the time increment h. The paper discusses the conditions under which the reduction errors may be minimized and gives guidelines for selecting the reduction basis vectors, i.e., the columns of P.
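The reduction y = Pz can be demonstrated on a linear system. The sketch below is an idealized case, not the paper's error analysis: P spans an invariant (eigenvector) subspace of a stable linear system and the initial state lies inside it, so the reduced model reproduces the full trajectory essentially exactly; the error matrices H and G of the abstract quantify what happens when these assumptions fail.

```python
import numpy as np

# Galerkin-type reduction sketch: reduce the n-dimensional system
# ydot = A y via y ~ P z, zdot = (P^T A P) z, with orthonormal columns P.
rng = np.random.default_rng(1)
n, m = 20, 3
M = rng.normal(size=(n, n))
A = -(M @ M.T)                 # stable symmetric system matrix
w, V = np.linalg.eigh(A)
P = V[:, -m:]                  # basis: the m slowest eigenmodes

y0 = P @ rng.normal(size=m)    # initial state inside the reduced subspace
h, steps = 1e-3, 1000
y, z = y0.copy(), P.T @ y0
Ar = P.T @ A @ P               # reduced system matrix
for _ in range(steps):         # forward Euler in both spaces
    y = y + h * (A @ y)
    z = z + h * (Ar @ z)
err = np.linalg.norm(P @ z - y)
print(err)
```

Choosing P from Ritz vectors or path derivatives instead of exact eigenvectors introduces exactly the reduction error the paper bounds.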

11.
It is often desirable to have statistical tolerance limits available for the distributions used to describe time-to-failure data in reliability problems. For example, one might wish to know if at least a certain proportion, say β, of a manufactured product will operate at least T hours. This question cannot usually be answered exactly, but it may be possible to determine a lower tolerance limit L(X), based on a random sample X, such that one can say with a certain confidence γ that at least 100β% of the product will operate longer than L(X). Then reliability statements can be made based on L(X), or decisions can be reached by comparing L(X) to T. Tolerance limits of the type mentioned above are considered in this paper, which presents a new approach to constructing lower and upper tolerance limits on order statistics in future samples. Attention is restricted to invariant families of distributions under parametric uncertainty. The approach used here emphasizes pivotal quantities relevant for obtaining tolerance factors and is applicable whenever the statistical problem is invariant under a group of transformations that acts transitively on the parameter space. It does not require the construction of any tables and is applicable whether the past data are complete or Type II censored. The proposed approach requires a quantile of the F distribution and is conceptually simple and easy to use. For illustration, the Pareto distribution is considered. The discussion is restricted to one-sided tolerance limits. A practical example is given.
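To make the (β, γ) tolerance-limit idea concrete, here is the classical closed-form special case for exponential lifetimes (the paper's own worked example uses the Pareto distribution, and its pivotal construction is more general). For exponential data with mean θ, a proportion β survives beyond t_β = -θ ln β, and 2nX̄/θ follows a χ²(2n) distribution, which yields the limit below.

```python
import numpy as np
from scipy.stats import chi2

# Lower (beta, gamma) tolerance limit for exponential lifetimes:
# with confidence gamma, at least 100*beta% of items outlast L(X).
def exp_lower_tolerance_limit(x, beta=0.90, gamma=0.95):
    n = len(x)
    # Lower gamma-confidence bound on the mean theta, from 2*n*mean/theta ~ chi2(2n).
    theta_lower = 2 * n * np.mean(x) / chi2.ppf(gamma, 2 * n)
    return -np.log(beta) * theta_lower

x = np.full(50, 1000.0)   # illustrative sample with mean lifetime 1000 h
L = exp_lower_tolerance_limit(x)
print(round(L, 1))
```

Comparing L to a required mission time T then gives the accept/reject decision described in the abstract.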

12.
Alder, F. A., Dill, J. C., and Lindsey, A. R., Performance of Simplex Signaling in Circular Trellis-Coded Modulation, Digital Signal Processing 11 (2001), 159–167. This paper presents measures of performance of simplex signaling in a circular trellis-coded modulation (CTCM) scheme. Background is given on both CTCM and simplex signaling. The CTCM system is shown to give substantial coding gain compared with conventional BPSK. Performance is also shown to improve as trellis size increases.
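A simplex signal set, the signaling alphabet used here, is M unit-energy signals with equal pairwise correlation -1/(M-1), the most widely spread such constellation. One standard construction subtracts the centroid from an orthogonal set:

```python
import numpy as np

# Simplex signal set: start from M orthogonal signals, remove their
# centroid, and renormalize; the result has pairwise correlation -1/(M-1).
def simplex_signals(M):
    e = np.eye(M)
    s = e - e.mean(axis=0)                          # subtract the centroid
    s /= np.linalg.norm(s, axis=1, keepdims=True)   # unit energy per signal
    return s                                        # rows: M signals

S = simplex_signals(4)
G = S @ S.T                                         # Gram (correlation) matrix
print(np.allclose(np.diag(G), 1.0), np.allclose(G[0, 1], -1.0 / 3.0))
```

This negative equicorrelation is what gives simplex signaling its energy advantage over orthogonal signaling in the CTCM scheme.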

13.
Feature selection is an important preprocessing step in pattern recognition and machine learning, and feature evaluation arises as a key issue in the construction of feature selection algorithms. In this study, we introduce a new concept of neighborhood evidential decision error to evaluate the quality of candidate features and construct a greedy forward algorithm for feature selection. This technique considers both the Bayes error rate of classification and the spatial information of samples in the decision boundary regions. Within the decision boundary regions, each sample x_i in the neighborhood of x provides a piece of evidence reflecting the decision of x, so as to separate the decision boundary regions into two subsets: recognizable and misclassified regions. The percentage of misclassified samples is viewed as the Bayes error rate of classification in the corresponding feature subspace. By minimizing the neighborhood evidential decision error (i.e., the Bayes error rate), optimal feature subsets of the raw data set can be selected. Numerical experiments were conducted to validate the proposed technique on nine UCI classification datasets. The experimental results showed that the technique is effective in most cases and is insensitive to the size of the neighborhood, compared with other feature evaluation functions such as neighborhood dependency.
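The greedy forward loop has a simple shape. In this sketch the evaluation function is plain k-NN cross-validated error, standing in for the paper's neighborhood evidential decision error (an assumption, not their measure); the dataset and neighborhood size are also illustrative.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Greedy forward feature selection: repeatedly add the single feature
# that most reduces the (neighborhood-based) classification error.
X, y = load_iris(return_X_y=True)
selected, remaining = [], list(range(X.shape[1]))
best_err = 1.0
while remaining:
    errs = {f: 1 - cross_val_score(KNeighborsClassifier(3),
                                   X[:, selected + [f]], y, cv=5).mean()
            for f in remaining}
    f, err = min(errs.items(), key=lambda kv: kv[1])
    if err >= best_err:        # stop when no candidate improves the error
        break
    selected.append(f)
    remaining.remove(f)
    best_err = err
print(selected, round(best_err, 3))
```

Replacing the k-NN error with the evidential decision error over the neighborhood of each boundary sample gives the algorithm the abstract describes.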

14.
To obtain superior search performance in particle swarm optimization (PSO), we propose particle swarm optimization with diversive curiosity (PSO/DC). The mechanism of diversive curiosity in PSO can prevent premature convergence and ensure exploration. To clarify the characteristics of PSO/DC, we estimate the range of appropriate parameter values and investigate the trade-off between exploration and exploitation. Applications of the proposed method to a two-dimensional multimodal optimization problem and a suite of five-dimensional benchmark problems demonstrate its effectiveness. Our experimental results basically accord with findings in psychology, i.e., that diversive curiosity is prone to both exploration and anxiety.

15.
Plasmodium falciparum subtilisin-like protease 1 (SUB1) is a novel target for the development of innovative antimalarials. We recently described the first potent difluorostatone-based inhibitors of the enzyme ((4S)-(N-((N-acetyl-l-lysyl)-l-isoleucyl-l-threonyl-l-alanyl)-2,2-difluoro-3-oxo-4-aminopentanoyl)glycine (1) and (4S)-(N-((N-acetyl-l-isoleucyl)-l-threonyl-l-alanylamino)-2,2-difluoro-3-oxo-4-aminopentanoyl)glycine (2)). As a continuation of our efforts towards defining the molecular determinants of enzyme-inhibitor interaction, we herein propose the first comprehensive computational investigation of the SUB1 catalytic core from six different Plasmodium species, using homology modeling and molecular docking approaches. Investigation of the differences in the binding sites, as well as the interactions of our inhibitors 1 and 2 with all SUB1 orthologues, allowed us to highlight the structurally relevant regions of the enzyme that could be targeted for developing pan-SUB1 inhibitors. In agreement with our in silico predictions, compounds 1 and 2 were demonstrated to be potent inhibitors of SUB1 from all three major clinically relevant Plasmodium species (P. falciparum, P. vivax, and P. knowlesi). We next derived multiple structure-based pharmacophore models that were combined into an inclusive pan-SUB1 pharmacophore (SUB1-PHA). The latter was validated by in silico methods, showing that it may be useful for the future development of potent antimalarial agents.

16.
In this paper a definition of the boundary of a finite set of points in the plane is given. It is based on the concept of the density of such a set of points. An algorithm is given for finding an approximation to such a boundary. There are well-known definitions of connectedness, boundary, hole, and cluster for a finite set of points in the plane (see, e.g., A. Rosenfeld, Amer. Math. Monthly 86 (1979), 621–630; Inform. and Contr. 39 (1978), 19–34) and algorithms for constructing these objects (see, e.g., A. Rosenfeld and A. C. Kak, Digital Picture Processing, Academic Press, New York, 1976). These definitions have been given for subsets of a grid of points. This paper attempts to define these objects for an arbitrary finite set of points.
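One simple density-style boundary approximation, offered only as an illustration of the idea and not as the paper's exact definition, flags a point as "boundary" when its r-neighborhood contains markedly fewer points than is typical for the set:

```python
import numpy as np

# Density-based boundary approximation: a point is a boundary point if its
# r-neighborhood is noticeably sparser than the typical neighborhood.
def boundary_points(pts, r, frac=0.7):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    counts = (d < r).sum(axis=1)               # neighborhood occupancy
    return counts < frac * np.median(counts)   # sparse => boundary

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, size=(400, 2))        # a filled square
mask = boundary_points(pts, r=0.3)
inner = np.abs(pts).max(axis=1) < 0.5          # points well inside the square
print(mask[inner].mean(), mask[~inner].mean()) # boundary flags cluster at the rim
```

For a uniformly filled square, points near the edge see roughly half the neighbors of interior points, so the flagged set traces the rim; the threshold `frac` plays the role of the density criterion.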

17.
Khuwaja, G. A., An Adaptive Combined Classifier System for Invariant Face Recognition, Digital Signal Processing 12 (2002), 21–46. In classification tasks it may be wise to combine observations from different sources. In this paper, to obtain classification systems with both good generalization performance and efficiency in space and time, a learning vector quantization (LVQ) learning method based on combinations of weak classifiers is proposed. The weak classifiers are generated by automatic elimination of redundant hidden-layer neurons of the network, applied both to the entire face images and to the extracted features: forehead, right eye, left eye, nose, mouth, and chin. The neuron elimination is based on the killing of blind neurons, which are redundant. The classifiers are then combined through majority voting on the decisions of the input classifiers. It is demonstrated that the proposed system achieves better classification results, with both good generalization performance and a fast training time, on a variety of test problems using a large and variable database. The selection of stable and representative sets of features that efficiently discriminate between faces in a huge database is discussed.
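The combination step is plain majority voting over per-region weak classifiers. The sketch below uses a nearest-centroid classifier as a prototype-based stand-in for LVQ, and slices of the digits dataset as stand-ins for the face regions (eyes, nose, mouth, ...); both substitutions are assumptions for the sake of a runnable example.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid

# Majority voting over weak classifiers, each trained on a different
# feature slice (standing in for the extracted face regions).
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

slices = [slice(0, 24), slice(20, 44), slice(40, 64)]   # overlapping regions
preds = []
for s in slices:
    clf = NearestCentroid().fit(Xtr[:, s], ytr)   # prototype-style weak learner
    preds.append(clf.predict(Xte[:, s]))
preds = np.array(preds)                           # shape: (n_classifiers, n_test)

# Majority vote across the weak classifiers' decisions.
vote = np.apply_along_axis(lambda c: np.bincount(c, minlength=10).argmax(),
                           0, preds)
print(round((vote == yte).mean(), 3))
```

The vote typically beats the weakest individual region classifier, which is the effect the paper exploits.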

18.
This paper presents a robust approach to extracting content from instructional videos for handwriting recognition, indexing and retrieval, and other e-learning applications. For instructional videos of chalkboard presentations, retrieving the handwritten content (e.g., characters, drawings, figures) on the boards is the first and prerequisite step towards further exploitation of instructional video content. However, content extraction in instructional videos remains challenging due to video noise, non-uniformity of color in board regions, lighting changes within a video session, camera movements, and unavoidable occlusions by instructors. To solve this problem, we first segment video frames into multiple regions and estimate the parameters of the board regions based on statistical analysis of the pixels in dominant regions. Then we accurately separate the board regions from irrelevant regions using a probabilistic classifier. Finally, we combine top-hat morphological processing with a gradient-based adaptive thresholding technique to retrieve content pixels from the board regions. Evaluation of the content extraction results on four full-length instructional videos shows the high performance of the proposed method. The extraction of content text facilitates research on the full exploitation of instructional videos, such as content enhancement, indexing, and retrieval.
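The final step can be sketched on synthetic data. A white top-hat transform lifts thin bright strokes off a slowly varying board background, after which a threshold isolates them; the global mean-plus-3-sigma rule below is a simplifying assumption standing in for the paper's gradient-based adaptive threshold, and the synthetic "board" is illustrative.

```python
import numpy as np
from scipy import ndimage

# Synthetic board: a horizontal lighting gradient plus noise, with one
# thin bright "chalk stroke" drawn across it.
rng = np.random.default_rng(4)
board = np.linspace(80, 120, 100)[None, :] * np.ones((100, 1))
img = board + rng.normal(0.0, 2.0, (100, 100))
img[40:43, 10:90] += 60.0                       # the stroke

# White top-hat (image minus its opening): the thin stroke survives,
# the smooth background and lighting gradient are suppressed.
tophat = ndimage.white_tophat(img, size=9)
mask = tophat > tophat.mean() + 3.0 * tophat.std()
print(round(mask[41, 20:80].mean(), 2), round(mask[:35].mean(), 3))
```

The structuring-element size just needs to exceed the stroke width; content pixels then stand out regardless of the uneven board illumination.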
Chekuri ChoudaryEmail:
  相似文献   

19.
A region growing scheme based upon the facet model (R. M. Haralick, Computer Graphics Image Processing 12 (1980), 60–73; R. M. Haralick and L. T. Watson, Computer Graphics Image Processing 15 (1981), 113–129) is presented. The process begins with an initial segmentation which preserves much of the detailed resolution of the original image. Next, a region property list and a region adjacency graph corresponding to the segmented image are constructed. Global information is then used to merge atomic regions. The region growing algorithm is based upon extensions of the facet model, but it is a higher-level algorithm which treats regions as primitive elements. The basic algorithm and several variations are described, including a version that uses a threshold on the amount a property vector is allowed to change to control the region growing process. The convergence of this thresholded facet iteration is also proved. Finally, the results of comparative experiments are presented.
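The merge machinery (property list plus adjacency graph) can be sketched with a scalar property. This is a minimal illustration under simplifying assumptions: the property vector is reduced to a mean grey level, and the merge criterion is a fixed threshold on the property difference rather than the facet-model fit.

```python
# Region-merging sketch: regions carry a (mean, pixel-count) property and an
# adjacency set; repeatedly merge the most similar adjacent pair while the
# property difference stays under the threshold.
def merge_regions(props, adjacency, thresh):
    props = dict(props)
    adj = {k: set(v) for k, v in adjacency.items()}
    while True:
        pairs = [(abs(props[a][0] - props[b][0]), a, b)
                 for a in adj for b in adj[a] if a < b]
        if not pairs:
            break
        d, a, b = min(pairs)
        if d > thresh:                      # thresholded change control
            break
        (ma, na), (mb, nb) = props[a], props.pop(b)
        props[a] = ((ma * na + mb * nb) / (na + nb), na + nb)  # pooled mean
        adj[a] = (adj[a] | adj.pop(b)) - {a, b}
        for s in adj.values():              # rewire b's neighbors to a
            if b in s:
                s.discard(b)
                s.add(a)
    return props

regions = {1: (10, 100), 2: (12, 100), 3: (50, 100), 4: (52, 100)}
neighbours = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(merge_regions(regions, neighbours, thresh=5))
```

On this toy chain the two dark regions and the two bright regions merge pairwise, and the merge across the large grey-level jump is blocked by the threshold.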

20.
Fan, H., and De, P., High Speed Adaptive Signal Processing Using the Delta Operator, Digital Signal Processing 11 (2001), 3–34. In this paper the use of the delta operator, i.e., a scaled difference operator, in adaptive signal processing with fast sampling is presented. It is recognized that most discrete-time signals and systems are the result of sampling continuous-time signals and systems. When sampling is fast, all resulting signals and systems tend to become ill conditioned and thus difficult to deal with using the conventional algorithms. The delta operator based algorithms, as developed in this paper, are numerically better behaved under finite precision implementations for fast sampling. Therefore, they provide many improvements in terms of numerical accuracy and/or convergence speed. Furthermore, the delta operator based algorithms can in most cases be shown to have meaningful continuous-time limits as the sampling becomes faster and faster. Thus they function as a bridge in unifying discrete-time algorithms with continuous-time algorithms. This enhances our insight into and overall understanding of these various algorithms. In this paper, several well-known algorithms in statistical and adaptive signal processing are cast into their delta operator counterparts. Some new delta operator based algorithms are also developed. Whenever applicable, the corresponding continuous-time limits of these delta operator based algorithms are pointed out. Computer simulation results using finite precision implementation are also presented for some of the new algorithms, which generally show much improvement compared with the results from using traditional algorithms.
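The conditioning argument can be seen on a first-order example. For the sampled system x[k+1] = a_q x[k] with a_q = exp(aΔ), the shift-operator coefficient a_q crowds toward 1 as the sampling interval Δ shrinks, while the delta-operator coefficient a_δ = (a_q - 1)/Δ, where δx[k] = (x[k+1] - x[k])/Δ, tends to the continuous-time parameter a itself:

```python
import numpy as np

# Shift vs delta parameterization of xdot = a x under fast sampling.
a = -2.0
for dt in (1e-1, 1e-3, 1e-6):
    a_q = np.exp(a * dt)          # shift-operator pole: -> 1 (ill conditioned)
    a_d = (a_q - 1.0) / dt        # delta-operator coefficient: -> a itself
    print(dt, a_q, a_d)
```

Under finite precision, coefficients clustered near 1 lose significant digits, while the delta-form coefficients stay well scaled, which is the numerical advantage the paper develops for adaptive algorithms.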


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号