Found 20 similar documents; search took 15 ms
1.
In text-independent speaker verification, methods based on the total variability space have become mainstream, among which Probabilistic Linear Discriminant Analysis (PLDA) has drawn wide attention for its excellent performance. However, the traditional PLDA model does not account for the variability caused by duration mismatch between enrollment and test utterances, and therefore cannot adequately address the performance degradation this mismatch causes in speaker verification systems. This paper proposes a method for estimating duration-mismatch variability and incorporates it into the PLDA model, improving the model's robustness to duration differences. Experiments on NIST databases show that the proposed method compensates well for duration mismatch and outperforms the baseline PLDA method.
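For reference, a commonly used simplified PLDA generative model over total-variability (i-vector) representations, which may differ from the paper's exact parameterization, is

```latex
w_{ij} = \mu + \Phi \beta_i + \varepsilon_{ij},
\qquad \beta_i \sim \mathcal{N}(0, I),
\qquad \varepsilon_{ij} \sim \mathcal{N}(0, \Sigma)
```

where \(w_{ij}\) is the \(j\)-th vector of speaker \(i\), \(\mu\) is the global mean, \(\Phi\) spans the speaker subspace, and \(\varepsilon_{ij}\) captures within-speaker variation; duration-mismatch information would be folded into this last component.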
2.
Steganography is used to secure multimedia data: secret data is embedded inside a carrier file so that it remains protected while transmitted between the communicating parties. The research focuses on Arabic text steganography, a currently challenging research area. The work's innovation is the use of pseudo-space characters for data hiding. We present two studies of this text steganography, using pseudo-spaces alone as well as combined with Kashida (the extension character) used in earlier Arabic text stego techniques. Experimental results show that the proposed algorithms achieve higher capacity and security ratios than state-of-the-art steganography methods for Arabic. The proposed pseudo-space stego technique can also benefit languages similar to Arabic, such as Urdu and Persian, and opens a direction for text-stego research in other languages.
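To illustrate the flavor of pseudo-space hiding, here is a minimal sketch that hides bits with a zero-width character after spaces. It is an illustrative analogue, not the authors' Arabic-specific algorithm; the choice of U+200C and the one-bit-per-space scheme are assumptions.

```python
ZWNJ = "\u200c"  # zero-width non-joiner: its presence after a space marks a 1 bit
# a 0 bit is a plain space with no zero-width character after it

def embed(cover: str, bits: str) -> str:
    """Embed one bit after each space in the cover text."""
    words = cover.split(" ")
    if len(bits) > len(words) - 1:
        raise ValueError("cover text has too few spaces for the payload")
    out = []
    for i, word in enumerate(words[:-1]):
        mark = ZWNJ if i < len(bits) and bits[i] == "1" else ""
        out.append(word + " " + mark)
    out.append(words[-1])
    return "".join(out)

def extract(stego: str, nbits: int) -> str:
    """Recover bits by checking for the zero-width mark after each space."""
    chunks = stego.split(" ")
    return "".join("1" if chunk.startswith(ZWNJ) else "0"
                   for chunk in chunks[1:nbits + 1])
```

Because the mark is invisible, the stego text renders identically to the cover text, which is the capacity/invisibility trade-off the abstract alludes to.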
3.
With the rapid increase of learning materials and learning objects in e-learning, the need for recommender systems has become ever more pressing. Although traditional recommendation systems have achieved great success in many domains, they are not well suited to e-learning, where recommendation is hybrid and driven mainly by two mechanisms: the learners' learning processes and the analysis of social interaction. This study therefore proposes a flexible recommendation approach to satisfy this demand, designed around a multidimensional recommendation model. Furthermore, we use a Markov chain model to divide learners into advanced and beginner groups based on their learning activities and processes, so that we can correctly estimate ratings that also reflect learners' social interaction. Experimental results show that the proposed system gives more satisfying and better-qualified recommendations.
4.
Malware is code designed for a malicious purpose, such as obtaining root privilege on a host. A malware detector identifies malware and thus prevents it from adversely affecting a host. In order to evade detection, malware writers use various obfuscation techniques to transform their malware. There is strong evidence that commercial malware detectors are susceptible to these evasion tactics. In this paper, we describe the design and implementation of a malware transformer that reverses the obfuscations performed by a malware writer. Our experimental evaluation demonstrates that this malware transformer can drastically improve the detection rates of commercial malware detectors.
5.
In a content-based video retrieval system, shot boundary detection is an unavoidable stage. Such a highly demanding task needs deep study from a computational point of view to find suitable optimization strategies. This paper presents different strategies implemented on both a shared-memory symmetric multiprocessor and a Beowulf cluster, and evaluates two programming paradigms: shared memory and message passing. Several approaches to video segmentation and data access are tested in experiments that also consider load balancing.
6.
When humans produce summaries of documents, they do not simply extract sentences and concatenate them. Rather, they create new sentences that are grammatical, that cohere with one another, and that capture the most salient pieces of information in the original document. Given that large collections of text/abstract pairs are available online, it is now possible to envision algorithms that are trained to mimic this process. In this paper, we focus on sentence compression, a simpler version of this larger challenge. We aim to achieve two goals simultaneously: our compressions should be grammatical, and they should retain the most important pieces of information. These two goals can conflict. We devise both a noisy-channel and a decision-tree approach to the problem, and we evaluate results against manual compressions and a simple baseline.
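In the standard noisy-channel formulation of sentence compression (notation here is ours, not necessarily the paper's), the observed long sentence \(l\) is treated as a noisy expansion of an underlying short string, and the best compression is

```latex
\hat{c} = \arg\max_{c} \; P(c)\,P(l \mid c)
```

where the source model \(P(c)\) favors grammatical short strings and the channel model \(P(l \mid c)\) describes how a compression expands into the observed sentence, so the two competing goals (grammaticality and information retention) appear as separate factors.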
7.
Neural Computing and Applications - There is an urgent need, accelerated by the COVID-19 pandemic, for methods that allow clinicians and neuroscientists to remotely evaluate hand movements. This...
8.
Efficient and robust saliency detection is a fundamental problem in computer vision because of its wide applications, such as image segmentation and image retargeting. In this paper, aiming to uniformly highlight the salient objects and suppress the saliency of the background, we propose an efficient three-stage saliency detection method. First, a boundary prior and a connectivity prior are used to generate coarse saliency maps. To suppress the saliency of cluttered background, two supergraphs are created together with the adjacency graph, so that the saliency of background regions with similar appearances that are separated by other regions can be reduced effectively. Second, a local context-based saliency propagation is proposed to refine the saliency so that regions with similar features hold similar saliency. Finally, a logistic regressor is learned to automatically combine the three refined saliency maps into the final map. The proposed method improves saliency detection on many cluttered images. Experimental results on two widely used public datasets with pixel-accurate salient region annotations show that our method outperforms the state of the art.
9.
Effective ranking algorithms for mobile Web search are being actively pursued. Because of the peculiar and troublesome properties of mobile content, such as scant text, few outward links, and few input keywords, conventional Web search techniques using bag-of-words ranking functions or link-based algorithms are not good enough for mobile Web search. Our solution is to use click logs to identify access-concentrated search results for each query and to use their titles and snippets to expand the queries. Many previous works treat absolute click counts as the degree of access concentration, but raw counts are strongly biased: higher-ranked search results are clicked more easily than lower-ranked ones. As a result, only higher-ranked search results appear access-concentrated, and only terms extracted from them can be used to expand a query. In this paper, we introduce a new measure that estimates the degree of access concentration. This measure is used to precisely extract access-concentrated sites from many search results and to expand queries with terms extracted from them. We conducted an experiment using click logs and data from an actual mobile Web search site. The results show that our proposed method boosts search precision more effectively than other query expansion methods, such as using the top-K search results or the most-often-clicked results.
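A minimal sketch of the idea of correcting click counts for position bias follows. The 1/rank examination model is an illustrative assumption, not the paper's proposed measure; it simply shows why raw clicks overstate the concentration of top-ranked results.

```python
def access_concentration(results):
    """results: list of (rank, clicks, impressions), rank starting at 1.
    Returns a position-bias-corrected click rate keyed by rank."""
    scores = {}
    for rank, clicks, impressions in results:
        examination = 1.0 / rank            # assumed probability the result is even seen
        expected_views = impressions * examination
        scores[rank] = clicks / expected_views if expected_views else 0.0
    return scores
```

Under this model, a rank-5 result with one fifth of the raw clicks of a rank-1 result receives the same concentration score, so lower-ranked but genuinely popular results are no longer excluded from query expansion.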
10.
Latent Semantic Indexing (LSI) is a standard approach for extracting and representing the meaning of words in a large set of documents. Recently it has been shown to be useful for identifying concerns in source code as well. The tree cutting strategy plays an important role in obtaining the clusters that identify the concerns. In this contribution the authors compare two tree cutting strategies: the Dynamic Hybrid cut and the commonly used fixed height threshold. Two case studies were performed on the source code of Philips Healthcare to compare the two approaches. While some of the settings are particular to the Philips case, the results show that applying a dynamic threshold, implemented by the Dynamic Hybrid cut, improves on the fixed height threshold in detecting clusters that represent relevant concerns. This makes the approach as a whole more usable in practice.
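As an illustration of the fixed height threshold the paper compares against, here is a minimal sketch that cuts a dendrogram at a given height. The merge-list representation is an assumption for illustration, not the authors' implementation.

```python
def fixed_height_cut(n_leaves, merges, threshold):
    """merges: list of (cluster_a, cluster_b, height) in increasing height order;
    leaves are ids 0..n_leaves-1, each merge creates id n_leaves, n_leaves+1, ...
    Applies only merges below the threshold and returns the resulting clusters."""
    parent = list(range(n_leaves + len(merges)))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    next_id = n_leaves
    for a, b, height in merges:
        if height < threshold:          # the fixed height cut: ignore higher merges
            parent[find(a)] = next_id
            parent[find(b)] = next_id
        next_id += 1

    clusters = {}
    for leaf in range(n_leaves):
        clusters.setdefault(find(leaf), []).append(leaf)
    return sorted(clusters.values())
```

The single global threshold is exactly what the Dynamic Hybrid cut replaces: it instead adapts the cut height per branch, which is why it can recover clusters at different granularities.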
11.
For intrusion detection, the LERAD algorithm learns a succinct set of comprehensible rules for detecting anomalies, which could be novel attacks. LERAD validates the learned rules on a separate held-out validation set and removes rules that cause false alarms. However, removing rules with possibly high coverage can lead to missed detections. We propose three techniques for increasing coverage: Weighting, Replacement, and Hybrid. Weighting retains previously pruned rules and associates weights with them. Replacement, on the other hand, substitutes pruned rules with other candidate rules to ensure high coverage. We also present a Hybrid approach that selects between the two techniques based on training data coverage. Empirical results from seven data sets indicate that, for LERAD, increasing coverage by Weighting, Replacement, and Hybrid detects more attacks than Pruning, with minimal computational overhead.
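A minimal sketch of the Weighting idea: rather than discarding rules that raised false alarms on the validation set, keep them at reduced weight when scoring records. The weight formula 1 / (1 + false alarms) is our illustrative assumption, not the paper's.

```python
def anomaly_score(record, rules):
    """rules: list of (predicate, false_alarms) pairs, where predicate(record)
    is True when the record conforms to the rule. Higher score = more anomalous."""
    total = 0.0
    for predicate, false_alarms in rules:
        weight = 1.0 / (1.0 + false_alarms)  # noisy rules contribute less
        if not predicate(record):            # rule violated by this record
            total += weight
    return total
```

With pruning, the second rule below would have been dropped outright; with weighting it still contributes evidence, just attenuated, which is how coverage is preserved.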
13.
In recent years, much attention has been given to the problem of outlier detection, which aims to detect outliers: objects that behave in an unexpected way or have abnormal properties. Identifying outliers is important for many applications, such as intrusion detection, credit card fraud, criminal activity in electronic commerce, medical diagnosis, and anti-terrorism. In this paper, we propose a hybrid approach to outlier detection that combines the opinions of boundary-based and distance-based methods ([Jiang et al., 2005], [Jiang et al., 2009], [Knorr and Ng, 1998]). We give a novel definition of outliers, BD (boundary and distance)-based outliers, by virtue of the notion of boundary region in rough set theory and the definitions of distance-based outliers. An algorithm to find such outliers is also given, and the effectiveness of our method is demonstrated on two publicly available databases.
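The distance-based component cited above, the DB(p, D) definition of [Knorr and Ng, 1998], can be sketched as follows. This is an illustrative one-dimensional sketch; the boundary-region component from rough set theory is not reproduced here.

```python
def db_outliers(points, p, d):
    """Knorr-Ng DB(p, d) outliers: a point is an outlier if at least a
    fraction p of the other points lie farther than distance d from it."""
    outliers = []
    for i, o in enumerate(points):
        far = sum(1 for j, q in enumerate(points) if j != i and abs(q - o) > d)
        if far / (len(points) - 1) >= p:
            outliers.append(o)
    return outliers
```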
14.
Effective human and automatic processing of speech requires recovery of more than just the words. It also involves recovering phenomena such as sentence boundaries, filler words, and disfluencies, referred to as structural metadata. We describe a metadata detection system that combines information from different types of textual knowledge sources with information from a prosodic classifier. We investigate maximum entropy and conditional random field models, as well as the predominant hidden Markov model (HMM) approach, and find that discriminative models generally outperform generative models. We report system performance on both broadcast news and conversational telephone speech tasks, illustrating significant performance differences across tasks and as a function of recognizer performance. The results represent the state of the art, as assessed in the NIST RT-04F evaluation.
15.
User Modeling and User-Adapted Interaction - Pervasive computing environments deliver a multitude of possibilities for human–computer interactions. Modern technologies, such as gesture...
16.
In the era of big data, data records are often interrelated in many areas, such as marketing, management, health care, and education. Such interrelated data can be naturally represented as networks of nodes and edges. Inside this type of network there is usually a hidden community structure, where each community represents a relatively independent functional module. Hidden community structures are useful for many applications, such as word-of-mouth marketing, promoting decentralized social interactions inside organizations, and searching for biological pathways related to various diseases. Detecting hidden community structures is therefore an important task with wide applications. Among existing community detection methods, modularity-based methods are widely used: they detect communities with more internal edges than expected under the null hypothesis of independence. Since research in correlation analysis also searches for patterns that occur more often than expected under the null hypothesis of independence, this paper proposes a framework for replacing the original modularity function with different existing correlation functions from the correlation analysis research area. Such a framework can exploit not only current but also future research progress in correlation analysis to advance community detection. In addition, a novel graphical analysis of the different modified-modularity functions is conducted to analyze their different preferences, which are also validated by our evaluation on both real-life and simulated networks. Our work connecting modularity-based methods with correlation analysis has several significant impacts on community detection research and its applications to expert and intelligent systems.
First, progress in correlation analysis can be used to define a more effective objective function for community detection, since different real-life applications may need communities at different resolutions. Second, any existing research progress on the modularity function, such as the Louvain method for speeding up the search and various extensions for overlapping community detection, can be applied in a similar way to the new objective functions derived from existing correlation functions, because they are unified within one framework with the modularity function. Third, our framework opens a large unexplored area for researchers interested in community detection. For example, what is the best heuristic search method for each objective function? What are the characteristics of each objective function when applied to overlapping community detection? Among the different extensions to overlapping community detection, which is best for each objective function?
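The modularity objective that this framework generalizes can be sketched as follows (standard Newman modularity; the edge-list representation is our assumption): for each community, compare its fraction of internal edges against the fraction expected if edges were placed independently of communities.

```python
def modularity(edges, community):
    """edges: list of (u, v) undirected edges; community: dict node -> label.
    Q = sum over communities of (intra_edges/m - (degree_sum/(2m))^2)."""
    m = len(edges)
    intra = {}    # number of edges inside each community
    degree = {}   # sum of node degrees per community
    for u, v in edges:
        degree[community[u]] = degree.get(community[u], 0) + 1
        degree[community[v]] = degree.get(community[v], 0) + 1
        if community[u] == community[v]:
            intra[community[u]] = intra.get(community[u], 0) + 1
    return sum(intra.get(c, 0) / m - (degree[c] / (2 * m)) ** 2
               for c in degree)
```

The paper's framework keeps this "observed minus expected" structure but swaps the specific comparison for other correlation functions.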
17.
Speech and hand gestures are the most natural modalities for everyday human-to-human interaction. The availability of diverse spoken dialogue applications and the proliferation of accelerometers in consumer electronics allow the introduction of new interaction paradigms based on speech and gestures. Little attention has been paid, however, to manipulating spoken dialogue systems (SDS) through gestures. Situation-induced disabilities or real disabilities are determinant factors that motivate this type of interaction. In this paper, six concise and intuitively meaningful gestures are proposed that can trigger commands in any SDS. Using different machine learning techniques, a classification error of less than 5% is achieved for the gesture patterns, and the proposed gesture set is compared to ones proposed by users. Examining the social acceptability of this interaction scheme, we find high levels of acceptance for public use. An experiment comparing a button-enabled and a gesture-enabled interface showed that the latter imposes little additional mental and physical effort. Finally, results are reported after recruiting a male subject with spastic cerebral palsy, a blind female user, and an elderly female person.
18.
In this paper, we propose a novel scheme for simulating one kind of geometric active contour (geometric flow) by applying multiquadric (MQ) quasi-interpolation. We first represent the geometric flow in its parametric form. We then obtain the numerical scheme by using the derivatives of the quasi-interpolation to approximate the spatial derivative of each dependent variable, and a forward difference to approximate its temporal derivative. The resulting scheme is simple, efficient, and easy to implement. Images with complex boundaries can also be handled more easily thanks to the good properties of MQ quasi-interpolation. Several biomedical and astronomical application examples are shown in the paper, and comparisons with other methods illustrate the validity of the approach.
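For context, the multiquadric basis behind MQ quasi-interpolation is Hardy's standard kernel (a standard form; the paper's exact shape parameter and quasi-interpolant construction may differ):

```latex
\phi_j(x) = \sqrt{(x - x_j)^2 + c^2}
```

where the \(x_j\) are the centers and \(c > 0\) is a shape parameter. The quasi-interpolant approximates a function by a linear combination of basis functions built from the \(\phi_j\), and its analytic derivatives stand in for the spatial derivatives of the flow.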
20.
Projection pursuit learning networks (PPLNs) have been used in many fields of research but not widely in image processing. In this paper we demonstrate how this highly promising technique can be used to connect edges and produce continuous boundaries. We also propose applying PPLNs to deblurring a degraded image when little or no a priori information about the blur is available. The PPLN successfully developed an inverse blur filter to enhance blurry images. Theory and background on projection pursuit regression (PPR) and PPLNs are also presented.