Similar references
20 similar references found (search time: 0 ms)
1.
张雯  王沁  张晓彤 《微计算机信息》2007,23(33):190-192
SNMP is a widely deployed network management protocol. As the volume of management information that has to be transferred between the management station and the agent grows rapidly, higher demands are placed on its communication efficiency, and the transfer of large tables is a particularly pressing problem. Targeting the characteristics of table retrieval, an improved GetBulkRequest is proposed: an algorithm module added to the agent avoids the large amount of irrelevant data that GetBulkRequest would otherwise return, which wastes bandwidth and increases response delay, thereby improving the efficiency of table retrieval.
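A minimal sketch of the filtering idea, assuming the agent's MIB is modelled as a plain Python dict keyed by OID tuples; the function name and the prefix-based stop rule are illustrative, not the algorithm module proposed in the paper.

```python
# Toy agent-side bulk retrieval that stops at the end of the requested table,
# so the response never spills into unrelated parts of the MIB.
from bisect import bisect_right

def get_bulk_filtered(mib, start_oid, max_repetitions, table_prefix):
    """Return up to max_repetitions (oid, value) pairs that follow start_oid
    in lexicographic OID order and lie inside table_prefix."""
    oids = sorted(mib)                          # lexicographic OID order
    pos = bisect_right(oids, start_oid)         # first OID strictly after start_oid
    result = []
    for oid in oids[pos:]:
        if len(result) >= max_repetitions:
            break
        if oid[:len(table_prefix)] != table_prefix:
            break                               # left the table: stop early
        result.append((oid, mib[oid]))
    return result

if __name__ == "__main__":
    if_table = (1, 3, 6, 1, 2, 1, 2, 2)                       # ifTable prefix
    mib = {if_table + (1, 1, i): f"if{i}" for i in range(1, 4)}
    mib[(1, 3, 6, 1, 2, 1, 4, 1, 0)] = 1                      # unrelated scalar
    print(get_bulk_filtered(mib, if_table, 10, if_table))     # only ifTable rows
```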

2.
3.
A biological information storage schema based on software design patterns   (Total citations: 1; self-citations: 0; other citations: 1)
To eliminate schema heterogeneity among bioinformatics databases, a storage schema is proposed based on the current state of biological information storage. The schema abstracts the information in the data along five facets: species, category, basic information, function, and sequencing method. Drawing on software design patterns, object-oriented operations such as "derivation" and "assembly" are used to generate the XML Schema files corresponding to the schema. The abstracted storage schema not only tightens the relationships among the data but also forms a cross-indexed, complete body of biological information.
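A small sketch of the "assembly" step under stated assumptions: the five facets named in the abstract are composed into one XML Schema using Python's standard xml.etree; the element names and types are hypothetical placeholders, not the schema defined in the paper.

```python
# Assemble an illustrative XML Schema from the five abstraction facets:
# species, category, basic information, function, sequencing method.
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("xs", XS)

def facet_element(name, xs_type="xs:string"):
    return ET.Element(f"{{{XS}}}element", name=name, type=xs_type)

def assemble_record_schema(facets):
    schema = ET.Element(f"{{{XS}}}schema")
    record = ET.SubElement(schema, f"{{{XS}}}element", name="bioRecord")
    ctype = ET.SubElement(record, f"{{{XS}}}complexType")
    seq = ET.SubElement(ctype, f"{{{XS}}}sequence")
    for facet in facets:                      # "assemble" the facets into one record
        seq.append(facet_element(facet))
    return schema

if __name__ == "__main__":
    facets = ["species", "category", "basicInfo", "function", "sequencingMethod"]
    tree = assemble_record_schema(facets)
    ET.indent(tree)                           # pretty-print (Python 3.9+)
    print(ET.tostring(tree, encoding="unicode"))
```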

4.
Efficient storage techniques for digital continuous multimedia   (Total citations: 4; self-citations: 0; other citations: 0)
The problem of collocational storage of media strands, which are sequences of continuously recorded audio samples or video frames, on disk to support the integration of storage and transmission of multimedia data with computing is examined. A model is presented that relates disk and device characteristics to the playback rates of media strands and derives storage patterns that guarantee continuous retrieval of media strands. To utilize disk space efficiently, mechanisms are developed for merging the storage patterns of multiple media strands by filling the gaps between media blocks of one strand with media blocks of other strands. Both an online algorithm suitable for merging a new media strand into a set of already stored strands and an offline merging algorithm that can be applied a priori to a set of media strands before any of them has been stored on disk are proposed. As a consequence of merging, the storage patterns of media strands may become slightly perturbed; read-ahead and buffering techniques that compensate for this and keep retrieval continuous are also presented.
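A toy sketch of the gap-filling idea under simplifying assumptions (a disk modelled as a flat list of equal-sized block slots, fixed gaps): strand A is laid out with the gaps its playback rate requires, and strand B's blocks are merged into those gaps.

```python
# Merge two media strands on one "disk" by storing the second strand's
# blocks in the gaps left by the first strand's storage pattern.
def lay_out(disk, strand, start, gap):
    """Place strand blocks every (gap + 1) slots starting at 'start',
    skipping slots that are already occupied (slight perturbation)."""
    slot = start
    for block in strand:
        while disk[slot] is not None:
            slot += 1
        disk[slot] = block
        slot += gap + 1
    return disk

if __name__ == "__main__":
    disk = [None] * 16
    lay_out(disk, [f"A{i}" for i in range(4)], start=0, gap=3)   # A0 _ _ _ A1 ...
    lay_out(disk, [f"B{i}" for i in range(4)], start=1, gap=3)   # fill A's gaps
    print(disk)
```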

5.
With the great advantages of digitization, more and more documents are being transformed into digital representations. Most content digitization of documents is performed by scanners or digital cameras. However, the transformation may degrade image quality due to lighting variations, i.e. uneven illumination. In this paper we describe a new approach for compensating uneven illumination in text images while retaining a high degree of text recognizability. The proposed scheme first enhances the contrast of the scanned document and then generates an edge map from the contrast-enhanced image to locate the text area. Using the text location, a light-distribution (background) image is created to help produce the final light-balanced image. Simulation results demonstrate that our approach is superior to the previous works of Hsia et al. (2005, 2006).
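A minimal numpy sketch of the background-estimation idea only (estimate a smooth illumination image by block averaging and divide it out); the paper's contrast-enhancement and edge-map/text-location steps are not reproduced here, and the block size is an assumed parameter.

```python
import numpy as np

def estimate_background(img, block=32):
    """Approximate the illumination field by per-block mean intensity."""
    h, w = img.shape
    bg = np.empty((h, w), dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            bg[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return bg

def correct_illumination(img):
    bg = estimate_background(img.astype(float))
    out = img / np.maximum(bg, 1e-6) * 128.0      # renormalize to a mid-gray level
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # synthetic page: left-to-right brightness gradient with one dark "text" line
    page = np.tile(np.linspace(60, 220, 256), (256, 1))
    page[100:110, :] = 20
    balanced = correct_illumination(page)
    print(balanced.min(), balanced.max())
```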

6.
Binary decomposition methods transform multiclass learning problems into a series of two-class learning problems that can be solved with simpler learning algorithms. As the number of such binary learning problems often grows super-linearly with the number of classes, we need efficient methods for computing the predictions. In this article, we discuss an efficient algorithm that queries only a dynamically determined subset of the trained classifiers, but still predicts the same classes that would have been predicted if all classifiers had been queried. The algorithm is first derived for the simple case of pairwise classification, and then generalized to arbitrary pairwise decompositions of the learning problem in the form of ternary error-correcting output codes under a variety of different code designs and decoding strategies.
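A simplified Python sketch of the pairwise (one-vs-one) case: binary classifiers are queried on demand, and prediction stops as soon as the class with the fewest votes against it has been compared with every other class, so it can no longer be overtaken. The generalization to ternary ECOC matrices is not shown, and the classifier interface (a dict mapping class pairs to callables) is an assumption made for illustration.

```python
from itertools import combinations

def predict_pairwise(classes, classifiers, x):
    """Predict the pairwise-voting winner while querying only a subset
    of the binary classifiers."""
    loss = {c: 0 for c in classes}                    # votes against each class
    evaluated = set()
    while True:
        a = min(classes, key=lambda c: loss[c])       # current best candidate
        rivals = [c for c in classes
                  if c != a and frozenset((a, c)) not in evaluated]
        if not rivals:                                # a cannot be overtaken: done
            return a
        b = min(rivals, key=lambda c: loss[c])
        winner = classifiers[tuple(sorted((a, b)))](x)
        loss[b if winner == a else a] += 1
        evaluated.add(frozenset((a, b)))

if __name__ == "__main__":
    classes = [0, 1, 2, 3]
    # toy binary "classifiers": the larger class label always wins
    clfs = {(i, j): (lambda x, w=j: w) for i, j in combinations(classes, 2)}
    print(predict_pairwise(classes, clfs, x=None))    # -> 3, with fewer than 6 queries
```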

7.
Multimedia Tools and Applications - Sentiment analysis is a domain of study that focuses on identifying and classifying the ideas expressed in the form of text into positive, negative and neutral...

8.
A heuristic algorithm synthesizes efficient multiplexers consisting of a tree of multiplexer components drawn from a technology library. The synthesized multiplexer structures are more area- and delay-efficient than those generated by commercial tools.

9.
The contradictory requirements of data privacy and data analysis have fostered the development of statistical disclosure control techniques. In this context, microaggregation is one of the most frequently used methods, since it offers a good trade-off between simplicity and quality. Unfortunately, most currently available microaggregation algorithms were devised to work with small datasets, while the size of current databases is constantly increasing. The usual way to tackle this problem is to partition large data volumes into smaller fragments that can be processed in reasonable time by available algorithms, at the cost of losing quality. In this paper, we revisit the computational needs of microaggregation, showing that it can be reduced to two steps: sorting the dataset with respect to a vantage point, and a set of k-nearest-neighbour searches. Based on this view, we propose three new efficient, quality-preserving microaggregation algorithms built on k-nearest-neighbour search techniques. We compare our approaches with the most significant strategies in the literature using three very large real datasets. Experimental results show that our proposals outperform previous techniques by keeping a better balance between performance and the quality of the anonymized dataset.
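A short numpy sketch of the two-step view under stated assumptions: records are sorted once by distance to a vantage point (here, a corner of the data cloud), consecutive runs of k records form groups, and each record is replaced by its group centroid; the paper's k-nearest-neighbour refinements are not reproduced.

```python
import numpy as np

def microaggregate(data, k, vantage=None):
    """Replace each record by the centroid of its k-member group."""
    data = np.asarray(data, dtype=float)
    if vantage is None:
        vantage = data.max(axis=0)                  # an extreme "corner" vantage point
    order = np.argsort(np.linalg.norm(data - vantage, axis=1))
    n = len(data)
    groups = [order[i:i + k] for i in range(0, n - n % k, k)]
    if n % k:                                       # fold the remainder into the last group
        groups[-1] = np.concatenate([groups[-1], order[n - n % k:]])
    anon = np.empty_like(data)
    for g in groups:
        anon[g] = data[g].mean(axis=0)              # k-anonymous centroid
    return anon

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(microaggregate(rng.normal(size=(10, 2)), k=3))
```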

10.
Driven by the challenge of making spoken-language man–machine interfaces more natural and efficient, emotional speech identification and classification has become a predominant research area. The reliability and accuracy of such emotion identification greatly depend on feature selection and extraction. In this paper, a combined feature-selection technique is proposed that uses the reduced feature set produced by a vector quantizer (VQ) for classification in a Radial Basis Function Neural Network (RBFNN). In the initial stage, Linear Prediction Coefficients (LPC) and the time–frequency Hurst parameter (pH) are used to extract the relevant features, which carry complementary information from the emotional speech. Extensive simulations have been carried out on the Berlin Database of Emotional Speech (EMO-DB) with various combinations of the feature sets. The experimental results show 76% accuracy for pH and 68% for LPC as standalone feature sets, whereas the combined feature sets (LP VQC and pH VQC) raise the average accuracy to 90.55%.
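A small sketch of one of the two feature streams, LPC by the autocorrelation (Yule-Walker) method in plain numpy; the Hurst-parameter feature, the VQ codebooks and the RBFNN classifier from the abstract are not shown, and the frame and model order are arbitrary.

```python
import numpy as np

def lpc(frame, order=10):
    """Linear prediction coefficients via the autocorrelation method."""
    frame = np.asarray(frame, dtype=float)
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # r[0], r[1], ...
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])       # solve the Yule-Walker equations

if __name__ == "__main__":
    t = np.arange(400)
    frame = np.sin(0.3 * t) + 0.1 * np.random.default_rng(1).normal(size=400)
    print(lpc(frame, order=4))
```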

11.
The Journal of Supercomputing - Nowadays, GPU clusters are available in almost every data processing center. Their GPUs are typically shared by different applications that might have different...

12.
Two techniques for performing an irregularly structured Gröbner basis computation (a basic method used in symbolic polynomial manipulation) on distributed-memory machines are developed. The first technique, based on relaxing dependencies present in the sequential computation, exploits coarse-grain parallelism. In this relaxation approach, at every step each processor reduces a local pair if one is available, exchanges the result and status information with other processors, and updates its local set of pairs and basis. The basis is replicated on each processor while the set of pairs is distributed across the processors. The computation terminates when no pairs are left to be reduced on any processor. A relaxation algorithm based on this approach is provided, along with its experimental results. The second technique, named quasi-barrier, is developed to enhance the performance of the relaxation algorithm: load balance and performance are improved by synchronizing the p processors when a fraction of the active tasks has completed. The performance enhancement is significant for large numbers of processors when the distribution of pair reduction times is close to exponential. The experimental results obtained on an Intel Paragon and an IBM SP2 demonstrate the effectiveness of these techniques.

13.
Multiprocessor mapping and scheduling algorithms have been extensively studied over the past few decades and have been tackled from different perspectives. In the late 1980s, the two-step decomposition of scheduling into clustering and cluster-scheduling was introduced. Ever since, several clustering and merging algorithms have been proposed and individually reported to be efficient. However, it is not clear how effective they are and how well they compare against single-step scheduling algorithms or other multistep algorithms. In this paper, we explore the effectiveness of the two-phase decomposition of scheduling and describe efficient and novel techniques that aggressively streamline interprocessor communications and can be tuned to exploit the significantly longer compilation time available to embedded system designers. We evaluate a number of leading clustering and merging algorithms using a set of benchmarks with diverse structures. We present an experimental setup for comparing the single-step and two-step scheduling approaches, determine the importance of the different steps and their effect on overall schedule performance, and show that the decomposition of the scheduling process indeed improves overall performance. We also show that the quality of the solutions depends on the quality of the clusters generated in the clustering step. Based on the results, we also discuss why the parallel-time metric used in the clustering step may not provide an accurate measure of the final performance of cluster-scheduling.
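A toy sketch of the second ("cluster-scheduling", or merging) step under simplifying assumptions: clusters produced by the first step are mapped onto a fixed number of processors greedily, largest cluster first onto the least-loaded processor; real merging algorithms also weigh interprocessor communication, which is ignored here.

```python
import heapq

def merge_clusters(cluster_loads, num_procs):
    """Map clusters to processors, largest first, always onto the least-loaded one."""
    heap = [(0.0, p) for p in range(num_procs)]         # (accumulated load, processor id)
    heapq.heapify(heap)
    mapping = {}
    for cid, load in sorted(cluster_loads.items(), key=lambda kv: -kv[1]):
        proc_load, proc = heapq.heappop(heap)
        mapping[cid] = proc
        heapq.heappush(heap, (proc_load + load, proc))
    makespan = max(load for load, _ in heap)
    return mapping, makespan

if __name__ == "__main__":
    clusters = {"c0": 7.0, "c1": 5.0, "c2": 4.0, "c3": 4.0, "c4": 2.0}
    print(merge_clusters(clusters, num_procs=2))         # balanced mapping, makespan 11.0
```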

14.
In this paper, we present special discretization and solution techniques for the numerical simulation of the Lattice Boltzmann equation (LBE). In Hübner and Turek (Computing, 81:281–296, 2007), the concept of the generalized mean intensity was proposed for radiative transfer equations, and we adapt it here to the LBE, treating it as an analogous (semi-discretized) integro-differential equation with constant characteristics. Thus, we combine an efficient finite difference-like discretization based on short-characteristic upwinding techniques on unstructured, locally adapted grids with fast iterative solvers. The fully implicit treatment of the LBE leads to nonlinear systems which can be efficiently solved with the Newton method, even for a direct solution of the stationary LBE. With special exact preconditioning by the transport part due to the short-characteristic upwinding, we obtain an efficient linear solver for transport-dominated configurations (macroscopic Stokes regime), while collision-dominated cases (Navier-Stokes regime for larger Re numbers) are treated with a special block-diagonal preconditioning. Due to the new generalized equilibrium formulation (GEF) we can combine the advantages of both preconditioners, i.e. independence of the number of unknowns for convection-dominated cases with robustness for stiff configurations. We further improve the GEF approach by using hierarchical multigrid algorithms to obtain grid-independent convergence rates for a wide range of problem parameters, and provide representative results for various benchmark problems. Finally, we present quantitative comparisons between a highly optimized CFD solver based on the Navier-Stokes equation (FeatFlow) and our new LBE solver (FeatLBE).

15.
Social networks have undergone explosive growth in recent years. They constitute a central part of users' everyday lives, as they are major tools for spreading information, ideas and notifications among the members of the network. In this work we investigate the use of location-based social networks as a medium for emergency notification, i.e. for efficient dissemination of emergency information among members of the social network under time constraints. Our objective is the following: given a location-based social network comprising a number of mobile users, the social relationships among the users, the set of recipients, and the corresponding timeliness requirements, select an appropriate subset of users so that the spread of information is maximized, time constraints are satisfied and costs are taken into account. We propose LATITuDE, a system that analyses the interactions among the members of the social network to infer their social relationships, and we develop scalable dissemination mechanisms that select the most effective set of users to initiate the dissemination process so as to maximize the information reach among the appropriate receivers within a time window. Our detailed experimental results illustrate that our approach is practical, effectively addresses the problem of informing the appropriate set of users within a deadline when an emergency event occurs, uses a small number of messages, and consistently outperforms its competitors.

16.
P.A., M., D.K. Pattern Recognition, 2006, 39(12): 2344-2355
Hybrid hierarchical clustering techniques, which combine the characteristics of different partitional clustering techniques or of partitional and hierarchical clustering techniques, are interesting. In this paper, efficient bottom-up hybrid hierarchical clustering (BHHC) techniques are proposed for prototype selection for protein sequence classification. In the first stage, an incremental partitional clustering technique such as the leader algorithm (ordered leader no update, OLNU), which requires only one database (db) scan, is used to find a set of subcluster representatives. In the second stage, either a hierarchical agglomerative clustering (HAC) scheme or a partitional clustering algorithm, K-medians, is applied to these subcluster representatives to obtain the required number of clusters. The hybrid scheme is therefore scalable and suitable for clustering large data sets, and it also yields a hierarchical structure of clusters and subclusters whose representatives are used for pattern classification. Even if a larger number of prototypes is generated, classification time does not increase much, as only part of the hierarchical structure is searched. The experimental results (classification accuracy (CA) using the obtained prototypes, and computation time) of the proposed algorithms are compared with those of hierarchical agglomerative schemes, K-medians and the nearest neighbour classifier (NNC). The proposed methods are found to be computationally efficient with reasonably good CA.
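A minimal sketch of the first-stage idea, a one-pass leader clustering in which each point joins the first representative within a distance threshold or becomes a new leader itself; the threshold and the Euclidean distance are illustrative choices, and the OLNU ordering details are not reproduced.

```python
import math

def leader_clustering(points, threshold):
    """Single-scan leader algorithm: returns representatives and their members."""
    leaders, members = [], []
    for p in points:
        for i, lead in enumerate(leaders):
            if math.dist(p, lead) <= threshold:     # join the first close-enough leader
                members[i].append(p)
                break
        else:                                       # no leader is close: start a new one
            leaders.append(p)
            members.append([p])
    return leaders, members

if __name__ == "__main__":
    pts = [(0, 0), (0.3, 0.1), (5, 5), (5.2, 4.9), (0.1, 0.2)]
    leads, groups = leader_clustering(pts, threshold=1.0)
    print(leads)                                    # two sub-cluster representatives
```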

17.
In this paper, an efficient K-medians clustering (unsupervised) algorithm for prototype selection and a Supervised K-medians (SKM) classification technique for protein sequences are presented. For sequence data sets, a median string/sequence can be used as the cluster/group representative. In the K-medians clustering technique, a desired number of clusters, K, each represented by a median string/sequence, is generated and these median sequences are used as prototypes for classifying new/test sequences, whereas in the SKM classification technique the median sequence of each group/class of labelled protein sequences is determined and the set of median sequences is used as prototypes for classification. The K-medians clustering technique is found to outperform the leader-based technique, and the SKM classification technique performs better than the motif-based approach on the data sets used. We further use a simple technique to reduce time and space requirements during protein sequence clustering and classification: during the training and testing phases, the similarity score between a pair of sequences is computed over a selected portion of the sequence instead of the entire sequence, which is akin to selecting a subset of features for sequence data sets. The experimental results of the proposed method with the K-medians, SKM and Nearest Neighbour Classifier (NNC) techniques show that the Classification Accuracy (CA) obtained with the generated prototypes does not degrade much, while training and testing time are reduced significantly. The results thus indicate that the similarity score need not be computed over the entire length of the sequence to achieve good CA, and the space requirement is also reduced during both training and classification.
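A brief sketch of the set-median prototype idea under stated assumptions: within each labelled group, the sequence with the smallest total edit distance to the other members is taken as the prototype, and a new sequence is assigned the label of its nearest prototype; plain Levenshtein distance stands in for the biological similarity score used in the paper.

```python
def edit_distance(a, b):
    """Levenshtein distance by dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def set_median(sequences):
    """Group member minimizing the total distance to the other members."""
    return min(sequences, key=lambda s: sum(edit_distance(s, t) for t in sequences))

def classify(seq, prototypes):
    """Nearest-prototype label; prototypes maps label -> median sequence."""
    return min(prototypes, key=lambda lab: edit_distance(seq, prototypes[lab]))

if __name__ == "__main__":
    groups = {"kinase": ["ACDK", "ACDE", "ACDKK"], "globin": ["GGHV", "GGHL", "GHHV"]}
    prototypes = {lab: set_median(seqs) for lab, seqs in groups.items()}
    print(prototypes, classify("ACDQ", prototypes))  # -> "kinase"
```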

18.
In this paper, we present an efficient sub-optimal algorithm for fitting smooth planar parametric curves with G1 arc splines. To fit a parametric curve by an arc spline within a prescribed tolerance, we first sample a set of points and tangents on the curve adaptively and densely enough that an interpolating biarc spline curve can reach any desired accuracy. Then, we construct new biarc curves interpolating local triarc spirals explicitly, based on the control of the permitted tolerances. To reduce the number of segments of the fitting arc spline as much as possible, we replace the corresponding parts of the spline by the new biarc curves and compute active tolerances for new interpolation steps. By applying the local biarc curve interpolation procedure recursively and sequentially, the resulting circular arcs with no radius extremum form a minimax-like approximation to the original curve, while the arcs with radius extrema also approximate the curve parts with curvature extrema well, and we obtain a near-optimal fitting arc spline in the end. Moreover, the fitting arc spline has the same end points and end tangents as the original curve, and the arcs are joined smoothly if the original curve is composed of several smoothly connected pieces. The algorithm is easy to implement and generally applicable to the circular-arc interpolation problem for all kinds of smooth parametric curves. The method can be used in fields such as geometric modeling, tool path generation for NC machining and robot path planning. Several numerical examples are given to show the effectiveness and efficiency of the method.

19.
As research on component reuse deepens and component repositories grow, component description, retrieval and adaptation techniques have become a current research focus. Traditional component description and retrieval methods, however, suffer from low precision and low query efficiency, and their query results are not well suited to the subsequent component adaptation step. To address these problems, and drawing on divide-and-conquer and tree-matching ideas, a new, easily extensible dimension-matching model is proposed, together with a corresponding component retrieval and matching algorithm. The algorithm effectively improves query efficiency and precision and eases the adaptation burden, and its time and space complexity are both linear.
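An illustrative sketch of dimension-wise matching, under assumptions made only for this example: each component is described by a set of terms per dimension (facet), a query is checked dimension by dimension, and components are ranked by the number of satisfied dimensions; the dimension names and repository are hypothetical, not the paper's model.

```python
def dimension_match(query, component):
    """Count query dimensions whose required terms all appear in the component."""
    return sum(terms <= component.get(dim, set()) for dim, terms in query.items())

def retrieve(query, repository, top_n=3):
    """Rank components by the number of matched dimensions."""
    return sorted(repository, key=lambda name: -dimension_match(query, repository[name]))[:top_n]

if __name__ == "__main__":
    repository = {
        "XmlParser":  {"function": {"parse"}, "platform": {"jvm"},   "domain": {"xml"}},
        "JsonParser": {"function": {"parse"}, "platform": {"jvm"},   "domain": {"json"}},
        "CsvWriter":  {"function": {"write"}, "platform": {"posix"}, "domain": {"csv"}},
    }
    query = {"function": {"parse"}, "domain": {"xml"}}
    print(retrieve(query, repository))              # XmlParser ranked first
```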

20.
In component-based software development, retrieving and extracting components that meet user requirements has become a major research focus. Efficiency optimization of component repositories mainly concerns component retrieval efficiency and component comprehension efficiency. An improved ant colony algorithm from data mining, based on a crowding factor, is used to optimize component reuse rules and thus improve the accuracy with which reusers select the components they need. Experiments show that the reuse rules mined by this method reach an accuracy of 75.3%, higher than the Apriori algorithm and the standard ant colony algorithm, providing good decision support for component retrieval and selection.
