Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Data Processing, 1986, 28(8): 426-433
Three commonly used benchmark programs (the Whetstone, the Dhrystone, and the Sieve) were run on an IBM PC, an IBM PC/AT, a VAX 11/785, and a VAX 8600. The results show very large differences in the performance predicted by the different benchmarks.
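For context, the Sieve is essentially a timed prime-counting loop. Below is a minimal sketch in Python; this is a hypothetical re-creation of the idea only, since the historical benchmark was written in C and Pascal:

```python
import time

def sieve(limit=8191):
    """Count primes up to `limit` with the Sieve of Eratosthenes."""
    flags = [True] * (limit + 1)
    count = 0
    for i in range(2, limit + 1):
        if flags[i]:
            count += 1
            for k in range(i * i, limit + 1, i):
                flags[k] = False
    return count

start = time.perf_counter()
for _ in range(10):                      # repeat for a measurable wall time
    primes = sieve()
print(f"{primes} primes, {time.perf_counter() - start:.3f} s for 10 runs")
```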

2.
3.
4.
5.
A rapid hierarchical classification program enables the clustering of 5000 elements in only a few minutes of central processor time on an IBM 370/168 computer. The program's algorithm, based on the reducibility axiom in graph theory, is related to the criterion of correspondence analysis. Its application to a set of hydrogeological data is described briefly.
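The paper's reducibility-based algorithm is not reproduced here, but a minimal sketch of the same task, hierarchical clustering of a few thousand elements, using SciPy's standard implementation may help fix ideas:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))          # 5000 elements, 4 variables

Z = linkage(X, method="ward")           # agglomerative merge tree
labels = fcluster(Z, t=10, criterion="maxclust")   # cut into 10 clusters
print(np.bincount(labels)[1:])          # cluster sizes
```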

6.
This paper presents a systematic analysis and comparison of the design strategies of the IBM Power and DEC Alpha RISC microprocessors and of their impact on performance.

7.
8.
A general class of methods for (partial) rotation of a set of (loading) matrices to maximal agreement has been available in the literature since the 1980s. It contains a generalization of canonical correlation analysis as a special case. However, various other generalizations of canonical correlation analysis have been proposed. A new general class of methods for each such alternative generalization of canonical correlation is proposed. Together, these general classes of methods form a superclass of methods that strike a compromise between explaining the variance within sets of variables and explaining the agreement between sets of variables, as illustrated in some examples. Furthermore, one general algorithm for finding the solutions for all methods in all general classes is offered. As a consequence, for all methods in the superclass of methods, algorithms are available at once. For the existing methods, the general algorithm usually reduces to the standard algorithms employed in these methods, and thus the algorithms for all these methods are shown to be related to each other.
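As a point of reference, the special case mentioned, canonical correlation analysis between two sets of variables, can be sketched with scikit-learn on synthetic data; the generalized classes of methods in the paper are not reproduced:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))      # shared signal behind both sets
X = latent @ rng.normal(size=(1, 5)) + 0.5 * rng.normal(size=(200, 5))
Y = latent @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(200, 4))

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)        # canonical variates of each set
r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}")
```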

9.
10.
Data mining can extract important knowledge from large data collections, but sometimes these collections are split among various parties. Privacy concerns may prevent the parties from directly sharing the data, or even some types of information about the data. We address secure mining of association rules over horizontally partitioned data. The methods incorporate cryptographic techniques to minimize the information shared, while adding little overhead to the mining task.
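One classic cryptographic building block used in this setting is the secure sum, which lets sites combine local support counts without revealing them. A minimal sketch follows; it is illustrative only, and the paper's full protocol is not reproduced:

```python
import random

def secure_sum(local_counts, modulus=2**32):
    """Sum the sites' counts; each site sees only a masked partial sum."""
    mask = random.randrange(modulus)            # random mask from site 0
    running = (mask + local_counts[0]) % modulus
    for c in local_counts[1:]:                  # passed from site to site
        running = (running + c) % modulus
    return (running - mask) % modulus           # site 0 removes its mask

print(secure_sum([120, 75, 230]))               # global support count: 425
```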

11.
Computers & Chemistry, 1989, 13(3): 179-184
This paper shows how a coarse-grained parallel computer, the Alliant FX/80, can be used to speed up the processing of NMR data. Examples of simple vector operations are shown first, and the level of parallelization that can be achieved is discussed. Speed-ups of over 450 are achieved in the ideal case of vector addition. For NMR processing, the FFT algorithm is sped up by a factor of 18, and smaller but significant enhancements are achieved for other algorithms.
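As a rough modern analogue of the vector-operation speed-ups discussed, the sketch below times an interpreted element-by-element vector addition against a vectorized one. The hardware differs entirely from the FX/80, but the principle of amortizing per-element overhead is the same:

```python
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter()
c_loop = [x + y for x, y in zip(a, b)]   # element-by-element scalar loop
t1 = time.perf_counter()
c_vec = a + b                            # single vectorized operation
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.3f} s, vectorized: {t2 - t1:.4f} s")
```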

12.
A data acquisition, display and plotting program for the IBM PC
A program, AQ, has been developed to perform analog-to-digital (A/D) conversions on IBM PC products using the Data Translation DT2801-A or DT2801 boards. This program provides support for all of the triggered and continuous A/D modes of these boards. Additional subroutines for management of data files and display of acquired data have also been developed. These programs have been written so that a minimum number of keystrokes are required for their operation. Parameter files are used to simplify reconfiguration of this program for various data acquisition tasks.
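The parameter-file idea can be sketched generically: settings are read from a small text file so the acquisition task can be reconfigured without editing code. The file format and field names below are hypothetical, not AQ's:

```python
from dataclasses import dataclass

@dataclass
class AcqConfig:
    # hypothetical fields; not AQ's actual parameter set
    channels: int = 1
    rate_hz: float = 1000.0
    samples: int = 1024
    trigger: str = "continuous"

def load_config(path):
    """Parse `key = value` lines into an acquisition configuration."""
    cfg = AcqConfig()
    casts = {"channels": int, "rate_hz": float, "samples": int, "trigger": str}
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()   # drop comments and blanks
            if not line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip()
            if key in casts:
                setattr(cfg, key, casts[key](value))
    return cfg
```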

13.
Executing different fingerprint-image matching algorithms on large data sets reveals that the match and non-match similarity scores have no specific underlying distribution function. Analyzing fingerprint-image matching algorithms on large data sets therefore requires a nonparametric approach that makes no assumptions about such irregularly discrete distribution functions. A precise receiver operating characteristic (ROC) curve can be constructed from the true accept rate (TAR) of the match similarity scores and the false accept rate (FAR) of the non-match similarity scores. The area under such an ROC curve computed using the trapezoidal rule is equivalent to the Mann-Whitney statistic formed directly from the match and non-match similarity scores. Thereafter, the Z statistic, formulated using the areas under ROC curves along with their variances and the correlation coefficient, is applied to test the significance of the difference between two ROC curves. Four examples from the extensive testing of commercial fingerprint systems at the National Institute of Standards and Technology are provided. The nonparametric approach presented in this article can also be employed in the analysis of other large biometric data sets.
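The stated equivalence between the trapezoidal-rule area under the ROC curve and the Mann-Whitney statistic is easy to check numerically. A minimal sketch with synthetic match and non-match scores:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
match = rng.normal(2.0, 1.0, size=500)        # genuine-pair scores
nonmatch = rng.normal(0.0, 1.0, size=500)     # impostor-pair scores

scores = np.concatenate([match, nonmatch])
labels = np.concatenate([np.ones(500), np.zeros(500)])

auc = roc_auc_score(labels, scores)           # trapezoidal area under ROC
u, _ = mannwhitneyu(match, nonmatch)          # SciPy >= 1.7 returns U1
print(auc, u / (500 * 500))                   # the two values coincide
```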

14.
Fortran 77 implementations of the Level 3 Basic Linear Algebra Subprograms (BLAS) in double precision, structured and tuned to achieve high performance on the IBM 3090 VF, are presented. The implementations are designed to exploit the memory hierarchy and the vector processor efficiently. Efficient cache reuse is provided by a method for matrix blocking adapted to the memory hierarchy. Vector registers and compound vector instructions are used efficiently through carefully designed Fortran code constructs. Performance results generally show speed comparable to the highly tuned IBM ESSL library. In some cases our implementations are actually faster than ESSL. The generality of the program design and the use of Fortran 77 make the implementations portable and well suited to serve as design platforms for other machines with similar architectures.
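The cache-blocking idea behind such tuned Level 3 BLAS code can be sketched in a few lines; pure Python is used here for clarity, and the paper's Fortran 77 implementations are not reproduced:

```python
import numpy as np

def blocked_matmul(A, B, nb=64):
    """C = A @ B computed block by block so operands stay in fast memory."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(0, n, nb):
        for j in range(0, m, nb):
            for p in range(0, k, nb):        # accumulate small panel products
                C[i:i+nb, j:j+nb] += A[i:i+nb, p:p+nb] @ B[p:p+nb, j:j+nb]
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
print(np.allclose(blocked_matmul(A, B), A @ B))   # True
```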

15.
The problem of optimizing communications during the execution of a program on a parallel computer with distributed memory is investigated. Statements are formulated that make it possible to determine when data broadcast and translation can be organized. The proposed conditions are represented in a form suitable for practical application and can be used for automated parallelization of programs. This work was done within the framework of the State Program of Fundamental Studies of the Republic of Belarus (under the code name “Mathematical structures 21”) with the partial support of the Foundation for Fundamental Studies of the Republic of Belarus (grant F03-062). Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 166–182, March–April 2006.
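For context, a minimal mpi4py sketch of the broadcast pattern, plus a simple neighbour-to-neighbour forwarding as an illustrative stand-in for "translation"; the paper's formal conditions are not reproduced:

```python
# run with e.g.: mpiexec -n 4 python this_script.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

data = np.arange(4, dtype="d") if rank == 0 else np.empty(4, dtype="d")
comm.Bcast(data, root=0)            # broadcast: one sender, all receive

if size > 1:                        # forward a copy to the next process
    dest, src = (rank + 1) % size, (rank - 1) % size
    recv = np.empty(4, dtype="d")
    comm.Sendrecv(data, dest=dest, recvbuf=recv, source=src)
```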

16.
Self-organising maps (SOM) have become a commonly used cluster-analysis technique in data mining. However, SOM cannot process incomplete data. To extend the data-mining capability of SOM, this study proposes an SOM-based fuzzy map model for data mining with incomplete data sets. In this model, incomplete data are translated into fuzzy data and used to generate fuzzy observations. These fuzzy observations, along with observations without missing values, are then used to train the SOM to generate fuzzy maps. Compared with the standard SOM approach, fuzzy maps generated by the proposed method provide more information for knowledge discovery.
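A hedged, simplified sketch of the approach: records with missing values are expanded into several plausible observations (here by sampling from each attribute's observed values, a crude stand-in for the paper's fuzzy translation), and a basic SOM is trained on the expanded set:

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_incomplete(X, n_draws=3):
    """Replace NaNs with values sampled from the column's observed data."""
    rows = []
    for x in X:
        if not np.isnan(x).any():
            rows.append(x)
            continue
        for _ in range(n_draws):                 # several plausible versions
            filled = x.copy()
            for j in np.where(np.isnan(x))[0]:
                col = X[:, j]
                filled[j] = rng.choice(col[~np.isnan(col)])
            rows.append(filled)
    return np.array(rows)

def train_som(X, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0):
    """Minimal SOM: best-matching unit plus Gaussian neighbourhood update."""
    h, w = grid
    W = rng.random((h, w, X.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    T, t = epochs * len(X), 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            frac = 1 - t / T                     # decay rate and radius
            d = ((W - x) ** 2).sum(axis=2)
            bmu = np.unravel_index(d.argmin(), d.shape)
            g = np.exp(-((coords - bmu) ** 2).sum(axis=2)
                       / (2 * (sigma0 * frac + 0.5) ** 2))
            W += lr0 * frac * g[..., None] * (x - W)
            t += 1
    return W

X = rng.random((200, 4))
X[rng.random(X.shape) < 0.1] = np.nan    # knock out ~10% of the values
W = train_som(expand_incomplete(X))
```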

17.
A software package, OMNILAB, has been written for the IBM PC, XT and AT computers as a general-purpose system for data collection, analysis, and display. The program supports collection of data from a variety of absorbance detectors, from pH meters, and from other instruments that output time-varying analog voltages in the millivolt or volt range. The program includes capabilities for averaging of data, baseline subtraction, integration of curves, and versatile formatting for both video and hardcopy display of data.
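Two of the analysis steps mentioned, baseline subtraction and integration of curves, can be sketched on a synthetic detector trace; this is illustrative only, not OMNILAB's code:

```python
import numpy as np
from scipy.integrate import trapezoid

t = np.linspace(0, 10, 1001)                 # time axis, arbitrary units
peak = np.exp(-((t - 5) ** 2) / 0.5)         # Gaussian "peak", area ~ 1.253
trace = peak + 0.02 * t + 0.1                # add a slow baseline drift

# Fit the baseline on peak-free regions, then subtract it everywhere
flat = (t < 2) | (t > 8)
slope, intercept = np.polyfit(t[flat], trace[flat], 1)
corrected = trace - (slope * t + intercept)

print(f"peak area = {trapezoid(corrected, t):.3f}")
```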

18.
19.
Ocean colour is the only essential climate variable that targets a biological variable (chlorophyll-a concentration, chl-a) and is also amenable to remote sensing at the global scale. However, the finite lifetime of individual ocean-colour sensors and the differences in their characteristics make it difficult to create a long-term, consistent ocean-colour time series that meets the requirements of climate studies. The Ocean Colour Climate Change Initiative (OC-CCI), a European Space Agency programme, has recently produced a time series of satellite-based ocean-colour products at the global scale, merging data from three sensors: the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer on the Aqua Earth Observing System (MODIS-Aqua), and the Medium Resolution Imaging Spectrometer (MERIS), while attempting to reduce inter-sensor biases.

In this work we present a comparison between the OC-CCI chlorophyll-a product and precursor satellite-derived data sets, from both single missions (SeaWiFS, MODIS-Aqua, and MERIS) and multi-mission products (global ocean colour (GlobColour) and Making Earth Science Data Records for Use in Research Environments (MEaSUREs)). To this end, OC-CCI global monthly composites are compared to the corresponding single-mission and multi-mission products. Our results indicate that the OC-CCI product provides a higher number of observations. Among the observations that match with precursors, the OC-CCI product was generally most similar to the single-mission products. Relationships between OC-CCI and the other precursors did not change significantly during a common and continuous period, and, on average, the root-mean-square differences between log-transformed chlorophyll-a concentrations are at or below 0.11. Further, when considering the variability that can arise when merging data from different sources, the OC-CCI product is shown to be more consistent over the long term than the other multi-mission products studied here.
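The comparison metric quoted, the root-mean-square difference between log10-transformed chlorophyll-a concentrations, is straightforward to compute for matching observations. A minimal sketch with synthetic values:

```python
import numpy as np

rng = np.random.default_rng(0)
chl_a = 10 ** rng.normal(-0.5, 0.5, size=1000)       # product A, mg m^-3
chl_b = chl_a * 10 ** rng.normal(0, 0.1, size=1000)  # correlated product B

valid = (chl_a > 0) & (chl_b > 0)                    # log needs positives
diff = np.log10(chl_a[valid]) - np.log10(chl_b[valid])
print(f"RMSD of log10(chl-a): {np.sqrt(np.mean(diff ** 2)):.3f}")   # ~0.10
```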

20.
Data envelopment analysis (DEA) is computationally intensive. This work conclusively answers questions about the computational performance and scale limits of the standard LP-based procedures currently used. Examples of DEA problems with up to 15K entities are documented, and it is not hard to imagine problem sizes increasing as new, more sophisticated applications are found for DEA. This work reports on a comprehensive computational study involving DEA problems with up to 100K DMUs. We explore the impact of different LP algorithms, including interior point methods, as well as accelerators such as advanced basis starts and DEA-specific enhancements such as “restricted basis entry” (RBE). Our results demonstrate that solution times grow approximately quadratically with problem size and that massive problems can be solved efficiently. We propose ideas for extending DEA into a data mining tool.
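Each DEA efficiency score comes from one LP per DMU. Below is a minimal sketch of the input-oriented CCR envelopment model using SciPy's linprog; the paper's accelerators, such as restricted basis entry and advanced starts, are not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR score of DMU j0; X: m x n inputs, Y: s x n outputs."""
    (m, n), (s, _) = X.shape, Y.shape
    c = np.r_[1.0, np.zeros(n)]                # variables: [theta, lambda_1..n]
    A_in = np.hstack([-X[:, [j0]], X])         # X @ lam <= theta * x_j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # Y @ lam >= y_j0
    b = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                             # optimal theta

X = np.array([[2.0, 4.0, 8.0],                 # input 1 of three DMUs
              [3.0, 1.0, 2.0]])                # input 2
Y = np.array([[1.0, 1.0, 1.0]])                # single output
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])  # DMU 2 inefficient
```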
