Similar Literature
20 similar records found (search time: 31 ms)
1.
ELBAMAP: software for management of electrophoresis banding patterns
ELBAMAP (Electrophoresis Band Management Package) is a 41-kbyte program written in Borland Turbo Pascal, which will run on IBM PC-compatible machines. The program consists of 16 procedures involved in data entry, calculation, comparison and graphic representation of banding patterns. These procedures are selected interactively in response to a main menu and a series of prompts. Database files may be stored for subsequent analysis or addition of data. Pairwise comparisons of banding patterns may be stored as a similarity matrix suitable for use by other packages which will carry out multivariate or cluster analysis. The system has been used successfully for storage and comparison of plasmid DNA digest patterns analysed by agarose gel electrophoresis and seed protein profiles analysed by SDS-polyacrylamide gel electrophoresis.
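The pairwise-comparison step could be sketched as follows; the Dice-style coefficient and the matching tolerance are assumptions of this sketch (the abstract does not state ELBAMAP's matching rule, and the original program is Turbo Pascal):

```python
def band_similarity(a, b, tol=0.02):
    """Dice-style similarity of two banding patterns.

    a, b: lists of band positions (e.g. relative migration distances);
    two bands are considered a match if they differ by less than `tol`.
    """
    matched = 0
    used = set()
    for x in a:
        for j, y in enumerate(b):
            if j not in used and abs(x - y) < tol:
                matched += 1
                used.add(j)
                break
    return 2.0 * matched / (len(a) + len(b))

def similarity_matrix(patterns):
    """Full pairwise matrix, suitable for export to a clustering package."""
    n = len(patterns)
    return [[band_similarity(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]
```

The resulting matrix can be written out for any multivariate or cluster-analysis package, as the abstract describes.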

2.
GEOFILE is a versatile, interactive program in BASIC for the creation and editing of data files. These files consist of a two-dimensional array of numerical values, rows corresponding to different parameters, and columns to different samples. In addition to numerical values, the files also contain strings corresponding to sample numbers, sample labels, and parameter symbols. Rapid and successful file creation is assisted by the large number of error-traps in the program, and comprehensive file editing is accomplished partly through a simple routine which uses an alphanumeric grid to identify data items. Files created and/or edited may be stored on disk for subsequent use in a variety of other programs such as graph plotting or statistical analysis.
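A GEOFILE-like table (rows as parameters, columns as samples, plus label strings) can be sketched as follows; the JSON container and field names are this sketch's assumptions, since the original BASIC program used its own disk format:

```python
import json

def make_geofile(param_symbols, sample_labels, values):
    """A GEOFILE-like table: rows are parameters, columns are samples."""
    assert len(values) == len(param_symbols)
    assert all(len(row) == len(sample_labels) for row in values)
    return {"params": list(param_symbols),
            "samples": list(sample_labels),
            "values": [list(r) for r in values]}

def get(gf, param, sample):
    """Look up one value by parameter symbol and sample label."""
    return gf["values"][gf["params"].index(param)][gf["samples"].index(sample)]

def save(gf, path):
    """Store the table on disk for use by other programs."""
    with open(path, "w") as f:
        json.dump(gf, f)
```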

3.
A program, written in dBASE, is described that manages demographic, sample, clinical laboratory, and restriction endonuclease map data. The program exports migration distances of DNA fragments resulting from specified endonuclease digests to a commercially available, but modified, curve fitting program (CURVE-FITTER), where the fragment sizes in base pairs are calculated. The calculated values are imported back into dBASE files for report generation or later analysis. The program will produce hard copy reports for single or multiple individuals.
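The size calculation delegated to CURVE-FITTER can be illustrated with a semi-log standard curve fitted to a ladder of known fragments; the least-squares semi-log model is an assumption of this sketch, as the abstract does not state which curve model the modified CURVE-FITTER uses:

```python
import math

def fit_standard_curve(distances, sizes_bp):
    """Least-squares fit of log10(size) against migration distance,
    using a DNA ladder of known fragment sizes."""
    ys = [math.log10(s) for s in sizes_bp]
    n = len(distances)
    mx = sum(distances) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(distances, ys)) / \
        sum((x - mx) ** 2 for x in distances)
    b = my - a * mx
    return a, b

def fragment_size(distance, a, b):
    """Estimate a fragment's size in base pairs from its migration distance."""
    return 10 ** (a * distance + b)
```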

4.
John C. Cavouras, Software, 1983, 13(9): 809-815
Ways to implement coroutines in a block-structured language with no multitasking facilities are presented. Coroutines are implemented as procedures. The reactivation points are kept in global variables, one variable for each procedure. Local variables whose values are required on re-entry are stored as STATIC objects. The variables or data of re-entrant coroutines are stored in an event list associated with each such coroutine. A procedure with several entries is a convenient mechanism to trap the primitive calls issued by the coroutines. This procedure returns to the master program by using a non-local GOTO. The implementation of the above in PL/I and C is described and a comparison is made with sequential Pascal. Ada includes constructs which satisfy most requirements.
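A minimal Python rendering of the reactivation-point technique described above (the paper implements it in PL/I and C); the `producer`/`master` names and the two-section body are hypothetical:

```python
# Each coroutine is a plain function; its reactivation point is kept in a
# global (here: a dict entry), and a dispatch at entry resumes execution
# at the right section, mimicking the paper's technique.
state = {"producer": 0, "items": []}

def producer():
    # Sections 0..1 stand for the code between reactivation points.
    point = state["producer"]
    if point == 0:
        state["items"].append("first")
    elif point == 1:
        state["items"].append("second")
    else:
        return False                  # coroutine has finished
    state["producer"] = point + 1     # save the next reactivation point
    return True

def master():
    # The master program resumes the coroutine until it reports completion.
    while producer():
        pass
    return state["items"]
```

In the paper the return to the master program is a non-local GOTO; here an ordinary `return` plays that role.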

5.
Knowledge, 2006, 19(3): 180-186
This paper is concerned with finding sequential accesses in web log files using a genetic algorithm (GA). Web log files are independent of the server and are stored in ASCII format. Every transaction, whether completed or not, is recorded in the web logs, and these files are unstructured from the standpoint of knowledge-discovery-in-databases techniques. Data stored in web logs have become important for discovering user behaviour as internet use has grown rapidly, and the analysis of these log files is an important research area of web mining. In particular, with the advent of CRM (Customer Relationship Management) in business, most modern firms operating web sites for various purposes are now adopting web mining as a strategic way of capturing knowledge about the potential needs of target customers, future trends in the market, and other management factors. Our work (ALMG, Automatic Log Mining via Genetic) mines web log files with a genetic algorithm. A survey of the web-mining literature shows that GAs are generally used in web content and web structure mining, whereas ALMG is a study of web usage mining; this is the difference between ALMG and similar work in the literature. In another related work, a GA is used to process the data between HTML tags on the client PC, whereas ALMG extracts information from data held on the server. Using log files is an advantage for our purpose, because we characterise the requests made to the server rather than tracking a single person's behaviour. We developed an application for this purpose: it first analyses the web log files and then finds sequentially accessed page groups automatically.
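A toy version of evolving page sequences toward high support in logged sessions could look like this; the representation (fixed-length page tuples), the fitness function (session support), and the mutation-only evolution are this sketch's simplifications, not ALMG's actual design:

```python
import random

def contains_in_order(session, seq):
    """True if the pages of `seq` appear in `session` in order."""
    it = iter(session)
    return all(p in it for p in seq)

def fitness(seq, sessions):
    """Support of a candidate sequence: how many sessions contain it."""
    return sum(contains_in_order(s, seq) for s in sessions)

def mine_sequences(sessions, pages, length=2, pop=20, gens=30, seed=0):
    """Toy GA: evolve fixed-length page sequences toward high support."""
    rng = random.Random(seed)
    popu = [tuple(rng.choice(pages) for _ in range(length))
            for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda s: fitness(s, sessions), reverse=True)
        survivors = popu[:pop // 2]          # elitist selection
        children = []
        for s in survivors:
            child = list(s)
            child[rng.randrange(length)] = rng.choice(pages)  # point mutation
            children.append(tuple(child))
        popu = survivors + children
    return max(popu, key=lambda s: fitness(s, sessions))
```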

6.
To address the problem that the injection-mould CAE software Z-Mold requires a large amount of input information before simulation analysis, a general software-development technique, the data-driven dialog box, is explored. To simplify program development, the dialog prototype description is saved in a definition file; at run time the system generates the dialog dynamically from this prototype definition file, and when the dialog is closed the input information is saved in a result file for use by the computation module. This technique turns the dialog into a data-driven object whose content can be changed without modifying the program, thereby improving the reusability and development efficiency of Z-Mold.
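The prototype-definition / result-file round trip can be sketched as follows; the JSON format and the field names are assumptions of this sketch, since Z-Mold's actual definition-file format is not described:

```python
import json

def load_prototype(path):
    """Read the dialog prototype definition (fields with names/defaults)."""
    with open(path) as f:
        return json.load(f)

def run_dialog(prototype, answers):
    """'Display' the dialog: take user answers, fall back to defaults."""
    result = {}
    for field in prototype["fields"]:
        name = field["name"]
        result[name] = answers.get(name, field.get("default"))
    return result

def save_result(result, path):
    """On close, save the collected input for the computation module."""
    with open(path, "w") as f:
        json.dump(result, f)
```

Changing the definition file changes the dialog's contents without touching the program, which is the point of the technique.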

7.
Temperature is a useful environmental tracer for quantifying movement and exchange of water and heat through and near sediment–water interfaces (SWI). Heat tracing involves analyzing temperature time series or profiles from temperature probes deployed in sediments. Ex-Stream is a MATLAB program that brings together two transient and two steady one-dimensional coupled heat and fluid flux analytical models. The program includes a graphical user interface, a detailed user manual, and postprocessing capabilities that enable users to extract fluid fluxes from time-series temperature observations. Program output is written to comma-separated values files, displayed within the MATLAB command window, and may be optionally plotted. The models that are integrated into Ex-Stream can be run collectively, allowing for direct comparison, or individually.
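One of the steady 1-D models such a program might integrate is the classic conduction-advection profile; this sketch assumes a Bredehoeft-Papadopulos-type steady solution and typical parameter values, not Ex-Stream's actual code:

```python
import math

def temperature_profile(z, L, T0, TL, q, k=1.4, rho_c_f=4.18e6):
    """Steady 1-D temperature between the sediment-water interface
    (z = 0, temperature T0) and depth L (temperature TL) under combined
    conduction and vertical fluid flow q (m/s, positive downward).

    k: bulk thermal conductivity (W/m/K)
    rho_c_f: volumetric heat capacity of water (J/m^3/K)
    """
    pe = rho_c_f * q * L / k            # thermal Peclet number
    if abs(pe) < 1e-12:                 # pure conduction: linear profile
        return T0 + (TL - T0) * z / L
    return T0 + (TL - T0) * (math.exp(pe * z / L) - 1) / (math.exp(pe) - 1)
```

Fitting observed profiles to this curve is one way a fluid flux q can be extracted from temperature data.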

8.
Lake Heat Flux Analyzer is a program for calculating surface energy fluxes in lakes according to established literature methodologies. The program was developed in MATLAB for the rapid analysis of high-frequency data from instrumented lake buoys, in support of the emerging field of aquatic sensor-network science. To calculate the surface energy fluxes, the program requires a number of input variables, such as air and water temperature, relative humidity, wind speed, and short-wave radiation. Available outputs include the surface fluxes of momentum, sensible heat, and latent heat, their corresponding transfer coefficients, and incoming and outgoing long-wave radiation. Lake Heat Flux Analyzer is open source and can process data from multiple lakes rapidly. It provides a means of calculating the surface fluxes using a consistent method, thereby facilitating global comparisons of high-frequency data from lake buoys.
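The flavor of such a flux calculation can be shown with the bulk aerodynamic formula for sensible heat; the fixed transfer coefficient and air properties below are illustrative assumptions (the real program computes stability-dependent transfer coefficients):

```python
def sensible_heat_flux(u10, t_surface, t_air, c_h=1.3e-3,
                       rho_air=1.2, cp_air=1005.0):
    """Bulk aerodynamic estimate of the sensible heat flux (W/m^2).

    u10: wind speed at 10 m (m/s); t_surface, t_air: degrees C.
    c_h: bulk transfer coefficient (fixed here for illustration).
    rho_air: air density (kg/m^3); cp_air: specific heat (J/kg/K).
    """
    return rho_air * cp_air * c_h * u10 * (t_surface - t_air)
```

The sign convention here is positive upward: a lake warmer than the overlying air loses sensible heat.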

9.
Yuhong Liu, Computers & Geosciences, 2006, 32(10): 1544-1563
Traditionally, there are two mainstream avenues for geostatistical modeling: pixel-based two-point simulation and object-based simulation. Each is good at either data conditioning or reproducing geological shapes, but neither is good at both. Multiple-point simulation combines the strengths of these two avenues. As an advanced pixel-based technique, it inherits the flexibility of pixel-based techniques by building the model one pixel at a time, so data conditioning is easily achieved; it is also capable of reproducing curvilinear geological shapes by borrowing multiple-point statistics from a training image. The snesim code provides such a multiple-point simulation program. A training image is used to represent the prior geological knowledge; it is scanned to obtain the conditional probability of the central node belonging to each facies category given any multiple-point conditioning data event. These training probability values are stored in a search tree prior to simulation. Then, in a sequential simulation mode, at each uninformed node a probability value is retrieved from the search tree according to that node's specific conditioning data event, and a value is simulated from it. The snesim program has many input parameters, whose impact may not be immediately clear to those unfamiliar with the code. In this paper, we aim to highlight important aspects of the program and provide practical guidelines for its use. Sensitivity analyses are performed on the important input parameters; the results are analyzed, and recommendations are provided on how to set these parameters appropriately.
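The training-image scan can be sketched as follows; the nested dictionary stands in for the snesim search tree, and the neighbour offsets and facies codes are hypothetical:

```python
from collections import defaultdict

def scan_training_image(image, offsets):
    """Scan a 2-D training image and tabulate, for every conditioning
    data event (facies values at the given neighbour offsets), how often
    each facies occurs at the central node. The resulting table plays
    the role of the snesim search tree."""
    tree = defaultdict(lambda: defaultdict(int))
    rows, cols = len(image), len(image[0])
    for i in range(rows):
        for j in range(cols):
            event = []
            inside = True
            for di, dj in offsets:
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    event.append(image[ni][nj])
                else:
                    inside = False
                    break
            if inside:
                tree[tuple(event)][image[i][j]] += 1
    return tree

def conditional_prob(tree, event, facies):
    """P(central node = facies | conditioning data event)."""
    counts = tree.get(tuple(event), {})
    total = sum(counts.values())
    return counts.get(facies, 0) / total if total else None
```

During sequential simulation, each uninformed node's data event is looked up in this table and a facies value is drawn from the retrieved probabilities.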

10.
11.
B. S. Carter, Software, 1985, 15(4): 369-377
In longitudinal studies subjects are measured from time to time at examinations. The subjects have some data, such as birth date, which do not change between the examination occasions. The bulk of the data are values, of variables such as height, which are taken at the examinations. The essence of these longitudinal studies is found in the change from one exam to the next. A program is described which will derive values from this type of data structure and produce summary statistics. Values can be sent to BCD files for printing or further analysis.

12.
A data acquisition, display and plotting program for the IBM PC
A program, AQ, has been developed to perform analog-to-digital (A/D) conversions on IBM PC products using the Data Translation DT2801-A or DT2801 boards. This program provides support for all of the triggered and continuous A/D modes of these boards. Additional subroutines for management of data files and display of acquired data have also been developed. These programs have been written so that a minimum number of keystrokes are required for their operation. Parameter files are used to simplify reconfiguration of this program for various data acquisition tasks.

13.
Electronic examination systems, including Internet-based systems, require extremely complicated installation, configuration and maintenance of both software and hardware. In this paper, we present the design and development of a flexible, easy-to-use and secure examination system (e-Test), in which any commonly used computer can serve as a platform for computer-based assessment. In our scheme, the e-Test program and its associated data files (questions and answers, user registration information, the configuration database, and score files) are all stored on a single Iomega Zip disk. To ensure security, all the data files are encrypted and can only be decrypted by the e-Test program. Also, during initialization, the e-Test program attempts to detect and identify the globally unique physical address of the network card of the test computer; only computers with a pre-registered network card are able to run the test program. In addition, the system provides friendly user interfaces for examiners to change the test questions, to add and delete student names and computers for an assessment, and to set other system parameters. The system has been used successfully in a randomized multiple-choice examination in a course on analog and digital signals involving more than 5000 full-time second-year students.
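The two protections described (encrypted data files and a registered network-card check) can be sketched as follows; the XOR keystream cipher is purely illustrative, since the paper does not describe e-Test's actual cipher, and `REGISTERED_MACS` is a hypothetical registry:

```python
import hashlib
import uuid

REGISTERED_MACS = {uuid.getnode()}   # pre-registered network-card addresses

def machine_authorized():
    """Only computers whose network card was pre-registered may run."""
    return uuid.getnode() in REGISTERED_MACS

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    Encrypting twice with the same key recovers the plaintext.
    (Illustrative only; not the real system's cipher.)"""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))
```

In the real system the check would compare against addresses registered ahead of the examination, not the current machine.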

14.
Two-dimensional data obtained from a histological cross-section of a tissue can be used to obtain three-dimensional information by the methods of quantitative stereology. The resulting quantitative information is useful both in experimental studies and in whole-animal investigations for regulatory and safety purposes. Quantitative stereologic analysis requires considerable data collection and calculation and is thus practical only with computer hardware and software. We have previously reported the development of a program, STEREO, which compiles data from carcinogenesis experiments, recording information from tissue sections to estimate the number of altered hepatic foci (AHF) per liver and the volume fraction of AHF in liver on a three-dimensional basis. The data file itself was built by measuring tissue and focal transections through a slide-reading process that involved the manual use of a digitizer. To increase the speed and efficiency of the analytical process, we have integrated the STEREO program with public-domain software, Scion Image. This integration has two parts: the macros and the interface. Macros for quantitative stereology in Scion Image were written to customize and simplify the measurements and to generate the data needed for building each of the data files. An interface program, BuildFi.exe, was developed to receive data generated from Scion Image and to align sequential tissue plots from up to four serial sections stained with different markers. As a result, the user can store data on disk in the format of the STEREO data files. By combining STEREO with Scion Image, the slide-reading process is simplified and can be performed automatically; it has proven more objective, time-saving, and efficient than all earlier versions.
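The stereological relations behind such estimates can be illustrated briefly; the Delesse principle (volume fraction equals area fraction) is standard, while the spherical-foci assumption in `number_per_volume` is this sketch's simplification of the estimators STEREO actually uses:

```python
def volume_fraction(focus_areas, section_area):
    """Delesse principle: the volume fraction V_V of altered foci equals
    their area fraction A_A on a random cross-section."""
    return sum(focus_areas) / section_area

def number_per_volume(n_profiles, section_area, mean_diameter):
    """For roughly spherical foci, the profile density per unit area is
    N_A = N_V * D (D = mean caliper diameter), so N_V = N_A / D."""
    n_a = n_profiles / section_area
    return n_a / mean_diameter
```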

15.
A case-based approach to software reuse
This software reuse system helps a user build programs by reusing modules stored in an existing library. The system, dubbed caesar (Case-basEd SoftwAre Reuse), is conceived in the case-based reasoning framework, where cases consist of program specifications and the corresponding C language code. The case base is initially seeded by decomposing relevant programs into functional slices using algorithms from dataflow analysis. caesar retrieves stored specifications from this base and specializes and/or generalizes them to match the user specification. Testing techniques are applied to the construct assembled by caesar through sequential composition to generate test data which exhibits the behavior of the code. For efficiency, inductive logic programming techniques are used to capture combinations of functions that frequently occur together in specifications. Such combinations may be stored as new functional slices.

16.
This paper reviews the different types of data files and methods of storing and retrieving information using sequential-access and random-access files. A data management program containing specifications for 300 robot models is used to show how to: (1) create a new data file, (2) add information to an existing file, (3) modify records within a file, (4) display information from a data file, and (5) analyze data from individual records.
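The sequential-write / random-read contrast can be sketched with fixed-length records; the 20-byte record layout (a 16-byte model name plus one float specification) is an assumption for illustration:

```python
import io
import struct

RECORD = struct.Struct("<16sf")      # 16-byte model name + one float spec

def write_records(f, records):
    """Sequential access: append fixed-length records one after another."""
    for name, payload in records:
        f.write(RECORD.pack(name.encode().ljust(16, b"\0"), payload))

def read_record(f, index):
    """Random access: seek straight to record `index` and unpack it,
    without reading the records before it."""
    f.seek(index * RECORD.size)
    name, payload = RECORD.unpack(f.read(RECORD.size))
    return name.rstrip(b"\0").decode(), payload
```

Because every record has the same size, the byte offset of record *i* is simply `i * RECORD.size`, which is what makes random access possible.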

17.
R.J. Yang, Computers & Structures, 1989, 31(6): 881-890
A modular approach to shape optimization of three-dimensional solid structures is described. A major consideration in the development of this capability is the desire to use a commercially available finite element program, such as NASTRAN, for analysis. Since NASTRAN cannot be called as a subroutine, a system architecture of independently executable modules was developed, in which sequential execution is controlled by job control language. Also, shape sensitivities are not commonly available in commercial programs; a hybrid approach based on the material derivative concept is therefore developed to obtain shape sensitivities by post-processing finite element results stored on files. The quick generation of a good optimization model, combined with an efficient optimization system, results in a drastic saving of design time. In this paper, different modeling approaches for shape optimization are discussed. Emphasis is placed on a special modeling technique which overlays the design model onto an already existing finite element model. Several automotive-related examples are used to evaluate the program's effectiveness.

18.
19.
Web-based data mining is a popular research topic combining data mining and internet systems. This paper first surveys several categories of web-based data-mining techniques, including web content mining, web access (usage) mining, web page clustering, and the discovery of users' frequently visited paths. On this basis, it then focuses on concrete applications of web data-mining techniques in electronic commerce.

20.
Research on web-based data-mining techniques and their application in electronic commerce
Web-based data mining is a popular research topic combining data mining and internet systems. This paper first surveys several categories of web-based data-mining techniques, including web content mining, web access (usage) mining, web page clustering, and the discovery of users' frequently visited paths. On this basis, it then focuses on concrete applications of web data-mining techniques in electronic commerce.
