Found 20 similar documents; search time: 170 ms
1.
2.
3.
This paper describes the important role of Excel in metrological management and verification work. Applying Excel spreadsheets to personal workload statistics, measuring-instrument inventories, measuring-equipment management, and the processing of verification data greatly simplifies tedious procedures, makes metrological work more scientific, standardized, and digital, and enables dynamic management of measuring instruments.
4.
Practice in using Excel and MATLAB to assist the teaching of chemical engineering principles  (Cited: 1 in total; self-citations: 0, citations by others: 1)
This paper discusses how Excel spreadsheets and MATLAB can support the teaching of chemical engineering principles. While mastering the principles of chemical unit operations, students also learn the powerful computational, plotting, and numerical capabilities of Excel and MATLAB and how to apply them to complex calculations, data processing, and design, which strengthens their ability to solve practical problems with computers.
5.
Xu Zheng 《中国石油和化工标准与质量》2012,32(3):9
Purpose-built Excel templates for processing test data, performing statistical analysis, and evaluating water quality improve the accuracy and reliability of reported data, streamline the testing workflow, and raise testing efficiency.
6.
Routine environmental monitoring requires processing large volumes of data. Doing this with a calculator is not only labour-intensive but also error-prone, with poor accuracy. Microsoft Excel provides a rich set of data-processing functions; mastering them and applying them to day-to-day monitoring work can effectively improve efficiency, greatly simplify the calculations, and reduce calculation errors.
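The Excel functions the abstract alludes to (AVERAGE, STDEV.S, MAX, MIN) have direct scripted equivalents. A minimal Python sketch with hypothetical monitoring readings, not data from the cited work:

```python
import statistics

# Hypothetical daily monitoring readings (e.g., pollutant concentration, mg/L)
readings = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53]

# Equivalents of Excel's AVERAGE, STDEV.S, MAX, MIN
mean_val = statistics.mean(readings)      # AVERAGE
stdev_val = statistics.stdev(readings)    # STDEV.S (sample standard deviation)
max_val, min_val = max(readings), min(readings)

print(f"mean={mean_val:.3f}, stdev={stdev_val:.3f}, range=({min_val}, {max_val})")
```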
7.
In the 21st century, global economic integration and the rapid development of information technology have changed economic relationships and structures. Financial managers no longer simply keep the books; they must track the macro environment and provide multi-dimensional, multi-angle, multi-level statistical analysis for management decisions. ERP, a modern enterprise-management model based on computer-aided information systems, integrates all of an enterprise's information resources and provides a systematic platform for decision making, planning, control, and performance evaluation, helping financial staff move from routine bookkeeping toward deeper analysis and forecasting. This article analyses the application of ERP systems to enterprise financial management and offers recommendations based on that analysis.
8.
9.
Using an actual production process as an example, this paper presents a method for designing shell-and-tube heat exchangers in Excel. The results show that Excel's extensive function library and data-handling capabilities can substantially improve the efficiency of heat-exchanger design while keeping the calculation transparent.
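The sizing step at the heart of such a design spreadsheet is the LMTD calculation. A minimal Python sketch assuming counter-current flow, with a hypothetical duty, coefficient, and temperatures (not from the cited paper):

```python
import math

# Hypothetical counter-current duty: hot stream 150 -> 90 C, cold stream 30 -> 70 C
Q = 250_000.0   # heat duty, W (assumed)
U = 500.0       # overall heat-transfer coefficient, W/(m^2*K) (assumed)

dT1 = 150.0 - 70.0   # hot inlet minus cold outlet
dT2 = 90.0 - 30.0    # hot outlet minus cold inlet

# Log-mean temperature difference
lmtd = (dT1 - dT2) / math.log(dT1 / dT2)

# Required heat-transfer area from Q = U * A * LMTD
area = Q / (U * lmtd)
print(f"LMTD = {lmtd:.1f} K, required area = {area:.2f} m^2")
```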
10.
This paper explains that complete, accurate, and comprehensive process safety information generally falls into three categories: material hazard information, basic process-design information, and basic equipment information. Such information supports analysis, judgment, and decision making by management, technical, operations, and maintenance personnel during construction work and process safety management activities, providing a basis for sound decisions and safe work. The paper also discusses the significance of process safety information in managing process units and its current state in the Tarim oilfield.
11.
Zhou Z, Marepally SR, Nune DS, Pallakollu P, Ragan G, Roth MR, Wang L, Lushington GH, Visvanathan M, Welti R 《Lipids》2011,46(9):879-884
LipidomeDB Data Calculation Environment (DCE) is a web application to quantify complex lipids by processing data acquired after direct infusion of a lipid-containing biological extract, to which a cocktail of internal standards has been added, into an electrospray source of a triple quadrupole mass spectrometer. LipidomeDB DCE is located on the public Internet at . LipidomeDB DCE supports targeted analyses; analyte information can be entered, or pre-formulated lists of typical plant or animal polar lipid analytes can be selected. LipidomeDB DCE performs isotopic deconvolution and quantification in comparison to internal standard spectral peaks. Multiple precursor or neutral loss spectra from up to 35 samples may be processed simultaneously, with data input as Excel files and output as tables viewable on the web and exportable in Excel. The pre-formulated compound lists and web access, used with direct-infusion mass spectrometry, provide a simple approach to lipidomic analysis, particularly for new users.
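The two core steps named in the abstract, isotopic deconvolution and quantification against internal standards, can be sketched in miniature. All peak intensities and the isotope factor below are hypothetical, and this is not code from LipidomeDB DCE:

```python
# Remove the isotopic overlap of one analyte onto a peak 2 Da heavier,
# then quantify against a spiked internal standard.
ISOTOPE_M2_FRACTION = 0.06   # assumed A+2 isotope abundance relative to monoisotopic

peak_a = 1000.0              # monoisotopic peak of lipid A at m/z x
peak_b_raw = 400.0           # raw peak at m/z x+2: lipid B plus A's A+2 isotope

# Isotopic deconvolution: subtract A's contribution at m/z x+2
peak_b = peak_b_raw - ISOTOPE_M2_FRACTION * peak_a

# Quantification against an internal standard of known spiked amount
standard_intensity = 500.0   # peak of the internal standard
standard_nmol = 2.0          # nmol spiked
amount_b = peak_b / standard_intensity * standard_nmol
print(f"corrected B intensity = {peak_b:.1f}, amount = {amount_b:.3f} nmol")
```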
12.
Economic evaluation of health care interventions based on decision analytic modelling can generate valuable information for health policy decision makers. However, the usefulness of the results obtained depends on the quality of the data input into the model; that is, the accuracy of the estimates for the costs, effectiveness, and transition probabilities between the different health states of the model. The aim of this paper is to review the use of Bayesian decision models in economic evaluation and to demonstrate how the individual components required for decision analytical modelling (i.e., systematic review incorporating meta-analyses, estimation of transition probabilities, evaluation of the model, and sensitivity analysis) may be addressed simultaneously in one coherent Bayesian model evaluated using Markov Chain Monte Carlo simulation implemented in the specialist Bayesian statistics software WinBUGS. To illustrate the method described, a simple probabilistic decision model is developed to evaluate the cost implications of using prophylactic antibiotics in caesarean section to reduce the incidence of wound infection. The advantages of using the Bayesian statistical approach outlined compared to the conventional classical approaches to decision analysis include the ability to: (i) perform all necessary analyses, including all intermediate analyses (e.g., meta-analyses) required to derive model parameters, in a single coherent model; (ii) incorporate expert opinion either directly or regarding the relative credibility of different data sources; (iii) use the actual posterior distributions for parameters of interest (opposed to making distributional assumptions necessary for the classical formulation); and (iv) incorporate uncertainty for all model parameters.
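The flavour of such a probabilistic decision model can be conveyed with a much-simplified forward Monte Carlo in Python rather than WinBUGS. Every number below (costs, Beta parameters) is invented for illustration and does not come from the paper:

```python
import random

random.seed(1)

# Hypothetical inputs: wound-infection risk with/without prophylaxis is
# uncertain, modelled here as Beta distributions.
N = 20_000
COST_ANTIBIOTIC = 10.0     # assumed cost per prophylactic course
COST_INFECTION = 2000.0    # assumed cost of treating a wound infection

def expected_cost(alpha, beta, extra_cost):
    """Monte Carlo expected cost per patient for one strategy."""
    total = 0.0
    for _ in range(N):
        p_infect = random.betavariate(alpha, beta)  # draw an uncertain risk
        total += extra_cost + p_infect * COST_INFECTION
    return total / N

# Beta(2, 38): ~5% mean risk with prophylaxis; Beta(8, 72): ~10% without
cost_with = expected_cost(2, 38, COST_ANTIBIOTIC)
cost_without = expected_cost(8, 72, 0.0)
print(f"with prophylaxis: {cost_with:.0f}, without: {cost_without:.0f}")
```

Here uncertainty in the risk parameters propagates directly into the cost estimates, the same idea the full Bayesian model applies to all parameters at once.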
13.
Margaritis Kostoglou, Konstantinos Samaras, Thodoris D. Karapantsios 《AIChE Journal》2010,56(1):11-23
There are many open questions regarding the evolution of waves, especially for the case of turbulent films. To resolve the complexity in modeling wavy turbulent films, more information needs to be derived from experimental data. On this account, a new way is proposed herein to analyze experimental film thickness traces, replacing the usual statistical analysis. Large waves are identified in experimental traces, and their shape is described by approximation with a curve defined by a few parameters. The probability density functions of these parameters are identified, and the whole procedure can be regarded as a method of compressing the information content of the experimental data series. By comparing results at several downstream locations, information on the evolution of waves along the flow is derived. This information indicates a 3D character of the flow, customarily neglected in modeling efforts. In addition, the current results can be used for the numerical reconstruction of experimental film thickness traces. © 2009 American Institute of Chemical Engineers AIChE J, 2009
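The first step of such an analysis, picking out large waves from a thickness trace, can be sketched as thresholded peak detection. The trace values and threshold below are invented, and the real wave-shape parametrization is richer than a single height:

```python
# Scan a film-thickness trace for local maxima above a threshold and record
# each wave's position and height as its "few parameters".
trace = [1.0, 1.1, 1.6, 2.4, 1.8, 1.2, 1.1, 1.9, 2.9, 2.2, 1.3, 1.0]
THRESHOLD = 1.5   # assumed substrate level plus small ripples

waves = []
for i in range(1, len(trace) - 1):
    if trace[i] > THRESHOLD and trace[i - 1] < trace[i] >= trace[i + 1]:
        waves.append({"position": i, "height": trace[i]})

print(waves)   # each dict is a compressed description of one large wave
```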
14.
15.
This article aims to leverage the big data in shale gas industry for better decision making in optimal design and operations of shale gas supply chains under uncertainty. We propose a two-stage distributionally robust optimization model, where uncertainties associated with both the upstream shale well estimated ultimate recovery and downstream market demand are simultaneously considered. In this model, decisions are classified into first-stage design decisions, which are related to drilling schedule, pipeline installment, and processing plant construction, as well as second-stage operational decisions associated with shale gas production, processing, transportation, and distribution. A data-driven approach is applied to construct the ambiguity set based on principal component analysis and first-order deviation functions. By taking advantage of affine decision rules, a tractable mixed-integer linear programming formulation can be obtained. The applicability of the proposed modeling framework is demonstrated through a small-scale illustrative example and a case study of Marcellus shale gas supply chain. Comparisons with alternative optimization models, including the deterministic and stochastic programming counterparts, are investigated as well. © 2018 American Institute of Chemical Engineers AIChE J, 65: 947–963, 2019
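The two-stage structure described, a design decision made before uncertainty resolves and operations decided after, can be illustrated with a toy robust model. A discrete scenario set stands in for the ambiguity set, and all numbers are illustrative, not from the paper:

```python
# First stage: choose capacity. Second stage: operate (sell) once demand is
# known. The robust choice maximizes the worst case over demand scenarios.
BUILD_COST = 2.0        # cost per unit of installed capacity (first stage)
PROFIT = 5.0            # profit per unit actually sold (second stage)
SCENARIOS = [60.0, 80.0, 100.0, 120.0]   # possible demands (stand-in for ambiguity set)

def worst_case_profit(capacity):
    # Second stage: sell min(capacity, demand) in each scenario
    return min(PROFIT * min(capacity, d) - BUILD_COST * capacity
               for d in SCENARIOS)

# Robust first-stage decision over a grid of candidate capacities
best_cap = max(range(0, 201), key=lambda c: worst_case_profit(float(c)))
print(f"robust capacity = {best_cap}, worst-case profit = {worst_case_profit(float(best_cap)):.1f}")
```

The robust solution builds only what the worst scenario can absorb, whereas a stochastic-programming counterpart would optimize the expected profit across scenarios instead.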
16.
L. C. Jensen, A. M. Ewing, R. D. Wills, F. T. Lindgren 《Journal of the American Oil Chemists' Society》1967,44(1):5-10
The usefulness of computers in data evaluation is generally recognized; however, the problem of utilizing a computer in the most intelligent manner deserves careful consideration. Several programs are described which aid in serum lipid and lipoprotein analysis. Two programs requiring a minimum of manual measurements have been developed to analyze gas-liquid chromatograms. These programs perform many operations including corrections for baseline, linearity, Gaussian resolution, and variation in column conditions. The presentation in some detail of one of these programs for NCH elemental analysis illustrates the development and refinement of a program for a specific instrument. Finally, a general purpose statistical analysis program has been developed which greatly aids in summarizing and correlating data from these programs, as well as other sources, such as ultracentrifugal data.
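One of the corrections mentioned, the baseline correction, can be sketched as linear interpolation between the trace endpoints followed by peak-area integration. The detector readings below are hypothetical, and real chromatogram software fits the baseline more carefully:

```python
# A minimal linear baseline correction for a chromatogram trace.
trace = [2.0, 2.1, 2.3, 5.0, 9.0, 5.5, 2.8, 2.6, 2.4]

def subtract_linear_baseline(signal):
    """Subtract a straight baseline drawn between the first and last points."""
    n = len(signal)
    start, end = signal[0], signal[-1]
    slope = (end - start) / (n - 1)
    return [y - (start + slope * i) for i, y in enumerate(signal)]

corrected = subtract_linear_baseline(trace)

# Peak area by the trapezoidal rule on the corrected trace
area = sum((corrected[i] + corrected[i + 1]) / 2 for i in range(len(corrected) - 1))
print(f"corrected endpoints: {corrected[0]:.2f}, {corrected[-1]:.2f}; area = {area:.2f}")
```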
17.
Taking drying calculations from chemical engineering principles as an example, this paper discusses the application of Excel and the 1stOpt software to calculations in chemical engineering principles. Using 1stOpt and Excel greatly simplifies students' data processing and avoids the tedium of the trial-and-error method. These tools can raise students' data-processing skills and are worth wider adoption.
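The trial-and-error step that solver software automates amounts to root finding on an implicit equation. A minimal bisection sketch in Python; the equation below is invented for illustration and is not the drying relation from the paper:

```python
import math

def f(t):
    # Hypothetical implicit relation, e.g. from a humidity or heat balance
    return t * math.exp(0.1 * t) - 25.0

def bisect(func, lo, hi, tol=1e-8):
    """Bisection: assumes func(lo) and func(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if func(lo) * func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(f, 0.0, 50.0)
print(f"root = {root:.4f}, residual = {f(root):.2e}")
```

Each bisection step halves the bracketing interval, doing systematically what a student with a calculator does by guessing.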
18.
Protein complexes are the main functional modules in the cell that coordinate and perform the vast majority of molecular functions. The main approaches to identify and quantify the interactome to date are based on mass spectrometry (MS). Here I summarize the benefits and limitations of different MS-based interactome screens, with a focus on untargeted interactome acquisition, such as co-fractionation MS. Specific emphasis is given to the discussion of discovery- versus hypothesis-driven data analysis concepts and their applicability to large, proteome-wide interactome screens. Hypothesis-driven analysis approaches, i.e., complex- or network-centric, are highlighted as promising strategies for comparative studies. While these approaches require prior information from public databases, also reviewed herein, the available wealth of interactomic data continuously increases, thereby providing more exhaustive information for future studies. Finally, guidance on the selection of interactome acquisition and analysis methods is provided to aid the reader in the design of protein-protein interaction studies.
19.
Simplified models have many appealing properties and sometimes give better parameter estimates and model predictions, in the sense of mean-squared error, than extended models, especially when the data are not informative. In this paper, we summarize extensive quantitative and qualitative results in the literature concerned with using simplified or misspecified models. Based on confidence intervals and hypothesis tests, we develop a practical strategy to help modellers decide whether a simplified model should be used, and point out the difficulty in making such a decision. We also evaluate several methods for statistical inference for simplified or misspecified models.
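The mean-squared-error trade-off the abstract describes can be demonstrated with a toy simulation (not from the paper): with sparse, noisy data and a small true slope, an intercept-only model can out-predict the full linear model:

```python
import random

random.seed(7)

# True data-generating process: y = a + b*x + noise, with a small true slope.
A_TRUE, B_TRUE, SIGMA = 1.0, 0.05, 1.0
XS = [0.0, 0.25, 0.5, 0.75, 1.0]   # sparse, uninformative design
X_NEW = 1.0                        # predict at the edge of the design

def fit_and_predict():
    ys = [A_TRUE + B_TRUE * x + random.gauss(0, SIGMA) for x in XS]
    # Simplified model: intercept only -> predict the sample mean.
    simple_pred = sum(ys) / len(ys)
    # Extended model: ordinary least-squares line.
    n = len(XS)
    xbar, ybar = sum(XS) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in XS)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(XS, ys))
    b = sxy / sxx
    a = ybar - b * xbar
    return simple_pred, a + b * X_NEW

truth = A_TRUE + B_TRUE * X_NEW
mse_simple = mse_ext = 0.0
for _ in range(20_000):
    s, e = fit_and_predict()
    mse_simple += (s - truth) ** 2
    mse_ext += (e - truth) ** 2
mse_simple /= 20_000
mse_ext /= 20_000
print(f"MSE simplified = {mse_simple:.3f}, MSE extended = {mse_ext:.3f}")
```

The simplified model carries a small bias (it ignores the slope) but much lower variance, so its mean-squared error is smaller when the data are this uninformative.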
20.
Ritva R. Butrum, Susan E. Gebhardt 《Journal of the American Oil Chemists' Society》1976,53(12):a727-a730
A computerized Nutrient Data Bank has been designed for storage, summary, and retrieval of food composition data. The system is a repository for data from domestic and international sources, including research institutions, industry, and independent laboratories. Source data are carefully screened with regard to identification of the food and conditions which may affect its nutritive value. Variables such as treatment and processing of the food and method of nutrient analysis can be considered in the analysis and retrieval of the data. All primary data will go into Data Base I. After statistical analysis of primary data, unique criteria will be developed for each food for use in summarizing the nutrient data into composite values. Data Bases II and III will be derived from the information in Data Base I by averaging, weighting, and selection. The summarized data will include averages for each nutrient, the number of samples, range values, and standard error. The data can be used for compiling a new nutrition handbook and for rapid retrieval of information for scientists.
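The summarization step described, collapsing primary analyses of one food and nutrient into a composite record with average, sample count, range, and standard error, can be sketched directly. The laboratory values below are hypothetical:

```python
import math
import statistics

# Hypothetical primary analyses for one nutrient in one food (mg per 100 g)
primary = [4.1, 3.9, 4.4, 4.0, 4.2]

# Composite record of the kind stored in the derived data bases
composite = {
    "average": statistics.mean(primary),
    "n": len(primary),
    "range": (min(primary), max(primary)),
    "std_error": statistics.stdev(primary) / math.sqrt(len(primary)),
}
print(composite)
```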