Similar documents
20 similar documents found (search time: 437 ms)
1.
As more complex biogeochemical situations are investigated (e.g., evolving reactivity, passivation of reactive surfaces, dissolution of sorbates), there is a growing need for biogeochemical simulators to address new reaction forms and rate laws flexibly and easily. This paper presents an approach that meets this need, efficiently simulating general biogeochemical processes while insulating the user from additional code development. The approach allows the automatic extraction of fundamental reaction stoichiometry and thermodynamics from a standard chemistry database, and the symbolic entry of arbitrarily complex user-specified reaction forms, rate laws, and equilibria. User-specified equilibria and kinetic rates (i.e., those not defined in the format of the standardized database) are interpreted by the Maple V (Waterloo Maple) symbolic mathematics package. Maple then generates FORTRAN 90 code for (1) the analytical Jacobian matrix (if preferred over the numerical Jacobian) used in the Newton–Raphson solution procedure, and (2) the residual functions for the governing equations, user-specified equilibrium expressions, and rate laws. Matrix diagonalization eliminates the need to conceptualize the reaction system as a tableau (a list of components, species, the stoichiometric matrix, and the vector of equilibrium constants that form the species from components; Morel and Hering, 1993), while identifying a minimum-rank set of basis species with enhanced numerical convergence properties. The newly generated code, designed to operate in the BIOGEOCHEM biogeochemical simulator, is then compiled and linked into the BIOGEOCHEM executable. With these features, users can avoid recoding the simulator to accept new equilibrium expressions or kinetic rate laws, while still taking full advantage of the stoichiometry and thermodynamics provided by an existing chemical database.
Thus, the approach makes the specification of biogeochemical reaction networks more efficient and eliminates opportunities for mistakes in preparing input files and for coding errors. Test problems demonstrate the features of the procedure.
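The symbolic step can be illustrated in miniature. The sketch below uses SymPy in place of Maple V (and skips the FORTRAN 90 code generation) to derive the analytical Jacobian of a hypothetical Monod-type rate law; the species, rate form, and constants are illustrative, not taken from the paper.

```python
import sympy as sp

# Hypothetical two-species system with a Monod-type rate law, standing in
# for a user-specified kinetic expression. The paper has Maple V emit
# FORTRAN 90 for the analytical Jacobian; SymPy plays the same role here.
C1, C2, k, K = sp.symbols('C1 C2 k K', positive=True)
rates = sp.Matrix([-k * C1 * C2 / (K + C1),
                    k * C1 * C2 / (K + C1)])

# Analytical Jacobian d(rate)/d(concentration) for a Newton-Raphson solver
J = rates.jacobian([C1, C2])
print(sp.simplify(J))

# Numeric evaluation at an illustrative state
print(J.subs({C1: 1.0, C2: 2.0, k: 0.5, K: 0.1}))
```

Generated symbolic entries can then be compiled to fast numeric code (SymPy's `lambdify` is the analogue of the paper's Maple-to-FORTRAN step).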

2.
3.
The OPS-model (Operational model for Priority Substances) is a flexible atmospheric transport model for calculating the concentration and deposition of low-reactivity pollutants. The averaging period can be chosen from one month up to a period of more than 10 years. The receptor points may be defined on a regular grid in a model domain ranging from the local scale (a few hundred metres around a source) up to the scale of the European continent (ca. 2000×2000 km), or they may be defined by exact geographical (x, y) coordinates. The latter is applicable, for example, when the user wishes to compare model results with measured values from monitoring stations. The emissions can be defined as any combination of point sources and (diffuse) area sources with variable horizontal dimensions. The model uses statistical meteorological data. The minimum set of required meteorological information consists of 6-hourly data for wind speed and direction, global radiation, temperature, and precipitation amount and duration. These data are pre-processed in a separate programme to calculate the necessary statistics.
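As a toy illustration of that pre-processing step, the sketch below reduces a few hypothetical 6-hourly records to simple summary statistics; the record fields and values are invented, and the real OPS pre-processor computes far richer meteorological statistics.

```python
# Hypothetical 6-hourly meteorological records of the kind the OPS
# pre-processor consumes (fields and values are invented for illustration).
records = [
    {'wind_speed': 3.2, 'wind_dir': 210, 'precip_mm': 0.0},
    {'wind_speed': 5.1, 'wind_dir': 225, 'precip_mm': 1.2},
    {'wind_speed': 4.0, 'wind_dir': 200, 'precip_mm': 0.4},
]

# Reduce the raw records to the statistics a transport model would use
mean_speed = sum(r['wind_speed'] for r in records) / len(records)
wet_fraction = sum(r['precip_mm'] > 0 for r in records) / len(records)
print(round(mean_speed, 2), round(wet_fraction, 2))   # -> 4.1 0.67
```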

4.
This paper structures a novel vision for OLAP by fundamentally redefining several of the pillars on which OLAP has been based for the last 20 years. We redefine OLAP queries in order to move to higher degrees of abstraction than roll-ups and drill-downs, and we propose a set of novel intentional OLAP operators, namely describe, assess, explain, predict, and suggest, which express the user's need for results. We fundamentally redefine what a query answer is and escape the constraint that the answer is a set of tuples; on the contrary, we complement the set of tuples with models (typically, but not exclusively, results of data-mining algorithms over the involved data) that concisely represent the internal structure or correlations of the data. Due to the diverse nature of the involved models, we come up (for the first time, to the best of our knowledge) with a unifying framework for them, built on extending each data cell of a cube with information about the models that pertain to it, practically converting the small parts that build up the models into data that annotate each cell. We exploit this data-to-model mapping to provide highlights of the data, by isolating data and models that maximize the delivery of new information to the user. We introduce a novel method, based on a new interestingness measure, for assessing the surprise that a new query result brings to the user with respect to the information contained in the results the user has already seen. The individual parts of our proposal are integrated in a new data model for OLAP, which we call the Intentional Analytics Model. We complement our contribution with a list of significant open problems for the community to address.

5.
AIDA is a set of software tools for the fast development of easy-to-maintain Medical Information Systems. It supports all aspects of such a system, both during development and in operation. It contains tools to build and maintain forms for interactive data entry and on-line input validation; a database management system including a data dictionary and a set of run-time routines for database access; and routines for querying the database and formatting output. Unlike with an application generator, the user of AIDA may select only those parts of the tools that fulfil particular needs and may program other subsystems outside AIDA. The AIDA software uses as its host language the ANSI-standard programming language MUMPS, an interpreted language embedded in an integrated database and programming environment; this greatly facilitates the portability of AIDA applications. The database facilities supported by AIDA are based on a relational data model built on top of the MUMPS database, the so-called global structure. The relational model overcomes the restrictions of the global structure regarding string length, while the global structure itself is especially powerful for sorting. Using MUMPS as the host language gives the user an easy interface between user-defined data-validation checks, or other user-defined code, and the AIDA tools. AIDA has been designed primarily for prototyping and for constructing Medical Information Systems in research environments, which require a flexible approach. The prototyping facility of AIDA is terminal-independent and, to a great extent, multilingual. Most of these features are table-driven; this allows on-line changes of terminal type and language, but also causes overhead. AIDA therefore provides a set of optimizing tools to build faster, though less flexible, code from these table definitions.
By separating the AIDA software into a source and a run-time version, one can write implementation-specific code that is selected and loaded by a special source loader, itself part of the AIDA software. This feature is also useful for maintaining software at different sites and on different installations.

6.
This paper describes the application of a parameterisation procedure that reduces a comprehensive atmospheric chemical mechanism to a set of algebraic polynomials. This is achieved by numerically fitting orthonormal polynomial functions to the changes in species concentrations from one time point to the next, using the Gram–Schmidt orthonormalisation technique. The polynomials are then optimised for evaluation by rewriting them in Horner form. Some reduction of the number of species, to only those which influence the concentrations of the important species, had been carried out previously by removing fast time-scale species and stable products, without significant loss of accuracy. Consequently the repro-model is fitted to a subset of the original mechanism, with polynomials generated for each species in the lower-dimensional subspace. The method has been successfully applied to diurnal tropospheric cycles, and several results are shown here. Deviations between the repro-model results and those of the original mechanism are consistently less than 1% across the scenario for all key species, with the repro-model running up to 25 times faster than the numerical solution of the original scheme. The repro-modelling technique is thus highly suited to replacing the system of ordinary differential equations in the chemical sub-model of a dispersion code, significantly reducing the computational burden without loss of accuracy.
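The repro-modelling idea can be sketched for a single species. The toy "mechanism" below is simple exponential decay, and NumPy's ordinary least-squares polynomial fit stands in for the paper's Gram–Schmidt orthonormal basis; the rate constant, time step, and polynomial degree are all illustrative choices.

```python
import numpy as np

# Toy "mechanism": one-step map c(t+dt) = c(t) * exp(-k * dt).
# Repro-modelling replaces the ODE step with a fitted polynomial
# c_next = p(c_now), which is then evaluated with Horner's scheme.
k, dt = 1.2, 0.1
c_now = np.linspace(0.0, 2.0, 50)          # training states
c_next = c_now * np.exp(-k * dt)           # "exact" one-step map

coeffs = np.polyfit(c_now, c_next, deg=3)  # surrogate (highest power first)

def horner(coeffs, x):
    """Evaluate the polynomial with Horner's scheme: one multiply-add per coefficient."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# The surrogate reproduces the one-step map closely on the training range
max_err = max(abs(horner(coeffs, c) - c * np.exp(-k * dt)) for c in c_now)
print(max_err)
```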

7.
CALCMIN, an open-source Visual Basic program, is implemented in Microsoft Excel. The program was primarily developed to support geoscientists in their routine task of calculating structural formulae of minerals from chemical analyses, mainly obtained by electron microprobe (EMP) techniques. Calculation programs for various minerals are already included as sub-routines, arranged in separate modules containing a minimum of code. The architecture of CALCMIN allows the user to develop new calculation routines, or to modify existing ones, with little knowledge of programming techniques. By means of a simple mouse click, the program automatically generates a rudimentary framework of code using the object model of the Visual Basic Editor (VBE). Within this framework, simple commands and functions provided by the program can be used, for example, to perform various normalization procedures or to output the results of the computations. For clarity of the code, element symbols are used as variables, initialized automatically by the program. CALCMIN sets no boundaries on the complexity of the code used, resulting in a wide range of possible applications; matrix and optimization methods can be included, for instance, to determine end-member contents for subsequent thermodynamic calculations. Diverse input procedures are provided, such as automated read-in of output files created by the EMP. Furthermore, a filter routine enables the user to extract specific analyses for use in a corresponding calculation routine. An event-driven, interactive operating mode was selected for ease of use; CALCMIN leads the user from the beginning to the end of the calculation process.
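As an illustration of the kind of computation such a routine performs, the sketch below normalizes an oxide analysis (wt%) to a fixed number of oxygens, the core of a structural-formula calculation. The composition is an ideal forsterite (Mg2SiO4) with standard molar masses; this is not CALCMIN code (which is Visual Basic), just a compact Python rendering of the normalization step.

```python
# Normalize an electron-microprobe oxide analysis (wt%) to an oxygen basis.
OXIDES = {            # oxide: (molar mass g/mol, cations per oxide, oxygens per oxide)
    'SiO2': (60.084, 1, 2),
    'MgO':  (40.304, 1, 1),
}
analysis = {'SiO2': 42.71, 'MgO': 57.29}    # wt%, ideal forsterite
OXYGEN_BASIS = 4                            # olivine formula unit: 4 oxygens

mol_ox = {ox: wt / OXIDES[ox][0] for ox, wt in analysis.items()}
total_oxygen = sum(n * OXIDES[ox][2] for ox, n in mol_ox.items())
scale = OXYGEN_BASIS / total_oxygen

# Cations per formula unit: approximately Si = 1, Mg = 2, i.e. Mg2SiO4
formula = {ox: n * OXIDES[ox][1] * scale for ox, n in mol_ox.items()}
print(formula)
```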

8.
A novel model of an adaptation decision-taking engine in multimedia adaptation   (Total citations: 2; self-citations: 0; others: 2)
In heterogeneous environments, universal multimedia access (UMA) has been proposed to provide multimedia content services. Multimedia adaptation is one of the technologies that realize UMA, and the adaptation decision-taking engine (ADTE) is a key component of it. Although many ADTE models exist, they need to be reconsidered for personalized content services. This paper proposes a novel ADTE model based on a decision tree, termed the adaptation decision tree (ADT), in which adaptation decision-making is viewed as a sequence of decisions: a modality decision followed by a format decision. Correspondingly, user preferences are divided into two types, modality preferences and format preferences, from which the ADT model is built. Before a decision is made, an optimal multimedia variation set (OMVS) is constructed with respect to the user's modality preferences, in which each element has, for its modality, the shortest distance to the user's format preferences. An adaptation decision is then executed by letting the elements of the OMVS traverse the ADT one by one: the first element that reaches a leaf with logical value true is the decision result; if no element reaches such a leaf, the element with the smallest distance is chosen as the decision variation. Quantitative analysis and simulation experiments show that the model handles adaptation decisions effectively and efficiently, especially under dynamic user preferences and limited resources.

9.
Based on the user's search history, the information the user attends to is classified by title, and feature values are extracted with an autoencoder neural network. Training-sample titles are limited to at most 25 Chinese characters, encoded in the GBK internal code for Chinese characters. Deep learning is carried out with MATLAB, transforming the feature representation of the samples in the original space into a new feature space.

10.
A security model for elastic applications on resource-constrained devices is established on the basis of cloud computing resources. The paper first introduces an elastic application consisting of one or more weblets, each of which can be launched on the mobile device or in the cloud, and which can migrate between the two according to dynamic changes in the computing environment or the user's configuration. The security of this pattern is analysed, and a security design model for elastic applications is proposed, covering authentication between the mobile-device side and the cloud side on which weblets run, secure session management, and access to services over external networks. The model addresses secure migration between weblets and the authorization of cloud weblets to access sensitive user data over external web networks. The scheme applies to cloud-computing scenarios such as application integration between private and public clouds in enterprise environments.

11.
ACNUC is a database structure and retrieval software for use with either the GenBank or EMBL nucleic acid sequence data collections. The nucleotide and textual data furnished by both collections are each restructured into a database that allows sequence retrieval on a multi-criterion basis. The main selection criteria are species (or higher-order taxon), keyword, reference, journal, author, and organelle; all logical combinations of these criteria can be used. Direct access to sequence regions that code for a specific product (protein, tRNA or rRNA) is provided. A versatile extraction procedure copies selected sequences, or fragments of them, from the database to user files suitable for analysis by user-supplied application programs. A detailed help mechanism aids the user at any time during the retrieval session. All software is written in FORTRAN 77, which guarantees a high degree of transportability to minicomputers or mainframes.
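The multi-criterion retrieval idea is simple to sketch: each criterion yields a set of sequence identifiers, and logical combinations of criteria become set operations. The data below are invented placeholders, not ACNUC content (and ACNUC itself is FORTRAN 77; Python is used here only for brevity).

```python
# Each selection criterion maps a value to the set of matching sequence IDs
# (hypothetical identifiers for illustration).
by_species = {'Escherichia coli': {1, 2, 3}, 'Homo sapiens': {4, 5}}
by_keyword = {'tRNA': {2, 4}, 'rRNA': {3, 5}}

# Logical combinations of criteria are set operations:
and_result = by_species['Escherichia coli'] & by_keyword['tRNA']   # AND
or_result = by_keyword['tRNA'] | by_keyword['rRNA']                # OR
not_result = by_species['Homo sapiens'] - by_keyword['tRNA']       # AND NOT
print(and_result, sorted(or_result), not_result)   # -> {2} [2, 3, 4, 5] {5}
```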

12.
To address the problem that indicator weights in traditional quality-assessment approaches are assigned on a single basis, the service-description ontology is first divided into two abstraction levels, a shared ontology and a domain-specific ontology, and a QoS ontology with a multi-layer "abstraction-application-measurement" structure is constructed for describing the objects of QoS measurement and collecting data. A bidirectional measurement model, DM-QSM, based on a deep belief network and a regression model, is then established: service-description information and historical data of similar services serve as the training-sample set for forward training of DM-QSM, and user feedback is used for reverse tuning, so that QoS indicator weights and their preference degrees are adjusted adaptively. Finally, using the programmable modelling environment NetLogo as the experimental platform, the public service dataset QWS as the training-sample set, and e-commerce application services as the test-sample set, the feasibility and effectiveness of DM-QSM are verified.

13.
Design of a database-oriented integration environment for chemical engineering software   (Total citations: 2; self-citations: 2; others: 0)
In the chemical engineering field, if users can integrate existing chemical engineering software within a single integration environment, the manpower and resources spent on secondary software development, or on mastering integration techniques, can be greatly reduced. This paper develops such an integration environment for chemical engineering software, designed around the characteristics of Windows and its general methods for managing applications. The environment consists of four modules: an interface-integration module, a code-integration module, a data-integration module, and a database management subsystem. For these four modules, strategies are given for interface integration, code integration, data integration, and the construction of the database and its management subsystem, respectively.

14.
One way to compare clustering techniques is in terms of the part done by the computer and the part controlled by the user. This paper presents a mathematical formulation of the clustering problem that includes no user-controlled parameters, so that no outside interference is required. The model was applied to clustering data points defined in a multi-dimensional space. The experiments demonstrate that the partition depends mainly upon the structure inherent in the data set. This approach is particularly useful when no preliminary information, such as the number of categories or their distribution, is available.

15.
Computers & Chemistry, 1990, 14(2): 141-156
An algorithm is described for the numerical solution of proper matched multinomial sigma-pi equation systems, which are most commonly associated with the equations of ideal dilute-solution chemical equilibrium. In this context, the algorithm obtains near-full floating-point precision for all equilibrium concentrations lying within nearly the full floating-point dynamic range of the implementation. No starting approximation is required from the user. The method converged in seven or fewer iterations for systems containing up to 14 reactions and 18 reactants. As a second application, the equation system is applied to a special class of steady-state chemical reaction networks through the introduction of flux-shifters as chemical network elements.
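The flavour of such an equilibrium solve can be shown with a single-reaction example, far simpler than the multinomial sigma-pi systems the algorithm targets (and, unlike the paper's method, using a hand-picked starting value): a weak acid HA ⇌ H+ + A-, with hypothetical acetic-acid-like constants and water autoionization neglected.

```python
# Single equilibrium HA <-> H+ + A- with K = [H+][A-]/[HA], mass balance
# [HA] + [A-] = C, and [H+] = [A-] (water autoionization neglected).
# Eliminating gives h^2 + K*h - K*C = 0, solved here by Newton's method.
K, C = 1.8e-5, 0.01                   # hypothetical equilibrium constant, total mol/L

def g(h):
    return h * h + K * h - K * C      # residual whose root is [H+]

def gprime(h):
    return 2.0 * h + K                # analytical derivative for Newton's method

h = C                                 # start above the root: monotone convergence
for _ in range(30):
    step = g(h) / gprime(h)
    h -= step
    if abs(step) < 1e-15:
        break

print(h)   # [H+] ~ 4.15e-4 mol/L, i.e. pH ~ 3.38
```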

16.
The conductor-like screening model for real solvents (COSMO-RS), based on quantum-mechanical calculations, has made important progress in predicting pure-component saturation vapour pressures and the vapour-liquid and liquid-liquid equilibria of mixtures, but the optimization of its model parameters remains unsatisfactory. COSMO-SAC (segment activity coefficient) is an improved form of the COSMO-RS activity-coefficient model whose vapour-liquid equilibrium predictions satisfy thermodynamic consistency. In this work, COSMO-SAC is combined with the SRK equation of state through the WS mixing rule and applied to the correlation of binary vapour-liquid equilibrium data; a Newton-Raphson iterative algorithm is used, for the first time, to obtain optimized binary interaction coefficients k12 for many binary systems, filling a gap in the literature. Correlation of vapour-liquid equilibrium data for different types of binary systems over wide ranges of temperature and pressure (hydrocarbon-hydrocarbon, hydrocarbon-alcohol, hydrocarbon-ketone, hydrocarbon-ester, and alcohol-water, among others) shows that with the optimized k12 the computational accuracy of the model improves greatly and agrees well with experimental data. For alcohol-water and similar systems in particular, the correlation accuracy of the optimized model is clearly better than literature values.

17.
A graphical user interface (GUI) is the front-end representation of the underlying code. To address the problem that test suites generated from existing models do not find software defects quickly, this paper analyses the program under test at both the code level and the interface level and proposes a GUI test model, WEHG, with two distinguishing features: (1) nodes are assigned weights according to the number of variables defined and used in their event-handler functions, so that nodes involving more variables generate test cases first; and (2) dependency values between nodes are set according to the define-use relations of the event handlers, so that nodes with high dependency are added to test sequences first. Comparative experiments show that the method finds software defects faster, improves the defect-detection efficiency of the test cases, and reduces the cost of software testing.

18.
Jose M. Larrain, Calphad, 1980, 4(3): 155-171
A correlation is presented for the thermodynamic properties of alloys of iron and nickel at high temperatures. It covers both solid and liquid phases and is in good agreement with the available experimental information. The correlation is based on the assumption that the solutions consist of a mixture of chemical species in a state of thermodynamic equilibrium. The three-suffix Margules equations with zero ternary interactions and the regular-solution assumption are used to describe the activity coefficients of the (assumed) species. The correlation is applicable over the full range of compositions and is recommended for use at temperatures above 800 K. Values calculated for properties of interest are included.
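The three-suffix Margules form for a binary pair can be sketched directly. The parameter values below are hypothetical placeholders, not the paper's fitted Fe-Ni constants; the expressions are the standard two-parameter Margules activity coefficients derived from GE/RT = x1·x2·(A21·x1 + A12·x2).

```python
# Three-suffix (two-parameter) Margules activity coefficients for a binary.
A12, A21 = 0.5, 0.8    # dimensionless Margules parameters (hypothetical)

def ln_gamma(x1):
    """Return (ln gamma1, ln gamma2) at mole fraction x1 of component 1."""
    x2 = 1.0 - x1
    lng1 = (A12 + 2.0 * (A21 - A12) * x1) * x2 * x2
    lng2 = (A21 + 2.0 * (A12 - A21) * x2) * x1 * x1
    return lng1, lng2

# Infinite-dilution limits recover the parameters themselves
print(ln_gamma(0.0)[0], ln_gamma(1.0)[1])   # -> 0.5 0.8
```

A useful property of this form is that it satisfies the Gibbs-Duhem relation by construction, which is what makes it a thermodynamically consistent correlating equation.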

19.
The booming e-commerce sector and intense regional competition are pushing the transformation of Hong Kong's warehousing industry towards greater automation and efficiency. However, the transformation runs into a dilemma when stakeholders lack either the technical and operational capability (to use advanced facilities and systems) or strong motivation (due to high investment and risk). To resolve this, we first propose a new business model in which a third party, the warehousing equipment supplier (WES), is introduced into the current business between the warehouse owner (WO) and the user. The WO and the WES hold different, complementary advantages and together have the potential to "make the pie bigger" and promote the transformation. We then use cooperative game-theory approaches (the Cournot game and the Shapley value) to explore the possible equilibrium of profit distribution in the new business model, the essential conditions for the new model to succeed, and the factors that determine and affect the efficiency of the game. Experiments using a real data set show that market demand and its sensitivity to service quality, service price, price elasticity, and other factors affect the cooperative surplus and determine the feasibility or infeasibility of the business model.
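The Shapley-value side of the analysis is easy to sketch for the two-party case. The characteristic function below (stand-alone and joint profits) is a hypothetical illustration, not the paper's fitted model; the function averages each player's marginal contribution over all orders in which the coalition could form.

```python
from itertools import permutations

# Hypothetical characteristic function: stand-alone and joint profits of the
# warehouse owner (WO) and the warehousing equipment supplier (WES).
v = {
    frozenset(): 0.0,
    frozenset({'WO'}): 100.0,          # WO's stand-alone profit
    frozenset({'WES'}): 20.0,          # WES's stand-alone profit
    frozenset({'WO', 'WES'}): 180.0,   # joint profit under the new model
}

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v[frozenset(coalition)]
            coalition.add(p)
            phi[p] += v[frozenset(coalition)] - before
    return {p: total / len(orders) for p, total in phi.items()}

alloc = shapley(['WO', 'WES'], v)
print(alloc)   # -> {'WO': 130.0, 'WES': 50.0}
```

Each party keeps its stand-alone profit plus half of the 60-unit cooperative surplus, which is the Shapley split for a two-player game.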

20.
Liquid-liquid equilibrium (LLE) data are important in the chemical industry for the design of separation equipment, yet they are troublesome to determine experimentally. This paper presents a new method for the correlation of ternary LLE data, implemented as a combined structure in which a genetic algorithm (GA) trains a neural network (NN). NN coefficients that satisfy the equilibrium criterion were obtained using the GA. In the training phase, experimental concentration data and the corresponding activity coefficients were used as input and output, respectively. In the test phase, the trained NN was used to correlate the whole experimental data set given only one initial value. The calculated results were compared with the experimental data, and very low root-mean-square deviations between experimental and calculated values were obtained. With this model, tie-line and solubility-curve data for LLE can be obtained from only a few experimental points.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号