Similar Literature
20 similar documents found (search time: 31 ms)
1.
Information technology (IT) change is difficult to implement successfully. Cultural (people) issues are a major barrier to IT implementation in the architecture, engineering, and construction (AEC) industry, and existing change models have limitations, particularly with respect to cultural issues, which directly affect the ability of companies within the AEC industry to implement IT change successfully. This paper discusses research exploring the relationships between a resistance to change index (RTCI) and the demographics of individuals, in order to understand different AEC participants' resistance to IT change. Identifying individuals who exhibit different intensities of resistance to IT change efforts, together with their attendant demographics, provides benchmark data to organizations; the ability to identify potential resistors is the first step in helping ensure that new technology implementations succeed. Data were collected from a 156-person sample of the AEC population to determine the relationships among different demographic groups and differences in their RTCI. The analysis found several demographic characteristics associated with differences in likelihood of resistance, including profession, gender, computer understanding and experience, and awareness of past or future changes occurring in the respondent's company. Age and education level were expected to be related to RTCI, based on industry stereotypes, but the analysis found no support for these stereotypes. Two other stereotypes, concerning gender and computer understanding and experience, were supported by the data.

2.
Integrated project systems hold the promise of improving quality while reducing the time and cost of architecture/engineering/construction (AEC) projects. A fundamental requirement of such systems is to support the modeling and management of design and construction information and to allow the exchange of such information among different project disciplines in an effective and efficient manner. This paper presents a methodology to implement integrated project systems through a model-based approach that involves developing integrated "smart AEC objects." Smart AEC objects are an evolutionary step that builds upon past research and experience in AEC product modeling, geometric modeling, intelligent CAD systems, and knowledge-based design methods. Smart objects are 3D parametric entities that combine the capability to represent various aspects of project information required to support multidisciplinary views of the objects with the capability to encapsulate "intelligence" by embedding behavioral aspects, design constraints, and life-cycle data management features in the objects. An example implementation of smart objects to support integrated design of falsework systems is presented. The paper also discusses the requirements for extending existing standard data models, specifically the Industry Foundation Classes (IFC), to support the modeling of smart AEC objects.
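The notion of encapsulating parameters and design constraints in an object can be sketched minimally in Python; the class, the falsework-post example, and all parameter names below are illustrative assumptions, not the paper's actual object model or the IFC schema:

```python
class SmartObject:
    """Sketch of a 'smart' parametric AEC object: geometry/load parameters
    plus encapsulated design constraints (names are hypothetical)."""

    def __init__(self, name, **params):
        self.name = name
        self.params = params
        self.constraints = []   # list of (description, check) pairs

    def add_constraint(self, description, check):
        """check is a callable taking the params dict and returning bool."""
        self.constraints.append((description, check))

    def violations(self):
        """Return descriptions of all constraints the object violates."""
        return [desc for desc, check in self.constraints
                if not check(self.params)]

# illustrative falsework post with two embedded design checks
post = SmartObject("falsework_post", height_m=4.0, load_kn=180.0)
post.add_constraint("load within post capacity",
                    lambda p: p["load_kn"] <= 250.0)
post.add_constraint("height within standard range",
                    lambda p: p["height_m"] <= 6.0)
```

Changing a parameter and re-checking `violations()` is the sense in which such objects carry their own "intelligence" rather than relying on external validation code.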

3.
Exchanging data between different software systems is a critical requirement in the architecture, engineering, and construction industry, where task-specific data models and public data exchange standards have been applied for data representation and exchange. Matching two data models effectively and efficiently is a challenging task, especially when performed manually, due to the large size and complexity of today's data schemas. Some existing computer-aided approaches have attempted to automate the matching of different schemas. These approaches work and reduce human effort under specific conditions; however, they do not always result in an accurate matching of two schemas. Achieving a schema matching result comparable in accuracy to manual matching requires leveraging domain-specific knowledge, yet domain knowledge has rarely been incorporated into schema matching in prior studies. In this paper, we present a semiautomated approach that leverages domain knowledge to improve the schema matching process. Compared to a generic schema matching approach, the approach discussed in this paper generates more accurate results due to the incorporation of domain-specific constraints, which are represented and reasoned with to create a match between data models. A prototype was developed to validate this approach through a number of real-world test cases, including the matching of two publicly available data exchange standards.
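As a rough illustration of the generic (non-domain-aware) baseline such approaches improve on, a minimal name-similarity matcher can be written with Python's standard library; the field names and the 0.6 threshold are hypothetical, and the paper's actual method adds domain-specific constraints on top of this kind of matching:

```python
from difflib import SequenceMatcher

def match_schemas(source_fields, target_fields, threshold=0.6):
    """Greedy one-to-one matching of field names by string similarity.
    A purely lexical matcher: no domain knowledge is used."""
    matches = {}
    used = set()
    for s in source_fields:
        best, best_score = None, threshold
        for t in target_fields:
            if t in used:
                continue
            score = SequenceMatcher(None, s.lower(), t.lower()).ratio()
            if score > best_score:
                best, best_score = t, score
        if best is not None:
            matches[s] = best
            used.add(best)
    return matches

# hypothetical fields from two schemas
field_map = match_schemas(["BeamDepth", "ColumnHeight", "SlabThickness"],
                          ["beam_depth", "column_height", "wall_area"])
```

Here `SlabThickness` correctly stays unmatched, but a lexical matcher of this kind will also miss synonyms (e.g., "storey" vs. "floor"), which is exactly the gap domain knowledge is meant to close.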

4.
SAMS is a specialized software package that has been developed for analyzing, modeling, and generating synthetic samples of hydrologic and water resources time series such as monthly streamflows. The 2003 version of SAMS provides enhanced technical capabilities over the earlier versions of the software. The graphical user interface and the mechanisms for handling the data have been entirely rewritten in MS Visual C++. As a result, SAMS-2003 is easier to use, update, and maintain. In addition, substantial changes and restructuring have been made to enhance the modeling and data generation capabilities. The package provides many menu option windows that focus on three primary application modules: statistical analysis of data, fitting of a stochastic model (including parameter estimation and testing), and generation of synthetic series. SAMS has the capability of analyzing and modeling single-site and multisite annual and seasonal data, such as monthly and weekly streamflows, based on a number of single-site and multisite stochastic models and aggregation and disaggregation modeling schemes. The models are then utilized for generating synthetic data. Results from the various computations, e.g., the generated samples, can be presented in graphical and tabular forms and, if desired, saved to an output file. Some illustrations are provided to demonstrate the improved technical capabilities of the program using flow data of the Colorado River system.
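A minimal sketch of the kind of stochastic generation SAMS performs, here a lag-one autoregressive (AR(1)) model in Python with made-up flow statistics; SAMS itself supports far richer single-site, multisite, and disaggregation models:

```python
import random
import statistics

def generate_ar1(mean, stdev, r1, n, seed=0):
    """Generate a synthetic annual-flow series from a lag-one
    autoregressive model: z[t] = r1 * z[t-1] + e[t], where e[t] is
    Gaussian noise scaled so the series has the target variance."""
    rng = random.Random(seed)
    innov_sd = stdev * (1.0 - r1 ** 2) ** 0.5  # innovation std. dev.
    z = 0.0
    series = []
    for _ in range(n):
        z = r1 * z + rng.gauss(0.0, innov_sd)
        series.append(mean + z)
    return series

# hypothetical statistics: mean 500, std. dev. 120, lag-1 correlation 0.3
flows = generate_ar1(mean=500.0, stdev=120.0, r1=0.3, n=5000)
```

A generated sample of this kind preserves the mean, variance, and lag-one correlation of the historical record, which is the basic requirement before such series are used in planning studies.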

5.
The Industry Foundation Classes (IFC) is a common product model that provides interoperability between similar and dissimilar IT systems in the architecture, engineering, construction, and facility management (AEC/FM) industries, covering all life-cycle phases. Continuous efforts have been made to develop standardized specifications based on the IFC. Accordingly, the XM-4 project was initiated by the Korea Chapter of the International Alliance for Interoperability (IAI) to develop a two-dimensional (2D) extension model for the IFC. The XM-4 project aims to add to the IFC2x platform the ability to exchange 2D computer-aided design data within representations of virtual building models, including annotations and styles adapted mainly from ISO 10303. The focus of this research has been the development of a 2D extension model for the IFC as part of the IAI XM-4 project. This paper provides the scope and rationale of the model extension, the major modeling concepts, the defined high-level and low-level entities, and implementation issues to be considered.

6.
Clinical practice guidelines have enormous potential to improve the quality of and accountability in health care. Making the most of this potential should become easier as guideline developers integrate guidelines within information systems and electronic medical records. A major barrier to such integration is the lack of computing infrastructure in many clinical settings. To successfully implement guidelines in information systems, developers must create more specific recommendations than those that have been required for traditional guidelines. Using reusable software components to create guidelines can make the development of protocols faster and less expensive. In addition, using decision models to produce guidelines enables developers to structure guideline problems systematically, to prioritize information acquisition, to develop site-specific guidelines, and to evaluate the cost-effectiveness of the explicit incorporation of patient preferences into guideline recommendations. Ongoing research provides a foundation for the use of guideline development tools that can help developers tailor guidelines appropriately to their practice settings. This article explores how medical informatics can help clinicians find, use, and create practice guidelines.

7.
Mathematical modeling using the cellular automata (CA) approach is an attractive alternative to models based on partial differential equations when the domains to be simulated have complex boundary conditions. The computational efficiency of CA models is readily observed when using parallel processors, but implementations on personal computers, although feasible, are not especially efficient. In an effort to improve the computational efficiency of CA implementations on personal computers, we introduce in this paper a bitwise implementation that uses each bit as a different CA cell. Thus, in a 32-bit processor, each computer word stores information about 32 different CA cells. We illustrate the bitwise implementation with a biofilm model that simulates substrate diffusion and microbial growth of a single-species, single-substrate, structurally heterogeneous biofilm. The efficiency of the bitwise implementation was evaluated by comparing the computational time of equivalent CA biofilm models that used more common low-level implementations, namely, if-then operators and look-up tables. The processing speed of the bitwise implementation was over an order of magnitude higher than the processing speed of the other two implementations. Regarding the biofilm simulations, the CA model exhibited self-organization of the biofilm morphology as a function of kinetic and physical parameters.
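The core trick can be sketched in Python, whose integers stand in for machine words: one bitwise expression updates all 32 cells of a row at once. The simple "spread to occupied neighbors" rule below is illustrative only, not the paper's biofilm growth rule:

```python
WIDTH = 32                  # one machine word holds 32 CA cells, one per bit
MASK = (1 << WIDTH) - 1     # keep results within the 32-cell row

def spread(row):
    """One CA step: a cell becomes occupied if it or either neighbor is
    occupied. All 32 cells are updated with three bitwise operations,
    instead of a 32-iteration loop over individual cells."""
    return (row | (row << 1) | (row >> 1)) & MASK

state = 1 << 15             # a single occupied cell mid-row
for _ in range(3):
    state = spread(state)
```

Starting from one occupied bit, each step widens the occupied region by one cell on each side, so after three steps seven bits are set; the same word-parallel idea applies to more elaborate update rules built from AND/OR/XOR/shift combinations.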

8.
9.
A functional distribution of coronary volume can be estimated from the response of the arterio-venous O2 content difference (AVO2) to a flow step. However, the results depend on the assumed O2 exchange model. The previously used model consisted of a single mixed compartment with O2 exchange (the reference model). The purpose of this study is to estimate the errors made in the volume estimations by not taking into account factors such as flow heterogeneity, different mixing sites, or Krogh-like O2 exchange. The approach is indirect: the response of the AVO2 to a flow step is calculated with alternative O2 exchange models in which the factors mentioned are incorporated. These transients are then fitted with the reference model. The resulting estimated volumes differ from the volumes assumed in the alternative models. Large differences are obtained with some of the alternative models, e.g., the model with Krogh characteristics; however, these models seem unrealistic because capillary pO2 is higher than venous pO2. Only small differences in volume are obtained with the more realistic models. These results therefore indicate that the coronary volumes are approximated well by the estimations obtained with the reference model. The volume estimations were 9.9 and 3.8 ml per 100 g for the O2 exchange vessels and the distal venous volume, respectively.

10.
Current practices and integration trends in the architectural/engineering/construction (A/E/C) industry are increasing the demand for the implementation and deployment of integrated project systems. Much of the research of the last decade was driven by the need to develop integrated project systems and standard industry-wide data models to support their development. This paper presents a multitier component-based framework that aims to facilitate the implementation of modular and distributed integrated project systems to support multidisciplinary project processes throughout the project life cycle. The framework addresses the specific requirements of A/E/C projects and highlights the functionality and approach required to develop integrated project systems. The framework defines a three-tier architecture: an applications tier, a common domain-services tier, and a project data-repository tier. The applications tier includes a set of function-specific software tools that interact with the domain-services-tier components via a set of adapters, which map the applications' internal proprietary data models to and from a standard integrated data model. The domain-services-tier components implement a number of generic services, such as data management, transactions management, document management, and workflow management. The data-repository tier represents a centralized shared storage of all relevant project information. The paper also discusses the implementation of a prototype software system that demonstrates the use of the framework's reusable components and the Industry Foundation Classes (IFC) data model in typical building projects.

11.
Rapid development of community health information networks raises the issue of semantic interoperability between distributed and heterogeneous systems. Indeed, operational health information systems originate from heterogeneous teams of independent developers and have to cooperate in order to exchange data and services. Good cooperation is based on a good understanding of the messages exchanged between the systems. The main issue of semantic interoperability is to ensure that the exchange is not only possible but also meaningful. The main objective of this paper is to analyze semantic interoperability from a software engineering point of view. It describes the principles for the design of a semantic mediator (SM) in the framework of a distributed object manager (DOM). The mediator is itself a component that should allow the exchange of messages independently of languages and platforms. The functional architecture of such an SM is detailed. These principles have been partly applied in the context of the HELIOS object-oriented software engineering environment. The resulting service components are presented along with their current state of achievement.

12.
One-dimensional analytical mass transport models are familiar to environmental professionals because they are typically used as learning devices in undergraduate groundwater courses. The application of these models requires relatively certain knowledge of the contaminant release to the saturated zone. However, release data are typically not reliably known at sites with uncontrolled contaminant releases. A mass balance approach has been developed to calculate contaminant release parameters based on site-specific groundwater concentration data. Standard numerical calibration and sensitivity analysis techniques were modified for use with the one-dimensional spreadsheet model. A groundwater concentration dataset from a Superfund site was used to evaluate three schemes for calculating the model initial concentration. The site application demonstrates how the spreadsheet model could be used for preliminary comparisons of remediation systems, including estimation of restoration times. The use of the spreadsheet model may reduce the effort associated with the subsequent numerical modeling typically required for remedial design. The spreadsheet application highlights the importance of collecting physical data along with groundwater concentration data.
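A classic closed-form solution of the kind used in such one-dimensional spreadsheet models is the Ogata-Banks solution for a continuous source at the upgradient boundary; the sketch below uses hypothetical site parameters, not data from the Superfund case:

```python
import math

def ogata_banks(x, t, c0, v, D):
    """Ogata-Banks solution of 1D advection-dispersion with a continuous
    source of concentration c0 at x = 0:
    C(x,t) = (c0/2) * [erfc((x - v t)/(2 sqrt(D t)))
                       + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# hypothetical values: seepage velocity 0.1 m/d, dispersion 0.5 m^2/d,
# source concentration 10 mg/L, observation point 5 m downgradient
c_late = ogata_banks(x=5.0, t=200.0, c0=10.0, v=0.1, D=0.5)
```

At late time the concentration at the observation point approaches the source concentration, while points far ahead of the advancing front remain near zero, which is the behavior a spreadsheet implementation of the model reproduces cell by cell.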

13.
Statistical analysis of repeated measures data using SAS procedures
Mixed linear models were developed by animal breeders to evaluate the genetic potential of bulls. Application of mixed models has recently spread to all areas of research, spurred by the availability of advanced computer software. Previously, mixed model analyses were implemented by adapting fixed-effect methods to models with random effects. This imposed limitations on applicability because the covariance structure was not modeled; this is the case with PROC GLM in the SAS System. Recent versions of the SAS System include PROC MIXED. This procedure implements random effects in the statistical model and permits modeling the covariance structure of the data. As a result, PROC MIXED can compute efficient estimates of fixed effects and valid standard errors of the estimates. Modeling the covariance structure is especially important for the analysis of repeated measures data because measurements taken close together in time are potentially more highly correlated than those taken far apart in time.

14.
Indirect response models (IRM) represent one possible way to explain and quantitatively describe a delayed pharmacodynamic effect under non-steady-state conditions. The standard way to obtain estimates of the pharmacodynamic (PD) parameters of an IRM consists of two steps: first, an appropriate parametric pharmacokinetic (PK) model (compartmental, polyexponential, etc.) is fitted to the plasma concentration-time data, and then the IRM is fitted to the PD data using the PK model as its input. In the present work it is demonstrated that a simple piecewise function consisting of interpolation lines connecting the concentration-time points can be used as a universal nonparametric PK model, thereby allowing the first step to be skipped. MS Excel spreadsheets implementing this PK model and four known versions of the IRM are presented. The usefulness of the approach is demonstrated by fitting IRMs to simulated data as well as to real PK/PD data for warfarin and terbutaline. Estimates of the IRM parameters obtained with the nonparametric PK model were close to those published in the literature.
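The idea can be sketched in Python rather than Excel: a piecewise-linear interpolant of the observed concentrations serves as the nonparametric PK "model," feeding a simple Euler integration of one IRM variant (an inhibitory model here); all parameter values and data points below are invented for illustration:

```python
def interp_conc(t, times, concs):
    """Nonparametric PK 'model': piecewise-linear interpolation of the
    observed concentration-time points (held flat beyond the ends)."""
    if t <= times[0]:
        return concs[0]
    if t >= times[-1]:
        return concs[-1]
    for i in range(1, len(times)):
        if t <= times[i]:
            f = (t - times[i - 1]) / (times[i] - times[i - 1])
            return concs[i - 1] + f * (concs[i] - concs[i - 1])

def simulate_irm(times, concs, kin, kout, ic50, dt=0.01, t_end=24.0):
    """Euler integration of an inhibitory indirect response model:
    dR/dt = kin * (1 - C/(IC50 + C)) - kout * R, with R(0) = kin/kout."""
    r = kin / kout           # baseline response
    t = 0.0
    profile = []
    while t <= t_end:
        c = interp_conc(t, times, concs)
        r += dt * (kin * (1.0 - c / (ic50 + c)) - kout * r)
        profile.append((t, r))
        t += dt
    return profile

# hypothetical observed PK data (hours, mg/L) and IRM parameters
times = [0.0, 1.0, 2.0, 4.0, 8.0]
concs = [0.0, 10.0, 8.0, 4.0, 0.0]
profile = simulate_irm(times, concs, kin=10.0, kout=1.0, ic50=5.0)
```

In a fitting context, the interpolated concentrations stay fixed while `kin`, `kout`, and `ic50` are adjusted to match observed PD data, which is exactly the step the paper shows can proceed without a parametric PK fit.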

15.
The coil cooling and storage unit (CCSU) is used to cool cold-rolled coils to the temper rolling temperature after the annealing cycle is over at the batch annealing furnace (BAF) in a cold rolling mill (CRM). In the CCSU, the coils are kept on the cooling bases for a fixed time irrespective of grade and tonnage. Therefore, a mathematical model was needed to accurately predict the cooling time of the coils. The current study involves experimental and numerical analysis of a stack of coils with respect to heat transfer and fluid flow. A comparative study was carried out to ascertain the relative merits of convectors and "C" inserts (CIs) in cooling the coils. The air flow distribution for different convectors and CIs was measured by means of a full-scale physical model. Two different mathematical models were applied to model the fluid flow and flow distribution through the stack of coils. The first flow model uses the hydraulic resistance concept for estimating the air flow rate distribution, whereas the second uses commercial computational fluid dynamics (CFD) software and predicts the velocity distribution in the flow path between two coils in a stack. The predictions from these two models compare well with the experimental data. The flow models were used to calculate the average heat-transfer coefficient in different flow passages in a stack. The heat-transfer coefficients thus obtained were used to tune and validate a two-dimensional transient heat-transfer model of the coils. The heat-transfer model predicts the cooling time of coils accurately and also suggests a possible reduction of cooling time if CIs are used in place of convectors.
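For orientation, a lumped-capacitance (Newton cooling) estimate of coil cooling time can be sketched in a few lines; this ignores the internal temperature gradients and flow-passage detail that the paper's 2D transient model resolves, and all coil parameters below are assumed values:

```python
import math

def cooling_time(mass, cp, h, area, t0, t_amb, t_target):
    """Lumped-capacitance estimate of the time (seconds) for a coil to
    cool from t0 to t_target in ambient air at t_amb, assuming uniform
    internal temperature: t = tau * ln((t0 - t_amb)/(t_target - t_amb))."""
    tau = mass * cp / (h * area)   # thermal time constant, s
    return tau * math.log((t0 - t_amb) / (t_target - t_amb))

# hypothetical coil: 20 t of steel, cp = 490 J/(kg K), effective
# heat-transfer coefficient 25 W/(m^2 K) over 15 m^2 of surface,
# cooling from 650 C to 120 C in 35 C ambient air
t_cool = cooling_time(mass=20000.0, cp=490.0, h=25.0, area=15.0,
                      t0=650.0, t_amb=35.0, t_target=120.0)
hours = t_cool / 3600.0
```

The practical lever visible even in this crude model is the heat-transfer coefficient `h`: the convector-versus-CI comparison in the paper amounts to measuring how each insert changes the effective `h` in the flow passages.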

16.
Describes the fundamental relations between multidimensional scaling and factor analysis. Metric and nonmetric versions of both models are described in terms of the type of data analyzed, assumptions made, objectives, computational procedures, geometric representations of data and solutions, and the psychological meaning of results. What is commonly taken to be a fundamental identity between the metric versions of the two models is shown to be merely the employment of the same theorems. The strongest relations between the techniques lie in the realm of individual differences models for multidimensional scaling. Several such models are presented and are shown to represent the application of the logic of factor analysis to the substance of multidimensional scaling.

17.
Reinforced concrete (RC) columns are the most critical components in bridges under seismic excitation. In this paper, a simple closed-form formulation to estimate the fragility of RC columns is developed. The formulation is used to estimate the conditional probability of failure of an example column for given shear and deformation demands. The estimated fragilities are as accurate as more sophisticated estimates (i.e., predictive fragilities) and do not require any reliability software. A sensitivity analysis is carried out to identify the parameter(s) to which the reliability of the example column is most sensitive. The closed-form formulation uses probabilistic capacity models, and a Bayesian procedure is presented to update existing probabilistic models with new data. The model updating process can incorporate different types of information, including laboratory test data, field observations, and subjective engineering judgment, as they become available.
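A common closed-form fragility of this general kind is the lognormal model, computable without any reliability software; the median capacity and dispersion below are assumed values for illustration, not the paper's calibrated capacity models:

```python
import math

def fragility(demand, theta, beta):
    """Lognormal fragility: P(capacity < demand) for a capacity with
    median theta and logarithmic standard deviation beta, i.e.
    Phi(ln(demand/theta) / beta), using erfc for the normal CDF."""
    z = math.log(demand / theta) / beta
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

# hypothetical column: median deformation capacity 5 (arbitrary units),
# dispersion beta = 0.4; evaluate at three demand levels
p_low = fragility(demand=2.0, theta=5.0, beta=0.4)
p_med = fragility(demand=5.0, theta=5.0, beta=0.4)   # demand at the median
p_high = fragility(demand=8.0, theta=5.0, beta=0.4)
```

By construction the failure probability is exactly 0.5 when the demand equals the median capacity, and a sensitivity analysis of the sort the paper describes amounts to examining how these probabilities shift as `theta` and `beta` (or the demand) are perturbed.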

18.
Profile hidden Markov models
The recent literature on profile hidden Markov model (profile HMM) methods and software is reviewed. Profile HMMs turn a multiple sequence alignment into a position-specific scoring system suitable for searching databases for remotely homologous sequences. Profile HMM analyses complement standard pairwise comparison methods for large-scale sequence analysis. Several software implementations and two large libraries of profile HMMs of common protein domains are available. HMM methods performed comparably to threading methods in the CASP2 structure prediction exercise.
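The position-specific scoring idea can be sketched for an ungapped alignment; a real profile HMM additionally models match/insert/delete states and transition probabilities, and the toy DNA alignment below is invented:

```python
import math

def build_pssm(alignment, alphabet="ACGT", pseudocount=1.0):
    """Position-specific log-odds scores from an ungapped alignment:
    for each column, the log of the (pseudocount-smoothed) residue
    frequency relative to a uniform background."""
    n_seqs = len(alignment)
    length = len(alignment[0])
    bg = 1.0 / len(alphabet)                      # uniform background
    pssm = []
    for col in range(length):
        scores = {}
        for a in alphabet:
            count = sum(1 for seq in alignment if seq[col] == a)
            p = (count + pseudocount) / (n_seqs + pseudocount * len(alphabet))
            scores[a] = math.log(p / bg)
        pssm.append(scores)
    return pssm

def score(seq, pssm):
    """Sum the position-specific scores along a candidate sequence."""
    return sum(pssm[i][c] for i, c in enumerate(seq))

# toy alignment of four sequences; column 2 is perfectly conserved
pssm = build_pssm(["ACGT", "ACGA", "ACGT", "TCGT"])
```

Sequences matching the conserved columns score above those that do not, which is the sense in which an alignment becomes a search tool; a profile HMM generalizes this by letting the score account for insertions and deletions at each position.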

19.
Recent work has investigated various schemes for the attachment of free-floating grains in models of equiaxed solidification in multicomponent alloys. However, these models are deterministic in nature, and simply investigating their differences for a limited number of results would not constitute an adequate comparison of their predictions. Instead, the models are compared in the context of the uncertainty in the most important input parameters. This approach is especially important in light of the effort required to implement a new model. If the predictions are essentially the same, then either model will suffice, or one may be selected for ease of implementation, numerical robustness, or computational efficiency. If, however, the models are significantly different, then the most accurate should be selected. In order to investigate the effects of input uncertainty on the output of grain attachment models, the PRISM Uncertainty Quantification framework was employed. The three models investigated were a constant packing fraction (CPF) scheme, an average solid velocity method (AVM), and a continuum attachment approach. Comparisons were made between the CPF and AVM models to estimate the importance of the local velocity field and between the CPF and continuum models to determine the sensitivity of the macrosegregation to new parameters unique to the continuum model.

20.
By constructing triangulated irregular network (TIN) models, a bounding-box method is used to exclude the large number of triangle elements in the two mesh models that cannot intersect, and a spatial-coding technique is then applied to further localize the set of intersecting triangles. The initial mesh construction and the final generation of the intersection lines are implemented in VBA under the AUTOCAD software; the intermediate data processing is carried out in a VC++ environment, which performs the intersection detection and computes the endpoint coordinates of the intersection lines. After sorting the endpoints, continuous polylines or closed intersection loops are generated to form an envelope surface. This approach greatly reduces the computational inefficiency caused by solid modeling and the large volume of data involved in the calculation, ultimately supporting digital simulation of the production process in open-pit mines.
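The bounding-box pre-filter described above can be sketched in Python rather than VBA/VC++; the brute-force double loop stands in for the spatial-coding index, and the triangle meshes below are toy data:

```python
def bbox(tri):
    """Axis-aligned bounding box of a triangle given as three (x, y, z)
    points, returned as (min corner, max corner)."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_overlap(a, b):
    """True if two axis-aligned boxes overlap on every axis: the cheap
    test used to discard triangle pairs that cannot possibly intersect."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def candidate_pairs(mesh_a, mesh_b):
    """Broad phase: keep only triangle pairs whose boxes overlap.
    A spatial-coding / grid index would replace this double loop at scale;
    the exact triangle-triangle intersection test runs only on survivors."""
    boxes_a = [bbox(t) for t in mesh_a]
    boxes_b = [bbox(t) for t in mesh_b]
    return [(i, j)
            for i, ba in enumerate(boxes_a)
            for j, bb in enumerate(boxes_b)
            if boxes_overlap(ba, bb)]

# toy meshes: one triangle in mesh_a; mesh_b has one nearby triangle
# and one far away that the box test should eliminate
mesh_a = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
mesh_b = [((0.2, 0.2, -0.5), (0.5, 0.2, 0.5), (0.2, 0.5, 0.5)),
          ((10, 10, 10), (11, 10, 10), (10, 11, 10))]
overlap_pairs = candidate_pairs(mesh_a, mesh_b)
```

Only the surviving pairs need the expensive exact intersection computation, which is why the box filter (and the spatial index that accelerates it) dominates the overall running time.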


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号