20 similar documents found.
1.
We apply the Fuzzy Temporal Constraint System we have developed to the case of SARS (Severe Acute Respiratory Syndrome). The idea is to characterize the temporal evolution of the symptoms of this ill-known disease by modelling patients’ data in a Fuzzy Temporal Constraint Network. We discuss how the system manages both fuzzy qualitative and metric constraints, allowing the symptoms of different patients to be represented in a flexible manner. In this way it becomes possible to deduce characteristic periods of an ill-known disease such as SARS. A new user interface has been incorporated into the architecture of the system.
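For readers unfamiliar with fuzzy temporal constraints, the sketch below shows one common representation: a trapezoidal possibility distribution over the delay between two symptoms, composed along a path by fuzzy addition and tightened by intersection. The symptom names and delay values are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a fuzzy temporal constraint as a trapezoidal possibility
# distribution over the delay (in days) between two symptom-onset events.
from dataclasses import dataclass

@dataclass
class Trapezoid:
    """Trapezoidal possibility distribution over a temporal distance (in days)."""
    a: float  # support start (possibility rises from 0)
    b: float  # core start (possibility reaches 1)
    c: float  # core end
    d: float  # support end (possibility falls back to 0)

    def membership(self, x: float) -> float:
        """Degree of possibility that the delay equals x."""
        if x <= self.a or x >= self.d:
            return 0.0
        if self.b <= x <= self.c:
            return 1.0
        return (x - self.a) / (self.b - self.a) if x < self.b else (self.d - x) / (self.d - self.c)

def compose(t1: Trapezoid, t2: Trapezoid) -> Trapezoid:
    """Constraint A->C induced by constraints A->B and B->C (fuzzy addition)."""
    return Trapezoid(t1.a + t2.a, t1.b + t2.b, t1.c + t2.c, t1.d + t2.d)

def intersect(t1: Trapezoid, t2: Trapezoid):
    """Tighten a direct constraint with an induced one; None signals inconsistency."""
    a, b = max(t1.a, t2.a), max(t1.b, t2.b)
    c, d = min(t1.c, t2.c), min(t1.d, t2.d)
    # For brevity an empty core is treated as inconsistency; a full FTCN solver
    # would instead keep sub-normalized distributions and a degree of consistency.
    return Trapezoid(a, b, c, d) if (a < d and b <= c) else None

# Hypothetical constraints: fever precedes cough by roughly 1-3 days,
# cough precedes dyspnoea by roughly 2-5 days.
fever_to_cough = Trapezoid(0, 1, 3, 4)
cough_to_dyspnoea = Trapezoid(1, 2, 5, 6)
fever_to_dyspnoea = compose(fever_to_cough, cough_to_dyspnoea)
print(fever_to_dyspnoea)                    # Trapezoid(a=1, b=3, c=8, d=10)
print(fever_to_dyspnoea.membership(4.0))    # 1.0 -> a 4-day delay is fully possible
```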
2.
Zhang Zhe, Xiong Hui, Xu Tong, Qin Chuan, Zhang Le, Chen Enhong 《Knowledge and Information Systems》2022,64(9):2435-2456
Knowledge and Information Systems - To assure the development of effective treatment plans, it is crucial to understand the complication relationships among diseases. In practice, traditional...
3.
4.
Electronic Markets - Purchase prediction plays an important role for decision-makers in e-commerce, helping them improve the consumer experience, provide personalised recommendations and increase revenue. Many...
5.
6.
Mario Malcangi 《Neural computing & applications》2016,27(5):1165-1173
Driving safety can be improved by predicting imminent falling asleep at the wheel. Several methods of early detection based on continuous monitoring of physiological and behavioral parameters have been investigated. Requirements for noninvasive, unattended operation and personal adaptation need to be met, along with the effectiveness of the detection method, for the system to perform reliably when applied. Because wakefulness and sleep are reflected in several human physiological conditions, such as cardiac activity, breathing, movement, and galvanic skin conductance, features were extracted from the captured bioelectric signals. A fuzzy decision-fusion logic was tuned to make inferences about oncoming driver fatigue and drowsiness. The evolving fuzzy neural network paradigm was applied to the previously developed framework to improve reliability while keeping target system complexity low.
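The sketch below illustrates the general idea of fuzzy decision fusion over several physiological features. The feature names, membership breakpoints and rules are invented for the example; they are not the tuned rule base described in the paper.

```python
# Illustrative sketch of fuzzy decision fusion for drowsiness detection.
# All thresholds and rules below are hypothetical example values.

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def drowsiness_score(heart_rate, breathing_rate, movement, skin_conductance):
    # Fuzzify each physiological feature into a "suggests drowsiness" degree.
    low_hr  = tri(heart_rate, 40, 50, 65)            # beats per minute
    slow_br = tri(breathing_rate, 6, 9, 13)          # breaths per minute
    still   = tri(movement, -0.1, 0.0, 0.2)          # normalized activity level
    low_gsr = tri(skin_conductance, 0.0, 1.0, 3.0)   # microsiemens

    # Rules: AND = min, rule aggregation = max (Mamdani-style fusion).
    r1 = min(low_hr, slow_br)   # cardiac + respiratory evidence
    r2 = min(still, low_gsr)    # behavioral + electrodermal evidence
    r3 = min(low_hr, still)     # mixed evidence
    return max(r1, r2, r3)      # fused degree of oncoming drowsiness

if __name__ == "__main__":
    print(drowsiness_score(52, 9.5, 0.05, 1.2))  # ~0.87 -> an alert could be raised
```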
7.
Designers rely on performance predictions to direct the design toward appropriate requirements. Machine learning (ML) models exhibit the potential for rapid and accurate predictions. Developing conventional ML models that generalize well to unseen design cases requires effective feature engineering and selection. Identifying generalizable features calls for good domain knowledge on the part of the ML model developer. Therefore, developing ML models for all design performance parameters with conventional ML will be a time-consuming and expensive process. Automating feature engineering and selection will accelerate the use of ML models in design. Deep learning models extract features from data, which aids in model generalization. In this study, we (1) evaluate the deep learning model’s capability to predict heating and cooling demand on unseen design cases and (2) obtain an understanding of the extracted features. Results indicate that deep learning model generalization is similar to or better than that of a simple neural network with appropriate features. The reason for the satisfactory generalization of the deep learning model is its ability to identify similar design options within the data distribution. The results also indicate that deep learning models can filter out irrelevant features, reducing the need for feature selection.
8.
The compression of scan patterns in diagnostic imaging is considered. An integral approach is proposed for elaborating objective quantitative criteria to estimate the admissible distortions of a compressed image.
9.
10.
11.
《Information and Software Technology》2002,44(1):53-62
Project managers can make more effective and efficient project adjustments if they detect high-risk project elements early. We analyzed 42 software development projects in order to investigate some early risk factors and their effect on software project success. Developers in our organization found the most important factors for project success to be (1) the presence of a committed sponsor and (2) the level of confidence that the customers and users have in the project manager and development team. However, several other software project factors that are generally recognized as important were not considered important by our respondents.
12.
RunZhi Jin, KyuMan Cho, ChangTaek Hyun, MyungJin Son 《Expert systems with applications》2012,39(5):5214-5222
Accurate prediction of construction cost in the initial phase of a construction project is critical to the success of the project. Accordingly, many researchers have proposed various methodologies for predicting the cost in the initial phase with the use of limited information. This study aimed to improve the prediction performance of a cost prediction model based on the Case-Based Reasoning (CBR) technique, which has recently come into wide use. Toward this end, an improved CBR model that uses the Multiple Regression Analysis (MRA) technique in the revision phase of the CBR cycle was developed. To verify the prediction performance of the proposed model, a case study was performed on 41 business facilities and 99 multi-family housing projects. The results showed that the prediction performance of the revised CBR model for business facilities and multi-family housing projects improved by 17.23% and 4.39%, respectively, compared to that of the existing CBR model. The proposed MRA-based revised CBR model is expected to be useful for estimating the construction cost in the initial phase of a project.
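As a rough illustration of how a regression step can revise a retrieved CBR case, the sketch below pairs nearest-neighbour retrieval with an MRA-based correction for the attribute gap between the old and new case. The attributes, toy data and weighting are invented for the example and do not reproduce the model in the paper.

```python
# Minimal sketch of a CBR cost model with an MRA-based revision step.
import numpy as np

# Historical cases: [gross floor area (m2), storeys, structure grade], cost (million)
X_hist = np.array([[12000, 10, 2], [18000, 15, 3], [9000, 8, 1], [25000, 20, 3]], float)
y_hist = np.array([42.0, 70.0, 30.0, 98.0])

def retrieve(x_new, k=2):
    """Retrieval: k most similar past cases under normalized Euclidean distance."""
    scale = X_hist.max(axis=0) - X_hist.min(axis=0)
    d = np.linalg.norm((X_hist - x_new) / scale, axis=1)
    return np.argsort(d)[:k]

# Revision: a multiple regression fitted on the historical cases adjusts the
# retrieved cost for the attribute gap between the old case and the new case.
beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(X_hist)), X_hist], y_hist, rcond=None)

def predict(x_new):
    adjusted = []
    for i in retrieve(x_new):
        gap = np.r_[0.0, x_new - X_hist[i]]      # attribute differences (no intercept change)
        adjusted.append(y_hist[i] + gap @ beta)  # retrieved cost + MRA correction
    return float(np.mean(adjusted))              # reuse: average of the revised cases

print(predict(np.array([15000, 12, 2], float)))
```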
13.
14.
Norman Fenton, Martin Neil, William Marsh, Peter Hearty, Łukasz Radliński, Paul Krause 《Empirical Software Engineering》2008,13(5):499-537
Standard practice in building models in software engineering normally involves three steps: (1) collecting domain knowledge (previous results, expert knowledge); (2) building a skeleton of the model based on step 1, including as yet unknown parameters; (3) estimating the model parameters using historical data. Our experience shows that it is extremely difficult to obtain reliable data of the required granularity, or of the required volume, with which we could later generalize our conclusions. Therefore, in searching for a method for building a model we cannot consider methods requiring large volumes of data. This paper discusses an experiment to develop a causal model (Bayesian net) for predicting the number of residual defects that are likely to be found during independent testing or operational usage. The approach supports steps (1) and (2), does not require step (3), yet still makes accurate defect predictions (an R² of 0.93 between predicted and actual defects). Since our method does not require detailed domain knowledge it can be applied very early in the process life cycle. The model incorporates a set of quantitative and qualitative factors describing a project and its development process, which are inputs to the model. The model variables, as well as the relationships between them, were identified as part of a major collaborative project. A dataset, elicited from 31 completed software projects in the consumer electronics industry, was gathered using a questionnaire distributed to managers of recent projects. We used this dataset to validate the model by analyzing several popular evaluation measures (R², measures based on the relative error, and Pred). The validation results also confirm the need for the qualitative factors in the model. The dataset may be of interest to other researchers evaluating models with similar aims. Based on some typical scenarios we demonstrate how the model can be used for better decision support in operational environments. We also performed a sensitivity analysis in which we identified the variables with the most influence on the number of residual defects. This showed that project size, the scale of distributed communication, and project complexity cause most of the variation in the number of defects in our model. We make both the dataset and the causal model available for research use.
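The evaluation measures named in this abstract can be computed as in the following sketch; the defect counts are made-up illustration data, not the paper's dataset.

```python
# Worked sketch of common defect-prediction evaluation measures:
# R^2, relative-error statistics (MRE/MMRE) and Pred(25).
import numpy as np

actual    = np.array([12, 30, 7, 55, 20, 41], float)   # observed residual defects
predicted = np.array([10, 33, 9, 50, 18, 45], float)   # model predictions

# Coefficient of determination R^2.
ss_res = np.sum((actual - predicted) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Magnitude of relative error per project, its mean (MMRE) and Pred(25):
# the fraction of projects predicted within 25% of the actual value.
mre = np.abs(actual - predicted) / actual
mmre = mre.mean()
pred25 = np.mean(mre <= 0.25)

print(f"R^2={r2:.3f}  MMRE={mmre:.3f}  Pred(25)={pred25:.2f}")
```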
15.
Energy-efficient building design requires building performance simulation (BPS) to compare multiple design options for their energy performance. However, at the early stage, BPS is often ignored due to uncertainty, lack of detail, and computational time. This article studies probabilistic and deterministic approaches to treating uncertainty; detailed and simplified zoning for creating zones; and dynamic simulation and machine learning for making energy predictions. A state-of-the-art approach such as dynamic simulation provides a reliable estimate of energy demand, but is computationally expensive. Reducing computational time requires the use of an alternative approach, such as a machine learning (ML) model. However, an alternative approach will cause a prediction gap, and its effect on comparing options needs to be investigated. A plugin for a Building Information Modelling (BIM) tool has been developed to perform BPS using the various approaches. These approaches have been tested on an office building with five design options. A method using the probabilistic approach to treat uncertainty, detailed zoning to create zones, and EnergyPlus to predict energy is treated as the reference method. The deterministic and ML approaches have a small prediction gap, and their comparison results are similar to those of the reference method. The simplified-zoning approach has a large prediction gap, and only 40% of its comparison results are similar to those of the reference method. These findings are useful for developing a BIM-integrated tool to compare options at the early design stage and for ascertaining which approach should be adopted in a time-constrained situation.
16.
Mário Cunha, André R. S. Marçal, Lisa Silva 《International journal of remote sensing》2013,34(12):3125-3142
A forecast model for estimating the annual variation in regional wine yield based on remote sensing was developed for the main wine regions of Portugal. Normalized Difference Vegetation Index (NDVI) time series obtained by the VEGETATION sensor, on board the most recent Satellite Pour l'Observation de la Terre (SPOT) satellite, over the period 1998–2008 were used for four test sites located in the main wine regions of Portugal: Douro (two sites), Vinhos Verdes and Alentejo. The CORINE (Coordination of Information on the Environment) Land Cover maps from 2000 were initially used to select suitable regional test sites. The NDVI values of the second decade of April of the season preceding harvest were significantly correlated with wine yield for all studied regions. The relation between the NDVI and grapevine induction and differentiation of the inflorescence primordia, or bud fruitfulness, during the previous season is discussed. This NDVI measurement can be made about 17 months before harvest and allows very early forecasts of potential regional wine yield. Appropriate statistical tests indicated that the wine yield forecast model explains 77–88% of the inter-annual variability in wine yield. The comparison of official wine yields and the adjusted prediction models, based on 36 annual data records for all regions, shows an average spread deviation between 2.9% and 7.1% for the different regions. The dataset provided by the VEGETATION sensor proved to be a valuable tool for vineyard monitoring, mainly for inter-annual comparisons on a regional scale, due to its high data acquisition rate and wide availability. The accuracy, very early indication and low cost of the developed forecast model justify its use by the winery and viticulture industry.
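The sketch below illustrates the two elementary building blocks behind such a forecast: the NDVI computed from red and near-infrared reflectance, and a simple linear regression of yield on a spring NDVI value. All numbers are invented for the example and are not the Portuguese regional data.

```python
# NDVI formula plus a toy linear yield-forecast regression.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

# One NDVI observation per season (second decade of April of the previous season)
# paired with the regional wine yield (hl/ha) of the following harvest.
april_ndvi = np.array([0.42, 0.48, 0.39, 0.51, 0.45, 0.47, 0.40, 0.50])
wine_yield = np.array([34.0, 41.0, 30.0, 45.0, 37.0, 40.0, 32.0, 44.0])

slope, intercept = np.polyfit(april_ndvi, wine_yield, 1)
forecast = slope * ndvi(nir=0.62, red=0.22) + intercept   # made ~17 months before harvest
print(f"forecast yield ≈ {forecast:.1f} hl/ha")
```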
17.
To address the problem of early intelligent diagnosis of Parkinson's disease (PD), which occurs frequently in the elderly, a clustering approach based on textual data from medical examinations is proposed for analyzing and predicting PD. First, the raw dataset is preprocessed to extract effective feature information, and Principal Component Analysis (PCA) is used to reduce the original features to eight feature spaces of different dimensionality. Then, five traditional clustering models and three different clustering-ensemble methods are applied to the data in each of the eight feature spaces. Finally, four clustering performance indices are used to predict the PD patients with dopaminergic abnormality, the healthy subjects, and the PD patients with Scans Without Evidence of Dopaminergic Deficit (SWEDD) in the dataset. Simulation results show that the Gaussian Mixture Model (GMM) achieves a clustering accuracy of 89.12% when the PCA feature dimension is 30, Spectral Clustering (SC) achieves 61.41% when the dimension is 70, and the Meta-CLustering Algorithm (MCLA) achieves 59.62% when the dimension is 80. Comparative experiments show that, among the five classical clustering methods, GMM clusters best when the PCA feature dimension is below 40, while among the three clustering-ensemble methods, MCLA performs well across the different feature dimensions, thereby providing technical and theoretical support for early intelligent computer-aided diagnosis of PD.
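A minimal sketch of the PCA-plus-GMM part of this pipeline is shown below. It uses synthetic data in place of the medical examination records, so the numbers carry no clinical meaning.

```python
# PCA dimensionality reduction followed by Gaussian-mixture clustering into
# three groups (illustrating the PD / healthy / SWEDD setting), evaluated
# against known labels with one of several possible clustering indices.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Stand-in for the preprocessed feature matrix (samples x extracted features).
X, y_true = make_blobs(n_samples=300, n_features=100, centers=3, random_state=0)

# Reduce to a 30-dimensional space (the dimension at which GMM did best above).
X_30 = PCA(n_components=30, random_state=0).fit_transform(X)

# Cluster into the three target groups with a Gaussian Mixture Model.
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X_30)

print("adjusted Rand index:", adjusted_rand_score(y_true, labels))
```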
18.
19.