Similar Literature
20 similar documents found (search time: 46 ms)
1.
Decision tree is one of the most widely used and practical methods in data mining and machine learning. However, many discretization algorithms developed in this field handle only univariate data, which is inadequate for the critical problems that arise especially in the medical domain. In this paper, we propose a new multivariate discretization method called Multivariate Interdependent Discretization for Continuous Attributes (MIDCA). Our algorithm can minimize the uncertainty between the interdependent attribute and the continuous-valued attribute while maximizing their correlation. The experimental results compare the performance of various decision tree algorithms on twelve real-life datasets from the UCI repository.
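The abstract does not spell out MIDCA's criterion, but the trade-off it describes (reducing the uncertainty between an interdependent attribute and a continuous attribute while keeping their correlation high) can be illustrated with a simple mutual-information cut-point search. This is a minimal sketch in Python, not the authors' algorithm; the scoring choice and function names are assumptions.

# Illustrative sketch only: choosing a binary cut point for a continuous
# attribute so that mutual information with an interdependent attribute is
# maximized. MIDCA itself is more elaborate; names here are hypothetical.
import numpy as np

def mutual_information(x_disc, y):
    """Mutual information between two discrete arrays (natural log)."""
    mi = 0.0
    for xv in np.unique(x_disc):
        for yv in np.unique(y):
            pxy = np.mean((x_disc == xv) & (y == yv))
            px = np.mean(x_disc == xv)
            py = np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def best_cut_point(values, interdependent):
    """Scan candidate cut points and keep the one with the highest MI."""
    candidates = np.unique(values)[:-1]          # cut between distinct values
    scores = [mutual_information(values > c, interdependent) for c in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
cont = rng.normal(size=200)
inter = (cont > 0.3).astype(int)                 # toy interdependent attribute
print("chosen cut:", best_cut_point(cont, inter))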

2.
Model Checking Data Consistency for Cache Coherence Protocols
A method for automatic verification of cache coherence protocols is presented, in which cache coherence protocols are modeled as concurrent value-passing processes, and control and data consistency requirements are described as formulas in first-order μ-calculus. A model checker is employed to check whether the protocol under investigation satisfies the required properties. Using this method, a data consistency error has been revealed in a well-known cache coherence protocol. The error has been corrected, and the revised protocol has been shown free from data consistency errors for any data domain size by appealing to the data independence technique.

3.
This paper analyses the advantages of combining Web services and data warehouses, and then introduces a distributed data warehouse model based on Web services. It also introduces some features of Web services and data warehouses. A Web service is a bridge that can link different models, different operating systems, and different programming languages. With the development and application of Web service technology, the current practice of design and application will shift toward developing and making use of Web services. Web services will make "software as a service" a reality, so that the value of software can be embodied and transferred over the Web. As a decision support system, a data warehouse provides a solid platform formed from current and historical data, on which companies can perform a range of business analyses. This work combines Web services with the data warehouse and extends the data warehouse's network capability, so that companies, corporations, and individuals can obtain information conveniently.

4.
A parallel multithreaded program that is ostensibly deterministic may nevertheless behave nondeterministically due to bugs in the code. These bugs are called determinacy races, and they result when one thread updates a location in shared memory while another thread is concurrently accessing the location. We have implemented a provably efficient determinacy-race detector for Cilk, an algorithmic multithreaded programming language. If a Cilk program is run on a given input data set, our debugging tool, which we call the "Nondeterminator," either determines at least one location in the program that is subject to a determinacy race, or else it certifies that the program is race free when run on that data set. The core of the Nondeterminator is an asymptotically efficient serial algorithm (inspired by Tarjan's nearly linear-time least-common-ancestors algorithm) for detecting determinacy races in series-parallel directed acyclic graphs. For a Cilk program that runs in T time on one processor and uses v shared-memory locations, the Nondeterminator runs in O(T α(v,v)) time, where α is Tarjan's functional inverse of Ackermann's function, a very slowly growing function which, for all practical purposes, is bounded above by 4. The Nondeterminator uses at most a constant factor more space than the original program. On a variety of Cilk program benchmarks, the Nondeterminator exhibits a slowdown of less than 12 compared with the serial execution time of the original optimized code, which we contend is an acceptable slowdown for debugging purposes. Received November 11, 1997; in final form September 21, 1998.
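The race criterion at the heart of the Nondeterminator can be illustrated on a tiny series-parallel parse tree: two accesses to the same location race exactly when their least common ancestor is a P-node and at least one access is a write. The sketch below (Python, with hypothetical names) uses a naive LCA instead of the paper's efficient union-find structure, so it only illustrates the criterion, not the algorithm's performance.

# Illustrative sketch only: naive determinacy-race check on an SP parse tree.
from dataclasses import dataclass, field

@dataclass
class SPNode:
    kind: str                      # "S" (series), "P" (parallel), or "leaf"
    children: list = field(default_factory=list)
    accesses: list = field(default_factory=list)   # (location, is_write) pairs
    parent: "SPNode" = None

def link(parent, *children):
    for c in children:
        c.parent = parent
        parent.children.append(c)
    return parent

def lca(a, b):
    ancestors = set()
    while a:
        ancestors.add(id(a))
        a = a.parent
    while id(b) not in ancestors:
        b = b.parent
    return b

def find_races(root):
    leaves = []
    def collect(node):
        if node.kind == "leaf":
            leaves.append(node)
        for c in node.children:
            collect(c)
    collect(root)
    races = []
    for i, u in enumerate(leaves):
        for v in leaves[i + 1:]:
            for loc_u, w_u in u.accesses:
                for loc_v, w_v in v.accesses:
                    if loc_u == loc_v and (w_u or w_v) and lca(u, v).kind == "P":
                        races.append((loc_u, u, v))
    return races

# Two parallel strands both writing location "x" -> a determinacy race.
left = SPNode("leaf", accesses=[("x", True)])
right = SPNode("leaf", accesses=[("x", True)])
root = link(SPNode("P"), left, right)
print("races found:", len(find_races(root)))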

5.
6.
The Open Data Consortium has developed a data distribution policy template to help guide local governments in opening the geographic data they hold to the public, while arriving at a consistent and standard data distribution policy. The data distribution policy project was carried out jointly by public-sector and private-sector participants under the guidance of the Geodata Alliance. The policy template addresses the legal and commercial issues involved in public data distribution, such as copyright, licensing, liability, security restrictions, privacy, metadata maintenance, methods of data acceptance and distribution, and the contested issue of data sales. Kathy Covert of the U.S. Federal Geographic Data Committee (FGDC) secretariat said, "The data distribution policy template will enable local governments to use spatial data to handle public requests, and…

7.
In this paper, a Graph-based semantic Data Model (GDM) is proposed with the primary objective of bridging the gap between the human perception of an enterprise and the needs of the computing infrastructure to organize information in a particular manner for efficient storage and retrieval. The Graph Data Model (GDM) has been proposed as an alternative data model that combines the advantages of the relational model with the positive features of semantic data models. The proposed GDM offers a structural representation for interaction with the designer, making it easy to comprehend the complex relations among basic data items. GDM allows an entire database to be viewed as a graph (V, E) in a layered organization. Here, a graph is created bottom-up, where V represents basic data instances or functionally abstracted modules, called primary semantic groups (PSGs) and secondary semantic groups (SSGs). An edge in the model implies a relationship among the secondary semantic groups. The contents of the lowest layer are the semantically grouped data values in the form of primary semantic groups. The SSGs are higher-level abstractions created by encapsulating various PSGs, SSGs, and basic data elements. This encapsulation, which provides higher-level abstraction, continues generating secondary semantic groups until the designer considers the actual problem domain to be sufficiently described. GDM thus uses the standard abstractions available in a semantic data model together with a structural representation in terms of a graph. The operations on the data model are formalized in the proposed graph algebra. A Graph Query Language (GQL) is also developed, maintaining similarity with the widely accepted, user-friendly SQL. Finally, the paper presents a methodology for making GDM compatible with a distributed environment, along with a corresponding distributed query processing technique, for the sake of completeness.
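As a rough illustration of the layered PSG/SSG organization described above, the following Python sketch encodes PSGs, SSGs built by encapsulation, and edges between SSGs; the class and field names are assumptions, and the paper's graph algebra and GQL are not reproduced.

# Illustrative sketch only: a layered PSG/SSG graph structure.
from dataclasses import dataclass, field
from typing import List, Tuple, Union

@dataclass
class PSG:                         # primary semantic group: grouped data values
    name: str
    values: List[object]

@dataclass
class SSG:                         # secondary semantic group: encapsulation
    name: str
    members: List[object]          # may contain PSGs, SSGs, or basic data

@dataclass
class GDMGraph:
    vertices: List[Union[PSG, SSG]] = field(default_factory=list)
    edges: List[Tuple[str, str]] = field(default_factory=list)   # between SSGs

    def add_vertex(self, v):
        self.vertices.append(v)

    def relate(self, ssg_a: SSG, ssg_b: SSG):
        self.edges.append((ssg_a.name, ssg_b.name))

# Bottom-up construction: PSGs at the lowest layer, SSGs encapsulating them.
name_psg = PSG("employee_names", ["Ada", "Lin"])
dept_psg = PSG("departments", ["R&D", "Sales"])
employee = SSG("Employee", [name_psg])
department = SSG("Department", [dept_psg])
g = GDMGraph()
for v in (name_psg, dept_psg, employee, department):
    g.add_vertex(v)
g.relate(employee, department)     # an edge implies a relationship between SSGs
print(g.edges)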

8.
Durán, Juan M. Minds and Machines, 2020, 30(3): 301-323

Many philosophical accounts of scientific models fail to distinguish between a simulation model and other forms of models. This failure is unfortunate because there are important differences pertaining to their methodology and epistemology that favor their philosophical understanding. The core claim presented here is that simulation models are rich and complex units of analysis in their own right, that they depart from known forms of scientific models in significant ways, and that a proper understanding of the type of model that simulations are is fundamental for their philosophical assessment. I argue that simulation models can be distinguished from other forms of models by the many algorithmic structures, representation relations, and new semantic connections involved in their architecture. In this article, I reconstruct a general architecture for a simulation model, one that faithfully captures the complexities involved in most scientific research with computer simulations. Furthermore, I submit that a new methodology capable of conforming such an architecture into a fully functional, computationally tractable computer simulation must be in place. I discuss this methodology, which I call recasting, and argue for its philosophical novelty. If these efforts are heading towards the right interpretation of simulation models, then one can show that computer simulations shed new light on the philosophy of science. To illustrate the potential of my interpretation of simulation models, I briefly discuss simulation-based explanations as a novel approach to questions about scientific explanation.


9.
To perform realistic finite element simulations of cardiovascular surgical procedures (such as balloon angioplasty, stenting, or bypass), it is necessary to use appropriate constitutive models able to describe the mechanical behavior of the human arterial wall (in healthy and diseased conditions), as well as to properly calibrate the material parameters involved in such constitutive models. Starting from these considerations, the goal of the present study is to compare the reliability of two isotropic phenomenological models and four structural invariant-based constitutive models commonly used to describe the passive mechanical behavior of arteries. The arterial wall is modeled as a thick-walled tube with a one- or two-layer structure. The inclusion of residual stresses is also considered, to evaluate information on the stress distribution through the wall thickness. The predictive capability of the investigated models is tested using extension/inflation data on human carotid arteries reported in two different experimental works available in the literature. The material parameters involved in the investigated models are computed in the least-squares sense through a best-fitting procedure relying on a multi-start optimization algorithm. The quality of the optimal solution is validated quantitatively by computing appropriate error measures and comparing the model prediction curves. The final outcome of the paper is a critical review of the six considered constitutive models, comparing their formulations and highlighting the varying capability of these models to fit the considered experimental data.
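The calibration step described above (least-squares fitting with a multi-start optimization) can be sketched as follows in Python; the two-parameter material law and the synthetic data are toy placeholders, not the arterial models or carotid data used in the study.

# Illustrative sketch only: multi-start least-squares calibration.
import numpy as np
from scipy.optimize import least_squares

def model_pressure(params, stretch):
    """Toy two-parameter exponential stress-stretch law (placeholder)."""
    c1, c2 = params
    return c1 * (np.exp(c2 * (stretch**2 - 1.0)) - 1.0)

def residuals(params, stretch, measured):
    return model_pressure(params, stretch) - measured

rng = np.random.default_rng(1)
stretch = np.linspace(1.0, 1.3, 30)
measured = model_pressure([10.0, 3.5], stretch) + rng.normal(0, 0.5, stretch.size)

# Multi-start: run the local solver from several random initial guesses and
# keep the best fit, to reduce the risk of stopping in a poor local minimum.
best = None
for _ in range(20):
    x0 = rng.uniform([0.1, 0.1], [50.0, 10.0])
    sol = least_squares(residuals, x0, args=(stretch, measured),
                        bounds=([0.0, 0.0], [np.inf, np.inf]))
    if best is None or sol.cost < best.cost:
        best = sol

rmse = np.sqrt(np.mean(best.fun**2))
print("fitted parameters:", best.x, "RMSE:", rmse)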

10.
Traffic models play a significant role in the analysis and characterization of network traffic and network performance. Thorough research and accurate modeling of network traffic are an efficient way to explore the network's internal mechanisms, control network traffic, and optimize network performance. In this paper, we put forward a Haar DWT-based (Discrete Wavelet Transform) traffic model. The scaling analysis on two simulated traces shows that our model can generate multi-fractal traffic data that closely fits the statistics of the real trace.
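For readers unfamiliar with the Haar DWT, the following NumPy sketch shows one analysis/synthesis level, the building block of wavelet-based traffic models; the paper's multifractal generation procedure itself is not reproduced.

# Illustrative sketch only: one-level Haar DWT and its exact inverse.
import numpy as np

def haar_dwt(signal):
    """One decomposition level: approximation and detail coefficients."""
    x = np.asarray(signal, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one level; exactly reconstructs the original signal."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

# A synthetic "traffic trace": wavelet models shape scaling behaviour by
# modifying detail coefficients scale by scale before reconstruction.
trace = np.abs(np.random.default_rng(2).normal(10.0, 3.0, size=256))
a, d = haar_dwt(trace)
reconstructed = haar_idwt(a, d)
print("max reconstruction error:", np.max(np.abs(reconstructed - trace)))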

11.
This article will show how data privacy is legally recognized and treated in Japan and argue, in particular, how it can (or cannot) play a role as a tool to combat discrimination in the workplace. The first part will depict a brief history of the development of the 'right to privacy' in Japan. This part will also examine the national Act on the Protection of Personal Information and its complicated enforcement, paying attention to the general legal culture in Japan. The second part will survey other laws related to personal data protection and legal protection against discrimination. At the end of the article, the author will comment on Japanese data privacy law and its desirable development.

12.
A simple abstract model of Eiffel is introduced, and its denotational semantics is defined in the VDM style. A static analysis approach is presented to treat the multiple inheritance and renaming mechanisms. Within the framework of denotational semantics introduced in this paper, the key features of Eiffel, such as identification, classification, multiple inheritance, polymorphism, and dynamic binding, can be adequately characterized.

13.
This paper investigates how to effectively model time series with a new algorithm that combines a Multilayer Feedforward Neural Network (MLFNN) and an Autoregressive Moving Average (ARMA) model. The static nonlinear part is modeled by the MLFNN, and the linear part is modeled by the ARMA model. The algorithm is developed for estimating the weights of the MLFNN and the parameters of the ARMA model. To illustrate the feasibility and simplicity of the above procedure for time series data mining, the problem of measuring normality in HTTP traffic for the purpose of anomaly-based network intrusion detection is addressed. The detection results provided by this approach show important improvements, both in detection ratio and in false alarms, in comparison with those obtained using other current techniques. Simulation examples are included to illustrate the performance of the proposed method.
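A generic hybrid in the spirit of the MLFNN+ARMA combination can be sketched with off-the-shelf tools: fit an ARMA model for the linear part and a small feedforward network on its residuals. This is a stand-in built from statsmodels ARIMA and scikit-learn's MLPRegressor, not the paper's joint estimation algorithm; the series and lag choices are assumptions.

# Illustrative sketch only: linear ARMA part plus a neural residual model.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 500
t = np.arange(n)
series = np.sin(0.05 * t) ** 3 + 0.3 * rng.standard_normal(n)   # toy series

# 1) Linear part: ARMA(2, 1) fitted to the raw series.
arma = ARIMA(series, order=(2, 0, 1)).fit()
residuals = arma.resid

# 2) Nonlinear part: MLP predicting the residual from its recent lags.
lags = 5
X = np.column_stack([residuals[i:n - lags + i] for i in range(lags)])
y = residuals[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:-50], y[:-50])

# Combined one-step prediction = linear forecast + nonlinear residual correction.
linear_part = arma.predict(start=lags, end=n - 1)[-50:]
nonlinear_part = mlp.predict(X[-50:])
hybrid = linear_part + nonlinear_part
print("hybrid RMSE:", np.sqrt(np.mean((hybrid - series[-50:]) ** 2)))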

14.
In broadband wireless technology, owing to its many salient advantages, such as high data rates, quality of service, scalability, security, and mobility, LTE-A has become one of the main directions of wireless system development. This system provides several sophisticated authentication and encryption techniques to enhance its security. However, LTE-A still suffers from various attacks, such as eavesdropping and replay attacks. Therefore, in this paper, we propose a novel security scheme, called the security system for a 4G environment (Se4GE for short), which integrates the RSA and Diffie–Hellman algorithms into an LTE-A-based system (LTE-A stands for LTE-Advanced, a 4G system) to address some of LTE-A's security drawbacks. The Se4GE is an end-to-end ciphertext transfer mechanism that dynamically changes encryption keys to strengthen the security of data transmission in an LTE-A system. The Se4GE also produces several logically connected random keys, called the intelligent protection-key chain, which invokes two encryption/decryption techniques to meet users' broader demands for security services. The analytical results show that the Se4GE has a higher security level than an LTE-A system.
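The notion of a logically connected key chain derived from a Diffie–Hellman exchange can be illustrated with a toy Python sketch; the group parameters are deliberately small and insecure, the HMAC-based chaining is an assumption made for illustration, and the Se4GE key schedule and its RSA component are not reproduced.

# Illustrative sketch only: toy Diffie-Hellman shared secret feeding a
# hash-chained sequence of session keys (not the Se4GE scheme).
import hashlib
import hmac
import secrets

P = 2**127 - 1        # a known Mersenne prime; far too small for real use,
G = 3                 # chosen purely so the arithmetic is easy to follow

a = secrets.randbelow(P - 2) + 1          # one side's private exponent
b = secrets.randbelow(P - 2) + 1          # other side's private exponent
shared = pow(pow(G, a, P), b, P)          # both sides compute the same value
secret_bytes = shared.to_bytes((shared.bit_length() + 7) // 8, "big")

def key_chain(seed: bytes, count: int):
    """Derive `count` linked keys: each key is an HMAC of the previous one."""
    keys, current = [], seed
    for i in range(count):
        current = hmac.new(current, f"key-{i}".encode(), hashlib.sha256).digest()
        keys.append(current)
    return keys

for i, k in enumerate(key_chain(secret_bytes, count=5)):
    print(f"session key {i}: {k.hex()[:16]}...")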

15.
This report presents the design and implementation of a Distributed Data Acquisition, Monitoring and Processing System (DDAMAP). It is assumed that the operations of a factory are organized into two levels: client machines at the plant level collect real-time raw data from sensors and measurement instrumentation and transfer it to a central processor over Ethernet, and the central processor handles the tasks of real-time data processing and monitoring. The system utilizes the computational power of an Intel T2300 dual-core processor and parallel computation supported by multi-threading techniques. Our experiments show that these techniques can significantly improve system performance and are viable solutions for real-time high-speed data processing.
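A minimal Python sketch of the two-level, multithreaded pipeline might look as follows; producer threads stand in for plant-level clients and a consumer thread for the central processor, with all names and the processing step being assumptions rather than the DDAMAP implementation.

# Illustrative sketch only: producer/consumer acquisition pipeline.
import queue
import random
import threading
import time

data_queue = queue.Queue()
STOP = object()                      # sentinel to shut the consumer down

def sensor_client(sensor_id: int, samples: int):
    """Plant-level client: acquire raw readings and forward them."""
    for _ in range(samples):
        reading = (sensor_id, time.time(), random.gauss(20.0, 0.5))
        data_queue.put(reading)
        time.sleep(0.001)            # pretend acquisition interval

def central_processor():
    """Central node: consume readings and do the real-time processing."""
    count, total = 0, 0.0
    while True:
        item = data_queue.get()
        if item is STOP:
            break
        _, _, value = item
        count += 1
        total += value
    print(f"processed {count} readings, mean value {total / count:.2f}")

producers = [threading.Thread(target=sensor_client, args=(i, 100)) for i in range(4)]
consumer = threading.Thread(target=central_processor)
consumer.start()
for p in producers:
    p.start()
for p in producers:
    p.join()
data_queue.put(STOP)
consumer.join()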

16.
Final Data     
赵清海. 《软件》, 2003(12): 7
The features of this super data-recovery tool include: support for FAT16/32 and NTFS; recovery of completely deleted files and directories; recovery of data lost when the master boot sector or the FAT table is damaged; recovery of data from quick-formatted hard disks and floppy disks; recovery of data destroyed by the CIH virus; recovery of data lost due to hard disk damage; remote data recovery over a network; and more. The latest version has been greatly enhanced.

17.
RDF is the data interchange layer for the Semantic Web. In order to manage the increasing amount of RDF data, an RDF repository should provide not only the necessary scalability and efficiency, but also sufficient inference capabilities. Though existing RDF repositories have made progress towards these goals, there is still ample space for improving the overall performance. In this paper, we propose a native RDF repository, SystemⅡ, to pursue a better tradeoff among system scalability, query efficiency, and infer...

18.
Energy consumption prediction of a CNC machining process is important for energy efficiency optimization strategies. To improve generalization ability, more and more parameters are acquired for energy prediction modeling. However, the data collected from workshops may be incomplete because of misoperation, unstable network connections, frequent transfers, and so on. This work proposes a framework for energy modeling based on incomplete data to address this issue. First, some necessary preliminary operations are applied to the incomplete data sets. Then, missing values are estimated to generate a new complete data set based on generative adversarial imputation nets (GAIN). Next, the gene expression programming (GEP) algorithm is used to train the energy model on the generated data sets. Finally, we test the predictive accuracy of the obtained model. Computational experiments are designed to investigate the performance of the proposed framework under different rates of missing data. Experimental results demonstrate that even when the missing data rate increases to 30%, the proposed framework can still make efficient predictions, with a corresponding RMSE and MAE of 0.903 kJ and 0.739 kJ, respectively.
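The impute-then-model workflow and its RMSE/MAE evaluation can be sketched in Python with stand-in components (a scikit-learn iterative imputer in place of GAIN and a gradient-boosting regressor in place of GEP); the sketch only shows the pipeline shape on toy data, not the paper's methods or results.

# Illustrative sketch only: 30% missing data -> impute -> model -> RMSE/MAE.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(400, 5))                 # toy process parameters
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + rng.normal(0, 0.05, 400)  # toy energy

# Simulate a 30% missing-data rate, as in the paper's hardest setting.
mask = rng.uniform(size=X.shape) < 0.30
X_missing = X.copy()
X_missing[mask] = np.nan

X_train, X_test, y_train, y_test = train_test_split(
    X_missing, y, test_size=0.25, random_state=0)

imputer = IterativeImputer(random_state=0)
X_train_imp = imputer.fit_transform(X_train)
X_test_imp = imputer.transform(X_test)

model = GradientBoostingRegressor(random_state=0).fit(X_train_imp, y_train)
pred = model.predict(X_test_imp)
rmse = mean_squared_error(y_test, pred) ** 0.5
mae = mean_absolute_error(y_test, pred)
print(f"RMSE: {rmse:.3f}, MAE: {mae:.3f}")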

19.
This paper presents the use of a student model to improve the explanations provided by an intelligent tutoring system, namely SimpleQuestl, in the domain of electronics. The method of overlay modelling is adopted to build the student model. The diagnosis is based on a comparison of the behaviours of the student and the expert. The student model is consulted by the "explainer" and "debugging" procedures in order to re-order the sequence of the explanation.
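Overlay modelling treats the student's knowledge as a subset of the expert's; a toy Python sketch of the diagnosis and explanation re-ordering is given below, with concept names and the ordering heuristic being hypothetical rather than taken from the system described.

# Illustrative sketch only: overlay diagnosis against an expert concept list.
EXPERT_KNOWLEDGE = ["ohms_law", "kirchhoff_current", "kirchhoff_voltage",
                    "series_resistance", "parallel_resistance"]

def diagnose(student_correct_answers):
    """Overlay diagnosis: which expert concepts has the student demonstrated?"""
    known = set(student_correct_answers) & set(EXPERT_KNOWLEDGE)
    missing = [c for c in EXPERT_KNOWLEDGE if c not in known]
    return known, missing

def order_explanation(missing):
    """Explain un-mastered concepts first, in the expert's teaching order."""
    return missing + [c for c in EXPERT_KNOWLEDGE if c not in missing]

known, missing = diagnose(["ohms_law", "series_resistance"])
print("mastered:", sorted(known))
print("explanation order:", order_explanation(missing))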

20.
Land Surface Temperature (LST) is an important parameter that describes the surface energy balance and the exchange of matter and energy between the surface and the atmosphere, and LST has been widely used in studies of the urban heat island effect, soil moisture, and surface radiative flux. Currently, no satellite sensor can deliver thermal infrared data at both high temporal resolution and high spatial resolution, which strongly limits the wide application of thermal infrared data. Based on the MODIS land surface temperature product and a Landsat ETM+ image, a temporal and spatial fusion method is proposed by combining the TsHARP (Thermal sHARPening) model with the STITFM (Spatio-Temporal Integrated Temperature Fusion Model) algorithm, defined as the CTsSTITFM model in this study. The TsHARP method is used to downscale the 1 km MODIS land surface temperature image to LST data at a spatial resolution of 250 m. The accuracy is then verified against the LST retrieved from the Landsat ETM+ image acquired at the same time. A land surface temperature image at 30 m spatial scale is predicted by fusing the Landsat ETM+ data and the downscaled MODIS data using the STITFM model. The fused LST image is validated against the LST estimated from Landsat ETM+ data for the same prediction date. The results show that the proposed method has better precision than the STITFM algorithm. Under the default parameter settings, the LST values predicted by the CTsSTITFM fusion method have a root mean square error (RMSE) of less than 1.33 K. When the window size of the CTsSTITFM fusion method is adjusted, the fusion results in the selected areas show some regularity as the window increases. In general, a reasonably chosen window size may slightly improve the LST fusion results. The CTsSTITFM fusion method can, to some degree, alleviate the mixed-pixel problem caused by coarse-scale MODIS surface temperature images.
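The TsHARP step (sharpening a coarse LST image with a finer-resolution predictor, then restoring the coarse-scale residuals) can be sketched as follows in Python; the linear LST-NDVI relation and the synthetic arrays are assumptions, and the STITFM fusion is not reproduced.

# Illustrative sketch only: TsHARP-style sharpening of a coarse LST grid.
import numpy as np

def aggregate(fine, factor):
    """Average fine-resolution pixels into coarse blocks."""
    h, w = fine.shape
    return fine.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def tsharp(coarse_lst, fine_ndvi, factor):
    # 1) Fit a linear LST-NDVI relation at the coarse scale.
    coarse_ndvi = aggregate(fine_ndvi, factor)
    slope, intercept = np.polyfit(coarse_ndvi.ravel(), coarse_lst.ravel(), 1)
    # 2) Apply the relation at the fine scale.
    fine_pred = intercept + slope * fine_ndvi
    # 3) Add back the coarse-scale residuals so coarse means are preserved.
    residual = coarse_lst - aggregate(fine_pred, factor)
    return fine_pred + np.kron(residual, np.ones((factor, factor)))

rng = np.random.default_rng(5)
fine_ndvi = rng.uniform(0.1, 0.8, size=(32, 32))
true_fine_lst = 320.0 - 25.0 * fine_ndvi + rng.normal(0, 0.3, (32, 32))
coarse_lst = aggregate(true_fine_lst, 4)          # simulate the coarse product

sharpened = tsharp(coarse_lst, fine_ndvi, factor=4)
print("RMSE vs. true fine LST:", np.sqrt(np.mean((sharpened - true_fine_lst) ** 2)))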
