Process analytics is a research domain that has advanced considerably in recent years. It encompasses the identification, monitoring, and improvement of processes through knowledge extraction from historical data. The evolution of Artificial Intelligence (AI)-enabled Electronic Health Records (EHRs) has revolutionized medical practice. Type 2 Diabetes Mellitus (T2DM) is a syndrome characterized by insulin resistance and a relative lack of insulin secretion. If not diagnosed and managed at an early stage, it may produce severe outcomes and, at times, death. Chronic Kidney Disease (CKD) and Coronary Heart Disease (CHD) are among the most common long-term, life-threatening diseases caused by T2DM. It is therefore essential to predict the risks of CKD and CHD in T2DM patients. The current research article presents an automated Deep Learning (DL)-based Deep Neural Network with the Adagrad Optimization Algorithm, i.e., the DNN-AGOA model, to predict CKD and CHD risks in T2DM patients. The paper proposes a risk prediction model for T2DM patients who may develop CKD or CHD; the model helps alert both T2DM patients and clinicians in advance. First, the proposed DNN-AGOA model performs data preprocessing to improve data quality and make it suitable for further processing. A Deep Neural Network (DNN) is then employed for feature extraction, after which a sigmoid function is used for classification. Further, the Adagrad optimizer is applied to improve the performance of the DNN model. For experimental validation, benchmark medical datasets were used and the results were validated along several dimensions. The proposed model achieved a maximum precision of 93.99%, recall of 94.63%, specificity of 73.34%, accuracy of 92.58%, and F-score of 94.22%. The experimental results establish that the proposed DNN-AGOA model has better prediction capability than the compared methods.
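The pipeline described above (feature-extracting hidden layers, a sigmoid output for binary classification, and Adagrad's per-coordinate adaptive updates) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy data, network width, and learning rate are all assumptions made purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Toy stand-in for clinical features: 200 samples, 5 attributes (assumed data).
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic binary risk label

# One hidden layer for feature extraction, sigmoid output for classification.
W1 = rng.normal(scale=0.1, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]
# Adagrad state: one accumulator of squared gradients per parameter.
G = [np.zeros_like(p) for p in params]
lr, eps = 0.5, 1e-8

for _ in range(300):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass for binary cross-entropy loss.
    d_out = ((p - y) / len(y)).reshape(-1, 1)
    gW2 = h.T @ d_out; gb2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ d_h; gb1 = d_h.sum(0)
    # Adagrad update: each coordinate's step shrinks with its gradient history.
    for prm, g, acc in zip(params, [gW1, gb1, gW2, gb2], G):
        acc += g ** 2
        prm -= lr * g / (np.sqrt(acc) + eps)

# Evaluate on the training data (illustration only; no held-out split here).
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
accuracy = float(((p > 0.5) == y).mean())
```

The key Adagrad property visible here is that frequently-updated coordinates receive progressively smaller steps, which is what distinguishes it from plain gradient descent.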
This paper concerns the following problem: given a set of multi-attribute records, a fixed number of buckets, and a two-disk system, arrange the records into the buckets and then store the buckets between the disks in such a way that, over all possible orthogonal range queries (ORQs), the disk access concurrency is maximized. We adopt the multiple key hashing (MKH) method for arranging records into buckets and use the disk modulo (DM) allocation method for storing buckets onto disks. Since the DM allocation method has been shown to be superior to any other allocation method for allocating an MKH file onto a two-disk system for answering ORQs, the real issue is how to determine an optimal way of organizing the records into buckets based upon the MKH concept.
A performance formula that can be used to evaluate the average response time, over all possible ORQs, of an MKH file in a two-disk system using the DM allocation method is first presented. Based upon this formula, it is shown that our design problem is related to a notoriously difficult problem, namely the Prime Number Problem. A performance lower bound and an efficient algorithm for designing optimal MKH files in certain cases are then presented. It is pointed out that in some cases the optimal MKH file for ORQs in a two-disk system under the DM allocation method is identical to the optimal MKH file for ORQs in a single-disk system, and that the optimal average response time in a two-disk system is slightly greater than one half of that in a single-disk system.
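The DM allocation rule itself is simple: a bucket addressed by hash coordinates (b1, …, bk) is stored on disk (b1 + … + bk) mod m. The sketch below, with an assumed 4 x 3 two-attribute MKH file and m = 2 disks, shows how the buckets touched by one ORQ split across the two disks; the most-loaded disk determines the response time.

```python
from itertools import product

def dm_disk(bucket, num_disks=2):
    # Disk Modulo allocation: bucket (b1, ..., bk) goes to disk
    # (b1 + ... + bk) mod num_disks.
    return sum(bucket) % num_disks

def range_query_buckets(ranges):
    # An orthogonal range query selects a sub-rectangle of bucket coordinates.
    return list(product(*ranges))

# Assumed example: an MKH file hashing two attributes into 4 x 3 buckets,
# and an ORQ with first attribute in {1, 2} and second in {0, 1, 2}.
qbuckets = range_query_buckets(([1, 2], [0, 1, 2]))
per_disk = {d: sum(1 for b in qbuckets if dm_disk(b) == d) for d in (0, 1)}
# Response time is driven by the most-loaded disk: max(per_disk.values()).
```

In this example the six qualifying buckets land three on each disk, so both disks work concurrently and the query costs three parallel accesses instead of six sequential ones.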
A goal of this study is to develop a Composite Knowledge Manipulation Tool (CKMT). Some traditional medical activities rely heavily on the oral transfer of knowledge, with the risk of losing important knowledge. Moreover, the activities differ according to regions, traditions, experts' experiences, etc. Therefore, it is necessary to develop an integrated and consistent knowledge manipulation tool. With such a tool, it becomes possible to extract tacit knowledge consistently, transform different types of knowledge into a composite knowledge base (KB), integrate disseminated and complex knowledge, and complement gaps in knowledge. To this end, I have developed a CKMT called K-Expert, which has four advanced functionalities. First, it can extract and import logical rules from data mining (DM) with minimal effort; I expect this function to complement the oral transfer of traditional knowledge. Second, it transforms the various types of logical rules into database (DB) tables after syntax checking and/or transformation. Knowledge managers can then refine, evaluate, and manage the large composite KB consistently with the support of a DB management system (DBMS). Third, it visualizes the transformed knowledge in the shape of a decision tree (DT). With this function, knowledge workers can evaluate the completeness of the KB and fill in missing knowledge. Finally, it gives knowledge users an SQL-based backward chaining function. This can reduce inference time effectively, since it relies on SQL querying and searching rather than the sentence-by-sentence translation used in traditional inference systems. The function will give young researchers and their fellows in the fields of knowledge management (KM) and expert systems (ES) more opportunities to follow up on and validate their knowledge.
Finally, I expect the approach to mitigate knowledge loss and to reduce the burdens of knowledge transformation and complementation.
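The idea of backward chaining over rules stored in DB tables can be sketched with SQLite. The schema and rules below are hypothetical illustrations, not K-Expert's actual design: the point is that each proof step is a set-oriented SQL lookup of rules concluding the current goal, rather than a sentence-by-sentence translation of the KB.

```python
import sqlite3

# Hypothetical rule table: each row encodes "IF premise THEN conclusion".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rules (premise TEXT, conclusion TEXT)")
con.executemany("INSERT INTO rules VALUES (?, ?)", [
    ("fever", "infection"),              # assumed example rules
    ("infection", "prescribe_antibiotics"),
    ("rash", "allergy"),
])

def backward_chain(goal, facts):
    # Prove `goal` by querying for rules that conclude it, then recursing
    # on their premises until a known fact (or no rule) is reached.
    if goal in facts:
        return True
    rows = con.execute(
        "SELECT premise FROM rules WHERE conclusion = ?", (goal,)
    ).fetchall()
    return any(backward_chain(premise, facts) for (premise,) in rows)

result = backward_chain("prescribe_antibiotics", {"fever"})
```

Because the rule lookup is a single indexed query, the DBMS does the searching; the Python layer only drives the recursion over goals.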