Subscription full text: 16
Free: 0
By subject: Energy & Power 1; Radio & Electronics 2; General Industrial Technology 8; Automation Technology 5
By year: 2022 (2), 2021 (8), 2020 (2), 2019 (1), 2018 (1), 2017 (2)
16 results in total
1.
Medical image compression has become essential for handling large amounts of medical data for storage and communication. Vector quantization (VQ) is a popular image compression technique, and a commonly used VQ model is Linde-Buzo-Gray (LBG), which constructs a locally optimal codebook to compress images. Codebook construction is treated as an optimization problem and solved with a bio-inspired algorithm. This article proposes a VQ codebook construction approach called the L2-LBG method, which combines the Lion Optimization Algorithm (LOA) with the Lempel-Ziv-Markov chain Algorithm (LZMA). Once LOA has constructed the codebook, LZMA is applied to compress the index table and further increase the compression performance. Experiments were carried out on benchmark medical images, and a comparative analysis was conducted against Cuckoo Search-based LBG (CS-LBG), Firefly-based LBG (FF-LBG), and JPEG2000. The compression efficiency of the presented model was validated in terms of compression ratio (CR), compression factor (CF), bit rate, and peak signal-to-noise ratio (PSNR). The proposed L2-LBG method obtained a higher CR of 0.3425375 and a PSNR of 52.62459 compared to the CS-LBG, FF-LBG, and JPEG2000 methods. The experimental values revealed that the L2-LBG process yields effective compression performance with a better-quality reconstructed image.
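As a rough illustration of the index-table compression step described in this abstract (not the authors' implementation), the sketch below encodes image blocks against a codebook and compresses the resulting index table with Python's standard lzma module; the block dimensions, codebook size, and data are placeholder values.

```python
import lzma
import numpy as np

def build_index_table(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each image block to the index of its nearest codeword (VQ encoding)."""
    # blocks: (num_blocks, block_dim), codebook: (codebook_size, block_dim)
    dists = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1).astype(np.uint8)  # assumes codebook_size <= 256

def compress_index_table(indices: np.ndarray) -> bytes:
    """Entropy-code the VQ index table with LZMA."""
    return lzma.compress(indices.tobytes(), preset=9)

# Toy usage with random data standing in for 4x4 image blocks and a 64-word codebook.
rng = np.random.default_rng(0)
blocks = rng.random((1024, 16))
codebook = rng.random((64, 16))
idx = build_index_table(blocks, codebook)
payload = compress_index_table(idx)
print(len(idx), "indices ->", len(payload), "bytes after LZMA")
```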
2.
Heart disease (HD) is a serious, widespread, life-threatening disease. The heart of a patient with HD fails to pump sufficient blood to the body. Diagnosing HD early and efficiently may prevent its debilitating effects and aid in effective treatment. Classical methods for diagnosing HD are sometimes unreliable and insufficient for analyzing the related symptoms. As an alternative, noninvasive medical procedures based on machine learning (ML) methods provide reliable HD diagnosis and efficient prediction of HD conditions. However, existing automated ML-based HD diagnostic models cannot satisfy clinical evaluation criteria because they fail to recognize anomalies in the extracted symptoms represented as classification features of patients with HD. In this study, we propose an automated heart disease diagnosis (AHDD) system that integrates a binary convolutional neural network (CNN) with a new multi-agent feature wrapper (MAFW) model. The MAFW model consists of four software agents that operate a genetic algorithm (GA), a support vector machine (SVM), and Naïve Bayes (NB). The agents instruct the GA to perform a global search on HD features and adjust the weights of the SVM and NB during initial classification. A final tuning of the CNN is then performed to ensure that the best set of features is included in HD identification. The CNN consists of five layers that categorize patients as healthy or as having HD according to the analysis of the optimized HD features. We evaluate the classification performance of the proposed AHDD system against 12 common ML techniques and conventional CNN models using cross-validation and six evaluation criteria. The AHDD system achieves the highest accuracy of 90.1%, whereas the other ML and conventional CNN models attain only 72.3%-83.8% accuracy on average. The proposed AHDD system therefore has the highest capability to identify patients with HD and can be used by medical practitioners to diagnose HD efficiently.
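The wrapper idea behind MAFW can be illustrated with a much-simplified, single-agent sketch. Assumptions: a stand-in dataset, a plain GA with truncation selection, and cross-validated SVM accuracy as the fitness; the paper's four-agent coordination, NB weighting, and CNN tuning are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in for a heart-disease dataset
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(42)
n_features, pop_size, generations = X.shape[1], 20, 10

def fitness(mask: np.ndarray) -> float:
    """Cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

population = rng.integers(0, 2, size=(pop_size, n_features))
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in population])
    # Truncation selection, single-point crossover, bit-flip mutation.
    parents = population[np.argsort(scores)[-pop_size // 2:]]
    children = []
    while len(children) < pop_size:
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.02
        children.append(np.where(flip, 1 - child, child))
    population = np.array(children)

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected features:", int(best.sum()), "of", n_features)
```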
3.
Wireless Sensor Networks (WSNs) form an essential part of the IoT. They are embedded in the target environment to observe physical parameters depending on the application. Sensor nodes in a WSN are constrained in memory, bandwidth, energy, and processing capability. In a WSN, data transmission consumes more energy than sensing and processing, so diverse clustering and data aggregation techniques have been designed to achieve high energy efficiency. In this view, the current research article presents a novel Type-II Fuzzy Logic-based Cluster Head selection with Low Complexity Data Aggregation (T2FLCH-LCDA) technique for WSNs. The presented model involves a two-stage process: clustering and data aggregation. Initially, three input parameters, namely residual energy, distance to the Base Station (BS), and node centrality, are used by the T2FLCH technique for CH selection and cluster construction. Besides, the LCDA technique, which follows a Dictionary-Based Encoding (DBE) process, performs the data aggregation at the CHs. Finally, the aggregated data is transmitted to the BS, thereby achieving energy efficiency. The experimental validation of the T2FLCH-LCDA technique was executed under three different scenarios based on the position of the BS. The experimental results revealed that the T2FLCH-LCDA technique achieved higher energy efficiency, lifetime, Compression Ratio (CR), and power saving than the compared methods.
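A minimal sketch of what a dictionary-based encoding step at the cluster head could look like (the paper's exact DBE scheme may differ; the readings are toy values):

```python
def dbe_encode(readings):
    """Dictionary-based encoding: build a dictionary of distinct readings
    and replace every reading by its dictionary index."""
    dictionary, index_of, codes = [], {}, []
    for value in readings:
        if value not in index_of:
            index_of[value] = len(dictionary)
            dictionary.append(value)
        codes.append(index_of[value])
    return dictionary, codes

def dbe_decode(dictionary, codes):
    """Recover the original reading sequence from the dictionary and indices."""
    return [dictionary[c] for c in codes]

# Cluster-head aggregation of repetitive sensor readings (toy data).
samples = [21.5, 21.5, 21.6, 21.5, 21.6, 21.7]
dictionary, codes = dbe_encode(samples)
assert dbe_decode(dictionary, codes) == samples
print("dictionary:", dictionary, "codes:", codes)
```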
4.
Multimedia Tools and Applications - The design of two-dimensional Infinite Impulse Response (2D IIR) filters is an important task in the field of signal processing. These filters are widely used in...
5.
The dynamic nature of wireless sensor networks (WSNs) and the large number of possible cluster configurations make searching for an optimal network structure on the fly an open challenge. To address this problem, we propose a genetic algorithm-based self-organizing network clustering (GASONeC) method that provides a framework for dynamically optimizing wireless sensor node clusters. In GASONeC, the residual energy, the expected energy expenditure, the distance to the base station, and the number of nodes in the vicinity are employed in the search for an optimal, dynamic network structure. Balancing these factors is the key to organizing nodes into appropriate clusters and designating a surrogate node as cluster head. Compared to state-of-the-art methods, GASONeC greatly extends network lifetime, with improvements of up to 43.44%. Node density greatly affects network longevity: as the distance between nodes increases, network lifetime is usually shortened. In addition, when the base station is placed far from the sensor field, forming more clusters is preferred to conserve energy. The overall average running time of GASONeC is 0.58 s with a standard deviation of 0.05 s.
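The abstract implies a fitness function that trades off residual energy, expected energy expenditure, distance to the base station, and neighbourhood size. The sketch below shows one plausible form of such a cluster-head score under an assumed first-order radio model; the constants and weights are illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative first-order radio-model constants (not GASONeC's values).
E_ELEC = 50e-9      # J/bit, electronics energy
EPS_AMP = 100e-12   # J/bit/m^2, free-space amplifier energy
PACKET_BITS = 2000

def tx_energy(bits: int, distance: float) -> float:
    """Energy to transmit `bits` over `distance` (free-space model)."""
    return bits * (E_ELEC + EPS_AMP * distance ** 2)

def ch_fitness(residual_energy: float, dist_to_bs: float,
               neighbor_dists: np.ndarray,
               weights=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Score a candidate cluster head; higher is better.
    Rewards residual energy and neighbourhood size, penalises expected
    intra-cluster expenditure and distance to the base station."""
    expected_spend = tx_energy(PACKET_BITS, dist_to_bs) \
        + sum(tx_energy(PACKET_BITS, d) for d in neighbor_dists)
    w1, w2, w3, w4 = weights
    return (w1 * residual_energy
            + w4 * len(neighbor_dists)
            - w2 * 1e3 * expected_spend   # scale joules into a range comparable with the other terms
            - w3 * 0.01 * dist_to_bs)

print(ch_fitness(0.5, 80.0, np.array([10.0, 15.0, 22.0])))
```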
6.
In the present digital era, the exponential increase in Internet of Things (IoT) devices poses several design issues for businesses concerning security and privacy. Earlier studies indicate that blockchain technology is a significant solution to the data-security challenges that exist in IoT. In this view, this paper presents a new privacy-preserving Secure Ant Colony Optimization with Multi-Kernel Support Vector Machine (ACOMKSVM) with an Elliptic Curve Cryptosystem (ECC) for secure and reliable IoT data sharing. The scheme uses blockchain to ensure the protection and integrity of the data while enabling secure ACOMKSVM training on partial views of IoT data collected from various data providers. ECC is then used to provide effective and accurate privacy protection for the ACOMKSVM secure learning process. In this study, the authors deploy blockchain to create a secure and reliable data-exchange platform across multiple data providers, where IoT data is encrypted and recorded in a distributed ledger. The security analysis showed that the scheme ensures the confidentiality of critical data from each data provider and protects the parameters of the ACOMKSVM model for data analysts. To examine the performance of the proposed method, it was tested on two benchmark datasets, the Breast Cancer Wisconsin Data Set (BCWD) and the Heart Disease Data Set (HDD), from the UCI machine learning repository. The simulation results indicated that the ACOMKSVM model outperformed all compared methods in several respects.
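The multi-kernel SVM component can be sketched as a convex combination of two base kernels fed to scikit-learn's precomputed-kernel SVC. The mixing weight that ACO would optimize is fixed here to an arbitrary value, and the blockchain and ECC layers are omitted; the BCWD benchmark named above stands in as the example dataset.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer   # the BCWD benchmark named above
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def multi_kernel(A, B, alpha=0.6):
    """Convex combination of an RBF and a polynomial kernel.
    `alpha` is the mixing weight that the ACO search would optimise."""
    return alpha * rbf_kernel(A, B, gamma=0.01) + (1 - alpha) * polynomial_kernel(A, B, degree=2)

clf = SVC(kernel="precomputed")
clf.fit(multi_kernel(X_tr, X_tr), y_tr)
accuracy = clf.score(multi_kernel(X_te, X_tr), y_te)
print(f"multi-kernel SVM accuracy: {accuracy:.3f}")
```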
7.
Supplier selection is a common and relevant phase in initializing supply chain processes and ensuring their sustainability. The choice of supplier is a multi-criteria decision-making (MCDM) problem in which the optimal decision is obtained from a group of criteria. The health care sector faces several types of problems, and one of the most important is selecting an appropriate supplier that fits the desired performance level. Developing the service and product quality of a country's health care facilities improves the quality of life of its population. This paper proposes an integrated multi-attribute border approximation area comparison (MABAC) method based on the best-worst method (BWM), plithogenic sets, and rough numbers. BWM is applied to determine the weight vector of the criteria in group decision-making problems with a high level of consistency. For the treatment of uncertainty, a plithogenic set and rough numbers (RN) are used to improve the accuracy of the results. Plithogenic set operations handle the information in a manner that accommodates uncertainty and vagueness. Then, based on the plithogenic aggregation and the results of the BWM evaluation, MABAC is used to find the optimal alternative according to the defined criteria. To examine the proposed integrated algorithm, an empirical example of selecting an optimal supplier from five options in the healthcare industry is presented.
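A compact numeric sketch of the final MABAC ranking step (without the BWM weighting, plithogenic aggregation, or rough-number treatment described above); the decision matrix, weights, and criterion types are invented for illustration.

```python
import numpy as np

# Illustrative decision matrix: 5 candidate suppliers x 4 criteria.
X = np.array([
    [7.0, 0.30, 8.5, 120.0],
    [6.2, 0.45, 7.9, 100.0],
    [8.1, 0.25, 6.8, 140.0],
    [5.9, 0.50, 9.1,  90.0],
    [7.5, 0.35, 8.0, 110.0],
])
weights = np.array([0.35, 0.25, 0.25, 0.15])    # would come from BWM in the paper
benefit = np.array([True, False, True, False])  # True = larger is better

# 1. Linear normalisation, handled separately for benefit and cost criteria.
x_min, x_max = X.min(axis=0), X.max(axis=0)
N = np.where(benefit, (X - x_min) / (x_max - x_min), (X - x_max) / (x_min - x_max))

# 2. Weighted matrix V and border approximation area G (geometric mean per criterion).
V = weights * (N + 1.0)
G = V.prod(axis=0) ** (1.0 / X.shape[0])

# 3. Distance to the border area and total score per alternative (higher is better).
Q = V - G
scores = Q.sum(axis=1)
print("scores:", np.round(scores, 4), "best supplier:", int(scores.argmax()) + 1)
```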
8.
Wireless Personal Communications - At a time when the Delay Tolerant Network (DTN) Research Group is making great strides in establishing the much anticipated Solar System Internet (SSI) in space,...
9.
Proton Exchange Membrane Fuel Cells (PEMFCs) are a promising renewable energy source that converts the chemical reaction between hydrogen and oxygen into electricity. To simulate, evaluate, manage, and optimize PEMFCs, an accurate mathematical model is essential. Therefore, this paper improves the accuracy of a semi-empirical mathematical model of the PEMFC by proposing a meta-heuristic technique to optimize its unidentified parameters. Because the I–V characteristic curve of PEMFC systems is nonlinear and multivariable, conventional optimization techniques are difficult and time-consuming, whereas modern meta-heuristic algorithms are ideally suited. Therefore, this paper proposes a new improved optimization algorithm based on the Heap-Based Optimizer (HBO) to estimate the unknown parameters of PEMFC models using an objective function that minimizes the error between the measured and estimated data. The improved HBO (IHBO) employs two strategies, ranking-based position update (RPU) and Lévy-based exploitation improvement (LEI), to improve the final accuracy of the SSE value with higher convergence speed. Four well-known commercial PEMFCs (the 500 W BCS stack, NetStack PS6, H-12 stack, and AVISTA SR-12 500 W modular) are utilized to verify the proposed IHBO and compare it with 11 popular optimizers using various performance metrics. The experimental findings show the superiority of IHBO in terms of convergence speed, stability, and final accuracy, where IHBO achieved fitness values of 0.01170, 2.14570, 0.11802, and 0.00014 for the 500 W BCS stack, NetStack PS6, H-12 stack, and AVISTA SR-12 500 W modular, respectively.
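A hedged sketch of the parameter-identification setup: a simplified semi-empirical stack-voltage model with unknown coefficients is fitted to measured I-V points by minimizing the sum of squared errors. SciPy's differential evolution stands in for the proposed IHBO, and both the model form and the data are illustrative rather than the paper's.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative measured I-V points for a small stack (not from the paper's datasets).
I_meas = np.array([1.0, 3.0, 6.0, 10.0, 15.0, 20.0, 25.0])
V_meas = np.array([22.5, 21.0, 19.8, 18.6, 17.3, 16.0, 14.4])

def stack_voltage(theta, current):
    """Simplified semi-empirical PEMFC stack model:
    open-circuit voltage minus activation, ohmic, and concentration losses."""
    e0, b_act, r_ohm, b_con, i_max = theta
    return (e0
            - b_act * np.log(current)                  # activation loss
            - r_ohm * current                          # ohmic loss
            + b_con * np.log(1.0 - current / i_max))   # concentration loss

def sse(theta):
    """Objective of the kind minimised by IHBO: sum of squared voltage errors."""
    return float(np.sum((V_meas - stack_voltage(theta, I_meas)) ** 2))

bounds = [(15.0, 30.0), (0.1, 3.0), (0.01, 0.5), (0.1, 3.0), (26.0, 60.0)]
result = differential_evolution(sse, bounds, seed=1, tol=1e-8)
print("fitted parameters:", np.round(result.x, 4), "SSE:", round(result.fun, 6))
```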
10.
Programming and Computer Software - Despite extensive research on using web services for security purposes, there is still a big challenge in finding a radical solution for NoSQL injection...