Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
The Internet of Things (IoT) technology has been developed for monitoring and controlling the environment of smart buildings in real time. Predicting future energy demand is crucial for optimising the power-generation sector and scheduling routine maintenance, yet electricity demand forecasting is difficult because of the complexity of the available demand patterns. Establishing an accurate prediction of energy consumption at the building level is vital for efficiently managing the consumed energy, which requires a strong predictive model. Low forecast accuracy is one of the main reasons why energy-consumption prediction models have failed to advance. Therefore, the purpose of this study is to create an IoT-based energy prediction (IoT-EP) model that can reliably estimate the energy consumption of smart buildings. A real-world test case on power prediction is conducted on a local electricity grid to test the practicality of the approach. The proposed IoT-EP model selects the significant features as input neurons and the predictable data as output nodes, and a multi-layer perceptron is constructed along with the features of a Convolutional Neural Network (CNN). In the analysis, the proposed IoT-EP model achieves an accuracy of 90%, a correlation of 89%, and a variance of 16%, with a shorter training time of 29.2 s and a higher prediction speed of 396 observations/s. Compared to existing models, the results show that the proposed IoT-EP model outperforms them with a satisfactory level of accuracy in predicting energy consumption in smart buildings.
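The abstract does not give the network's internals, so the following is only a minimal, hypothetical sketch of the forward pass of a multi-layer perceptron such as the one the IoT-EP model builds on; the layer sizes and weights are invented for illustration and are not from the paper.

```python
def relu(xs):
    return [max(0.0, x) for x in xs]

def dense(xs, weights, biases):
    # one fully connected layer: y_i = sum_j w_ij * x_j + b_i
    return [sum(w * x for w, x in zip(row, xs)) + b
            for row, b in zip(weights, biases)]

def mlp_predict(features, layers):
    """Forward pass through a stack of (weights, biases) layers;
    linear output suits an energy-regression target."""
    out = features
    for weights, biases in layers[:-1]:
        out = relu(dense(out, weights, biases))
    weights, biases = layers[-1]
    return dense(out, weights, biases)

# toy network: 3 input features -> 2 hidden units -> 1 energy estimate
layers = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),
    ([[1.0, 1.0]], [0.0]),
]
```

In the actual model the selected significant features would feed the input layer and the trained weights would come from backpropagation.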

2.

Telehealth utilizes information and communication mechanisms to convey medical information for providing clinical and educational assistance. It attempts to overcome issues of health-service delivery involving time, distance, and difficult terrain, while validating cost-efficiency and improving access in both developed and developing countries. Telehealth has traditionally been categorized into either real-time electronic communication or store-and-forward communication. In recent years, a third class has emerged: remote healthcare monitoring, or telehealth based on data obtained via the Internet of Things (IoT). Although telehealth data analytics and machine learning have been researched in great depth, there is a dearth of studies that concentrate entirely on the progress of ML-based techniques for telehealth data analytics in the IoT healthcare sector. Motivated by this fact, in this work a method called Weighted Bayesian and Polynomial Taylor Deep Network (WB-PTDN) is proposed to improve health prediction in a computationally efficient and accurate manner. First, the Independent Component Data Arrangement model is designed with the objective of normalizing the data obtained from the Physionet dataset. Next, with the normalized data as input, Weighted Bayesian Feature Extraction is applied to minimize the dimensionality involved and thereby extract the relevant features for further health-risk analysis. Finally, to obtain reliable predictions concerning telehealth data analytics, First Order Polynomial Taylor DNN-based Feature Homogenization is proposed, which with the aid of the First Order Polynomial Taylor function updates new results based on the analysis of old values and therefore provides increased transparency in decision making.
A comparison of the proposed and existing methods indicates that the WB-PTDN method achieves higher accuracy, a higher true positive rate, and a lower response time for IoT-based telehealth data analytics than the traditional methods.
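The abstract says the First Order Polynomial Taylor function updates new results from old values but does not give the exact formulation; the sketch below shows the generic first-order Taylor update that such a step is based on, with an invented example function.

```python
def taylor_first_order(f, df, old_value, new_value):
    """Approximate f(new_value) by expanding f around old_value:
    f(x) ~ f(a) + f'(a) * (x - a)."""
    return f(old_value) + df(old_value) * (new_value - old_value)

# example: f(x) = x^2 expanded around a = 3, evaluated at x = 3.1
approx = taylor_first_order(lambda x: x * x, lambda x: 2 * x, 3.0, 3.1)
```

The appeal for transparency is that each new prediction is an explicit, auditable correction of the previous one rather than an opaque recomputation.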


3.
In the area of medical image processing, stomach cancer is one of the most important cancers, which needs to be diagnosed at an early stage. In this paper, an optimized deep learning method is presented for multiple stomach disease classification. The proposed method works in a few important steps: preprocessing using the fusion of filtered images along with Ant Colony Optimization (ACO), deep transfer learning-based feature extraction, optimization of the deep extracted features using nature-inspired algorithms, and finally fusion of the optimal vectors and classification using a Multi-Layered Perceptron Neural Network (MLNN). In the feature-extraction step, a pre-trained Inception V3 is utilized and retrained on selected stomach infection classes using deep transfer learning. Later on, the activation function is applied to the Global Average Pool (GAP) layer for feature extraction. The extracted features are then optimized through two different nature-inspired algorithms: Particle Swarm Optimization (PSO) with a dynamic fitness function, and the Crow Search Algorithm (CSA). Both methods' outputs are fused by a maximal-value approach, and the fused feature vector is classified by the MLNN. Two datasets are used to evaluate the proposed method, CUI WahStomach Diseases and a combined dataset, achieving an average accuracy of 99.5%. Comparison with existing techniques shows that the proposed method delivers significant performance.
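The maximal-value fusion step mentioned above is, in its simplest reading, an element-wise maximum over the two optimized feature vectors; a minimal sketch under that assumption:

```python
def fuse_max(vec_a, vec_b):
    """Element-wise maximal-value fusion of two feature vectors
    (e.g., the PSO-selected and CSA-selected features)."""
    return [max(a, b) for a, b in zip(vec_a, vec_b)]

fused = fuse_max([0.2, 0.9, 0.1], [0.5, 0.3, 0.4])
```

The fused vector keeps, per dimension, whichever optimizer produced the stronger response, and it is this vector that would feed the MLNN classifier.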

4.
Magnetic Resonance Imaging (MRI) is a noninvasive, nonradioactive, and meticulous diagnostic modality in the field of medical imaging. However, the efficiency of MR image reconstruction is affected by bulky image sets and slow process implementation. Therefore, to obtain a high-quality reconstructed image, we present a sparse-aware noise removal technique that uses a convolutional neural network (SANR_CNN) for eliminating noise and improving MR image reconstruction quality. The proposed denoising technique adopts a fast CNN architecture that aids in training larger datasets with improved quality, and the SANR algorithm is used for building a dictionary-learning technique for denoising large image datasets. The proposed SANR_CNN model also preserves the details and edges in the image during reconstruction. An experiment was conducted to compare the performance of SANR_CNN against a few existing models with regard to peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE). The proposed SANR_CNN model achieved higher PSNR and SSIM, and lower MSE, than the other noise removal techniques. The proposed architecture also provides transmission of these denoised medical images through a secured IoT architecture.
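Two of the evaluation metrics named above, MSE and PSNR, have standard definitions and can be sketched directly (on flattened pixel lists; the SSIM computation is more involved and is omitted here):

```python
import math

def mse(reference, estimate):
    """Mean squared error between two flattened images."""
    return sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    err = mse(reference, estimate)
    return float("inf") if err == 0 else 10.0 * math.log10(peak * peak / err)
```

A denoiser that raises PSNR necessarily lowers MSE, which is why the two are reported together.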

5.
The most alarming and dangerous disease in the world today is coronavirus disease 2019 (COVID-19). It is caused by a member of the coronavirus family, a group of viruses that cause mild to severe respiratory disease in human beings. The infection is spread by aerosols emitted from infected individuals during talking, sneezing, and coughing. Furthermore, infection can occur by touching a contaminated surface followed by transfer of the viral load to the face, and transmission may occur through aerosols that stay suspended in the air for extended periods of time in enclosed spaces. To stop the spread of the pandemic, it is crucial to isolate infected patients in quarantine houses; however, government health organizations have faced a lack of quarantine houses and medical test facilities. The proposed model provides a first level of testing, and if any serious condition is observed at this level, patients are recommended for hospitalization. In this study, an IoT-enabled smart monitoring system is proposed to detect COVID-19-positive patients and monitor them during their home quarantine. The Internet of Medical Things (IoMT), known as healthcare IoT, is employed as the foundation of the proposed model. The least-squares (LS) method was applied to estimate the linear model parameters for a sequential pilot survey, and a statistical sequential analysis is performed as a pilot survey to efficiently collect preliminary data for an extensive survey of COVID-19-positive cases. A Bayesian approach is used, based on the assumption of a random variable for the prior distribution of the data sample. Fuzzy inference is used to construct rules based on the basic symptoms of COVID-19 patients in order to make an expert decision when detecting COVID-19-positive cases. Finally, the performance of the proposed model was determined by applying a four-fold cross-validation technique.
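The paper's actual fuzzy rule base is not given in the abstract; the sketch below illustrates the general shape of such symptom-based fuzzy inference with invented membership functions and thresholds, purely as an example of the technique.

```python
def clamp01(x):
    return min(1.0, max(0.0, x))

def covid_risk(temp_c, spo2, has_cough):
    """Hypothetical Mamdani-style rules: fuzzy memberships for fever and
    low oxygen saturation, combined with min (AND) and max (OR)."""
    fever = clamp01((temp_c - 37.0) / 2.0)    # 37 C -> 0, 39 C -> 1
    hypoxia = clamp01((95.0 - spo2) / 10.0)   # 95% -> 0, 85% -> 1
    cough = 1.0 if has_cough else 0.0
    # rule 1: fever AND cough; rule 2: hypoxia alone
    return max(min(fever, cough), hypoxia)
```

The output is a graded risk in [0, 1] rather than a hard label, which is what lets the system defer borderline cases to the next testing level.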

6.
COVID-19 is a viral disease that has taken the form of an epidemic and caused havoc worldwide after its first appearance in the city of Wuhan, China, in December 2019. Due to the similarity of its initial symptoms to viral fever, it is challenging to identify this virus early, and non-detection at an early stage can result in the death of the patient. Developing and densely populated countries face a scarcity of resources such as hospitals, ventilators, oxygen, and healthcare workers. Technologies like the Internet of Things (IoT) and artificial intelligence can play a vital role in diagnosing the COVID-19 virus at an early stage. To minimize the spread of the pandemic, IoT-enabled devices can be used to collect patients' data remotely in a secure manner, and the collected data can be analyzed through a deep learning model to detect the presence of the COVID-19 virus. In this work, the authors propose a three-phase model to diagnose COVID-19 by incorporating a chatbot, IoT, and deep learning technology. In phase one, an AI-assisted chatbot guides an individual by asking about common symptoms. If even a single sign is detected, the second phase of diagnosis is considered, consisting of a thermal scanner and a pulse oximeter. In the case of high temperature and low oxygen saturation, the third phase of diagnosis is recommended, where chest radiography images are analyzed through an AI-based model to diagnose the presence of the COVID-19 virus in the human body. The proposed model reduces human intervention through chatbot-based initial screening, sensor-based IoT devices, and deep learning-based X-ray analysis, and it helps to reduce the mortality rate by detecting the presence of the COVID-19 virus at an early stage.
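The three-phase flow described above can be sketched as a simple decision function; the temperature and saturation thresholds here are illustrative assumptions, not values from the paper.

```python
def triage(chatbot_flags_symptom, temp_c, spo2):
    """Hypothetical decision flow for the three screening phases:
    chatbot questions, then sensor readings, then imaging."""
    if not chatbot_flags_symptom:
        return "phase 1: no further testing"        # chatbot found no sign
    if temp_c <= 38.0 and spo2 >= 94.0:
        return "phase 2: home monitoring"           # sensors within range
    return "phase 3: AI-based chest X-ray analysis"  # escalate to imaging
```

Each phase acts as a filter, so the expensive imaging step is only reached by cases the cheaper phases could not clear.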

7.
Increasingly, Wireless Sensor Networks (WSNs) are contributing enormous amounts of data. Since the recent deployments of WSNs in Smart City infrastructures, significant volumes of data have been produced every day in several domains, ranging from the environment to healthcare to transportation. Using wireless sensor nodes, a Smart City environment can now be monitored for the benefit of residents: the Smart City delivers intelligent infrastructure and a stimulating environment to citizens of the smart society, including the elderly and others. Weak Quality of Service (QoS) and poor data performance are common problems in WSNs, often caused by the data-fusion method, where a small amount of bad data can significantly impact the total fusion outcome. In this research, we propose a WSN multi-sensor data-fusion technique employing fuzzy logic for event detection. Using the proposed algorithm, sensor nodes collect less repeated data, and redundant data are used to increase the overall reliability of the data. The network's fusion-delay problem is investigated, and a minimum-fusion-delay approach is provided based on the nodes' fusion waiting time. According to the experimental results, the proposed algorithm performs well in fusion. From these findings, it is concluded that the algorithm described here is an effective and dependable instrument with a wide range of applications.
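The core problem named above, one bad reading skewing the fused result, is commonly addressed by weighting each node's contribution by a reliability score; this is a generic sketch of that idea, not the paper's specific fuzzy algorithm.

```python
def fuse_readings(readings, confidences):
    """Confidence-weighted fusion: each node's reading contributes in
    proportion to its reliability weight, so one bad reading with a low
    weight cannot dominate the fused value."""
    total = sum(confidences)
    return sum(r * c for r, c in zip(readings, confidences)) / total
```

In a fuzzy variant, the confidence weights would themselves come from membership functions over each node's recent agreement with its neighbours.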

8.
Biopsy is one of the most commonly used modalities to identify breast cancer in women, in which tissue is removed and studied under the microscope by a pathologist looking for abnormalities. This technique can be time-consuming and error-prone, and it provides variable results depending on the expertise of the pathologist. An automated and efficient approach not only aids in the diagnosis of breast cancer but also reduces human effort. In this paper, we develop an automated approach for the diagnosis of breast cancer tumors using histopathological images. In the proposed approach, we design a residual-learning-based 152-layer convolutional neural network, named ResHist, for breast cancer histopathological image classification. The ResHist model learns rich and discriminative features from histopathological images and classifies them into benign and malignant classes. In addition, to enhance the performance of the developed model, we design a data-augmentation technique based on stain normalization, image-patch generation, and affine transformation. The performance of the proposed approach is evaluated on the publicly available BreaKHis dataset. The proposed ResHist model achieves an accuracy of 84.34% and an F1-score of 90.49% for the classification of histopathological images, and an accuracy of 92.52% and an F1-score of 93.45% when data augmentation is employed. The proposed approach outperforms existing methodologies in the classification of benign and malignant histopathological images. Furthermore, our experimental results demonstrate the superiority of our approach over the pre-trained networks AlexNet, VGG16, VGG19, GoogleNet, Inception-v3, ResNet50, and ResNet152 for the classification of histopathological images.
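The affine-transformation part of the augmentation pipeline can be illustrated with the simplest such transforms on an image patch (a list of pixel rows); the paper's exact set of transforms is not specified in the abstract, so these flips and the 180-degree rotation are an assumption.

```python
def augment(patch):
    """Return four simple affine variants of an image patch:
    identity, horizontal flip, vertical flip, 180-degree rotation."""
    hflip = [row[::-1] for row in patch]
    vflip = patch[::-1]
    rot180 = [row[::-1] for row in patch[::-1]]
    return [patch, hflip, vflip, rot180]
```

Because tissue appearance has no preferred orientation, each flipped or rotated patch is a valid extra training sample, multiplying the effective dataset size without new labels.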

9.
Data fusion is one of the challenging issues the healthcare sector has faced in recent years, and proper diagnosis from digital imagery and treatment are deemed to be the right solution. Intracerebral Haemorrhage (ICH), a condition characterized by injury of blood vessels in brain tissues, is one of the important causes of stroke. Images generated by X-rays and Computed Tomography (CT) are widely used for estimating the size and location of hemorrhages. Radiologists use manual planimetry, a time-consuming process, for segmenting CT scan images, and Deep Learning (DL) is the most preferred method for increasing the efficiency of diagnosing ICH. In this paper, the researchers present a unique multi-modal data-fusion-based feature-extraction technique with a Deep Learning model for Intracranial Haemorrhage Detection and Classification, abbreviated FFEDL-ICH. The proposed FFEDL-ICH model has four stages: preprocessing, image segmentation, feature extraction, and classification. The input image is first preprocessed using the Gaussian Filtering (GF) technique to remove noise. Secondly, the Density-based Fuzzy C-Means (DFCM) algorithm is used to segment the images. Furthermore, the fusion-based feature-extraction model is implemented with handcrafted features (Local Binary Patterns) and deep features (Residual Network-152) to extract useful features. Finally, a Deep Neural Network (DNN) is implemented as the classification technique to differentiate multiple classes of ICH. The researchers used a benchmark Intracranial Haemorrhage dataset and simulated the FFEDL-ICH model to assess its diagnostic performance. The findings of the study revealed that the proposed FFEDL-ICH model outperforms existing models with a significant improvement in performance. For future research, the researchers recommend improving the performance of the FFEDL-ICH model using learning-rate scheduling techniques for the DNN.
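The handcrafted Local Binary Patterns feature named in the pipeline has a standard per-pixel definition, which can be sketched directly (this is the basic 8-neighbour variant; the paper may use a different radius or uniformity scheme):

```python
def lbp_code(img, y, x):
    """Basic 8-neighbour Local Binary Pattern: set bit i when the i-th
    neighbour is at least as bright as the centre pixel."""
    centre = img[y][x]
    neighbours = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
                  img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= centre)
```

A histogram of these 0-255 codes over the segmented region forms the handcrafted feature vector that is fused with the ResNet-152 deep features.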

10.
In this paper, a simple and computationally efficient approach is proposed for person-independent facial emotion recognition. The proposed approach is based on the significant features of an image, i.e., the collection of a few largest eigenvalues (LE). Further, a Levenberg–Marquardt algorithm-based neural network (LMNN) is applied for multiclass emotion classification. This leads to a new facial emotion recognition approach (LE-LMNN), which is systematically examined on the JAFFE and Cohn–Kanade databases. Experimental results illustrate that the LE-LMNN approach is effective and computationally efficient for facial emotion recognition. The robustness of the proposed approach is also tested on low-resolution facial emotion images, and its performance is found to be superior to various existing methods.
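The eigenvalue-based feature extraction can be sketched as follows; the choice of sorting by magnitude and the value of k are assumptions for illustration, since the abstract only says "a few largest eigenvalues".

```python
import numpy as np

def le_features(block, k=3):
    """Use the k largest-magnitude eigenvalues of a square image block
    as a compact feature vector, as the LE step does before the
    neural-network classifier."""
    vals = np.linalg.eigvals(np.asarray(block, dtype=float))
    order = np.argsort(-np.abs(vals))
    return vals[order][:k]
```

Reducing an image to a handful of eigenvalues is what makes the downstream Levenberg–Marquardt training cheap and robust to resolution changes.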

11.
With the emergence of the COVID-19 pandemic, the World Health Organization (WHO) has urged scientists and industrialists to explore modern information and communication technology (ICT) as a means to reduce or even eliminate the disease. The WHO recently reported that the virus may infect the body through any organ, affecting the respiratory, immune, nervous, digestive, or cardiovascular system. Targeting this goal, we envision an implanted nanosystem embedded in the intra-living-body network. The main function of the nanosystem is either to perform diagnosis and mitigation of infectious diseases or to implement a targeted drug-delivery system (i.e., delivery of the therapeutic drug to the diseased tissue or targeted cell). Communication among the nanomachines is accomplished via diffusion-based molecular communication, and the control and interconnection of the nanosystem are accomplished through the Internet of Bio-Nano Things (IoBNT). The proposed nanosystem is designed to employ a coded relay nanomachine disciplined by the decode-and-forward (DF) principle to ensure reliable drug delivery to the targeted cell. Notably, both the sensitivity of the drug dose and the loss of drug molecules before delivery to the target cell site over long distances, due to the molecular diffusion process, are taken into account. In this paper, a coded relay nanomachine with conventional coding techniques such as RS and Turbo codes is selected to achieve minimum bit error rate (BER) and high signal-to-noise ratio (SNR), while the detection process is based on maximum likelihood (ML) probability and minimum error probability (MEP). The performance of the proposed scheme is evaluated in terms of channel capacity and bit error rate by varying system parameters such as relay position, number of released molecules, and relay and receiver size.
Analysis results are validated through simulation and demonstrate that the proposed scheme can significantly improve the delivery performance of the desired drugs in the molecular communication system.

12.
In recent years, microarray technology has gained attention for the concurrent monitoring of numerous microarray images. It remains a major challenge to process, store, and transmit such huge volumes of microarray images, so image compression techniques are used to reduce the number of bits so that the images can be stored and shared easily. Various techniques have been proposed in the past, with applications in different domains. The current research paper presents a novel image compression technique, optimized Linde–Buzo–Gray (OLBG) with Lempel–Ziv–Markov Algorithm (LZMA) coding, called OLBG-LZMA, for compressing microarray images without any loss of quality. The LBG model is generally used to design a locally optimal codebook for image compression. Codebook construction is treated as an optimization problem and resolved with the help of the Grey Wolf Optimization (GWO) algorithm. Once the codebook is constructed by the LBG-GWO algorithm, LZMA is employed for the compression of the index table to raise compression efficiency further. Experiments were performed on a high-resolution Tissue Microarray (TMA) image dataset of 50 prostate tissue samples collected from prostate cancer patients. The compression performance of the proposed coding was compared with recently proposed techniques, and the simulation results show that OLBG-LZMA coding achieved significant compression performance compared to other techniques.
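The classic LBG codebook construction that the paper optimizes can be sketched on 1-D scalar data; this toy version uses the standard split-and-refine scheme (the paper replaces the refinement with Grey Wolf Optimization, which is not shown here).

```python
def lbg_codebook(samples, k, iters=10, eps=1e-3):
    """Toy 1-D Linde-Buzo-Gray: start from the global mean, repeatedly
    split every codeword in two, then refine with nearest-neighbour
    reassignment and re-centring (a k-means-style step).
    k should be a power of two for the splitting to reach it exactly."""
    code = [sum(samples) / len(samples)]
    while len(code) < k:
        code = [c * (1 + eps) for c in code] + [c * (1 - eps) for c in code]
        for _ in range(iters):
            groups = [[] for _ in code]
            for s in samples:
                nearest = min(range(len(code)), key=lambda j: abs(s - code[j]))
                groups[nearest].append(s)
            code = [sum(g) / len(g) if g else code[i]
                    for i, g in enumerate(groups)]
    return sorted(code)
```

Each image vector is then replaced by the index of its nearest codeword, and it is this index table that LZMA compresses further.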

13.
Nowadays, security plays an important role in the Internet of Things (IoT) environment, especially in medical-service domains like disease prediction and medical data storage. In the healthcare sector, huge volumes of data are generated on a daily basis owing to the involvement of advanced healthcare devices. In general terms, healthcare images are highly sensitive to alteration, and any modification of their content can result in a faulty diagnosis. At the same time, it is also important to preserve the delicate contents of healthcare images during the reconstruction stage. Therefore, an encryption system is required to raise the privacy and security of healthcare data by not leaking any sensitive data. The current study introduces the Improved Multileader Optimization with Shadow Image Encryption for Medical Image Security (IMLOSIE-MIS) technique for the IoT environment. The aim of the proposed IMLOSIE-MIS model is to accomplish security by generating shadows and encrypting them effectively. To do so, the presented IMLOSIE-MIS model initially generates a set of shadows for every input medical image. The shadow-image encryption process then takes place with the help of the Multileader Optimization (MLO) with Homomorphic Encryption (IMLO-HE) technique, where the optimal keys are generated by the MLO algorithm. On the receiver side, the decryption process is carried out first, followed by the shadow-image reconstruction process. The experimental analysis was carried out on medical images, and the results show that the proposed IMLOSIE-MIS model is an excellent performer compared to other models. The comparison study demonstrates that the IMLOSIE-MIS model is robust and offers high security in an IoT-enabled healthcare environment.
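The abstract does not specify how the shadows are generated; one standard way to split an image into shadows is additive secret sharing, sketched below per pixel purely as an illustration of the idea, not as the IMLOSIE-MIS scheme itself.

```python
import random

def make_shadows(pixel, n=3, mod=256):
    """Split a pixel value into n additive shares ('shadows'); a party
    holding fewer than all n shares learns nothing about the pixel."""
    shares = [random.randrange(mod) for _ in range(n - 1)]
    shares.append((pixel - sum(shares)) % mod)
    return shares

def reconstruct(shares, mod=256):
    """Recombine the shadows on the receiver side."""
    return sum(shares) % mod
```

Each shadow can then be encrypted and transmitted independently, so intercepting any single ciphertext reveals nothing about the original image.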

14.
Due to the difficulties of brain tumor segmentation, this paper proposes a strategy for extracting brain tumors from three-dimensional Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans utilizing a 3D U-Net design and ResNet50, followed by conventional classification strategies. In this study, ResNet50 achieved an accuracy of 98.96% and the 3D U-Net scored 97.99% among the different deep learning methods, while a traditional Convolutional Neural Network (CNN) gave 97.90% accuracy on the 3D MRI data. In addition, an image-fusion approach combines the multimodal images into a fused image in order to extract more features from the medical images. We also evaluated the loss function using several Dice-based measurements on specific test cases: the average mean of the Dice coefficient and soft Dice loss over three test cases was 0.0980, while for two test cases the sensitivity and specificity were recorded as 0.0211 and 0.5867 using patch-level predictions. Furthermore, a software-integration pipeline was built to deploy the trained model onto a web server so that it can be accessed from a software system through a Representational State Transfer (REST) API. Finally, the suggested models were validated through the Area Under the Curve–Receiver Operating Characteristic (AUC–ROC) curve and confusion matrix, and compared with existing research articles to understand the underlying problem. Through comparative analysis, we extracted meaningful insights regarding brain tumor segmentation and identified potential gaps. The proposed model can be adapted for daily use in the healthcare domain to identify infected regions and brain cancer through various imaging modalities.
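The Dice coefficient and soft Dice loss used for evaluation above have standard definitions, sketched here on flat binary masks (the smoothing constant is a common convention, not a value from the paper):

```python
def dice_coefficient(pred, truth, smooth=1e-6):
    """Dice similarity between two binary masks given as flat lists:
    2|A intersect B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return (2.0 * inter + smooth) / (sum(pred) + sum(truth) + smooth)

def soft_dice_loss(pred, truth):
    """Training loss: 1 - Dice (pred may also be soft probabilities)."""
    return 1.0 - dice_coefficient(pred, truth)
```

Dice is preferred over plain accuracy for tumor masks because the tumor occupies a small fraction of the volume, so a model predicting all background would still score high accuracy but near-zero Dice.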

15.
Liver tumor is the fifth most common type of tumor in men and the ninth most common in women, according to the Global Cancer Statistics 2018 report. Several imaging tests, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and ultrasound, can diagnose a liver tumor after a sample is taken from the liver tissue, but these tests are costly and time-consuming. This paper proposes that image processing through a deep learning Convolutional Neural Network (CNN) ResUNet model can be helpful for the early diagnosis of tumors instead of conventional methods. Existing studies have mainly used two cascaded CNNs for liver segmentation and evaluation of the Region Of Interest (ROI). This study uses ResUNet, an updated version of the U-Net and ResNet models that utilizes residual blocks. We apply our method to the 3D-IRCADb01 dataset, which is based on CT slices of patients affected by liver tumors. The results showed a true-value accuracy of around 99% and an F1-score of around 95%. This method can help with the early and accurate diagnosis of liver tumors to save the lives of many patients.

16.
With the massive success of deep networks, there have been significant efforts to analyze cancer diseases, especially skin cancer. For this purpose, this work investigates the capability of deep networks in diagnosing a variety of dermoscopic lesion images, aiming to develop and fine-tune a deep learning architecture that diagnoses different skin cancer grades based on dermatoscopic images. Fine-tuning is a powerful method for obtaining enhanced classification results from a customized pre-trained network; regularization, batch normalization, and hyperparameter optimization are performed to fine-tune the proposed deep network. The proposed fine-tuned ResNet50 model successfully classified the seven classes of dermoscopic lesions in the publicly available HAM10000 dataset. The developed deep model was compared against two powerful models, InceptionV3 and VGG16, using the Dice similarity coefficient (DSC) and the area under the curve (AUC). The evaluation results show that the proposed model achieved higher results than some recent and robust models.

17.
The endoscopy procedure has demonstrated great efficiency in detecting stomach lesions, with extensive numbers of endoscope images produced globally each day. The content-based gastric image retrieval (CBGIR) system has demonstrated substantial potential in gastric image analysis. Gastric precancerous diseases (GPD) have a higher prevalence in gastric cancer patients; thus, effective intervention is crucial at the GPD stage. In this paper, a CBGIR method is proposed that uses a modified ResNet-18 to generate binary hash codes for a rapid and accurate image-retrieval process. We tested several popular models (AlexNet, VGGNet, and ResNet), with ResNet-18 determined to be the optimal option. The proposed method was evaluated on a GPD dataset, resulting in a classification accuracy of 96.21 ± 0.66% and a mean average precision of 0.927 ± 0.006, outperforming other state-of-the-art conventional methods. Furthermore, we constructed a Gastric-Map (GM) based on feature representations in order to visualize the retrieval results. This work has great auxiliary significance for endoscopists in understanding typical GPD characteristics and improving aided diagnosis.
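Retrieval with binary hash codes, as described above, amounts to ranking stored codes by Hamming distance to the query's code; a minimal sketch (the thresholding of network activations into bits is a common convention, assumed here rather than taken from the paper):

```python
def binarize(features, threshold=0.0):
    """Turn a real-valued embedding into a binary hash code."""
    return tuple(1 if f > threshold else 0 for f in features)

def hamming(a, b):
    """Number of differing bits between two hash codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, database_codes, top=1):
    """Rank stored hash codes by Hamming distance to the query code."""
    return sorted(database_codes, key=lambda c: hamming(query_code, c))[:top]
```

Comparing short bit strings instead of full feature vectors is what makes the retrieval fast enough for large endoscope-image archives.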

18.
Wireless Sensor Networks (WSNs) are considered one of the fundamental technologies employed in the Internet of Things (IoT), enabling diverse applications for carrying out real-time observations. Robot navigation in such networks was the main motivation for introducing the concept of landmarks: a robot can identify its own location by sending signals to obtain the distances between itself and the landmarks. Considering networks as a type of graph, this concept was redefined as the metric dimension of a graph, the minimum number of nodes needed to uniquely identify all the nodes of the graph. This idea was extended to the edge metric dimension of a graph G, the minimum number of nodes needed to uniquely identify each edge of the network. Regular plane networks can be easily constructed by repeating regular polygons; this design is of extreme importance as it yields high overall performance, and hence it can be used in various networking and IoT domains. The honeycomb and hexagonal networks are two such popular mesh-derived parallel networks. In this paper, it is proved that the minimum numbers of landmarks required for the honeycomb network HC(n) and the hexagonal network HX(n) are 3 and 6, respectively. Bounds on the number of landmarks required for the hex-derived network HDN1(n) are also proposed.
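The definition of metric dimension given above can be checked by brute force on small graphs: find the smallest landmark set under which every node's vector of distances to the landmarks is unique. A sketch, taking an all-pairs distance matrix as input:

```python
from itertools import combinations

def metric_dimension(dist):
    """Brute-force metric dimension: the smallest set of landmark nodes
    giving every node a unique distance vector. `dist` is an all-pairs
    shortest-path distance matrix."""
    n = len(dist)
    for k in range(1, n + 1):
        for landmarks in combinations(range(n), k):
            signatures = {tuple(dist[v][l] for l in landmarks)
                          for v in range(n)}
            if len(signatures) == n:   # all nodes distinguished
                return k
    return n
```

This exhaustive search is exponential, which is exactly why closed-form results such as the paper's values 3 for HC(n) and 6 for HX(n) matter for large networks.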

19.
In recent times, the evolution of blockchain technology has received huge attention from the research community due to its versatile applications and unique security features, while the IoT has shown wide adoption in various applications including smart cities, healthcare, trade, and business. Among these applications, fitness applications have been widely considered for smart fitness systems. The number of users of fitness systems is increasing at a high rate, so gym providers are constantly extending their fitness facilities, and scheduling such a huge number of requests for fitness exercise is a big challenge. Secondly, user fitness data is critical, so securing it from unauthorized access is also challenging. To overcome these issues, this work proposes a blockchain-based, load-balanced task-scheduling approach. A thorough analysis has been performed to investigate the applications of IoT in the fitness industry and various scheduling approaches. The proposed scheduling approach aims to schedule the requests of fitness users in a load-balanced way that maximizes the acceptance rate of users' requests and improves resource utilization. The performance of the proposed task-scheduling approach is compared with state-of-the-art approaches with respect to average resource utilization and task rejection ratio, and the obtained results confirm its efficiency. To investigate the performance of the blockchain, various experiments were performed using Hyperledger Caliper with respect to latency, throughput, and resource utilization. The Solo approach showed improvements of 32% and 26% in throughput compared to the Raft and Solo-Raft approaches, respectively. The obtained results assert that the proposed architecture is applicable for resource-constrained IoT applications and is extensible to different IoT applications.
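The paper's scheduler is not specified in the abstract, but a load-balanced scheduler that trades off acceptance rate against capacity is commonly sketched as greedy least-loaded assignment; the version below, including its rejection rule, is an illustrative assumption.

```python
def schedule(tasks, n_servers, capacity):
    """Greedy least-loaded assignment: send each task to the currently
    least-loaded server, rejecting it if that server cannot fit it.
    Returns (list of (task, server) placements, rejected count)."""
    load = [0] * n_servers
    placed, rejected = [], 0
    for t in tasks:
        i = min(range(n_servers), key=lambda j: load[j])
        if load[i] + t <= capacity:
            load[i] += t
            placed.append((t, i))
        else:
            rejected += 1
    return placed, rejected
```

Balancing load across servers keeps every server's headroom as large as possible, which is what maximizes the acceptance rate the paper measures as the task rejection ratio.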

20.
The goal of classification through deep learning is to build a model that skillfully separates a dataset of closely related images into different classes, despite the diminutive but continuous variations that take place in physical systems over time and have substantial effects. This study identifies ozone depletion through classification using a Faster Region-Based Convolutional Neural Network (F-RCNN). The main advantage of F-RCNN is that it places bounding boxes on images to differentiate depleted and non-depleted regions, and the primary goal of the image classification is to accurately predict the targeted class of each minutely varied case in the dataset based on ozone saturation. The permanent changes in climate are of serious concern; the leading causes behind these destructive variations are ozone layer depletion, greenhouse gas release, deforestation, pollution, contamination of water resources, and UV radiation. This research focuses on prediction by identifying ozone layer depletion, because it causes many health issues, e.g., skin cancer, damage to marine life, crop damage, and impacts on living beings' immune systems. We classify the ozone image dataset into two major classes, depleted and non-depleted regions, to extract the required persuading features through F-RCNN. In the existing literature, a CNN has been used for feature extraction, and the extracted diverse RoIs are passed on to the CNN for grouping; it is difficult to manage and differentiate those RoIs after grouping, which negatively affects the gathered results. The classification outcomes of the F-RCNN approach are proficient and demonstrate that the overall accuracy lies between 91% and 93% in identifying climate variation through ozone-concentration classification, whether the region in the image under consideration is depleted or non-depleted. Our proposed model achieved 93% accuracy and outperforms the prevailing techniques.
