Similar Documents
20 similar documents found (search time: 234 ms)
1.
Due to the difficulty of brain tumor segmentation, this paper proposes a strategy for extracting brain tumors from three-dimensional Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans using a 3D U-Net design and ResNet50, followed by conventional classification strategies. In this study, ResNet50 achieved an accuracy of 98.96% and the 3D U-Net scored 97.99% among the deep learning methods compared, while a conventional Convolutional Neural Network (CNN) gave 97.90% accuracy on the 3D MRI data. In addition, an image fusion approach combines the multimodal images into a fused image so that more features can be extracted from the medical images. We also evaluated the loss function using several Dice metrics and report Dice results on specific test cases: the average Dice coefficient and soft Dice loss over three test cases was 0.0980, while for two test cases the sensitivity and specificity were recorded as 0.0211 and 0.5867 using patch-level predictions. A software integration pipeline was built to deploy the trained model to a web server so that it can be accessed from the software system through a Representational State Transfer (REST) API. Finally, the suggested models were validated using the Area Under the Receiver Operating Characteristic curve (AUC-ROC) and the confusion matrix, and compared with existing research articles to understand the underlying problem. Through this comparative analysis, we extracted meaningful insights regarding brain tumor segmentation and identified potential gaps. The proposed model can be adapted in daily life and the healthcare domain to identify infected regions and brain cancer through various imaging modalities.
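For reference, the Dice coefficient and soft Dice loss that several of these abstracts report can be sketched in plain NumPy; this is a generic illustration of the metrics, not any paper's actual implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def soft_dice_loss(probs, target, eps=1e-7):
    """1 - soft Dice, computed directly on predicted probabilities."""
    probs = np.asarray(probs, dtype=float)
    target = np.asarray(target, dtype=float)
    num = 2.0 * (probs * target).sum() + eps
    den = (probs ** 2).sum() + (target ** 2).sum() + eps
    return 1.0 - num / den

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
# intersection = 1, pred.sum() + target.sum() = 3 -> Dice = 2/3
print(round(dice_coefficient(pred, target), 4))  # 0.6667
```

The soft variant is differentiable, which is why segmentation networks typically train on it while the hard binary Dice is what gets reported.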

2.
Segmenting brain tumors in Magnetic Resonance Imaging (MRI) volumes is challenging due to their diffuse and irregular shapes. Recently, 2D and 3D deep neural networks have become popular for medical image segmentation because of the availability of labelled datasets. However, 3D networks can be computationally expensive and require significant training resources. This research proposes a 3D deep learning model for brain tumor segmentation that uses lightweight feature extraction modules to improve performance without compromising contextual information or accuracy. The proposed model, called Hybrid Attention-Based Residual Unet (HA-RUnet), is based on the Unet architecture and utilizes residual blocks to extract low- and high-level features from MRI volumes. Attention and Squeeze-Excitation (SE) modules are also integrated at different levels to adaptively learn attention-aware features within local and global receptive fields. The proposed model was trained on the BraTS-2020 dataset and, on the test dataset, achieved Dice scores of 0.867, 0.813, and 0.787 and sensitivities of 0.93, 0.88, and 0.83 for the Whole Tumor, Tumor Core, and Enhancing Tumor regions, respectively. Experimental results show that the proposed HA-RUnet model outperforms the ResUnet and AResUnet base models while having fewer parameters than other state-of-the-art models. Overall, the proposed HA-RUnet model can improve brain tumor segmentation accuracy and facilitate appropriate diagnosis and treatment planning for medical practitioners.
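A Squeeze-Excitation module of the kind HA-RUnet integrates can be illustrated in plain NumPy: global-average-pool per channel, a two-layer bottleneck, and a sigmoid gate that rescales each channel. The weight shapes and reduction ratio here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def squeeze_excitation(feature_map, w1, w2):
    """Channel-wise SE recalibration for a (C, H, W) feature map.

    w1: (C, C//r) reduction weights; w2: (C//r, C) expansion weights.
    """
    c = feature_map.shape[0]
    squeeze = feature_map.reshape(c, -1).mean(axis=1)    # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)               # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))         # sigmoid gate -> (C,)
    return feature_map * scale[:, None, None]            # reweight channels

rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 8, 8))
out = squeeze_excitation(fmap, rng.standard_normal((4, 2)), rng.standard_normal((2, 4)))
print(out.shape)  # (4, 8, 8)
```

Because the gate lies in (0, 1), the module can only attenuate channels, which is what lets the network emphasize informative feature maps at negligible parameter cost.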

3.
Breast cancer ranks as the most common malignancy and the main source of cancer-related morbidity and mortality among females throughout the world, topping all newly diagnosed cancer incidences. Two features substantially influence the classification accuracy of malignancy and benignity in automated cancer diagnostics: the precision of tumor segmentation and the appropriateness of the attributes extracted for the diagnosis. In this research, the authors propose a ResU-Net (Residual U-Network) model for breast tumor segmentation. The proposed methodology renders augmented and precise identification of tumor regions and produces accurate breast tumor segmentation in contrast-enhanced MR images. Furthermore, the proposed framework incorporates the residual network technique, which enhances performance and improves the training process. The performance of ResU-Net has been experimentally compared with conventional U-Net, FCN8, and FCN32. Algorithm performance is evaluated in terms of Dice coefficient, MIoU (Mean Intersection over Union), accuracy, loss, sensitivity, specificity, and F1-score. Experimental results show that ResU-Net achieved a validation accuracy of 73.22% and a Dice coefficient of 85.32% on the Rider Breast MRI dataset, outperforming the other algorithms used in the experimentation.
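The MIoU metric used above can be computed as the per-class Intersection-over-Union averaged over the classes present; a minimal NumPy sketch, not tied to any particular paper's evaluation code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union across classes present in either mask."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
# class 0: IoU = 1/2; class 1: IoU = 2/3; mean = 7/12
print(round(mean_iou(pred, target, 2), 4))  # 0.5833
```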

4.
Tumor detection has been an active research topic in recent years due to the high mortality rate of brain tumors. Computer vision (CV) and image processing techniques have recently become popular for detecting tumors in MRI images; the automated detection process is simpler and takes less time than manual processing. In addition, the variation in the expanding shape of brain tumor tissues complicates tumor detection for clinicians. In this paper, we propose a new framework for tumor detection as well as tumor classification into relevant categories. For tumor segmentation, the framework employs the Particle Swarm Optimization (PSO) algorithm, and for classification, a convolutional neural network (CNN). Popular preprocessing techniques such as noise removal, image sharpening, and skull stripping are applied at the start of the segmentation process, followed by PSO-based segmentation. In the classification step, two pre-trained CNN models, AlexNet and Inception-V3, are trained using transfer learning. Features are extracted from both trained models and fused using a serial approach for final classification with a variety of machine learning classifiers. Average Dice values on the BRATS-2018 and BRATS-2017 datasets are 98.11% and 98.25%, respectively, and average Jaccard values are 96.30% and 96.57% (segmentation results). On the same datasets, classification achieved 99.0% accuracy, a sensitivity of 0.99, a specificity of 0.99, and a precision of 0.99. Finally, the proposed method is compared to state-of-the-art existing methods and outperforms them.
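The abstract does not specify the PSO objective, so the sketch below assumes an Otsu-style between-class-variance objective over a single intensity threshold; it is a hedged illustration of PSO-based segmentation, not the authors' method:

```python
import numpy as np

def between_class_variance(image, t):
    """Otsu-style objective: variance between fore/background at threshold t."""
    fg, bg = image[image > t], image[image <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / image.size, bg.size / image.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def pso_threshold(image, n_particles=10, iters=30, seed=0):
    """Particle Swarm Optimization over a single intensity threshold."""
    rng = np.random.default_rng(seed)
    lo, hi = image.min(), image.max()
    pos = rng.uniform(lo, hi, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([between_class_variance(image, p) for p in pos])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # inertia + cognitive + social velocity update
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([between_class_variance(image, p) for p in pos])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmax()]
    return gbest

# Synthetic "tumor" image: dark background with one bright blob.
img = np.full((32, 32), 20.0)
img[10:20, 10:20] = 200.0
t = pso_threshold(img)
print(t)  # any threshold between the two intensity groups maximizes the objective
```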

5.
With the massive success of deep networks, there have been significant efforts to analyze cancer diseases, especially skin cancer. This work investigates the capability of deep networks in diagnosing a variety of dermoscopic lesion images and aims to develop and fine-tune a deep learning architecture to diagnose different skin cancer grades based on dermatoscopic images. Fine-tuning is a powerful method to obtain enhanced classification results from a customized pre-trained network; regularization, batch normalization, and hyperparameter optimization are performed to fine-tune the proposed deep network. The proposed fine-tuned ResNet50 model successfully classified the seven classes of dermoscopic lesions in the publicly available HAM10000 dataset. The developed deep model was compared against two powerful models, InceptionV3 and VGG16, using the Dice similarity coefficient (DSC) and the area under the curve (AUC). The evaluation results show that the proposed model achieved higher results than several recent and robust models.

6.
In the last decade, there has been a significant increase in medical cases involving brain tumors. Brain tumor is the tenth most common type of tumor, affecting millions of people; however, if it is detected early, the cure rate can increase. Computer vision researchers are working to develop sophisticated techniques for detecting and classifying brain tumors, with MRI scans primarily used for tumor analysis. In this paper, we propose an automated system for brain tumor detection and classification using a saliency map and deep learning feature optimization. The proposed framework is implemented in stages. In the initial phase, a fusion-based contrast enhancement technique is proposed. In the following phase, a saliency-map-based tumor segmentation technique is proposed and mapped onto the original images using active contours. A pre-trained CNN model, EfficientNetB0, is then fine-tuned and trained in two ways: on enhanced images and on tumor localization images. Deep transfer learning is used to train both models, and features are extracted from the average pooling layer. The deep learning features are then fused using an improved fusion approach known as Entropy Serial Fusion. The best features are chosen in the final step using an improved dragonfly optimization algorithm and classified using an extreme learning machine (ELM). The experimental process is conducted on three publicly available datasets, achieving improved accuracies of 95.14%, 94.89%, and 95.94%, respectively. Comparison with several neural nets shows the improvement achieved by the proposed framework.
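An extreme learning machine of the kind used for the final classification above trains only the output layer: the hidden layer is random, and the output weights come from a least-squares solve. A minimal NumPy sketch on toy data (illustrative, not the paper's configuration):

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random hidden layer + least-squares output."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # random feature mapping
    Y = np.eye(y.max() + 1)[y]        # one-hot targets
    beta = np.linalg.pinv(H) @ Y      # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Two well-separated toy "feature" clusters standing in for real deep features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W, b, beta = train_elm(X, y)
acc = (predict_elm(X, W, b, beta) == y).mean()
print(acc)
```

Because training is a single pseudo-inverse rather than gradient descent, ELMs are fast to fit, which is why they often appear as the last stage after deep feature extraction.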

7.
Brain tumor is an anomalous proliferation of cells in the brain that can evolve into malignant or benign tumors, and segmentation of brain tumors is currently among the most important surgical and pharmaceutical procedures. However, manual segmentation is hard because erratically shaped tumors are difficult to find with only one modality; the MRI modalities are therefore integrated to provide multi-modal images with data that can be utilized to segment tumors. Recent developments in machine learning and the accessibility of medical diagnostic imaging have made it possible to tackle the challenges of segmenting brain tumors with deep neural networks. In this work, a novel Shuffled-YOLO network is proposed for segmenting brain tumors from multimodal MRI images. Initially, the scalable range-based adaptive bilateral filter (SCRAB) pre-processing technique is used to eliminate noise artifacts from MRI while preserving the edges. In the segmentation phase, we propose a novel deep Shuffled-YOLO architecture for segmenting the internal tumor structures, which include non-enhancing tumor, edema, necrosis, and enhancing tumor, from the multi-modality MRI sequences. The experimental results reveal that the proposed Shuffled-YOLO network achieves a better accuracy of 98.07% for BraTS 2020 and 97.04% for BraTS 2019 with very minimal computational complexity compared to the state-of-the-art models.

8.
Identifying fruit disease manually is time-consuming, expensive, and expert-dependent; thus, a computer-based automated system is widely required. Fruit diseases affect not only the quality but also the quantity of the crop, so with computer-based techniques it is possible to detect the disease early and treat the fruits. However, computer-based methods face several challenges, including low contrast, a lack of datasets for training a model, and inappropriate feature extraction for final classification. In this paper, we propose an automated framework for detecting apple fruit leaf diseases using CNNs and a hybrid optimization algorithm. Data augmentation is performed initially to balance the selected apple dataset. After that, two pre-trained deep models are fine-tuned and trained using transfer learning. A fusion technique named Parallel Correlation Threshold (PCT) is then proposed, and the fused feature vector is optimized in the next step using a hybrid optimization algorithm. The selected features are finally classified using machine learning algorithms. Four experiments carried out on the augmented Plant Village dataset yielded a best accuracy of 99.8%. The accuracy of the proposed framework is also compared to that of several neural nets, and it outperforms them all.
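The abstract names the PCT fusion technique but does not define it; the following is only a plausible sketch of correlation-thresholded parallel fusion (concatenate two feature matrices while dropping redundant, highly correlated columns), offered as an assumption rather than the authors' algorithm:

```python
import numpy as np

def correlation_threshold_fusion(F1, F2, thresh=0.9):
    """Fuse two (n_samples, d) feature matrices in parallel, dropping F2
    columns that are highly correlated with any F1 column (redundancy cut)."""
    F1c = F1 - F1.mean(axis=0)
    F2c = F2 - F2.mean(axis=0)
    n1 = np.linalg.norm(F1c, axis=0) + 1e-12
    n2 = np.linalg.norm(F2c, axis=0) + 1e-12
    corr = np.abs((F1c / n1).T @ (F2c / n2))   # (d1, d2) column correlations
    keep = corr.max(axis=0) < thresh           # keep only weakly correlated F2 columns
    return np.hstack([F1, F2[:, keep]])

rng = np.random.default_rng(0)
F1 = rng.standard_normal((20, 3))
# First F2 column duplicates an F1 column (scaled), so it should be dropped.
F2 = np.hstack([F1[:, :1] * 2.0, rng.standard_normal((20, 2))])
fused = correlation_threshold_fusion(F1, F2)
print(fused.shape)
```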

9.
To propose and implement an automated machine learning (ML) based methodology to predict the overall survival of glioblastoma multiforme (GBM) patients. In the proposed methodology, we used a deep learning (DL) based 3D U-shaped Convolutional Neural Network, an encoder-decoder architecture, to segment the brain tumor. Feature extraction was then performed on these segmented and raw magnetic resonance imaging (MRI) scans using a pre-trained 2D residual neural network. The dimension-reduced principal components were integrated with clinical data and the handcrafted features of tumor subregions to compare the performance of regression-based automated ML techniques. Through the proposed methodology, we achieved a mean squared error (MSE) of 87,067.328, a median squared error of 30,915.66, and a Spearman correlation of 0.326 for survival prediction (SP) on the validation set of the Multimodal Brain Tumor Segmentation 2020 dataset. This MSE is far better than that of existing automated techniques for the same patients. Automated SP of GBM patients is a crucial topic with clear clinical relevance. The results proved that DL-based feature extraction using 2D pre-trained networks is better than many heavily trained 3D and 2D prediction models built from scratch, and the ensembled approach produced better results than single models. As per the feature importance plots presented in this work, the most crucial feature affecting GBM patients' survival is the patient's age, and the most critical MRI modality for SP of GBM patients is T2 fluid-attenuated inversion recovery.
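The MSE and Spearman correlation metrics reported for survival prediction can be computed as follows; a generic NumPy illustration on made-up survival values (tie handling omitted for brevity):

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation (no ties assumed): Pearson on ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

days = np.array([100.0, 250.0, 400.0, 600.0])   # hypothetical true survival (days)
pred = np.array([150.0, 200.0, 500.0, 550.0])   # hypothetical predictions, same ordering
print(spearman_r(days, pred))                   # 1.0 (ranks agree perfectly)
mse = float(np.mean((pred - days) ** 2))
print(mse)                                      # 4375.0
```

Spearman correlation rewards getting the patient ordering right even when absolute day counts are off, which is why survival-prediction papers report it alongside MSE.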

10.
Magnetic resonance imaging (MRI) brain tumor segmentation is a crucial task for clinical treatment. However, it is challenging owing to variations in the type, size, and location of tumors. In addition, anatomical variation between individuals, intensity non-uniformity, and noise adversely affect brain tumor segmentation. To address these challenges, an automatic region-based brain tumor segmentation approach combining a fuzzy shape prior term and deep learning is presented in this paper. We define a new energy function in which an Adaptively Regularized Kernel-Based Fuzzy C-Means (ARKFCM) clustering algorithm is utilized to infer the shape of the tumor to be embedded into the level set method. In this way, some shortcomings of traditional level set methods, such as contour leakage and shrinkage, are eliminated. Moreover, a fully automated method is achieved by using U-Net to obtain the initial contour, reducing sensitivity to initial contour selection. The proposed method is validated on the BraTS 2017 benchmark dataset for brain tumor segmentation. Average values of Dice, Jaccard, sensitivity, and specificity are 0.93 ± 0.03, 0.86 ± 0.06, 0.95 ± 0.04, and 0.99 ± 0.003, respectively. Experimental results indicate that the proposed method outperforms the other state-of-the-art methods in brain tumor segmentation.
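The core Fuzzy C-Means iteration underlying ARKFCM can be sketched on 1-D intensities; this is plain FCM, not the adaptively regularized kernel variant the paper actually uses:

```python
import numpy as np

def fcm_memberships(data, centers, m=2.0, eps=1e-9):
    """Fuzzy C-Means membership update: u_ik proportional to d_ik^(-2/(m-1))."""
    d = np.abs(data[:, None] - centers[None, :]) + eps    # (N, C) distances
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def fcm_centers(data, u, m=2.0):
    """Membership-weighted mean update for the cluster centers."""
    w = u ** m
    return (w * data[:, None]).sum(axis=0) / w.sum(axis=0)

# 1-D intensities with two clear groups (e.g. background vs. tumor).
data = np.array([10.0, 12.0, 11.0, 200.0, 198.0, 205.0])
centers = np.array([50.0, 150.0])
for _ in range(20):
    u = fcm_memberships(data, centers)
    centers = fcm_centers(data, u)
print(np.round(centers, 1))  # converges near the two group means, ~[11. 201.]
```

The soft memberships (rather than hard assignments) are what allow a fuzzy shape prior to be embedded smoothly into a level-set energy.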

11.
Glioma segmentation is a critical and challenging task in surgery and treatment, and it is also the basis for subsequent evaluation of gliomas. Magnetic resonance imaging is extensively employed in diagnosing brain and nervous system abnormalities. However, brain tumor segmentation remains a challenging task because differentiating brain tumors from normal tissue is difficult, tumor boundaries are often ambiguous, and there is a high degree of variability in the shape, location, and extent of tumors across patients. It is therefore desirable to devise effective image segmentation architectures. In the past few decades, many algorithms for automatic segmentation of brain tumors have been proposed, and methods based on deep learning have achieved favorable performance. In this article, we propose a Multi-Scale 3D U-Nets architecture, which uses several U-Net blocks to capture long-distance spatial information at different resolutions. We upsample feature maps at different resolutions to extract and utilize sufficient features, under the hypothesis that semantically similar features are easier to learn and process. To reduce the computational cost, we replace some standard 3D convolutions with 3D depthwise separable convolutions. On the BraTS 2015 testing set, we obtained Dice scores of 0.85, 0.72, and 0.61 for the whole tumor, tumor core, and enhancing tumor, respectively. Our segmentation performance was competitive compared with other state-of-the-art methods.

12.
A brain tumour is a mass in which tissues become old or damaged but do not die or leave their space; such masses are mainly malignant. These tissues must die so that new tissues can be born and take their place. Tumour segmentation is a complex and time-consuming problem due to variation in the tumour's size, shape, and appearance. Manually finding such masses by analyzing Magnetic Resonance Images (MRI) is a demanding task for experts and radiologists: radiologists cannot work on large volumes of images simultaneously, and many errors occur due to overwhelming image analysis. The main objective of this research study is the segmentation of tumours in brain MRI images with the help of digital image processing and deep learning approaches. This study proposes an automatic model for tumour segmentation in MRI images. The proposed model has a few significant steps. First, a pre-processing method is applied to the whole dataset to convert Neuroimaging Informatics Technology Initiative (NIfTI) volumes into 3D NumPy arrays. Second, the model adopts the U-Net deep learning segmentation algorithm with an improved layered structure and updated parameters. Third, the model uses the state-of-the-art Medical Image Computing and Computer-Assisted Intervention (MICCAI) BraTS 2018 dataset with the MRI modalities T1, T1Gd, T2, and Fluid-Attenuated Inversion Recovery (FLAIR). Tumour types in the MRI images are classified according to the tumour masses, which are labelled following the standard convention: enhancing tumour (label 4), edema (label 2), necrotic and non-enhancing tumour core (label 1), and the remaining region (label 0).
The proposed model is evaluated and achieves Dice Coefficient (DSC) values for High-grade glioma (HGG) volumes of 0.9795, 0.9855, and 0.9793 on test set-a, test set-b, and test set-c, respectively. The DSC value for the Low-grade glioma (LGG) test set is 0.9950, which shows that the proposed model achieves significant results in segmenting tumours in MRI using deep learning approaches. The proposed model is fully automatic and can be implemented in clinics, where human experts currently spend considerable time identifying the tumorous region of a brain MRI; it can help by performing the tumour segmentation in MRI rapidly.
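The BraTS label convention described above (labels 1, 2, 4) is conventionally composed into the three evaluation regions as follows; a small NumPy sketch of that standard mapping:

```python
import numpy as np

def brats_regions(label_volume):
    """Compose BraTS evaluation regions from a label map.

    Labels: 1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor.
    """
    wt = np.isin(label_volume, [1, 2, 4])   # whole tumor: all tumor labels
    tc = np.isin(label_volume, [1, 4])      # tumor core: excludes edema
    et = label_volume == 4                  # enhancing tumor only
    return wt, tc, et

seg = np.array([[0, 2, 2],
                [1, 4, 0]])
wt, tc, et = brats_regions(seg)
print(wt.sum(), tc.sum(), et.sum())  # 4 2 1
```

Dice scores for "whole tumor", "tumor core", and "enhancing tumor" are computed against these nested binary masks rather than against the raw labels.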

13.
With the rapid growth of autonomous systems, deep learning has become an integral part of numerous applications, especially in healthcare systems. The vertebrae form one of the longest and most complex parts of the human body, and numerous conditions, such as scoliosis, vertebra degeneration, and vertebral disc spacing, are related to the vertebrae, spine, or backbone. Early detection of these problems is very important; otherwise, patients will suffer from the disease for a lifetime. In this proposed system, we developed an autonomous system that detects lumbar implants and diagnoses scoliosis from modified Vietnamese x-ray imaging. We applied two different approaches, pre-trained APIs and transfer learning with pre-trained models, due to the unavailability of sufficient x-ray medical imaging. The results show that transfer learning is more suitable for the modified Vietnamese x-ray imaging data than the pre-trained API models. Moreover, we also explored and analyzed four transfer learning models and two pre-trained API models on our datasets in terms of accuracy, sensitivity, and specificity.

14.
The identification of brain tumors is a multifarious task requiring the separation of similar-intensity pixels from their surrounding neighbours. In the proposed work, tumor detection is performed with the help of an automatic computing technique. Non-active cells in the brain region are known to be benign and will never cause the death of the patient; these non-active cells follow a uniform pattern in the brain and have lower density than the surrounding pixels. The Magnetic Resonance (MR) image contrast is improved by a cost-map construction technique. The proposed method implements a deep learning algorithm for differentiating normal brain MRI images from glioma cases. This technique extracts linear features from the brain MR image, and glioma tumors are detected based on these extracted features. The tumor regions in gliomas are then classified using the k-means clustering algorithm. The proposed algorithm provides high sensitivity, specificity, and tumor segmentation accuracy.

15.
Computer vision is one of the significant trends in computer science and plays a vital role in many applications, especially in the medical field, where early detection and segmentation of different tumors is a big challenge. The proposed framework uses ultrasound images from Kaggle, applying five diverse models to denoise the images, using the best noise-free image as input to a U-Net model for segmentation of the tumor, and then using a Convolutional Neural Network (CNN) model to classify whether the tumor is benign, malignant, or normal. The main challenge faced in the segmentation is speckle noise, a multiplicative noise in breast ultrasound imaging that reduces image resolution and contrast and thereby degrades the diagnostic value of this imaging modality. As a result, speckle noise reduction is vital for the segmentation process. The framework uses five models, Generative Adversarial Denoising Network (DGAN-Net), Denoising U-Shaped Net (D-U-NET), Batch Renormalization U-Net (Br-U-NET), Generative Adversarial Network (GAN), and Nonlocal Neutrosophic Wiener Filtering (NLNWF), to reduce the speckle noise in the breast ultrasound images, then chooses the best image according to the peak signal-to-noise ratio (PSNR) at each level of speckle noise. The five methods were compared with classical filters such as Bilateral, Frost, Kuan, and Lee and proved their efficiency in terms of PSNR at different noise levels. At speckle-noise levels (0.1, 0.25, 0.5, 0.75), the models achieved PSNR values of (33.354, 29.415, 27.218, 24.115) for DGAN, (31.424, 28.353, 27.246, 24.244) for Br-U-NET, (32.243, 28.42, 27.744, 24.893) for D-U-NET, (31.234, 28.212, 26.983, 23.234) for GAN, and (33.013, 29.491, 28.556, 25.011) for NLNWF.
According to the PSNR value and the level of speckle noise, the best image is passed to the U-Net for segmentation and to the CNN for classification to detect the tumor type. The experiments proved the quality of the U-Net and CNN in segmentation and classification, respectively, achieving a Dice score of 95.11 and an accuracy of 95.13 in segmentation, and 95.55 and 95.67 respectively in classification.
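The PSNR criterion used above to pick the best denoised image is a standard formula; a minimal NumPy sketch with a synthetic multiplicative (speckle-like) noise example:

```python
import numpy as np

def psnr(clean, noisy, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((np.asarray(clean, float) - np.asarray(noisy, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, (64, 64))
# Speckle noise is multiplicative: image * (1 + sigma * n)
speckle = clean * (1.0 + 0.1 * rng.standard_normal(clean.shape))
print(round(psnr(clean, speckle), 2))
```

Higher PSNR means the denoised image is closer to the reference, which is why the framework selects, per noise level, the denoiser output with the largest PSNR.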

16.
The segmentation of Organs At Risk (OAR) in Computed Tomography (CT) images is an essential part of the planning phase of radiation treatment, helping to avoid the adverse effects of cancer radiotherapy. Accurate segmentation is a tedious task in the head and neck region due to the large number of small, sensitive organs and the low contrast of CT images. Deep learning-based automatic contouring algorithms can ease this task even when the organs have irregular shapes and size variations. This paper proposes a fully automatic, self-supervised, deep learning-based 3D Residual UNet architecture with a Convolutional Block Attention Module (CBAM) for organ segmentation in head and neck CT images. The Model Genesis structure and image context restoration techniques are used for self-supervision, which helps the network learn image features from unlabeled data and thus mitigates the scarcity of annotated medical data for deep networks. A new loss function integrating Focal loss, Tversky loss, and Cross-entropy loss is applied for training. The proposed model outperforms state-of-the-art methods in terms of Dice similarity coefficient in segmenting the organs: our self-supervised model achieves a 4% increase in the Dice score for the chiasm, a small organ present in only a very few CT slices, and exhibits better accuracy for 5 out of 7 OARs than recent state-of-the-art models. The model can simultaneously segment all seven organs in an average time of 0.02 s. The source code of this work is made available at https://github.com/seeniafrancis/SABOSNet .
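A combined Focal + Tversky + Cross-entropy loss of the kind described can be sketched as a weighted sum; the weights, alpha/beta, and gamma values below are illustrative assumptions, since the abstract does not state them:

```python
import numpy as np

def tversky_loss(p, t, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky loss: weights false negatives (alpha) vs. false positives (beta)."""
    tp = (p * t).sum()
    fp = (p * (1 - t)).sum()
    fn = ((1 - p) * t).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_loss(p, t, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights easy, well-classified voxels."""
    p = np.clip(p, eps, 1 - eps)
    ce = -(t * np.log(p) + (1 - t) * np.log(1 - p))
    pt = np.where(t == 1, p, 1 - p)
    return float(np.mean((1 - pt) ** gamma * ce))

def cross_entropy(p, t, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(t * np.log(p) + (1 - t) * np.log(1 - p))))

def combined_loss(p, t, w=(1.0, 1.0, 1.0)):
    return w[0] * focal_loss(p, t) + w[1] * tversky_loss(p, t) + w[2] * cross_entropy(p, t)

t = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.95, 0.9, 0.1, 0.05])
bad = np.array([0.4, 0.3, 0.7, 0.6])
print(combined_loss(good, t) < combined_loss(bad, t))  # True
```

Tversky with alpha > beta penalizes missed voxels more than false alarms, which helps with tiny organs such as the chiasm that Dice-style losses otherwise neglect.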

17.
Human Action Recognition (HAR) is a current research topic in the field of computer vision, driven by an important application known as video surveillance. Computer vision researchers have introduced various intelligent methods based on deep learning and machine learning, but these still face many challenges, such as similarity between actions and redundant features. In this paper, we propose a framework for accurate HAR based on deep learning and an improved feature optimization algorithm. The framework includes several critical steps, from deep learning feature extraction to feature classification. The original video frames are normalized before the fine-tuned deep learning models, MobileNet-V2 and Darknet53, are trained. Pre-trained deep models are used for feature extraction, and their features are fused using the canonical correlation approach. An improved particle swarm optimization (IPSO) based algorithm then selects the best features, which are finally classified using various classifiers. The experimental process was performed on six publicly available datasets, KTH, UT-Interaction, UCF Sports, Hollywood, IXMAS, and UCF YouTube, attaining accuracies of 98.3%, 98.9%, 99.8%, 99.6%, 98.6%, and 100%, respectively. Compared with existing techniques, the proposed framework achieves improved accuracy.

18.
Glioma is a severe type of brain tumor; high-grade glioma can lead rapidly to death, while low-grade glioma patients can extend their life span if the tumor is detected in time and treated with proper surgery. In this article, a computer-aided, fully automated glioma brain tumor detection and segmentation system is proposed using an Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier based graph-cut approach. Initially, orientation analysis is performed on the brain image to obtain the edge-enhanced abnormal regions. Features are then extracted from the orientation-enhanced image, and these features are trained and classified using the ANFIS classifier to label the test brain image as either normal or abnormal. A normalized graph-cut segmentation methodology is applied to the classified abnormal brain image to segment the tumor region. The proposed glioma tumor segmentation method is validated using sensitivity, specificity, accuracy, and Dice similarity coefficient as metric parameters.

19.
More than 500,000 patients are diagnosed with breast cancer annually, and authorities worldwide reported a death rate of 11.6% in 2018. Breast tumors are considered a fatal disease and primarily affect middle-aged women. Various approaches to identify and classify the disease using technologies such as deep learning and image segmentation have been developed, some reaching 99% accuracy. However, boosting accuracy remains highly important, as patients' lives depend on early diagnosis and specific treatment plans. This paper presents a fully computerized method to detect and categorize tumor masses in the breast using two deep-learning models and a classifier on different datasets. The method uses the ResNet50 and AlexNet convolutional neural networks (CNNs) for deep learning and a K-Nearest-Neighbor (KNN) algorithm to classify data. Various experiments have been conducted on five datasets: the Mammographic Image Analysis Society (MIAS), Breast Cancer Histopathological Annotation and Diagnosis (BreCaHAD), King Abdulaziz University Breast Cancer Mammogram Dataset (KAU-BCMD), Breast Histopathology Images (BHI), and Breast Cancer Histopathological Image Classification (BreakHis) datasets. These datasets were used to train, validate, and test the presented method. The obtained results achieved an average of 99.38% accuracy, surpassing other models. Essential performance quantities, including precision, recall, specificity, and F-score, reached 99.71%, 99.46%, 98.08%, and 99.67%, respectively. These outcomes indicate that the presented method offers essential aid to pathologists diagnosing breast cancer, and this study suggests using the implemented algorithm to support physicians in analyzing breast cancer correctly.
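The KNN classification stage applied to the deep features can be sketched in a few lines; a generic majority-vote implementation on toy 2-D features standing in for the CNN embeddings:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],   # class 0 (e.g. benign)
              [3.0, 3.0], [3.1, 2.9], [2.9, 3.2]])  # class 1 (e.g. malignant)
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.15, 0.1])),
      knn_predict(X, y, np.array([3.0, 3.1])))  # 0 1
```

KNN adds no training cost on top of the frozen CNN feature extractors, which makes it a common final stage in such pipelines.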

20.
Deep learning (DL) techniques, which do not need complex pre-processing and feature analysis, are used in many areas of medicine and achieve promising results. On the other hand, in medical studies a limited dataset decreases the abstraction ability of the DL model. In this context, we aimed to produce synthetic brain images covering three tumor types (glioma, meningioma, and pituitary), unlike traditional data augmentation methods, and to classify them with DL. This study proposes a tumor classification model based on a Dense Convolutional Network (DenseNet121), chosen to mitigate forgetting in deep networks and to preserve information flow between layers. By comparing models trained on two different datasets, we demonstrate the effect of synthetic images generated by a Cycle Generative Adversarial Network (CycleGAN) on the generalization of DL: one model is trained only on the original dataset, while the other is trained on the combined dataset of synthetic and original images. Synthetic data generated by CycleGAN improved the best accuracy values for the glioma, meningioma, and pituitary tumor classes from 0.9633, 0.9569, and 0.9904 to 0.9968, 0.9920, and 0.9952, respectively. The developed model using synthetic data obtained a higher accuracy value than related studies in the literature. Additionally, apart from pixel-level and affine-transform data augmentation, synthetic data has been generated for the figshare brain dataset for the first time.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号