Similar Documents
20 similar documents found (search time: 46 ms)
1.
Cell counting has long been an important task in medical image analysis, playing a key role in biomedical experiments and clinical medicine. To address the low counting accuracy caused by factors such as variation in cell size, this work adapts CSRNet, a network designed for highly congested object recognition, and builds a cell counting method based on multi-scale feature fusion. First, the first 10 layers of VGG16 are used to extract cell features, avoiding the loss of small-object information caused by overly deep networks. Second, a spatial pyramid pooling structure is introduced to extract and fuse multi-scale cell features, reducing the counting errors caused by diverse cell shapes, varying sizes, and occlusion. Then, hybrid dilated convolutions decode the feature maps into a density map, solving the pixel-omission problem in CSRNet's decoding process. Finally, the density map is regressed pixel by pixel to obtain the total cell count. In addition, a new combined loss function replaces the Euclidean loss during training; it considers not only the per-pixel relation between the ground-truth and predicted density maps but also their global and local density levels. Experiments show that the optimized CSRNet achieves good results on the VGG cells and MBM cells datasets and effectively mitigates the loss of counting accuracy caused by cell-size variation.

2.
Crowd counting in a single image has attracted considerable attention in computer vision because of its importance for public safety; for example, in crowded scenes, monitoring equipment can track crowd size in real time and raise early warnings about overcrowding and anomalies to prevent accidents. However, because of severe occlusion, perspective distortion, scale variation, and background clutter, accurately predicting the crowd count from a single image is extremely challenging. In this paper, we propose a model named FF-CAM to count the people in an image. It first merges low-level and high-level feature maps of the backbone network, fusing features at different scales without extra branches or subtasks and addressing the scale diversity caused by perspective. The fused feature maps are then fed into a channel attention module that refines the fusion and recalibrates the feature channels to make full use of global and spatial information. In addition, dilated convolutions at the end of the network produce a high-quality crowd density map: the dilated layers enlarge the receptive field so that the output contains more detailed spatial and global information without reducing spatial resolution. Finally, we add an SSIM-based loss that compares the local correlation between the estimated density map and the ground truth, and a regression loss on the head count that compares the estimated and true numbers of people. FF-CAM was trained and tested on the UCF_CC_50, ShanghaiTech, and UCF-QNRF datasets with excellent results, improving MAE by 4.5% and MSE by 3.8% over existing methods on UCF_CC_50.
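The two loss terms at the end of FF-CAM can be sketched roughly as follows. This is a hedged stand-in rather than the authors' code: SSIM is computed globally over a flattened density map instead of in local windows, and the weight `lam` and the stability constants are hypothetical choices.

```python
def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM between two equal-length flattened density maps."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ffcam_loss(pred, gt, lam=1e-3):
    """Count-regression loss plus an SSIM structural term (lam is a hypothetical weight)."""
    count_term = abs(sum(pred) - sum(gt))  # estimated vs. true head count
    ssim_term = 1.0 - ssim(pred, gt)       # structural dissimilarity of the maps
    return count_term + lam * ssim_term
```

A perfect prediction drives both terms to zero, so the loss vanishes exactly when the density maps coincide.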

3.
Normalized mutual information (NMI) is a widely used measure to compare community detection methods. Recently, however, the need of adjustment for information theory‐based measures has been argued because of the so‐called selection bias problem, that is, they show the tendency in choosing clustering solutions with more communities. In this article, an experimental evaluation of these measures is performed to deeply investigate the problem, and an adjustment that scales the values of these measures is proposed. Experiments on synthetic networks, for which the ground‐truth division is known, highlight that scaled NMI does not present the selection bias behavior. Moreover, a comparison among some well‐known community detection methods on synthetic generated networks shows a fairer behavior of scaled NMI, especially when the network topology does not present a clear community structure. The experimentation also on two real‐world networks reveals that the corrected formula makes it possible to choose, from a set of methods, the one that finds a network division best reflecting the ground‐truth structure.
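The unscaled NMI that this article starts from can be computed directly from two label assignments; the paper's proposed scaling is not reproduced here. The arithmetic-mean normalization and natural logarithm below are one common convention, assumed rather than taken from the article.

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """NMI between two partitions of the same nodes (arithmetic-mean normalization)."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    # mutual information from the joint and marginal community sizes
    mi = sum((nij / n) * log(n * nij / (ca[a] * cb[b]))
             for (a, b), nij in joint.items())
    ha = -sum((c / n) * log(c / n) for c in ca.values())
    hb = -sum((c / n) * log(c / n) for c in cb.values())
    if ha == 0 and hb == 0:
        return 1.0  # both partitions are trivial and identical
    return 2 * mi / (ha + hb)
```

NMI is 1 for identical partitions (up to label renaming) and 0 for statistically independent ones, which is exactly why a count-sensitive correction is needed when comparing solutions with different numbers of communities.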

4.
A Bayesian probability-based vanishing point detection algorithm is presented which introduces the use of multiple features and training with ground truth data to determine vanishing point locations. The vanishing points of 352 images were manually identified to create ground truth data. Each intersection is assigned a probability of being coincident with a ground truth vanishing point, based upon conditional probabilities of a number of features. The results of this algorithm are demonstrated to be superior to the results of a similar algorithm where each intersection is considered to be of equal importance. The advantage of this algorithm is that multiple features derived from ground truth training are used to determine vanishing point location.

5.
Wu  Qiong  Fan  Chunxiao  Li  Yong  Li  Yang  Hu  Jiahao 《Multimedia Tools and Applications》2020,79(29-30):21265-21278

In recent years, various deep neural networks have been proposed to improve the performance in the single image super-resolution (SISR) task. The commonly used per-pixel MSE loss function captures less perceptual difference and tends to make the super-resolved images overly smooth, while the perceptual loss function defined on image features extracted from one or two layers of a pretrained network yields more visually pleasing results. We propose a new perceptual loss function via combining features from multiple levels, which incorporates the discrepancy between the reconstruction and the ground truth in different structures. In addition, some variants of the proposed perceptual loss are explored. Extensive quantitative and qualitative comparisons with the state-of-the-art methods demonstrate that our loss function can drive the same network to produce better results when used alone or combined with other loss functions.

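The multi-level perceptual loss described above can be sketched as a weighted sum of feature-space MSEs. This is a minimal stand-in, not the authors' implementation: the pretrained feature extractor is omitted (features are passed in as flat lists), and the per-level weights are hypothetical.

```python
def level_mse(fa, fb):
    """Mean squared error between two flattened feature maps."""
    return sum((a - b) ** 2 for a, b in zip(fa, fb)) / len(fa)

def multilevel_perceptual_loss(feats_pred, feats_gt, weights):
    """Combine the discrepancy at several network levels into one loss.

    feats_pred / feats_gt: per-level features of the reconstruction and the
    ground truth; weights: one hypothetical scalar per level.
    """
    return sum(w * level_mse(fp, fg)
               for w, fp, fg in zip(weights, feats_pred, feats_gt))
```

Using several levels lets shallow features penalize texture errors while deep features penalize structural ones, which is the intuition behind combining them.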

6.
We propose novel techniques to find the optimal location, size, and power factor of distributed generation (DG) that achieve the maximum loss reduction for distribution networks. Determining the optimal DG location and size is achieved simultaneously using the energy loss curves technique for a pre-selected power factor that gives the best DG operation. Based on the network's total load demand, four DG sizes are selected. They are used to form energy loss curves for each bus and then for determining the optimal DG options. The study shows that by defining the energy loss minimization as the objective function, the time-varying load demand significantly affects the sizing of DG resources in distribution networks, whereas consideration of power loss as the objective function leads to inconsistent interpretation of loss reduction and other calculations. The devised technique was tested on two test distribution systems of varying size and complexity and validated by comparison with the exhaustive iterative method (EIM) and recently published results. Results showed that the proposed technique can provide an optimal solution with less computation.

7.
The computation of optical flow within an image sequence is one of the most widely used techniques in computer vision. In this paper, we present a new approach to estimate the velocity field for motion-compensated compression. It is derived by a nonlinear system using the direct temporal integral of the brightness conservation constraint equation or the Displaced Frame Difference (DFD) equation. To solve the nonlinear system of equations, an adaptive framework is used, which employs velocity field modeling, a nonlinear least-squares model, Gauss–Newton and Levenberg–Marquardt techniques, and an algorithm of the progressive relaxation of the over-constraint. The three criteria by which successful motion-compensated compression is judged are 1.) The fidelity with which the estimated optical flow matches the ground truth motion, 2.) The relative absence of artifacts and “dirty window” effects for frame interpolation, and 3.) The cost to code the motion vector field. We base our estimated flow field on a single minimized target function, which leads to motion-compensated predictions without incurring penalties in any of these three criteria. In particular, we compare our proposed algorithm results with those from Block-Matching Algorithms (BMA), and show that with nearly the same number of displacement vectors per fixed block size, the performance of our algorithm exceeds that of BMA in all the three above points. We also test the algorithm on synthetic and natural image sequences, and use it to demonstrate applications for motion-compensated compression.

8.
Objective: Model functionality stealing is one of the core problems in AI security: using limited information about a target model, an attacker trains a clone model with similar performance, thereby stealing the model's functionality. A classical line of work uses generative models, taking generator-produced images as query data and constraining the two models' predictions to agree on the same queries. However, the images such generators produce are often unrecognizable to the human eye and carry no semantic information, so the target model's outputs provide little effective guidance. To address this, we propose a new model stealing attack that effectively steals the functionality of image classifiers. Method: With the help of real image data, a generative adversarial network (GAN) pushes the generator's outputs toward real images, strengthening the physical meaning of the target model's outputs. To further improve the clone's performance, a new loss function based on the idea of contrastive learning is proposed for network optimization. Results: Experiments on two public datasets, CIFAR-10 (Canadian Institute for Advanced Research-10) and SVHN (Street View House Numbers), show that the method steals functionality effectively. On CIFAR-10, it improves stealing accuracy by 5% over state-of-the-art methods, and under the same query budget it achieves better stealing results, effectively lowering the cost of querying the target model. Conclusion: By starting from the realism of the query data, the proposed attack improves functionality stealing against image classifiers while reducing the cost of querying the target model.

9.
Bayesian networks (BN) are a powerful tool for various data-mining systems. The available methods of probabilistic inference from learning data have shortcomings such as high computation complexity and cumulative error. This is due to a partial loss of information in transition from empiric information to conditional probability tables. The paper presents a new simple and exact algorithm for probabilistic inference in BN from learning data. Translated from Kibernetika i Sistemnyi Analiz, No. 3, pp. 93–99, May–June 2007.

10.
In studies of information diffusion on social networks, the most common approach is to set a propagation probability and simulate the diffusion process with one of various propagation models; however, a hand-set probability strongly affects the outcome. Drawing on research in complex networks, we compute the influence of the information source node and, on that basis, propose a method for computing the propagation probability. Experiments compare the diffusion results obtained with a hand-set probability against those obtained with a probability that accounts for source-node influence, and by demonstrating the effectiveness of the influence algorithm we show that the computed probability is more reasonable.
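A minimal way to experiment with the idea is an independent-cascade simulation whose edge probability is supplied by a function of the source node. The degree-based probability below is a hypothetical stand-in for the paper's influence measure, not its actual formula.

```python
import random

def independent_cascade(adj, seeds, prob, rng):
    """One IC run: each newly activated node u gets a single chance to
    activate each inactive neighbor v with probability prob(u, v)."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in active and rng.random() < prob(u, v):
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

# Toy graph; propagation probability grows with the source's out-degree,
# a hypothetical proxy for node influence.
adj = {0: [1, 2, 3], 1: [2], 2: [3], 3: []}
deg = {u: len(vs) for u, vs in adj.items()}
influence_p = lambda u, v: min(1.0, 0.2 * deg[u])
spread = independent_cascade(adj, {0}, influence_p, random.Random(7))
```

Replacing `influence_p` with a constant reproduces the hand-set-probability baseline the paper argues against.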

11.
A novel similarity, neighborhood counting measure, was recently proposed which counts the number of neighborhoods of a pair of data points. This similarity can handle numerical and categorical attributes in a conceptually uniform way, can be calculated efficiently through a simple formula, and gives good performance when tested in the framework of k-nearest neighbor classifier. In particular it consistently outperforms a combination of the classical Euclidean distance and Hamming distance. This measure was also shown to be related to a probability formalism, G probability, which is induced from a target probability function P. It was however unclear how G is related to P, especially for classification. Therefore it was not possible to explain some characteristic features of the neighborhood counting measure. In this paper we show that G is a linear function of P, and G-based Bayes classification is equivalent to P-based Bayes classification. We also show that the k-nearest neighbor classifier, when weighted by the neighborhood counting measure, is in fact an approximation of the G-based Bayes classifier, and furthermore, the P-based Bayes classifier. Additionally we show that the neighborhood counting measure remains unchanged when binary attributes are treated as categorical or numerical data. This is a feature that is not shared by other distance measures, to the best of our knowledge. This study provides a theoretical insight into the neighborhood counting measure.

12.
A new image classification technique for analysis of remotely-sensed data based on geostatistical indicator kriging is introduced. Conventional classification techniques require ground truth information, use only the spectral characteristics of an unknown pixel in comparison, rely on a Gaussian probability distribution for the spectral signature of the training data, and work on a pixel support or spatial resolution without allowing classification on larger or smaller volumes. The indicator kriging classifier overcomes such problems because: (1) it relies on spectral information from laboratory studies rather than on ground truth data, (2) through the kriging estimation variances an estimate of uncertainty is derived, (3) it incorporates spatial aspects because it uses local estimation techniques, (4) it is distribution-free, and (5) it may be applied on different supports if the data are corrected for support changes. Comparison of classification results applied to the problem of mapping calcite and dolomite from GER imaging spectrometry data shows that indicator kriging performs better than the conventional classification algorithms and gives insight into the accuracy of the results without prior field knowledge.

13.
The main objective of the current paper is to evaluate and explain differences between computed green-up dates of vegetated land surface derived from satellite observations and budburst dates from ground observational networks. Landscapes dominated by deciduous broad-leaved trees in Germany are analysed. While ground observations generally record the onset of bud break, remote sensing refers to a detectable change of surface reflectance, which accounts for the unfolding of the majority of the leaves. The satellite detects, even in a homogeneous stand, two signals: the green-up of the understorey and, shortly after, the green-up of the canopy (overstorey). Results of comparisons indicate a satellite-derived green-up that is earlier, although not consistently so, than bud break derived from ground observations. We hypothesise that this is due to heterogeneous ground cover and a detection of the greening of non-tree vegetation by the satellite. This hypothesis is tested by analysing the difference between satellite-derived green-up dates (dGU) and budburst observed on the ground (dBB) as a function of the proportion of non-deciduous-forest (ndf) land use types in satellite scenes. The satellite data (a daily 1-km resolution AVHRR product) are analysed with progressively more restricted selection criteria regarding the land surface elements. The two sets of observations are compared using Gaussian Mixture Models to evaluate the statistical properties of the probability density functions (pdf) produced by the two sets, rather than by comparing geographically coincident data. It is shown that a heterogeneous vegetation cover is likely to be the main factor determining the difference between the computed green-up date and the date of budburst of the dominating tree species.

14.
For classifying multispectral satellite images, a multilayer perceptron (MLP) is trained using either (i) ground truth data or (ii) the output of a K-means clustering program or (iii) both, as applied to certain representative parts of the given data set. In the second case, different sets of clustered image outputs, which have been checked against actual ground truth data wherever available, are used for testing the MLP. The cover classes are, typically, different types of (a) vegetation (including forests and agriculture); (b) soil (including mountains, highways and rocky terrain); and (c) water bodies (including lakes). Since the extent of ground truth may not be sufficient for training neural networks, the proposed procedure (of using clustered output images) is believed to be novel and advantageous. Moreover, it is found that the MLP offers an accuracy of more than 99% when applied to the multispectral satellite images in our library. As importantly, comparison with some recent results shows that the proposed application of the MLP leads to a more accurate and faster classification of multispectral image data.

15.
16.
An evidence-theoretic information fusion method based on fuzzy sets
An information fusion method is proposed that uses fuzzy sets to determine the basic probability assignment (mass function). The method first constructs fuzzy sets for the objects to be fused, then computes the probability assignment from the membership functions, and finally fuses the multi-sensor information with the Dempster–Shafer combination rule. Experiments on automobile tire-pressure monitoring show that the mass functions obtained in this way are effective for information fusion.
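The fusion step can be sketched with Dempster's rule of combination over singleton focal elements. The membership-to-mass normalization below is a simplified, hypothetical reading of the paper's construction, and the sensor readings are illustrative only.

```python
def masses_from_memberships(mu):
    """Hypothetical mass construction: normalize membership degrees to sum to 1."""
    total = sum(mu.values())
    return {frozenset([h]): v / total for h, v in mu.items()}

def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses, keep non-empty intersections, renormalize."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Two hypothetical tire-pressure sensors over the frame {low, normal, high}.
m1 = masses_from_memberships({"low": 0.1, "normal": 0.7, "high": 0.2})
m2 = masses_from_memberships({"low": 0.2, "normal": 0.6, "high": 0.2})
fused = dempster_combine(m1, m2)  # mass concentrates on "normal"
```

Agreement between the two sources sharpens the fused belief, which is the behavior the tire-pressure experiment exploits.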

17.
Underwater imaging is being used increasingly by marine biologists as a means to assess the abundance of marine resources and their biodiversity. Previously, we developed the first automatic approach for estimating the abundance of Norway lobsters and counting their burrows in video sequences captured using a monochrome camera mounted on trawling gear. In this paper, an alternative framework is proposed and tested using deep-water video sequences acquired via a remotely operated vehicle. The proposed framework consists of four modules: (1) preprocessing, (2) object detection and classification, (3) object-tracking, and (4) quantification. Encouraging results were obtained from available test videos for the automatic video-based abundance estimation in comparison with manual counts by human experts (ground truth). For the available test set, the proposed system achieved 100% precision and recall for lobster counting, and around 83% precision and recall for burrow detection.

18.
To address the low recognition rate for pedestrians, vehicles, and other targets in infrared scenes with complex background clutter, an infrared target detection method based on an Effi-YOLOv3 model is proposed. The lightweight, efficient EfficientNet backbone is combined with the YOLOv3 network to speed up the model. Inspired by the receptive-field mechanism of human vision, an improved receptive field block greatly enlarges the network's effective receptive field at almost no extra computational cost. DBD and CBD structures built from deformable convolutions and dynamic activation functions increase the flexibility of the model's feature encoding and enlarge its capacity. CIoU, which jointly accounts for the center-point distance, overlap ratio, and aspect-ratio deviation between the predicted and ground-truth boxes, is chosen as the loss function; it better reflects their degree of overlap and accelerates box regression. Experiments show that the method reaches a mean average precision of 70.8% on the FLIR dataset with only 33.3% of YOLOv3's parameters, giving better detection performance in infrared scenes.
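The CIoU loss mentioned above combines three terms: IoU, a normalized center-point distance, and an aspect-ratio penalty. A sketch for axis-aligned `(x1, y1, x2, y2)` boxes, following the published CIoU definition rather than this paper's code, and assuming non-degenerate boxes:

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU loss: 1 - IoU + center-distance term + aspect-ratio term."""
    x1p, y1p, x2p, y2p = box_p
    x1g, y1g, x2g, y2g = box_g
    # intersection and IoU
    iw = max(0.0, min(x2p, x2g) - max(x1p, x1g))
    ih = max(0.0, min(y2p, y2g) - max(y1p, y1g))
    inter = iw * ih
    area_p = (x2p - x1p) * (y2p - y1p)
    area_g = (x2g - x1g) * (y2g - y1g)
    iou = inter / (area_p + area_g - inter)
    # squared distance between box centers
    rho2 = ((x1p + x2p - x1g - x2g) ** 2 + (y1p + y2p - y1g - y2g) ** 2) / 4.0
    # squared diagonal of the smallest enclosing box
    cw = max(x2p, x2g) - min(x1p, x1g)
    ch = max(y2p, y2g) - min(y1p, y1g)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((x2g - x1g) / (y2g - y1g))
                              - math.atan((x2p - x1p) / (y2p - y1p))) ** 2
    alpha = v / (1.0 - iou + v) if v > 0 else 0.0
    return 1.0 - iou + rho2 / c2 + alpha * v
```

Unlike plain IoU loss, the distance term keeps gradients informative even when the boxes do not overlap at all, which is what speeds up box regression.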

19.
The fuzzy c-partition entropy approach for threshold selection is an effective approach for image segmentation. The approach models the image with a fuzzy c-partition, which is obtained using parameterized membership functions. The ideal threshold is determined by searching an optimal parameter combination of the membership functions such that the entropy of the fuzzy c-partition is maximized. It involves large computation when the number of parameters needed to determine the membership function increases. In this paper, a recursive algorithm is proposed for fuzzy 2-partition entropy method, where the membership function is selected as S-function and Z-function with three parameters. The proposed recursive algorithm eliminates many repeated computations, thereby reducing the computation complexity significantly. The proposed method is tested using several real images, and its processing time is compared with those of basic exhaustive algorithm, genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO) and simulated annealing (SA). Experimental results show that the proposed method is more effective than basic exhaustive search algorithm, GA, PSO, ACO and SA.
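For concreteness, a fuzzy 2-partition entropy with an S-function membership (and Z = 1 - S for the dark class) can be written as below; the exhaustive search shown is the baseline that the paper's recursive algorithm accelerates, not the recursive algorithm itself. The particular S-function parameterization is one common convention, assumed here.

```python
import math

def s_function(x, a, b, c):
    """One common S-function with parameters a < b < c (Z = 1 - S)."""
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) ** 2 / ((b - a) * (c - a))
    if x <= c:
        return 1.0 - (x - c) ** 2 / ((c - b) * (c - a))
    return 1.0

def fuzzy_2partition_entropy(hist, a, b, c):
    """Entropy of the fuzzy 2-partition (bright = S, dark = Z) of a gray histogram."""
    n = sum(hist)
    p_bright = sum(h * s_function(g, a, b, c) for g, h in enumerate(hist)) / n
    p_dark = 1.0 - p_bright
    return -sum(p * math.log(p) for p in (p_dark, p_bright) if p > 0)

def best_parameters(hist):
    """Basic exhaustive search over (a, b, c) maximizing the partition entropy."""
    return max(((a, b, c) for a in range(len(hist) - 2)
                          for b in range(a + 1, len(hist) - 1)
                          for c in range(b + 1, len(hist))),
               key=lambda abc: fuzzy_2partition_entropy(hist, *abc))
```

The triple-nested search is what makes the exhaustive method expensive (cubic in the number of gray levels), which motivates the recursive reformulation.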

20.
A Wireless Sensor Network (WSN) should be capable of fulfilling its mission in a timely manner and without loss of important information. In this paper, we propose a new analytical model for calculating RRT (Reliable Real-Time) degree in multihop WSNs, where RRT degree describes the percentage of real-time data that the network can reliably deliver on time from any source to its destination. Also, packet loss probability is modeled as a function of the probability of link failure when the buffer is full and the probability of node failure when the node's energy is depleted. Most of the network properties are considered as random variables and a queuing theory based model is derived. In this model, the effects of network load on packet delay, RRT degree, and the node's energy depletion rate are considered. Also, network calculus is tailored and extended so that a worst-case analysis of the delay and queue quantities in sensor networks is possible. Simulation results are used to validate the proposed model, and they agree very well with it.
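The buffer-full loss component can be illustrated with the standard M/M/1/K blocking probability; the composition with node failure below is a hypothetical simplification for illustration, not the paper's model.

```python
def mm1k_blocking(lam, mu, k):
    """Blocking probability of an M/M/1/K queue: the chance an arriving
    packet finds the K-slot buffer full (lam = arrival rate, mu = service rate)."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (k + 1)
    return (1.0 - rho) * rho ** k / (1.0 - rho ** (k + 1))

def hop_delivery_probability(p_block, p_node_fail):
    """Hypothetical per-hop delivery probability: the packet is neither
    dropped by a full buffer nor lost to an energy-depleted node."""
    return (1.0 - p_block) * (1.0 - p_node_fail)
```

Raising the network load (lam) increases the blocking term, which is the qualitative effect of load on packet loss that the paper's model captures analytically.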

