Similar Articles
A total of 20 similar articles were found.
1.
The inter-frame mode selection algorithm in the H.264 coding standard searches the full set of candidate modes, which imposes a high computational load on the encoder. A fast inter-frame mode selection algorithm is proposed to reduce the computational complexity of H.264 inter-frame coding. The algorithm uses depth-image information to segment the color image into background regions and regions of intense motion; each macroblock then selects a coding strategy according to the region it falls in, narrowing the range of candidate modes and speeding up mode selection. Experiments show that the algorithm saves more than 23% of encoding time on average while image quality remains essentially unchanged.
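The region-to-strategy mapping can be sketched as follows; the depth threshold and the per-region candidate-mode lists are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

FULL_MODES = ["SKIP", "16x16", "16x8", "8x16", "8x8", "8x4", "4x8", "4x4"]
BACKGROUND_MODES = ["SKIP", "16x16"]          # static areas: large partitions
MOTION_MODES = ["8x8", "8x4", "4x8", "4x4"]   # intense motion: fine partitions

def macroblock_modes(depth_block, depth_bg_thresh=200):
    """Pick a reduced candidate-mode subset for one 16x16 macroblock from its
    depth values (hypothetical threshold: far pixels are treated as background)."""
    if depth_block.mean() >= depth_bg_thresh:
        return BACKGROUND_MODES
    return MOTION_MODES

print(macroblock_modes(np.full((16, 16), 230)))   # far -> background modes
print(macroblock_modes(np.full((16, 16), 40)))    # near -> motion modes
```

Searching 2-4 modes instead of all 8 per macroblock is the source of the reported time savings.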

2.
Concealed targets are hard to detect, vary widely, and carry little information. A method is therefore proposed for detecting and tracking occluded targets with an infrared thermal imager. For detection, fuzzy C-means (FCM) clustering first groups the regions of the image, and candidate target regions are initially segmented from brightness information; texture information then refines the candidate regions to yield the final target regions. For tracking, since the contour of an occluded target is discontinuous, chain codes are used to extract the target contour and a centroid tracking algorithm follows the target. Experiments verify the effectiveness of the method for detecting and tracking occluded targets.
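A minimal fuzzy C-means on 1-D brightness values sketches the first clustering step; the cluster count, fuzzifier, and data below are assumptions for illustration:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means on 1-D samples x; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))      # closer -> higher membership
        u /= u.sum(axis=0)
    return centers, u

# two brightness groups: dark background ~10, bright candidates ~200
pixels = np.array([8., 10., 12., 9., 198., 202., 200., 205.])
centers, u = fcm(pixels)
print(np.sort(centers))
```

The bright cluster's high-membership pixels would form the initial candidate target regions.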

3.
To recognize traffic lights accurately in complex environments, a recognition method based on the HSV color space and shape features is proposed. The method first converts the image from the RGB to the HSV color space and, exploiting the independence of H from V in HSV, segments the image with per-color H thresholds to extract candidate regions. The original image is then preprocessed and processed with grayscale morphological operations, after which the Hough transform detects possible target positions. Finally, the suspected target positions are fused with the candidate regions by logical filtering, and the traffic light state is decided from the hue (H) values of the fused image. In experiments on 480 traffic light images from various scenes, the method proved robust, stable, and efficient, recognizing traffic lights well.
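The per-color hue-threshold step can be illustrated with Python's standard colorsys module; the red hue band and the saturation/value cutoffs below are assumptions, not the paper's thresholds:

```python
import colorsys

def is_red(r, g, b, lo=0.95, hi=0.05):
    """Classify an RGB pixel as 'red light' by hue. Red hue wraps around 0,
    so accept h >= lo or h <= hi; weak saturation/brightness is rejected."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (h >= lo or h <= hi) and s > 0.5 and v > 0.3

print(is_red(220, 20, 30))   # bright red pixel -> True
print(is_red(30, 200, 40))   # green pixel -> False
```

Running the same test with green and yellow hue bands yields the per-color candidate masks that the Hough step then confirms by shape.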

4.
For the terminal guidance phase, an aircraft-nose detection algorithm based on gray-gradient segmentation is proposed. First, an average-gradient operator performs edge detection to segment out the target region and the brightest target region; the centroid of each region is located, and the aircraft axis is detected from the relative positions of the two centroids, determining candidate nose information. Then, relatively stable inter-frame features of the image sequence are combined by Dempster-Shafer (D-S) evidence fusion to judge the credibility of the candidate nose information and obtain an accurate nose position. Finally, the algorithm was validated on several frames of infrared target images at different attitudes. Simulation results show that, in the terminal guidance phase, the gray-gradient-based nose detection algorithm identifies the aircraft nose effectively, runs fast, and supports real-time processing.
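The D-S fusion step rests on Dempster's rule of combination; a sketch on a two-hypothesis frame, with mass values made up for illustration:

```python
# Combine two mass functions over the frame {nose, not_nose}.

def ds_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting hypotheses and
    renormalize by 1 - K, where K is the total conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    return {h: v / k for h, v in combined.items()}

NOSE, NOT = frozenset({"nose"}), frozenset({"not_nose"})
ANY = NOSE | NOT                      # ignorance: mass on the whole frame
m_frame1 = {NOSE: 0.6, NOT: 0.1, ANY: 0.3}   # evidence from frame 1
m_frame2 = {NOSE: 0.7, NOT: 0.1, ANY: 0.2}   # evidence from frame 2
fused = ds_combine(m_frame1, m_frame2)
print(round(fused[NOSE], 3))          # belief in "nose" rises after fusion
```

Two moderately confident frames combine into a stronger belief (0.75/0.87 ≈ 0.862), which is the credibility judgment the abstract describes.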

5.
A series of preprocessing steps on the captured image is proposed, followed by coarse and then precise license plate localization. First, the plate image is converted to grayscale, smoothed to remove noise, and edge-detected, and candidate plate regions with dense edges are extracted. Then, edge extraction is performed on the cropped color image in the HSI color space to obtain saturation and intensity edges, which are merged. Finally, hue information is used to decide the plate region. The method effectively enhances the plate region and achieves precise localization.

6.
To detect moving targets precisely, a big-data-driven infrared moving target detection method is proposed. Spatial filtering preprocesses the infrared image, suppressing the background and enhancing the edges of moving targets; a Selective Search strategy then partitions the preprocessed image into many small regions, from which candidate regions for moving targets are extracted. Because inter-frame differences and false alarms between adjacent candidate-region images would degrade detection of target centers, gray-level features of the candidate regions are extracted and combined with motion features into weighted fused features to localize the candidate regions precisely. The candidate regions are fed into a convolutional neural network whose output is the detected infrared moving target, and a loss function decides whether the target is real. Experiments show that the method localizes candidate regions precisely, detects infrared moving targets with good performance, and converges strongly.

7.
于晓  高玲 《光电子.激光》2023,34(9):942-949
To extract text efficiently and accurately from blurred procuratorial images, this paper combines an edge-enhanced maximally stable extremal regions (MSER) algorithm with multi-feature adaptive-weight fusion and a support vector machine (SVM) optimized by an immune genetic algorithm (IGA). The edge-enhanced MSER algorithm detects text in the image, and the detected MSERs are merged into candidate text regions. To filter non-text regions out of the candidates, three image features are combined with a feature-fusion formula; the IGA then searches for the optimal SVM parameters, and the candidate regions are fed into the trained classifier to discard non-text regions. Experiments show that, compared with other algorithms, the proposed method achieves a higher true positive rate and a lower false positive rate, extracting text from blurred procuratorial images more accurately.

8.
詹维  仇荣超  刘军  马新星 《红外》2018,39(9):41-48
For infrared ship target detection against complex coast-and-island backgrounds, a multispectral-fusion infrared ship detection method is proposed. First, multilevel multispectral image fusion is performed in the nonsubsampled contourlet transform (NSCT) domain according to the relations among the different spectral bands; LSD line-segment detection and clustering then locate the coastline in the fused image. A selective search algorithm generates initial target candidate regions, and false regions are pruned using the spatial position of the coastline together with geometric and gray-level constraints on ship targets. Finally, histogram of oriented gradient (HOG) features are extracted from the candidate regions and a linear support vector machine (SVM) classifier identifies the real ships. Experiments show a clear improvement in detection accuracy over single-band infrared ship detection.

9.
Automatic detection of infrared ship targets against complex sea backgrounds suffers from high false-alarm and low detection rates caused by nonuniform gray levels and strong sea clutter. A detection method combining salient-region extraction with precise target segmentation is therefore proposed. First, the graph-based visual saliency (GBVS) model computes a saliency map of the image, enhancing target-region information. Second, using prior information about ships (major/minor axis, area, etc.), a multilevel thresholding algorithm extracts the salient regions of interest and determines candidate target regions in the original image. Finally, a spatially constrained fuzzy C-means (FCM) algorithm segments the candidate regions, which are screened with the target priors before the target positions are output. Compared with related methods on the public IRShips dataset, the proposed method is more accurate and faster than whole-image target search and localizes targets more precisely.

10.
An infrared small moving target detection algorithm based on particle swarm optimization (cited by 4: 2 self-citations, 2 by others)
许春晓  孙德宝  李宁  邹彤 《红外技术》2004,26(5):10-12,17
A small moving target detection algorithm based on particle swarm optimization is proposed. First, a two-step separation method preprocesses the image and screens out a small number of candidate moving targets; candidate trajectories are then constructed, and a particle swarm algorithm searches the candidate trajectories to detect the target trajectory and target points. Experiments show that the algorithm effectively detects the positions of small moving targets.
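A minimal particle swarm optimizer, shown here minimizing a toy quadratic rather than the paper's trajectory-search objective; the inertia and acceleration coefficients are common defaults, not the paper's settings:

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, seed=0):
    """Minimal particle swarm optimizer minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))       # particle positions
    v = np.zeros((n, dim))                 # particle velocities
    pbest = x.copy()                       # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()   # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy objective with optimum at (1, 1), standing in for a trajectory score
best, best_val = pso(lambda p: ((p - 1.0) ** 2).sum())
print(np.round(best, 3))
```

In the paper's setting, each particle would encode a candidate trajectory and `f` would score its consistency across frames.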

11.
A method for the automatic measurement of femur length in fetal ultrasound images is presented. Fetal femur length measurements are used to estimate gestational age by comparing the measurement to a typical growth chart. Using a real-time ultrasound system, sonographers currently indicate the femur endpoints on the ultrasound display station with a mouse-like device. The measurements are subjective, and have been proven to be inconsistent. The automatic approach described exploits prior knowledge of the general range of femoral size and shape by using morphological operators, which process images based on shape characteristics. Morphological operators are used first to remove the background (noise) from the image, next to refine the shape of the femur and remove spurious artifacts, and finally to produce a single pixel-wide skeleton of the femur. The skeleton endpoints are assumed to be the femur endpoints. The length of the femur is calculated as the distance between those endpoints. A comparison of the measurements obtained with the manual and with the automated techniques is included.
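The final measurement step, finding skeleton endpoints and taking the distance between them, can be sketched on a synthetic one-pixel-wide skeleton:

```python
import numpy as np

def endpoints(skel):
    """Endpoints of a one-pixel-wide skeleton: pixels with exactly one
    8-connected neighbor."""
    pts = []
    ys, xs = np.nonzero(skel)
    for y, x in zip(ys, xs):
        nb = skel[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].sum() - 1
        if nb == 1:
            pts.append((y, x))
    return pts

skel = np.zeros((10, 10), dtype=int)
for i in range(7):
    skel[2, 1 + i] = 1                # synthetic horizontal 7-pixel skeleton

p1, p2 = endpoints(skel)
length = np.hypot(p1[0] - p2[0], p1[1] - p2[1])
print(length)                         # prints 6.0
```

In practice the pixel distance would be scaled by the ultrasound system's spatial calibration before lookup in the growth chart.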

12.
In this paper, a novel multiresolution algorithm for registering multimodal images using an adaptive Monte Carlo scheme is presented. At each iteration, random solution candidates are generated from a multidimensional solution space of possible geometric transformations using an adaptive sampling approach. The generated solution candidates are evaluated based on the Pearson type-VII error between the phase moments of the images to determine the candidate with the lowest error residual. The multidimensional sampling distribution is refined at each iteration to produce increasingly plausible candidates for the optimal alignment between the images. The proposed algorithm is efficient, robust to local optima, and requires neither manual initialization nor prior information about the images. Experimental results on various real-world medical images show that the proposed method achieves higher registration accuracy than existing multimodal registration algorithms in situations where the images have little or no overlap.
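The adaptive sampling loop can be illustrated with a cross-entropy-style scheme on a toy 1-D registration problem; the Gaussian sampler, elite fraction, and sinusoidal "images" are assumptions standing in for the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 6 * np.pi, 400)
fixed = np.sin(x)                      # "fixed image"
true_shift = 0.8                       # ground-truth misalignment

def error(shift):
    # alignment error after undoing a candidate shift
    moved = np.sin(x - true_shift + shift)
    return np.mean((fixed - moved) ** 2)

mu, sigma = 0.0, 1.0                   # initial Gaussian sampling distribution
for _ in range(15):
    cands = rng.normal(mu, sigma, 60)              # random shift candidates
    scores = np.array([error(c) for c in cands])
    elite = cands[np.argsort(scores)[:10]]         # 10 lowest-error candidates
    mu, sigma = elite.mean(), elite.std() + 1e-6   # refit the sampler
print(round(mu, 2))
```

Refitting the sampling distribution to the elite candidates each iteration concentrates the search near the optimum, mirroring the refinement the abstract describes.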

13.
Neural style transfer has recently advanced rapidly, but a large gap remains compared with manual creation. Most existing methods do not comprehensively preserve the various kinds of semantic information in the original content images, so the original content features of the generated works are distorted or lost, leaving results that are dull and fail to convey the original themes and emotions. In this paper, we analyze the ability of existing methods to maintain single semantic information and propose a fast style transfer framework with multi-semantic preservation. Experiments indicate that our method effectively retains the original semantic information, including salience and depth features, so that the final artwork has better visual effect, highlighting its regional focus and depth information. Compared with existing methods, ours preserves semantics better and can generate more artworks with distinct regions, controllable semantics, diverse contents, and rich emotions.

14.
Image segmentation remains one of the major challenges in image analysis. In medical applications, skilled operators are usually employed to extract the desired regions that may be anatomically separate but statistically indistinguishable. Such manual processing is subject to operator errors and biases, is extremely time consuming, and has poor reproducibility. We propose a robust algorithm for the segmentation of three-dimensional (3-D) image data based on a novel combination of adaptive K-means clustering and knowledge-based morphological operations. The proposed adaptive K-means clustering algorithm is capable of segmenting the regions of smoothly varying intensity distributions. Spatial constraints are incorporated in the clustering algorithm through the modeling of the regions by Gibbs random fields. Knowledge-based morphological operations are then applied to the segmented regions to identify the desired regions according to the a priori anatomical knowledge of the region-of-interest. This proposed technique has been successfully applied to a sequence of cardiac CT volumetric images to generate the volumes of left ventricle chambers at 16 consecutive temporal frames. Our final segmentation results compare favorably with the results obtained using manual outlining. Extensions of this approach to other applications can be readily made when a priori knowledge of a given object is available.
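A plain (non-adaptive) K-means on 1-D intensities sketches the clustering ingredient; the adaptive variant and the Gibbs-random-field spatial constraints of the paper are omitted:

```python
import numpy as np

def kmeans_1d(x, k=2, iters=50):
    """Plain K-means on 1-D intensities (a stand-in for the adaptive variant)."""
    centers = np.linspace(x.min(), x.max(), k)   # spread initial centers
    for _ in range(iters):
        # assign each sample to its nearest center, then recompute means
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

# two intensity populations, e.g. blood pool vs. myocardium (synthetic values)
intensities = np.array([12., 15., 11., 90., 95., 92.])
centers, labels = kmeans_1d(intensities)
print(np.sort(centers))
```

The knowledge-based morphological operations would then act on the resulting label map to isolate the anatomically desired region.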

15.
Automatic extraction of infrared remote sensing information based on deep learning (cited by 1: 0 self-citations, 1 by others)
陈睿敏  孙胜利 《红外》2017,38(8):37-43
To improve the accuracy of automatic ground-object information extraction from infrared remote sensing images, while avoiding the inefficiency of manual extraction, a remote sensing information extraction algorithm based on the UNet deep learning model is proposed. The algorithm segments five classes of ground objects (roads, buildings, trees, farmland, and water bodies) from infrared remote sensing images. First, the high-resolution but scarce training data are randomly cropped into small patches and augmented accordingly. A UNet model is then built to extract feature information from the remote sensing images automatically; it is trained with a cross-entropy loss function and the Adam optimizer, and ground-object information is then extracted precisely from five test remote sensing images. Finally, accuracy is assessed with the Jaccard index. Experiments show that the method fully fuses infrared and visible remote sensing information and achieves high accuracy in both localizing and classifying the different classes of ground objects.
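The Jaccard-index evaluation step is straightforward to reproduce on toy masks:

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index (intersection over union) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

truth = np.array([[1, 1, 0], [0, 1, 0]])   # tiny synthetic ground truth
pred  = np.array([[1, 0, 0], [0, 1, 1]])   # tiny synthetic prediction
print(jaccard(pred, truth))                # 2 overlapping pixels / 4 in union = 0.5
```

For multi-class segmentation, the index is computed per class and averaged.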

16.
This paper presents a fully automated method for segmenting articular knee cartilage and bone from in vivo 3-D dual echo steady state images. The magnetic resonance imaging (MRI) datasets were obtained from the Osteoarthritis Initiative (OAI) pilot study and include longitudinal images from controls and subjects with knee osteoarthritis (OA) scanned twice at each visit (baseline, 24 month). Initially, human experts segmented six MRI series. Five of the six resultant sets served as reference atlases for a multiatlas segmentation algorithm. The methodology created precise knee segmentations that were used to extract articular cartilage volume, surface area, and thickness as well as subchondral bone plate curvature. Comparison to manual segmentation showed Dice similarity coefficient (DSC) of 0.88 and 0.84 for the femoral and tibial cartilage. In OA subjects, thickness measurements showed test-retest precision ranging from 0.014 mm (0.6%) at the femur to 0.038 mm (1.6%) at the femoral trochlea. In the same population, the curvature test-retest precision ranged from 0.0005 mm(-1) (3.6%) at the femur to 0.0026 mm(-1) (11.7%) at the medial tibia. Thickness longitudinal changes showed OA Pearson correlation coefficient of 0.94 for the femur. In conclusion, the fully automated segmentation methodology produces reproducible cartilage volume, thickness, and shape measurements valuable for the study of OA progression.
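The agreement metric reported above, the Dice similarity coefficient (DSC), is twice the overlap divided by the total mask sizes; a toy example:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.array([[1, 1, 0], [0, 1, 0]])   # synthetic automatic segmentation
manual = np.array([[1, 1, 0], [0, 0, 1]])   # synthetic manual segmentation
print(dice(auto, manual))                   # 2*2 / (3+3) = 0.666...
```

DSC and the Jaccard index are monotonically related; DSC weights the overlap more heavily.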

17.
High-resolution X-ray computed tomography (CT) imaging is routinely used for clinical pulmonary applications. Since lung function varies regionally and because pulmonary disease is usually not uniformly distributed in the lungs, it is useful to study the lungs on a lobe-by-lobe basis. Thus, it is important to segment not only the lungs, but the lobar fissures as well. In this paper, we demonstrate the use of an anatomic pulmonary atlas, encoded with a priori information on the pulmonary anatomy, to automatically segment the oblique lobar fissures. Sixteen volumetric CT scans from 16 subjects are used to construct the pulmonary atlas. A ridgeness measure is applied to the original CT images to enhance the fissure contrast. Fissure detection is accomplished in two stages: an initial fissure search and a final fissure search. A fuzzy reasoning system is used in the fissure search to analyze information from three sources: the image intensity, an anatomic smoothness constraint, and the atlas-based search initialization. Our method has been tested on 22 volumetric thin-slice CT scans from 12 subjects, and the results are compared to manual tracings. Averaged across all 22 data sets, the RMS error between the automatically segmented and manually segmented fissures is 1.96 +/- 0.71 mm and the mean of the similarity indices between the manually defined and computer-defined lobe regions is 0.988. The results indicate a strong agreement between the automatic and manual lobe segmentations.

18.
This paper presents a vessel segmentation method which learns the geometry and appearance of vessels in medical images from annotated data and uses this knowledge to segment vessels in unseen images. Vessels are segmented in a coarse-to-fine fashion. First, the vessel boundaries are estimated with multivariate linear regression using image intensities sampled in a region of interest around an initialization curve. Subsequently, the position of the vessel boundary is refined with a robust nonlinear regression technique using intensity profiles sampled across the boundary of the rough segmentation and using information about plausible cross-sectional vessel shapes. The method was evaluated by quantitatively comparing segmentation results to manual annotations of 229 coronary arteries. On average the difference between the automatically obtained segmentations and manual contours was smaller than the inter-observer variability, which is an indicator that the method outperforms manual annotation. The method was also evaluated by using it for centerline refinement on 24 publicly available datasets of the Rotterdam Coronary Artery Evaluation Framework. Centerlines are extracted with an existing method and refined with the proposed method. This combination is currently ranked second out of 10 evaluated interactive centerline extraction methods. An additional qualitative expert evaluation in which 250 automatic segmentations were compared to manual segmentations showed that the automatically obtained contours were rated on average better than manual contours.
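The coarse stage's multivariate linear regression can be sketched with ordinary least squares on synthetic intensity features; the feature construction and linear model here are assumptions for illustration:

```python
import numpy as np

# Synthetic training data: each row holds intensity-derived features sampled
# around the initialization curve; the target is a boundary radius generated
# from an assumed linear model plus noise.
rng = np.random.default_rng(0)
feats = rng.random((100, 3))                  # 100 samples, 3 features
true_w = np.array([1.5, -0.7, 0.3])           # hypothetical ground-truth weights
radii = feats @ true_w + 0.05 * rng.standard_normal(100)

# Fit the multivariate linear regression by least squares.
w, *_ = np.linalg.lstsq(feats, radii, rcond=None)
print(np.round(w, 1))
```

The nonlinear refinement stage would then adjust this rough boundary using cross-boundary intensity profiles and shape priors.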

19.
The authors developed an efficient semiautomatic tissue classifier for X-ray computed tomography (CT) images which can be used to build patient- or animal-specific finite element (FE) models for bioelectric studies. The classifier uses a gray scale histogram for each tissue type and three-dimensional (3-D) neighborhood information. A total of 537 CT images from four animals (pigs) were classified with an average accuracy of 96.5% compared to manual classification by a radiologist. The use of 3-D, as opposed to 2-D, information reduced the error rate by 78%. Models generated using minimal or full manual editing yielded substantially identical voltage profiles. For the purpose of calculating voltage gradients or current densities in specific tissues, such as the myocardium, the appropriate slices need to be fully edited, however. The authors' classifier offers an approach to building FE models from image information with a level of manual effort that can be adjusted to the needs of the application.

20.
Currently, conventional X-ray and CT images, as well as invasive methods performed during the surgical intervention, are used to judge the local quality of a fractured proximal femur. However, these approaches either depend on the surgeon's experience or cannot assist diagnostic and planning tasks preoperatively. This work therefore proposes a method for individual analysis of local bone quality in the proximal femur based on model-based analysis of CT and X-ray images of femur specimens. A combined representation of the shape and spatial intensity distribution of an object, together with different statistical approaches to dimensionality reduction, is used to create a statistical appearance model for assessing local bone quality in CT and X-ray images. The developed algorithms are tested and evaluated on 28 femur specimens. The tools and algorithms presented here are shown to be highly adequate for automatically and objectively predicting bone mineral density values as well as a biomechanical parameter of the bone that can be measured intraoperatively.
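The dimensionality-reduction ingredient of such a statistical appearance model can be sketched with PCA via SVD on a synthetic data matrix; the 28 rows echo the 28 specimens, but the data itself is made up:

```python
import numpy as np

# Rows = specimens, columns = stacked shape/intensity variables. The data is
# generated from 2 latent modes of variation plus small noise.
rng = np.random.default_rng(0)
latent = rng.standard_normal((28, 2))
basis = rng.standard_normal((2, 50))
data = latent @ basis + 0.01 * rng.standard_normal((28, 50))

# PCA: center the data, take the SVD, and read off the variance per mode.
mean = data.mean(axis=0)
u, s, vt = np.linalg.svd(data - mean, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
print(round(float(explained[:2].sum()), 3))   # first 2 modes carry ~all variance
```

A new specimen is then described by its few mode coefficients, which serve as compact predictors for quantities such as bone mineral density.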

