Similar Documents
20 similar documents found (search time: 15 ms)
1.
Objective: When taking photos in everyday life it is difficult to capture the moment at which every face shows its best expression; repeated posed shots are time-consuming and may miss certain scenes, while traditional post-editing software is not targeted at this problem and is complicated to operate. For photos in which some facial expressions are unsatisfactory, an interactive photo-editing algorithm based on expression transfer is proposed. Method: Facial landmarks are first detected in the photo containing the source face and in a photo containing a face with the target expression. The designated face is selected interactively and its pose is normalized so that the eyes lie on the same horizontal line. If the target face and the source face belong to the same person, the target face region is deformed by scan-line warping according to the contour of the source face and the distribution of its left and right half-faces to obtain the replacement; otherwise, new positions of the source-face landmarks are computed from the geometric distribution of the target-face landmarks, and the replacement is obtained by landmark-driven mesh deformation. Finally, secondary relighting and Poisson blending stitch the replacement seamlessly into the source image. Results: Experiments show that the algorithm can edit the expressions of portrait photos whose facial features are clear and within exposure latitude; only the facial expression is changed, with no visible stitching artifacts. Conclusion: A novel interactive expression-transfer model that handles target faces of the same or a different identity is proposed; it adapts to different editing conditions and requirements and performs well.
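A minimal sketch of the final seamless-stitching step described above, using OpenCV's Poisson blending (cv2.seamlessClone). The file names, the elliptical mask, and the paste center are illustrative assumptions; landmark detection, pose normalization, relighting, and the warping step are omitted.

```python
# Sketch of the Poisson-blending step only (assumes the warped replacement
# face patch and its location have already been produced upstream).
import cv2
import numpy as np

source_photo = cv2.imread("source_group_photo.jpg")      # hypothetical file
replacement  = cv2.imread("warped_target_face.png")      # warped face patch

# White where the replacement face should overwrite the source photo.
mask = np.zeros(replacement.shape[:2], dtype=np.uint8)
cv2.ellipse(mask, (replacement.shape[1] // 2, replacement.shape[0] // 2),
            (replacement.shape[1] // 3, replacement.shape[0] // 2 - 10),
            0, 0, 360, 255, thickness=-1)

# Center of the face region in the source photo (would come from landmarks).
center = (420, 260)

# Poisson (gradient-domain) blending hides the seam between patch and photo.
blended = cv2.seamlessClone(replacement, source_photo, mask, center,
                            cv2.NORMAL_CLONE)
cv2.imwrite("edited_photo.jpg", blended)
```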

2.
We propose a connectivity editing framework for quad‐dominant meshes. In our framework, the user can edit the mesh connectivity to control the location, type, and number of irregular vertices (with more or fewer than four neighbors) and irregular faces (non‐quads). We provide a theoretical analysis of the problem, discuss what edits are possible and impossible, and describe how to implement an editing framework that realizes all possible editing operations. In the results, we show example edits and illustrate the advantages and disadvantages of different strategies for quad‐dominant mesh design.

3.
Recent studies have shown remarkable success in the face image generation task. However, existing approaches have limited diversity, quality and controllability in generating results. To address these issues, we propose a novel end-to-end learning framework to generate diverse, realistic and controllable face images guided by face masks. The face mask provides a good geometric constraint for a face by specifying the size and location of different components of the face, such as the eyes, nose and mouth. The framework consists of four components: style encoder, style decoder, generator and discriminator. The style encoder generates a style code which represents the style of the resulting face; the generator translates the input face mask into a real face based on the style code; the style decoder learns to reconstruct the style code from the generated face image; and the discriminator classifies an input face image as real or fake. With the style code, the proposed model can generate different face images matching the input face mask, and by manipulating the face mask, we can finely control the generated face image. We empirically demonstrate the effectiveness of our approach on the mask-guided face image synthesis task.
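A compact PyTorch sketch of how the style encoder and generator described above could be wired at inference time. All layer sizes, the 19-channel parsing mask, and the 64-dimensional style code are assumptions, not the paper's architecture; the style decoder and discriminator are omitted.

```python
# Minimal wiring of style encoder + generator for mask-guided synthesis.
import torch
import torch.nn as nn

STYLE_DIM, MASK_CHANNELS = 64, 19   # assumed style-code size / parsing classes

class StyleEncoder(nn.Module):
    """Maps a reference face image to a style code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, STYLE_DIM)

    def forward(self, img):
        return self.fc(self.net(img).flatten(1))

class Generator(nn.Module):
    """Translates a face mask into an image, conditioned on the style code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(MASK_CHANNELS + STYLE_DIM, 64, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh())

    def forward(self, mask, style):
        # Broadcast the style code spatially and concatenate with the mask.
        s = style[:, :, None, None].expand(-1, -1, *mask.shape[2:])
        return self.net(torch.cat([mask, s], dim=1))

# Different style codes yield different faces for the same mask.
mask = torch.randn(1, MASK_CHANNELS, 128, 128)   # one-hot in practice
ref  = torch.randn(1, 3, 128, 128)
fake = Generator()(mask, StyleEncoder()(ref))    # (1, 3, 128, 128)
```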

4.
Generative adversarial networks have developed rapidly in recent years, and combining semantic region segmentation with generative models offers a new direction for image generation research. In current work, semantic information serves as a condition that guides generation: by editing and controlling the input semantic segmentation mask, images of a desired specific style can be generated. This paper proposes an image generation framework with semantic-region style constraints, using a conditional generative adversarial network to achieve adaptive, region-wise style control of the image. Specifically, it first obtains ...

5.
We present an efficient and stable framework, called Unified Intersection Resolver (UIR), for cloth simulation systems in which not only impending collisions but also pre‐existing penetrations often arise. These two types of collisions are handled in a unified manner, by detecting edge‐face intersections first and then forming penetration stencils to be resolved iteratively. A stencil is a quadruple of vertices that reveals either a vertex‐face or an edge‐edge collision event. Each quadruple also implicitly defines a collision normal, through which the four stencil vertices can be relocated so that the corresponding edge‐face intersections disappear. We deduce three different ways, i.e., from predefined surface orientation, from history data and from global intersection analysis, to determine the collision normals of these stencils robustly. Multiple stencils that constitute a penetration region are processed simultaneously to eliminate penetrations. Cloth trapped in pinched environmental objects can be handled easily within our framework. We highlight its robustness with a number of challenging experiments involving collisions.

6.
吴涛, 董肖莉, 孟伟, 徐健, 覃鸿, 李卫军. 《智能系统学报》, 2021, 16(1): 134-141
Mainstream line-extraction algorithms detect edges in regions of low contrast poorly and apply one undifferentiated, uniform strategy to all regions, so the resulting line drawings are often complex and ill-suited to drawing with a robot arm. To address this, this paper proposes a concise line portrait generation method based on semantic segmentation (CLPG-SS). First, the face image is semantically segmented into different regions; edge contours and detail lines of the facial features are extracted per region, and the edge tangent flow is optimized to strengthen directional information. On this basis, a harmonized image is generated from the line map, and the optimized edge tangent flow, the semantic segmentation result, and the harmonized image are used to adjust the parameters of the line-extraction method for each segmented region, filtering out lines in detail-irrelevant regions and strengthening lines in detail-critical regions to produce a concise line portrait. Experimental results show that CLPG-SS effectively extracts the main contour lines of the face, adjusts detail lines per region in a targeted way, and improves the drawing efficiency of the robot arm.
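A hedged OpenCV sketch of the per-region idea: line-extraction parameters change with the semantic label, so feature regions keep detail while flat regions are simplified. Canny stands in for the paper's edge-tangent-flow-based extractor, and the label ids, thresholds, and file names are assumptions.

```python
# Region-dependent line extraction: tighter thresholds (more detail) for the
# facial-feature regions, looser ones elsewhere.
import cv2
import numpy as np

gray   = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)     # hypothetical input
seglab = cv2.imread("parsing.png", cv2.IMREAD_GRAYSCALE)  # semantic label map

# Assumed label ids and per-region Canny thresholds (low, high).
REGION_PARAMS = {
    1: (30, 90),    # eyes / brows / mouth: keep fine detail lines
    2: (80, 200),   # cheeks / forehead: suppress detail, keep contours
    3: (60, 150),   # hair: medium
}

lines = np.zeros_like(gray)
for label, (lo, hi) in REGION_PARAMS.items():
    region = (seglab == label).astype(np.uint8)
    edges  = cv2.Canny(gray, lo, hi)
    lines  = np.maximum(lines, edges * region)  # keep edges only inside region

cv2.imwrite("concise_line_portrait.png", 255 - lines)  # black lines on white
```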

7.
This article presents a method for locating facial organs that exploits the physical phenomenon of gray-level jumps around the organs. The principle is to apply repeated differentiation to the face region of a portrait, which highlights the high-brightness areas of the face. Screening rules based on facial geometric features are then applied to these high-brightness regions to exclude interfering regions, yielding the geometric centers of the facial organs. This lays the foundation for subsequently creating cartoon-style animated faces for mobile phones through operations such as "organ relocation", "eyebrow and eye movement", and "mask face-swapping".
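A rough OpenCV sketch of the described principle, assuming hypothetical input files: repeated differentiation highlights the gray-level jumps around the organs, and simple geometric screening of the resulting bright blobs yields candidate organ centers. The thresholds and the screening rule are illustrative only.

```python
import cv2
import numpy as np

face = cv2.imread("portrait_face_region.jpg", cv2.IMREAD_GRAYSCALE)

# Two rounds of differentiation emphasise gray-level jumps around the organs.
d1 = cv2.Laplacian(face, cv2.CV_32F)
d2 = cv2.Laplacian(d1, cv2.CV_32F)
response = cv2.convertScaleAbs(d2)

# Keep only the brightest responses and group them into blobs.
_, binary = cv2.threshold(response, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)

# Crude geometric screening: discard tiny blobs and those near the border.
h, w = face.shape
centers = [tuple(map(int, centroids[i]))
           for i in range(1, n)
           if stats[i, cv2.CC_STAT_AREA] > 20
           and 0.1 * w < centroids[i][0] < 0.9 * w
           and 0.15 * h < centroids[i][1] < 0.9 * h]
print("candidate organ centers:", centers)
```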

8.
One of the most common tasks in image and video editing is the local adjustment of various properties (e.g., saturation or brightness) of regions within an image or video. Edge‐aware interpolation of user‐drawn scribbles offers a less effort‐intensive approach to this problem than traditional region selection and matting. However, the technique suffers from a number of limitations, such as reduced performance in the presence of texture contrast, and the inability to handle fragmented appearances. We significantly improve the performance of edge‐aware interpolation for this problem by adding a boosting‐based classification step that learns to discriminate between the appearance of scribbled pixels. We show that this novel data term in combination with an existing edge‐aware optimization technique achieves substantially better results for the local image and video adjustment problem than edge‐aware interpolation techniques without classification, or related methods such as matting techniques or graph cut segmentation.
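A minimal sketch of the boosting-based classification step, assuming RGB-only pixel features and sklearn's GradientBoostingClassifier; the scribble masks here are synthetic stand-ins, and the resulting per-pixel probability would serve as the extra data term in the edge-aware optimization.

```python
# A boosted classifier learns to separate the appearance of "foreground"
# scribbles from "background" scribbles.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

img = np.random.rand(120, 160, 3)          # stand-in for the input image
fg_scribble = np.zeros((120, 160), bool)   # user scribble masks (assumed)
bg_scribble = np.zeros((120, 160), bool)
fg_scribble[40:60, 50:90] = True
bg_scribble[5:20, 5:40] = True

X = np.vstack([img[fg_scribble], img[bg_scribble]])
y = np.hstack([np.ones(fg_scribble.sum()), np.zeros(bg_scribble.sum())])

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)

# Per-pixel probability of belonging to the scribbled foreground appearance.
prob = clf.predict_proba(img.reshape(-1, 3))[:, 1].reshape(img.shape[:2])
```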

9.
A detail-preserving smoothing filter based on texture analysis (cited 8 times: 0 self-citations, 8 by others)
Smoothing and denoising is an important topic in image processing, but there has long been a conflict between smoothing and detail preservation. To resolve this, a detail-preserving smoothing filter based on texture analysis is proposed. The filter uses multi-scale, multi-direction templates and, with the help of texture analysis, adaptively selects a template for smoothing according to the local characteristics of each part of the image, thereby balancing noise reduction and detail preservation. Experimental results show that the algorithm is simple to implement, fast, and outperforms several other commonly used edge-preserving smoothing algorithms.
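A simplified stand-in for the multi-direction template selection, written as a Nagao/Kuwahara-style rule: each pixel is averaged over the directional neighborhood with the smallest variance, so smoothing does not cross edges. The template set and window size are assumptions, not the paper's design.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def directional_smooth(img, size=5):
    # Centered window plus four shifted (directional) windows.
    offsets = [(0, 0), (0, size // 2), (0, -(size // 2)),
               (size // 2, 0), (-(size // 2), 0)]
    means, variances = [], []
    for dy, dx in offsets:
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
        m = uniform_filter(shifted, size)
        v = uniform_filter(shifted ** 2, size) - m ** 2
        means.append(m)
        variances.append(v)
    means, variances = np.stack(means), np.stack(variances)
    best = np.argmin(variances, axis=0)          # least-textured template wins
    return np.take_along_axis(means, best[None], axis=0)[0]

noisy = np.random.rand(128, 128)
smoothed = directional_smooth(noisy)
```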

10.
Automatic makeup transfer aims to edit and synthesize facial makeup through computer algorithms and belongs to the field of face image analysis. It plays an important role in interactive entertainment, image and video editing, and assisting face recognition. As a face-editing task, however, it remains difficult to keep the edited result natural and realistic while satisfying the editing requirements, and problems remain such as imprecise control of the edited region, poor consistency between the image before and after editing, and insufficiently fine image quality. To address these difficulties, a mask-controlled automatic makeup generative adversarial network is proposed. Using masks, the network focuses its editing on the makeup regions, constrains the regions that need no editing to stay unchanged, and preserves the subject information. It can also edit local regions such as the eye shadow, lips, and cheeks individually, applying makeup to specific regions and enriching the makeup functionality. In addition, the network supports joint training on multiple datasets: besides the makeup dataset, other face datasets can be used as auxiliary data to enhance the generalization ability of the model and obtain more natural makeup results. Finally, thorough qualitative and quantitative experiments under several evaluation criteria compare the method with current mainstream algorithms and comprehensively evaluate its performance.
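A short PyTorch sketch of the mask constraint described above: pixels outside the makeup regions are penalized for deviating from the input face, so non-edited areas stay unchanged. The loss weight and mask layout are assumptions.

```python
import torch
import torch.nn.functional as F

def preservation_loss(fake, source, makeup_mask, weight=10.0):
    """fake, source: (B,3,H,W); makeup_mask: (B,1,H,W) with 1 = region to edit."""
    keep = 1.0 - makeup_mask                     # pixels that must not change
    return weight * F.l1_loss(fake * keep, source * keep)

fake   = torch.rand(2, 3, 256, 256)
source = torch.rand(2, 3, 256, 256)
mask   = torch.zeros(2, 1, 256, 256)
mask[:, :, 100:140, 80:180] = 1.0                # e.g. the lip region
loss = preservation_loss(fake, source, mask)
```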

11.
Oftentimes facial animation is created separately from overall body motion. Since convincing facial animation is challenging enough in itself, artists tend to create and edit the face motion in isolation. Or, if the face animation is derived from motion capture, it is typically performed in a mo‐cap booth while sitting relatively still. In either case, recombining the isolated face animation with body and head motion is non‐trivial and often results in an uncanny result if the body dynamics are not properly reflected on the face (e.g. the bouncing of facial tissue when running). We tackle this problem by introducing a simple and intuitive system that allows artists to add physics to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method preserves the original facial animation as closely as possible. To this end, we present a novel simulation framework that uses the original animation as per‐frame rest‐poses without adding spurious forces. As a result, in the absence of any external forces or rigid head motion, the facial performance will exactly match the artist‐created blendshape animation. In addition, we propose the concept of blendmaterials to give artists an intuitive means to account for changing material properties due to muscle activation. This system automatically combines facial animation and head motion so that they are consistent, while preserving the original animation as closely as possible. The system is easy to use and readily integrates with existing animation pipelines.
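A toy numerical illustration of the per-frame rest-pose idea, using a single damped spring as a stand-in for the facial simulation: because the rest configuration at every frame is taken from the artist's animation, no spurious force arises when the simulated state already matches the animation. All constants and the 2-D setup are assumptions.

```python
import numpy as np

def step(x, v, rest_x, anchor, dt=1 / 240, k=400.0, c=8.0, f_ext=np.zeros(2)):
    """Semi-implicit Euler step; rest_x is this frame's animated position."""
    rest_len = np.linalg.norm(rest_x - anchor)   # rest length from the animation
    d = x - anchor
    dist = np.linalg.norm(d) + 1e-9
    force = -k * (dist - rest_len) * (d / dist) - c * v + f_ext
    v = v + dt * force
    return x + dt * v, v

anchor = np.zeros(2)
x, v = np.array([1.0, 0.0]), np.zeros(2)
for frame in range(240):
    animated = np.array([1.0, 0.0])              # artist's animation, no external force
    x, v = step(x, v, rest_x=animated, anchor=anchor)
# x stays at the animated position (1, 0): no spurious force is introduced.
```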

12.
Objective: Existing face beautification algorithms tend to over-smooth detail-rich regions such as the eyes and hair, and the overall beautification effect is poor. A beautification method based on skin-color segmentation and face-image smoothing is proposed. Method: First, facial blemishes are smoothed with a dual-exponential edge-preserving filter, which removes blemishes while preserving edge information well. The skin region is then quickly and adaptively detected, corrected, and segmented using a chrominance histogram. A mask is generated by Gaussian feathering of the fitted skin region, and the smoothed image is blended with the original image so that details such as hair and background are retained. Finally, based on aesthetic standards for portraits, the skin brightness is adjusted quickly and adaptively by fitting a log curve, and details such as the eyes are enhanced, yielding a fast face beautification method. Results: Compared with other portrait beautification algorithms, the proposed algorithm smooths blemishes along skin edges more effectively while preserving the edges, giving a better beautification result; in terms of time complexity it runs 12 times faster than previous algorithms, achieving fast beautification. Conclusion: The algorithm is highly adaptable; for most face images it removes facial blemishes cleanly while leaving the background unchanged and whitening the skin naturally, so the overall beautification effect is significant; in particular, detail-rich edge regions are smoothed to an appropriate degree, giving the method practical value.
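A hedged OpenCV sketch of the pipeline, with a bilateral filter standing in for the dual-exponential edge-preserving filter; the Cr/Cb skin range, feather radius, and log-curve gain are assumptions.

```python
# Skin segmentation in YCrCb, edge-preserving smoothing, a Gaussian-feathered
# mask to blend smoothed skin with the original, and a log brightness curve.
import cv2
import numpy as np

img = cv2.imread("portrait.jpg")                          # hypothetical input

# 1. Skin-colour segmentation from the chrominance channels.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed Cr/Cb range

# 2. Edge-preserving smoothing of the whole image.
smooth = cv2.bilateralFilter(img, d=9, sigmaColor=40, sigmaSpace=9)

# 3. Feather the skin mask and blend, so hair/background keep their detail.
feather = cv2.GaussianBlur(skin, (31, 31), 0).astype(np.float32)[..., None] / 255
out = (feather * smooth + (1 - feather) * img).astype(np.uint8)

# 4. Log-curve brightness adjustment of the luminance channel.
y = cv2.cvtColor(out, cv2.COLOR_BGR2YCrCb).astype(np.float32)
y[..., 0] = 255 * np.log1p(y[..., 0] / 255 * 3) / np.log1p(3)  # gain 3 assumed
out = cv2.cvtColor(np.clip(y, 0, 255).astype(np.uint8), cv2.COLOR_YCrCb2BGR)
cv2.imwrite("beautified.jpg", out)
```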

13.
Occluded face image restoration estimates the content of the occluded region and fills it with semantically plausible content. Most existing face restoration algorithms simulate occlusion with predefined masks and do not consider how the size and position of real-world occlusions (such as glasses or face masks) affect restoration. An occlusion-aware face restoration method based on a deep convolutional generative adversarial network is proposed: it infers the missing content by learning the latent code closest to the occluded image and automatically detects the occluded region during generation. In addition, to reduce the loss of facial information and ensure that the restored face remains realistic, a semantic-aware network is introduced to further refine the model. Experiments on the selected datasets show that the proposed model performs well.

14.

There are many solutions for preventing the spread of the COVID-19 virus, and one of the most effective is wearing a face mask. Almost everyone wears a face mask in public places during the coronavirus pandemic. This encourages us to explore face mask detection technology to monitor people wearing masks in public places. Most recent and advanced face mask detection approaches are designed using deep learning. In this article, two state-of-the-art object detection models, namely YOLOv3 and Faster R-CNN, are used to achieve this task. The authors have trained both models on a dataset consisting of images of people in two categories: with and without face masks. This work proposes a technique that draws bounding boxes (red or green) around the faces of people, depending on whether a person is wearing a mask or not, and keeps a record of the ratio of people wearing face masks on a daily basis. The authors have also compared the performance of both models, i.e., their precision rate and inference time.
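An inference-time sketch of the Faster R-CNN variant using torchvision: detections are drawn as green boxes for masked faces and red boxes for unmasked ones, and a mask-wearing ratio is accumulated. The checkpoint path, class ids, and score threshold are assumptions, and the model would need to be fine-tuned on the two-category dataset first.

```python
import cv2
import torch
import torchvision

NUM_CLASSES = 3          # background + with_mask + without_mask (assumed ids 1, 2)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=NUM_CLASSES)
model.load_state_dict(torch.load("mask_detector.pth", map_location="cpu"))  # hypothetical
model.eval()

frame = cv2.imread("street.jpg")                               # hypothetical frame
tensor = torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255

with torch.no_grad():
    det = model([tensor])[0]

with_mask = without_mask = 0
for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
    if score < 0.5:
        continue
    x1, y1, x2, y2 = map(int, box)
    if label == 1:                       # assumed id for "with mask"
        with_mask += 1
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)   # green box
    else:
        without_mask += 1
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)   # red box

total = with_mask + without_mask
print("mask-wearing ratio:", with_mask / total if total else float("nan"))
```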


15.
Most existing vision-language pre-training methods focus on understanding tasks and use BERT-like loss functions during training (masked language modeling and image-text matching). Although they perform well on many understanding-type downstream tasks such as visual question answering, image-text retrieval, and visual entailment, they lack the ability to generate. To address this, unified multimodal pre-training for vision-language understanding and generation (UniVL) is proposed. UniVL handles both understanding and generation tasks, extending the existing pre-training paradigm by using both random masks and a causal mask, i.e., a triangular mask that hides future tokens, so that the pre-trained model gains autoregressive generation capability. Several vision-language understanding tasks are cast as text generation tasks, and prompt-based methods are used to fine-tune for different downstream tasks. Experiments show that, with a single model, there is a trade-off between understanding and generation tasks, and a feasible way to improve both is to use more data. The UniVL framework performs comparably to recent vision-language pre-training methods on both understanding and generation tasks. Moreover, the experiments show that the prompt-based generative method is more effective and even outperforms the discriminative method in few-shot scenarios.
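A short PyTorch sketch of the two masking modes mentioned above: a random BERT-style token mask and the triangular causal mask that hides future tokens for autoregressive generation. The sequence length and mask ratio are assumptions.

```python
import torch

seq_len, mask_ratio = 8, 0.15

# Random mask: True marks tokens to be replaced by [MASK] during pre-training.
random_mask = torch.rand(seq_len) < mask_ratio

# Causal mask: position i may attend only to positions j <= i.
causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
attn_bias = torch.zeros(seq_len, seq_len).masked_fill(~causal, float("-inf"))
# attn_bias is added to the attention logits before softmax, which gives the
# model the autoregressive generation ability described above.
```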

16.
Usually visualization is applied to gain insight into data. Yet consuming the data in the form of a visual representation is not always enough. Instead, users need to edit the data, preferably through the same means used to visualize them. In this work, we present a semi‐automatic approach to visual editing of graphs. The key idea is to use an interactive EditLens that defines where an edit operation affects an already customized and established graph layout. Locally optimal node positions within the lens and edge routes to connected nodes are calculated according to different criteria. This spares the user much manual work, but still provides sufficient freedom to accommodate application‐dependent layout constraints. Our approach utilizes the advantages of multi‐touch gestures, and is also compatible with classic mouse and keyboard interaction. Preliminary user tests have been conducted with researchers from bio‐informatics who need to manually maintain a slowly but constantly growing molecular network. As the user feedback indicates, our solution significantly improves the editing procedure applied so far.

17.
The COVID-19 epidemic seriously threatens people's lives, and monitoring crowd density and mask wearing is an important way to control the spread of the virus. Public places are crowded and highly mobile, so manual monitoring increases the risk of infection, while existing deep-learning-based mask detection algorithms are limited to a single function and scene, cannot perform multi-class detection across multiple scenes, and leave room for improvement in accuracy. A Cascade-Attention R-CNN object detection algorithm is proposed to automatically detect crowded regions, pedestrians, and mask wearing. To handle the large variation of object scales in this task, the high-accuracy two-stage Cascade R-CNN detector is chosen as the base framework. Multiple cascaded proposal classification-regression networks are designed and a spatial attention mechanism is added to emphasize the important features of proposal regions and suppress noisy features, improving detection accuracy. On this basis, an intelligent monitoring model for cluster infection risk is built, which determines the infection risk level from the outputs of the Cascade-Attention R-CNN algorithm. Experimental results show that the model is accurate and robust for multi-class target images from different scenes and viewpoints; the mean average precision of Cascade-Attention R-CNN reaches 89.4%, 2.6 percentage points higher than the original Cascade R-CNN, and 10.1 and 8.4 percentage points higher than the classic two-stage detector Faster R-CNN and the single-stage detector RetinaNet, respectively.
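A PyTorch sketch of a spatial attention block of the kind the description suggests (CBAM-style): channel-wise average and max maps are concatenated, convolved, and used to reweight the proposal features. The exact attention design of Cascade-Attention R-CNN is not given here, so this is an assumption.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, C, H, W) proposal features
        avg_map = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        max_map = x.max(dim=1, keepdim=True).values
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                         # emphasise informative locations

feat = torch.randn(4, 256, 14, 14)              # e.g. RoI-aligned features
out = SpatialAttention()(feat)                  # same shape, spatially reweighted
```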

18.
This paper presents a novel method to enhance the performance of structure‐preserving image and texture filtering. With conventional edge‐aware filters, it is often challenging to handle images of high complexity where features of multiple scales coexist. In particular, it is not always easy to find the right balance between removing unimportant details and protecting important features when they come in multiple sizes, shapes, and contrasts. Unlike previous approaches, we address this issue from the perspective of adaptive kernel scales. Relying on patch‐based statistics, our method separates texture from structure and also finds an optimal per‐pixel smoothing scale. We show that the proposed mechanism helps achieve enhanced image/texture filtering performance in terms of protecting the prominent geometric structures in the image, such as edges and corners, and keeping them sharp even after significant smoothing of the original signal.
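A hedged NumPy sketch of the adaptive-scale idea: local patch variance is mapped to a per-pixel smoothing scale (low variance, i.e., texture or noise, gets a large scale; high variance, i.e., structure, gets a small one), and Gaussian-filtered levels are blended accordingly. The variance-to-scale mapping is an assumption, not the paper's patch statistic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def adaptive_smooth(img, patch=7, sigmas=(0.5, 1.0, 2.0, 4.0)):
    mean = uniform_filter(img, patch)
    var = uniform_filter(img ** 2, patch) - mean ** 2
    # Normalise variance to [0, 1]: 0 -> strongest smoothing, 1 -> weakest.
    t = (var - var.min()) / (var.max() - var.min() + 1e-12)
    scale_idx = (1.0 - t) * (len(sigmas) - 1)            # fractional level index
    levels = np.stack([gaussian_filter(img, s) for s in sigmas])
    lo = np.floor(scale_idx).astype(int)
    hi = np.minimum(lo + 1, len(sigmas) - 1)
    w = scale_idx - lo
    rows, cols = np.indices(img.shape)
    return (1 - w) * levels[lo, rows, cols] + w * levels[hi, rows, cols]

img = np.random.rand(96, 96)
result = adaptive_smooth(img)
```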

19.
Compactly representing time‐varying geometries is an important issue in dynamic geometry processing. This paper proposes a framework of sparse localized decomposition for given animated meshes by analyzing the variation of edge lengths and dihedral angles (LAs) of the meshes. It first computes the length and dihedral angle of each edge for each pose and then evaluates the difference (residuals) between the LAs of an arbitrary pose and their counterparts in a reference one. Performing sparse localized decomposition on the residuals yields a set of components which can perfectly capture the local motion of articulations. It supports intuitive articulation motion editing by manipulating the blending coefficients of these components. To robustly reconstruct poses from altered LAs, we devise a connection‐map‐based algorithm which consists of two steps of linear optimization. A variety of experiments show that our decomposition is truly localized with respect to rotational motions and outperforms state‐of‐the‐art approaches in precisely capturing local articulated motion.
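A small NumPy sketch of the two per-edge quantities (length and dihedral angle) for one edge shared by two consistently oriented triangles; a full pipeline would evaluate these for every interior edge of every pose. The toy vertex data are assumptions.

```python
import numpy as np

def edge_length_and_dihedral(v0, v1, a, b):
    """Edge (v0, v1) shared by triangles (v0, v1, a) and (v1, v0, b)."""
    length = np.linalg.norm(v1 - v0)
    n1 = np.cross(v1 - v0, a - v0)          # normal of the first triangle
    n2 = np.cross(v0 - v1, b - v1)          # normal of the second triangle
    n1, n2 = n1 / np.linalg.norm(n1), n2 / np.linalg.norm(n2)
    angle = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
    return length, np.pi - angle            # interior dihedral angle (flat = pi)

v0, v1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
a, b = np.array([0.5, 1.0, 0.0]), np.array([0.5, -1.0, 0.5])
print(edge_length_and_dihedral(v0, v1, a, b))
```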

20.
Recent proliferation of camera phones, photo sharing and social network services has significantly changed how we process our photos. Instead of going through the traditional download‐edit‐share cycle using desktop editors, an increasing number of photos are taken with camera phones and published through cellular networks. The immediacy of the sharing process means that on‐device image editing, if needed, should be quick and intuitive. However, due to the limited computational resources and vastly different user interaction model on small screens, most traditional local selection methods cannot be directly adapted to mobile devices. To address this issue, we present TouchTone, a new method for edge‐aware image adjustment using simple finger gestures. Our method enables users to select regions within the image and adjust their corresponding photographic attributes simultaneously through a simple point‐and‐swipe interaction. To enable fast interaction, we develop a memory‐ and computation‐efficient algorithm which samples a collection of 1D paths from the image, computes the adjustment solution along these paths, and interpolates the solutions to the entire image through bilateral filtering. Our system is intuitive to use, and can support several local editing tasks, such as brightness, contrast, and color balance adjustments, within a minute on a mobile device.
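A rough sketch of the 1D-path idea: a sparse user adjustment is propagated along one sampled pixel path with weights that drop across strong color edges, so the edit stays within the touched region. The exponential weighting and two-pass propagation are assumptions standing in for the paper's path solver and bilateral interpolation.

```python
import numpy as np

def propagate_1d(path_colors, sparse_edit, sigma=0.1):
    """path_colors: (N, 3) colors along a sampled path; sparse_edit: (N,) with
    nonzero entries where the user's swipe touched the path."""
    n = len(sparse_edit)
    diff = np.linalg.norm(np.diff(path_colors, axis=0), axis=1)
    w = np.exp(-(diff ** 2) / (2 * sigma ** 2))      # small across strong edges
    out = sparse_edit.astype(float).copy()
    for i in range(1, n):                            # forward pass
        out[i] = max(out[i], w[i - 1] * out[i - 1])
    for i in range(n - 2, -1, -1):                   # backward pass
        out[i] = max(out[i], w[i] * out[i + 1])
    return out                                       # per-pixel adjustment strength

colors = np.random.rand(200, 3)
edit = np.zeros(200)
edit[90:110] = 1.0                                   # user swiped the middle
strength = propagate_1d(colors, edit)
```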
