Similar Documents
 20 similar documents found (search time: 46 ms)
1.
We describe a robust method for the recovery of the depth map (or height map) from a gradient map (or normal map) of a scene, such as would be obtained by photometric stereo or interferometry. Our method allows for uncertain or missing samples, which are often present in experimentally measured gradient maps, and also for sharp discontinuities in the scene's depth, e.g. along object silhouette edges. By using a multi-scale approach, our integration algorithm achieves linear time and memory costs. A key feature of our method is the allowance for a given weight map that flags unreliable or missing gradient samples. We also describe several integration methods from the literature that are commonly used for this task. Based on theoretical analysis and tests with various synthetic and measured gradient maps, we argue that our algorithm is as accurate as the best existing methods, handling incomplete data and discontinuities, and is more efficient in time and memory usage, especially for large gradient maps.
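The abstract does not include the authors' multi-scale algorithm. As a rough, illustrative sketch of the underlying task only, the weighted least-squares formulation such integrators build on can be written down directly; this dense version is O(n²) in the number of pixels, unlike the paper's linear-time method, and `integrate_gradients` with its argument names is invented for this sketch:

```python
import numpy as np

def integrate_gradients(p, q, w):
    """Recover a depth map z from gradient maps p = dz/dx, q = dz/dy by
    weighted least squares.  w in [0, 1] flags unreliable samples
    (w = 0 means that gradient sample is ignored entirely)."""
    h, wd = p.shape
    idx = lambda i, j: i * wd + j          # flatten 2D pixel index
    rows, rhs = [], []
    for i in range(h):                     # horizontal difference equations
        for j in range(wd - 1):
            r = np.zeros(h * wd)
            r[idx(i, j + 1)], r[idx(i, j)] = w[i, j], -w[i, j]
            rows.append(r); rhs.append(w[i, j] * p[i, j])
    for i in range(h - 1):                 # vertical difference equations
        for j in range(wd):
            r = np.zeros(h * wd)
            r[idx(i + 1, j)], r[idx(i, j)] = w[i, j], -w[i, j]
            rows.append(r); rhs.append(w[i, j] * q[i, j])
    r = np.zeros(h * wd)
    r[idx(0, 0)] = 1.0                     # pin z[0,0] = 0: fixes the
    rows.append(r); rhs.append(0.0)        # unknown constant of integration
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return z.reshape(h, wd)
```

For a constant-gradient field this recovers the plane exactly, and a weight of zero simply drops an unreliable sample from the system, which is the role the weight map plays in the abstract.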

2.
Developing a satisfactory and effective method for auto-annotating images that works under general conditions is a challenging task. The advantages of such a system would be manifold: it can be used to annotate existing, large databases of images, rendering them accessible to text search engines; or it can be used as the core of image retrieval based on a query image's visual content. Manual annotation of images is a difficult, tedious and time-consuming task. Furthermore, manual annotations tend to show great inter-person variance: considering an image, opinions about which elements are significant and deserve an annotation vary strongly. The latter poses a problem for the evaluation of an automatic method, as an annotation's correctness is largely subjective. In this paper we present an automatic method for annotating images which addresses a major limitation of existing methods, namely a fixed annotation length. The proposed method, PATSI, automatically chooses the resulting annotation's length for each query image. It is kept as simple as possible, and a built-in parameter optimization procedure renders PATSI de facto parameter-free. Finally, PATSI is evaluated on standard datasets, outperforming various state-of-the-art methods.

3.
Research on Digital Watermarking Technology   (total citations: 17; self-citations: 6; citations by others: 17)
Digital watermarking is an important branch of information hiding and a new technique for protecting digital products. It embeds specific digital information into images, audio, video, software, and other digital products in order to achieve goals such as information security and copyright protection. This paper surveys the principles and characteristics of current digital watermarking techniques, typical schemes, and attack methods; discusses, by category, the applications and current state of research on digital watermarking as well as open problems in its development; and proposes directions for future research.
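The survey does not commit to a specific embedding scheme. As a minimal, hedged illustration of the embedding idea only, here is a classic least-significant-bit (LSB) image watermark; the function names are invented for this sketch, and the robust and transform-domain schemes such surveys cover are far more involved:

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed a bit sequence into the least-significant bits of the
    first len(bits) pixels of an 8-bit image."""
    flat = image.flatten().astype(np.uint8)
    out = flat.copy()
    # clear the LSB, then OR in the watermark bit
    out[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, np.uint8)
    return out.reshape(image.shape)

def extract_lsb(image, n):
    """Read back the first n embedded bits."""
    return (image.flatten()[:n] & 1).astype(np.uint8)
```

LSB embedding changes each carrier pixel by at most 1, which is why it is imperceptible but also fragile against the attacks (compression, filtering, cropping) that watermarking surveys discuss.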

4.
We present a sensitivity analysis-based method for explaining prediction models that can be applied to any type of classification or regression model. Its advantage over existing general methods is that all subsets of input features are perturbed, so interactions and redundancies between features are taken into account. Furthermore, when explaining an additive model, the method is equivalent to commonly used additive model-specific methods. We illustrate the method's usefulness with examples from artificial and real-world data sets and an empirical analysis of running times. Results from a controlled experiment with 122 participants suggest that the method's explanations improved the participants' understanding of the model.
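Perturbing all feature subsets, as the abstract describes, corresponds to computing Shapley values of the prediction. A brute-force sketch (exponential in the number of features, so a conceptual illustration rather than the authors' algorithm; "hidden" features are replaced by baseline values, and the function name is invented here):

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_explain(f, x, baseline):
    """Attribute f(x) to features as each feature's Shapley-weighted
    average marginal contribution over all subsets of the other features."""
    d = len(x)

    def value(S):
        # evaluate f with only the features in S "switched on"
        z = np.array(baseline, dtype=float).copy()
        for i in S:
            z[i] = x[i]
        return f(z)

    phi = np.zeros(d)
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(rest, k):
                wgt = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += wgt * (value(S + (i,)) - value(S))
    return phi
```

For an additive model f(z) = Σᵢ wᵢ zᵢ the attributions reduce to wᵢ(xᵢ − baselineᵢ), consistent with the abstract's claim of equivalence to additive model-specific methods.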

5.
Digital Models for Scale Transformation in Fourth-Generation GIS Research   (total citations: 18; self-citations: 1; citations by others: 18)
This paper analyzes the research progress on spatial interpolation models and digital terrain models and on their integration with geographic information systems, together with their existing shortcomings and the theoretical problems that remain to be solved. It then discusses the necessity and feasibility of building digital models based on surface theory and remote-sensing inversion methods and of effectively integrating them with GIS.

6.
This paper details the issues that must be considered when designing high-speed digital printed circuit boards, explains which factors affect high-speed digital circuits in train onboard systems and why these effects arise, and proposes several design methods for high-speed circuits targeted at these specific problems. Practice has shown that circuits designed with these methods achieve greatly improved immunity to interference.

7.
The cognitive information flow analysis (CIFA) is introduced as a method to integrate results from cognitive task and work analyses in order to provide a focus on the necessary system information flow, which includes how information is produced, consumed, and transformed by the various system functions and users. CIFA can be used as a tool to augment cognitive task and work analyses. This paper presents the CIFA technique, provides a case study that applies the CIFA method to existing goal-directed task analysis and modified cognitive work analysis results, and provides insight into CIFA's use for informing the design of a human-robot system. CIFA augments the results provided by cognitive task and work analyses and can guide system design and development. CIFA differs from existing information flow techniques in that it allows representation of systems containing large numbers of users for highly complex and uncertain domains. Existing cognitive task and work analysis integration methods rely heavily on relational tables. CIFA specifically expresses the interconnectivity of the various system subcomponents, including partial ordering and parallelism, by fundamentally focusing on the information flow. CIFA also identifies both existing and potential information bottlenecks and highlights teamwork.

8.

Today, the importance of digital images as a medium for social communication is growing rapidly. Sometimes, an image needs to be authenticated by verifying its source camera model or device. Recently, deep networks have become very successful at visual pattern recognition. With this motivation, several investigators have explored the possibility of using convolutional neural networks (CNNs) for camera source identification. In this paper, we use selective preprocessing, instead of an indiscriminate one, in order not to hinder the CNN's strong ability to learn useful features for this kind of forensic task. To generate a consistent and balanced dataset, we limit the maximum number of original images to 200 per camera model, and we discard vertically taken images. Using a relatively simple deep network structure, the proposed method achieved a better prediction accuracy—95.0%—than GoogLeNet and other existing methods. Also, challenging camera models such as the Sony DSC H50 and W170 can be classified with the quite high prediction accuracies of 87.9% and 83.1%, respectively.


9.
In visual question answering (VQA), "explainability" refers to the various ways of explaining why a model works on a given task. Because some existing VQA models lack explainability, their safe use in everyday settings cannot be guaranteed, especially in fields related to autonomous driving and healthcare; this raises ethical concerns and prevents industrial deployment. This paper surveys the ways explainability is realized in VQA tasks, grouping them into five categories: image explanations, text explanations, multimodal explanations, modular explanations and graph explanations. It discusses the characteristics of each approach and further subdivides some of them. In addition, it introduces VQA datasets that can enhance explainability, mainly by incorporating external knowledge bases and annotating image information. Finally, it summarizes the explainability methods commonly used in VQA and, based on the shortcomings of existing methods, proposes directions for future research.

10.
Business and Information Systems Engineering (BISE) is at a turning point. Planning, designing, developing and operating IT used to be a management task of a few elites in public administrations and corporations. But the continuous digitization of nearly all areas of life changes the IT landscape fundamentally. Success in this new era requires putting the human perspective – the digital user – at the very heart of the new digitized service-led economy. BISE faces not just a temporary trend but a complex socio-technical phenomenon with far-reaching implications. The challenges are manifold and have major consequences for all stakeholders, both in information systems and management research as well as in practice. Corporate processes have to be re-designed from the ground up, starting with the user's perspective, thus putting usage experience and utility of the individual center stage. The digital service economy leads to highly personalized application systems while organizational functions are being fragmented. Entirely new ways of interacting with information systems, in particular beyond desktop IT, are being invented and established. These fundamental challenges require novel approaches with regard to innovation and development methods as well as adequate concepts for enterprise or service system architectures. Gigantic amounts of data are being generated at an accelerating rate by an increasing number of devices – data that need to be managed. In order to tackle these extraordinary challenges we introduce 'user, use & utility' as a new field of BISE that focuses primarily on the digital user, his or her usage behavior and the utility associated with system usage in the digitized service-led economy.
The research objectives encompass the development of theories, methods and tools for systematic requirement elicitation, systems design, and business development for successful Business and Information Systems Engineering in a digitized economy – information systems that digital users enjoy using. This challenge calls for leveraging insights from various scientific disciplines such as Design, Engineering, Computer Science, Psychology and Sociology. BISE can provide an integrated perspective, thereby assuming a pivotal role within the digitized service-led economy.

11.
3D scanned point cloud data of teeth is widely used in digital orthodontics. Classifying and semantically labelling the point cloud of each tooth is a key and challenging task in planning dental treatment. Utilizing the a priori ordered position information of the tooth arrangement, we propose an effective network for tooth model classification in this paper. Relative-position and adjacency-similarity feature vectors are calculated for each 3D tooth model, and these geometric features are combined into the fully connected layers of the classification network. For the classification of dental anomalies, we present a dental-anomaly processing method that improves classification accuracy. We also use FocalLoss as the loss function to address the sample imbalance of wisdom teeth. Extensive evaluations, ablation studies and comparisons demonstrate that the proposed network classifies tooth models accurately and automatically and outperforms state-of-the-art point cloud classification methods.
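FocalLoss, used above for the wisdom-tooth imbalance, down-weights well-classified examples so that rare, hard classes dominate the gradient. A minimal binary sketch only; the paper's multi-class setting and hyperparameters are not given in the abstract, and the γ and α values below are the commonly used defaults rather than the authors':

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    """Binary focal loss: cross-entropy scaled by the modulating factor
    (1 - p_t)**gamma, which shrinks the loss of easy examples."""
    p_t = np.where(labels == 1, probs, 1.0 - probs)   # prob of true class
    a_t = np.where(labels == 1, alpha, 1.0 - alpha)   # class balancing term
    return float(np.mean(-a_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

With gamma = 0 this reduces to (alpha-weighted) cross-entropy; increasing gamma suppresses the contribution of confidently correct predictions, which is the imbalance-handling effect the abstract relies on.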

12.
There are three main approaches for reconstructing 3D models of buildings. Laser scanning is accurate but expensive and limited by the laser's range. Structure-from-motion (SfM) and multi-view stereo (MVS) recover 3D point clouds from multiple views of a building. MVS methods, especially patch-based MVS, can achieve higher density than SfM methods. Sophisticated algorithms need to be applied to the point clouds to construct mesh surfaces. The recovered point clouds can be sparse in areas that lack features for accurate reconstruction, making recovery of complete surfaces difficult. Moreover, segmentation of the building's surfaces from surrounding surfaces almost always requires some form of manual input, diminishing the ease of practical application of automatic 3D reconstruction algorithms. This paper presents an alternative approach for reconstructing textured mesh surfaces from point clouds recovered by a patch-based MVS method. To a good first approximation, a building's surfaces can be modeled by planes or curved surfaces which are fitted to the point cloud. 3D points are resampled on the fitted surfaces in an orderly pattern, and their colors are obtained from the input images. This approach is simple, inexpensive, and effective for reconstructing textured mesh surfaces of large buildings. Test results show that the reconstructed 3D models are sufficiently accurate and realistic for 3D visualization in various applications.

13.
Bill Boni, Network Security, 2002(6):16-17
There is a war going on in cyberspace and the 'good guys' appear to be losing it. The combat is not just cyber terrorists probing and defacing military, political or economic targets, but much more commonly at this point, between cyber criminals and managers of IT staffs supporting E-commerce operations. The facts are that a class of 'elite' hackers is now commonly able to attack sites, extract credit card account information then cover their tracks by destroying digital evidence along their path. These intruders have become more brazen as they have become more successful.

14.
A Survey of Deep-Learning-Based Methods for 3D Data Analysis and Understanding   (total citations: 1; self-citations: 0; citations by others: 1)
Deep-learning-based analysis and understanding of 3D data is a research hotspot in digital geometry. Unlike deep learning on images, it must first address the diversity of data representations: whereas 2D images are regular, 3D data can be represented either discretely or continuously, and most existing deep-learning work builds on discrete representations of 3D data. Different 3D representations and different geometry-processing tasks also place different demands on the deep network. This paper first compiles commonly used 3D datasets and task-specific evaluation metrics and analyzes 3D shape descriptors. It then surveys existing deep networks for 3D data analysis and understanding, organized by task and by data representation, compares the various methods, and further summarizes existing work from the perspective of 3D data representations. Finally, based on the state of research at home and abroad, it discusses pressing open challenges and outlines future trends.

15.
Effective identification of the change point of a multivariate process is an important research issue, since it is associated with determining the assignable causes that may seriously affect the underlying process. Most existing studies use either the maximum likelihood estimator (MLE) method or a machine learning (ML) method to estimate or identify the change point of a process. Typically, the MLE method may be criticized for its assumption that the process distribution is known, and the ML method may suffer from using a large number of input variables in the modeling procedure. Diverging from existing approaches, this study proposes an integrated hybrid scheme to mitigate the difficulties of the MLE and ML methods. The proposed scheme includes four components: the logistic regression (LR) model, the multivariate adaptive regression splines (MARS) model, the support vector machine (SVM) classifier and a change point identification strategy. It performs three tasks in order to effectively identify the change point in a multivariate process. The initial task is to use the LR and MARS models to reduce and refine the whole set of input or explanatory variables. The remaining variables then serve as input variables to the SVM in the second task. The last task is to integrate the SVM outputs with the proposed identification strategy to determine the change point in a multivariate process. Experimental simulation results reveal that the proposed hybrid scheme is able to effectively identify the change point and outperforms both the typical statistical process control (SPC) chart alone and single-stage SVM methods.
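The hybrid LR/MARS/SVM scheme itself cannot be reproduced from the abstract. For contrast, the classical MLE approach it criticizes can be sketched for the simplest case: a univariate step change in the mean with a known in-control mean μ0 (exactly the known-distribution assumption the abstract flags as a weakness; the function name is invented here):

```python
import numpy as np

def mle_change_point(x, mu0):
    """MLE of a step-change point in the mean of a process with known
    in-control mean mu0: pick t maximizing (n - t) * (mean(x[t:]) - mu0)^2,
    i.e. the likelihood of 'first t samples in control, rest shifted'."""
    n = len(x)
    best_t, best_stat = 0, -np.inf
    for t in range(n - 1):                 # change assumed after sample t
        seg = x[t:]
        stat = len(seg) * (seg.mean() - mu0) ** 2
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t                          # number of in-control samples
```

The returned value is the estimated number of in-control observations; everything from that index on is attributed to the shifted regime.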

16.
Normal estimation is an essential task for scanned point clouds in various CAD/CAM applications. Many existing methods are unable to reliably estimate normals for points around sharp features, since the neighborhood employed for the normal estimation would enclose points belonging to different surface patches across the sharp feature. To address this challenging issue, a robust normal estimation method is developed to effectively establish a proper neighborhood for each point in the scanned point cloud. In particular, for a point near sharp features, an anisotropic neighborhood is formed to enclose only neighboring points located on the same surface patch as the point; neighboring points on other surface patches are discarded. The developed method has been demonstrated to be robust to noise and outliers in the scanned point cloud and capable of dealing with sparse point clouds. Some parameters are involved in the developed method, and an automatic procedure is devised to adaptively evaluate their values according to the varying local geometry. Numerous case studies using both synthetic and measured point cloud data have been carried out to compare the reliability and robustness of the proposed method against various existing methods.
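The anisotropic-neighborhood construction is the paper's contribution and is not reproduced here. For context, the standard isotropic baseline it improves on estimates a normal by PCA over a fixed neighborhood, which is exactly what goes wrong when that neighborhood straddles a sharp feature (sketch with an invented function name):

```python
import numpy as np

def estimate_normal(points):
    """PCA normal estimation for one neighborhood (n x 3 array):
    the normal is the eigenvector of the covariance matrix associated
    with the smallest eigenvalue (the direction of least spread)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # smallest-eigenvalue direction
```

On a planar neighborhood this recovers the plane normal (up to sign); near a sharp edge the covariance mixes two patches and the smallest-spread direction no longer matches either surface normal, which motivates the anisotropic neighborhoods above.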

17.
Online social networks such as Twitter, Facebook and Instagram are increasingly becoming the go-to medium for users to acquire information and discuss what is happening globally. Understanding real-time conversations with the masses on social media platforms can provide rich insights into events, provided that there is a way to detect and characterise events. To this end, over the past twenty years many researchers have been developing event detection methods based on data collected from various social media platforms. The developed methods for discovering events are generally modular in design and novel in scale and speed. To review the research in this field, we line up existing works on event detection in online social networks and organise them into a comprehensive and in-depth survey. This survey comprises three major parts: research methodologies, a review of the state-of-the-art literature and the evolution of significant challenges. Each part is intended to attract readers with different motivations and expectations about the 'things' delivered in this survey. For example, the methodologies provide the life-cycle for designing new event detection models, from data collection to model evaluation. A timeline and a taxonomy of existing methods are also introduced to elaborate the development of the various technologies under the umbrella of event detection. These two parts benefit those who have a background in event detection and want to explore existing models in depth, for instance by discussing their pros and cons. The third part shows the development of the major open issues in this field. It also indicates the milestones of each challenge in terms of typical models. Our survey can contribute to the community by highlighting possible new problem statements and opening new research directions.

18.
Interrupting users engaged in tasks typically has negative effects on their task completion time, error rate, and affective state. Empirical research has shown that these negative effects can be mitigated by deferring interruptions until more opportune moments in a user's task sequence. However, existing systems that reason about when to interrupt do not have access to models of user tasks that would allow for such finer-grained temporal reasoning. To enable this reasoning, we have developed an integrated framework for specifying and monitoring user tasks. For task specification, our framework provides a language that supports expressive specification of tasks using a concise notation. For task monitoring, our framework provides an event database and handler that manages events from any instrumented application and a task monitor that observes a user's progress through specified tasks. We describe the design and implementation of our framework, showing how it can be used to specify and monitor practical, representative user tasks. We also report results from two user studies measuring the effectiveness of our existing implementation. The use of our framework will enable attention aware systems to consider a user's position in a task when reasoning about when to interrupt.

19.
Chaffin DB, Ergonomics, 2005, 48(5):478-491
This paper presents the need to improve existing digital human models (DHMs) so they are better able to serve as effective ergonomics analysis and design tools. Existing DHMs are meant to be used by a designer early in a product development process when attempting to improve the physical design of vehicle interiors and manufacturing workplaces. The emphasis in this paper is placed on developing future DHMs that include valid posture and motion prediction models for various populations. It is argued that the posture and motion prediction models now used in DHMs must be changed to be based on real motion data, to assure validity for complex dynamic task simulations. It is further speculated that, if valid human posture and motion prediction models are developed and used, they can be combined with psychophysical and biomechanical models to provide a much greater understanding of dynamic human performance and population-specific limitations, and that these new DHM models will ultimately provide a powerful ergonomics design tool.

20.
Increasing reliance on automation and robotization presents great opportunities to improve the management of construction sites as well as existing buildings. Crucial to the use of robots in a built environment is their capacity to locate themselves and navigate as autonomously as possible. Robots often rely on planar and 3D laser scanners for that purpose, and building information models (BIM) are seldom used, for a number of reasons, namely their unreliability, unavailability, and mismatch with the localization algorithms used in robots. However, as BIM models become more reliable and more commonly available in standard data formats (JSON, XML, RDF), they become more promising resources for localization and indoor navigation, in particular in the more static types of existing infrastructure (existing buildings). In this article, we investigate to what extent and how such building data can be used for robot navigation. Data flows are built from the BIM model to a local repository and further to the robot, making use of graph data models (RDF) and JSON data formats. The local repository can hereby be considered a digital twin of the real-world building. Navigation on the basis of a BIM model is tested in a real-world environment (a university building) using a standard robot navigation technology stack. We conclude that it is possible to rely on BIM data, and we outline different data flows from BIM model to digital twin and to robot. Future work can focus on (1) making building data models more reliable and standard (modelling guidelines and a robot world model), (2) improving the ways in which building features in the digital building model can be recognized in 3D point clouds observed by the robots, and (3) investigating possibilities to update the BIM model based on robot feedback.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号