Similar Documents
20 similar documents found (search time: 15 ms)
1.
Open-source datasets have accelerated the development of deep learning, but many instances of improper dataset use exist. To protect the intellectual property of datasets, recent work has proposed dataset watermarking algorithms, which embed a watermark before a dataset is released; when a model is trained on the dataset, the watermark is attached to the model, and illegal use of the dataset can later be traced by verifying whether a suspect model contains the watermark. However, existing dataset watermarking algorithms cannot provide effective and stealthy black-box watermark verification under small perturbations. To solve this problem, this paper is the first to propose embedding watermarks via style attributes that are independent of image content and labels, restricting perturbations of the original dataset so that no labels are modified. Watermark stealthiness and effectiveness are guaranteed by introducing neither inconsistency between image content and labels nor an additional surrogate model. In the verification stage, only the predictions of the suspect model are used to reach a decision via hypothesis testing. Compared with five existing methods on the CIFAR-10 dataset, experimental results verify the effectiveness and functionality-preservation of the proposed style-based dataset watermarking algorithm. In addition, ablation experiments verify the necessity of the proposed style-optimization module and the effectiveness of the algorithm under different hyperparameter settings and on different datasets.

2.
Service-oriented architectures (SOA) provide a flexible and dynamic platform for implementing business solutions. In this paper, we address the modeling of such architectures by refining business-oriented architectures, which abstract from technology aspects, into service-oriented ones, focusing on the ability of dynamic reconfiguration (binding to new services at run-time) typical for SOA. The refinement is based on conceptual models of the platforms involved as architectural styles, formalized by graph transformation systems. Based on a refinement relation between abstract and platform-specific styles, we investigate how to realize business-specific scenarios on the SOA platform by automatically deriving refined, SOA-specific reconfiguration scenarios. Research partially supported by the European Research Training Network SegraVis (on Syntactic and Semantic Integration of Visual Modelling Techniques).

3.
The amount of available Semantic Web (SW) data (including Linked Open Data) is constantly increasing. Users would like to browse and explore such information spaces effectively without having to be acquainted with the various vocabularies and query language syntaxes. This paper discusses the work that has been done in the area for the case of RDF/S datasets, with emphasis on session-based interaction schemes for exploratory search. In particular, it surveys the related works according to various aspects, such as assumed user goals, structuring of the underlying information space, generality and configuration requirements, and various (state space-based) features of the navigation structure. Subsequently, it introduces a small but concise formal model of the interaction (capturing the core functionalities), which is used as a reference model for describing what the existing systems support. Finally, the paper describes the evaluation methods that have been used. Overall, the presented analysis aids the understanding and comparison of the various approaches that have been proposed so far.

4.
Two exploratory data analysis techniques, the comap and the quad plot, are shown to have both strengths and shortcomings when analysing spatial multivariate datasets. A hybrid of the two techniques is proposed: the quad map, which is shown to overcome the outlined shortcomings when applied to a dataset containing weather information for disaggregate incidents of urban fires. In common with the quad plot, the quad map uses Polya models to articulate the underlying assumptions behind histograms. The Polya model formalises the situation in which past fire incident counts are computed and displayed in (multidimensional) histograms as appropriate assessments of conditional probability, providing valuable diagnostics such as posterior variance, i.e., sensitivity to new information. Finally, we discuss how new technology, in particular Online Analytical Processing (OLAP) and Geographical Information Systems (GISs), offers potential for automating exploratory spatial data analysis techniques such as the quad map.

5.
The Rebirth of Illustration
Veteran agent Scott Hull argues that if illustrators are to survive in this increasingly difficult market and secure a bright future, they must take certain measures, even if those measures are very simple.

6.
Instant messaging is an important aspect of social media and has sprung up over the last decades. Traditional instant messaging services transfer information mainly through textual messages, while visual messages are largely ignored. Such services therefore fall short of all-around information communication. In this paper, we propose a novel visually assisted instant messaging scheme named Chat with Illustration (CWI), which automatically presents users with visual messages associated with the textual message. When users start their chat, the system first identifies meaningful keywords in the dialogue content and analyzes grammatical and logical relations. CWI then performs keyword-based image search on a hierarchically clustered image database built offline. Finally, according to the grammatical and logical relations, CWI assembles these images properly and presents an optimal visual message. With the combination of textual and visual messages, users get a more interesting and vivid communication experience. Especially for speakers of different native languages, CWI can help cross the language barrier to some degree. In addition, a visual dialogue summarization is proposed that helps users recall past dialogues. In-depth user studies demonstrate the effectiveness of our visually assisted instant messaging scheme.

7.
In disciplines that produce a wide variety of data – such as materials engineering – it can be difficult to provide an infrastructure for storing, managing, sharing and exploring datasets, particularly whilst that data is still in use. The Heterogeneous Data Centre (HDC) is an extension to a file server that provides scientists with tools for exploring their datasets, managing relationships between them and adding metadata. Many of the features evolved from close consultation with our users. In this paper, we evaluate the HDC's interface features for managing datasets using data provided by users from the materials engineering and human genetics domains. In particular, we show the simplicity of capturing data through a file share and the flexibility and extensibility of a system supporting hierarchical metadata, dataset relationships and plug-ins.

8.
9.
Numerical simulation of physical phenomena is an accepted way of scientific inquiry. However, the field is still evolving, with a profusion of new solution and grid generation techniques being continuously proposed. Concurrent and retrospective visualization are being used to validate the results. There is a need for representation schemes which allow access to structures in an increasing order of smoothness. We describe our methods on datasets obtained from curvilinear grids. Our target application required visualization of a computational simulation performed on a very remote supercomputer. Since no grid adaptation was performed, it was not deemed necessary to simplify or compress the grid. Inherent to the identification of significant structures is determining the location of the scale coherent structures and assigning saliency values to them. Scale coherent structures are obtained by combining the coefficients of a wavelet transform across scales. The result of this operation is a correlation mask that delineates regions containing significant structures. A spatial subdivision is used to delineate regions of interest. The mask values in these subdivided regions are used as a measure of information content. Later, another wavelet transform is conducted within each subdivided region and the coefficients are sorted based on a perceptual function with bandpass characteristics. This allows for ranking of structures in order of significance, giving rise to an adaptive and embedded representation scheme. We demonstrate our methods on two datasets from computational field simulations. We show how our methods allow ranked access to significant structures. We also compare our adaptive representation scheme with a fixed block-size scheme.

10.
11.
We present real-time vascular visualization methods, which extend illustrative rendering techniques to particularly accentuate spatial depth and to improve the perceptive separation of important vascular properties such as branching level and supply area. The resulting visualization can be, and already has been, used for direct projection onto a patient's organ in the operating theater, where the varying absorption and reflection characteristics of the surface limit the use of color. The important contributions of our work are a GPU-based hatching algorithm for complex tubular structures that emphasizes shape and depth, as well as GPU-accelerated shadow-like depth indicators, which enable reliable comparisons of depth distances in a static monoscopic 3D visualization. In addition, we verify the expressiveness of our illustration methods in a large, quantitative study with 160 subjects.

12.
We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.

13.
Given the posterior probability estimates of 14 classifiers on 38 datasets, we plot two-dimensional maps of classifiers and datasets using principal component analysis (PCA) and Isomap. The similarity between classifiers indicates correlation (or diversity) between them and can be used in deciding whether to include both in an ensemble. Similarly, datasets which are too similar need not both be used in a general comparison experiment. The results show that (i) most of the datasets (approximately two-thirds) we used are similar to each other, (ii) multilayer perceptrons and k-nearest neighbor variants are more similar to each other than support vector machine and decision tree variants, and (iii) the number of classes and the sample size have an effect on similarity.
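The classifier-map idea above can be sketched with plain PCA. The paper's exact preprocessing of the posterior estimates is not specified here; treating each classifier as a vector of its flattened posterior outputs across shared test instances is an assumption of this sketch.

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto their first two principal components."""
    Xc = X - X.mean(axis=0)                    # center each feature
    # SVD of the centered matrix gives the principal directions in Vt
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                       # one 2-D point per classifier

# Toy stand-in: 5 "classifiers", each described by its posterior
# estimates on 100 (instance, class) cells, flattened to a vector.
rng = np.random.default_rng(0)
posteriors = rng.random((5, 100))
coords = pca_2d(posteriors)
print(coords.shape)  # (5, 2)
```

Classifiers whose posterior outputs correlate strongly land close together on such a map, which is the diversity signal the abstract describes.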

14.
We investigate algebraic processing strategies for large numeric datasets equipped with a (possibly irregular) grid structure. Such datasets arise, for example, in computational simulations, observation networks, medical imaging, and 2-D and 3-D rendering. Existing approaches for manipulating these datasets are incomplete: the performance of SQL queries for manipulating large numeric datasets is not competitive with specialized tools; database extensions for processing multidimensional discrete data can only model regular, rectilinear grids; and visualization software libraries are designed to process arbitrary gridded datasets efficiently, but no algebra has been developed to simplify their use and afford optimization. Further, these libraries are data-dependent – physical changes to data representation or organization break user programs. In this paper, we present an algebra of gridfields for manipulating arbitrary gridded datasets, algebraic optimization techniques, and an implementation backed by experimental results. We compare our techniques to those of Geographic Information Systems (GIS) and visualization software libraries, using real examples from an Environmental Observation and Forecasting System. We find that our approach can express optimized plans inaccessible to other techniques, resulting in improved performance with reduced programming effort.

15.
16.
In the past decades, we have witnessed a revolution in information technology. Routine collection of systematically generated data is now commonplace. Databases with hundreds of fields (variables) and billions of records (observations) are not unusual. This presents a difficulty for classical data analysis methods, mainly due to the limitations of computer memory and computational cost (in time, for example). In this paper, we propose an intelligent regression analysis methodology suitable for modeling massive datasets. The basic idea is to split the entire dataset into several blocks, apply classical regression techniques to the data in each block, and finally combine the blockwise regression results via weighted averages. Theoretical justification of the goodness of the proposed method is given, and empirical performance based on an extensive simulation study is discussed.
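The split-fit-combine idea can be sketched in a few lines of NumPy. The block count and the weighting-by-block-size rule below are illustrative choices for the sketch, not necessarily the weights the paper derives.

```python
import numpy as np

def blockwise_ols(X, y, n_blocks):
    """Fit ordinary least squares in each block, then combine the
    per-block coefficient estimates by block-size-weighted averaging."""
    idx = np.array_split(np.arange(len(y)), n_blocks)
    betas, weights = [], []
    for block in idx:
        beta, *_ = np.linalg.lstsq(X[block], y[block], rcond=None)
        betas.append(beta)
        weights.append(len(block))       # weight each block by its size
    return np.average(np.stack(betas), axis=0, weights=weights)

# Simulated data: intercept 2, slope -3, small Gaussian noise.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(10_000), rng.normal(size=10_000)])
y = X @ np.array([2.0, -3.0]) + rng.normal(scale=0.1, size=10_000)
beta_hat = blockwise_ols(X, y, n_blocks=10)
print(beta_hat)  # close to [2, -3]
```

Each block fit touches only its own slice of the data, so memory use is bounded by the block size rather than the full dataset.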

17.
This paper presents an efficient and accurate isosurface rendering algorithm for the natural C1 splines on the face-centered cubic (FCC) lattice. Leveraging fast and accurate evaluation of a spline field and its gradient, accompanied by efficient empty-space skipping, the approach generates high-quality isosurfaces of FCC datasets at interactive speed (20–70 fps). The pre-processing computation (quasi-interpolation and min/max cell construction) is improved 20–30-fold by OpenCL kernels. In addition, a novel indexing scheme is proposed that allows an FCC dataset to be stored as a four-channel 3D texture. When compared with other reconstruction schemes on the Cartesian and BCC (body-centered cubic) lattices, this method can be considered a practical reconstruction scheme that offers both quality and performance. The OpenCL and GLSL (OpenGL Shading Language) source code is provided for reference.
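The four-channel texture idea rests on the fact that an FCC lattice (integer sites with even coordinate sum) decomposes into four interleaved Cartesian sublattices. The sketch below illustrates one possible such mapping; the offset order and channel assignment are assumptions of this sketch, not the paper's actual layout.

```python
# The four coset offsets of the FCC lattice under scaling by 2:
# every site (x, y, z) with x + y + z even has parities equal to one of these.
OFFSETS = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

def fcc_to_texel(x, y, z):
    """Map an FCC lattice site to (texture channel, 3-D texel coordinates)."""
    assert (x + y + z) % 2 == 0, "not an FCC lattice site"
    off = (x % 2, y % 2, z % 2)
    c = OFFSETS.index(off)                     # which Cartesian sublattice
    ox, oy, oz = off
    return c, ((x - ox) // 2, (y - oy) // 2, (z - oz) // 2)

def texel_to_fcc(c, texel):
    """Inverse mapping: recover the FCC site from channel + texel coords."""
    ox, oy, oz = OFFSETS[c]
    i, j, k = texel
    return (2 * i + ox, 2 * j + oy, 2 * k + oz)
```

Because each channel indexes a plain Cartesian grid, a single RGBA 3D texture fetch can retrieve all four sublattice samples around a point.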

18.
Exploring spatial datasets with histograms
As online spatial datasets grow both in number and sophistication, it becomes increasingly difficult for users to decide whether a dataset is suitable for their tasks, especially when they do not have prior knowledge of the dataset. In this paper, we propose browsing as an effective and efficient way to explore the content of a spatial dataset. Browsing allows users to view the size of a result set before evaluating the query at the database, thereby avoiding zero-hit/mega-hit queries and saving time and resources. Although the underlying technique supporting browsing is similar to range query aggregation and selectivity estimation, spatial dataset browsing poses some unique challenges. In this paper, we identify a set of spatial relations that need to be supported in browsing applications, namely, the contains, contained, and overlap relations. We prove a lower bound on the storage required to answer queries about the contains relation accurately at a given resolution. We then present three storage-efficient approximation algorithms which we believe to be the first to estimate query results about these spatial relations. We evaluate these algorithms with both synthetic and real-world datasets and show that they provide highly accurate estimates for datasets with various characteristics. Recommended by: Sunil Prabhakar. Work supported by NSF grants IIS 02-23022 and CNF 04-23336. An earlier version of this paper appeared in the 17th International Conference on Data Engineering (ICDE 2001).

19.
Cui Xin, Xu Hua, Su Chen. Journal of Computer Applications, 2020, 40(6): 1662-1667
In the synthetic minority oversampling technique (SMOTE), noise samples may participate in synthesizing new samples, so it is difficult to guarantee the plausibility of the new samples. To address this problem, an improved algorithm, CSMOTE, is proposed in combination with a clustering algorithm. The algorithm abandons SMOTE's idea of linear interpolation between nearest neighbors; instead, new samples are synthesized by linear interpolation between the cluster centers of the minority class and the samples in the corresponding clusters, and the samples participating in synthesis are filtered to reduce the possibility of noise samples taking part. On six real datasets, CSMOTE was compared in repeated experiments with four improved SMOTE algorithms and two undersampling algorithms, and CSMOTE achieved the highest AUC values on all datasets. The experimental results show that CSMOTE has higher classification performance and can effectively solve the problem of imbalanced sample distribution in datasets.
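The synthesis step described above can be sketched as follows. This is a minimal illustration, assuming a basic Lloyd's k-means for the clustering; the paper's noise-filtering criterion for the participating samples is omitted here.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means (sufficient for this sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers, labels

def csmote(X_min, n_new, k=3, seed=0):
    """CSMOTE-style synthesis: interpolate between a minority-class cluster
    center and a sample of that cluster (plain SMOTE interpolates between
    nearest neighbours instead)."""
    centers, labels = kmeans(X_min, k, seed=seed)
    valid = [c for c in range(k) if np.any(labels == c)]  # skip empty clusters
    rng = np.random.default_rng(seed)
    new = []
    for _ in range(n_new):
        c = valid[rng.integers(len(valid))]
        members = X_min[labels == c]
        x = members[rng.integers(len(members))]
        t = rng.random()                     # interpolation factor in [0, 1]
        new.append(centers[c] + t * (x - centers[c]))
    return np.array(new)

X_min = np.random.default_rng(2).normal(size=(60, 2))  # toy minority class
synthetic = csmote(X_min, n_new=10, k=3)
print(synthetic.shape)  # (10, 2)
```

Interpolating toward the cluster center keeps each synthetic sample inside its cluster's region, which is why noisy minority samples are less likely to distort the new data than in neighbor-to-neighbor SMOTE.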

20.
A commercial application of computers in technical illustration is described. Details are also given of the CACTI system which produces cut-away and exploded construction drawings for certain classes of mechanical engineering illustration.
