Similar documents
20 similar documents retrieved (search time: 21 ms)
1.
Similarity assessment of 3D mechanical components for design reuse
Duplicate designs consume a significant amount of resources in most new product development. Searching for parts similar to a given query part is key to avoiding this problem by facilitating design reuse. Most search algorithms convert the CAD model into a shape signature and compute the similarity between two models according to a measure function of their signatures. However, each algorithm defines the shape signature in a different way, and thus has its own limitations in discriminating 3D parts. This paper proposes a search scheme that successfully complements various shape signatures in similarity assessment of 3D mechanical components. It considers form-feature, topological, and geometric information in component comparison. Such an integrated approach can effectively solve the feature intersection problem, inherent in any feature-based approach, and capture the user's intent more precisely in the search, which geometry-based methods fail to accomplish. We also develop a set of algorithms that performs the component comparison in polynomial time. The proposed scheme is implemented in a product design environment consisting of commercial CAD and PDM systems. The results demonstrate the practicality of this work in automatic search of similar mechanical components for design reuse.

2.
In recent years, the use of workflows has significantly expanded from its original domain of business processes towards new areas. The increasing demand for individual and more flexible workflows asks for new methods that support domain experts in creating, monitoring, and adapting workflows. The emergent field of process-oriented case-based reasoning addresses this problem by proposing methods for reasoning with workflows based on experience. New workflows can be constructed by reusing similar workflows already available in a repository. Hence, methods for the similarity assessment of workflows and for the efficient retrieval of similar workflows from a repository are of core importance. To this end, we describe a new generic model for representing workflows as semantically labeled graphs, together with a related model for knowledge-intensive similarity measures. Further, new algorithms for workflow similarity computation, based on A* search, are described. A new retrieval algorithm is introduced that goes beyond traditional sequential retrieval for graphs, interweaving similarity computation with case selection. We describe the application of this model and several experimental evaluations of the algorithms in the domain of scientific workflows and in the domain of business workflows, thereby showing its broad applicability.

3.
This paper introduces the basic principles and methods of association rule mining, analyzes distributed association rule mining algorithms in detail, and presents a model for them. It then proposes a similarity-based distributed data mining method that takes full account of the heterogeneity of the data sources. Experiments show that the model improves mining accuracy.

4.
Most existing similarity measures for time series are based on Euclidean distance; they are not suited to similarity matching between time series of different granularities and cannot measure such similarity effectively. To address this, a similarity measure based on corresponding difference-ratio samples is proposed for matching time series of different granularities. Time-series data at different time granularities are first described, and the corresponding difference-ratio samples and the similarity computation method are defined; a similarity matching algorithm based on them is then proposed; finally, experiments…

5.
To address the problems of discrete-point correspondence, curve-fitting error, and manual intervention in assessing the similarity between craniofacial and skull contours, a similarity measure for discrete contour pixels based on resampling and the Fourier transform is proposed. First, the Canny operator and a sliding-window method are used for edge detection and boundary tracking, yielding the sets of edge-contour pixels of the skull and the craniofacial image to be compared. The contours are then uniformly resampled, which solves the correspondence problem between their discrete points. Finally, the data are normalized and transformed with the discrete Fourier transform, and Fourier descriptors are used to measure similarity, avoiding the errors caused by curve fitting and the need for manual intervention. Experiments show that the algorithm improves the accuracy of skull-craniofacial similarity measurement with low complexity and makes the evaluation process automatic, laying a foundation for identity verification by skull-photo superimposition.
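As a rough illustration of the resample-then-transform step described above, the sketch below (not the paper's implementation) converts an ordered list of boundary pixels into normalized Fourier descriptors and scores two contours by the distance between their descriptor vectors; the function names, the 128-point resampling, and the 16 retained coefficients are illustrative choices.

```python
import numpy as np

def fourier_descriptors(contour, n_samples=128, n_coeffs=16):
    """contour: (N, 2) array of (x, y) edge pixels ordered along the boundary."""
    # Uniform resampling by arc length so both contours get the same number of
    # points, which removes the discrete-point correspondence problem.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(contour, axis=0), axis=1))]
    t = np.linspace(0.0, d[-1], n_samples, endpoint=False)
    z = np.interp(t, d, contour[:, 0]) + 1j * np.interp(t, d, contour[:, 1])
    # DFT of the complex boundary; drop the DC term (translation invariance),
    # divide by |c1| (scale invariance), keep magnitudes (rotation invariance).
    c = np.fft.fft(z)
    return np.abs(c[1:n_coeffs + 1]) / (np.abs(c[1]) + 1e-12)

def contour_similarity(contour_a, contour_b):
    fa, fb = fourier_descriptors(contour_a), fourier_descriptors(contour_b)
    return 1.0 / (1.0 + np.linalg.norm(fa - fb))   # higher means more similar
```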

6.
This paper presents the SYMBAD (similarity based agents for design) system, exploring multi-agent aspects in an architecture company, capturing, cataloging, and communicating information produced by the team members. The main task managed by the designers is to build stands to present the image of a company, project its presence in the market and emphasize the corporate identity to all prospects. From conceptual design to the construction of a final product, a stand project passes through many hands, each one adding bits and pieces until it is completed. Reuse of materials and ideas is less feasible as design complexity increases. The processes and problems in stand projects are quite common and can be easily found in other design situations. We present an agent framework to improve process awareness in an architecture company. The agents instrument the process to produce global awareness, to facilitate reuse and optimize the process as a whole. In this paper we present the agent architecture, as well as each agent’s general functioning and reasoning rules.

7.
Similarity query processing is becoming increasingly important in many applications such as data cleaning, record linkage, Web search, and document analytics. In this paper we study how to provide end-to-end similarity query support natively in a parallel database system. We discuss how to express a similarity predicate in its query language, how to build indexes, how to answer similarity queries (selections and joins) efficiently in the runtime engine, possibly using indexes, and how to optimize similarity queries. One particular challenge is how to incorporate existing similarity join algorithms, which often require a series of steps to achieve a high efficiency, including collecting token frequencies, finding matching record id pairs, and reassembling result records based on id pairs. We present a novel approach that uses existing runtime operators to implement such complex join algorithms without reinventing the wheel; doing so positions the system to automatically benefit from future improvements to those operators. The approach includes a technique to transform a similarity join plan into an efficient operator-based physical plan during query optimization by using a template expressed largely in the system’s user-level query language; this technique greatly simplifies the specification of such a transformation rule. We use Apache AsterixDB, a parallel Big Data management system, to illustrate and validate our techniques. We conduct an experimental study using several large, real datasets on a parallel computing cluster to assess the similarity query support. We also include experiments involving three other parallel systems and report the efficacy and performance results.
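The join pipeline sketched in the abstract (collect token frequencies, find matching record id pairs, verify and reassemble) follows the familiar prefix-filtering pattern for set-similarity joins. Below is a minimal, self-contained sketch of that pattern under a Jaccard threshold; it is illustrative only and is not AsterixDB's operator-based physical plan, and the helper names and the 0.8 default threshold are assumptions.

```python
from collections import defaultdict

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def similarity_join(records, threshold=0.8):
    """records: dict mapping record id -> iterable of tokens."""
    # Step 1: global token frequencies; ordering tokens from rare to frequent
    # makes the prefixes as selective as possible.
    freq = defaultdict(int)
    for toks in records.values():
        for t in set(toks):
            freq[t] += 1
    ordered = {rid: sorted(set(toks), key=lambda t: (freq[t], t))
               for rid, toks in records.items()}
    # Step 2: prefix filtering -- two records can reach the threshold only if
    # they share at least one token within their length-dependent prefixes.
    index, candidates = defaultdict(set), set()
    for rid, toks in ordered.items():
        prefix_len = len(toks) - int(threshold * len(toks)) + 1
        for t in toks[:prefix_len]:
            candidates.update((other, rid) for other in index[t])
            index[t].add(rid)
    # Step 3: verify candidate id pairs with the exact similarity.
    results = []
    for a, b in candidates:
        sim = jaccard(records[a], records[b])
        if sim >= threshold:
            results.append((a, b, sim))
    return results
```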

8.
DCT-based similarity search over time-series data
High dimensionality is the main reason why similarity search over time-series data is difficult. The most effective remedy is to reduce the dimensionality of the time series and then build a spatial index on the compressed data. The dominant dimensionality-reduction methods are currently the discrete Fourier transform (DFT) and the discrete wavelet transform (DWT). This paper proposes a new approach that uses the discrete cosine transform (DCT) for dimensionality reduction and, on that basis, presents similarity-search methods for range queries and nearest-neighbor queries over time-series data. Both theoretical analysis and experimental results show that the method is more efficient than DFT- and DWT-based search.
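A minimal sketch of this filter-and-refine idea with DCT-based reduction follows. In the paper the reduced vectors would be organized in a spatial index rather than scanned linearly, and the function names and coefficient count here are illustrative; the key property is that the orthonormal DCT is an orthogonal transform, so distances between truncated coefficient vectors lower-bound the true Euclidean distances and the filter step causes no false dismissals.

```python
import numpy as np
from scipy.fft import dct

def dct_reduce(series, k=8):
    """Keep the first k coefficients of the orthonormal DCT of a time series."""
    return dct(np.asarray(series, dtype=float), norm='ortho')[:k]

def range_query(query, database, radius, k=8):
    """Filter in the reduced space, then verify with the exact distance."""
    q_red = dct_reduce(query, k)
    q = np.asarray(query, dtype=float)
    hits = []
    for ts in database:
        if np.linalg.norm(q_red - dct_reduce(ts, k)) <= radius:            # cheap filter
            if np.linalg.norm(q - np.asarray(ts, dtype=float)) <= radius:  # exact check
                hits.append(ts)
    return hits
```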

9.
The high dimensionality of QAR (Quick Access Recorder) data and the uncertain correlations between its dimensions make similarity measures designed for low-dimensional time series unsuitable. Moreover, because of the particularities of civil aviation, using QAR data for similarity search to identify flight faults places special requirements on how similarity is defined. This paper combines expert experience with an analytic-hierarchy algorithm to determine the importance of the attribute dimensions associated with flight faults, represents multi-dimensional QAR subsequences symbolically, and exploits the properties of the k-d tree to build an index, making fast similarity search over multi-dimensional QAR subsequences possible. Similarity is defined and measured in terms of both shape and distance. Experiments show fast retrieval and satisfactory accuracy.
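The indexing idea can be sketched as follows, with a simple weighted per-window summary standing in for the paper's symbolic representation; the dimension weights (which in the paper come from expert experience combined with an AHP-style analysis), the window length, and the feature choice below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical per-dimension importance weights (expert/AHP-derived in the paper).
WEIGHTS = np.array([0.5, 0.3, 0.2])

def subsequence_features(series, window=32, step=8):
    """Slice a (T, d) multivariate series into windows and summarize each window
    by its per-dimension means scaled by the importance weights."""
    X = np.asarray(series, dtype=float)
    feats, starts = [], []
    for s in range(0, len(X) - window + 1, step):
        feats.append(X[s:s + window].mean(axis=0) * WEIGHTS)
        starts.append(s)
    return np.array(feats), starts

def build_index(series, **kw):
    feats, starts = subsequence_features(series, **kw)
    return cKDTree(feats), starts

def query_similar(tree, starts, query_window, k=5):
    """Return (start offset, distance) of the k most similar indexed subsequences."""
    q = np.asarray(query_window, dtype=float).mean(axis=0) * WEIGHTS
    dists, idx = tree.query(q, k=k)
    return [(starts[i], d) for i, d in zip(np.atleast_1d(idx), np.atleast_1d(dists))]
```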

10.
This paper discusses the shortcomings of conventional PCA in handling similarity between QAR data and proposes an EROS-based KPCA method for the problem. By introducing the EROS measure, the data need not be vectorized; a kernel matrix is introduced so that principal component analysis can be performed on the QAR data, which effectively reduces its dimensionality. Two QAR data sets are selected and classification experiments are run with a support vector machine using different numbers of principal components. Compared with the SPCA and GPCA methods, the experimental results show that applying this method to the QAR data sets yields better classification results.
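For reference, a minimal sketch of the Eros similarity measure that the kernel matrix presumably builds on is given below (the KPCA and SVM stages are not shown). In the standard formulation of Eros the weight vector is aggregated from the eigenvalues of every series in the data set; deriving it from the two series alone, as done here, is a simplification made to keep the example self-contained.

```python
import numpy as np

def eros(A, B):
    """Eros similarity between two multivariate time series of shape (T, m)."""
    Ca = np.cov(np.asarray(A, dtype=float), rowvar=False)
    Cb = np.cov(np.asarray(B, dtype=float), rowvar=False)
    la, Va = np.linalg.eigh(Ca)              # ascending eigenvalues, eigenvectors as columns
    lb, Vb = np.linalg.eigh(Cb)
    la, Va = la[::-1], Va[:, ::-1]           # sort descending
    lb, Vb = lb[::-1], Vb[:, ::-1]
    w = np.clip((la + lb) / 2.0, 0.0, None)  # simplified weights from the two series only
    w = w / (w.sum() + 1e-12)
    # Eros = sum_i w_i * |<a_i, b_i>|: weighted cosines between matching eigenvectors.
    return float(np.sum(w * np.abs(np.sum(Va * Vb, axis=0))))
```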

11.
In previous work, we showed that it is possible to automatically discriminate between legitimate software and spyware-associated software by performing supervised learning on end user license agreements (EULAs). However, the number of false positives (spyware classified as legitimate software) was too large for practical use. In this study, the false-positive problem is addressed by removing noisy EULAs, which are identified by performing similarity analysis of the previously studied EULAs. Two candidate similarity analysis methods for this purpose are experimentally compared: cosine similarity assessment in conjunction with latent semantic analysis (LSA), and normalized compression distance (NCD). The results show that the number of false positives can be reduced significantly by removing the noise identified by either method. However, the experimental results also indicate subtle performance differences between LSA and NCD. To improve the performance even further and to decrease the large number of attributes, the categorical proportional difference (CPD) feature selection algorithm was applied. CPD managed to greatly reduce the number of attributes while at the same time increasing classification performance on the original data set, as well as on the LSA- and NCD-based data sets.
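Of the two similarity analyses compared, NCD is the simpler to sketch. The snippet below computes the standard normalized compression distance with zlib and flags documents whose nearest neighbor is still distant; the 0.95 cutoff and the flagging rule are illustrative assumptions, not values from the study.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with C the compressed size."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def noisy_documents(docs, cutoff=0.95):
    """Flag documents (e.g. EULAs) whose closest neighbor under NCD is still far away."""
    encoded = [d.encode('utf-8') for d in docs]
    flagged = []
    for i, a in enumerate(encoded):
        dists = [ncd(a, b) for j, b in enumerate(encoded) if j != i]
        if dists and min(dists) > cutoff:
            flagged.append(i)
    return flagged
```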

12.

For panel data, a spatial projection method is first given that projects the panel data into a sequence of vectors in space. Relational degree models of similarity and of closeness are then constructed from the angle between and the distance between these spatial vectors: the similarity relational degree model is built from the angle between vectors, while the closeness relational degree model is built from the norm of the vector difference. Properties of the two models, such as normativity and symmetry, are discussed. Finally, an example verifies the reasonableness of the similarity and closeness relational degree models. The analysis shows that the proposed models reflect the degree of similarity and of closeness of panel data well.
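A minimal sketch of the two degrees described above, assuming each object's panel data has already been projected to a (time × indicator) array of spatial vectors; the 1/(1 + d) normalization of the closeness degree and the averaging over the time axis are our assumptions rather than the paper's exact construction.

```python
import numpy as np

def similarity_degree(u, v):
    """Similarity from the angle between two spatial vectors (their cosine)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def closeness_degree(u, v, scale=1.0):
    """Closeness from the norm of the vector difference, mapped into (0, 1]."""
    d = np.linalg.norm(np.asarray(u, dtype=float) - np.asarray(v, dtype=float))
    return 1.0 / (1.0 + d / scale)

def panel_degrees(panel_a, panel_b):
    """panel_a, panel_b: arrays of shape (time, indicators), one vector per period."""
    A, B = np.asarray(panel_a, dtype=float), np.asarray(panel_b, dtype=float)
    sims = [similarity_degree(a, b) for a, b in zip(A, B)]
    clos = [closeness_degree(a, b) for a, b in zip(A, B)]
    return float(np.mean(sims)), float(np.mean(clos))
```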


13.
Discovering interesting patterns or substructures in data streams is an important challenge in data mining. Clustering algorithms are very often applied to identify single substructures although they are designed to partition a data set. Another problem of clustering algorithms is that most of them are not designed for data streams. This paper discusses a recently introduced procedure that deals with both problems. The procedure explores ideas from cluster analysis, but was designed to identify single clusters without the necessity to partition the whole data set into clusters. The new extended version of the algorithm is an incremental clustering approach applicable to stream data. It identifies new clusters formed by the incoming data and updates the data space partition. Clustering of artificial and real data sets illustrates the abilities of the proposed method.

14.
Children's bicycles are the product most often involved in leisure accidents to children. One of the possible reasons for this might be a lack of fit between the dimensions of the bicycle and the dimensions of the child. In a project entitled KIMA-1, some 33 dimensions of 279 children (aged 2.5-5.5 years) were measured at seven infant welfare centres in the province of Zuid-Holland. These data were used to compare dimensions of children with dimensions of bicycles. Furthermore, the requirements regarding bicycle dimensions laid down in product safety acts of different countries were compared with both the results of KIMA-1 and some bicycles available in shops. It is concluded that maximum product safety and comfort of the bicycle are achieved when the bicycle is fitted to the dimensions of the child. Enhancement of this fitting process can be achieved by relating the dimensions of the bicycle to the stature rather than to the age of the child. The comparison of the KIMA-1 data to the dimensions laid down in product safety acts led to the conclusion that Dutch children are larger than the population on which the safety dimensions are based. Furthermore, secular changes in body dimensions call for a revision of the relevant safety dimensions in 10-15 years.

15.
Information Systems, 1986, 11(2): 123–135
A complete algorithm is presented to automate the design of functional data models. The algorithm receives a set of functions and a set of constraints defined on those functions and returns a non-redundant data model with maximal semantic content. The major design criterion of non-redundancy implies that no function in the model should be derivable from the other functions in that model. The minor criterion of maximal semantic content implies that among all equivalent non-redundant data models, the model that exerts maximum control over the behavior of data is the optimum model.

16.
Focusing on the use of microcontrollers for wired data transmission, this paper presents a modem-based data transmission system implemented with the MSP430 microcontroller. The design employs an embedded modem as the system's transmission modem; the modem and the microcontroller are connected through a serial port, enabling real-time data transmission.

17.
Summary: In order to verify that a nondeterministic sequential program is partially correct it is sufficient to establish the conjunction of two constituent properties: weak partial correctness and functional, that is reproducible, behavior. It is possible to continue this divide-and-conquer strategy for the concept of functional behavior. If the nondeterministic sequential program is derived from a set of interacting parallel processes then the functional behavior of the former can be expressed in terms of two weaker complementary properties of the latter: weak functional behavior and input/output liveness. The only remaining issue is input/output dependability: the absence of input/output livelock. The theoretical framework of data spaces is used to derive closure theorems for these constituent properties. For instance, it is shown that a system of weakly functional processes is again weakly functional. This research is supported in part by the Office of Naval Research under Contract No. N00014-77-C-0536 through the University of Southern California. A preliminary version of the paper was presented at the Symposium on Formal Methods and Mathematical Tools for Software Design, Mathematical Research Institute, Oberwolfach, West Germany, Nov. 21–27, 1976.

18.
A method of link analysis employed for retrieving information from the Web is extended in order to evaluate one aspect of quality in an object-oriented model. The principal eigenvectors of matrices derived from the adjacency matrix of a modified class diagram are used to identify and quantify heavily loaded portions of an object-oriented design that deviate from the principle of distributed responsibilities.

19.
Data mining is crucial in many areas and there are ongoing efforts to improve its effectiveness in both the scientific and the business world. There is an obvious need to improve the outcomes of mining techniques such as clustering and other classifiers without abandoning the standard mining tools that are popular with researchers and practitioners alike. Currently, however, standard tools do not have the flexibility to control similarity relations between attribute values, a critical feature in improving mining-clustering results. The study presented here introduces the Similarity Adjustment Model (SAM) where adjusted Fuzzy Similarity Functions (FSF) control similarity relations between attribute values and hence ameliorate clustering results obtained with standard data mining tools such as SPSS and SAS. The SAM draws on principles of binary database representation models and employs FSF adjusted via an iterative learning process that yields improved segmentation regardless of the choice of mining-clustering algorithm. The SAM model is illustrated and evaluated on three common datasets with the standard SPSS package. The datasets were run with several clustering algorithms. Comparison of “Naïve” runs (which used original data) and “Fuzzy” runs (which used SAM) shows that the SAM improves segmentation in all cases.

20.
This paper considers the system design of data acquisition units and takes as an example the design of a unit for the MOS 6502 microprocessor, although the control logic of the final system may be readily adapted to other microprocessors. The MOS 6502 is used in both the Apple II and Commodore PET desktop microcomputers.
