Similar Literature
20 similar references found.
1.
Error-based segmentation of cloud data for direct rapid prototyping
This paper proposes an error-based segmentation approach for direct rapid prototyping (RP) of random cloud data. The objective is to fully integrate reverse engineering and RP for rapid product development. By constructing an intermediate point-based curve model (IPCM), a layer-based RP model is generated directly from the cloud data and serves as the input to the RP machine for fabrication. In this process, neither a surface model nor an STL file is generated. This is accomplished in three steps. First, the cloud data is adaptively subdivided into a set of regions according to a given subdivision error, and the data in each region is compressed by keeping the feature points (FPs) within the user-defined shape tolerance using a digital-image-based reduction method. Second, based on the FPs of each region, an IPCM is constructed, and RP layer contours are then directly extracted from the models. Finally, the RP layer contours are faired with a discrete-curvature-based fairing method and subsequently closed to generate the final layer-based RP model. This RP model can be submitted directly to the RP machine for prototype manufacturing. Two case studies are presented to illustrate the efficacy of the approach.
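A minimal sketch of the error-driven subdivision idea (an illustration using a plane-fit deviation test, not the paper's actual criterion or data structures): a region is accepted when its points deviate from a best-fit plane by less than the subdivision error, otherwise it is split along its longest extent and the test recurses.

# Illustrative sketch of error-driven subdivision of a point cloud; the paper's
# actual subdivision criterion and compression step may differ.
import numpy as np

def plane_fit_error(points):
    """Max orthogonal distance of points to their least-squares plane."""
    centered = points - points.mean(axis=0)
    # Smallest right singular vector = normal of the best-fit plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return np.abs(centered @ normal).max()

def subdivide(points, max_error, min_points=20):
    """Recursively split a region until each piece meets the error bound."""
    if len(points) <= min_points or plane_fit_error(points) <= max_error:
        return [points]
    # Split at the median of the longest bounding-box axis.
    extents = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(extents))
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    if len(left) == 0 or len(right) == 0:   # degenerate split, stop
        return [points]
    return subdivide(left, max_error, min_points) + subdivide(right, max_error, min_points)

# Example: a synthetic cloud sampled on a unit sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
regions = subdivide(pts, max_error=0.02)
print(len(regions), "planar-enough regions")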

2.
This study presents a method for generating sectional contour curves directly from point cloud data. The method computes contour curves for rapid prototyping model generation via adaptive slicing, data point reduction and B-spline curve fitting. In this approach, a point cloud data set is first segmented into a number of layers along the component building direction. The points are projected onto the mid-plane of each layer to form a two-dimensional (2D) band of scattered points, which is then used to construct a boundary curve. A number of points are picked along the band and a B-spline curve is fitted. Points are then selected on this B-spline curve based on its discrete curvature and used as centers of circles with a user-defined radius, each of which captures a piece of the scattered band. The geometric center of the points lying within each circle is treated as a control point for a B-spline curve fit that represents a boundary contour curve. The advantages of this method are its simplicity and its insensitivity to common small inaccuracies. Two experimental results are included to demonstrate the effectiveness and applicability of the proposed method.
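As a rough illustration of the fitting step, the sketch below uses SciPy's smoothing B-spline routines as a stand-in for the paper's own curvature-driven control-point selection; the noisy ring is a hypothetical layer band.

# Sketch: fit a closed B-spline to a noisy 2D band of projected layer points,
# then resample it along the parameter. The capture-circle control-point
# construction from the paper is not reproduced here.
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(1)
theta = np.sort(rng.uniform(0, 2 * np.pi, 400))
r = 10.0 + 0.05 * rng.normal(size=theta.size)       # noisy ring of radius 10
x, y = r * np.cos(theta), r * np.sin(theta)

# Periodic (closed) smoothing B-spline; `s` controls the smoothing tolerance.
tck, _ = splprep([x, y], s=len(x) * 0.05**2, per=True)

# Resample the fitted contour at 200 evenly spaced parameter values.
uu = np.linspace(0, 1, 200)
cx, cy = splev(uu, tck)
print("fitted contour with", len(tck[1][0]), "coefficients per coordinate")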

3.
Tool-path generation from measured data
This paper presents a procedure through which 3-axis NC tool-paths (for roughing and finishing) can be generated directly from measured data (a set of point sequence curves). Rough machining is performed by removing volumes of material in a slice-by-slice manner. To generate the roughing tool-path, it is essential to extract the machining regions (contour curves and their inclusion relationships) from each slice. For the machining region extraction, we employ the boundary extraction algorithm suggested by Park and Choi (Comput.-Aided Des. 33 (2001) 571), which extracts the machining regions with O(n) time complexity, where n is the number of runs. The finishing tool-path can be obtained by defining a series of curves on the CL (cutter location) surface. However, calculating the CL surface of the measured data involves time-consuming computations, such as swept-volume modeling of an inverse tool and Boolean operations between polygonal volumes. To avoid these computational difficulties, we develop an algorithm that calculates the finishing tool-path using well-known 2D geometric algorithms, such as 2D curve offsetting and polygonal chain intersection.
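The 2D offsetting at the core of such contour-parallel tool-paths can be illustrated with standard polygon operations; the sketch below uses Shapely's buffer as an assumed stand-in for the paper's own offsetting and polygonal chain intersection algorithms.

# Sketch: successive inward offsets of a closed machining-region contour,
# a common building block for contour-parallel tool-paths. Illustrative only.
from shapely.geometry import Polygon

def inward_offsets(contour_xy, step, max_passes=50):
    """Return a list of offset contours, each `step` further inside the region."""
    region = Polygon(contour_xy)
    offsets = []
    for i in range(1, max_passes + 1):
        shrunk = region.buffer(-i * step)
        if shrunk.is_empty:
            break
        # buffer() may split the region into several islands; keep each ring.
        polys = [shrunk] if shrunk.geom_type == "Polygon" else list(shrunk.geoms)
        offsets.append([list(p.exterior.coords) for p in polys])
    return offsets

square = [(0, 0), (100, 0), (100, 60), (0, 60)]
paths = inward_offsets(square, step=5.0)
print(len(paths), "offset passes")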

4.
NURBS-based adaptive slicing for efficient rapid prototyping
This paper presents slicing algorithms for efficient model prototyping. The algorithms operate directly on a non-uniform rational B-spline (NURBS) surface model. An adaptive slicing algorithm is developed to obtain an accurate and smooth part surface. A selective hatching strategy further reduces build time by solidifying the kernel regions of a part with the maximum allowable thick layers, while solidifying the skin areas with adaptive thin layers to obtain the required surface accuracy. In addition, the work generalizes the containment problem to mixed tolerances for slicing a part, and develops a direct method for computing skin contours for all tolerance requirements. Case studies are presented to illustrate the developed algorithms and the selective hatching and adaptive slicing strategy. The algorithms have been implemented and tested on a fused deposition modeling rapid prototyping machine, and both the implementation and test results are discussed in the paper.
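A minimal sketch of the adaptive-thickness idea, assuming the common cusp-height rule t = delta / |cos(theta)| (not necessarily the exact criterion used in the paper): the layer can be thick where the surface is steep relative to the build direction and must be thin where it is shallow.

# Sketch: adaptive layer thickness from a cusp-height tolerance. Assumes the
# common rule t = delta / |cos(theta)|, clamped to the machine range, where
# theta is the angle between the local surface normal and the build direction.
import numpy as np

def layer_thickness(normal, build_dir, cusp_tol, t_min=0.05, t_max=0.5):
    n = np.asarray(normal, float)
    b = np.asarray(build_dir, float)
    cos_theta = abs(n @ b) / (np.linalg.norm(n) * np.linalg.norm(b))
    if cos_theta < 1e-9:                 # vertical wall: no staircase error
        return t_max
    return float(np.clip(cusp_tol / cos_theta, t_min, t_max))

# Near-horizontal surface -> thin layer; near-vertical surface -> thick layer.
print(layer_thickness((0, 0.1, 1), (0, 0, 1), cusp_tol=0.05))
print(layer_thickness((0, 1, 0.1), (0, 0, 1), cusp_tol=0.05))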

5.
This paper presents an adaptive approach to improve the process planning of Rapid Prototyping/Manufacturing (RP/M) for complex product models such as biomedical models. Non-Uniform Rational B-Spline (NURBS)-based curves were introduced to represent the boundary contours of the sliced layers in RP/M to maintain the geometrical accuracy of the original models. A mixed tool-path generation algorithm was then developed to generate contour tool-paths along the boundary and offset curves of each sliced layer to preserve geometrical accuracy, and zigzag tool-paths for the internal area of the layer to simplify computation and speed up fabrication. In addition, based on the developed build-time and geometrical-accuracy analysis models, adaptive algorithms were designed to generate an adaptive speed of the RP/M nozzle/print head for the contour tool-paths to address the geometrical characteristics of each layer, and to identify the slope angle of the zigzag tool-paths that minimizes build time. Five case studies of complex biomedical models were used to verify and demonstrate the improved performance of the approach in terms of processing effectiveness and geometrical accuracy.
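A rough sketch of zigzag tool-path generation at a chosen slope angle, using Shapely as an assumed geometry backend (the paper's adaptive slope selection and speed control are not reproduced): rotate the layer contour so the hatch direction is horizontal, clip evenly spaced scan lines against it, and rotate the segments back.

# Sketch: zigzag (hatch) segments at a given slope angle for one layer contour.
import numpy as np
from shapely.geometry import Polygon, LineString
from shapely import affinity

def zigzag_segments(contour_xy, spacing, slope_deg):
    region = Polygon(contour_xy)
    # Work in a frame where the hatch direction is horizontal.
    rotated = affinity.rotate(region, -slope_deg, origin=(0, 0))
    minx, miny, maxx, maxy = rotated.bounds
    segments = []
    y = miny + spacing / 2.0
    while y < maxy:
        scan = LineString([(minx - 1.0, y), (maxx + 1.0, y)])
        cut = rotated.intersection(scan)
        pieces = [cut] if cut.geom_type == "LineString" else list(getattr(cut, "geoms", []))
        for seg in pieces:
            if seg.geom_type == "LineString" and seg.length > 0:
                # Rotate each clipped segment back to the original frame.
                segments.append(affinity.rotate(seg, slope_deg, origin=(0, 0)))
        y += spacing
    return segments

layer = [(0, 0), (80, 0), (80, 40), (0, 40)]
print(len(zigzag_segments(layer, spacing=2.0, slope_deg=30)), "hatch segments")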

6.
In big data applications, data privacy is one of the most pressing concerns, because processing large-scale privacy-sensitive data sets often requires computation resources provisioned by public cloud services. Sub-tree data anonymization is a widely adopted scheme for anonymizing data sets for privacy preservation. Top-Down Specialization (TDS) and Bottom-Up Generalization (BUG) are two ways to fulfill sub-tree anonymization. However, existing approaches to sub-tree anonymization lack parallelization capability, and therefore scalability, when handling big data in the cloud. Moreover, TDS and BUG each suffer from poor performance for certain values of the k-anonymity parameter. In this paper, we propose a hybrid approach that combines TDS and BUG for efficient sub-tree anonymization over big data. Further, we design MapReduce algorithms for the two components (TDS and BUG) to gain high scalability. Experimental evaluation demonstrates that the hybrid approach significantly improves the scalability and efficiency of the sub-tree anonymization scheme over existing approaches.
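The k-anonymity property that TDS/BUG generalization targets can be illustrated with a toy example (hypothetical attributes and a hand-written generalization step; the paper's taxonomy-tree operations and MapReduce jobs are not shown):

# Sketch: k-anonymity requires every combination of quasi-identifier values to
# occur at least k times; generalization coarsens values until this holds.
import pandas as pd

def is_k_anonymous(df, quasi_ids, k):
    return bool((df.groupby(quasi_ids).size() >= k).all())

def generalize(df):
    """One hypothetical generalization step: age to a decade, zip to a prefix."""
    out = df.copy()
    out["age"] = (out["age"] // 10 * 10).astype(str) + "s"
    out["zip"] = out["zip"].str[:3] + "**"
    return out

records = pd.DataFrame({
    "age": [23, 27, 25, 36, 34, 38],
    "zip": ["47677", "47602", "47678", "47905", "47909", "47906"],
    "diagnosis": ["flu", "cold", "flu", "cancer", "flu", "cold"],
})
print(is_k_anonymous(records, ["age", "zip"], k=3))              # False on raw data
print(is_k_anonymous(generalize(records), ["age", "zip"], k=3))  # True after coarsening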

7.
This paper presents a reverse engineering system for rapid modeling and manufacturing of products with complex surfaces. The system consists of three main components: a 3D optical digitizing system, surface reconstruction software and a rapid prototyping machine. The distinctive features of the 3D optical digitizing system include the use of a white-light source and cost-effective, quick image acquisition. The surface reconstruction process consists of three major steps: (1) range view registration by an iterative closed-form solution, (2) range surface integration by reconstructing an implicit function to update the volumetric grid, and (3) iso-surface extraction by the Marching Cubes algorithm. The modeling software exports models in STL format, which are used as input to an FDM 2000 machine to manufacture products. Examples are included to illustrate the system and the methods.
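The iso-surface extraction step can be sketched with scikit-image's Marching Cubes implementation, used here as an assumed stand-in for the system's own code; the spherical signed-distance volume is a synthetic example.

# Sketch: extract an iso-surface triangle mesh from a volumetric scalar field
# with Marching Cubes. The paper's system integrates range images into such a
# volumetric grid before this extraction step.
import numpy as np
from skimage import measure

# Signed distance of a sphere of radius 20 sampled on a 64^3 grid.
grid = np.mgrid[-32:32, -32:32, -32:32]
volume = np.sqrt((grid ** 2).sum(axis=0)) - 20.0

# verts: (V, 3) vertex coordinates; faces: (F, 3) vertex indices per triangle.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)
print(verts.shape[0], "vertices,", faces.shape[0], "triangles")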

8.
In the design of complex parts involving free-form or sculptured surfaces, the design is usually represented by a B-rep model. In production involving rapid prototyping (RP) or solid machining, however, the B-rep model is often converted to the popular STL model. Due to defects such as topological and geometric errors in the B-rep model, the resulting STL model may contain gaps, overlaps, and inconsistent orientations. This paper extends a surface reconstruction algorithm to the global stitching of STL models for RP and solid machining applications. The model to be stitched may come from the digitization of physical objects by 3D laser scanners, or from the triangulation of trimmed surfaces of a B-rep model. Systematic procedures have been developed for each of these two different but equally important cases. The results show that the proposed method can robustly and effectively solve the global stitching problem for very complex STL models.

9.
The paper describes a computer-based tool for selecting the techniques used to manufacture prototypes and limited production runs of industrial products. The underlying decision model, based on the AHP methodology, ranks the available techniques by a score resulting from the composition of priorities at different levels, each considering homogeneous and independent evaluation criteria. At each level, priorities are calculated from pairwise comparisons among either the manufacturing techniques or the properties of the techniques with respect to different types of application. This approach is enhanced with a procedure that adapts the parameters of the decision model to the prototype specifications. Compared to scoring procedures, the method reduces the ambiguity in the values of the weighting factors and allows an easier interpretation of the rationale behind the selection process.
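The AHP priority calculation that such a decision model builds on can be sketched as follows (generic AHP eigenvector math with Saaty's random indices, not the tool's actual hierarchy or adaptation procedure):

# Sketch: derive priorities from a pairwise comparison matrix with the classic
# AHP principal-eigenvector method and check consistency.
import numpy as np

def ahp_priorities(A):
    A = np.asarray(A, float)
    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()                                   # priority vector
    n = A.shape[0]
    ci = (eigvals[i].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)  # random index (Saaty)
    return w, ci / ri                              # priorities, consistency ratio

# Pairwise comparisons of three prototyping techniques on one criterion.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_priorities(A)
print("priorities:", np.round(w, 3), "CR:", round(cr, 3))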

10.
This paper presents a system that extends haptic modeling to a number of key aspects of product development. Since haptic modeling is based on physical laws, it is anticipated that a natural link between the virtual world and practical applications can be established through haptic interaction. In the proposed system, a haptic device is used as the central mechanism for reverse engineering, shape modeling, real-time mechanical property analysis, machining tool-path planning and coordinate measuring machine (CMM) tolerance inspection path planning. With all these features in a single haptic system, it is possible to construct a three-dimensional part by either haptic shape modeling or reverse engineering, to perform real-time mechanical property analysis in which the stiffness of a part can be felt and intuitively evaluated by the user, and to generate a collision-free cutter tool-path and a CMM tolerance inspection path. Because of the force feedback in all of these activities, the product development process becomes more intuitive, efficient and user-friendly. A prototype system has been developed to demonstrate the proposed capabilities.

11.
Data security is a primary concern for enterprises moving data to the cloud. This study attempts to match data of different values with different security management strategies from the perspective of the enterprise user. Drawing on core ideas about data value evaluation in information lifecycle management, the study extracts usage features and user features from the operating data of an enterprise information system and applies K-means to cluster the data according to its value. A total of 39,348 logon log records and 120 user records from the information system of a ship-fitting manufacturer in China were collected for an empirical study. The functional modules of the manufacturer's information system are divided into five classes according to their value, which is shown to be reasonable by the discriminant function obtained via discriminant analysis. Differentiated data security management strategies for cloud computing are then formulated in a case study with the five types of data, to strengthen the enterprise's active data security defense in the cloud.
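The clustering step can be sketched in a few lines; the feature names and values below are hypothetical placeholders for the usage and user features the study extracts from its logon logs:

# Sketch: cluster functional modules into value classes with K-means.
# Feature columns are hypothetical, not the study's actual features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# One row per functional module: [access frequency, distinct users, avg user rank]
features = rng.uniform([10, 1, 1], [5000, 120, 10], size=(60, 3))

X = StandardScaler().fit_transform(features)     # put features on a common scale
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
for c in range(5):
    print(f"value class {c}: {np.sum(labels == c)} modules")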

12.
This research examines the use of rapid prototyping technologies in the supply chain of spare parts. Spare parts are manufactured in small production lots and distributed over wide areas, often requiring short delivery times. The focus of this research is the use of rapid prototyping in humanitarian logistics. The demand for humanitarian aid is large, but it is very difficult to predict and to supply. Using rapid prototyping to produce spare parts can greatly increase the availability of scarce resources. In this paper, it is demonstrated that rapid prototyping of spare parts for last-mile vehicles can provide a cost-effective way to increase vehicle availability. A detailed implementation plan is also developed to serve as a guideline for any organization seeking to introduce the equipment into its operations.

13.
In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error on both the observations and the model estimates, with more weight given to data that can be more trusted. Any DA method requires an estimate of the initial forecast error covariance matrix; for convective-scale data assimilation, however, the properties of these error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy and, more importantly, the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency of the observations and/or their accuracy, the ensemble is able to capture the states and their variability successfully across all regimes.
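A minimal EnSRF update for a single scalar observation, in the serial square-root form commonly attributed to Whitaker and Hamill, is sketched below; this is generic filter arithmetic, not the paper's convective-column experiment:

# Sketch: EnSRF update for one scalar observation. The mean is updated with the
# full Kalman gain; the perturbations with a reduced gain so the analysis
# variance matches the Kalman value without perturbing the observations.
import numpy as np

def ensrf_update(ensemble, H, y_obs, r_obs):
    """ensemble: (n_state, n_members); H: (n_state,) observation operator row."""
    xbar = ensemble.mean(axis=1, keepdims=True)
    Xp = ensemble - xbar                           # perturbations from the mean
    n_mem = ensemble.shape[1]
    hx = H @ ensemble                              # observation-space ensemble
    hxp = hx - hx.mean()
    phT = (Xp @ hxp) / (n_mem - 1)                 # P H^T (vector)
    hph = (hxp @ hxp) / (n_mem - 1)                # H P H^T (scalar)
    K = phT / (hph + r_obs)                        # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(r_obs / (hph + r_obs)))
    xbar_a = xbar[:, 0] + K * (y_obs - hx.mean())  # mean update, full gain
    Xp_a = Xp - np.outer(alpha * K, hxp)           # perturbation update, reduced gain
    return Xp_a + xbar_a[:, None]

rng = np.random.default_rng(3)
ens = rng.normal(loc=2.0, scale=1.0, size=(10, 50))  # 10 state variables, 50 members
H = np.zeros(10); H[4] = 1.0                         # observe state variable 4
analysis = ensrf_update(ens, H, y_obs=3.0, r_obs=0.25)
print("prior spread:", ens[4].std(), "posterior spread:", analysis[4].std())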

14.
Pankaj Jalote, Software, 1987, 17(11): 847-858
This paper describes a system for automatically generating an implementation of an abstract data type from its axiomatic specifications. Such a system can be useful for rapid prototyping and for detecting inconsistencies in the specifications by testing the generated implementation. In the generated implementation, an instance of the data type is represented by its state. An operation on the data type is implemented by a collection of functions: a function for each of the axioms specified for the operation, plus a function for the operation itself that determines, depending on the state of the instance(s) on which the operation is being performed, which of the operation's axioms is applicable. The system was developed on a Sun-3 workstation running Unix; it is written in C and generates the implementation of the abstract data type in C.
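The representation idea (an instance is its construction state, with the applicable axiom chosen from that state) can be illustrated with a tiny hand-written Python analogue of the kind of C code the system generates; the stack axioms here are a hypothetical example:

# Sketch: an abstract data type whose instances are represented by their
# construction state, with each operation dispatching on that state as the
# axioms prescribe. Hand-written analogue, not the system's generated code.
class Stack:
    def __init__(self, state=()):
        self.state = state                 # history of pushed values = the state

    def push(self, x):                     # constructor operation extends the state
        return Stack(self.state + (x,))

    def pop(self):
        # axiom pop(push(s, x)) = s  applies when the state is non-empty
        # axiom pop(new())      = error
        if not self.state:
            raise ValueError("pop(new()) is undefined by the axioms")
        return Stack(self.state[:-1])

    def top(self):
        # axiom top(push(s, x)) = x
        if not self.state:
            raise ValueError("top(new()) is undefined by the axioms")
        return self.state[-1]

s = Stack().push(1).push(2)
print(s.top(), s.pop().top())              # 2 1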

15.
A new algorithm for reconstructing 3D mesh shapes from point cloud data
Building on an analysis of the limitations of existing reconstruction methods, this paper proposes a new neural-network-based algorithm for reconstructing 3D mesh shapes from point cloud data. The point cloud is first smoothed; feature lines are then extracted and used as the basis for segmenting the surface. The method obtains the control vertices of curves and the control meshes of surfaces directly from the weight matrix of the neural network, and achieves smooth joining between curve segments and surface patches through constraints on the network weights. This significantly improves the quality of the approximating mesh, enabling accurate surface reconstruction from the point cloud data. Practical examples show that the method is reliable and effective.

16.
In recent years, an increasing number of data-intensive applications deal with continuously changing data objects (CCDOs), such as data streams from sensors and tracking devices. In these applications, the underlying data management system must support new types of spatiotemporal queries that refer to the spatiotemporal trajectories of the CCDOs. In contrast to traditional data objects, CCDOs have continuously changing attributes, so the spatiotemporal relation between any two CCDOs can change over time. The problem is further complicated by the fact that CCDO trajectories carry a degree of uncertainty at every point in time, because databases can only be updated discretely. The paper formally presents a comprehensive framework for managing CCDOs, with insights into the spatiotemporal uncertainty problem, and presents an original parallel-processing solution for efficiently managing the uncertainty using the MapReduce platform of cloud computing.

17.
In a distributed stream processing system, streaming data are continuously disseminated from the sources to the distributed processing servers. To improve dissemination efficiency, these servers are typically organized into one or more dissemination trees. In this paper, we focus on the problem of constructing dissemination trees that minimize the average loss of fidelity of the system. We observe that existing heuristic-based approaches can only explore a limited solution space and hence may lead to sub-optimal solutions. Instead, we propose an adaptive, cost-based approach. Our cost model takes into account both the processing cost and the communication cost. Furthermore, because a distributed stream processing system is vulnerable to inaccurate statistics and to runtime fluctuations in data characteristics, server workloads, and network conditions, our scheme is designed to adapt to these situations: an operational dissemination tree may be incrementally transformed into a more cost-effective one. The adaptive strategy employs distributed decisions made independently by the servers, based on localized statistics collected by each server at runtime. For a relatively static environment, we also propose two static tree construction algorithms that rely on a priori system statistics; these static trees can also be used as initial trees in a dynamic environment. We apply our schemes to both single- and multi-object dissemination. An extensive performance study shows that the adaptive mechanisms are effective in a dynamic context and that the proposed static tree construction algorithms perform close to optimal in a static environment.

18.
The paper describes a distinctive and instructive example of optical three-dimensional (3D) acquisition, reverse engineering and rapid prototyping of a historic automobile, a Ferrari 250 Mille Miglia, performed primarily with an optical 3D whole-field digitiser based on the projection of incoherent light (OPL-3D, developed in our laboratory). The entire process consists of acquisition, point cloud alignment, triangle model definition, NURBS creation, production of the STL file, and finally the generation of a scaled replica of the car. Beyond the importance of the application to a unique, prestigious historic racing car, the process demonstrates the ease with which the optical system can be applied to the gauging and reverse engineering of large surfaces, such as automobile body press parts and full-size clays, with high accuracy and reduced processing time, for design and restyling applications.

19.
Monitoring the extent and pattern of snow cover in the dry, high-altitude Trans-Himalayan region (THR) is important for understanding the local and regional impact of ongoing climate change and variability. The freely available Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover images, with 500 m spatial and daily temporal resolution, can provide a basis for regional snow cover mapping, monitoring and hydrological modelling; however, heavy cloud obscuration remains the main limitation. In this study, we propose a five-step approach to remove cloud obscuration from MODIS daily snow products: combining data from the Terra and Aqua satellites; adjacent temporal deduction; spatial filtering based on orthogonal neighbouring pixels; spatial filtering based on a zonal snowline approach; and temporal filtering based on the zonal snow cycle. This study also examines the spatial and temporal variability of snow cover in the THR of Nepal over the last decade. Since no ground stations measuring snow data are available in the region, the performance of the proposed methodology is evaluated by comparing original MODIS snow cover data with the least cloud cover against the same data artificially degraded with the clouds of another densely cloud-covered product. The analysis indicates that the proposed five-step method is efficient in cloud reduction, with an average accuracy above 91%. The results show very high interannual and intra-seasonal variability of average snow cover, maximum snow extent and snow cover duration over the last decade. The peak snow period has been delayed by about 6.7 days per year, and the main agropastoral production areas of the region experienced a significant decline in snow cover duration during the last decade.
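The first two of the five steps lend themselves to a compact array sketch; the class codes are hypothetical and the later spatial and zonal filters are not shown:

# Sketch: Terra/Aqua combination keeps a clear-sky observation from either
# satellite; adjacent temporal deduction fills a still-cloudy pixel from the
# nearest preceding or following clear day. Class codes are made up.
import numpy as np

LAND, SNOW, CLOUD = 0, 1, 9

def combine_terra_aqua(terra, aqua):
    """Prefer Terra; fall back to Aqua where Terra is cloud-covered."""
    return np.where(terra != CLOUD, terra, aqua)

def adjacent_temporal_deduction(stack):
    """stack: (days, rows, cols). Fill clouds from the previous, then next, day."""
    filled = stack.copy()
    for t in range(1, filled.shape[0]):               # forward pass
        mask = filled[t] == CLOUD
        filled[t][mask] = filled[t - 1][mask]
    for t in range(filled.shape[0] - 2, -1, -1):      # backward pass
        mask = filled[t] == CLOUD
        filled[t][mask] = filled[t + 1][mask]
    return filled

rng = np.random.default_rng(5)
days = rng.choice([LAND, SNOW, CLOUD], size=(8, 50, 50), p=[0.4, 0.3, 0.3])
combined = np.array([combine_terra_aqua(d, d[::-1]) for d in days])  # toy Aqua = flipped Terra
print("cloud fraction before:", (days == CLOUD).mean(),
      "after:", (adjacent_temporal_deduction(combined) == CLOUD).mean())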

20.
The mobile industry introduces new products every year; consumers in high-income countries are reported to replace their mobile phones at intervals of between 12 and 18 months. This raises the questions of how to transfer data from one phone to another and how to share data between different mobile apps. We build a cloud service that combines contacts from different phones and keeps the contacts on those phones synchronized. Algorithms and data structures are designed to meet the needs of data combination and synchronization. The results show that our method is more efficient than other solutions.
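A toy version of the contact-combination step, with hypothetical fields and a simple last-write-wins rule (the abstract does not describe the actual data structures), might look like this:

# Sketch: combine contact lists from two phones, keyed by a normalized phone
# number, keeping the most recently modified record. Illustrative only.
import re

def normalize(number):
    return re.sub(r"\D", "", number)[-10:]     # keep the last 10 digits

def combine(*contact_lists):
    merged = {}
    for contacts in contact_lists:
        for c in contacts:
            key = normalize(c["phone"])
            if key not in merged or c["modified"] > merged[key]["modified"]:
                merged[key] = c
    return list(merged.values())

phone_a = [{"name": "Li Wei", "phone": "+86 138-0000-1111", "modified": 1},
           {"name": "Zhang",  "phone": "13900002222",       "modified": 5}]
phone_b = [{"name": "Li W.",  "phone": "138 0000 1111",     "modified": 3}]
print(combine(phone_a, phone_b))   # newer "Li W." replaces "Li Wei"; "Zhang" kept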

