Similar Documents
1.
In modern machining applications, reducing contour error is an important issue in multi-axis contour-following tasks. One popular approach to this problem is the cross-coupled controller (CCC). By exploiting the structure of the CCC, this paper develops an integrated control scheme and conducts an in-depth investigation of contour error reduction. The proposed motion control scheme consists of a feedback controller, a feedforward controller, and a modified contour error controller (a CCC equipped with a real-time contour error estimator). In addition, a fuzzy logic-based feedrate regulator is proposed to further reduce the contour error. The regulator is designed from the real-time estimated contour error and the curvature of the free-form parametric curves being machined. Several experiments are conducted to evaluate the performance of the proposed approach, and the results demonstrate its effectiveness.
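As a rough illustration of the estimation idea (not the paper's actual estimator): for a biaxial system, the contour error is commonly approximated as the component of the axis tracking-error vector normal to the tangent of the reference path.

```python
def contour_error_estimate(tracking_error, tangent):
    """Approximate contour error: magnitude of the component of the
    axial tracking-error vector normal to the (unit) path tangent."""
    ex, ey = tracking_error
    tx, ty = tangent
    # In 2D the normal component is the scalar cross product |e x t|.
    return abs(ex * ty - ey * tx)

# A purely tangential tracking error produces no contour error:
print(contour_error_estimate((0.01, 0.0), (1.0, 0.0)))  # 0.0
# A purely normal tracking error is the contour error itself:
print(contour_error_estimate((0.0, 0.01), (1.0, 0.0)))  # 0.01
```

Practical estimators refine this first-order approximation, e.g. with path curvature, which is where a real-time estimator such as the paper's comes in.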

2.
In this paper, the problem of optimal feedrate planning along a curved tool path for 3-axis CNC machines is addressed, subject to per-axis acceleration and jerk limits and a bound on the tangential velocity. It is proved that the optimal feedrate planning must be "Bang–Bang" or "Bang–Bang-Singular" control: throughout the motion, at least one axis is at its acceleration or jerk bound, or the tangential velocity is at its bound. As a consequence, the optimal parametric velocity can be expressed as a piecewise analytic function of the curve parameter u. An explicit formula for the velocity function when a jerk bound is active is obtained by solving a second-order differential equation. Under a "greedy rule", an algorithm for optimal jerk-confined feedrate planning is presented. Experimental results show that the new algorithm reduces machining vibration and improves machining quality.
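As a one-axis illustration of the jerk-limited "bang-bang" idea (a generic S-curve time calculation under assumed bounds, not the paper's multi-axis algorithm): the minimum time for a speed change with acceleration starting and ending at zero is achieved by saturating jerk, and, if the speed change is large enough, acceleration as well.

```python
import math

def s_curve_accel_time(dv, a_max, j_max):
    """Minimum time for a speed change dv (acceleration zero at both ends)
    under acceleration bound a_max and jerk bound j_max."""
    if dv >= a_max ** 2 / j_max:
        # Trapezoidal acceleration: jerk at +j_max, 0, -j_max ("bang-bang").
        return dv / a_max + a_max / j_max
    # Triangular acceleration: a_max is never reached, only jerk saturates.
    return 2.0 * math.sqrt(dv / j_max)

print(s_curve_accel_time(3.0, 2.0, 4.0))   # 2.0  (acceleration saturates)
print(s_curve_accel_time(0.25, 2.0, 4.0))  # 0.5  (jerk-only phases)
```

In either branch at least one bound is active at every instant, which is the one-dimensional shadow of the paper's optimality result.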

3.
There has been much recent interest in curvature-dependent contour evolution processes, particularly when the resultant family of contours satisfies the heat (diffusion) equation. Computer simulations of these processes have used high-precision computation to closely approximate the solutions to the equation. This paper describes a class of low-precision contour evolution processes, based on a digital approximation to the curvature of the contour derived from its chain code, that can be applied to contours in low-resolution digital images. We have found that these methods perform quite similarly to the PDE-based methods at much lower computational cost. Our methods are also not limited to using linear functions of the contour’s curvature; we give several examples of digital contour evolution processes that depend nonlinearly on curvature, and discuss their possible uses.
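A minimal sketch of digital curvature from a chain code (assuming an 8-connected Freeman code; the paper's approximation may differ in detail): the signed turn between successive links serves as the discrete curvature at each contour point.

```python
def chain_turns(code):
    """Signed turn at each vertex of a closed Freeman chain code,
    in units of 45 degrees, wrapped to the range [-4, 3]."""
    n = len(code)
    return [((code[(i + 1) % n] - code[i] + 4) % 8) - 4 for i in range(n)]

# Counter-clockwise unit square: four +2 (90 degree) turns, total 360 degrees.
square_ccw = [0, 0, 2, 2, 4, 4, 6, 6]
print(sum(chain_turns(square_ccw)))  # 8 (i.e., 8 x 45 = 360 degrees)
```

An evolution process can then move each contour point inward or outward by some (possibly nonlinear) function of this integer turn value.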

4.
Continuous linear commands are widely executed in computer numerical control (CNC) machining. The tangential discontinuity at the junction of consecutive segments restricts machining efficiency and deteriorates surface quality. Corners between linear segments have been successfully blended by inserting parametric splines, but challenges remain when common methods are applied to line-segment commands, owing to some of the following restrictions: (1) the stringent computation required to iteratively evaluate the arc length; (2) unwanted feedrate fluctuation; (3) oversized contour deviation caused by performing curve fitting and velocity planning separately.
A novel smoothing method based on a clothoid pair is proposed to accomplish geometry blending and speed scheduling synchronously; its spline parameter is arc-length parameterized. The arc length, curvature extremum, and geometric shape of the transition curve are expressed analytically in terms of the transition length. On this basis, the transition curve and the velocity profile are constructed concurrently in the look-ahead stage from the predefined approximation error, the reachable velocity, and the normal kinematic constraints. A real-time interpolation schedule is then developed to overcome the difficulty of crossing between linear and parametric segments. Compared with existing methods, the proposed method can analytically calculate the length of transition curves thanks to the arc-length-parameterized expression. Furthermore, feedrate fluctuation is eliminated in the fine interpolation, and the overlarge contour deviation produced by corner smoothing is avoided. The method is friendlier to the CNC system for executing smooth motion on-line, since more computing resources are released for other tasks, smoother motion is achieved, and higher contour accuracy is obtained. The experimental results also demonstrate its practicability and reliability.
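To make the clothoid transition concrete: a clothoid is a curve whose curvature grows linearly with arc length, k(s) = c * s, so a point at arc length s follows from Fresnel-type integrals. A numerical sketch with a hypothetical sharpness constant `c` (not the paper's analytic construction):

```python
import math

def clothoid_point(c, s, n=1000):
    """Point at arc length s on a clothoid with curvature k(s) = c * s,
    via trapezoidal integration of x = int cos(theta), y = int sin(theta),
    where theta(t) = c * t^2 / 2 is the heading angle."""
    ds = s / n
    x = y = 0.0
    for i in range(n + 1):
        t = i * ds
        theta = 0.5 * c * t * t
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        x += w * math.cos(theta) * ds
        y += w * math.sin(theta) * ds
    return x, y

# Zero sharpness degenerates to a straight line along x:
print(clothoid_point(0.0, 2.0))  # approximately (2.0, 0.0)
```

Because the parameter is the arc length itself, feedrate planning along such a curve needs no iterative arc-length inversion, which is the property the paper exploits.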

5.
Recently, modern manufacturing systems have been designed that can machine arbitrary parametric curves while greatly reducing data communication between CAD/CAM and CNC systems. However, a constant feedrate and a bounded chord error between two interpolated points along a parametric curve are generally difficult to achieve, owing to the non-uniform mapping between the curve and its parameter. A speed-controlled interpolation algorithm with an adaptive feedrate is proposed in this paper. Since the chord error in interpolation depends on the speed along the curve and the radius of curvature, the feedrate in the proposed algorithm is adjusted automatically so that a specified limit on the chord error is met. Both simulation and experimental results for non-uniform rational B-spline (NURBS) examples are provided to verify the feasibility and precision of the proposed interpolation algorithm.
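The standard chord-error relation behind such adaptive interpolators (a generic sketch, not necessarily the paper's exact formulation): with sampling period T and local radius of curvature rho, one interpolation step at speed v incurs chord error delta = rho - sqrt(rho^2 - (vT/2)^2), which inverts into a feedrate cap.

```python
import math

def adaptive_feedrate(rho, delta_max, T, f_cmd):
    """Largest feedrate keeping the per-period chord error within
    delta_max on an arc of radius rho, capped at the commanded f_cmd."""
    if rho <= delta_max:
        return f_cmd  # degenerate case: tolerance exceeds the radius
    v_limit = (2.0 / T) * math.sqrt(rho ** 2 - (rho - delta_max) ** 2)
    return min(f_cmd, v_limit)

# rho = 5 mm, tolerance 1 um, T = 1 ms: curvature, not the command,
# limits the feedrate to about 200 mm/s.
print(adaptive_feedrate(5.0, 1e-3, 1e-3, 300.0))
```

On straight or gently curved spans the commanded feedrate is returned unchanged, so the speed stays constant wherever the chord-error constraint allows.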

6.
The growing hierarchical self-organizing map (GHSOM) has been shown to be an effective technique for anomaly detection. However, existing GHSOM-based approaches cannot adapt online to ever-changing anomalies, which results in low accuracy in identifying intrusions, particularly “unknown” attacks. In this paper, we propose an adaptive GHSOM-based approach (A-GHSOM) to network anomaly detection. It consists of four significant enhancements: enhanced threshold-based training, dynamic input normalization, feedback-based quantization-error threshold adaptation, and prediction confidence filtering and forwarding. We first evaluate A-GHSOM for intrusion detection using the KDD’99 dataset. Extensive experimental results demonstrate that, compared with eight representative intrusion detection approaches, A-GHSOM achieves significant improvement in overall accuracy and in identifying “unknown” attacks while maintaining low false-positive rates: an overall accuracy of 99.63%, and 94.04% accuracy on “unknown” attacks with a false-positive rate of 1.8%. To avoid drawing research results and conclusions solely from experiments with the KDD dataset, we have also built a dataset (TD-Sim) consisting of a mixture of live trace data from the Lawrence Berkeley National Laboratory and simulated traffic based on our testbed network, ensuring adequate coverage of a variety of attacks. Performance evaluation with the TD-Sim dataset shows that A-GHSOM adapts to live traffic and achieves an overall accuracy of 97.12% while maintaining a false-positive rate of 2.6%.

7.
Two robust adaptive control schemes for single-input single-output (SISO) strict-feedback nonlinear systems with unknown nonlinearities, capable of guaranteeing prescribed performance bounds, are presented in this paper. The first assumes knowledge of only the signs of the virtual control coefficients; the second relaxes this assumption by incorporating Nussbaum-type gains, decoupled backstepping, and non-integral-type Lyapunov functions. By prescribed performance bounds we mean that the tracking error converges to an arbitrarily small predefined residual set, with convergence rate no less than a prespecified value and maximum overshoot less than a sufficiently small prespecified constant. A novel output error transformation is introduced to transform the original “constrained” system (in the sense of the output error restrictions) into an equivalent “unconstrained” one, and it is proven that stabilizing the “unconstrained” system suffices to solve the problem. Both controllers are smooth and successfully overcome the loss-of-controllability issue. Because only the stabilization of the “unconstrained” system is of concern, the complexity of selecting both the control parameters and the regressors in the neural approximators is greatly reduced. Simulation studies clarify and verify the approach.
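One common concrete realization of this machinery (an illustrative choice, not necessarily the exact functions used in the paper) takes an exponentially decaying performance funnel rho(t) and the inverse of T(x) = tanh(x) as the error transformation:

```python
import math

def performance_bound(t, rho0, rho_inf, decay):
    """Prescribed performance funnel rho(t), decaying from rho0 to rho_inf."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Map a constrained error e in (-rho, rho) to an unconstrained
    variable using the inverse of T(x) = tanh(x)."""
    return math.atanh(e / rho)

# The transformed error blows up as e approaches the funnel boundary,
# which is what lets an ordinary stabilizer enforce the constraint:
print(transformed_error(0.0, 1.0))   # 0.0
print(transformed_error(0.99, 1.0))  # large (about 2.65)
```

Keeping the transformed error bounded then automatically keeps the original error inside the shrinking funnel, i.e., within the prescribed performance bounds.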

8.
Parametric interpolation is widely used in CNC machining because of its advantages over traditional linear or circular interpolation. Many researchers have focused on this field and have made great progress on the specific case of NURBS curve interpolation. These works greatly improved CNC machining with constant feedrate, confined chord error, and limited acceleration/deceleration. However, during the machining process, mechanical shocks to the machine tool caused by an undesirable acceleration/deceleration profile dramatically deteriorate the surface accuracy and quality of the machined parts, and in most cases are harmful to the machine tool itself. In this paper, an accurate adaptive NURBS curve interpolator is proposed that takes acceleration-deceleration control into account. The proposed design effectively reduces machining shocks by constraining the machine-tool jerk dynamically. Meanwhile, a constant feedrate is maintained during most of the machining process, so high accuracy is achieved while the feedrate profile is greatly smoothed. To deal with sudden changes of acceleration/deceleration around corners with large curvature, a real-time flexible acceleration/deceleration control scheme is introduced to adjust the feedrate correspondingly. A case study verifies the feasibility and advantages of the proposed design.

9.
The feedrate scheduling of NURBS interpolator for CNC machine tools
This paper proposes an off-line feedrate scheduling method for CNC machines constrained by chord tolerance, acceleration, and jerk limitations. The off-line curve scanning and feedrate scheduling are realized as a pre-processor, which relieves the computational burden of the real-time task. The proposed method first scans a non-uniform rational B-spline (NURBS) curve and finds the crucial points with large curvature (termed critical points) or mere G0 continuity (termed breakpoints). The NURBS curve is then divided into several NURBS sub-curves using a curve-splitting method that guarantees the convergence of the predictor–corrector interpolation (PCI) algorithm. The suitable feedrate at each critical point is adjusted according to the limits on chord error, centripetal acceleration, and jerk, and at each breakpoint according to a formulation of the velocity variation. The feedrate profile for each NURBS block is constructed according to the block length and the given acceleration and jerk limits. In addition, a feedrate compensation method for short NURBS blocks makes the jerk-limited feedrate profile more continuous and precise. Because the feedrate profile is established off-line, the NURBS interpolation calculation is extremely efficient for high-speed CNC machining. Finally, simulations and experiments with two free-form NURBS curves verify the feasibility and applicability of the proposed method.
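The feedrate cap at a critical point can be sketched as the minimum of the limits derived from the individual constraints (generic formulas under assumed bounds; the paper's exact adjustment may differ):

```python
import math

def critical_point_feedrate(rho, T, chord_tol, a_max, j_max, f_cmd):
    """Feedrate limit at a high-curvature point from three constraints:
    chord error over one period T, centripetal acceleration v^2/rho,
    and centripetal jerk v^3/rho^2."""
    v_chord = (2.0 / T) * math.sqrt(max(rho ** 2 - (rho - chord_tol) ** 2, 0.0))
    v_acc = math.sqrt(a_max * rho)
    v_jerk = (j_max * rho ** 2) ** (1.0 / 3.0)
    return min(f_cmd, v_chord, v_acc, v_jerk)

# Here the centripetal acceleration limit is the binding constraint:
print(critical_point_feedrate(2.0, 1e-3, 1e-3, 1000.0, 30000.0, 100.0))
```

Evaluating this once per critical point during the off-line scan is what keeps the real-time interpolation task light.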

10.
Methodologies for planning the motion trajectory of parametric interpolation, such as non-uniform rational B-spline (NURBS) curves, have been proposed in the past. However, most algorithms were developed under constraints on feedrate, acceleration/deceleration (acc/dec), jerk, and chord error; the errors caused by servo dynamics were rarely included in the design process. This paper proposes an integrated look-ahead dynamics-based (ILD) algorithm that considers geometric and servo errors simultaneously. The ILD consists of three modules: a sharp-corner detection module, a jerk-limited module, and a dynamics module. The sharp-corner detection module identifies sharp corners of a curve and divides the curve into small segments. The jerk-limited module plans the feedrate profile of each segment according to the constraints on feedrate, acc/dec, jerk, and chord error. To ensure that the contour errors are bounded within the specified value, the dynamics module further modifies the feedrate profile based on the derived contour error equation. Simulations and experiments validate the ILD algorithm, showing that it improves tracking and contour accuracy significantly compared with adaptive-feedrate and curvature-feedrate algorithms.

11.
The quantity of Coarse Woody Debris (CWD), defined as biomass per unit area (t/ha), and its quality, defined as the proportion of standing dead logs in the total CWD quantity, contribute greatly to many ecological processes such as forest nutrient cycling, tree regeneration, wildlife habitat, fire dynamics, and carbon dynamics. However, no cost-effective and time-saving method to determine CWD is available, and very little literature applies remote sensing techniques to CWD inventory. In this paper, we fused wall-to-wall multi-frequency, multi-polarization Airborne Synthetic Aperture Radar (AirSAR) data with optical Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to estimate the quantity and quality of CWD in the Yellowstone post-fire forest ecosystem, where the severe 1988 fire resulted in high spatial heterogeneity of dead logs. To relate backscatter values to CWD metrics, we first reduced the terrain effect to remove the interference of topography with AirSAR backscatter. Second, we removed the influence of regenerating saplings by quadratic polynomial fitting between the AVIRIS Enhanced Vegetation Index (EVI) and the backscatter of the different channels. The quantity of CWD was derived from Phh and Phv, and the quality of CWD from Phh aided by the ratio of Lhv to Phh. Two maps of Yellowstone post-fire CWD quantity and quality were produced and validated by extensive field surveys. For CWD quantity, the correlation coefficient between measured and predicted CWD is only 0.54, with a mean absolute error of up to 29.1 t/ha. However, when the CWD quantity is discretized into the three categories "≤ 60", "60-120", and "≥ 120" t/ha, the overall accuracy is 65.6%; into the two categories "≤ 90" and "≥ 90", it is 73.1%; and into "≤ 60" and "≥ 60", it is 84.9%.
This indicates that our attempt to map CWD quantity spatially and continuously achieved only partial success, but the general, discrete categories are reasonable. For CWD quality, the overall accuracy over five types (Type 1: standing CWD ratio ≥ 40%; Type 2: 15% ≤ ratio < 40%; Type 3: 7% ≤ ratio < 15%; Type 4: 3% ≤ ratio < 7%; Type 5: ratio < 3%) is only 40.32%. However, when types 1-3 are combined into one category and types 4-5 into another, the overall accuracy is 67.74%. This again indicates partial success in mapping CWD quality into detailed categories, with acceptable results when only very coarse CWD quality is considered. Bias can be attributed to the complex influence of many factors, such as field survey error, sapling compensation, terrain-effect reduction, surface properties, and the understanding of the backscatter mechanism.

12.
Quadrangulation methods aim to approximate surfaces by semiregular meshes with as few extraordinary vertices as possible. A number of techniques use the harmonic parameterization to keep quads close to squares, or fit parametrization gradients to align quads to features. Both types of techniques create near-isotropic quads; feature-aligned quadrangulation algorithms reduce the remeshing error by aligning isotropic quads with principal curvature directions. A complementary approach is to allow for anisotropic elements, which are well known to have significantly better approximation quality. In this work we present a simple and efficient technique to add curvature-dependent anisotropy to harmonic and feature-aligned parameterizations and thereby improve the approximation error of the quadrangulations. We use a metric derived from the shape operator, which results in a more uniform error distribution, decreasing the error near features.

13.
This paper describes macro-informatics of cognition, a guideline for the mathematical formulation of macroscopic features. A macroscopic feature emerges from the totality of shape elements and is important in styling design, yet it is difficult to formulate mathematically using conventional microscopic shape information such as dimension and curvature. For the formulation of macroscopic features, this paper highlights the importance of "condition", i.e., the various physical elements of the circumstance. Moreover, the paper gives a mathematical formulation of the macroscopic feature "complexity" and its application to design. The formulation consists of curvature integration and multi-resolution representation. In the application, a shape-generation method based on a genetic algorithm is introduced.

14.
This is a companion to the paper by Chiuso and Picci (2004d), in which we carry out an asymptotic error analysis of a weighted PI-MOESP-type method and compare its accuracy with estimates obtained by customary "joint" subspace methods. The analysis shows that, under certain conditions, methods based on orthogonal decomposition of the input-output data and a block-decoupled parametrization perform better than traditional joint-model-based methods in the circumstance of nearly parallel regressors.

17.
Data clustering is typically considered a subjective process, which makes it problematic. For instance, how does one make statistical inferences based on clustering? The matter is different with pattern classification, for which two fundamental characteristics can be stated: (1) the error of a classifier can be estimated using “test data,” and (2) a classifier can be learned using “training data.” This paper presents a probabilistic theory of clustering, including both learning (training) and error estimation (testing). The theory is based on operators on random labeled point processes. It includes an error criterion in the context of random point sets and representation of the Bayes (optimal) cluster operator for a given random labeled point process. Training is illustrated using a nearest-neighbor approach, and trained cluster operators are compared to several classical clustering algorithms.

18.
Adaptive neural control of nonlinear MIMO systems with unknown time delays
In this paper, a novel adaptive neural network (NN) control scheme is proposed for a class of uncertain multi-input multi-output (MIMO) nonlinear time-delay systems. Radial basis function (RBF) NNs are used to tackle the unknown nonlinear functions, and the adaptive NN tracking controller is constructed by combining Lyapunov-Krasovskii functionals with the dynamic surface control (DSC) technique and the minimal-learning-parameters (MLP) algorithm. The proposed controller guarantees uniform ultimate boundedness (UUB) of all signals in the closed-loop system, while the tracking error converges to a small neighborhood of the origin. An advantage of the proposed scheme is that the number of adaptive parameters for each subsystem is reduced to one, so the three problems of "explosion of complexity", "curse of dimensionality", and "controller singularity" are all resolved. Finally, a numerical simulation demonstrates the effectiveness and performance of the proposed scheme.
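A minimal sketch of the Gaussian RBF approximator underlying such schemes, with hypothetical fixed centers and width (real designs adapt the weights, or, as in the MLP algorithm, a single norm-bound parameter, online):

```python
import math

def rbf_net(x, centers, width, weights):
    """Single-input Gaussian RBF network:
    f(x) = sum_i w_i * exp(-((x - c_i) / width)^2)."""
    return sum(w * math.exp(-((x - c) / width) ** 2)
               for c, w in zip(centers, weights))

# At a center, that basis function equals 1, so a lone unit contributes
# its weight directly; far from all centers the output decays to zero.
print(rbf_net(0.0, [0.0], 1.0, [2.0]))  # 2.0
```

Reducing the adaptation to one scalar per subsystem, rather than one per weight, is what keeps the controller's parameter count small.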

19.
The PAC-learning model is distribution-independent in the sense that the learner must reach a learning goal with a limited number of labeled random examples, without any prior knowledge of the underlying domain distribution. To achieve this, one needs generalization error bounds that are valid uniformly over every domain distribution. These bounds are (almost) tight in the sense that there is a domain distribution which does not admit a generalization error significantly smaller than the general bound. Note, however, that this leaves open the possibility of achieving the learning goal faster if the underlying distribution is "simple". Informally speaking, we say a PAC-learner L is "smart" if, for a "vast majority" of domain distributions D, L does not require significantly more examples to reach the learning goal than the best learner whose strategy is specialized to D. In this paper, focusing on sample complexity and ignoring computational issues, we show that smart learners do exist. This implies (at least from an information-theoretic perspective) that, for a vast majority of domain distributions, full prior knowledge of the domain distribution (or access to a huge collection of unlabeled examples) does not significantly reduce the number of labeled examples required to achieve the learning goal.
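For orientation, the standard distribution-free sample-complexity bound for a finite, realizable hypothesis class — the kind of uniform bound the abstract refers to — is m >= (1/eps) * (ln|H| + ln(1/delta)):

```python
import math

def pac_sample_bound(h_size, eps, delta):
    """Labeled examples sufficient for a consistent learner over a finite
    hypothesis class H to reach error <= eps with probability >= 1 - delta,
    uniformly over every domain distribution."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

print(pac_sample_bound(1000, 0.1, 0.05))  # 100
```

A "smart" learner, in the paper's sense, would beat this uniform figure on most distributions while never doing much worse on the rest.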

20.
This study presents a data-driven, semiautomatic classification system based on object-based image analysis and fuzzy logic, applied to a selected landslide-prone area in the Western Black Sea region of Turkey. In the first stage, a multiresolution segmentation was performed on Landsat ETM+ satellite images of the study area. The model was established on 5235 image objects obtained by the segmentation. A total of 70 landslide locations and 10 input parameters were considered in the analyses: normalized difference vegetation index, slope angle, curvature, brightness, mean blue band, asymmetry, shape index, length/width ratio, gray-level co-occurrence matrix, and mean difference to the infrared band. Membership functions were used to classify the study area with five fuzzy operators: "and", "or", "mean arithmetic", "mean geometric", and "algebraic product". To assess the performance of the resulting maps, 700 image objects not used in the model were taken into consideration. Based on the results, the map produced by the "fuzzy and" operator performed better than those classified by the other operators. The proposed methodology may be useful for decision makers, local administrations, and scientists interested in landslides, as well as for planning, management, and regional development purposes in landslide-prone areas.
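The five fuzzy operators named above have simple closed forms over membership degrees in [0, 1]; a minimal sketch (the study applies them inside an object-based image analysis package, but the arithmetic is the same):

```python
import math

def fuzzy_combine(memberships, op):
    """Combine membership degrees in [0, 1] with one of the five
    fuzzy operators used in the study."""
    n = len(memberships)
    if op == "and":
        return min(memberships)          # fuzzy intersection
    if op == "or":
        return max(memberships)          # fuzzy union
    if op == "mean arithmetic":
        return sum(memberships) / n
    if op == "mean geometric":
        return math.prod(memberships) ** (1.0 / n)
    if op == "algebraic product":
        return math.prod(memberships)
    raise ValueError(f"unknown operator: {op}")

m = [0.8, 0.5]
print(fuzzy_combine(m, "and"))                # 0.5
print(fuzzy_combine(m, "algebraic product"))  # 0.4
```

The "and" (minimum) operator is the most conservative of the five, which is consistent with it producing the best-performing map in the study.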
