Similar Documents
Found 20 similar documents.
1.
The authors describe several fundamentally useful primitive operations and routines and illustrate their usefulness in a wide range of familiar vision processes. These operations are described in terms of a vector machine model of parallel computation. They use a parallel vector model because vector models can be mapped onto a wide range of architectures. They also describe implementing these primitives on a particular fine-grained machine, the Connection Machine. These primitives prove applicable in a variety of vision tasks. Grid permutations are useful in many early vision algorithms, such as Gaussian convolution, edge detection, motion, and stereo computation. Scan primitives facilitate simple, efficient solutions of many problems in middle- and high-level vision. Pointer jumping, using permutation operations, permits construction of extended image structures in logarithmic time. Methods such as outer products, which rely on a variety of primitives, play an important role in many high-level algorithms.
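The scan primitive mentioned above is easy to state concretely. As a minimal illustration (not the paper's Connection Machine implementation), a +-scan returns, at each position, the sum of all earlier elements:

```python
def plus_scan(xs):
    """Exclusive +-scan: out[i] holds the sum of xs[0..i-1].

    On a vector machine this runs in logarithmic parallel time;
    the sequential loop below only illustrates the semantics.
    """
    out, acc = [], 0
    for x in xs:
        out.append(acc)
        acc += x
    return out

print(plus_scan([3, 1, 7, 0, 4]))  # [0, 3, 4, 11, 11]
```

Scans like this underlie many of the middle- and high-level routines the abstract mentions, e.g. enumerating and packing selected pixels.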

2.
Interval data offer a valuable way of representing the available information in complex problems where uncertainty, inaccuracy, or variability must be taken into account. This paper considers the learning of interval neural networks, whose inputs and outputs are vectors with interval components and whose weights are real numbers. The back-propagation (BP) learning algorithm is very slow for interval neural networks, just as for usual real-valued neural networks. The extreme learning machine (ELM) has a faster learning speed than the BP algorithm. In this paper, ELM is applied to the learning of interval neural networks, resulting in an interval extreme learning machine (IELM). ELM for usual feedforward neural networks proceeds in two steps: the first randomly generates the weights connecting the input and hidden layers, and the second uses the Moore–Penrose generalized inverse to determine the weights connecting the hidden and output layers. The first step can be applied directly to interval neural networks, but the second cannot, because IELM involves nonlinear constraint conditions. Instead, we use the same idea as the BP algorithm and form a nonlinear optimization problem to determine the weights connecting the hidden and output layers of IELM. Numerical experiments show that IELM is much faster than the usual BP algorithm. The generalization performance of IELM is much better than that of BP, while its training error is slightly worse, suggesting that BP may over-fit.
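The two ELM steps for real-valued networks can be sketched in a few lines. This is a hypothetical scalar-input, scalar-output illustration, not the paper's IELM: step 1 fixes random input-to-hidden weights, and step 2 solves for the output weights by least squares (regularized normal equations here standing in for the Moore–Penrose pseudo-inverse):

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def elm_fit(xs, ys, n_hidden=8, seed=0):
    rng = random.Random(seed)
    # Step 1: random input weights and biases, never trained.
    w = [rng.uniform(-2, 2) for _ in range(n_hidden)]
    b = [rng.uniform(-2, 2) for _ in range(n_hidden)]
    H = [[math.tanh(wj * x + bj) for wj, bj in zip(w, b)] for x in xs]
    # Step 2: least-squares output weights via (lightly ridged) normal equations.
    eps = 1e-6
    A = [[sum(H[r][i] * H[r][j] for r in range(len(H))) + (eps if i == j else 0.0)
          for j in range(n_hidden)] for i in range(n_hidden)]
    rhs = [sum(H[r][i] * ys[r] for r in range(len(H))) for i in range(n_hidden)]
    beta = solve(A, rhs)
    return lambda x: sum(bb * math.tanh(wj * x + bj)
                         for wj, bj, bb in zip(w, b, beta))

xs = [i / 10 for i in range(21)]
ys = [math.sin(x) for x in xs]
f = elm_fit(xs, ys)
err = max(abs(f(x) - y) for x, y in zip(xs, ys))
# err is small: 8 random tanh features fit sin on [0, 2] closely, with no
# gradient iterations at all -- the source of ELM's speed advantage over BP.
```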

3.
This paper presents a new architecture for embedded systems and describes an appropriate method for programming a control system. A grinding machine control system was built and an experimental verification of the theoretical approach was performed. The efficiency of this novel system was compared with that of conventional control systems by grinding a workpiece to stringent quality requirements. The superior performance of the OR dataflow control system led to the encouraging conclusions presented in this paper.

4.
Logic programming languages have gained wide acceptance for two reasons: their clear declarative semantics, and the wide scope for parallelism they provide, which can be exploited by building suitable parallel architectures. In this paper, we propose a multi-ring dataflow machine to support the OR-parallelism and the argument parallelism of logic programs. A new scheme is suggested for handling the deferred read mechanism of the dataflow architecture. The required data structures, the dataflow actors and the built-in dataflow procedures for OR-parallel execution are discussed. Multiple binding environments arising in OR-parallel execution are handled by a new scheme called the tagged variable scheme. Schemes for constrained OR-parallel execution are also discussed.

5.
Analog neural network for support vector machine learning
An analog neural network for support vector machine learning is proposed, based on a partially dual formulation of the quadratic programming problem. It results in a simpler circuit implementation with respect to existing neural solutions for the same application. The effectiveness of the proposed network is shown through computer simulations on benchmark problems.

6.
7.
Compute-intensive applications have gradually shifted focus from massively parallel supercomputers to capacity obtained as an on-demand resource. This is particularly true for the large-scale adoption of cloud computing and MapReduce in industry, while traditional high-performance computing (HPC) usage in scientific and engineering computing has found it difficult to exploit this type of resource. However, with the strong trend toward increasing parallelism rather than faster processors, a growing number of applications target parallelism at the algorithm level with loosely coupled approaches based on sampling and ensembles. While these cannot trivially be formulated as MapReduce, they are highly amenable to throughput computing. There are many general and powerful frameworks, but for sampling-based algorithms in scientific computing in particular there are clear advantages to a platform and scheduler that are highly aware of the underlying physical problem. Here, we present how these challenges are addressed with combinations of dataflow programming and peer-to-peer techniques in the Copernicus platform. This allows automation of sampling-focused workflows, task generation, dependency tracking, and, not least, distribution of tasks to a diverse set of compute resources ranging from supercomputers to clouds and distributed computing (across firewalls and fragile networks). Workflows are defined from modules using existing programs, which makes them reusable without programming requirements. The system achieves resiliency by handling node failures transparently with minimal loss of computing time due to checkpointing, and a single server can manage hundreds of thousands of cores, e.g., for computational chemistry applications.
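The dependency tracking described here can be illustrated with a toy module graph. This is only a sketch of the idea, not the Copernicus API; the module names are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical sampling workflow: each module lists the modules whose
# outputs it consumes. A scheduler may run independent modules in parallel;
# static_order() gives one valid sequential execution order.
workflow = {
    "equilibrate": {"prepare"},
    "sample_0": {"equilibrate"},
    "sample_1": {"equilibrate"},
    "analyze": {"sample_0", "sample_1"},
}

order = list(TopologicalSorter(workflow).static_order())
print(order)  # e.g. ['prepare', 'equilibrate', 'sample_0', 'sample_1', 'analyze']
```

A real platform would additionally checkpoint each module's outputs so a failed node only costs the work since its last checkpoint.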

8.
9.
Zhu Xing, Xu Qiang, Tang Minggao, Li Huajin, Liu Fangzhou 《Neural computing & applications》2018,30(12):3825-3835

A novel hybrid model composed of least squares support vector machines (LSSVM) and double exponential smoothing (DES) is proposed and applied to calculate the one-step-ahead displacement of multifactor-induced landslides. Wavelet de-noising and Hodrick-Prescott filtering were used to decompose the original displacement time series into three components: a periodic term, a trend term and random noise, which respectively represent the periodic dynamic behaviour of landslides controlled by seasonal triggers, the geological conditions, and random measurement noise. LSSVM and DES models were constructed and trained to forecast the periodic component and the trend component, respectively. The models' inputs include the seasonal triggers (e.g. reservoir level and rainfall data) and displacement values, which are measurable variables over a specific prior time. The performance of the hybrid model was evaluated quantitatively; the calculated displacement is in excellent agreement with the actual monitored values. The results indicate that the hybrid model is a powerful tool for predicting the one-step-ahead displacement of landslides triggered by multiple factors.

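The DES half of the hybrid model is simple enough to sketch. Below is Brown's double exponential smoothing producing a one-step-ahead forecast of a trending series — a generic illustration, not the authors' exact configuration:

```python
def des_forecast(series, alpha=0.5):
    """Brown's double exponential smoothing: one-step-ahead forecast."""
    s1 = s2 = series[0]
    for y in series:
        s1 = alpha * y + (1 - alpha) * s1   # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2  # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + trend

# A linearly trending "trend term": the forecast tracks the next value.
print(des_forecast([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # ≈ 10.98
```

In the hybrid scheme this handles the slowly varying trend component, while LSSVM handles the trigger-driven periodic component.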

10.
11.
For high-speed trains, high-precision train positioning is important to guarantee train safety and operational efficiency. By analyzing operational data from the Beijing–Shanghai high-speed railway, we find that the currently used average speed model (ASM) is not good enough, with a relative error of about 2.5 %. To reduce the positioning error, we establish three models for calculating train positions using advanced neural computing methods: back-propagation (BP), radial basis function (RBF) and adaptive network-based fuzzy inference system (ANFIS). Six indices are defined to evaluate the performance of the three models. Compared with ASM, the neural computing models reduce the positioning error by about 50 %. To increase the robustness and real-time response of the neural computing models, online learning methods are developed that update the parameters in the last layer of each model by gradient descent. With online learning, the positioning error of the neural computing models can be further reduced by about 10 %. Among the three models, the ANFIS model is the best in both training and testing; the BP model is better than the RBF model in training, but worse in testing. In short, at the same positioning error the three models can halve the number of transponders and thus the cost, or, with the same number of transponders, reduce the positioning error by about 50 %.
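Updating only the last (linear) layer online can be sketched as a single stochastic-gradient step on the squared error — a generic illustration of the idea, not the paper's exact update rule:

```python
def sgd_last_layer(w, phi, y, lr=0.1):
    """One online step on the output layer: w <- w - lr * (w·phi - y) * phi.

    `phi` is the (fixed) hidden-layer feature vector for the current sample,
    so only the last layer's weights move -- cheap enough for real time.
    """
    err = sum(wi * fi for wi, fi in zip(w, phi)) - y
    return [wi - lr * err * fi for wi, fi in zip(w, phi)]

# Toy run: learn y = 2x online with features phi = [x, 1].
w = [0.0, 0.0]
for _ in range(200):
    for x in (0.0, 0.5, 1.0, 1.5, 2.0):
        w = sgd_last_layer(w, [x, 1.0], 2 * x)
# w converges to approximately [2, 0]
```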

12.
Measurement of machine performance degradation using a neural network model
Machines degrade as a result of aging and wear, which decreases performance reliability and increases the potential for faults and failures. The impact of machine faults and failures on factory productivity is an important concern for manufacturing industries. Economic impacts relating to machine availability and reliability, as well as corrective (reactive) maintenance costs, have prompted facilities and factories to improve their maintenance techniques and operations to monitor machine degradation and detect faults. This paper presents an innovative methodology that can change maintenance practice from that of reacting to breakdowns, to one of preventing breakdowns, thereby reducing maintenance costs and improving productivity. To analyze the machine behavior quantitatively, a pattern discrimination model (PDM) based on a cerebellar model articulation controller (CMAC) neural network was developed. A stepping motor and a PUMA 560 robot were used to study the feasibility of the developed technique. Experimental results have shown that the developed technique can analyze machine degradation quantitatively. This methodology could help operators set up machines for a given criterion, determine whether the machine is running correctly, and predict problems before they occur. As a result, maintenance hours could be used more effectively and productively.
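The CMAC underlying the PDM is a table-lookup network over overlapping tilings of the input space. A minimal one-dimensional sketch with illustrative parameters (not the authors' configuration):

```python
N_TILINGS = 4
TILE_WIDTH = 1.0

def active_tiles(x):
    """CMAC addressing: one active tile per offset tiling (x assumed >= 0)."""
    return [(t, int((x + t * TILE_WIDTH / N_TILINGS) // TILE_WIDTH))
            for t in range(N_TILINGS)]

def predict(weights, x):
    # Output is the sum of the weights of the active tiles.
    return sum(weights.get(tile, 0.0) for tile in active_tiles(x))

def train_step(weights, x, target, lr=0.3):
    # LMS rule: distribute the error equally over the active tiles.
    err = target - predict(weights, x)
    for tile in active_tiles(x):
        weights[tile] = weights.get(tile, 0.0) + lr * err / N_TILINGS

weights = {}
for _ in range(30):
    train_step(weights, 1.3, 2.0)
# Nearby inputs share most tiles, so the response generalizes locally:
# predict(weights, 1.2) shares 3 of 4 tiles and returns about 3/4 of 2.0.
```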

13.
Ni Mingze, Wang Ce, Zhu Tianqing, Yu Shui, Liu Wei 《Machine Learning》2022,111(11):3977-4002
Deep-learning based natural language processing (NLP) models have been proven vulnerable to adversarial attacks. However, there is currently insufficient research that studies attacks...

14.
A one-layer recurrent neural network for support vector machine learning
This paper presents a one-layer recurrent neural network for support vector machine (SVM) learning in pattern classification and regression. The SVM learning problem is first converted into an equivalent formulation, and then a one-layer recurrent neural network for SVM learning is proposed. The proposed neural network is guaranteed to obtain the optimal solution of support vector classification and regression. Compared with the existing two-layer neural network for the SVM classification, the proposed neural network has a low complexity for implementation. Moreover, the proposed neural network can converge exponentially to the optimal solution of SVM learning. The rate of the exponential convergence can be made arbitrarily high by simply turning up a scaling parameter. Simulation examples based on benchmark problems are discussed to show the good performance of the proposed neural network for SVM learning.

15.
The relation between the decision trees generated by a machine learning algorithm and the hidden layers of a neural network is described. A continuous ID3 algorithm is proposed that converts decision trees into hidden layers. The algorithm allows self-generation of a feedforward neural network architecture. In addition, it allows interpretation of the knowledge embedded in the generated connections and weights. A fast simulated annealing strategy, known as Cauchy training, is incorporated into the algorithm to escape from local minima. The performance of the algorithm is analyzed on spiral data.
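The conversion of a decision-tree node into a hidden unit can be illustrated directly: a crisp test `x > c` becomes a steep sigmoid centred at the threshold. A minimal sketch of this idea (the steepness value is illustrative):

```python
import math

def node_to_unit(threshold, steepness=25.0):
    """A tree test `x > threshold` softened into a sigmoidal hidden unit,
    in the spirit of the continuous ID3 conversion of nodes to hidden units."""
    return lambda x: 1.0 / (1.0 + math.exp(-steepness * (x - threshold)))

unit = node_to_unit(0.5)
print(round(unit(0.0), 3), round(unit(1.0), 3))  # 0.0 1.0
```

Because the unit is differentiable, its threshold and steepness can then be fine-tuned by gradient-based (or, as in the paper, Cauchy annealing) training.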

16.
17.
Deduplication is the task of identifying the entities in a data set that refer to the same real-world object. Over the last decades, this problem has been largely investigated and many techniques have been proposed to improve the efficiency and effectiveness of deduplication algorithms. As data sets become larger, such algorithms may face critical bottlenecks in memory usage and execution time. In this context, cloud computing environments have been used for scaling out data quality algorithms. In this paper, we investigate the efficacy of different machine learning techniques for scaling out virtual clusters for the execution of deduplication algorithms under predefined time restrictions. We also propose specific heuristics (Best Performing Allocation, Probabilistic Best Performing Allocation, Tunable Allocation, Adaptive Allocation and Sliced Training Data) which, together with the machine learning techniques, are able to tune the virtual cluster estimations as demands fluctuate over time. The experiments we have carried out using data sets of multiple scales provide many insights into the adequacy of the considered machine learning algorithms and proposed heuristics for tackling cloud computing provisioning.

18.
In a great variety of neuron models, neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks that contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo-dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo-dimension is bounded from above by a polynomial with the same order of magnitude as the currently best-known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are calculations of solution set components bounds for new network classes. As to lower bounds, we construct product unit networks of fixed depth with super-linear VC dimension. For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints. In terms of these, we derive some asymptotically tight bounds. Multiplication plays an important role in both neural modeling of biological behavior and computing and learning with artificial neural networks. We briefly survey research in biology and in applications where multiplication is considered an essential computational element. The results we present here provide new tools for assessing the impact of multiplication on the computational power and the learning capabilities of neural networks.
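The unit types discussed above are easy to state concretely: a product unit raises each input to a learned exponent and multiplies, while a sigma-pi unit is a weighted sum of products over fixed index subsets. A minimal sketch:

```python
import math

def product_unit(xs, ws):
    """Product unit: y = prod_i x_i ** w_i (the exponents are the learned weights)."""
    return math.prod(x ** w for x, w in zip(xs, ws))

def sigma_pi(xs, terms):
    """Sigma-pi unit: sum_k w_k * prod_{i in I_k} x_i, for fixed index sets I_k."""
    return sum(w * math.prod(xs[i] for i in idx) for w, idx in terms)

print(product_unit([2.0, 3.0], [1.0, 2.0]))                     # 18.0
print(sigma_pi([2.0, 3.0, 4.0], [(1.0, (0, 1)), (0.5, (2,))]))  # 8.0
```

The multiplicative interactions visible here are exactly what makes these units more expressive than summing units, and what the paper's VC-dimension bounds quantify.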

19.
《计算机工程与科学》2017,(10):1934-1940
Ordinary neural networks suffer from low diagnostic accuracy when diagnosing the working conditions of pumping units. This paper proposes a continuous process neural network as the diagnostic model, taking as feature inputs two continuous signals, displacement and load, which directly reflect the geometric features of the dynamometer card. To speed up training, an extreme learning algorithm for the process neural network is proposed: training is converted into a least-squares problem, the hidden-layer output matrix is computed from the sample inputs, the Moore-Penrose generalized inverse is solved by SVD, and finally the hidden-layer output weights are computed. In diagnostic experiments, training is about 5 times faster, and diagnostic accuracy improves by about 8 percentage points compared with an ordinary neural network, verifying the effectiveness of the method.

20.
Neural networks do not readily provide an explanation of the knowledge stored in their weights as part of their information processing. Until recently, neural networks were considered to be black boxes, with the knowledge stored in their weights not readily accessible. Since then, research has resulted in a number of algorithms for extracting knowledge in symbolic form from trained neural networks. This article addresses the extraction of knowledge in symbolic form from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract knowledge from such networks have relied on the hypothesis that networks' states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such a cluster analysis has led to heuristics that either limit the number of clusters that may form during training or limit the exploration of the space of hidden recurrent state neurons. These limitations, while necessary, may lead to decreased fidelity, in which the extracted knowledge may not model the true behavior of a trained network, perhaps not even for the training set. The method proposed here uses a polynomial time, symbolic learning algorithm to infer DFAs solely from the observation of a trained network's input-output behavior. Thus, this method has the potential to increase the fidelity of the extracted knowledge.
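Symbolic inference from input-output observations alone typically starts from a prefix-tree acceptor built over the labelled strings, which DFA-learning algorithms then reduce by merging compatible states. Below is only that first stage, as a sketch; the paper's polynomial-time algorithm is not reproduced here:

```python
def prefix_tree_acceptor(samples):
    """Build a prefix-tree acceptor from (string, accepted?) observations.

    States are integers starting at 0 (the root); label[s] records the
    observed accept/reject verdict for state s, or None if unobserved.
    """
    trans, label, next_id = {}, {0: None}, 1
    for word, accepted in samples:
        s = 0
        for sym in word:
            if (s, sym) not in trans:
                trans[(s, sym)] = next_id
                label[next_id] = None
                next_id += 1
            s = trans[(s, sym)]
        label[s] = accepted
    return trans, label

# Observations of a network trained on even parity of '1's:
trans, label = prefix_tree_acceptor([("", True), ("1", False), ("11", True)])
```

Each (string, verdict) pair here stands in for one query of the trained network's input-output behaviour; state merging would collapse this tree into the two-state parity DFA.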
