A total of 20 similar documents were found (search time: 15 ms)
1.
Efficient triangulation of simple polygons (total citations: 1; self-citations: 0; citations by others: 1)
Godfried Toussaint 《The Visual Computer》1991,7(5-6):280-295
This paper considers the topic of efficiently triangulating a simple polygon, with emphasis on practical and easy-to-implement algorithms. It also describes a new adaptive algorithm for triangulating a simple \(n\)-sided polygon. The algorithm runs in time \(O(n(1+t_0))\) with \(t_0 < n\). The quantity \(t_0\) measures the shape-complexity of the triangulation delivered by the algorithm. More precisely, \(t_0\) is the number of triangles in the resulting triangulation that share zero edges with the input polygon and is, furthermore, related to the shape-complexity of the input polygon. Although the worst-case complexity of the algorithm is \(O(n^2)\), for several classes of polygons it runs in linear time. The practical advantages of the algorithm are that it is simple and does not require sorting or the use of balanced tree structures. On the theoretical side, it is of interest because it is the first polygon-triangulation algorithm whose computational complexity is a function of the output complexity. As a side benefit, we introduce a new measure of the complexity of a polygon triangulation that should find application in other contexts as well.
2.
In the problem area of evaluating complex software systems, there are two distinguished areas of research, development, and application identified by the two buzzwords validation and verification, respectively. From the perspective adopted by the authors, verification is usually more formally based and, thus, can be supported by formal reasoning tools such as theorem provers. The scope of verification approaches is limited by the difficulty of finding a sufficiently complete formalization to build upon. In most realistic problem domains, validation seems to be more appropriate, although it is less stringent in character and, therefore, validation results are often less definite. The aim of this paper is to exemplify a validation approach based on a clear and thoroughly formal theory; in this way, validation and verification are brought closer to each other. To allow for precise and sufficiently clear results, the authors have selected the application domain of algorithms and systems for learning formal languages. By means of the validation toolkit TIC, several series of validation experiments have been performed. The results are presented to illustrate the underlying formal concepts in use. Comparing the validity of one learning approach to the invalidity of another can be seen as an interesting result in its own right.
3.
4.
Susan L. Epstein 《Minds and Machines》1992,2(3):239-265
The extent to which concepts, memory, and planning are necessary to the simulation of intelligent behavior is a fundamental philosophical issue in Artificial Intelligence. An active and productive segment of the AI community has taken the position that multiple low-level agents, properly organized, can account for high-level behavior. Empirical research on these questions with fully operational systems has been restricted to mobile robots that do simple tasks. This paper recounts experiments with Hoyle, a system in a cerebral, rather than a physical, domain. The program learns to perform well and quickly, often outpacing its human creators at two-person, perfect-information board games. Hoyle demonstrates that a surprising amount of intelligent behavior can be treated as if it were situation-determined, that planning is often unnecessary, and that the memory required to support this learning is minimal. Concepts, however, are crucial to this reactive program's ability to learn and perform.
5.
Machine Learning - Minimizing the empirical risk is a popular training strategy, but for learning tasks where the data may be noisy or heavy-tailed, one may require many observations in order to...
6.
Hui-Ya Li Chia-Lung Hung Wen-Jyi Hwang Yi-Tsan Hung 《Journal of Parallel and Distributed Computing》2011,71(2):236-244
This paper presents a novel pipelined architecture for competitive learning (CL), implemented on a field-programmable gate array (FPGA) and used as a hardware accelerator in a system on a programmable chip (SOPC) to reduce computation time. In the architecture, a novel codeword-swapping scheme is adopted so that neuron competitions for different training vectors can operate concurrently. The neuron-updating process is based on a hardware divider with simple table-lookup operations. The divider performs finite-precision calculations to reduce area cost, at the expense of slight degradation in training performance. The CPU time of the NIOS processor executing CL training with the proposed architecture as an accelerator is measured. Experimental results show that the NIOS processor with the proposed accelerator achieves a speedup of up to 254 over its software counterpart running on a general-purpose Pentium IV processor without hardware support.
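The codeword-swapping pipeline itself is FPGA-specific, but the finite-precision update the abstract mentions can be sketched in software: the divider becomes a reciprocal lookup table. A minimal sketch, with illustrative names:

```python
# Minimal software sketch of CL codebook training with the divide replaced by
# a reciprocal lookup table, mimicking the finite-precision hardware update
# described in the abstract. The pipelined swapping scheme is not modeled.
import numpy as np

TABLE_BITS = 10
RECIP = np.array([0.0] + [1.0 / k for k in range(1, 2**TABLE_BITS)])  # 1/n table

def cl_train(data, num_codewords, seed=0):
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), num_codewords, replace=False)].copy()
    counts = np.ones(num_codewords, dtype=np.int64)
    for x in data:
        w = np.argmin(((codebook - x) ** 2).sum(axis=1))   # winning neuron
        counts[w] += 1
        n = min(counts[w], len(RECIP) - 1)                  # saturate table index
        codebook[w] += (x - codebook[w]) * RECIP[n]         # update via lookup
    return codebook
```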
7.
Efficient greedy learning of Gaussian mixture models (total citations: 10; self-citations: 0; citations by others: 10)
This article concerns the greedy learning of Gaussian mixtures. In the greedy approach, mixture components are inserted into the mixture one after the other. We propose a heuristic for searching for the optimal component to insert: a set of candidate new components is generated in a randomized manner, and for each of these candidates we find the locally optimal new component and insert it into the existing mixture. The resulting algorithm resolves the sensitivity to initialization of state-of-the-art methods such as expectation maximization, and has running time linear in the number of data points and quadratic in the (final) number of mixture components. Due to its greedy nature, the algorithm can be particularly useful when the optimal number of mixture components is unknown. Experimental results comparing the proposed algorithm to other methods on density estimation and texture segmentation are provided.
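As a rough illustration of greedy component insertion (not the paper's randomized candidate search), the sketch below seeds each new Gaussian at the point the current mixture explains worst and warm-starts EM from the previous solution:

```python
# Simplified greedy-insertion loop for Gaussian mixtures (scikit-learn).
# Illustrative only: the candidate-selection heuristic here is a crude
# stand-in for the paper's randomized search.
import numpy as np
from sklearn.mixture import GaussianMixture

def greedy_gmm(X, max_components):
    gmm = GaussianMixture(n_components=1).fit(X)
    for k in range(2, max_components + 1):
        worst = X[np.argmin(gmm.score_samples(X))]      # poorly explained point
        means = np.vstack([gmm.means_, worst])          # seed new component there
        weights = np.append(gmm.weights_ * (1 - 1/k), 1/k)
        gmm = GaussianMixture(n_components=k, means_init=means,
                              weights_init=weights).fit(X)
    return gmm
```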
8.
Nayyar A. Zaidi Geoffrey I. Webb Mark J. Carman François Petitjean Wray Buntine Mike Hynes Hans De Sterck 《Machine Learning》2017,106(9-10):1289-1329
Recent advances have demonstrated substantial benefits from learning with both generative and discriminative parameters. On the one hand, generative approaches address the estimation of the parameters of the joint distribution \(\mathrm{P}(y,\mathbf{x})\), which for most network types is very computationally efficient (a notable exception being Markov networks); on the other hand, discriminative approaches address the estimation of the parameters of the posterior distribution \(\mathrm{P}(y|\mathbf{x})\) and are more effective for classification, since they fit \(\mathrm{P}(y|\mathbf{x})\) directly. However, discriminative approaches are less computationally efficient, as the normalization factor in the conditional log-likelihood precludes closed-form parameter estimation. This paper introduces a new discriminative parameter-learning method for Bayesian network classifiers that combines, in an elegant fashion, parameters learned using both generative and discriminative methods. The proposed method is discriminative in nature but uses estimates of generative probabilities to speed up the optimization process. A second contribution is a simple framework to characterize the parameter-learning task for Bayesian network classifiers. We conduct an extensive set of experiments on 72 standard datasets and demonstrate that our proposed discriminative parameterization provides an efficient alternative to other state-of-the-art parameterizations.
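The general idea, generative estimates warm-starting a discriminative optimizer, can be illustrated with a binary naive Bayes model whose closed-form parameters initialize logistic-regression training; this is an illustration of the idea only, not the paper's algorithm for Bayesian network classifiers:

```python
# Illustrative sketch: closed-form generative (naive Bayes) estimates are
# converted to linear weights and used to warm-start discriminative training.
# Assumes binary features and binary labels y in {0, 1}.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression

def warm_started_discriminative(X, y):
    # Generative step: counts give log P(y) and log P(x_i = 1 | y) directly.
    nb = BernoulliNB().fit(X, y)
    lp1, lp0 = nb.feature_log_prob_[1], nb.feature_log_prob_[0]
    l1m1, l1m0 = np.log1p(-np.exp(lp1)), np.log1p(-np.exp(lp0))
    # Bernoulli naive Bayes has a linear decision function; derive its weights.
    w0 = (lp1 - l1m1) - (lp0 - l1m0)
    b0 = nb.class_log_prior_[1] - nb.class_log_prior_[0] + (l1m1 - l1m0).sum()
    # Discriminative step: maximize conditional log-likelihood from that start.
    lr = LogisticRegression(warm_start=True, max_iter=100)
    lr.fit(X, y)                       # creates coef_/intercept_ attributes
    lr.coef_[0], lr.intercept_[0] = w0, b0
    lr.fit(X, y)                       # warm_start resumes from our estimates
    return lr
```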
9.
Efficient parallel processing of competitive learning algorithms (total citations: 1; self-citations: 0; citations by others: 1)
Kentaro Sano Shintaro Momose Hiroyuki Takizawa Hiroaki Kobayashi Tadao Nakamura 《Parallel Computing》2004,30(12):1361-1383
Vector quantization (VQ) is an attractive technique for lossy data compression, which has been a key technology for data storage and transfer. So far, various competitive learning (CL) algorithms have been proposed to design optimal codebooks that minimize quantization error. Although algorithmic improvements of these CL algorithms have achieved faster codebook design than conventional ones, the speedup is limited when large data sets are processed on a single processor. Given the variety of CL algorithms, parallel processing on flexible computing environments, such as general-purpose parallel computers, is in demand for large-scale codebook design. This paper presents a formulation for efficiently parallelizing CL algorithms, suitable for distributed-memory parallel computers with a message-passing mechanism. Based on this formulation, we parallelize three CL algorithms: the Kohonen learning algorithm, the MMPDCL algorithm, and the LOJ algorithm. Experimental results indicate high scalability of the parallel algorithms on three different types of commercially available parallel computers: IBM SP2, NEC AzusA, and a PC cluster.
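A generic data-parallel scheme of this kind (not necessarily the paper's exact formulation of the three algorithms) can be sketched with mpi4py: each rank processes its partition of the training vectors and partial results are combined by a reduction so every rank updates an identical codebook copy.

```python
# One batch VQ/CL iteration in the distributed-memory, message-passing style
# the paper targets: local nearest-codeword search, then a global allreduce.
import numpy as np
from mpi4py import MPI

def parallel_vq_step(local_data, codebook):
    comm = MPI.COMM_WORLD
    k, d = codebook.shape
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for x in local_data:                       # local nearest-codeword search
        w = np.argmin(((codebook - x) ** 2).sum(axis=1))
        sums[w] += x
        counts[w] += 1
    total_sums = np.empty_like(sums)           # combine partial results
    total_counts = np.empty_like(counts)
    comm.Allreduce(sums, total_sums, op=MPI.SUM)
    comm.Allreduce(counts, total_counts, op=MPI.SUM)
    nonempty = total_counts > 0                # move codewords to centroids
    codebook[nonempty] = total_sums[nonempty] / total_counts[nonempty, None]
    return codebook
```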
10.
This paper introduces a novel method of visual learning based on genetic programming, which evolves a population of individuals (image-analysis programs) that process attributed visual primitives derived from raw raster images. The goal is to evolve an image-analysis program that correctly recognizes the training concept (shape). The approach uses a generative evaluation scheme: individuals are rewarded for reproducing the shape of the object being recognized using graphical primitives and elementary background knowledge encoded in predefined operators. The evolutionary run is driven by a multiobjective fitness function to prevent premature convergence and enable effective exploration of the space of solutions. We present the method in detail and verify it experimentally on the task of learning two visual concepts from examples.
11.
12.
This study examines a blended learning setting in an undergraduate course in psychology. A virtual learning environment (VLE) complemented the face-to-face lecture. Usage was voluntary, and the VLE was designed to support the students' learning process. Data from users (N = 80) and non-users (N = 82) from two cohorts were collected. Control variables such as demographic data, attitude towards the learning subject, computer literacy, motivation, learning effort, and available infrastructure were captured by means of a self-report. As a learning outcome, the grade in the final exam was included. For the VLE users, the mean performance in the VLE was taken as a predictor of success in the final exam. Two different groups of VLE users were observed and classified into 'light' and 'heavy' users. The results showed that among those students who had spent two or more hours per week on pre- and post-processing of the lectures, 'heavy' VLE users performed better than non-users in the final exam. Additionally, the 'heavy' users' performance in the VLE was the best predictor of the grade in the final exam. We discuss the results in the context of self-regulated learning competence.
13.
Grondman I Vaandrager M Bușoniu L Babuska R Schuitema E 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2012,42(3):591-602
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
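The shared building block, LLR over a memory of samples, might look like the following minimal sketch (a least-squares plane fitted to the k nearest stored samples; the model-based actor update and memory management are omitted):

```python
# Minimal local linear regression (LLR) approximator: predictions come from a
# least-squares affine fit to the k nearest stored samples.
import numpy as np

class LLR:
    def __init__(self, k=10):
        self.k, self.X, self.Y = k, [], []

    def add(self, x, y):                      # grow the sample memory
        self.X.append(np.asarray(x, float))
        self.Y.append(np.asarray(y, float))

    def predict(self, x):
        X, Y = np.array(self.X), np.array(self.Y)
        d = ((X - x) ** 2).sum(axis=1)
        nn = np.argsort(d)[:self.k]           # k nearest neighbours
        A = np.hstack([X[nn], np.ones((len(nn), 1))])   # affine design matrix
        beta, *_ = np.linalg.lstsq(A, Y[nn], rcond=None)
        return np.append(x, 1.0) @ beta       # evaluate fitted plane at x
```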
14.
The kernel function plays a central role in kernel methods. Most existing methods can only adapt the kernel parameters or the kernel matrix based on empirical data. Recently, Ong et al. introduced the method of hyperkernels, which can be used to learn the kernel function directly in an inductive setting. However, the associated optimization problem is a semidefinite program (SDP), which is very computationally expensive even with recent advances in interior-point methods. In this paper, we show that this learning problem can be equivalently reformulated as a second-order cone program (SOCP), which can then be solved more efficiently than an SDP. Comparison is also made with the kernel-matrix learning method proposed by Lanckriet et al. Experimental results on both classification and regression problems, with toy and real-world data sets, show that our proposed SOCP formulation has significant speedup over the original SDP formulation. Moreover, it yields better generalization than Lanckriet et al.'s method, at a speed that is comparable to, or sometimes even faster than, their quadratically constrained quadratic program (QCQP) formulation.
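For readers unfamiliar with the two cone programs being compared, the textbook standard forms are shown below; these are generic forms, not the paper's specific hyperkernel reformulation.

```latex
% Semidefinite program (SDP): linear objective over the PSD cone.
\min_{x \in \mathbb{R}^n} \; c^{\top} x
\quad \text{s.t.} \quad
x_1 A_1 + \dots + x_n A_n \succeq B .
% Second-order cone program (SOCP): linear objective with norm constraints;
% second-order cones are much cheaper to handle than the PSD cone, which is
% why SOCPs solve faster than SDPs of comparable size.
\min_{x \in \mathbb{R}^n} \; f^{\top} x
\quad \text{s.t.} \quad
\lVert A_i x + b_i \rVert_2 \le c_i^{\top} x + d_i, \quad i = 1,\dots,m .
```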
15.
16.
Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection is one of the dimension reduction techniques, which has been used to allow a better understanding of data and improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels is still a challenging task. This paper proposes a novel method for unsupervised feature selection, which efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features. The paper then presents a novel algorithm for greedily minimizing the reconstruction error based on the features selected so far. The greedy algorithm is based on an efficient recursive formula for calculating the reconstruction error. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with the state-of-the-art methods for unsupervised feature selection.
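A naive version of the greedy criterion (recomputing the reconstruction error from scratch rather than using the paper's recursive formula) can be written in a few lines:

```python
# Naive greedy unsupervised feature selection by reconstruction error: at each
# step, add the feature whose inclusion best reconstructs the full data matrix
# from the selected columns via least squares. The paper's key contribution,
# an efficient recursive update of this error, is not reproduced here.
import numpy as np

def reconstruction_error(A, selected):
    S = A[:, selected]                          # n x |selected| submatrix
    P = S @ np.linalg.pinv(S)                   # projector onto col-space of S
    return np.linalg.norm(A - P @ A, 'fro') ** 2

def greedy_select(A, num_features):
    selected = []
    for _ in range(num_features):
        rest = [j for j in range(A.shape[1]) if j not in selected]
        best = min(rest, key=lambda j: reconstruction_error(A, selected + [j]))
        selected.append(best)
    return selected
```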
17.
Collaborative filtering is an effective personalized-recommendation technique, but as the numbers of users and resources grow, the high-dimensional sparsity of the data severely degrades recommendation quality and slows computation. To address this problem, a collaborative filtering method based on extreme learning machines is studied and implemented. Principal component analysis is used to address the high-dimensional sparsity of the data, and extreme-learning-machine techniques are used to address the slow computation. Experimental results show that the method achieves good generalization performance and learning speed, and satisfies the needs of personalized resource recommendation well.
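The pipeline the abstract describes, PCA compression of sparse user profiles followed by an extreme learning machine (a random hidden layer with output weights solved in closed form), can be sketched as follows; the matrix shapes and rating target are illustrative assumptions:

```python
# PCA + ELM sketch for rating prediction. ratings is assumed to be a
# (num_users, num_items) matrix with zeros for missing entries.
import numpy as np

def pca(X, n_components):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # project onto top components

def elm_fit(Z, y, hidden=200, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(Z.shape[1], hidden))   # random, fixed input weights
    H = np.tanh(Z @ W)                          # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, beta

def elm_predict(Z, W, beta):
    return np.tanh(Z @ W) @ beta

# Usage: Z = pca(ratings, 32); W, beta = elm_fit(Z, target_item_ratings)
```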
18.
Based on uncertainty theory, we investigate the relations among efficiency concepts for multiobjective programming (MOP) with uncertain vectors. We first propose the uncertain MOP model and study its convexity. Then, we define different efficiency concepts, such as expected-value efficiency and expected-value proper efficiency, and establish their relations under the assumed conditions, which are illustrated through two numerical examples. Finally, in the uncertain environment, we apply the theoretical results to a redundancy allocation problem with two objectives in repairable parallel-series systems, and discuss how to obtain different types of efficient solutions according to the decision-maker's preferences.
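One standard way to state the first of these concepts is sketched below; the paper's exact definitions under uncertainty theory may differ in detail.

```latex
% Expected-value efficiency for an uncertain MOP \min_{x \in D} E[f(x,\xi)],
% where \xi is an uncertain vector and E is the expected-value operator:
x^{*} \in D \text{ is expected-value efficient if there is no } x \in D
\text{ with}
\quad
E\left[f_{i}(x,\xi)\right] \le E\left[f_{i}(x^{*},\xi)\right]
\;\; \forall i \in \{1,\dots,m\}
\quad \text{and} \quad
E\left[f_{j}(x,\xi)\right] < E\left[f_{j}(x^{*},\xi)\right]
\;\; \text{for some } j.
```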
19.
Mitchell J. Nathan 《Digital Creativity》2013,24(3-4):101-111
ANIMATE, an interactive computer-animation-based tutor, has been developed as part of an ongoing test of a theory of word-problem comprehension. Tutor feedback is unobtrusive and interpretive: unexpected behavior in the equation-driven animation of a situation highlights equation errors, which the student resolves through iterative debugging. The student has responsibility for learning, goal-setting, and diagnosis. Experimental controls (n = 96) with Motion problems show that improvement cannot be attributed solely to practice, computer use, or use of the situation-based method. Concurrent think-aloud protocols of students (n = 7) solving Motion, Work, and Investment problems over two days (in a pretest-posttest design) uncover specific changes that underlie these improvements. ANIMATE is an effective problem-solving aid, and there is transfer of learning. Problems with impossible situations were acknowledged by median-level subjects (posttest scores between 77% and 85%) but solved blindly by high-level subjects (posttest scores ≥ 95%), suggesting an automatic versus controlled processing dichotomy. On Day 2, subjects spent more time reviewing problem texts and correcting flawed expressions. They developed self-directed debugging skills, reminiscent of expert problem-solving in many domains, without relying on tutor feedback. The system is unintelligent by ITS standards but communicates knowledge to the students, helping them teach themselves approaches to mathematical problem-solving.