A new algorithm, dubbed memory-based adaptive partitioning (MAP) of the search space, is presented in this work; it is intended to provide a better accuracy/speed trade-off in the convergence of multi-objective evolutionary algorithms (MOEAs). The algorithm performs an adaptive, probabilistic refinement of the search space, with no aggregation in the objective space. This work investigated the integration of MAP within the state-of-the-art fast and elitist non-dominated sorting genetic algorithm (NSGA-II). Considerable improvements in convergence were achieved, in terms of both speed and accuracy. Results are provided for several commonly used constrained and unconstrained benchmark problems, and comparisons are made with standalone NSGA-II and a hybrid of NSGA-II with efficient local search (eLS).
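The abstract gives no pseudocode for MAP, so the following is only a minimal sketch of the general idea it describes: partition the search space into cells, keep a memory of which cells produced good offspring, and bias sampling toward those cells. All class and method names, the 1-D setting, and the pseudo-count update rule are illustrative assumptions, not the paper's algorithm.

```python
import random

class AdaptivePartition:
    """Toy memory-based adaptive partitioning of a 1-D search interval.

    Hypothetical illustration only; the paper's MAP operates inside
    NSGA-II on multi-dimensional decision spaces.
    """
    def __init__(self, lo, hi, n_cells=4):
        self.lo, self.hi, self.n = lo, hi, n_cells
        self.success = [1.0] * n_cells   # memory: pseudo-counts of good offspring per cell

    def cell_of(self, x):
        i = int((x - self.lo) / (self.hi - self.lo) * self.n)
        return min(max(i, 0), self.n - 1)

    def sample(self, rng):
        # probabilistic refinement: pick a cell with probability
        # proportional to its accumulated memory, then sample inside it
        total = sum(self.success)
        r, acc = rng.random() * total, 0.0
        for i, s in enumerate(self.success):
            acc += s
            if r <= acc:
                break
        width = (self.hi - self.lo) / self.n
        return self.lo + (i + rng.random()) * width

    def reward(self, x, amount=1.0):
        self.success[self.cell_of(x)] += amount

rng = random.Random(0)
part = AdaptivePartition(0.0, 1.0)
part.reward(0.9, 5.0)          # pretend offspring near 0.9 were non-dominated
xs = [part.sample(rng) for _ in range(1000)]
frac_top = sum(1 for x in xs if x >= 0.75) / len(xs)
```

After the reward, the top cell carries 6 of 9 pseudo-counts, so roughly two thirds of new samples land in it.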
Instance-based learning (IBL), also called memory-based reasoning (MBR), is a commonly used non-parametric learning algorithm. k-nearest neighbor (k-NN) learning is the most popular realization of IBL. Owing to its usability and adaptability, k-NN has been successfully applied to a wide range of applications. In practice, however, two important model parameters can only be set empirically: the number of neighbors (k) and the weights assigned to those neighbors. In this paper, we propose structured ways to set these parameters based on locally linear reconstruction (LLR). We then employ sequential minimal optimization (SMO) to solve the quadratic programming step involved in LLR for classification, reducing the computational complexity. Experimental results from 11 classification and eight regression tasks were promising enough to merit further investigation: not only did LLR outperform conventional weight-allocation methods without much additional computational cost, but it was also found to be robust to changes in k.
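For readers unfamiliar with LLR-style neighbor weights, the standard locally-linear-reconstruction formulation finds the weights that best reconstruct the query from its neighbors, subject to the weights summing to one; this has a closed form via the local Gram matrix. The sketch below, assuming that standard formulation, omits the paper's SMO-based QP step; the helper names and the regularization constant are illustrative.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def llr_weights(query, neighbors, reg=1e-6):
    """Reconstruction weights minimizing ||query - sum_i w_i * x_i||^2
    with sum_i w_i = 1 (Gram-matrix closed form, lightly regularized)."""
    diffs = [[q - v for q, v in zip(query, nb)] for nb in neighbors]
    k = len(neighbors)
    G = [[sum(a * b for a, b in zip(diffs[i], diffs[j])) + (reg if i == j else 0.0)
          for j in range(k)] for i in range(k)]
    w = solve(G, [1.0] * k)
    s = sum(w)
    return [wi / s for wi in w]

# query at the midpoint of two neighbors -> weights close to [0.5, 0.5]
w = llr_weights([0.5, 0.5], [[0.0, 0.0], [1.0, 1.0]])
```

Unlike fixed uniform or inverse-distance weights, these weights adapt to the local geometry of the neighbors, which is what makes the method insensitive to the exact choice of k.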
Searching a dataset for elements similar to a given query element is a core problem in applications that manage complex data, and it has been aided by metric access methods (MAMs). A growing number of applications require indices that can be built faster and rebuilt repeatedly, while also providing faster responses to similarity queries. The increase in main memory capacity and its falling cost also motivate the use of memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf-node pivots during insertion operations; and (iii) extended range and k-NN query algorithms that support the new partitioning method, including a new visiting order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree always built the index faster. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all tests.
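The "slicing into disjoint subspaces" idea can be illustrated with a single node that partitions points into concentric rings around one pivot and prunes rings by the triangle inequality during a range query. This is a deliberately reduced sketch: the actual Onion-tree uses multiple pivots per node, pivot replacement, and dedicated k-NN visit orders, none of which appear here, and the class name is an assumption.

```python
import math

class OnionNode:
    """Toy single-pivot concentric-ring partition (illustrative only)."""
    def __init__(self, pivot, radii):
        self.pivot = pivot
        self.radii = radii                  # ring boundaries, ascending
        self.rings = [[] for _ in range(len(radii) + 1)]

    def ring_of(self, d):
        for i, r in enumerate(self.radii):
            if d <= r:
                return i
        return len(self.radii)

    def insert(self, p):
        self.rings[self.ring_of(math.dist(p, self.pivot))].append(p)

    def range_query(self, q, eps):
        dq = math.dist(q, self.pivot)
        out = []
        for i, ring in enumerate(self.rings):
            lo = self.radii[i - 1] if i > 0 else 0.0
            hi = self.radii[i] if i < len(self.radii) else math.inf
            # prune rings that cannot intersect the query ball
            # (triangle inequality on the pivot distance)
            if dq + eps < lo or dq - eps > hi:
                continue
            out.extend(p for p in ring if math.dist(p, q) <= eps)
        return out

node = OnionNode((0.0, 0.0), [1.0, 2.0])
for p in [(0.5, 0.0), (1.5, 0.0), (3.0, 0.0)]:
    node.insert(p)
hits = node.range_query((1.4, 0.0), 0.2)   # only the middle ring is scanned
```

The pruning test never computes a distance to a point in a skipped ring, which is where the speedup of disjoint-subspace partitioning comes from.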
In this paper, we propose two new filtering algorithms that combine user-based and item-based collaborative filtering schemes. The first, Hybrid-Ib, identifies a reasonably large neighbourhood of similar users and then uses this subset to derive the item-based recommendation model. The second, Hybrid-CF, starts by locating items similar to the one for which we want a prediction and then, based on that neighbourhood, generates its user-based predictions. We begin by describing the execution steps of the algorithms and proceed with extensive experiments. We conclude that our algorithms are directly comparable to existing filtering approaches, with Hybrid-CF producing favourable or, in the worst case, similar results on all selected evaluation metrics.
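The two-stage shape of Hybrid-CF can be sketched as follows: first rank items by similarity to the target item, then compute a user-based weighted prediction restricted to that item neighbourhood. The ratings data, the plain (uncentered) cosine similarity, and the function names are all illustrative assumptions; the paper's exact similarity measures and neighbourhood sizes are not given in the abstract.

```python
from math import sqrt

ratings = {                       # toy user -> item -> rating matrix (made up)
    "u1": {"a": 5, "b": 4, "c": 1},
    "u2": {"a": 4, "b": 5, "c": 1},
    "u3": {"a": 1, "b": 1, "c": 5},
    "u4": {"a": 5, "b": 4},       # target user: item "c" is unrated
}

def cosine(v, w):
    common = set(v) & set(w)
    num = sum(v[k] * w[k] for k in common)
    den = sqrt(sum(x * x for x in v.values())) * sqrt(sum(x * x for x in w.values()))
    return num / den if den else 0.0

def item_vector(item):
    return {u: r[item] for u, r in ratings.items() if item in r}

def predict_hybrid_cf(user, item, n_items=2):
    # step 1: locate items similar to the target item
    others = {i for r in ratings.values() for i in r} - {item}
    sims = sorted(others, reverse=True,
                  key=lambda i: cosine(item_vector(item), item_vector(i)))[:n_items]
    # step 2: user-based prediction restricted to that item neighbourhood
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine({i: ratings[user][i] for i in sims if i in ratings[user]},
                   {i: r[i] for i in sims if i in r})
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

p = predict_hybrid_cf("u4", "c")
```

Hybrid-Ib would invert the stages: pick a neighbourhood of similar users first, then build the item-based model from their ratings only.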
Short message service (SMS) is a widely used service on modern mobile phones that allows users to send and receive short text messages. Current SMS, however, suffers from two problems: inconvenient input and limited message length. Both can be alleviated if the phone can perform automatic word spacing: users need not type spaces when composing messages, and longer messages become possible because stored messages contain no spaces. Automatic word spacing would therefore be a very useful tool for SMS if it could be deployed commercially. The practical obstacles to implementing it on devices such as mobile phones are their small memory and low computing power. To tackle these problems, this paper proposes a combined model of rule-based learning and memory-based learning. According to the experimental results, the combined model achieves higher accuracy than rule-based or memory-based learning alone. In addition, the generated rules are so small and simple that the proposed model is well suited to memory-constrained devices.
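One way such a combination could be wired together, sketched here purely for illustration: apply a compact rule table to each character boundary first, and fall back to a nearest-neighbour vote over stored contexts only when no rule fires. The bigram context, the similarity function, and the toy data are all assumptions; the paper's actual features and rule-induction method are not described in the abstract.

```python
def space_text(text, rules, memory, k=1):
    """Toy rule-first, memory-fallback word spacer.

    rules:  (prev_char, next_char) -> bool (insert a space?)
    memory: list of ((prev_char, next_char), label) training examples,
            used as a k-NN fallback when no rule matches.
    """
    out = []
    for i, ch in enumerate(text):
        out.append(ch)
        if i + 1 >= len(text):
            break
        ctx = (ch, text[i + 1])
        if ctx in rules:                       # rule-based decision first
            insert = rules[ctx]
        else:                                  # memory-based fallback
            def sim(example):
                (p, n), _ = example
                return (p == ctx[0]) + (n == ctx[1])
            votes = sorted(memory, key=sim, reverse=True)[:k]
            insert = sum(lbl for _, lbl in votes) > k / 2
        if insert:
            out.append(" ")
    return "".join(out)

rules = {("b", "c"): True}                     # tiny rule table
memory = [(("a", "b"), 0), (("c", "d"), 0), (("x", "y"), 1)]
spaced = space_text("abcd", rules, memory, k=1)
```

Because the rule table handles the common cases, the expensive memory lookup runs only on the residue, which matches the paper's motivation of fitting the model into small-memory devices.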
Recommender systems have emerged in the e-commerce domain and have been developed to actively recommend appropriate items to online users. Recently developed hybrid recommendation systems help overcome the main drawbacks of Content-Based Filtering (CBF) and Collaborative Filtering (CF). In hybrid systems that combine CF and CBF, the CF part can follow either of two approaches: memory-based or model-based. Both have advantages and disadvantages for item recommendation. Sparsity is one of the main difficulties faced by both approaches. High recommendation accuracy is an important advantage of the memory-based approach, but this approach does not scale to current recommendation systems, whose databases contain huge numbers of items and users. In contrast, the model-based approach generates recommendations with lower accuracy but scales to the large databases of e-commerce recommender systems. To address this trade-off and take advantage of both approaches, we propose a new hybrid recommendation method and evaluate it on a real-world dataset. The aim is to improve efficiency and accuracy by designing a heuristic hybrid recommender method that combines the memory-based and model-based approaches. Specifically, we use an ontology in the CF part and, in the CBF part, improve the ontology structure by dropping the assumption that all edges of the hierarchical (IS-A) relation between concepts in the item ontology carry uniform weight. This ontology structure is exploited to improve accuracy: based on it, we present a new method for measuring semantic similarity that is more accurate than traditional methods and enhances the accuracy of both the CF and CBF parts of our method. In addition, the number of searches required to find similar clusters and neighbor users of the target user is decreased significantly through the ontology, enhanced clustering and the newly proposed algorithm.
Our evaluation on a real-world dataset shows that the proposed method is more scalable and accurate than the benchmark k-Nearest Neighbor (k-NN) and model-based recommendation methods.
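The effect of non-uniform IS-A edges can be illustrated with a small sketch: if edges deeper in the hierarchy cost less, then siblings near the leaves (e.g. two laptop models) come out more similar than siblings near the root, which uniform edge counting cannot express. The toy hierarchy, the 1/(1+depth) weighting, and the function names below are assumptions, not the paper's actual measure.

```python
parents = {                      # toy IS-A hierarchy (illustrative only)
    "laptop": "computer", "desktop": "computer",
    "computer": "electronics", "camera": "electronics",
    "electronics": "item",
}

def path_to_root(c):
    path = [c]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return path

def depth(c):
    return len(path_to_root(c)) - 1

def edge_weight(d):
    # non-uniform edges: links deeper in the hierarchy cost less
    return 1.0 / (1.0 + d)

def weighted_distance(a, b):
    pa, pb = path_to_root(a), path_to_root(b)
    lca = next(c for c in pa if c in pb)     # lowest common ancestor
    def cost(path):
        return sum(edge_weight(depth(c)) for c in path[:path.index(lca)])
    return cost(pa) + cost(pb)

def similarity(a, b):
    return 1.0 / (1.0 + weighted_distance(a, b))
```

With uniform edge weights, laptop/desktop and computer/camera would tie (two edges each); the depth-dependent weighting breaks that tie in favour of the deeper pair.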
The high investment cost of flexible manufacturing systems (FMSs) requires that they be managed effectively and efficiently. Effective FMS management includes addressing machine loading, part scheduling and vehicle dispatching, as well as the quality of the solutions obtained; the problem is therefore inevitably multi-criteria, and the decision maker's judgement can contribute to solution quality and system performance. At the same time, each of these FMS problems is hard to optimize because of its large, discrete solution space (NP-hard). The FMS manager must address each problem either hierarchically (separately) or simultaneously (in aggregate) within a limited time, and the efficiency of the management is tied to this response time.
Here we propose a decision support system (DSS) that uses an evolutionary algorithm (EA) with a memory of "good" past experiments as its solution engine. Even in the absence of an expert decision maker, the performance of the solution engine and the quality of its solutions are therefore maintained.
The experiences of the decision maker(s) are collected in a database (i.e., a memory base) that records problem characteristics, the modeling parameters of the evolutionary program, and the quality of the solution. The solution engine of the decision support system draws on the information in this memory base when solving the current problem: the initial population is created by a memory-based seeding algorithm that incorporates information extracted from the quality solutions available in the database. The performance of the engine is thus designed to improve gradually with each use. Comparisons over a set of randomly generated test problems indicate that EAs with the proposed memory-based seeding perform well. Consequently, the proposed DSS improves not only the effectiveness (better solutions) but also the efficiency (shorter response time) of the decision maker(s).
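The memory-based seeding step can be sketched as a case-retrieval loop: rank stored cases by how close their problem characteristics are to the current problem, inject the best of their solutions into the initial population, and fill the remainder randomly. The field layout of the memory rows, the squared-distance measure, and the seeded fraction are assumptions made for illustration.

```python
import random

def seed_population(problem_features, memory, pop_size, frac_seeded=0.5, rng=None):
    """Toy memory-based seeding of an EA's initial population.

    memory rows: (features, solution, quality) -- an assumed layout;
    the paper's memory base also stores EA modeling parameters.
    """
    rng = rng or random.Random(0)

    def feature_dist(f):
        return sum((a - b) ** 2 for a, b in zip(problem_features, f))

    # rank stored cases by problem similarity, break ties by solution quality
    ranked = sorted(memory, key=lambda row: (feature_dist(row[0]), -row[2]))
    n_seed = min(int(pop_size * frac_seeded), len(ranked))
    seeded = [sol for _, sol, _ in ranked[:n_seed]]
    n_genes = len(seeded[0]) if seeded else 4
    randoms = [[rng.random() for _ in range(n_genes)]
               for _ in range(pop_size - len(seeded))]
    return seeded + randoms

memory = [
    ((1.0, 2.0), [0.1, 0.9], 0.95),   # similar past problem, good solution
    ((9.0, 9.0), [0.7, 0.2], 0.99),   # dissimilar past problem
]
pop = seed_population((1.1, 2.1), memory, pop_size=4)
```

Keeping part of the population random preserves diversity, so the seeded individuals accelerate convergence without collapsing the search onto old solutions.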