In this article we study the tabu search (TS) method in an application for solving an important class of scheduling problems. Tabu search is characterized by integrating artificial intelligence and optimization principles, with particular emphasis on exploiting flexible memory structures, to yield a highly effective solution procedure. We first discuss the problem of minimizing the sum of the setup costs and linear delay penalties when N jobs, arriving at time zero, are to be scheduled for sequential processing on a continuously available machine. A prototype TS method is developed for this problem using the common approach of exchanging the positions of two jobs to transform one schedule into another. A more powerful method is then developed that employs insert moves in combination with swap moves to search the solution space. This method, with the best parameters found for it during preliminary experimentation with the prototype procedure, is used to obtain solutions to a more complex problem that considers setup times in addition to setup costs. In this case, our procedure succeeded in finding optimal solutions to all problems for which these solutions are known, and a better solution to a larger problem for which optimizing procedures exceeded a specified time limit (branch and bound) or reached a memory overflow (branch and bound/dynamic programming) before normal termination. These experiments confirm not only the effectiveness but also the robustness of the TS method, in terms of the solution quality obtained with a common set of parameter choices for two related but different problems.
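The swap-move neighborhood and tabu memory described above can be sketched as follows. This is an illustrative implementation under assumed cost structures (sequence-dependent setup costs plus linear delay penalties), not the paper's exact model or parameter settings; `schedule_cost`, `tabu_search_swap`, and the tenure/aspiration rules are hypothetical simplifications.

```python
# Minimal tabu search with swap moves for single-machine sequencing (a sketch,
# not the paper's procedure): minimize sequence-dependent setup costs plus
# linear delay penalties for N jobs arriving at time zero.
import itertools

def schedule_cost(seq, proc, penalty, setup):
    """Total cost = setup costs along the sequence + linear delay penalties."""
    t, cost, prev = 0, 0.0, None
    for j in seq:
        if prev is not None:
            cost += setup[prev][j]
        t += proc[j]                     # completion time of job j
        cost += penalty[j] * t           # linear delay penalty
        prev = j
    return cost

def tabu_search_swap(proc, penalty, setup, iters=200, tenure=5):
    n = len(proc)
    current = list(range(n))
    best, best_cost = current[:], schedule_cost(current, proc, penalty, setup)
    tabu = {}                            # swapped pair -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i, k in itertools.combinations(range(n), 2):
            move = (min(current[i], current[k]), max(current[i], current[k]))
            neigh = current[:]
            neigh[i], neigh[k] = neigh[k], neigh[i]
            c = schedule_cost(neigh, proc, penalty, setup)
            # aspiration criterion: a tabu move is allowed if it beats the best
            if tabu.get(move, -1) < it or c < best_cost:
                candidates.append((c, move, neigh))
        if not candidates:
            continue
        c, move, current = min(candidates, key=lambda x: x[0])
        tabu[move] = it + tenure         # forbid re-swapping this pair for a while
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost
```

The tabu list here forbids reversing a recent swap for `tenure` iterations, the flexible-memory idea the abstract highlights; the full method additionally mixes in insert moves.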
The ability of fermentative CO2 to blow off the volatile compounds that are synthesized during fermentation has been studied. Model solutions simulating a fermenting must were purged at different CO2 flow rates and temperatures, and the amount of volatile compounds blown off by the stream of CO2 was recorded by high-resolution gas chromatography. Data showed that under normal fermenting conditions, fatty acid ethyl esters and some fusel alcohol acetates are blown off the solution at a high rate. The maximum loss rate was observed for ethyl decanoate. The purging speed doubles when the temperature increases from 17 °C to 27 °C. Losses can be described by a linear model and are a function of the compound and the flow rate of CO2. These models allow us to reconstruct the volatile-synthesis-versus-time functions through graphical calculation and to estimate the proportions of volatile material retained, hydrolysed and purged. Synthesis takes place during the tumultuous period of fermentation, together with the CO2 production that blows off the volatile material. Hydrolysis takes place in the last stages of fermentation. In a 10-l open fermenter, up to 80% of the volatile material can be blown off while an average of 10% is retained. Residual esterase activity accounts for about 20% of the total amount of ester synthesized.
Very low bit-rate video coding has recently become one of the most important areas of image communication, and a large variety of applications have already been identified. Since conventional approaches are reaching a saturation point in terms of coding efficiency, a new generation of video coding techniques, aiming at a deeper “understanding” of the image, is being studied. In this context, image analysis, particularly the identification of objects or regions in images (segmentation), is a very important step. This paper describes a segmentation algorithm based on split and merge. Images are first simplified using mathematical morphology operators, which eliminate perceptually less relevant details. The simplified image is then split according to a quadtree structure, and the resulting regions are finally merged in three steps: merge, elimination of small regions, and control of the number of regions.
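The quadtree split step can be illustrated with a minimal sketch. This assumes a simple homogeneity test (max-minus-min intensity below a threshold) on a square grayscale image; the paper's morphological simplification and three-stage merge are not reproduced here, and `quadtree_split` is a hypothetical name.

```python
# Minimal quadtree split (a sketch of the split stage only): recursively
# divide a square block into four quadrants until each block passes a
# simple homogeneity test, collecting the homogeneous leaves as regions.
def quadtree_split(img, x, y, size, threshold, regions):
    """Split the size x size block at (x, y); append homogeneous leaves."""
    vals = [img[y + dy][x + dx] for dy in range(size) for dx in range(size)]
    if size == 1 or max(vals) - min(vals) <= threshold:
        regions.append((x, y, size))      # homogeneous leaf region
        return
    h = size // 2
    for ox, oy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree_split(img, x + ox, y + oy, h, threshold, regions)
```

A subsequent merge pass would then fuse adjacent leaves with similar statistics, which is where the three merging steps of the abstract come in.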
Parallel programming skills may require a long time to acquire. “Thinking in parallel” is a skill that requires time, effort, and experience. In this work, we propose to facilitate students’ learning process in parallel programming by using instant messaging. Our aim was to find out whether students’ interaction through instant messaging tools is beneficial for the learning process. In order to do so, we asked several students of an HPC course of the Master’s degree in Computer Science of the University of León to develop a specific parallel application, each of them using a different application programming interface: OpenMP, MPI, CUDA, or OpenCL. Even though the APIs used are different, there are common points in the design process. We encouraged students to interact with each other by using Gitter, an instant messaging tool for GitHub users. Our analysis of the communications and results demonstrates that the direct interaction of students through the Gitter tool has a positive impact on the learning process.
Applied Intelligence - The 17 Sustainable Development Goals (SDGs) established by the United Nations Agenda 2030 constitute a global blueprint and instrument for peace and prosperity...
Neural Computing and Applications - Preserving red-chili quality is of utmost importance, and authorities demand quality-control techniques to detect and classify impurities. For...
We focus on two aspects of face recognition: feature extraction and classification. We propose a two-component system, introducing Lattice Independent Component Analysis (LICA) for feature extraction and Extreme Learning Machines (ELM) for classification. In previous works we have proposed LICA for a variety of image processing tasks. The first step of LICA is to identify strong lattice independent components from the data. In the second step, the set of strong lattice independent vectors is used for linear unmixing of the data, obtaining a vector of abundance coefficients. The resulting abundance values are used as features for classification, specifically for face recognition. Extreme Learning Machines are accurate and fast-learning classification methods based on the random generation of the input-to-hidden-unit weights, followed by the resolution of a system of linear equations to obtain the hidden-to-output weights. The LICA-ELM system has been tested against state-of-the-art feature extraction methods and classifiers, outperforming them when performing cross-validation on four large unbalanced face databases.
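The ELM training scheme described above (random fixed input weights, linear solve for output weights) can be sketched in a few lines. This is an illustrative pure-Python version, not the paper's LICA-ELM pipeline; the regression formulation, the ridge term, and the helper names are assumptions for the sketch.

```python
# Minimal Extreme Learning Machine sketch: input-to-hidden weights are
# random and never trained; hidden-to-output weights come from solving a
# (ridge-regularized) linear least-squares problem on the hidden features.
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_train(X, y, hidden=10, ridge=1e-6, seed=0):
    rng = random.Random(seed)
    d = len(X[0])
    # random, fixed input-to-hidden weights; tanh hidden activations
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    H = [[math.tanh(sum(w[k] * x[k] for k in range(d))) for w in W] for x in X]
    # beta = (H^T H + ridge * I)^(-1) H^T y  (ridge keeps the system solvable)
    m = len(H)
    HtH = [[sum(H[i][a] * H[i][b] for i in range(m)) + (ridge if a == b else 0.0)
            for b in range(hidden)] for a in range(hidden)]
    Hty = [sum(H[i][a] * y[i] for i in range(m)) for a in range(hidden)]
    return W, solve(HtH, Hty)

def elm_predict(W, beta, x):
    h = [math.tanh(sum(w[k] * x[k] for k in range(len(x)))) for w in W]
    return sum(h[a] * beta[a] for a in range(len(beta)))
```

Because only the linear output layer is fitted, training reduces to one matrix solve, which is the source of the speed the abstract mentions; in the paper's system the inputs `X` would be the LICA abundance vectors.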
Membrane Computing is a discipline that aims to abstract formal computing models, called membrane systems or P systems, from the structure and functioning of living cells, as well as from the cooperation of cells in tissues, organs, and other higher-order structures. This framework provides polynomial-time solutions to NP-complete problems by trading space for time, and its efficient simulation poses challenges in three different respects: the intrinsic massive parallelism of P systems, an exponential computational workspace, and a non-intensive floating-point nature. In this paper, we analyze the simulation of a family of recognizer P systems with active membranes that solves the Satisfiability problem in linear time on different instances of Graphics Processing Units (GPUs). For an efficient handling of the exponential workspace created by the P system computation, we enable different data policies to increase memory bandwidth and exploit data locality through tiling and dynamic queues. The parallelism inherent to the target P system is also managed to demonstrate that GPUs offer a valid alternative for high-performance computing at a considerably lower cost. Furthermore, scalability is demonstrated up to the largest problem size we were able to run, considering the new hardware generation from Nvidia, Fermi, for a total speed-up exceeding four orders of magnitude when running our simulations on the Tesla S2050 server.
Nowadays, the impact of technological developments on improving human activities is becoming more evident, and e-learning is no exception. It is common to use systems that assist the daily activities of students and teachers. Typically, e-learning recommender systems focus on students; however, teachers can also benefit from these types of tools. A recommender system can propose actions and resources that facilitate teaching activities, such as structuring learning strategies. In any case, a complete representation of the user is required. This paper shows how a fuzzy ontology can be used to represent user profiles in a recommender engine and enhance the user's activities in e-learning environments. A fuzzy ontology is an extension of domain ontologies for addressing the problems of uncertainty in sharing and reusing knowledge on the Semantic Web. The user profile is built from learning objects published by the user himself in a learning object repository. The initial experiment confirms that the automatically obtained fuzzy ontology is a good representation of the user's preferences. The experimental results also indicate that the presented approach is useful and warrants further research in information recommendation and retrieval.
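The core idea of matching a fuzzy user profile against resources can be illustrated with a toy sketch. This is not the paper's ontology model: the dict-of-degrees representation, `fuzzy_overlap`, and `recommend` are hypothetical simplifications, with each profile mapping a domain concept to a membership degree in [0, 1].

```python
# Toy fuzzy-profile matching sketch: a user profile and a resource are both
# fuzzy sets over domain concepts; a resource is ranked by the normalized
# overlap (sum of min-memberships) between its concepts and the profile's.
def fuzzy_overlap(profile, resource):
    """Sum of min-memberships over shared concepts, normalized by the resource."""
    if not resource:
        return 0.0
    shared = set(profile) & set(resource)
    return sum(min(profile[c], resource[c]) for c in shared) / sum(resource.values())

def recommend(profile, resources, top=1):
    """Return the `top` resource ids with the highest overlap scores."""
    ranked = sorted(resources,
                    key=lambda r: fuzzy_overlap(profile, resources[r]),
                    reverse=True)
    return ranked[:top]
```

In the paper's setting the degrees would come from the learning objects the user has published, so the profile is built automatically rather than hand-assigned as here.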