Similar Documents
20 similar documents found.
1.
The distributed flexible job shop scheduling problem (DFJSP), which is an extension of the flexible job shop scheduling problem, is a famous NP-complete combinatorial optimization problem. This problem is widespread in the manufacturing industries and comprises the following three subproblems: the assignment of jobs to factories, the scheduling of operations to machines, and the sequencing of operations on machines. However, studies on the DFJSP are scarce because of its difficulty. This paper proposes an effec...

2.
To conduct a large-scale hydrologic-response and landform evolution simulation at high resolution, a complex physics-based numerical model, the Integrated Hydrology Model (InHM), was revised to exploit cluster parallel computing. The parallelized InHM (ParInHM) divides the simulated area into multiple catchments based on geomorphologic features and generates boundary-value problems for each catchment to construct simulation tasks, which are then dispatched to different computers to start the simulation. Landform evolution is considered during the simulation and implemented within the same framework. The dynamic Longest-Processing-Time (LPT) first scheduling algorithm is applied to job management. In addition, a pause-integrate-divide-resume routine is used to ensure hydrologic validity during the simulation period. The routine repeats until the entire simulation period is finished. ParInHM has been tested on a computer cluster using 16 processors to simulate 100 years of hydrologic response and soil erosion for the 117-km² Kaho'olawe Island in the Hawaiian Islands under two different mesh resolutions. The efficiency of ParInHM was evaluated by comparing the performance of the cluster system with different numbers of processors, as well as the performance of the non-parallelized system without domain decomposition. The results of this study show that it is feasible to conduct a regional-scale hydrologic-response and sediment transport simulation at high resolution without demanding significant computing resources.
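The LPT-first dispatching mentioned above can be illustrated with a short sketch. The following Python snippet is a generic longest-processing-time-first scheduler, not the actual ParInHM job manager; the function name `lpt_schedule` and the assumption that per-catchment runtimes are known estimates are introduced only for illustration.

```python
import heapq

def lpt_schedule(task_times, n_workers):
    """Assign tasks to workers using Longest-Processing-Time-first.

    task_times: dict mapping a task id to its estimated runtime.
    Returns a dict mapping each worker index to its list of task ids.
    """
    # Workers are kept in a min-heap keyed by their accumulated load.
    heap = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    # Longest tasks are dispatched first, each to the least-loaded worker.
    for task, runtime in sorted(task_times.items(), key=lambda kv: -kv[1]):
        load, worker = heapq.heappop(heap)
        assignment[worker].append(task)
        heapq.heappush(heap, (load + runtime, worker))
    return assignment

# Example: six hypothetical catchment simulation tasks on three processors.
print(lpt_schedule({"c1": 9, "c2": 7, "c3": 6, "c4": 5, "c5": 4, "c6": 2}, 3))
```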

3.
4.
To make the on-board computer system of a satellite more dependable and better able to meet real-time requirements, a fault-tolerant scheduling algorithm with high-priority recovery for the on-board computer system is proposed in this paper. This algorithm can schedule the on-board fault-tolerant tasks in real time. Owing to the use of a dependability cost, the overhead of scheduling the fault-tolerant tasks can be reduced. The high-priority recovery mechanism improves the response to recovery tasks. The fault-tolerant scheduling model is presented, and simulation results validate the correctness and feasibility of the proposed algorithm.

5.
Constructing a successful knowledge base to support efficient adaptive scheduling of complex manufacturing systems is a key issue. Therefore, a hybrid artificial neural network (ANN)-based scheduling knowledge acquisition algorithm is presented in this paper. We combined a genetic algorithm (GA) with simulated annealing (SA) to develop a hybrid optimization method, in which the GA provides a parallel search architecture and SA increases the probability of escaping local optima and the ability to search neighborhoods. The hybrid method was used to determine the optimal attribute subset of the manufacturing system and the optimal topology and parameters of the ANN under different scheduling objectives; the ANN was used to evaluate the fitness of chromosomes in the method and to generate the scheduling knowledge after the optimal attribute subset and the optimal ANN topology and parameters were obtained. The experimental results demonstrate that the proposed algorithm produces significant performance improvements over other machine learning-based algorithms.
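As a concrete illustration of how a GA can be hybridized with SA-style acceptance for attribute-subset selection, a minimal Python sketch follows. It is not the paper's algorithm: the toy fitness function simply counts selected attributes and stands in for the ANN-based evaluation described in the abstract, and all names (`hybrid_ga_sa`, the cooling parameters) are assumptions made for this example.

```python
import math
import random

def hybrid_ga_sa(fitness, dim, pop_size=20, generations=50, t0=1.0, cooling=0.95):
    """Minimal GA/SA hybrid for a binary attribute-subset selection problem.

    fitness: callable taking a 0/1 list and returning a value to maximise
             (e.g. cross-validated accuracy of an ANN trained on that subset).
    """
    pop = [[random.randint(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)
            child = a[:cut] + b[cut:]                  # one-point crossover
            mutant = child[:]
            mutant[random.randrange(dim)] ^= 1         # bit-flip mutation
            # SA-style acceptance: keep worse mutants with Boltzmann probability.
            delta = fitness(mutant) - fitness(child)
            if delta >= 0 or random.random() < math.exp(delta / temp):
                child = mutant
            children.append(child)
        pop = children
        temp *= cooling                                # cooling schedule
    return max(pop, key=fitness)

# Toy example: prefer subsets with many selected attributes among 10.
print(hybrid_ga_sa(lambda bits: sum(bits), dim=10))
```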

6.
The layered control architecture is designed for the needs of multirobot intelligent team formation. There are three levels: the cooperation task level, the coordination behavior level, and the action planning level. The cooperation task level uses the potential grid method, which improves the safety of the path and reduces the computational complexity. The coordination behavior level uses reinforcement learning, which strengthens the robots' intelligence. The action planning level uses fuzzy planning methods to realize action matching. The communication model transfers messages between the different levels. This architecture exhibits not only the independence and intelligence of the single robot but also the cooperation and coordination among the robots. In each level, the task is distributed reasonably and clearly. Finally, the feasibility of the architecture is verified in simulation experiments. The architecture is highly extensible and can be used in similar systems.

7.
Motivated by industrial applications, we study a single-machine scheduling problem in which all the jobs are mutually independent and available at time zero. The machine processes the jobs sequentially and is not idle if there is any job to be processed. The operation of each job cannot be interrupted, and the machine cannot process more than one job at a time. A setup time is needed if the machine switches from one type of job to another. The objective is to find an optimal schedule with the minimal total job completion time. Since the sum of the jobs' processing times is constant, the objective reduces to minimizing the sum of setup times. Ant colony optimization (ACO) is a meta-heuristic that has recently been applied to scheduling problems. In this paper we propose an improved ACO algorithm, the Branching Ant Colony with Dynamic Perturbation (DPBAC) algorithm, for the single-machine scheduling problem. DPBAC improves traditional ACO in the following aspects: introducing a branching method to choose starting points, improving the state transition rules, introducing a mutation method to shorten tours, improving the pheromone updating rules, and introducing a conditional dynamic perturbation strategy. Computational results show that the DPBAC algorithm is superior to the traditional ACO algorithm.
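To make the ACO framework concrete, here is a bare-bones ant colony sketch in Python for sequencing jobs so that total setup time is minimized. It implements only plain ACO, not the branching, mutation, or dynamic-perturbation mechanisms of DPBAC; the function name and parameter defaults are illustrative assumptions, and a setup-time matrix is assumed to be given.

```python
import random

def aco_min_setup(setup, n_jobs, ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.1):
    """Plain ant colony optimisation for sequencing jobs on one machine.

    setup[i][j] is the setup time incurred when job j follows job i.
    Returns the best job sequence found and its total setup time.
    """
    tau = [[1.0] * n_jobs for _ in range(n_jobs)]        # pheromone trails
    best_seq, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            seq = [random.randrange(n_jobs)]
            while len(seq) < n_jobs:
                i = seq[-1]
                cand = [j for j in range(n_jobs) if j not in seq]
                # Selection probability ~ pheromone**alpha * (1/setup)**beta.
                w = [tau[i][j] ** alpha * (1.0 / (setup[i][j] + 1e-9)) ** beta
                     for j in cand]
                seq.append(random.choices(cand, weights=w)[0])
            cost = sum(setup[a][b] for a, b in zip(seq, seq[1:]))
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        # Evaporate, then reinforce the best tour found so far.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for a, b in zip(best_seq, best_seq[1:]):
            tau[a][b] += 1.0 / (best_cost + 1e-9)
    return best_seq, best_cost
```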

8.
High resolution cameras and multi-camera systems are being used in areas of video surveillance such as security of public places, traffic monitoring, and military and satellite imaging. This leads to a demand for computational algorithms for real-time processing of high resolution videos. Motion detection and background separation play a vital role in capturing the object of interest in surveillance videos, but as we move towards high resolution cameras, the time complexity of the algorithms increases, so they fail to be part of real-time systems. Parallel architectures provide a powerful platform for working efficiently with complex algorithmic solutions. In this work, a method is proposed for accurately identifying the moving objects in videos using adaptive background modelling, motion detection, and object estimation. The pre-processing part includes an adaptive block background model and a dynamically adaptive thresholding technique to estimate the moving objects. The post-processing includes an efficient parallel connected component labelling algorithm to accurately estimate the objects of interest. New parallel processing strategies are developed at each stage of the algorithm to reduce the time complexity of the system. The algorithm achieved an average speedup of 12.26 times for lower resolution video frames (320×240, 720×480, 1024×768) and 7.30 times for higher resolution video frames (1360×768, 1920×1080, 2560×1440) on GPU, which is superior to CPU processing. The algorithm was also tested with different numbers of threads in a thread block, and the minimum execution time was achieved with a 16×16 thread block. Finally, the algorithm was tested on a night sequence where the amount of light in the scene is very low, and it still gave a significant speedup and accuracy in determining the objects.
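The adaptive block background and dynamic thresholding steps can be sketched in a few lines. The NumPy version below is a sequential, single-threaded illustration of the idea, not the paper's GPU/CUDA implementation; the function name, the per-block statistics threshold, and the learning rate `alpha` are assumptions made for this example.

```python
import numpy as np

def detect_motion(frame, background, block=16, k=2.5, alpha=0.05):
    """Toy block-wise background subtraction with a dynamic threshold.

    frame, background: 2-D grayscale arrays of identical shape.
    Returns (foreground mask, updated background).
    """
    diff = np.abs(frame.astype(np.float32) - background)
    h, w = diff.shape
    mask = np.zeros_like(diff, dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = diff[y:y + block, x:x + block]
            # The threshold adapts to the local statistics of each block.
            thr = tile.mean() + k * tile.std()
            mask[y:y + block, x:x + block] = tile > thr
    # The background is updated only where no motion was detected.
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)
    return mask, background
```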

9.
This paper focuses on the accuracy enhancement of parallel kinematic machines through kinematic calibration. In the calibration process, the construction of a well-structured identification Jacobian matrix and the measurement of the end-effector position and orientation are two main difficulties. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation using the unit virtual velocity method. A generalized distance error model is presented to avoid measuring the position and orientation directly, which is difficult. Finally, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown in the paper that the proposed approach can be applied to other types of parallel manipulators.

10.
An analysis of image combination in the SPECAN algorithm is given in the time-frequency domain in detail, and a new image combination method is proposed. For four-look processing, the data of one sub-aperture in every three sub-apertures is processed in this combination method. Continuous sub-aperture processing in the SPECAN algorithm is thereby realized, and the processing efficiency can be dramatically increased. A new parameter is also put forward to measure the efficiency of SAR image processing. Finally, raw RADARSAT data are used to test the method, and the results prove that this method is feasible for the SPECAN algorithm of spaceborne SAR and can improve processing efficiency. The SPECAN algorithm with this method can be used for quick-look imaging.

11.
A method for modeling parallel machine scheduling problems with fuzzy parameters and precedence constraints based on the credibility measure is provided. For the given n jobs to be processed on m machines, it is assumed that the processing times and the due dates are nonnegative fuzzy numbers and all the weights are positive, crisp numbers. Based on the credibility measure, three parallel machine scheduling problems and a goal-programming model are formulated. Feasible schedules are evaluated not only by their objective values but also by the credibility degree of satisfaction of their precedence constraints. A genetic algorithm is utilized to find the best solutions in a short period of time. An illustrative numerical example is also given. Simulation results show that the proposed models are effective and can deal with parallel machine scheduling problems with fuzzy parameters and precedence constraints based on the credibility measure.
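As a worked illustration of the credibility measure on which such models rest, the snippet below computes the credibility that a triangular fuzzy completion time meets a due date, using the standard formula Cr = (Pos + Nec)/2 from credibility theory. The triangular membership shape and the function name are assumptions for this example; the paper's goal-programming models are not reproduced here.

```python
def credibility_leq(a, b, c, t):
    """Credibility that a triangular fuzzy number (a, b, c) is <= t.

    Uses Cr = (Pos + Nec) / 2, where Pos is the possibility and Nec the
    necessity of the event.
    """
    if t <= a:
        pos = 0.0
    elif t < b:
        pos = (t - a) / (b - a)
    else:
        pos = 1.0
    if t <= b:
        nec = 0.0
    elif t < c:
        nec = (t - b) / (c - b)
    else:
        nec = 1.0
    return 0.5 * (pos + nec)

# Example: credibility that a fuzzy completion time (3, 5, 8) meets a due date of 6.
print(credibility_leq(3, 5, 8, 6))  # 0.5 * (1.0 + 1/3) ~ 0.667
```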

12.
Behavior-based dual dynamic agent architecture
The objective of the architecture is to make an agent promptly and adaptively accomplish tasks in a real-time, dynamic environment. The architecture is composed of an elementary level behavior layer and a high level behavior layer. In the elementary level behavior layer, a reactive architecture is introduced to make the agent react promptly to events; in the high level behavior layer, a deliberation architecture is used to enhance the intelligence of the agent. A confidence degree concept is proposed to combine the two layers of the architecture. An agent decision making process based on the architecture is also presented. The results of experiments with a RoboSoccer simulation team show that the proposed architecture and decision process are successful.

13.
Polynomial-time randomized algorithms were constructed to approximately solve optimal robust performance controller design problems in a probabilistic sense, and a rigorous mathematical justification of the approach was given. The randomized algorithms here are based on a property from statistical learning theory known as uniform convergence of empirical means (UCEM). It is argued that, in order to assess the performance of a controller as the plant varies over a pre-specified family, it is better to use the average performance of the controller as the objective function to be optimized, rather than its worst-case performance. The approach is shown to be efficient through an example.
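To show the flavour of optimizing average rather than worst-case performance, here is a tiny Monte Carlo sketch in Python. It is not the paper's randomized algorithm and carries no UCEM sample-size bound; the plant family (a scalar gain uniform on [0.5, 1.5]), the quadratic cost, and all names are assumptions introduced purely for illustration.

```python
import random
import statistics

def average_performance(cost, sample_plant, controller, n_samples=500):
    """Monte Carlo estimate of a controller's average cost over a plant family.

    sample_plant: draws one plant parameter from an assumed distribution.
    cost: evaluates the closed-loop performance of `controller` on that plant.
    With enough samples the empirical mean approaches the true expected
    performance, which is the quantity being optimized.
    """
    return statistics.mean(cost(controller, sample_plant()) for _ in range(n_samples))

# Toy example: scalar plant gain uncertain in [0.5, 1.5], quadratic mismatch cost.
plants = lambda: random.uniform(0.5, 1.5)
cost = lambda k, g: (1.0 - k * g) ** 2
best_gain = min((k / 10 for k in range(5, 21)),
                key=lambda k: average_performance(cost, plants, k))
print(best_gain)
```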

14.
This paper presents a novel parallel implementation technology for wave-based structural health monitoring (SHM) in laminated composite plates. The wavelet-based B-spline wavelet on the interval (BSWI) element is constructed according to Hamilton's principle, and the element-by-element algorithm is executed in parallel on a graphics processing unit (GPU) using the compute unified device architecture (CUDA) to obtain the full wave-field responses accurately. Guided by the Fourier spectral analysis method, the Mindlin plate theory is selected for wave modeling of laminated composite plates, whereas the Kirchhoff plate theory predicts unreasonable phase and group velocities. Numerical examples involving wave propagation in laminated composite plates without and with a crack are performed and discussed in detail. The parallel implementation on GPU is accelerated 146 times compared with the same wave motion problem executed on a central processing unit (CPU). The validity and accuracy of the proposed parallel implementation are also demonstrated by comparison with the conventional finite element method (FEM), and the computation time has been reduced from hours to minutes. The damage size and location have been successfully determined from the wave propagation results using delay-and-sum (DAS) imaging. The results show that the proposed parallel implementation of the wavelet finite element method (WFEM) is very appropriate and efficient for wave-based SHM in laminated composite plates.
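The delay-and-sum (DAS) localization step can be summarized with a short sketch. The following Python/NumPy function is a generic DAS imager over baseline-subtracted signals, not the paper's WFEM/GPU pipeline; the data layout (dictionaries keyed by actuator-sensor pairs), a single constant group velocity, and all names are assumptions for this example.

```python
import numpy as np

def das_image(signals, sensors, grid, wave_speed, fs):
    """Delay-and-sum imaging of scattered guided-wave signals.

    signals: dict {(tx, rx): 1-D residual signal (damaged minus baseline)}.
    sensors: dict {sensor id: (x, y) coordinates in metres}.
    grid: iterable of candidate (x, y) pixel positions.
    Returns a list of (x, y, energy); the brightest pixel indicates damage.
    """
    image = []
    for gx, gy in grid:
        value = 0.0
        for (tx, rx), sig in signals.items():
            # Travel distance actuator -> pixel -> sensor for the scattered wave.
            d = (np.hypot(gx - sensors[tx][0], gy - sensors[tx][1]) +
                 np.hypot(gx - sensors[rx][0], gy - sensors[rx][1]))
            idx = int(round(d / wave_speed * fs))
            if idx < len(sig):
                value += abs(sig[idx])
        image.append((gx, gy, value))
    return image
```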

15.
Replication is an approach often used to speed up the execution of queries submitted to a large dataset. A compile-time/run-time approach is presented for minimizing the response time of 2-dimensional range queries when a distributed replica of a dataset exists. The aim is to partition the query payload (and its range) into subsets and distribute those to the replica nodes in a way that minimizes a client's response time. However, since the query size and the distribution characteristics of the data (data-dense/sparse regions) in varying ranges are not known a priori, performing efficient load balancing and parallel processing over the unpredictable workload is difficult. A technique based on the creation and manipulation of dynamic spatial indexes for query payload estimation in distributed queries is proposed. The effectiveness of this technique is demonstrated on queries for the analysis of archived earthquake-generated seismic data records.
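A simplified version of the payload-partitioning idea is sketched below: the x-extent of the range query is cut into strips whose estimated record counts are proportional to each replica node's capacity. This is a strip-based approximation written for illustration, not the paper's dynamic spatial-index technique; the `density` callback (standing in for an index lookup), the slice count, and the function name are assumptions.

```python
def split_range_query(x_lo, x_hi, y_lo, y_hi, density, capacities, slices=200):
    """Partition a 2-D range query into per-replica strips of balanced payload.

    density(x0, x1, y0, y1): estimated number of records in a sub-range
    (e.g. read from a spatial index); capacities: each replica node's
    relative processing speed.
    """
    width = (x_hi - x_lo) / slices
    loads = [density(x_lo + i * width, x_lo + (i + 1) * width, y_lo, y_hi)
             for i in range(slices)]
    total = sum(loads) or 1.0
    targets = [total * c / sum(capacities) for c in capacities]

    parts, start, acc, node = [], x_lo, 0.0, 0
    for i, load in enumerate(loads):
        acc += load
        last_slice = (i == slices - 1)
        # Close the current strip when its target payload is reached,
        # or when the query range is exhausted.
        if (acc >= targets[node] and node < len(capacities) - 1) or last_slice:
            end = x_hi if last_slice else x_lo + (i + 1) * width
            parts.append((node, (start, end, y_lo, y_hi)))
            start, acc, node = end, 0.0, node + 1
    return parts
```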

16.
The constrained long-term production scheduling problem (CLTPSP) of open pit mines has been extensively studied in the past few decades because of its wide application in mining projects and the computational challenges it poses as an NP-hard problem. This problem has major practical significance because the effectiveness of the schedules obtained has a strong economic impact for any mining project. Despite the rapid theoretical and technical advances in this field, heuristics are still the only viable approach for large-scale industrial applications. This work presents an approach combining genetic algorithms (GAs) and Lagrangian relaxation (LR) to optimally determine the CLTPSP of open pit mines. GAs are stochastic, parallel search algorithms based on natural selection and the process of evolution. The LR method is known for handling large-scale separable problems; however, its convergence to the optimal solution can be slow. The proposed Lagrangian relaxation and genetic algorithms (LR-GAs) approach incorporates genetic algorithms into the Lagrangian relaxation method to update the Lagrangian multipliers. This improves the performance of the Lagrangian relaxation method in solving the CLTPSP. Numerical results demonstrate that using GAs to update the multipliers speeds up the convergence of the LR method. Consequently, a highly near-optimal solution to the CLTPSP can be achieved by the LR-GAs.
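The core idea of letting a GA drive the multiplier update can be illustrated with a compact sketch. The code below evolves a population of non-negative multiplier vectors against a dual-bound evaluator; the concave toy dual function stands in for solving the relaxed open-pit subproblem, and every name and parameter here is an assumption for illustration rather than the paper's LR-GA implementation.

```python
import random

def ga_update_multipliers(dual_value, n_mult, pop_size=20, generations=40, sigma=0.5):
    """Search non-negative Lagrangian multipliers with a small real-coded GA.

    dual_value(lambdas): solves the relaxed subproblem for a multiplier vector
    and returns the dual bound; larger is better because the dual is maximised.
    This replaces the classic subgradient step.
    """
    pop = [[random.uniform(0, 1) for _ in range(n_mult)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=dual_value, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]              # arithmetic crossover
            child = [max(0.0, g + random.gauss(0, sigma)) for g in child]  # mutation, keep lambda >= 0
            children.append(child)
        pop = survivors + children
    return max(pop, key=dual_value)

# Toy dual: concave in the multipliers, maximised near lambda = (1, 2).
toy_dual = lambda lam: -((lam[0] - 1.0) ** 2 + (lam[1] - 2.0) ** 2)
print(ga_update_multipliers(toy_dual, n_mult=2))
```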

17.
The aim of this research is to minimize the makespan in the flexible job shop environment by the use of genetic algorithms and scheduling rules. Software is developed using genetic algorithms and scheduling rules based on certain constraints such as non-preemption of jobs, recirculation, setup times, and no machine breakdowns. The purpose of the software is to develop a schedule for the flexible job shop environment, which is a special case of the job shop scheduling problem. The scheduling algorithm used in the software is verified and tested using MT10 as a benchmark problem, presented in the flexible job shop environment at the end. LEKIN software results are also compared with the results of the developed software on the MT10 benchmark problem to show that the latter is practical software that can be used successfully at the BIT Training Workshop.

18.
This paper analyzes the key factors affecting single production process job scheduling of the parts waiting to be processed on the key equipment of SMEs (small manufacturing enterprises); these factors include interval numbers, real numbers, and uncertain linguistic values. A hybrid multi-attribute decision making method for single production process job scheduling is presented: the parts are first ranked with respect to each factor, the total evaluative attribute value of each part is then calculated by the weighted arithmetic average method, and the part with the highest total evaluative attribute value is chosen to be processed first. The mathematical model corresponding to the method is set up in this paper. An example is studied, and its results testify to the correctness of the model.
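The weighted arithmetic average ranking step admits a very short sketch. In the snippet below, interval numbers and linguistic values are assumed to have already been normalized to crisp scores in [0, 1], which is a simplification of the paper's hybrid method; the part names, factor weights, and function name are illustrative assumptions.

```python
def rank_parts(scores, weights):
    """Rank parts by their weighted arithmetic mean attribute score.

    scores: dict {part id: [normalised score for each factor in 0..1]}.
    weights: factor weights, assumed to sum to 1.
    Returns part ids sorted so the first one is processed first.
    """
    total = {p: sum(w * s for w, s in zip(weights, vals)) for p, vals in scores.items()}
    return sorted(total, key=total.get, reverse=True)

# Example: three parts evaluated on urgency, profit, and processing-time factors.
scores = {"P1": [0.9, 0.4, 0.6], "P2": [0.5, 0.8, 0.7], "P3": [0.7, 0.7, 0.2]}
print(rank_parts(scores, weights=[0.5, 0.3, 0.2]))
```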

19.
On discrete-amplitude signal analysis and its applications
Discrete-amplitude signal analysis is studied. A reconstruction theorem is devised for an arbitrary signal quantized in amplitude but continuous in time, from 2 bits of its binary representation. A new concept of discrete-amplitude multiresolution (DAM), with the signal representation precision taken as its scale, is proposed. The singularities and the residue-reducing effect of 2-bit reconstruction of some discrete-time signals are investigated. Two practical examples of applying discrete-amplitude signal analysis to data compression and signal detection are presented. It is shown both analytically and practically that discrete-amplitude signal analysis has a simple formulation, supports parallel processing and efficient computation, and is well suited to hardware implementation and real-time signal processing.

20.
Vegetable production in the open field involves many tasks, such as soil preparation, ridging, and transplanting/sowing. Different tasks require agricultural machinery equipped with different agricultural tools to meet the needs of the operation. Aiming at the coupled multi-task problem in the intelligent production of vegetables in the open field, a task assignment method for multiple unmanned tractors based on consistency alliance is studied. Firstly, unmanned vegetable production in the open field is abst...
