Similar Documents
20 similar documents found (search time: 15 ms)
1.
Manufacturing job-shop scheduling is a notoriously difficult problem that lends itself to various approaches, from optimal algorithms to suboptimal heuristics. We combined popular heuristic job-shop scheduling approaches with emerging AI techniques to create a dynamic, responsive scheduler. We fashioned our scheduler's architecture around recent holonic manufacturing systems architectures and implemented the system using multiagent systems. Our scheduling approach is based on evolutionary algorithms but differs from common approaches by evolving the scheduler rather than the schedule: the schedule-creation rules are evolved, not the schedule itself. We test our approach using a benchmark agent-based scheduling problem and compare performance results with other heuristic scheduling approaches.
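The "evolve the scheduler, not the schedule" idea can be sketched in a few lines. The sketch below is a toy illustration under assumed data, not the authors' system: a dispatch rule is a pair of weights over two job features (processing time and due date), and a simple elitist genetic algorithm evolves the weights; any job set can then be scheduled by the evolved rule.

```python
import random

# Toy single-machine job set: (processing_time, due_date). Hypothetical data
# for illustration only.
JOBS = [(4, 10), (2, 6), (7, 22), (3, 8), (5, 30), (1, 5), (6, 12), (2, 20)]

def total_tardiness(weights):
    """Sequence jobs by a weighted dispatch rule; return total tardiness."""
    w_proc, w_due = weights
    order = sorted(JOBS, key=lambda j: w_proc * j[0] + w_due * j[1])
    t = tardy = 0
    for proc, due in order:
        t += proc
        tardy += max(0, t - due)
    return tardy

def evolve(pop_size=20, generations=30, seed=1):
    """Evolve the rule weights (the scheduler), not a concrete schedule."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=total_tardiness)
        history.append(total_tardiness(pop[0]))   # best so far (elitist)
        parents = pop[:pop_size // 2]             # keep the top half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)         # crossover: blend two parents
            children.append(tuple((x + y) / 2 + rng.gauss(0, 0.1)
                                  for x, y in zip(a, b)))
        pop = parents + children
    pop.sort(key=total_tardiness)
    return pop[0], history

best_weights, history = evolve()
```

Because the elitist selection never discards the best individual, the per-generation best fitness is non-increasing; the payoff of evolving the rule is that `best_weights` generalizes to new job arrivals without re-running the search.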

2.
McGovern, Amy; Moss, Eliot; Barto, Andrew G. Machine Learning, 2002, 49(2-3):141-160
The execution order of a block of computer instructions on a pipelined machine can make a difference in running time by a factor of two or more. Compilers use heuristic schedulers appropriate to each specific architecture implementation to achieve the best possible program speed. However, these heuristic schedulers are time-consuming and expensive to build. We present empirical results using both rollouts and reinforcement learning to construct heuristics for scheduling basic blocks. In simulation, the rollout scheduler outperformed a commercial scheduler on all benchmarks tested, and the reinforcement learning scheduler outperformed the commercial scheduler on several benchmarks and performed well on the others. The combined reinforcement learning and rollout approach was also very successful. We present results of running the schedules on Compaq Alpha machines and show that the results from the simulator correspond well to the actual run-time results.
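The rollout idea can be shown on a toy basic block. This is a minimal sketch under assumed latencies and dependences, not the paper's simulator: at each step, every ready instruction is tried, the remainder of the schedule is completed with a greedy base heuristic, and the candidate whose completed schedule is best is committed.

```python
# Toy basic block: LAT[i] = result latency in cycles; DEPS[i] = producers of
# i's operands. Single-issue pipeline, one instruction issued per cycle.
LAT  = {"a": 3, "b": 1, "c": 1, "d": 2, "e": 1}
DEPS = {"a": [], "b": [], "c": ["a"], "d": ["b"], "e": ["c", "d"]}

def makespan(order):
    """Cycle at which the last result is ready, stalling on unready operands."""
    finish, t = {}, 0
    for ins in order:
        t = max(t, max((finish[p] for p in DEPS[ins]), default=0))
        finish[ins] = t + LAT[ins]
        t += 1
    return max(finish.values())

def greedy_complete(prefix):
    """Base heuristic: extend `prefix` by always issuing the ready
    instruction with the largest latency (ties broken by name)."""
    order = list(prefix)
    while len(order) < len(DEPS):
        ready = sorted(i for i in DEPS if i not in order
                       and all(p in order for p in DEPS[i]))
        order.append(max(ready, key=lambda i: (LAT[i], i)))
    return order

def rollout_schedule():
    """One-step rollout: for each ready candidate, finish the schedule with
    the base heuristic and commit to the candidate whose completion is best."""
    order = []
    while len(order) < len(DEPS):
        ready = sorted(i for i in DEPS if i not in order
                       and all(p in order for p in DEPS[i]))
        order.append(min(ready,
                         key=lambda i: makespan(greedy_complete(order + [i]))))
    return order

base_order = greedy_complete([])
improved   = rollout_schedule()
```

On this block the base heuristic needs 8 cycles while the rollout schedule needs 6; because the greedy continuation is always among the candidates evaluated, the rollout schedule can never be worse than the base heuristic.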

3.
While high-performance architectures have included some Instruction-Level Parallelism (ILP) for at least 25 years, recent computer designs have exploited ILP to a significant degree. Although a local scheduler is not sufficient for generation of excellent ILP code, it is necessary, as many global scheduling and software pipelining techniques rely on a local scheduler. Global scheduling techniques are well-documented, yet practical discussions of local schedulers are notable in their absence. This paper strives to remedy that disparity by describing a list scheduling framework and several important practical details that, taken together, allow implementation of an efficient local instruction scheduler that is easily retargetable for ILP architectures. The foundation of our machine-independent instruction scheduler is a timing model that allows easy retargetability to a wide range of architectures. In addition to describing how a general list scheduler can be implemented within the framework of our timing model, experimental results indicate that lookahead scheduling can profoundly improve a scheduler's ability to produce a legal schedule. Further experimental data shows that deciding to schedule a data dependence DAG (DDD) in forward or reverse order depends significantly upon the target architecture, suggesting the possibility of scheduling in each direction and using the better of the two schedules. In contrast, experiments demonstrate little difference in code quality for schedules generated by either instruction-driven or operation-driven schedulers. Thus, the inherent flexibility of operation-driven methods suggests including that approach in a retargetable instruction scheduler. List scheduling is, of course, a heuristic scheduling method. A variety of scheduling heuristics are presented.
In addition, the paper describes a method, using a genetic algorithm search, to ‘fine-tune’ the weights of twenty-four individual heuristics to form a DDD-node heuristic tuned to a specific architecture. © 1998 John Wiley & Sons, Ltd.
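A cycle-driven list scheduler of the kind the paper describes fits in a few lines. The sketch below is an illustration under an assumed timing model (latencies and a 2-wide issue machine), with a single critical-path heuristic rather than the paper's twenty-four weighted heuristics:

```python
# Toy DDD (data-dependence DAG): LAT[i] = latency, PREDS[i] = producers of
# i's operands. These values are assumptions for illustration.
LAT   = {"ld1": 3, "ld2": 3, "add": 1, "mul": 2, "st": 1}
PREDS = {"ld1": [], "ld2": [], "add": ["ld1", "ld2"], "mul": ["add"], "st": ["mul"]}

def priority(node, memo={}):
    """Critical-path priority: longest latency-weighted path to any leaf."""
    if node not in memo:
        succs = [s for s, ps in PREDS.items() if node in ps]
        memo[node] = LAT[node] + max((priority(s) for s in succs), default=0)
    return memo[node]

def list_schedule(width=2):
    """Cycle-driven list scheduling: each cycle, issue up to `width` ready
    operations in decreasing priority order."""
    start, finish, cycle = {}, {}, 0
    while len(start) < len(LAT):
        ready = [n for n in LAT if n not in start
                 and all(finish.get(p, cycle + 1) <= cycle for p in PREDS[n])]
        for n in sorted(ready, key=priority, reverse=True)[:width]:
            start[n], finish[n] = cycle, cycle + LAT[n]
        cycle += 1
    return start, finish

start, finish = list_schedule()
```

Both loads issue in cycle 0, and the dependent add, mul, and store follow as their operands become available, giving a 7-cycle schedule on this toy block.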

4.
For over 3 decades there was a belief that computer-based solutions would "solve" complex industrial scheduling problems, yet most manufacturing organizations still require human contributions for effective scheduling performance. We present a new model of scheduling for the development and implementation of effective scheduling systems within manufacturing companies. The model derives from investigating the work of 7 schedulers in 4 manufacturing environments using a qualitative field study approach, for which novel field-based data collection and analysis methods were developed. The results show that scheduling in practice comprises task, role, and monitoring activities and that the business environment influences a scheduler at work. A new definition of scheduling is presented that includes the significant facilitation and implementation aspects of human scheduling ignored by many computer-based scheduling approaches. The implications for this model extend across the domains of human factors and operations management, especially for the analysis and improvement of existing and new production planning and control processes and enterprise information systems. Actual or potential applications of this research include the analysis, design, and management of planning, scheduling, and control processes in industry; the selection, training, and support of production schedulers; and the allocation of tasks to humans and computer systems in industrial planning, scheduling, and control processes.

5.
Future factories will feature strong integration of physical machines and cyber-enabled software, working seamlessly to improve manufacturing production efficiency. In these digitally enabled, network-connected factories, each physical machine on the shop floor can have its ‘virtual twin’ available in cyberspace. This ‘virtual twin’ is populated with data streaming in from the physical machine to represent a near real-time as-is state of the machine in cyberspace. This results in the virtualization of a machine resource to external factory manufacturing systems. This paper describes how streaming data can be stored in a scalable, flexible document-oriented database such as MongoDB, the data store that makes up the virtual twin system. We present an architecture that allows third-party software apps to interface with the virtual manufacturing machines (VMMs). We evaluate our database schema against query statements and provide examples of how third-party apps can interface with manufacturing machines using the VMM middleware. Finally, we discuss an operating system architecture for VMMs across the manufacturing cyberspace, which necessitates command and control of various virtualized manufacturing machines, opening new possibilities in cyber-physical systems in manufacturing.
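A virtual-twin record in a document store might look like the sketch below. The field names are assumptions for illustration, not the paper's actual schema, and `find` is a tiny in-memory stand-in for a MongoDB-style filter (so the example runs without a database server):

```python
# A shape such machine-twin documents might take. Field names are assumed.
twin_docs = [
    {"machine_id": "mill-01", "timestamp": 1700000000, "state": "RUNNING",
     "spindle_rpm": 4200, "feed_mm_min": 300, "alarms": []},
    {"machine_id": "mill-01", "timestamp": 1700000005, "state": "RUNNING",
     "spindle_rpm": 4500, "feed_mm_min": 320, "alarms": []},
    {"machine_id": "lathe-02", "timestamp": 1700000003, "state": "IDLE",
     "spindle_rpm": 0, "feed_mm_min": 0, "alarms": ["E-STOP"]},
]

def find(docs, query):
    """Minimal stand-in for a MongoDB-style equality / $gt filter."""
    def matches(doc, q):
        for field, cond in q.items():
            if isinstance(cond, dict):          # operator form, e.g. {"$gt": 4000}
                if "$gt" in cond and not doc.get(field, float("-inf")) > cond["$gt"]:
                    return False
            elif doc.get(field) != cond:        # plain equality match
                return False
        return True
    return [d for d in docs if matches(d, query)]

# A third-party app querying the twin: fast spindle samples for one machine.
fast = find(twin_docs, {"machine_id": "mill-01", "spindle_rpm": {"$gt": 4300}})
```

With a real MongoDB deployment the same query dictionary would be passed to a collection's `find` method; the point here is only the document shape and filter style.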

6.
Constructing an effective bridge between physical machine tools and upper-level software applications is an inherent requirement of smart factories. The difficulty lies in the lack of effective, appropriate means for real-time data acquisition, storage, and processing, both during monitoring and in downstream workflows. Rapid advances in the Internet of Things (IoT) and information technology have made such a scheme feasible, and it has become an important module of concepts such as "Industry 4.0". In this paper, a framework of bi-directional data and control flows between various machine tools and an upper-level software system is proposed; several key stumbling blocks are identified, and corresponding solutions are investigated and analyzed in depth. By monitoring manufacturing big data, potentially essential information is extracted, providing useful guidance for practical production and enterprise decision-making. Based on the integrated model, an NC machine-tool intelligent monitoring and data-processing system for smart factories is developed. Typical machine tools, such as Siemens series, are the main objects of investigation. The system validates the concept and performs well in a complex manufacturing environment, making it a beneficial attempt that should prove valuable in smart factories.

7.
In recent years, scheduling at the granularity of coflows has become a new focus for improving data center networks. However, existing information-agnostic coflow schedulers cannot quickly infer task-level information, so small tasks are not scheduled in time and the average task completion time cannot be minimized. Data center networks therefore need a more efficient inference model to improve the accuracy and sensitivity of coflow-size estimation. This paper proposes a machine-learning-based coflow-size inference model (MLcoflow)...

8.
In this paper, we present the Cello disk scheduling framework for meeting the diverse service requirements of applications. Cello employs a two-level disk scheduling architecture, consisting of a class-independent scheduler and a set of class-specific schedulers. The two levels of the framework allocate disk bandwidth at two time-scales: the class-independent scheduler governs the coarse-grain allocation of bandwidth to application classes, while the class-specific schedulers control the fine-grain interleaving of requests. The two levels of the architecture separate application-independent mechanisms from application-specific scheduling policies, and thereby facilitate the co-existence of multiple class-specific schedulers. We demonstrate that Cello is suitable for next generation operating systems since: (i) it aligns the service provided with the application requirements, (ii) it protects application classes from one another, (iii) it is work-conserving and can adapt to changes in workload, (iv) it minimizes the seek time and rotational latency overhead incurred during access, and (v) it is computationally efficient.
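The two-level structure can be sketched as follows. This is a toy illustration under assumed class names and weights, not Cello's actual policies: the class-independent level grants each class a weighted share of dispatch slots, while each class-specific queue decides which of its own requests fill them (FIFO here; a real class scheduler might use SCAN or deadline order).

```python
from collections import deque

# Class-specific queues (FIFO stands in for the per-class policy).
queues = {"interactive": deque(["i1", "i2", "i3", "i4"]),
          "throughput":  deque(["t1", "t2"]),
          "realtime":    deque(["r1", "r2", "r3"])}
weights = {"interactive": 2, "throughput": 1, "realtime": 3}

def class_independent_schedule(queues, weights, rounds=2):
    """Coarse-grain level: each round grants every class `weight` slots;
    the class-specific queue picks which of its requests fill them."""
    dispatched = []
    for _ in range(rounds):
        for cls, w in weights.items():
            for _ in range(w):
                if queues[cls]:                 # work-conserving: skip empty queues
                    dispatched.append(queues[cls].popleft())
    return dispatched

order = class_independent_schedule(queues, weights)
```

In the first round the realtime class gets three slots to interactive's two and throughput's one, matching the weights; once a queue drains, its unused slots are simply skipped, which is what keeps the scheme work-conserving.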

9.
Performance of disk I/O schedulers is affected by many factors, such as workloads, file systems, and disk systems. Disk scheduling performance can be improved by tuning scheduler parameters, such as the length of read timers. Scheduler performance tuning is mostly done manually. To automate this process, we propose four self-learning disk scheduling schemes: Change-sensing Round-Robin, Feedback Learning, Per-request Learning, and Two-layer Learning. Experiments show that the novel Two-layer Learning Scheme performs best. It integrates the workload-level and request-level learning algorithms. It employs feedback learning techniques to analyze workloads, change scheduling policy, and tune scheduling parameters automatically. We discuss schemes to choose features for workload learning, divide and recognize workloads, generate training data, and integrate machine learning algorithms into the Two-layer Learning Scheme. We conducted experiments to compare the accuracy, performance, and overhead of five machine learning algorithms: Decision Tree, Logistic Regression, Naïve Bayes, Neural Network, and Support Vector Machine Algorithms. Experiments with real-world and synthetic workloads show that self-learning disk scheduling can adapt to a wide variety of workloads, file systems, disk systems, and user preferences. It outperforms existing disk schedulers by as much as 15.8% while consuming less than 3%-5% of CPU time.

10.
This paper presents the embedded realization and experimental evaluation of a media stream scheduler on network interface (NI) CoProcessor boards. When using media frames as scheduling units, the scheduler is able to operate in real-time on streams traversing the CoProcessor, resulting in its ability to stream video to remote clients at real-time rates. This paper presents a detailed evaluation of the effects of placing application or kernel-level functionality, like packet scheduling on NIs, rather than the host machines to which they are attached. The main benefits of such placement are: 1) that traffic is eliminated from the host bus and memory subsystem, thereby allowing increased host CPU utilization for other tasks, and 2) that NI-based scheduling is immune to host-CPU loading, unlike host-based media schedulers that are easily affected even by transient load conditions. An outcome of this work is a proposed cluster architecture for building scalable media servers by distributing schedulers and media stream producers across the multiple NIs used by a single server and by clustering a number of such servers using commodity network hardware and software.

11.
We design a task mapper TPCM for assigning tasks to virtual machines, and an application-aware virtual machine scheduler TPCS oriented toward parallel computing, to achieve high performance in virtual computing systems. To solve the problem of mapping tasks to virtual machines, a virtual machine mapping algorithm (VMMA) in TPCM is presented to achieve load balance in a cluster. Based on such mapping results, TPCS is constructed from three components: a middleware supporting application-driven scheduling, a device driver in the guest OS kernel, and a virtual machine scheduling algorithm. These components are implemented in the user space, guest OS, and the CPU virtualization subsystem of the Xen hypervisor, respectively. In TPCS, the progress statuses of tasks are transmitted from the user space to the underlying kernel, enabling the virtual machine scheduling policy to schedule based on the progress of tasks. This policy aims to exchange completion time of tasks for resource utilization. Experimental results show that TPCM can mine the parallelism among tasks to implement the mapping from tasks to virtual machines based on the relations among subtasks. The TPCS scheduler can complete the tasks in a shorter time than Credit and other schedulers, because it uses task progress to ensure that the tasks in virtual machines complete simultaneously, thereby reducing the time spent in pending, synchronization, communication, and switching. Therefore, parallel tasks can collaborate with each other to achieve higher resource utilization and lower overheads. We conclude that the TPCS scheduler can overcome the shortcomings of present algorithms in perceiving the progress of tasks, making it better than schedulers currently used in parallel computing.
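The core progress-based idea, giving more CPU to the virtual machines that lag so that co-scheduled parallel subtasks finish together, can be sketched as a simple credit allocation. This is a minimal illustration of the principle, not TPCS's actual algorithm; VM names and credit totals are assumptions:

```python
def progress_aware_credits(progress, total_credits=100):
    """Allocate CPU credits proportionally to each VM's remaining work
    (1 - progress), so lagging VMs catch up and parallel subtasks tend to
    complete simultaneously."""
    remaining = {vm: 1.0 - p for vm, p in progress.items()}
    total = sum(remaining.values()) or 1.0
    return {vm: round(total_credits * r / total) for vm, r in remaining.items()}

# vm1 is 90% done, vm3 only 10%: vm3 receives the largest share.
credits = progress_aware_credits({"vm1": 0.9, "vm2": 0.5, "vm3": 0.1})
```

A hypervisor credit scheduler (such as Xen's Credit scheduler, which TPCS extends) would then consume these weights when time-slicing the physical CPUs.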

12.
Conventional schedulers schedule operations in dependence order and never revisit or undo a scheduling decision on any operation. In contrast, backtracking schedulers may unschedule operations and can often generate better schedules. This paper develops and evaluates the backtracking approach to fill branch delay slots. We first present the structure of a generic backtracking scheduling algorithm and prove that it terminates. We then describe two more aggressive backtracking schedulers and evaluate their effectiveness. We conclude that aggressive backtracking-based instruction schedulers can effectively improve schedule quality by eliminating branch delay slots with a small amount of additional computation.
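In its simplest form, filling a branch delay slot means revisiting an already-placed instruction and moving it after the branch. The sketch below is a toy illustration under an assumed MIPS-style block with one delay slot, reducing the backtracking idea to "try the latest candidate, undo and try an earlier one if it conflicts":

```python
# Instruction = (text, defs, uses). Toy block ending in a branch whose delay
# slot must be filled. The register sets are assumptions for illustration.
BLOCK = [
    ("addi r1, r1, 4",  {"r1"}, {"r1"}),
    ("lw   r2, 0(r1)",  {"r2"}, {"r1"}),
    ("addi r3, r3, 1",  {"r3"}, {"r3"}),
    ("bne  r2, r0, L1", set(),  {"r2", "r0"}),
]

def independent(idx, block):
    """True if block[idx] can move past everything after it (incl. the branch):
    no later instruction reads its defs, rewrites its defs, or writes its uses."""
    defs, uses = block[idx][1], block[idx][2]
    for _, d, u in block[idx + 1:]:
        if defs & (u | d) or uses & d:
            return False
    return True

def fill_delay_slot(block):
    """Search backward from the branch for a movable instruction; each failed
    candidate is 'unscheduled' and an earlier one is tried. Fall back to a NOP
    when every candidate conflicts."""
    for i in range(len(block) - 2, -1, -1):
        if independent(i, block):
            return block[:i] + block[i + 1:] + [block[i]]
    return block + [("nop", set(), set())]

scheduled = fill_delay_slot(BLOCK)
```

Here `addi r3, r3, 1` neither feeds the branch nor conflicts with it, so it moves into the slot and no NOP is wasted; the paper's schedulers generalize this by unscheduling arbitrary operations, not just the one entering the slot.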

13.
Computational grids have become an appealing research area as they solve compute-intensive problems within the scientific community and in industry. A grid's computational power is aggregated from a huge set of distributed heterogeneous workers; hence, grids are becoming a mainstream technology for large-scale distributed resource sharing and system integration. Unfortunately, current grid schedulers suffer from the haste problem: the scheduler's inability to successfully allocate all input tasks. Accordingly, some tasks fail to complete execution because they are allocated to unsuitable workers; others may not start execution because suitable workers were previously allocated to other peers. This paper is the first to introduce the scheduling haste problem, and it presents a reliable grid scheduler. The proposed scheduler selects the most suitable worker to execute an input grid task using a fuzzy inference system, minimizing the turnaround time for a set of grid tasks. Moreover, our scheduler is system-oriented, as it avoids the scheduling haste problem. Experimental results show that the proposed scheduler outperforms traditional grid schedulers, achieving better scheduling efficiency.
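A fuzzy inference step for worker selection can be sketched with two toy rules. The membership functions, rule base, and worker metrics below are assumptions for illustration, not the paper's fuzzy system:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suitability(cpu_load, reliability):
    """Two Mamdani-style rules with min/max firing and a ratio defuzzification:
       R1: load LOW  AND reliability HIGH -> suitability HIGH
       R2: load HIGH OR  reliability LOW  -> suitability LOW"""
    load_low  = tri(cpu_load, -0.5, 0.0, 0.6)
    load_high = tri(cpu_load,  0.4, 1.0, 1.5)
    rel_high  = tri(reliability,  0.4, 1.0, 1.5)
    rel_low   = tri(reliability, -0.5, 0.0, 0.6)
    fire_high = min(load_low, rel_high)
    fire_low  = max(load_high, rel_low)
    if fire_high + fire_low == 0:
        return 0.5                              # no rule fires: neutral score
    return fire_high / (fire_high + fire_low)   # crisp score in [0, 1]

# Workers as (cpu_load, reliability); pick the most suitable one.
workers = {"w1": (0.2, 0.9), "w2": (0.8, 0.9), "w3": (0.2, 0.3)}
best = max(workers, key=lambda w: suitability(*workers[w]))
```

Avoiding the haste problem then amounts to only dispatching a task when its best worker's suitability clears a threshold, instead of forcing an allocation onto an unsuitable worker.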

14.
In a smart city, IoT devices are required to support monitoring of normal operations such as traffic, infrastructure, and crowds of people. IoT-enabled systems offered by many IoT devices are expected to achieve sustainable development from the information collected by the smart city. Indeed, artificial intelligence (AI) and machine learning (ML) are well-known methods for achieving this goal, as long as the system framework and problem statement are well prepared. However, to make the best use of AI/ML, the training data should be as global as possible, which prevents the model from working only on local data. Such data can be obtained from different sources, but this raises the privacy issue of at least one party collecting all the data in the clear. The main focus of this article is on support vector machines (SVM). We aim to present a solution to the privacy issue and provide confidentiality to protect the data. We build a privacy-preserving scheme for SVM (SecretSVM) based on the framework of federated learning and distributed consensus. In this scheme, data providers self-organize and obtain the training parameters of SVM without revealing their own models. Finally, experiments with real data analysis show the feasibility of potential applications in smart cities. This article is the extended version of that of Hsu et al. (Proceedings of the 15th ACM Asia Conference on Computer and Communications Security. ACM; 2020:904-906).
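Setting SecretSVM's cryptographic machinery aside, the distributed-consensus ingredient can be sketched in isolation: each party repeatedly averages its local parameter with its ring neighbors, so all copies converge to the global average without any party shipping its raw data to a central collector. The 1-D parameters and ring topology below are assumptions for illustration:

```python
# Each party holds a local model parameter (1-D for brevity; a real SVM would
# hold a weight vector and average component-wise).
def consensus_round(values):
    """One gossip round on a ring: average with both neighbors."""
    n = len(values)
    return [(values[i - 1] + values[i] + values[(i + 1) % n]) / 3
            for i in range(n)]

values = [4.0, 0.0, 2.0, 6.0]
target = sum(values) / len(values)   # the global average the parties converge to
for _ in range(50):
    values = consensus_round(values)
```

Because each round's mixing matrix is doubly stochastic, the global average is preserved every round while the disagreement between parties shrinks geometrically; the paper's scheme additionally protects the exchanged values cryptographically.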

16.
In large-scale data-analysis environments, there are often tasks of short duration and high parallelism. How to schedule such latency-sensitive concurrent jobs is a current research focus. In existing cluster resource-management frameworks, centralized schedulers cannot meet low-latency requirements because the master node becomes a bottleneck, while some distributed schedulers achieve low-latency task scheduling but fall short on optimal resource allocation and on resolving resource-allocation conflicts. Starting from the requirements of large-scale real-time jobs, this paper designs and implements a distributed cluster resource-scheduling framework that meets the low-latency requirements of large-scale data processing. First, a two-stage scheduling framework and an optimized two-stage multipath scheduling framework are proposed. Then, to address the resource conflicts arising in two-stage multipath scheduling, a load-balancing-based task-transfer mechanism is proposed, resolving the load imbalance across compute nodes. Finally, the framework is simulated and validated on a large-scale cluster using real workloads and a simulated scheduler. On real workloads, the framework's scheduling delay stays within 12% of that of an ideal scheduler; in the simulated environment, it reduces the latency of short tasks by more than 40% compared with a centralized scheduler.
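The task-transfer mechanism can be reduced to its simplest form: repeatedly move one task from the most-loaded node to the least-loaded node until the spread is at most one task. The node names and loads below are assumptions for illustration, not the paper's workload:

```python
def rebalance(loads):
    """Move tasks one at a time from the most- to the least-loaded node until
    the load spread is at most one task (the task-transfer idea, simplified)."""
    loads = dict(loads)
    while max(loads.values()) - min(loads.values()) > 1:
        src = max(loads, key=loads.get)   # overloaded node gives up a task
        dst = min(loads, key=loads.get)   # underloaded node receives it
        loads[src] -= 1
        loads[dst] += 1
    return loads

balanced = rebalance({"n1": 9, "n2": 1, "n3": 2, "n4": 4})
```

In the real framework the transfers happen between the distributed second-stage schedulers rather than through a central controller, but the invariant is the same: the total task count is conserved while the maximum-minimum load gap shrinks to at most one.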

17.
This paper discusses a manufacturing execution system (MES) that prefers and attempts to follow a given schedule. The MES performs this task in an autonomic manner, filling in missing details, providing alternatives for unfeasible assignments, handling auxiliary tasks, and so on. The paper presents the research challenge, depicts the MES design, and gives experimental results. The research contribution resides in the novel architecture, in which the MES cooperates with schedulers without inheriting the limitations of the world model employed by the scheduler. This research is a first step, and a list of topics for further research is given.

18.
Cloud manufacturing is a service-oriented, customer-centric, demand-driven process with well-established industrial automation. Even so, it does not necessarily imply the absence of human beings. As products and their corresponding manufacturing processes become increasingly complex, operators' daily working lives are also becoming more difficult. Enhanced human–machine interaction is one of the core areas for the success of the next generation of manufacturing. However, current research focuses only on the automation and flexibility features of cloud manufacturing; the interaction between human and machine and the value co-creation among operators are missing. Therefore, a new method is needed to support operators in their work, with the objective of reducing the time and cost of machine control and maintenance. This paper describes a practical demonstration that uses the technologies of the Internet of Things (IoT), wearable technologies, augmented reality, and cloud storage to support operators' activities and communication in discrete factories. The case study exhibits the capabilities and user experience of smart glasses in a cloud manufacturing environment, and shows that smart glasses help users stay productive and engaged.

19.
To process massive data effectively for association analysis, business forecasting, and similar purposes, the Hadoop distributed cloud computing platform has emerged. With Hadoop's wide adoption, however, shortcomings in its job scheduling have surfaced: existing job schedulers suffer from complex parameter tuning and long startup times. Leveraging the strong self-organization and fast convergence of the artificial bee colony algorithm, we designed and implemented a resource-aware scheduler that monitors Hadoop's internal resource usage in real time. Compared with existing job schedulers, it requires fewer parameters and starts faster. Benchmark results show that, on heterogeneous clusters, it schedules resource-intensive jobs roughly 10%-20% faster than the existing schedulers.
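The artificial bee colony (ABC) search the scheduler builds on can be sketched on a stand-in objective. This is a simplified ABC (employed and onlooker phases merged, a sphere function in place of the scheduler's real cost model); all constants are assumptions for illustration:

```python
import random

def objective(x):
    """Stand-in cost to minimize (e.g., predicted job completion time)."""
    return sum(v * v for v in x)

def abc_minimize(dim=3, food_sources=10, iters=100, limit=10, seed=7):
    rng = random.Random(seed)
    new = lambda: [rng.uniform(-5, 5) for _ in range(dim)]
    sources = [new() for _ in range(food_sources)]   # candidate solutions
    trials = [0] * food_sources                      # stagnation counters
    best = min(sources, key=objective)
    for _ in range(iters):
        for i in range(food_sources):
            # Bee phase: perturb one dimension toward/away from a random peer.
            k = rng.randrange(food_sources)
            j = rng.randrange(dim)
            cand = sources[i][:]
            cand[j] += rng.uniform(-1, 1) * (cand[j] - sources[k][j])
            if objective(cand) < objective(sources[i]):
                sources[i], trials[i] = cand, 0      # greedy accept improvement
            else:
                trials[i] += 1
            if trials[i] > limit:                    # scout phase: abandon
                sources[i], trials[i] = new(), 0     # a stale source
        best = min([best] + sources, key=objective)
    return best

best = abc_minimize()
```

The few parameters (colony size, abandonment limit) and the absence of per-workload tuning are what the abstract credits for the scheduler's fast startup relative to older Hadoop schedulers.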

20.
A cyber-physical system is one of the integral parts of the development endeavor of the smart manufacturing domain and the Industry 4.0 wave. With the advances in data analytics, smart manufacturing is gradually transforming the global manufacturing landscape. In the Resistance Spot Welding (RSW) domain, the focus has been more on the physical systems than on the virtual systems. A cyber-physical system facilitates the integrated analysis of the design and manufacturing processes by converging the physical and virtual stages to improve product quality in real time. However, a cyber-physical-system-integrated RSW weldability certification is still an unmet need. This research realizes a real-time, data-driven cyber-physical system framework with integrated analytics and parameter-optimization capabilities for connected RSW weldability certification. The framework is based on the conceptualization of the layers of the cyber-physical system and can incorporate design and machine changes. It integrates data from the analytics lifecycle phases, from data collection, through predictive analytics, to visualization of the design. This integrated framework aims to support decision-makers in understanding product design and its manufacturing implications. In addition to data analytics, the proposed framework implements a closed-loop machine parameter optimization that considers the target product design. The framework visualizes the target product assembly with predicted response parameters while simultaneously displaying the process parameters and material design parameters. This layer should help designers in their decision-making process and help engineers gain knowledge about the manufacturing processes. A case study based on a real industrial case and data is presented in detail to illustrate the application of the envisioned cyber-physical systems framework.
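The closed-loop parameter optimization can be sketched as a search over machine parameters against a predictive response model. The linear nugget-size model, parameter grids, and target below are stand-in assumptions for illustration, not the paper's trained model or industrial data:

```python
# Stand-in process model: predicted nugget diameter grows with weld current
# and weld time (coefficients are assumed, not fitted to real data).
def predicted_nugget_mm(current_ka, time_ms):
    return 0.55 * current_ka + 0.004 * time_ms

def optimize(target_mm, currents, times):
    """Closed-loop sketch: grid-search the machine parameters so the predicted
    response best matches the target product design."""
    return min(((i, t) for i in currents for t in times),
               key=lambda p: abs(predicted_nugget_mm(*p) - target_mm))

best_i, best_t = optimize(
    target_mm=5.5,                            # nugget size the design requires
    currents=[7.0, 7.5, 8.0, 8.5, 9.0],       # feasible weld currents (kA)
    times=[150, 200, 250, 300],               # feasible weld times (ms)
)
```

In the framework proper, the predictive-analytics layer supplies the response model from streamed weld data and the chosen parameters are pushed back to the welding machine, closing the loop between the virtual and physical stages.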
