Similar literature
20 similar records were found (search time: 15 ms).
1.
This paper describes the deletion-retention contour machine (DRCM), an efficient implementation of a retention block-structured language. It allows programs to be handled by the deletion strategy until some forward reference is generated during execution. The retention strategy is then adopted, and a time- and space-efficient garbage compaction algorithm recovers the inaccessible cells. Moreover, the garbage collector can, on discovering the absence of accessible forward references, restore the deletion strategy. An estimate of the computation time of a lifetime well-stacking (LWS) program on the DRCM is obtained, which shows that an LWS program runs on the DRCM in almost the same time as on a stack machine with lifetime checks to prevent dangling references. The same property holds for LWS programs with full-label and nonlocal gotos. Supported (in part) by the Consiglio Nazionale delle Ricerche (Contract No. 79.00823.07), Italy.

2.
3.
4.
Implementing a concurrent programming language such as Java by means of a translator to an existing language is attractive: it provides portability over all platforms supported by the host language and reduces development time, since many low-level tasks can be delegated to the host compiler. The C and C++ programming languages are popular choices for many language implementations due to the availability of efficient compilers on a wide range of platforms. For garbage-collected languages, however, they are not a perfect match, as no support is provided for accurately discovering pointers to heap-allocated data on thread stacks. We evaluate several previously published techniques and propose a new mechanism, lazy pointer stacks, for performing accurate garbage collection in such uncooperative environments. We implemented the new technique in the Ovm Java virtual machine with our own Java-to-C/C++ compiler using GCC as a back-end compiler. Our extensive experimental results confirm that lazy pointer stacks outperform existing approaches: we provide a speedup of 4.5% over Henderson's accurate collector with a 17% increase in code size. Accurate collection is essential in the context of real-time systems; we therefore validate our approach with the implementation of a real-time concurrent garbage collection algorithm. Copyright © 2009 John Wiley & Sons, Ltd.
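
The mechanism itself is detailed in the paper; as a rough illustration of the trade-off only (not of the Ovm/GCC implementation), the Python sketch below contrasts an eager, Henderson-style shadow stack that is updated on every call and return with a lazy variant that reconstructs the set of stack roots only when a collection is actually requested. All class and method names here are hypothetical.

```python
# Toy model contrasting eager pointer-stack maintenance (Henderson-style)
# with a lazy variant that only materializes the pointer stack at GC time.

class Frame:
    """One activation record; `refs` are the heap pointers it holds."""
    def __init__(self, refs, caller):
        self.refs = refs          # pointers live in this frame
        self.caller = caller      # link to the calling frame

class EagerShadowStack:
    """Push/pop pointer sets on every call/return (constant mutator overhead)."""
    def __init__(self):
        self.slots = []
        self.bookkeeping_ops = 0
    def enter(self, frame):
        self.slots.append(frame.refs)
        self.bookkeeping_ops += 1
    def leave(self):
        self.slots.pop()
        self.bookkeeping_ops += 1
    def roots(self):
        return [r for refs in self.slots for r in refs]

class LazyPointerStack:
    """No per-call work; walk the frame chain only when GC asks for roots."""
    def roots(self, top_frame):
        found, f = [], top_frame
        while f is not None:      # reconstruct the pointer stack on demand
            found.extend(f.refs)
            f = f.caller
        return found

# Usage sketch: a 3-deep call chain holding heap references "a", "b", "c".
main = Frame(["a"], None)
f1 = Frame(["b"], main)
f2 = Frame(["c"], f1)
print(LazyPointerStack().roots(f2))   # ['c', 'b', 'a'] -- cost paid only at GC time
```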

5.
6.
7.
张聪品  吴长茂  赵理莉 《计算机应用》2010,30(11):2876-2879
To improve garbage collection efficiency and reduce the time user programs spend waiting, a parallel node-copying algorithm based on the LISP2 algorithm was proposed for multi-core systems. The algorithm achieves parallel garbage collection by parallelizing each of the four collection phases of the LISP2 algorithm. Experimental results show that the algorithm effectively improves garbage collection efficiency on multi-core systems.
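
The abstract does not spell out the phases; for reference, the Python sketch below runs the four classical LISP2 mark-compact phases (mark, compute forwarding addresses, update references, move) sequentially on a toy heap. It is a minimal illustration of the base algorithm, not the paper's parallel multi-core implementation.

```python
# Minimal sequential sketch of the four LISP2 mark-compact phases on a toy
# heap of unit-sized objects; the paper parallelizes each phase separately.

def lisp2_compact(heap, roots):
    # heap: dict addr -> {"fields": [addrs]}; roots: list of addrs.
    # Phase 1: mark reachable objects.
    marked, stack = set(), list(roots)
    while stack:
        a = stack.pop()
        if a not in marked:
            marked.add(a)
            stack.extend(heap[a]["fields"])
    # Phase 2: compute forwarding addresses (slide live objects left).
    forward, next_free = {}, 0
    for a in sorted(heap):
        if a in marked:
            forward[a] = next_free
            next_free += 1
    # Phase 3: update all references (roots and fields) to forwarded addresses.
    new_roots = [forward[r] for r in roots]
    for a in marked:
        heap[a]["fields"] = [forward[f] for f in heap[a]["fields"]]
    # Phase 4: move objects to their new addresses.
    new_heap = {forward[a]: heap[a] for a in marked}
    return new_heap, new_roots

heap = {0: {"fields": [2]}, 1: {"fields": []}, 2: {"fields": []}}
print(lisp2_compact(heap, roots=[0]))  # object 1 is garbage and disappears
```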

8.
To address the secondary environmental pollution and the uncertainty in waste generation that arise during the classified collection and transportation of municipal solid waste, a dynamic collection-vehicle routing optimization method based on smart garbage bins is proposed. A dynamic vehicle routing optimization model is built with the objective of minimizing carbon emission cost, fuel consumption cost, fixed cost, and the penalty cost for late vehicle arrival. A rolling-horizon scheme converts the dynamic problem into a series of static problems, and a two-stage algorithm is designed to solve them. First, a particle swarm optimization algorithm plans the collection vehicle routes; then, at the end of each horizon, the existing routes are adjusted dynamically by jointly considering the locations and fill levels of the bins awaiting collection and the positions and loads of the collection vehicles. The results show that, compared with the traditional static collection scheme, the dynamic scheme reduces vehicle transportation cost and carbon emission cost while significantly reducing the risk of secondary environmental pollution caused by untimely collection.
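
As a rough illustration of the rolling-horizon idea only (the paper's full cost model and its PSO planner are not reproduced), the Python sketch below re-plans routes at the end of each horizon from updated bin fill levels and the vehicle position, using a simple nearest-neighbour planner as a stand-in for the PSO stage. The fill threshold, fill rate, and data are invented.

```python
import math, random

def plan_route(start, bins):
    """Stand-in for the PSO planner: greedy nearest-neighbour over due bins."""
    route, pos, left = [], start, dict(bins)
    while left:
        nxt = min(left, key=lambda b: math.dist(pos, left[b]["xy"]))
        route.append(nxt)
        pos = left.pop(nxt)["xy"]
    return route

def rolling_horizon(bins, horizons, fill_rate=0.12, threshold=0.8):
    vehicle_pos = (0.0, 0.0)
    for t in range(horizons):
        # Smart bins report their fill levels at the end of each horizon.
        for b in bins.values():
            b["fill"] = min(1.0, b["fill"] + random.uniform(0, fill_rate))
        due = {k: v for k, v in bins.items() if v["fill"] >= threshold}
        route = plan_route(vehicle_pos, due)   # re-plan with fresh information
        print(f"horizon {t}: collect {route}")
        for k in route:                        # emptied bins reset to zero
            bins[k]["fill"] = 0.0
        if route:
            vehicle_pos = bins[route[-1]]["xy"]

bins = {f"bin{i}": {"xy": (random.random() * 5, random.random() * 5),
                    "fill": random.random() * 0.5} for i in range(6)}
rolling_horizon(bins, horizons=4)
```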

9.
An asynchronous garbage collector for a message-passing multiprocessor (multicomputer) is described. This combines Weighted Reference Counting (WRC) interprocessor collection and tracing intraprocessor collection to permit individual processors to reclaim local storage independently. A novel feature is the integration of Weighted Reference Counting collection and the communication algorithms required to support a global address space in a single assignment language. This significantly reduces communication overhead and space requirements attributable to garbage collection. In addition, techniques are described that avoid the creation of cyclic structures that cannot be reclaimed using WRC. Experimental studies performed in a concurrent logic programming system that incorporates the collector confirm its efficiency and the benefits of integrating garbage collector and language implementation.
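
Weighted reference counting itself is standard; the Python sketch below shows only the core bookkeeping: copying a remote reference splits its weight locally, so no message is sent to the owning processor, while deleting a reference sends a single weight-decrement message, and the object is reclaimed when its stored weight reaches zero. The message-queue abstraction and class names are invented for illustration, not taken from the paper's system.

```python
# Core weighted-reference-counting (WRC) bookkeeping: copying a reference
# needs no message to the owner; only deletion sends a weight decrement.

class RemoteObject:
    def __init__(self, name, weight=64):
        self.name, self.weight, self.live = name, weight, True

class RemoteRef:
    def __init__(self, target, weight):
        self.target, self.weight = target, weight
    def copy(self):
        """Split the weight between the two references; purely local."""
        half = self.weight // 2
        self.weight -= half
        return RemoteRef(self.target, half)
    def delete(self, messages):
        """Send the carried weight back to the owning processor."""
        messages.append((self.target, self.weight))

def process_messages(messages):
    for obj, w in messages:
        obj.weight -= w
        if obj.weight == 0:          # no references remain anywhere
            obj.live = False
    messages.clear()

obj = RemoteObject("cell", weight=64)
r1 = RemoteRef(obj, obj.weight)      # initial reference carries the full weight
r2 = r1.copy()                       # local split: 32 + 32, no message sent
msgs = []
r1.delete(msgs); r2.delete(msgs)
process_messages(msgs)
print(obj.live)                      # False: weight returned to zero, reclaimed
```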

10.
Tablets, smartphones, and wearables have limited resources. Applications on these devices employ a graphical user interface (GUI) for interaction with users. Language runtimes for GUIs employ dynamic memory management using garbage collection (GC). However, GC policies and algorithms are designed for data centers and cloud computing and are not necessarily ideal for resource-constrained embedded devices. In this article, we present GUI GC, a JavaFX GUI benchmark, which we use to compare the performance of the four GC policies of the Eclipse OpenJ9 Java runtime in a resource-constrained environment. Overall, our experiments suggest that the default policy, Gencon, registered significantly lower execution times than its counterparts. The region-based policy, Balanced, did not fully utilize blocking times; thus, using GUI GC, we conducted experiments with explicit GC invocations, which measured significant improvements of up to 13.22% when multiple CPUs were available. Furthermore, we created a second version of GUI GC that expands the number of controllable load-stressing dimensions; we conducted a large number of randomly configured experiments to quantify the performance effect of each knob. Finally, we analyzed our dataset to derive suitable knob configurations for desired runtime, GC, and hardware stress levels.
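
The benchmark and the OpenJ9 policies are JVM-specific; as a loose Python analogue of the "explicit GC invocations during blocking times" experiment only, the sketch below triggers gc.collect() while a simulated GUI event loop is idle, so that collection work does not overlap with user-visible updates. The timing constants and the event-loop model are invented.

```python
import gc, time

def handle_event(i):
    # Stand-in for a GUI update: allocate some short-lived objects.
    return [object() for _ in range(10_000)][:1]

def event_loop(events, idle_budget_s=0.02):
    gc.disable()                      # take over the automatic cyclic collector
    for i in range(events):
        frame_start = time.perf_counter()
        handle_event(i)
        idle = idle_budget_s - (time.perf_counter() - frame_start)
        if idle > 0.005:              # enough blocking time left in this frame
            gc.collect()              # explicit collection while the GUI is idle
    gc.enable()

event_loop(events=50)
print("done")
```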

11.
Nowadays, clustered environments are commonly used in high-performance computing and enterprise-level applications to achieve faster response time and higher throughput than single machine environments. Nevertheless, how to effectively manage the workloads in these clusters has become a new challenge. As a load balancer is typically used to distribute the workload among the cluster's nodes, multiple research efforts have concentrated on enhancing the capabilities of load balancers. Our previous work presented a novel adaptive load balancing strategy (TRINI) that improves the performance of a clustered Java system by avoiding the performance impacts of major garbage collection, which is an important cause of performance degradation in Java. The aim of this paper is to strengthen the validation of TRINI by extending its experimental evaluation in terms of generality, scalability and reliability. Our results have shown that TRINI can achieve significant performance improvements, as well as a consistent behaviour, when it is applied to a set of commonly used load balancing algorithms, demonstrating its generality. TRINI also proved to be scalable across different cluster sizes, as its performance improvements did not noticeably degrade when increasing the cluster size. Finally, TRINI exhibited reliable behaviour over extended time periods, introducing only a small overhead to the cluster in such conditions. These results offer practitioners a valuable reference regarding the benefits that a load balancing strategy, based on garbage collection, can bring to a clustered Java system. Copyright © 2016 John Wiley & Sons, Ltd.
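
TRINI's actual forecasting of major collections is described in the authors' earlier work; as a schematic Python sketch of the general idea only, the balancer below skips any node whose reported heap occupancy suggests a major GC is imminent and otherwise dispatches round-robin. The occupancy threshold and the node model are assumptions, not TRINI's mechanism.

```python
from itertools import cycle

class Node:
    def __init__(self, name):
        self.name = name
        self.heap_occupancy = 0.0        # fraction of old generation in use

    def near_major_gc(self, threshold=0.9):
        return self.heap_occupancy >= threshold

def gc_aware_dispatch(nodes, requests):
    """Round-robin, but route around nodes about to run a major collection."""
    rr = cycle(nodes)
    assignment = []
    for req in requests:
        for _ in range(len(nodes)):
            node = next(rr)
            if not node.near_major_gc():
                assignment.append((req, node.name))
                break
        else:
            # Every node is close to a major GC; fall back to plain round-robin.
            assignment.append((req, next(rr).name))
    return assignment

nodes = [Node("n1"), Node("n2"), Node("n3")]
nodes[1].heap_occupancy = 0.95           # n2 is expected to collect soon
print(gc_aware_dispatch(nodes, requests=["r1", "r2", "r3", "r4"]))
```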

12.
To avoid poor random write performance, flash-based solid state drives typically rely on an internal log-structure. This log-structure reduces write amplification and thereby improves the write throughput and extends the drive's lifespan. In this paper, we analyze the performance of the log-structure combined with the d-choices garbage collection algorithm, which repeatedly selects the block with the fewest valid pages out of a set of d randomly chosen blocks, and we consider non-uniform random write workloads. Using a mean field model, we show that the write amplification worsens as the hot data gets hotter.
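
The mean-field analysis is the paper's contribution; the victim-selection rule itself is simple and is sketched below in Python: sample d blocks uniformly at random and reclaim the one with the fewest valid pages, relocating those valid pages first (the source of write amplification). Block and page counts are invented.

```python
import random

def d_choices_gc(blocks, d):
    """Pick d random blocks, erase the one with the fewest valid pages."""
    candidates = random.sample(range(len(blocks)), d)
    victim = min(candidates, key=lambda i: len(blocks[i]))
    relocated = len(blocks[victim])      # valid pages copied elsewhere first:
    blocks[victim] = set()               # this copying is the write amplification
    return victim, relocated

# Toy flash with 8 blocks holding up to 4 valid pages each.
blocks = [set(random.sample(range(100), random.randint(0, 4))) for _ in range(8)]
victim, moved = d_choices_gc(blocks, d=3)
print(f"erased block {victim}, relocated {moved} valid pages")
```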

13.
The AR (autoregressive) model is a common predictor that has been extensively used for time series forecasting. Many training methods can be used to update AR model parameters, for instance least squares estimation and maximum likelihood estimation; however, both techniques are sensitive to noisy samples and outliers. To deal with these problems, an evolving AR predictor, EAR, is developed in this work to enhance prediction accuracy and mitigate the effect of noisy samples and outliers. The model parameters of EAR are trained with an ALSE (adaptive least squares estimation) method, which can learn sample characteristics more effectively. In each training epoch, the ALSE weights the samples by their fitting accuracy: samples with larger fitting errors are given a larger penalty value in the cost function, but the penalties of difficult-to-predict samples are adaptively reduced to enhance prediction accuracy. The effectiveness of the developed EAR predictor is verified by simulation tests. Test results show that the proposed EAR predictor can capture the dynamics of the time series effectively and predict the future trend accurately.
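
The exact ALSE weighting schedule is not given in the abstract; the Python sketch below shows one plausible reading as iteratively reweighted least squares for the AR coefficients, where each epoch down-weights samples with unusually large residuals so that outliers influence the fit less. The weighting rule used here is an assumption for illustration, not the paper's formula.

```python
import numpy as np

def fit_ar_irls(x, p=2, epochs=5):
    """AR(p) fit by iteratively reweighted least squares (illustrative only)."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    w = np.ones(len(y))
    for _ in range(epochs):
        W = np.diag(w)
        coeffs = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted LS step
        resid = np.abs(y - X @ coeffs)
        scale = np.median(resid) + 1e-12
        w = 1.0 / (1.0 + (resid / scale) ** 2)   # large residual -> small weight
    return coeffs

rng = np.random.default_rng(0)
t = np.arange(300)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(300)
series[::37] += 2.0                              # inject a few outliers
print(fit_ar_irls(series, p=2))                  # coefficients of x[t-1], x[t-2]
```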

14.
Ontological analysis of modelling languages has mainly been used for evaluating the quality of a modelling language with respect to one specific upper ontology. Generally speaking, this evaluation has been done by identifying the coverage of the modelling language constructs with respect to the ontology and vice versa. However, quite limited support has been developed for performing the ontological analysis task. Specifically, the various ontologies used for ontological analysis are not associated with a machine-readable format; the coverage of modelling language constructs is mostly provided by informal tables mapping one construct onto one ontological concept; the way in which this coverage task is undertaken is poorly specified (so that different experts produce different results); and no ontology enrichment is supported for dealing with specialised language constructs. This limited support also prevents the application of ontological analysis outcomes to problems and domains dealing with interoperability, integration and integrated usage of enterprise and IS models, which is today one of the key aspects for making interoperable, maintainable and evolvable inter- and intra-enterprise software systems. The paper provides an overview of the Unified Enterprise Modelling Language (UEML) approach, which introduces advanced support for the ontological analysis of modelling languages. The paper focuses specifically on the task of ontological analysis of modelling languages (called incorporation of modelling languages) by introducing and explaining several guidelines and rules for driving the task; therefore, not all aspects of the UEML approach are discussed in the paper. The guidelines and rules are illustrated by the incorporation of three selected modelling constructs from IDEF3, a well-known language for specifying enterprise processes.

15.
It is generally believed that the time-cost of solving any size-n problem using two-way divide-and-conquer is minimized by balancing, that is, by dividing the problem into subproblems of size ⌈n/2⌉ and ⌊n/2⌋. A counter-example is presented: balanced division, applied to finding the greatest and least elements of a size-n set, will in the worst case force 11% more comparisons to be made than an optimal division such as into subsets of sizes 2 and n−2. A necessary condition and a slightly stronger sufficient condition are given for balancing to be cost-optimal. Even if balancing is cost-optimal, it may not be the only cost-optimal division strategy; a necessary and sufficient condition is given for a division strategy to be 'balanced enough' to be cost-optimal. As an application, a new iterative merge-sorting algorithm is presented which requires no more comparisons than the balanced one of Erkio and Peltola (1977) but merges subarrays consisting of consecutive elements of the whole array.
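
The counter-example is easy to reproduce; the Python sketch below counts worst-case comparisons for the simultaneous max-min problem under a balanced split versus a split into subsets of sizes 2 and n−2, the latter attaining the known optimum of ⌈3n/2⌉ − 2 comparisons. The cost recursion (two extra comparisons at each combine step) is the standard worst-case model, not code from the paper.

```python
def maxmin(n, split):
    """Worst-case comparisons to find both max and min of n elements,
    when a size-n problem is divided according to split(n) -> (n1, n2)."""
    if n == 1:
        return 0
    if n == 2:
        return 1
    n1, n2 = split(n)
    # Combine step: compare the two sub-maxima and the two sub-minima.
    return maxmin(n1, split) + maxmin(n2, split) + 2

balanced = lambda n: (n // 2, n - n // 2)
two_rest = lambda n: (2, n - 2)

for n in (6, 24, 96):
    b, t = maxmin(n, balanced), maxmin(n, two_rest)
    print(n, b, t, f"balanced is {100 * (b - t) / t:.1f}% worse")
```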

16.
Intelligent compensation predictive control for production processes with large time delays
An intelligent modelling method is used to correct the predicted output of the system, which improves the accuracy of the predicted output. By combining predictive control with artificial intelligence methods, a control method suitable for production processes with large time delays is established. The method has been successfully applied to hydrogen-nitrogen ratio control in small-scale nitrogen fertilizer production.

17.
This paper proposes a predictive compensation strategy to reduce the detrimental effect of stochastic time delays induced by communication networks on control performance. Values of a manipulated variable at the present sampling instant and at future time instants can be determined by performing a receding-horizon optimal procedure only once. When the present value of the manipulated variable does not arrive at a smart actuator, its predictive value is imposed on the corresponding process. Switching the manipulated variable between its true present value and the predictive one usually results in unsmooth operation of a control system. This paper shows that: 1) for a steady process, as long as its input is sufficiently smooth, the smoothness of its output can be guaranteed; and 2) the manipulated variable can be switched smoothly simply by filtering it with a low-pass filter. Thus the control performance can be improved. Finally, the effectiveness of the proposed method is demonstrated by a simulation study.
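
As a small numeric illustration only (not the paper's controller), the Python sketch below passes the manipulated variable through a first-order low-pass filter, so that when the actuator switches between the value received over the network and the locally stored predictive value, the signal applied to the process changes smoothly rather than jumping. The filter constant and signal values are invented.

```python
def low_pass(u_seq, alpha=0.3, u0=0.0):
    """First-order low-pass: y[k] = (1 - alpha) * y[k-1] + alpha * u[k]."""
    y, out = u0, []
    for u in u_seq:
        y = (1 - alpha) * y + alpha * u
        out.append(round(y, 3))
    return out

# The actuator uses the networked value when it arrives, otherwise the
# predicted one; here the switch at step 5 causes a step change in u.
received  = [1.0] * 5
predicted = [1.6] * 5
u_applied = received + predicted          # raw switch: jumps from 1.0 to 1.6
print(low_pass(u_applied))                # filtered: moves to 1.6 gradually
```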

19.
汤小春  田凯飞 《计算机科学》2017,44(12):11-16, 22
The validity of real-time data and CPU processing capacity are a pair of conflicting requirements in cyber-physical systems (CPS): raising the sampling frequency keeps real-time data valid but increases the CPU load and reduces the system's computing capacity. First, a data validity model is built from the semantic characteristics of the real-time data. Then, by setting up pre-scheduled tasks during CPU idle periods, the validity model is exploited to set new validity intervals and the start times of the update transactions for the real-time data, reducing CPU execution time. Finally, the semantic-model-based validity guarantee strategy is systematically evaluated on parameters such as the rotation, revolution and oil pressure of cotton-picking spindles; the results show that the proposed method reduces the CPU load by about 15%.

20.
The complexity of the monitored data available in modern intensive care units (ICUs) means that the data are best processed, for presentation to medical staff, by expert system techniques. This article describes an expert system with an appropriately designed inference engine that handles the temporal considerations inherent in monitoring and manages data acquisition via a Medical Information Bus. We also describe how we extended our ICU monitoring system by writing an interface that allows communication in dynamic SQL with a relational database management system. The extended system facilitates both permanent filing of case data and the use of filed data by the rules of the expert system, and it allows automatic intelligent screening of data prior to permanent filing so as to ensure data reliability.

