Similar Literature
20 similar documents found (search time: 0 ms)
1.
Formal control techniques for power-performance management
These techniques determine when to speed up a processor to reach performance targets and when to slow it down to save energy. They use dynamic voltage and frequency scaling (DVFS) to balance speed against energy and to avoid worst-case frequency limitations for both multiple-clock-domain processors and chip multiprocessors.
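The abstract does not give the control law itself; as an illustration only, here is a minimal sketch of one common formal-control scheme for DVFS, a discrete PI controller that raises frequency when measured throughput falls short of a target and lowers it otherwise. All gains, ranges and the toy "plant" below are hypothetical, not from the paper.

```python
# Minimal sketch (assumption): velocity-form discrete PI controller for DVFS.
F_MIN, F_MAX = 0.8e9, 3.0e9        # allowed frequency range (Hz), hypothetical
KP, KI = 0.2, 0.5                  # hypothetical gains (Hz per instruction/s)

def make_controller(target_ips):
    state = {"freq": F_MAX, "prev_error": 0.0}
    def step(measured_ips):
        error = target_ips - measured_ips
        # velocity-form PI update: no separate integrator state needed
        delta = KP * (error - state["prev_error"]) + KI * error
        state["prev_error"] = error
        state["freq"] = min(F_MAX, max(F_MIN, state["freq"] + delta))
        return state["freq"]
    return step

step = make_controller(target_ips=2.0e9)
freq = F_MAX
for _ in range(30):                # toy plant: throughput = 0.9 IPS per Hz
    freq = step(0.9 * freq)
print(f"settled at {freq / 1e9:.2f} GHz")   # ~2.22 GHz = target / 0.9
```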

2.
This paper analyzes the performance of two generic, fundamental parallel search techniques for solving the constraint satisfaction problem (CSP) on shared-memory multiprocessor systems. A probabilistic analysis of their expected number of computation steps and of their inherent load-balancing capability is performed, and corresponding experimental results are also provided to verify the correctness of the proposed analysis. This fundamental analysis approach can be further applied to various advanced parallel search techniques, and to other problem-solving techniques on parallel platforms. This research was supported in part by the University of Texas at San Antonio under the Faculty Research Award program.
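The two techniques are not named in the abstract. As a stand-in illustration of parallel CSP search on a shared-memory machine, the sketch below statically partitions the first variable's domain among worker processes, each running sequential backtracking on a toy N-queens CSP; load balance then depends on how evenly the subtrees split, which is exactly what such analyses study.

```python
# Minimal sketch (assumption): static domain partitioning for parallel CSP
# search; the CSP is toy N-queens, not the paper's benchmark.
from concurrent.futures import ProcessPoolExecutor

N = 10

def consistent(assignment, row):
    col = len(assignment)
    return all(r != row and abs(r - row) != abs(c - col)
               for c, r in enumerate(assignment))

def count_solutions(assignment):
    if len(assignment) == N:
        return 1
    return sum(count_solutions(assignment + [r])
               for r in range(N) if consistent(assignment, r))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # one independent subtree per value of the first variable
        total = sum(pool.map(count_solutions, ([r] for r in range(N))))
    print(f"{N}-queens solutions: {total}")   # 724 for N=10
```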

3.
4.
This research project investigates the ability of neural networks, specifically the backpropagation algorithm, to integrate fundamental and technical analysis for financial performance prediction. The predictor attributes include 16 financial statement variables and 11 macroeconomic variables; the rate of return on common shareholders' equity is the variable to be predicted. Financial data for 364 S&P companies are extracted from the CompuStat database, and macroeconomic variables from the Citibase database, for the study period 1985–1995. Experiments 1, 2, and 3 use one, two, and three years of financial data, respectively, as predictors; Experiment 4 uses three years of financial data together with the macroeconomic data. Moreover, to compensate for data noise and parameter misspecification, as well as to reveal the prediction logic and procedure, we apply a rule extraction technique that converts the connection weights of trained neural networks into symbolic classification rules. The performance of the neural networks is compared with the average return of the top one-third of returns in the market (the maximum benchmark), which approximates the return from perfect information, and with the overall market average return (the minimum benchmark), which approximates the return from highly diversified portfolios. Paired t tests are carried out to assess the statistical significance of the mean differences. The experimental results indicate that neural networks using one or multiple years of financial data consistently and significantly outperform the minimum benchmark, but not the maximum benchmark; networks using both financial and macroeconomic predictors outperform neither benchmark in this study. The results also show that the average return of 0.25398 obtained from the extracted rules is the only result comparable to the maximum benchmark of 0.2786. Consequently, we demonstrate rule extraction as a postprocessing technique for improving prediction accuracy and for explaining the prediction logic to financial decision makers.
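The paper's exact rule-extraction technique is not described in the abstract. A common surrogate approach, sketched below on synthetic data, is to train a backpropagation network on the predictors and then fit a shallow decision tree to the network's outputs, yielding human-readable threshold rules; all data and dimensions here are hypothetical stand-ins for the 27 predictor variables.

```python
# Minimal sketch (assumption): backprop network + decision-tree surrogate
# for rule extraction. Data is synthetic, not CompuStat/Citibase.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(364, 27))            # 16 financial + 11 macro variables
roe = 0.3 * X[:, 0] - 0.2 * X[:, 5] + rng.normal(scale=0.1, size=364)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X, roe)

# The surrogate tree approximates the trained network as symbolic rules.
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(X, net.predict(X))
print(export_text(tree, feature_names=[f"x{i}" for i in range(27)]))
```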

5.
Resource allocation is one of the cores of cloud computing, and evaluating the performance of cloud resource allocation algorithms can guide the design of cloud platforms. This paper discusses two cloud resource allocation algorithms and proposes a PEPA-based performance evaluation model for them. By modelling the interactions between the components of a cloud system, the model supports formal analysis and reasoning and yields performance metrics for the system. Experiments analyse how varying the parameters of the resource allocation process affects system performance; the results show that the PEPA modelling approach can directly assess the relative merit of resource allocation algorithms and identify the key factors for improving their performance, thereby shortening the design cycle of cloud platforms.
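PEPA models are compiled to a continuous-time Markov chain whose steady state yields the performance metrics. As a stand-in for the paper's model, the sketch below treats a pool of allocated servers as an M/M/c birth-death chain and compares candidate allocations by the mean response time they produce; all rates and capacities are hypothetical.

```python
# Minimal sketch (assumption): steady-state analysis of an M/M/c/(c+K) queue
# as a toy proxy for a PEPA-derived CTMC performance model.
from math import factorial

def mmc_mean_response(arrival, service, c, queue_cap=100):
    rho = arrival / service
    # unnormalised steady-state probabilities of the birth-death chain
    p = [rho**n / factorial(n) for n in range(c + 1)]
    p += [p[c] * (rho / c)**k for k in range(1, queue_cap + 1)]
    z = sum(p)
    probs = [x / z for x in p]
    mean_jobs = sum(n * pn for n, pn in enumerate(probs))
    eff_arrival = arrival * (1 - probs[-1])     # arrivals not blocked
    return mean_jobs / eff_arrival              # Little's law

for servers in (2, 4, 8):   # candidate resource allocations
    t = mmc_mean_response(arrival=6.0, service=2.0, c=servers)
    print(f"{servers} servers -> mean response time {t:.3f}")
```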

6.
In this paper, we define a number of tools that we think belong to the core of any toolkit for requirements engineers. The tools are conceptual and hence need precise definitions that lay down as exactly as possible what their meaning and possible use is. We argue that such a definition is best achieved by a formal specification of the tool: for each semi-formal requirements engineering tool we should provide a formal specification that precisely fixes its meaning. This mutually enhances the formal and semi-formal techniques: it makes formal techniques more usable and, as we will argue, at the same time simplifies the diagram-based notations. At the same time, we believe that the tools of the requirements engineer should, where possible, resemble the familiar semi-formal specification techniques used in practice today. To achieve this, we should search existing requirements specification techniques for a common kernel of familiar semi-formal techniques and try to provide a formalisation of it. In this paper we illustrate this approach by a formal analysis of the Shlaer-Mellor method for object-oriented requirements specification. The formal specification language used in this analysis is LCM, a language based on dynamic logic, but similar results would have been achieved with another language. We analyse the techniques used in the information model, state model, process model and communication model of the Shlaer-Mellor method, identify ambiguities and redundancies, indicate how these can be eliminated, and propose a formalisation of the result. We conclude with a list of the tools extracted from the Shlaer-Mellor method that we can add to a toolkit that additionally contains LCM as its formal specification technique.
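To make the idea concrete: one payoff of giving a semi-formal diagram a formal reading is that properties can be machine-checked. The sketch below encodes a hypothetical Shlaer-Mellor-style state model as a labelled transition system and checks reachability; the lifecycle and LCM itself are not reproduced here.

```python
# Minimal sketch (assumption): a toy object lifecycle as a transition system,
# with a machine check that diagram reviews often miss (dead states).
states = {"Created", "Approved", "Shipped", "Closed"}
transitions = {                     # (state, event) -> next state
    ("Created", "approve"): "Approved",
    ("Approved", "ship"): "Shipped",
    ("Shipped", "confirm"): "Closed",
}

def reachable(start):
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (src, _), dst in transitions.items():
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

unreachable = states - reachable("Created")
assert not unreachable, f"dead states in the model: {unreachable}"
print("state model checked: every state is reachable")
```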

7.
The performance analysis of optimization techniques is very important for understanding the strengths and weaknesses of each technique. It is uncommon to find an optimization technique that performs equally well on all optimization problems, and the numbers offered by the most common performance measures, the achieved function value (fitness) and the number of function evaluations, are not representative on their own. For instance, reporting that an optimization technique O on a benchmark function B achieved a fitness F after a number of evaluations E is not semantically meaningful; logical questions arising from such a report include (a) how other techniques performed on the same benchmark, and (b) what the characteristics of this benchmark are (for example, modality and separability). The comparative optimizer rank and score (CORS) method proposes an easy-to-apply and easy-to-interpret way to investigate the problem-solving abilities of optimization techniques. CORS offers eight new performance measures built on the basic ones (achieved fitness, number of function evaluations, and time consumed). The CORS measures express the performance of an optimization technique relative to the other techniques tested on the same benchmarks, making the results more meaningful, and they are all normalized to a range from 1 to 100, which keeps the results interpretable on their own. Furthermore, all the CORS measures are aggregatable: results are easily accumulated and represented by the common characteristics defining optimization problems (such as dimensionality, modality, and separability) instead of on a per-benchmark-function basis (such as F1, F2, and F3). To demonstrate and validate the CORS method, it was applied to the performance data of eight novel optimization techniques from recent contributions to metaheuristics, namely the bat algorithm (BA), cuckoo search (CS), differential search (DS), firefly algorithm (FA), gravitational search algorithm (GSA), one rank cuckoo search (ORCS), separable natural evolution strategy (SNES), and exponential natural evolution strategy (xNES). These performance data were generated by 96 tests over 16 benchmark functions and 6 dimensionalities. Along with the basic and CORS performance data, the aggregated CORS results were found to offer very helpful knowledge regarding the performance of the examined techniques.
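The exact CORS formulas are not given in the abstract. The sketch below captures the spirit of rank-and-score comparison: rank optimisers per benchmark by best fitness, rescale ranks to a 1–100 score, and aggregate by a benchmark characteristic (modality) rather than per function. The scoring rule and data are illustrative assumptions only.

```python
# Minimal sketch (assumption): rank -> normalised score -> aggregation by
# benchmark characteristic, in the style of CORS.
from collections import defaultdict

# results[benchmark] = {technique: best fitness (lower is better)}
results = {
    "F1(unimodal)":   {"BA": 1e-3, "CS": 1e-5, "FA": 2e-4},
    "F2(multimodal)": {"BA": 4.1,  "CS": 0.9,  "FA": 1.7},
    "F3(multimodal)": {"BA": 7.8,  "CS": 2.2,  "FA": 9.0},
}

scores = defaultdict(list)
for bench, by_tech in results.items():
    ranked = sorted(by_tech, key=by_tech.get)          # rank 0 = best
    n = len(ranked)
    for rank, tech in enumerate(ranked):
        score = 100 - 99 * rank / (n - 1)              # best=100, worst=1
        group = bench.split("(")[1].rstrip(")")        # aggregate by modality
        scores[(tech, group)].append(score)

for (tech, group), s in sorted(scores.items()):
    print(f"{tech:3s} on {group:10s}: mean score {sum(s) / len(s):.1f}")
```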

8.
This paper investigates the performance of E-commerce applications. E-commerce has become one of the most popular applications of the web, as a large population of web users now benefits from various on-line services including product search, product purchase and product comparison. E-commerce provides users with 24/7 shopping facilities. The consequence of these benefits, however, is excessive load on E-commerce web servers and degraded performance of the E-commerce (eCom) requests they process. This paper addresses this issue and proposes a class-based priority scheme which classifies eCom requests into high- and low-priority requests. In E-commerce, some requests (e.g. payment) are generally considered more important than others (e.g. search or browse). We believe that by assigning class-based priorities at multiple service levels, E-commerce web servers can perform better and can improve the performance of high-priority eCom requests. In this paper, we formally specify and implement the proposed scheme and evaluate its performance using multiple servers. Experimental results demonstrate that the proposed scheme significantly improves the performance of high-priority eCom requests.
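As a toy illustration of such a class-based scheme at a single dispatch point, the sketch below serves payment/checkout requests before browse/search requests, FIFO within each class. The class assignment and request names are hypothetical, not the paper's specification.

```python
# Minimal sketch (assumption): two-class priority dispatch for eCom requests.
import heapq
import itertools

HIGH, LOW = 0, 1                      # smaller value = served first
counter = itertools.count()           # FIFO tie-break within a class
queue = []

def submit(kind):
    cls = HIGH if kind in {"payment", "checkout"} else LOW
    heapq.heappush(queue, (cls, next(counter), kind))

for req in ["search", "browse", "payment", "search", "checkout"]:
    submit(req)

while queue:                          # dispatch: all HIGH before any LOW
    cls, _, kind = heapq.heappop(queue)
    print("HIGH" if cls == HIGH else "LOW ", kind)
```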

9.
Firewalls are an important means of securing critical ICT infrastructures. As configurable off-the-shelf products, the effectiveness of a firewall crucially depends on both the correctness of the implementation itself and its correct configuration. While testing the implementation can be done once by the manufacturer, the configuration needs to be tested for each application individually. This is particularly challenging because the configuration, which implements a firewall policy, is inherently complex, hard to understand, administered by different stakeholders and thus difficult to validate. This paper presents a formal model of both stateless and stateful firewalls (packet filters), including NAT, to which a specification-based conformance test case generation approach is applied. Furthermore, a verified optimisation technique for this approach is presented: starting from a formal model for stateless firewalls, a collection of semantics-preserving policy transformation rules is derived, together with an algorithm that optimises the specification with respect to the number of test cases required for path coverage of the model. We extend an existing approach that integrates verification and testing, that is, tests and proofs, to support conformance testing of network policies. The presented approach is supported by a test framework that allows actual firewalls to be tested using the test cases generated from the formal model. Finally, several larger case studies are reported.
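The paper's rule algebra is much richer than what fits here, but the sketch below shows the flavour of a semantics-preserving policy transformation, dropping rules shadowed by earlier ones under first-match semantics (a shadowed rule can never fire, so removing it preserves behaviour), followed by generating one test packet per surviving rule. The policy and packet model are hypothetical simplifications.

```python
# Minimal sketch (assumption): shadowed-rule elimination + per-rule test
# generation for a first-match packet-filter policy.
from ipaddress import ip_network

# (source network, destination port, action) -- first match wins
policy = [
    ("10.0.0.0/8",  80, "allow"),
    ("10.1.0.0/16", 80, "deny"),    # shadowed: every match hits rule 1 first
    ("0.0.0.0/0",   22, "deny"),
]

def shadowed(rule, earlier):
    src, port, _ = rule
    return any(ip_network(src).subnet_of(ip_network(s2)) and port == p2
               for s2, p2, _ in earlier)

optimised = []
for rule in policy:
    if not shadowed(rule, optimised):
        optimised.append(rule)

# One conformance test case per remaining rule path.
for src, port, action in optimised:
    probe = next(ip_network(src).hosts())   # a packet inside the source range
    print(f"send {probe}:{port} -> expect {action}")
```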

10.
The Topic Detection task focuses on discovering the main topics addressed by a series of documents (e.g., news reports, e-mails, tweets). Topics defined in this way are expected to be thematically similar, cohesive and self-contained. The task has been broadly studied from the point of view of clustering and probabilistic techniques. In this work, we propose applying Formal Concept Analysis (FCA), an exploratory technique for data analysis and organization, to this task. In particular, we extend the FCA-based topic detection methods applied in the literature by using the stability concept for topic selection. The hypothesis is that FCA enables a better organization of the data, and stability a better selection of topics based on this organization, thus better fulfilling the task requirements by improving the quality and accuracy of the topic detection process. In addition, the proposed FCA-based methodology copes with some well-known drawbacks of clustering and probabilistic methodologies, such as the need to set a predefined number of clusters and the difficulty of dealing with topics that have complex generalization-specialization relationships. To test this hypothesis, the FCA approach is compared with two established techniques, Hierarchical Agglomerative Clustering (HAC) and Latent Dirichlet Allocation (LDA), all implemented by the authors in a novel experimental framework. The quality of the topics detected by the different approaches, in terms of their suitability for the topic detection task, is evaluated by means of internal clustering validity metrics. This evaluation demonstrates that FCA generates cohesive clusters that are less subject to changes in cluster granularity. Driven by the quality of the detected topics, FCA achieves the best overall outcome, improving on the experimental results of the Topic Detection task at the 2013 RepLab campaign.
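The sketch below shows the two ingredients on a toy document-term context: brute-force enumeration of the formal concepts, and the (intensional) stability index used to rank concept candidates as topics. Both enumerations are exponential, so this is for toy sizes only; the corpus and terms are hypothetical.

```python
# Minimal sketch (assumption): FCA concepts + stability on a toy context.
from itertools import chain, combinations

context = {                            # document -> set of terms
    "d1": {"election", "vote"},
    "d2": {"election", "vote", "poll"},
    "d3": {"match", "goal"},
    "d4": {"match", "goal", "league"},
}

def intent(ext):                       # terms shared by all docs in ext
    if not ext:
        return set(chain.from_iterable(context.values()))
    return set.intersection(*(context[d] for d in ext))

def extent(terms):                     # docs containing all the terms
    return frozenset(d for d in context if terms <= context[d])

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# formal concepts = closed extents, found by closing every subset of docs
concepts = {extent(intent(set(sub))) for sub in powerset(context)}

for ext in sorted(concepts, key=len, reverse=True):
    it = intent(ext)
    # stability: share of extent subsets whose derived intent equals it
    stab = sum(intent(set(sub)) == it for sub in powerset(ext)) / 2 ** len(ext)
    print(sorted(ext), sorted(it), f"stability={stab:.2f}")
```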

11.
12.
13.
A static analysis method for verifying timing properties of real-time distributed programs is presented. The goal is to calculate the worst-case response time of concurrent tasks which run mostly independently but share, and may have to wait for, logical or physical devices. For such tasks, determining the worst-case waiting time is a crucial problem because of the unpredictable order of synchronization events. We investigate the class of distributed client-server programs in which independent, time-critical tasks (clients) are synchronized only through additional server tasks playing the role of monitors or resource managers. This model follows well-known real-time design guidelines for distributed Ada programs proposed to enhance schedulability and synchronization analysis. Our formal analysis approach is flow-graph oriented. It generates reduced program paths, each of which represents a sequence of ordered local and global operations, thus transforming and reducing the original problem of computing the worst-case waiting time of a concurrent task into the graph-theoretic problem of calculating the maximal blocking time for one of its corresponding program paths. While local operations are completely independent, global operations require mutually exclusive access to shared resources. We prove that computing the worst-case blocking time for a program path is NP-complete. Even for a reduced version of the problem, whose solution would yield a good upper bound on the worst-case blocking time, it had been conjectured for many years that the problem was NP-complete. A major result of this paper is to show that this conjecture is wrong: we construct a polynomial solution algorithm and prove its correctness. The effectiveness and complexity of our method are discussed, with particular emphasis on distributed real-time debugging.
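The paper's polynomial algorithm is not reproduced in the abstract. The sketch below only sets up the path abstraction it works on: a reduced program path as a sequence of local and global (server-mediated) operations, with a naive blocking bound in which each global operation may wait for the longest competing critical section on the same server. Path, durations and servers are hypothetical.

```python
# Minimal sketch (assumption): naive worst-case bound over a reduced program
# path -- NOT the paper's polynomial algorithm, only the problem set-up.

# client path: ("local", duration) or ("global", server, duration)
path = [("local", 3), ("global", "disk", 2), ("local", 1), ("global", "log", 4)]

# competing tasks' critical-section durations per server
competitors = {"disk": [5, 2], "log": [1]}

def worst_case_response(path, competitors):
    total = 0
    for op in path:
        if op[0] == "local":
            total += op[1]                   # local ops run independently
        else:
            _, server, dur = op
            # own service time plus the longest rival section on this server
            total += dur + max(competitors.get(server, [0]))
    return total

print("upper bound on response time:", worst_case_response(path, competitors))
# 3 + (2+5) + 1 + (4+1) = 16
```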

14.
This is the first part of a large survey paper in which we analyze recent literature on Formal Concept Analysis (FCA) and some closely related disciplines using FCA. We collected 1072 papers published between 2003 and 2011 that mention terms related to Formal Concept Analysis in the title, abstract or keywords, and we developed a knowledge browsing environment to support our literature analysis process. We use the visualization capabilities of FCA to explore the literature and to discover and conceptually represent the main research topics in the FCA community. In this first part, we zoom in on, and give an extensive overview of, the papers published between 2003 and 2011 on developing FCA-based methods for knowledge processing. We also give an overview of the literature on FCA extensions such as pattern structures, logical concept analysis, relational concept analysis, power context families, fuzzy FCA, rough FCA, and temporal and triadic concept analysis, and we discuss scalability issues.

15.
Computer Communications, 1995, 18(12): 921-928
A language for modelling the security services base is developed and presented. The security services base is defined according to the security mechanisms specified in the OSI security framework. Elements of this base are modelled with corresponding channels, and for each channel a set of productions is introduced; together, these productions form the grammar of a language. The language is suitable for the formal synthesis and analysis of secure architectures. The method presented in this paper is not limited to cryptographic algorithms; any other security mechanism can also be incorporated. Furthermore, the method lends itself easily to machine processing.
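The paper's actual productions are derived from the OSI framework and are not given in the abstract; as an illustration of the idea, the sketch below enumerates the channel compositions derivable from a hypothetical toy grammar over security mechanisms.

```python
# Minimal sketch (assumption): a toy production grammar whose derivations
# enumerate candidate secure-channel architectures.
grammar = {
    "Channel":      [["Confidential"], ["Authentic"],
                     ["Confidential", "Authentic"]],
    "Confidential": [["encrypt"]],
    "Authentic":    [["sign"], ["mac"]],
}

def derive(symbol):
    """Yield every terminal sequence derivable from a non-terminal."""
    if symbol not in grammar:            # terminal mechanism
        yield [symbol]
        return
    for production in grammar[symbol]:
        partials = [[]]
        for part in production:
            partials = [p + tail for p in partials for tail in derive(part)]
        yield from partials

for arch in derive("Channel"):
    print(" -> ".join(arch))   # each line is one admissible channel design
```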

16.
Xia Qi, Wang Zhongqun. Journal of Computer Applications, 2012, 32(11): 3067-3070
Resources on the Internet are uncertain and stochastic, so it must be ensured that an Internetware system keeps satisfying its resource requirements at run time. This paper formally models the behaviour of software components with stochastic resource interface automata, and describes the combined behaviour of a component-assembled system with a network of such automata. Under resource uncertainty, it checks whether the composed system satisfies its resource constraints and proposes a corresponding algorithm based on the reachability graph. An example online bookstore system is given, and the correctness of the model is verified with the model checking tool Spin.
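The sketch below illustrates only the reachability-graph idea: explore the composed system's state space and report any reachable state that exceeds a resource budget. The states, demands and budget are hypothetical, and the paper's automata additionally carry stochastic information not modelled here.

```python
# Minimal sketch (assumption): BFS over a reachability graph with a resource
# budget check, in the spirit of the paper's algorithm.
from collections import deque

BUDGET = 10                                 # available resource units

# transition system of the composed system: state -> [(next_state, demand)]
transitions = {
    "idle":     [("browsing", 2)],
    "browsing": [("checkout", 5), ("idle", -2)],
    "checkout": [("idle", -7)],
}

def violates_budget(start):
    """Explore (state, usage) pairs; report any reachable over-use."""
    seen = {(start, 0)}
    queue = deque(seen)
    while queue:
        state, usage = queue.popleft()
        for nxt, demand in transitions.get(state, []):
            nxt_usage = usage + demand
            if nxt_usage > BUDGET:
                return (state, nxt, nxt_usage)     # constraint violated
            if (nxt, nxt_usage) not in seen:
                seen.add((nxt, nxt_usage))
                queue.append((nxt, nxt_usage))
    return None

print(violates_budget("idle") or "resource constraints hold")
```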

17.
18.
In this work we introduce Bio-PEPA, a process algebra for the modelling and analysis of biochemical networks. It is a modification of PEPA designed to deal with features of biological models such as stoichiometry and the use of generic kinetic laws. Bio-PEPA may be seen as an intermediate, formal, compositional representation of biological systems on which different kinds of analysis can be carried out. Finally, we show how a model of a simple genetic network is represented in the new language.
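One of the analyses a Bio-PEPA model can be mapped to is stochastic simulation. As a stand-in for the paper's genetic-network example, the sketch below runs Gillespie's SSA on a hypothetical one-gene system where a protein is produced at a constant rate and degrades by mass action; the species and rate constants are illustrative assumptions.

```python
# Minimal sketch (assumption): Gillespie SSA for a toy gene expression model.
import random

k_prod, k_deg = 2.0, 0.1         # production and degradation rate constants
protein, t, t_end = 0, 0.0, 100.0
random.seed(1)

while t < t_end:
    rates = [k_prod, k_deg * protein]       # generic kinetic laws
    total = sum(rates)
    t += random.expovariate(total)          # time to the next reaction
    if random.random() * total < rates[0]:
        protein += 1                        # production event
    else:
        protein -= 1                        # degradation event

print("protein copies at t=100:", protein)  # fluctuates around k_prod/k_deg = 20
```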

19.
To give worst-case guarantees for the timing behaviour of complex distributed embedded real-time systems, e.g. end-to-end latencies, different compositional approaches for system-level performance analysis have been developed which exhibit great flexibility and scalability. While these approaches can in theory handle arbitrarily complex systems, the system-level results can easily become very pessimistic as the number of components grows. In this article, the basic principles of compositional system-level analysis are explained and its inherent strengths and weaknesses are elaborated. Furthermore, we present improved analysis techniques from existing research which can greatly reduce the pessimism of the system-level analysis results. Two techniques are discussed in detail: the exploitation of a system's communication infrastructure by means of composition and decomposition operators, and the exploitation of information about the correlation of event processing. These techniques help to make system-level analysis not only applicable but also a highly useful technique in the integration phase of embedded system design.
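The compositional idea, in its simplest form, is to analyse each component locally and chain the local worst-case response times into an end-to-end bound; the summation is exactly where the pessimism the article targets comes from. The sketch below uses classic static-priority busy-window analysis per component on hypothetical task sets; real approaches also propagate output event models between components.

```python
# Minimal sketch (assumption): local response-time analysis chained into an
# end-to-end latency bound, in the compositional style.
import math

def wcrt(task, higher_prio):
    """Fixed-point iteration: R = C + sum(ceil(R / T_j) * C_j)."""
    c, _ = task
    r = c
    while True:
        nxt = c + sum(math.ceil(r / t_j) * c_j for c_j, t_j in higher_prio)
        if nxt == r:
            return r
        r = nxt

# each hop of the end-to-end path: (task under analysis, higher-prio tasks),
# where a task is (worst-case execution time, period)
path = [
    ((2, 10), [(1, 5)]),            # component A
    ((3, 20), [(2, 10), (1, 4)]),   # component B (e.g. the bus)
    ((1, 10), []),                  # component C
]

latency = sum(wcrt(task, hp) for task, hp in path)
print("end-to-end latency bound:", latency)   # 3 + 7 + 1 = 11
```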

20.
We capture student interactions in an e-learning community to construct a semantic web (SW), creating a collective meta-knowledge structure that guides students as they search the existing knowledge corpus. We use formal concept analysis (FCA) as a knowledge acquisition tool to process the students' virtual surfing trails, expressing and exploiting the dependencies between web pages to yield subsequent, more effective focused search results. We thereby mirror social navigation and bypass the cumbersome manual annotation of web pages. We present our system KAPUST2 (Keeper and Processor of User Surfing Trails), which constructs from the captured student trails a conceptual lattice guiding student queries. We used KAPUST as e-learning software for an undergraduate class over two semesters, and we show how the lattice evolved over the two semesters, improving its performance by exploring the relationship between 'kinds' of research assignments and the development of the e-learning semantic web. Course instructors monitored the evolution of the lattice, with interesting positive pedagogical consequences that are also reported in this paper.
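As a toy illustration of the trail-driven idea, the sketch below treats each session's surfing trail as a row of a formal context (session x visited page) and answers a query by derivation: the pages co-visited by every session matching the query. The trail data is hypothetical, and KAPUST2's actual lattice construction is far richer.

```python
# Minimal sketch (assumption): surfing trails as a formal context, queried by
# derivation to suggest related pages.
trails = {
    "s1": {"/fca-intro", "/lattices", "/hw1"},
    "s2": {"/fca-intro", "/lattices", "/stability"},
    "s3": {"/hw1", "/grading"},
}

def recommend(query_pages):
    """Pages co-visited by every session that visited all query pages."""
    matching = [pages for pages in trails.values() if query_pages <= pages]
    if not matching:
        return set()
    return set.intersection(*matching) - query_pages

print(recommend({"/fca-intro"}))   # -> {'/lattices'}
```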
