Similar documents (20 results)
1.
Network flow control under capacity constraints: A case study (Total citations: 1; self-citations: 0; citations by others: 1)
In this paper, we demonstrate how tools from nonlinear system theory can play an important role in tackling “hard nonlinearities” and “unknown disturbances” in network flow control problems. Specifically, a nonlinear control law is presented for a communication network buffer management model under physical constraints. Explicit conditions are identified under which the problem of asymptotic regulation of a class of networks against unknown inter-node traffic is solvable in the presence of control input and state saturation. The conditions include a Lipschitz-type condition and a “PE” condition. Under these conditions, we achieve either asymptotic or practical regulation for a single-node system. We also propose a decentralized, discontinuous control law to achieve (global) asymptotic regulation of large-scale networks. Our main result on controlling large-scale networks is based on an interesting extension of the well-known Young's inequality to the case of saturation nonlinearities. We present computer simulations to illustrate the effectiveness of the proposed flow control schemes.
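
As a toy illustration of the saturated regulation problem described above (not the paper's control law), the sketch below simulates a single buffer node whose occupancy is driven by a saturated admission rate and an unknown, bounded inter-node traffic term, regulated by a saturated proportional law; all parameter values are hypothetical. With the disturbance unknown, this simple law achieves only practical (bounded-error) rather than asymptotic regulation, mirroring the dichotomy noted in the abstract.

```python
import numpy as np

def sat(v, lo, hi):
    """Saturation nonlinearity: clip a value to [lo, hi]."""
    return float(np.clip(v, lo, hi))

# Hypothetical single-node buffer: x' = sat(u) - d(t), with occupancy kept in [0, x_max].
x_max, u_max = 100.0, 10.0      # buffer capacity and admission-rate limit
x_ref, k = 40.0, 0.5            # regulation setpoint and proportional gain
dt, horizon = 0.01, 30.0

x = 80.0                        # initial buffer occupancy
for step in range(int(horizon / dt)):
    t = step * dt
    d = 3.0 + 2.0 * np.sin(0.7 * t)        # unknown bounded drain from inter-node traffic (illustrative)
    u = sat(-k * (x - x_ref), 0.0, u_max)  # saturated proportional admission law
    x = sat(x + dt * (u - d), 0.0, x_max)  # state saturation: occupancy stays physical

print(f"final occupancy: {x:.1f} (setpoint {x_ref}; offset remains because d is unknown)")
```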

2.
A crucial step in the modeling of a system is to determine the values of the parameters to use in the model. In this paper we assume that we have a set of measurements collected from an operational system, and that an appropriate model of the system (e.g., based on queueing theory) has been developed. Not infrequently, proper values for certain parameters of this model may be difficult to estimate from available data (because the corresponding parameters have unclear physical meaning, because they cannot be directly obtained from available measurements, etc.). Hence, we need a technique to determine the missing parameter values, i.e., to calibrate the model. As an alternative to an unscalable “brute force” technique, we propose to view model calibration as a non-linear optimization problem with constraints. The resulting method is conceptually simple and easy to implement. Our contribution is twofold. First, we propose improved definitions of the “objective function” that quantifies the “distance” between performance indices produced by the model and the values obtained from measurements. Second, we develop a customized derivative-free optimization (DFO) technique whose original feature is the ability to allow temporary constraint violations. This technique allows us to solve the optimization problem accurately, thereby providing the “right” parameter values. We illustrate our method using two simple real-life case studies.
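
A minimal sketch of calibration-as-optimization under assumptions not taken from the paper: a toy M/M/1 response-time model with one unknown parameter, a relative-error objective against made-up measurements, a crude penalty for the feasibility constraint, and SciPy's derivative-free Nelder-Mead search standing in for the authors' custom DFO technique (which, unlike this sketch, tolerates temporary constraint violations).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical measurements: mean response times observed at several arrival rates.
arrival_rates = np.array([2.0, 4.0, 6.0, 8.0])
measured_rt   = np.array([0.15, 0.21, 0.35, 0.98])

def model_rt(arrival, service_rate):
    """Toy queueing model (M/M/1 mean response time); service_rate is the unknown parameter."""
    return 1.0 / (service_rate - arrival)

def objective(params):
    """Relative-error distance between model outputs and measurements, plus a
    penalty enforcing the stability constraint service_rate > max(arrival rate)."""
    (service_rate,) = params
    if service_rate <= arrival_rates.max():            # infeasible: model undefined
        return 1e6 + (arrival_rates.max() - service_rate)
    rel_err = (model_rt(arrival_rates, service_rate) - measured_rt) / measured_rt
    return float(np.sum(rel_err ** 2))

# Derivative-free search starting from a rough guess.
result = minimize(objective, x0=[12.0], method="Nelder-Mead")
print("calibrated service rate:", result.x[0])
```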

3.
Though blogs and wikis have been used to support knowledge management and e-learning, existing blogs and wikis cannot support different types of knowledge or adaptive learning. Types of knowledge, for instance, vary greatly in category and viewpoint, and adaptive learning is crucial to improving learning performance. This study designs a semantic bliki system to tackle these issues. To support various types of knowledge, the study develops a new kind of social software called a “bliki” that combines the advantages of blogs and wikis; the bliki system also applies Semantic Web technology to organize an ontology and a variety of knowledge types. To aid adaptive learning, a function called “Book” enables learners to arrange personalized learning goals and paths: learning contents, their sequences, and their difficulty levels can be specified according to learners’ metacognitive knowledge and collaborative activities. An experiment conducted to evaluate the system shows that it can accommodate various types of knowledge and improve learners’ learning performance.

4.
Time series data mining (TSDM) techniques permit exploring large amounts of time series data in search of consistent patterns and/or interesting relationships between variables. TSDM is becoming increasingly important as a knowledge management tool, where it is expected to reveal knowledge structures that can guide decision making under limited certainty. Human decision making in problems related to the analysis of time series databases is usually based on perceptions like “end of the day”, “high temperature”, “quickly increasing”, and “possible”. Although many effective TSDM algorithms have been developed, integrating them with human decision-making procedures is still an open problem. In this paper, we consider an architecture for a perception-based decision-making system in time series database domains that integrates perception-based TSDM, computing with words and perceptions, and expert knowledge. The new tasks that perception-based TSDM methods must solve to enable this integration are discussed: precisiation of perceptions, shape pattern identification, and pattern retranslation. We show how different methods developed so far in TSDM for manipulating perception-based information can be used to develop a fuzzy perception-based TSDM approach. This approach is grounded in computing with words and perceptions, which permits the formalization of human perception-based inference mechanisms. The discussion is illustrated by examples from economics, finance, meteorology, and medicine.
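
As a small illustration of the “precisiation of perceptions” task mentioned above, the sketch below encodes the perception “quickly increasing” as a fuzzy membership over a series' mean slope; the membership shape and thresholds are assumptions made for illustration, not the paper's definitions.

```python
import numpy as np

def rising_shoulder(x, a, b):
    """Right-open fuzzy membership: 0 below a, linear on [a, b], 1 above b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def quickly_increasing(series, a=0.1, b=0.5):
    """Degree to which a time series is 'quickly increasing', judged from its mean slope.
    The thresholds a and b are hypothetical precisiation parameters."""
    slope = float(np.mean(np.diff(series)))
    return rising_shoulder(slope, a, b)

temperatures = np.array([20.0, 20.3, 20.9, 21.6, 22.4])   # illustrative readings
print("membership in 'quickly increasing':", quickly_increasing(temperatures))
```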

5.
Defining operational semantics for a process algebra is often based either on labeled transition systems that account for interaction with a context or on so-called reduction semantics: we assume that we have a representation of the whole system and we compute unlabeled reduction transitions (leading to a distribution over states in the probabilistic case). In this paper we consider mixed models with states where the system is still open (toward interaction with a context) and states where the system is already closed. The idea is that (open) parts of a system “P” can be closed via an operator “PG” that turns already-synchronized actions whose “handle” is specified inside “G” into prioritized reduction transitions (and, therefore, turns the states performing them into closed states). We show that the operator “PG” can be used to express multi-level priorities and external probabilistic choices (by assigning weights to handles inside G), and that, by treating reduction transitions as the only unobservable τ transitions, the proposed technique is compatible, for process algebras with general recursion, both with standard (probabilistic) observational congruence and with a notion of equivalence that aggregates reduction transitions in a (much more aggregating) trace-based manner. We also observe that the trace-based aggregated transition system can be obtained directly in operational semantics, and we present this “aggregating” semantics. Finally, we discuss how the open/closed approach can also be used to express discrete and continuous (exponential probabilistic) time, and we show that, in such timed contexts, the trace-based equivalence can aggregate more than traditional lumping-based equivalences over Markov chains.

6.
This paper is concerned with a proof-theoretic observation about two kinds of proof systems for regular cyclic objects. It is presented for the case of two formal systems that are complete with respect to the notion of “recursive type equality” on a restricted class of recursive types in μ-term notation. Here we show the existence of an immediate duality, with a geometrical visualization, between proofs in a variant of the coinductive axiom system due to Brandt and Henglein and “consistency-unfoldings” in a variant of a “syntactic-matching” proof system for testing equations between recursive types due to Ariola and Klop. Finally, we sketch an analogous result: a duality between a similar pair of proof systems for bisimulation equivalence on equational specifications of cyclic term graphs.

7.
SweetWiki: A semantic wiki (Total citations: 1; self-citations: 0; citations by others: 1)
Everyone agrees that user interactions and social networks are among the cornerstones of “Web 2.0”. Web 2.0 applications generally run in a web browser, offer dynamic content with rich user interfaces, provide means to easily add or edit the content of the web site they belong to, and present social-network aspects. Well-known applications that have helped spread Web 2.0 are blogs, wikis, and image/video sharing sites; they have dramatically increased sharing and participation among web users. It is possible to build knowledge using tools that can help analyze users’ behavior behind the scenes: what they do, what they know, what they want. Tools that help share this knowledge across a network, and that can reason on that knowledge, will lead to users who can better use the knowledge available, i.e., to smarter users. Wikipedia, a wildly successful example of web technology, has helped knowledge-sharing between people by letting individuals freely create and modify its content. But Wikipedia is designed for people: today's software cannot understand and reason on Wikipedia's content. In parallel, the “semantic web”, a set of technologies that help knowledge-sharing across the web between different applications, is starting to gain traction. Researchers have only recently started working on the concept of a “semantic wiki”, mixing the advantages of the wiki and the technologies of the semantic web. In this paper we present a survey of the state of the art in semantic wikis, and we introduce SweetWiki, an example of an application reconciling two trends of the future web: a semantically augmented web and a web of social applications where every user is an active provider as well as a consumer of information. SweetWiki makes heavy use of semantic web concepts and languages, and demonstrates how the use of such paradigms can improve navigation, search, and usability.

8.
“Fuzzy Functions”, determined by the least-squares estimation (LSE) technique, are proposed for the development of fuzzy system models. These “Fuzzy Functions with LSE” are offered as alternative representation and reasoning schemas to fuzzy rule base approaches, and they can be obtained and implemented more easily by those without in-depth knowledge of fuzzy theory: working knowledge of a fuzzy clustering algorithm such as FCM, or one of its variations, is sufficient to obtain membership values of input vectors. The membership values, together with the scalar input variables, are then used by the LSE technique to determine a “Fuzzy Function” for each cluster identified by FCM. These functions differ from “Fuzzy Rule Base” approaches as well as from “Fuzzy Regression” approaches. Various transformations of the membership values are included as new variables in addition to the original selected scalar input variables; at times, a logistic transformation of non-scalar original selected input variables may also be included as a new variable. A comparison of the “Fuzzy Functions-LSE” approach with ordinary least-squares estimation (OLSE) shows that “Fuzzy Functions-LSE” provides results that are better on the order of 10% or more with respect to the RMSE measure for both training and test cases of the data sets.
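
A compact numpy sketch of the general recipe described above, under illustrative assumptions (one scalar input, two clusters, a hand-rolled fuzzy c-means standing in for FCM, and an arbitrary choice of membership transformations as extra regressors); it is not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.where(x < 5, 2 * x + 1, -1.5 * x + 20) + rng.normal(0, 0.3, x.size)

def fcm_memberships(data, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means on 1-D data; returns a (c, n) membership matrix and the centers."""
    centers = rng.choice(data, size=c, replace=False)
    for _ in range(iters):
        dist = np.abs(data[None, :] - centers[:, None]) + 1e-9          # (c, n) distances
        u = 1.0 / np.sum((dist[:, None, :] / dist[None, :, :]) ** (2 / (m - 1)), axis=1)
        centers = (u ** m @ data) / np.sum(u ** m, axis=1)
    return u, centers

u, _ = fcm_memberships(x)

# One "fuzzy function" per cluster: regress y on [1, x, u_i, exp(u_i)], weighted by u_i.
designs, coeffs = [], []
for i in range(u.shape[0]):
    X = np.column_stack([np.ones_like(x), x, u[i], np.exp(u[i])])
    w = np.sqrt(u[i])                                # membership-weighted least squares
    beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    designs.append(X)
    coeffs.append(beta)

# Prediction: membership-weighted combination of the per-cluster fuzzy functions
# (FCM memberships sum to 1 across clusters, so no renormalization is needed).
pred = sum(u[i] * (designs[i] @ coeffs[i]) for i in range(u.shape[0]))
print("training RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))
```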

9.
Technology benefits last years longer than the horizon of a standard ROI valuation analysis, but they are rarely enumerated. In this paper, we use a nonconstant dividend growth model to “capture” the lasting marginal productivity gained through the “reinvestment” of labor capital, rather than the standard one-time gain of reducing the labor force to realize labor productivity gains. This innovative methodology for capturing the productivity value of retained employees enables the valuation of continuing marginal productivity gains and the management of workload for the affected employees at Intel. The methodology is applied to the valuation of a standard operating system and hardware upgrade.
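
A minimal sketch of a nonconstant (two-stage) growth valuation of a recurring productivity gain, assuming hypothetical cash flows and rates; it is meant only to show the mechanics the abstract alludes to, not Intel's actual model.

```python
def two_stage_value(cf1, g_high, n_high, g_stable, r):
    """Present value of a cash-flow stream growing at g_high for n_high years,
    then at g_stable forever (Gordon growth terminal value), discounted at r."""
    assert r > g_stable, "discount rate must exceed the stable growth rate"
    pv, cf = 0.0, cf1
    for t in range(1, n_high + 1):
        pv += cf / (1 + r) ** t
        if t < n_high:
            cf *= 1 + g_high
    # Terminal value at year n_high: next year's cash flow grows at g_stable forever.
    terminal = cf * (1 + g_stable) / (r - g_stable)
    return pv + terminal / (1 + r) ** n_high

# Hypothetical figures: $500k of first-year productivity gain from an upgrade,
# growing 15% for 3 years, then 2% indefinitely, at a 12% required return.
print(f"value of lasting gains: ${two_stage_value(500_000, 0.15, 3, 0.02, 0.12):,.0f}")
```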

10.
Some computationally hard problems, e.g., deduction in logical knowledge bases, are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess this known part off-line so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them, preprocessing yields polynomial-time on-line simplified problems (we call these problems compilable), while for others, compilability would imply consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for these classes, where “completeness” means that they are “the least likely to be compilable.” We also investigate preprocessing that does not yield polynomial-time on-line algorithms but generically “decreases” complexity. This leads us to define “hierarchies of compilability” that are the analog of the polynomial hierarchy. A detailed comparison of our framework with the idea of “parameterized tractability” shows the differences between the two approaches.

11.
In this paper, we propose a permission-based, message-efficient mutual exclusion (MUTEX) algorithm for mobile ad hoc networks (MANETs). To reduce message cost, the algorithm uses the “look-ahead” technique, which enforces MUTEX only among the hosts currently competing for the critical section. We propose mechanisms to handle dozes and disconnections of mobile hosts, and we relax the FIFO-channel assumption made in the original “look-ahead” technique. The proposed algorithm can also tolerate link or host failures using timeout-based mechanisms. Both analytical and simulation results show that the proposed algorithm works well under various conditions, especially when mobility is high or the load level is low. To our knowledge, this is the first permission-based MUTEX algorithm for MANETs.

12.
This study examines the impact of stock repurchase declarations, and of the stated purpose of the repurchase, on the stock prices of companies listed on Taiwan’s stock market. The event study method is employed to examine stock price fluctuations, and GARCH (generalized autoregressive conditional heteroscedasticity) is applied to estimate the market model regression coefficients. The sample consists of companies declaring their first stock repurchase between August 9, 2000 and December 31, 2005, with the precondition that each company had been listed for at least 150 days prior to the declaration. The results reveal that, before and after the declaration of a stock repurchase, companies from industries other than electronics have a considerably larger average cumulative abnormal return (CAR) than companies in the electronics industry. Companies whose stated purpose is “maintaining stockholders’ equity and corporate credit” have a considerably larger average CAR than companies whose stated purpose is “transferring stocks to employees”. In industries other than electronics, companies with the purpose of “maintaining stockholders’ equity and corporate credit” show a larger cumulative abnormal return response than companies with the purpose of “transferring stocks to employees”. When “maintaining stockholders’ equity and corporate credit” is the stated purpose of the repurchase, companies from industries other than electronics show a relatively higher average CAR response. The empirical results can serve as a reference for the management of listed companies and for related academic studies.
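
A bare-bones sketch of the event-study mechanics, under simplifying assumptions: the market-model coefficients are fitted by ordinary least squares on a 150-day estimation window (the study itself estimates them with GARCH), and all return series are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder daily returns: 150-day estimation window + 11-day event window (-5..+5).
n_est, n_event = 150, 11
market = rng.normal(0.0005, 0.01, n_est + n_event)
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.012, market.size)
stock[n_est + 5] += 0.03                      # injected "announcement day" jump

# Market model R_it = alpha + beta * R_mt + e_it, fitted by OLS on the estimation window.
X = np.column_stack([np.ones(n_est), market[:n_est]])
(alpha, beta), *_ = np.linalg.lstsq(X, stock[:n_est], rcond=None)

# Abnormal returns and cumulative abnormal return (CAR) over the event window.
expected = alpha + beta * market[n_est:]
ar = stock[n_est:] - expected
car = np.cumsum(ar)
print(f"alpha={alpha:.5f} beta={beta:.3f} CAR(-5,+5)={car[-1]:.4f}")
```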

13.
Many artificial intelligence tasks, such as automated question answering, reasoning, and heterogeneous database integration, involve verification of a semantic category (e.g., “coffee” is a drink and “red” is a color, while “steak” is not a drink and “big” is not a color). In this research, we explore completely automated, on-the-fly verification of membership in any arbitrary category that has not been anticipated a priori. Our approach does not rely on any manually codified knowledge (such as WordNet or Wikipedia) but instead capitalizes on the diversity of topics and word usage on the World Wide Web, and thus can be considered “knowledge-light” and complementary to “knowledge-intensive” approaches. We have created a quantitative verification model and established (1) which specific variables are important and (2) what ranges and upper limits of accuracy are attainable. While our semantic verification algorithm is entirely self-contained (it does not involve any previously reported components beyond the scope of this paper), we have tested it empirically within our fact-seeking engine on the well-known TREC conference test questions. Owing to our implementation of semantic verification, answer accuracy improved by up to 16%, depending on the specific models and metrics used.
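
A toy sketch of the general idea of verifying category membership from web usage statistics: compare how often a candidate co-occurs with the category term relative to the candidate alone, and accept when the rate clears a threshold. The hit counts below are invented placeholders standing in for web search counts, and the scoring formula and threshold are assumptions rather than the paper's quantitative model.

```python
# Placeholder "web hit counts"; in a real system these would come from search queries
# such as '"coffee" AND "drink"' versus '"coffee"'.
hits = {
    ("coffee", "drink"): 92_000_000,  ("coffee",): 1_400_000_000,
    ("steak",  "drink"):      31_000, ("steak",):    210_000_000,
    ("red",    "color"):  41_000_000, ("red",):    3_100_000_000,
    ("big",    "color"):      95_000, ("big",):    4_800_000_000,
}

def category_score(candidate, category):
    """Conditional co-occurrence rate: joint hits divided by candidate hits."""
    return hits[(candidate, category)] / hits[(candidate,)]

def verify(candidate, category, threshold=1e-3):
    """Accept membership when the co-occurrence rate clears a (hypothetical) threshold."""
    return category_score(candidate, category) >= threshold

for cand, cat in [("coffee", "drink"), ("steak", "drink"), ("red", "color"), ("big", "color")]:
    print(cand, cat, f"{category_score(cand, cat):.2e}", verify(cand, cat))
```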

14.
A sophisticated commonsense knowledge base is essential for many intelligent system applications. This paper presents a methodology for automatically retrieving event-based commonsense knowledge from the web. The approach is based on matching the text in web search results against designed lexico-syntactic patterns. We apply a semantic role labeling technique to parse the extracted sentences and identify the essential knowledge associated with the event(s) described in each sentence. In particular, we propose a semantic role substitution strategy to prune knowledge items that have a high probability of erroneously parsed semantic roles. The experimental results of a case study on retrieving “capable of” knowledge show that the accuracy of the retrieved commonsense knowledge is around 98%.
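
A toy sketch of the pattern-matching step only: a lexico-syntactic regular expression that extracts “X is capable of Y” pairs from sentences such as those harvested from web search results. The snippets and the pattern are illustrative; the full methodology additionally applies semantic role labeling and the role-substitution pruning described above.

```python
import re

# Hypothetical snippets standing in for sentences harvested from web search results.
snippets = [
    "A modern smartphone is capable of recording 4K video.",
    "The human eye is capable of distinguishing millions of colors.",
    "This recipe is easy and cheap.",                  # no match expected
]

# Lexico-syntactic pattern for the "capable of" relation.
pattern = re.compile(r"(?P<subject>[A-Za-z ]+?)\s+is\s+capable\s+of\s+(?P<capability>[^.]+)\.")

for text in snippets:
    match = pattern.search(text)
    if match:
        subject = match.group("subject").strip().lower()
        capability = match.group("capability").strip().lower()
        print(f"({subject}) --capable_of--> ({capability})")
```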

15.
The objective of this paper is to explain our approach, called “Work Flow Methodology for Analysis and Conceptual Data Base Design of Large Scale Computer Based Information System”. Through the different steps of the methodology, and in light of the definition of a dynamic adaptive system, the user fills in a number of forms that relate the topological dimension to the time dimension for each application of a given system. In addition, we obtain the “Unit Subschema”, which defines the responsibilities for issuing information and the authorization for receiving it at the proper time. Finally, we apply our methodology to the Registration System at Kuwait University.

16.
Coupling the recently proposed syntactic/semantic model of programmer behavior [1] with classic educational psychology theories yields new insights into teaching programming to novices. These insights should make programming education more natural to students, alleviate “computer shock” (the analog of “math anxiety” [2]), and promote the development of widespread “computer literacy”. The spiral approach is the parallel acquisition of syntactic and semantic knowledge in a sequence that provokes student interest by using meaningful examples, builds on previous knowledge, is in harmony with the student's cognitive skills, provides reinforcement of recently acquired material, and develops confidence through the successful accomplishment of increasingly difficult tasks. The relationship of structured programming and flowcharts to the spiral approach is discussed.

17.
Throughout their lives, people are faced with various learning situations, for example when they learn how to use new software, services, or information systems. However, research in the field of Interactive Learning Environments shows that learners needing assistance do not systematically seek or use help, even when it is available. The aim of the present study is to explore the role of some factors from Interactive Learning Environments research in another situation: using a new technology not as a means of acquiring knowledge but to carry out a specific task. First, we present the three factors included in this study: (1) the content of assistance, namely operative vs. function-oriented help; (2) the user's prior knowledge; and (3) the trigger of assistance, i.e., help provided at the user's request vs. help provided by the system. In the latter case, it is necessary to detect the user's difficulties; on the basis of research on problem solving, we list behavioral criteria expressing such difficulties. We then present two experiments that use “real” technologies developed by a large company and tested by “real” users. The results showed that (1) even when participants had reached an impasse, most of them never sought assistance, (2) operative assistance provided automatically by the system was effective for novice users, and (3) function-oriented help provided automatically by the system was effective for expert users. Assistance can support deadlock awareness and can also focus on deadlock solving by guiding the task. Assistance must be adapted to learners' prior knowledge, progress, and goals in order to improve learning.

18.
Design-patterns and design-principles represent two approaches that elicit design knowledge from successful learning environments and formulate it as design guidelines. The two approaches are fairly similar in their strategies but differ in their research origins. This study stems from the design-principles approach and explores how learning is affected by curriculum materials designed according to two main design-principles: (a) engage learners in peer instruction, and (b) reuse student artifacts as resources for further learning. These principles were employed in three higher-education courses and examined with 385 students. Data analysis was conducted along two trajectories. In the “bird's-eye view” trajectory, we used a “feature” unit of analysis to illustrate how learning was supported by features designed according to the two design-principles in each of the courses. In the “design-based research” trajectory, we focused on one feature, a web-based Jigsaw activity in a philosophy of education course, and demonstrated how it was refined via three design iterations. Students were required to specialize in one of three philosophical perspectives, share knowledge with peers who had specialized in other perspectives, and reuse the shared knowledge in new contexts. Outcomes indicated that the design in the first iteration did not sufficiently support students' ability to apply the shared knowledge. Two additional design-principles were employed in the subsequent iterations: (c) provide knowledge representation and organization tools, and (d) employ multiple social-activity structures. The importance of combining several design-principles when designing curricular materials is discussed in terms of Alexander's design-pattern language and his notion of referencing between design-patterns.

19.
Drawing upon contingency theory “fit” research in the IT and supply chain management literature, we applied the “fit” concept to the relationship between B2B e-commerce supply chain integration and performance. The results demonstrated that the effect of B2B supply chain integration on financial, market, and operational performance decreased as product turbulence and demand unpredictability jointly increased. Managerial implications include the conditions under which IT investments yield performance improvement and the need for firms to actively manage demand uncertainty.
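
A sketch of how a “fit as moderation” effect of this kind is commonly tested: regress performance on integration, an environmental-uncertainty composite, and their interaction, where a negative interaction coefficient corresponds to the attenuation reported above. The data are simulated, plain OLS via numpy is assumed, and this is not the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

integration = rng.normal(0, 1, n)          # B2B supply chain integration (standardized)
turbulence  = rng.normal(0, 1, n)          # product turbulence x demand unpredictability composite
# Simulated performance: integration helps, but less so when turbulence is high.
performance = (0.5 * integration + 0.1 * turbulence
               - 0.3 * integration * turbulence + rng.normal(0, 1, n))

# Moderated regression: performance ~ integration + turbulence + interaction.
X = np.column_stack([np.ones(n), integration, turbulence, integration * turbulence])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
labels = ["intercept", "integration", "turbulence", "integration x turbulence"]
for name, b in zip(labels, beta):
    print(f"{name:26s} {b:+.3f}")   # a negative interaction term indicates the attenuating effect
```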

20.
An integrated multi-unit chemical plant presents a challenging control design problem due to the existence of recycle streams. In this paper, we develop a framework for analyzing the effects of recycle dynamics on closed-loop performance, from which a systematic design of a decentralized control system for a multi-unit plant with recycle is established. In the proposed approach, the recycle streams are treated as unmodelled dynamics of the “unit” model so that their effects on closed-loop stability and performance can be analyzed using robust control theory. As a result, two measures are produced: (1) the ν-gap metric, which quantifies the strength of the recycle effects, and (2) the maximum stability margin of the “unit” controller, which represents the ability of the “unit” controller to compensate for such effects. A simple rule for “unit” control design is then established by combining the two measures in order to guarantee good overall closed-loop performance. As illustrated by several design examples, the controllability of a multi-unit process with recycle under a decentralized “unit” controller can be determined without any detailed design of the “unit” controller, because the simple rule is calculated from open-loop information only.
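
For reference, the two measures correspond to standard quantities in ν-gap robustness theory (Vinnicombe); a standard statement is sketched below, noting that sign conventions and the plant-specific definitions used in the paper may differ in detail.

```latex
% nu-gap between the nominal "unit" model P_1 and the plant with recycle P_2
% (when the usual winding-number condition holds; otherwise the gap is defined to be 1):
\delta_\nu(P_1,P_2) = \bigl\| (I+P_2P_2^{*})^{-1/2}\,(P_2-P_1)\,(I+P_1^{*}P_1)^{-1/2} \bigr\|_\infty

% generalized stability margin of the "unit" controller C on the nominal model P_1
% (defined to be zero when the nominal loop [P_1, C] is unstable):
b_{P_1,C} = \left\| \begin{bmatrix} P_1 \\ I \end{bmatrix} (I - C P_1)^{-1} \begin{bmatrix} -C & I \end{bmatrix} \right\|_\infty^{-1}

% robustness rule: the "unit" controller is guaranteed to stabilize the plant with recycle whenever
b_{P_1,C} > \delta_\nu(P_1,P_2)
```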
