Similar Documents
Found 20 similar documents (search time: 125 ms)
1.
A fundamental question that must be addressed in software agents for knowledge management is coordination in multi-agent systems. The coordination problem is ubiquitous in knowledge management, such as in manufacturing, supply chains, negotiation, and agent-mediated auctions. This paper summarizes several multi-agent systems for knowledge management that have been developed recently by the author and his collaborators to highlight new research directions for multi-agent knowledge management systems. In particular, the paper focuses on three areas of research:
  • Coordination mechanisms in agent-based supply chains. How do we design mechanisms for coordination, information and knowledge sharing in supply chains with self-interested agents? What would be a good coordination mechanism when we have a non-linear structure of the supply chain, such as a pyramid structure? What are the desirable properties for the optimal structure of efficient supply chains in terms of information and knowledge sharing? Will DNA computing be a viable tool for the analysis of agent-based supply chains?
  • Coordination mechanisms in agent-mediated auctions. How do we induce cooperation and coordination among various self-interested agents in agent-mediated auctions? What are the fundamental principles to promote agent cooperation behavior? How do we train agents to learn to cooperate rather than program agents to cooperate? What are the principles of trust building in agent systems?
  • Multi-agent enterprise knowledge management, performance impact and human aspects. Will people use agent-based systems? If so, how do we coordinate agent-based systems with human beings? What would be the impact of agent systems in knowledge management in an information economy?

2.
The International Parallel and Distributed Processing Symposium (IPDPS) 2008 panel with the title “How to avoid making the same Mistakes all over again: What the parallel-processing Community has (failed) to offer the multi/many-core Generation” sought to provoke discussion on current and recent computer science education in relation to the emergence of fundamentally parallel multi/many-core systems. Is today’s/tomorrow’s/yesterday’s computer science graduate equipped to deal with the challenges of parallel software development for such systems? Are mistakes from the past being unnecessarily repeated? What are the fundamental contributions of the parallel processing research community to the current state of affairs that are possibly being ignored? What are the new challenges that have not been addressed in past parallel processing research? What should computer-science education in parallel processing look like? Should it be taught at all?

3.
Product recommendation is one of the most important services on the Internet. In this paper, we consider a recommendation system that recommends products to a group of users. The system has only partial preference information on this group: each user indicates his preference only for a small subset of products, in the form of ratings. This partial preference information makes it a challenge to produce an accurate recommendation. In this work, we explore a number of fundamental questions. What is the desired number of ratings per product to guarantee an accurate recommendation? What are some effective voting rules for summarizing ratings? How may users’ misbehavior, such as cheating in product ratings, affect recommendation accuracy? What are some efficient rating schemes? To answer these questions, we present and formally analyze a mathematical model of a group recommendation system. Through this analysis we gain the insight to develop a randomized algorithm that is both computationally efficient and asymptotically accurate in evaluating recommendation accuracy under a very general setting. We propose a novel and efficient heterogeneous rating scheme that requires equal or less rating workload, but can improve over a homogeneous rating scheme by as much as 30%. We carry out experiments on both synthetic data and real-world data from TripAdvisor. Not only do we validate our model, but we also obtain a number of interesting observations, e.g., a small number of misbehaving users can decrease the recommendation accuracy remarkably. For TripAdvisor, one hundred ratings per product are sufficient to guarantee a highly accurate recommendation. We believe our model and methodology are important building blocks to refine and improve applications of group recommendation systems.
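As a small illustration of one voting rule of the kind the abstract mentions, here is a minimal majority-vote summarizer over partial ±1 ratings; the product names, the ±1 rating scale, and the strict-majority rule are illustrative assumptions, not the paper's actual model:

```python
def majority_vote(ratings):
    """Summarize a list of +1/-1 ratings: recommend iff strictly more +1 than -1."""
    return sum(ratings) > 0

def recommend(products_ratings):
    """Return the product ids whose partial ratings pass the majority rule."""
    return [pid for pid, ratings in products_ratings.items() if majority_vote(ratings)]

# Hypothetical group with partial information: each product is rated
# by only a subset of the users.
ratings = {"hotel_a": [1, 1, -1, 1], "hotel_b": [-1, -1, 1]}
print(recommend(ratings))  # ['hotel_a']
```

Note how few ratings decide each product here: with so little information, a couple of adversarial -1 votes on "hotel_a" would flip the outcome, which is the sensitivity to misbehaving users the paper quantifies.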

4.
5.
Despite its progress over the last three decades, there is an obvious imbalance between the extraordinary growth of structural optimization theory and its negligible application to professional practice. What are the causes of this imbalance? How can it be corrected? This paper attempts to answer these questions by surveying authoritative opinions on the subject. A reappraisal of the nature, theory, and practice of optimization suggests that the disappointing penetration of structural optimization into professional engineering is largely due to the confusion between theoretical and engineering models. It is suggested that an important step forward could be taken by focusing on engineering rather than mathematical optimization, i.e. by starting from real problems in search of their optimal solutions, instead of starting from optimization algorithms in search of practical applications, as is currently often the case.

6.
Random walk with restart: fast solutions and applications
How closely related are two nodes in a graph? How can this score be computed quickly on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, like automatic captioning of images, generalizations to the “connection subgraphs”, personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale for large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block-wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman–Morrison lemma for matrix inversion. Experimental results on the Corel image and DBLP datasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150× speedup with 90%+ quality preservation.
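For context, a minimal sketch of the straightforward iterative RWR computation, i.e. the baseline the paper's fast solutions improve upon (not the paper's low-rank/partitioned method); the tiny path graph and restart probability are illustrative assumptions:

```python
import numpy as np

def rwr_scores(A, seed, restart=0.15, tol=1e-10, max_iter=1000):
    """Relevance of every node to `seed` via random walk with restart.

    Iterates r = (1-c) * W r + c * e until convergence, where W is the
    column-normalized adjacency matrix and e is the restart vector.
    """
    W = A / A.sum(axis=0, keepdims=True)  # column-stochastic transition matrix
    e = np.zeros(A.shape[0])
    e[seed] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_new = (1 - restart) * (W @ r) + restart * e
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Path graph 0-1-2-3: node 1 should be more relevant to node 0 than node 3 is.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = rwr_scores(A, seed=0)
```

On a large graph each query would run this whole iteration, which is exactly the slow response time the abstract complains about; the paper's contribution is pre-computing structure so queries avoid it.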

7.
Gray  W.D. 《Software, IEEE》1997,14(4):26-28
The issue here is not whether discount techniques should be used; they are inevitable. The issue is, in trying to do the best job you can with the ridiculously limited resources provided you, what should you do? How confident should you be in the techniques you are using? A bad design may come back and bite you. When you choose a technique to use in a hurry, you are placing your professional reputation and perhaps your job on the line. You deserve to know four things about any technique that you apply. The hit rate: How many real problems will this technique uncover? The false-alarm rate: How many (and what sorts of) things will it falsely identify as problems (that may not exist, but are costly and time-consuming to “fix”)? What does it miss? What types of problems (and how many) does this technique not discover? The correct rejections: How confident are you in your discount technique's ability to flag problems? Discount techniques are not a substitute for the potent combination of analytic and empirical methodologies that usability professionals can bring to bear in designing and evaluating an interface.
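Gray's four quantities are the standard signal-detection measures, and a small sketch shows how they would be computed from an audit of a technique's findings; the counts below are invented for illustration:

```python
def rates(hits, misses, false_alarms, correct_rejections):
    """Signal-detection summary of an evaluation technique's performance."""
    hit_rate = hits / (hits + misses)                       # real problems found
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    miss_rate = 1.0 - hit_rate                              # real problems overlooked
    return hit_rate, false_alarm_rate, miss_rate

# Hypothetical audit: the technique flagged 8 of 10 real problems,
# and raised 3 spurious issues out of 15 non-problems it inspected.
hit, fa, miss = rates(hits=8, misses=2, false_alarms=3, correct_rejections=12)
```

The point of the four numbers is the trade-off: a technique tuned to flag everything has a high hit rate but also a high false-alarm rate, and each false alarm is costly to "fix".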

8.
What are names for computer files and commands like? How do people go about naming them? How do the properties such names can have affect the ease with which they can be learned and used? This paper sketches a general view of names and naming in which the linguistic forms that names take are deliberately structured to reflect functional interrelations between their referents. This view is then applied to an analysis of personal filenames chosen by CMS users and to a series of experimental studies of command languages.

9.
Type-2 fuzzy sets and systems: an overview
This paper provides an introduction to and an overview of type-2 fuzzy sets (T2 FSs) and systems. It does this by answering the following questions: What is a T2 FS and how is it different from a type-1 (T1) FS? Is there new terminology for a T2 FS? Are there important representations of a T2 FS and, if so, why are they important? How and why are T2 FSs used in a rule-based system? What are the detailed computations for an interval T2 fuzzy logic system (IT2 FLS) and are they easy to understand? Is it possible to have an IT2 FLS without type reduction? How do we wrap this up and where can we go to learn more?

10.
Creating High Confidence in a Separation Kernel
Separation of processes is the foundation for security and safety properties of systems. This paper reports on a collaborative effort of government, industry, and academia to achieve high confidence in the separation of processes. To this end, this paper discusses (1) what a separation kernel is, (2) why the separation of processes is fundamental to security systems, (3) how high confidence in the separation property of the kernel was obtained, and (4) some of the ways government, industry, and academia cooperated to achieve high confidence in a separation kernel. What is separation? Strict separation is the inability of one process to interfere with another. In a separation kernel, the word separation is interpreted very strictly: any means for one process to disturb another, be it by communication primitives, by sharing of data, or by subtle uses of kernel primitives not intended for communication, is ruled out when two processes are separated. Why is separation fundamental? Strict separation between processes enables the evaluation of a system to check that it meets its security policy. For example, if a red process is strictly separated from a black process, then it can be concluded that there is no flow of information from red to black. How was high confidence achieved? We collaborated and shared our expertise in the use of SPECWARE, a correct-by-construction method in which high-level specifications are built up from modules using specification combinators. Refinements of the specifications are made until an implementation is achieved; these refinements are also subject to combinators. The high confidence in the separation property of the kernel stems from the use of formal methods in the development of the kernel. How did we collaborate? Staff from the Kestrel Institute (developers of SPECWARE), the Department of Defense (DoD), and Motorola (developers of the kernel) cooperated in the creation of the Mathematically Analyzed Separation Kernel (MASK). DoD provided the separation kernel concept, and expertise in computer security and high-confidence development. Kestrel provided expertise in SPECWARE. Motorola combined its own expertise with that of DoD and Kestrel in creating MASK.

11.
How can we maintain a dynamic profile capturing a user’s reading interest against the common interest? What are the queries that have been asked 1,000 times more frequently of a search engine by users in Asia than in North America? What are the keywords (or tags) that are 1,000 times more frequent in the blog stream on computer games than in the blog stream on Hollywood movies? To answer such interesting questions, we need to find discriminative items in multiple data streams. Each data source, such as Web search queries in a region or blog postings on a topic, can be modeled as a data stream due to its fast-growing volume. Motivated by these extensive applications, in this paper we study the problem of mining discriminative items in multiple data streams. We show that, to exactly find all discriminative items in stream \(S_1\) against stream \(S_2\) in one scan, the space lower bound is \(\Omega(|\Sigma| \log \frac{n_1}{|\Sigma|})\), where \(\Sigma\) is the alphabet of items and \(n_1\) is the current size of \(S_1\). To tackle the space challenge, we develop three heuristic algorithms that can achieve high precision and recall using sub-linear space and sub-linear processing time per item with respect to \(|\Sigma|\). The complexities of all the algorithms are independent of the sizes of the two streams. An extensive empirical study using both real and synthetic data sets verifies our design.
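For intuition, here is the exact in-memory baseline the space lower bound is about: count every item in both streams and report those whose relative frequency in one stream exceeds the other by a factor rho. The streams, item names, and threshold are illustrative; the paper's point is precisely that this exact approach needs too much space, motivating its sub-linear heuristics:

```python
from collections import Counter

def discriminative_items(s1, s2, rho=3.0):
    """Items whose relative frequency in s1 is at least `rho` times
    their relative frequency in s2 (exact, one pass over each stream)."""
    c1, c2 = Counter(s1), Counter(s2)
    n1, n2 = len(s1), len(s2)
    out = []
    for item, cnt in c1.items():
        f1 = cnt / n1
        f2 = c2.get(item, 0) / n2
        # Items absent from s2 are maximally discriminative for s1.
        if f2 == 0 or f1 / f2 >= rho:
            out.append(item)
    return sorted(out)

# Hypothetical tag streams: computer-game blogs vs. movie blogs.
s1 = ["mmo", "mmo", "mmo", "fps", "oscar"]
s2 = ["oscar", "oscar", "oscar", "fps"]
print(discriminative_items(s1, s2))  # ['mmo']
```

The counters here grow with \(|\Sigma|\), matching the flavor of the lower bound; the streaming algorithms trade exactness for sub-linear space.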

12.
Many enterprises have been devoting a significant portion of their budget to product development in order to distinguish their products from those of their competitors and to make them better fit the needs and wants of customers. Businesses should therefore develop product designs that satisfy customers’ requirements, since this increases an enterprise’s competitiveness and is an essential criterion for earning higher loyalty and profits. This paper investigates the following research issues in the development of new digital camera products: (1) What exactly are the customers’ “needs” and “wants” for digital camera products? (2) Which features are more important than others? (3) Can product design and planning for product lines/collections be integrated with knowledge of customers? (4) How can the mined rules help us form a strategy when designing new digital cameras? To investigate these research issues, the Apriori and C5.0 algorithms, association-rule and decision-tree methodologies for data mining, are implemented to mine customers’ needs. Knowledge extracted from the data mining results is illustrated as knowledge patterns and rules on a product map in order to propose possible suggestions and solutions for product design and marketing.
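A minimal sketch of the level-wise Apriori search for frequent itemsets, the first stage of the association-rule mining the paper applies; the camera-feature "transactions" and the support threshold are hypothetical, and this sketch omits the subset-pruning step of full Apriori (infrequent candidates are simply filtered by their support):

```python
def apriori(transactions, min_support):
    """Frequent itemsets by level-wise search: grow frequent k-itemsets
    into (k+1)-item candidates, keeping those meeting min_support."""
    transactions = [set(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    frequent = {}
    level = [frozenset([i]) for t in transactions for i in t]
    level = list(set(level))
    while level:
        level = [c for c in level if support(c) >= min_support]
        for c in level:
            frequent[c] = support(c)
        # Join step: union pairs of frequent k-itemsets into (k+1)-itemsets.
        level = list({a | b for a in level for b in level
                      if len(a | b) == len(a) + 1})
    return frequent

# Hypothetical feature sets customers asked about in four inquiries.
txns = [{"zoom", "wifi"}, {"zoom", "wifi", "gps"}, {"zoom"}, {"wifi", "gps"}]
freq = apriori(txns, min_support=0.5)
```

From the frequent itemsets, association rules such as {zoom} → {wifi} would then be ranked by confidence, which is how customer "needs and wants" surface as rules on the product map.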

13.
In HCI research there is a body of work concerned with the development of systems capable of reasoning about users’ attention and how this might be most effectively guided for specific applications. We present eight issues relevant to this endeavour: What is attention? How can attention be measured? How do graphical displays interact with attention? How do knowledge, performance and attention interact? What is working memory? How does doing two things at a time affect attention? What is the effect of artificial feedback loops on attention? Do attentional processes differ across tasks? For each issue we present design implications for developing attention-aware systems, and present a general discussion focussing on the dynamic nature of attention, tasks (number, nature and variety), level of processing, nature of the display, and validity of measures. In conclusion, we emphasise the need to adopt a dynamic view of attention and suggest that attention is a more complex phenomenon than some designers may have realised; however, embracing the multi-faceted nature of attention provides a range of design opportunities yet to be explored.

14.
New definitions are proposed for the security of Transient-Key Cryptography (a variant on Public-Key Cryptography) that account for the possibility of super-polynomial-time Monte Carlo cryptanalytical attacks. Weaker definitions no longer appear to be satisfactory in the light of Adleman's recent algorithm capable of breaking the Diffie-Hellman scheme in \(RTIME(2^{O(\sqrt{n \log n})})\) for keys of length n. The basic question we address is: How can one relate the amount of time a cryptanalyst is willing to spend decoding cryptograms to his likelihood of success? What more can one say than the obvious “The more time he uses, the less lucky he needs to be”? These questions and others are partially answered in a relativized model of computation in which there exists a transient-key cryptosystem such that even a cryptanalyst willing to spend as much as (almost) \(O(2^{n/\log n})\) steps on length-n cryptograms cannot hope to break but an exponentially small fraction of them, even if he is allowed to make use of a true random number generator. This is rather tight, because the same cryptosystem falls immediately if the cryptanalyst is willing to spend \(O(2^{cn})\) steps for any constant c > 0.

15.
《EDPACS》2013,47(9):18-19
Abstract

Whether you are responsible for ensuring the availability of your enterprise network or you are a chief technology officer or information security manager, you will likely ask yourself these questions: How much should I spend on security? Am I more secure today than I was yesterday? What metrics can I use to measure whether my security is improving or not? When can I stop patching so I can get back to doing real work?

16.
Studying Urban Development with Systems Science and Intelligent Methods
吴澄  刘民  郝井华  董明宇 《自动化学报》2015,41(6):1093-1101
China is in a stage of rapid urbanization. In this process, decision makers often face questions such as: How large a population can a city's resources support? How should the industrial structure be adjusted to maximize the population carrying capacity? What is a reasonable target for next year's growth in the city's gross domestic product (GDP)? What is the quantitative relationship between GDP growth, the consumer price index (CPI), and employment? To which areas should fiscal investment be directed to maximize resident satisfaction? Which decisions affect coordinated urban development? Resolving such questions against the backdrop of urbanization, and producing sound top-level plans and designs for urban development, is a class of major and complex problems. This paper addresses them using systems science and intelligent methods, combining qualitative and quantitative analysis. We first build intelligent forecasting models for key indicators such as a city's GDP and fiscal revenue, along with quantitative models relating GDP growth to CPI and employment; on this basis we build optimization and decision models for population carrying capacity and coordinated urban development, and present case studies of several important decision scenarios for a large Chinese city. The results show that systems science and intelligent methods constitute a novel and effective approach to studying and solving this class of problems, and the related results can help urban decision makers improve their decisions and achieve scientific decision making that combines qualitative and quantitative analysis.

17.
Zvegintzov  N. 《Software, IEEE》1998,15(2):93-96
Typical empirical questions about software and the software business include “How productive are programming teams?”, “What are the industry norms?”, “What are the best practices?”, and “How should I measure the productivity of a programming team?” These, and others like them, are frequently asked questions. I always answer these questions with a question: “What are you trying to decide?” People almost always ask empirical questions because they need to make decisions. These decisions are important, involving risk to human life and health, affecting economic and societal well-being, and determining the equitable and productive use of scarce resources. The questioners seem to feel, because the questions are so frequently asked and the answers so often provided, that the answers must be widely known. But in looking at the answers, I observe the same pattern repeatedly. Answers that seemed well-known and widely supported turn out to be only widely proliferated. They are based on one or two sources, which often prove thin and unreliable. They are “transparency facts”, copied from presentation to presentation. When confronted with such questions, the questioners have no time to research the answers, so they grab whatever answers are at hand. Thus, many frequently asked questions about software are more truly frequently begged questions.

18.
This paper, recognizing that retrenchment is the prognosis for the foreseeable future for most collegiate institutions, employs Markov chains to trace the passage of faculty through time, and linear and parametric programming to provide guidelines for control of faculty size and composition. If cutbacks in size need to be carried out, how best should this be done? How does affirmative action influence the decision? What about accrediting agency regulations? What effects on quality can be anticipated? What policies should be recognized simultaneously? These and other questions are examined on both short-run and long-run bases using aggregated variables and a very detailed illustration.
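A toy sketch of the Markov-chain ingredient: tracing expected faculty head counts through one-year rank-transition probabilities. The ranks, transition probabilities, and starting numbers below are all invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical states: assistant, associate, full, and an absorbing
# "left" state for departures. P[i][j] = one-year probability of
# moving from state i to state j; each row sums to 1.
P = np.array([
    [0.70, 0.15, 0.00, 0.15],  # assistant: stay, promote, -, leave
    [0.00, 0.80, 0.10, 0.10],  # associate: stay, -, promote, leave
    [0.00, 0.00, 0.92, 0.08],  # full: stay or leave
    [0.00, 0.00, 0.00, 1.00],  # left (absorbing)
])

def project(faculty, years):
    """Expected head count in each state after the given number of years."""
    v = np.array(faculty, dtype=float)
    for _ in range(years):
        v = v @ P
    return v

start = [40.0, 30.0, 30.0, 0.0]   # current faculty by rank
after5 = project(start, 5)
```

Projections like this give the baseline against which the paper's programming models then impose controls, e.g. hiring rates chosen so the projected composition meets size, quality, and affirmative-action constraints.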

19.
Temporal entities are assertions (e.g. events, states) whose truth can be associated with time. An interesting problem in temporal reasoning is representing the fact that such entities are repeated over time. The problem has attracted some interest lately in the knowledge representation community. In this paper, we take a novel approach to the problem, which allows us to recognize repeated (or recurrent) entities as a class of temporal entities with well-defined properties. We derive some general properties for this important class of temporal entities, and some properties for an interesting subclass, namely the class of repetitions of concatenable entities. Concatenable entities have been called unrepeatable in the literature; as such, we take a special interest in investigating their properties. The logical theory used here is a reified theory, which admits entity types into its ontology, as opposed to tokens, and uses Allen's interval logic. Finally, we relate the new class of repetitive entities to existing classes in Shoham's taxonomy.

20.
The wide use of embedded software in safety-critical domains has made assuring software safety a research focus. Fault tree analysis is one of the traditional safety-analysis methods commonly used in industry. However, conventional fault trees cannot precisely describe system faults with temporal characteristics in safety-critical systems. To address this problem, this paper presents a safety verification method that combines linear temporal logic (LTL) with fault trees. The method formally specifies fault trees in LTL, extracts software safety properties from them, and expresses the properties as temporal-logic formulas, so as to support model checking of safety-critical software. Finally, model checking of the data-processing fault module of an airborne control system's software is used as an example to demonstrate the effectiveness and feasibility of the method.
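As a small aside on the fault-tree ingredient (the paper's contribution is the LTL formalization, which this sketch does not attempt), here is a minimal evaluator for AND/OR fault-tree gates under the usual assumption of independent basic events; the tree shape and probabilities are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Basic:
    p: float                  # failure probability of a basic event

@dataclass
class Gate:
    kind: str                 # "AND" or "OR"
    children: List["Node"]

Node = Union[Basic, Gate]

def prob(node):
    """Top-event probability, assuming independent basic events."""
    if isinstance(node, Basic):
        return node.p
    ps = [prob(c) for c in node.children]
    if node.kind == "AND":
        out = 1.0
        for p in ps:
            out *= p
        return out
    # OR gate: 1 minus the probability that every child survives.
    survive = 1.0
    for p in ps:
        survive *= (1.0 - p)
    return 1.0 - survive

# Hypothetical tree: a data-processing fault occurs if the sensor
# fails OR both redundant processors fail.
tree = Gate("OR", [Basic(0.01), Gate("AND", [Basic(0.1), Basic(0.1)])])
p_top = prob(tree)
```

What such static trees cannot express is ordering, e.g. "the fault occurs only if the sensor fails *before* the fallback engages", which is exactly the temporal structure the paper captures with LTL.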


Copyright©北京勤云科技发展有限公司  京ICP备09084417号