Similar Documents
Found 10 similar documents (search time: 125 ms)
1.
Reasoning about programs involves a logical framework that seems to go beyond classical predicate logic. LAR is an extension of predicate logic with additional concepts intended to formalize our natural reasoning about algorithms. Semantically, this extension introduces an underlying time scale on which formulas are evaluated, together with time-shifting connectives. Besides a full model-theoretic treatment, a consistent and complete formal system for LAR is given. The pure logical system can serve as a basis for various theories. As an example, a theory of while-program schemes is developed which contains Hoare's correctness proof system.
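For orientation, Hoare's correctness proof system mentioned above is built from rules such as the while rule; a standard textbook formulation (not the LAR-specific one with its time-scale semantics) looks like this:

```latex
% Hoare's while rule and the rule of consequence, in standard notation.
% A generic reminder only; the formulation inside LAR may differ.
\[
\frac{\{\,P \land b\,\}\; S \;\{\,P\,\}}
     {\{\,P\,\}\; \mathbf{while}\ b\ \mathbf{do}\ S \;\{\,P \land \lnot b\,\}}
\qquad
\frac{P' \Rightarrow P \quad \{P\}\; S \;\{Q\} \quad Q \Rightarrow Q'}
     {\{P'\}\; S \;\{Q'\}}
\]
```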

2.
Personalization of content returned from a Web site is an important problem in general and affects e-commerce and e-services in particular. Targeting appropriate information or products to the end user can significantly change the user experience on a Web site for the better. One possible approach to Web personalization is to mine typical user profiles from the vast amount of historical data stored in access logs. We present a system that mines the logs to obtain profiles and uses them to automatically generate a Web page containing URLs the user might be interested in. The generated profiles are based only on the user's prior traversal patterns on the Web site and do not require the user to provide any declarative information or to log in. Profiles are dynamic in nature: a user's traversal pattern changes over time. To reflect these changes in the personalized page generated for the user, the profiles have to be regenerated, taking the existing profile into account. Instead of creating a new profile, we incrementally add and/or remove information from a user profile, aiming to save both time and memory.
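As a rough illustration of the incremental profile maintenance described above, the sketch below decays old URL weights and prunes weak entries instead of rebuilding the profile from scratch; the profile representation, decay rate, and pruning threshold are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of an incrementally maintained user profile:
# URLs are weighted by visits, old interests decay, and low-weight
# entries are pruned. DECAY and MIN_WEIGHT are assumed values.
from collections import Counter

DECAY = 0.9        # how quickly old interests fade (assumed)
MIN_WEIGHT = 0.5   # prune URLs whose weight falls below this (assumed)

def update_profile(profile: Counter, session_urls: list[str]) -> Counter:
    # Decay existing weights instead of rebuilding the profile from scratch.
    updated = Counter({url: w * DECAY for url, w in profile.items()})
    # Add evidence from the new session.
    updated.update(session_urls)
    # Remove entries the user no longer seems interested in.
    return Counter({url: w for url, w in updated.items() if w >= MIN_WEIGHT})

profile = Counter()
profile = update_profile(profile, ["/home", "/products/a", "/products/a"])
profile = update_profile(profile, ["/products/b"])
print(profile.most_common(3))  # candidate URLs for the personalized page
```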

3.
In many e-commerce applications, ranging from dynamic Web content presentation, to personalized ad targeting, to individual recommendations to customers, it is important to build personalized profiles of individual users from their transactional histories. These profiles constitute models of individual user behavior and can be specified as sets of rules learned from user transactional histories using various data mining techniques. Since many discovered rules can be spurious, irrelevant, or trivial, one of the main problems is how to perform post-analysis of the discovered rules, i.e., how to validate user profiles by separating good rules from bad ones. This validation process should be done with the explicit participation of a human expert. However, complications arise because applications that deal with many users can produce very large numbers of rules, and the expert cannot perform the validation on a rule-by-rule basis in a reasonable period of time. This paper presents a framework for building behavioral profiles of individual users. It also introduces a new approach to expert-driven validation of a very large number of rules pertaining to these users. In particular, it presents several types of validation operators, including rule grouping, filtering, browsing, and redundant-rule elimination operators, that allow a human expert to validate many individual rules at a time. By iteratively applying such operators, the human expert can validate a significant part of all the initially discovered rules in an acceptable time period. These validation operators were implemented as part of a one-to-one profiling system. The paper also presents a case study of using this system to validate individual user rules discovered in a marketing application.
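The validation operators named above (grouping, filtering, and so on) can be illustrated with a small sketch; the rule representation and the concrete attributes are illustrative assumptions, not the paper's actual data model.

```python
# Sketch of two expert-driven validation operators: template-based
# filtering and attribute-based grouping. The dict-based rule format
# and the example attributes are illustrative assumptions.
from collections import defaultdict

rules = [
    {"user": "u1", "antecedent": {"day": "weekend"}, "consequent": {"buys": "toys"}},
    {"user": "u2", "antecedent": {"day": "weekday"}, "consequent": {"buys": "books"}},
    {"user": "u3", "antecedent": {"day": "weekend"}, "consequent": {"buys": "books"}},
]

def filter_rules(rules, attribute, value):
    """Keep only rules whose antecedent mentions attribute=value."""
    return [r for r in rules if r["antecedent"].get(attribute) == value]

def group_rules(rules, attribute):
    """Group rules by a consequent attribute so the expert can accept
    or reject a whole group at once instead of rule by rule."""
    groups = defaultdict(list)
    for r in rules:
        groups[r["consequent"].get(attribute)].append(r)
    return dict(groups)

weekend_rules = filter_rules(rules, "day", "weekend")
by_purchase = group_rules(weekend_rules, "buys")
print({k: len(v) for k, v in by_purchase.items()})
```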

4.
In this paper we deepen Mundici's analysis of the reducibility of the decision problem from the infinite-valued Łukasiewicz logic Ł to a suitable m-valued Łukasiewicz logic Ł_m, where m depends only on the length of the formulas to be proved. Using geometrical arguments we find a better upper bound for the least integer m such that a formula is valid in Ł if and only if it is also valid in Ł_m. We also reduce the notion of logical consequence in Ł to the same notion in a suitable finite set of finite-valued Łukasiewicz logics. Finally, we define an analytic and internal sequent calculus for infinite-valued Łukasiewicz logic.
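For concreteness, validity in a finite-valued logic Ł_m can be decided by exhaustive evaluation over the truth-value set {0, 1/(m-1), ..., 1} with the standard Łukasiewicz connectives; the formula encoding in the sketch below is an illustrative choice, not the paper's machinery.

```python
# Brute-force validity check in the m-valued Łukasiewicz logic Ł_m:
# truth values are {0, 1/(m-1), ..., 1}, negation is 1-x, and
# implication is min(1, 1-x+y). Formulas are nested tuples,
# which is an illustrative encoding.
from fractions import Fraction
from itertools import product

def evaluate(formula, valuation):
    if isinstance(formula, str):                      # propositional variable
        return valuation[formula]
    op, *args = formula
    if op == "not":
        return 1 - evaluate(args[0], valuation)
    if op == "imp":
        x, y = (evaluate(a, valuation) for a in args)
        return min(Fraction(1), 1 - x + y)
    raise ValueError(f"unknown connective: {op}")

def valid_in_Lm(formula, variables, m):
    values = [Fraction(i, m - 1) for i in range(m)]
    return all(
        evaluate(formula, dict(zip(variables, vs))) == 1
        for vs in product(values, repeat=len(variables))
    )

# (p -> p) is valid in every Ł_m; (p -> not p) is not.
print(valid_in_Lm(("imp", "p", "p"), ["p"], 5))           # True
print(valid_in_Lm(("imp", "p", ("not", "p")), ["p"], 5))  # False
```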

5.
The basic starting point of this paper is that context constitutes most of the user interface in VR-related experiments, yet performance measures are based on only a few active tasks. Thus, in order to meaningfully compare results obtained in vastly different experiments, one needs somehow to subtract the contribution to the observables that is due to the context. For the case where one is investigating whether changes in one observable cause changes in another, a method, context calibration, is proposed that does just that. This method is expected to factor out, to a large extent, the part of the results that is due to factors not explicitly considered when evaluating the experiment, factors that the experimenter might not even suspect influence the experiment. A procedure for systematically investigating the theoretical assumptions underlying context calibration is also discussed, as is an initial experiment adhering to the proposed methodology.

6.
Any assessment of the cost of encoding one data structure in another must take into account, among other issues, the intended patterns of traversing the guest structure. Two such usage patterns, namely worst-edge traversal and all-edges-equally-likely traversal, are particularly significant, since any bounds on encoding costs relative to these patterns yield bounds relative to large classes of other patterns as well. These remarks are formalized in this paper, and a number of techniques for bounding the costs of encodings relative to these special usage patterns are developed and exemplified. Specifically, data structures are represented here as undirected graphs, and a number of lower bounds on the costs of data encodings are derived by comparing various structural features of the guest and host graphs. Relevant features include maximum and average vertex degree, volume, and exposure, a measure of connectivity. A preliminary version of this paper was presented, under the title "Toward a theory of data encoding," at the Conference on Theoretical Computer Science, Waterloo, Ontario, August 15–17, 1977.
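The degree-based features mentioned above are straightforward to compute from an adjacency-list representation; the toy guest and host graphs below are illustrative only, and the paper's actual lower-bound arguments go well beyond comparing degrees.

```python
# Maximum and average vertex degree of a guest and a host graph,
# two of the structural features the abstract mentions. The graphs
# and the comparison are illustrative assumptions.
def degrees(graph: dict[str, set[str]]) -> list[int]:
    return [len(neighbors) for neighbors in graph.values()]

def degree_features(graph):
    ds = degrees(graph)
    return max(ds), sum(ds) / len(ds)   # (maximum degree, average degree)

# Guest: a 4-cycle; host: a path on 4 vertices.
guest = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
host  = {"1": {"2"}, "2": {"1", "3"}, "3": {"2", "4"}, "4": {"3"}}

print("guest (max, avg degree):", degree_features(guest))
print("host  (max, avg degree):", degree_features(host))
# A guest whose degrees exceed the host's forces some guest edges to be
# "stretched" in any encoding, which is the flavor of such lower bounds.
```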

7.
Software that is to be designed and written for operation in the factory environment is especially difficult to conceptualize, design, and successfully install. This paper focuses on some aspects of software engineering that apply to this situation and may prove useful to others in the profession. The particular problem considered in the paper is that of a real-time production monitoring system, although any industrial system could have been used. Monitoring industrial processes and displaying meaningful data in real time is extremely difficult, mainly because each component, although complementary, is functionally, electrically, and temporally quite different. It is therefore difficult to design a standard factory data structure or always to find elegant processing mechanisms. In order to integrate data from these disparate sources, the system must be carefully architected, using consistent and sound software engineering principles. The paper includes practical aspects of the implementation of this particular information system, which is a growing component in the management process of a typical computer-integrated manufacturing facility. The paper contains sections on human-factors engineering, fault detection, and system recovery. The selection of the operating system platform is critical, and software engineering professionals should appreciate the sections devoted to the system components. Some material is based on the author's own practical experience gained in the design and implementation of several such systems.

8.
A central component of the analysis of panel clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel clustering algorithms.
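The admissibility condition and the interpolation idea can be sketched in a few lines; the one-dimensional setting, the kernel 1/|x - y|, and the interpolation order below are illustrative assumptions rather than the paper's setting.

```python
# Sketch of interpolation-based kernel approximation: on an admissible
# pair of 1D clusters (intervals), replace k(x, y) = 1/|x - y| by its
# Lagrange interpolant in x at Chebyshev points of the cluster tau.
import numpy as np

def admissible(tau, sigma, eta):
    """η-admissibility: min{diam(τ), diam(σ)} <= 2 η dist(τ, σ)."""
    diam = lambda c: c[1] - c[0]
    dist = max(0.0, max(tau[0], sigma[0]) - min(tau[1], sigma[1]))
    return min(diam(tau), diam(sigma)) <= 2 * eta * dist

def chebyshev_points(a, b, order):
    k = np.arange(order + 1)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * (order + 1)))

def interpolated_kernel(x, y, tau, order=6):
    """Lagrange interpolation of k(., y) = 1/|. - y| at Chebyshev points of τ."""
    pts = chebyshev_points(tau[0], tau[1], order)
    value = 0.0
    for i, xi in enumerate(pts):
        weight = np.prod([(x - xj) / (xi - xj) for j, xj in enumerate(pts) if j != i])
        value += weight / abs(xi - y)
    return value

tau, sigma, eta = (0.0, 1.0), (2.0, 3.0), 1.0
print(admissible(tau, sigma, eta))                      # True: clusters are well separated
x, y = 0.3, 2.5
print(1 / abs(x - y), interpolated_kernel(x, y, tau))   # exact vs. interpolated value
```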

9.
A version of topology's fundamental group is developed for digital images in dimension at most 3 in [7] and [8]. In the latter paper, it is shown that such a digital image X ⊂ Z^k, k ≤ 3, has a continuous analog C(X) ⊂ R^k such that X has digital fundamental group isomorphic to π_1(C(X)). However, the construction of the digital fundamental group in [7] and [8] does not greatly resemble the classical construction of the fundamental group of a topological space. In the current paper, we show how classical methods of algebraic topology may be used to construct the digital fundamental group. We construct the digital fundamental group based on the notions of digitally continuous functions presented in [10] and digital homotopy [3]. Our methods are very similar to those of [6], which uses different notions of digital topology. We show that the resulting theory of digital fundamental groups is related to that of [7] and [8] in that it yields isomorphic fundamental groups for the digital images considered in the latter papers (for certain connectedness types).
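For readers unfamiliar with the terminology, a standard adjacency-based formulation of digital continuity used in this literature (the precise statement in [10] may differ in detail) is:

```latex
% Adjacency-based digital (kappa, lambda)-continuity; a common
% formulation in the digital topology literature, stated as a reminder.
\[
f\colon (X,\kappa) \to (Y,\lambda) \text{ is } (\kappa,\lambda)\text{-continuous}
\iff
\forall\, x_0, x_1 \in X:\;
x_0, x_1 \text{ are } \kappa\text{-adjacent or equal}
\;\Rightarrow\;
f(x_0), f(x_1) \text{ are } \lambda\text{-adjacent or equal}.
\]
```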

10.
Web personalization has quickly changed from a value-added facility to a required service in presenting large quantities of information, because individual users of the Internet have various needs and preferences in seeking information. This paper presents a novel personalized recommendation system with online preference analysis in a distance-learning environment, called Coursebot. Users can both browse and search for course materials through the Coursebot interface. Moreover, the proposed system ranks appropriate course materials according to a user's interests. In this work, an analysis measure is proposed that combines typical grey relational analysis with implicit rating, so that a user's interests are calculated from the content of documents and the user's browsing behavior. The algorithm's low computational complexity and ease of adding knowledge support online personalized analysis. In addition, the user profiles are dynamically revised to efficiently provide personalized information that reflects a user's interests after each page is visited.
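The grey relational analysis mentioned above scores each candidate document by its grey relational grade against a reference vector describing the user's interests; the feature vectors and the distinguishing coefficient ρ = 0.5 in the sketch below are illustrative assumptions.

```python
# Sketch of grey relational analysis (GRA) for ranking documents:
# each candidate is scored by its grey relational grade against a
# reference vector of user interests. Feature vectors and rho are
# illustrative assumptions.
def grey_relational_grade(reference, candidate, rho=0.5):
    deltas = [abs(r - c) for r, c in zip(reference, candidate)]
    d_min, d_max = min(deltas), max(deltas)
    coefficients = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coefficients) / len(coefficients)

# Reference: the user's interest profile over a few keyword features.
user_profile = [0.9, 0.1, 0.6]
documents = {
    "doc_a": [0.8, 0.2, 0.5],
    "doc_b": [0.1, 0.9, 0.4],
}
ranked = sorted(documents,
                key=lambda d: grey_relational_grade(user_profile, documents[d]),
                reverse=True)
print(ranked)  # documents ordered by similarity to the user's interests
```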
