Similar Documents
20 similar documents found.
1.
We outline an alternative model of the interface in HCI, the ‘intraface’, in response to design issues arising from navigational and learning problems in hypertext domains. Ours is a model of general application to computer systems. It is composed of four key elements, identifiable within a dynamic interconnected context: the user; his/her interests; the tools employed; and the ‘ensemble’ of representations brought to bear. In this paper we sketch the present shortcomings of HCI design before outlining the background for the model, which draws upon two themes in contemporary psychology: conversation analysis and ‘affordance’ realist theories of perception. This framework allows for the development of principles of cooperation, user engagement and learning in HCI environments.

2.
This paper describes two techniques which can be used to improve the flexibility of modern microcomputer systems. The first is the idea of a ‘processor-configured’ system, whereby the memory and peripheral interfaces may have their structure logically altered by a single signal from the processor card, so that they match the wordlength of the processor used. The technique may be applied to existing bus standards with a minimum of modification, and then allows 8-bit and 16-bit processor cards to be exchanged on the system with a minimum of inconvenience, and without compromising performance. The second technique, ‘shadow-ROM’, is not new, but is not yet widely used in personal computer systems. The system described, using most significant address page switching, is a particularly flexible one.
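The ‘shadow-ROM’ idea above can be sketched as a toy memory map: ROM occupies the most significant address page and, until switched out, also shadows reads from low memory. The control-port address, ROM contents, and page layout below are invented for illustration; the paper's actual scheme may differ.

```python
# Toy sketch of 'shadow-ROM' via most-significant-address-page selection.
# All addresses and contents are hypothetical illustration values.
class ShadowRomBus:
    ROM_PAGE = 0xFF                       # top page of a 16-bit address space

    def __init__(self):
        self.rom = [0xAA] * 256           # stub ROM contents
        self.ram = [0x00] * 65536
        self.shadow_enabled = True        # after reset, ROM shadows page zero

    def read(self, addr):
        page = addr >> 8
        if page == self.ROM_PAGE:
            return self.rom[addr & 0xFF]  # top page always reads from ROM
        if self.shadow_enabled and page == 0x00:
            return self.rom[addr & 0xFF]  # ROM 'shadows' low memory at boot
        return self.ram[addr]

    def write(self, addr, value):
        self.ram[addr] = value            # writes always land in RAM
        if addr == 0xFE00:                # hypothetical control port
            self.shadow_enabled = False   # switch the ROM out of the low page

bus = ShadowRomBus()
before = bus.read(0x0010)                 # shadowed: comes from ROM
bus.write(0x0010, 0x55)                   # write reaches RAM under the ROM
shadowed = bus.read(0x0010)               # still reads ROM until switched out
bus.write(0xFE00, 0x01)                   # flip the shadow off
after = bus.read(0x0010)                  # now the RAM value appears
```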

3.
4.
Dongmei, Ramiro, Luigi. Computer Communications, 2006, 29(18): 3766–3779
This paper discusses issues of personalization of presence services in the context of Internet Telephony. Such services take into consideration the willingness and ability of a user to communicate in a network, as well as possibly other factors such as time, address, etc. Via a three-layer service architecture for communications in the session initiation protocol (SIP) standard, presence system basic services and personalized services (personal policies) are clearly separated and discussed. To enrich presence-related services, presence information is illustratively extended from the well known “online” and “offline” indicators to a much broader meaning that includes “location”, “lineStatus”, “role”, “availability”, etc. Based on this, the call processing language (CPL) is extended in order to describe presence-related personalized services for both call processing systems and presence systems, using information such as a person's presence status, time, address, language, or any combination of these. A web-based system is designed and implemented to simulate these advanced services. In the implementation, personal policies are programmed by end users via a graphical user interface (GUI) and are automatically translated into extended CPL. The simulation system clearly displays when, where and what CPL policies should be used for the provision of personalized presence services and call processing services. Policy conflicts are also addressed by setting policy priorities in the system.
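The personal policies that the extended CPL expresses can be illustrated with a minimal rule evaluator; the rule fields and actions below are assumptions for illustration, not the paper's CPL syntax.

```python
# Minimal sketch of personalized presence policies: rules match on presence
# attributes (availability, location, lineStatus, ...) and yield an action.
# Field names and actions are hypothetical.
def evaluate_policy(rules, presence):
    """Return the action of the first rule whose conditions all match."""
    for rule in rules:
        if all(presence.get(k) == v for k, v in rule["when"].items()):
            return rule["action"]
    return "accept"  # default when no personal policy applies

rules = [  # higher-priority rules listed first, as with policy priorities
    {"when": {"availability": "busy", "location": "office"},
     "action": "voicemail"},
    {"when": {"lineStatus": "offline"}, "action": "forward-to-mobile"},
]
```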

5.
The European automotive industry requires frequent interaction and transfer of data between geographically dispersed designers and engineers at all stages of the product introduction process. The RACE CAR project identified and demonstrated Integrated Broadband Communications (IBC)-supported applications to support this process and improve competitiveness. User requirements for workstation-based, multi-media facilities including conferencing were identified. Two experiments were designed to investigate the role of face-to-face video and the means by which participants organise and control their interactions. These are critical issues in the multi-cultural, international environment of the European automotive industry. In the first experiment, groups of three users solved a cooperative, screen-based, object-manipulation task supported by different levels of communication. ‘Linked computers plus an audio link’ resulted in significantly faster completion times than either ‘audio alone’ or ‘linked computers plus audio and face-to-face video’. ‘Linked computers plus audio’ was also perceived as the most effective communications medium. Passing cursor control via verbal agreement was successfully managed. Video was generally considered beneficial for initial introductions, assessing understanding and facilitating a stronger feeling of group identity.

In the second experiment, subjects were grouped under ‘chaired’ or ‘free-for-all’ conditions and linked via (1) audio and linked computers or (2) audio, linked computers and face-to-face video. The task was similar to that of Experiment 1, and attempts were made to introduce contention by adding hidden sub-goals. The task took significantly less time to complete in the ‘video chaired’ condition than in the ‘non-video chaired’ or ‘video free-for-all’ conditions. This suggests that video has an important role in enabling a chairperson to control the meeting. Contention, however, was not successfully achieved.

The results of the experiments suggest face-to-face video may be useful in chaired meetings and in developing a ‘team’ feeling. A free-for-all method of control passing was seen as most appropriate, although problems in achieving contention in Experiment 2 meant the impact of disagreement was not fully investigated. The results are discussed in relation to the European automotive industry, and areas for further study are identified.

Relevance to industry

The European automotive industry, which maintains distinct engineering functions in disparate countries, is striving to reduce the length of its design life cycle by improving communications between designers and engineers. The studies described in this paper provide information of use to the developers and procurers of systems intended to support this process, in particular on issues relating to the relevance of face-to-face video and the use of control mechanisms for co-operative computer-mediated work.


6.
The study statistically tests the proposition of the ISO/IEC 9126 standard that higher external quality (quality when software is executed) implies higher ‘quality in use’. For this purpose, the study, based on survey data from 75 users, shows that individual external quality subcharacteristics are positively associated with user satisfaction as defined in ‘quality in use’. This study also investigates the external quality subcharacteristics that strongly influence user satisfaction. Our results provide guidance for the revision of the Standard as well as supporting management decisions (i.e., resource allocation) aimed at improving software product quality.
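The kind of positive association the study tests can be illustrated by computing a correlation between one subcharacteristic's ratings and satisfaction scores. The data below are fabricated purely for illustration; the paper's 75-user dataset is not reproduced here.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 5-point ratings: one external-quality subcharacteristic
# versus user satisfaction, for seven imaginary respondents.
reliability = [3, 4, 5, 2, 4, 5, 3]
satisfaction = [3, 4, 5, 2, 5, 5, 3]
```

A strongly positive coefficient would be consistent with the standard's proposition; a formal test would also need a significance level and the remaining subcharacteristics.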

7.
Buried stormwater pipe networks play a key role in surface drainage systems for urban areas of Australia. The pipe networks are designed to convey water from rainfall and surface runoff only and do not transport sewage. The deterioration of stormwater pipes is commonly graded into structural and serviceability condition using CCTV inspection data, in order to recognize two different deterioration processes and their consequences. This study investigated the application of neural networks modelling (NNM) in predicting serviceability deterioration, which is associated with reductions of pipe diameter up to a complete blockage. The outcomes of the NNM are predictive serviceability condition for individual pipes, which is essential for planning proactive maintenance programs, and a ranking of pipe factors that potentially contribute to serviceability deterioration. In this study, Bayesian weight estimation using Markov Chain Monte Carlo simulation was used for calibrating the NNM on a case study, in order to account for the uncertainty often encountered in NNM calibration using conventional back-propagation weight estimation. The performance and the ranked factors obtained from the NNM were also compared against a classical model using multiple discrimination analysis (MDA). The results showed that the predictive performance of the NNM using Bayesian weight estimation is better than that of the NNM using conventional back-propagation and that of the MDA model. Furthermore, among nine input factors, ‘pipe age’ and ‘location’ appeared insignificant, whilst ‘pipe size’, ‘slope’, ‘the number of trees’ and ‘climatic condition’ were found consistently important over both models for the serviceability deterioration process. The remaining three factors, namely ‘structure’, ‘soil’ and ‘buried depth’, might be redundant. A better and more consistent data collection regime may help to improve the predictive performance of the NNM and identify the significant factors.
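Bayesian weight estimation by Markov Chain Monte Carlo can be sketched, far more modestly than the paper's NNM, with a Metropolis chain over the two weights of a single logistic unit; predictions average over posterior samples rather than using one back-propagation point estimate. Data, prior, and step size below are illustrative assumptions.

```python
import math
import random

def metropolis_logistic(xs, ys, n_samples=2000, step=0.5, seed=1):
    """Sample (w, b) of a one-input logistic unit with a Metropolis chain.
    Standard-normal priors on both weights; Bernoulli likelihood."""
    random.seed(seed)

    def log_post(w, b):
        lp = -0.5 * (w * w + b * b)                  # N(0, 1) priors
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            p = min(max(p, 1e-12), 1 - 1e-12)        # guard the logs
            lp += y * math.log(p) + (1 - y) * math.log(1 - p)
        return lp

    w, b, lp = 0.0, 0.0, None
    lp = log_post(w, b)
    samples = []
    for _ in range(n_samples):
        w2, b2 = w + random.gauss(0, step), b + random.gauss(0, step)
        lp2 = log_post(w2, b2)
        if math.log(random.random()) < lp2 - lp:     # accept/reject
            w, b, lp = w2, b2, lp2
        samples.append((w, b))
    return samples[n_samples // 2:]                  # discard burn-in

def predict(samples, x):
    """Posterior-averaged predictive probability for input x."""
    ps = [1.0 / (1.0 + math.exp(-(w * x + b))) for w, b in samples]
    return sum(ps) / len(ps)

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]   # toy data: class flips at zero
ys = [0, 0, 0, 1, 1, 1]
samples = metropolis_logistic(xs, ys)
```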

8.
This paper is related to improvements carried out in the field of human-machine communication in complex industrial processes, using the concept of the ‘intelligent’ interface. Following a review of the literature on this subject, an ‘intelligent’ interface design based on ergonomic concepts is described. Finally, we present our approach to the design of an ‘intelligent’ interface, called the Decisional Module of Imagery (D.M.I.), which is based on two models: a task model and a user model. The D.M.I.'s structure and its integration into an experimental platform are described in the last part of this paper.

9.
Current state-of-the-art security systems incorporate ‘passive’ and/or ‘human’ elements, the effectiveness of which can only be measured by their ability to ‘deter’ intruders. However, rapidly changing economic and cultural conditions have weakened the strengths associated with such systems. In the not too distant future, the need for an ‘active’ security system will become necessary in order to reduce the onslaught of crime.

This paper presents a conceptual basis for the incorporation of artificial intelligence concepts in the design and implementation of ‘active’ security systems. Specifically, the paper discusses issues pertaining to a real-time model for visual perception and tracking of possible intruders.


10.
Certain problems, notably in computer vision, involve adjusting a set of real-valued labels to satisfy certain constraints. They can be formulated as optimisation problems, using the ‘least-disturbance’ principle: the minimal alteration is made to the labels that will achieve a consistent labelling. Under certain linear constraints, the solution can be achieved iteratively and in parallel, by hill-climbing. However, where ‘weak’ constraints are imposed on the labels — constraints that may be broken at a cost — the optimisation problem becomes non-convex; a continuous search for the solution is no longer satisfactory. A strategy is proposed for this case, by construction of convex envelopes and by the use of ‘graduated’ non-convexity.
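A much-simplified one-dimensional sketch of ‘weak’ constraints: a ‘weak string’ fit whose truncated-quadratic continuity penalty is introduced gradually, starting from an effectively convex problem, so that smoothing keeps genuine discontinuities. This loose continuation is in the spirit of graduated non-convexity but is not the paper's exact construction; all parameters are illustrative.

```python
def gnc_weak_string(data, lam=1.0, alpha=0.2, sweeps=30):
    """Least-disturbance fit with weak continuity constraints.
    Penalty per neighbour link: min(lam * diff**2, alpha); the truncation
    level is lowered stage by stage (convex -> target non-convex cost)."""
    u = list(data)
    n = len(u)

    def pen(t, a):
        return min(lam * t * t, a)          # 'weak' (breakable) constraint

    def local_cost(i, v, a):
        c = (v - data[i]) ** 2              # least-disturbance data term
        if i > 0:
            c += pen(v - u[i - 1], a)
        if i < n - 1:
            c += pen(u[i + 1] - v, a)
        return c

    for a in (1e9, 4 * alpha, alpha):       # graduated penalty schedule
        for _ in range(sweeps):
            for i in range(n):
                # candidate minimisers: keep current value, or the
                # stationary point of each active/broken link pattern
                cands = [u[i]]
                for left in (True, False):
                    for right in (True, False):
                        num, den = data[i], 1.0
                        if left and i > 0:
                            num += lam * u[i - 1]; den += lam
                        if right and i < n - 1:
                            num += lam * u[i + 1]; den += lam
                        cands.append(num / den)
                u[i] = min(cands, key=lambda v: local_cost(i, v, a))
    return u

data = [0.0, 0.1, -0.1, 0.05, 1.0, 1.1, 0.9, 1.05]   # noisy step signal
fitted = gnc_weak_string(data)
```

On this toy signal the fit smooths each plateau but breaks the link at the jump, since breaking costs less than flattening the step.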

11.
12.
There is a growing information gap between the development of advanced human-machine systems, and the availability of human factors design criteria that can be applied during their design process. Despite increased interest in the development of human factors design guidelines, there also remains considerable uncertainty and concern regarding the actual utility of such information. Indeed, many existing human factors reference materials have been criticized by designers for being ‘too wordy’, ‘too general’, and ‘too hard to understand’. The development of clear, relevant, and useful human factors guidelines requires a judicious mix of science and art to overcome such criticisms. Specifically, while a number of empirical and systematic methods can be productively applied to their development, the final design guidelines will always represent a subjective integration of user requirements, design constraints, available information, and expert judgement. This paper summarizes procedures and heuristics associated with both the science and the art components of human factors design guideline development.

13.
ODA, Office Document Architecture, is an international standard (IS 8613) which enables compound documents to be interchanged between and processed on heterogeneous computer systems. The ESPRIT PODA-2 Project is playing a major role in promoting and developing ODA, offering a set of ODA tools such as an ODA Application Programmatic Interface and an ODA Formatter.

Based on the PODA toolkit, Bull offers a coherent set of Q112 ODA tools, ‘The Bull ODA Product Set’, which allows the user to create a document using Microsoft Word for Windows, convert it to ODA, and browse or send it; the reverse conversion is also supported. The implementation issues of the Rich Text Format converter are discussed, and the concepts that drive the conversion are explained in detail: examples of mapping between Word for Windows capabilities and the Q112 level give a more concrete view of the implementation and fallback strategy. The overall architecture of the RTF/ODA converter introduces the Application Programmatic Interfaces and the object-oriented design method. An example of document feature conversion, ‘style’, concludes this report. The key points expressed in the future perspective illustrate the importance that ODA will gain in Europe and all over the world.


14.
The paper addresses an issue that must be resolved to produce a scientifically sound and practically useful reference model for intelligent multimedia presentation systems (IMP systems), namely that of providing, from the point of view of Human-Computer Interaction (HCI), a systematic understanding of the types of output information to be presented by IMP systems. The term ‘medium’, as it is used in the context of multimedia systems, is too coarse-grained for distinguishing between different types of output information. The paper introduces the notion of (representational) ‘modalities’ to enable sufficiently fine-grained distinctions to be made. For the term itself to be meaningful, ‘multimodal’ presentations must be composed of unimodal representations. In the approach presented, unimodal representations are defined from a small number of basic properties whose combinations specify the ‘generic’ level of a taxonomy of unimodal output modalities. Additional basic property distinctions serve to generate the more fine-grained ‘atomic’ and ‘sub-atomic’ levels in a hierarchical fashion. The taxonomy is set up with the aim of satisfying four basic requirements, viz. completeness, orthogonality, relevance and intuitiveness. A concluding discussion illustrates the practical use of the taxonomy.
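The generative idea, that combinations of a few basic properties span a ‘generic’ level of modalities, can be sketched with a toy enumeration. The property names and values below are simplified assumptions, not the taxonomy's actual basis set.

```python
from itertools import product

# Hypothetical basic properties; each combination defines one 'generic'
# unimodal modality. Adding further distinctions would refine these into
# 'atomic' and 'sub-atomic' levels in the same combinatorial way.
BASIC_PROPERTIES = {
    "medium": ("graphics", "acoustics", "haptics"),
    "static": (True, False),
    "linguistic": (True, False),
    "analogue": (True, False),
}

def generic_modalities():
    """Enumerate the generic level: one modality per property combination."""
    keys = list(BASIC_PROPERTIES)
    return [dict(zip(keys, combo))
            for combo in product(*BASIC_PROPERTIES.values())]
```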

15.
Corruption of the instruction pointer in an embedded computer system has been shown to be a common failure mode in the presence of electromagnetic interference, and previous investigators have suggested that the use of techniques such as ‘Function Tokens’ (FT) and ‘NOP Fills’ (NF) can reduce the impact of such failures. In this paper, both a statistical analysis and empirical tests of code from an embedded application are used to assess and compare these techniques. Two main results are presented. First, it is demonstrated that claims about the effectiveness of FT may neither be well founded nor generally applicable; specifically, it is concluded that rather than increasing system reliability, the use of FT will have the opposite effect. Second, it is demonstrated that NF may be easily applied in most embedded applications, and that the use of this approach can have a positive impact on system reliability.
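The ‘NOP fill’ technique can be illustrated with a toy instruction-set simulation: unused code memory is filled with no-op instructions ending in a jump to recovery code, so a corrupted instruction pointer that lands in unused space slides into the handler instead of executing garbage. Opcodes and layout here are invented for the sketch.

```python
# Toy simulation of 'NOP fill' recovery after instruction-pointer corruption.
NOP, HALT_OK, RESET = 0x00, 0x01, 0x02    # hypothetical opcodes
MEM_SIZE = 64

def build_memory(program):
    mem = [NOP] * MEM_SIZE                # fill all of memory with NOPs
    mem[:len(program)] = program          # place the real program at 0
    mem[MEM_SIZE - 1] = RESET             # NOP slide ends in recovery code
    return mem

def run(mem, ip):
    """Execute from ip; return 'ok', 'reset', or 'runaway'."""
    for _ in range(2 * MEM_SIZE):         # bound execution for the simulation
        op = mem[ip % MEM_SIZE]
        if op == HALT_OK:
            return "ok"                   # normal program termination
        if op == RESET:
            return "reset"                # controlled recovery was reached
        ip += 1                           # NOP: fall through
    return "runaway"

mem = build_memory([NOP, NOP, HALT_OK])
```

Starting at address 0 runs the program normally; a ‘corrupted’ start address of 10 lands in the fill and slides safely into the reset handler.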

16.
In order to perform business modelling as a part of information systems development, there is a need for frameworks and methods. The paper proposes a framework for business interaction based on a language/action perspective. The framework is an architecture of five generic layers. The first-layer concept is the ‘business act’, which functions as the basic unit of analysis. The concepts of the following four layers are ‘action pair’, ‘exchange’, ‘business transaction’, and ‘transaction group’. The framework is inspired by a similar framework constructed by Weigand et al., which the paper critically examines as a basis for the proposed framework.

17.
Giving undo attention
The problems associated with the provision of an undo support facility in the context of a synchronous shared or group editor are investigated. Previous work on the development of formal models of ‘undo’ has been restricted to single user systems and has focused on the functionality of undo, as opposed to discussing the support that users require from any error recovery facility. Motivated by new issues that arise in the context of computer supported co-operative work, the authors aim to integrate formal modelling of undo with an analysis of how users understand undo facilities. Together, these combined perspectives of the system and user lead to concrete design advice for implementing an undo facility. The special issues that arise in the context of shared undo also shed light on the emphasis that should be placed on single user undo. In particular, the authors regard undo not as a system command to be implemented, but as a user intention to be supported by the system.
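Undo viewed as a recoverable user action can be sketched, for the single-user case only, as a history of reversible edits; the shared-editor questions the paper analyses (for example, who may undo whose action) are beyond this fragment.

```python
# Minimal single-user sketch: each edit records enough state to be reversed.
class Editor:
    def __init__(self):
        self.text = ""
        self.history = []               # stack of (position, inserted_text)

    def insert(self, pos, s):
        self.text = self.text[:pos] + s + self.text[pos:]
        self.history.append((pos, s))   # remember how to reverse this edit

    def undo(self):
        """Reverse the most recent insertion; return False if none remains."""
        if not self.history:
            return False
        pos, s = self.history.pop()
        self.text = self.text[:pos] + self.text[pos + len(s):]
        return True
```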

18.
Although noise research concentrates principally on the workplace, recent studies have shown that preferred leisure noise may be as much as 10 dBA higher than workplace levels. This two-phase study compared leisure noise preferences for workers who were exposed to either a ‘loud’ (≥ 85 dBA) or ‘not loud’ (< 85 dBA) work environment. Phase 1 examined 110 subjects' noise level preferences, recorded before and after work during a one-day observation. Phase 2 recorded 12 additional subjects' preferences for five consecutive days. Analysis of both phases' results determined that leisure noise levels prior to work did not differ significantly between groups. Those exposed to the ‘loud’ environment preferred after-work noise levels significantly higher (6.5 to 9 dBA) than their before-work levels. Over the five consecutive days (Phase 2), only the ‘loud’ group preferred noise levels significantly higher after work (Day 5 versus Day 1). Thus, it can be concluded that ‘loud’ work environments, and consecutive daily exposure to them, do influence preferred leisure noise levels.

19.
The networking community has tackled the resource-finding problem using several methods. Knowledge of the name or a property of a resource enables one to find it over the network. Many techniques have been proposed and investigated for a single instance of a resource. The Internet has experienced dramatic growth in the use and provision of services such as ftp, gopher, archie and the World-Wide-Web. The heavy demands being placed on servers inspire replication (mirroring) of servers. This replication means that a client must contact the ‘best’ server among many content-equivalent servers.

Earlier solutions to the ‘best’ server selection problem used multicast and broadcast communication to send the request to all servers and choose the best from all the replies. These solutions require the client to be powerful enough to handle all the replies, which may overwhelm and hang the client's machine. Another solution uses name servers to provide a different unicast address for one member of a group of servers at different locations; its inherent disadvantage is that the user is unable to choose the best server. The idea of application layer anycasting allows the user to select the best server according to the user's own selection criteria, but again the client performing the selection may not be powerful enough to handle responses from all the content-equivalent servers. In this paper the idea of application layer anycasting is extended by allowing active routers to locate the best server. Active networks, unlike traditional networks, are not just passive carriers of bits; they allow the user to inject customized programs into the network that may modify, store or redirect the user data flowing through it. Anycasting is done at the application level, as this provides better end-to-end control and there is no support at the network level. The choice of ‘best’ server is based on the first response received from the servers. The active routers filter out the responses from laggards, so the client receives a response only from the best server and is not overwhelmed. The vital issue of security in active networks is addressed through various encryption schemes. Since the ‘best’ server chosen does not remain the best forever, a TTL value is associated with each best server found, and the best server is reselected after the TTL expires.

The performance of the proposed scheme is compared with that of networks without active-network support and is found to provide better response times for requests. Further, the proposed scheme avoids overloading a server, avoids jockeying, and reduces the client's overhead in selecting the best server. The overhead imposed on the active routers is insignificant compared to the advantages accrued.
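The TTL-based reselection of the ‘best’ server can be sketched as a small cache around a probe function; the probe (which would return the first server to respond) is stubbed out, and all names are illustrative.

```python
import time

# Sketch: cache the currently 'best' server and reselect once its TTL
# expires, mirroring the idea that the best server is not best forever.
class BestServerCache:
    def __init__(self, ttl_seconds, probe):
        self.ttl = ttl_seconds
        self.probe = probe          # probe() -> fastest-responding server
        self.best = None
        self.expires_at = 0.0

    def get_best(self, now=None):
        now = time.monotonic() if now is None else now
        if self.best is None or now >= self.expires_at:
            self.best = self.probe()            # reselect after expiry
            self.expires_at = now + self.ttl
        return self.best
```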


20.
A parallel finite element analysis based on a domain decomposition technique (DDT) is considered. In the present DDT, an analysis domain is divided into a number of smaller subdomains without overlap. Finite element analyses of the subdomains are performed under the constraint of both displacement continuity and force equivalence among them. The constraint is satisfied through iterative calculations based on either the Uzawa algorithm or the Conjugate Gradient (CG) method. Owing to the iterative algorithm, a large scale finite element analysis can be divided into a number of smaller ones which can be carried out in parallel.

The DDT is implemented on a parallel computer network composed of a number of 32-bit microprocessors (transputers). The developed parallel calculation system, named the ‘FEM server type system’, has distinctive features such as network independence and dynamic workload balancing.

The characteristics of the domain decomposition method, such as computational speed and memory requirements, are first examined in detail through finite element calculations of a homogeneous or inhomogeneous cracked plate subjected to a tensile load on a single-CPU computer.

The ‘speedup’ and ‘performance’ features of the FEM server type system are discussed on a parallel computer system composed of up to 16 transputers, with changing network types and domain decompositions. It is clearly demonstrated that the present parallel computing system requires a much smaller amount of computational memory than the conventional finite element method and also that, due to the feature of dynamic workload balancing, high performance (over 90%) is achieved even in a large scale finite element calculation with irregular domain decomposition.
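The interface matching of the non-overlapping DDT, displacement continuity plus force equivalence, can be illustrated on a trivial 1-D model problem whose subdomain solutions are known in closed form. This is a relaxed Dirichlet-Neumann iteration standing in for the paper's Uzawa/CG interface solvers; the problem and parameters are illustrative only.

```python
# 1-D model problem: -u'' = 0 on [0, 1], u(0) = 0, u(1) = 1, split into two
# non-overlapping subdomains at x = 0.5. The subdomains are coupled only
# through the interface: the left solve imposes the current interface
# displacement g (continuity), the right solve imposes the left flux
# (force equivalence), and g is relaxed toward the right-hand trace.
def solve_interface(iters=50, theta=0.5):
    g = 0.0                              # guess for interface displacement
    for _ in range(iters):
        # left subdomain: linear from u(0)=0 to u(0.5)=g; exact flux u':
        flux_left = (g - 0.0) / 0.5
        # right subdomain: -u''=0 with u'(0.5)=flux_left and u(1)=1
        # => u(x) = 1 - flux_left * (1 - x); its interface trace:
        u_right_at_interface = 1.0 - flux_left * 0.5
        # relax the interface displacement toward continuity
        g = (1 - theta) * g + theta * u_right_at_interface
    return g
```

For this symmetric problem the iteration converges to the exact interface value u(0.5) = 0.5, after which both subdomain solutions join into the global solution u(x) = x.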

