Similar Documents
20 similar documents found (search time: 125 ms)
1.
Every software system, whether designed for commercial or noncommercial use, must be documented. Documentation must be geared to users' needs and knowledge; it should be neither too technical nor too simple. The quality of documentation is a major determinant of how well a system is received and how widely it is used.

2.
Abstract

In order to make grammar and style checkers customizable to meet writers' individual or organisational house style needs, complex rules specifying how to recognise and replace undesirable forms must be modified by non-expert users. Attempts in current commercial systems to provide such a facility are unsatisfactory: given the notations used to represent rules in these systems, any system that is powerful enough to perform its basic task of grammar and style checking is too complex to be comprehensible to a rule writer. This paper argues that any system with adequate natural language processing (NLP) resources to perform the basic tasks of a grammar and style checker can be augmented with a rule definition facility which, largely making use of those same resources, would be radically more usable than any existing system. The proposed approach is crucially dependent on the modular representation of system knowledge and incorporates techniques from knowledge representation, human-computer interaction and machine learning.

3.

This article presents a two-experiment series that strongly supports the hypothesis that user preference for interactive screens and performance using interactive screens is related to screen complexity. The relationship follows an inverted U-shaped curve, with too little or too much complexity depressing preference and performance. The implication for interactive systems designers is that while a clear screen is a necessary condition for user satisfaction, it is not a sufficient one; the appropriate level of screen complexity must also be considered.

4.
Some real-time systems are designed to deliver services to objects that are controlled by external sources. Their services must be delivered on a timely basis, and the system fails when some services are delivered too late. In general, the timing requirements of the system may change when the states of the objects monitored by the system change. Such a system may fail if the timing requirements which it is designed to meet are erroneous. It may underutilize resources and consequently be costly or unreliable if the requirements are too stringent. Hence, one must identify how changes in object states call for changes in system requirements and how these changes should be incorporated into the design and implementation of the system. This paper first describes a methodology to determine timing requirements and to take into account requirement changes at runtime. The method is based on several timing requirement determination schemes. Simulation data show that these schemes are effective for applications such as mobile IP hand-offs. The paper then discusses how to incorporate this methodology in the system architecture and in the development process.
J. W. S. Liu

5.
Abstract

A prime requirement of a system is that it be reliable and that the system designer has been able to estimate the reliability in numerical terms. Should it turn out that certain sections of the system are too prone to fail, the designer may consider duplicating such sections in some manner, i.e. the designer tries to enhance the overall reliability through the use of redundancy. But how much, if anything, will be gained reliability-wise by such a design change? A possible gain is hard to assess due to interaction between former and new components; however, it is frequently possible, by fairly simple means, to find upper and lower bounds for such a gain in reliability.

The purpose of this paper is twofold: first to introduce the readers to some fundamental concepts in the theory of system reliability, and second to present two new results valid for redundant system configurations (we ask the question: what are the upper and lower bounds for the expected life of a parallel configuration, when the expected life of each of the blocks in parallel is known). (An expanded version of the paper is available from the authors upon request.)
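
As a hedged illustration of the kind of bounds in question (not necessarily the paper's exact results), consider a parallel configuration of n blocks with nonnegative lifetimes T_1, ..., T_n and known expected lives mu_i = E[T_i]. Two elementary bounds on the expected system life then follow:

```latex
% A parallel system fails only when every block has failed,
% so the system life is the maximum of the block lives:
T_{\mathrm{sys}} = \max(T_1, \dots, T_n),
\qquad
\max_{1 \le i \le n} \mu_i \;\le\; E[T_{\mathrm{sys}}] \;\le\; \sum_{i=1}^{n} \mu_i .
```

The lower bound holds because the maximum dominates each individual lifetime pointwise; the upper bound holds because, for nonnegative lifetimes, the maximum never exceeds the sum. Neither bound requires independence of the block lifetimes.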

6.
Ed Moyle, EDPACS, 2013, 47(4): 17-20
Abstract

Big Data Analytics can be a fantastic business opportunity for many organizations. Already organizations are using advanced analytics to streamline production processes, optimize back office activities, market more effectively, and better satisfy customer demand. That said, as recent headlines attest, enhanced analytics capabilities can also introduce risks such as erosion of privacy and overly intrusive knowledge about customers.

Given this dichotomy, making the decision about when, whether, how much, and how to invest in big data analytics initiatives can be a challenge. Invest too soon and you may obviate existing investments or disrupt business activities; invest too late and you may find that competitors gain advantages that make the market landscape asymmetric.

This article outlines how and why applying “tried and true” governance principles can make this decision easier: for organizations that have formalized governance structures in place, how those structures might inform the decision; and for those that do not have a formalized governance program, how they might co-opt some of the same principles to make the decision more approachable.

7.
Abstract

This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced in order to minimize cross-validation error. A general theory is presented, then developed in detail for linear regression and instance-based learning.
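
As a minimal sketch (not a reproduction of the paper's theory), the simplicity/accuracy trade-off can be seen by comparing cross-validation error across model complexities for linear regression: an overly simple model underfits, an overly complex one overfits, and the cross-validation error is typically lowest in between. The data, polynomial degrees, and scikit-learn usage below are illustrative assumptions.

```python
# Sketch: cross-validation error vs. model complexity for polynomial regression.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)   # noisy real-valued target

for degree in (1, 3, 10):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    # scikit-learn reports negative MSE; negate to obtain the error.
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"degree={degree:2d}  cross-validation MSE={mse:.3f}")
# Typically degree 1 underfits, degree 10 overfits, and an intermediate
# degree achieves the lowest cross-validation error.
```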

8.
Abstract

Users who have high expectations for a new system too often become disappointed once they discover the system does not live up to their dreams. Some IS managers circumvent this problem by purposely setting expectations too low. However, high expectations need not be a guarantee of disappointment and dissatisfaction.

9.
Abstract

This paper presents a procedure for the generation of sentences that produces written sentences in a particular language starting from formal representations of their meaning. The procedure is composed of a lexicalization part that is independent of the specific language used and an ordering part that must be reconfigured according to the target language. The internal representation is described, and it is then shown how the generation procedure can deal with both one-clause and multiclause sentences. Applications to the English language are shown throughout the text.
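
A toy sketch of the two-stage idea, language-independent lexicalization followed by language-specific ordering; the meaning representation, lexicon, and ordering rule below are invented for illustration and do not reflect the paper's internal representation.

```python
# Stage 1: map concepts in a meaning representation to words (language-independent code,
# language-specific lexicon). Stage 2: order the words according to the target language.
LEXICON = {
    "GIVE": {"en": "gave"},      # past tense chosen for simplicity
    "JOHN": {"en": "John"},
    "MARY": {"en": "Mary"},
    "BOOK": {"en": "the book"},
}

def lexicalize(meaning, lang):
    """Map each concept in the meaning representation to a word or phrase."""
    return {role: LEXICON[concept][lang] for role, concept in meaning.items()}

def order_english(words):
    """English ordering: agent, predicate, theme, then recipient as 'to X'."""
    return f"{words['agent']} {words['predicate']} {words['theme']} to {words['recipient']}."

def generate(meaning, lang="en", order=order_english):
    return order(lexicalize(meaning, lang))

meaning = {"predicate": "GIVE", "agent": "JOHN", "theme": "BOOK", "recipient": "MARY"}
print(generate(meaning))   # -> "John gave the book to Mary."
```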

10.
Summary

A syntactic error recovery technique is presented that is simple and at the same time very powerful. Its main property is that it is phrase-marker oriented, where phrase markers are considered symbols delimiting language constructions, e.g., begin and end for blocks, ( and ) for expressions, and [ and ] for indices. The basic idea of this error recovery technique originates from P. Branquart and has been worked out in the Algol 68 compiler project, see [8] and [9]. Here, we are especially concerned with the generation aspects of error recovery. In particular, it is investigated how error recovery can be mechanised in an ELL(1) and LALR(1) syntax directed translation scheme and which conditions the syntax must satisfy. Both the ELL(1) and LALR(1) generators are implemented and are part of the system LILA: a Language Implementation LAboratory [28, 29, 30]. Only the ELL(1) generator is described here.
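
A rough sketch of what phrase-marker-oriented recovery means in practice (not the LILA generators themselves): on a syntax error, input is discarded until a closing phrase marker that matches some construction currently open on the parse stack, and parsing resumes just after that construction. The token set and stack representation below are assumptions for illustration.

```python
# Map each opening phrase marker to its closing counterpart.
CLOSERS = {"begin": "end", "(": ")", "[": "]"}

def recover(tokens, i, open_stack):
    """Skip tokens from position i until a closer matching an open construct."""
    wanted = {CLOSERS[opener] for opener in open_stack}
    while i < len(tokens) and tokens[i] not in wanted:
        i += 1                        # discard tokens inside the erroneous phrase
    if i < len(tokens):
        closer = tokens[i]
        # Pop constructs up to and including the one this marker closes.
        while open_stack and CLOSERS[open_stack[-1]] != closer:
            open_stack.pop()
        if open_stack:
            open_stack.pop()
        i += 1                        # resume just after the phrase marker
    return i, open_stack

# Example: an error inside a parenthesised expression nested in a block.
tokens = ["begin", "(", "x", "+", "+", ")", "stmt", "end"]
i, stack = recover(tokens, 4, ["begin", "("])   # error detected at the second '+'
print(tokens[i:], stack)                        # -> ['stmt', 'end'] ['begin']
```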

11.
ABSTRACT

Communicating with confidential data requires special attention in a mobile agents environment, especially when other hosts must be prevented from eavesdropping on the communication. We propose a communication model for secure communication between agents belonging to data publishers and data consumers. Confidentiality is ensured by an on-the-fly encryption-decryption sequence based on the ElGamal system, which converts the plaintext directly into a message encrypted with the consumer's public key. The scheme ensures that the data possessed by the agents is secured at all times while executing at any untrusted host. Our minimal implementation of the model on the Aglets agent platform gives a first faithful picture of the model's behaviour. Finally, we also explain how the homomorphic property of the ElGamal scheme can be integrated with our model for a Web-based application such as voting involving multiple agents.
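
A minimal textbook ElGamal sketch, included only to illustrate the encryption scheme and the multiplicative homomorphism the abstract refers to; the toy parameters are insecure and none of the paper's Aglets agent plumbing is reproduced here.

```python
import random

# Public domain parameters: a small prime p and base g (toy, insecure values).
p, g = 467, 2

def keygen():
    x = random.randrange(2, p - 1)        # consumer's private key
    return x, pow(g, x, p)                # (private, public = g^x mod p)

def encrypt(m, h):
    k = random.randrange(2, p - 1)        # fresh ephemeral key per message
    return pow(g, k, p), (m * pow(h, k, p)) % p      # ciphertext (c1, c2)

def decrypt(c1, c2, x):
    s = pow(c1, x, p)                     # shared secret g^(k*x) mod p
    return (c2 * pow(s, p - 2, p)) % p    # multiply by s^(-1) mod p (p prime)

x, h = keygen()
c1, c2 = encrypt(123, h)
assert decrypt(c1, c2, x) == 123

# Multiplicative homomorphism (useful for applications such as voting):
# the componentwise product of ciphertexts decrypts to the product of plaintexts.
a1, a2 = encrypt(3, h)
b1, b2 = encrypt(5, h)
assert decrypt((a1 * b1) % p, (a2 * b2) % p, x) == 15
```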

12.
Abstract

Although Global 2000 organizations today are becoming increasingly aware of the importance of a metrics program to maximize the effectiveness of an information security strategy, there is little practical guidance available on how to put such a program into practice. As a result, security metrics are shrouded in mystery and are considered “too hard” to do. The end result is that this necessary and effective management tool has yet to be implemented at many organizations, and where it has been launched, it has yet to be automated to ease management and reduce resource costs.

13.
Context: Class maintainability is the likelihood that a class can be easily modified. Before releasing an object-oriented software system, it is impossible to know with certainty when, where, how, and how often a class will be modified. At that stage, this likelihood can be estimated using the internal quality attributes of a class, which include cohesion, coupling, and size. To reduce the future class maintenance efforts and cost, developers are encouraged to carefully test and well document low maintainability classes before releasing the object-oriented system.

Objective: We empirically study the relationship between internal class quality attributes (size, cohesion, and coupling) and an external quality attribute (class maintainability). Using statistical techniques, we also construct models based on the selected internal attributes to predict class maintainability.

Method: We consider classes of three open-source systems. For each class, we account for two actual maintainability indicators: the number of revised lines of code and the number of revisions in which the class was involved. Using 19 internal quality measures, we empirically explore the impact of size, cohesion, and coupling on class maintainability. We also empirically investigate the abilities of the measures, considered both individually and combined, to estimate class maintainability. Statistically based prediction models are constructed and validated.

Results: Our results demonstrate that classes with better qualities (i.e., higher cohesion values and lower size and coupling values) have better maintainability (i.e., are more likely to be easily modified) than those of worse qualities. Most of the considered measures are shown to be predictors of the considered maintainability indicators to some degree. The abilities of the considered internal quality measures to predict class maintainability are improved when the measures are combined using optimized multivariate statistical models.

Conclusion: The prediction models can help software engineers locate classes with low maintainability. These classes must be carefully tested and well documented.
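
A hypothetical sketch of the general idea, fitting a multivariate model that predicts a maintainability indicator (here, number of revisions) from internal class measures. The feature names (LOC, LCOM, CBO) and the data are stand-ins, not the 19 measures or the open-source systems used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: size (LOC), cohesion (LCOM, lower is more cohesive), coupling (CBO).
X = np.array([
    [120,  3,  4],
    [450, 12, 10],
    [ 80,  1,  2],
    [300,  8,  7],
    [600, 20, 15],
    [150,  4,  5],
], dtype=float)
y = np.array([2, 9, 1, 6, 14, 3], dtype=float)   # revisions per class (hypothetical)

model = LinearRegression().fit(X, y)
new_class = np.array([[250, 6, 6]])
print("predicted revisions:", model.predict(new_class)[0])
# Classes predicted to need many revisions are candidates for extra testing
# and documentation before release.
```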

14.

Software development is a complex process in which it is all too easy to introduce errors or faults that can be extremely difficult to identify. However, there has been little research into the use of intelligent diagnostic techniques for software (as compared to hardware, for example). Therefore, applying such techniques to software is potentially extremely useful. In many cases the presence of a bug is only identified when an invalid result is observed. However, the task of identifying the software elements that may have contributed to this result can be time-consuming and tedious, but must be performed before the actual root cause can be identified and a repair effected. This operation is akin to fault localization within hardware diagnosis and has been the subject of extensive research within the model-based community. This article presents a technique that minimizes the tedious task of fault localization within the software system, leaving the developer to concentrate on identifying the root cause and remedial action. In particular, we treat diagnosis as a trial with witnesses for the prosecution and defense. The result is a diagnostic trial that uses the source code of a system as its model and knowledge of valid and invalid results (victims) to identify a set of suspects for a developer to investigate further.

15.
Over the last few years, there has been intense work on the problem of retrieval of continuous media (CM) data from disk. However, no single unified framework exists within which such retrieval problems can be studied. In this paper, we first propose a formal model for CM data retrieval from heterogeneous disk servers. This model can be used to characterize CM data retrieval problems independently of how data is laid out on disk and of what objectives (e.g., minimize client delay, maximize buffer utilization, etc.) the system manager is interested in. We then show how, using this formal model, we can neatly define what it means to optimally handle events that occur in movie-on-demand (MOD) systems. Examples of such events include new clients entering the system, old clients leaving the system, and continuing clients performing pause, rewind and fast-forward operations. Multiple events may occur simultaneously, and we show how such events trigger state transitions in the system. We then develop an algorithm called QuickSOL that handles events occurring in MOD systems. This algorithm works in two phases: in the first phase, it quickly finds a way of handling as many of the events occurring at time t as possible; in the second phase, it optimizes the solution found in the first phase. The advantage is that the algorithm can be interrupted anytime after the first phase is completed. We report on experiments showing that QuickSOL works well in practice.

16.
Background: Experiments in which study units are assigned to experimental groups nonrandomly are called quasi-experiments. They allow investigations of cause–effect relations in settings in which randomization is inappropriate, impractical, or too costly.

Problem outline: The procedure by which the nonrandom assignments are made might result in selection bias and other related internal validity problems. Selection bias is a systematic (not happening by chance) pre-experimental difference between the groups that could influence the results. By detecting the cause of the selection bias, and designing and analyzing the experiments accordingly, the effect of the bias may be reduced or eliminated.

Research method: To investigate how quasi-experiments are performed in software engineering (SE), we conducted a systematic review of the experiments published in nine major SE journals and three conference proceedings in the decade 1993–2002.

Results: Among the 113 experiments detected, 35% were quasi-experiments. In addition to field experiments, we found several applications for quasi-experiments in SE. However, there seems to be little awareness of the precise nature of quasi-experiments and the potential for selection bias in them. The term “quasi-experiment” was used in only 10% of the articles reporting quasi-experiments; only half of the quasi-experiments measured a pretest score to control for selection bias, and only 8% reported a threat of selection bias. On average, larger effect sizes were seen in randomized than in quasi-experiments, which might be due to selection bias in the quasi-experiments.

Conclusion: We conclude that quasi-experimentation is useful in many settings in SE, but their design and analysis must be improved (in ways described in this paper), to ensure that inferences made from this kind of experiment are valid.

17.
ABSTRACT

From the government’s perspective, it is important to understand the factors that influence effective utilization of new service channels, particularly the use of smart devices. Furthermore, how the utilization of a new channel affects trust in the government is an important performance factor whose linkage mechanism also needs to be investigated. This study collected 417 questionnaires from Korean citizens who communicate with the government via smart devices; the questionnaires were analyzed using structural equation analysis. This research indicates that in order to maximize trust in government, service delivery via smart devices must be designed with a clear understanding of the three significant components of such communication, namely the service, channel, and citizens. The service selected must be appropriate to the characteristics of the channel, and service reform may be necessary, beyond using the channel simply as a service means. Citizens’ ability to utilize the channel must also be fully considered. In order to increase the efficacy of new channel utilization, fast implementation is less important than understanding how to satisfy citizens’ needs regarding use of the public service via a smart device.

18.
Abstract

A knowledge-based scheduling system has been developed for the domain of university class scheduling. The problem addressed is how to schedule courses during the various time periods throughout the day. The class schedule must satisfy a variety of appropriate constraints. The system, written in Prolog, resolves conflicting assignments through backtracking. The inefficiency of Prolog's backtracking feature, with respect to this application, is partly circumvented by the use of a dynamic circular array. The system is now being used to help schedule industrial engineering classes at the Pennsylvania State University.
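
The paper's system is written in Prolog; the following is only a Python sketch of the underlying backtracking idea, with an invented instructor-clash constraint and toy data: assign courses to time slots depth-first and backtrack whenever an assignment violates a constraint.

```python
COURSES = {"IE101": "smith", "IE202": "smith", "IE305": "jones"}   # course -> instructor
SLOTS = ["Mon9", "Mon11", "Tue9"]

def consistent(schedule, course, slot):
    # Constraint: one instructor cannot teach two courses in the same slot.
    return all(not (s == slot and COURSES[c] == COURSES[course])
               for c, s in schedule.items())

def schedule_courses(remaining, schedule=None):
    schedule = {} if schedule is None else schedule
    if not remaining:
        return schedule
    course, rest = remaining[0], remaining[1:]
    for slot in SLOTS:
        if consistent(schedule, course, slot):
            result = schedule_courses(rest, {**schedule, course: slot})
            if result is not None:
                return result          # success further down the search tree
    return None                        # dead end: caller backtracks

print(schedule_courses(list(COURSES)))
# -> e.g. {'IE101': 'Mon9', 'IE202': 'Mon11', 'IE305': 'Mon9'}
```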

19.
Abstract

The conventional usability lab is primarily responsible for testing prototypes and products to determine if customers will accept a new design. Often this testing comes too late in the development cycle to allow major design or product changes to occur. In the Customer-Centered Design Group at Tektronix Labs, the usability lab is a small part of our group's involvement in the entire design life cycle of a Tektronix product. We work with design groups to bring the benefits of a usability lab to all phases of design, from understanding our customers' current systems and work processes, to assessing competitors' strengths and weaknesses, to simulating and evaluating design alternatives. Our ‘lab’ is often on the road: meeting with customers where they work, working with design teams to simulate and prototype designs, and evaluating designs with our customers. To keep in touch with customers and to keep product development focused, we feel a usability group must break down the barriers inherent in a conventional testing suite. By breaking these barriers we can better determine what customers need and how these needs are addressed throughout the entire product life cycle.

20.
Context: The design of complex systems demands methodologies to analyze their correct behaviour. It is usual that correct behaviour is determined by compliance with temporal requirements. Currently, testing is the most widely used technology to validate the correctness of systems. Although several techniques that take time aspects into account have been proposed, most of them require that the tester interact with the system. When this is not possible, it is necessary to apply a passive testing approach, in which the tester monitors the behaviour of the system.

Objective: The aim of this paper is to propose a methodology to perform passive testing on communicating systems in which the behaviour of their components must fulfill temporal restrictions associated with both performance and delays/timeouts.

Method: Our framework uses algorithms for checking traces collected from the systems against invariants which formally represent the most relevant properties that must be fulfilled by the system. In order to support the feasibility of the methodology, we have performed an empirical study on a complex system for automatic recognition of images based on a pipeline architecture. We have analyzed the correctness of the system’s behaviour with respect to a set of invariants. Finally, an experiment based on mutations of the system was conducted to study the level of detection of a set of invariants.

Results: Different errors were detected and fixed during the development of the system by means of the proposed methodology. The results of the experiments with the mutated versions of the system indicated that the designed set of invariants was more effective in finding errors associated with temporal aspects than those related to communication among components.

Conclusion: The proposed technique has been shown to be very useful for analyzing complex timed systems and for finding errors when the tester has no control over their behaviour.
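
A hedged sketch of checking a collected trace against one timed invariant. The invariant form ("every trigger must be followed by a response within a maximum delay") and the trace format are invented for illustration; the paper defines its own invariant language for performance and delay/timeout restrictions.

```python
def check_response_invariant(trace, trigger, response, max_delay):
    """Every 'trigger' event must be followed by 'response' within max_delay."""
    violations = []
    for i, (event, t) in enumerate(trace):
        if event != trigger:
            continue
        ok = any(e == response and 0 <= t2 - t <= max_delay
                 for e, t2 in trace[i + 1:])
        if not ok:
            violations.append((event, t))
    return violations

# Monitored trace: (event, timestamp in seconds).
trace = [("request", 0.0), ("response", 0.4),
         ("request", 1.0), ("response", 3.5)]   # second response arrives too late
print(check_response_invariant(trace, "request", "response", max_delay=2.0))
# -> [('request', 1.0)] : the invariant is violated for the request at t=1.0
```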
