20 similar records found (search time: 15 ms)
1.
2.
The UNIX operating system enjoys an ever-increasing popularity throughout the computing community; there will be 1.4 million UNIX licences distributed by 1985, rising at a rate of about 400,000 per annum. With universal acceptance of a system comes a dangerously high degree of inertia. Consider the analogous area of programming languages, where there has been great resistance to change, despite major advancements. This paper presents a critique of UNIX, based on three areas which we consider to be of vital importance to future operating systems. These areas are operating system structures and design, programming support environments and distributed computing. The criticisms presented are in no way intended to discredit UNIX. UNIX compares favourably with most of the present generation of operating systems. The intention is to highlight deficiencies in the state of the art in operating system design.
3.
Harold Thimbleby 《Software》1999,29(5):457-478
Our experience of using Java is disappointing: as a programming language (irrespective of its implementations and libraries), Java itself leaves much to be desired. This paper discusses a few serious problems with Java's design, which leads us to suggest that the language definition should have been an integral part of the design process rather than, as it appears, a retrospective commentary. Copyright © 1999 John Wiley & Sons, Ltd.
4.
5.
6.
7.
A critique of software defect prediction models
Fenton N.E. Neil M. 《IEEE Transactions on Software Engineering》1999,25(5):675-689
Many organizations want to predict the number of defects (faults) in software systems before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this, numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state of the art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the “quality” of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly, many prediction models tend to model only part of the underlying problem and seriously misspecify it. To illustrate these points, the Goldilocks Conjecture, that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches. Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian belief networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of “software decomposition” in order to test hypotheses about defect introduction and help construct a better science of software engineering.
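Editorial note: the statistical trap behind the Goldilocks Conjecture can be sketched with a toy simulation (not taken from the paper; the distributions and numbers below are hypothetical). Even when defect counts are generated independently of module size, defect density appears to fall as modules get larger, simply because size sits in the denominator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modules: sizes in KLOC and defect counts drawn independently
# of size, so no "optimum" module size is built into the data.
size_kloc = rng.uniform(0.1, 10.0, 500)
defects = rng.poisson(lam=5.0, size=500)

density = defects / size_kloc  # defects per KLOC

# Raw defect counts are uncorrelated with size, yet defect density shows a
# strong negative association with size: a spurious "small modules are
# worse" signal of the kind the authors warn about.
print("corr(defects, size): %+.2f" % np.corrcoef(defects, size_kloc)[0, 1])
print("corr(density, size): %+.2f" % np.corrcoef(density, size_kloc)[0, 1])
```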
8.
Valerie Gray Hardcastle 《Minds and Machines》1995,5(1):89-107
Information processing theories in psychology give rise to executive theories of consciousness. Roughly speaking, these theories maintain that consciousness is a centralized processor that we use when processing novel or complex stimuli. The computational assumptions driving the executive theories are closely tied to the computer metaphor. However, those who take the metaphor seriously — as I believe psychologists who advocate the executive theories do — end up accepting too particular a notion of a computing device. In this essay, I examine the arguments from theoretical computational considerations that cognitive psychologists use to support their general approach, in order to show that they make unwarranted assumptions about the processing attributes of consciousness. I then go on to examine the assumptions behind executive theories which grow out of the computer metaphor of cognitive psychology, and conclude that we may not be the sort of computational machine cognitive psychology assumes and that cognitive psychology's approach in itself does not buy us anything in developing theories of consciousness. Hence, the state space in which we may locate consciousness is vast, even within an information processing framework.
9.
David McClelland 《Information & Communications Technology Law》1998,7(1):15-30
Although the Latent Damage System was produced in the late 1980s, Susskind—a co-developer—asserts in his latest book that this and similar systems will have a profound influence upon the future direction and practice of law, by bringing about a shift in the legal paradigm (Susskind, The Future of Law: Facing the Challenges of Information Technology, 1996, pp. 105, 286). As part of the research into the conflict which, in my view, exists between the artificial intelligence and law movement and adversarial argumentation in the litigatory process, I analyse the claims and objectives made by the developers of the Latent Damage System and suggest that the current technological know-how is incapable of representing dynamic, adversarial, legal environments. In consequence, I contend that intelligence-based applications cannot provide authentic and automatic access to resolving adversarial legal disputes.
10.
Additional references relating to the work of J.E. White and J.L. Speyer (ibid., vol. AC-32, pp. 593-603, July 1987) are reported and commented on. White and Speyer discussed predecessor studies that dealt with failure detection in navigation applications described by stochastic time-invariant linear systems with additive Gaussian white process and measurement noises present, and sought to use Kalman filters tuned in this application context to detect a priori specified failures. It is argued that, since the predecessor work dealt exclusively with time-invariant deterministic systems devoid of noise terms, exclusive use of observers suffices for failure detection in this more benign context.
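Editorial note: purely as an illustrative sketch (it is not the construction discussed in either the comment or the original paper, and all parameters are made up), the snippet below shows the innovation-monitoring idea behind Kalman-filter-based failure detection for a scalar time-invariant system with Gaussian process and measurement noise; setting the noise variances to zero recovers the deterministic setting in which an ordinary observer suffices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar system: x[k+1] = a*x[k] + w[k],  y[k] = x[k] + v[k]
a, q, r = 0.95, 0.01, 0.04        # dynamics, process / measurement noise variances
n, fail_at, bias = 200, 120, 1.5  # a sensor-bias failure is injected at step 120

x, xhat, p = 0.0, 0.0, 1.0
alarms = []
for k in range(n):
    x = a * x + rng.normal(0.0, np.sqrt(q))                       # true state
    y = x + rng.normal(0.0, np.sqrt(r)) + (bias if k >= fail_at else 0.0)

    # Kalman filter tuned to the failure-free model
    xpred, ppred = a * xhat, a * a * p + q
    s = ppred + r                     # innovation variance
    innov = y - xpred                 # innovation (residual)
    gain = ppred / s
    xhat, p = xpred + gain * innov, (1.0 - gain) * ppred

    # Flag a failure when the normalized innovation is implausibly large
    if abs(innov) / np.sqrt(s) > 4.0:
        alarms.append(k)

print("failure injected at step", fail_at, "; first alarm at step",
      alarms[0] if alarms else None)
```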
11.
This paper surveys the applications of thinning in image processing, and examines the difficulties that confront existing thinning algorithms. A fundamental problem is that an algorithm may not be guaranteed to operate successfully on all possible images: in particular, it may not discriminate properly between ‘noise spurs’ and valid limbs, and the skeleton produced may not accurately reflect the shape of the object under scrutiny. Analysis of the situation results in a new, systematic approach to thinning, leading to a family of algorithms able to achieve guaranteed standards of skeleton precision. One algorithm of this family is described in detail.
“There is still no definitely good method for thinning” - Nagao(28)
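Editorial note: for readers unfamiliar with thinning, the sketch below implements one classical scheme (Zhang-Suen); it is offered only as background and is not the guaranteed-precision family of algorithms developed in the paper above.

```python
import numpy as np

def zhang_suen_thin(img):
    """Classical Zhang-Suen thinning: repeatedly peel border pixels of a
    binary image until only a roughly one-pixel-wide skeleton remains.
    img is a 2-D array of 0s and 1s."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for i in range(1, img.shape[0] - 1):
                for j in range(1, img.shape[1] - 1):
                    if img[i, j] == 0:
                        continue
                    # 8-neighbourhood p2..p9, clockwise from the north pixel
                    p = [img[i-1, j], img[i-1, j+1], img[i, j+1], img[i+1, j+1],
                         img[i+1, j], img[i+1, j-1], img[i, j-1], img[i-1, j-1]]
                    b = sum(p)                                   # object neighbours
                    a = sum(p[k] == 0 and p[(k + 1) % 8] == 1 for k in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((i, j))
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((i, j))
            for i, j in to_delete:
                img[i, j] = 0
            changed = changed or bool(to_delete)
    return img

# A small solid rectangle thins to (roughly) a one-pixel-wide line.
blob = np.zeros((7, 12), dtype=np.uint8)
blob[2:5, 2:10] = 1
print(zhang_suen_thin(blob))
```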
12.
Kevin Warwick 《International Journal of Control》2013,86(6):1253-1264
This paper discusses the use of multi-layer perceptron networks for linear or linearizable, adaptive feedback control schemes in a discrete-time environment. A close look is taken at the model structure selected and the extent of the resulting parametrization. A comparison is made with standard, non-perceptron algorithms, e.g. self-tuning control, and it is shown how gross over-parametrization can occur in the neural network case. Because of the resultant heavy computational burden and poor controller convergence, a strong case is made against the use of neural networks for discrete-time linear control.
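Editorial note: the over-parametrization argument can be made concrete with a back-of-the-envelope count (the model orders and network size below are hypothetical, not taken from the paper): a linear ARX model of the kind used in self-tuning control needs only na + nb parameters, while even a small multi-layer perceptron over the same regressor carries an order of magnitude more weights.

```python
# Hypothetical SISO regressor with na past outputs and nb past inputs.
na, nb = 3, 3
arx_params = na + nb                                  # linear ARX model: 6 parameters

hidden = 10                                           # one hidden layer of 10 units
mlp_params = (na + nb + 1) * hidden + (hidden + 1)    # hidden weights + biases, plus output layer

print("ARX parameters:", arx_params)                  # 6
print("MLP parameters:", mlp_params)                  # 81
```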
13.
14.
Technology scruples: why intimidation will not save the recording industry and how enchantment might
While the recording industry continues to lobby for increasingly draconian laws to protect their interests, users of digital technology continue to share files and copy protected music. This paper considers the ethics of copying and argues that legal measures are unlikely to solve the music industry’s problems in the age of digital reproduction. It begins with a review of the legal arguments around copyright legislation and notes that the law is currently unclear and contested. Adapting the game “scruples” to questions of what is and is not considered theft, a qualitative study reflects on the ways that ethical positions around new media are reached and articulated. The findings relate ethical positions constructed around notions of resistance, intangibility and identity. It is argued that the global online population cannot be policed without consent and that mechanics of artist reimbursement must be developed that account for consumers’ technology scruples. File sharing is then considered not as a legal problem but as a design challenge and a strategy of enchantment is suggested. The design concept of a digital music box is outlined to illustrate strategies of enchantment rather than litigation and intimidation.
15.
W. van Peer 《Computers and the Humanities》1989,23(4-5):301-307
The present paper is a critique of quantitative studies of literature. It is argued that such studies are involved in an act of reification, in which, moreover, fundamental ingredients of the texts, e.g. their (highly important) range of figurative meanings, are eliminated from the analysis. Instead a concentration on lower levels of linguistic organization, such as grammar and lexis, may be observed, in spite of the fact that these are often the least relevant aspects of the text. In doing so, quantitative studies of literature significantly reduce not only the cultural value of texts, but also the generalizability of their own findings. What is needed, therefore, is an awareness and readiness to relate to matters of textuality as an organizing principle underlying the cultural functioning of literary works of art. Willie van Peer is Associate Professor of Literary Theory at the University of Utrecht (The Netherlands), author of Stylistics and Psychology: Investigations of Foregrounding (Croom Helm, 1986) and editor of The Taming of the Text: Explorations in Language, Literature and Culture (Routledge, 1988). His major research interests lie in theory formation and its epistemological problems, and in the interrelationship between literary form and function.
16.
17.
Although user experience is now widely accepted as a central concern in human–computer interaction and interaction design, its conceptual and methodological implications are still being worked out. Enchantment has become emblematic in this process by pointing to the enlivening potential of technology in people-technology relations. As part of an ongoing project to deepen our understanding of enchantment in user experience, this paper presents a case study of one person’s enchantment with their Internet use. The analysis suggests three salient aspects of this enchantment: responsive crossing of boundaries; dialogue in personal transformation; and the potential endlessness and depths of enchantment. It also suggests that some characteristics of interaction with the particular medium facilitate enchantment: personal control over self-presentation; the paradox of being able to carefully craft meaning from what is normally chaotic; the possibility of finding and constructing personal narratives online; and playing in a vast pool of information. Reflection on the results of this single-case analysis points to the value, for understanding user experience, of in-depth single-case analyses that focus on the personal and particular.
18.
Tikk D. Biro G. Gedeon T.D. Koczy L.T. Jae Dong Yang 《IEEE Transactions on Fuzzy Systems》2002,10(5):596-606
Investigates Sugeno's and Yasukawa's (1993) qualitative fuzzy modeling approach. We propose some easily implementable solutions for the unclear details of the original paper, such as trapezoid approximation of membership functions, rule creation from sample data points, and selection of important variables. We further suggest an improved parameter identification algorithm to be applied instead of the original one. These details are crucial to the method's performance, as shown in a comparative analysis, and help to improve the accuracy of the built-up model. Finally, we propose a possible further rule base reduction which can be applied successfully in certain cases. This improvement reduces the time requirement of the method by up to 16% in our experiments.
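Editorial note: as background for readers, the sketch below shows the general shape of a trapezoidal membership function of the kind used in such approximations; the breakpoints are hypothetical and are not taken from the paper.

```python
import numpy as np

def trapezoid_mf(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c],
    with linear ramps on [a, b] and [c, d]."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rising, falling)

# Hypothetical fuzzy set "medium" on a normalized input variable.
xs = np.linspace(0.0, 1.0, 11)
print(trapezoid_mf(xs, 0.2, 0.4, 0.6, 0.8).round(2))
```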
19.
R. T. Coupe 《European Journal of Information Systems》1994,3(1):28-36
There is a common belief that CASE software enhances developer productivity and the quality of applications software. However, the few empirical studies of the impact of CASE have produced inconclusive findings, though they do indicate an absence of appreciable productivity gains. There is a need to determine the extent to which CASE products are worth their cost, and this paper focuses on the key methodological issues involved in assessing the cost-effectiveness of CASE products. Existing studies of the impact of CASE software have considered developers' perceptions, but have not considered system users' perceptions, nor used software metrics to assess applications software. It is also rare for the characteristics of the development environments, and other factors that have an important bearing on the productivity and quality of software, to be investigated. In this paper, perceptual and objective measurement, and the different ways of designing the research and of accessing the population of CASE users, are outlined and evaluated. While the aim is to establish the most appropriate research design and measurement approaches for determining the value of CASE tools, the conclusions are also relevant to the assessment of the impact of other new software technologies.
20.
Jati K. Sengupta 《International Journal of Systems Science》2013,44(3):511-525
The concept of mean-variance efficiency, widely used in the portfolio theory of modern finance, is examined here in terms of (a) its limitations in statistical and empirical applications and (b) alternative non-parametric measures. The non-parametric measures and tests of portfolio efficiency raise some of the most fundamental issues of modern financial economics today, and these are shown to have valuable implications for the theory of capital market efficiency.
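Editorial note: for readers unfamiliar with the machinery being critiqued, the sketch below computes the textbook global minimum-variance portfolio (weights proportional to the inverse covariance matrix applied to a vector of ones, normalized to sum to one) for three hypothetical assets; none of the numbers come from the paper.

```python
import numpy as np

# Made-up covariance matrix and mean returns for three hypothetical assets.
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.010],
                [0.002, 0.010, 0.160]])
mu = np.array([0.05, 0.08, 0.12])

# Global minimum-variance weights: solve cov @ w = vector of ones, then
# normalize the solution so the weights sum to one.
ones = np.ones(3)
w = np.linalg.solve(cov, ones)
w /= w @ ones

print("weights:", w.round(3))
print("portfolio mean: %.4f  variance: %.6f" % (w @ mu, w @ cov @ w))
```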