20 similar documents found; search took 31 ms.
1.
David Corrall 《Requirements Engineering》1997,2(4):217-219
Conclusions: It is asserted that current approaches and automated support for requirements engineering are not yet sufficient to build today's and tomorrow's complex systems. Requirements engineering, itself intricately connected to system design and system solution and not separate from either, needs to be embedded in a total systems engineering approach. This is the route to systems engineering maturity. Software and systems engineering can and should learn from each other.
2.
Benjamin Wells 《Natural computing》2011,10(4):1383-1405
In 1944 the computing machine known as Colossus became operational in support of British cryptanalysis and decryption of German
High Command wireless traffic. This first electronic, digital, and very unconventional computer was not a stored-program, general-purpose computer in today's terms, despite printed claims to the contrary. At least one of these claims asserts that Colossus was a Turing
machine. While an appropriate Turing machine can simulate the operation of Colossus, that is not an argument for generality
of computation. Nor does the behavior of Colossus resemble that of a Turing machine, much less a universal Turing machine
(UTM). Nonetheless, we shall see that a UTM could have been implemented on a cluster of the ten Colossus machines installed at Bletchley Park, England, by the end of WWII in 1945. Improvements require even fewer machines. Several advances in input, output, speed, processing, and applications—within the hardware capability of the time and respectful of the specification of Colossus—are also offered.
3.
Vicki L. Hanson 《Universal Access in the Information Society》2011,10(4):443-452
Is current research on computing by older adults simply looking at a short-term problem? Or will the technology problems that
plague the current generation also be problematic for today’s tech-savvy younger generations when they become “old”? This
paper considers age-related and experience-related issues that affect ability to use new technology. Without more consideration
of the skills of older users, it is likely that applications and devices 20 years from now will have changed such that this "older" generation finds itself confronting an array of technologies that it understands little and finds generally inaccessible. Recent evidence suggests that older adults bring specific strengths to Web browsing. A fuller investigation of these strengths, and of how to design to optimize for them, has the potential to address the need for usable technology for this increasingly important demographic.
4.
Privacy is an ancient concept, but its interpretation and application in the area of e-commerce are still new. It is increasingly
widely accepted, however, that by giving precedence to consumer privacy bigger benefits can be reaped by all parties involved.
There has been much investigation into the concept of privacy, legal frameworks for protecting this most impalpable of human
values and, more recently, computing technologies that help preserve an individual’s privacy in today’s environment. In this
paper we review the historical development of this fundamental concept, discussing how advancements both in society and in
technology have challenged the right to privacy, and we survey the existing computing technologies that promote consumer privacy
in e-commerce. Our study shows that historically the protection of privacy has been driven by both our understanding of privacy and the advancement of technology, analyses the limitations of privacy protections in current e-commerce applications, and identifies directions for the future development of successful privacy-enhancing technologies.
5.
Arvid Kauppi Johan Wikström Bengt Sandblad Arne W. Andersson 《Cognition, Technology & Work》2006,8(1):50-56
Improving train traffic control can be a cost-efficient way to improve train traffic punctuality and increase utilization
of existing and future railway infrastructure. However, train traffic control today is performed at a technical level in order to regulate the traffic flow. Working in a preventive manner is poorly supported, and train traffic
controllers are usually restricted to just solving problems as they occur. This often results in unnecessarily long delays
and decreased timeliness of train traffic. The main objective of this paper is to describe a proposed control strategy and
a case study, which evaluates the control strategy and the prototype tool derived from the research. By shifting the control
paradigm to a high-level control strategy, many of today’s problems may be avoided, with benefits of the reduction in delays,
improved timeliness and better utilization of the infrastructure. Twenty-one train traffic controllers participated in a case study in a simulated prototype environment. The majority of the participating train traffic controllers were positive about the new concepts and ideas. Many of the important aspects of the proposed control strategy can be investigated with the simulation, but due to the complexity of train traffic some issues must be evaluated in an operational environment.
6.
Miomir Vukobratović Veljko Potkonjak Kalman Babković Branislav Borovac 《Multibody System Dynamics》2007,17(1):71-96
In the last decade we have witnessed a rapid growth of humanoid robotics, which has already become an autonomous research field. Humanoid robots (or simply humanoids) are expected to take part in all situations of humans' everyday life, "living" and cooperating with us. They will work in services, in homes, and in hospitals, and they are even expected to get involved in sports. Hence, they will have to be capable of performing diverse kinds of tasks. This forces researchers to develop appropriate mathematical models to support the simulation, design, and
control of these systems. Another important fact is that today’s, and especially tomorrow’s, humanoid robots will be more
and more humanlike in their shape and behavior. A dynamic model developed for an advanced humanoid robot may become a very
useful tool for the dynamic analysis of human motion in different tasks (walking, running and jumping, manipulation, various
sports, etc.). So, we derive a general model and talk about a human-and-humanoid simulation system. The basic idea is to start
from a human/humanoid considered as a free spatial system (“flier”). Particular problems (walking, jumping, etc.) are then
considered as different contact tasks – interactions between the flier and various objects (either single bodies or separate dynamic systems).
7.
The aerospace industry today faces major problems concerning safety in the crowded skies around airports and continuing increases in fuel prices. Lockheed-California Company, in collaboration with the NASA Langley Research Center, has been working on a development that tackles both of these problems — an airborne four-dimensional computer capability for the L-1011 Tristar jetliner. A trial installation plane is flying with colour electronic displays on a portion of its instrument panel to achieve these ends. The display system is intended to control flights to such a degree that arrival times can be predicted to within a matter of seconds, substantially reducing the congestion and delays of today's airways. Accurate on-board prediction of arrival times, in conjunction with an en route traffic metering technique, should make air traffic flow much more efficient and lead to a substantial reduction in fuel consumption.
8.
A recent investigation revealed a substantiated need for the development of a micro-simulation system designed for traffic safety assessment. This paper describes the development of a road traffic simulation system which uses a 'nanoscopic model' of driver behaviour and an integrated analysis-evaluation system designed for traffic safety assessment. The primary focus is on estimating the effects of an advanced driver assistance system in reducing traffic accidents. The effectiveness and validity of the present system are demonstrated through comparison with measured traffic data. This paper also proposes algorithms embedded in a 'driver-agent' for recognising a driver's intentions regarding the choice of steering-control modes, lateral control tasks, and driving mood, since driver assistance systems need to recognise the driver's intention when choosing steering control. The results of a simulation study, using data drawn from actual driving, show that the systems would achieve a high recognition capability. As an example of how driving-mood recognition applies to driver assistance systems, an advanced steering system and its adaptability to the driver's mood are also presented.
9.
The importance of reporting is ever increasing in today’s fast-paced market environments and the availability of up-to-date
information for reporting has become indispensable. Current reporting systems are separated from the online transaction processing
systems (OLTP) with periodic updates pushed in. A pre-defined and aggregated subset of the OLTP data, however, does not provide
the flexibility, detail, and timeliness needed for today’s operational reporting. As technology advances, this separation
has to be re-evaluated, and ways to study and evaluate new trends in data storage management have to be provided. This article proposes a benchmark for combined OLTP and operational reporting, providing the means to evaluate the performance of enterprise
data management systems for mixed workloads of OLTP and operational reporting queries. Such systems offer up-to-date information
and the flexibility of the entire data set for reporting. We describe how the benchmark provokes the conflicts that are the
reason for separating the two workloads on different systems. In this article, we introduce the concepts, logical data schema,
transactions and queries of the benchmark, which are entirely based on the original data sets and real workloads of existing,
globally operating enterprises.
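The conflict that motivates separating the two workloads can be illustrated with a toy mixed workload: short OLTP write transactions interleaved with reporting queries that aggregate the full, up-to-date detail rows. The schema and query mix below are illustrative assumptions, not the benchmark's actual definitions; the sketch uses Python with an in-memory SQLite store.

```python
import random
import sqlite3
import time

# Toy mixed workload: OLTP writes and operational reporting reads
# hit the *same* store, so reports always see current detail rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

def oltp_txn():
    """One short write transaction (the OLTP side of the workload)."""
    db.execute("INSERT INTO orders (amount) VALUES (?)",
               (random.uniform(1.0, 100.0),))
    db.commit()

def reporting_query():
    """An operational report: aggregates the full, live detail set."""
    return db.execute("SELECT COUNT(*), AVG(amount) FROM orders").fetchone()

t0 = time.perf_counter()
for i in range(1000):
    oltp_txn()
    if i % 100 == 0:          # one report per 100 transactions
        count, avg = reporting_query()
elapsed = time.perf_counter() - t0
print(f"1000 txns with interleaved reports in {elapsed:.3f}s")
```

In a real benchmark run the two query classes would contend for the same pages and indexes, which is exactly the conflict the article's benchmark is designed to provoke and measure.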
10.
The way in which humans perceive and react to visual complexity is an important issue in many areas of research and application,
particularly because simplification of complex matter can lead to a better understanding both of human behaviour in visual control tasks and of the visual environment itself. One area of interest is how people perceive their world in terms of complexity
and how this can be modelled mathematically and/or computationally. A prototype model of complexity has been derived using
subcomponents called ‘SymGeons’ (Symmetrical Geometric Icons) based on Biederman’s original Geon Model for human perception.
The SymGeons are primitive shapes which constitute foreground objects. This paper outlines the derivation and ongoing development
of the ‘SymGeon’ model and how it compares to human perception of visual complexity. The application of the model to understanding
complex human-in-the-loop problems associated with visual remote control operations, e.g. control of remotely operated vehicles,
is discussed.
11.
In recent years, various efforts have been made in air traffic control (ATC) to maintain traffic safety and efficiency in the face of increasing air traffic demands. ATC is a complex process that depends to a large degree on human capabilities, and so understanding how controllers carry out their tasks is an important issue in the design and development of ATC systems. In particular, the human factor is considered to be a serious problem in ATC safety and has been identified as a causal factor in both major and minor incidents. There is, therefore, a need to analyse the mechanisms by which errors occur due to complex factors and to develop systems that can deal with these errors. From the cognitive process perspective, it is essential that system developers have an understanding of the more complex working processes that involve the cooperative work of multiple controllers. Distributed cognition is a methodological framework for analysing cognitive processes that span multiple actors mediated by technology. In this research, we attempt to analyse and model interactions that take place in en route ATC systems based on distributed cognition. We examine the functional problems in an ATC system from a human factors perspective, and conclude by identifying certain measures by which to address these problems. This research focuses on the analysis of air traffic controllers' tasks for en route ATC and modelling controllers' cognitive processes. PRACTITIONER SUMMARY: This research focuses on an experimental study to gain a better understanding of controllers' cognitive processes in air traffic control. We conducted ethnographic observations and then analysed the data to develop a model of controllers' cognitive process. This analysis revealed that strategic routines are applicable to decision making.
12.
Information systems are the glue between people and computers. Both the social and business environments are in a continual,
some might say chaotic, state of change while computer hardware continues to double its performance about every 18 months.
This presents a major challenge for information system developers. The term user-friendly is an old one, but one which has come to take on a multitude of meanings. However, in today’s context we might well take
a user-friendly system to be one where the technology fits the user’s cognitive models of the activity in hand. This article
looks at the relationship between information systems and the changing demands of their users as the underlying theme for
the current issue of Cognition, Technology and Work. People, both as individuals and organisations, change. The functionalist viewpoint, which attempts to freeze and inhibit
such change, has failed systems developers on numerous occasions. Responding to, and building on, change in the social environment
is still a significant research issue for information systems specialists who need to be able to create living information
systems.
13.
Ugo Pagallo 《AI & Society》2011,26(4):347-354
This paper adopts a legal perspective to counter some exaggerations in today's debate on the social understanding of robotics. According to a long and well-established tradition, there is in fact a relatively strong consensus among lawyers about some key notions, such as agency and liability, in the current use of robots. However, dealing with a field in rapid evolution, we need to rethink some basic tenets of the contemporary legal framework. In particular, the time has come for lawyers to acknowledge that some acts of robots should be considered a new source of legal responsibility for others' behaviour.
14.
Domagoj Babić Alan J. Hu 《International Journal on Software Tools for Technology Transfer (STTT)》2009,11(4):325-338
Despite many advances, today’s software model checkers and extended static checkers still do not scale well to large code
bases when verifying properties that depend on complex interprocedural flow of data. An obvious approach to improve performance
is to exploit software structure. Although a tremendous amount of work has been done on exploiting structure at various levels
of granularity, the fine-grained shared structure among multiple verification conditions has been largely ignored. In this
paper, we formalize the notion of shared structure among verification conditions and propose a novel and efficient approach
to exploit this sharing by safely reusing facts learned while checking one verification condition to help solve the others.
Experimental results show that this approach can improve the performance of verification, even on path- and context-sensitive
and dataflow-intensive properties.
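The sharing idea can be caricatured in a few lines, under the assumption that verification conditions are boolean expression trees that share subtrees: facts (here, truth values of subexpressions under an assignment) established while checking one verification condition are cached and reused when checking the others. This is a toy illustration of cross-VC fact reuse, not the authors' algorithm, which operates on facts learned inside a solver.

```python
from itertools import product

VARS = ("x", "y", "z")
memo = {}   # (subexpression, assignment) -> truth value, shared across VCs

def evaluate(e, env):
    """Evaluate a boolean expression tree under an assignment, caching
    every subexpression so later VCs reuse already-established facts."""
    key = (e, env)
    if key in memo:
        return memo[key]
    op = e[0]
    if op == "var":
        val = env[VARS.index(e[1])]
    elif op == "not":
        val = not evaluate(e[1], env)
    elif op == "and":
        val = evaluate(e[1], env) and evaluate(e[2], env)
    else:  # "or"
        val = evaluate(e[1], env) or evaluate(e[2], env)
    memo[key] = val
    return val

def valid(e):
    """Check one verification condition by brute force over assignments."""
    return all(evaluate(e, env)
               for env in product((False, True), repeat=len(VARS)))

shared = ("or", ("var", "x"), ("not", ("var", "x")))          # x OR not x
vc1 = ("and", shared, ("or", ("var", "y"), ("not", ("var", "y"))))
vc2 = ("or", shared, ("var", "z"))
print(valid(vc1), valid(vc2))   # vc2 reuses the facts cached while checking vc1
```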
15.
The success of the Internet is largely ascribable to the packet-switching scheme, which, however, also presents major challenges.
Having identified three missing links in the current Internet architecture based on our long-term experiences of designing
and operating large-scale backbones, we put forward a new, but incrementally deployable, network scheme—address switching.
Address switching has the advantages of both packet switching and circuit switching; it supplies the missing links in the current Internet architecture and can reform Internet traffic. Our analysis, protocol design and experiments indicate that address switching can greatly improve the quality of service (QoS), security and routing scalability of today's Internet. It can thus provide flexible, high-performance and "per-service" networking for the scientific research communities. Moreover, it can provide a fairer and more sustainable business model for the commodity Internet.
Supported by the China Next Generation Internet Project (Grant No. CNGI-04-13-2T) and the National Basic Research Program of China (Grant No. 041710001).
16.
Amdahl’s Law is based upon two assumptions – that of boundlessness and homogeneity – and so it can fail when applied to single
chip heterogeneous multiprocessor designs, and even microarchitecture. We show that a performance increase in one part of
the system can negatively impact the overall performance of the system, in direct contradiction to the way Amdahl's Law is taught. Fundamental assumptions that are consistent with Amdahl's Law are a heavily ingrained part of our computing design culture, for research as well as design. This paper points in a new direction. We argue that emphasis should be placed on holistic, system-level views instead of divide-and-conquer approaches. This, in turn, has relevance to the potential impacts of custom processors, system-level scheduling strategies and the way systems are partitioned. We recognize that Amdahl's Law is one of the few fundamental laws of computing. However, its very power is in its simplicity, and if that simplicity is carried over to future systems, we believe it will impede the potential of future computing systems.
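The inversion the abstract describes can be sketched numerically. Under the classic law, accelerating a fraction f of the work by a factor s yields an overall speedup of 1/((1-f) + f/s), on the boundlessness assumption that the rest of the system is untouched. On a single heterogeneous chip, shared resources (area, power) break that assumption; the numbers below are illustrative, not drawn from the paper.

```python
def amdahl(f, s):
    """Classic Amdahl's Law: overall speedup when a fraction f of the
    work is accelerated by factor s and the rest is unaffected."""
    return 1.0 / ((1.0 - f) + f / s)

def heterogeneous_time(f, s_a, s_b):
    """Normalized total runtime when fraction f of the work runs s_a
    times faster but the remaining work runs at relative speed s_b
    (s_b < 1 models resources taken away from the rest of the chip)."""
    return f / s_a + (1.0 - f) / s_b

base = heterogeneous_time(0.3, 1.0, 1.0)    # untuned chip: runtime 1.0
tuned = heterogeneous_time(0.3, 2.0, 0.8)   # 2x on 30% of work, rest 20% slower

print(amdahl(0.3, 2.0))   # Amdahl's prediction: ~1.18x speedup
print(base / tuned)       # actual speedup: ~0.98x, a net slowdown
```

The "tuned" design is a win by Amdahl's accounting but a loss once the slowdown of the remaining work is charged against it, which is the paper's point about needing a system-level view.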
17.
18.
Samer Al-Kiswany Matei Ripeanu Adriana Iamnitchi Sudharshan Vazhkudai 《Journal of Grid Computing》2009,7(1):91-114
The avalanche of data from scientific instruments and the ensuing interest from geographically distributed users to analyze
and interpret it accentuates the need for efficient data dissemination. A suitable data distribution scheme will find the
delicate balance between conflicting requirements of minimizing transfer times, minimizing the impact on the network, and
uniformly distributing load among participants. We identify several data distribution techniques, some successfully employed
by today’s peer-to-peer networks: staging, data partitioning, orthogonal bandwidth exploitation, and combinations of the above.
We use simulations to explore the performance of these techniques in contexts similar to those used by today’s data-centric
scientific collaborations and derive several recommendations for efficient data dissemination. Our experimental results show
that the peer-to-peer solutions that offer load balancing and good fault tolerance properties and have embedded participation
incentives lead to unjustified costs in today’s scientific data collaborations deployed on over-provisioned network cores.
However, as user communities grow and these deployments scale, peer-to-peer data delivery mechanisms will likely outperform
other techniques.
19.
Most of today's malware is able to detect traditional debuggers and change its behavior whenever somebody tries to analyze it. The analysis of such malware then becomes a much more complex task. In this paper, we present the functionalities provided by the Kolumbo kernel module, which can help simplify the analysis of malware. Four functionalities are provided for the analyst: system-call monitoring, virtual-memory contents dumping, pseudo-breakpoint insertion and eluding anti-debugging protections based on ptrace. The module has been designed to minimize its impact on the system and to be as undetectable as possible. However, it has not been designed to analyze programs with kernel access.
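As an illustration of the kind of check such malware performs (and that a tool like Kolumbo must therefore elude), a Linux process can detect an attached ptrace tracer by reading the TracerPid field of /proc/self/status. This is a generic, Linux-only sketch of the technique, not Kolumbo's interface.

```python
def tracer_pid():
    """Return the PID of any process currently ptrace-attached to this
    one (0 means no tracer). Linux-only: parses /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("TracerPid:"):
                return int(line.split()[1])
    return 0

# Malware typically branches on this result to hide its real behaviour.
if tracer_pid():
    print("tracer detected; altering behaviour")
else:
    print("no tracer attached")
```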
20.
This paper aims at constructing a music composition system that composes music through interaction between a human and a computer. Even users without special musical knowledge can compose 16-bar musical works with one melody part and some backing parts using this system. An interactive genetic algorithm is introduced to music composition so that users' feelings toward music are reflected in the composed music. Each chromosome corresponds to the information for a 4-bar musical work. Users participate in the composition by evaluating the composed works; GA operators such as crossover, mutation, and virus infection are then applied to the chromosomes based on the evaluation results. From the experimental results, it is found that the users' evaluation values increase over the progress of generations. That is, the system can compose 16-bar musical works reflecting the users' feeling.
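The interactive loop can be sketched as follows. The chromosome encoding, pitch scale, and operator details below are illustrative assumptions (the paper's chromosomes carry full 4-bar work information, and the virus-infection operator is omitted for brevity); the key point is that fitness comes from the user's ratings rather than an objective function.

```python
import random

BAR_LEN = 4                  # notes per bar (toy resolution, an assumption)
BARS = 4                     # one chromosome encodes a 4-bar phrase
SCALE = list(range(60, 72))  # MIDI pitches C4..B4 (illustrative choice)

def random_chromosome():
    return [random.choice(SCALE) for _ in range(BARS * BAR_LEN)]

def crossover(a, b):
    """Single-point crossover at a bar boundary."""
    cut = random.randrange(1, BARS) * BAR_LEN
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.choice(SCALE) if random.random() < rate else n
            for n in chrom]

def next_generation(population, ratings, elite=2):
    """Breed a new population from the user's ratings (higher = preferred)."""
    ranked = [c for _, c in sorted(zip(ratings, population),
                                   key=lambda pair: -pair[0])]
    children = ranked[:elite]                 # keep the best-rated works
    while len(children) < len(population):
        a, b = random.sample(ranked[:4], 2)   # parents from the top four
        children.append(mutate(crossover(a, b)))
    return children

pop = [random_chromosome() for _ in range(6)]
# In the real system these ratings come from the user listening to each phrase.
pop = next_generation(pop, ratings=[3, 5, 1, 4, 2, 5])
```

Repeating the rate-and-breed step over several generations is what lets the population drift toward the user's taste, matching the paper's observation that evaluation values rise across generations.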
Muneyuki Unehara: He received his M.S. in Engineering in 2002 from the Institute of Science and Engineering, University of Tsukuba. Currently, he is a Ph.D. candidate in the Graduate School of Systems and Information Engineering, University of Tsukuba. His research interests include the construction of intelligent systems using soft computing techniques and human interfaces.
Takehisa Onisawa, Ph.D.: He received his Dr.Eng. in Systems Science in 1986 from the Tokyo Institute of Technology. Currently, he is a Professor in the Graduate School of Systems and Information Engineering, University of Tsukuba. His research interests include applications of soft computing techniques to human-centered systems thinking. He is a member of IEEE and IFSA.