Found 20 similar documents (search time: 78 ms)
1.
Steve Lipner 《Datenschutz und Datensicherheit - DuD》2010,34(3):135-137
The increasing adoption of “client and cloud” computing raises several important concerns about security. This article discusses
security issues that are associated with “client and cloud” computing and their impact on organizations that host applications “in the
cloud.” It describes how Microsoft minimizes the security vulnerabilities in these possibly mission-critical platforms and
applications by following two complementary approaches: developing the policies, practices, and technologies to make their
“client and cloud” applications as secure as possible, and managing the security of the platform environment through clearly
defined operational security policies.
2.
Naohiko Kohtake Ryo Ohsawa Takuro Yonezawa Masayuki Iwai Kazunori Takashio Hideyuki Tokuda 《Personal and Ubiquitous Computing》2007,11(7):591-606
This paper proposes the concept of DIY (do-it-yourself) ubiquitous computing, an architecture allowing non-experts to establish
ubiquitous computing environments in the real world. The concept has been implemented in the “u-Texture”, a self-organizable
panel that works as a building block. While the traditional scheme attaches devices such as computers, sensors, and network
equipment externally to make everyday objects smart, the u-Texture has these devices built in beforehand so that smart
objects can be assembled from the panels themselves. A u-Texture can change its behavior autonomously by recognizing its location, its angle of inclination,
and its surrounding environment once the panels are physically assembled. This paper describes the design, the implementation, and
various applications of u-Textures to confirm that the concept can contribute to establishing ubiquitous computing environments
in the real world without expert users.
3.
Linda M. Gallant 《Personal and Ubiquitous Computing》2006,10(5):325-332
Product testing of mobile communication technology has typically employed the same research methodologies that were traditionally applied to stationary technology. An approach that does not primarily rely on physical location to study mobile communication technologies is thus needed. The stable component of mobile communication technology is not physical space but human communication. Therefore, a research model is developed based on an ethnography of communication approach, which designates “talk” (i.e., symbolic communication) as the primary and essential unit of measurement while making stationary physical location secondary. This allows design teams to enter a user “speech community” anywhere. Eight participants tested both the stationary and mobile versions of customer relationship management software for sales. All participants were professional salespeople, comprising a speech community. Users articulated their “local” speech community meaning systems in the form of scenarios of use, which can guide product design and marketing. The findings show that proof-of-concept testing of mobile versions of desktop software can be done in conjunction with the usability testing for stationary technology.
4.
Joseph F. McCarthy 《Personal and Ubiquitous Computing》2001,5(1):75-77
Most environments are passive – deaf, dumb, and blind, unaware of their inhabitants and unable to assist them in a meaningful way. However, with the advent
of ubiquitous computing – ever smaller, cheaper, and faster computational devices embedded in a growing variety of “smart”
objects – it is becoming increasingly possible to create active environments: physical spaces that can sense and respond appropriately to the people and activities taking place within them.
Most of the early ubiquitous computing applications focus on how individuals interact with their environments as they work on foreground tasks. In contrast, this paper focuses on how groups of people affect and are affected by background aspects of their environments.
5.
Smart Objects as Building Blocks for the Internet of Things (Total citations: 5; self-citations: 0; cited by others: 5)
Gerd Kortuem Fahim Kawsar Vasughi Sundramoorthy Daniel Fitton 《Internet Computing, IEEE》2010,14(1):44-51
The combination of the Internet and emerging technologies such as near-field communications, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex applications.
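The activity-, policy-, and process-aware levels described in the abstract above can be pictured as layered behavior, each level building on the one below it. The following Python sketch is purely illustrative; the class names and the event model are assumptions, not the authors' design:

```python
# Illustrative layering of smart-object awareness (hypothetical names).

class ActivityAwareObject:
    """Senses and records activity events involving the object."""
    def __init__(self):
        self.activity_log = []

    def sense(self, event):
        self.activity_log.append(event)

class PolicyAwareObject(ActivityAwareObject):
    """Additionally checks activities against rules (policies)."""
    def __init__(self, policies):
        super().__init__()
        self.policies = policies  # list of (predicate, warning) pairs

    def sense(self, event):
        super().sense(event)
        # Return the warnings of every policy the event triggers.
        return [warning for predicate, warning in self.policies if predicate(event)]

class ProcessAwareObject(PolicyAwareObject):
    """Additionally relates activities to steps of a wider work process."""
    def __init__(self, policies, process_steps):
        super().__init__(policies)
        self.process_steps = process_steps  # ordered step names
        self.current = 0                    # index of the next expected step

    def sense(self, event):
        violations = super().sense(event)
        # An activity matching the expected step advances the process.
        if self.current < len(self.process_steps) and event == self.process_steps[self.current]:
            self.current += 1
        return violations
```

Each level subsumes the one below it, mirroring the paper's claim that the successive abstractions support increasingly complex applications.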
6.
Recent advances in hardware and software technologies for computer games have proved to be more than capable of delivering
quite detailed virtual environments on PC platforms and gaming consoles for so-called “serious” applications, at a fraction
of what they cost eight years ago. SubSafe is a recent example of what can be achieved in part-task naval training applications using gaming technologies, exploiting
freely available, freely distributable software. SubSafe is a proof-of-concept demonstrator that presents end users with an interactive, real-time three-dimensional model of part
of a Trafalgar Class submarine. This “Part 1” paper presents the background to the SubSafe project and outlines the experimental design for a pilot study being conducted between August 2008 and January 2009, in conjunction
with the Royal Navy’s Submarine School in Devonport. The study is investigating knowledge transfer from the classroom to a
real submarine environment (during week 7 of the students’ “Submarine Qualification Dry” course), together with general usability
and interactivity assessments. Part 2 of the paper (to be completed in early 2009) will present the results of these trials
and consider future extensions of the research into other submarine training domains, including periscope ranging and look-interval
assessment skills, survival systems deployment training and the planning and rehearsal of submersible rescue operations.
7.
Fahim Kawsar Tatsuo Nakajima Jong Hyuk Park Sang-Soo Yeo 《The Journal of supercomputing》2010,54(1):4-28
A smart object system encompasses the synergy between computationally augmented everyday objects and external applications. This paper presents a software framework for building smart object systems following a declarative programming approach centered around custom-written documents that glue the smart objects together. More specifically, in the proposed framework, applications’ requirements and smart objects’ services are objectified through structured documents. A runtime infrastructure provides the spontaneous federation between smart objects and applications through structural type matching of these documents. There are three primary advantages of our approach: firstly, it allows developers to write applications in a generic way without prior knowledge of the smart objects that could be used by the applications. Secondly, smart object management (locating, accessing, etc.) issues are completely handled by the infrastructure; thus application development becomes rapid and simple. Finally, the programming abstraction used in the framework allows the functionalities of smart objects and applications to be extended very easily. We describe an implemented prototype of our framework and show examples of its use in a real-life scenario to illustrate its feasibility.
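The structural type matching that federates applications with smart objects might look roughly like the following Python sketch: an application states its requirements as a structured document, each object advertises its services the same way, and the runtime binds a requirement to any service whose structure contains it. The document shapes and function names are hypothetical, not the framework's actual API:

```python
# Hypothetical sketch of document-based structural type matching.

def structurally_matches(requirement, service):
    """True if every field the requirement asks for is satisfied by the service."""
    for key, wanted in requirement.items():
        if key not in service:
            return False
        provided = service[key]
        if isinstance(wanted, dict) and isinstance(provided, dict):
            # Nested document: match recursively.
            if not structurally_matches(wanted, provided):
                return False
        elif wanted != provided:
            return False
    return True

def federate(app_requirements, smart_objects):
    """Bind each application requirement to the first matching object service."""
    bindings = {}
    for name, requirement in app_requirements.items():
        for obj_id, service in smart_objects.items():
            if structurally_matches(requirement, service):
                bindings[name] = obj_id
                break
    return bindings
```

The application never names a concrete object; it only describes the service shape it needs, which is the "generic" development style the abstract claims.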
8.
Daniel E. O’Leary 《Information Systems and E-Business Management》2008,6(3):239-255
Supporting decisions in real time has been the subject of a number of research efforts. This paper reviews the technology
and architecture necessary to create an autonomic supply chain for a real-time enterprise. The technologies
woven together include knowledge-based event managers, intelligent agents, radio frequency identification (RFID), database
and system integration, and enterprise resource planning systems.
This article is part of the “Handbook on Decision Support Systems” edited by Frada Burstein and Clyde W. Holsapple (2008)
Springer.
9.
Significant research efforts are currently focused upon investigating the potential of nanoparticles for enhancing performance
in numerous diverse research fields. These range from developing energy efficient heating/cooling technologies to advanced
in vivo drug delivery systems. In these applications, nanoparticles are suspended in fluid media, termed “nanofluids”,
and empirical investigations have shown that their transport properties are far superior to those anticipated from conventional
prediction models. Most research efforts to date have focused upon understanding the bulk properties of these fluids, but
it is noted that accurate models cannot be developed without knowledge of how embedded particles affect local flow phenomena.
This letter describes the novel application of micro-particle image velocimetry for attaining such measurements within nanofluids
and demonstrates how these measurements can lead to theories based on observed flow physics, or serve to validate or refute many of the
recently proposed theories attempting to elucidate the mechanisms at play in nanofluids.
10.
Edward E. Cobb 《The VLDB Journal: The International Journal on Very Large Data Bases》1997,6(3):173-190
Businesses today are searching for information solutions that enable them to compete in the global marketplace. To minimize
risk, these solutions must build on existing investments, permit the best technology to be applied to the problem, and be
manageable. Object technology, with its promise of improved productivity and quality in application development, delivers
these characteristics but, to date, its deployment in commercial business applications has been limited. One possible reason
is the absence of the transaction paradigm, widely used in commercial environments and essential for reliable business applications.
To be a serious contender in the construction of these solutions, object technology requires:
– technology for transactional objects. In December 1994, the Object Management Group adopted a specification for an object
transaction service (OTS). The OTS specifies mechanisms for defining and manipulating transactions. Though derived from the X/Open distributed
transaction processing model, OTS contains additional enhancements specifically designed for the object environment. Similar
technology from Microsoft appeared at the end of 1995.
– methodologies for building new business systems from existing parts. Business process re-engineering is forcing businesses
to improve the operations that bring products to market. Workflow computing, when used in conjunction with “object wrappers”, provides tools to both define and track the execution of business processes that leverage existing applications and infrastructure.
– an execution environment which satisfies the requirements of the operational needs of the business. Transaction processing
(TP) monitor technology, though widely accepted for mainframe transaction processing, has yet to enjoy similar success in
the client/server marketplace. Instead the database vendors, with their extensive tool suites, dominate. As object brokers
mature they will require many of the functions of today's TP monitors. Marrying these two technologies can produce a robust
execution environment which offers a superior alternative for building and deploying client/server applications.
Edited by Andreas Reuter, Received February 1995 / Revised August 1995 / Accepted May 1996
11.
This paper provides an overview of a multi-modal wearable computer system, SNAP&TELL. The system performs real-time gesture
tracking, combined with audio-based control commands, in order to recognize objects in an environment, including outdoor landmarks.
The system uses a single camera to capture images, which are then processed to perform color segmentation, fingertip shape
analysis, robust tracking, and invariant object recognition, in order to quickly identify the objects encircled and SNAPped
by the user’s pointing gesture. In addition, the system returns an audio narration, TELLing the user information concerning
the object’s classification, historical facts, usage, etc. This system provides enabling technology for the design of intelligent
assistants to support “Web-On-The-World” applications, with potential uses such as travel assistance, business advertisement,
the design of smart living and working spaces, and pervasive wireless services and internet vehicles.
An erratum to this article can be found at
12.
Bin Guo Ryota Fujimura Daqing Zhang Michita Imai 《Multimedia Tools and Applications》2012,59(1):259-277
Treasure is a pervasive game played in the context of people’s daily living environments. Unlike previous pervasive games
that are based on predefined content and proprietary devices, Treasure exploits the “design-in-play” concept to enhance
the variability of a game in mixed-reality environments. Dynamic and personalized role design and allocation by players is
enabled by exploring local smart objects as game props. The variability of the game is also enhanced by several other aspects,
such as user-oriented context-aware action setting and playing environment redeployment. The effectiveness of the “design-in-play”
concept is validated through a user study in which 15 subjects were recruited to play and author the trial game.
13.
Matteo Colombo 《Minds and Machines》2010,20(2):183-202
According to John Haugeland, the capacity for “authentic intentionality” depends on a commitment to constitutive standards
of objectivity. One of the consequences of Haugeland’s view is that a neurocomputational explanation cannot be adequate to
understand “authentic intentionality”. This paper gives grounds to resist such a consequence. It provides the beginning of
an account of authentic intentionality in terms of neurocomputational enabling conditions. It argues that the standards, which
constitute the domain of objects that can be represented, reflect the statistical structure of the environments where brain
sensory systems evolved and develop. The objection that I equivocate on what Haugeland means by “commitment to standards”
is rebutted by introducing the notion of “florid, self-conscious representing”. Were the hypothesis presented plausible, computational
neuroscience would offer a promising framework for a better understanding of the conditions for meaningful representation.
14.
Service management and design has largely focused on the interactions between employees and customers. This perspective holds
that the quality of the “service experience” is primarily determined during this final “service encounter” that takes place
in the “front stage.” This emphasis discounts the contribution of the activities in the “back stage” of the service value
chain where materials or information needed by the front stage are processed. However, the vast increase in web-driven consumer
self-service applications and other automated services requires new thinking about service design and service quality. It
is essential to consider the entire network of services that comprise the back and front stages as complementary parts of
a “service system.” We need new concepts and methods in service design that recognize how back stage information and processes
can improve the front stage experience. This paper envisions a methodology for designing service systems that synthesizes
(front-stage-oriented) user-centered design techniques with (back stage) methods for designing information-intensive applications.
15.
FORM: A feature-oriented reuse method with domain-specific reference architectures (Total citations: 3; self-citations: 0; cited by others: 3)
Kyo C. Kang Sajoong Kim Jaejoon Lee Kijoo Kim Euiseob Shin Moonhang Huh 《Annals of Software Engineering》1998,5(1):143-168
Systematic discovery and exploitation of commonality across related software systems is a fundamental technical requirement
for achieving successful software reuse. By examining a class/family of related systems and the commonality underlying those
systems, it is possible to obtain a set of reference models, i.e., software architectures and components needed for implementing
applications in the class. FORM (Feature-Oriented Reuse Method) supports the development of such reusable architectures and components
(through a process called “domain engineering”) and the development of applications using the domain artifacts produced from
domain engineering. FORM starts with an analysis of commonality among applications in a particular domain in terms of
services, operating environments, domain technologies, and implementation techniques. The model constructed during the analysis
is called a “feature” model, and it captures commonality as an AND/OR graph, where AND nodes indicate mandatory features and
OR nodes indicate alternative features selectable for different applications. Then, this model is used to define parameterized
reference architectures and appropriate reusable components instantiatable during application development. Architectures are
defined from three different viewpoints (subsystem, process, and module) and have intimate association with the features.
The subsystem architecture is used to package service features and allocate them to different computers in a distributed environment.
Each subsystem is further decomposed into processes considering the operating environment features. Modules are defined based
on the features on domain technology and implementation techniques. These architecture models that represent an architecture
at different levels of abstraction are derived from the feature hierarchy captured in the feature model. Modules serve as
basis for creating reusable components, and their specification defines how they are integrated into the application (e.g.,
as-is integration of pre-coded components, instantiation of parameterized templates, and filling-in of skeletal code). Our
experiences have shown that for the electronic bulletin board and the private branch exchange (PBX) domains, “features” make
up a common domain language and the main communication medium among application users and developers. Thus, the feature
model well represents a “decision space” of software development, and is a good starting point for identifying candidate reusable
components.
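The AND/OR feature model at the heart of FORM can be illustrated with a small interpreter: AND nodes contribute all mandatory children, OR nodes contribute the one alternative selected for a given application. The encoding and the feature names below are invented for the example, not taken from the paper:

```python
# Illustrative AND/OR feature-model expansion (hypothetical encoding).

def expand(model, feature, choices):
    """Return the set of features implied by `feature` under `choices`.

    `model` maps a feature name to ("and", [children]) for mandatory
    decomposition, ("or", [children]) for alternatives, or is absent
    for a leaf feature. `choices` maps each OR node to the selected
    alternative for the application being configured.
    """
    node = model.get(feature)
    if node is None:                  # leaf feature
        return {feature}
    kind, children = node
    if kind == "and":                 # AND node: all children are mandatory
        selected = children
    else:                             # OR node: exactly one chosen alternative
        selected = [choices[feature]]
    result = {feature}
    for child in selected:
        result |= expand(model, child, choices)
    return result
```

Different `choices` dictionaries yield different applications from the same reference model, which is exactly the reuse step FORM's reference architectures parameterize.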
16.
Gabriele Guidi Bernard Frischer Michele Russo Alessandro Spinetti Luca Carosso Laura Loredana Micoli 《Machine Vision and Applications》2006,17(6):349-360
Cultural heritage digitization is becoming more common every day, but the applications discussed in the literature address
mainly the digitization of objects at a resolution proportional to the object size, using low resolution for large artifacts
such as buildings or large statues, and high resolution for small detailed objects. The case studied in this paper concerns
a huge physical model of imperial Rome (16 × 17.5 m) whose extremely small details forced the use of high resolution and
low noise scanning, in contrast with the long range needed. This paper gives an account of the procedures and the technologies
used for solving this “contradiction”.
17.
This paper proposes a comprehensive approach to the development of technology infrastructure for the application of information
technology (IT) based solutions in teleconstruction—the performance of on-site construction and related tasks through the use
of IT and robotics by a remotely located team of project participants: general contractor, subcontractors, equipment operators,
materials suppliers, and project office professionals. The paper proposes that technologies exist that enable both terrestrial
and extraterrestrial teleconstruction.
Thomas Bock: German-American “Frontiers of Engineering” Symposium Participant, Essen 2001, Alexander von Humboldt Foundation 2004 “CONNECT”
Award Recipient
Mirosław Skibniewski: German-American “Frontiers of Engineering” Symposium Participant and Member of the Organizing Committee, Essen 2001, Alexander
von Humboldt Foundation 2004 “CONNECT” Award Recipient
2–6: Photos copyright, Thomas Bock, TU Munich, Germany.
7–8: Photos and figure copyright, Prof. Masahiru Nohmi, Kagawa University, Japan.
18.
We present a design approach for manipulative technologies that considers “user diversity” as a main lever for design. Different
dimensions of “diversity” are considered, e.g., the users' age, abilities, culture, cultural background, and alphabetization.
These dimensions drive the development of a user-centered design process for manipulative technologies for learning and play
environments. In particular, we explore the possibility of allowing young children to develop and interact with virtual/physical
worlds by manipulating physical objects in different contexts, like the classroom, the hospital, or the playground. In our
scenarios, we consider children with different abilities (fully able, physically impaired, or with cognitive delays), in different
cultures (Denmark, Tanzania, and Italy), and with a different level of alphabetization. The needs and expectations of such
heterogeneous user-groups are taken into account through a user-centered design process to define a concept of tangible media
for collaborative and distributed edutainment environments. The concept is implemented as a set of building blocks called
I-Blocks with individual processing and communication power. Using the I-Blocks system, children can do “programming by building,”
and thereby construct interacting artefacts in an intuitive manner without the need to learn and use traditional programming
languages. Here, we describe in detail the technology of I-Blocks and discuss lessons learned from “designing for diversity.” 相似文献
19.
Michael L. Nelson Frank McCown Joan A. Smith Martin Klein 《International Journal on Digital Libraries》2007,6(4):327-349
To date, most of the focus regarding digital preservation has been on replicating copies of the resources to be preserved
from the “living web” and placing them in an archive for controlled curation. Once inside an archive, the resources are subject
to careful processes of refreshing (making additional copies to new media) and migrating (conversion to new formats and applications).
For small numbers of resources of known value, this is a practical and worthwhile approach to digital preservation. However,
due to the infrastructure costs (storage, networks, machines) and more importantly the human management costs, this approach
is unsuitable for web scale preservation. The result is that difficult decisions need to be made as to what is saved and what
is not saved. We provide an overview of our ongoing research projects that focus on using the “web infrastructure” to provide
preservation capabilities for web pages and examine the overlap these approaches have with the field of information retrieval.
The common characteristic of the projects is they creatively employ the web infrastructure to provide shallow but broad preservation
capability for all web pages. These approaches are not intended to replace conventional archiving approaches, but rather they
focus on providing at least some form of archival capability for the mass of web pages that may prove to have value in the
future. We characterize the preservation approaches by the level of effort required by the web administrator: web sites are
reconstructed from the caches of search engines (“lazy preservation”); lexical signatures are used to find the same or similar
pages elsewhere on the web (“just-in-time preservation”); resources are pushed to other sites using NNTP newsgroups and SMTP
email attachments (“shared infrastructure preservation”); and an Apache module is used to provide OAI-PMH access to MPEG-21
DIDL representations of web pages (“web server enhanced preservation”).
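A lexical signature of the kind used for “just-in-time preservation” is, roughly, a handful of terms that characterize a page well enough to re-find it with a search engine. The Python sketch below scores terms with a toy TF-IDF over a tiny corpus; the real projects use web-scale document frequencies, and all names here are illustrative:

```python
# Toy lexical-signature extraction via TF-IDF (illustrative only).
import math
from collections import Counter

def lexical_signature(page_tokens, corpus, k=5):
    """Return the k highest TF-IDF terms of `page_tokens` against `corpus`.

    `corpus` is a list of token lists (other pages) used for document
    frequencies; ties break alphabetically for determinism.
    """
    tf = Counter(page_tokens)
    n_docs = len(corpus) + 1
    def idf(term):
        # Add-one smoothing so unseen terms keep a finite, high weight.
        df = 1 + sum(term in doc for doc in corpus)
        return math.log(n_docs / df)
    scored = sorted(tf, key=lambda t: (-tf[t] * idf(t), t))
    return scored[:k]
```

Common terms like stopwords score near zero, so the signature favors the rare, page-specific vocabulary a search engine can use to locate the same or similar pages elsewhere on the web.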
20.
Marco Giorgio Bevilacqua 《Nexus Network Journal》2007,9(2):249-262
The discovery of gunpowder and its military applications caused a revolution in the common systems of defence, which had not
changed substantially since the Roman period. The new methods of laying out urban defences in the second half of the sixteenth
century were the product of a continuous response to the evolution of firearms and their increasing power. The goal of this
article is to explain these assertions, analysing in detail the factors that characterized the “science of fortification”
in the sixteenth century.