Found 20 similar documents (search time: 31 ms)
1.
Secure buffering in firm real-time database systems (cited 2 times: 0 self-citations, 2 by others)
Binto George, Jayant R. Haritsa. The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 8(3-4): 178-198
Many real-time database applications arise in electronic financial services, safety-critical installations and military systems
where enforcing security is crucial to the success of the enterprise. We investigate here the performance implications, in terms of killed transactions,
of guaranteeing multi-level secrecy in a real-time database system supporting applications with firm deadlines. In particular, we focus on the buffer management aspects of this issue.
Our main contributions are the following. First, we identify the importance and difficulties of providing secure buffer management
in the real-time database environment. Second, we present SABRE, a novel buffer management algorithm that provides covert-channel-free security. SABRE employs a fully dynamic one-copy allocation policy for efficient usage of buffer resources. It also incorporates
several optimizations for reducing the overall number of killed transactions and for decreasing the unfairness in the distribution
of killed transactions across security levels. Third, using a detailed simulation model, the real-time performance of SABRE
is evaluated against insecure conventional and real-time buffer management policies for a variety of security-classified transaction
workloads and system configurations. Our experiments show that SABRE provides security with only a modest drop in real-time
performance. Finally, we evaluate SABRE's performance when augmented with the GUARD adaptive admission control policy. Our
experiments show that this combination provides close to ideal fairness for real-time applications that can tolerate covert-channel
bandwidths of up to one bit per second (a limit specified in military standards).
Received March 1, 1999 / Accepted October 1, 1999
2.
E. Panagos, A. Biliris. The VLDB Journal: The International Journal on Very Large Data Bases, 1997, 6(3): 209-223
Client-server object-oriented database management systems differ significantly from traditional centralized systems in terms
of their architecture and the applications they target. In this paper, we present the client-server architecture of the EOS
storage manager and we describe the concurrency control and recovery mechanisms it employs. EOS offers a semi-optimistic locking
scheme based on the multi-granularity two-version two-phase locking protocol. Under this scheme, multiple concurrent readers
are allowed to access a data item while it is being updated by a single writer. Recovery is based on write-ahead redo-only
logging. Log records are generated at the clients and they are shipped to the server during normal execution and at transaction
commit. Transaction rollback is fast because there are no updates that have to be undone, and recovery from system crashes
requires only one scan of the log for installing the changes made by transactions that committed before the crash. We also
present a preliminary performance evaluation of the implementation of the above mechanisms.
Edited by R. King. Received July 1993 / Accepted May 1996
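The two-version two-phase locking idea above — readers see the committed version while a single writer updates a shadow copy — can be illustrated with a toy sketch. This is not the EOS implementation; the class and method names are hypothetical, and real lock managers also handle granularity and blocking.

```python
class TwoVersionItem:
    """A data item under two-version locking: concurrent readers see the
    committed version while a single writer updates a shadow copy."""

    def __init__(self, value):
        self.committed = value   # version visible to concurrent readers
        self.shadow = None       # uncommitted version held by the writer
        self.writer = None       # transaction holding the write lock

    def read(self, txn):
        # The writer reads its own uncommitted version; everyone else
        # reads the committed one, so readers never block on the writer.
        return self.shadow if txn == self.writer else self.committed

    def write(self, txn, value):
        # Only one writer at a time; a second writer would have to wait.
        if self.writer is not None and self.writer != txn:
            raise RuntimeError("write lock held by another transaction")
        self.writer = txn
        self.shadow = value

    def commit(self, txn):
        # Installing the shadow copy makes the new version visible;
        # rollback would simply discard the shadow (nothing to undo).
        if txn == self.writer:
            self.committed = self.shadow
            self.shadow = None
            self.writer = None
```

Note that aborting a transaction only discards the shadow copy, which matches the abstract's point that rollback is fast because no updates need to be undone.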
3.
Computing systems are essential resources for both the business and public sectors. With the increasing interdependence of
integrated electronic commerce and business applications within the global computing environment, performance and reliability
are of great concern. Poor performance can mean lost cooperation, opportunity, and revenue. This paper describes performance
challenges that these applications face over the short and long term. We present an analytic technique that can predict the
performance of an e-commerce application over a given deployment period. This technique can be used to deduce performance
stress testing vectors over this period and for design and capacity planning exercises. A Web-based shopping server case study
is used as an example.
Published online: 22 August 2001
4.
Periodic broadcast and scheduled multicast have been shown to be very effective in reducing the demand on server bandwidth.
While periodic broadcast is better for popular videos, scheduled multicast is more suitable for less popular ones. Work has
also been done to show that a hybrid of these techniques offers the best performance. Existing hybrid schemes, however, assume
that the characteristic of the workload does not change with time. This assumption is not true for many applications, such
as movie on demand, digital video libraries, or electronic commerce. In this paper, we show that existing scheduled multicast
techniques are not suited for hybrid designs. To address this issue, we propose a new approach and use it to design an adaptive
hybrid strategy. Our technique adjusts itself to cope with a changing workload. We provide simulation results to demonstrate
that the proposed technique is significantly better than the best static approach in terms of service latency, throughput,
defection rate, and unfairness.
5.
Kelvin K.W. Law, John C.S. Lui, Leana Golubchik. The VLDB Journal: The International Journal on Very Large Data Bases, 1999, 8(2): 133-153
Advances in high-speed networks and multimedia technologies have made it feasible to provide video-on-demand (VOD) services
to users. However, it is still a challenging task to design a cost-effective VOD system that can support a large number of
clients (who may have different quality of service (QoS) requirements) and, at the same time, provide different types of VCR
functionalities. Although it has been recognized that VCR operations are important functionalities in providing VOD service,
techniques proposed in the past for providing VCR operations may require additional system resources, such as extra disk I/O,
additional buffer space, as well as network bandwidth. In this paper, we consider the design of a VOD storage server that
has the following features: (1) provision of different levels of display resolutions to users who have different QoS requirements,
(2) provision of different types of VCR functionalities, such as fast forward and rewind, without imposing additional demand
on the system buffer space, I/O bandwidth, and network bandwidth, and (3) guarantees of the load-balancing property across
all disks during normal and VCR display periods. The above-mentioned features are especially important because they simplify
the design of the buffer space, I/O, and network resource allocation policies of the VOD storage system. The load-balancing
property also ensures that no single disk will be the bottleneck of the system. In this paper, we propose data block placement,
admission control, and I/O-scheduling algorithms, as well as determine the corresponding buffer space requirements of the
proposed VOD storage system. We show that the proposed VOD system can provide VCR and multi-resolution services to the viewing
clients and at the same time maintain the load-balancing property.
Received June 9, 1998 / Accepted April 26, 1999
6.
Multimedia systems must be able to support a certain quality of service (QoS) to satisfy the stringent real-time performance
requirements of their applications. HeiRAT, the Heidelberg Resource Administration Technique, is a comprehensive QoS management
system that was designed and implemented in connection with a distributed multimedia platform for networked PCs and workstations.
HeiRAT includes techniques for QoS negotiation, QoS calculation, resource reservation, and resource scheduling for local and
network resources.
7.
Integration – supporting multiple application classes with heterogeneous performance requirements – is an emerging trend
in networks, file systems, and operating systems. We evaluate two architectural alternatives – partitioned and integrated
– for designing next-generation file systems. Whereas a partitioned server employs a separate file system for each application
class, an integrated file server multiplexes its resources among all application classes; we evaluate the performance of the
two architectures with respect to sharing of disk bandwidth among the application classes. We show that although the problem
of sharing disk bandwidth in integrated file systems is conceptually similar to that of sharing network link bandwidth in
integrated services networks, the arguments that demonstrate the superiority of integrated services networks over separate
networks are not applicable to file systems. Furthermore, we show that: an integrated server outperforms the partitioned server
in a large operating region and has slightly worse performance in the remaining region; the capacity of an integrated server
is larger than that of the partitioned server; and an integrated server outperforms the partitioned server by a factor of
up to 6 in the presence of bursty workloads.
8.
A large-scale, distributed video-on-demand (VOD) system allows geographically dispersed residential and business users to
access video services, such as movies and other multimedia programs or documents on demand from video servers on a high-speed
network. In this paper, we first demonstrate through analysis and simulation the need for a hierarchical architecture for
the VOD distribution network. We then assume a hierarchical architecture, which fits the existing tree topology used in today's
cable TV (CATV) hybrid fiber/coaxial (HFC) distribution networks. We develop a model for the video program placement, configuration,
and performance evaluation of such systems. Our approach takes into account the user behavior, the fact that the user requests
are transmitted over a shared channel before reaching the video server containing the requested program, the fact that the
input/output (I/O) capacity of the video servers is the costliest resource, and finally the communication cost. In addition,
our model employs batching of user requests at the video servers. We study the effect of batching on the performance of the
video servers and on the quality of service (QoS) delivered to the user, and we contribute dynamic batching policies that improve server utilization and user QoS while lowering server cost. The evaluation is based on an extensive analytical and simulation study.
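The batching idea — requests for the same video arriving within a short window share a single server stream — can be sketched as follows. This is a deliberately simplified illustration, not the paper's dynamic policies; the function name and tuple layout are assumptions.

```python
def batch_requests(requests, window):
    """Group (arrival_time, video_id) requests so that all requests for the
    same video arriving within `window` of the batch's first request are
    served by one stream. Returns a list of (video_id, arrival_times)."""
    batches = []
    open_batches = {}  # video_id -> index of its most recent batch
    for t, video in sorted(requests):
        idx = open_batches.get(video)
        if idx is not None and t - batches[idx][1][0] <= window:
            batches[idx][1].append(t)     # join the existing stream
        else:
            batches.append((video, [t]))  # start a new stream
            open_batches[video] = len(batches) - 1
    return batches
```

With four requests and a 5-unit window, two requests for the same popular video collapse into one stream, so the server opens three streams instead of four — the utilization gain the abstract describes.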
9.
Shared memory provides a convenient programming model for parallel applications. However, such a model is provided on physically
distributed memory systems at the expense of efficiency of execution of the applications. For this reason, applications can
give minimum consistency requirements on the memory system, thus allowing alternatives to the shared memory model to be used
which exploit the underlying machine more efficiently. To be effective, these requirements need to be specified in a precise
way and to be amenable to formal analysis. Most approaches to formally specifying consistency conditions on memory systems
have been from the viewpoint of the machine rather than from the application domain.
In this paper we show how requirements on memory systems can be given formally, from the viewpoint of the application domain, in a first-order theory MemReq, to improve the requirements engineering process for such systems. We show the general use of MemReq in expressing major classes of requirements for memory systems and conduct a case study of its use in a real-life parallel system out of which the formalism arose.
10.
Wee Teck Ng, Peter M. Chen. The VLDB Journal: The International Journal on Very Large Data Bases, 1998, 7(3): 194-204
Recent results in the Rio project at the University of Michigan show that it is possible to create an area of main memory
that is as safe as disk from operating system crashes. This paper explores how to integrate the reliable memory provided by
the Rio file cache into a database system. Prior studies have analyzed the performance benefits of reliable memory; we focus
instead on how different designs affect reliability. We propose three designs for integrating reliable memory into databases:
non-persistent database buffer cache, persistent database buffer cache, and persistent database buffer cache with protection.
Non-persistent buffer caches use an I/O interface to reliable memory and require the fewest modifications to existing databases.
However, they waste memory capacity and bandwidth due to double buffering. Persistent buffer caches use a memory interface
to reliable memory by mapping it into the database address space. This places reliable memory under complete database control
and eliminates double buffering, but it may expose the buffer cache to database errors. Our third design reduces this exposure
by write protecting the buffer pages. Extensive fault tests show that mapping reliable memory into the database address space
does not significantly hurt reliability. This is because wild stores rarely touch dirty, committed pages written by previous
transactions. As a result, we believe that databases should use a memory interface to reliable memory.
Received January 1, 1998 / Accepted June 20, 1998
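The third design — write-protecting buffer pages so wild stores are caught rather than silently corrupting committed data — can be shown schematically. Real systems use hardware page protection (e.g. mprotect); the sketch below only simulates the protect/unprotect discipline, and every name in it is hypothetical.

```python
class ProtectedBufferCache:
    """Buffer pages are 'write-protected' by default; the buffer manager
    must explicitly unprotect a page before a legitimate update, so a
    wild store to a protected page is detected instead of corrupting it."""

    def __init__(self):
        self.pages = {}        # page_id -> page contents
        self.writable = set()  # pages currently unprotected

    def load(self, page_id, data):
        self.pages[page_id] = data  # newly loaded pages start protected

    def store(self, page_id, data):
        # A stray write through this path without unprotecting first
        # models the 'wild store' the fault tests inject.
        if page_id not in self.writable:
            raise MemoryError(f"wild store to protected page {page_id}")
        self.pages[page_id] = data

    def guarded_update(self, page_id, data):
        # Legitimate path: unprotect, write, re-protect.
        self.writable.add(page_id)
        try:
            self.store(page_id, data)
        finally:
            self.writable.discard(page_id)
```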
11.
Stefan Deßloch, Theo Härder, Nelson Mattos, Bernhard Mitschang, Joachim Thomas. The VLDB Journal: The International Journal on Very Large Data Bases, 1998, 7(2): 79-95
The increasing power of modern computers is steadily opening up new application domains for advanced data processing such
as engineering and knowledge-based applications. To meet their requirements, concepts for advanced data management have been
investigated during the last decade, especially in the field of object orientation. Over the last couple of years, the database
group at the University of Kaiserslautern has been developing such an advanced database system, the KRISYS prototype. In this
article, we report on the results and experiences obtained in the course of this project. The primary objective for the first
version of KRISYS was to provide semantic features, such as an expressive data model, a set-oriented query language, deductive
as well as active capabilities. The first KRISYS prototype became completely operational in 1989. To evaluate its features
and to stabilize its functionality, we started to develop several applications with the system. These experiences marked the
starting point for an overall redesign of KRISYS. Major goals were to tune KRISYS and its query-processing facilities to a
suitable client/server environment, as well as to provide elaborate mechanisms for consistency control comprising semantic
integrity constraints, multi-user synchronization, and failure recovery. The essential aspects of the resulting client/server
architecture are embodied by the client-side data management needed to effectively support advanced applications and to gain
the required system performance for interactive work. The project stages of KRISYS properly reflect the essential developments
that have taken place in the research on advanced database systems over the last years. Hence, the subsequent discussions bring up a number of aspects of advanced data processing that are of general importance and applicability to database systems.
Received June 18, 1996 / Accepted November 11, 1997
12.
We present efficient schemes for scheduling the delivery of variable-bit-rate MPEG-compressed video with stringent quality-of-service
(QoS) requirements. Video scheduling is used to improve bandwidth allocation at a video server that uses statistical
multiplexing to aggregate video streams prior to transporting them over a network. A video stream is modeled using a traffic
envelope that provides a deterministic time-varying bound on the bit rate. Because of the periodicity in which frame types
in an MPEG stream are typically generated, a simple traffic envelope can be constructed using only five parameters. Using
the traffic-envelope model, we show that video sources can be statistically multiplexed with an effective bandwidth that is often less than the source peak rate. Bandwidth gain is achieved without sacrificing the stringency of the requested
QoS. The effective bandwidth depends on the arrangement of the multiplexed streams, which is a measure of the lag between the GOP periods of various streams. For homogeneous streams,
we give an optimal scheduling scheme for video sources at a video-on-demand server that results in the minimum effective bandwidth.
For heterogeneous sources, a sub-optimal scheduling scheme is given, which achieves acceptable bandwidth gain. Numerical examples
based on traces of MPEG-coded movies are used to demonstrate the effectiveness of our schemes.
13.
Excessive buffer requirements for handling continuous-media playbacks are an impediment to cost-effective provisioning for on-line
video retrieval. Given the skewed distribution of video popularity, it is expected that often there are concurrent playbacks
of the same video file within a short time interval. This creates an opportunity to batch multiple requests and to service
them with a single stream from the disk without violating the on-demand constraint. However, there is a need to keep data
in memory between successive uses to do this. This leads to a buffer space trade-off between servicing a request in memory mode and servicing it in disk mode. In this work, we develop a novel algorithm to minimize the buffer requirement to support a set of concurrent playbacks.
A notable strength of the proposed scheme is that it enables the server to dynamically adapt to the changing workload while
minimizing the total buffer space requirement. Our algorithm makes a significant contribution in decreasing the total buffer
requirement, especially when the user access pattern is biased in favor of a small set of files. The idea of the proposed
scheme is modeled in detail using an analytical formulation, and optimality of the algorithm is proved. An analytical framework
is developed so that the proposed scheme can be used in combination with various existing disk-scheduling strategies. Our
simulation results confirm that under certain circumstances, it is much more resource efficient to support some of the playbacks
in memory mode, and thus the proposed scheme enables the server to minimize the overall buffer space requirement.
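The memory-mode vs disk-mode trade-off can be made concrete with a deliberately simplified cost comparison. The paper's algorithm performs an optimal global minimization; the per-gap rule and every name below are only illustrative assumptions.

```python
def memory_mode_cost(gap_seconds, bit_rate):
    """Buffer needed to serve a later playback of the same video from
    memory: data must stay cached for the whole inter-arrival gap."""
    return gap_seconds * bit_rate

def choose_modes(gaps, bit_rate, disk_stream_buffer):
    """For each inter-arrival gap between successive playbacks, pick
    memory mode when bridging the gap needs less buffer than a
    dedicated disk stream would."""
    return [
        "memory" if memory_mode_cost(g, bit_rate) < disk_stream_buffer
        else "disk"
        for g in gaps
    ]
```

Closely spaced playbacks of a popular file favor memory mode, while a long gap makes a fresh disk stream cheaper — matching the abstract's observation that skewed access patterns create the biggest savings.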
14.
Some studies of diaries and scheduling systems have considered how individuals use diaries with a view to proposing requirements
for computerised time management tools. Others have focused on the criteria for success of group scheduling systems. Few have
paid attention to how people use a battery of tools as an ensemble. This interview study reports how users exploit paper,
personal digital assistants (PDAs) and a group scheduling system for their time management. As with earlier studies, we find
many shortcomings of different technologies, but studying the ensemble rather than individual tools points towards a different
conclusion: rather than aiming towards producing electronic time management tools that replace existing paper-based tools,
we should be aiming to understand the relative strengths and weaknesses of each technology and look towards more seamless
integration between tools. In particular, the requirements for scheduling and those for more responsive, fluid time management
conflict in ways that demand different kinds of support.
15.
High-speed networks and powerful end-systems enable new types of applications, such as video-on-demand and teleconferencing.
Such applications are very demanding on quality of service (QoS) because of the isochronous nature of the media they are using.
To support these applications, QoS guarantees are required. However, even with service guarantees, violations may occur because
of resource shortages, e.g., network congestion. In this paper we propose new adaptation approaches, which allow the system
to recover automatically, if possible, from QoS violations (1) by identifying a new configuration of system components that might support the initially
agreed QoS and by performing a user-transparent transition from the original configuration to the new one, (2) by redistributing
the levels of QoS that should be supported, in the future, by the components, or (3) by redistributing the levels of QoS that
should be supported immediately to meet end-to-end requirements based on the principle that (local) QoS violation at one component
may be recovered immediately by the other components participating in the support of the requested service. The proposed approaches,
together with suitable negotiation mechanisms, allow us (1) to reduce the probability of QoS violations that may be noticed by the user, and thus to increase user confidence in the service provider, and (2) to improve the utilization of system resources, and thus to increase system availability.
16.
Praveen Seshadri. The VLDB Journal: The International Journal on Very Large Data Bases, 1998, 7(3): 130-140
The explosion in complex multimedia content makes it crucial for database systems to support such data efficiently. This
paper argues that the “blackbox” ADTs used in current object-relational systems inhibit their performance, thereby limiting
their use in emerging applications. Instead, the next generation of object-relational database systems should be based on
enhanced abstract data type (E-ADT) technology. An E-ADT can expose the semantics of its methods to the database system, thereby permitting advanced query optimizations. Fundamental architectural changes
are required to build a database system with E-ADTs; the added functionality should not compromise the modularity of data
types and the extensibility of the type system. The implementation issues have been explored through the development of E-ADTs
in Predator. Initial performance results demonstrate an order-of-magnitude performance improvement.
Received January 1, 1998 / Accepted May 27, 1998
17.
Stefan Manegold, Peter A. Boncz, Martin L. Kersten. The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 9(3): 231-246
In the past decade, advances in the speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access
is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article,
we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines
for database architecture, in terms of both data structures and algorithms. We discuss how vertically fragmented data structures
optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and
introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical
model that incorporates memory access cost. Experiments that validate this model were performed on the Monet database system.
We obtained exact statistics on events such as TLB misses and L1 and L2 cache misses by using hardware performance counters
found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms
makes them perform well, which is confirmed by experimental results.
Received April 20, 2000 / Accepted June 23, 2000
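The partitioned hash-join idea can be sketched in a few lines: radix-partition both inputs on the low bits of the key so each partition's hash table fits in cache, then join matching partitions independently. This is a single-pass sketch under assumed names; Monet's radix join uses multi-pass partitioning tuned to TLB and cache parameters.

```python
def radix_partition(pairs, bits):
    """Partition (key, payload) tuples on the low `bits` bits of the key,
    so each partition is small enough to be joined cache-resident."""
    fanout = 1 << bits
    parts = [[] for _ in range(fanout)]
    for key, payload in pairs:
        parts[key & (fanout - 1)].append((key, payload))
    return parts

def radix_hash_join(left, right, bits=4):
    """Join two (key, payload) relations: radix-partition both inputs the
    same way, then hash-join corresponding partitions independently."""
    out = []
    for lp, rp in zip(radix_partition(left, bits),
                      radix_partition(right, bits)):
        table = {}
        for key, payload in lp:            # build a small hash table
            table.setdefault(key, []).append(payload)
        for key, payload in rp:            # probe within the partition
            for lpay in table.get(key, []):
                out.append((key, lpay, payload))
    return out
```

Because probes only ever touch one small partition's table, the random accesses of a conventional hash join become cache-friendly — the memory-access saving the article's cost model quantifies.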
18.
Yuh-Jzer Joung. Distributed Computing, 2002, 15(3): 155-175
Group mutual exclusion occurs naturally in situations where a resource can be shared by processes of the same group, but
not by processes of different groups. For example, suppose data is stored in a CD-jukebox. Then when a disc is loaded for
access, users that need data on the disc can concurrently access the disc, while users that need data on a different disc
have to wait until the current disc is unloaded.
The design issues for group mutual exclusion have been modeled as the Congenial Talking Philosophers problem, and solutions for shared-memory models have been proposed [12,14]. As in ordinary mutual exclusion and many other
problems in distributed systems, however, techniques developed for shared memory do not necessarily apply to message passing
(and vice versa). So in this paper we investigate solutions for Congenial Talking Philosophers in computer networks where
processes communicate by asynchronous message passing. We first present a solution that is a straightforward adaptation from
Ricart and Agrawala's algorithm for ordinary mutual exclusion. Then we show that the simple modification suffers a severe
performance degradation that could cause the system to behave as though only one process of a group can be in the critical
section at a time. We then present a more efficient and highly concurrent distributed algorithm for the problem, the first
such solution in computer networks.
Received: August 2000 / Accepted: November 2001
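The safety property of group mutual exclusion — same-group processes may share the critical section, different-group processes must wait — can be captured in a toy, non-distributed sketch. It illustrates only the CD-jukebox invariant, not the paper's message-passing algorithm or its fairness and concurrency guarantees; all names are hypothetical.

```python
class GroupMutex:
    """Group mutual exclusion: any number of processes of the same group
    (e.g. users of the currently loaded disc) may be in the critical
    section together; a process of a different group must wait."""

    def __init__(self):
        self.current_group = None  # the 'loaded disc'
        self.inside = 0            # processes now in the critical section

    def try_enter(self, group):
        # Admit if the resource is free or already serving this group.
        if self.current_group in (None, group):
            self.current_group = group
            self.inside += 1
            return True
        return False  # a different disc is loaded: caller must wait

    def leave(self):
        self.inside -= 1
        if self.inside == 0:
            self.current_group = None  # unload the disc
```

The degenerate behavior the paper warns about corresponds to never letting `inside` exceed 1 for a group; a good algorithm maximizes same-group concurrency while preserving exactly this invariant.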
19.
Cynthia E. Irvine, Timothy Levin, Jeffery D. Wilson, David Shifflett, Barbara Pereira. Requirements Engineering, 2002, 7(4): 192-206
Requirements specifications for high-assurance secure systems are rare in the open literature. This paper examines the development
of a requirements document for a multilevel secure system that must meet stringent assurance and evaluation requirements.
The system is designed to be secure, yet combines popular commercial components with specialised high-assurance ones. Functional
and non-functional requirements pertinent to security are discussed. A multidimensional threat model is presented. The threat
model accounts for the developmental and operational phases of system evolution and for each phase accounts for both physical
and non-physical threats. We describe our team-based method for developing a requirements document and relate that process
to techniques in requirements engineering. The system requirements document presented provides a calibration point for future
security requirements engineering techniques intended to meet both functional and assurance goals.
* The views expressed in this paper are those of the authors and should not be construed to reflect those of their employers
or the Department of Defense. This work was supported in part by the MSHN project of the DARPA/ITO Quorum programme and by
the MYSEA project of the DARPA/ATO CHATS programme.
Correspondence and offprint requests to: T. Levin, Department of Computer Science, Naval Postgraduate School, Monterey, CA 93943-5118, USA. Tel.: +1 831 656 2339;
Fax: +1 831 656 2814; Email: levin@nps.navy.mil
20.
The GMAP: a versatile tool for physical data independence (cited 1 time: 0 self-citations, 1 by others)
Odysseas G. Tsatalos, Marvin H. Solomon, Yannis E. Ioannidis. The VLDB Journal: The International Journal on Very Large Data Bases, 1996, 5(2): 101-118
Physical data independence is touted as a central feature of modern
database systems. It allows users to frame queries in terms of the logical
structure of the data, letting a query processor automatically translate
them into optimal plans that access physical storage structures. Both
relational and object-oriented systems, however, force users to frame their
queries in terms of a logical schema that is directly tied to physical
structures. We present an approach that eliminates this dependence. All
storage structures are defined in a declarative language based on
relational algebra as functions of a logical schema. We present an
algorithm, integrated with a conventional query optimizer, that translates
queries over this logical schema into plans that access the storage
structures. We also show how to compile update requests into plans that
update all relevant storage structures consistently and optimally.
Finally, we report on experiments with a prototype implementation of our
approach that demonstrate how it allows storage structures to be tuned to
the expected or observed workload to achieve significantly better
performance than is possible with conventional techniques.
Edited by Matthias Jarke, Jorge Bocca, Carlo Zaniolo. Received September 15, 1994 / Accepted September 1, 1995
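The core of the approach — storage structures declared over a logical schema, and a translator that picks covering structures for each logical query — can be sketched in miniature. The real GMAP language is based on relational algebra and also compiles updates; the schema, structure names, and cost rule below are purely illustrative assumptions.

```python
# Logical schema: one relation with these attributes.
LOGICAL = {"id", "name", "dept", "salary"}

# Storage structures ('GMAPs'), each declared as a projection of the
# logical schema; queries are framed against LOGICAL, never against these.
STRUCTURES = {
    "by_dept":   {"id", "dept"},
    "name_file": {"id", "name"},
    "full":      {"id", "name", "dept", "salary"},
}

def plan(query_attrs):
    """Translate a logical query into an access plan: pick the cheapest
    (here: narrowest) storage structure covering the queried attributes."""
    assert query_attrs <= LOGICAL, "query must use the logical schema"
    candidates = [
        (len(attrs), name)
        for name, attrs in STRUCTURES.items()
        if query_attrs <= attrs
    ]
    return min(candidates)[1]
```

Tuning for a workload then amounts to redefining `STRUCTURES` without touching any query — the physical data independence the paper's experiments exploit.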