Coupling represents the degree of interdependence between two software components. Understanding software dependencies is directly related to improving software understandability, maintainability, and reusability. In this paper, we analyze the difference between component coupling and component dependency, and introduce a two-parameter component coupling metric and a three-parameter component dependency metric. An important parameter in both metrics is coupling distance, which represents the relevance of two coupled components. These metrics are applicable to layered component-based software and can represent the dependencies induced by all types of software coupling. We show how to determine the coupling and dependency of software components at all scales using these metrics, and then apply them to Apache HTTP Server, an open-source web server. The study shows that coupling distance is related to the number of modifications of a component, which is an important indicator of component fault rate, stability, and, subsequently, component complexity.
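The abstract does not reproduce the metric definitions themselves, but the idea of a coupling distance over a layered architecture can be illustrated with a minimal sketch. Everything below is an assumption for demonstration only: the component names, their layer assignments, the distance definition (absolute difference of layer indices), and the attenuation formula are hypothetical, not the paper's actual metrics.

```python
# Hypothetical illustration of coupling distance in a layered system.
# The layer map, distance definition, and strength formula are assumed
# for demonstration; they are not the metrics defined in the paper.

# Map each (hypothetical) component to the architectural layer it occupies.
LAYERS = {
    "http_request": 1,   # protocol layer
    "mod_rewrite": 2,    # module layer
    "apr_pools": 3,      # portable-runtime layer
}

def coupling_distance(comp_a: str, comp_b: str) -> int:
    """Distance between two coupled components, taken here as the
    absolute difference of their layer indices (an assumed definition)."""
    return abs(LAYERS[comp_a] - LAYERS[comp_b])

def coupling_strength(comp_a: str, comp_b: str, type_weight: float) -> float:
    """An illustrative two-parameter coupling value: the weight of the
    coupling type attenuated by coupling distance."""
    return type_weight / (1 + coupling_distance(comp_a, comp_b))
```

Under this sketch, components in adjacent layers couple more strongly than components separated by several layers, which mirrors the abstract's claim that coupling distance captures the relevance of two coupled components.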
Liguo Yu
received his Ph.D. degree in Computer Science from Vanderbilt University. He is an assistant professor in the Department of Computer and Information Sciences at Indiana University South Bend. Before joining IUSB, he was a visiting assistant professor at Tennessee Technological University. His research concentrates on software coupling, software maintenance, software reuse, software testing, software management, and open-source software development.
Kai Chen
received his Ph.D. degree from the Department of Electrical Engineering and Computer Science at Vanderbilt University. He works at Google Inc. His current research interests include the development and maintenance of open-source software, embedded software design, component-based design, model-based design, formal methods, and model verification.
Srini Ramaswamy
earned his Ph.D. degree in Computer Science in 1994 from the Center for Advanced Computer Studies (CACS) at the University of Southwestern Louisiana (now the University of Louisiana at Lafayette). His research interests include intelligent and flexible control systems; behavior modeling, analysis, and simulation; and software stability and scalability. He is currently the Chairperson of the Department of Computer Science at the University of Arkansas at Little Rock. Before joining UALR, he was the chairman of the Computer Science Department at Tennessee Tech University. He is a member of the Association for Computing Machinery, the Society for Computer Simulation International, and Computer Professionals for Social Responsibility, and a senior member of the IEEE.