Found 20 similar documents; search took 15 milliseconds.
1.
Michael L. Nelson Frank McCown Joan A. Smith Martin Klein 《International Journal on Digital Libraries》2007,6(4):327-349
To date, most of the focus regarding digital preservation has been on replicating copies of the resources to be preserved
from the “living web” and placing them in an archive for controlled curation. Once inside an archive, the resources are subject
to careful processes of refreshing (making additional copies to new media) and migrating (conversion to new formats and applications).
For small numbers of resources of known value, this is a practical and worthwhile approach to digital preservation. However,
due to the infrastructure costs (storage, networks, machines) and more importantly the human management costs, this approach
is unsuitable for web scale preservation. The result is that difficult decisions need to be made as to what is saved and what
is not saved. We provide an overview of our ongoing research projects that focus on using the “web infrastructure” to provide
preservation capabilities for web pages and examine the overlap these approaches have with the field of information retrieval.
The common characteristic of the projects is they creatively employ the web infrastructure to provide shallow but broad preservation
capability for all web pages. These approaches are not intended to replace conventional archiving approaches, but rather they
focus on providing at least some form of archival capability for the mass of web pages that may prove to have value in the
future. We characterize the preservation approaches by the level of effort required by the web administrator: web sites are
reconstructed from the caches of search engines (“lazy preservation”); lexical signatures are used to find the same or similar
pages elsewhere on the web (“just-in-time preservation”); resources are pushed to other sites using NNTP newsgroups and SMTP
email attachments (“shared infrastructure preservation”); and an Apache module is used to provide OAI-PMH access to MPEG-21
DIDL representations of web pages (“web server enhanced preservation”).
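As a rough illustration of the “just-in-time preservation” idea, a lexical signature is a small set of terms that best distinguishes a page; the terms can then be fed to a search engine to relocate the same or similar pages. The Python sketch below is illustrative only: the TF-IDF weighting, the tokenizer, and the background corpus are assumptions, not details from the paper.

```python
import math
import re
from collections import Counter

def lexical_signature(page_text, corpus, k=5):
    """Rank the page's terms by TF-IDF against a background corpus
    and return the top-k terms as the page's lexical signature."""
    tokenize = lambda t: re.findall(r"[a-z]+", t.lower())
    docs = [set(tokenize(d)) for d in corpus]
    n = len(docs)
    tf = Counter(tokenize(page_text))
    def tfidf(term):
        # Smoothed inverse document frequency over the background corpus.
        df = sum(1 for d in docs if term in d)
        return tf[term] * math.log((n + 1) / (df + 1))
    return sorted(tf, key=tfidf, reverse=True)[:k]
```

Terms that are frequent in the page but rare in the corpus rank highest, which is what makes the signature useful as a search-engine query.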
2.
Requirement emergence computation of networked software
He Keqing Liang Peng Peng Rong Li Bing Liu Jing 《Frontiers of Computer Science in China》2007,1(3):322-328
Emergence computation has become a hot topic in complex-systems research in recent years. With the substantial increase in the scale and complexity of network-based information systems, uncertain user requirements from the Internet and personalized application requirements lead to frequent changes in software requirements. Meanwhile, software systems relying on resources they do not themselves own are becoming more and more complex. Furthermore, the interaction and cooperation requirements between software units and the running environment in service computing increase the complexity of software systems. Software systems with complex-system characteristics are developing into “networked software,” characterized by change-on-demand and change-with-cooperation. The familiar notions of “programming,” “compiling,” and “running” software are being extended from the “desktop” to the “network.” The core issue of software engineering is shifting to requirements engineering, which is becoming the research focus of complex-system software engineering.
In this paper, we present a software network view based on complex-system theory, along with the concepts of networked software and networked requirements. We pose the challenge problems in research on emergence computation of networked software requirements. A hierarchical, cooperative unified requirement modeling framework, URF (Unified Requirement Framework), and the related RGPS (Role, Goal, Process and Service) meta-models are proposed. Five scales and the evolutionary growth mechanism in requirement emergence computation of networked software are given, with a focus on user-dominant and domain-oriented requirements, and the rules and predictability in requirement emergence computation are analyzed. A case study of a networked e-Business application with evolutionary growth based on the State design pattern is presented at the end.
3.
Reda Bendraou Philippe Desfray Marie-Pierre Gervais Alexis Muller 《Software and Systems Modeling》2008,7(3):329-343
As Model Driven Development (MDD) and Product Line Engineering (PLE) emerge as major trends for reducing software development complexity and costs, an important missing piece becomes more visible: there are no standard, reusable assets for packaging the know-how and artifacts required when applying these approaches. To overcome this limitation, we introduce in this paper the notion of an MDA Tool Component, i.e., a packaging unit for encapsulating business know-how and required resources in order to support specific modeling activities on a certain kind of model. The aim of this work is to provide a standard way of representing this know-how packaging unit. This is done by introducing a two-layer MOF-compliant metamodel. While the first layer focuses on the definition of the structure and contents of the MDA Tool Component, the second layer introduces a language-independent way of describing its behavior. An OMG RFP (Request For Proposal) has been issued in order to standardize this approach.
This work is supported in part by the IST European project “MODELWARE” (contract no. 511731) and extends the work presented in the paper entitled “MDA Components: A Flexible Way for Implementing the MDA Approach,” published in the proceedings of the ECMDA-FA’05 conference.
4.
5.
Conclusion The interest in open systems in the West is increasing year after year. CSCW is thus poised to become the most popular scientific-technical direction of the first years of the next century, especially as a result of the increased activity of American providers and the U.S. government program for the creation of an Internet-based “information highway of the century” [14]. For instance, as part of the program for large-scale adoption of the Internet, the U.S. National Science Foundation (NSF) allocates funding only to organizations that make their projects available on the Internet [15], and the European Community has officially subjected all European research to specific networking requirements [16]. Theoretical topics of CSCW, including computer hermeneutics and the development of a “computer Esperanto,” will thus be strongly influenced by information science and the information infrastructure, which are changing the industry of information production and information use in “first world” countries.
We hope that we have managed to demonstrate that the drama in computer science is an apparent phenomenon, attributable in our domestic conditions to the absence of direct and effective links with the “open world.” Our view can be supported by the thought expressed by A. Toffler: “…some scientists paint a picture of the world of science as activated by its own inner logic and evolving according to its own laws in total isolation from the surrounding world. In this connection we cannot but remark that scientific hypotheses, theories, metaphors, and models are formed under the influence of economic, cultural, and political factors acting outside the walls of laboratories” [17].
If the forms of cognitive practice change with changes in methods of production, forms of social organization, engineering
and technology, this means that we must elucidate the relationship between cognition and action, between theoretical and applied
practice, between consciousness and behavior…. Where scientific research goes into decline, we observe narrow specialization
of the scientific language and loss of its connections with the daily language.

M. Wartofsky, Models

Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 112–124, March–April, 1995.
6.
Michał Antkiewicz Thiago Tonelli Bartolomei Krzysztof Czarnecki 《Automated Software Engineering》2009,16(1):101-144
Framework-specific models represent the design of application code from the framework viewpoint by showing how framework-provided concepts are instantiated
in the code. Retrieving such models quickly and precisely is necessary for practical model-supported software engineering,
in which developers use design models for development tasks such as code understanding, verifying framework usage rules, and
round-trip engineering. Also, comparing models extracted at different times of the software lifecycle supports software evolution
tasks.
We describe an experimental study of the static analyses necessary to automatically retrieve framework-specific models from
application code. We reverse engineer a number of applications based on three open-source frameworks and evaluate the quality
of the retrieved models. The models are expressed using framework-specific modeling languages (FSMLs), each designed for an open-source framework. For reverse engineering, we use prototype implementations of the three
FSMLs.
Our results show that, for the considered frameworks and a large body of application code, rather simple code analyses are sufficient
for automatically retrieving framework-specific models with high precision and recall. Based on the initial results, we refine
the static analyses and repeat the study on a larger set of applications to provide more evidence and confirm the results.
The refined static analyses provide precision and recall of close to 100% for the analyzed applications.
This paper is an extended version of the paper “Automatic extraction of framework-specific models from framework-based application
code”, which was published in the proceedings of the Twenty-Second ACM/IEEE International Conference on Automated Software
Engineering, 2007.
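To give a flavor of the kind of “rather simple code analyses” involved, the hypothetical Python sketch below statically extracts a tiny framework-specific model: which application classes extend an assumed framework base class `Plugin`, and which framework hooks they override. The framework, class names, and hook set are illustrative assumptions, not the FSMLs or frameworks studied in the paper.

```python
import ast

# Hook methods a hypothetical plugin framework lets applications override.
FRAMEWORK_HOOKS = {"on_start", "on_stop"}

def extract_model(source, base_class="Plugin"):
    """Return {class_name: set of overridden framework hooks} for every
    class in `source` that extends the framework's base class."""
    model = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            bases = {b.id for b in node.bases if isinstance(b, ast.Name)}
            if base_class in bases:
                hooks = {f.name for f in node.body
                         if isinstance(f, ast.FunctionDef)
                         and f.name in FRAMEWORK_HOOKS}
                model[node.name] = hooks
    return model
```

Comparing two such models extracted at different times would support the evolution tasks mentioned in the abstract.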
7.
8.
W. M. P. van der Aalst V. Rubin H. M. W. Verbeek B. F. van Dongen E. Kindler C. W. Günther 《Software and Systems Modeling》2010,9(1):87-111
Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being
executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that
one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at
finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such “overfitting” by generalizing the model to allow for more behavior.
This generalization is often driven by the representation language and very crude assumptions about completeness. As a result,
parts of the model are “overfitting” (allow only for what has actually been observed) while other parts may be “underfitting”
(allow for much more behavior without strong support for it). None of the existing techniques enables the user to control
the balance between “overfitting” and “underfitting”. To address this, we propose a two-step approach. First, using a configurable
approach, a transition system is constructed. Then, using the “theory of regions”, the model is synthesized. The approach
has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches.
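The first step of the two-step approach, constructing a transition system from an event log under a configurable state abstraction, can be sketched as follows. This illustrative Python fragment uses the common “set of activities seen so far” abstraction; the paper's configurable abstractions and the region-based synthesis step are not reproduced here.

```python
def build_transition_system(log):
    """Build a transition system from an event log, where a state is the
    frozenset of activities observed so far in the trace."""
    transitions = set()
    for trace in log:
        state = frozenset()
        for activity in trace:
            nxt = state | {activity}
            transitions.add((state, activity, nxt))
            state = nxt
    return transitions
```

Coarsening or refining the state abstraction (e.g., using only the last k activities) is what lets the user steer the balance between overfitting and underfitting before regions are computed.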
9.
Nowadays, typical software and system engineering projects in various industrial sectors (automotive, telecommunication, etc.) involve hundreds of developers using quite a number of different tools. Thus, the data of a project as a whole is distributed over these tools. Therefore, it is necessary to make the relationships of different tool data repositories visible and keep them consistent with each other. This is still a nightmare due to the lack of domain-specifically adaptable tool and data integration solutions that support maintenance of traceability links, semi-automatic consistency checking, and incremental update propagation. Currently used solutions are usually hand-coded one-way transformations between pairs of tools only. In this article we propose a new rule-based approach that allows for the declarative specification of data integration rules concerning multiple data repositories. Hence, we call our approach “Multi Document Integration.” It generalizes the formalism of triple graph grammars and replaces the underlying data structure of directed graphs with the more general data structure of MOF-compliant meta models. Our integration rule specifications are translated into JMI-compliant Java code, which is used for various purposes by a tool integration framework. As a result, we give an answer to OMG’s request for proposals for a MOF-compliant “queries, views, and transformation” approach from the “model driven architecture” (MDA) field.
10.
During software development, projects often experience risky situations. If a project fails to detect such risks, it may exhibit confused behavior. In this paper, we propose a new scheme for characterizing the level of confusion exhibited by projects, based on an empirical questionnaire. First, we designed a questionnaire covering five project viewpoints: requirements, estimates, planning, team organization, and project management activities. Each viewpoint was assessed using questions that probe experience and knowledge of software risks. Second, we classified projects into “confused” and “not confused” using the resulting metrics data. Third, we analyzed the relationship between responses to the questionnaire and the degree of confusion of the projects using logistic regression analysis, constructing a model to characterize confused projects. The experimental result, using actual project data, shows that 28 of 32 projects were characterized correctly; we therefore concluded that the characterization of confused projects was successful. Furthermore, we applied the constructed model to data from other projects in order to detect risky projects, and 7 of 8 projects were classified correctly. We therefore conclude that the proposed scheme is also applicable to the detection of risky projects.
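The core statistical tool here, logistic regression mapping questionnaire metrics to a binary confused/not-confused label, can be sketched in a few lines. The following self-contained Python fragment is a toy gradient-descent fit on one made-up feature, not the authors' model or data.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights w and bias b by stochastic gradient descent
    on the logistic (cross-entropy) loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # predicted P(confused)
            g = p - yi                        # gradient of the loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Classify as 'confused' when P(confused) >= 0.5."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) >= 0.5
```

In practice each feature would be one questionnaire score, and the fitted coefficients indicate which viewpoints most strongly signal a confused project.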
11.
A software architecture is a key asset for any organization that builds complex software-intensive systems. Because of an
architecture's central role as a project blueprint, organizations should analyze the architecture before committing resources
to it. An analysis helps to ensure that sound architectural decisions are made. Over the past decade a large number of architecture
analysis methods have been created, and at least two surveys of these methods have been published. This paper examines the
criteria for analyzing architecture analysis methods, and suggests a new set of criteria that focus on the essence of what
it means to be an architecture analysis method. These criteria could be used to compare methods, to help understand the suitability
of a method, or to improve a method. We then examine two methods—the Architecture Tradeoff Analysis Method and Architecture-level
Modifiability Analysis—in light of these criteria, and provide some insight into how these methods can be improved.
Rick Kazman is a Senior Member of the Technical Staff at the Software Engineering Institute of Carnegie Mellon University and Professor
at the University of Hawaii. His primary research interests are software architecture, design and analysis tools, software
visualization, and software engineering economics. He also has interests in human-computer interaction and information retrieval.
Kazman has created several highly influential methods and tools for architecture analysis, including the SAAM and the ATAM.
He is the author of over 80 papers, and co-author of several books, including “Software Architecture in Practice”, and “Evaluating
Software Architectures: Methods and Case Studies”.
Len Bass is a Senior Member of the Technical Staff at the Software Engineering Institute (SEI). He has written two award winning books
in software architecture as well as several other books and numerous papers in a wide variety of areas of computer science
and software engineering. He is currently working on techniques for the methodical design of software architectures and to
understand how to support usability through software architecture. He has been involved in the development of numerous different
production or research software systems ranging from operating systems to database management systems to automotive systems.
Mark Klein is Senior Member of the Technical Staff of the Software Engineering Institute. He has over 20 years of experience in research
on various facets of software engineering, dependable real-time systems and numerical methods. Klein's most recent work focuses
on the analysis of software architectures, architecture tradeoff analysis, attribute-driven architectural design and scheduling
theory. Klein's work in real-time systems involved the development of rate monotonic analysis (RMA), the extension of the
theoretical basis for RMA, and its application to realistic systems. Klein's earliest work involved research in high-order
finite element methods for solving fluid flow equations arising in oil reservoir simulation. He is the co-author of two books:
“A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems” and “Evaluating
Software Architecture: Methods and Case Studies”.
Anthony J. Lattanze is an Associate Teaching Professor at the Institute for Software Research International (ISRI) at Carnegie Mellon University
(CMU) and a senior member of the technical staff at the Software Engineering Institute (SEI). Anthony teaches courses in CMUs
Masters of Software Engineering Program in Software Architecture, Real-Time/Embedded Systems, and Software Development Studio.
His primary research interest is in the area of software architectural design for embedded, software-intensive systems. Anthony
consults and teaches throughout industry in the areas of software architecture design and architecture evaluation.
Prior to Carnegie Mellon, Mr. Lattanze was the Chief of Software Engineering for the Technology Development Group at the United
States Flight Test Center at Edwards Air Force Base, CA. During his tenure at the Flight Test Center, he was involved with
a number of software and systems engineering projects as a software and systems architect, project manager, and developer.
During this time he was involved in the development, test, and evaluation of avionics systems for the B-2 Stealth Bomber,
F-117 Stealth Fighter, and F-22 Advanced Tactical Fighter among other systems.
Linda Northrop is the director of the Product Line Systems Program at the Software Engineering Institute (SEI) where she leads the SEI work
in software architecture, software product lines and predictable component engineering. Under her leadership the SEI has developed
software architecture and product line methods that are used worldwide, a series of five highly-acclaimed books, and Software
Architecture and Software Product Line Curricula. She is co-author of the book, “Software Product Lines: Practices and Patterns,”
and a primary author of the SEI Framework for Software Product Line Practice.
12.
FGSPEC is a wide-spectrum specification language intended to facilitate software specification and the expression of the transformation process from the functional specification, which describes “what to do,” to the corresponding design (operational) specification, which describes “how to do it.” The design emphasizes the coherence of multi-level specification mechanisms, and a tree-structured model is provided that unifies the wide-spectrum specification styles from “what” to “how.”
13.
Massively multi-player games hold a huge market in the digital entertainment industry. Companies invest heavily in game development, since a successful online game can attract millions of users, which translates into a huge payoff. However, multi-player online games are also subject to various forms of “hacks” and “cheats.” Hackers can alter the graphics rendering to reveal information that would otherwise be hidden in a normal game, and cheaters can use software robots to play the game automatically and thus gain an unfair advantage. To counter these problems, some popular online games constantly release software patches to block “known” hacks or incorporate anti-cheating software to detect “known” cheats. This not only creates deployment difficulties, but new cheats can still breach the normal game logic until patches or updates of the anti-cheating software become available. Moreover, the anti-cheating software itself is also vulnerable to hacks. In this paper, we propose a “scalable” and “efficient” method to detect whether a player is cheating. The methodology is based on the dynamic Bayesian network approach. The detection framework relies solely on the game states and runs only on the game server; it is therefore invulnerable to hacks and a much more deployable solution. To demonstrate the effectiveness of the proposed method, we have implemented a prototype multi-player game system that detects whether a player is using an “aiming robot” to cheat. Experiments show that the proposed method can effectively detect cheaters in a first-person shooter game with an extremely low false positive rate. We believe the proposed methodology and prototype system provide a first step toward a systematic study of cheating detection and security research in the area of online multi-player games.
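While the paper uses a full dynamic Bayesian network over game states, the underlying mechanics can be illustrated with a minimal two-state version: a hidden cheater/honest state that persists over time and emits noisy observations, such as whether each shot hits. All probabilities below are made-up assumptions for illustration, not values from the paper.

```python
def update_cheat_belief(belief, hit, p_hit=(0.3, 0.9), persistence=0.99):
    """One forward step of a two-state dynamic Bayesian model.
    belief: P(cheater) before this observation; hit: did the shot land;
    p_hit: (P(hit | honest), P(hit | cheater))."""
    # Transition: a player's type is sticky from one time slice to the next.
    prior = belief * persistence + (1 - belief) * (1 - persistence)
    # Observation likelihoods under each hidden state.
    like_honest = p_hit[0] if hit else 1 - p_hit[0]
    like_cheater = p_hit[1] if hit else 1 - p_hit[1]
    # Bayes update.
    return (prior * like_cheater /
            (prior * like_cheater + (1 - prior) * like_honest))
```

A server-side monitor would run one such update per observed shot and flag the player once the belief crosses a threshold, which mirrors the deployability argument in the abstract: nothing runs on the client.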
14.
An elementary and unified approach to program correctness
Jaime A. Bohórquez V 《Formal Aspects of Computing》2010,22(5):611-627
We present, through the algorithmic language DHL (Dijkstra–Hehner language), a practical approach to a simple first-order theory based on calculational logic, unifying Hoare and Dijkstra’s iterative style of programming with Hehner’s recursive predicative programming theory, getting the “best of the two worlds” without having to resort in any way to higher-order approaches such as predicate transformers, Hoare logic, fixed-point theory, or relational theory.
15.
Mining Non-Redundant Association Rules
Mohammed J. Zaki 《Data mining and knowledge discovery》2004,9(3):223-248
The traditional association rule mining framework produces many redundant rules. The extent of redundancy is a lot larger
than previously suspected. We present a new framework for associations based on the concept of closed frequent itemsets. The number of non-redundant rules produced by the new approach is exponentially (in the length of the
longest frequent itemset) smaller than the rule set from the traditional approach. Experiments using several “hard” as well
as “easy” real and synthetic databases confirm the utility of our framework in terms of reduction in the number of rules presented
to the user, and in terms of time.
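The key notion, a closed frequent itemset (one whose support strictly drops for every proper superset), can be illustrated with a brute-force Python sketch. This is only a didactic enumeration over a toy dataset, not the paper's efficient mining algorithm.

```python
from itertools import combinations

def closed_frequent_itemsets(transactions, minsup):
    """Return {itemset: support} for all closed frequent itemsets.
    An itemset is closed if no proper superset has the same support."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    # Enumerate every candidate itemset and record frequent ones.
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = sum(1 for t in transactions if set(cand) <= set(t))
            if s >= minsup:
                support[frozenset(cand)] = s
    # Keep only itemsets with no equal-support proper superset.
    return {i: s for i, s in support.items()
            if not any(i < j and s == sj for j, sj in support.items())}
```

Rules generated from closed itemsets alone carry the same information as the full rule set, which is the source of the exponential reduction claimed in the abstract.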
16.
Automatic generation of code comments is an important recent research problem in software engineering. Many existing studies have achieved good results on open-source datasets containing large numbers of <code snippet, comment> pairs. In enterprise applications, however, the code to be commented is typically an entire software project repository: one must first decide on which lines of code comments are best generated, and the code snippets to be commented vary in size and granularity, which calls for a noise-tolerant method that integrates comment decision and comment generation. To address this problem, we propose CoComment, an automatic code comment generation method for software projects. The proposed method automatically extracts basic domain concepts from software project documents, then propagates and extends these concepts based on code parsing and text matching. On this basis, it makes automatic comment decisions by locating concept-related code lines and segments, and finally uses templates to fuse concepts with context to generate highly readable natural-language code comments. CoComment has been evaluated in comparative experiments on three enterprise software projects with more than 46,000 manually written code comments. The results show that the proposed method not only makes effective comment decisions but also provides more information helpful for understanding code than existing methods, offering an integrated solution to the comment decision and comment generation problems for software project code.
17.
Valentina Vuksic 《AI & Society》2012,27(2):325-327
“Tripping through” is an invitation to plunge into the invisible relationships of hard and soft computer matter through sensuous mediation.
The projects outlined are designed to provoke and capture the specific behavior of individual computer components through
the use of appropriate software fragments. If one approaches a digital apparatus with a transducer that transforms electromagnetic
fields into acoustic waves, the analytical sphere is changed into concrete acoustical phenomena and enters the world of sensation
(electromagnetic emissions can be picked up using induction microphones and output as acoustic signals; a suitable example is a Monacor telephone adapter AC-71/3,5MM). In choreographies for software and computer parts, these become actors in noise
pieces for and in computers. Machine noises can be mediated for the public. They reveal the activity of computer programs
in the widest sense and the activity of the computers or computer parts they are running on. The time and space of computer
processes and memory span different levels of reality during “runtime.” Software being processed within this system of coordinates creates its own temporal and spatial dimensions, which are staged
for an audience to provide a sensual experience: that of logic encountering the physical world.
18.
BOSS QUATTRO: an open system for parametric design
During the two past decades, engineers have shown growing interest in automatic structural optimization techniques. These
were first used to solve analytical problems and were then rapidly adapted to structural sizing problems coupled to finite
element analysis software. Moreover, shape optimization problems featuring complex CAD systems were addressed in the early 1990s.
This article describes the capabilities of the optimization product developed by SAMTECH S.A. in Liège, Belgium. One will
read here what led the company and a group of research engineers at LTAS (Laboratoire des Techniques Aéronautiques et Spatiales,
Université de Liège) to the achievement of a so-called open system for parametric design.
The BOSS/Quattro package is a general-purpose design program. It includes several “engines”: optimization, parametric studies, “Monte-Carlo” studies, “design of experiments,” and updating. It is interesting to note that the “engines” can easily be combined through the graphical user interface (GUI), and this with several analysis models.
Received December 30, 2000
19.
We study here the effect of concurrent greedy moves of players in atomic congestion games, where n selfish agents (players) each wish to select one resource (out of m resources) so that her selfish delay there is small. The problem of “maintaining” global progress while allowing concurrent play is exactly what is examined and answered here. We examine two orthogonal settings: (i) a game where the players decide their moves without global information, each acting “freely” by sampling resources randomly and locally deciding to migrate (if the new resource is better) via a random experiment; here, the resources can have quite arbitrary load-dependent latency. (ii) An “organised” setting where the players are pre-partitioned into selfish groups (coalitions) and each coalition makes an improving coalitional move. Our work considers concurrent selfish play for arbitrary latencies for the first time. It is also the first time that fast coalitional convergence to an approximate equilibrium is shown.
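Setting (i) can be illustrated with a toy simulation: in each round every player samples one resource uniformly at random and migrates with probability proportional to the relative improvement, judging loads from the start of the round, so decisions are concurrent and based on stale information. The specific migration rule and linear latency used below are illustrative assumptions, not the paper's exact protocol.

```python
import random

def concurrent_greedy_round(assignment, m, latency, rng):
    """One round of concurrent play: every player samples a random
    resource and, if it currently looks better, migrates with
    probability proportional to the relative improvement."""
    loads = [0] * m
    for r in assignment:
        loads[r] += 1
    new = list(assignment)
    for player, r in enumerate(assignment):
        s = rng.randrange(m)
        cur = latency(loads[r])          # delay on current resource
        alt = latency(loads[s] + 1)      # delay if this player joined s
        if alt < cur and rng.random() < (cur - alt) / cur:
            new[player] = s
    return new
```

Starting from a maximally unbalanced assignment, repeated rounds of this dynamic quickly spread the load, which is the kind of global progress under concurrent play the abstract refers to.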
20.
We propose a general, formal definition of the concept of malware (malicious software) as a single sentence in the language of a certain modal logic. Our definition is general thanks to its abstract formulation, which is independent of, but nonetheless generally applicable to, the manifold concrete manifestations of malware. From our formulation of malware, we derive equally general and formal definitions of benware (benign software), anti-malware (“antibodies” against malware), and medware (medical software, or “medicine” for affected software). We provide theoretical tools and practical techniques for the detection, comparison, and classification of malware and its derivatives. Our general defining principle is causation of (in)correctness.