Similar Documents
20 similar documents found (search time: 0 ms)
1.
Technology, Standards, and Real-World Deployments of the EPC Network (total citations: 2; self-citations: 0; citations by others: 2)
The EPC Network is a global RFID data sharing infrastructure based on standards that are built around the Electronic Product Code (EPC), an unambiguous numbering scheme for the designation of physical goods. The authors present the fundamental concepts and applications of the EPC Network, its integration with enterprise systems, and its functionality for data exchange between organizations in the supply chain.
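As an illustration of the EPC numbering scheme the abstract describes, here is a minimal sketch rendering a serialized GTIN (SGTIN) as an EPC Pure Identity URI in the `urn:epc:id:sgtin` form; the helper name and the sample numbers are illustrative, not taken from the article:

```python
def sgtin_pure_identity_uri(company_prefix: str, item_ref: str, serial: str) -> str:
    """Render an SGTIN as an EPC Pure Identity URI.

    The three dot-separated fields identify the company, the product class,
    and the individual item, which is what makes the EPC unambiguous.
    """
    return f"urn:epc:id:sgtin:{company_prefix}.{item_ref}.{serial}"

print(sgtin_pure_identity_uri("0614141", "812345", "6789"))
# → urn:epc:id:sgtin:0614141.812345.6789
```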

2.
3.
In this paper, the authors reflect on their experiences of deploying ubiquitous computing systems in public spaces and present a series of lessons that they feel will be of benefit to researchers planning similar public deployments. They focus on experiences gained from building and deploying three experimental public display systems as part of the e-Campus project. However, they believe the lessons are likely to be generally applicable to many different types of public ubicomp deployment. This article is part of a special issue on Real-World Deployments.

4.
5.
Based on a preventive, development-before-the-fact philosophy that does not allow errors in the first place, the Universal Systems Language has evolved over several decades, offering system engineers and software developers a language they can use to solve problems previously considered next to impossible to solve with traditional approaches.

6.
Ontology-based network management has recently evolved from a theoretical proposal into a more mature technology. As such, it is now being applied in many research projects in a number of different network management and security scenarios. This application has enabled validation of the proposals' main ideas and has revealed some of the problems they bring. This paper describes several research projects where ontology-based network management proposals were applied, detailing the most important facets of the initial proposals that were used and explaining the main advantages and drawbacks that were found after prototyping these proposals.

7.
Software component selection is growing in importance. Its success relies on correctly assessing the candidate components' quality. For a particular project, you can assess quality by identifying and analyzing the criteria that affect it. Component selection thus depends on the suitability and completeness of the criteria used for evaluation. Experiences from determining criteria for several industrial projects provide important lessons. For a particular selection process, you can organize selection criteria into a criteria catalog (CC). A CC is built for a scope, which can be either a domain (workflow systems, mail servers, antivirus tools, and so on) or a category of domains (communication infrastructure, collaboration software, and so on). Structurally, a CC arranges selection criteria in a hierarchical tree-like structure. The higher-level selection criteria serve to classify more concrete selection criteria, usually allowing some overlap. They also serve to leverage the CC.
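The hierarchical tree-like structure described above can be sketched as a simple recursive node type, with abstract higher-level criteria classifying concrete leaf criteria; the class name and the sample catalog below are invented for illustration, not taken from the article:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """A node in a criteria catalog: abstract if it has children, concrete otherwise."""
    name: str
    children: list = field(default_factory=list)

    def concrete_criteria(self) -> list:
        # The leaves are the concrete criteria actually scored during evaluation.
        if not self.children:
            return [self.name]
        leaves = []
        for child in self.children:
            leaves.extend(child.concrete_criteria())
        return leaves

# Illustrative CC for the "mail servers" domain.
cc = Criterion("Mail server CC", [
    Criterion("Functionality", [Criterion("Protocol support"), Criterion("Spam filtering")]),
    Criterion("Cost", [Criterion("License"), Criterion("Maintenance")]),
])
print(cc.concrete_criteria())
# → ['Protocol support', 'Spam filtering', 'License', 'Maintenance']
```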

8.
In many IT organizations, in response to change, people must work longer and harder. While people can do this in the short run, the short run has become the long run in many IT organizations. This trend must be quickly altered or the company's success will be negatively affected. Given a tight labor market for IT workers and ever-increasing job stress, the key to a sustained competitive advantage through people is the learning organization. The transformation to the learning organization must begin with the "whys" and "hows." In addition, IT leadership must tell and show people that there is light at the end of the tunnel.

9.
Event Correlation in Integrated Management: Lessons Learned and Outlook (total citations: 1; self-citations: 0; citations by others: 1)
When event correlation was first used in integrated management, in the early 1980s, several techniques devised by the artificial intelligence and database communities were applied to network element management for analyzing alarms sent by expensive, self-monitoring telephone switches. Today, it is used for detecting faults in wireless networks, for monitoring the performance of commodity, often non-self-aware devices in enterprise networks, for detecting intrusions in firewalls, for ascribing breaches in service level agreements to specific problems in the underlying IT infrastructure, etc. In other words, the problem to be solved has changed completely. Can today's event correlators still meet customers' expectations? If not, how should they evolve to meet them? In this paper, we try to capture the main lessons learned by the integrated management community in event correlation in the past 25 years, and to identify important challenges that we are faced with. By doing this, we hope to streamline and encourage research in this field, which needs better models, algorithms and systems to deal with ever more complex and integrated networks, systems and services.
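One classic idea in this line of work is root-cause suppression: an alarm is discarded when some device upstream of it has also alarmed, so only likely root causes are reported. A toy sketch, with a topology map and device names invented purely for illustration:

```python
def correlate(alarms, topology):
    """Keep only alarms that have no alarmed ancestor in the topology.

    `topology` maps each device to its upstream parent; an alarm is
    suppressed when any device above it has also raised an alarm.
    """
    alarmed = set(alarms)
    roots = []
    for device in alarms:
        node = topology.get(device)
        while node is not None and node not in alarmed:
            node = topology.get(node)
        if node is None:  # reached the top without hitting another alarm
            roots.append(device)
    return roots

# router1 fails, so the hosts behind it alarm too; only router1 is reported.
topo = {"host1": "switch1", "host2": "switch1", "switch1": "router1"}
print(correlate(["router1", "host1", "host2"], topo))
# → ['router1']
```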

10.
Damian, D. Software, IEEE, 2007, 24(2): 21-27
Due to its communication and collaboration-intensive nature, as well as inherent interaction with most other development processes, the practice of requirements engineering is becoming a key challenge in global software engineering (GSE). In distributed projects, cross-functional stakeholder groups must specify and manage requirements across cultural, time-zone, and organizational boundaries. This creates a unique set of problems, not only when an organization opens new development subsidiaries across the world but also when software development is a multiorganizational business affair. We need innovative processes and technologies to manage stakeholders' expectations and interaction in global projects. This article reports on the state of the practice, drawn from industrial empirical studies, of stakeholders' interaction in global RE. The article revisits stakeholders' needs in global RE, discusses the challenges they face in distributed interaction, and offers practical advice to alleviate these challenges, as distilled from empirical studies of GSE practice.

11.
The Companion cognitive architecture supports experiments in achieving human-level intelligence. This article describes seven key design goals of Companions, relating them to properties of human reasoning and learning, and to engineering concerns raised by attempting to build large-scale cognitive systems. We summarize our experiences with Companions in two domains: test taking and game playing.

12.
Organic computing (OC) recognizes that the behaviors of deployed systems can be much more interesting than those predicted by simulation. By exploiting self-X properties such as self-organization, self-optimization, self-protection, and context-awareness, OC researchers are developing methods for creating robust, trustworthy systems. This paper presents two examples of unanticipated behaviors that we have observed in our OC test bed of robotic toy cars, an unmodeled phenomenon with surprisingly large effects and an agent behavior that was believed (incorrectly) to be so rare that it could be ignored. We discuss the use of computational reflection as a tool for identifying such situations, consider the challenges posed by the large variety of such situations faced by real systems, and list lessons learned about the importance of test beds for advancing OC research.

13.
AIB Group released the following statement on 5 June in response to the announcement by the US Attorney's Office in Maryland that John Rusnak had been indicted on charges of bank fraud and false entry in bank records. "AIB Group and Allfirst welcome the Grand Jury's decision to indict former employee John Rusnak on charges of bank fraud and false entry in bank records. AIB stated at the outset that it believed it was the victim of a complex and sophisticated fraud and these charges endorse that conclusion."

14.
15.
Vicarious learning allows an observer to improve his decision making and modify his actions by observing others' past actions and resulting consequences. As organizations become larger and more disconnected, it becomes an increasing challenge to meaningfully share lessons learned, allowing people to learn vicariously from both good and adverse experiences of others. In an effort to promote vicarious learning across disconnected groups, organizations create lessons learned programs to share information. The goal of a lessons learned program is to improve organizational effectiveness by preventing costly errors from recurring in other parts of the organization. This type of antecedent learning requires collaboration between the people at the source and those receiving the lesson, as well as any human intermediaries involved with the process, to amplify the benefits of lessons learned. This paper makes explicit the connection between vicarious learning and collaborative lessons learned programs and develops a lessons learned process model using the theory of knowledge creation. The evaluation of four case studies reveals that a lessons learned program operates most effectively when the information is targeted to recipients who would find it useful and when human collaboration is acknowledged and supported by intermediaries.

16.
Israel Aircraft Industries has recently been conducting a novel six-month intensive course to retrain practicing engineers to become software engineers working on embedded computer systems. The first course was concluded in January 1982 and the second course began in November 1982. This paper describes the objectives, educational philosophy, course content, and practical experience of the first course. It also describes how the second course was modified as a result of the lessons learned from the successes and failures of the first course.

17.
Anyone who has built or remodelled a house and has developed or enhanced software must have noticed the similarity of these activities. This paper examines these two processes from the points of view of budgeting, scheduling, and requirements creep. It is admitted from the start that some of the argument and conclusions are based on popular perceptions and personal observation over small populations, that is, the houses the author and some close friends have remodelled and built and software projects in which the author has participated as an analyst, designer, programmer, or consultant.

18.
This introductory paper to the special issue on Data Mining Lessons Learned presents lessons from data mining applications, including experience from science, business, and knowledge management in a collaborative data mining setting.

19.
This paper reports as a case study an attempt to model check the control subsystem of an operational NASA robotics system. Thirty-seven properties, including both safety and liveness specifications, were formulated for the system. Twenty-two of the thirty-seven properties were successfully model checked. Several significant flaws in the original software system were identified and corrected during the model checking process. The case study presents the entire process in a semi-historical mode. The goal is to provide reusable knowledge of what worked, what did not work and why.
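For context (these generic formulas are not from the case study itself), safety and liveness specifications of the kind counted above are typically written in temporal logic:

```latex
% Safety: "nothing bad ever happens" -- the error state is never reached.
\Box\, \neg\,\mathit{error}

% Liveness: "something good eventually happens" -- every request is eventually granted.
\Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{grant})
```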

20.
The Computational Chemistry Grid (CCG) is a three-year, National Middleware Initiative program to develop cyberinfrastructure for the chemistry community. CCG is led by the University of Kentucky and involves collaborating sites at Louisiana State University, Ohio Supercomputing Center, Texas Advanced Computing Center, and the National Center for Supercomputing Applications. This paper discusses experiences developing the CCG cyberinfrastructure in the first year of the project. Special attention is paid to technological issues faced as well as issues raised running the CCG in production. The final section of the paper looks forward to challenges foreseen in the remaining two years.

