28 results in total (subscription full text: 28; free: 0).
By subject: Automation Technology (25), Metallurgical Industry (2), General Industrial Technology (1).
By year: 2007 (1), 2006 (1), 2005 (4), 2004 (6), 2003 (6), 2002 (5), 2000 (1), 1997 (1), 1994 (1), 1993 (1), 1992 (1).
1.
2.
TPC-W: a benchmark for e-commerce   (total citations: 1; self-citations: 0; citations by others: 1)
When choosing an e-commerce site's hardware and software configuration, we need to know how a specific combination of Web servers, commerce servers, database servers, and supporting hardware will handle a desired load level. Benchmarks let us compare competing alternatives. Researchers have extensively studied workloads on Web sites that provide information and have characterized their performance at the level of HTTP requests. My colleagues and I have also conducted studies to understand e-commerce site workloads and to search for invariants that cut across more than one type of e-commerce site. However, there is currently only one available benchmark for e-commerce sites: TPC benchmark W, designed by the Transaction Processing Performance Council. I explore TPC-W's main features, advantages, and limitations.
3.
Application-level quality of service (QoS) is the Achilles' heel of services offered over the Internet. The articles in this special issue cover various aspects of this complex problem, while exposing the challenges we have yet to overcome.
4.
Quorum attainment protocols are an important part of many mutual exclusion algorithms. Assessing the performance of such protocols in terms of the number of messages exchanged, as is usually done, may be less significant than being able to compute the delay in attaining the quorum. Some protocols achieve higher reliability at the expense of increased message cost or delay. A unified analytical model is presented that takes into account the network delay and its effect on the time needed to obtain a quorum. A combined performability metric, which accounts for both availability and delay, is defined, and expressions to calculate its value are derived for two different reliable quorum attainment protocols: the protocol of D. Agrawal and A. El Abbadi (1991) and the Majority Consensus algorithm of R.H. Thomas (1979). Expressions for the primary-site approach are also given as an upper bound on performability and a lower bound on delay. A parallel version of the Agrawal and El Abbadi protocol is introduced and evaluated; this new algorithm is shown to exhibit lower delay at the expense of a negligible increase in the number of messages exchanged. Numerical results derived from the model are discussed.
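The paper's performability metric combines availability and delay; as a rough illustration of the availability ingredient alone, the sketch below computes the probability that a majority quorum can be assembled when each of the n sites is up independently with probability p, and contrasts it with the primary-site approach. The independent-failure assumption and the function names are mine, not the authors'; the paper's unified model also accounts for network delay, which this sketch ignores.

```python
from math import comb

def majority_quorum_availability(n: int, p: float) -> float:
    """Probability that at least ceil((n+1)/2) of the n sites are up,
    assuming sites fail independently and each is up with probability p."""
    quorum = n // 2 + 1  # smallest majority quorum size
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(quorum, n + 1))

def primary_site_availability(p: float) -> float:
    """Primary-site approach: a quorum is attainable only if the single
    designated site is up."""
    return p

if __name__ == "__main__":
    p = 0.95
    for n in (3, 5, 7):
        print(f"majority quorum, n={n}: {majority_quorum_availability(n, p):.6f}")
    print(f"primary site: {primary_site_availability(p):.6f}")
```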
5.
6.
It has recently been reported that additional X chromosomes occur in over 30% of B-cell non-Hodgkin's lymphomas (NHL), and that monosomy of the X chromosome occurs in 38% of female patients with T-cell leukaemia or lymphoma. These observations have suggested a possible role for the X chromosome in the evolution of NHL. We have now examined 280 cases of NHL, and have identified 19 examples of structurally altered X chromosomes in the malignant cells from 17 of these cases. These abnormalities were mainly characterized by either a translocation involving Xp22, or a translocation/deletion involving Xq28. The relevance of these observations is discussed with respect to other published reports, and together they suggest that lymphoma-associated oncogenes may exist on the X chromosome at bands p22 or q28.
7.
Although the traditional client-server model first established the Web's backbone, it tends to underuse the Internet's bandwidth and intensify the burden that dedicated servers face as their load increases. Peer-to-peer (P2P) computing relies on individual computers' computing power and storage capacity to better utilize bandwidth and distribute this load in a self-organizing manner. In P2P, nodes (or peers) act as both clients and servers, form an application-level network, and route messages (such as requests to locate a resource). The design of these routing protocols is of paramount importance to a P2P application's efficiency: naive approaches, such as Gnutella's flood routing, can generate a large amount of redundant traffic. P2P systems that exhibit the "small world" property - in which most peers have few links to other peers, but a few of them have many - are robust to random attacks, but can be highly vulnerable to targeted ones. P2P computing also has the potential to enhance reliability and fault tolerance because it doesn't rely on dedicated servers. Each peer maintains a local directory with entries to the resources it manages. It can also cache other peers' directory entries. Important applications of P2P technologies include distributed directory systems, new e-commerce models, and Web service discovery, all of which require efficient resource-location mechanisms.
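As a toy illustration of why naive flood routing is costly, the sketch below forwards a query to every neighbor until a time-to-live expires and counts the messages generated along the way. The graph and resource structures, the function name, and the default TTL are hypothetical; this is not Gnutella's actual wire protocol.

```python
from collections import deque

def flood_query(graph, origin, resource, holdings, ttl=3):
    """Toy flood routing: forward the query to every neighbor until the
    time-to-live expires; return the peers holding the resource and the
    total number of query messages generated."""
    messages = 0
    hits = []
    seen = {origin}
    frontier = deque([(origin, ttl)])
    while frontier:
        peer, remaining = frontier.popleft()
        if resource in holdings.get(peer, set()):
            hits.append(peer)
        if remaining == 0:
            continue
        for neighbor in graph.get(peer, []):
            messages += 1  # each forwarded copy of the query is one message
            if neighbor not in seen:  # peers drop duplicates they have already seen
                seen.add(neighbor)
                frontier.append((neighbor, remaining - 1))
    return hits, messages
```

On a densely connected overlay the message count grows quickly with the TTL even when the resource is nearby, which is the inefficiency that more structured routing schemes try to avoid.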
8.
High-volume Web sites often use clusters of servers to support their architectures. A load balancer in front of such clusters directs requests to the various servers in a way that equalizes, as much as possible, the load placed on each. There are two basic approaches to scaling Web clusters: adding more servers of the same type (scaling out, or horizontally) or upgrading the capacity of the servers in the cluster (scaling up, or vertically). Although more detailed and complex models would be required to obtain more accurate results about such systems' behavior, simple queuing theory provides a reasonable abstraction level to shed some insight on which scaling approach to employ in various scenarios. Typical issues in Web cluster design include: whether to use a large number of low-capacity inexpensive servers or a small number of high-capacity costly servers to provide a given performance level; how many servers of a given type are required to provide a certain performance level at a given cost; and how many servers are needed to build a Web site with a given reliability. Using queuing theory, I examine the average response time, capacity, cost, and reliability tradeoffs involved in designing Web server clusters.
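A minimal sketch of the kind of queuing abstraction the column describes: each server in a scaled-out cluster is treated as an independent M/M/1 queue fed by a perfect load balancer, while a scaled-up configuration is a single faster M/M/1 server. The arrival and service rates below are illustrative, and the column's own model may differ in detail.

```python
def response_time_scale_out(arrival_rate, servers, per_server_rate):
    """Average response time of a cluster of identical servers behind a
    perfect load balancer, each treated as an independent M/M/1 queue:
    R = 1 / (mu - lambda / c)."""
    per_server_load = arrival_rate / servers
    if per_server_load >= per_server_rate:
        raise ValueError("each server would be saturated")
    return 1.0 / (per_server_rate - per_server_load)

def response_time_scale_up(arrival_rate, service_rate):
    """Average response time of a single upgraded server modeled as one
    M/M/1 queue: R = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("the server would be saturated")
    return 1.0 / (service_rate - arrival_rate)

if __name__ == "__main__":
    offered_load = 90.0  # requests per second arriving at the site
    # Scale out: four servers rated at 30 req/s each.
    print("scale out:", response_time_scale_out(offered_load, 4, 30.0))
    # Scale up: one server rated at 120 req/s (same aggregate capacity).
    print("scale up :", response_time_scale_up(offered_load, 120.0))
```

With these illustrative numbers, the single fast server yields roughly a quarter of the average response time of the four-server cluster at the same aggregate capacity, the classic queuing argument for scaling up when only response time matters; cost and reliability, which the column also weighs, pull in the other direction.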
9.
The next generation of complex software systems will be highly distributed, component-based, and service-oriented. They will need to operate in unattended mode, possibly in hostile environments, and they'll be composed of many "replaceable" components discoverable at runtime. Moreover, they will have to run on a multitude of unknown and heterogeneous hardware and network platforms. Three major requirements for such systems are performance, availability, and security. Performance requirements imply that these systems must be adaptable and self-configurable to changes in workload intensity. Availability and security requirements suggest that these systems also must adapt and reconfigure themselves to withstand attacks and failures. This paper focuses specifically on QoS requirements for performance and describes a framework for QoS-aware distributed applications.
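To make the "adaptable and self-configurable to changes in workload intensity" requirement concrete, here is a small sketch of a trivial controller that resizes a pool of identical servers so that average response time stays within a target, again using an M/M/1-per-server abstraction. The model, the function names, and the default parameter values are assumptions for illustration, not part of the framework the paper describes.

```python
import math

def servers_needed(arrival_rate, per_server_rate, target_response):
    """Smallest number of identical servers (each an independent M/M/1 queue
    behind a perfect load balancer) whose average response time
    1 / (mu - lambda / c) stays within the target."""
    if target_response <= 1.0 / per_server_rate:
        raise ValueError("target is below the service time of a single request")
    # 1/(mu - lam/c) <= R  rearranges to  c >= lam / (mu - 1/R)
    return max(1, math.ceil(arrival_rate / (per_server_rate - 1.0 / target_response)))

def reconfigure(measured_arrival_rate, per_server_rate=50.0, target_response=0.2):
    """One step of a self-configuration loop: size the pool to the
    currently measured workload intensity."""
    return servers_needed(measured_arrival_rate, per_server_rate, target_response)

if __name__ == "__main__":
    for load in (100.0, 300.0, 800.0):  # requests per second
        print(f"load {load:>5.0f} req/s -> {reconfigure(load)} servers")
```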
10.
QoS in grid computing   (total citations: 1; self-citations: 0; citations by others: 1)
Grid computing is already a mainstream paradigm for resource-intensive scientific applications, but it also promises to become the future model for enterprise applications. The grid enables resource sharing and dynamic allocation of computational resources, thus increasing access to distributed data, promoting operational flexibility and collaboration, and allowing service providers to scale efficiently to meet variable demands. Large-scale grids are complex systems composed of thousands of components from disjoint domains. Planning the capacity to guarantee quality of service (QoS) in such environments is a challenge because global service-level agreements (SLAs) depend on local SLAs. We provide a motivating example for grid computing in an enterprise environment and then discuss how resource allocation affects SLAs.
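As an illustration of how a global SLA can depend on local SLAs, the sketch below composes per-domain response-time and availability guarantees for domains invoked in series: response times add up, availabilities multiply. The serial-invocation assumption, the data structure, and the numbers are hypothetical and not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class LocalSLA:
    domain: str
    max_response: float   # response time (seconds) promised by this domain
    availability: float   # fraction of time this domain promises to be up

def meets_global_sla(local_slas, global_response, global_availability):
    """Compose local SLAs for domains invoked in series and check the
    result against the global SLA."""
    total_response = sum(sla.max_response for sla in local_slas)
    total_availability = 1.0
    for sla in local_slas:
        total_availability *= sla.availability
    return total_response <= global_response and total_availability >= global_availability

if __name__ == "__main__":
    chain = [
        LocalSLA("compute", max_response=0.8, availability=0.999),
        LocalSLA("storage", max_response=0.5, availability=0.995),
        LocalSLA("network", max_response=0.2, availability=0.999),
    ]
    print(meets_global_sla(chain, global_response=2.0, global_availability=0.99))
```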