Showing results 41-50 of 76.
41.
This article presents an overview of a Cluster and Grid Computing course offered as part of the Master of Engineering in Distributed Computing program at the University of Melbourne. It describes the operation of the course and its evolution to fulfil the demand for professionals in the emerging field of distributed computing.
42.
Hybrid content delivery networks (HCDNs) combine the complementary advantages of peer-to-peer (P2P) networks and content delivery networks (CDNs). To extend a traditional CDN so that it can offer a hybrid content delivery service, we have modified a traditional domain name system-based request routing mechanism. The proposed scheme relies on an oligopolistic mechanism to balance the load on the edge servers and employs a truthful profit-maximizing auction to maximize the contribution of users to the P2P content delivery. In particular, the economics of content delivery in HCDNs are studied, and it is shown that, through our request routing mechanism, it is possible to deliver a higher quality of service to the majority of end-users, increase the net profit of the HCDN provider, and decrease the content distribution cost at the same time.
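The abstract does not specify the exact form of the auction. As a hedged illustration only, the sketch below implements a sealed-bid second-price (reverse Vickrey) auction, the textbook truthful mechanism: the cheapest peer wins but is paid the second-cheapest ask, so no peer gains by misreporting its cost. The names (`Bid`, `second_price_auction`) and the single-chunk setting are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    peer_id: str
    amount: float  # price the peer asks for serving one content chunk

def second_price_auction(bids: list[Bid]) -> tuple[str, float]:
    """Select the cheapest peer but pay the second-cheapest ask.

    In a reverse (procurement) Vickrey auction, truthful bidding is a
    dominant strategy: understating your cost risks serving at a loss,
    overstating it risks losing an auction you could have won.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bids for a second-price auction")
    ordered = sorted(bids, key=lambda b: b.amount)
    winner, runner_up = ordered[0], ordered[1]
    return winner.peer_id, runner_up.amount  # winner is paid the 2nd-lowest ask

# Example: three peers offer to serve a chunk; the cheapest (p2) wins,
# but is paid the second-cheapest price (0.05).
peers = [Bid("p1", 0.05), Bid("p2", 0.03), Bid("p3", 0.08)]
print(second_price_auction(peers))  # ('p2', 0.05)
```

Truthfulness matters here because the CDN cannot observe peers' real serving costs; a mechanism under which honest bidding is optimal removes the incentive to game the request router.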
43.
Service providers often geographically distribute their Web server facilities to improve performance, reliability, and scalability. Content delivery networks, which first emerged in 1998, replicate content over several mirrored Web servers, strategically placed at various locations to deal with flash crowds and to enhance response time. A CDN improves network performance by maximizing bandwidth, improving accessibility, and maintaining correctness through content replication. Unfortunately, although many commercial CDN providers exist, they don't cooperate in delivering content to end users in a scalable manner. In addition, content providers typically subscribe to one CDN provider and thus can't use multiple CDNs at the same time. Such a closed, noncooperative model results in "islands" of CDNs. We present a model for an open, scalable, and service-oriented architecture (SOA)-based system. This system helps to create open content and service delivery networks (CSDNs) that scale well and can share resources with other CSDNs through cooperation and coordination, thus overcoming the island CDN problem.
44.
Recent technological advances in Grid computing enable the virtualization and dynamic delivery of computing services on demand to realize utility computing. In utility computing, computing services are always available to users whenever the need arises, similar to the availability of real-world utilities such as electrical power, gas, and water. With this new outsourcing service model, users are able to define their service needs through Service Level Agreements (SLAs) and only have to pay when they use the services. They do not have to invest in or maintain computing infrastructures themselves and are not constrained to specific computing service providers. Thus, a commercial computing service faces two new challenges: (i) what objectives it must achieve in order to support the utility computing model, and (ii) how to evaluate whether these objectives are achieved. To address these two challenges, this paper first identifies four essential objectives that are required to support the utility computing model: (i) manage wait time for SLA acceptance, (ii) meet SLA requests, (iii) ensure the reliability of accepted SLAs, and (iv) attain profitability. It then describes two evaluation methods that are simple and intuitive: (i) separate and (ii) integrated risk analysis, used to analyze the effectiveness of resource management policies in achieving the objectives. Evaluation results based on simulation successfully demonstrate the applicability of separate and integrated risk analysis to assess policies in terms of the objectives. These evaluation results, which constitute an a posteriori risk analysis of policies, can later be used to generate an a priori risk analysis of policies by identifying possible risks for future utility computing situations.
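To make the "separate" versus "integrated" evaluation concrete, here is a minimal sketch that assumes a simulated log of SLA requests; the record fields, metric definitions, and weighting scheme are illustrative assumptions, not the paper's actual formulas. Separate analysis reports one metric per objective, while integrated analysis collapses them into a single weighted score.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SlaRecord:
    wait_time: float   # seconds the user waited for an accept/reject decision
    accepted: bool     # did the provider accept the SLA request?
    violated: bool     # was an accepted SLA later violated? (False if rejected)
    profit: float      # net profit from this request, penalties deducted

def separate_analysis(records: list[SlaRecord]) -> dict[str, float]:
    """Report one metric per objective, each examined on its own."""
    accepted = [r for r in records if r.accepted]
    return {
        "mean_wait_time": mean(r.wait_time for r in records),
        "acceptance_rate": len(accepted) / len(records),
        # fraction of accepted SLAs that were honoured
        "reliability": 1.0 - mean(r.violated for r in accepted),
        "profit": sum(r.profit for r in records),
    }

def integrated_analysis(metrics: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Collapse the per-objective metrics into one weighted score.

    The metrics live on different scales, so in practice each would be
    normalized before weighting; that step is omitted here for brevity.
    """
    return sum(weights[name] * metrics[name] for name in weights)
```

Comparing two resource management policies then reduces to comparing either their full metric dictionaries (separate analysis) or their scalar scores (integrated analysis).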
45.
According to Wikipedia (http://en.wikipedia.org/wiki/Web_2.0), Web 2.0 refers to "a perceived second generation of Web-based communities and hosted services, such as social networking sites, wikis, and folksonomies, which aim to facilitate collaboration and sharing between users." Web 2.0 is especially widely used in the consumer space by people who are their own IT administrators. However, this paper focuses on enterprise IT.
46.
The Journal of Supercomputing - Energy- and latency-optimized Internet of Things (IoT) is an emerging research domain within the fifth-generation (5G) wireless network paradigm. In traditional...
47.
Over the last few years, several nations around the world have set up Grids to share resources such as computers, data, and instruments to enable collaborative science, engineering, and business applications. These Grids follow a restricted organizational model wherein a Virtual Organization (VO) is created for a specific collaboration and all interactions, such as resource sharing, are limited to within the VO. Therefore, dispersed Grid initiatives have led to the creation of disparate Grids with little or no interaction between them. In this paper, we propose a model that: (a) promotes the interlinking of islands of Grids through peering arrangements to enable InterGrid resource sharing; (b) provides a scalable structure for Grids that allows them to interconnect with one another and grow in a sustainable way; and (c) creates a global cyberinfrastructure to support e-Science and e-Business applications. This work identifies and proposes the architecture, mechanisms, and policies that allow the internetworking of Grids and enable them to grow in much the same manner as the Internet. We term the structure resulting from such internetworking between Grids the InterGrid. The proposed InterGrid architecture is composed of InterGrid Gateways responsible for managing peering arrangements between Grids. We discuss the main components of the architecture and present a research agenda to enable the InterGrid vision.
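The abstract only outlines the InterGrid Gateway's role, so the toy model below sketches one plausible reading: each gateway tracks its peering arrangements and spills requests it cannot serve locally over to peered Grids. The class name, the node-count capacity model, and the first-peer-with-capacity forwarding rule are all assumptions made for illustration, not the paper's design.

```python
class InterGridGateway:
    """Toy model of a gateway that manages peering arrangements between Grids."""

    def __init__(self, grid_id: str, capacity: int):
        self.grid_id = grid_id
        self.free_nodes = capacity
        self.peers: list["InterGridGateway"] = []  # peering arrangements

    def peer_with(self, other: "InterGridGateway") -> None:
        """Establish a symmetric peering arrangement, Internet-AS style."""
        self.peers.append(other)
        other.peers.append(self)

    def request(self, nodes: int) -> str | None:
        """Allocate locally if possible; otherwise try peered Grids."""
        if nodes <= self.free_nodes:
            self.free_nodes -= nodes
            return self.grid_id
        for peer in self.peers:
            if nodes <= peer.free_nodes:
                peer.free_nodes -= nodes
                return peer.grid_id
        return None  # no Grid in the federation can serve the request

# Two Grids peer; a request too large for Grid A spills over to Grid B.
a, b = InterGridGateway("A", 4), InterGridGateway("B", 16)
a.peer_with(b)
print(a.request(10))  # 'B'
```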
48.
Distributed computing systems and applications are not only changing the face of computing; they are also continually changing the way we live, work, and interact as a society. In recent years, the demand for educational courses in advanced computing and networking has expanded rapidly. This has led to a huge demand for Internet-based distributed computing technologies (such as Web and grid services) and applications that virtualize geographically distributed resources to enable the creation of virtual enterprises, marketplaces, and service-oriented computing industries. Because current academic programs don't yet address the skill set required to meet these demands, we launched a Master of Engineering in Distributed Computing program at the University of Melbourne.
49.
The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems in the fields of science, engineering, and business that cannot be effectively dealt with using the current generation of supercomputers. In fact, due to their size and complexity, these problems are often highly numerically and/or data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer-to-peer or Grid computing. The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent. In fact, many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and of course distributed supercomputing. Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started that aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications. In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology for creating a pervasive and ubiquitous Grid-based infrastructure. This paper aims to present the state of the art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.
50.
Cloud computing is emerging as an increasingly popular computing paradigm that allows the resources available to users to be scaled dynamically as needed. This requires a highly accurate demand prediction and resource allocation methodology that can provision resources in advance, thereby minimizing the virtual machine downtime required for resource provisioning. In this paper, we present a dynamic resource demand prediction and allocation framework for multi-tenant service clouds. The novel contribution of our proposed framework is that it classifies service tenants according to whether their resource requirements are expected to increase; based on this classification, the framework prioritizes prediction for those tenants whose resource demand will increase, thereby minimizing the time needed for prediction. Furthermore, our approach assigns service tenants to matched virtual machines and allocates the virtual machines to physical host machines using a best-fit heuristic, as sketched below. Performance results demonstrate how this best-fit heuristic can efficiently allocate virtual machines to hosts so that the hosts are utilized to their fullest capacity.
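As a rough sketch of the best-fit placement step, assuming one-dimensional (CPU-only) capacities: each VM goes to the host whose remaining capacity exceeds its demand by the smallest margin, which packs hosts tightly. The function name and data layout are illustrative; the paper's framework also performs demand prediction and tenant-to-VM matching upstream of this step.

```python
def best_fit(vms: list[int], hosts: list[int]) -> dict[int, int]:
    """Place each VM (by CPU demand) on the host whose remaining capacity
    exceeds the demand by the smallest margin (the "tightest" fit)."""
    remaining = list(hosts)          # remaining capacity per host
    placement: dict[int, int] = {}   # VM index -> host index
    for i, demand in enumerate(vms):
        # (slack, host) pairs for every host that can still fit this VM
        candidates = [(cap - demand, h) for h, cap in enumerate(remaining)
                      if cap >= demand]
        if not candidates:
            raise RuntimeError(f"no host can fit VM {i} (demand {demand})")
        _, host = min(candidates)    # smallest leftover slack wins
        remaining[host] -= demand
        placement[i] = host
    return placement

# Example: four VMs packed onto two hosts of capacity 8, filling both exactly.
print(best_fit([5, 3, 4, 4], [8, 8]))  # {0: 0, 1: 0, 2: 1, 3: 1}
```

Relative to first-fit, best-fit leaves less stranded slack on each host, which is what lets the hosts run at (or near) full utilization.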