1.
We investigate a wireless computing architecture where mobile terminals can execute their computation tasks either 1) locally, at the terminal's processor, 2) remotely, assisted by the network infrastructure, or even 3) by combining the former two options. Remote execution involves: 1) sending the task to a computation server via the wireless network, 2) executing the task at the server, and 3) downloading the results of the computation back to the terminal. Hence, it results in energy savings at the terminal (sparing its processor from computations) and execution speed gains due to the (typically) faster server processor(s), but also incurs overhead due to terminal-server wireless communication. The net gains (or losses) are contingent on network connectivity and server load, which may vary over time depending on user mobility and on network and server congestion (due to concurrent sessions/connections from other terminals). In local execution, the wireless terminal faces the dilemma of power-managing its processor, trading off fast execution against low energy consumption. We model the system within a Markovian dynamic control framework, which allows the computation of optimal execution policies. We study the associated energy-versus-delay trade-off and assess the performance gains attained in various test cases in comparison to conventional benchmark policies.
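To make the decision framework concrete, the sketch below runs value iteration on a toy two-action offloading MDP. The state space (channel quality, server load), the cost numbers, the flip probability, and the weight BETA are illustrative assumptions, not the paper's model parameters; the sketch only shows how an optimal local-versus-remote policy could be computed from a Markovian formulation of this kind.

```python
import itertools

# Assumed state: (channel_quality, server_load), each in {0: poor/high, 1: good/low}.
STATES = list(itertools.product([0, 1], [0, 1]))
ACTIONS = ["local", "remote"]

BETA = 0.5  # assumed weight trading off delay against energy

def stage_cost(state, action):
    """Weighted energy + delay cost of one task (all numbers are assumptions)."""
    channel, load = state
    if action == "local":
        energy, delay = 3.0, 2.0              # terminal CPU: high energy, moderate delay
    else:
        energy = 1.0 if channel else 2.5      # good channel -> cheap uplink/downlink
        delay = 1.0 if load else 4.0          # lightly loaded server -> fast execution
    return (1 - BETA) * energy + BETA * delay

def transition_prob(s, s_next):
    """Assumed dynamics: channel quality and server load each flip with probability 0.3."""
    p = 1.0
    for cur, nxt in zip(s, s_next):
        p *= 0.7 if cur == nxt else 0.3
    return p

def value_iteration(gamma=0.9, iters=200):
    """Compute the optimal discounted-cost value function and greedy policy."""
    V = {s: 0.0 for s in STATES}
    q = lambda s, a, V: stage_cost(s, a) + gamma * sum(
        transition_prob(s, s2) * V[s2] for s2 in STATES)
    for _ in range(iters):
        V = {s: min(q(s, a, V) for a in ACTIONS) for s in STATES}
    policy = {s: min(ACTIONS, key=lambda a: q(s, a, V)) for s in STATES}
    return V, policy

if __name__ == "__main__":
    V, policy = value_iteration()
    for s in STATES:
        print(f"state={s}  value={V[s]:.2f}  action={policy[s]}")
```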
2.
We investigate efficient schemes for data communication from a server (base station or access point) to a mobile terminal over a wireless channel of randomly fluctuating quality. The terminal user generates requests for data items. If the terminal's buffer (cache) contains the requested data, no access delay/latency is incurred. If not, the data is downloaded from the server, and the user incurs a delay cost until it becomes available locally at the terminal. Moreover, a transmission/power cost is incurred to transmit the data over the wireless link at a dynamically selected power level. To lower both the access delay and the transmission cost, the system may prefetch data predictively and cache them at the terminal (especially during high-link-quality periods), anticipating future user requests. The goal is to jointly minimize the overall latency and power costs by dynamically choosing what data to (pre)fetch, what power level to use, and when to use it. We develop a modeling framework (based on dynamic programming and controlled Markov chains) that captures the essential performance trade-offs and allows for the computation of optimal decisions regarding what data to (pre)fetch and what power levels to use. To cope with the resulting complexity, we then design efficient online heuristics whose simulation analysis demonstrates substantial performance gains over standard approaches.
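As a companion illustration, the sketch below implements a simple online prefetching heuristic in the spirit of the scheme described above: prefetch an item at the cheapest power level whenever its expected delay saving outweighs the weighted transmission energy. The Item fields, the power levels, the Shannon-like energy model, and all numeric constants are assumptions for illustration and do not reproduce the paper's actual heuristics.

```python
import math
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    request_prob: float   # estimated probability the user requests it soon (assumption)
    size: float           # size in arbitrary transmission units

POWER_LEVELS = [0.5, 1.0, 2.0]  # assumed selectable transmit power levels

def tx_energy(size, power, channel_gain):
    """Toy energy model: Shannon-like rate log(1 + power*gain); energy = power * airtime."""
    rate = math.log1p(power * channel_gain)
    return power * size / rate

def prefetch_decision(item, channel_gain, delay_penalty, energy_weight):
    """Prefetch now iff the expected delay-cost saving exceeds the cheapest
    weighted transmission energy; also return the power level to use."""
    best_power = min(POWER_LEVELS, key=lambda p: tx_energy(item.size, p, channel_gain))
    energy_cost = energy_weight * tx_energy(item.size, best_power, channel_gain)
    expected_delay_saving = item.request_prob * delay_penalty * item.size
    return expected_delay_saving > energy_cost, best_power

if __name__ == "__main__":
    candidate = Item("map_tile_42", request_prob=0.6, size=8.0)
    for gain in (0.3, 1.5):   # poor vs. good link quality
        do_prefetch, power = prefetch_decision(candidate, gain,
                                               delay_penalty=1.5, energy_weight=0.8)
        print(f"channel_gain={gain}: prefetch={do_prefetch}, power={power}")
```

Under these assumed numbers the heuristic declines to prefetch over the poor link and prefetches at the lowest power level over the good one, mirroring the intuition of exploiting high-link-quality periods.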