Similar Documents
20 similar documents found (search time: 15 ms).
1.
This study evaluates the capacity of the Internet to enhance development in emerging regions through Sen's freedom perspective. The paper begins with a qualitative evaluation of the Internet's potential as a freedom enhancer through examples and a literature study. It then presents a quantitative evaluation based on web access logs obtained from the AirJaldi network in rural India. We categorize the data based on Sen's freedoms to contribute an information and communication technology-freedom taxonomy and note the challenges in doing so. The usage logs indicate that users may indeed have experienced enhancement in all of Sen's freedom categories; yet our qualitative evaluation suggests there is much unexploited potential. We conclude that it is important to look at Internet-based Information and Communication Technologies for Development (ICTD) projects through Sen's freedom lens and call for such projects to be evaluated based on these broad freedom goals rather than on narrowly focused development goals.

2.
3.
The Journal of Supercomputing

4.

There is too little engagement between community computing and human-computer interaction. In the future there should be more. Better integrating community computing and human-computer interaction can help to make HCI richer and more comprehensive, conceptually and methodologically. It can help HCI to have more of an impact on society and on everyday collective life. Six examples are briefly discussed.

5.
The existing state of the art within “interpretive” sociology is introduced as an initial point of reference. The special issue papers are then discussed in an order that proposes to exemplify the interdisciplinary nature of the arguments at hand. Thus, we attempt an experiential introduction to the prospect of developing both an individual and a disciplinary capacity for “interpretive” mobility. The paper concludes with an invitation to continue this development in the light of an explicit caveat emptor.

6.
We propose a combined atom–molecule system for quantum information processing in individual traps, such as those provided by optical lattices. In this platform, different species of atoms—one atom carrying a qubit and the other enabling the interaction—are used to store and process quantum information via intermediate molecular states. We show how gates, initialization, and readout operations could be implemented using this approach. In particular, we describe in some detail the implementation of a two-qubit phase gate in which a pair of atoms is transferred into the ground rovibrational state of a polar molecule with a large dipole moment, thus allowing atoms transferred into molecules to interact via their dipole-dipole interaction. We also discuss how the reverse process could be used as a non-destructive readout tool of molecular qubit states. Finally, we generalize these ideas to use a decoherence-free subspace for qubit encoding to minimize the decoherence due to magnetic field fluctuations. In this case, qubits will be encoded into field-insensitive states of two identical atoms, while a third atom of a different species will be used to realize a phase gate.
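As a minimal sketch, in generic notation rather than the authors' own, the controlled-phase action that such a dipole-dipole mediated gate aims to realize can be written as below; the interaction strength V_dd, interaction time τ, dipole moments d_1, d_2, and separation r are placeholder symbols, not values from the paper.

    \[
      U_{\text{phase}}\,\lvert q_1 q_2 \rangle \;=\; e^{\,i\phi\, q_1 q_2}\,\lvert q_1 q_2 \rangle,
      \qquad q_1, q_2 \in \{0,1\},
    \]
    \[
      \phi \;\approx\; \frac{1}{\hbar}\int_0^{\tau} V_{\text{dd}}(t)\,\mathrm{d}t,
      \qquad
      V_{\text{dd}} \;\sim\; \frac{d_1 d_2}{4\pi\varepsilon_0\, r^3},
    \]

with φ = π corresponding to the standard two-qubit controlled-Z (phase) gate.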

7.
Computer engineers are continuously seeking new solutions to increase available processing speed, achievable transmission rates, and efficiency in order to satisfy users’ expectations. While multi-core systems, computing clouds, and other parallel processing techniques dominate current technology trends, elementary particles governed by quantum mechanics have been borrowed from the physicists’ laboratory and applied to computer engineering in an effort to solve sophisticated computing and communications problems. In this paper, we review the quantum mechanical background of quantum computing from an engineering point of view and describe the possibilities offered by quantum-assisted and quantum-based computing and communications. In addition to the currently available solutions, the corresponding challenges are also surveyed.

8.
We propose and analyze in detail the revised model of XPROB, an infinite family of pool-based anonymous communication systems that can be used in various applications, including high-performance computing environments. XPROB overcomes the limitation of APROB Channel, which resists only a global delaying adversary (GDA). Each instance of XPROB uses a pool mix as its core component to provide resistance against a global active adversary (GAA), a stronger yet more practical opponent than a GDA. Against XPROB, a GAA can drop messages from users but cannot break the anonymity of the senders of messages. Analysis and experimental evaluations show that each instance of XPROB provides greater anonymity than APROB Channel for the same traffic load and user behavior (rate and number of messages sent). In XPROB, any message is delivered with high probability within a few rounds after its arrival into the system; thus, an opponent cannot be certain when a message will be delivered. Furthermore, users can choose their own balance between anonymity and delay. Through the evaluation, we show that XPROB can provide anonymity for users in high-performance computing environments.
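As a rough illustration (not the authors' implementation), the pool-mix behaviour described above can be sketched in a few lines of Python: messages accumulate in a pool, and each round every pooled message is forwarded only with some probability, so an observer cannot tell in which round a given message leaves. The forwarding probability below is an arbitrary placeholder.

    import random

    class PoolMix:
        """Toy binomial pool mix: each round, every pooled message is
        independently forwarded with probability forward_prob, otherwise retained."""

        def __init__(self, forward_prob=0.6, rng=None):
            self.forward_prob = forward_prob
            self.pool = []
            self.rng = rng or random.Random()

        def receive(self, message):
            self.pool.append(message)

        def flush_round(self):
            sent, kept = [], []
            for msg in self.pool:
                (sent if self.rng.random() < self.forward_prob else kept).append(msg)
            self.pool = kept
            self.rng.shuffle(sent)   # break arrival order before delivery
            return sent

    # A message is very likely delivered within a few rounds, but the exact
    # round is unpredictable to an observer.
    mix = PoolMix()
    mix.receive("m1")
    for round_no in range(1, 6):
        if "m1" in mix.flush_round():
            print("m1 delivered in round", round_no)
            break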

9.
Despite its attractive computational advantages, mobile agent technology has not realized its full potential because of various security issues. This paper proposes a method called Private Key Consignment to solve the problem of protecting the data carried by mobile agents. It exploits the new functionalities and mechanisms provided by trusted computing technology and adopts both public-key and symmetric-key cryptography for data and key protection. The most notable feature of this method is that it protects the private key of the agent by consigning it to tamper-proof hardware, thus enabling convenient and secure use of the private key. It provides a new scheme for protecting mobile agents' data.
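A minimal sketch of the general idea, assuming standard hybrid encryption with the pyca/cryptography library: the agent's data are sealed under a fresh symmetric key, that key is wrapped with the agent's public key, and unwrapping is delegated to the hardware holding the consigned private key. The hardware_decrypt function is a hypothetical stand-in for the tamper-proof hardware operation, not an API from the paper.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # The agent's RSA key pair; in the consignment scheme the private half
    # would live inside tamper-proof hardware rather than in software.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def seal_agent_data(data: bytes) -> tuple[bytes, bytes]:
        """Encrypt data with a fresh symmetric key, then wrap that key
        with the agent's public key (hybrid encryption)."""
        sym_key = Fernet.generate_key()
        ciphertext = Fernet(sym_key).encrypt(data)
        wrapped_key = public_key.encrypt(sym_key, OAEP)
        return ciphertext, wrapped_key

    def hardware_decrypt(wrapped_key: bytes) -> bytes:
        """Hypothetical stand-in for the unwrap operation performed
        inside the tamper-proof hardware holding the consigned key."""
        return private_key.decrypt(wrapped_key, OAEP)

    ciphertext, wrapped_key = seal_agent_data(b"agent-collected results")
    plaintext = Fernet(hardware_decrypt(wrapped_key)).decrypt(ciphertext)
    assert plaintext == b"agent-collected results"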

10.
Scheduling becomes key in dynamic and heterogeneous utility computing settings. Market-based scheduling promises to increase the efficiency of resource allocation and provides incentives to offer computing resources and services. Current market mechanisms, however, are inefficient and computationally intractable in large-scale settings. The contribution of this paper is the proposal, as well as the analytical and numerical evaluation, of GreedEx, an exchange for clearing utility computing markets based on a greedy heuristic. GreedEx achieves a distinct trade-off: it obtains fast and near-optimal resource allocations while generating prices that are truthful on the demand side and approximately truthful on the supply side.
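The following is not GreedEx itself but a minimal sketch of the kind of greedy clearing heuristic the abstract refers to: buy bids are visited in decreasing and sell asks in increasing price order, and matched while the bid price covers the ask price, giving a fast, near-optimal (though not exact) allocation.

    def greedy_clear(buy_bids, sell_asks):
        """buy_bids / sell_asks: lists of (price_per_unit, quantity).
        Returns a list of (buy_price, sell_price, quantity) matches."""
        buys = sorted(buy_bids, key=lambda b: -b[0])   # highest willingness to pay first
        sells = sorted(sell_asks, key=lambda s: s[0])  # cheapest offers first
        matches, i, j = [], 0, 0
        while i < len(buys) and j < len(sells) and buys[i][0] >= sells[j][0]:
            bp, bq = buys[i]
            sp, sq = sells[j]
            q = min(bq, sq)
            matches.append((bp, sp, q))
            buys[i] = (bp, bq - q)
            sells[j] = (sp, sq - q)
            if buys[i][1] == 0:
                i += 1
            if sells[j][1] == 0:
                j += 1
        return matches

    print(greedy_clear([(10, 4), (7, 2)], [(5, 3), (8, 5)]))
    # [(10, 5, 3), (10, 8, 1)]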

11.
With the rapid advance of computing technologies, it is becoming more and more common to construct high-performance computing environments from heterogeneous commodity computers. Previous loop scheduling schemes were not designed for this kind of environment. Therefore, better loop scheduling schemes are needed to further increase the performance of emerging heterogeneous PC cluster environments. In this paper, we propose a new heuristic for the performance-based approach that partitions loop iterations according to the performance weighting of cluster/grid nodes. In particular, a new parameter is proposed to incorporate HPCC benchmark results into the performance estimation. A heterogeneous cluster and grid were built to verify the proposed approach, and three kinds of application program were implemented for execution on the cluster testbed. Experimental results show that the proposed approach performs better than previous schemes in heterogeneous computing environments.
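A minimal sketch of performance-weighted loop partitioning in the spirit of the abstract: loop iterations are split into contiguous chunks proportional to each node's performance weight. The weights below are placeholders; in the paper they would be derived from HPCC benchmark results.

    def partition_iterations(total_iters, node_weights):
        """Split [0, total_iters) into contiguous chunks whose sizes are
        proportional to each node's performance weight."""
        total_weight = sum(node_weights.values())
        chunks, start = {}, 0
        nodes = list(node_weights.items())
        for idx, (node, weight) in enumerate(nodes):
            if idx == len(nodes) - 1:      # last node absorbs the rounding remainder
                size = total_iters - start
            else:
                size = round(total_iters * weight / total_weight)
            chunks[node] = range(start, start + size)
            start += size
        return chunks

    # Example: weights stand in for per-node benchmark scores (placeholder values).
    print(partition_iterations(1000, {"node-a": 3.2, "node-b": 1.6, "node-c": 0.8}))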

12.
Market watchers have voiced criticism and suspicion of trusted computing, declaring that it will be many years before large numbers of companies and individuals benefit from this technology. This paper surveys a majority of TCG (Trusted Computing Group) members and classifies their products into TPM chips, TCG-enabled hardware, and TCG-enabled software. The data are collected from both the literature and websites. The survey indicates that trusted computing technology in China has made considerable progress in chips and hardware, whereas TCG-enabled software still faces many obstacles.

13.
14.
Automated Trust Negotiation (ATN) is an important method for establishing a trust relationship between two strangers by exchanging their access control policies and credentials. Unfortunately, ATN is not widely adopted because of the complexity and diversity of negotiation policies, especially in virtual computing environments, where the situation is worse than in traditional computing environments: a host with multiple virtual machines must be deployed with multiple negotiation policies, and all of these policies must be upgraded and checked for each virtual machine. To ease the burden on the administrator when deploying ATN access control policies and credentials in virtual computing environments, we propose an automated trust negotiation architecture called virtual automated trust negotiation (VATN), which centralizes the ATN policies and credentials for the multiple virtual machines on a physical node into a privileged virtual machine. VATN places a policy compliance checker and credential verification control in each virtual machine to improve the execution efficiency of trust negotiation. We implement VATN on the Xen virtualization platform. Finally, we discuss the correctness of policy consistency checking and analyze the performance of the VATN implementation in Xen.
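A much-simplified sketch, not VATN itself, of the policy compliance checking step at the heart of trust negotiation: each party discloses a credential only once its release policy is satisfied by what the counterpart has already disclosed. The policy representation and credential names are illustrative assumptions.

    def negotiate(policies_a, policies_b, max_rounds=10):
        """policies_*: {credential: set of counterpart credentials required
        before this credential may be disclosed}. Returns True if both
        parties eventually disclose everything (negotiation succeeds)."""
        disclosed_a, disclosed_b = set(), set()
        for _ in range(max_rounds):
            progress = False
            for cred, required in policies_a.items():
                if cred not in disclosed_a and required <= disclosed_b:
                    disclosed_a.add(cred)
                    progress = True
            for cred, required in policies_b.items():
                if cred not in disclosed_b and required <= disclosed_a:
                    disclosed_b.add(cred)
                    progress = True
            if disclosed_a == policies_a.keys() and disclosed_b == policies_b.keys():
                return True
            if not progress:        # deadlock: neither side can disclose more
                return False
        return False

    # Alice's employee ID requires Bob's business license; Bob's license is freely disclosed.
    print(negotiate({"employee_id": {"business_license"}},
                    {"business_license": set()}))   # True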

15.
Computing power is increasingly becoming a basic utility that can be bought from a provider in much the same way as electricity or water. This is the result of a long-running trend of connecting computing resources together to set up what can globally be referred to as a remote computing platform, the most up-to-date incarnation of which is the grid (Foster and Kesselman, 2003). These resources can then be shared among users, which means circulating code and the results of its execution over a network, a highly insecure practice. At the other end of the spectrum of computing devices, smart cards ([Mayes and Markantonakis, 2008] and [Hendry, 2001]) offer extremely secure but extremely limited computing capabilities. The question is thus how to bridge the gap between computational power and high security. The aim of this paper is to show how large, high-capacity remote computing architectures can interact with smart cards, which are the most widely deployed yet smallest computing systems of the information technology era, so as to improve the overall security of a global infrastructure.

16.
Since the service level agreement (SLA) is essentially used to maintain reliable quality of service between cloud providers and clients in the cloud environment, there has been a growing effort to reduce power consumption while complying with the SLA by maximizing physical machine (PM)-level utilization and by load balancing techniques in infrastructure as a service. However, with the recent introduction of container as a service by cloud providers, containers are increasingly popular and will become the major deployment model in the cloud environment, specifically in platform as a service. Therefore, reducing power consumption while complying with the SLA at the virtual machine (VM) level becomes essential. In this context, we exploit a container consolidation scheme with usage prediction to achieve these objectives. To obtain a reliable characterization of overutilized and underutilized PMs, our scheme jointly exploits the current and predicted CPU utilization, based on the local history of the considered PMs, in the container consolidation process. We demonstrate our solution through simulations on real workloads. The experimental results show that the container consolidation scheme with usage prediction reduces the power consumption, the number of container migrations, and the average number of active VMs while complying with the SLA.
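A minimal sketch of the over/underutilization test described above, assuming a simple least-squares trend over each PM's local CPU history as the usage predictor; the thresholds and sample history are illustrative, not the paper's values.

    def predict_next(history):
        """Least-squares linear fit over the recent utilization history,
        extrapolated one step ahead."""
        n = len(history)
        if n < 2:
            return history[-1]
        xs = range(n)
        mean_x, mean_y = (n - 1) / 2, sum(history) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
        var = sum((x - mean_x) ** 2 for x in xs)
        return mean_y + (cov / var) * (n - mean_x)

    def classify_pm(history, upper=0.80, lower=0.20):
        """A PM is treated as overutilized (or underutilized) only when both
        the current and the predicted utilization cross the threshold."""
        current, predicted = history[-1], predict_next(history)
        if current > upper and predicted > upper:
            return "overutilized"
        if current < lower and predicted < lower:
            return "underutilized"
        return "normal"

    print(classify_pm([0.55, 0.65, 0.78, 0.86]))   # rising load -> "overutilized"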

17.
18.
It is worthwhile to spend a small amount of additional energy to obtain a larger performance improvement, and it likewise makes sense to relax a performance constraint slightly in order to save a large amount of energy. Trading a small amount of energy for a considerable amount of performance, or vice versa, is possible if the relationship between the performance and energy of parallel programs is known precisely. This work studies that relationship by recording the speedup and energy consumption of parallel programs as the number of cores on which they run is varied. We demonstrate that the performance improvement and the increased energy consumption have a linear negative correlation. In addition, this relationship can guide performance-energy adaptation under two assumptions. Our experiments show that the average correlation coefficients between performance and energy are higher than 97%. Furthermore, we find that trading less than 6% performance loss for more than 37% energy savings is feasible, and vice versa.
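A minimal sketch of the kind of measurement behind these figures: record the performance improvement and the corresponding increase in energy consumption across configurations and compute their Pearson correlation coefficient. The sample values below are made up for illustration, not the paper's data.

    from statistics import correlation   # Python 3.10+

    # Hypothetical per-configuration measurements (placeholders, not the paper's data):
    # performance improvement and the corresponding increase in energy consumption
    # relative to a baseline core count.
    perf_improvement = [0.05, 0.12, 0.21, 0.33, 0.40]
    energy_increase  = [0.48, 0.39, 0.30, 0.18, 0.09]

    r = correlation(perf_improvement, energy_increase)
    print(f"Pearson correlation coefficient: {r:.3f}")   # negative for data like the above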

19.
Service-oriented architecture (SOA), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents some challenges. Scientific research increasingly finds it difficult to handle "big data" using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our contributions include identifying associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our findings can help indicate the future trend of e-Science and can inform funding and research directions on how to employ computing technologies more appropriately in scientific research. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.

20.
In this paper, we consider algorithms involved in the computation of the Duquenne–Guigues basis of implications. The most widely used algorithm for constructing the basis is Ganter’s Next Closure, designed for generating closed sets of an arbitrary closure system. We show that, for the purpose of generating the basis, the algorithm can be optimized. We compare the performance of the original algorithm and its optimized version in a series of experiments using artificially generated and real-life datasets. An important computationally expensive subroutine of the algorithm generates the closure of an attribute set with respect to a set of implications. We compare the performance of three algorithms for this task on their own, as well as in conjunction with each of the two algorithms for generating the basis. We also discuss other approaches to constructing the Duquenne–Guigues basis.
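A minimal sketch of the computationally expensive subroutine mentioned above: computing the closure of an attribute set with respect to a set of implications by applying implications until a fixpoint is reached (the naive counterpart of faster routines such as LinClosure).

    def closure(attrs, implications):
        """attrs: iterable of attributes; implications: list of (premise, conclusion)
        pairs of attribute sets. Returns the closure of attrs under the implications."""
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in implications:
                # Fire the implication if its premise holds but its conclusion does not yet.
                if premise <= result and not conclusion <= result:
                    result |= conclusion
                    changed = True
        return result

    implications = [({"a"}, {"b"}), ({"b", "c"}, {"d"})]
    print(closure({"a", "c"}, implications))   # {'a', 'b', 'c', 'd'}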
