Similar Documents
20 similar documents found.
1.
The increasing adoption of “client and cloud” computing raises several important security concerns. This article discusses security issues associated with “client and cloud” computing and their impact on organizations that host applications “in the cloud.” It describes how Microsoft minimizes security vulnerabilities in these, possibly mission-critical, platforms and applications by following two complementary approaches: developing the policies, practices, and technologies needed to make “client and cloud” applications as secure as possible, and managing the security of the platform environment through clearly defined operational security policies.

2.
This article presents an object-oriented mechanism for achieving group communication in large-scale grids. Group communication is a crucial feature for high-performance and grid computing. While previous work on collective communication imposed the use of dedicated interfaces, we propose a scheme in which group communications are initiated through the standard public methods of a class, by instantiating objects through a special object factory. The object factory uses casting and introspection to construct a “parallel processing enhanced” implementation of the object that matches the original class's interface. This mechanism is then extended into “Object-Oriented SPMD” (OOSPMD), an evolution of the classical SPMD programming paradigm for clusters and grids. OOSPMD provides interprocess (inter-object) communication via transparent remote method invocations rather than custom interfaces. Such typed group communication forms a basis for improving component models, allowing advanced composition of parallel building blocks. The typed group pattern leads to an interesting, uniform, and complete model for programming applications intended to run on clusters and grids.
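As a hedged illustration of the idea only, the following Python sketch uses introspection to broadcast a standard public method call to every member of a group. The names `GroupProxy` and `Worker` are hypothetical; this is not the article's actual object-factory mechanism, which generates a typed implementation matching the original class's interface.

```python
class GroupProxy:
    """Broadcasts standard method calls to every member of a group.

    Hypothetical sketch: the real mechanism builds a typed implementation
    via an object factory, so calls also pass static type checks.
    """
    def __init__(self, members):
        self._members = members

    def __getattr__(self, name):
        def broadcast(*args, **kwargs):
            # Invoke the same public method on each group member and
            # collect one result per member.
            return [getattr(m, name)(*args, **kwargs) for m in self._members]
        return broadcast

class Worker:
    def __init__(self, rank):
        self.rank = rank
    def compute(self, x):
        return x + self.rank

group = GroupProxy([Worker(0), Worker(1), Worker(2)])
results = group.compute(10)  # one ordinary-looking call, three invocations
```

A typed factory would additionally generate a subclass of `Worker` so the group object is usable wherever a `Worker` is expected; the dynamic proxy above conveys only the broadcast semantics.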

3.
Is current research on computing by older adults simply looking at a short-term problem? Or will the technology problems that plague the current generation also trouble today's tech-savvy younger generations when they become “old”? This paper considers age-related and experience-related issues that affect the ability to use new technology. Without more consideration of the skills of older users, applications and devices 20 years from now will likely have changed such that this “older” generation finds itself confronting an array of technologies that it little understands and finds generally inaccessible. Recent evidence suggests that older adults bring specific strengths to Web browsing. A fuller investigation of these strengths, and of how to design so as to exploit them, has the potential to address the need for usable technology for this increasingly important demographic.

4.
Multi-core CPUs, Clusters, and Grid Computing: A Tutorial   (total citations: 1; self-citations: 0; others: 1)
The nature of computing is changing, and this poses both challenges and opportunities for economists. Instead of increasing clock speed, future microprocessors will have multiple “cores” with separate execution units. “Threads” or other multi-processing techniques that are rarely used today will be required to take full advantage of them. Beyond a single machine, it has become easy to harness multiple computers to work in clusters. Besides dedicated clusters, these can be made up of unused lab computers or even your colleagues' machines. Finally, grids of computers spanning the Internet are now becoming a reality.
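As a hedged sketch of the “threads” idea in this abstract, the following Python example runs the same per-scenario computation sequentially and via a thread pool and checks that the results agree. The function and data are invented for illustration; note that CPython threads are limited by the global interpreter lock for CPU-bound work, so a process pool is often substituted in practice.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    # Stand-in for an independent per-scenario computation
    # (e.g., one run of an economic simulation).
    total = 0
    for i in range(1, 1000):
        total += (seed * i) % 7
    return total

seeds = list(range(8))

# Sequential baseline.
sequential = [simulate(s) for s in seeds]

# Same work farmed out to a pool of worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(simulate, seeds))

assert parallel == sequential  # same answers, computed concurrently
```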

5.
Personal computing applications are constantly increasing their potential power, thanks to steadily growing hardware capabilities and the wide diffusion of high-quality multimedia output devices. At the same time, mobile communication tools are becoming an integral part of everyday life, with new advanced functionality offered at a relentless pace. Although the way we interact with information machines (based on the keyboard, mouse, and window metaphor) has remained substantially the same for 20 years, other communication modalities are possible and may soon become popular as additional interaction methods. Given the paramount importance of the “interface” in present computer applications, no alternative should be ignored, as it could greatly improve the quality of both interaction processes and user cognitive performance. Without pretending to foresee the future, this paper provides an overview of the main current technologies that could enable novel interfaces, discussing their features, strengths, weaknesses, and promising applications.

6.
In this paper, two soft computing approaches, artificial neural networks and Gene Expression Programming (GEP), are used to predict the strength of basalts collected from the Gaziantep region of Turkey. The collected basalt samples were tested in the geotechnical engineering laboratory of the University of Gaziantep. The parameters “ultrasound pulse velocity”, “water absorption”, “dry density”, “saturated density”, and “bulk density”, determined experimentally following the procedures given in ISRM (Rock characterisation testing and monitoring. Pergamon Press, Oxford, 1981), are used to predict the “uniaxial compressive strength” and “tensile strength” of Gaziantep basalts. Neural networks are found to be quite effective in comparison with GEP and classical regression analyses in predicting the strength of the basalts. The results obtained are also useful in characterizing the Gaziantep basalts for practical applications.

7.
Summary. Formula size and depth are two important complexity measures of Boolean functions. We study the tradeoff between these two measures: we give an infinite family of Boolean functions and show, for nearly each of them, that no formula over “and”, “or”, and “negation” computes it optimally with respect to both measures. This implies a logarithmic lower bound on circuit depth.
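As a hedged illustration of the two measures themselves (not the paper's construction), the following Python snippet represents a formula over “and”, “or”, and “not” as a nested tuple and computes its size (number of variable leaves) and depth; for formulas of fan-in 2, depth is always at least log2 of size.

```python
import math

def size(f):
    """Formula size: number of variable leaves."""
    if isinstance(f, str):            # a variable leaf such as "x1"
        return 1
    _op, *args = f                    # ("and"/"or"/"not", subformulas...)
    return sum(size(a) for a in args)

def depth(f):
    """Formula depth: longest root-to-leaf path in gates."""
    if isinstance(f, str):
        return 0
    _op, *args = f
    return 1 + max(depth(a) for a in args)

# (x1 and x2) or (x3 and x4): size 4, depth 2.
f = ("or", ("and", "x1", "x2"), ("and", "x3", "x4"))

# Any fan-in-2 formula satisfies depth >= log2(size).
assert depth(f) >= math.log2(size(f))
```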

8.
Much of the ongoing research in ubiquitous computing has concentrated on providing context information, e.g. location information, to the level of services and applications. Typically, mobile clients obtain location information from their environment, which is used to provide “locally optimal” services. Conversely, it may be of interest to obtain information about the current context of a mobile user or device from a client somewhere on the Web, i.e. to use the mobile device as an information provider for Internet clients. As an instance of such services, we propose the metaphor of a “location-aware” Web homepage for mobile users, providing information such as the current location of a mobile user. Requesting this homepage can be as easy as typing a URL containing the mobile user's phone number, such as http://mhp.net/+49123456789, into an off-the-shelf browser. The homepage is constructed dynamically as Web users access it, and it can be configured in various ways controlled by the mobile user. We present the architecture and implementation and discuss issues around this example of “inverse” ubiquitous computing.

9.
General multibody system approaches are often not sufficient to yield an efficient and accurate solution in specific application settings. We concentrate on the simulation of crankshaft dynamics, which is characterized by flexible bodies and force laws describing the interaction between the bodies. The use of the floating frame of reference approach in our model leads to an index-2 DAE system. The algebraic constraints originate from the reference conditions and the normalization equation for the quaternions. For the time integration of this system, two aspects have to be taken into account: first, for efficiency, the structure of the system should be exploited and parallelization used; second, initial values that are also consistent with a related index-3 system have to be computed, in order to obtain the missing initial velocities and to reduce transient phenomena. The work of C.B. Drab, J.R. Haslinger, and R.U. Pfau is supported by the “Bundesministerium für Wirtschaft und Arbeit” and by the government of Upper Austria within the framework “Industrielle Kompetenzzentren und Netzwerke”.

10.
Stable rankings for different effort models   (total citations: 1; self-citations: 0; others: 1)
There exists a large and growing number of proposed estimation methods, but little conclusive evidence ranking one method over another. Prior effort estimation studies suffered from “conclusion instability”, where the rankings assigned to different methods were not stable across (a) different evaluation criteria; (b) different data sources; or (c) different random selections of that data. This paper reports a study of 158 effort estimation methods on data sets based on COCOMO features. Four “best” methods were detected that were consistently better than the “rest”, the other 154 methods. These rankings of “best” and “rest” methods were stable across (a) three different evaluation criteria applied to (b) multiple data sets from two different sources that were (c) divided into hundreds of randomly selected subsets using four different random seeds. Hence, while there exists no single universal “best” effort estimation method, there appears to exist a small number (four) of most useful methods. This result both complicates and simplifies effort estimation research. The complication is that any future effort estimation analysis should be preceded by a “selection study” that finds the best local estimator. The simplification is that such a study need not be labor-intensive, at least for COCOMO-style data sets.
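As a hedged illustration of how estimation methods get ranked by an evaluation criterion, the sketch below computes MMRE (mean magnitude of relative error), one typical criterion; the paper applies three criteria, and the effort figures here are invented.

```python
def mmre(actual, predicted):
    """Mean magnitude of relative error: lower is better."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical actual efforts (person-months) and two methods' estimates.
actual   = [120.0, 60.0, 200.0]
method_a = [110.0, 66.0, 190.0]
method_b = [80.0, 90.0, 260.0]

# Under this criterion, method_a ranks above method_b.
assert mmre(actual, method_a) < mmre(actual, method_b)
```

Conclusion instability is precisely the observation that such a ranking may flip when the criterion, the data source, or the random subset changes; the paper's "best four" survived all three variations.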

11.
Efficient, scalable memory allocation for multithreaded applications on multiprocessors is a significant goal of recent research. The distributed computing literature has emphasized that lock-based synchronization and concurrency control may limit the parallelism of multiprocessor systems. Thus, system services that employ such methods can hinder reaching the full potential of these systems. A natural research question is the pertinence and impact of lock-free concurrency control in key services for multiprocessors, such as the memory allocation service, which is the theme of this work. We present the design and implementation of NBmalloc, a lock-free memory allocator designed to enhance parallelism in the system. The architecture of NBmalloc is inspired by Hoard, a well-known concurrent memory allocator with a modular design that preserves scalability and helps avoid false sharing and heap blowup. As part of designing appropriate lock-free algorithms for NBmalloc, we propose and present a lock-free implementation of a new data structure, the flat-set, supporting conventional “internal” set operations as well as “inter-object” operations for moving items between flat-sets. The design of NBmalloc also involved a series of other algorithmic problems, which are discussed in the paper. Further, we present the implementation of NBmalloc and a study of its behaviour on a set of multiprocessor systems. The results show that the good properties of Hoard with respect to false sharing and heap blowup are preserved.

12.
Summary. This paper proposes a framework for detecting global state predicates in systems of processes with approximately-synchronized real-time clocks. Timestamps from these clocks are used to define two orderings on events: “definitely occurred before” and “possibly occurred before”. These orderings lead naturally to definitions of three distinct detection modalities, i.e., three meanings of “predicate held during a computation”, namely: “possibly held”, “definitely held”, and “definitely held in a specific global state”. This paper defines these modalities and gives efficient algorithms for detecting them. The algorithms are based on algorithms of Garg and Waldecker, Alagar and Venkatesan, Cooper and Marzullo, and Fromentin and Raynal. Complexity analysis shows that, under reasonable assumptions, these real-time-clock-based detection algorithms are less expensive than detection algorithms based on Lamport's happened-before ordering. Sample applications are given to illustrate the benefits of this approach. Received: January 1999 / Accepted: November 1999
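A hedged sketch of the two orderings: if each clock is synchronized to within `eps` of real time, an event timestamped `t` actually occurred somewhere in `[t - eps, t + eps]`, and the two relations reduce to interval comparisons. The function names and the strict inequalities below are illustrative assumptions, not the paper's exact definitions.

```python
def definitely_before(t1, t2, eps):
    # e1 definitely occurred before e2: even e1's latest possible real
    # time precedes e2's earliest possible real time.
    return t1 + eps < t2 - eps

def possibly_before(t1, t2, eps):
    # e1 possibly occurred before e2: e1's earliest possible real time
    # precedes e2's latest possible real time.
    return t1 - eps < t2 + eps

eps = 0.5
assert definitely_before(1.0, 3.0, eps)       # intervals far apart
assert not definitely_before(1.0, 1.6, eps)   # intervals overlap...
assert possibly_before(1.0, 1.6, eps)         # ...so only "possibly"
```

With well-synchronized clocks (small `eps`), “definitely before” holds for most event pairs, which is what makes these detection algorithms cheaper than ones based only on Lamport's happened-before ordering.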

13.
About the Collatz conjecture   (total citations: 1; self-citations: 0; others: 1)
This paper concerns the Collatz conjecture. The origin and formalization of the Collatz problem are presented in the first section, “Introduction”. In the second section, “Properties of the Collatz function”, we treat mainly the bijectivity of the Collatz function. Using the obtained results, we construct a (set of) binary tree(s) which “simulate(s)”, in a way that will be specified, the computation of the values of the Collatz function. In the third section, we give an “efficient” algorithm for computing the number of iterations (recursive calls) of the Collatz function. A comparison between our algorithm and the standard one is also presented, the former being at least 2.25 times “faster” (3.00 times on average). Finally, we describe a class of natural numbers for which the conjecture is true. Received 28 April 1997 / 10 June 1997
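The paper's iteration-counting algorithm is its own contribution; as a hedged baseline, the standard recursive count with memoization can be sketched in Python as follows (27 is a classic example, needing 111 iterations to reach 1).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def collatz_steps(n):
    """Number of Collatz iterations needed to reach 1 from n >= 1."""
    if n == 1:
        return 0
    # Halve when even, apply 3n + 1 when odd; memoization reuses
    # counts for every value already visited on earlier trajectories.
    return 1 + collatz_steps(n // 2 if n % 2 == 0 else 3 * n + 1)

assert collatz_steps(27) == 111
```

The paper's comparison (“at least 2.25 times faster”) is against this kind of standard counting, not against the memoized variant specifically.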

14.
Given an ordered labeled forest F (“the target forest”) and an ordered labeled forest G (“the pattern forest”), the most similar subforest problem is to find a subforest F′ of F such that the forest edit distance between F′ and G is minimum over all possible F′. This problem generalizes several well-studied problems with important applications in locating patterns in hierarchical structures such as RNA secondary structures and XML documents. Algorithms exist in the literature for the most similar subforest problem restricted to subforests that are either rooted subtrees or simple substructures; in this article, we show how to solve the most similar subforest problem for two other types of subforests: sibling substructures and closed subforests.
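Forest edit distance generalizes the classic string edit distance from sequences to ordered labeled forests (inserting, deleting, and relabeling nodes instead of characters). As a hedged illustration of the underlying dynamic-programming style only, and not of the article's forest algorithms, here is the sequence case:

```python
def edit_distance(s, t):
    """Classic Levenshtein dynamic program; forest edit distance
    generalizes this recurrence from sequences to forests."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                               # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                               # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1   # relabel cost
            d[i][j] = min(d[i - 1][j] + 1,            # delete
                          d[i][j - 1] + 1,            # insert
                          d[i - 1][j - 1] + cost)     # match/relabel
    return d[m][n]

# One deletion turns "GCAUU" into "GCUU" (an RNA-flavored example).
assert edit_distance("GCAUU", "GCUU") == 1
```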

15.
When the Discrete Fourier Transform of an image is computed, the image is implicitly assumed to be periodic. Since there is no reason for opposite borders to be alike, the “periodic” image generally presents strong discontinuities across the frame border. These edge effects cause several artifacts in the Fourier Transform, in particular a well-known “cross” structure made of high-energy coefficients along the axes, which can have strong consequences for image processing or image analysis techniques based on the image spectrum (including interpolation, texture analysis, image quality assessment, etc.). In this paper, we show that an image can be decomposed into the sum of a “periodic component” and a “smooth component”, which gives a simple and computationally efficient answer to this problem. We discuss the usefulness of such a decomposition in several applications.
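A hedged one-dimensional analogue (a deliberate simplification, not the paper's 2D algorithm): subtracting a linear ramp, playing the role of the “smooth component”, leaves a remainder whose endpoints agree, so its periodic extension has no jump at the border.

```python
def periodic_smooth_1d(u):
    """Split a 1D signal into u = periodic + smooth, where 'smooth' is
    a linear ramp absorbing the wrap-around discontinuity.
    (Illustrative 1D analogue of the 2D periodic-plus-smooth idea.)"""
    n = len(u)
    jump = u[-1] - u[0]
    # Zero-mean linear ramp carrying the endpoint mismatch.
    smooth = [jump * (i - (n - 1) / 2) / (n - 1) for i in range(n)]
    periodic = [ui - si for ui, si in zip(u, smooth)]
    return periodic, smooth

u = [0.0, 1.0, 2.0, 3.0, 10.0]   # strong jump between last and first sample
p, s = periodic_smooth_1d(u)

# The decomposition is exact and the remainder's endpoints now match,
# so periodizing p introduces no border discontinuity.
assert all(abs(pi + si - ui) < 1e-12 for pi, si, ui in zip(p, s, u))
assert abs(p[0] - p[-1]) < 1e-12
```

In 2D the smooth component instead solves a discrete Laplace-type problem driven by the border jumps, but the principle is the same: move the discontinuity out of the component whose spectrum you analyze.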

16.
Multirelational classification: a multiple view approach   (total citations: 1; self-citations: 0; others: 1)
Multirelational classification aims at discovering useful patterns across multiple interconnected tables (relations) in a relational database. Many traditional learning techniques, however, assume a single table or a flat file as input (the so-called propositional algorithms). Existing multirelational classification approaches either “upgrade” mature propositional learning methods to deal with the relational representation or extensively “flatten” multiple tables into a single flat file, which is then handled by propositional algorithms. This article reports a multiple-view strategy, in which neither “upgrading” nor “flattening” is required, for mining in relational databases. Our approach learns from multiple views (feature sets) of a relational database and then integrates the information acquired by the individual view learners to construct a final model. Our empirical studies show that the method compares well with the classifiers induced by the majority of multirelational mining systems, in terms of accuracy obtained and running time needed. The paper explores the implications of this finding for multirelational research and applications. In addition, the method has practical significance: it is appropriate for directly mining many real-world databases.
Herna L. Viktor

17.
The HaLoop approach to large-scale iterative data analysis   (total citations: 1; self-citations: 0; others: 1)
The growing demand for large-scale data mining and data analysis applications has led both industry and academia to design new types of highly scalable data-intensive computing platforms. MapReduce has enjoyed particular success. However, MapReduce lacks built-in support for iterative programs, which arise naturally in many applications, including data mining, web ranking, graph analysis, and model fitting. This paper (an extended version of the VLDB 2010 paper “HaLoop: Efficient Iterative Data Processing on Large Clusters”, PVLDB 3(1):285–296, 2010) presents HaLoop, a modified version of the Hadoop MapReduce framework designed to serve these applications. HaLoop allows iterative applications to be assembled from existing Hadoop programs without modification, and it significantly improves their efficiency by providing inter-iteration caching mechanisms and a loop-aware scheduler to exploit these caches. HaLoop retains the fault-tolerance properties of MapReduce through automatic cache recovery and task re-execution. We evaluated HaLoop on a variety of real applications and real datasets. Compared with Hadoop, on average, HaLoop improved runtimes by a factor of 1.85 and shuffled only 4% as much data between mappers and reducers in the applications we tested.
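A hedged in-memory sketch of what inter-iteration caching buys (hypothetical code, not HaLoop's API): in the PageRank-style loop below, the link structure is loop-invariant and is held fixed across iterations, while only the rank vector is recomputed. HaLoop's caches play the analogous role across MapReduce jobs, sparing the invariant data from being re-read and re-shuffled on every iteration.

```python
def pagerank(links, iters=20, d=0.85):
    """Toy PageRank over an adjacency dict {node: [out-neighbors]}.
    `links` is the loop-invariant data; only `rank` changes per pass."""
    nodes = list(links)                      # loaded once, reused every pass
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for n, outs in links.items():        # invariant structure, "cached"
            for m in outs:
                new[m] += d * rank[n] / len(outs)
        rank = new                           # only this state is rewritten
    return rank

links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
rank = pagerank(links)
assert abs(sum(rank.values()) - 1.0) < 1e-6  # total rank is conserved
```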

18.
Recent advances in hardware and software technologies for computer games have proved more than capable of delivering quite detailed virtual environments on PC platforms and gaming consoles for so-called “serious” applications, at a fraction of the cost of 8 years ago. SubSafe is a recent example of what can be achieved in part-task naval training applications using gaming technologies, exploiting freely available, freely distributable software. SubSafe is a proof-of-concept demonstrator that presents end users with an interactive, real-time three-dimensional model of part of a Trafalgar Class submarine. This “Part 1” paper presents the background to the SubSafe project and outlines the experimental design for a pilot study being conducted between August 2008 and January 2009, in conjunction with the Royal Navy's Submarine School in Devonport. The study is investigating knowledge transfer from the classroom to a real submarine environment (during week 7 of the students' “Submarine Qualification Dry” course), together with general usability and interactivity assessments. Part 2 of the paper (to be completed in early 2009) will present the results of these trials and consider future extensions of the research into other submarine training domains, including periscope ranging and look-interval assessment skills, survival systems deployment training, and the planning and rehearsal of submersible rescue operations.

19.
The automated software system “Black Square”, Version 1.2, is described. The system is intended for the automation of image processing, analysis, and recognition. It is an open system for generating new knowledge: objects, image processing algorithms, recognition procedures originally not intended for image processing, and methods for solving applied problems. The system combines the features of information retrieval, reference, training, and computing systems. This work was partially supported by the Russian Foundation for Basic Research, project nos. 03-07-90406, 05-04-49846, and 05-07-08000; by INTAS grant no. 04-77-7067; by the cooperative grant “Image Analysis and Synthesis: Theoretical Foundations and Prototypical Applications in Medical Imaging” under the agreement between the Italian National Research Council and the Russian Academy of Sciences (RAS); and by a grant of the RAS in the framework of the Program “Fundamental Science to Medicine”. An erratum to this article is available.

20.
Environmental data are in general imprecise and uncertain, but they are located in space and therefore obey spatial constraints. “Spatial analysis” is a (natural) reasoning process through which geographers take advantage of these constraints to reduce this uncertainty and improve their beliefs. Automating this process is a genuinely hard problem. We propose here the design of a revision operator able to perform spatial analysis in the context of one particular “application profile”: it identifies objects bearing the same variable, bound through local constraints. The formal background on which this operator is built is a decision algorithm from Reiter [9]; the heuristics that make this algorithm tractable in a full-scale application are special patterns for clauses and the “spatial confinement” of conflicts. This operator is “anytime”: because it uses “samples” and works on small (tractable) blocks, then reaggregates the partial revision results on larger blocks, we call it a “hierarchical block revision” operator. Finally, we illustrate a particular application: flood propagation. This is, of course, only one possible “soft computing” approach for geographic applications. On leave at: Centre de Recherche en Géomatique, Pavillon Casault, Université Laval, Québec, Qc, Canada G1K 7P4; Université de Toulon et du Var, Avenue de l'Université, BP 132, 83957 La Garde Cedex, France. This work is currently supported by the European Community under the IST-1999-14189 project.
