Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
A map is a data structure that is commonly used to store data as key–value pairs and to retrieve data as keys, values, or key–value pairs. Although Java offers several map implementation classes, the Android SDK offers alternative implementations that are supposed to be more efficient than HashMap: ArrayMap and the SparseArray variants (SparseArray, LongSparseArray, SparseIntArray, SparseLongArray, and SparseBooleanArray). Yet information about the performance of these implementations in terms of CPU time, memory usage, and energy consumption is lacking in the official Android documentation, even though saving CPU, memory, and energy is a major concern of users who want to extend battery life. Consequently, we study the use of map implementations by Android developers in two ways. First, we perform an observational study of 5713 Android apps on GitHub. Second, we conduct a survey to assess developers' perspective on Java and Android map implementations. Then, we perform an experimental study comparing the HashMap, ArrayMap, and SparseArray variant map implementations in terms of CPU time, memory usage, and energy consumption. We conclude with guidelines for choosing among the map implementations: HashMap is preferable over ArrayMap to improve the energy efficiency of apps, and SparseArray variants should be used instead of HashMap and ArrayMap when keys are primitive types.
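As a minimal illustration of the last guideline (our sketch, not code from the study, and assuming an Android environment for android.util.SparseArray), the snippet below contrasts a HashMap, whose int keys must be autoboxed to Integer objects, with a SparseArray keyed directly on primitive ints; the class and method names around the map calls are ours.

```java
import android.util.SparseArray;
import java.util.HashMap;

// Illustrative only: with int keys, SparseArray<String> avoids the Integer boxing
// and Entry objects that HashMap<Integer, String> requires.
public class MapChoiceExample {
    static HashMap<Integer, String> boxedMap() {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(42, "value");          // the int key 42 is autoboxed to an Integer object
        return map;
    }

    static SparseArray<String> primitiveKeyMap() {
        SparseArray<String> map = new SparseArray<>();
        map.put(42, "value");          // the key stays a primitive int, no boxing
        return map;
    }
}
```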

2.
We define and prove a formal semantics divided into two complementary interacting components: the strictly linguistic (i.e. linguistically marked) semantics, which we call the linguistic agent (LA), and the strictly logical and referential semantics, which we call the rational agent (RA). This Linguistic \(\leftrightarrow \) Rational Agents' Semantics (LRA semantics) applies to Deep Dependency trees (DD-trees) or, more generally, to discourses, i.e. sequences of DD-trees, and interprets them by functional structures we call Meaning Representation Structures (MRS), similar to DRT but interpreted very differently. LRA semantics incrementally interprets discourses by minimal finite models, called proto-models, in a monotonic logic of the LA, and checks the proto-models against the classical models of the RA. The proto-model is regarded as the linguistic sense of the discourse. We define the LA in full detail and believe it must be universal. On the other hand, we do not propose a particular RA; we only define the scheme of interaction between the two agents and the stimuli of the RA used by the LA. As a result, in LRA semantics every discourse has a single meaning and, for every Rational Agent interacting with the Linguistic Agent, a single sense.

3.
An algorithm (called FTM) for scheduling real-time sporadic tasks on a multicore platform is proposed. Each task has a deadline by which it must complete its non-erroneous execution. The FTM algorithm executes backups in order to recover from errors caused by non-permanent and permanent hardware faults. The worst-case schedulability analysis of the FTM algorithm is presented for an application-level error model that is independent of the stochastic behavior of the underlying hardware-level fault model. The stochastic behavior of the hardware-level fault model is then plugged into the analysis to derive the probability of meeting all deadlines. This probabilistic guarantee is the level of assurance (i.e., reliability) regarding the correct functional and timing behavior of the system. One of the salient features of the FTM algorithm is that it executes some backups with active redundancy, to exploit the parallel multicore architecture, while executing other backups passively, to avoid unnecessarily running too many active backups. This paper also proposes a scheme to determine, for each task, the number of backups that should run with active redundancy in order to increase the probability of meeting all deadlines. The effectiveness of the proposed approach is demonstrated using an example application.
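As a back-of-the-envelope sketch of why replicating backups raises reliability (our generic illustration, not the FTM schedulability analysis), if each execution of a task fails independently with probability p, then the chance that all k replicas fail is p^k, and the smallest k with 1 - p^k at or above a reliability target can be found directly; the helper name and parameters below are hypothetical.

```java
// Hypothetical helper, not the FTM analysis: smallest replica count k such that
// at least one of k independent executions succeeds with probability >= target,
// assuming each execution fails independently with probability failureProb.
static int minReplicas(double failureProb, double target) {
    int k = 1;
    double allFail = failureProb;          // probability that all k replicas fail
    while (1.0 - allFail < target) {
        k++;
        allFail *= failureProb;
    }
    return k;
}
// Example: minReplicas(0.01, 0.99999) returns 3.
```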

4.
5.
Wearable apps have become increasingly popular in recent years. Nevertheless, to date, very few studies have examined the issues that wearable apps face. Prior studies showed that user reviews contain a plethora of insights that can be used to understand quality issues and help developers build better-quality mobile apps. Therefore, in this paper, we mine user reviews in order to understand user complaints about wearable apps. We manually sample and categorize 2,667 reviews from 19 Android wearable apps. Additionally, we examine the replies posted by developers in response to user complaints. This allows us to determine the types of complaints that developers care about the most and to identify problems that, despite being important to users, do not receive a proper response from developers. Our findings indicate that the most frequent complaints are related to Functional Errors, Cost, and Lack of Functionality, whereas the most negatively impacting complaints are related to Installation Problems, Device Compatibility, and Privacy & Ethical Issues. We also find that developers mostly reply to complaints related to Privacy & Ethical Issues, Performance Issues, and notification-related issues. Furthermore, we observe that when developers reply, they tend to provide a solution, request more details, or let the user know that they are working on a solution. Lastly, we compare our findings on wearable apps with the study by Khalid et al. (2015) on handheld devices. We find that some complaint types that appear in handheld apps also appear in wearable apps, though wearable apps have unique issues related to Lack of Functionality, Installation Problems, Connection & Sync, Spam Notifications, and Missing Notifications. Our results highlight the issues that users of wearable apps face the most, as well as the issues to which developers should pay additional attention because of their negative impact.

6.
We consider the neighbourhood load balancing problem. Given a network of processors and an arbitrary distribution of tasks over the network, the goal is to balance load by exchanging tasks between neighbours. In the continuous model, tasks can be divided arbitrarily and a perfectly balanced state can always be reached. This is not possible in the discrete model, where tasks are non-divisible. In this paper we consider the problem in a very general setting, where tasks can have arbitrary weights and nodes can have different speeds. Given a continuous load balancing algorithm that balances the load perfectly in \(T\) rounds, we convert it into a discrete version. The new algorithm is deterministic and balances the load in \(T\) rounds so that the difference between the average and the maximum load is at most \(2d\cdot w_{\max }\), where \(d\) is the maximum degree of the network and \(w_{\max }\) is the maximum weight of any task. For general graphs, these bounds are asymptotically lower than previous results. The proposed conversion scheme can be applied to a wide class of continuous processes, including first- and second-order diffusion, dimension exchange, and random matching processes. For the case of identical tasks, we present a randomized version of our algorithm that balances the load up to a discrepancy of \(\mathscr {O}(\sqrt{d \log n})\), provided that the initial load on every node is large enough.
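For context, one round of first-order diffusion in the continuous model (divisible load) can be sketched as follows; this is a generic textbook-style iteration, not the authors' discrete conversion scheme, and the names and the diffusion parameter alpha are illustrative.

```java
// One round of first-order diffusion on a graph (continuous model, divisible load):
// node i moves alpha * (load[i] - load[j]) across each edge (i, j), so heavier nodes
// shed load to lighter neighbours. load[i] is node i's load, adj[i] lists its
// neighbours, and alpha is a small constant such as 1 / (maxDegree + 1).
static void diffusionRound(double[] load, int[][] adj, double alpha) {
    double[] next = load.clone();
    for (int i = 0; i < load.length; i++) {
        for (int j : adj[i]) {
            next[i] -= alpha * (load[i] - load[j]);   // negative term means i receives load
        }
    }
    System.arraycopy(next, 0, load, 0, load.length);  // total load is conserved
}
```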

7.
Operating systems code is often developed according to principles such as simplicity, low overhead, and a low memory footprint. Schedulers are no exception. A scheduler is usually developed with flexibility in mind, which restricts its ability to provide real-time guarantees. Moreover, even when schedulers can provide real-time guarantees, these guarantees are rarely quantified using theoretical analysis that carries over to the implementation. To analyze the guarantees offered by operating systems' schedulers, we developed a publicly available tool that analyzes timing properties extracted from the execution of a set of threads and computes lower and upper bounds on the supply function offered by the execution platform, together with information about migrations and statistics on execution times. rt-muse evaluates the impact of many application and platform characteristics, including the scheduling algorithm, the amount of available resources, the usage of shared resources, and the memory access overhead. Using rt-muse, we show the impact of Linux scheduling classes, shared data, and application parallelism on the delivered computing capacity. The tool provides useful insights into the runtime behavior of applications and the scheduler. In the reported experiments, rt-muse detected some issues with the real-time Linux scheduler: despite having available cores, Linux does not migrate SCHED_RR threads that are enqueued behind SCHED_FIFO threads with the same priority.

8.
This paper presents ibvdev, a scalable and efficient low-level Java message-passing communication device over InfiniBand. The continuous increase in the number of cores per processor underscores the need for efficient communication support for parallel solutions. Moreover, current system deployments aggregate a significant number of cores through advanced network technologies, such as InfiniBand, increasing the complexity of communication protocols, especially when dealing with hybrid shared/distributed memory architectures such as clusters. Here, Java is an attractive choice for developing communication middleware for these systems, as it provides built-in networking and multithreading support. As the performance gap between Java and compiled languages has been narrowing in recent years, Java is an emerging option for High Performance Computing (HPC). The communication middleware ibvdev increases the performance of Java applications on clusters of multicore processors interconnected via InfiniBand by: (1) providing Java with direct access to InfiniBand through the InfiniBand Verbs API, access that has so far been largely restricted to MPI libraries; (2) implementing an efficient and scalable communication protocol that obtains start-up latencies and bandwidths similar to MPI performance results; and (3) allowing its integration in any Java parallel and distributed application. In fact, it has been successfully integrated in the Java messaging library MPJ Express. The experimental evaluation of this middleware on an InfiniBand cluster of multicore processors has shown significant point-to-point performance benefits: up to 85% lower start-up latency and twice the bandwidth compared to previous Java middleware on InfiniBand. Additionally, the impact of ibvdev on message-passing collective operations is significant, achieving up to one order of magnitude performance increases compared to previous Java solutions, especially when combined with multithreading. Finally, the efficiency of this middleware, which is even competitive with MPI in terms of performance, increases the scalability of communication-intensive Java HPC applications.
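For context, MPJ Express applications are written against an mpiJava-style API; the ping-pong sketch below is our illustration (not code from the paper) of the kind of point-to-point exchange whose start-up latency and bandwidth ibvdev is reported to improve. The method names follow the mpiJava 1.2 convention and should be checked against the MPJ Express release in use.

```java
import mpi.MPI;

// Minimal MPJ Express-style ping-pong between ranks 0 and 1 (illustrative sketch;
// the API shown follows the mpiJava 1.2 convention assumed here).
public class PingPong {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int[] buf = new int[1024];
        if (rank == 0) {
            MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.INT, 1, 0);  // send to rank 1
            MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.INT, 1, 0);  // wait for the echo
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.INT, 0, 0);
            MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.INT, 0, 0);
        }
        MPI.Finalize();
    }
}
```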

9.
Human experts, as well as autonomous agents in a referral network, must decide whether to accept a task or refer it to a more appropriate expert, and if so, to whom. In order for the referral network to improve over time, the experts must learn to estimate the topical expertise of other experts. This article extends concepts from Multi-agent Reinforcement Learning and Active Learning to distributed learning in referral networks. Among a wide array of algorithms evaluated, Distributed Interval Estimation Learning (DIEL), based on Interval Estimation Learning, was found to be superior for learning appropriate referral choices, compared to ε-Greedy, Q-learning, Thompson Sampling, and Upper Confidence Bound (UCB) methods. In addition to a synthetic data set, we compare the performance of the stronger learning-to-refer algorithms on a referral network of high-performance Stochastic Local Search (SLS) SAT solvers, where expertise does not obey any known parameterized distribution. An evaluation of overall network performance and a robustness analysis are conducted across the learning algorithms, with an emphasis on capacity constraints and evolving networks, in which experts with known expertise drop off and new experts of unknown performance enter; such situations arise in real-world scenarios but were heretofore ignored.
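Interval Estimation Learning, on which DIEL is based, can be sketched generically as follows (our illustration, not the article's DIEL variant): each candidate expert keeps a running reward estimate, and the expert whose upper confidence-interval bound on the mean reward is largest is chosen for the next referral. All names and the choice of confidence interval below are illustrative.

```java
// Generic Interval Estimation action selection (illustrative, not DIEL itself):
// pick the expert whose sample mean plus a confidence-interval half-width is largest.
// count[i], sumReward[i], sumSqReward[i] track observations for expert i;
// z is the critical value for the desired confidence level (e.g. 1.96 for ~95%).
static int selectExpert(int[] count, double[] sumReward, double[] sumSqReward, double z) {
    int best = 0;
    double bestUpper = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < count.length; i++) {
        if (count[i] == 0) return i;                          // try untested experts first
        double mean = sumReward[i] / count[i];
        double var = Math.max(0.0, sumSqReward[i] / count[i] - mean * mean);
        double upper = mean + z * Math.sqrt(var / count[i]);  // upper confidence bound
        if (upper > bestUpper) { bestUpper = upper; best = i; }
    }
    return best;
}
```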

10.
Graphs appear in numerous applications, including cyber security, the Internet, social networks, protein networks, recommendation systems, citation networks, and many more. Graphs with millions or even billions of nodes and edges are commonplace. How can such large graphs be stored efficiently? What are the core operations/queries on those graphs? How can graph queries be answered quickly? We propose Gbase, an efficient analysis platform for large graphs. The key novelties lie in (1) our storage and compression scheme for parallel, distributed settings and (2) the carefully chosen graph operations and their efficient implementations. We designed and implemented an instance of Gbase using MapReduce/Hadoop. Gbase provides a parallel indexing mechanism for graph operations that both saves storage space and accelerates query responses. We run numerous experiments on real and synthetic graphs, spanning billions of nodes and edges, and show that Gbase is indeed fast, scalable, and nimble, with significant savings in space and time.

11.
The real-time simulation of multibody models on embedded systems is of particular interest for controllers and observers, such as model predictive controllers and state observers, which rely on a dynamic model of the process and are customarily executed in electronic control units. This work first identifies the software techniques and tools required to easily write efficient code for multibody models to be simulated on ARM-based embedded systems. Automatic Programming and Source Code Translation are the two techniques chosen to generate source code for multibody models in different programming languages. Automatic Programming is used to generate procedural code in an intermediate representation from an object-oriented library, and Source Code Translation is used to translate the intermediate representation automatically to an interpreted language or to a compiled language for efficiency purposes. An implementation of these techniques is proposed. It is based on a Python template engine and AST walkers for Source Code Generation, and on a model-driven translator for Source Code Translation. The code is translated from a metalanguage to any of the following four programming languages: Python-Numpy, Matlab, C++-Armadillo, and C++-Eigen. Two examples of multibody models were simulated: a four-bar linkage with multiple loops and a 3D vehicle steering system. The code for these examples was generated and executed on two ARM-based single-board computers. Using compiled languages, both models could be simulated faster than real time despite the limited resources and performance of these embedded systems. Finally, the real-time performance of both models was evaluated when executed in hard real-time on Xenomai on both embedded systems. This work shows, through measurements, that Automatic Programming and Source Code Translation are valuable techniques for developing real-time multibody models to be used in embedded observers and controllers.

12.
In this article, a filter feature weighting technique for attribute selection in classification problems, called LIA, is proposed. It has two main characteristics. First, unlike most feature weighting methods, it is able to consider attribute interactions in the weighting process rather than only evaluating single features. Attribute subsets are evaluated by projecting instances onto a grid defined by the attributes in the subset. Then, the joint relevance of the subset is computed by measuring the information present in the cells of the grid. The final weight of each attribute is computed by taking into account its performance in each of the grids in which it participates. Second, many real problems have low signal-to-noise ratios, due, for instance, to high noise levels, class overlap, class imbalance, or small training samples. LIA computes reliable local information for each cell by estimating the number of target-class instances not due to chance, given a confidence value. In order to study its properties, LIA has been evaluated on a collection of 18 real datasets and compared to two feature weighting methods (Chi-Squared and ReliefF) and a subset feature selection algorithm (CFS). Results show that the method is significantly better in many cases and never significantly worse. LIA has also been tested with different grid dimensions (1, 2, and 3). The method works best when evaluating attribute subsets larger than 1, showing the usefulness of considering attribute interactions.
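The grid projection can be pictured with a toy sketch (ours, and deliberately simplified: it only collects per-cell class counts, without the chance-corrected target-class estimate that LIA actually uses); all names, the [0, 1) value scaling, and the bin count are assumptions for illustration.

```java
// Toy sketch of projecting instances onto a 2-attribute grid (illustrative only;
// LIA's actual per-cell score corrects the target-class count for chance at a given
// confidence level, which is not reproduced here).
// values[i][a] is the value of attribute a for instance i, scaled to [0, 1);
// label[i] is 1 for the target class, 0 otherwise; bins is the grid resolution.
static int[][][] projectToGrid(double[][] values, int[] label, int attrA, int attrB, int bins) {
    int[][][] cell = new int[bins][bins][2];        // cell[x][y] = {negatives, positives}
    for (int i = 0; i < values.length; i++) {
        int x = Math.min(bins - 1, (int) (values[i][attrA] * bins));
        int y = Math.min(bins - 1, (int) (values[i][attrB] * bins));
        cell[x][y][label[i]]++;                     // accumulate class counts per cell
    }
    return cell;
}
```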

13.
Two mobile agents, starting from different nodes of a network at possibly different times, have to meet at the same node. This problem is known as rendezvous. Agents move in synchronous rounds. Each agent has a distinct integer label from the set \(\{1,\ldots ,L\}\). Two main efficiency measures of rendezvous are its time (the number of rounds until the meeting) and its cost (the total number of edge traversals). We investigate tradeoffs between these two measures. A natural benchmark for both time and cost of rendezvous in a network is the number of edge traversals needed for visiting all nodes of the network, called the exploration time. Hence we express the time and cost of rendezvous as functions of an upper bound \(E\) on the time of exploration (where \(E\) and a corresponding exploration procedure are known to both agents) and of the size \(L\) of the label space. We present two natural rendezvous algorithms. Algorithm Cheap has cost \(O(E)\) (and, in fact, a version of this algorithm for the model where the agents start simultaneously has cost exactly \(E\)) and time \(O(EL)\). Algorithm Fast has both time and cost \(O(E\log L)\). Our main contributions are lower bounds showing that, perhaps surprisingly, these two algorithms capture the tradeoffs between time and cost of rendezvous almost tightly. We show that any deterministic rendezvous algorithm of cost asymptotically \(E\) (i.e., of cost \(E+o(E)\)) must have time \(\varOmega (EL)\). On the other hand, we show that any deterministic rendezvous algorithm with time complexity \(O(E\log L)\) must have cost \(\varOmega (E\log L)\).

14.
ReFlO is a framework and interactive tool for recording and systematizing the domain knowledge used by experts to derive complex pipe-and-filter (PnF) applications. Domain knowledge is encoded as transformations that alter PnF graphs by refinement (adding more details), flattening (removing modular boundaries), and optimization (replacing inefficient PnF graphs with more efficient ones). All three kinds of transformations arise in reverse-engineering legacy PnF applications. We present the conceptual foundation and tool capabilities of ReFlO, illustrate how parallel PnF applications are designed and generated, and show how domain-specific libraries of transformations are developed.

15.
An important concern for the efficient use of distributed computing is load balancing, which ensures that all available nodes and their shared resources are exploited equally. In large-scale systems such as volunteer computing platforms and desktop grids, centralized solutions may introduce performance bottlenecks and single points of failure. Accordingly, fully distributed alternatives have been considered due to their inherent robustness and reliability. In extremely dynamic contexts, scheduling middleware should adapt its job scheduling policies to the actual availability of resources and overcome the volatility and heterogeneity typical of the underlying nodes. To deal with the dynamicity of a large pool of resources, self-organizing and adaptive solutions are a promising research direction. Solutions based on bio-inspired methodologies are particularly suitable, as they inherently provide the desired features. In this paper we present a fully distributed load balancing mechanism, called ozmos, which aims to increase the efficiency of distributed computing systems through peer-to-peer interaction between nodes. The proposed algorithm is based on a Chord overlay and employs ant-like agents to spread information about the current load on each node, to reschedule tasks from overloaded systems to underloaded ones, and to relocate incompatible tasks to suitable resources in heterogeneous grids. By means of several evaluation scenarios, we demonstrate the effectiveness of the proposed solution in achieving system-wide load balancing with both homogeneous and heterogeneous resources. In particular, we consider the load balancing performance of our approach, its scalability, and its communication efficiency.

16.
We consider the problem of implementing transactional memory in large-scale distributed networked systems. We present Spiral, a novel distributed directory-based protocol for transactional memory, analyze it theoretically, and evaluate it experimentally to establish the worst-case performance boundaries of this approach. Spiral is designed for the data-flow distributed implementation of software transactional memory, which supports three basic operations: publish, allowing a shared object to be inserted into the directory so that other nodes can find it; lookup, providing a read-only copy of the object to the requesting node; and move, allowing the requesting node to write the object locally after it obtains the object. The protocol runs on a hierarchical directory construction based on sparse covers, where clusters at each level are ordered to avoid race conditions while serving concurrent requests. Given a shared object, the protocol maintains a directory path pointing to the object. The basic idea is to use “spiral” paths that grow outward to search for the directory path of the object in a bottom-up fashion. For general networks, this protocol guarantees an \(\mathcal{O}(\log ^2 n\cdot \log D)\) approximation in sequential and one-shot concurrent executions of a finite set of move requests, where \(n\) is the number of nodes and \(D\) is the diameter of the network. It also guarantees a poly-log approximation for any single lookup request. Our bounds are deterministic and hold in the worst case. Moreover, the protocol requires only polylogarithmic bits of memory per node. Experimental evaluations in real networks also confirm our theoretical findings. To the best of our knowledge, this is the first deterministic consistency protocol for distributed transactional memory that achieves a poly-log approximation in general networks.
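For reference, the three directory operations named above can be summarized as an interface; this is our shorthand, and the method names and signatures are not taken from the Spiral paper.

```java
// Sketch of the three directory operations described above (illustrative signatures).
interface DistributedDirectory<K, V> {
    void publish(K id, V object);  // insert a shared object so other nodes can find it
    V lookup(K id);                // obtain a read-only copy at the requesting node
    V move(K id);                  // obtain the object for local writing at the requesting node
}
```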

17.
This paper focuses on application-level improvements to a sparse direct solver used specifically for the large-scale unsymmetrical linear equations resulting from unstructured-mesh discretization of coupled elliptic/hyperbolic PDEs. Existing sparse direct solvers are designed for distributed server systems and take advantage of both distributed memory and distributed processing units. We conducted extensive numerical experiments with three state-of-the-art direct linear solvers that can work on distributed-memory parallel architectures, namely MUMPS (MUMPS solver website, http://graal.ens-lyon.fr/MUMPS), WSMP (Technical Report TR RC-21886, IBM, Watson Research Center, Yorktown Heights, 2000), and SUPERLU_DIST (ACM Trans Math Softw 29(2):110–140, 2003). The performance of these solvers was analyzed in detail using advanced analysis tools such as the Tuning and Analysis Utilities (TAU) and the Performance Application Programming Interface (PAPI). Performance is evaluated with respect to robustness, speed, scalability, and efficiency in CPU and memory usage. We have identified application-level issues that we believe can improve the performance of a distributed-shared memory hybrid variant of this solver, which is proposed as an alternative solver, SuperLU_MCDT (Many-Core Distributed), in this paper. The new solver, which uses MPI/OpenMP hybrid programming, is specifically tuned to handle the large unsymmetrical systems arising in reservoir simulations, so that higher performance and better scalability can be achieved on a large distributed computing system with many nodes of multicore processors. Two main tasks are accomplished in this study: (i) comparison of public-domain solver algorithms, in which existing state-of-the-art direct sparse linear system solvers are investigated and their performance and weaknesses are analyzed based on test cases; and (ii) improvement of a direct sparse solver algorithm (SuperLU_MCDT) for many-core distributed systems. We provide results of numerical tests that were run on up to 16,384 cores, using many sets of test matrices for reservoir simulations with unstructured meshes. The numerical results show that SuperLU_MCDT can outperform SuperLU_DIST 3.3 in terms of both speed and robustness.

18.
Bufferless Network-on-Chip (NoC) design has emerged as an interesting option in recent years, as it can save considerable router power and area. However, bufferless NoCs work well only under low-to-medium load, because they become congested more easily as the message injection rate increases. In this paper, we propose a novel distributed source-throttling congestion control mechanism, called Cbufferless, that relieves the effect of congestion in bufferless NoCs under high load. The proposed strategy uses a novel congestion detection and control mechanism that computes the average deflection rate of routed flits and throttles message injection in a distributed fashion. With the new mechanism, congestion information can be obtained directly inside each node, which allows the mechanism to be fully distributed without requiring any transmission of global congestion information among neighboring routers or within a router. Simulation results show that the proposed mechanism improves system throughput by up to ~30% and ~15.5%, and saves energy consumption by up to ~40% and ~19%, compared with the baseline and injection-rate-throttling bufferless NoCs, respectively, while maintaining lower message latency under congested load.
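The core idea, measuring a flit deflection rate locally and pausing injection when it rises, can be sketched as follows; this is our generic illustration, not the paper's Cbufferless design, and the window size, threshold, and names are assumptions.

```java
// Illustrative source-throttling sketch (not the paper's parameters): a node tracks
// the fraction of routed flits that were deflected over a sampling window and stops
// injecting new flits while that fraction exceeds a threshold.
class InjectionThrottle {
    private int routed = 0, deflected = 0;
    private final int window;           // flits per sampling window (assumed)
    private final double threshold;     // deflection-rate limit (assumed)
    private boolean throttled = false;

    InjectionThrottle(int window, double threshold) {
        this.window = window;
        this.threshold = threshold;
    }

    void onFlitRouted(boolean wasDeflected) {
        routed++;
        if (wasDeflected) deflected++;
        if (routed == window) {                      // end of window: update the decision
            throttled = (double) deflected / routed > threshold;
            routed = 0;
            deflected = 0;
        }
    }

    boolean mayInject() { return !throttled; }       // checked before injecting a new flit
}
```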

19.
The computational complexity of the Caputo fractional reaction–diffusion equation is \(O(MN^2)\), compared with \(O(MN)\) for the traditional reaction–diffusion equation, where \(M\) and \(N\) are the number of time steps and grid points, respectively. An efficient parallel solution for the Caputo fractional reaction–diffusion equation with an explicit difference method is proposed. The parallel solution, implemented with the MPI parallel programming model, consists of three procedures: preprocessing, the parallel solver, and postprocessing. The parallel solver involves parallel tridiagonal matrix–vector multiplication, vector–vector addition, and constant–vector multiplication. The summation of the constant–vector multiplications is optimized. To the authors' knowledge, this is the first parallel solution for the Caputo fractional reaction–diffusion equation. The experimental results show that the parallel solution agrees well with the analytic solution. The parallel solution on a single Intel Xeon X5540 CPU runs more than three times faster than the serial solution on a single X5540 CPU core, and it scales quite well on a distributed-memory cluster system.
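The tridiagonal matrix–vector product at the heart of such a solver is straightforward to express locally; the sketch below is our illustration of the per-process part, with the assumption that in an MPI setting each process first exchanges one boundary element of x with each neighbouring process (the "halo" values) so that it can compute its first and last rows.

```java
// Local part of a tridiagonal matrix-vector product y = A*x on one process
// (illustrative; leftHalo/rightHalo are the neighbours' boundary elements of x,
// obtained via MPI exchange before this call). lower, diag, upper hold the three
// diagonals of this process's block of rows.
static double[] tridiagMatVec(double[] lower, double[] diag, double[] upper,
                              double[] x, double leftHalo, double rightHalo) {
    int n = diag.length;
    double[] y = new double[n];
    for (int i = 0; i < n; i++) {
        double left  = (i == 0)     ? leftHalo  : x[i - 1];
        double right = (i == n - 1) ? rightHalo : x[i + 1];
        y[i] = lower[i] * left + diag[i] * x[i] + upper[i] * right;
    }
    return y;
}
```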

20.
There is increasing interest in executing complex analyses over large graphs, many of which require processing a large number of multi-hop neighborhoods or subgraphs. Examples include ego network analysis, motif counting, finding social circles, personalized recommendations, link prediction, anomaly detection, analyzing influence cascades, and others. These tasks are not well served by existing vertex-centric graph processing frameworks, in which user programs can directly access the state of only a single vertex at a time, resulting in high communication, scheduling, and memory overheads when executing such tasks. Further, most existing graph processing frameworks ignore the challenges of extracting the relevant portions of the graph that an analysis task is interested in and loading them into distributed memory. This paper introduces NScale, a novel end-to-end graph processing framework that enables the distributed execution of complex subgraph-centric analytics over large-scale graphs in the cloud. NScale enables users to write programs at the level of subgraphs rather than at the level of vertices. Unlike most previous graph processing frameworks, which apply the user program to the entire graph, NScale allows users to declaratively specify the subgraphs of interest. Our framework includes a novel graph extraction and packing (GEP) module that uses a cost-based optimizer to partition and pack the subgraphs of interest into memory on as few machines as possible. The distributed execution engine then takes over and runs the user program in parallel on those subgraphs, restricting the scope of the execution appropriately, and uses novel techniques to minimize memory consumption by exploiting overlaps among the subgraphs. We present a comprehensive empirical evaluation against three state-of-the-art systems, namely Giraph, GraphLab, and GraphX, on several real-world datasets and a variety of analysis tasks. Our experimental results show orders-of-magnitude improvements in performance and drastic reductions in the cost of analytics compared to vertex-centric approaches.

