Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Wearable apps have become increasingly popular in recent years. Nevertheless, to date, very few studies have examined the issues that wearable apps face. Prior studies showed that user reviews contain a plethora of insights that can be used to understand quality issues and help developers build better-quality mobile apps. Therefore, in this paper, we mine user reviews in order to understand user complaints about wearable apps. We manually sample and categorize 2,667 reviews from 19 Android wearable apps. Additionally, we examine the replies posted by developers in response to user complaints. This allows us to determine the types of complaints that developers care about the most and to identify problems that, despite being important to users, do not receive a proper response from developers. Our findings indicate that the most frequent complaints are related to Functional Errors, Cost, and Lack of Functionality, whereas the most negatively impacting complaints are related to Installation Problems, Device Compatibility, and Privacy & Ethical Issues. We also find that developers mostly reply to complaints related to Privacy & Ethical Issues, Performance Issues, and notification-related issues. Furthermore, we observe that when developers reply, they tend to provide a solution, request more details, or let the user know that they are working on a solution. Lastly, we compare our findings on wearable apps with the study by Khalid et al. (2015) on handheld devices. We find that some complaint types that appear in handheld apps also appear in wearable apps, though wearable apps have unique issues related to Lack of Functionality, Installation Problems, Connection & Sync, Spam Notifications, and Missing Notifications. Our results highlight the issues that users of wearable apps face the most, and the issues to which developers should pay additional attention due to their negative impact.

2.
3.
FEMPAR is an open-source, object-oriented Fortran200X scientific software library for the high-performance, scalable simulation of complex multiphysics problems governed by partial differential equations at large scales, exploiting state-of-the-art supercomputing resources. It is a highly modularized, flexible, and extensible library that provides a set of modules which can be combined to carry out the different steps of the simulation pipeline. FEMPAR includes a rich set of algorithms for the discretization step, namely (arbitrary-order) grad-, div-, and curl-conforming finite element methods, discontinuous Galerkin methods, B-splines, and unfitted finite element techniques on cut cells, combined with h-adaptivity. The linear solver module relies on state-of-the-art bulk-asynchronous implementations of multilevel domain decomposition solvers for the different discretization alternatives, and on block-preconditioning techniques for multiphysics problems. FEMPAR is a framework that provides users with out-of-the-box, state-of-the-art discretization techniques and highly scalable solvers for the simulation of complex applications, hiding the dramatic complexity of the underlying algorithms. It is also a highly extensible framework for researchers who want to experiment with new algorithms and solvers. In this work, the first in a series of articles about FEMPAR, we provide a detailed introduction to the software abstractions used in the discretization module and the related geometrical module. We also present the main ingredients of the assembly of linear systems arising from finite element discretizations; the software design of complex scalable multilevel solvers is postponed to a subsequent work.

4.
We define and prove a formal semantics divided into two complementary interacting components: the strictly linguistic (i.e., linguistically marked) semantics, which we call the linguistic agent (LA), and the strictly logical and referential semantics, which we call the rational agent (RA). This Linguistic ↔ Rational Agents' Semantics (LRA semantics) applies to Deep Dependency trees (DD-trees) or, more generally, to discourses, i.e., sequences of DD-trees, and interprets them by functional structures we call Meaning Representation Structures (MRS), similar to those of DRT but interpreted very differently. LRA semantics incrementally interprets discourses by minimal finite models, called proto-models, in a monotonic logic of the LA, and checks the proto-models against the classical models of the RA. The proto-model is considered the linguistic sense of the discourse. We define the LA in full detail and believe it must be universal. On the other hand, we do not propose a particular RA; we only define the scheme of interaction between the two agents and the stimuli of the RA used by the LA. Ultimately, in LRA semantics every discourse has a single meaning and a single sense for every Rational Agent that interacts with the Linguistic Agent.

5.
A map is a data structure commonly used to store data as key–value pairs and retrieve data as keys, values, or key–value pairs. Although Java offers several map implementation classes, the Android SDK offers additional implementations that are supposed to be more efficient than HashMap: ArrayMap and the SparseArray variants (SparseArray, LongSparseArray, SparseIntArray, SparseLongArray, and SparseBooleanArray). Yet, the official Android documentation lacks data on the performance of these implementations in terms of CPU time, memory usage, and energy consumption, even though saving CPU, memory, and energy is a major concern of users who want to increase battery life. Consequently, we study the use of map implementations by Android developers in two ways. First, we perform an observational study of 5,713 Android apps on GitHub. Second, we conduct a survey to assess developers' perspectives on the Java and Android map implementations. Then, we perform an experimental study comparing the HashMap, ArrayMap, and SparseArray-variant map implementations in terms of CPU time, memory usage, and energy consumption. We conclude with guidelines for choosing among the map implementations: HashMap is preferable over ArrayMap to improve the energy efficiency of apps, and SparseArray variants should be used instead of HashMap and ArrayMap when keys are primitive types.
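As a concrete illustration of the final guideline, the sketch below contrasts HashMap with SparseArray for primitive int keys (standard Android SDK classes, so it compiles against Android only; the efficiency claim is the study's, not something this snippet demonstrates by itself):

```java
import android.util.SparseArray;
import java.util.HashMap;

public class MapChoiceDemo {
    static void demo() {
        // HashMap boxes every primitive int key into an Integer object
        // and allocates an Entry per mapping.
        HashMap<Integer, String> boxed = new HashMap<>();
        boxed.put(42, "session-42");

        // SparseArray stores keys in a plain int[] (looked up by binary
        // search), avoiding boxing -- the case where the study recommends
        // it over both HashMap and ArrayMap.
        SparseArray<String> sparse = new SparseArray<>();
        sparse.put(42, "session-42");
        String s = sparse.get(42);
    }
}
```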

6.
Operating-system code is often developed according to principles like simplicity, low overhead, and low memory footprint, and schedulers are no exception. A scheduler is usually developed with flexibility in mind, which restricts its ability to provide real-time guarantees. Moreover, even when schedulers can provide real-time guarantees, these guarantees are rarely quantified using a theoretical analysis that carries over to the implementation. To analyze the guarantees offered by operating systems' schedulers, we developed rt-muse, a publicly available tool that analyzes timing properties extracted from the execution of a set of threads and computes lower and upper bounds on the supply function offered by the execution platform, together with information about migrations and statistics on execution times. rt-muse evaluates the impact of many application and platform characteristics, including the scheduling algorithm, the amount of available resources, the usage of shared resources, and the memory access overhead. Using rt-muse, we show the impact of Linux scheduling classes, shared data, and application parallelism on the delivered computing capacity. The tool provides useful insights into the runtime behavior of the applications and the scheduler. In the reported experiments, rt-muse detected some issues arising with the real-time Linux scheduler: despite having available cores, Linux does not migrate SCHED_RR threads that are enqueued behind SCHED_FIFO threads with the same priority.
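For reference, the supply-function bounds the tool computes can be stated in the standard notation of real-time scheduling theory (this formulation and the function names are assumed here; the abstract does not spell out the definitions used by rt-muse):

```latex
% supply(t_0, t): amount of computation the platform delivers to the
% thread set in the interval [t_0, t_0 + t). The tool bounds the worst
% and best cases over all placements of the interval:
\mathrm{slbf}(t) \;=\; \inf_{t_0 \ge 0} \, \mathrm{supply}(t_0, t),
\qquad
\mathrm{subf}(t) \;=\; \sup_{t_0 \ge 0} \, \mathrm{supply}(t_0, t)
```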

7.
The skyline operator determines the points in a multidimensional dataset that offer some optimal trade-off. State-of-the-art CPU skyline algorithms exploit quad-tree partitioning with complex branching to minimise the number of point-to-point comparisons. Branch-phobic GPU skyline algorithms rely on compute throughput rather than partitioning, but fail to match the performance of sequential algorithms. In this paper, we introduce a new skyline algorithm, SkyAlign, that is designed for the GPU, together with a GPU-friendly, grid-based tree structure upon which the algorithm relies. The search tree allows us to dramatically reduce the amount of work done by the GPU algorithm by avoiding most point-to-point comparisons, at the cost of some compute throughput. This trade-off allows SkyAlign to achieve performance orders of magnitude faster than its predecessors. Moreover, a NUMA-oblivious port of SkyAlign outperforms the native multicore state of the art on challenging workloads by an increasing margin as more cores and sockets are utilised.
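For context, the point-to-point comparison that both the CPU and GPU algorithms try to minimize is the dominance test, shown below in a standard textbook form (not code from the paper; smaller-is-better is assumed for every dimension):

```java
public final class Dominance {
    /**
     * Returns true if point p dominates point q: p is no worse than q in
     * every dimension (smaller is better here) and strictly better in at
     * least one. Skyline points are exactly those dominated by no other.
     */
    static boolean dominates(double[] p, double[] q) {
        boolean strictlyBetter = false;
        for (int d = 0; d < p.length; d++) {
            if (p[d] > q[d]) return false;       // p is worse in dimension d
            if (p[d] < q[d]) strictlyBetter = true;
        }
        return strictlyBetter;
    }
}
```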

8.
The class of polynomials computable by polynomial-size log-depth arithmetic circuits (VNC¹) is known to be computable by constant-width polynomial-degree circuits (VsSC⁰), but whether the converse containment holds is an open problem. As a partial answer to this question, we give a construction showing that syntactically multilinear circuits of constant width and polynomial degree can be depth-reduced; in our notation, sm-VsSC⁰ ⊆ sm-VNC¹. We further strengthen this inclusion by giving a separate construction that provides a width-efficient simulation of constant-width syntactically multilinear circuits by constant-width syntactically multilinear algebraic branching programs; in our notation, sm-VsSC⁰ ⊆ sm-VBWBP. We then focus on polynomial-size syntactically multilinear circuits and study relationships between classes of functions obtained by imposing various resource (width, depth, degree) restrictions on these circuits. Along the way, we also observe a characterization of the class NC¹ in terms of a restricted class of planar branching programs of polynomial size. Finally, in contrast to the general case, we report closure and stability of coefficient functions for the syntactically multilinear classes studied in this paper.
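Collected as formulas, the containments discussed above read as follows (sm- marks the syntactically multilinear restriction; the superscripts reconstruct the standard class notation garbled in extraction):

```latex
\mathrm{VNC}^1 \subseteq \mathrm{VsSC}^0 \ \text{(known; converse open)}, \qquad
\text{sm-}\mathrm{VsSC}^0 \subseteq \text{sm-}\mathrm{VNC}^1, \qquad
\text{sm-}\mathrm{VsSC}^0 \subseteq \text{sm-}\mathrm{VBWBP}
```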

9.
Software developers rely on a build system to compile their source code changes and produce deliverables for testing and deployment. Since a full build of a large software system can take hours, the incremental build is a cornerstone of modern build systems. An incremental build should recompile only those deliverables whose dependencies have been changed by a developer. However, in many organizations, such dependencies are still identified by build rules that are specified and maintained (mostly) manually, typically using technologies like make. Incomplete rules lead to unspecified dependencies that can prevent certain deliverables from being rebuilt, yielding incomplete results that leave sources and deliverables out of sync. In this paper, we present a case study on unspecified dependencies in the make-based build systems of the glib, openldap, linux, and qt open-source projects. To uncover unspecified dependencies in make-based build systems, we use an approach that combines a conceptual model of the dependencies specified in the build system with a concrete model of the files and processes that are actually exercised during the build. Our approach provides an overview of the dependencies used throughout the build system and reveals unspecified dependencies that are not yet expressed in the build-system rules. During our analysis, we find that unspecified dependencies are common: we identify six common causes among more than 1.2 million unspecified dependencies.

10.
A Physical Error Estimation Tool (PEET) is introduced in Matlab for predicting the physical gate errors of quantum information processing (QIP) operations by constructing and then simulating gate sequences for a wide variety of user-defined, Hamiltonian-based physical systems. PEET is designed to accommodate the interdisciplinary needs of quantum computing design by assessing gate performance both for users familiar with the underlying physics of QIP and for those interested in higher-level computing operations. The structure of PEET separates the bulk of a system's physical details into Gate objects, while the construction of quantum computing gate operations is contained in GateSequence objects. Gate errors are estimated by Monte Carlo sampling of noisy gate operations. The main utility of PEET, though, is the implementation of QuantumControl methods that generate and then test gate-sequence and pulse-shaping techniques for QIP performance. This work details the structure of PEET and gives instructive examples of its operation.

11.
The real-time simulation of multibody models on embedded systems is of particular interest for controllers and observers, such as model predictive controllers and state observers, which rely on a dynamic model of the process and are customarily executed in electronic control units. This work first identifies the software techniques and tools required to easily write efficient code for multibody models to be simulated on ARM-based embedded systems. Automatic Programming and Source Code Translation are the two techniques chosen to generate source code for multibody models in different programming languages: Automatic Programming is used to generate procedural code in an intermediate representation from an object-oriented library, and Source Code Translation is used to automatically translate the intermediate representation to an interpreted language or, for efficiency, to a compiled language. An implementation of these techniques is proposed, based on a Python template engine and AST walkers for source code generation, and on a model-driven translator for the Source Code Translation. The code is translated from a metalanguage to any of four programming languages: Python-Numpy, Matlab, C++-Armadillo, and C++-Eigen. Two example multibody models were simulated: a four-bar linkage with multiple loops and a 3D vehicle steering system. The code for these examples was generated and executed on two ARM-based single-board computers. Using compiled languages, both models could be simulated faster than real time despite the low resources and performance of these embedded systems. Finally, the real-time performance of both models was evaluated when executed in hard real time on Xenomai on both embedded systems. This work shows, through measurements, that Automatic Programming and Source Code Translation are valuable techniques for developing real-time multibody models to be used in embedded observers and controllers.

12.
Natural disasters, as well as human-made ones, can have a deep impact on wide geographic areas, and emergency responders can benefit from an early estimation of the consequences. This work presents CrisMap, a Big Data crisis-mapping system capable of quickly collecting and analyzing social media data. CrisMap extracts potential crisis-related actionable information from tweets by adopting a classification technique based on word embeddings and by exploiting a combination of readily available semantic annotators to geoparse tweets. The enriched tweets are then visualized in customizable, Web-based dashboards, which also leverage ad-hoc quantitative visualizations like choropleth maps. The maps produced by our system help to estimate the impact of an emergency in its early phases, to identify areas that have been severely struck, and to acquire greater situational awareness. We extensively benchmark the performance of our system on two Italian natural disasters by validating our maps against authoritative data. Finally, we perform a qualitative case study on a recent devastating earthquake that occurred in Central Italy.

13.

This paper is about people: about understanding how learning and communication mutually influence one another, allowing people to infer each other's communicative behavior. To understand how people learn to communicate, we draw on existing theories: the logical theories of learning and communication, situated cognition, and activity theory. This paper thus applies existing theories of conversation analysis, human learning, and memory to a range of scenarios of actual human conversations, and it also introduces a new way of analyzing conversations. We recorded and observed actual human communications on the Web and applied those theories to analyze the resulting communication scenarios. We describe preliminary results of these analyses, in particular our analysis of the recorded conversational structures, and we illustrate how the re-enacting and re-sequencing of conversational structures is adapted to the context (i.e., the environment) moment by moment. From our analyses, we found that people have internal rules (e.g., a combinatorial rule system) that can be related to how a person learns, adapts, and merges protocols situated in their context of communication. Our long-term goal is to use these analyses to improve human communication on the Grid.

14.
Duration Calculus was introduced in [ZHR91] as a logic to specify and reason about requirements for real-time systems. It is an extension of Interval Temporal Logic [Mos85] in which one can reason about integrated constraints over time-dependent, Boolean-valued states without explicit mention of absolute time. Several major case studies, e.g. the gas burner system in [RRH93], have shown that Duration Calculus provides a high level of abstraction for both expressing and reasoning about specifications. Using Timed Automata [AlD92], one can express how real-time systems can be constructed at a level of detail close to an actual implementation. In this paper we consider the correctness of Timed Automata with respect to Duration Calculus formulae. For a subset of Duration Calculus, we show that one can automatically verify whether a Timed Automaton \(\mathcal{A}\) is correct with respect to a formula \(D\), abbreviated \(\mathcal{A} \models D\), i.e. one can do model-checking. The subset we consider is expressive enough to formalize the requirements of the gas burner system given in [RRH93], though only for a discrete time domain. Model-checking is done by reducing the correctness problem \(\mathcal{A} \models D\) to the inclusion problem for regular languages.
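As an illustration of such integrated constraints, the gas-burner safety requirement from [RRH93] is usually written in Duration Calculus roughly as below (reconstructed from the standard literature, not quoted from this abstract): in every observation interval of at least 60 time units, the accumulated leak time is at most one twentieth of the interval length.

```latex
% \ell is the length of the observation interval and \int Leak the
% accumulated duration of the Leak state within it.
\Box \left( \ell \ge 60 \;\Rightarrow\; 20 \int \mathrm{Leak} \le \ell \right)
```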

15.
ReFlO is a framework and interactive tool to record and systematize domain knowledge used by experts to derive complex pipe-and-filter (PnF) applications. Domain knowledge is encoded as transformations that alter PnF graphs by refinement (adding more details), flattening (removing modular boundaries), and optimization (substituting inefficient PnF graphs with more efficient ones). All three kinds of transformations arise in reverse-engineering legacy PnF applications. We present the conceptual foundation and tool capabilities of ReFlO, illustrate how parallel PnF applications are designed and generated, and how domain-specific libraries of transformations are developed.

16.
Since large parallel machines are typically clusters of multicore nodes, parallel programs should be able to deal with both shared and distributed memory. This paper proposes a hybrid work-stealing scheme that combines the lifeline-based variant of distributed task pools with the node-internal load balancing of Java's Fork/Join framework. We implemented our scheme by extending the APGAS library for Java, which is a branch of the X10 project. APGAS programmers can now spawn locality-flexible tasks with a new asyncAny construct. These tasks are transparently mapped to any resource in the overall system, so that the load is balanced over both nodes and cores. Unprocessed asyncAny tasks can also be cancelled. In performance measurements with up to 144 workers on up to 12 nodes, we observed near-linear speedups for four benchmarks and a low overhead for cancellation-related bookkeeping.
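A schematic of how such locality-flexible tasks might be spawned, based only on the description above (the imports, package names, and exact signatures are assumptions for illustration, not taken from the APGAS sources):

```java
// Hypothetical sketch of the asyncAny construct described in the abstract.
// The imports and signatures below are assumed, not verified against APGAS.
import static apgas.Constructs.finish;
import static apgas.Constructs.asyncAny;

public class AsyncAnyDemo {
    public static void main(String[] args) {
        finish(() -> {
            for (int i = 0; i < 1_000; i++) {
                final int task = i;
                // The runtime may place this task on any core of any node;
                // hybrid work stealing balances the load transparently.
                asyncAny(() -> process(task));
            }
        }); // finish blocks until all transitively spawned tasks complete
    }

    static void process(int task) {
        // application work for one task
    }
}
```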

17.
Ensembles of classifiers are among the best-performing classifiers available in many data mining applications, including the mining of data streams. Rather than training one classifier, such methods train multiple classifiers and combine their predictions according to a given voting schedule. An important prerequisite for ensembles to be successful is that the individual models are diverse. One way to vastly increase the diversity among the models is to build a heterogeneous ensemble, composed of fundamentally different model types. However, most ensembles developed specifically for the dynamic data stream setting rely on only one type of base-level classifier, most often Hoeffding Trees. We study the use of heterogeneous ensembles for data streams and introduce the Online Performance Estimation framework, which dynamically weights the votes of the individual classifiers in an ensemble. Using an internal evaluation on recent training data, it measures how well each ensemble member performed on that data and updates their weights accordingly. Experiments over a wide range of data streams show performance that is competitive with state-of-the-art ensemble techniques, including Online Bagging and Leveraging Bagging, while being significantly faster. All experimental results from this work are easily reproducible and publicly available online.
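A minimal sketch of performance-weighted voting in this spirit (the interfaces and the exponential-decay update are illustrative assumptions, not the paper's exact Online Performance Estimation scheme):

```java
import java.util.List;

// Illustrative performance-weighted voting for a heterogeneous stream
// ensemble; the decayed-accuracy update below is an assumption.
public class WeightedVote {
    interface Classifier {
        int predict(double[] instance);
    }

    /** Combine member predictions, each vote scaled by the member's weight. */
    static int vote(List<Classifier> members, double[] weights,
                    double[] instance, int numClasses) {
        double[] score = new double[numClasses];
        for (int i = 0; i < members.size(); i++) {
            score[members.get(i).predict(instance)] += weights[i];
        }
        int best = 0;
        for (int c = 1; c < numClasses; c++) {
            if (score[c] > score[best]) best = c;
        }
        return best;
    }

    /** Once the true label arrives, move each weight toward the member's
     *  recent accuracy via exponential decay (rate alpha in (0, 1]). */
    static void update(double[] weights, int[] predictions, int trueLabel, double alpha) {
        for (int i = 0; i < weights.length; i++) {
            double hit = (predictions[i] == trueLabel) ? 1.0 : 0.0;
            weights[i] = (1 - alpha) * weights[i] + alpha * hit;
        }
    }
}
```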

18.
Assemblage consists of blending base wines to create target wines. Recent developments in aroma analysis allow us to measure the chemical compounds that impact the taste of wines. This chemical analysis makes it possible to design a decision tool for the following problem: given a set of target wines, determine the volumes that must be extracted from each base wine to produce wines that satisfy constraints on aroma concentration, volumes, alcohol content, and price. This paper describes the modeling of wine assemblage as a mixed constrained optimization problem, where the main goal is to minimize the gap to the desired concentrations for every aromatic criterion. The deterministic branch-and-bound solvers Couenne and IbexOpt behave well on the wine blending problem thanks to their interval constraint propagation/programming and polyhedral relaxation methods. We also study the performance of other optimization goals that could be embedded in a configuration tool, where the different possible interactions amount to solving the same constraints with different objective functions. Finally, we show on a recent generic wine blending instance that the proposed optimization process scales well with the number of base wines.
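The core of such a model can be sketched as follows (the notation is assumed for illustration; the paper's full formulation also handles the volume, alcohol, and price constraints): choose volumes \(v_i\) of each base wine so that the blended concentration of every aromatic criterion \(j\) stays as close as possible to its target \(t_j\).

```latex
% v_i    : volume taken from base wine i
% c_{ij} : concentration of aromatic compound j in base wine i
% t_j    : desired concentration of compound j in the target wine
\min_{v \ge 0} \; \sum_{j} \left| \frac{\sum_{i} c_{ij}\, v_i}{\sum_{i} v_i} - t_j \right|
\quad \text{s.t.} \quad \sum_{i} v_i = V^{\mathrm{target}}, \qquad 0 \le v_i \le V_i^{\mathrm{avail}}
```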

19.
Directed model checking is a well-established approach for detecting error states in concurrent systems. A popular variant for finding shortest error traces is to apply the A\(^*\) search algorithm with distance heuristics that never overestimate the real error distance. An important class of such distance heuristics is the class of pattern database heuristics, which are built on abstractions of the system under consideration. In this paper, we propose downward pattern refinement, a systematic approach for the construction of pattern database heuristics for concurrent systems of timed automata. First, we propose a general framework for pattern databases in the context of timed automata and show that desirable theoretical properties hold for the resulting pattern database. Afterward, we formally define a concept to measure the accuracy of abstractions. Based on this concept, we propose an algorithm for computing succinct abstractions that are still accurate enough to produce informed pattern databases. We evaluate our approach on large and complex industrial problems; the experiments show the practical potential of the resulting pattern database heuristic.
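For orientation, the interplay the abstract relies on is sketched below as a generic textbook A\(^*\) skeleton with a pattern-database lookup as heuristic (not the paper's implementation; states are encoded as longs and unit-cost transitions are assumed). Because the abstract error distance never overestimates the concrete one, the first expanded error state yields a shortest error trace.

```java
import java.util.*;
import java.util.function.Function;
import java.util.function.LongUnaryOperator;
import java.util.function.Predicate;

// Textbook A* with a pattern-database heuristic; a sketch, not the
// paper's code. The PDB maps abstract states to exact abstract error
// distances, which are admissible estimates of concrete distances.
public class AStarPdb {
    static long shortestErrorTrace(long start,
                                   Function<Long, List<Long>> successors,
                                   Predicate<Long> isError,
                                   LongUnaryOperator abstraction,
                                   Map<Long, Integer> pdb) {
        PriorityQueue<long[]> open =                      // entries: {f, g, state}
                new PriorityQueue<>(Comparator.comparingLong(e -> e[0]));
        Map<Long, Long> bestG = new HashMap<>();
        open.add(new long[]{h(start, abstraction, pdb), 0L, start});
        bestG.put(start, 0L);
        while (!open.isEmpty()) {
            long[] e = open.poll();
            long g = e[1], s = e[2];
            if (g > bestG.getOrDefault(s, Long.MAX_VALUE)) continue; // stale entry
            if (isError.test(s)) return g;   // h is admissible => g is minimal
            for (long t : successors.apply(s)) {
                long g2 = g + 1;             // unit-cost transitions assumed
                if (g2 < bestG.getOrDefault(t, Long.MAX_VALUE)) {
                    bestG.put(t, g2);
                    open.add(new long[]{g2 + h(t, abstraction, pdb), g2, t});
                }
            }
        }
        return -1; // no error state reachable
    }

    static long h(long s, LongUnaryOperator abstraction, Map<Long, Integer> pdb) {
        // Abstract states absent from the PDB cannot reach an abstract
        // error state, so their estimate is effectively infinite.
        return pdb.getOrDefault(abstraction.applyAsLong(s), Integer.MAX_VALUE);
    }
}
```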

20.