Similar Documents
(20 similar documents found)
1.
Participatory smartphone sensing has lately become more and more popular as a new paradigm for performing large-scale sensing, in which each smartphone contributes its sensed data to a collaborative sensing application. Most existing studies consider smartphone users to be strictly strategic and completely rational, trying to maximize their own payoffs. A number of incentive mechanisms have been designed to encourage smartphone users to participate, but they achieve only suboptimal system performance, and few existing studies maximize a system-wide objective that takes both the platform and the smartphone users into account. This paper focuses on the crucial problem of maximizing the system-wide performance, or social welfare, of a participatory smartphone sensing system. There are two great challenges. First, social welfare maximization cannot be carried out on the platform side because the cost of each user is private and, in reality, unknown to the platform. Second, a participatory sensing system is a large-scale real-time system, owing to the huge number of smartphone users geo-distributed across the whole world. A price-based decomposition framework was proposed in our previous work (Liu and Zhu, 2013), in which the platform offers a unit price for the sensing time spent by each user, and each user returns a sensing time that maximizes his or her monetary reward. This pricing framework is an effective incentive mechanism, as users are motivated to participate by monetary rewards from the platform. In this paper, we propose two distributed solutions that protect users' privacy and achieve optimal social welfare. The first solution is based on Lagrangian dual decomposition; a popular iterative gradient algorithm is used to converge to the optimal value, and this distributed method is then interpreted within our pricing framework. In the second solution, we first equivalently convert the original problem into an optimal pricing problem, and then propose a distributed solution under the pricing framework via an efficient price-updating algorithm. Experimental results show that both distributed solutions achieve the maximum social welfare of a participatory smartphone sensing system.
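As a toy illustration of such a price-based framework, the sketch below assumes a logarithmic platform utility and quadratic private user costs (both hypothetical choices, not taken from the paper). The platform repeatedly nudges the unit price toward the marginal utility of the reported total sensing time; at the fixed point, price equals marginal utility equals each user's marginal cost, which is the welfare-optimality condition in this simplified setting.

```python
import math

# Hypothetical quadratic private costs c_i * x^2; only user i knows c_i.
costs = [0.8, 1.2, 0.5, 2.0]

def marginal_utility(X):
    # Platform values total sensing time X as log(1 + X) (an assumed form).
    return 1.0 / (1.0 + X)

def user_response(c, p):
    # Each user privately solves max_x  p*x - c*x^2,  giving x = p / (2c).
    return p / (2.0 * c)

p, gamma = 1.0, 0.5
for _ in range(200):                                # platform-side price updates
    X = sum(user_response(c, p) for c in costs)     # users report times only
    p = max(0.0, p + gamma * (marginal_utility(X) - p))  # move price toward U'(X)

welfare = math.log(1.0 + X) - sum(c * user_response(c, p) ** 2 for c in costs)
print(f"price={p:.4f}  total_time={X:.4f}  welfare={welfare:.4f}")
```

Note that the platform never sees the cost parameters, only the sensing times the users report, which is the privacy property the distributed solutions aim for.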

2.
With more and more real deployments of wireless sensor network applications, we envision that their success is ultimately determined by whether the sensor networks can provide a high-quality stream of data over a long period. In this paper, we propose a consistency-driven data quality management framework called Orchis that integrates data quality into an energy-efficient sensor system design. Orchis consists of four components: data consistency models, adaptive data sampling and processing protocols, consistency-driven cross-layer protocols, and flexible APIs to manage data quality, all supporting the goals of high data quality and energy efficiency. We first formally define a consistency model, which includes not only temporal consistency and numerical consistency, but also application-specific requirements of the data and data dynamics in the sensing field. Next, we propose an adaptive, lazy, energy-efficient data collection protocol, which adapts the data sampling rate to the data dynamics in the sensing field and remains lazy while data consistency is maintained. Finally, we conduct a comprehensive evaluation of the proposed protocol based on both a TOSSIM-based simulation and a real prototype implementation using MICA2 motes. The results from both the simulation and the prototype show that our protocol reduces the number of delivered messages, improves the quality of collected data, and in turn extends the lifetime of the whole network. Our analysis also implies that the tradeoff between data consistency requirements and energy saving should be set carefully, based on the specific requirements of each application.
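A minimal sketch of the "adaptive lazy" idea, assuming a single node, a hypothetical read_sensor() source, and made-up thresholds: the node reports only when the numerical-consistency tolerance would be violated, and it stretches its sampling interval while the data stays stable.

```python
import random

EPSILON = 0.5                      # numerical-consistency tolerance (assumed)
MIN_IVL, MAX_IVL = 1.0, 60.0       # sampling interval bounds in seconds (assumed)

def read_sensor(t):                # stand-in for a real MICA2 sensor read
    return 20.0 + 0.01 * t + random.gauss(0.0, 0.1)

interval, last_reported, t, messages = 10.0, None, 0.0, 0
while t < 3600.0:
    value = read_sensor(t)
    if last_reported is None or abs(value - last_reported) > EPSILON:
        last_reported = value      # deliver a message: consistency was at risk
        messages += 1
        interval = max(MIN_IVL, interval / 2.0)   # data is dynamic: sample faster
    else:
        interval = min(MAX_IVL, interval * 1.5)   # data is stable: stay lazy
    t += interval
print(f"messages delivered: {messages}")
```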

3.
Recent improvements in web development technologies, commonly referred to as HTML5, have resulted in an excellent framework for developing a fully featured, purely web-based multi-agent platform. This paper presents the architecture of such a platform, named Radigost. Radigost agents and parts of the system itself are implemented in JavaScript and executed inside the client's web browser, while an additional set of Java-based components is deployed on an enterprise application server. Radigost is platform-independent, capable of running, without any prior installation or configuration steps, on a wide variety of software and hardware configurations, including personal computers, smartphones, tablets, and modern television sets. The system is standards-compliant and fully interoperable, in the sense that its agents can transparently interact with agents in existing, third-party multi-agent solutions. Finally, performance evaluation results show that the execution speed of Radigost is comparable to that of a non-web-based implementation.

4.
In this paper, a fairly general framework for reasoning from inconsistent propositional bases is defined. Variable forgetting is used as a basic operation for weakening pieces of information so as to restore consistency. The key notion is that of recoveries, which are sets of variables whose forgetting enables consistency to be restored. Several criteria for defining preferred recoveries are proposed, depending on whether the focus is on the relative relevance of the atoms, the relative entrenchment of the pieces of information, or both. Our framework encompasses several previous approaches as specific cases, including reasoning from preferred consistent subsets and some forms of information merging. Interestingly, the gain in flexibility and generality offered by our framework does not imply a complexity shift compared to these specific cases.
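A brute-force sketch of recoveries, under the simplifying assumption that forgetting is applied piecewise to each formula of the base (the paper's framework and preference criteria are considerably richer). Forgetting a variable set V in a formula φ is existential quantification, ∃V.φ; a recovery is any V whose forgetting restores joint consistency.

```python
from itertools import combinations, product

VARS = ["a", "b", "c"]

# An inconsistent base: {a, ~a v b, ~b}; each piece is a function of a model.
base = [
    lambda m: m["a"],
    lambda m: (not m["a"]) or m["b"],
    lambda m: not m["b"],
]

def forget(phi, forgotten):
    # Forgetting V in phi: the weakened formula holds iff some
    # assignment to the variables of V makes phi hold.
    def weakened(m):
        return any(phi({**m, **dict(zip(forgotten, bits))})
                   for bits in product([False, True], repeat=len(forgotten)))
    return weakened

def consistent(formulas):
    return any(all(phi(dict(zip(VARS, bits))) for phi in formulas)
               for bits in product([False, True], repeat=len(VARS)))

# Enumerate candidate recoveries by increasing size; the first size with
# hits yields the cardinality-minimal recoveries.
for k in range(len(VARS) + 1):
    hits = [set(V) for V in combinations(VARS, k)
            if consistent([forget(phi, V) for phi in base])]
    if hits:
        print("minimal recoveries:", hits)   # here: [{'a'}, {'b'}]
        break
```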

5.
R. Francese, L. Pagli, D. Parente, M. Talamo. Calcolo, 1992, 29(1-2):143-155
In this paper a new memory model of computation (CMM) is introduced. In the CMM, a RAM processor accesses a memory of x cells in log x time. In fact, the usual assumption of the RAM model, that all memory cells are accessed in constant time, becomes impractical as x increases. With a very simple modification of the boolean circuits of the memory, the CMM makes it possible to access, in constant time, a memory cell consecutive to another already-accessed cell. Problems of size n requiring time T(n) in the RAM model can be solved in the CMM with a multiplicative factor of O(log x) in time complexity. Ad hoc algorithms are instead designed for other basic problems such as searching and sorting.
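A toy cost accountant for the access rule just described (an illustrative model, not the circuit-level definition in the paper): a random access to cell x is charged log x, while an access adjacent to an already-accessed cell is charged constant time.

```python
import math

class CMM:
    def __init__(self):
        self.touched = set()
        self.time = 0.0

    def access(self, x):
        # Consecutive (or repeated) access: O(1); otherwise pay the
        # address length, log x, for a random access.
        if {x - 1, x, x + 1} & self.touched:
            self.time += 1.0
        else:
            self.time += max(1.0, math.log2(x + 1))
        self.touched.add(x)

mem = CMM()
for i in range(1_000_000, 1_000_100):   # a scan: one log-cost jump, then O(1) steps
    mem.access(i)
print(f"scan cost ~= {mem.time:.1f}  "
      f"(vs. {100 * math.log2(1_000_100):.1f} if every access were random)")
```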

6.
An algebraic semantics for MOF
In model-driven development, software artifacts are represented as models in order to improve productivity, quality, and cost effectiveness. In this area, the meta-object facility (MOF) standard plays a crucial role as a generic framework within which a wide range of modeling languages can be defined. The MOF standard aims to offer a good basis for model-driven development, providing some of the building concepts that are needed: what a model is, what a metamodel is, what reflection is in the MOF framework, and so on. However, most of these concepts are not yet fully formally defined in the current MOF standard. In this paper we define a reflective, algebraic, executable framework for precise metamodeling, based on membership equational logic (mel), that supports the MOF standard. Our framework provides a formal semantics of the following notions: metamodel, model, and conformance of a model to its metamodel. Furthermore, by using the Maude language, which directly supports mel specifications, this formal semantics is executable. The executable semantics has been integrated within the Eclipse Modeling Framework as a plugin tool called MOMENT2. In this way, formal analyses, such as semantic consistency checks, model checking of invariants, and LTL model checking, become available within Eclipse to provide formal support for model-driven development processes.
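As a deliberately tiny illustration of the conformance notion formalized in the paper, the sketch below checks a "model" against a "metamodel" that merely names classes and their typed attributes; the actual mel/Maude semantics in MOMENT2 covers far more (reflection, references, multiplicities, and so on). All names and data here are hypothetical.

```python
# "Metamodel": class name -> expected attribute types (a gross simplification).
metamodel = {
    "Class":     {"name": str, "abstract": bool},
    "Attribute": {"name": str, "owner": str},
}

# "Model": a list of (metaclass, attribute-values) instances.
model = [
    ("Class",     {"name": "Person", "abstract": False}),
    ("Attribute", {"name": "age", "owner": "Person"}),
    ("Class",     {"name": 42, "abstract": False}),       # ill-typed on purpose
]

def conforms(model, metamodel):
    errors = []
    for kind, attrs in model:
        spec = metamodel.get(kind)
        if spec is None:
            errors.append(f"unknown metaclass {kind!r}")
            continue
        for attr, typ in spec.items():
            if not isinstance(attrs.get(attr), typ):
                errors.append(f"{kind}.{attr}: expected {typ.__name__}")
    return errors

print(conforms(model, metamodel))   # flags the ill-typed Class name
```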

7.
Information Systems, 2005, 30(2):89-118
Business rules are the basis of any organization. From an information systems perspective, these business rules function as constraints on a database, helping ensure that the structure and content of the real world (sometimes referred to as the miniworld) are accurately incorporated into the database. It is important to elicit these rules during the analysis and design stage, since the captured rules are the basis for the subsequent development of a business constraints repository. We present a taxonomy for set-based business rules and describe an overarching framework for modeling rules that constrain the cardinality of sets. The proposed framework yields various types of constraints, i.e., attribute, class, participation, projection, co-occurrence, appearance, and overlapping constraints, on a semantic model that supports abstractions like classification, generalization/specialization, aggregation, and association. We formally define the syntax of our proposed framework in Backus-Naur Form and explicate the semantics using first-order logic. We describe partial ordering among the constraints and define the concept of metaconstraints, which can be used for automatic constraint consistency checking during the design stage itself. We demonstrate the practicality of our approach with a case study and show how our approach to modeling business rules integrates seamlessly into existing database design methodology. Via our proposed framework, we show how explicitly capturing data semantics helps bridge the semantic gap between the real world and its representation in an information system.
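A hedged sketch of checking one of the constraint types named above, participation, against an in-memory dataset; the rule encoding and sample data are hypothetical and unrelated to the paper's BNF syntax.

```python
# WorksFor association instances: (employee, department) pairs (made-up data).
works_for = [("alice", "d1"), ("bob", "d1"), ("carol", "d2"), ("alice", "d2")]

def check_participation(pairs, side, lo, hi):
    """Flag every entity on `side` (0=employee, 1=department) that does not
    participate in between lo and hi association instances."""
    counts = {}
    for pair in pairs:
        counts[pair[side]] = counts.get(pair[side], 0) + 1
    return [(e, n) for e, n in counts.items() if not (lo <= n <= hi)]

# Participation rule: an employee works for exactly one department (lo=hi=1).
violations = check_participation(works_for, side=0, lo=1, hi=1)
print("participation violations:", violations)   # alice appears twice -> flagged
```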

8.
In this paper, we present an algorithm that can be used to implement sequential, causal, or cache consistency in distributed shared memory (DSM) systems. For this purpose, it includes a parameter that allows the consistency model to be chosen: if all processes run the algorithm with the same value of this parameter, the corresponding consistency is achieved. (Additionally, the algorithm tolerates processes using certain combinations of parameter values.) This characteristic allows a concrete consistency model to be chosen and then implemented with the most efficient algorithm in each case, depending on the requirements of the applications. Additionally, as far as we know, this is the first proposed algorithm that implements cache coherence. In our algorithm, all read and write operations are executed locally when implementing causal or cache consistency (i.e., they are fast). It is known that no sequential-consistency algorithm has only fast memory operations; in our algorithm, however, all write operations and some read operations are fast even when implementing sequential consistency. The algorithm uses propagation and full replication, where the values written by a process are propagated to the rest of the processes. It works in a cyclic-turn fashion, with each process of the DSM system broadcasting one message in turn. The values written by a process are sent in that message (instead of one message per write operation), and unnecessary values are excluded. All this permits the amount of message traffic generated by the algorithm to be controlled.
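A bare-bones simulation of the cyclic-turn message pattern described above; it illustrates only how buffered writes collapse into a single broadcast per turn, not the consistency-model machinery or its proofs.

```python
class Process:
    def __init__(self, pid, system):
        self.pid, self.system = pid, system
        self.replica, self.pending = {}, {}

    def write(self, var, val):       # local (fast) write, buffered for the turn
        self.replica[var] = val
        self.pending[var] = val      # a later write to var overwrites the earlier one

    def read(self, var):             # local (fast) read
        return self.replica.get(var)

    def take_turn(self):             # broadcast one message with all buffered writes
        msg, self.pending = self.pending, {}
        for proc in self.system:
            if proc is not self:
                proc.replica.update(msg)

system = []
system.extend(Process(i, system) for i in range(3))
system[0].write("x", 1); system[0].write("x", 2)   # two writes, one message
system[0].take_turn()
print([proc.read("x") for proc in system])          # [2, 2, 2]
```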

9.
Array Processors with Pipelined Buses (APPBs) are hybrid optical-electronic multiprocessor architectures in which message-pipelined optical buses are used for interprocessor communication. Presented in this paper is a structural variation of the basic APPB that utilizes optical switches to provide the capability of switching messages between buses without their being relayed by intermediate processors. Such switching capability eliminates the optical-electronic-optical signal conversion due to message relays and offers improved communication efficiency. We discuss routing issues, evaluate the bandwidth improvement, and present efficient communication operations, including matrix transpose, binary-tree routing, and perfect shuffle, that take advantage of the switching capability.

10.
Collaborative filtering (CF) is an effective technique for addressing the information overload problem, where each user is associated with a set of rating scores on a set of items. For a chosen target user, conventional CF algorithms measure similarity between this user and other users by utilizing pairs of rating scores on commonly rated items, discarding scores rated by only one of them. We call the comparative scores dual ratings and the non-comparative scores singular ratings. Our experiments show that only about 10% of ratings are dual ones usable for similarity evaluation, while the other 90% are singular. In this paper, we propose the SingCF approach, which incorporates multiple singular ratings, in addition to dual ratings, into collaborative filtering, aiming to improve recommendation accuracy. We first estimate the unrated scores for singular ratings, transforming them into dual ones. Then we perform a CF process to discover neighborhood users and make predictions for each target user. Furthermore, we provide a MapReduce-based distributed framework on Hadoop for a significant improvement in efficiency. Experiments in comparison with state-of-the-art methods demonstrate the performance gains of our approaches.
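An illustrative sketch of the SingCF idea: before computing user-user similarity, singular ratings are turned into dual ones by estimating the missing side. Here the estimate is simply the item's mean rating, a deliberately crude stand-in for the paper's estimator, and the data are made up.

```python
import math

ratings = {   # user -> {item: score}; hypothetical data
    "u1": {"i1": 5, "i2": 3, "i3": 4},
    "u2": {"i1": 4, "i3": 5, "i4": 2},
    "u3": {"i2": 2, "i4": 3, "i5": 4},
}

def item_mean(item):
    vals = [r[item] for r in ratings.values() if item in r]
    return sum(vals) / len(vals)

def similarity(u, v):
    # Pearson correlation over dual AND (estimated) singular items,
    # instead of over the commonly rated items only.
    items = sorted(set(ratings[u]) | set(ratings[v]))
    a = [ratings[u].get(i, item_mean(i)) for i in items]  # estimate missing side
    b = [ratings[v].get(i, item_mean(i)) for i in items]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

print({v: round(similarity("u1", v), 3) for v in ("u2", "u3")})
```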

11.
We introduce a decentralized observation problem, where the system under observation is modeled as a regular language L over a finite alphabet Σ and n subsets of Σ model distributed observation points. A regular language K ⊆ L models a set of distinguished behaviors, say, correct behaviors of the system. The objective is to check the existence of a function which, given the n observations corresponding to a behavior ρ ∈ L, decides whether ρ is in K or not. We prove that checking the existence of such a function is undecidable. We then use this result to show undecidability of a decentralized supervisory control problem in the discrete event system framework.

12.
Due to the dynamic properties of autonomous resource providers, the coupling of independent services into a Grid transaction may abort and leave the system inconsistent. In many situations, people resort to compensation actions to regain consistency, which raises the issue of compensation cost. To handle this issue, we set up, for the first time, a costing model for the all-or-nothing transaction of Grid services and introduce the ECC metric to evaluate the related service scheduling. The analysis of ECC estimation is based on the so-called CC-PreC commit pattern, an abstraction of a category of common use cases of commit handling. Our analysis theoretically illustrates the high computational complexity of scheduling optimization with respect to the cost labeling, timing, and order of requests. Under certain typical conditions, we prove that the infinitely many possible scheduling schemes can be reduced to a finite set of candidates. Based on the ECC metric, the caution scheduling policy is thoroughly investigated; it can be employed as a basic policy in certain common scenarios, and under it the intuitive product-first and cost-first schemes are justified in several typical situations.

13.
Plan coordination by revision in collective agent-based systems
In order to model the plan coordination behavior of agents, we develop a simple framework for representing the plans, resources, and goals of agents. Plans are represented as directed acyclic graphs of skills and resources that, given adequate initial resources, can realize special resources, called goals. Given the storage costs of resources, the application costs of skills, and the values of goals, it is possible to reason about the profit of a plan for an agent. We then model two forms of plan coordination behavior between two agents, viz. fusion, which aims to maximize the total yield of the agents involved, and collaboration, which aims to maximize the individual yield of each agent. We argue that both forms of cooperation can be seen as iterative plan revision processes, and we present efficient polynomial algorithms for agent plan fusion and collaboration based on this idea. Both the framework and the fusion algorithm are illustrated by an example from the field of transportation, where the agents are transportation companies.
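A minimal encoding of the plan representation sketched above: a DAG whose nodes are resources and skills, with profit = goal value - storage costs - skill application costs. The costs, values, and the toy transportation plan are all hypothetical.

```python
skill_cost = {"load": 2.0, "drive": 5.0}          # application cost per skill
storage_cost = {"cargo": 1.0, "truck": 3.0}       # cost of initial resources
goal_value = {"cargo_at_B": 12.0}                 # value of realized goals

# Each skill consumes input resources and produces output resources.
plan = [
    ("load",  ["cargo", "truck"], ["loaded_truck"]),
    ("drive", ["loaded_truck"],   ["cargo_at_B"]),
]

def plan_profit(plan, initial):
    have = set(initial)
    cost = sum(storage_cost[r] for r in initial)
    for skill, inputs, outputs in plan:           # topological order assumed
        assert all(r in have for r in inputs), f"{skill}: missing inputs"
        cost += skill_cost[skill]
        have |= set(outputs)
    value = sum(v for g, v in goal_value.items() if g in have)
    return value - cost

print("profit:", plan_profit(plan, ["cargo", "truck"]))   # 12 - (1+3+2+5) = 1.0
```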

14.
In recent years, there has been a growing tendency to support high-level synchronization operations, such as read-modify-write, FIFO queues, and stacks, as part of the programmer's shared memory model. This paper examines the problem of implementing hybrid consistency with high-level synchronization operations. It is shown that for any implementation of weak consistency, the time required to execute a read-modify-write, a dequeue, or a pop operation is Ω(d), where d is the network delay. Following this, an efficient and simple algorithm for providing hybrid consistency that supports most types of high-level synchronization operations, as well as weak read and weak write operations, is presented. Weak read and weak write operations are executed instantaneously, while the time required to execute strong operations is O(d). This is within a constant factor of the lower bounds for most of the commonly used types of operations.

15.
Dealing with high-dimensional data has always been a major problem in many pattern recognition and machine learning applications. The trace ratio criterion is applicable to many dimensionality reduction methods, as it directly reflects the Euclidean distances between data points within or between classes. In this paper, we analyze the trace ratio problem and propose a new efficient algorithm to find the optimal solution. Based on the proposed algorithm, we derive an orthogonally constrained semi-supervised learning framework. The new algorithm incorporates unlabeled data into the training procedure, so that it preserves both the discriminative structure and the geometrical structure embedded in the original dataset. Under such a framework, many existing semi-supervised dimensionality reduction methods, such as SDA, Lap-LDA, SSDR, and SSMMC, can be improved; the framework can also be used to formulate a corresponding kernel variant for handling nonlinear problems. Theoretical analysis indicates certain relationships between the linear and nonlinear methods. Finally, extensive simulations on synthetic and real-world datasets are presented to show the effectiveness of our algorithms. The results demonstrate that our proposed algorithm achieves substantial improvements over other state-of-the-art algorithms.
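The sketch below implements the standard iterative trace-ratio solver (in the spirit of the algorithm analyzed in such papers, though not necessarily identical to it): alternate between updating the ratio λ = tr(WᵀAW)/tr(WᵀBW) and taking W as the top eigenvectors of A − λB. The scatter matrices here are random stand-ins for between-class and within-class scatter.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
A = X.T @ X / 20                        # stand-in for between-class scatter
Y = rng.standard_normal((200, 50))
B = Y.T @ Y / 200 + 1e-3 * np.eye(50)   # stand-in for within-class scatter, regularized

def trace_ratio(A, B, d, iters=50, tol=1e-10):
    n = A.shape[0]
    W = np.linalg.qr(rng.standard_normal((n, d)))[0]   # orthonormal start
    lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
    for _ in range(iters):
        vals, vecs = np.linalg.eigh(A - lam * B)       # symmetric eigenproblem
        W = vecs[:, np.argsort(vals)[-d:]]             # top-d eigenvectors
        new_lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return W, lam

W, lam = trace_ratio(A, B, d=5)
print(f"optimal trace ratio ~ {lam:.4f}")
```

Each iteration can only increase λ, which is what makes this scheme converge to the global optimum of the trace ratio under the orthonormality constraint.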

16.
Shareable data services providing consistency guarantees, such as atomicity (linearizability), make building distributed systems easier. However, combining linearizability with efficiency in practical algorithms is difficult. A reconfigurable linearizable data service, called Rambo, was developed by Lynch and Shvartsman. This service guarantees consistency under dynamic conditions involving asynchrony, message loss, node crashes, and new node arrivals. The specification of the original algorithm is given at an abstract level aimed at concise presentation and formal reasoning about correctness. The algorithm propagates information by means of gossip messages; if the service is in use for a long time, the size and number of gossip messages may grow without bound. This paper presents a consistent data service for long-lived objects that improves on Rambo in two ways: it includes an incremental communication protocol and a leave service. The new protocol takes advantage of local knowledge and carefully manages the size of messages by removing redundant information, while the leave service allows nodes to leave the system gracefully. The new algorithm is formally proved correct by forward simulation using levels of abstraction. An experimental implementation of the system was developed for networks of workstations. The paper also includes selected analytical and preliminary empirical results that illustrate the advantages of the new algorithm.
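A small sketch of the incremental-gossip idea: each node remembers what a peer has already received and sends only the difference, so message size stays proportional to what is new rather than to the object's whole history. Names are illustrative; Rambo's actual state (tags, configurations, quorums) is far richer, and a real protocol must handle lost acknowledgements rather than assume delivery, as this toy does.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.world = set()           # values/metadata this node knows
        self.acked = {}              # peer -> items the peer is known to have

    def learn(self, item):
        self.world.add(item)

    def gossip_to(self, peer):
        known = self.acked.setdefault(peer.name, set())
        delta = self.world - known               # send only what is new for peer
        peer.world |= delta
        known |= delta                           # optimistic: assumes delivery
        return len(delta)                        # size of the message actually sent

a, b = Node("a"), Node("b")
for i in range(1000):
    a.learn(("val", i))
print("first gossip size:", a.gossip_to(b))      # 1000 items
a.learn(("val", 1000))
print("second gossip size:", a.gossip_to(b))     # just the 1 new item
```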

17.
Variational constitutive updates provide a physically and mathematically sound framework for the numerical implementation of material models. In contrast to conventional schemes such as the return-mapping algorithm, they are directly and naturally based on the underlying variational principle; hence, the resulting numerical scheme inherits all properties of that principle. The present paper focuses on a class of those variational methods that relies on energy minimization, so the algorithmic formulation is governed by energy minimization as well. Accordingly, standard optimization algorithms can be applied to solve the numerical problem. A further advantage compared to conventional approaches is the existence of a natural distance (semi-metric) induced by the minimization principle; such a distance is the foundation for error estimation and, as a result, for adaptive finite element methods. Though variational constitutive updates are relatively well developed for so-called standard dissipative solids, i.e., solids characterized by the normality rule, the more general case of generalized standard materials is far from being understood; (Int. J. Sol. Struct. 2009, 46:1676–1684) represents the first step towards this goal. In the present paper, a variational constitutive update suitable for a class of nonlinear kinematic hardening models at finite strains is presented. Two different prototype models of Armstrong–Frederick type are reformulated within the aforementioned variationally consistent framework. Numerical tests demonstrate the consistency of the resulting implementation.
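A one-dimensional toy of a variational constitutive update, assuming linear kinematic hardening (not the finite-strain Armstrong–Frederick models treated in the paper): at each strain increment, the plastic slip is obtained by minimizing the incremental energy (stored energy + hardening energy + dissipation), so a standard optimizer does the constitutive work instead of a return-mapping scheme.

```python
from scipy.optimize import minimize_scalar

E, H, sigma_y = 200.0, 20.0, 1.0     # modulus, hardening modulus, yield stress (assumed)

def incremental_energy(dp, eps, p_n):
    # stored elastic energy + linear kinematic hardening + dissipation
    p = p_n + dp
    return 0.5 * E * (eps - p) ** 2 + 0.5 * H * p ** 2 + sigma_y * abs(dp)

p, history = 0.0, []
for step in range(1, 21):
    eps = 0.002 * step               # monotonic strain loading
    res = minimize_scalar(incremental_energy, args=(eps, p),
                          bounds=(-0.1, 0.1), method="bounded")
    p = p + res.x                    # minimizer of the incremental potential
    history.append((eps, E * (eps - p)))   # stress from the elastic law

print(f"final stress: {history[-1][1]:.4f}")
```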

18.
Information and Computation, 2007, 205(8):1212-1234
This paper investigates the word problem for inverse monoids generated by a set Γ subject to relations of the form e = f, where e and f are both idempotents in the free inverse monoid generated by Γ. It is shown that for every fixed monoid of this form, the word problem can be solved both in linear time on a RAM and in deterministic logarithmic space, which solves an open problem of Margolis and Meakin. For the uniform word problem, where the presentation is part of the input, EXPTIME-completeness is shown. For the Cayley graphs of these monoids, it is shown that the first-order theory with regular path predicates is decidable. Regular path predicates allow one to state that there is a path from a node x to a node y labeled with a word from some regular language. As a corollary, the decidability of the generalized word problem is deduced.

19.
We present a new protocol for the following task: given two secrets a, b shared among n players, compute the value g^(ab). The protocol uses the generic BGW approach for multiplication of shared secrets, but we show that if one is computing "multiplications in the exponent", the polynomial randomization step can be avoided (assuming the Decisional Diffie-Hellman assumption holds). This results in a non-interactive and more efficient protocol.
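A toy sketch, with tiny and completely insecure parameters, of "multiplication in the exponent": players hold Shamir shares a_i, b_i of a and b; each publishes g^(a_i·b_i), and anyone interpolates in the exponent with Lagrange coefficients to obtain g^(ab), with no interactive degree-reduction (randomization) step.

```python
import random

q, p, g = 11, 23, 2            # toy group: g has order q modulo p

def share(secret, t, n):       # Shamir sharing over Z_q with threshold t
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    return [sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q
            for i in range(1, n + 1)]

def lagrange_at_zero(points):  # coefficients interpolating h(0) over Z_q
    lams = []
    for i in points:
        num = den = 1
        for j in points:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        lams.append(num * pow(den, q - 2, q) % q)   # divide via Fermat inverse
    return lams

a, b, t, n = 7, 9, 1, 3        # a_i*b_i lie on a degree-2t polynomial, so n >= 2t+1
sa, sb = share(a, t, n), share(b, t, n)
pub = [pow(g, sa[i] * sb[i], p) for i in range(n)]  # each player publishes g^(a_i*b_i)

lams = lagrange_at_zero(list(range(1, n + 1)))
g_ab = 1
for y, lam in zip(pub, lams):
    g_ab = g_ab * pow(y, lam, p) % p                # combine in the exponent

assert g_ab == pow(g, a * b % q, p)
print("g^(ab) =", g_ab)
```

The product a_i·b_i of the shares lies on a degree-2t polynomial whose constant term is ab, so 2t+1 published exponentiations suffice; since everything is combined in the exponent, no fresh sharing of the product is needed.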

20.
Ben-Amram, Galil. Algorithmica, 2002, 32(3):364-395
In a seminal paper of 1989, Fredman and Saks proved lower bounds for some important data-structure problems in the cell probe model. This model assumes that data structures are stored in memory with a known word length. In this paper we consider random access machines (RAMs) that can add, subtract, compare, multiply, and divide integer or real numbers, with no size limitation; these are referred to as algebraic RAMs. We prove new lower bounds for two important data-structure problems, union-find and dynamic prefix sums. To this end we apply the generalized Fredman–Saks technique introduced by the authors in a previous paper. The generalized technique relies on a certain well-defined function, output variability, that characterizes in some sense the power of the computational model. Fredman and Saks' work implied bounds on output variability for the cell probe model; in this paper we provide the first bounds for algebraic RAMs and show that they suffice for proving tight lower bounds for useful problems. An important feature of the problems we consider is that in a data structure of size n, the data stored are members of {0, ..., n}. This makes the derivation of lower bounds for such problems on a RAM without word-size limitations a particular challenge; the previous RAM lower bounds we are aware of depend on the fact that the data for the computation can vary over a large domain.
