Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Dynamic spectrum sharing is a promising technology for improving spectrum utilization in future wireless networks. Flexible spectrum management gives licensed primary users and unlicensed secondary users new opportunities to reallocate spectrum resources efficiently. In this paper, we present an oligopoly pricing framework for dynamic spectrum allocation in which the primary users sell excess spectrum to the secondary users for monetary return. We present two approaches, strict constraints (type-I) and QoS penalty (type-II), to model the realistic situation in which the primary users have limited capacities. In the oligopoly model with strict constraints, we propose a low-complexity search method to obtain the Nash equilibrium and prove its uniqueness. When the game reduces to a duopoly, we analytically characterize the interesting gaps in the leader–follower pricing strategy. In the QoS-penalty-based oligopoly model, a novel variable-transformation method is developed to derive the unique Nash equilibrium. When market information is limited, we provide three myopically optimal algorithms, “StrictBEST”, “StrictBR”, and “QoSBEST”, that enable price adjustment for duopoly primary users based on the best response function (BRF) and bounded rationality (BR) principles. Numerical results validate the effectiveness of our analysis and demonstrate the convergence of “StrictBEST” and “QoSBEST” to the Nash equilibrium. For the “StrictBR” algorithm, we reveal chaotic behaviors in the dynamic price adaptation as a function of the learning rate.
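
As a concrete illustration of “StrictBEST”-style best-response price adjustment, the following minimal sketch iterates best responses in a duopoly. The linear demand model and its parameters are hypothetical stand-ins, not the paper's spectrum-demand model:

```python
# Minimal sketch of best-response price dynamics in a duopoly. The
# linear demand model and its parameters are hypothetical stand-ins,
# not the paper's spectrum-demand model: demand_i = a - b*p_i + g*p_j.
a, b, g = 10.0, 2.0, 1.0       # assumed demand parameters (g < b)
cost = 1.0                     # assumed marginal cost of spectrum

def best_response(p_other):
    # argmax_p (p - cost) * (a - b*p + g*p_other):
    # set the derivative a + g*p_other + b*cost - 2*b*p to zero.
    return (a + g * p_other + b * cost) / (2 * b)

p1, p2 = 5.0, 5.0              # arbitrary starting prices
for _ in range(50):
    p1, p2 = best_response(p2), best_response(p1)

print(f"iterated prices: p1 = {p1:.4f}, p2 = {p2:.4f}")
print(f"analytic NE:     p* = {(a + b * cost) / (2 * b - g):.4f}")
```

With these assumed parameters the iteration contracts toward the symmetric equilibrium p* = (a + b·cost)/(2b − g) = 4.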

2.
We investigate the introduction of look-ahead into two-stage algorithms for the singular value decomposition (SVD). Our approach relies on a specialized reduction for the first stage that produces a band matrix with equal upper and lower bandwidth instead of the conventional upper-triangular band matrix. On a CPU-GPU server, this alternative form accommodates a static look-ahead in the algorithm, overlapping the reduction of the “next” panel on the CPU with the “current” trailing update on the GPU. For multicore processors, we leverage the same compact form to formulate a version of the algorithm that advances the reduction of “future” panels, yielding a dynamic look-ahead that overcomes the performance bottleneck posed by the sequential panel factorization.
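
The static look-ahead pattern can be sketched schematically. In the toy code below, a worker thread stands in for the GPU performing the trailing update while the main thread factors the next panel; the numpy kernels are stand-ins for the paper's band-reduction arithmetic, and only the overlap structure matters:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Schematic of static look-ahead: while a worker thread (the "GPU")
# applies the trailing update for panel k, the main thread (the "CPU")
# already factors panel k+1.
rng = np.random.default_rng(0)
n, nb = 8, 2
panels = [rng.standard_normal((n, nb)) for _ in range(4)]
trailing = rng.standard_normal((n, n))

def factor_panel(P):               # "CPU" kernel: orthogonal factor
    return np.linalg.qr(P)[0]

def trailing_update(Q, T):         # "GPU" kernel: project out span(Q)
    return T - Q @ (Q.T @ T)

with ThreadPoolExecutor(max_workers=1) as gpu:
    pending = None
    for P in panels:
        Q = factor_panel(P)                 # next panel, on the CPU...
        if pending is not None:
            trailing = pending.result()     # ...overlapped with the GPU
        pending = gpu.submit(trailing_update, Q, trailing)
    trailing = pending.result()
print("trailing norm:", round(float(np.linalg.norm(trailing)), 4))
```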

3.
Viral marketing is widely used by businesses to achieve their marketing objectives through social media. In this work, we propose a customized crowdsourcing approach to viral marketing that aims at efficient marketing based on information propagation through a social network. We term this approach the social community-based crowdsourcing platform and integrate it with an information diffusion model to find the most efficient crowd workers. To this end, we propose an intelligent viral marketing framework (IVMF) comprising two modules. The first module identifies the K most influential users in a given social network using a novel linear threshold diffusion model. The proposed model considers the different propagation behaviors of network users in relation to different contexts; being able to consider multiple topics in the information propagation model, as opposed to only one, makes our model applicable to a more diverse population base. Additionally, the proposed content-based improved greedy (CBIG) algorithm improves on the basic greedy algorithm by reducing its total computation, since the dominant cost of the basic greedy algorithm lies in computing the total influence propagation of each node at every step. Experimental results reveal that the number of iterations in our CBIG algorithm is much smaller than in the basic greedy algorithm, while its precision in choosing the K influential nodes in a social network is close to that of the greedy algorithm. The second module of the IVMF framework, a multi-objective integer optimization model, determines which social network should be targeted for viral marketing, taking the marketing budget into account. The overall IVMF framework can be used to select a social network and recruit the K most influential crowd workers. In this paper, IVMF is exemplified in the personal care industry to demonstrate its utility in a real-life case.
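
For reference, a minimal single-topic greedy seed-selection sketch under a linear threshold model is shown below. It uses uniform edge weights and Monte Carlo spread estimation, and omits the multi-topic weighting and the content-based pruning that distinguish CBIG:

```python
import random

# Greedy seed selection under a basic linear threshold (LT) model with
# uniform edge weights; spread is estimated by Monte Carlo simulation.
random.seed(7)

def lt_spread(graph, seeds, trials=200):
    in_nbrs = {v: [u for u in graph if v in graph[u]] for v in graph}
    total = 0
    for _ in range(trials):
        thresholds = {v: random.random() for v in graph}
        active, changed = set(seeds), True
        while changed:
            changed = False
            for v in graph:
                if v in active or not in_nbrs[v]:
                    continue
                w = sum(u in active for u in in_nbrs[v]) / len(in_nbrs[v])
                if w >= thresholds[v]:
                    active.add(v)
                    changed = True
        total += len(active)
    return total / trials

def greedy_seeds(graph, k):
    seeds = []
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: lt_spread(graph, seeds + [v]))
        seeds.append(best)
    return seeds

graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0], 4: [0, 1]}
print(greedy_seeds(graph, k=2))
```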

4.
Flow-level information is important for many applications in network measurement and analysis. In this work, we tackle the “top spreaders” and “top scanners” problems, in which hosts that spread the largest numbers of flows, especially small flows, must be identified efficiently and accurately. Identifying these top users can be very helpful in network management, traffic engineering, application behavior analysis, and anomaly detection. We propose novel streaming algorithms and a “Filter-Tracker-Digester” framework to catch the top spreaders and scanners online. Our framework combines sampling with streaming algorithms, and deterministic with randomized algorithms, in such a way that they effectively help each other to improve accuracy while reducing memory usage and processing time. To our knowledge, we are the first to tackle the “top scanners” problem in a streaming fashion. We address several challenges, namely traffic scale, skewness, speed, memory usage, and result accuracy. The performance bounds of our algorithms are derived analytically and evaluated on both real and synthetic traces, where we show that our algorithms achieve accuracy and speed at least an order of magnitude better than existing approaches.
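
As a rough illustration of streaming top-spreader detection (not the paper's Filter-Tracker-Digester design), the sketch below applies the Space-Saving heavy-hitter algorithm to per-host distinct-flow counts, with a small per-host set standing in for a compact distinct-counting summary:

```python
# Space-Saving heavy hitters over per-host distinct-flow counts.
def top_spreaders(packets, capacity=4):
    counters = {}                    # host -> estimated #distinct flows
    seen = {}                        # host -> flows already counted
    for src, flow in packets:
        if src in counters:
            if flow not in seen[src]:
                seen[src].add(flow)
                counters[src] += 1
        elif len(counters) < capacity:
            counters[src], seen[src] = 1, {flow}
        else:                        # evict the minimum (Space-Saving)
            victim = min(counters, key=counters.get)
            counters[src] = counters.pop(victim) + 1
            del seen[victim]
            seen[src] = {flow}
    return sorted(counters.items(), key=lambda kv: -kv[1])

stream = [("h1", 1), ("h1", 2), ("h2", 1), ("h1", 3), ("h3", 1),
          ("h1", 4), ("h4", 1), ("h5", 1)]
print(top_spreaders(stream))
```

The eviction rule inherits Space-Saving's property that a true heavy spreader cannot be undercounted by more than the smallest tracked counter.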

5.
Online configuration of large-scale systems such as networks requires parameter optimization within a limited amount of time, especially when configuration is needed in response to a failure in the system. To quickly configure such systems online, we propose a Probabilistic Trans-Algorithmic Search (PTAS) framework that leverages multiple optimization search algorithms in an iterative manner. PTAS applies a search algorithm to determine how best to distribute the available experiment budget among multiple optimization search algorithms. It allocates an experiment budget to each available search algorithm and observes its performance on the system at hand. PTAS then probabilistically reallocates the experiment budget for the next round in proportion to each algorithm's performance relative to the rest. This “roulette wheel” approach probabilistically favors the more successful algorithms in the next round. Following each round, the PTAS framework “transfers” the best result(s) among the individual algorithms, making the framework trans-algorithmic. PTAS thus aims to systematize how to “search for the best search” and hybridize a set of search algorithms into a better one. We use three individual search algorithms, Recursive Random Search (RRS) (Ye and Kalyanaraman, 2004), Simulated Annealing (SA) (Laarhoven and Aarts, 1987), and Genetic Algorithm (GA) (Goldberg, 1989), and compare PTAS against the performance of each. We show the performance of PTAS on well-known benchmark objective functions, including scenarios where the objective function changes in the middle of the optimization process. To illustrate the applicability of our framework to automated network management, we apply PTAS to the problem of optimizing the link weights of an intra-domain routing protocol on three different topologies from the Rocketfuel dataset, and to the problem of optimizing the aggregate throughput of a wireless ad hoc network by tuning the data rates of traffic sources. Our experiments show that PTAS successfully picks the best-performing algorithm, RRS or GA, and allocates the time wisely. Further, our results show that PTAS's performance is not transient and steadily improves as more time becomes available for the search.
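
The “roulette wheel” budget reallocation can be sketched as follows. The three toy random searchers stand in for RRS, SA, and GA, and a deterministic proportional split stands in for the probabilistic draw; the “transfer” step shares the best-so-far point:

```python
import random

# Budget reallocation across competing searchers, proportional to each
# one's improvement in the previous round.
random.seed(3)

def sphere(x):                      # benchmark objective (minimize)
    return sum(v * v for v in x)

def random_searcher(step):
    def run(start, budget):
        best_x, best = list(start), sphere(start)
        for _ in range(budget):
            cand = [v + random.uniform(-step, step) for v in best_x]
            if sphere(cand) < best:
                best_x, best = cand, sphere(cand)
        return best_x, best
    return run

algos = [random_searcher(s) for s in (2.0, 0.5, 0.05)]
shares = [1.0 / len(algos)] * len(algos)
x = [5.0, -3.0]                     # shared best-so-far point
for _ in range(20):
    gains = []
    for algo, share in zip(algos, shares):
        before = sphere(x)
        cand_x, after = algo(x, budget=max(1, int(100 * share)))
        gains.append(max(before - after, 1e-9))
        if after < before:
            x = cand_x              # "transfer" the best result
    shares = [g / sum(gains) for g in gains]
print("best:", [round(v, 4) for v in x], "f =", f"{sphere(x):.6f}")
```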

6.
7.
Recommender systems play an increasingly important role in our lives as useful tools that help users find “what they need” among very large numbers of candidates and that support decision-making in various contexts: which items to buy, which movie to watch, or even whom to invite into one's social network. In this paper, we propose a novel collaborative user-centered recommendation approach in which several aspects of users available in online social networks, i.e., preferences (usually in the shape of items' metadata), opinions (textual comments to which a sentiment can be associated), behavior (in most cases, logs of users' past item observations), and feedback (usually expressed as ratings), are integrated together with item features and context information within a general framework that can support different applications through proper customization (e.g., recommendation of news, photos, movies, travel, etc.). Experiments on system accuracy and user satisfaction in several domains show that our approach provides very promising results.
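
A minimal sketch of the signal blending implied above is given below; the weights and toy signal values (each normalized to [0, 1]) are illustrative assumptions, not the paper's actual model:

```python
# Blend the four user signals named above into one recommendation score.
def hybrid_score(signals, weights=(0.4, 0.2, 0.2, 0.2)):
    # signals: (preference_match, opinion_sentiment,
    #           behavior_affinity, predicted_rating)
    return sum(w * s for w, s in zip(weights, signals))

candidates = {
    "movie_a": (0.9, 0.7, 0.3, 0.8),   # strong metadata match
    "movie_b": (0.4, 0.9, 0.8, 0.6),   # strong behavioral signal
}
ranked = sorted(candidates, key=lambda m: -hybrid_score(candidates[m]))
print(ranked)
```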

8.
Two new lattice reduction algorithms are presented and analyzed. These algorithms, called the Schmidt reduction and the Gram reduction, are obtained by relaxing some of the constraints of the classical LLL algorithm. By analyzing the worst-case behavior and the average-case behavior in a tractable model, we prove that the new algorithms still produce “good” reduced bases while requiring fewer iterations on average. In addition, we provide empirical tests on random lattices drawn from applications that confirm our theoretical results about the relative behavior of the different reduction algorithms.
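
For context, a compact (and deliberately inefficient) version of the classical LLL algorithm that the new reductions relax is sketched below, using exact rational arithmetic and recomputing the Gram-Schmidt data after every basis change:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    n = len(B)
    Bstar = [list(b) for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
            Bstar[i] = [x - mu[i][j] * y for x, y in zip(Bstar[i], Bstar[j])]
    return Bstar, mu

def lll(B, delta=Fraction(3, 4)):
    B = [[Fraction(x) for x in b] for b in B]
    n, k = len(B), 1
    Bstar, mu = gram_schmidt(B)
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(mu[k][j])
            if q != 0:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bstar, mu = gram_schmidt(B)
        if dot(Bstar[k], Bstar[k]) >= \
                (delta - mu[k][k - 1] ** 2) * dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1                              # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]     # swap and step back
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in B]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # example lattice basis
```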

9.
Dr. X. Merrheim, Computing, 1994, 53(3-4): 219-232
Many hardware-oriented algorithms for computing the usual elementary functions (sine, cosine, exponential, logarithm, ...) use only shifts and additions. In this paper, we present new algorithms using shifts, additions, and “small multiplications” (i.e., multiplications by few-digit numbers). These CORDIC-like algorithms compute the elementary functions in radix 2^p (instead of the standard radix 2) and use table look-ups. The number of steps required to compute a function to a given accuracy is reduced, and since a fast “small multiplier” is used, the computation time is reduced as well.
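
The radix-2 baseline that these algorithms generalize can be sketched in a few lines. The following classic shift-and-add CORDIC rotation computes sine and cosine; floating point stands in for the fixed-point hardware:

```python
import math

# Classic radix-2 CORDIC in rotation mode (shifts and adds only).
# Valid for angles in roughly [-pi/2, pi/2].
N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))    # aggregate scale factor

def cordic_sin_cos(theta):
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0        # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x                              # (sin, cos)

s, c = cordic_sin_cos(0.5)
print(s - math.sin(0.5), c - math.cos(0.5))  # tiny residual errors
```

Each iteration resolves roughly one more bit of the angle, which is exactly the step count that a radix-2^p variant with small multiplications cuts down.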

10.
Adaptation to the characteristics of specific images and the preferences of individual users is critical to the success of an image retrieval system, but it is insufficiently addressed by existing approaches. In this paper, we propose an elegant and effective approach to data-adaptive and user-adaptive image retrieval based on the idea of peer indexing: describing an image through semantically relevant peer images. Specifically, we associate each image with a two-level peer index that models the “data characteristics” of the image as well as the “user characteristics” of individual users with respect to it. Based on the two-level peer indexes, a set of retrieval parameters, including the query vectors and the similarity metric, is optimized toward both data and user characteristics by applying a pseudo-feedback strategy. A cooperative framework is proposed under which peer indexes and visual image features are integrated to facilitate data- and user-adaptive image retrieval. Simulation experiments conducted on real-world images have verified the effectiveness of our approach in a relatively restricted setting.
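
A minimal stand-in for the pseudo-feedback step described above is the classic Rocchio update, sketched below; the weights alpha and beta and the toy feature vectors are assumptions:

```python
import numpy as np

# Classic Rocchio pseudo-feedback: rank by cosine similarity, then pull
# the query toward the centroid of the top-k "pseudo relevant" images.
def pseudo_feedback(query, docs, top_k=3, alpha=1.0, beta=0.75):
    sims = docs @ query / (np.linalg.norm(docs, axis=1)
                           * np.linalg.norm(query) + 1e-12)
    top = docs[np.argsort(-sims)[:top_k]]
    return alpha * query + beta * top.mean(axis=0)

rng = np.random.default_rng(1)
docs = rng.random((20, 8))          # toy image feature vectors
q = rng.random(8)
print(np.round(pseudo_feedback(q, docs), 3))
```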

11.
Users are the most critical strategic resource of any online social networking service (SNS). This paper offers strategic recommendations for SNS providers based on an empirical study of why users switch from a primary SNS to others. We first identify important characteristics that together distinguish SNSs from conventional information systems, then develop a “cyber migration” research model that includes the push, pull, and mooring factors that influence users' intention to switch from one SNS to another. Findings from a field survey of 180 users reveal four significant factors that promote switching: dissatisfaction with socialization support, dissatisfaction with entertainment value, continuity cost, and peer influence. Strategies grounded in these factors are suggested to help SNS providers better attract and retain users.

12.
Many models of social network formation implicitly assume that network properties are static in steady state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals' degree rankings change considerably over time, whereas network-level metrics such as diameter and clustering coefficient remain relatively stable. There have been attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours but can also manipulate their network connections to strategic advantage. However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores (“indirect reciprocity”), which is known to play an important role in many economic interactions. To account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. By doing so, we are able to simultaneously account for the effects of both direct reciprocity (e.g., “tit-for-tat”) and indirect reciprocity (helping strangers in order to increase one's reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks that are dynamic at the individual level but stable at the network level.
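
The indirect-reciprocity mechanism can be sketched with a minimal image-scoring donation game (in the style of Nowak and Sigmund), a drastically simplified stand-in for the bottom-up network model described above; all parameters are illustrative:

```python
import random

# Image-scoring donation game: an agent with strategy k helps a
# recipient whose reputation is >= k, so lower k means more generous.
# Helping raises the donor's public reputation; refusing lowers it.
random.seed(4)
BENEFIT, COST, ROUNDS, N = 2.0, 1.0, 5000, 30
agents = [{"k": random.choice([-1, 0, 1]), "rep": 0, "payoff": 0.0}
          for _ in range(N)]

for _ in range(ROUNDS):
    i, j = random.sample(range(N), 2)
    donor, recipient = agents[i], agents[j]
    if recipient["rep"] >= donor["k"]:        # donor judges recipient
        donor["payoff"] -= COST
        recipient["payoff"] += BENEFIT
        donor["rep"] = min(donor["rep"] + 1, 5)
    else:
        donor["rep"] = max(donor["rep"] - 1, -5)

for k in (-1, 0, 1):
    payoffs = [a["payoff"] for a in agents if a["k"] == k]
    if payoffs:
        print(f"strategy k={k:+d}: mean payoff "
              f"{sum(payoffs) / len(payoffs):.2f}")
```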

13.
Partitioned EDF scheduling: a closer look
The partitioned EDF scheduling of implicit-deadline sporadic task systems upon identical multiprocessor platforms is considered. The problem is known to be intractable, but many different polynomial-time algorithms have been proposed for solving it approximately. These approximation algorithms have previously been compared using utilization bounds; here they are compared using a different metric: the speedup factor. It is shown that, from the perspective of their speedup factors, the best partitioning algorithms are those that (i) assign the tasks in decreasing order of utilization and (ii) are “reasonable” in the sense that they will assign a task whenever some processor has capacity available. Such algorithms include the widely used First-Fit Decreasing, Best-Fit Decreasing, and Worst-Fit Decreasing partitioning heuristics.
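
A minimal sketch of such a “reasonable” decreasing-utilization heuristic is given below: First-Fit Decreasing with the uniprocessor EDF feasibility test (total utilization at most 1 per processor):

```python
# First-Fit Decreasing partitioning for implicit-deadline sporadic
# tasks under partitioned EDF: sort by decreasing utilization and
# place each task on the first processor with enough spare capacity.
def ffd_partition(utilizations, m):
    loads = [0.0] * m                       # per-processor utilization
    assignment = [[] for _ in range(m)]
    for u in sorted(utilizations, reverse=True):
        for p in range(m):                  # first fit
            if loads[p] + u <= 1.0 + 1e-12:
                loads[p] += u
                assignment[p].append(u)
                break
        else:
            return None                     # FFD fails to place the task
    return assignment

print(ffd_partition([0.6, 0.5, 0.4, 0.3, 0.2], m=2))
```

The per-processor test works because EDF schedules any implicit-deadline task set on one processor exactly when its total utilization does not exceed 1.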

14.
Social media influence analysis, sometimes also called authority detection, aims to rank users by their influence scores in social media. Existing approaches to social influence analysis usually focus on developing effective algorithms to quantify users' influence scores; they rarely consider a person's expertise level, which is arguably important to influence measures. In this paper, we propose a computational approach to measuring the correlation between expertise and social media influence, and we take a new perspective on understanding social media influence by incorporating expertise into influence analysis. We carefully constructed a large dataset of 13,684 Chinese celebrities from Sina Weibo (literally, “Sina microblogging”) and found a strong correlation between expertise levels and social media influence scores. Our analysis provides a good explanation of the phenomenon of “top across-domain influencers”. In addition, different expertise levels showed distinct influence patterns: (1) high-expertise celebrities have stronger influence on the “audience” in their expertise domains; (2) expertise appears to matter more than relevance and participation for social media influence; and (3) the audiences of top-expertise celebrities are more likely to forward those celebrities' tweets even on topics outside their expertise domains.
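
The core correlation measurement can be sketched in a few lines; the toy arrays below stand in for the 13,684-celebrity dataset, and the ordinal 1-5 expertise encoding is an assumption:

```python
from scipy.stats import spearmanr

# Rank-correlate expertise levels with influence scores.
expertise = [1, 2, 2, 3, 3, 4, 5, 5]
influence = [10, 25, 18, 40, 35, 55, 80, 72]
rho, p = spearmanr(expertise, influence)
print(f"Spearman rho = {rho:.3f} (p = {p:.4f})")
```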

15.
The increasing popularity of location-based applications creates new opportunities for users to travel together. In this paper, we study a novel spatio-social optimization problem, Optimal Group Route, for multi-user itinerary planning. With our problem formulation, users can individually specify sources and destinations, preferences on point-of-interest (POI) categories, and distance constraints. The goal is to find an itinerary that can be traversed by all the users while maximizing the group's preference for the POI categories in the itinerary. Our work advances existing group trip planning studies by maximizing the group's social experience: individual preferences for POI categories are aggregated by considering the agreement and disagreement among group members. Because planning a multi-user itinerary on large road networks is computationally challenging, we propose two efficient greedy algorithms with bounded approximation ratios, an exact solution that computes the optimal itinerary by exploring a limited number of paths in the road network, and a scaled approximation algorithm that speeds up the dynamic programming employed by the exact solution. Extensive empirical evaluations on two real-world road network/POI datasets confirm the effectiveness and efficiency of our solutions.
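
One common way to aggregate preferences while accounting for agreement and disagreement, not necessarily the paper's exact formula, is to reward the group mean and penalize dispersion, as in this sketch:

```python
import statistics

# Group preference per POI category: mean score minus a penalty on the
# standard deviation (high deviation = disagreement). The penalty
# weight is an assumed parameter.
def group_preference(scores_per_user, penalty=0.5):
    out = {}
    for category in scores_per_user[0]:
        vals = [u[category] for u in scores_per_user]
        out[category] = (statistics.mean(vals)
                         - penalty * statistics.pstdev(vals))
    return out

users = [
    {"museum": 0.9, "park": 0.2, "cafe": 0.6},
    {"museum": 0.8, "park": 0.9, "cafe": 0.5},
    {"museum": 0.7, "park": 0.1, "cafe": 0.6},
]
print(group_preference(users))
```

Here “museum” wins despite “park” having one enthusiastic supporter, because the group agrees on museums and disagrees on parks.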

16.
Many real-world networks, including social and information networks, are dynamic structures that evolve over time. Such dynamic networks are typically visualized using a sequence of static graph layouts. In addition to providing a visual representation of the network structure at each time step, the sequence should preserve the mental map between layouts of consecutive time steps so that a human can interpret the temporal evolution of the network. In this paper, we propose a framework for dynamic network visualization in the online setting, where only present and past graph snapshots are available to create the present layout. The proposed framework creates regularized graph layouts by augmenting the cost function of a static graph layout algorithm with a grouping penalty, which discourages nodes from deviating too far from other nodes belonging to the same group, and a temporal penalty, which discourages large node movements between consecutive time steps. The penalties increase the stability of the layout sequence, thus preserving the mental map. We introduce two dynamic layout algorithms within the proposed framework, namely dynamic multidimensional scaling and dynamic graph Laplacian layout, and apply them to several data sets to illustrate the importance of both grouping and temporal regularization for producing interpretable visualizations of dynamic networks.
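
A minimal sketch of a temporally regularized layout step is shown below: gradient descent on MDS stress plus a quadratic penalty tying each node to its previous position. The grouping penalty is omitted for brevity, and lam_t and lr are assumed values, not taken from the paper:

```python
import numpy as np

# One layout update: minimize stress(X, D) + lam_t * ||X - X_prev||^2.
def layout_step(X, D, X_prev, lam_t=1.0, lr=0.005, iters=500):
    n = X.shape[0]
    for _ in range(iters):
        diffs = X[:, None, :] - X[None, :, :]        # pairwise x_i - x_j
        dists = np.linalg.norm(diffs, axis=2) + np.eye(n)
        coeff = (dists - D) / dists                  # stress residuals
        np.fill_diagonal(coeff, 0.0)
        grad = 2 * (coeff[:, :, None] * diffs).sum(axis=1)
        grad += 2 * lam_t * (X - X_prev)             # temporal penalty
        X = X - lr * grad
    return X

rng = np.random.default_rng(0)
n = 10
D = rng.random((n, n))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)
X_prev = rng.random((n, 2))
X_new = layout_step(X_prev + 0.1 * rng.standard_normal((n, 2)), D, X_prev)
print(np.round(X_new[:3], 3))
```

Raising lam_t trades layout quality at the current step for stability across steps, which is the mental-map preservation the paper targets.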

17.
Consider a complete communication network on n nodes. In synchronous 2-counting, the nodes receive a common clock pulse and must agree on which pulses are “odd” and which are “even”. Furthermore, the solution needs to be self-stabilising (reaching correct operation from any initial state) and to tolerate f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms either require a source of random bits or a large number of states per node. In this work, we give fast state-optimal deterministic algorithms for the first non-trivial case, f = 1. To obtain these algorithms, we develop and evaluate two techniques for algorithm synthesis, both based on casting the synthesis problem as a propositional satisfiability (SAT) problem: a direct encoding is efficient for synthesising time-optimal algorithms, while an approach based on counter-example-guided abstraction refinement discovers non-optimal algorithms quickly.

18.
In The Philosophy of Information, Luciano Floridi presents an ontological theory of Being qua Being, which he calls “Informational Structural Realism”, a theory which, he says, applies to every possible world. He identifies primordial information (“dedomena”) as the foundation of any structure in any possible world. The present essay examines Floridi's defense of that theory, as well as his refutation of “Digital Ontology” (which some might confuse with his own). Then, using Floridi's ontology as a starting point, the essay adds quantum features to dedomena, yielding an ontological theory for our own universe, Quantum Informational Structural Realism, which provides a metaphysical interpretation of key quantum phenomena and diminishes the “weirdness” or “spookiness” of quantum mechanics.

19.
In recent years, corporate budgets for eliminating security problems or mitigating security risks have risen dramatically, yet the number of incidents affecting computer systems on intranets and the Internet continues to increase. Many researchers have proposed isolating the computers that store sensitive information, to prevent that information from being revealed or vulnerabilities on those computers from being exploited. However, few materials are available on implementing network isolation. In this paper, we define two forms of network isolation: “physical isolation” and “logical isolation”. ISO 17799 offers guidance for auditing physical network isolation but none for implementing logical network isolation. This paper therefore provides implementation guidance for network isolation from two perspectives, one technical and one managerial. The proposed implementation outlines and security measures will be considered in revising the security plan “The Implementation Plan for Information Security Level in Government Departments” [National Information and Communication Security Taskforce, Taiwan R.O.C., Jul. 20, 2005].

20.
In this paper, we introduce item-centric mining, a new semantics for mining long-tailed datasets. Our algorithm, TopPI, finds for each item its top-k most frequent closed itemsets. While most mining algorithms focus on the globally most frequent itemsets, TopPI guarantees that each item is represented in the results, regardless of its frequency in the database. TopPI allows users to explore Web data efficiently, answering questions such as “what are the k most common sets of songs downloaded together with those of my favorite artist?”. When processing retail data consisting of 55 million supermarket receipts, TopPI finds the itemset “milk, puff pastry”, which appears 10,315 times, but also “frangipane, puff pastry” and “nori seaweed, wasabi, sushi rice”, which occur only 1,120 and 163 times, respectively. Our experiments with analysts from the marketing department of our retail partner demonstrate that item-centric mining discovers valuable itemsets. We also show that TopPI can serve as a building block to approximate complex itemset ranking measures such as the p-value. Thanks to efficient enumeration and pruning strategies, TopPI avoids the search-space explosion induced by mining low-support itemsets. We show how TopPI can be parallelized on multi-core machines and distributed on Hadoop clusters. Our experiments on datasets with different characteristics show the superiority of TopPI compared to standard top-k solutions and to Parallel FP-Growth, its closest competitor.
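
The item-centric semantics can be illustrated with a brute-force sketch that enumerates small itemsets and keeps the top-k per item; closedness and TopPI's pruning and parallelization are omitted:

```python
from collections import Counter
from itertools import combinations

# For each item, keep its top-k most frequent itemsets containing it.
def item_centric(transactions, k=2, max_size=3):
    freq = Counter()
    for t in transactions:
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(t), size):
                freq[itemset] += 1
    per_item = {}
    for itemset, count in freq.most_common():   # descending frequency
        for item in itemset:
            tops = per_item.setdefault(item, [])
            if len(tops) < k:
                tops.append((itemset, count))
    return per_item

receipts = [{"milk", "puff pastry"}, {"milk", "puff pastry", "eggs"},
            {"frangipane", "puff pastry"}, {"milk", "eggs"}]
for item, tops in sorted(item_centric(receipts).items()):
    print(item, tops)
```

Note how a rare item like “frangipane” still surfaces with its own top itemsets, which a global top-k miner would discard.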

