20 similar documents found (search time: 31 ms)
1.
Mohammad Ghodsi Hamid Mahini Vahab S. Mirrokni Morteza Zadimoghaddam 《Algorithmica》2011,60(4):853-876
We study permutation betting markets, introduced by Chen et al. (Proceedings of the ACM Conference on Electronic Commerce,
2007). For these markets, we consider subset betting, in which each trader can bet on a subset of candidates ending up in a subset
of positions. We consider the revenue maximization problem for the auctioneer in two main frameworks: the risk-free revenue
maximization (studied in Chen et al., Proceedings of the ACM Conference on Electronic Commerce, 2007), and the probabilistic revenue maximization. We also explore the use of certain knowledge or extra information about
the possible outcomes of the market. We first show that finding the optimal revenue in the risk-free model for the subset
betting problem is inapproximable. This resolves an open question posed by Chen et al. (Proceedings of the ACM Conference
on Electronic Commerce, 2007). In order to identify solvable variants of the problem, we propose the singleton betting language which allows traders to
bet an arbitrary value on one candidate for one position. For singleton betting, we first provide a necessary and sufficient condition, checkable in linear time, for the existence of a solution with positive revenue for any possible outcome. Furthermore,
we develop an LP-based polynomial-time algorithm to find the optimum solution of this problem. In addition, we show how to
extend this LP-based method to handle some extra information about the possible outcomes. Finally, we consider the revenue
maximization problem in a probabilistic setting. For this variant, we observe that the problem of maximizing the expected
revenue is polynomial-time solvable, but we show that maximizing the probability of achieving a pre-specified revenue is #P-complete.
2.
Tak-Wah Lam Lap-Kei Lee Isaac K. K. To Prudence W. H. Wong 《Journal of Scheduling》2012,15(1):105-116
Energy usage has been an important concern in recent research on online scheduling. In this paper, we study the tradeoff between
flow time and energy (Albers and Fujiwara in ACM Trans. Algorithms 3(4), 2007; Bansal et al. in Proceedings of ACM-SIAM Symposium on Discrete Algorithms, pp. 805–813, 2007b, Bansal et al. in Proceedings of International Colloquium on Automata, Languages and Programming, pp. 409–420, 2008; Lam et al. in Proceedings of European Symposium on Algorithms, pp. 647–659, 2008b) in the multi-processor setting. Our main result is an enhanced analysis of a simple non-migratory online algorithm called
CRR (classified round robin) on m≥2 processors, showing that its flow time plus energy is within O(1) times that of the optimal non-migratory offline algorithm, when the maximum allowable speed is slightly relaxed. The result still holds even if the comparison is made against the optimal migratory offline algorithm. This improves the previous analysis showing that CRR is O(log P)-competitive, where P is the ratio of the maximum job size to the minimum job size.
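The classification idea behind CRR can be illustrated with a small sketch. This is a toy illustration of the dispatching scheme only, not the paper's algorithm or its analysis: here jobs are grouped into power-of-two size classes and each class is dealt out round-robin over the m processors; all names are our own.

```python
from math import floor, log2

def classify(job_size):
    """Class k contains jobs of size in [2^k, 2^(k+1))."""
    return floor(log2(job_size))

def crr_assign(jobs, m):
    """Assign jobs to m processors: within each size class, jobs are
    handed out round-robin, so jobs of similar size are spread evenly."""
    counters = {}    # next processor index per class
    assignment = {}  # job id -> processor
    for job_id, size in jobs:
        k = classify(size)
        p = counters.get(k, 0)
        assignment[job_id] = p
        counters[k] = (p + 1) % m
    return assignment

# Jobs "b" and "d" share a class (sizes in [4, 8)), as do "c" and "e"
# (sizes in [16, 32)), so each pair is split across the two processors.
jobs = [("a", 3), ("b", 5), ("c", 17), ("d", 7), ("e", 20)]
assign = crr_assign(jobs, 2)
```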
3.
k-Anonymity is a privacy-preserving method for limiting disclosure of private information in data mining. The process of anonymizing
a database table typically involves generalizing table entries and, consequently, it incurs loss of relevant information.
This motivates the search for anonymization algorithms that achieve the required level of anonymization while incurring a
minimal loss of information. The problem of k-anonymization with minimal loss of information is NP-hard. We present a practical approximation algorithm that enables solving
the k-anonymization problem with an approximation guarantee of O(ln k). That algorithm improves an algorithm due to Aggarwal et al. (Proceedings of the international conference on database theory
(ICDT), 2005) that offers an approximation guarantee of O(k), and generalizes that of Park and Shim (SIGMOD ’07: proceedings of the 2007 ACM SIGMOD international conference on management
of data, 2007) that was limited to the case of generalization by suppression. Our algorithm uses techniques that we introduce herein for
mining closed frequent generalized records. Our experiments show that the significance of our algorithm is not limited to the theory of k-anonymization. The proposed algorithm achieves lower information losses than the leading approximation algorithm, as well
as the leading heuristic algorithms. A modified version of our algorithm that issues ℓ-diverse k-anonymizations also achieves lower information losses than the corresponding modified versions of the leading algorithms.
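The k-anonymity requirement itself is easy to state in code. Below is a minimal sketch (names and the toy table are our own, not from the paper) that checks whether a generalized table is k-anonymous with respect to a set of quasi-identifier columns:

```python
from collections import Counter

def is_k_anonymous(table, quasi_ids, k):
    """A table is k-anonymous w.r.t. the quasi-identifier columns if
    every combination of quasi-identifier values that occurs in the
    table occurs in at least k rows."""
    groups = Counter(tuple(row[c] for c in quasi_ids) for row in table)
    return all(count >= k for count in groups.values())

# Toy table: after generalizing ZIP codes to prefixes and ages to
# ranges, every quasi-identifier combination is shared by >= 2 rows.
table = [
    {"zip": "021**", "age": "20-30", "disease": "flu"},
    {"zip": "021**", "age": "20-30", "disease": "cold"},
    {"zip": "100**", "age": "30-40", "disease": "flu"},
    {"zip": "100**", "age": "30-40", "disease": "asthma"},
]
ok = is_k_anonymous(table, ["zip", "age"], k=2)
```

Generalization trades entries like an exact ZIP code for coarser values such as "021**"; the information loss the paper minimizes is the price of making these groups large enough.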
4.
Transaction-level modeling is used in hardware design for describing designs at a higher level compared to the register-transfer
level (RTL) (e.g. Cai and Gajski in CODES+ISSS ’03: proceedings of the 1st IEEE/ACM/IFIP international conference on Hardware/software
codesign and system synthesis, pp. 19–24, 2003; Chen et al. in FMCAD ’07: proceedings of the formal methods in computer aided design, pp. 53–61, 2007; Mahajan et al. in MEMOCODE ’07: proceedings of the 5th IEEE/ACM international conference on formal methods and models for
codesign, pp. 123–132, 2007; Swan in DAC ’06: proceedings of the 43rd annual conference on design automation, pp. 90–92, 2006). Each transaction represents a unit of work, which is also a useful unit for design verification. In such models, there
are many properties of interest which involve interactions between multiple transactions. Examples of this are ordering relationships
in sequential processing and hazard checking in pipelined circuits. Writing such properties on the RTL design requires significant
expertise in understanding the higher-level computation being done in a given RTL design and possible instrumentation of the
RTL to express the property of interest. This is a barrier to the easy use of such properties in RTL designs.
5.
Jake Porway Qiongchen Wang Song Chun Zhu 《International Journal of Computer Vision》2010,88(2):254-283
In this paper we present a hierarchical and contextual model for aerial image understanding. Our model organizes objects (cars,
roofs, roads, trees, parking lots) in aerial scenes into hierarchical groups whose appearances and configurations are determined
by statistical constraints (e.g. relative position, relative scale, etc.). Our hierarchy is a non-recursive grammar for objects
in aerial images, comprising layers of nodes that can each decompose into a number of different configurations. This allows
us to generate and recognize a vast number of scenes with relatively few rules. We present a minimax entropy framework for
learning the statistical constraints between objects and show that this learned context allows us to rule out unlikely scene
configurations and hallucinate undetected objects during inference. A similar algorithm was proposed for texture synthesis
(Zhu et al. in Int. J. Comput. Vis. 2:107–126, 1998), but it did not incorporate hierarchical information. We use a range of different bottom-up detectors (AdaBoost, TextonBoost,
Compositional Boosting (Freund and Schapire in J. Comput. Syst. Sci. 55, 1997; Shotton et al. in Proceedings of the European Conference on Computer Vision, pp. 1–15, 2006; Wu et al. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, 2007)) to propose locations of objects in new aerial images and employ a cluster sampling algorithm (C4 (Porway and Zhu, 2009)) to choose the subset of detections that best explains the image according to our learned prior model. The C4 algorithm
can quickly and efficiently switch between alternate competing sub-solutions, for example whether an image patch is better
explained by a parking lot with cars or by a building with vents. We also show that our model can predict the locations of
objects our detectors missed. We conclude by presenting parsed aerial images and experimental results showing that our cluster
sampling and top-down prediction algorithms use the learned contextual cues from our model to improve detection results over
traditional bottom-up detectors alone.
6.
James Aspnes 《Distributed Computing》2012,25(2):179-188
We show that consensus can be solved by an alternating sequence of adopt-commit objects (Gafni in Proceedings of the seventeenth annual ACM symposium on principles of distributed computing, pp 143–152, 1998; Alistarh et al. in ISAAC, Lecture notes in computer science, vol 5878. Springer, Berlin, pp 943–953, 2009), which detect agreement, and conciliators, which ensure agreement with some probability. We observe that most known randomized consensus algorithms have this structure.
We give a deterministic implementation of an m-valued adopt-commit object for an unbounded number of processes that uses lg m + Θ(log log m) space and individual work. We also give a randomized conciliator for any number of values in the probabilistic-write model
with n processes that guarantees agreement with constant probability while using one multi-writer register, O(log n) expected individual work, and Θ(n) expected total work. Combining these objects gives a consensus protocol for the probabilistic-write model that uses O(log m + log n) individual work and O(n log m) total work. No previous protocol in this model uses sublinear individual work or linear total work for constant m.
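The alternation of agreement detection and probabilistic agreement described above can be caricatured in a sequential sketch. This ignores all concurrency and shared-memory details, which are the heart of the actual result; it only shows the control flow of adopt-commit objects alternating with a conciliator, and every name here is our own:

```python
def adopt_commit(proposals):
    """Toy sequential analogue of an adopt-commit object's agreement
    detection: if every process proposed the same value, all commit it;
    otherwise each process merely adopts a value and must try again."""
    first = proposals[0]
    if all(p == first for p in proposals):
        return [("commit", first)] * len(proposals)
    return [("adopt", p) for p in proposals]

def conciliator(proposals, coin):
    """Toy conciliator: a shared coin picks one proposal and everyone
    takes it, modeling agreement achieved with some probability."""
    pick = proposals[coin % len(proposals)]
    return [pick] * len(proposals)

# Alternate the two until the adopt-commit detects agreement.
vals = [0, 1, 1]
decided = None
for round_no in range(10):
    out = adopt_commit(vals)
    if out[0][0] == "commit":
        decided = out[0][1]
        break
    vals = conciliator([v for _, v in out], coin=round_no)
```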
7.
In this paper we introduce a minimax model unifying several classes of single facility planar center location problems. We
assume that the transportation costs of the demand points to the serving facility are convex functions {Q_i}, i=1,…,n, of the planar distance used. Moreover, these functions, when properly transformed, give rise to piecewise quadratic functions
of the coordinates of the facility location. In the continuous case, using results on LP-type models by Clarkson (J. ACM 42:488–499,
1995), Matoušek et al. (Algorithmica 16:498–516, 1996), and the derandomization technique in Chazelle and Matoušek (J. Algorithms 21:579–597, 1996), we claim that the model is solvable deterministically in linear time. We also show that in the separable case, one can
get a direct O(n log n) deterministic algorithm, based on Dyer (Proceedings of the 8th ACM Symposium on Computational Geometry, 1992), to find an optimal solution. In the discrete case, where the location of the center (server) is restricted to some prespecified
finite set, we introduce deterministic subquadratic algorithms based on the general parametric approach of Megiddo (J. ACM
30:852–865, 1983), and on properties of upper envelopes of collections of quadratic arcs. We apply our methods to solve and improve the complexity
of a number of other location problems in the literature, and solve some new models in linear or subquadratic time complexity.
8.
In this paper, we first present a new implementation of the 3-D fast curvelet transform, which is nearly 2.5 times less redundant
than the Curvelab (wrapping-based) implementation as originally proposed in Ying et al. (Proceedings of wavelets XI conference,
San Diego, 2005) and Candès et al. (SIAM Multiscale Model. Simul. 5(3):861–899, 2006), which makes it more suitable for applications involving massive data sets. We report an extensive comparison in denoising
with the Curvelab implementation as well as other 3-D multi-scale transforms with and without directional selectivity. The
proposed implementation proves to be a very good compromise between redundancy, rapidity and performance. Secondly, we exemplify
its usefulness on a variety of applications including denoising, inpainting, video de-interlacing and sparse component separation.
The obtained results are good, with very simple algorithms and virtually no parameters to tune.
9.
Ontology-driven coordination model for multiagent-based mobile workforce brokering systems
Coordination has been recognized by many researchers as the most important feature of multi-agent systems. Coordination is
defined as managing interdependencies amongst activities (Malone and Crowston in ACM Comput. Surv. 26(1):87–119, 1994). The traditional approach of implementing a coordination mechanism is to hard-wire it into a coordination system at design
time. However, in dynamic and open environments, many attributes of the system cannot be accurately identified at the design
time. Therefore, dynamic coordination, capable of coordinating activities at run-time, has emerged. On the other hand, a successful
dynamic coordination model for multi-agent systems requires knowledge sharing as well as common vocabulary. Therefore, an
ontological approach is appropriate for building dynamic coordination models for multi-agent systems. In this paper,
an Ontology-Driven Dynamic Coordination Model (O-DC) for Multiagent-Based Mobile Workforce Brokering Systems (MWBS) (Mousavi
et al. in Int. J. Comput. Sci. 6:(5):557–565, 2010; Mousavi et al. in Proceedings of 4th IEEE international symposium on information technology, ITSim’10, Kuala Lumpur, Malaysia,
15–17 June 2010, vol. 3, pp. 1416–1421, 2010; Mousavi and Nordin in Proceedings of the IEEE international conference on electrical engineering and informatics, Bandung,
Indonesia, 17–19 June 2007, pp. 294–297, 2007) is proposed and formulated. Subsequently, the applicability of O-DC is examined via simulation based on a real-world scenario.
10.
Giorgio Ausiello Camil Demetrescu Paolo G. Franciosa Giuseppe F. Italiano Andrea Ribichini 《Algorithmica》2009,55(2):346-374
This article reports the results of an extensive experimental analysis of efficient algorithms for computing graph spanners
in the data streaming model, where an (α,β)-spanner of a graph G is a subgraph S⊆G such that for each pair of vertices the distance in S is at most α times the distance in G plus β. To the best of our knowledge, this is the first computational study of graph spanner algorithms in a streaming setting.
We compare experimentally the randomized algorithms proposed by Baswana and by Elkin (In: Proceedings of the 34th International Colloquium on Automata, Languages and Programming (ICALP 2007),
Wroclaw, Poland, pp. 716–727, 9–13 July 2007) for general stretch factors with the deterministic algorithm presented by Ausiello et al. (In: Proceedings of the 15th Annual European Symposium on Algorithms (ESA 2007), Engineering and Applications Track, Eilat,
Israel, 8–10 October 2007. LNCS, vol. 4698, pp. 605–617, 2007), designed for building small stretch spanners. All the algorithms we implemented work in a data streaming model where the
input graph is given as a stream of edges in arbitrary order, and all of them need a single pass over the data. Differently
from the algorithm in Ausiello et al., the algorithms in Baswana and Elkin (In: Proceedings of the 34th International Colloquium on Automata, Languages and Programming (ICALP 2007), Wroclaw,
Poland, pp. 716–727, 9–13 July 2007) need to know in advance the number of vertices in the graph.
The results of our experimental investigation on several input families confirm that all these algorithms are very efficient
in practice, finding spanners with stretch and size much smaller than the theoretical bounds and comparable to those obtainable
by off-line algorithms. Moreover, our experimental findings confirm that small values of the stretch factor are the case of
interest in practice, and that the algorithm by Ausiello et al. tends to produce spanners of better quality than the algorithms by Baswana and Elkin, while still using a comparable amount
of time and space resources.
Work partially supported by the Italian Ministry of University and Research under Project MAINSTREAM “Algorithms for Massive
Information Structures and Data Streams”. A preliminary version of this paper was presented at the 15th Annual European Symposium
on Algorithms (ESA 2007).
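The defining (α,β)-spanner property quoted above is straightforward to verify by brute force on small graphs. The sketch below is our own code for checking that property, unrelated to the streaming algorithms compared in the paper:

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_spanner(edges_G, edges_S, alpha, beta):
    """Check: for every pair of vertices u, v,
    dist_S(u, v) <= alpha * dist_G(u, v) + beta."""
    def to_adj(edges):
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        return adj
    G, S = to_adj(edges_G), to_adj(edges_S)
    for u in G:
        dg, ds = bfs_dist(G, u), bfs_dist(S, u)
        for v, d in dg.items():
            if ds.get(v, float("inf")) > alpha * d + beta:
                return False
    return True

# A 4-cycle with a chord; dropping the chord stretches the pair (1, 3)
# from distance 1 to 2, so the cycle alone is a (2, 0)-spanner.
G = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
S = [(1, 2), (2, 3), (3, 4), (4, 1)]
```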
11.
We present a deterministic Logspace procedure which, given a bipartite planar graph on n vertices, assigns O(log n)-bit weights to its edges so that the minimum weight perfect matching in the graph becomes unique. The Isolation Lemma as described in Mulmuley et al. (Combinatorica 7(1):105–131, 1987) achieves the same for general graphs using randomness, whereas we can do it deterministically when restricted to bipartite
planar graphs. As a consequence, we reduce both decision and construction versions of the perfect matching problem in bipartite
planar graphs to testing whether a matrix is singular, under the promise that its determinant is 0 or 1, thus obtaining a highly parallel SPL algorithm for both decision and construction versions of the bipartite perfect matching problem. This improves the earlier known bounds of non-uniform SPL by Allender et al. (J. Comput. Syst. Sci. 59(2):164–181, 1999) and NC^2 by Miller and Naor (SIAM J. Comput. 24:1002–1017, 1995), and by Mahajan and Varadarajan (Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing (STOC), pp. 351–357, 2000). It also rekindles the hope of obtaining a deterministic parallel algorithm for constructing a perfect matching in non-bipartite
planar graphs, which has been open for a long time. Further we try to find the lower bound on the number of bits needed for
deterministically isolating a perfect matching. We show that our particular method for isolation will require Ω(log n) bits. Our techniques are elementary.
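On small graphs, the effect of isolating weights is easy to see by brute force: with unit weights a cycle has several minimum-weight perfect matchings, while suitable small weights leave exactly one. The sketch below is our own illustration of that phenomenon, not the paper's weighting scheme:

```python
from itertools import permutations

def min_weight_matchings(weights):
    """Enumerate all perfect matchings of a balanced bipartite graph
    given as a dict {(left, right): weight}, and return the minimum
    total weight together with all matchings achieving it."""
    n = 1 + max(i for i, _ in weights)
    best, argmin = float("inf"), []
    for perm in permutations(range(n)):
        if all((i, perm[i]) in weights for i in range(n)):
            w = sum(weights[(i, perm[i])] for i in range(n))
            if w < best:
                best, argmin = w, [perm]
            elif w == best:
                argmin.append(perm)
    return best, argmin

# A 4-cycle as a bipartite graph: with unit weights there are two
# minimum perfect matchings (no isolation)...
unit = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 1}
# ...while distinct small weights isolate a unique minimum matching.
iso = {(0, 0): 1, (0, 1): 2, (1, 0): 2, (1, 1): 4}
```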
12.
Maria-Jose Escobar Guillaume S. Masson Thierry Vieville Pierre Kornprobst 《International Journal of Computer Vision》2009,82(3):284-301
We propose a bio-inspired feedforward spiking network modeling two brain areas dedicated to motion (V1 and MT), and we show
how the spiking output can be exploited in a computer vision application: action recognition. In order to analyze spike trains,
we consider two characteristics of the neural code: mean firing rate of each neuron and synchrony between neurons. Interestingly,
we show that they carry some relevant information for the action recognition application. We compare our results to Jhuang
et al. (Proceedings of the 11th international conference on computer vision, pp. 1–8, 2007) on the Weizmann database. As a conclusion, we are convinced that spiking networks represent a powerful alternative framework
for real vision applications that will benefit from recent advances in computational neuroscience.
13.
In this paper we present new results on the performance of the Minimum Spanning Tree heuristic for the Minimum Energy Broadcast Routing (MEBR) problem. We first prove that, for any number of dimensions d≥2, the approximation ratio of the heuristic does not increase when the power attenuation coefficient α, that is the exponent to which the coverage distance must be raised to give the emission power, grows. Moreover, we show
that, for any fixed instance, as a limit for α going to infinity, the ratio tends to the lower bound of Clementi et al. (Proceedings of the 18th annual symposium on theoretical
aspects of computer science (STACS), pp. 121–131, 2001), Wan et al. (Wirel. Netw. 8(6):607–617, 2002) given by the d-dimensional kissing number, thus closing the existing gap between the upper and the lower bound. We then introduce a new
analysis allowing to establish a 7.45-approximation ratio for the 2-dimensional case, thus significantly decreasing the previously
known 12 upper bound (Wan et al. in Wirel. Netw. 8(6):607–617, 2002) (actually corrected to 12.15 in Klasing et al. (Proceedings of the 3rd IFIP-TC6 international networking conference, pp. 866–877,
2004)). Finally, we extend our analysis to any number of dimensions d≥2 and any α≥d, obtaining a general approximation ratio of 3^d−1, again independent of α. The improvements of the approximation ratios are specifically significant in comparison with the lower bounds given by the kissing numbers, as these grow at least exponentially with respect to d.
The research was partially funded by the European project COST Action 293, “Graphs and Algorithms in Communication Networks”
(GRAAL).
Preliminary version of this paper appeared in Flammini et al. (Proceedings of ACM joint workshop on foundations of mobile
computing (DIALM-POMC), pp. 85–91, 2004).
14.
Koji Nuida 《International Journal of Information Security》2012,11(2):85-102
In this article, we propose a new construction of probabilistic collusion-secure fingerprint codes against up to three pirates
and give a theoretical security evaluation. Our pirate tracing algorithm combines a scoring method analogous to Tardos codes
(J ACM 55:1–24, 2008) with an extension of parent search techniques of some preceding 2-secure codes. Numerical examples show that our code lengths
are significantly shorter than (about 30–40% of) the shortest known c-secure codes by Nuida et al. (Des Codes Cryptogr 52:339–362, 2009) with c = 3.
15.
The pressing need for efficient compression schemes for XML documents has recently focused attention on stack computation (Hariharan,
S., & Shankar, P. in: Proceedings of the 2006 IEEE data compression conference, p. 453, 2006; League, C., & Eng, K. in: Proceedings of the 2007 IEEE data compression conference, pp. 272–282, 2007), and in particular calls for a formulation of information-lossless stack or pushdown compressors that allows a formal analysis
of their performance and a more ambitious use of the stack in XML compression, where so far it is mainly connected to parsing
mechanisms. In this paper we introduce the model of pushdown compressor, based on pushdown transducers that compute a single
injective function while keeping the widest generality regarding stack computation.
16.
Modeling the dependence of credit ratings is an important issue for portfolio credit risk analysis. Multivariate Markov chain
models are a feasible mathematical tool for modeling the dependence of credit ratings. Here we develop a flexible multivariate
Markov chain model for modeling the dependence of credit ratings. The proposed model provides a parsimonious way to capture
both the cross-sectional and temporal associations among ratings of individual entities. The number of model parameters is of order O(sm^2 + s^2m), where m is the number of rating categories and s is the number of entities in a credit portfolio. The proposed model is also easy to implement. The estimation method is formulated as a set of s linear programming problems, and the estimation algorithm can be implemented easily in a Microsoft Excel worksheet; see Ching
et al. Int J Math Educ Sci Eng 35:921–932 (2004). We illustrate the practical implementation of the proposed model using real ratings data. We evaluate risk measures, such
as Value at Risk and Expected Shortfall, for a credit portfolio using the proposed model and compare the risk measures with
those arising from Ching et al. (IMR Preprint Series, 2007) and Siu et al. (Quant Finance 5:543–556, 2005).
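The stated parameter count O(sm^2 + s^2m) is easy to sanity-check numerically. The split below, into a per-entity m-by-m transition matrix (s·m^2 entries) plus m weights for each ordered pair of entities (s^2·m entries), is our assumption about the bookkeeping, not necessarily the paper's exact accounting:

```python
def num_parameters(s, m):
    """Parameter count matching the abstract's O(sm^2 + s^2 m) bound:
    s entities each with an m-by-m transition matrix, plus m weights
    per ordered pair of entities (an assumed decomposition)."""
    per_entity_transitions = s * m * m
    pairwise_weights = s * s * m
    return per_entity_transitions + pairwise_weights

# A coarse rating scale (m = 7 categories) over a small portfolio of
# s = 10 entities stays very parsimonious.
count = num_parameters(10, 7)  # 10*49 + 100*7 = 1190
```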
17.
Generalized sparse metric learning with relative comparisons
The objective of sparse metric learning is to learn a distance measure from a set of data in addition to finding a low-dimensional
representation. Despite demonstrated success, the performance of existing sparse metric learning approaches is usually limited
because the methods assume certain problem relaxations or target the sparse metric learning (SML) objective indirectly. In this paper, we propose a Generalized Sparse Metric Learning (GSML) method. This novel framework offers a unified view for understanding many existing sparse
metric learning algorithms including the Sparse Metric Learning framework proposed in (Rosales and Fung ACM International
conference on knowledge discovery and data mining (KDD), pp 367–373, 2006), the Large Margin Nearest Neighbor (Weinberger
et al. in Advances in neural information processing systems (NIPS), 2006; Weinberger and Saul in Proceedings of the twenty-fifth
international conference on machine learning (ICML-2008), 2008), and the D-ranking Vector Machine (D-ranking VM) (Ouyang and
Gray in Proceedings of the twenty-fifth international conference on machine learning (ICML-2008), 2008). Moreover, GSML also
establishes a close relationship with the Pairwise Support Vector Machine (Vert et al. in BMC Bioinform, 8, 2007). Furthermore,
the proposed framework is capable of extending many current non-sparse metric learning models to their sparse versions including
Relevant Component Analysis (Bar-Hillel et al. in J Mach Learn Res, 6:937–965, 2005) and a state-of-the-art method proposed
in (Xing et al. Advances in neural information processing systems (NIPS), 2002). We present the detailed framework, provide
theoretical justifications, build various connections with other models, and propose an iterative optimization method, making
the framework both theoretically important and practically scalable for medium or large datasets. Experimental results show
that this generalized framework outperforms six state-of-the-art methods with higher accuracy and significantly smaller dimensionality
for seven publicly available datasets.
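Most of the models unified here learn a Mahalanobis-type distance d_M(x, y)^2 = (x−y)^T M (x−y), and sparsity in the matrix M is what yields the low-dimensional representation. A minimal sketch of that distance form (our own code, with a diagonal M as the extreme sparse case; not GSML itself):

```python
def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y), the form
    learned by Mahalanobis metric learning methods.  A sparse or
    low-rank positive semidefinite M simultaneously defines a distance
    and an implicit low-dimensional representation."""
    d = [xi - yi for xi, yi in zip(x, y)]
    n = len(d)
    return sum(d[i] * M[i][j] * d[j] for i in range(n) for j in range(n))

# A diagonal M with a zero entry ignores the second feature entirely,
# i.e. feature selection as the extreme case of sparsity.
M = [[2.0, 0.0], [0.0, 0.0]]
d2 = mahalanobis_sq([1.0, 5.0], [0.0, -5.0], M)
```

Relative-comparison constraints of the kind used by D-ranking methods then take the form d_M(x, y)^2 < d_M(x, z)^2 for triples where x is judged closer to y than to z.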
18.
We address the problem of learning robot control by model-free reinforcement learning (RL). We adopt the probabilistic model
for model-free RL of Vlassis and Toussaint (Proceedings of the international conference on machine learning, Montreal, Canada,
2009), and we propose a Monte Carlo EM algorithm (MCEM) for control learning that searches directly in the space of controller
parameters using information obtained from randomly generated robot trajectories. MCEM is related to, and generalizes, the
PoWER algorithm of Kober and Peters (Proceedings of the neural information processing systems, 2009). In the finite-horizon case MCEM reduces precisely to PoWER, but MCEM can also handle the discounted infinite-horizon case.
An interesting result is that the infinite-horizon case can be viewed as a ‘randomized’ version of the finite-horizon case,
in the sense that the length of each sampled trajectory is a random draw from an appropriately constructed geometric distribution.
We provide some preliminary experiments demonstrating the effects of fixed (PoWER) vs randomized (MCEM) horizon length in
two simulated robot control tasks and one real one.
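The "randomized horizon" construction mentioned above can be sketched directly: trajectory lengths are drawn from a geometric distribution whose mean 1/(1−γ) matches the effective horizon of discounting with factor γ. This is our own sketch of that standard construction; parameter names are assumptions:

```python
import random

def sample_horizon(gamma, rng):
    """Draw a trajectory length T with P(T = t) = (1 - gamma) * gamma^(t-1)
    for t = 1, 2, ... (a geometric distribution).  Truncating each sampled
    trajectory at such a random T recovers discounting with factor gamma
    in expectation, with E[T] = 1 / (1 - gamma)."""
    t = 1
    while rng.random() < gamma:  # continue with probability gamma
        t += 1
    return t

rng = random.Random(0)
gamma = 0.9
draws = [sample_horizon(gamma, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)  # should be close to 1 / (1 - gamma) = 10
```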
19.
We examine how to induce selfish heterogeneous users in a multicommodity network to reach an equilibrium that minimizes the
social cost. In the absence of centralized coordination, we use the classical method of imposing appropriate taxes (tolls)
on the edges of the network. We significantly generalize previous work (Yang and Huang in Transp. Res. Part B 38:1–15, [2004]; Karakostas and Kolliopoulos in Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pp. 268–276,
[2004]; Fleischer et al. in Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pp. 277–285, [2004]) by allowing user demands to be elastic. In this setting the demand of a user is not fixed a priori but is a function of the routing cost experienced, a most natural
assumption in traffic and data networks.
Research supported by MITACS and an NSERC Discovery grant.
20.
Chung-Chih Li 《Theory of Computing Systems》2009,45(4):880-896
A classic result known as the speed-up theorem in machine-independent complexity theory shows that there exist some computable
functions that do not have best programs for them (Blum in J. ACM 14(2):322–336, 1967 and J. ACM 18(2):290–305, 1971). In this paper we lift this result into type-2 computations. Although the speed-up phenomenon is essentially inherited from
type-1 computations, we observe that a direct application of the original proof to our type-2 speed-up theorem is problematic
because the oracle queries can interfere with the speed of the programs and hence the cancellation strategy used in the original
proof is no longer correct at type-2. We also argue that a type-2 analog of the operator speed-up theorem (Meyer and Fischer
in J. Symb. Log. 37:55–68, 1972) does not hold, which suggests that this curious speed-up phenomenon disappears in higher-typed computations beyond type-2.
The result of this paper adds one more piece of evidence to support the general type-2 complexity theory under the framework
proposed in Li (Proceedings of the Third International Conference on Theoretical Computer Science, pp. 471–484, 2004 and Proceedings of Computability in Europe: Logical Approach to Computational Barriers, pp. 182–192, 2006) and Li and Royer (On type-2 complexity classes: Preliminary report, pp. 123–138, 2001) as a reasonable setup.