Similar Literature
20 similar documents found (search time: 31 ms)
1.
In this paper, we identify and solve a multi-join optimization problem for Arbitrary Feature-based social image Similarity JOINs (AFS-JOIN). Given two collections (i.e., R and S) of social images that carry visual, spatial, and textual (i.e., tag) information, a multi-join based on arbitrary features retrieves the pairs of images from different users that are visually similar, textually similar, or spatially close. To address this problem, we propose three methods to facilitate multi-join processing: 1) two baseline approaches (a naïve join approach and a maximal threshold (MT)-based approach), and 2) a Batch Similarity Join (BSJ) method. In the BSJ method, given m users' join requests, the requests are first converted and grouped into m″ clusters corresponding to m″ join boxes, where m > m″. To speed up BSJ processing, the feature distance space is partitioned into cubes according to four segmentation schemes; the image pairs falling in the cubes are indexed by a cube-tree index, so BSJ processing is transformed into searching, with the aid of the index, for the image pairs that fall in the cubes affected by the m″ AFS-JOINs. An extensive experimental evaluation using real and synthetic datasets shows that our proposed BSJ technique outperforms the state-of-the-art solutions.
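For concreteness, a minimal sketch of the naïve baseline join is given below. The record layout (a visual feature vector, a tag set, and a location pair) and the threshold names are illustrative assumptions, not the paper's exact formulation.

```python
import math

def jaccard_dist(tags_a, tags_b):
    """Jaccard distance between two tag sets (1.0 for two empty sets)."""
    union = len(tags_a | tags_b)
    return 1.0 - (len(tags_a & tags_b) / union if union else 0.0)

def naive_afs_join(R, S, tau_visual, tau_tag, tau_geo):
    """Naive baseline: test every cross pair (r, s); a pair qualifies if it
    is visually similar, textually similar, OR spatially close."""
    pairs = []
    for r in R:
        for s in S:
            if (math.dist(r["visual"], s["visual"]) <= tau_visual
                    or jaccard_dist(r["tags"], s["tags"]) <= tau_tag
                    or math.dist(r["loc"], s["loc"]) <= tau_geo):
                pairs.append((r["id"], s["id"]))
    return pairs

# Two toy collections from different users
R = [{"id": "r1", "visual": (0.1, 0.2), "tags": {"beach", "sun"}, "loc": (40.7, -74.0)}]
S = [{"id": "s1", "visual": (0.12, 0.22), "tags": {"city"}, "loc": (48.9, 2.35)}]
print(naive_afs_join(R, S, tau_visual=0.1, tau_tag=0.5, tau_geo=1.0))
```

The BSJ method's cube-tree index exists precisely to avoid this quadratic cross-pair scan.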

2.
A threshold gate is a linear combination of input variables with integer coefficients (weights); it outputs 1 if the sum is positive. The maximum absolute value of the coefficients of a threshold gate is called its weight. A degree-d perceptron is a Boolean circuit of depth 2 with a threshold gate at the top and arbitrary Boolean elements of fan-in at most d at the bottom level. The weight of a perceptron is the weight of its threshold gate. For any constant d ≥ 2 independent of the number of input variables n, we construct a degree-d perceptron that requires weights of at least \(n^{\Omega (n^d )} \); i.e., the weight of any degree-d perceptron that computes the same Boolean function must be at least \(n^{\Omega (n^d )} \). This bound is tight: any degree-d perceptron is equivalent to a degree-d perceptron of weight \(n^{O(n^d )} \). For the case of threshold gates (i.e., d = 1), the result was proved by Håstad in [2]; we use Håstad's technique.
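The two building blocks are easy to state in code. The following sketch (illustrative, not from the paper) evaluates a threshold gate and a degree-2 perceptron whose bottom level consists of fan-in-2 ANDs:

```python
from itertools import combinations

def threshold_gate(weights, inputs):
    """A threshold gate: output 1 iff the weighted sum is positive."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

def degree_d_perceptron(bottom_gates, top_weights, x):
    """Depth-2 circuit: Boolean gates of fan-in <= d at the bottom,
    one threshold gate at the top. Each bottom gate is a function of x."""
    bottom_outputs = [g(x) for g in bottom_gates]
    return threshold_gate(top_weights, bottom_outputs)

# Degree-2 example: bottom gates are ANDs of pairs of the 3 inputs
x = (1, 0, 1)
gates = [lambda v, i=i, j=j: v[i] & v[j] for i, j in combinations(range(3), 2)]
print(degree_d_perceptron(gates, top_weights=[3, -1, 2], x=x))   # prints 0
```

The lower bound says that for some functions, no choice of (integer) top weights smaller than \(n^{\Omega (n^d )} \) suffices.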

3.
The pull-based development model, widely used by distributed software teams in open source communities, efficiently gathers the wisdom of crowds. Instead of sharing access to a central repository, contributors create a fork, update it locally, and request to have their changes merged back, i.e., they submit a pull-request. On the one hand, this model lowers the barrier to entry for potential contributors, since anyone can submit pull-requests to any repository; on the other hand, it increases the burden on integrators, who are responsible for assessing the proposed patches and integrating the suitable changes into the central repository. The role of integrators in pull-based development is crucial: they must not only ensure that pull-requests meet the project's quality standards before being accepted, but also finish the evaluations in a timely manner. To keep up with the volume of incoming pull-requests, continuous integration (CI) is widely adopted to automatically build and test every pull-request at the time of submission. CI provides extra evidence about the quality of pull-requests, which helps integrators make the final decision (i.e., accept or reject). In this paper, we present a quantitative study of which factors affect the pull-based development process, including acceptance and latency, in the context of CI. Using regression modeling on data extracted from a sample of GitHub projects deploying the Travis-CI service, we find that the evaluation process is a complex issue that requires many independent variables to explain adequately. In particular, CI is a dominant factor in the process: it not only has a great influence on the evaluation process per se, but also changes the effects of some traditional predictors.
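As a rough illustration of the methodology, the sketch below fits a logistic regression of pull-request acceptance on a few candidate factors. The column names and toy data are hypothetical stand-ins, not the paper's factor set:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical pull-request records (illustrative values only)
df = pd.DataFrame({
    "accepted":    [1, 0, 1, 1, 0, 1, 0, 1],   # outcome: merged or not
    "ci_passed":   [1, 0, 0, 1, 1, 1, 0, 1],   # CI build/test result
    "churn_loc":   [350, 900, 40, 15, 300, 60, 450, 25],  # lines changed
    "core_member": [1, 0, 0, 1, 1, 0, 0, 1],   # submitter is a core dev
})

X = df[["ci_passed", "churn_loc", "core_member"]]
clf = LogisticRegression().fit(X, df["accepted"])
for name, coef in zip(X.columns, clf.coef_[0]):
    print(f"{name:12s} {coef:+.3f}")   # sign hints at each factor's effect
```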

4.
Kurt Gödel's incompleteness theorem is well known in mathematics, logic, and philosophy circles. Gödel found a way, for any given P(UTM) (read "P of UTM", for "Program of a Universal Truth Machine"), actually to write down a complicated polynomial that has a solution iff (if and only if) G is true, where G stands for a Gödel sentence. So, if G's truth is a necessary condition for the truth of a given polynomial, then P(UTM) has to answer first that G is true in order to secure the truth of that polynomial. But, interestingly, P(UTM) can never answer that G is true. This necessarily implies that there is at least one truth that a P(UTM), however large it may be, cannot utter. Daya Krishna and Karl Potter's controversy regarding the construal of India's philosophies dates back to Potter's publication of "Presuppositions of India's Philosophies" (1963, Englewood Cliffs: Prentice-Hall Inc.). In attacking many of India's philosophies, Daya Krishna appears to have unwittingly touched a crucial point: how can there be knowledge of a 'non-cognitive' mokṣa? ['Mokṣa' is the final state of existence of an individual away from the Social Contract; see this author's Indian Social Contract and its Dissolution (2008).] Mokṣa does not permit knowledge of one's own self in the ordinary way, with the threefold distinction subject–knowledge–object, or knower–knowledge–known. What is important is to demonstrate whether such 'knowledge' of the non-cognitive mokṣa state can be logically shown, in a language, to be attainable, and that there is no contradiction involved in such a demonstration, because no one can possibly express the 'experience-itself' in language. Hence, if such 'knowledge' can be shown to be logically not impossible in language, then not only are Daya Krishna's arguments against 'non-cognitive mokṣa' refuted, but it would also show the possibility of achieving 'completeness' in its truest sense, as opposed to Gödel's 'incompleteness'. In such circumstances, man would himself become a Universal Truth Machine. This is because the final state of mokṣa is construed in Advaita as the state of complete knowledge. This possibility of 'completeness' is set in this paper against the backdrop of Śrī Śaṅkarācārya's Advaitic (non-dualistic) claim involved in the mahāvākyas (extraordinary propositions). (The mahāvākyas that Śaṅkara refers to are taken from different Upaniṣads: for example, "Aham Brahmāsmi" is from the Bṛhadāraṇyaka Upaniṣad, and "Tattvamasi" is from the Chāndogya Upaniṣad. Śrī Śaṅkarācārya wrote extensively; his main works include his commentaries (Bhāṣyas) on the Brahma-Sūtras, on the major Upaniṣads, and on the Śrīmad Bhagavad Gītā. Almost all these works are available in English translation published by Advaita Ashrama, 5 Dehi Entally Road, Calcutta, 700014.) On the other hand, Gödel's 'incompleteness' is due to the intervening G-sentence, which has an adverse self-referential element. Gödel's incompleteness theorem in its mathematical form, with an elaborate introduction by R. B. Braithwaite, can be found in Meltzer (Kurt Gödel: On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Oliver & Boyd, Edinburgh, 1962).
The present author believes, first, that semantic content cannot be substituted by any amount of arithmoquining (arithmoquining, or arithmetization, means, as Braithwaite says, "Gödel's novel metamathematical method is that of attaching numbers to the signs, to the series of signs (formulae) and to the series of series of signs ('proof-schemata') which occur in his formal system… Gödel invented what might be called co-ordinate metamathematics…" (Meltzer 1962, p. 7). In Antone (2006) it is said: "The problem is that he (Gödel) tries to replace an abstract version of the number (which can exist) with the concept of a real number version of that abstract notion. We can state the abstraction of what the number needs to be [the arithmoquining of a number cannot be a proof-pair and an arithmoquine], but that is a concept that cannot be turned into a specific number, because by definition no such number can exist."), especially where first-hand personal experience is called for. Therefore, what ultimately rules is semanticity, as in first-hand experience. Similar points are voiced, albeit implicitly, in Antone (Who Understands Gödel's Incompleteness Theorem, 2006): "…it is so important to understand that Gödel's theorem is only true with respect to formal systems—which is the exact opposite of the analogous UTM" (Antone 2006, webpage 2). And galatomic says, in the same discussion thread, that "saying that it is only true for formal systems is more significant… We only know the world through 'formal' categories of understanding… If the world as it is in itself has no incompleteness problem, which I am sure is true, it does not mean much, because that is not the world of time and space that we experience. So it is more significant that formal systems are incomplete than that the inexperiencable 'World in Itself' has no such problem" (Antone 2006, webpage 2). Nevertheless, galatomic certainly, if unwittingly, succeeds in highlighting the possibility of experiencing 'completeness'. Second, even if any formal system, including the system of Śaṅkara's Advaita, is to be subsumed or interpreted under Gödel's theorem, or under Tarski's semantic unprovability theses, the ultimate appeal would lie in the point of human involvement in realizing completeness, since any formal system is always 'incomplete' by its very nature as 'objectual', and fails to comprehend the 'subject' within its fold.

5.
In this paper, typical learning tasks, including data condensation, binary classification, identification of independence between random variables, and conditional density estimation, are described from the unified perspective of a linear combination of densities, and accordingly a direct estimation framework based on a linear combination of Gaussian components (i.e., Gaussian basis functions) under the integrated squared error criterion is proposed to solve these learning tasks. The proposed direct estimation framework has three advantages. First, unlike most existing state-of-the-art methods, which estimate each component density in the linear combination and then combine them linearly, it estimates the linear combination of densities directly as a whole, with approximation accuracy at least comparable to, and often better than, existing density estimation methods. Second, the time complexity of the framework is O(l³), where l is the number of Gaussian components; these components are generally viewed as the Gaussian distributions of the clusters in a dataset, so l is generally much smaller than the size of the dataset, which makes the framework well suited to large datasets. Third, the framework can be used to develop alternative approaches to classification, data condensation, identification of independence between random variables, conditional density estimation, and the identification of similarity between multiple source domains and a target domain. Our preliminary experimental results on several typical applications indicate the power of the proposed direct estimation framework.
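A minimal sketch of the central object, a linear combination of Gaussian basis functions evaluated as a whole, might look as follows (illustrative code, not the paper's estimator; the weights here are fixed by hand rather than fitted under the ISE criterion):

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def combo_density(x, weights, mus, sigmas):
    """Evaluate a linear combination of Gaussian basis functions,
    f(x) = sum_i w_i * N(x; mu_i, sigma_i^2), as a single object:
    the quantity the framework estimates directly."""
    return sum(w * gaussian(x, m, s) for w, m, s in zip(weights, mus, sigmas))

# Toy use: the difference of two class-conditional densities; the sign of f
# then acts as a binary classifier between the two modes.
x = np.linspace(-4, 6, 6)
f = combo_density(x, weights=[1.0, -1.0], mus=[0.0, 2.0], sigmas=[1.0, 1.0])
print(np.sign(f))   # positive near mode 0, negative near mode 2
```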

6.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms capable of solving large-scale image processing and computer-vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms, such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting. That is why finding a real-time algorithm that optimally solves this problem is of great importance. In this paper, we introduce to computer vision Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem. We compare the performance of HPF, in terms of execution times and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan's push-relabel (PRF); (2) Boykov and Kolmogorov's augmenting paths (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or the PRF algorithm, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than all three. This strongly suggests that HPF is a great option for the many real-time computer-vision problems that require solving the minimum s-t cut problem.
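For readers who want to experiment, the snippet below solves a toy minimum s-t cut on a 4-pixel graph with networkx; note this uses networkx's default solver as a stand-in, not the HPF implementation benchmarked in the paper:

```python
import networkx as nx

# A tiny 4-pixel "image" graph: terminal edges from s / to t model unary
# costs, inter-pixel edges model smoothness.
G = nx.DiGraph()
G.add_edge("s", "p1", capacity=4)
G.add_edge("s", "p2", capacity=3)
G.add_edge("p1", "p2", capacity=1)
G.add_edge("p1", "p3", capacity=2)
G.add_edge("p2", "p4", capacity=5)
G.add_edge("p3", "t", capacity=6)
G.add_edge("p4", "t", capacity=2)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print(cut_value)                 # capacity of the optimal cut (4 here)
print(source_side, sink_side)   # the binary labeling the cut induces
```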

7.
Lamport's Bakery Algorithm (Commun ACM 17:453–455, 1974) implements mutual exclusion for a fixed number of threads with the first-come first-served property. It has the disadvantage, however, that it uses integer communication variables that can become arbitrarily large. Taubenfeld's Black-White Bakery Algorithm (Proceedings of DISC 2004. LNCS, vol 3274, pp 56–70) keeps the integers bounded and is adaptive in the sense that the time complexity depends only on the number of competing threads, say N. The present paper offers an assertional proof of correctness and shows that the concurrent complexity for throughput is linear in N, and for individual progress is quadratic in N. This is proved with a bounded version of UNITY, i.e., by assertional means.
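A sketch of the original (unbounded-ticket) Bakery algorithm is shown below for reference. It is a simplified demo that leans on CPython's GIL for atomic list reads/writes; the memory-model subtleties it glosses over are exactly what the paper's assertional proof treats rigorously:

```python
import threading

class BakeryLock:
    """Sketch of Lamport's original Bakery algorithm for n threads.
    Tickets are unbounded integers, as in the 1974 paper."""
    def __init__(self, n):
        self.choosing = [False] * n
        self.number = [0] * n

    def acquire(self, i):
        self.choosing[i] = True
        self.number[i] = 1 + max(self.number)        # take the next ticket
        self.choosing[i] = False
        for j in range(len(self.number)):
            while self.choosing[j]:                  # wait while j is choosing
                pass
            while self.number[j] != 0 and \
                    (self.number[j], j) < (self.number[i], i):
                pass                                 # first-come first-served order

    def release(self, i):
        self.number[i] = 0

lock, counter = BakeryLock(4), [0]

def work(i):
    for _ in range(1000):
        lock.acquire(i)
        counter[0] += 1                              # critical section
        lock.release(i)

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])   # 4000 if mutual exclusion held
```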

8.
Inspired by applications in parallel computing, we analyze the setting of work stealing in multithreaded computations. We obtain tight upper bounds on the number of steals when the computation can be modeled by rooted trees. In particular, we show that if the computation with n processors starts with one processor having a complete k-ary tree of height h (and the remaining n − 1 processors having nothing), the maximum possible number of steals is \({\sum }_{i=1}^{n}(k-1)^{i}\binom {h}{i}\).
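The bound is straightforward to evaluate; a small sketch:

```python
from math import comb

def max_steals(n, k, h):
    """Maximum number of steals per the paper's bound:
    sum_{i=1}^{n} (k-1)^i * C(h, i)."""
    return sum((k - 1) ** i * comb(h, i) for i in range(1, n + 1))

# Binary tree (k=2) of height 10 with 4 processors:
print(max_steals(n=4, k=2, h=10))   # 10 + 45 + 120 + 210 = 385
```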

9.
We address the problem of how a set of agents can decide to share a resource, represented as a unit-sized pie. The pie can be generated by the entire set, but also by some of its subsets. We investigate a finite-horizon non-cooperative bargaining game in which the players take turns making proposals on how the resource should be allocated, and the other players vote on whether or not to accept the allocation. Voting is modelled as a Bayesian weighted voting game with uncertainty about the players' weights. The agenda (i.e., the order in which the players are called to make offers) is defined exogenously. We focus on impatient players with heterogeneous discount factors. In the case of a conflict (i.e., no agreement by the deadline), no player receives anything. We provide a Bayesian subgame perfect equilibrium for the bargaining game and conduct an ex-ante analysis of the resulting outcome. We show that the equilibrium is unique, computable in polynomial time, and results in an instant Pareto-optimal outcome, and that, under certain conditions, it provides a foundation for the core and the nucleolus of the Bayesian voting game. In addition, our analysis leads to insights into how an individual's bargained share is influenced by his position on the agenda. Finally, we show that, if the conflict point of the bargaining game changes, then the problem of determining the non-cooperative equilibrium becomes NP-hard even under the perfect information assumption. Our research also reveals how this change in conflict point impacts the above results.

10.
In secure delegatable computation, computationally weak devices (or clients) wish to outsource their computation and data to an untrusted server in the cloud. While most earlier work considers the general question of how to securely outsource any computation to the cloud server, we focus on concrete, important functionalities and give the first protocol for the pattern matching problem in the cloud. Loosely speaking, this problem considers a text T that is outsourced to the cloud \({\textsc {S}}\) by a sender \({\textsc {SEN}}\). In a query phase, receivers \({\textsc {REC}}_1, \ldots , {\textsc {REC}}_l\) run an efficient protocol with the server \({\textsc {S}}\) and the sender \({\textsc {SEN}}\) in order to learn the positions at which a pattern of length m matches the text (and nothing beyond that). This is called the outsourced pattern matching problem, which is highly motivated in the context of delegatable computing since it offers storage alternatives for massive databases containing confidential data (e.g., health-related data about patient history). Our constructions are simulation-based secure in the presence of semi-honest and malicious adversaries (in the random oracle model) and limit the communication in the query phase to O(m) bits plus the number of occurrences—which is optimal. In contrast to generic solutions for delegatable computation, our schemes do not rely on fully homomorphic encryption, but instead use novel ideas for solving pattern matching based on a reduction to the subset-sum problem. Interestingly, we do not rely on the hardness of the problem; rather, we exploit instances that are solvable in polynomial time. A follow-up result demonstrates that the random oracle is essential in order to meet our communication bound.

11.
It has been just over 100 years since the birth of Alan Turing and more than 65 years since he published in Mind his seminal paper, Computing Machinery and Intelligence (Turing, Computing Machinery and Intelligence, Mind, 1950). In the Mind paper, Turing asked a number of questions, including whether computers could ever be said to have the power of "thinking" ("I propose to consider the question, 'Can machines think?'" ...Alan Turing, Computing Machinery and Intelligence, Mind, 1950). Turing also set up a number of criteria—including his imitation game—under which a human could judge whether a computer could be said to be "intelligent". Turing's paper, as well as his important mathematical and computational insights of the 1930s and 1940s, led to his popular acclaim as the "Father of Artificial Intelligence". In the years since his paper was published, however, no computational system has fully satisfied Turing's challenge. In this paper we focus on a different question, ignored in, but inspired by, Turing's work: How might the artificial intelligence practitioner implement "intelligence" on a computational device? Over the past 60 years, although the AI community has not produced a general-purpose computational intelligence, it has constructed a large number of important artifacts, as well as taken several philosophical stances able to shed light on the nature and implementation of intelligence. This paper contends that the construction of any human artifact includes an implicit epistemic stance. In AI this stance is found in commitments to particular knowledge representations and search strategies that lead to a product's successes as well as its limitations. Finally, we suggest that computational and human intelligence are two different natural kinds, in the philosophical sense, and elaborate on this point in the conclusion.

12.
We present a new method for the fast approximation of zeta constants, i.e., the values ζ(n) of the Riemann zeta function for integers n ≥ 2, by rational fractions. The method makes it possible to rapidly approximate zeta constants, and certain combinations of successive zeta constants, by rational fractions. By choosing the values of the coefficients involved in the combinations, one can control the convergence rate of the approximations and the computational complexity of evaluating the zeta constants.
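As a baseline for what "approximation by rational fractions" means, the sketch below computes the naive rational partial sum of ζ(n); the paper's method accelerates convergence well beyond this, so the code is a point of comparison only:

```python
import math
from fractions import Fraction

def zeta_partial(n, terms):
    """Naive rational approximation of zeta(n) by an exact partial sum,
    sum_{j=1}^{terms} 1/j^n, kept as a single Fraction."""
    return sum(Fraction(1, j ** n) for j in range(1, terms + 1))

approx = zeta_partial(2, 50)
print(approx)                            # an exact rational fraction
print(float(approx), math.pi ** 2 / 6)   # 1.625... vs zeta(2) = 1.6449...
```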

13.
Choosing the best location for starting a business or expanding an existing enterprise is an important issue. A number of location selection problems have been discussed in the literature; they often apply the reverse nearest neighbor as the criterion for finding suitable locations. In this paper, we instead apply the average distance as the criterion and propose the k-most suitable locations (k-MSL) selection problem. Given a positive integer k and three datasets (a set of customers, a set of existing facilities, and a set of potential locations), the k-MSL selection problem outputs k locations from the potential location set such that the average distance between a customer and his nearest facility is minimized. We formally define the k-MSL selection problem and show that it is NP-hard. We first propose a greedy algorithm which can quickly find an approximate result for users; two exact algorithms are then proposed to find the optimal result, with several pruning rules applied to increase computational efficiency. We evaluate the algorithms' performance using both synthetic and real datasets. The results show that our algorithms are able to deal with the k-MSL selection problem efficiently.
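A compact sketch of the greedy algorithm (illustrative; distances are Euclidean and tie-breaking is arbitrary):

```python
from math import dist

def greedy_k_msl(customers, facilities, candidates, k):
    """Greedy sketch: repeatedly pick the candidate location that most
    reduces the average customer-to-nearest-facility distance."""
    chosen = []
    for _ in range(k):
        def avg_dist(extra):
            opened = facilities + chosen + [extra]
            return sum(min(dist(c, f) for f in opened)
                       for c in customers) / len(customers)
        best = min((p for p in candidates if p not in chosen), key=avg_dist)
        chosen.append(best)
    return chosen

customers = [(0, 0), (4, 0), (0, 4), (5, 5)]
facilities = [(10, 10)]
candidates = [(1, 1), (4, 4), (9, 9)]
print(greedy_k_msl(customers, facilities, candidates, k=2))   # [(1, 1), (4, 4)]
```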

14.
One of the biggest challenges in data embedding is that the confidential data must remain 'transparent' after being embedded into the audio signal. Embedding methods must therefore minimize the influence of the embedded data on the original audio signal. In this paper, the multiple bit marking layers (MBML) method is proposed to fulfill this requirement. The method reuses the results from the previous embedding pass (layer) as the input data for continued embedding into the audio signal (i.e., the next layer). The quality of the proposed method is evaluated through embedding error (EE), signal-to-noise ratio (SNR), embedding capacity (EC), and contribution error (CE). Experimental results show that the proposed method provides better EE and SNR than other embedding methods such as LSB (least significant bit), ELS (embedding large sample), BM (bit marking), and BM/SW (bit marking with sliding window) with a single layer.
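One of the reported quality measures, SNR, can be computed as in the sketch below (a standard definition; the embedding step itself is simulated here by adding small noise):

```python
import numpy as np

def snr_db(original, embedded):
    """Signal-to-noise ratio of a stego signal against the original,
    in dB: 10 * log10(signal_power / noise_power)."""
    noise = embedded - original
    return 10 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
audio = rng.standard_normal(1000)
stego = audio + 1e-3 * rng.standard_normal(1000)  # tiny embedding distortion
print(f"{snr_db(audio, stego):.1f} dB")           # ~60 dB: near-transparent
```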

15.
Based on a unitary phase shift operation on a single qubit in association with Shamir's (t, n) secret sharing, a (t, n) threshold quantum secret sharing scheme (or (t, n)-QSS) is proposed to share both classical information and quantum states. The scheme uses decoy photons to prevent eavesdropping and employs the secret in Shamir's scheme as the private value to guarantee the correctness of secret reconstruction. Analyses show it is resistant to the typical intercept-and-resend attack, the entangle-and-measure attack, and participant attacks such as the entanglement swapping attack. Moreover, it is easier to realize physically and more practical in applications than related schemes. Using the method in our scheme, new (t, n)-QSS schemes can easily be constructed from other classical (t, n) secret sharing schemes.
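The classical ingredient, Shamir's (t, n) sharing over a prime field, can be sketched as follows (the quantum layer of the scheme is not modeled here; the field modulus is an arbitrary choice for illustration):

```python
import random

PRIME = 2**127 - 1   # field modulus (a Mersenne prime)

def share(secret, t, n):
    """Classical Shamir (t, n) sharing: a random degree-(t-1) polynomial
    with f(0) = secret, evaluated at x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover the secret
```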

16.
The popular Internet service YouTube has adopted HyperText Markup Language version 5 (HTML5) by default. With this adoption, YouTube has moved to Dynamic Adaptive Streaming over HTTP (DASH) as its Adaptive BitRate (ABR) video streaming technology; moreover, rate adaptation in DASH is solely receiver-driven. This motivates the present work to perform a deep analysis of YouTube's particular DASH implementation. First, this article provides a state of the art on DASH and adaptive streaming technology, as well as related work on YouTube traffic characterization. Second, it describes a new methodology and test-bed for traffic characterization and performance measurement of YouTube's DASH implementation. This methodology and test-bed do not use proxies and, moreover, are able to cope with YouTube traffic redirections. Finally, a set of experimental results is provided, involving a dataset of 310 YouTube videos. The results characterize YouTube's traffic pattern and discuss the allowed download bandwidth, YouTube's consumed bitrate, and video quality. Moreover, the results are cross-validated against an analysis of the HTTP requests performed by YouTube's video player. The outcomes of this article are applicable in the field of Quality of Service (QoS) and Quality of Experience (QoE) management. This is valuable information for Internet Service Providers (ISPs), because QoS management based on assured download bandwidth can be used to provide a target end-user QoE when the YouTube service is being consumed.

17.
There are (at least) two distinct traditions within group decision support: what we will call the 'Technology-driven' tradition, which originates in the Information Systems discipline, and what we will call the 'Model-driven' tradition, which originates in OR/MS. Although proponents of the two traditions share many of the same objectives, in the past there has been little communication between the two groups. In this paper, we describe the basic distinction between the two traditions in terms of two primary themes: research focus (i.e., what the researchers find of interest) and research philosophy and methodology (i.e., how researchers go about studying their chosen subject matter); and we trace the implications of these differences through the key concepts of each tradition. We conclude by arguing that there are many opportunities for synergy between the two traditions.

18.
Recently, Shi et al. (Phys Rev A 92:022309, 2015) proposed a quantum oblivious set-member decision protocol in which two legitimate parties, namely Alice and Bob, play a game. Alice has a secret k, and Bob has a set \(\{k_1,k_2,\ldots k_n\}\). The game is designed to test whether the secret k is a member of the set possessed by Bob without revealing the identity of k. The output of the game is either "Yes" (bit 1) or "No" (bit 0) and is generated at Bob's place. Bob does not know the identity of k, and Alice does not know any element of the set. In a subsequent work (Shi et al. in Quant Inf Process 15:363–371, 2016), the authors proposed a quantum scheme for private set intersection (PSI) where the client (Alice) obtains the intersected elements with the help of a server (Bob), and the server learns nothing. In the present draft, we extend the game to compute the intersection of two computationally indistinguishable sets X and Y possessed by Alice and Bob, respectively. We consider Alice and Bob to be rational players, i.e., they are neither "good" nor "bad"; they participate in the game to maximize their utilities. We prove that in this rational setting, the strategy profile ((cooperate, abort), (cooperate, abort)) is a strict Nash equilibrium. If ((cooperate, abort), (cooperate, abort)) is strict Nash, then fairness and correctness of the protocol are guaranteed.

19.
The World Health Organization (WHO) reported in 2013 that more than seven million premature deaths every year are attributable to air pollution. Because of their great flexibility and cost-effectiveness, fibrous filters are widely used for removing particulates from gases. The influence of the relevant parameters, namely the fiber arrangement, solid volume fraction (SVF or α), fluid flow face velocity (mean inlet velocity), and filter thickness (I_x), on pressure drop and deposition efficiency is investigated. Furthermore, to study the effects of the laminar flow regime and the fiber's cross-sectional shape on particle deposition, a single square fiber is placed in a channel. Using the finite volume method (FVM), the 2-D motion of 100–1000 nm particles is investigated numerically. The Lagrangian method is employed, and the Saffman lift, drag, and Brownian forces are considered to affect this motion. The contribution of increasing the Reynolds number to filtration performance increases for smaller fine aerosols, up to a level of 59.72%; however, for particles over 500 nm, the Re = 100 regime gives more efficient results, by up to 26.97%. Remarkably, a single square fiber in the Re = 200 regime performs similarly to the optimum choice of multi-fibrous filters. The parallel circular multi-fibrous filter with a ratio of horizontal-to-vertical distances between fibers l/h = 1.143, α = 0.687, I_x = 116.572, and h/d_f = 1.0 is shown to be the most efficient filter structure. An increase in the ratio of the vertical distance between fibers to the fiber diameter (h/d_f) and a decrease in SVF (α) result in a drastic decrease in the filtration performance of both parallel and staggered structures. The obtained results have been validated against previous research findings.

20.
Post-quantum cryptography has drawn considerable attention from cryptologists on a global scale. At Asiacrypt 2017, Leander and May combined Grover's and Simon's quantum algorithms to break the FX-based block ciphers, which were introduced by Kilian and Rogaway to strengthen DES. In this study, we investigate Feistel constructions using Grover's and Simon's algorithms to generate new quantum key-recovery attacks on different rounds of Feistel constructions. Our attacks require \(2^{0.25nr-0.75n}\) quantum queries to break an r-round Feistel construction. The time complexity of our attacks is less than that observed for quantum brute-force search by a factor of \(2^{0.75n}\). When compared with the best classical attacks, i.e., Dinur et al.'s attacks at CRYPTO 2015, the time complexity is reduced by a factor of \(2^{0.5n}\) without incurring any memory cost.
