Similar Documents

20 similar documents found.
1.
It is well known that the average-case deterministic communication complexity is bounded below by an entropic quantity, which one would now call deterministic information complexity. In this paper we show a corresponding upper bound. We also improve known lower bounds for the public-coin Las Vegas communication complexity by a constant factor.

2.
A measure of product-variety-induced complexity has been proposed for mixed-model assembly systems with serial, parallel and hybrid configurations. The complexity model was built on the assumption of identical parallel stations, i.e., the same product variants are produced at all parallel stations in the same volume and with the same mix ratio. In this paper, the existing complexity model is extended to general mixed-model assembly systems with non-identical parallel stations in the presence of product variety. We then discuss how to reduce system complexity through variant differentiation, and on that basis develop a mathematical formulation to minimize the complexity of a mixed-model assembly system. The formulated problem is a non-linear programming problem and is solved by a genetic algorithm. Finally, the developed complexity mitigation model is applied to the configuration selection of assembly systems, i.e., to identify the system configuration with the minimum complexity.
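Below is a minimal illustrative sketch, not the paper's model: it treats each parallel station's complexity as the Shannon entropy of its local product mix and lets a tiny genetic algorithm search over variant-to-station assignments. The demand vector, station count, entropy objective, and GA settings are all assumptions made for illustration.

```python
# Minimal sketch: minimize an entropy-based complexity over variant-to-station
# assignments with a simple genetic algorithm. All numbers are illustrative.
import random
from math import log2

DEMAND = [0.30, 0.25, 0.20, 0.15, 0.10]   # mix ratios of 5 product variants (assumed)
STATIONS = 2                               # non-identical parallel stations (assumed)

def station_entropy(ratios):
    """Shannon entropy of a station's normalized local product mix."""
    total = sum(ratios)
    if total == 0:
        return 0.0
    return -sum((r / total) * log2(r / total) for r in ratios if r > 0)

def complexity(assign):
    # assign[v] = index of the station that produces variant v
    return sum(
        station_entropy([DEMAND[v] for v in range(len(DEMAND)) if assign[v] == s])
        for s in range(STATIONS)
    )

def ga(pop_size=30, generations=200, mut_rate=0.2):
    pop = [[random.randrange(STATIONS) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=complexity)                   # lower complexity is fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DEMAND)) # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:         # point mutation
                child[random.randrange(len(DEMAND))] = random.randrange(STATIONS)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=complexity)
    return best, complexity(best)

assignment, c = ga()
print("best assignment:", assignment, "complexity:", round(c, 3))
```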

3.
We introduce a method based on Kolmogorov complexity to prove lower bounds on communication complexity. The intuition behind our technique is close to information-theoretic methods. We use Kolmogorov complexity for three different things: first, to give a general lower bound in terms of Kolmogorov mutual information; second, to prove an alternative to Yao's minimax principle based on Kolmogorov complexity; and finally, to identify hard inputs. We show that our method implies the rectangle and corruption bounds, known to be closely related to the subdistribution bound. We apply our method to the hidden matching problem, a relation introduced to prove an exponential gap between quantum and classical communication. We then show that our method generalizes the VC dimension and shatter coefficient lower bounds. Finally, we compare one-way communication and simultaneous communication in the case of distributional communication complexity and improve the previously known result.
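For context, the Kolmogorov mutual information in which the general lower bound is phrased is standardly defined as follows (one common form from algorithmic information theory; the paper's exact variant may differ):

```latex
% Algorithmic (Kolmogorov) mutual information; K is prefix Kolmogorov
% complexity, and the equalities hold up to additive O(log) terms
% (symmetry of information).
I(x : y) = K(x) - K(x \mid y)
         = K(y) - K(y \mid x)
         = K(x) + K(y) - K(x, y)
```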

4.
We use Fourier analysis to obtain general lower bounds for the probabilistic communication complexity of large classes of functions. We give some examples showing how to use our method in some known cases and for some new functions. Our main tool is an inequality by Kahn, Kalai, and Linial, derived from two lemmas of Beckner.
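The Kahn-Kalai-Linial inequality the method relies on is commonly stated as follows (one standard form; the constant varies by source):

```latex
% KKL theorem: every Boolean function f : \{0,1\}^n \to \{0,1\} has a
% variable whose influence is large relative to the variance of f, where
% \mathrm{Inf}_i(f) is the probability that flipping coordinate i changes f
% and c > 0 is an absolute constant.
\max_{i \in [n]} \mathrm{Inf}_i(f) \;\ge\; c \cdot \mathrm{Var}(f) \cdot \frac{\log n}{n}
```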

5.
Studies on supply chain complexity mainly use the distinction between static and dynamic complexity. While static complexity describes the structure of the supply chain (the number and variety of its components and the strengths of the interactions between them), dynamic complexity represents the uncertainty in the supply chain and involves aspects of time and randomness. This distinction is also valid when classifying the drivers of supply chain complexity according to the way they are generated. Supply chain complexity drivers (e.g., number/variety of suppliers, number/variety of customers, number/variety of interactions, conflicting policies, demand amplification, differing/conflicting/non-synchronized decisions and actions, incompatible IT systems) play a significant and varying role in dealing with the complexity of different types of supply chains (e.g., food, chemical, electronics, automotive).

6.
This paper analyzes the complexity of the double-disk fault-tolerant storage scheme RAIDSx under three states: normal, fault, and reconstruction. The space overhead of RAIDSx is lower than that of other code-mixing solutions but higher than that of XOR-based code solutions. Although RAIDSx's mapping introduces computation overhead in the controller, it optimizes the data-access performance of the entire storage system. Its parity-computation overhead for data access is as low as RAID5's in the normal state, and increases by only a near-constant amount from the normal state to the fault state or between two reconstruction states.
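Since the abstract compares RAIDSx's normal-state overhead to RAID5, here is a minimal sketch of the RAID5-style XOR parity that the comparison refers to. RAIDSx's own double-fault mapping is not described in the abstract and is not reproduced here.

```python
# RAID5-style single parity: parity = XOR of all data blocks, and any one
# lost block is the XOR of the parity with the surviving blocks.
def xor_parity(blocks):
    """XOR all equal-sized data blocks together to form the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving_blocks, parity):
    """Recover a single lost block from the survivors plus the parity."""
    return xor_parity(surviving_blocks + [parity])

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = xor_parity(data)
assert rebuild([data[0], data[2]], p) == data[1]   # lost block 1 recovered
```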

7.
8.
9.
10.
This work studies the quantum query complexity of Boolean functions in an unbounded-error scenario where it is only required that the query algorithm succeeds with probability strictly greater than 1/2. We show that, just as in the communication complexity model, the unbounded-error quantum query complexity is exactly half of its classical counterpart for any (partial or total) Boolean function. Moreover, connecting the query and communication complexity results, we show that the “black-box” approach of Buhrman, Cleve, and Wigderson [STOC'98] for converting quantum query algorithms into communication protocols is optimal even in the unbounded-error setting. We also study a related setting, called the weakly unbounded-error setting, where the cost of a query algorithm is given by q + log(1/(2(p − 1/2))), where q is the number of queries made and p > 1/2 is the success probability of the algorithm. In contrast to the case of communication complexity, we show a tight multiplicative Θ(log n) separation between quantum and classical query complexity in this setting for a partial Boolean function. The asymptotic equivalence between them is also shown for some well-studied total Boolean functions.
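A small numeric illustration of the weakly unbounded-error cost q + log(1/(2(p − 1/2))), taking logs base 2 as is conventional in this setting: the penalty term grows as the success probability p approaches 1/2.

```python
# Weakly unbounded-error cost: q queries plus a penalty that charges for
# how close the success probability p is to 1/2. Sample values are illustrative.
from math import log2

def weak_cost(q, p):
    assert p > 0.5
    return q + log2(1 / (2 * (p - 0.5)))

for p in (0.75, 0.51, 0.5 + 2**-20):
    print(f"q=10, p={p}: cost = {weak_cost(10, p):.2f}")
# p = 3/4 adds only 1 bit; p = 1/2 + 2^-20 adds 19 bits.
```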

11.
The graphical or visual user interface (GUI) is recognized as one of the most important application components for safety-critical and business-oriented software systems. It is highly advantageous for GUI designers and application developers to analyze the visual complexity of a GUI and predict users' perception and judgment during the design phase. Although various methods for visual complexity analysis have been developed in recent years, they have not been widely used due to applicability, practicality and validity issues. In this respect, we have conducted a comprehensive review of studies and methods in visual complexity analysis. After identifying and analyzing 85 research studies, we grouped the visual complexity analysis methods and present a corresponding taxonomy. Furthermore, a conceptual comparison of the methods is given, and a gap analysis as well as possible future directions are provided. According to our findings, the major gaps for each visual complexity analysis method may be stated as follows: 1) In metric-model-based methods, there is a lack of information about the suitability of the metric model created for analysis, since the extent to which each metric contributes to visual complexity analysis is still not known exactly. 2) In heuristic-based methods, the extracted rule sets do not yet extend beyond the specific GUIs they were built for. 3) While visual complexity analysis could be considered a kind of computer vision task, only limited studies treat it as such. Therefore, generalizable solutions based on machine learning techniques seem a promising research direction for developing efficient approaches.
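For flavor of the metric-style methods discussed, here is a minimal sketch that uses compressibility as a visual complexity proxy (more regular layouts compress better). This is a generic illustration of our own, not one of the 85 surveyed methods.

```python
# Compression-based complexity proxy: ratio of compressed to raw size of the
# rendered pixel buffer; higher means less regularity, i.e. more complexity.
import random
import zlib

def complexity_proxy(pixels: bytes) -> float:
    """Ratio in roughly [0, 1]; higher = visually more complex (assumed proxy)."""
    return len(zlib.compress(pixels, 9)) / len(pixels)

random.seed(0)
flat = bytes([128]) * 10_000                              # uniform gray "screen"
noisy = bytes(random.randrange(256) for _ in range(10_000))  # random texture
print(f"flat UI:  {complexity_proxy(flat):.3f}")   # near 0
print(f"noisy UI: {complexity_proxy(noisy):.3f}")  # near 1
```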

12.
For a least-squares problem of degree-m polynomial regression with n measured values (n ≥ m + 1), the traditional methods need O(n²m) arithmetic operations. We prove that the arithmetic complexity of this problem does not exceed O(n log₂²m).
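For contrast with the stated bounds, here is a plain sketch of the traditional normal-equations baseline; the paper's fast algorithm is not reproduced here.

```python
# Traditional baseline: build the Vandermonde design matrix and solve the
# normal equations (V^T V) a = V^T y for the degree-m coefficient vector a.
import numpy as np

def polyfit_normal_equations(x, y, m):
    V = np.vander(x, m + 1, increasing=True)  # column j holds x**j, j = 0..m
    return np.linalg.solve(V.T @ V, V.T @ y)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)                    # n = 50 sample points
y = 1 + 2 * x - 3 * x**2 + rng.normal(0, 0.01, x.size)
print(polyfit_normal_equations(x, y, 2))      # approx [1, 2, -3]
```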

13.
14.
15.
A measure of operation complexity in spaceflight is proposed using a weighted Euclidean norm based on four factors: complexity of operation step size (COSS), complexity of operation logic structure (COLS), complexity of operation instrument information (COII), and complexity of space mission information (CSMI). The development of the operation complexity measure followed four steps. First, four factors were identified to be reflected in the operation complexity measure for spaceflight. Second, entropy theory was adopted to measure the four factors. Third, the weights of the four factors were determined based on a questionnaire survey of 10 astronauts. Finally, the operation complexity values of spaceflight operations were determined by the weighted Euclidean norm of the four factors. To verify the validity of this complexity measure, a one-factor experiment was designed to test the proposed hypotheses. Ten subjects participated in the experiment and performed 179 trials. Both objective indexes (operation time and error rate) and subjective indexes (workload evaluated by the NASA Task Load Index questionnaire and subjective complexity rating) were used in the experiment. The data analysis showed that the average operation time, subjective complexity rating, and subjective workload could be predicted well from the operation complexity value (R = 0.876, 0.802, and 0.698, respectively); the error rate could only be partly explained by the operation complexity value (R = 0.343).

Relevance to industry

The proposed operation complexity measure can be used for ergonomics evaluation of spaceflight operation design. It can also be used for astronaut training planning. Training resources can be allocated to spaceflight tasks according to their operation complexity.
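A minimal sketch of the weighted-Euclidean-norm combination of the four entropy-based factors. The factor distributions, weights, and the exact norm form below are illustrative assumptions; the paper derived its weights from the astronaut survey.

```python
# Combine four entropy-based factor scores (COSS, COLS, COII, CSMI) with a
# weighted Euclidean norm, read here as sqrt(sum_i (w_i * c_i)^2).
from math import log2, sqrt

def entropy(probs):
    """Shannon entropy, the measure the paper adopts for each factor."""
    return -sum(p * log2(p) for p in probs if p > 0)

def operation_complexity(factors, weights):
    return sqrt(sum((w * c) ** 2 for w, c in zip(weights, factors)))

factors = [entropy([0.5, 0.5]),        # COSS: illustrative distribution
           entropy([0.7, 0.2, 0.1]),   # COLS
           entropy([0.25] * 4),        # COII
           entropy([0.6, 0.4])]        # CSMI
weights = [0.3, 0.3, 0.2, 0.2]         # illustrative, not the surveyed weights
print(f"operation complexity = {operation_complexity(factors, weights):.3f}")
```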

16.
17.
For common data flow schemas, the number of copies of tokens made during a computation is shown to be a Blum complexity measure (Ref. 1). Results from abstract complexity theory (see Ref. 2) then hold for the copy measure, indicating, for example, that any implementation of a data flow processor will be constrained by its ability to copy tokens. The copy measure is a natural measure of complexity for data flow computations, and is distinct from the usual time or space measures. The result is generalized to a wider class of data flow schemas, including those with an apply operator. An example is also given of a data flow schema which makes no copies. Supported in part by NSF Grant MCS 83-01536 and NSA OCREAE Grant MDA904-85-H-0002.
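Blum's two axioms, which the copy count is shown to satisfy, are standardly stated as:

```latex
% A family \Phi = \{\Phi_i\} is a Blum complexity measure for an acceptable
% enumeration \{\varphi_i\} of the partial computable functions iff:
\text{(B1)}\quad \Phi_i(x) \text{ is defined} \iff \varphi_i(x) \text{ is defined};
\qquad
\text{(B2)}\quad \{(i, x, m) : \Phi_i(x) = m\} \text{ is decidable.}
```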

18.
19.
We present a greatly simplified proof of the length-space trade-off result for resolution in [P. Hertel, T. Pitassi, Exponential time/space speedups for resolution and the PSPACE-completeness of black-white pebbling, in: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS '07), Oct. 2007, pp. 137-149], and also prove a couple of other theorems in the same vein. We point out two important ingredients needed for our proofs to work, and discuss some possible conclusions. Our key trick is to look at formulas of the form F = G ∧ H, where G and H are over disjoint sets of variables and have very different length-space properties with respect to resolution.

20.
Bottom-Up-Heapsort is a variant of Heapsort. Its worst-case complexity for the number of comparisons is known to be bounded from above by (3/2)n log n + O(n), where n is the number of elements to be sorted. There is also an example of a heap which needs (5/4)n log n − O(n log log n) comparisons. We show in this paper that the upper bound is asymptotically tight, i.e., we prove for large n the existence of heaps which need at least (3/2)n log n − O(n log log n) comparisons. This result also proves the old conjecture that the best case of classical Heapsort needs only n log n + O(n log log n) comparisons asymptotically. This work was supported by the ESPRIT II program of the EC under Contract No. 3075 (project ALCOM).
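A quick arithmetic check of how far apart the stated bounds are, ignoring the lower-order O(n) terms:

```python
# Compare Bottom-Up-Heapsort's worst case (3/2) n lg n against the
# n lg n + n lg lg n best-case shape of classical Heapsort.
from math import log2

for k in (10, 20, 30):                     # n = 2^10, 2^20, 2^30
    n = 2 ** k
    worst = 1.5 * n * log2(n)
    best = n * log2(n) + n * log2(log2(n))
    print(f"n = 2^{k}: 1.5*n*lg n = {worst:.3e}, "
          f"n*lg n + n*lg lg n = {best:.3e}, ratio = {worst / best:.3f}")
# The ratio tends to 3/2 as n grows, matching the asymptotic tightness result.
```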
