Similar Documents
20 similar records found (search time: 31 ms)
1.
We study the cover time of multiple random walks on undirected graphs G=(V,E). We consider k parallel, independent random walks that start from the same vertex. The speed-up is defined as the ratio of the cover time of a single random walk to the cover time of these k random walks. Recently, Alon et al. (2008) [5] derived several upper bounds on the cover time, which imply a speed-up of Ω(k) for several graphs; however, for many of them, k has to be bounded by O(log n). They also conjectured that, for any 1 ≤ k ≤ n, the speed-up is at most O(k) on any graph. We prove the following main results:
We present a new lower bound on the speed-up that depends on the mixing time. It gives a speed-up of Ω(k) on many graphs, even if k is as large as n.
We prove that the speed-up is O(k log n) on any graph. For a large class of graphs we can also improve this bound to O(k), matching the conjecture of Alon et al.
We determine the order of the speed-up for any value of 1 ≤ k ≤ n on hypercubes, random graphs and degree-restricted expanders. For d-dimensional tori with d > 2, our bounds are tight up to logarithmic factors.
Our findings also reveal a surprisingly sharp threshold behaviour for certain graphs, e.g., the d-dimensional torus with d > 2 and hypercubes: there is a value T such that the speed-up is approximately min{T, k} for any 1 ≤ k ≤ n.
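As an illustration of the speed-up defined above, the following sketch (an independent illustration, not code from the paper) empirically estimates the cover time of k parallel, independent random walks started at a common vertex of a cycle and reports its ratio to the single-walk cover time; the graph, its size and the trial count are arbitrary choices.

```python
import random

def cover_time(adj, start, k, trials=200):
    """Estimate the expected number of (parallel) steps until k independent
    random walks started at `start` have visited every vertex."""
    n = len(adj)
    total = 0
    for _ in range(trials):
        pos = [start] * k
        covered = {start}
        steps = 0
        while len(covered) < n:
            steps += 1
            for i in range(k):           # advance all k walkers by one step
                pos[i] = random.choice(adj[pos[i]])
                covered.add(pos[i])
        total += steps
    return total / trials

# Cycle on n vertices: each vertex is adjacent to its two neighbours.
n = 64
adj = [[(v - 1) % n, (v + 1) % n] for v in range(n)]

single = cover_time(adj, 0, 1)
for k in (2, 4, 8):
    print(f"k={k}: speed-up ≈ {single / cover_time(adj, 0, k):.2f}")
```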

2.
The k-clique problem is a cornerstone of NP-completeness and parameterized complexity. When k is a fixed constant, the asymptotically fastest known algorithm for finding a k-clique in an n-node graph runs in O(n^{0.792k}) time (given by Nešetřil and Poljak). However, this algorithm is infamously inapplicable, as it relies on Coppersmith and Winograd's fast matrix multiplication. We present good combinatorial algorithms for solving k-clique problems. These algorithms do not require large constants in their runtime, they can be readily implemented in any reasonable random access model, and they are very space-efficient compared to their algebraic counterparts. Our results are the following:
We give an algorithm for k-clique that runs in O(n^k/(ε log n)^{k−1}) time and O(n^ε) space, for all ε > 0, on graphs with n nodes. This is the first algorithm to take o(n^k) time and O(n^c) space for c independent of k.
Let k be even. Define a k-semiclique to be a k-node graph G that can be divided into two disjoint subgraphs U = {u_1, …, u_{k/2}} and V = {v_1, …, v_{k/2}} such that U and V are cliques, and for all i ≤ j, the graph G contains the edge {u_i, v_j}. We give an algorithm for determining if a graph has a k-semiclique. This yields an approximation algorithm for k-clique, in the following sense: if a given graph contains a k-clique, then our algorithm returns a subgraph with at least 3/4 of the edges in a k-clique.
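To make the k-semiclique definition concrete, here is a tiny brute-force verifier (an illustration only; the paper's algorithm is far more efficient). It tries every ordered split of the k vertices into the two halves U and V.

```python
from itertools import combinations, permutations

def is_semiclique(vertices, edges):
    """Brute-force check: is this k-node graph a k-semiclique (k = len(vertices))?
    Tries every ordered split into halves U = (u_1..u_{k/2}) and V = (v_1..v_{k/2})."""
    k = len(vertices)
    assert k % 2 == 0
    E = {frozenset(e) for e in edges}
    is_clique = lambda S: all(frozenset(p) in E for p in combinations(S, 2))
    for perm in permutations(vertices):
        U, V = perm[:k // 2], perm[k // 2:]
        if (is_clique(U) and is_clique(V)
                and all(frozenset((U[i], V[j])) in E
                        for i in range(k // 2) for j in range(i, k // 2))):
            return True
    return False

# K4 minus the edge {1, 3}: it contains no 4-clique, but it is a 4-semiclique with
# U = (0, 1), V = (3, 2); the missing edge {1, 3} is the pair (u_2, v_1) with i > j.
print(is_semiclique([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]))   # True
```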

3.
Polynomial algorithms are given for the following two problems:
given a graph with n vertices and m edges, find a complete balanced bipartite subgraph K_{q,q} with ,
given a graph with n vertices, find a decomposition of its edges into complete balanced bipartite graphs having altogether O(n^2/ln n) vertices.
The first algorithm can be modified to have running time linear in m and find a K_{q′,q′} with q′ = ⌊q/5⌋. Previous proofs of the existence of such objects, due to Kővári, Sós and Turán (1954) [10], Chung, Erdős and Spencer (1983) [5], Bublitz (1986) [4] and Tuza (1984) [13], were non-constructive.

4.
In 2011, Cai and Yang initiated the systematic parameterized complexity study of the following set of problems around Eulerian graphs: for a given graph G and integer k, the task is to decide whether G contains a (connected) subgraph with k vertices (edges) with all vertices of even (odd) degrees. They succeeded in establishing the parameterized complexity of all cases except two, which ask about:
a connected k-edge subgraph with all vertices of odd degrees, the problem known as k-Edge Connected Odd Subgraph; and

5.
6.
On the structure of generalized rough sets
In this paper we consider some fundamental properties of generalized rough sets induced by binary relations on algebras and show that
1.
Any reflexive binary relation determines a topology.
2.
If θ is a reflexive and symmetric relation on a set X, then O = {A ⊆ X | θ⁻(A) = A} is a topology such that A is open if and only if it is closed (a small computational illustration follows this list).
3.
Conversely, for every topological space (X, O) satisfying the condition that A is open if and only if it is closed, there exists a reflexive and symmetric relation R such that O = {A ⊆ X | R⁻(A) = A}.
4.
Let θ be an equivalence relation on X. For any pseudo ω-closed subset A of X, θ(A) is an ω-closed set if and only if ω(x, x, …, x) ∈ θ(A) for any x ∈ X.
Moreover, we consider properties of generalized rough sets.
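A small computational illustration of item 2 (an example constructed for this listing, assuming θ⁻(A) denotes the lower approximation {x ∈ X | θ(x) ⊆ A}): the family O of fixed sets of a reflexive, symmetric relation on a four-element set is enumerated exhaustively, and each member's complement is confirmed to lie in O as well.

```python
from itertools import chain, combinations

X = {0, 1, 2, 3}
# A reflexive and symmetric relation theta on X (each pair listed both ways).
theta = {(x, x) for x in X} | {(0, 1), (1, 0), (2, 3), (3, 2)}

def lower(A):
    """Lower approximation: x such that every theta-successor of x lies in A."""
    return {x for x in X if all(y in A for (u, y) in theta if u == x)}

def powerset(S):
    return [set(c) for c in chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

O = [A for A in powerset(X) if lower(A) == A]   # the induced family of "open" sets
print(O)                                        # the empty set, {0, 1}, {2, 3}, and X
print(all((X - A) in O for A in O))             # open iff closed: True
```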

7.
8.
A queue layout of a graph consists of a linear order of its vertices, and a partition of its edges into queues, such that no two edges in the same queue are nested. The minimum number of queues in a queue layout of a graph G, denoted by qn(G), is called the queuenumber of G. Heath and Rosenberg [SIAM J. Comput. 21 (1992) 927-958] showed that the Boolean n-cube (i.e., the n-dimensional hypercube) can be laid out using at most n−1 queues. Heath et al. [SIAM J. Discrete Math. 5 (1992) 398-412] showed that the ternary n-cube can be laid out using at most 2n−2 queues. Recently, Hasunuma and Hirota [Inform. Process. Lett. 104 (2007) 41-44] improved the upper bound on the queuenumber of hypercubes to n−2. In this paper, we deal with the upper bound on the queuenumber of a wider class of graphs called k-ary n-cubes, which contains hypercubes and ternary n-cubes as subclasses. Our result improves the previous bound in the case of ternary n-cubes. Let  denote the n-dimensional k-ary cube. This paper contributes three main results as follows:
(1)
if n ≥ 3.
(2)
if n ≥ 2 and 4 ≤ k ≤ 8.
(3)
if n ≥ 1 and k ≥ 9.
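As an illustration of the queue-layout definition (not the paper's construction for k-ary n-cubes): two edges are nested with respect to a linear vertex order when one lies strictly inside the other, and a partition into queues is valid exactly when no queue contains a nested pair. A quick checker, assuming vertices are identified with their positions in the order:

```python
from itertools import combinations

def nested(e, f):
    """True if one edge lies strictly inside the other w.r.t. the vertex order.
    Edges are pairs of integer positions in the linear layout."""
    (a, b), (c, d) = sorted(e), sorted(f)
    return (a < c and d < b) or (c < a and b < d)

def is_queue_layout(queues):
    """queues: list of lists of edges; valid iff no queue contains a nested pair."""
    return all(not nested(e, f)
               for q in queues
               for e, f in combinations(q, 2))

# A 4-cycle laid out as 0,1,2,3 needs two queues: {0,1},{1,2},{2,3} can share one,
# but {0,3} nests over {1,2}, so it must go into a second queue.
print(is_queue_layout([[(0, 1), (1, 2), (2, 3)], [(0, 3)]]))   # True
print(is_queue_layout([[(0, 3), (1, 2)]]))                     # False
```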

9.
For computer simulations of heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 that adds an important capability: wobbling beam illumination. The wobbling beam offers a unique possibility for a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy is released for a future fusion reactor.

New version program summary

Program title: OK3
Catalogue identifier: ADST_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 221 517
No. of bytes in distributed program, including test data, etc.: 2 471 015
Distribution format: tar.gz
Programming language: C++
Computer: PC (Pentium 4, 1 GHz or more recommended)
Operating system: Windows or UNIX
RAM: 2048 MBytes
Classification: 19.7
Catalogue identifier of previous version: ADST_v2_0
Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143
Does the new version supersede the previous version?: Yes
Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., precise beam energy deposition is essentially important [1]. Codes OK1 and OK2 have been developed to simulate the heavy ion beam energy deposition in three-dimensional, arbitrarily shaped targets [2,3]. Wobbling beam illumination is important to smooth the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and a sufficient fusion output energy is released.
Solution method: OK3 works on the basis of OK1 and OK2 [2,3]. The code simulates multi-beam illumination on a target of arbitrary shape and structure, including the beam wobbling function.
Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important of which is the beam wobbling function.
Summary of revisions:
1.
In the code OK3, beams are subdivided into many bunches. The displacement of each bunch center from the initial beam direction is calculated.
2.
Code OK3 allows the beamlet number to vary from bunch to bunch. This reduces the calculation error, especially in the case of very complicated mesh structures with large internal holes.
3.
The rise in target temperature during the period of energy deposition is taken into account.
4.
Some procedures are improved to perform faster.
5.
Energy conservation is checked at each step of the calculation process and corrected if necessary.
New procedures included in OK3
1.
Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam.
2.
Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam.
3.
Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in chamber and pellet systems.
4.
Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target.
5.
Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process.
6.
Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process.
7.
Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process.
Modified procedures in OK3
1.
Procedure InitBeam( ): This procedure initializes the beam radius and coefficients A1, A2, A3, A4 and A5 for Gauss distributed beams [2]. It is enlarged in OK3 and can set beams with radii from 1 to 20 mm.
2.
Procedure kBunch( ) is modified to allow beamlet number variation from bunch to bunch during the deposition.
3.
Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster.
4.
Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy.
5.
Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation in energy deposition non-uniformity. This procedure is not included in code OK2 because of its limited applications (for spherical targets only). It is taken from code OK1 and modified to perform with code OK3.
Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination, as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a Pentium 4, 2.4 GHz CPU.
References:
[1]
A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377.
[2]
A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172.
[3]
A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.

10.
This work presents a new version of a software package for the study of chaotic flows, maps and fractals [1]. The codes were written in Scilab, a software package for numerical computation that provides a powerful open computing environment for engineering and scientific applications. Scilab provides various functions for solving ordinary differential equations, the Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behavior of the nonlinear dynamical systems is analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and the Kolmogorov-Sinai entropy. Various well-known examples are implemented, and users can insert their own ODEs or iterative equations.

New version program summary

Program title: Chaos v2.0
Catalogue identifier: AEAP_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAP_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1275
No. of bytes in distributed program, including test data, etc.: 7135
Distribution format: tar.gz
Programming language: Scilab 5.1.1. Scilab 5.1.1 should be installed before running the program. Information about the installation can be found at http://wiki.scilab.org/howto/install/windows.
Computer: PC-compatible running Scilab on MS Windows or Linux
Operating system: Windows XP, Linux
RAM: below 150 Megabytes
Classification: 6.2
Catalogue identifier of previous version: AEAP_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 788
Does the new version supersede the previous version?: Yes
Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODEs).
Solution method:
1.
Numerical solving of ordinary differential equations for the study of chaotic flows. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincare sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies.
2.
Numerical solving of iterative equations for the study of maps and fractals.
Reasons for new version: The program has been updated to use the new version 5.1.1 of Scilab with new graphical capabilities [2]. Moreover, new use cases have been added which make the handling of the program easier and more efficient.
Summary of revisions:
1.
A new use case concerning coupled predator-prey models has been added [3].
2.
Three new use cases concerning fractals (Sierpinsky gasket, Barnsley's Fern and Tree) have been added [3].
3.
The graphical user interface (GUI) of the program has been reconstructed to include the new use cases.
4.
The program has been updated to use Scilab 5.1.1 with the new graphical capabilities.
Additional comments: The program package contains 12 subprograms.
interface.sce - the graphical user interface (GUI) that permits the choice of a routine as follows
1.sci - Lorenz dynamical system
2.sci - Chua dynamical system
3.sci - Rössler dynamical system
4.sci - Henon map
5.sci - Lyapunov exponents for Lorenz dynamical system
6.sci - Lyapunov exponent for the logistic map
7.sci - Shannon entropy for the logistic map
8.sci - Coupled predator-prey model
1f.sci - Sierpinsky gasket
2f.sci - Barnsley's Fern
3f.sci - Barnsley's Tree
Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponent calculations; 60 to 1000 seconds for problems that involve high-order ODEs, Lyapunov exponent calculations and fractals.
References:
[1]
C.C. Bordeianu, C. Besliu, Al. Jipa, D. Felea, I. V. Grossu, Comput. Phys. Comm. 178 (2008) 788.
[2]
S. Campbell, J.P. Chancelier, R. Nikoukhah, Modeling and Simulation in Scilab/Scicos, Springer, 2006.
[3]
R.H. Landau, M.J. Paez, C.C. Bordeianu, A Survey of Computational Physics, Introductory Computational Science, Princeton University Press, 2008.
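To give a feel for what the Lyapunov-exponent routine for the logistic map (6.sci) computes, here is a language-agnostic sketch in plain Python (not the package's Scilab code): the exponent of x_{n+1} = r·x_n·(1 − x_n) is estimated as the orbit average of log|f′(x_n)|; the parameters and iteration counts are arbitrary choices.

```python
import math

def logistic_lyapunov(r, x0=0.1, burn_in=1000, iters=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the orbit
    average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn_in):                 # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(iters):
        x = r * x * (1 - x)
        s += math.log(abs(r * (1 - 2 * x)))
    return s / iters

print(logistic_lyapunov(3.2))   # negative: periodic regime
print(logistic_lyapunov(4.0))   # close to ln 2 ≈ 0.693: chaotic regime
```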

11.
In this paper we report on the LCG Monte-Carlo Data Base (MCDB) and the software developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC Collaborations by experts. In many cases, modern Monte-Carlo simulation of physical processes requires expert knowledge of Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make such sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.

Program summary

Program title: LCG Monte-Carlo Data Base
Catalogue identifier: ADZX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 30 129
No. of bytes in distributed program, including test data, etc.: 216 943
Distribution format: tar.gz
Programming language: Perl
Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb
Operating system: Scientific Linux CERN 3/4
RAM: 1 073 741 824 bytes (1 Gb)
Classification: 9
External routines:
perl >= 5.8.5;
Perl modules
DBD-mysql >= 2.9004,
File::Basename,
GD::SecurityImage,
GD::SecurityImage::AC,
Linux::Statistics,
XML::LibXML > 1.6,
XML::SAX,
XML::NamespaceSupport;
Apache HTTP Server >= 2.0.59;
mod auth external >= 2.2.9;
edg-utils-system RPM package;
gd >= 2.0.28;
rpm package CASTOR-client >= 2.1.2-4;
arc-server (optional)
Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events, or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed for investigations either in SM analyses (as a signal) or in searches for new phenomena in Beyond the Standard Model analyses (as a background). If the samples are made publicly available and equipped with comprehensive documentation, cross checks of the samples themselves and of the physical models applied can be sped up. Some event samples require substantial computing resources to prepare, so a central storage of the samples prevents waste of researcher time and of the computing resources that would otherwise be spent preparing the same events many times.
Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web server (http://mcdb.cern.ch). All event samples are kept on tapes at CERN. Documentation describing the events is the main content of MCDB. Users can browse the knowledgebase, read and comment on articles (documentation), and download event samples. Authors can upload new event samples, create new articles, and edit their own articles.
Restrictions: The software is adapted to solve the problems described in the article; there are no additional restrictions.
Unusual features: The software provides a framework to store and document large files with a flexible authentication and authorization system. Different external storage systems with large capacity can be used to keep the files. The web content management system provides all of the necessary interfaces for the authors of the files, end users and administrators.
Running time: Real-time operations.
References:
[1] The main LCG MCDB server, http://mcdb.cern.ch/.
[2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241.
[3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.OH/0305047.

12.
Based on the method of (n,k)-universal sets, we present a deterministic parameterized algorithm for the weighted r-D matching problem with time complexity O(4^{(r−1)k+o(k)}), improving the previous best upper bound O(4^{rk+o(k)}). In particular, the algorithm applied to the unweighted 3-D matching problem results in a deterministic algorithm with running time O(16^{k+o(k)}), improving the previous best result O(21.26^k). For the weighted r-set packing problem, we present a deterministic parameterized algorithm with time complexity O(2^{(2r−1)k+o(k)}), improving the previous best result O(2^{2rk+o(k)}). The algorithm, when applied to the unweighted 3-set packing problem, has running time O(32^{k+o(k)}), improving the previous best result O(43.62^{k+o(k)}). Moreover, for the weighted r-set packing and weighted r-D matching problems, we give a kernel of size O(k^r), which gives the first kernelization algorithms for the weighted versions of these problems.

13.
In this paper, we study the complexity of several coloring problems on graphs, parameterized by the treewidth of the graph.
1.
The List Coloring problem takes as input a graph G, together with an assignment to each vertex v of a set of colors C_v. The problem is to determine whether it is possible to choose a color for vertex v from the set of permitted colors C_v, for each vertex, so that the obtained coloring of G is proper (a brute-force illustration appears after this abstract). We show that this problem is W[1]-hard, parameterized by the treewidth of G. The closely related Precoloring Extension problem is also shown to be W[1]-hard, parameterized by treewidth.
2.
An equitable coloring of a graph G is a proper coloring of the vertices where the numbers of vertices having any two distinct colors differ by at most one. We show that the problem is hard for W[1], parameterized by the treewidth plus the number of colors. We also show that a list-based variation, List Equitable Coloring, is W[1]-hard for forests, parameterized by the number of colors on the lists.
3.
The list chromatic number χ_l(G) of a graph G is defined to be the smallest positive integer r such that, for every assignment to the vertices v of G of lists L_v of colors, where each list has length at least r, there is a choice of one color from each vertex list L_v yielding a proper coloring of G. We show that the problem of determining whether χ_l(G) ≤ r, the List Chromatic Number problem, is solvable in linear time on graphs of constant treewidth.
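The brute-force illustration of List Coloring promised above (unrelated to the treewidth-parameterized hardness results; a simple backtracking search over the permitted lists):

```python
def list_colorable(n, edges, lists):
    """Decide whether the graph on vertices 0..n-1 with edge set `edges`
    has a proper coloring choosing each vertex's color from `lists[v]`."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n

    def place(v):
        if v == n:
            return True
        for c in lists[v]:
            if all(color[w] != c for w in adj[v]):   # c clashes with no colored neighbour
                color[v] = c
                if place(v + 1):
                    return True
                color[v] = None
        return False

    return place(0)

# A triangle where every list is {1, 2} is not list-colorable.
print(list_colorable(3, [(0, 1), (1, 2), (0, 2)], [[1, 2]] * 3))   # False
```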

14.
We study the Cutwidth problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time O(2^k n^{O(1)}). Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. As a corollary, our algorithm gives an O(2^{n/2} n^{O(1)}) time algorithm for Cutwidth on bipartite graphs. This is the first non-trivial exact exponential-time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
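A tiny brute-force illustration of the cutwidth objective, enumerating all layouts (exponential in n, unlike the O(2^k n^{O(1)}) algorithm of the paper):

```python
from itertools import permutations

def cutwidth(n, edges):
    """Minimum over all linear layouts of the maximum number of edges
    crossing any gap between two consecutive positions."""
    best = None
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        width = max(
            sum(1 for u, v in edges if min(pos[u], pos[v]) < gap <= max(pos[u], pos[v]))
            for gap in range(1, n)
        )
        best = width if best is None else min(best, width)
    return best

# A 4-cycle has cutwidth 2.
print(cutwidth(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # 2
```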

15.
16.
We give a lower bound for the treewidth of a graph in terms of the second smallest eigenvalue of its Laplacian matrix. We use this lower bound to show that the treewidth of a d-dimensional hypercube is at least ⌊3·2^d/(2(d+4))⌋ − 1. The currently known upper bound is . We generalize this result to Hamming graphs. We also observe that every graph G on n vertices, with maximum degree Δ,
(1)
contains an induced cycle (chordless cycle) of length at least 1 + log_Δ(μn/8) (provided G is not acyclic),
(2)
has a clique minor K_h for some ,
where μ is the second smallest eigenvalue of the Laplacian matrix of G.
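A quick numerical check of the hypercube statement (an illustration written for this listing; it relies on the standard fact, confirmed numerically below, that the d-cube's Laplacian has second-smallest eigenvalue 2):

```python
import itertools
import numpy as np

def hypercube_laplacian(d):
    """Laplacian matrix of the d-dimensional hypercube."""
    verts = list(itertools.product((0, 1), repeat=d))
    n = len(verts)
    L = np.zeros((n, n))
    for i, u in enumerate(verts):
        for j, v in enumerate(verts):
            if sum(a != b for a, b in zip(u, v)) == 1:   # adjacent: differ in one bit
                L[i, j] = -1
                L[i, i] += 1
    return L

for d in range(2, 7):
    mu = np.sort(np.linalg.eigvalsh(hypercube_laplacian(d)))[1]   # second-smallest eigenvalue
    bound = (3 * 2**d) // (2 * (d + 4)) - 1                       # lower bound from the abstract
    print(f"d={d}: mu={mu:.3f}, treewidth >= {bound}")
```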

17.
Shiftwork involving early morning starts and night work can affect both sleep and fatigue. This study aimed to assess the impact of different rostering schedules at an Australian mine site on sleep and subjective sleep quality. Participants worked one of four rosters:
4 × 4 (n = 14) 4D4O4N4O
7 × 4 (n = 10) 7D4O7N4O
10 × 5 (n = 17) 5D5N5O
14 × 7 (n = 12) 7D7N7O
Sleep (wrist actigraphy and sleep diaries) was monitored for a full roster cycle including days off. Total sleep time (TST) was longer on days off (7.0 ± 1.9 h) than on day shifts (6.0 ± 1.0 h) and night shifts (6.2 ± 1.6 h). Despite the increase in TST on days off, this may be insufficient to recover from the severe sleep restriction occurring during work periods. Restricted sleep and quick shift-change periods may lead to long-term sleep loss and associated fatigue.

18.
This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to two-dimensional sets of points stored in bitmap files. The application has been extended to work also with comma-separated values (CSV) files and with three-dimensional images.

New version program summary

Program title: Fractal Analysis v02
Catalogue identifier: AEEG_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9999
No. of bytes in distributed program, including test data, etc.: 4 366 783
Distribution format: tar.gz
Programming language: MS Visual Basic 6.0
Computer: PC
Operating system: MS Windows 98 or later
RAM: 30 MB
Classification: 14
Catalogue identifier of previous version: AEEG_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
Does the new version supersede the previous version?: Yes
Nature of problem: Estimating the fractal dimension of 2D and 3D images.
Solution method: Optimized implementation of the box-counting algorithm.
Reasons for new version:
1.
The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma separated values (csv) files. The main advantages are:
a)
Easier integration with other applications (csv is a widely used, simple text file format);
b)
Fewer resources consumed and improved performance (only the information of interest, the “black points”, is stored);
c)
Higher resolution (the point coordinates are loaded into Visual Basic double variables [2]);
d)
Possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket).
2.
In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
Summary of revisions:
1.
The application interface was changed from SDI (single document interface) to MDI (multi-document interface).
2.
One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files).
Additional comments: User-friendly graphical interface; easy deployment mechanism.
Running time: To a first approximation, the algorithm is linear.
References:
[1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009)  1999-2001.
[2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
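A minimal box-counting sketch in plain Python (not the Visual Basic implementation): points of the 2D Sierpinski gasket are generated by the chaos game, and the fractal dimension (≈ log 3 / log 2 ≈ 1.585) is estimated as the slope of log N(ε) against log(1/ε); point counts and box sizes are arbitrary choices.

```python
import random
import numpy as np

def sierpinski_points(n=200_000):
    """Chaos-game points of the Sierpinski gasket inside the unit square."""
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.5, 0.5
    pts = np.empty((n, 2))
    for i in range(n):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2
        pts[i] = (x, y)
    return pts

def box_counting_dimension(pts, exponents=range(2, 8)):
    """Slope of log N(eps) vs log(1/eps) over box sizes eps = 2**-e."""
    logs_inv_eps, logs_n = [], []
    for e in exponents:
        boxes = {tuple(b) for b in np.floor(pts * 2**e).astype(int)}
        logs_inv_eps.append(e * np.log(2))
        logs_n.append(np.log(len(boxes)))
    return np.polyfit(logs_inv_eps, logs_n, 1)[0]

print(box_counting_dimension(sierpinski_points()))   # ~1.58
```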

19.
This article provides goals for the design and improvement of default computer algebra expression simplification. These goals can also help users recognize and partially circumvent some limitations of their current computer algebra systems. Although motivated by computer algebra, many of the goals are also applicable to manual simplification, indicating what transformations are necessary and sufficient for good simplification when no particular canonical result form is required. After motivating the ten goals, the article then explains how the Altran partially factored form for rational expressions was extended for Derive and for the computer algebra in Texas Instruments products to help fulfill these goals. In contrast to the distributed Altran representation, this recursive partially factored semi-fraction form:
does not unnecessarily force common denominators,
discovers and preserves significantly more factors,
can represent general expressions, and
can produce an entire spectrum from fully factored over a common denominator through complete multivariate partial fractions, including a dense subset of all intermediate forms.
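The two ends of that spectrum can be illustrated with SymPy (a different system from Altran or Derive; this only shows what "factored over a common denominator" versus "complete partial fractions" look like for a small rational expression):

```python
from sympy import symbols, together, factor, apart

x = symbols('x')
expr = 1/(x - 1) + 1/(x + 1) + x

# One end of the spectrum: everything over a common denominator, fully factored.
print(factor(together(expr)))   # x*(x**2 + 1)/((x - 1)*(x + 1))

# The other end: complete partial fractions.
print(apart(expr, x))           # x + 1/(x + 1) + 1/(x - 1)
```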

20.
Consider the “Number in Hand” multiparty communication complexity model, where k players holding inputs x_1, …, x_k ∈ {0,1}^n communicate to compute the value f(x_1, …, x_k) of a function f known to all of them. The main lower bound technique for the communication complexity of such problems is that of partition arguments: partition the k players into two disjoint sets of players and find a lower bound for the induced two-party communication complexity problem. In this paper, we study the power of partition arguments. Our two main results are very different in nature:
(i)
For randomized communication complexity, we show that partition arguments may yield bounds that are exponentially far from the true communication complexity. Specifically, we prove that there exists a 3-argument function f whose communication complexity is Ω(n), while partition arguments can only yield an Ω(log n) lower bound. The same holds for nondeterministic communication complexity.
(ii)
For deterministic communication complexity, we prove that finding significant gaps between the true communication complexity and the best lower bound that can be obtained via partition arguments, would imply progress on a generalized version of the “log-rank conjecture” in communication complexity. We also observe that, in the case of computing relations (search problems), very large gaps do exist.
We conclude with two results on the multiparty “fooling set technique”, another method for obtaining communication complexity lower bounds.
